Stop Waiting for IT Approval: Small DIY EdTech Projects via AI


A few weeks ago, I wrote again about why teachers should learn to code, arguing that basic programming skills give educators agency in spaces increasingly defined by technology. The response was expected: “That sounds great, but I don’t have time to learn JavaScript. I barely have time to grade papers.” Fair point. So let me tell you about a different approach that doesn’t require becoming a programmer.

Think about the kinds of tasks that fill the margins of educational work. Consider an instructional coach who spends a few hours every Friday manually creating professional development certificates for 45 teachers. She downloads responses from a Google Form, cross-references them with a tracking spreadsheet, calculates total hours for each teacher, designs certificates in Canva, emails them individually, and updates her master tracking sheet. Or an administrative assistant who updates the same budget spreadsheet every Monday, routing purchase orders based on amounts and categories. Or a special education coordinator who spends hours scheduling IEP meetings across six different calendars.

When I describe automation to educators like these, the response is almost always some version of: “I wouldn’t even know where to start. I’m not a programmer.” Here’s the thing: you don’t need to be. The landscape has shifted since I learned JavaScript to build a reading practice tool for my students. AI can now be your coding partner, translating what you need into working solutions even if you’ve never written a line of code. This isn’t about becoming a developer. It’s about eliminating the tedious manual work that keeps you from focusing on what actually matters in education.

What Changed (and Why This Matters Now)

For years, educational professionals have been stuck in a frustrating middle ground. The work we do involves clear, repetitive processes that should be automated, but the tools available are either too generic, too expensive, or require IT approval that takes months. Custom solutions required hiring developers or learning to code yourself, both of which felt out of reach.

In my previous article about why teachers should code, I described spending weeks learning JavaScript to build a reading practice tool because nothing existed that could handle my classroom’s specific needs. That kind of time investment made sense for a tool I’d use daily with 60 students. But it’s hard to justify for a process that takes three hours per week.

AI tools like Claude change this. You can now describe what you need in plain language, and AI can write the code to make it happen. You’re not becoming a programmer. You’re learning to collaborate with a coding partner that can handle the technical implementation while you bring the domain expertise about what actually needs to happen.

I should be clear about what I mean by “automation” here. I’m talking about tools you build using Google Apps Script that work within the Google Workspace ecosystem most schools already use. These run in the background, connect your existing spreadsheets and forms, send emails, update calendars, and handle the routine data processing that currently eats up your time.

Who This Actually Helps

Before we go further, let’s be specific about the kind of work that automation can actually address. If you regularly see yourself or others doing any of these things, you have an automation opportunity:

  • Administrative assistants might find themselves updating the same spreadsheets every Monday with last week’s data, routing purchase orders to different people based on amounts and categories, or manually sending event confirmations and reminders to hundreds of participants.
  • Support services staff often spend hours scheduling IEP meetings across six different calendars, tracking service hours in multiple systems to compile monthly reports, or chasing down compliance deadlines across different programs.
  • Administrators frequently pull budget data from four different sources for board reports, send policy updates and track who actually read them, or manage staff evaluation timelines that require dozens of individual reminder emails.
  • Instructional coaches coordinate professional development schedules and follow-up across 30 teachers, compile classroom observation data from multiple sources quarterly, or track individual teacher goals and send progress reminders.

The pattern across all of these: repetitive processes with clear rules and multiple touchpoints. These are perfect automation candidates because the logic is consistent even if the specific details change.

How I Learned This (Messily, Like Everything Else)

The process I’m about to describe comes from my own experience building automation tools with AI assistance. It was not clean or linear. It was messy, full of mistakes, and occasionally frustrating. But it worked, and I’ve started sharing this approach with colleagues who are recognizing similar opportunities in their own workflows. What I’ve found is that the process tends to follow a predictable pattern, even when the specific tool is different.

Let me walk through what that pattern looks like, because I think it’s more useful than a polished tutorial that hides the rough edges.

Phase 1: Mapping the Mess

The first thing I learned is that you have to write down every single step of your current manual process before you touch any code. Not the idealized version. The actual version, including the parts that feel too obvious to mention.

When I sat down to map one of my own workflows, the initial list looked straightforward enough: check for new form submissions, look up previous records, calculate totals, generate output, send notifications, update tracking. Simple, right?

But then I started asking myself the questions I’d normally answer on autopilot. What do I do when someone submits the form twice? What about entries that don’t match the expected format? What happens when a name in one spreadsheet doesn’t exactly match the same name in another? These decision points were invisible to me because I’d been handling them intuitively for months.

This is the part nobody talks about when they describe automation. The actual workflow in your head is more complex than the steps you’d initially write down. I found it helpful to talk through the process with a colleague, not because they needed to understand it, but because explaining it out loud surfaced three or four decision points I’d completely missed.

If you try this yourself, push past the obvious steps and ask yourself: What starts each step? Where does the information actually come from? What decisions am I making, even small ones? What happens with the results? Who else needs to know this happened? You can’t skip this mapping phase. AI can’t build the right tool if you can’t explain what “right” looks like. But you also don’t need to map it perfectly on the first try. Start with the basic flow, then add the decision points as you think of them.


Phase 2: The First Conversation with AI

The idea of asking AI to write code felt different from asking it to summarize an article or help draft an email. There was a mental barrier there, a sense that coding was a fundamentally different kind of task that required a different kind of expertise to even request properly.

What I found is that it doesn’t. Instead of trying to build the whole system at once, I started by asking Claude to automate just one piece of a larger workflow: reading form submissions and sending a confirmation email. That’s it. No calculations, no document generation, nothing fancy.

My first prompt was something like: “I have a Google Form where people submit information. When they submit the form, I want to automatically send them an email confirming I received their submission. Can you help me do that with Google Apps Script?” Claude responded with complete code and step-by-step deployment instructions. I copied the code, pasted it into Google Apps Script, set up the trigger, and tested it with a sample form submission. It worked.
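For reference, the code that kind of prompt produces looks roughly like this. This is a sketch, not my exact script, and the question titles “Email Address” and “Name” are assumptions you would match to your own form:

```javascript
// Runs on each submission via an "On form submit" trigger.
// e.namedValues maps each form question title to an array of answers.
function onFormSubmit(e) {
  var email = e.namedValues['Email Address'][0];
  var name = e.namedValues['Name'][0];
  MailApp.sendEmail(email, 'Submission received', buildConfirmationBody(name));
}

// Kept as its own function so the wording is easy to tweak and test.
function buildConfirmationBody(name) {
  return 'Hi ' + name + ',\n\nThanks - I received your submission and will follow up soon.';
}
```

Claude will also walk you through attaching the trigger in the Apps Script editor (Triggers, then Add Trigger).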

That moment of seeing an automated email arrive from a script I’d deployed changed something for me. The barrier between “I don’t know how to do this” and “I can make this happen” suddenly felt crossable. If you’ve ever taught a student who believed they couldn’t write and then watched them finish a first draft, you know exactly this feeling from the other side.

What I took away: start with the smallest possible automation that would still be useful. I could have stopped right there with just automated confirmations and saved myself time every week. But because I proved to myself that I could do it, I was ready to tackle the next piece.

Phase 3: Adding the Hard Parts

With the basic confirmation working, I wanted to add the more complex logic: looking up previous records, calculating totals, checking thresholds, generating documents, and updating tracking sheets. This is where things got messier.

My next prompt tried to describe everything at once. I wanted the script to read submissions, cross-reference another spreadsheet, do calculations, check conditions, generate output, send emails, and log everything. Claude gave me code for all of it, but when I tried to run it, nothing happened. No error messages. No emails. Just silence. This is the moment where a lot of people would give up. I was frustrated. I’d done exactly what Claude told me to do, and it didn’t work.

Here’s what I discovered: the script didn’t have permission to access one of the spreadsheets I was referencing. Once I fixed that, it ran, but then hit a second problem: the document generation was trying to use a template that wasn’t structured the way the script expected.

Complex automations rarely work perfectly on the first try. You’re going to hit permissions issues, data format problems, and logic gaps. This isn’t failure. This is normal software development, even for professionals. The key is being willing to troubleshoot one piece at a time instead of trying to debug the whole system at once.

Going back to Claude with specific questions made all the difference. Instead of “it doesn’t work,” I learned to say things like: “The script isn’t finding the matching record in this spreadsheet. Here’s what the spreadsheet structure looks like. What am I doing wrong?” Specific questions get specific, useful answers.

Phase 4: The Almost-Working Version

After a few rounds of troubleshooting, I had a version that worked most of the time. It handled about 80% of submissions correctly. But it struggled with edge cases: duplicate entries within the same time period, data with unexpected formatting, and records with special characters that broke the matching logic.

Each edge case meant another conversation with Claude, another small fix, another test. But here’s what I realized: I didn’t need to solve every possible edge case before the tool was useful. The automated version handled 80% of the work perfectly and flagged the remaining 20% for me to handle manually. That was still a massive time savings compared to doing everything by hand.

My first attempt at one tool tried to do too much. When I stripped it back to just flagging items that needed attention and preparing draft outputs for my review, rather than handling everything end to end, it became much easier to build reliably. You can always add complexity later. Start with the simplest version that provides value.
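To make that stripped-back shape concrete, the core of a flag-don’t-handle version can be a single small decision function. The names here are hypothetical, not my actual script:

```javascript
// Hypothetical triage step: process the routine cases automatically,
// flag anything unusual for manual review instead of trying to handle it.
function triageSubmission(submission, knownTeachers) {
  if (!submission.teacher || !submission.hours) {
    return { status: 'Needs review', reason: 'Missing teacher or hours' };
  }
  if (knownTeachers.indexOf(submission.teacher) === -1) {
    return { status: 'Needs review', reason: 'Not found in tracking sheet' };
  }
  return { status: 'Processed', reason: '' };
}
```

Everything marked “Needs review” lands in a column you scan once a week; everything else flows through untouched.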

What the End Result Looks Like

After a few weeks of iterative building, testing, and fixing, a process that used to take hours of manual work each week now runs mostly on its own. I review a summary of what happened, handle the small number of exceptions the script flags, and move on.

More importantly, building that first tool changed how I look at repetitive work. Every time I catch myself doing the same sequence of copy, check, calculate, email, update, I ask: “Could I automate this?” The answer is surprisingly often yes. I’ve since built several small automation tools for different parts of my work, each one taking less time to create than the last because the patterns carry over.

This is the part I’ve started sharing with colleagues, and the reactions are encouraging. When people see a concrete example of a process they recognize being automated with tools they already have access to, the “I’m not a programmer” barrier starts to soften. I suspect this is the beginning of a broader shift as more educators realize that the gap between their manual processes and automated solutions is narrower than they assumed.

Starting Your First Project

If my experience is any guide, the technical barriers to building automation tools are far lower than the conceptual ones. A few core concepts are worth understanding before you start, not because you need to become a programmer, but because they’ll help you troubleshoot and communicate with AI more effectively.

Google Apps Script is JavaScript that runs in Google’s cloud and connects to your Workspace tools. Triggers are what start your automation: time-based ones that run on a schedule, event-based ones that respond to things like form submissions, or manual ones tied to a button click. Permissions determine what your script can access, and they’re the most common source of first-run problems. Data flow is the path information takes from input (form submissions, spreadsheet data) through processing (calculations, comparisons) to output (emails, updated sheets). When something isn’t working, tracing that flow backwards usually reveals where the problem is.
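As an illustration of the trigger types, a one-time setup function might look like the sketch below. The ScriptApp calls are real; the names of the functions being triggered are assumptions:

```javascript
// Run once from the editor to install triggers. Apps Script also lets
// you add these through the UI (Triggers > Add Trigger).
function installTriggers() {
  // Event-based: run processSubmission whenever the linked form is submitted.
  ScriptApp.newTrigger('processSubmission')
    .forSpreadsheet(SpreadsheetApp.getActive())
    .onFormSubmit()
    .create();
  // Time-based: compile a report every Monday morning.
  ScriptApp.newTrigger('compileWeeklyReport')
    .timeBased()
    .onWeekDay(ScriptApp.WeekDay.MONDAY)
    .atHour(6)
    .create();
}
```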

With those basics in mind, the most useful thing I can offer is a starting template for your first conversation with AI. Open Claude (or your AI tool of choice) and start with context, not code:

I need help building a Google Apps Script automation for a repetitive task I currently do manually.

My role: [Learning specialist, administrative assistant, instructional coach, etc.]

Current manual process: [Describe step-by-step what you do now, including where data comes from and where it goes]

What this process accomplishes: [Explain the outcome, who benefits and why this matters]

Specific goals for automation: [What you want the automated version to do]

Technical environment:

We use Google Workspace
I have [admin/editor/viewer] access to [specific tools]
[Any other technical constraints]

Success will look like: [Specific, measurable outcomes, time saved, errors eliminated, etc.]

Please help me build this automation. Start with the core functionality and explain each major component so I can modify it later if needs change.

One piece of advice that I wish someone had given me earlier: when Claude gives you code, don’t copy-paste the entire thing and hope it works. Set up just the trigger first and have it send you a test email confirming it ran. Then add one piece of functionality at a time, testing each addition before moving to the next. This feels slower, but it’s actually much faster because you’ll catch problems when they’re small.

Making Tools That Stick Around

Building an automation that works today is different from building one that still works next year. The single most important habit is putting configuration settings in a spreadsheet rather than hard-coding them into your script. Create a “Settings” tab where you list email recipients, threshold values, and other parameters that might change. Your script reads these values from the sheet, so changing them doesn’t require touching code. This also means colleagues can adjust settings without understanding how the script works.
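A sketch of what reading that tab looks like. The sheet name “Settings” and the two-column layout are assumptions:

```javascript
// In Apps Script you'd load the rows with something like:
//   SpreadsheetApp.getActive().getSheetByName('Settings')
//     .getDataRange().getValues()
// Then turn the two-column [key, value] rows into a plain object:
function readSettings(rows) {
  var settings = {};
  rows.forEach(function(row) {
    if (row[0]) { settings[String(row[0]).trim()] = row[1]; }
  });
  return settings;
}
```

Changing a recipient or a threshold is now a spreadsheet edit, not a code edit.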

When it comes to sharing and handoffs, be thoughtful. Give view-only access to people who just need to understand what the tool does. Give edit access only to people who will actively maintain it. And while the automation is fresh in your mind, write simple documentation covering what it does, how to modify common settings, and what to do if it breaks. Your future self will be grateful.

Rather than building one massive automation that handles everything, I’d encourage a portfolio approach: small, focused tools that each do one thing well. This is how I’ve approached what I’m calling WebTools on my site, small scripts that solve specific problems. Each one started as a process that was eating too much of my time for how routine it was. None of them is particularly complex on its own. Together, they add up to a meaningful amount of reclaimed time. After you’ve built a few, you’ll start noticing reusable patterns, form submissions that trigger workflows, spreadsheet calculations that feed into notifications, scheduled scripts that compile reports. These patterns become templates for the next tool.

One thing worth noting: not everything should be automated. Some processes are genuinely better done manually because they require human judgment, they change too frequently to build stable automation, or they involve high-stakes decisions that shouldn’t run unsupervised. Automate the routine data processing. Keep the human judgment.

The Reality Check

Let me be direct about something: building automation tools with AI is genuinely accessible to non-programmers, but it’s not effortless. You will spend time mapping processes, troubleshooting permissions, and handling edge cases. Your first project will probably take longer than you expect.

But here’s what I’ve experienced myself: the time investment pays off quickly, and the skills transfer to other problems. My first automation project took probably 10 hours spread across a few weeks. That felt like a lot. But the process it replaced was costing me 2-3 hours every week, so I broke even after about a month and have been saving time since.

More importantly, the next project took a fraction of that time. And the one after that even less. The learning curve is real, but it’s not permanent.

This connects back to what I argued in my coding article: understanding how to create technology gives us agency in educational spaces increasingly defined by it. You don’t need to become a software developer to benefit from that agency. You just need to believe you can build solutions to problems that matter in your specific context.

The goal isn’t perfect automation. It’s eliminating enough tedious manual work that you can focus on what actually requires your professional expertise. Your insights about teaching and learning. Your relationships with students and teachers. Your ability to solve novel problems that don’t fit templates.

Automation can’t do those things. But it can clear space for you to do them better.

Resources for Going Deeper

If you want to learn more about Google Apps Script specifically, Google’s own documentation is surprisingly readable once you have a basic automation working. Start with the guides on triggers and permissions.

If you get stuck and need troubleshooting help, the Google Apps Script community on Stack Overflow is active and helpful. Search for your error message first (someone has probably hit the same problem), then post specific questions with your code and error messages if you can’t find an answer.

If you want to connect with other educators exploring automation, I’m always happy to talk through specific use cases and share what’s worked (and what hasn’t). Reach out at licht.education@gmail.com or connect with me on LinkedIn.

The automation tools you build won’t be perfect. They’ll have quirks and edge cases and occasional glitches. But they’ll work, and they’ll give you back time that you can spend on work that actually matters.

That’s worth starting imperfectly.


Addendum: Additional Lessons Learned Through Experience

After building dozens of automation tools, I’ve noticed patterns in what causes issues. These aren’t things you’d find in Google’s documentation; they’re my hard-won lessons from watching real projects succeed and fail.

The Silent Failures That Waste Hours

Dates will break your automation without telling you.

This is the single most frustrating issue I’ve encountered. When your script returns data to your web interface, Google converts it to JSON behind the scenes. Date objects don’t survive this conversion. But instead of throwing an error, the entire return value just becomes null.

You’ll see this: Your backend script runs perfectly. The logs show exactly what you expect. But your interface stays stuck on “Loading…” forever because the data never arrived.

Here’s what happened to me with a dashboard project. The script was reading observation dates from a spreadsheet and returning them to display in a calendar view. Backend logs showed 150 observations loading successfully in under 2 seconds. Frontend showed nothing. No errors. Just an empty screen.

The problem was a single line that returned a Date object:

observations.push({
  teacher: row[2],
  date: row[1] // This is a Date object from the spreadsheet
});

The fix is simple once you know about it. Convert dates to strings before returning them:

observations.push({
  teacher: row[2],
  date: row[1].toLocaleDateString() // Now it's a string
});

How to prevent this: Before you deploy any script that returns data to an interface, test that the data can be converted to JSON:

function testMyData() {
  var data = getDataForInterface();
  var dateFound = false;
  // A plain JSON.stringify(data) test would pass even with Dates present,
  // because stringify quietly converts them to strings via Date.toJSON.
  // The replacer callback sees each raw value before that conversion:
  JSON.stringify(data, function(key, value) {
    if (this[key] instanceof Date) { dateFound = true; }
    return value;
  });
  if (dateFound) {
    Logger.log('Test failed - a Date object will null out the return value');
  } else {
    Logger.log('Test passed - data is safe to return');
  }
}

If that test logs a failure, you have a Date object somewhere in your return value. Convert it to a string before returning it.

When “It Works in Testing” Means Nothing

I built a script that sent personalized emails to teachers about their professional development progress. Tested it with myself as the recipient. Worked perfectly. Deployed it for all 45 teachers. Five of them never received their email.

The issue? Their names had apostrophes. “O’Brien” broke my email template because I was building the HTML by concatenating strings without escaping special characters.

The lesson: Test with messy real data, not clean sample data. Get a copy of your actual spreadsheet with actual names, actual email addresses (change them if you need to for privacy), and actual edge cases. Run your automation on that before deploying to everyone.

Create a test row in your spreadsheet with deliberately problematic data:

  • Names with apostrophes, hyphens, and accents
  • Email addresses with plus signs and dots
  • Text fields with commas and quotation marks
  • Numbers that look like text (ZIP codes starting with zero)
  • Blank fields where you expect data

If your script handles all of those, it’ll probably handle your real data.

Performance Problems You Won’t Notice (Until You Do)

A certificate automation worked great for the first month. Then in April, it started timing out. What changed? Nothing about the script. But it had gone from 50 total PD submissions to 300. The script that took 3 seconds in February was taking 45 seconds in April.

The culprit was reading from the spreadsheet inside a loop. Every time the script processed a certificate, it looked up the teacher’s previous total hours by searching the entire tracking sheet. With 50 submissions, that meant 50 spreadsheet searches. With 300 submissions, it meant 300 searches. Eventually that hit Google’s execution time limits.

The pattern that saves you: Load data once, process it in memory.

Instead of:

function processSubmissions() {
  var submissions = getSubmissions();
  submissions.forEach(function(submission) {
    var previousHours = lookupHours(submission.teacher); // Spreadsheet read
    var newTotal = previousHours + submission.hours;
    // Process...
  });
}

Do this:

function processSubmissions() {
  var submissions = getSubmissions();
  var allHours = getAllHours(); // One spreadsheet read
  submissions.forEach(function(submission) {
    var previousHours = allHours[submission.teacher]; // Memory lookup
    var newTotal = previousHours + submission.hours;
    // Process...
  });
}
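The getAllHours helper is assumed above. Splitting it into one spreadsheet read plus a pure map-builder keeps it both fast and testable; the sheet name and column layout here are assumptions:

```javascript
// Pure part: turn [teacher, totalHours] rows into a lookup object.
function buildHoursMap(rows) {
  var map = {};
  rows.forEach(function(row) {
    map[row[0]] = Number(row[1]) || 0;
  });
  return map;
}

// Apps Script part: the single read.
function getAllHours() {
  var rows = SpreadsheetApp.getActive()
    .getSheetByName('Tracking')
    .getDataRange()
    .getValues();
  return buildHoursMap(rows);
}
```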

You won’t notice the difference with 10 items. You’ll definitely notice with 100.

When Something “Works” But Doesn’t Actually Work

I watched someone build a script that was supposed to send reminder emails when tasks were overdue. They tested it, said it worked, and deployed it. A week later someone asked why they never got their reminder.

The script did run. It did check for overdue tasks. It did try to send the email. But it failed silently because it didn’t have permission to send email on behalf of that user. The script logged “Email sent successfully” even though nothing was sent, because that’s what the developer wrote when they assumed success.

The fix: Check if things actually happened, don’t just assume they did.

Instead of:

function sendReminders() {
  var tasks = getOverdueTasks();
  tasks.forEach(function(task) {
    MailApp.sendEmail(task.assignee, 'Reminder', 'Task overdue');
    Logger.log('Email sent to ' + task.assignee);
  });
}

Do this:

function sendReminders() {
  var tasks = getOverdueTasks();
  var successCount = 0;
  var failCount = 0;
  tasks.forEach(function(task) {
    try {
      MailApp.sendEmail(task.assignee, 'Reminder', 'Task overdue');
      successCount++;
    } catch (error) {
      Logger.log('Failed to email ' + task.assignee + ': ' + error);
      failCount++;
    }
  });
  Logger.log('Sent: ' + successCount + ', Failed: ' + failCount);
}

Now you know if it actually worked.

The Debugging Process That Actually Works

When something doesn’t work and you can’t figure out why, don’t make random changes hoping something fixes it. That’s how you end up with scripts that are mysteriously broken in new ways.

Here’s the systematic process that actually finds problems:

Step 1: Can you run the backend function from the Apps Script editor?

Go to the script editor, select your main function from the dropdown, click Run. Does it work? Look at the execution log. What does it show?

If it works here but not in your deployed version, the problem is deployment or permissions. Create a new deployment version.

If it fails here, you have a backend code problem. The error message will tell you what line is broken.

Step 2: Is the data in the format you expect?

Add logging at the start of your function that shows exactly what data it received:

function processData(formData) {
  Logger.log('Received: ' + JSON.stringify(formData));
  // Rest of your code...
}

Run it again. Look at the logs. Is the data structure what you thought it was? Are the field names correct? Are values the right type (text vs number)?

I’ve spent hours debugging problems that turned out to be “the field is called ‘emailAddress’ not ‘email’” or “the number is actually text with a dollar sign in it.”
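For the second of those, a small coercion helper (a sketch) turns “$1,250” back into a number before you do math with it:

```javascript
// Strip currency formatting; fall back to 0 if the value isn't numeric at all.
function toNumber(value) {
  if (typeof value === 'number') { return value; }
  var n = Number(String(value).replace(/[$,\s]/g, ''));
  return isNaN(n) ? 0 : n;
}
```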

Step 3: Does the simplest possible version work?

Comment out everything except the absolute core function. Can you successfully trigger the script and get a simple response?

function testConnection() {
  return "Hello from the backend";
}

If that doesn’t work, you have a connection or permission issue, not a code issue.

If that works, uncomment one section at a time until something breaks. That section has the problem.

Step 4: Are you making assumptions that aren’t true?

The most common assumptions I see:

  • “This field will always have data” (It won’t. Someone will leave it blank.)
  • “Names don’t have special characters” (They do. Apostrophes, hyphens, accents.)
  • “This will finish in under 6 minutes” (It won’t, once you have real data volume.)
  • “The data is in the first sheet” (Someone renamed or reordered the sheets.)
  • “Email addresses are always lowercase” (They’re not. “John.Smith@School.org” vs “john.smith@school.org”)

Add checks for these assumptions:

function processSubmission(data) {
  if (!data.email) {
    Logger.log('ERROR: No email provided');
    return { success: false, error: 'Email required' };
  }
  // isValidEmail is a small helper you'd write (a simple regex check works)
  if (!isValidEmail(data.email)) {
    Logger.log('ERROR: Invalid email format: ' + data.email);
    return { success: false, error: 'Invalid email' };
  }
  // Continue processing...
}
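And for the email-casing assumption, normalize before comparing (a sketch):

```javascript
// Compare emails case-insensitively: "John.Smith@School.org" and
// "john.smith@school.org" are the same mailbox.
function sameEmail(a, b) {
  function norm(e) { return String(e || '').trim().toLowerCase(); }
  return norm(a) === norm(b);
}
```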

Knowing When to Stop

The hardest part of building automation tools isn’t the coding. It’s knowing when to stop adding features.

Here’s how I think about it now: Build the version that handles 80% of cases and flag the other 20% for manual review. Ship that. Use it for a month. Then decide if the remaining edge cases are worth automating.

A certificate script doesn’t handle teachers who submit hours for events that aren’t on the approved list. Instead of building logic to validate events, check master lists, send clarification emails, and await responses, that script just flags those submissions as “Needs review” and lets someone handle them manually. That’s 2-3 submissions per month. Not worth automating.

Ask yourself: What would happen if this edge case hit and I handled it manually? If the answer is “I’d spend 5 minutes fixing it,” don’t spend 5 hours automating it.
