Over the past several months, I have been building an argument across multiple articles. I have made the case that educators can and should build their own tools using AI-assisted coding. I have explored the power dynamics of who gets to automate whose work. I have been honest about the platform constraints and practical limits of building with Google Apps Script. And I have written about designing AI-powered tools that structurally require human judgment rather than treating review as optional.
This article is about a different question, and it is the one I keep getting asked at conferences and in emails: what about the vibe-coded products that other people are building and selling to schools?
Vibe coding has had a remarkable year. Collins Dictionary named it the Word of the Year for 2025, and platforms like Lovable, Replit, and Cursor have built entire businesses around the idea that anyone can build software without understanding code. The appeal in education is obvious. A school administrator who needs a student check-in form can have one built in an afternoon. For people who have been waiting on a developer queue or a vendor sales cycle, vibe coding feels like the answer to years of frustration.
That appeal is real, and I do not want to dismiss it. But the conversation has moved well past “useful for prototyping” and into territory that concerns me. We are starting to see vibe-coded tools marketed as products, offered to schools, and treated as though generating functional code is the same thing as building reliable software. It is not, and the security evidence that has accumulated over the past year makes the case more clearly than I can.
The Problem With Forgetting the Code Exists
I have written before about the distinction between vibe coding and AI-assisted coding, and about the practical limits of building tools with AI. What I want to focus on here is not the platform constraints or the terminology, but the security evidence that has accumulated since those earlier pieces, and what it means specifically for education.
The short version: multiple studies now converge on the finding that AI-generated code contains significantly more security vulnerabilities than human-written code, with some analyses putting the rate at two to three times higher. Technical debt accumulates faster. And the types of flaws that show up are not obscure edge cases. They are the foundational security problems that experienced developers check for as a matter of course and that vibe coders, by definition, do not.
These are not theoretical concerns. They have already played out in ways that should give anyone working in education serious pause.
What Happens When Nobody Checks the Code
The most instructive example so far came from Lovable, a vibe coding platform that at its peak was valued at $1.8 billion. In March 2025, a security researcher discovered that applications built on the platform routinely shipped without proper database security policies. The AI-generated code frequently failed to configure Row Level Security, the mechanism that determines who can access which data.
When researchers tested 1,645 Lovable-created web apps featured on the platform’s own showcase page, 170 of them allowed anyone to access user data without authentication. Names, email addresses, financial information, API keys, even personal debt amounts were all accessible to anyone who knew how to modify a basic API request.
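To make the mechanics concrete, here is a minimal sketch of what “modifying a basic API request” can look like against a backend whose Row Level Security is missing. The project URL, table name, and key are hypothetical; the shape of the request is the point. There is no login and no password, just a public endpoint answering questions it should refuse.

```javascript
// Minimal sketch (hypothetical URL, table, and key): an unauthenticated read
// against a Supabase-style REST endpoint. The "anon" key ships inside the app's
// client code, so anyone can copy it. With Row Level Security missing or
// misconfigured, the database returns every row instead of only the caller's own.
async function dumpUsers() {
  const resp = await fetch(
    "https://example-project.supabase.co/rest/v1/users?select=email,debt_amount",
    { headers: { apikey: "public-anon-key-copied-from-the-app" } }
  );
  return resp.json(); // all users' rows, no authentication required
}
```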
What happened next is arguably more concerning than the vulnerability itself. Lovable released a security scanner, but the scanner only checked whether a security policy existed on a database table. It did not check whether the policy actually worked. It was security theater: a feature that made users feel protected while the underlying architectural problem persisted. The full public disclosure came 69 days after the initial report.
The platform told users that the security scanner was available and that implementing its recommendations was at their discretion. The users, meanwhile, were people who chose the platform specifically because they could not write or evaluate code themselves. The accountability landed on the person least equipped to do anything about it.
A separate analysis by the Escape security research team examined over 5,600 publicly available vibe-coded applications and identified more than 2,000 vulnerabilities, over 400 exposed secrets, and 175 instances of exposed personally identifiable information, including medical records and financial data. The types of vulnerabilities were not exotic: missing input validation, hardcoded API keys visible in client-side code, authentication logic that lives entirely in the browser where anyone can modify it, and databases configured with permissions so broad that an unauthenticated user can read or write anything.
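Two of those flaw classes are easy to show. The sketch below is deliberately bad code, with hypothetical names, condensing what the researchers kept finding: a secret hardcoded where any visitor can read it, and an authorization check that runs entirely in the browser.

```javascript
// Deliberately insecure sketch (hypothetical names) of two recurring flaws.

// Flaw 1: a hardcoded secret in client-side code. Anyone who opens the browser's
// developer tools or reads the bundled source can copy this key and use it.
const API_KEY = "sk-hypothetical-secret-key";

// Flaw 2: authorization logic that lives entirely in the browser. A user can edit
// this object in the console or replay the request directly, and the server will
// never notice, because the server was never asked to enforce anything.
function canViewAllStudents(user) {
  return user.role === "admin";
}
```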
The pattern across all of this research is consistent. Experienced developers know to check for these problems. The people most drawn to vibe coding, by definition, do not.
The Automation Problem
Most of the public attention has focused on vibe-coded apps with user interfaces and databases. But automations carry their own version of these risks: the scripts and workflows that move data between systems, send notifications, generate reports, or modify records. In some ways, automations are more dangerous because they operate in the background, where their behavior is harder to observe.
When someone vibe-codes an automation that connects to a student information system, a learning management system, or a human resources platform, the AI-generated code makes decisions about how to authenticate, what data to request, how much access to grant itself, and where to store what it retrieves. An automation that pulls student records and logs them to a Google Sheet that anyone in the organization can access has just created a shadow data problem. An automation that stores API credentials in the script itself has just made those credentials available to anyone who can view the file.
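The credentials problem, at least, has a well-known fix on the platform I usually write about. In Google Apps Script, secrets belong in Script Properties rather than in the source, so someone who can view the file does not automatically hold the key. A minimal sketch, with a hypothetical endpoint:

```javascript
// Minimal Apps Script sketch: read the API key from Script Properties instead of
// hardcoding it. The endpoint URL is hypothetical; set SIS_API_KEY once under
// Project Settings > Script Properties rather than pasting it into the code.
function fetchAttendanceReport() {
  const apiKey = PropertiesService.getScriptProperties().getProperty("SIS_API_KEY");
  if (!apiKey) throw new Error("SIS_API_KEY is not configured in Script Properties.");

  const response = UrlFetchApp.fetch("https://sis.example.org/api/attendance", {
    headers: { Authorization: "Bearer " + apiKey },
  });
  return JSON.parse(response.getContentText());
}
```

Script Properties are not a vault, and the destination spreadsheet still needs its sharing settings reviewed, but this removes the most common way credentials leak from scripts.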
This is not a new problem in education technology. Research has consistently shown that school administrators lack resources to properly assess privacy and security issues around the tools they already use. One benchmark report found that 96 percent of apps used by schools share data with third parties, and 78 percent share information with advertisers, often without the school’s knowledge. If professionally developed, commercially supported EdTech has these problems, vibe-coded tools built by individuals without security training multiply the risk substantially.
What This Looks Like in Education
The EdTech angle is where this gets personal for me, because the tools showing up in educational contexts now are exactly the kind of thing I build and recommend in my own work. The difference is in how they are built and what they promise.
I have seen a growing number of vibe-coded educational apps pitched at conferences and shared in educator communities. Translation tools, assessment generators, curriculum organizers, chatbot tutors. Many of them look great in a demo. The problem is that a demo is the best a vibe-coded app will ever be. It is the moment before someone enters unexpected data, before the user base grows past the developer’s testing, before a dependency updates and breaks something the creator cannot diagnose because they never understood the code in the first place.
The security implications alone should give us pause. Education involves student data, and student data is not something we get to be casual about. When a vibe-coded app handles student information without proper input validation, without secure authentication, without the kind of careful data handling that requires actually understanding what your code is doing, the risk is not abstract. It is a real exposure that a real district will have to deal with.
And then there is the maintenance question. Software does not exist in a finished state. APIs change, platforms update, edge cases surface, users find workflows the creator never anticipated. When the person who built the tool cannot read the code it runs on, every one of those inevitable maintenance moments becomes a crisis. The tool either gets abandoned or gets patched by throwing more prompts at the AI and hoping for the best, which is how technical debt compounds into something genuinely dangerous.
The Wow Factor Problem
There is a dynamic in education technology spaces that makes vibe coding especially seductive. In conference presentations and vendor demos, the most impressive moment is usually the one where someone describes what they want and a working application appears in real time. The speed is genuinely remarkable. What does not happen in those moments is a security audit, a review of how the tool handles authentication, or a conversation about what happens when the AI-generated code encounters an edge case it was not designed for.
I explored this problem in detail in a recent article on designing for human judgment in AI-powered tools. The core argument there is that if you want professional expertise to remain part of an AI-assisted workflow, you have to build the slowdown into the tool itself. You cannot rely on the user to voluntarily pump the brakes while the tool is handing them a faster car. The wow factor of vibe coding demos works against that principle entirely. The implicit message is: look how fast you can build something without needing to understand it. That framing skips over the most important questions, and it does so in front of audiences who are hungry for solutions and may not have the technical background to know what questions to ask.
Where I Learned This the Hard Way
I want to be honest about the fact that I am not making this argument from a safe distance. I have made these mistakes myself, and the experience is a significant part of why I hold these positions now.
Earlier this year, I built a curriculum mapping tool that started as a focused solution to a real problem: helping educators organize standards and align them across a scope and sequence. The initial version used AI to analyze standards, assign additional weights, and generate sorting recommendations. It was impressive in a demo. It was also, I came to realize, far more complex than it needed to be.
The AI-powered analysis added layers of logic that made the codebase increasingly difficult to maintain. Every time a user encountered an edge case or wanted the tool to handle a slightly different standards framework, the fix required navigating code that I had not fully written and did not fully understand. I could feel the pull toward expanding it further, adding more AI capabilities, making it smarter. I could also see the path that expansion was on: toward a product that would be too fragile to trust, too opaque to secure, and too complex to hand off to anyone else.
So we pared it back. The version that actually works, the one I am comfortable sharing, does something much simpler. It takes standards, lets educators assign weights, and uses a straightforward algorithm to sort and surface recommendations based on those weights. No AI analysis at runtime. No black-box logic making decisions the user cannot inspect. It solves a direct problem, it does so transparently, and it is maintainable because the code doing the work is code I can read and explain.
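For readers who want a feel for how simple that runtime logic is, here is a minimal sketch. The field names are hypothetical and this is not the actual WebTools code, but the pared-back tool works at roughly this level of complexity: a comparator, not a model.

```javascript
// Illustrative sketch (hypothetical field names), not the actual tool's code:
// educators assign each standard a weight, and a plain comparator sorts them.
// Every decision the tool makes traces back to a number a human entered.
function rankStandards(standards) {
  return [...standards].sort(
    (a, b) => b.weight - a.weight || a.code.localeCompare(b.code)
  );
}

const ranked = rankStandards([
  { code: "RL.4.2", weight: 3 },
  { code: "RL.4.1", weight: 2 },
  { code: "RL.4.3", weight: 3 },
]);
// ranked: RL.4.2 and RL.4.3 first (weight 3, ties broken by code), then RL.4.1.
```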
That experience crystallized something I had been circling for a while: the most useful tools I build do one thing, or a small number of things, and they do those things in a way that is legible to both the person using them and the person maintaining them. The AI, when it is involved at all, shows up in the building process rather than in the running product.
What Actually Works
The tools in my WebTools collection that have held up over time share a few characteristics. They solve a specific, well-defined problem. They are built on infrastructure that is transparent and inspectable, usually Google Apps Script and Google Sheets, where the configuration and the logic are visible to the user. They do not require an AI model running in the background to function. And they are designed to be given away, not sold, which changes the incentive structure in ways that matter.
That last point is easy to overlook, but it shapes the entire design philosophy. When a tool is free and open-source, there is no pressure to add features that justify a price point. There is no incentive to make the tool more complex than the problem requires, because complexity does not translate to revenue. The tool can be exactly as simple as the problem demands, and that simplicity is what makes it sustainable. A free tool that does one thing well and breaks cleanly when it fails is more valuable than a commercial product that promises ten things and fails in ways nobody can diagnose.
This is the distinction I keep coming back to: the best AI-assisted tools are not the ones that use the most AI. They are the ones that use AI where it genuinely helps and rely on simpler, more transparent approaches everywhere else. Keyword matching before API calls. Simple algorithms before machine learning. Human judgment as a structural requirement, not an optional step. These are not concessions to limited technology. They are design choices that make tools more reliable, more maintainable, and more respectful of the people using them.
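As a concrete illustration of “keyword matching before API calls,” here is a sketch of the pattern. The names are mine, not from any specific tool: the transparent path handles the common cases, and the model is consulted only when that path comes up empty.

```javascript
// Sketch of "keyword matching before API calls" (all names illustrative).
// The cheap, inspectable path runs first; the opaque model call is a fallback.
const CATEGORY_KEYWORDS = {
  attendance: ["absent", "tardy", "check-in"],
  grading: ["rubric", "score", "grade"],
};

function categorize(text, askModel) {
  const lower = text.toLowerCase();
  for (const [category, words] of Object.entries(CATEGORY_KEYWORDS)) {
    if (words.some((word) => lower.includes(word))) {
      return category; // transparent: a human can see exactly why this matched
    }
  }
  return askModel(text); // fallback only when the simple path finds nothing
}
```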
The Approaching Wave
What worries me about the current vibe coding enthusiasm is not the technology itself. It is the emerging market dynamics. There are already platforms designed specifically to list and sell vibe-coded apps. The “build an app in a weekend” narrative has created a class of would-be software entrepreneurs who have never maintained a codebase, never handled a security incident, and never had to explain to a school district why their tool exposed student data.
We have seen this pattern before in EdTech. A low barrier to entry floods the market with products that solve problems well enough to get adopted but not well enough to be trusted long-term. Schools and districts, often under-resourced and without dedicated technical review capacity, adopt tools based on demos and promises. When those tools fail, the cost falls on the educators who built workflows around them and the students whose data was handled carelessly.
For anyone working in K-12 education, where data governance involves overlapping requirements from FERPA, IDEA confidentiality provisions, and state-specific frameworks, the accountability gap around vibe-coded tools is not abstract. If a vibe-coded tool mishandles protected student records, the question of who bears legal responsibility does not go away just because an AI wrote the code.
What I Would Recommend Instead
I do not think the answer is to reject AI-assisted tool building. I use AI in my own building process constantly, and I think it has made it possible for people like me, people who are not professional software developers, to solve real problems in ways that were not accessible before. But there is a meaningful difference between using AI as a collaborator in a building process you understand and using AI as a substitute for understanding what you are building. The first approach produces tools you can maintain, explain, and take responsibility for. The second produces tools that work until they do not, and when they stop working, nobody knows why.
If you are an educator or an instructional designer thinking about building tools for your context, I would encourage a few things.
Start with the smallest version of the problem you are trying to solve. Not the most impressive version, not the version that would make the best conference demo, but the version that addresses a real friction point in your actual workflow. Build that, and only that, and see if it holds up under real use before expanding.
Keep your tools internal. The gap between “this works for me and my team” and “this is ready for strangers to depend on” is enormous, and it is mostly invisible in a demo. Internal tools can be rough, can be adjusted on the fly, and can fail without catastrophic consequences. That is where AI-assisted building shines: solving a local problem for a known audience with a tolerance for imperfection.
Understand what your code is doing, at least at the level of architecture and data flow. You do not need to be able to write every function from scratch, but you do need to know what data your tool collects, where it goes, how it is stored, and what happens when something goes wrong. If you cannot answer those questions about a tool you built, you are not ready to share it with others.
And be deeply skeptical of any vibe-coded product that handles student data, promises AI-powered analysis, and was built by someone who describes themselves primarily as a prompt engineer. The skills required to build a working demo and the skills required to build secure, maintainable software are not the same skills, and the market has not yet figured out how to communicate that distinction to buyers.
Sitting With the Tension
I recognize that there is a tension in my own position here. I am someone who builds tools using AI assistance, shares them freely, and advocates for educators to build their own. At the same time, I am arguing that most vibe-coded products are not worth trusting with student data. Those two things are not contradictory, but they do require holding a nuance that the current conversation often flattens.
The nuance is this: the value is in the building, not in the product. When an educator uses AI to help them construct a small tool that solves a problem in their own context, they learn something about their workflow, about what technology can and cannot do, and about the gap between an idea and an implementation. That learning has value regardless of whether the tool itself survives. When a stranger sells them a vibe-coded app that promises to do the same thing, that learning disappears, and what remains is dependency on software that nobody fully understands.
I would rather see a thousand educators build rough, imperfect, single-purpose tools for their own classrooms than see a hundred polished vibe-coded products flood the EdTech market. The rough tools will break, and when they do, the people who built them will know why. The polished products will also break, and when they do, nobody will.
If you are thinking about building something and want to talk through the approach, I am always happy to hear from people working through these questions. You can reach me at licht.education@gmail.com, and there are more tools, articles, and resources at bradylicht.com.
