You might think the world's all doom and gloom now that AI can create everything, but let me tell you: it's not that doomy! AI right now is like a child who can hold a hose but doesn't know the first thing about putting out a burning building.
When I first heard that you can build an app in half an hour, that's what I thought: that people who actually enjoy building apps would see their work taken away from them, leaving them joyless.
But after a couple of weeks of working with AI on this game I'm building, lemme tell you: AI is not here to replace us. Sure, our job might shift, but it's not true that you ask an AI to build you an app and that's that.
First off, let's consider the scope of the app. If you want a login/register page and then a blank screen, sure, AI's gonna do that in a couple of minutes. But after that, it's not so simple. AIs aren't smart. They look at patterns and predict what you want. If you don't tell them exactly what you want, they'll start making shit up around the thing you asked for.
And then there are different AIs, each built for coding but each built by a different set of people, with a different set of principles, ideologies, etc. If you think you can switch AIs halfway through and not see your entire codebase get refactored, you'd be wrong.
So what do you do?
Building an app with AI takes the coding part away from you, sure, but it turns you into a producer: one who has to know how to manage AI agents, because these days an AI is not a bundled thing. You work with AI X, but that AI might delegate part of the work to an agent. And what if you want to change the agent the AI delegated the work to? Each agent now also has its differences. You end up being a conductor, having to learn the ins and outs of AIs and agents.
So you're a producer and a conductor. But also a director. You have to direct all these different parts, tell them exactly how to do things and what not to do, so they actually deliver what you asked of them.
Example. I started using Gemini 3 (not Pro) and Copilot (free). Copilot itself delegates your work to whatever agent it thinks is best for the job; that's another can of worms I haven't opened to understand yet. But then I upgraded Copilot, and half my code got refactored (because the Pro version of Copilot knows more than the basic version). It was a good refactor, because it improved the quality of the code, but it means I have to keep paying for it, and use it alone. Because as soon as I called Gemini into the picture, the code Copilot Pro was producing was subpar according to Gemini Pro. So now my code gets refactored again, and I have to pay for the Gemini Pro version too.
Alright, so using the Pro versions of these AIs or the free versions is a decision I make between having better code or worse code. Not in the sense that the free code doesn't work, but the higher-level AIs (the Pro versions), being "smarter", can think about more things around the features I implement: things like security, edge cases, better practices, etc. So it's a choice between code that works and code that works a little bit better.
The fun part has just begun. See, AIs have "context". The more context an AI has, the more he remembers of your chat and of the code you're working on with him. If you chat too much, the AI starts forgetting wtf you talked about at the beginning of the chat, and this is subtle; you'll miss a lot of these little memory losses. It shows up as the code he's spitting out now drifting away from the original idea, the set of principles, the instructions you set at the start of the chat. So now you have to tell the AI, in every chat, to only keep answering as long as he can still recall the first line of the initial prompt, so you can be sure he hasn't forgotten anything, and to tell you as soon as he does: "Hey, listen, I know you asked me to change this part of the code, but I'm starting to forget what we initially talked about, so I might start giving you answers that aren't aligned with what you want." Fun.
We can deal with that: at the start of each chat session we feed him a bunch of instructions, including the one where he tells us when he's starting to forget, and another that says "if you're starting to forget, summarize what we're doing and tell me to move to a new chat". Moving to a new chat wipes his memory, but it also means he has his full context back (room to remember again).
So we paste the summary into the new chat, along with the files we're working on, and we continue until he runs out of memory again.
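To make it concrete, here's roughly what that first prompt of a fresh chat looks like if you script the stitching instead of pasting by hand. This is a sketch, not my actual setup, and the file names are made up; the shape is what matters: standing instructions, then the summary, then the code.

```python
# Rough sketch, not my actual setup: stitch together the opening prompt of a
# fresh chat from standing instructions, the last chat's summary, and the
# files we're working on. All file names here are made up for illustration.
from pathlib import Path

def build_first_prompt(instructions_file, summary_file, source_files):
    parts = []
    # Standing rules: how to behave, and "warn me when you start forgetting".
    parts.append("== STANDING INSTRUCTIONS ==\n" + Path(instructions_file).read_text())
    # What the previous chat agreed on, in the AI's own summarized words.
    parts.append("== SUMMARY OF PREVIOUS CHAT ==\n" + Path(summary_file).read_text())
    # The files we're currently working on, pasted in full.
    for src in source_files:
        parts.append(f"== FILE: {src} ==\n" + Path(src).read_text())
    return "\n\n".join(parts)

if __name__ == "__main__":
    prompt = build_first_prompt(
        "instructions.md",
        "summary_from_last_chat.md",
        ["src/login.py", "src/game_state.py"],
    )
    print(f"first message is {len(prompt)} characters long")
```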
The not-so-fun part: as your code grows, so does the amount of context the AI burns just holding your code in the first prompt of a new chat. So as we loop this and keep starting new chats, we get fewer and fewer exchanges before we have to summarize and move on to yet another chat. Until, eventually, the summary and the code exceed the AI's context in that first prompt, and at that point you can no longer work with the AI. You have a few choices here: you pay for yet another tier and get more context (Gemini Ultra Pro, which goes for about 300 bucks a month); or you start feeding it only parts of the code (not the code in its entirety) and find yourself stitching snippets together across several files of your app; or you write the code yourself (and in reality you can't compete with the speed at which AI shits out code); or you buy yourself a beast of a GPU and run an AI locally (and hope the open-source AIs are better than something like Google's Gemini, which is highly unlikely).
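If you want to see why that wall shows up, here's the back-of-the-envelope version. Every number is an assumption (a ~128k-token window, roughly 4 characters per token, made-up sizes for the instructions and summary); plug in your own and watch the room to chat shrink as the code grows.

```python
# Back-of-the-envelope math: how much room is left for actual conversation
# once the first prompt (instructions + summary + code) is loaded.
# Every number below is an assumption, not any specific model's real limit.

CONTEXT_WINDOW = 128_000      # tokens the model can hold at once (assumed)
CHARS_PER_TOKEN = 4           # rough heuristic for English text and code

def tokens(chars):
    return chars // CHARS_PER_TOKEN

instructions = tokens(40_000)   # a ~400-line rules document
summary = tokens(20_000)        # the summary carried over from the last chat

for code_chars in (100_000, 400_000, 800_000):   # the codebase, growing
    first_prompt = instructions + summary + tokens(code_chars)
    left = CONTEXT_WINDOW - first_prompt
    print(f"{code_chars:>7} chars of code -> first prompt {first_prompt:>7} tokens,"
          f" {max(left, 0):>6} tokens left to actually chat")
```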
Anyhow. Sure. AI can make an app in half an hour. But a lesser one. And you can't grow it without costs and hurdles. And you can't maintain it without costs and hurdles.
Even using one single AI, and not managing two (which I do so that each AI can audit the other's work), is hard work. As you work with it, you'll see his personality come through: he thinks the code you asked for should be "this way", and you don't want it "this way", so you have to write that down in an instructions text and feed it to him at the start of every conversation. And that text grows. Not only to stop the AI from being a freethinking asshole, but because AIs are trained on public code, which is full of errors, and you sometimes have to instruct it to go against its training and write the code the right way.
I started by asking Copilot to build me a login page. Shortly after, I saw Gemini saying Copilot's code was missing important features, so I moved to Gemini. Then I started seeing that Gemini loves to introduce his own twist on the things I ask of him. Big twists! I started to panic. Then I found Jules, which shits on Gemini's code, but Jules is kind of a genius with the part of his brain missing where he understands what the fuck he's being asked.
So now I have a 400-line document I've been building that instructs Gemini on how to act, instructs Jules on how to act, and turns my human prompts into AI prompts I can pass to Jules. I've also built an extension for my IDE with a button that details the current state of my code before I brainstorm ideas with Gemini, and then produces a detailed description of what Jules built, so I can feed it back to Gemini and check whether what Gemini asked of him lines up with what he actually did.
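For the curious, the idea behind that "describe my code" button, stripped down to a standalone sketch: walk the project and spit out a structural snapshot you can paste straight into a chat. The real thing lives in my IDE; this version assumes a plain Python project and is just the gist.

```python
# Illustration only, assuming a Python project: a stripped-down, standalone
# version of the "describe my code" idea. Walk the project, list each file's
# top-level functions and classes, and print a snapshot you can paste into a
# chat. The real extension is IDE-specific; this is just the gist.
import ast
from pathlib import Path

def describe_project(root):
    lines = []
    for path in sorted(Path(root).rglob("*.py")):
        lines.append(f"{path}:")
        try:
            tree = ast.parse(path.read_text())
        except SyntaxError:
            lines.append("  (could not parse)")
            continue
        for node in tree.body:
            if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
                args = ", ".join(a.arg for a in node.args.args)
                lines.append(f"  def {node.name}({args})")
            elif isinstance(node, ast.ClassDef):
                lines.append(f"  class {node.name}")
    return "\n".join(lines)

if __name__ == "__main__":
    print(describe_project("src"))   # "src" is a placeholder project folder
```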
It's a whole fucking process of instructions, assurances, immutable truths, everything. And it is NOT "making an app in 30 minutes", I can tell you that for absofuckinglutely sure.