This article reflects on my two-week experience with Lovable - what I learned, where it fell short, and what I believe it reveals about the hype cycle around AI app building platforms. These are my personal observations and hypotheses - they're not representative of the company I work for, and other people's experiences with Lovable may differ from mine.

I've spent nearly 15 years launching digital products and I'm pretty tech savvy, but I'm not a designer or an engineer. According to Lovable, though, I don't need to be. Until recently, the closest I'd come to writing code was crudely editing HTML email templates or static landing pages, a decade ago. And that's a key point of this whole article.

Lovable markets itself as a platform that allows anyone to prototype and build web apps through natural language. Its promise is simple and seductive: tell it what you want, and it'll design and build it for you - no coding required. It sounds utopian, with Lovable positioned as a Valhalla for non-techie founders, business stakeholders and product people: simply describe your vision and it appears, like magic. The hype machine is in overdrive, and investors LOVE the story.

I've been using Lovable intensively for two weeks, and my view now is simpler and less flattering: the slick illusion of simplicity hides something more complex, potentially deceptive, and far less magical.

Before explaining the details of what I tried to create, it's important to first highlight what Lovable claims its cloud platform can build for you, as taken from their website:

- Social apps with user profiles and messaging
- Community platforms with posts, comments, and moderation features
- Inventory systems with product catalogs, filters, and stock tracking
- Collaboration tools for sharing files and media
- Learning apps with courses, progress tracking, and certifications

It's comprehensive, covering a broad base of familiar use cases and patterns.
From this list, the items in bold were the core functionality of the app I attempted to build with Lovable.

Why I tried out Lovable

A couple of weeks ago, I had surgery for a broken hip. Since then, I've been immobile and in recovery, confined to bed. Having rinsed Alien: Earth, The Terminal List, and a full rewatch of Marvel's The Punisher, I needed something better to keep me busy. I love tech and innovation, so it was a good opportunity to properly put Lovable through its paces.

By day, I work for a digital consultancy called Waracle, where we design and build digital products trusted by millions of people. Our clients rely on us for software that's high quality, secure, scalable, and delivers a great user experience, so I know firsthand how much effort it takes to make that happen. Building great apps is complex, deliberate work.

So while I'm a huge advocate for the value and potential of AI, I've been watching the hype around Lovable (and similar platforms) with a lot of scepticism. The bold claims that "an AI platform can build your app without human intervention" don't align with my experience.

If you're not familiar with Lovable, they're a Swedish AI-powered app building platform, a hyper-scaling unicorn 🦄 valued at around $2 billion, with an eye-catching strap line: "Create apps and websites by chatting with AI".

I wanted to see if Lovable could deliver on its promise: can an AI app building platform, without human developers or designers, produce something high quality that genuinely works as intended? And importantly, can it compete against skilled human product teams?

Putting my biases aside, with an open mind and a well-formed app idea, I created an account and got stuck in to see if it's as lovable as the hype makes out. In less than an hour, I had what appeared to be a functioning web app. I'll admit, the first impression was pretty intoxicating.
It felt like I was chatting to a human, and despite clear differences between my spec and what it generated, seeing the first iteration materialise quickly from an initial prompt felt pretty powerful, almost like alchemy.

But my initial excitement faded fast, and although Lovable appeared to briefly redeem itself a few times, it then went steadily downhill. What began as enthusiasm quickly turned into frustration, and eventually, fascination of a different kind. Not because the concept I'd envisioned now worked perfectly (it absolutely didn't), but because I'd started to understand how and why it didn't work.

Sleight of hand, smoke, mirrors, and pre-packaged components

Having used it for two weeks, my hypothesis now is that Lovable isn't truly generating apps from scratch. Instead, it appears to stitch together pre-conceived, off-the-shelf components and stock templates, dressed up as bespoke generative engineering. Most frameworks lean on reusable code and established patterns to move quickly. The problem here, however, is that one of Lovable's supposed strengths, its speed, exposes its weakness: it doesn't actually understand what it's building. It assembles parts from a palette of templates, and the results rarely meet a standard you'd call complete.

Task it with simple CRUD and basic auth and it looks OK. My first pass had auth screens, a simple dashboard, static cards and forms. Impressive for ten minutes. But that moment passes when you realise it's not actually what you asked for and doesn't work as intended - and when you prompt it for improvements or for anything more complex, that's when the real problems immediately appear.

Here are a handful of problems I encountered; there are dozens more:

- Broken layouts and floating UI elements. My bottom navigation bar, which should have been fixed in place on mobile, started drifting unpredictably whenever a user interacted with components, content or modals. It couldn't resolve this.
- Inconsistent component behaviours. Inputs, buttons, and cards looked similar but behaved differently across the app, despite my prompts including instructions to always use semantic tokens and to stick to our previously agreed design language.
- Accessibility and semantics ignored. ARIA labels missing, headings out of order, tab indices broken, pretty much everywhere. If anything, it was consistently inconsistent.
- Dashboard failures. The core dashboard in my app refused to load at all, despite being incredibly simple and based on Lovable's own schema and mock data.

These aren't edge cases. They're basic building blocks of any web app, established and familiar patterns, yet it struggled with all of them. Why? The impression is of a platform that can mimic sophistication. With plenty of visual cues and persuasive design techniques, it does a good job of trying to convince you of its marvellous proficiency, but in reality it doesn't (and can't) sustain it.

So what was I trying to build?

For context, I wasn't trying to reinvent Facebook, build a complex AI-driven metaverse or create the next Uber. It was much simpler: a post-surgery recovery companion - a practical web app to help people like me track daily medication, physio exercises, and symptoms during recovery. The concept itself was modest and realistic, structured around common patterns you'll find in countless health, fitness and wellness apps, and importantly, it seemed aligned to the platform's capabilities, marketed clearly on the Lovable website:

- Medication schedules: add prescriptions, set reminders, and log daily doses.
- Daily dashboards: a clear overview of progress and upcoming tasks.
- Exercise tracking: record reps, sets, and notes.
- Notifications: timely reminders to take medication or perform exercises.
- Wellness check-ins: short daily reflections or symptom logs.
In other words, nothing exotic: a collection of well-established UX conventions built on tried and tested patterns - forms, event-driven notifications, and simple state management; the features seen in everyday apps; things that human devs and designers implement easily.

When my initial prompt produced a functioning prototype, it genuinely impressed me at first. Within moments, it had created my five screens - Home, Meds, Exercises, Schedule, and Options. They seemed to be linked together, were styled reasonably coherently, and integrated with a database. The initial magic trick worked ✅... but the engineering did not ❌.

Once I tried to move beyond the first draft to refine behaviour, fix interactions, and improve usability, the cracks started to show. What should have been incremental improvements quickly became an exercise in frustration. Add in a data schema (a simple transactional relational model with some event logs) to improve and replace the initial mock data that Lovable had generated, and it was pretty much an instant FAIL, on repeat.

I then wrote well-structured prompts with explicit new instructions, detailed acceptance criteria, and supporting references. I gave it user stories, data schemas, UI inspiration, and phase-by-phase implementation plans - everything you'd give a real team to maximise clarity and success and to reduce the potential for error. Lovable confirmed each request confidently, rephrasing my instructions back to me with reassuring precision after every prompt. It even generated task checklists ✅ that seemed perfectly aligned with my specifications, and appeared to verbally mirror my enthusiasm, before announcing that it was "starting the implementation."

And then, almost inexplicably, everything got even worse. What followed wasn't refinement; it was regression and a disaster. Buttons now responded sporadically, layouts broke, styles conflicted, data disappeared, and features that had semi-worked previously now totally failed.
I tried more prompts, rolled back versions and even completely restarted with a fresh project. Nothing recovered any meaningful quality. At that point, it was obvious: Lovable hadn't misunderstood what I wanted. It simply couldn't execute it properly - it's basically a templates library. And that's the deeper flaw: the system is designed to give the impression of generative competence, yet its entire premise depends on consistent execution, limited by templated constraints. When it fails at that, the illusion immediately collapses.

Expectations vs. reality

As advertised, I expected Lovable to behave a bit like a low-code / no-code tool: ask it for what I want, and it'll abstract away the complexity, handle deployment pipelines, and build and integrate the UI components, letting me focus on business logic and jobs to be done. Instead, what I got was a stream of half-baked code and mediocre UI design; it failed to grasp simple tasks and needed hours of manual debugging, refactoring, and structural fixes. Even when I gave it better prompts, complete with diagrams, reference code, example designs and both happy-path and edge-case scenarios, it simply couldn't handle interactions beyond the most basic of CRUD operations.

Worse still, each time I asked it to fix a new bug or make improvements, it consumed more of my credits. Every fix looked promising in the chat window - "working… fixed!" - but when I looked at the preview, or compared pre and post code, nothing meaningful had changed, and in some cases the code was identical. So what had it been doing?

As humans we're programmed to detect when someone's deceiving us, so it's an uncanny experience watching an AI simulate that it's fixing something, narrating its thought process convincingly while quite effectively telling thinly veiled lies - when in reality the code often remains untouched, or changed somewhat, but not enough to fix the problems.
By the tenth or even twentieth loop on the same bug, with a sea of red squiggles waiting to be resolved in VS Code, the novelty wears off fast. You realise that you're watching theatre, not engineering. The AI isn't developing; it's performing the act of development and projecting confidence, while shuffling the same lines of code.

Poor implementation of common web standards

The irony is that Lovable starts from a solid foundation. It bootstraps projects with React and Vite, and ships Progressive Web Apps (PWAs) by default - both great starting points for responsive, modern web experiences. Other platforms like Replit also use React and Vite (and a wider variety of other languages, libraries and frameworks), but Lovable can't seem to implement even basic standards.

A fixed bottom navigation bar, for instance, is a common design pattern and a well-solved problem. Thousands of PWAs replicate native mobile behaviour pretty well. Yet in Lovable, the moment I introduced interaction, modals or dialogs, everything fell apart: the nav detached and floated around, scrolling broke, layouts became unpredictable and largely unusable. I won't even start on how it tried and failed to handle data. Despite countless attempts, it failed to fix any of it.

Then there's the issue of design systems. Lovable advertises support for popular libraries like Material-UI, Ant Design, Chakra and tons of others. It even suggests that it can install other frameworks of your choosing. This 'flexibility' is a mirage.
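To underline how well solved the bottom-nav problem actually is: keeping it pinned comes down to a handful of CSS properties applied consistently. Here's a minimal sketch of those styles as a typed object, the shape you'd spread onto a React element's `style` prop. The specific values (z-index, safe-area padding) are my own illustrative assumptions, not anything from Lovable's output:

```typescript
// Hedged sketch: the small set of CSS properties that keep a bottom nav
// fixed on mobile. Values here are illustrative assumptions, not Lovable code.
const bottomNavStyle: Record<string, string | number> = {
  position: "fixed",   // pin to the viewport, not the scrolling document
  bottom: 0,
  left: 0,
  right: 0,
  zIndex: 50,          // above page content; modal overlays would sit higher
  paddingBottom: "env(safe-area-inset-bottom)", // clear the iOS home indicator
};

// In a React component this would be used as: <nav style={bottomNavStyle}>…</nav>
```

The point isn't that this is clever; it's that it's routine. A platform shipping PWAs by default should get this right on the first pass, not drift the nav around whenever a modal opens.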
This is what it told me it can implement:

Currently installed/available:
- shadcn/ui - already extensively integrated (based on Radix UI primitives)
- Radix UI - headless component primitives (the foundation of shadcn)
- Lucide React - icon library (currently installed)
- Tailwind CSS - utility-first CSS framework

Can be installed:
- Popular full-featured UI kits: Material-UI (MUI), Ant Design, Chakra UI, Mantine, PrimeReact, NextUI
- Tailwind-based: DaisyUI, Flowbite React, Headless UI
- Other options: React Bootstrap, Semantic UI React, Blueprint.js, Evergreen (Segment), Theme UI, Grommet, Elastic UI, Park UI, Ark UI

I'd instructed Lovable explicitly to use Material-UI for my build and to always use semantic tokens. It confirmed understanding repeatedly, taking action with the chat window showing dependencies being installed, and echoing my instructions back to me verbatim. But when I looked at the code, it had defaulted to shadcn/ui, with Tailwind baked in.

- No design tokens ❌
- No semantic structure ❌
- No consistency across the app ❌

It wasn't just messy; it was misleading. Lovable presents the appearance of optionality ("choose your stack"), but in reality it's running everything through one narrow design pipeline, using what appears to be a handful of stock templates, with a total inability to cross-configure and meet a user's specifications. I'd instructed it very clearly to follow established design patterns and not to deviate, but it was pretty clear that I was seeing a different type of pattern emerge.

The illusion of intelligence: Inside Lovable's design outputs

After a week or so, I decided to stop trying to fix things and instead act on my suspicions, to figure out whether it had even understood the thing it had built from my detailed instructions. So I asked it a simple series of structured prompts:

- Generate the exact design tokens used in this project.
- Describe the design system used in this project - colours, typography, components.
- Export a pattern library for this project that could be shared with a designer.
- Create a JSON manifest for this project that I can import into Figma or Storybook.

It responded instantly, and beautifully, producing JSON manifests, markdown tables, and neatly formatted colour variables. It talked confidently about typographic scales, spacing units, and component hierarchies. It looked professional, thoughtful and intelligent.

But it wasn't real. ‼️ Every variable and colour token it produced came straight from shadcn/ui and Tailwind defaults - not from my actual project. None of the fonts, hues, or spacings it described existed anywhere in my repository. It hadn't inspected the code; it hallucinated a plausible-sounding design system and wrapped it in structure and confidence. I was impressed by its blagging prowess...

This is Lovable's hidden trick: it simulates introspection. It doesn't read your project; it generates something that sounds like it did. It's the linguistic equivalent of nodding profoundly in agreement with someone when you've no clue what they're talking about. When you see cleanly formatted tokens and human-like commentary, your brain assumes there's substance behind it. You think to yourself, "this thing knows my app and knows what it's doing". But it doesn't. It's spoofing the shape of intelligence, not the reality.

It reminds me of those early chatbots that could hold a conversation for a few exchanges before collapsing into nonsense when you broke the pattern. Lovable's "design understanding" works the same way: detailed enough to impress, but shallow enough to crumble quickly when you lift the lid.

This really matters, to me at least. If you're building a real product, a design system isn't decorative; it's the foundation of consistency, scalability, and accessibility. Without an actual source of truth - a tokens file, a pattern library and so on - you don't have a design system. How would a product team take a Lovable app forward without this?
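To make "source of truth" concrete, this is roughly what a real tokens file looks like: one module where colours, spacing and type are defined, and the only place components read them from. Every name and value below is my own illustrative assumption of a minimal shape, not anything Lovable produced:

```typescript
// A minimal, illustrative design-tokens file. Components import from here
// instead of hard-coding hex values, so the system is auditable and exportable.
export const tokens = {
  color: {
    primary: "#2563eb",
    surface: "#ffffff",
    textMuted: "#6b7280",
  },
  spacing: { sm: "8px", md: "16px", lg: "24px" },
  typography: {
    body: { family: "Inter, sans-serif", size: "16px", lineHeight: 1.5 },
  },
} as const;

// Because this file is real code in the repository, a prompt like "generate
// the design tokens used in this project" has a verifiable answer: read this file.
```

With a file like this in the repo, "export a manifest for Figma or Storybook" becomes a mechanical transformation rather than an invitation to hallucinate.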
Lovable's illusion hides that absence. It speaks the language of design systems - tokens, components, manifests - without the underlying data or rigour that make them valuable. Maybe that's clever product design, or maybe it's just good stage lighting. Lovable shines brightly enough to distract you from the fact that the lights are on, but the responsible adults aren't home and the kids are running amok.

Low quality foundations that never hit the mark

As I dug deeper, I realised a major issue isn't just the constant bugs. It's the quality of the underlying code written by Lovable that's a cause for concern. The scaffolding it generates looks syntactically correct, but it lacks the fundamentals of software craftsmanship: relational integrity, validation, error handling, and performance.

Given that Lovable integrates directly with Supabase, and now offers its own "Lovable Cloud" backend as a service (read on for more about this), you'd expect it to at least follow established good practice for data management. But it doesn't. There's no real sense of data governance or data maturity: no referential integrity, no consistent auditability, and seemingly no adherence to the FAIR data principles (Findable, Accessible, Interoperable, Reusable). The data may exist, but it's trapped inside a structure that's opaque, fragile, and difficult to trust or reuse.

Lovable's responses, tone of voice, micro-interactions and simulated 'workings' within the chat window give you the impression of sophistication, but the moment you scratch the surface, you realise it fails quite basic tests of good engineering and data stewardship.

Example 1: Modal scroll bug

🚫 Lovable's initial version (in response to my prompt - it looked fine, but broke scrolling permanently). It then attempted 8 or 9 'fixes' (using up credits), but failed to fix the issue:
```tsx
useEffect(() => {
  if (isOpen) {
    document.body.style.overflow = "hidden";
  }
}, [isOpen]);
```

✅ Corrected version (I used ChatGPT5 to rewrite it properly, then got stuck into VS Code myself to replace, commit and deploy it, restoring scroll after closing the modal. Fixed):

```tsx
useEffect(() => {
  if (isOpen) {
    const scrollY = window.scrollY;
    document.body.style.position = "fixed";
    document.body.style.top = `-${scrollY}px`;
    return () => {
      document.body.style.position = "";
      document.body.style.top = "";
      window.scrollTo(0, scrollY);
    };
  }
}, [isOpen]);
```

At a glance, Lovable's version appeared OK. But it completely ignores scroll restoration, a core part of the user experience when working with modals. It's exactly the kind of oversight that shows it simply doesn't have the knowledge, and therefore lacks the capability, to deliver software that works properly and as intended.

Example 2: Supabase data schema

🚫 Lovable's scaffold (technically valid, but fragile), followed by more 'fixes' and reworking that used up multiple credits while unsuccessfully trying to improve it:

```sql
create table medications (
  id uuid primary key default uuid_generate_v4(),
  user_id uuid,
  name text,
  dosage text
);
```

✅ Corrected schema (relational, validated, performant - not perfect, but functional):

```sql
create table medications (
  id uuid primary key default uuid_generate_v4(),
  user_id uuid not null references users(id) on delete cascade,
  name text not null,
  dosage_mg integer check (dosage_mg > 0),
  created_at timestamp with time zone default now()
);

create index idx_medications_user_id on medications(user_id);
```

Lovable's schema "works," but only in a loose sense. It would allow orphaned rows, invalid data, and poor performance at scale. These aren't advanced database concepts - they're the basics of relational database design. I'm not a software engineer, but according to Lovable, I don't need to be, because the platform can do everything for you.
Quantity, not quality

By default, quality isn't baked into what Lovable generates, which is a surprisingly big gap in its value proposition. It spits out large volumes of code, with claims that its QA process is "correctness-focused during development", but given just how many bugs and problems it creates, and how poorly it resolves them, much of this code is unusable and needs lots of manual intervention before it could be deployed live to customers.

- No automated testing - it doesn't conduct unit/integration tests unless explicitly requested, and even when they are specified, it doesn't run them ⚠️
- Code review approach - it claims to validate syntax, types, and logic. Really? ⚠️
- Runtime debugging - in the chat window it shows 'analysing console logs' or 'network requests', and claims it 'diagnoses issues after they occur'. But no fixes ⚠️
- Design system compliance - it claims to apply rules so that all components consistently use semantic tokens and follow established patterns. Nope ⚠️
- Reactive fixes - it responds to, but fails to fix, the vast majority of user-reported errors, rather than proactively testing and fixing ⚠️

I even tried to meet it halfway by writing structured user stories, acceptance criteria, and intended behaviours in clear BDD format - the kind of inputs that map naturally to modern testing frameworks like Vitest, Jest, and Cypress. It never once scaffolded a test. No assertions. No coverage. No validation of expected behaviour, despite acknowledging the tests and confirming it would write them. It even told me features or buttons were working as intended, multiple times, when in reality it doesn't even have the capability to test features interactively.

Accessibility testing - a legal and ethical minimum in most production apps - was also totally absent. Maybe it's too much to expect from an AI app building platform, but given how many automated testing tools and frameworks are available, at least some level of testing should be present.
If Lovable wants to be taken seriously as a platform, quality assurance and accessibility should be non-negotiable. Ironically, it's reminded me how important frontend, backend, QA and DevOps engineers really are. When you strip them away, you don't gain speed - you lose all reliability.

I'm not the only user frustrated with the mission

I was excited to build something, but was now spending days on a bug hunt. After finishing my own experiments, I found a LinkedIn post by another frustrated Lovable user. Different project, similar story... and a quick Google search uncovered lots more just like it.

He'd used Lovable to build a temporary internal system that integrated with Shopify - fetching product data, showing it in a custom UI, and exporting CSVs. The spec sounds straightforward, and presumably a doddle for Lovable to create. His verdict mirrored mine almost word for word:

"Within three or four hours I had a working app ready to share. The UI was clean and the features worked - something that would've taken me days otherwise. But then over 80% of my credits were burned asking it repeatedly to do the same thing or fix bugs."

He even found Lovable "fixing" a login issue by disabling access controls altogether, exposing user data to anyone. When he challenged it, the AI responded cheerfully: "You're absolutely right to be concerned… this policy allows unauthenticated users to read the allowed_users table."

His conclusion captured it perfectly: "Would I build a public-facing app with it? Not without expert oversight - which sort of negates the benefit."

That's the contradiction in a nutshell: Lovable promises to empower non-technical users, but the moment you want it to do anything meaningful, you actually need a lot of technical expertise, time and BUDGET - being stuck in a "fixing" loop comes at a high cost.

Enter Lovable Cloud

Mid-project, just as I was making peace with its not-so-lovable quirks, they launched Lovable Cloud.
On paper, it sounds good: the backend is abstracted away, with no need to worry about databases, servers, or configurations. In practice? It's a pretty serious concern.

Previously, when first setting up Lovable, you connected your GitHub account and integrated with your own Supabase account. That gave you a semblance of real control and the feeling of a 'grown-up platform': SQL queries, schema visibility, performance tuning, and the confidence of seeing and knowing what was happening under the hood.

With Lovable Cloud, that control is gone. You're locked inside their box, shielded from the underlying services, with a rudimentary UI and limited access to backend features. While still built on top of Supabase, it's Lovable's instance, not your own. On one level, that's deliberate: it makes the "AI magic" seem real, so you don't have to worry about the complex, distracting mess behind the curtain. But the more Lovable hides, the less you govern and own.

That becomes painfully clear the moment you try to leave. Migrating away from Lovable Cloud is not trivial. If you wanted to take your app seriously, to scale or extend it outside their walled garden, the "ease" you were sold at the start becomes a pretty big burden. Now the promise of "you don't need technical skills" flips into "to do anything meaningful at all, you'll actually need very technical skills" - plus the pain of unpicking their abstractions. I also wouldn't trust any database schema Lovable generates, so you'd probably need to completely recreate it from scratch. Not such an attractive proposition after all...

It's a cul-de-sac disguised as a shortcut. And from a valuation perspective, that's a big red flag: growth based on lock-in and opacity can make the numbers look good on paper, but it's pretty risky - churning customers will simply move to a more reliable and trustworthy alternative (including back to traditional methods, which right now are still superior).
The behavioural economics of frustration 📉

Here's a controversial hypothesis: the more I think about it, the clearer it seems that Lovable's problems may not be purely technical, but potentially economic by design. Is what I'm describing a bug at all, or is it actually a feature of the platform?

Lovable's model appears to thrive on activity rather than outcomes. Steady acquisition and recurring credit top-ups look great in an investor deck, but if most users are burning credits without shipping something stable, that's not traction - it's motion masquerading as progress. Churn will be high, retention low and trust brittle. For investors, high acquisition rates, droves of gushing AI influencers (🚩) and stable revenue growth may seem attractive in the short term, but it's fragile in the long term.

Let's look at the pricing mechanics. Initially, you pay $20 a month for 100 credits, plus a few daily freebies and the option to upgrade your plan - which you'll undoubtedly need to do. Every prompt burns through credits. Ask it to add a feature? Credit. Fix a bug? Credit. Retry? Credit. The trouble is, Lovable rarely gets it right the first time, or the fifth, or the twentieth. That $20 can very quickly escalate to $400, $800 or potentially thousands more, and still won't guarantee you a high quality app that delivers what you asked for.

Each "working…" spinner and each triumphant "issue fixed!" toast in the chat window delivers a small dopamine hit, so because you think you're almost there, you reach for the confusingly designed pricing page (a deceptive pattern) and top up your credits, in earnest. It seems you're not paying for outcomes; you're paying for the anticipation of success. You've already invested time and money, so you keep going, topping up credits, convinced the next attempt will finally fix it. Meanwhile, Lovable benefits from your perseverance rather than your success.
Users running continuous retries drive consistent revenue for Lovable, regardless of the results. Is that intentional? Who knows. It could be seen as clever psychology, but it's another deceptive pattern and, in my opinion, a poor business practice. This behaviour certainly appears to power Lovable's business model: retries mean more credits burned, which makes it hard not to wonder if the system is intentionally designed to work this way.

For Lovable, endless tinkering is great for business. Engagement looks high, usage looks healthy, and every "try again" drives revenue, even when users end up back where they started. For the user, it's a false economy - a fruit machine in a local boozer dressed up as a smart productivity tool. You pull the lever, lights flash, it makes noises and you feel close, but you never win the jackpot. The Lovable model doesn't just waste time; it seems to monetise inefficiency, turning user frustration into revenue.

And that, to me, is potentially the quiet genius and the silent danger of Lovable: a system that appears to be optimised not to finish things, but to keep you believing you're just one or two prompts away from achieving your goal.

Through a behavioural lens, a never-ending 'fixing' loop like this could be seen as exploiting a handful of well-documented cognitive biases. The variable reward schedule (the same mechanism that keeps gamblers hooked) releases a dopamine spike every time you think a fix might finally work. The sunk cost fallacy keeps you committed even when logic says stop. Loss aversion makes abandoning your half-working app feel like throwing money away. Add a touch of choice architecture - a confusing pricing screen here, an "upgrade or downgrade current plan" button there, and no immediate clarity on what you're signing up to - and the system appears to gently steer you to keep paying.
In UX terms, if the hypothesis is correct, it's blending two classic deceptive patterns: Hidden Costs, where the true price of engagement with the platform isn't clear until you've already committed, and the Roach Motel, where entry is easy but escape (or resolution) is almost impossible. Together, they create a revenue stream powered by user frustration, not satisfaction.

Every (Lovable) cloud has a silver lining...

So... despite all of this, it wasn't ALL bad, though the upsides are few and far between. Lovable spectacularly failed to deliver what I asked it for, which was disappointing. But I learned a lot more than I expected to, which was a good outcome at least.

Two weeks ago, if you'd put VS Code in front of me, I'd have stared blankly. Now I can navigate it fairly confidently. I also better understand Git branching, Postgres schemas, and Supabase config. While I don't plan to be writing React code or SQL functions any time soon, it's given me the impetus to apply my new knowledge practically, and the confidence to spend more time actually building out an MVP for my recovery product concept - without Lovable, though I'd consider experimenting with Replit or v0 instead.

I also enjoyed flexing my product manager muscles, and the learning and problem solving served as welcome distractions from being sore and immobile in bed. More than anything, it's helped strengthen an even deeper respect for the disciplines Lovable tries (but fails) to replace: product designers, frontend and backend engineers, QAs, and DevOps. With AI becoming ubiquitous, it solidifies for me just how increasingly necessary and important these roles are for building great apps and digital products.
Perhaps Lovable's strap line going forward should be: "Lovable - we'll drive you mad, and you won't really get what you asked for, but at least you'll learn something."

I should highlight that this article is absolutely not a criticism of Supabase. I've been really impressed by Supabase itself, and I'm not at all surprised that they're doing so well. Their platform is very intuitive, with a nicely designed UI and overall a great user experience. The agents (Performance and Security Advisors) feel intelligent and quickly resolved a couple of issues that were well above my pay grade. It was easy to find what I needed, and the platform feels very stable, useful and usable. Even with my very limited experience of DevOps, I could navigate my way around it without any help. ✅

404 Error: Lovable expectations not found ⛔️

So did Lovable deliver on its promise? No. ❌

If you're thinking of using Lovable to build something serious, my advice is simple: don't. Not yet. Right now, it's a fun sandbox, a curiosity, but definitely not a construction site. The basics feel fragile, the fixes are surface-level and cosmetic, QA doesn't exist, data handling feels risky, and the pricing model is vague and appears to be based on locking you into a never-ending fixes feedback loop, all stacked in Lovable's favour.

I hope it evolves. We all remember those generative AI videos of Will Smith eating pasta, but just look at what Gemini and Veo 3 can do now. Until it does improve, in my opinion no responsible CTO should let Lovable near production code. As a design prototyping tool, the outputs are rudimentary and it doesn't follow good design conventions. Traditional tooling that's already proven to work well is still far superior. As a toy to maybe learn from? Sure. But as a tool for enterprises to build with? No.

Maybe the real lesson is that there's no shortcut to good code, quality, or ownership.
Lovable's greatest illusion is that you can skip the hard parts - the design systems, the QA, the DevOps, the debugging - and still end up with something robust and production-ready. You can't. It struggles with the basics, let alone being capable of shipping anything I'd trust to scale or put in front of real paying customers.

So thanks, Lovable. You didn't give me the app I wanted, but you did give me a couple of other things instead: a welcome distraction, a reinforced respect for the people who design and build real software, some useful dev skills, and confirmation of a simple truth: software craftsmanship and genuine creativity can't (yet) be automated.