CodinIT.dev
Self-hosted AI & machine learning tool that creates production-ready web apps using natural language. Generate full-stack code.
Open-source AI development platform, honestly reviewed. Built as an alternative to Lovable, Bolt, and v0 — but still early.
TL;DR
- What it is: Open-source (Apache-2.0) AI app builder — describe what you want to build, get a full-stack app. Self-described alternative to Lovable.dev, Bolt.new, and Vercel’s v0 [3].
- Who it’s for: Indie hackers, non-technical founders, and developers who want Lovable-style app generation without vendor lock-in, credit-based billing, or a closed-source black box [1][3].
- Cost savings: Lovable.dev runs on a credit/message-based model that scales with usage. CodinIT’s core is free and self-hostable; you bring your own LLM API keys [1][3].
- Key strength: True multi-model flexibility — 19+ AI providers including Ollama for fully local inference, plus WebContainer-based in-browser execution so you don’t need to install anything to try it [1][3].
- Key weakness: 167 GitHub stars at time of writing, the GitHub README explicitly calls itself a “Demo,” the Pro paid tier is marked “COMING SOON” and doesn’t exist yet, and there are no independent third-party user reviews to speak of [profile][3].
What is CodinIT.dev
CodinIT.dev is an AI-powered app builder: you type a prompt, it generates a full-stack application. The generated code runs instantly in your browser via WebContainers (in-browser Node.js runtime, same technology StackBlitz uses), or in a secure cloud sandbox via the E2B SDK if you need more compute. The output isn’t a prototype you have to rewrite — it targets deployable Next.js, Vue.js, Svelte, Streamlit, or Gradio apps with Postgres, auth, and functions wired up [README][1].
The pitch is explicitly positioning against the closed-source incumbents. The homepage meta keywords include “Bolt.new alternative, Lovable.dev alternative, v0.dev alternative,” and the FAQ page directly answers “How does CodinIT.dev compare to Bolt.new?” and “How does CodinIT.dev compare to Lovable.dev?” [homepage]. The core distinction the project makes isn’t about the AI itself — it’s about who controls the infrastructure, the model choice, and the data [1][3].
The project is built by Gerome Elassaad, sponsored by E2B for Startups, and sits at 167 GitHub stars with an Apache-2.0 license [profile][README]. There’s also a separate desktop app repository at codinit-dev/codinit-dev — the original repo (gerome-elassaad/codingit) is the web version, and the README explicitly directs users to the newer desktop app [README]. That kind of repo split at this stage of the project is worth noting.
Why people choose it
The available sources on CodinIT are limited and skewed: the most detailed comparison article [1] is written by CodinIT’s own documentation team (the URL is codinit.dev/docs/comparisons/), the Onei.ai entry [3] reads like a product description rather than an independent review, SourceForge shows no actual user reviews [2], and OpenAlternative was blocked [4]. So take the “why people choose it” framing with that caveat — there aren’t enough independent voices yet to synthesize a consensus.
What the available data does show is a consistent positioning argument:
Versus Lovable.dev. CodinIT’s own comparison page [1] frames this as the central fight. Lovable is proprietary, fully cloud-hosted, React + Tailwind only, and runs on managed LLMs. CodinIT is open-source, runs locally or in E2B sandboxes, supports 10+ frameworks, and lets you bring any of 19+ LLM providers including local models via Ollama [1]. The code ownership framing is the same argument you hear from anyone switching off Lovable: you own what you build, the platform can’t change pricing on you, and you can self-host the builder itself [1][3].
The local LLM angle. This is where CodinIT genuinely differentiates from every cloud-native competitor. Lovable, Bolt.new cloud version, and v0 all route your prompts through their own managed API. CodinIT lets you point the builder at a local Ollama instance — meaning the entire generation pipeline, including inference, stays on your hardware [1][3]. For founders building anything sensitive (medical, financial, internal tooling), that’s not a marginal benefit.
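To make the "inference stays on your hardware" claim concrete, here is a minimal sketch of sending a prompt to a locally running Ollama server over its documented HTTP API (default endpoint http://localhost:11434). The model name and helper functions are illustrative; this is not CodinIT's actual provider integration.

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint


def build_request(model: str, prompt: str) -> dict:
    """Build the JSON body Ollama's /api/generate endpoint expects."""
    return {"model": model, "prompt": prompt, "stream": False}


def generate(model: str, prompt: str) -> str:
    """Send a prompt to a local Ollama instance; nothing leaves the machine."""
    body = json.dumps(build_request(model, prompt)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]


# Example (requires a running Ollama daemon and a pulled model):
#   answer = generate("llama3", "Write a Next.js route handler that returns JSON.")
```

The entire round trip happens on localhost, which is the whole point: swap the base URL for a cloud provider and you are back to routing prompts through a third party.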
- The free-as-in-free-forever core. The Pro plan that would cost money is listed as “COMING SOON” [3]. Right now, the entire platform is functionally free to self-host. The catch is that you need your own LLM API keys and, for cloud code execution, your own E2B API key — and E2B charges beyond its free tier.
Features
Based on the README, homepage, and third-party descriptions:
App generation:
- Prompt-to-app generation in Next.js, Vue.js, Svelte, Streamlit, Gradio, and more [README][3]
- “Prompt Enhancer” button that expands short ideas into detailed, structured prompts before generation [3]
- Visual reference support — you can paste screenshots, video, or Figma designs as context for generation [homepage]
- Web research mode — CodinIT can search for implementation context before generating code [homepage]
Execution environments:
- In-browser execution via WebContainers (no local install needed) [1][3]
- Cloud sandbox execution via E2B SDK for heavier workloads [1][README]
- Install any npm or pip package inside the sandbox [README]
- Streaming output in the UI [README]
Stack and AI providers:
- 19+ AI providers: OpenAI, Anthropic, Google Generative AI, Google Vertex AI, Mistral, Groq, Fireworks, Together AI, Ollama, xAI, DeepSeek [README][homepage]
- The homepage claims “500+ AI Models” — this is the total model count across all providers, not 500 distinct systems [homepage]
- Add custom LLM providers via the extension system [README]
Full-stack features (web version):
- Postgres database per project, structured and ready from the start [homepage]
- Email/OAuth authentication (GitHub login, Google Vertex) [README][homepage]
- Cloud storage for files [homepage]
- Serverless functions [homepage]
- Realtime data sync [homepage]
Deployment:
- One-click deploy to Vercel, Netlify, or GitHub Pages [3]
- Desktop app for macOS, Windows, Linux as alternative to browser-based usage [3][README]
MCP integration:
- MCP server support — connect your own MCP templates or use pre-built ones [3]
Community templates:
- Users can remix and share projects; the homepage shows community-built apps with remix counts (186 remixes, 116 remixes, etc.) [homepage]
App testing:
- CodinIT “simulates real user interaction and debugs the app for you” — this is a homepage claim without detail on how it works in practice [homepage]
Pricing: SaaS vs self-hosted math
This section is necessarily thin because CodinIT’s monetization is still being built.
CodinIT self-hosted (current reality):
- Platform cost: $0 (Apache-2.0) [profile]
- LLM costs: your own API keys, billed directly by the provider (OpenAI, Anthropic, etc.)
- E2B cost for cloud sandboxes: E2B has its own pricing. The free tier covers limited compute hours; production workloads will incur costs. The exact E2B cost for typical CodinIT usage isn’t documented in the sources reviewed here.
- VPS to self-host the builder: $5–10/month
CodinIT Pro (announced, not yet available):
- Pro Developer: $89.99/year — includes priority AI models (GPT-5, Claude, Gemini), private hosting, debugging tools, team collaboration [3]
- This tier is marked “COMING SOON” as of this review [3]
Lovable.dev for comparison:
- Free: limited messages per day
- Starter: credit-based pricing scaling with usage volume
- Teams and Pro tiers add collaboration features at higher monthly costs
- The closed-source platform controls the LLM routing — you can’t substitute a cheaper or local model
v0.dev (Vercel) for comparison:
- Free tier with limited generations per month
- Premium tiers for higher volume; pricing tied to Vercel account
Concrete savings math: The honest version of this calculation is hard to run because CodinIT’s Pro pricing doesn’t exist yet and the E2B cost depends entirely on your usage pattern. What’s clear: if you self-host CodinIT and use Ollama or a cheap API provider like Groq, the marginal cost per generation is close to zero. If you use OpenAI GPT-4 or Claude for every generation, your API bill from those providers will dominate.
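The argument above can be sketched numerically. The token prices below are illustrative assumptions, not quotes from any provider's price sheet; the point is the order-of-magnitude gap between local inference and a frontier-class hosted model.

```python
# Back-of-envelope cost per generated app, under ASSUMED token prices.
# Check your provider's current price sheet before relying on these numbers.
PRICE_PER_MILLION_TOKENS = {   # USD per 1M output tokens (illustrative only)
    "ollama-local": 0.00,      # local inference: electricity only
    "budget-hosted": 0.80,     # a cheap hosted provider
    "frontier-model": 15.00,   # a frontier-class hosted model
}


def cost_per_generation(provider: str, output_tokens: int = 20_000) -> float:
    """Cost of one full-app generation, assuming ~20K output tokens."""
    return PRICE_PER_MILLION_TOKENS[provider] * output_tokens / 1_000_000


for name in PRICE_PER_MILLION_TOKENS:
    print(f"{name}: ${cost_per_generation(name):.4f} per generation")
```

Even with these rough assumptions, a few hundred generations a month on a frontier-class API adds up to a real bill, while the same workload through Ollama or a budget provider stays in pocket-change territory.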
The $89.99/year Pro tier (when it ships) is cheap compared to Lovable’s monthly credit model if you’re a heavy user — but that comparison can’t be made concretely until it launches.
Deployment reality check
There are two paths here and they have very different setup profiles.
Browser-based (web app at codinit.dev): No setup at all. Visit the site, connect your LLM API key, start building. This uses E2B cloud sandboxes for code execution. It works immediately but means your code runs on E2B’s infrastructure, and you’ll need an E2B API key for anything beyond the free tier [README][3].
Self-hosted:
The README setup requires git, Node.js, an E2B API key, and at least one LLM provider API key [README]. The .env.local file has roughly 25 environment variables covering LLM providers, Supabase for auth, Vercel KV for rate limiting, PostHog for analytics, and GitHub OAuth [README]. That’s a significant configuration surface for a “simple” self-hosted tool.
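A preflight check makes that configuration surface tangible. The variable names below are illustrative stand-ins grouped by the services the README mentions (the authoritative list is the repo's own environment template), but the shape of the problem is accurate: several independent services, each with its own keys.

```python
import os

# Illustrative subset of a CodinIT-style .env.local, grouped by service.
# These names are stand-ins; consult the repo's .env.example for the real list.
REQUIRED_VARS = {
    "LLM provider": ["OPENAI_API_KEY"],   # or any one of the 19+ providers
    "E2B sandbox": ["E2B_API_KEY"],
    "Supabase": ["NEXT_PUBLIC_SUPABASE_URL", "NEXT_PUBLIC_SUPABASE_ANON_KEY"],
}


def missing_vars(env: dict) -> list:
    """Return every required variable that is absent or empty."""
    return [v for group in REQUIRED_VARS.values() for v in group if not env.get(v)]


gaps = missing_vars(dict(os.environ))
print(f"{len(gaps)} required variable(s) missing from the current environment")
```

A five-minute script like this is worth writing before the first `npm run dev`, because a missing key in any one of the services tends to surface as an opaque runtime error rather than a clear startup message.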
What can go sideways:
- The E2B dependency is non-trivial. CodinIT uses E2B to run the generated code safely — without it, you get WebContainer-only execution which has limitations on what Node packages can run. Setting up your own E2B account, getting an API key, and managing that billing is an extra step most Lovable users won’t expect [README].
- The project has two repos (gerome-elassaad/codingit for web, codinit-dev/codinit-dev for desktop). The README on the web repo says to use the newer desktop app. Which version is actively maintained and how the two relate is not clearly documented [README].
- At 167 GitHub stars, this is early-stage software. Expect rough edges, breaking changes, and limited community troubleshooting resources if something doesn’t work.
Realistic time estimate for a technical user following the README: 1–2 hours including configuring all the environment variables and getting a working development server. For non-technical founders: this is not a “follow a tutorial” install — the 25-variable .env.local alone will require reading documentation for each service.
Pros and Cons
Pros
- Apache-2.0 license. More permissive than most alternatives. You can self-host, fork, white-label, and embed in commercial products without negotiating a license [profile]. Lovable, Bolt, and v0 are all closed-source.
- True multi-model flexibility. 19+ providers including Ollama means you can route generation through a fully local model — no prompts leaving your network [README][1]. No other major AI app builder in this category offers this.
- WebContainer + E2B hybrid. In-browser execution for lightweight apps, E2B cloud sandboxes for heavier work. You don’t need Docker or a VPS to run generated code [1][3].
- Multi-framework output. Not locked to React. Next.js, Vue, Svelte, Python/Streamlit, Gradio — the output language matches your preference or your team’s skill set [README][3].
- Community remix system. Projects can be shared and remixed, which creates a growing template library [homepage].
- Prompt enhancer. The explicit “improve prompt” button before generation is a useful UX detail that competitors often skip [3].
- MCP support. Connect your own MCP templates or use ready-made ones [3].
- Free right now. The Pro tier doesn’t exist yet, which means everything is free while the project builds momentum [3].
Cons
- 167 GitHub stars. This is early-stage software by any measure. Compare to Bolt.new (tens of thousands of stars), Lovable (widely adopted), or n8n (100K+ stars). The community, documentation depth, and troubleshooting resources all reflect the early stage [profile].
- GitHub README calls it a “Demo.” The exact title is “CodinIT.dev Demo | Open-source, AI app builder prototype” [README]. That’s not a marketing word choice — it means the current codebase is explicitly positioned as prototype-quality.
- E2B is a paid dependency for cloud execution. E2B handles sandbox code execution; without it you’re limited to WebContainers. CodinIT is open source, but the cloud code execution backend isn’t — you’re adding another vendor relationship and bill [README][3].
- No independent user reviews. SourceForge shows no CodinIT reviews [2]. Onei.ai has no user ratings [3]. The primary comparison content is written by CodinIT themselves [1]. For a tool you’re betting your product on, “no one has written about their experience” is a meaningful data point.
- Pro plan doesn’t exist. The $89.99/year tier is announced but “COMING SOON” [3]. If you need team collaboration or private hosting features today, you don’t have an upgrade path.
- Split repository situation. Two separate GitHub repos (web + desktop) with the README directing users away from the primary repo. Tracking which codebase is active and how they relate requires more investigation than it should [README].
- Complex self-hosting setup. 25 environment variables covering 6+ external services (Supabase, Vercel KV, PostHog, E2B, GitHub OAuth, LLM providers) is not a beginner-friendly install [README].
- No documented REST API. If you want to trigger generation programmatically or integrate CodinIT into a CI/CD pipeline, there’s no documented API surface [profile].
Who should use this / who shouldn’t
Use CodinIT if:
- You’re an indie hacker or developer comfortable with Node.js tooling who wants a free, open-source alternative to Lovable for experimenting with AI-generated apps.
- Local LLM execution is a hard requirement — you’re building something sensitive and can’t route prompts through a third-party cloud [1][3].
- You want to try AI app generation without paying for Lovable credits while evaluating whether the category is useful to you.
- You’re comfortable with early-stage software and willing to contribute fixes when things break.
Skip it (wait or pick Lovable instead) if:
- You need to ship something to real users soon and can’t afford the rough edges of a 167-star prototype.
- You’re non-technical and expect a polished, documented onboarding flow.
- Team collaboration is a current requirement — the Pro plan with team features doesn’t exist yet [3].
Skip it (pick Bolt.new instead) if:
- You want the same “prompt-to-app” concept with a larger community, more tutorials, and a tool that has proven production usage behind it.
Skip it (pick v0.dev instead) if:
- You specifically want UI component generation that drops into an existing Next.js project, rather than full app generation from scratch.
Skip it (pick a proper framework) if:
- You’re building something with non-trivial business logic. AI-generated full-stack apps in this category still require significant hand-editing for anything beyond CRUD demos, and CodinIT is no different.
Alternatives worth considering
- Lovable.dev — the direct competitor CodinIT positions against. Polished, production-proven, active community, but closed-source and credit-based pricing. The obvious choice if you don’t care about self-hosting [1].
- Bolt.new (StackBlitz) — also uses WebContainers for in-browser execution, larger community, well-documented. Closed-source cloud product [homepage].
- Vercel v0 — focuses on UI component generation for Next.js specifically, not full app scaffolding. Better for iterating on existing codebases [homepage].
- Hostinger Horizons — newer entrant, similar “describe it, get an app” premise, bundled with Hostinger hosting [2].
- Retool — for internal tools specifically, not public-facing apps. More mature, more expensive, better for teams [2].
- Cursor / GitHub Copilot Workspace — if you want AI-assisted development inside a real IDE rather than a generated scaffold, these are a different category but often more practical for anything production.
For a non-technical founder evaluating AI app builders: the realistic shortlist today is Lovable vs Bolt.new. CodinIT is worth watching but not ready to be the primary tool for anything you’re shipping to users.
Bottom line
CodinIT.dev is an honest open-source attempt to build what Lovable and Bolt.new built, but without the vendor lock-in, the credit billing, and the closed-source constraints. The technical architecture is sound — WebContainers for in-browser execution, E2B for cloud sandboxes, 19+ LLM providers, Apache-2.0 license — and the local LLM support is genuinely differentiated. But at 167 stars, no independent user reviews, a README that calls itself a “Demo,” a paid tier that doesn’t exist yet, and a repository split that creates confusion about which version to use, this is a project to watch rather than a tool to bet on. If you’re an indie hacker who wants to poke at AI app generation for free and can tolerate rough edges, clone the repo and try it. If you’re a non-technical founder who needs to ship something real, give it six months and check the star count again.
Sources
- [1] CodinIT.dev Official Docs — “Lovable vs CodinIT — Comparison”. https://codinit.dev/docs/comparisons/lovable-vs-codinit
- [2] SourceForge — “CodinIT.dev Reviews in 2026”. https://sourceforge.net/software/product/CodinIT.dev/
- [3] Onei.ai — “CodinIT.dev — AI Tool Details & Review”. https://onei.ai/apps/codinit.dev
Primary sources:
- GitHub repository and README: https://github.com/gerome-elassaad/codingit (167 stars, Apache-2.0 license)
- Desktop app repository: https://github.com/codinit-dev/codinit-dev
- Official website: https://codinit.dev
- E2B SDK (execution dependency): https://github.com/e2b-dev/code-interpreter