How to safely bring vibe coding to the enterprise [Q&A]

Vibe coding has surged in popularity in the last year. Tools like Lovable, Replit, and v0 are giving anyone the ability to generate apps without writing a single line of code. The experience is fast, intuitive, and surprisingly powerful, fueling a wave of innovation across both consumer and enterprise settings.
But as companies rush to adopt these tools, a new challenge has emerged. Employees are beginning to build their own AI-powered applications using whatever platforms they can find, often connecting them to live business data. It is a trend some experts are calling the rise of “shadow AI,” where software is created outside of established security and governance frameworks.
To understand the challenges and opportunities of bringing AI-native development safely into the enterprise, BetaNews sat down with Saksham Sachdev, Director of Engineering at Superblocks.
BN: I saw that Superblocks recently launched its new product, Clark, which helps customers build internal apps using AI. What do you see as the biggest challenge in introducing an AI-native tool like this into the enterprise?
SS: Most of the difficulty lies in capturing business context, not in the model itself. Enterprises run on proprietary APIs, private databases, and idiosyncratic workflows. Getting an LLM to generate useful software in that environment requires deep integration with internal systems, reliable schema introspection, and strict control over execution. The challenge is balancing the creative latitude of an AI agent with the deterministic, auditable behavior enterprises require.
BN: What are the biggest risks with using consumer vibe coding tools, like Lovable or Bolt, in the enterprise?
SS: Consumer-grade tools are actually great for things like prototypes or personal projects. But in production, they don’t have the infrastructure or compliance controls an enterprise environment requires: audit trails, change management, and security isolation are missing. For businesses, those gaps become liabilities, where sensitive data can leak and unvetted code can run against production systems. Without role-based access control, approval chains, and consistent observability, “vibe coding” quickly turns into “shadow AI.”
BN: What new governance challenges arise when non-technical users can build or modify workflows with AI?
SS: You shift from managing code to managing intent. Governance needs to move upstream. The system needs to know:
- Who is allowed to describe applications and workflows?
- How is the generated logic reviewed before it runs?
Enterprises need to create a sandbox for AI agents that is fully equipped with tools and context, yet still protects non-technical users from errors and unintended consequences. The challenge is figuring out where in the stack that sandboxing should be applied.
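To make that idea concrete, here is a minimal sketch of what "governing intent" could look like in code: a policy that gates who may describe an app at all, and flags generated logic for review before it runs. The role names, action names, and functions are illustrative assumptions, not Superblocks' implementation.

```python
from dataclasses import dataclass

# Hypothetical policy model: which roles may describe (generate) apps,
# and which generated actions must be reviewed before they run.
ALLOWED_BUILDER_ROLES = {"engineer", "analyst"}
ACTIONS_REQUIRING_REVIEW = {"db_write", "external_api_call", "schema_change"}

@dataclass
class GeneratedWorkflow:
    author_role: str
    actions: list[str]  # e.g. ["db_read", "db_write"]

def can_generate(role: str) -> bool:
    """Governance moves upstream: gate who is allowed to describe apps at all."""
    return role in ALLOWED_BUILDER_ROLES

def needs_review(workflow: GeneratedWorkflow) -> bool:
    """Generated logic that touches sensitive actions is held for human review."""
    return any(a in ACTIONS_REQUIRING_REVIEW for a in workflow.actions)

if __name__ == "__main__":
    wf = GeneratedWorkflow(author_role="analyst", actions=["db_read", "db_write"])
    print(can_generate(wf.author_role))  # True: analysts may describe workflows
    print(needs_review(wf))              # True: db_write routes to an approval queue
```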
BN: How do you decide what parts of the workflow should be handled by LLMs versus traditional logic or APIs?
SS: Most of the boilerplate, authentication, and mission-critical logic can be handled entirely by traditional APIs. LLMs simply build on top of this foundation and are used for the business-specific logic of each application.
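A rough sketch of that division of labor, under the assumption that deterministic code owns authentication and data access while the model only fills in the app-specific step (all names and the stubbed LLM call are hypothetical):

```python
import hmac, hashlib

SECRET = b"example-secret"  # placeholder; a real system would use a secrets manager

def authenticate(token: str, signature: str) -> bool:
    """Mission-critical logic stays deterministic, testable, and auditable."""
    expected = hmac.new(SECRET, token.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

def fetch_orders(customer_id: int) -> list[dict]:
    """Data access goes through a vetted, traditional API, not generated code."""
    return [{"id": 1, "customer_id": customer_id, "total": 120.0}]

def summarize_orders(orders: list[dict]) -> str:
    """The one piece delegated to an LLM: business-specific logic per application.
    Stubbed here; a real call would go to a model provider's API."""
    return f"{len(orders)} orders, total ${sum(o['total'] for o in orders):.2f}"
```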
BN: What guardrails or approval workflows are needed before deployment?
SS: Generated apps should pass through the same controls as human-written ones. They need static validation (linting, type checking, etc.), dependency scans, environment gating, and human review. The difference is speed. AI lets users build faster than governance can catch up. That means automated guardrails need to be built in from the start: every AI output must produce an audit log and a changelog before deployment is even an option.
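One way such a deployment gate might be wired up is sketched below: static checks run via standard linters, and deploys are refused unless an audit record, a changelog, and a human approval all exist. The file layout, function names, and choice of ruff/mypy as example tools are assumptions for illustration.

```python
import json, subprocess, time
from pathlib import Path

def run_static_checks(app_dir: str) -> bool:
    """Same controls as human-written code: lint and type-check (tools assumed installed)."""
    for cmd in (["ruff", "check", app_dir], ["mypy", app_dir]):
        if subprocess.run(cmd).returncode != 0:
            return False
    return True

def write_audit_record(app_dir: str, author: str, summary: str) -> None:
    """Every AI output appends an audit entry before deployment is even possible."""
    record = {"author": author, "summary": summary, "timestamp": time.time()}
    with open(Path(app_dir) / "audit.log", "a") as f:
        f.write(json.dumps(record) + "\n")

def can_deploy(app_dir: str, approved_by_human: bool) -> bool:
    """Deployment requires the audit log, a changelog, human sign-off, and passing checks."""
    audit_exists = (Path(app_dir) / "audit.log").exists()
    changelog_exists = (Path(app_dir) / "CHANGELOG.md").exists()
    return audit_exists and changelog_exists and approved_by_human and run_static_checks(app_dir)
```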
BN: Where do you draw the line between autonomy and human oversight in these AI-native systems?
SS: AI can propose, assemble, and test, but never deploy without review. Enterprises must retain human checkpoints for actions with irreversible consequences, like schema migrations, permission changes, and production writes.
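A minimal sketch of such a checkpoint, assuming a simple allowlist of irreversible action types (the names and the execute function are hypothetical): the agent can propose anything, but irreversible actions are blocked until a human has approved them.

```python
# Hypothetical checkpoint: reversible actions proceed automatically,
# irreversible ones are held for explicit human approval.
IRREVERSIBLE = {"schema_migration", "permission_change", "production_write"}

def execute(action: str, payload: dict, approved: bool = False) -> str:
    if action in IRREVERSIBLE and not approved:
        return f"BLOCKED: '{action}' requires human approval before it runs"
    # Safe actions (reads, staging deploys, tests) run without a checkpoint.
    return f"EXECUTED: {action} with {payload}"

print(execute("production_write", {"table": "orders"}))        # blocked
print(execute("production_write", {"table": "orders"}, True))  # runs after review
```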
BN: When you look ahead a year or two, what do you think will separate the companies that successfully operationalize AI across the enterprise from those that just experiment with it?
SS: In the next few years, the companies that truly operationalize AI will be the ones whose leadership and engineering teams deeply understand both its potential and its limits. AI-driven workflows must be designed around what the technology can reliably do today, not what it might do eventually. Success will come from teams that move fast, experiment broadly, and pair technical rigor with creative, product-minded thinking to uncover real, durable use cases.
As AI-powered app generation becomes part of everyday work, enterprises will need to balance accessibility with accountability. The tools are evolving quickly, but the principles of good engineering remain the same. The companies that succeed will be the ones that move fast, build responsibly, and never lose sight of the safeguards that keep innovation sustainable.