AI Needs Constraints to Boost Productivity

With the tech industry captivated by the promise of generative AI, many leaders see it as the ultimate solution for accelerating software delivery. However, our expert today, Dominic Jainy, argues that this view is dangerously simplistic. With deep expertise in AI and a pragmatic focus on enterprise systems, he contends that without the right guardrails, AI could drown organizations in a sea of complexity. The real path to productivity, he suggests, isn’t about writing more code faster; it’s about building smarter, more constrained systems through platform engineering. This conversation explores the crucial difference between raw production and true productivity, the hidden costs of unconstrained AI, and how “golden paths” can turn AI from a chaos engine into a powerful, reliable tool.

Many leaders see AI as a way to ship more code faster. How does this focus on “production” differ from true “productivity,” and what specific long-term costs related to security and maintenance are being overlooked in this rush for volume?

That’s the fundamental error so many are making right now; they’re confusing the act of production with the goal of productivity. If your only metric is shipping code, then yes, AI is a miracle. But in any real-world enterprise, code isn’t an isolated asset; it’s a long-term liability. Every new service, every dependency, every clever abstraction you generate adds to your operational surface area. You now have to secure it, observe it, patch it, and integrate it. The hidden cost is that AI makes creating this surface area virtually free, so speed today is purchased with fragility tomorrow. We’re encouraging a mindset where teams celebrate their velocity right up until the system has to be audited or handed off, and that’s when the supposed productivity win reappears on the balance sheet as a crushing operational cost.

We see conflicting data on AI’s impact: some studies report faster task completion, while others show experienced developers slowing down. What key environmental factors determine whether AI acts as a turbocharger or an anchor for a team? Please share some examples.

The ambiguity in those studies is the entire point. The METR finding that experienced developers took 19% longer on complex tasks and the GitHub report of faster completion on isolated tasks aren’t contradictory; they’re two sides of the same coin. The outcome depends entirely on the environment you put the AI into. It’s a systems claim, not a tool claim. If you have a healthy, well-architected system with clear standards, AI can absolutely be a turbocharger, handling boilerplate and letting developers focus on the core logic. But if you drop that same AI into a fragmented, chaotic environment—what I call sprawl—it becomes an anchor. It accelerates bad decisions, generates inconsistent code, and compounds the existing chaos because it removes the natural friction that used to slow people down.

AI has been described as making complexity cheap, allowing a junior engineer to generate sprawling services with plausible but poorly understood code. Can you walk us through how this initial speed boost can create a massive integration and maintenance tax down the line?

It’s a deceptively dangerous cycle. In the past, a junior engineer’s ability to create architectural chaos was limited by the time it took to actually implement their ideas. Now, an AI assistant can scaffold an entire suite of microservices in minutes. The code looks plausible, the unit tests might even pass, and the team feels an incredible sense of speed. The problem is, the engineer often doesn’t grasp the underlying complexity of what they’ve just created. The real tax comes due later. It appears when that sprawling system needs to be scaled, or when a security team has to audit it, or when another team has to integrate with it. Suddenly, no one understands how the pieces cohere, and what felt like a productivity win becomes an enormous integration and maintenance burden that slows the entire organization to a crawl.

The DORA metrics and the SPACE framework offer alternatives to measuring productivity by lines of code. Could you explain the “time to compliant deployment” metric and describe how a manager could implement it to get a more honest picture of their team’s performance?

These frameworks are a necessary reality check because they force us to look beyond volume. But if I had to give a manager one metric to force honesty in the AI era, it would be “time to compliant deployment.” It’s brutally simple and effective. You start the clock the moment a developer says their work is “ready for review” and you stop it only when that software is actually running in production, having passed every single security control, observability check, and policy gate. This metric cuts through all the noise. It doesn’t care how fast the code was written; it measures the health of your entire delivery system. If AI-generated code consistently gets stuck in security reviews or fails compliance checks, this number will expose that friction immediately, giving you a much truer picture of performance than any measure of developer “speed.”
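The metric described above is simple enough to compute from delivery events. Here is a minimal sketch in Python, assuming a hypothetical event log with `ready_for_review` and `compliant_deploy` timestamps per change; the field names and data are illustrative, not tied to any particular CI/CD tool.

```python
from datetime import datetime
from statistics import median

# Hypothetical event log: when each change was declared "ready for review"
# and when it actually reached production with every policy gate passed.
events = [
    {"change_id": "PR-101", "ready_for_review": "2025-01-06T09:00",
     "compliant_deploy": "2025-01-09T15:30"},
    {"change_id": "PR-102", "ready_for_review": "2025-01-07T10:00",
     "compliant_deploy": "2025-01-07T12:45"},
]

def hours_to_compliant_deploy(event: dict) -> float:
    """Clock starts at 'ready for review' and stops only at a fully
    compliant production deployment, as described in the interview."""
    start = datetime.fromisoformat(event["ready_for_review"])
    end = datetime.fromisoformat(event["compliant_deploy"])
    return (end - start).total_seconds() / 3600

durations = [hours_to_compliant_deploy(e) for e in events]
print(f"median time to compliant deployment: {median(durations):.1f} h")
```

Tracking the median (rather than the mean) keeps one pathological review cycle from masking the typical experience, and a change that spends a week stuck in security review shows up here even if the code itself was generated in minutes.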

You suggest that as AI writes more code, engineers must move up the abstraction ladder to focus on architecture and integration. What new skills does this shift demand, and what are the risks if a team lacks this deep systems-level understanding?

This shift is often framed as a promotion—engineers become architects—but it’s a tremendous burden if they aren’t prepared. As Gergely Orosz points out, the job moves from writing to reviewing and integrating. This demands a profound level of systems thinking that is not evenly distributed across most teams. The risk is that you cheapen creation but make coordination incredibly expensive. If every team uses AI to generate their own bespoke solutions without a shared architectural vision, you end up with a fragile patchwork quilt of technologies. When it’s time to integrate, secure, and operate that mess, the organization grinds to a halt. The whole system becomes incoherent because no one has the deep, cross-cutting knowledge to make it work as a whole.

The concept of a “golden path” or “paved road” is presented as a solution. Can you contrast two scenarios: one where a developer uses an unconstrained AI, and another where the AI is constrained by a platform? What are the step-by-step differences in their workflow?

Let’s imagine two developers building a new microservice. The first developer uses an unconstrained AI. They ask it to build the service, and the AI scrapes the public internet, grabs a popular-but-random framework, and spits out code. The developer feels incredibly fast for about ten minutes. Then they submit it and spend the next week in a painful back-and-forth with the security team because the code complies with zero internal policies. Now, consider the second developer. They’re on a “golden path.” They make the same request, but their AI is constrained by the company’s internal platform. It generates a service using pre-approved templates, complete with the company’s standard authentication libraries, logging sidecars, and deployment manifests. The code is predictable, even boring. But it’s compliant by default, and it deploys to production in ten minutes. The productivity win here didn’t come from the AI writing code; it came from the platform setting useful, productive boundaries for the AI.
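The contrast between those two workflows can be sketched as a simple policy gate. The following Python is a toy illustration, not a real platform API: the template names, component names, and `validate_scaffold` function are all hypothetical, standing in for whatever checks an internal platform would actually enforce.

```python
# Pre-approved "golden path" templates and the platform components every
# service must ship with. All names here are illustrative placeholders.
APPROVED_TEMPLATES = {"python-service-v2", "go-service-v1"}
REQUIRED_COMPONENTS = {"auth-lib", "logging-sidecar", "deploy-manifest"}

def validate_scaffold(scaffold: dict) -> list[str]:
    """Return policy violations for an AI-generated scaffold.
    An empty list means the service is compliant by default."""
    violations = []
    if scaffold.get("template") not in APPROVED_TEMPLATES:
        violations.append(
            f"template {scaffold.get('template')!r} is not on the paved road")
    missing = REQUIRED_COMPONENTS - set(scaffold.get("components", []))
    for component in sorted(missing):
        violations.append(f"missing required component: {component}")
    return violations

# Unconstrained AI: a popular-but-random framework, no platform components.
unconstrained = {"template": "random-framework", "components": []}
print(validate_scaffold(unconstrained))  # several violations to resolve

# Golden path: pre-approved template with the standard components baked in.
golden = {"template": "python-service-v2",
          "components": ["auth-lib", "logging-sidecar", "deploy-manifest"]}
print(validate_scaffold(golden))  # [] — compliant by default
```

The point of the sketch is that the second developer never sees these violations at all: the constrained AI generates from the approved template, so the gate passes trivially and the week of security back-and-forth never happens.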

What is your forecast for developer productivity over the next five years?

My forecast is that the industry is about to split into two distinct camps. The first camp will chase the illusion of productivity by giving developers unconstrained AI tools. They will see a short-term burst of activity followed by a massive hangover of integration debt, security incidents, and operational chaos, just as Forrester predicts with architects spending 90% of their time on glue work. The second, more successful camp will realize that AI is not a magic bullet but an amplifier. They will invest heavily in platform engineering, creating “golden paths” that constrain AI to be a productive, reliable partner. The most productive developers of the next five years won’t be the ones with the most freedom; they will be the ones with the best constraints, allowing them to focus on solving business problems instead of wrestling with self-inflicted complexity. True productivity will be a direct outcome of a mature platform strategy.
