Is IBM i Ready for AI Coding Without Git-Native DevOps?

Lead: The Moment AI Met the Green Screen

Across busy IBM i shops, a quiet shock rippled as developers watched AI assistants generate usable RPG, CL, and DDS in minutes: code that compiled, ran, and even passed early tests with little of the handholding long assumed necessary on a platform many considered immune to such leaps. That speed thrilled management but raised a sharper question on the floor: could legacy delivery practices keep up without turning "faster" into "fragile," especially while PDM menus and manual promotions still held the release keys?

At COMMON POWERUp, AI demos no longer felt like theater. IBM’s “Bob,” Anthropic’s Claude, and OpenAI’s ChatGPT showed they could draft refactors, create test scaffolds, and convert DDS to DDL with confidence. The spotlight, however, shifted to the delivery backbone. “The code isn’t the blocker anymore,” an IBM i architect remarked. “The pipeline is.”

Nut Graph: Why This Shift Mattered

AI reached practical utility on IBM i just as many teams continued to treat the host as the single source of truth through source physical files and member-based edits. That tension made the gains precarious. Local development—where AI pair programming thrives—often collided with environments that mirrored code to Git nightly, treated pull requests as ceremony, and promoted via scripts maintained by folklore. The consequences were immediate. Change volume surged, but reviews did not scale. Security checks, SQL performance tuning, and boundary tests slipped through manual nets. Without automated gates and auditable pipelines, the risk profile tilted in the wrong direction. As one operations lead put it, “AI amplified what was already brittle.”

Inside the Workflow Clash

Veteran developers prized library discipline and years of muscle memory in PDM, RDi’s Remote System Explorer, and Code for IBM i. New hires expected local clones, branches, and PRs backed by CI. The culture gap widened when AI entered the room. “AI made branching nonnegotiable,” a team lead said. “Otherwise, how do you isolate and reason about thousands of generated lines?” Git’s role became the fault line. In many shops, repositories served as backups, not the system of record. Symptoms were easy to spot: nightly exports, minimal branching, and releases triggered by emails or spreadsheets. In contrast, Git-native teams ran feature branches, enforced PR reviews, and kept rollback targeted and fast. The difference showed up in audits, too—pipelines produced a trail; ad hoc scripts produced questions.
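The branch-and-rollback contrast described above can be sketched with plain Git commands. This is a minimal illustration, not any shop's actual workflow: the repository, file name, and branch names are invented, and it assumes a reasonably recent Git (2.28+ for `init -b`).

```shell
# Hedged sketch: isolate a change on a feature branch, merge it, then
# roll it back with a targeted revert that keeps history auditable.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q -b main                      # requires Git 2.28+
git config user.email "dev@example.com"
git config user.name "Dev"

echo "original" > ORDERS.RPGLE
git add ORDERS.RPGLE
git commit -qm "baseline: ORDERS program"

# Work lands on an isolated branch; in practice it merges via a PR.
git switch -q -c feature/ai-refactor
echo "refactored" > ORDERS.RPGLE
git add ORDERS.RPGLE
git commit -qm "refactor: AI-assisted cleanup of ORDERS"
git switch -q main
git merge -q --no-ff feature/ai-refactor -m "merge: AI refactor"

# Targeted rollback: revert only the merge commit (-m 1 = mainline),
# leaving a recorded trail instead of rewriting history.
git revert --no-edit -m 1 HEAD
cat ORDERS.RPGLE                         # back to "original"
```

The point of `git revert` over an ad hoc restore script is exactly the audit trail the article describes: the rollback is itself a commit, linked to the change it undoes.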

The Pivot: From Host-Centric To Git-Native

Eradani argued that the path forward began on developer machines. Local clones unlocked AI-assisted iteration, quick tests, and safe experiments. Standardized sync and packaging then moved changes to IBM i for builds and deploys, preserving the platform’s strengths while freeing teams from host-only editing. “Local-first with IBM i-aware automation changed the conversation,” an engineer noted. “We stopped debating tools and started debating code.”

The second move was governance. CI/CD pipelines added static analysis with tools like SonarQube, syntax checks, unit and integration tests, and environment-specific approvals. Policy gates enforced review coverage and quality thresholds regardless of authoring source—human or AI. Eradani’s customers reported fewer manual promotion steps, higher review participation, and faster, commit-linked rollbacks. “It wasn’t about distrusting AI,” a security lead said. “It was about trusting the process.”

Field Notes: Voices, Data, and Turning Points

User groups shared that AI experiments clustered around modernization work—SQL conversions, service wrappers, and refactors that had languished on backlogs. Early adopters cited notable time-to-code gains, especially for repetitive patterns and test scaffolds. Yet the most significant wins surfaced after pipelines matured. “The day we turned on branch protections, our defect escape rate dropped within a sprint,” one manager said.

Blended environments proved workable when tools respected existing structures. Teams kept PDM and RDi for certain edits while contributing through branches with shared build scripts and the same deploy engine. Integrations with GitHub, GitLab, Azure DevOps, or Bitbucket connected to Jira or ServiceNow for change control, and releases became artifacts with owners, not events with mysteries. Eradani’s iDeploy surfaced as a bridge for IBM i-aware releases, keeping audits clean and rollback precise.

The Roadmap: From Inventory To AI at Scale

Progress followed a phased cadence. First, teams mapped libraries, source files, members, dependencies, and artifact flows, then defined “done” and rollback criteria. Next, Git became the system of record—one repository per application, branching standards, required PRs, and named code owners. Automation followed: static analysis, tests, and environment-specific deploy stages with approvals and logs. Only then did AI move from experiments to policy-governed branches, with metadata tagging AI-authored commits for traceability.

Training sealed the shift. Short workshops on branching etiquette, PR reviews, and rollback drills gave veterans confidence and helped new hires understand IBM i constraints. Shared templates—for apps, pipelines, and release notes—created consistent outcomes across different authoring tools. “Parity was the point,” a delivery director said. “No matter where code started, it ended up accountable the same way.”

Conclusion: The New Standard Took Shape

As AI matured on IBM i, the decisive factor was delivery discipline rather than model wizardry: teams that advanced from host-centric practices to Git-native pipelines gained speed without trading away safety. The clearest next steps involved establishing Git as the system of record, enforcing branch protections and PR reviews, automating analysis and tests, and channeling AI changes through the same governed path. With that foundation in place, modernization accelerated, audits quieted, and releases moved from anxious events to routine operations. At POWERUp, the takeaway was settled: AI coding on IBM i worked best when pipelines did the heavy lifting and every change told its own story from commit to production.
