Intent-Driven Development: Human Gates in Agentic Flows for Enterprise AI Control



Agentic flows based on Intent-Driven Development (IDD) address the biggest barrier to enterprise AI adoption: managing risk while maintaining control.

The data is stark. McKinsey’s 2025 Global Survey on AI reveals a troubling gap between adoption and impact:
88% of organizations now use AI in at least one business function – but nearly two-thirds say their organizations have not yet begun scaling AI across the enterprise. Even more revealing: only 39% report EBIT impact at the enterprise level, and among those that do, most attribute less than 5% of their EBIT to AI.

The gap between AI adoption and AI impact has never been wider.

Even agentic AI adoption shows the same pattern: 62% of survey respondents say their organizations are at least experimenting with AI agents, but in any individual business function, no more than 10% of respondents report scaling AI agents.

The bottleneck isn’t technology – it’s organizational transformation capability. Most organizations are still trapped in the pilot loop, with scattered wins, afraid of losing control, unsure where humans fit, and lacking a framework for gradual adoption with appropriate governance.

Enterprise leaders tell me their real concerns:

  • “What if the AI makes a critical mistake?”
  • “How do we maintain compliance?”
  • “Who’s accountable when things go wrong?”
  • “We can’t just trust a black box with production.”

You’re right. You shouldn’t.

That’s the real challenge. Not “can AI write code?” but “how do we deliver AI projects without surrendering accountability, compliance, and control?”

After sharing the complete agentic flow in my last post, I received dozens of messages with variations of the same concern: “This sounds like agents doing everything. Are we automating ourselves out of existence?”

No. You’re gaining a framework for focusing human intent where it matters most while delegating routine verification to agents you control.

IDD-based agentic flows solve this by making human-in-the-loop explicit through what I call risk dials. At every decision point in the agentic flow, you set a dial that controls how much autonomy the agent has:

🔴 High Control = Always Human Review
🟡 Medium Control = Human Spot-Check
🟢 Low Control = Human Monitors

This isn’t about whether to use AI – it’s about how to use it responsibly in contexts where mistakes have real consequences. McKinsey’s research shows high performers (~6%) pull ahead by treating AI as transformation: redesigning workflows, showing visible leadership ownership, and instituting human-in-the-loop governance.

IDD-based agentic flows give you what those high performers have: the control framework to join the 6%.

Let me show you how this actually works.

The Control Framework: Your Risk Dials

At every step where agents can act, you decide how much autonomy to grant:

🔴 High Control = Always Human Review

  • Human must review and approve before proceeding
  • Agent output is a suggestion, not a decision
  • Nothing happens without explicit human sign-off
  • This is where everyone should start

🟡 Medium Control = Human Spot-Check

  • Agent proceeds, but human reviews sample
  • Human can override or rollback at any time
  • You’re delegating, not abdicating

🟢 Low Control = Human Monitors

  • Agent proceeds autonomously
  • Human receives notification
  • Human can intervene whenever needed
  • You only reach this after proving trust

You set these dials – not the AI, not the vendors. You.
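To make the dials concrete, here is a minimal sketch of how they might be encoded in an agentic pipeline. The names (`RiskDial`, `requires_approval`) are illustrative, not part of any IDD tooling:

```python
from enum import Enum


class RiskDial(Enum):
    """Autonomy level granted to an agent at one decision point."""
    HIGH_CONTROL = "always_human_review"    # 🔴 agent output is a suggestion only
    MEDIUM_CONTROL = "human_spot_check"     # 🟡 agent proceeds, humans sample outputs
    LOW_CONTROL = "human_monitors"          # 🟢 agent proceeds, humans are notified


def requires_approval(dial: RiskDial) -> bool:
    """Only 🔴 blocks the pipeline on explicit human sign-off."""
    return dial is RiskDial.HIGH_CONTROL
```

The key design point: the dial is data the organization owns, not behavior the agent decides for itself.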

The IDD Flow – With Your Control Points

1. UCD discovers user needs
  [Always Human-Led – No dial, you decide what to build]

2. DDD models domain
  [Always Human-Led – No dial, you define your business]

3. IDD specifies intent + AI generates tests
  🔴 Start: Human validates every test
  🟡 Mature: Human spot-checks coverage
  🟢 Low-risk: Human monitors report

4. BDD scenarios (AI generates)
  🔴 Start: Stakeholders validate all scenarios
  🟡 Mature: Stakeholders review summary
  🟢 Low-risk: Stakeholders notified only

5. AI implements code
  [No human gate here – validation comes next]

5.5. Code Review
  🔴 Always: Security/Payments/PII/Core infrastructure
  🟡 Mature: AI self-reviews, human spot-checks
  🟢 Low-risk: Styling/Copy changes, human monitors

5.6. Security Review
  🔴 Always: Production systems (this dial rarely moves)

6. Automated validation
  [Fully automated – tests you defined in step 3]

7. Agentic exploratory testing
  🔴 Start: Human reviews all findings
  🟡 Mature: Human reviews critical findings
  🟢 Low-risk: Human reviews summary only

8. UCD validation
  🔴 Start: User testing on all releases
  🟡 Mature: User testing on major features

9. Deployment
  🔴 Start: Human approves every production deploy
  🟡 Mature: Human approves via notification
  🟢 Low-risk: Human can intervene/rollback

Notice what has no dial – what you always control:

  • What to build (business intent)
  • Who it’s for (user needs)
  • How to model it (domain design)
  • Whether to build it (ethical decisions)
  • Final accountability (your name, your responsibility)

Agents don’t make strategic decisions. You do.
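The flow above can be sketched as a dial configuration – a sketch under stated assumptions, with invented step names and an invented `GATES` mapping. Steps set to `None` are always human-led and have no dial at all:

```python
# Starting (cautious) configuration: every delegable step begins at 🔴 "high".
# None means the step is always human-led – there is no dial to loosen.
GATES = {
    "ucd_discovery": None,           # you decide what to build
    "ddd_modeling": None,            # you define your business
    "idd_test_generation": "high",   # 🔴 start: human validates every test
    "bdd_scenarios": "high",         # 🔴 start: stakeholders validate all scenarios
    "code_review": "high",           # 🔴 always for security/payments/PII/core infra
    "security_review": "high",       # 🔴 this dial rarely moves
    "exploratory_testing": "high",   # 🔴 start: human reviews all findings
    "deployment": "high",            # 🔴 start: human approves every deploy
}


def has_dial(step: str) -> bool:
    """Human-led steps cannot be delegated; everything else starts at 🔴."""
    return GATES.get(step) is not None
```

Loosening any value in this mapping is an explicit, auditable configuration change made by a human – never something an agent adjusts for itself.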

What Never Leaves Your Control

Some decisions never move off 🔴, regardless of maturity:

1. Strategic Intent

  • What problems should we solve?
  • What does success mean?
  • What are we optimizing for?

→ You decide, always

2. Ethical Boundaries

  • Should we build this?
  • Who could be harmed?
  • What are the implications?

→ You own, always

3. Security & Compliance

  • Production system security
  • Customer data handling
  • Regulatory compliance

→ You’re accountable, always

4. Final Production Approval

  • Is this ready for customers?
  • What’s the rollback plan?
  • What could go wrong?

→ You authorize, always

These align with IDD’s core questions – defining intent, success criteria, validation approach, constraints, and ethics – ensuring humans own the strategic decisions while agents handle implementation.

The AI doesn’t get a vote on these. Ever.

The Core Message for Enterprise

You don’t have to choose between:

Speed OR control
AI OR humans
Innovation OR safety

IDD-based agentic flows, built on proven, existing methodologies such as UCD, DDD, BDD, and TDD, give you:

  • Clear decision points where you control delegation
  • Framework for measuring when to trust
  • Non-negotiable human gates for critical decisions
  • Path from cautious (🔴 everything) to confident (differentiated)
  • You remain in control of what matters

Agents amplify your capabilities. You decide which ones, when, and how much.

Start cautious. Build trust through measurement. Move gradually. Keep critical controls high. Never surrender ethics, strategy, or accountability.
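“Build trust through measurement” can itself be made explicit. The sketch below shows one possible promotion rule – the function name and thresholds are invented assumptions, and note that it only proposes a change; a human still approves moving the dial:

```python
def propose_loosening(pass_rate: float, runs: int,
                      min_runs: int = 50, min_pass_rate: float = 0.98) -> bool:
    """Suggest (never apply) moving a dial one notch toward autonomy.

    A dial only loosens after a sustained, measured track record,
    and the change itself still requires human sign-off.
    """
    return runs >= min_runs and pass_rate >= min_pass_rate
```

Ten clean runs prove nothing; the rule demands volume *and* quality before a human is even asked to consider delegating more.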

This isn’t about AI replacing you. It’s about you using AI as a tool you control.

That’s the difference between reckless automation and disciplined augmentation.

Join the 6% of high performers and escape pilot purgatory.

#IntentDrivenDevelopment #IDD #AI #AgenticAI #EnterpriseAI #SoftwareArchitecture #HumanInTheLoop #TechLeadership

Check out the other articles in this series …


How Intent-Driven Development (IDD) Bridges UCD, DDD, BDD, and TDD in the AI Era

User-Centred Design, Domain-Driven Design, Behaviour-Driven Development and Test-Driven Development each solve part of the problem. In the AI era, Intent-Driven Development (IDD) brings them together by making intent explicit before automated systems turn ideas into working software.

Next Article ->


Why Intent-Driven Development Survives Rapid AI Model Evolution

As AI models evolve rapidly, frameworks tied to specific tools quickly become obsolete. This article explains why Intent-Driven Development (IDD) remains resilient by separating stable human intent and governance from fast-changing AI capabilities.
