How Intent-Driven Development (IDD) Bridges UCD, DDD, BDD, and TDD in the AI Era
After posting about Intent-Driven Development yesterday, I’ve had several great conversations, with people asking: “How does IDD fit with TDD? What about BDD? Where does DDD come in? What about UCD?”
The answer is more interesting than I first shared. These methodologies don’t just coexist – they flow together, with IDD as the pivot point that makes them all work effectively when AI becomes your implementation partner.
Here’s how they all fit together:
- UCD (User-Centered Design) teaches you to understand user needs – discover what problems you’re actually solving and who you’re building for through research, personas, and testing.
- DDD (Domain-Driven Design) teaches you to model the business domain – establish ubiquitous language, define bounded contexts, and structure the problem space. Critically, DDD creates the shared mental model that helps AI understand your business world as you see it, not just as generic code patterns. Without this, AI builds technically correct but semantically wrong solutions.
- IDD (Intent-Driven Development) teaches you to articulate intent – take what you’ve learned from users and domain modeling, then specify it clearly with success criteria, validation, constraints, and ethical boundaries. IDD embodies test-first thinking (TDD) – the tests that prove your intent are part of the specification itself, not an afterthought.
- BDD (Behavior-Driven Development) teaches you to communicate in stakeholder language – translate technical specifications into Given-When-Then scenarios that non-technical people can validate, creating shared understanding before implementation begins.
- TDD (Test-Driven Development) teaches you to validate through tests – but in an IDD world, this test-first thinking is incorporated into the specification stage, amplified by AI that can generate comprehensive test suites including edge cases humans might miss.
They’re not just complementary – they form a complete flow.
The full agentic flow:
1. UCD discovers (Human-led)
- What do users actually need?
- What problems are we solving?
- Research, personas, journey mapping
2. DDD models (Human-led, AI-assisted)
- What’s the domain model?
- What’s the ubiquitous language?
- What are the bounded contexts?
- AI can help identify patterns, but humans define the business world
3. IDD specifies (Human-led, AI-amplified). This is where IDD embodies TDD:
- Human defines intent and validation approach based on UCD needs + DDD model
- AI generates comprehensive test suite – including edge cases humans might miss
- Human reviews and refines: “Are these the right tests?”
- Tests become executable specifications
- This should already happen in non-AI approaches – UCD and DDD should inform tests
- AI’s advantage: Catches edge cases, generates tests faster, thinks through scenarios
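To make step 3 concrete, here is a minimal sketch of tests-as-specification, using a hypothetical free-shipping rule (the rule, function name, and edge cases are all invented for illustration). The human writes the core intent tests; the AI proposes the boundary cases:

```python
# Hypothetical intent: "Orders over $100 ship free; shipping is
# otherwise $5." The tests below ARE the specification – written
# before any implementation exists.

def shipping_cost(order_total: float) -> float:
    """Implementation target: must satisfy the intent tests below."""
    return 0.0 if order_total > 100.0 else 5.0

# Human-defined intent tests, grounded in UCD needs + DDD model
assert shipping_cost(150.0) == 0.0   # clear qualifying order
assert shipping_cost(50.0) == 5.0    # clear non-qualifying order

# AI-suggested edge cases a human might miss
assert shipping_cost(100.0) == 5.0   # boundary: "over" is strict
assert shipping_cost(100.01) == 0.0  # just past the boundary
assert shipping_cost(0.0) == 5.0     # empty-cart edge case
```

The human’s review question – “Are these the right tests?” – applies to every line here, including the AI-suggested boundary at exactly $100.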
4. BDD communicates (AI-assisted, Human-validated)
- AI generates BDD scenarios from IDD specification
- Translates technical specs into Given-When-Then stakeholder language
- Human stakeholders validate: “Yes, this is what the business needs”
- Stakeholders add context AI couldn’t know
- Refined scenarios may feed back to adjust IDD
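Step 4 can be sketched as a Given-When-Then scenario paired with executable steps. The scenario text, step names, and free-shipping rule are illustrative, not tied to any specific BDD framework:

```python
# The scenario is plain text that non-technical stakeholders can read
# and validate; the step functions below make it executable.

SCENARIO = """
Scenario: Orders over $100 ship free
  Given a cart totalling $150
  When the customer checks out
  Then shipping is $0
"""

def given_cart(total: float) -> dict:
    return {"total": total}

def when_checkout(cart: dict) -> dict:
    cart["shipping"] = 0.0 if cart["total"] > 100.0 else 5.0
    return cart

def then_shipping_is(cart: dict, expected: float) -> bool:
    return cart["shipping"] == expected

# Stakeholders validate the text; the steps validate the behaviour.
assert then_shipping_is(when_checkout(given_cart(150.0)), 0.0)
```

When a stakeholder says “actually, free shipping excludes oversized items,” that context flows back to adjust the IDD specification.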
5. AI implements
- Builds code to pass the tests defined in IDD
- Works within the domain model from DDD
- Iterates until all tests pass
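Step 5’s iterate-until-green loop can be sketched as follows. `propose_implementation` is a purely hypothetical stand-in for an AI code-generation call; here it simulates a first attempt with a boundary bug that the specification tests catch:

```python
# The agent proposes an implementation, the predefined IDD tests judge
# it, and the agent retries until everything passes.

def run_idd_tests(impl) -> list[str]:
    """Run the specification tests; return a list of failure messages."""
    cases = [((150.0,), 0.0), ((100.0,), 5.0), ((50.0,), 5.0)]
    return [f"shipping{args} != {expected}"
            for args, expected in cases if impl(*args) != expected]

def propose_implementation(attempt: int):
    # Stand-in for AI generation: attempt 0 has an off-by-one boundary
    # bug ("over $100" implemented as >=); attempt 1 fixes it.
    if attempt == 0:
        return lambda total: 0.0 if total >= 100.0 else 5.0
    return lambda total: 0.0 if total > 100.0 else 5.0

impl, failures = None, ["not started"]
for attempt in range(5):
    impl = propose_implementation(attempt)
    failures = run_idd_tests(impl)
    if not failures:
        break  # all specification tests pass

assert failures == []
```

The key property: the tests come from the IDD specification and never change inside the loop – only the implementation does.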
6. Automated validation
- Predefined tests run
- Pass/fail determines if implementation meets specification
7. Agentic exploratory testing (AI-led)
- Functional exploration: AI agent tries unexpected combinations, edge cases beyond automated tests
- Performance testing: Load, stress, endurance testing against IDD constraints
- Security probing: Common vulnerability patterns, injection attempts
- Accessibility testing: Screen reader compatibility, keyboard navigation
- Generates comprehensive report of findings
- This is what manual QA + performance testing teams would traditionally do
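The functional-exploration part of step 7 can be sketched as a simple seeded fuzzer that throws unexpected inputs at the system under test and compiles a findings report. The shipping rule and the “negative total” finding are hypothetical stand-ins:

```python
import random

def shipping_cost(order_total: float) -> float:
    return 0.0 if order_total > 100.0 else 5.0

def explore(runs: int = 1000, seed: int = 42) -> list[str]:
    """Probe beyond the predefined tests; return a findings report."""
    rng = random.Random(seed)
    findings = []
    for _ in range(runs):
        # Unexpected combinations: negatives, extremes, boundary region
        total = rng.choice([
            rng.uniform(-1000, 0),
            rng.uniform(0, 1e9),
            rng.uniform(99.99, 100.01),
        ])
        if total < 0 and shipping_cost(total) == 5.0:
            findings.append(
                f"negative total {total:.2f} still charged shipping")
    return findings

report = explore()
# A human (or a triage agent) reviews the report and decides whether
# negative totals should be rejected upstream instead.
```

None of the predefined specification tests covered negative totals – exactly the kind of gap exploratory agents exist to find.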
8. UCD validation (Human-led)
- Real users test with actual workflows
- Validates: “Does this solve my actual problem?”
- Discovers usability issues agents can’t perceive
- Feeds learning back into the next iteration
Why DDD becomes even more critical with AI:
Without domain modeling, AI makes assumptions based on generic patterns. You might ask for “order processing” and get something technically correct but semantically wrong – code that works, but doesn’t match how your business actually thinks about orders.
DDD gives AI the ontology – the vocabulary, relationships, and rules – so it builds within your business model, not against it. When you tell AI “an Order is an aggregate containing OrderLines, and the Inventory bounded context communicates via domain events,” you’re giving it the mental model of your world.
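The vocabulary in that sentence can be sketched directly in code. `Order`, `OrderLine`, and the event crossing into the Inventory context mirror the article’s example; the fields and event shape are illustrative:

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class OrderLine:
    sku: str
    quantity: int

@dataclass(frozen=True)
class OrderPlaced:
    """Domain event crossing into the Inventory bounded context."""
    order_id: str
    lines: tuple[OrderLine, ...]

@dataclass
class Order:
    """Aggregate root: all changes to lines go through the Order."""
    order_id: str
    lines: list[OrderLine] = field(default_factory=list)
    _events: list = field(default_factory=list)

    def add_line(self, sku: str, quantity: int) -> None:
        self.lines.append(OrderLine(sku, quantity))

    def place(self) -> None:
        # The aggregate enforces its own invariants, then records
        # the event for the Inventory context to consume.
        if not self.lines:
            raise ValueError("cannot place an empty order")
        self._events.append(OrderPlaced(self.order_id, tuple(self.lines)))

    def pull_events(self) -> list:
        events, self._events = self._events, []
        return events

order = Order("o-1")
order.add_line("sku-42", 2)
order.place()
events = order.pull_events()
assert isinstance(events[0], OrderPlaced)
```

Handing AI this structure – rather than asking for generic “order processing” – is what keeps generated code inside your business model.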
IDD takes that domain model and adds the specification layer – including tests – that AI needs to implement correctly.
The key relationships:
- IDD embodies TDD – Test-first thinking happens at specification, not after implementation
- BDD complements IDD – Translates technical specs into stakeholder language for validation
- UCD feeds IDD – User needs inform what tests prove success
- DDD constrains IDD – Domain model defines the “world” AI builds within
Agents amplify, humans decide:
- AI generates tests → Humans validate they’re the right tests
- AI generates BDD scenarios → Humans validate business intent
- AI performs exploratory testing → Humans validate user experience
- AI implements → Humans define what “correct” means
The complete picture in the agentic era:
Before AI:
UCD → DDD → BDD → Human codes → TDD → Manual testing → Deploy
With agentic AI:
UCD → DDD → IDD (embodying TDD) → BDD (AI-generated, human-validated) → AI codes → Automated validation → Agentic exploratory testing → UCD validates → Deploy
IDD becomes the specification discipline that:
- Embodies test-first thinking (TDD)
- Bridges discovery (UCD) and modeling (DDD)
- Feeds collaboration (BDD)
- Enables AI implementation
- Defines what validation means
IDD doesn’t replace the methodologies we trust – it’s the discipline that makes them all work together when AI becomes your implementation partner.
The question isn’t whether these methodologies survive the AI era. It’s whether we have the discipline to use them properly when agents can build so much, so fast.
Humans discover, model, specify, and validate. Agents generate, implement, and explore. Together, we build the right things, correctly, at speed.
Stay tuned, there’s more to come …
#IntentDrivenDevelopment #IDD #UCD #DDD #BDD #TDD #AI #AgenticAI #SoftwareArchitecture #TechLeadership
Check out the other articles in this series …
Intent-Driven Development (IDD)
We’ve moved from being limited by what technology can do to being constrained by how clearly we can express what we want. With agentic AI capable of building almost anything at breakneck speed, the real question becomes: are we asking for the right thing?
Intent-Driven Development: Human Gates in Agentic Flows for Enterprise AI Control
Intent-Driven Development based agentic flows show how to design enterprise AI systems with autonomous agents while retaining human control, accountability, and trust through explicit human-in-the-loop gates.