Open Source · Model-agnostic · Spec-driven

Engineering Discipline
for AI-Assisted Development.

A structured engineering process that decomposes projects into scoped specs, validates before generation, and traces every line of code back to a business requirement.

Alpha — Available now. Evolving fast.
Define Scope
Decompose
Seed Tasks
Run
Verify & Ship

Fast to Generate. Impossible to Maintain.

AI closes the gap between idea and code. It doesn't close the gap between code and understanding. Without traceable intent, versioned context, and enforced structure, you're not building a system — you're accumulating code.

Prompts Are Not Specs

A prompt describes what you want. A spec defines what to build, which files to touch, which interfaces to respect, and what "done" means. Agents need specs.

No Decomposition

You give the agent "build auth system." It figures out the breakdown on the fly. Different agents decompose differently. Architecture becomes ad-hoc.

No Traceability

Code gets generated. But which requirement does it satisfy? Which acceptance criteria? When something breaks, there's no trail back to intent.

Validation After the Fact

Errors are caught after code is written — in tests, in review, in production. A bad plan costs more than a bad line of code.

Blocked Business Adoption

Without audit trails and clear accountability, AI-generated code can't pass compliance reviews. Entire organizations can't adopt AI coding because the governance layer doesn't exist.

honest-assessment.sh
$ Does your agent know which files it should NOT touch? [no]
$ Can you trace this function back to a requirement? [not really]
$ Was the plan validated before code was generated? [what plan?]
$ Do two agents share the same architectural constraints? [hope so]
$ Could a new developer understand WHY this code exists? [good luck]

Your Code Must Be Auditable. AI Doesn't Change That.

Regulated industries, enterprise contracts, and serious products all require the same thing: know what was built, why it was built, and who is accountable. AI-generated code doesn't get a free pass.

Audit Trails

SOC 2, ISO 27001, and the EU AI Act demand traceability. OpenAgile.AI links every function to an acceptance criterion, every test to a requirement, every change to a decision.

Clear Ownership

Who approved this feature? Which requirement does it serve? When something breaks in production, you need answers — not a prompt history buried in a chat window.

Compliance-Ready

Fintech, healthtech, govtech — if your industry requires you to demonstrate that code meets documented requirements, OpenAgile.AI builds that traceability in from the start.

Risk Visibility

Which parts of the codebase were AI-generated? Which acceptance criteria are covered by tests? OpenAgile.AI makes these questions answerable, not guessable.

Specs Before Prompts. Validation Before Generation.

OpenAgile.AI sits above your coding tools. It decomposes projects into structured specs, scopes context per task, and validates everything before any agent writes a line of code. Then your AI models — Claude, Gemini, OpenAI, local models — execute with precision.

01

Hierarchical Decomposition

Project
Epic
Story
Task

A real project decomposed into 7 epics and 37 stories — each with scoped context, acceptance criteria, and dependency maps. Each level inherits only what it needs.

02

Structured Specs, Not Prompts

  • Scope: Define features, constraints, and acceptance criteria
  • Decompose: Break into epics, stories, and atomic tasks
  • Specify: Each task gets scoped files, interfaces, and constraints
  • Execute: AI generates code from validated, scoped specs

03

Validate Before You Generate

342 domain checks across 15 perspectives
Semantic deduplication (3-layer)
Cross-reference validation
Deterministic scoring with auto-fixes
Auto story splitting for oversized stories

Catch errors at the specification level, where they're cheapest to fix. A bad plan found in 30 seconds beats a bad feature found in production.

From Idea to Traceable Code.

0

Install & Launch

Install globally via npm, then run avc in your project folder. It creates a .avc/ config directory and opens a local Kanban board at localhost:4174. Add your LLM API key in Settings — OpenAI, Anthropic, Gemini, Xiaomi MiMo, or local models (LM Studio/Ollama).

npm install -g @agile-vibe-coding/avc
cd my-project && avc
1

Define Scope

Describe what you're building — features, constraints, tech stack. The output is a structured project spec, not a chat transcript.

Kanban UI
2

Decompose

The project breaks into epics, stories, and atomic tasks. Each task gets scoped context — only the files and interfaces it needs. This is what makes your agents precise.

Kanban UI
3

Validate

Specs are validated before any code is written. Missing interfaces, conflicting constraints, unclear acceptance criteria — caught here, not in your PR review.

Kanban UI
4

Execute

Each task runs in an isolated git worktree via an agentic tool-calling loop. The agent reads the doc chain (project → epic → story → task), implements, tests, and commits — all sandboxed.
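
The isolation model can be sketched with plain git commands. This is a hypothetical, simplified illustration of the idea (not AVC's actual internals); the branch and file names are invented:

```shell
# Hypothetical sketch of per-task isolation (not AVC's actual internals):
# each task gets its own branch checked out in a separate worktree, so
# parallel agents never share a working directory.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q -b main
git config user.email "agent@example.com"
git config user.name  "agent"
git commit -q --allow-empty -m "init"

# One worktree per task, each on its own branch (names are illustrative)
git worktree add -q -b task/implement-jwt-middleware ../wt-jwt   main
git worktree add -q -b task/create-user-endpoints    ../wt-users main

# Work done in one worktree is invisible to the others until merge
(cd ../wt-jwt && echo "middleware" > jwt.ts && git add . && git commit -q -m "task: jwt middleware")

git worktree list   # shows main plus the two task worktrees
```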

Kanban UI
5

Verify & Ship

Behavioral tests validate each acceptance criterion. E2E browser tests run for UI stories. Failed criteria trigger auto-fix cycles — the responsible task is reset, patched, and re-verified. On pass, worktree merges to main with post-merge test gates.
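
The "merge with post-merge test gates" pattern can be sketched in a few lines of git. A hypothetical illustration (not AVC's actual implementation); the test script, file, and branch names are invented:

```shell
# Hypothetical sketch of a post-merge test gate with rollback
# (illustrative, not AVC's actual implementation).
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q -b main
git config user.email "dev@example.com"
git config user.name  "dev"
printf 'echo ok\n' > test.sh
git add . && git commit -q -m "init with test suite"

# A task branch produced by an agent
git checkout -q -b task/feature
echo "feature" > feature.txt
git add . && git commit -q -m "task: feature"
git checkout -q main

# Merge, then gate: keep the merge only if the tests still pass
pre_merge=$(git rev-parse HEAD)
git merge -q --no-ff task/feature -m "merge task/feature"
if sh test.sh > /dev/null; then
    echo "gate passed: merge kept"
else
    git reset -q --hard "$pre_merge"   # automatic rollback on failure
    echo "gate failed: merge rolled back"
fi
```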

Kanban UI

The Engineering Layer Your Agents Are Missing.

Scoped Context per Task

Each task spec defines exactly which files, interfaces, and constraints are relevant. Your agents get precise briefs, not your entire repo.

Bidirectional Traceability

Code ↔ acceptance criteria ↔ story ↔ epic. When something breaks, trace it back to the requirement in seconds. When requirements change, find every affected line.

342-Check Validation

3-tier micro-check system across 15 perspectives. Semantic deduplication, cross-reference validation, and deterministic scoring with auto-fixes — all before code generation.

Autopilot

Chain all ceremonies end-to-end: Seed → Run → Commit, with parallel execution for independent tasks. Watchdog timer detects stuck sessions. State survives restarts.

Verify & Auto-Fix

Behavioral tests validate each acceptance criterion. E2E browser tests for UI stories. Failed criteria trigger fault diagnosis and automatic fix cycles.

Safe Merge

Worktree branches merge with conflict resolution, post-merge test gates, and automatic rollback on failure. Dependent tasks auto-promote on success.

Multi-LLM Support

Claude, Gemini, OpenAI, Xiaomi MiMo, local models (LM Studio/Ollama). Auto-provider fallback if your primary is unavailable.

Local Kanban Board

Run avc and see everything: project breakdown, task specs, validation status, agent progress. Real-time WebSocket updates. All local.

Run Multiple Tasks with Confidence.

When every task has scoped context, explicit constraints, and isolated execution — you can run as many as you want simultaneously without them conflicting.

implement-jwt-middleware auth-system
create-user-endpoints user-api
setup-database-migrations data-layer
configure-ci-pipeline devops
build-dashboard-components frontend

Scoped context eliminates conflicts

Each task spec defines which files to touch and which to leave alone. Tasks working on different parts of the codebase can't step on each other.

Isolated worktrees, clean merges

Every task runs in its own git worktree. Review each independently. Merge when ready. No conflict resolution hell.

Solo devs ship like teams

One developer, five tasks running simultaneously. Ship an entire feature set in hours instead of days. Structure is what makes this possible.

What OpenAgile.AI Actually Generates

OpenAgile.AI generates both the engineering specs and the solution code. Specs live in .avc/, code is written in parallel across isolated git worktrees, and every artifact is structured for long-term maintenance, traceability, and accountability.

doc.md
Project brief generated by the Sponsor Call — mission, scope, target users, tech stack, constraints.
context.md (project)
Machine-readable project context: identity, tech stack, auth mechanism, project characteristics, and the full epic map with story counts.
context.md (epic)
Per-epic specification: scope (in/out), features, data model sketch, non-functional requirements, cross-epic dependencies, and success criteria.
context.md (story)
Per-story spec: user story, acceptance criteria, technical notes (data model, auth, error handling), explicit scope boundaries, and dependency references.
context.md (task)
Atomic task brief: scoped file list, interface constraints, acceptance criteria subset, and implementation notes. Ready to feed to an AI agent.
Solution code
AI agents generate implementation code from validated specs. Every file traces back to a story and acceptance criterion — nothing is orphaned.
Git worktrees
Independent tasks run in parallel, each in its own isolated worktree branch. Code is reviewed independently and merged with post-merge test gates.
ceremonies-history.json
Full execution log: which ceremonies ran, when, their status, and checkpoint timestamps for resume-after-failure.
token-history.json
LLM usage tracking per ceremony: input/output tokens, cache hits, provider used.
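
Of the artifacts above, the task-level context.md is the one fed directly to an agent. A sketch of what such a brief might contain; the fields, ids, and file names are invented for illustration and are not AVC's canonical schema:

```
# Task: Implement invite endpoint
# id: context-0001-0001-0001 | story: User Registration

## Scoped Files
- src/routes/users.ts            (create)
- src/services/inviteService.ts  (create)
- src/middleware/auth.ts         (read-only, do not modify)

## Interface Constraints
- POST /api/users/invite accepts { email, role }
- Reuse the session-based auth middleware; do not add new auth

## Acceptance Criteria (subset)
- AC1, AC4, AC5 from story context-0001-0001

## Implementation Notes
- Validate email format before creating an InvitationToken
```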

Hierarchical Structure

Context flows downward. Each level inherits from its parent and adds only what's specific to its scope. The agent executing a task reads the full doc chain: project → epic → story → task.

.avc/ — generated project structure
.avc/
├── avc.json                    # config & model settings
├── ceremonies-history.json     # execution log
├── token-history.json          # LLM usage tracking
└── project/
    ├── doc.md                  # project brief (Sponsor Call)
    ├── context.md              # project context + epic map
    ├── context-0001/           # Epic: Foundation Services
    │   ├── context.md          # epic spec + NFRs
    │   ├── context-0001-0001/  # Story: User Registration
    │   │   └── context.md      # ACs, tech notes, deps
    │   ├── context-0001-0002/  # Story: Login & Auth
    │   └── context-0001-0003/  # Story: RBAC Middleware
    ├── context-0002/           # Epic: Core Business Logic
    ├── context-0003/           # Epic: Data Management
    └── context-0004/           # Epic: Frontend Shell
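
Assembling that doc chain is just a top-down concatenation of each level's context.md. An illustrative sketch (a throwaway tree is fabricated here so the snippet runs anywhere; the real files are generated by AVC):

```shell
# Illustrative only: the "full doc chain" an agent reads is the
# concatenation of every context.md on the path project → epic → story.
# A throwaway tree is fabricated here so the sketch runs anywhere.
tmp=$(mktemp -d)
cd "$tmp"
mkdir -p .avc/project/context-0001/context-0001-0001
echo "# Project Context"     > .avc/project/context.md
echo "# Epic: Foundation"    > .avc/project/context-0001/context.md
echo "# Story: User Invites" > .avc/project/context-0001/context-0001-0001/context.md

# Assemble the brief top-down: broadest context first, most specific last
cat .avc/project/context.md \
    .avc/project/context-0001/context.md \
    .avc/project/context-0001/context-0001-0001/context.md \
    > task-brief.md
```
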
.avc/project/context.md — project-level context
# Project Context

## Identity
- type: web-application
- deployment: cloud
- team: small

## Tech Stack
- react, vite, typescript
- node.js, express.js
- postgresql, prisma

## Authentication
- mechanism: session-based (httpOnly cookies)
- IMPORTANT: All epics and stories MUST use this
  auth mechanism consistently.

## Project Characteristics
- hasCloud: true
- hasFrontend: true
- hasPublicAPI: false

## Epic Map
- context-0001: Foundation Services (5 stories)
- context-0002: Core Business Logic (6 stories)
- context-0003: Data Management (4 stories)
- context-0004: Frontend Shell & UI (8 stories)
.avc/project/context-0001/context-0001-0001/context.md — story spec
# Story: User Registration and Email Invitation
# id: context-0001-0001 | epic: Foundation Services

## User Story
As an admin, I want to invite new users by email
so they can create accounts and join the team.

## Scope
In:  Invite endpoint, email with tokenized link,
     password setup page, duplicate check, audit log
Out: Login flow, token refresh, RBAC enforcement

## Acceptance Criteria
1. POST /api/users/invite (admin only) accepts
   { email, role } → returns 201 { id, email, role }
2. Invitation email sent with tokenized link
   valid for 48 hours
3. POST /api/auth/setup-password accepts
   { token, password } → returns 200
4. Duplicate email → 409 EMAIL_ALREADY_EXISTS
5. Non-admin callers → 403 FORBIDDEN
6. Audit event 'user.invited' emitted

## Technical Notes
- Data Model: User + InvitationToken table,
  hashed token, expiresAt timestamp
- Security: bcrypt cost=12, crypto-random tokens
- Email: SMTP via env vars, retry on failure

## Dependencies
- (none — foundation story)

Where OpenAgile.AI Fits in Your Stack

                  Coding Agents Alone        + OpenAgile.AI
Input             Natural language prompt    Structured spec with scoped context
Decomposition     Agent decides on the fly   Defined upfront: epic → story → task
Context scoping   Agent explores the repo    Each task specifies which files & interfaces
Validation        After code is written      Before code is generated
Traceability      Git blame                  Code ↔ criteria ↔ story ↔ epic
Methodology       Ad-hoc per session         Repeatable engineering process
Verification      Manual testing             Auto-verify per AC, fault diagnosis, auto-fix cycles
Recovery          Start over                 Resume after failure, rollback on merge failure

Engineering Principles for the Age of AI Agents

Grounded in the Agile Vibe Coding Manifesto — principles for developers who want to ship fast without sacrificing the engineering practices that make software maintainable.

Values

Accountability over anonymous generation
Traceable intent over opaque implementation
Discoverable domain structure over scattered code
Human-readable documentation over implicit knowledge

Principles

I
Customer value remains the primary objective.

Speed without value constitutes waste. Acceleration must deliver validated customer value.

II
Humans remain accountable for software systems.

Clear human responsibility exists for all deployed systems, regardless of production method.

III
Every change has traceable intent.

Features and modifications connect to requirements, decisions, or problems being addressed.

IV
Systems remain deterministic and verifiable.

Software behaves predictably and gets verified through testing.

V
Documentation preserves shared understanding.

Human-readable documentation maintains system intent and structure clarity.

VI
Code structure reflects the domain.

Organization centers on domain concepts rather than technical convenience.

VII
Architecture guides and constrains generation.

Clear boundaries and patterns direct automated generation processes.

VIII
Automation must remain verifiable.

Generated outputs stay understandable, reviewable, and verifiable by humans.

IX
Generated systems remain understandable and maintainable.

Software retains readability and evolvability regardless of production source.

X
Context is explicit and versioned.

Requirements, architecture, and domain language get externalized and versioned.

XI
Knowledge remains accessible.

Critical knowledge resides in documentation, tests, and architectural records.

XII
Teams regularly reflect on the use of automation.

Teams evaluate and adjust automated system practices continuously.

Open Source. Free. Runs Locally.

Available Now

  • Sponsor Call (project scoping)
  • Sprint Planning (decomposition + validation)
  • Seed (task generation)
  • Run (agentic code generation in worktrees)
  • Commit (safe merge with test gates)
  • Verify (behavioral + E2E tests)
  • Autopilot (end-to-end chaining)
  • Local Kanban board with real-time updates
  • Multi-LLM support with auto-fallback
  • 342-check micro-validation system

Coming Next

  • State-of-the-art AI coding loop
  • Integrated Kanban AI agent
  • Integrated code editor
  • Initialize on existing projects
  • Team collaboration

OpenAgile.AI is open source, free, and runs entirely on your machine. No cloud. No accounts. We ship weekly — your feedback shapes what we build next.

Try It Now — Alpha Release

Alpha — The spec & planning layer is ready. Code generation ceremonies are in very early stages. Expect rough edges.
1

Install

npm install -g @agile-vibe-coding/avc
2

Launch in your project folder

cd my-project && avc

This creates a .avc/ directory and opens the Kanban board at localhost:4174. Add your LLM API key in Settings — OpenAI, Anthropic, Gemini, Xiaomi MiMo, or local models (LM Studio/Ollama).

OpenAgile.AI CLI running with Kanban board
3

Run Sponsor Call

Describe your project idea. The tool generates a structured project brief with scope, tech stack, and constraints.

4

Run Sprint Planning

The project decomposes into epics, stories, and tasks — each with acceptance criteria, scoped context, and dependency maps. Explore them in the Kanban board.

5

Review what was generated

Open the Kanban board at localhost:4174. You should see the full epic → story hierarchy with structured specs, acceptance criteria, and dependencies. This is what OpenAgile.AI does best today.

Kanban board showing task card with acceptance criteria
Don't expect generated code yet. The Seed, Run, Commit, and Verify ceremonies exist but are still preliminary. Focus on the spec and planning layer for now — that's where the value is at this stage.

Your Agents Are Fast.
Make Them Precise.

Specs before prompts. Validation before generation. Traceability by default. The engineering process layer your AI agents are missing.