The AI Codebase Crisis: Why 'Sloppifying' Code With AI Agents Demands a New Architecture Discipline
"The Only Thing That Sloppifies a Codebase Faster Than 1 Coding Agent Is a Swarm of Them"
A new manifesto published by developer Nate Swerdlow tackles one of the most pressing — and least discussed — problems in the AI era: what happens to code quality when AI agents write most of your code?
The warning is stark:
"The only thing that sloppifies a codebase faster than 1 coding agent is a swarm of them."
As AI coding tools like Claude Code, Codex, and Cursor become standard parts of every developer's workflow, the sheer volume of AI-generated code is creating a new category of technical debt. The manifesto proposes a framework for keeping AI-generated code maintainable.
Two Types of Functions
The core insight is a distinction between two kinds of code:
Semantic Functions
Building blocks. These are the atomic units of your codebase. The rules:
- Minimal — do one thing, do it well
- Self-documenting — no comments needed; the function name describes exactly what it does
- Pure inputs/outputs — take everything needed, return everything produced
- No hidden side effects — safe to reuse without understanding internals
- Highly unit-testable — well-defined inputs and outputs make testing trivial
Examples range from quadratic_formula() to retry_with_exponential_backoff_and_run_y_in_between().
The key principle: even if a semantic function is only used once, it serves as an index of information for future humans and agents reading the code.
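The rules above can be sketched in a few lines of Python. This is an illustrative sketch of the manifesto's quadratic_formula() example, not code from the manifesto itself:

```python
import math

def quadratic_formula(a: float, b: float, c: float) -> tuple[float, float]:
    """Return the two real roots of ax^2 + bx + c = 0.

    Semantic: pure inputs/outputs -- everything needed comes in as
    arguments, everything produced goes out as the return value.
    No hidden side effects, so it is safe to reuse and trivial to unit test.
    """
    discriminant = b * b - 4 * a * c
    if discriminant < 0:
        raise ValueError("no real roots")
    root = math.sqrt(discriminant)
    return ((-b + root) / (2 * a), (-b - root) / (2 * a))
```

Because the name says exactly what the function does and the signature says exactly what it needs, a future reader (human or agent) never has to open the body to use it.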
Pragmatic Functions
Orchestrators. These wrap semantic functions into complex workflows:
- Handle real-world messiness
- Combine multiple semantic functions
- Expected to change over time
- Tested via integration testing, not unit tests
- Should be used in only a few places (if used everywhere, refactor into semantic functions)
Examples: provision_new_workspace_for_github_repo(repo, user) or handle_user_signup_webhook().
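The shape of a pragmatic function might look like the following hypothetical sketch. None of these helper names come from the manifesto; they only illustrate an orchestrator wrapping semantic building blocks:

```python
# Hypothetical semantic building blocks (illustrative names).
def normalize_email(raw: str) -> str:
    """Semantic: lowercase and strip whitespace from an email address."""
    return raw.strip().lower()

def build_welcome_message(email: str) -> str:
    """Semantic: render the welcome text for a new user."""
    return f"Welcome, {email}!"

def handle_user_signup_webhook(payload: dict) -> dict:
    """Pragmatic: absorbs real-world messiness (missing fields, bad input)
    and combines semantic functions into one workflow.

    Expected to change as the signup flow evolves, and tested via
    integration tests rather than unit tests.
    """
    email = normalize_email(payload.get("email", ""))
    if not email:
        return {"status": "error", "reason": "missing email"}
    return {"status": "ok", "message": build_welcome_message(email)}
```

Note the division of labor: the messy, changeable logic lives in one place, while the semantic helpers stay small, pure, and reusable.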
Why This Matters for AI Agents
The manifesto identifies several patterns that AI agents tend to create:
1. Monolithic functions
Agents often generate code that does too much in one place, combining business logic, data transformation, and side effects.
2. Implicit dependencies
AI-generated code frequently relies on external state without making it explicit in function signatures.
3. Copy-paste patterns
When agents see similar patterns in a codebase, they tend to replicate them rather than abstracting shared logic into reusable semantic functions.
4. Missing abstractions
Agents optimize for the immediate task without considering whether the code might be useful in other contexts.
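A hypothetical before/after sketch shows how these patterns get fixed in the manifesto's terms. The "before" function is the monolithic shape agents tend to produce; the "after" splits it into semantic steps plus one pragmatic orchestrator. All names here are invented for illustration:

```python
# Before: one function mixes parsing, business logic, and formatting.
def report_before(raw_rows: list[str]) -> str:
    total = 0
    for row in raw_rows:
        parts = row.split(",")
        total += int(parts[1])
    return f"total={total}"

# After: each step is a semantic function with explicit inputs and outputs.
def parse_csv_row(row: str) -> tuple[str, int]:
    """Semantic: split a 'name,amount' row into typed fields."""
    name, amount = row.split(",")
    return name, int(amount)

def sum_amounts(rows: list[tuple[str, int]]) -> int:
    """Semantic: total the amount field across parsed rows."""
    return sum(amount for _, amount in rows)

def format_total_report(total: int) -> str:
    """Semantic: render the report line."""
    return f"total={total}"

def build_report(raw_rows: list[str]) -> str:
    """Pragmatic: orchestrate the semantic steps above."""
    parsed = [parse_csv_row(r) for r in raw_rows]
    return format_total_report(sum_amounts(parsed))
```

Both versions produce the same output, but only the second gives the next reader (or the next agent) named, unit-testable pieces to reuse.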
Practical Guidelines
The manifesto offers concrete rules:
- Code should be self-documenting — the structure of functions and data flow should tell the story
- Name things precisely — calculate_user_total_orders_since_date() over process()
- Isolate complexity — if logic is hard to follow, break it into semantic functions that describe each step
- Test at the right level — unit test semantic functions, integration test pragmatic functions
- Separate data flow from side effects — semantic functions should be pure wherever possible
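The last two guidelines fit together: keep the computation pure so it can be unit tested, and push the side effect into a thin wrapper that gets integration tested. A minimal sketch, with hypothetical names:

```python
def calculate_discounted_price(price: float, discount_pct: float) -> float:
    """Semantic: pure pricing rule -- no I/O, trivially unit-testable."""
    return round(price * (1 - discount_pct / 100), 2)

def print_discounted_price(price: float, discount_pct: float) -> None:
    """Pragmatic wrapper: the only place the side effect (printing) happens."""
    print(calculate_discounted_price(price, discount_pct))
```

The pure core can be asserted against directly in unit tests; the wrapper is exercised at a higher level, where the side effect actually matters.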
Available as a Skill
Notably, the manifesto is available as a skill you can give your AI agents:
npx skills add theswerd/aicode
This means you can enforce these patterns automatically — giving Claude Code, Cursor, or any agent the same guidelines before it writes code.
The Bigger Picture
This framework addresses a fundamental tension in AI-assisted development:
- Speed vs. quality — AI agents write code 10x faster, but maintainability may suffer
- Flexibility vs. structure — Agents are creative, but creative code isn't always maintainable code
- Individual vs. team — Code that makes sense to the agent that wrote it may bewilder the next human who reads it
The semantic/pragmatic distinction gives both humans and agents a shared vocabulary for discussing code quality — something that's increasingly necessary as AI agents become primary code authors.
Industry Implications
This isn't just a best-practice debate. Companies are making strategic bets on AI coding:
- Startups are shipping faster with fewer engineers
- Enterprises are worried about accumulated technical debt
- Investors are questioning whether AI-written code can scale
If the industry doesn't develop standards for AI-generated code quality, we risk a future where codebases are unmaintainable by anyone — human or AI.
Source: aicode.swerdlow.dev