
FastAPI Governance Audit

FastAPI scores 29/100 on enforcement posture -- strong test coverage (583 test files) is undermined by zero automated enforcement hooks and no AI agent instructions.

Overall Score: 29/100 (Grade: D)

Enforcement Maturity: 35/100 (Grade: D)
Context Hygiene: 10/100 (Grade: F)
Automation Readiness: 42/100 (Grade: C)

Executive Summary

FastAPI is one of the most widely adopted Python web frameworks, powering production APIs at companies from startups to enterprises. With 80,000+ GitHub stars and a thriving contributor ecosystem, it represents a mature, well-maintained open-source project.

Despite this maturity, an automated governance audit reveals significant gaps in the project's enforcement infrastructure. While FastAPI excels at traditional software quality (comprehensive test suite, robust CI/CD), it lacks the structural enforcement mechanisms needed to govern AI-assisted development safely.

Enforcement Ladder Distribution

L5 - Hooks: 0 found -- no automated enforcement before commits

L4 - Tests: 583 files -- comprehensive test suite across the codebase

L3 - Templates: 19 workflows -- mature CI pipeline with GitHub Actions

L2 - Prose: 0 rules -- no CLAUDE.md or agent-specific rules

L1 - Conversation: default -- default mode for all AI interactions

Diagnosis: FastAPI has invested heavily in L3 and L4 (CI templates and tests) but has zero investment in L2 (prose rules) and L5 (hooks). This creates a "hollow middle" -- AI agents can write code that passes tests but violates unwritten project conventions.

Critical Gaps Found

1. No L5 (Hook) Enforcement [CRITICAL]

No pre-commit hooks or Claude Code hooks were found. AI agents can commit code without any structural gatekeeping. Rules exist only as tribal knowledge.
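A gate at this level can be very small. The sketch below is a hypothetical git pre-commit hook in Python -- not FastAPI's actual tooling -- that blocks a commit when staged Python files contain debugger leftovers; the forbidden patterns are illustrative placeholders a project would replace with its own rules.

```python
#!/usr/bin/env python3
"""Minimal pre-commit gate (illustrative sketch, not FastAPI's tooling).

Save as .git/hooks/pre-commit and mark executable; git runs it before
each commit and aborts on a non-zero exit status.
"""
import re
import subprocess
import sys

# Example patterns that should never reach a commit; extend per project.
FORBIDDEN = [
    re.compile(r"\bbreakpoint\(\)"),
    re.compile(r"\bpdb\.set_trace\(\)"),
]


def staged_python_files() -> list[str]:
    """List staged (added/copied/modified) .py files via git."""
    out = subprocess.run(
        ["git", "diff", "--cached", "--name-only", "--diff-filter=ACM"],
        capture_output=True, text=True, check=True,
    ).stdout
    return [f for f in out.splitlines() if f.endswith(".py")]


def check(paths: list[str]) -> list[str]:
    """Return human-readable violations found in the given files."""
    violations = []
    for path in paths:
        try:
            text = open(path, encoding="utf-8").read()
        except OSError:
            continue
        for pattern in FORBIDDEN:
            if pattern.search(text):
                violations.append(f"{path}: matches {pattern.pattern}")
    return violations


def main() -> int:
    problems = check(staged_python_files())
    for problem in problems:
        print(problem, file=sys.stderr)
    return 1 if problems else 0

# In the installed hook, finish with: raise SystemExit(main())
```

Because the hook runs before the commit object is created, the rule is enforced structurally rather than relying on reviewers to remember it.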

2. Potential Hardcoded Secrets [CRITICAL]

10 instances of potential hardcoded secrets detected. No automated secret scanning in CI. Test secrets are indistinguishable from real secrets without manual review.
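A first-pass scanner for such findings needs only a few regular expressions. The sketch below is a minimal example, not the audit's actual detector; the two patterns shown (AWS access key IDs and quoted credential assignments) are common illustrations, and production scanners such as gitleaks or detect-secrets ship far larger rule sets.

```python
import re

# Illustrative rules only; real secret scanners use many more patterns
# plus entropy checks to cut false positives.
SECRET_PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "hardcoded_credential": re.compile(
        r"""(?i)\b(password|passwd|secret|api_key|token)\s*=\s*["'][^"']{8,}["']"""
    ),
}


def scan_text(text: str) -> list[tuple[int, str]]:
    """Return (line_number, rule_name) pairs for suspicious lines."""
    hits = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for name, pattern in SECRET_PATTERNS.items():
            if pattern.search(line):
                hits.append((lineno, name))
    return hits
```

Wired into CI, a non-empty result would fail the build and surface the file and line for manual triage, which is exactly the review step the audit found missing.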

3. No CLAUDE.md / Agent Instructions [HIGH]

No CLAUDE.md or equivalent AI agent instruction file was found. Every AI coding session starts from zero context. Agents cannot learn project conventions.
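A starting point can be very short. The fragment below is a hypothetical CLAUDE.md skeleton -- the section names and every rule in it are illustrative placeholders, not FastAPI's actual conventions, which only the maintainers could supply.

```markdown
# CLAUDE.md

## Project overview
FastAPI is a Python web framework for building APIs with type hints.
(Example file -- the conventions below are illustrative placeholders.)

## Conventions
- Run the test suite with `pytest` before proposing a commit.
- Match the surrounding code style; do not reorder unrelated imports.
- Never hardcode credentials; use obvious placeholder values in tests.

## Critical rules
- Do not change public API signatures without a deprecation note.
- Keep diffs minimal and scoped to the task at hand.
```

Even a file this small moves project rules from L1 (re-explained every conversation) to L2 (persistent prose context that every agent session loads).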

4. High TODO/FIXME Debt [MEDIUM]

198 TODO/FIXME/HACK markers found across the codebase. No systematic process for converting TODOs to actionable work items.
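Producing this count is a simple static scan. The sketch below is a minimal version of such a marker tally, assuming a plain recursive walk over `.py` files; the audit's actual scanner is not published, so treat this as an approximation of the technique.

```python
import re
from pathlib import Path

MARKER = re.compile(r"\b(TODO|FIXME|HACK)\b")


def count_markers(root: str) -> dict[str, int]:
    """Tally TODO/FIXME/HACK markers across .py files under root."""
    counts = {"TODO": 0, "FIXME": 0, "HACK": 0}
    for path in Path(root).rglob("*.py"):
        try:
            text = path.read_text(encoding="utf-8")
        except (OSError, UnicodeDecodeError):
            continue  # skip unreadable or non-UTF-8 files
        for match in MARKER.finditer(text):
            counts[match.group(1)] += 1
    return counts
```

Running a tally like this in CI and failing when the count rises is one low-effort way to turn the debt figure into an enforced ceiling.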

5. Context Hygiene Score: F (10/100) [HIGH]

Without any agent instruction files, AI tools operate with zero project-specific context. Estimated +20% token cost from repeated context establishment.

EU AI Act Compliance Mapping

For organizations using FastAPI in high-risk AI systems, the current governance posture creates compliance gaps:

Article 9: Risk Management System

Requirement                          Readiness
9(2)(a) Risk identification          15%
9(2)(b) Risk evaluation              10%
9(2)(d) Risk management measures     20%
9(6) Testing for risk management     60%
9(7) Lifecycle risk management       5%

Article 15: Accuracy, Robustness and Cybersecurity

Requirement                          Readiness
15(1) Accuracy levels                40%
15(2) Error resilience               30%
15(3) Manipulation robustness        10%
15(4) Cybersecurity                  25%

Article 17: Quality Management System

Requirement                          Readiness
17(1)(a) Compliance strategy         5%
17(1)(c) Test/validation procedures  55%
17(1)(g) Post-market monitoring      0%

Overall EU AI Act Readiness: ~22%

Recommendations

Immediate (Week 1)

  1. Create CLAUDE.md with project conventions, architecture overview, and critical rules -- 1 hour effort, high impact
  2. Add 3 pre-commit hooks for secret scanning, import ordering, and test file co-location -- 2 hours effort
  3. Audit and remediate potential secrets in test files -- 1 hour effort

Short-term (Month 1)

  1. Deploy enforcement ladder with L5 hooks for security-critical paths
  2. Set up violation tracking to build a risk register from enforcement data
  3. Create AI agent governance documentation mapping to EU AI Act articles
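Item 2 needs little infrastructure to start. The sketch below shows one plausible shape for a violation log: an append-only JSON Lines file that hooks write to when they fire. The schema and field names are hypothetical illustrations, not a standard.

```python
import json
import time


def record_violation(register: str, rule: str, path: str, detail: str) -> dict:
    """Append one enforcement violation to a JSON Lines risk register.

    Hypothetical schema -- the field names here are illustrative.
    """
    entry = {
        "timestamp": time.time(),
        "rule": rule,      # e.g. "secret-scan", "import-order"
        "path": path,      # file that triggered the violation
        "detail": detail,  # human-readable context for triage
    }
    with open(register, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")
    return entry
```

Because each line is an independent JSON object, the register can be aggregated later (violations per rule, per path, per week) to feed the risk identification and lifecycle monitoring items in the Article 9 mapping above.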

Appendix: Raw Scan Data

Test Files: 583
Source Files: 506
GitHub Actions Workflows: 19
Potential Secrets: 10
TODO/FIXME Markers: 198
Dead Code Markers: 447
CLAUDE.md Files: 0
L5 Hooks: 0
Doc Files: 1,452

Want this analysis for your codebase?

Get the same structural governance audit -- risk classification, violation scan, and enforcement recommendations.

Request a Free Audit
This governance audit was generated by walseth.ai using automated enforcement posture scanning. The findings are based on static analysis of the repository structure, configuration files, and code patterns -- no code was executed during the audit.
