EU AI Act enforcement begins August 2, 2026 — Are you ready?

FastAPI Governance Score

Strong test coverage undermined by zero enforcement hooks and no AI agent instructions.

80,000+ GitHub stars · Assessed: 2026-03-11

Overall Score: 29/100 (Grade: D)

- Enforcement Maturity: 35/100 (Grade: D)
- Context Hygiene: 10/100 (Grade: F)
- Automation Readiness: 42/100 (Grade: C)
- Portfolio average: 29/100 · FastAPI: 29/100

Key Findings

No Hook Enforcement [CRITICAL]

Zero pre-commit or Claude Code hooks. Rules about import ordering, error handling patterns, and documentation style exist only as tribal knowledge. No structural mechanism prevents violations.

10 Potential Hardcoded Secrets [HIGH]

Many are likely test fixtures, but no automated scanning distinguishes real secrets from test data. FastAPI's tutorial-heavy codebase may normalize credential patterns in example code.
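
A lightweight triage pass can separate fixture hits from matches that deserve human review. The sketch below is a stdlib-only illustration, not FastAPI's actual tooling; the directory names and the credential regex are assumptions to adapt per repository (real scanners such as detect-secrets or gitleaks add entropy checks and many more rules):

```python
import re
from pathlib import Path

# Directories that hold test or tutorial code; matches here are probably
# fixtures rather than live credentials (assumption: adjust per repo).
FIXTURE_DIRS = ("tests", "docs_src")

# Naive credential patterns for illustration only.
SECRET_RE = re.compile(
    r"(?:api[_-]?key|secret|token|password)\s*=\s*['\"][^'\"]{8,}['\"]",
    re.IGNORECASE,
)

def triage(repo_root: str) -> dict[str, list[str]]:
    """Split hardcoded-secret hits into 'fixture' and 'review' buckets."""
    buckets: dict[str, list[str]] = {"fixture": [], "review": []}
    for path in Path(repo_root).rglob("*.py"):
        text = path.read_text(errors="ignore")
        if not SECRET_RE.search(text):
            continue
        rel = path.relative_to(repo_root)
        bucket = "fixture" if rel.parts[0] in FIXTURE_DIRS else "review"
        buckets[bucket].append(str(rel))
    return buckets
```

Anything landing in the "review" bucket still needs a human decision; the point is to stop re-auditing the same tutorial fixtures on every scan.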

No CLAUDE.md or Agent Instructions [HIGH]

Zero project-specific context for AI agents. FastAPI's conventions around dependency injection, response models, and error handling are undocumented for automated contributors.
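
What such a file might contain: a hypothetical, minimal CLAUDE.md sketch. Every convention below is illustrative, not drawn from FastAPI's actual codebase:

```markdown
# CLAUDE.md

## Conventions (illustrative; adapt to your project)
- Every route handler declares `response_model=`; never return raw dicts.
- Shared resources (DB sessions, settings) arrive via `Depends(...)`,
  never module-level globals.
- Raise `HTTPException` with a stable `detail` string; error payload
  shapes are part of the API contract.
- Middleware order: CORS first, then auth, then rate limiting.

## Commands
- Run tests: `pytest`
- Lint: `ruff check .`
```

Even a file this small gives automated contributors something concrete to follow instead of inferring conventions from surrounding code.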

Why FastAPI's Governance Score Matters

FastAPI has become the default choice for building AI/ML APIs in Python. Its async-first design, automatic OpenAPI documentation, and Pydantic integration make it the framework of choice for serving ML model predictions. With 80,000+ GitHub stars, FastAPI powers AI inference endpoints across thousands of production deployments.

FastAPI's 583 test files and 115% test-to-source ratio demonstrate solid testing discipline. But without L5 hooks, nothing prevents commits that bypass security patterns, break API contracts, or introduce dependency injection errors. The 447 deprecated/dead code markers suggest accumulated technical debt that governance tools could help manage.

Enforcement Ladder Analysis

FastAPI follows a common pattern: strong L3 automation (19 GitHub Actions workflows) and solid L4 testing, but nothing at L5 (hooks) or L2 (prose). This creates a governance model that validates code after it enters the repository but never prevents problematic code from being committed.

For a framework used primarily to serve AI model predictions in production, this gap is significant. API contract changes, authentication bypasses, and rate limiting modifications can all be committed without structural validation.
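
Catching contract changes structurally can start very small. The sketch below is a deliberately minimal, stdlib-only diff of two OpenAPI documents that flags removals only; real contract tools also diff parameters, schemas, and status codes:

```python
def breaking_changes(old: dict, new: dict) -> list[str]:
    """Flag paths and operations present in `old` but missing in `new`."""
    problems = []
    for path, ops in old.get("paths", {}).items():
        new_ops = new.get("paths", {}).get(path)
        if new_ops is None:
            problems.append(f"removed path: {path}")
            continue
        for method in ops:
            # Operation existed on this path before but is gone now.
            if method not in new_ops:
                problems.append(f"removed operation: {method.upper()} {path}")
    return problems
```

FastAPI exposes the live spec via `app.openapi()`, so a CI job can diff it against a committed snapshot and fail the build when this list is non-empty.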

What This Means for Teams Using FastAPI

FastAPI's design encourages good patterns: type hints, dependency injection, automatic validation. The governance risk is less about using FastAPI and more about maintaining FastAPI-based applications at scale:

  1. Add pre-commit hooks that validate API route definitions, dependency injection patterns, and response model schemas
  2. Create CLAUDE.md documenting your project's FastAPI conventions, including middleware ordering and error handling patterns
  3. Implement API contract testing that catches breaking changes before they reach production
  4. Track deprecated patterns: FastAPI's 447 dead code markers indicate significant deprecation debt
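
Item 1 above can start as a local pre-commit hook, which is just a script that exits non-zero to block the commit. A minimal sketch; the response_model convention and the decorator regex are assumptions, so adapt both to your own rules:

```python
import re
import sys

# Matches FastAPI route decorators with their argument list, e.g.
# @app.get("/items") or @router.post("/users", response_model=UserOut).
ROUTE_RE = re.compile(r"@\w+\.(get|post|put|patch|delete)\(([^)]*)\)")

def missing_response_model(source: str) -> list[str]:
    """Return GET decorators that omit response_model= (GET only, to keep
    the sketch small; extend to other methods as conventions require)."""
    hits = []
    for match in ROUTE_RE.finditer(source):
        method, args = match.group(1), match.group(2)
        if method == "get" and "response_model" not in args:
            hits.append(match.group(0))
    return hits

if __name__ == "__main__":
    # pre-commit passes staged filenames as argv; exit 1 blocks the commit.
    bad = []
    for name in sys.argv[1:]:
        with open(name, encoding="utf-8") as f:
            bad += [f"{name}: {hit}" for hit in missing_response_model(f.read())]
    for line in bad:
        print(line)
    sys.exit(1 if bad else 0)
```

Wired into `.pre-commit-config.yaml` as a `repo: local` hook over `*.py` files, this turns a tribal-knowledge convention into a structural gate.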

EU AI Act Compliance Impact

FastAPI is the most common serving layer for AI models in production. Organizations deploying AI systems via FastAPI endpoints need to ensure their API layer meets EU AI Act requirements for logging, transparency, and human oversight. With 22% compliance readiness, the key gaps are in audit trail capabilities (Article 12) and human oversight mechanisms (Article 14) at the API layer.
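
At the API layer, an audit trail can start as plain ASGI middleware, which FastAPI apps accept unchanged since they are ASGI callables. A minimal sketch; the record fields and sink interface are assumptions, and Article 12 compliance requires far more (durable storage, retention policy, tamper resistance):

```python
import time

def audit_middleware(app, sink):
    """Wrap any ASGI app so every HTTP request emits a structured audit
    record. `sink` is any callable taking a dict; in production this would
    write to a durable, append-only log store."""
    async def wrapped(scope, receive, send):
        if scope["type"] != "http":
            await app(scope, receive, send)
            return
        record = {"ts": time.time(), "method": scope["method"], "path": scope["path"]}
        async def sending(message):
            # Capture the status code when the response starts, then emit.
            if message["type"] == "http.response.start":
                record["status"] = message["status"]
                sink(record)
            await send(message)
        await app(scope, receive, sending)
    return wrapped
```

Usage is a one-line wrap before serving, e.g. `app = audit_middleware(fastapi_app, audit_log.append)`, with `fastapi_app` and `audit_log` standing in for your own objects.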

Recommendations

Immediate (Week 1): Create CLAUDE.md covering architecture, dependency injection patterns, and API conventions (1 hour). Add 3 pre-commit hooks for API route validation and security patterns (2 hours). Audit 10 potential secrets (1 hour).

Short-term (Month 1): Deploy L5 enforcement hooks for security-critical paths (authentication, rate limiting, CORS). Set up violation tracking for API contract changes. Implement deprecation cleanup plan for 447 dead code markers.

Strategic (Quarter): Build enforcement ladder documentation linking API governance to compliance requirements. Establish automated API contract testing in CI. Continuously tune enforcement rules based on observed violation data.

Raw Scan Data

- Test Files: 583
- Source Files: 506
- GitHub Actions: 19
- Potential Secrets: 10
- TODO/FIXME: 198
- Dead Code Markers: 447
- CLAUDE.md Files: 0
- L5 Hooks: 0

EU AI Act Readiness

22%

Estimated compliance readiness based on enforcement posture, documentation, and automated quality controls. EU AI Act enforcement begins August 2, 2026.

See how your project compares

Run our free governance scanner on your own repository and get an instant enforcement posture score.

Scan Your Repository
This governance assessment was generated by walseth.ai using automated enforcement posture scanning on 2026-03-11. Findings are based on static analysis of the repository structure, configuration files, and code patterns. Scores reflect a point-in-time assessment and may change as the project evolves.

Get Your Free AI Governance Audit

Submit your repository and receive a structural governance assessment: risk classification, violation scan, and enforcement recommendations. No cost, no commitment.

Request Free Audit