Django Governance Audit
Django scores 29/100 on enforcement posture -- its industry-leading test suite (1,995 test files) is undermined by zero automated enforcement hooks and no AI agent instructions.
Overall Score: 29/100 (Grade: D)
Executive Summary
Django is the most deployed Python web framework, powering production systems at major banks, healthcare platforms, and government agencies worldwide. With 80,000+ GitHub stars and over 20 years of continuous development, it represents the gold standard of Python web frameworks -- battle-tested, well-documented, and trusted for critical infrastructure.
Despite this unmatched maturity, an automated governance audit reveals a significant structural gap. Django's 1,995 test files yield a 205% test-to-source ratio -- among the highest of any open-source project -- but the framework has zero investment in enforcement mechanisms for AI-assisted development. As AI coding agents become primary contributors, this gap exposes even Django's well-guarded codebase to governance drift.
Enforcement Ladder Distribution
| Level | Status |
|---|---|
| L5 -- Hooks | No automated enforcement before commits or tool use |
| L4 -- Tests | Exceptional -- 205% test-to-source ratio, among the highest in open source |
| L3 -- CI | Mature CI pipeline with GitHub Actions |
| L2 -- Prose rules | No CLAUDE.md or agent-specific rules |
| L1 -- Ad-hoc prompting | Default mode for all AI interactions |
Diagnosis: Django has the strongest L4 (test) investment of any framework we have audited, with nearly 2,000 test files covering 971 source files. However, it has zero investment in L2 (prose rules) and L5 (hooks). An agent can write code that passes all tests while violating 20 years of accumulated project conventions -- a dangerous paradox for the framework trusted by regulated industries.
Critical Gaps Found
1. No L5 (Hook) Enforcement [CRITICAL]
No pre-commit hooks or Claude Code hooks were found. AI agents can commit code without any structural gatekeeping. Django's security-critical patterns (CSRF, SQL injection prevention, authentication backends) are enforced only by human review.
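Closing this gap does not require heavy tooling. A minimal `.pre-commit-config.yaml` sketch is below, using the public detect-secrets hook plus a local hook; the `rev` pin and the `scripts/check_security_paths.sh` entry are illustrative placeholders, not files that exist in Django's repository:

```yaml
# .pre-commit-config.yaml -- illustrative sketch; versions and script paths are placeholders
repos:
  - repo: https://github.com/Yelp/detect-secrets
    rev: v1.5.0
    hooks:
      - id: detect-secrets
        args: ["--baseline", ".secrets.baseline"]
  - repo: local
    hooks:
      - id: protect-security-paths
        name: Flag edits to security-critical modules for explicit review
        entry: scripts/check_security_paths.sh   # hypothetical repo-local script
        language: script
        files: ^django/(middleware/csrf|core/signing|contrib/auth)/
```

Once installed with `pre-commit install`, these checks run on every commit, whether the author is a human or an AI agent.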
2. Potential Hardcoded Secrets [CRITICAL]
The scan flagged 25 instances of potential hardcoded secrets, and CI runs no automated secret scanning. Without manual review, test secrets are indistinguishable from real ones -- a heightened risk for a framework used in banking and healthcare.
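The kind of scan that produced this finding can be sketched in a few lines. The patterns below are illustrative only -- production scanners such as detect-secrets or gitleaks add entropy analysis and far larger rule sets:

```python
import re

# Illustrative patterns only -- real scanners combine many more rules
# with entropy analysis to cut false positives.
SECRET_PATTERNS = [
    re.compile(r"""(?i)(secret_key|api_key|password|token)\s*=\s*['"][^'"]{8,}['"]"""),
    re.compile(r"-----BEGIN (RSA |EC )?PRIVATE KEY-----"),
]

def scan_text(text, path="<memory>"):
    """Return (path, line_number, line) tuples for lines matching a secret pattern."""
    findings = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        if any(p.search(line) for p in SECRET_PATTERNS):
            findings.append((path, lineno, line.strip()))
    return findings

if __name__ == "__main__":
    sample = 'SECRET_KEY = "django-insecure-abc123def456"\nDEBUG = True\n'
    for path, lineno, line in scan_text(sample, "settings.py"):
        print(f"{path}:{lineno}: {line}")
```

The remediation half of the problem is naming: a fixture value like `"test-not-a-real-key"` is self-evidently fake, while a realistic-looking string forces a human to check.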
3. No CLAUDE.md / Agent Instructions [HIGH]
No CLAUDE.md or equivalent AI agent instruction file was found. With 20+ years of accumulated conventions (ORM patterns, middleware ordering, template tag structure), every AI session starts from zero context about Django's architecture.
4. Missing Root README [MEDIUM]
Django uses a separate documentation site rather than a root README. AI agents and automated tools therefore have no quick-access project summary, which slows first-session context gathering and contributor onboarding.
5. Context Hygiene Score: F (10/100) [HIGH]
Without any agent instruction files, AI tools operate with zero project-specific context. Estimated +20% token cost from repeated context establishment. No guardrails for AI agents making architectural decisions in a framework where architecture is everything.
EU AI Act Compliance Mapping
For organizations using Django in high-risk AI systems (enforcement deadline August 2, 2026), the current governance posture creates compliance gaps:
Article 9: Risk Management System
| Requirement | Readiness |
|---|---|
| 9(2)(a) Risk identification | 15% |
| 9(2)(b) Risk evaluation | 10% |
| 9(2)(d) Risk management measures | 25% |
| 9(6) Testing for risk management | 65% |
| 9(7) Lifecycle risk management | 5% |
Article 15: Accuracy, Robustness and Cybersecurity
| Requirement | Readiness |
|---|---|
| 15(1) Accuracy levels | 45% |
| 15(2) Error resilience | 35% |
| 15(3) Manipulation robustness | 10% |
| 15(4) Cybersecurity | 20% |
Article 17: Quality Management System
| Requirement | Readiness |
|---|---|
| 17(1)(a) Compliance strategy | 5% |
| 17(1)(c) Test/validation procedures | 60% |
| 17(1)(g) Post-market monitoring | 0% |
Recommendations
Immediate (Week 1)
- Create CLAUDE.md with Django conventions, MVT architecture overview, security patterns, and ORM rules -- 1 hour effort, high impact
- Add 3 pre-commit hooks for secret scanning, migration file co-location, and security module protection -- 2 hours effort
- Audit and remediate the 25 flagged potential secrets: triage each finding, then rename test fixtures so they are unmistakably fake -- 2 hours effort
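To make the first recommendation concrete, here is a sketch of what a CLAUDE.md could contain. The specific rules are illustrative examples drawn from well-known Django conventions, not an official policy file:

```markdown
# CLAUDE.md (sketch -- contents are illustrative)

## Architecture
- Django follows MVT: models (ORM), views, templates. Keep business logic
  out of templates.

## Security rules
- Never build SQL with string formatting; use the ORM, or pass `params`
  to `cursor.execute()`.
- Never remove CSRF middleware or add `@csrf_exempt` without explicit review.

## Conventions
- Model changes must ship with their migration in the same commit.
- Tests subclass `django.test.TestCase` and follow the existing suite's style.
```

A file like this costs roughly an hour to write and is loaded automatically at the start of every AI session, eliminating the zero-context problem identified above.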
Short-term (Month 1)
- Deploy enforcement ladder with L5 hooks for security-critical paths (middleware, auth backends, CSRF, SQL query construction)
- Set up violation tracking to build a risk register from enforcement data
- Create AI agent governance documentation mapping to EU AI Act articles
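For the security-critical paths, Claude Code hooks can gate tool use directly. The sketch below follows Claude Code's `settings.json` hooks schema; `scripts/guard_security_paths.py` is a hypothetical guard script that would block edits to protected modules:

```json
{
  "hooks": {
    "PreToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [
          {
            "type": "command",
            "command": "python scripts/guard_security_paths.py"
          }
        ]
      }
    ]
  }
}
```

Unlike prose rules, a PreToolUse hook runs before the agent's edit lands, so a violation is blocked rather than merely discouraged -- and each blocked attempt can feed the violation-tracking risk register.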
Appendix: Raw Scan Data
Want this analysis for your codebase?
Get the same structural governance audit -- risk classification, violation scan, and enforcement recommendations.
Request a Free Audit