We scanned seven major open-source Python and AI/ML frameworks for enforcement maturity, context hygiene, and automation readiness. The average score is 29/100, and none has L5 enforcement hooks. A sketch of the kinds of signals the scan looks for follows the table.
| Framework | Score | Grade | GitHub Stars |
|---|---|---|---|
| Transformers | 45/100 | C | 140,000+ |
| LangChain | 40/100 | C | 100,000+ |
| Django | 29/100 | D | 80,000+ |
| FastAPI | 29/100 | D | 80,000+ |
| Pydantic | 29/100 | D | 22,000+ |
| scikit-learn | 18/100 | F | 60,000+ |
| CrewAI | 13/100 | F | 25,000+ |
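
To make the scoring concrete, here is a minimal sketch of the kind of presence checks a governance scan can run against a local checkout. The specific file names (`.pre-commit-config.yaml`, `.github/CODEOWNERS`, `AGENTS.md`, and so on) are common conventions used for illustration, not the exact rule set behind the scores above.

```python
from pathlib import Path

# Illustrative signals only; a production scanner uses a broader rule set.
ENFORCEMENT_SIGNALS = [
    ".pre-commit-config.yaml",  # pre-commit hooks
    ".github/workflows",        # CI pipelines
    ".github/CODEOWNERS",       # review routing
]
CONTEXT_SIGNALS = [
    "AGENTS.md",                # AI agent instructions
    "CLAUDE.md",
    "CONTRIBUTING.md",
]


def scan(repo: str) -> dict:
    """Report which governance signals exist in a local checkout."""
    root = Path(repo)
    return {
        "enforcement": {s: (root / s).exists() for s in ENFORCEMENT_SIGNALS},
        "context": {s: (root / s).exists() for s in CONTEXT_SIGNALS},
        "test_files": sum(1 for _ in root.rglob("test_*.py")),
    }


if __name__ == "__main__":
    import json
    import sys

    print(json.dumps(scan(sys.argv[1]), indent=2))
```

A test-file count like Django's 1,995 can be approximated this way, but presence checks alone do not show whether hooks actually gate merges, which is why enforcement maturity is scored separately from test coverage.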
- **Transformers (45/100):** The largest AI framework shows governance awareness, but enforcement has not kept pace.
- **LangChain (40/100):** Early governance signals exist, but zero enforcement hooks leave the 100K-star framework exposed.
- **Django (29/100):** The most widely deployed Python web framework has 1,995 test files but zero enforcement hooks.
- **FastAPI (29/100):** Strong test coverage is undermined by zero enforcement hooks and no AI agent instructions.
- **Pydantic (29/100):** The data validation library underpinning FastAPI and LangChain has zero enforcement hooks.
- **scikit-learn (18/100):** The foundational ML library turned up zero secrets in our scan but has no structural enforcement (a sketch of this kind of check follows the list).
- **CrewAI (13/100):** The leading multi-agent AI framework scores lowest in our governance portfolio.
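
The secrets signal mentioned for scikit-learn comes from scanning the source tree for credential-like strings. Below is a deliberately minimal, hypothetical version of that check; the two regex patterns are placeholders, and real secret scanners ship far larger rule sets plus entropy heuristics.

```python
import re
from pathlib import Path

# Two illustrative patterns; real scanners ship hundreds of rules.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                                # AWS access key ID
    re.compile(r"(?i)api[_-]?key\s*[:=]\s*['\"][^'\"]{16,}['\"]"),  # hard-coded API key
]


def find_candidate_secrets(repo: str):
    """Yield (path, snippet) pairs for strings that look like credentials."""
    for path in Path(repo).rglob("*.py"):
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for pattern in SECRET_PATTERNS:
            for match in pattern.finditer(text):
                yield path, match.group(0)[:20] + "..."
```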
See how your project compares
Run our free governance scanner on your own repository and get an instant enforcement posture score.
Scan Your Repository