Real governance audits of major open-source projects. Each study runs our automated enforcement posture scanner and maps its findings to EU AI Act compliance requirements.
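The checks behind these studies can be sketched in a few lines. This is a minimal, illustrative version of an enforcement-posture scan, not the actual scanner: the secret regex, the TODO pattern, and the file names treated as "awareness signals" versus "enforcement hooks" (CLAUDE.md, AGENTS.md, `.pre-commit-config.yaml`) are assumptions chosen to mirror the findings described below.

```python
# Illustrative sketch of an enforcement-posture scan. Patterns and file
# lists are assumptions, not the production scanner's rules.
import re
from pathlib import Path

SECRET_PATTERN = re.compile(
    r"""(?i)(api[_-]?key|secret|token|password)\s*[:=]\s*['"][^'"]{8,}['"]"""
)
TODO_PATTERN = re.compile(r"\b(TODO|FIXME)\b")

# Presence of these files signals awareness vs. enforcement (illustrative).
AWARENESS_FILES = ["CLAUDE.md", "AGENTS.md"]
ENFORCEMENT_FILES = [".pre-commit-config.yaml", ".git/hooks/pre-commit"]

def scan_text(text: str) -> dict:
    """Count potential hardcoded secrets and TODO markers in one file."""
    return {
        "potential_secrets": len(SECRET_PATTERN.findall(text)),
        "todos": len(TODO_PATTERN.findall(text)),
    }

def scan_repo(root: Path) -> dict:
    """Aggregate counts across *.py files and check governance files."""
    totals = {"potential_secrets": 0, "todos": 0}
    for path in root.rglob("*.py"):
        counts = scan_text(path.read_text(errors="ignore"))
        for key in totals:
            totals[key] += counts[key]
    totals["awareness_signals"] = [f for f in AWARENESS_FILES if (root / f).exists()]
    totals["enforcement_hooks"] = [f for f in ENFORCEMENT_FILES if (root / f).exists()]
    return totals
```

A repository can score well on awareness (agent instruction files present) while `enforcement_hooks` comes back empty, which is exactly the gap the case studies below keep surfacing.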
Early governance signals (CLAUDE.md, AGENTS.md) show awareness, but 68 potential secrets, 1,303 TODOs, and zero enforcement hooks reveal that awareness has not yet translated into structural enforcement.
Early governance signals (CLAUDE.md, AGENTS.md) exist, but zero enforcement hooks, 25 potential hardcoded secrets, and monorepo complexity leave significant gaps.
The most deployed Python web framework has 1,995 test files but zero enforcement hooks and no AI agent instructions, leaving governance to manual review alone.
The data validation library underpinning FastAPI and LangChain has solid test coverage but zero enforcement hooks and no AI agent instructions.
Strong test coverage (583 test files) is undermined by zero automated enforcement hooks and no AI agent instructions, leaving the project vulnerable to governance drift.
The foundational ML library has zero hardcoded secrets (the best result in our portfolio) but also zero enforcement hooks, and an embedded test structure that hides coverage from governance tools.
The leading multi-agent framework scores lowest in our portfolio: zero test files at the repository root, 56 potential secrets, and no AI agent instructions in the very infrastructure designed to orchestrate AI agents.
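Every study above flags "zero enforcement hooks." To make the term concrete, here is a minimal sketch of one such hook: a git pre-commit script that rejects staged changes containing lines that look like hardcoded secrets. The secret pattern is illustrative only, and wiring (installing it at `.git/hooks/pre-commit` or through a framework such as pre-commit) is assumed rather than prescribed.

```python
# Minimal sketch of an enforcement hook: block commits that stage lines
# matching a (purely illustrative) hardcoded-secret pattern.
import re
import subprocess
import sys

SECRET_PATTERN = re.compile(
    r"""(?i)(api[_-]?key|secret|token|password)\s*[:=]\s*['"][^'"]{8,}['"]"""
)

def blocked_lines(diff_text: str) -> list[str]:
    """Return added lines in a unified diff that look like secrets."""
    return [
        line for line in diff_text.splitlines()
        if line.startswith("+") and SECRET_PATTERN.search(line)
    ]

def main() -> int:
    # Staged changes only: what this commit would actually introduce.
    diff = subprocess.run(
        ["git", "diff", "--cached"], capture_output=True, text=True
    ).stdout
    hits = blocked_lines(diff)
    for line in hits:
        print(f"potential secret: {line.strip()}", file=sys.stderr)
    return 1 if hits else 0  # nonzero exit aborts the commit

# Installed as .git/hooks/pre-commit, the script would end with:
#   sys.exit(main())
```

The point of a hook like this is structural: a finding becomes a blocked commit rather than a note in a report, which is the difference between awareness and enforcement that these audits measure.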