EU AI Act enforcement begins August 2, 2026 — Are you ready?

LangChain Governance Score

Early governance signals exist but zero enforcement hooks leave 100K-star framework exposed.

100,000+ GitHub stars · Assessed: 2026-03-11

Overall Score: 40/100 (Grade: C)

Enforcement Maturity: 26/100 (Grade: D)
Context Hygiene: 75/100 (Grade: B)
Automation Readiness: 26/100 (Grade: D)
Portfolio average: 29/100 · LangChain: 40/100

Key Findings

No Hook Enforcement [CRITICAL]

Zero pre-commit or Claude Code hooks despite 18 CI/CD workflows. CLAUDE.md rules (2 found) are advisory only -- nothing structurally prevents violations of documented conventions.

25 Potential Hardcoded Secrets [CRITICAL]

25 potential secrets detected, including patterns in test files. No automated secret scanning is in place, and no test-credential naming convention distinguishes real secrets from test fixtures.
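A low-cost mitigation for this class of finding is a secret-scanning pre-commit hook. A minimal `.pre-commit-config.yaml` using the gitleaks hook might look like the sketch below (the version pin is illustrative; use the latest release):

```yaml
# .pre-commit-config.yaml -- scan staged changes for secrets on every commit
repos:
  - repo: https://github.com/gitleaks/gitleaks
    rev: v8.18.4   # illustrative pin
    hooks:
      - id: gitleaks
```

Pairing this with a documented test-credential convention (e.g. a recognizable fake-key prefix in fixtures) lets the scanner distinguish real leaks from test data.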

Monorepo Test Discovery Gap [HIGH]

No test files at the repository root. Tests live in libs/*/tests/ per monorepo convention, but governance tools that scan only the root report zero coverage.
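The gap is easy to audit mechanically. A short script such as the sketch below (the `find_untested_packages` helper is our illustration, not part of LangChain's tooling) walks the `libs/*` layout and flags packages with no `tests/` directory, which is what a monorepo-aware governance scan would need to do instead of looking only at the root:

```python
import tempfile
from pathlib import Path


def find_untested_packages(root) -> list:
    """Return names of packages under libs/ that lack a tests/ directory."""
    missing = []
    for pkg in sorted(Path(root).glob("libs/*")):
        if pkg.is_dir() and not (pkg / "tests").is_dir():
            missing.append(pkg.name)
    return missing


# Demo on a throwaway monorepo layout: one package with tests, one without.
with tempfile.TemporaryDirectory() as tmp:
    (Path(tmp) / "libs/core/tests").mkdir(parents=True)
    (Path(tmp) / "libs/community").mkdir(parents=True)
    print(find_untested_packages(tmp))  # → ['community']
```

Run against a real checkout, the same helper gives a package-level coverage signal that a root-only scan misses entirely.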

Why LangChain's Governance Score Matters

LangChain is the most widely adopted framework for building LLM-powered applications. With 100,000+ GitHub stars, it defines patterns for how enterprises integrate large language models into production systems. Its governance posture matters not just for LangChain itself, but for the thousands of production applications built on its abstractions.

LangChain stands out in our portfolio for having the highest Context Hygiene score (75/100, Grade B). The presence of CLAUDE.md (253 lines with 2 explicit rules) and AGENTS.md shows that the LangChain team is aware of AI governance needs. But awareness without enforcement is a gap, not a solution. The 2 CLAUDE.md rules are advisory only -- nothing prevents an AI agent from violating them.

Enforcement Ladder Analysis

LangChain's enforcement distribution reveals a project in transition. At L2 (prose), it has the strongest context documentation in our portfolio. At L3 (templates), 18 GitHub Actions workflows provide solid CI automation. But at L5 (hooks), nothing exists -- the documented rules have no enforcement mechanism.
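Closing the L5 gap does not require much machinery. Claude Code supports PreToolUse hooks configured in `.claude/settings.json` that run a command before the agent edits files; a sketch is below, where `scripts/check_conventions.py` is a hypothetical validator a team would write to enforce its CLAUDE.md rules:

```json
{
  "hooks": {
    "PreToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [
          {
            "type": "command",
            "command": "python scripts/check_conventions.py"
          }
        ]
      }
    ]
  }
}
```

A non-zero exit from the command blocks the tool call, which is exactly the structural prevention the advisory CLAUDE.md rules currently lack.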

The monorepo structure adds complexity. Tests distributed across libs/core/, libs/community/, and other packages make governance assessment challenging. The 1,362 deprecated/dead code markers -- the highest in our portfolio -- suggest significant technical debt that governance tools could help manage.

What This Means for Teams Using LangChain

LangChain's rapid evolution means governance is a moving target. Breaking changes, deprecated abstractions, and evolving patterns make it essential to actively manage your LangChain dependency:

  1. Extend LangChain's CLAUDE.md in your own projects with application-specific rules for chain construction and prompt management
  2. Add pre-commit hooks that validate chain definitions, prevent prompt injection patterns, and enforce output parsing
  3. Implement integration tests that verify chain behavior end-to-end, not just individual components in isolation
  4. Track deprecation warnings -- LangChain's 1,362 dead code markers indicate rapid API evolution
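A pre-commit check like the one suggested in step 2 can start as a few lines of pattern matching. The sketch below is a minimal illustration, not a complete linter; the two patterns (a hardcoded provider-key shape and dynamic code execution) are our assumptions about what a team might flag, not rules from LangChain itself:

```python
import re

# Illustrative risky patterns for LLM application code (assumptions, not
# LangChain conventions): hardcoded API keys and dynamic code execution.
RISKY_PATTERNS = [
    (re.compile(r"sk-[A-Za-z0-9]{20,}"), "possible hardcoded API key"),
    (re.compile(r"\bexec\s*\("), "dynamic code execution in a chain"),
]


def scan(source: str):
    """Return (line_number, message) pairs for lines matching a risky pattern."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for pattern, message in RISKY_PATTERNS:
            if pattern.search(line):
                findings.append((lineno, message))
    return findings


sample = 'api_key = "sk-abcdefghijklmnopqrstuv"\nresult = chain.invoke(inp)\n'
print(scan(sample))  # → [(1, 'possible hardcoded API key')]
```

Wired into pre-commit (exit non-zero when `scan` returns findings), this turns the advisory rules in CLAUDE.md into a gate that runs before every commit.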

EU AI Act Compliance Impact

LangChain is the primary framework for building AI applications that interact with users. In EU AI Act terms, LangChain applications often fall under the transparency obligations of Article 50 and may be classified as high-risk when used in regulated domains. At 18% estimated compliance readiness, the critical gaps are logging and audit trails -- LangChain's callback system provides the hooks for this, but most applications do not implement them.
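LangChain's actual callback interface is `langchain_core.callbacks.BaseCallbackHandler`; the sketch below mirrors its `on_llm_start`/`on_llm_end` method names without importing LangChain, so it runs standalone. In a real application the class would subclass `BaseCallbackHandler` and be passed via the `callbacks` argument; everything else here is our illustration:

```python
import json
import time


class AuditLogHandler:
    """Audit-trail sketch shaped like a LangChain callback handler.

    Method names mirror langchain_core.callbacks.BaseCallbackHandler,
    but nothing here imports LangChain, so the sketch runs standalone.
    """

    def __init__(self):
        self.records = []

    def on_llm_start(self, serialized, prompts, **kwargs):
        # Record every prompt sent to the model, with a timestamp.
        self.records.append(
            {"event": "llm_start", "ts": time.time(), "prompts": prompts}
        )

    def on_llm_end(self, response, **kwargs):
        # Record the model's response alongside the same trail.
        self.records.append(
            {"event": "llm_end", "ts": time.time(), "response": str(response)}
        )

    def export(self) -> str:
        """Serialize the trail, e.g. for an append-only audit store."""
        return json.dumps(self.records)


handler = AuditLogHandler()
handler.on_llm_start({}, ["What does Article 50 require?"])
handler.on_llm_end("A summary of transparency obligations ...")
print(handler.export())
```

Persisting `export()` output to an append-only store is the kind of logging and audit-trail control the readiness score penalizes when absent.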

Recommendations

Immediate (Week 1): Expand CLAUDE.md from 2 rules to 10+ covering chain construction patterns, prompt safety, and output validation (2 hours). Add secret scanning to CI pipeline (1 hour). Add 3 pre-commit hooks for security-critical paths (2 hours).

Short-term (Month 1): Deploy L5 enforcement hooks for security-critical paths (libs/core/, libs/community/). Create unified test orchestration across monorepo packages. Implement deprecation cleanup plan for 1,362 dead code markers.

Strategic (Quarter): Build enforcement ladder documentation mapping LLM application patterns to EU AI Act requirements. Establish automated chain behavior testing in CI. Implement autoresearch optimization (20-50 iterations) to tune enforcement rules for LLM-specific patterns.

Raw Scan Data

Test Files: 0 at root
Source Files: 1,672
GitHub Actions: 18
Potential Secrets: 25
TODO/FIXME: 162
Dead Code Markers: 1,362
CLAUDE.md Files: 1
L5 Hooks: 0

EU AI Act Readiness

18%

Estimated compliance readiness based on enforcement posture, documentation, and automated quality controls. EU AI Act enforcement begins August 2, 2026.

See how your project compares

Run our free governance scanner on your own repository and get an instant enforcement posture score.

Scan Your Repository
This governance assessment was generated by walseth.ai using automated enforcement posture scanning on 2026-03-11. Findings are based on static analysis of the repository structure, configuration files, and code patterns. Scores reflect a point-in-time assessment and may change as the project evolves.

Get Your Free AI Governance Audit

Submit your repository and receive a structural governance assessment -- risk classification, violation scan, and enforcement recommendations. No cost, no commitment.

Request Free Audit