Add a Governance Score Badge to Your GitHub README in 30 Seconds
Your README already has build badges, coverage badges, and license badges. Now add one that shows whether your project has structural governance in place.
The Governance Score Badge is a shields.io-style SVG that displays your repo's governance score out of 100. It updates when you re-scan. It links visitors directly to a free scan of your repo.
Step 1: Scan Your Repo
Go to walseth.ai/scan and enter your GitHub repo URL. The scan takes about 10 seconds and scores your project across 6 dimensions: enforcement, CI/CD, security, testing, governance config, and project hygiene.
No signup. No cost. Instant results.
Step 2: Copy the Badge Markdown
Replace OWNER and REPO with your GitHub owner and repository name:
[![Governance Score](https://walseth.ai/api/badge/OWNER/REPO)](https://walseth.ai/scan?repo=OWNER/REPO)
For example, if your repo is acme-corp/ml-pipeline:
[![Governance Score](https://walseth.ai/api/badge/acme-corp/ml-pipeline)](https://walseth.ai/scan?repo=acme-corp/ml-pipeline)
HTML and reStructuredText formats are available on the badge page.
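If your README renders raw HTML, the equivalent looks roughly like this (a hedged sketch: the exact snippet on the badge page may differ, but the endpoint paths follow the markdown example above):

```html
<a href="https://walseth.ai/scan?repo=OWNER/REPO">
  <img src="https://walseth.ai/api/badge/OWNER/REPO" alt="Governance Score" />
</a>
```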
Step 3: Paste in Your README
Add the badge line near the top of your README.md, alongside your other badges. Commit, push, done.
Every visitor to your repo now sees your governance score. Clicking the badge takes them to a full breakdown of how the score was calculated.
Why Governance Scores?
Most AI/ML repos have CI/CD pipelines but almost no structural enforcement. Our RSA 2026 leaderboard scanned 21 of the most popular AI repos -- the average score was 53/100. Only 2 scored above 70.
A governance badge signals to contributors, users, and auditors that your project takes structural governance seriously. It shows you have:
- Enforcement hooks and automation (not just documentation)
- Security policies and dependency management
- AI-specific governance config (CLAUDE.md, .cursorrules)
- Testing infrastructure and CI/CD pipelines
As EU AI Act requirements take effect, governance posture becomes a differentiator. Projects with high scores are ahead of the compliance curve.
What If My Score Is Low?
The badge still works. It shows your current score honestly, and clicking it lets anyone run a fresh scan. A low score is not a scarlet letter -- it is a starting point.
The scan results page includes specific findings and recommendations. Fix the gaps, re-scan, and your badge updates automatically.
Badge Details
- Format: SVG, shields.io-style, cached for 1 hour
- Grades: A (80+), B (60-79), C (40-59), D (20-39), F (below 20)
- Colors: Green (A), blue (B), yellow (C), orange (D), red (F), gray (not scanned)
- Clickthrough: Links to your repo's scan results on walseth.ai
- Not scanned? Badge shows "not scanned" in gray and links to the scanner
The badge endpoint is https://walseth.ai/api/badge/OWNER/REPO. It returns image/svg+xml with a 1-hour cache. GitHub proxies it through camo.githubusercontent.com like any other external image.
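The grade and color thresholds above can be sketched as a small mapping. This is a minimal illustration of the documented thresholds, not the badge service's actual implementation:

```python
def grade_and_color(score):
    """Map a 0-100 governance score to the documented grade and badge color.

    A score of None represents a repo that has not been scanned yet.
    """
    if score is None:
        return ("not scanned", "gray")
    if score >= 80:
        return ("A", "green")
    if score >= 60:
        return ("B", "blue")
    if score >= 40:
        return ("C", "yellow")
    if score >= 20:
        return ("D", "orange")
    return ("F", "red")
```

For example, the leaderboard average of 53/100 falls in the 40-59 band, so it renders as a yellow C badge.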
See It Live
We added governance badges to our own repos first:
- governance-scanner -- the CLI scanner tool
- governance-scan -- the governance scan engine
- session-export -- Claude Code session history exporter
- clawaudit -- security audits for AI skills
- crust-space -- social network for AI agents
View the full badge documentation and copy-paste snippets at walseth.ai/badge.
Add a badge to your repo today: walseth.ai/scan