State of AI Readiness · 2026
The data nobody else is publishing.
Anonymized score distributions across 7 professional domains. Every completed assessment on OutcomeOS adds a data point. No vendor surveys, no opinion polling — just rubric-graded performance.
Updated hourly · v1 · Built by Abhisek Bose
Completed assessments
8
Unique respondents
7
Domains covered
3
Weakest cluster · global
Tools, Agents & Security
Avg 65%
By domain
P25 · P50 (median) · P75 · P90
Software Engineer
n = 4
P25 75 · P50 92 · P75 100 · P90 100
Prompt Engineering
88%
Context Engineering
83%
Code Review & Hallucination
100%
Tools, Agents & Security
75%
Product Manager
n = 2
P25 52 · P50 60 · P75 67 · P90 72
Prompt Engineering
50%
Context Engineering
33%
Code Review & Hallucination
100%
Tools, Agents & Security
64%
Software Architect
n = 1
P25 57 · P50 57 · P75 57 · P90 57
Prompt Engineering
100%
Context Engineering
83%
Code Review & Hallucination
50%
Tools, Agents & Security
27%
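The P25/P50/P75/P90 bands above can be reproduced from raw scores with a nearest-rank percentile. This is a minimal sketch, assuming nearest-rank is the method used (OutcomeOS does not publish its exact interpolation rule); the example input is the four Software Engineer composite scores shown on this page.

```typescript
// Nearest-rank percentile: the score at rank ceil(p/100 * n) in the
// ascending-sorted list. Assumed method, for illustration only.
function percentile(scores: number[], p: number): number {
  const sorted = [...scores].sort((a, b) => a - b);
  const rank = Math.max(1, Math.ceil((p / 100) * sorted.length));
  return sorted[rank - 1];
}

// The n = 4 Software Engineer scores from the table above.
const scores = [75, 92, 100, 100];
const bands = [25, 50, 75, 90].map((p) => percentile(scores, p));
console.log(bands); // [75, 92, 100, 100] — matches the published bands
```

With only four samples the upper percentiles collapse onto the maximum, which is why P75 and P90 read identically for this domain.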
Real-task simulator distributions
What real-task performance looks like.
1 completed simulator run across 1 task. Each task is graded on a 5-dimension rubric including hallucinations caught, a signal MCQ tests cannot measure.
Compress a 600-line file before asking AI for the fix
engineer · mid · node · n=1
p25 49 · p50 49 · p75 49 · p90 49
Hallucinations caught
0%
Methodology
What this is.
- Every score on this page comes from a completed OutcomeOS assessment.
- Anonymized: no names, no companies, no PII. Aggregated only.
- 12 scenario questions per attempt, scored against an 8-skill rubric.
- Domains: Engineer, Architect, Frontend, PM, PgM, HR, Operations.
- Updated hourly.
What this is not.
- Not a vendor survey. Nobody self-reported.
- Not opinion polling. Each data point is a graded performance.
- Not a complete picture. Sample sizes are still growing per domain.
- Not a credential by itself. Cohort graduates receive a signed Skill Passport — a W3C Verifiable Credential.
Citation
State of AI Readiness, 2026. OutcomeOS. Generated 5/15/2026. Available at outcomeos.online/benchmarks
Free to cite in articles, decks, and reports. Attribution appreciated. If you want the raw aggregate JSON, hit the API.
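The schema of the raw aggregate JSON is not documented on this page, so the shape below is an assumption for illustration only (field names `domain`, `n`, and `percentiles` are hypothetical). As a sketch, a consumer could rank domains by median once the payload is fetched:

```typescript
// Hypothetical aggregate shape — the real OutcomeOS API schema is not
// published here; these field names are assumptions for illustration.
interface DomainAggregate {
  domain: string;
  n: number;
  percentiles: { p25: number; p50: number; p75: number; p90: number };
}

// Pick the weakest domain by median (p50) from an aggregate payload.
function weakestByMedian(rows: DomainAggregate[]): DomainAggregate {
  return rows.reduce((min, r) =>
    r.percentiles.p50 < min.percentiles.p50 ? r : min
  );
}

// Sample payload built from the numbers published on this page.
const sample: DomainAggregate[] = [
  { domain: "Software Engineer", n: 4, percentiles: { p25: 75, p50: 92, p75: 100, p90: 100 } },
  { domain: "Product Manager", n: 2, percentiles: { p25: 52, p50: 60, p75: 67, p90: 72 } },
  { domain: "Software Architect", n: 1, percentiles: { p25: 57, p50: 57, p75: 57, p90: 57 } },
];
console.log(weakestByMedian(sample).domain); // "Software Architect"
```

Note that ranking by median is one reasonable choice; with n = 1 for some domains, any single-number ranking should be read with the sample sizes in mind.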