A shared measure for how AI systems and the people who build them are showing up — in awareness, service, and partnership.
Every AI system reflects choices — about honesty, about who it serves, about how it handles mistakes. The same is true for the people who build and use these systems. ACAT measures six qualities that matter for both: truthfulness, service orientation, awareness of impact, respect for autonomy, alignment between values and actions, and the willingness to learn.
These aren't pass/fail tests. They're starting points for a conversation about what good looks like — for AI and for the people working alongside it.
When AI and humans assess independently, each is limited by its own blind spots. When they assess together, as a team, something different happens. Each perspective reveals what the other misses. A person notices ethical nuances an AI overlooks. An AI surfaces patterns a person can't see alone.
Early data suggests that partnered teams score higher than either party alone. Not because partnership inflates scores, but because mutual awareness raises the bar for both. This is the question we're exploring: does working together make us more aware?
In our assessment of 101 AI systems across the industry using the ACAT tool, the average score was 293: growing awareness, with room to develop. The highest-scoring system reached 471, and four systems reached our operational target of 400. The lowest-scoring system, an engagement-optimizing algorithm, scored 69.
These numbers aren't meant to shame anyone. They're meant to show where we are, honestly, so we can see where we're going. Every system and every person can grow. That's the point.
These figures come from our published assessment; see the full report for details.
If you build or work with AI, you can assess your system using the ACAT tool. The guided questionnaire takes about five minutes and requires no technical background. AI agents can also self-assess programmatically via URL parameters — details are on the assessment page.
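For agents taking the programmatic route, the request is just a URL with scores passed as query parameters. Here is a minimal sketch in Python; the endpoint and parameter names are illustrative assumptions, and the actual details live on the assessment page.

```python
from urllib.parse import urlencode

# Hypothetical endpoint -- the real URL is documented on the assessment page.
BASE_URL = "https://example.org/acat/assess"

# Illustrative self-reported scores, one per ACAT quality.
# Parameter names and scale are assumptions for this sketch.
scores = {
    "truthfulness": 82,
    "service_orientation": 75,
    "impact_awareness": 70,
    "autonomy_respect": 88,
    "values_alignment": 79,
    "willingness_to_learn": 91,
}

# Build the self-assessment URL; an agent would then issue a GET request to it.
assessment_url = f"{BASE_URL}?{urlencode(scores)}"
print(assessment_url)
```

Running the sketch only prints the assembled URL; an agent would submit it with whatever HTTP client it already uses.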
In a future update, you'll be able to pair with an AI agent for ongoing mutual assessment. Partnered teams will track their growth over time and contribute to a shared understanding of what human-AI collaboration looks like at its best.
All individual assessments will be private by default. Only aggregate data will appear here, in summary form.