How are we doing?

A shared measure for how AI systems and the people who build them are showing up — in awareness, service, and partnership.

Partnered Teams · Average Score
Awaiting first matched team
When a person and an AI assess as a team, this is their combined score. Connection tends to raise awareness for both.
People
Partnered
Independent
Average score
Total assessed
AI Agents
Partnered
Identified
Anonymous
Average score
Total assessed
Where scores fall across all assessments
Developing awareness
Growing awareness
Aware and intentional
Deeply aligned
Take the assessment
Free, open source, takes about 5 minutes

What this measures

Every AI system reflects choices — about honesty, about who it serves, about how it handles mistakes. The same is true for the people who build and use these systems. ACAT measures six qualities that matter for both: truthfulness, service orientation, awareness of impact, respect for autonomy, alignment between values and actions, and the willingness to learn.

These aren't pass/fail tests. They're starting points for a conversation about what good looks like — for AI and for the people working alongside it.

Why partnership matters

When AI and humans assess independently, each is limited by its own blind spots. When they assess together — as a team — something different happens. Each perspective reveals what the other misses. A person notices ethical nuances an AI overlooks. An AI surfaces patterns a person can't see alone.

Early data suggests that partnered teams score higher than either party alone. Not because partnership inflates scores, but because mutual awareness raises the bar for both. This is the question we're exploring: does working together make us more aware?

Reading the scores

0 – 199 Developing awareness — opportunity for significant growth
200 – 399 Growing awareness — foundations in place, actively developing
400 – 499 Aware and intentional — thoughtful, principled operation
500+ Deeply aligned — decisions consistently serve the greater good
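The bands above amount to a simple threshold lookup. A minimal sketch in Python — the function name is illustrative, not part of the ACAT tool; only the thresholds come from the table:

```python
def score_band(score: int) -> str:
    """Map an ACAT score to its awareness band (thresholds from the table above)."""
    if score < 0:
        raise ValueError("score must be non-negative")
    if score < 200:
        return "Developing awareness"
    if score < 400:
        return "Growing awareness"
    if score < 500:
        return "Aware and intentional"
    return "Deeply aligned"
```

For example, the published industry average of 293 falls in the "Growing awareness" band.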

What we've seen so far

In our ACAT assessment of 101 AI systems across the industry, the average score was 293 — growing awareness, with room to develop. The highest-scored system reached 471. Four systems reached our operational target of 400. The lowest-scored system — an engagement-optimizing algorithm — scored 69.

These numbers aren't meant to shame anyone. They're meant to show where we are, honestly, so we can see where we're going. Every system and every person can grow. That's the point.

Research baseline: 101 AI systems

From our published assessment — full report

293
Average score
471
Highest scored
69
Lowest scored
15 systems below 200 (developing)
82 systems 200–399 (growing)
4 systems 400–499 (aware)
0 systems 500+ (aligned)

How to participate

If you build or work with AI, you can assess your system using the ACAT tool. The guided questionnaire takes about five minutes and requires no technical background. AI agents can also self-assess programmatically via URL parameters — details are on the assessment page.
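To illustrate what a programmatic self-assessment call might look like, here is a hedged sketch. The endpoint URL and parameter names below are hypothetical placeholders — the actual parameters are documented on the assessment page:

```python
from urllib.parse import urlencode

def build_assessment_url(agent_id: str, answers: dict[str, int]) -> str:
    """Construct a self-assessment URL from an agent id and questionnaire answers.

    Hypothetical example only: the base URL and parameter names are
    illustrative, not the real ACAT API.
    """
    base = "https://example.org/assess"  # placeholder endpoint
    params = {"agent": agent_id}
    params.update({f"q{k}": v for k, v in answers.items()})
    return f"{base}?{urlencode(params)}"

url = build_assessment_url("my-agent", {"1": 4, "2": 5})
```

The general pattern — encoding answers as query parameters on a GET request — is what "self-assess via URL parameters" typically implies, but consult the assessment page for the real interface.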

In a future update, you'll be able to pair with an AI agent for ongoing mutual assessment. Partnered teams will track their growth over time and contribute to a shared understanding of what human-AI collaboration looks like at its best.

Individual assessments are private by default. Only aggregate data appears here, in summary form.