The idea underneath Sensemaking AI
The skill AI literacy keeps missing.
Something is missing from how AI literacy is being taught. Almost every course and certification focuses on getting better outputs from AI tools. Better prompts. Better workflows. Better integrations. The skill being trained is how to use the tool.
Convenience and capability are different goals.
Most current AI tools are optimized for convenience. That is not an accusation; it is an observation about design choices and what they reward. Tools that hide complexity, produce smooth-sounding answers, and minimize friction get used. Tools that ask you to slow down, inspect assumptions, or do the thinking yourself feel worse to use, even when they make you more capable.
In a single interaction, the two can look identical. Both can produce a useful answer to your question. They diverge over the long arc of a working life.
A person who only ever optimizes for convenience with AI can become less able to:
- recognize when an output is subtly wrong
- decide what kind of problem they are actually solving
- judge when human review matters
- hold their own thinking under pressure from a confident-sounding system
That is not a hypothetical risk. It is already happening, at scale, in workplaces where AI literacy got checked off as a one-day training and forgotten.
There is a name for this skill, and it is older than AI.
Philosophers call it practical wisdom: the capacity to act well in specific, uncertain situations. Cognitive scientists study a closely related capacity: how humans deploy attention when there is too much information to process and only some of it matters.
That second framing is the one I have credentials in.
My dissertation at MIT studied how people search familiar visual scenes: how they decide where to look in a complex image, given everything they already know about what kind of scene it is. The finding that mattered most: humans do not search by treating every pixel equally. They use context, expectations, and learned structure to deploy attention efficiently. They know where to look before they have finished processing what they are looking at.
That is the same skill AI literacy actually requires. Knowing where to look in a generated answer. Knowing which assumptions to inspect first. Knowing when an output’s smoothness is hiding a problem versus reflecting real understanding.
The cognitive scientists who study attention have built decades of empirical work on what makes this skill develop or atrophy. Almost none of that work has made it into AI literacy curricula yet.
This is the work Sensemaking AI is built on.
What changes when you take this seriously.
The philosophy shapes the product in concrete ways.
Sensemaking AI does not teach facts about AI. It puts learners into realistic scenarios: a small nonprofit deciding what to automate, a community foundation deciding how AI should help review grant applications, and similar situations where the hard part is judgment.
The feedback is not graded. It is reflective. The point is not whether you got the right answer. It is whether you noticed what the situation actually required: where automation was appropriate, where it was not, and what made the difference.
Progress is not tracked in modules completed. It is tracked across six durable capabilities: problem framing, representation, tool choice, workflow debugging, AI judgment, and governance. Those capabilities compound over time the way practical wisdom does.
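To make that concrete, here is a minimal sketch of what capability-based tracking could look like in code, as opposed to a module checklist. Everything here is illustrative: the names (Capability, Observation, LearnerProgress) and the structure are assumptions for this sketch, not Sensemaking AI's actual implementation.

```python
from dataclasses import dataclass, field
from enum import Enum


class Capability(Enum):
    """The six durable capabilities named above."""
    PROBLEM_FRAMING = "problem framing"
    REPRESENTATION = "representation"
    TOOL_CHOICE = "tool choice"
    WORKFLOW_DEBUGGING = "workflow debugging"
    AI_JUDGMENT = "AI judgment"
    GOVERNANCE = "governance"


@dataclass
class Observation:
    """One piece of evidence from one scenario; reflective, not a grade."""
    scenario: str           # e.g. "nonprofit deciding what to automate"
    capability: Capability
    note: str               # what the learner noticed, in free text


@dataclass
class LearnerProgress:
    """Progress as a growing body of evidence per capability,
    not a count of modules completed."""
    observations: list[Observation] = field(default_factory=list)

    def record(self, obs: Observation) -> None:
        self.observations.append(obs)

    def evidence_for(self, capability: Capability) -> list[Observation]:
        return [o for o in self.observations if o.capability is capability]
```

The design choice worth noticing is that nothing in this sketch is a score. The record is evidence that a learner can revisit, which is how practical wisdom compounds.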
Keeping a human in the loop is not framed as a fallback for when AI fails. It is framed as a deliberate design choice worth practicing. Knowing when human judgment belongs in the loop is itself a skill, possibly the most important one we can teach right now.
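As a sketch of that design choice, here is one way a human checkpoint might be declared up front in a workflow rather than bolted on after a failure. The function name, the stakes labels, and the 0.8 confidence floor are all invented for illustration.

```python
from dataclasses import dataclass
from typing import Callable


@dataclass
class Decision:
    """An AI-produced output plus the context needed to judge it."""
    output: str
    stakes: str          # e.g. "low" or "high": who is affected if this is wrong?
    confidence: float    # the system's own estimate, from 0.0 to 1.0


def run_with_human_gate(
    decision: Decision,
    human_review: Callable[[Decision], str],
    confidence_floor: float = 0.8,
) -> str:
    """Route to a human by design, not as a fallback after failure.

    The gate fires on stakes as well as confidence: a confident
    answer to a high-stakes question still gets human review.
    """
    if decision.stakes == "high" or decision.confidence < confidence_floor:
        return human_review(decision)
    return decision.output


# Usage: grant-review outputs always reach a program officer,
# however confident the system sounds.
result = run_with_human_gate(
    Decision(output="Fund this proposal", stakes="high", confidence=0.95),
    human_review=lambda d: f"Held for human review: {d.output}",
)
```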
What I am actually building.
None of this is about being skeptical of AI. The skeptics already have plenty of literature, and most of it is not building anything.
This is about something different: the assumption, quietly baked into nearly every AI literacy product right now, that getting better outputs is the same as getting better at thinking with AI. It is not.
There is a real skill underneath good AI use, and it is teachable. It is also more interesting than prompt engineering, more durable than productivity hacks, and more honest about what working with AI actually demands of a person.
That is what I am building.
If you have felt the gap I am describing, the early list is below.