How Sensemaking AI teaches
You do not develop these capabilities by memorizing definitions of LLMs or reading about prompt patterns. You develop them the way you develop any practical skill: by encountering real situations, making real decisions, and reflecting on what each situation actually required.
The triad
Underneath all the specific skills, three capacities determine whether someone uses AI well or badly.
- **Capability.** Can I actually use the tool? This includes knowing what AI can do, breaking problems into parts, choosing useful structures, and debugging when things go wrong.
- **Judgment.** Should I use it here? This includes evaluating output, recognizing uncertainty, choosing between tradeoffs, and knowing when AI is not the answer.
- **Responsibility.** What happens because I used it? This includes privacy, accountability, fairness, downstream effects, and where human oversight needs to stay in the loop.
Most AI training teaches only the first one. Sensemaking AI teaches all three, in proportion to how much each one actually matters in real work.
Capability model
The triad organizes six concrete skills the curriculum develops over time.
| Capability | What it develops |
|---|---|
| Problem Framing | Clarifying what you are actually trying to solve before reaching for tools. Distinguishing symptoms from causes. Identifying who is affected and what better would look like. |
| Representation | Choosing useful ways to organize a problem: a list, a table, a workflow, a decision tree. Noticing when the structure you are using is hiding important details. |
| Tool Choice | Matching tasks to the right kind of tool, including knowing when not to use AI. Comparing automation, rules, search, databases, and models against task stability, risk, and data quality. |
| Workflow Debugging | Finding where a process breaks down and improving it step by step. Separating data problems from process problems. Testing one change at a time instead of guessing. |
| AI Judgment | Knowing when AI output needs verification, context, or human interpretation. Spotting when an answer may be incomplete, biased, outdated, or overconfident. |
| Governance | Considering privacy, consent, accountability, safety, and fairness. Knowing where information should not be entered, and who is responsible if an automated process causes harm. |
Learning pathways
The capabilities do not get learned in the abstract. They get practiced inside specific kinds of situations.
- **Problem framing.** Learn to slow down, define the problem clearly, and avoid solving the wrong thing beautifully.
- **Representation.** Learn how tables, workflows, maps, timelines, and decision trees make messy situations easier to reason about.
- **Tool choice.** Learn when to use checklists, spreadsheets, automation, databases, search, AI, or human judgment.
- **Workflow debugging.** Learn to diagnose where a process fails: unclear inputs, missing data, broken handoffs, poor feedback.
- **AI judgment.** Learn to use AI responsibly by deciding where humans need to review, interpret, approve, or override.
Governance is not its own pathway. It threads through all five, because responsible practice is not a separate skill. It is a quality of every other skill.
What an exercise looks like
Each exercise opens with a realistic scenario. A small nonprofit deciding what parts of customer communication to automate. A community foundation deciding how AI should help review grant applications. A small business owner trying to clarify what is actually wrong with customer support.
Learners sort realistic items into judgment categories, receive specific feedback, and see which capabilities the activity exercised.
Scenario
A small nonprofit wants to respond to client questions faster. Before building anything, decide what should be automated, reviewed by a person, or investigated first.
Judgment categories
Automate · Review by a person · Investigate first
Feedback style
“Strong reasoning. This could become a useful AI-assisted step, but only after the nonprofit defines risk categories, review procedures, and escalation paths.”
The point is not whether someone memorized a rule. The point is whether they noticed what the situation required.
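The sorting activity described above can be represented, very roughly, in a few lines. This is a hypothetical sketch: the item texts, category names, and the `grade` helper are illustrative assumptions, not Sensemaking AI's actual content or data model.

```python
# Hypothetical sketch of one sorting exercise. Item texts, category
# names, and keyed answers are illustrative, not actual curriculum content.
EXERCISE = {
    "scenario": "A small nonprofit wants to respond to client questions faster.",
    "categories": ["automate", "review by a person", "investigate first"],
    "items": {
        "Office hours and location questions": "automate",
        "Messages that mention a personal crisis": "review by a person",
        "Questions no one currently knows how to answer": "investigate first",
    },
}

def grade(answers):
    """Compare a learner's sorting to the keyed categories.

    Real feedback addresses the learner's reasoning, not just
    right/wrong matches; this only shows the basic structure.
    """
    feedback = []
    for item, expected in EXERCISE["items"].items():
        chosen = answers.get(item)
        verdict = "match" if chosen == expected else f"reconsider: keyed as '{expected}'"
        feedback.append((item, verdict))
    return feedback
```

A pass like this would sit underneath the qualitative feedback quoted above, which responds to the learner's reasoning rather than to the label alone.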
Capability growth
Most learning apps reduce progress to a percentage: 60% complete, 80% complete, finish the course. That model does not work for capability development, because capability is not a finite curriculum to complete.
Sensemaking AI tracks progress as a six-dimensional shape, one axis per capability, that fills in unevenly as you work through different scenarios.
One activity may strengthen AI judgment and governance. Another may strengthen representation or workflow debugging. The shape changes because the practice changes.
No streaks. No leaderboards. No completion percentages. Just a visible record of how your judgment is developing over time.
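The six-axis model can be sketched as a simple data structure. The axis names follow the capability model above, but the gain values and function names here are assumptions for illustration, not the product's actual scoring scheme.

```python
# Hypothetical sketch of six-axis capability tracking. Axis names come
# from the capability model; gains and function names are illustrative.
CAPABILITIES = [
    "problem_framing", "representation", "tool_choice",
    "workflow_debugging", "ai_judgment", "governance",
]

def new_profile():
    """Start every axis at zero; the shape fills in through practice."""
    return {c: 0.0 for c in CAPABILITIES}

def record_activity(profile, exercised):
    """Strengthen only the capabilities a given activity exercised."""
    for capability, gain in exercised.items():
        profile[capability] += gain
    return profile

profile = new_profile()
record_activity(profile, {"ai_judgment": 0.4, "governance": 0.2})
record_activity(profile, {"representation": 0.3})
# The profile is now an uneven six-axis shape, not a single percentage.
```

Because each activity touches different axes, two learners who complete the same number of exercises can end up with very different shapes.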