How Sensemaking AI teaches

Practical judgment with AI is a capability.

You do not develop it by memorizing definitions of LLMs or reading about prompt patterns. You develop it the way you develop any practical skill: by encountering real situations, making real decisions, and reflecting on what the situation actually required.

The triad

Three things every good decision with AI requires.

Underneath all the specific skills, three capacities determine whether someone uses AI well or badly.

Capability

Can I actually use the tool? This includes knowing what AI can do, breaking problems into parts, choosing useful structures, and debugging when things go wrong.

Judgment

Should I use it here? This includes evaluating output, recognizing uncertainty, choosing between tradeoffs, and knowing when AI is not the answer.

Responsibility

What happens because I used it? This includes privacy, accountability, fairness, downstream effects, and where human oversight needs to stay in the loop.

Most AI training teaches only the first one. Sensemaking AI teaches all three, in proportion to how much each one actually matters in real work.

Capability model

Six capabilities, mapped to three capacities.

The triad organizes six concrete skills the curriculum develops over time.

The six capabilities, each drawing on one or more of the three capacities (capability, judgment, responsibility):

Problem Framing
Representation
Tool Choice
Workflow Debugging
AI Judgment
Governance

Problem Framing

Clarifying what you are actually trying to solve before reaching for tools. Distinguishing symptoms from causes. Identifying who is affected and what better would look like.

Representation

Choosing useful ways to organize a problem: a list, a table, a workflow, a decision tree. Noticing when the structure you are using is hiding important details.

Tool Choice

Matching tasks to the right kind of tool, including knowing when not to use AI. Comparing automation, rules, search, databases, and models against task stability, risk, and data quality.

Workflow Debugging

Finding where a process breaks down and improving it step by step. Separating data problems from process problems. Testing one change at a time instead of guessing.

AI Judgment

Knowing when AI output needs verification, context, or human interpretation. Spotting when an answer may be incomplete, biased, outdated, or overconfident.

Governance

Considering privacy, consent, accountability, safety, and fairness. Knowing where information should not be entered, and who is responsible if an automated process causes harm.

Learning pathways

Five pathways for practicing what matters.

The capabilities do not get learned in the abstract. They get practiced inside specific kinds of situations.

See the Problem · Clarify the real question.

Learn to slow down, define the problem clearly, and avoid solving the wrong thing beautifully.

Model the World · Choose a useful representation.

Learn how tables, workflows, maps, timelines, and decision trees make messy situations easier to reason about.

Choose the Right Tool · Match the task to the method.

Learn when to use checklists, spreadsheets, automation, databases, search, AI, or human judgment.

Debug the Workflow · Find weak links and fix them.

Learn to diagnose where a process fails: unclear inputs, missing data, broken handoffs, poor feedback.

Stay Human in the Loop · Know where judgment belongs.

Learn to use AI responsibly by deciding where humans need to review, interpret, approve, or override.

Governance is not its own pathway. It threads through all five, because responsible practice is not a separate skill. It is a quality of every other skill.

What an exercise looks like

Decisions, not quizzes.

Each exercise opens with a realistic scenario. A small nonprofit deciding which parts of client communication to automate. A community foundation deciding how AI should help review grant applications. A small business owner trying to clarify what is actually wrong with customer support.

Learners sort realistic items into judgment categories, receive specific feedback, and see which capabilities the activity exercised.

Scenario

Is this really an AI problem?

A small nonprofit wants to respond to client questions faster. Before building anything, decide what should be automated, reviewed by a person, or investigated first.

Beginner · 8 minutes · Sort items

Judgment categories

Best for automation (stable, factual, low-risk tasks): answer hours and location.
Needs human review (sensitive, ambiguous, consequential tasks): handle crisis messages; draft a response.
Investigate first (needs rules, data, or policy clarity): flag high-risk wording; check consent status.
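One way to picture how an exercise like this could work under the hood is as a keyed sort with capability tags. The sketch below is purely illustrative; the category names, item texts, and scoring function are assumptions, not Sensemaking AI's actual data model.

```python
# Illustrative sketch of a sort-items exercise. All names here are
# hypothetical examples, not the platform's real schema.

CATEGORIES = ("automate", "human_review", "investigate")

# Each item maps to (correct category, capabilities it exercises).
ITEMS = {
    "Answer hours and location": ("automate", {"tool_choice"}),
    "Handle crisis messages": ("human_review", {"ai_judgment", "governance"}),
    "Draft a response": ("human_review", {"ai_judgment"}),
    "Flag high-risk wording": ("investigate", {"governance"}),
    "Check consent status": ("investigate", {"governance"}),
}

def score(answers: dict[str, str]) -> dict:
    """Compare a learner's sort against the key and report which
    capabilities the correctly sorted items exercised."""
    correct, missed = [], []
    exercised = set()
    for item, (key, caps) in ITEMS.items():
        if answers.get(item) == key:
            correct.append(item)
            exercised |= caps
        else:
            missed.append(item)
    return {"correct": correct, "missed": missed,
            "capabilities": sorted(exercised)}
```

A learner's answers come in as a mapping from item text to chosen category; the result feeds both the specific feedback and the "which capabilities this exercised" summary.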

Capability growth

Progress is a shape, not a number.

Most learning apps reduce progress to a percentage: 60% complete, 80% complete, finish the course. That model does not work for capability development, because capability is not a finite curriculum to complete.

Six axes: problem framing, representation, tool choice, workflow debugging, AI judgment, governance.

Six-dimensional growth

Sensemaking AI tracks progress as a six-dimensional shape, one axis per capability, that fills in unevenly as you work through different scenarios.

Scenario-weighted practice

One activity may strengthen AI judgment and governance. Another may strengthen representation or workflow debugging. The shape changes because the practice changes.
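Scenario-weighted tracking can be sketched as a six-axis vector that each activity nudges unevenly. The axis names and weight values below are assumptions for illustration only.

```python
# Hypothetical sketch of scenario-weighted capability tracking:
# progress is a six-dimensional profile, and each activity
# contributes different amounts to different axes.

AXES = ("problem_framing", "representation", "tool_choice",
        "workflow_debugging", "ai_judgment", "governance")

def record_activity(profile: dict[str, float],
                    weights: dict[str, float]) -> dict[str, float]:
    """Return an updated profile: each axis grows by the activity's
    weight for that axis, capped at 1.0."""
    return {axis: min(1.0, profile.get(axis, 0.0) + weights.get(axis, 0.0))
            for axis in AXES}

profile = {axis: 0.0 for axis in AXES}
# One activity strengthens AI judgment and governance...
profile = record_activity(profile, {"ai_judgment": 0.2, "governance": 0.1})
# ...another strengthens representation and workflow debugging.
profile = record_activity(profile, {"representation": 0.3,
                                    "workflow_debugging": 0.2})
```

Because different scenarios carry different weights, the resulting shape fills in unevenly rather than advancing toward a single completion percentage.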

No empty gamification

No streaks. No leaderboards. No completion percentages. Just a visible record of how your judgment is developing over time.