Public sample

Try a sample Sensemaking AI exercise.

This is a guided public walkthrough drawn from the MVP curriculum. It shows the shape of the learning experience without exposing the full activity bank.

Problem Framing · Tool Choice · AI Judgment · Governance

How the learning works

Not a quiz. A judgment rehearsal.

Sensemaking AI does not ask learners to memorize definitions or chase tool trivia. It gives them realistic situations, asks them to make a structured decision, and then reflects back what the situation required.

1. Read the situation
Start with a realistic scenario, not a generic AI concept.

2. Sort the work
Decide what should be automated, reviewed by a person, or investigated first.

3. Notice the risk
See where speed, privacy, safety, or accountability change the decision.

4. Reflect and grow
Connect the decision to durable habits like problem framing and AI judgment.

The sample exercise

Sort the work before choosing the tool.

In the full app, learners sort items themselves. This public walkthrough shows the recommended placements so the reasoning is visible without publishing the full activity logic.

Best for automation

Stable, factual, low-risk tasks with reliable source data.

Answer hours and location
Provide office directions

Needs human review

Tasks that involve sensitivity, ambiguity, or meaningful consequences.

Handle crisis messages
Draft a response to a complex concern

Investigate first

Tasks that need clearer rules, policy, data, or escalation paths.

Flag high-risk wording
Check consent status
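The recommended placements above can be sketched as a simple lookup. This is a minimal illustration, not the app's actual activity logic (which the walkthrough deliberately omits); the function name and the default-to-investigate rule are assumptions for the sketch.

```python
# Illustrative sketch of the three-bucket triage from the sample exercise.
# The six placements come from the walkthrough; the default rule is an
# assumption: when a task is unfamiliar, investigate before automating.

RECOMMENDED_PLACEMENTS = {
    "Answer hours and location": "automate",
    "Provide office directions": "automate",
    "Handle crisis messages": "human_review",
    "Draft a response to a complex concern": "human_review",
    "Flag high-risk wording": "investigate",
    "Check consent status": "investigate",
}

def sort_task(task: str) -> str:
    """Return the recommended bucket for a task, falling back to
    'investigate' for anything unknown -- the cautious choice."""
    return RECOMMENDED_PLACEMENTS.get(task, "investigate")
```

The default matters as much as the table: a task that fits no known rule is treated as something to investigate, not something to automate.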

Reflective feedback

Strong reasoning. This could become a useful AI-assisted step, but only after the nonprofit defines risk categories, review procedures, and escalation paths.

The point is not whether someone remembered a rule. The point is whether they noticed what the situation required.

What strong judgment notices

Good AI use starts before the tool.

The first question is not which model to use. It is what kind of problem this actually is.

Fast is not always safe.

Automation works best when the task is stable, low-risk, and well understood.

Human review is part of the design.

Keeping people in the loop is not a fallback. It is a deliberate choice about responsibility.

Capability growth

Progress is a shape, not a score.

The experience closes the loop. Learners leave with a clearer sense of which habits they practiced: problem framing, AI judgment, governance, and choosing the smallest responsible tool.

Your capability snapshot

Problem Framing 86%
AI Judgment 82%
Governance 74%
Tool Choice 68%
You’re building stronger problem-framing habits.

In this activity, the main win is recognizing that AI-assisted work starts with defining the situation clearly, not rushing into automation.

What next?

Sensemaking AI helps adults practice the judgment AI tools assume they already have.

Join the early list for notes on practical judgment with AI and first access when the demo is ready for broader testing.