How I Build with AI

Concept Project · AI-Native Workflow

AI-Assisted Clinical Note Triage

Discovery, persona, journey mapping, requirements, and a working prototype — the full process behind the concept project used to illustrate how I build with AI.

Type Concept Project · Case Study
Domain Healthcare UX · Clinical Coordination
User Hospital Care Coordinators
Tools Claude · Lovable · AI Prototyping

Care coordinators at large academic medical centers manage panels of 50+ patients, each generating a constant stream of unstructured clinical notes. The volume is unmanageable without structure. The result: things fall through the cracks, follow-ups get missed, and coordinators spend cognitive energy parsing notes instead of acting on them.

This concept project applies AI to surface what matters, rank it by urgency, and present it clearly — while keeping the coordinator firmly in control. What follows is the full process: from first stakeholder question to working prototype.

Phase 1 · Discovery

Stakeholder Interview Synthesis

Structured discovery questions run with care coordinators at a large academic medical center. Responses synthesized with Claude after each session — turning 90-minute conversations into structured requirement lists.

Discovery Q&A · 3 sessions · Large Academic Medical Center · Care Coordination Dept.
Q 01
Who is the primary user of this tool — what kind of organization are they at?
Hospital / health system · Large academic medical center (500+ beds)
Q 02
What are the core pain points this tool solves for care coordinators?
Too many notes — no way to prioritize
Things fall through / missed follow-ups
No visibility into patient status across a panel
Q 03
What does the AI actually surface to the user in the interface?
Plain-language summary of the note
Urgency score (0–100) + confidence indicator
Actionable flags with specific next steps
Suggested coordinator action
Ability to override or correct AI output
Q 04
How does the coordinator currently manage patient note volume?
Shared Excel / Google Sheet
Manual tracking — no systematic prioritization
No audit trail · High cognitive load
Q 05
What does a successful shift look like?
Every patient has a clear next action documented
No missed follow-ups from the prior 24 hours
Discharge plans confirmed before 11 AM
Didn't work through lunch
✦ Key Finding — Most critical insight from workflow mapping
Coordinators were prioritizing based on who they remembered last — not clinical urgency. The most critical patient on the floor wasn't necessarily getting attention first. Memory is not a clinical tool.

Phase 1 · Discovery

User Persona

Composite persona built from care coordinator interviews. Used to pressure-test requirements and anchor every design decision in a real person's context.

Sarah R.
Care Coordinator
Age 34 · 6 years experience
Organization
Large Academic Medical Center
Panel Size
50+ patients at any given time
Tech Comfort
High — uses 4+ systems daily
Current Tools
Epic EHR · Excel · Email · Phone
"I feel like I'm reacting all day instead of actually coordinating care."
— Sarah R., Care Coordinator · Large Academic Medical Center
Frustrations
Starts every morning without knowing which patients are most urgent — defaults to whoever she saw last
Discovers missed follow-ups only after something goes wrong — a readmission, unsafe discharge, delayed consult
Spends 60–90 minutes each morning manually pulling charts and cross-referencing a shared spreadsheet
Goals
Opens one tool at 7 AM that tells her exactly where to start
Every patient has a documented next action before she leaves for the day
Discharge plans confirmed and communicated before 11 AM. No missed follow-ups. No surprises.

Phase 4 · Journey Mapping

Current State vs. Ideal State

Sarah's morning workflow mapped before and after the tool. Human-led — healthcare workflows are too domain-specific for AI to navigate reliably. This map pressure-tested the requirements against real workflow logic.

● Current State · Without the tool
1
Arrives. Opens Epic EHR.
Pulls up the unit census. 50+ patients. No indication of who needs attention first.
Overwhelmed
2
Opens shared Excel sheet
Cross-references yesterday's notes. Column B is out of date. Two coordinators edited the same row.
Frustrated
3
Manually works through the list
Opens each chart. Reads every new note in the order they appear. No filter. No ranking. Prioritizes whoever she gets to first.
Anxious
4
60–90 min later: ready to act
Finally has a mental model of her day. But something urgent may have already been missed.
Behind
5
Reacts to whatever surfaces
Chases incoming pages, calls, and chart updates as they land. Coordination is reactive, not proactive.
High stress
✦ Insight — Memory is not a clinical tool. The most critical patient wasn't getting attention first — whoever Sarah remembered last was.
● Ideal State · With the tool
1
Arrives. Opens the triage tool.
Queue already loaded. AI has read overnight notes and ranked by urgency. 3 critical, 4 high, 2 medium.
Oriented immediately
2
Reviews AI summary + flags
Scans plain-language summaries. Confidence score visible on each. Red flags surfaced — nephrology consult needed STAT.
In control
3
Acts with context, not memory
Selects action: follow-up, escalate, or no action. Each decision is documented and timestamped automatically.
Confident
4
15–20 min later: queue reviewed
All notes triaged. Clear action log. Discharge plans flagged for 11 AM. Nothing fell through.
Relief
5
Proactive coordination all day
Sarah is ahead of her panel instead of behind it. Coordinating, not reacting.
High confidence
✦ Outcome — From 60–90 minutes of manual triage to 15–20 minutes with full confidence everything critical was caught.
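The ranked queue Sarah opens at 7 AM boils down to a score-to-tier bucketing over the overnight notes. A minimal sketch of that logic in Python — the thresholds below are illustrative assumptions, not the prototype's documented cutoffs:

```python
# Hypothetical urgency bucketing for the morning queue.
# Threshold values are illustrative, not taken from the actual prototype.

CRITICAL, HIGH, MEDIUM, LOW = "critical", "high", "medium", "low"

def tier(urgency_score: int) -> str:
    """Map a 0-100 urgency score to a display tier."""
    if not 0 <= urgency_score <= 100:
        raise ValueError(f"urgency score out of range: {urgency_score}")
    if urgency_score >= 85:
        return CRITICAL
    if urgency_score >= 60:
        return HIGH
    if urgency_score >= 30:
        return MEDIUM
    return LOW

def build_queue(notes: list[dict]) -> list[dict]:
    """Sort overnight notes so the most urgent patient surfaces first."""
    ranked = sorted(notes, key=lambda n: n["urgency_score"], reverse=True)
    for note in ranked:
        note["tier"] = tier(note["urgency_score"])
    return ranked

overnight = [
    {"patient": "A", "urgency_score": 91},
    {"patient": "B", "urgency_score": 42},
    {"patient": "C", "urgency_score": 67},
]
queue = build_queue(overnight)
print(queue[0]["patient"], queue[0]["tier"])  # A critical
```

The point of the bucketing is the orientation it buys: instead of reading 50 charts in arrival order, the coordinator starts the day with an already-sorted worklist.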

Phase 3 · Product Proposal

MVP Requirements

Excerpted from the full product proposal document. Shared with stakeholders for alignment before any design work began. Sign-off here prevents expensive rework later.

ID · Requirement · Priority
REQ-001
AI must generate a plain-language summary of each clinical note, free of jargon, readable in under 10 seconds
Must Have
REQ-002
Urgency score (0–100) must be surfaced for every note with a visible confidence indicator — no black-box outputs
Must Have
REQ-003
AI must surface actionable flags — critical, warning, informational — with specific coordinator next steps, not generic alerts
Must Have
REQ-004
Suggested coordinator action surfaced by default — coordinator confirms or overrides
Must Have
REQ-005
AI never takes autonomous action. Every note requires explicit coordinator review before any action is logged
Must Have
REQ-006
When coordinator action differs from AI suggestion, the override is logged and used as a training signal for model improvement
Must Have
OOS-001
Writing or editing clinical notes. Tool is read-only. Coordinators document actions in their existing system.
Won't Build
OOS-002
Replacing physician judgment. Tool surfaces clinical information to coordinators — it does not make clinical recommendations to physicians.
Won't Build
✦ Design Constraint — The human must always be in the loop; AI never acts autonomously. But we went further: when a coordinator overrides the AI's suggested action, that divergence is captured and fed back into model training. Every correction makes the model smarter. This turned the override from a failure signal into a feature.

Phase 5 · AI Prototyping

First Output vs. Refined Version

This is what the AI prototyping tool produced from the requirements document — and what it became after applying design judgment. The gap between these two versions is where the work actually happens.

Before — AI first output from requirements
Clinical triage prototype — AI first output

What design judgment fixed

Information Architecture & Layout
The AI placed contextual metrics in the wrong structural location — header-level callouts with no surrounding context — and defaulted to collapsed states for the most-used content. Section header sizes were inconsistent across related panels (Needs Immediate Attention, Actions, Recent Patient Notes), creating false visual hierarchy that implied relationships between content groups that didn't exist. These are structural decisions that require understanding how a user reads and moves through a workflow, not just where components sit on a page.
AI output — inconsistent section header sizes
AI output — content truncated inside containers
Workflow & Mental Model
The AI inverted the actual clinical triage workflow — labeling already-reviewed patients in a way that contradicted how and why coordinators work through critical cases first. The interface reflected a generic task-management mental model, not the specific logic of clinical prioritization. Correcting this required domain knowledge, not prompting.
User-Centered Relevance
The AI repeatedly surfaced override tracking as a visible UI element for the end user — even after multiple explicit requests to remove it. This data has real value for model training on the backend, but nurses and coordinators are focused on patient outcomes. Surfacing it in the clinical view adds cognitive noise to a high-stakes environment. Knowing what to leave out is as important as knowing what to include.
AI output — override messaging surfaced repeatedly
Color as Communication
The default color usage swung between two failure modes: oversaturated elements that screamed for attention even when nothing was critical, and colors so muted they failed to signal anything at all. In clinical UI, color isn't decorative — it's directional. It needs to be reserved for critical states, primary actions, and status changes. A noisy palette trains users to ignore it. A washed-out one gives them nothing to hold onto.
AI output — oversaturated progress bar colors
Too saturated — progress bar
AI output — neon green confidence score
Too saturated — confidence score
AI output — colors too light, washed out
Too muted — filter labels and highlight colors
After — Design-refined version · clinicaltriageai.lovable.app ↗
Clinical triage prototype — refined version