Mezzo

Private, real-time volume awareness for people taking calls in shared offices

Noise-canceling headphones remove the ambient feedback people normally use to regulate their speaking volume, and the only fallback — late, emotionally loaded social correction — makes things worse. This project explores what volume awareness looks like when it arrives as a continuous, private signal instead of a pointed comment.

Consumer Product · Human-Centered Design · High-Stakes Domain · Trust-Sensitive Design
Project Context
  • Role: Product Designer — sole decision-maker across research, interaction design, and build. Scoped the problem, defined the intervention, and shipped a working MVP.
  • Team: Solo designer; 5 research participants (interviews + prototype feedback)
  • Context: Personal project; standalone web app for shared office environments
  • Duration: 6 days (research → synthesis → prototype → user feedback → mockups → iterate → QA → launch MVP)
  • Status: Working MVP launched — try it live. Core monitoring, feedback escalation, and PiP support included.
Innovation
Reframed volume management as a feedback loop problem, not a behavior problem — enabling a design intervention that closes the gap before social friction occurs.
Technology Lens
Intentionally non-AI in the product itself: real-time audio level measurement only, no recording or transcription. AI tools accelerated prototyping and build.
The Problem

Headphones Remove Self-Awareness — With Nothing to Replace It

Return-to-office mandates have pushed people back into open floor plans, but there still aren't enough meeting rooms to support the number of video calls happening throughout the day. So people take meetings at their desks — usually wearing noise-canceling headphones.

Headphones solve one problem — but they quietly create another. They filter out ambient noise so you can focus on the meeting. But the same isolation that blocks incoming sound also removes the ambient feedback you normally use to regulate your speaking volume. With that feedback gone, you can't tell how loud you are anymore.

Some people compensate by wearing their headphones over one ear only. It's an unsatisfying workaround — environmental noise competes with meeting audio, and the split attention undermines the focus the headphones were meant to provide.

For considerate people, this creates a constant, unresolved question: Am I too loud right now? Can people around me hear this conversation?

And headphones aren't the only trigger. When meeting rooms are full, people huddle at desks or talk in common areas. In the flow of conversation, especially when it becomes animated, they lose track of how far their voices carry.

By the time someone reacts, the disruption has already happened — a look, a comment, a pointed gesture — and the person speaking is the last to know.

The Insight

Social Feedback Is the Only Fallback — and It Makes Things Worse

This isn't a problem of inconsiderate people. Most people who are too loud in shared spaces don't know it — and would adjust immediately if they did.

Social feedback fails in predictable, compounding ways. It arrives late and carries emotional weight.

"It's kind of deflating. Especially if you're getting loud because you're getting passionate."
— P04

Instead of helping people recalibrate, it causes withdrawal.

"People have come over to me and literally said, 'you're too loud.' And then I spend the rest of the meeting trying to be really quiet."
— P05

The feedback channel itself produces the wrong behavior.

The Structural Solution

Replace Social Correction With a Private, Continuous Signal

The intervention isn't "be quieter." It's closing the awareness gap before social feedback kicks in.
  • Signal must be continuous and real-time
  • Signal must be private and non-judgmental
  • Signal must feel ambient, not corrective
Mezzo monitors speaking volume in real time and provides visual feedback through three states.
  • Within range
  • Outside range briefly
  • Outside range sustained

The system is equally defined by what it doesn't do. No recording. No transcription. No alerts. No labeling behavior.
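Under the hood, "speaking volume" is just a running level computed from raw microphone samples — in a browser, the buffer would come from a Web Audio AnalyserNode via getFloatTimeDomainData(). A minimal sketch in TypeScript (the function name and dBFS convention are illustrative assumptions, not the actual product's code):

```typescript
// Rolling level measurement: the only signal the system derives from audio.
// `samples` is a Float32Array of time-domain values in [-1, 1], as returned
// by AnalyserNode.getFloatTimeDomainData(). Nothing is stored or transcribed.

// Root-mean-square level of a sample buffer, in decibels relative to
// full scale (0 dBFS = maximum). Silence maps to -Infinity.
function rmsDb(samples: Float32Array): number {
  let sumSquares = 0;
  for (let i = 0; i < samples.length; i++) {
    sumSquares += samples[i] * samples[i];
  }
  const rms = Math.sqrt(sumSquares / samples.length);
  return 20 * Math.log10(rms);
}
```

Each buffer is reduced to a single number and discarded, which is what makes the "no recording, no transcription" claim structural rather than a policy promise.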

Real-time photoshoot simulation with earbuds.
Casual practice session on speaker.

Key Strategic Decisions

Every decision was filtered through the same question: would this make someone feel nudged — or watched?

Ephemeral by Design: No Accounts, No Stored Data, Organic Visuals
  • Observed: Technical UI language ("audio captured," "level detected") triggered privacy concerns and surveillance associations in participant testing.
  • Decision: Structurally ephemeral: no accounts, no saved data, no audio storage. Organic visuals (ink brush, hand-drawn forms) over technical precision.
  • Tradeoff: No usage history, no personalization over time, no ability to show users their own improvement patterns.
  • Trust Implication: Ephemeral architecture and organic aesthetics make surveillance associations structurally impossible — not just discouraged by messaging.
Expanded the "You're Fine" Range for Human Behavior
  • Observed: Early thresholds calibrated for acoustic accuracy. Normal behavior — laughing, getting excited, leaning back — triggered warnings.
  • Decision: Widened the acceptable range significantly. Required sustained loudness before escalation. Brief spikes never trigger feedback.
  • Tradeoff: Some genuinely loud moments go unaddressed. The system deliberately under-reports to preserve trust.
  • Adoption Implication: Flagging normal behavior trains users to ignore the system permanently. Threshold forgiveness protects the feedback channel for moments that actually matter.
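The duration gate described above can be sketched as a small state machine: any reading back inside the acceptable range resets the clock, so brief spikes never escalate and only sustained loudness does. The class name, the -20 dBFS threshold, and the 3-second window are illustrative assumptions, not product values:

```typescript
type FeedbackState = "within" | "brief" | "sustained";

class EscalationGate {
  private loudSinceMs: number | null = null;

  constructor(
    private readonly thresholdDb = -20, // generous "you're fine" ceiling (assumed value)
    private readonly sustainMs = 3000,  // loudness must persist this long (assumed value)
  ) {}

  // Feed one level reading (dBFS) with its timestamp; get the feedback state.
  update(levelDb: number, nowMs: number): FeedbackState {
    if (levelDb <= this.thresholdDb) {
      this.loudSinceMs = null; // any dip back into range resets the clock
      return "within";
    }
    if (this.loudSinceMs === null) this.loudSinceMs = nowMs;
    return nowMs - this.loudSinceMs >= this.sustainMs ? "sustained" : "brief";
  }
}
```

The reset-on-dip behavior is what makes the system deliberately under-report: a laugh or an excited moment registers as "brief" at most, and the escalated state is reserved for genuinely sustained loudness.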
Separated System Liveness From Loudness Feedback
  • Observed: Audio bars served double duty — showing volume and implying system activity. In quiet environments, flat bars looked like a broken tool.
  • Decision: Split into two signals: pulsing glow for liveness, audio bars for volume. Escalation responds to duration, not momentary spikes.
  • Tradeoff: More visual elements to maintain. Brief loud moments receive no feedback, even when genuinely disruptive.
  • System Reliability Impact: Users can verify system activity regardless of volume. Feedback maps to sustained disruption patterns, not momentary acoustic events.
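The split can be sketched as two independent signals derived from the same sample stream: liveness depends only on whether buffers keep arriving, loudness only on the most recent level. The class name and the 1-second staleness window are illustrative assumptions:

```typescript
class SignalMonitor {
  private lastSampleMs = -Infinity;
  private levelDb = -Infinity;

  // Called whenever a new audio buffer arrives — including silent ones.
  ingest(levelDb: number, nowMs: number): void {
    this.lastSampleMs = nowMs;
    this.levelDb = levelDb;
  }

  // Drives the pulsing glow: true while samples are still flowing.
  isLive(nowMs: number, staleAfterMs = 1000): boolean {
    return nowMs - this.lastSampleMs < staleAfterMs;
  }

  // Drives the audio bars: the most recent level, independent of liveness.
  currentLevelDb(): number {
    return this.levelDb;
  }
}
```

Because liveness is driven by sample arrival rather than sample amplitude, a flat meter in a quiet room still pulses — the "broken tool" ambiguity is eliminated at the signal level.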
Pivoted From Browser Extension to Standalone Web App
  • Observed: Browser extension required multi-day platform approval and produced inconsistent microphone permission flows across browsers.
  • Decision: Rebuilt as a standalone web app with Picture-in-Picture capability for peripheral use during meetings.
  • Tradeoff: Lost always-on background capability. Users must keep a tab open or launch PiP manually. No persistent taskbar presence.
  • Usability Impact: Consistent mic permissions across browsers. Eliminated distribution friction that would have blocked iteration and testing within the 6-day timeline.

Impact At a Glance

Mezzo was validated through a video prototype walkthrough with five participants and launched as a working MVP within six days. Validation focused on legibility, trust, and emotional tone — the adoption gates identified in the design.

Quantitative Impact

These metrics reflect comprehension and trust signals from the five-participant walkthrough — evidence that the design is legible and emotionally safe, not proof of sustained real-world adoption.

5/5 participants
Understood the tool from the walkthrough alone

without explanation or onboarding

5/5 participants
Read the tone as neutral and supportive

no participant felt judged or corrected

4/5 participants
Requested native meeting integration

Zoom, Teams, or taskbar presence

6 days
Launched working MVP

research → synthesis → prototype → feedback → mockups → iterate → QA → ship

Metric Interpretation: These signals reflect comprehension and emotional safety, not usage or adoption — the tool has not been tested in sustained real-world use. What they confirm is that the core design bets (non-judgmental tone, legible feedback model, privacy-safe architecture) survived first contact with the target audience. A measurement framework targeting adoption, engagement, and self-reported usefulness is in place for post-launch validation.

What Did Not Happen:
  • No participant raised privacy concerns about the tool itself — despite high sensitivity to being monitored at work
  • No participant felt the feedback was punitive or judgmental
  • No participant misunderstood what the tool was doing or how it worked

Qualitative Impact

"Might be nice to have a more obvious indicator of when you're over your intended volume for a sustained period of time."

— P04

This was the only substantive design critique — and it validated the temporal escalation model already built into the system rather than challenging it.