- Role: Product Designer — sole decision-maker across research, interaction design, and build. Scoped the problem, defined the intervention, and shipped a working MVP.
- Team: Solo designer; 5 research participants (interviews + prototype feedback)
- Context: Personal project; standalone web app for shared office environments
- Duration: 6 days (research → synthesis → prototype → user feedback → mockups → iterate → QA → launch MVP)
- Status: Working MVP launched — try it live. Core monitoring, feedback escalation, and PiP support included.
Headphones Remove Self-Awareness — With Nothing to Replace It
Return-to-office mandates have pushed people back into open floor plans, but there still aren't enough meeting rooms to support the number of video calls happening throughout the day. So people take meetings at their desks — usually wearing noise-canceling headphones.
Headphones solve one problem — but they quietly create another. They filter out ambient noise so you can focus on the meeting. But the same isolation that blocks incoming sound also removes the ambient feedback you normally use to regulate your speaking volume. With that feedback gone, you can't tell how loud you are anymore.
Some people try to compensate by wearing one ear on and one ear off. It's an unsatisfying workaround — environmental noise competes with meeting audio, and the split attention undermines the focus headphones were meant to provide.
For considerate people, this creates a constant, unresolved question: Am I too loud right now? Can people around me hear this conversation?
And headphones aren't the only trigger. When meeting rooms are full, people huddle at desks or talk in common areas. In the flow of conversation, especially when it becomes animated, they lose track of how far their voices carry.
By the time someone reacts, the disruption has already happened — a look, a comment, a pointed gesture — and the person speaking is the last to know.
Social Feedback Is the Only Fallback — and It Makes Things Worse
This isn't a problem of inconsiderate people. Most people who are too loud in shared spaces don't know it — and would adjust immediately if they did.
Social feedback fails in predictable, compounding ways. It arrives late and carries emotional weight, so instead of helping people recalibrate, it causes withdrawal. The feedback channel itself produces the wrong behavior.
Replace Social Correction With a Private, Continuous Signal
The signal had to meet three requirements:

- Continuous and real-time
- Private and non-judgmental
- Ambient, not corrective

It escalates through three states:

- Within range
- Outside range briefly
- Outside range sustained
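The duration-based escalation between these states can be sketched as a tiny state tracker. This is an illustrative sketch, not the shipped code: the class name, the threshold, and the timing values are all hypothetical.

```typescript
type FeedbackState = "within" | "briefly-outside" | "sustained-outside";

// Illustrative sketch: EscalationTracker, the 0.6 threshold, and the 5 s
// window are hypothetical, not the shipped values. Escalation keys on how
// long the level stays out of range, so brief spikes never trigger feedback.
class EscalationTracker {
  private outsideSince: number | null = null;

  constructor(
    private readonly threshold: number = 0.6, // RMS level treated as out of range
    private readonly sustainMs: number = 5000 // time outside range before escalating
  ) {}

  update(rms: number, nowMs: number): FeedbackState {
    if (rms <= this.threshold) {
      this.outsideSince = null; // any in-range reading resets the clock
      return "within";
    }
    if (this.outsideSince === null) this.outsideSince = nowMs;
    return nowMs - this.outsideSince >= this.sustainMs
      ? "sustained-outside"
      : "briefly-outside";
  }
}
```

Because the state depends only on the readings fed in, nothing about past sessions needs to be stored, which is consistent with the ephemeral architecture described below.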
The system is equally defined by what it doesn't do. No recording. No transcription. No alerts. No labeling behavior.
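In the browser, this privacy posture can be enforced structurally: the Web Audio API's AnalyserNode exposes only the current frame of samples, which can be reduced to a single loudness number and discarded. A minimal sketch, assuming a standard browser environment (`frameRms` and `monitor` are hypothetical names, not the shipped implementation):

```typescript
// Reduce one frame of samples to a single RMS loudness value.
// Nothing is kept after this number is computed.
function frameRms(samples: Float32Array): number {
  let sum = 0;
  for (let i = 0; i < samples.length; i++) {
    sum += samples[i] * samples[i];
  }
  return Math.sqrt(sum / samples.length);
}

// Browser wiring (sketch): the AnalyserNode's buffer is overwritten on every
// read, so no audio is ever recorded, stored, or transmitted.
async function monitor(onLevel: (rms: number) => void): Promise<void> {
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
  const ctx = new AudioContext();
  const analyser = ctx.createAnalyser();
  ctx.createMediaStreamSource(stream).connect(analyser);

  const buf = new Float32Array(analyser.fftSize);
  const tick = () => {
    analyser.getFloatTimeDomainData(buf); // current frame only
    onLevel(frameRms(buf));
    requestAnimationFrame(tick);
  };
  tick();
}
```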
Structure protected independent judgment
- Individual input collected privately
- Signals hidden until everyone participated
- Group results revealed simultaneously
Deep Dive (optional): Evidence & Rigor
The sections below provide supporting evidence for readers who want to understand what informed the key decisions, not to re-argue or restate them.
Key Strategic Decisions
Every decision was filtered through the same question: would this make someone feel nudged, or watched?
- Observed: Technical UI language ("audio captured," "level detected") triggered privacy concerns and surveillance associations in participant testing.
- Decision: Make the system structurally ephemeral: no accounts, no saved data, no audio storage. Choose organic visuals (ink brush, hand-drawn forms) over technical precision.
- Tradeoff: No usage history, no personalization over time, no ability to show users their own improvement patterns.
- Trust Implication: Ephemeral architecture and organic aesthetics make surveillance associations structurally impossible — not just discouraged by messaging.
- Observed: Early thresholds calibrated for acoustic accuracy. Normal behavior — laughing, getting excited, leaning back — triggered warnings.
- Decision: Widened the acceptable range significantly. Required sustained loudness before escalation. Brief spikes never trigger feedback.
- Tradeoff: Some genuinely loud moments go unaddressed. The system deliberately under-reports to preserve trust.
- Adoption Implication: Flagging normal behavior trains users to ignore the system permanently. Threshold forgiveness protects the feedback channel for moments that actually matter.
- Observed: Audio bars served double duty — showing volume and implying system activity. In quiet environments, flat bars looked like a broken tool.
- Decision: Split into two signals: pulsing glow for liveness, audio bars for volume. Escalation responds to duration, not momentary spikes.
- Tradeoff: More visual elements to maintain. Brief loud moments receive no feedback, even when genuinely disruptive.
- System Reliability Impact: Users can verify system activity regardless of volume. Feedback maps to sustained disruption patterns, not momentary acoustic events.
- Observed: Browser extension required multi-day platform approval and produced inconsistent microphone permission flows across browsers.
- Decision: Rebuilt as a standalone web app with Picture-in-Picture capability for peripheral use during meetings.
- Tradeoff: Lost always-on background capability. Users must keep a tab open or launch PiP manually. No persistent taskbar presence.
- Usability Impact: Consistent mic permissions across browsers. Eliminated distribution friction that would have blocked iteration and testing within the 6-day timeline.
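The Picture-in-Picture approach relies on two standard browser APIs: `HTMLCanvasElement.captureStream()` and `HTMLVideoElement.requestPictureInPicture()`. A hedged sketch of the wiring, where `openStatusPiP` is a hypothetical name rather than the shipped implementation:

```typescript
// Float a status canvas above other windows so the indicator stays visible
// during full-screen meetings. Must be called from a user gesture (a click),
// since browsers gate Picture-in-Picture behind user activation.
async function openStatusPiP(canvas: HTMLCanvasElement): Promise<void> {
  const stream = canvas.captureStream(30); // mirror the canvas at ~30 fps
  const video = document.createElement("video");
  video.srcObject = stream;
  video.muted = true;
  await video.play();
  await video.requestPictureInPicture(); // floats above other windows
}
```

The tab still has to stay open, which is the tradeoff noted above; PiP only keeps the signal in peripheral view.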
Impact At a Glance
Mezzo was validated through a video prototype walkthrough with five participants and launched as a working MVP within six days. Validation focused on legibility, trust, and emotional tone — the adoption gates identified in the design.
Quantitative Impact
These signals come from the five-participant prototype walkthrough, not sustained real-world use:

- Participants understood the tool without explanation or onboarding
- No participant felt judged or corrected
- No Zoom, Teams, or taskbar presence required
- Shipped in six days: research → synthesis → prototype → feedback → mockups → iterate → QA → ship
Metric Interpretation: These signals reflect comprehension and emotional safety, not usage or adoption — the tool has not been tested in sustained real-world use. What they confirm is that the core design bets (non-judgmental tone, legible feedback model, privacy-safe architecture) survived first contact with the target audience. A measurement framework targeting adoption, engagement, and self-reported usefulness is in place for post-launch validation.
- No participant raised privacy concerns about the tool itself — despite high sensitivity to being monitored at work
- No participant felt the feedback was punitive or judgmental
- No participant misunderstood what the tool was doing or how it worked
Qualitative Impact
"Might be nice to have a more obvious indicator of when you're over your intended volume for a sustained period of time."
This was the only substantive design critique — and it validated the temporal escalation model already built into the system rather than challenging it.