Two features, one problem: users knew the feeling they wanted, but couldn't describe it in terms the system understood. We fixed both sides of that equation.
Soundstripe users are video editors, brand teams, and agency producers. They need the right track fast — and they think in feelings and references, not genre tags or BPM. We built two features to close that gap: AI search that understands how they talk, and a metadata system that shows them why a track is the right one.
Soundstripe serves video creators, brand teams, and agencies who need licensed music fast. Their library is large and high-quality. But there's a fundamental mismatch at the heart of music search: users think in feelings and references ("something cinematic, like a Nintendo ad"), and search engines understand tags and metadata ("orchestral, 92 BPM, C major").
The gap between those two languages was killing conversions. Users would search, get results that felt slightly off, search again with different words, get frustrated, and leave. Downloads — the primary proxy for subscription intent — were low. Dead-end sessions were high.
Users weren't failing to find good music. They were failing to describe what they were looking for in terms the system understood.
These users think in visual and emotional language, not musical terminology. When they search "upbeat background music," they could mean a dozen different things. When they search "sounds like a Nike ad," the old system had nothing to work with.
The old search experience: keyword-based, no refinement, no context. One shot per query. Natural-language queries often returned thin or empty results because they didn't map cleanly onto song metadata tags.
Soundstripe had invested heavily in rich metadata for every track — tempo, key, mood tags, instrumentation, use-case labels. This metadata was already powering the filter system and search ranking. But it was almost entirely invisible to users while browsing.
The only way to see a track's full metadata was to navigate to its individual song page — a separate destination that almost no one visited. Users in the browsing flow, scrolling through results, had no way to understand why a particular track matched their search, or what made it different from the track next to it.
Gap two: rich metadata lived behind each track but was invisible in the browsing flow. No way to see why a track matched or how it differed from the one next to it.
I was the sole designer on both the AI search experience and the metadata slide-out, working with a PM, three engineers, and data/analytics throughout. A second designer provided peer review. These weren't exploratory concepts; both shipped to production.
Identifying that the search problem was fundamentally a language gap — not a relevance or catalog problem — and that solving it required meeting users at their vocabulary, not Soundstripe's.
Designing the natural language search interface, the refinement prompt system, and how results communicated why they matched the query.
Designing the metadata slide-out — how to make rich track data available in browsing context without interrupting the flow or overwhelming the interface.
Working with data/analytics to define download rate as the primary success metric, and tracking the compound effect of both features on conversion.
The first design decision was where the AI should live. One path was to keep it buried in a search bar — a slightly smarter autocomplete. I pushed in the opposite direction: give the AI a presence and let it guide the search through dialogue.
The result was Supe — a named AI music supervisor embedded directly in the product. Rather than asking users to find the right search terms, Supe asks clarifying questions, offers structured choices, and refines results conversationally. The intelligence is visible in the exchange, not just the output.
Natural language in, curated results out. Supe carries the intelligence in the conversation, so the results themselves stay familiar and scannable.
After each result set, the interface surfaced three AI-generated follow-up prompts — short suggestions for how to refine the current search ("fast, urgent, high energy," "mid tempo, heroic, soaring," "slow build, big ending"). These weren't new searches — they were extensions of the current one, building specificity progressively.
This solved the dead-end problem directly. Instead of a failed search meaning start over, it meant a suggested next step. The refinement prompts gave users a path forward without requiring them to already know what to say next.
The prompts feel like refinement, not restart. Forward momentum instead of dead ends.
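To make that mechanic concrete, here is a minimal TypeScript sketch of progressive refinement, assuming a simple query-plus-refinements state. The names and shapes are illustrative, not Soundstripe's actual implementation.

```typescript
// Hypothetical sketch: a refinement prompt extends the active query
// instead of starting a new search. Names and shapes are assumptions.

interface SearchState {
  originalQuery: string; // what the user typed, e.g. "cinematic, like a Nintendo ad"
  refinements: string[]; // accepted follow-up prompts, in order
}

// Each suggested prompt is a short phrase, e.g. "fast, urgent, high energy".
function applyRefinement(state: SearchState, prompt: string): SearchState {
  // Append rather than replace: the search gets more specific, never resets.
  return { ...state, refinements: [...state.refinements, prompt] };
}

// The query sent to the backend is the original intent plus every
// refinement the user has accepted so far.
function toSearchQuery(state: SearchState): string {
  return [state.originalQuery, ...state.refinements].join(", ");
}

// Example: a weak result set becomes a next step, not a dead end.
let state: SearchState = { originalQuery: "cinematic, like a Nintendo ad", refinements: [] };
state = applyRefinement(state, "mid tempo, heroic, soaring");
console.log(toSearchQuery(state));
// -> "cinematic, like a Nintendo ad, mid tempo, heroic, soaring"
```

The key property is the append-only state: every accepted prompt narrows the search, so a disappointing result set always has a forward path instead of a restart.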
The metadata slide-out was designed around a simple insight: users needed to see what made a track tick while they were listening to it, not after clicking away to a separate page. The solution was an inline expansion — a lightweight panel that appeared beside the track row in the results list, showing mood tags, use-case labels, tempo, key, and instrumentation without disrupting the browsing flow.
The metadata slide-out: inline, lightweight, and anchored to the track — everything visible without leaving the results list.
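For context, here is a rough sketch of the kind of per-track data the slide-out surfaces, again in TypeScript with illustrative field names rather than the real schema.

```typescript
// Hypothetical shape of the per-track metadata shown inline by the
// slide-out. Field names and values are illustrative assumptions.

interface TrackMetadata {
  moodTags: string[];        // e.g. ["heroic", "soaring"]
  useCases: string[];        // e.g. ["trailer", "brand spot"]
  tempoBpm: number;          // e.g. 92
  key: string;               // e.g. "C major"
  instrumentation: string[]; // e.g. ["strings", "brass", "percussion"]
}

// The slide-out renders this beside the track row in the results list,
// so listeners never leave the browsing flow to see why a track fits.
function renderSlideOutSummary(meta: TrackMetadata): string {
  return [
    `Mood: ${meta.moodTags.join(", ")}`,
    `Use cases: ${meta.useCases.join(", ")}`,
    `Tempo: ${meta.tempoBpm} BPM · Key: ${meta.key}`,
    `Instrumentation: ${meta.instrumentation.join(", ")}`,
  ].join("\n");
}

const example: TrackMetadata = {
  moodTags: ["heroic", "soaring"],
  useCases: ["trailer", "brand spot"],
  tempoBpm: 92,
  key: "C major",
  instrumentation: ["strings", "brass", "percussion"],
};
console.log(renderSlideOutSummary(example));
```

Keeping the summary to a handful of scannable fields is what lets it sit inline beside the track row without overwhelming the interface.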
The AI search and metadata slide-out solve different parts of the same problem. AI search meets users at their language and gets them to good results faster. The metadata slide-out helps them understand why a track is the right one — and teaches them the vocabulary to search better next time. Together, they close the loop between how users think about music and how the system organizes it.
Downloads are Soundstripe's primary indicator of subscription intent — a user who downloads a track is significantly more likely to convert to a paid plan. Both features directly moved this number.
AI search: a 45.57% download rate, nearly 8× the 5.89% rate for keyword search users, measured on an ongoing basis rather than only at launch.
AI search: 8× subscription conversion. Users who engaged with AI search converted to paid at 16.28%, versus 2.03% for those who didn't. The feature wasn't just popular; it was driving revenue.
Metadata slide-out: 32× download rate. Users who opened the slide-out downloaded at 23.59%, versus 0.72% for those who didn't. Understanding why a track fit made the difference.
Metadata slide-out: 40% of engagers downloaded within 60 minutes, compared to 4.2% of non-engagers in the same timeframe. The slide-out didn't just increase downloads; it accelerated them.
The natural next step would be personalization — using download history, saved tracks, and search patterns to proactively surface music that matches a user's taste profile before they even search. The metadata system was already rich enough to support this. The AI search behavior and the slide-out engagement data would give the personalization model strong signal. The infrastructure was there; it just needed the next design layer on top of it.