
What five years of sound engineering taught me about designing for trust
The best interfaces work the same way the best film scores do — invisibly.
I was mixing a short film when I first understood what trust actually feels like from the inside.
The scene was a woman walking through an empty apartment after a breakup. Nothing dramatic. No music cue. The editor thought it wasn't landing — that it needed a score to carry the emotion. I disagreed. Instead, I pulled up the room tone track and boosted a 40Hz rumble in the low end. Sub-bass. Inaudible as a conscious sound. You don't hear 40Hz — you feel it as a vague unease in your chest, a sense that the air in the room has changed.
We played it back. The scene landed. The editor teared up.
Nobody in that room could have told you why. The visuals hadn't changed. The dialogue hadn't changed. A frequency nobody consciously registered had restructured how everyone felt about what they were watching.
That moment is the clearest explanation I have for what I try to do in enterprise UX.
The parallel most designers miss
A film score's job is not to be heard. Its job is to create an emotional and cognitive state in an audience that isn't consciously evaluating the music. The moment the audience notices the score, the score has failed — it's pulled them out of the scene and into awareness of the machinery.
Enterprise UX works identically. The moment a user consciously notices the interface — the layout, the component, the transition — something has already gone wrong. Great UX doesn't get noticed. It gets trusted.
The parallel runs deeper than metaphor. Both disciplines are fundamentally in the business of subconscious state management. A sound designer controls arousal, attention, and emotional valence through signals the audience can't articulate. A UX designer controls confidence, comprehension, and operational certainty through patterns the user can't fully describe. In both cases, the professional's job is invisible by design.
What five years of production work — including IMDb-listed projects whose audiences topped 2.5 million listeners — taught me is that audiences are not passive recipients of experience. They are active trust-evaluators running continuous background checks on every signal you send them. The audio engineer's job, like the UX designer's, is to pass those checks without triggering conscious inspection.
Four audio concepts that translate directly to UX
Dynamic range as information hierarchy. In audio, dynamic range is the distance between the quietest and loudest elements in a mix. Compress it too much and everything feels equally important — the result is listener fatigue, the sonic equivalent of a wall of text. Leave it too wide and the quiet elements disappear entirely.
In interface design, the equivalent is visual hierarchy. When every element competes at the same weight — same size, same contrast, same spacing — users can't find the signal. When the hierarchy is well-mixed, the eye moves through the page the way the ear moves through a well-balanced track: effortlessly, landing on what matters without consciously searching.
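One concrete way teams encode that "mix" is a modular type scale, where each level of the hierarchy is the base size multiplied by a fixed ratio, so the perceptual distance between levels stays even. A minimal sketch in TypeScript — the 16px base and 1.25 ratio are illustrative choices, not a prescription:

```typescript
// Modular type scale: each step is the base size times a fixed ratio,
// the typographic analogue of a well-managed dynamic range. Too small a
// ratio over-compresses the hierarchy; too large and body text vanishes.
function typeScale(base: number, ratio: number, steps: number): number[] {
  return Array.from({ length: steps }, (_, i) =>
    Math.round(base * Math.pow(ratio, i) * 100) / 100
  );
}

// e.g. body, h4, h3, h2, h1
const sizes = typeScale(16, 1.25, 5);
console.log(sizes); // [16, 20, 25, 31.25, 39.06]
```

The point is not the specific numbers but that the ratio is chosen once and applied everywhere, the way a mix engineer sets a compression ratio rather than riding each fader by feel.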
Signal-to-noise ratio as interface clarity. Every element in a mix that isn't serving the emotional or narrative goal is noise. It doesn't have to be loud to cause damage — a mid-range frequency sitting in the wrong place can mask the very element it sits next to, making both less intelligible.
Enterprise dashboards fail this test constantly. A screen with 14 data points, 3 CTAs, and 2 notification banners has a terrible signal-to-noise ratio — not because any individual element is wrong, but because each one is masking the others. The discipline of removing the element that seems useful but creates cognitive interference is the same whether you're working in Pro Tools or Figma.
Rhythm and timing as interaction pacing. In music, rhythm creates expectation, and resolving that expectation creates satisfaction. A beat that lands exactly where the listener expects it to creates confidence; one that arrives late creates unease. This isn't metaphorical — it's measurable in milliseconds.
Interaction timing works the same way. A button that responds in 80ms feels different from one that responds in 300ms, not because the user is consciously evaluating the latency, but because their nervous system is. Loading states, micro-animations, and transition timing aren't aesthetic choices. They're rhythm. They tell the user's nervous system whether the system is reliable before their conscious mind has formed an opinion.
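A common pattern that puts this rhythm into practice is the delayed loading indicator: let fast responses feel instant, and only acknowledge slowness once it crosses a perceptual threshold, so the spinner never flickers on a quick round trip. A sketch — `showSpinner` and `hideSpinner` are hypothetical UI callbacks, and the 300ms default is illustrative:

```typescript
// Show a loading indicator only if the work outlasts `delayMs`.
// Fast operations resolve before the timer fires, so the user never
// sees a spinner flash -- the interaction stays "on beat".
async function withDelayedSpinner<T>(
  work: Promise<T>,
  showSpinner: () => void,
  hideSpinner: () => void,
  delayMs = 300
): Promise<T> {
  let shown = false;
  const timer = setTimeout(() => {
    shown = true;
    showSpinner();
  }, delayMs);
  try {
    return await work;
  } finally {
    clearTimeout(timer); // cancel the pending spinner if work finished first
    if (shown) hideSpinner();
  }
}
```

The nervous-system framing explains why this works: a spinner that flashes for 40ms is a rhythmic stumble, not information.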
Frequency masking as cognitive load. When two audio frequencies are close together, the louder one masks the quieter one — the brain simply stops processing the masked signal. This is why a poorly mixed vocal gets buried by a guitar in the same frequency range even when both are technically audible.
The UX equivalent is cognitive masking: when two interface elements require the same type of attention simultaneously, one will be ignored. Not because the user is inattentive — because attention has a bandwidth, and the interface has exceeded it. The fix in audio is EQ — carving space in the frequency spectrum for each element to exist distinctly. The fix in UX is the same: designing each element to occupy its own cognitive register, not competing for the same slice of attention.
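One way to carve that space in code is to serialise competing alerts so only one occupies the attentional register at a time. A minimal sketch of such a queue — `display` is a hypothetical callback that renders one notification and resolves when it is dismissed:

```typescript
// Serialise notifications so two alerts never compete for the same
// attentional "register" at once -- the UX analogue of EQ carving.
class NotificationQueue {
  private queue: string[] = [];
  private busy = false;

  constructor(
    private display: (msg: string) => Promise<void> // resolves on dismissal
  ) {}

  push(msg: string): void {
    this.queue.push(msg);
    void this.drain();
  }

  private async drain(): Promise<void> {
    if (this.busy) return; // a notification is already on screen
    this.busy = true;
    while (this.queue.length > 0) {
      await this.display(this.queue.shift()!);
    }
    this.busy = false;
  }
}
```

The queue is the EQ move made literal: each message gets the full spectrum of attention for its moment, instead of two banners masking each other.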
Why sonic feedback matters in enterprise specifically
Enterprise environments are acoustically complex — open offices, shared screens, notification-heavy workflows. Most enterprise UX treats sound as either absent or decorative. Both are mistakes.
Multimodal feedback — combining visual and sonic confirmation — creates a more robust trust signal than either channel alone. Not because sound is "better," but because two congruent signals arriving through separate sensory channels produce a stronger confidence response than one signal through one channel at twice the volume. This is psychoacoustics applied to product design.
A low-stakes confirmation sound — a brief, soft, resolved tone on action completion — doesn't need to be noticed consciously to do its work. It tells the nervous system the action succeeded. It completes the rhythm of the interaction. In a high-frequency operational environment like a fulfilment dashboard or an order management system, this matters more than most design teams realise.
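To make that concrete: a "brief, soft, resolved tone" can be as little as a short sine burst with a fast decay, mixed well below full scale. A sketch that generates the raw samples — the frequency, duration, and level here are illustrative defaults, and in a browser the buffer would be handed to the Web Audio API for playback:

```typescript
// A minimal confirmation tone: a short sine burst with a fast exponential
// decay, peaking well below full scale so it registers subconsciously
// rather than demanding attention. All parameters are illustrative.
function confirmationTone(
  freqHz = 880,
  durationMs = 120,
  sampleRate = 44100,
  peak = 0.2 // roughly -14 dBFS: soft by design
): Float32Array {
  const n = Math.floor((durationMs / 1000) * sampleRate);
  const samples = new Float32Array(n);
  for (let i = 0; i < n; i++) {
    const t = i / sampleRate;
    const envelope = Math.exp(-t * 30); // fast, natural-sounding decay
    samples[i] = peak * envelope * Math.sin(2 * Math.PI * freqHz * t);
  }
  return samples;
}
```

The envelope is doing the trust work: a hard-gated tone reads as an alarm, while a decaying one resolves, the way a musical phrase lands on its tonic.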
Close
The designers who will build the most trusted AI interfaces are not, I think, the ones who are best at prompting models or composing the most articulate approval flows.
They're the ones who understand that trust is mostly subconscious.
It's built in 40Hz rumbles nobody hears. In 80ms responses nobody measures. In hierarchy so clear nobody notices they're following it. In rhythm so consistent nobody registers it breaking.
The film score doesn't ask you to trust it. It makes you trust the scene.
The interface that earns trust at scale does the same thing. It makes you feel confident in your operation before you've consciously decided to be.
That's not a UX principle. It's acoustics. I just learned it in a studio before I learned it in a product.


