The AI Hype and the Decibel Meter

Mar 17, 2026

There was a time when "AI" was the only word anyone wanted to hear at the bank.

I was working in the Center of Excellence for Data Science. It sounds prestigious, but mostly it meant I spent my days in glass-walled meeting rooms, listening to people from other departments explain how they wanted to sprinkle "magic AI dust" on every mundane problem they had.

It was that specific moment in corporate history where if you weren't using a neural network to decide what brand of coffee to buy, you weren't being "innovative" enough.


One of these meetings was with the problematic loans department.

They had a genuine issue. Thousands of calls every day, and somewhere in that mountain of audio were the "unethical" ones. The calls where things got ugly. They wanted a way to find them without having to listen to every single minute of every single day.

My boss was already halfway there. He was talking about full transcription pipelines, sentiment analysis, and using LLMs to categorize the "ethical posture" of the conversations.

He was excited. The stakeholders were nodding. It was the perfect corporate AI project—expensive, complex, and great for a quarterly report.


But I was sitting there, looking at the actual numbers.

I asked them a simple question: "If we find these calls for you, how many can your team actually sit down and review in a week?"

They looked at each other. "Maybe 70. 80 if we really push it."

That was the moment the whole "AI magic" vision started to feel absurd to me. We were about to build a Ferrari to drive across the street. We were going to process thousands of calls with cutting-edge tech just to hand over a list that a human would only ever see the top 1 percent of.


I didn't say anything in the meeting. I just went back to my desk and started playing with the raw audio files.

I wasn't looking for "sentiment." I was looking for patterns.

And the pattern was so obvious it was almost funny. When people are being "unethical" on a recorded bank line, they aren't being subtle about it. They aren't using complex linguistic structures to hide their anger.

They are shouting.

The customers are screaming because they're losing their houses, or the workers are losing their temper because they've had ten people scream at them already that day.


I realized I didn't need an LLM. I didn't even need to know what they were saying.

I just needed to know how loud they were saying it.

I wrote a script to analyze the decibel levels across the daily call volume. I looked for the spikes—the moments where the audio clipped or the volume stayed high for more than a few seconds.

I pulled the top 80 loudest calls and sent them over.
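The script itself was nothing fancy. A minimal sketch of the idea looks something like this, assuming mono 16-bit PCM WAV files (the function name, paths, and the -10 dBFS threshold are illustrative, not the actual values I used):

```python
import wave

import numpy as np


def call_loudness_score(path, window_s=0.5, loud_db=-10.0):
    """Score a call by how long its volume stays loud.

    Computes windowed RMS level in dBFS and returns the number of
    seconds the level stays above `loud_db` -- sustained shouting,
    not a one-off pop or a dropped handset.
    Assumes mono 16-bit PCM WAV input.
    """
    with wave.open(path, "rb") as w:
        rate = w.getframerate()
        frames = w.readframes(w.getnframes())
    samples = np.frombuffer(frames, dtype=np.int16).astype(np.float64)
    samples /= 32768.0  # normalize 16-bit PCM to [-1.0, 1.0]

    win = int(rate * window_s)
    n_windows = len(samples) // win
    windows = samples[: n_windows * win].reshape(n_windows, win)

    rms = np.sqrt((windows ** 2).mean(axis=1))
    db = 20 * np.log10(np.maximum(rms, 1e-10))  # dB relative to full scale
    return float((db > loud_db).sum()) * window_s


# Rank the day's calls and keep only what the team can review:
# paths = glob.glob("calls/*.wav")  # hypothetical location
# top_80 = sorted(paths, key=call_loudness_score, reverse=True)[:80]
```

Sustained loudness, not peak loudness, is the point: a single spike can be a dropped phone, but thirty seconds above threshold is almost always a person yelling.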


It worked. Better than the AI model ever would have.

The department got exactly what they could handle, and the "unethical" behavior was right there in the red zones of the audio waves.

It taught me something about the way we work now. We are so obsessed with the "how"—the newest tool, the biggest model, the most complex path—that we completely forget about the "what."

Real data science isn't about being the smartest person in the room with the most complex code. It's about being the person who realizes that sometimes, the answer isn't in the text. It's just in the noise.