A good 16-minute technical/mathematical video titled "Fading Audio is ROUGH on CPUs" was uploaded the other day by a female programmer. It covers the history of CPUs, floating point performance, and how CPU design evolved to deal with this problem, and it ends with a short demonstration using a multitrack song and plug-ins showing CPU spikes. I remember how awful non-Intel CPUs were in the early 2000s for music; I went all Intel at the first opportunity in early 2002 and have stuck with them ever since. The question the video poses is whether the method adopted back then should be kept or not.
Fading out audio is one of the most CPU-intensive tasks you can possibly do!
When numbers get really small, the cost of a single CPU operation can explode 100-fold. Both x86 and ARM have special CPU instructions just to handle it. But why?
In this video, we'll explore the IEEE 754 (floating point) standard, the fight between Intel and DEC, and I'll write some demonstration C++ code that illustrates this problem even today!
---
Timestamps:
00:00 Subnormal Arithmetic Cost
02:25 An Accuracy Debate...
06:28 Too small to calculate?
08:56 IEEE 754 Standard
10:17 Digital Audio Workstation Conundrum
14:18 A Massive CPU Spike
Work built on the shoulders of William Kahan, the father of the IEEE floating point standard. His work at Hewlett-Packard was groundbreaking in terms of numerical analysis and error propagation, and it led to some pretty amazing machines. There's a bunch of material about him that's well worth a read.
Incidentally, I still marvel that the Raspberry Pi Foundation's second generation chip, the RP2350, has full hardware floating point support in a chip that sells for a pound or so. When I were a young lad, a floating point coprocessor sold for hundreds of quid (this was just before the PC era, but after the Z80 first became available).
Neo-Classical Guitar Man wrote: ↑Thu Aug 14, 2025 12:48 am
... I remember how awful non-Intel CPUs were in the early 2000s for music, and I went all Intel at the first opportunity in early 2002 and stuck with them ever since. ...
The video is saying that an Intel Pentium 4 can be particularly bad for audio because of the way it treats denormals (as I call them).
Believe it or not, it was AMD that came up with x86_64. Intel wanted to go with the Itanium architecture, which never caught on.
Yes, that is true. In my case it was partly floating point performance, but it was especially VIA chipset issues that derailed my studio life for a couple of years. I had a Pentium 4 for a while, one of the later models, and it was generally okay for my needs, but otherwise it was as you and the girl in the video describe.
Pentium 4 issues were fixed with 'flush denormals to zero'. If a DAW and its plug-ins were compiled with that option, all should have been well, though it did take a while before all audio software adopted it.
I found this post on the Ardour (an open source DAW) blog from 2006:
merlyn wrote: ↑Fri Aug 15, 2025 6:29 pm
I liked the video. Laurie Wired uses Linux.
It's her speciality, I gather. Her channel is one of the better ones out there. Sure, she toes the YouTube line by knowing how to interest us boys, but she can also walk the talk with her knowledge and experience, so fair play to her; I hope she does well.