I have a technical question for the development team

While testing and using a large number of plugins, I noticed a consistent behavior related to processing latency modes.

When plugins that normally introduce a small amount of processing latency are switched to zero-latency mode, I observe noticeably higher instantaneous CPU usage peaks.

However, when switching the same plugins back to a mode that introduces some latency, the average CPU usage increases slightly, but the momentary CPU spikes almost completely disappear.

Based on my experience, it seems that with very small system buffers, zero-latency plugins are forced to meet extremely tight real-time processing deadlines (time-stamp deadlines).

To meet these deadlines, the system generates short but intense CPU bursts. When a deadline is missed, the result is a CPU spike accompanied by an audio dropout.

On the other hand, when higher-quality modes that introduce some latency are used, the total amount of computation increases, but the available processing window becomes larger. This allows the workload to be distributed more evenly over time, significantly reducing peak CPU usage and resulting in a more stable system overall.

From my testing, introducing a small, controlled amount of latency actually improves real-time stability, despite higher average CPU load.

Could you provide a technical explanation for this behavior?

Hey mrpaman, back at it with the great questions I see!

This behaviour actually makes a lot of sense in real-time audio systems, but it can seem confusing because of how CPU usage is measured and displayed.

In zero-latency modes, plugins are forced to complete all their processing within a very tight buffer deadline as allocated by the host. At small buffer sizes this leaves almost no slack, so small variations in processing time can cause short CPU bursts. When those deadlines are missed, you see CPU spikes and sometimes (but not always) audio dropouts, even if the average CPU load is relatively low.
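To put rough numbers on how tight those deadlines get, here is a minimal sketch (the buffer sizes and sample rate are only illustrative, not tied to any particular host):

```cpp
#include <cstdio>

int main()
{
    // The whole plugin chain must finish each callback before the hardware
    // needs the next buffer, i.e. within buffer_size / sample_rate seconds.
    const double rate      = 48000.0;              // illustrative sample rate
    const int    buffers[] = {32, 64, 128, 256};   // illustrative buffer sizes

    for (int buf : buffers)
        std::printf("%4d samples at %.0f Hz -> %.2f ms per callback\n",
                    buf, rate, 1000.0 * buf / rate);
    return 0;
}
```

At 64 samples that is roughly 1.3 ms for the entire chain (and half that at 96 kHz), so even a momentary scheduling hiccup on a single plugin can miss the deadline.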

The Transform platform's architecture has been purposely engineered to suppress these problems by utilising a real-time operating system and core isolation, which limits the impact of these timing variations.

When a plugin introduces a small amount of latency, it gains greater internal buffering and a larger processing window. Although the total amount of computation may increase, the CPU load can be spread more evenly over time, which significantly reduces worst-case execution time and peak CPU usage and results in more stable plugin performance.
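A simplified way to see the extra headroom, assuming the plugin actually spreads its work across callbacks (an implementation detail that varies from plugin to plugin), is that the worst-case time to finish the work for a given block of input grows from one buffer to one buffer plus the reported latency:

```cpp
#include <cstdio>

// Worst-case time available to finish the work for a given input block when a
// plugin reports `latencySamples` of latency and spreads that work over the
// following callbacks: (bufferSamples + latencySamples) / sampleRate.
static double worstCaseWindowMs(int bufferSamples, int latencySamples, double rateHz)
{
    return 1000.0 * (bufferSamples + latencySamples) / rateHz;
}

int main()
{
    const double rate        = 48000.0;            // illustrative
    const int    buf         = 64;
    const int    latencies[] = {0, 32, 128, 512};  // hypothetical reported latencies

    for (int latency : latencies)
        std::printf("latency %3d samples -> %.2f ms to finish each block\n",
                    latency, worstCaseWindowMs(buf, latency, rate));
    return 0;
}
```

The latency itself does no work; it only buys scheduling slack, which is exactly why the average load can rise while the spikes disappear.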

So while zero-latency modes are optimised for real-time audio applications, adding a controlled amount of latency can improve stability, especially at very small buffer sizes.

Note that adding latency doesn't reduce CPU usage by itself; latent modes typically switch to different algorithms that do more total work. They often enable look-ahead or oversampling, both of which increase overall computation but are much easier to schedule reliably. The extra latency provides timing slack, not efficiency, so average CPU usage can rise while peak CPU spikes and dropouts are reduced.

I hope this makes sense to you.
As always, please report improper plugin behaviour to [email protected]

Best,
Ross
Product Specialist

I would like to add a case I encountered on my own system, which is not a Transform engine but an M3 Ultra / DAD CORE 256 setup.

In SuperRack Performer, I am currently processing approximately 96 I/O channels with around 280 plugins at a 64-sample buffer, running very aggressively in real time. Most of these plugins appear to be operating in zero-latency modes, but at a 96 kHz sample rate, a significant number of plugins that normally introduce a small amount of latency at 48 kHz effectively switch to zero latency.

When the sample rate is doubled from 48 kHz to 96 kHz, the processing window becomes much shorter, and in many cases the plugin’s internal latency is reduced dramatically or eliminated entirely. This behavior turned into a major issue for me recently while using Tone Projects’ UNISUM compressor, and I spent a considerable amount of time investigating and resolving the problem.
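Just to put rough numbers on the timing side (illustrative values only, not measured from any particular plugin): a latency that is fixed in samples halves in milliseconds when the sample rate doubles, and the per-buffer processing window halves along with it.

```cpp
#include <cstdio>

int main()
{
    // Illustrative values only: a 64-sample buffer and a plugin whose
    // look-ahead is a fixed number of samples at any sample rate.
    const int    bufferSamples  = 64;
    const int    latencySamples = 96;   // hypothetical fixed look-ahead
    const double rates[]        = {48000.0, 96000.0};

    for (double rate : rates)
        std::printf("%6.0f Hz: %.2f ms per buffer, %.2f ms plugin latency\n",
                    rate,
                    1000.0 * bufferSamples  / rate,
                    1000.0 * latencySamples / rate);
    return 0;
}
```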

Most of the errors found in the system logs looked like the following:

2026-01-24 09:56:04.856498+0900 localhost coreaudiod[171]: (AudioAnalytics) [com.apple.audioanalytics:carc] Sending message. { reporterID=169646913224708, category=IO, type=error, message=["HostApplicationDisplayID": Optional(com.WavesAudio.SuperRack-Performer), "wg_instructions": Optional(27458193), "scheduler_latency": Optional(5541), "is_prewarming": Optional(0), "anchor_sample_time": Optional(1761842), "io_page_faults_duration": Optional(0), "io_cycle_usage": Optional(1), "wg_cycles": Optional(13132715), "lateness": Optional(28), "deadline": Optional(3208290362), "io_frame_counter": Optional(-1088438848), "io_buffer_size": Optional(64), "cause_set": Optional(12), "num_continuous_nonzero_io_cycles": Optional(50088599),

Alongside this error, I found numerous HAL stability errors and log entries indicating that audio was skipped because I/O timing was slower than the processing deadline.

After analyzing these logs together with both the Waves team and the DAD team, we collectively concluded that this was not a system-level performance issue. Through extensive stress testing of all 280 plugins, both individually and in groups, I was ultimately able to isolate the problematic plugin. It was indeed that specific plugin.

The fact that the same behavior occurred even when increasing the buffer size to 96 or 128 samples was a key factor in identifying the root cause. In the end, the computer and audio interface had more than enough performance headroom, but a single plugin failing to complete its processing within the exact timing window was enough to destabilize the entire system.

By increasing the processing quality of that plugin—resulting in only about 0.3 ms of additional latency at 96 kHz—the system became stable to a degree I had not anticipated. At this point, even at a 64-sample buffer, the system is extremely stable, aside from very brief transient spikes that occur when opening plugins. I am still excluding a few plugins that generate unexplained errors, but overall stability has improved dramatically.

Fourier Audio addresses this type of issue proactively in their architecture, which is why I am very satisfied using their platform. With the defeedback plugin currently being discussed, I believe that if a small, controlled amount of latency were introduced instead of strict zero-latency processing, many users would be able to use it far more comfortably and reliably.

Running at slightly higher latency on slower computers or audio interfaces ultimately means that processing time and scheduling margin are being respected, which directly improves system stability.

Thank you very much for the excellent explanation.
