
## New Research: Adapting to HRTFs

I spend most of my time working on my plugins or developing new tools for clients to use in their projects and products. But sometimes I have the chance to be involved in fundamental research.

A paper I co-authored with Brian Katz (Sorbonne Université, Paris) and Lorenzo Picinali (Imperial College London) was published earlier this year in Scientific Reports by Nature. If you want to read the paper, head over here – it’s Open Access so you can read it for free!

The title is: Auditory Accommodation to Poorly Matched Non-Individual Spectral Localization Cues Through Active Learning.

The paper looked at how well people can adapt to an HRTF over time with training. We then looked to see if, over time and without training, they would retain the localisation abilities they had gained. The “twist” was that we gave subjects an HRTF that was initially badly rated for them. We did this in order to investigate the worst-case scenario for content distributed without HRTF choice.

Studies like this are important for spatial and immersive audio because it still seems like it will be a while before consumers can have customised HRTFs. This means there will always be some people listening through an HRTF that is not well suited to them. If we can find ways to adapt users to these HRTFs then we can go some of the way to alleviating this problem.

#### Reference

Stitt, P., Picinali, L., & Katz, B. F. (2019). Auditory Accommodation to Poorly Matched Non-Individual Spectral Localization Cues Through Active Learning. Scientific reports, 9(1), 1063.

#### Abstract

This study examines the effect of adaptation to non-ideal auditory localization cues represented by the Head-Related Transfer Function (HRTF) and the retention of training for up to three months after the last session. Continuing from a previous study on rapid non-individual HRTF learning, subjects using non-individual HRTFs were tested alongside control subjects using their own measured HRTFs. Perceptually worst-rated non-individual HRTFs were chosen to represent the worst-case scenario in practice and to allow for maximum potential for improvement. The methodology consisted of a training game and a localization test to evaluate performance carried out over 10 sessions. Sessions 1–4 occurred at 1 week intervals, performed by all subjects. During initial sessions, subjects showed improvement in localization performance for polar error. Following this, half of the subjects stopped the training game element, continuing with only the localization task. The group that continued to train showed improvement, with 3 of 8 subjects achieving group mean polar errors comparable to the control group. The majority of the group that stopped the training game retained their performance attained at the end of session 4. In general, adaptation was found to be quite subject dependent, highlighting the limits of HRTF adaptation in the case of poor HRTF matches. No identifier to predict learning ability was observed.


## 5 Things You Should Know About Ambisonics

Ambisonics is a wonderful format for 3D sound/spatial audio for many reasons: it is flexible, interactive, future-proof, and realistic. Despite being around since the 1970s, it is still very new to a lot of people and, like every technique, it has a bit of a learning curve. Here are 5 things every beginner should know about Ambisonics before getting started.

## 1 – You can’t listen directly to Ambisonic signals

If you work with traditional surround formats (5.1, 7.1 etc.) then you’re used to sending the sound where you want it. Dialogue to come from the screen? Centre channel. Sound effects and ambiences? Rear channels. You get the drift.

Ambisonics is totally different. You take your mono or stereo sound and pass it through an encoder, such as the aXPanner, and you get B-format signals out the other side. Unlike traditional surround, you cannot pass these signals directly to your speakers and listen to them. If you do, you’ll not get anything that sounds particularly spatialised.

Instead, you’ll need a decoder that takes into account your loudspeaker positions and converts your B-format signals to loudspeaker signals. Or you can convert it to binaural 3D audio for headphone listening. The aXMonitor plugin will do this for you.

## 2 – Ambisonics gets better with order

As soon as you start reading about Ambisonics you will quickly come across phrases like first-order, third-order, higher order. But what exactly does this mean? Without going into the deep maths of it, the order is a measure of how much spatial detail is in your sound scene.

Zeroth-order is the same as an omni-directional recording – all of the sound is captured but none of the directional qualities. First-order adds in the x, y and z directions so we can now move the sound around. Higher orders use more complex mathematical functions. This increases the spatial resolution, so it’s easier to discriminate the directions of multiple sources when you are listening. If you would like to read about this in more detail you can check out one of my earlier posts.
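To make the orders a bit more concrete, here is a minimal first-order encoder sketched in Python. The function name and layout are mine (not taken from any plugin); it outputs channels in the AmbiX sequence with SN3D normalisation:

```python
import math

def encode_first_order(sample, azimuth_deg, elevation_deg):
    """Encode a mono sample to first-order Ambisonics.

    Channels are returned in AmbiX sequence (W, Y, Z, X) with SN3D
    normalisation. Positive azimuth is to the listener's left.
    """
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    w = sample                                # omni component
    y = sample * math.sin(az) * math.cos(el)  # left-right
    z = sample * math.sin(el)                 # up-down
    x = sample * math.cos(az) * math.cos(el)  # front-back
    return [w, y, z, x]

# A source panned hard left (90 degrees) puts all its energy in
# W and Y, with (essentially) nothing in Z or X.
print(encode_first_order(1.0, 90.0, 0.0))
```

Decoding then amounts to recombining these channels for your particular loudspeaker layout.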

Essentially, the higher the order you are able to use, the better the spatial quality of your work will be. The trade-off is that higher orders require more audio channels to carry the spatial information, which needs more CPU. At first-order we need 4 channels, at third-order it’s 16 channels and at seventh-order it’s 64 channels!
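Those channel counts follow a simple square law, (order + 1)², for a full 3D scene, which you can sanity-check in a couple of lines:

```python
def ambisonic_channels(order: int) -> int:
    """Number of channels needed for a full 3D scene of a given order."""
    return (order + 1) ** 2

for order in (0, 1, 3, 7):
    print(f"order {order}: {ambisonic_channels(order)} channels")
# order 1 gives 4 channels, order 3 gives 16 and order 7 gives 64
```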

Personally, I will always work in seventh-order to keep my work future-proof so I can archive in the highest possible quality. It’s trivial to convert from seventh- to first-order by dropping some channels. However, going the other way requires you to change settings or plugins throughout your project(s). Better to do it right the first time!

## 3 – The channel sequencing matters!

Your Ambisonic panner will output the signals in a specific sequence. The decoder that you use will expect them to arrive in a particular sequence. If these don’t match, the final rendering will not have the intended spatial qualities. It should be easy to work without this becoming a problem, yes?

Unfortunately, no. There are quite a few Ambisonics conventions floating around and if you are using tools from different manufacturers you need to be sure they are all working with the same sequencing format. Channel sequencing can cause headaches even for the most experienced Ambisonics users.

These different conventions have tended to arise from mathematical formulations or practical considerations during Ambisonics’ time in the wilderness. The two best known these days are FuMa (short for Furse-Malham) and AmbiX (short for Ambisonic eXchange). For first-order signals FuMa uses the channel sequence W-X-Y-Z, while AmbiX uses W-Y-Z-X. This really isn’t something you can neglect.

Thankfully, the industry seems to have largely settled on the AmbiX convention for most purposes. This means you’re less likely to run into any confusion, but it can still happen – some tools, like Sennheiser’s AMBEO microphone A-to-B plugin, give the choice of FuMa and AmbiX. Just make sure you set it to the format expected by your decoder. The aX Ambisonic plugins all use AmbiX format specifically to avoid the confusion of different formats.

## 4 – The level relationship matters, too

This one is related to the last point. Different conventions set different level relationships between the Ambisonic channel groups. For example, the omni W channel is 3 dB weaker in FuMa than in AmbiX format, while their first-order (x, y, and z) channels match in level (but, remember, not in sequencing!).

Generally, if you get your channel order correct, the level relationships will follow. You just have to be careful that you do not change the level of one channel without doing exactly the same to all of the others. Doing so will mess with the spatial qualities of your sound scene. This also applies to frequency-dependent level changes, like EQ.
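Putting the last two points together: converting first-order FuMa to AmbiX is just a channel reorder plus a 3 dB boost on W. A minimal sketch in Python (the function name is mine, and this covers first order only):

```python
import math

def fuma_to_ambix_first_order(w, x, y, z):
    """Convert one frame of first-order FuMa (W-X-Y-Z) to AmbiX (W-Y-Z-X).

    Two steps: reorder the channels into the AmbiX sequence, and boost W
    by 3 dB (a factor of sqrt(2)) to undo FuMa's attenuated W convention.
    """
    return [w * math.sqrt(2.0), y, z, x]

print(fuma_to_ambix_first_order(0.7071, 0.1, 0.2, 0.3))
```

Apply the same conversion to every sample frame and the spatial scene is preserved.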

## 5 – Ambisonics is very sensitive to phase changes

If you’re processing your B-format Ambisonics then you had better be careful you’re using the right tools. Anything you do to one channel has to be repeated exactly on all the others because even a small phase change in only one channel can ruin the spatialisation of your work.

I’ve prepared a short audio demonstration of this with a sample of pink noise. The noise is panned to the left using first-order Ambisonics and rendered binaurally using the aXMonitor. Every two seconds it switches between a correct rendering and one in which one of the B-format channels is delayed by only 0.1 ms (4.41 samples). Hardly a massive delay! With stereo it would barely be audible. With Ambisonics, it completely ruins the spatial impression – listen as the noise goes from fully left to splitting into two spatially distinct sounds.

The practical point to be made here is that plugins that change phase (or level) have to have been designed carefully. Using multi-mono plugins will apply processing individually to each B-format channel and almost certainly ruin the spatial quality of your audio. The SSA Plugins aXCompressor, aXGate and aXEqualiser give you dynamic range processing and EQ that you can apply to B-format signals and preserve the spatial integrity of your audio.

So here are 5 things you need to know about Ambisonics before you get started. If you have more questions about setting up an Ambisonic project, leave a comment or get in touch. I’m always happy to answer questions to help you down the road to spatial audio and 3D sound.


## 50% Discount on aXPanner and aXMonitor

Today I’m having an April sale and putting a 50% discount on my Ambisonic panning and decoding plugins for Windows (VST) and MacOS (VST/AU): aXPanner and aXMonitor. This offer runs until the 30th April 2018.

The aXPanner converts mono and stereo signals to YouTube360 compatible AmbiX-format Ambisonics. The aXMonitor decodes these Ambisonic signals to two-channel stereo and binaural (3D audio over headphones) formats to allow easy monitoring. Together they form the essential signal chain for spatial audio and are a great way to get started with Ambisonics.

You can check out my short tutorial on getting started with a basic Ambisonics chain here.

The aXPanner and aXMonitor are available for three levels of spatial resolution: first, third and seventh order Ambisonics. Higher orders increase the spatial fidelity of the sound scene.

This 50% discount can be combined with the 20% bundle discounts for further savings.

You can read more details about them in my web store:


## Product Spotlight: aXCompressor

The aXCompressor is a compressor VST plugin (Windows and Mac) made specifically for Ambisonics signals. It comes in three variations: first order (a1), third order (a3) and seventh order (a7), allowing you to process signals at whichever spatial resolution your project needs. They accept any Ambisonics format that has the W channel as the first channel. This means it works for both the modern AmbiX format and the legacy FuMa format.

There are plenty of Ambisonics encoders and decoders but not so many things to process between these two points on the signal chain. I wanted to help bring some of the tools we take for granted when working in stereo to VR/AR and immersive audio, hence the aX Plugins. If you’re interested in trying out any of the plugins, including the aXCompressor, you can download the demo versions. You can support future development by making a purchase from my web shop.


## Introducing the aX Ambisonics Plugins

Today I am very happy to be releasing my latest work: the aX Ambisonics plugins. They are the result of a lot of work and it is great to be able to finally release them into the world.

The aX Plugins are a set of VST plugins intended to make your work with spatial and immersive audio that little bit easier. They come in three variations each with equivalent plugins – a1, a3 and a7.

Which one you choose will depend on the level of spatial resolution you need for your project (how accurately the spatial properties are reproduced to the final listener). The different levels are known in the Ambisonics world as the order and can theoretically go to infinity. In practice we can (thankfully!) stop somewhere quite a bit before infinity! The aX Plugins give you a choice between basic, advanced and future-proof versions.

What are the plugins and what can they do?

There are currently seven plugins in each suite with a different purpose. Here is a quick summary:

1. aXPanner – a stereo to Ambisonics encoder to bring your sounds into the spatial domain.
2. aXRotate – this plugin will let you rotate a single track or a full sound scene to make sure you have everything exactly where you want it.
3. aXMonitor – Ambisonics needs a decoder to be listened to. This plugin decodes to binaural 3D audio (over headphones) or to standard stereo. This means you can always share your creativity via traditional channels.
4. aXCompressor – Ambisonics requires careful handling of the audio to avoid changing the spatial balance. aXCompressor lets you compress the signal without alteration.
5. aXGate – similarly, this plugin acts as a noise gate and downwards expander while preserving the spatial fidelity.
6. aXEqualizer – safely sculpt the tone of your signals.
7. aXDelay – get creative with five independent delay modules that can be rotated independently of the original signal.

I will be doing a series of posts going into more detail about each plugin. You can also get more information on the product pages. In the meantime, if you are curious, you can download demo versions of these plugins (for evaluation purposes only) here and if you like them you can support future development by buying them from the shop. Thanks!


## What’s Missing From Your 3D Sound Toolbox?

Audio for VR/AR is getting a lot of attention these days, now that people are realising how essential good spatial audio is for an immersive experience. But we still don’t have as many tools as are available for stereo. Not even close!

This is because Ambisonics has to be handled carefully when processing in order to keep the correct spatial effect – even a small phase change between channels significantly alters the spatial effect – so there are very few plugins that can be used after the sound has been encoded.

To avoid this problem we can apply effects and processing before spatial encoding, but then we are restricted in what we can do and how we can place it. It is also not an option if you are using an Ambisonics microphone (such as the SoundField, Tetra Mic or AMBEO VR), because it is already encoded! We need to be able to process Ambisonics channels directly without destroying the spatial effect.

So, what is missing from your 3D sound toolbox? Is there a plugin that you would reach for in stereo that doesn’t exist for spatial audio? Maybe you want to take advantage of the additional spatial dimensions but don’t have a tool to help you do that. Whatever you need, I am interested in hearing about it. I have a number of plugins that will be available soon that will fulfil some technical and creative requirements, but there can always be more! In fact, I’ve already released the first one for free. I am particularly interested in creative tools that would be applied after encoding but before decoding.

With that in mind, I am asking what you would like to see that doesn’t exist. If you are the first person to suggest an idea (either via the form or in the comments) and I am able to make it into a plugin then you’ll get a free copy! There is plenty of work to do to get spatial audio tools to the level of stereo but, with your help, I want to make a start.


## Free Ambisonics Plugin: o1Panner

I am working on some spatial audio plugins to provide some more tools for VR/AR audio and I am kicking things off with a freebie: the o1Panner. It is free to download from the Shop.

### What is it?

The o1Panner is a simple first-order Ambisonics encoder with a width control.

### How to use it

There are two display types: top-down and rectangular. The azimuth, elevation and width are controlled in different ways in each of these views. The views are selected by right clicking on the display.

For the top-down view, azimuth is controlled by clicking and dragging on the main display, the elevation is controlled by holding shift and dragging up/down and width is controlled by holding ctrl and dragging up/down.

For the rectangular view, azimuth and elevation correspond to the x- and y-coordinates respectively and width is controlled by holding ctrl and dragging up/down.

### What does it output?

The output is AmbiX (SN3D/ACN) Ambisonics. This is the format used by Google for YouTube 360 and is quickly being adopted as the standard for Ambisonics and HOA.
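For reference, the ACN part of the convention fixes the channel number of every spherical harmonic component with a simple formula: a component of degree n and index m lands at channel n(n+1) + m. A quick sketch:

```python
def acn_index(degree: int, index: int) -> int:
    """ACN channel number for the spherical harmonic of degree n, index m."""
    n, m = degree, index
    return n * (n + 1) + m

# The first-order AmbiX sequence W-Y-Z-X falls straight out of the formula:
for n, m in [(0, 0), (1, -1), (1, 0), (1, 1)]:
    print(n, m, acn_index(n, m))
# channels 0 (W), 1 (Y), 2 (Z), 3 (X)
```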

### What’s coming up?

I am working on several Ambisonics and HOA plugins that will be available in 2018. Some of them will do things that other plugins do, but most of them should do something new. Some of them will do something more creative and experimental. If you want to see a certain effect for spatial audio, just get in touch and let me know what you want. If you’re the first person to suggest a plugin that gets developed then you will get a free copy to say thanks!

The industry is rapidly moving on from first-order Ambisonics and embracing HOA. For example, Pro Tools recently added support for up to third-order Ambisonics. Higher order tools are in the pipeline, so check back soon.

### Stay Up To Date

If you want to keep current with upcoming plugin news and about updates to the o1Panner, subscribe to the mailing list:


## Ambisonics to Stereo Comparison

In my last post I detailed two methods of converting Ambisonics to stereo. Equations and graphs are all very good, but there’s nothing better than being able to listen and compare for yourself when it comes to spatial audio.

With that in mind, I’ve made a video comparing different first-order Ambisonics to stereo decoding methods. I used some (work-in-progress) VST plugins I’m working on for the encoding and decoding. I recommend watching the video with the highest quality setting to best hear the difference between the decoders.

There are 4 different decoders:

• Cardioid decoder (mid-side decoding)
• UHJ (IIR) – UHJ stereo decoding implemented with an infinite impulse response filter.
• UHJ (FIR) – UHJ stereo decoding using a finite impulse response filter.
• Binaural – Using the Google HRTF.

The cardioid decoder moves the image more quickly to, and sticks it in, the left and right channels as the source moves, while this transition is more gradual with the UHJ decoder. To me, the UHJ decoding is much smoother than the cardioid, making it perhaps a bit easier to get a nice left-right distribution that uses all of the space, while cardioid leads to some bunching at the extremes.

The binaural rendering has more externalisation and pretty significant colouration changes compared to the UHJ and cardioid decodings, but also potentially allows some perception of height, which the others don’t.

The VSTs in the video are part of a set I’ve been working on that should be available some time in 2018. If you’re interested in getting updates about when they’re released, sign up here:


## Ambisonics Over Stereo

Ambisonics, especially Higher Order Ambisonics, is great for 3D sound applications. But what if you have spent a long time mixing for a 3D audio format but want to share it with listeners who are only listening on stereo?

The first consideration is whether they’re going to be using headphones or loudspeakers. If they’re using headphones then you can create a binaural mix in the usual way. If they are using loudspeakers then binaural is no longer an option (unless you want to go down the fragile transaural route). In this post we will focus on how you can decode from first-order Ambisonics to stereo using one of two common options.

## Mid-Side Decoding

The first option is probably the simplest – treat the Ambisonics signal as a mid-side recorded scene by taking the W and Y channels, with W being the mid and Y being the side. Then you can make your left and right (L and R) stereo playback channels using \begin{eqnarray} L = 0.5(W+Y),\\ R = 0.5(W-Y) \end{eqnarray}

This is effectively the same as recording a sound field with two cardioid microphones pointing directly left and right. Sounds panned to 90 degrees will play only through the left loudspeaker and those at -90 degrees through the right.

The advantage of this sort of decoding is that it is very conceptually simple and, as long as your DAW can handle the routing, it is even possible to do without any dedicated plugins. It also results in pure amplitude panning, meaning that it has all of the advantages and disadvantages of standard intensity-stereo. However, we’ve got another option to choose from when we want to play back over a stereo system that has some advantages.
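In code, the mid-side decode really is just a couple of lines. A minimal sketch in Python (the function name is mine):

```python
def mid_side_decode(w_channel, y_channel):
    """Decode the Ambisonics W (mid) and Y (side) channels to stereo L/R
    using L = 0.5(W + Y) and R = 0.5(W - Y)."""
    left = [0.5 * (w + y) for w, y in zip(w_channel, y_channel)]
    right = [0.5 * (w - y) for w, y in zip(w_channel, y_channel)]
    return left, right

# A source panned hard left has Y equal to W, so it decodes
# entirely into the left channel:
left, right = mid_side_decode([1.0], [1.0])
print(left, right)  # [1.0] [0.0]
```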

## UHJ Stereo

A more complex and interesting technique is UHJ. We’re only going to cover UHJ for stereo listening, but it is worth noting that UHJ is mono compatible and that a 4-channel version exists from which the full first-order Ambisonics information can be retrieved via correct decoding. 3-channel UHJ can get you a 2D (horizontal) decode by retrieving the W, X and Y channels. A nice property of the 3- and 4-channel versions is that they contain the stereo L and R channels as a subset. This means, importantly, that 2-channel UHJ does not require a decoder when played back over two loudspeakers. All you need to do is take the first two channels of the audio stream.

The stereo L and R channels can be calculated using the following equations:\begin{eqnarray} \Sigma &=& 0.9397W + 0.1856X \\ \Delta &=& j(-0.3430W + 0.5099X) + 0.6555Y\\ L &=& 0.5(\Sigma + \Delta)\\R &=& 0.5(\Sigma - \Delta)\end{eqnarray} where $$j$$ is a 90 degree phase shift.
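If you want to experiment, the $$j$$ phase shift can be approximated with a Hilbert transform. A rough Python sketch using SciPy (the function name is mine, and the sign convention of the 90 degree shift varies between implementations, so treat this as illustrative rather than a reference UHJ encoder):

```python
import numpy as np
from scipy.signal import hilbert

def uhj_encode_stereo(w, x, y):
    """2-channel UHJ from first-order B-format W, X and Y (numpy arrays).

    The j() term is approximated by taking the imaginary part of the
    analytic signal, i.e. a (roughly) 90 degree phase shift.
    """
    sigma = 0.9397 * w + 0.1856 * x
    shifted = np.imag(hilbert(-0.3430 * w + 0.5099 * x))  # ~90 degree shift
    delta = shifted + 0.6555 * y
    left = 0.5 * (sigma + delta)
    right = 0.5 * (sigma - delta)
    return left, right
```

Note that left + right always collapses back to Σ, which is where the mono compatibility comes from.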

You can see from these equations that converting to UHJ from first-order Ambisonics results in signals with phase differences between the L and R channels. This creates quite a different impression to the kind of mid-side decoding mentioned above. There will obviously be some room for personal taste as to whether or not UHJ is actually preferred to mid-side decoding. Sound sources placed to the rear of the listener are more diffuse when reproduced over a stereo arrangement than those at the front, while for mid-side decoding there is no sonic distinction between a sound panned to 30 degrees or to 150 degrees.

Beyond front-back distinction, UHJ can actually result in some sounds appearing to originate from outside the loudspeaker pair by a small amount. This is why it is sometimes referred to as Super Stereo. In my experience, this effect is very dependent on the sound being played, both its frequency content and how transient it is.

Because UHJ stereo relies on phase differences between the two channels, any post-processing or mastering applied should preserve the phase relationship between L and R, otherwise there is a very real risk that the final presentation will be phase-y and spatially blurred.

Figure 1 shows the localisation curves for a sound played back over a stereo system where the signal in the Ambisonics domain is panned fully round the listener. Obviously the sound stays to the front, but the actual trajectories between UHJ and mid-side decoding are quite different. (These localisation curves were calculated using the energy vector model of localisation, so they are most appropriate for mid/high frequencies and broadband sounds).
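For the curious, the energy vector behind those curves is simple to compute: it is the energy-weighted average of the loudspeaker direction vectors. A minimal 2D sketch (the function name is mine):

```python
import math

def energy_vector(gains, speaker_azimuths_deg):
    """2D energy vector (direction in degrees, magnitude) for a set of
    loudspeaker gains at the given azimuths."""
    energies = [g * g for g in gains]
    total = sum(energies)
    rx = sum(e * math.cos(math.radians(a))
             for e, a in zip(energies, speaker_azimuths_deg)) / total
    ry = sum(e * math.sin(math.radians(a))
             for e, a in zip(energies, speaker_azimuths_deg)) / total
    return math.degrees(math.atan2(ry, rx)), math.hypot(rx, ry)

# Equal gains to a +/-30 degree stereo pair: the image sits at 0 degrees,
# with a magnitude below 1 (a magnitude of 1 would be a single real source).
print(energy_vector([1.0, 1.0], [30.0, -30.0]))
```

The magnitude gives a rough idea of how "focused" the phantom image is, which is why the model suits mid/high frequencies and broadband sounds.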

Which of the two stereo loudspeaker decoding strategies you’ll want to use will depend on the needs of your project. Mid-side decoding is simpler and results in pure amplitude panning. UHJ can result in images outside of the loudspeaker base, but relies on the phase information being preserved. If you want to retrieve any spatial information then UHJ is absolutely the way to go.

## Tools for Stereo Decoding

I have an old Ambisonics to UHJ transcoder VST that you can download here, but it is old and I am not sure how compatible it is with newer versions of Windows and Mac OSX. To remedy that, I’ve been working on an updated version that will provide simple first-order to stereo decoding. Just select which method you want to use and pump some Ambisonics through it. Keep an eye out in the near future for when it is made available!

I’m curious to hear from anyone who has used both techniques what you prefer. Leave a comment below!


## Better Externalisation with Binaural

Some research that I was involved in was published last week in the Journal of the Audio Engineering Society [1]. You can download it from the JAES e-library here. The research was led by Etienne Hendrickx (currently at Université de Bretagne Occidentale) and was a follow on from other work we did together on head-tracking with dynamic binaural rendering [2, 3, 4].

The new study looked at externalisation (the perception that a sound played over headphones is emanating from the real world, not inside the listener’s head). It specifically investigated the worst-case scenario for externalisation – sound sources directly in front of ($0^{\circ}$) or behind ($180^{\circ}$) the listener. It tested the benefit of listeners moving their head, as well as listeners keeping their head still while the binaural source followed a “head movement-like” trajectory. Both were found to give some improvement to the perceived externalisation, with head movement providing the most improvement.

The fact that source movements can improve externalisation is important because we don’t always have head tracking systems. Most people will experience binaural with normal headphones. This hints at a direction for some “calibration” to help the listener get immersed in the scene, improving their overall experience.

Also important is that the listeners in the study were all new to binaural content. Lots of previous studies use expert listeners, but the vast majority of real-world listeners are not experts! The results of this paper are encouraging because they show that you don’t need hours of binaural listening experience to benefit from some instant perceptual improvement in a fairly easy manner.

### References

[1] E. Hendrickx, P. Stitt, J. Messonnier, J.-M. Lyzwa, B. F. Katz, and C. de Boishéraud, “Improvement of Externalization by Listener and Source Movement Using a ‘Binauralized’ Microphone Array,’” J. Audio Eng. Soc., vol. 65, no. 7, pp. 589–599, 2017. link

[2] E. Hendrickx, P. Stitt, J.-C. Messonnier, J.-M. Lyzwa, B. F. Katz, and C. de Boishéraud, “Influence of head tracking on the externalization of speech stimuli for non-individualized binaural synthesis,” J. Acoust. Soc. Am., vol. 141, no. 3, pp. 2011–2023, 2017. link

[3] P. Stitt, E. Hendrickx, J.-C. Messonnier, and B. F. G. Katz, “The Role of Head Tracking in Binaural Rendering,” in 29th Tonmeistertagung – VDT International Convention, 2016, pp. 1–5. link

[4] P. Stitt, E. Hendrickx, J.-C. Messonnier, and B. F. G. Katz, “The influence of head tracking latency on binaural rendering in simple and complex sound scenes,” in Audio Engineering Society Convention 140, 2016, pp. 1–8. link