Hence, my suggestion is to add an option to attach the voice track to the current speaker, either in the Audio config or in the generic character behaviour. When this option is enabled, an AudioSource component is added to the root of the character prefab, with its settings exposed for fine-tuning. At runtime, that source would then receive the voice clips whenever the character is the author of the printed message.
Additionally, a parameter for @sfx specifying the audio source would also be beneficial. Say the character trips off-screen and you want the sound bite to use the same settings as the voiced character: you could write "@sfx trip source: Kohaku" and the sound would be played through the source on the root of the prefab, just like the voice track.
Having an audio source on the prefab would also make it much easier to hook the character up to a realtime lipsync addon. I've been researching these addons, and for the audio to be analysed in realtime they need to point at a persistent audio source. Since the audio source in the audio controller spawns and despawns constantly (as far as I can tell), I don't think it's possible with the current setup.
Of course, if there's a workaround using simple C#/Bolt code then I'm happy to have a go at it! But having delved into the associated engine service and script, I'm not exactly sure where to start.
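To illustrate the idea, here's a rough sketch of what such a component could look like. This is purely hypothetical, not actual Naninovel API: the class name, the `PlayVoice` method, and the idea of the engine invoking it per printed message are all assumptions about how the feature might be wired up.

```csharp
using UnityEngine;

// Hypothetical sketch: a persistent AudioSource on the character prefab root.
// A realtime lipsync addon (or @sfx with a "source" parameter) could then
// target this one stable source instead of a transient audio controller object.
public class CharacterVoiceSource : MonoBehaviour
{
    AudioSource source;

    void Awake ()
    {
        // Reuse an AudioSource already placed on the prefab root, or add one,
        // so the author can fine-tune its settings (spatial blend, mixer group, etc).
        source = GetComponent<AudioSource>();
        if (source == null)
            source = gameObject.AddComponent<AudioSource>();
        source.playOnAwake = false;
        source.spatialBlend = 1f; // fully 3D; tweak per character
    }

    // Imagined hook: invoked whenever this character is the author
    // of the currently printed message.
    public void PlayVoice (AudioClip clip)
    {
        source.clip = clip;
        source.Play();
    }
}
```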
Hey, thanks for the suggestion! Have you tried using the AudioMixer API (accessible via IAudioManager.AudioMixer) to apply the desired spatial effects? You can also assign a custom AudioMixer asset via the audio config for more control. Regarding the audio controller despawns: that shouldn't happen; the object is supposed to spawn on engine initialization and be destroyed only when the engine is destroyed. If it's being destroyed while the engine is initialized, please let me know.
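For reference, tweaking the mixer at runtime could look something like the sketch below. The `IAudioManager.AudioMixer` property and `Engine.GetService` are from Naninovel; the exposed parameter name "VoiceLowpassCutoff" is an assumption — you'd need to expose a parameter with that name on your mixer asset yourself.

```csharp
using Naninovel;
using UnityEngine.Audio;

public static class VoiceMixerTweaks
{
    // Sketch: adjust an exposed parameter on the mixer used for audio playback.
    public static void SetVoiceLowpass (float cutoffHz)
    {
        var audioManager = Engine.GetService<IAudioManager>();
        AudioMixer mixer = audioManager.AudioMixer;
        // "VoiceLowpassCutoff" must be exposed on the mixer asset (assumed name).
        mixer.SetFloat("VoiceLowpassCutoff", cutoffHz);
    }
}
```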