Option to attach audio controller sources to chars
The amount of control this engine gives, especially when it comes to creating 3D setups, is tremendous; however, this flexibility doesn't currently extend to audio, as all the audio tracks are confined to a single object in the middle of the scene. This means that whenever a voiced character is off-screen and you want the audio to convey that they're off-screen to the right, you have to edit the sound files themselves, which can be a time-consuming activity.
Hence, my suggestion is to add the option to attach the voice track to the current speaker, either as a setting in the audio configuration or in the generic character behaviour. When this option is enabled, an audio source component would be added to the root of the character prefab, with the ability to fine-tune its settings. At runtime, that source would then receive the voice clips whenever the character is the author of the printed message.
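In rough Unity terms, something like the sketch below is what I'm imagining; the component and method names are just placeholders I made up, not actual engine API:

```csharp
using UnityEngine;

// Hypothetical component illustrating the suggestion: an audio source
// living on the character prefab root, receiving that character's
// voice clips at runtime.
public class CharacterVoiceSource : MonoBehaviour
{
    private AudioSource source;

    private void Awake()
    {
        // Attach the source to the character's root so voice playback
        // is positioned wherever the character currently is.
        source = gameObject.AddComponent<AudioSource>();
        source.playOnAwake = false;
        source.spatialBlend = 1f; // fully 3D; would be fine-tunable in config
    }

    // The engine would call this whenever this character is the
    // author of the printed message.
    public void PlayVoice(AudioClip clip)
    {
        source.clip = clip;
        source.Play();
    }
}
```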
Additionally, a parameter for @sfx specifying the audio source would also be beneficial. Say the character trips off-screen and you want the sound bite's settings to match the voiced character: you could write `@sfx trip source:Kohaku` and the effect would automatically play from the root of the prefab, just like the voice track.
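Under the hood this could presumably be quite simple, assuming the character root already carries an audio source as suggested above (again, just a hypothetical sketch, not engine API):

```csharp
using UnityEngine;

public static class CharacterSfx
{
    // Play a one-shot SFX through the character's own audio source,
    // so it inherits the same 3D position and settings as the voice track.
    public static void PlayAt(AudioClip clip, Transform characterRoot)
    {
        var source = characterRoot.GetComponent<AudioSource>();
        if (source != null) source.PlayOneShot(clip);
    }
}
```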
Having an audio source in the prefab would also make it much easier to hook the character up to a real-time lip-sync add-on. I've been researching these add-ons, and for the audio to be analysed in real time they need to point at an audio source; since the audio sources the audio controller spawns are despawned constantly, I don't think this is possible with the current setup.
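To illustrate why a persistent source matters: real-time analysis polls the same `AudioSource` every frame, so a reference to a source that gets despawned after each clip goes stale. A minimal sketch of that polling pattern in plain Unity (real add-ons do this internally; the component name and mouth-driving logic are placeholders):

```csharp
using UnityEngine;

// Minimal example of frame-by-frame audio analysis: reads the output
// of a (long-lived) AudioSource and derives a loudness value that
// could drive a mouth blend shape. Breaks if voiceSource is destroyed.
public class SimpleMouthDriver : MonoBehaviour
{
    public AudioSource voiceSource; // must live as long as the character
    private readonly float[] samples = new float[256];

    private void Update()
    {
        if (voiceSource == null || !voiceSource.isPlaying) return;

        // Sample the currently playing audio (channel 0).
        voiceSource.GetOutputData(samples, 0);

        float level = 0f;
        foreach (var s in samples) level += Mathf.Abs(s);
        level /= samples.Length;

        // Drive a blend shape / jaw bone from `level` here.
    }
}
```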
Of course, if there's a workaround using simple C#/Bolt code, I'm happy to have a go at it! But having delved into the associated engine service and script, I'm not exactly sure where to start.