The Audio Mixer is a multiplatform audio renderer that lives in its own module. It enables feature parity across all platforms, provides backward compatibility for most legacy audio engine features, and extends UE4 functionality into new domains. This document describes the structure of the Audio Mixer as a whole, and provides a point of reference for deeper discussions.

Background and Motivation

Audio Rendering

Audio rendering is the process by which sound sources are decoded and mixed together, fed to an audio hardware endpoint (called a digital-to-analog converter, or DAC), and ultimately played on one or more speakers.

Audio renderers vary widely in their architecture and feature set, but for games, where interactivity and real-time performance characteristics are key, they must support real-time decoding, dynamic consumption and processing of sound parameters, real-time sample-rate conversion, and a wide variety of other audio rendering features, such as per-source digital signal processing (DSP) effects, spatialization, submixing, and post-mix DSP effects such as reverb.

Platform-Level Features: Audio-Rendering APIs

Typically, each hardware platform provides at least one full-featured, high-level audio-rendering C++ API. These APIs often provide platform-specific codecs, along with platform-specific encoder and decoder APIs. Many platforms provide hardware decoders to improve runtime performance. In addition to codecs, platform audio APIs provide all the other features that an audio engine might need, including volume control, pitch control (real-time sample-rate conversion), spatialization, and DSP processing.

The Problems with Platform-Specific Audio Rendering APIs

Game engines write additional functionality on top of these platform-level features.