This is a Zoom presentation by the Audio Engineering Society’s Chicago Chapter and is open to non-members. Link to announcement at the bottom.
Thursday, November 18, 2021
7:30 pm (Central Time)
Meeting ID: 842 9906 0362
Participant Passcode: 476693
Northwestern University’s Interactive Audio Lab, headed by Prof. Bryan Pardo, develops cutting-edge deep learning methods for automatic labeling of audio content (e.g. song ID, labeling sound events in natural scenes), for source separation (separating a mono recording of a band into separate tracks for each instrument), for manipulation of audio content (e.g. changing the prosody of speech), and for generation of sounds (e.g. AI-composed music tracks). In this talk, Prof. Pardo will give an overview of the lab’s work in these areas. He will then describe a software framework that lets developers easily integrate new deep models into Audacity, a free and open-source Digital Audio Workstation that has logged over 100 million downloads since 2015. This framework lets deep learning practitioners put tools in the hands of artistic creators without doing DAW-specific development work, without learning how to create an audio plugin, and without maintaining a server to deploy their models. The ability to quickly share models between model builders and sound artists should encourage a conversation that deepens the work of both groups.