Learn the technology behind hearing aids, Siri, and Echo
Audio source separation and speech enhancement aim to extract one or more source signals of interest from an audio recording involving several sound sources. These technologies are among the most studied in audio signal processing today and play a critical role in the success of hearing aids, hands-free phones, voice command and other noise-robust audio analysis systems, and music post-production software.
Research on this topic has followed three convergent paths, rooted respectively in sensor array processing, computational auditory scene analysis, and machine learning approaches such as independent component analysis. This book is the first to provide a comprehensive overview, presenting the common foundations of and the differences between these techniques in a unified setting.
- Provides a consolidated perspective on audio source separation and speech enhancement.
- Offers both a historical perspective and the latest advances in the field, e.g., deep neural networks.
- Draws on diverse disciplines: array processing, machine learning, and statistical signal processing.
- Covers the most important techniques for both single-channel and multichannel processing.
This book provides both introductory and advanced material suitable for readers with basic knowledge of signal processing and machine learning. Thanks to its comprehensiveness, it will help students select a promising research track, researchers leverage cross-domain knowledge to design improved techniques, and engineers and developers choose the right technology for their target application scenario. It will also be useful for practitioners from other fields (e.g., acoustics, multimedia, phonetics, and musicology) wishing to use audio source separation or speech enhancement as pre-processing tools for their own needs.