Team

Who we are

Organising members of the SPEAR Challenge.

The core organising team is from the Speech and Audio Processing team of Imperial College London with support from Meta Reality Labs.

Patrick A. Naylor | Professor, Imperial College London

Patrick Naylor is a Professor of Speech and Acoustic Signal Processing at Imperial College London. He received the BEng degree in Electronic and Electrical Engineering from the University of Sheffield, UK, and the PhD degree from Imperial College London, UK. His research interests are in speech, audio and acoustic signal processing. His current research addresses microphone array signal processing, speaker diarization, and multichannel speech enhancement for application to binaural hearing aids and robot audition. He has also worked on speech dereverberation including blind multichannel system identification and equalization, acoustic echo control, non-intrusive speech quality estimation, and speech production modelling with a focus on the analysis of the voice source signal. In addition to his academic research, he enjoys several collaborative links with industry. He is currently a member of the Board of Governors of the IEEE Signal Processing Society and President of the European Association for Signal Processing (EURASIP). He was formerly Chair of the IEEE Signal Processing Society Technical Committee on Audio and Acoustic Signal Processing. He has served as an associate editor of IEEE Signal Processing Letters and is currently a Senior Area Editor of IEEE Transactions on Audio, Speech and Language Processing.

Alastair Moore | Research Fellow, Imperial College London

Alastair Moore is a Research Fellow at Imperial College London and a spatial audio consultant with Square Set Sound. He received the M.Eng. degree in Electronic Engineering with Music Technology Systems in 2005 and the Ph.D. degree in 2010, both from the University of York, York, U.K. He spent three years as a Hardware Design Engineer at Imagination Technologies PLC designing digital radios and networked audio consumer electronics products. In 2012, he joined Imperial College London, where he has contributed to a series of projects in the field of speech and audio processing applied to voice over IP, robot audition, and hearing aids. Particular topics of interest include microphone array signal processing, modeling and characterization of room acoustics, dereverberation, and spatial audio perception. His current research is focused on signal processing for moving, head-worn microphone arrays.

Sina Hafezi | Research Associate, Imperial College London

Sina Hafezi is a post-doctoral Research Associate at Imperial College London. He received the BEng degree in Electronic Engineering in 2012 and the MSc degree in Digital Signal Processing in 2013, both from Queen Mary, University of London, UK. He worked in the Centre for Digital Music as a researcher and software engineer on autonomous multitrack mixing systems, work which led to a patent and a spin-out company. In 2018, he received his PhD from Imperial College London, UK, for research on acoustic source localisation using spherical microphone arrays. He spent 2.5 years at Silixa as a Senior Signal Processing Engineer developing algorithms and software for Distributed Acoustic Sensing systems. In 2021, he re-joined Imperial College London, where he has contributed to academic and industrial projects on hearing aids and spatial audio. His research interests are microphone array processing, spatial audio rendering, beamforming, source localisation and room acoustic modelling with applications in augmented and virtual reality.

Pierre Guiraud | Research Associate, Imperial College London

Pierre Guiraud is a post-doctoral Research Associate at Imperial College London. After enrolling in a double-degree programme, in 2017 he received an MSc in Engineering from the Ecole Centrale de Lille, France, and a Master's in Engineering Acoustics from the Technical University of Denmark in Copenhagen. His master's thesis, completed in collaboration with the company Kahle Acoustics, focused on ambisonic sound reproduction. He went on to pursue a PhD at the IEMN in Lille, France, on thermoacoustic sound generation in porous metamaterials, sponsored by Thales Underwater Systems and carried out in partnership with CINTRA Singapore. He obtained his PhD in 2020 and joined Imperial College London in 2021 to work on two projects. The first, with Meta Reality Labs Research, concerns speech enhancement for virtual/augmented reality (the SPEAR Challenge). The second, within the ELO-SPHERES project in collaboration with University College London, aims to improve binaural intelligibility for users with hearing loss in real environments using machine learning.

Thomas Lunner | Research Director, Meta Reality Labs Research

Dr. Lunner’s career has focused on man-machine issues related to the correlation between hearing and cognition. He was the first scientist to convincingly show the importance of cognitive ability in relation to being able to recognize speech in adverse listening conditions. He also demonstrated how working memory was a significant factor for the selection of optimum signal processing in hearing aids to be fitted to a particular individual. These studies led to increasing cooperation with other research groups globally, and the establishment of cognitive hearing science as a research field of its own. Dr. Lunner’s Ph.D. research in the mid-1990s resulted in patented signal processing algorithms which led to the development of the first digital hearing aid manufactured by Oticon. The core of this project was a digital filter bank which provided the necessary tuning flexibility with an equally important low power consumption. The filter bank was used in several successive hearing aid models in the years that followed, fitted to millions of hearing aid users worldwide. Two of the models were awarded the European Union’s prestigious technology prize, the IST Grand Prize in 1996 and in 2003. Currently, he leads research in Superhuman Hearing at Meta Reality Labs Research.

Vladimir Tourbabin | Research Lead, Meta Reality Labs Research

Vladimir Tourbabin received his B.Sc. degree in materials science and engineering, his M.Sc. degree in electrical and computer engineering, and his Ph.D. degree in electrical and computer engineering from Ben-Gurion University of the Negev, Beer-Sheva, Israel, in 2005, 2011, and 2016, respectively. After graduation, he joined the Advanced Technical Center, General Motors, Israel, to work on microphone array processing solutions for speech recognition. Since 2017, he has been with Meta, leading research and development of audio capture signal processing technologies for augmented and virtual reality applications.

Jacob Donley | Research Scientist, Meta Reality Labs Research

Jacob Donley is a research scientist at Meta Reality Labs Research in Redmond, WA, with previous experience lecturing in Signals and Systems at the University of Wollongong and in Engineering at Western Sydney University. He holds a Bachelor of Engineering Honours (Computer) and a Ph.D. with research focused on Digital Signal Processing (DSP) aimed at improving the reproduction of personal sound in shared environments. Jacob’s research interests are in signal processing, speech enhancement, machine learning, array processing (microphone and loudspeaker), beamforming, and multi-zone sound field reproduction. Jacob has received awards and scholarships from Telstra Corporation Limited, the Australian Department of Education and Training, and the University of Wollongong. He is also an active member of the IEEE and the IEEE Signal Processing Society.

Steve Peha | Project Manager, Meta Reality Labs Research

Steve Peha’s work with digital audio began in 1985 when he became President of the Boston Computer Society’s Music and Computers Group, at that time the largest such group in the world. His work in audio and music technology includes creating the Petrucci PostScript music font for the launch of the Finale Music Publishing System, writing the documentation for Ray Kurzweil’s K-250 Sampling Synthesizer, serving on a working group for the creation of the MIDI (Musical Instrument Digital Interface) v1.0 specification, and scoring, production, and sound design work for film, TV, and stage. In 1988, he founded Music Technology Associates to develop software for the Windows Multimedia PC platform. In 1993, MTA was acquired by Midisoft Corporation of Redmond, WA, where he served as Director of Product Development. For the next 25+ years, prior to joining Meta, he worked in the field of education technology. In 2011, he published the first major article on the use of Agile methodologies in K-12 schooling. From 2012 to 2013, he served as a Product Owner for the Gates Foundation’s Shared Learning Infrastructure, an open-source enterprise reference platform for state and federal Student Longitudinal Data Systems.