Pre-trained models
Pre-trained model nomination list
There are many approaches that teams could take in the SPEAR Challenge and, of course, machine learning is a very promising one. It is well known that systems generally perform better when trained on more data. The datasets provided in the SPEAR Challenge are already large, but they are still limited. To allow for re-use and adaptation of previously developed methods, the SPEAR Challenge allowed teams to nominate up to 5 pre-trained models. To keep everything fair and allow other teams to also benefit from these pre-trained models, we required that the pre-trained models be publicly available, that is,
- the model weights are available to download, and
- the data the model was trained on has a permissive licence.
We are now pleased to announce that the following models have been approved for use in the SPEAR Challenge.
| Name | Purpose | Links | License | Data trained on |
|---|---|---|---|---|
| Deep Filter Net 2 | Speech Enhancement | Article, Git | Dual License: MIT, Apache 2.0 | Audio: Microsoft DNS; License: CC 4.0 and MIT |
| Full Sub Net | Speech Enhancement | Article, Git | Apache 2.0 | Audio: Microsoft DNS; License: CC 4.0 and MIT. RIR: SLR26 and SLR28; License: Apache 2.0 |
| Cone of Silence | Speech Separation | Article, Git | MIT License | Simulated data and VCTK corpus; License: CC 4.0 |
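As a rough illustration of how one of these pre-trained models could be dropped into a processing chain, the sketch below runs the DeepFilterNet speech enhancement model on a noisy recording. This is only a minimal example, not part of the challenge toolkit: it assumes the `deepfilternet` Python package is installed (`pip install deepfilternet`), that the default checkpoint downloaded by `init_df()` is the one approved above, and that a placeholder input file `noisy.wav` exists. Please check the repository linked in the table for the current API.

```python
# Minimal sketch: enhance a noisy recording with a pre-trained
# DeepFilterNet model (assumes `pip install deepfilternet`).
from df.enhance import enhance, init_df, load_audio, save_audio

# Initialise the default pre-trained model and its DF state
# (the checkpoint is downloaded automatically on first use).
model, df_state, _ = init_df()

# Load the noisy input at the sample rate the model expects.
# "noisy.wav" is a placeholder path for this example.
audio, _ = load_audio("noisy.wav", sr=df_state.sr())

# Run enhancement and write the result to disk.
enhanced = enhance(model, df_state, audio)
save_audio("enhanced.wav", enhanced, df_state.sr())
```

The other nominated models follow their own interfaces, so refer to the Article and Git links above for model-specific loading and inference instructions.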