Release DeepSpeech 0.9.3 · mozilla/DeepSpeech

General

This is the 0.9.3 release of Deep Speech, an open speech-to-text engine. In accordance with semantic versioning, this version is not backwards compatible with earlier versions; however, models exported for 0.7.X and 0.8.X should work with this release. This is a bugfix release and retains compatibility with the 0.9.0, 0.9.1 and 0.9.2 models. All model files included here are identical to the ones in the 0.9.0 release. As with previous releases, this release includes the source code:

v0.9.3.tar.gz

under the MPL-2.0 license, and the acoustic models:

deepspeech-0.9.3-models.pbmm
deepspeech-0.9.3-models.tflite

In addition, we're releasing experimental Mandarin Chinese acoustic models trained on an internal corpus of 2,000 hours of read speech:

deepspeech-0.9.3-models-zh-CN.pbmm
deepspeech-0.9.3-models-zh-CN.tflite

all under the MPL-2.0 license.

The model files with the ".pbmm" extension are memory-mapped, and are thus memory efficient and fast to load. The model files with the ".tflite" extension are converted to use TensorFlow Lite, have post-training quantization enabled, and are more suitable for resource-constrained environments.

The acoustic models were trained on American English with synthetic noise augmentation, and the .pbmm model achieves a 7.06% word error rate on the LibriSpeech clean test corpus.

Note that the model currently performs best in low-noise environments with clear recordings and has a bias towards US male accents. This does not mean the model cannot be used outside of these conditions, but that accuracy may be lower. Some users may need to train the model further to meet their intended use-case.
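
As a minimal sketch of how the released model files are used from the Python bindings (the WAV path below is a placeholder; the model file name matches the release artifact above):

    import wave
    import numpy as np
    from deepspeech import Model

    # Load the memory-mapped English acoustic model released above.
    ds = Model("deepspeech-0.9.3-models.pbmm")

    # Read a 16-bit mono WAV file; the model expects 16 kHz input.
    # "my-recording.wav" is a placeholder for your own file.
    with wave.open("my-recording.wav", "rb") as wav:
        assert wav.getframerate() == ds.sampleRate()
        audio = np.frombuffer(wav.readframes(wav.getnframes()), dtype=np.int16)

    # Run speech-to-text on the raw samples.
    print(ds.stt(audio))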

In addition, we release the scorer:

deepspeech-0.9.3-models.scorer

which takes the place of the language model and trie in older releases and which is also under the MPL-2.0 license.

There is also a corresponding scorer for the Mandarin Chinese model:

deepspeech-0.9.3-models-zh-CN.scorer
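
A sketch of enabling a scorer from the Python bindings (file names match the artifacts above):

    from deepspeech import Model

    ds = Model("deepspeech-0.9.3-models.pbmm")
    # The external scorer replaces the separate language model and trie
    # files used by older releases.
    ds.enableExternalScorer("deepspeech-0.9.3-models.scorer")
    # Acoustic-only decoding remains available:
    # ds.disableExternalScorer()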

We also include example audio files:

audio-0.9.3.tar.gz

which can be used to test the engine, and checkpoint files for both the English and Mandarin models:

deepspeech-0.9.3-checkpoint.tar.gz
deepspeech-0.9.3-checkpoint-zh-CN.tar.gz

which are under the MPL-2.0 license and can be used as the basis for further fine-tuning.
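
For example, fine-tuning from the released English checkpoint on your own data follows the pattern from the training documentation; the CSV paths below are placeholders for your own datasets, and epochs and learning rate should be tuned to your use case:

python3 DeepSpeech.py --n_hidden 2048 \
    --checkpoint_dir deepspeech-0.9.3-checkpoint/ \
    --epochs 3 \
    --train_files your-train.csv \
    --dev_files your-dev.csv \
    --test_files your-test.csv \
    --learning_rate 0.0001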

Notable changes from the previous release

Training Regimen + Hyperparameters for fine-tuning

The hyperparameters used to train the model are useful for fine-tuning. Thus, we document them here along with the training regimen, the hardware used (a server with 8 Quadro RTX 6000 GPUs, each with 24GB of VRAM), and our use of cuDNN RNN.

In contrast to some previous releases, training for this release was done by fine-tuning the previous 0.8.2 checkpoint, with data augmentation options enabled. The following hyperparameters were used for the fine-tuning. See the 0.8.2 release notes for the hyperparameters used for the base model.

The weights with the best validation loss were selected at the end of 200 epochs using --noearly_stop.

The optimal lm_alpha and lm_beta values with respect to the LibriSpeech clean dev corpus remain unchanged from the previous release:

lm_alpha: 0.931289039105002
lm_beta: 1.1834137581510284

For the Mandarin Chinese model, a different set of optimal values applies; the released Mandarin scorer carries its recommended defaults built in.
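
These values can also be set explicitly at inference time through the Python bindings; a minimal sketch using the English values above:

    from deepspeech import Model

    ds = Model("deepspeech-0.9.3-models.pbmm")
    ds.enableExternalScorer("deepspeech-0.9.3-models.scorer")
    # Override the scorer's packaged defaults with explicit values.
    ds.setScorerAlphaBeta(0.931289039105002, 1.1834137581510284)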

Bindings

This release also includes a Python-based command line tool, deepspeech, installed through:

pip install deepspeech

Alternatively, quicker inference can be performed using a supported NVIDIA GPU on Linux. (See below to find which GPUs are supported.) This is done by instead installing the GPU-specific package:

pip install deepspeech-gpu

On Linux, macOS and Windows, the DeepSpeech package does not use TFLite by default. A TFLite version of the package on those platforms is available as:

pip install deepspeech-tflite
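
Once installed, the tool can be run against the released model and scorer; for example, with a 16 kHz mono WAV file (the file name here is a placeholder):

deepspeech --model deepspeech-0.9.3-models.pbmm --scorer deepspeech-0.9.3-models.scorer --audio my_audio_file.wav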

Also, it exposes bindings for other languages. For example, the JavaScript (Node.JS / Electron.JS) package is installed via:

npm install deepspeech

and, as with Python, quicker inference on a supported NVIDIA GPU on Linux is available through the GPU-specific package:

npm install deepspeech-gpu

On Linux (AMD64), macOS and Windows, the DeepSpeech package does not use TFLite by default. A TFLite version of the package on those platforms is available as:

npm install deepspeech-tflite

In addition, there are third-party bindings supported by external developers, for example for Rust and Go.

Supported Platforms

Documentation

Documentation is available on deepspeech.readthedocs.io.

Contact/Getting Help

  1. FAQ - We have a list of common questions, and their answers, in our FAQ. When just getting started, it's best to first check the FAQ to see if your question is addressed.
  2. Discourse Forums - If your question is not addressed in the FAQ, the Discourse Forums are the next place to look. They contain conversations on General Topics, Using Deep Speech, Alternative Platforms, and Deep Speech Development.
  3. Matrix - If your question is not addressed by either the FAQ or the Discourse Forums, you can contact us on the #machinelearning:mozilla.org channel on Mozilla Matrix, where people can try to answer your question or help.
  4. Issues - Finally, if all else fails, you can open an issue in our repo if there is a bug with the current code base.

Contributors to 0.9.3 release