CMU Sphinx


CMU Sphinx, also called Sphinx for short, is the general term for a group of speech recognition systems developed at Carnegie Mellon University. These include a series of speech recognizers and an acoustic model trainer.
In 2000, the Sphinx group at Carnegie Mellon committed to open sourcing several speech recognizer components, including Sphinx 2 and later Sphinx 3. The speech decoders come with acoustic models and sample applications. The available resources also include software for acoustic model training, language model compilation, and a public-domain pronunciation dictionary, cmudict.
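For illustration, each cmudict entry pairs a word with its pronunciation as a sequence of ARPAbet phones (digits mark vowel stress), and alternate pronunciations carry a numbered suffix; a few representative entries in that format:

    HELLO  HH AH0 L OW1
    SPEECH  S P IY1 CH
    TOMATO  T AH0 M EY1 T OW2
    TOMATO(2)  T AH0 M AA1 T OW2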
Sphinx encompasses a number of software systems, described below.

Sphinx

Sphinx is a continuous-speech, speaker-independent recognition system that uses hidden Markov acoustic models and an n-gram statistical language model. It was developed by Kai-Fu Lee. Sphinx demonstrated the feasibility of continuous-speech, speaker-independent, large-vocabulary recognition, the possibility of which was in dispute at the time. Sphinx is of historical interest only; it has been superseded in performance by subsequent versions. An archival article describes the system in detail.
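In outline, recognizers of this family combine the two models through the standard noisy-channel decoding rule; this is the textbook formulation rather than anything specific to Sphinx's implementation, which adds scaling, pruning and search heuristics. For an acoustic observation sequence O, the decoder searches for

    \hat{W} = \arg\max_{W} P(O \mid W)\, P(W),
    \qquad
    P(W) \approx \prod_{i} P(w_i \mid w_{i-2}, w_{i-1}),

where P(O | W) is the score assigned by the hidden Markov acoustic models and P(W) is the n-gram (here trigram) language model probability.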

Sphinx 2

A fast, performance-oriented recognizer, originally developed by Xuedong Huang at Carnegie Mellon and released as open source with a BSD-style license on SourceForge by Kevin Lenzo at LinuxWorld in 2000. Sphinx 2 focuses on real-time recognition suitable for spoken language applications. As such it incorporates functionality such as end-pointing, partial hypothesis generation, dynamic language model switching and so on. It is used in dialog systems and language learning systems. It can be used in computer-based PBX systems such as Asterisk. Sphinx 2 code has also been incorporated into a number of commercial products. It is no longer under active development. Current real-time decoder development is taking place in the PocketSphinx project. An archival article describes the system.

Sphinx 3

Sphinx 2 used a semi-continuous representation for acoustic modeling. Sphinx 3 adopted the prevalent continuous HMM representation and has been used primarily for high-accuracy, non-real-time recognition. Recent developments have made Sphinx 3 "near" real-time, although not yet suitable for critical interactive applications. Sphinx 3 is under active development and, in conjunction with SphinxTrain, provides access to a number of modern modeling techniques, such as LDA/MLLT, MLLR and VTLN, that improve recognition accuracy.
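To make the distinction concrete: in a continuous-density system each HMM state evaluates its own Gaussian mixture over the incoming feature vector, whereas the semi-continuous representation shares a single codebook of Gaussians across states and only the mixture weights are state-specific. A minimal Java sketch of the per-state computation, written for illustration and not taken from the Sphinx 3 sources:

    // Illustrative sketch: log-likelihood of a feature vector under a
    // diagonal-covariance Gaussian mixture, the per-state output density
    // evaluated by a continuous-density HMM decoder.
    public final class DiagonalGmm {
        private final double[] weights;  // mixture weights, summing to 1
        private final double[][] means;  // [mixture][feature dimension]
        private final double[][] vars;   // diagonal covariances, same shape

        public DiagonalGmm(double[] weights, double[][] means, double[][] vars) {
            this.weights = weights;
            this.means = means;
            this.vars = vars;
        }

        // log p(x) = log sum_m w_m * N(x; mean_m, diag(var_m)), computed
        // with log-sum-exp for numerical stability.
        public double logLikelihood(double[] x) {
            double[] logTerms = new double[weights.length];
            double max = Double.NEGATIVE_INFINITY;
            for (int m = 0; m < weights.length; m++) {
                double logGauss = 0.0;
                for (int d = 0; d < x.length; d++) {
                    double diff = x[d] - means[m][d];
                    logGauss -= 0.5 * (Math.log(2.0 * Math.PI * vars[m][d])
                            + diff * diff / vars[m][d]);
                }
                logTerms[m] = Math.log(weights[m]) + logGauss;
                if (logTerms[m] > max) max = logTerms[m];
            }
            double sum = 0.0;
            for (double t : logTerms) sum += Math.exp(t - max);
            return max + Math.log(sum);
        }

        public static void main(String[] args) {
            DiagonalGmm state = new DiagonalGmm(
                    new double[] {0.6, 0.4},
                    new double[][] {{0.0, 0.0}, {1.0, 1.0}},
                    new double[][] {{1.0, 1.0}, {1.0, 1.0}});
            System.out.println(state.logLikelihood(new double[] {0.5, 0.5}));
        }
    }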

Sphinx 4

Sphinx 4 is a complete rewrite of the Sphinx engine, written entirely in the Java programming language, with the goal of providing a more flexible framework for research in speech recognition. Sun Microsystems supported the development of Sphinx 4 and contributed software engineering expertise to the project. Participants included individuals at MERL, MIT and CMU.
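Later sphinx4 releases (the 5prealpha line) expose a high-level recognition API on top of the framework; a minimal usage sketch, assuming the bundled US English models and the class and resource names used in the project's tutorial, which may differ between versions:

    import edu.cmu.sphinx.api.Configuration;
    import edu.cmu.sphinx.api.SpeechResult;
    import edu.cmu.sphinx.api.StreamSpeechRecognizer;

    import java.io.FileInputStream;
    import java.io.InputStream;

    public class TranscribeFile {
        public static void main(String[] args) throws Exception {
            Configuration configuration = new Configuration();
            // Models bundled with the sphinx4-data artifact.
            configuration.setAcousticModelPath("resource:/edu/cmu/sphinx/models/en-us/en-us");
            configuration.setDictionaryPath("resource:/edu/cmu/sphinx/models/en-us/cmudict-en-us.dict");
            configuration.setLanguageModelPath("resource:/edu/cmu/sphinx/models/en-us/en-us.lm.bin");

            StreamSpeechRecognizer recognizer = new StreamSpeechRecognizer(configuration);
            try (InputStream stream = new FileInputStream(args[0])) {
                // Decode an audio file and print the hypothesis for each utterance.
                recognizer.startRecognition(stream);
                SpeechResult result;
                while ((result = recognizer.getResult()) != null) {
                    System.out.println(result.getHypothesis());
                }
                recognizer.stopRecognition();
            }
        }
    }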
Current development goals include a version of Sphinx that can be used in embedded systems. PocketSphinx is under active development and incorporates features such as fixed-point arithmetic and efficient algorithms for GMM computation.
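As a rough illustration of the fixed-point idea, and not code taken from PocketSphinx itself: on embedded CPUs without floating-point hardware, values can be kept as Q15 integers so that the inner loops of GMM scoring reduce to integer multiplies and shifts.

    // Illustrative Q15 fixed-point arithmetic (15 fractional bits); a stand-in
    // for the kind of integer math an embedded decoder uses in place of floats.
    public final class Q15 {
        static final int ONE = 1 << 15;  // represents 1.0

        static int fromDouble(double x) { return (int) Math.round(x * ONE); }
        static double toDouble(int q)   { return q / (double) ONE; }

        // Multiply two Q15 values; widen to long, then shift back to Q15.
        static int mul(int a, int b)    { return (int) (((long) a * b) >> 15); }

        public static void main(String[] args) {
            int a = fromDouble(0.25);
            int b = fromDouble(0.5);
            System.out.println(toDouble(mul(a, b)));  // prints 0.125
        }
    }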