Speech Synthesis: A History

The history of speech synthesis reaches far back in time. Setting legends and myths aside, though, the first real speech synthesizer was built in the 18th century by the German-Danish scientist Christian Gottlieb Kratzenstein. The model could pronounce the vowels [aː], [eː], [iː], [oː] and [uː]. The device consisted of acoustic resonators of different shapes, which produced the different sounds via vibrating reeds driven by air.

In 1778, the model was improved by the Hungarian inventor Wolfgang von Kempelen, after which it could produce not only individual consonants but also combinations of sounds.

In 1837, Charles Wheatstone modified the device further, enabling it to reproduce almost all consonants and vowels.

As early as 1846, the talking organ Euphonia was introduced by Joseph Faber. It could synthesize, albeit imperfectly, not only speech but also singing.

Later, Alexander Graham Bell created his own “speaking” model, similar to Wheatstone’s. With the advent of the 20th century, sound-wave generators appeared, which greatly simplified speech synthesis.

In the 1930s, Homer Dudley, working at Bell Labs on ways to reduce the bandwidth needed for telephony, invented the VOCODER, a device that analyzed speech into a compact set of slowly varying signals and resynthesized it from them; its keyboard-operated counterpart, the VODER, was demonstrated as a speech synthesizer at the 1939 New York World's Fair.
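
To make the vocoder's principle concrete, below is a minimal Python sketch of a channel vocoder (using NumPy and SciPy). It illustrates the general analysis-synthesis idea rather than Dudley's actual circuit: the function name, the 12-band filter bank, the 100 Hz pulse-train carrier, and the 30 Hz envelope cutoff are all arbitrary choices made for this example.

```python
import numpy as np
from scipy.signal import butter, sosfilt

def channel_vocoder(speech, fs, n_bands=12, fmin=100.0, fmax=4000.0, f0=100.0):
    """Toy channel vocoder: measure the amplitude envelope of each frequency
    band of the input speech, then impose those envelopes on a buzzy carrier."""
    edges = np.geomspace(fmin, fmax, n_bands + 1)  # log-spaced band edges
    t = np.arange(len(speech)) / fs
    # Harmonically rich carrier: a narrow pulse train at f0 stands in for the
    # voiced excitation (Dudley's system also had a noise source for hiss).
    carrier = (np.sin(2 * np.pi * f0 * t) > 0.99).astype(float)
    env_lp = butter(2, 30.0, btype="lowpass", fs=fs, output="sos")
    out = np.zeros(len(speech))
    for lo, hi in zip(edges[:-1], edges[1:]):
        band_bp = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfilt(band_bp, speech)
        env = sosfilt(env_lp, np.abs(band))      # slowly varying band envelope
        out += env * sosfilt(band_bp, carrier)   # re-impose it on the carrier
    return out / (np.max(np.abs(out)) + 1e-9)    # normalize to avoid clipping
```

Run on a 16 kHz mono recording, this produces the classic robotic "vocoded" sound; the bandwidth saving Dudley was after comes from the fact that only the handful of slowly varying band envelopes, rather than the full waveform, needs to be transmitted.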

Of course, the first synthesizers sounded far from natural, and one had to listen very attentively to make out the phrases.

The first computer-based speech synthesizers began to appear in the late 1950s, and the first full text-to-speech (TTS) system was completed in 1968 by Noriko Umeda and colleagues at the Electrotechnical Laboratory in Japan.

Speech generated by modern synthesis systems is at times difficult to distinguish from real human speech. Research also continues on mechanical speech synthesizers, in particular for use in humanoid robots.