UpGrad Python Tutorial
Speech Recognition in Python

Introduction

Have you ever considered adding voice recognition to your Python project? Or wondered how speech recognition in Python works? It's not as difficult as one may presume. Let's find the answers to these questions.

Speech recognition is the ability of software to identify speech in sound and translate it to text. There are several intriguing applications for voice recognition in Python, and it is simpler than one may expect to incorporate it into your own programs.

Overview

The popularity of voice-enabled gadgets such as Alexa and Siri has demonstrated that some level of voice assistance will be a vital component of home technology for a long time to come. When you stop to think about it, the reasons are rather apparent. Integrating speech recognition into a Python application provides a degree of interaction and connectivity that few other technologies can match.

The accessibility enhancements alone are worthwhile. Speech recognition enables seniors, as well as the physically impaired and visually challenged, to interact with cutting-edge products and services in a natural and rapid manner, without the need for any graphical user interface.

The best part is that adding speech recognition to Python programs is quite straightforward. Let us discover and understand how to convert speech to text in Python.

What is Speech Recognition? 

Speech recognition is described as the automated recognition of human voice and is regarded as one of the most vital tasks in the development of apps such as Alexa or Siri. Python has various libraries that provide speech recognition capability. The SpeechRecognition library will be used as an example, as it is the most basic and straightforward to learn.

Speech recognition has its origins in early 1950s research at Bell Labs. Early systems were limited to a single speaker and a vocabulary of a few dozen words. Modern systems have vast vocabularies in several languages and can distinguish speech from different speakers.

  • Voice Recognition is the process through which an electronic device or gadget records and converts human voice into written representation.

  • Automatic Speech Recognition (ASR), computer speech recognition, and Speech to Text (STT) are other names for it.

  • The Speech Recognition module is related to subjects such as linguistics, computer science, and electrical engineering.

How Does Speech Recognition Work?

Let us now understand the underlying principle of voice recognition and how it works in Python.

It is based on acoustic and language modeling algorithms.

  • Acoustic Modeling: It represents the relationship between linguistic units of speech and audio signals.

  • Language Modeling: It differentiates between similar-sounding words by matching sounds with word sequences.
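To make the language-modeling idea concrete, here is a toy sketch. It is purely illustrative: the word-pair counts and function names are invented for this example, and real systems use probabilistic models trained on large corpora. The idea is simply that a candidate transcription whose word pairs occur more often is preferred.

```python
# Toy language model: invented word-pair counts standing in for
# statistics learned from a large text corpus.
PAIR_COUNTS = {
    ("recognize", "speech"): 60,
    ("wreck", "a"): 2,
    ("a", "nice"): 40,
    ("nice", "beach"): 8,
}

def score(sentence):
    """Sum the counts of adjacent word pairs; higher means more plausible."""
    words = sentence.lower().split()
    return sum(PAIR_COUNTS.get(pair, 0) for pair in zip(words, words[1:]))

def pick_transcription(candidates):
    """Return the candidate the language model considers most likely."""
    return max(candidates, key=score)

print(pick_transcription(["wreck a nice beach", "recognize speech"]))
# prints: recognize speech
```

Both candidates sound nearly identical, but the pair ("recognize", "speech") is far more frequent, so the model selects the sensible transcription.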

Python voice recognition begins with a microphone translating the sound energy produced by the speaker into electrical energy. This electrical signal is then converted from analog to digital, and eventually to text, by Python algorithms. Natural Language Processing and neural networks perform these transformations, and hidden Markov models can be used to detect temporal patterns in speech and improve accuracy.
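The analog-to-digital step can be sketched in plain Python: sample a continuous signal at a fixed rate, then quantize each sample to a 16-bit integer, which is the form the recognition algorithms consume. The sine-wave stand-in for a voice and the constants below are illustrative only.

```python
import math

SAMPLE_RATE = 8000  # samples per second (telephone quality, illustrative)
MAX_AMP = 32767     # largest value of a signed 16-bit sample

def analog_signal(t):
    # Stand-in for the microphone's continuous voltage: a 440 Hz tone.
    return math.sin(2 * math.pi * 440 * t)

def digitize(duration_s):
    """Sample the analog signal and quantize each sample to a 16-bit integer."""
    n = int(SAMPLE_RATE * duration_s)
    return [round(analog_signal(i / SAMPLE_RATE) * MAX_AMP) for i in range(n)]

samples = digitize(0.01)  # 10 ms of audio -> 80 samples at 8 kHz
print(len(samples))       # 80
```

A real recognizer works on exactly this kind of integer sample stream, typically at 16 kHz or higher.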

Picking and Installing a Speech Recognition Module 

On PyPI, there are a few packages for Python voice recognition. Some of them are as follows:

  • apiai

  • assemblyai

  • google-cloud-speech

  • pocketsphinx

  • SpeechRecognition

  • watson-developer-cloud

  • wit

Packages like wit and apiai provide built-in functionality that goes beyond simple voice recognition, incorporating language processing to determine a speaker's intent. Packages like google-cloud-speech are primarily concerned with speech-to-text conversion.

In terms of usability, the package that stands out is SpeechRecognition.

Installing Speech recognition

SpeechRecognition is compatible with both Python series, although Python 2 requires some additional setup steps. You can use pip to install SpeechRecognition from the command line:

$ pip install SpeechRecognition

Once installed, verify the installation by launching an interpreter session and typing:

>>> import speech_recognition as sr
>>> sr.__version__
'3.8.1'

If working with existing audio files, SpeechRecognition will function right away. 

Opening a URL With Speech 

To open a website using speech recognition in Python, we will use Google's speech recognizer; the library also supports several other engines and APIs, both online and offline.

1. First and foremost, we need to give the path to the browser. Here we are using Google Chrome, so the path points to the Chrome executable.

path = "C:/Program Files (x86)/Google/Chrome/Application/chrome.exe %s"

2. Next, we create a recognizer object (r = sr.Recognizer()) and call the following to filter out background noise:

r.adjust_for_ambient_noise(source)

3. In the next step, we listen for the audio:

audio = r.listen(source)

4. To recognize the speech using Google Speech Recognition:

dest = r.recognize_google(audio)

5. Now, to open the browser (web here is Python's built-in webbrowser module):

web.get(path).open(dest)

6. Run the complete code, speak the name of a website, and it will open in the browser.
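Putting the steps together, a minimal sketch might look like this. It assumes the SpeechRecognition and PyAudio packages are installed and a microphone is attached; the to_url() helper is hypothetical (added here to turn a spoken phrase into something openable), and web refers to Python's built-in webbrowser module.

```python
def to_url(spoken):
    """Hypothetical helper: normalize a spoken destination such as
    'google dot com' into a URL. Adjust the rules to your needs."""
    cleaned = spoken.lower().replace(" dot ", ".").replace(" ", "")
    if not cleaned.startswith(("http://", "https://")):
        cleaned = "https://" + cleaned
    return cleaned

# Path template from step 1; webbrowser substitutes the URL for %s.
PATH = "C:/Program Files (x86)/Google/Chrome/Application/chrome.exe %s"

def open_site_by_voice():
    """Steps 2-5: listen on the microphone, transcribe, open the browser."""
    import webbrowser as web         # standard library
    import speech_recognition as sr  # pip install SpeechRecognition PyAudio

    r = sr.Recognizer()
    with sr.Microphone() as source:
        r.adjust_for_ambient_noise(source)  # step 2: filter background noise
        audio = r.listen(source)            # step 3: capture speech
    dest = r.recognize_google(audio)        # step 4: transcribe with Google
    web.get(PATH).open(to_url(dest))        # step 5: launch Chrome

# open_site_by_voice()  # uncomment to run with a microphone attached
```

The browser path will differ on other machines; on Linux or macOS you can also pass web.get() the name of an installed browser instead of a command template.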

Required Installations 

To use all of the functionality of the library, one must have the following

  • Python 3.8 (required)

  • PyAudio 0.2.11 (required only if microphone input is used)

  • PocketSphinx (required only if the Sphinx recognizer is used)

  • Google API Client Library for Python (required only if you need to use the Google Cloud Speech API)

  • FLAC encoder (required only if the system is not x86-based Windows/Linux/OS X)

  • Vosk (required only if Vosk API speech recognition is used)

  • Whisper (required only if you need to use Whisper)

  • openai (required only if Whisper API speech recognition is used)

Troubleshooting 

So far we have covered how to install and use the library. SpeechRecognition generally works well and accurately for such an easy-to-use package. However, it is not without flaws. Let's look at some of the most prevalent issues and how to solve them.

1. If the recognizer struggles to pick up speech, try decreasing the recognizer_instance.energy_threshold property, or calling:

>>> recognizer_instance.adjust_for_ambient_noise(source, duration=1)

2. Try using noise-canceling techniques like adjusting the ambient sounds.

3. Check, from the control panel, that your system's microphone is functioning correctly.

4. Ensure the speech recognition module is correctly installed.

5. If using Visual Studio Code, then also install the code shell command and set permissions for microphone access.

Functioning with Audio Files 

SpeechRecognition's AudioFile class makes it simple to work with audio files. This class takes a path to an audio file as an argument and offers a context manager interface for reading and interacting with the file's contents.

If you are using x86-based Linux, macOS, or Windows, FLAC files work out of the box. Other platforms require installing a FLAC encoder and ensuring access to the flac command-line utility.

Supported File Types 

The following file types are supported by SpeechRecognition:

  • WAV: "PCM/LPCM" format is required.

  • FLAC: must be native FLAC format; OGG-FLAC is not supported.

  • AIFF and AIFF-C
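Since WAV input must be PCM, a small test file can be generated with nothing but the standard library's wave module; the tone, filename, and parameters below are arbitrary choices for illustration.

```python
import math
import struct
import wave

def write_tone(filename, freq=440, seconds=1, rate=16000):
    """Write a mono 16-bit PCM WAV file containing a sine tone."""
    frames = b"".join(
        struct.pack("<h", int(32767 * math.sin(2 * math.pi * freq * i / rate)))
        for i in range(rate * seconds)
    )
    with wave.open(filename, "wb") as w:
        w.setnchannels(1)   # mono
        w.setsampwidth(2)   # 2 bytes per sample = 16-bit PCM
        w.setframerate(rate)
        w.writeframes(frames)

write_tone("xyz.wav")

# The result is a plain PCM WAV, the format sr.AudioFile can read.
with wave.open("xyz.wav", "rb") as w:
    print(w.getnframes(), w.getsampwidth())  # 16000 2
```

A file like this is handy for checking that your installation reads WAV input correctly before moving on to real recordings.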

Capturing data using record() 

To illustrate, we are using an audio file named "xyz.wav". To process its contents, enter the following into your interpreter session:

>>> xyz = sr.AudioFile('xyz.wav')
>>> with xyz as source:
...     audio = r.record(source)
...

The context manager opens the file and reads its contents, storing the data in an AudioFile instance named source. The record() method then records the data from the entire file into an AudioData instance. You can confirm this by checking the type of audio:

>>> type(audio)
<class 'speech_recognition.AudioData'>

You can now use recognize_google() to try to identify any speech in the audio. Depending on the speed of the internet connection, you may have to wait a few seconds before viewing the result.

>>> r.recognize_google(audio)
'the stale ... favorite a zestful food is the hot cross bun'

That's your first transcribed audio file.

Duration and Segment Offset Capturing 

What if you only want to capture a small portion of the speech in the file? The record() function accepts a duration keyword argument that stops the recording after a specified number of seconds.

For example, let's capture the portion of speech in the first five seconds

>>> with xyz as source:
...     audio = r.record(source, duration=5)
...
>>> r.recognize_google(audio)
'the stale smell of old beer lingers'

When used within a with block, the record() function always advances the file stream. This means that if you record for five seconds and then record for another five seconds, the second call returns the five seconds of audio following the first five seconds.

>>> with xyz as source:
...     audio1 = r.record(source, duration=5)
...     audio2 = r.record(source, duration=5)
...
>>> r.recognize_google(audio1)
'the stale smell of old beer lingers'
>>> r.recognize_google(audio2)
'it takes heat to bring out the odor a cold dip'

Note that audio2 contains part of the file's third phrase. When a duration is specified, the recording can stop mid-sentence or even mid-word, reducing transcription accuracy.

In addition to providing a recording period, the offset keyword parameter may be used to designate a precise beginning point for the recording. This value reflects the number of seconds to disregard from the starting point of the file before commencing to record.

Start with an offset of four seconds and record for, say, three seconds so you capture only the second sentence in the file.

>>> with xyz as source:
...     audio = r.record(source, offset=4, duration=3)
...
>>> r.recognize_google(audio)
'it takes heat to bring out the odor'

If you know the arrangement of the words in the audio file, the offset and duration keyword parameters might help you segment it. However, if they are used hastily, they might result in bad transcriptions. 
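When you do know the layout of the file, the offset/duration bookkeeping is easy to script. The helper below is an illustration, not part of SpeechRecognition; each pair it yields can be passed to record() as the offset and duration keyword arguments.

```python
def segment_plan(total_seconds, chunk_seconds):
    """Yield (offset, duration) pairs that cover the file in fixed chunks."""
    offset = 0
    while offset < total_seconds:
        yield offset, min(chunk_seconds, total_seconds - offset)
        offset += chunk_seconds

# Each pair maps onto r.record(source, offset=o, duration=d).
print(list(segment_plan(12, 5)))  # [(0, 5), (5, 5), (10, 2)]
```

Transcribing each chunk separately and joining the results is a common workaround for long files, at the cost of the mid-word cuts described above.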

Another cause of erroneous transcriptions is noise. In the example above, the audio file is very clear, so the transcription is accurate. In real-world scenarios, noise-free audio is difficult to find.

Conclusion

In this article, we have discussed how to install the SpeechRecognition package and use its Recognizer class to quickly recognize speech from a file (using record()) and microphone input (using listen()). We also learned how to use the offset and duration keyword parameters of the record() function to handle audio file segments.

FAQs

1. Are there any open-source projects for speech-to-text recognition?

Yes, a few open-source projects for speech-to-text recognition are

  • DeepSpeech

  • SpeechRecognition

  • Leon

  • Wav2letter

  • Annyang

2. Does speech recognition have an API key?

The SpeechRecognition library ships with a default API key for the Google Web Speech API, so with recognize_google() one can start immediately for free.

3. What is Audio Preprocessing?

If you receive an error when passing in audio data, it is often because the audio file's format is incorrect. To avoid this kind of issue, the audio data must be preprocessed. The AudioFile class exists specifically for working with audio files.
