ALSpeechRecognition API

Namespace : AL

#include <alproxies/alspeechrecognitionproxy.h>

Methods

std::vector<std::string> ALSpeechRecognitionProxy::getAvailableLanguages()

Returns the list of the languages currently installed on the system.

Example: ['French', 'Chinese', 'English', 'German', 'Italian', 'Japanese', 'Korean', 'Portuguese', 'Spanish']

Returns: List of installed languages (language names are given in English).
std::string ALSpeechRecognitionProxy::getLanguage()

Returns the language currently used by the speech recognition system.

Example: ‘French’

The returned value is one of the available languages.

For further details, see: ALSpeechRecognitionProxy::getAvailableLanguages().

Returns: Current language used by the speech recognition engine.
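Since ALSpeechRecognitionProxy::setLanguage() only accepts an installed language, a caller can validate a requested language against the list returned by getAvailableLanguages() first. A minimal Python sketch (the helper name and the fallback policy are illustrative, not part of the API):

```python
def pick_language(requested, installed):
    """Return `requested` if it is installed, otherwise fall back.

    `installed` is the list returned by getAvailableLanguages(),
    e.g. ['French', 'English', 'Japanese'].
    """
    if requested in installed:
        return requested
    # Fall back to English when available, else to the first installed language.
    return "English" if "English" in installed else installed[0]
```

The chosen name can then be passed directly to setLanguage().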
float ALSpeechRecognitionProxy::getParameter(const std::string& parameter)

Gets a parameter of the speech recognition engine.

Note that when the ASR engine language is set to Chinese, no parameter can be retrieved.

Parameters:
  • parameter – Name of the parameter
Returns: Value of the parameter

std::vector<std::string> ALSpeechRecognitionProxy::getPhoneticTranscription(const std::string& word)

Deprecated since version 1.12: This method is not available on NAO V4 (ATOM).

Returns the phonetic transcription(s) used by the speech recognition engine when it is asked to recognize a word. Note that when the ASR engine language is set to Chinese or Japanese, no phonetic transcription can be retrieved.

Parameters:
  • word – Word to phoneticize
Returns: Phonetic transcription(s) of the word

void ALSpeechRecognitionProxy::loadVocabulary(const std::string& vocabulary)

Deprecated since version 1.12: This method is not available on NAO V4 (ATOM).

Loads a vocabulary to recognize from a .lxd file (ACAPELA grammar file format). This method is not available when the ASR engine language is set to Chinese or Japanese.

Parameters:
  • vocabulary – Name of the .lxd file containing the vocabulary

Note

On NAO V3 (GEODE), in Japanese and Chinese this method is inactive.

void ALSpeechRecognitionProxy::setAudioExpression(const bool& setOrNot)

When set to true, a beep is played at the beginning of the recognition process and another beep at the end. This is a useful cue to let the user know when it is appropriate to speak.

Parameters:
  • setOrNot – Enable (true) or disable it (false)
void ALSpeechRecognitionProxy::setLanguage(const std::string& language)

Sets the language used by the speech recognition system. Note that each NAOqi restart resets this setting to the default language, which can be configured on NAO's web page.

Parameters:
  • language – Name of the language to use (one of the installed languages)
void ALSpeechRecognitionProxy::setParameter(const std::string& parameter, const float& value)

Sets parameters of the speech recognition engine. For now the only parameter that can be set is the sensitivity [0 - 1] of the voice activity detector used by the engine.

Parameters:
  • parameter – Name of the parameter.
  • value – Value of the parameter.

Note

On NAO V3 (GEODE), the “sensitivity” parameter is not available.
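Because the sensitivity must lie in [0 - 1], it can be worth clamping a computed value before passing it to setParameter(). A minimal Python sketch (the helper name is illustrative; the exact parameter name expected by the engine is not shown here):

```python
def clamp_sensitivity(value):
    """Clamp a requested voice-activity-detector sensitivity into
    the [0, 1] range accepted by setParameter()."""
    return max(0.0, min(1.0, float(value)))
```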

void ALSpeechRecognitionProxy::setVisualExpression(const bool& setOrNot)

Enables or disables the LEDs animations showing the state of the recognition engine during the recognition process.

Parameters:
  • setOrNot – Enable (true) or disable it (false).
void ALSpeechRecognitionProxy::setVocabulary(const std::vector<std::string>& vocabulary, const bool& enableWordSpotting)

Sets the list of words/phrases (the vocabulary) that the speech recognition engine should recognize. If word spotting is disabled (the default), the engine expects to hear exactly one of the specified phrases, nothing more, nothing less. If it is enabled, the specified phrases can be pronounced in the middle of continuous speech, and the engine will try to spot them.

Parameters:
  • vocabulary – List of words that should be recognized
  • enableWordSpotting – Enable (true) or disable it (false)

Note

On NAO V3 (GEODE) the following differences apply:

  • The “word spotting” option is inactive.
  • In Japanese, only single words are recognized (not phrases).
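Before handing a list to setVocabulary(), it can be useful to normalize it, since empty or duplicate entries add nothing to the grammar. A minimal Python sketch (the helper name is illustrative, not part of the API):

```python
def clean_vocabulary(phrases):
    """Normalize a word/phrase list before passing it to setVocabulary():
    strip whitespace, drop empty entries, and remove case-insensitive
    duplicates while keeping the original order."""
    seen = set()
    out = []
    for phrase in phrases:
        phrase = phrase.strip()
        if phrase and phrase.lower() not in seen:
            seen.add(phrase.lower())
            out.append(phrase)
    return out
```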
void ALSpeechRecognitionProxy::setWordListAsVocabulary(const std::vector<std::string>& vocabulary)

Sets the list of words/phrases (vocabulary) that should be recognized by the speech recognition engine. To enable “word spotting”, please use ALSpeechRecognitionProxy::setVocabulary() instead.

Parameters:
  • vocabulary – List of words that should be recognized
void ALSpeechRecognitionProxy::subscribe(const std::string& name)

Subscribes to ALSpeechRecognition. This causes the module to start writing recognition results to the “WordRecognized” key in ALMemory, which can be read with ALMemoryProxy::getData().

Parameters:
  • name – Name to identify the subscriber
void ALSpeechRecognitionProxy::unsubscribe(const std::string& name)

Unsubscribes from ALSpeechRecognition. This causes the module to stop writing information to the “WordRecognized” key in ALMemory.

Parameters:
  • name – Name used when subscribing
Events

Event: "WordRecognized"
callback(std::string eventName, AL::ALValue value, std::string subscriberIdentifier)

Raised when one of the words specified with ALSpeechRecognitionProxy::setWordListAsVocabulary() has been recognized. When no word is currently recognized, this value is reinitialized.

Parameters:
  • eventName (std::string) – “WordRecognized”
  • value – Recognized word information. Please refer to ALSpeechRecognition for details.
  • subscriberIdentifier (std::string) –
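In practice the “WordRecognized” value is typically a flat list alternating recognized phrase and confidence score. Assuming that shape, a small Python helper can turn it into (phrase, confidence) pairs sorted by confidence (the helper name is illustrative, not part of the API):

```python
def parse_word_recognized(value):
    """Turn a flat [phrase, confidence, phrase, confidence, ...] list
    into (phrase, confidence) pairs, best match first."""
    pairs = [(value[i], float(value[i + 1])) for i in range(0, len(value), 2)]
    return sorted(pairs, key=lambda pair: pair[1], reverse=True)
```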
Event: "LastWordRecognized"
callback(std::string eventName, AL::ALValue value, std::string subscriberIdentifier)

Raised when one of the words specified with ALSpeechRecognitionProxy::setWordListAsVocabulary() has been recognized. This value is kept unchanged until a new word has been recognized.

Parameters:
  • eventName (std::string) – “LastWordRecognized”
  • value – Last recognized word information. Please refer to ALSpeechRecognition for details.
  • subscriberIdentifier (std::string) –
Event: "SpeechDetected"
callback(std::string eventName, bool value, std::string subscriberIdentifier)

Raised when the automatic speech recognition engine has detected voice activity.

Parameters:
  • eventName (std::string) – “SpeechDetected”
  • value – True if voice activity is detected.
  • subscriberIdentifier (std::string) –