NAOqi Audio - Overview | API
The ALSpeechRecognition module gives the robot the ability to recognize predefined words or phrases in several languages (English is the default language).
Note
This module is only available on a real robot, you cannot test it on a simulated robot.
ALSpeechRecognition relies on sophisticated speech recognition technologies provided by:
Step | Description |
---|---|
A | Before starting, ALSpeechRecognition needs to be fed the list of phrases that should be recognized. |
B | Once started, ALSpeechRecognition places in the key SpeechDetected a boolean that specifies whether a speaker is currently heard. |
C | If a speaker is heard, the element of the list that best matches what is heard by the robot is placed in the key WordRecognized. |
The WordRecognized key is organized as follows:
[phrase_1, confidence_1, phrase_2, confidence_2, ..., phrase_n, confidence_n]
where:

- phrase_i is one of the phrases of the predefined list,
- confidence_i is an estimate of the probability that this phrase is indeed what was pronounced.

Note that the hypotheses contained in that key are ordered so that the most likely phrase comes first.
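The flat WordRecognized list can be unpacked into (phrase, confidence) pairs. A minimal sketch (the example values are illustrative, not output from a real robot):

```python
# Unpack the flat [phrase_1, conf_1, phrase_2, conf_2, ...] list that
# ALSpeechRecognition writes to the WordRecognized key in ALMemory.

def unpack_word_recognized(data):
    """[p1, c1, p2, c2, ...] -> [(p1, c1), (p2, c2), ...]."""
    return list(zip(data[0::2], data[1::2]))

# Hypothetical value read from ALMemory after the robot heard "yes":
hypotheses = unpack_word_recognized(["yes", 0.82, "no", 0.07])

# The most likely phrase comes first, so the best match is the first pair.
best_phrase, best_confidence = hypotheses[0]
```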
Cannot be tested on a simulated robot.
Step | Action |
---|---|
1 | Connect Choregraphe to a real robot. |
2 | Drag and drop the Audio > Voice > Speech Reco. box onto the Flow Diagram panel. |
3 | Connect its input to the main input of the behavior. |
4 | Click the Play button. |
5 | When the eye LEDs turn blue and rotate, say “yes” or “no” to the robot. The eye LEDs should turn yellow (while hearing and analyzing) then green (when a word is recognized). |
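The Speech Reco. box wraps roughly the following sequence of calls: feed a vocabulary, subscribe to start recognition, then read the best hypothesis from ALMemory. A minimal sketch, assuming the NAOqi Python SDK and a reachable robot; the subscriber name `MyApplication` and the `proxy_factory` hook (which lets the flow be exercised without a robot) are illustrative, not part of the API:

```python
import time

def listen_once(vocabulary, robot_ip="<ROBOT_IP>", port=9559,
                proxy_factory=None, listen_seconds=5.0):
    """Feed a vocabulary, listen for a while, and return the best
    (phrase, confidence) hypothesis, or None if nothing was recognized."""
    if proxy_factory is None:
        # Requires the NAOqi Python SDK; imported lazily so the rest of
        # this sketch can be exercised with stub proxies.
        from naoqi import ALProxy
        proxy_factory = lambda name: ALProxy(name, robot_ip, port)

    asr = proxy_factory("ALSpeechRecognition")
    memory = proxy_factory("ALMemory")

    # Step A: feed the module the phrases it should recognize.
    asr.setLanguage("English")
    asr.setVocabulary(vocabulary, False)  # False: no word spotting

    # Steps B-C: start recognition (eye LEDs turn blue), wait while the
    # user speaks, then read the ranked hypotheses from ALMemory.
    asr.subscribe("MyApplication")
    time.sleep(listen_seconds)
    result = memory.getData("WordRecognized")
    asr.unsubscribe("MyApplication")

    # WordRecognized is [phrase_1, conf_1, ...], best hypothesis first.
    return (result[0], result[1]) if result else None
```

Note that recognition only runs while at least one client is subscribed, which is why the box stops listening when the behavior ends.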
Step | Action |
---|---|
1 | Click the Parameter button of the box and enter your own word list. You can also try modifying the other options. |
2 | Click the Play button to test the result. |