ALVideoDevice - Advanced
This architecture is especially designed for NAO’s camera. Other video sources use this architecture by emulating some of its parts, such as the driver in streaming mode and its circular buffer management.
A Vision Module (V.M.) needs images in a specific format to perform its processing, so it subscribes to ALVideoDevice, which transforms the stream into the required format (resolution and color space). If this format is the native one of the video source, direct raw access can be requested (an advanced feature that implies some constraints).
The video source used by ALVideoDevice is defined in the VideoInput.xml preference file. Right now, three different kinds of video sources are available: NaoCam, SimulatorCam and FileCam.
Note
To switch between cameras, use ALVideoDeviceProxy::setParam() with the kCameraSelectID parameter (or the kCameraFastSwitchID advanced parameter if available).
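As a minimal sketch (the connection address is a placeholder; kCameraSelectID comes from alvision/alvisiondefinitions.h), switching to the bottom camera from C++ could look like this:

    #include <alproxies/alvideodeviceproxy.h>
    #include <alvision/alvisiondefinitions.h>

    AL::ALVideoDeviceProxy camProxy("nao.local", 9559);  // placeholder address
    // 0 selects the top camera, 1 the bottom camera.
    camProxy.setParam(AL::kCameraSelectID, 1);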
ALVideoDevice manages the video source. If the video source is NAO’s camera for instance, it will:
Note
Except that there are no I2C communications and no V4L2 driver for the SimulatorCam and FileCam video devices, both run in a similar way to NaoCam within ALVideoDevice: a circular buffer has been implemented to simulate the one of the V4L2 driver, and the SimulatorCam video flow is converted into YUV422 to keep an abstraction layer over the active video device.
The V.M. sends a request via the broker to subscribe to ALVideoDevice with the following parameters: the V.M. name, the requested resolution, the requested color space and the requested frame rate.
Note
Only 8 instances of the same module are allowed, to guard against programming mistakes from the user that would lead to a performance loss. You can unsubscribe all instances by calling ALVideoDeviceProxy::unsubscribeAllInstances().
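For illustration, a minimal subscription sequence from C++ might look as follows; the module name "my_vm" and the address are placeholders, and kYUV422ColorSpace may be named kYUV422InterlacedColorSpace in older SDK versions:

    #include <alproxies/alvideodeviceproxy.h>
    #include <alvision/alvisiondefinitions.h>

    AL::ALVideoDeviceProxy camProxy("nao.local", 9559);  // placeholder address

    // Subscribe with a name, a resolution, a color space and a frame rate.
    // ALVideoDevice returns the actual instance name (e.g. "my_vm_2"),
    // which must be used for all further calls.
    const std::string clientName =
        camProxy.subscribe("my_vm", AL::kQVGA, AL::kYUV422ColorSpace, 30);

    // ... grab and process images ...

    camProxy.unsubscribe(clientName);  // release this instance
    // camProxy.unsubscribeAllInstances("my_vm") would drop every instance.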
At this stage, ALVideoDevice can look in the database to know what to provide to every V.M. sending a request.
In the figure below, you can see that three different V.M.s have subscribed to ALVideoDevice (shown in the blue section). To explain how ALVideoDevice works internally, let’s consider that the first two V.M.s ask for the same image format, and suppose further that the third V.M. needs the image format provided natively by the video source device.
In the ALVideoDevice thread section (green part), you can see the ALVision image containers that have been created to manage future requests from the Vision Modules. The first set of containers, on the left, just receives pointers to the driver buffers: it is merely an encapsulation into our image format, without any memory allocation, used to attach attributes (width, height, resolution, color space, lockers, etc.) to buffers containing only raw data. The second set of containers, on the right, has its own memory allocated, because these containers receive images transformed to the resolution and color space requested by the V.M.s.
Note
In fact, all the buffers of the right-hand set allocate the maximum amount of memory that any resolution and color space combination can require. Changing the resolution or color space therefore needs no memory reallocation, which would be time consuming, but only a modification of parameters.
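The idea can be sketched as follows (illustrative only, not actual ALVideoDevice code):

    #include <vector>

    // Illustration of the strategy: allocate once for the worst case, then
    // reinterpret the same buffer for whatever format is requested.
    struct ReusableImageBuffer {
      std::vector<unsigned char> data;
      int width, height, nbLayers;

      ReusableImageBuffer(int maxWidth, int maxHeight, int maxLayers)
        : data(maxWidth * maxHeight * maxLayers),  // allocated once
          width(0), height(0), nbLayers(0) {}

      // Switching format only updates attributes; no reallocation happens.
      void setFormat(int w, int h, int layers) {
        width = w; height = h; nbLayers = layers;
      }
    };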
A Vision Module can request images in two ways: the standard access mode, or the direct raw access mode (advanced). Both can be used in local or remote mode.
We recommend the standard access mode. When using the getImageLocal() function, the image is provided by ALVideoDevice in the format required by the V.M.
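A sketch of this local pattern, assuming the code runs inside a local NAOqi module (same process as ALVideoDevice, so getParentBroker() is available):

    #include <alproxies/alvideodeviceproxy.h>
    #include <alvision/alimage.h>
    #include <alvision/alvisiondefinitions.h>

    // Inside a local module: create the proxy from the module's broker.
    AL::ALVideoDeviceProxy camProxy(getParentBroker());
    const std::string clientName =
        camProxy.subscribe("my_vm", AL::kQVGA, AL::kYUV422ColorSpace, 30);

    // getImageLocal() returns a pointer to an ALImage owned by
    // ALVideoDevice; no copy of the pixel data is made.
    AL::ALImage* image = (AL::ALImage*) camProxy.getImageLocal(clientName);

    // ... read-only processing on image->getData() ...

    // Give the buffer back so that it can be written again.
    camProxy.releaseImage(clientName);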
Now suppose that another Vision Module that needs the same kind of image sends a request to ALVideoDevice.
Note
Buffers are timestamped at the driver level when the acquisition of a new frame starts, so their accuracy is better than one millisecond.
Note
Calling releaseImage() from a remote V.M., as we do for a local one, is not mandatory, but it is a good habit that eases switching your module from remote to local mode. It costs no processing time, as it does nothing when it follows a getImageRemote().
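In remote mode the image arrives as an ALValue container; here is a sketch that also reads the timestamp fields mentioned above (field layout as in Aldebaran’s remote-access examples; address and names are placeholders):

    #include <alproxies/alvideodeviceproxy.h>
    #include <alvalue/alvalue.h>
    #include <alvision/alvisiondefinitions.h>

    AL::ALVideoDeviceProxy camProxy("nao.local", 9559);  // placeholder address
    const std::string clientName =
        camProxy.subscribe("my_vm", AL::kQVGA, AL::kYUV422ColorSpace, 30);

    AL::ALValue img = camProxy.getImageRemote(clientName);
    int width  = (int) img[0];
    int height = (int) img[1];
    int tvSec  = (int) img[4];   // timestamp, seconds
    int tvUsec = (int) img[5];   // timestamp, microseconds
    const unsigned char* data =
        (const unsigned char*) img[6].GetBinary();

    // ... process the copied data ...

    // Not mandatory in remote mode, but keeps the code ready for local mode.
    camProxy.releaseImage(clientName);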
Once a buffer has been released, it becomes available again for writing, but it can still be accessed for reading if needed.
Warning
Obviously, V.M.s must not modify the incoming image, so that the other V.M.s obtain correct data. Therefore, if the result of your V.M. process is an image, it is strongly recommended to write it into an outgoing image.
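For instance, a minimal sketch (continuing with the camProxy and clientName from the local-access example above):

    #include <cstring>
    #include <vector>
    #include <alvision/alimage.h>

    // Never write into the shared incoming buffer: copy first, then modify.
    AL::ALImage* in = (AL::ALImage*) camProxy.getImageLocal(clientName);
    const int size = in->getWidth() * in->getHeight() * in->getNbLayers();
    std::vector<unsigned char> outgoing(size);
    std::memcpy(outgoing.data(), in->getData(), size);
    camProxy.releaseImage(clientName);

    // ... modify 'outgoing' freely: the other V.M.s still read correct data ...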
In direct raw access mode, the user has direct access to the raw image buffers of the driver. This means that the V.M. processing must work on the native format of the video source. In this case, instead of copying data from the unmapped driver buffer into an ALImage buffer with the correct format (arrow 4) and then providing a pointer (arrow 5), the V.M. process gets direct access, through a pointer, to the driver’s latest updated buffer (arrow 4bis). This is faster and consumes less processing power.
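A sketch of the local variant, assuming a local module context and NAO’s native YUV422 format (getDirectRawImageLocal() and releaseDirectRawImage() are the direct raw counterparts of the standard calls):

    #include <alproxies/alvideodeviceproxy.h>
    #include <alvision/alimage.h>
    #include <alvision/alvisiondefinitions.h>

    AL::ALVideoDeviceProxy camProxy(getParentBroker());  // local module context
    // Subscribe using the native format of NAO's camera (YUV422 here).
    const std::string clientName =
        camProxy.subscribe("my_raw_vm", AL::kVGA, AL::kYUV422ColorSpace, 30);

    // Direct pointer to the driver's latest updated buffer: no conversion,
    // no copy (arrow 4bis in the figure).
    AL::ALImage* raw =
        (AL::ALImage*) camProxy.getDirectRawImageLocal(clientName);

    // ... processing that understands the native format ...

    // Release quickly: the driver only has a few buffers to cycle through.
    camProxy.releaseDirectRawImage(clientName);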
But there are restrictions for using this mode:
Note
As with the remote standard access mode, the remote direct raw access mode automatically releases the raw buffers as soon as the ALValue conversion is done.
Let’s introduce the Aldebaran Robotics Video format (.arv), a lightweight format integrated into ALVideoDevice for transparently grabbing the video stream of any desired V.M.
Note
Changing some inner parameters of the video device, such as the resolution or the color space, will automatically close the file.
Note
This is a beta version: the code is not protected against incorrect manipulations such as setting an invalid file path or forgetting to call setVideo() before subscribing a V.M. Replay modes 2 and 3 are not implemented yet.
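Purely as a hypothetical sketch: the text above only tells us that setVideo() must be called before subscribing a V.M., so the argument list below (a file path and a mode flag) is an assumption, called through a generic proxy:

    #include <alcommon/alproxy.h>

    AL::ALProxy videoProxy("ALVideoDevice", "nao.local", 9559);  // placeholder
    // HYPOTHETICAL argument list: the file path and the mode flag are
    // assumptions made for illustration only.
    videoProxy.callVoid("setVideo",
                        std::string("/home/nao/capture.arv"), 0);
    // Subscribing a V.M. afterwards would transparently grab its stream
    // into the .arv file.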
The HSY color space is inspired by the HSV and HSL color spaces and is optimized for speed on embedded systems.
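As an illustrative sketch only, here is one plausible HSY-style conversion, assuming Y is the Rec. 601 luma (as in YUV), H the usual HSV/HSL hue and S the HSL saturation; the exact formulas used by NAOqi may differ:

    #include <algorithm>

    // Illustrative HSY-style conversion; the coefficients and the H/S
    // definitions are assumptions, not necessarily what ALVideoDevice uses.
    void rgbToHsy(unsigned char r, unsigned char g, unsigned char b,
                  unsigned char& h, unsigned char& s, unsigned char& y) {
      const int maxC = std::max(r, std::max(g, b));
      const int minC = std::min(r, std::min(g, b));
      const int chroma = maxC - minC;

      // Luma as in YUV (Rec. 601), integer approximation.
      y = (unsigned char)((299 * r + 587 * g + 114 * b) / 1000);

      // Saturation (HSL-like), scaled to 0..255.
      const int sum = maxC + minC;
      s = (sum == 0 || sum == 510)
          ? 0
          : (unsigned char)(255 * chroma / std::min(sum, 510 - sum));

      // Hue on a 0..179 scale (half-degrees), 0 for achromatic colors.
      int hue = 0;
      if (chroma != 0) {
        if (maxC == r)      hue = (60 * (g - b) / chroma + 360) % 360;
        else if (maxC == g) hue = 60 * (b - r) / chroma + 120;
        else                hue = 60 * (r - g) / chroma + 240;
      }
      h = (unsigned char)(hue / 2);
    }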