/**
*************************************************************************************************
* @file readme.txt
* @author MCD Application Team
*           @brief   Description of the Artificial Intelligence Handwriting Character Recognition example.
*************************************************************************************************
*
* Copyright (c) 2019 STMicroelectronics. All rights reserved.
*
* This software component is licensed by ST under BSD 3-Clause license,
* the "License"; You may not use this file except in compliance with the
* License. You may obtain a copy of the License at:
* opensource.org/licenses/BSD-3-Clause
*
*************************************************************************************************
*/
This project demonstrates a complex application running on both CPU1(CA7) and CPU2(CM4).
The application is a launcher that recognizes handwritten characters drawn on the touch screen in order
to execute specific actions.
CPU1(CA7) handles the touch events and the Graphical User Interface.
CPU2(CM4) is used to offload the processing of a Cube.AI pre-built Neural Network.
The communication between CPU1(CA7) and CPU2(CM4) is done through a virtual UART, creating an
Inter-Processor Communication channel seen as a TTY device in Linux.
The implementation is based on:
* the RPMsg framework on the CPU1(CA7) side
* the OpenAMP MW on the CPU2(CM4) side
OpenAMP MW uses the following HW resources:
* the IPCC peripheral for event signaling (mailbox) between CPU1(CA7) and CPU2(CM4)
* the MCUSRAM for the communication buffers (virtio buffers) shared between CPU1(CA7) and CPU2(CM4)
The reserved shared memory region for this example is SHM_ADDR=0x10040000 and SHM_SIZE=128k,
as defined in the platform_info.c file.
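As a reference, this region could be described with build-time constants along these lines (a minimal
sketch; the names below are illustrative, the actual symbols are those defined in platform_info.c):

    /* Reserved shared memory window in MCUSRAM used for the IPC buffers.
       Illustrative defines; the real definitions live in platform_info.c. */
    #define SHM_START_ADDRESS  0x10040000U  /* SHM_ADDR           */
    #define SHM_SIZE           0x00020000U  /* SHM_SIZE = 128 KiB */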
A communication protocol has been defined between CPU1(CA7) and CPU2(CM4).
The data frames exchanged have the following structure (a short frame-building sketch in C follows
the two message lists below):
----------------------------------------------------------------
| msg ID | data Length | data Byte 1 | ... | data Byte n | CRC |
----------------------------------------------------------------
- 3 types of messages can be received by CPU2(CM4):
* Set the Neural Network input type (0x20, 0x01, data, CRC)
* data = 0 => NN input is letter or digit
* data = 1 => NN input is letter only
* data = 2 => NN input is digit only
* Provide the touch screen coordinates (0x21, n, data_x1, data_y1, ... , data_xn, data_yn, CRC)
* n => the number of coordinate points
* data_xn => x coordinate of the point n
* data_yn => y coordinate of the point n
* Start AI NN processing (0x22, 0x00, CRC)
- 4 types of acknowledgments can be received on the CPU1(CA7) side:
* Bad acknowledgment (0xFF, 0x00, CRC)
* Good acknowledgment (0xF0, 0x00, CRC)
* Touch screen acknowledgment (0xF0, 0x01, n, CRC)
* n => number of screen coordinate points acknowledged
* AI processing result acknowledgment (0xF0, 0x04, char, accuracy, time_1, time_2, CRC)
* char => the recognized letter (or digit)
* accuracy => the confidence expressed as a percentage
* time_1 => upper byte of the processing time (word) expressed in ms
* time_2 => lower byte of the processing time (word) expressed in ms
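As an illustration of this protocol, a request frame could be built as sketched below. This is a
minimal sketch: the CRC algorithm is not described in this readme, so compute_crc() is a hypothetical
placeholder, and build_set_input_type() is an illustrative helper name.

    #include <stddef.h>
    #include <stdint.h>

    /* Message IDs from the protocol description above */
    #define MSG_SET_NN_INPUT_TYPE  0x20U  /* (0x20, 0x01, data, CRC)     */
    #define MSG_TOUCH_COORDINATES  0x21U  /* (0x21, n, x1, y1, ..., CRC) */
    #define MSG_START_NN           0x22U  /* (0x22, 0x00, CRC)           */

    /* Hypothetical helper: the actual CRC algorithm is the one implemented
       by the firmware, which this readme does not specify. */
    extern uint8_t compute_crc(const uint8_t *buf, size_t len);

    /* Build the "set NN input type" frame; returns the frame length. */
    static size_t build_set_input_type(uint8_t frame[4], uint8_t input_type)
    {
        frame[0] = MSG_SET_NN_INPUT_TYPE;
        frame[1] = 0x01;        /* data length                             */
        frame[2] = input_type;  /* 0: letter or digit, 1: letter, 2: digit */
        frame[3] = compute_crc(frame, 3);
        return 4;
    }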
On the CPU2(CM4) side:
- CPU2(CM4) initializes the OpenAMP MW, which initializes/configures the IPCC peripheral through the HAL
and sets up the OpenAMP/rpmsg framework infrastructure
- CPU2(CM4) creates 1 rpmsg channel for 1 virtual UART instance, UART0
- CPU2(CM4) initializes the Character Recognition Neural Network
- CPU2(CM4) waits for messages from CPU1(CA7) on this channel
- When CPU2(CM4) receives a message on the virtual UART instance/rpmsg channel, it processes the message
and executes the associated action:
* set the NN input type to the desired value
* or register the touch event coordinates to generate the picture that will be processed by the NN
* or start the NN processing and wait for the results
- After each of the previous actions, CPU2(CM4) sends back to CPU1(CA7) one of the acknowledgments defined
above (see the sketch after this list).
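For reference, the CM4 receive path could look like the following sketch, modeled on the OpenAMP
virtual UART examples shipped with STM32CubeMP1 (the overall structure and process_frame() are
illustrative, not the exact demonstration code):

    #include <stdint.h>
    #include "openamp.h"    /* MX_OPENAMP_Init, OPENAMP_check_for_message */
    #include "virt_uart.h"  /* VIRT_UART_* API from the Cube middleware   */

    static VIRT_UART_HandleTypeDef huart0;

    /* Hypothetical dispatcher: decode msg ID / length / CRC, then set the
       input type, store coordinates, or start the NN (see protocol above),
       and send the acknowledgment back with VIRT_UART_Transmit(). */
    static void process_frame(uint8_t *buf, uint16_t len)
    {
        /* ... */
    }

    /* Called by the middleware when a frame arrives on the rpmsg channel */
    static void uart0_rx_callback(VIRT_UART_HandleTypeDef *huart)
    {
        process_frame(huart->pRxBuffPtr, huart->RxXferSize);
    }

    int main(void)
    {
        /* HAL, clock and Neural Network initialization omitted for brevity */
        MX_OPENAMP_Init(RPMSG_REMOTE, NULL);   /* IPCC + rpmsg setup       */
        VIRT_UART_Init(&huart0);               /* one rpmsg channel: UART0 */
        VIRT_UART_RegisterCallback(&huart0, VIRT_UART_RXCPLT_CB_ID,
                                   uart0_rx_callback);
        while (1)
            OPENAMP_check_for_message();       /* poll for incoming frames */
    }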
On the CPU1(CA7) side:
- CPU1(CA7) opens the input event device to register the touch events generated by the user's finger drawing
- CPU1(CA7) configures the input type (letter only) of the Neural Network running on CPU2(CM4) by
sending a message through the virtual TTY communication channel
- when the drawing is finished, CPU1(CA7) processes the touch event data and sends it to CPU2(CM4)
- CPU1(CA7) starts the Neural Network processing, waits for the result, and displays the recognized character
on the display (a minimal userspace sketch follows this list)
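For reference, a minimal Linux userspace sketch of that last exchange is given below. The
/dev/ttyRPMSG0 device name and the hard-coded CRC byte are assumptions; the real launcher also
parses the input events and builds the coordinate frames.

    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        /* The rpmsg virtual UART is exposed by Linux as a TTY device;
           the /dev/ttyRPMSG0 name is an assumption in this sketch. */
        int fd = open("/dev/ttyRPMSG0", O_RDWR | O_NOCTTY);
        if (fd < 0) {
            perror("open");
            return 1;
        }

        /* "Start AI NN processing" frame (0x22, 0x00, CRC); the CRC byte
           is a placeholder here. */
        uint8_t frame[3] = { 0x22, 0x00, 0x00 };
        write(fd, frame, sizeof(frame));

        /* Read back the acknowledgment, e.g. (0xF0, 0x04, char, accuracy,
           time_1, time_2, CRC) when the NN processing succeeded. */
        uint8_t ack[16];
        ssize_t n = read(fd, ack, sizeof(ack));
        if (n >= 7 && ack[0] == 0xF0 && ack[1] == 0x04)
            printf("recognized: %c (%u%%)\n", ack[2], (unsigned)ack[3]);

        close(fd);
        return 0;
    }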
Some information about the Character Recognition Neural Network:
The Character Recognition Neural Network is a Keras model processed by Cube.AI to generate the executable
code that runs on CPU2(CM4).
The Keras model used is located in the root directory of this project:
model-ABC123-112.h5
This model has been used in Cube.AI to generate the Neural Network binary.
The model accepts as input a 28x28 picture encoded with floats in black and white (black = 0.0, white = 1.0).
The output layer of the Neural Network contains 36 neurons (A -> Z and 0 -> 9).
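A short sketch of how this I/O contract can be handled around the inference call is given below (the
helper names are illustrative, not the Cube.AI API; the class ordering follows the A -> Z, 0 -> 9
description above):

    #include <stddef.h>

    #define NN_INPUT_W  28
    #define NN_INPUT_H  28
    #define NN_CLASSES  36  /* 'A'..'Z' then '0'..'9' */

    /* Map an output neuron index to the character it represents */
    static char class_to_char(size_t idx)
    {
        return (idx < 26) ? (char)('A' + idx) : (char)('0' + (idx - 26));
    }

    /* Pick the class with the highest score and report the confidence
       as a percentage, as carried in the result acknowledgment frame. */
    static char recognize(const float out[NN_CLASSES], float *confidence)
    {
        size_t best = 0;
        for (size_t i = 1; i < NN_CLASSES; i++)
            if (out[i] > out[best])
                best = i;
        *confidence = out[best] * 100.0f;
        return class_to_char(best);
    }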
Notes:
- A Linux console is required to run the application.
- CM4 logging is redirected to shared memory in MCUSRAM and can be displayed using the following command:
cat /sys/kernel/debug/remoteproc/remoteproc0/trace0
The following command should be run in the Linux console on the CA7 to start the example:
> /usr/local/demo/bin/ai_char_reco_launcher /usr/local/demo/bin/apps_launcher_example.sh
You are then ready to draw letters on the touch screen.
Hardware and Software environment:
- This example runs on STM32MP157CACx devices.
- This example has been tested with the STM32MP157C-DK2 and STM32MP157C-EV1 boards and can be
easily tailored to any other supported device and development board.
Where to find the M4 firmware source code:
The M4 firmware source code is delivered as a demonstration inside the STM32CubeMP1 package.
For the DK2 board:
<STM32CubeMP1>/Firmware/Projects/STM32MP157C-DK2/Demonstrations/AI_Character_Recognition
For the EV1 board:
<STM32CubeMP1>/Firmware/Projects/STM32MP157C-EV1/Demonstrations/AI_Character_Recognition