Vision AI Module by Seeed
Grove Vision AI V2 is an embedded vision module based on the Arm Cortex-M55 and Ethos-U55. The Ethos-U55 provides 64 to 512 GOP/s of compute, which meets the growing demand for deploying machine learning models to the edge for inference.
Use the online site to download your favorite model to Grove Vision AI
Downloading the model directly to Grove Vision AI V2 using the web-side tool (SenseCraft) is the easiest way to use the module.
SenseCraft AI (seeed-studio.github.io)
Please make sure the CH343 driver is installed before using the module. If it is not, this page will help you install it automatically, but you need to reboot your computer after installation.
When you open this URL, you will see the above page. At this point you need to connect your device.
There are two options in the drop-down box. Please select Grove Vision AI V2.
Click on the appropriate COM port to connect; I chose COM3 in this picture.
On this website, SenseCraft shows a number of efficient and interesting models that are ready to be deployed directly to the Grove Vision AI V2.
Click on one of the models shown in the image (e.g. Face Detection in the figure) and then click Send. The page then shows the following message, which means the model is being flashed. Just be patient for a few minutes.
When the model has been flashed successfully, the real-time inference results and images can be viewed in the preview on the right side. Two hyperparameters can be adjusted. One is CONFIDENCE: only inference results above this threshold are displayed. The other is IoU: the higher it is, the more overlapping prediction boxes for the same real object are kept. It is recommended to leave both parameters at their default values.
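To make these two thresholds concrete, here is a minimal, self-contained Python sketch of how a confidence threshold and IoU-based suppression interact. This is only an illustration of the concept, not the module's actual firmware code; the function names and values are our own.

```python
# Illustration of the CONFIDENCE and IoU thresholds (not the module's firmware).
# A box is (x1, y1, x2, y2); a detection is (box, score).

def iou(a, b):
    """Intersection-over-union of two boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def filter_detections(dets, conf_thresh=0.5, iou_thresh=0.45):
    """Drop low-confidence detections, then suppress overlapping boxes.

    A higher iou_thresh keeps more overlapping boxes for the same object,
    which is why raising IoU can show multiple boxes around one face."""
    dets = sorted((d for d in dets if d[1] >= conf_thresh),
                  key=lambda d: d[1], reverse=True)
    kept = []
    for box, score in dets:
        if all(iou(box, k[0]) < iou_thresh for k in kept):
            kept.append((box, score))
    return kept
```

For example, given two heavily overlapping boxes around the same face, the default IoU threshold keeps only the higher-scoring one, while a very high IoU threshold keeps both.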
Using Grove Vision AI to communicate with the Petoi robot dog
You can use the Arduino IDE to modify our open-source program to use Grove Vision AI. Our program integrates target tracking with the Grove Vision AI V2; you can enable this feature by simply modifying the code. You can also develop richer functionality with the APIs of the SSCMA library.
Make sure the following statement is not commented out in OpenCatEsp32.ino:
#define CAMERA
Comment out the
#define MU_CAMERA
statement in camera.h; activate the
#define GROVE_VISION_AI_V2
statement, then recompile and upload the code to the board.
Note that you need to add the relevant library first, as follows.
Download the latest version of the Seeed_Arduino_SSCMA library from the GitHub repository.
Add the library to your Arduino IDE by selecting Sketch > Include Library > Add .ZIP Library and choosing the downloaded file.
Alternatively, you can write your own functional code, modeled after the Grove Vision AI V2 integration in OpenCatEsp32.
The following is a brief description of how to use SSCMA's common APIs.
Train and deploy your own models to Grove Vision AI V2 on your own PC
If you want to train models on your own dataset for visual detection tasks, we recommend using the yolov5 family of models.
First, we will show you how to train your own yolov5 model on your own PC. Then, we will show you how to quantize and convert the trained model and deploy it to the Grove Vision AI.
We recommend doing the training phase of the yolov5 model on the Windows operating system. You can use VS Code or PyCharm as your editor, and Anaconda or virtualenv for Python environment management. In short, you need a Python interpreter.
We hope your device has an Nvidia GPU, as this will greatly speed up the training process.
If you have not used Git before, you need to install it; on Windows, you can download Git from the following link.
Git - Downloading Package (git-scm.com)
You need to clone the yolov5 repository.
git clone https://github.com/ultralytics/yolov5.git
You then need to open the project in an environment that has the required Python interpreter. For example, I use Anaconda to manage Python environments on Windows, so I create the required environment in Anaconda, e.g.:
conda create -n yolov5 python=3.9
conda activate yolov5
We open the project in the environment we created.
Above is a screenshot of the project opened with the VS Code editor; you can see that there is a requirements.txt file, which lists the third-party libraries we need.
Use the following command to install the third-party libraries in the current environment:
pip install -r requirements.txt
At this point most of the third-party libraries are installed, but we need to reinstall PyTorch, since the torch and torchvision builds you install must match your CUDA version.
Check your CUDA version (for example, by running nvidia-smi); then you will see:
As shown above, the CUDA version is 12.5. Download the PyTorch and torchvision builds for the corresponding CUDA version from the PyTorch website; just make sure that your CUDA version is no lower than the CUDA version that the PyTorch build requires. Since my CUDA version is 12.5, I can install PyTorch built for CUDA 12.1.
At this point, we've finished installing all the third-party libraries. However, we recommend double-checking that PyTorch is installed correctly, for example with:
python -c "import torch; print(torch.cuda.is_available())"
If torch.cuda.is_available() outputs True, the installation is correct.
You can use the dataset provided by yolov5 for retraining, or you can make your own dataset for training.
If you use the dataset provided by yolov5 for model training, execute the following command in the directory where you installed yolov5:
python train.py --weights yolov5n.pt --data ${dataset yaml file path} --imgsz 192
The argument to --data can be any of the yaml files under yolov5\data; for example, --data coco128.yaml. Also, --imgsz 192 is the input image size for the Grove Vision AI V2; if the model is going to be deployed to the Grove Vision AI, imgsz must stay at 192.
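If you make your own dataset instead, the file passed to --data follows the same layout as coco128.yaml. Below is a minimal sketch; the paths and class names are illustrative placeholders, not from the original project:

```yaml
# my_dataset.yaml — illustrative example of a yolov5 dataset description
path: ../datasets/my_dataset  # dataset root directory
train: images/train           # training images, relative to path
val: images/val               # validation images, relative to path

# class index -> class name; the order must match your label files
names:
  0: person
  1: dog
```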
The output during training is shown below:
The trained model is saved under a path similar to yolov5\runs\train\exp10\weights.
After training, you need to convert the .pt model to a saved_model. Run:
python export.py --weights E:\Project\yolov5\runs\train\exp10\weights\best.pt --imgsz 192 --include saved_model
When finished, the following folder appears:
Install Tensorflow:
pip install tensorflow
Create the following new python script to convert our saved_model model file to a tflite model file:
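The exact script used by the original author is not shown here, so below is a minimal sketch of such a conversion script. The full-integer quantization settings are common choices for Ethos-U targets, and random calibration data is used as a placeholder; feeding real training images instead gives much better quantization accuracy.

```python
# saved_model2tflite.py — a minimal sketch of full-integer TFLite conversion.
# Assumes export.py produced a SavedModel directory (e.g. "best_saved_model").
import numpy as np

IMG_SIZE = 192  # must match the --imgsz used for training and export


def representative_dataset(num_samples=100):
    """Yield calibration inputs for full-integer quantization.

    Random data is a placeholder; use real training images in practice."""
    for _ in range(num_samples):
        yield [np.random.rand(1, IMG_SIZE, IMG_SIZE, 3).astype(np.float32)]


def convert(saved_model_dir, output_path):
    import tensorflow as tf  # imported here, so it is only required for conversion

    converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
    converter.optimizations = [tf.lite.Optimize.DEFAULT]
    converter.representative_dataset = representative_dataset
    # The Ethos-U55 runs fully quantized int8 models only
    converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
    converter.inference_input_type = tf.int8
    converter.inference_output_type = tf.int8
    with open(output_path, "wb") as f:
        f.write(converter.convert())

# Usage, once the SavedModel folder exists:
#   convert("best_saved_model", "yolov5n_int8.tflite")
```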
Execute this Python script; for example, I name this script saved_model2tflite.py. Execute:
python saved_model2tflite.py
You will get yolov5n_int8.tflite.
Graph optimization
pip3 install ethos-u-vela
You may encounter the following error while building vela:
You will need to download and install Microsoft C++ Build Tools to resolve this issue.
After you have installed vela, you need to download the vela configuration file, or copy the following content into a file, which can be named vela_config.ini.
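For reference, a vela configuration for an Ethos-U55 system generally looks like the sketch below. The section names My_Sys_Cfg and My_Mem_Mode_Parent match the vela command used in this guide, but the clock and port values are illustrative assumptions; prefer the configuration file provided by Seeed for real deployments.

```ini
; vela_config.ini — illustrative sketch; the numeric values are assumptions,
; use the vendor-provided file for actual deployments
[System_Config.My_Sys_Cfg]
core_clock=400e6
axi0_port=Sram
axi1_port=OffChipFlash
Sram_clock_scale=1.0
OffChipFlash_clock_scale=0.125

[Memory_Mode.My_Mem_Mode_Parent]
const_mem_area=Axi1
arena_mem_area=Axi0
cache_mem_area=Axi0
```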
Finally, use the following command to optimize the graph:
vela --accelerator-config ethos-u55-64 \
  --config vela_config.ini \
  --system-config My_Sys_Cfg \
  --memory-mode My_Mem_Mode_Parent \
  --output-dir ${Save path of the optimized model} \
  ${The path of the tflite model that needs to be optimized}
You will then get a model file named something like yolov5n_int8_vela.tflite; this is the optimized model.
You can upload your own trained model to Grove Vision AI V2 via SenseCraft AI as mentioned above.
You can customize the name of the model and upload the model file to the site; then you need to enter the labels according to how the model was trained. For example, if you used the coco128 dataset for training, there are 80 labels in total. Be careful to keep a one-to-one correspondence between the label indices and the order of the label names.
When you finish downloading the model, you can see in the window on the right that Grove Vision AI performs inference using the user-defined model. Different models return different inference results. In our tests, a self-trained yolov5 model correctly outputs the recognized labels, but it cannot output the coordinates or the bounding box. So if you are also programming with Arduino, you only need to focus on score and target.
Use Ultralytics HUB to train models in the cloud
In the section above, we explained how to train a yolov5 model on your own computer. But for those who lack the necessary hardware, training in the cloud with Ultralytics HUB is also a good option.
Above is the web link for Ultralytics HUB, open this link and you can see the introduction and tutorial of using it.
To use the Ultralytics HUB, you need a GitHub, Google, or Apple account.
Also, you need a google email to use Google Colab.
At this point you already have the desired model file locally. If you have downloaded a TFLite model and wish to deploy it to Grove Vision AI V2, you will also need to perform graph optimization; see the Graph optimization section earlier in this document.
For how to deploy the graph-optimized model to Grove Vision AI V2, see the earlier section on deploying models through the website.