In the ‘Model Training’ section, we explained how to train a YOLOv8 model, but in order to deploy it on Grove Vision V2, we need to further quantize the model. This section covers the following steps:
Model INT8 quantization
Model optimization
Model INT8 quantization
First of all, we need the trained .pt model file. In ‘Model Training’, we explained that you can obtain it through either local training or cloud training.
Create a new environment using Anaconda (you can name it petoi_convert_local, for example), and execute the following commands in sequence in that new environment:
(Note: In the ‘Model Training’ section we already used Anaconda to create a petoi_train_local environment. The petoi_convert_local environment used in this section is a different environment; do not use petoi_train_local for the operations below.)
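The exact commands are not reproduced here. As a minimal sketch, assuming the Ultralytics package is used for the export step (the Python version and package choice are assumptions, not taken from the original):

# Create and activate a fresh conda environment (Python version is an example)
conda create -n petoi_convert_local python=3.9
conda activate petoi_convert_local
# Install the Ultralytics YOLOv8 tooling assumed for the export step below
pip install ultralytics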
Next, we need to quantize the model. Execute the following command:
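The export command itself is not shown here. A typical Ultralytics CLI invocation that produces a fully integer-quantized TFLite model would look like the following, where yolov8n.pt stands in for the path to your trained .pt file; depending on the Ultralytics version, INT8 export may also expect a calibration dataset passed via the data argument:

# Export the trained model to a full-integer-quantized TFLite file (model path is a placeholder)
yolo export model=yolov8n.pt format=tflite int8=True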
After the export completes, you will see a yolov8n_saved_model folder in the current directory containing the yolov8n_full_integer_quant.tflite model file.
Model optimization
Next, we will perform model optimization. If you are using a Windows computer, you need to install the Microsoft C++ Build Tools first. Mac and Linux users do not need to install them.
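Model optimization uses Arm's Vela compiler. The installation command is not reproduced in the original; assuming the ethos-u-vela package from PyPI (it builds a native extension, which is why Windows needs the C++ Build Tools), installation in the same environment would be:

# Install the Arm Vela compiler for Ethos-U NPUs (assumed package: ethos-u-vela)
pip install ethos-u-vela

After installation, run the command below to optimize the quantized .tflite model for the Ethos-U55 NPU.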
vela --accelerator-config ethos-u55-64 --config vela_config.ini --system-config My_Sys_Cfg --memory-mode My_Mem_Mode_Parent --output-dir ${Save path of the optimized model} ${The path of the tflite model that needs to be optimized}
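My_Sys_Cfg and My_Mem_Mode_Parent refer to sections inside vela_config.ini, which is not shown here. Purely as an illustration of the file's structure, a Vela configuration file looks like the sketch below; every value is a placeholder and must be replaced with the settings for your target board (for example, the configuration shipped with the Grove Vision V2 SDK):

[System_Config.My_Sys_Cfg]
; NPU core clock in Hz (placeholder value)
core_clock=400e6
; Memory types attached to the two AXI ports (placeholders)
axi0_port=Sram
axi1_port=OffChipFlash

[Memory_Mode.My_Mem_Mode_Parent]
; Placement of constant data (weights), the tensor arena, and the cache (placeholders)
const_mem_area=Axi1
arena_mem_area=Axi0
cache_mem_area=Axi0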