# Petoi AI Vision

## Function introduction

The Petoi AI Vision Module is an embedded vision module based on the Arm Cortex-M55 CPU and the Ethos-U55 NPU. The Ethos-U55 delivers 64 to 512 GOP/s of compute to meet the growing demand for on-device machine learning.

<figure><img src="/files/KvzLHwtJrnWdIolNXbp4" alt=""><figcaption></figcaption></figure>

## Hardware setup <a href="#hardware-setup-1" id="hardware-setup-1"></a>

### BiBoard V0

<figure><img src="/files/myiA2xtdqR4C9w9sYwWj" alt="Bittle X: BiBoard V0 with AI vision module"><figcaption><p>Bittle X</p></figcaption></figure>

### BiBoard V1

<figure><img src="/files/kvbXOFbbsAZdgrcmXSxY" alt="Bittle X: BiBoard V1 with AI vision module"><figcaption><p>Bittle X</p></figcaption></figure>

<figure><img src="/files/7frVMq8R691deB39SAHb" alt="Bittle X+Arm: BiBoard V1 with AI vision module"><figcaption><p>Bittle X+Arm</p></figcaption></figure>

Secure the camera end of the module to the robot's head (inside Bittle's / Bittle X's mouth, or attached to Bittle X+Arm's robotic arm).

{% hint style="info" %}
If you use Petoi Desktop App version **V1.2.5** or earlier, you need to connect the Petoi AI vision module to the following Grove socket:\ <img src="/files/ll0WaBRIu3W07QRdQHtc" alt="Bittle X" data-size="original"><br>

<img src="/files/YL6x90xM5TtNWe2IoBHH" alt="Bittle X+Arm" data-size="original">
{% endhint %}

## Software setup <a href="#software-setup-1" id="software-setup-1"></a>

There are two methods to upload the firmware:

* Using the Petoi Desktop App
* Using the Arduino IDE

### **Petoi Desktop App**

You can use the [Firmware Uploader](https://docs.petoi.com/desktop-app/firmware-uploader#select-the-correct-options-to-upload-the-latest-firmware) within the Petoi Desktop App.

Please select the correct ***Product*** type, ***Board version***, and ***Serial port*** for your setup. \
Set the mode to **Standard**, then press the **Upgrade the Firmware** button. \
For example, Bittle, BiBoard\_V0\_2, COM5 as follows:

<figure><img src="https://docs.petoi.com/~gitbook/image?url=https%3A%2F%2F1565080149-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252F-MQ6a951Q6Jn1Zzt5Ajr-887967055%252Fuploads%252FaleqWtxk5PSH9bWe9CfF%252Fimage.png%3Falt%3Dmedia%26token%3Dc92b21ff-992f-4163-a981-86078e26eedd&#x26;width=768&#x26;dpr=4&#x26;quality=100&#x26;sign=308febb4&#x26;sv=1" alt=""><figcaption></figcaption></figure>

### **Arduino IDE**

For more details, please refer to [Upload Sketch for BiBoard](/arduino-ide/upload-sketch-for-biboard.md).

After uploading, there are two methods to ***activate/deactivate*** the camera mode:

* Serial Monitor
  * [Open the serial monitor](/arduino-ide/serial-monitor.md#biboard) and use the serial command "***XC***" to activate the camera mode.
  * Open the serial monitor and use the serial command "***Xc***" to deactivate the camera mode.
* Mobile App
  * Create [a mobile app command](https://docs.petoi.com/mobile-app/controller#create-a-single-command) called "**Activate camera**" and use the code: *`X67`*
  * Create a mobile app command called "**Deactivate camera**" and use the code: *`X99`*
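The same camera-mode tokens can also be sent from a host computer over the board's USB serial connection. The helper below is a minimal sketch of our own, not part of the Petoi firmware; only the `XC` / `Xc` tokens come from the documentation above, and the port name and baud rate in the commented pyserial snippet are assumptions to adjust for your setup.

```python
def camera_command(activate: bool) -> bytes:
    """Return the serial token that toggles the camera mode.

    "XC" activates camera mode; "Xc" deactivates it (per the commands above).
    """
    return b"XC" if activate else b"Xc"


# Hypothetical usage with pyserial (install separately; port/baud are assumptions):
#
#   import serial
#   with serial.Serial("COM5", 115200, timeout=1) as port:
#       port.write(camera_command(True))   # activate camera mode
#       port.write(camera_command(False))  # deactivate camera mode
```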

### Web debug GUI

With the web debug GUI ([SenseCraft AI Model Assistant](https://sensecraft.seeed.cc/ai/device/local/36)), you can easily upload a wide variety of co-created models and directly observe the results.

{% hint style="info" %}
The camera is already plugged in. After opening the web debug GUI, simply connect Petoi AI Vision to your computer using a Type-C cable and then click the **Connect** button.
{% endhint %}

For how to use this web debug GUI, please refer to:

<https://wiki.seeedstudio.com/grove_vision_ai_v2_software_support/#step-2-connect-the-module-and-upload-a-suitable-model>

{% hint style="info" %}
If the camera mode can't be activated, as shown below:

<img src="/files/zEFPuPNKp0nUdsvjFCPg" alt="" data-size="original">

You can use the [web debug GUI](https://sensecraft.seeed.cc/ai/#/device/local) to upgrade the camera firmware and upload the Face Detection model.

<img src="/files/RU8cHFpfjlKr57cl3j3E" alt="" data-size="original">
{% endhint %}

{% hint style="warning" %}
To run the example code (inference.ino) in the [Seeed\_Arduino\_SSCMA](https://github.com/Seeed-Studio/Seeed_Arduino_SSCMA/releases) library, add the library to your Arduino IDE by selecting **Sketch > Include Library > Add .ZIP Library** and choosing the downloaded file.

Or you can install the library in the Library Manager of the Arduino IDE as follows:

<img src="https://docs.petoi.com/~gitbook/image?url=https%3A%2F%2F1565080149-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252F-MQ6a951Q6Jn1Zzt5Ajr-887967055%252Fuploads%252FQGc7naMMVovRINWe5Dmr%252Fimage.png%3Falt%3Dmedia%26token%3D173d8e90-f3b1-4e1c-94de-da12bdd6b79f&#x26;width=768&#x26;dpr=4&#x26;quality=100&#x26;sign=661a539a&#x26;sv=2" alt="" data-size="original">\
\
![](/files/znfLyYpthnnbaepvWLP2)
{% endhint %}

## More Applications

The Petoi AI vision module also supports taking photos and transmitting images via Wi-Fi, but this requires an MCU with more computing power (such as the ESP32-S3 or ESP32-C3). For the specific development process, please refer to the wiki technical documentation:

{% embed url="https://wiki.seeedstudio.com/grove_vision_ai_v2_demo/" %}

{% embed url="https://wiki.seeedstudio.com/grove_vision_ai_v2_webcamera/" %}

{% embed url="https://wiki.seeedstudio.com/vision_ai_v2_crowd_heat_map/" %}


---

# Agent Instructions: Querying This Documentation

If you need additional information that is not directly available in this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://docs.petoi.com/extensible-modules/petoi-ai-vision.md?ask=<question>
```

The question should be specific, self-contained, and written in natural language.
The response will contain a direct answer to the question and relevant excerpts and sources from the documentation.

Use this mechanism when the answer is not explicitly present in the current page, you need clarification or additional context, or you want to retrieve related documentation sections.
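The query pattern above can be sketched with Python's standard library. Only the base URL and the `ask` parameter come from this page; the question text and the `ask_url` helper name are illustrative examples of our own.

```python
from urllib.parse import urlencode

# Base URL from the documentation above.
BASE = "https://docs.petoi.com/extensible-modules/petoi-ai-vision.md"


def ask_url(question: str) -> str:
    """Return the documentation-query URL for a natural-language question.

    urlencode percent-encodes the question so it is safe to use in a URL.
    """
    return f"{BASE}?{urlencode({'ask': question})}"


url = ask_url("Which Grove socket does the AI vision module use?")
# An actual request could then be made with urllib.request.urlopen(url).
```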
