AIRASPI Build Log
By Aron Petau
This document chronicles the process of building a custom edge computing device for real-time image recognition and object detection. The goal was to create a portable, self-contained system that could operate independently of cloud infrastructure.
Project Goals:
Build an edge device with image recognition and object detection capabilities that can process video in real-time, targeting 30fps at 720p resolution. Portability and autonomous operation are critical requirements—the device must function without an active internet connection and maintain a compact form factor suitable for installation environments. All computation happens locally on the device itself, making it a true edge computing solution with no cloud dependency.
This project was inspired by pose2art, which demonstrated the creative potential of real-time pose detection for interactive installations.
Hardware
- Raspberry Pi 5
- Raspberry Pi Camera Module v1.3
- Raspberry Pi GlobalShutter Camera
- 2x CSI FPC cable (one end needs to be the compact type to fit the Pi 5)
- Pineberry AI Hat (m.2 E key)
- Coral Dual Edge TPU (m.2 E key)
- Raspi Official 5A Power Supply
- Raspi active cooler
Setup
Primary Resources
This build wouldn't have been possible without the excellent documentation and troubleshooting guides from the community. The primary sources I relied on throughout this project were:
- coral.ai official documentation - Google's official setup guide for the M.2 Edge TPU
- Jeff Geerling's blog - Critical PCIe configuration insights for Raspberry Pi 5
- Frigate NVR documentation - Comprehensive guide for the network video recorder software
Raspberry Pi OS Installation
I used the Raspberry Pi Imager to flash the latest Raspberry Pi OS to an SD card. The OS choice is critical for camera compatibility.
Needs to be Debian Bookworm. Needs to be the full arm64 image (with desktop), otherwise you will get into camera driver hell.
Initial Configuration Settings:
Using the Raspberry Pi Imager's advanced settings, I configured the following before flashing:
- Used the default arm64 image (with desktop) - critical for camera driver compatibility
- Enabled custom settings for headless operation
- Enabled SSH for remote access
- Configured WiFi country code for legal compliance
- Set WiFi SSID and password for automatic network connection
- Configured locale settings for proper timezone and keyboard layout
- Set a custom hostname (airaspi) for easy network identification
System Update
After the initial boot, updating the system is essential. This process can take considerable time with the full desktop image, but ensures all packages are current and security patches are applied.
sudo apt update && sudo apt upgrade -y && sudo reboot
Preparing the System for Coral TPU
The Raspberry Pi 5's PCIe interface requires specific configuration to work with the Coral Edge TPU. This section was the most technically challenging, involving kernel modifications and device tree changes. A huge thanks to Jeff Geerling for documenting this process—without his detailed troubleshooting, this would have been nearly impossible.
# check kernel version
uname -a
# modify config.txt
sudo nano /boot/firmware/config.txt
While in the file, add the following lines:
kernel=kernel8.img
dtparam=pciex1
dtparam=pciex1_gen=2
Save and reboot:
sudo reboot
# check kernel version again
uname -a
- the kernel version should be different now, with a -v8 at the end
edit /boot/firmware/cmdline.txt
sudo nano /boot/firmware/cmdline.txt
- add pcie_aspm=off before rootwait
sudo reboot
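A quick sanity check after this reboot: lspci should now list devices on the external PCIe bus. Whether the Coral already shows up at this stage seems to vary; the device tree fix below is still needed either way before the driver will work.
lspci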
Modifying the Device Tree
Initial Script Attempt (Deprecated)
Initially, there was an automated script available that was supposed to handle the device tree modifications. However, this script proved problematic and caused issues during my build.
Maybe this script is the issue? I will try again without it.
curl https://gist.githubusercontent.com/dataslayermedia/714ec5a9601249d9ee754919dea49c7e/raw/32d21f73bd1ebb33854c2b059e94abe7767c3d7e/coral-ai-pcie-edge-tpu-raspberrypi-5-setup | sh
Yes, it was the problematic script. I left a comment on the original gist documenting the issue.
Manual Device Tree Modification (Recommended)
Instead of relying on the automated script, I followed Jeff Geerling's manual approach. This method gives you complete control over the process and helps understand what's actually happening under the hood.
In the meantime, the script has been updated and is now recommended again.
The device tree modification process involves backing up the current device tree blob (DTB), decompiling it to a readable format, editing the MSI parent reference to fix PCIe compatibility issues, and then recompiling it back to binary format. Here's the step-by-step process:
1. Back up and Decompile the Device Tree
# Back up the current dtb
sudo cp /boot/firmware/bcm2712-rpi-5-b.dtb /boot/firmware/bcm2712-rpi-5-b.dtb.bak
# Decompile the current dtb (ignore warnings)
dtc -I dtb -O dts /boot/firmware/bcm2712-rpi-5-b.dtb -o ~/test.dts
# Edit the file
nano ~/test.dts
# Change the line: msi-parent = <0x2f>; (under `pcie@110000`)
# To: msi-parent = <0x66>;
# Then save the file.
# Recompile the dtb and move it back to the firmware directory
dtc -I dts -O dtb ~/test.dts -o ~/test.dtb
sudo mv ~/test.dtb /boot/firmware/bcm2712-rpi-5-b.dtb
# Reboot for changes to take effect
sudo reboot
Note: msi-parent seems to carry the value <0x2c> nowadays; figuring that out cost me a few hours.
2. Verify the Changes
After rebooting, check that the Coral TPU is recognized by the system:
lspci -nn | grep 089a
You should see output similar to: 0000:01:00.0 System peripheral [0880]: Global Unichip Corp. Coral Edge TPU [1ac1:089a]
Installing the Apex Driver
With the device tree properly configured, the next step is installing Google's Apex driver for the Coral Edge TPU. This driver enables communication between the operating system and the TPU hardware.
Following the official instructions from coral.ai:
echo "deb https://packages.cloud.google.com/apt coral-edgetpu-stable main" | sudo tee /etc/apt/sources.list.d/coral-edgetpu.list
curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
sudo apt-get update
sudo apt-get install gasket-dkms libedgetpu1-std
sudo sh -c "echo 'SUBSYSTEM==\"apex\", MODE=\"0660\", GROUP=\"apex\"' >> /etc/udev/rules.d/65-apex.rules"
sudo groupadd apex
sudo adduser $USER apex
sudo reboot
This sequence:
- Adds Google's package repository and GPG key
- Installs the gasket DKMS module (kernel driver) and Edge TPU runtime library
- Creates udev rules for device permissions
- Creates an apex group and adds your user to it
- Reboots to load the driver
After the reboot, verify the installation:
lspci -nn | grep 089a
This should display the connected Coral TPU as a PCIe device.
Next, confirm the device node exists with proper permissions:
ls -l /dev/apex_0
If the output shows /dev/apex_0 with appropriate group permissions, the installation was successful. If not, review the udev rules and group membership.
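If the device node is missing or the permissions look off, a few quick checks usually narrow it down. A small troubleshooting sketch, assuming the udev rule and apex group from the steps above:
# confirm your user actually ended up in the apex group (log out and back in after adding it)
groups $USER
# confirm the kernel modules are loaded
lsmod | grep -E "apex|gasket"
# reload and re-trigger the udev rules, then look for the device node again
sudo udevadm control --reload-rules && sudo udevadm trigger
ls -l /dev/apex_0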
Testing with Example Models
To verify the TPU is functioning correctly, we'll use Google's example classification script with a pre-trained MobileNet model:
# Install Python packages
sudo apt-get install python3-pycoral
# Download example code and models
mkdir -p ~/coral && cd ~/coral
git clone https://github.com/google-coral/pycoral.git
cd pycoral
# Download the model, labels, and test image used by this example
bash examples/install_requirements.sh classify_image.py
# Run bird classification example
python3 examples/classify_image.py \
--model test_data/mobilenet_v2_1.0_224_inat_bird_quant_edgetpu.tflite \
--labels test_data/inat_bird_labels.txt \
--input test_data/parrot.jpg
The output should show inference results with confidence scores, confirming the Edge TPU is working correctly.
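pycoral also ships a detection example, which is closer to what Frigate will be doing later. As an extra sanity check, a sketch along the same lines as the classification example above:
# fetch the SSD MobileNet detection model, COCO labels and test image
bash examples/install_requirements.sh detect_image.py
# run object detection on the Edge TPU
python3 examples/detect_image.py \
--model test_data/ssd_mobilenet_v2_coco_quant_postprocess_edgetpu.tflite \
--labels test_data/coco_labels.txt \
--input test_data/grace_hopper.bmp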
Docker Installation
Docker provides containerization for the applications we'll be running (Frigate, MediaMTX, etc.). This keeps dependencies isolated and makes deployment much cleaner.
Install Docker using the official convenience script from docker.com:
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh
sudo usermod -aG docker $USER
After installation, log out and back in for group membership changes to take effect.
Configure Docker to start automatically on boot:
sudo systemctl enable docker.service
sudo systemctl enable containerd.service
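Before moving on, it's worth confirming that Docker runs without sudo and will come back after a reboot. A quick sanity check (after logging out and back in):
# should pull and run the test image without sudo
docker run --rm hello-world
# both services should report "enabled"
systemctl is-enabled docker containerd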
Test the Edge TPU (Optional)
To verify the Edge TPU works inside a Docker container, we can build a test image. This is particularly useful if you plan to use the TPU with containerized applications.
Create a test directory and Dockerfile:
mkdir coraltest
cd coraltest
sudo nano Dockerfile
Into the new file, paste:
FROM debian:10
WORKDIR /home
ENV HOME /home
RUN cd ~
RUN apt-get update
RUN apt-get install -y git nano python3-pip python-dev pkg-config wget usbutils curl
RUN echo "deb https://packages.cloud.google.com/apt coral-edgetpu-stable main" \
| tee /etc/apt/sources.list.d/coral-edgetpu.list
RUN curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
RUN apt-get update
RUN apt-get install -y edgetpu-examples
RUN apt-get install -y libedgetpu1-std
CMD /bin/bash
Build and run the test container, passing through the Coral device:
# build the docker container
docker build -t "coral" .
# run the docker container
docker run -it --device /dev/apex_0:/dev/apex_0 coral /bin/bash
Inside the container, run an inference example:
# run an inference example from within the container
python3 /usr/share/edgetpu/examples/classify_image.py --model /usr/share/edgetpu/examples/models/mobilenet_v2_1.0_224_inat_bird_quant_edgetpu.tflite --label /usr/share/edgetpu/examples/models/inat_bird_labels.txt --image /usr/share/edgetpu/examples/images/bird.bmp
You should see inference results with confidence values from the Edge TPU. If not, try a clean restart of the system.
Portainer (Optional)
Portainer provides a web-based GUI for managing Docker containers, images, and volumes. While not required, it makes container management significantly more convenient.
This is optional, gives you a browser GUI for your various docker containers.
Install Portainer:
docker volume create portainer_data
docker run -d -p 8000:8000 -p 9443:9443 --name portainer --restart=always -v /var/run/docker.sock:/var/run/docker.sock -v portainer_data:/data portainer/portainer-ce:latest
Access Portainer in your browser and set an admin password:
- Navigate to: https://airaspi.local:9443
VNC Setup (Optional)
VNC provides remote desktop access to your headless Raspberry Pi. This is particularly useful for testing cameras and debugging visual issues without connecting a physical monitor.
This is optional, useful to test your cameras on your headless device. You could attach a monitor, but I find VNC more convenient.
Enable VNC through the Raspberry Pi configuration tool:
sudo raspi-config
Navigate to: Interface Options → VNC → Enable
Connecting through VNC Viewer
Install RealVNC Viewer on your computer (available for macOS, Windows, and Linux).
Connect using the address: airaspi.local:5900
You'll be prompted for your Raspberry Pi username and password. Once connected, you'll have full remote desktop access for testing cameras and debugging.
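If you prefer to script this instead of clicking through the menu, raspi-config also has a non-interactive mode. A sketch, assuming the do_vnc option present in current Raspberry Pi OS releases:
# enable the VNC server without the menu UI (0 = enable, 1 = disable)
sudo raspi-config nonint do_vnc 0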
Frigate NVR Setup
Frigate is a complete Network Video Recorder (NVR) with real-time object detection powered by the Coral Edge TPU. It's the heart of this edge AI system.
Docker Compose Configuration
This setup uses Docker Compose to define the Frigate container with all necessary configurations. If you're using Portainer, you can add this as a custom stack.
Important: you need to change the paths to your own paths.
version: "3.9"
services:
  frigate:
    container_name: frigate
    privileged: true # this may not be necessary for all setups
    restart: unless-stopped
    image: ghcr.io/blakeblackshear/frigate:stable
    shm_size: "64mb" # update for your cameras based on the calculation in the Frigate docs
    devices:
      - /dev/apex_0:/dev/apex_0 # passes a PCIe Coral, follow driver instructions here https://coral.ai/docs/m2/get-started/#2a-on-linux
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - /home/aron/frigate/config.yml:/config/config.yml # replace with your config file
      - /home/aron/frigate/storage:/media/frigate # replace with your storage directory
      - type: tmpfs # Optional: 1GB of memory, reduces SSD/SD Card wear
        target: /tmp/cache
        tmpfs:
          size: 1000000000
    ports:
      - "5000:5000"
      - "8554:8554" # RTSP feeds
      - "8555:8555/tcp" # WebRTC over tcp
      - "8555:8555/udp" # WebRTC over udp
    environment:
      FRIGATE_RTSP_PASSWORD: "******"
Key configuration points in this Docker Compose file:
- Privileged mode and device mappings: Required for accessing hardware (TPU, cameras)
- Shared memory size: Allocated for processing video frames efficiently
- Port mappings: Exposes Frigate's web UI (5000) and RTSP streams (8554)
- Volume mounts: Persists recordings, config, and database
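With the compose file in place, bring the stack up and watch the logs; Frigate should report the Edge TPU detector starting without errors. A minimal sketch, assuming the compose file lives in ~/frigate:
cd ~/frigate
# start (or update) the stack in the background
docker compose up -d
# follow the logs and look for the edgetpu detector coming up
docker logs -f frigate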
Frigate Configuration File
Frigate requires a YAML configuration file to define cameras, detectors, and detection zones. Create this file at the path you specified in the docker-compose file (e.g., /home/aron/frigate/config.yml).
This is necessary just once. Afterwards, you will be able to change the config in the GUI.
Here's a working configuration using the Coral TPU:
mqtt:
  enabled: False
detectors:
  cpu1:
    type: cpu
    num_threads: 3
  coral_pci:
    type: edgetpu
    device: pci
cameras:
  cam1: # <++++++ Name the camera
    ffmpeg:
      hwaccel_args: preset-rpi-64-h264
      inputs:
        - path: rtsp://192.168.1.58:8900/cam1
          roles:
            - detect
  cam2: # <++++++ Name the camera
    ffmpeg:
      hwaccel_args: preset-rpi-64-h264
      inputs:
        - path: rtsp://192.168.1.58:8900/cam2
          roles:
            - detect
detect:
  enabled: True # <+++- disable detection until you have a working camera feed
  width: 1280 # <+++- update for your camera's resolution
  height: 720 # <+++- update for your camera's resolution
This configuration:
- Disables MQTT: simplifies setup for local-only operation
- Defines two detectors: a Coral TPU detector (coral_pci) and a CPU fallback (cpu1)
- Uses the default detection model: Frigate ships with a pre-trained model
- Configures two cameras: both set to 1280x720 resolution
- Uses hardware acceleration: preset-rpi-64-h264 for the Raspberry Pi 5
- Detection: enable it only once the camera feeds are working properly
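After editing the config, restart the container so Frigate picks up the changes, and check the detector stats; the inference speed reported for the Coral should be on the order of 10 ms. A sketch, assuming the container name and port from the compose file above:
# reload the new config
docker restart frigate
# query Frigate's stats endpoint and look at detectors -> coral_pci -> inference_speed
curl http://airaspi.local:5000/api/stats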
MediaMTX Setup
MediaMTX is a real-time media server that handles streaming from the Raspberry Pi cameras to Frigate. It's necessary because Frigate doesn't directly support libcamera (the modern Raspberry Pi camera stack).
Install MediaMTX directly on the system (not via Docker - the Docker version has compatibility issues with libcamera).
Double-check the chip architecture when downloading - this caused me significant headaches during setup.
Download and install MediaMTX:
mkdir mediamtx
cd mediamtx
wget https://github.com/bluenviron/mediamtx/releases/download/v1.5.0/mediamtx_v1.5.0_linux_arm64v8.tar.gz
tar xzvf mediamtx_v1.5.0_linux_arm64v8.tar.gz && rm mediamtx_v1.5.0_linux_arm64v8.tar.gz
MediaMTX Configuration
Edit the mediamtx.yml
file to configure camera streams. The configuration below uses rpicam-vid
(Raspberry Pi's modern camera tool) piped through FFmpeg to create RTSP streams.
Add the following to the paths
section in mediamtx.yml
:
paths:
  cam1:
    runOnInit: bash -c 'rpicam-vid -t 0 --camera 0 --nopreview --codec yuv420 --width 1280 --height 720 --inline --listen -o - | ffmpeg -f rawvideo -pix_fmt yuv420p -s:v 1280x720 -i /dev/stdin -c:v libx264 -preset ultrafast -tune zerolatency -f rtsp rtsp://localhost:$RTSP_PORT/$MTX_PATH'
    runOnInitRestart: yes
  cam2:
    runOnInit: bash -c 'rpicam-vid -t 0 --camera 1 --nopreview --codec yuv420 --width 1280 --height 720 --inline --listen -o - | ffmpeg -f rawvideo -pix_fmt yuv420p -s:v 1280x720 -i /dev/stdin -c:v libx264 -preset ultrafast -tune zerolatency -f rtsp rtsp://localhost:$RTSP_PORT/$MTX_PATH'
    runOnInitRestart: yes
This configuration:
- cam1 and cam2: define two camera paths
- rpicam-vid: captures YUV420 video from the Raspberry Pi cameras
- ffmpeg: transcodes the raw video into an H.264 RTSP stream
- runOnInitRestart: yes: automatically restarts the stream if it fails
Port Configuration
Change the default RTSP port to avoid conflicts with Frigate:
In mediamtx.yml, change:
rtspAddress: :8554
To:
rtspAddress: :8900
Otherwise there will be a port conflict with Frigate.
Start MediaMTX
Run MediaMTX in the foreground to verify it's working:
./mediamtx
If there are no errors, verify your streams using VLC or another RTSP client:
rtsp://airaspi.local:8900/cam1
rtsp://airaspi.local:8900/cam2
Note: Default RTSP port is 8554, but we changed it to 8900 in the config.
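Once the streams look good, running MediaMTX by hand in a terminal gets old quickly. A minimal systemd service keeps it running in the background and brings it up on boot; this is a sketch, assuming the binary and config live in /home/aron/mediamtx as set up above:
sudo tee /etc/systemd/system/mediamtx.service > /dev/null <<'EOF'
[Unit]
Description=MediaMTX RTSP server
After=network-online.target

[Service]
User=aron
WorkingDirectory=/home/aron/mediamtx
ExecStart=/home/aron/mediamtx/mediamtx /home/aron/mediamtx/mediamtx.yml
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
sudo systemctl daemon-reload
sudo systemctl enable --now mediamtx.service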
Current Status and Performance
What's Working
The system successfully streams from both cameras at 30fps and 720p resolution. The Coral Edge TPU performs object detection with minimal latency - the TPU itself is not breaking a sweat, maintaining consistently high performance.
According to Frigate documentation, the TPU can handle up to 10 cameras, so there's significant headroom for expansion.
Current Issues
However, there are several significant problems hampering the system:
1. Frigate Display Limitations
Frigate limits the display FPS to 5, which is depressing to watch, especially since the TPU doesn't even break a sweat. The hardware is clearly capable of much more, but software limitations hold it back.
2. Stream Stability Problems
The stream is completely erratic and drops frames constantly. I've sometimes observed detect FPS as low as 0.2, but the TPU speed should definitely not be the bottleneck here. One potential solution might be to attach the cameras to a separate device and stream from there.
3. Coral Software Abandonment
The biggest issue is that Google seems to have abandoned the Coral ecosystem, even though they just released new hardware for it. Their most recent Python build supports only Python 3.9.
Specifically, pycoral appears to be the problem - without a decent update, I'm confined to Debian 10 with Python 3.7.3. That sucks. There are custom wheels available, but nothing that seems plug-and-play.
This severely limits the ability to use modern software and libraries with the system.
Reflections and Lessons Learned
Hardware Decisions
The M.2 E Key Choice
The decision to go for the M.2 E key version to save money, instead of spending more on the USB version, was a huge mistake. Please do yourself a favor and spend the extra 40 bucks.
Technically, it's probably faster and better for continuous operation, but I have yet to feel the benefit of that. The USB version would have offered far more flexibility and easier debugging.
Future Development
Several improvements and experiments are planned to enhance this system:
Documentation and Visual Aids
- Add images and screenshots to this build log to make it easier to follow
Mobile Stream Integration
- Check whether vdo.ninja is a viable way to add mobile streams, enabling smartphone camera integration and evaluation
MediaMTX libcamera Support
- Reach out to the MediaMTX developers about bumping libcamera support, which would eliminate the current rpicam-vid workaround. I suspect there's quite a lot of performance lost in the current pipeline.
Frigate Configuration Refinement
- Tweak the Frigate config to enable snapshots and potentially build an image/video database for training custom models later
Storage Expansion
- Worry about attaching an external SSD and saving the video files on it for long-term storage and analysis
Data Export Capabilities
- Find a way to export the landmark points from Frigate, potentially sending them via OSC (like in my pose2art project) for creative applications
Dual TPU Access
- Find a different HAT that lets me access the other TPU - I have the dual version, but can currently only access 1 of the 2 TPUs due to hardware restrictions