A baseline model serves as a foundational reference point. It establishes a minimum level of performance that any advanced model must surpass to be considered effective.
In this section, we will deploy two baseline models: an image classification model on the Arduino Nano 33 BLE Sense, and an object detection model on the OpenMV Cam H7.
Image classification involves assigning a label to an input image based on its content. We will train a model to distinguish between two classes: "bottle" and "computer".
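Conceptually, a classifier outputs a score per class and the label with the highest score wins. As a minimal illustrative sketch (not the Edge Impulse runtime), assuming the two classes from this section:

```python
# Toy post-processing for a two-class image classifier: pick the
# class with the highest score. The class names match this section;
# the scores here are made-up example values.
CLASSES = ["bottle", "computer"]

def predict_label(scores):
    """Return the class name whose score is highest."""
    best = max(range(len(scores)), key=lambda i: scores[i])
    return CLASSES[best]

print(predict_label([0.12, 0.88]))  # -> computer
```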
Log in to Edge Impulse and create a new project named
tech4wildlife-base-classification.
Using the Uploader or Dashboard, upload the files from the
dsail-tech4wildlife/base-data/classification folder. Ensure you let Edge Impulse
infer the label from the filename.
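For intuition, label inference from filenames typically means the text before the first dot is treated as the class name (so a file like bottle.01.jpg is labeled "bottle"). A small sketch of that assumed convention:

```python
import os

def infer_label(filename):
    # Assumed convention: the label is everything before the first
    # dot in the base filename, e.g. "bottle.01.jpg" -> "bottle".
    return os.path.basename(filename).split(".")[0]

print(infer_label("bottle.01.jpg"))   # -> bottle
print(infer_label("computer.7.png")) # -> computer
```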
Design the processing pipeline: set the image width and height to 96x96. Then navigate to the Image tab, save the parameters (RGB color depth), and generate features. Finally, go to the Transfer Learning tab.
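The image block essentially resizes every sample to 96x96 and flattens the RGB pixels into a normalised feature vector. A toy stand-in for that step (nearest-neighbour resize; not Edge Impulse's actual implementation):

```python
def resize_nearest(img, out_w=96, out_h=96):
    """Nearest-neighbour resize of an image given as a list of rows
    of (r, g, b) tuples -- a toy stand-in for the Image block."""
    in_h, in_w = len(img), len(img[0])
    return [
        [img[y * in_h // out_h][x * in_w // out_w] for x in range(out_w)]
        for y in range(out_h)
    ]

def to_features(img):
    """Flatten RGB pixels into floats normalised to [0, 1]."""
    return [c / 255.0 for row in img for px in row for c in px]

tiny = [[(255, 0, 0)] * 4 for _ in range(4)]  # 4x4 solid-red image
feats = to_features(resize_nearest(tiny))
print(len(feats))  # 96 * 96 * 3 = 27648
```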
Training Settings:
Navigate to Model Testing and click "Classify All". Our baseline achieved ~93.5% accuracy, an excellent starting point.
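The accuracy that Model Testing reports is simply the fraction of test samples whose predicted label matches the true label. A quick sketch (the sample data below is invented for illustration):

```python
def accuracy(predicted, actual):
    """Fraction of predictions that match the true labels."""
    correct = sum(p == a for p, a in zip(predicted, actual))
    return correct / len(actual)

# Made-up example: 3 of 4 test samples classified correctly.
preds = ["bottle", "computer", "bottle", "bottle"]
truth = ["bottle", "computer", "computer", "bottle"]
print(accuracy(preds, truth))  # -> 0.75
```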
Flash the firmware by running the installation script for your operating system (install_linux.sh, install_mac.command, or install_windows.bat). Then start live inference with edge-impulse-run-impulse --debug.
Open the provided URL to see live classification results!
Unlike classification, object detection identifies multiple objects and their locations within a frame. We will use the OpenMV Cam H7 due to its higher processing power (480 MHz).
We will use FOMO (Faster Objects, More Objects), a lightweight detection model that provides object centroids.
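FOMO divides the frame into a grid and predicts, per cell, the probability that an object's centre lies there; cells above a threshold become centroids. A simplified single-class sketch of that post-processing (real FOMO also merges adjacent activated cells; the grid values below are invented):

```python
def fomo_centroids(heatmap, threshold=0.5, cell=8):
    """Toy FOMO-style post-processing. heatmap is a 2D grid of
    per-cell confidences; each activated cell yields an (x, y, conf)
    centroid at the centre of that cell in pixel coordinates."""
    found = []
    for gy, row in enumerate(heatmap):
        for gx, conf in enumerate(row):
            if conf >= threshold:
                found.append((gx * cell + cell // 2,
                              gy * cell + cell // 2,
                              conf))
    return found

grid = [[0.0, 0.0],
        [0.9, 0.0]]           # one confident cell, bottom-left
print(fomo_centroids(grid))   # -> [(4, 12, 0.9)]
```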
Steps:
1. From the dsail-tech4wildlife repo, navigate to base-model/object-detection-model.
2. Copy the main.py and labels.txt files to the OpenMV Cam.

The OpenMV Cam also includes a built-in Haar Cascade face detector that requires no training.
Navigate to dsail-tech4wildlife/open-mv-cam-h7 and run the main.py file provided there. The onboard LED will act as a status indicator:
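The actual LED colour scheme is defined in the provided main.py. As a purely illustrative sketch of how such a status indicator might be driven (the colour-to-state mapping below is an assumption, not the project's scheme):

```python
# Hypothetical status-to-colour mapping for a detection loop; the
# real scheme lives in the project's main.py.
STATUS_COLOURS = {"idle": "blue", "detecting": "green", "error": "red"}

def led_colour(detections, error=False):
    """Pick an LED colour from the current detection state."""
    if error:
        return STATUS_COLOURS["error"]
    return STATUS_COLOURS["detecting" if detections else "idle"]

print(led_colour([]))          # -> blue  (nothing detected)
print(led_colour([(4, 12)]))   # -> green (object in frame)
```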