Baseline Models

A baseline model serves as a foundational reference point. It establishes a minimum level of performance that any advanced model must surpass to be considered effective.

In this section, we will deploy two baseline models: an image classification model on the Arduino Nano 33 BLE Sense, and an object detection model on the OpenMV Cam H7.


Part 1: Image Classification

Image classification involves assigning a label to an input image based on its content. We will train a model to distinguish between two classes: "bottle" and "computer".
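To make the task concrete, here is a tiny illustrative sketch (not the deployed model) of what the classifier's output stage does: the model emits one confidence score per class, and the predicted label is simply the class with the highest score.

```python
# Illustrative only: mirrors how per-class scores map to a label on-device.
LABELS = ["bottle", "computer"]  # the two classes in this tutorial

def classify(scores):
    """Return (label, confidence) for a list of per-class scores."""
    best = max(range(len(scores)), key=lambda i: scores[i])
    return LABELS[best], scores[best]

label, conf = classify([0.08, 0.92])
print(label, conf)  # → computer 0.92
```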

1. Project Setup

Log in to Edge Impulse and create a new project named tech4wildlife-base-classification.

Using the Uploader or Dashboard, upload the files from the dsail-tech4wildlife/base-data/classification folder. Make sure you let Edge Impulse infer the labels from the filenames.

Data Upload Dashboard in Edge Impulse

2. Create Impulse

Design the processing pipeline:

  • Image Data: Resize images to 96x96 pixels.
  • Processing Block: Use "Image", which normalizes the raw pixel values.
  • Learning Block: Use "Transfer Learning". This fine-tunes a pre-trained MobileNet model, which works well on small datasets.
Impulse Design Pipeline
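Edge Impulse performs the resizing and normalization internally, but the idea can be sketched in a few lines of plain Python. This is an assumption-laden toy (nearest-neighbour sampling; Edge Impulse may use a different interpolation), shown only to make the preprocessing concrete:

```python
def resize_nearest(img, out_w=96, out_h=96):
    """Nearest-neighbour resize of an image given as a 2-D list of pixel rows."""
    in_h, in_w = len(img), len(img[0])
    return [[img[y * in_h // out_h][x * in_w // out_w]
             for x in range(out_w)] for y in range(out_h)]

def normalize(img):
    """Scale 0-255 pixel values into the 0-1 range expected by the network."""
    return [[p / 255.0 for p in row] for row in img]
```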

3. Feature Generation & Training

Navigate to the Image tab, save parameters (RGB color depth), and generate features. Then, go to the Transfer Learning tab.

Training Settings:

  • Model: MobileNetV1 (or V2) 96x96 0.35
  • Training Cycles: 30
  • Learning Rate: 0.0005
  • Data Augmentation: Enabled (helps prevent overfitting)
Training the Model

4. Evaluation

Navigate to Model Testing and click "Classify All". Our baseline achieved roughly 93.5% accuracy, a strong starting point.

Confusion Matrix on the Test Set

5. Deployment to Arduino

  1. Go to the Deployment tab.
  2. Search for Arduino Nano 33 BLE Sense.
  3. Click Build. This downloads a firmware zip file.
  4. Connect your Arduino to the PC via USB.
  5. Unzip the file and run the flash script (install_linux.sh, install_mac.command, or install_windows.bat).
  6. Once flashed, open a terminal and run:
    edge-impulse-run-impulse --debug

Open the provided URL to see live classification results!

Live interface showing real-time classification

Part 2: Object Detection

Unlike classification, object detection identifies multiple objects and their locations within a frame. We will use the OpenMV Cam H7 because of its more powerful processor (a 480 MHz Cortex-M7).

1. Bottle Detector (FOMO)

We will use FOMO (Faster Objects, More Objects), a lightweight detection model that provides object centroids.
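FOMO's output is not a set of bounding boxes but a coarse grid of per-cell class probabilities, from which object centroids are derived. The sketch below (illustrative assumptions: an 8-pixel grid cell and a simple threshold; real post-processing also merges adjacent activations) shows the conversion:

```python
def fomo_centroids(heatmap, cell_px=8, threshold=0.5):
    """Convert a FOMO-style probability grid into (x, y, score) centroids.
    Each grid cell covers cell_px x cell_px input pixels; merging of
    neighbouring activations is omitted here for brevity."""
    hits = []
    for gy, row in enumerate(heatmap):
        for gx, p in enumerate(row):
            if p >= threshold:
                # Centre of the activated cell, in input-image pixels.
                hits.append(((gx + 0.5) * cell_px, (gy + 0.5) * cell_px, p))
    return hits
```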

Steps:

  1. Connect the OpenMV Cam H7 via USB. It will mount as a USB drive.
  2. In the dsail-tech4wildlife repo, navigate to base-model/object-detection-model.
  3. Copy main.py and labels.txt.
  4. Paste them into the OpenMV USB drive (replacing existing files).
  5. Eject safely and reset the camera.

2. Face Detection (Haar Cascades)

The OpenMV Cam includes a built-in Haar Cascade face detector that requires no training.

  1. Navigate to dsail-tech4wildlife/open-mv-cam-h7.
  2. Copy the main.py file provided there.
  3. Paste it into the OpenMV USB drive.

The onboard LED will act as a status indicator:

  • Blue LED: idle / no face detected
  • Red LED: face detected
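The provided main.py implements this logic. A minimal OpenMV MicroPython sketch along the same lines (not the actual file, runnable only on the camera itself, and with the cascade parameters chosen as plausible defaults rather than the repo's values) might look like:

```python
import sensor, image, pyb

RED, BLUE = pyb.LED(1), pyb.LED(3)       # OpenMV onboard LED channels

sensor.reset()
sensor.set_pixformat(sensor.GRAYSCALE)   # Haar cascades run on grayscale
sensor.set_framesize(sensor.QQVGA)

# Built-in frontal-face cascade shipped with the OpenMV firmware.
face_cascade = image.HaarCascade("frontalface", stages=25)

while True:
    img = sensor.snapshot()
    faces = img.find_features(face_cascade, threshold=0.75, scale_factor=1.25)
    if faces:
        BLUE.off(); RED.on()             # red = face detected
    else:
        RED.off(); BLUE.on()             # blue = idle
```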