Deploying ML Models on Edge Devices



Baseline Models

Introduction

A baseline model in deep learning serves as a foundational reference point for evaluating the performance of more complex models.

It is typically a simple and straightforward model that provides a basic benchmark against which the performance of more advanced models can be compared. The purpose of a baseline model is not to achieve state-of-the-art accuracy but to establish a minimum level of performance that any new model must surpass to be considered effective.

In this section, we will deploy two baseline models, one on the Arduino Nano 33 BLE Sense and another on the OpenMV Cam H7. Specifically, we will implement an image classification model on the Arduino Nano 33 BLE Sense and an object detection model on the OpenMV Cam H7.

Image Classification Task

Overview

Image classification refers to the process of assigning a label or class to an input image based on its visual content.
It is a fundamental task in computer vision and machine learning. In image classification, a machine learning model analyzes the features and patterns present in an image to predict the most appropriate class or category that the image belongs to.
The model is typically trained on a labelled image dataset, learning to recognize and differentiate between different objects, scenes, or concepts.
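To make the task concrete before we touch Edge Impulse, here is a minimal, illustrative Python sketch of classifying a single preprocessed image with a TensorFlow Lite model; the model path, the random input, and the label list are placeholder assumptions rather than files from this tutorial:

```python
import numpy as np
import tensorflow as tf

# Load a (placeholder) TFLite classifier and run one image through it.
interpreter = tf.lite.Interpreter(model_path="model.tflite")  # hypothetical file
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

x = np.random.rand(1, 96, 96, 3).astype(np.float32)  # stand-in for a real image
interpreter.set_tensor(inp["index"], x)
interpreter.invoke()
probs = interpreter.get_tensor(out["index"])[0]       # one score per class

labels = ["bottle", "computer"]                       # the classes used later on
print("predicted:", labels[int(np.argmax(probs))])
```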

Procedure

Log in to Edge Impulse and create a new project named "tech4wildlife-base-classification". This takes you to the project dashboard, where you can choose to add existing data. From the cloned dsail-tech4wildlife repository, navigate to the "base-data" folder, then select all the files in the "classification" folder. Be sure to select the "Infer from filename" option so that Edge Impulse labels and splits the data accordingly.

You should see a page similar to this one.

Superb!

Let's Create an Impulse
Impulse Design is a component of Edge Impulse that enables you to design machine learning models. It offers a user-friendly interface for defining inputs and outputs, selecting algorithms, and configuring model architecture.
It supports classical and deep learning techniques, provides data collection and preprocessing tools, and handles training with performance monitoring. Users can evaluate models using validation data and deploy optimized code to microcontrollers or edge devices.
Impulse Design simplifies the process of creating, training, and deploying machine learning models, making them accessible for a wide range of applications.

The pipeline used to train on the images consists of the following blocks:

  • Image data: To begin, the images will be resized to a dimension of 96x96.
  • Image: The image data will then be preprocessed and normalized, with the additional option of reducing the colour depth. Normalizing the image data before training standardizes the pixel values across the dataset, ensuring consistent and optimal learning conditions for the model. This mitigates the influence of varying pixel intensity ranges, enabling more effective and reliable training; a rough sketch of this step follows this list.
  • Transfer learning: This is a beneficial approach in scenarios where data is scarce and deep neural networks require large amounts of data for effective feature learning. It involves utilizing the knowledge acquired from training a model on one task and applying it to a related but different task. By leveraging pre-trained models as a foundation, transfer learning enables faster and more precise learning in new domains where labelled data is limited.
  • Output features: The classes the model can predict. In this particular case, the model is designed to classify objects into two classes: "bottle" and "computer". These are the two categories the model is trained to distinguish between.
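Edge Impulse performs the resize and normalization steps internally; purely as an illustration, the equivalent preprocessing in plain Python might look like the sketch below (the preprocess helper and the use of Pillow are our own assumptions, not part of the Edge Impulse tooling):

```python
import numpy as np
from PIL import Image

def preprocess(path, size=(96, 96)):
    """Resize an image to the model's input size and scale pixels to [0, 1]."""
    img = Image.open(path).convert("RGB").resize(size)  # 96x96, RGB colour depth
    x = np.asarray(img, dtype=np.float32)
    return x / 255.0  # normalization: consistent pixel ranges across the dataset
```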

Generating Features
In this step, we save the parameters and generate features for the images while considering the colour depth, which in our case is RGB.
Saving the parameters stores the preprocessing configuration so that the same settings are applied consistently during training and inference.
Generating features involves extracting meaningful representations from the images and capturing their distinctive characteristics.

Navigate to "Image" under Impulse design.

Model training
In this step, we will utilize the MobileNetV1 model with an input image size of 96x96 and a dropout rate of 0.2.
This configuration aims to enhance the model's capacity to capture intricate features and prevent overfitting.
We will train the model for 30 training cycles (epochs), adjusting the model's parameters with a learning rate of 0.0005. This iterative training process allows the model to gradually optimize its performance and improve its ability to classify images accurately.
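Edge Impulse builds and trains this model for you; purely as a sketch of what the configuration roughly corresponds to in Keras (the alpha width multiplier and the classification head are assumptions on our part, not Edge Impulse's exact internals):

```python
import tensorflow as tf

# Transfer learning: frozen MobileNetV1 backbone plus a small trainable head.
base = tf.keras.applications.MobileNet(
    input_shape=(96, 96, 3),
    alpha=0.25,              # assumed width multiplier for the 96x96 preset
    include_top=False,
    weights="imagenet",      # knowledge transferred from ImageNet
)
base.trainable = False       # freeze pre-trained features; train only the head

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.2),                    # dropout rate from this section
    tf.keras.layers.Dense(2, activation="softmax"),  # "bottle" and "computer"
])

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=0.0005),
    loss="categorical_crossentropy",
    metrics=["accuracy"],
)
# model.fit(train_ds, validation_data=val_ds, epochs=30)  # 30 training cycles
```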


Model Evaluation
To assess the model's accuracy, go to the navigation bar on the left and select the "Model Testing" section. Once there, choose the "Classify All" option to execute the test.
After running the test, the results indicate that the model achieved an accuracy of 93.55%. This high accuracy indicates that the model performed well in classifying the test data and is a promising sign of its effectiveness in real-world scenarios.

The model has performed quite well, with 90.5% accuracy on the validation set.

The model achieved an accuracy of 93.55% on the test set.
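Edge Impulse computes these metrics for you; if you want to sanity-check them offline, a minimal sketch with made-up label indices (0 = "bottle", 1 = "computer") could look like this:

```python
import numpy as np

def confusion_matrix(y_true, y_pred, n_classes=2):
    """Rows are true classes, columns are predicted classes."""
    m = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        m[t, p] += 1
    return m

y_true = [0, 0, 1, 1, 1]  # illustrative values only, not the tutorial's test set
y_pred = [0, 1, 1, 1, 1]
m = confusion_matrix(y_true, y_pred)
print(m)
print("accuracy:", np.trace(m) / m.sum())  # 4/5 = 0.8 on this toy data
```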

Model Deployment
To proceed with the deployment:
  • Navigate to the "Deployment" section.
  • Select the Arduino Nano 33 BLE Sense as the designated board for deployment.
  • Click on the "Build" button to initiate the firmware-building process.
  • After unzipping the generated zip file, select the appropriate file based on your operating system:
    • For Linux: Use the .sh file.
    • For macOS: Use the .command file.
    • For Windows: Use the .bat file.
  • Ensure that the Arduino Nano 33 BLE Sense board is connected to your computer.
  • Double-click on the corresponding file to flash the firmware onto the connected Arduino Nano 33 BLE Sense board.
  • Open your terminal or command prompt and run the command "edge-impulse-run-impulse --debug".
  • The command will provide a URL; copy and paste it into the browser of your choice.

You should now be able to see live inferences of the model running.

This shows an accurate prediction of the model during deployment.

Object Detection Task

Overview

Object detection is a computer vision task that identifies and localizes multiple objects within an image or video.
It goes beyond image classification by generating bounding boxes and class labels for each object detected. Algorithms analyze the image, leveraging techniques like feature extraction and machine learning models to accomplish this task.
In this section, we will deploy two detection models on the OpenMV Cam H7: a bottle detector and a face detection model.

Requirements
  1. OpenMV Cam H7
  2. OpenMV IDE

Bottle Detector with OpenMV Cam H7

We'll utilize the Edge Impulse FOMO (Faster Objects, More Objects) model. FOMO does not output bounding boxes; instead, it gives you each object's location as a centroid, so the size of the object is not available.
This model is specifically trained to identify bottles within the camera's field of view.
What sets the OpenMV Cam H7 apart is its impressive processing power.
With a clock speed of 480 MHz, it outperforms the Arduino Nano 33 BLE Sense, which runs at just 64 MHz.
This substantial difference translates to a significantly higher frame rate of approximately 20 FPS when performing inference. This boost in processing speed allows for smoother and faster object detection.
The Edge Impulse public project for the bottle detector can be found here.
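The main.py deployed in the next step handles inference for you; purely for orientation, here is a sketch of how FOMO's centroid output is typically consumed on the OpenMV Cam, modelled on the structure of Edge Impulse's generated OpenMV example (module and method names such as tf.load and net.detect vary across firmware versions, so treat this as illustrative):

```python
import sensor, time, tf  # OpenMV MicroPython modules

sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.QVGA)
sensor.set_windowing((240, 240))  # square crop for the model input

net = tf.load("trained.tflite", load_to_fb=True)
labels = [line.rstrip("\n") for line in open("labels.txt")]

clock = time.clock()
while True:
    clock.tick()
    img = sensor.snapshot()
    # FOMO returns one detection list per class; each detection carries a small
    # rect whose centre is the object's centroid (no true object size).
    for i, detection_list in enumerate(net.detect(img, thresholds=[(128, 255)])):
        if i == 0:
            continue  # index 0 is the background class
        for d in detection_list:
            x, y, w, h = d.rect()
            cx, cy = x + w // 2, y + h // 2
            img.draw_circle(cx, cy, 12)
            print(labels[i], "at", (cx, cy))
    print(clock.fps(), "fps")  # roughly 20 FPS on the H7
```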

Setting Up
  1. Navigate to the base folder of the dsail-tech4wildlife repository.
  2. Locate the base-model folder within the base folder.
  3. In the object-detection-model folder, find the main.py and labels.txt files.
  4. Connect the OpenMV Cam to your computer using a USB cable.
  5. Access the USB drive that appears (e.g. drive D: on Windows); this is the OpenMV Cam's onboard storage.
  6. Copy the main.py and labels.txt files from the object-detection-model folder.
  7. Paste the main.py and labels.txt files into the USB drive.
Inference
Once main.py and labels.txt are on the drive, the script runs automatically when the camera powers up, and detected bottles are reported as centroids.

Face Detection with OpenMV Cam H7

Overview
Face detection is a computer vision technique used to identify and locate human faces within an image or video.
One common approach to face detection is using the Haar cascade classifier, which is a machine learning-based algorithm.
The Haar cascade classifier uses a series of rectangular features to detect patterns resembling facial features.
An out-of-the-box face detection system using the Haar cascade classifier is available on the OpenMV Cam H7. It is ready to use and does not require additional training or configuration.
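The repository's main.py implements this; for orientation, here is a minimal sketch in the style of OpenMV's stock face-detection example, extended with the LED behaviour described at the end of this section (the threshold and scale_factor values are illustrative):

```python
import sensor, image, time
from pyb import LED

sensor.reset()
sensor.set_contrast(3)
sensor.set_gainceiling(16)
sensor.set_framesize(sensor.HQVGA)
sensor.set_pixformat(sensor.GRAYSCALE)  # Haar cascades work on grayscale images

# Built-in frontal-face Haar cascade shipped with the OpenMV firmware.
face_cascade = image.HaarCascade("frontalface", stages=25)

red, blue = LED(1), LED(3)
while True:
    img = sensor.snapshot()
    faces = img.find_features(face_cascade, threshold=0.75, scale_factor=1.25)
    if faces:
        blue.off()
        red.on()   # a face is in the frame: LED turns red
        for r in faces:
            img.draw_rectangle(r)
    else:
        red.off()
        blue.on()  # idle: LED stays blue
```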

Deploying the out-of-the-box face detection to the OpenMV Cam H7:
  1. Navigate to the base folder of the dsail-tech4wildlife repository.
  2. Locate the open-mv-cam-h7 folder within the base folder.
  3. In the open-mv-cam-h7 folder, find the main.py file.
  4. Connect the OpenMV Cam to your computer using a USB cable.
  5. Access the USB drive that appears (e.g. drive D: on Windows); this is the OpenMV Cam's onboard storage.
  6. Copy the main.py file from the open-mv-cam-h7 folder.
  7. Paste the main.py file into the USB drive.

Replace the existing main.py with the file from the dsail-tech4wildlife repository.

To restart the OpenMV Cam, follow these steps:
  • Unplug the OpenMV Cam from its current power source or disconnect the USB cable from the computer.
  • If using a power supply, connect the appropriate power source to the OpenMV Cam, ensuring it meets the camera's requirements. Alternatively, reconnect the USB cable to the OpenMV Cam.
  • Once the OpenMV Cam is powered on, it will start in an idle state.

In the idle state, without a face in the frame, the onboard LED of the OpenMV Cam will appear blue.

When the OpenMV Cam detects a face in the frame, the onboard LED will change to red to indicate the presence of a detected face.

