The Internet of Things (IoT) is about connecting physical objects — sensors, machines, vehicles, appliances — so they can collect and share data. AI and Machine Learning (ML) are what make that data useful.
Rather than just reporting raw measurements like temperature, vibration, heart rate, or traffic, AI/ML can learn patterns from that data and make decisions: predict a failure, detect something abnormal, optimize energy use, or trigger an action automatically.
So in short: IoT gives you data from the real world, AI/ML gives that data “brains.”
Adding AI/ML to IoT turns systems from reactive to proactive. A normal IoT sensor might tell you “motor temperature is high right now.”
An ML-powered IoT system can say “this motor will likely overheat in 36 hours, schedule maintenance now.” That means less downtime in factories, smarter energy grids, safer cars, healthier patients, smoother logistics.
You also get automation at scale: thousands or millions of devices making local decisions without a human in the loop, which is essential in smart homes, smart cities, agriculture, and industry.
The typical flow is: sensors generate data → data is processed either in the cloud or on the “edge” (right on the device or nearby gateway) → ML models analyze it → actions/alerts are created.
Cloud is good for heavy training and seeing global trends across many devices. Edge AI is good for speed, privacy, and low bandwidth (for example, a camera that detects anomalies locally and only sends an alert, not the full video feed).
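To make the edge pattern concrete, here is a toy sketch of a device-side loop that keeps a small rolling window of readings, scores new samples locally, and only transmits an alert instead of the raw stream. This is purely illustrative (not from this project); the sensor read, the uplink function, and the thresholds are all made up.

# Toy edge-side anomaly filter (illustrative sketch only; names and thresholds are made up).
# Keep a rolling window of readings, score new samples locally, and transmit only alerts.
from collections import deque
from statistics import mean, stdev
import random

WINDOW = 50        # samples kept on the device
THRESHOLD = 3.0    # z-score considered "abnormal"

def read_sensor():
    # Placeholder for a real sensor read
    return 25.0 + random.gauss(0, 0.5)

def send_alert(value, score):
    # Placeholder for a real uplink (MQTT, HTTP, LoRa, ...)
    print(f"ALERT: value={value:.2f} z={score:.1f}")

window = deque(maxlen=WINDOW)
while True:
    v = read_sensor()
    if len(window) == WINDOW:
        mu, sigma = mean(window), stdev(window)
        z = abs(v - mu) / sigma if sigma > 0 else 0.0
        if z > THRESHOLD:
            send_alert(v, z)   # only the alert leaves the device, not the raw data
    window.append(v)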
The main challenges are data quality, security, battery/compute limits on tiny devices, and keeping models updated in the field. The main payoff is continuous, intelligent, real-world control.
Above is an example of a TensorFlow Lite Micro workflow. TensorFlow Lite Micro is a framework designed to run machine learning models on DSPs, microcontrollers, and other devices with limited memory.
We're going to turn your Sparrow node into a tiny self-contained “did someone shake me?” AI. Here's the full plan end to end:
You will need to create a new PlatformIO project with this platformio.ini.
[env:sparrow]
; Community fork of espressif32 platform w/ Arduino core 3.x for ESP32-C6
platform = https://github.com/pioarduino/platform-espressif32/releases/download/54.03.20/platform-espressif32.zip
board = esp32-c6-devkitm-1
framework = arduino
monitor_speed = 115200
build_flags =
    -D ARDUINO_USB_MODE=1
    -D ARDUINO_USB_CDC_ON_BOOT=1
    -D ESP32_C6_env
; We'll need Wire (built-in), math, etc. For the RGB LED we'll pull Adafruit NeoPixel.
lib_deps =
    adafruit/Adafruit NeoPixel @ ^1.12.0
We will need to stream accelerometer samples and a live label (idle vs. shake) over Serial as CSV. For this, you will need to include the following files in your project:
After adding these files to your project, build and upload. The program will start logging accelerometer samples on the serial interface. You will need to label these samples and save them on your machine.
We will need to capture two datasets for training, one for each state of the device: one dataset with accelerometer data recorded while the board sits still, and another recorded while the board is being shaken.
Download the Python script for dataset collection here.
Also, close all Visual Studio windows before running the script (Visual Studio will keep the serial port open and the script will fail)!
We’ll do this:
Put the board on the table and keep it still. Run the script so that it logs with the label stationary:
> python3 collect_data.py -stationary --port /dev/ttyUSB0 --baud 115200
Collect about 10 seconds of data, then hit Ctrl+C. The script will automatically save the collected data into raw_data_label0.csv.
Hold the board in your hand and actually shake/jolt/tap it as if it were being moved around. Run the same script again, this time logging with the label shaking:
> python3 collect_data.py -shaking --port /dev/ttyUSB0 --baud 115200
Collect about 10 seconds of data, then hit Ctrl+C. The script will automatically save the collected data into raw_data_label1.csv.
Then, we’ll merge the two CSVs in the next step.
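For context, collect_data.py is essentially a small serial logger. Below is a minimal sketch of the same idea, assuming pyserial and an "ax,ay,az" line format coming from the board; the real script you downloaded may differ in its flags, CSV columns, and file handling.

# Rough sketch of a serial CSV logger in the spirit of collect_data.py (assumed, not the real script).
# It reads "ax,ay,az" lines from the board, appends the label, and writes a CSV until Ctrl+C.
import argparse
import serial   # pyserial

parser = argparse.ArgumentParser()
parser.add_argument("-stationary", action="store_true", help="log with label 0")
parser.add_argument("-shaking", action="store_true", help="log with label 1")
parser.add_argument("--port", default="/dev/ttyUSB0")
parser.add_argument("--baud", type=int, default=115200)
args = parser.parse_args()

label = 1 if args.shaking else 0
out_path = f"raw_data_label{label}.csv"

with serial.Serial(args.port, args.baud, timeout=1) as ser, open(out_path, "w") as out:
    out.write("ax,ay,az,label\n")
    try:
        while True:
            line = ser.readline().decode(errors="ignore").strip()
            if line.count(",") == 2:   # expect "ax,ay,az"
                out.write(f"{line},{label}\n")
    except KeyboardInterrupt:
        print(f"\nSaved samples to {out_path}")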
With the two generated datasets we can now proceed to training our model to recognize the two different states.
We will need a training Python script; you can get it from here.
Run it with python3 train_model.py and it should generate a new file, include/model.h, which contains the trained model for our shake recognition.
model.h file in include/
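If you are curious what a script like train_model.py typically does, the general recipe is sketched below. This is an assumed outline, not the actual file: it loads the two labeled CSVs, slices the samples into fixed-size windows, trains a tiny dense classifier, converts it to TensorFlow Lite, and writes the result into a C header. The window size, network shape, array names, and CSV layout here are all assumptions.

# Rough outline of a TinyML training script (assumed structure, not the actual train_model.py).
import numpy as np
import tensorflow as tf

WINDOW = 32  # accelerometer samples per classification window (assumed)

def load_windows(path, label):
    # Load "ax,ay,az[,label]" rows and slice them into flat windows of WINDOW samples.
    data = np.loadtxt(path, delimiter=",", skiprows=1)[:, :3]
    n = len(data) // WINDOW
    x = data[: n * WINDOW].reshape(n, WINDOW * 3)
    y = np.full(n, label, dtype=np.float32)
    return x, y

# Merge the two recordings into one training set
x0, y0 = load_windows("raw_data_label0.csv", 0)
x1, y1 = load_windows("raw_data_label1.csv", 1)
x = np.concatenate([x0, x1]).astype(np.float32)
y = np.concatenate([y0, y1])

# Tiny dense classifier: one window of accel values -> P(shake)
model = tf.keras.Sequential([
    tf.keras.Input(shape=(WINDOW * 3,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x, y, epochs=30, batch_size=16, validation_split=0.2)

# Convert to TensorFlow Lite and dump it as a C array the firmware can #include
tflite_model = tf.lite.TFLiteConverter.from_keras_model(model).convert()
with open("include/model.h", "w") as f:
    f.write("// Auto-generated model data\n")
    f.write(f"const unsigned int model_len = {len(tflite_model)};\n")
    f.write("const unsigned char model_data[] = {\n")
    f.write(",".join(str(b) for b in tflite_model))
    f.write("\n};\n")

Whether the real script exports a TensorFlow Lite flatbuffer like this or just raw weights for a hand-rolled inference routine depends on the main.cpp used in the next step; the key point is that the trained parameters end up in include/model.h, where the firmware can include them at compile time.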
Now comes the fun part!
Now that you have good-looking weights and a generated model.h, you're ready to flash the on-device detector.
Download the main.cpp code from here.
Your final project structure should look like this:
include/
 |- LSM6DSL.h
 |- model.h
src/
 |- LSM6DSL.cpp
 |- main.cpp
platformio.ini
When it’s running: