===== K-Means: IMU Activity Clustering =====

K-means clustering is an unsupervised learning technique that groups data points into clusters based on how similar they are to one another. Unlike supervised methods that rely on labeled examples, K-means works with unlabeled data to reveal underlying patterns or structure. For instance, a wearable device might use K-means to discern between different activities, such as sleeping, walking, or running.

{{ :iothings:laboratoare:2025:har.jpg?600 |}}

<note tip>You can learn more about K-means Clustering [[https://www.geeksforgeeks.org/machine-learning/k-means-clustering-introduction/|here]]!</note>
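
To make the algorithm concrete, here is a minimal sketch of the two K-means steps (assign every sample to its nearest centroid, then move each centroid to the mean of its members) on made-up 2-D IMU features; the feature choice, ''K'' and the numbers are purely illustrative, not taken from the lab code.

<code cpp>
// Minimal K-means on 2-D feature vectors (e.g., mean and std of the
// accelerometer magnitude over a short window). Illustrative data only.
#include <cmath>
#include <cstdio>

constexpr int K   = 3;   // assumed number of activities (sleep/walk/run)
constexpr int N   = 6;   // number of feature vectors in this toy example
constexpr int DIM = 2;   // features per vector

float data[N][DIM] = {   // made-up feature vectors
  {0.02f, 0.01f}, {0.03f, 0.02f},   // "sleeping"-like
  {0.40f, 0.20f}, {0.45f, 0.25f},   // "walking"-like
  {0.90f, 0.60f}, {0.95f, 0.70f}};  // "running"-like
float centroid[K][DIM] = {{0.0f, 0.0f}, {0.5f, 0.5f}, {1.0f, 1.0f}};

// Squared Euclidean distance between a sample and a centroid
static float dist2(const float *a, const float *b) {
  float s = 0;
  for (int d = 0; d < DIM; d++) { float t = a[d] - b[d]; s += t * t; }
  return s;
}

// Index of the closest centroid for one sample (the "assignment" step)
static int nearest(const float *x) {
  int best = 0; float bestD = dist2(x, centroid[0]);
  for (int k = 1; k < K; k++) {
    float d = dist2(x, centroid[k]);
    if (d < bestD) { bestD = d; best = k; }
  }
  return best;
}

int main() {
  for (int iter = 0; iter < 10; iter++) {           // a few Lloyd iterations
    float sum[K][DIM] = {{0}}; int count[K] = {0};
    for (int i = 0; i < N; i++) {                   // assignment step
      int k = nearest(data[i]);
      count[k]++;
      for (int d = 0; d < DIM; d++) sum[k][d] += data[i][d];
    }
    for (int k = 0; k < K; k++)                     // update step
      if (count[k] > 0)
        for (int d = 0; d < DIM; d++) centroid[k][d] = sum[k][d] / count[k];
  }
  for (int i = 0; i < N; i++)
    printf("sample %d -> cluster %d\n", i, nearest(data[i]));
  return 0;
}
</code>
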
===== On-Device Audio Anomaly Detection =====

This ML example turns the ESP32 into a real-time audio anomaly detector. It learns what “normal” audio sounds like for a few seconds, then continuously flags frames that look different. The NeoPixel shows:

  * Green = audio looks normal
  * Red = anomaly (outlier)

The code goes through these steps (a sketch of the threshold computation follows the list):
  * Bootstrap phase (about 2–3 s): collects ''BOOTSTRAP_W'' = 40 frames of raw features.
  * Computes the per-feature mean and standard deviation, then z-scores all bootstrap frames.
  * Runs K-means (K = 4) on the standardized bootstrap features: k-means++ initialization, then a few iterations.
  * Measures each bootstrap frame’s distance to its nearest centroid and stores the median distance (''dist_median'') and the MAD (median absolute deviation) of the distances (''dist_mad'').
  * Sets an anomaly threshold: ''threshold = dist_median + ANOM_MULT × dist_mad'' (with ''ANOM_MULT'' = 3.0).

In order to make it run properly:

  - Build/flash [[iothings:laboratoare:2025_code:lab7_2|the code]].
  - Leave the board in a quiet “normal” state for ~2–3 seconds while it bootstraps.
  - Make an unusual sound (clap / keys / tap desk) → you should see ANOMALY and the LED turn red.

If it’s too sensitive or not sensitive enough, tweak:
  * ''ANOM_MULT'' (e.g., 2.5–4.0),
  * ''K_CLUSTERS'' (3–6 often fine),
  * FFT size (512/1024) or ''N_BANDS'' (8–24).

===== On-Device Light Anomaly Detection =====

This example turns an ESP32-C6 “Sparrow” into a light-state recognizer.

Each loop tick runs every 100 ms, giving a 10 Hz sampling rate. The raw brightness is compressed to a [0,1] scale with a log normalization that treats 64k counts as a generous upper bound, then smoothed with an exponential moving average (α = 0.3). The code also keeps a short ring buffer covering roughly three seconds of these smoothed values and computes a windowed standard deviation over it, which acts as a quick measure of short-term variability.

Unsupervised clustering is done online with a very small, one-dimensional k-means-like scheme. Five cluster centroids are seeded across the [0,1] range, and for about forty seconds (''TRAIN_SAMPLES'' = 400 at 10 Hz) the system is in a training phase: each new smoothed sample is assigned to its nearest cluster by absolute distance, and that cluster’s running mean and variance are updated. During training it prints progress and the evolving means.

After training, each new sample is again assigned to its nearest cluster to produce a cluster index, along with the cluster’s standard deviation and the short-window deviation from the ring buffer. A human-readable room state is then chosen: if the user has previously assigned a manual label to that cluster via a tiny serial REPL, that label is used; otherwise, a heuristic converts the normalized level and short-term variability into categories like “night”, “full_sun”, “lights_on”, “shade/day_indirect”, or “transition”.

To avoid label flicker, a simple hysteresis keeps a new label “pending” until it appears twice in a row, and the cluster means continue to adapt slowly during inference so the model can track gradual daylight changes. A condensed sketch of this logic follows.

Labels can be managed over Serial with commands such as ''setlabel <k> <name>'', ''savelabels'', and ''labels''. The Preferences API persists these names in NVS under a “labels” namespace, so they survive reboots. Throughout, the program reports the sensor type, the raw brightness, the normalized and EMA values, the chosen cluster and its statistics, the short-window deviation, and the stable label, making it easy to tune thresholds or swap sensors without changing the application logic.

Download the code [[iothings:laboratoare:2025_code:lab7_3|here]].

Add these two lines to your ''lib_deps'' in ''platformio.ini'':
<code>
https://github.com/DFRobot/DFRobot_LTR308.git
adafruit/Adafruit LTR329 and LTR303@^2.0.1
</code>
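
For a quick sanity check of the Adafruit driver (second ''lib_deps'' line), here is a minimal read loop modeled on the library’s bundled example; if your board carries the LTR-308 instead, use the DFRobot driver from the first line. The I2C pins, the sampling rate and the print format are assumptions.

<code cpp>
// Minimal check that the LTR329 responds on I2C and prints raw readings.
// Based on the Adafruit library's simpletest example; Wire uses board defaults.
#include <Wire.h>
#include "Adafruit_LTR329_LTR303.h"

Adafruit_LTR329 ltr;

void setup() {
  Serial.begin(115200);
  Wire.begin();                 // Sparrow default I2C pins assumed
  if (!ltr.begin()) {
    Serial.println("LTR329 not found, check wiring / lib_deps");
    while (true) delay(1000);
  }
}

void loop() {
  uint16_t visible_plus_ir, infrared;
  if (ltr.newDataAvailable() &&
      ltr.readBothChannels(visible_plus_ir, infrared)) {
    // The lab code would feed visible_plus_ir into the normalization step
    Serial.printf("CH0=%u CH1=%u\n", visible_plus_ir, infrared);
  }
  delay(100);                   // 10 Hz, matching the sampling rate above
}
</code>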