  * Pinky sensor is connected to line 3 (SD7, SC3)
  * Reference sensor is connected to line 2 (SD2, SC2)

Physical Wiring:
{{ :pm:prj2025:iotelea:manusa_fizica_ageorgescu2407.jpg?700 |}}
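Each sensor line above corresponds to one channel of the TCA9548A I2C multiplexer (also mentioned in the lessons below); a channel is enabled by writing a one-hot bitmask to the multiplexer's address. The sketch below is only a minimal illustration of that pattern; the 0x70 address and the ''tcaSelect'' helper name are assumptions, not taken from the project code.
<code cpp>
// Minimal illustration: selecting a TCA9548A channel before reading a sensor.
// The 0x70 address and the helper name are assumptions for this sketch.
#include <Wire.h>

const uint8_t TCA_ADDR = 0x70;   // default address with A0-A2 tied to GND

// Enable exactly one multiplexer channel (0-7); the MPU wired to that
// channel then answers on the normal I2C bus.
void tcaSelect(uint8_t channel) {
  if (channel > 7) return;
  Wire.beginTransmission(TCA_ADDR);
  Wire.write(1 << channel);      // one-hot bitmask = active channel
  Wire.endTransmission();
}

void setup() {
  Wire.begin();
  tcaSelect(3);                  // e.g. the pinky sensor on line 3
  // ... initialise / read the MPU on this channel here ...
}

void loop() {}
</code>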
====Software Design====
Development environment: Arduino IDE
  * loop - runs either record_loop or interpret_loop, based on the hardcoded FUNCTION_MODE currently set (see the sketch below)
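As a minimal sketch of that dispatch (only the names ''loop'', ''record_loop'', ''interpret_loop'' and ''FUNCTION_MODE'' come from the project; the enum values and empty bodies are assumptions for illustration):
<code cpp>
// Sketch of the hardcoded mode switch; RECORD collects labelled samples,
// INTERPRET runs inference on live sensor data.
enum Mode { RECORD, INTERPRET };
const Mode FUNCTION_MODE = INTERPRET;     // changed in code before flashing

void record_loop()    { /* gesture learning path */ }
void interpret_loop() { /* gesture interpretation path */ }

void setup() {}

void loop() {
  if (FUNCTION_MODE == RECORD) {
    record_loop();
  } else {
    interpret_loop();
  }
}
</code>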
====Results====
The overall result is a system that can be configured for either gesture interpretation or gesture learning. Functionally, the ESP32 could handle even larger weight matrices, so multinomial regression could be kept while adding more potential classes and learning new gestures that don't strictly have to be mapped onto a character. The ESP32 also has Bluetooth/Wi-Fi functionality, so it is completely plausible to modify a system like this to interact with smart devices by issuing commands; the choice to interpret sign language is arbitrary. For my current learning set (30 entries for each gesture), inference analysis yields great results (an illustrative sketch of the on-device inference step follows the confusion matrix below):
  * Average precision - 0.98
  * Average recall - 0.98
  * F1-score - 0.98
  * Accuracy - 0.98
  * (PS: I know it looks odd that they are all the same, but this is the result as calculated by sklearn's script)
{{ :pm:prj2025:iotelea:conf_matrix_ageorgescu2407.png?800 |}}
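For context, the heavy lifting of multinomial regression happens at training time; on the glove, inference reduces to a matrix-vector product followed by an argmax (softmax is monotonic, so the exponentials can be skipped when only the predicted class is needed). The sketch below is a hypothetical illustration of that step; the feature/class counts, the weight layout and the ''classifyGesture'' name are assumptions, not the project's actual code.
<code cpp>
// Hypothetical inference step for multinomial (softmax) regression on the
// ESP32: score every class with a linear model and return the best one.
const int N_FEATURES = 24;   // assumed size of one gesture feature vector
const int N_CLASSES  = 10;   // assumed number of learned gestures

int classifyGesture(const float features[N_FEATURES],
                    const float weights[N_CLASSES][N_FEATURES],
                    const float bias[N_CLASSES]) {
  int   best_class = 0;
  float best_score = -1e30f;
  for (int c = 0; c < N_CLASSES; c++) {
    float z = bias[c];
    for (int f = 0; f < N_FEATURES; f++) {
      z += weights[c][f] * features[f];
    }
    // argmax of the logits equals argmax of the softmax probabilities
    if (z > best_score) { best_score = z; best_class = c; }
  }
  return best_class;
}
</code>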
====Conclusions and Lessons====
  * Not a final conclusion really, but I realised halfway through development that I had bought MPU6500s instead of the MPU6050s I originally intended, which is a lesson to read more carefully when buying parts, I suppose.
  * Sometimes the schematics of the various breakout boards simply lie: the TCA9548A, for example, was supposed to have its own pull-up resistors, but I needed to add my own.
  * Positional tracking is difficult to realise with 6DOF systems. I was able to salvage my idea only because hand gestures have limited freedom of movement, but Z-rotation instability ruins any attempt at keeping a valid virtual Oxyz coordinate system.
  * I know it is minimal in effect, but I think consistently color coding my wiring by function may be the single best decision I made during the project.
  * The LCD's factory settings set the contrast to 0; it took a while of code/verify/upload cycles before I realised that one.
  * Always check whether the cables you buy are data-capable.
  * Although it worked for this project, sensors with protruding pins like the ones I am using are cumbersome, since the volume they occupy blocks natural finger positions. The same issue applies to wiring that is too long; in retrospect, using hard (solid-core) wires might have been more beneficial.
  * Avoid using textiles as a base support for projects: I had to sew my sensors in with thread and use a zip tie to position the breadboard because nothing was sticking.
====Code Repository====
Available [[https://github.com/AnduG/SmartGlove|here]]!