Upload 165-Allow Motion.mp4 (#73)

Track 3: Create and integrate a new/existing robot/teleoperator into the LeRobot framework

Hi, I'm Joachim, also known as Draco Ryujin from Allow Motion. And this… is not your average prosthetic hand.
Most prosthetic hands today rely on fixed EMG patterns.
Limited grip styles. Clunky control. No real learning.
They don’t adapt to you. You must adapt to them.
We’re changing that.
Our AI-powered prosthetic learns how you move — by mimicking natural hand motion.
Then it maps your EMG signals… and learns your gestures.
No more pre-recorded commands. No more frustration. Just seamless, intuitive control.
Built with Hugging Face machine learning models, Python, and real-time signal training, our hand becomes smarter every time you use it.
The result? A low-cost, 3D-printed hand that learns with you.
Empowering amputees with technology that adapts, not dictates.
This is just the beginning.
With AI and open-source hardware, we're not just building prosthetics —
We're building the future of human-machine integration.

🏆 Allow Motion is now also competing in the Social Entrepreneurship Challenge 2025 at Vrije Universiteit Brussel for a chance to pitch live in Brussels.
🎥 We need your support — please vote #4 before June 17th → https://www.vubsocialentrepreneurship.com/sec-2025
Let’s help them reach the final and connect with the right investors and partners. Just 30 seconds of your time could change someone’s life.

This revised diagram outlines a three-phase training system for our myoelectric prosthetic hand, refining the process from data collection to real-time EMG-only control.

Phase 1: Data Collection
Goal: Build a dataset pairing hand gestures with EMG signals.

Inputs:
Hand Image (Webcam): Captures visual gestures.
EMG Signal (Sensors): Records muscle activity.

Processing:
Image Processing:
Tracks hand region & detects finger positions.
Recognizes meaningful gestures (e.g., fist, pinch).

EMG Processing:
Filters and analyzes muscle signals.
Matches EMG patterns to corresponding gestures.
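To make the EMG processing step concrete, here is a minimal filtering sketch in Python. It assumes a 1 kHz sampling rate, a 20 to 450 Hz surface-EMG band, and a single-channel window; those numbers are illustrative, not the actual sensor settings.

```python
# Minimal EMG pre-processing sketch (assumed: 1 kHz sampling, one channel).
import numpy as np
from scipy.signal import butter, filtfilt

FS = 1000  # assumed sampling rate in Hz

def bandpass(emg, low=20.0, high=450.0, fs=FS, order=4):
    """Band-pass filter a raw EMG window to the typical surface-EMG band."""
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, emg)

def envelope(emg, fs=FS, cutoff=5.0, order=2):
    """Rectify the signal and low-pass filter it to get a smooth activity envelope."""
    rectified = np.abs(emg)
    b, a = butter(order, cutoff / (fs / 2), btype="low")
    return filtfilt(b, a, rectified)

# Example with synthetic data standing in for a 200 ms sensor window.
raw_window = np.random.randn(200)
clean = bandpass(raw_window)
activity = envelope(clean)
```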

Output:
A training dataset linking EMG signals to gestures.
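A rough sketch of the Phase 1 recording loop follows. The helpers read_emg_window and detect_gesture_from_frame are placeholders for our sensor driver and hand-tracking code (for example MediaPipe Hands); the 8-channel layout and the .npz output format are assumptions for illustration only.

```python
# Sketch of the Phase 1 recording loop: pair a camera-derived gesture label
# with the EMG window recorded at the same moment. The helpers below are
# placeholders for the real drivers, not an existing API.
import numpy as np

GESTURES = ["rest", "fist", "pinch", "point"]

def read_emg_window():
    """Placeholder: return one window of filtered EMG samples per channel."""
    return np.random.randn(8, 200)  # assumed: 8 channels x 200 samples

def detect_gesture_from_frame():
    """Placeholder: hand tracking (e.g., MediaPipe Hands) mapped to a label."""
    return np.random.choice(GESTURES)

def record_dataset(n_samples=500, out_path="emg_gestures.npz"):
    windows, labels = [], []
    for _ in range(n_samples):
        label = detect_gesture_from_frame()   # visual ground truth
        window = read_emg_window()            # simultaneous muscle activity
        windows.append(window)
        labels.append(GESTURES.index(label))
    np.savez(out_path, windows=np.stack(windows), labels=np.array(labels))

if __name__ == "__main__":
    record_dataset()
```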

Phase 2: Matching EMG + Hand Gestures (Learning Phase)
Goal: Train AI to associate EMG signals with gestures.

Inputs: Same as Phase 1 (camera + EMG).

Processing:
EMG Signal Matching:
Compares live EMG data to the dataset.
Predicts intended gestures.

Verification:
Cross-checks predicted gestures against camera input or user input for accuracy.

Action Execution:
Prosthetic performs the recognized gesture.
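Below is one way the learning phase could look, assuming the .npz file produced by the Phase 1 sketch, per-channel RMS features, and a random-forest classifier; the feature set and model choice are illustrative, not the final design.

```python
# Sketch of the Phase 2 learning step: train a classifier on EMG features
# and check its predictions against camera-derived labels held out from training.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

def rms_features(windows):
    """Per-channel root-mean-square of each EMG window -> shape (n, channels)."""
    return np.sqrt(np.mean(windows ** 2, axis=-1))

data = np.load("emg_gestures.npz")  # produced by the Phase 1 sketch above
X = rms_features(data["windows"])
y = data["labels"]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

# Verification step: compare predicted gestures with the camera-derived labels.
predictions = clf.predict(X_test)
print(f"Agreement with camera labels: {accuracy_score(y_test, predictions):.2%}")
```

In the actual system the cross-check runs live: each EMG prediction is compared with the camera input or a user confirmation, and mismatches feed back into data collection before the hand is trusted to act on EMG alone.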

Phase 3: EMG-Only Control (Use Phase)
Goal: Operate the prosthetic without camera input (EMG-driven).

Input: EMG signals only (no visual tracking).

Processing:
Signal Processing:
Analyzes muscle activity in real time.
Matches EMG patterns to pre-trained gestures.

Action Execution:
Prosthetic performs the predicted gesture (e.g., grip, release).
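Finally, a minimal sketch of the EMG-only control loop, reusing the placeholder helpers and classifier from the earlier sketches; send_to_hand and the 5 Hz update rate stand in for the real actuator interface and timing.

```python
# Sketch of the Phase 3 loop: EMG in, gesture command out, no camera.
# read_emg_window and rms_features are the placeholders from the earlier
# sketches; send_to_hand stands in for the real actuator driver.
import time
import numpy as np

GESTURES = ["rest", "fist", "pinch", "point"]

def send_to_hand(gesture):
    """Placeholder: forward the gesture to the prosthetic's controller."""
    print(f"executing: {gesture}")

def control_loop(clf, read_emg_window, rms_features, period_s=0.2):
    while True:
        window = read_emg_window()                        # EMG only, no camera
        features = rms_features(window[np.newaxis, ...])  # shape (1, channels)
        gesture = GESTURES[int(clf.predict(features)[0])]
        send_to_hand(gesture)                             # e.g., grip, release
        time.sleep(period_s)                              # assumed 5 Hz update rate
```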

Key Improvements Over the Previous Version
Structured Progression:
Data Collection → AI Training → Real-World Use (EMG-only).

Reduced Dependency on Camera:
The final phase relies solely on EMG, making it more practical for daily use.

Verification Step:
Ensures AI predictions match actual gestures before full deployment.

Next Steps:
Expand the gesture library (e.g., sign language, tool grips) and build an application that lets people control robots with sign language or mind control.

NeoHand Working diagram (1).png

pepijn223 changed pull request status to merged
