This week, I worked on the literature review for my graduation project. I created a structure by listing three key topics of the project:
- AI-powered prosthetics
- How to improve the responsiveness and flexibility of a prosthetic hand using Machine Learning?
- Kumar, G., Yadav, S.S. and Pal, V., 2022. Machine learning-based framework to predict finger movement for prosthetic hand. IEEE Sensors Letters, 6(6), pp.1-4. The biological hand is difficult to replace with any prosthetic hand, given its incredible spectrum of functionality, but electromyography (EMG) sensors have been used extensively for designing prosthetic hands that operate in real time. The presented framework uses a Kalman filtering approach, which makes it more robust. The KNN classifier provides 76% accuracy, DT gives 80%, RF obtains 83.5%, and the XGBoost classifier demonstrates 85% accuracy for the framework under consideration. Thus, the XGBoost classifier outperforms the remaining classifiers due to its ensemble nature.
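To make the classification step the paper benchmarks concrete, here is a minimal sketch (not the authors' code) of k-nearest-neighbour prediction over toy EMG-style feature vectors; the feature values, labels, and the two-feature representation are invented for illustration only.

```python
from collections import Counter
import math

def knn_predict(train_X, train_y, x, k=3):
    """Classify feature vector x by majority vote among its k nearest
    training samples, using Euclidean distance."""
    dists = sorted(
        (math.dist(x, xi), yi) for xi, yi in zip(train_X, train_y)
    )
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

# Toy EMG-style features: (mean absolute value, zero-crossing rate).
train_X = [(0.10, 0.20), (0.15, 0.25), (0.80, 0.90), (0.85, 0.95)]
train_y = ["rest", "rest", "flex", "flex"]

print(knn_predict(train_X, train_y, (0.12, 0.22)))  # near the "rest" cluster
print(knn_predict(train_X, train_y, (0.90, 0.85)))  # near the "flex" cluster
```

The paper's stronger results come from ensemble methods (RF, XGBoost), which aggregate many such weak decisions; KNN is shown here only because it is the simplest of the four classifiers to write out by hand.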
- finger movement prediction
- How to improve the finger movement prediction using Machine Learning?
- Widasari, E.R. and Setiawan, E., 2024. Comparative analysis of machine learning techniques for hand movement prediction using electromyographic signals. Journal of Information Technology and Computer Science, 9(1), pp.34-45.
- Gesture-based human computer interaction
- Hand Gesture Recognition for Natural Human-Computer Interaction
- Song, Y., Demirdjian, D. and Davis, R., 2012. Continuous body and hand gesture recognition for natural human-computer interaction. ACM Transactions on Interactive Intelligent Systems (TiiS), 2(1), pp.1-28. Evidence suggests that gesture-based interaction is the wave of the future, with considerable attention from both the research community (see recent survey articles by Mitra and Acharya [2007] and by Weinland et al. [2011]) and from the industry and public media (e.g., Microsoft Kinect). Evidence can also be found in a wide range of potential application areas, such as medical devices, video gaming, robotics, video surveillance, and natural human-computer interaction. Gestural interaction has a number of clear advantages. First, it uses equipment we always have on hand: there is nothing extra to carry, misplace, or leave behind. Second, it can be designed to work from actions that are natural and intuitive, so there is little or nothing to learn about the interface. Third, it lowers cognitive overhead, a key principle in human-computer interaction: Gesturing is instinctive and a skill we all have, so it requires little or no thought, leaving the focus on the task itself, as it should be, not on the interaction modality. Gesture recognition can be viewed as a task of statistical sequence modeling: Given example observation sequences, the task is to learn a model that captures spatio-temporal patterns in the sequences, so that the model can perform sequence labeling and segmentation on new observations. Our main contributions are threefold: a unified framework for continuous body and hand gesture recognition; a new error measure, based on Motion History Image (MHI) [Bobick and Davis 2001], for body tracking that captures dynamic attributes of motion; and a novel technique called multilayered filtering for robust online sequence labeling and segmentation.
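The "statistical sequence modeling" view described above can be sketched with a toy hidden-state labeller. The snippet below is a minimal log-space Viterbi decoder, an illustrative stand-in rather than the paper's multilayered filtering; the states, observations, and all probabilities are assumptions made up for the example.

```python
import math

def viterbi(obs, states, start_p, trans_p, emit_p):
    """Return the most likely state sequence for obs (log-space Viterbi)."""
    # Initialise the first column of scores from the start distribution.
    V = [{s: math.log(start_p[s]) + math.log(emit_p[s][obs[0]]) for s in states}]
    back = []
    for o in obs[1:]:
        col, ptr = {}, {}
        for s in states:
            # Best predecessor state for s at this time step.
            prev = max(states, key=lambda p: V[-1][p] + math.log(trans_p[p][s]))
            col[s] = V[-1][prev] + math.log(trans_p[prev][s]) + math.log(emit_p[s][o])
            ptr[s] = prev
        V.append(col)
        back.append(ptr)
    # Trace back the highest-scoring path.
    last = max(states, key=lambda s: V[-1][s])
    path = [last]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return list(reversed(path))

# Assumed toy model: the hand is either mid-gesture or idle, and we only
# observe a coarse motion level at each frame.
states = ("gesture", "idle")
start_p = {"gesture": 0.3, "idle": 0.7}
trans_p = {"gesture": {"gesture": 0.8, "idle": 0.2},
           "idle":    {"gesture": 0.2, "idle": 0.8}}
emit_p = {"gesture": {"moving": 0.9, "still": 0.1},
          "idle":    {"moving": 0.2, "still": 0.8}}

print(viterbi(["still", "moving", "moving", "still"],
              states, start_p, trans_p, emit_p))
```

This labels each frame as gesture or idle, which is exactly the labeling-and-segmentation task the authors tackle at much larger scale with continuous body and hand observations.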