
A realistic implementation of ultrasound imaging as a human-machine interface for upper-limb amputees

David Sierra González et al. Front Neurorobot. 2013.

Abstract

In recent years, especially with the advent of multi-fingered hand prostheses, the rehabilitation robotics community has sought to improve human-machine interfaces for the reliable control of mechanical artifacts with many degrees of freedom. Ideally, the control scheme should be intuitive and reliable, and the calibration (training) short and flexible. This work focuses on medical ultrasound imaging as such an interface. Medical ultrasound imaging is rich in information, fast, widespread, and relatively cheap; it provides high temporal and spatial resolution; moreover, it is harmless. We have already shown that a linear relationship exists between ultrasound image features of the human forearm and the kinematic configuration of the hand; here we demonstrate that such a relationship also exists between similar features and fingertip forces. An experiment with 10 participants shows that a very fast data collection, namely of zero and maximum forces only and using no force sensors, suffices to train a system that predicts intermediate force values spanning a range of about 20 N per finger with average errors in the range of 10-15%. This training approach, in which the ground truth is limited to an "on-off" visual stimulus, constitutes a realistic scenario, and we claim that it could be used equally well by intact subjects and amputees. The linearity of the relationship between images and forces is furthermore exploited to build an incremental learning system that works online and can be retrained on demand by the human subject. We expect this system to be able, in principle, to reconstruct an amputee's imaginary limb and thus act as a sensible improvement of, e.g., mirror therapy in the treatment of phantom-limb pain.

Keywords: force control; human-machine interaction; human-machine interfaces; incremental learning; rehabilitation robotics; ultrasound imaging.
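The training-to-prediction pipeline summarized in the abstract can be sketched with a plain linear model: ridge regression maps ultrasound image features to fingertip forces, trained only on "rest" (0 N) and "maximum force" (about 20 N) samples, then queried at intermediate activation levels. The toy feature data and function names below are illustrative assumptions, not the authors' actual pipeline.

```python
import numpy as np

def train_ridge(X, y, lam=1.0):
    # Closed-form ridge regression: w = (X^T X + lam * I)^{-1} X^T y
    n_features = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ y)

def predict(X, w):
    return X @ w

# Toy on-off training data: features for "rest" and "maximum force" only.
rng = np.random.default_rng(0)
X_rest = rng.normal(0.0, 0.1, size=(50, 8))   # rest: features near 0
X_max = rng.normal(1.0, 0.1, size=(50, 8))    # max force: features near 1
X = np.vstack([X_rest, X_max])
y = np.concatenate([np.zeros(50), np.full(50, 20.0)])  # 0 N vs. ~20 N

w = train_ridge(X, y, lam=0.1)

# An intermediate feature vector maps to an intermediate force value,
# which is the key property exploited in the paper's on-off training.
x_half = np.full(8, 0.5)
print(predict(x_half[None, :], w))
```

Because the ground truth is binary (on-off), the model never sees intermediate forces during training; the linearity of the feature-force relationship is what makes graded prediction possible.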


Figures

Figure 1

(A) A typical ultrasound image obtained during the experiment. The ulna is clearly visible in the bottom-left corner, while the flexor muscles and tendons are seen in the upper part. (B) A graphical representation of the human hand and forearm (right forearm; dorsal side up). The transducer is placed on the ventral side; plane “B” corresponds to the section from which the ultrasound image was taken.

Figure 2

Parts of the setup. (A) ATI Mini45 force sensor, fixed to the table. The subjects press on its top; (B) linear ultrasound transducer GE 12L-RS; (C) custom-made transducer cradle, disassembled; (D) transducer attached onto a subject's forearm, using the cradle.

Figure 3

A bird's-eye view of the setup. The subject sits in front of a screen on which the stimulus is shown; meanwhile, force data and US images are recorded.

Figure 4

(A) Structure of the stimulus shown to the subjects, first session. In the on-off phase (OO1), only rest and maximum force are induced for each finger, each repetition consisting of 4.5 s of force application, followed by 4.5 s of rest. Five repetitions per finger are induced. In the graded phase (GR1) the subjects must exert force following a squared sinusoidal pattern. (B) Forces as measured by the force sensor during the experiment for a typical subject.

Figure 5

The grid of ROIs, superimposed on the typical image shown in Figure 1. Each ROI has a radius of 20 pixels, and adjacent ROI centers are 50 pixels apart.
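Extracting one scalar feature per circular ROI on a regular grid, as the caption describes, can be sketched as follows; taking the mean gray level inside each ROI is an assumption on my part (the function name and the choice of statistic are illustrative), but the geometry (radius 20 px, centers 50 px apart) follows the caption.

```python
import numpy as np

def roi_grid_features(image, radius=20, spacing=50):
    """Mean gray level inside each circular ROI of a regular grid."""
    h, w = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    features = []
    for cy in range(radius, h - radius + 1, spacing):       # row centers
        for cx in range(radius, w - radius + 1, spacing):   # column centers
            mask = (ys - cy) ** 2 + (xs - cx) ** 2 <= radius ** 2
            features.append(image[mask].mean())
    return np.array(features)

# Synthetic 200x200 image with a bright upper half, to exercise the grid.
img = np.zeros((200, 200))
img[:100, :] = 255.0
feats = roi_grid_features(img)   # 4x4 grid -> 16 features
```

Each image thus collapses to a short feature vector whose dimensionality is fixed by the grid, which is what makes the linear regression against forces tractable.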

Figure 6

Two different views of a three-dimensional PCA projection of the samples obtained from a typical subject during OO1 and OO2. Colors denote finger flexions and rest.

Figure 7

The general safety matrix. Each entry s_ij of the matrix is the safety index between clusters C_i and C_j, that is, the ratio of the maximal standard deviation of cluster C_i to the Euclidean distance between the two cluster centers. Values are averaged over all subjects.
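The safety index defined in the caption (maximal per-dimension standard deviation of one cluster over the Euclidean distance between the two cluster centroids) can be computed directly; the function name below is hypothetical, and the use of centroids as cluster centers is an assumption.

```python
import numpy as np

def safety_index(cluster_i, cluster_j):
    """s_ij: max per-dimension std of cluster i divided by the Euclidean
    distance between the two cluster centroids. Lower values indicate
    better-separated clusters."""
    mu_i = cluster_i.mean(axis=0)
    mu_j = cluster_j.mean(axis=0)
    return cluster_i.std(axis=0).max() / np.linalg.norm(mu_i - mu_j)

# Two well-separated synthetic clusters in feature space.
rng = np.random.default_rng(1)
a = rng.normal([0.0, 0.0], 0.5, size=(100, 2))
b = rng.normal([5.0, 0.0], 0.5, size=(100, 2))
print(safety_index(a, b))
```

Note that the index is asymmetric (s_ij uses the spread of C_i only), which is why the figure shows a full matrix rather than a triangular one.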

Figure 8

Normalized root-mean-square error obtained by the linear prediction of force (A) and visual stimulus (B) for a typical subject, for each session (OO1, GR1, OO2 and GR2) and for each finger. Each bar and stem represents the mean nRMSE and one standard deviation obtained over the 50 cross-validation folds considered.
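The normalized root-mean-square error (nRMSE) used in Figures 8-10 is, under the common convention of normalizing the RMSE by the range of the target signal (an assumption here; the paper may normalize differently), computed as follows:

```python
import numpy as np

def nrmse(y_true, y_pred):
    """Root-mean-square error normalized by the range of the target."""
    rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
    return rmse / (y_true.max() - y_true.min())

# A graded, squared-sinusoid force profile (cf. Figure 4) over a 20 N range,
# with a predictor that is off by a constant 1 N.
t = np.linspace(0, np.pi, 201)
target = 20.0 * np.sin(t) ** 2
pred = target + 1.0
print(nrmse(target, pred))   # 1 N error over a 20 N range -> 0.05
```

Under this convention, the 10-15% average errors reported in the abstract correspond to roughly 2-3 N on a 20 N force range.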

Figure 9

Normalized root-mean-square error obtained by the linear prediction of force (A) and visual stimulus (B) for all subjects, for each session (OO1, GR1, OO2 and GR2) and for each finger. Each bar and stem represents the mean nRMSE and one standard deviation obtained over all subjects.

Figure 10

nRMSE for all subjects, when training on an on-off phase and testing on a graded phase. The legend denotes the training/testing phase; e.g., OO1/GR2 means that ridge regression was trained on data gathered during the first on-off phase and tested on data gathered during the second graded phase. (A) With the force as ground truth; (B) with the stimulus as ground truth. Each bar and stem represents the mean nRMSE and standard deviation obtained over all subjects.

Figure 11

System learning process for a typical subject. (A) Evolution of the prediction error evaluated over the two graded sessions (GR1+GR2) as the system is fed on-off training data; (B–E) Little finger: stimulus target signal and force prediction at the training points B, C, D, and E (see A).
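Incremental training of a linear model, as described in the caption, is commonly implemented with recursive least squares (RLS), which updates a ridge-regression solution one sample at a time via rank-one (Sherman-Morrison) updates, so the system can be retrained on demand without storing or refitting all past samples. The class below is a sketch under that assumption (the class name is hypothetical, not the authors' code).

```python
import numpy as np

class IncrementalRidge:
    """Online ridge regression via recursive least squares.

    Maintains P = (lam * I + X^T X)^{-1} and the weight vector w,
    updating both with each new (x, y) sample."""

    def __init__(self, n_features, lam=1.0):
        self.P = np.eye(n_features) / lam   # inverse regularized Gram matrix
        self.w = np.zeros(n_features)

    def update(self, x, y):
        Px = self.P @ x
        k = Px / (1.0 + x @ Px)             # gain vector
        self.w += k * (y - x @ self.w)      # correct w by the prediction error
        self.P -= np.outer(k, Px)           # Sherman-Morrison rank-one update

    def predict(self, x):
        return x @ self.w
```

Because each update costs only O(d^2) in the feature dimension d, the subject can feed in a few seconds of fresh on-off data (e.g., after the probe shifts, as in Figure 12) and the model adapts immediately.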

Figure 12

Online training/testing for a typical subject. After a perturbation in the position of the US probe reduces the quality of the prediction, the system is switched back to training mode and fed more data. After a fast retraining phase, the prediction recovers the accuracy obtained with the initial training. (A) Stimulus and prediction values for the middle finger; (B) stimulus and prediction values for the index finger.

References

    1. Boser B. E., Guyon I. M., Vapnik V. N. (1992). A training algorithm for optimal margin classifiers, in Proceedings of the 5th Annual ACM Workshop on Computational Learning Theory, ed Haussler D. (Pittsburgh, PA: ACM Press), 144–152.
    2. Castellini C., González D. S. (2013). Ultrasound imaging as a human-machine interface in a realistic scenario, in Proceedings of IROS - International Conference on Intelligent Robots and Systems (Tokyo).
    3. Castellini C., Gruppioni E., Davalli A., Sandini G. (2009). Fine detection of grasp force and posture by amputees via surface electromyography. J. Physiol. 103, 255–262. 10.1016/j.jphysparis.2009.08.008
    4. Castellini C., Passig G. (2011). Ultrasound image features of the wrist are linearly related to finger positions, in Proceedings of IROS - International Conference on Intelligent Robots and Systems (San Francisco, CA), 2108–2114. 10.1109/IROS.2011.6048503
    5. Castellini C., Passig G., Zarka E. (2012). Using ultrasound images of the forearm to predict finger positions. IEEE Trans. Neural Syst. Rehabil. Eng. 20, 788–797. 10.1109/TNSRE.2012.2207916
