When designing systems that support remote instruction on physical tasks, four requirements must be considered: 1) participants should be able to use non-verbal expressions, 2) they must be able to take appropriate body arrangements to see and show gestures, 3) the instructor should be able to monitor operators and objects, and 4) participants must be able to organize the arrangement of bodies, tools, and gestural expressions sequentially and interactively. GestureMan was developed to satisfy these four requirements by using a mobile robot that embodies a remote instructor's actions. The mobile robot is equipped with a camera and a remote-controlled laser pointer. Based on experiments with the system, we discuss the advantages and disadvantages of the current implementation and describe some implications for improving the system.