We developed Iconic, a prototype interface that allows users to describe the layout of three-dimensional scenes through a free mixture of speech and depictive gestures. Interpreting such gestures requires an integrated approach in which a high-level interpreter can simultaneously draw on cues from both the speech and gesture channels. In our system, a user's gestures are not interpreted by their similarity to some standard form; instead, they are processed only to an intermediate, feature-based representation. With this approach, gestures can be successfully interpreted in the wider context of information from speech and the graphical domain.