Using Audio Cues to Impose Temporal Constraints on Motion Gestures for Mobile Interaction
Motion gestures are an underutilized input modality for mobile interaction despite numerous potential advantages. Negulescu et al. found that the lack of feedback on attempted motion gestures made it difficult for participants to diagnose and correct errors, resulting in poor recognition performance and user frustration. In this paper, we describe and evaluate a training and feedback technique, Glissando, which uses audio characteristics to convey the system’s interpretation of user input. Glissando verbally confirms correct gestures and notifies users of errors, and it provides continuous feedback by mapping a distinct musical note to each of the three axes and manipulating pitch to convey both spatial and temporal information.
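The continuous-feedback idea described above can be sketched in code. The function below is a hypothetical illustration, not the published implementation: the base-note assignments, the normalization ranges, and the one-octave pitch span are all assumptions chosen for the example.

```python
# Hypothetical base notes (Hz) for each device axis; the actual note
# assignments used by Glissando are not specified here.
BASE_NOTES = {
    "x": 261.63,  # C4
    "y": 329.63,  # E4
    "z": 392.00,  # G4
}

def feedback_pitch(axis, displacement, elapsed, deadline):
    """Return a feedback frequency (Hz) for a motion gesture along `axis`.

    Pitch rises with spatial displacement (how far the device has moved,
    normalized to 0..1) and with elapsed time relative to the gesture
    deadline, so a single tone conveys both spatial and temporal state.
    """
    base = BASE_NOTES[axis]
    spatial = min(max(displacement, 0.0), 1.0)
    temporal = min(max(elapsed / deadline, 0.0), 1.0)
    # Shift up to one octave (12 semitones) as displacement and time progress.
    semitones = 12 * (0.5 * spatial + 0.5 * temporal)
    return base * 2 ** (semitones / 12)
```

Under this sketch, a gesture that has just begun plays the axis's base note, and the tone rises toward one octave higher as the device moves farther and the temporal deadline approaches, giving the user an audible sense of both progress and remaining time.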
Citations and materials
S. Morrison-Smith, M. Hofmann, Y. Li, and J. Ruiz. Using Audio Cues to Support Motion Gesture Interaction on Mobile Devices. ACM Transactions on Applied Perception (TAP) 13, 3 (2016), 16.
S. Morrison-Smith and J. Ruiz. Using Audio Cues to Support Motion Gesture Interaction on Mobile Devices. CHI ’14 Extended Abstracts on Human Factors in Computing Systems, ACM (2014), 1621–1626.
Image credit: https://phandroid.com/