Considering Wake Gestures for Smart Assistant Use

2020.04 | CHI EA 2020


Smart speakers have become a nearly ubiquitous technology, as they enable users to easily access conversational agents. Yet, these agents can only be activated using specific voice commands, i.e., a wake word. This, in turn, requires the device to constantly listen to and process sound, which constitutes a privacy issue for some users. Further, using the agent's trigger word in a conversation with another human may lead to accidental activations. Here, we propose using gestural triggers for conversational agents. We conducted a gesture elicitation study to identify five candidate gestures. We then conducted a user study to investigate the acceptability of the gestures and the effort required to perform them. Initial results indicate that the snap gesture shows the most potential. Our work contributes initial insights on using smart speakers with ubiquitous sensing.


smart speaker, gestural input, smart assistant, gesture, gesture elicitation


CHI EA ’20: Extended Abstracts of the 2020 CHI Conference on Human Factors in Computing Systems




Patryk Pomykalski, Mikołaj P. Woźniak, Paweł W. Woźniak, Krzysztof Grudzień, Shengdong Zhao, and Andrzej Romanowski