Homepage for Heads-Up Computing

“Smartphone zombie”

Ever had a love-hate relationship with your smartphone? The mobile interaction paradigm has revolutionized how information can be accessed, allowing us to stay connected anytime and anywhere. However, the current ‘device-centered’ design of smartphones demands constant physical and sensory engagement, forcing humans to adapt their bodies to accommodate their computing devices. We do so unknowingly, adopting unnatural postures such as holding up our smartphones and looking down at our screens. The eyes-and-hands-busy nature of mobile interaction not only constrains how we engage with our everyday activities, but also undermines our situational awareness (hence, the “smartphone zombie” phenomenon).

Our vision: Heads-Up Computing

Heads-Up computing is our ambitious vision to fundamentally change the way we interact with technology: a shift from the current ‘device-centered’ paradigm to a more ‘human-centered’ approach.

Its key concept is to provide seamless computing support for humans’ daily activities, offering a style of interaction that is compatible with natural, intuitive human movements rather than demanding newly learned behaviors. Heads-Up computing focuses on the immediate perceptual space of users (i.e., the egocentric view) as a guide for designing technology that allows users to be highly independent, effective, and efficient in their everyday actions. Wearable embodiments of Heads-Up technology include head- and hand-worn devices that together enable multimodal human-computer interaction and complementary motor movements.

Our slides with more information on the NUS-HCI Lab and the Heads-Up vision are available for download here.

Publications

Recent Publications

Given that Heads-Up computing is a new paradigm, how do users provide input?

  • We propose a voice + gesture multimodal interaction paradigm, in which voice is mainly responsible for text input, while gesture input from wearable hand controllers (e.g., an interactive ring) complements it for selection and spatial referencing.
  • Voice-based text input and editing. When users are engaged in daily activities, their eyes and hands are often busy; we therefore believe voice is the better modality for entering text in Heads-Up computing. However, entering text is not enough: editing is a big part of text processing, and editing text using voice alone is known to be very challenging. EYEditor is our solution for mobile voice-based text editing. It uses voice re-dictation to correct text, and a wearable ring mouse to perform finer adjustments (see the sketch after this list).
    • EYEditor: On-the-Go Heads-up Text Editing Using Voice and Manual Input
    • While the paper above addresses voice-based text input in general, we also share an application scenario on writing about one’s experiences in an in-situ fashion using voice-based multimedia input – LiveSnippets: Voice-Based Live Authoring of Multimedia Articles about Experiences
  • Eyes-free touch-based text input as a complementary input technique. When voice-based text input is not convenient (e.g., in places where quietness is required), we also have a technique (in collaboration with Tsinghua University) that lets you type in an eyes-free fashion: not in the sense of having no visual display, but in the sense that users do not need to look at the keyboard to enter text. This allows the user to maintain a heads-up, hands-down posture while entering text on a smart glasses display (see the decoding sketch after this list).
    • BlindType: Eyes-Free Text Entry on Handheld Touchpad by Leveraging Thumb’s Muscle Memory
  • Interactive ring as a complementary input technique for command selection and spatial referencing. Voice input has its limitations, as some information is inherently spatial; we also need a device that can perform simple selection as well as 2D or 3D pointing operations. However, it is not known how users can best perform such operations across everyday scenarios. While different interaction techniques may each be optimal in different scenarios, users are unlikely to be willing to carry multiple devices or learn multiple techniques, so what is the best cross-scenario device and technique for command selection and spatial referencing? We conducted a series of experiments to evaluate alternatives that support synergistic interaction under Heads-Up computing scenarios, and found that an interactive ring stands out as the best cross-scenario input technique for selection and spatial referencing. Refer to the paper for more details.
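
To make the re-dictation idea concrete, here is a minimal sketch of how a re-spoken phrase can be matched against the original sentence and spliced in as a correction. This is an illustrative reconstruction under our own assumptions (word-level splicing, character-level similarity via Python’s difflib), not EYEditor’s actual implementation.

```python
# Illustrative sketch of re-dictation-style correction (not EYEditor's
# actual algorithm): find the span of the original sentence that best
# matches the re-spoken phrase, then replace that span with it.
import difflib

def apply_redictation(original: str, respoken: str) -> str:
    orig = original.split()
    best_score, best_span = -1.0, (0, len(orig))
    # Score every candidate span of the original against the re-spoken
    # phrase using character-level similarity, so near-homophones such
    # as "meat"/"meet" still anchor the phrase to the right place.
    for i in range(len(orig)):
        for j in range(i + 1, len(orig) + 1):
            score = difflib.SequenceMatcher(
                None, " ".join(orig[i:j]).lower(), respoken.lower()).ratio()
            if score > best_score:
                best_score, best_span = score, (i, j)
    i, j = best_span
    return " ".join(orig[:i] + respoken.split() + orig[j:])

# Example: the user re-speaks the corrected phrase to fix one word.
print(apply_redictation("let us meat at noon", "meet at noon"))
# -> "let us meet at noon"
```

A ring mouse, as described above, could then handle what re-dictation alone cannot, such as moving the cursor to choose between candidate spans when the match is ambiguous.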
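
For eyes-free touch typing, decoders of this kind are commonly formulated as Bayesian inference: each tap is treated as a noisy 2D sample around the remembered key position, and the spatial likelihood is combined with a word-frequency prior. The sketch below shows that generic formulation; the layout coordinates, noise level, and lexicon are our assumptions for illustration, not BlindType’s actual model.

```python
# Generic Bayesian decoder for eyes-free touch typing (illustrative;
# not BlindType's exact algorithm). Taps are modeled as 2D Gaussian
# noise around intended key centers; candidate words are scored by
# spatial log-likelihood plus a log word-frequency prior.
import math

# Simplified QWERTY layout: letter -> (x, y) key center on a unit grid.
KEYS = {c: (x, y)
        for y, row in enumerate(["qwertyuiop", "asdfghjkl", "zxcvbnm"])
        for x, c in enumerate(row)}

SIGMA = 0.6  # assumed touch-noise standard deviation, in key widths

def log_likelihood(word: str, taps: list) -> float:
    """Log P(taps | word) under independent Gaussian touch noise."""
    if len(word) != len(taps):
        return float("-inf")
    total = 0.0
    for ch, (tx, ty) in zip(word, taps):
        kx, ky = KEYS[ch]
        total -= ((tx - kx) ** 2 + (ty - ky) ** 2) / (2 * SIGMA ** 2)
    return total

def decode(taps: list, lexicon: dict) -> str:
    """Return the lexicon word maximizing likelihood x frequency prior."""
    return max(lexicon, key=lambda w: log_likelihood(w, taps)
                                      + math.log(lexicon[w]))

# Hypothetical lexicon with relative word frequencies.
lexicon = {"hello": 0.6, "jello": 0.1, "hells": 0.3}
# Sloppy, unseen taps drifting around the keys for "hello".
taps = [(5.3, 1.2), (2.2, 0.3), (8.3, 1.2), (7.8, 0.9), (8.2, 0.4)]
print(decode(taps, lexicon))  # -> "hello"
```

Muscle memory enters through SIGMA: the better users remember the layout, the tighter the touch distribution, and the less the decoder must lean on the language prior.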

How to output? 

Heads-Up computing aims to support users’ current activity with just-in-time, intelligent assistance, either in the form of digital content or potentially physical help (e.g., from robots). When users are already engaged in a task, presenting additional information inevitably creates multitasking scenarios. Multitasking with simple information is relatively easy, but in some cases the best support comes in the form of dynamic information, which raises the question of how best to present dynamic information to users in multitasking scenarios. After a series of studies, we developed LSVP, a presentation style better suited to displaying dynamic information. Read the following paper for more details.

LSVP: On-the-Go Video Learning Using Optical Head-Mounted Displays

Older publications on wearable solutions and multimodal interaction techniques that are foundational to Heads-Up computing:

Our Direction

Heads-Up research directions we are exploring include:

Foundation

  • Understanding interactions for everyday multitasking via optical see-through head-mounted displays (OHMDs)
    • Notifications: Reducing notification interruption on OHMDs
    • Developing the resource interaction model: establishing how different situations affect our ability to take in information through different input channels.
  • Sensing attention: EEG monitoring of attention fluctuations

Applied 

  • Education & Learning: Mobile microlearning and on-the-go learning videos  
  • Healthcare & Wellness: Mindfulness practices on-the-go 

Our larger aims for Heads-Up computing include:

  • Identifying suitable hardware forms 
  • Establishing text editing capabilities using voice + gesture approaches

Tools, Guides and Datasets

Main Participants

Dr. Shengdong (Shen) Zhao
Nuwan Janaka
Hyeongcheol Kim
Zhang Shan
Ashwin Ram

We thank the many others who have collaborated with us on our projects in one capacity or another.