If you have a physical condition that makes it impossible to use your computer’s mouse, how do you check your email? Or what if you’re a physician doing triage and you need to share a computer with other doctors without swapping germs? Those are the questions Professor Dean Mohamedally asked his colleagues and students to solve at University College London’s (UCL) Department of Computer Science in the early days of the COVID-19 pandemic in 2020.
With help from Intel and other partners, his students developed what would become UCL MotionInput—now in its third generation. The ground-breaking suite of applications lets people use their PC without physically touching a keyboard, mouse, or joypad. Instead, they can use voice commands, facial expressions, or physical gestures captured by their webcam. With applications spanning healthcare, education, industry, and gaming, UCL MotionInput shows us what the future of user experience (UX) could look like.
It started in a classroom
Though its uses have since multiplied, UCL MotionInput was first created in response to the global COVID-19 pandemic. It was designed to give all students access to remote learning and to help the United Kingdom’s National Health Service (NHS) with rapid patient triage—thanks to requirements design from Clinical HCI Researcher Sheena Visram and NHS GP Dr. Atia Rafiq.
Two computer science students whose talents helped make MotionInput 3.0 a reality were Sinead Tattan and Carmen Meinson. They got involved with the project when Professor Dean shared his ideas with his classes across several taught degree courses, at both undergraduate and master’s levels. The progress students had been making in combining major AI technologies like the Intel® OpenVINO™ toolkit within UCL MotionInput gave Professor Dean confidence that his students could find a way to help people use their computers without using their hands.
Sinead recalls how Professor Dean tasked students with creating something that would really make a difference in people’s lives: an application that could give those with fine motor skill conditions opportunities that typically abled people take for granted. To build it, Sinead led student teams that eventually combined several machine learning, computer vision, natural language processing, and software processing technologies, building on Carmen’s Intel-optimized software architecture designs.
It was no easy task. Separate teams took on specific functionalities—like the world’s first hybrid facial navigation with simultaneous federated on-device voice commands—while Sinead played a key role as team architect, ensuring they could be combined and accessed as individual software features or a single easy-to-use suite of programs.
Intel U.K. mentors Costas Stylianou and Phillippa Chick were eager to see the students push the project in new directions. Features like pinching in the air for “touchless multitouch” and “nose navigation,” which lets users browse online by pointing their nose at regions of the screen, were built and optimized for performance as radically different ways of using everyday PCs and laptops.