Mobile & Emerging Technology

Mobile and emerging technologies are enhancing assistive devices with features like text-to-speech and eye-gaze control, improving accessibility. However, questions persist about the effectiveness of mobile devices as assistive technology.

Maryam and a participant in our research working together to use a mobile phone as a form of AT.

Mobile and emerging technologies are shifting the landscape of assistive technology.

Features such as text-to-speech and eye-gaze control once required expensive, dedicated assistive devices. Increasingly, these technologies are embedded in mainstream devices. This is exciting. What we don’t yet know, though, is how far a mobile device can act as an assistive technology, how people will learn about novel accessibility features and be trained to use them, and what impact mobile and emerging technology can have on disabled people’s quality of life. We focus these questions on people living in low- and middle-income countries.
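To illustrate what "embedded in mainstream devices" means in practice, the minimal sketch below speaks a phrase using the TextToSpeech engine that ships with standard Android phones, with no dedicated assistive hardware. It is a generic, hypothetical example (the activity name and phrase are ours) and not code from our studies.

```kotlin
import android.app.Activity
import android.os.Bundle
import android.speech.tts.TextToSpeech
import java.util.Locale

// Minimal sketch: speaking a phrase with the text-to-speech engine built into
// mainstream Android devices. Hypothetical example, not project code.
class SpeakActivity : Activity(), TextToSpeech.OnInitListener {

    private lateinit var tts: TextToSpeech

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        tts = TextToSpeech(this, this)  // initialise the platform TTS engine
    }

    override fun onInit(status: Int) {
        if (status == TextToSpeech.SUCCESS) {
            tts.language = Locale.UK  // assumes a voice for this locale is installed
            // QUEUE_FLUSH replaces anything already queued; the last argument is an utterance ID.
            tts.speak("Hello, this phone can speak for me.", TextToSpeech.QUEUE_FLUSH, null, "demo-utterance")
        }
    }

    override fun onDestroy() {
        tts.shutdown()  // release the engine when the screen is closed
        super.onDestroy()
    }
}
```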

We previously designed and delivered GSMA’s first mobile disability gap report and the first digital product narrative.

We are now focusing on two projects:

  1. Mobile as AT
  2. Project My Voice, My Words!

Mobile as AT

How can mobile phones be assistive technologies for people in low- and middle-income countries? The current focus is the use of Android devices by people with hearing and visual impairments living in Brazil, Kenya and India. We have recruited 800 people and are monitoring how they use the devices and what impact that use has. AT2030 is funding the core research and has secured matched funding for this work from ATscale; the project is also partially funded by an unrestricted gift from Google. The project is led by GDI Hub at UCL Computer Science with academic, mobile and community partners in Brazil, Kenya and India.

Project My Voice, My Words!

In countries like Ghana, people with communication difficulties that lead to non-standard speech are not easily understood. This community is often excluded from society due to:

  • A lack of Speech and Language Therapy (SLT) services
  • Poor availability of assistive technologies to support communication
  • Stigmatizing cultural beliefs

We initially conducted a six-week study of Relate, a free-to-use application launched by Google in Ghana. Relate uses bespoke language models and Automatic Speech Recognition (ASR) to help people with non-standard speech communicate and be understood. We found that people wanted to speak their own language and their own words, so we developed Project My Voice, My Words! as a collaboration between UCL, GDI Hub, and the University of Ghana. The project is expanding to support the collection of non-standard speech in Ghanaian languages, in partnership with the University of Ghana, and research in Nigeria, Kenya, and Rwanda. This work is now partially funded by an unrestricted gift from Google.
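For readers unfamiliar with how a phone app taps into speech recognition at all, the sketch below shows the general pattern using Android's standard SpeechRecognizer API with a language hint. It is a generic illustration only; the language tag ("en-GH") and callback wiring are assumptions, and this is not how Relate or our project's models are implemented.

```kotlin
import android.content.Context
import android.content.Intent
import android.os.Bundle
import android.speech.RecognitionListener
import android.speech.RecognizerIntent
import android.speech.SpeechRecognizer

// Generic sketch of speech recognition on Android with a language hint.
// NOT Project Relate's implementation; it only illustrates the platform API.
fun startListening(context: Context, onText: (String) -> Unit): SpeechRecognizer {
    val recognizer = SpeechRecognizer.createSpeechRecognizer(context)

    recognizer.setRecognitionListener(object : RecognitionListener {
        override fun onResults(results: Bundle) {
            // The recogniser returns a ranked list of candidate transcriptions.
            val candidates = results.getStringArrayList(SpeechRecognizer.RESULTS_RECOGNITION)
            candidates?.firstOrNull()?.let(onText)
        }
        override fun onError(error: Int) { /* surface the error to the user in a real app */ }
        // Remaining callbacks are no-ops in this sketch.
        override fun onReadyForSpeech(params: Bundle?) {}
        override fun onBeginningOfSpeech() {}
        override fun onRmsChanged(rmsdB: Float) {}
        override fun onBufferReceived(buffer: ByteArray?) {}
        override fun onEndOfSpeech() {}
        override fun onPartialResults(partialResults: Bundle?) {}
        override fun onEvent(eventType: Int, params: Bundle?) {}
    })

    val intent = Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH).apply {
        putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL, RecognizerIntent.LANGUAGE_MODEL_FREE_FORM)
        // Illustrative language tag; real support depends on the device and installed models.
        putExtra(RecognizerIntent.EXTRA_LANGUAGE, "en-GH")
    }
    recognizer.startListening(intent)
    return recognizer
}
```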

Partners 

  • UCL Computer Science 

  • UCL GDI Hub 

  • GDI Hub CIC 

  • Google (matched funder) 

  • ATscale (matched funder) 

  • IIIT Bangalore 

  • Vision Empower 

  • Keio University 

  • University of Sao Paulo 

  • Claro (mobile internet partner) 

  • Safaricom (mobile internet partner) 

  • GSMA 

Team 

  • Dr Giulia Barbareschi, Keio University 
  • Dr Richard Cave 
  • Ms Gifty Ayoka 
  • Mr Derick Omari 
  • Mr Lan Xiao 
  • Philip Oyier, School of Computing and IT, Jomo Kenyatta University of Agriculture and Technology, Nairobi, Kenya 
  • Henry Athiany, Department of Statistics and Actuarial Sciences, Jomo Kenyatta University of Agriculture and Technology, Nairobi, Kenya 
  • Dr Wallace M Karuguti, Department of Rehabilitation Sciences, Jomo Kenyatta University of Agriculture and Technology, Nairobi, Kenya 
  • Dr Mwangi J Matheri, Department of Rehabilitation Sciences, Jomo Kenyatta University of Agriculture and Technology, Nairobi, Kenya 
  • Mr Raul Szekely, University College London, London, United Kingdom 
  • Clara Aranda, GSMA  
  • Michael Nique, GSMA 
  • Jenny Casswell, GSMA 
