Case Study AT2030: Measuring Need through Population Health Data and Screening Tools
The problem:
Data on unmet need, met need, and under-met need (which occurs when a person's AT is broken or inappropriate for their needs) are essential for planning services for people with disabilities. Yet despite the tools described above, global AT need data are lacking, as identified by a background scoping review for WHO and UNICEF's first World Report on Assistive Technology [98], conducted by the AT2030 program. Building on this, a follow-up systematic review utilizing the same corpus highlighted that functional domains are not equally represented in the literature: nearly 80% of the studies identified report, wholly or in part, on glasses. Our understanding of population-level need is more limited where AT needs assessments and/or the relationship between a disability and an assistive product are more complex (such as in the mobility or cognitive functional domains); data sets on the need for mobility devices, communication aids, and tools to support people with autism or mental health conditions are almost non-existent. Unmet need for assistive products was found to be high across all country income contexts, yet studies varied considerably; most studies utilizing centralized health record systems were set in high-income countries, and such data sources are not universally available. Though most studies identified in both reviews originate from low- and middle-income countries (LMICs), the vast majority are cross-sectional, and the evidence base shrinks rapidly when narrowing to country- or device-specific findings. This indicates that our knowledge gaps are widest where coverage and access are most limited. Large, population-based surveys can be time- and resource-intensive and may not produce the timely data policymakers need.
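To make the three categories concrete, the sketch below shows one way responses to a needs survey might be tallied into met, under-met, and unmet need rates. It is a minimal illustration only: the field names, response coding, and sample data are assumptions for this example, not the instruments or data models used by AT2030, WHO, or UNICEF.

```python
from dataclasses import dataclass

@dataclass
class Respondent:
    """One survey respondent; fields and coding are illustrative assumptions."""
    needs_at: bool      # self-reports needing an assistive product
    has_at: bool        # currently has an assistive product
    at_adequate: bool   # the product works and is appropriate for their needs

def need_categories(respondents):
    """Tally met, under-met, and unmet need among people who need AT."""
    counts = {"met": 0, "under_met": 0, "unmet": 0}
    for r in respondents:
        if not r.needs_at:
            continue
        if r.has_at and r.at_adequate:
            counts["met"] += 1
        elif r.has_at:
            counts["under_met"] += 1   # product is broken or inappropriate
        else:
            counts["unmet"] += 1
    total = sum(counts.values()) or 1
    return {k: v / total for k, v in counts.items()}

# Made-up example: two-thirds of those who need AT have unmet or under-met need.
sample = [
    Respondent(needs_at=True,  has_at=True,  at_adequate=True),
    Respondent(needs_at=True,  has_at=True,  at_adequate=False),
    Respondent(needs_at=True,  has_at=False, at_adequate=False),
    Respondent(needs_at=False, has_at=False, at_adequate=False),
]
print(need_categories(sample))  # roughly equal thirds in this toy sample
```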
Value and Usefulness, Open and Scalable:
Emerging tools, based mostly on mobile phones, have been developed to support individual assistive technology needs assessment. For example, Peek Vision is a smartphone-based visual acuity (VA) test that replaces the equivalent paper-based vision assessment. However, the Peek Vision tool extends beyond screening to provide a data-driven systems approach to vision health. This allows for customization and individualized health care plans for the user, with integrated text message reminders that have demonstrated a two-fold increase in children's attendance at follow-up appointments [327]. For healthcare providers and policy makers, Peek's data allow population-level analysis of need to identify and plan care pathways and services [266]. In the hearing domain, a similar approach is used by HearX, who are currently trialing the rollout of a smartphone-based hearing loss screening tool, an affordable hearing aid, and a WhatsApp-based support tool [23]. Again, the intervention works at multiple levels and necessitates consideration of multiple dynamics: between the user and the device, the clinician and the device, and the clinician and the user. This approach also begins to look at the role of online communities, which again links back to the opportunities for HCI in health and wellbeing identified by Blandford [52]. Both Peek Vision and HearX produce data that are useful beyond simply growing their products and services; these data can help inform policy through a nuanced understanding of population-level need.
There is, then, work to be done by HCI researchers with these emerging data collection tools and novel data sources to maximize learning in this sector. How can we best aggregate and share data insights with policy makers so that they are actionable and fair? How can we better integrate data-driven models of care within communities and increase the provision of services? What role could communities of practice have in capturing community-level data sets? What role might there be for ubiquitous monitoring of abilities to help drive data collection? How can we turn all of the resulting information into what Rogers describes as "actionable information" [80]? A starting point for this work is the global mapping and data dashboard being created through AT2030, which will automate insights with the help of a collaboration between the UNESCO Centre for AI and UCL's WHO Collaborating Centre on AT based at the GDI Hub. A purely illustrative sketch of the kind of aggregation such a dashboard might perform follows.
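The sketch below rolls hypothetical individual screening records up into per-region, per-domain coverage indicators of the sort a policy dashboard could display. The record format, region names, and domains are assumptions for illustration; they are not the actual AT2030, Peek Vision, or HearX data models.

```python
from collections import defaultdict

# Hypothetical screening records: (region, domain, screened_positive, has_product)
records = [
    ("District A", "vision",  True,  True),
    ("District A", "vision",  True,  False),
    ("District A", "hearing", True,  False),
    ("District B", "vision",  False, False),
    ("District B", "hearing", True,  True),
]

def coverage_by_region(records):
    """Aggregate individual screening results into regional coverage rates."""
    need = defaultdict(int)      # people screened as needing an assistive product
    covered = defaultdict(int)   # of those, people who already have one
    for region, domain, positive, has_product in records:
        if positive:
            key = (region, domain)
            need[key] += 1
            covered[key] += int(has_product)
    return {key: covered[key] / need[key] for key in need}

for (region, domain), rate in sorted(coverage_by_region(records).items()):
    print(f"{region:10s} {domain:8s} coverage {rate:.0%}")
```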
This case study was excerpted from Disability Interactions: Creating Inclusive Innovations by Catherine Holloway and Giulia Barbareschi, https://doi.org/10.1007/978-3-031-03759-7, pages 138–139.