Thursday, 3 October 2024

Multi-perspective case study on the use of fall sensors

Developing AI applications requires real data. For so-called vulnerable target groups, for example in long-term care, this data is often generated artificially through simulation, because little real-world data on age(ing) exists.

In a recent publication on AI-based fall sensors, the Competence Centre for Gerontology and Health Research at Karl Landsteiner University (KL) describes why synthetic data generation leads more to exclusion than to genuine inclusion.

The article argues that inclusion must be demanded at the political level and emphasises the need for multi-perspective involvement in the development of new AI technologies. The Algocare project, which forms the framework of the publication, was funded by the Vienna Science and Technology Fund (WWTF). Thanks to KL's open access funding, the publication is freely accessible in the scientific journal "Socius".

In recent years, algorithmic decision-making systems and (big) data infrastructures, known as artificial intelligence (AI), have increasingly been developed for population groups that are perceived as vulnerable. One example is gerontechnology, which focuses on developing technologies to support older adults. Assistant Professor Dr Vera Gallistl and Katrin Lehner, BA, MA, from the Competence Centre for Gerontology and Health Research at Karl Landsteiner University of Health Sciences are two of the authors of the current publication, which deals with the development, application and use of fall sensors. "What makes Algocare special is the opportunity for multi-perspective research. We were able to conduct qualitative interviews not only with staff in long-term care facilities, but also with older care home residents and the developers of the technologies. This made it clear to us that vulnerability is not just a human phenomenon, which prompted us to write the article 'Vulnerability Assemblages: Situating Vulnerability in the Political Economy of Artificial Intelligence'."

The results of the empirical work on AI fall detection and prevention systems reveal three dimensions of vulnerability:

Data vulnerability: Artificial intelligence has to learn, and for that it needs data and experience. In the context of old age, however, both are very often lacking, as the study also made clear. "As a result, the data had to be synthesised," explains Katrin Lehner. "The technology developers therefore produced fall data themselves. This data was very 'clean', but it lacked the complexity of the real world. Synthetic data generation ultimately makes the AI system itself vulnerable, which was particularly evident in false alarms. These occurred, for example, when someone bent over to put on their shoes."
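To make this mechanism concrete, the following is a minimal, hypothetical sketch in Python, not code from the Algocare project or the systems it studied. It assumes a simple accelerometer-style detector whose alarm threshold was tuned only on idealised synthetic fall spikes, and shows how a noisy everyday movement such as bending over to put on shoes can trigger a false alarm:

    import numpy as np

    rng = np.random.default_rng(42)

    def synthetic_fall(n=100):
        # Idealised "clean" training example: rest at ~1 g, one sharp impact spike.
        signal = np.ones(n)
        signal[n // 2] = 4.0
        return signal

    def bending_over(n=100):
        # Everyday movement absent from the synthetic data: a slower but
        # still pronounced acceleration change, plus real-world sensor noise.
        signal = np.ones(n)
        signal[40:60] += 2.5 * np.sin(np.linspace(0.0, np.pi, 20))
        signal += rng.normal(0.0, 0.2, n)
        return signal

    # Naive detector whose threshold was tuned only on the clean synthetic falls.
    THRESHOLD = 3.0

    def detects_fall(signal):
        return bool(signal.max() > THRESHOLD)

    print(detects_fall(synthetic_fall()))  # True: works on the data it was built for
    print(detects_fall(bending_over()))    # True: a false alarm on an everyday movement

Because the threshold was chosen against idealised synthetic falls, the slower, noisier acceleration pattern of bending over still crosses it, reproducing in miniature the kind of false alarm the study describes.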

User vulnerability: "In this 'vulnerability assemblage', older people were the only ones ascribed vulnerability. Yet we found that this target group wants autonomy, takes a genuine interest in the technologies used and seeks self-determination over them. Because the older residents were seen as vulnerable and disinterested, however, no consideration was given to involving them in the implementation, let alone the development, of the technology."

More-than-human vulnerability: While the vulnerability of the technologies went unrecognised in practice, older care home residents were understood as vulnerable as a matter of course. "In fact, however, we were able to see that vulnerability consists of a complex interplay between older people, their (living) environment and the technical systems, an interplay shaped by the profit-oriented structures of AI technology development."

Representation versus reality
One of the most important aspects of older adults' vulnerability in the analysed "vulnerability assemblage" concerned how older adults' everyday lives are represented in the data used to train AI models. This is in line with recent sociogerontological research on AI, which raises concerns about age bias and age discrimination in AI. "In our study, we also found that synthetic data generation practices further reinforce age bias, i.e. the stereotypical and prejudiced perceptions embedded in existing data sets," says the study co-author. One of the conclusions is therefore to strengthen the active role of older adults in technology development and implementation. "On the other hand, our contribution emphasises the need to think about human-technology relationships not in terms of interventions, but in terms of shared vulnerability. Focusing on these cases of shared vulnerability offers new ways of conceptualising human-technology relationships that go beyond seeing technology merely as an intervention."

KRIS-Link: Vulnerability Assemblages: Situating Vulnerability in the Political Economy of Artificial Intelligence