As this short piece discussing the second instalment of the Stanford AI 100 report points out, as AI systems become more ubiquitous in predicting human behaviour, they amplify the biases and prejudices in our society. While I have to confess that I am not generally keen on more regulation as such, it is clear that naïve and uncritical deployment of AI technology poses a significant risk to our society and needs to be guided by sound principles. As a response to one of the Stanford AI 100 workshops put it, “integration, not deployment” is needed. AI systems cannot just be dropped in but need to be carefully integrated with the organisations and ultimately the people that deliver the services the technology is meant to assist.
An alarming, and sad, example is this story reported by WIRED of a chronic pain sufferer and dog owner who was refused pain treatment because an obscure, privately run AI system identified her as high risk for opiate addiction. It turns out that the system was triggered by her sick dog's prescriptions for opiates and barbiturates, which happened to be in her name. The piece details the painful consequences of her caregivers' blind reliance on the AI system, encouraged and even required by government authorities. Clearly, we need to do better in terms of outcomes for individuals and society, but importantly also in terms of transparency and accountability.
It is good to see that the UK government is taking these issues seriously in its National AI Strategy published this week.
AI governance and regulation, algorithmic transparency, and awareness of AI safety are all on the radar. In particular, the risk of cementing societal biases and the need for diversity and inclusion in developing and deploying AI systems are acknowledged, which is great:
“AI systems make decisions based on the data they have been trained on. If that data – or the system it is embedded in – is not representative, it risks perpetuating or even cementing new forms of bias in society. It is therefore important that people from diverse backgrounds are included in the development and deployment of AI systems.”
See also this statement from the United Nations High Commissioner for Human Rights. Clearly, these are issues that are starting to be taken seriously.