Responsible AI: A management problem, not a purchase
Trust thought for today: it's the system, innit.
This is about how conversations about AI, and data crunching in general, forget the systems they are embedded in and What Actually Happens in the real world. Many people are onto this, but what to do about it is still an open question for the design, regulation, procurement and use of AI, especially when data is so often seen as clean and infallible, and people's judgement as biased and useless.
This article by Joshua Kroll argues that AI regulation and responsible tech programmes miss the point that "responsible AI is about developing a responsible management system. A program is more than a process or a piece of technology. But the design and capabilities of technological components can support or enable programs. Assurance comes not from the technology, but from an assemblage of technology and humans, organizations, cultures, and policies. In other words, the control is itself a sociotechnical system."
He uses the analogy of smoke alarms (with a few builds from me, and lots more in Gill Kernick's book on the Grenfell fire): even perfect smoke detector technology (IoT-enabled or not) exists in a system that can affect its usefulness in many ways:
1. It must be fitted in the first place: if it is too expensive, it won't be put in the very places that need it most.
2. It must be fitted in the right place: too near the cooker and it will go off every five minutes; too far away and it won't register until it's too late.
3. It must be set at the right sensitivity: too sensitive and we take the batteries out or turn it off; not sensitive enough and it misses the fire (see the sketch after this list).
4. There must be an understanding of what to do immediately in case of fire, e.g. don't throw water on a chip pan, and provision of fire blankets or fire extinguishers, which are also expensive.
5. The building must have been designed with effective evacuation routes near all residents.
6. Residents must understand what to do.
7. Someone must maintain the alarms and update the software.
8. Some buildings are more fireproof than others for hundreds of other sociotechnical reasons, as Gill's book and the current Grenfell Inquiry show.
9. The incentives, culture and system of planning, building, maintenance, staffing, fire response etc. also influence whether the smoke alarm can effectively do the job it is supposed to do.
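The sensitivity point in 3 has a direct analogue in data analytics: choosing an alert threshold trades false alarms against missed events, and a threshold tuned without thinking about the humans who respond gets ignored, just like an over-sensitive alarm with its batteries pulled. A minimal sketch of that trade-off (all numbers hypothetical, purely for illustration, not from Kroll's article):

```python
# Minimal sketch of the alarm-sensitivity trade-off.
# All numbers are hypothetical, for illustration only.

import random

random.seed(42)

# Simulate 10,000 days: smoke is usually low (cooking),
# occasionally high (a real fire on ~0.5% of days).
days = []
for _ in range(10_000):
    fire = random.random() < 0.005
    smoke = random.gauss(8.0, 2.0) if fire else random.gauss(2.0, 1.5)
    days.append((smoke, fire))

def evaluate(threshold):
    """Count false alarms and missed fires at a given sensitivity."""
    false_alarms = sum(1 for smoke, fire in days if smoke >= threshold and not fire)
    missed_fires = sum(1 for smoke, fire in days if smoke < threshold and fire)
    return false_alarms, missed_fires

for threshold in (3.0, 5.0, 7.0):
    fa, mf = evaluate(threshold)
    print(f"threshold={threshold}: {fa} false alarms, {mf} missed fires")

# A low threshold floods residents with false alarms (the batteries come out);
# a high one misses real fires. Neither failure shows up on the detector's
# spec sheet: it only appears in how people respond.
```

The "right" threshold isn't a property of the sensor; it depends on who hears the alarm and what they do next, which is exactly Kroll's point about assurance living in the whole assemblage.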
Another example he cites is this article evaluating how Pittsburgh's predictive analytics, used to score which children should be taken into care, misdiagnose child maltreatment and prescribe the wrong solutions, and not just because of historic bias in the tech but because of the system it operates in: https://lnkd.in/ePWmST37. There are many more.
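One mechanism behind this (a general point about predictive scoring, not a claim about the exact model in the article): such tools usually can't observe "maltreatment" directly, so they train on a proxy like "reported to the hotline". If the surrounding system reports some communities more often, the score learns the reporting pattern, not the harm. A toy sketch, with made-up numbers:

```python
# Toy sketch of proxy-label bias: the model learns who gets REPORTED,
# not who is actually at risk. All numbers are made up for illustration.

import random

random.seed(0)

def simulate_family(group):
    """Underlying risk is identical across groups; reporting is not."""
    at_risk = random.random() < 0.05          # same true risk everywhere
    report_rate = 0.9 if group == "over_surveilled" else 0.3
    reported = at_risk and random.random() < report_rate
    # Over-surveilled families also get reported when nothing is wrong.
    if group == "over_surveilled" and random.random() < 0.05:
        reported = True
    return at_risk, reported

for group in ("over_surveilled", "under_surveilled"):
    families = [simulate_family(group) for _ in range(100_000)]
    true_risk = sum(a for a, _ in families) / len(families)
    label_rate = sum(r for _, r in families) / len(families)
    print(f"{group}: true risk {true_risk:.1%}, 'maltreatment' label rate {label_rate:.1%}")

# A model trained on the 'reported' label scores the over-surveilled group
# far higher, even though the underlying risk is identical. Fixing the model
# can't fix this: the skew lives in the system that generates the data.
```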
Interested in any thoughts on what AI and data analytics can learn from systems thinking approaches in other areas: Gill Kernick, Roger Miles, Christian Hunt, Ruth Steinholtz, Mathew Mytka?