Soon we will see AI integrated ever more deeply into automation and robotic systems, particularly in manufacturing and supply chain management. But as governments, organisations and individuals, are we prepared for this innovative technology, and how can we ensure that it is designed, developed, deployed and used in the right way?
In 2015, Joy Buolamwini, a researcher at the Massachusetts Institute of Technology (MIT), discovered that facial recognition technologies tended to perform less well on women with darker skin tones (BBC News, 2017). This was because the training sets used to build those technologies consisted predominantly of images of white men: the AI had developed its model of ‘a face’ from the data it had been shown, and that data was overwhelmingly white and male.
At the time, this illustrated a major problem with AI technologies and gave rise to the term ‘AI bias’: the tendency of AI decision-making systems to display prejudice towards or against a particular demographic, usually because the training data used to create the model is unrepresentative of its user group. AI bias has been shown to be present in AI systems that decide whom to hire and whom to lend to, and even in physical autonomous systems, such as driverless cars, that depend upon AI decision-making.
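To make this mechanism concrete, the sketch below shows, in miniature, how an unrepresentative training set produces uneven performance. It is a purely illustrative example, not drawn from any real facial recognition system: a simple classifier is trained on synthetic data dominated by one group, then scored separately on each group. The group names, data-generating rules and model choice are all assumptions made for the illustration.

```python
# A minimal, illustrative sketch of how an unrepresentative training
# set yields uneven performance across groups. The groups, the
# data-generating rules and the model are all invented assumptions,
# not taken from any real facial recognition system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, flip):
    """Synthetic two-class data whose true decision rule differs by group."""
    X = rng.normal(size=(n, 2))
    # Group A's labels follow x0 + x1 > 0; group B's follow x0 - x1 > 0.
    y = (X[:, 0] + (-X[:, 1] if flip else X[:, 1]) > 0).astype(int)
    return X, y

# The training set is dominated by group A; group B is barely represented.
X_a, y_a = make_group(950, flip=False)
X_b, y_b = make_group(50, flip=True)
model = LogisticRegression(max_iter=1000).fit(
    np.vstack([X_a, X_b]), np.concatenate([y_a, y_b])
)

# Evaluate on fresh, equally sized samples from each group: the model
# fits group A's rule well but performs near chance on group B.
for name, flip in [("A", False), ("B", True)]:
    X_test, y_test = make_group(1000, flip)
    print(f"group {name} accuracy: {model.score(X_test, y_test):.2f}")
```

The same dynamic plays out, far less visibly, inside large production models, which is why auditing performance per demographic group, rather than only in aggregate, matters.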
Such issues made AI ethics a driving imperative, with various institutions and bodies established to examine topics such as bias, privacy, transparency and fairness in the design and implementation of AI systems. However, with the introduction of AI into sectors such as manufacturing and supply chains, where it is used alongside automation and robotics, we face a new ethical paradigm: we must be conscious not only of the ethical risks surrounding the AI software, but also of new risks arising from its physical embodiment.
For instance, how does bias affect a physical system that relies upon AI for its decision-making, and how can we mitigate these risks? How can we ensure that these new physical autonomous systems are used in the right way and, in the context of manufacturing and supply chains, that their implementation complements the needs of the human workforce rather than increasing risk or making interesting jobs redundant?
There are significant ethical considerations that must be addressed as AI and physical autonomous systems become ever more embedded in our lives, and we all share some responsibility for being aware of what they are. At a government level, we need to assess what regulatory frameworks are required to ensure that physical autonomous systems operate in line with safety standards and are used fairly, without discrimination. Organisations need to ensure that newly embedded physical autonomous systems are equitable for all stakeholders, in particular employees and broader society. At an individual level, we need to understand the limitations of these technologies and be cognisant of the ethical risks, so that we can challenge those developing and commissioning them to do so in a way that works for everyone.
Given the complexity of the ethical landscape surrounding AI and physical autonomous systems, how can leaders whose operations, functions or organisations adopt these technologies ensure best practice? Rather than prescribing static answers, we suggest asking yourselves the following questions and reflecting on the implications for your own practice.
1. How does my organisation currently address the ethical implications presented by these technologies?
2. What new governance frameworks could my organisation develop to best incorporate embodied artificial intelligences, such that ethical considerations are appropriately addressed?
3. How can I further educate myself, my teams and my peers about ethics in artificial intelligence?
References
BBC News (2017) ‘Artificial Intelligence: how to avoid racist algorithms’. Available at: https://www.bbc.co.uk/news/technology-39533308
Dr Rebecca Raper is a robotics lecturer at Cranfield University and MK:U, where she is course director for the recently launched Robotics Degree Apprenticeship. She completed her PhD on the topic of moral machines and has consulting experience in practical ethics applied to AI system design. She is the author of the recent book ‘Raising Robots to be Good: a practical foray into the art and science of Machine Ethics’ (Springer).