“AI robots are increasingly used to facilitate human activity in many industries, for instance healthcare, education, mobility and the military, but must have accountability for their actions,” according to the university. “We need to create specific accountability guidelines to ensure that the use of AI robots remains ethical.”
“In a normal working environment, if a person makes an error, a mistake or commits any wrongdoing, it is obvious who is accountable in most circumstances: either that person specifically or the wider organisation,” said marketing and management researcher Zsofia Toth. “However, when you bring AI robots into the mix, this becomes much more difficult to understand.”
The researchers reviewed the uses of AI robots in different settings from an ethical perspective, and identified four ‘clusters of accountability’ to help establish where accountability for the actions of AI robots lies.
A warning, dear reader: This article attempts to summarise philosophical research – which is far from Electronics Weekly’s home turf. Please read the paper linked at the bottom if the results of this research might be important to you.
The clusters are loosely named:
- Illegal – any action that is against the law and regulations
Where AI robots are used for small, remedial, everyday tasks such as heating or cleaning, robot design experts and customers take most of the responsibility for appropriate use.
- Immoral – any action that meets only the bare minimum of the legal threshold
Where AI robots are used for difficult but basic tasks such as mining or agriculture, a wider group of organisations bears the brunt of responsibility.
- Morally permissible – actions not requiring explanations of putative fairness or appropriateness
Where AI robots may make decisions with potentially major consequences, such as in healthcare management and crime fighting, governmental and regulatory bodies should be involved in agreeing guidelines.
- Supra-territorial – where AI robots are used globally, such as in the military or in driverless cars
Here a wide range of governmental bodies, regulators, companies and experts are accountable. Although accountability is widely spread, “this does not imply that the AI robots usurp the role of ethical human decision-making,” according to the university, “but it becomes increasingly complex to attribute the outcomes of AI robot use to specific individuals or organisations, and thus these cases deserve special attention.”
Previously, the accountability for these actions was a grey area, said the university, but a framework like this should help to reduce the number of ethically problematic cases of AI robot use.
The work is described in ‘The dawn of the AI robots: Towards a new framework of AI robot accountability’, published in the Journal of Business Ethics. The paper is available in full; expect much philosophical language.