By Maria O’Sullivan
Governments around the world are increasingly using technology to assist them in making important decisions that affect human rights. This expansion in the automation of government decision-making is driven by a number of factors, including the availability of huge volumes of data and the push by governments to make decision-making more efficient by replacing human decision-makers with machines.
In Australia, the most controversial use of this technology has been Centrelink’s social security debt recovery system (dubbed ‘Robodebt’). Here, an automated data-matching process is used to identify and recover alleged social security overpayments. If the algorithm detects a difference between income data held by the tax office and income reported to Centrelink by the recipient, it issues a letter asking the recipient to explain the discrepancy. However, the process has been widely criticised as inaccurate and unfair. Flaws in the system’s design – most notably the practice of averaging annual tax office income evenly across fortnights – mean that many overpayments have been wrongly identified, so the use of technology has produced systematic errors. This has had a significant impact on a vulnerable cohort of society from whom repayments are wrongfully demanded. Debtors of low socioeconomic status have also found it difficult to challenge these decisions because they do not understand how the automated system operates. For instance, the Welfare Rights Centre told a recent Senate Inquiry that Centrelink issued a $14,500 robodebt to a disability pensioner with an intellectual impairment and then failed to offer him support to deal with the alleged overpayment.
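To make the nature of this design flaw concrete, the following is a minimal sketch in Python of how averaging annual income can wrongly flag a truthful recipient. It is illustrative only: the function names, figures and 26-fortnight averaging are assumptions drawn from public reporting, not the actual Centrelink implementation, which is not publicly available.

```python
# Illustrative sketch of the reported Robodebt data-matching flaw.
# All names and figures are hypothetical; the real system is not public.

FORTNIGHTS_PER_YEAR = 26

def averaged_fortnightly_income(annual_ato_income: float) -> float:
    """Spread annual tax office income evenly across fortnights (the reported design flaw)."""
    return annual_ato_income / FORTNIGHTS_PER_YEAR

def flag_discrepancies(annual_ato_income: float, reported_fortnightly: list[float]) -> list[int]:
    """Return the fortnights where the averaged income exceeds what was reported to Centrelink."""
    assumed = averaged_fortnightly_income(annual_ato_income)
    return [i for i, declared in enumerate(reported_fortnightly) if assumed > declared]

# Example: a person who earned $26,000 for the year, all of it in the first
# 13 fortnights, and who truthfully reported nil income while on benefits.
annual_income = 26_000.0
declared = [2_000.0] * 13 + [0.0] * 13   # actual earnings pattern, accurately reported

flagged = flag_discrepancies(annual_income, declared)
print(flagged)  # every nil-income fortnight is flagged, despite accurate reporting
```

In this hypothetical, each fortnight in which the person genuinely earned nothing is flagged as an apparent under-declaration, which is precisely the kind of systematic error described above.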
Centrelink’s Robodebt system therefore illustrates some of the dangers of governments using technology without due process mechanisms in place to ensure that such systems operate fairly.
However, Robodebt is only one example of automated government decision-making. Similar technologies are also being used in other important areas in Australia, including for certain decisions by the Department of Veterans’ Affairs, the Australian Taxation Office and the Department of Home Affairs. Therefore, as ‘artificial intelligence’ (AI) and other technologies become more sophisticated and more widely used, we need to consider the risks associated with these mechanisms. While automation may increase efficiency by streamlining decision-making processes, it also raises concerns in relation to transparency, accountability and individual fairness. Specifically, algorithmic systems can pose dangers for the enjoyment of due process rights and fair procedural treatment, protected in Articles 14 and 26 of the ICCPR, as well as for the substantive rights that might be affected by a particular decision.
While this post raises concerns about the implications of AI for human rights, it should be noted that technology can also be beneficial. For instance, the lip-reading programme developed by Google’s DeepMind and the University of Oxford is being used for closed captioning. Similarly, Microsoft’s ‘Seeing AI’ narrates the world for users who are blind or have low vision. These technologies are enhancing accessibility and independence for people with disability and older persons.
However, the use of artificial intelligence in other areas is more problematic. For instance, the use of drones and autonomous robotics by defence personnel raises human rights concerns relating to the right to life and security, as well as accountability problems under international humanitarian law. Beyond accountability, there is also the potential for robotics to become ‘too intelligent’ and to act incompatibly with the wishes or interests of their human operators.
It may also be argued that certain decisions should never be subjected to algorithms or AI at all. For example, there have been suggestions that AI may be used to assess refugee applications. In 2016, IBM launched a tool intended to help governments separate ‘real asylum seekers’ from potential terrorists by assigning each refugee a score reflecting the likelihood that they are an imposter. The Canadian government has also stated that it hopes to use AI and machine learning to assist in the determination of refugee applications. Thus, it is possible that automation may in future be used not simply to match data, but also to make evaluative decisions (ie about whether someone is a refugee). Given that such decisions involve questions of life and security, and that an incorrect decision carries serious consequences for refugees, the use of technology in this area is highly controversial. One important human rights law question is whether such evaluative decisions should ever be the subject of algorithmic input or, alternatively, whether algorithms can be used for only certain aspects of such decisions and should be subject to regulation to ensure that the systems are fair and reviewable. Human rights law has not, to date, grappled with these questions, but it must do so given the rate of technological development across the globe.
In conclusion, technological developments offer significant opportunities to governments to improve the consistency and efficiency of service delivery and decision-making. However, given the significant coercive and information-gathering powers held by state authorities, there is a need to ensure that new technologies align with the legal requirements of human rights law. I therefore argue that artificial intelligence systems and other forms of technology should be designed in a way that respects human rights, and should include appropriate safeguards – for example, enabling human intervention where necessary – to ensure that such systems operate in a fair and just manner.