Automated decision systems are based on algorithms. These systems are becoming very popular, but do they really serve the common good? Do they really provide us with better services? And what problems can arise when algorithms, rather than humans, make decisions about health or social security allowances, or are used as tools to influence or fabricate news for the media?
Algorithms have been advancing into our lives ever more rapidly. They are now so prevalent that both businesses and governments increasingly see opportunities to use them to great benefit. However, problems have been emerging, and when algorithms fail, it has been noted that the “public lacks the tools to hold these systems accountable”. This was the conclusion of a recent report by AI Now, which pinpointed an accountability gap as the technology spreads across more fields, including social domains. There is little regulation to rein in the ways in which algorithms can be used, and this can in some ways be detrimental to the public.
Algorithms in Healthcare
While algorithms are lauded for bringing many benefits, they do sometimes go wrong. Recently, IBM's Watson system, deployed at the Memorial Sloan Kettering Cancer Center to give trial diagnoses, was found to have provided treatment recommendations that were not only incorrect but also considered unsafe. This highlights that the claimed benefits and usefulness of algorithms are not always supported by scientific evidence, and where such evidence does exist, it is not necessarily available to the public. In the incident documented above, the technology and its capabilities were marketed very strongly, but the claims simply were not supported by peer-reviewed research.
Algorithms and Media
This is thought to be just one in a long line of incidents involving AI and algorithm deployment by businesses and governments. There was the well-publicised media scandal around Cambridge Analytica earlier this year, which revolved around the company harvesting personal data from millions of Facebook users without their consent and using it for political ends. Meanwhile, some also suggest that Facebook, as a result of its algorithms, has played a part in the genocide in Myanmar.
These types of scandals have led to a backlash from employees of the companies concerned. Staff have walked out at companies such as Google and Airbnb, and some Google employees resigned over contracts the company had been working on with the Pentagon. These protests arguably intensify the need for public accountability around algorithms and their deployment and use.
Use of Algorithms in Government
From a government perspective, algorithms are starting to be utilised in a variety of areas with a view to delivering greater efficiency in public services. But these applications can raise all kinds of ethical questions.
One such example is the development of algorithms capable of deciding on the length of a prison sentence. While this might make for faster decision making, the process behind the decision may not be well understood, and when systems make these sorts of decisions, they can be difficult to appeal. What's more, in the USA, such systems have been implemented in the allocation of medical aid. In one case, a woman with cerebral palsy found her home care hours cut almost in half by such a decision-making system, with no explanation provided. In that case the state of Arkansas was sued, and the court found that the algorithmic system was not “constitutional”. This situation clearly illustrates the sorts of problems that can arise when a decision is made by a machine and it is not possible to see the logic behind it.
Algorithms and Facial Recognition
Facial recognition is another area in which algorithms are being deployed. The technology is very popular with police forces and has been adopted across Europe, the USA and China. However, these systems have significant flaws. In particular, facial recognition is known to perform poorly across different racial groups: one study showed an error rate of 5% for white people and 39% for non-white people. This is even more worrying given that technology of this nature is also being deployed with a view to inferring character and intent simply from a person's face. This carries inherent dangers, yet accountability is limited. None of this analysis suggests that algorithms should not be used, only that a degree of accountability is needed, particularly when the technology is implemented in the social sector.
Governments need to wake up and legislate so that more scandals and major issues do not occur. After all, algorithms should serve humanity and bring a better quality of life to all, not the other way around.
Maria Fonseca is the Editor and Infographic Artist for IntelligentHQ. She is also a thought leader writing about social innovation, the sharing economy, social business, and the commons. Aside from her work for IntelligentHQ, Maria Fonseca is a visual artist and filmmaker who has exhibited widely at international events such as Manifesta 5, the Sao Paulo Biennial, Photo Espana, Moderna Museet in Stockholm, Joshibi University and many others. She completed her PhD on essayistic filmmaking at the University of Westminster in London and is preparing a post doc that will explore the links between creativity and the sharing economy.