The pace of progress in artificial intelligence frightens many people, who feel threatened by the impact automation could have on employment and by developments in other areas, such as autonomous weapons. A question growing among experts, deeply aware of AI's impact on society, is how to understand and predict what can happen when increasingly automated, complex systems fail or go off track. As John Danaher wrote for the Institute for Ethics & Emerging Technologies, "Artificial intelligence is a classic risk/reward technology. If developed safely and properly, it could be a great boon."
To deliver some answers to this and other questions, Carnegie Mellon University has just launched a new center, the K&L Gates Endowment for Ethics and Computational Technologies. The center was funded through the sponsorship of K&L Gates, an international law firm based in Pittsburgh.
The center aims to address the widespread anxiety over machine intelligence, which has been growing lately with news of increasing automation, the prospect of fewer jobs, and the fourth industrial revolution. Last month the White House released a report assessing the potential effects of AI, and several of the world's largest tech companies, including Amazon, Google, DeepMind, Facebook, IBM, and Microsoft, recently joined forces to create an organization, the Partnership on AI, to study the technology and its potential impacts.
CMU is internationally renowned as a global research university that excels in the fields of artificial intelligence, performing arts, brain science, technology-enhanced learning, cybersecurity and robotics, among others.
The university has been at the epicenter of artificial intelligence (AI) since the discipline was created in the 1950s and CMU visionaries Allen Newell and Herbert A. Simon pioneered AI and cognitive science. Their quest to make machines think not only led to new insights about cognition, language and vision but opened the door to a novel and powerful approach to computing that is now shaping our world.
Today, CMU faculty, students, and researchers are pushing the boundaries of artificial intelligence in both autonomous technologies and technologies that augment human abilities. At the same time, many scholars at CMU are stepping back to look at how humans actually use these technologies, to help ensure that these discoveries are harnessed to benefit humanity. This step back is profoundly welcome.
In a statement, CMU’s president, Subra Suresh, said it will be important to consider the human side of all AI systems. “It is not just technology that will determine how this century unfolds,” he said. “Our future will also be influenced strongly by how humans interact with technology, how we foresee and respond to the unintended consequences of our work, and how we ensure that technology is used to benefit humanity, individually and as a society.”
Besides the aforementioned issue of employment, other deleterious consequences of AI could include biased algorithms and systems that are inscrutable because no human explicitly programmed their behavior. This issue is particularly at stake with blockchain technology. Among the steps being taken to scrutinize AI is recent research aimed at enabling machine-learning systems to explain their own workings. Another area of the utmost importance is policy making and regulation.
The newly founded center is very welcome, as an important first step toward truly studying AI in terms of ethics. The center's work will certainly provide some answers to the widespread worries of the public, which in some cases extend to futuristic, sci-fi dystopian scenarios in which AI poses an existential threat to humanity.