The AI Dilemma: Juliette Powell And Art Kleiner Discuss The Future Of Humanity With AI In The Dinis Guarda YouTube Podcast

Protein folding, speech recognition, and generative models… AI is augmenting human potential to innovate in almost every sector. But what about the potential threats it poses to the very existence of humanity? AI researchers and experts Juliette Powell and Art Kleiner discuss ‘The AI Dilemma’ in the latest episode of the Dinis Guarda YouTube Podcast. The episode is live on all media platforms, including openbusinesscouncil.org and citiesabc.com.


The AI Dilemma encapsulates the complex relationship between humanity and the rapidly evolving realm of intelligent machines. On one hand, AI augments human potential and contributes to the world economy; on the other, many global experts and influencers are discussing the ways in which this technology could harm the very essence of humanity.

This is what Dinis Guarda discussed in the latest interview for his YouTube podcast with Juliette Powell, a global researcher in AI, and Art Kleiner, a tech businessman and writer, featuring their book ‘The AI Dilemma: 7 Principles for Responsible Technology’, released in August 2023.

The two guests agreed that humanity maintains a special relationship with technology. Juliette says: “Humanity, for me, manifests through digital that helps me connect to my fellow beings beyond any boundaries”. Art, for his part, believes that humanity is the connecting link between four dimensions of life: the physical world of tangible things, the larger world of structures and organisations, the world we interact with, and the inner world. Technology, he says, can be a medium that helps us keep engaging with the questions and answers.

But they also highlighted some of the threats and dangers associated with these technologies, especially artificial intelligence, which is rapidly becoming the core of innovative research and shaping the future of humanity in unexplored ways.

The AI Dilemma: Cost-benefit analysis, handling biases, and data privacy

‘The AI dilemma’ encapsulates a multifaceted set of challenges and ethical considerations stemming from the rapid proliferation of artificial intelligence (AI) across diverse aspects of modern society.

“You can’t avoid risk if you are into AI. There are going to be unintended consequences by the very nature of automated technology. So, you have to choose what kind of risks you want to take, and how you organize your oversight”, says Art.

Striking a balance between AI’s capacity for efficiency and human oversight to ensure moral alignment is a persistent challenge. As AI systems become more autonomous, they increasingly make decisions based on algorithms and data without human intervention. The challenge is to ensure that these autonomous choices align with ethical principles and human values.

“We delegate a lot of decision-making capability to the digital tools we control today, without doing much of the cost-benefit analysis that’s actually necessary to truly control something”, says Juliette.

“The confronting question”, she adds, “is how to handle biases.” AI algorithms are often trained on historical datasets, generated primarily by humans, which may therefore carry embedded biases. When these biases are carried forward into AI systems, they can result in discriminatory outcomes.

“Humanity is biased. To what extent can we safely raise the biases that exist, and eliminate or at least reduce the harmful effects of these biases on the groups of people who might have been excluded, ostracized, or harmed because of them?”, said Art.

The pervasive use of AI in data analysis and surveillance has raised significant privacy questions. Balancing the potential benefits of AI in areas like healthcare and security with the need to safeguard individual privacy is a complex challenge facing policymakers and organizations.

Art told Dinis:

“From my experience of working with big corporations, I understood that there exists a transactional behaviour between the corporations and the people who work for them. The company will move in the service of its priorities.”

From this, Juliette explained how such transactional behaviour contributes to a growing AI dilemma. She said:

“These are learned behaviors within the organisation, and they increase in sophistication as you go up to the top level.

“You have to refrain from thinking transactionally and start recognising that we, as humans, are in this together. Coming out of the pandemic, we now need to understand, more than ever, the realities of where AI is going, how quickly it’s going there, and how we have to tap into our humanity now, if we don’t want it to overpower us.”

The AI Dilemma: Job displacements, accountability issues, and potential unethical development

Job displacement due to AI-driven automation is a dilemma affecting labor markets globally. While AI promises efficiency gains, it also raises concerns about job disruption and economic inequality. Effectively managing this dilemma entails devising strategies for workforce transition and upskilling while capitalizing on AI’s potential to augment human capabilities.

In addition to the above-mentioned potential harms of an accelerated proliferation of AI-based solutions, there’s the looming threat of AI being weaponized for malicious purposes, including cyberattacks, misinformation campaigns, and autonomous weapons. The AI dilemma, therefore, extends to questions of accountability, liability, and regulation.

“It’s really important to remember that the first way to stop the potential harm from happening is to start setting regulations now. How do we want our systems to behave, each one of us? If we get that piece figured out, it will go a long way towards addressing the two sides of AI”, said Art.

Determining responsibility in cases of AI failures or accidents, especially when AI operates autonomously, remains a challenge.

“What mechanisms can we put in place so that when people misuse the technology, either deliberately or through carelessness, they are able to understand the implications and compensate those affected by it?”, Juliette added.

Ethical AI development, including defining clear ethical guidelines and best practices, is an essential aspect of addressing the AI dilemma. Ensuring transparency and explainability in AI systems is crucial for building trust and understanding their decision-making processes.

The interview concluded with a consensus that, as the creators of AI, humans bear a profound responsibility for managing its implications. We hold the power to shape this technology into either a beneficial ally that enhances our lives and sustains our future, or a potentially destructive force that overshadows humanity itself. Our choices in AI development, regulation, and use will determine the path it takes.

“At the end of the day, when you are a parent, you don’t regret, because of all that you have learnt along the way, and it’s no different with the technologies we are developing. That’s when I get hold of humanity”, says Art hopefully.