Balancing Biases: Gender in AI and Social Robotics


The adoption of artificial intelligence and machine learning is growing rapidly. With this rise has come the development of gendered solutions, often used to make AI more user friendly.

From satellite navigation systems in cars to the voice assistants in smart speakers, consumers are becoming accustomed to using gendered solutions in daily life. But with users becoming more tech-savvy, is the use of gender truly necessary in 2019?

Kaspersky is a global cybersecurity company founded in 1997. Kaspersky’s deep threat intelligence and security expertise is constantly transforming into innovative security solutions and services to protect businesses, critical infrastructure, governments and consumers around the globe. The company’s comprehensive security portfolio includes leading endpoint protection and a number of specialized security solutions and services to fight sophisticated and evolving digital threats. Over 400 million users are protected by Kaspersky technologies, and the company helps 270,000 corporate clients protect what matters most to them.

“Machines, robots and software, unlike humans, are designed to fulfil a commercial
purpose and need instruction to carry out tasks. Any use of gender that may humanise
this technology is little more than strands of code. As it stands, ‘AI’ lacks motivation
and features are programmed – not learnt. To put it simply, an ant has more intelligence
than ‘AI’ to date and anything artificial needs human direction to function.”
Eugene Kaspersky

At the 2019 Kaspersky NEXT conference, held in Lisbon this October, delegates were able to learn more about the use of gender in AI and social robotics. There they heard from Kaspersky experts and researchers, as well as third-party specialists, on what is becoming an evolving, thought-provoking technological trend.

Kaspersky Security Evangelist David Jacoby – who works in the company’s Global Research & Analysis Team – hosted and moderated a panel discussion on the role of gender in AI. He was joined by his colleague, malware expert Alexey Malanov. Fellow panellists, offering external insight, were Thomas Ramge, freelance journalist and member of the strategy board at AI Business School AG in Zurich, Switzerland, and Kriti Sharma, the founder of ‘AI for Good’.

Conference delegates were invited to watch this discussion unfold, with the panellists focusing on the motive behind assigning genders and other human traits to AI, how comfortable consumers feel using gendered AI, and whether or not gendering AI is truly necessary.

The following day, Ramge took the moderator’s seat for a roundtable discussion, joined once more by Malanov, Jacoby and Sharma, as well as Tony Belpaeme – a Professor at the University of Ghent and Professor in Robotics and Cognitive Systems at the University of Plymouth in the UK. The group spoke to a small audience of media attendees and delegates, exploring the impact of teaching human biases to AI and the potential harm this may cause, debating whether controls are needed to regulate how organisations programme AI traits, and contemplating whether machines can be taught ethical judgement.

“The Kaspersky NEXT summit is an ideal opportunity to discuss the use of AI amongst both businesses and consumers. Gender in particular has been used to drive uptake of different services, technologies and products. However, in 2019, is this really necessary? Our panel and roundtable discussions will help us lift the lid on this fascinating topic, exploring how gendering AI may lead to other, more harmful, biases when designing solutions and what can be done to ensure technology remains neutral”, comments Jacoby.

A supporting report, From Science Fiction to Modern Reality: Examining Gender in AI, is available to download.

For more information about the Kaspersky NEXT summit, visit the designated web page.
