Hassan Taher Discusses Whether AI Is Close to Being — Or Could Ever Be — Sentient

Can AI be sentient? That is, can artificial intelligence, which is growing by leaps and bounds, actually experience feelings? It’s a heavy topic, and one that seems to have sprung from a 1970s science fiction movie rather than everyday life.

All around are clear examples of how AI is moving into our lives. It’s driving vehicles and helping us see who’s at the front door without getting up. It’s solving incredibly challenging problems and working to resolve data issues at all levels. But can AI be sentient?

Hassan Taher, a prominent author and expert in the AI field, shares his opinions and expert insights into this hotly debated topic. “I believe that AI technology has the potential to bring about significant positive change in the world, but many people are hesitant to embrace it fully. While some may disagree, I believe that with responsible use, AI can actually make the world a better place for everyone.”

Hassan Taher on AI and Emotions

Where we stand right now, in the heart of 2023, AI doesn’t have the capacity to experience emotions, Taher explains. However, it can emulate them to some degree.

The challenge for AI regarding emotions is that emotions are a unique interplay of physiological and psychological responses to stimuli. Since AI is machine- and technology-based, it lacks the biological and conscious elements that allow people to experience emotions.

Could it develop these skills? AI is nowhere near being able to do so now, but some might say that with sufficient advances, it could evolve to express most of those emotions. Other researchers believe this isn’t possible because the biological components essential to emotional experience are simply absent from AI technology.

Could AI Become Better Than People?

Another critical question that many people worry about is: Could AI become superior to people in general? Hassan Taher offered some insight into this in a recent blog post, noting that researchers expect that, within 35 to 40 years, AI will be able to tackle some impressive tasks, like performing surgery and writing a bestselling book.

There’s a lot of work to be done before AI can reach that level, but that doesn’t mean it’s impossible. The key is to consider how AI in its present form could impact the world given its inability to make emotionally informed decisions.

Hassan Taher shares, “The field of AI technology is constantly evolving, and staying up to date on the latest developments is crucial to staying relevant and competitive.” There’s a lot of change coming in this industry, and without a doubt, more achievements are likely. There’s no way to know, though, how far the technology will ultimately reach.

Could People Create AI That Has Morality?

Another way to look at the potential for AI is how people program or modify AI technology and whether it would be possible to program it to have morality. It’s quite a fascinating thought: Could people program AI to think and process information from a viewpoint of morality rather than just facts and figures?

Taher shares that this topic isn’t new. In fact, Duke University researchers Vincent Conitzer and Walter Sinnott-Armstrong have been working to develop a method that would allow AI to have a type of moral compass built into it. These researchers set out to create a way to better understand human ethical decision-making — what’s behind the decisions people make when it comes to that moral compass — and try to determine if it could be programmed into an AI algorithm.

How would this work? According to the researchers, their work revolves around the concept of trying to predict what a human’s actions would be in any given situation. And if they can do this well enough, it would then be possible for AI to become proficient at making decisions as humans do, and that would equate to having some type of independent ability to follow a moral compass, so to speak.
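The prediction idea described above can be sketched in miniature. The following is a hypothetical illustration only, not the Duke researchers’ actual method: the feature names (harm, consent, benefit), the two judgment labels, and the nearest-neighbour rule are all assumptions made for the sketch. The idea is simply that if an algorithm can predict which judgment a human would make about a new scenario, it behaves as though it had a rudimentary moral compass.

```python
# Hypothetical sketch: predict human moral judgments from labeled example
# scenarios, then apply those predictions to new scenarios.

# Each scenario is a feature vector: (harm_caused, consent_given, benefit_to_others),
# each scored 0.0-1.0. Labels are the judgment a human panel (hypothetically) gave.
TRAINING_DATA = [
    ((0.9, 0.0, 0.1), "unacceptable"),  # high harm, no consent, little benefit
    ((0.1, 1.0, 0.8), "acceptable"),    # low harm, full consent, high benefit
    ((0.8, 0.2, 0.9), "unacceptable"),  # high harm outweighs the benefit here
    ((0.2, 0.9, 0.3), "acceptable"),    # low harm, broad consent
]

def distance(a, b):
    """Euclidean distance between two scenario feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def predict_judgment(scenario):
    """1-nearest-neighbour rule: return the judgment humans gave to the
    most similar scenario in the training data."""
    nearest = min(TRAINING_DATA, key=lambda pair: distance(pair[0], scenario))
    return nearest[1]

print(predict_judgment((0.85, 0.1, 0.2)))   # resembles the high-harm examples
print(predict_judgment((0.15, 0.95, 0.7)))  # resembles the consensual, low-harm examples
```

A real system would need far richer scenario descriptions and many human raters, but the structure is the same: collect human judgments, learn the pattern behind them, and generalize it to unseen cases.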

Do We Want AI To Be Sentient?

Another area to consider is the ethical implications if AI reaches sentience. Should AI achieve this level (and, as Taher notes, we aren’t anywhere near there yet), the technology would have to be held to the same moral code as people. Ethical guidelines would have to be put in place, and violating them would need to carry some sort of legal consequences.

There’s also the concern of dominance. Over time, sentient AI could create a risk of either party, humans or AI, dominating the other. That raises a whole new level of concern for the technology, including transparency and accountability in how it’s used and what protections should be in place. How, then, could people make sure that AI’s decisions align with human values?

These questions are hard to answer, and no one can say how such a situation would unfold or which science fiction scenario might become reality.

Is It Likely AI Becomes Sentient?

Hassan Taher, like everyone else, cannot know what the future holds. Still, it’s important to be realistic about the current state of AI technology and what the future could bring.

In short, we aren’t there yet: the technology for AI to become sentient is nowhere near the point of this becoming a reality. David Chalmers, a New York University philosophy professor, stated in a keynote speech that with current language models, the likelihood of AI reaching sentience in the coming decade is less than 10%.

For Hassan Taher, the goal continues to be to watch, learn, and stay on top of what the industry is doing and how AI is developing to spot areas of not just opportunity, but concern.