As we enter 2025, we carry technology in our hands and pockets capable of voice recognition, synthesised voices, fabricated imagery, augmented reality, and driverless trains and cars. Scandals like Cambridge Analytica brought the ethics of AI and data to the forefront. People became more aware of the dangers of AI when it is not clearly regulated, and big tech companies were pressured to take a stand on ethics across a number of projects. How do we ensure AI protects privacy, fairness, and human autonomy?

Artificial Intelligence (AI) is one of the most used buzzwords of our time, and its “trendiness” is not difficult to understand. A decade ago, most AI scientists were still working in their labs, improving the technology to make it viable for practical applications.
As of 2025, nearly 1.8 billion people worldwide have used AI tools. Around 500 to 600 million engage with them daily. In the business world, 78% of organisations now use AI in at least one function.
This is a notable rise from 55% just a year before. The global AI market is valued at about $391 billion, and projections show it could reach $1.8 trillion by 2030.
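Those market figures imply a steep compound annual growth rate. As a rough illustration (assuming the $391 billion valuation refers to 2025 and the $1.8 trillion projection to 2030; the projections themselves come from the sources cited above, not from this calculation):

```python
# Implied compound annual growth rate (CAGR) from the market figures above.
# Assumes the $391B valuation is for 2025 and the $1.8T projection for 2030.
start_value = 391e9   # global AI market, 2025 (USD)
end_value = 1.8e12    # projected global AI market, 2030 (USD)
years = 2030 - 2025

cagr = (end_value / start_value) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # roughly 36% per year
```

In other words, the projection assumes the market grows by more than a third every year for five consecutive years.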
This rapid growth isn’t limited to one region. India, for example, has become the leading country in ChatGPT usage, making up 13.5% of global monthly active users. This surpasses the United States, which accounts for 8.9%.
Likewise, Washington, D.C., has the highest per-capita AI use in the U.S.: usage of Anthropic’s AI platform Claude there is 3.82 times greater than would be expected from the city’s share of the working-age population.
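The 3.82 figure is an example of a simple usage-concentration index: a region’s share of observed usage divided by its share of the working-age population. A minimal sketch, using hypothetical share values (the underlying counts are not given here):

```python
# Usage-concentration index: observed usage share / expected (population) share.
# A value above 1.0 means usage is more concentrated than population predicts.
def usage_index(usage_share: float, population_share: float) -> float:
    """How many times greater a region's usage is than its population share predicts."""
    return usage_share / population_share

# Hypothetical example: a region with 1.0% of national usage but only
# 0.26% of the working-age population scores about 3.85.
print(usage_index(0.010, 0.0026))
```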
A decade ago, we could only dream about AI-powered systems; now, we rely on them for countless daily tasks. Once confined to research labs and futuristic predictions, AI has now become integral to various industries, from healthcare to finance, transportation to entertainment, and even in the devices we use daily.
Tools like voice recognition, driverless cars, and AI-powered recommendations have become familiar features in our lives, even if we don’t always recognise them as AI. As we step into 2025, AI’s influence only grows, creating both excitement and concern.
The speed at which AI has evolved is nothing short of extraordinary, but this rapid development brings with it a host of ethical challenges. We are no longer just asking how AI works or how it can be improved; we are also asking who it serves, what it should or shouldn’t do, and how we can ensure it is used responsibly. As we embrace this transformative technology, it’s essential for us to confront these ethical dilemmas head-on.

AI in our daily lives
AI’s presence in our daily lives is no longer limited to sci-fi movies or speculative fiction. It has become deeply embedded in the digital tools we use every day. One clear example is Google’s Smart Compose, which helps us write emails by predicting our words. This shows how AI systems have blended into the digital landscape. They help us complete tasks efficiently and make smarter, data-driven decisions.
However, despite the benefits of this technology, its rapid development and widespread use have raised important ethical concerns. AI’s ability to predict behaviour, influence decisions, and control aspects of human interaction creates a growing need for responsible management and transparency.
Yet as we move forward, we must ask ourselves: is AI advancing human progress, or is it creating a new set of challenges?
Ethics of AI: A growing concern
AI’s rapid growth has raised fundamental ethical questions about its role in society. While the technology promises to deliver huge benefits, its development must be approached with caution. As we continue to integrate AI into more aspects of our lives, it’s crucial to consider its ethical implications.
We must ask: How do we ensure AI is used responsibly? How do we balance innovation with fairness, privacy, and accountability?
The issue of AI’s ethical boundaries has gained significant attention in recent years. With the rise of data privacy concerns, biases in AI algorithms, and the growing use of AI in high-stakes areas like military applications and law enforcement, we are forced to confront the consequences of unchecked AI development.
A key example of this is Project Maven, a controversial initiative that involved Google working with the Pentagon to develop AI tools for military purposes. The project aimed to use AI to help military personnel identify potential targets from drone footage. This led to protests from over 4,500 Google employees, who argued that the technology could be used for harmful purposes.
As a result, Google announced in 2018 that it would not renew the Pentagon contract when it expired in 2019.
Google’s decision to establish a set of AI principles in response to this protest marked a pivotal moment in the ongoing discussion about AI ethics. These principles included commitments not to develop AI systems that would perpetuate societal biases or be used in harmful ways, such as in weapons systems.
While these commitments were important, they also raised questions about how much we can trust tech companies to regulate their own activities, especially when it comes to the vast amounts of data they collect from us, their users. The question remains: Can we rely on companies to adhere to ethical guidelines when there is significant profit to be made from AI?

The role of tech companies in AI ethics
We are seeing more tech companies becoming aware of the ethical challenges that come with AI development. Facebook, for example, has started to recognise the potential blind spots in its AI applications. Joaquin Candela, Facebook’s director of applied machine learning, acknowledged in 2018 that the company had been focusing too narrowly on certain applications without fully considering their broader implications.
In response, Facebook has created tools like Fairness Flow, which allows engineers to assess whether their AI models work fairly across different demographic groups. While such efforts are commendable, they still do not address the broader issue of how these systems are shaping our society.
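Fairness Flow’s internals are not public, but one common check of this kind compares a model’s positive-prediction rates across demographic groups (a demographic-parity check). A minimal sketch of that idea, with made-up group labels and predictions:

```python
from collections import defaultdict

def positive_rates(predictions, groups):
    """Fraction of positive (1) predictions per demographic group."""
    counts = defaultdict(lambda: [0, 0])  # group -> [positives, total]
    for pred, group in zip(predictions, groups):
        counts[group][0] += pred
        counts[group][1] += 1
    return {g: pos / total for g, (pos, total) in counts.items()}

def parity_gap(predictions, groups):
    """Largest difference in positive rates between any two groups.

    A gap near 0 suggests the model treats groups similarly on this
    one metric; a large gap is a flag for further investigation.
    """
    rates = positive_rates(predictions, groups)
    return max(rates.values()) - min(rates.values())

# Hypothetical binary predictions for two groups, A and B
preds  = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(parity_gap(preds, groups))  # 0.75 - 0.25 = 0.5
```

Real fairness audits go well beyond this single metric (error rates, calibration, and subgroup intersections all matter), but the sketch shows the basic shape of the comparison.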
Another critical issue that has emerged in the AI space is the use of facial recognition technology. While it holds potential for improving security and convenience, it has also come under scrutiny for its biases, particularly in relation to how it identifies darker-skinned individuals.
Studies have shown that facial recognition systems are more likely to misidentify women and people of colour, which raises significant concerns about racial and gender discrimination. As a result, there has been increased pressure on companies like Amazon and Microsoft to reconsider how they deploy these technologies, with some calls for a ban on facial recognition in certain contexts.
These issues highlight the importance of transparency and accountability in AI development. It is not enough for companies to simply state that their systems are fair; they must demonstrate how they ensure their AI technologies are free from bias and discrimination.
Collaborating for ethical AI
In response to the growing ethical concerns surrounding AI, a number of initiatives have been launched to develop frameworks and guidelines for responsible AI development. One such initiative is the Partnership on AI, a consortium of leading tech companies, academics, and non-profit organisations working together to ensure that AI is developed in a way that benefits society.
The consortium’s focus is on promoting fairness, accountability, and transparency in AI, and it aims to address some of the biggest ethical challenges facing the industry.
However, while such efforts are important, we must recognise that the drive for growth and innovation can sometimes overshadow ethical concerns. A notable example is Microsoft’s contract with U.S. Immigration and Customs Enforcement (ICE), under which the company provided cloud services to support the agency’s operations, a deal widely reported to include facial recognition capabilities.
Despite protests from employees, Microsoft continued with the contract, raising questions about the company’s commitment to ethical standards when business interests are at stake.
As AI technology continues to evolve, it is crucial for us to continue developing ethical frameworks and regulatory mechanisms that ensure its responsible use. The Partnership on AI and other similar initiatives are important steps in the right direction, but they must be supported by concrete actions that hold companies accountable for the impact their technologies have on society.

Regulation in AI: A global scenario
As AI grows in various industries, the need for clear rules has become more important. Different countries and regions are taking different approaches to governance. Their choices reflect their cultural values, legal systems, and economic goals. However, everyone shares the same aim: to ensure that AI technologies are developed and used responsibly.
In the European Union, the EU AI Act has become one of the most thorough regulatory efforts so far. It takes a risk-based approach and classifies AI applications into categories like unacceptable risk, high risk, and low risk. Systems that are high-risk, such as healthcare diagnostics or self-driving cars, must adhere to strict standards for transparency, safety, and accountability. This framework aims to protect users while still promoting innovation.
In contrast, the United States has taken a more decentralised approach. Instead of having one national law, the U.S. relies on specific guidelines for different sectors and regulations at the state level. This provides flexibility and encourages fast technological testing, but it also results in inconsistencies and uneven protection for users in various regions.
At the same time, countries like China have set strict rules that focus on government control and national security. China has enacted laws that regulate data use, recommendation algorithms, and deepfake content. This reflects a system where state control is a key part of AI governance. India, on the other hand, is still developing its regulatory framework. It is currently working to find a balance between promoting innovation and ensuring digital sovereignty, especially with its rapidly growing AI user base.
Human-centric AI
As we look ahead to 2025 and beyond, the ethical questions surrounding AI are only likely to become more complex. AI is already being deployed in critical areas like healthcare, finance, and law enforcement, and its influence will only continue to grow.
With this increased reliance on AI, we must continue to ask ourselves important questions: How do we ensure AI is used fairly? How do we protect our privacy in a world where data is constantly being collected? How can we hold companies accountable when their AI systems cause harm?
The future of AI will require collaboration between governments, corporations, and civil society. We must work together to develop global standards for AI ethics and ensure that the technology is used in ways that align with human values.
This means fostering transparency, accountability, and inclusivity in AI development and ensuring that AI technologies are designed with the needs and well-being of all people in mind.

Dinis Guarda is an author, academic, influencer, serial entrepreneur, and leader in 4IR, AI, Fintech, digital transformation, and Blockchain. Dinis has created various companies, including the tech platform Ztudium; the global digital platform directory businessabc.net; citiesabc.com, a digital transformation platform to empower, guide, and index cities; and the fashion technology platform fashionabc.org. He is also the publisher of intelligenthq.com, hedgethink.com and tradersdna.com. He has worked with the likes of the UN / UNITAR, UNESCO, the European Space Agency, the Davos WEF, Philips, Saxo Bank, Mastercard, Barclays, and governments all over the world.
With over two decades of experience in international business, C-level positions, and digital transformation, Dinis has worked with new tech and cryptocurrencies, driven ICOs, handled regulation, compliance, and international legal processes, created a bank, and been involved in the inception of some of the top 100 digital currencies.
He creates and helps build ventures focused on global growth, 360 digital strategies, sustainable innovation, Blockchain, Fintech, AI and new emerging business models such as ICOs / tokenomics.
Dinis is the founder and CEO of Ztudium, which manages blocksdna and lifesdna. These products and platforms offer multiple AI, P2P, fintech, blockchain, search engine, and PaaS solutions in consumer wellness, healthcare, and lifestyle, with a global team of experts and universities.
He is the founder of coinsdna, a new Swiss-regulated, Swiss-based, institutional-grade token and cryptocurrency blockchain exchange. He is also the founder of DragonBloc, a blockchain, AI, and Fintech fund, and co-founder of the Freedomee project.
Dinis is the author of various books, including “4IR AI Blockchain Fintech IoT Reinventing a Nation”, “How Businesses and Governments can Prosper with Fintech, Blockchain and AI?”, the larger 400-page case study “Blockchain, AI and Crypto Economics – The Next Tsunami?”, and “Tokenomics and ICOs – How to be good at the new digital world of finance / Crypto”, launched in 2018.
Some of the companies Dinis has created or been involved with have reached valuations of over USD 1 billion. Dinis has advised and been responsible for some top financial organisations, 100 cryptocurrencies worldwide, and Fortune 500 companies.
Dinis is involved as a strategist, board member, and advisor with the payments, lifestyle, and blockchain reward community app Glance Technologies, for whom he built the blockchain messaging / payment / loyalty software Blockimpact; the seminal Hyperloop Transportation project; Kora; and the blockchain cybersecurity firm Privus.
He is ranked in the top 10–20 of various global fintech, blockchain, AI, and social media top-100 influencer lists, such as Cointelegraph’s Top People In Blockchain (https://top.cointelegraph.com/) and https://cryptoweekly.co/100/ .
Between 2014 and 2015 he was involved in creating fabbanking.com, a digital bank between Asia and Africa, serving as Chief Commercial Officer and Marketing Officer responsible for all legal, tech, and business development. Between 2009 and 2010 he founded one of the world’s first fintech social trading platforms, tradingfloor.com, for Saxo Bank.
He is a shareholder of the fintech social money transfer app Moneymailme and the maths edutech gamification children’s app Gozoa.
He has been a lecturer at Copenhagen Business School, Groupe INSEEC/Monaco University and other leading world universities.