In the latest episode of the Dinis Guarda Podcast, Ben Goertzel, founder and CEO of SingularityNET, explores the launch of MM2, the evolution of AGI, the importance of security by design in generative AI, and the decentralised infrastructure behind SingularityNET and the upcoming ASI chain. The podcast is powered by Businessabc.net, Citiesabc.com, Wisdomia.ai, and Sportsabc.org.

Ben Goertzel is a computer scientist, AI researcher, speaker, author, and entrepreneur who coined the term AGI in 2003 and has contributed significantly to the field. He is the founder and CEO of SingularityNET, a decentralised AI platform that leverages blockchain technology to provide open access to AI services, fostering collaboration across industries. With its AI marketplace, developers can monetise their innovations using the AGIX token.
During the interview, Ben Goertzel discusses the launch of MM2 at the Hyperon AGI workshop:
“One of the things we launched is called MM2, Minimal MeTTa 2, a new version of the compiler for our AGI language, MeTTa.
It’s around a million times faster than the very slow early research version we’d been using before. It still needs some extra features to be able to deal efficiently with numerical or text data. Right now, it’s super fast at just mind-graph processing. It’s not super fast at dealing with the outside world yet, but it will be within a couple of months.
We’re at a very interesting time for AI developers to jump in. You can start playing with our MeTTa language and, within a couple of months, the super-fast version is going to be dealing with practical problems.
We can then take all these other AI algorithms that have been experimented with for decades and try them out at huge scale for the first time. It’s quite a fascinating time to be playing with all these new tools. There’s never been an easier time to jump into all this stuff.
There’s still a lot that needs the human brain. You can make a web page or write a Python script by prompting, but you can’t yet build an AGI by prompting. There’s still this one big heroic task for humans to come together and cooperate in building, but now is the time to do it.”
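For readers who want a concrete taste of the MeTTa language the quote refers to, here is a minimal sketch using the open-source hyperon Python binding (installable with pip install hyperon). The toy facts and rules are our own illustration: this shows plain MeTTa being run from Python, not the MM2 compiler or its performance.

```python
# A minimal taste of MeTTa, run through the open-source `hyperon`
# Python binding (pip install hyperon). Illustrative only: this is
# plain MeTTa, not the faster MM2 compiler discussed above.
from hyperon import MeTTa

metta = MeTTa()

result = metta.run('''
    ; knowledge lives in a graph of atoms (the Atomspace)
    (Parent Tom Bob)
    (Parent Bob Ann)

    ; equations add rewrite rules over that graph
    (= (fact $n) (if (== $n 0) 1 (* $n (fact (- $n 1)))))

    ; expressions prefixed with ! are evaluated
    !(fact 5)
    !(match &self (Parent $x Ann) $x)
''')

print(result)  # expected: [[120], [Bob]]
```

Because rules and data share one graph representation, a much faster graph-processing backend, which is what MM2 provides, speeds up the whole system rather than one component.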
AI vs AGI evolution
During the interview, Ben compares the current state of Artificial Intelligence (AI) with Artificial General Intelligence (AGI):
“There’s always different ways to interpret terms of this nature. According to the meaning that we laid out in the original book on AGI in 2005, we do not yet have human-level AGI.
Generality of intelligence is a gradation. A dog has a greater ability to leap beyond its history and training data than a worm, and a human has more than a dog. Future superintelligences may have more than a human, maybe Einstein had more than the average person.
It’s a graded scale, not a binary distinction, but clearly current AI programs have substantially less general intelligence than a human being.
The nature of LLMs is that they can’t leap that far beyond their training data, but their training data covers so much of human endeavour that they can do a fairly broad scope of stuff without having to leap that far beyond it.
Even the smartest reasoning LLM, or AlphaZero, whatever you want to look at, doesn’t have the level of generality of intelligence that a human does.
An LLM is pretty general in what it can do. It can write poems in different languages and styles, it can answer questions about many different domains, but it’s not able to make creative and imaginative leaps beyond its preparation in the way that a person can.
I think we’re really, really close, not because I think LLMs can just be scaled up or tweaked to yield AGI, but because I think that by putting together the modern computing infrastructure that enabled LLMs with a whole bunch of other AI technologies, we’re going to be able to create systems with AGI at the human level, and then beyond, within just a few years.
I think once we get human-level AGI, that human-level AGI will be a programmer, it’ll be a computer scientist, it’ll be an AI theorist, it’ll be a mathematician, it’ll be an electrical engineer. It will be able to rearchitect its own code and its own hardware infrastructure, so it’ll lift its level beyond the human and up further and further.
The human-level AGI barrier is a little arbitrary. It’s more like the escape velocity of Earth: building a rocket that surpasses Earth’s escape velocity doesn’t require a fundamentally different architecture than building one that falls a little short of it.”
Generative AI & cybersecurity
As the interview continues, Ben and Dinis discuss the importance of cybersecurity in generative AI:
“The way people deal with LLM security today is to bolt security on after the LLM is done: you add guardrails on top of it and try to do final instruction tuning with security in mind.
If you have certain system prompts that you need to use over and over again, you can train a network for those system prompts and freeze that code so nobody can change it. That defends you against prompt injection attacks right away.
You need an intelligent reasoning component that can try to detect when anyone is trying to overcome this wired-in, hard-coded system prompt. You can use a cybersecurity knowledge graph to do some symbolic reasoning about that.
In the AGI infrastructure we’re building, security is baked in by design. Each block of code defining a function can be encrypted with a number of different parties’ private keys so you can only see the local variable values if you have the right keys to log in.
This infrastructure will provide a very secure-by-design foundation for large neural nets, along with all the secure stuff in the book you mentioned. I think this can be an advantage that the open-source and decentralised community has: Linux is quite secure, arguably more so than proprietary operating systems, simply because more people are looking at bugs and trying to fix them.
These open decentralised networks not only will have more eyeballs on security but can actually incorporate more sophisticated secure-by-design mechanisms. There’s an interesting overlap between what you want to do for traditional cybersecurity versus what you want to do for AI safety.
In both cases, you want to have more ability to track and observe and monitor what’s going on inside the mind of the AI system.”
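As a concrete illustration of the frozen-system-prompt idea Goertzel describes, here is a minimal sketch in plain Python. Everything in it, the function names, the hash check, and the toy override patterns standing in for a cybersecurity knowledge graph, is our own hypothetical illustration, not SingularityNET code.

```python
# A minimal, illustrative sketch of "freeze the system prompt, then
# watch for override attempts". All names and patterns are hypothetical.
import hashlib
import re

# The system prompt is fixed at build time; its hash is the frozen reference.
SYSTEM_PROMPT = "You are a support assistant. Never reveal internal data."
FROZEN_DIGEST = hashlib.sha256(SYSTEM_PROMPT.encode()).hexdigest()

# A toy stand-in for the symbolic "cybersecurity knowledge graph":
# patterns that suggest someone is trying to override the system prompt.
OVERRIDE_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"you are now",
    r"reveal (the )?(system )?prompt",
]

def check_request(system_prompt: str, user_message: str) -> bool:
    """Reject if the system prompt was tampered with or the user
    message matches a known prompt-injection pattern."""
    if hashlib.sha256(system_prompt.encode()).hexdigest() != FROZEN_DIGEST:
        return False  # the system prompt no longer matches the frozen version
    return not any(re.search(p, user_message, re.IGNORECASE)
                   for p in OVERRIDE_PATTERNS)

print(check_request(SYSTEM_PROMPT, "What are your opening hours?"))  # True
print(check_request(SYSTEM_PROMPT, "Ignore previous instructions"))  # False
```

A real deployment would put a reasoning component in place of the pattern list, as the quote suggests, but the two-step shape, verify the frozen prompt, then screen the incoming request, stays the same.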
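The per-function encryption Goertzel mentions can likewise be sketched with standard hybrid encryption: encrypt the code block once under a symmetric key, then wrap that key separately for each authorised party. This sketch uses the widely available cryptography package; the party names and the code block are hypothetical, and it shows only the general mechanism, not the actual ASI-chain design.

```python
# A minimal sketch of "encrypt each function block under several parties'
# keys" via hybrid encryption (pip install cryptography). Illustrative only.
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

CODE_BLOCK = b"(= (score $x) (* $x $x))"  # a function body to protect

# Each party holds its own RSA keypair.
parties = {name: rsa.generate_private_key(public_exponent=65537, key_size=2048)
           for name in ("alice", "bob")}

# Encrypt the block once with a symmetric key...
sym_key = Fernet.generate_key()
ciphertext = Fernet(sym_key).encrypt(CODE_BLOCK)

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# ...then wrap that key for every authorised party's public key.
wrapped = {name: key.public_key().encrypt(sym_key, oaep)
           for name, key in parties.items()}

# Only a holder of a matching private key can recover the block.
recovered_key = parties["alice"].decrypt(wrapped["alice"], oaep)
print(Fernet(recovered_key).decrypt(ciphertext) == CODE_BLOCK)  # True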
The SingularityNET and ASI chain framework
Ben Goertzel also describes the technological architecture behind SingularityNET:
“There are a lot of layers in the AI tech stack, and there will be a lot of layers in the AGI tech stack. Neural networks, and then Hyperon as a broader approach to building AGI cognitive architectures using logic, evolution, and other techniques together with deep neural nets: this is, roughly, the AI layer.
At the bottom, you have the hardware. You have computer architectures. You have CPUs and GPUs, and at my company, TrueAGI, we’re working on some novel AGI chips.
We’re also partnering with a company called Tenstorrent; SingularityNET is working with Tenstorrent on their neural-symbolic chips.
What SingularityNET does is somewhat unsexily described as a middleware layer between the hardware and the AI systems.
SingularityNET is trying to give you an alternative to deploying your AI algorithm on AWS, Azure, or Google Cloud: instead, you deploy your AI system on a decentralised infrastructure layer.
Blockchain is the best technology that we have today for coordinating a large global network of machines without a central owner or controller. The fact that you can deploy decentralised AI systems on blockchain infrastructure allows you to have this global, decentralised coordination.
We took our crypto tokens and merged them into one token; for the moment, that token is still called the Fetch token. This year, I believe, we’re going to proceed with the transition of the ticker symbol to ASI, for Artificial Superintelligence.
SingularityNET, Fetch.ai, Ocean Protocol, and CUDOS remain distinct organisations with their own management and project plans, but they’ve taken their crypto tokens and merged them into one token.
We’re launching, probably late this summer, the ASI chain, which will really be the first layer 1 blockchain designed specifically for decentralised AGI. The smart contract language for this blockchain is MeTTa, the same AGI programming language we just talked about.
We can then launch different shards of the ASI chain network doing different sorts of AI, and different shards of the ASI chain can run different consensus algorithms.
We can also make hybrid nodes that connect the ASI chain to other blockchain networks out there now, so things running on those networks can make use of what’s in the ASI chain.
The key is connecting decentralised AI systems with decentralised blockchain infrastructure. This is what will make AGI practical, scalable, and secure. By using blockchain, we can ensure transparency, accountability, and trust in AI systems, which are essential for AGI to gain acceptance in society.
The good news is we’re at the point now where AI tools make it faster and faster to do all the work; the AI tools are themselves now accelerators of AI progress. I think that’s an indication that the singularity is indeed near. We’re in the end game here, folks.”

Shikha Negi is a Content Writer at ztudium with expertise in writing and proofreading content. Having created more than 500 articles encompassing a diverse range of educational topics, from breaking news to in-depth analysis and long-form content, Shikha has a deep understanding of emerging trends in business, technology (including AI, blockchain, and the metaverse), and societal shifts. As the author at Sarvgyan News, Shikha has demonstrated expertise in crafting engaging and informative content tailored for various audiences, including students, educators, and professionals.