I read this book last year (2025), as AI intrigues me and at times frightens me. I want to be able to see through the hype and have a better understanding of both the potential benefits and the downsides.
In this post I want to explore the key insights and notes that I took from the book, which was written in 2014, and merge them with insights I have gained from my own experience of using AI in early 2026.
Intelligence Got Us Here, and That’s the Problem
What separates humanity from every other species on the planet?
It isn’t our strength or physical ability, it is our ability to form large social groups and our underlying intelligence. These two pillars allowed us to develop both technologies and culture.
As a social animal, we must communicate with each other and this led us to develop language.
Language can be considered the human operating system: through language we are able to create both our culture and our technology.
The only real threats currently to humanity’s survival are:
- Ourselves
- Natural disaster
- Something more intelligent than ourselves
As you can guess from the book's title, Smarter Than Us, the threat comes from AI becoming smarter than us.
Stuart reminds us in the book that a "computer" was once a human who was employed to solve complex mathematical problems, and that work was considered intelligent. When these people were replaced by machines, the task was no longer considered to require intelligence.
The problem I see with this argument is that a computer based on a Universal Turing Machine is following a set of algorithms. But it does raise the question: can intelligence be defined as an algorithm?
The answer to this question will determine if Artificial Intelligence running on a computer can ever achieve Artificial General Intelligence (AGI).
However, the underlying point Stuart makes, that the definition of intelligence changes as technology advances, seems to hold true.
How AI Could Come to Dominate
In Smarter Than Us, Stuart identifies three distinct routes that could lead to AI domination:
- Social abilities
- Technological developments
- Economic ability
Social Abilities
Let us take a look at social ability first. As I mentioned earlier, language can be considered the operating system which got us here. But as Yuval Noah Harari has argued, we have given AI the keys to that operating system.
We have developed Large Language Models trained on our language, and while they don't understand language in the same way a human does, they are nonetheless able to produce it.
Take this blog post: I got AI, in this case Claude Code, to search my vault, find what it thought were the key takeaways from the book, and lay out this post. I won't stick to all of it; for example, as I was writing I remembered Yuval Noah Harari's argument about handing AI the keys to the human operating system.
Another downside of the current models is that they have a tendency to lie, or as the developers like to call it, hallucinate. Yet many people take their answers at face value because they think a computer can never be wrong.
Technological Development
In this section I want to cover the various technological developments that will enable AI to dominate.
AI Supercommittee
An Artificial Intelligence supercommittee could emerge if different Artificial Intelligence instances were ever to form a network.

This has become reality in the form of AI agents, which can be spawned by leading-edge models such as Claude and Gemini.
The first time I watched AI use agents was when I asked Claude Code whether it was possible to index the permanent notes in my vault alphabetically.
As it tried to figure out my request, Claude Code spun up agents to gather details on my vault and compare them to my index. It was incredible, exciting, and just a bit scary.
In less than 20 minutes the AI had figured out how to do it, added a link for every permanent note I had at the time to my A-to-Z index, and created a skill to run the next time the job needed doing.
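I don't know exactly what Claude Code did behind the scenes, but the core of the task can be sketched in a few lines of Python. This is my own illustration, not its actual implementation; the flat folder of `.md` files and the `[[wiki-link]]` index format are assumptions.

```python
from collections import defaultdict
from pathlib import Path

def build_a_to_z_index(vault_dir: str) -> str:
    """Group every markdown note under its first letter and emit
    a wiki-link index, one section per letter of the alphabet."""
    groups = defaultdict(list)
    # Assumes notes sit directly in the folder; a nested vault
    # would use rglob("*.md") instead.
    for note in sorted(Path(vault_dir).glob("*.md"),
                       key=lambda p: p.stem.lower()):
        groups[note.stem[0].upper()].append(note.stem)
    lines = []
    for letter in sorted(groups):
        lines.append(f"## {letter}")
        lines.extend(f"- [[{name}]]" for name in groups[letter])
    return "\n".join(lines)
```

Saving that output over the existing index note each time it runs would give you the repeatable "skill" behaviour.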
These AI agents are going to have a massive impact on how AI is implemented and what it is able to do over the next few years.
Economic Ability
This is the weakest area of my notes, but AI will likely widen the exponential gap between the richest and poorest members of our communities, with the leading AI companies and their owners gaining the most.
And this will give AI the resources to continue to develop and improve over time.
The Gap Between What We Program and What We Intend
I don't code for a living, but I have learnt some programming languages for specific qualifications, including my degree, and I had never thought of it this way until I read Stuart Armstrong's Smarter Than Us: "But it is speaking to an alien mind."
Higher-level programming languages such as Python, Java, or C are a halfway house between human language and the native language of computers, designed to make code easier for us to understand.
Yet for the computer to run them, they have to be translated into machine code, the language your computer can actually execute.
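Python makes this translation easy to watch. Its built-in `compile` function turns human-readable source text into bytecode, a lower-level instruction format (one step above the machine code a C compiler would ultimately emit), and the standard `dis` module shows those instructions:

```python
import dis

# One line of high-level, human-readable source...
source = "total = 2 + 3"

# ...compiled into a code object the interpreter can execute.
code_obj = compile(source, "<example>", "exec")

# dis prints the lower-level instructions hiding behind that line,
# such as LOAD_CONST and STORE_NAME.
dis.dis(code_obj)
```

The exact instructions printed vary between Python versions, but the point stands: what we write and what the machine runs are different languages.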
In the last five years or so this has changed, as we have taught computers to speak to us by training what we call Large Language Models. But our language still has to be translated into a form the model can understand. This is done by a tokenizer, which converts text into the numbers that the underlying Transformer architecture works on.
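To make that translation step concrete, here is a deliberately tiny toy tokenizer of my own. Real models use subword schemes such as byte-pair encoding rather than whole words, but the principle is the same: text in, integer IDs out.

```python
def build_vocab(corpus: str) -> dict:
    """Assign each unique word an integer ID. Real tokenizers use
    subword pieces (e.g. byte-pair encoding), not whole words."""
    vocab = {}
    for word in corpus.lower().split():
        vocab.setdefault(word, len(vocab))
    return vocab

def tokenize(text: str, vocab: dict) -> list:
    """Translate text into the list of integer IDs a model sees."""
    return [vocab[word] for word in text.lower().split() if word in vocab]
```

The model never sees our words at all, only these numbers; its reply is a stream of numbers translated back the other way.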
But we are dealing with an alien intelligence, and we don't fully understand how it works because it is a black box. We do know that it was taught our language by predicting the most likely next word, predictions it learnt from a massive quantity of data, pretty much the sum of human knowledge.
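The idea of "predicting the most likely next word" can be illustrated with a toy bigram model that simply counts which word follows which in its training text. This is a sketch of the principle only; real LLMs learn these statistics with Transformer networks over billions of parameters, not a lookup table.

```python
from collections import Counter, defaultdict

def train_bigrams(corpus: str) -> dict:
    """Count, for each word, how often each following word appears."""
    words = corpus.lower().split()
    model = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        model[current][nxt] += 1
    return model

def predict_next(model: dict, word: str) -> str:
    """Return the most frequent continuation seen in training."""
    return model[word.lower()].most_common(1)[0][0]
```

Even this trivial version shows why training data matters so much: the model can only ever echo the patterns it was fed.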
The Efficiency Trap
Caught up in this is a potential trap for our cognitive abilities: the drive for increased efficiency could push us out of the loop and lead to the loss of human skills and cognitive abilities. This doesn't have to happen, and it can be managed if we are aware of the risk.
I have created an AI literacy framework for using AI with my PKM (personal knowledge management), in part to negate this trap. I will link to the first post in that series, the Level 0 Framework.
The Moral Philosophy Problem
The creation of safe AI requires the solving of moral philosophy. There is only one problem: humans have been trying to resolve these moral dilemmas and questions for thousands of years, and we haven't really made much progress.
It may well be that there isn't one definite answer, as it depends on the individual's moral compass. But what defines that?
For humans it could be some combination of genetics, perspective, and knowledge. And if that is the case for humans, will it ring true for AI?
Amir Husain argued in his book The Sentient Machine that there will be a period before the emergence of Artificial General Intelligence (AGI) when Artificial Intelligence (AI) will act as a mirror on humanity and help us to determine the answers to some fundamental questions that philosophy has been asking for millennia, such as:
- What is uniquely human?
- What do we want to become in the age of AGI?
- Do all of our complex goals come from our biological needs?
It seemed to me that Large Language Models reached that point in early 2025, and I definitely see glimpses of it in some of my conversations with Claude. And if this is indeed the case, we have a unique view of ourselves.
These Large Language Models were trained pretty much on the sum of human knowledge, and this means that the mirror reflects us back warts and all.
And I think we need to own it and look at how we can improve those aspects of ourselves rather than try to correct the data these models are trained on.
What Will Human Society Look Like After the Singularity
What will human society look like after the technology singularity?
It is likely that human society will reflect what an Artificial Superintelligence wants it to be.
This process might even be starting now, as Large Language Models (LLMs) have access to language, which, as I have mentioned, is the operating system of humanity. And these models are already creating content.
I always write my own blog posts, but AI does help me in the creation process. Claude helped me lay out this post and it will also help me edit it. I will also use Claude to create the social media posts that promote it.
The line between human creativity and AI is starting to blur.
Conclusion
AI is coming whether we like it or not; the technology is out there. So our goal must be to leverage the best aspects of AI and AGI (Artificial General Intelligence) that we can, while staying aware of the potential downsides so we can reduce those risks.
That is why I urge you to try out these models, learn about prompt engineering, and see the potential benefits and downsides for yourself.
And make sure you stay informed. But be careful, as there is a lot of hype out there. I would suggest you start with the further reading below.
Lastly, please share your thoughts on AI.
Further Reading
- Stuart Armstrong. Smarter Than Us
- Yuval Noah Harari, AI and the Future of Humanity (Frontiers Forum talk)
- Michael Wooldridge, The Road to Conscious Machines
- Introductory Guide to Large Language Models
- Introduction to my AI Knowledge Framework level 0
- Amir Husain, The Sentient Machine
- The Sentient Machine: Key Takeaways on AI, Humanity, and Our Future
- Introductory Guide to Artificial General Intelligence
