When I was growing up during the 1970s and 1980s, the concept of Artificial Intelligence, an intelligent computer, was largely confined to science fiction: K9 in Doctor Who, Orac in Blake's 7, or HAL in 2001: A Space Odyssey.
Then, in November 2022, with the public release of ChatGPT, Artificial Intelligence suddenly seemed to emerge into reality. I remember trying it and being impressed, even though I felt the technology wasn't quite mature yet. It was good enough for me to realise that AI, in the form of Large Language Models, was going to be the next big thing, so I decided to start learning about the underlying technology.
Now, three years later in November 2025, I use generative AI several times a week at the very least.
In this guide for beginners, I will break down what generative AI is and provide examples of how I use it.
What is Generative AI?
Generative Artificial Intelligence is AI capable of creating new content. There are generative AI models for several mediums, including:
- Text
- Images
- Video
- Audio
Some models can create content across different mediums, while others specialise in one specific area.
These models are trained on large datasets of human-created data. The model uses what it has learned to predict the best output, not in one go but piece by piece, building up the requested outcome. A perfect example of this is a Large Language Model, which we will explore in a later section.
Because generative AI is essentially a prediction machine, it can make mistakes and present them with complete confidence.
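To make "piece by piece" concrete, here is a minimal sketch of the idea in Python. The vocabulary and probabilities are invented purely for illustration; a real model learns its probabilities from vast amounts of training data and predicts tokens rather than whole words.

```python
import random

# Toy "model": for each word, the probability of each word that can follow.
# These numbers are invented for illustration; a real model learns its
# probabilities from vast training data and works with tokens, not words.
NEXT_WORD_PROBS = {
    "<start>": {"the": 1.0},
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 0.7, "slept": 0.3},
    "dog": {"ran": 0.8, "barked": 0.2},
}

def generate(max_words=4):
    """Build a sentence one word at a time, each choice based on the last word."""
    word, output = "<start>", []
    for _ in range(max_words):
        choices = NEXT_WORD_PROBS.get(word)
        if not choices:  # nothing learned for this word: stop generating
            break
        # Sample the next word according to the model's probabilities.
        word = random.choices(list(choices), weights=list(choices.values()))[0]
        output.append(word)
    return " ".join(output)

print(generate())  # e.g. "the cat sat"
```

Because every step is a probabilistic guess about what comes next, a plausible-sounding wrong answer is produced with exactly the same mechanism, and the same confidence, as a correct one.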
The Powerhouse Behind It: Large Language Models
Large Language Models are trained using a combination of machine learning over vast amounts of text and human feedback that refines the model's behaviour.

How a Large Language Model Works
The underlying neural network is called a Transformer. The Transformer was the first transduction model based solely on self-attention. The original design consists of an encoder stack, several identical encoder layers placed one on top of another, paired with a decoder stack containing the same number of decoder layers.
Self-attention is a mechanism that relates different positions within a single sequence in order to compute a representation of that sequence (there is a small numerical sketch after the list below).
Self-attention enables the following:
- Reading comprehension
- Abstractive summarisation
- Textual entailment
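To show what self-attention actually computes, here is a small numerical sketch in Python using NumPy. It implements the scaled dot-product attention from "Attention Is All You Need", softmax(QK^T / sqrt(d_k))V; the sequence length, embedding size, and random weights are invented for illustration, since a trained model learns its weights.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over the last axis."""
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention: softmax(QK^T / sqrt(d_k)) V."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv   # project each token three ways
    d_k = K.shape[-1]
    # Every position scores every other position, so each output vector
    # can draw on the whole sequence at once.
    weights = softmax(Q @ K.T / np.sqrt(d_k))
    return weights @ V

# Illustrative sizes: three tokens, four-dimensional embeddings,
# random (untrained) weight matrices.
rng = np.random.default_rng(42)
X = rng.normal(size=(3, 4))
Wq, Wk, Wv = (rng.normal(size=(4, 4)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)  # (3, 4): one new vector per token
```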
Large Language Models, such as Google Gemini and OpenAI's GPT-5, can now write code at a near-human level. Users describe their requirements in natural language, and the AI outputs the required code, as in the sketch below.
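As a rough illustration of that workflow, this sketch uses the official openai Python package to turn a natural-language requirement into code. It assumes an API key in the OPENAI_API_KEY environment variable; the model name is taken from this post and the prompt is illustrative, so substitute whatever chat-capable model you have access to.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-5",  # model named in this post; substitute any available chat model
    messages=[
        {
            "role": "user",
            "content": "Write a Python function that returns the n-th Fibonacci number.",
        },
    ],
)

# The model replies with the requested code as plain text.
print(response.choices[0].message.content)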

My Own Experiences with LLMs
As I alluded to in the introduction, I didn't think ChatGPT was truly "there" when I first tried it in November 2022. However, it impressed me enough to realise it was going to be the next significant technological advancement.
Now, some three years later, I use a Large Language Model practically every day. I'm writing this post in Obsidian, a knowledge management application, with a community plugin called Obsidian Copilot installed. The plugin gives an AI, in the form of a Large Language Model, access to the knowledge held in my Personal Knowledge Management (PKM) system, which means I can query my own notes in plain English. I used this ability to have the AI summarise what I had learned about generative AI and lay out a structure for this post. I am using that layout as a starting point, though as writing is a creative medium, I may deviate slightly; the AI's structure is there to help me get started.
I also find that Large Language Models can be incredibly helpful for thinking through complex ideas and, with the correct prompt, can critique your work effectively.
The Large Language Models or tools I routinely use are:
- Obsidian Copilot community plugin
- Claude
- Google Gemini
- Google NotebookLM
The Moment I Realised AI Had Arrived
As I mentioned earlier, I believe Large Language Models can be excellent thinking partners. These two stories will, I think, explain why.
Improving My Prompt Engineering
I wanted to improve my ability in prompt engineering, so I asked the Obsidian Copilot plugin to go through my PKM and analyse what I already knew about prompt engineering and Large Language Models, as I was sure I had notes on the subject.
I also asked the model to identify any gaps it saw in my knowledge. This gave me a picture of what I already knew and the likely areas where my knowledge was lacking, which provided context for my next query.
My next query asked the AI to design a curriculum for me to learn more about prompt engineering, with recommended content to consume. I fed in the response from my original query; this added context and enabled the model to tailor a plan specifically to me.
Does AI Offer a Perspective on Humanity?
This example blew my mind, as it showed me the possibility of using generative AI as a thinking partner. I had the idea that you could extend the concept of AI acting as a reflection of humanity to AI having a perspective on humanity.
I asked the AI to analyse this idea and find any weaknesses, which it did. It argued that a person’s perspective is a combination of knowledge and lived experiences, and therefore, an AI cannot have a perspective.
As I read the answer, I thought that while it might not have a perspective in the way humans do, it does have a unique view of humanity. Because it is trained on so much of our data, it can see us, warts and all. We just don't have a word to describe that unique viewpoint yet.
But it was an idea I wanted to explore, along with the related idea that AI, beyond a certain point but before it reaches Artificial General Intelligence (AGI), will act as a mirror, reflecting intelligence back at us. It is an intriguing idea because it offers us a way to learn more about humanity and about ourselves as individuals.
So, I captured the feedback on this idea from the various LLMs, along with other relevant notes, in Google NotebookLM. I plan to add further material there and let the idea develop over time.
Conclusion
After reading this guide, you should have some understanding of what generative AI is and how it has impacted the life of one technology blogger.
Anyone with an interest in computing and technology should be exploring generative AI and considering how it affects us. Even if development of the underlying technology slowed down today, its impact would continue to shape society.
Many models have a free tier; I would suggest you explore them.
Further Reading
- Computerworld, Q&A: The Human-Machine Relationship Requires Mutual Understanding, Respect for AI
- Matt Burgess, How ChatGPT and Other LLMs Work—and Where They Could Go Next
- Peter H. Diamandis, No Human Coders in 5 Years – An in-depth explanation of LLMs.
- Jay Alammar, The Illustrated Transformer – More information on the Transformer architecture.
- Ashish Vaswani et al., Attention Is All You Need – The original research paper from Google introducing the Transformer architecture that underpins modern LLMs.
- Introductory Guide to Large Language Models – My guide on Large Language Models.
