Knowledge Collapse: The Hidden Risk to Society

An image representing the loss of human knowledge over time without us even knowing it.

This is the third post in our series on Artificial Intelligence (AI) influence on human cognition. It builds on the concerns covered in the previous post, We’ve been here before – A history of cognitive concerns, and asks the obvious question: if our cognitive ability as individuals begins to decline, what would the long-term impact be on our society?

Many of the technologies I mentioned in We’ve been here before – A history of cognitive concerns reshaped society. Reading and writing allowed us to share ideas and thoughts across space and time, letting us read books written by authors who are no longer alive and get a glimpse of their inner thoughts and ideas.

The invention of the printing press and the Internet made knowledge more accessible, which shook up our society: some businesses and jobs vanished while new companies and jobs took their place.

How will AI impact our society? Let’s have a chat and see where it takes us.

What is Knowledge Collapse?

Knowledge collapse is a concern that long-term use of AI and predictive algorithms will lead to a gradual reduction of human knowledge over time.

This loss of knowledge will be most notable in these areas:

  • Specialist knowledge
  • Niche knowledge
  • Unorthodox ideas

This may already be happening, as we discussed in last week’s post, We’ve been here before – A history of cognitive concerns. Research is beginning to indicate that work created with the help of AI is narrower than work we create on our own. Over time, this narrowing is likely to increase.

Could we be heading towards a world where we surrender power to AI because we have lost our cognitive abilities?

The Content Devaluation Risk

Are we on the brink of a new knowledge revolution? A revolution driven by generative Artificial Intelligence, trained on the whole of human knowledge and able to generate new content when prompted by us. But because generative AI is a prediction machine, that content will be less diverse than content a human would create on their own.

If this content is then used to train new Large Language Models, the output will likely become narrower still, as the most probable outcomes become even more likely with each round of training. Could this eventually lead to the loss of less popular ideas and concepts? Could it lead to the loss of human knowledge?
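This feedback loop can be sketched as a toy simulation. The numbers and the "sharpening" parameter below are illustrative assumptions, not measurements of any real model; the point is only that when each generation of content favours the most common ideas from the last, rarer ideas eventually disappear and cannot come back.

```python
import random
from collections import Counter

random.seed(0)  # fixed seed so the toy run is repeatable

def regenerate(corpus, gamma=2.0):
    """Resample the corpus, favouring common ideas.

    gamma > 1 mimics a prediction machine that over-weights
    likely outputs; gamma is an illustrative assumption.
    """
    counts = Counter(corpus)
    ideas = list(counts)
    weights = [counts[idea] ** gamma for idea in ideas]
    return random.choices(ideas, weights=weights, k=len(corpus))

# A "corpus" of 1000 items drawn from 50 distinct ideas, evenly represented.
corpus = [f"idea-{i}" for i in range(50)] * 20

diversity = [len(set(corpus))]
for _ in range(20):  # 20 generations of "training on AI output"
    corpus = regenerate(corpus)
    diversity.append(len(set(corpus)))

print(diversity)  # distinct ideas surviving per generation: never increases
```

Because the next generation can only sample ideas still present in the current one, any idea that drops to zero is lost for good, which is the essence of the collapse concern.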

How to Protect Ourselves from Knowledge Collapse

To protect ourselves from a society-wide knowledge collapse, we should look to do the following:

  • Don’t allow ourselves to become totally reliant on Artificial Intelligence
  • Invest in preserving our specialist knowledge
  • Prevent AI from training on data generated by AI models

By keeping some of our activities free from AI, we give ourselves the opportunity to use our cognitive skills and prevent those skills from disappearing. Besides, do we really want to give up the things that give us the most pleasure? This ties in with investing in specialist and niche knowledge.

By preventing AI from training on AI-generated data, we are slowing down, if not stopping, the gradual averaging down of human knowledge.

But individual citizens also have a part to play by consuming digital content ethically: checking your facts and consuming content from independent media outlets. This won’t just protect you from AI-generated content, but also from the algorithms used by social media platforms that encourage doom scrolling through content likely to enrage you.

The Role of Thinking for Yourself

Of course, society is the sum of humanity and we must all play our part by protecting our own cognitive abilities.

Thinking for yourself is a deliberate practice, developed by being curious about your own thoughts. I explore mine by keeping a journal. This practice can help you discover your own path through life, giving you authenticity and ownership of your own life.

Conclusion

In the first three posts in this series, we have covered the potential downsides of generative AI and Large Language Models. In next week’s post, we will explore strategies that we can look to introduce into how we use AI to manage this risk. If enough of us do this, we can ensure that knowledge collapse never happens.

