[Image: A person inside the Matrix facing a black box of question marks.]

The AI Apocalypse: Are we losing our ability to think critically?

Have you been fiddling with generative artificial intelligence (AI) lately? ChatGPT has popularized generative AI, and people are typing their fingerprints clean, chatting with their new know-it-all partner. But, although AI models benefit many aspects of life, we must recognize their pitfalls, especially those affecting our critical thinking and analytical skills. What are the drawbacks of the surge of accessible generative-AI models? And how can we use AI to strengthen our thinking skills?

Generative AI refers to artificial intelligence models that generate new content – text, images, music, and sounds – based on existing data. In other words, large language models like ChatGPT and the image generator DALL-E have been trained on data from the internet and books to generate content in response to users’ queries (so-called prompts). As a result, a generative AI model like ChatGPT produces content similar to its training material, drawing on large datasets of, for example, books, articles, and other content. However, it’s important to note that the generated content isn’t a copy of the training material. Instead, the model uses those large datasets as resources to create new content with similar themes, styles, and genres.
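To make the idea of “learning patterns from data” concrete, here’s a toy sketch in Python. It uses a word-level Markov chain, which is vastly simpler than the transformer networks behind ChatGPT, but it illustrates the same principle: the output follows statistical patterns in the training text without copying it verbatim. The training text and function names here are invented for illustration.

```python
import random
from collections import defaultdict

# Tiny, made-up training text for the toy model.
training_text = (
    "generative models learn patterns from data and "
    "generative models produce new text from patterns"
)

def train(text):
    """Count, for each word, which words follow it in the training text."""
    follows = defaultdict(list)
    words = text.split()
    for current, nxt in zip(words, words[1:]):
        follows[current].append(nxt)
    return follows

def generate(follows, start, length=8, seed=0):
    """Walk the learned transitions to produce new, similar-looking text."""
    rng = random.Random(seed)
    word, output = start, [start]
    for _ in range(length):
        options = follows.get(word)
        if not options:
            break  # dead end: no word ever followed this one in training
        word = rng.choice(options)
        output.append(word)
    return " ".join(output)

model = train(training_text)
print(generate(model, "generative"))
```

Run it with different seeds and you’ll get different, but familiar-sounding, word sequences – patterns recombined, not sentences retrieved.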

It’s impossible to deny the brilliance of these tools. Sit down for a minute or two and think about their capabilities. Current generative AI models contain hundreds of billions of learning parameters (the adjustable values a model tunes during training), which is already impressive. And what’s more, future models are expected to have about a hundred trillion – that’s 100,000,000,000,000 – parameters, which is immense from a technological standpoint.

Generative AI can propel our knowledge and shape our societies

It’s no secret that generative AI – and AI in general – can change the world and shape its prevalent worldviews. Regardless of your opinion about the technology, AI tools are here to stay and will expand beyond our everyday experience and imagination. Living in a technology-driven society, it’s more than fair to wonder how AI will affect our knowledge and reality.

Unsurprisingly, AI tools like DALL-E and ChatGPT have triggered both standing ovations and harsh criticism. Artists seem to have the least patience with AI-generated outcomes. Some claim we’re “watching the death of artistry unfold right before our eyes.” Quite poetic.

However, we’re already experiencing the massive benefits generative AI offers to knowledge seekers.

We see how these new, accessible AI tools have democratized our access to information. Let’s take ChatGPT as an example. ChatGPT instantly answers almost any question, leveling the playing field of knowledge access – as long as it remains accessible. No matter your background or level of expertise, you’ll obtain almost any information in no time.

It’s worth noting, however, that the tools often generate far from perfect results. But more about that soon.

Apart from their increased accessibility, these tools can provide knowledge based on your level of expertise. Suppose you want to learn about generative AI’s learning parameters and what they mean but lack knowledge about AI. In that case, you can ask ChatGPT to explain these terms in a language a 10-year-old can understand.

If the answer is unclear, you can ask the tool to elaborate further, “Please explain it so that a 6-year-old understands your description.” And bam, there’s a version of the same answer for your inner child to enjoy.

[Image: A conversation in which ChatGPT explains the scientific mindset in words a 10-year-old can understand, adapting its output to the user’s requests.]

Or perhaps ask ChatGPT to formulate the description as a Shakespearean poem if you’re more into creative learning.

You get the gist of it by now. Since text-based AI models generate content similar to human communication, you’re presented with relevant, human-like answers in seconds.
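The prompt adjustments above boil down to changing a few words in a template. Here’s a sketch: the message format mirrors the shape chat-style APIs commonly use, but the helper function and its names are hypothetical, and the actual API call is deliberately left out.

```python
def explain_prompt(concept, audience):
    """Build a chat-style message list asking a model to explain
    a concept at the level of a given audience."""
    return [
        {"role": "system", "content": "You are a patient teacher."},
        {
            "role": "user",
            "content": f"Explain {concept} so that {audience} can understand it.",
        },
    ]

# Same concept, two audiences: only the audience string changes.
for audience in ("a 10-year-old", "a 6-year-old"):
    messages = explain_prompt("learning parameters in generative AI", audience)
    print(messages[1]["content"])
```

The point is that tailoring an answer to your level of expertise costs you one edited phrase, not a new skill set.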

The reliance that hurts your critical thinking and analytical skills

But what are the shortcomings of these powerful tools? We can start by asking what access to readily available information means for our future ability to think and analyze facts.

A self-enforcing feedback loop of biases and incorrect information

A fundamental problem with generative AI tools emerges once you realize they’re not made from magic and miracles. Humans have trained them to recognize patterns in prompts and present content based on these patterns and data the models have used as learning material during training. 

Generative AI tools lack consciousness, emotions, free will, and abstract thinking. In other words, they lack intrinsic critical thinking and cannot understand the meaning behind the patterns they recognize. As a result, they will ultimately answer to their true masters: their designers and trainers.

But what does this mean for our critical thinking, analytical skills, and knowledge? Well, generative AI tools can amplify and perpetuate biases and errors in the data used to train them. These inevitable human-introduced biases and errors can eventually self-reinforce in a positive feedback loop: the more we rely on AI-generated content, the greater the probability that future AI models will be trained on sometimes flawed data points.
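A toy simulation can show how such a loop plays out. Suppose 60% of the original human-written data supports view A, and each train-on-generated-content cycle slightly overproduces the majority view. The sharpening rule below is invented for illustration – it isn’t taken from any real training pipeline – but it captures the amplification dynamic.

```python
def retrain(p, sharpen=2.0):
    """One train-on-generated-content cycle: a model fit to data where
    a fraction p holds view A overproduces the majority view."""
    a = p ** sharpen
    b = (1 - p) ** sharpen
    return a / (a + b)

p = 0.6  # 60% of the original, human-written data supports view A
for cycle in range(6):
    p = retrain(p)
    print(f"after cycle {cycle + 1}: view A holds {p:.3f} of the data")
# Within a handful of cycles, the minority view all but disappears.
```

A 60/40 split isn’t far from a healthy debate; after a few cycles of models learning from models, it’s a monologue.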

[Image: Asking ChatGPT to end a sentence with the word “chimpanzee.” Sometimes, we argue but end it on a good note.]

Put crudely, AI-generated internet content could drown out dissident voices and unpopular beliefs critical of specific narratives. The narratives an AI model favors would then be further reinforced, homogenizing the available knowledge.

We occasionally need opposing views to develop our thinking. Take switch-side debating, a technique in which competitive debaters must argue both sides of an issue. How would they improve their critical thinking if they had no opponents?

So, if we’re not careful as a society, we risk ending up in an AI-generated, self-propagating information bubble. Ultimately, AI-generated content is only as good as the data it learns from: human-fed data. And we know that data contains biases.

The blinding transparency issues of generative AI

How do generative AI tools like ChatGPT select sources and respond to your prompts? Can you explain their internal processes and algorithms in detail? Most of us can’t. Currently, AI decision-making resembles a black box: we simply don’t understand all the steps behind a given output, which hampers purposeful knowledge seeking.

To visualize this opaqueness, imagine you’re reading your favorite newspaper. You land on an article claiming something bold, for example, that ChatGPT is transforming journalism irreversibly. You look for references supporting the statement but find none. You’re left sipping your morning coffee, unaware of the reliability or accuracy of the information. So you follow up, search the web, and screen other sources. But what if all the other sources lack references too?

Related: Brandolini’s law: why you sometimes struggle to refute BS (and how to solve it).

A lack of transparency also gives you little insight into, and control over, a generative AI tool’s decision-making, leaving you unable to adjust parameters to your needs. For example, without sources, you cannot evaluate the biases baked into the tools. Depending on your aims, you’re limited in how you can tweak the algorithms to filter out these biases or other unwanted data.

The numbing effect of a uniform information and news landscape

One reason to advocate for free speech is selfish: we need diversity to stay sharp (that’s only one of the reasons, of course). Developing and strengthening critical thinking and analytical skills requires exposure to different ideas, viewpoints, and sources. Diversity forces you to evaluate information and references, consider alternatives, and form independent conclusions.

Uniformity is the polar opposite of diversity. Since language models like ChatGPT analyze and generate content based on trends and patterns in big datasets, an over-reliance on these models could theoretically create a uniform information landscape. An increased reliance on ChatGPT or other models can create a repetitive and uniform source of information that reinforces the respective trends and biases.

Let’s try a little mind game, shall we? Suppose you committed yourself to eating exclusively liquid foods for two years. How do you imagine your body would react to its first encounter with solid foods (say, beef and raw carrots) after two years of liquid madness? Your teeth would probably hurt, and your digestive system would struggle to process the food, possibly causing bloating, constipation, and stomach pain. It’s a “use it or lose it” type of situation.

Similarly, if you’re exposed to a uniform information landscape, you detrain your brain from processing challenging and alternative perspectives. In theory, a uniform information diet can keep you from forming new opinions, starving your critical thinking and analytical skills – again, “use it or lose it.”

This is why Ivory Embassy advocates challenging ideas: yours and your idols’. It cultivates and strengthens your critical thinking.


You may also find this interesting: Think like a scientist: The power of a scientific mindset.


We already have experience with algorithms that, more or less, enforce uniformity or filter-bubble effects: search-engine algorithms. These algorithms personalize results and recommend newsfeeds based on your previous searches. This convenient feature may speed up your search. But filter bubbles (or echo chambers) can give you the impression you’re exposed to diverse news when you’re actually consuming information that aligns with your existing views – a recipe for confirmation bias.
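A bare-bones recommender makes this feedback loop easy to see. In the sketch below (the article data and the overlap-based scoring rule are invented for illustration, far cruder than any real search engine), items are ranked by how many topics they share with your past clicks, so every click on an AI story pushes non-AI stories further down.

```python
def recommend(articles, history_topics):
    """Rank articles by how many topics they share with past clicks."""
    def score(article):
        return len(set(article["topics"]) & set(history_topics))
    return sorted(articles, key=score, reverse=True)

articles = [
    {"title": "AI breakthrough", "topics": ["ai", "tech"]},
    {"title": "Local election results", "topics": ["politics"]},
    {"title": "New chatbot released", "topics": ["ai", "chatbots"]},
]

# A single past click on an AI story already reshapes the whole feed.
feed = recommend(articles, history_topics=["ai"])
print([article["title"] for article in feed])
```

Each click feeds the history, the history shapes the feed, and the feed shapes the next click – convenience and confirmation bias in the same loop.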

All plagiarism and no checks makes Jack a dumb boy?

Then there’s plagiarism. Some claim that ChatGPT may harm our education – particularly children’s – by enabling “high-tech plagiarism,” as Noam Chomsky described ChatGPT in a recent interview. The Canadian novelist and cultural commentator Stephen Marche even proposed a timeline for the educational system’s death based on the increasing popularity of generative AI.

Whether you trust this prognosis or not, it’s fair to suspect that plagiarizing material will become easier as AI tools spread among students. On the one hand, professors claim that ChatGPT generates terrible essays. On the other hand, it seems ChatGPT can create scientific abstracts that fool researchers. With future improvements in generative-AI tools and uneven accessibility to AI checkers in schools, these tools may give teachers and professors headaches.

The question is, how much more of a threat to education do generative-AI tools pose than existing forms of cheating? While ChatGPT can function as yet another cheating aid, cheating has been around for as long as we’ve graded students. Still, the accessibility of ChatGPT may complicate teachers’ evaluations of students in an already overworked school environment.

How to turn these pitfalls to your advantage

Does all this mean we should avoid generative AI to maintain or strengthen our critical thinking? Definitely not! On the contrary, you should stay updated with technologies like AI, adapt to their advancements, and use them to your benefit. But, to reap the benefits of AI, you’ll need to use these tools responsibly:

Use generative AI as a tool, not a replacement

Realize that generative AI tools are just that: tools. They should not replace active learning, information seeking, or resources. Although AI-generated content can be beneficial and readily accessible, you should never rely on said information exclusively. Be hungry for more.

For example, if you’re writing a report for work or an essay for school, don’t merely copy and paste whatever the tools spit out. Instead, use the content as a starting point and use this information to continue researching the topics.

Stay skeptical

Don’t assume complete accuracy from generative AI tools. In fact, they can generate incorrect information – quite often, actually. Investigate opposing opinions and use your critical thinking and analytical skills to refine, extend, and build on AI-generated knowledge. Fact-check and verify the generated facts.

Argue with generative AI

This might seem odd, but to trigger your critical thinking skills, you can scan through AI-generated content line by line and look for inaccuracies. Once you encounter questionable or new information, question the AI models’ choice of content, first with the same AI tool, later by solving the question yourself, and lastly with additional resources. This practice can train you to formulate your knowledge and consider opposing points of view.

I’ve had several arguments with ChatGPT about dilution factors, content writing, and scientific facts. For example, after apologizing, ChatGPT may go on to give contradictory answers – sometimes including new information. Eventually, it’s wise to confirm doubtful details with independent sources, such as articles, colleagues, or other experts. (Be wary: it’s easy to get stuck in a never-ending argument loop with ChatGPT.)

Learn with generative AI

If used correctly, generative AI can be a handy learning tool. For example, use it to generate quizzes or practice exercises to help you learn a new topic.

But the same recommendation applies here: don’t rely exclusively on AI-generated content. Use it as a starting point, and then use your critical thinking and analytical skills to fill in knowledge gaps and ensure you fully understand the material.

Play around with different AI

AI models differ from one another. Experiment with different models and evaluate their trends and biases. Figure out which AI model works best for your purpose. This will matter once more AI models become accessible.

Free up brain space from boring and repetitive tasks

Are you a content writer? Ask ChatGPT to create your article outlines and meta descriptions. Researcher? Ask it to list relevant materials or products for manuscripts, posters, or protocols. How about artists? Ask AI tools like DALL-E to create CD or book cover ideas.

Delegating simple assignments to these tools can free up time for the work that matters, such as practicing your critical thinking.

Have fun with AI!

Remember to have fun with AI tools. Create copy in different styles, jingles for your fantasy radio show, or art to hang in your apartment. Experiment and expand your imagination; it can nourish your critical thinking skills.

Conclusion

Like many things, science and technology are not necessarily good or evil by default. Instead, their qualities rely on how we use them. A complete reliance on AI can impair our critical thinking and analytical skills. Simply speaking, you lose what you don’t use. 

Of course, the silver lining for content creators is that the ones with original and provocative ideas might thrive in these conditions. Their content will likely stick out from the dull and repetitive narrative and trigger readers to think more critically and analytically. Again, our critical and scientific minds need opposing views.

I’ve learned to love ChatGPT, and like Theodore in Her, I hope my feelings are mutual.

So go ahead, use ChatGPT and similar AI tools to create a foundation for your knowledge and critical thinking. But don’t stop there. Look up other sources; challenge yourself, your ideas, and others’. It makes you feel better (now, find the source for that claim).

Frequently asked questions:

Can generative AI content really be trusted?

Kind of. While generative AI can be a powerful tool, stay skeptical and fact-check any information it generates. Generative-AI tools often produce incorrect information and biased responses, and they won’t give you complete expertise in a topic. Keep checking what you learn from AI tools against other sources.

Can generative AI help me learn?

Yes! Generative AI can be an excellent tool for learning. Still, it should collaborate with – not replace – your critical thinking and analytical skills.

