Have you been fiddling with generative artificial intelligence (AI) lately? ChatGPT has popularized generative AI, and people are typing their fingerprints clean, chatting with their new know-it-all partner. But, although AI models benefit many aspects of life, we must recognize their pitfalls, especially those affecting our critical thinking and analytical skills. What are the drawbacks of the surge of accessible generative-AI models? And how can we use AI to strengthen our thinking skills?
Generative AI refers to artificial intelligence models that generate new content based on existing data, including text, images, music, and sounds. In other words, data from the internet or books have trained large language models like ChatGPT and the image-generating DALL-E to generate content based on users’ inquiries (so-called prompts). As a result, a generative AI model like ChatGPT will generate content similar to its training material based on large datasets of, for example, books, articles, and other content. However, it’s important to note that the generated content isn’t a copy of the training material. Instead, it uses large datasets as resources to create new content with similar themes, styles, and genres.
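To make "generating new content based on patterns in existing data" concrete, here's a deliberately tiny sketch: a bigram model "trained" on a twelve-word corpus. This is nothing like ChatGPT's actual architecture (which uses neural networks with billions of parameters), but it illustrates the core idea that generated text mimics, rather than copies, its training material.

```python
import random
from collections import defaultdict

# Toy illustration only (not how ChatGPT works internally): a bigram
# model "trained" on a tiny corpus generates new text that resembles,
# but does not copy, its training material.
corpus = "the cat sat on the mat and the dog sat on the rug".split()

# "Training": learn which words follow which.
followers = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    followers[current_word].append(next_word)

def generate(start, length, seed=0):
    """Generate new text by sampling the learned word-to-word patterns."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(length - 1):
        options = followers.get(words[-1])
        if not options:
            break  # dead end: no known follower for this word
        words.append(rng.choice(options))
    return " ".join(words)

print(generate("the", 8))
```

The output is always built from patterns the model has seen, which is why a real model's output inherits the themes, styles, and biases of its training data.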
It’s impossible to deny the brilliance of these tools. Sit down for a minute or two and think about their capabilities. The current generative AI models contain hundreds of billions of learning parameters (adjustable values learned during training), which is already impressive. And what’s more, future models are expected to have about a hundred trillion – that’s 100,000,000,000,000 – parameters, which is immense from a technological standpoint.
Generative AI can propel our knowledge and shape our societies
It’s no secret that generative AI – and AI in general – can change the world and shape its prevalent worldviews. Regardless of your opinion about the technology, AI tools are here to stay and will expand beyond our everyday experience and imagination. Since we live in a technology-driven society in which our knowledge and, thus, sense of reality depend on online information, it’s more than fair to wonder how AI will affect these.
Unsurprisingly, AI tools like DALL-E and ChatGPT have triggered both standing ovations and harsh criticism. Artists seem to have the least patience with AI-generated outcomes. Some claim we’re “watching the death of artistry unfold right before our eyes.” Quite poetic, still.
However, it’s undeniable that generative AI already offers massive benefits for knowledge seekers.
For one, we see how these new, accessible AI tools have democratized our access to information. Let’s take ChatGPT as an example. ChatGPT instantly answers almost any question, leveling the playing field of knowledge access – as long as it remains accessible. No matter your background or level of expertise, you’ll obtain almost any information in no time. (It’s worth noting, however, that the tools often generate far from perfect results, as you’ll see.)
Apart from their increased accessibility, these tools can provide knowledge based on your level of expertise. Suppose you want to learn about generative AI’s learning parameters and what they mean but have zero knowledge about AI. In that case, you can ask ChatGPT to explain these terms in a language a 10-year-old can understand.
If the answer is unclear, you can ask the tool to elaborate further, “Please explain it so that a 6-year-old understands your description.” And bam, there’s a version of the same answer, which your inner child can enjoy.
Or perhaps ask ChatGPT to formulate the description as a Shakespearean poem if you’re more into creative learning.
You get the gist of it by now. Since text-based AI models generate content similar to human communication, you’re presented with relevant, human-like answers in seconds (sometimes minutes if the website is overcrowded).
The reliance that hurts your critical thinking and analytical skills
In short, we can ask generative AI tools to feed us any information and instantly have it in front of us. But what are the shortcomings of these powerful tools? Regarding critical thinking in particular, we can start by asking what all this readily available information means for our future ability to think and analyze facts.
A self-reinforcing feedback loop of biases and incorrect information
A fundamental problem with generative AI tools emerges once you realize they’re not made from magic and miracles. Humans have trained them to recognize patterns in prompts and to present content based on those patterns and on the data the models learned from during training.
Generative AI tools don’t possess cognition; they lack consciousness, emotions, free will (if you believe in that), and abstract thinking. So, they lack intrinsic critical thinking and cannot understand the meaning behind the patterns they recognize. As a result, they ultimately answer to their true masters: the humans who design and train the system.
But what does this mean for our critical thinking, analytical skills, and knowledge? Well, generative AI tools can amplify and maintain biases and errors from the data used to train the model. These inevitable human-created biases and errors can eventually self-reinforce in a positive feedback loop: the more we rely on AI-generated content, the greater the probability that future AI-model training will depend on those sometimes flawed data points.
Put crudely, AI-generated internet content risks wiping out dissident voices and unpopular beliefs critical of the established narrative. As a result, the existing narrative would further homogenize sources and shared knowledge.
Sure, you can bash your opponent’s arguments all you want, claiming they’re misinformed or naïve. Still, deep inside, you need deviating opinions to develop your thinking and solidify sound knowledge. Just ask practitioners of switch-side debating, a competitive format in which debaters argue both sides of a motion. How would they improve their critical thinking if they had no opponents?
So, if we’re not careful as a society, we risk ending up in an AI-generated, self-propagating information bubble. Ultimately, AI-generated content is only as good as the data it learns from, and human-fed data points are its original source. And, without a doubt, we know AI contains biases.
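The feedback loop described above can be sketched as a toy simulation. This is a hypothetical model with a made-up amplification factor, not a measurement of any real system: assume an AI slightly over-represents the majority viewpoint each time it regenerates content, and that content joins the next training corpus.

```python
# Hypothetical toy model of the bias feedback loop: each training round,
# the model over-represents the majority viewpoint by a small factor
# (5% here, an arbitrary illustrative number), and its output becomes
# part of the next round's training data.
def next_generation(majority_share, amplification=1.05):
    # The majority view's share of the corpus grows, capped at 100%.
    return min(1.0, majority_share * amplification)

share = 0.6  # majority viewpoint starts at 60% of the corpus
history = [share]
for _ in range(10):
    share = next_generation(share)
    history.append(share)

print(f"after 10 rounds: {history[-1]:.2f}")  # prints "after 10 rounds: 0.98"
```

Even a tiny per-round bias compounds: in this sketch, a 60/40 split drifts toward near-total uniformity in ten rounds, which is the "wiping out dissident voices" risk in miniature.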
The blinding transparency issues of generative AI
How do generative AI tools like ChatGPT decide how to respond to your prompt and which sources to use? Can you explain their internal processes and algorithms in detail? Most of us can’t. Currently, generative-AI models are black boxes, and this lack of understanding of AI decision-making creates issues for purposeful knowledge seeking.
Imagine you’re reading your favorite newspaper and land on an article claiming that ChatGPT is transforming journalism and that the profession is changing irreversibly. You look for references supporting these bold statements but find none. Zero resources. Zip. You’re left sipping morning coffee, unaware of the reliability or accuracy of the information you just consumed. How are you supposed to form well-informed opinions and make decisions based on dubious information that someone wrote? You’d follow up, search the web, and screen for other sources. Now, what if all other sources lack references? And so on…
The same applies to AI-generated information, where the sources and motivations behind a response can remain concealed. This lack of transparency keeps you from assessing the generated information’s reliability, accuracy, and biases, increasing your chances of consuming misleading or inaccurate data.
Related: Brandolini’s law: why you sometimes struggle to refute BS (and how to solve it).
Lack of transparency also gives you little insight and control over generative AI tools’ decision-making, limiting you from changing parameters according to your needs. For example, you cannot evaluate the possible biases integrated into the tools without sources. Depending on your aims, you’re left with little possibility to tweak the algorithms to fit your needs and filter these biases or other confusing data.
The numbing effect of a uniform information and news landscape
One reason I’m a free-speech advocate is selfishness; I need diversity to maintain sharpness (that’s only one of the reasons, of course). Developing and strengthening critical thinking and analytical skills requires exposure to different ideas, viewpoints, and sources. Diversity triggers your critical thinking and analytical skills to evaluate information and references, consider alternative information, and form independent conclusions.
Uniformity, however, is the polar opposite of diversity, including in the context of critical thinking and analytical skills.
Since language models like ChatGPT analyze and generate content based on trends and patterns in big datasets, an over-reliance on these models could theoretically create a uniform information landscape. If ChatGPT came to dominate that landscape, it could become a repetitive, uniform source of information that reinforces its own trends and biases.
Let’s try a little mind game, shall we? Suppose you committed yourself to eating exclusively liquid foods for two years. How do you imagine your body would react to its first encounter with solid foods (say, beef with raw carrots) after two years of liquid madness? Your teeth would probably hurt, and your digestive system would struggle to process the food, possibly causing bloating, constipation, and stomach pain. It’s a “use it or lose it” type of situation.
Similarly, if you’re exposed to a uniform information landscape, you detrain your brain from processing challenging and alternative perspectives. In theory, a lack of information diversity could keep your brain from questioning data and forming new opinions, starving your critical thinking and analytical skills. Again, a “use it or lose it” type of situation.
Few things challenge your assumptions and beliefs in a uniform news landscape with a limited range of perspectives or ideas. This lack of stimuli could limit you from developing critical thinking skills, which is why Ivory Embassy advocates challenging ideas; yours and your idols’. It cultivates and strengthens your critical thinking.
You may also find this interesting: Think like a scientist: The power of a scientific mindset.
We already have experience with algorithms that, more or less, enforce uniformity or filter-bubble effects: search-engine algorithms. They personalize and recommend newsfeeds based on your previous searches. This convenient feature may speed up your information search. However, filter bubbles or echo chambers can also give you the impression you’re exposed to diverse news when, instead, you’re likely consuming information that aligns with your views and biases.
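To make the filter-bubble mechanism concrete, here's a minimal sketch with made-up topics and a deliberately naive scoring rule (no real platform's algorithm is this simple): a recommender that scores candidates purely by how often you've clicked similar items will keep serving more of the same.

```python
from collections import Counter

# Naive toy recommender: pick the candidate topic the user has read
# about most often. Each click then feeds back into the history,
# narrowing the feed further.
def recommend(history, candidates):
    counts = Counter(history)
    return max(candidates, key=lambda topic: counts[topic])

history = ["politics", "politics", "sports"]
feed = ["politics", "sports", "science"]

for _ in range(5):
    pick = recommend(history, feed)
    history.append(pick)  # each click reinforces the existing pattern

print(Counter(history))  # prints Counter({'politics': 7, 'sports': 1})
```

Starting from a mild 2-to-1 preference, the feed collapses to a single topic and "science" is never shown at all: the echo chamber in a dozen lines.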
All plagiarism and no checks makes pupils a dumb bunch?
Then there’s plagiarism. Some claim that ChatGPT may harm our – particularly children’s – education as these tools can create “high-tech plagiarism,” as Noam Chomsky described ChatGPT in a recent interview. The Canadian novelist and cultural commentator Stephen Marche even proposed a timeline for the educational system’s death based on the increasing popularity of generative AI.
Whether you trust this prognosis or not, it’s fair to suspect that plagiarizing material will become easier as AI tools spread among students. On the one hand, professors claim that ChatGPT generates terrible essays. On the other hand, it seems ChatGPT can create scientific abstracts that fool researchers who cannot differentiate between these and human-written abstracts. With future improvements in generative-AI tools and uneven accessibility to AI checkers in schools, these tools may give teachers and professors headaches.
The question is, how much more of a threat to education do generative-AI tools pose than existing forms of cheating? Because while it’s true that ChatGPT can function as yet another tool that undermines education, cheating has been present in education for as long as we can remember. Still, the accessibility of ChatGPT among students doesn’t help teachers evaluate students in an overworked school environment.
How to turn these pitfalls to your advantage
Does all this mean we should avoid generative AI to maintain or strengthen our critical thinking? Hell no! On the contrary, you should stay updated with technologies like AI, adapt to their advancements, and use them to your benefit. But, to reap the benefits of AI, you’ll need to use these tools responsibly:
Use generative AI as a tool, not a replacement
As a first step, realize that generative AI tools are just that: tools. They should not replace active learning, independent information seeking, or other resources. Although AI-generated content can be beneficial and readily accessible, you should never rely on it exclusively. Be hungry for more.
For example, if you’re writing a report for work or an essay for school, don’t merely copy and paste whatever the tools spit out. Instead, use the content as a starting point and use this information to continue researching the topics.
Don’t assume complete accuracy from generative-AI tools. In fact, realize they can generate incorrect information – quite often, to be frank. Investigate opposing opinions and use your critical thinking and analytical skills to refine, extend, and build on AI-generated knowledge. Fact-check and verify the generated facts.
Argue with generative-AI
This might seem odd, but to trigger your critical thinking skills, you can scan through AI-generated content line by line and look for inaccuracies. Once you encounter questionable information, try to question the AI models’ choice of content, first with the same AI tool, later by solving the question yourself, and lastly with additional resources. This practice can train you to formulate your knowledge and consider opposing points of view.
I’ve had several arguments with ChatGPT about dilution factors, content writing, and scientific facts. In the case of ChatGPT, you’ll probably note that it will apologize at first. Still, it might continue to give you answers that deviate from your knowledge – sometimes with new information. Eventually, you’ll need to confirm with independent sources, such as search engines, colleagues, or other experts. (Be wary, it’s easy to get stuck in a never-ending argument loop with ChatGPT.)
Learn with generative AI
If used correctly, generative AI can be a handy learning tool. For example, use it to generate quizzes or practice exercises to help you learn an unfamiliar topic.
But the same recommendation applies here: don’t rely exclusively on AI-generated content. Use it as a starting point, and then use your critical thinking and analytical skills to fill in knowledge gaps and ensure you fully understand the material.
Play around with different AI models
All AI models differ from one another. Experiment with different models and evaluate their trends and biases. Figure out which AI model works best for your purpose. This will matter once more AI models become accessible.
Free up brain space from boring and repetitive tasks
Are you a content writer? Ask ChatGPT to create your article outlines and meta descriptions. A researcher? Ask it to list relevant materials or products for manuscripts, posters, or protocols. An artist? Ask generative AI tools like DALL-E to create CD or book cover ideas.
These tasks are crucial but distract your brain and take time from your main tasks, such as writing, studying, or creating. Delegating simple assignments to these tools can give you extra time to do the work that matters – and, while we’re at it, practice your critical thinking.
Have fun with AI!
Remember to have fun with AI tools. Create copy in different styles, jingles for your fantasy radio show, or art to hang up in the apartment when you’re alone. Experiment and expand your imagination; it will nourish your critical thinking skills.
Like many things, science and technology are not necessarily good or evil by default. Instead, their qualities rely on how we use them. A complete reliance on AI can impair our critical thinking and analytical skills. Simply speaking (and for the last time), you lose what you don’t use.
Of course, the silver lining for content creators is that the ones with original and provocative ideas might thrive in these conditions. Their content will likely stick out from the dull and repetitive narrative and trigger readers to think more critically and analytically. Again, our critical and scientific minds need opposing views.
I’ve learned to love ChatGPT, and like Theodore in Her, I hope my feeling is mutual.
After all, the tool has increased my productivity and creativity during the last months. However, if we want to keep thinking critically – and improve our scientific mindset – we need to use AI tools with care.
So go ahead, and use ChatGPT and similar AI tools to create a foundation for your knowledge and critical thinking. But don’t stop there. Look up other sources, challenge yourself, your ideas, and others. Studies say this makes you feel better. Now, ask me which studies.
Frequently asked questions:
Can generative AI content really be trusted?
Kind of. While generative AI can be a powerful tool, it’s important to stay skeptical and fact-check any information it generates. Generative-AI tools often produce incorrect information and biased responses, and they won’t give you complete expertise in a topic. Keep fact-checking what you learn from AI tools against other sources.
Can generative AI help me learn?
Yes! Generative AI can be an excellent tool for learning. Still, it should collaborate with – not replace – your critical thinking and analytical skills.