Why you fear science and robots | Fear of the unknown

April 29, 2020 | By Santiago

Reading Time: 9 minutes

“The oldest and strongest emotion of mankind is fear, and the oldest and strongest kind of fear is fear of the unknown” – H.P. Lovecraft (1927).

The thing about fearing the unknown is that it’s a guessing game. You don’t know what scares you and why. The fear of science or scientific advances that can impact our generations to come is no exception. You might find a dozen reasons to doubt them, half a dozen worst-case scenarios. Still, ultimately, for most of you, these are merely speculations; a guessing game.

And that’s OK. It’s actually more than “OK”; it’s as natural as the need to pee each time you take a dump. Fear and anxiety are central emotions in humans and other animals that help us survive.  

So, if fear is fundamental for survival, how can we advance if we’re always scared of the unknown? How do we develop if we constantly doubt new technologies, new job environments, or new projects, like starting your own circus?

Like most things in life, new unknowns bring both good and evil, and we need to balance our fears and curiosities to function in our world. The same is true for science, including gene editing, artificial intelligence (AI), and robots. We need to find the balance that allows us to develop these technologies without losing our skepticism.

When you’re confronted with new stuff – and that can be everyday events like a presentation or an interview – you need to rationalize your fears and move forward. So, how can we do that? A first step in this process is to ask yourself, “what am I scared of and why?”

Before we look into that, let’s have a look at two examples of science-related fears you might have experienced in some way or another.

You love science and scientific advances

Yes, we know, you love science and scientific advances. We all hail and praise new discoveries that have the potential to improve our lives. Our love–hate relationship with CRISPR and robots is a prime example of that.

CRISPR-based gene editing turned out to be so accessible, cheap, and easy to use that almost anyone could visualize its future benefits: saving lives, treating diseases, and improving the environment. CRISPR experts all over the world proclaimed the tool’s potential through the media megaphone, and we all shared the good news.

Similarly, I remember our joy when we heard, for the first time, a real-life story about robots that could wait tables – waiter robots. They could take an order (say, a hotdog), go back to the bar, and bring you the right dish. [Let me tell you, living in Amsterdam, such a story brings tears to my eyes. Bringing me the right order, you say…? That’s amazing stuff.]

William Richards, who co-created the first robot, sitting in a cafe in Berlin in 1930 with one of his robots. Attribution to Bundesarchiv, Bild 102-09312 (CC-BY-SA 3.0)

We were all stunned by both advances, even though we knew they were still in their infancy. We know, for example, that CRISPR has several limitations in human therapy. We also know that the robot waiter would mess up the order if you changed the customer’s seating position even slightly. But that’s the exciting part, right? It can only get better from here!

…and then you fear science

As long as scientific advances like CRISPR and robots created headlines, we shared the positive prospects, we followed expert researchers in TV studios, and we all smiled while we ate our premade schnitzel in front of the TV. But once the positive hype became outdated, and once (it seemed) all the experts had given their two cents, we suddenly found ourselves at a turning point: CRISPR and robots became the villains.

Our storytelling brains started creating dark futuristic CRISPR scenes – the apocalyptic ones. The ones where CRISPR gene editing creates super soldiers and designer babies that are faster, stronger, and more intelligent than we are.  

We realized that these human-like robot freaks have the potential to develop, get a mind of their own, and eventually outperform us. Easily done for robots; they’re tireless. While we lazy humans need to stop practicing whatever activity we’re doing to eat and sleep, a robot will be on the field practicing without breaks.

You suddenly realize that these freaks will eventually outperform us, whether in football, intelligence, or… war…

Image by Fabien Huck from Pixabay

What once was our little self-created pet started resembling Cujo with every magazine we opened, every panel discussion we watched, and every post we liked.

We feared our ability to play god, to rebuild the Tower of Babel, or to recreate Icarus’ wings. But the really dark predictions took the fear one step further. These featured some sort of warfare against you, your family, friends, and society – for example, through bioterrorism and robot domination.

Perhaps you have already read that CRISPR is a potential weapon of mass destruction – if not, check it out here. Maybe you’ve heard prominent scientists and tech nerds warning us about the dangerous development of future AI. It feels like we’re entering a dark era.  

Why you fear new science

The question remains: why do you develop such a love–hate approach to new advances in science? Why are they so exciting, and at the same time so scary? As you can imagine, the answers to these questions are rather complicated and depend on the individual and the group. Still, let’s look, at a high level, into some of the potential reasons you (or we) fear new scientific advances and the unknown.

Evolution and the individual

Humans have thrived throughout their development because of fears of the unknown, whether primal, like the fear of falling, or acquired, like the fear of spiders. They’re all there for a reason: survival.

Throughout evolution, humans and other animals have developed a sense of caution when confronted with unknowns. We’ve survived because we’ve developed these “right amounts” of fear that make us approach unknowns with skepticism and identify them as possible threats. In fact, this default vigilance-mode is related to the so-called negativity bias, meaning that we have a tendency to focus on negative information rather than positive.

That’s the default first approach to unknowns, but we quickly acclimatize once we’ve figured out that the unknown doesn’t harm us. That is, we reduce our levels of fear once we learn about the unknown; once it becomes less unknown. [Yes, that’s where we’re going.]

And, according to animal studies, fear levels may also depend on individual genetic traits. In these experiments, rats showed lower fear responses to unknowns during early development if their mother had high levels of stress regulation.

So, fear of unknowns, such as scientific advances, is a normal response that has allowed humans and other animals to survive through generations. You don’t fear gene editing and AI robots per se – they intrigue you. Instead, you fear their development into something uncontrollable that might drive us extinct (not the robot waiters, of course). Not necessarily causing you harm at a personal level, but at the species level. The unknown dark future of that thing scares you.

The hierarchical (selfish) reason

Then, humans became superior to everything. Our imposed superiority in nature might partly explain our fears of unknown science and technology, especially if they can collapse our hierarchical structure or advantage.

What do I mean by that? Well, our superior self-image must have intensified once we began domesticating animals and became farmers. “Sit, Fido!! Good boy!” It developed further with slavery. By now, humans are no longer part of nature; instead, we play nature like a game of chess.

Now, what happens to that superiority and status if the animals, slaves, or other groups we oppress start outperforming us?

We create robots and laugh when they behave like humans, picking up hotdogs and whatnot. We develop medical or scientific tools and praise their effectiveness. Still, our amusement has its limits, and we cannot accept simply being part of nature, among robots or super babies (who will have grown up by then). No, we need to rule it, and a loss of status would ruin humankind.

The cultural reason

Apparently, the Japanese approach robots differently than “Westerners”. It turns out that they don’t fear the rise of robots, they embrace it.

According to Asian cultural roots, humans are merely a part of nature and not the superior creatures. According to The Science of Storytelling by Will Storr, Asian cultures tend to be less individualistic (ego-filled), which is apparent in their story structures. Their stories are less self-focused and more group-focused or other-focused.

I believe this view releases the self from all responsibilities, and importantly, our perceived superiority to some extent. It’s pretty relaxing, actually.

Each time Hollywood depicts robots or genetically modified creatures, the shit hits the fan (WALL-E being a rare exception). One superhero needs to save us from the terrors that will inevitably lead us to the apocalypse.

Creative Commons (CC0 1.0)

Like it or not, exposure to these stories and cultures will affect your view of new technology. Stories are compelling, and they can change our opinions on things we lack knowledge about, even though we know the stories are fictional. Somehow, they make the unknown more known, and for us in the “west”, the message about gene editing and robots seems to bring bad news. Not in Japan.

How can you reduce your fears of new science (and unknowns in life)?

We will probably always be cautious about new unknowns. That’s inevitable and, to some extent, necessary for our survival. But I think that we can start approaching our fears of scientific advances and technology differently to reduce our anxiety levels.

Make the unknown more known

Keep being skeptical, for sure; it’s a great thing that we keep a critical eye on old and new advances. They need to benefit us without causing harm. But that doesn’t mean you cannot be skeptical about your skepticism.

This is especially true when your starting point is based on no or limited knowledge of the matter. We recently published a meme on Instagram and Facebook about the so-called Dunning–Kruger effect, a cognitive bias where people with limited knowledge overestimate their abilities. The meme was related to COVID-19, but it applies to new and old science as well, including CRISPR, vaccines, the environment, and robots.

The more knowledge you gain, the better you can estimate how a new technology may or may not be used in the future. Read peer-reviewed papers and well-respected websites, watch presentations, or go to conferences. If you have doubts, let us know, and we’ll try to unpack them.

The point is that the right knowledge is king. Awareness filters out all the disturbing noise coming from fear-mongers and conspiracy theorists.

The same can be said about your everyday life. The better prepared you are for an interview, competition, or presentation, the less energy you’ll spend on trying to predict the future (often predicted as dark). You’ll be cooler and more assertive. Sure, just as with science, there’s a risk that things go terribly wrong, but at least you’ll know then; it’s not a guess anymore.   

Know that predictions you hear might be biased (tainted)

Take a step back and remind yourself that reality is relative and that your fears of an unknown future are products of your surroundings. Just because your environment has an oppressive, aggressive, or conquering mindset, doesn’t mean that it directly translates to others.

For example, “The development of full artificial intelligence could spell the end of the human race.” – Stephen Hawking.

I can’t believe that I’m entering a disagreement with the late Stephen Hawking. All respect to him and his great work. Still, his belief that robots could replace human life is mainly based on how we behave and how we perceive the world (emphasis on we). Our societies start wars, pollute the environment, and drive species extinct, but is this necessarily applicable to any type of super-human being? Are they predisposed to act the same way we do?

Note that we do the same with alien theories. “Once they arrive, they’ll be so advanced that they’ll dominate us.”

You could just as easily think that, if robots, genetically modified (grown-up) babies, and aliens are so advanced, maybe they’ve “switched off” their primitive brains. They’ve figured out how to live harmoniously and sustainably with nature and other beings. And if you don’t like this hippie crap lingo, tweak the perspective to your taste; it will be just as accurate at this point.

Our fears of the unknown are ultimately mere reflections of ourselves. They are projections of our flaws and of what we need to improve. They are a self-lesson, really. The dangers we see in the future reveal our societies’ weaknesses. So, instead of freezing and expecting the worst, we can use our fears of biology, robots, and aliens to fix our surroundings. Maybe that will bring the positive vibes.

Next time we’ll make it shorter I think. Maybe about how to think like a scientist. Or how to dress like one? Anyway, until then, stay safe!

You know how you can help and support us? You can share this or other posts, like our stuff, and follow us on social media. We want to spread science and skeptical thinking to as many people as possible to make this a better place for you and for me and the entire human race. It would help us a great deal. Thanks!  

Share the love!