AI Anxiety

A walk into the unknown.

Photo of a landscape: ocean in the middle ground, blue sky up top, bushes in the foreground.

The following is a whimsical, philosophical, psychological, thinking-out-loud (call it what you want) exploration of the dynamic world of artificial intelligence (AI).

And the influence it has on me and on society.

It offers several non-technical points of view, each of which has unprescribed levels of validity.

To be clear, I don't know how much of the following is BS and how much is not.

I'm putting it down to reflect on it later.

And perhaps spark a discussion on which ideas are plain wrong and which could be improved.


There's a feeling going around the internet about the recent wave of advancements in AI.

Some are excited for whatever comes and some are calling for it all to be stopped.

When something has a name, it starts to become real.

But when something has a name, you can also start to understand it.

Enter AI anxiety.

A feeling felt by many in the technology industry (as of writing this in April 2023).

And a feeling likely to be felt by people throughout the world in the coming months and years.

A natural feeling.

Anxiety is a fear of the future. The other side of the coin of excitement.

And with the recent AI updates, especially in the world of large language models (LLMs) such as ChatGPT and GPT-4 and generative models such as Stable Diffusion and Midjourney, it can feel like the future is already here.

One might start to think, what if AI can do this?

Or what if it can do that?

What if it's better than me at XYZ?

What if it takes my job?

What if it decides to take over the human race?

How do we know what’s real and what’s not when most of it is generated?

All valid questions.

Questions I don't have answers to.

For a couple of weeks I walked around with a kind of haze over everything.

Thinking to myself, what’s the point of all this if AI is just going to do it in the future?

What do we do if/when a robot is as capable as, or more capable than, we are at any task we'd like to do?

Do we go back to just living?

Like the animals do?

These questions are not new, they’ve been asked for centuries (just replace AI with God or another deity, a person of power or your arch nemesis).

AI anxiety can feel similar to climate anxiety.

The big unknown of what's going to happen in the future with regards to climate.

Or death anxiety.

The big unknown of what happens when the big sleep comes.

Again, I don't know.

There seem to be convincing arguments on each side.

Many for the idea that AI has the potential to take over (and be destructive) and many against it (arguing it will be helpful).

I understand and acknowledge the former but I'm for the latter.

But what do you do when some of the smartest people you know are on either side of an argument?

My knee-jerk take is that there are too many things for me to even begin to understand.

So the human in me panics.

The engineer in me thinks, like any new technology, it will be iteratively improved over time. Steered in the direction we'd like it to go.

The Terminator-movie-lover in me thinks, what happens when the AI gets out of control and starts replicating itself at all costs?

Well, that'd be a strange ride.

Being cut in half by a red-eyed cybernetic organism would simultaneously be a childhood nightmare and dream.

AI in God's clothing

If technology has replaced religion in the modern world (to some extent), then AI is the new God.

And so the historian in me wants to know more about how people felt when Christianity (or other religions) started to propagate.

Is this a similar time?

What God-like figure(s) were there before God?

Is it because AI can create things on our screens now that it seems much more real than when it used to just work in the background?

How do you know God isn't real?

I don't know either.

What God-like figures act in the background without us knowing?

As real as ever but just not in sight.

After all, language, in the form of words on a page, is a conscious form of intelligence.

A major source of truth in many cultures comes from words written in a book, the Bible, the Bhagavad Gita, the Quran.

And since the printing press (1400s), we’ve been more and more conditioned to believe what is written down is what’s important.

Of course, in many cases it is.

But for every word written down, there's an inconceivable number that aren't.

Does this make them less important?

A simple thought experiment would be to take in the room around you.

And try to write down every single thing that’s happening.

To capture a single second would require a lifetime of typing (maybe more).

The point of this is that we often mistake the menu for the food.

As in, taking what's written down to be more important than what's actually happening.

As in, taking what's on our screens to be more important than what's actually happening.

How much of intelligence is outside our perceptions?

Beyond sight, hearing, taste, touch, smell, narrative (the sixth sense).

What kind of intelligence tells your hair to grow?

Or the flowers to turn and face the sun?

Your heart to beat?

Your digestive system to turn food into shit?

How does a potato know to grow when it's in the ground?

What intelligence releases hormones to make me feel good when I hug a loved one?

pink and white flowers on the side of the road
A house near where I live has planted a bunch of flowers next to the street. I love driving past.

Much of the intelligence that creates life is unconscious.

Have we put too much emphasis on intelligence we can see and not enough on instinct?

Just because we can't communicate with something doesn't mean it's unintelligent.

And of course, the logical argument here is to say that none of this matters in the face of an overarching superintelligence that wants nothing more than to replicate itself.

Isn’t that what we (humans) do already?

Logic is very helpful until it’s not.

This isn’t to take away from language as a powerful tool.

It’s to make sure we don’t undersell ourselves as mere orators or thinkers.

One of the main points of the concept of God is to remind you that you aren't God.

Is AI anxiety another form of us once again mistaking ourselves for the most important thing in the universe?

A faux pas since we are the universe.

This point of view is much closer to the Eastern way of thinking. You aren't the only one who's God; everyone is God.

Since we are the universe, creating technology, any technology, can be seen as a path as natural as any other.

Just like a wasp builds a nest.

Or a beaver builds a dam.

Or an apple tree produces apples.

Humans create narratives.

Narratives in the form of traditions, religions, myths, techno utopias.

And just as life requires death, every techno utopia requires an equal techno dystopia.

Falling in love with the mirror

The intelligence embedded in current AI models is us.

Without the 20+ years of internet data, what do you have? (One way of viewing recent LLMs and AI models is that they have essentially memorised the internet.)

A nice set of equations on a page and thousands of GPUs (Graphics Processing Units) waiting to crunch numbers.

So it makes sense to be scared of such a thing.

Because it’s like a mirror.

A giant mirror into collective human consciousness.

Or at least what we've deemed important enough to put in the furthest corners of the internet.

And anyone who's dived into the practice of self-analysis knows there are plenty of places in the mind we play hide and seek, hoping never to be found.

We don’t need a superintelligence to make us anxious.

We already do it of our own accord.

Every day we sever the heads of our most heart-pulsing ideas and feel our chests tighten when we know there was something we could've done, could've created, something beautiful, something profound, but held back.

Even before AI, I've met few people who say they want to do something and then actually do it.

The capitalist in me begins to think about how to use the new tools for profit.

But what if the new tools belong only to a handful of entities?

The teacher in me wants to use them for creating materials.

But what if they’re inaccurate?

There are solutions to both of these.

Neither are new problems.

Like any powerful technology, some version of it will become open-source at some point (see OpenAssistant, an open-source alternative to ChatGPT).

And there’s nothing more beautiful than watching a community of people around the world work together to make magic accessible for all.

As for inaccuracy, I'm already inaccurate as a teacher. But my students and I have the ability to reflect and correct.

Making saddles when the car is around the corner

I begin to wonder what my role in this is.

I begin to wonder whether, as a software developer and teacher, I’m someone who’s creating horse saddles when the car is around the corner.

Or someone who makes oil lamps while Thomas Edison and Nikola Tesla are about to bring electricity to the world.

Ha!

Well, they'll be comfortable saddles and damn good-looking lamps!

And teaching is far more than just sharing information and summaries.

It's instilling in students the confidence that they can figure something out, even when they're not sure how.

So it’s my job to promote the openness, the potential.

Not to be paralysed by a fear of the unknown (though my anxieties of the future are another source of my intelligence, and perhaps even more important than anything I could ever write down).

And to continue to share and build the things I’d like to see in the world.

The same as it’s always been.

As for the long, long term.

I don’t know.

And if I did, I'd be a little bored.

A known future is already the past.

Questions without answers

But what about the economy, how will AI affect it?

If an economy is akin to a living organism, it may get damaged in one area, only to recover and mend itself stronger (provided it survives).

LLMs and AI as a whole are like introducing a new species to the ecosystem of thought.

A new species of technology is making its way through the human ecology.

Biology teaches us that new species can harm and make irreparable changes.

But it also teaches us that they can work together in symbiosis.

We’re used to that.

If there's anything humans are good at, it's adapting to a changing scenario.

Forgetting that is the real mistake.

Adapt! Adapt! Adapt!

The Hindus would laugh and say put on another mask!

The play must go on!

But what about all the changes coming?

The time in which life becomes most cumbersome or difficult to bear is when we feel it must be a certain way and only that way.

It hardens and becomes stale, brittle to even the most minor of shocks.

Only by accepting the perpetual mystery of the unknown does it once again become meaningful.

Part of being human is developing the ability to find meaning in the meaningless.

But what if it takes over human life and makes it meaningless?

Even if AI took over human work (and not even just work, human life) and there was no point to the species anymore, what's different from now?

A paradoxically enlightening realisation is that there was no point to the species to begin with.

Traditions, religions, myths and stories have all been trying to deal with the problem since the dawn of consciousness.

But the chief error is thinking that it is a problem in the first place.

Does dancing have a point?

Not if you’re doing it properly.

Yet it’s one of the most pleasurable activities one can participate in.

Trying to get something from an experience or trying to get something out of life is like reaching for a coin at the back of the couch.

The harder you try, the more it falls away.

Except in reality you are both the reacher and the coin; in essence, you are the experiencer and the experience.

These cannot be separate.

Separating them is like chasing your own tail. The fun of chasing comes with the potential for getting caught in knots.

From this point of view, humanity as a collective giving birth to a superintelligence could be seen as organic as a woman giving birth to a child.

A few of my friends have new children.

They tell me becoming a parent for the first time has been the most rewarding yet terrifying experience of their lives.

And yet nothing could be more natural.

My personal use cases for AI

The main way I've been using AI, mostly large language models like ChatGPT and OpenAssistant, is as a calculator for words.

For turning unstructured text into structured text

For example, extracting information from a paragraph of words into JSON.

My girlfriend used it the other day to organise all of her teaching notes into sorted categories.

A tedious (but easy) task for her that would've taken hours but took ChatGPT a few minutes.
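To make the idea concrete, here's a minimal sketch of the word-calculator pattern, using the openai Python package (the 0.x ChatCompletion API, current as of writing in April 2023) with made-up example text and field names:

```python
# A sketch of using an LLM as a "calculator for words":
# turn an unstructured paragraph into structured JSON.
# Assumes the openai package (0.x API, April 2023) and an API key set
# via the OPENAI_API_KEY environment variable. Text/fields are made up.
import json
import openai

text = (
    "Daniel teaches a machine learning course on Tuesdays, "
    "it runs for 6 weeks and costs $100."
)

prompt = (
    "Extract the following from the text and reply with only JSON, "
    "using the keys 'teacher', 'topic', 'day', 'duration_weeks', "
    f"'price_usd':\n\n{text}"
)

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": prompt}],
    temperature=0,  # extraction wants repeatability, not creativity
)

# May fail if the model wraps the JSON in extra prose, hence the
# "reply with only JSON" instruction in the prompt above.
structured = json.loads(response["choices"][0]["message"]["content"])
print(structured)
```

The same pattern (unstructured in, structured out) covers a surprising number of tedious tasks, my girlfriend's note-sorting included.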

For writing boilerplate code

Much of what I've been doing lately is exploring and manipulating data with pandas.

I know how to do most of what I want to do but it takes time to write the code.

Instead, GitHub Copilot takes care of much of it for me.

It makes errors but so do I.

GitHub Copilot extends my coding ability much like a bulldozer extends a shovel (more verbose but perhaps less precise).
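For a feel of what I mean by boilerplate, here's a hypothetical example (made-up data and column names) of the explore-and-summarise pandas code Copilot often fills in for me:

```python
# Hypothetical example of everyday pandas boilerplate:
# deduplicate, group, aggregate, sort. Data/columns are made up.
import pandas as pd

df = pd.DataFrame({
    "food": ["apple", "banana", "apple", "carrot"],
    "calories": [95, 105, 95, 25],
    "group": ["fruit", "fruit", "fruit", "vegetable"],
})

summary = (
    df.drop_duplicates()              # remove the repeated apple row
      .groupby("group")["calories"]   # summarise calories per food group
      .agg(["mean", "count"])
      .sort_values("mean", ascending=False)
)
print(summary)
```

Nothing hard, just time-consuming to type, which is exactly where an autocomplete-on-steroids earns its keep.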

For taking photos of food and learning about it

My brother and I are working on an app called Nutrify.

An app to take photos of food and learn about it.

It uses computer vision to understand what's in an image.

And (later) LLMs to describe it.
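Not Nutrify's actual code, but a minimal sketch of the computer vision piece, using a pretrained ImageNet classifier from torchvision (a general-purpose model standing in for a food-specific one, and "meal.jpg" is a placeholder image path):

```python
# Sketch: classify a food photo with a pretrained model.
# Not Nutrify's code; a general ImageNet classifier for illustration.
# Assumes torch + torchvision installed and a local image "meal.jpg".
import torch
from PIL import Image
from torchvision import models

weights = models.EfficientNet_B0_Weights.DEFAULT
model = models.efficientnet_b0(weights=weights).eval()
preprocess = weights.transforms()  # resize/crop/normalise as the model expects

image = Image.open("meal.jpg").convert("RGB")
batch = preprocess(image).unsqueeze(0)  # add a batch dimension

with torch.inference_mode():
    probs = model(batch).softmax(dim=-1)

top_prob, top_idx = probs.max(dim=-1)
print(weights.meta["categories"][top_idx.item()], f"({top_prob.item():.2f})")
```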

We're having a bunch of fun.

And learning more about food as we build it.

Humans and AI working together.

Resources to learn more