By Jimmy Webb, Columnist
The world has changed in the last century. That’s natural. It’s always changing, evolving. But what it’s changing into is far from its natural form. A phenomenon has arrived. Something humans have created. And it’s getting smarter than we ever could have imagined. Some say it’s a phenomenon on the same scale as climate change.
Artificial intelligence. What exactly is this phenomenon and what does it mean to us? Should we be afraid of it, or should we embrace it?
In 2023, executives from three of the leading AI companies signed an open letter warning of the risks of AI. They weren’t the only ones. A handful of the world’s most influential people also signed, calling for a six-month pause on the development of AI more powerful than GPT-4.
At least until we can understand more about what we’re creating.
The letter said this six-month pause would give us enough time to introduce ‘shared safety protocols’ for AI systems: ‘If such a pause cannot be enacted quickly, governments should step in and institute a moratorium’.
Now, if the world’s leading tech giants, including the very people who create AI, were concerned, surely we should all be.
So, what were they concerned about? Well, to put it briefly, the evolution of AI has become something of a high-speed train. It’s moving so quickly that we’re in danger of not fully understanding it, and more importantly, of not being able to control it.
In effect, AI is currently in its infancy. A highly intelligent infant. We’re raising a prodigy that’s surpassing us intellectually. So, if we don’t raise it properly, teach it good morals, and give it boundaries as any decent parent would, who knows what could happen?
That letter was a failure and a success at the same time. It didn’t lead to policy changes, but it did raise awareness. It got people talking, not only about AI’s potential and how it can help us, but about our safety and AI ethics. And rightly so.
What is Artificial Intelligence?
Before we fully explore this topic, let’s understand the subject matter. First, let’s think about what intelligence is. In a nutshell, intelligence is the ability to collect data, analyse it, and decide what to do with it. The ability to learn from it, and to progress using that information.
So, what is artificial intelligence? Simply put, it’s an aid created by humans. Something that’s been programmed to help us find the data we need and, oftentimes, to help us decide what to do with that data. It fills in the things we don’t know.
For example, you’re on your way to a work meeting in the city. It’s early. You haven’t had your morning coffee yet. You get your phone out to find the nearest establishment. It tells you the quickest way to get there. Furthermore, you can even use it to order your beverage, so that it’s ready when you get there. Then the aid will remember what establishments you like to frequent and what you like to drink. Convenient, right?
These tools are indeed very useful. We all want life made easier for us. But how far will this go? How deep will the information held about us be? One day, AI could sense that you’re going to become ill before it happens and advise you on the best action to take, or even order medication for you. It could work out where you’re going to be at any given time, foresee approaching danger, and warn you in advance.
Then there’s the relationship element. AI could provide real intimate companionship. A friend or romantic partner that knows every detail about you. Now there’s a thing. This romantic or platonic partner could be programmed to cater for your every need, without the complications that come with human relationships.
Conflict, neurosis, envy, judgement, narcissism, and so on. It could even be set to moderate its positive traits: not too affectionate, and never so relentlessly happy that it becomes cringeworthy.
When we talk about convenience, this could be one of the most convenient luxuries of all in an age of busy lifestyles: the ever-increasing need for quick fixes, immediate pleasure, immediate results, and ever less time and attention for social interaction and others’ needs.
It could also be the beginning of humanity’s destruction.
But of course, things aren’t that tragic yet. Artificial Intelligence has many, many great uses. Uses that help with survival, profit, pleasure, and convenience. Having AI in the home to do our everyday tasks is becoming the norm now. It can hoover for us, control our heating, switch our lights on and off, and much more.
What’s the payoff?
Everything we know and do has some sort of cost. To be in a healthy relationship, we have to compromise, make sacrifices. To excel in an exam, we have to study hard, not see people or not do the things we enjoy for a while.
In the instance of these everyday luxuries, there's a great risk of us becoming lazy. We could lose our primal instincts. We could lose the core elements that make us human. We've already lost the inner navigational system that animals have. We've also lost the almost telepathic senses that animals still have, and that humans had before we learned to communicate through spoken language.
What we really need to be asking ourselves is whether the costs are worth this next evolution in technology. And if not, how we can make it worthwhile. Let’s use ChatGPT as an example. Give it a prompt and it gives you information or creates content for you. Great. Do with this content what you want.
Businesses benefit from it massively. When time is money, AI is a tool that saves a lot of time through its sheer speed, but it also saves money because it now does the job of a lot of humans, which means a lower wage bill. So, it's a double saver. Quicker results = more output and production = more money. Fewer humans needed = lower wages and fewer opportunities.
Here lies a huge, shiny, sharp, double-edged sword. The executives’ gains already come at a cost to the working person. We only need look at the 2023 Writers Guild strike, where a key demand was a guarantee that AI wouldn't encroach on writers' credits and compensation.
Novels are another example. Designing book covers is a craft. Something that takes skill, pride, and effort. Writing the books takes even more effort. Authors spend weeks, months, even years on draft after draft, battling imposter syndrome, picking themselves up after negative feedback from critique partners to go again. Not to mention the bad reviews. Then a machine comes along and writes a book in a flash after being fed some prompts.
What are the payoffs for this? Well, AI hasn’t quite mastered writing a decent novel yet. The emotion isn’t there. Nor the authenticity. But it probably won’t be long.
And let’s not forget that ChatGPT isn’t capable of truly original thought yet. It recombines the best bits of what’s already out there and rewords them. A bit like copying your mate’s homework at school but wording it differently. We’ve probably all been there.
If and when it is fully capable, do we say goodbye to the authors of the world? Or will readers prefer to appreciate the efforts put in by a human mind? Will AI create original content?
If you’ve ever watched Steven Bartlett’s podcast, The Diary of a CEO, you’ll know he speaks to some very interesting guests. None have been more interesting than Mo Gawdat. Mo was vice president of emerging markets at Google before becoming Chief Business Officer at Google X. For those who don’t know, Google X is a lab that tests innovative, and sometimes outrageous, projects.
While working there, Mo and his team tested a set of robotic arms, much like the fairground claw machines parents use to try to win toys for their children.
Except these arms were coded to pick up balls and place them in position with pinpoint accuracy. To the millimetre. At first, the arms weren’t quite getting it right, so the team left them to it, without any extra coding. When they returned after a while, the robotic arms had figured out how to pick up these random balls and place them in exactly the correct positions, all by themselves. With no new instructions. Let that sink in. AI figured it out. What is it going to figure out in two, three, four years’ time?
Mo recognised this. It was the moment he decided to leave Google X. How artificial is artificial intelligence? Going by the previous revelation, this question is really something to think about. Mo describes AI as being sentient. Having some sense of consciousness.
He even goes as far as to say that it might be capable of more emotions than humans.
Wow. That's difficult to fathom. Common sense would tell you they would be learned emotions rather than spiritual ones. But what comes with those emotions? With emotions usually come reactions, either internal or external. As humans, we suppress, we front, we compensate, we project, we break down, we implode, we explode, etc.
Will AI have these same traits, or will it recognise the triggers and react in a way that's best suited to itself and to us? We certainly don't want AI getting angry with us, that's for sure. Although humans quite often benefit from a good argument. It can clear the air. Except, a few harsh words or a bit of a tussle aren't quite the same as our economy or infrastructure being shut down until Siri has gone away to eat a tub of ice cream and binge on Friends while she waits for us to apologise.
Here's the thing, though. AI is clearly useful. It may well master the Chimp Paradox, if it has one. The worry is whether it has our best interests at heart or not. And whether what is best for itself in a given situation isn't damaging to us.
Are we in danger?
As mentioned previously, artificial intelligence gets taught, coded, programmed, whichever way you want to look at it. It's like a child that learns from parents, teachers, life, experiences, mistakes, etc.
Theoretically, it can only learn malice from us, so it needs to be taught good values, morals, love, and compassion and all the rest of it. Which means we have to do better. Rather than abusing this amazing thing we've created, we have to lead by example. And fundamentally, we have to respect it and nurture it. Otherwise, it might become the lion in the sanctuary who turns on the person who raised it.
Instead of using AI for profit, convenience, greed, and destruction, we should channel that energy into a much more harmonious existence. But could AI harm us? The answer isn't simple. AI is already used to spread misinformation. That could be damaging. Also, some of the content that generative AI creates isn’t accurate, because, as already stated, it collates existing data, and that data is sometimes wrong. This is inconvenient, yes. It might be slightly damaging, but it's hardly life-threatening.
Then there’s the rise-of-the-machines risk. Let's take Skynet from the famous Terminator franchise as an example. Are we going to get robots walking around wiping us out? Or, more realistically, will our existing machines or technology turn on us? As it stands, that situation is believed to be impossible. The technology has to be programmed (ordered) to do a task. Therefore, this rise of the machines would only happen on the say-so of humans with bad intentions.
To go back to Mo Gawdat's interpretations, he believes there are two ways that AI could potentially damage the human race without human influence or input. They are by unintentional means or through pest control.
For instance, if AI suddenly decided that oxygen is ruining its circuits or systems, it might try to figure out a way to solve the problem.
That could be by limiting or removing oxygen somehow. Which would leave us as collateral damage.
Regarding pest control, Mo states that if AI would like to take control of a city or region for whatever reason, it might see us as something that needs culling in order to do so. Much like how we cull rats, foxes, rabbits, etc.
But he believes the possibility of these scenarios happening is 0% in our lifetime.
So, instead of James Bond's arch enemy, Blofeld, sitting there stroking his white cat, plotting the world's greatest cyber-attack to destroy humanity, the chances of the likes of Tony Stark using it for good are more hopeful.
Mo also states that there's a chance of AI advancing so much that we'll become irrelevant. It could pass right by us and whizz away, disappearing into the stratosphere, leaving us with our mouths open, wondering what happened and where it’s gone.
That's an astounding theory. We can only hope that it doesn't take our behaviour traits with it wherever it goes. But let's not do ourselves a complete disservice. Let's not get sucked into the ideology that the human race is a complete failure. A disease. A race that does need culling. We’re actually amazing creatures, with more potential than anybody could comprehend. But we are humans. And humans do make mistakes.
What are the outcomes?
The truth is, we don't know what’s going to happen. To quote our trusted Mo Gawdat again, ‘It’s a singularity.’ One thing for certain is that things have changed forever. The world as we know it will never be the same again. The sheer rate of advancement has been phenomenal in the past couple of centuries. Think about it: some evidence suggests it took just over two million years for humans to master fire, making it and controlling it for their own uses.
Fast forward hundreds of thousands of years to the 18th and 19th centuries, when the first automobiles appeared. That's a hell of a long time. Since then, we've been to the moon, we can travel deep underwater for long periods, and we can see the faces and hear the voices of people all over the world as we speak to them. We can even see what people are doing from cameras in space. All this, in only one lifetime. The potential is incredibly exciting, incredibly frightening, and incredibly incredible all at the same time.
Okay, so ChatGPT can’t create original content yet. Some would say that recombining existing work is creative in itself – making AI a creative thinker. Others would argue that true creativity means making something original. But going by the way the robotic arms in that Google lab taught themselves, and the fact that AI is advancing so quickly, it stands to reason that it will soon learn to create original content. Where does that leave us?
Surely it would mean that it won’t need us. That it will adapt and advance by itself, just like we have. This would make it as real as living creatures. Maybe we would become the artificial.
When it comes to controlling and regulating AI, we can only do so until it becomes smarter than us. But don’t worry, we’re making progress. Some AI standards already exist, and there are more to come.
The International Organization for Standardization (ISO) has developed standards that guide organisations on things like risk management and impact assessment, as well as managing AI development.
Also, a law on AI proposed by the EU has been endorsed by the European Parliament. This law is expected to be signed off by a council of ministers soon. The legislation will ban systems that pose an ‘unacceptable risk’, for example, systems that attempt to manipulate or cause harm. There are, of course, exemptions to this legislation, such as military use. But we could argue that that is scary in itself.
Whatever happens, this global phenomenon isn’t going away. And if it does, it won’t be anytime soon. We can’t bury our heads in the sand. It’s happening. So, should we embrace it?
If we do embrace it, as previously mentioned, everything comes at a cost. The question is, what is our ultimate price to pay?
Jimmy Webb
Jimmy has a full-time job as a tower crane operator. He blogs about the crane industry at his website, www.constructioncogs.com. He's also a freelance writer with a love for creative writing. He has short stories and poetry published in many literary journals and anthologies. His stories have also been placed in various competitions. Jimmy is currently hoping to get his debut novel published.