
AI vs Humanity

What follows is what they call an ‘Opinion Piece’, in that it represents a personal and grave concern I have about the effects of Artificial Intelligence on human beings. I am not referring to robots taking over the planet, which is of course a real possibility, but rather to the effect AI has on the quality of our lives and the creeping dehumanization of society. If we care about our humanity, we must stay awake to this issue.

I know, I know … ‘Make AI your friend’ and ‘Don’t be afraid of AI’. Maybe we should be more afraid, not of the technology, which has massive potential to make our world a better place, but of humans’ ill-considered rush to technological supremacy and gazillions of dollars in profit.

I actually love the technology that makes my life easier, but I know how easy it is to become dependent on it. Like most humans, I am lazy. TV gave birth to the expression ‘couch potato’. As a society, we have become ever more the passive consumer, with less and less interaction with the real world. Fast-forward a few decades and we have the ever-increasing use of virtual reality, in which we are not engaging with reality at all, unless you call interacting with a fake reality real.

Back to the workplace. We were lucky that technology enabled us to carry on during the lockdowns. Nevertheless, most people were delighted to be able to meet in person once again. There is no going back, though: “You can’t stop AI”. Ironically, the very positives that we so value in the digitization of business mean that it has become, and will remain, all too easy to communicate in isolation.

The impact of AI is insidious. Isolation (with its toll on mental health and well-being), distance, alienation, and disconnection are some of the symptoms of a world dominated by AI. I’ll never forget how, a few months into the pandemic, a senior executive of a global technology firm was on the verge of a nervous breakdown, imprisoned in his house with his wife, children, and mother-in-law. He told me how he was getting zero support from top management and how emotionally distressed he was.

Again, humans tend to go for the easy option and if we’re not careful we will become subsumed into a world in which it is almost impossible to distinguish between what is and what is not real.

I recently watched a documentary about a town in West Virginia, USA, one of the poorest parts of America. A shocking 50% of young adults never left their houses, used drugs, did not want to work, and spent all their time on social media: a perfect example of passivity and disengagement.

Teenagers can get lost in their own world of social media for hours on end. How many of them are good live, face-to-face communicators? We are seeing the numbing and dumbing of the younger generations, fed on ‘other-worldly’ realities. I am generalizing, of course, but I am concerned about the new generations entering the workforce and their inability to communicate well. We need the younger generations to be in a fit state to train the machines in ethical behavior.

So what is the solution? Let’s first look at some recent developments.

The man often called the godfather of AI, Dr Geoffrey Hinton, recently quit Google citing concerns over the flood of misinformation (we will not know what is fake and what is real in the future), the likelihood of massive job losses, and the “existential risk” posed by the creation of a true AGI – artificial general intelligence.

Speaking via video link to a summit in London in May 2023, Elon Musk said he expects governments around the world to use AI to develop weapons before anything else. Musk hit out at artificial intelligence (AI), saying it is not “necessary for anything we’re doing”.

And Mo Gawdat, until recently the chief business officer for Google X: “My biggest fear is that humans will use that abundant intelligence (AGI) in ways that are not pro-humanity”. He reckons that the drive to develop AI is mostly about shifting power and wealth in a competitive market.

So now we come to the crux of the matter. The experts say there is no stopping AI controlling the world and ultimately rendering human beings superfluous. So how do we humans save ourselves from extinction?

Dr Nathanael Fast, a behavioral scientist at the USC Marshall School of Business, has written a very thoughtful piece that weighs the pros and cons and gives us a foundation for an approach to AI. He asks us to consider the long-term consequences of developing AI, rather than being caught up in the instant gratification that investing in and developing AI can offer to humans seeking wealth, power, and control.

Dr. Fast is worth quoting here:
“As AI becomes more powerful, we must invest not only in designing the technology, but also in boosting our own ‘Technological Intelligence’—our ability to understand and make wise decisions about technology. We need to get better at objectively evaluating the benefits and harms of technology in our lives.

“Going forward, tech leaders need to focus just as much on how human psychology responds to AI as they do on the design of the technology itself. Likewise, managers and companies that employ AI should consider the reactions of their employees and keep them in the loop, instead of implementing new technology abruptly. Building and maintaining trust is essential.

“As the development of AI speeds up, the future of humanity lies in the balance. The consequences of our choices and actions are immense. Let us take this responsibility seriously and treat AI as the singular, albeit complicated, puzzle that it is, rather than only looking at ‘good’ or ‘bad’ pieces of the puzzle in isolation. We must increase our technological intelligence to ensure that we build a more positive relationship with AI and, ultimately, a better future.”

And finally, I recently watched a two-hour interview with the brilliant Mo Gawdat by Brian Rose of London Real. It was a mind-blowing exposition of how we could reach a state of utopia by teaching machines to be ethical.

We can debate the macro picture and the future of humankind forever, but let’s focus back on the practicalities of managing machines before it’s too late. Given the inevitability of machines ultimately controlling humans, Mo Gawdat is convinced we have only a short time to train robots to behave ethically and not destroy human society. And here the application of Emotional Intelligence will be crucial to the success of this endeavor.

Who would have thought that the espousal of EI in the corporate world would eventually be employed in the service of saving humankind from extinction?! Gawdat makes the very insightful point that robots will take their cues from both poor and positive human behavior. In a very real sense, how we ‘parent’ our bots will determine their degree of ethical behavior.

Here are three ‘on point’ quotes from Mo Gawdat:

“The moments that define life are moments of human connection”

“Never make the machine your enemy”

“We can still influence them (the robots) by showing them a side of us we want them to be like”

Ultimately it will be the choices that humans, not machines, make that will create our future. It will be like this until AGI prevails and we have lost control completely. So it is imperative that we start developing our EI skills now!

Michael Banks

August 2023


If you want to know more about PeopleSmart and the services we offer reach out to us for a conversation: contact@peoplesmart.fr
