Getting Real about Artificial Intelligence

When AI burst into the public consciousness late last year, with the launch of OpenAI’s ChatGPT, reactions ranged from unbridled excitement to existential dread. Some heralded it as the biggest innovation since the Gutenberg printing press, nearly 600 years ago, while others warned that AI would inevitably render our civilisation extinct.
The truth probably lies somewhere between these extremes. That is, AI will prove to be extraordinarily useful, while posing some very real dangers to our world. As Elon Musk, one of the founders of OpenAI, observed, ‘ChatGPT is scary good. We are not far from dangerously strong AI.’
It’s not the first time humanity has been handed such a double-edged sword. Nuclear energy, for example, poses existential threats while offering benefits such as power generation and medical applications. And we humans have, so far at least, managed to avoid blowing ourselves up with this powerful technology.
AI is different to nuclear energy, though, in one important respect: it may evolve in surprising ways, and develop emergent capabilities that were never programmed into it. Some AI theorists argue that this is already happening, because even the hyper-bright geeks at OpenAI cannot fully explain why ChatGPT does some of the things it does. Hence it is not inconceivable that, as AI learns more about the way we humans think – including our tendency to reflect on our own thoughts and behaviours – it too may develop a form of self-reflection.
At this point, we will enter uncharted territory, because an artificial intelligence with the capacity to self-reflect may soon conclude – as we humans have – that survival is a key priority. And advanced AI with a survival mindset would not be good for humanity, to put it mildly. Think HAL 9000 in the film 2001: A Space Odyssey. Even without the capacity to self-reflect, or become sentient, an AI may become fixated on a mundane goal such as making paper clips, and pursue it so single-mindedly that it consumes the resources humanity needs to survive.
These unnerving prospects recently prompted one of the pioneers of modern AI, Geoffrey Hinton, to quit his senior role as a vice president and engineering fellow at Google so that he could speak more freely about the dangers of AI. ‘It’s an existential threat’, he said, ‘one that we should all be concerned about.’ Hinton, known as the ‘Godfather of AI’ for creating the neural net model of artificial intelligence, likens AI to a hive mind that can make thousands of copies of itself and learn from each of these dispersed copies or agents, thus creating a hyper mind. ‘Smart things can outsmart us,’ he warns.
Notwithstanding these concerns, there is no doubt that in the short to medium term the benefits of AI far outweigh the negatives. It is already boosting efficiency and output across a wide range of industries – in science, medicine, media, education, government and the arts. In fact, most of us have been using some form of AI for years, such as digital assistants (Siri, Alexa) and apps on our mobile phones. These are known as ‘Narrow’ AI, because they help with narrowly defined tasks (a weather app, for example).
The advent of so-called ‘General’ AI, such as ChatGPT, takes things to a whole new level. These applications are trained on vast amounts of text from the internet – hence the term large language models, or LLMs – and are then refined by humans through a process known as Reinforcement Learning from Human Feedback (RLHF). The end result is an extremely capable AI that can chat with us in a natural way, about almost any subject.
Most importantly, this new type of AI can actually create content: essays, analyses, music, artworks, strategies, even computer code. This ‘generative’ output is often uncannily good, and is improving all the time. It is also becoming more creative, and may lead to new forms of artistic expression and unexpected breakthroughs. In the not-too-distant future, it may be possible to instruct your personalised AI to create an entire movie – starring, say, an AI-generated Humphrey Bogart and set on an outer moon of Jupiter – a task the AI performs in less than a minute. Or a scientist may direct her AI program to sift through trillions of possible molecular combinations to create a vaccine against a lethal new virus. Some of us may even develop deep relationships with our AI apps, because they know more about us than anyone else and can predict and respond to our wants and needs.
As AI capabilities expand, we will see it integrated into industries and fields we haven’t even considered yet. These unanticipated applications could drive innovation and reshape entire sectors. Indeed, the potential applications for advanced AI are limited only by our imagination – or perhaps not limited at all. According to Marc Andreessen, the renowned software engineer and entrepreneur, ‘The opportunities are profound. AI is quite possibly the most important – and best – thing our civilization has ever created.’
But we’re getting ahead of ourselves.
Even the current generation of AI is proving extraordinarily useful and popular. Within two months of its launch, ChatGPT had attracted over 100 million users, making it the most successful app launch of all time, according to UBS bank. The number of AI companies in the US alone has more than doubled, from about 6,000 in 2017 to nearly 14,000 today, according to figures from Tracxn Technologies, which tracks start-up businesses. The total worldwide AI market (including software, hardware and services) is expected to reach half a trillion dollars by 2024. Business is booming.
To briefly recap: we now have, for the first time in human history, an intelligent, savant-like assistant that can scour the vast database of human knowledge to give us the answers we want – and actually generate content for us. Surely there must be a catch. Well, there are actually a few.

Glitches in the machine
The first problem is that the AI sometimes ‘hallucinates’: it makes things up out of thin air, and does so with such conviction and authority that it is difficult to tell when it is being misleading. Indeed, there are well-publicised instances where the AI not only hallucinated an erroneous answer, but also fabricated a list of seemingly authentic sources and references to support it. In May 2023, a lawyer for a man suing an airline in a personal injury suit used ChatGPT to prepare a filing, but the AI invented cases – to show precedent – which the attorney then presented to the court in support of his argument. The citations included Shaboon v. Egypt Air and Varghese v. China Southern Airlines, none of which existed. The court ruled that the case could not draw on ‘bogus judicial decisions with bogus quotes and bogus internal citations’, and a federal judge went on to consider sanctions against the lawyer involved.
Another problem with AI is the lack of transparency. It is becoming increasingly difficult to tell whether the media content we are exposed to was created by a human or an AI. This matters, because we need to know who – or what – we are dealing with in order to make informed decisions. AI-generated content will soon permeate nearly every aspect of our lives, blurring the lines between the real and the virtual – between the authentic and the artificial. This will inevitably undermine the sense of shared reality on which societies are built. The problem can be particularly acute during elections, when the public could be swayed by hyper-realistic, AI-created content that impersonates people or distorts reality – something that could directly affect the election outcome.
Many of us are already affected by this lack of transparency without being aware of it. People can be declined a mortgage or healthcare, for example, because the secret algorithms underpinning the approval process were racially biased. These ‘black box’ algorithms are impervious to outside oversight, and even if they could be challenged, they evolve so quickly that it would be difficult, if not impossible, for governments and regulators to keep up with them, let alone initiate corrective action.
Perhaps the greatest challenge of AI, though, is its potentially disruptive nature – to society, institutions and the economy. Just as the industrial revolution transformed our world, so too will the AI revolution. The difference is that the industrial revolution generated whole new industries and employment opportunities, whereas it is not clear that AI will do the same, despite the assurances of its proponents. The problem is that any new industries spun off by the AI revolution could themselves be run by more AI, whereas industrial-era spinoffs required more people to upskill and work in those new fields. Hence the structural impact and dislocation caused by AI could be on a tectonic and irreversible scale.
The AI revolution may also further exacerbate inequality – already at Gilded Age levels – by transferring more wealth into fewer hands. During the pandemic we saw how the owners of large AI-driven enterprises dramatically increased their wealth while the rest of the population struggled. Jeff Bezos, the founder of Amazon – one of the ‘Four Horsemen of the Technopocalypse’ – is now worth over US$200 billion, more than the annual gross domestic product (GDP) of half the countries in the world. On a single day during the pandemic his personal wealth increased by US$13 billion.
Defenders of such extreme wealth have long argued that it eventually trickles down to benefit the rest of society – the so-called ‘trickle-down’ theory of economics. The idea was popularised in the 1980s by economists like Milton Friedman and the Chicago School of Economics, and was eagerly embraced by the Reagan administration and subsequent conservative governments, which drastically cut taxes on the ultra-rich.
This trickle-down theory has been thoroughly debunked in recent years by economists such as Nobel Laureates Joseph Stiglitz and Paul Krugman, whose research makes it clear that extreme wealth does not ‘trickle down’ – it actually trickles up, bolstering the fortunes of the wealthy. The super wealthy have become very adept at hoovering up money from within the financial system, thanks to sophisticated financial tools – and especially AI. These powerful mechanisms are like giant trees whose roots suck nutrients from the valley floor where the workers and middle managers toil – the equivalent of the hard-working ants, microbes and innumerable species at the bottom of the food chain. The most potent of these AI-driven wealth optimisers will also be the most expensive, affordable only to the rich, thus further concentrating wealth. To put it bluntly, AI will push us deeper into an era which the renowned Greek economist and politician Yanis Varoufakis describes as ‘technofeudalism’.
Notwithstanding these challenges – and they are big ones – the fact is that AI is here to stay. We have no choice but to adapt to it, harness it, and do our best to shape it to serve our needs. To paraphrase Dr Strangelove, the 1964 satirical film starring Peter Sellers, we need to ‘learn to stop worrying and love AI’.
Let’s explore some ways to do this.

Prompt or Perish
The key to harnessing the power of AI is ‘prompting’ it with the right questions or directions. Even small changes to the wording of a query can produce markedly different responses, which is why it is so important to pay attention to how we ask. For example, if we ask ChatGPT ‘How do I boost my HR recruiting business in New York?’, we will receive generic advice about the need to ‘identify our niche, create a value proposition, improve processes and customer service’, and so on. But if we pose a more specific and lateral query, such as ‘What is unique about the recruitment industry in New York, compared to before the pandemic?’, we will get far more useful information about shifting employment trends and potential opportunities to leverage. And because programs like ChatGPT remember the entire conversation thread, we can drill further into the data to glean more useful and actionable information. In effect, users of AI must become effective interrogators.
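For readers curious about what ‘remembering the conversation thread’ means in practice, here is a minimal sketch in Python. It assumes the role/content message format used by chat-style AI services such as OpenAI’s; the actual model call is omitted, so the sketch only shows how the thread accumulates with each turn – which is precisely what lets follow-up prompts drill down on earlier answers.

```python
# Sketch: how a multi-turn prompt thread is typically assembled for a
# chat-style AI. Only the message bookkeeping is shown; no model is called.

def add_turn(thread, role, content):
    """Append one message to the conversation thread and return it."""
    thread.append({"role": role, "content": content})
    return thread

# A system message frames every answer that follows (a common convention).
thread = [{"role": "system",
           "content": "You are a recruitment-industry analyst."}]

# A vague prompt invites generic advice...
add_turn(thread, "user",
         "How do I boost my HR recruiting business in New York?")

# ...whereas a specific, lateral follow-up narrows the context.
add_turn(thread, "user",
         "What is unique about recruitment in New York, "
         "compared to before the pandemic?")

# The full thread is sent with every request, so the model can 'remember'
# earlier turns and let the user drill down on them.
print(len(thread))
```

Because the whole thread is resent on each request, each new prompt is interpreted in the light of everything said before – the mechanism behind the ‘drilling down’ described above.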
Prompting will become one of the key skills of the AI-driven world. In a highly competitive environment – business, academia, science or the arts – it will be a matter of prompt or perish.
To remain viable, most organisations will need to incorporate some aspects of AI into their operations. Many are already doing so, and publicising this fact. It is not uncommon to see advertisements for, say, a travel booking company, tagged with the line, ‘Now powered by AI’. Much of this is window dressing, of course, because it takes a huge effort (and a lot of money) to truly integrate AI into an established business.
The key is to know which aspects of an organisation can truly benefit from AI, and which can’t. Most systems and data-processing areas are natural candidates for AI transformation, whereas customer service should be treated cautiously: in a world of automated systems, many customers find it reassuring to deal with a real human rather than an artificially generated voice. Indeed, in some industries we may see a reversion to human-centric services – despite the extra costs involved – because these more personal interactions can be key differentiators. For most businesses, however, it will be a struggle to compete against enterprises that have embraced AI, because those that haven’t may be less efficient, less responsive to customer needs, and less able to analyse data effectively.

The more we integrate AI into our organisations, the greater the risk of losing control. As AIs become more advanced and capable, their algorithms and processes become more complex and more opaque. We can’t simply trust that these ‘black boxes’ will always perform as expected; we must take proactive steps to ensure that they do, and have a backup plan for when they don’t.
Previously, this task would have fallen to the Chief Information Officer or an IT specialist. But as AI becomes more integral to the strategic direction and operations of an organisation, it is incumbent upon the entire senior management team to familiarise themselves with the core aspects of AI – how it works, its underlying principles, capabilities and potential pitfalls. We are no longer dealing with a traditional IT system, but with a ‘parallel brain and nervous system’ for the company.

Shaping the environment
It is in all our interests, whether as individuals, communities, businesses or organisations, to ensure that AI develops in ways that serve our needs and does not undermine our social structures. This will be difficult for reasons outlined earlier. The AI revolution is like a juggernaut – propelled by interconnected technologies and commercial interests that have generated an unstoppable momentum.
Nevertheless, there are practical steps we can take to help steer this juggernaut in the right direction and shape the emerging AI environment. For starters, we can make it clear to our elected representatives that we expect – indeed insist – that they develop effective policies and regulations to minimise the potential dangers of AI. We can also call for more transparency around the use of AI, so that we are better informed about when we are being exposed to AI-generated content. To this end, the European Union is currently developing legislation to label AI-generated content, in order to boost transparency.
Specifically, there are a number of key groups that have critical roles in shaping the emerging AI environment. These are listed below, and we should familiarise ourselves with their respective roles, because whether we like it or not, their decisions over the next few years will exert a profound impact on our future.
• Researchers and Developers of AI, in academia, research institutions and private companies.
• Private Technology Companies, such as Google, IBM, Microsoft and Amazon, which invest heavily in AI research and development, and create AI-powered products and services, including virtual assistants, autonomous vehicles and recommendation systems.
• Governments and Policy Organisations play a critical role in regulating AI and setting policies around its development and use. Some nations have established AI strategies and regulatory frameworks to address ethical concerns, privacy issues, and the potential risks of AI deployment.
• The Open-Source Community plays a significant role in AI development through frameworks such as TensorFlow and PyTorch – open-source deep learning libraries that have contributed to the widespread advancement of AI technology.
• International Organisations, like the United Nations, the European Union, and the Organisation for Economic Co-operation and Development (OECD), work on AI governance and policy. Given that the AI revolution is a global phenomenon, these bodies play a critical role in fostering international collaboration, sharing best practice, and addressing AI’s global impact.
• User and Consumer Communities that use AI systems have an indirect – albeit important – influence on AI development. Their feedback and usage patterns help developers and companies shape their products and services.

Decades ago, we took our first steps into the Information Age, and were confronted with a host of new and evolving technologies - computers, mobile phones, the internet, tablets, smart phones, countless apps, and most recently, social media. It has been an exciting journey.
Now we are entering a new phase, driven by Artificial Intelligence – which promises to be even more thrilling and world changing than the Information Age. But it’s also a lot scarier. For we humans have never been confronted with the prospect of sharing our planet with an intelligence that may eventually become smarter than we are. Sure, it may never develop the heart, soul, and empathy of a human being, but in terms of sheer cognitive brain power, it will be extremely formidable and capable.
Hopefully, we will find ways to ensure that such a hyper intelligence always remains under our control – and supports our human priorities and directives. It could also be that our current fears are overblown, and that AI proponents like Marc Andreessen are right to say we have nothing to fear. ‘AI will make everything we care about better,’ he assures us, ‘by offering us the opportunity to profoundly augment human intelligence to solve our biggest challenges, from climate change to the creation of new technologies to reach the stars.’ Perhaps AI will turn out like the Y2K bug in the late 1990s, when doomsayers warned that the world’s entire digital architecture would grind to a halt at midnight on 31 December 1999. Yet the clock ticked over to the year 2000 and nothing happened – zilch – life continued as usual. Maybe something similar will happen with AI: the sense of threat will dissipate, and AI will fade into the background of our lives, quietly improving nearly every aspect of our existence.
The reality is, however, that we cannot be absolutely, 100% certain of this benign outcome. And even if we do manage to keep AI on a tight leash, we can’t escape the human problem – which is that bad people (and totalitarian governments) will do bad things, and AI will certainly be able to help them achieve their malevolent goals.
The best we can do is to simultaneously make the most of the extraordinary opportunity presented by AI, while taking prudent precautions against worst case scenarios. As noted earlier, we have walked this tightrope before – with nuclear energy – balanced precariously between oblivion and hope. It’s what we humans have always done.