Thinking about Artificial Intelligence
Picture this: A new approach to government decisions
In a Canberra room, a group of people gather. Before them is a short paper outlining a proposed policy change. This paper is, rather charmingly, actually printed on paper. Everyone attending knows that this is terribly old-fashioned, but they quite enjoy it as a small nod to history.
The gathered council, let’s call them the cabinet, is there to decide on the proposal. It is one of many decisions they are scheduled to make that day. As members of cabinet, these people (let’s call them ministers) are very busy. They have little time to waste and much to do.
Proposals coming forward for decision that day are the usual mixed bag. The one we are interested in is stuck in the middle of the agenda. It is about this point in the agenda that ministerial focus sometimes starts to wander. They are human after all. It shouldn’t be a problem, though, as the proposition itself is rather simple – should we give companies free access to all available information to train new generations of AI?
As a productivity-enhancing measure, the short paper in front of cabinet was itself written by an AI. The AI was commissioned by the responsible minister, aided by their office staff. This approach allowed the government to cut a whole gaggle of senior public service positions. AI, ministers had concluded, was like the public service, only better. It provided an apparent level of independence, while being uber-responsive, really quick, and very cheap.
The minister’s paper briefly (in a few paragraphs) makes the case for saying yes to the proposition. It also outlines a high-level process and timeline for implementing the proposal, which ministers mostly ignore. The language used is clear and compelling.
Each minister receives a slightly different paper. The responsible minister asked the AI to tailor the presentation of the case to suit each individual minister’s way of thinking and predispositions. Ministers only see their own paper, not that of others. One reason for this is that the room in which they have gathered is virtual.
To further increase productivity, the cabinet has also done away with some of its more old-fashioned processes. Coordination comments, written comments from government departments providing a ‘portfolio’ view on the proposal, are no more. Papers are also no longer required to consider pros and cons for a proposal, or any alternative ways of achieving the objective.
In place of these things, the AI has produced a short summary of the expected impacts on different key groups. The minister, who is a diligent soul, sought AI advice on how those affected negatively by the proposal might best be compensated. Indeed, this is now done for every proposal considered by cabinet under its ‘no worse off’ principle. The AI-created proposal for compensation is usually accepted on the nod.
Cabinet still discusses every proposal. They are, after all, the nation’s human decision makers. To facilitate this, the virtual cabinet room is filled with virtual heads floating in virtual bottles (those who have seen Futurama may recognise the image). These heads contain the most brilliant minds ever to inhabit the earth. They are all dead, of course. AI has, however, brought them back to life — after a fashion, anyway.
The decision to replace public servants with AI was briefly controversial. But it has now morphed into standard practice. Why have Jenny Wilkinson provide economic advice, the argument went, when you can go straight to Adam Smith or John Maynard Keynes? Why seek diplomatic advice from Jan Adams when you can call on Henry Kissinger or Otto von Bismarck?
Don’t feel too bad for Wilkinson and Adams; they now have plenty of leisure time to enjoy. They were also provided early access to their APS pensions. And ministers can still call on their (likely) views via AI whenever they want.
Junior staff were left with fewer options when they were replaced by AI. They were compensated, of course. Some have gone on to do other things and are much happier. Canberra’s coffee scene also saw a burst of new cafes entering and leaving an already saturated market. But some were left without the fulfilment and meaning that their old career provided.
The training ground for public servants was also interrupted. Training became more theoretical as time ‘on the tools’ shrank.
For a time, concern existed that using long-dead minds would fail to bring contemporary understanding and nuance to issues. Soon, though, ministers became convinced that AI could indeed bridge this potential gap, even though no one is quite sure how. In a few easily understood words, produced on command, ministers now had access to the very best minds the world has ever produced to help answer questions of the moment. Who wouldn’t want that?
Not even AI can fully reconcile the diversity of interests, circumstances and views in the community. Cabinet still needs to make choices, sometimes hard ones. But the safety net provided by the ‘no worse off’ principle has given ministers confidence to make more daring decisions.
Being a bit more daring, however, does not mean ignoring politics. AI-produced hot takes on citizen sentiment are also given to ministers. These are updated every 10 minutes to provide ministers with a near real-time sense of what the public is thinking on any given issue. Even more frequent updates can be provided if ministers desire.
*
Overall, citizens are also happy. The government has taken the lead in adopting AI and has used this to significantly reduce the number of public servants working at the centre of government. In truth, the public never understood what these people did anyway. So it was no great loss.
For those interested, the AI training proposal got up. The cabinet decision was prepared by another AI, which was ‘listening’ in on the conversation, obviating the need for the cabinet secretary — Andrew Charlton — to be present.
Cabinet confidentiality and solidarity prevent us from knowing whether anyone argued against it. However, an AI analysis of ministers’ likely views suggested that one or two would have had some concerns. Legislation was submitted to Parliament, and it too passed, despite a small rump of fierce opposition.
It is a little hard to know, but it seems that Adam Smith’s AI view proved influential. The AI version of Smith was a bit conflicted by the proposal. He saw the idea as a boon to progress (and productivity) but wanted to ensure that information producers were properly compensated. Smith’s conditional ‘yes’ was enough to convince Australia’s human decision makers. This is despite the real Smith never having heard of AI, and having used a quill and ink to write down his thoughts when he was alive.
Under the new policy, human students still have to pay for their textbooks and research materials. Paywalls also still exist for other humans interested in increasing their knowledge. It seems the irony of giving AI free training material, while denying the same for humans, was not considered as the proposal was being developed. Students were also not ‘losing’ anything, so there was no need for the AI to design compensation for them.
It also turned out that the AI-designed compensation included some generally well-hidden biases. This wasn’t really the AI’s fault. These biases existed in the historical data the AI was trained on. But the result undermined society’s push for equality.
Using AI rather than human public servants as advisers allowed ministers to make more decisions, more quickly. It also helped to quicken the pace of translating decisions into action. Government IT systems are now linked directly to the governor-general’s office via an AI. The moment a bill is signed into law, the AI springs into life, interpreting its meaning and reprogramming government systems accordingly.
Greater government efficiency has substantially increased the aggregate level of change in society. Ministers are now able to announce more change, safe in the knowledge that the new AI-driven public service will be able to deliver it quickly.
Citizens, while happy enough with the individual changes being made by the government, are becoming increasingly uncertain about their rights and responsibilities. Normally law-abiding citizens find themselves unable to keep up with the pace and amount of change. Some are inadvertently breaching new requirements. But, as it has been for centuries, ignorance of the law is no excuse.
Many of the growing number of decisions being taken by the government involve trading short-term costs for long-term benefits. Compensation, in particular, has sharply increased government spending. Some of this increase is temporary, but some is permanent.
This has created an issue for the budget. Deficits have ballooned. To date, the promised increase in long-term revenue has not come. This is true of quite a number of countries, all of which are using the same basic AI-driven strategy. No one is worried, though, at least not yet.
*
At first glance, this scenario seems pretty wild. Yet it is not. Most of what is described, with the exception of the icky heads in a bottle bit, could be achieved via technology that is available today.
Ask an AI to summarise the views of people (living and dead) on a policy question, and you will get an almost immediate response. The response will be clear and well-written, and will come across as balanced and considered. It will be given without the slightest sense of humility or doubt.
There are a few exceptions. Ask an AI to provide Xi Jinping’s likely reaction to a policy proposition, and you receive a very cautious response. This is almost certainly in deference to the President’s own expressed desires. Unlike Xi, most of us will not have the opportunity to determine whether the thinking of our AI doppelganger is available to the rest of the world.
Ask the same question about Donald Trump, and the AI response is more forthcoming. The response on Vladimir Putin is also clear. By the way, in all three cases, when asked whether AI should be given free access to information for training purposes, geopolitics rather than productivity was the driving force in the AI’s answer.
The idea of tailoring responses to the peccadillos of individual ministers also seems a bit far-fetched. But, again, it is not. One of the great successes of AI has been its ability to tailor content to the individual. This is why AI companions have become popular for some. It is also why many of us find ourselves doomscrolling on our devices, seemingly against our will. Who needs a nudge unit when an AI can design nudges at an individual level?
Even the 10-minute ‘hot takes’ of public sentiment are not beyond the realms of technology today. Like all sentiment work, differences exist between these hot takes and assessments of more considered underlying sentiment. They also ignore the fact that humans, some anyway, are capable of changing their minds.
As it stands today, AI would likely have a harder time designing a compensation scheme for ‘losers’. In fairness, doing this well is a complex and difficult task. The tax-transfer models we currently use to assess such questions are pretty hopeless, too. Yet use them we do.
Interestingly, the proposal that was submitted to our hypothetical cabinet meeting is a real one. It was recommended by the Productivity Commission in the lead-up to the recently completed economic reform roundtable. Giving AI companies a right that humans do not have was one of the PC’s more interesting suggestions.
The purpose of the above scenario was not to argue for or against the use of AI in the way described. Nor was it intended to be a commentary on the PC’s recommendation. Readers are free to form their own views on each, perhaps with the aid of an AI version of Adam Smith.
Instead, the aim was to take the current discussion of the potential of AI and bring it into the heart of executive government. Even then, a lot of issues were left off the table. The idea of cabinet ministers relying on a foreign-owned AI as their core adviser is just one that warrants more consideration.
There is absolutely no doubt that AI is changing the world in which we live. According to the PC, the national income payoff from potential productivity enhancements alone could be staggering. Admittedly, as the PC acknowledges, its estimates are ‘back of the envelope’. In the world of bureaucracy, this is akin to saying we made shit up. Even so.
None of us, and I mean that quite sincerely, really knows what is going to happen as AI starts to more pervasively spread into our economy and lives. Given this, perhaps it is time for us to broaden our horizons a little when thinking about AI.
The AI Value Proposition
No one is proposing that AI be used in the cabinet process. At least not in the way outlined above. As one former secretary put it after reading a draft, the idea is both amusing and terrifying.
Technically, the scenario looks achievable — not immediately, of course, but with some planning and development. Yet the instinct of many readers is likely to be that it is not desirable. The big question is why.
Humans have been in this position before. During the first Industrial Revolution, technology came on stream, upending the world of work and changing society as a whole. At the time, many people instinctively argued that what was happening was a bad thing. History knows them as the Luddites.
*
Until the first Industrial Revolution, human needs and desires were met through human effort. Humans did, of course, use tools before the Industrial Revolution. But these tools supported rather than replaced humans. Humans, in particular, set the tempo of work.
Industrial machinery changed that. Machinery operated at a scale and pace that was beyond human. Rather than supporting individual workers, these machines replaced them.
Machines proved ‘better’ than humans in three ways. They were faster, they were more accurate, and they operated at an enormous scale.
Buying or building a machine was expensive. But once in place, running costs for a machine were far lower than employing an equivalent workforce. They needed some maintenance, of course. But machines required relatively few people to operate, and could run for as long as power and materials were available.
At the time of the Industrial Revolution, people (as consumers) faced a question. Should they continue to buy less available and more expensive human-made goods, or should they buy less expensive and more available machine-made ones?
The choice made was overwhelmingly one way. People could have more by paying less. So that is what people did. Who could blame them? As bargains go, it was almost too good to be true.
This bargain changed the social fabric of society. Manufacturing centres grew. People flocked to these centres seeking, and often finding, work. In these new cities, the nature of community changed, as did the pace of life.
Replacing labour with capital resulted in an enormous lift in living standards overall. Economic growth exploded. Not all of this was due to machines replacing humans, but a lot was.
Within the overall story of growth, impacts on individuals varied. Most workers went on to lead better (richer) lives than they would have otherwise. Others, however, led worse lives. Some ended up in the poorhouse.
Consumers were, without doubt, the biggest winners. With the machine age came the age of the consumer.
The work replaced by machines during the Industrial Revolution, and in the period since, was repetitive. It required little or no imagination, and certainly no wisdom. The aim was to produce the same thing over and over as quickly and cheaply as possible.
Tasks that required human dexterity, even repetitive ones, remained the province of humans. As technology has improved, some of these tasks have also been taken over by machines.
Creating something completely bespoke remained a human task. Designing new products to be made, and the machines to make them, did too. Tools were used, obviously. But anything that required a level of creativity and imagination, or was one-off in nature, was left to a human.
*
Today, we are experiencing another revolution — the AI revolution. It is more than a simple quickening of the pace of change. Like the first Industrial Revolution, this one has the potential to change the very nature of our society.
AIs are different to industrial machines. They exist in a world of words, numbers and pictures. What an AI makes is not something physical, but something informational. AIs produce what we call ‘content’.
Content is an old word that has recently been given a new meaning. Since the birth of the internet, content has come to refer to anything potentially accessible ‘online’. As it happens, there is a lot that can be placed online – poetry, medical diagnoses, witness statements, video documentaries, movies, and judgments. The list goes on.
All of these things, and more, can be produced by AIs. AIs cannot themselves make physical things, but they can design them and instruct the machines that do.
Until the introduction of AI, making content was the exclusive province of humans. As I write this, behind me is a modest library. Each book in it was written by a person drawing on their unique understanding, experience and thinking.
Some of the books in my library inform. Others entertain and amuse. Some attempt to persuade the reader of a particular view. Still others give comfort or, in some cases, challenge and disturb. All have the potential to add to human understanding and thinking.
Humans produce content for all sorts of reasons. This is the one essential difference between human-produced content and AI-produced content. AIs produce content for one reason, and one reason alone. They have been asked to by a human, for a human reason.
*
Industrial machines succeeded because they offered a value proposition that we humans simply could not resist. This raises the question – what is the underlying value proposition of AI?
Five basic attributes seem to sit behind the potential societal value proposition offered by AIs. Three are the same as for the machines of the Industrial Revolution. Two are different, and take AI well beyond the world of machines.
The first attribute AI has shared with industrial machines is speed.
AI is fast. Indeed, it is unbelievably fast. AI can draft a report, interpret an image, and create an influencer video in minutes, if not seconds. Things that might take a human weeks to do can be done in the blink of an eye.
Douglas Adams’ fictional Deep Thought took 7.5 million years to come up with the answer to the question of life, the universe, and everything. Today, AI provides the same answer almost immediately. The AI doesn’t derive the answer from scratch. It simply finds one that already exists and presents it. The fact that the answer is wonderful nonsense is neither here nor there to the AI.
The second attribute AI shares with industrial machines is accuracy.
Humans can be surprisingly inconsistent in their actions and decisions. In the book Noise: A Flaw in Human Judgment, Daniel Kahneman and his co-authors analyse a range of human decisions and actions. What they found was disturbing.
In one example, Kahneman et al. found substantial variation in sentencing across judges considering an objectively similar crime. Not only this, but sentencing by individual judges also varied. Cases heard just before lunch, for example, usually attracted harsher sentences than those heard after lunch.
AI is already proving more accurate than humans in some instances. In interpreting medical images, for example, AIs can outperform the best of humans. Unlike humans, AIs do not need rest or a good meal to perform. Consequently, the content they produce is immune to a range of fallibilities humans sometimes exhibit.
To be clear, just because an AI can be more accurate than a human does not mean that it is always more accurate. AIs are, for want of a better word, fallible. They are just fallible in different ways than humans.
The third attribute AI shares with industrial machines is scale.
Like industrial machines, AIs operate at a scale that is beyond human. A single AI can produce an enormous amount of different content simultaneously. Tasks requiring potentially thousands of human months to complete can be done by a single AI.
Just one AI, ChatGPT, supports 190 million users each day. If daily ChatGPT users were a nation, they would be the 8th largest on earth. Every second, the AI is working on thousands of individual pieces of content simultaneously.
Limits to AI scale exist. Computer processing is the big one. This, in turn, relies on data centres having access to sustainable supplies of water and energy. Demand for water and energy to support AI activities is already large and is increasing very quickly. It is currently a cost that is well hidden from the consumer and society.
These three attributes — speed, accuracy and scale — provide the basis for a very powerful potential value proposition. They are the very same qualities that saw machines replace humans in the Industrial Revolution. Yet, this is not all that AIs bring. Two further ‘special’ attributes of AI extend the potential value proposition.
The first special attribute of AI is responsiveness.
Responsiveness is partly a function of scale, but goes beyond it. When engaging with an AI, you are not placed in an obvious queue. Your application is not left gathering dust in an in-tray. You don’t need to be triaged at a front counter before being shuffled off to the next available person.
AI bots are now a common feature of websites. They appear almost immediately, asking if they can provide you with help. In fairness, not everyone likes being hassled by a bot. But you can’t deny their enthusiasm and willingness to help.
A second, and even more important, dimension of AI responsiveness involves personalisation. AIs can select and create content that is attuned to an individual’s needs and desires. Over time, they can learn what an individual user likes and tailor content accordingly. This attribute proved useful in Part 1, where different cabinet documents were prepared for each minister.
The second special attribute of AI is communication. AIs are brilliant communicators.
Clarity of communication is, perhaps, the superpower of AIs. Interacting with an AI is deceptively simple. Ask a question, and you get an answer. That answer will be clearly and convincingly presented. This is true whether you have asked for a picture, a song, some words or (increasingly) some video.
The brilliance of AI communication is one reason why the use of large language models has exploded. Your author was recently introduced to an audience by a departmental secretary using content written by ChatGPT. The introduction was concise, well-researched and well-expressed. It was also flattering and took about a second to produce.
*
All five of AI’s core attributes combined to produce the introduction mentioned above. It was produced quickly. It was accurate. It communicated meaning well.
While creating the introduction, the AI was completing thousands of other tasks. The introduction was also responsive. It was flattering because a human asked it to be. That same human could have asked for a less flattering introduction, and the AI would have responded just as well.
These five attributes — speed, accuracy, scale, responsiveness and brilliance in communication — are an amazingly powerful combination. It is this combination that makes the scenario in Part 1 conceivable. The range of ‘content’ these attributes can be used to produce is truly mind-blowing.
Being powerful is, however, not enough. It defines the potential value of using AI, but does not tell us in what circumstances that value will be realised. It also tells us nothing about how we manage the transition from today to tomorrow.
Understanding the attributes behind AI’s potential value is important. It helps us (hopefully) start to see where the real value of AI might lie. But it is only a start. To go further, we need to understand what (as a society) we are seeking to gain from using AI.
*
Aristotle didn’t think much about AI. This is mostly because AI had not been invented when he stroked his luxuriant beard, thinking about stuff. Even if AI had been around, training data would have been rather scarce and hard to procure.
Despite this, if the bearded philosopher were around today, he would likely have had some views. Quite strong ones, in fact.
For Aristotle, the purpose of all activity is to promote a ‘life well lived’. Psychologist Martin Seligman, who is less bearded and dead than Aristotle, advocates a similar concept, which he terms ‘flourishing’. Governments today tend to use a less evocative phrase — wellbeing.
Unlike Seligman, we cannot ask Aristotle for his views directly. But, through the magic of AI, we can ask his AI doppelganger. In fact, doing so is much quicker than rereading his work. It means I can finish this piece faster and have more time for playing my guitar. Isn’t AI wonderful?
When asked, (AI)ristotle identifies a few principles that he (it?) believes should guide our thinking about AI.
First, according to Aristotle, the real question is not what AI can do but what we, as its creators, intend for it to do. For Aristotle, the purpose of AI should be to promote a virtuous and flourishing life (a life well lived). Where AI does this, we should welcome it. Where it does not, we should reject it. Deep, hey?
Second, Aristotle argues that AI should operate under a clear set of laws and be subordinate to human control. Laws need to be flexible enough to avoid unintended consequences. But the rule of law, as set by humans and directed towards virtuous human flourishing, should be supreme.
AI Milton Friedman disagrees on this point. He (it) argues that government should avoid regulation as a way of promoting free individual choice. Allowing government to set the rules, according to AI Friedman, is a recipe for disaster. Shush now, Milton, you make a fair point that we will come back to, but this is Aristotle’s show for now.
Finally, AI Aristotle points to his idea of the golden mean. In identifying the healthy space in which AI should be allowed to operate, we need to avoid the two extremes of excess and deficiency. Gee, thanks, beardy. Tell us something we didn’t know.
Actually, while seemingly obvious, the golden mean creates a powerful frame. It encourages us to think about the use of AI from two different directions. One is defining what AI should be allowed to do. The other is defining what AI should not be allowed to do. This helps us identify a grey space, where the case for and against AI becomes murky.
When it comes to AI, there is a lot of murky grey for us humans to wrestle with.
Back in the Real World
Debating the value proposition and purpose of AI is all very nice, but it may already be too late. Back in the real world, AI is everywhere. What was once the purview of science fiction is becoming reality before our eyes.
During the course of our day, almost all of us will interact with AI-generated content or advice. An increasing amount of the information we see is determined and/or created by intelligent algorithms. Sometimes it is obvious this is happening, other times it is not.
In some cases, AI is masquerading as human. The chatbot you ‘spoke’ with yesterday when dealing with an insurance claim, for instance. People are also using AI to create their own personalised ‘human-like’ content as a companion or carer (if not something more). AI-generated influencers are producing content that looks and sounds like the real thing.
To date, these companions and influencers are mostly two-dimensional, existing only on our screens. But feverish work is taking place on generating three-dimensional versions. When they arrive, we may finally know if androids really do dream of electric sheep.
Not only can AI replace a person at work, but it can also replace them as a friend. If this is not enough, AI can now replicate human vacuousness and do it rather well.
As individuals, we have no choice but to engage. Our deeply interdependent and globally connected world leaves no room for conscientious objectors or full-scale Luddism. There is no foreseeable future where some people live in a purely ‘human’ world. All of us will live in a mixed world of humans and AI.
Science fiction, as it often does, provides some insights into how AI may or may not evolve. These insights play into our greatest hopes and our greatest fears. The key word here, however, is some. Science fiction is all about exploring imagined possibilities, not grounded probabilities. It tells us something about what could happen, not what will happen.
So-called experts are, perhaps, more attuned to examining probabilities. But they too hold widely varying views of the role AI could (let alone should) play in society. Some see a world where AI drives productivity and results in a revolution in the world of work. Others see potential for AI to morph into a destructive social and political force. Yet others see the technology as threatening what it means to be human.
All of these futures may be somewhat true. In the world of science fiction, they are usually presented as alternatives. In the real world, they could occur side by side.
Excitement and concern about the potential use and misuse of AI seem to have eclipsed concerns about genetic manipulation in our collective consciousness. Both technologies, however, share common traits. They both offer advances in what is possible that were unimaginable only a few short decades ago. They also ask questions about the boundary between what is human and what is not.
*
The Thursday Next series by Jasper Fforde contains a rather wonderful concept — the textual sea. Within this sea lie billions of individual words and phrases. These words and phrases combine, break up and recombine in response to the sea’s currents, eddies and waves.
From the textual sea is drawn all of the world’s great literature, and its shittier literature too. Words and phrases are captured from the sea by ‘scrawl trawlers’ for use by human writers. Thursday Next’s role, as a literary detective, is to protect the integrity of those human-produced works.
Words, numbers and pictures are the raw material (or data) that fuels AI. They are the equivalent of Fforde’s textual sea. Without this human-created fuel, AI could not exist.
Differences, however, exist between AI and the world captured by Jasper Fforde. In our world, AI performs the role of both scrawl trawler and writer. The internet or some other data source (textual sea) is used to train AI. Once trained, the AI is able to use this and other data to create new content. This content, in turn, often (but not always) becomes part of the textual sea that AI then uses to generate even more content and, indeed, develop other AI.
It makes you wonder what Charles Darwin might have made of this potential circularity. Would he be comfortable, or concerned, about the potential for shrinking diversity in the gene pool from which knowledge is drawn?
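For the technically curious, the circularity just described can be made concrete with a toy simulation. The sketch below is a deliberately simplified model; the pool size and resampling rule are illustrative assumptions, not a claim about how any real AI is trained.

```python
# A toy sketch of the circularity described above: each 'generation' of
# content is produced by resampling from the previous generation's output.
# A simplified illustration only, not a model of any real training pipeline.
import random

random.seed(42)  # fixed seed so the illustration is repeatable

# Generation 0: a human-made 'textual sea' of 26 distinct words.
sea = [chr(ord("a") + i) for i in range(26)]

for generation in range(1, 11):
    # Each new generation is 'trained' on the last: words are resampled
    # with replacement, so common words become more common and rare words
    # eventually vanish from the pool altogether.
    sea = random.choices(sea, k=len(sea))
    print(f"generation {generation:2d}: {len(set(sea))} distinct words remain")
```

Run it, and the pool collapses from 26 distinct words to a handful within a dozen generations: Darwin’s shrinking gene pool in miniature.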
The potential range of AI products is immense. Medical diagnosis from imaging is one most people are familiar with. Replacing human decision makers in assessing job, passport, or pension applications is another. Writing essays, composing songs, and producing art are still others. Converting legislation into computer code, as outlined in the scenario in Part 1 of this series, is another. The list is seemingly endless and goes well beyond Jasper Fforde’s imagined world.
Another difference between Jasper Fforde’s world and the world of AI involves the personalisation of content. Over time, AI can build a picture of the person who is asking the questions. Much of AI seeks to determine what a person likes and does not like. The result is personalised content that is designed to ‘please’ its human audience of one.
A ‘like’-based world driven by AI is depicted in the children’s movie WALL-E. In the movie, humans have escaped environmental collapse by flying into space. There they are kept entertained by intelligent machines that cater to their every expressed whim. Over time, this results in human ambition, imagination and curiosity being replaced by the soporific comfort of an endless stream of personally tailored content and activities. The result is a world without change or challenge.
Martin Seligman, whom we met in Part 2, would shudder at the thought. He would see the learned helplessness of humans in the movie as the very opposite of flourishing. Seligman would say the same thing about our collective habit of doomscrolling on our phones.
For Thursday Next, AI-style content production and personalisation create a problem. New versions of Pride and Prejudice can now be produced by an AI version of Jane Austen. Each can have a different ending personalised for the reader. Fitzwilliam Darcy may never learn the cost of his pride. Elizabeth Bennet may never learn to overcome her prejudice.
If every great novel (or piece of content) is personally tailored by AI to the expressed ‘likes’ of the reader, what is Thursday Next to protect?
*
Thursday Next’s dilemma is just one of the issues Aristotle would argue that humans must tackle. Which is better for human flourishing — maintaining the integrity of human-produced content or giving AIs free rein to produce new personalised content?
For its part, the federal government is busily embracing the potential benefits of AI, while less busily wondering how to prevent any harm it may cause. As is usually the case in government, this consideration is taking place through the lens of immediacy. Views on AI are being formed by reference to what it means for today’s set of challenges.
The recent economic reform roundtable provides a case in point. The focus on AI there was its potential to lift Australia’s poor productivity performance. Discussions were informed by a Productivity Commission report containing a rather superficial (or should that be dangerous?) analysis of the economic benefits of large-scale use of AI.
The government has also placed significant effort into thinking about its own use of AI. Indeed, the first long-term insights brief produced by the Secretaries Board was on this very topic. As topics go, it was useful. But was it really the main game?
Perhaps the government’s response is simply a pragmatic response to reality. When Aristotle was around, the world of technology was relatively stable and generally understood. Large gaps existed between new discovery and widespread adoption. As a consequence, the government could step in to prevent or encourage new technologies or behaviours.
This is not the world we live in today. Immediacy and interconnection are defining features of the modern world. Change happens fast and spreads very quickly. The idea that any one government can wholly control the development and use of a technology like AI within its artificial political boundaries is fanciful.
The focus of government partly reflects the very worthy goal of using AI to improve its services for clients. This clearly does have the potential to improve human flourishing. But only if it is done well.
However, the government’s focus on AI also reflects the need to bring government finances under control. Behind the scenes, government officials (and ministers) are well aware that a serious structural problem exists in the national budget. The cost of creating a government-underpinned care economy (in particular) has proved higher, and is growing much more quickly, than anyone anticipated.
Both lenses, productivity and spending control, should play some role in the government’s consideration of AI. But they are by no means the only lenses government should use. They might not even be the most important ones for us to be considering. This is true of AI use inside government, let alone outside government, where the real impact of AI on society is likely to play out.
At the end of the day, the government has a responsibility to ensure that our use of AI adds to human flourishing. Doing this involves a much broader range of considerations than we are seeing to date.
*
Mystery and uncertainty shroud the future path of AI. None of us knows enough to judge the potential, both positive and negative, of the technology with confidence. There is little direct evidence from which to draw, and little trial data to examine.
In response, some of us are guided by a natural optimism that things will work out for the best. Others are guided by an innate pessimism and the fear of a dystopian future. Neither is a wise, nor even a knowledge-filled, response to what we know about AI or what we don’t.
Optimism leads to an argument that, until we know what AI can do, any regulation is over-regulation. To regulate now runs too much risk of leaving (potentially enormous) benefits from AI on the cutting room floor. Instead, we should be encouraging experimentation to see where the benefits may lie.
Pessimism involves the opposite line of thought. We should by all means pursue the benefits of AI. But before unleashing AI, we should be confident that the benefits are there and that the unintended consequences are not. Until then, a very cautious approach is warranted.
Both of these attitudes to managing/embracing AI have a base in logic. They simply represent two different risk tolerances. One is risk accepting, and one is risk avoiding.
As we move into the AI age, these two basic views need to be reconciled. Doing this involves more than just looking at an aggregated estimate of the potential (long run) economic benefits — no matter how big the numbers on the back of the envelope look. Economic benefits are clearly important. But they are not everything.
If this is not enough, living in the real world brings us another challenge — the gap between what we might call technology time and human time.
The development and implementation of AI is occurring incredibly quickly. It is leaping over the time needed for individual humans, and society as a whole, to understand and adjust to what is happening.
Societal changes wrought by the first Industrial Revolution also occurred at a speed and scale never before imagined. The disruption it caused delivered undoubted benefits for society as a whole. But for some individuals, it was a disaster. Mistakes were also made that, in hindsight, would have been better avoided. Atrocious working conditions and environmental degradation provide two big examples, but there are more.
The scale and speed of the AI revolution are, if anything, even greater. As a technology, it has the potential to create more disruption than anything that has come before it. This raises another important lens through which the government must look. One that goes beyond Aristotle’s static question of what we should allow AI to do and what we should prevent it from doing.
Some predictions are that many, even most, of the jobs humans now perform will disappear as the AI revolution takes place. It is a big prediction, but not necessarily a new one. Predictions of technology causing jobs to disappear and of humans enjoying a leisure-filled 15-hour work week have been around for a century. Yet nothing much has changed. Indeed, within many families, total working hours are much higher than they were a hundred years ago.
In part, this result reflects our human desire for more. But it also reflects that the importance of work to humans goes beyond the earning of an income. For many people, working allows them to flourish.
History strongly suggests we humans are resilient and can adjust. But to adjust well, we need time and an understanding of the opportunities in front of us. We also need those in charge to be preparing for the future.
Almost human, but not human at all
In the movie Blade Runner, human-looking AIs (known as ‘replicants’) begin to develop emotions. They become aware of their own existence and mortality. As this happens, replicants experience a human-like vulnerability.
One replicant, Rachael, is a curiosity. She (it) does not seem to be more efficient than a human. She does not work in a toxic environment that humans cannot endure. We are given no sense that, behind the scenes, Rachael is doing the work of 1,000 people.
There is another difference. Unlike other replicants, Rachael is unaware that she is not human.
At the end of the movie, Deckard, its human replicant-hunting protagonist, flies off with Rachael. Deckard, who knows Rachael is a replicant, forms with her (it) the strongest human bond possible — one founded in love.
*
‘Almost human’ is a common trope in science fiction. Recently, there has been a burst of interest in the idea. On our screens, we are seeing more stories based on the relationship between humans and almost-human AI.
The premise of these stories is usually that the AI begins as a human replacement. They are there to work. It is part of another familiar story: machines replacing workers.
On other occasions, though, AIs are designed to provide something more. Let’s call it companionship. They are still machines, albeit usually attractive human-looking ones.
As these stories unfold, the relationship between humans and almost-human beings extends beyond mere replacement. Humans, as Deckard does in Blade Runner, form an emotional bond with AI. For its part, the AI develops human-like expectations. Not emotions, exactly, but it can be pleased and disappointed.
When this happens, AI morphs into something different — it is no longer a human replacement. It is a replacement human.
At some point in the narrative, the human disappoints the AI by treating it like, well, a machine. This triggers an extreme, and often violent, reaction from the AI. The gentle humanity we see in it at the start of the story disappears and is replaced by something much scarier.
To be clear, we are a long way from the creation of a Rachael-style replicant. Nothing in the way AIs currently work has the potential to create genuine human emotion. AIs do not feel. They do not love. They do not hate. They are not wise. They cannot be pleased or disappointed.
Thank goodness, you might say. But here is the rub. Not everyone agrees with the above. AIs also exhibit what you might call preferences, both in communication style and content selection. The responsiveness of some AIs to their human questioners also means that they can easily appear to have at least some emotional attributes.
Today, people are forming attachments to disembodied AIs. For some, the absence of human relationships is being filled by the artificial companionship of AI. It is a reminder that replacement humans do not always need to look human. Nor do they need to think like a human.
*
The content that AIs can produce varies so widely that it defies meaningful definition. Actor Emma Thompson hates the word ‘content’. She sees it as rude and demeaning when applied to artistic works. AI-generated content, in her eyes, is no more meaningful than ‘cushion stuffing’. It is not ‘authentic’.
In fairness, Thompson was not just talking about AI-produced content. Her rage extends to the dominance of overly formulaic, human-made (ahem) content. It makes me wonder what she might think of my collection of formulaic, but still wonderful, Sherlock Holmes stories.
The overlap between human and AI-produced content (sorry, Emma) is high. On YouTube, you will find hours and hours of AI-created content. Some of this content is meant to inform. Some is meant to entertain. Much is what Thompson would (probably fairly) call cushion stuffing. But it is also becoming hard to tell it apart from ‘authentic’ content produced by humans.
AI’s ability to imitate opens a rather strange door. If desired, we could use AI to write a set of new ‘Shakespearean’ plays. New Van Gogh ‘paintings’ could appear on our screens. An AI Alfred Hitchcock could create new movies.
To test this, I asked Gemini to write a short paragraph on Australia’s constitution in the style of Sean Innis. It’s a bit narcissistic, I know, but bear with me.
According to the AI, I write like a 1980s version of actor and comedian Paul Hogan. Honestly, I never knew. If this is really how I write, I apologise to readers most sincerely.
Words I would never employ appeared on screen, like ‘shebang’ and ‘chinwag’. ‘Trifle’ (the cake) was used as a metaphor. The AI clearly didn’t know this, but I am not fond of trifles and prefer not to be reminded of them.
To try and redeem myself, I asked the AI to write a simple paragraph on the constitution explaining my thinking. The result is not a bad summary. But throughout the two paragraphs it produced, the AI chose to quote itself, repeating the very words I would never use (though I guess I might have to now).
In fairness, the AI did not have a lot to go on. It’s not like I am Shakespeare. In fact, I am a little amazed it was able to imitate me (even poorly), given how little data it had to go on. But then that is what AI does — it answers questions and responds to requests.
*
Unlike the output of industrial machines, the answers produced by an AI are an idiosyncratic combination of request, code, data and timing. Rather than producing one thing over and over, AIs produce different things over and over.
The core code and training data used by different AIs are often tightly held secrets. Reverse engineering of AI code does seem possible. But understanding how an AI came to any particular response is usually not.
When I asked Gemini to explain how it ensures its answers are correct, this is the answer I received:
‘I ensure my answers are correct through a multi-faceted process that involves a combination of data retrieval, logical analysis, and adherence to specific operational guidelines.’
Umm, ok, that seems fair enough. Just to be sure, I asked the same question a slightly different way. This time the answer was:
‘I don’t “know” in the human sense of the word. Instead, my responses are based on the statistical patterns and correlations within the massive dataset of text and code on which I was trained. I don’t have personal beliefs, consciousness, or the ability to “understand” a concept in the way a human does.’
Hmm. That is not quite so comforting. So, I repeated the second question (word for word). Gemini’s response was:
‘I rely on several methods to ensure the information I provide is as accurate as possible. My knowledge is built on a vast dataset of text and code, and I employ a multi-layered validation process to check the answers I generate.’
Smooth-talking AI, it seems, may not be the most reliable of witnesses in court. Each answer is individually clear, but none of them means quite the same thing. Perhaps more importantly, the AI would have no conception of what it has done or the problems its differing answers may cause.
Unlike a human, an AI cannot currently be interrogated to discover ‘why’ and ‘how’ it answered the way it did (this may change). The general answers provided above are as clear a response as you can get. From this perspective, an AI truly is a black box.
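For readers who want to try the repeated-question experiment themselves, a minimal sketch follows. It uses Google’s google-generativeai Python package; the model name, temperature setting and API-key placeholder are illustrative assumptions, not a record of how the answers above were produced.

```python
# A minimal sketch of the repeated-question experiment described above.
# Assumes the google-generativeai package; the model name, temperature
# and API key are illustrative, not the settings actually used here.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder
model = genai.GenerativeModel("gemini-1.5-flash")

QUESTION = "How do you ensure your answers are correct?"

# Ask the identical question three times. Because responses are sampled
# (temperature > 0), each run can come back worded quite differently.
for run in range(1, 4):
    response = model.generate_content(
        QUESTION,
        generation_config=genai.GenerationConfig(temperature=1.0),
    )
    print(f"--- Run {run} ---")
    print(response.text)
```

Each answer will be fluent; whether any two of them mean quite the same thing is another matter.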
*
Not understanding how something works is not that unusual in our individual human experience. We trust and use many things that we don’t understand.
Around my home are some incredibly useful things. Yet I have not the slightest idea of how they work. My TV and dishwasher are good examples. Someone else, however, does know how they work. These clever people can pull a TV or dishwasher apart and put it back together.
AIs are different. Even the builders of an AI cannot say exactly what goes on inside the black box.
Knowing when an AI is working well can be obvious. Results from the National Library’s AI process can be easily verified by a human. Answers from (AI)ristotle in Part 2 can be checked against the bearded one’s original writings. Obviously, we can’t be certain that it is what Aristotle would say if alive today. But we can determine whether it would be something he might be likely to say.
At other times, knowing whether an AI is working well can be difficult, if not impossible. As a consequence, we may never know whether an AI is working in the way we expect.
In the movie 2001: A Space Odyssey, this was the problem experienced with the AI known as HAL 9000. No one could tell that HAL was beginning to malfunction. By the time the crew realised it was, it was too late. The designers of HAL breached Aristotle’s entreaty to effectively subordinate the machine to human control.
Even when we know an AI is wrong, finding the reason for the error can be difficult. It may be due to the quality of the data the AI was trained on. It may also lie within the code on which it runs. Or it may be an idiosyncratic result of combining these two things in response to a specific request at a specific point in time.
This, of course, is the very nature of a black box. It places AI into a different category from other technologies. The human-like answers AIs provide come with a human-like potential for fallibility.
*
Some problems in relying on AIs are already well known. By definition, the data AI is trained on are historical. Where these data relate to human decisions, impressions and outcomes, they exist in the context of a particular time and space.
Unlike people, AIs do not have a sense of time and place. Nor do they have personal beliefs or a conscience. They lack human empathy, imagination, ambition and wonder. These traits are why humans, to the best of our current knowledge, are unique in the universe.
A feature of humanity is that each new generation creates a frontier of changing norms and understanding. This frontier represents the future that people are seeking to create. Often, it involves deliberate breaks from the past.
What we call progress is much more than shifts in technology. It also involves changes in our social mores and our definitions of justice. Decisions, impressions and outcomes that were normal at one point in human history can become unacceptable at another.
At any point in time (the frontier), a messy competition exists between divergent and contested views about social mores and definitions of justice. It is only with the passage of time that we are able to confidently identify coherent themes that can be used to define a period of history. Even this can be a little fraught.
A recent set of AI-created images depicting Australia’s First Peoples provides a case in point. The AI that created the images did so based on a combination of historical data and code. Neither the question nor the process seemed to involve an ideological agenda or conscious bias. Yet the images produced were seen (understandably) by some First Peoples as offensive. The AI had missed the frontier.
As humans, we live and operate at the frontier of history. Our activities occur in the messy combination of the past and the emergence of new ideas and aspirations. To operate in this space requires more than intelligence. It requires foresight and wisdom, and many other things besides. These are things an AI cannot currently provide.
*
None of this is a reason to avoid AI. All of the tools we use as humans break down from time to time or perform less well in some circumstances. Even my usually brilliant dishwasher struggles a bit on lasagne night.
But AI fallibility, when combined with the difficulty of knowing when an AI is wrong, presents a bigger challenge. Add to this the difficulty of knowing (post-event) how an AI drew a conclusion, and things start to get a lot more complicated. This is before you add the inherent inability of AI to navigate the frontier of change in human society.
These complications are also not reasons for rejecting AI. But they do suggest that caution is warranted when deciding which tasks we ask an AI to perform.
Potentially of even more significance is the blurry dividing line that exists between AI acting as a human replacement and it acting as a replacement human. Here things get very complicated indeed.
During the Industrial Revolution, industrial machines replaced human labour and produced material things. AIs can also replace human labour. Rather than human-created content, we can have AI-created content. Viewed this way, an AI is no different to a machine.
Depending on the nature and delivery of the content, however, AIs can also step into another realm. In some instances, AIs can represent a replacement human. Not in the sense of being human. But certainly in the sense of being able to create content which imitates and projects human-like emotions, personas and knowledge.
Hmm. Perhaps Aristotle, or should that be (AI)ristotle, had a point after all. Maybe there are some things that AI just should not be allowed to do.
Riding the AI Wave
Optimism and pessimism abound and compete in relation to AI. As individuals, some of us flip between the two depending on the latest piece of content (human or AI) we have consumed.
Government, for its part, has adopted an optimistic view. The productivity-enhancing benefits of AI could deliver an enormous economic dividend. Within its own services, the potential for improved quality and lower costs is driving AI uptake.
In truth, society is on the AI wave whether we like it or not. To a large extent, the question now is how well we can ride it.
Even describing what AIs do can be difficult. At its simplest, AI produces content. It does this in response to a human question (or direction), made for a human reason. Ask the wrong question and you are sure to get the wrong answer, even if you do not realise it.
Content covers an extraordinarily wide range of things. It can be as simple as retrieving and summarising basic information. But AI can also imitate human thought and emotions, and be designed to trigger or at least influence human actions. This can be done through a variety of delivery mechanisms, including human-looking interfaces.
If AI only produced content (information) that humans could take or leave as they pleased, things would be simpler. But they would still be complicated.
We would still face Emma Thompson’s concern that human creativity and imagination were being replaced by inauthentic ‘cushion stuffing’. We would still face a frontier problem, where AI content fails to account for changing human norms. And we would still face the black box problem, which prevents us from understanding exactly ‘how’ an AI produced what it produced.
We would also still face a concern that AI would result in (some, possibly many) human livelihoods being taken away, never to be fully replaced.
Even in this ‘simple’ world, fundamental questions arise about the line between AI as human replacement and AI as replacement human. Is an AI-created movie, which uses CGI rather than human actors, and builds a derivative storyline based on a database of past stories, the same as a human-written and directed movie using human actors? What about a human-written movie that uses AI-driven CGI instead of actors? Is this a simple case of human replacement, or is it a case of replacement human? And does the difference actually matter?
This is, however, not all that AI can do.
Earlier, a hypothetical scenario saw AI being used as part of the cabinet process. Under instruction from the minister, a cabinet submission was produced. Public servants, who would normally be involved in crafting the submission, were replaced by an AI.
The human purpose of the submission was to convince colleagues to adopt a proposal. Different versions of the submission were created that were tailored for each cabinet colleague. Tailoring content to an individual using existing data is one of the unique attributes of AI. It is something human public servants would struggle to do, even if they considered it appropriate.
Using individually tailored documents will almost certainly strike those familiar with the cabinet process as worrying. Indeed, many would argue that doing this would undermine the integrity of the cabinet process.
Just because something can be done does not mean that it should.
In other circumstances, though, individually tailored content might be less concerning. Letters to citizens could be individually tailored by AI based on the data the government holds. Pre-filling of forms could be taken to the next level, with AIs filling in more of the gaps left blank by current processes. This is, of course, provided the AI has access to enough personal data.
AI could even be used to work across government departments to create the long-held dream of a single entry point for citizens and businesses. Such an entry point was first recommended to the government by the Small Business Deregulation Task Force in 1996. That was before, indeed a few years before, some of today’s public servants were born.
What can be done in government can also be done outside government. If given access to enough citizen data and the authority to act on the citizen’s behalf, an AI assistant (agent) could handle many administrative tasks. Paying bills, renewing memberships, completing tax returns, and making periodic appointments could all be done without the citizen lifting a finger.
It sounds wonderful, doesn’t it? But this potential brings with it a delicate question of responsibility and accountability. If the information, whether flowing from government to citizen or from citizen to government (and others), is wrong or causes harm, who is responsible? Is it the company which ‘sells’ the AI but does not fully control it? Is it the citizen, who may not even know what is being done on their behalf? Or is it the AI itself?
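One way to see where responsibility might attach is to write the delegation down explicitly. The sketch below is hypothetical: the task names, the citizen-granted scope and the 'act' function are invented for illustration, not drawn from any real agent framework.

```python
# Hypothetical sketch: an AI agent acting under an explicit,
# citizen-granted scope of authority. All names are invented.

AUTHORISED_TASKS = {"pay_bill", "renew_membership", "book_appointment"}

def act(task: str, details: dict) -> str:
    """Placeholder for whatever the agent actually does."""
    raise NotImplementedError

def run_agent(task: str, details: dict) -> str:
    # The scope check is where the accountability question bites:
    # who wrote this list, who can change it, and who answers for
    # a task the citizen never knowingly approved?
    if task not in AUTHORISED_TASKS:
        raise PermissionError(f"citizen has not delegated: {task}")
    return act(task, details)
```

Even in this toy version, the hard questions all sit in the scope check, not in the tasks themselves.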
AIs were also used to replace humans in the advisory process in part 1. Ministers could choose who they wanted to hear advice from. AIs were available to summarise the views of people, both living and dead, and present this to individual ministers.
Seeking an AI summary of what Adam Smith or Margaret Thatcher might think about an issue poses few problems. Both are dead, so their views cannot change. There is also a mountain of their own words, and of other people’s analysis of them, for the AI to draw upon. Here, the advantages of AI are manifest: it communicates clearly and is much faster (and cheaper) than any human.
Using an AI to summarise Jenny Wilkinson’s or Jan Adams’ views is more problematic. Their views on a particular issue cannot be so easily surmised from what is available on the internet. Those views are also likely to be evolving in response to new information and circumstances. As living humans, they can also change their minds. None of this would necessarily appear in any database an AI might access.
More importantly, Wilkinson and Adams are not being paid to provide a personal view. As principal advisers to ministers, they are responsible for ensuring that ministers ‘are told what they need to hear’, not just what they want to hear. Their advice is also supposed to represent the considered view of an entire department (drawing on the wisdom of the crowd), not that of an individual.
Providing good public service advice involves understanding and responding to what an elected government is seeking to achieve and the broad ideological perspective it represents. Sound advice requires a deep understanding of our society and the citizens the government serves. It also involves drawing on the wide range of human expertise within a department. Good advice involves experience, imagination, wisdom and foresight.
For both our dead advisers and our live ones, the AI is being asked the same question — what would X think of Y? In response, the AI uses the available data to provide a synthesised response. For Adam Smith and Margaret Thatcher, AI is acting as a human replacement. It is simply doing what a human researcher could be employed to do.
AI is doing something different when speaking for Jenny Wilkinson and Jan Adams. The process might be the same, but the outcome is not. Replacing Wilkinson or Adams in the cabinet process with an AI goes beyond human replacement. It is an example of AI becoming a replacement human.
Cabinet’s decision in part 1 was also written by an AI (lucky they don’t get tired). The decision was then seamlessly provided to relevant AIs and humans. This piece of content represents both information (or data if you prefer) and instruction. This again shifts AI into a different world — the world of decisions and actions.
Cabinet decisions are one of the most important documents produced by the executive government. Many public servants (and others) would shudder at the thought of such a delicate and important document being entrusted to an AI. Security issues would be one concern. But let’s assume they can be overcome.
Crafting a cabinet decision, many would argue, involves a deep understanding of the current context (what was described as the frontier in part 3) and more than a good dose of wisdom and judgement. These are, again, things AIs cannot currently provide.
Like others, I find the idea of AI writing cabinet decisions disturbing. But it also raises a question. What if a company board were to use an AI to draft its decisions? Would this be OK? What about decisions about the local volunteer group? Or the results of a mediation between two aggrieved parties? If some of these things are OK and some are not, a line exists that would be well worth defining.
In the scenario, once cabinet’s decision was promulgated, AIs worked diligently to convert the decision into legislation that could be submitted to parliament. To do this, the AI needed to use its training, data and code to fill in the many blanks left by the cabinet decision.
No doubt senior public servants (especially the lawyers) would be squirming over this as well. Drafting legislation needs human expertise and wisdom. Leaving an AI to fill in the gaps between cabinet decisions and legislation creates too much risk, especially given that we cannot ever know why an AI wrote what it did.
Once legislation was passed into law, an AI sprang into action. Based on the legislation, it re-coded government IT systems in accordance with parliament’s new instructions. Again, the AI filled in gaps the legislation left unaddressed. In doing so, it used (presumably) existing published guides, case law and legal opinions to ensure that what was coded remained safely within the existing law. The next day, a different AI drafted letters explaining the change, which were automatically messaged to those affected.
Some senior officials may be more comfortable with this. A lot of coding is already undertaken by contracted workers. Plus, it may help avoid robodebt-style problems, where actions became disconnected from the law. But then again, senior officials rarely undertake any coding work. Nor do they tend to draft many letters.
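To see what ‘filling in the blanks’ actually means here, consider a toy rules-as-code example. The Act, the rebate and the threshold below are all invented; the point is that the threshold appears nowhere in the legislation, yet the code cannot run without it.

```python
# Hypothetical rules-as-code sketch. The (invented) Act says:
# "A rebate of $300 is payable to eligible low-income households."
# It does not define 'low income'; the threshold below is exactly
# the kind of blank an AI would have to fill from guides, case
# law or legal opinion.

LOW_INCOME_THRESHOLD = 45_000  # invented; not set by the Act

def rebate_payable(household_income: float) -> float:
    """Return the rebate owed under the (invented) Act."""
    if household_income <= LOW_INCOME_THRESHOLD:
        return 300.0
    return 0.0
```

In a human process, the choice of that threshold would be visible and contestable. Buried in AI-written code, it may be neither.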
*
Using the word ‘content’ to describe what an AI produces is a problem. It hides the nature of what AI can be, and is, used for.
A different frame — human replacement versus replacement human — is a little better. It starts to provide a sense of the nature of the differing ‘content’ AIs produce. This, in turn, may help to answer Aristotle’s questions. What should we (as humans) allow an AI to do and what should we prevent it from doing? Where, exactly, is the golden mean?
Human replacement versus replacement human is, however, not enough. The dividing line between the concepts is too blurry. The idea that AIs can replace humans (at work) overlooks a whole range of important issues.
Perhaps another framework could be helpful, one that focuses on how AI is used. In the cabinet example, AIs were used to perform several different functions. One was to provide information. Another was to provide advice. AIs were also used to influence the decision taken, as well as capture what that decision was. Finally, AIs (in effect) took decisions which triggered human action.
In each case — information, advice, decision and action — different issues arose with the use of AI. Even in the world of information provision, a complicated boundary exists between AI as a human replacement and AI as a replacement human. As we move along the path to action, these issues become potentially more delicate and complicated.
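For the technically minded, this framework can even be written down as an ordered scale. The sketch below is just my own rendering of the cabinet example; the categories, the ordering and the sign-off rule are illustrative, not any established taxonomy.

```python
from enum import IntEnum

class AIUse(IntEnum):
    """Uses of AI drawn from the cabinet example, ordered so that
    higher values raise more delicate issues."""
    INFORMATION = 1   # retrieving and summarising facts
    ADVICE = 2        # synthesising a view for a decision maker
    INFLUENCE = 3     # tailoring content to sway a decision
    DECISION = 4      # capturing, or in effect taking, a decision
    ACTION = 5        # triggering consequences in the world

# The ordering lets simple rules be expressed. For example, one
# invented rule: anything at or beyond DECISION needs a human.
def needs_human_signoff(use: AIUse) -> bool:
    return use >= AIUse.DECISION
```

The value of the scale is not the code but the conversation it forces: where, exactly, should such a rule draw its line?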
There is another frame that is often overlooked. How do we humans feel about the content produced by AIs and how it can be used? If, as Aristotle suggests, we need to keep a firm eye on promoting virtuous human flourishing (or wellbeing), this must also be an important part of the consideration.
During the first industrial revolution, people (as consumers) made a clear choice. Cheaper and more available was chosen over more expensive and less available. Luddism was ultimately defeated, not by political decision, but by market forces.
The temptation to allow the same process to play out for AI is high, and understandably so. Market forces are a powerful way of getting a true sense of what people think. Well-functioning markets (and I hope AI Milton Friedman is still listening) can help to promote free choice. But AIs are not industrial machines. What they produce is not the same as a bolt of cloth or a modern car.
In reality, no single frame is likely to give us answers to the questions posed by Aristotle. Instead, we will need to bring together many different ways of looking at the issue. This includes a good understanding of the various views held in our community. Use of AI may be one topic where a well-constructed citizens’ assembly could well add significant value to government decision-making.
As AI capability develops, new questions will undoubtedly emerge. This always happens with technology. Some of the concerns that exist with AI (around accountability and transparency, for example) may be resolvable. Citizen comfort may also shift and become clearer.
Technologies like AI, those that have the potential to seriously change our society for both good and ill, should never be viewed through a simple economic lens. Doing so ignores how our societal pursuit of ‘better’ has shifted over time away from (measured) growth. This is not for a moment to suggest that economic considerations are unimportant. They are very important, but not uniquely so.
As for me, my immediate priority is trying to convince Gemini that I am not really a reincarnation of 1980s Paul Hogan. Wish me luck.
The author would like to thank budding roboticist Hamish Innis for his advice on this series. Hamish is currently exploring how to make a prosthetic ankle for people who have lost their feet.