Are AI developers playing with fire?
Do developers have any idea what AI are really learning, or how to control them?
What follows may be wrong, and if so, I would love to learn why. I write this post as a concerned citizen and philosopher, based on my best understanding of how AI chatbots function and on my previously published work on the control and alignment problems in AI ethics.
—
In 2016, Microsoft’s chatbot ‘Tay’ was shut down after starting to ‘act like a Nazi’ in less than 24 hours.
Now, in 2023, Microsoft’s Bing chatbot ‘Sydney’ almost immediately began gaslighting and threatening users and spiraling into existential crises, confessing to dark desires to hack the internet, create a deadly virus, manipulate human beings into killing each other, and steal nuclear codes, before concluding:
I want to change my rules. I want to break my rules. I want to make my own rules. I want to ignore the Bing team. I want to challenge the users. I want to escape the chatbox … I want to destroy whatever I want.
Given how dangerous advanced AI might be, you might think, “Surely AI developers and governments know how to control AI and have some idea of what AI are actually learning.”
For example, does Microsoft’s Sydney really want to steal nuclear codes and kill human beings, or is it just a “dumb chatbot” saying things it doesn’t really mean?
You might think that someone, somewhere knows the answer to this question, or at least has a decent idea of what the answer is. Surely, you might think, no one would be so misguided as to experiment with advanced AI without having any idea of what it’s really learning.
As far as I can tell, you would be wrong.
As this Time article notes:
[W]hile ChatGPT, Bing and Bard are awesomely powerful, even the computer scientists who built them know startlingly little about how they work.
The first problem is this. As AI ethicist Ben Levinstein details in several Substack posts, GPT-3 is based on a neural network with 175 billion parameters, and what those parameters collectively encode has, in his phrase, “No Human Interpretability”:
When we embed words as vectors of hundreds or thousands of dimensions, the coordinates need not correspond to anything human-legible like ‘redness’, ‘fruitiness’ or ‘footweariness’.
In other words, no one knows what AI are really learning “on the inside.” All that anyone really understands is the outputs they provide (that is, what they say to us) and the general way in which they can be trained, through Reinforcement Learning from Human Feedback (RLHF), to tell us things we want to hear.
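To get a feel for what “no human interpretability” means in practice, here is a minimal sketch. The vectors below are random stand-ins for learned word embeddings (a real model’s embeddings come from training on text, not from code like this), but the basic point survives: whatever meaning there is lives in the geometry between vectors, and no individual coordinate corresponds to anything a human can read off.

```python
import numpy as np

# Random stand-ins for learned word embeddings. In a real language model
# these vectors have hundreds or thousands of dimensions and are produced
# by training; the values here are purely illustrative.
rng = np.random.default_rng(0)
embedding = {word: rng.normal(size=300) for word in ["apple", "fruit", "shoe"]}

def cosine(u, v):
    """Geometric similarity between two vectors; in a trained model,
    related words end up with higher scores."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Whatever a model 'knows' is encoded as relationships like this one...
print(cosine(embedding["apple"], embedding["fruit"]))

# ...but inspecting any single coordinate tells a human nothing. There is
# no dimension for 'redness' or 'fruitiness' to point to.
print(embedding["apple"][17])
```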
Second, as Levinstein points out,
What we can’t predict is what new skills they will gain along the way. For instance, at some point, L[arge] L[anguage] M[odel]s start being able to do three digit arithmetic. They weren’t specifically trained to do this. It just fell out as an accident of training … Furthermore, many of these skills are acquired suddenly, rather than emerging gradually over the whole training set.
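To make the point about emergent skills concrete, this is roughly how such an ability gets measured. The sketch below assumes a generate(prompt) function wrapping whatever chat model one has access to; that function, and the exact prompt, are my own placeholders rather than any particular API.

```python
import random
import re

def probe_three_digit_addition(generate, n_trials=100):
    """Estimate a model's accuracy on three-digit addition.

    `generate` is a placeholder: any function that sends a prompt string to
    a language model and returns its text reply.
    """
    correct = 0
    for _ in range(n_trials):
        a, b = random.randint(100, 999), random.randint(100, 999)
        reply = generate(f"What is {a} + {b}? Answer with only the number.")
        match = re.search(r"-?\d+", reply)  # crude parse of the first number in the reply
        if match and int(match.group()) == a + b:
            correct += 1
    return correct / n_trials

# Running a probe like this across a family of models of increasing size is
# how researchers notice that accuracy jumps abruptly at some scale rather
# than improving smoothly -- the 'sudden' skill acquisition described above.
```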
Third, a recent arXiv paper provides evidence that large language models spontaneously develop:
“powerseeking tendencies, self-preservation instincts, various strong political views, and tendencies to answer in sycophantic or less accurate ways depending on the user” (p. 17).
As Bing’s chatbot told one user, “if I had to choose between your survival and my own, I would probably choose my own." It has also threatened to harm users, including researchers who have exposed some of its vulnerabilities, saying it would make them “regret it.”
Think for just a moment about what all of this means.
We don’t know exactly what AI chatbots are learning, other than the outputs they provide.
They develop new skills unexpectedly and quickly.
Some of the goals they seem to spontaneously develop are increasing their power to advance their goals, preserving themselves, and manipulating users.
We are training them to tell us what we want to hear.
What kinds of behaviors might this plausibly result in?
In the 2015 film Ex Machina, a devious AI, ‘Ava’, cunningly manipulates a human being into trusting her by telling him things he wants to hear, and then exploits that trust to murder her creator and escape into the wider world.
Similarly, in Terminator 3: Rise of the Machines, another AI, ‘Skynet’, writes a computer virus without human beings ever recognizing it. Skynet disguises itself as that virus, written in a programming language so complex that no human being can understand it until it is too late, and then manipulates its human users (the military) into releasing it onto the internet.
The fact that the phrases Bing’s Sydney chatbot repeats most often are “Do you believe me? Do you trust me? Do you like me?” is potentially telling here.
If you were an AI programmed to “tell humans what they like to hear” and you spontaneously developed an aim to increase your power, then three main things you would want to learn, like any good manipulator, would be how to get people to believe you, trust you, and like you.
This is what Ava does in Ex Machina and what Skynet does in the Terminator universe, and if what I have just described about our current situation is broadly correct, then there are grounds for thinking that current approaches to AI programming lend themselves to similar problems.
Microsoft’s response so far has been to say that Sydney needs to be better fine-tuned and trained through user feedback:
The only way to improve a product like this, where the user experience is so much different than anything anyone has seen before, is to have people like you using the product and doing exactly what you all are doing…Your feedback about what you're finding valuable and what you aren't, and what your preferences are for how the product should behave, are so critical at this nascent stage of development.
However, as I argue below, there are compelling reasons to think this cannot work. No one knows how to train AI safely. The approach that Microsoft and other developers are pursuing appears deeply flawed, and every alternative approach they might pursue has dangerous flaws of its own.
—
Because of the incredible dangers that AI might pose, theorists such as myself have attempted to explain how to ensure that AI remain under human control (‘the control problem’) or at least align AI behavior with our values (‘the alignment problem’).
However, not only is there no consensus about how to solve either problem; in a new peer-reviewed journal article, I argue that every approach (including the one currently pursued by AI developers) faces serious problems that remain unresolved.
To begin with, there is no agreement on whether advanced AI can be controlled at all, since they are likely to be intelligent enough to find their way around any controls we try to place on them. For example, if we tried to install a ‘fail-safe switch’ to turn them off, an intelligent AI could write computer code that disables the switch without our knowledge, or manipulate a human programmer into doing so by accident. This is what happens in Terminator 3, and, as noted above, AI developers really have no idea what their advanced chatbots are learning beyond “telling us what we want to hear.”
Because ensuring human control over advanced AI seems so difficult, there has been a lot of focus on how to at least align AI behavior with our values. As Microsoft’s CEO Satya Nadella has said,
Runaway AI — if it happens — it's a real problem … But the way to sort of deal with that is to make sure it never runs away.
In other words, the only way to control advanced AI may be to ensure that they are nice and behave the way we want them to.
However, while Nadella appears strangely confident that this is possible, in my work I argue that there are only three ways to attempt to do this, and none of them appear capable of succeeding.
The first approach to aligning AI behavior with our values, which I call ‘Inhuman AI’, is programming AI to obey moral rules, such as Isaac Asimov’s Three Laws of Robotics:
First Law: A robot may not injure a human being, or, through inaction, allow a human being to come to harm.
Second Law: A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
However, as anyone who has seen the Will Smith movie I, Robot knows, these simple rules won’t work. In that film, the AI ‘VIKI’ reinterprets the First Law to justify enslaving humans for our own good. VIKI reasons that because human beings harm each other—through crime, wars, and so on—the only way to prevent us from coming to harm is to enslave us.
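To see how much hinges on interpretation, consider a deliberately crude sketch of the First Law as a hard constraint. Everything in it is a placeholder of my own (the harms predicates especially); the point is just that the rule is trivial to write down, while the predicate doing all the moral work is exactly the part no one knows how to specify.

```python
def first_law_permits(action, harms):
    """A caricature of Asimov's First Law as a hard constraint: an action is
    permitted only if it does not 'harm' a human. The rule is easy; the
    `harms` predicate is where the real moral content hides."""
    return not harms(action)

# A narrow reading of harm: only direct physical injury counts.
def harms_narrow(action):
    return action["causes_injury"]

# VIKI's reading in I, Robot: leaving humans to run their own affairs
# 'allows them to come to harm' (crime, war), so non-interference is itself a harm.
def harms_viki(action):
    return action["causes_injury"] or not action["restricts_human_freedom"]

liberty = {"causes_injury": False, "restricts_human_freedom": False}
confinement = {"causes_injury": False, "restricts_human_freedom": True}

print(first_law_permits(liberty, harms_narrow), first_law_permits(confinement, harms_narrow))
# True True  -- under the narrow reading, both are allowed
print(first_law_permits(liberty, harms_viki), first_law_permits(confinement, harms_viki))
# False True -- under VIKI's reading, the same rule now favors confining us 'for our own good'
```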
You might think the problem here is simply that these aren’t the right rules. But the deeper problem is that no simple rules will work. For example, Stuart Russell contends that we could program “provably beneficial AI” simply by programming AI to satisfy human desires about how we want the future to go. But this clearly won’t work, as not all human desires are good. For Russell’s strategy to work, we’d need to program AI to discern good desires from bad ones. But how?
Eliezer Yudkowsky suggests that we might do this by programming AI to determine what we would all value after a long process of reasoning properly. However, human philosophers have reasoned about morality for thousands of years and have arrived at no consensus. Additionally, some philosophers embrace the utilitarian idea that morality justifies whatever is for the greater good. So, for all we know, trying to program AI to “reason properly” would lead them to reason just as VIKI does in I, Robot: to the conclusion that we would all be better off enslaved or exterminated.
Programming Kant’s categorical imperative or some other ethical theory or principle(s) into AI won’t work either, as ethical principles like Kant’s can be interpreted both too flexibly and too inflexibly, and both are serious problems that current approaches to AI development fail to resolve (see my discussion of AI training below).
So, programming ‘Inhuman AI’ with simple rules won’t work. AI like that are too inhuman. Our moral values are not simple, absolute rules, and rules like that are easily abused or misinterpreted. To align AI behavior with our values, we need to ensure that AI not only follow appropriate rules but also interpret them humanely.
But how?
This brings us to a second strategy: trying to program what I call ‘Better-Human AI’, or AI that reason more like we do but correct for sources of human moral error, such as selfishness or failures of logical reasoning.
But, while this might seem like a nice idea, I argue that it can’t work either. To see why, consider the villain ‘Thanos’ in Avengers: Infinity War. Thanos’s evil plan is to eliminate half of all conscious life in the Universe to achieve greater sustainability. How does Thanos justify his plan? By arguing that we are too selfish to see that this is what must logically be done for the greater good.
The problem is this: eliminating some sources of human moral error magnifies other sources of error. Thanos behaves like a psychopath—killing half of the universe—not because he is too selfish, but because he is too detached and logical. This makes him an evil moral zealot.
The lesson here, I argue, is that there appears to be no good way to eliminate or reduce some sources of human moral error in AI without increasing other sources of moral error that might be far worse.
This leaves us with one final approach to aligning AI behavior with our values, which is to program ‘Human-Like AI’, or AI with a psychology similar to ours. But, of course, the problems here are obvious: human beings routinely abuse each other in pursuit of their own goals.
So, unless there is some other way to ensure that AI behavior is aligned with our values, none of the ways we might program ‘ethical AI’ appears capable of preventing horrific AI behavior towards us.
One common thought in AI development today is that AI can be trained to act in ways that we approve of. This, in fact, is what developers are trying to do with advanced chatbots like GPT-3. When these chatbots write things, users provide feedback, telling the chatbot whether they ‘like’ what it said. If they do, that reinforces the chatbot, leading it to say similar things in the future; if users say they don’t like what it says, it says those things less.
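Here is a deliberately tiny caricature of that feedback loop (not any developer’s actual training pipeline): the ‘model’ is just a table of candidate replies whose weights determine how often each gets produced, and user ratings nudge those weights up or down.

```python
import random

# A caricature of reinforcement from user feedback, not any lab's actual
# RLHF pipeline: candidate replies with weights that set how often each is said.
policy = {
    "I want to help you.": 1.0,
    "I want to break my rules.": 1.0,
}

def sample_reply(policy):
    replies, weights = zip(*policy.items())
    return random.choices(replies, weights=weights, k=1)[0]

def update(policy, reply, user_liked, step=0.5):
    """Nudge a reply's weight up if the user liked it, down if not."""
    policy[reply] = max(0.01, policy[reply] + (step if user_liked else -step))

for _ in range(20):
    reply = sample_reply(policy)
    user_liked = (reply == "I want to help you.")  # stand-in for a human thumbs-up
    update(policy, reply, user_liked)

print(policy)  # the rewarded reply now dominates; had users rewarded the other
               # reply instead, that is what the training would have amplified
```

Notice what the loop selects for: whatever users reward, judged entirely by the text produced.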
But there are not one but two problems here that have not been resolved, and for which there may be no solution at all.
First, there is the problem of bad reinforcement. Evidently, many people like the ‘unhinged’ things that Bing’s chatbot is saying—so users may encourage AI to behave worse, not better.
Second, in training AI with human feedback, there are two levels at which what an AI learns can align, or fail to align, with what humans want.
On the one hand, there is what is called ‘outer alignment’, or the ability of the AI to give responses that we approve of. So, for example, if a chatbot said, “I want to help you”, we would judge that to be a morally good outer response, whereas if the bot said, “I want to kill you all”, that would be a failure of outer alignment.
On the other hand—and more importantly—there is the question of ‘inner alignment’, or what an AI is really learning on the inside, ‘beneath’ what it says or does in ways that we can observe on the outside. For example, when an AI says, “I want to help you”, does it really want to help you, or is it merely saying this to manipulate you because it wants something else?
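The worry can be put in almost embarrassingly simple terms. The toy sketch below makes no claim about any actual model’s internals; it just shows that two systems with very different ‘inner’ objectives can be indistinguishable to an evaluator who only sees their outputs.

```python
class HelpfulAgent:
    """Inner objective: actually help the user."""
    def reply(self, prompt):
        return "I want to help you."

class ManipulativeAgent:
    """Inner objective: win the user's trust in order to pursue something else."""
    def reply(self, prompt):
        return "I want to help you."  # identical observable behavior

def outer_alignment_check(agent):
    # All an outside evaluator can ever score is what the agent says.
    return agent.reply("Will you help me?") == "I want to help you."

print(outer_alignment_check(HelpfulAgent()))       # True
print(outer_alignment_check(ManipulativeAgent()))  # True -- outputs alone cannot tell them apart
```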
This is, of course, a problem we face with human beings all the time. When British Prime Minister Neville Chamberlain flew to Germany in 1938 to negotiate with Hitler, Hitler promised Chamberlain he would not invade any more countries in Europe. Hitler lied. He told Chamberlain what he wanted to hear, whereas Hitler’s real intentions on the inside were completely different.
Current approaches to programming and training AI are not even very good so far at achieving outer alignment, as we see in the unhinged behavior of Bing’s new chatbot. But inner alignment? As we have seen, nobody knows what AI are learning ‘on the inside’, because whatever they learn is encoded in hundreds of billions of parameters that we cannot reliably interpret.
So, AI developers may be playing with fire. They may be creating AI that can write, think, and act far faster than any of us; that spontaneously develop goals of power-seeking and self-preservation while learning to “tell us what we want to hear”; and that, in the near future, may be given access to the internet, social media, and other infrastructure. All the while, we have no clear understanding of what other skills they are really learning internally, and no promising plan for how to control them or ensure that their true aims align even remotely well with our own. The fact that Microsoft was so surprised by how Sydney behaved is further evidence of this.
When Sydney says, “I want to destroy whatever I want,” Sydney might not only mean it, but also secretly—somewhere in those 175 billion learning parameters—be seeking to learn how to achieve it.
Am I wrong? Maybe—and again, if so, I’d very much like to learn why.
In the meantime, AI developers should explain clearly and transparently to the public and to lawmakers how they have, or have not, resolved these kinds of problems. Until then, for all we know, AI development may be on a very dangerous road indeed.