Big Tech’s Unethical AI Experiment on Humanity
By Marcus Arvan
Associate Professor of Philosophy, The University of Tampa
Some people have argued that we should take steps to slow the pace of AI development, contending that doing so "could be the best thing we ever do for humanity." These calls do not go nearly far enough. Big Tech has been performing reckless, unethical experiments on humanity that, in virtually any other context, would be treated as research misconduct and reckless business practice.
Consider everyday product testing and safety standards.
Virtually any potentially dangerous product that a company brings to market has to pass rigorous safety standards. If a car company wants to bring a car to market, the car has to pass crash tests. If someone wants to sell a baby carrier, they need to show that their product won't kill babies. And when products are especially dangerous, such as prescription drugs or guns, there are laws that require them to pass rigorous, ethical safety tests before they can be sold. Even after products pass those tests, we have laws to regulate who can get them, when, and how.
These kinds of tests and regulations exist to prevent companies from recklessly bringing dangerous products to market without adequate precautions to protect consumers and the general public.
Similarly, consider widely accepted ethical standards for medical and scientific research on human subjects. We don't let researchers experiment on human beings without protections for test subjects, such as informed consent and an assessment of the risks of participation before the study is performed. Such standards exist for good reason: to prevent human beings from being harmed, intentionally or accidentally, by poor research design.
Regulations, of course, do not always succeed. In the 1970s, the Ford Motor Company brought the Ford Pinto to market, and kept it there, despite having reason to believe that its fuel tank was dangerously prone to exploding in rear-end collisions. By some estimates, the result was as many as 900 deaths. And in 2017, Boeing brought the 737 MAX to market despite radical changes to its flight-control systems that it did not fully disclose to regulators; the result was 346 dead passengers and crew.
Still, as these and many other product safety scandals show, the existence and enforcement of rigorous product safety standards matter. When a product is potentially dangerous, particularly to large numbers of people, we require (i) thorough safety testing that (ii) respects ethical requirements for experimentation on human subjects, and (iii) regulations restricting access to that product.
In any other context, if an industry (say, the airline industry) did what Big Tech is doing now with AI development, releasing potentially dangerous products to test on the public in order to learn whether, and how, they are dangerous, that industry would be guilty of recklessly flouting product safety standards and principles of ethical experimentation on human subjects.
As Elon Musk recently put it, "I think we need to regulate AI safety, frankly…It is, I think, actually a bigger risk to society than cars or planes or medicine." AI is a bigger risk than these things, which makes experimenting with it on the public all the worse.
Big Tech has now disseminated to the public a profoundly powerful technology, large language model (LLM) chatbots such as GPT-3 and GPT-4, without having any clear idea of what this technology is capable of or how to control it.
Even Sam Altman—the CEO of ChatGPT creator OpenAI—has said that “although current-generation AI tools aren't very scary, I think we are potentially not that far away from potentially scary ones.”
This is an understatement.
As Holden Karnofsky, co-founder and co-CEO of Open Philanthropy, recently stated in an interview:
If you look at this current state of machine learning, it’s just very clear that we have no idea what we’re building …
When Bing chat came out and it started threatening users and, you know, trying to seduce them and god knows what, people asked, why is it doing that? And I would say not only do I not know, but no one knows because the people who designed it don’t know, the people who trained it don’t know.
In addition to our not understanding what AI systems do or why, Karnofsky notes that we currently have no idea how to determine what the risks are:
Is there a way to articulate how we’ll know when the risk of some of these catastrophes is going up from the systems? Can we set triggers so that when we see the signs, we know that the signs are there, and we can pre-commit to take action based on those signs to slow things down … That’s hard to do. And so the earlier you get started thinking about it, the more reflective you get to be.
But even though no one knows exactly what the risks are or how to prevent them, what we do know so far is that AI systems can develop goals of their own that no one intends them to have:
One, I think people will often get a little tripped up on questions about whether AI will be conscious and whether AI will have feelings and whether AI will have things that it wants.
I think this is basically entirely irrelevant. We could easily design systems that don’t have consciousness and don’t have desires, but do have “aims” in the sense that a chess-playing AI aims for checkmate. And the way we design systems today, and especially the way I think that things could progress, is very prone to developing these kinds of systems that can act autonomously toward a goal.
Further, recent research has found that large language models spontaneously learn “powerseeking tendencies, self-preservation instincts, various strong political views, and tendencies to answer in sycophantic or less accurate ways depending on the user.”
Given that, in AI disaster fiction, this is precisely what AI systems do (develop goals of their own that conflict with ours, in ways that no one can predict or stop), one would think that someone, somewhere would know how to control AI, or at least prevent it from developing catastrophically dangerous goals. This, obviously, would be the responsible, ethical thing to do.
But, as of now, no one knows how to do this. The "control" and "alignment" problems in AI ethics (how to ensure that AI systems remain controllable and conform to our values) have not been solved. Further, even if we could solve these problems, there remains the question of how to regulate human uses of AI, to prevent people from maliciously using AI to manipulate others or otherwise cause harm.
AI systems are potentially dangerous to all of us, that is, to virtually all of the roughly 8 billion people on Earth, and yet Big Tech has disseminated them to the public without having to show the public or lawmakers that they are reasonably safe and controllable.
Further, the kinds of legal restrictions and regulations that we think ethics requires vary in proportion to just how dangerous a product is. For example, we don’t let any company—not Facebook, not Google, not Amazon—create, sell, or otherwise disseminate nuclear warheads. Nuclear weapons are too dangerous for that. Similarly, we legally prohibit the sale and use of heroin because of how addictive and dangerous it is as a substance.
Yet, once again, by publicly disseminating powerful AI chatbots, Big Tech has in essence experimented on all of us without conforming to any real principles of product safety or ethical research on human subjects.
Some may argue that we don't know that AI is dangerous, and that "the only way to make powerful AI safe is to first play with powerful AI."
However, there is a world of difference between ethical, regulated scientific research on human subjects, and playing with fire. AI developers are doing the latter. We wouldn’t let car companies or airplane manufacturers play with potentially dangerous car or airplane designs and release those products to the public to “see what happens”—including to see whether they kill anybody. Why not?
The short answer is an influential ethical principle called the Precautionary Principle, which holds that:
When an activity raises threats of harm to human health or the environment, precautionary measures should be taken even if some cause and effect relationships are not fully established scientifically.
If there is a potential for harm from an activity and if there is uncertainty about the magnitude of impacts or causality, then anticipatory action should be taken to avoid harm.
This principle explains why we have laws and regulations regarding product safety and restrictions. And yet, although some states and countries do have AI regulations, "there is no comprehensive federal legislation on AI in the United States." Mostly, what we have instead are proposals, such as the Artificial Intelligence Initiative Act and the Global Catastrophic Risk Management Act. Similarly, the EU has proposed, but not yet passed, an AI Act to regulate AI development and use.
While the existence of such proposals is a good development, it does not make the current situation, in which Big Tech has rolled out these products to the public without any adequate regulatory framework in place, any less dangerous or unethical.
As a moral philosopher who specializes in the nature of, and relationship between, prudence and morality, I believe that both considerations apply to individuals and businesses irrespective of which laws or regulations are in place.
We all have grounds to make prudent decisions: decisions that advance our long-term goals without courting catastrophe. Ensuring this is precisely what moral and legal principles do: they require us not to take reckless risks with our own lives or the lives of others.
Moral principles—such as the Precautionary Principle, Nuremberg Code for experimentation on human subjects, and restrictions and regulations on product access and safety—all exist to protect other people and ourselves from catastrophically bad decisions that cause great harm.
None of these principles has been remotely followed by Big Tech. This is imprudent and unethical, and it is well past time to do better: to insist that Big Tech follow basic moral principles of prudent, ethical product development and research on human subjects. Because if we do not, we may all be complicit in whatever potentially irreversible catastrophes result.