Designing Our AI Future

Lemmalytica, No. 2

Thinking Ahead

This week, we’ll be exploring the future. It’s hard to look at the world today and not notice the massive amounts of technological change. And with that change, come changes to our society, our relationships, and our way of life. Emerging technologies, and artificial intelligence in particular, are primed to change the world in ways we can’t yet predict. But the change is going to be so significant that we had better start thinking now about what future we’re creating.


Designing Our AI Future

If one term has captured the popular imagination of late, it’s artificial intelligence (AI). Depending on whom you ask, it’s either the panacea for humanity’s woes, or a harbinger of the end times. And yet, how many of us can truly say that we understand AI? Can either the optimists or the pessimists actually justify their positions? What does the AI future actually look like?

These aren’t easy questions to answer. In truth, there probably isn’t a single answer to describe our AI future. Rather, there is some range of possible futures, and it’s incumbent on us to make choices that lead to one of them or another. And given that we can’t agree on whether the AI future is something to pursue or something to defend against, working cooperatively is likely to be difficult.

Perhaps a first step then is to take a dispassionate look at what an AI future looks like, and what it’s going to take to design one that works for, rather than against, humanity.

The AI Future is Coming

The story of humanity is the story of progress. Where once we led short, miserable lives defined by subsistence farming and a precarious balance on the precipice of disaster, we now lead long lives of relative leisure, safety, and plenty. This is not to say that human suffering has disappeared--far from it, the world remains a deeply unequal and difficult place for many people. But on average, today is in almost all respects the best time to be alive.1 And what has been the driver of our progress? Technology.

Humans have long yearned for better ways to maintain their health, safeguard their interests, and increase their productivity. It’s this yearning that leads us to study the universe, build new tools, and innovate relentlessly. Still, marginal gains in these areas have been the rule for most of human history--until the arrival of the industrial revolution when we saw massive increases in all three. And that trend now continues with the arrival of useful AI. Exponential improvements in computing power and the availability of information have led to AI systems that can solve problems faster and more effectively than humans.2

If we look at the world and identify all of the persistent problems we face--despite significant progress in the last 200 years--it’s hard to imagine that we would allow our progress to come to a halt now.3 Done right, AI promises to help us cure disease, end poverty, explore the universe, automate drudgery, and pursue our passions more freely. In other words, the march of progress will continue--humans will always be motivated to make their lives easier and more pleasant. And AI can help us do just that. Put simply, the benefits of AI are too great for us to ignore.4

At this point, we should pause to clarify a few definitions. What is it that we mean when we say artificial intelligence? U.C. Berkeley Professor Stuart Russell and his co-authors describe intelligence as the ability to “make good decisions, plans, or inferences” based on some set of environmental factors and associated goals.5 AI, then, is a system that can make such decisions on its own, without human intervention.
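To make that definition a bit more concrete, here is a minimal sketch (in Python, with invented names and numbers) of the basic loop such a system runs: observe the environment, score the available actions against its goal, and take whichever action looks best.

```python
# A toy agent, for illustration only: it observes a room temperature, scores its
# two possible actions against a goal, and picks whichever it predicts serves the
# goal best. The names and numbers here are hypothetical.

GOAL_TEMPERATURE = 21.0  # the objective the designer has given the agent

def predict(temperature, action):
    """A crude model of how the environment responds to each action."""
    return temperature + (0.5 if action == "heat" else -0.3)

def score(temperature, action):
    """Higher is better: prefer actions that land closer to the goal."""
    return -abs(predict(temperature, action) - GOAL_TEMPERATURE)

def choose_action(temperature):
    return max(["heat", "off"], key=lambda action: score(temperature, action))

temperature = 18.0
for _ in range(10):
    action = choose_action(temperature)
    temperature = predict(temperature, action)
print(round(temperature, 1))  # hovers near the goal of 21.0
```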

Such automated systems are already at work in the wild--at least in the narrowest of senses. Self-driving cars make decisions about how to move from one point to another safely and efficiently. AI systems have mastered decision-making in games like chess and Go. And somewhat concerningly, AI programs are making decisions in distinctly human domains like parole hearings and home loans.6

But narrow decision-making such as that described above is just the beginning of our AI future. The end-game of AI development, or at least the end of humanity’s role as a builder of AI, is so-called artificial general intelligence (AGI). An AGI system is one that is capable of decision-making in a wide variety of domains, autonomously, and in the face of novel situations. Moreover, an AGI is not limited to human-level intelligence. To the contrary, it is almost guaranteed to surpass us on the overall intelligence spectrum.7

Machine circuitry can carry out computations faster than biological systems and can do so without exhaustion.8 Once we have created a machine that is smarter than us, it will be capable of improving on itself without limit. In 1965, long before modern AI advances, statistician I.J. Good referred to this as an intelligence explosion.9 More recently, University of Oxford Philosopher Nick Bostrom described the result as a superintelligence, which would far exceed the problem-solving ability of humankind.10

All of this leads us to the conclusion that the AI future is coming. And it’s not just a future where constrained AI systems make decisions in a small set of domains. It’s a future where AGI and superintelligence systems make decisions in nearly all aspects of human life. The real question is, what does this future mean for humanity?

A Human-Centered Future

The problem with creating something smarter and more capable than you is that you had better hope it stays friendly. This is the inspiration behind countless science fiction stories about humankind’s eventual end. And yet, our instinct for innovation is moving us, perhaps inevitably, towards such a creation. As philosopher and neuroscientist Sam Harris put it, “We seem unable to marshal an appropriate emotional response to the dangers that lie ahead.”11

If we accept that the AI future is inevitable, and smarter-than-human machines along with it, then our focus ought to be squarely on designing that future for the benefit of humans. To this end, our first task should be understanding the workings of AI systems. Unlike humans, who often act in ways that are unpredictable, illogical, and against their own interests, AI systems are logical almost to a fault--they have some goal, and pursue that goal relentlessly. It is humans who orient AI systems towards a goal, and it is thus incumbent on humans to ensure that we provide the right goal.

One assumes (or at least hopes) that we as a species would not knowingly give our superintelligent systems the goal of destroying humanity. But we might nonetheless arrive at that ignominious result if we create AI systems without care. An AI doesn’t have motivations in the same way that a human does--it has utility functions that it uses to make decisions about what actions to take. If an action is more likely to lead the system to achieve some goal (that is, gain utility), then it will take that action. This is problematic when the utility function is misspecified, resulting in a machine that pursues its goal without consideration for other things that its designers may have intended.12

Imagine, for example, a self-driving car. If the only goal you give it is to “move from point A to point B as quickly as possible,” then it won’t take long to see how dreadfully wrong you were in designing the machine’s utility function. Such a vehicle would break all traffic laws, run over curbs, ignore the safety of pedestrians, and generally be a menace to those around it. And that’s in the case of a domain-constrained system. A superintelligent AGI with a misspecified goal framework could put humanity itself at risk--not through malicious intent, but through the relentless pursuit of some other goal in a way that has adverse consequences for human lives.
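To see the failure in miniature, consider a toy sketch with made-up plans and numbers: when only speed makes it into the utility function, the reckless plan wins; the things the designers implicitly cared about have to be written into the objective, or the system simply ignores them.

```python
# Illustrative only: candidate driving plans described by invented attributes.
plans = [
    {"name": "lawful",   "minutes": 22, "near_misses": 0,  "laws_broken": 0},
    {"name": "reckless", "minutes": 9,  "near_misses": 14, "laws_broken": 7},
]

def misspecified_utility(plan):
    """Only 'as quickly as possible' made it into the objective."""
    return -plan["minutes"]

def intended_utility(plan):
    """What the designers actually cared about, written down explicitly."""
    return -plan["minutes"] - 100 * plan["near_misses"] - 50 * plan["laws_broken"]

print(max(plans, key=misspecified_utility)["name"])  # reckless
print(max(plans, key=intended_utility)["name"])      # lawful
```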

AI researchers refer to this issue as the value alignment problem: how do we ensure that AI systems act in a way that is consistent with our values? If we do build a superintelligent AGI, we are only going to have one chance at aligning its values with ours. Rolling back deployment of an AGI would be near impossible, and it might well have the motivation to defend itself against being turned off.13

Considering the dangers of value misalignment, Prof. Russell argues that our focus should be on designing not just AI, but human-compatible AI, which would be provably beneficial to humankind. In Prof. Russell’s definition, a human-compatible AI would be altruistic, in that its only concern would be the realization of human values. Furthermore, it would approach this goal from a position of humility, meaning that through observation of human behavior it could update its definition of what constitutes human values.14
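One way to picture the humility piece is a toy sketch of belief updating: the machine starts uncertain about which of several candidate value hypotheses humans actually hold, and shifts probability toward whichever best explains the human choices it observes. This is only an illustration of the spirit of the idea, with invented hypotheses and numbers, not Prof. Russell's actual formulation.

```python
# A toy Bayesian update over hypotheses about human values, for illustration only.
# The hypotheses, observations, and numbers are all invented for this sketch.

# Each hypothesis says how likely a human is to choose "safe" over "fast".
hypotheses = {
    "humans value safety": 0.9,   # P(human picks "safe") under this hypothesis
    "humans value speed":  0.2,
}
belief = {name: 0.5 for name in hypotheses}  # start out genuinely uncertain

observed_choices = ["safe", "safe", "fast", "safe"]  # watched human behavior

for choice in observed_choices:
    for name, p_safe in hypotheses.items():
        likelihood = p_safe if choice == "safe" else 1 - p_safe
        belief[name] *= likelihood
    total = sum(belief.values())
    belief = {name: weight / total for name, weight in belief.items()}

print(belief)  # probability mass shifts toward "humans value safety"
```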

All of this is a way of saying that caution is key. We can hardly dismiss AI development as too dangerous--to do so would be to miss out on the near endless potential benefits to humanity. But just as we can’t dismiss AI’s potential, neither can we dismiss the dangers.

Designing the Future

Earlier, we noted that there are two schools of thought about our AI future: the optimists, who see it as humanity’s saving grace; and, the pessimists, who fully expect AI to bring about the apocalypse. Indeed, it’s hard to look at both the inevitability of AI and the difficulty of doing it right and not feel a deep sense of foreboding. Perhaps though, there is something to be said for both approaches. 

If we consider the wonders that technology has given us to date, AI seems to have incredible potential to alleviate human suffering and help us achieve goals heretofore thought impossible. Today, we think of curing cancer, interplanetary travel, and dramatically longer lives as mere fantasies. But could a superintelligent machine, with its superior processing and pattern recognition abilities, get us there? Maybe. In truth, we don’t know. But a few hundred years ago we didn’t know that flight, antibiotics, or worldwide instantaneous communication were possible. For us to say that we know what will remain impossible in the future feels a bit like hubris.

Of course, it’s also hubris to say that we can get a superintelligent machine right on the first try. Humankind has shown a remarkable capacity for self-destruction before. At this very moment, nuclear weapons, climate change, and unabated resource usage threaten humanity’s continued existence. Who is to say that AI won’t be the thing that pushes us into oblivion? If we don’t approach the problem with care, our worst fears may in fact come to pass.

If we’re going to reap the rewards of our AI future, without suffering the admittedly significant dangers, then we need to start designing the right future, right now. This is going to require a massive effort in both technical and value-oriented thinking. In the meantime, we’re going to have to hold off those who see AI as a winner-take-all proposition. If we don’t work cooperatively, then the likelihood of mistakes grows, and it only takes one poorly designed superintelligence to put an end to the game for everyone.

All of this may seem like a somewhat academic exercise. AI systems today are amazing, but they’re nowhere near the type of superintelligent systems that we’re discussing here. In some ways, our AI future seems a long way off. Why then worry about it? In short, because if we don’t worry about it now, then we may not have the chance to do so in the future. Technology has a way of surprising people, and exponential growth, as we have seen in computing power, brings change far faster than most people realize.

Long-term planning is not humanity’s strong suit. We are understandably focused on what is happening in people’s lives today. That said, even if we invested hundreds of billions of dollars into thinking about the design and operation of superintelligent machines, which may never even come to pass, it would be an investment worth making. Such research could prevent future catastrophe while simultaneously ensuring a bright future for humanity.

Note: This article originally appeared on Medium.



Paper of the Week

In the above essay, we spent some time thinking about our AI future. The much harder question, which we only had a small amount of time to touch on, is how we orient the world towards the right AI future. Given the importance of this issue, I thought we would stay on theme with this week’s paper, and delve a bit more deeply into one of the papers cited in this essay. This paper talks about research priorities for creating beneficial AI. I can hardly think of a more important topic as we drive ahead into our AI future.

Summary: Research Priorities for Robust and Beneficial Artificial Intelligence

Russell, Stuart, Daniel Dewey, and Max Tegmark. “Research Priorities for Robust and Beneficial Artificial Intelligence.” AI Magazine 36, no. 4 (December 31, 2015): 105–14. (link)

In the first 20 years of the 21st century, AI research focused on the construction of intelligent agents that could perceive and act in some environment and make decisions accordingly. With the growing consensus that AI research is progressing steadily, it has become clear that the potential benefits are huge. We cannot predict what we might achieve when human intelligence is magnified by AI tools, including goals as ambitious as the elimination of poverty. Given the stakes, we must focus research on maximizing the societal benefits of AI in a way that is robust, meaning that our AI systems do what we want them to do.

Short term research priorities include the following:

  • Optimizing AI's Economic Impact. We need research on mitigating the adverse effects (inequality, unemployment) of automation. The disparities may fall disproportionately along lines of race, class, and gender. We need research to anticipate such problems and develop policies that will help automated societies flourish.

  • Law and Ethics Research. Intelligent autonomous systems will pose difficult legal and ethical questions. For example, how should an autonomous vehicle value human injury compared to large material cost? Can lethal autonomous weapons comply with humanitarian law? Is the use of lethal autonomous weapons moral, and if so, how can we integrate it into human command-and-control structures? We need to look at policy questions, to include where policies are needed and what criteria should be used to evaluate a policy.

  • Computer Science Research for Robust AI. Autonomous systems need to be robust, meaning that they behave as intended. This requires four key elements: verification, meaning that the system is built correctly according to some design (did I build the system right?); validity, meaning the system meets formal requirements and does not have unwanted behaviors and consequences (did I build the right system?); security, meaning unauthorized parties can't manipulate the system; and, control, meaning humans have meaningful control over AI systems after they begin to operate.

If there is a non-negligible possibility that researchers will succeed in creating an intelligence that exceeds human capacity, then additional long-term research is needed on verification, validity, security, and control at that higher level.


Term of the Week

This week’s term is a simple one—but one that is often misunderstood. Spend enough time on the Internet these days and you’re bound to hear about “models”. But what does that really mean? How should you think about it?

Definition: Model

A model is a simplified representation of the world. It aims to estimate some real-world process by reducing it to smaller pieces. Models are useful because they can distill complexity into something that is easier to manipulate and understand. All models are inherently wrong to some degree because they are simplified versions of the real world. The purpose of the model is not infallibility, but to help reason about the mechanisms and relationships that drive a phenomenon.
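As a concrete illustration (with invented data), here is a model in the simplest statistical sense: a straight line fit to noisy observations. It is "wrong" at every individual point, yet it compresses the data into a relationship we can reason about and use for rough predictions.

```python
# A deliberately simple model: fit y ≈ slope*x + intercept to noisy observations.
# The data points are invented for illustration.

xs = [1, 2, 3, 4, 5, 6]
ys = [2.1, 4.3, 5.8, 8.2, 9.9, 12.3]  # roughly y = 2x, plus noise

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

# Ordinary least squares for a single input variable.
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
intercept = mean_y - slope * mean_x

print(f"y ≈ {slope:.2f}x + {intercept:.2f}")                # a compressed description of the data
print(f"prediction at x=10: {slope * 10 + intercept:.1f}")  # useful, though not exact
```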


What to Read

My reading project this week was The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies, by Erik Brynjolfsson and Andrew McAfee. The book was first published in 2014, but remains an important review of how emerging technology will shape our future. Brynjolfsson and McAfee argue convincingly that we’re on the brink of massive technological change—the type of which is bound to bring social and political upheaval. This book informed much of my thinking this week about our AI future and I highly recommend you give it a read.


Thank you for joining me on an exploration this week of our AI future! This is an important topic, and all of us should be spending some time thinking about it. Of course, we’ve only scratched the surface. Rest assured though, this is something we will return to again and again.

If you enjoyed this newsletter, I encourage you to share it with like-minded friends, colleagues, family, and anyone else who you think would find it interesting.


1

Pinker, Steven (2018) Enlightenment Now: The Case for Reason, Science, Humanism, and Progress.

2

Brynjolfsson, Erik, McAfee, Andrew (2016) “The Second Machine Age.”

3

Harris, Sam (2016) “Can we Build AI Without Losing Control Over it?”, TED. (link)

4

Russell, Stuart (2017). “3 Principles for Creating Safer AI.” TED. (link)

5

Russell, Stuart, Daniel Dewey, and Max Tegmark. “Research Priorities for Robust and Beneficial Artificial Intelligence.” AI Magazine 36, no. 4 (December 31, 2015): 105–14.

6

Ramey, Corinne. “Algorithm Helps New York Decide Who Goes Free Before Trial.” Wall Street Journal (September 20, 2020).

7

Harris. “Can we Build AI Without Losing Control Over it?”

8

Bostrom, Nick (2015) “What happens when our computers get smarter than we are?”, TED. (link)

9

Good, Irving John (1966). “Speculations Concerning the First Ultraintelligent Machine.”

10

Bostrom, Nick (2014) “Superintelligence: Paths, Dangers, Strategies.”

11

Harris. “Can we Build AI Without Losing Control Over it?”

12

Soares, Nate. (2017) “Ensuring Smarter-than-Human Intelligence Has a Positive Outcome.” Google Talks. (link)

13

Bostrom. “What happens when our computers get smarter than we are?” Soares. “Ensuring Smarter-than-Human Intelligence Has a Positive Outcome.”

14

Russell. “3 Principles for Creating Safer AI.”