It’s Time to Push Pause on Artificial Intelligence, Says BU Philosopher

The villain of the latest Mission: Impossible movie is a sentient artificial intelligence, called the Entity, that’s out for global domination. The filmmakers clearly boned up on current events, as some real-life critics are warning that AI threatens humans with the “risk of extinction.”

AI’s potential dangers convinced Juliet Floyd that we need to push the pause button on its development.

A College of Arts & Sciences professor of philosophy who specializes in science and emerging media, she is among the tens of thousands of scientists and others who requested a six-month halt to the development of AI systems more powerful than the machine learning algorithm behind ChatGPT. Floyd, BU’s Borden Parker Bowne Professor of Philosophy, joined high-profile signatories including Elon Musk and Apple cofounder Steve Wozniak, though not with the expectation that a pause is necessarily in the cards.

“Rather, I expressed an aspiration and solidarity with the activities of increasing numbers of people around the globe concerned about, and discussing and researching, AI and ethics,” she says.

Artificial intelligence heralds economic prosperity, medical miracles, and more. But the doomsday concern of some, summarized by the New York Times, is that one day, “Companies, governments or independent researchers could deploy powerful AI systems to handle everything from business to warfare. Those systems could do things that we do not want them to do. And if humans tried to interfere or shut them down, they could resist or even replicate themselves so they could keep operating.”

Less apocalyptic, but still terrifying, are fears that AI could enable “synthetic biology” advances that unleash the next pandemic, or that it could drown the 2024 presidential election in misinformation.

We asked Floyd for her views on the need for a research moratorium.

This interview has been edited and condensed for clarity.

Q&A WITH JULIET FLOYD

BU Today: Why did you sign the petition?

Juliet Floyd: AI can’t solve all our moral problems with algorithms, and it creates new problems. I strongly support further, intensive research into AI generally; in fact, I believe we need it desperately, and in nearly every field. I come at this from the humanities, as a philosopher interested in the ethics of society and everyday life, the better and the worse of it, the discovery of what ultimately matters. From this point of view, we are all stakeholders in our future, and it is time for the engineers and economists to occasionally exit the lab and put work into learning how to discuss these things inclusively and conscientiously and reflectively.

The humanities are becoming more and more foundational in our world, because the capacity to pursue questions we all ask, each from our own individual perspective, is a prerequisite for guiding our journey to a better place. We cannot allow ourselves to become creatures of nudging technology alone.

What about the apocalyptic worry that, someday, AI could be an “existential” threat to humanity?

If computational biology can create cures for terrible diseases, it is obvious that it can also be used to create superbugs and biological technologies with catastrophic potential. Just as concerned scientists and members of the public called for serious discussion of nuclear weapons management from the 1950s onward—and biologists have met to discuss cloning, DNA databases, and nanotechnologies in recent decades—the time has come for discussion of how to protect life in the face of AI, whose workings connect far more immediately to us in the day-to-day than even nuclear physics and biology do. In the 1950s, Alan Turing made several broadcasts on the BBC to respond to cultural hysteria about the existence of an “electronic brain.” We’ve been here before, but AI seeps ever more powerfully into the interstices of everyday life for most humans on the planet, and a broad discussion is needed.

Critics say AI such as ChatGPT is a source of misinformation. In a society where election denialism fueled an insurrection, is this grounds for a pause?

ChatGPT is not surprising to me. Ordinary language philosophers emphasized in the 1950s that a great deal of our everyday language use is highly predictable, and this is what ChatGPT is built on. Judge P. Kevin Castel recently ruled against a lawyer who submitted a brief filled with nonexistent cases and rulings that had been created by ChatGPT. The law won’t work if we have to constantly wonder whether briefs and motions are filled with falsehoods. And the web won’t work if ChatGPT-generated nonsense feeds on its own productions, cramming the airwaves. We require human checks and efforts; a great deal of new work and creativity will have to go into designing this. A widely shared sense of the importance of discussing humanity’s values is crucial. This is what the humanities are for.
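
The “predictability” point can be made concrete with a toy example. The sketch below is not from the article; the mini corpus and function names are invented for illustration. It builds a simple bigram table (counts of which word follows which) and predicts the most likely next word. ChatGPT’s underlying model is a vastly larger neural network trained on far more text, but the basic task, guessing what comes next from what came before, is the same.

```python
from collections import Counter, defaultdict

# Toy corpus of "everyday" phrases (made up for illustration).
corpus = (
    "how are you today . how are you doing . "
    "i am fine thank you . i am doing well thank you ."
).split()

# Build a bigram table: count which word follows each word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the word most often observed after `word`."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else "<unknown>"

print(predict_next("how"))    # -> are
print(predict_next("thank"))  # -> you
print(predict_next("i"))      # -> am
```

Even this crude word-pair model recovers common continuations of everyday speech, which is the sense in which ordinary language is “highly predictable.”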

ChatGPT can generate a boring C+ paper if it’s short. So what? If this is all the writer cares about doing, it may be our job to say, “I could show you how to do better, something far more interesting.” Most students want to know how to do better; they want to learn how to think, not merely what to think. ChatGPT has already changed the way writing is taught, and I suspect there are opportunities here. My students are writing in journals as part of every class. They are sharing their editing of drafts with me, as well as their experiences with ChatGPT—many find it boring.

The academy and our halls of justice and politics were for too long filled primarily with white gentleman scholars. Now, thanks in part to AI, we may hope that not only access to information, but also the support generated by forms of intense discussion and self-development, may be opened up to, and corrected by, everyone.

The European Union has begun regulating AI, and there are a number of proposals before Congress. Do particular regulations make sense to you?

In Europe there has been legislation to protect privacy, as well as regulatory action against Facebook. There are many large, international university research grants dealing with AI, sustainability, and smart cities—e.g., in Singapore, where philosophers and ethicists sit on boards. There is a large push on ethics at UNESCO as well. In the United States, there will have to be legislation. Many software developers want legislation if it allows them to understand the field they are playing in, because the Wild West isn’t fun. Cyberlaw is a crucial area of research and education and will be pursued intensively, ideally with the help of faculty and students versed in anthropology, history, and the arts and humanities generally.

I’ve learned much from attending workshops with the Cyber Security, Law and Society Alliance at Boston University, a concerned group of faculty brought together through the Hariri Institute and the School of Law. They regularly discuss these matters and are making fundamental contributions to the handling of privacy, disinformation, and cybersecurity, while pursuing much-needed discussion of how to mitigate the harmful impacts AI has already begun to have on vulnerable members of our society. A pause on large training models would allow us to reflect on and discuss what we care about, including how to invest in developing ways to include the voices and perspectives of the young, the marginalized, and vulnerable populations in the design of AI in the future.

Don’t forget HAL’s unforgiving response in Kubrick’s 2001: A Space Odyssey when Dave asked him to open the pod bay doors.

Via: https://www.bu.edu/articles/2023/its-time-to-pause-artificial-intelligence/