The Evolution of AI Risk in America, 1956–96

Friday, January 5, 2018: 11:10 AM
Roosevelt Room 4 (Marriott Wardman Park)
Colin Garvey, Rensselaer Polytechnic Institute
In the mid-1980s, the US government planned to put artificial intelligence (AI) in charge of the nation's nuclear defense system as part of President Reagan's "Star Wars" Strategic Defense Initiative. Fortunately, a group called Computer Professionals for Social Responsibility convinced DARPA that any system of that size was bound to contain bugs, and that pursuing it would therefore be tantamount to creating a "doomsday machine" (Roland & Shiman 2002). The plan was called off. In this case, knowledge of one kind of AI risk was critical to offsetting the creation of another, greater risk.

This paper asks: how have the risks of AI been framed, represented, and understood in America since the field's "official" founding at a small conference at Dartmouth College in 1956? Due in part to their speculative nature, the risks of AI have been understudied. The discipline of AI itself has historically paid scant attention to "social impacts" and other non-technical subjects such as risk. As philosopher and longtime critic of AI Hubert Dreyfus argued in What Computers Can't Do (1972), "artificial intelligence is the least self-critical field on the scientific scene."

In reaction to technical experts' organized ignorance, the first four decades of AI saw a number of American critics of the field rise to prominence, including Lewis Mumford, Joseph Weizenbaum, Hubert Dreyfus, Terry Winograd, and John Searle, among others. These and other scholars constitute a "minor literature" within the broader constellation of AI discourse, yet the common threads connecting their contributions remain largely unmapped. Focusing on what I call the "classic period" of AI (1956–96), this paper examines this lineage of prominent American AI critics, looking for categorizations and constructions of AI risks, and considerations of those at risk. However, this paper also looks beyond these "usual suspects" to include the relevant writings of US-based social movements, such as Science for the People and Computer Professionals for Social Responsibility, that were actively involved in challenging dominant forms of technoscience.
