May 10, 2013
It’s coming. Sooner than we think, a reckoning between humans and our machines will slip into each of our lives: a little deep processing on what it will mean to live, work and even die in the company of robots.
In the attached essay, Neil Richards, professor of law at Washington University in St. Louis, and William Smart, associate professor of computer science, director of the Media and Machines Laboratory, and co-director of the Masters of Engineering in Robotics program at the same university, offer us a primer on the implications of, and plenty of food for thought about, the looming “co-presence” of humans and robots occupying the same spaces for years on end.
In their essay, “How Should the Law Think About Robots?”, they begin with the law but reach further, arguing that robots “have the potential to revolutionize our daily lives and to transform our world in ways even more profound than broad access to the Internet and mobile phones have done over the past two decades.
“We need to be ready for them and, in particular, we need to think about them in the right way so that the lawmakers can craft better rules for them, and engineers can design them in ways that protect the values our society holds dear. But how should we do this?
“As we have all personally witnessed, especially during the past decade, the Internet and cell phone have become for us much more than just machines or mere communications devices. They are now part of us, almost inextricably so, capable of good and evil, right and wrong, and life and death; the evening news is nightly filled with their exploits. Robots will do the same—and much more.”
This moment of first contact is a good place to wonder about the future implications, while we still have time to remember how it was before they arrived.
Available here as a downloadable PDF, the essay, the authors explain, “is an attempt to think through some of the conceptual issues surrounding law, robots and robotics, to sketch out some of their implications.
“It draws on our experience as a cyberlaw scholar and a roboticist to attempt an interdisciplinary first cut at some of the legal and technological issues we will face. Our paper is thus analogous to some of the first generation cyberlaw scholarship that sketched out many of the basics of the field, even before the field itself was a recognized one.”
Brave new world of robot litigants, soldiers, and escorts
Plenty of real-world implications of human/robot interactions were gleaned by the Miami Herald’s technology reporter, Glenn Garvin, at April’s “We Robot” conference at the University of Miami School of Law. His observations make an interesting companion piece to the essay, offering robot exploits that we may soon see on the nightly news.
MIAMI HERALD: The robots are coming! And they’re going to be ratting us out to police (except when they are the police), suing us for sexual harassment (except for the robot hookers in approved android whorehouses), and sending the cost of our homeowners insurance through the roof (unless they’ve already slaughtered us in our beds at night).
That, at least, was the picture that emerged Saturday at We Robot, a University of Miami law school conference on the ethical and legal issues posed by the galloping developments in technology that have put uncounted millions of robots to work for us, from the cute little Roomba that sweeps floors to the killer drones flying around Afghanistan blowing up accused al Qaida terrorists.
The symposium upgraded to issues of terrorism and mass murder, with panels titled When Machines Kill and The Rules Of War And The Use of Unarmed, Remotely Operated Robotics Systems Platforms and Weapons.
“For when the metal ones decide to come for you. And they will!”
Robots that are homicidal, whiny and lascivious?
If that seems grim, Saturday’s session often seemed like a Star Wars sequel in which R2D2 and C3PO had been rewired by the mob lawyers of The Firm. The portrayal of robots ricocheted between homicidal, whiny and lascivious.
The latter took the form of Roxxxy (the spelling of her name with three Xs is anything but coincidental), a new anatomically correct sex robot who comes in models with names like Frigid Farrah and S&M Susan.
At a price of $7,000 to $9,000, Roxxxy not only does the wild thing but can cuddle (she’s got a faint mechanical heartbeat) and chat afterward. But one thing she can’t say is “no.” And Sinziana Gutiu of Canada’s University of Ottawa law school predicted that Roxxxy will lead to an epidemic of rapes by men conditioned by robot sex to think that real women can’t say no, either.
“There is nothing about consent in that encounter,” Gutiu said of sex between a man and female robot, adding that sex robots are “an embodiment of sexual inequality.” The paper she presented at the symposium suggested sex robots be restricted to android bordellos, and that women whose husbands are seduced by robots be allowed to sue for alienation of affection.
But Kate Darling, a lawyer pursuing a doctorate in intellectual property in Switzerland, suggested it’s the robots who will soon be suing us. She predicted that legal rights will soon be extended to “social robots,” those designed to “communicate and interact with humans on an emotional level.”
She admitted most of her friends are singularly unimpressed by her belief that machines have rights. “They’ll say, ‘I saw my toaster this morning, and it didn’t come anywhere close to becoming conscious and demanding better treatment,’ ” Darling recounted sadly.
Even more menacing were the border patrol robots described by Kristen Thomasen of the University of Ottawa law school. Arizona researchers, she said, are developing computer-screen avatars to interrogate anybody entering the country.
The interrogator-bots — displayed on screens in kiosks at border checkpoints — will ask questions while flashing pictures of weapons or drugs, then decide whether the answers are truthful from data collected by sensors monitoring involuntary responses like breathing and heartbeat. The robots even have the ability to switch from good-cop to bad-cop modes depending on how a suspect responds.
“The kiosk from Hell!” exclaimed another panelist, Stanford law school’s Ryan Calo, who then offered up an even more ominous scenario: “What’s to stop police from recruiting your robots to inform on you?”
“Suppose your robot butler asks you, ‘How’s the marijuana crop this year?’ ” he wondered.
“Could your car ask you, ‘Are you going to the scene of the crime?’ ” Thomasen replied, getting into the paranoid spirit of the moment.
Some panelists, to be sure, thought everybody was getting a little carried away with attributing needs, motives and grievances to robots.
“Realizing that I’m sounding like the bad guy in a robot-uprising movie, they’re tools,” insisted Neil Richards, a law professor at Washington University in St. Louis. “A really, really fancy hammer,” agreed his law school colleague William Smart.
But Smart may have undermined his case slightly when, asked why we need to write new laws governing robots but not artificial-intelligence computer software, he lifted a laptop computer in the air and proclaimed: “This thing can’t creep up to my bed in the middle of the night and stab me. But a robot can.”
See: More “Robots and the Law”