UPDATE: And let me introduce you to a blog all about medieval robots.
Last weekend, I had my own April madness (in my case, more like April silliness) of flying from Paris to State College, PA, to give a version of my worms paper at the Penn State Institute of the Arts and Humanities “Robot Weekend: Being Human Gizmos.” How could I say no to Michael Bérubé? Could you? (and, incidentally, if you haven’t read his recent MLA column, do so as soon as you can, and PARTICIPATE in the crowd-sourced project on non-tenure-track faculty).
The Institute’s to be praised for putting–as you’d get from its title–the humanities in touch with the arts, something we tend not to do, preferring not to (or not taking the trouble to) contaminate our commentary with the messiness of creation. Not our readers, of course! We’re a messy bunch, I hope.
- I proposed that we should not look for the real breakthrough in artificial intelligence in machines that think as well as or better than we do, nor in machines that love or suffer tragedy as we presume ourselves to do (and here, I direct you to the conclusion of R.U.R.). I don’t think we need to keep swinging between, or trying to undo, the old divisions of thought and feeling (notoriously, the conclusion of Metropolis). Instead, I think the breakthrough will come when we build machines that play, and not in the sense of playing chess or of following any set of rules. I mean play in the sense of taking pleasure in things and in each other, and of suspending the rules temporarily in order to bring a new world into being for its own sake. R.U.R.’s brilliant, but imagine if Čapek had ended it not with love and sacrifice, but with the robots jumping rope? Or singing karaoke?
- All of this, of course, elides the fundamental question of the distinction between artificial and “natural” or “innate” intelligence.
- Note that the same problem of distinction applies to autonomy (e.g., humans), semi-autonomy (e.g., robots), or, well, whatever falls outside or below this threshold, like automata. Derrida’s analyses of sovereignty and of reaction vs. response require more attention to these distinctions. More straightforwardly, studies of the cognition of elite athletes demonstrate that the athletes work best when they work automatically: pitchers who think about their motion, who become less ‘robotic,’ lose their effectiveness. Consider, finally, the entry on automata from Diderot’s encyclopedia: “engin qui se meut de lui-même, ou machine qui porte en elle le principe de son mouvement” [an instrument that moves by itself, or a machine that contains within itself the source of its motion]. If we follow deconstruction mechanically, we know what to do with this. On the one hand, the automaton moves itself; on the other, it is the object of an internal intention, so to speak, the object to a subject within itself, a motion-imparting subject that we will never be able to get at.
- Notably, actors in Gizmo had wanted to know their “motivation.” What’s a motive but that which moves us yet isn’t accounted for in the action itself? A motive’s a kind of extra power. The actors want to know, in other words, what makes them more than robots. Alternately, a “motive” is precisely the search for the mechanism: actors who want to know their motives are robots who want to know what makes them tick. They’re not looking for free action so much as for a different way to wind themselves up.
- We had several papers on robot caregivers for disabled people. I won’t presume to summarize (or mangle) their arguments. Instead, I’ll just (again) offer my own ideas, in this case on the chronological difference of robots. Robots operate at a different speed than (most of) us. Either they operate far more quickly (in terms of information retrieval and processing), or they operate far more slowly, in the sense that they might long outlast us, or they operate in a kind of stasis or quick temporal looping, in the sense that an action or set of actions will be repeated identically, whether done 10, 100, or 1,000 times. See Disney’s Sorcerer’s Apprentice.
- Proposed: what distinguishes robots from artificial intelligence is that robots are bodied. Counterproposal: HAL or Alpha 60 (in Alphaville) can be hurt, and seem, finally, to be localizable, and Skynet (in the Terminator films) does its ill through materializations (the terminators, nuclear warheads, etc.). Development: it’s impossible to imagine a disembodied mind; it’s all (necessarily) material, in the sense that everything has a place.