Moving away from alternative and parallel fictions to multiple possible futures: Julie Heyward's comment (on my "The rise of the machines" link) that robots must, necessarily, evolve as competitors to their makers is an interesting one.
As she says, it's been a frequent theme for science fiction – Gregory Benford, in particular, advancing it compellingly in the "Galactic Center" sequence of novels (if, towards the end, somewhat tediously – the second, Across the sea of suns, is enough to get the idea). Stanislaw Lem* made it more fun, with his asteroid belt gang wars between washing machines and refrigerators, and the Terminator franchise runs along the same general set of assumptions.
I'm less convinced than Julie, however, that this is inevitable. I'm not even sure that it's the most probable scenario, though I agree that it may be. There are a number of parallel futures which seem, to me, possible. I don't offer these as being my preferred options, nor as ones which I believe superior; just tossing around ideas which her comment triggered in the rummage room I call a mind.
Perhaps true machine intelligence will never occur at all. I wouldn't, myself, put real money on this one but it's probably (a guess: I've not done a survey) the majority view. In that case, conflict doesn't arise and the whole question is, of course, irrelevant.
Perhaps machine intelligence will reach a threshold short of and/or different from the human, as it has for the most part in terrestrial biology. In that case (assuming that it arises from human design rather than spontaneously evolving, as ours seems to have done, from its environment), we should not be looking to competitor species or human slavery models but to the various types of dominance (eg sheep, cattle) or mutual benefit (dog, cat) relationships. These have formed as a result of differences between potential competitors and the ability of humans to distort natural selection in directions favouring a desired interspecies relationship.
Perhaps the reverse will occur: machine intelligence, starting with a design bunk-up and early assistance, rapidly evolves beyond our own and we ourselves become the useful dog/cat (I don't want to think about sheep/cattle...) partners.
Perhaps, again given its origins in design by humans, machine intelligence will evolve in an analogue of symbiosis or biological cooperation with Homo sapiens. We are, after all, designing machines to do what we cannot, and which cannot do things which we can. The best known science fiction example of this is Iain M Banks' sequence of "Culture" novels (for example, Excession), in which humans and machines coëxist as equal partners to mutual benefit in a single society. Greg Bear's Moving Mars has a variant on the same idea: science, society and economy depend upon compact supercomputers known as "thinkers" which, being immobile (in fact, unconnected with the outside world except through data input/output), are equally dependent on humans and hold equal citizenship. In Clarke and Baxter's Sunstorm, the developed internets of Earth and Moon in 2037 have citizen status on the same basis.
Perhaps which route is taken depends on the balance between military and peaceful development of robots. Perhaps we will see a battle not between human and robot, but between pure machine and mixed human/machine societies.
Finally ... perhaps, as in Banks' The Algebraist, we will see the conflict which Julie anticipates but not Benford's envisaged outcome: machine intelligences driven into hiding and subterfuge by human xenophobia, along the lines of Wyndham's human mutants in The Chrysalids.
* I don't remember, off hand, and can't quickly establish, which Lem short story collection contained this one ... if I find out, I'll come back and add a reference.
- Iain M Banks, Excession. 1996, London: Orbit. 1857233948
- Iain M Banks, The algebraist. 2004, London: Orbit. 1841491551 (hbk), 1841492299 (pbk)
- Greg Bear, Moving Mars. 1993, London: Legend. 0099263114 (hbk) 0099261219 (pbk)
- Gregory Benford, Across the sea of suns. 1997, London: Vista. 0575600551. (Originally 1984, London: Macdonald & Co. 0356102254)
- Arthur C Clarke and Stephen Baxter, Sunstorm. 2005, London: Gollancz. 0575075317 (hbk) 0575076542 (pbk)
- John Wyndham, The chrysalids. 2000, London: Penguin. 0141181478 (Originally 1955, London: Michael Joseph)
8 comments:
How about a convergence thought experiment? Suppose synthetic biology makes a hybrid human/orangutan. Very strong, terrific for labor, not too smart. Ethically problematic.
Just for a minute, ignore the ethics of it and think about how you would relate to such a creature. Quick, your immediate reaction.
On the other hand, you make a machine (robot) that is identical to that human/orangutan cross. No ethical problems? Maybe, but ignore them for a second and consider your response to this creature. It looks, acts, sounds, behaves just like the human/orang synth biology cross.
Do you think, on a visceral, instinctive level, you can choose which of those responses (if your two were different) you give to a creature that is identical -- machine to hybrid?
I don't think robots would ever be like cats/dogs because we already have cats/dogs. We'd want useful/better, and then more useful/more better and so on. Look what we've done to cats/dogs...
There is no such thing as "good enough" in science or in business/marketing >> whatever we get we'll be pushing it further to ... what limit? That's what you're asking and I'm wondering about.
Movie on this subject was Planet of the Apes.
[Coëxist?]
To be honest, I am not sure how I would react to the humangutan. My quick immediate reaction to the mental image, as you specify, is pity – but that might not be my reaction in reality.
The biomimetic robot humangutan ... do I know that it's a machine? Does it have sentience? Assuming the answers to those two are "yes" and "no" respectively, my answer is "curiosity" ... if "no" to the first question, we are back to pity ... if "yes" to both questions, I think I would be in new territory and can't be sure what I would feel.
But no ... we can't choose our immediate, visceral, instinctive reactions. They may be trainable in time, but not at first contact.
I don't see that I am asking about the limit to which we would push. I don't believe there is ever a self-imposed limit on where a human will push to; any limits have to be imposed by nature or economic affordability – and even then we do everything we can to circumvent them. But what I'm asking is not that: it's how, if there are no natural limits, the evolution (robotic and social) will develop.
By "limit" I meant, vaguely, the point at which things get complicated. For me, that point is where the robot goes truly autonomous.
Do you think we can interact intellectually with an autonomous machine without engaging our emotions?
The humangutan makes you sad because it's sentient. Can you not be sad or happy or sympathetic to an autonomous machine? (That's a genuine question; I don't know how I would answer it.)
If the emotions are necessarily entangled with autonomous relations, then I think you are back to treating them as non-machines.
Just to stir the pot, here are some quotes from a Star Trek: The Next Generation episode, The Measure of a Man:
"Commander Riker has vividly demonstrated that Commander Data is a machine; do we deny that? No, it is not relevant—we too are machines, merely machines of a different type. Commander Riker has also demonstrated that Data was built by a man; do we deny that? No. Children are 'constructed' from the 'building blocks' of their parents' DNA. Are they property?"
- Captain Jean-Luc Picard
"It sits there looking at me, and I don't know what it is. ... Is Data a machine? Yes. Is he the property of Starfleet? No. We've all been dancing around the basic issue: does Data have a soul? I don't know that he has. I don't know that I have! But I have got to give him the freedom to explore that question himself. It is the ruling of this court that Lieutenant Commander Data has the freedom to choose."
- Captain Phillipa Louvois
And a link to articles on the topic in the September 4, 2008 Economist Technology Quarterly called I, human.
In that article, they reference a (pdf) scientific paper, Can Machines Think? Interaction and Perspective Taking with Robots Investigated via fMRI by multiple authors. Its title is self-explanatory.
Also from The Economist, this reader comment following a June 2008 article that surveys the state of robotics:
"Rather than fearing that technology may eclipse what is called the human, I think of technology as eclipsing those things that we assume are essential for humanity but are, in fact, inessential. I see technology as forcing us to focus on what is the essential. This doesn't mean that we must reduce the scope of what it is we call the human, but , rather,that we must look for a bigger challenge. Indeed, it shows how reductive our conception of the human has been and still is. It demands that we release ourselves from it. Their time has not come, ours has."
Julie Heyward: Star Trek has obviously improved since I last saw Scotty trying to repair the warp drive with an Elastoplast while Spock and Kirk wrestled with aliens in the athletic supporter storage room...
:-)
You must have seen more Star Trek than I have (I don't have TV reception). As I remember it, the number of athletes in the cast could have stored their supporters in a shoe-box. Or a sandwich baggie.
But "athletic supporters" really encapsulates the topic at hand.
The robots' athletic supporter would be ... the DVD rack? And how does he feel about his DVD burner?
Julie Heyward: I've seen only about a dozen Star Trek episodes ... all of them Captain Kirk vintage. My references came, instead, from Ursula K Le Guin's spoof of the same period, Intracom (in Stopwatch, 1974, London: NEL). She echoes your point about the crew size: "It is a small crew, but a select one, being composed entirely of officers."
JH> Can you not be sad or happy or sympathetic to an autonomous machine?
I don't know ... but I think that the answer (whatever it is) will be confused by our biological programming to empathise with visibly discrete entities. We find it easier to empathise with a single human being than with a population; ditto a single deer (thinking of your recent "perishable snapshots") than a herd; a whale rather than a school; and so on. Since robotics is moving towards distributed entities, it is unlikely that a robot humangutan would be built, so ... would we emotionally view an intelligent city (for example) or a conscious internet as empathisable individuals or as unempathisable masses? And what about entities which "lived" entirely in software, never visible to human eyes?
Like you, I don't know the answers ... just interested in the questions.
"I've seen only about a dozen Star Trek episodes"
While they have their own clichés, the Next Generation episodes are generally worth watching. They're still showing a couple of episodes on BBC2 Friday nights (i.e. in the small hours of Saturdays).