Stross's response is to say that we already have it to some degree: grandmaster-level chess-playing computers, expert medical diagnosis systems, self-driving cars, etc. But he never quite makes the point I wish he had, which is that general intelligence is a whole different animal. Writing a program to solve a single, well-defined problem like chess may be computationally challenging, but it isn't intelligence. Krugman gives him a good lead-in to this topic:
[W]hen I took computer science... the instructor told us all that by getting chess-playing computer programs we'd learn a lot about the nature of consciousness, and we ended up learning a lot about the nature of chess.
That's probably inevitable when you focus on a specific problem like chess. It's pretty clear that Deep Blue (the chess supercomputer that beat Kasparov in 1997) was not simulating the thought process of an actual chess grandmaster in any meaningful way; it was searching the game tree by brute force. This subject runs deeper than I want to go into right here. Stross does mention one interesting quote from Edsger Dijkstra: "The question of whether a machine can think is no more interesting than the question of whether or not a submarine can swim."
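To make the Deep Blue point concrete, here is a minimal sketch of minimax search with alpha-beta pruning, the brute-force game-tree technique that classical chess programs are built on. This is generic textbook search, not Deep Blue's actual code, and the toy take-the-last-stone game is a hypothetical stand-in so the example runs on its own:

```python
# Minimal minimax with alpha-beta pruning, the brute-force game-tree
# search family that classical chess engines belong to. The game here
# is a toy "take 1-3 stones, last stone wins" game so the code runs
# standalone; it is NOT Deep Blue's algorithm, just the textbook core.

def moves(pile):
    """Legal moves: take 1, 2, or 3 stones, but no more than remain."""
    return [n for n in (1, 2, 3) if n <= pile]

def minimax(pile, maximizing, alpha=float("-inf"), beta=float("inf")):
    """Value of the position for the maximizing player:
    +1 if it can force a win, -1 if not. Last stone taken wins."""
    if pile == 0:
        # The previous player took the last stone, so the side to move lost.
        return -1 if maximizing else 1
    if maximizing:
        best = float("-inf")
        for m in moves(pile):
            best = max(best, minimax(pile - m, False, alpha, beta))
            alpha = max(alpha, best)
            if alpha >= beta:  # prune: the minimizer will avoid this line anyway
                break
        return best
    else:
        best = float("inf")
        for m in moves(pile):
            best = min(best, minimax(pile - m, True, alpha, beta))
            beta = min(beta, best)
            if alpha >= beta:
                break
        return best

if __name__ == "__main__":
    # Piles that are multiples of 4 are lost for the side to move;
    # everything else is won. The search rediscovers this by brute force.
    for pile in range(1, 9):
        outcome = "win" if minimax(pile, True) == 1 else "loss"
        print(f"pile of {pile}: {outcome} for the player to move")
```

A program like this can play its game perfectly, yet there is nothing in it you could point to and call understanding; it is exhaustive lookahead made tolerable by pruning.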
I'm not sure I agree with Dijkstra on this one. What he seems to mean is that problems of definition overwhelm what you really want to talk about, so the question becomes meaningless. But one could instead take him to mean that a submarine is a mechanism for accomplishing the same task that swimming accomplishes for living things. If so, then analogously an AI should accomplish the same tasks that thought accomplishes for living things. Clearly we do not possess such an AI today - our best computers accomplish only tiny subsets of what our brains can do. But that question is certainly more interesting than definitional ones about whether submarines "swim."