Saturday, 17 March 2012

Von IBM lernen heißt siegen lernen

(Paul Rand 1981; roughly, "to learn from IBM is to learn to win")
IBM have a unique and highly visible record in AI winnage. Herbert Simon and Allen Newell predicted as early as 1957 that a computer would beat the best human chess player within ten years; it actually took four times that long before IBM's Deep Blue finally won a match against Garry Kasparov in 1997. Since then, computers have been essentially unbeatable in chess. Over the years, the work needed to reach this point has prompted a number of reassessments of human cognition: how complex many tasks really are, and how good at least some of us actually are at them.

Another IBM Research landmark took place last year, when Watson, an AI program built to excel at Jeopardy!, beat two all-time champions in a specially televised tournament. Allegedly you can find the actual shows on YouTube (but of course I can't link to them because I'm too much of a copyright prude). And there are bound to be more such landmark achievements to come. One of those could be delivered by the Blue Brain project, an attempt to build a molecular-level computer simulation of an entire human brain.

The game show Jeopardy! itself is perhaps not that well known in Finland. As far as I can remember, a local version ran on one of the channels I never watch for a season or two, after which it was probably cancelled. It is a quiz format with two rounds of six categories, each containing five clues worth different amounts of money, plus various added features. The questions, or clues, rely heavily on wordplay and all kinds of creative elements, so answering is definitely not just a matter of parsing the clue and looking up the answer: the variability in the clues makes it impossible to interpret them correctly by parsing alone. The most distinctive characteristic of the game is that answers (at least pragmatically speaking) must in turn be phrased syntactically in the form of a question. For a novice competitor of the human persuasion this adds to the cognitive load, and occasionally they forget the rule, making even a factually correct answer technically incorrect. For a computer program equipped with a good lexical database, on the other hand, complying with this rule is fairly trivial: just check whether the answer is a person or not and whether it is singular or plural, and prefix who/what and is/are accordingly. If you want to get really fancy, also see whether you should prefix the answer with an article (definite or indefinite).
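The recipe above fits in a few lines of Python. This is just my own illustrative sketch, not Watson's actual code: the function name and its flags (`is_person`, `is_plural`, `article`) are hypothetical, and in a real system their values would come from a lexical database rather than be passed in by hand.

```python
def phrase_as_question(answer, is_person, is_plural, article=None):
    """Turn a bare Jeopardy! answer into the required question form."""
    wh = "Who" if is_person else "What"        # persons get "who", everything else "what"
    verb = "are" if is_plural else "is"        # agree the verb with the answer's number
    body = f"{article} {answer}" if article else answer  # optional definite/indefinite article
    return f"{wh} {verb} {body}?"

print(phrase_as_question("Alan Turing", is_person=True, is_plural=False))
# Who is Alan Turing?
print(phrase_as_question("platypus", is_person=False, is_plural=False, article="a"))
# What is a platypus?
```

The whole difficulty, of course, lies in the lookups that would set those flags, not in the string formatting.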

Chess as a game is computationally closed. The number of pieces, their positions, the possible moves and the complete games are all finite. The number of possible games is, however, enormous, and except at the very end of a game it is quite simply impossible even to list them all, let alone evaluate each one to see which move is the best to make next. So while simpler games such as tic-tac-toe can be solved by simple brute force in an instant (see xkcd), Deep Blue had to rely heavily on heuristics and libraries of move sequences to accomplish its task. (By the way, Deep Blue used a great deal of dedicated custom hardware to do this, while Watson ran on general-purpose computers.) Jeopardy!, on the other hand, is not computationally closed. The list of potential topics for the categories is unbounded, and a category typically combines at least two of them, described initially only through a clever and opaque category title.
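To make "solved by simple brute force" concrete, here is a minimal minimax sketch of my own (nothing to do with Deep Blue's internals) that exhaustively evaluates every tic-tac-toe position and confirms the well-known result that perfect play ends in a draw:

```python
from functools import lru_cache

# the eight winning lines on a 3x3 board, stored as index triples
WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
             (0, 3, 6), (1, 4, 7), (2, 5, 8),
             (0, 4, 8), (2, 4, 6)]

def winner(board):
    """Return 'X' or 'O' if someone has three in a row, else None."""
    for a, b, c in WIN_LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def solve(board, player):
    """Game value for X under perfect play: +1 X wins, 0 draw, -1 O wins.

    board is a 9-character string ('.', 'X' or 'O'); player is whose turn it is.
    """
    w = winner(board)
    if w:
        return 1 if w == "X" else -1
    if "." not in board:
        return 0  # board full, no winner: draw
    # try every legal move and take the best value for the player to move
    values = []
    for i, cell in enumerate(board):
        if cell == ".":
            nxt = board[:i] + player + board[i + 1:]
            values.append(solve(nxt, "O" if player == "X" else "X"))
    return max(values) if player == "X" else min(values)

print(solve("." * 9, "X"))  # 0: perfect play from the empty board is a draw
```

The memoized state space is tiny (at most 3^9 = 19,683 boards), which is exactly why tic-tac-toe yields to brute force while chess, with its astronomically larger game tree, does not.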

As an occasional legal theorist, and thus someone who is also interested in rules for their own sake, I find it interesting to note that the rules of chess themselves are only about ten pages long. What is even more relevant is that Deep Blue did not even implement all of them properly. For example, the program was not able to recognize a draw, but relied on people to recognize such events. Nor have I been able to find any mention of what the program would have done in the event of an invalid move by its opponent that went unnoticed by the judge. (Not that that is likely to be a real issue at that level.) And of course a computer is completely unaffected by such possible rule violations as Tal's devious and distracting smile, which Ronald Dworkin also used as an example (Taking Rights Seriously, p. 102). Chess, and in particular chess AI, is therefore not a very good model for the legal field. As an application it is not easily generalizable, and the market for chess processors must be quite limited.

Watson, on the other hand, was not built just to kick some human butt in Jeopardy! and get a ton of publicity while doing it. IBM have announced plans to develop the underlying DeepQA technology further for deployment in a variety of fields of expertise, including law. For the time being they seem to be concentrating strongly on the medical and financial sectors. (It certainly wouldn't surprise me if their market research had shown that lawyers are too conservative and not tech-savvy enough for it to be a potential commercial success, at least for now.) But there is still a live connection: CMU, the academic home of both Deep Blue and Watson, is just down the street from Pitt Law School, one of the centres of AI & law research in the US, and Watson principal investigator Dr. David Ferrucci gave a keynote address at ICAIL last June in Pittsburgh. We'll have to wait and see.

The idea of Watson, J., has even reached the top of the legal establishment at our poor frontiers (or whatever 'raukoilla rajoilla' is in English). At the Finnish Bar Association's annual conference this January, the Parliamentary Ombudsman of Finland, Petri Jääskeläinen, LL.D., opened his address (only in Finnish, sorry) with Watson (and Chicken Run, nice touch there) and the horror scenario of computers deciding cases of law. My own position on the issue is of course already on the record: I'm afraid neither my rank nor my GI tract is strong enough for me to trust my gut instincts on the whether without having at least some clues about the how, where and why as well. Right now we are really only getting to the point of shaping the right questions, while at the same time trying to work out at least tentative answers to see whether the questions even make sense. Decision-support systems, let alone evil robot judges, will not just appear out of nowhere (and for the time being the utter bogosity of public-sector software procurement procedures is certainly the first and best line of defence against them...); rather, they should preferably appear in contexts where they actually make all kinds of sense. Meanwhile, there are plenty of much less scary types of intelligent software technology not yet used enough in the legal field (especially in information retrieval) to get things started, but more about those later on.

More on Deep Blue and chess:
Feng-Hsiung Hsu: Behind Deep Blue: Building the Computer that Defeated the World Chess Champion (Princeton UP 2002)
Diego Rasskin-Gutman: Chess Metaphors: Artificial Intelligence and the Human Mind (MIT Press 2009)
Pertti Saariluoma: Chess Players' Thinking: A Cognitive Psychological Approach (Routledge 1995)

More on Watson: IBM Research (lots of videos as well)
