Friday 23 March 2012

What is innovation?

I'm being profiled on WIN-novators later this spring. As a preview, here's my take on the question in the title of this post:

I suppose there are basically two approaches to innovation. The first is taking something pre-existing and familiar and making it just a little better. The other is a radical break with tradition, doing things altogether differently. In a sense this echoes the distinction between normal science and scientific revolutions introduced into the philosophy of science by Thomas Kuhn. But in reality (rather than in philosophy) these approaches are complementary rather than dichotomous.

To take a concrete example: the shift from, say, a 32 nanometre to a 22 nanometre process in semiconductor manufacturing is not immediately visible to the computer user, and from this perspective it may seem like yet another meaningless number in the computer's specifications, or at best a small incremental change. From the manufacturing perspective, on the other hand, shedding those extra nanometres has required enormous technological advances. One additional constraint on the design is the amount of heat generated in a smaller and smaller space, heat that still has to be dissipated through a cross-section of comparable size, leading to the invention of 'dark silicon' (powering down parts of a chip that are unused at any given moment). And on the other hand, the cumulative effect of such changes in terms of processing power, storage capacity and so on (Moore's law and all that) enables new approaches to all kinds of problems that would have been quite impossible a decade or two ago. After all, even today's smartphones are more powerful than most supercomputers of the 1980s.

The ability to deal with immense amounts of data in real time is definitely one of the two biggest driving forces for artificial intelligence in the foreseeable future. Recently I looked into the history of machine translation, and one of the earliest systems actually took twice as long to do what amounts to looking up each individual word in a dictionary and stringing the results together into an approximation of a translation as it would have taken a human translator to produce a correct translation. A system like Google Translate, on the other hand, has ginormous collections of multilingual documents with aligned language elements and uses them, together with some heavy statistical processing, to do the same job and produce at least something understandable, if not correct, in just fractions of a second. Oh, and the other driving force? Bio-inspired AI, or seeing how Nature has solved a given problem and trying to reproduce that in an artificial design.
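To make the contrast concrete, here is a toy sketch in Python of the early word-for-word approach: look each word up in a bilingual dictionary and string the results together. The mini dictionary and the example sentence are invented for the illustration and do not come from any actual system.

# Word-for-word 'translation': no grammar, no word order, no context.
toy_dictionary = {
    "the": "le", "cat": "chat", "sits": "assis", "on": "sur", "mat": "tapis",
}

def word_for_word(sentence, dictionary):
    """Look up each word in isolation and string the results together."""
    words = sentence.lower().split()
    return " ".join(dictionary.get(w, "<" + w + "?>") for w in words)

print(word_for_word("The cat sits on the mat", toy_dictionary))
# -> 'le chat assis sur le tapis': vaguely understandable, far from correct.

Even this crude toy makes the limitation obvious: the output is at best an approximation of a translation.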

As for my own work, I try to keep these different perspectives in mind, and while I occasionally like to throw words like 'robot judge' around, it is more as an abstract target (and of course also as a provocation) than as something I am actually interested in implementing. It is, however, helpful in trying to keep in mind the whole range of issues potentially involved in working with legal AI, and not just the issues du jour the research community happens to find interesting. In my opinion, part of the problem is also that mainstream legal theory does not study law and legal reasoning as a form of cognitive activity and has managed to all but ignore the scientific progress made in both linguistics and psychology over the past fifty years; in much of my work I take theories from those disciplines and try to apply them to questions of legal theory in a very general sense (mostly because almost nobody else seems to be doing it). Still, the best way forward for me seems to be to model some very small corner of the legal system using some particular technique (and I think I'm stuck with fuzzy logic at least until I've finished my dissertation), see whether it works, and then see whether any broader conclusions can be drawn from it. In the best case it might even do something useful (read: marketable) at the same time. In this respect, AI & law seems to be about twenty years behind language technology.
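For the curious, here is a minimal sketch of what the fuzzy-logic angle can look like in practice. The legal concept ('unreasonable delay') and the breakpoints are entirely hypothetical and serve only to illustrate graded membership in place of a crisp rule; this is not a description of my actual model.

# Instead of a crisp rule ('a delay over 30 days is unreasonable'),
# a graded membership function with purely illustrative breakpoints.
def unreasonable_delay(days):
    """Degree (0..1) to which a delivery delay counts as 'unreasonable'."""
    if days <= 14:                      # clearly still reasonable
        return 0.0
    if days >= 60:                      # clearly unreasonable
        return 1.0
    return (days - 14) / (60 - 14)      # linear ramp in between

for d in (10, 20, 30, 45, 70):
    print(d, "days ->", round(unreasonable_delay(d), 2))

The point is simply that a 45-day delay can come out as roughly 0.67 'unreasonable' instead of falling cleanly on one side of an arbitrary cutoff.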

We don't particularly need robot judges, but for example judicial decision-support systems could help actual judges in making correct and consistent decisions more efficiently and reliably, thus perhaps enabling them to spend more time on cases where uniquely human capabilities are really required. And if at the same time technology also revolutionizes the way legal services are provided (as predicted by Richard Susskind in particular), maybe the parties do not even have to go to court in the first place.

Thursday 22 March 2012

Augmenting Man

"The combination of machines and ICT has brought exponential development into the engineering world. As a result we see the emergence of autonomous machines. The complexity of tasks as well as the complexity of environments where these machines can work is steadily increasing. We can build cars that can navigate autonomously through city traffic finding destinations without any human intervention. Therefore, it is fair to say that machines have, on a functional level, already reached cognitive abilities comparable to horses or dogs. But this is not the end of development. Soon there will be no type of manual labor in which machines will not outperform humans. This is technically already true today. Currently, machines are merely held back by economic and societal constraints. The weakest of these constraints is the cost of hardware. Moore’s Law guarantees that computing power that today can only be found in supercomputers will be available in pocket sized devices in little a more than a decade. Some other constraints are more difficult to overcome. The more powerful and more complex a machine is, the more damage it can potentially create. This is the reason there are no self-steering cars on the roads yet. We have suggested a path of best practices and ethics to improve machines and reduce intentionally malicious behavior. Nevertheless, even those best practices leave us with a residual risk. This residual risk is not necessarily small. It may indeed be so large that certain types of machines will not be able to enter the market because of liability concerns. This limitation will only be overcome by the creation of an ultimate machine. For human parents responsibility and liability for a child ends with it becoming an adult. Similarly a machine can become an ultimate machine by emancipating itself from its manufacturer/owner and indeed becoming a distinct legal or even social entity. Interestingly, this can be done by creating a legal construct around this ultimate machine that in itself has economical value.

Nevertheless, the big question remains: how will our societies hold up to this rapid change? For example, currently our entire tax and social system, indeed most of our culture, is centered on the concept of work as the means of creating one’s livelihood. For example, the European Union has set a goal of increasing the part of the population (between 15 and 64 years of age) in gainful employment to 70 per cent by 2010. Yet when machines are able to perform manual labor cheaper and more efficiently than humans, what jobs will remain? Former US Secretary of Labor, Robert Reich, assumes that manual labor will eventually be replaced completely by machines. Nevertheless he argues that there will still be a high demand for a human work force. These new workers will have to be highly educated and trained “symbolic analysts” – lawyers, doctors, journalists, consultants, and the like – who create value beyond mere manufacturing. However, currently only a fraction of the labor force is capable of performing these jobs. Even though governments have stated their intention to increase investment in education, it is questionable whether this goal can be achieved for everyone. And even if it were possible, the advancement in information technology is not restricted to manual labor. Machines have augmented the physical performance of man to the point where he becomes superfluous. The same augmentation is also taking place with our cognitive abilities. The famous quote of the computer being a “bicycle for the mind” becomes evident when we consider the vast amount of data a single person can analyze with the help of a personal computer. Therefore, the observation that machines in the long run are not destroying jobs but creating new ones is merely that: an observation and not a law. There might well be a threshold of automation that changes the rules of the game entirely. If that should happen, this would be one aspect in which we have to change our culture radically. In any case, how well we are prepared for these new machines will determine the social acceptance and ultimately the cost of the transition. Since development is still gradual, there will be several years left to create new practices. There is likely neither a simple nor a single answer. The convergence of disciplines and the accelerating speed of technological progress will require a holistic approach and result in ad-hoc solutions. Fortunately, we can start learning about the problem and its solutions already today. After all, the future is already here, just not equally distributed."


William Brace, Anniina Huttunen, Vesa Kantola, Jakke Kulovesi, Lorenz Lechner, Kari Silvennoinen and Jukka Manner, "Augmenting Man", in Bit Bang – Rays to the Future, eds. Yrjö Neuvo and Sami Ylönen (Helsinki: Helsinki University Print, 2009), pp. 236–263.


Wednesday 21 March 2012

File sharing + Robots = "Low Orbit Server Drones"

Have you heard about The Pirate Bay's latest idea? They are planning Low Orbit Server Stations (LOSS). In other words, they would like to host "parts of their site in GPS-controlled drones, instead of old-fashioned data centers." Due to a court decision, my operator Elisa has blocked its customers' access to The Pirate Bay. Consequently, I am not able to check what they themselves say about the subject, but below you can find some information:

The Pirate Bay Attacks Censorship With Low Orbit Server Drone

The Pirate Bay Planning "Low Orbit Server Drones"

And I thought there was no way to combine file sharing and robot studies...

How Did I Become Interested in Intelligent Systems?

During the academic year 2008-2009 I participated in the Bit Bang post-graduate course.

“Bit Bang – Rays to the Future is a post-graduate cross-disciplinary course on the broad long-term impacts of information and communications technologies on lifestyles, society and businesses. It includes 22 students selected from three units making the upcoming Aalto University: Helsinki University of Technology (TKK), Helsinki School of Economics (HSE) and University of Art and Design Helsinki (UIAH). Bit Bang is a part of the MIDE (Multidisciplinary Institute of Digitalisation and Energy) research program, which the Helsinki University of Technology has started as part of its 100 years celebration of university level education and research. Professor Yrjö Neuvo, MIDE program leader, Nokia’s former Chief Technology Officer, is the force behind this course.”

We wrote a joint publication based on the fall and spring group works. The book was published in co-operation with Sitra, the Finnish Innovation Fund:

http://lib.tkk.fi/Reports/2009/isbn9789522480781.pdf

During the fall term my group wrote a book chapter on processors and memory, “The Digital Evolution – From Impossible to Spectacular”, and in the spring we were given the topic of intelligent machines. In the end, our book chapter was titled “Augmenting Man”. And that's the way it all started. (To be continued...)

Monday 19 March 2012

Current Projects and a Book on Linking

Currently, I participate in the Graduate School Law in a Changing World:

http://www.helsinki.fi/omm/english/index.htm

"LCW graduate school covers all fields of legal studies, from various branches of positive law to general jurisprudential studies. Each doctoral student will get acquainted with the europeanisation and the globalisation of law... LCW provides the doctoral students with a systematic 4-year research training programme."

So far, we have had great fun together!

In addition, I am a member of a research project titled "New technologies in the content production and usage", funded by the Helsingin Sanomat Foundation:

http://www.hssaatio.fi/images/stories/Pihlajarinne_Tiivistelm_pitk_suomi.pdf

Our principal investigator, LL.D., Docent Taina Pihlajarinne, has just published a book on linking (A Permission to Link). It is available only in Finnish:

http://www.efokus.fi/flash/lupa_linkittaa/#/1/

LL.M. Studies in the United States - Do We Have Something to Learn?

I interviewed three women who took part in LL.M. studies in the US during the academic year 2010–2011. The interviewees were born in 1980–1982 and graduated from the law faculties of Helsinki and Turku in 2006–2007. The LL.M. studies were completed at Harvard Law School, the University of Chicago Law School and New York University School of Law (NYU Law). My goal was to examine American legal education and to compare it to Finnish legal education.
In the US learning environment, the emphasis is on the work done before the lectures. Students have to write reaction papers in which they consider the themes of the next lecture in advance. An epilogue after the lecture then ties the work together. The amount of learning material is so large that students are forced to learn to identify the relevant material and compress it before the exam. The process follows a problem-solving and inquiry-based formula. Students work at the limits of their performance together with the teacher.
There are some differences between the Anglo-American and continental legal systems. However, case-based learning has also been successfully tested in Finland. In Finland teachers often complain about the lack of debate; the work done before the lectures improves the level of the discussion. We must also remember that the debate is not necessarily at a very high level if no one leads it. The teacher in the United States is not a partner and a coach but an authority whose position is based on know-how. This is an exception to the tradition of inquiry learning. The teacher's outlook and identity are shaped by experience, and the respondents stressed the importance of the teachers' experience. Young researchers do not have a decade of experience, but this can be compensated for by a desire to develop.
Writing reaction papers before the lectures is a good way to improve the quality of the discussions and the interactivity in class. In that case, lectures may be fruitful for the teacher as well. A teacher's toolbox should also include articles and court cases to analyse. It is not enough simply to go through students' seminar papers.
This is an excerpt from an article published in Lakimies 2012/1.

Saturday 17 March 2012

Games computers play, part n

While we wait for the next RoboCup, today's New York Times has a story titled 'The Computer’s Next Conquest: Crosswords'. Crossword puzzles may seem like a trivial task solved quickly by brute force and a good word list without even having to look at the clues, but that depends a lot on the puzzle and a bit on the language as well. Morphology is of course not much of an issue for English, and as far as I know (and I'm by no means an expert here), it is not usually much of one for Finnish either, since the inflections used in Finnish crosswords are typically limited to the singular and plural nominative and the first infinitive. Where it gets challenging is with answers containing several words without indicated positions for the intra-word spaces. The simplest brute-force approach is to treat such an answer as a string of random characters subject to a check for correctness afterwards. With a 15-character string and 26 characters to choose from, we get over 10^21 alternatives. Evaluating all of them at a million per second would take about fifty million years, and there might be two to four such answers in a single puzzle... Of course, not all of the characters are unknown; some can be determined from orthogonal answers, which in turn limit the number of phonotactically permissible characters next to the ones already known. I suppose something like Markov chains together with some heuristics about typical patterns should do a much better job. Still, figuring out an answer like SNOISSIWNOOW might remain a bit of a challenge.
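As a quick sanity check on that estimate, here is the back-of-the-envelope arithmetic in a few lines of Python (the figure of one million evaluations per second is of course just an assumption):

candidates = 26 ** 15                         # ~1.7e21 possible 15-letter strings
seconds = candidates / 1_000_000              # at a million evaluations per second
years = seconds / (60 * 60 * 24 * 365.25)
print(f"{candidates:.2e} candidates, about {years:.1e} years")
# roughly 5e7 years, i.e. on the order of fifty million years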

Von IBM lernen heißt siegen lernen ('to learn from IBM is to learn to win')

(Paul Rand 1981)
IBM have a unique and highly visible record in AI winnage. While Herbert Simon and Allen Newell predicted already back in 1957 that a computer would beat the best human chess player within only ten years, it actually took four times that long until IBM's Deep Blue finally won a match against Garry Kasparov in 1997. Since then, computers have basically been unbeatable in chess. Over the years, the work needed to reach this point has led to a number of reassessments of human cognition: how complex many of these tasks are and how good at least some of us actually are at them.

Another IBM Research landmark came last year, when Watson, an AI program built to excel at Jeopardy!, beat two all-time champions in a specially televised tournament. Allegedly you can find the actual shows on YouTube (but of course I can't link to them because I'm too much of a copyright prude). And of course there are bound to be more such landmark achievements to come. One of them could be delivered by the Blue Brain project, an attempt to build a molecular-level computer simulation of an entire human brain.

The game show Jeopardy! itself is perhaps not that well known in Finland. As far as I can remember, a local version of it ran on one of the channels I never watch for a season or two, after which it was probably cancelled. It is a quiz format with two rounds of six categories, each containing five clues worth different amounts of money, plus various added features. The questions, or clues, rely heavily on wordplay and all kinds of creative elements, so it is definitely not just a matter of parsing the clue and looking up the answer; the variability in the clues makes it impossible to interpret them correctly by parsing alone. The most distinctive characteristic of the game is that answers (at least pragmatically speaking) in turn have to be phrased syntactically in the form of a question. For a novice competitor of the human persuasion this of course adds to the cognitive load and occasionally makes them forget the rule, rendering even a factually correct answer technically incorrect. For a computer program equipped with a good lexical database, on the other hand, complying with this rule is fairly trivial: just check whether the answer is a person or not and whether it is singular or plural, and prefix who/what and is/are accordingly. If you want to get really fancy, also check whether the answer should be preceded by an article (definite or indefinite).
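Just to show how mechanical that last step is, here is a naive sketch of the idea; the person lookup is a stand-in for a real lexical database or named-entity recogniser, and the plural test is a deliberately crude heuristic.

# Hypothetical stand-in for a real lexical/named-entity lookup.
KNOWN_PEOPLE = {"Garry Kasparov", "Ken Jennings"}

def phrase_as_question(answer):
    """Prefix 'Who/What' and 'is/are' according to person-ness and number."""
    is_person = answer in KNOWN_PEOPLE
    is_plural = (not is_person) and answer.split()[-1].lower().endswith("s")
    wh = "Who" if is_person else "What"
    verb = "are" if is_plural else "is"
    return wh + " " + verb + " " + answer + "?"

print(phrase_as_question("Garry Kasparov"))     # Who is Garry Kasparov?
print(phrase_as_question("crossword puzzles"))  # What are crossword puzzles?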

Chess as a game is computationally closed. The number of pieces, their positions, the possible moves and the complete games are all finite. The number of possible games is nevertheless enormous, and except at the very end of a game it is quite simply impossible even to list them all, never mind evaluate them all to see which move is the best one to make next. So while simpler games such as tic-tac-toe can be solved by simple brute force in an instant (see xkcd), Deep Blue had to rely heavily on heuristics and libraries of move sequences to accomplish its task. (By the way, Deep Blue used a great deal of dedicated custom hardware to do this, while Watson ran on general-purpose computers.) Jeopardy!, on the other hand, is not computationally closed. The list of potential topics for the categories is unbounded, and typically a category combines at least two of them, described initially only through a clever and opaque category title.
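To give a rough idea of what 'solved by brute force' means here, the minimal sketch below exhausts the entire tic-tac-toe game tree with plain minimax, no heuristics or pruning, and confirms that perfect play by both sides ends in a draw. (The representation is my own toy example and has nothing to do with how Deep Blue worked.)

# Board: a list of 9 cells, each 'X', 'O' or ' '. X maximises, O minimises.
WIN_LINES = [(0,1,2), (3,4,5), (6,7,8), (0,3,6), (1,4,7), (2,5,8), (0,4,8), (2,4,6)]

def winner(board):
    for a, b, c in WIN_LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Value of the position with perfect play: +1 X wins, 0 draw, -1 O wins."""
    w = winner(board)
    if w:
        return 1 if w == "X" else -1
    if " " not in board:
        return 0
    values = []
    for i, cell in enumerate(board):
        if cell == " ":
            board[i] = player
            values.append(minimax(board, "O" if player == "X" else "X"))
            board[i] = " "
    return max(values) if player == "X" else min(values)

print(minimax([" "] * 9, "X"))   # 0: tic-tac-toe is a draw with perfect play

The full tree has only a few hundred thousand positions, so even this completely naive search finishes in moments; nothing comparable is feasible for chess.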

As an occasional legal theorist, and thus someone who is also interested in rules just for their own sake, I find it interesting to note that the rules of chess themselves are only about ten pages long. What is even more relevant is that Deep Blue did not even implement all of them properly. For example, the program was not able to recognize a draw, but rather relied on people to recognize such events. Nor have I been able to find any mention of what the program would have done in the event of an invalid move by its opponent that went unnoticed by the arbiter. (Not that this is likely to be a real issue at that level.) And of course a computer is completely unaffected by such possible rule violations as Tal's devious and distracting smile, which Ronald Dworkin also used as an example (Taking Rights Seriously, p. 102). Chess, and in particular chess AI, is therefore not a very good model for the legal field. As an application it is not easily generalizable, and the market for chess processors must be quite limited.

Watson, on the other hand, was not built just to kick some human butt in Jeopardy! and to get a ton of publicity while doing it. IBM have announced plans to develop the underlying DeepQA technology further for deployment in a variety of fields of expertise, including law. For the time being it seems to me that they are strongly concentrating on the medical and financial sectors. (It certainly wouldn't surprise me if their market research had shown that lawyers are too conservative and not tech-savvy enough for it to be a potential commercial success, at least for now.) But there is still a live connection, as CMU, the academic home of both Deep Blue and Watson, is just down the street from Pitt Law School, one of the centres of AI & law research in the US. Watson principal investigator Dr. David Ferrucci also gave a keynote address at ICAIL last June in Pittsburgh. We'll have to wait and see.

The idea of Watson, J., has even reached the top of the legal establishment at our poor frontiers (or whatever 'raukoilla rajoilla' is in English). At the Finnish Bar Association's annual conference this January, the Parliamentary Ombudsman of Finland, Petri Jääskeläinen, LL.D., opened his address (only in Finnish, sorry) with Watson (and Chicken Run, nice touch there) and the horror scenario of computers deciding cases of law. My own position on the issue is of course already on the record: I'm afraid neither my rank nor my GI tract is strong enough for me to trust my gut instincts on the question of 'whether' without having at least some clues about the how, where and why as well. And right now we are really only getting to the point where we are shaping the right questions, while at the same time trying to figure out at least tentative answers to them to see whether the questions even make sense. It is not as if decision-support systems or the evil robot judges will just appear out of nowhere (and for the time being the utter bogosity of public-sector software procurement procedures is certainly the first and best line of defence against them...); rather, they should preferably appear in contexts where they actually make all kinds of sense. Meanwhile, there are plenty of much less scary types of intelligent software technologies that are not used nearly enough in the legal field (especially in information retrieval) to get things started, but more about them later on.

More on Deep Blue and chess:
Feng-Hsiung Hsu: Behind Deep Blue: Building the Computer that Defeated the World Chess Champion (Princeton UP 2002)
Diego Rasskin-Gutman: Chess Metaphors: Artificial Intelligence and the Human Mind (MIT Press 2009)
Pertti Saariluoma: Chess Players' Thinking: A Cognitive Psychological Approach (Routledge 1995)

More on Watson: IBM Research (lots of videos as well)