Prompted by the discussion at a presentation I gave yesterday on intelligent legal technology (titled Älykäs oikeusteknologia, i.e. in Finnish, for once), I guess I feel the need to export a part of the neural network that subsequently emerged in the form of a blog post. I was asked to define artificial intelligence, and since I refused to provide a definition, I was asked again. And again. Providing such a definition is quite irrelevant for me, as I don’t research legal AI in general (in which case the delineation between AI and non-AI might be of interest) but rather some specific questions (modelling vagueness and uncertainty in law) which are without question AI & law questions. Still, here, for explanatory rather than definitive use, with no warranties for fitness for any particular purpose, yadda yadda yadda, are my two cents:
Artificial Intelligence is the cross-disciplinary enterprise of trying to do things with a computer which when done by people are said to require intelligence and which computers cannot (yet) do. (The careful reader may notice a certain degree of isomorphism with a popular definition within the extended cognition framework...)
(And comparison shoppers, here is Wikipedia’s current version: “Artificial intelligence (AI) is the intelligence of machines and robots and the branch of computer science that aims to create it.”)
So: Consistently with the bottom-up approach to AI I like to advocate in general, I don’t think allusions to the Turing test or the Singularity or whatever are all that interesting. As far as actual progress is concerned, the cognitive arts advance through innovations which are very small increments from the perspective of AI as a whole but can be quite dramatic for the topical discipline in question.
I do think that the trying (or aim[ing] to create) is an important part of what makes AI AI. Doing arithmetic also requires intelligence but has never been a part of AI, since computers could do it (and indeed were built to do it) properly from the beginning. And so, on the way from notrespondingstilltrying to commercial viability, AI projects start being called computational whatever or whatever technology (hence legal technology). Of course the boundaries are vague and the whole boxological exercise of little use in anything other than turf wars in academia.
And the fact that the definition refers to human intelligence just serves to illustrate the futility and question-beggitude of definitions, for one simple reason: the psychological understanding of human intelligence just adds even more layers of complexity. For example, IQ tests cannot possibly measure human intelligence per se and in general. What they measure instead is a specific indicator known as the g factor (or general intelligence), which has been shown to correlate reasonably well with the more specific cognitive abilities.
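The idea behind the g factor can be illustrated with a toy simulation. This is just an explanatory sketch with made-up loadings and simulated data, not a claim about real psychometric practice: when several subtest scores all share a common latent factor, the first principal component of the score matrix recovers something that correlates strongly with each subtest.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Latent "general intelligence" for each simulated test-taker.
g = rng.normal(size=n)

# Hypothetical loadings: how strongly each of four subtests depends on g.
loadings = np.array([0.8, 0.7, 0.6, 0.5])

# Each subtest score = loading * g + subtest-specific noise,
# scaled so every subtest has unit variance.
scores = g[:, None] * loadings + rng.normal(size=(n, 4)) * np.sqrt(1 - loadings**2)

# Extract the first principal component as a crude stand-in for g.
scores_c = scores - scores.mean(axis=0)
_, _, vt = np.linalg.svd(scores_c, full_matrices=False)
g_hat = scores_c @ vt[0]

# The recovered factor correlates well with every subtest
# (the sign of a principal component is arbitrary, hence abs()).
for i in range(4):
    r = abs(np.corrcoef(g_hat, scores[:, i])[0, 1])
    print(f"subtest {i}: |r| = {r:.2f}")
```

The point of the sketch is merely that "one common indicator correlating with many specific abilities" is a perfectly ordinary statistical phenomenon, which is quite different from a definition of intelligence.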
Even if working on definitions can occasionally serve a useful purpose, personally I think in most cases the more expedient alternative is to follow Justice Potter Stewart in Jacobellis v. Ohio: “I know it when I see it.” For historical reasons, jurisprudence in Finland still has a particular affinity for concepts and definitions not really seen elsewhere to the same degree. I’m planning to address this issue in extenso at some point with the title Begriffsjurisprudenz 2.0. (Spoiler alert: may also offend ontologists.)
Musings on Law and Intelligence (Artificial and Natural)
"The future is already here - it's just not very evenly distributed." William Gibson
"Wisdom is the abstract of the past, but beauty is the promise of the future." Oliver Wendell Holmes
Thursday, 13 December 2012
Saturday, 8 December 2012
Peter Thiel on Singularity and legal technology
Betabeat (just one of many tech blogs I follow regularly) yesterday had an interesting post on Peter Thiel’s (Stanford Law graduate, PayPal co-founder &c &c) presentation at the Legal Technology and Informatics course held at Stanford Law for the first time this past autumn. Blake Masters has kindly written and published an essay on the presentation, on which these brief comments are based.
Personally I think all this talk about the Singularity is mostly just a distraction (and of course fodder for dystopic science fiction). Actually functioning general-purpose artificial intelligence is not simply a matter of bytes and CPU cycles, or even of fully replicating the neural network of a human brain at some instant (because so much of human intelligence depends on neurogenesis and the formation and pruning of connections, processes which only a couple of decades ago were still thought to end by adulthood), and anyway it is so far over the horizon that it is impossible to use as a target. There is still a lot of work to be done in trying to make sense of the actual functioning of human cognition. (The discussion about free will, and whether Libet’s experiments show that it doesn’t exist, is a good example.) Even if the Singularity does arrive at some point, the interaction of humans and computers at that time will not be something we can easily imagine. (Just compare whatever you are using to read this with a completely character-based interface, your only choice thirty years ago. And I still fondly remember the sound of a good mechanical teleprinter...)
To date, AI has been most successful when trying to solve very difficult but still quite concrete problems with computational methods. My rule of thumb is that when AI starts being useful, it stops being called AI. (Hence I also prefer to talk about (intelligent) legal technology rather than legal AI.) There are many branches of computer science and other computational sciences which started out as basic AI research, with language technology as just one good example.
But more importantly, as for the shorter timeframe, I totally agree with Thiel. Computers are much better than people at some tasks, and legal technology has great potential for radically transforming the marketplace for legal services (for the better) in the near future. The work we do at Onomatics will hopefully be a good example from the more technologically advanced end of the scale, but our domain (trademark law) is just a very small corner of the entire legal system.
All this just reminds me that I should finally get around to writing two blog posts I’ve been thinking about for quite a while, one titled “Why do computers make better lawyers than people” and the other – of course – “Why do people make better lawyers than computers”. Real Soon Now!
Further reading:
- Peter Thiel on The Future of Legal Technology - Notes Essay (by Blake Masters)
- Notes from Peter Thiel’s CS183 Startup class at Stanford (by Blake Masters)
- Syllabus for the Stanford Legal Technology and Informatics course
- Course reader for the same