Thursday 8 November 2012

Happy World Usability Day!

Today, November 8th, is World Usability Day, arranged annually since 2006. A list of events is here (although if you are interested in events near you, the map on the front page is much more usable); there’s even one here in Helsinki.

If you only know usability from real-world experience, or more likely from the lack of it (good usability is unobtrusive), here are the standard definitions:
“[Usability refers to] the extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use.” - ISO 9241-11

“Human-centered design is characterised by: the active involvement of users and a clear understanding of user and task requirements; an appropriate allocation of function between users and technology; the iteration of design solutions; multi-disciplinary design.” - ISO 13407
For more information, see The User Experience Professionals’ Association website.

Of course I’ll also take the opportunity to mention my paper titled Software Usability and Legal Informatics (draft paper on SSRN), which I will be presenting later this month at the KnowRight conference. As far as I know, there has been very little earlier scholarship on this topic in legal informatics, but pointers are most welcome. I will be pursuing this line of research further in other articles over at least the next couple of years.

[Update: presentation now available here.]

Monday 5 November 2012

Phantoms, RoboCops and teleportation law - just an ordinary day at an extraordinary conference

About a month ago (oh dear) I once again had the pleasure of attending GikII, the world's top most number one conference on geek law, this time at the London campus of the University of East Anglia. I also gave a presentation of my own there, titled "Is Botox® the New Tinfoil Hat? On Mind-Reading, Behavioural Biometrics, and Privacy", with the following abstract:
Biometrics are an old acquaintance for data protection law. However, the legal interest has thus far focused on the use of biometrics for identification purposes only, that is, using them as a unique key giving an individual access to something or tying that individual to other, non-biometric personal data. The role of biometric data as (potentially even sensitive) personal data in its own right has yet to receive the same kind of attention. Biometrics can also be used, for example, for personalized outdoor advertising even without positively identifying the individual target. The technological development is extremely fast, and, as with many emerging technologies, the law struggles to keep up to date.

One particularly interesting development is the combination of behavioural biometrics with face recognition. Continuous analysis of facial microexpressions based on the Facial Action Coding System (probably most familiar from the TV drama Lie to Me) is being developed for a number of purposes, such as profiling airline passengers and lie detection. For lie detection and `mind-reading' in general, simple, optically based biometrics are at least as reliable (i.e. not very, at least at this point) as the more widely known fMRI-based and other neuroimaging methods, while being totally noninvasive and thus easy to deploy without the consent or even knowledge of the data subject, and at a fraction of the cost. This type of use of fMRIs and neuroscience in general is already a hot topic in law, but the same questions should be understood more broadly and without commitment to any specific type or level of analysis or any particular technology. Looking for explanations on the neuronal level just confuses the non-specialist completely, thus lending neuroscience explanations their seductive allure.

And so this season’s fashion tip for all paranoiacs is to swap your tinfoil hat for Botox®, as it paralyses the muscles causing facial microexpressions, thus making the technology unreliable. Anti-facial-recognition makeup, which confuses the system by making specific parts of the face indistinguishable, is of course another possibility.
Slides here.

As an example case I used the AVATAR system, which has been in pilot use on the US-Mexican border in Nogales, Arizona, for only a couple of months now. You can find more info on AVATAR here and here. I do feel the need to point out that I did not want to talk specifically about AVATAR, but about the wider long-term privacy implications of face recognition and other biometric technologies. And to have a bit of fun while doing it, of course.

I did, however, somehow manage to briefly mention an issue I seem to return to in every paper, namely the question of judicial (or in this case administrative) decision support. In the EU, the most general regulation of this issue is Article 15 of the Data Protection Directive (95/46/EC), on automated decisions, which can only be warranted by statute or for the purposes of fulfilling a contract. In either case, the system making the automated decision must have a safeguard in the form of human supervision with the possibility to override. A similar regulatory scheme is proposed to continue under the forthcoming Data Protection Regulation, this time in Article 20, titled Profiling.
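
To make the safeguard requirement a bit more concrete, here is a minimal sketch of the architectural pattern it implies. This is purely illustrative Python with hypothetical names, not drawn from any real system: the machine output is only a recommendation, and it becomes a decision only once a named human reviewer has either confirmed or overridden it.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Recommendation:
    """Machine output: a proposal, never a final decision by itself."""
    subject_id: str
    outcome: str          # e.g. "grant" or "refuse"
    confidence: float     # the model's own estimate, 0.0-1.0

@dataclass
class Decision:
    """Final decision: always records the reviewing human and any override."""
    recommendation: Recommendation
    reviewer: str
    final_outcome: str
    overridden: bool = False

def finalize(rec: Recommendation, reviewer: str,
             override_outcome: Optional[str] = None) -> Decision:
    """A recommendation becomes a decision only through a human reviewer,
    who may accept the proposed outcome or substitute their own."""
    if override_outcome is not None and override_outcome != rec.outcome:
        return Decision(rec, reviewer, override_outcome, overridden=True)
    return Decision(rec, reviewer, rec.outcome)
```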

The requirement for human supervision is of course good and necessary, but it is by no means enough by itself. If a system makes correct judgments 90% of the time, people seem to have a tendency to infer that it is correct the other 10% of the time as well. One absurd example of this kind of uncritical adherence to procedure (without machines!) is the Twitter joke trial, which was also taken up by Ray Corrigan in his GikII presentation.

There are different ways to mitigate this. The obvious one is that, especially in a context where the system's decisions are routinely followed, the error rate cannot possibly be allowed to be anywhere near 10%, which means rigorous testing protocols are required. Another possibility is to open up the algorithms for review, which can be done ex ante as part of an authorization protocol, or, especially in an individual case, ex post, or both.

Still, with Big Dada, neither of these is enough of an answer. Creating rigorous tests for real-life systems of this type is easier said than done, and “cheating” on the test by making sure at least all the known test cases work as they should is only common sense. Carving specific requirements for testing in stone is a surefire way to kill all innovation in this field.

Releasing the algorithms isn't a panacea, either. When the systems are developed by commercial companies (ahem), there is considerable reluctance towards (or at least a hefty price tag attached to) this kind of openness. In a national security setting it Just Isn't Done. And in any case, the sheer complexity of the task means that access to the algorithm is of no use whatsoever when you are trying to board a flight but THE COMPUTER SAYS NO.

So what's my answer? Quite simple: require that decision support systems always be constructed to give explicit and, upon request, detailed reasons for their decisions in human-compatible terms, just like a well-written court decision. This makes it easy for anyone to see whether something is completely off in the inputs or the line of reasoning, and to step in and override. It also allows for a more qualitative type of testing, since not only the decision but also its rationale can be included in the evaluation. And when the system does something harmless enough (say, evaluating the likelihood of confusion between trademarks (smiley)), the rationales can be used for educational purposes.
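
As a rough illustration of what I mean, here is a sketch in Python, again with entirely hypothetical names and a made-up trademark example rather than any real product: the point is simply that the system's output should bundle the conclusion with its reasons, so that the reasons can be read, tested and, where necessary, overridden.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Reason:
    factor: str        # which input or rule the system relied on
    weight: float      # how much it contributed to the conclusion
    explanation: str   # a human-readable sentence

@dataclass
class ReasonedDecision:
    conclusion: str
    reasons: List[Reason]

    def as_text(self) -> str:
        """Render the decision like a (very terse) written judgment."""
        lines = [f"Conclusion: {self.conclusion}"]
        for r in sorted(self.reasons, key=lambda r: -r.weight):
            lines.append(f"- {r.factor} (weight {r.weight:.2f}): {r.explanation}")
        return "\n".join(lines)

# Hypothetical usage: likelihood of confusion between two trademarks
decision = ReasonedDecision(
    conclusion="likelihood of confusion: high",
    reasons=[
        Reason("visual similarity", 0.6,
               "The marks share the same dominant word element."),
        Reason("goods and services", 0.3,
               "Both marks cover identical goods in class 25."),
    ],
)
print(decision.as_text())
```

The data structure itself is beside the point; what matters is the contract that, whatever the internals, the system can externalise its reasoning in terms a layperson, a reviewer, or a court can work with.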

So is this a case for more regulation? Even if it were, the legislator's track record in this field does not exactly promise any immediate relief. One way to solve this is to let the markets decide, but that requires an educated customer base that knows what to require and why. I guess we'll just have to wait and see.

Guest post at VoxPopuLII

A couple of weeks ago I had a guest post about my research in general on the VoxPopuLII blog of the Legal Information Institute at Cornell Law School.