Why the big fuss?

The usual parade of whimsy on this blog about this or that in public services, or things-that-make-my-head-hurt-in-general, has been rudely interrupted by a series of diatribes on identity and trust online, with a focus on people interacting with government.

Why, why all this attention to some rather obscure mental masturbation?—I hear you cry.

By way of brief explanation:

1. It’s fascinating: intellectually, socially, and philosophically; here we find very real and somewhat abstract concepts fusing together to try and do an important job.

2. Did I say important? It’s really, really important. Progress on this issue has the potential to shape some pretty fundamental things about our privacy, freedom and relationship with the state (and with each other).

3. It’s big. A brief look at the history of computerising National Insurance or patient health records is enough to show that national-scale anything of this nature is not to be taken lightly. Grand schemes have to be very well designed before implementation (and not just technically, but socially and behaviourally); start-small-and-scale requires a good understanding of what will really change as things grow.

4. It’s fraught with paradoxes: it’s easy to imagine tempting answers – very hard to design workable solutions. What seems easy in broad outline dissolves into complexity in the detail. The ingredients themselves are elusive: shape-shifters at times—identity information can be there to act as a reference (help me look your record up), a verification (we’ll check that out against our database), a diversion of risk (well, we ask all callers for their date of birth), a red herring (no, I do need your mobile phone number before I’m allowed to talk to you…Data Protection innit?)…and more. And the best hope we currently have for a solution relies on concepts which are far from our mental models of how such things should work.

5. I don’t know all the answers—I know a few of the questions, that’s all. I am happy to be set straight about any of this—if you can describe a simple, workable solution, please do so. Just don’t start “well, can’t we just give everyone a unique number…?” ok? If you hear someone spout that we should be able to knock up an Amazon-type “account for government” tomorrow, gently ask them to go a little further. Ask a few questions. Ask if it has to be “the real you” holding the account. Ask if you can have more than one. Ask if you’ll have to have one, even in the distant future. But be nice.

Finally, a consoling thought, before I leave this topic for a while. There are some parallels here with another tricky technology/people problem: for thousands of years cryptography was beset by one major problem—how do you get a key from Alice to Bob so that Bob can unlock a message when he receives it? Anyone intercepting the key could then intercept the message that followed and open it. Seemingly intractable—one’s only option was to find clever ways to exchange or vary the keys—it was blown away by a neat bit of maths in the mid-70s, leading to a simple form of code-making (involving no exchange of keys) which underpins secure ecommerce to this day. Perhaps there’s something out there, as yet undiscovered, which will allow us to square these circles of usability, privacy and assurance. A public-key cryptography equivalent for identity. I just wish I could find it first.
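The "neat bit of maths" referred to above is Diffie–Hellman key agreement (1976). A toy sketch of why no key ever needs to be exchanged (tiny illustrative numbers only; real systems use 2048-bit-plus groups or elliptic curves):

```python
# Toy Diffie-Hellman: both parties derive the same secret without ever
# sending it. Parameters here are illustrative, not secure.
p = 23          # a small public prime
g = 5           # a public generator

alice_secret = 6                         # Alice's private value, never sent
bob_secret = 15                          # Bob's private value, never sent

alice_public = pow(g, alice_secret, p)   # Alice publishes g^a mod p
bob_public = pow(g, bob_secret, p)       # Bob publishes g^b mod p

# Each side combines the other's public value with their own secret:
alice_shared = pow(bob_public, alice_secret, p)
bob_shared = pow(alice_public, bob_secret, p)

assert alice_shared == bob_shared        # identical key, no key exchanged
```

An eavesdropper sees only p, g and the two public values; recovering the shared secret from those is the discrete logarithm problem, which is what makes the scheme work at realistic sizes.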

5 Comments

  1. It’s a tradeoff.

    This happens offline, of course: if I walk into council buildings and start blathering on about stuff, there’s a risk that I’m not who I claim to be. That risk is comparatively small, because — frankly — who else is going to want to pay my council tax? The risk is there nonetheless.

    Online, we delegate some of the mechanics of this to automatic mathematical and algorithmic processes. If the records say “Sarah Smith” and I’m a 30-something male, I’ll at the very least get a very quizzical look when I’m standing face-to-face with somebody. Online, there’s nothing to prevent me from passing myself off as somebody else, provided I can satisfy the systems which take the place of this quizzical look — that is, by providing a set of credentials.

    This is our baseline position: the holder of the key is assumed to be the keyholder (if you follow my meaning). They have to be, because nothing else actually works. We try to ensure that they really are one and the same person — breaking authentication out into two- or three-factor processes — but there’s always a way to evade this (is it my wife using her computer, or is it me? does it actually matter, if she’s handed over all of her details to me? what happens if I forced her to? none of these questions have simple answers, and so virtually every online identity system doesn’t bother to deal with them).

    Even before this, though, there’s the issue of how to get to this point. How does the account get created?

    The mechanics are seemingly straightforward: verify that a person is who they say they are, then create an account for them, generate authentication tokens, and so forth.

    That first problem is the biggest one, though. Creating an account if you *assume* a person is who they say they are isn’t horribly difficult, even allowing for tricky things like “make sure the person picking the password is the person who requested the account” and so on — at least within a generally-acceptable margin of error. Performing identity verification before you even get that far is hard, though.

    The problem is, essentially, that any online identity system like this would get used for lots and lots of things. Each of these things has its own identity requirements, and more often than not, some kind of checks are carried out every time you interact with them. If you’re going to delegate identity verification as a whole to an online system (with the various promises of efficiency which result), then the online account needs to have absolutely watertight associated verified identities to begin with.

    Even if it doesn’t, and supposedly isn’t meant to (like online banking, for example), in a system like this I’d wager feature-creep would produce a situation where some things were far more reliant upon the veracity of the identity claims than the system was designed to support. And, of course, this was one of the criticisms of the ID Card scheme – it made identity theft easier because everything was delegated to that initial application; the more things that used ID cards, the more chance of the house of cards falling down.

    Perhaps there’s another way, though. Maybe you could have an account and *attach* verified identity claims to it instead. Rather than building a system which is inherently trusted, you build one which is inherently untrusted. In fact, the ‘account’ doesn’t even have to exist centrally in order for this to work — the central stuff could just maintain a reference of some kind to it.

    So by default I have an account and everything about it is essentially unverified, but it uses public-key auth. It has _strong_ authentication associated with it, but a _weak_ identity. This is basically the situation we’re in with 90% of PGP keys out there.
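    A minimal sketch of that "strong authentication, weak identity" starting point, with the account modelled as nothing more than a key fingerprint plus an (initially empty) list of verified claims. All names here are hypothetical, not any real system's API:

```python
import hashlib

def new_account(public_key_bytes):
    """Create an account that is just a public key: strong auth, weak identity."""
    return {
        # Strong: only the holder of the matching private key can authenticate.
        "key_fingerprint": hashlib.sha256(public_key_bytes).hexdigest(),
        # Weak: nothing about the real-world person is verified yet.
        "verified_claims": [],
    }

account = new_account(b"---- demo public key bytes ----")
assert account["verified_claims"] == []   # unverified by default, like most PGP keys
```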

    Services which I can use with this account will have identity strength requirements, just like they do offline. I can’t walk into a bank with no paperwork and expect to walk out with an account. The same thing applies here.

    So I jolly off to my local… er, not sure, actually… and hand over my passport and a recent household bill (which is bloody difficult to get hold of nowadays, incidentally), and they look me up on the electoral roll, and update records to show that on the 15th January 2011 I verified my name and address, signing the assertion with their key, and it gets attached to my account.
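    That dated, signed assertion might look roughly like this. HMAC stands in here for a real digital signature, and every name is illustrative:

```python
import hashlib
import hmac
import json

# Stand-in for the verifying body's signing key; in reality this would be
# an asymmetric private key, with anyone able to check via the public half.
VERIFIER_KEY = b"council-signing-key"

def make_assertion(key_fingerprint, claim, date):
    """The verifier signs a dated claim bound to the account's key."""
    body = json.dumps(
        {"key": key_fingerprint, "claim": claim, "date": date},
        sort_keys=True,
    )
    sig = hmac.new(VERIFIER_KEY, body.encode(), hashlib.sha256).hexdigest()
    return {"body": body, "sig": sig}

def verify_assertion(assertion):
    """Anyone trusting the verifier can check the claim was not tampered with."""
    expected = hmac.new(VERIFIER_KEY, assertion["body"].encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, assertion["sig"])

a = make_assertion("ab12...", "name-and-address-verified", "2011-01-15")
assert verify_assertion(a)
```

    The point is that the account itself stays untrusted; trust lives entirely in these attached, independently checkable assertions.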

    In theory such a system could work. In practice? I doubt it. That verification step is still a weak point (find somebody who does IDV and offer a suitable bribe…), and doing nefarious things gets easier if there’s only one set of checks to blag your way through instead of one at the bank, one at HMRC, one at the car hire place, …

    I do wonder how much of the “government stuff we want to do online” really needs strong identity, though. I’m not convinced there’s actually very much. I’m sure there’s plenty where the government would LIKE strong identity, but doesn’t actually get it now. Gosh, we need to make sure only British citizens can submit e-petitions? Goodness me, grow a pair.

    So there we go. Can of worms. In a nutshell. As it were.

    • Very well said: I like your analysis leading to a weak relationship with strong ones attached. That, in essence, is the foundation of the current Government Gateway, at least in its user-facing function. As I haven’t been able to find a single, simple summary anywhere so far of how the Gateway works – from the customer’s viewpoint – I’ll be writing one here as a future post, with some help from the original architecture documentation which I’ve kindly been provided with. It may be interesting – I suspect it’s not what the vast majority of people (quite rationally) believe it to be.

      Now up at https://paulclarke.com/honestlyreal/2011/02/how-the-government-gateway-works/

    • “In theory such a system could work. In practice? I doubt it.”

      I think it will work well enough, if a party doing IDV signs not your key alone, but a combination of your key and an authentication token.

      The approach you describe is already used by banks for establishing your identity, except they don’t sign your key afterwards. Clearly, for banking purposes, that approach is “good enough”. So if your bank signed off on (your-key,is-good-for-banking) rather than your key alone, then another bank can verify that you’ve got a sig for “is-good-for-banking” from a party they trust.

      For government stuff, it’s up to the government to decide what degree of proof of your identity is good enough for them to sign off on (your-key,is-good-for-government-stuff).

      And third parties can always decide what token – i.e. what “strength” of proof – they need.

      (Incidentally, in order to get an ID card in Germany, you need to prove who you are via something called a family book, which is a state-wide register of children and their parents. In Germany, then, your ID card would be government-strength proof of identity.)
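      The purpose-scoped signing described above might be sketched like this. Again, HMAC stands in for a real signature and the token names are made up:

```python
import hashlib
import hmac

# Stand-in for the bank's signing key; in reality an asymmetric key pair.
BANK_KEY = b"bank-signing-key"

def sign_for_purpose(key_fingerprint, token):
    """Sign the (key, token) pair rather than the key alone."""
    msg = f"{key_fingerprint}|{token}".encode()
    return hmac.new(BANK_KEY, msg, hashlib.sha256).hexdigest()

def check(key_fingerprint, token, sig):
    """A relying party checks the signature for the specific token it cares about."""
    return hmac.compare_digest(sign_for_purpose(key_fingerprint, token), sig)

sig = sign_for_purpose("ab12...", "is-good-for-banking")
assert check("ab12...", "is-good-for-banking", sig)         # valid for banking
assert not check("ab12...", "is-good-for-government", sig)  # useless elsewhere
```

      Because the signature covers the token, a compromise or badly run verification process for one purpose doesn’t leak trust into any other purpose, which is exactly the isolation argued for below.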

      • The issue isn’t the principles of it, but the day-to-day practice. People cut corners. Organisations as a whole cut corners more than individuals, in truth. There will always be rules about what is considered “sufficient” verification and those rules will be broken: because (much like with 3-D Secure on credit/debit cards) the people who specify the systems have no incentive to get it right, and no disincentive if they screw it up. In this context, the people who lose out if an aspect is done badly are the ordinary people.

        3-D Secure (Verified by Visa/MasterCard SecureCode) is actually a good example of this: the system is horribly flawed in many respects, but most seriously in terms of how it’s treated.

        If a fraudulent transaction is carried out and is confirmed via 3-D Secure, the assumption is that it must be legitimate, because it passed the 3-D Secure process, which is deemed to be foolproof, even though it clearly isn’t.

        The same problem arises with any identity system where delegation occurs: if the requirements of delegation are set too low (i.e., you accept “weaker” proof online than you would in person), then you risk undermining the whole system, and at a minimum people suffer as a result. This isn’t a difficult situation to envisage; something starts out with strong requirements, then it turns out there’s a problem getting enough places on board to perform the verification to that level, and rather than ditch the whole system, the department or institution responsible decides to save face by weakening it.

        So, that’s why I say that it’s fine in theory — there’s nothing *technically* wrong with it. The reality of it is that there’s more to a system like this than the technical factors: people, and especially organisations, are inevitably flawed.

        • The issue is actually the principles of it. Sorry for replying so late, I got distracted.

          Your unmodified example, the one that doesn’t include an authentication token, assumes that identity can be established absolutely.

          That’s a design principle, not a question of how the design is applied in practice. It’s also a design principle that fails to model reality accurately, as you point out.

          When I propose to include a token in banks’ signing of keys, it’s to weaken that principle from “identity can be established absolutely” to “identity can be established well enough for a particular purpose represented by token X”.

          The definition of what exactly “well enough” means is left up to whoever “owns” (i.e. signs) a particular token, and can then be enforced as rigorously as the owner requires.

          That altered security model has several advantages, but the main one in this context is that proofs of identity for various tokens are isolated from each other; that is, a weakness in the practice of establishing identity for one purpose does not affect proof of identity for any other purpose.

          Primarily that means the system as a whole is more resilient against attack. But it also means that tokens for which proof of identity has been enforced badly can be retired (and replaced) more easily, without invalidating your key completely.
