honestlyreal


Who are you again?

This online identity stuff is very difficult—as I’ve written here before: much harder to truly grasp than it should be, in a peculiar way. I think that one of the reasons is that there are really two, logically separate things going on. Unless one puts a bit of mental legwork into understanding them—well, almost philosophically—all that follows in terms of technical solutions and so on can be irrelevant, at best.

So, those two parts: 1. how do you “prove” you are who you say you are? and 2. (the bit that’s perhaps harder to encapsulate) what is the relationship model that’s constructed when such a “proof” transaction takes place?

Let me try it another way: (1) what are you trying to prove and how do you go about that? and (2) what are the consequences of you having done that “proving”?

I hope to make some progress in illustrating why they’re quite different, but both very, very important. The first of those two parts—the “what and how you prove” bit—is the subject of this post. Probably because it’s the easier of the two. Though still complicated.

You never really prove anything, of course. If we are going to get into the business of cutting people open to extract a bit of DNA from their very bones and analysing it against some sort of uber-register of genome sequences…yeah, yeah, yeah. But we’re not. So stop being silly. (And they might have implanted somebody else’s bones, anyway. Ok, that’s silly. Or is it? Let’s move on. You see the point: every obstacle is just another challenge.)

What we do instead is use a number of arbitrary proxies for identity: tokens that either alone or in combination give a certain sense of assurance that their presenter is who they claim to be. The passport is a common (and relatively strong) example. There’s the photo ID (with a government-issued driving licence being rather more trusted than a cheaply laminated snooker club membership card). There’s the infamous utility bill, which has the benefit of also fixing the presenter to a physical location of residence. You get the picture. Sometimes the detail is checked against something else, sometimes it’s recorded, and sometimes it’s not checked in any meaningful way, but the request itself is enough to dissuade naughtiness.

Because for most of the transactions one carries out with government (central, local, police, whatever), checks like this are pretty damn important. (At least they are perceived to be, anyway, certainly in comparison to some private sector transactions. Compare the following headlines: “x% of cardholder-not-present credit card transactions are fraudulent, costing £Ybn per year” with “x% of online benefits claims are fraudulent, costing £Ybn per year”. Which one will have the nation frothing that Something Must Be Done? But that’s for another post…)

The guys at the gate of Caterham tip ask for a utility bill to confirm that you’re allowed to dump there. (Well, only when it’s busy, it seems.) To them, a location is the only important fact that’s been asserted—who I am, or indeed whether that utility bill matches anything else about me or my car, are unimportant. At the supermarket checkout, the young-looking booze buyer will only be troubled for something featuring a date of birth, and so on.

The tokens we use to give that degree of proof don’t have to be physical bits of paper, of course. We can memorise PINs, or be asked for known facts about our previous transactions which only we’d be likely to know the answers to. We can set up “shared secrets” in advance so that only we will know the answer when challenged by our remote interlocutor.
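
To make the shared-secret idea a touch more concrete, here’s a minimal sketch (my own construction, invented purely for illustration, not anything from a real scheme): the verifier issues a random challenge, the claimant answers with an HMAC over it using the pre-agreed secret, and the verifier checks the answer without the secret itself ever crossing the wire. Even here, nothing is proved; the verifier only gains some assurance that whoever answered knows the secret.

```python
# Illustrative sketch only: a bare-bones challenge-response built on a
# pre-agreed shared secret. All names and details here are hypothetical.
import hashlib
import hmac
import secrets

SHARED_SECRET = b"agreed-in-advance"  # hypothetical; set up out of band


def issue_challenge() -> bytes:
    """The verifier sends a random, single-use challenge."""
    return secrets.token_bytes(16)


def respond(secret: bytes, challenge: bytes) -> bytes:
    """The claimant demonstrates knowledge of the secret without revealing it."""
    return hmac.new(secret, challenge, hashlib.sha256).digest()


def verify(secret: bytes, challenge: bytes, response: bytes) -> bool:
    """The verifier recomputes the expected answer and compares in constant time."""
    expected = hmac.new(secret, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)


challenge = issue_challenge()
answer = respond(SHARED_SECRET, challenge)
print(verify(SHARED_SECRET, challenge, answer))  # True: assurance, not proof
```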

We can have combinations of things used together. To see my bank statements online I now have to put my bank card into a reader the bank have sent me, pass a challenge, and then enter the result online. Sure, if you have my card and my reader, know my PIN, and at the same time can open a session of my online banking, you are me, at least as far as my bank is concerned. But that’s a lot of hardware and effort, and reasonably proportionate to the stakes involved, I’d say. We talk of “something you have and something you know” as a basic type of multi-factor authentication, or “something you have, something you know and something you are” if we add in a biometric component.
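
And to put a rough shape on the “factors” idea, here’s a toy sketch: the three factor categories come from the paragraph above, but the tasks, thresholds, and function names are entirely invented for illustration. The point it tries to capture is the proportionality one: each task sets its own bar for how much assurance it asks for.

```python
# Toy sketch: counting distinct authentication factors against a per-task bar.
# The factor categories echo the post; everything else is made up to illustrate.
FACTOR_TYPES = {"something_you_have", "something_you_know", "something_you_are"}

# Hypothetical: how many distinct factor types each task might reasonably demand.
REQUIRED_FACTORS = {
    "dump_rubbish_at_tip": 1,     # a utility bill on its own will do
    "buy_alcohol": 1,             # anything carrying a date of birth
    "view_bank_statements": 2,    # card-and-reader plus PIN
}


def assured_enough(task: str, presented: set[str]) -> bool:
    """Compare the distinct factor types presented against the task's bar.
    No 'proof' here, just a level of assurance judged proportionate to the task."""
    distinct = len(presented & FACTOR_TYPES)
    return distinct >= REQUIRED_FACTORS.get(task, len(FACTOR_TYPES))


print(assured_enough("view_bank_statements",
                     {"something_you_have", "something_you_know"}))  # True
print(assured_enough("view_bank_statements", {"something_you_know"}))  # False
```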

You see the point? There isn’t really any proving going on. Just an exchange of information that gives a certain level of assurance, upon which trust can then be built. Sometimes it’s done well. And sometimes it’s not. Sometimes the requests for “proof” information are proportionate to the task being undertaken. And sometimes they’re not. But the request/risk relationship is likely to be quite specific to the task being attempted.

You’ll notice that I freely used offline examples above, when normally I bang on about how hard all this is in the online world. Well, the concepts are the same. It’s just that there are some characteristics of online channels that tilt the tables of risk. The lack of a face-to-face element removes some of the visual cues we might use to strengthen trust in a claimed identity. But this applies to the phone as well (how many times have I assumed the guise of “Mrs-C-with-a-cold” to try and sort out a minor squabble with a utility company?).

No, what makes things really very different in the online channel are those two old favourites: accessibility and recordability. The friction of having to find a benefits office, queue up, and try it on with the clerk by wearing a false moustache all disappears. You can be fast, anonymous and massively multi-tasked, using tools to try thousands of entry points and potential tokens simultaneously.

And what you do undertake, successfully or unsuccessfully, creates a record—leading to all sorts of other consequences—something that doesn’t happen when a guy in a fluorescent jacket glances at your water bill. Nobody writes anything down in lots of offline transactions—that’s important. Or captures and indexes it, for example, on video. (The indexing bit matters, by the way…but that’s taking us into the next area: the Nature of the Relationship.)

Oh, and I fear there’s one other powerful reason why this is so challenging for those who “think digitally”: a digital relationship is generally conceived as one of certainty (the bits match the requirement, ergo the door is unlocked), whereas everything above is an assembly of probabilities, seeing people less as people and more as a collection of analogue risks, in a context where “good intent” and “assurance” are just shades of grey. No wonder we experience some cognitive dissonance in this area.

If you’re now drowning in a sea of uncertainty and looking lovingly back at that idea of sawing people open and extracting an inarguable(?) DNA sequence—congratulations. This is a highly normal response. Rushing back to a “unique identifier” to solve everything is pretty common. Engadget managed to do that neatly in their headline yesterday on the latest moves in US federal identity assurance—even though the source material talks about something rather different—a distributed identity framework. I’ll cover this, and the fallacy of the “unique ID” as a solution, in the next post: this dark business of the relationship that’s created as a result of digital transactions.

I might need my Greek hero and his friendly chelonian to help with that one. This stuff is not easy.

But what helps me sometimes, when thinking about this topic, is that this is a game you can play at home. Sort of. Every time you exchange anything about you (whether that involves your facial features, your money, or information about you) with anyone, anyone at all, online or offline, think about what’s actually being exchanged, why, and what the consequences could be. Try withholding everything except what turns out to be absolutely essential. Lie, subvert, play (within reason). It’s going to be useful to hone this awareness and these skills, I suspect.

Now read on…