(BYOI) Bring Your Own Identity : Lie to me


You see that, right? You see that? They always look at me like that. I mean... do I look like a criminal to you? Don’t answer that.

-Dr. (standing in the bank, talking to some blonde woman) – Lie to Me (TV Series)

Let’s face it, we have all lied at work. At least once.



Research has shown that Americans average almost two lies per day, and other studies suggest that the distribution of lies follows Pareto’s principle: 20% of people tell 80% of the lies, and the other 80% of people account for the remaining 20% of lies.

Quite impressive, isn’t it? Well, I’m Italian and I live in Italy, so do the math. Oh, I see your smile; you just nodded at that very last sentence, didn’t you? So how is it possible that most of the (in)famous data breaches happen outside of Italy?

Maybe we are much better than “you” at keeping our breaches secret? Maybe there are no big companies in Italy... ehm, Fiat-Chrysler anyone? Maybe there’s nothing here that really matters... ehm, Finmeccanica (yes, we do build some of your weapons) anyone?

No, I’m not going to mount a defense of Italian culture versus other cultures, nor to demonstrate that Italians are better at security than others. I simply want to point out a simple fact:

What average type of liar do you have to face in your ecosystem? Because yes, culture and the way you were educated may play a role here.

Now let’s step back and clarify the context.

  • Fact: we are moving toward a world where the traditional identity models, enterprise and customer identity and access management solutions, are going to collide. That’s inevitable.
  • Fact: the collision is inevitable due to many factors. We’re moving fast toward authentication models that leverage federation with other systems to provide continuous authentication and identification. The concept of real-time continuous identification is fascinating: being recognizable everywhere, anytime, to prevent identity theft and to provide a frictionless experience. The collision is also inevitable because the advent of connected things gives us multiple ways to access our data, from the private realm (customer) to our working realm (enterprise) and vice versa.
  • Fact: the collision will produce an exponential growth of identities and attributes that can be managed only by switching from a traditional model to what we call identity relationship management. What this means, and how we describe a relationship, is not in the scope of this post, but please do see the work Kantara is doing here.
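To make the third fact a little more concrete, here is a minimal, entirely hypothetical sketch of what an identity relationship model might look like as a data structure: a persona aggregating identities across realms, linked by labeled relationships. The class names, realms, and relation labels are my own invention for illustration, not anything defined by Kantara.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Identity:
    """One of the many identities a persona aggregates."""
    realm: str      # e.g. "enterprise", "customer", "social"
    subject: str    # identifier within that realm

@dataclass
class Persona:
    """A persona: an aggregation of identities plus the
    relationships that link those identities to each other."""
    name: str
    identities: set = field(default_factory=set)
    # (identity_a, relation_label, identity_b) triples
    relationships: list = field(default_factory=list)

    def link(self, a: Identity, relation: str, b: Identity) -> None:
        """Record a labeled relationship between two identities."""
        self.identities.update({a, b})
        self.relationships.append((a, relation, b))

# Toy example: a social identity federated into an enterprise account
me = Persona("andrea")
work = Identity("enterprise", "a.rossi@corp.example")
social = Identity("social", "twitter:@arossi")
me.link(social, "federates-into", work)
```

Note that the model only *describes* how identities relate; nothing in it enforces security, which is exactly the point made further below.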

So the context is that I am an aggregation of multiple identities, each linked to other identities in a complex web of relationships. This complex model has indeed many virtues, but what is its relation to the fact that we lie?

As said before, let’s put things in context here:

  • IRM (Identity Relationship Management) is directly linked to the idea of continuous authentication/identification.
  • Continuous AAI (Authentication/Authorization/Identification) grants a higher level of security as long as I am able to verify that a certain persona is still who it claims to be.
  • To identify a persona correctly, I have to be precise enough in identifying every way, system, and thing that this persona may use to actually access my data.
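The second bullet can be sketched as a loop that never stops evaluating. Here is a toy illustration of continuous AAI, assuming a risk score that is re-computed from observed signals on every access instead of once at login; the signal names, weights, and threshold are all invented for the example.

```python
# Hypothetical signal weights: negative values lower risk,
# positive values raise it. Purely illustrative numbers.
SIGNAL_WEIGHTS = {
    "known_device": -0.2,
    "new_device": 0.4,
    "usual_location": -0.1,
    "impossible_travel": 0.8,
    "odd_hours_access": 0.3,
}

def risk_score(signals):
    """Accumulate the weights of all observed signals, clamped to [0, 1]."""
    score = sum(SIGNAL_WEIGHTS.get(s, 0.0) for s in signals)
    return max(0.0, min(1.0, score))

def authorize(signals, threshold=0.5):
    """Allow the action while the persona still looks like itself;
    otherwise demand a fresh, stronger identification step."""
    return "allow" if risk_score(signals) < threshold else "step-up-auth"

# The same persona, evaluated continuously as its signals change
print(authorize(["known_device", "usual_location"]))   # allow
print(authorize(["new_device", "impossible_travel"]))  # step-up-auth
```

The design choice worth noticing is that identification is never “done”: every access re-asks the question of whether this is still the same persona.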

If those three are perfectly executed, then companies should be safe, but...

Relationships between identities do not enforce any security; they simply describe the way our various identities are related to each other, offering a better model for designing our solutions. So there is no real security involved beyond the technical level required for the eventual solution to work. It is up to the persona to guarantee the integrity of the various identities that compose the relationships.

Continuous AAI, like IRM, is a wonderful technical solution: it guarantees that any persona may be correctly identified, along with what this persona is doing and which of its many identities are used, how and when (e.g., a social login used to authenticate/identify a company user). But again, this relies on my confidence that all the pieces I am using to continuously evaluate the persona have not been breached. It is indeed harder than before to be breached, but we are still stuck in the context of:

the user notifies me of a breach, or I suspect/identify a breach based on abnormal behavior.
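The second path, suspecting a breach from abnormal behavior, can be illustrated with a deliberately naive sketch: baseline a persona’s own history and flag anything that strays too far from it. The three-sigma rule and the login-hour feature are my assumptions for the example; real systems use far richer behavioral models.

```python
import statistics

def is_abnormal(history_hours, new_hour, sigmas=3.0):
    """Flag a login hour more than `sigmas` standard deviations
    away from this persona's own historical mean."""
    mean = statistics.mean(history_hours)
    stdev = statistics.pstdev(history_hours) or 1.0  # guard against zero spread
    return abs(new_hour - mean) > sigmas * stdev

history = [9, 9, 10, 8, 9, 10, 9]  # typical office-hours logins
print(is_abnormal(history, 9))   # False: within the usual pattern
print(is_abnormal(history, 3))   # True: a 3 a.m. login stands out
```

Even this toy shows the limit the post is driving at: the detector only knows what the persona’s behavior *looked like*, not whether the persona has been telling the truth.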

Persona identification is tied to its various identities and the way those are accessed, used, and protected. No matter how good we are at improving the underlying identification process, we still rely heavily on the integrity of the persona to tell us the truth... oh wait!

There are, at a macro level, two types of liars:

The repetitive liars: those who lie constantly and continuously, and who do not perceive lies as exceptions but as a normal way of interacting with others.

The exception liars: those who lie when under pressure, or in the case of an unexpected event that makes them uncomfortable.

Our security models are typically based on the latter: we expect our users to lie to us when a breach occurs, but we usually pay no attention to the first type of liar.

Why? Well, to paraphrase George Orwell:

If you want to keep a secret, you must also hide it from yourself.



We are all the first type of liar; we all lie to ourselves to the point that we convince ourselves we do not lie. Don’t think of the negative meaning you usually attach to lies; think more of the so-called “white lies”. We all learned that sometimes you may say something that is not entirely true to someone else if it is for their own good.

We were all taught that this behavior is acceptable as long as we do not go too far in telling “white lies”. But there isn’t a magic number; this “limit” derives from what our parents, and the society we live in, call “too far”.

Now this means that, as in many human behaviors, we act based on our perception of what is right and what is wrong. So this “magic number” of acceptable lies differs from individual to individual (e.g., Americans average two lies per day). This also means that what is acceptable behavior to me may not be perceived the same way by another user.

Now think about this for a moment: who are you? What is your role in the company you work for?

We often urge CxOs to enable users rather than enforce security on them.

But what does enabling a user really mean in identity terms? If I expect a user to lie constantly, and I am a rare liar myself, I would be tempted to isolate this user and enforce a stricter security policy on him. The result would probably be an employee who perceives the security policies as a constraint rather than a protection.

The same applies if I am a systematic liar and I am the head of security: I will expect everyone to behave as I do, and I could fail to recognize potential risks in my organization because my “threshold” is too high.

Not only that: if we analyze the frequent-liar typology more closely, we quickly find that these are usually the most creative and original thinkers, who socialize better than others and adapt more rapidly than their “rare liar” counterparts.

Yes, we have just started a race between someone who, at this point, wants to escape our policies and someone who tries to restrict this behavior.

One would be tempted to say that there is no solution with a frequent liar, but in fact it is exactly the opposite. A frequent liar is more predictable than a rare liar, and probably more open to collaboration, since he does not perceive his own behavior as negatively as the “rare liar” does.

This user is the key user, since he will probably spread the word around the company. The only potential issue is that he is still a “frequent liar”, which means he may give a wrong interpretation of a policy just to avoid admitting that the very same policy is not entirely clear to him.

Now think again about Italians and the fact that we are (at least in the general opinion outside of Italy) frequent liars, lazy workers, and other nice things. Think for a moment about the way we do things and correlate it to what you have just read: we expect others to be like us; we expect them to lie, or to be frequent liars; we accept the “half truth” of things as acceptable social behavior; we expect that under pressure you will lie to us. And we know that we do not really need to know you are lying; we need to know whether you are the kind of person who will do that only under pressure or constantly, because in the latter case we know we have to learn, adapt, and react in a different way.

Am I saying that Italians do it better? Nah, I’m not Madonna, and anyway I’m not that egocentric (even if my wife would not agree with me on this point). What I am saying is that if we go down the road of disciplining the way we treat identities, to the level that we “know” everything in real time, we must be really sure of what we know, or at least of what we think we know.

And here comes the second type of liar, the “rare liar”. This is quite an interesting use case: this individual is enabled, applies the security policies diligently, and has a higher degree of morality and integrity. Still, he or she may, under certain conditions, lie to us, slowing down the entire process of reacting to a breach.

So the question in this case is: how do I handle this type of persona? Well, let me play a little game with you. Let’s say you are that type of persona: you are pretty sure you do not share company data around, and you apply the basic principles of security, so this case should not apply to you.

So questions are:

  • have you ever read a sensitive email or communication on a plane, on a train, or in a public place?
  • have you ever had a sensitive conversation while in public?
  • have you ever used a company device to access your personal data?
  • have you ever charged a company device at a public USB charger?

I could continue, but you have already lied to me... well, actually, to yourself... so now you know how to handle the rare liar.