The Deep Truth Of IT Security

Adriana pointed me to a short post on Euan Semple’s blog which has led to quite an amusing bunfight:

A thought on IT security …. there’s not enough granularity in their paranoia.

Security is a tradeoff. Security professionals get it wrong by not considering the business case. Users get it wrong by asking for things which are not possible to secure and not listening to advice. Is more “granularity” the solution? Or is it just proper evaluation and open-mindedness on each side?

The comment was one I came up with in a conversation with others here in Boston on IT security and the problems that are caused with the broad brush approach that most organisations take. As with everything there are no doubt exceptions.

[T]he prevailing behaviour is to apply sweeping security measures that constrict low risk, legitimate activities, for the sake of the small number of high risk ones that grab headlines. Social computing IS about discussing things which is why we are. My concern is for those inside corporate firewalls who can’t take part in our conversation because their IT department has deployed filters that block my blog!

…etc; I see stuff like this quite often, and it’s a good thing to see; people and companies should take the matter of their IT security seriously, and such discussion is a good indication that people care.

Before I carry on, let me establish my credentials: I’ve been working in the field of IT security since 1988, and was hacking for two or three years before that. Since that time I’ve published papers, presented, taught, moderated USENET groups, broken world records in, done TV programs about, defined software interfaces for, run development teams in, built communities and implemented tools of, argued about, fought over, been misrepresented in the press regarding, won vengeance and generally lived large in the world of IT Security.

So I’ve seen a lot of shit; and there’s one thing which hardly anybody in the commercial security industry is ever going to tell you straight. So listen up, because this is it:

There Is No Such Thing As IT Security. There Is Only Policy. And A Lot Of *That* Is Bad Or Wrong Or Designed By Idiots, Or Pushed By Idiots With Product To Sell And/Or Who Want To Keep Their Jobs.

Most people will naturally focus on the inflammatory wording in the second half of that statement, so instead I shall focus on the first part; not only to build suspense, but also because it is far far far more philosophically important.

“Security” is a will-o’-the-wisp. It’s a meta-quality. You can take a Windows XP machine without firewalls or virus-protection, strap it to the internet backbone, and – if you choose – it will be perfectly secure. Yes, within a matter of minutes it will be overrun with viruses and worms, become a haven for the lowlife scum who run botnets, and probably crash (possibly terminally); but hey, if that’s what you want, and/or if you don’t mind being a sort of digital Typhoid Mary and a platform from which others may be attacked, then you will remain perfectly secure.

Yes, this may seem a perverse way of thinking, but it is the correct one.

You see: when you get hacked, you don’t really suffer a “security failure”; instead you may lose system integrity; you may suffer an exploitation which defeats access controls, bypassing or negating the software which enforces separation of privilege, leading to the execution of unauthorised code.

Your data may be copied, leading to a loss of secrecy or a loss of privacy. This can break your chain of trust. If they are unencrypted or weakly encrypted, files containing your credentials could be copied, permitting criminals to fraudulently identify themselves as you and to spoof authentication processes, thereby stealing from third-party vendors.[1]

Or your data may be destroyed, leading to a denial of service – your ability to work – although this problem can equally be caused by someone swamping your network – legitimately or otherwise – or by the other demons of fire, flood, disease, power/hardware failure, and so forth.

Notice: nowhere in the above do I refer to “security”.

So what is security? You know when you’ve got it, sure, but you lose it by losing something else entirely – access, secrecy, privacy, credentials, privileges (Edit:) and resources which should be yours alone to use or disburse…

In short: security is not any of the above. It’s somewhat intangible. Possibly even fictional. It doesn’t even obey ordinary arithmetic – you can add a firewall to a network, and that’s all you’ve done; but if someone breaks that firewall then they also break your security, so surely they subtract from you something that you never explicitly added?

So how does that work? I’ll explain a little later.

Getting back to the topic: what does “security” mean more practically? In the real world? From the perspective of 20 years in the trade?

Well, it means “policy”. You may not immediately see it that way, but bear with me.

If you choose to wire the aforementioned unprotected XP machine to the internet and call it “secure”, you can do that. Alternatively if you choose to install antivirus software on a laptop and call anything that gets past that a “security failure” then you have implicitly created a “security policy”.
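To make that concrete: here is a minimal sketch, in Python, of the point that a “security failure” exists only relative to some policy. All of the events and policy names are hypothetical.

    # "Security" is not a property of the machine; it is a judgement made
    # relative to a policy. Same events, different policies, different verdicts.

    EVENTS = [
        {"kind": "inbound_connection", "port": 22},
        {"kind": "file_read", "path": "/etc/passwd"},
        {"kind": "process_started", "binary": "botnet_client"},
    ]

    permissive_policy = {"forbidden_kinds": set()}               # nothing is forbidden
    corporate_policy = {"forbidden_kinds": {"process_started"}}  # unauthorised code is

    def is_security_failure(event, policy):
        """An event is a 'security failure' only if the policy says it is."""
        return event["kind"] in policy["forbidden_kinds"]

    for event in EVENTS:
        print(event["kind"],
              "| permissive:", is_security_failure(event, permissive_policy),
              "| corporate:", is_security_failure(event, corporate_policy))

The unprotected XP box is “perfectly secure” under the first policy and riddled with failures under the second; nothing about the machine changed, only the policy.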

People create security policies all the time. A lot of them tend to come with the culture:

I bought that laptop / that DSL line / that webhosting service, therefore anyone who uses it without my permission is a bad person.

…and I wouldn’t disagree.

This also explains the firewall paradox above: the security you lose (via the broken firewall) is the confidence you gained from the implicit expectation of control it provided to you. That the firewall was bypassed is an insult to your policy (implicit or explicit) and therefore an insult to your ability to project control over your own resources, and it is therefore an insult to you.

Thus the scope of your security policy – we could almost call this your “morality” – and the extent of your sacrifice of effort towards its adherence (firewalls, latency, hardware cost, tithing) defines how much insult you take in others’ (hackers’) wild behaviour towards your IT resources.

So if you choose to be more permissive – or if you live a simple zen-like security lifestyle which is hard to insult – then in either case you will be “more secure”; the former because you are too open-minded to be insulted by anything (ie: to consider anything a breach of security), the latter because you have nothing much to insult (ie: nothing much to attack).

Ah, but this is where the fun begins; wherever you find the permissive or the simple, you also find the authoritarian, and (possibly worse) you find the middle-class wannabes who think they “get it” and who set out to define security, and therefore start by writing policy. Lots of policy, since they perceive “policy = security” and therefore “more policy = more security”. And this is a bad thing, because once you get amateurs drafting policy the whole thing goes rapidly toxic, as any Whitehall civil servant will tell you.

So what happens? Amateurs equate security with fine-grained control, and sometimes that’s good and correct but most of the time it’s overzealous, tasteless and wrong, like painting your bedroom black because you think black is ‘cool’.[2]

For instance: if you have a company of 10,000 employees, it is not good security practice to try to maintain a list of which employees in which positions are permitted to access which particular websites. Such a rule is neither permissive nor simple, and therefore it will almost certainly be burdensome, expensive to maintain, hard to scale, and dumb. It may sound really impressive to say that you’re going to classify and qualify and constrain every individual employee’s access to the near-infinite resource of the Internet, but really you’re creating an N-squared or N×M scaling problem, and maybe N=10,000 but the value of M is really close to infinity.

That’s a bad thing.
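If you want the back-of-the-envelope version (the numbers are purely illustrative):

    # Back-of-the-envelope arithmetic for the N x M problem.

    N = 10_000       # employees
    M = 1_000_000    # websites you might bother to classify (really: unbounded)

    fine_grained_rules = N * M   # one allow/deny decision per employee per site
    broad_brush_rules = 1        # one filter, identical for everybody

    print(f"fine-grained: {fine_grained_rules:,} decisions to make and maintain")
    print(f"broad brush:  {broad_brush_rules} decision")

Every one of those ten billion cells is a decision which somebody has to make, review and keep current.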

So this is why I tell people to paint their security policy with a broad brush; stick content-filters on your Internet gateways to block access to porn, if that’s your thing, but make your controls equally applicable to everybody. That way you don’t have the cost of having to deploy an authentication solution – a potential single point of failure – for anyone who merely wants to access the interweb.
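As a sketch of what “equally applicable to everybody” might look like – Python standing in for whatever your gateway actually runs, and the blocklist entirely hypothetical:

    # A broad-brush gateway check: one blocklist, applied identically to every
    # user. There is deliberately no "who is asking?" step.

    BLOCKED_DOMAINS = {"porn.example", "litigant.example"}

    def allow_request(host: str) -> bool:
        """Deny a blocked domain and any of its subdomains; allow the rest."""
        parts = host.lower().split(".")
        return not any(".".join(parts[i:]) in BLOCKED_DOMAINS
                       for i in range(len(parts)))

    assert allow_request("news.example.org")
    assert not allow_request("www.porn.example")

Because the check never asks who the user is, there is no authentication system sitting in the request path waiting to become that single point of failure.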

Ignore those zealots who would micromanage your access controls and privileges – and if they complain, put them personally in charge of firewall maintenance for a couple of months. Don’t give them any budget. That generally puts life into perspective for anyone.

(Edit) Ignore vendors who would try to sell you tools to implement the above-cited micromanagement. They make money by pandering to the stupid.

To be practically secure: be permissive (“all employees can access the web…”) – be simple (“but we log everything that goes on, we block porn sites and the websites of companies who are suing us, and we will fire you if you get us into legal trouble…”) – and be explicit about the extent of your policy (“and these are our rules and are the totality of our rules”). Write an Acceptable Use Policy and make it something that people will bother adhering to:

Acceptable Use Policy — (abbreviation: AUP)
A formal set of rules that governs how a network may be used. For example, the original NSFnet Acceptable Use Policy forbade non-research use by commercial organizations. AUPs sometimes restrict the type of material that can be made publicly available; many AUPs ban the transmission of pornographic material.
The enforcement of AUPs has historically been very uneven. This was true of the NSFnet AUP: its limitations on commercial activity were so widely ignored that it was finally abandoned in 1994, enabling the development of today’s commercial Internet. See also Netiquette, Terms of Service.

…and don’t make life difficult for yourself; after all, you’ll be the one to bear the cost, and unlike other forms of morality there is no IT Security Heaven in which you’ll get your just rewards.

You’ll just have an easier life down here on Earth.
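For what it’s worth, the whole of such a policy fits in a dozen lines; a hypothetical sketch of the permissive/simple/explicit rules above:

    # Permissive, simple, explicit -- the entire policy as one small function.
    import logging

    logging.basicConfig(level=logging.INFO)

    BLOCKED = {"porn.example", "litigant.example"}   # simple: one short list

    def may_access(user: str, site: str) -> bool:
        logging.info("%s -> %s", user, site)         # simple: we log everything
        if site in BLOCKED:                          # simple: we block the list
            return False
        return True                                  # permissive: default allow
        # explicit: the lines above are the totality of the rules

    print(may_access("alice", "news.example.org"))   # True
    print(may_access("alice", "porn.example"))       # False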

ps: I’ll post a rant about vendors, some other time. Update: See the update in red, above.


[1] This latter is often misrepresented in the press as “identity theft”; this is bogus. Nobody ever steals your identity. People replicate it to commit fraud and in the process tarnish your name. You’re not the victim – that honour goes to the person who sells goods to the fraudster who poses as you. Your lot is that of “collateral damage” and yes it is a bloody nuisance, but that still does not make you the victim. That the press make you out to be so is possibly one of the stupider misconceptions of security, since it aids and abets profound idiocies like “Identity Cards” which do nothing to address the real problem.

[2] I speak as an advocate of Role Based Access Control, but that’s typically for a handful of users to access a handful of privileged commands – not a matter of bookkeeping which each and every user can access which file in /usr/bin.
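A hypothetical sketch of the scale I mean – a handful of roles mapped onto a handful of privileged commands:

    # RBAC at sane scale: permission flows through the role, never directly
    # to the user. Roles, users and commands are all hypothetical.

    ROLES = {
        "dba":      {"backup_db", "restore_db"},
        "operator": {"reboot_host", "rotate_logs"},
    }

    USER_ROLES = {"alice": "dba", "bob": "operator"}

    def may_run(user: str, command: str) -> bool:
        role = USER_ROLES.get(user)
        return role is not None and command in ROLES.get(role, set())

    assert may_run("alice", "backup_db")
    assert not may_run("bob", "restore_db")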

10 thoughts on “The Deep Truth Of IT Security”

  1. Pingback: Jackie Danicki » The deep truth of so-called IT security

  2. Stephen Usher

    To paraphrase… “Security is an illusion. Good security doubly so.”

    I look at all this in terms of energy barriers – the policy, that is – and the potential energy of the attackers and the users. If I can make a security policy where the energy barrier between the inside users and the outside is low, but the energy barrier for the potential attackers is high enough that it’s too much bother for all but the most persistent and knowledgeable, then I’ve created the optimal conditions. For the attackers who have enough “energy,” no barrier (other than completely unplugging the cable) will be enough to stop them, so why should the users be constrained by that exceptional condition?

    It’s all about risk management, as is life. Life is a compromise so why should computer protection be any different?

    And I agree, some policies backfire and actually lower the attackers’ energy barrier, e.g. password expiry. The idea behind this is that if a password is snarfed then the threat is contained, because the password is only valid for a limited time. (This forgets that if a password is snarfed then it’s highly likely that, however short the time the password remains valid, the system will have been compromised and some OS flaw *WILL* have given the attacker full control.) If you make passwords expire then users will either (a) pick stupidly simple passwords, or, if this is prevented in software, (b) write them down even when they are told not to, because it’s too high an energy barrier to remember them. So, in the end, something which gave the illusion of “more security” actually made the system less secure.

  3. Stephen Usher

    Why do you say that? Surely kids may mitigate the risk of being alone later on in life, and hence are part of a risk management strategy.

    (O.K. It’s also well known that people are very poor at managing risk and determining the proper levels of risk, but hey.)

  4. Chris

    There can be two classes of victims.

    Perhaps there are two underlying offenses — obtaining PII by fraud, and the later fraud by impersonation.

  5. Jackie Danicki

    Actually, I misquoted Weinberger. He said:

    “if avoiding risk is your highest goal, you’ll never get married and you’ll certainly never have children. Loving your children increases your exposure dramatically!”

  6. Pingback: dropsafe : Understanding Your Personal Information’s Value = The End of “Nothing To Hide”

  7. Pingback: dropsafe : Jeremy Bentham, of his own volition, lives forever in a panopticon – #HHLDN #UCL

  8. Pingback: Andy Smith of the #CabinetOffice is a Epic Fucking #Security Hero – #socialmedia #cyberbullying #dailyfail – dropsafe
