Category Archives: * key postings

FLASH: Hansard: Minister admits: “There is no offence of cybercrime in law” /ht @gwire #cybersecurity

Cybercrime: 13 Feb 2013: Hansard Written Answers and Statements – TheyWorkForYou.

Parliamentary Under-Secretary of State for Crime and Security:


Quote:

The Government’s Cyber Security Strategy made it clear that there are crimes which only exist in the digital world, in particular those that target the integrity of computer networks and online systems. However, the internet is also used to commit crimes such as theft and fraud, often on an industrial scale.

The internet has provided new opportunities for those who seek to exploit children and the vulnerable.

There is no offence of cybercrime in law. Offences involving illegal access to computer systems may be prosecuted under the Computer Misuse Act 1990, but many other offences committed online would be prosecuted under legislation dealing with the substantive offence, such as fraud.

The walls are starting to crack. Massive hat-tip to @gwire

Eventually a tech reporter notices the fracas and chimes in to tell us the whole Infosec community is a bunch of jerks

In a nutshell, this keeps happening:

Silent Circle’s an interesting case, since it’s gotten some gentle criticism lately from a variety of folks — well, mostly Nadim Kobeissi — for being partially closed-source and for having received no real security audits. Nadim’s typical, understated critique goes like this:

And is usually followed by the whole world telling Nadim he’s being a jerk. Eventually a tech reporter notices the fracas and chimes in to tell us the whole Infosec community is a bunch of jerks:

And the cycle repeats, without anyone having actually learned much at all. (Really, it’s enough to make you think Twitter isn’t the right place to get your Infosec news.)

Yep. And we need more, diverse and less judgemental reporters.

Update: Incidentally, the Tweet that Quinn took exception to is this:

Quite how this is juvenile, or why we should do without charm and humour when on Twitter and amongst our own kind, eludes me. Perhaps infosec people are meant to be serious about everything?

Massive HT to @matthew_d_green and @marshray

Regrettably @Mat Honan is Entirely Wrong about “Killing Passwords” /cc @Wired

I’ve read Mat’s article and I know where he’s coming from.

Mat’s article does make some sense – and his story is near tragic:

This summer, hackers destroyed my entire digital life in the span of an hour. My Apple, Twitter, and Gmail passwords were all robust—seven, 10, and 19 characters, respectively, all alphanumeric, some with symbols thrown in as well—but the three accounts were linked, so once the hackers had conned their way into one, they had them all. They really just wanted my Twitter handle: @mat. As a three-letter username, it’s considered prestigious. And to delay me from getting it back, they used my Apple account to wipe every one of my devices, my iPhone and iPad and MacBook, deleting all my messages and documents and every picture I’d ever taken of my 18-month-old daughter.

…but he suggests a course of action – disposing of password security – borne out of fear rather than reason; and unfortunately passwords are architecturally very sound.

I’ll recap some of those architectural principles in a moment.

But look at the arguments that Mat makes about passwords:

Requiring you to remember a 256-character hexadecimal password might keep your data safe, but you’re no more likely to get into your account than anyone else. Better security is easy if you’re willing to greatly inconvenience users, but that’s not a workable compromise.

This risk is solvable using password management software. Won’t work for 100% of situations but can be made to work for better than 95% of them.

He then makes a weird argument for privacy from the following thought experiment.

Imagine a miracle safe for your bedroom: It doesn’t need a key or a password. That’s because security techs are in the room, watching it 24/7, and they unlock the safe whenever they see that it’s you. Not exactly ideal. Without privacy [from the security techs] we could have perfect security, but no one would accept a system like that.

Well, actually, it is ideal, and also viable so long as you get away from his strawman argument; I discuss the desirability of being able to demonstrate who you are rather than use identity proxies (such as driving licenses, passwords or digital certificates) in Hankering For A World Without “Identity” or “Federation”.

But what he’s trying to say is that having systems somehow be smart enough to know it’s you is not really acceptable from a privacy perspective, and I can agree with that for some perspectives in authentication.

There’s a long diatribe slinging mud at passwords in general – mentioning some upstart software like John the Ripper in the process :-) – and he pads out the article with stories of people who have suffered.

Then there’s a sidebar called How to Survive the Password Apocalypse which is actually full of really good advice; it’s the sort of stuff that everyone should do even if it contains suggestions like “give bogus answers to security questions” – a topic that British MPs and Tabloids are apt to get exercised about.

Then there’s a complaint that “passwords are hard to remember”, and then there is cybermafia padding and “How One Guy Had His Google 2-Step Authentication Broken”, and then we get down to:

The age of the password has come to an end; we just haven’t realized it yet. And no one has figured out what will take its place. What we can say for sure is this: Access to our data can no longer hinge on secrets—a string of characters, 10 strings of characters, the answers to 50 questions—that only we’re supposed to know. The Internet doesn’t do secrets. Everyone is a few clicks away from knowing everything.

Instead, our new system will need to hinge on who we are and what we do: where we go and when, what we have with us, how we act when we’re there. And each vital account will need to cue off many such pieces of information—not just two, and definitely not just one.

No. Sorry but no. The problem of password security is eminently solvable:

  • don’t let users choose guessable passwords
  • encourage/force users to use password management software
  • protect the hashes on the backend with something decent ie: bcrypt()
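The third bullet can be sketched concretely; the post’s first choice is bcrypt(), but stdlib PBKDF2 (an alternative also recommended elsewhere in these postings) shows the same salt-plus-slow-hash shape. The iteration count here is an illustrative assumption, not a figure from the post:

```python
import hashlib
import hmac
import os

def hash_password(password: str, iterations: int = 600_000) -> tuple[bytes, bytes, int]:
    """Derive a slow hash; store (salt, digest, iterations) per user."""
    # A per-user random salt defeats precomputed-table attacks; the
    # iteration count is what makes each attacker guess expensive.
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, digest, iterations

def verify_password(password: str, salt: bytes, digest: bytes, iterations: int) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    # Constant-time comparison avoids leaking how many bytes matched.
    return hmac.compare_digest(candidate, digest)
```

The shape is the point: a secret the user knows, stretched by a deliberately expensive function on the server side, with the cost parameter stored alongside the hash so it can be raised later.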

…and this needs to happen because the benefits of passwords as a technology are huge:

  1. passwords are easy to deploy
  2. passwords are easy to manage
  3. passwords don’t require identity linkage between silos – so your Google username can be different from your Skype username, can be different from your SecretFetishPornSite.com username
  4. passwords are scalable – you can use as many different ones as you like
  5. passwords can be varied between silos so that loss of one does not impact the others
  6. passwords don’t (necessarily) expire
  7. passwords are the purest form of authentication via ‘something you know’, and thus ideal for the network or “cyber” environment.
  8. you don’t need to pay an intermediary or third-party a surcharge just to get a new password, nor to maintain an old one

…so long as you ameliorate each of the related disbenefits:

  1. passwords are easy to deploy
    which means they’re used everywhere
  2. passwords are easy to manage
    which means they’re managed haphazardly
  3. passwords don’t require identity linkage between silos
    but people are generally too lazy to maintain more than one or two identities
  4. passwords are scalable
    but people are generally too lazy to remember more than one or two passwords
  5. passwords can be varied between silos
    but people are generally … see above
  6. passwords don’t expire
    but most of them are guessable in a matter of minutes or hours
  7. passwords are ‘something you know’
    and so anyone who knows your password is indistinguishable from you

…which amelioration the above “use password managers to protect decent passwords, and keep them well” solution largely does.
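The “guessable in a matter of minutes or hours” disbenefit is straightforward arithmetic over keyspace size and guess rate; a back-of-envelope sketch, where the guess rates are illustrative assumptions rather than benchmarks:

```python
def seconds_to_exhaust(alphabet_size: int, length: int, guesses_per_second: float) -> float:
    # Worst case for the attacker: try the entire keyspace.
    return alphabet_size ** length / guesses_per_second

# 8 lowercase letters against a fast unsalted hash on a GPU: gone in seconds.
weak = seconds_to_exhaust(26, 8, 10e9)    # assumed ~10 billion guesses/sec
# 16 chars of mixed-case alphanumerics plus punctuation against a slow
# bcrypt-style hash: effectively forever.
strong = seconds_to_exhaust(94, 16, 5e3)  # assumed a few thousand guesses/sec

assert weak < 3600            # under an hour
assert strong > 1e9 * 86400   # more than a billion days
```

This is why the fix is two-sided: bigger keyspaces from users (long random passwords) and slower hashes from developers (bcrypt and friends).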

So, why? Why do I insist we cling on to passwords in the face of Mat’s suggestion that:

Two factors should be a bare minimum. Think about it: When you see a man on the street and think it might be your friend, you don’t ask for his ID. Instead, you look at a combination of signals. He has a new haircut, but does that look like his jacket? Does his voice sound the same? Is he in a place he’s likely to be? If many points don’t match, you wouldn’t believe his ID; even if the photo seemed right, you’d just assume it had been faked.

…which all sounds terribly secure?

The reason to cling onto passwords is that they are a distributed, non-hierarchical technology.

In short: there’s a lot less that can go wrong when the identities are discrete and thinly spread.

So, sorry Mat. You’re wrong all the way up to this point – but then you go and add:

The security system will need to draw upon your location and habits, perhaps even your patterns of speech or your very DNA.

We need to make that trade-off, and eventually we will. The only way forward is real identity verification: to allow our movements and metrics to be tracked in all sorts of ways and to have those movements and metrics tied to our actual identity. We are not going to retreat from the cloud—to bring our photos and email back onto our hard drives. We live there now. So we need a system that makes use of what the cloud already knows: who we are and who we talk to, where we go and what we do there, what we own and what we look like, what we say and how we sound, and maybe even what we think.

That shift will involve significant investment and inconvenience, and it will likely make privacy advocates deeply wary. It sounds creepy. But the alternative is chaos and theft and yet more pleas from “friends” in London who have just been mugged. Times have changed. We’ve entrusted everything we have to a fundamentally broken system. The first step is to acknowledge that fact. The second is to fix it.

…and I’m afraid that if I were to enumerate the fearmongering number of ways that you’re wrong here, I would not finish this posting tonight.

Perhaps I’ll come back to it.

In the meantime, please reach out to Privacy International.

Regulators, Password Hashing & Crypto considered as a Branding Exercise: #bcrypt #security /cc @schneierblog @glynwintle

Pardon the rambling, mildly-edited nature of this, but: earlier this year, Whit Diffie mailed me and asked:

What is your view of BCrypt and how does it relate to the algorithm you did for Sun? I can’t recall whether yours was earlier or later, so I don’t know whether to ask why you didn’t use BCrypt.

…and I replied, cc’ing Ulrich Drepper:

Hi Whit, [cc: and Hi Ulrich]

On 15 Jun 2012, at 05:19, Whitfield Diffie wrote:

What is your view of BCrypt

  • Niels Provos
  • Good guy
  • Blowfish based
  • Limited to [72 character] passwords but that’s OK/ish.
  • Good (i.e.: memory hungry) expansion phase means it’s unlikely to be put on a GPU any time real soon.**
  • Strong branding – relevant function name with few ambiguous clashes – unlike SHA256-FOO-PBKDF2-BAR branding of some password-hashing-friendly algorithms that confuse web developers who only hear SHA-256 and use that instead

If you want to take a message to your next crypto conference, please tell them that they need to start branding their algorithms because the ignorant code monkeys get confused easily.

For all of the above reasons – especially the latter, no joke – I recommend bcrypt() to the universe for almost ALL applications of password hashing.

[...deletia...]

and how does it relate to the algorithm you did for Sun?

[It was an] Elegant, amusing hack which addresses some, perhaps most of the issues that bcrypt() does – but which suffers from:

  • being from Oracle now
  • being based on MD5 which though quite legitimate in these circumstances has had its reputation trashed amongst the semi-literate security geeks
  • having the name SunMD5 which contains the name MD5 which – previous point aside – means people think they can use any old MD5 library and get most of the benefit for cheap; but the reputation-trashing has also done for that

Had we called it hiccough or something bizarre, it might have taken off and survived to this day.

I can’t recall whether yours was earlier or later, so I don’t know whether to ask why you didn’t use BCrypt.

Because the point was not to come up with a new algorithm, that was just a sideline to generate some new ideas and to show off the benefits of the new Pluggable Crypt API that Casper, Darren and I designed; and it evidently worked:

http://en.wikipedia.org/wiki/Crypt_(Unix)#SHA2-based_scheme

The commonly used MD5 based scheme has become easier to attack as computer power has increased. Although the Blowfish-based system has the option of adding rounds and thus remain a challenging password algorithm, it does not use a NIST-approved algorithm. In light of these facts, Ulrich Drepper of Red Hat led an effort to create a scheme based on the SHA-2 (SHA-256 and SHA-512) hash functions. The printable form of these hashes starts with $5$ or $6$ depending on which SHA variant is used. Its design is similar to the MD5-based crypt, with a few notable differences:

  • It avoids adding constant data in a few steps.
  • The MD5 algorithm would repeatedly add the first letter of the password; this step was changed significantly.
  • Inspired by Sun’s crypt() implementation, functionality to specify the number of rounds the main loop in the algorithm performs was added. The specification and sample code have been released into the public domain.
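As an aside on how these schemes are told apart in practice: the printable form’s $-delimited prefix is where the branding lives. A minimal identifier sketch; the prefix table is an illustration, not a complete registry:

```python
# Modular-crypt-format prefixes: $5$/$6$ are the SHA-2-based variants
# described above, $2a$/$2b$ are bcrypt, $1$ is the older MD5-based crypt.
SCHEMES = {
    "$1$": "MD5-crypt",
    "$2a$": "bcrypt",
    "$2b$": "bcrypt",
    "$5$": "SHA-256-crypt",
    "$6$": "SHA-512-crypt",
    "$md5": "SunMD5",
}

def identify_scheme(hashed: str) -> str:
    # Longest prefix first, so "$2a$" is tested before shorter prefixes.
    for prefix, name in sorted(SCHEMES.items(), key=lambda kv: -len(kv[0])):
        if hashed.startswith(prefix):
            return name
    return "unknown (possibly legacy DES crypt)"
```

That one-character difference between $5$ and $6$ is the entire visible “name” of the scheme, which rather underlines the branding complaint.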

…but RedHat made the cardinal mistake of calling the algorithm SHA2 which means that nobody is going to use it because normal people can’t distinguish between that and the family of other cryptographic hash functions.

If they had named it bongweasel or something the community would be all over it now.

Perhaps it’s time to do a Pidgin-like rebranding exercise on it?

…Drepper – being the author of the algo above – replied:

You’re missing the reasons for the new implementation and also why anything like bcrypt is completely unusable: regulators.

The regulators in certain industries only “know” that

  1. MD5 == bad
  2. SHA-2 == tested and not broken
  3. any other crypto algo == [extra] work for them

This is why SHA-2 was used and why having SHA in the name is necessary.

I don’t say that these are good reasons. But you deal with the government and InfoSec departments…

Drepper has credentials with financial services; however I responded:

I get that; however I think it’s easy to say “PseudoBrandableName, which is based on the SHA-2 algorithm” – compare: “bcrypt() which is based on Blowfish” – this is not meant as a criticism of you/RedHat but I honestly believe that the wrong naming decision was made in terms of encouraging adoption.

In conversations with non-cryppies (i.e.: most devs) who asked me for advice on this I several times had to go chasing after them and say “No, no! not SHA256!“, when I heard them repeating [the error] to their friends; the poor kids hear SHA-2 and their brain fills in the (incorrect) rest, and they pull in a SHA-256 library and roll their own hashes… which is disastrous.

And they tell their fellow devs, which is worse.

So if you don’t care to consider changing the naming that’s your decision, and I can see why. That’s fine. I wish I had named SunMD5 something better, but I did not understand this back then.

But I care less about mildly confusing a supposedly-ignorant regulator than I do about fixing the certainly-ignorant free market and so will be doing what most of the UK pentester geeks I know are also doing, which is recommending Bcrypt() until such time as something better-and-unambiguous comes along; here in the UK the government agencies are adopting open-source tools to replace proprietary solutions, and because of this movement they are getting BCrypt because that’s the best we can provide them with in terms of (say) widespread support in WordPress/Drupal/Alfresco/Java/PHP.

UK regulatory approval of Bcrypt() will follow in the wake of that.

So that’s what I’m thinking: we need better branding. Bcrypt() and Scrypt() are on the fringes of having good names, and having bcrypt() confused with its parent Blowfish is only acceptable because it requires particularly tenacious developer ignorance to deploy a symmetric encryption algorithm as a hashing function* without realising one’s mistake.

And I still think “bongweasel” would be an excellent hashing algorithm name.


Footnotes:

* Yes, I do know the history of DES-based Morris crypt() and realise the irony of this sentence. But the point stands.

** Bcrypt footnote from Glyn Wintle:

Just for clarity, Blowfish encryption itself is implementable faster on GPUs but only with the same key

http://researchweb.iiit.ac.in/~rishabh_m/gpu_crypto.pdf

The key setup is where the problem lies; maintaining separate large S-boxes per key causes all sorts of problems.

Also for reference, using John and assuming that it’s set to 1024 iterations ($2a$10 prefix), the kind of numbers you get on a CPU are a few thousand c/s; in other words, in the real world at the moment, with non-stupid passwords, this is a very good password hash. Now, as you and I know, the definition of a stupid password is where the fun starts…

John currently has a GPU version, blowfish-opencl, that runs on a 7970; it’s half the speed of the CPU code at the moment. The fact that you can dial up the difficulty of bcrypt is the long-term win. :)

On a different note, Sha512 (insert correct letters here to make Alec happy) is very doable on a GPU; the biggest issue is that you have to do 64-bit maths, and this slows things down a little bit because GPUs are optimised for 32-bit maths, so you end up having to do two 32-bit operations.

The solution to password guessability is this…

Extracts from three other posts:

Password Cracking in a Nutshell

The solution to guessability – even via brute force – is to get users to choose unguessable passwords; for that [see extract below].

And those passwords that they choose must most certainly be defended with the best algorithms possible on the server side to help keep them economically unguessable, hence bcrypt() or PBKDF2 implemented by security-aware developers.

I am no longer convinced that it is possible to wean people away from the method of reusable passwords, and fear that to do so may in other ways be unwise.

…I could make three password guesses per second…

There’s a pendulum that swings from structured-guessing to brute-force, and I think we’re on the return path at the moment; the complementary solution to widespread adoption of bcrypt() and PBKDF2 is to fix the user password management problem.

For me that means:

  • adoption of long pseudorandom passwords; plucking a number out of the air I go around telling people to use 16-character random mixed-case alphanumerics with punctuation, which are clearly untenable for human use especially when you…
  • change your passwords on a schedule – once a quarter, once a year, whatever the schedule it’s in case someone has picked specifically your ciphertext to break and is throwing a few thousand bucks’ worth of AWS and Hashcat at it specifically – so you need to change before their likely time-to-break is exceeded; because this is all such a pain you will need to…
  • use a decent password management tool like 1Password, PasswordSafe, whatever, so you can stop whining about how long the passwords are and how hard it is to choose and change them because most of the work is done for you; and finally you need…
  • user education and motivation. But you knew that already didn’t you?
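The first bullet above is easy to mechanise; a minimal sketch of a 16-character mixed-case-alphanumeric-plus-punctuation generator using Python’s secrets module:

```python
import secrets
import string

# Mixed-case letters, digits and punctuation: ~94 symbols, so a 16-character
# random password carries roughly 16 * log2(94), about 105 bits of entropy.
ALPHABET = string.ascii_letters + string.digits + string.punctuation

def random_password(length: int = 16) -> str:
    # secrets.choice uses a CSPRNG, unlike the random module.
    return "".join(secrets.choice(ALPHABET) for _ in range(length))
```

Nobody is expected to remember the output, which is exactly why the third bullet, a password management tool, is part of the same recommendation.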

And finally: seven basic rules for developers setting up password systems

  1. If any part of your user interface or code truncates password plaintext input at a length of less than 255 characters, it’s a bug.
  2. If you can’t cope with password plaintexts that contain SPACE and TAB characters (update: or if you impose any charset restrictions) it’s a bug.
  3. If your passwords are not hashed, it’s a bug.
  4. If you’re hashing your passwords with anything other than Bcrypt, it’s a bug; bcrypt() maxes out at 72 character passwords, but that’s not your fault…
  5. If you allow people to use a password of less than 12 characters, it’s a bug.
  6. If you do not encourage people to select a unique password for your service, it’s a bug.
  7. If you do not encourage people to use passphrases, it’s a bug.

Yes, the rules are opinionated. They are even biased and make sweeping assumptions. They don’t even address issues like UNICODE. But if you address these seven points in every application in the world, you’ll make password cracking a phenomenally tougher job.
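Rules 1, 2 and 5 are mechanically checkable against any signup path; a hypothetical smoke-test sketch, where create_account and its True/False return convention are assumptions for illustration rather than any real API:

```python
def audit_signup(create_account) -> list[str]:
    """Probe a signup function against rules 1, 2 and 5 above.

    `create_account(username, password)` is assumed to return True on
    success and False on rejection; adapt to the real API under test.
    """
    bugs = []
    if not create_account("u1", "x" * 255):
        bugs.append("rule 1: 255-character password rejected or truncated")
    if not create_account("u2", "pass with space\tand tab 123"):
        bugs.append("rule 2: SPACE/TAB characters not accepted")
    if create_account("u3", "short11char"):  # 11 characters
        bugs.append("rule 5: password shorter than 12 characters allowed")
    return bugs
```

An empty list back means the system clears at least those three bars; anything else is, per the rules above, a bug.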

Final: Evidence submitted to Committee re: the Draft Communications Data Bill #CCDP

Thanks to Mike R for kicking me about the formatting.


Date

23 August 2012

Qualifications

In response to the call for written evidence on the Communications Data Bill, I submit the following:

My name is Alec Muffett. I have worked in Information Technology since 1988, and I specialise in system and network security.

Notably for the period 1992 to 2009 I worked for Sun Microsystems, a major hardware manufacturer now owned by Oracle; for the final 10 years of that employment I rose to be Principal Engineer and Chief Architect for Security for Sun’s European "Professional Services" team – selling, designing and implementing complete solutions for financial services organisations and internet services providers including the likes of CRESTCo, Deutsche Bank, Credit Suisse, RBS, Ericsson, and the UK, Spanish and Portuguese subsidiaries of Vodafone.

I now work diversely as an independent cybersecurity consultant, a part-time security officer for a social software SME, blog on security issues both personally and at Computerworld UK, and I am a director of the Open Rights Group.

In light of this diversity I submit this work on my own behalf.

Note

I have read the submission made by Glyn Moody, which he has reprinted in Computerworld [1] and I see no value in addressing the points that have already been covered in that submission, other than to recommend them most highly as correct and worthy of consideration.

On Terminology

Following a cue from other discussion of the bill, I shall use the term Content Service Providers (CSPs) broadly, including firms that would more typically be referred to as Internet Service Providers (ISPs) – as well as including the likes of Google, Yahoo, Microsoft, etc, under that umbrella.

Keyword Summary

  • anti-competitive business landscape

  • negative impacts of regulation

  • small/medium enterprise communications providers

  • inhibiting business agility and growth

  • conflict of strategic interest

  • cybersecurity risk

Evidence

I would like to submit the following evidence:

On the Risks of CCDP Architecture

1. That in abandoning the former architecture suggested by the Interception Modernisation Programme (IMP) – that of building an Orwellian "centralised" database – in favour of a more media-friendly but equally Orwellian "distributed" database, the Communications Capabilities Development Programme (CCDP) greatly magnifies the information security management risks inherent in that system.

2. Therefore the costs are at least equally magnified; where once there was a nominally "single" database with centralised information security management there may now be a hundred with independent management and access controls; therefore the risks are multiplied by at least a hundred, and the cost of managing those risks will increase by a proportional factor.

3. Therefore it seems highly implausible that the Home Office quoted £2bn to implement IMP and yet now quotes a lesser figure of £1.8bn to implement CCDP.

4. So my answer to question 9 (Is the estimated cost of £1.8bn over 10 years realistic?) is "No, most definitely not, even allowing for Moore’s Law because that will simultaneously be working to aid communicators and interceptors both".

On the Equality of CCDP to its forebear

5. Of course it is facile to sketch the IMP implementation as having ever been designed around a truly centralised database; to do so would require that for every N gigabits of network bandwidth between two arbitrary points in Britain (Glasgow and Edinburgh, say) there would have to be a second, equally-sized, dedicated N gigabits of network bandwidth just to carry a copy of that data to Cheltenham.

6. So for IMP a copy of the entire British Internet would also have to flow to Cheltenham, an architecture which would not be tenable.

7. Therefore IMP must always have been based upon deploying distributed sensors performing data reduction and filtering before passing the data back to a controller, a system structurally identical to CCDP, putting the lie to the suggestion that they are in any way significantly different proposals.

8. So my answer to question 3 (How do the proposals in the draft Bill fit within the wider landscape on intrusion into individuals’ privacy?) is that "CCDP is the same as IMP, and should be entirely thrown out in the same way and for the same reasons."

On CCDP’s impact upon CSP technical implementation and profit margin

9. So the proposals are now for distributed databases at each Communication Service Provider (CSP), somehow at a reduced cost; the only way to achieve this is to gradually pass costs of the hardware onto the CSPs. This will lead to three obvious scenarios:

10. Large, well-funded CSPs will absorb the costs and manage their responsibilities towards the interception devices with reasonable care, including locked hardware cages, restricted access to interception equipment hardware, security-cleared staff, etc.

11. Virtual CSPs (for instance, Tesco’s ISP service) resell the services of large CSPs and therefore will be "covered" for compliant interception capability somewhat automatically – so long as we can assume that mechanisms exist that can tie a Tesco user’s information to the identity of traffic traveling upon the underlying CSP network.

12. Small to Medium CSPs will be faced with a challenge: the cost of obtaining and installing interception hardware and of setting up special controls – hardware cages, restricted access, security clearances – will be a burden on capital and operational expenditure, making significant impact upon business margins.

13. This is because security costs money to implement properly.

14. But once installed at the Small/Medium CSP, the interception hardware will also impact upon creative network architecture; in a microcosm of the "Edinburgh/Glasgow" point above, a copy of all of the CSP’s traffic will have to flow to the interception devices.

15. To an enterprise network architect this is akin to entering a boxing ring with a ball-and-chain secured to one ankle; it impacts your ability to make optimal use of the hardware that you have budgeted for and purchased because you are handicapped by government mandate – always having to bear in mind that one must not tithe but in fact wholly duplicate traffic flows so that the interception box may have its due; and that you must integrate your shiny new hi-tech network with inherently "legacy" (ie: somewhat archaic) approved interception hardware.

16. Also: Moore’s Law does not (yet) stand still, so technology deployed to permit sufficient interception today will be overwhelmed in a year, perhaps three; so the ball-and-chain will have to be regularly replaced even if we quit boxing and instead take up the 4x400m Men’s Relay – in which case multiple balls-with-chain will be suddenly required, and possibly disposed of if the architecture is backed-out due to failure.

17. So my answer to question 24 (Are the proposals for the filtering arrangements clear, appropriate and technically feasible?) is that irrespective of their feasibility the proposals are not appropriate and will negatively impact innovation at some of the places where Britain needs it most, viz: the SME Communications Sector.

18. The large CSPs understand this and are somewhat proof against it by virtue of their maturity and size, and thus are more than happy for the Government to deploy this inherently anti-competitive measure against those who might replace them by virtue of technical innovation in service provision.

On the Cost to the Consumer

19. So it should be clear that the costs of CCDP are eventually borne fourfold by the consumer: in extra service charges, in extra tax upon the same, in lost innovation and in lost competition.

On Intercept Data Remanence and Leakage

20. To return to the many interception devices; even if they become "virtual" devices that are somehow "in the cloud" they must still store their data somewhere, and through this diversity and frequent upgrading and replacement of interception devices it is inevitable that the data will eventually fall into the hands of the general public – either by error (selling old hard disks on Ebay) or malice (paying-off a supposedly trusted employee).

21. It goes without saying that such data is valuable; the fact that a particular IP address – corresponding to a famous footballer – repeatedly visits a particular pornographic website is easily a tabloid headline and therefore of value.

22. It is possible of course to mitigate some of these risks through encryption, but then the question becomes one of where are the encryption keys kept? – if on the same hard disks then the decryption of the footballer’s pornography habit is open to any journalist.

23. Or alternately "Hardware Security Modules" and other "Trusted" devices could be deployed to keep the keys, but this pushes up the cost of each interception device, and the complexity of managing it also – so once again we look askance at that £1.8bn figure and wonder where the cost of doing security "properly" is hidden?

24. So my answer to question 22 (Does the technology exist to enable communications service providers to capture communications data reliably, store it safely and separate it from communications content?) is “Perhaps, but my suspicion is ‘not at that price point’ and ‘not with this distributed architecture and ownership’”, and I further repeat that it is not necessary to see the actual content in order to write, blog or tweet a story that Footballer X is visiting Porn Site Y every Friday night.

25. And my answer to question 23 (How safely can communications data be stored?) is "Very safely, but you’ll have to pay rather more than £1.8bn to do it properly, and you would have to inhibit any change, progress or innovation within the CSP industry because the churn of technology will throw up the chaff of disposed interception equipment, ripe for amateur analysis."

On Technical Measures to Crack Encryption on behalf of Snooping

26. My answer to question 26 (Are there concerns about the consequences of decryption?) would include "Would Parliament assent to the security services decrypting and taking a copy of all HTTPS/SSL-encrypted web traffic leaving the Houses of Parliament?", but that might be considered flip, so I’ll just say "yes" and note that others than members of Parliament might feel similarly; see also the Select Committee report referenced below.

On the capability to circumvent Interception

27. My answer to question 25 (How easy will it be for individuals or organisations to circumvent the measures in the draft Bill ?) is "Trivially easy; the technologies already exist, are widely deployed, essential tools for the liberty of citizens of repressive regimes, and will only get better and more numerous with time."

28. To ban these tools would be highly retrogressive, technically infeasible, [2] [3] set a bad precedent globally, and be disastrous for liberty.

On New Privacy Technology

29. Thus: because of the two scenarios outlined below, I appeal to the committee to please revolt against the notion that there is ever a situation where security measures taken by individuals and organisations can be "too good".

31. It is of course very easy to have "too much" security – a suffocating problem that one might encounter at (say) an American airport; but that is not the same as security which is "too good".

32. Security can never be too good.

33. Underpinning CCDP (and its brethren) is the assumption that the Government needs – indeed must have – visibility not only of the fact of communication between two computers, but also of (some of) the content of that communication, howsoever protected.

34. This assumption is evidenced by the very fact that question 26 (re: decryption) was asked in this call for evidence.

35. This assumption is misconceived, and in fact unwise.

36. The Internet – cyberspace – is a digital, on-or-off, one-or-zero, do-or-not-do place, where one’s ability to attack another’s system is largely a function of knowledge, understanding, competence and luck rather than logistics, and where natural defences such as the English Channel do not exist. In Westminster’s cyberspace one is as far from Tobermory as Moscow, and individual actors may appear as large and relevant as nation states.

37. Thus I am concerned that beyond the Government’s helping itself to any data that is now openly available on the Internet, and/or any data which it might coerce from regulation of Internet business, its next logical step would be to prohibit adoption of technologies which restore absolute privacy to individuals and organisations.

38. We have seen such attempts before, with "Mandatory Key Escrow" in the late 1990s, demanding that everyone surrender copies of their SSL keys so that the security services could peep into everyone’s encrypted transactions.[4]

39. So it strikes me that the future will contain an either/or scenario:

40. Either the security services learn to adapt to a world where there simply are some forms of data which they are not in a position to know, learn or demand, and thereby evolve alternative strategies to work around this – just as they did previously with the failure of Mandatory Key Escrow, and could do with the abandonment of CCDP.

41. Or else the Government to some extent bans its citizens from having strong security and privacy – from having security that is too good – thereby undesirably reducing the resistance of the British populace as a whole to cyberattack from the rest of the world, with the inevitable side-effect that the security services never evolve their skill set beyond "how to demand data from third parties".

42. The third option, of course, is to muddle along somewhere in the middle, trying to ignore the inevitable rise of internet privacy tools that are effectively interception-proof by virtue of being too good.

43. But that’s what we’re currently doing, isn’t it?

[1] See: The Googlisation of Surveillance blogs.computerworlduk.com/open-enterprise/2012/08/submission-on-uk-governments-snooping-bill/index.htm

[2] See: How the Great Firewall of China is Blocking Tor www.cs.kau.se/philwint/pdf/foci2012.pdf

[3] See: How governments have tried to block Tor www.youtube.com/watch?v=GwMr8Xl7JMQ (video of public lecture)

[4] See: Select Committee on Trade and Industry Seventh Report www.parliament.the-stationery-office.co.uk/pa/cm199899/cmselect/cmtrdind/187/18713.htm which, though from 1998, strongly prefigures much of the discussion that now surrounds IMP/CCDP

Exactly 21 years ago: “CRACK: A Sensible Unix Password Cracker”

How to make several other people feel much older than I actually do – muahahahaha…

This was version 2.7; the shit really went down hard when v3.2+fcrypt was posted on August 23rd 1991; but that’s another story.

Happy memories. :-)


From: aem@aber.ac.uk (Alec David Muffett)
Newsgroups: alt.sources,alt.security
Subject: CRACK: A Sensible Unix Password Cracker
Keywords: password des encryption unix frisbee
Message-ID: <1991Jul15.183637.6511@aber.ac.uk>
Date: 15 Jul 91 18:36:37 GMT
Followup-To: alt.security
Organization: University College of Wales, Aberystwyth
Lines: 1622

"Crack" (with a capital C) is a program I have been developing over the past 18 months, in parallel with some major mucking about that I've been doing to the crypt() function. Crack takes a 'sensible' approach to searching for dictionary or user-related passwords, and produces a report not dis-similar to that generated by COPS' "pass.c".

It has been tested and works without mods on Ultrix 3.x and Ultrix 4.x, and SunOS 4.x, although in order for it to drop easily onto other machines I have crippled a few functions a little: see "PROGS/crack.h" for options that you can #define to get a little speed back.

It does nothing particularly clever other than to do things in a orderly manner and do them quickly, and therefore I have no qualms about releasing this software to the net. There are many other optimisations that could be done to the code (replacing malloc(), etc, springs to mind), but for portability I have not done so to the first release.
Please let me know how you get on.

alec
--
INET: aem@aber.ac.uk JANET: aem@uk.ac.aber BITNET: aem%aber@ukacrl
UUCP: ...!mcsun!ukc!aber!aem ARPA: aem%uk.ac.aber@nsfnet-relay.ac.uk
SNAIL: Alec Muffett, Computer Unit, Llandinam UCW, Aberystwyth, UK, SY23 3DB

Terry Pratchett & scarcity economics in communications regulation #commsreview HT: @dml @adamkinsley

I was chatting with DML earlier today about communications regulation and how certain mindsets – common in regulatory environments – are not equipped to deal with the abundance that is possible with digital goods; instead there are always appeals to there being self-evident limits to:

  • how many network addresses remain (yay IPv6)
  • how big a computer you actually need (how many “MIPS”?)
  • how much e-mail/content one can read in a day (“information overload is killing business”)
  • what bandwidth allocations you can pack into a radio spectrum (this month)
  • what minimum network bandwidth a home user requires for legal use (ie: no filesharing)

…this last one in particular came up in Twitter-discussion of today’s communications review:

I quietly exploded at the implications of this:

“…as someone who works from home and frequently has to sling 1Gb Linux images around, anyone suggesting I can make do with 2.5Mb is a bloody lunatic. I could easily use a 20Mb line bidirectionally. What is worse is not having adequate symmetric bandwidth to host a server at home. What small minded, short sighted, petty idiot thinks that there’s such a thing as a “legally permissible” amount of bandwidth…”
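A back-of-envelope calculation makes the point (assuming the "1Gb" above means a one-gigabyte image and the line rates are megabits per second):

```python
def transfer_minutes(size_bytes: float, rate_mbit_per_s: float) -> float:
    """Naive transfer time in minutes, ignoring protocol overhead."""
    return size_bytes * 8 / (rate_mbit_per_s * 1_000_000) / 60

image = 1_000_000_000  # a 1 GB Linux image
print(round(transfer_minutes(image, 2.5)))  # → 53 minutes on a 2.5 Mbit line
print(round(transfer_minutes(image, 20)))   # → 7 minutes at 20 Mbit
```

Nearly an hour per image versus seven minutes; scarcity thinking turns a routine task into a day-structuring event.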

…but the answer, as ever, is in Terry Pratchett; an extract from Night Watch which I* believe to be his finest work; watch the character of Reg Shoe: (my emphasis)

‘A present from the lads down at the Shambles, sarge,’ said Dickins, arriving with a wagon. ‘They said it’d only spoil otherwise. Is it okay for me to dish ’em out to the field kitchens?’

‘What’ve you got?’ said Vimes.

‘Steaks, mostly,’ said the old sergeant, grinning. ‘But I liberated a sack of onions in the name of the revolution!’ He saw Vimes’s expression change. ‘No, sarge, the man gave them to me, see. They need eating, he said.’

‘What did I tell you? Every meal will be a feast in the People’s Republic!’ said Reg Shoe, striding up. He still hung on to his clipboard; people like Reg tend to. ‘If you could just take it along to the official warehouse, sergeant?’

‘What warehouse?’

Reg sighed. ‘All food must go into the common warehouse and be distributed by my officials according to-‘

‘Mr Shoe,’ said Dickins, ‘there’s a cart with five hundred chickens coming up behind me, and there’s another full of eggs. There’s nowhere to send ‘em, see? The butchers have filled up the ice-houses and smoke-rooms and the only place we can store this grub is in our guts. I ain’t particularly bothered about officials.’

‘On behalf of the Republic I order you-‘ Reg began, and Vimes put his hand on his shoulder.

‘Off you go, sergeant,’ he said, nodding to Dickins. ‘A word in your ear, Reg?’

‘Is this a military coop?’ said Reg uncertainly, holding his clipboard.

‘No, it’s just that we’re under siege here, Reg. This is not the time. Let Sergeant Dickins sort it out. He’s a fair man, he just doesn’t like clipboards.’

‘But supposing people get left out?’ said Reg.

‘There’s enough for everyone to eat themselves sick, Reg.’

Reg Shoe looked uncertain and disappointed, as though this prospect was less pleasing than carefully rationed scarcity.

‘But I’ll tell you what,’ said Vimes. ‘If this goes on, the city will see to it the deliveries come in by other gates. We’ll be hungry then. That’s when we’ll need your organizational skills.’

‘You mean we’ll be in a famine situation?’ said Reg, the light of hope in his eyes.

‘If we aren’t, Reg, I’m sure you could organize one,’ said Vimes, and realized he’d gone just a bit too far. Reg was only stupid in certain areas, and now he looked as though he was going to cry.

‘I just think it’s important to be fair-’ the man began.

‘Yeah, Reg. I understand. But there’s a time and a place, you know? Maybe the best way to build a bright new world is to peel some spuds in this one? Now, off you go. And you, Lance-Constable Vimes, you go and help him.’

…and there you have it; some people have an overwhelming urge to sort things out on behalf of the little people, and the first thing they try to do is put themselves in the middle of everything.

But not from a lust for power, oh no… nothing so crass as that.

It’s benevolence.


*as agreed with several other friends