
31 July 2016

Listening skills


From an information security perspective, it is easy to get into the habit of framing threats in terms of someone “breaking into” things. The terminology alone inspires mental images of balaclava-wearing attackers, explosions, melodramatic electrical short circuits and so on. In reality, however, it often turns out that no drama and no active breaking-in is required at all. In many situations, a spot of sedentary, passive listening is all it takes to get access to whatever you wish.

So, say for example you want to get into a building. These days, most access control systems are based on single-factor RFID, where you wave your card at a reader at the point of entry. All that is required is to loiter at a high-footfall point near the building during peak hours, carrying a long-range RFID scanner in your rucksack. Hey presto, a handful of cloned door entry cards [1]. For the occasions where the card also requires a PIN to be entered, or where there is only a PIN, that’s not a problem either. Simply wait for someone else to use the keypad, then use your trusty infrared camera to see the buttons their finger has touched [2]. Worried that you’ll leave your face all over the CCTV? You needn’t be, provided the building is protected with IR-sensitive cameras. To defeat them, all you need is an IR cap, which looks normal to the naked eye but includes powerful IR lamps in the peak to blind the cameras [3]. Instant incognito, wherever you go.

Great, so once you’re in the building; then what? Now you just need an unattended network point to plug into. Simply find an empty meeting room, or if the office is the usual hot-desking affair, look for an unused desk and ask if it’s ok to grab it for a few hours. At this point, all you need to do is plug in the packet sniffer that you brought with you. These now come pre-packaged in a variety of anonymous-looking formats, such as filtered power strips [4]. No need to wait, just return later the same day to collect it.

Why would a packet sniffer be any use to you? Because modern network infrastructure is enormously complex, which in practice means prone to issues. Although switches are designed to keep traffic point-to-point between sender and receiver, when a switch gets confused it mostly falls back to spraying the traffic everywhere. Sometimes this is due to a misconfiguration, but it can also happen when lines flap, devices are reset, or internal switch tables get flushed or overloaded. Unfortunately, on a large network with thousands of connections, that will happen much more frequently than you might imagine. The result is that, rather conveniently, all you need to do is plug into the network and wait for something interesting to come to you.
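The fall-back behaviour is easy to picture with a toy model. This is purely an illustration of the principle (real switch firmware is vastly more complicated, and the class and names below are invented for the sketch): a switch keeps a table mapping MAC addresses to ports, and whenever the destination isn’t in that table, because it was flushed, overflowed, or never learned, the frame gets flooded to every port.

```python
# Toy model of a switch's MAC (CAM) table, illustrating why a full or
# flushed table degrades point-to-point switching into flooding.
# Names and capacities here are illustrative only.

class ToySwitch:
    def __init__(self, table_capacity):
        self.capacity = table_capacity
        self.mac_table = {}  # MAC address -> port number

    def learn(self, src_mac, port):
        """Record which port a source MAC was seen on, if space remains."""
        if src_mac in self.mac_table or len(self.mac_table) < self.capacity:
            self.mac_table[src_mac] = port

    def forward(self, dst_mac, ports):
        """Return the ports a frame goes to: one if known, all if not."""
        if dst_mac in self.mac_table:
            return [self.mac_table[dst_mac]]
        # Unknown destination: fall back to flooding every port.
        return ports

sw = ToySwitch(table_capacity=2)
sw.learn("aa:aa", port=1)
print(sw.forward("aa:aa", ports=[1, 2, 3]))  # [1] -- switched point-to-point

# A reset, flush, or table overflow later...
sw.mac_table.clear()
print(sw.forward("aa:aa", ports=[1, 2, 3]))  # [1, 2, 3] -- flooded to all
```

The same degradation is what a MAC-flooding attack provokes deliberately: fill the table with junk addresses, and legitimate traffic starts arriving at your port too.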

About ten years ago, when I was running a lot of on-site security assessments, I wrote a research tool called “passive aggressive” to automate the task. Typically, after leaving it plugged into a corporate network for an hour or so, I would have the credentials for managing the network hardware, along with a dozen active directory accounts too (including, if I was really lucky, someone in the IT team who was a privileged user).
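The internals of that tool aren’t published, but one of the simplest things any passive sniffer can harvest is HTTP Basic Auth credentials, which cross the wire merely base64-encoded rather than encrypted. A hypothetical sketch (the payload, hostname and credentials below are all made up for illustration):

```python
import base64
import re

def extract_basic_auth(payload: str):
    """Pull any HTTP Basic Auth credentials out of a captured payload.

    Basic auth is only base64-encoded, not encrypted, so anyone who can
    see the traffic can recover the username and password directly.
    """
    creds = []
    for match in re.finditer(r"Authorization: Basic ([A-Za-z0-9+/=]+)", payload):
        decoded = base64.b64decode(match.group(1)).decode("utf-8", "replace")
        user, _, password = decoded.partition(":")
        creds.append((user, password))
    return creds

# A fragment of sniffed cleartext HTTP (hypothetical request):
packet = ("GET /admin HTTP/1.1\r\n"
          "Host: intranet.example\r\n"
          "Authorization: Basic " +
          base64.b64encode(b"netadmin:S3cret!").decode() + "\r\n\r\n")

print(extract_basic_auth(packet))  # [('netadmin', 'S3cret!')]
```

Protocols like Telnet, FTP, SNMPv1/v2 and NTLM-era Windows authentication leak similarly recoverable material, which is why an hour of listening on a busy segment yields so much.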

The elegance of this approach is that it is almost entirely passive. No breaking in is required, which means there should be little (if any) audit trail produced that would trip up any monitoring tools. No incident to respond to!

How do you stop something like this happening? The truth of the matter is that getting your information security right isn’t something you achieve by buying a product, running a tool, or gaining a certification. It is a long, laborious journey that entails first understanding what you need to protect, and only then taking proportional, pragmatic action to gain the maximum value out of every penny you spend.

The journey of a thousand miles starts with first engaging your brain. ;)


References


  1. http://hackaday.com/2013/11/03/rfid-reader-snoops-cards-from-3-feet-away/
  2. http://petapixel.com/2014/08/29/heres-iphone-thermal-cameras-can-used-steal-pin-codes/
  3. http://odditymall.com/justice-caps-hide-your-face-from-surveillance-cameras
  4. http://lifehacker.com/5952327/turn-a-raspberry-pi-into-a-super-cheap-power-strip-packet-sniffer

12 July 2016

How do you like them cyber apples?

On first glance, it’s easy to think of the HMG Cyber Essentials scheme as being a very low bar as far as security standards go. After all, it only contains a scant five requirements, which consist of:

  1. Boundary firewalls and internet gateways
  2. Secure configuration
  3. Access control
  4. Malware protection
  5. Patch management

Which on the face of it is actually pretty straightforward. However, a forthcoming HMG revision to the requirements scope may redraw your organisational landscape dramatically. The particular line reads:

“Remote devices with access to internal services are in scope, regardless of whether or not they are owned by the organisation.”

This has the effect of making Cyber Essentials apply to all your home workers and, by inclusion, the personal phones and PCs they use to access office systems.

Clean-up, aisle five... ;)

What’s up with the MySpace passwords?

Just in case you missed it, it was announced in the last few days that MySpace was hacked in 2013 and allegedly kept it quiet [1]. The result: 427m user accounts compromised, and, as has become the norm, a general wringing of hands amongst the security community about how poorly MySpace protected the passwords. I mean, using an unsalted SHA1 hash to protect a password? How Last-Tuesday.

However, if you are one of the people who is thinking that MySpace would have been ok if they had just salted the passwords and then used SHA256 to hash them, then sadly you are equally wrong, and I shall explain why.

For an industry that is rooted in technology, information security is disproportionately full of Conventional Wisdom [2] that is misplaced or outright wrong. So why might that be, when there are so many obviously clever people around? I personally think it’s because the knowledge domain has grown to be so enormous that no single person can cover it all. So even the brightest practitioners are left skimming the surface. The result is that there is a tendency to latch onto something and to repeat it as fact, without having understood the detail.

A good example of this is the holy crusade against using hashing algorithms that are perceived to be weak for password storage, such as MD5 and SHA1. Now it is true, they are weaker than recent alternatives like SHA256, so why would I think it is Conventional Wisdom? It’s because when it comes to storing passwords, the choice of hash algorithm is often the least critical factor. Plus, whilst it is better than nothing, just adding a salt isn’t going to make everything ok either.

The crux of the issue is the volume of computing power that can be brought to bear on the hashes once they are obtained, which today, with the use of GPUs and custom ASICs, is both enormous and relatively cheap. Don’t forget that in this scenario, no-one is exploiting weaknesses in the hash function: it’s a matter of raw performance. The result is that with the appropriate hardware, it is possible to calculate billions of hashes a second, no matter which algorithm is chosen. Which means that any argument proposing one hash function over another is effectively irrelevant [3].
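To see why the choice of algorithm barely matters, consider how an unsalted hash is actually attacked. A minimal sketch in Python (the leaked hash and dictionary here are made up for illustration):

```python
import hashlib

# A leaked, unsalted SHA-1 password hash (hypothetical user record):
leaked = hashlib.sha1(b"letmein").hexdigest()

# The attacker's dictionary of common passwords:
dictionary = ["password", "123456", "qwerty", "letmein", "monkey"]

# With no salt, one pass over the dictionary cracks every user who
# picked a common password -- and a precomputed hash -> password table
# works across *all* leaked databases at once.
lookup = {hashlib.sha1(p.encode()).hexdigest(): p for p in dictionary}

print(lookup.get(leaked))  # letmein
```

Swap `hashlib.sha1` for `hashlib.sha256` and the attack is unchanged; only the cost of building the lookup table varies, and on GPU hardware both are effectively free.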

So if SHA256 and a salt isn’t the solution, what is?

Firstly, it’s worth stating that no approach to encryption or hashing offers absolute security, just a quantifiable probability. So protecting stored password hashes isn’t about making it impossible to recover the passwords, just about making it too time-consuming or costly to be practical.

Secondly, it’s also worth stating that the password storage is only half the equation. If the passwords themselves are weak, then it really won’t matter at all how they are stored, as the simplest route to recovering them is to just use a dictionary to guess common passwords.

So what is the answer? It just so happens that the principles of what makes a good approach to storing passwords were established years ago, with the creation of Key Derivation Functions (KDFs) [4]. No home-brewed solutions required: simply take an off-the-shelf algorithm, plug in the passwords, and store the output for later comparison.

So why is a KDF better than SHA256 and a salt? It’s because the contemporary KDFs are both processor and memory intensive. Which means that they can’t be used to calculate millions of potential password guesses a second, and what’s more, the memory requirements mean they are impractical to run on GPUs and ASICs.
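As a concrete illustration, Python’s standard library ships scrypt, one such memory-hard KDF. A minimal sketch, with the caveat that the cost parameters shown are illustrative rather than a tuning recommendation:

```python
import hashlib
import hmac
import os

def hash_password(password: str, salt: bytes = None):
    """Derive a storable hash using scrypt, a memory-hard KDF.

    n, r and p control CPU and memory cost; tune them for your own
    hardware rather than copying these example values.
    """
    if salt is None:
        salt = os.urandom(16)  # fresh per-user random salt
    key = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt, key

def verify_password(password: str, salt: bytes, stored: bytes) -> bool:
    _, candidate = hash_password(password, salt)
    return hmac.compare_digest(candidate, stored)  # constant-time compare

salt, stored = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, stored))  # True
print(verify_password("letmein", salt, stored))  # False
```

The salt defeats precomputed tables, while the memory cost is what keeps the guessing rate down on GPUs and ASICs; both properties are needed, and the KDF gives you them in one package.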

So in summary:

  • If you aren’t enforcing strong passwords, it doesn’t really matter how you store them, they’ll still be easy to recover. Garbage-in-garbage-out.
  • When it comes to password storage, the hash function isn’t anywhere near as important as the KDF. 
  • If you are still recommending something home-brewed rather than a KDF for processing passwords prior to storage, then stop.

References


  1. http://www.dailydot.com/technology/myspace-database-hack-leakedource/ 
  2. Conventional Wisdom is a body of ideas that are generally accepted to be true, however they are not necessarily so. https://en.wikipedia.org/wiki/Conventional_wisdom 
  3. http://blog.ircmaxell.com/2011/08/rainbow-table-is-dead.html 
  4. https://en.wikipedia.org/wiki/Key_derivation_function

Know thyself!

When evaluating security controls, it is common to use self-certification as a way to strike a balance between cost and value. For example, whilst you could pay your auditor to flip every stone in your organisation (thereby funding their progeny through medical school), it makes much more financial sense to focus their time on the areas of greatest risk, or least foreknowledge. So how are these areas generally chosen? Typically, through the answers provided in a questionnaire.

Now, whilst using a questionnaire for the quantitative evaluation of security controls is quite straightforward (you count the things that are there, or otherwise), the qualitative evaluation is much more subtle, mostly because it is difficult to separate the answers from both personal and contextual bias.

My own experience of this has been best informed through interviewing several thousand candidates for consultancy roles. As part of this, I have always used a brief telephone interview as the first step in filtering out any mismatches. And whilst the main purpose of the call is to evaluate the psychology of the candidate, the general format will follow a questionnaire targeted at exploring knowledge in several technical domains, along with detecting any affinity towards a particular Disney Princess.

As part of this interview, each technical domain is preceded with a request for the candidate to rate their knowledge on a scale of zero to five, where zero is no knowledge and five is they know everything. In my experience, the answers to these questions really only fall into three broad buckets: those who consistently answer three, those that consistently answer four, and those that alternate between answering one and four.

In practice, it is a rarity for anyone to answer zero or five, just as it is equally rare for anyone to rate themselves accurately: those with weak knowledge consistently overstate, whilst those with strong knowledge consistently understate (if only as a form of professional modesty).

So what do I personally take away from this?

In my experience, it is almost worthless to leave a qualitative questionnaire with someone to fill in later. In fact, even if you go through it interactively with someone, the answers themselves are rarely useful. For me, the real value lies in reading the interviewee’s body language (or aural cues) as you take them through the questions.

Once complete, you will probably still not have a reasonable qualitative evaluation of any controls, but if you are paying attention, you will know exactly which areas your interviewee is worried about, or doesn’t understand. No matter what answer they actually gave.

There is no spoon. ;)

No, you’re not Penetration Testing. Get over yourself.

The amount of time I have seen wasted haggling-the-toss over what constitutes a Penetration Test is a constant source of amusement (especially as it is almost always by someone who clearly has no idea). And yes, I’m quite aware of the irony of me taking this article and throwing it onto the same vanity bonfire, along with the rest of the waffle.

The reality is that it is now coming up to twenty years that I have been delivering commercial Penetration Testing services, and in all that time I can think of only one client that actually wanted a genuine, dictionary-definition Penetration Test. Just one!

Why might that be? Because it is potentially a very expensive exercise: partly from the sheer effort required to actually break all the way into someone’s systems (especially when it might mean identifying zero-day vulnerabilities and developing new exploit code to use them), and partly because there is a very real chance that delivering those exploits will impact expensive production platforms.

Meanwhile, the daily reality for those delivering the projects is that they are almost always delayed to the eleventh hour, and time constrained to the point where there is barely enough to get the basics complete. And that’s even before you add in the headaches caused by unreliable or unavailable systems, and access-control mishaps. Once you factor all that in, there is rarely enough time to do more than note the presence of a new vulnerability, let alone pursue it through to a fully working exploit (and no, popping a JavaScript alert box doesn’t count).

So at best, what is described as a Penetration Test is often little more than a comprehensive scan of vulnerabilities, topped-up with some manual verification of issues that the tools don’t do a very good job of finding. At worst? It’s not even that.

And what about the single, mythical client that actually wanted a real Penetration Test? Oh, that was a London council, where the IT Security Manager blew a big chunk of his budget on getting us to hack all the way into his (apparently much loathed) CEO’s desktop. Simply to prove a point.

Ah the ego. She is beautiful, no?

Can I interest ma’am in a slice of TDD?

The perception of time passing is a funny thing. It only seems like yesterday that I was still a child and the summer holiday seemed to last a lifetime. And now, here I am somewhat surprised that my complement of fingers and toes is no longer enough to count the years that I have been developing software. In fact, I now need an assistant to contribute two feet and one hand too. Ouch!

Anyway, in all my years of writing code there has been a constant stream of faddy languages, gimmicky ideas, and people making proud announcements that developers are no longer needed due to their revolutionary product that will “virtually write the code for you”.

Needless to say that most have passed on a lot quieter than when they arrived. However, as well as the hot air, there have also been a handful of really simple, yet elegant ideas that have forever changed the way development is delivered. Ideas such as the Agile Manifesto.

Out of these, if I were to pick the one that has had the biggest impact on my own code, and has also helped me to do the same for client organisations, it would be Test Driven Development (TDD). For those not already familiar with the term, the basic concept of TDD is that you first write your tests, then develop your code until it passes them. What could be simpler than that?

In my opinion, if the one improvement an organisation makes is to follow this basic TDD recipe, then the overall quality and reliability of their software will be greatly improved. And as a result, for possibly the first time ever, they will also have the confidence that it actually does what it is supposed to do. Why? Because they can prove it empirically.

However, whilst this will get them a product that does everything they expect it to (or at least, everything the person who wrote the tests expected), it gives them little confidence around anything else. In terms of the Rumsfeldian quadrant, they are only tackling the known knowns, and they are still leaving all the unknown stuff to its own devices. And you know how the devil likes to make work for idle unknowns.

So when I first start working with a development team to fix their security issues, it is common to find that their tests are very simple, and mostly focused on proving that a desired use case works. Very little thought has been given to the application gracefully detecting and coping with anything else. Which is a bit scary, as for every use case there is an equal and opposite bus-load of abuse cases (figuratively swerving across the central reservation and into oncoming traffic).
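To make the use/abuse distinction concrete, here is a small hypothetical sketch (the function and its tests are invented for illustration) of tests written TDD-style, where the abuse cases deliberately outnumber the use case:

```python
# TDD-flavoured sketch: the tests are the specification, and the abuse
# cases outnumber the use case. All names here are illustrative.

def parse_port(value) -> int:
    """Parse a TCP port number from a string, rejecting anything out of range."""
    if not isinstance(value, str) or not value.strip().isdigit():
        raise ValueError("port must be a non-negative integer string")
    port = int(value.strip())
    if not 1 <= port <= 65535:
        raise ValueError("port must be between 1 and 65535")
    return port

# Use case: the happy path the feature was written for.
assert parse_port("8080") == 8080

# Abuse cases: everything the happy path quietly assumes away.
for bad in ["0", "65536", "-1", "8080; rm -rf /", "", "   ", None]:
    try:
        parse_port(bad)
    except ValueError:
        pass  # rejected gracefully, as the test demands
    else:
        raise AssertionError(f"accepted bad input: {bad!r}")
```

The injection-flavoured string in the abuse list is the point: nobody writes a use case for it, yet it is exactly what turns up in production input.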

I’m already starting to ramble, so I’m going to end here, but before I do I just want to finish up with my top-tips for getting the best from the TDD process:

  • Build both use and abuse case tests before writing any actual code. 
  • Seek out the most anally-retentive developer on the team, and cajole them into writing (or at least checking) your tests. 
  • There are a lot more possible abuse cases than use cases, so if you count your tests and find that you don’t have more negative tests than positive, then you should start suspecting that something is amiss.
  • Don’t stop refactoring until all the tests pass. 

Contactless payment cards: if they're a benefit, it's not my problem...



There has been a bit of a hullabaloo on the social networks recently about contactless payment cards. It’s the usual sensationalist story intended to catch your interest and make you foam at the mouth, where the usual faceless legions of criminals are apparently wandering around with merchant terminals, bumping into you on the train and taking a contactless payment without even a how-do-you-do.

Normally I wouldn’t get involved, but I’ve also noticed that some people are wading into the argument and posting links to a debunking site claiming that it isn’t something to worry about [1]. However, that’s not entirely true.

Knowledge is after all power, so if reading this makes you a little more informed, then hopefully you’ll make better decisions and maybe even live happily ever after. Awwww!

So, the facts:

Uno: “your transactions are guaranteed against fraud”. Whilst true, the burden of noticing the fraud and proving it still falls to you. Even once detected, it will often take weeks to get your money back [2]. Additionally, contactless payments are often processed offline, so a stolen card can still be used for weeks after it has been reported to the bank [3]. I don’t know about you, but I probably wouldn’t notice an isolated transaction for £20, and even if I did, would I spend hours on the phone to the bank, followed by filling in claim forms? Probably not. So like many things in life, prevention is definitely much better than cure.

Dos: “contactless cards only work at short distances”. Whilst this is true for the merchant terminals (intentionally so, otherwise anyone standing at the same bar could be accidentally paying for your drinks. Heaven forbid), it isn’t true for a custom piece of hardware [4]. Using the right equipment, your contactless card can be accessed by someone standing well away from you, and you would never know. Makes sense though, after all it is contactless by design, no?

Tres: “contactless card transactions can only be made by authorised merchants”. This bit is true, and what’s more, to be an authorised merchant you need to jump through a collection of hoops to prove your identity. However, that isn’t the whole story. The information available to someone accessing your contactless card includes the long card number (which the card industry refers to as the PAN) and the expiry date, both of which can be obtained without making a contactless payment [5]. These are the self-same details that the bank considers sensitive, and encourages you to protect so that you don’t become the victim of fraud [6]. Yet the banks themselves have put them on your payment card, where anyone in the same room can read them without you ever knowing. Doesn’t make a lot of sense, does it?

So in summary, if you think that it would be fine to print your card details on a T-Shirt and wander around, then you have nothing (new) to worry about. For everyone else, I would recommend keeping all your contactless cards (yes, your Oyster card and building access tokens too) in something designed to protect them from unauthorised access.

References


  1. http://www.thatsnonsense.com/can-criminals-press-a-contactless-pos-device-to-your-wallet/ 
  2. http://www.bbc.co.uk/programmes/articles/1KD40dVs0FmtnRv4ByszLr8/bank-fraud-easy-to-be-a-victim-hard-to-get-your-money-back 
  3. http://www.theguardian.com/money/2015/dec/19/contactless-payments-card-fraud-after-cancellation-bank-account  
  4. http://www.telegraph.co.uk/finance/personalfinance/bank-accounts/10416659/Engineers-claim-to-prove-risks-of-contactless-bank-cards.html  
  5. http://www.which.co.uk/news/2015/07/which-reveals-contactless-card-flaw-409322/  
  6. https://www.lloydsbank.com/credit-cards/internet-fraud-protection.asp