Risky Business #250 -- Hack it like it's 1999

Getting nostalgic with Huawei stack-based overflows...
August 14, 2012

On this week's show we chat with Recurity Labs' Felix "FX" Lindner and Greg Kopf in the feature segment.

These guys recently shredded some Huawei equipment. They owned it hard and turned it into a DEFCON talk [pdf]. They'll be along a bit later on to tell us why hacking away at Huawei kit made them feel nostalgic.

This week's show is brought to you by the fine folks at Australian pentesting firm HackLabs, so I hope you'll keep them in mind next time you're firing off those RFPs!

HackLabs founder and main man Chris Gatford joins us in this week's sponsor slot to discuss the extremely clever social engineering attack against accounts belonging to technology journalist Mat Honan. He got owned pretty hard. No clientsides, no exploits, no bruteforcing. Just a few phone calls.

Adam Boileau, as always, joins us to discuss the week's news headlines. Grab links to this week's stories in the show notes page here.


pleriche

Pat - I'm not advocating keystroke dynamics specifically, but rather using a range of mechanisms which can be combined to estimate a probability that the supplicant is genuine. That said, I don't think keystroke dynamics need be very difficult to implement - at the simplest level all you need is a 26x26 matrix of inter-key timings, but there's probably a PhD's worth of statistical refinement you could add. (But the matrix might need to be a bit bigger for the Chinese market!) Implemented as a traditional key-logger, it could provide an input to an algorithm which would continuously adjust the strength of additional authentication required for a privileged operation, or the value of a transaction that would be allowed before invoking a 2-man rule, according to the confidence in the authentication.
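The "26x26 matrix of inter-key timings" idea above can be sketched in a few lines. This is purely an illustrative toy, not any real product's implementation: the function names, the simple averaging, and the 40ms tolerance are all assumptions made up for the example.

```python
# Toy sketch of the keystroke-dynamics scheme described above: enrol a
# profile of average inter-key (digraph) timings, then score a fresh
# typing sample against it. All names and thresholds are illustrative.
from collections import defaultdict

def build_profile(keystrokes):
    """keystrokes: list of (char, timestamp_ms) pairs in typing order.
    Returns a dict mapping each key pair to its average gap in ms."""
    sums = defaultdict(float)
    counts = defaultdict(int)
    for (c1, t1), (c2, t2) in zip(keystrokes, keystrokes[1:]):
        sums[(c1, c2)] += t2 - t1
        counts[(c1, c2)] += 1
    return {pair: sums[pair] / counts[pair] for pair in sums}

def confidence(profile, sample, tolerance_ms=40.0):
    """Fraction of digraphs in the sample whose average timing falls
    within tolerance of the enrolled profile (0.0 to 1.0)."""
    observed = build_profile(sample)
    matched = total = 0
    for pair, gap in observed.items():
        if pair in profile:
            total += 1
            if abs(gap - profile[pair]) <= tolerance_ms:
                matched += 1
    return matched / total if total else 0.0
```

A real system would need far more statistical care (per-user variance, outlier rejection, minimum sample sizes), which is where the "PhD's worth of refinement" comes in; the point is just that the raw data structure is trivially small.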

Humanly, we do this sort of continuous reauthentication without thinking. If you and I sat down for a chin-wag in a bar, after a certain number of beers you'd be able to determine quite easily the point at which I started talking rubbish and no longer could be taken seriously!

Patrick Gray

Heya Phil, thanks for making the time to comment.

I think the idea of being able to create some type of fingerprint of someone's typing patterns isn't in itself a terrible one, but it seems to me that this type of authentication measure would be extremely difficult to implement... to the point where the implementation burden would outstrip the positives.

It sort of reminds me of one of the CINDER submissions (cyber insider DARPA thingy) where someone was proposing monitoring computer operators through webcams and trying to automate the detection of changes in their behaviour. Can you imagine the FP rate?

Also, how do you securely store these signatures? They'll become another bit of auth data that can be stolen, just like passwords... the only difference is you can change your password, but you can't change the way you type.

I just think it falls down on a few levels and I really, sincerely doubt it'll be adopted anywhere. I've been wrong about this sort of thing in the past though, so who knows! :)

pleriche

Hey, Pat, I think you and Adam were maybe just a bit hard on the keystroke authentication guys in #250. I totally agree that the CNet article, implying that passwords could soon become redundant, is a load of ****, but over the months we've heard - passwords: fail; tokens: fail; and now biometrics: fail. So what are we left with? There just isn't another game in town, or even riding over the horizon.

We like to knock the banks when they get it wrong, but I would contend that maybe they're the only sector which has correctly analysed the problem and come up with some sort of solution - in so far as there is one. The key realisation is that authentication can rarely be expressed as a purely go/no go process, but rather it's a means of establishing an acceptable probability that the supplicant is who he claims to be. For example, in deciding whether to accept a credit card transaction as genuine, the banks have sophisticated systems for watching patterns of usage in order to spot an anomalous transaction, and if necessary, apply further authentication such as calling the card holder's mobile number.

Seen in this context, although keystroke dynamics would be useless on its own it could be used to build confidence (or otherwise) in a primary authentication, along with other schemes (such as the Stanford University implicit learning authentication reported a few weeks ago), which equally might be useless or impracticable on their own.

Banks do have an advantage in this: they can calculate the cost of a false acceptance and set the confidence required for authentication accordingly. (We'll pass over whether it's the cost to them or the cost to the customer, but the principle holds.) Unfortunately, I'm not sure many CISOs would be quite ready to define a threshold confidence level they could accept for authentication to corporate systems, though paradoxically, perhaps some government agencies might be closer as they are used to calculating password strength in terms of entropy and encryption strength in terms of key length.

In time, we need to get to a situation where a required authentication confidence level is set for each information asset according to its value, and achieved by an appropriate range of authentication measures. Thus, a simple password might be required for access to the company intranet, additionally a token for some systems, and biometric or behavioural authentication on top for the most sensitive systems. Furthermore, the authentication system should ideally continue to monitor behaviour after access has been granted, as unusual patterns might make it necessary to revise the original confidence level achieved. And if we recognise that a lower confidence may be acceptable for reauthentication after a screen lock (depending on the known physical security level of the user's location), there may be a dividend for the user too.
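The per-asset model above can be made concrete with a small sketch. Everything here is hypothetical: the asset names, the threshold figures, and the independence assumption between factors are illustrative choices, not a recommendation.

```python
# Hypothetical sketch of per-asset authentication confidence: each
# mechanism (password, token, behavioural biometric) contributes an
# independent confidence estimate, and access is granted only when the
# combined confidence meets the asset's threshold. Figures are made up.

# Required probability that the supplicant is genuine, per asset.
ASSET_THRESHOLDS = {
    "intranet": 0.90,        # a simple password suffices
    "finance_system": 0.99,  # password plus token
    "hsm_admin": 0.999,      # password, token and behavioural factor
}

def combined_confidence(factor_confidences):
    """Treat factors as independent: an impostor must defeat every one,
    so P(impostor passes) is the product of each (1 - confidence)."""
    p_impostor = 1.0
    for c in factor_confidences:
        p_impostor *= (1.0 - c)
    return 1.0 - p_impostor

def access_granted(asset, factor_confidences):
    return combined_confidence(factor_confidences) >= ASSET_THRESHOLDS[asset]
```

For example, a password alone at 0.95 clears the intranet threshold but not the finance system; adding a token at 0.90 lifts the combined confidence to 0.995, enough for finance but still short of the most sensitive tier. The independence assumption is generous (stolen credentials often travel together), which is one reason continuous behavioural monitoring after login, as suggested above, matters.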

In summary, let's not knock individual authentication components but rather look for progress towards building complete authentication systems. And let's not forget that there are three essential components to an authentication system: strong enrolment (or binding of the credentials to the individual), a strong authentication mechanism or mutually supportive set of mechanisms, and strong monitoring in order to detect attacks and validate the ongoing effectiveness. Given such a system, we can start to think about whether it's any good.

Regards - Philip

Patrick Gray

Sorry, my bad. Left the http:// out of the link. Fixed!

ss23