
Risky Biz Soap Box: Keep your vendors honest with attack simulation

Presented by

Patrick Gray

CEO and Publisher

This month’s Soap Box podcast is brought to you by AttackIQ, a company that makes attack simulation software.

This is a wholly sponsored podcast that won’t bore you to tears.

There are countless CISOs who listen to this podcast who’ve shovelled an awful lot of money at their organisation’s security controls. Whether that’s endpoint/AV or fancy network kit that’s supposed to detect exfil, the sad truth is most organisations have no way to know if their expensive kit is actually doing what it’s supposed to.

Until, of course, they get breached. Then there is much wailing and gnashing of teeth.

So the idea behind attack simulation is pretty simple. You load a lightweight agent onto your corporate systems, and the agent then runs scriptable attack scenarios that simulate attacker behaviour.

These attack scripts might get some endpoints to start nmapping internal systems. They might start changing some registry keys, or generating a bunch of disk activity that looks like an encryption/ransomware process. They might start sending off a bunch of dummy data via a DNS exfil technique. Did your endpoint solution catch the funny registry stuff? Did your network controls catch the simulated exfil?
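To make that last one concrete, here's a minimal hypothetical sketch of how a DNS exfil simulation might work. This is not AttackIQ's actual implementation, and the function and domain names are made up for illustration; it just shows the general shape of the behaviour your network controls should be catching: dummy data encoded into DNS query names.

```python
import base64

def dns_exfil_queries(data: bytes, domain: str, label_limit: int = 63) -> list[str]:
    """Encode dummy data into DNS query names, the way a simulated
    exfiltration scenario might, so detection controls can be exercised."""
    # Base32 keeps the payload within the hostname character set.
    encoded = base64.b32encode(data).decode().rstrip("=").lower()
    # DNS labels are capped at 63 characters, so chunk the payload.
    chunks = [encoded[i:i + label_limit] for i in range(0, len(encoded), label_limit)]
    # One query name per chunk; the sequence number lets the receiver reassemble.
    return [f"{i}.{chunk}.{domain}" for i, chunk in enumerate(chunks)]

# Build the "exfil" query names for a chunk of dummy data. A real agent
# would actually resolve each name so your DNS monitoring gets a chance
# to fire; here we just construct the list.
queries = dns_exfil_queries(b"dummy customer records, nothing real", "exfil-test.example.com")
```

In a real simulation each of those names would be looked up against your resolvers, and the test passes or fails depending on whether your exfil-detection kit flags the traffic.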

Now imagine you have 1,000 pre-coded attack simulations with all sorts of different combinations and permutations of attacker behaviours. How many of them do you actually need to run through before you can spot the weak points in your defences?

Attack simulation is a great way to test and validate your security controls, and you can do it continuously.

AttackIQ’s cofounder and CEO Stephan Chenette joined me to talk about attack simulation and what it’s good for.


No encryption was harmed in the making of this intercept

Presented by

Patrick Gray

CEO and Publisher

(UPDATE 17/7/17: The original version of this post implied major technology companies were only handing over user metadata via Mutual Legal Assistance Treaties. That is not the case and the piece has been edited for clarity.)

Over the last few days people have been losing their minds over an announcement by the Australian government that it will soon introduce laws to compel technology companies to hand over the communications of their users.

This has largely been portrayed as some sort of anti-encryption push, but that’s not my take. At all.

Before we look at the government’s proposed “solution,” it might make sense to define some problems, as far as law enforcement and intelligence agencies are concerned. The first problem has very little to do with end-to-end encryption and a lot more to do with access to messaging metadata.

If you’re Australian and you’re reading this blog, you’d most likely know that Australia passed a metadata retention law that came into effect in April this year. It requires telecommunications companies and ISPs (i.e. carriage service providers, or CSPs) to keep a record of things like the IPs assigned to internet users (useful for matching against seized logs) as well as details around phone, SMS and email use.

The problem is, people have moved towards offshore-based services that are not required, under Australian law, to keep or hand over such metadata. Think of iMessage, WhatsApp, Signal, Wickr and Telegram.

Australian authorities do have options when it comes to requesting metadata from these companies. They can just ask, and depending on the company they might get something back. I’m told the major companies generally help out, especially those with a presence here. Companies like Berlin-based Telegram? Not so much.

Some other companies might just tell you to go away. Then the only way forward, depending on where the app maker is based, might be an MLAT – a request through a Mutual Legal Assistance Treaty, specifically, the Mutual Assistance in Criminal Matters Act of 1987.

Detective plod draws up the paperwork, then the request goes off to our Attorney General’s Department, then to the US AG, then to the FBI, and then you might get something back about a year later. If you’re lucky.

If you’re seeking useful metadata involving communications that took place via Signal, you won’t get anything back anyway because they just don’t log much. (This is also an issue for US law enforcement.)

Currently, metadata access is at the whim of a patchwork of company policies, and the metadata tap – in the case of some communications apps – has been turned off completely. And as far as law enforcement is concerned, blocks to obtaining metadata are a very big problem.

There are no easy solutions here, but it’s part of the reason you’ve heard our Attorney General George Brandis talk a lot about treaties and mutual assistance over the last few months. Currently, there’s nothing the Australian government can do to speed up the process when authorities are dealing with offshore organisations.

The second problem involves messaging content. Now that we live in a world where anyone can buy a secure mobile handset (an iPhone) and use an end-to-end (e2e) encrypted messaging application (WhatsApp, Signal etc), there are serious challenges around intercepting communications. Currently, if you ask Facebook for some WhatsApp messaging data, they can simply say they don’t have it. That’s the beauty of end-to-end encryption.

But the Australian government has announced proposed laws that will seek to compel tech companies to hand over the content of user communications, e2e encrypted or not.

It’s very, very important to note at this point that there are legal barriers to obtaining communications content that simply don’t apply to metadata. Metadata is made available by request in most jurisdictions (i.e. without a warrant), but content is a whole other ballgame. In the case of a typical criminal investigation the police need a telecommunications intercept warrant to tap someone’s phone or internet connection. They can’t simply request it.

It’s here that people have spun off planet earth into frankly bizarre speculation as to what the government wants.

I’ve seen an awful lot of people suggesting that the government will compel tech companies to downgrade the encryption they use in their products, either by forcing them to adopt weak ciphers or maybe some sort of funny curve, reminiscent of the suspect Dual Elliptic Curve Deterministic Random Bit Generator incorporated into RSA’s BSAFE library. (That’s a mouthful, but you can read about that here.)

The thinking is, if everyone starts running crap crypto, the coppers can sniff the communications off the wire.

Let me put this bluntly: If this is what the government winds up suggesting, then by all means hand me a bullhorn and show me where to point it. It is a ridiculous idea that would erode so many of the security gains that we’ve made over the last decade.

But this is not what the government will suggest. If you want to know what this will look like from a technical perspective, just look at how authorities currently address this problem.

Thanks to our pal Phineas Fisher, we’ve had a glimpse into the sausage factory that is the law enforcement trojanware industry. Gamma Group and Hacking Team, two companies that make surveillance software for mobile phones, were both hacked by Mr. Fisher and the gory details of their operations laid bare.

What we learned is that law enforcement organisations already have perfectly functional trojans that they can install on a target’s phone. These trojans can already intercept communications from encrypted apps.

If you can access the endpoint – the phone – then you can access the user’s messages. No weakening of encryption is required.

These types of law enforcement trojans have typically been delivered to handsets by exploiting security vulnerabilities in mobile operating systems. Unfortunately for law enforcement, but fortunately for us, exploiting vulnerabilities on mobile handsets has become more and more difficult, time consuming and expensive. iOS is the leader here, a damn fine operating system, but Android is definitely catching up.

I want to spell this out clearly so there’s no confusion: The government already has the legal authority to access your end-to-end encrypted messages if they have a warrant. The barrier is not a legal one, it’s a technical one. The expensive exploits used to deliver interception software to handsets are rationed due to cost, they feed an industry full of shady players like Hacking Team, and in some cases agencies are simply unable to install surveillance software on the phones of some really god-awful people, even though they have a warrant.

So, the government wants the tech companies to “fix” this for them. That’s why they’re not talking technical details. The regime will not be prescriptive, and thankfully the government knows that it’s probably not the most appropriate organisation to advise Apple or Google on the finer points of technology.

The feeling is non-US law enforcement and intelligence agencies aren’t getting the coverage they’d need to do their jobs. This is why we’ve seen New Zealand and the UK pass laws that supposedly compel US companies to assist them when they ask. (I hear they’re not being enforced yet.)

So let’s break down how it may work: Under this law, the AFP might ask Facebook, which owns WhatsApp, to hand over the message history and future messages of user X, because they have a court-issued warrant.

Now it’s all very well and good for WhatsApp to argue that it doesn’t have the technical means to do so, a response that has led to all sorts of tangles in Brazil’s courts, but the Australian law will simply say, “We don’t care. Get them.”

In practice, there are a number of ways to skin this cat that don’t involve weakening encryption.

For example, until May this year, WhatsApp backups weren’t even encrypted. (That’s right, all this song and dance about your messages being end-to-end encrypted, only to have them shunted into services like Apple’s iCloud, and we all know how well protected iCloud is!)

Even now, the precise encryption technique used by WhatsApp isn’t clear. Are they using a key generated on your device to encrypt your messages? That would be of limited use, considering the point of a backup is to restore your message history when you lose your phone and the corresponding encryption key. So my guess is it’s a form of encryption that is recoverable by WhatsApp.

What if the user doesn’t have backups turned on? Well, I’m sure there are some clever people out there at WhatsApp HQ who could figure out how to turn on a user’s backups for them.

A retort I often hear when I lay out a scenario like that one is that users will just move to another app, maybe something like Telegram, which is based in Germany. At that point, an enterprising police officer might contact either Google or Apple, two companies that control something like 99% of the cellphone market share, and ask them to devise a way to retrieve the requested data from that device. Like, say, pushing a signed update to the target handset that will be tied to that device’s UDID (Unique Device Identifier). That way there’s no chance the coppers can intercept that update and re-use it on whomever they want.

Again, no encryption was harmed in the making of this intercept.

There are some legitimate concerns around how a regime like this could be abused. However, the legal bar for content interception here in Australia is much higher than for metadata. Content access requires a warrant. If cops were looking to abuse this access then they’d need to engage in some pretty serious criminality, like forging warrants. And if the access regime revolves around asking the tech companies to do the grunt-work on behalf of the authorities, all intercepts should actually be easy to audit periodically.

In other words it would be a stupid way to spy on your girlfriend.

Now look, I’m not advocating for these laws. I’m not. What I am trying to do is move the goalposts for this discussion. The responses that I’ve seen to this proposal from the Twitterati have mostly been really daffy. People will insist the government doesn’t know what the hell it’s asking for (it does), that it wants to break maths (it doesn’t) and that it’s impossible for technology companies to provide law enforcement with what they need without introducing unacceptable new vulnerabilities and risks into our technology ecosystem (depends on your definition of “unacceptable”).

I’d like to see the goalposts set up around a much simpler discussion than one about technology and encryption: To what degree do we believe, as a society, that the right to privacy is absolute?

Do we believe that law enforcement bodies should have the authority to monitor the communications of people suspected of serious criminal offences? If so, what should the legal process for provisioning that access look like? I mentioned auditing access under this scheme a couple of paragraphs ago. If we’re going to have a regime like this, can we have a decent access auditing scheme please? These are the sorts of things I would prefer to be talking about.

It’s also important to remember that Australia is not America. We don’t really have the same libertarian streak as our US cousins, so it’s entirely possible there won’t be a substantial backlash to these proposals. That makes framing this discussion properly – as a conversation about balancing our need for privacy with our desire for safety – vitally important.

If people who want to participate in this debate keep screaming that the government consists of a bunch of idiots who want to outlaw maths, well, the real conversation just won’t happen and no meaningful controls around the extent of access and the oversight of that access will be granted.

Not that you can expect grown-up conversations between the tech firms and the government. The tech companies will fight this tooth and nail, both on libertarian/political grounds, and on business grounds. The government will do the usual scaremongering around terrorists and pedophiles. Expect some downright misleading information from both sides and absolutely bonkers salvos fired in both directions.

Can’t wait.

PS: Blind Freddy could have seen this coming.

Risky Business #461 -- AWS security with Atlassian's Daniel Grzelak

Presented by

Patrick Gray

CEO and Publisher

Adam Boileau

Technology Editor

On this week’s show we chat with Atlassian’s head of security, Daniel Grzelak, all about some AWS security tools he’s come up with. He also previews a new tool for generating AWS access key honeytokens at scale, which is really neat.

This week’s show is brought to you by Veracode!

Veracode’s director of developer engagement, Peter Chestna, will be along in this week’s sponsor interview to have a yarn about some common misunderstandings between security people and developers. We look at misunderstandings both ways.

Adam Boileau is this week’s news guest. We talk about all the latest dark markets drama, plus the Great Nuclear Hax Freakout of 2017.

See links to show notes below, and follow Patrick or Adam on Twitter if that’s your thing!


Risky Business #460 -- Haroon Meer talks Kaspersky drama, NotPetya, the cryptowars and more

Presented by

Patrick Gray

CEO and Publisher

Adam Boileau

Technology Editor

Adam Boileau has some out of town business to handle this week so he can’t join us in the news segment. But that’s ok, because industry legend Haroon Meer has very kindly agreed to fill in for him! We chat to Haroon shortly about all the latest NotPetya developments; we’ll also talk about the drama Kaspersky is experiencing right now, as well as dissect the latest battle reports from the cryptowar! All the news is covered.

This week’s show is brought to you by ICEBRG!

ICEBRG’s co-founder, Will Peteroy, joins the show this week to chat a bit about what they’re up to. Will has an interesting background. He was the technical director of a government agency Red Team. That meant red team exercises against agencies, but he was also responsible for doing assessments on security products. He also put in a bunch of time at Microsoft where he was the endpoint for product security for Windows and Internet Explorer, which meant he was the recipient of oh-so-much-0day for around a year and a half. So yeah, Will knows what he’s doing, and he’s made a thing, and you’re going to hear about that thing after this week’s news.

See links to show notes below, and follow Patrick or Haroon on Twitter if that’s your thing!


Risky Biz Soap Box: Bugcrowd founder and CEO Casey Ellis on the future of crowdsourced security

Presented by

Patrick Gray

CEO and Publisher

In this edition of the Risky Business Soap Box podcast we chat with the founder and CEO of Bugcrowd, Casey Ellis, about the establishment of the bug bounty market and how things have shaped up. We also look at where it’s going.

The days of bounty programs being operated solely by large technology firms are long gone. Casey predicted that shift years ago. The question becomes, where will bounty programs be in three years from now?

Well, Casey doesn’t shy away from making some bold predictions. He thinks most enterprises will have vulnerability reporting mechanisms within two years, and a substantial proportion of those will offer rewards to bug hunters via companies like Bugcrowd.

He also sees bounty programs increasingly serving the specialist market.

You can find Casey on Twitter here.


Risky Business #459 -- Actually yes, "cyber war" is real for Ukraine

Presented by

Patrick Gray

CEO and Publisher

Adam Boileau

Technology Editor

This week we’ll be chatting with Andy Greenberg from Wired about his cover story for that magazine. He travelled to Ukraine back in March to research his story on Russian attacks against the Ukrainian power network. He joins us this week to share the insights he gleaned during his travels.

This week’s show is brought to you by SensePost.

SensePost are based in South Africa and England, but they are very well known for offering training courses at Black Hat. This year will be the 17th year they’ve run training there. As you’d expect, their brand new devops security course has gone absolutely gangbusters in terms of registrations, but they’re also offering a bunch of other courses. They’ll be joining us to chat about trends in training in this week’s sponsor interview.

Adam Boileau, as always, drops by for the week’s news segment. You can add Patrick, or Adam on Twitter if that’s your thing. Show notes are below…


Risky Business #458 -- Reality Winner, Qatar hax and Internet regulation calls

Presented by

Patrick Gray

CEO and Publisher

Adam Boileau

Technology Editor

On this week’s show we’re covering off all the big news of the week: the arrest of Reality Winner, the apparent hacks that have ratcheted up the political crisis in Qatar and the renewed calls for Internet companies to be more government-friendly.

In this week’s feature interview we catch up with Samy Kamkar to get his take on what the lowering cost of hardware-based hacking could mean for our increasingly automated world. And in this week’s sponsor interview we chat with Duo Security’s Pepijn Bruienne about some recent attacks against the Mac OS software supply chain.

Big thanks to Duo Security for sponsoring this week’s show. Duo makes all manner of kick-ass two factor authentication solutions, you can check them out at Duo.com.

You can add Patrick, or Adam on Twitter if that’s your thing. Show notes are below…

Patrick is taking a vacation. Risky Business will return on June 28


Risky Business #457 -- Shadow Brokers turn to ZCash, plus special guest John Safran

Presented by

Patrick Gray

CEO and Publisher

Adam Boileau

Technology Editor

On this week’s show we’re taking a detour: This week’s feature interview has absolutely nothing to do with infosec. But it is related to the Internet. Sort of. If you squint a little.

This week’s feature guest is John Safran. He’s been gracing television screens here in Australia for nearly 20 years, but John is also a rather brilliant author. I’ve just finished reading John’s new book, Depends What You Mean by Extremist: Going Rogue with Australian Deplorables. Honestly, it’s fascinating enough for me to just squeeze it into this show.

Basically John wrote a book about the year and a half he spent hanging out with all sorts of extremists; Left-wing Marxists, anarchists, right wing anti-Islam types and even Islamic State supporters, some of whom are now up on terror-related charges.

I speak to John about the Internet’s influence on extremism, as well as extremism in general. I highly, highly recommend this book. It’s a fascinating look at the contemporary political landscape through the eyes of extremist movements of all flavours, and it’s not a tough read. It’s actually quite funny, and it’s really the most on-point thing I’ve read in a long, long time.

This week’s show is brought to you by Bugcrowd, big thanks to them! And in this week’s sponsor interview we’ll chat with Casey Ellis, Bugcrowd’s founder and CEO. Now that outsourced bug bounties have gone mainstream, we know more about what they’re for and how people find them useful. So we speak to Casey about how a lot of orgs are basically just throwing the lower-value testing out to bounties to free up their infosec teams to do higher-value work. We talk about that and a couple of other points.

Adam Boileau, as always, drops in to discuss the week’s security news!

You can add Patrick, or Adam on Twitter if that’s your thing. Show notes are below…


I got a detail wrong in my latest conference talk

Presented by

Patrick Gray

CEO and Publisher

During last week’s AusCERT conference I did a 50 minute talk that reflected on a 15 year career writing about information security. It was a repeat of the talk I did at BSides Canberra in March.

It covered thoughts on attribution, fake activist groups (Guardians of Peace, Cutting Sword of Justice etc), the possible motivations of high-impact leakers (Mark Felt, Chelsea Manning, Edward Snowden) and the need to create norms around acceptable state behaviour when it comes to computer network operations.

In the leakers section I got a detail wrong and I want to correct it. Hopefully I’ll convince you that in context of what I was talking about the error doesn’t actually change all that much.

That whole section of the talk was really written to put forward the case that leakers have complicated motives. Even when leaks are in the public interest, it doesn’t mean that the leakers’ motives are as pure as the driven snow.

I speculated that perhaps FBI deputy director Mark Felt, better known as Watergate source Deep Throat, might have been tactically leaking against people who stood in between him and the FBI directorship. He loathed both Nixon and FBI director L Patrick Gray (no relation) and only lasted another month at the bureau after Gray got the knife and was replaced by William Ruckelshaus.

So that’s a theory: His leaks brought down the people in his path, but in the end he didn’t get the top job, so he resigned. I wasn’t trying to prove Felt was motivated by self interest, just that it’s a plausible motivator.

I also spoke about Chelsea Manning. She was relentlessly bullied during her time in the army, frequently clashing with both her superiors and the rank and file. I have no doubt that Manning is indeed, as she claims, a pacifist. But I also have no doubt that the relentless bullying influenced her decision to leak. She was isolated and miserable, but found a friend in Wikileaks’ Julian Assange. I sincerely believe there was an element of rage underpinning those leaks. Some revenge. (And honestly? Fair enough. The military failed her, big time.)

Eventually I boil the whole thing down to these factors: Self interest, public interest, ego, rage and combinations of the four.

To explore ego as a possible motivator, I spoke about Edward Snowden. Snowden always strived for great things but didn’t quite make the grade. He wanted to be a special forces soldier, he failed. He wanted to be NSA TAO, he failed. But when he leaked massive amounts of NSA documents, he could invent himself as anything he wanted, and he has. But a bunch of his public statements about his experience at NSA seem pretty shaky, bordering on outright bullshit.

It’s been nearly four years since Snowden went public with his leaks. In the talk I said it feels to me like something is off about the guy. Details have filtered out through the grapevine, and they tend to clash with his public statements.

It’s clear, for example, that he massively overstated his seniority at NSA. And parts of his story just don’t line up. I’m not talking about the conspiracy theories that a foreign power put him up to it or that he was some sort of spy – I think that’s really, really unlikely – it’s more that he misled on things that are basically inconsequential, like his reason for washing out of his military training. He also failed to correct some really shitty reporting on his leaks.

We’re getting to the mistake, hang in there.

As an example of Snowden coming across as less than totally honest I cited his non-reaction to an article written for The Guardian about the so-called PRISM program in 2013. In that piece, Greenwald writes: “The Prism program allows the NSA, the world’s largest surveillance organisation, to obtain targeted communications without having to request them from the service providers and without having to obtain individual court orders.”

In my talk I described that as totally wrong, but it’s actually only mostly wrong.

There was no “direct access” and NSA did actually have to request this material from the service providers. That’s been established. The part I got wrong is that NSA doesn’t actually have to obtain an individual court order for every selector it tasks; in my talk I said it did.

Selectors are created under FISC oversight, but the court’s job is to ensure the compliance of those selectors to the rules it established and maintains, not to green-light each selector.

Over the last few years I’ve chatted with people who are familiar with this program. For their part, the technology companies mentioned in the PRISM program stories were all baffled when the story broke, both publicly and privately. Greenwald made it seem that the NSA had unfettered access to their servers. Their response, in most cases, is that they would only hand over data to the authorities if there was a valid court order.

So, over the years I’ve asked some people who’d know to tell me about the process that NSA goes through to “task” collection on an individual using PRISM.

They said that in order to obtain information from a company like, say, Facebook, they’d have to start by preparing a “FISA package”. This means they’d have to put together a case that could show the proposed target isn’t a US citizen, is not in the USA, and that intercepting their data is likely to reveal something of importance to national security.

These packages are worked up – that process involves senior NSA staff – then the package is sent up the chain for authorisation. When authorisation is granted, it’s the FBI, not the NSA, that approaches the technology company and asks it to hand over the data.

And here’s where I made the mistake: The tech companies said they hand over data based on court orders. People familiar with the NSA side of this program described the authorisation process for each individual target. I mistook these two data points as meaning the FISA court was authorising each individual collection. It doesn’t.

The package is actually sent off to the Office of the Director of National Intelligence (ODNI) and Department of Justice (DoJ) for post-tasking review. You can read about that process here. That’s the detail I got wrong.

But the FISA court is involved. It oversees and mandates the process through which the validity of selectors is determined, and there was regular review of the rules around tasking. Everyone tells me these rules were strict and adhered to rigidly. That’s not to say mistakes aren’t made. In a post-Snowden review, NSA found 0.4% of PRISM tasking accidentally collected the information of people who were either located in the USA (not allowed) or US citizens (also not allowed).

I realised I got this detail wrong when fellow AusCERT attendee Troy Hunt posted a picture of my slide that referenced FISC authorisations for individual selectors. Just looking at that slide in isolation I had a funny feeling.

So I went back to my notes and some source documents and realised I’d made the mistake. I asked Troy to remove the Tweet, not because I’m trying to hide my mistake, but because I don’t want people to believe something that isn’t true. It was a typical case of a non-lawyer getting something law-related wrong.

That said, I don’t think it really changes my argument with regard to Snowden. Even though some people may see ODNI and DoJ selector authorisation as inferior to direct authorisation by a court, albeit a secret one, the fact remains that none of the reporting even acknowledged any oversight or even a process for tasking.

Take this Ed Snowden quote: “I, sitting at my desk, certainly had the authorities to wiretap anyone, from you or your accountant, to a federal judge or even the President, if I had a personal e-mail,” he told The Guardian.

No, Ed, you didn’t.

In the case of PRISM I’m pretty sure the NSA senior staff might object, given collection against US citizens is verboten under 702. If they didn’t then ODNI or DoJ might have some feelings about it. And if they let it through my guess is the FBI might actually think something was wrong if you were trying to task collection on the US president.

Even if he wasn’t talking specifically about the PRISM program in that instance, everyone I’ve ever known who spent any time at a five eyes SIGINT agency tells me the same thing – everyone’s searches are logged and audited no matter what the program. The compliance hurdles and internal rules are universally described as a pretty serious (but necessary) pain in the ass.

This next part is important: I’m not an expert in intelligence oversight, and I can’t say whether the NSA’s oversight is appropriate or not. But I can say that it’s just crazy to write up stories about these programs without even mentioning the tasking procedures, auditing and oversight. These stories have convinced people that individual NSA operators could simply spy on whoever they like, using direct access to the back-end servers of major Internet companies. It’s just not correct.

My argument is Snowden’s silence following the publication of some of these stories is a massive red flag when it comes to his credibility.

But because he painted himself as a truth-telling whistleblower, Snowden was able to convince some journalists and many among the public that he was the only source who could be trusted when it came to discussing these programs. Everything else, his supporters say, is disinformation.

Of course, there has been legitimate public interest in Snowden’s disclosures. The NSA had been doing some pretty shady shit, most notably the (since discontinued) 215 phone metadata collection program. But that doesn’t make Snowden himself a saint. He’s not. He is what I’d charitably describe as “properly weird”.

In telling that story, I did get a detail about oversight wrong. Sorry about that!

Risky Business #456 -- Your MSP *will* get you owned

Presented by

Patrick Gray

CEO and Publisher

Adam Boileau

Technology Editor

On this week’s show Adam pops in to discuss the week’s news. (Links below) After the news segment Adam and Patrick both chat about topics near and dear to their hearts: Shoddy infosec marketing and shoddy MSP security.

This week’s show is brought to you by WordFence, a company that makes a WordPress security plugin. It’s not so much an enterprise security tool, but it turns out that when your plugin runs on two million WordPress sites you wind up collecting some pretty valuable threat intel and IOCs. WordFence’s Mark Maunder joins the show this week to talk about WordPress security and malware distribution!

You can add Patrick, or Adam on Twitter if that’s your thing. Show notes are below…
