If Typhoid Mary carried a cell phone, we would all want to know where she’d been over the last few days.
Technology exists right now to trace the historical location and movement of any person who has tested positive for COVID-19. That location history is more detailed and accurate than the information the Centers for Disease Control and Prevention (CDC) gets from interviewing people who have tested positive, and it can be used to map the trajectory of the disease over time and place, all while protecting privacy. However, privacy concerns and a lack of resources within public health organizations have hindered development of a location history solution.
These concerns are understandable, because there have been reports of third-party location aggregators and surveillance equipment providers trying to sell bulk location information to the government.
A better approach - discussed below - dismisses third-party aggregators because they are largely unaccountable, their data sources are speculative and lack consent provenance, and their data tends to be less comprehensive and less representative of communities.
Over a dozen countries have introduced or deployed tracking technologies, physical surveillance and censorship measures in a bid to slow the spread of the virus. A Digital Rights Index has been published to help stem overreach, promote scrutiny, and ensure that intrusive measures don’t continue for any longer than absolutely necessary.
So how would a location history solution work while protecting privacy? Consider what your device already knows about you. If you use Google Maps, for example, your Timeline can be seen in the Maps Menu. Click and you will see a detailed summary of your daily travels for as long as you’ve stored it, and your actual route is displayed on the adjacent map. My history for January 17th shows that I flew from San Jose to Seattle, took a 1:10pm ferry to Bainbridge Island, went to the barber at 2:30pm, then to the post office at 3pm, then home, and then had dinner at Sawan’s Thai Kitchen at 6:30pm. If I fell sick and tested positive two days later, I doubt that I could relate the details of my movements for two or three days before diagnosis with that degree of specificity.
But if I provide my cell phone number and/or account identifier to the public health official and consent to it, the data could then be sent to the CDC - a governmental entity that, under the Stored Communications Act, can by law request emergency location information from Google or any other platform or provider that maintains my location history.
The emergency request is the same procedure used dozens of times each day where law enforcement submits a request to a provider to disclose user information in emergency cases like kidnappings. It is tried and tested. The infrastructure exists for it right now, including rapid delivery of the data back to the governmental entity.
Privacy concerns can be minimized by ensuring that the user’s opt-in consent for sharing with the CDC solely is for the purpose of tracing potential infectious contacts and cannot be shared with other governmental agencies without the person’s added consent. Further, the CDC can confirm it will destroy the identifiable information promptly upon receipt of the location history - the CDC only needs to know where a person with a positive test traveled and when. Everyone’s location history already is known to their providers; the person who tested positive already is sharing their movements as best as possible with health providers. The person infected is consenting to their information being used to notify others of the risk and for no other purpose. Contact tracing already is being done at the local level with scarce resources.
More can be done once the location history of the infected user is known. Platforms and wireless carriers can use incoming CDC or user data requests to determine how many other users were in the vicinity of the positive case at any given time. This is called geofencing. It is done today in response to search warrants from law enforcement to identify users in and around a crime scene, or, all registered phones on a cell tower serving a crime scene area.
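As a rough sketch of what such a geofencing query might look like, the snippet below finds users whose location pings fall near a positive case's track. It is illustrative only: the `Ping` structure, the 50-metre radius, and the 15-minute window are assumptions, not anything the CDC or any provider actually uses.

```python
from dataclasses import dataclass
from math import radians, sin, cos, asin, sqrt


@dataclass
class Ping:
    """One timestamped location observation for one user."""
    user_id: str
    lat: float
    lon: float
    ts: int  # Unix seconds


def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in metres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + \
        cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371000 * asin(sqrt(a))


def proximate_users(case_track, all_pings, radius_m=50, window_s=900):
    """Return IDs of users whose pings fall within radius_m metres and
    window_s seconds of any point on the positive case's track."""
    hits = set()
    for c in case_track:
        for p in all_pings:
            if p.user_id == c.user_id:
                continue  # skip the case's own pings
            if abs(p.ts - c.ts) <= window_s and \
                    haversine_m(c.lat, c.lon, p.lat, p.lon) <= radius_m:
                hits.add(p.user_id)
    return hits
```

A real provider would run the equivalent query against indexed location stores rather than a nested loop, but the shape of the question - who was near this track, and when - is the same.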
Rather than the CDC simply telling the local community that a person has tested positive in their county, providers instead can tell specific proximate users precise facts by means of a text, email, or device notification: a person who tested positive was on the 9am flight from San Jose, landed at SeaTac at 11:10am and got a cab 10 minutes later, was on the 1:10pm ferry to Bainbridge Island, stopped at various places, and went home. That is actionable intelligence - it relieves the anxiety of people on a later flight or ferry or who ate before the infected person, or all those people who only are told someone has the disease in the community at large. It tells others who were in close proximity that they should self-isolate.
No, this is not a substitute for greater testing, but it may help direct valuable testing resources to a particular at-risk community and to target resources better. Imagine that there were 10 people identified on that 1:10 ferry. With their location maps layered on top of each other, we see a trajectory for the disease throughout the community and further identify the specific risk of immediate contact by others in the vicinity. Perhaps everyone gets directed to shelter-in-place, or, perhaps the proximity map shows only small pockets of concern. Whatever the data shows is immediately actionable at the local level and the CDC will be getting aggregate location data for those in proximity to persons who tested positive.
Knowing that a significant number of persons with the disease were in the general population at a specific time and place is better than any currently available information today, and is more accurate than anecdotal data from those who have tested positive. And again, the CDC (i.e. the government) is only ever getting the opt-in data for the person who tested positive; the providers are doing the rest. Some have complained that this solution is not perfect, doesn’t cover all places or people, isn’t granular enough to avoid “false positives” and requires providers to do something to facilitate it. Right now, the alternative is for everyone to stay home and live with the anxiety that interacting with anyone puts you and your family at risk. That is one big false positive. The approach above is surgical, and most times, good is better than perfect - at least with pandemics.
We also have seen how location information can be used to quarantine or restrict people’s movement in places like China. No one wants a virtual ankle bracelet for quarantine in this country, but those are some of the ideas being floated now. The benefit of the location tracing proposed here is that it is opt-in by those who have tested positive, and privacy protective for the user and all those who were in close proximity to persons so identified. It is better than using a surveillance hammer.
There is some privacy risk to the infected user whose location history becomes part of a map, in that crowdsourcing may identify the individual. But that risk can be lowered by not mapping the end point - if it is a personal residence for example. There is some risk inherent in the use of location data - but again, the degree of specificity for what goes on the map can be determined by the provider and minimized to exclude key data points. A rule might display “post office” but not display “home address”.
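A minimal sketch of such a display rule might look like the following. The allow-list of categories and the drop-the-end-point heuristic are hypothetical, not any provider's actual policy.

```python
# Hypothetical redaction rule: publish a stop only if its category is on
# an allow-list, and drop the final stop of the day outright, since end
# points are the most likely to reveal a home address.
PUBLIC_CATEGORIES = {"airport", "ferry terminal", "post office",
                     "restaurant", "barber"}


def redact_stops(stops):
    """stops: list of (category, label) tuples in chronological order.
    Returns only the stops considered safe to show on a public map."""
    if stops:
        stops = stops[:-1]  # never map the end point of the day
    return [(cat, label) for cat, label in stops if cat in PUBLIC_CATEGORIES]
```

So a day ending at home would surface the post office and the restaurant, but neither the residence nor any stop outside the allow-list.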
It is important to say again that this proposal alone is not a comprehensive solution to the difficult problem of contact tracing. There may be smaller numbers of users with location history enabled on various platforms due to privacy concerns. But if data is drawn from Google, Foursquare, Facebook, Uber, Lyft and other platforms, a comprehensive map will emerge that is sufficient to show trajectory and allow CDC to identify hot spots and resource needs, while simultaneously reducing anxiety in the areas least affected or proximate to individuals who have tested positive.
Albert Gidari is the Director of Privacy at Stanford Law School’s Center for Internet and Society and retired partner at Perkins Coie LLP where he represented wireless companies and Internet platforms.
Read more on proposed contact tracing solutions in the Risky.Biz feature story: ‘The cyberpunk dystopia we feared is here, and just in the nick of time’.
The unprecedented COVID-19 pandemic has raised a thorny question for technologists and lawmakers: how might the location data from our cellphones be used to help contain the spread of the virus?
Two broad use cases have emerged: the first is using location data to monitor compliance with quarantine. And the second is contact tracing - using location data to track down people that have come into contact with a person that tests positive to the virus.
The team at Risky Biz discussed both in a livestream this week with regular co-host and Insomnia Security founder Adam Boileau, adjunct professor at Stanford University’s Center for International Security Alex Stamos, and Crowdstrike founder and former CTO Dmitri Alperovitch.
Monitoring quarantine compliance
In an ideal world, people that have tested positive to a deadly and contagious disease would dutifully self-isolate to prevent further infection, and those that they’ve recently come in contact with would dutifully quarantine before their test results come in.
In Western democracies, the use of monitoring for such a purpose requires legislative change and a dramatic suspension of social norms.
In the United States, governments do not have the legal authority to tap cell phone records or social media data for the purpose of enforcing quarantine compliance. The United States is struggling to even make the case for using geofencing data to convict a suspect in a bank robbery case.
Emergency powers are gradually being put into place as clusters of infections emerge. Airlines, for example, are now required under US law to submit to the Centers for Disease Control and Prevention (CDC) data about all incoming passengers for the purpose of enforcing quarantine. And the White House is now in discussion with US tech giants such as Facebook and Google about how their location data might also be put to use.
Today, anonymised data from mobile networks and apps is already made available to researchers for the purpose of tracking the spread of disease. Users of IoT thermometers, for example, can already opt-in to share their data for use in the aggregate.
But the prospect of using the data at the individual level for purposes that could be deemed punitive is ethically and legally complex.
Albert Gidari, Director of Privacy at the Center for Internet & Society at Stanford Law School notes that the US Stored Communications Act would not permit compelled disclosure. “Any system devised to take advantage of location history would have to be consent-based and rely on voluntary cooperation of providers,” he told Risky.Biz.
Compelled disclosure might also prove ineffective. The Electronic Frontier Foundation argues that the threat of having your movements monitored could create a perverse disincentive: people that feel unwell - but not so unwell as to present for testing - may avoid being tested altogether. And if such a system offered no agency or benefit to those being monitored, what is to stop them from simply leaving their mobile device at home?
“We can’t expect that people who choose to be non-compliant are going to use an app voluntarily,” Boileau notes. “So at that point, [authorities] are left with using the phone infrastructure - or other companies that have location data. In New Zealand, for example, the telcos have the data for emergency call location - and in an emergency, a whole bunch of the usual rules don’t apply.”
There are potential benefits for users - measuring compliance with quarantine would be an important input into determining “how long we should be in lockdown”, he said. In other words - put up with surveillance now, and lives can return to normal much sooner.
But that’s a very difficult sell - what’s acceptable to a person in New Zealand or Scandinavia might not fly in Germany or the United States.
Using mobile location data for contact tracing presents many of the same legal and ethical challenges as monitoring compliance with quarantine. But it offers far more palatable use cases for countries seeking to balance containment of the disease with preserving civil rights in the longer term.
Gidari posits the concept of a system whereby individuals that test positive may voluntarily disclose their mobile phone number or online account identifier to healthcare agencies. The government could then use existing lawful arrangements with tech companies to request rapid emergency access to the user’s location history.
The agency could also request aggregate geofencing data to have the provider alert other users who were in close proximity to the person during their illness. If protected by privacy-preserving caveats - such as limiting which agency can access the data and how long they can retain or use the data - it might be something privacy advocates can live with.
“We don’t need a Korea-style approach to this problem to get actionable data in the hands of the CDC or other health care providers,” Gidari said. “We can protect privacy too.”
Stamos - who has previously been an expert witness on cases that involve location-based data - isn’t confident that cell tower data is precise enough for contact tracing without generating an unacceptable number of false positives. But data from Bluetooth beacons and WiFi SSIDs might do.
The government of Singapore used Bluetooth as part of their efforts to contain the virus. Citizens were encouraged to voluntarily download the ‘TraceTogether’ app, provide the Ministry of Health their mobile phone number and turn Bluetooth on permanently. The app asks for user consent to log any other user of the app that spends more than 30 minutes within 2m of the person. The data is then acted upon if any of the users return a positive test.
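A simplified sketch of that proximity-logging logic appears below. The real TraceTogether app exchanges rotating anonymised identifiers over Bluetooth and resolves contacts server-side at the Ministry of Health; here, an assumed RSSI threshold stands in for the roughly-2m range test, and the 30-minute rule is applied client-side for illustration.

```python
CLOSE_RSSI = -65       # assumed signal-strength threshold approximating ~2 m
MIN_CONTACT_S = 30 * 60  # the app's 30-minute contact rule
MAX_GAP_S = 120        # sightings further apart than this break an episode


def contacts(sightings):
    """sightings: iterable of (peer_id, rssi, ts) tuples sorted by ts.
    Returns IDs of peers seen at close range for a continuous 30+ minutes."""
    episodes = {}  # peer_id -> (episode_start_ts, last_seen_ts)
    logged = set()
    for peer, rssi, ts in sightings:
        if rssi < CLOSE_RSSI:
            episodes.pop(peer, None)  # peer moved out of range; reset
            continue
        start, last = episodes.get(peer, (ts, ts))
        if ts - last > MAX_GAP_S:
            start = ts  # gap too long: start a fresh episode
        episodes[peer] = (start, ts)
        if ts - start >= MIN_CONTACT_S:
            logged.add(peer)
    return logged
```

Bluetooth RSSI is a notoriously noisy distance proxy - bodies, walls and phone orientation all distort it - which is part of why false positives and negatives dog every app in this class.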
Over 600,000 Singaporeans have already volunteered to download the app, perhaps motivated by the sense of national solidarity pervasive in Singapore, or perhaps by the assumption that using a government-issued app will fast-track access to testing when it becomes necessary.
In any case, the app has its limitations. The iOS app has to run permanently in the foreground to be effective, and the Android version must be manually configured to run in the background. Users are unlikely to be so diligent that they remember to turn it on every time they are in a public place - well in advance of getting sick - limiting the use case to people already on high alert, such as those that came into contact with a person waiting for test results. Developers may improve TraceTogether now that Singapore plans to release the app’s source code.
Other efforts to convince users to voluntarily download a privacy-preserving app - such as Cambridge University’s ‘FluPhone’ app in 2011 and MIT’s new ‘PrivateKit’ app - haven’t driven enough user interest to make a meaningful impact.
Stamos sees a faster way to enrol users in a privacy-preserving system. Any time Google or Facebook offer features like ‘People You May Know’, he notes, they are effectively already performing a similar feature to contact tracing. And both of those platforms have in excess of 2.5 billion users.
“Contact tracing is a technique already proven in the field by Google and Facebook,” Stamos said. “This is why sometimes when you go into a store, you end up getting related ads in your feed - because Bluetooth beacons placed in the store have recorded your interest for future advertising.”
He envisions a system under which any Facebook or Android user that tests positive to Coronavirus could - at the push of a button in an app they are familiar with - give permission for Facebook or Google to contact any other account holders that have been in the same Bluetooth Beacon or WiFi network (SSID) for more than 30 minutes.
Stamos recommends the tech giants get on the front foot and build this capability voluntarily for US users, lest they be compelled by governments to build a compromised solution.
“If I tested positive, I’d much prefer to hit a button and have Google and Facebook inform everyone that I’ve been in contact with, warning them to go get tested,” he said. “And that data doesn’t necessarily have to go to the government. It could be a relationship between me and counterparties, mediated by an app we use in common.”
As long as the app is opt-in, consent is provided, and the app brokers the tracing and notification (rather than the user or another human operator), it could be rolled out in the United States without the need for legislative change, he said.
“All the infrastructure is there to do it,” he said. “It would use the same [geofencing] mechanisms these companies use today, which we know to be legal.”
The same wouldn’t apply for Europe, where GDPR and other regulations would likely prove too prohibitive.
Even the most diehard privacy advocates say they would be willing to make a compromise in such an emergency.
But contact tracing apps will only help, Alperovitch notes, if there is enough testing capacity available to help the population know if they are infected or have been in contact with somebody infected. That’s not available in the US today.
“It won’t do anything to trace people if we can’t actually test them,” he said. “But maybe when we get to the point of re-opening this country, and we want to make sure we don’t have new outbreaks, it’s something to consider.”
Speaking as a person that has opted out of platforms that track his location data, he remains cautious.
“I would want full transparency,” he said. “I’d want the source code of the app published by the government. I’d want strict oversight on how the data is used and I’d want mandatory purging of that data every so many days.”
“If it can be effective, and if the user volunteers to submit data on social networks they already use, then with the right safeguards - I’m a tentative yes.”
Even Boileau, who often quips that commercial surveillance is the “cyberpunk dystopia” we always dreaded, is in reluctant agreement.
“The voluntary approach has some real benefits,” he said. “It’s an emergency. We’ve got the data and we should use it. Privacy can just suck it for a while.”
This week’s show is brought to you by Thinkst Canary.
Thinkst’s Haroon Meer joins the show this week to talk about what he tells customers when they ask him if Thinkst could go rogue and own all their customers.
You can subscribe to the new Risky Business newsletter, Seriously Risky Business, here.
You can subscribe to our new YouTube channel here.
Tech firms asked to help COVID contact tracing
Lawmakers have asked US tech companies to contribute data to help health authorities monitor quarantine compliance and trace recent contacts of people infected with coronavirus.
As authorities the world over rush to flatten the curve of coronavirus infections, even the most diehard privacy advocates are exhibiting a willingness to temporarily let civil liberties slide in the name of saving lives.
You might be surprised by which of our regular Risky.Biz contributors said as much when we hosted a livestream discussion on cell phone tracking earlier today - which featured Dmitri Alperovitch, Adam Boileau, Patrick Gray and Alex Stamos.
Healthcare hit with ransomware, despite promised truce
Two prominent ransomware actors promised not to target primary healthcare providers until the COVID-19 crisis is resolved.
The Maze and DoppelPaymer ransomware gangs told Lawrence Abrams at Bleeping Computer that they would assist hospitals directly if incidentally infected by their malware. DoppelPaymer’s disclaimer is that it will continue attacking pharmaceutical companies and the broader medical supply chain.
Abrams told Risky Biz that he’s also since heard from the Netwalker ransomware gang, who explicitly stated that all its victims have to pay - healthcare or not.
This week London-based insurer Beazley disclosed that it handled twice as many ransomware-related claims in 2019 as the year prior, and that 35% of the 700+ organizations claiming losses from ransomware attacks in 2019 were healthcare providers.
InfoSec pros turn the tables on ransomware
The COVID-19 crisis is bringing out the best in the InfoSec community, with hundreds of hackers donating their time to projects that aid the healthcare sector.
This week Risky.Biz covered the story of 200 volunteer researchers that in their first week identified 50 hospitals with vulnerable VPN endpoints.
Meanwhile, we are starting to see ‘Coronavirus Fraud Coordinators’ appointed by US Attorneys across the United States, whose remit includes prosecuting ransomware gangs that use Coronavirus-related lures.
Are we at ‘peak cyber’?
There’s talk in VC-land about whether we’ve reached the peak of speculation on cyber security startups.
Some US$5 billion was invested in cyber security startups across 311 deals tracked by Pitchbook in 2019. While nobody would expect an epidemic-plagued 2020 to reach these heights, there is some evidence emerging that the market was already coming off its peak.
Newly-unemployed targeted in mule schemes
Cybercrime gangs have long promised unsuspecting jobseekers attractive ‘work from home’ roles that actually serve to launder stolen funds.
As unemployment soars across the Western world, we can anticipate that these gangs will find it easier to hire new mules. Brian Krebs has a great story on a new muling operation that is advertising for new roles to ‘process transactions for a Coronavirus Relief Fund’.
Because we really need a Windows zero-day right now
Microsoft has warned clients of a zero-day vulnerability in Windows - specifically in Adobe Type Manager Library. The vulnerability is being exploited by malicious actors and Microsoft has listed a number of temporary workarounds until a patch is available.
FSB’s botnet schematic dumped online
A hacking group that calls itself ‘Digital Revolution’ has published 12 documents that it claims to have stolen from a subcontractor to Russian intelligence service FSB. The documents include a 2018 proposal to build ‘Fronton’ for the intel agency: a Mirai-style botnet assembled from compromised IoT devices. Two years later, there is little evidence that the project went ahead.
Three reasons to actually be cheerful this week:
New IoT botnet: Meet ‘Mukashi’, a new botnet made up of compromised Zyxel NAS devices and routers. The underlying vulnerability scores a perfect 10 for severity, and the vendor’s patch doesn’t cover older Zyxel devices.
Trickbot adapted for espionage: TrickBot - typically used as a banking trojan - has been modified for targeted attacks on telcos in what appears to be an espionage campaign.
WHO sent you that email? Attackers are setting up over 2000 malicious domains a day relating to COVID-19, with many mimicking the World Health Organization. Attackers didn’t even need a malicious domain in one recent phishing campaign, which abused an open redirect condition on the US Department of Health and Human Services website. Not a great look.
Around 50 hospitals around the world are less likely to get popped in ransomware attacks this week, thanks largely to a loose band of InfoSec pros that banded together to help healthcare providers during the COVID-19 crisis.
While they aren’t yet going after ransomware gangs in vigilante-style retribution, the group’s pro bono work has already helped pinpoint over 50 healthcare organizations running vulnerable versions of Citrix NetScalers or Pulse Secure VPN gateways.
Vulnerable VPN endpoints have been targeted by several ransomware gangs in recent months, and despite promises from some groups not to target healthcare organizations, hospital networks and the medical supply chain continue to fall victim.
The voluntary threat intel and hunting effort has been welcome help for Errol Weiss, chief security officer at the Health Information Sharing and Analysis Center (H-ISAC), which has taken on the role of aggregating and disclosing vulnerability information collected by the group to affected healthcare providers.
The group of independent researchers - which now numbers around 200 - has no name. Most of its members prefer anonymity and volunteer outside of work hours. So far they have provided H-ISAC data from honeypots set up to detect opportunistic scanning activity. They also scanned the internet for IP addresses hosting vulnerable VPN endpoints, from which H-ISAC extracted a list of 50 healthcare providers. H-ISAC has sent those organisations links to technical write-ups on the vulnerabilities in question, as well as generic mitigation advice, irrespective of whether they are H-ISAC members.
Weiss is optimistic the advisories will be acted on. “Based on our prior experience, most [hospitals] will pay attention and do something,” he said. The hospitals will be prompted with further information if their systems continue to show up in scans, he said.
Ohad Zaidenberg, one of the few public figures working to corral volunteers, told Risky Business the group has only “just started.”
“From tomorrow, we will start to work actively,” he said, but was coy as to what the next phase of their program involves.
Healthcare CSOs we spoke to this week were grateful for the camaraderie and generosity of their industry peers. But they also cautioned to not expect too much of hospitals under strain.
“The offers of intel-sharing and threat hunting is only useful to the extent that hospitals have the capacity and capability to consume it,” said Christopher Neal, CSO of Ramsay Health Care, which operates a global network of 480 medical facilities in 11 countries. In most hospital networks, Neal said, there are insufficient resources available to act on the information - even prior to the coronavirus outbreak.
Neal wants to see “clearer public policy arguments to increase funding for security programs” in healthcare.
Weiss said that he is keen to receive more Indicators of Compromise (both atomic indicators and TTPs) about ransomware attacks, as well as decryption methods for various strains of the malware. But he recognizes the difficulties that might emerge as the initiative scales. Automation may be required to filter and sort through the volume of data coming in and to prepare actionable reports.
Still, he said, “I’d rather have that problem than the reverse.”
As multiple cities head into lockdown, IT teams face extraordinary pressure to urgently deliver remote working to more users in a broader range of roles.
Over the coming weeks, the contrast between well and poorly resourced IT teams will be stark. Many won’t have the wherewithal to navigate this crisis without introducing unacceptable risks. Those that can will leap ahead. The tools we have on-hand to provide remote access in 2020 are orders of magnitude better than even a year or two ago.
Web-based identity brokers, trivially-deployed MFA and identity-aware proxies have arrived to save us from the hell of “just install TeamViewer”. And while the least imaginative solution to the crisis is to ramp up VPN access, others will dare to use this crisis as an opportunity to move to a “zero trust” delivery model.
This week we’re asking: What can organisations do to quickly stand-up work from home options for a displaced workforce that might even leave us in a more secure place than we started?
It’s safe to say that if a user wasn’t offered remote access to enterprise systems before COVID-19, it was probably for a fairly intractable reason. Many admins will now be looking for a ‘least worst’ option to make it happen fast. So let’s start there.
Availability and speed probably trump all other considerations at present. But security has to hold out on a few minimum requirements:
So what if the supply-chain of new devices breaks down, and BYOD becomes your only choice?
Connecting user-owned devices to virtual desktops in an organisation’s private cloud may be a reasonable compromise, especially for users requiring access to older or resource-intensive apps.
VDI isn’t the worst option - but you’re going to need a lot of spare compute, storage and network capacity. A sudden influx of remote users isn’t going to be cheap. If you’re going to go to that much effort and cost, you may as well be thinking longer-term.
Adjunct Professor at Stanford and fellow Risky.Biz contributor Alex Stamos suggests CIOs take the urgent use case to provide remote access - which has very good chances of being funded - and use it as a stepping stone to zero-trust.
It might not be as big a leap as you think.
Any organisation that has deployed Office 365, for example, has created a cloud-hosted identity store (in Azure AD). Microsoft’s Azure AD Application Proxy can use this identity store to provide the same remote (SSO) access into internally-hosted web apps as Microsoft’s cloud suite.
CSOs and CIOs aren’t limited to Microsoft technology here, either. Akamai, Cloudflare and others now offer the network-level plumbing required to provision internal services to remote workers via “identity-aware” proxy services. Users sign-in using SSO (via Azure AD, Okta, whatever), then get piped through Akamai or Cloudflare’s network to internal apps.
So if you’re really stuck - and feeling brave - the users previously bound to the workstation at HQ might make for a great pilot group. It’s relatively new tech and there will be teething issues, but it’s certainly worth a look.
You can also build a strong case for taking a new approach to remote access when you look at the initial infection vector used in recent attacks.
Attacking vulnerable users
There’s already been a proliferation of COVID-themed credential phishing campaigns from both State-sponsored attackers and cybercrime gangs, to such a degree that US Attorney General William Barr has urged the Department of Justice to prioritise prosecution of COVID-themed scams.
We should also anticipate that attackers will double-down on tech support scams. Users will be asked to follow unfamiliar procedures over the coming weeks. Some will be unfamiliar with the devices they’ve been assigned. They’ll have no prior experience with connecting using the corporate VPN. They may never have raised requests for IT support when outside the network.
These attacks will have a higher impact than usual, as many users will be connecting to corporate apps from user-owned devices. These devices will be highly susceptible to malware infection, unmonitored, difficult to support and difficult to acquire and re-image after they get infected.
Malware distributors won’t need to innovate much to net a bigger and more profitable catch.
Trawling for exposed remote access
We can expect attackers to scan for internet-exposed RDP (remote desktop protocol - defaults to port 3389) and ports used for third-party remote support tools (VNC, TeamViewer etc) to find low-hanging fruit.
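A minimal connect-scan sketch for spotting that kind of exposure on hosts you administer might look like this. The port list is illustrative, and a successful TCP connect is only a crude proxy for a genuinely exposed service - but it is roughly what the low-hanging-fruit scanners are doing.

```python
import socket

# Common remote-access ports worth checking for accidental exposure.
REMOTE_ACCESS_PORTS = {3389: "RDP", 5900: "VNC"}


def exposed_services(host, ports=None, timeout=2.0):
    """Attempt a plain TCP connect to each port and report which ones
    accept a connection. Only scan hosts you are authorised to test."""
    found = []
    for port, name in (ports or REMOTE_ACCESS_PORTS).items():
        try:
            with socket.create_connection((host, port), timeout=timeout):
                found.append((port, name))
        except OSError:
            pass  # closed, filtered, or unreachable
    return found
```

Running the same check against your own perimeter, before the opportunists do, is a cheap way to find the VNC-on-the-firewall mistakes made in a rushed week.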
Ransomware actors in particular are fond of abusing exposed RDP connections as an initial infection vector for attacks - as evidenced by recent ‘big-game hunting’ ransomware attacks in France. We’re also seeing commodity malware distributors like the TrickBot gang target RDP.
To date, researchers we’ve spoken to that run RDP honeypots haven’t picked up on major changes in attacker behaviour. Scanners are gonna scan, epidemic or not, and there were enough boxes to own before the crisis.
But as Insomnia Security’s Adam Boileau noted in a Risky.Biz livecast this week, the impacts of the many poor decisions made this week are likely to be long-felt.
“Admins will install VNC on desktops, punch some holes in the firewall, and hand out a port number and a password. We will live with a very, very long tail of the mess we’ve made.”
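For defenders, the exposure described above is cheap to check from the outside before attackers do. A minimal sketch in Python (the port set is illustrative of what scanners commonly probe; run it only against address ranges you own):

```python
import socket

# Ports commonly probed by attackers hunting exposed remote access:
# RDP (3389), VNC (5900), TeamViewer (5938).
REMOTE_ACCESS_PORTS = (3389, 5900, 5938)

def exposed_ports(host, ports=REMOTE_ACCESS_PORTS, timeout=2.0):
    """Return the subset of `ports` that accept a TCP connection on `host`."""
    open_ports = []
    for port in ports:
        try:
            # A completed TCP handshake means the port is reachable from here.
            with socket.create_connection((host, port), timeout=timeout):
                open_ports.append(port)
        except OSError:
            pass  # closed, filtered, or unreachable
    return open_ports
```

Anything this finds open on a public address should be sitting behind the VPN, not on the raw internet.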
Attackers will also be keeping an eye out for victims that haven’t patched VPN kit against known vulnerabilities.
In hindsight, it was probably good fortune that offensive security researchers got so intimate with corporate VPN apps during the course of 2019. A quick refresher:
Where do you expect attackers to focus their attention? Hit me up on Twitter.
Welcome to the first edition of Seriously Risky Business, your weekly batch of the big stories shaping cyber policy, curated by Brett Winterford.
Feedback welcome at firstname.lastname@example.org
Attackers prey on COVID confusion
If we hoped ransomware gangs would give hospitals a reprieve during a global pandemic, prepare to be disappointed. Local Czech media reports that the University Hospital in Brno - the country’s second-largest - had to shut down and isolate systems and re-route some patients to counter a ransomware infection.
Predictably, State-sponsored attackers and cybercrime gangs have capitalized on the chaos caused by the COVID-19 pandemic. Attacks with COVID-themed lures have been attributed to known actors in Russia, China and North Korea, and to a large number of profit-motivated gangs. US Attorney General William Barr has urged the Department of Justice to prioritize prosecution of COVID-themed scams.
This week the Risky Biz team looked at the challenges of securing a (newly) remote workforce in response to the epidemic. See a replay of our livestream with Adam Boileau, Patrick Gray and Alex Stamos. (I also made a cameo to introduce myself.)
Targeted attacks bypass MFA
If there’s one take-away from our livestream, it’s the need to prioritise multi-factor authentication (MFA) in the rush to offer remote access. That remains the case even as we see more reports of MFA being bypassed in targeted attacks.
Amnesty International published a study of phishing techniques used against journalists and human rights activists in Uzbekistan, including the use of reverse proxies to bypass MFA. Attackers established a man-in-the-middle (MITM) proxy between phishing victims and the legitimate websites, stealing tokens from the authenticated session to log in as the legitimate user.
While Amnesty recommends use of hardware security keys (which invalidate these types of attacks), it nonetheless notes that ANY use of MFA is a far more secure outcome than dismissing it altogether.
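Why do hardware security keys invalidate reverse-proxy phishing? Because the browser binds each signed assertion to the origin where it was actually produced. A minimal server-side sketch of that origin check (the `EXPECTED_ORIGIN` value is hypothetical, and a real WebAuthn verification also checks the signature, challenge and RP ID hash):

```python
import base64
import json

# Hypothetical relying-party origin for illustration only.
EXPECTED_ORIGIN = "https://login.example.org"

def origin_is_valid(client_data_json_b64):
    """Reject assertions minted on any origin other than our own.

    The browser, not the user, fills in the `origin` field, so an
    assertion relayed through a phishing proxy carries the proxy's
    origin and fails this comparison.
    """
    client_data = json.loads(base64.urlsafe_b64decode(client_data_json_b64))
    return client_data.get("origin") == EXPECTED_ORIGIN
```

A six-digit TOTP code, by contrast, contains nothing bound to the site that prompted for it - which is exactly what the proxy exploits.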
Source code audit validates concerns about mobile voting app
A second security audit has unearthed a litany of security vulnerabilities in Voatz, a mobile voting app piloted in several recent US elections.
Security firm Trail of Bits was commissioned by a philanthropic body that promotes mobile voting to audit the Voatz source code, as a follow-up to an earlier, narrower test by MIT researchers. ToB published 79 findings - a third of them rated ‘high’ severity. Voatz’s CEO told Vice Motherboard he is comfortable accepting most of these risks - whether election officials or anyone else will is another matter. The fact that a product like this was hurtling towards market dominance is a worrying development.
“Boring” trial of Russian hacker suspended
The trial of Yevgeny Nikulin, a Russian hacker the US indicted for breaches at Dropbox, LinkedIn and Formspring, isn’t smooth sailing for US prosecutors.
After exhaustive efforts to extradite Nikulin from the Czech Republic and two days of hearings in the US, the judge has complained about the prosecution’s evidence being so ‘boring’ that it put several jurors to sleep.
We’re not entirely sure what level of fireworks and intrigue the judge and jury were expecting: the trial has already heard of links between cybercrime actors and the FSB, and between the accused and a Group-IB executive. Now there’s a COVID-19 scare: one of the prosecution’s key witnesses - a Secret Service agent - had to be isolated after exposure to a person with COVID-19 symptoms, forcing a two-day delay in the trial.
Vice publishes trove of phone unlock data
Vice Motherboard has made a healthy contribution to the ‘Going Dark’ debate, analyzing over 500 warrants in which US law enforcement sought to unlock a suspect’s iPhone.
Reporter Joseph Cox and researcher Izzie Ramirez combed all the cases to record whether data was successfully extracted. A slim majority were - indicating certain agencies have the necessary cracking tools to break into devices. Equally, the data shows these capabilities are not universally available across law enforcement or reliably effective, and that the price of unlocking tech appears to be on the rise. Vice’s sample data set demonstrates that taking a hard position on either side of the debate isn’t at all constructive.
Auto-updates for WordPress plugins
In a long-overdue development, WordPress users will soon be able to choose to auto-update plugins and themes.
Unfortunately, the world’s most popular CMS is also the world’s most vulnerable web application framework, largely because WordPress plugins are often orphaned or sold to actors with nefarious motives. Even if new plugins and themes can be auto-updated, legacy installs might take a decade to bleed out.
Auto-updates also introduce a new software supply-chain risk: if an attacker buys or compromises a plugin you trust, you might automatically accept any change they make to the code.
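One mitigation is to pin updates to release artifacts you have actually reviewed and refuse everything else. A minimal sketch (the function and pin structure are hypothetical illustrations, not a WordPress feature):

```python
import hashlib

def safe_to_install(filename, payload, pins):
    """Accept an update only if its SHA-256 matches a pinned release.

    `pins` maps release filenames to the hex digest recorded when that
    release was reviewed; unpinned or tampered artifacts are rejected.
    """
    return pins.get(filename) == hashlib.sha256(payload).hexdigest()
```

This trades convenience for control: you still update deliberately, but a hijacked plugin can’t ride an auto-update straight into production.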
Avast abandons AV engine over bug
Three reasons to actually be cheerful this week:
Iranian RATs - The European Network of Transmission System Operators for Electricity (ENTSO-E) was most probably hacked by Iranian state actors, according to Cyberscoop. ENTSO-E didn’t attribute in their breach disclosure, but Cyberscoop have since drawn a line between ENTSO-E’s breach and recent threat analysis by Recorded Future to do it for them.
Russians outsourcing troll farms - Facebook discovered and removed from its network several dozen Ghana- and Nigeria-based accounts accused of being fronts for Russia’s banned troll farm, the Internet Research Agency (IRA).
Russians trolls off the hook - The US has dropped charges against two Russian firms for election interference, over fears the trial might disclose the sources and methods used to gather evidence against them.
Still flipping bits - Researchers have figured out ways to defeat TRR (Target Row Refresh) - the protection memory vendors use to prevent Rowhammer attacks. You can safely leave it to vendors to worry about their next move - there are easier ways to pop a box.
On this week’s show Patrick and Adam discuss the week’s security news, including:
This week’s sponsor interview is with Sam Crowther, founder of Kasada. They do bot detection and mitigation and apparently they’re quite good at it. Sam joins the show to talk through the new greyhatter of anti-anti-bot. It’s actually a really fun conversation, that one, so stick around for it.
If you don’t know already, all guests who appear on the Risky Business Soap Box podcast paid to be here. These podcasts are promotional, but as regular listeners know, they’re not just mindless recitations of marketing talking points.
This edition of Soap Box is brought to you by Trend Micro, which is a company that’s in a really interesting position at the moment.
With Symantec acquired by Broadcom, which only really cares about the biggest 500 companies in the world, Sophos absorbed, Borg-style, by Thoma Bravo and McAfee sitting in the corner eating its paste, there’s an opportunity for a new “portfolio” security software firm to emerge, and Trend wants to be it.
Jon Clay is Trend’s director of global threat communications and he joined me for this conversation about ransomware, how EDR is becoming “just another feature,” and what the role for a “portfolio” company in infosec is going to be in the future.
On this week’s show Patrick and Adam discuss the week’s security news, including:
This week’s sponsor interview is with Scott Kuffer of Nucleus Security. They have built a web application that pulls together feeds from all your vulnscanners and vulnerability-related software (Snyk, Burp, whatever), normalises it, then lets you slice it, dice it, and send it through to the most relevant project owner or dev team. It’s insanely popular stuff, and Scott pops along this week to talk about vulnerability management and what his last year has looked like as Nucleus’s business has boomed.
These Soap Box podcasts are wholly sponsored. That means everyone you hear on one of these editions of the show paid to be here. But that’s ok, because we have interesting sponsors!
Today’s sponsor is AttackIQ. They make a breach and attack simulation platform. They started sponsoring Risky Biz when they were a little baby startup, but these days, as you’ll hear, attack sim is actually emerging as a budget line item, particularly for larger companies.
They use the platform to test their existing controls, figure out where they have gaps or bad products, then kick on to planning from there… then retest, evaluate, plan, implement, etc etc etc.
For a lot of organisations, something like this is going to be really helpful. Another super helpful thing is that AttackIQ is all in on MITRE ATT&CK.
AttackIQ is, in fact, one of the first vendors I know of that jumped on the MITRE ATT&CK bandwagon. They got in early, and this podcast is mostly going to be focussed on ATT&CK. Chris Kennedy is AttackIQ’s CISO and VP of customer success! He did one of these soap boxes last year and it was really popular with the CISOs who tune in to risky biz.
He joined me for this discussion about MITRE ATT&CK: where it’s at, where it’s going, how people are using it and how AttackIQ is using it to make its products more useful.
On this week’s show Patrick and Adam discuss the week’s security news, including:
This week’s sponsor interview is with Dave Cottingham from Airlock Digital.
They make whitelisting software that’s actually usable. And until I did this interview I didn’t know that their agent actually does host hardening as well, which is pretty cool. Since we last spoke they’ve also popped up in CrowdStrike’s app store thingy, which means a bunch of you CrowdStrike customers will be able to dabble in some whitelisting if you want to.
Dave joins the show to talk about a bunch of stuff, including their experience having Silvio Cesare do a code audit on their agent.