The US Government is tapping the data of mobile advertising companies to identify non-compliance with social distancing measures, according to the Wall Street Journal. The scoop follows reports last week that the White House sought assistance from US tech giants to help monitor quarantine compliance and perform contact tracing.
Last week Risky Business explored what measures might prove effective and published a guest column by Stanford Law’s Albert Gidari suggesting Facebook and Google volunteer their expansive reach to offer privacy-preserving solutions. In the absence of either announcing initiatives, startups are stepping up to the plate.
The William and Flora Hewlett Foundation invests in creative thinkers and problem solvers who are working to ensure everyone has a meaningful opportunity to thrive.
In this (sponsored) podcast Akamai’s CTO of Security Strategy Patrick Sullivan talks us through the basics of identity-aware proxies. With more and more internal applications being served to newly external users, identity-aware proxies are the new hotness.
Risky Biz Soap Box: VPNs are out, identity-aware proxies are in
If Typhoid Mary carried a cell phone, we would all want to know where she’d been over the last few days.
Technology exists right now to trace the historical location and movement of any person who has tested positive for COVID-19. That location history is more detailed and accurate than the information the Centers for Disease Control and Prevention (CDC) gets from interviewing people who have tested positive, and it can be used to map the trajectory of the disease over time and place, all while protecting privacy. However, privacy concerns and a lack of sufficient resources within public health organizations have hindered development of a location history solution.
A better approach - discussed below - dismisses third-party aggregators: they are largely unaccountable, their data sources are speculative and lack consent provenance, and their data tends to be less comprehensive and representative of communities.
Over a dozen countries have introduced or deployed tracking technologies, physical surveillance and censorship measures in a bid to slow the spread of the virus. A Digital Rights Index has been published to help stem overreach, promote scrutiny, and ensure that intrusive measures don’t continue for any longer than absolutely necessary.
So how would a location history solution work while protecting privacy? Consider what your device already knows about you. If you use Google Maps, for example, your Timeline can be seen in the Maps Menu. Click and you will see a detailed summary of your daily travels for as long as you’ve stored it, and your actual route is displayed on the adjacent map. My history for January 17th shows that I flew from San Jose to Seattle, took a 1:10pm ferry to Bainbridge Island, went to the barber at 2:30pm, then to the post office at 3pm, then home, and then had dinner at Sawan’s Thai Kitchen at 6:30pm. If I fell sick and tested positive two days later, I doubt that I could relate the details of my movements for two or three days before diagnosis with that degree of specificity.
But if I provide my cell phone number and/or account identifier to a public health official and consent to its use, the data could then be sent to the CDC - a governmental entity under the Stored Communications Act that can by law request emergency location information from Google or any other platform or provider that maintains my location history.
The emergency request is the same procedure used dozens of times each day where law enforcement submits a request to a provider to disclose user information in emergency cases like kidnappings. It is tried and tested. The infrastructure exists for it right now, including rapid delivery of the data back to the governmental entity.
Privacy concerns can be minimized by ensuring that the user’s opt-in consent for sharing with the CDC solely is for the purpose of tracing potential infectious contacts and cannot be shared with other governmental agencies without the person’s added consent. Further, the CDC can confirm it will destroy the identifiable information promptly upon receipt of the location history - the CDC only needs to know where a person with a positive test traveled and when. Everyone’s location history already is known to their providers; the person who tested positive already is sharing their movements as best as possible with health providers. The person infected is consenting to their information being used to notify others of the risk and for no other purpose. Contact tracing already is being done at the local level with scarce resources.
More can be done once the location history of the infected user is known. Platforms and wireless carriers can use incoming CDC or user data requests to determine how many other users were in the vicinity of the positive case at any given time. This is called geofencing. It is done today in response to search warrants from law enforcement to identify users in and around a crime scene, or, all registered phones on a cell tower serving a crime scene area.
Rather than the CDC simply telling the local community that a person has tested positive in their county, providers instead can tell specific proximate users precise facts by means of a text, email, or device notification: a person who tested positive was on the 9am flight from San Jose, landed at SeaTac at 11:10am and got a cab 10 minutes later, was on the 1:10pm ferry to Bainbridge Island, stopped at various places, and went home. That is actionable intelligence - it relieves the anxiety of people on a later flight or ferry or who ate before the infected person, or all those people who only are told someone has the disease in the community at large. It tells others who were in close proximity that they should self-isolate.
No, this is not a substitute for greater testing, but it may help direct valuable testing resources to a particular at-risk community and to target resources better. Imagine that there were 10 people identified on that 1:10 ferry. With their location maps layered on top of each other, we see a trajectory for the disease throughout the community and further identify the specific risk of immediate contact by others in the vicinity. Perhaps everyone gets directed to shelter-in-place, or, perhaps the proximity map shows only small pockets of concern. Whatever the data shows is immediately actionable at the local level and the CDC will be getting aggregate location data for those in proximity to persons who tested positive.
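The layering described above is, at bottom, a time-and-distance intersection over location histories. The sketch below is illustrative only - the function names, the 50-metre and 15-minute thresholds, and the data shapes are our own assumptions, not any CDC or provider system:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two coordinates, in metres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + \
        cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6_371_000 * asin(sqrt(a))

def proximity_events(case_history, other_history, max_m=50, max_s=900):
    """Find moments when two location histories intersect in space and time.

    Each point is (epoch_seconds, lat, lon). Returns (case_point, other_point)
    pairs that fall within max_m metres and max_s seconds of each other.
    """
    hits = []
    for t1, la1, lo1 in case_history:
        for t2, la2, lo2 in other_history:
            if abs(t1 - t2) <= max_s and haversine_m(la1, lo1, la2, lo2) <= max_m:
                hits.append(((t1, la1, lo1), (t2, la2, lo2)))
    return hits
```

A provider would of course run this at scale with indexed spatial queries rather than a nested loop, but the core decision - same place, same time window - is the same.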
Knowing that a significant number of persons with the disease were in the general population at a specific time and place is better than any currently available information today, and is more accurate than anecdotal data from those who have tested positive. And again, the CDC (i.e. the government) is only ever getting the opt-in data for the person who tested positive; the providers are doing the rest. Some have complained that this solution is not perfect, doesn’t cover all places or people, isn’t granular enough to avoid “false positives” and requires providers to do something to facilitate it. Right now, the alternative is for everyone to stay home and live with the anxiety that interacting with anyone puts you and your family at risk. That is one big false positive. The approach above is surgical, and most times, good is better than perfect - at least with pandemics.
We also have seen how location information can be used to quarantine or restrict people’s movement in places like China. No one wants a virtual ankle bracelet for quarantine in this country, but those are some of the ideas being floated now. The benefit of the location tracing proposed here is that it is opt-in by those who have tested positive, and privacy protective for the user and all those who were in close proximity to persons so identified. It is better than using a surveillance hammer.
There is some privacy risk to the infected user whose location history becomes part of a map, in that crowdsourcing may identify the individual. But that risk can be lowered by not mapping the end point - if it is a personal residence for example. There is some risk inherent in the use of location data - but again, the degree of specificity for what goes on the map can be determined by the provider and minimized to exclude key data points. A rule might display “post office” but not display “home address”.
It is important to say again that this proposal alone is not a comprehensive solution to the difficult problem of contact tracing. There may be smaller numbers of users with location history enabled on various platforms due to privacy concerns. But if data is drawn from Google, Foursquare, Facebook, Uber, Lyft and other platforms, a comprehensive map will emerge that is sufficient to show trajectory and allow CDC to identify hot spots and resource needs, while simultaneously reducing anxiety in the areas least affected or proximate to individuals who have tested positive.
Albert Gidari is the Director of Privacy at Stanford Law School’s Center for Internet and Society and retired partner at Perkins Coie LLP where he represented wireless companies and Internet platforms.
The unprecedented COVID-19 pandemic has raised a thorny question for technologists and lawmakers: how might the location data from our cellphones be used to help contain the spread of the virus?
Two broad use cases have emerged: the first is using location data to monitor compliance with quarantine. The second is contact tracing - using location data to track down people who have come into contact with a person who tests positive for the virus.
The team at Risky Biz discussed both in a livestream this week with regular co-host and Insomnia Security founder Adam Boileau, Alex Stamos, adjunct professor at Stanford University’s Center for International Security and Cooperation, and CrowdStrike co-founder and former CTO Dmitri Alperovitch.
Watch the recent Risky Business livestream on COVID-19 surveillance:
Monitoring quarantine compliance
In an ideal world, people who have tested positive for a deadly and contagious disease would dutifully self-isolate to prevent further infection, and those they’ve recently come into contact with would dutifully quarantine before their test results come in.
In Western democracies, the use of monitoring for such a purpose requires legislative change and a dramatic suspension of social norms.
In the United States, governments do not have the legal authority to tap cell phone records or social media data for the purpose of enforcing quarantine compliance. The United States is struggling even to make the case for using geofencing data to convict a suspect of a bank robbery.
Emergency powers are gradually being put into place as clusters of infections emerge. Airlines, for example, are now required under US law to submit to the Centers for Disease Control and Prevention (CDC) data about all incoming passengers for the purpose of enforcing quarantine. And the White House is now in discussion with US tech giants such as Facebook and Google about how their location data might also be put to use.
Today, anonymised data from mobile networks and apps is already made available to researchers for the purpose of tracking the spread of disease. Users of IoT thermometers, for example, can already opt-in to share their data for use in the aggregate.
But the prospect of using the data at the individual level for purposes that could be deemed punitive is ethically and legally complex.
Albert Gidari, Director of Privacy at the Center for Internet & Society at Stanford Law School, notes that the US Stored Communications Act would not permit compelled disclosure. “Any system devised to take advantage of location history would have to be consent-based and rely on voluntary cooperation of providers,” he told Risky.Biz.
Compelled disclosure might also prove ineffective. The Electronic Frontier Foundation argues that the threat of having your movements monitored could create a perverse disincentive: people who feel unwell - but not unwell enough to present for testing - may avoid being tested altogether. And if such a system offered no agency or benefit to those being monitored, what is to stop them from simply leaving their mobile device at home?
“We can’t expect that people who choose to be non-compliant are going to use an app voluntarily,” Boileau notes. “So at that point, [authorities] are left with using the phone infrastructure - or other companies that have location data. In New Zealand, for example, the telcos have the data for emergency call location - and in an emergency, a whole bunch of the usual rules don’t apply.”
There are potential benefits for users - measuring compliance with quarantine would be an important input into determining “how long we should be in lockdown”, he said. In other words - put up with surveillance now, and lives can return to normal much sooner.
But that’s a very difficult sell - what’s acceptable to a person in New Zealand or Scandinavia might not fly in Germany or the United States.
Contact Tracing
Using mobile location data for contact tracing presents many of the same legal and ethical challenges as monitoring compliance with quarantine. But it offers far more palatable use cases for countries seeking to balance containment of the disease with preserving civil rights in the longer term.
Gidari posits the concept of a system whereby individuals that test positive may voluntarily disclose their mobile phone number or online account identifier to healthcare agencies. The government could then use existing lawful arrangements with tech companies to request rapid emergency access to the user’s location history.
The agency could also request aggregate geofencing data to have the provider alert other users who were in close proximity to the person during their illness. If protected by privacy-preserving caveats - such as limiting which agency can access the data and how long they can retain or use the data - it might be something privacy advocates can live with.
“We don’t need a Korea-style approach to this problem to get actionable data in the hands of the CDC or other health care providers,” Gidari said. “We can protect privacy too.”
Stamos - who has previously been an expert witness on cases that involve location-based data - isn’t confident that cell tower data is precise enough for contact tracing without generating an unacceptable number of false positives. But data from Bluetooth beacons and WiFi SSIDs might do.
The government of Singapore used Bluetooth as part of its efforts to contain the virus. Citizens were encouraged to voluntarily download the ‘TraceTogether’ app, provide the Ministry of Health with their mobile phone number and turn Bluetooth on permanently. The app asks for user consent to log any other user of the app that spends more than 30 minutes within 2m of the person. The data is then acted upon if any of the users return a positive test.
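The 30-minutes-within-2m rule can be approximated from periodic Bluetooth sightings. The following is a hedged sketch of the accumulation logic only, not TraceTogether’s actual implementation - the peer identifiers, scan cadence and gap tolerance are invented for illustration:

```python
def close_contacts(sightings, threshold_s=30 * 60, gap_s=120):
    """Flag peers seen nearby for a sustained period.

    sightings maps peer_id -> sorted list of epoch_seconds at which that
    peer's Bluetooth signal was observed in range. A contact "run" survives
    gaps of up to gap_s between scans; a peer is flagged once any single
    run lasts at least threshold_s.
    """
    flagged = set()
    for peer, times in sightings.items():
        run_start = prev = times[0]
        for t in times[1:]:
            if t - prev > gap_s:
                run_start = t          # contact broken; start a new run
            prev = t
            if prev - run_start >= threshold_s:
                flagged.add(peer)      # sustained contact reached threshold
                break
    return flagged
```

The gap tolerance matters in practice: Bluetooth scans are periodic and lossy, so requiring strictly continuous sightings would undercount real contacts.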
Over 600,000 Singaporeans have already volunteered to download the app, perhaps motivated by the sense of national solidarity pervasive in Singapore, or perhaps by the assumption that using a government-issued app will fast-track access to testing when it becomes necessary.
In any case, the app has its limitations. The iOS app has to run permanently in the foreground to be effective, and the Android version must be manually configured to run in the background. Users are unlikely to be so diligent that they remember to turn it on every time they are in a public place - well in advance of getting sick - limiting the use case to people already on high alert, such as those that came into contact with a person waiting for test results. Developers may improve TraceTogether now that Singapore plans to release the app’s source code.
Other efforts to convince users to voluntarily download a privacy-preserving app - such as Cambridge University’s ‘FluPhone’ app in 2011 and MIT’s new ‘PrivateKit’ app - haven’t driven enough user interest to make a meaningful impact.
Stamos sees a faster way to enrol users in a privacy-preserving system. Any time Google or Facebook offer features like ‘People You May Know’, he notes, they are effectively performing something very similar to contact tracing. And both of those platforms have in excess of 2.5 billion users.
“Contact tracing is a technique already proven in the field by Google and Facebook,” Stamos said. “This is why sometimes when you go into a store, you end up getting related ads in your feed - because Bluetooth beacons placed in the store have recorded your interest for future advertising.”
He envisions a system under which any Facebook or Android user who tests positive for coronavirus could - at the push of a button in an app they are familiar with - give permission for Facebook or Google to contact any other account holders that have been in range of the same Bluetooth beacon or WiFi network (SSID) for more than 30 minutes.
Stamos recommends the tech giants get on the front foot and build this capability voluntarily for US users, lest they be compelled by governments to build a compromised solution.
“If I tested positive, I’d much prefer to hit a button and have Google and Facebook inform everyone that I’ve been in contact with, warning them to go get tested,” he said. “And that data doesn’t necessarily have to go to the government. It could be a relationship between me and counterparties, mediated by an app we use in common.”
As long as the app is opt-in, that consent is provided, and that the app brokers the tracing and notification (rather than the user or other human operator), it could be rolled out in the United States without the need for legislative change, he said.
“All the infrastructure is there to do it,” he said. “It would use the same [geofencing] mechanisms these companies use today, which we know to be legal.”
The same wouldn’t apply in Europe, where GDPR and other regulations would likely prove prohibitive.
Even the most diehard privacy advocates say they would be willing to make a compromise in such an emergency.
But contact tracing apps will only help, Alperovitch notes, if there is enough testing capacity available to help the population know if they are infected or have been in contact with somebody infected. That’s not available in the US today.
“It won’t do anything to trace people if we can’t actually test them,” he said. “But maybe when we get to the point of re-opening this country, and we want to make sure we don’t have new outbreaks, it’s something to consider.”
Speaking as a person that has opted out of platforms that track his location data, he remains cautious.
“I would want full transparency,” he said. “I’d want the source code of the app published by the government. I’d want strict oversight on how the data is used and I’d want mandatory purging of that data every so many days.”
“If it can be effective, and if the user volunteers to submit data on social networks they already use, then with the right safeguards - I’m a tentative yes.”
Even Boileau, who often quips that commercial surveillance is the “cyberpunk dystopia” we always dreaded, is in reluctant agreement.
“The voluntary approach has some real benefits,” he said. “It’s an emergency. We’ve got the data and we should use it. Privacy can just suck it for a while.”
On this week’s show Patrick and Adam discuss the week’s security news, including:
Azure resource constraints hit Europe
Should we unleash surveillance on COVID-19, privacy be damned?
Browser maintainers cease new releases
South Korea-linked APT crew attacks World Health Organization
Much, much more
This week’s show is brought to you by Thinkst Canary.
Thinkst’s Haroon Meer joins the show this week to talk about what he tells customers when they ask him if Thinkst could go rogue and own all their customers.
You can subscribe to the new Risky Business newsletter, Seriously Risky Business, here.
You can subscribe to our new YouTube channel here.
Links to everything that we discussed are below and you can follow Patrick or Adam on Twitter if that’s your thing.
Risky Business #576 -- Are cloud computing resources the new toilet paper?
Subscribe to the weekly Seriously Risky Business newsletter at our SubStack page.
Tech firms asked to help COVID contact tracing
Lawmakers have asked US tech companies to contribute data to help health authorities monitor quarantine compliance and trace recent contacts of people infected with coronavirus.
As authorities the world over rush to flatten the curve of coronavirus infections, even the most diehard privacy advocates are exhibiting a willingness to temporarily let civil liberties slide in the name of saving lives.
You might be surprised by which of our regular Risky.Biz contributors said as much when we hosted a livestream discussion on cell phone tracking earlier today - which featured Dmitri Alperovitch, Adam Boileau, Patrick Gray and Alex Stamos.
Healthcare hit with ransomware, despite promised truce
The Maze and DoppelPaymer ransomware gangs told Lawrence Abrams at Bleeping Computer that they would assist hospitals directly if incidentally infected by their malware. DoppelPaymer’s disclaimer is that it will continue attacking pharmaceutical companies and the broader medical supply chain.
Abrams told Risky Biz that he’s also since heard from the Netwalker ransomware gang, who explicitly stated that all its victims have to pay - healthcare or not.
The COVID-19 crisis is bringing out the best in the InfoSec community, with hundreds of hackers donating their time to projects that aid the healthcare sector.
This week Risky.Biz covered the story of 200 volunteer researchers who in their first week identified 50 hospitals with vulnerable VPN endpoints.
Meanwhile, we are starting to see ‘Coronavirus Fraud Coordinators’ appointed by US Attorneys across the United States, whose remit includes prosecuting ransomware gangs that use Coronavirus-related lures.
Are we at ‘peak cyber’?
There’s talk in VC-land about whether we’ve reached the peak of speculation on cyber security startups.
Some US$5 billion was invested in cyber security startups across 311 deals tracked by Pitchbook in 2019. While nobody would expect an epidemic-plagued 2020 to reach these heights, there is some evidence emerging that the market was already coming off its peak.
Cybercrime gangs have long promised unsuspecting jobseekers attractive ‘work from home’ roles that actually serve to launder stolen funds.
As unemployment soars across the Western world, we can anticipate that these gangs will find it easier to hire new mules. Brian Krebs has a great story on a new muling operation that is advertising for new roles to ‘process transactions for a Coronavirus Relief Fund’.
Because we really need a Windows zero-day right now
A hacking group that calls itself ‘Digital Revolution’ has published 12 documents that it claims to have stolen from a subcontractor to Russian intelligence service FSB. The documents include a 2018 proposal to build ‘Fronton’ for the intel agency - a Mirai-style botnet assembled from compromised IoT devices. Two years later, there is little evidence that the project went ahead.
Three reasons to actually be cheerful this week:
Singapore open sources contact tracing app: The state of Singapore will release a mobile app that identifies who has been within 2m of a coronavirus patient for longer than 30 minutes. Over 600,000 Singaporeans volunteered to download the app and submit data to health authorities.
Chrome, Firefox remove FTP support: Mozilla has joined Google in removing support for the ageing File Transfer Protocol in their web browsers. On behalf of every blue team: good riddance!
New IoT botnet: Meet ‘Mukashi’, a new botnet made up of compromised Zyxel NAS devices and routers. The vendor’s patch doesn’t cover older Zyxel devices, and the vulnerability scores a perfect 10 for severity.
Trickbot adapted for espionage: TrickBot - typically used as a banking trojan - has been modified for targeted attacks on telcos in what appears to be an espionage campaign.
WHO sent you that email? Attackers are setting up over 2,000 malicious domains a day relating to COVID-19, with many mimicking the World Health Organization. One recent phishing campaign didn’t need any: it abused an open redirect on the US Department of Health and Human Services website. Not a great look.
Around 50 hospitals around the world are less likely to get popped in ransomware attacks this week, thanks largely to a loose band of InfoSec pros that banded together to help healthcare providers during the COVID-19 crisis.
Vulnerable VPN endpoints have been targeted by several ransomware gangs in recent months, and despite promises from some groups not to target healthcare organizations, hospital networks and the medical supply chain continue to fall victim.
The voluntary threat intel and hunting effort has been welcome help for Errol Weiss, chief security officer at the Health Information Sharing and Analysis Center (H-ISAC), which has taken on the role of aggregating and disclosing vulnerability information collected by the group to affected healthcare providers.
The group of independent researchers - which now numbers around 200 - has no name. Most of its members prefer anonymity and volunteer outside of work hours. So far they have provided H-ISAC data from honeypots set up to detect opportunistic scanning activity. They also scanned the internet for IP addresses hosting vulnerable VPN endpoints, from which H-ISAC extracted a list of 50 healthcare providers. H-ISAC has sent those organisations links to technical write-ups on the vulnerabilities in question, as well as generic mitigation advice, irrespective of whether they are H-ISAC members.
Weiss is optimistic the advisories will be acted on. “Based on our prior experience, most [hospitals] will pay attention and do something,” he said. The hospitals will be prompted with further information if their systems continue to show up in scans, he said.
Ohad Zaidenberg, one of the few public figures working to corral volunteers, told Risky Business the group has only “just started.”
“From tomorrow, we will start to work actively,” he said, but was coy as to what the next phase of their program involves.
Healthcare CSOs we spoke to this week were grateful for the camaraderie and generosity of their industry peers. But they also cautioned against expecting too much of hospitals under strain.
“The offers of intel-sharing and threat hunting are only useful to the extent that hospitals have the capacity and capability to consume them,” said Christopher Neal, CSO of Ramsay Health Care, which operates a global network of 480 medical facilities in 11 countries. In most hospital networks, Neal said, there are insufficient resources available to act on the information - even prior to the coronavirus outbreak.
Neal wants to see “clearer public policy arguments to increase funding for security programs” in healthcare.
Weiss said that he is keen to receive more Indicators of Compromise (both atomic indicators and TTPs) about ransomware attacks, as well as decryption methods for various strains of the malware. But he recognizes the difficulties that might emerge as the initiative scales. Automation may be required to filter and sort through the volume of data coming in and to prepare actionable reports.
Still, he said, “I’d rather have that problem than the reverse.”
As multiple cities head into lockdown, IT teams face extraordinary pressure to urgently deliver remote working to more users in a broader number of roles.
Over the coming weeks, the contrast between well and poorly resourced IT teams will be stark. Many won’t have the wherewithal to navigate this crisis without introducing unacceptable risks. Those that can will leap ahead. The tools we have on-hand to provide remote access in 2020 are orders of magnitude better than even a year or two ago.
Web-based identity brokers, trivially-deployed MFA and identity-aware proxies have arrived to save us from the hell of “just install TeamViewer”. And while the least imaginative solution to the crisis is to ramp up VPN access, others will dare to use this crisis as an opportunity to move to a “zero trust” delivery model.
This week we’re asking: What can organisations do to quickly stand-up work from home options for a displaced workforce that might even leave us in a more secure place than we started?
Avoiding the worst
It’s safe to say that if a user wasn’t offered remote access to enterprise systems before COVID-19, it was probably for a fairly intractable reason. Many admins will now be looking for a ‘least worst’ option to make it happen fast. So let’s start there.
Availability and speed probably trump all other considerations at present. But security has to hold out on a few minimum requirements:
Use managed devices, wherever possible -
Unfashionable though it might be to say, users need to be held to a minimum standard of security. For the majority of companies that haven’t arrived at a zero-trust nirvana, we only get the control and visibility necessary to secure remote connections when we can enforce policy on the device.
Avoid third-party remote support tools -
Limit use of VNC, TeamViewer and other remote support tools. Users should only connect via remote sessions that are encrypted, and via apps that can be patched and monitored by the security team. If you aren’t using application whitelisting tools, a combination of Group Policy (blocking the hashes of their executables) and firewall rules might be the best you can manage.
MFA, always -
All user connections should require a second factor of authentication - irrespective of device or access mechanism. Hardware MFA is king, SMS the least desirable, and the many variations in between the most practical.
Scan and patch -
All components of the remote access solution should be patched against known vulnerabilities - with close attention paid to VPN agents and concentrators.
Avoid RDP altogether -
If you don’t absolutely need it, you should ideally have disabled RDP. But if you must…
Don’t expose RDP to the internet -
User connections should only be made from managed devices over an SSL VPN.
Avoid direct RDP connections -
RDP sessions should be forced through a centrally-managed RD Gateway deployed in a DMZ, preferably behind a web application firewall. If that sounds like a performance nightmare, it’s because it is. We’re going on the assumption that you’re desperate.
Enforce basic security config -
Long and complex passwords, MFA and account lockouts after multiple incorrect passwords, at the very least.
Hunt -
RDP is so commonly abused by attackers that you’re going to need to keep a close eye on it.
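On the Group Policy hash rules mentioned under “Avoid third-party remote support tools”: hash rules need a current inventory of the binaries you want to block or allow. A small sketch of building one - the file patterns and function names here are our own, and a real deployment would lean on your endpoint management tooling:

```python
import hashlib
from pathlib import Path

def sha256_of(path):
    """SHA-256 digest of a file, read in chunks so large binaries are fine."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def hash_inventory(root, patterns=("*.exe",)):
    """Walk a directory tree and map each matching binary to its hash.

    The output can seed a Group Policy hash rule or a blocklist for tools
    like VNC and TeamViewer that users have installed themselves.
    """
    return {str(p): sha256_of(p)
            for pat in patterns
            for p in Path(root).rglob(pat)}
```

Note that hash rules are brittle - every vendor update changes the hash - which is why they are a stopgap when proper application whitelisting isn’t available.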
So what if the supply-chain of new devices breaks down, and BYOD becomes your only choice?
Connecting user-owned devices to virtual desktops in an organisation’s private cloud may be a reasonable compromise, especially for users requiring access to older or resource-intensive apps.
VDI isn’t the worst option - but you’re going to need a lot of spare compute, storage and network capacity. A sudden influx of remote users isn’t going to be cheap. If you’re going to go to that much effort and cost, you may as well be thinking longer-term.
Adjunct Professor at Stanford and fellow Risky.Biz contributor Alex Stamos suggests CIOs take the urgent remote access use case - which has a very good chance of being funded - and use it as a stepping stone to zero trust.
View the recent Risky Business livestream on enabling a work-from-home workforce:
Identity-Aware Proxies: your Coronavirus friend
It might not be as big a leap as you think.
Any organisation that has deployed Office 365, for example, has created a cloud-hosted identity store (in Azure AD). Microsoft’s Azure AD Application Proxy can use this identity store to provide the same remote (SSO) access into internally-hosted web apps as Microsoft’s cloud suite.
CSOs and CIOs aren’t limited to Microsoft technology here, either. Akamai, Cloudflare and others now offer the network-level plumbing required to provision internal services to remote workers via “identity-aware” proxy services. Users sign-in using SSO (via Azure AD, Okta, whatever), then get piped through Akamai or Cloudflare’s network to internal apps.
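The core idea is easy to state: the proxy forwards a request to the internal app only if it carries a valid identity assertion from the SSO provider. As a toy illustration of that check - emphatically not how Akamai, Cloudflare or Azure AD Application Proxy actually implement it (they use standards like SAML and OIDC) - a minimal HMAC-signed assertion might be verified like this:

```python
import hashlib
import hmac
import time

SIGNING_KEY = b"key-shared-with-the-identity-provider"  # illustrative only

def make_assertion(user: str, expires: int) -> str:
    """Issue a signed 'user|expiry|signature' assertion (what the IdP would do)."""
    payload = f"{user}|{expires}"
    sig = hmac.new(SIGNING_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}|{sig}"

def proxy_allows(assertion: str, now: int = None) -> bool:
    """What an identity-aware proxy checks before forwarding to an internal app."""
    try:
        user, expires, sig = assertion.rsplit("|", 2)
    except ValueError:
        return False  # malformed assertion
    expected = hmac.new(SIGNING_KEY, f"{user}|{expires}".encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False  # signature doesn't verify - forged or tampered
    now = now if now is not None else int(time.time())
    return now < int(expires)
```

The point of the pattern: the internal app never sees an unauthenticated packet, because anything without a valid, unexpired assertion is dropped at the proxy.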
So if you’re really stuck - and feeling brave - the users previously bound to the workstation at HQ might make for a great pilot group. It’s relatively new tech and there will be teething issues, but it’s certainly worth a look.
How are you most likely to be attacked?
You can also build a strong case for taking a new approach to remote access when you look at the initial infection vector used in recent attacks.
Attacking vulnerable users
There has already been a proliferation of COVID-themed credential phishing campaigns from both state-sponsored attackers and cybercrime gangs, to such a degree that US Attorney General William Barr has directed the Department of Justice to prioritise the prosecution of COVID-themed scams.
We should also anticipate that attackers will double-down on tech support scams. Users will be asked to follow unfamiliar procedures over the coming weeks. Some will be unfamiliar with the devices they’ve been assigned. They’ll have no prior experience with connecting using the corporate VPN. They may never have raised requests for IT support when outside the network.
These attacks will have a higher impact than usual, as many users will be connecting to corporate apps from user-owned devices. These devices will be highly susceptible to malware infection, unmonitored, difficult to support and difficult to acquire and re-image after they get infected.
Malware distributors won’t need to innovate much to net a bigger and more profitable catch.
Trawling for exposed remote access
We can expect attackers to scan for internet-exposed RDP (Remote Desktop Protocol, which defaults to port 3389) and the ports used by third-party remote support tools (VNC, TeamViewer etc) to find low-hanging fruit.
Ransomware actors in particular are fond of abusing exposed RDP connections as an initial infection vector for attacks - as evidenced by recent ‘big-game hunting’ ransomware attacks in France. We’re also seeing commodity malware distributors like the TrickBot gang target RDP.
To date, researchers we’ve spoken to who run RDP honeypots haven’t picked up on major changes in attacker behaviour. Scanners are gonna scan, epidemic or not, and there were enough boxes to own before the crisis.
But as Insomnia Security’s Adam Boileau noted in a Risky.Biz livecast this week, the impacts of the many poor decisions made this week are likely to be long-felt.
“Admins will install VNC on desktops, punch some holes in the firewall, and hand out a port number and a password. We will live with a very, very long tail of the mess we’ve made.”
Vulnerable gateways
Attackers will also be keeping an eye out for victims that haven’t patched VPN kit against known vulnerabilities.
In hindsight, it was probably good fortune that offensive security researchers got so intimate with corporate VPN apps during the course of 2019. A quick refresher:
In April 2019, US Homeland Security warned of authentication bypass flaws in a long list of enterprise VPN apps. Using these flaws, attackers that compromised a victim’s endpoint could assume the user’s full VPN access and go for broke in the corporate network. Palo Alto and Pulse Secure were the only vendors to immediately respond with patches for their VPN desktop apps.
Researchers dropped a new set of bugs found in Palo Alto Networks, Pulse Secure and Fortinet VPN solutions at Black Hat in August. Within days, attackers were scanning thousands of vulnerable Pulse Secure VPN endpoints and Fortigate SSL VPN web portals, collecting private keys and passwords for use in later attacks. From late 2019, the flaws were being actively exploited by APT crews and weeks later by ransomware gangs - including the crew that crippled Travelex.
Already in 2020, we’ve seen attackers scanning for vulnerable Citrix gateways. It’s assumed that the ransomware actors that popped German auto parts manufacturer Gedia, France’s Bretagne Telecom, steel manufacturer EVRAZ and possibly the German city of Potsdam abused a set of critical vulnerabilities found in Citrix products in late 2019.
Where do you expect attackers to focus their attention? Hit me up on Twitter.
This week’s sponsor interview is with Sam Crowther, founder of Kasada. They do bot detection and mitigation and apparently they’re quite good at it. Sam joins the show to talk through the new greyhatter of anti-anti-bot. It’s actually a really fun conversation, that one, so stick around for it.
Links to everything that we discussed are below and you can follow Patrick or Adam on Twitter if that’s your thing.
Risky Business #575 -- World drowns in Coronavirus phishing lures as crisis escalates
If you don’t know already, all guests who appear on the Risky Business Soap Box podcast paid to be here. These podcasts are promotional, but as regular listeners know, they’re not just mindless recitations of marketing talking points.
This edition of Soap Box is brought to you by Trend Micro, which is a company that’s in a really interesting position at the moment.
With Symantec acquired by Broadcom, which only really cares about the biggest 500 companies in the world, Sophos absorbed, Borg-style, by Thoma Bravo and McAfee sitting in the corner eating its paste, there’s an opportunity for a new “portfolio” security software firm to emerge, and Trend wants to be it.
Jon Clay is Trend’s director of global threat communications and he joined me for this conversation about ransomware, how EDR is becoming “just another feature,” and what the role for a “portfolio” company in infosec is going to be in the future.
Risky Biz Soap Box: Trend Micro's Jon Clay talks ransomware and being a portfolio company