Seriously Risky Business Newsletter
August 25, 2020
Srsly Risky Biz: Tuesday, August 25
Your weekly dose of Seriously Risky Business news is supported by the Cyber Initiative at the Hewlett Foundation.
Former Uber CSO Joe Sullivan charged with obstruction of justice
Uber's former chief security officer Joe Sullivan has been charged with obstruction and misprision (concealing evidence of a felony) over his role in Uber's handling of a 2016 data breach.
The US Department of Justice alleges in a criminal complaint that, while serving as Uber's CSO, Sullivan withheld information about an ongoing security incident from Federal Trade Commission (FTC) investigators, who at the time were probing the ride-sharing company over a 2014 breach that pre-dated Sullivan's tenure.
In November 2016, ten days after Sullivan provided sworn testimony to the FTC about the 2014 breach, attackers contacted Uber to demand US$100k after swiping 57 million Uber driver and passenger records from a poorly secured S3 bucket. Under Sullivan's direction, Uber paid the attackers through its bug bounty program and demanded they sign an NDA declaring they had never stolen any Uber data. The DoJ contends Sullivan knew this was false.
Exfiltration of data from a compromised system would ordinarily constitute a crime and fall outside the scope of a bug bounty program. Sometimes a bounty participant can get carried away and touch data they shouldn't, but they don't usually follow up their scope creep with a ransom demand. As Risky.Biz exclusively reported in 2017, Sullivan orchestrated the bug bounty exercise in part as a way to ID the attackers and (if possible) secure the stolen data. Our understanding is that this scheme was successful. But inexplicably, Uber didn't inform law enforcement or the FTC about it. The same attackers subsequently went on to extort money from other organisations.
The crux of the obstruction complaint is whether Sullivan was obliged to update his FTC testimony in light of the second incident, which, much like the 2014 breach, revolved around poor AWS S3 credential management. The FTC's Civil Investigative Demand (CID) sought information from Uber about breaches "from January 1, 2014, until the date of full and complete compliance with this CID". The second incident occurred before the FTC gave Uber the all clear, which in turn was only granted after Uber assured the commission that steps had been taken to neutralise this type of incident. Sullivan signed off on those assurances.
There are hard lessons here for CSOs. It's wholly appropriate for initial investigations into security incidents to be confined to a 'need to know' audience, and for the containment period to be extended if there is a real possibility of identifying attackers and preventing further harm to customers. Efforts to contain this information are not in themselves evidence of a conspiracy to 'cover up' the incident. However, Sullivan's predicament is an edge case because of the FTC investigation and Uber's organisational structure.
Sullivan was a direct report to Uber CEO Travis Kalanick, enmeshing him in the brutal politics of a startup whose valuation was growing by a few billion dollars a month. He did not, as CSOs often do, report through the company's General Counsel. Sullivan's defence team told reporters this week that Uber's legal counsel was kept abreast of the 2016 incident, but in-house and external legal counsel appear (in the criminal complaint) to have testified to the contrary. Craig Clark, a specialist attorney assigned directly to Sullivan's team, was fired alongside him when new Uber CEO Dara Khosrowshahi learned of the incident in 2017. So expect arguments about who knew of the incident, who they reported to, and who should ultimately be held accountable to be thrashed out in the courtroom.
You can sympathise with Sullivan's unique predicament and still shake your head at Uber's choice not to disclose after the attackers were identified. That decision raises questions about professional judgement. The role of the CSO is inherently about managing competing pressures, oftentimes responding to unreasonable demands from CEOs to do whatever is necessary to manage a company's reputational risks.
Every CSO should now ask themselves: if your position relies on fealty to a CEO, and that CEO expects you to own the riskiest of decisions, how exposed will you be when that CEO is pushed out?
How to hack together some Azure app security
After going all SHOUTY ALL CAPS about OAuth phishing last week, we've had some excellent feedback from readers who have hacked up their own ways to deal with the threat in Microsoft 365 without paying through the nose for E5 licenses.
Sifting through Microsoft's support documentation won't get you there. Microsoft takes contradictory positions on whether admins should allow users to consent to integrating third-party apps. On most pages, Microsoft strongly advocates a permissive approach:
"Microsoft itself uses the default configuration with users able to register applications on their own behalf."
But on security-specific support pages published since the spate of OAuth phishing emerged, it recommends the opposite:
"Microsoft recommends disabling end-user consent to applications."
It's obvious why Microsoft contradicts itself: the company set out to create a thriving marketplace for third-party apps. That requires users and developers to be in the driver's seat and for friction to be minimised.
It's a little galling that Microsoft pitched its premium MCAS product as the solution within three paragraphs of acknowledging Azure apps as an easy attack vector, when there are more obvious fixes: changing the default settings or providing admins better means to securely configure the apps.
One Risky.Biz reader pointed out that if you don't want to pay the US$50+ per user for an E5 license, MCAS can be purchased as a standalone add-on for US$5 per user per month. But licensing MCAS for the sole purpose of closing a gap Microsoft left open isn't an investment organisations should be asked to make.
Other readers offered up some free, but hacky, suggestions. For a start, they switch off user consent to integrate third-party apps and audit all existing apps joined to their tenancy (using PowerShell queries ... because there is no management interface for this). If there are third-party apps deemed relevant and safe, or large numbers of users who need them, they create user groups in Azure Active Directory for the purpose of assigning authorised users to the relevant app, manually configure the app to 'require user assignment', then set 'tenant-wide admin consent' for the app. There are several steps where it's easy to screw up and assign apps with excessive privileges globally or to the wrong users, but that's about as good as it gets right now.
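If you're starting that audit from scratch, here's a minimal sketch of what it can look like. It assumes the AzureAD PowerShell module (our assumption for illustration, not the exact queries readers sent through) and simply lists the apps already integrated with a tenancy alongside the delegated permissions each has been granted:

```powershell
# Minimal audit sketch, assuming the AzureAD module is installed
# (Install-Module AzureAD). Illustrative only -- not readers' exact queries.
Connect-AzureAD

# Every service principal represents an app integrated with the tenancy
$apps = Get-AzureADServicePrincipal -All $true

foreach ($app in $apps) {
    # Delegated (user-consented) OAuth2 grants where this app is the client
    $grants = Get-AzureADServicePrincipalOAuth2PermissionGrant -ObjectId $app.ObjectId
    foreach ($grant in $grants) {
        [PSCustomObject]@{
            App         = $app.DisplayName
            ConsentType = $grant.ConsentType  # 'AllPrincipals' = tenant-wide consent
            Scopes      = $grant.Scope        # e.g. 'Mail.Read offline_access'
        }
    }
}
```

Anything granted tenant-wide ('AllPrincipals') or holding mail, file or directory scopes probably deserves a closer look.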
Microsoft is toying with new configuration options to dissuade companies from switching 'user consent' off altogether. Some readers are experimenting with a new feature (still in preview) that pushes all new app requests through an administrator consent workflow. A security admin must then assess the security implications of every new app requested. Also in preview is a 'Permissions Classifications' option, in which admins choose whether users can consent to new apps based on the specific permissions the app asks for. This cuts down on the number of apps to be manually reviewed. Reader Wim van den Heijkant sent through a PowerShell script he wrote to examine what permissions various apps are asking for.
It shouldn't be our job (or yours) to translate Microsoft's documentation. At the risk of being repetitive: Microsoft needs to change default settings so that organisations can still use Azure apps without their users being able to consent to new apps willy-nilly. Microsoft should also provide a simple allow list/block list management console for third-party apps, ideally with the ability to drill down from tenancy level to manage per user, user group or app. These are the baseline admin capabilities you'd expect from enterprise software.
Their absence sticks out like the dog's proverbials.
Google drops everything over Gmail bug
Google patched a security issue in Gmail in just seven hours following the publication of a blog post describing the bug. The issue was originally disclosed to Google in April 2020.
Allison Husain, a security researcher at Berkeley, found a way to bypass SPF and DMARC checks in Gmail and disclosed it to Google on April 3. Husain discovered that an attacker could use one Gmail account as a relay to lend authenticity to spoofed emails sent to any other Gmail account, irrespective of SPF and DMARC settings. It's the kind of bug spam and phishing operators salivate over, but it didn't rise to the top of Google's priority list until Husain blogged about it and published proof of concept code on August 19. (To be fair to Husain, she reported the issue to Google 137 days earlier, which is more time than Google's own Project Zero gives vendors to fix bugs.)
It might be a coincidence, but several Google services were out of action for about six hours on August 20.
FBI, CISA issue time-warp bulletin straight out of 1994
CISA and the FBI have seen enough social engineering over the humble telephone lately that they've issued an updated advisory about 'voice phishing' or 'vishing'.
As the July 2020 attack on Twitter demonstrated, there are limits to what multi-factor authentication (MFA) can do to keep out attackers who don't mind making a little noise on the way in, especially for targets working at home from unmanaged devices.
The FBI and CISA warn that attackers are working around multi-factor authentication by setting up phishing pages that imitate the VPN login pages of a targeted company. The pages are configured to capture usernames, passwords and OTP challenges.
Attackers call employees of the targeted company, often from spoofed VoIP numbers, posing as the IT help desk and explaining that the user's VPN connection needs to be "refreshed". If the victim is conned into entering credentials on the attacker's site, the attackers pass those credentials into the company's actual VPN login page in real time, valid OTP included. We would note that where an MFA push request is triggered rather than an OTP challenge, victims often approve it on the assumption it was pushed out by the IT help desk.
The attack isn't novel. It’s been standard attacker tradecraft for eons. But a COVID-ravaged world is especially susceptible, and the number of miscreants embracing social engineering is going berserk.
Risk-averse organisations typically restrict VPN connections to managed devices with security certificates installed, but as we noted back in March, a lot of organisations didn't have a fleet of managed laptops at the ready when COVID-19 forced the entire workforce to connect from home.
It's also worth noting that specialist providers like Duo and Okta have added contextual warnings to MFA requests to help users discern whether a prompt was triggered by a new or unfamiliar device. Duo's prompt displays the IP address, location and timezone.
For organisations that do allow unmanaged devices onto the corporate network, you can only really rely on people and process controls, a number of which are listed in the advisory. Your security awareness and training teams, and anybody involved in IT/security comms, should be across them.
Mysterious P2P botnet targets SSH servers
Malware analyst Ophir Harpaz has dissected a botnet that has infected over 500 SSH servers since January 2020, finding new targets by brute-forcing SSH passwords.
Dubbed 'FritzFrog', the botnet largely went unnoticed among malware analysts until Harpaz discovered ways to analyse it. There is no central C2 server, the malware is fileless (runs in-memory), and P2P messages sent between infected devices evenly distribute lists of new devices to crack.
While Harpaz's analysis found that FritzFrog is dropping a crypto-miner, she is unconvinced that this is the end-game of the malware. She observed a great deal more effort in the code dedicated to its P2P and brute-forcing worm modules. "I can quite confidently say that the attackers are much more interested in obtaining access to breached servers than making profit through Monero," she told Risky Business.
A backdoor left in compromised SSH servers strongly supports this theory. "This access and control over SSH servers can be worth much more money," she said, "especially when taking into the account the type of targets [we have] witnessed."
For now, this botnet won’t compromise your system if you’ve made use of SSH keys and disabled password authentication. Harpaz's blog post also includes advice on how to check if your SSH server was previously compromised.
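For reference, locking the door on this sort of brute-forcing comes down to a couple of sshd_config directives. A minimal, illustrative snippet (make sure your public key is in authorized_keys before restarting sshd, or you'll lock yourself out):

```
# /etc/ssh/sshd_config -- illustrative hardening snippet
PubkeyAuthentication yes            # key-based logins only
PasswordAuthentication no           # nothing for the botnet to brute-force
ChallengeResponseAuthentication no  # close the keyboard-interactive path too
PermitRootLogin prohibit-password   # or 'no' if root never needs SSH
```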
Two reasons to actually be cheerful this week:
- Chasing the great white whale: Researcher Dan Gunter released an open source tool to help Linux admins and security teams hunt down the GRU's evasive 'Drovorub' malware (exposed in last week's NSA/FBI joint advisory). He doesn't have Drovorub samples (neither does anyone we know), so building a tool to detect it is a next-level commitment. If Gunter's white whale is indeed out there, there's a harpoon waiting for it!
- One place to watch all the OSINT: There are lots of great open source intel projects, but running suspicious files, IP addresses or domains through multiple tools can be cumbersome. So young developer Eshaan Bansal has given Matteo Lodi's Intel Owl tool a fresh coat of paint and connected it to a bunch of new services, and the result looks promising.
No wonder the Norks are so prolific
A US Army report says that there are over 6,000 operatives working in cyber operations for North Korea. That's a lot of cybers. It's about as many operators as America's Armed Services contribute to US Cyber Command.
CISA/FBI warns about the latest Nork malware
CISA and the FBI released a joint malware analysis report about the latest workaday RAT (Remote Access Trojan) used by North Korean actors in attacks on defence contractors. In the campaign, dubbed Operation 'In(ter)ception' by ESET in June, Operation 'North Star' by McAfee in July and Operation 'DreamJob' by ClearSky in August, attackers impersonate legitimate recruiters on LinkedIn, dangling attractive job offers in front of their victims. They follow up with documents (such as job descriptions) laced with malware. The campaign targeted victims in the US, Europe and Israel over the last 12 months. The malware samples/IOCs in the report appear derived from more recent attacks.
Taiwan goes public on China's aggression
The Taiwanese Government has called out Chinese state actors over attacks targeting ten government agencies over the last two years, largely via the compromise of four managed IT service providers. In one campaign, attackers gained persistent access to five mail servers run by an outsourced IT provider, compromising the email accounts of 6,000 Taiwanese officials. Chinese state actors have also targeted Taiwan's prized semiconductor industry and state-owned petroleum companies in recent months.
This particular fraud was out of the ordinary
Credit bureau Experian South Africa was duped into forwarding information about 24 million of its customers to a local fraudster. The company says that sending large volumes of PII data is a service provided "in the ordinary course of business" with legitimate customers. ¯\_(ツ)_/¯
Trump admin offers carve-out for WeChat sales in China
As expected, lobbyists from multinational companies descended on the White House to point out to Team Trump that a global ban on WeChat would hurt their sales in China. Bloomberg reports that lobbyists are now (verbally) being promised some sort of carve-out from Trump's proposed WeChat ban.
TikTok petitions against Trump's Executive Order
ByteDance filed for injunctive and declaratory relief from Trump's TikTok ban and provided a high-level overview of its cyber security and content moderation practices. ByteDance argues that the ban violates TikTok Inc's Fifth Amendment rights and was triggered without any proof that TikTok presents a threat to US national security. It wants a judge to rule the ban invalid.
Malicious Azure app targeting SANS a BEC scam
The attackers who pried open a Microsoft 365 inbox at the SANS Institute (as reported last week) were looking for files that contained keywords like 'payments' and 'invoice'. Yep, it was a straight up Business Email Compromise scam, sent to 17 SANS staff in total, according to a video debrief. SANS claims it detected the malicious app during a "systematic review" of its email configuration, but didn't say what prompted that review. The most obvious reason to review your inbox rules is when a supplier asks why it hasn't been paid. Just sayin'.
In ancient times, there was a listening device called an iPod
One of the more intriguing stories published this week was a first-hand account of how, in 2005, Apple gave two engineers from US defence contractor Bechtel, who were tasked with building some sort of customised device, access to the source code for the fifth-generation iPod. We can only imagine what the customised iPods were used for. In any case, it was the last of the hackable iPods: the operating system on all future Apple devices was digitally signed to prevent tampering.
This week's long read
If you found the Mueller report a bit of a let down, the final instalment of the US Senate Select Committee on Intelligence report into Russian Active Measures and Interference in the 2016 Election [pdf] should scratch that four-year-old itch, even if you only have time to read the executive summary. For deeper gratification, head straight to Chapter III(B) on p.170: 'Hack and Leak'.