      Ross Anderson

      news.movim.eu / Schneier · Monday, 1 April - 00:21 · 2 minutes

    Ross Anderson unexpectedly passed away Thursday night in, I believe, his home in Cambridge.

    I can’t remember when I first met Ross. Of course it was before 2008, when we created the Security and Human Behavior workshop. It was well before 2001, when we created the Workshop on Economics and Information Security. (Okay, he created both—I helped.) It was before 1998, when we wrote about the problems with key escrow systems. I was one of the people he brought to the Newton Institute for the six-month cryptography residency program he ran (I mistakenly didn’t stay the whole time)—that was in 1996. I know I was at the Fast Software Encryption workshop in December 1993, another conference he created. There I presented the Blowfish encryption algorithm. Pulling an old first edition of Applied Cryptography down from the shelf, I see his name in the acknowledgments. Which means that sometime in early 1993 I, as an unpublished book author who had only written a couple of crypto articles for Dr. Dobb’s Journal, asked him to read and comment on my book manuscript. And he said yes. Which means I mailed him a paper copy. And he read it. And mailed his handwritten comments back to me. In an envelope with stamps. Because that’s how we did it back then.

    I have known Ross for over thirty years, as both a colleague and a friend. He was enthusiastic, brilliant, opinionated, articulate, curmudgeonly, and kind. Pick up any of his academic papers—there are many—and odds are that you will find an unexpected insight. He was a cryptographer and security engineer, but also very much a generalist. He analyzed block ciphers in the 1990s, and attacks against large language models last year. He started conferences like nobody’s business. His masterwork book, Security Engineering—now in its Third Edition—is as comprehensive a tome on cybersecurity and related topics as you could imagine. (Also note his fifteen-lecture video series on that same page. If you have never heard Ross lecture, you’re in for a treat.) He was the first person to understand that security problems are often actually economic problems. He was the first person to make a lot of those sorts of connections. He fought against surveillance and back doors, and for academic freedom. He didn’t suffer fools in either government or the corporate world.

    He’s listed in the acknowledgments as a reader of every one of my books from Beyond Fear on. Recently, we saw each other only a couple of times a year, at this or that workshop or event. Most recently was last June, at SHB 2023, in Pittsburgh. He was going to attend my Workshop on Reimagining Democracy, but he had to cancel at the last minute. (He sent me the talk he was going to give. I will see about posting it.) The day before he died, we were discussing how to accommodate everyone who registered for this year’s SHB workshop. I learned something from him every single time we had a conversation. And I am not the only one.

    My heart goes out to his wife Shreen and his family. We lost him much too soon.

      Hardware Vulnerability in Apple’s M-Series Chips

      news.movim.eu / Schneier · Tuesday, 26 March - 16:23 · 2 minutes

    It’s yet another hardware side-channel attack:

    The threat resides in the chips’ data memory-dependent prefetcher, a hardware optimization that predicts the memory addresses of data that running code is likely to access in the near future. By loading the contents into the CPU cache before it’s actually needed, the DMP, as the feature is abbreviated, reduces latency between the main memory and the CPU, a common bottleneck in modern computing. DMPs are a relatively new phenomenon found only in M-series chips and Intel’s 13th-generation Raptor Lake microarchitecture, although older forms of prefetchers have been common for years.

    […]

    The breakthrough of the new research is that it exposes a previously overlooked behavior of DMPs in Apple silicon: Sometimes they confuse memory content, such as key material, with the pointer value that is used to load other data. As a result, the DMP often reads the data and attempts to treat it as an address to perform memory access. This “dereferencing” of “pointers”—meaning the reading of data and leaking it through a side channel—is a flagrant violation of the constant-time paradigm.

    […]

    The attack, which the researchers have named GoFetch, uses an application that doesn’t require root access, only the same user privileges needed by most third-party applications installed on a macOS system. M-series chips are divided into what are known as clusters. The M1, for example, has two clusters: one containing four efficiency cores and the other four performance cores. As long as the GoFetch app and the targeted cryptography app are running on the same performance cluster—even when on separate cores within that cluster—GoFetch can mine enough secrets to leak a secret key.

    The attack works against both classical encryption algorithms and a newer generation of encryption that has been hardened to withstand anticipated attacks from quantum computers. The GoFetch app requires less than an hour to extract a 2048-bit RSA key and a little over two hours to extract a 2048-bit Diffie-Hellman key. The attack takes 54 minutes to extract the material required to assemble a Kyber-512 key and about 10 hours for a Dilithium-2 key, not counting offline time needed to process the raw data.

    The GoFetch app connects to the targeted app and feeds it inputs that it signs or decrypts. As it’s doing this, it extracts the app’s secret key that it uses to perform these cryptographic operations. This mechanism means the targeted app need not perform any cryptographic operations on its own during the collection period.
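
    For readers who haven’t run into the “constant-time paradigm” the excerpt mentions: cryptographic code is deliberately written so that secret values never determine which branch is taken or which memory address is accessed, which is what normally starves timing and cache side channels. Below is a minimal illustrative sketch of that style in C; it is mine, not from the paper, and the names are purely illustrative. The final comment summarizes why the DMP undermines the guarantee.

        /* Minimal sketch of the "constant-time" style the excerpt refers to:
         * the secret bit never drives a branch or an array index, so the
         * sequence of instructions and memory accesses is data-independent. */
        #include <stdint.h>
        #include <stdio.h>

        /* Branch-free select: returns if_one when secret_bit is 1, if_zero
         * when it is 0, using only operations whose timing and memory
         * behavior do not depend on the data. */
        static uint64_t ct_select(uint64_t secret_bit, uint64_t if_one, uint64_t if_zero) {
            uint64_t mask = (uint64_t)0 - (secret_bit & 1); /* all ones or all zeros */
            return (if_one & mask) | (if_zero & ~mask);
        }

        int main(void) {
            printf("%llu\n", (unsigned long long)ct_select(1, 42, 7)); /* prints 42 */
            /* GoFetch's observation: this code never dereferences anything
             * secret-dependent, but the data memory-dependent prefetcher also
             * scans the values being processed. If an intermediate value
             * derived from a key happens to look like a valid pointer, the DMP
             * may prefetch from that address, and the resulting cache
             * footprint can be measured by a co-resident process. */
            return 0;
        }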

    Note that exploiting the vulnerability requires running a malicious app on the target computer. So it could be worse. On the other hand, like many of these hardware side-channel attacks, it’s not possible to patch.
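
    As for how an unprivileged app ends up co-resident with the victim: per the excerpt, it only has to be scheduled on the same performance cluster, and an ordinary macOS process can ask for that through the public quality-of-service API. The sketch below is mine, not from the GoFetch paper; the function and constant are standard macOS APIs, but the assumption that user-interactive QoS lands the thread on the performance cores describes typical scheduler behavior, not a guarantee.

        /* Hypothetical sketch (not from the GoFetch paper): an unprivileged
         * macOS process requests user-interactive quality of service, which
         * the scheduler typically satisfies on the performance (P-core)
         * cluster -- the same cluster a victim crypto library is likely on. */
        #include <pthread/qos.h>
        #include <stdio.h>

        int main(void) {
            /* No root access or special entitlement is required; this is a
             * public API available to any third-party application. */
            if (pthread_set_qos_class_self_np(QOS_CLASS_USER_INTERACTIVE, 0) != 0) {
                fprintf(stderr, "failed to set QoS class\n");
                return 1;
            }
            printf("requested user-interactive QoS\n");
            /* ...a GoFetch-style measurement loop would run here... */
            return 0;
        }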

    Slashdot thread.

      Security Vulnerability in Saflok’s RFID-Based Keycard Locks

      news.movim.eu / Schneier · Tuesday, 26 March - 16:04 · 1 minute

    It’s pretty devastating:

    Today, Ian Carroll, Lennert Wouters, and a team of other security researchers are revealing a hotel keycard hacking technique they call Unsaflok. The technique is a collection of security vulnerabilities that would allow a hacker to almost instantly open several models of Saflok-brand RFID-based keycard locks sold by the Swiss lock maker Dormakaba. The Saflok systems are installed on 3 million doors worldwide, inside 13,000 properties in 131 countries. By exploiting weaknesses in both Dormakaba’s encryption and the underlying RFID system Dormakaba uses, known as MIFARE Classic, Carroll and Wouters have demonstrated just how easily they can open a Saflok keycard lock. Their technique starts with obtaining any keycard from a target hotel—say, by booking a room there or grabbing a keycard out of a box of used ones—then reading a certain code from that card with a $300 RFID read-write device, and finally writing two keycards of their own. When they merely tap those two cards on a lock, the first rewrites a certain piece of the lock’s data, and the second opens it.

    Dormakaba says that it’s been working since early last year to make hotels that use Saflok aware of their security flaws and to help them fix or replace the vulnerable locks. For many of the Saflok systems sold in the last eight years, there’s no hardware replacement necessary for each individual lock. Instead, hotels will only need to update or replace the front desk management system and have a technician carry out a relatively quick reprogramming of each lock, door by door. Wouters and Carroll say they were nonetheless told by Dormakaba that, as of this month, only 36 percent of installed Safloks have been updated. Given that the locks aren’t connected to the internet and some older locks will still need a hardware upgrade, they say the full fix will still likely take months longer to roll out, at the very least. Some older installations may take years.

    If ever. My guess is that for many locks, this is a permanent vulnerability.

      On Secure Voting Systems

      news.movim.eu / Schneier · Thursday, 21 March - 16:10 · 1 minute

    Andrew Appel shepherded a public comment—signed by twenty election cybersecurity experts, including myself—on best practices for ballot marking devices and vote tabulation. It was written for the Pennsylvania legislature, but it’s general in nature.

    From the executive summary:

    We believe that no system is perfect, with each having trade-offs. Hand-marked and hand-counted ballots remove the uncertainty introduced by use of electronic machinery and the ability of bad actors to exploit electronic vulnerabilities to remotely alter the results. However, some portion of voters mistakenly mark paper ballots in a manner that will not be counted in the way the voter intended, or which even voids the ballot. Hand-counts delay timely reporting of results, and introduce the possibility for human error, bias, or misinterpretation.

    Technology introduces the means of efficient tabulation, but also introduces a manifold increase in complexity and sophistication of the process. This places the understanding of the process beyond the average person’s understanding, which can foster distrust. It also opens the door to human or machine error, as well as exploitation by sophisticated and malicious actors.

    Rather than assert that each component of the process can be made perfectly secure on its own, we believe the goal of each component of the elections process is to validate every other component.

    Consequently, we believe that the hallmarks of a reliable and optimal election process are hand-marked paper ballots, which are optically scanned, separately and securely stored, and rigorously audited after the election but before certification. We recommend state legislators adopt policies consistent with these guiding principles, which are further developed below.

      Licensing AI Engineers

      news.movim.eu / Schneier · Thursday, 21 March - 16:07 · 1 minute

    The debate over professionalizing software engineers is decades old. (The basic idea is that, like lawyers and architects, there should be some professional licensing requirement for software engineers.) Here’s a law journal article recommending the same idea for AI engineers.

    This Article proposes another way: professionalizing AI engineering. Require AI engineers to obtain licenses to build commercial AI products, push them to collaborate on scientifically-supported, domain-specific technical standards, and charge them with policing themselves. This Article’s proposal addresses AI harms at their inception, influencing the very engineering decisions that give rise to them in the first place. By wresting control over information and system design away from companies and handing it to AI engineers, professionalization engenders trustworthy AI by design. Beyond recommending the specific policy solution of professionalization, this Article seeks to shift the discourse on AI away from an emphasis on light-touch, ex post solutions that address already-created products to a greater focus on ex ante controls that precede AI development. We’ve used this playbook before in fields requiring a high level of expertise where a duty to the public welfare must trump business motivations. What if, like doctors, AI engineers also vowed to do no harm?

    I have mixed feelings about the idea. I can see the appeal, but it never seemed feasible. I’m not sure it’s feasible today.

      Google Pays $10M in Bug Bounties in 2023

      news.movim.eu / Schneier · Thursday, 21 March - 16:04

    BleepingComputer has the details. It’s $2M less than in 2022, but it’s still a lot.

    The highest reward for a vulnerability report in 2023 was $113,337, while the total tally since the program’s launch in 2010 has reached $59 million.

    For Android, the world’s most popular and widely used mobile operating system, the program awarded over $3.4 million.

    Google also increased the maximum reward amount for critical vulnerabilities concerning Android to $15,000, driving increased community reports.

    During security conferences like ESCAL8 and hardwear.io, Google awarded $70,000 for 20 critical discoveries in Wear OS and Android Automotive OS and another $116,000 for 50 reports concerning issues in Nest, Fitbit, and Wearables.

    Google’s other big software project, the Chrome browser, was the subject of 359 security bug reports that paid out a total of $2.1 million.

    Slashdot thread.

      Public AI as an Alternative to Corporate AI

      news.movim.eu / Schneier · Sunday, 17 March - 11:27 · 2 minutes

    This mini-essay was my contribution to a round table on Power and Governance in the Age of AI. It’s nothing I haven’t said here before, but for anyone who hasn’t read my longer essays on the topic, it’s a shorter introduction.

    The increasingly centralized control of AI is an ominous sign. When tech billionaires and corporations steer AI, we get AI that tends to reflect the interests of tech billionaires and corporations, instead of the public. Given how transformative this technology will be for the world, this is a problem.

    To benefit society as a whole we need an AI public option—not to replace corporate AI but to serve as a counterbalance—as well as stronger democratic institutions to govern all of AI. Like public roads and the federal postal system, a public AI option could guarantee universal access to this transformative technology and set an implicit standard that private services must surpass to compete.

    Widely available public models and computing infrastructure would yield numerous benefits to the United States and to broader society. They would provide a mechanism for public input and oversight on the critical ethical questions facing AI development, such as whether and how to incorporate copyrighted works in model training, how to distribute access to private users when demand could outstrip cloud computing capacity, and how to license access for sensitive applications ranging from policing to medical use. This would serve as an open platform for innovation, on top of which researchers and small businesses—as well as mega-corporations—could build applications and experiment. Administered by a transparent and accountable agency, a public AI would offer greater guarantees about the availability, equitability, and sustainability of AI technology for all of society than would exclusively private AI development.

    Federally funded foundation AI models would be provided as a public service, similar to a health care public option. They would not eliminate opportunities for private foundation models, but they could offer a baseline of price, quality, and ethical development practices that corporate players would have to match or exceed to compete.

    The key piece of the ecosystem the government would dictate when creating an AI public option would be the design decisions involved in training and deploying AI foundation models. This is the area where transparency, political oversight, and public participation can, in principle, guarantee more democratically-aligned outcomes than an unregulated private market.

    The need for such competent and faithful administration is not unique to AI, and it is not a problem we can look to AI to solve. Serious policymakers from both sides of the aisle should recognize the imperative for public-interested leaders to wrest control of the future of AI from unaccountable corporate titans. We do not need to reinvent our democracy for AI, but we do need to renovate and reinvigorate it to offer an effective alternative to corporate control that could erode our democracy.