
      AI and the Indian Election

      news.movim.eu / Schneier · Tuesday, 11 June, 2024 - 05:44 · 5 minutes

    As India concluded the world’s largest election on June 5, 2024, with over 640 million votes counted, observers could assess how the various parties and factions used artificial intelligence technologies—and what lessons that holds for the rest of the world.

    The campaigns made extensive use of AI, including deepfake impersonations of candidates, celebrities and dead politicians. By some estimates, millions of Indian voters viewed deepfakes.

    But despite fears of widespread disinformation, the campaigns, candidates and activists for the most part used AI constructively in the election. They used AI for typical political activities, including mudslinging, but primarily to better connect with voters.

    Deepfakes without the deception

    Political parties in India spent an estimated US$50 million on authorized AI-generated content for targeted communication with their constituencies this election cycle. And it was largely successful.

    Indian political strategists have long recognized the influence of personality and emotion on their constituents, and they started using AI to bolster their messaging. Up-and-coming AI companies like The Indian Deepfaker, which started out serving the entertainment industry, quickly responded to this growing demand for AI-generated campaign material.

    In January, Muthuvel Karunanidhi, former chief minister of the southern state of Tamil Nadu for two decades, appeared via video at his party’s youth wing conference. He wore his signature yellow scarf, white shirt and dark glasses, and adopted his familiar stance—head slightly bent sideways. But Karunanidhi died in 2018. His party authorized the deepfake.

    In February, the All-India Anna Dravidian Progressive Federation party’s official X account posted an audio clip of Jayaram Jayalalithaa, the iconic superstar of Tamil politics colloquially called “Amma” or “Mother.” Jayalalithaa died in 2016.

    Meanwhile, voters received calls from their local representatives to discuss local issues—except the leader on the other end of the phone was an AI impersonation. Bharatiya Janata Party (BJP) workers like Shakti Singh Rathore have been frequenting AI startups to send personalized videos to specific voters about the government benefits they received and to ask for their vote over WhatsApp.

    Multilingual boost

    Deepfakes were not the only manifestation of AI in the Indian elections. Long before the election began, Indian Prime Minister Narendra Modi addressed a tightly packed crowd celebrating links between the state of Tamil Nadu in the south of India and the city of Varanasi in the northern state of Uttar Pradesh. Instructing his audience to put on earphones, Modi proudly announced the launch of his “new AI technology” as his Hindi speech was translated to Tamil in real time.

    In a country with 22 official languages and almost 780 unofficial recorded languages, the BJP adopted AI tools to make Modi’s personality accessible to voters in regions where Hindi is not easily understood. Since 2022, Modi and his BJP have been using the AI-powered tool Bhashini, embedded in the NaMo mobile app, to translate Modi’s speeches with voiceovers in Telugu, Tamil, Malayalam, Kannada, Odia, Bengali, Marathi and Punjabi.

    As part of their demos, some AI companies circulated their own viral versions of Modi’s famous monthly radio show “Mann Ki Baat,” which loosely translates to “From the Heart,” voice-cloned into regional languages.

    Adversarial uses

    Indian political parties doubled down on online trolling, using AI to augment their ongoing meme wars. Early in the election season, the Indian National Congress released a short clip to its 6 million followers on Instagram, set to the title track of a new Hindi music album named “Chor” (thief). The video grafted Modi’s digital likeness onto the lead singer and cloned his voice with reworked lyrics critiquing his close ties to Indian business tycoons.

    The BJP retaliated with its own video, on its 7-million-follower Instagram account, featuring a supercut of Modi campaigning on the streets mixed with clips of his supporters, set to an old patriotic Hindi song sung by the famous singer Mahendra Kapoor, who died in 2008 but was resurrected with AI voice cloning.

    Modi himself quote-tweeted an AI-created video of him dancing—a common meme that alters footage of rapper Lil Yachty on stage—commenting “such creativity in peak poll season is truly a delight.”

    In some cases, the violent rhetoric in Modi’s campaign, which put Muslims at risk and incited violence, was conveyed using generative AI tools; but the harm can be traced back to the hateful rhetoric itself, not necessarily to the AI tools used to spread it.

    The Indian experience

    India is an early adopter, and the country’s experiments with AI serve as an illustration of what the rest of the world can expect in future elections. The technology’s ability to produce nonconsensual deepfakes of anyone can make it harder to tell truth from fiction, but its consensual uses are likely to make democracy more accessible.

    The Indian election’s embrace of AI, which began with entertainment, political meme wars, emotional appeals, resurrected politicians and persuasion through personalized phone calls to voters, has opened a pathway for AI’s role in participatory democracy.

    The surprise outcome of the election, with the BJP failing to win its predicted parliamentary majority and India returning to a deeply competitive political system, highlights the possibility for AI to play a positive role in deliberative democracy and representative governance.

    Lessons for the world’s democracies

    It’s a goal of any political party or candidate in a democracy to have more targeted touch points with constituents. The Indian elections have shown a unique attempt at using AI for more individualized communication across linguistically and ethnically diverse constituencies, and at making those messages more accessible, especially to rural, low-income populations.

    AI and the future of participatory democracy could make constituent communication not just personalized but also a dialogue, so voters can share their demands and experiences directly with their representatives—at speed and scale.

    India can serve as an example of how to take this new fluency in AI-assisted party-to-people communication and move it beyond politics. The government is already using these platforms to provide government services to citizens in their native languages.

    If used safely and ethically, this technology could be an opportunity for a new era in representative governance, especially for the needs and experiences of people in rural areas to reach Parliament.

    This essay was written with Vandinika Shukla and previously appeared in The Conversation.


      Ross Anderson

      news.movim.eu / Schneier · Monday, 1 April, 2024 - 00:21 · 2 minutes

    Ross Anderson unexpectedly passed away Thursday night in, I believe, his home in Cambridge.

    I can’t remember when I first met Ross. Of course it was before 2008, when we created the Security and Human Behavior workshop. It was well before 2001, when we created the Workshop on Economics and Information Security. (Okay, he created both—I helped.) It was before 1998, when we wrote about the problems with key escrow systems. I was one of the people he brought to the Newton Institute for the six-month cryptography residency program he ran (I mistakenly didn’t stay the whole time)—that was in 1996. I know I was at the Fast Software Encryption workshop in December 1993, another conference he created. There I presented the Blowfish encryption algorithm. Pulling an old first edition of Applied Cryptography down from the shelf, I see his name in the acknowledgments. Which means that sometime in early 1993 I, as an unpublished book author who had only written a couple of crypto articles for Dr. Dobb’s Journal, asked him to read and comment on my book manuscript. And he said yes. Which means I mailed him a paper copy. And he read it. And mailed his handwritten comments back to me. In an envelope with stamps. Because that’s how we did it back then.

    I have known Ross for over thirty years, as both a colleague and a friend. He was enthusiastic, brilliant, opinionated, articulate, curmudgeonly, and kind. Pick up any of his academic papers—there are many—and odds are that you will find an unexpected insight. He was a cryptographer and security engineer, but also very much a generalist. He analyzed block ciphers in the 1990s, and attacks against large language models last year. He started conferences like nobody’s business. His masterwork book, Security Engineering—now in its Third Edition—is as comprehensive a tome on cybersecurity and related topics as you could imagine. (Also note his fifteen-lecture video series on that same page. If you have never heard Ross lecture, you’re in for a treat.) He was the first person to understand that security problems are often actually economic problems. He was the first person to make a lot of those sorts of connections. He fought against surveillance and back doors, and for academic freedom. He didn’t suffer fools in either government or the corporate world.

    He’s listed in the acknowledgments as a reader of every one of my books from Beyond Fear on. Recently, we saw each other only a couple of times a year, at this or that workshop or event. Most recently was last June, at SHB 2023, in Pittsburgh. He was going to attend my Workshop on Reimagining Democracy, but he had to cancel at the last minute. (He sent me the talk he was going to give. I will see about posting it.) The day before he died, we were discussing how to accommodate everyone who registered for this year’s SHB workshop. I learned something from him every single time we had a conversation. And I am not the only one.

    My heart goes out to his wife Shreen and his family. We lost him much too soon.


      Hardware Vulnerability in Apple’s M-Series Chips

      news.movim.eu / Schneier · Tuesday, 26 March, 2024 - 16:23 · 2 minutes

    It’s yet another hardware side-channel attack:

    The threat resides in the chips’ data memory-dependent prefetcher, a hardware optimization that predicts the memory addresses of data that running code is likely to access in the near future. By loading the contents into the CPU cache before it’s actually needed, the DMP, as the feature is abbreviated, reduces latency between the main memory and the CPU, a common bottleneck in modern computing. DMPs are a relatively new phenomenon found only in M-series chips and Intel’s 13th-generation Raptor Lake microarchitecture, although older forms of prefetchers have been common for years.
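    To make that concrete, here is a minimal sketch (my illustration, not from the paper) of the pointer-chasing access pattern a DMP is built to accelerate. Each load address comes from the contents of the previous load, so a conventional stride prefetcher cannot predict it, but a prefetcher that inspects loaded values for pointer-like content can:

```c
#include <stdio.h>
#include <stdlib.h>

/* Illustrative only: a linked-list walk is the canonical data
 * memory-dependent access pattern. The address of each node is
 * read from memory, not computed from a stride. */
struct node {
    long value;
    struct node *next;
};

int main(void) {
    /* Build a small list. */
    struct node *head = NULL;
    for (long i = 0; i < 1000; i++) {
        struct node *n = malloc(sizeof *n);
        n->value = i;
        n->next = head;
        head = n;
    }

    /* The walk whose latency a DMP hides: each n->next is a pointer
     * fetched from memory contents. A DMP scans those contents and
     * prefetches the next node before the CPU asks for it. */
    long sum = 0;
    for (struct node *n = head; n != NULL; n = n->next)
        sum += n->value;
    printf("sum = %ld\n", sum);

    /* Free the list. */
    while (head != NULL) {
        struct node *n = head->next;
        free(head);
        head = n;
    }
    return 0;
}
```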

    […]

    The breakthrough of the new research is that it exposes a previously overlooked behavior of DMPs in Apple silicon: Sometimes they confuse memory content, such as key material, with the pointer value that is used to load other data. As a result, the DMP often reads the data and attempts to treat it as an address to perform memory access. This “dereferencing” of “pointers”—meaning the reading of data and leaking it through a side channel—is a flagrant violation of the constant-time paradigm.
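    For readers unfamiliar with the term, “constant-time” code avoids secret-dependent branches and secret-dependent memory addresses. A minimal sketch of the discipline (mine, not from the paper), with a comment on why the DMP defeats it:

```c
#include <stdint.h>
#include <stdio.h>

/* Constant-time selection: returns a or b according to the secret
 * bit, with no secret-dependent branch or load address. */
static uint64_t ct_select(uint64_t secret_bit, uint64_t a, uint64_t b) {
    uint64_t mask = (uint64_t)0 - (secret_bit & 1); /* all ones or all zeros */
    return (a & mask) | (b & ~mask);
}

int main(void) {
    uint64_t secret = 1; /* stand-in for a key bit */
    printf("%llu\n", (unsigned long long)ct_select(secret, 42, 7));
    /* The DMP breaks the paradigm from below: even if code like this
     * never uses a secret as an address, the prefetcher may interpret
     * secret *data* sitting in memory as a pointer and dereference it,
     * leaving a secret-dependent footprint in the cache. */
    return 0;
}
```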

    […]

    The attack, which the researchers have named GoFetch, uses an application that doesn’t require root access, only the same user privileges needed by most third-party applications installed on a macOS system. M-series chips are divided into what are known as clusters. The M1, for example, has two clusters: one containing four efficiency cores and the other four performance cores. As long as the GoFetch app and the targeted cryptography app are running on the same performance cluster—even when on separate cores within that cluster—GoFetch can mine enough secrets to leak a secret key.

    The attack works against both classical encryption algorithms and a newer generation of encryption that has been hardened to withstand anticipated attacks from quantum computers. The GoFetch app requires less than an hour to extract a 2048-bit RSA key and a little over two hours to extract a 2048-bit Diffie-Hellman key. The attack takes 54 minutes to extract the material required to assemble a Kyber-512 key and about 10 hours for a Dilithium-2 key, not counting offline time needed to process the raw data.

    The GoFetch app connects to the targeted app and feeds it inputs that it signs or decrypts. As it’s doing this, it extracts the secret key that the targeted app uses to perform these cryptographic operations. This mechanism means the targeted app need not perform any cryptographic operations on its own during the collection period.
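    That collection loop has the classic shape of a cache side-channel attack. As a rough illustration only, here is a toy flush+reload example in C for x86—a different probe than the prime+probe variant GoFetch uses on Apple silicon, with a made-up victim function—showing how triggering a victim operation on chosen inputs and then timing cache lines can recover a secret byte:

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <emmintrin.h>   /* _mm_clflush */
#include <x86intrin.h>   /* __rdtscp */

#define LINE 64                    /* cache-line size in bytes */
static uint8_t table[256 * LINE];  /* toy victim's lookup table */
static volatile uint8_t sink;      /* defeats dead-code elimination */

/* Made-up victim: one secret-dependent table access per call. */
static void victim_decrypt(uint8_t chosen_input) {
    const uint8_t secret = 0x2a;
    sink = table[(uint8_t)(chosen_input ^ secret) * LINE];
}

/* Time a single load; a cache hit is much faster than a miss. */
static uint64_t time_access(const uint8_t *p) {
    unsigned aux;
    uint64_t t0 = __rdtscp(&aux);
    sink = *p;
    uint64_t t1 = __rdtscp(&aux);
    return t1 - t0;
}

int main(void) {
    memset(table, 1, sizeof table);

    /* Flush: evict every candidate line from the cache. */
    for (int guess = 0; guess < 256; guess++)
        _mm_clflush(&table[guess * LINE]);

    /* Trigger the victim on a chosen input. */
    victim_decrypt(0x00);

    /* Reload: the one line the victim touched comes back fast.
     * (Real attacks permute the probe order to defeat the
     * hardware stride prefetcher.) */
    for (int guess = 0; guess < 256; guess++) {
        uint64_t dt = time_access(&table[guess * LINE]);
        if (dt < 100) /* hit threshold; machine-dependent */
            printf("hot line %d => candidate secret byte 0x%02x\n",
                   guess, guess);
    }
    return 0;
}
```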

    Note that exploiting the vulnerability requires running a malicious app on the target computer. So it could be worse. On the other hand, like many of these hardware side-channel attacks, it’s not possible to patch.

    Slashdot thread.


      Security Vulnerability in Saflok’s RFID-Based Keycard Locks

      news.movim.eu / Schneier · Tuesday, 26 March, 2024 - 16:04 · 1 minute

    It’s pretty devastating:

    Today, Ian Carroll, Lennert Wouters, and a team of other security researchers are revealing a hotel keycard hacking technique they call Unsaflok. The technique is a collection of security vulnerabilities that would allow a hacker to almost instantly open several models of Saflok-brand RFID-based keycard locks sold by the Swiss lock maker Dormakaba. The Saflok systems are installed on 3 million doors worldwide, inside 13,000 properties in 131 countries. By exploiting weaknesses in both Dormakaba’s encryption and the underlying RFID system Dormakaba uses, known as MIFARE Classic, Carroll and Wouters have demonstrated just how easily they can open a Saflok keycard lock. Their technique starts with obtaining any keycard from a target hotel—say, by booking a room there or grabbing a keycard out of a box of used ones—then reading a certain code from that card with a $300 RFID read-write device, and finally writing two keycards of their own. When they merely tap those two cards on a lock, the first rewrites a certain piece of the lock’s data, and the second opens it.

    Dormakaba says that it’s been working since early last year to make hotels that use Saflok aware of their security flaws and to help them fix or replace the vulnerable locks. For many of the Saflok systems sold in the last eight years, there’s no hardware replacement necessary for each individual lock. Instead, hotels will only need to update or replace the front desk management system and have a technician carry out a relatively quick reprogramming of each lock, door by door. Wouters and Carroll say they were nonetheless told by Dormakaba that, as of this month, only 36 percent of installed Safloks have been updated. Given that the locks aren’t connected to the internet and some older locks will still need a hardware upgrade, they say the full fix will still likely take months longer to roll out, at the very least. Some older installations may take years.

    If ever. My guess is that for many locks, this is a permanent vulnerability.
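    The researchers are withholding the key-derivation details, so nothing here reproduces the attack. But for a sense of the tooling involved, here is a minimal sketch of just the first step they describe—selecting an ISO14443A (MIFARE Classic) card and reading its UID with a commodity reader via the open-source libnfc library. Everything beyond this read is the part the researchers and Dormakaba have kept private:

```c
#include <stdio.h>
#include <nfc/nfc.h>

/* Sketch of step one only: select a MIFARE Classic (ISO14443A) card
 * and print its UID with libnfc. The Unsaflok key derivation and the
 * lock-rewriting payloads are deliberately NOT public. */
int main(void) {
    nfc_context *context;
    nfc_init(&context);
    if (context == NULL) {
        fprintf(stderr, "nfc_init failed\n");
        return 1;
    }

    nfc_device *pnd = nfc_open(context, NULL); /* first available reader */
    if (pnd == NULL || nfc_initiator_init(pnd) < 0) {
        fprintf(stderr, "no usable NFC reader\n");
        nfc_exit(context);
        return 1;
    }

    /* MIFARE Classic answers as ISO14443A at 106 kbps. */
    const nfc_modulation nm = { .nmt = NMT_ISO14443A, .nbr = NBR_106 };
    nfc_target nt;
    if (nfc_initiator_select_passive_target(pnd, nm, NULL, 0, &nt) > 0) {
        printf("card UID:");
        for (size_t i = 0; i < nt.nti.nai.szUidLen; i++)
            printf(" %02x", nt.nti.nai.abtUid[i]);
        printf("\n");
    }

    nfc_close(pnd);
    nfc_exit(context);
    return 0;
}
```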


      On Secure Voting Systems

      news.movim.eu / Schneier · Thursday, 21 March, 2024 - 16:10 · 1 minute

    Andrew Appel shepherded a public comment—signed by twenty election cybersecurity experts, including myself—on best practices for ballot marking devices and vote tabulation. It was written for the Pennsylvania legislature, but it’s general in nature.

    From the executive summary:

    We believe that no system is perfect, with each having trade-offs. Hand-marked and hand-counted ballots remove the uncertainty introduced by use of electronic machinery and the ability of bad actors to exploit electronic vulnerabilities to remotely alter the results. However, some portion of voters mistakenly mark paper ballots in a manner that will not be counted in the way the voter intended, or which even voids the ballot. Hand-counts delay timely reporting of results, and introduce the possibility for human error, bias, or misinterpretation.

    Technology introduces the means of efficient tabulation, but also introduces a manifold increase in complexity and sophistication of the process. This places the process beyond the average person’s understanding, which can foster distrust. It also opens the door to human or machine error, as well as exploitation by sophisticated and malicious actors.

    Rather than assert that each component of the process can be made perfectly secure on its own, we believe the goal of each component of the elections process is to validate every other component.

    Consequently, we believe that the hallmarks of a reliable and optimal election process are hand-marked paper ballots, which are optically scanned, separately and securely stored, and rigorously audited after the election but before certification. We recommend state legislators adopt policies consistent with these guiding principles, which are further developed below.


      Licensing AI Engineers

      news.movim.eu / Schneier · Thursday, 21 March, 2024 - 16:07 · 1 minute

    The debate over professionalizing software engineers is decades old. (The basic idea is that, like lawyers and architects, there should be some professional licensing requirement for software engineers.) Here’s a law journal article recommending the same idea for AI engineers.

    This Article proposes another way: professionalizing AI engineering. Require AI engineers to obtain licenses to build commercial AI products, push them to collaborate on scientifically-supported, domain-specific technical standards, and charge them with policing themselves. This Article’s proposal addresses AI harms at their inception, influencing the very engineering decisions that give rise to them in the first place. By wresting control over information and system design away from companies and handing it to AI engineers, professionalization engenders trustworthy AI by design. Beyond recommending the specific policy solution of professionalization, this Article seeks to shift the discourse on AI away from an emphasis on light-touch, ex post solutions that address already-created products to a greater focus on ex ante controls that precede AI development. We’ve used this playbook before in fields requiring a high level of expertise where a duty to the public welfare must trump business motivations. What if, like doctors, AI engineers also vowed to do no harm?

    I have mixed feelings about the idea. I can see the appeal, but it never seemed feasible. I’m not sure it’s feasible today.


      Google Pays $10M in Bug Bounties in 2023

      news.movim.eu / Schneier · Thursday, 21 March, 2024 - 16:04

    BleepingComputer has the details. It’s $2M less than in 2022, but it’s still a lot.

    The highest reward for a vulnerability report in 2023 was $113,337, while the total tally since the program’s launch in 2010 has reached $59 million.

    For Android, the world’s most popular and widely used mobile operating system, the program awarded over $3.4 million.

    Google also increased the maximum reward amount for critical vulnerabilities concerning Android to $15,000, driving increased community reports.

    During security conferences like ESCAL8 and hardwear.io, Google awarded $70,000 for 20 critical discoveries in Wear OS and Android Automotive OS and another $116,000 for 50 reports concerning issues in Nest, Fitbit, and Wearables.

    Google’s other big software project, the Chrome browser, was the subject of 359 security bug reports that paid out a total of $2.1 million.

    Slashdot thread.