
US intelligence spotted Chinese, Iranian deepfakes during 2020 campaign season

Operatives working for the Chinese and Iranian governments prepared fake, AI-generated content as part of a campaign to influence U.S. voters in the closing weeks of the 2020 election campaign, current and former U.S. officials briefed on the intelligence told CNN.

The Chinese and Iranian operatives never disseminated the deepfake audio or video publicly, but the previously unreported intelligence demonstrates the concerns U.S. officials had four years ago about the willingness of foreign powers to amplify false information about the voting process.

The National Security Agency collected the intelligence that gave U.S. officials insight into China's and Iran's capabilities for producing deepfakes, one of the sources said.

Now, with deepfake audio and video much easier to produce and the presidential election just six months away, U.S. officials have grown more concerned about how a foreign influence campaign might exploit artificial intelligence to mislead voters.

At an exercise in the White House Situation Room last December in preparation for the 2024 election, senior U.S. officials wrestled with how to respond to a scenario in which Chinese operatives create a fake AI-generated video depicting a Senate candidate destroying ballots, as CNN has previously reported.

At a briefing last week, FBI officials warned that AI increases the ability of foreign states to spread election disinformation.

It's unclear what was depicted in the deepfakes the Chinese and Iranian operatives prepared in 2020, according to the sources, or why they were never deployed during that election.

At the time, some U.S. officials who reviewed the intelligence were unimpressed, believing it showed China and Iran lacked the capability to deploy deepfakes in a way that would seriously affect the 2020 presidential election, a former senior U.S. official told CNN.

"The technology has to be good; I don't think it was that good," the former official said. "Secondly, you have to have a risk appetite. China, no. Iran, probably yes."

Keeping an eye on adversaries

The NSA has continued to collect intelligence on foreign adversaries developing deepfakes and the potential threat they pose to U.S. elections now that the technology has advanced dramatically over the past four years, the former senior official added, noting that in 2020 there was no easy-to-use large language model like ChatGPT.

CNN has requested comment from the NSA.

U.S. officials have maintained a high level of visibility into the AI and deepfake advancements made by countries including China, Iran and Russia since the 2020 election. But putting that intelligence to use inside the U.S. remains a challenge, the former official said.

"The question becomes how quickly can we spot an anomaly and then share that rapidly within the United States," the former official told CNN. "Are we winning the race against a series of adversaries that might operate within the U.S.? That's the challenge."

The threat of deepfakes and foreign influence is poised to come up at a Senate Intelligence Committee hearing on Wednesday, when lawmakers will get a rare opportunity to publicly question the director of national intelligence and other senior officials about foreign threats to elections.

While they did not deploy their deepfakes in 2020, Iranian government operatives did make a brazen attempt that year to influence voters by imitating the far-right Proud Boys group and disseminating a video purporting to show the hack of a U.S. voter registration database, according to U.S. prosecutors.

"The fact that the Iranians pulled the Proud Boys crap but didn't try deepfakes was either a lack of faith in the capabilities or a sign of no clear internal guidance," one person familiar with the intelligence told CNN.

Lost in translation

For foreign influence operations to be effective, they also need to resonate with the American public, something China has struggled with, the former senior U.S. official said.

"I think it's clearly a cultural piece," the former official said. "They really have a very difficult understanding of the issues that are divisive or necessarily how to play to those issues, where the Russians do not."

Generative AI, or AI used to create video, audio, imagery or text, has made foreign influence actors more efficient at creating content, but "there is no evidence that it has made them or their campaigns any more effective," said Lee Foster, an expert in tracking foreign influence operations online.

"Generative AI has so far not helped actors resolve the main bottleneck they face: distribution," said Foster, a co-founder of the AI security firm Aspect Labs. "Actors have rarely struggled with creating content. Getting it in front of the right eyeballs at a meaningful scale has been and continues to be the sticking point, one that AI so far has not helped them overcome."

Foster and other experts have cautioned against exaggerating the impact of foreign influence operations, including those that use AI, because doing so benefits the propagandists themselves.

Disinformation in the U.S.

But the U.S. remains fertile ground for conspiracy theories, whether domestic or foreign in origin.

Nearly 70% of Republicans and Republican-leaners said that President Joe Biden's 2020 election win was not legitimate, according to a CNN poll released in August. And positive views of many government institutions are "at historic lows," with just 16% of the public saying they trust the federal government always or most of the time, according to a Pew Research Center survey released in September.

The 2024 U.S. election will present new opportunities for foreign influence operations. U.S. military aid to Ukraine is essentially on the line, with Democrats largely backing Biden's support for Ukraine and some leading Republicans, including former President Donald Trump, increasingly backing away from foreign aid.

FBI officials are concerned that the war in Ukraine and U.S. support for Kyiv might be an "animating event for the Russians" in terms of conducting interference or influence operations aimed at the U.S. election, a senior FBI official told reporters last week.


Justice Department to seek tougher sentences for AI-fueled election crimes

Federal prosecutors will pursue tougher sentences in cases in which artificial intelligence is used to commit an election-related crime, including threatening violence against election workers and voter suppression, Deputy U.S. Attorney General Lisa Monaco said Monday.

The Justice Department policy change is an effort to keep pace with a chaotic information environment ahead of the 2024 presidential election, as AI tools have made it much easier to mimic politicians' voices and likenesses to spread false information. The new policy applies to cases in which AI makes the election-related crime "more dangerous and more impactful," Monaco said.

"Our democratic process and the public servants who protect it have been under attack like never before as threats evolve and spread," Monaco told a meeting of the Justice Department's Election Threats Task Force on Monday afternoon. AI and other advances in technology are "emboldening those threatening election workers and the integrity of our elections," she said.

The emergence in recent years of AI-driven software that can produce deepfakes, or fake audio and video, has complicated the threat environment for election workers and the federal and state officials trying to protect them.

U.S. officials focused on election security are concerned that AI tools will exacerbate this already fraught threat environment, pointing to an incident during the Democratic primary in New Hampshire in January as an example. An AI-made robocall imitating President Joe Biden targeted thousands of New Hampshire voters, urging them not to vote in the primary. A New Orleans magician made the robocall at the behest of a political consultant working for Minnesota Rep. Dean Phillips, a long-shot Democratic challenger to Biden, the magician told CNN.

Federal officials have also been mapping out ways in which foreign powers might try something similar to the fake Biden robocall to influence voters. Senior Biden administration officials in December drilled for a hypothetical scenario in which Chinese operatives created an AI-generated video showing a Senate candidate destroying ballots.

The Justice Department is already under pressure from election officials to do more to investigate the flood of harassing phone calls and emails they have received in recent years. Many of those threats have been made without AI tools and by people who falsely believe there was widespread fraud in the 2020 election. An Ohio man was sentenced in March to more than two years in prison for making death threats against an Arizona election official, whom he accused of committing fraud.

The danger is persistent. Thirty-eight percent of the more than 900 local election officials surveyed in February and March by the nonprofit Brennan Center for Justice reported experiencing threats, harassment or abuse. More than half of the election officials surveyed expressed concern for the safety of their colleagues and staff.

"Election offices across the country continue to deal with threats and harassment for doing their jobs, and in many places this behavior has been nearly nonstop since mid-2020," Amy Cohen, executive director of the National Association of State Election Directors, told CNN. "This should be unacceptable to all Americans."

Election offices have in recent years been working more closely with state and local law enforcement on personal safety training and "physical security best practices to help staff stay safe," Cohen said.