
Rules on deepfakes take hold in the US

For years, an annual, must-pass federal spending bill has served as a vehicle for minor or contentious provisions that might otherwise falter as standalone legislation, such as the prohibition of new service member uniforms or the indefinite detention of individuals without trial.

In 2019, that federal spending bill, called the National Defense Authorization Act (NDAA), once again included provisions separate from the predictable allocation of Department of Defense funds. This time, the NDAA included language on deepfakes, the machine-learning technology that, with some human effort, has created fraudulent videos of UK political opponents Boris Johnson and Jeremy Corbyn endorsing one another for Prime Minister.

Matthew F. Ferraro, a senior associate at the law firm WilmerHale who advises clients on national security, cybersecurity, and crisis management, called the deepfakes provisions a “first.”

“This is the first federal legislation on deepfakes in the history of the world,” Ferraro said of the NDAA, which was signed into law by the President on December 20, 2019.

But rather than creating new policies or crimes regarding deepfakes—like making it illegal to develop or distribute them—the NDAA asks for a better understanding of the burgeoning technology. It asks for reports and notifications to Congress.

Per the NDAA’s new rules, the US Director of National Intelligence (DNI) must, within 180 days, submit a report to Congress on the potential national security threats that deepfakes pose, the capabilities of foreign governments to use deepfakes in US-targeted disinformation campaigns, and the countermeasures the US currently has or plans to develop.

Further, the Director of National Intelligence must notify Congress each time a foreign government has launched, is currently running, or plans to launch a disinformation campaign using deepfakes or “machine-generated text,” like that produced by online bots that impersonate humans.

Lee Tien, senior staff attorney for the Electronic Frontier Foundation (EFF), said that, with any luck, the DNI report could help inform future policy. Whether Congress will actually write any legislation based on the report’s findings, however, is a separate matter.

“You can lead a horse to water,” Tien said, “but you can’t necessarily make them drink.”

With the NDAA’s passage, Malwarebytes is starting a two-part blog series on deepfake legislation in the United States. Next week, we will explore several Congressional and state bills in further depth.

The National Defense Authorization Act

The National Defense Authorization Act of 2020 is a sprawling, 1,000-plus-page bill that includes just two sections on deepfakes. Those sections set up reports, notifications, and a deepfakes “prize” for research in the field.

According to the first section, the country’s Director of National Intelligence must submit an unclassified report to Congress within 180 days that covers the “potential national security impacts of machine manipulated media (commonly known as ‘deepfakes’); and the actual or potential use of machine-manipulated media by foreign governments to spread disinformation or engage in other malign activities.”

The report must include the following seven items:

  • An assessment of the technological capabilities of foreign governments concerning deepfakes and machine-generated text
  • An assessment of how foreign governments could use or are using deepfakes and machine-generated text to “harm the national security interests of the United States”
  • An updated identification of countermeasure technologies that are available, or could be made available, to the US
  • An updated identification of the offices inside the US intelligence community that have, or should have, responsibility for deepfakes
  • A description of any research and development efforts carried out by the intelligence community
  • Recommendations about whether the intelligence community needs tools, including legal authorities and budget, to combat deepfakes and machine-generated text
  • Any additional information that the DNI finds appropriate

The report must be submitted in an unclassified format. However, an annex to the report that specifically addresses the technological capabilities of the People’s Republic of China and the Russian Federation may be classified.

The NDAA also requires that the DNI notify the Congressional intelligence committees each time there is “credible information” that an identifiable foreign entity has used, is currently using, or will use deepfakes or machine-generated text to influence a US election or domestic political processes.

Finally, the NDAA requires that the DNI set up what it calls a “deepfakes prize competition”: a program “to award prizes competitively to stimulate the research, development, or commercialization of technologies to automatically detect machine-manipulated media.” The prize amount cannot exceed $5 million per year.

As the first approved federal language on deepfakes, the NDAA is rather uncontroversial, Tien said.

“Politically, there’s nothing particularly significant about the fact that this is the first thing that we’ve seen the government enact in any sort of way about [deepfakes and machine-generated text],” Tien said, emphasizing that the NDAA has been used as a vehicle for other report-making provisions for years. “It’s also not surprising that it’s just reports.”

But while the NDAA focuses only on research, other pieces of legislation—including some that have become laws in a couple of states—directly confront the assumed threat of deepfakes to both privacy and trust.

Pushing back against pornographic and political deception

Though today feared as a democracy destabilizer, deepfakes began not with political subterfuge or international espionage, but with porn.

In 2017, a Reddit user named “deepfakes” began posting short clips of nonconsensual pornography that mapped the digital likenesses of famous actresses and celebrities onto the bodies of pornographic performers. This proved wildly popular.

In little time, a dedicated “subreddit”—a smaller, devoted forum—was created, and ever more deepfake pornography was developed and posted online. Two offshoot subreddits followed—one for deepfake “requests,” and another for fulfilling those requests. (Ugh.)

While the majority of deepfake videos feature famous actresses and musicians, it is easy to imagine an abusive individual making and sharing a deepfake of an ex-partner to harm and embarrass them.

In 2018, Reddit banned the deepfake subreddits, but the creation of deepfake material surged, and in the same year, a new potential threat emerged.

Working with producers at BuzzFeed, comedian and writer Jordan Peele helped showcase the potential danger of deepfake technology when he lent his voice to a manipulated video of President Barack Obama.

“We’re entering an era in which our enemies can make anyone say anything at any point in time, even if they would never say those things,” Peele said, posing as President Obama.

In 2019, that warning gained some legitimacy when a video of Speaker of the House of Representatives Nancy Pelosi was slowed down to fool viewers into thinking that the California policymaker was drunk or otherwise impaired. Though the video was not a deepfake, because it did not rely on machine-learning technology, its impact was clear: it was viewed by more than 2 million people on Facebook and shared on Twitter by the US President’s personal lawyer, Rudy Giuliani.
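To underline how little technical skill such a manipulation requires, here is a minimal sketch, not from the original story, that slows a clip to roughly 75 percent speed using the widely available ffmpeg tool. It assumes ffmpeg is installed and on the PATH; the slow_down helper and the file names are hypothetical.

    import subprocess

    def slow_down(src: str, dst: str, factor: float = 0.75) -> None:
        """Re-encode src so that it plays at `factor` of its original speed."""
        subprocess.run(
            [
                "ffmpeg", "-i", src,
                # Stretch video presentation timestamps: playing at 75 percent
                # speed means multiplying each timestamp by 1/0.75 (about 1.33).
                "-filter:v", f"setpts={1 / factor:.4f}*PTS",
                # Slow the audio track by the same factor; ffmpeg's atempo
                # filter changes tempo without shifting pitch.
                "-filter:a", f"atempo={factor}",
                dst,
            ],
            check=True,
        )

    slow_down("speech.mp4", "speech_slowed.mp4")

That a single off-the-shelf filter can produce viral disinformation is one reason these low-tech edits are treated separately from machine-learning deepfakes.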

These threats spurred lawmakers in several states to introduce legislation to prohibit anyone from developing or sharing deepfakes with the intent to harm or deceive.

On July 1, 2019, a Virginia law took effect that makes the distribution of nonconsensual pornographic videos a Class 1 misdemeanor. On September 1, 2019, a Texas law took effect that prohibits the making and sharing of deepfake videos intended to harm a candidate for public office. And in October 2019, California Governor Gavin Newsom signed Assembly Bills 602 and 730, which, respectively, make it illegal to create and share nonconsensual deepfake pornography and to try to influence a political candidate’s run for office with a deepfake released within 60 days of an election.

Along the way, Congressional lawmakers in Washington, DC, have matched the efforts of their state counterparts, with one deepfake bill clearing the House of Representatives and another clearing the Senate.

The newfound interest from lawmakers is a good thing, Ferraro said.

“People talk a lot about how legislatures are slow, and how Congress is captured by interests, or it’s suffering ossification, but I look at what’s going on with manipulated media, and I’m filled with some sense of hope and satisfaction,” Ferraro said. “Both houses have reacted quickly, and I think that should be a moment of pride.”

But the new legislative proposals are not universally approved. Upon the initial passage of California’s AB 730, the American Civil Liberties Union urged Gov. Newsom to veto the bill.

“Despite the author’s good intentions, this bill will not solve the problem of deceptive political videos; it will only result in voter confusion, malicious litigation, and repression of free speech,” said Kevin Baker, ACLU legislative director.

Another organization that opposes dramatic, quick regulation of deepfakes is EFF, which wrote last summer that “Congress should not rush to regulate deepfakes.”

Why, then, does EFF’s Tien welcome the NDAA?

Because, he said, the NDAA does not introduce substantial policy changes, but rather takes a first step toward creating informed policy in the future.

“From an EFF standpoint, we do want to encourage folks to actually synthesize the existing knowledge and to get to some sort of common ground on which people can then make policy choices,” Tien said. “We hope the [DNI report] will be mostly available to the public, because, if the DNI actually does what they say they’re going to do, we will learn more about what folks outside the US are doing [on deepfakes], and both inside the US, like efforts funded by the Department of Defense or by the intelligence community.”

Tien continued: “To me, that’s all good.”

Wait and see

The Director of National Intelligence has until June to submit the report on deepfakes and machine-generated text. Until then, more states, such as New York and Massachusetts, may advance deepfake bills that were introduced last year.

Further, as deepfakes continue to be shared online, more companies may have to grapple with how to treat them. Just last week, Facebook announced a new political deepfake policy that many argue does little to stop the wide array of disinformation posted on the platform.

Join us next week, when we take a deeper look at current federal and state deepfake legislation, and at the tangential problem of fraudulent, low-tech videos now referred to as “cheapfakes.”
