August 4, 2021: If you lie awake worrying that there are too few people incarcerated, too few criminal offenses on the statute books, and too much unregulated speech (in fact, altogether too much unregulated human activity in general), rest easy. Help is at hand. The Massachusetts Legislature is considering a bill that would criminalize videos that make it look as if people are saying things that they did not really say.
It seems unlikely that the bill, H. 1755, sponsored by Representative Jay D. Livingstone, will become law, not this session anyway. It is a refile of H. 3366, which he filed in 2019. For reasons that I explain below, I hope this bill does not become law, not this session, not next session, not ever.
The clue is in the typo
Whoever drafted the bill apparently drew inspiration, and most of the text, from a federal bill titled the Malicious Deep Fake Prohibition Act of 2018, filed by United States Senator Ben Sasse (R – Nebraska). If you want to read Attorney Nina Iacono Brown’s critique in Slate of Senator Sasse’s bill and similar proposals, click here.
Copying another legislator’s bill is not a violation of the Copyright Act, of course (on which subject see below). In fact, they should have gone the whole hog and copied the title too. Because what did the drafters choose as a moniker for Representative Livingstone’s adaptation of Senator Sasse’s bill? They called it “An Act to protect against deep fakes used to facilitate torturous or criminal conduct.”
Aside from the irony-laden, Freudian-slippy typo (I am quite sure that they meant to write “tortious,” not “torturous”), it’s just too much of a mouthful. But that problem is a small one compared with the bill’s potential impact on freedom of expression. It would hand the shut-uppers yet another tool with which to silence heterodox speakers.
Trust me, I’m from Big Tech
H. 1755 was on the agenda for the Joint Committee on the Judiciary on July 27, 2021. If you would like to watch the relevant part of the hearing, click here and scroll to 1:09:40. There you can see and hear testimony from Nick Gatz, manager of State Government Relations for Adobe, who states that the company is neutral on H. 1755 and offers the Legislature its expertise “on the topic of content manipulation and online misinformation,” which is the sort of thing Adobe is against, I gather.
Adobe is so very much against content manipulation and online misinformation that it has established an entity called the Coalition for Content Provenance and Authenticity. If that name was approved by a focus group, I am quite sure that its members either: (a) had no familiarity with Orwell’s 1984; or (b) considered the book to have been not so much a cautionary tale as an instruction manual.
Coalition of the all too willing
The purpose of the Coalition for Content Provenance and Authenticity? To deploy technology that will help us — naïve saps that we are — sort the real-news wheat from the fake-news chaff, thereby obviating the need for legislation. Why should politicians bother to extend control over online speech with laws (laws that could conceivably be struck down by bothersome judges or repealed by the great unwashed) when Big Tech has an app for that? If the alternative to the Act to Protect Against Deep Fakes Used to Facilitate Torturous or Criminal Conduct is the Coalition for Content Provenance and Authenticity, forgive me for not sighing with relief.
One of the more famous members of the coalition is Twitter, the company that (like Google’s YouTube) runs advertisements for the Chinese government, says the Columbia Journalism Review:
“According to a number of reports, the most recent ads push the message that protesters in Hong Kong are violent extremists and that state police are simply doing their best to keep the peace.”
Yes, Twitter takes money to promote the Chinese Communist Party line that pro-democracy protestors are violent extremists, a falsehood that does not count as “online misinformation” so far as Twitter is concerned, apparently.
Another coalition member is Microsoft, which, according to Business Insider, complies with China’s censorship laws. For example, earlier this year, when users in the United States tried to find images of Tank Man via Microsoft’s search engine, Bing, their searches yielded no results.
Readers may recall that Tank Man was the protestor who stood in front of a column of People’s Liberation Army tanks during the Tiananmen Square demonstrations. He was being a “violent extremist,” I suppose. But Bing’s omission was merely the result of “human error,” according to reports on the British Broadcasting Corporation (BBC).
“Beijing is known to require search engines operating in its jurisdiction to censor results, but those restrictions are rarely applied elsewhere.”
The most important word in that sentence is “rarely.” Fans of Gilbert and Sullivan’s H.M.S. Pinafore may be recalling the Captain’s lines, “What, never? Well, hardly ever.”
Coincidentally, the BBC is another member of the Coalition for Content Provenance and Authenticity. For readers unfamiliar with the BBC, it is Britain’s publicly funded media organization that makes popular dramas, documentaries, and situation comedies and, once upon a time, used to be a trustworthy source of news, at least in comparison with, say, TASS or Pravda. It is also the organization that employed Martin Bashir, the reporter who secured a TV interview with Diana, Princess of Wales, by using faked bank statements that fueled the princess’s paranoid delusions that she was the victim of a conspiracy involving, inter alia, royal bodyguards; her husband and heir apparent to the Crown, Prince Charles; the Secret Intelligence Service; and GCHQ, Britain’s equivalent of the National Security Agency.
The BBC followed up on Bashir’s fakery with an equally fake internal inquiry and not only retained his services but gave him a promotion. For the report of the independent inquiry, click here.
In addition to Martin Bashir, the BBC employed Jimmy Savile who, during his lengthy broadcasting career, sexually assaulted approximately 72 people and raped several more, including an 8-year-old girl, crimes to which the BBC later admitted it had “turned a blind eye.”
So Twitter, Microsoft, and the BBC are now coalescing with other media corporations in order to protect us — poor, credulous, undiscerning, gullible us — against content manipulation and online misinformation. What, as they say, could possibly go wrong?
During the hearing, the House chair of the committee suggested that deepfakes might be better dealt with via a new federal law. This brought to mind a current federal law, namely section 506 (c) of the Copyright Act, which makes it a crime to place on any work a false copyright notice:
“Any person who, with fraudulent intent, places on any article a notice of copyright or words of the same purport that such person knows to be false… shall be fined not more than $2,500.”
This provision came to mind for two reasons. First, it was only last year that the Supreme Court of the United States issued its decision in Georgia et al. v. Public.Resource.Org, Inc., on the subject of copyright in legislative works (the public edicts doctrine). The court reiterated the well-established point that legislators cannot claim copyright in the works they create in the course of their official duties.
That’s why Senator Ben Sasse has no grounds to go after State Representative Livingstone. And it is why the Massachusetts Legislature cannot claim copyright in the documents that it publishes. If it did so, e.g. by fraudulently posting a false copyright notice on its website, it would be violating section 506 (c) of the Copyright Act.
And that was the second reason that the provision came to mind as I watched the hearing, because right there on the screen, at the bottom of the page, appeared the following words:
“Copyright © 2021 The General Court of the Commonwealth of Massachusetts”
I wonder if that qualifies as “online misinformation.”
From tort to crime
If we cannot safely place total trust in Twitter, Microsoft, the BBC, and the Coalition for Content Provenance and Authenticity as a whole (and we can’t), would we be any better off with Rep. Livingstone’s Act to Protect Against Deep Fakes Used to Facilitate Torturous or Criminal Conduct? No, and here’s why.
The proposed law would make it a crime to distribute a video in order to “facilitate criminal or tortious conduct” if the video was “created or altered in a manner that [it] would falsely appear to a reasonable observer to be an authentic record of the actual speech or conduct of an individual.”
The word “facilitate” is pretty clear, I suppose, and the term “criminal conduct” is easy enough to grasp. It covers things like assault and battery, and fraudulently placing a false copyright notice in violation of section 506 (c) of the Copyright Act.
But what qualifies as tortious conduct? We have torts aplenty in Massachusetts, but here are two that tend to come up in the context of online spats: defamation and the intentional infliction of emotional distress. To me, these are the two torts most likely to provide a pretext for political prosecutions under H. 1755, allowing Massachusetts politicians to use the courts to silence their opponents. Do such things really happen here? For just one example, see my post titled “Free speech wins (four years after judge bans candidate from mentioning opponent’s name).”
It can be difficult for public figures such as politicians to shut up their detractors with defamation lawsuits. They have to prove “actual malice,” i.e. that the speaker made a false statement knowing that it was false or with reckless disregard of whether it was false or not.
Easier, then, if you are an elected tribune of the people, to seek a civil harassment-prevention order, as did the politician in the case I discuss in the aforementioned post. Even easier, perhaps, to bring a private criminal complaint under the proposed Act to Protect Against Deep Fakes Used to Facilitate Torturous or Criminal Conduct or, better still, get your friend the prosecutor to ask a grand jury to issue an indictment.
If H. 1755 becomes law and you share a deepfake with the intent to cause emotional distress to, say, Senator Suehappy Thinskin, you won’t be looking at your screen for a while; you’ll be looking at two and a half years in the slammer.
To safely forward the video of the esteemed Senator without fear of criminal prosecution, you would need to know — prior to sharing it — that it was not “created or altered in a manner that would falsely appear to a reasonable observer to be an authentic record of the actual speech or conduct of an individual.”
How could you be sure? Perhaps you could look for a certificate of authenticity issued by the Coalition for Content Provenance and Authenticity. But the Coalition (i.e. Twitter, Microsoft, the BBC, etc.) might not issue certificates to videos that criticize the powerful. It might routinely withhold certificates from people who say things that the powerful do not like.
But the absence of a certificate would not necessarily mean that the video was a deepfake. So you could roll the dice, share the video, and hope that you don’t get a call from the offended hack’s lawyer or from law enforcement.
Even if the video is authentic, you might worry that people with friends in high places could persuade law enforcement — and even a judge and jury — that it is not. Readers may have noticed that when somebody says something true, but embarrassing, about a powerful person, the powerful person first denies it and then attacks the somebody who said it, often with the eager help of the online mob. Even if the truth of the statement eventually becomes apparent, by that point the speaker’s life has been turned upside down.
Yes, H. 1755 says that “no person shall be held liable under this section for any activity protected by the Massachusetts Constitution or by the First Amendment to the Constitution of the United States.” But when do you, the speaker, find out whether your activity was protected by the Massachusetts Constitution or by the First Amendment to the Constitution of the United States? When a judge says so, i.e. long after you’ve been interrogated and prosecuted.
Those risks, I suspect, would make you think twice about forwarding the video of Senator Suehappy Thinskin saying or doing something idiotic. We call this the chilling effect.
But shouldn’t there be laws against using deepfakes to defame people or cause them emotional distress? Yes, and we already have them, e.g. the torts called defamation and the intentional infliction of emotional distress.
If you still think we need more criminal offenses for prosecutors to threaten people with, check out @ACrimeADay on Twitter. Spoiler alert: There are a lot.
Back in 2019, the Massachusetts bill to ban deepfakes had two cosponsors, but this time Representative Livingstone is going it alone. The bill is losing support rather than gaining it. You may think that I should take heart from this trend, but I do not. Why? Because of the difference between bad ideas and nuclear waste.
At some point, with the passage of time, nuclear waste stops being dangerous. Not so with bad ideas. You cannot summon forth the ideas that H. 1755 embodies, bottle them, bury them in a lead-lined underground vault, and wait for them to disintegrate into harmless nothingness. No, they remain in the atmosphere, floating freely like wraiths, sometimes for decades, until they suddenly make themselves manifest as emergency bills or outside sections in the State budget.
That is why I am no more relieved at the bill’s feeble prospects this session than I am about entrusting the task of identifying deepfakes to the likes of Twitter, Microsoft, and the BBC.
P.S. For the full text of Representative Jay Livingstone’s bill, H. 1755, scroll down below the image.
SECTION 1. Chapter 266 of the General Laws is hereby amended by inserting after section 37E the following section:-
Section 37E 1/2. (a) As used in this section, the following words shall have the following meaning unless the context clearly requires otherwise:
“Audiovisual record”, any audio or visual media in an electronic format and includes any photograph, motion-picture film, video recording, electronic image, or sound recording.
“Deep fake”, an audiovisual record created or altered in a manner that the record would falsely appear to a reasonable observer to be an authentic record of the actual speech or conduct of an individual.
(b) Whoever (1) creates, with the intent to distribute, a deep fake and with the intent that the distribution of the deep fake would facilitate criminal or tortious conduct, or (2) distributes an audiovisual record with actual knowledge that the audiovisual record is a deep fake and with the intent that the distribution of the audiovisual record would facilitate criminal or tortious conduct shall be guilty of the crime of identity fraud and shall be punished by a fine of not more than $5,000 or imprisonment in a house of correction for not more than two and one-half years, or by both such fine and imprisonment.
No person shall be held liable under this section for any activity protected by the Massachusetts Constitution or by the First Amendment to the Constitution of the United States.