
Growing Challenges, 2020 Social Media Update Part 2


A multi-part update series reviewing recent news, resources, and cases related to social media and the technical and legal challenges it creates in eDiscovery

In “New Notifications,” we reviewed updated social media usage statistics and other evidence of social media’s growing evidentiary significance.  In this Part, we discuss three recent areas of growing or potential challenges and related news stories.


Since we last revisited this topic, three aspects of social media evidence have become frequently discussed areas of growing or potential challenges for practitioners: ephemeral messaging on the rise, more emoji in litigation, and deepfakes on the horizon.

Ephemeral Messaging on the Rise 

Ephemeral messaging is similar to email or text messaging, but with added functionality that automatically deletes messages after they have been read or after a short period of time specified by the sender.  It is offered as a feature or option by some social media services (e.g., Snapchat, Instagram), as well as by some dedicated apps and services (e.g., Wickr, Confide).  Many such apps also take other steps to protect message confidentiality, such as applying end-to-end encryption and implementing anti-screenshot features.

Ephemeral messaging presents obvious challenges for eDiscovery preservation and collection.  These apps are designed to auto-delete messages far more quickly than typical automated janitorial functions do, and most also employ end-to-end encryption.  If an app deletes messages before they can be collected, those messages may be unrecoverable, and even messages that have not yet been deleted may be inaccessible without custodian cooperation because of that encryption.
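To make the preservation problem concrete, here is a minimal sketch of the kind of time-to-live logic these apps implement.  It is purely illustrative (written in Python and not modeled on any particular service): once a message has been read or its timer elapses, it is purged, leaving nothing behind for a later collection effort.

```python
import time
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class EphemeralMessage:
    """A message that self-destructs after a sender-specified lifetime."""
    body: str
    ttl_seconds: int                        # lifetime chosen by the sender
    sent_at: float = field(default_factory=time.time)
    read_at: Optional[float] = None

    def mark_read(self) -> None:
        # In many apps, being read starts (or shortens) the deletion clock
        self.read_at = time.time()

    def is_expired(self) -> bool:
        # Expire relative to read time if read, otherwise to send time
        start = self.read_at if self.read_at is not None else self.sent_at
        return time.time() - start >= self.ttl_seconds

def purge_expired(mailbox: list) -> list:
    """Drop expired messages -- the step that defeats later collection."""
    return [m for m in mailbox if not m.is_expired()]
```

Real services pair purge logic like this with end-to-end encryption, so that even intercepted or backed-up copies remain unreadable without the device keys.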

Ephemeral messaging functionality was initially popularized by Snapchat, quickly became the primary focus of more business-oriented applications like Wickr and Confide, and was later adopted as an option by Instagram and experimented with by Facebook.  Ephemeral messaging apps are particularly popular with younger users: a 2016 study found them in use by 56% of smartphone owners ages 18-29.  They are also growing in popularity among businesses.  Uber made headlines in November 2017 when an employee testified about the company’s internal use of the ephemeral messaging app Wickr, and Uber is not alone.

Ephemeral messaging has been in the news again recently, on three different fronts:

  1. In March 2019, Mark Zuckerberg announced that Facebook would be moving towards encrypted, ephemeral messaging, which could create significant new eDiscovery challenges.
  2. Also in March 2019, the DOJ announced a change to its FCPA Corporate Enforcement Policy allowing for the potential use of instant messaging apps, including ephemeral messaging apps, which had previously been flatly prohibited (the applicable preservation duty was unchanged; companies were simply given more latitude in how to fulfill it).
  3. Finally, in a recent survey of 20 U.S. federal district and magistrate judges, “close to 70%” of respondents indicated that ephemeral messaging apps were the “new data type legal teams should be most worried about in the next five years” [emphasis added].

More Emoji in Litigation 

As social media communication channels (and smartphones) have become more frequent discovery sources, so too have emoji (or emoticons) shown up more frequently in cases.  In 2019, Santa Clara University law professor Eric Goldman published “Emojis and the Law” in the Washington Law Review, which revealed that “[b]etween 2004 and 2019, there was an exponential rise in emoji and emoticon references in US court opinions, with over 30 percent of all cases appearing in 2018” [emphasis added].  Examples range from landlord disputes to sex trafficking cases.  As they increase in frequency, emoji are creating special challenges for eDiscovery and litigation, both technical challenges (due to their differing appearances from platform to platform) and challenges of interpretation (due to their inherent ambiguity and their context dependency).
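The technical challenges are easy to see at the data level.  An emoji is stored as one or more Unicode code points, and each platform renders those code points with its own artwork; common variants, like skin-tone modifiers, combine several code points into a single visible glyph.  The short Python sketch below (purely illustrative) shows how what a reviewer sees can differ from what is actually stored:

```python
import unicodedata

# One visible emoji can be a single code point...
thumbs_up = "\U0001F44D"
print(unicodedata.name(thumbs_up))        # THUMBS UP SIGN

# ...or a sequence: base character plus a skin-tone modifier,
# stored as TWO code points but rendered as ONE glyph
thumbs_up_tone = "\U0001F44D\U0001F3FD"
for ch in thumbs_up_tone:
    print(f"U+{ord(ch):04X}", unicodedata.name(ch))
# U+1F44D THUMBS UP SIGN
# U+1F3FD EMOJI MODIFIER FITZPATRICK TYPE-4

# The same stored code points are drawn with different artwork on
# Apple, Google, Samsung, etc., so how an emoji "reads" can depend
# on the platform used to review it, not just the underlying data.
```

Search and review workflows need to account for these multi-code-point sequences, since keyword searching built for plain text can mishandle them.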

Deepfakes on the Horizon 

Another new source of potential complications in social media evidence is the rise of “deepfakes.”  Deepfakes are “highly realistic, falsified imagery and sound recordings” in which the original faces and/or voices are replaced.  They are not created using the types of 3D rendering “typically employed in Hollywood VFX studios” but, rather, using publicly available machine-learning algorithms and source media (i.e., images, videos, and audio recordings).  They have quickly progressed from illicit applications to viral social media amusements:

Like all video-adjacent technology, deepfakes first saw significant use in pornography, flooding the Internet with videos that to the untrained . . . eye look like celebrities doing porn.  Otherwise, it’s mostly used for comedy, like putting Steve Buscemi’s face on Jennifer Lawrence, or swapping Arnold Schwarzenegger and Danny De Vito in Twins.  Sometimes, deepfakers reinstate a role’s “original” casting (like Tom Selleck as Indiana Jones), bring actors like Bruce Lee back from the dead, or – in this latest case – ponder remakes of classics with Marvel Cinematic Universe stars.

Deepfake technology has the potential to create a variety of serious legal complications, including: defamation and false light claims from those depicted (including potential liability for duped journalists), defendants challenging the authenticity of video evidence against them (even when it’s real), and a new need for video analysis and expert testimony (and the costs that will come with both).
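Deepfake detection itself is expert territory, but one basic, widely used safeguard against later tampering claims is cryptographic hashing of media files at the time of collection.  The minimal sketch below (Python standard library only; the file name is hypothetical) illustrates the idea rather than a full forensic protocol:

```python
import hashlib
from pathlib import Path

def file_sha256(path: Path, chunk_size: int = 1 << 20) -> str:
    """Compute a SHA-256 digest of a file, reading in 1 MB chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Record the digest when the video is collected; re-hash before use.
# A mismatch proves the file changed after collection, though a match
# alone cannot prove the recording was authentic to begin with.
collected_hash = file_sha256(Path("collected_video.mp4"))  # hypothetical file
```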

At least three states have already taken some legislative steps related to deepfakes.  In July 2019, Virginia officially expanded its nonconsensual pornography ban “to include realistic fake videos and photos, including computer-generated ‘deepfakes.’”  In September 2019, Texas attempted to criminalize deepfake election interference:

Texas Senate Bill 751 (SB751) amended the state’s election code to criminalize deepfake videos created “with intent to injure a candidate or influence the result of an election” and which are “published and distributed within 30 days of an election.” Doing so is now a class A misdemeanor and offenders can be sentenced to a year in a county jail and fined up to $4,000.

Finally, in October 2019, California passed two measures related to deepfakes: one that authorizes election candidates to bring civil actions against distributors of election-related deepfakes, and one that “provides a plaintiff whose likeness was used in a computer-generated nude or sexual act video or image with the ability to obtain damages and other relief.”


Upcoming in this Series

In the next Part, we will continue our 2020 social media update series with a review of some recent cases of interest.


About the Author

Matthew Verga

Director of Education

Matthew Verga is an electronic discovery expert proficient at leveraging his legal experience as an attorney, his technical knowledge as a practitioner, and his skills as a communicator to make complex eDiscovery topics accessible to diverse audiences. A thirteen-year industry veteran, Matthew has worked across every phase of the EDRM and at every level from the project trenches to enterprise program design. He leverages this background to produce engaging educational content to empower practitioners at all levels with knowledge they can use to improve their projects, their careers, and their organizations.
