By Dr. Keith Ludwick
Faculty Member, Doctoral Programs, American Military University
For terrorists to be successful, they must communicate their message to the world. Without broadcasting their message to a wide audience, terrorists cannot hope to impact policy, change government behavior or incite like-minded individuals into action.
However, over the decades, terrorist communication has evolved. Some 30 to 40 years ago, extremist groups propagated their message through print media, television or radio.
Today, their range of communication options extends well beyond those traditional methods. Terrorists can now use blogs, websites, Twitter, Facebook, YouTube, podcasts and many other avenues that are globally available and easy to access.
New Zealand Shootings Highlight the Need to Stop Glorifying Terrorists’ Attacks
The recent, tragic events in New Zealand highlight the need to question the avenues through which terrorists acquire the public attention they need to propagate their ideology. While high-powered rifles, racism and xenophobia have dominated public discourse, the live-streaming of the attacks on the Christchurch mosques should reignite the debate over how media companies monitor extremist material and live-streamed video, and over the proper role of government regulation.
As most people are now aware, Brenton Tarrant’s alleged killing spree on March 15 was live-streamed to his Facebook account as he savagely killed at least 50 worshippers at prayer in two mosques. The widespread public attention to this attack raises several questions that news organizations, world governments and technology companies must ultimately answer as society is now clearly entrenched in the digital age.
The New Zealand attack was not the first to use the immediacy of social media to propagate an individual’s hateful ideology. Surely it will not be the last. The Pulse nightclub shooter in Orlando, Omar Mateen, also posted his attack live on his Facebook account.
These efforts at broadcasting ideology via social media, YouTube and live-streaming need to be discussed in academia and public debates; they will undoubtedly be a part of future terrorist attacks. Questions facing policymakers and technology companies fall into three broad areas.
#1: What Is the Responsibility of Technology Companies to Monitor Content?
Technology companies often struggle with their influence on the public. It is probably safe to say that few technology companies have evil intentions or a desire to broadcast hate, but they are not altruistic either.
Most publicly traded companies have shareholders and answer to boards of directors that expect them to operate profitably. Negative press, or the perception that a platform is being used to spread a violent ideology, does not help the bottom line.
These organizations monitor their digital content for violations of their policies to prevent the distribution of objectionable material. However, technology companies often cite the difficulty of monitoring content with automated tools because of the risk of “false positives” and the fear of removing benign content.
The counterargument is that these companies invest millions of dollars in artificial intelligence (AI) and machine-learning algorithms to determine and predict the buying habits of their users. There are technical differences between scanning millions of minutes of uploaded video in real time and retroactively mining social media posts for buying trends, but the technology is advancing rapidly.

Companies have invested millions, perhaps billions, in AI to track buying trends; why can’t they apply the same capability to filtering out hateful or racist content? There is a clear disconnect between the two goals.
#2: What Is the Government’s Role in Regulating Digital Content to Prevent the Release of Offensive Material?
Regulation is a scary word to many, particularly when it pertains to technology and the digital world. The original concept of the Internet centered on the free flow of ideas. Letting the government regulate digital content and holding companies responsible for displaying or distributing objectionable content would create a host of problems.
First, governments struggle to keep laws and policies up to date concerning technology. Particularly in today’s era of extreme political partisanship, developing robust statutes that will stand the test of time is difficult, if not impossible.
Second, accountability presents its own set of problems. If an extremist with a racist agenda streams a violent attack, it takes less than a second for that content to become available and duplicated worldwide. If users re-tweet or share some of that video, even a few frames, is the platform company liable or is the user? If we get to the point of monitoring individual accounts for re-tweeting and sharing, when does that become constant surveillance and monitoring?
#3: Should Content Be Moderated at All?
The “forbidden fruit” analogy argues that banning material only drives curious individuals to seek it out. Letting the free market of ideas decide which content is appropriate could determine whether violent posts gain or lose traction.
Social media is increasingly regarded as a legitimate news source, on par with mainstream news media, and every major news outlet is active on social media.

With literally millions of real and self-styled reporters broadcasting the news, would it be a form of censorship if their platforms were to moderate content? Even traditional news sources on broadcast television and cable offer glimpses of violent videos, albeit with the perfunctory warning, “Viewer discretion is advised.” Time will tell.
Terrorist Attacks Will Continue to Be Widely Publicized
The shocking events in New Zealand are only the tip of the iceberg. The very fact that part of the newsworthiness of this attack was that it was live-streamed will only encourage others to do the same. It is most likely that future attacks, particularly those conducted by “lone wolves,” will take advantage of the media coverage of live-streaming and attempt to duplicate the efforts of Mateen and Tarrant.
Terrorists must communicate their hate and inspire recruits. Communication through online methods is now standard for many terrorist organizations.
During the first 10 or so years of social media, terrorists used online sites to recruit individuals and communicate internally. Now, we are seeing terrorists regularly turn to social media to spread their message beyond their usual audience.
About the Author
Keith Ludwick, Ph.D., is an adjunct professor in the School of Security and Global Studies at American Military University. He holds a bachelor’s degree in computer science from California State University, Sacramento, an M.A. in Strategic Studies (Homeland Security and Defense) from the Naval Postgraduate School, and a Ph.D. in Biodefense from George Mason University. Keith served as a Supervisory Special Agent in the FBI, specializing in counterintelligence and technical operations. He retired in 2018, devoting his time to teaching and research.