Fake News vs Free Speech: Facebook’s Festering Information Challenge


Fake news, charges of election meddling by foreign propaganda, and now allegations of listening in on conversations through the platform's chat interface: face it, Facebook, you're no longer the happy-go-lucky "connect to friends and family" tech company you were when you started out. Between the heavy mining and sale of personal data, an algorithm that shows customized stories of interest to your users, and your rise to become the internet's second largest search engine, you've gone from a coffee shop where friends sit down and swap stories to an information catwalk, spotlighting chic news, trends and memes with a limelight everyone wants in on. You've got the power to make or break careers, product sales, news cycles, even governments, and the world is starting to notice. Unfortunately, it looks like you're noticing the complications of this influence a little too late.

Information is power, Facebook, and as you've grown strong perhaps you never considered just what that power could be used for. It was one thing, after all, to market custom advertisements for products and services, but influencing election results? A foreign power using your algorithms to buy ads and propaganda in support of its preferred candidates? Live-streaming of suicides? Masses of people roused to the point of a shooting over an untrue rumour about a pizza parlour? Who could have possibly predicted these problems?

Librarians, that's who.

When it comes to balancing education against doctrines of control, keeping an open mind without promoting hate speech or disturbing community safety, no field knows the score better than information science. This is the field that offered open access to information before the Internet existed and that thrives as a centre for community education, a cause that hasn't always sat well with existing and incoming powers. The American Library Association, the largest and oldest library organization in the world, includes "social responsibility and the public good" among its core values. Of course, 'public good' and 'social responsibility' are both subject to interpretation as to what is beneficial or detrimental to the community, and finding the balance between allowing different viewpoints and keeping misinformation to a minimum is no easy task. That's what has amazed me most throughout these controversies over what should and shouldn't be accessible via Facebook, when information should be left alone as free speech and when items should be removed: I could swear I've seen these debates before. The controversy over information access and 'fake news' takes me back to sitting in with my peers (hi guys) as professors presented case studies and management expectations for librarians, teaching information literacy and collections development.

Libraries and Controversy

From banned books to basic collections, librarians have long been the frontline soldiers in meeting community information needs without supporting or promoting any particular agenda. Before the current online marketplace, where users can search for information at their leisure given internet access and Wi-Fi, public libraries were the go-to for information on any given topic. Actually, they still are: according to the ALA, librarians answer 6.6 million questions each week, and libraries receive 1.5 billion in-person visits a year. Maintaining an accessible spot for information has not been easy, however: historically, libraries have been a target. The Library of Alexandria is the most famous and disputed case, but history is rife with other examples where libraries were burned and destroyed because of the information accessible within, including the Library of al-Hakam II, the Imperial Library of Constantinople, Glasney College, and the National University of Tsing Hua. Book burnings were commonplace in 1930s Germany, and even today many books are challenged for their right to be on the shelf; the task of promoting free speech while avoiding the spread of misinformation has resulted in more than one controversy. Is a book of nude poses sexually explicit material or art? Do books on homosexual, bisexual, and transgender issues belong on the shelves, and where? What about materials for children, such as Justin Richardson's highly challenged work, And Tango Makes Three? Does the library accept donations, or is there a concern that accepting individual purchases allows donors to promote an agenda?

Consider a case study: someone donates a book illustrating apartheid propaganda in South Africa. Is it placed with material on the country, filed under history, or kept off the shelves completely? From a historical perspective it may be a valuable study piece, but the material may also be disturbing to users, or worse, salt in the wounds of existing racial tensions within the local community. If it remains, should it carry a warning that the material is racially charged propaganda, to help readers understand the blatant racism behind the illustrations, or do we trust them to recognize that, while an example of propaganda in history, the information within the illustrations is false, developed with the intent of persuading individuals to a particular line of thought? Concerns and challenges like these have resulted in different decisions and outcomes, often dependent on the librarians, local government authorities and populace involved. Much like the "report abuse" button, librarians are subject to formal challenges from citizens and governments alike, weighing in on whether titles should remain on the shelf or be removed post-haste. Lest you think all of this is a lot of hullabaloo over nothing, keep in mind there's an entire week dedicated to 'Banned Books'.

Here are some of the titles that have been challenged over the years:

  • The Da Vinci Code
  • Winnie the Pooh
  • Harry Potter
  • The Great Gatsby
  • Alice's Adventures in Wonderland
  • The Very Hungry Caterpillar

The Facebook Algorithm: From Publishing Platform to Hands-On Librarian

"But wait!" comes the cry, "Facebook isn't the library, why should the company have any responsibilities towards what goes up on the network? All it does is provide a platform for users to post, and given the numbers (4.75 billion pieces of content shared daily) it would be impossible to perform traditional collections management anyway."

While it's true Facebook may see itself as merely offering the means to post content to the web, and certainly the numbers are a challenge, the idea that Facebook has no say in what people read holds less and less water. Even if Facebook cannot vet everything published on the platform, it has increasing control over what gets read. That is the point of its all-powerful algorithm and the concept of paid impressions: instead of asking a question and retrieving a bundle of information with potential answers, users pose the question and the algorithm goes back to the stacks, picking out the materials deemed most relevant based on what it knows about the user and their existing preferences, along with the preferences of whoever has paid for their information to get preferred treatment. While it might seem like the same information is consumed either way, it isn't: in one scenario the user sees everything that may or may not be relevant to their interests; in the other, the user sees only what is predicted to be relevant and misses the opportunity to see other information entirely. As a result, you can't browse a subject to get a completely unfiltered view of what's out there and what the world is talking about; instead you get an idea of what is deemed relevant to your community and profile, and what is deemed relevant to marketers. Make no mistake: the difference matters, and the result is a substantially skewed information diet.

The possibility of baked-in bias gets worse when information creators can specify who should see what, based on their dollars and an understanding of how the algorithm works. Already we know the combination of Facebook user data and political ad targeting is far more powerful than anything we could have imagined. A good read on the subject is Clive Veroni's book Spin. Published in 2014, with insights into how politicians can use big data to target their messages, Veroni was ahead of his time. Worse, on top of these bubbles, our ability to discern concrete facts from hidden agendas isn't nearly as good as it needs to be: while 44% of Americans believe they can recognize 'fake news', studies show only 25% are able to actually do so, with the remaining 75% still believing some truth in what they've read. Ouch!
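To make the librarian analogy concrete, here is a deliberately oversimplified sketch of what "picking from the stacks" looks like as a ranking function. Every name, field and weight below is hypothetical; Facebook's actual algorithm is proprietary and vastly more complex. The point is only the shape of the thing: relevance is scored against the user's existing profile, and paid boosts are added on top.

```python
from dataclasses import dataclass

@dataclass
class Post:
    title: str
    topics: set[str]
    paid_boost: float = 0.0  # 0.0 = organic; higher = more ad spend behind it

@dataclass
class User:
    interests: dict[str, float]  # topic -> affinity learned from past behaviour

def score(post: Post, user: User) -> float:
    # Relevance: how well the post matches what the user already likes.
    relevance = sum(user.interests.get(t, 0.0) for t in post.topics)
    # Paid impressions lift a post regardless of its organic relevance.
    return relevance + post.paid_boost

def rank_feed(posts: list[Post], user: User, k: int = 10) -> list[Post]:
    # The user never browses the whole "stacks": they only ever see the
    # top-k slice the scorer selects. Everything else is invisible.
    return sorted(posts, key=lambda p: score(p, user), reverse=True)[:k]
```

Even in this toy version, both failure modes are visible: the relevance term reinforces what the user already believes, and the paid_boost term lets money decide what surfaces.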

So how can Facebook deal with this? Recognizing that individual curation of content simply isn't feasible, how can Facebook continue to operate as a global information hub without algorithm-savvy parties deliberately using the platform to imprint their bias and twist the 'truth' into whatever the reader is most frequently fed? It's a tough question with no straight answers; however, there are a few things the social networking giant would be wise to consider:

1. Consider a Ban on Political and Campaign Advertising

Is it possible to change election outcomes through sophisticated ad-targeting and messaging, tailored down to the information each person sees? If yes, does Facebook really want that power to begin with, along with the complications, if not outright disastrous consequences, that come with it? While some bills are going forward, including legislation in the U.S. Senate that would require technology companies such as Facebook and Google to disclose details about political advertisements on their platforms, there is a question of whether that will be enough. If it came out that a foreign government paid billions to promote a candidate on one side, would the disclosure help readers understand what they've been taking in, or would it simply add more "they say, we say" fuel to pre-election (and seemingly post-election) rhetoric? Removing political ads would sidestep the issue altogether; unfortunately, that doesn't seem likely. Politicians won't want it, and Facebook won't want it, as it would mean less incoming revenue. Nevertheless, if the technology giant does continue to insist on ideological and political neutrality, removing adverts from pundits on both sides of the fence would send a strong, clear message.

2. Pop the Information Bubbles

Facebook needs to upgrade its algorithms to balance opposing viewpoints and allow a greater variety of sources into information spheres. While removing the skewed bias of the Facebook algorithm won't solve all of its problems, it can help by exposing individuals to more sides of the story and allowing them to question their own values, rather than reinforcing bias. If 'you are what you eat' is a common enough reminder to balance our food choices, then 'you are what you read' is a reminder that our opinions and thought patterns are heavily influenced by the diet of information we feast upon, be it in text or other forms of media. This challenge could be addressed with an upcoming change, as Facebook looks to split its feed between posts by family and friends and posts by organizations. However, simply posting news stories from media organizations won't be enough: if the algorithm keeps to the existing model of pushing news and media based on existing profiles, with more value on posts by friends, the bubble will remain; nor will it stop misinformation spread by false 'human' friends, bot accounts deliberately created to promote or sway particular opinions. There's also the question of whether political ads will be split out into the news feed or remain as-is. One way to picture the fix is sketched below.
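In ranking terms, one simple way to "pop the bubble" is to reserve a fraction of feed slots for posts the personalised scorer would normally bury. The sketch below reuses the hypothetical score() toy from earlier; it is an illustration of the idea, not a description of anything Facebook has announced:

```python
import random

def diversified_feed(posts, user, k=10, explore_fraction=0.3):
    """Fill most of the feed by personalised score, but reserve some
    slots for posts outside the user's usual interest profile."""
    ranked = sorted(posts, key=lambda p: score(p, user), reverse=True)
    n_explore = int(k * explore_fraction)
    top = ranked[: k - n_explore]
    # Sample the "explore" slots from the long tail the scorer would
    # otherwise hide, exposing the user to unfamiliar viewpoints.
    tail = ranked[k - n_explore:]
    explore = random.sample(tail, min(n_explore, len(tail)))
    return top + explore
```

The trade-off is a familiar one in recommender systems: pure exploitation maximizes short-term engagement, while an explore fraction trades a little engagement for a broader information diet.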

3. Teach Information Literacy

If Facebook really has taken up the role of an information provider, in addition to a content publisher, it needs to find ways to educate users who get their information from the platform about questioning sources, identifying bias, and separating 'fact' from 'clever fake'. To quote the great Sherlock Holmes, another literary great that, yes, has been removed from libraries and landed on banned lists: "Never trust to general impressions, my boy, but concentrate yourself upon details." Individuals need the ability to question what they read, to look beyond the clickbait to the issues actually at hand without the colouring of the author, and, if something seems amiss, to be encouraged to dig deeper. Facebook has already started accepting this challenge with its "Tips to Spot Fake News", a feature that can be found a click away from the news feed. Whether the company will invest in more robust efforts, however, remains to be seen.

4. Introduce a New Rating System

This is a tough call, because rating systems for information and articles are currently incredibly easy to sway: bots run rampant on the internet, and all it takes is a few dozen false accounts or a shout-out to members of a particular community to push scores in one direction or another. Finding a way to involve industry expertise in the yay or nay while avoiding the bots still has potential: for example, a medical news story could give more weight within its ratings to reviewers who are identified and verified as working in the medical field. Peer-reviewed articles are popular in academia for a reason: the research is not just published, it has already been vetted by other experts in the field who agree with the facts, giving the article more weight as valuable insight. This would not, however, be an easy task: Facebook would need to determine the algorithm for weighing the value of reviews, likes or dislikes, and it would also need to verify that a valued reviewer is in fact someone with experience in the field, and not a bot or self-proclaimed expert. There's potential to lower the score of 'fake news' by pitting stories against professional scrutiny, but it is unclear whether the effort would be worthwhile to Facebook's business. A sketch of what such weighting might look like follows.
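As a back-of-the-envelope illustration, expertise weighting can be as simple as counting a verified expert's vote several times over. The weight, the Vote fields and the example numbers below are all invented for illustration; a production system would need real identity verification behind the verified_expert flag:

```python
from dataclasses import dataclass

@dataclass
class Vote:
    credible: bool         # did this reviewer rate the story as credible?
    verified_expert: bool  # verified as working in the story's subject field?

def credibility_score(votes: list[Vote], expert_weight: float = 5.0) -> float:
    """Weighted fraction of 'credible' votes, on a 0..1 scale."""
    total = weighted = 0.0
    for v in votes:
        w = expert_weight if v.verified_expert else 1.0
        total += w
        if v.credible:
            weighted += w
    return weighted / total if total else 0.5  # neutral when unrated

# A dozen bot accounts calling a story credible are outweighed by a
# handful of verified experts calling it bogus:
votes = [Vote(True, False)] * 12 + [Vote(False, True)] * 3
print(round(credibility_score(votes), 2))  # 0.44 -> flagged as dubious
```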

One thing is certain: Facebook, like it or not, has evolved beyond its origins as a technology company, and the sooner it accepts that responsibility the better. Continuing to argue that it holds no responsibility for the content uploaded by posters and advertisers is an increasingly slippery slope. Like it or not, the platform is now an information hub, and while it might not be able to vet everything that is posted, it clearly does have sway over what users can and cannot access, which items are seen by more users and which articles are buried within feeds, a fact the company increasingly flaunts as a reason to pay for advertising. More importantly, Facebook needs to realize it cannot keep paying lip service to both sides of political and polarized spheres, or make micro-adjustments while not really changing anything, if for no other reason than that not making a choice IS the choice. By leaving things as they are, Facebook will have decided, if it hasn't already, that it considers certain extreme viewpoints acceptable, and that it doesn't care if its platform is used for calculated persuasion or plays a part in deadly harm. By not making changes, Facebook accepts that when it comes to the spread of misinformation, even when real damage is done, the tech giant doesn't care.

To quote another literary hero: "with great power comes great responsibility." When it comes to information, Facebook unquestionably has the power; is the company prepared to take on the responsibility that comes with it?
