Will Facebook’s Cambridge Analytica Fallout Really Change the Company?

Facebook: A World of Likes, Dislikes & Data Collection

By now, you've probably heard of Cambridge Analytica and the most recent scandal involving Facebook user data. A large big-data firm specializing in behavioural targeting, with ties to both the pro-Brexit movement and Donald Trump's election campaign, Cambridge Analytica has been exposed for misusing the data of more than 50 million Facebook users. While this isn't the first time social media data has been used for political campaigning, the Cambridge Analytica case is extremely jarring: the firm advertised its ability to help secure election votes by developing psychometric profiles for pinpoint-targeted advertising. Worse, it emerged that the firm should never have had access to the data in the first place: Cambridge Analytica received it from researcher Aleksandr Kogan, who mined the information from Facebook under the guise of academic research and later failed to uphold his promise to delete it.

Unfortunately for Facebook, efforts to distance itself from the public embarrassment foundered, as further investigation revealed that yes, Facebook was aware the data had been given to Cambridge Analytica. This was no data breach, where information is accessed by an outside party without the knowledge or approval of the collectors. Instead, concern is rising precisely because Cambridge Analytica used the data exactly the way Facebook designed it to be used.

The social media darling, no stranger to privacy problems by a long shot, has taken huge hits, with more and more users downloading their data only to be shocked by how much the tech giant really holds. The current crisis has produced a burst of privacy awareness and concern over information controls, along with a market-value drop of $80 billion by March 29th, 2018, as advertisers pulled out in droves. CEO Mark Zuckerberg has seen his personal net worth drop by $14 billion, and his efforts to allay fears have fallen flat, first with a less than stellar CNN interview, and now a summons to testify on Facebook's data practices before Congress. Further damning stories keep coming out of the woodwork, most notoriously a leaked memo by executive Andrew Bosworth that exposes just how aware Facebook has been of its poor data practices and ethically questionable behaviour. In the meantime, the company insists new privacy controls are coming to fix the damage, but should users really trust them?


A Quick Look at Facebook’s History with Privacy Problems

While the Cambridge Analytica scandal might be the most recent headline in Facebook's history of privacy scandals, it is far from the first. Here's a short timeline of privacy concerns that have caught public attention since the company's founding in 2004.

  • In 2006, Facebook caused alarm with the reveal of its newest feature: the News Feed. While news feeds have since become a social media standard, the feature first shocked users by placing all of their posts in one centralized, accessible feed, showing off the latest and greatest but also making it easier for casual browsers to skim through past material and dig up 'old news'.


  • In 2007, Facebook introduced Beacon, a new feature that allowed the business to use information, including data gathered from third-party sites, for targeted advertising. Beacon, however, raised alarms among privacy and security experts alike for being far more intrusive and stealthy in its data collection than communicated to users, not to mention blasting timelines with unexpected activity, such as purchases of gifts intended as holiday surprises. Beacon's opt-out process was confusing to users, even after tweaks prompted by negative feedback. Zuckerberg apologized for the release, and the tool was shut down two years later.


  • In 2009, Facebook attempted to show a more proactive side on privacy by introducing new tools intended to give users choice over how their information was shared. Just one catch: the new tools turned out to be even more confusing to users who wanted to better protect their privacy, and pushed even more information public by default. This triggered an investigation by the Federal Trade Commission.


  • In 2010, advertisers were caught using a privacy loophole to retrieve and collect even more revealing personal information than users anticipated. Facebook fixed the code, apologized, and announced plans to redesign its privacy controls.


  • In 2011, the Federal Trade Commission charged Facebook with deceiving users by claiming they could keep their information private "and then repeatedly allowing it to be shared and made public". The FTC brought an eight-count complaint against Facebook for not living up to the privacy promises made to users. Facebook settled, committing to live up to its promises in the future, and apologized.


  • In 2013, Facebook introduced a new feature, Graph Search, which allowed far deeper searches than previously possible. Like the 2006 News Feed, Graph Search raised alarms because of its power to dig up dirt long forgotten: with the ability to search a person's posts, comments, friends' comments and posts, status updates, check-ins and photos, suddenly any past embarrassment or faux pas was just a search query away.

By this point, it's very hard to believe Facebook when it claims it's going to get serious about privacy, if for no other reason than that it has said as much before. One or two of these incidents could be written off as minor issues, such as the News Feed rollout, when Facebook's engineering team clearly didn't consider the ramifications of what it was developing and communicated poorly with users. By 2010, however, the idea that privacy simply slipped their minds is a difficult pill to swallow: the company had faced privacy concerns for years, and in 2011 was formally charged by the FTC. Moreover, Facebook is not a fresh startup staffed by talented but green programmers anymore: with over $40.7 billion in revenue in 2017 alone, the business could clearly hire top engineering and user-testing expertise to factor privacy into its upgrades. However, the company didn't; instead it apologized and pushed forward in the same manner it always does. That's why headlines like "Facebook says it will update privacy controls" ring so hollow at this point: been there, done that, wait until the heat is off and take a good look at the next upgrade.


Facebook’s Business Model: You Are the Product

Part of the reason for Facebook's hollow stance on privacy is not that the business can't build privacy into product design, but that it is in its best interest not to. Facebook markets itself as a free community platform that anyone with online access can join; but make no mistake about it, someone has to pay those bills. Facebook makes its money by selling access to the data users input, boasting to advertisers and businesses that it knows more about buyers and communities than ever before, thanks to tracking technology, algorithms and a share-everything online culture. It's a crafty business model, and one that has clearly worked: put out the main product for free, get users interested, find buyers for the users; the more users Facebook attracts, the more money it makes. Seems legit, but as the Cambridge Analytica scandal shows, what's done with that user data won't always be in the users' best interests. One might also remember another business model that operates by offering a 'free' product to one market and getting paid by another: the bait offered up on the fishing expert's hook.

Other actions by Facebook have shown that its priority is getting user data first and addressing concerns afterwards. Beacon and Graph Search are solid past examples, while other, less invasive measures are nevertheless jarring: pushing users to connect Facebook to SMS messaging services, or using 'alarm red' for all notifications and messages as a way of driving users to click and engage. While some users wave these concerns off, often claiming they have "nothing to hide", there's a distinct lack of awareness of just how much control users give Facebook over their lives in exchange for a free service. As media theorist and writer Douglas Rushkoff points out, with Facebook we are not the drivers, we are the passengers: we might say where we want to go, but the business selects the route that will get us there; and if it wants to control the information we absorb along the journey, to steer us towards a particular slant or show us only half the scenery along the way, it has the power to do so.

“The data will remain in the hands of one company. Even if its current leaders are responsible and trustworthy, what about those in charge in 20 years?” – Joe Miller & Vladan Joler

Where Facebook goes from here remains to be seen. On the one hand, hopefully this will be a wake-up call. As a company providing both a technology product and an information-access product, Facebook needs to take a much closer look at its business model, its data practices and the values it prioritizes when developing new information structures if it is to offer truly better privacy and transparency. Certainly, businesses other than Facebook would be wise to pay attention and take a deep dive into their own practices: $80 billion isn't chump change, and very few other organizations would have any hope of surviving such a hit. With changing political and cybersecurity environments, including Apple vs. the FBI, China's new uses of big data, the GDPR and growing questions about political access and influence on social networks, privacy has become a hotter topic than ever as users grow more aware of, and concerned about, the impact systems can have on their lives through access to digital footprints and predictive profiling. Part of what fuels the fallout for Facebook and Cambridge Analytica is that users are asking more questions about what really goes on behind the scenes with their data, and wondering how much trust, if any, they should place in companies operating without checks or balances.

As for Facebook, although the loss of capital and the ongoing investigations by privacy offices around the world certainly aren't fun, reports suggest the social network isn't too worried about actually losing users to the #DeleteFacebook campaign. Where else would they go? Facebook has the advantage of being the platform most likely to host your other contacts and friends, not to mention easy integration with other tools, such as sharing music tastes via Apple iTunes or movie recommendations from Netflix. And if you run a business, like it or not, Facebook is one of the web's largest search and discovery platforms, so a total removal is a marketing cut you may not be able to afford.

While there is hope things will change thanks to the latest public statements, it's hard not to be sceptical given the business's past. New privacy controls will do little to change a business model built on harvesting and selling the data it takes in. Alas, the writer is reminded of an Aesop fable that rings a bit too true:

The Scorpion and the Frog

A scorpion and a frog meet on the bank of a stream and the scorpion asks the frog to carry him across on its back.

The frog asks, "How do I know you won't sting me?"

The scorpion says, "Because if I do, I will die too."

The frog is satisfied, and they set out, but in midstream, the scorpion stings the frog. The frog feels the onset of paralysis and starts to sink, knowing they both will drown, but has just enough time to gasp "Why did you sting me, Mr. Scorpion, even though it costs us both our lives?"

Replies the scorpion: "It’s my nature. It’s what I do."
