From the very beginning of Facebook’s existence, there have been questions about Zuckerberg’s ethics. According to Business Insider, he used Facebook user data to guess email passwords and read personal emails in order to discredit his rivals. These allegations, albeit unproven and somewhat dated, nonetheless raise troubling questions about the ethics of the CEO of the world’s largest social network.
The reasons to quit Facebook go well beyond trust and privacy issues, continual format changes, sponsored posts, increased advertising, or the selling of your personal data and attention. Here are the top ten reasons not to trust Facebook:
10 Reasons Not to Trust Facebook
10) Osama Alert:
Government agencies pay Facebook millions of dollars each year to secretly hand over user information, including photos, messages, and “deleted” posts. Former NSA whistleblower Edward Snowden revealed the extent of these exchanges in 2014. Because Facebook’s involvement in the NSA’s PRISM program is covert, it has no obligation to disclose to users what it is handing over. Documents revealed that even a Facebook message mentioning Al-Qaeda, or a joke about Osama Bin Laden, was enough to trigger NSA surveillance software.
9) Social Depressant:
In 2012 Facebook conducted a secret experiment to manipulate the feelings of almost 700,000 users. For one week, users’ News Feeds were doctored to show posts that were either overwhelmingly positive or overwhelmingly negative. Disturbingly, users were then more likely to post positive or negative words themselves, depending on what they had been shown. The results demonstrated what many have long suspected: Facebook can subconsciously influence users’ thoughts and feelings.
8) Facebook Stalker:
Every time someone uses Facebook, the site collects the user’s browser and internet history without seeking consent. An investigation by data consultant Nik Cubrilovic uncovered the site’s ability to track users’ private activity even after they log out. More alarming still, the process works both ways: visiting websites with integrated Facebook features, such as “like” or “share” buttons, gives those websites access to personal Facebook data. This little-known arrangement has been in place since 2010.
7) Manipulating Reactions:
In 2016 Facebook responded to demand for a “dislike” button with its “reactions” feature. The feature lets users respond to posts with six emoji options: ‘like’, ‘love’, ‘haha’, ‘wow’, ‘sad’ and ‘angry’. Shortly afterwards, however, Belgian officials issued warnings against using ‘reactions’, claiming it was a danger to user privacy. The officials warned that Facebook used the feature to collect users’ psychological data and sell it on to advertisers, allowing Facebook’s algorithms to manipulate users into buying featured products when they are in ‘good moods’.
6) Shady MasterCard deal:
In 2014 Facebook signed a two-year deal giving MasterCard access to Facebook users’ information, mined using the credit card company’s ‘Priceless Engine’ technology. The engine collects highly sensitive data from Facebook users in the Asia-Pacific region and Australia, which in turn helps MasterCard and its partner banks drive sales by monitoring the effect of their marketing strategies on Facebook users. MasterCard also uses the same technology to sell Facebook users’ information to banks, so that the banks can more easily target potential clients and borrowers.
5) Facial Recognition:
Facebook’s facial recognition tool uses advanced algorithms to identify users in uploaded photographs. With over 200 million new photos uploaded to Facebook every day, its software is the most sophisticated of its kind. This poses a huge threat to individual privacy, as Facebook is amassing the most comprehensive database of human faces in the world. Technology experts such as Dr. Joseph Atick have raised concerns that Facebook’s facial recognition will expose social media users to untraceable identity theft and stalking.
4) Facebook ad campaign:
Unknown to most users, any personal data they upload can be used as part of a Facebook ad. The site’s adverts feature brands and pages “liked” by friends in order to encourage those in their network to follow suit, often using profile-picture thumbnails to draw a user’s friends to brand pages and websites. Companies pay Facebook huge sums for this service. Users, meanwhile, are neither informed of their involvement nor compensated.
3) Money grabbing scams:
A 2014 investigation by The Next Web revealed that between 67 and 137 million Facebook accounts are fake or duplicates. More dangerous still is how these accounts are being used. Many post bogus status updates announcing dubious ‘get rich quick’ schemes. When ‘friends’ open the links, their information is swiped and used by umbrella companies to steal identities and financial details. The 2016 Cisco Security Report identified at least 33,681,000 Facebook scams, making them the most common online attack method.
2) Withdrawal Symptoms:
In 2016 Facebook admitted that it had secretly run psychological tests on 20% of its users. The company deliberately crashed its Android app to monitor how long users would keep refreshing it before giving up. The experiment tested user loyalty, following fears that growing competition between Facebook and Google might lead to the app’s removal from the Google Play store. The results found Facebook so ingrained in people’s lives that at no point during the experiment did users stop trying to access the site. More worrying still, users were never aware that they were being experimented on.
1) Facebook for President:
In May 2016 former Facebook staff alleged that they had been encouraged to exclude conservative news from the site’s ‘trending’ section. Trending topics are selected by algorithms that identify recently spiking keywords, but the former workers alleged that human teams overrode the algorithm and blacklisted news items themselves in order to promote their own ideological beliefs. Staff were reportedly told to source information from liberal news outlets and to suppress Republican stories even when they were trending among users.
Source: All Time 10