Technology
Shiva Kakkar
Mar 02, 2017, 05:14 PM | Updated 05:12 PM IST
On 16 February this year, Facebook CEO Mark Zuckerberg published a manifesto extolling the virtues of Facebook and how it can help in building a global community and ‘shaping’ and ‘saving’ the world (read, Donald Trump). In his manifesto, Zuckerberg addresses the issue of ‘fake news’, which has become a contentious topic in the US after Donald Trump’s presidential ascent. Left-liberal media outlets have blamed Hillary Clinton’s loss on the ‘fake news’ circulated about her e-mail scandal. There is widespread debate over whether news circulated on social media needs some kind of filtration and pruning before it reaches the audience. Zuckerberg argues that Facebook, by using artificial intelligence and machine learning algorithms, will provide ‘real’ news on its platform. On similar lines, Google recently launched a new machine learning tool called ‘Perspective’ to moderate comment sections on news websites. The application has been deployed by various media outlets and categorises certain comments as ‘toxic’ based on their ability to trigger polarising discussions. The actions of these companies are being passed off as an attempt to promote clean debates and non-toxic discussions in a safe environment. But is this really the agenda? Not quite.
Under the garb of providing ‘real news’, what Zuckerberg is proposing is to systematically curate all information flowing in and out of his platform and label certain versions of information as ‘real’ or ‘fake’. Of the thousands of stories being uploaded by various agencies and users, Facebook’s algorithms would selectively decide what is ‘real’ and what is ‘fake’; what is ‘toxic’ and what is ‘non-toxic’. This is an application of technology designed exclusively for misuse and aimed at purging all non-conformist views opposing the liberal agenda. For example, a machine algorithm may very well decide that the ABVP version of the incident (as reported in Swarajya) has a lesser probability of being real, flag it as ‘fake’ and remove (or limit) it from the feeds of users. At the same time, versions from left-liberal outlets would be circulated more widely, given the left-leaning posture of technology companies and the overall domination of the leftist narrative over news media.
This illustration is not purely a figment of imagination. Facebook has already demonstrated its ability to manipulate user behaviour. In 2014, Facebook was involved in a social experiment wherein it deliberately tweaked the algorithm responsible for displaying user feeds to show more depressing news to users in their timelines. Users who encountered these feeds consequently became sadder and more depressed. This incident shows the extraordinary power that social media can have over the mental and emotional disposition of people. Zuckerberg’s proposition to use AI and machine learning algorithms for curating information is essentially aimed at manipulating how people think and respond to information.
Artificial intelligence and machine learning algorithms are highly prone to being loaded with ideological imperatives. These learning algorithms are essentially lines of code written by programmers. Their ‘learning ability’ (in this case, the ability to discern between ‘real’ and ‘fake’) depends on the logic and disposition of the programmer. Their discerning ability is not completely objective or neutral. Information incongruent with Facebook’s liberal worldview is likely to be suppressed. Instances of suppression of conservative news by curators at Facebook and Twitter have already been widely reported. By replacing human curators with automated algorithms, there would be no way of identifying such censorship when it takes place. By promising to get rid of ‘fake news’, Zuckerberg doesn’t really aim to provide real, unbiased information. Rather, the aim is to label and classify certain information as ‘biased’ and ‘fake’, the measure for which would be an algorithm nobody would be aware of.
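To illustrate the point, the sketch below is a hypothetical toy example in Python using the scikit-learn library, not Facebook’s actual system: a ‘fake news’ classifier has no notion of truth of its own, and its verdicts simply reproduce the pattern present in the labels supplied by whoever trained it.

```python
# Hypothetical toy example: a 'fake news' classifier learns only from the
# labels its trainers supply; change the labels and the 'truth' changes too.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented headlines, labelled by imaginary human curators.
headlines = [
    "Government announces new farm subsidy scheme",
    "Opposition claims subsidy scheme is a scam",
    "Student group disputes media account of campus clash",
    "Media account of campus clash confirmed by eyewitnesses",
]
labels = ["real", "real", "fake", "real"]  # the curators' worldview, encoded as data

model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(headlines, labels)

# The verdict on a new headline reflects those labels, not the facts on the ground.
print(model.predict(["Student group offers its own version of the clash"]))
```

Whoever chooses the training labels, in effect, chooses what the algorithm will later call ‘real’ and ‘fake’.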
Similarly, Google is aiming to curtail user opinion on non-social-networking platforms. Google’s ‘Perspective’ algorithm can filter out and delete comments in the response sections of websites that it dubs ‘hate speech’ or ‘toxic’. It is inevitable that non-conformist comments will be termed ‘toxic’ or ‘hate speech’ and deliberately purged. By systematically removing all counter-views under the garb of being ‘hate speech’ or ‘toxic’, organisations like Facebook and Google are looking at subverting all debate. To the unsuspecting reader, it would appear that only a singular dominant version of the story exists. News would be censored without users ever realising that they have been served curated content. This is large-scale social engineering through the (mis)use of technology.
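The mechanics of such filtering are easy to sketch. The snippet below is a hypothetical illustration (the scoring function and threshold are invented for the example and are not Google’s actual code): any comment whose machine-assigned ‘toxicity’ score crosses a platform-chosen threshold simply never appears on the page.

```python
# Hypothetical moderation filter: comments scoring above a threshold are
# dropped before display, with no notice to the reader or the commenter.
TOXICITY_THRESHOLD = 0.7  # an arbitrary cut-off chosen by the platform


def toxicity_score(comment: str) -> float:
    # Stand-in for a machine-learned scoring model; in a real system this
    # would call a trained classifier whose inner workings readers never see.
    return 0.9 if "disagree" in comment.lower() else 0.1


def visible_comments(comments: list[str]) -> list[str]:
    # Everything above the threshold silently disappears from the page.
    return [c for c in comments if toxicity_score(c) < TOXICITY_THRESHOLD]


print(visible_comments([
    "Great article!",
    "I strongly disagree with this version of events.",
]))
```

The reader of the comment section sees only what survives the threshold, with no indication that anything was removed.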
The reason for this response probably lies in the resurgence of right-leaning ideology around the world. The three major conservative resurgences, Modi’s rise in India, Trump’s presidency and Brexit, can all be credited to social media. Traditionally, the Left has been able to dominate public discourse because of its control over academic institutions and the media, which continuously echo each other’s message and reinforce its narrative. Social media disrupted this hegemony. The Left has understood its mistake and has now trained its guns on this very platform.
Edward Snowden recently commented on this issue. He suggests that the world we live in today is a ‘post-truth’ society. We are more concerned with the feelings, emotions and political correctness of arguments than with the logic of the arguments themselves. Discussions are termed ‘free speech’ or ‘hate speech’ purely on the basis of their political correctness and not the rationale behind them. The solution to ‘fake news’, Snowden says, lies not in suppressing information but in providing even more information. There is no need for Facebook or Google to spoon-feed people their curated version of the truth. Their responsibility is to disseminate as much information as possible across the political spectrum and leave it to the people to decide what is ‘real’ and what is ‘fake’.
Shiva Kakkar is currently pursuing his doctoral studies at the Indian Institute of Management, Ahmedabad. He holds a critical view of radical left-liberal narratives and considers them divisive and detrimental to society and its constituents.