
WhatsApp, Facebook And Google Aren’t Just Tech Platforms; They Are Semi-Media Houses And Must Be Treated As Such

  • Technology platforms are not neutral; they are direct accelerators of content.
  • They thus have to invest more to prevent misuse, and take partial responsibility for what happens through their platforms.

R Jagannathan | Aug 27, 2018, 01:29 PM | Updated 01:29 PM IST
Google, Facebook, Twitter and WhatsApp cannot claim they are just tech platforms; they are enablers of content.

WhatsApp, the instant messaging service that has almost replaced SMSes among connected users, has been in the doghouse because its service has been used to spread fake news about child lifting, leading to lynch mobs beating up some innocent people. Its application for starting a payments service has been stuck in the works due to fears over data privacy and other issues. No less a minister than Ravi Shankar Prasad, Minister for Law, Electronics and IT, has warned that the company needs “to find solutions to deal with sinister developments like mob-lynching and revenge porn” and “has to follow Indian law”.

WhatsApp has tried to meet these concerns halfway by labelling forwarded messages as such, and limiting forwards to five chats at a time. This will slow the flow of misinformation to some extent, but not stop it fully, for motivated groups can still push fake news down an ever-spreading chain of forwards.

The WhatsApp issue brings to the fore a peculiar problem thrown up by digital technology: the difficulty of separating the platform from its consequences; the medium is merging with the message.

If you are running a newspaper or a TV channel, it is obvious that you run a media company, and are legally responsible for the content that passes through your medium. But are Google, Facebook, WhatsApp or Twitter responsible for what is published on their platforms when they claim that they are mere enablers of communication, not participants in the creation of content?

Similarly, Uber and Ola, both app-based taxi-hailing services, say that they are only responsible for the technology, not the conduct of the drivers or customers who use their platforms. They see the drivers as “independent contractors” offering rides to customers coming to the platform. While they do facilitate the rating of both drivers and commuters on their apps, they take no responsibility for a driver who rapes a customer, or a customer who robs the driver.

Three related questions arise:

One, when technology platforms mediate between two sets of customers, can they deny all responsibility for what happens when some transactions go wrong?

Two, can platforms be considered separate from the services they enable? If Facebook is a publishing platform for several publishers and bloggers, can they really separate their platform role from their publishing role?

Three, can the old norms of freedom – which applied to print and television – be valid in the digital and social media spaces, where the sheer speed of information transmission can be bewildering and difficult to monitor and control?

Santosh Desai, writing in The Times of India, clearly believes that social media and other tech platforms that enable the fast spread of information or disinformation need to be run under new rules and regulations. “While it is easy to agree with the argument that the principal problem lies with the users of this service and not the service itself, that doesn’t mean that no change is required in the way that we regulate new forms of media. Technology is not innocent, just because it carries no mischief in its intent. The users might be responsible, but their usage is not independent of the technology; indeed, it is often deeply embedded in it.” (Italics mine)

Pierre Omidyar, a French-American billionaire and founder of eBay, even suggests that some users of social media must be forced out, since they are vitiating democracy. In a post republished by The Times of India last year and written against the backdrop of the 2016 US presidential elections where Russia was said to have covertly intervened in favour of Donald Trump, he wrote: “For all the ways this technology brings us together, the monetisation and manipulation of information is swiftly tearing us apart. From foreign interference in our elections to targeted campaigns designed to confuse and divide on important social issues, groups looking for an effective way to infiltrate and influence our democracy have found generous hosts in the world of social media. But the time has come for these unwelcome guests to leave the party.”

The problem, though, is who will decide who the “unwelcome guests” at the party are, and whether any kind of news filtering will inevitably lead to subtle forms of censorship, thus suppressing the voices of the very people who benefited from the democratisation of news.

Twitter, for example, blocks users who are either abusive or seen to be partisan, but it (along with Facebook) has been accused of stifling voices from the Right in India, and conservative voices in the US. Many people have been temporarily blocked without adequate reason, producing a chilling effect on free speech.

Google and Facebook still deny that they are media companies even though they take an overwhelming share of digital advertising revenues. But, in an indirect way, they are now committed to spending millions of dollars on creating armies of fact-checkers to ensure that their platforms are not used to accelerate the dissemination of misleading, dangerous or false news. The idea is to nip fake news in the bud, sooner rather than later.

The takeaways from the above discussion are the following.

#1: Technology platforms are not neutral. They are not mere pipes with no responsibility for what passes through them; they are direct accelerators of content. They thus have to invest more to prevent misuse, and take partial responsibility for what happens through their platforms.

#2: While a platform may merely be performing an intermediary function between, say, content creators and viewers, or between taxi owners and riders, there is a need for new nomenclature to describe these emerging relationships. In the UK, the Matthew Taylor report on modern working practices pointed out that Uber’s drivers were not independent contractors but “dependent contractors”, which implies that Uber has some responsibility towards its drivers (and customers, if they are badly served by the drivers). In the context of content, Google, Facebook, Twitter and WhatsApp cannot claim they are just tech platforms; they are enablers of content. They may not be media in the conventional sense of the term, but they are halfway there; they are “media-enablers” and hence must share some of the responsibilities that regular media have.

#3: Regulators, and society, must strike a balance between the need to curb damaging disinformation and the need to protect freedom of speech. Thousands of fact-checking sites will not be enough to prevent millions, even billions, of content generators from creating viral and dangerous content. Technology, including artificial intelligence, will have to do part of the job. But if fact-checking ends up disabling dissenting news and views, the democratisation of news will be over. And that would be a bad thing.

The sobering reality is that fake news spreads because it panders to a human need to believe in some “facts” and not others. It is similar to our need to believe in god or miracles. Just as you cannot banish god and superstition by fact-checking claims made on behalf of religion, fact-checking can only be part of the solution to the problem of fake news.

The ultimate answer to fake news is human maturity, which is usually the result of time and experience. No one can fool all the people all the time. Fake news will start losing traction when large numbers of people begin using their common sense and develop a healthy distrust of what they see and hear. Fake news is spreading today partly because the old media has lost its credibility by being in bed with the establishment for decades. There is disbelief in the alleged neutrality of some of the old media writers and TV anchors. Once the nexus between old media and vested interests is broken, fake news will fade away.
