US President Donald Trump’s attempt at reining in social media platforms has made policymakers around the world sit up and take notice. In India, the US move has fuelled a debate over the responsibility and culpability of such platforms for the content they carry. Though our Information Technology Act of 2000 covers such matters, its application tends to discourage these apps from intervening in what people put out, with the result that moderation is minimal and it takes a court order to take down offensive or dangerous material. As of now, if a platform does not interfere at all, its liability for content is shielded; if it does, it could be dragged into a dispute. But the harm posed by fake news and messages that incite lawlessness is obvious. We thus need an updated legal framework that allows social media to serve its purpose without being a menace to civil society. So far, online discourse in India has been a free-for-all, where gross misinformation gets peddled all too often, sometimes as a means to some political end. The big question, however, is how our online spaces can be sanitized without compromising free speech and user privacy, both of which are rights our citizens are entitled to.
At one end of the spectrum, some argue that it is better to let such platforms operate as mere carriers that are distinct from publishers, and let what is posted be considered the user’s own expression. If this approach is adopted, regulating content would be difficult and an unacceptable status quo would prevail. The limits of such a hands-off approach are being witnessed in the US, where public pressure has pushed apps into making editorial interventions and the White House has responded by trying to expose them directly to lawsuits for what gets posted. So, at the other end of the argument are those who would have these organizations treated on par with content generators, to be held liable for all that they allow online—or “publish”. Given the sheer volume of traffic on popular apps, it would be hard for them to keep watch. Earlier, the Indian government had sought to frame draft guidelines for content moderation, with the use of Artificial Intelligence recommended for the purpose. This may work to some extent, but algorithms cannot be expected to sniff out every post designed to offend sensibilities or instigate violence. There are other complexities. Much of India’s traffic is on WhatsApp, which is encrypted as part of its promise to users. Chats cannot be pried into. In contrast, Twitter’s tweets go out openly, so that platform can be monitored.
Perhaps the country needs a mix of legal provisions and self-regulation (by apps and their users), suitably optimized to balance all concerns. Social media platforms could be asked to watch what goes out and devise their own ways to do it. Their exposure to legal action can be calibrated finely, so that they neither trample free speech nor permit content that violates our principles of reasonable restrictions on that freedom. Encryption need not be abandoned, so long as encrypted apps put in place effective mechanisms for bad content to be flagged by users. We could look to legislative action elsewhere, too. Some aspects of European models, for example, may be worthy of emulation. We have gone far too long without any proper regulation of social media. This has endangered some of the principles we have chosen to live by under our Constitution. The aim of intervention is not to stifle the internet, but to safeguard the values we hold dear.