True Review



by Vinta Nanda, May 30, 2024

Authoritarian governments have multiple tools at their disposal to control social media and suppress free speech, ranging from legal and regulatory measures to sophisticated disinformation campaigns, writes Vinta Nanda.

Promoting content with alternative points of view or voices of dissent on social media in India presents challenges due to systematic content suppression by popular platforms. Despite the promise of social media as a space for free expression, many users find that their posts are rejected or removed when they contradict government propaganda or present perspectives that diverge from the mainstream narrative. This is not a coincidence but a result of programming and policy enforcement by social media companies.

In my efforts to understand this phenomenon, I have turned to online sources to go deeper into the role of AI in moderating content. It becomes clear that AI, inherently reliant on human intervention, is programmed to adhere to the policies set by social media companies. These policies often align with governmental directives, especially in countries where it is in the corporate interest of the platforms to comply with local laws and regulations to maintain their market presence.

The intersection of governmental control and corporate compliance stifles free speech. Social media platforms, under pressure from governments with authoritarian intentions, employ sophisticated algorithms to monitor and remove content deemed undesirable. This not only affects individuals trying to voice alternative opinions but also undermines the democratic potential of the platforms by prioritizing state-sanctioned narratives over a diversity of viewpoints.

Understanding this is crucial for those advocating for free speech. It reveals the intricate balance social media companies must navigate between adhering to local laws and upholding principles of free expression. Here, then, are the understandings I have gathered.


Authoritarian governments can mandate the removal of content they deem harmful, often labelling dissenting views as illegal or dangerous. They may also resort to restricting access to social media platforms during critical times like elections or protests, to stifle communication and organization among opposition groups.

There is extensive monitoring of social media activity, which allows governments to identify and target dissenters. This can lead to arrests, harassment, or other forms of intimidation. Fear of surveillance and potential repercussions also leads individuals to self-censor, stifling free expression and open discussion.

Laws targeting cybercrime, national security, or public order can be broadly applied to criminalize online speech. These laws are often vague and can be used arbitrarily. Governments pressure social media companies to comply with local laws, which can include demands for user data or the removal of specific content.

State-controlled media outlets (in the case of India, mainstream media is largely controlled by the BJP) use social media to amplify government-approved messages, often drowning out independent voices. Political parties in power deploy armies of fake accounts to spread propaganda, manipulate public opinion, and attack critics. These include bots, trolls, and paid commenters. Organized efforts to flood social media with pro-government content and disinformation dominate discussions and marginalize dissenting opinions.

Governments can even influence social media algorithms to favour pro-government content. This is achieved through partnerships or pressure on platform operators. By exerting influence over social media platforms’ content moderation policies, governments can ensure that unfavourable content is suppressed.

Authoritarian regimes use economic incentives or threats to compel social media companies to comply with their demands, such as opening local offices and adhering to local laws.


Platforms like Facebook, Twitter, and Instagram provide mechanisms for users to give feedback on ads and content moderation practices. When a large number of users report an issue or express dissatisfaction, it can prompt the platform to revise its policies or algorithms.

However, organized campaigns and movements do exert pressure. For example, the #StopHateForProfit campaign saw major brands boycotting Facebook ads to protest the platform's handling of hate speech. Such collective actions can lead to policy changes or increased transparency from the platform. Entities can also lobby for legislative changes that impose new regulations on social media platforms. For example, the introduction of the General Data Protection Regulation (GDPR) in the EU significantly altered how platforms handle user data and privacy.


It is humans who define the problem that the AI will solve and establish what success looks like for the AI system. It is humans who collect the relevant data required to train the AI, whether text, images, audio, video, or other types of data, and who label that data when supervised learning is involved.

It's humans who choose the algorithms that train the model, such as gradient descent, with backpropagation computing the gradients in neural networks, who feed training data into the model, and who adjust parameters to minimize error. It's humans who evaluate the model with metrics like accuracy, precision, and recall, and who integrate the trained model into a production environment where it can start making predictions on new data.
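The evaluation step described above can be sketched in a few lines. This is a minimal illustration, not any platform's actual code, and the labels and predictions are hypothetical toy data:

```python
# Minimal sketch of human-chosen evaluation metrics for a binary classifier:
# accuracy, precision, and recall, computed from true labels and predictions.

def evaluate(y_true, y_pred):
    # Count true positives, false positives, and false negatives.
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    correct = sum(1 for t, p in zip(y_true, y_pred) if t == p)
    accuracy = correct / len(y_true)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return accuracy, precision, recall

# Hypothetical labels (1 = "violates policy", 0 = "allowed").
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]
acc, prec, rec = evaluate(y_true, y_pred)
print(acc, prec, rec)  # 0.75 0.75 0.75
```

The point is that every number here reflects a human decision: which metric to optimize, and what trade-off between missed violations (recall) and wrongly removed posts (precision) is acceptable.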

Humans decide what data is relevant, how to collect it, and ensure that the data represents the problem domain accurately. Humans clean and preprocess the data, removing inconsistencies, correcting errors, and handling missing values.
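That cleaning and preprocessing step can be sketched as follows. This is a toy example with hypothetical records, not a real platform's pipeline:

```python
# Minimal sketch of human-directed data cleaning: normalize text,
# drop malformed records, and fill in missing values with a default.
# The records below are hypothetical.

raw_records = [
    {"text": "  Great product! ", "label": "positive"},
    {"text": None, "label": "negative"},   # missing text: drop the record
    {"text": "terrible", "label": None},   # missing label: fill a default
    {"text": "OK, fine", "label": "neutral"},
]

cleaned = []
for rec in raw_records:
    if not rec["text"]:                        # drop records with no usable text
        continue
    cleaned.append({
        "text": rec["text"].strip().lower(),   # normalize whitespace and case
        "label": rec["label"] or "unlabelled", # fill missing labels
    })

print(cleaned)
```

Each rule, which records to drop, which defaults to fill, is a human judgment that the model then inherits.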

Humans assess AI systems for ethical issues, fairness, and bias, making necessary adjustments to mitigate these concerns. While many aspects of AI development involve automation and the use of advanced algorithms, human intervention is essential throughout the entire lifecycle of an AI system.


Governments enact laws that regulate speech, such as hate speech laws, defamation laws, and regulations against the promotion of violence or terrorism. Social media platforms have to comply with these laws to operate within a country. Governments also often have specific regulations for advertising, including truth-in-advertising laws, regulations against misleading claims, and guidelines for endorsements and testimonials.

Governments take legal action against social media platforms or individual users for violating laws. Authorities can issue orders for the removal of illegal content, such as child exploitation material, terrorist propaganda, or fake news during elections.

Social media companies develop community guidelines that outline what content is allowed on their platforms. These guidelines cover issues like hate speech, harassment, graphic violence, and misinformation. They use AI and machine learning algorithms to detect and remove content that violates their policies. These systems scan for keywords, images, and behaviours that match known patterns of policy violations. Teams of human moderators review flagged content, make judgments on edge cases, and handle appeals from users who believe their content was unfairly removed.
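Keyword matching is the simplest form of the automated scanning described above; real platforms layer machine-learning classifiers and human review on top of it. A toy sketch, with an entirely hypothetical keyword list:

```python
# Toy sketch of keyword-based content flagging, one simple form of
# automated content scanning. The blocked-keyword list is hypothetical.

BLOCKED_KEYWORDS = {"banned-term", "forbidden-phrase"}

def flag_post(text: str) -> bool:
    """Return True if the post should be routed to human review."""
    words = text.lower().split()
    # Strip trailing punctuation before matching against the list.
    return any(word.strip(".,!?") in BLOCKED_KEYWORDS for word in words)

posts = ["An ordinary update", "This mentions banned-term openly"]
flags = [flag_post(p) for p in posts]
print(flags)  # the first post passes, the second is flagged
```

Who chooses the keyword list, and at whose request, is exactly the policy question the article raises.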

Platforms often provide mechanisms for users to appeal content removal or account suspensions. This process involves human review and can lead to the reinstatement of content if it’s found to comply with guidelines.


The variety of tools available for creating and editing posts, stories, and ads are designed and programmed by platform developers. Advanced targeting options and analytics tools allow marketers to reach specific audiences and measure campaign performance. AI-driven chatbots and interaction tools facilitate customer engagement, programmed to respond to queries and provide information based on predefined scripts and learning algorithms.

Rules and automated systems for content moderation determine what types of content can be posted and promoted. The capabilities and restrictions experienced by marketers on social media platforms are deeply influenced by the programming decisions made by developers and engineers. These decisions shape the tools, algorithms, and policies that define what is possible and permissible on the platform.

While AI has the potential to enhance various aspects of our lives, it is important to recognize that it is ultimately a tool shaped by human hands: developers and engineers design it, and governments regulate it. Therefore, the promise of AI bringing about democracy is inherently limited by the biases and perspectives of those in control.

True democracy is rooted in the equal representation of diverse opinions across all sectors of society, ensuring that not only gender but also political and social diversity are reflected.

Disclaimer: The views and opinions expressed in this article are those of the authors and do not necessarily reflect the official policy or position of this publication. The writers are solely responsible for any claims arising out of the contents of this article.