Women online suffer a disproportionate amount of harm and abuse, but gender is not the only factor behind it.
Our ongoing research involves collecting case studies from both India and Australia to understand how various marginalised identities can impact young women’s experiences of online violence, and how social media companies – including Facebook, Twitter and Instagram – aren’t doing enough to stop it.
India is a rich case study for this research: it is a country where women express many different identities in large numbers, and where racial, religious and social tensions persist across society.
And those with marginalised identities have to deal with more stigma and targeting.
What's worse, platform content moderators are failing to recognise this cyber violence, often because they don't understand the nuance and contexts in which these stigmas operate.
What is cyber violence?
Cyber violence can be understood as harm and abuse facilitated by digital and technological means.
In 2019, there was a 63.5% increase in the number of cyber violence cases being reported in India, compared with 2018.
We’ve long understood the need for an intersectional approach to feminism. We now need the same approach to protecting women’s safety online.
There has since been a further rise in cases against women from marginalised communities, including Muslim and Dalit women. In one such case, app developers used the images of some 100 Muslim women without their permission, putting them up "for sale" in a fake auction. The purpose was to denigrate and humiliate Muslim women in particular.
This is mirrored in Australia.
Young Indigenous women are susceptible to cyber violence that targets them not only by gender, but also by race.
A 2021 research report by eSafety found Aboriginal and Torres Strait Islander women felt victimised by racist and threatening comments made online, usually in public Facebook groups.
They also reported feeling unsafe and having their mental health significantly impacted.
Speaking about the experiences of women from marginalised backgrounds, Australian senator Mehreen Faruqi said: "It is based on where I come from, what I look like, my religion."
This email came in last night. I wish I could say it’s unusual, but this is what some of us deal with daily in this country. Just opening my mouth to join the public debate can provoke such violent, abusive responses. pic.twitter.com/9iomWY21cU — Mehreen Faruqi (@MehreenFaruqi) August 5, 2021
Young women with marginalised identities
Research on cyber violence against women in India reveals how hatred towards certain religions, races and sexual orientations can make gender-based violence even more harmful.
When women express their opinions or post pictures online, they are targeted based on their marginalised identities.
Australia's eSafety Commissioner has joined a global partnership to end cyber violence against women, but a great deal of work still needs to be done.
For instance, Kiruba Munusamy, an advocate practising in the Supreme Court of India, received racist and caste-based slurs for speaking out about sexual violence online.
And women with marginalised identities continue to be victimised online, despite attempts to control this.
Take Australia’s “Safety by Design” framework, developed by the eSafety commissioner.
Despite having gathered some traction in the past few years, it remains a voluntary code that merely encourages technology companies to prevent online harm through product design.
In India, hate speech against Muslims, in particular, has been on the rise.
India has laws (albeit flawed) that can be used to deal with online abuse, but better implementation is needed.
Amid Hindu-majority politics and rising radicalisation, victims can find it difficult to report incidents.
Victims are concerned about safety and secondary victimisation, wherein they may face further abuse as a result of reporting a crime.
It's hard to know the exact extent of cyber violence perpetrated against women with marginalised identities.
Yet it’s clear these identities are linked to the amount of, and type of, abuse women face online.
One study by Amnesty International found Indian Muslim women politicians faced 94.1% more ethnic or religious slurs than women politicians of other religions, and women from marginalised castes received 59% more caste-based slurs than women from general castes.
Recognition in platform design
Five years ago, Amnesty International submitted a report to the United Nations highlighting the need for moderators to be trained in identifying gender-related and identity-related abuse on platforms.
Similarly, in 2019 Equality Labs in India published an advocacy report discussing how Facebook failed to protect people from marginalised Indian communities. This is despite Facebook having caste, religion and gender as “protected” categories under hate speech guidelines.
Yet in 2022 social media companies and moderators still need to do more to approach cyber violence through an intersectional lens. While platforms have country-specific moderation teams, moderators will often lack cultural competency and literacy on matters of caste, religion, sexuality, disability and race.
There could be various reasons for this, including a lack of diversity among staff and contractors.
In a 2020 report by Mint, one moderator working for Facebook India said she was expected to maintain an accuracy rate of at least 85% to keep her job. In practice, this meant she could spend no more than 4.5 seconds reviewing each piece of content.
Such structural issues can also contribute to the problem.
I don’t have a personal FB profile, but keep a public one active for the purposes of book promo, etc. Here is a screenshot from a regular morning after I’ve posted about my new book #TalkingAboutARevolution… one wonders how these people have so much time lol. pic.twitter.com/dEK3n7u3RS — Yassmin Abdel-Magied (@yassmin_a) May 21, 2022
The way forward
Content moderation can be complex and requires collective expertise from communities and advocates.
One way forward is to enforce transparency, accountability and resource allocation to build solutions within social media companies.
In November last year, the Australian government released the draft of a bill aimed at holding social media companies accountable for content posted on their platforms and protecting people from trolls.
It’s anticipated these regulations will ensure platforms are held responsible for harmful content that affects users.