Australian group takes fight to Facebook, saying platform is ‘awash’ with hateful Islamophobia
An Australian-Muslim rights organisation is threatening legal action against Facebook after accusing the tech giant of allowing Islamophobia to proliferate online.
The Australian Muslim Advocacy Network has been campaigning for Facebook to act against hate speech in the wake of the Christchurch massacre, saying the social media giant must take responsibility for the real-world harm and violence unmoderated hate speech causes.
They point to the lack of action against hateful comments on Facebook such as, “Muslims are the only people on Earth who will earn their genocide”, “Drown ’em at birth”, and “Can we go kill these f***ers yet”.
“Islamophobia is awash on Facebook,” lawyer and AMAN advisor Rita Jabri-Markwell told SBS News.
“There are strangers out there who completely hate you and want you to die … because of who you are and because of your religious identity.
“They don’t care that you’re a mother or a teacher or member of Australian society. They just want you to be eradicated.”
AMAN has sent Facebook a legal letter outlining its concerns about the spread of hate speech and dangerous conspiracy theories directed towards Muslims across various accounts, links and group pages on the site.
The letter, which SBS News has seen, warns Facebook that it could be liable for user-generated hate speech under section 18C of the Racial Discrimination Act 1975.
“We understand that space needs to be made for criticising extremist interpretations and groups within the Muslim community, but contextualised analysis of the links and echo chambers on your platform make it objectively clear when racism and dehumanisation underlies their purpose,” it reads.
The organisation has been conducting research since 2015 which suggests Facebook has been home to more anti-Islam groups and pages than many other social networks.
An example of an anti-Islam Facebook group’s questions before submission to the group.
The Christchurch terrorist was active on a number of far-right groups on Facebook, and used the platform to live-stream his massacre.
AMAN has identified a number of Facebook accounts belonging to anti-Islam websites, which it says are masquerading as news outlets on the platform. It is asking Facebook to remove these accounts.
Some of the websites propagate invasion conspiracy theories similar to those that inspired the Norwegian terrorist Anders Breivik and the Christchurch mosque shooter, the organisation says.
“There is, I think, quite a very basic, obvious link between the amplification of those narratives on Facebook, and the real-life death threats that Muslims face when they’re going about their [lives],” Ms Jabri-Markwell said.
“The hatred that people have for you because you are visibly Muslim, you know, shows in the way that we’ve been crafted in public discourse.
“It is a very visceral and very negative image and Facebook has been supporting that through all these pages and groups through allowing these third party websites which openly suggest that Muslims are inherently subhuman, that we are apparently doing all these conspiracy theory things.”
The group also wants Facebook to amend its hate speech policy to recognise hateful characterisations of Muslims, such as painting Muslims as subhuman or savages.
The policy prohibits attacks against people based on their “characteristics” such as race, religion, or sexual orientation but allows attacks and criticisms of institutions.
Facebook is reviewing its policies on where the line between attacks on people’s characteristics and criticism of institutions can be crossed, including by engaging external stakeholders such as legal and human rights experts.
The social media giant told SBS News in a statement that it was working to address more implicit hate speech and violent content, including combating attacks against the Muslim community.
“There is more work to do and we appreciate feedback from the Australian Muslim Advocacy Network and others as we refine our policies to keep people safe,” said Mia Garlick, Director of Public Policy, Facebook Australia and New Zealand.
Reforms to the Online Safety Act currently before the federal parliament would extend the powers of the eSafety Commissioner so it could remove racist or dehumanising content targeted against children as well as adults.
The eSafety Commissioner Julie Inman Grant said she was “very concerned” about the proliferation of hate speech on social media, particularly Facebook, targeting Muslim communities.
“It’s clear Facebook needs to be enforcing its own policies and needs to be more vigilant and proactive in identifying and removing this harmful content quickly,” she told SBS News in a statement.
Australians from culturally and linguistically diverse backgrounds are up to three times more likely to experience online abuse and are targeted by hate speech at higher levels than the national average, she added.
“This abuse disproportionately targets their religion, race and ethnicity.”
This isn’t the first time Facebook has been urged to remove Islamophobic content from its platform.
Recently, a group of 30 members of the US Congress co-signed a letter urging Facebook to remove “dangerous” and “deadly” anti-Muslim content on the social media platform.
Facebook has become one of the fastest growing social media platforms. At the end of 2013, Facebook had 1.23 billion monthly active users and 757 million daily active users. Within this online space there is also a growing number of online virtual communities and hate groups who are using this space to share a violent, Islamophobic and racist narrative which attempts to create a hostile virtual environment. It is important to analyse these ‘new’ communities by monitoring the activities they conduct, because the material they post can potentially have a damaging impact on community cohesion within society. Moreover, in light of recent figures that show an increase in online anti-Muslim abuse, there is a pertinent need to address the issue of Islamophobia on social media.
This research examined 100 different Facebook pages, posts and comments and found 494 instances
of online hate speech directed against Muslim communities. The findings revealed some interesting
parallels and common characteristics shared within these groups, which helped the author to create a
typology of five characteristics of anti-Muslim hate espoused on Facebook. Overall, this study found that Muslims were being demonised and vilified online, which manifested through negative attitudes, discrimination, stereotypes, physical threats and online harassment, all of which had the potential to incite violence or prejudicial action because such speech disparages and intimidates a protected individual or group. There are also other offences: the content of a website can itself be illegal when it threatens or harasses a person or a group of people. If such material is posted because of hostility based on race, religion, sexual orientation, disability or transgender identity, then it can be viewed as a hate crime. This material can be disseminated through words, pictures, video or music, and could include: messages calling for racial or religious violence; web pages with pictures, videos or descriptions that glorify violence against anyone due to their race, religion, disability, sexual orientation or because they are transgender; and chat forums where people ask others to commit such acts.
Messages can be spread at great speed, people can remain anonymous and much of cyberspace remains unregulated. This is particularly attractive to hate groups wanting to recruit people to their cause, as they are given a platform to spread unsolicited material which can often go unnoticed (Hewson et al., 2003). This allows them to capture audiences and use the Internet as a propaganda tool for those purposes. Indeed, these communicative messages can also cause a great deal of discontent and impact upon community cohesion (McNamee et al., 2010).
Hate speech in this context is any form of language used to depict someone in a negative fashion with regard to their race, ethnicity, gender, religion, sexual orientation or physical and mental disability, which promotes hate and incites violence (Yar, 2013; Feldman et al., 2013). This also links into the convergence of the emotional distress caused by hate online, the nature of intimidation and harassment online, and the prejudice that seeks to defame groups through speech intended to injure and intimidate.
Hate on the Internet can have direct and indirect effects on the victims and communities being targeted (Awan & Zempi, 2015a; Awan, 2016; Chakraborti & Garland, 2009). In one sense it can be used to harass and intimidate victims, while on the other hand it can also be used for opportunistic crimes (Christopherson, 2007). The Internet is therefore a powerful tool by which people can be influenced to act in a certain way and manner. The direct impact such material leaves behind is also important, because it affects local communities and shapes our understanding of how online hate could constitute acts of violence offline (Douglas et al., 2005). Awan and Zempi (2015) found that online and offline anti-Muslim hate crime can impact upon people’s lives to the extent that they feel a sense of anxiety, depression and isolation. This is particularly strong when considering hate speech online that aims to threaten and incite violence.
As noted above, much of this online material can also cause fear, and it is imperative that the police and other agencies within the security sector work together to tackle hate crime on the Internet (Awan & Zempi, 2015b). The Association of Chief Police Officers (ACPO) (2013) notes how online hate material can damage community cohesion. They state that: “We understand that hate material can damage community cohesion and create fear, so the police want to work alongside communities and the Internet industry to reduce the harm caused by hate on the Internet” (cited
Hate crime on the Internet can also be used as a means to create virtual storage and communicative messages that extend beyond the physical to the virtual dimension (Iganski, 2012). For Perry (2003, p. 19) this means the spectrum of hate crime crosses the line into the virtual realm, and as such Coliandris (2012, p. 82) argues hate crimes “are capable