Stand Together Network

ONLINE SAFETY AND IMAGE-BASED SEXUAL ABUSE

What is online image-based sexual abuse?

Image-based sexual abuse (sometimes referred to as ‘revenge porn’) is a criminal offence. 

It’s when someone shares sexually explicit images or videos of another person without their consent, and with the aim of causing them distress or harm. It refers to materials that are shared both online and offline, and includes uploading images to the internet and social media channels, sharing by text and email, and showing someone a physical or electronic image or video. 

Who can be a victim?

Anyone can be affected by image-based sexual abuse, but the perpetrator will often be an ex-partner or person known to you. Image-based sexual abuse is a violation of privacy and people who have been targeted often feel humiliated, angry or depressed. You might feel too ashamed or embarrassed to report the crime to the police, but if you’ve experienced image-based sexual abuse it’s important to remember that you’re not to blame – only the offender is responsible for this crime taking place. 

How to stay safe and get support

Once someone has sexually explicit images or videos of you, it’s hard to control how they use them. Here are some tips to help you stay safe online: 

  • Even if you are in a relationship, think carefully before you share any sexual images with anyone, regardless of whether this is online, in person or via text message. 
  • Check your privacy settings on social media regularly to keep them up to date. 
  • Don’t share personal information or contact details online. 
  • Turn your webcam off when you are not using it. 

If someone has posted explicit images of you online, report the incident to the website where the images were posted and ask for them to be removed. If you decide to report the crime to the police, try to keep evidence of the incident by taking a record and screenshots of any posts or messages. 

If you need further advice on how to get explicit online material removed, contact the Revenge Porn Helpline on 0845 6000 459. 

https://www.endviolenceagainstwomen.org.uk/campaign/online-harms-image-based-sexual-abuse/

Image-based sexual abuse is a widespread and devastating form of sexual abuse, which includes taking and/or distributing nude or sexual images without consent (including threats to do so). This in turn includes so-called ‘revenge porn’, ‘up-skirting’, fake-porn, sexual extortion and videos of sexual assaults and rapes. 

Online abuse of women and girls is very real. We cannot separate our online and offline lives – our online experiences are real life. We should all be able to socialise, work, learn, get involved in activism or join a community online, free from the threat of abuse. But women and girls are having their rights and freedoms restricted online, and this issue is worsening. A problem of this magnitude must be named and tackled. 

What is image-based abuse?

Image-based abuse (IBA) happens when an intimate image or video is shared without the consent of the person pictured. 

This includes images or videos that have been digitally altered (using Photoshop or specialised software).

An intimate image is one that shows: 

  • a person’s genital area or anal area (whether bare or covered by underwear)
  • a person’s breasts (if the person identifies as female, transgender or intersex)
  • private activity (for example a person undressing, using the bathroom, showering, bathing or engaged in sexual activity) 
  • a person without attire of religious or cultural significance if they would normally wear such attire in public 

Image-based abuse is sometimes called other things like ‘revenge porn’, ‘intimate image abuse’ or ‘image-based sexual abuse’. ‘Revenge porn’ is the term usually used in the media, but in many cases image-based abuse is not about ‘revenge’ or ‘porn’. Image-based abuse can happen for many reasons and can involve many kinds of images and video. 

If you have experienced image-based abuse, the most important things to remember are that it is not your fault and you are not alone. 

https://claremcglynn.files.wordpress.com/2015/06/mcglynnrackley-ojls-offprint-jan-2017-image-based-sexual-abuse.pdf (PDF)

https://www.emerald.com/insight/content/doi/10.1108/978-1-83982-848-520211054/full/html  

Abstract

The non-consensual taking or sharing of nude or sexual images, also known as “image-based sexual abuse,” is a major social and legal problem in the digital age. In this chapter, we examine the problem of image-based sexual abuse in the context of digital platform governance. Specifically, we focus on two key governance issues: first, the governance of platforms, including the regulatory frameworks that apply to technology companies; and second, the governance by platforms, focusing on their policies, tools, and practices for responding to image-based sexual abuse. 

After analysing the policies and practices of a range of digital platforms, we identify four overarching shortcomings: (1) inconsistent, reductionist, and ambiguous language; (2) a stark gap between the policy and practice of content regulation, including transparency deficits; (3) imperfect technology for detecting abuse; and (4) the responsibilities of users to report and prevent abuse. Drawing on a model of corporate social responsibility (CSR), we argue that until platforms better address these problems, they risk failing victim-survivors of image-based sexual abuse and are implicated in the perpetration of such abuse. We conclude by calling for reasonable and proportionate state-based regulation that can help to better align governance by platforms with CSR-initiatives. 

https://claremcglynn.com/imagebasedsexualabuse/ (Videos online) 

A key focus of my work is on securing better laws and policies to challenge all forms of image-based sexual abuse – a term that refers to the non-consensual taking, making and/or sharing of intimate images, including threats and ‘deepfakes’.  

What is image-based sexual abuse? My early work with Erika Rackley developed the term ‘image-based sexual abuse’ to better explain the nature and harms of these abuses, and you can read more about this in our academic research and in this blog. We have argued for comprehensive legal changes to enable victim-survivors to seek justice, and for improved support and legal assistance to enable victims to reclaim control of their lives. We are currently participating in the Law Commission’s review of this area of law, and you can read here our policy briefing and response to the consultation. 

Our landmark report published in 2019 – Shattering Lives and Myths – identified legal and policy failings that must urgently be addressed. We interviewed over 50 victims and stakeholders across the UK to find out their experiences and recommendations for change, as part of a larger project involving 75 victims and 6,000 survey participants across the UK, Australia and New Zealand. More info here and in our book. 

The report was launched in Parliament and you can read more about the report, public and media debate and my blog with Erika Rackley and Kelly Johnson. 

EU Digital Services Act

I have recently worked with the charity HateAid and Prof Lorna Woods on proposals to hold large porn companies accountable for the non-consensual sexual imagery on their websites. We produced an expert opinion which justifies these measures. 

Over recent years, I have worked closely with politicians, policy-makers and campaign groups to introduce new laws criminalising all forms of image-based sexual abuse – a term that includes ‘revenge porn’, ‘fake-porn’ and ‘up-skirting’. English law was reformed in 2015, with Scots law the year after. These laws are a welcome start, but more must be done to better challenge these abuses and protect victims. 

My blogs highlight the need for urgent action, the harms of image-based sexual abuse, and why we must focus on the harms victims experience rather than the motivations of perpetrators. 

I have given evidence before the Scottish Parliament Justice Committee on reform proposals, as well as recommending comprehensive law reforms before Parliament’s Women & Equalities Select Committee. I actively participate in policy and political debates, most recently suggesting improvements to the laws on upskirting (including here on the BBC), outlining the harms of photoshopped images and ‘deepfakes’ in the Guardian, and advocating the protection of victims by granting them automatic anonymity when reporting to the police.  

This policy activity includes working with tech companies such as TikTok and Facebook. For example, in 2018 I addressed Facebook’s Global Safety team at their HQ in Silicon Valley with Durham Sociology’s Kelly Johnson, and participated in their global roundtable brainstorming next steps to challenge the sharing of non-consensual sexual imagery. 

Quick and easy explainers of my suggestions and recommendations can be found in my blogs, including one on why ‘Revenge Porn’ Is a Form of Sexual Assault, another on why the Law Must Protect All Victims of Image-Based Sexual Abuse, Not Just Upskirting, and most recently one on why laws on ‘upskirting’ must not require proof of sexual gratification. 

You can read the research in more depth in my article with Erika Rackley in the Oxford Journal of Legal Studies (access the full research article), which examines the harms of image-based sexual abuse and sets out the ways in which laws and policies need to be reformed. 

We have further developed our ideas in another article, which argues that all forms of image-based sexual abuse are part of a pattern of sexual violence, a form of sexual assault, and should be recognised as such. The article, published in the journal Feminist Legal Studies, is called Beyond ‘Revenge Porn’: The Continuum of Image-Based Sexual Abuse. 

https://www.weprotect.org/blog/a-global-race-to-stop-and-prevent-abuse-online/  

Earlier this month, the Australian Parliament passed the Online Safety Act. WeProtect Global Alliance Board Member and Australia’s eSafety Commissioner Julie Inman Grant explains the additional powers given to her office in this new legislation, and how her office has worked with industry to develop Safety by Design principles and tools.

At the world’s first online safety regulator, it can often feel as though we are in a constant race. A race to keep on top of rapid changes in technology. A race to educate people about the potential harms of the online world. And a race to keep predators away from our children.  

This race requires speed, but also care. Care to ensure regulations are aligned with the threat; that they allow for evolutions and revolutions in technology; and that they safeguard the rights of citizens. In Australia, we have taken some great strides forward in what is a marathon, not a sprint. 

The new Online Safety Act will give us at the eSafety Commissioner’s Office new and strengthened tools to help more Australians who are experiencing online harm. In addition, the Act will lift industry safety standards and plug some critical gaps we’ve identified in keeping our citizens safer online. 

New tools to tackle child sexual abuse material

Along with a fresh, world-first cyber abuse scheme for Australian adults, an enhanced cyberbullying scheme for Australian children, and stronger information-gathering powers, eSafety will have a modernised Online Content Scheme. Building on the track laid down over 20 years of online content regulation, the updated Scheme arms eSafety with the regulatory tools to tackle child sexual abuse material, no matter where it is hosted. 

These tools are sorely needed. Since its establishment, eSafety has intervened to stop cyberbullying of children at the source, investigated and provided support to victims in thousands of cases of image-based abuse, and assisted in removing a vast trove of images and videos of online child sexual abuse lying in full view online. 

During the 2020 pandemic restrictions and lockdowns around the world, eSafety saw a doubling of reports about illegal and harmful content online, including child sexual abuse material.

In 2021, the proliferation of child sexual abuse and online exploitation has continued unabated. Between July 2020 and June 2021, we managed the highest number of reports of content showing the abuse and torture of children ever in the Online Content Scheme’s 21-year history. 

New powers in the legislative arsenal are by no means a cure-all, but they will go some way to helping further protect our children and citizens. However – all too often – our intervention comes only after the harm has been done. 

Putting child safety at the heart of technology design

I see a major part of my job as preventing these harms in the first place. One of our key approaches is a huge investment in evidence-based education and awareness materials for our citizens. 

But I believe that to really move the bar and narrow the threat surface for the future, the platforms themselves need to take responsibility for the hosting of CSAM and for the viral perpetuation of online child sexual abuse that their technologies enable. In short, the technology industry’s commitment to safety needs to improve. 

From our unique vantage point of regulating big tech for safety transgressions, eSafety wants to see tech companies succeed in raising their safety standards and practices. This desire led us to spearhead ‘Safety by Design’: an initiative we hope will serve as a real catalyst for the change we all want – and need – to see. 

Safety by Design (SbD) seeks to shift the technology design ethos from “move fast and break things” or “growth at all costs” to one that places safety and user dignity at the heart of product development. This is an area where we can and should see innovation thrive, and it can certainly be a means of bolstering trust, reputation and revenue goals. 

Just as product liability laws have kept consumers safer from faulty manufactured goods and food safety standards are designed to keep the public from getting sick, governments around the world are embracing this concept for technology product development.  In fact, just last month, the G7 leaders, plus Australia, South Korea and South Africa, endorsed a set of Internet Safety Principles asserting that “safety by design” should be considered a fundamental corporate responsibility. 

Helping companies assess and address safety risks

As part of our SbD initiative, we recently released two dynamic and interactive tools to help companies and start-ups on their Safety by Design journey. Developed in concert with more than 180 tech companies and related stakeholders, the tools are freely available to companies that want to assess potential safety risks in their platforms. Even more importantly, they surface positive and innovative Safety by Design practices and interventions in place today, giving companies clear pathways for addressing any insufficiencies in design or built-in safety protections. 

Much as WeProtect Global Alliance’s national and global response frameworks acknowledge that responses to child sexual exploitation and abuse cannot be addressed in isolation, neither can safety concerns. 

This is why, in the global race to keep ahead of online harms, the tools can be used, for free, anywhere in the world, by any tech company. They are open to all, and firms of any size can undertake an ‘online safety health-check’. Following the assessment, the tools produce tailored and specific recommendations on how to address any shortcomings and achieve best practice. 

The internet has truly become an essential utility, with the tech giants providing both the ‘vehicles’ and the online ‘highways’. However, if one is to build these digital highways, one should also build guardrails, install stop signs and suspend reckless or dangerous drivers. Otherwise, citizens become the online casualties. 

Safety by Design can help ensure the online world we depend on every day can be accessed safely. All of us, and especially those who are young or otherwise at-risk, deserve no less. 

Julie Inman Grant is Australia’s eSafety Commissioner. Inman Grant’s journey to becoming eSafety Commissioner started in the United States, first working at the intersection of technology and policy in Congress, followed by more than two decades in the technology industry, including stints at Microsoft, Twitter and Adobe. You can find more information, including access to the Safety by Design tools, at www.esafety.gov.au 

The views and opinions expressed in this article are those of the authors and do not necessarily reflect the official policy or position of WeProtect Global Alliance or any of its members. 
