This is a truly awful story about online abuse, but I thought it was worth raising some awareness of it. The article contains two really troubling threads: the first is offenders taking advantage of COVID-19 to abuse children; the second is the devastating effect that the discovery of an abuser has on the spouse (and the rest of the family, of course).
This highlights the importance of online safety in combating online abuse, and how many parties have a part to play in the solution.
Sadly this is only one of many online harms, but it’s one of the largest and one that causes the most pain.
The impact on the rest of the family is seldom discussed, but worth being aware of. Thankfully there are charities that support those left in the wake: the article highlights PartnerSpeak, and in the UK we see organisations like Mothers of Sexually Abused Children (MOSAC) amongst others.
These principles make it clear that they will take further steps to make their users safer, and should be applauded. They steer clear of the current debate about end-to-end encryption (more on that later) but are a strong move in the right direction.
On 14th January 2020, Lord McNally placed a bill in front of Parliament that would “assign certain functions of OFCOM in relation to online harms regulation”. This executive summary appears to be a loose description of what’s contained, with the text seemingly requiring OFCOM to write a report each year with recommendations for the introduction of an Online Harms Reduction Regulator. It is not clear why recommendations are required every year, nor why the lead has now moved from DCMS to OFCOM (I can only assume that it is because OFCOM is much closer to the cut-and-thrust of regulation).
So what might it mean for Platform and Service Providers?
In the short term – probably not a lot. However, there are a couple of key points that providers may want to keep abreast of:
1) We can see that progress is being made – and is more likely to increase than decrease.
2) The Online Harms (as initially laid out in the OHWP) have been narrowed down to focus on certain ones in particular. This means that a Platform Provider is probably well advised to ensure that these are being tackled actively. The Harms laid out in the paper are:
(a) terrorism (or it could be in reference to this definition);
(b) dangers to persons aged under 18 and vulnerable adults;
(c) racial hatred, religious hatred, hatred on the grounds of sex or hatred on the grounds of sexual orientation;
(d) discrimination against a person or persons because of a protected characteristic;
(e) fraud or financial crime;
(f) intellectual property crime;
(g) threats which impede or prejudice the integrity and probity of the electoral process; and
(h) any other harms that OFCOM deem appropriate.
Unfortunately, as you have probably recognised, these descriptions of Online Harms are not well correlated with those laid out in the OHWP – and so OFCOM will probably struggle initially with the lack of tight definition of these Harms, before it can make any meaningful report.
Your app/platform/website/service is a force for good, right? I’ll assume so (if it’s not, you’re on the wrong side of us, and your time is up!) because generally developers, product managers, entrepreneurs and customer services are out there to add value and delight their customers. So, you may not be planning to spend your scarce development effort not just on protecting your platform from cyber attack, but on protecting your users from other users and threat actors. It’s an unfortunate fact of the modern internet that bad actors are out there, and the chances are they will use your platform to attack other people.
So, your first question might be…what is it I need to
safeguard my users against?
Great question, and there was a time when that mostly came down to removing profanity (bad language) and (perhaps) ensuring there was no abuse or harassment going on. Then came the widespread problem of Online Child Sexual Exploitation (OCSE) and (if you allow file exchange) the transfer and distribution of Child Sexual Abuse Material (CSAM).
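As a rough illustration of that first generation of safeguard, a wordlist-based profanity filter can be sketched in a few lines. The blocklist terms below are placeholders, and a real deployment would use a maintained, locale-aware list and handle obfuscation (leetspeak, deliberate misspellings, spacing tricks):

```python
import re

# Illustrative blocklist -- the terms here are stand-ins for this sketch.
BLOCKLIST = {"badword", "slur"}

def flag_message(text: str) -> bool:
    """Return True if the message contains a blocklisted token."""
    # Lowercase and split into word-like tokens before matching.
    tokens = re.findall(r"[a-z']+", text.lower())
    return any(token in BLOCKLIST for token in tokens)
```

Even this toy version shows why filtering alone isn’t enough: it has no notion of context, so it can’t catch harassment expressed in perfectly clean language.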
But it doesn’t stop there. The recent Online Harms White Paper (OHWP) identified 29 harms and describes the list as “neither exhaustive nor fixed”.
…advocacy of self-harm
…extremist content and activity
…promotion of Female Genital Mutilation (FGM)
—etc (29 at the moment…)
But identifying online harms and coming up with mitigations is not your day job, so we thought we would make your life a lot easier. Step 1 was to create a place where you could see the full list of identified harms, with (some sort of!)* definition.
This is only a start of the resources we plan to provide to make your life easier in providing a great technology response.
We’re happy to work with you to provide mitigations for all of these. Some will be technology, some will be process and others may simply be a tweak in your policy, but we believe that knowing what you need to guard against is the first step in providing a response.
We’re passionate about making the internet a safer place for people, but we’re not in the business of Social Media, Chat or Content. But what if you are?
We have pulled together what we think is the most complete and current catalogue of Content Moderation companies on the internet. So if you’re a developer and looking for a service to make your site or app a safer place, then head over to the Content Moderation app store and let us know what you think.
(1) The OHWP lays out the direction for Government (and hence points towards the sort of regulatory things that might impact businesses) and
(2) It has the input of a decent number of people with experience in this area, and so acts as a good reference point.
But there’s one area that we found particularly important to look at, and that was the categorisation of Online Harms. There are a couple of takeaway points here that we think should be raised:
It recognises that the list is not exhaustive and is expected to evolve over time
It de-scopes some things that the OHWP feels are out of scope, even though there is still a requirement for businesses to protect against them.
The first is the most important we think. For us, it is a ringing endorsement of our approach and thinking. Many other companies have taken (either deliberately or as a result of their legacy) an approach that focuses on CSEA or CSAM and may have built some other protection mechanisms in. We have always started from the position of understanding ALL online harms and focusing on protection against all.
This government recognition of the full suite of harms (and that it is constantly evolving) helps you start thinking about your Online Harm protection plan. It’s time to stop thinking just about Safeguarding or CSEA, and start thinking more holistically about Online Harms.
So what are these harms?
Online CSEA
Terrorist content
Illegal upload from prisons
Gang culture and incitement to serious violence
Sale of illegal goods & services e.g. drugs & weapons
Organised immigration crime
Modern slavery
Extreme pornography
Revenge pornography
Hate Crime
Cyber bullying and trolling
Advocacy of self-harm
Encouraging or assisting suicide
Sexting of indecent images by under 18s
Self-Generated Indecent Imagery (SGII)
Online disinformation
Harassment & cyberstalking
Intimidation
Extremist content and activity
Coercive behaviour
Violent Content
Promotion of FGM
Children accessing pornography
Children accessing inappropriate material
Online manipulation
Interference with legal proceedings
Clearly, it’s unlikely that you will have a plan, or measures in place, to address all of these. If you want some help making head or tail of this, get in touch.
The internet is an incredible place and has brought immeasurable good to the world, but it’s no secret that it’s brought a great deal of harm too. Just as the variety of online benefits increases every day, so do the Online Harms. Even as we write, the UK government is set to release its Online Harms White Paper (OHWP), in which we hope it will enumerate the harms it considers should be tackled. From our research, there appears to be no widely accepted listing of harms, which makes it difficult to tackle them head on. The best we have found so far comes from Ofcom (more on that later).
So, as you may expect, we’re delighted that there are organisations that are providing real, quality advice on measures that companies can take to keep their platform safe.
There’s been a long, winding journey for the European legislation on Copyright, culminating today with a vote passing it.
Most notably, Article 13 is likely to impact Platform and Service providers seeking to keep their Platform clean.
Article 13 holds larger technology companies responsible for material posted without a copyright licence. It says that content-sharing services must license copyright-protected material from the rights holders, or they could be held liable unless:
it made “best efforts” to get permission from the copyright holder
it made “best efforts” to ensure that material specified by rights holders was not made available
it acted quickly to remove any infringing material of which it was made aware
Platform providers (especially those with User Generated Content (UGC)) will recognise and may be comfortable with such clauses. They are similar to best-practice approaches for protecting users online from harmful content.
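One common technical pattern behind the “best efforts to ensure specified material is not made available” clause is fingerprint matching against a registry of works supplied by rights holders. Here is a minimal sketch of that idea; note that production systems use perceptual hashes that survive re-encoding, whereas the plain SHA-256 used here only catches byte-identical copies:

```python
import hashlib

# Hypothetical registry of fingerprints supplied by rights holders.
RIGHTS_HOLDER_HASHES: set[str] = set()

def register_protected_work(data: bytes) -> None:
    """Record a fingerprint for a copyright-protected work."""
    RIGHTS_HOLDER_HASHES.add(hashlib.sha256(data).hexdigest())

def upload_allowed(data: bytes) -> bool:
    """Block uploads that match fingerprints specified by rights holders."""
    return hashlib.sha256(data).hexdigest() not in RIGHTS_HOLDER_HASHES
```

In practice this check would sit alongside a licensing workflow and a notice-and-takedown process, covering the other two “best efforts” clauses above.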
If you’re looking for a Content Moderation Company (even one that can help you with the EU copyright Directive) head over to the Content Moderation Marketplace and find one that works for you.
We often have conversations with small platform and service providers that are starting to think about making their service a safer place for their users. This applies to almost all types of website and internet based services, but in particular those that provide some social media function, some chat platform or those with User Generated Content (UGC).
One of the first things that is mentioned is the lack of clear guidance available on what the service or platform’s responsibilities are and what steps they should be taking.
Whilst this is targeted towards protecting children online, it is sound advice for any platform or service seeking to protect users online.
We’ll unpack more on this in later posts, but if you’re looking at their advice (for example):
…use tools such as search algorithms to look for slang words typically used by children and young people, and to identify children under 13 who may have lied about their age at registration
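That kind of slang-based signal can be sketched very simply. The slang terms below are hypothetical placeholders; real tooling would be built and maintained with input from safeguarding experts and updated as language evolves:

```python
# Hypothetical examples of child-typical slang/abbreviations.
CHILD_SLANG = {"skl", "hmwk", "yr7", "yr8"}

def underage_signal_score(messages: list[str]) -> float:
    """Fraction of a user's messages containing child-typical slang.

    A high score is only a signal for human review, not proof of age.
    """
    if not messages:
        return 0.0
    hits = sum(
        1 for m in messages
        if any(term in m.lower().split() for term in CHILD_SLANG)
    )
    return hits / len(messages)
```

A score like this would feed into a review queue rather than trigger automatic action, since vocabulary alone is a weak and easily confounded indicator of age.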
The Online Harms White Paper will be the biggest step forward for online safety this decade
2019 is turning out to offer green shoots of hope in the world of online safety. We fully expect to see the much anticipated Online Harms White Paper from the Dynamic Duo (the Home Office and the Department for Culture, Media and Sport (DCMS)) as well as the much needed Safety by Design framework from the Australian eSafety Commissioner.
You may think that these are not connected, and indeed they may not be, but parliamentary comment suggests otherwise…
Lord Ashton of Hythe suggests that the OHWP may either contain SbD principles or reference some (we don’t know whether this was a reference to the Australian eSafety Commissioner work).
We hope that we’ll see some coherent advice for online safety to help companies reduce online harms, but the truth is that nobody knows what the OHWP actually contains.
Whatever the outcome, we’ll read it and try to help you with a clear steer about how to make your platform safe.