[Newsletter Archives] Newsletter – 24/02/2020 – 3 Quick Tips For Online Safety

The following is the basic content of an email that was sent to our email list subscribers. We try to post a copy of each email newsletter here roughly two weeks after it was sent to subscribers. If you want content like this delivered free to your inbox, long before it arrives here, then click here to subscribe. Due to the way content renders on the internet, it won’t be an exact copy of the original email.

Hi << Test First Name >>

3 Quick Tips to make your Service Safer for your users.


You don’t have the time to wade through reams of government-issued documentation and guidance about Online Safety for your users, so today I’ve picked just a few tips to pique your interest. I’d be keen to hear back (just hit ‘reply’, or comment on the blog post version of this) to let me know what you’d like to hear more about, and I’ll make it happen.

1. Content moderation

Pre, Post or Reactive?

If you’ve not yet engaged a Content Moderation service for online safety, you may have assumed you just ‘plug it in’ and it happens. Well… you have to make some choices, I’m afraid. Here are some of the basic strategies available to you:
Pre-Moderation. Content from users (for example User Generated Content (UGC)) is placed in a queue, and your moderation service (automated or human) forms an opinion before it gets published.
Post-Moderation. As you may have already guessed, this allows content from your users to be published immediately (giving that warm glow of achievement of having ‘gone live’), but the content is replicated in a queue to be moderated as soon as moderators can get to it.
Reactive Moderation. This effectively lets the community (or, and this is not good, Law Enforcement!) report content to be moderated. Reports can then be plugged into your automated, semi-automated or human moderation team.
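To make the three strategies concrete, here is a minimal sketch of how each one routes content. All the names and behaviour are illustrative only, not taken from any real moderation product:

```python
from collections import deque
from enum import Enum


class Strategy(Enum):
    PRE = "pre"            # hold content until reviewed
    POST = "post"          # publish now, review afterwards
    REACTIVE = "reactive"  # review only content the community reports


class ModerationQueue:
    """Toy model of the three basic moderation strategies."""

    def __init__(self, strategy):
        self.strategy = strategy
        self.queue = deque()      # content awaiting a moderation decision
        self.published = []       # content visible to users

    def submit(self, content):
        if self.strategy is Strategy.PRE:
            # Pre-moderation: nothing goes live before review.
            self.queue.append(content)
        else:
            # Post- and reactive moderation publish immediately.
            self.published.append(content)
            if self.strategy is Strategy.POST:
                # Post-moderation also copies the content into the queue.
                self.queue.append(content)

    def report(self, content):
        # Reactive moderation: the community flags published content.
        if self.strategy is Strategy.REACTIVE and content in self.published:
            self.queue.append(content)

    def review(self, is_acceptable):
        # A human or automated moderator works through the queue.
        while self.queue:
            item = self.queue.popleft()
            if is_acceptable(item):
                if item not in self.published:
                    self.published.append(item)
            elif item in self.published:
                self.published.remove(item)


# Pre-moderation: content stays hidden until the review happens.
pre = ModerationQueue(Strategy.PRE)
pre.submit("hello world")
print(pre.published)                 # nothing live yet
pre.review(lambda c: True)
print(pre.published)                 # now published
```

The trade-off the sketch makes visible: pre-moderation sacrifices immediacy for safety, post-moderation does the reverse, and reactive moderation publishes everything but only spends moderator effort where the community asks for it.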

IWF Make a Report Link

If your users are in the UK, they can make a report of images or videos of Child Abuse they come across directly to the IWF. You can link to the ‘Make a Report Button’ (https://report.iwf.org.uk/en/report) for an easy option for your users.

Default Settings

If you operate in the UK, you are now required to make sure that your service’s default settings are ‘high privacy’ (unless you can demonstrate a compelling reason for a different default setting, taking account of the best interests of the child). This is in the new ICO Age Appropriate Design Code (coming into force soon), and the ICO will have the power to fine companies that do not comply!
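One practical way to honour that requirement is to make ‘high privacy’ the default in your settings model, so that a user has to opt in to every data-sharing feature. The settings names below are hypothetical examples, not terms from the Code itself:

```python
from dataclasses import dataclass


@dataclass
class PrivacySettings:
    """Hypothetical per-user settings with 'high privacy' defaults,
    in the spirit of the ICO Age Appropriate Design Code.

    Every data-sharing option defaults to off; users opt in explicitly.
    """
    profile_visible_to_public: bool = False
    location_sharing: bool = False
    personalised_ads: bool = False
    messages_from_strangers: bool = False

    def is_high_privacy(self):
        # 'High privacy' here means no data-sharing option is enabled.
        return not any([
            self.profile_visible_to_public,
            self.location_sharing,
            self.personalised_ads,
            self.messages_from_strangers,
        ])


# A freshly created account starts in high privacy...
print(PrivacySettings().is_high_privacy())   # True
# ...and only an explicit opt-in changes that.
print(PrivacySettings(location_sharing=True).is_high_privacy())   # False
```

The design point is that the safe state requires no action from the user; anything less private is a deliberate choice they make later.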

Hope those tiny tips offer some pointers, but please do reply and let me know if you have other ideas.

Best

Keep in touch.
Matt

Online Harms

The (reasonably) recent release of the UK government’s Online Harms White Paper (OHWP) brought with it some interesting (and somewhat) helpful positions about online harms.

Why should you care?

Well, we think it matters for two reasons.

(1) The OHWP lays out the direction for Government (and hence points towards the sort of regulatory things that might impact businesses) and

(2) It has the input of a decent number of people with experience in this area, and so acts as a good reference point.

But there’s one area that we found particularly important to look at, and that was the categorisation of Online Harms. There are a couple of takeaway points here that we think should be raised:

  1. It recognises that the list is not exhaustive and is expected to evolve over time
  2. It de-scopes some harms that the OHWP considers out of scope, even though businesses may still be required to protect against them.

The first is, we think, the most important. For us, it is a ringing endorsement of our approach and thinking. Many other companies have taken (either deliberately or as a result of their legacy) an approach that focuses on CSEA or CSAM, perhaps with some other protection mechanisms built in. We have always started from the position of understanding ALL online harms and focusing on protection against them all.

This government recognition of the full suite of harms (and that it is constantly evolving) helps you start thinking about your Online Harm protection plan. It’s time to stop thinking just about Safeguarding or CSEA and start thinking more holistically about Online Harms.

So what are these harms?

  • Online CSEA
  • Terrorist content
  • Illegal upload from prisons
  • Gang culture and incitement to serious violence
  • Sale of illegal goods & services e.g. drugs & weapons
  • Organised immigration crime
  • Modern slavery
  • Extreme pornography
  • Revenge pornography
  • Hate Crime
  • Cyber bullying and trolling
  • Advocacy of self-harm
  • Encouraging or assisting suicide
  • Sexting of indecent images by under 18s
  • Self-Generated Indecent Imagery (SGII)
  • Online dis-information
  • Harassment & cyberstalking
  • Intimidation
  • Extremist content and activity
  • Coercive behaviour
  • Violent Content
  • Promotion of FGM
  • Children accessing pornography
  • Children accessing inappropriate material
  • Online manipulation
  • Interference with legal proceedings

Clearly, it’s unlikely that you will have a plan, or measures in place to address all of these, but do you think it’s worth getting in touch with someone that can help you? If you want some help making head or tail of this, get in touch.

Best

Matt


Safety by Design – Keeping platforms safe for users

The internet is an incredible place and has brought immeasurable good to the world, but it’s no secret that it has brought a great deal of harm too. Just as the variety of online benefits increases every day, so do the Online Harms. Even as we write, the UK government is set to release its Online Harms White Paper (OHWP), in which we hope it will enumerate the harms it considers should be tackled. From our research, there appears to be no widely accepted listing of harms, which makes it difficult to tackle them head on. The best we have found so far comes from Ofcom (more on that later).

So, as you may expect, we’re delighted that there are organisations that are providing real, quality advice on measures that companies can take to keep their platform safe.

Whilst it is not released yet, I’d encourage anyone with a social media element to their platform, or who takes User Generated Content (UGC), to follow the Australian eSafety office ahead of their imminent release of the Safety by Design framework.

We’ll keep you posted.

How to make your website safer for humans – the basics in the UK

We often have conversations with small platform and service providers that are starting to think about making their service a safer place for their users. This applies to almost all types of website and internet-based service, but in particular those that provide some social media function, a chat platform, or User Generated Content (UGC).

One of the first things that is mentioned is the lack of clear guidance available on what the service or platform’s responsibilities are and what steps they should be taking.

A good place to start in the UK is the UK Council for Child Internet Safety and the principles they provide.

Take a look at the HTML version here.

Whilst this is targeted towards protecting children online, it is sound advice for any platform or service seeking to protect users online.

We’ll unpack more on this in later posts, but if you’re looking at their advice (for example):

…use tools such as search algorithms to look for slang words typically used by children and young people, and to identify children under 13 who may have lied about their age at registration

Then you may also want to take a look at our Online Safety Content Moderation Company list.
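As a toy illustration of the kind of keyword search that guidance describes, here is a sketch that flags accounts whose messages contain slang typically used by children. The term list and threshold are entirely hypothetical, and in practice a flag would only ever be a prompt for human review, never proof of a user’s age:

```python
import re

# Illustrative only: a real deployment would use a much larger, curated
# and regularly updated term list, built with child-safety expertise.
SLANG_TERMS = {"omg", "bday", "h8", "gr8", "skool"}


def flag_possible_minor(messages, threshold=2):
    """Return True if the messages contain `threshold` or more hits
    against the (hypothetical) child-slang term list.

    A True result is a signal for human review, not an age determination.
    """
    hits = 0
    for message in messages:
        # Tokenise to lowercase words so "GR8!" matches "gr8".
        words = re.findall(r"[a-z0-9']+", message.lower())
        hits += sum(1 for word in words if word in SLANG_TERMS)
    return hits >= threshold


print(flag_possible_minor(["omg that test was gr8"]))        # True
print(flag_possible_minor(["Please find the invoice attached"]))  # False
```

Even a sketch this simple shows why the guidance pairs detection with process: slang lists age quickly and adults use them too, so the tool only narrows down which registrations deserve a closer look.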

The Online Harms White Paper (OHWP) and Safety by Design (SbD) – Will they conflict?

The Online Harms White Paper will be the biggest step forward for Online Safety this decade

2019 is turning out to offer green shoots of hope in the world of online safety. We fully expect to see the much anticipated Online Harms White Paper from the Dynamic Duo (the Home Office and the Department for Culture, Media and Sport (DCMS)) as well as the much needed Safety by Design framework from the Australian eSafety Commissioner.

You may think that these are not connected, and indeed they may not be, but parliamentary comment suggests otherwise…

Lord Ashton of Hythe suggests that the OHWP may either contain SbD principles or reference some (we don’t know whether this was a reference to the Australian eSafety Commissioner work).

We hope that we’ll see some coherent advice for online safety to help companies reduce online harms, but the truth is that nobody knows what the OHWP actually contains.

Whatever the outcome, we’ll read it and try to help you with a clear steer about how to make your platform safe.

Happy 2019