[Newsletter Archives] – Newsletter – 30/04/2020 – Great Advice If You Have Young People Online

The following is the basic content of an email sent to our email list subscribers. We try to post a copy of email newsletters here roughly two weeks after they were sent to subscribers. If you want content like this, free, in your inbox, long before it arrives here, then click here to subscribe. Due to the way content renders on the internet, it won’t be an exact copy of the original email.

Hi <<First Name>>

Advice for parents, guardians and carers of young people online during the COVID-19 Pandemic.

The ever-brilliant Australian eSafety Office has released a well-put-together guide here:

https://www.esafety.gov.au/key-issues/covid-19/advice-parents-carers

As they say:

Staying connected online has never been more important, now that many of us are physically isolated from family members, friends, colleagues and support networks. The internet is a great way to socialise, learn, work, play and be entertained. But there are also risks. So eSafety is adding new content every day to help you stay safe online.

We highly recommend heading over there and checking it out.

Keep in touch.
Matt

[Newsletter Archives] – Newsflash – 17/02/2020 – OHWP Update

The following is the basic content of an email sent to our email list subscribers. We try to post a copy of email newsletters here roughly two weeks after they were sent to subscribers. If you want content like this, free, in your inbox, long before it arrives here, then click here to subscribe. Due to the way content renders on the internet, it won’t be an exact copy of the original email.

Hi <<First Name>>

UK Online Harms ‘Regulation’ – Update published

If you’ve been following the UK government’s consultation into how it should handle legislation to reduce Online Harms, then you’ll be interested to know this.

New information has been released: an update containing the summarised consultation responses. Worthy of note:

In particular, while the risk-based and proportionate approach proposed by the White Paper was positively received by those we consulted with, written responses and our engagement highlighted questions over a number of areas, including freedom of expression and the businesses in scope of the duty of care. Having carefully considered the information gained during this process, we have made a number of developments to our policies. These are clarified in the ‘Our Response’ section below.

Keep in touch.
Matt

Online Harms Reduction Regulator – firm steps emerge – will it affect you?

On 14th January 2020, Lord McNally placed a bill before Parliament that would “assign certain functions of OFCOM in relation to online harms regulation”. This executive summary appears to be a loose description of what’s contained, with the text seemingly requiring OFCOM to write a report each year containing recommendations for the introduction of an Online Harms Reduction Regulator.
It is not clear why recommendations are required every year, nor why the lead has now moved from DCMS to OFCOM (I can only assume it is because OFCOM is much closer to the cut-and-thrust of regulation).

So what might it mean for Platform and Service Providers?

In the short term, probably not a lot. However, there are a couple of key points that providers may want to keep abreast of:
1) We can see that progress is being made – and the pace is more likely to increase than decrease.
2) The Online Harms (as initially laid out in the OHWP) have been narrowed down to a particular subset. This means that a Platform Provider is probably well advised to ensure these are being actively tackled. The Harms laid out in the Bill are:
(a) terrorism (or it could be in reference to this definition);
(b) dangers to persons aged under 18 and vulnerable adults;
(c) racial hatred, religious hatred, hatred on the grounds of sex or hatred on the grounds of sexual orientation;
(d) discrimination against a person or persons because of a protected characteristic;
(e) fraud or financial crime;
(f) intellectual property crime;
(g) threats which impede or prejudice the integrity and probity of the electoral process; and
(h) any other harms that OFCOM deem appropriate.

Unfortunately, as you have probably recognised, these descriptions of Online Harms do not correlate well with those laid out in the OHWP – so OFCOM will probably struggle initially with the loose definitions of these Harms before it can make any meaningful report.

The Bill is at its Second Reading, and we will try to provide further updates as it progresses.

Safety by Design – Australian eSafety

The hotly anticipated Safety by Design Framework has finally been launched by the eSafety office in Australia.

(Image: the front page of the Safety by Design Overview document.)

If you’re in the business of providing online services or platforms that have User Generated Content (UGC) or chat, and you don’t know what this is, then you’re definitely going to want to head over there and take a look. Essentially it’s the ‘Best Practice Guide’ for developing online services that counter the Online Harms that we so often talk about.

You really should head over to the main page and pick the document that best suits you. However, if you want the short, sharp ‘Principles’ (with their well-thought-out sub-points), we’ve copied them here for you below.

As always, get in contact if you want to know more.

SbD Principle 1: Service provider responsibilities. The burden of safety should never fall solely upon the end user. Service providers can take preventative steps to ensure that their service is less likely to facilitate, inflame or encourage illegal and inappropriate behaviours. To help ensure that known and anticipated harms have been evaluated in the design and provision of an online service, a service should take the following steps:

1. Nominate individuals or teams—and make them accountable—for user-safety policy creation, evaluation, implementation and operations.

2. Develop community standards, terms of service and moderation procedures that are fairly and consistently implemented.

3. Put in place infrastructure that supports internal and external triaging, clear escalation paths and reporting on all user-safety concerns, alongside readily accessible mechanisms for users to flag and report concerns and violations at the point that they occur.

4. Ensure there are clear internal protocols for engaging with law enforcement, support services and illegal content hotlines.

5. Put processes in place to detect, surface, flag and remove illegal and harmful conduct, contact and content with the aim of preventing harms before they occur.

6. Prepare documented risk management and impact assessments to assess and remediate any potential safety harms that could be enabled or facilitated by the product or service.

7. Implement social contracts at the point of registration. These outline the duties and responsibilities of the service, user and third parties for the safety of all users.

8. Consider security-by-design, privacy-by-design and user-safety considerations, which are balanced when securing the ongoing confidentiality, integrity and availability of personal data and information.

SbD Principle 2: User empowerment and autonomy. The dignity of users is of central importance, with users’ best interests a primary consideration. The following steps will go some way to ensure that users have the best chance at safe online interactions, through features, functionality and an inclusive design approach that secures user empowerment and autonomy as part of the in-service experience. Services should aim to:

1. Provide technical measures and tools that adequately allow users to manage their own safety, and that are set to the most secure privacy and safety levels by default.

2. Establish clear protocols and consequences for service violations that serve as meaningful deterrents and reflect the values and expectations of the user base.

3. Leverage the use of technical features to mitigate against risks and harms, which can be flagged to users at point of relevance, and which prompt and optimise safer interactions.

4. Provide built-in support functions and feedback loops for users that inform users on the status of their reports, what outcomes have been taken and offer an opportunity for appeal.

5. Evaluate all design and function features to ensure that risk factors for all users—particularly for those with distinct characteristics and capabilities—have been mitigated before products or features are released to the public.

SbD Principle 3: Transparency and accountability. Transparency and accountability are hallmarks of a robust approach to safety. They not only provide assurances that services are operating according to their published safety objectives, but also assist in educating and empowering users about steps they can take to address safety concerns. To enhance users’ trust, awareness and understanding of the role, and importance, of user safety:

1. Embed user safety considerations, training and practices into the roles, functions and working practices of all individuals who work with, for, or on behalf of the product or service.

2. Ensure that user-safety policies, terms and conditions, community standards and processes about user safety are visible, easy-to-find, regularly updated and easy to understand. Users should be periodically reminded of these policies and proactively notified of changes or updates through targeted in-service communications.

3. Carry out open engagement with a wide user-base, including experts and key stakeholders, on the development, interpretation and application of safety standards and their effectiveness or appropriateness.

4. Publish an annual assessment of reported abuses on the service, alongside the open publication of meaningful analysis of metrics such as abuse data and reports, the effectiveness of moderation efforts and the extent to which community standards and terms of service are being satisfied through enforcement metrics.

5. Commit to consistently innovate and invest in safety-enhancing technologies on an ongoing basis and collaborate and share with others safety-enhancing tools, best practices, processes and technologies.
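
A developer-flavoured aside from us (not part of the eSafety text): Principle 2.1, ‘set to the most secure privacy and safety levels by default’, translates fairly directly into code. Here is a minimal sketch in Python, with setting names we have invented purely for illustration; the point is simply that a brand-new account starts at the safest configuration, which users may consciously relax, never the reverse.

```python
from dataclasses import dataclass

@dataclass
class SafetySettings:
    """Per-user safety settings (illustrative names, not from the framework).

    Every default below is the most restrictive option, so a new account
    is safe with zero effort from the user: that is 'safe by default'.
    """
    profile_visible_to: str = "contacts_only"   # not "everyone"
    allow_messages_from: str = "contacts_only"  # strangers are opt-in
    media_autodownload: bool = False            # no unsolicited images
    content_filter_level: str = "strict"        # relaxable to "moderate"/"off"
    location_sharing: bool = False              # never on by default

def new_account_settings() -> SafetySettings:
    # Called at registration time: the user gets the safest configuration
    # without having to find and enable anything.
    return SafetySettings()
```

The details will differ per platform, but the pattern (restrictive defaults that users relax, rather than protections they must discover) is the heart of the principle.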

Your list of Online Harms to protect your users against

Your app/platform/website/service is a force for good, right? I’ll assume so (if it’s not, you’re on the wrong side of us, and your time is up!) because, generally, developers, product managers, entrepreneurs and customer service teams are out there to add value and delight their customers. So you may not have planned to spend your scarce development effort not just on protecting your platform from cyber attack, but also on protecting your users from other users and threat actors. It’s an unfortunate fact of the modern internet that bad actors are out there, and the chances are they will use your platform to attack other people.

So, your first question might be: what do I need to safeguard my users against?

Great question. There was a time when that mostly came down to removing profanity (bad language) and (perhaps) ensuring there was no abuse or harassment going on. Then came the widespread problem of Online Child Sexual Exploitation (OCSE) and (if you allow file exchange) the transfer and distribution of Child Sexual Abuse Material (CSAM).
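
By way of illustration (and very much a sketch of that early era, not a recommendation), the first-generation response was often little more than a word-list filter. Something like this, in Python, where the blocklist is a placeholder and a real deployment would need normalisation, obfuscation handling and multi-language coverage:

```python
import re

# Placeholder list: a real deployment needs a maintained, multi-language
# lexicon and handling for obfuscations like "b@dword".
BLOCKLIST = {"badword", "anotherbadword"}

def redact_profanity(text: str) -> str:
    """Replace blocklisted words with asterisks, case-insensitively."""
    def replace(match: re.Match) -> str:
        word = match.group(0)
        return "*" * len(word) if word.lower() in BLOCKLIST else word
    return re.sub(r"[A-Za-z0-9']+", replace, text)

print(redact_profanity("This is a badword example"))
# -> "This is a ******* example"
```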

But it doesn’t stop there. The recent Online Harms White Paper (OHWP) identified 29 harms and describes the list as “neither exhaustive nor fixed”.

  • …advocacy of self-harm
  • …extremist content and activity
  • …promotion of Female Genital Mutilation (FGM)
  • …etc. (29 at the moment)

But identifying online harms and coming up with mitigations is not your day job, so we thought we would make your life a lot easier. Step 1 was to create a place where you could see the full list of identified harms, each with (some sort of!) definition.

That’s why we have released the Online Harms Catalogue.

This is only the start of the resources we plan to provide to make it easier for you to mount a great technology response.

We’re happy to work with you to provide mitigations for all of these. Some will be technology, some will be process and others may simply be a tweak in your policy, but we believe that knowing what you need to guard against is the first step in providing a response.

Do get in touch if we can help you.

Is this the only Content Moderation Catalogue on the Internet?

We’re passionate about making the internet a safer place for people, but we’re not in the business of Social Media, Chat or Content. But what if you are?

We have pulled together what we think is the most complete and current catalogue of Content Moderation companies on the internet. So if you’re a developer looking for a service to make your site or app a safer place, then head over to the Content Moderation app store and let us know what you think.
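
Most of the services in the catalogue expose a broadly similar shape of API: you send a piece of content, you get back a verdict or a set of category scores. As a rough sketch of what an integration tends to look like (the endpoint, field names and threshold below are entirely invented, so check your chosen provider’s documentation):

```python
import requests  # third-party: pip install requests

# Invented endpoint and response schema, for illustration only.
# Every provider differs; consult the docs of the service you pick.
MODERATION_URL = "https://api.example-moderator.com/v1/classify"
API_KEY = "your-api-key"

def is_allowed(text: str, threshold: float = 0.8) -> bool:
    """Ask the moderation service to score a piece of user content.

    Returns False (block, or queue for human review) if any category
    score meets the threshold.
    """
    resp = requests.post(
        MODERATION_URL,
        json={"content": text},
        headers={"Authorization": f"Bearer {API_KEY}"},
        timeout=5,
    )
    resp.raise_for_status()
    scores = resp.json()["scores"]  # e.g. {"hate": 0.02, "harassment": 0.91}
    return max(scores.values()) < threshold
```

Typical use is to gate content at the point of posting, with anything the service flags routed to your human-review queue rather than silently dropped.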

If you provide a Content Moderation Service, then why not get in touch and either add or edit your listing?

Please go ahead and spread the word.

Online Harms

The (reasonably) recent release of the UK government’s Online Harms White Paper (OHWP) brought with it some interesting (and somewhat helpful) positions on online harms.

Why should you care?

Well, we think it matters for two reasons.

(1) The OHWP lays out the direction for Government (and hence points towards the sort of regulation that might impact businesses), and

(2) It has the input of a decent number of people with experience in this area, and so acts as a good reference point.

But there’s one area that we found particularly important to look at: the categorisation of Online Harms. There are a couple of takeaway points here that we think should be raised:

  1. It recognises that the list is not exhaustive and is expected to evolve over time
  2. It de-scopes some things that, whilst businesses are still required to protect against them, the OHWP feels are out of scope.

The first is, we think, the most important. For us, it is a ringing endorsement of our approach and thinking. Many other companies have taken (either deliberately or as a result of their legacy) an approach that focuses on CSEA or CSAM, perhaps with some other protection mechanisms built in. We have always started from the position of understanding ALL online harms and focusing on protection against all of them.

This government recognition of the full suite of harms (and that it is constantly evolving) helps you start thinking about your Online Harm protection plan. It’s time to stop thinking just about Safeguarding or CSEA and to start thinking more holistically about Online Harms.

So what are these harms?

  • Online CSEA
  • Terrorist content
  • Illegal upload from prisons
  • Gang culture and incitement to serious violence
  • Sale of illegal goods & services e.g. drugs & weapons
  • Organised immigration crime
  • Modern slavery
  • Extreme pornography
  • Revenge pornography
  • Hate Crime
  • Cyber bullying and trolling
  • Advocacy of self-harm
  • Encouraging or assisting suicide
  • Sexting of indecent images by under 18s
  • Self-Generated Indecent Imagery (SGII)
  • Online disinformation
  • Harassment & cyberstalking
  • Intimidation
  • Extremist content and activity
  • Coercive behaviour
  • Violent Content
  • Promotion of FGM
  • Children accessing pornography
  • Children accessing inappropriate material
  • Online manipulation
  • Interference with legal proceedings
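
One practical way to get started is to treat the list above as data: record, for each harm, whether you have a mitigation in place and of what kind (technology, process or policy, echoing the point we made about the catalogue). A minimal sketch in Python; the coverage values here are illustrative only, not a recommendation:

```python
from enum import Enum

class Mitigation(Enum):
    NONE = "no mitigation yet"
    TECHNOLOGY = "technical control (filter, classifier, hash-matching)"
    PROCESS = "operational process (human review, escalation)"
    POLICY = "terms-of-service or community-standards provision"

# A few entries from the OHWP list above; extend to all of them.
coverage = {
    "Online CSEA": Mitigation.TECHNOLOGY,
    "Cyber bullying and trolling": Mitigation.PROCESS,
    "Online disinformation": Mitigation.NONE,
}

# Surface the gaps so they can be planned and prioritised.
gaps = [harm for harm, m in coverage.items() if m is Mitigation.NONE]
print("Harms with no mitigation:", gaps)
```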

Clearly, it’s unlikely that you will already have a plan, or measures in place, to address all of these. If you want some help making head or tail of it, get in touch.

Best

Matt
