Safety by Design – Australian eSafety

The hotly anticipated Safety by Design Framework has finally been launched by the Office of the eSafety Commissioner in Australia.

[Image: the Safety by Design Overview front page]

If you’re in the business of providing online services or platforms that have User Generated Content (UGC) or chat, and you don’t know what this is, then you’re definitely going to want to head over there and take a look. Essentially it’s the ‘Best Practice Guide’ for developing online services that counter the Online Harms that we so often talk about.

You really should head over to the main page and pick the document that best suits you. However, if you just want the short, sharp ‘Principles’ (with their well-thought-out sub-points), we’ve copied them here for you below.

As always, get in contact if you want to know more.

SbD Principle 1: Service provider responsibilities. The burden of safety should never fall solely upon the end user. Service providers can take preventative steps to ensure that their service is less likely to facilitate, inflame or encourage illegal and inappropriate behaviours. To help ensure that known and anticipated harms have been evaluated in the design and provision of an online service, a service should take the following steps:

1. Nominate individuals or teams—and make them accountable—for user-safety policy creation, evaluation, implementation and operations.

2. Develop community standards, terms of service and moderation procedures that are fairly and consistently implemented.

3. Put in place infrastructure that supports internal and external triaging, clear escalation paths and reporting on all user-safety concerns, alongside readily accessible mechanisms for users to flag and report concerns and violations at the point that they occur (see the sketch after this list).

4. Ensure there are clear internal protocols for engaging with law enforcement, support services and illegal content hotlines.

5. Put processes in place to detect, surface, flag and remove illegal and harmful conduct, contact and content with the aim of preventing harms before they occur.

6. Prepare documented risk management and impact assessments to assess and remediate any potential safety harms that could be enabled or facilitated by the product or service.

7. Implement social contracts at the point of registration. These outline the duties and responsibilities of the service, user and third parties for the safety of all users.

8. Consider security-by-design, privacy-by-design and user-safety considerations, balancing them when securing the ongoing confidentiality, integrity and availability of personal data and information.
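
To make step 3 a little more concrete, here’s a minimal sketch in Python of report-and-triage plumbing. Everything in it (the severity levels, the escalation rule, the in-memory queue, the helper names) is our illustration, not part of the eSafety framework:

    from dataclasses import dataclass, field
    from datetime import datetime, timezone
    from enum import Enum

    class Severity(Enum):
        LOW = 1        # e.g. spam
        MEDIUM = 2     # e.g. harassment
        CRITICAL = 3   # e.g. suspected illegal content

    @dataclass
    class UserReport:
        reporter_id: str
        content_id: str
        reason: str
        severity: Severity
        created_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    def notify_safety_team(report: UserReport) -> None:
        # Stand-in for alerting the accountable safety team (step 1).
        print(f"[safety-team] {report.severity.name}: {report.reason}")

    def notify_authorities(report: UserReport) -> None:
        # Stand-in for your law-enforcement/hotline protocol (step 4).
        print(f"[escalation] referring content {report.content_id}")

    class TriageQueue:
        def __init__(self) -> None:
            self._queue = []

        def submit(self, report: UserReport) -> None:
            # Accept a user flag at the point it occurs (step 3).
            self._queue.append(report)
            if report.severity is Severity.CRITICAL:
                # A clear escalation path, not a generic inbox.
                notify_safety_team(report)
                notify_authorities(report)

    queue = TriageQueue()
    queue.submit(UserReport("user42", "post-9001", "threat of violence", Severity.CRITICAL))

The structural point: every report lands somewhere owned and accountable, and the worst ones never wait in a general queue.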

SbD Principle 2: User empowerment and autonomy. The dignity of users is of central importance, with users’ best interests a primary consideration. The following steps will go some way to ensure that users have the best chance at safe online interactions, through features, functionality and an inclusive design approach that secures user empowerment and autonomy as part of the in-service experience. Services should aim to:

1. Provide technical measures and tools that adequately allow users to manage their own safety, and that are set to the most secure privacy and safety levels by default (see the sketch after this list).

2. Establish clear protocols and consequences for service violations that serve as meaningful deterrents and reflect the values and expectations of the user base.

3. Leverage technical features to mitigate risks and harms, which can be flagged to users at the point of relevance, and which prompt and optimise safer interactions.

4. Provide built-in support functions and feedback loops for users that inform users on the status of their reports, what outcomes have been taken and offer an opportunity for appeal.

5. Evaluate all design and function features to ensure that risk factors for all users—particularly for those with distinct characteristics and capabilities—have been mitigated before products or features are released to the public.
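
To illustrate step 1’s ‘most secure by default’, here’s a minimal sketch. The setting names and values are invented for illustration; the idea is simply that a brand-new account starts locked down, and relaxing anything is an explicit choice the user makes later:

    from dataclasses import dataclass

    @dataclass
    class SafetySettings:
        # Safest values first; users opt *out* of safety, never into it.
        profile_visibility: str = "friends_only"   # not "public"
        direct_messages_from: str = "contacts"     # strangers can't DM by default
        location_sharing: bool = False
        content_filter_level: str = "strict"

    def register_user(user_id: str) -> SafetySettings:
        # Every new account starts with the safest configuration.
        return SafetySettings()

    print(register_user("new-user-1"))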

SbD Principle 3: Transparency and accountability. Transparency and accountability are hallmarks of a robust approach to safety. They not only provide assurances that services are operating according to their published safety objectives, but also assist in educating and empowering users about steps they can take to address safety concerns. To enhance users’ trust, awareness and understanding of the role, and importance, of user safety:

1. Embed user safety considerations, training and practices into the roles, functions and working practices of all individuals who work with, for, or on behalf of the product or service.

2. Ensure that user-safety policies, terms and conditions, community standards and processes about user safety are visible, easy-to-find, regularly updated and easy to understand. Users should be periodically reminded of these policies and proactively notified of changes or updates through targeted in-service communications.

3. Carry out open engagement with a wide user-base, including experts and key stakeholders, on the development, interpretation and application of safety standards and their effectiveness or appropriateness.

4. Publish an annual assessment of reported abuses on the service, alongside the open publication of meaningful analysis of metrics such as abuse data and reports, the effectiveness of moderation efforts and the extent to which community standards and terms of service are being satisfied through enforcement metrics (a sketch of such metrics follows this list).

5. Commit to consistently innovate and invest in safety-enhancing technologies on an ongoing basis and collaborate and share with others safety-enhancing tools, best practices, processes and technologies.
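
For step 4, the enforcement metrics themselves can start out very simple. Here’s a minimal sketch of turning resolved reports into publishable numbers; the report categories and action names are ours, purely for illustration:

    from collections import Counter

    def transparency_summary(resolved_reports):
        # resolved_reports: iterable of (category, action_taken) pairs.
        by_category = Counter(cat for cat, _ in resolved_reports)
        by_action = Counter(action for _, action in resolved_reports)
        return {
            "total_reports": sum(by_category.values()),
            "reports_by_category": dict(by_category),
            "actions_taken": dict(by_action),
        }

    print(transparency_summary([
        ("harassment", "content_removed"),
        ("harassment", "no_action"),
        ("spam", "account_suspended"),
    ]))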

Your list of Online Harms to protect your users against

Your app/platform/website/service is a force for good, right? I’ll assume so (if it’s not, you’re on the wrong side of us, and your time is up!) because generally developers, product managers, entrepreneurs and customer service teams are out there to add value and delight their customers. So you may not have planned to spend your scarce development effort not just on protecting your platform from cyber attack, but on protecting your users from other users and threat actors. It’s an unfortunate fact of the modern internet that bad actors are out there, and the chances are they will use your platform to attack other people.

So, your first question might be…what is it I need to safeguard my users against?

Great question, and there was a time when that mostly came down to removing profanity (bad language) and (perhaps) ensuring there was no abuse or harassment going on. Then came the widespread problem of Online Child Sexual Exploitation (OCSE) and (if you allow file exchange) the transfer and distribution of Child Sexual Abuse Material (CSAM).
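
Even that first-generation problem, profanity filtering, is less trivial than it looks: naive substring matching famously flags innocent words (the ‘Scunthorpe problem’). Here’s a minimal sketch, with a placeholder word list, that at least matches on whole words only:

    import re

    BLOCKLIST = {"badword", "slur"}  # placeholder terms, not a real list

    PATTERN = re.compile(
        r"\b(" + "|".join(re.escape(w) for w in sorted(BLOCKLIST)) + r")\b",
        re.IGNORECASE,
    )

    def contains_profanity(message: str) -> bool:
        return PATTERN.search(message) is not None

    print(contains_profanity("no badword here"))      # True
    print(contains_profanity("Scunthorpe is lovely")) # False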

But it doesn’t stop there. The recent Online Harms White Paper (OHWP) identified 29 harms and describes the list as “neither exhaustive nor fixed”.

  • advocacy of self-harm
  • extremist content and activity
  • promotion of Female Genital Mutilation (FGM)
  • …and so on (29 at the time of writing)

But identifying online harms and coming up with mitigations is not your day job, so we thought we would make your life a lot easier. Step 1 was to create a place where you could see the full list of identified harms, each with (some sort of!) definition.

That’s why we have released the Online Harms Catalogue.

This is only the start of the resources we plan to provide to make it easier for you to mount a great technology response.

We’re happy to work with you to provide mitigations for all of these. Some will be technology, some will be process and others may simply be a tweak in your policy, but we believe that knowing what you need to guard against is the first step in providing a response.

Do get in touch if we can help you.

Is this the only Content Moderation Catalogue on the Internet?

We’re passionate about making the internet a safer place for people, but we’re not in the business of Social Media, Chat or Content. But what if you are?

We have pulled together what we think is the most complete and current catalogue of Content Moderation companies on the internet. So if you’re a developer and looking for a service to make your site or app a safer place, then head over to the Content Moderation app store and let us know what you think.

If you provide a Content Moderation Service, then why not get in touch and either add or edit your listing?

Please go ahead and spread the word.

Will Article 13 of the Copyright Directive damage your platform?

There’s been a long, winding journey for the European copyright legislation, culminating today in a vote that passed it.

Most notably, Article 13 is likely to impact Platform and Service providers seeking to keep their Platform clean.

Article 13 holds larger technology companies responsible for material posted without a copyright licence. It says that content-sharing services must license copyright-protected material from the rights holders, or they could be held liable unless:

  • it made “best efforts” to get permission from the copyright holder
  • it made “best efforts” to ensure that material specified by rights holders was not made available
  • it acted quickly to remove any infringing material of which it was made aware

Platform providers (especially those with User Generated Content (UGC)) will recognise, and may be comfortable with, such clauses. They are similar to best-practice approaches to protecting users online from harmful content.
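
To make the second bullet concrete: one ‘best efforts’ building block is fingerprinting uploads against material that rights holders have specified. The sketch below uses exact SHA-256 matching purely for illustration; real deployments use robust perceptual fingerprints (which survive re-encoding and small edits), since a single changed byte defeats a plain hash:

    import hashlib

    rights_holder_fingerprints = set()  # populated from rights-holder notices

    def fingerprint(data: bytes) -> str:
        return hashlib.sha256(data).hexdigest()

    def register_protected_work(data: bytes) -> None:
        rights_holder_fingerprints.add(fingerprint(data))

    def allow_upload(data: bytes) -> bool:
        # Block (or route to licensing/manual review) anything that matches.
        return fingerprint(data) not in rights_holder_fingerprints

    register_protected_work(b"bytes of a protected work")
    print(allow_upload(b"an original user upload"))    # True
    print(allow_upload(b"bytes of a protected work"))  # False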

If you’re looking for a Content Moderation Company (even one that can help you with the EU Copyright Directive), head over to the Content Moderation Marketplace and find one that works for you.

How to make your website safer for humans – the basics in the UK

We often have conversations with small platform and service providers that are starting to think about making their service a safer place for their users. This applies to almost all types of website and internet based services, but in particular those that provide some social media function, some chat platform or those with User Generated Content (UGC).

One of the first things that is mentioned is the lack of clear guidance available on what the service or platform’s responsibilities are and what steps they should be taking.

A good place to start in the UK is the UK Council for Child Internet Safety (UKCCIS) and the principles they provide.

Take a look at the HTML version here.

Whilst this is targeted towards protecting children online, it is sound advice for any platform or service seeking to protect users online.

We’ll unpack more on this in later posts, but if you’re looking at their advice (for example):

…use tools such as search algorithms to look for slang words typically used by children and young people, and to identify children under 13 who may have lied about their age at registration
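
As a flavour of what that might look like in code, here’s a minimal sketch of scoring messages against youth-slang signals and flagging accounts for manual age review. The word list, weights and threshold are all invented for illustration; a production system would be properly trained and evaluated:

    # All values below are invented for illustration only.
    YOUTH_SLANG = {"skl": 1.0, "hbu": 0.5, "omds": 1.0, "wyd": 0.5}
    REVIEW_THRESHOLD = 2.0

    def age_signal_score(messages):
        score = 0.0
        for msg in messages:
            for token in msg.lower().split():
                score += YOUTH_SLANG.get(token, 0.0)
        return score

    def flag_for_age_review(messages):
        # Flags the account for *manual* review, never automatic action.
        return age_signal_score(messages) >= REVIEW_THRESHOLD

    print(flag_for_age_review(["omds wyd after skl", "hbu"]))  # True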

Then you may also want to take a look at our Online Safety Content Moderation Company list.