Safety by Design – Australian eSafety

The hotly anticipated Safety by Design Framework has finally been launched by Australia's eSafety office.

[Image: Safety by Design Overview front page]
The SBD Overview

If you’re in the business of providing online services or platforms that have User Generated Content (UGC) or chat, and you don’t know what this is, then you’re definitely going to want to head over there and take a look. Essentially it’s the ‘Best Practice Guide’ for developing online services that counter the Online Harms that we so often talk about.

You really should head over to the main page and pick the document that best suits you. However, if you want the short, sharp ‘Principles’ (with their well-thought-out sub-points), we’ve copied them here for you below.

As always, get in contact if you want to know more.

SbD Principle 1: Service provider responsibilities. The burden of safety should never fall solely upon the end user. Service providers can take preventative steps to ensure that their service is less likely to facilitate, inflame or encourage illegal and inappropriate behaviours. To help ensure that known and anticipated harms have been evaluated in the design and provision of an online service, a service should take the following steps:

1. Nominate individuals, or teams—and make them accountable—for user-safety policy creation, evaluation, implementation and operations.

2. Develop community standards, terms of service and moderation procedures that are fairly and consistently implemented.

3. Put in place infrastructure that supports internal and external triaging, clear escalation paths and reporting on all user-safety concerns, alongside readily accessible mechanisms for users to flag and report concerns and violations at the point that they occur.

4. Ensure there are clear internal protocols for engaging with law enforcement, support services and illegal content hotlines.

5. Put processes in place to detect, surface, flag and remove illegal and harmful conduct, contact and content with the aim of preventing harms before they occur.

6. Prepare documented risk management and impact assessments to assess and remediate any potential safety harms that could be enabled or facilitated by the product or service.

7. Implement social contracts at the point of registration. These outline the duties and responsibilities of the service, user and third parties for the safety of all users.

8. Consider security-by-design, privacy-by-design and user-safety considerations, which are balanced when securing the ongoing confidentiality, integrity and availability of personal data and information.
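Point 5 above, detecting and flagging harmful content with the aim of preventing harm before it occurs, can be sketched in code. This is a deliberately minimal illustration: the blocklist, function name and triage actions are our own inventions, and a real service would layer ML classifiers, hash-matching and human review on top of anything like this.

```python
# Minimal sketch of Principle 1, point 5: screening user-generated content
# before publication. BLOCKLIST and the action names are illustrative
# placeholders, not part of the eSafety framework.

BLOCKLIST = {"examplebadword", "anotherbadword"}  # hypothetical terms

def screen_message(text: str) -> dict:
    """Return a triage decision for a piece of user-generated content."""
    tokens = {token.strip(".,!?").lower() for token in text.split()}
    hits = sorted(tokens & BLOCKLIST)
    if hits:
        # Matched content is held for moderator review rather than
        # published, preventing harm before it occurs.
        return {"action": "hold_for_review", "matched_terms": hits}
    return {"action": "publish", "matched_terms": []}

print(screen_message("hello world"))
print(screen_message("this contains examplebadword!"))
```

The design choice worth noting is that the safe outcome (hold for review) is triggered automatically, which matches the framework's emphasis on preventative rather than reactive measures.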

SbD Principle 2: User empowerment and autonomy. The dignity of users is of central importance, with users’ best interests a primary consideration. The following steps will go some way to ensure that users have the best chance at safe online interactions, through features, functionality and an inclusive design approach that secures user empowerment and autonomy as part of the in-service experience. Services should aim to:

1. Provide technical measures and tools that adequately allow users to manage their own safety, and that are set to the most secure privacy and safety levels by default.

2. Establish clear protocols and consequences for service violations that serve as meaningful deterrents and reflect the values and expectations of the user base.

3. Leverage the use of technical features to mitigate against risks and harms, which can be flagged to users at point of relevance, and which prompt and optimise safer interactions.

4. Provide built-in support functions and feedback loops for users that inform users on the status of their reports, what outcomes have been taken and offer an opportunity for appeal.

5. Evaluate all design and function features to ensure that risk factors for all users—particularly for those with distinct characteristics and capabilities—have been mitigated before products or features are released to the public.
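The feedback loop in point 4 can be modelled as a small state machine: a report moves through statuses, the user is notified at each change, and a dismissal opens an appeal path. The status names and transitions below are assumptions for illustration, not taken from the framework.

```python
# Illustrative sketch of Principle 2, point 4: a report lifecycle with
# status updates and an appeal route. Status names are invented.

from dataclasses import dataclass, field

VALID_TRANSITIONS = {
    "received": {"under_review"},
    "under_review": {"actioned", "dismissed"},
    "dismissed": {"appealed"},          # the appeal opportunity
    "appealed": {"actioned", "dismissed_final"},
}

@dataclass
class UserReport:
    reporter: str
    subject: str
    status: str = "received"
    history: list = field(default_factory=list)

    def advance(self, new_status: str) -> None:
        if new_status not in VALID_TRANSITIONS.get(self.status, set()):
            raise ValueError(f"cannot move from {self.status} to {new_status}")
        self.history.append(self.status)
        self.status = new_status  # in a real service, notify the user here

report = UserReport(reporter="user123", subject="post456")
report.advance("under_review")
report.advance("dismissed")
report.advance("appealed")
print(report.status, report.history)
```

Keeping the history alongside the current status is what lets a service tell users "what outcomes have been taken", as the principle requires.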

SbD Principle 3: Transparency and accountability. Transparency and accountability are hallmarks of a robust approach to safety. They not only provide assurances that services are operating according to their published safety objectives, but also assist in educating and empowering users about steps they can take to address safety concerns. To enhance users’ trust, awareness and understanding of the role, and importance, of user safety:

1. Embed user safety considerations, training and practices into the roles, functions and working practices of all individuals who work with, for, or on behalf of the product or service.

2. Ensure that user-safety policies, terms and conditions, community standards and processes about user safety are visible, easy-to-find, regularly updated and easy to understand. Users should be periodically reminded of these policies and proactively notified of changes or updates through targeted in-service communications.

3. Carry out open engagement with a wide user-base, including experts and key stakeholders, on the development, interpretation and application of safety standards and their effectiveness or appropriateness.

4. Publish an annual assessment of reported abuses on the service, alongside the open publication of meaningful analysis of metrics such as abuse data and reports, the effectiveness of moderation efforts and the extent to which community standards and terms of service are being satisfied through enforcement metrics.

5. Commit to consistently innovate and invest in safety-enhancing technologies on an ongoing basis and collaborate and share with others safety-enhancing tools, best practices, processes and technologies.
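The "meaningful analysis of metrics" in point 4 of Principle 3 boils down to aggregating raw abuse reports into publishable figures. Here is a toy version; the record shape, categories and metric names are invented for the sketch.

```python
# Sketch of the aggregation behind a transparency report (Principle 3,
# point 4). The report records and categories are illustrative only.

from collections import Counter

def transparency_summary(reports: list[dict]) -> dict:
    """Aggregate abuse reports into annual transparency-report metrics."""
    by_category = Counter(r["category"] for r in reports)
    actioned = sum(1 for r in reports if r["actioned"])
    return {
        "total_reports": len(reports),
        "by_category": dict(by_category),
        # Share of reports where moderation took action: a rough proxy
        # for the effectiveness of enforcement.
        "action_rate": actioned / len(reports) if reports else 0.0,
    }

sample = [
    {"category": "harassment", "actioned": True},
    {"category": "harassment", "actioned": False},
    {"category": "spam", "actioned": True},
]
print(transparency_summary(sample))
```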

Safety by Design – Keeping platforms safe for users

The internet is an incredible place and has brought immeasurable good into the world, but it’s no secret that it has brought a great deal of harm too. Just as the variety of online benefits increases every day, so do the Online Harms. Even as we write, the UK government is set to release its Online Harms White Paper (OHWP), which we hope will enumerate the harms it intends to tackle. From our research, there appears to be no widely accepted listing of harms, which makes it difficult to tackle them head on. The best we have found so far comes from Ofcom (more on that later).

So, as you may expect, we’re delighted that there are organisations that are providing real, quality advice on measures that companies can take to keep their platform safe.

Whilst it hasn’t been released yet, I’d encourage anyone with a social media element to their platform, or who takes User Generated Content (UGC), to follow the Australian eSafety office for the imminent release of their Safety by Design framework.

We’ll keep you posted.

How to make your website safer for humans – the basics in the UK

We often have conversations with small platform and service providers that are starting to think about making their service a safer place for their users. This applies to almost all types of website and internet based services, but in particular those that provide some social media function, some chat platform or those with User Generated Content (UGC).

One of the first things that is mentioned is the lack of clear guidance available on what the service or platform’s responsibilities are and what steps they should be taking.

A good place to start in the UK is the UK Council for Child Internet Safety and the principles they provide.

Take a look at the HTML version here.

Whilst this is targeted towards protecting children online, it is sound advice for any platform or service seeking to protect users online.

We’ll unpack more on this in later posts, but if you’re looking at their advice (for example):

…use tools such as search algorithms to look for slang words typically used by children and young people, and to identify children under 13 who may have lied about their age at registration

Then you may also want to take a look at our Online Safety Content Moderation Company list.
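The advice quoted above can be reduced to a toy sketch: scan an account's posts for slang associated with younger users and flag the account for age review once enough terms match. The slang list and threshold here are made-up placeholders; a real deployment would use a curated, regularly updated lexicon and many more signals than keywords alone.

```python
# Toy version of the quoted advice: flag accounts whose posts contain
# youth-associated slang for a manual age review. YOUTH_SLANG and the
# threshold are hypothetical, for illustration only.

import re

YOUTH_SLANG = {"skool", "bestie", "sleepover"}  # placeholder terms

def flag_possible_underage(posts: list[str], threshold: int = 2) -> bool:
    """Return True if an account's posts warrant an age review."""
    hits = 0
    for post in posts:
        words = set(re.findall(r"[a-z']+", post.lower()))
        hits += len(words & YOUTH_SLANG)
    return hits >= threshold

print(flag_possible_underage(["off to skool", "with my bestie"]))  # True
print(flag_possible_underage(["quarterly earnings report"]))       # False
```

Note the output is a flag for human review, not an automatic ban: keyword matching alone produces far too many false positives to act on directly.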