The following content is the basic content of an email that was sent to our email list subscribers. We try and post a copy of email newsletters here, roughly two weeks after they were sent to subscribers. If you want content like this, free, to your inbox, long before it arrives here, then click here to subscribe. Due to the way content renders on the internet, it won’t be an exact version of the original email.
Advice for parents, guardians and carers of young people online during the COVID-19 Pandemic.
The ever-brilliant Australian eSafety Office has released a well-put-together guide here:
Staying connected online has never been more important, now that many of us are physically isolated from family members, friends, colleagues and support networks. The internet is a great way to socialise, learn, work, play and be entertained. But there are also risks. So eSafety is adding new content every day to help you stay safe online.
We highly recommend heading over there and checking it out.
3 Quick Tips to make your Service Safer for your users.
You don’t have the time to wade through reams and reams of government-issued documentation and guidance about online safety for your users, so today I’ve picked just a few tips to pique your interest. I’d be keen to hear back (just hit ‘reply’, or comment on the blog post version of this) to let me know what you’d like to hear more about, and I’ll make it happen.
1. Content moderation
Pre, Post or Reactive?
If you’ve not engaged with a content moderation service for online safety yet, you may have assumed you just ‘plug it in’ and it happens. In fact, you have to make some choices, I’m afraid. Here are some of the basic strategies available to you:
Pre-moderation. Content from users (for example, User Generated Content (UGC)) is placed in a queue and your moderation service (automated or human) forms an opinion before it gets published.
Post-moderation. As you may have already guessed, this allows content from your users to be published immediately (giving that warm glow of achievement of having ‘gone live’), but the content is replicated in a queue to be moderated as soon as it can be got to.
Reactive moderation. This lets the community (or, less happily, law enforcement) report content to be moderated. Reports can then be fed into your automated, semi-automated or human moderation team.
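To make the three strategies concrete, here is a minimal Python sketch of how each one sequences the publish and review steps. All names (`PreModeration`, `review`, and the toy "spam" rule) are illustrative assumptions, not any real moderation API.

```python
from collections import deque

def review(item):
    """Stand-in for an automated or human moderation decision (toy rule)."""
    return "spam" not in item

class PreModeration:
    """Content is reviewed BEFORE it is ever published."""
    def __init__(self):
        self.published = []
    def submit(self, item):
        if review(item):
            self.published.append(item)

class PostModeration:
    """Content goes live immediately, then is reviewed from a queue."""
    def __init__(self):
        self.published = []
        self.queue = deque()
    def submit(self, item):
        self.published.append(item)   # live straight away
        self.queue.append(item)       # reviewed as soon as possible
    def process_queue(self):
        while self.queue:
            item = self.queue.popleft()
            if not review(item):
                self.published.remove(item)  # take down on failure

class ReactiveModeration:
    """Content is only reviewed when somebody reports it."""
    def __init__(self):
        self.published = []
    def submit(self, item):
        self.published.append(item)
    def report(self, item):
        if item in self.published and not review(item):
            self.published.remove(item)
```

The trade-off is visible in the code: pre-moderation delays publication, post-moderation leaves a window where bad content is live, and reactive moderation depends entirely on reports arriving.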
IWF Make a Report Link
If your users are in the UK, they can make a report of images or videos of Child Abuse they come across directly to the IWF. You can link to the ‘Make a Report Button’ (https://report.iwf.org.uk/en/report) for an easy option for your users.
Default Settings
If you operate in the UK, you are now required to ensure that your service’s default settings are ‘high privacy’ (unless you can demonstrate a compelling reason for a different default setting, taking account of the best interests of the child). This requirement comes from the ICO’s new Age Appropriate Design Code (coming into force soon), and the ICO will have the power to fine companies that do not comply!
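As a sketch of what ‘high privacy by default’ can mean in practice, here is a hypothetical settings object where every privacy-relevant option starts in its most protective state. The field names are invented for illustration; the code’s actual defaults would depend on your service.

```python
from dataclasses import dataclass

@dataclass
class PrivacySettings:
    # All defaults start at the most protective setting.
    profile_visible: bool = False       # profile hidden by default
    location_sharing: bool = False      # geolocation off by default
    contact_by_strangers: bool = False  # only known contacts can message
    data_used_for_ads: bool = False     # no ad profiling by default

def new_user_settings():
    """Every new account starts at high privacy; users may relax
    settings deliberately, never silently."""
    return PrivacySettings()
```

The key design point is that relaxing any setting is an explicit user action, never something the service turns on for them.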
Hope those tiny-tips offer some pointers, but please do reply and let me know if you have other ideas.
The hotly anticipated Safety by Design Framework has finally launched from the office of eSafety in Australia.
The SBD Overview
If you’re in the business of providing online services or platforms that have User Generated Content (UGC) or chat, and you don’t know what this is, then you’re definitely going to want to head over there and take a look. Essentially it’s the ‘Best Practice Guide’ for developing online services that counter the Online Harms that we so often talk about.
You really should head over to the main page and pick the document that best suits you, however, if you want the short, sharp ‘Principles’ (with the well thought out and described sub-points) we’ve copied them here for you below.
SbD Principle 1: Service provider responsibilities. The burden of safety should never fall solely upon the end user. Service providers can take preventative steps to ensure that their service is less likely to facilitate, inflame or encourage illegal and inappropriate behaviours. To help ensure that known and anticipated harms have been evaluated in the design and provision of an online service, a service should take the following steps:
1. Nominate individuals, or teams—and make them accountable—for user-safety policy creation, evaluation, implementation, operations.
2. Develop community standards, terms of service and moderation procedures that are fairly and consistently implemented.
3. Put in place infrastructure that supports internal and external triaging, clear escalation paths and reporting on all user-safety concerns, alongside readily accessible mechanisms for users to flag and report concerns and violations at the point that they occur.
4. Ensure there are clear internal protocols for engaging with law enforcement, support services and illegal content hotlines.
5. Put processes in place to detect, surface, flag and remove illegal and harmful conduct, contact and content with the aim of preventing harms before they occur.
6. Prepare documented risk management and impact assessments to assess and remediate any potential safety harms that could be enabled or facilitated by the product or service.
7. Implement social contracts at the point of registration. These outline the duties and responsibilities of the service, user and third parties for the safety of all users.
8. Consider security-by-design, privacy-by-design and user-safety considerations, which are balanced when securing the ongoing confidentiality, integrity and availability of personal data and information.
SbD Principle 2: User empowerment and autonomy. The dignity of users is of central importance, with users’ best interests a primary consideration. The following steps will go some way to ensure that users have the best chance at safe online interactions, through features, functionality and an inclusive design approach that secures user empowerment and autonomy as part of the in-service experience. Services should aim to:
1. Provide technical measures and tools that adequately allow users to manage their own safety, and that are set to the most secure privacy and safety levels by default.
2. Establish clear protocols and consequences for service violations that serve as meaningful deterrents and reflect the values and expectations of the user base.
3. Leverage the use of technical features to mitigate against risks and harms, which can be flagged to users at point of relevance, and which prompt and optimise safer interactions.
4. Provide built-in support functions and feedback loops for users that inform users on the status of their reports, what outcomes have been taken and offer an opportunity for appeal.
5. Evaluate all design and function features to ensure that risk factors for all users—particularly for those with distinct characteristics and capabilities—have been mitigated before products or features are released to the public.
SbD Principle 3: Transparency and accountability. Transparency and accountability are hallmarks of a robust approach to safety. They not only provide assurances that services are operating according to their published safety objectives, but also assist in educating and empowering users about steps they can take to address safety concerns. To enhance users’ trust, awareness and understanding of the role, and importance, of user safety:
1. Embed user safety considerations, training and practices into the roles, functions and working practices of all individuals who work with, for, or on behalf of the product or service.
2. Ensure that user-safety policies, terms and conditions, community standards and processes about user safety are visible, easy-to-find, regularly updated and easy to understand. Users should be periodically reminded of these policies and proactively notified of changes or updates through targeted in-service communications.
3. Carry out open engagement with a wide user-base, including experts and key stakeholders, on the development, interpretation and application of safety standards and their effectiveness or appropriateness.
4. Publish an annual assessment of reported abuses on the service, alongside the open publication of meaningful analysis of metrics such as abuse data and reports, the effectiveness of moderation efforts and the extent to which community standards and terms of service are being satisfied through enforcement metrics.
5. Commit to consistently innovate and invest in safety-enhancing technologies on an ongoing basis and collaborate and share with others safety-enhancing tools, best practices, processes and technologies.
Your app/platform/website/service is a force for good, right? I’ll assume so (if it’s not, you’re on the wrong side of us, and your time is up!) because generally developers, product managers, entrepreneurs and customer services are out there to add value and delight their customers. So you may not have planned to spend your scarce development effort not just on protecting your platform from cyber attack, but on protecting your users from other users and threat actors. It’s an unfortunate fact of the modern internet that bad actors are out there, and the chances are they will use your platform to attack other people.
So, your first question might be… what is it I need to safeguard my users against?
Great question, and there was a time when that mostly came down to removing profanity (bad language) and (perhaps) ensuring there was no abuse or harassment going on. Then came the widespread problem of Online Child Sexual Exploitation (OCSE) and (if you allow file exchange) the transfer and distribution of Child Sexual Abuse Material (CSAM).
But it doesn’t stop there. The recent Online Harms White Paper (OHWP) identified 29 harms and describes the list as “neither exhaustive nor fixed”.
…advocacy of self-harm
…extremist content and activity
…promotion of Female Genital Mutilation (FGM)
—etc (29 at the moment…)
But identifying online harms and coming up with mitigations is not your day job, so we thought we would make your life a lot easier. Step 1 was to create a place where you could see the full list of identified harms, each with (some sort of!) definition.
This is only a start of the resources we plan to provide to make your life easier in providing a great technology response.
We’re happy to work with you to provide mitigations for all of these. Some will be technology, some will be process and others may simply be a tweak in your policy, but we believe that knowing what you need to guard against is the first step in providing a response.
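The point above, that each harm calls for some mix of technology, process and policy, can be sketched as a simple mapping. The entries below are illustrative examples only, not a definitive classification of any harm.

```python
# Hypothetical mapping from harms to the kinds of mitigation they need.
# Entries are examples for illustration, not authoritative guidance.
MITIGATIONS = {
    "CSAM distribution": ["technology"],                     # e.g. hash-matching
    "Cyber bullying and trolling": ["technology", "process"],
    "Children accessing pornography": ["technology", "policy"],
    "Harassment & cyberstalking": ["process", "policy"],
}

def mitigation_plan(harms):
    """Return the sorted set of mitigation types a service needs
    for a given list of harms; unknown harms default to a policy review."""
    needed = set()
    for harm in harms:
        needed.update(MITIGATIONS.get(harm, ["policy"]))
    return sorted(needed)
```

Even this toy version makes the first step visible: knowing which harms apply to your service tells you which kinds of response you need to plan for.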
We’re passionate about making the internet a safer place for people, but we’re not in the business of Social Media, Chat or Content. But what if you are?
We have pulled together what we think is the most complete and current catalogue of Content Moderation companies on the internet. So if you’re a developer and looking for a service to make your site or app a safer place, then head over to the Content Moderation app store and let us know what you think.
(1) The OHWP lays out the direction for Government (and hence points towards the sort of regulatory things that might impact businesses) and
(2) It has the input of a decent number of people with experience in this area, and so acts as a good reference point.
But there’s one area that we found particularly important to look at, and that was the categorisation of Online Harms. There’s a couple of take away points here that we think should be raised:
It recognises that the list is not exhaustive and is expected to evolve over time
It de-scopes some things that, whilst businesses may still need to protect against them, the OHWP considers out of scope.
The first is the most important we think. For us, it is a ringing endorsement of our approach and thinking. Many other companies have taken (either deliberately or as a result of their legacy) an approach that focuses on CSEA or CSAM and may have built some other protection mechanisms in. We have always started from the position of understanding ALL online harms and focusing on protection against all.
This government recognition of the full suite of harms (and that it is constantly evolving) helps you start thinking about your Online Harm protection plan. Time to stop thinking just about Safeguarding or CSEA, but thinking more holistically about Online Harms.
So what are these harms?
Online CSEA
Terrorist content
Illegal upload from prisons
Gang culture and incitement to serious violence
Sale of illegal goods & services e.g. drugs & weapons
Organised immigration crime
Modern slavery
Extreme pornography
Revenge pornography
Hate Crime
Cyber bullying and trolling
Advocacy of self-harm
Encouraging or assisting suicide
Sexting of indecent images by under 18s
Self-Generated Indecent Imagery (SGII)
Online dis-information
Harassment & cyberstalking
Intimidation
Extremist content and activity
Coercive behaviour
Violent Content
Promotion of FGM
Children accessing pornography
Children accessing inappropriate material
Online manipulation
Interference with legal proceedings
Clearly, it’s unlikely that you will have a plan, or measures in place, to address all of these. If you want some help making head or tail of this, get in touch.
The internet is an incredible place, and has brought immeasurable good into the world, but it’s no secret that it’s brought a great deal of harm too. Just as the variety of online benefits increases every day, so do the online harms. Even as we write, the UK government is set to release its Online Harms White Paper (OHWP), in which we hope it will enumerate the harms it considers should be tackled. From our research, there appears to be no widely accepted listing of harms, which makes it difficult to tackle them head on. The best we have found so far comes from Ofcom (more on that later).
So, as you may expect, we’re delighted that there are organisations that are providing real, quality advice on measures that companies can take to keep their platform safe.
There’s been a long, winding journey for the European legislation on Copyright, culminating today with a vote passing it.
Most notably, Article 13 is likely to impact Platform and Service providers seeking to keep their Platform clean.
Article 13 holds larger technology companies responsible for material posted without a copyright licence. It says that content-sharing services must license copyright-protected material from the rights holders, or they could be held liable unless:
it made “best efforts” to get permission from the copyright holder
it made “best efforts” to ensure that material specified by rights holders was not made available
it acted quickly to remove any infringing material of which it was made aware
Platform providers (especially those with User Generated Content (UGC)) will recognise, and may be comfortable with, such clauses. They are similar to best-practice approaches for protecting users online from harmful content.
If you’re looking for a Content Moderation Company (even one that can help you with the EU copyright Directive) head over to the Content Moderation Marketplace and find one that works for you.
We often have conversations with small platform and service providers that are starting to think about making their service a safer place for their users. This applies to almost all types of website and internet based services, but in particular those that provide some social media function, some chat platform or those with User Generated Content (UGC).
One of the first things that is mentioned is the lack of clear guidance available on what the service or platform’s responsibilities are and what steps they should be taking.
Whilst this is targeted towards protecting children online, it is sound advice for any platform or service seeking to protect users online.
We’ll unpack more on this in later posts, but if you’re looking at their advice (for example):
…use tools such as search algorithms to look for slang words typically used by children and young people, and to identify children under 13 who may have lied about their age at registration
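The quoted advice can be sketched as a simple keyword scan. Everything here is a hedged assumption: the slang list, the threshold, and the function name are invented for illustration, and a real system would use curated, regularly updated term lists alongside other age-assurance signals rather than keywords alone.

```python
# Illustrative sketch of scanning user text for terms that suggest an
# under-13 user who may have lied about their age at registration.
# The term list and threshold are purely hypothetical.
SLANG_TERMS = {"skool", "bday", "yr 6", "my mum says"}

def looks_underage(messages, threshold=2):
    """Flag an account for human review if enough child-typical
    terms appear across its messages."""
    hits = 0
    for msg in messages:
        text = msg.lower()
        hits += sum(1 for term in SLANG_TERMS if term in text)
    return hits >= threshold
```

Note the outcome is a flag for review, not an automatic ban: keyword matches are noisy, so the final decision should always involve your moderation process.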