Today is a significant day: it sees the announcement of a piece of legislation that child online safety campaigners have long been waiting for – the Online Harms Bill. It has taken five years since the law was first promised to reach this stage, and many children have experienced preventable online harms in that time. It is being heralded as the first ever comprehensive system of legal internet regulation, but many of its component parts are already in use in other countries. How effective will they be here?
What it does
The draft Bill extends the remit of Ofcom, which will act as the new online regulator with the power to fine companies up to £18m or 10% of their annual global turnover (whichever is higher) if they fail to take down harmful content. Similar powers already exist under some legislation in the USA and have not yet been effective at significantly changing social media companies’ behaviour. Whether they create change in the UK will depend on how proactive a regulator Ofcom is prepared to be.
Ofcom will also have a new power to block access to sites that fail to comply with the law. This power is closer to one seen, and working effectively, in Canadian legislation, and it could be a very important tool IF Ofcom is prepared to use it proactively – imagine the size of the lobby that would mobilise against a threat to block access to Snapchat, for example. Its effectiveness will depend on whether Ofcom is independent and radically robust in exercising its new authority. The biggest offenders under the new law are highly likely to be the social media giants. Will Ofcom be prepared to act against them?
Companies will be under a new duty of care to take action not only against dangerous content but also against content that is lawful but harmful, such as information about suicide and self-harm.
What it does not do
The Bill contains the threat of criminal action against senior managers if tech companies fail to live up to their responsibilities, with the new rules to be reviewed every two years, but it does not activate this threat at this stage. We have already seen many cases internationally where companies are so large that they are financially prepared to absorb fines rather than change practices in ways that would more substantially damage their income. Without this accountability being used more proactively, it is likely they will continue to do so.
It does not create a legal right for an individual child who has been grossly harmed online to hold to account a tech company whose negligence may have facilitated their abuse.
It places a responsibility on tech firms to tackle fraudulent user-generated content, including financial fraud such as romance scams and fake investment opportunities, but arguably does not create the same level of responsibility in relation to child sexual abuse.
What happens now
The Bill will be introduced to the UK Parliament today but may then have a relatively slow passage into law as amendments are debated through its readings. In Scotland, we need to educate as many people as possible – including children – on the detail of the Bill, so that they can express their views on whether it will do enough to protect them, and then communicate those views to the Scottish MPs who sit in the UK Parliament to speak on their behalf.