A government minister has threatened to ban social media firms if they don’t start censoring content.

The threat hasn’t come from China, which recently blocked Bing for a day, after already locking out Twitter, Google, and Facebook. It’s come from the UK. Speaking on the BBC’s Andrew Marr Show, a political interview program, Britain’s health secretary, Matt Hancock, warned social media firms that they would face legislation if they don’t do a better job of policing the posts made on their platforms.

The threat comes after a BBC investigation into the suicide of Molly Russell, a 14-year-old girl who died in 2017. After her death, her family reviewed the Instagram accounts she had been following. They found account after account depicting depression, self-harm, and suicide.

“I have no doubt that Instagram helped kill my daughter,” her father, Ian Russell, told the BBC.

It’s not the first time that a Facebook property has been accused of failing to act against dangerous content. In 2017, after a series of violent posts were uploaded to Facebook, Mark Zuckerberg published a statement about how the company could improve its services for the community. The solution, he argued, was to hire 3,000 more people for Facebook’s community operations team around the world “on top of the 4,500 we have today” to review the millions of reports the company receives every week.

That was a clumsy solution then, and two years later it doesn’t look any better. If a leading technology company’s solution to policing content is to hire more people to sit in cubicles for eight hours at a stretch deciding whether individual posts should stay or go, the whole industry has a problem.

That problem is scale. Social media sites initially relied on the community to police itself. They hoped that reports would be rare enough for a small team to judge flagged posts and boot out abusers. That’s a reasonable approach for a small site. It’s unworkable for a platform with more than a billion users.

It’s not that platforms have done nothing beyond throwing eyeballs at the problem. Instagram users who view self-harm content are directed to organizations that can help, and asked whether they want to continue browsing. Facebook has been using AI to try to identify posts that indicate suicide risk.

But it’s not enough. Platforms that know everything about their users can certainly do a better job of keeping harmful content away from underage viewers. They can program their algorithms to better detect abusive content and prevent it from being shown or shared.

The social media world has been so focused on pleasing advertisers and matching their messages to the right users that it has failed to live up to its responsibilities. It’s that failure that is now leading governments to threaten to restrict the platforms’ activities by law.

It’s also an opportunity for a new wave of social media platforms. Facebook usage has declined. The platform that started life on campuses is now shedding young users at a rate of nearly 3 million a year. Among older users, concerns about privacy and data sharing have led many to delete its apps from their phones and cut back on visits.

There is now a demand for a social media platform that’s responsible enough to use technology, including AI, to police content. That platform would also protect privacy and be transparent about its use of data, even if doing so forces advertisers to produce better ad copy instead of focusing on targeting. And it would regard its users not as numbers but as a community that needs to be protected as well as nurtured.

If Facebook doesn’t build itself into that social media platform, it may well find that someone else builds it instead.