Can laws adapt to provide adequate protections from harmful content on the internet?

[Image: a turtle stuck on its back]

Local and international laws and processes are struggling to address the ubiquitous influence of technology and the realities of the online world we live in.

While changes are taking place to protect some vulnerable groups within countries, the problem is global and it's getting bigger and more complex.

People who produce and publish content can wield quite a lot of power. They do more than reflect societies. With the power of the internet, they can shape societies.

With the proliferation of publishing platforms, and the ease and speed of self-publishing, we have become acutely aware of how publishing power can influence our world, for better or worse.

Add the rise of artificial intelligence and ‘machine’ writers into the mix, and you can foresee that it will become harder and harder to distinguish fact from fiction and to identify ‘legitimate’ publishing sources.

Who should have these powers and freedoms is being hotly debated by governments, leaders, academics, platform owners, and communities.

It’s not a straightforward debate.

  1. Precisely when should the right to freedom of expression give way to societal protections?
  2. Who can realistically assert legal rights and get legal relief, when publishing is instant and crosses borders, and content is rapidly sharable and hard to track?
  3. How does the world come to grips with the fact that ultimate publishing power resides with media barons, big tech platform owners, and the algorithms they own?
  4. How could governments and legal systems safeguard us from offensive, unlawful content created by machines in unknown, non-physical jurisdictions?
  5. How will we be able to determine the origin of publishing sources and hold them to account in any realistic timeframe or meaningful way?

The Christchurch Terrorist Attacks

The NZ Government asked questions like these following the Christchurch terrorist attacks.

On 15 March 2019, a lone gunman entered two Christchurch mosques during Friday prayer and opened fire, killing 51 people and injuring 40 more. The terrorist livestreamed the first shooting on Facebook, and this footage was seen and shared by thousands of people around the world, including many school-aged children.

The livestreaming and sharing of this horrific footage led to the launch of the Christchurch Call, a global community of over 120 governments, tech companies, and civil society organisations acting together to eliminate terrorist and violent extremist content online. It’s an impressive ambition and one we must continue to strive for.

For the Christchurch Call Advisory Network (CCAN) and its community to bring about meaningful change would require true collaboration and partnership between states and the private sector.

Reducing the spread of extremist content online will require tech companies to disclose, and ultimately change, how their algorithms work.

That means a monumental shift in thinking, even a new era of responsible, aspirational leadership, where the needs of citizens and societies as a whole are valued over the pursuit of profit.

Comparisons can be drawn with how we tackle climate change, plastics and pollution, and other complex global problems.

Hate speech laws under review in New Zealand

The Christchurch terrorist attacks also triggered a governmental review of New Zealand’s hate speech laws.

In New Zealand, protections are afforded under the Human Rights Act 1993, which aims to give all people equal opportunities and to prevent unfair treatment on the basis of irrelevant personal characteristics.

Currently, under the Human Rights Act 1993, it is unlawful to publish or distribute threatening, abusive, or insulting words likely to ‘excite hostility against’ or ‘bring into contempt’ any group on the grounds of colour, race, ethnic or national origins. Only one prosecution for inciting racial disharmony has been successful under the Act.

This month, after several years of divisive debate and thousands of public submissions, the New Zealand Government found it could do little to change the law without unwanted or unintended consequences.

In the end, the NZ Government settled on one small change to the Human Rights Act that will extend existing protections to include religious beliefs.

The NZ Law Commission will advise on a further extension of the law to cover rainbow, gender-diverse, and disability communities.

(Other internet-based offences, such as cybercrime and online bullying, are covered by other legislation.)

What do you think?

Can laws adapt to provide adequate protections from harmful content on the internet, and if so, how?

Leave a comment below.