The world’s most powerful technologies are already in boardrooms, on battlefields, and in billionaires’ pockets.

This should be an enormous concern to all of us. Ever since studying philosophy at university, I've worried that tech ethics is a very long way behind the curve.

Now, I fear, it's all too late.

The technologies I have in mind are those like AI, blockchain, quantum computers, neurotechnology and biotechnology – those which have the potential to radically reshape our world, concentrate power, or outpace our ability to fully grasp their impact.

Among critics, I see a shared narrative that the best route is to walk away from potentially dangerous innovation. And I get that. I'm extremely concerned about numerous applications of such tools. Many are already being realised today.

But I think it's a mistake to respond this way. I don't pretend that there isn't a great deal of nuance, nor that there aren't extremely compelling rebuttals possible, especially for specific use cases.

Nevertheless, I'm making the case that we need to stay engaged, both ethically and practically.

The case for rejection

When we have grave concerns about a technology, deciding to stay out of it feels like the principled choice.

I've certainly felt this way and see many others who do, too.

AI can be discriminatory and dangerous; so do I really want to consider working in AI companies?

Blockchain enables exploitation and unsustainable practices, so would I really trade crypto?

Is the mere possibility of a dystopian future caused by the progress of these emerging technologies enough to force me to reject them?

After all, their scope for harm is staggering. So, let's briefly state some of the cases against them. I'm going to pick on the two garnering the bulk of recent attention: AI and blockchain.

The (abridged) case against AI

AI is already a hotbed for misuse and bias. Here are some examples.

  • Mass surveillance: It's enabling authoritarian systems, like China’s facial recognition.
  • Exploitation: Deepfakes enable harassment, abuse, and political disinformation.
  • Bias: There's already a long track record of AI amplifying prejudice and perpetuating systemic injustices. Just weeks ago, an AI system used in the UK benefits system was found to be discriminatory and biased, and it is far from an isolated case.
  • Black boxes: Neural networks and other deep learning models make critical decisions, such as loan approvals or medical diagnoses, that we cannot explain.
  • Energy consumption: It consumes huge amounts of energy. Google and Microsoft are amongst those commissioning nuclear reactors to power AI.
  • Job loss: AI threatens many jobs, and it isn't obvious how we'd reskill people.

Beyond these immediate harms, there's the potentially existential threat. If there is even a small chance we could create what renowned physicist Max Tegmark calls “super-capable, amoral psychopaths” (fascinating video if you're interested!) in the form of AI, we should absolutely have alarm bells ringing. After all, humans are more than capable of subjugating both animals and other humans. Is it so unlikely our creations will "think" the way we do, either by accident or by design?

Three Mile Island. Credit: Bradley C Bower/AP

The (abridged) case against blockchain

To start with, blockchain has the potential to destabilise global financial systems. Decentralising power sounds sexy, but feels rather 'be careful what you wish for'. For all their flaws, sovereign nations should control monetary policy. (There's a huge, interesting topic here, but I'll swerve it for now!)

Of course, billionaires have been exploiting this for some time. The promise of a financial system free from government control? Sounds a great way to shift power from elected institutions to the ultra-wealthy, who can manipulate markets with a single tw*et. Funny, can't think of an obvious example here...

A Russian Bitcoin mining farm. Credit: Andrey Rudakov, Bloomberg via Getty Images

We could make similar cases against other emerging tech, including quantum computers, neurotechnology and biotechnology, but I think the point is adequately made.

There are so many documented harms, many of which make for horrifying reading.

The risks are real, immediate, and demand attention.

The case against rejection

It feels powerful and simple to say "no".

That narrative is straightforward: These tools are products of systems driven by profit and power. We say “no” to prioritise people over progress, to refuse to perpetuate harm, and to reject innovation for innovation's sake when it comes at the cost of human rights, the environment, or global stability. There's a hope that rejection will slow their development or diminish their impact.

And yes – I broadly agree, and worry, that the powerful have a hugely disproportionate ability to exploit these technologies, and that they are inherently dangerous in human hands.

Rejecting the tech feels like a form of neutrality, or of protest. Rejection can also take the form of organising against the tech.

But I'd argue this kind of thinking is shortsighted.

Stepping away comes with two huge costs:

1. We abandon our influence

By stepping away, we leave control to those prioritising profit, exploitation, or convenience.

These tools will be built, deployed, and scaled with or without our input.

Without our engagement, their flaws are more likely to go – and grow – unchecked.

The counter-argument here probably goes a bit like this: by participating, we're legitimising and accelerating it. That's true even with the best intentions. And humans, best intentions or not, are fallible.

But, in fact, participation might be the only way to manage its risks. We'd be in a bad place if everyone with concerns chose not to engage. Being inside means we can blow the whistle when we see problems, build safeguards, enforce ethical standards, mitigate harm and harness potential benefits.

NVIDIA powers a great deal of both amoral and immoral AI, but it has also created a toolkit called NeMo Guardrails to help developers ensure their AI models operate safely and ethically. The idea is to embed ethics by design, for example by preventing algorithmic bias and misuse. By being part of the development process, NVIDIA influenced how AI is built, both inside and outside the company.
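To make that concrete, here's a minimal sketch of what wiring a model through NeMo Guardrails looks like, following the library's documented pattern. The config directory, its contents and the example prompt are placeholders of my own, not anything NVIDIA ships:

```python
# A minimal sketch of wrapping an LLM with NeMo Guardrails.
# Assumes a ./config directory containing a config.yml plus Colang rail
# definitions (topics to refuse, moderation checks, etc.) -- the paths and
# guardrail content here are illustrative placeholders, not NVIDIA defaults.
from nemoguardrails import LLMRails, RailsConfig

config = RailsConfig.from_path("./config")  # load the guardrail definitions
rails = LLMRails(config)                    # wrap the underlying model

# Prompts and responses now pass through the configured rails, so
# disallowed topics or unsafe outputs can be blocked or rewritten.
response = rails.generate(messages=[
    {"role": "user", "content": "Should this loan application be approved?"}
])
print(response["content"])
```

The interesting part isn't the handful of lines; it's that the safety policy lives in version-controlled configuration alongside the model, rather than in a policy document nobody reads.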

When we take an active role in ensuring our tech is both functional and ethical, we reduce risks that might otherwise be overlooked in our collective rush to innovate.

2. We miss opportunities

We're also missing enormous opportunities to use tech for good.

We can...

  • build tools that directly try to mitigate harms caused by the very same technologies
  • build new tools for good.

A family friend of mine is helping to use AI to prevent vision loss for people with diabetes. XPRIZE offer prestigious prizes to people using AI and quantum computing to solve global challenges from water scarcity to biodiversity loss. The World Food Programme (WFP) uses blockchain for its Building Blocks project to distribute aid to refugees.

Aid delivered through Building Blocks. Credit: World Food Programme

There are so many possible use cases for good. My impression – and it is only an impression – is that they are grossly underfunded in comparison with tech that can harm. Of quantum computing, XPRIZE say:

"Relatively few companies and university researchers are focused on translating quantum algorithms into real-world application scenarios and assessing their feasibility to address global challenges once sufficiently powerful hardware is available" (Quantum for Real-World Impact)

But I still worry that the ‘tech for good’ narrative is just an idealistic distraction.

Are these just marginal benefits that can be used as PR cover while the larger systemic harms go unchecked? Might unintended harms arise even from efforts to do good?

I love this Vox article questioning whether big AI companies can ever be ethical.

It makes me wonder if we could have a scenario akin to big oil's greenwashing – cramming marketing materials with wind and solar while renewables made up a fraction of a percent of their output.

Misleading Shell advert banned by the ASA. Credit: Shell, via The Guardian

History is full of examples of human fallibility in innovation that caused unforeseen harm, from DDT pesticides to medical treatments like thalidomide.

But this doesn't mean we throw up our hands and give up. More and more thinkers are calling for involvement. In Homo Deus, for example, Yuval Noah Harari (author of Sapiens) pushes for responsible use of new technologies in ways that serve humanity, over avoidance, which risks them being hijacked by powerful forces.

We do this by making space for ethical foresight and risk-spotting. Looking back at history is a little dangerous here, since some of these new technologies have radically more scope for harm than anything that came before. Still, past major innovations, from vaccines to renewables, came with risks, and we tackled them head-on.

It's entirely probable that if we don’t invest in them, we’ll end up defenceless. Tech multiplies good and bad. Our challenge is maximising the good.

Engagement for sceptics

Potential harm and potential good aren't mutually exclusive. A technology can cause harm and drive progress.

We have to lean into the messiness of ethical compromise. There will be grey zones and nuance to work through.

We can reasonably make the case, too, that the same innovation that created these problems can drive their solutions. Since the problems already exist, we can argue we should use the tech to try to solve them. To borrow an example from blockchain tech, I've recently learned a bit about proof-of-stake, which cuts the carbon footprint of blockchains by removing the need for energy-intensive mining.
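For intuition, here's a toy sketch of the core idea, not any real chain's protocol: the next block's validator is chosen in proportion to the coins they have staked, rather than by winning an electricity-hungry race to solve hash puzzles.

```python
import random

# Hypothetical validators and the amounts they have staked (illustrative only).
stakes = {"alice": 40.0, "bob": 35.0, "carol": 25.0}

def pick_validator(stakes):
    """Select a validator with probability proportional to their stake."""
    names = list(stakes)
    weights = [stakes[name] for name in names]
    return random.choices(names, weights=weights, k=1)[0]

# No mining race: selection is a cheap weighted draw, which is why
# proof-of-stake chains use a tiny fraction of proof-of-work's energy.
print(pick_validator(stakes))  # e.g. "alice" roughly 40% of the time
```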

These kinds of small but meaningful steps show the potential for reducing harm. And these innovators are working within the system, not outside it.

I can't round off without acknowledging that engagement doesn't diminish what I see as a desperate need to get a handle on regulation, despite an overwhelming worry that it's all too late. Enforceable policies are essential to curbing some of the worst abuses of these tools. That's not to say regulation alone will be effective – some groups and governments will build and use these tools for nefarious purposes whatever we do. That should make us all the more resolute to have something to defend ourselves with. Nor do I underestimate the challenges regulation faces – I'm pessimistic that it can catch up in time, or strike the right balance: protecting people and the planet without stifling innovation or handing further power to monopolies. The topic really deserves a space of its own. Nevertheless, I would be remiss not to mention it.

Rounding up

On one hand, the risks posed by emerging technologies are enormous, increasingly immediate, and sometimes downright terrifying. On the other, walking away from them means forfeiting our chance to influence them for the better.

I’ve laid out my case for staying involved, but this is just one perspective.

Should we lean in, or say no? If you have thoughts, please feel welcome to share them. I'm open to changing my mind, too!

It seems to me that since the bus is moving, we need to be on it. We can't stop it, so there's little point standing on the kerb or throwing rocks. Better to see that it reaches the right destination.



Reading

It’s practically impossible to run a big AI company ethically – Article by Sigal Samuel on Vox.com. Eye-opening, clear thinking on the innate capitalist pressures that drive big AI companies down the ethical drain.

Risks of the AI arms race by David De Cremer in the Financial Times. An interesting case study on how competitive haste undermines safety.

Life 3.0 – Book by Max Tegmark. A favourite! I love Tegmark's work in general – this one is about the impact of AI on the future of life and how we might navigate this transformation. Available from The Guardian Bookshop