In 2018, Google removed the iconic mantra “Don’t be evil” from its official code of conduct. It had been there since 2000.
To be fair, the mantra does appear at the very bottom of the document, followed by an appeal to speak up if something isn’t right. The removal of the mantra from its original place in the code of conduct is, however, indicative of an overall trend in Big Tech.
It comes as no surprise, then, that Google faced public scorn and outrage for ousting AI ethicist Timnit Gebru in 2020. The problem that big companies have with ethics teams is, it seems, that there’s always the possibility that ethicists might do their job and speak out when they see things going in the wrong direction.
This isn’t a rant about Big Tech and its occasional (or constant — you decide) disservice to the greater good, however.
Ethical conduct shouldn’t rest solely on the shoulders of some in-house ethics team, or depend on lawmakers who often don’t have detailed knowledge about what’s feasible in tech.
Ethics should, and must, be part of every company’s culture, from small freelance developers to Big Tech companies. Ethics should also be in the back, or front, of every engineer’s mind, from junior developers to C-suite executives.
That doesn’t mean that ethics is an individual responsibility; it’s definitely a collective one.
But every developer will profit from reflecting on their own conduct from time to time. It may or may not help their career, salary, or company. But it will help them when they look in the mirror.
I’m not a priest, but I do know that most software engineers have some sort of rulebook — just like most priests have a Bible, Quran, or other work of reference. In the United States, the largest professional societies for engineers, the National Society of Professional Engineers (NSPE) and the Association for Computing Machinery/Institute of Electrical and Electronics Engineers Computer Society (ACM/IEEE-CS), have professional codes of ethics.
These are not meant to be complete rulebooks or exhaustive checklists of how to behave. As in the Bible or the Quran, the ethical advice can be interpreted in myriad ways. It’s up to the judgment and good faith of every engineer to behave the right way.
Ethics, from its Greek root ἦθος (pronounced ethos), is the study of how best to live. This doesn’t mean “try getting up at 4 a.m. every morning, work out, drink some green juice, and hustle until you’re making millions.”
In contrast to today’s highly individualistic advice, scholars from ancient Greece to India have sought principles that serve the greater good. Because when most people serve the greater good, everybody has a better life overall. We live in a society, after all.
Such lines of thinking, paradoxically, might be more important now than ever.
Directly or indirectly, ethics has a close relationship to software. Both interact with humans and are centered on them.
Tables, chairs, and house plants also interact with humans, some readers might argue. That’s true — but unlike chairs, software is extremely easy to scale and redistribute.
A chair needs to be produced first, which involves material processing, assembly, and logistics. If the market demands more chairs, it can take months of labor to meet that demand. Compared to software, the supply of chairs is always somewhat limited.
Ethically speaking, that’s a good thing. If you buy a faulty chair, you might be angry when it breaks into pieces and you hurt your rear end. However, it’s quite unlikely that this chair will wreak havoc on the world’s economy, given that probably only a few tens or hundreds of thousands of these chairs exist. And if you complain, the company might be able to give you a refund.
Compare this to software. The tiniest fault in an otherwise impeccable piece of corporate software might lead hackers into temptation.
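To make concrete just how tiny such a fault can be, here’s a hypothetical Python sketch (the table, the data, and the function names are all invented for illustration). The only difference between the vulnerable version and the safe one is how a single query string is built:

```python
import sqlite3

# A tiny in-memory database standing in for the "impeccable corporate software".
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, ssn TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', '123-45-6789')")
conn.execute("INSERT INTO users VALUES ('bob', '987-65-4321')")

def find_user_vulnerable(name: str) -> list:
    # The tiny fault: string interpolation lets user input become
    # part of the SQL statement itself.
    return conn.execute(
        f"SELECT name, ssn FROM users WHERE name = '{name}'"
    ).fetchall()

def find_user_safe(name: str) -> list:
    # The fix: a parameterized query treats user input strictly as data.
    return conn.execute(
        "SELECT name, ssn FROM users WHERE name = ?", (name,)
    ).fetchall()

# A classic injection payload turns "look up one user" into "dump them all".
payload = "' OR '1'='1"
print(find_user_vulnerable(payload))  # leaks every row, SSNs included
print(find_user_safe(payload))        # [] (no user has that literal name)
```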
In that case, you won’t be complaining about a sore rear end. A big data breach could turn your whole life upside down: your credit score might tumble, or someone might steal your identity.
And you’re not the only one who’s in danger. Software is available to anyone with a phone and an internet connection — that’s 5 billion people at the time of writing this piece.
Of course, this doesn’t mean that software is inherently dangerous. But the sheer breadth and scale of potential damage should wake up every software engineer.
It’s the responsibility of those who create software to avoid bad outcomes and encourage good ones. If you’re working on killer drones or on surveillance software for totalitarian states, I won’t judge you. But I encourage you to be honest and ask yourself whether this is the career you really want.
If you’re a system administrator who keeps putting off a new software update on the database where sensitive client data is stored, I won’t judge you either. You might be under incredible pressure to deliver other results, or you might just be trying to survive in a toxic work environment.
Furthermore, engineers shouldn’t just try to avoid harm with the products they’re building. Making the TikTok recommendation algorithm even more addictive won’t upend livelihoods. But it also won’t benefit the users, or the people around them who might crave some face-to-face interaction with their fellow zombies.
Instead, engineers should build products that do good.
Since the beginning of the war in Ukraine, I’ve been working together with volunteers to create a Telegram bot to help people find safe spaces and flee the country. While it’s not perfect, and also not for profit, I would argue that this product does good for the general public.
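For the curious, here’s a minimal sketch of what the core of such a bot might look like. I’m assuming the python-telegram-bot library (v20+) here; the /shelter command, the placeholder data, and the reply texts are invented for illustration, not taken from the actual bot:

```python
# Minimal sketch of a crisis-assistance Telegram bot, assuming the
# python-telegram-bot library (v20+). The command name, the data, and
# the replies are illustrative placeholders, not the real bot's code.
from telegram import Update
from telegram.ext import ApplicationBuilder, CommandHandler, ContextTypes

# Placeholder data; a real bot would query a curated, regularly
# verified database instead of a hard-coded dict.
SHELTERS = {
    "lviv": "Shelter at Example St. 12, open 24/7",
    "kyiv": "Shelter at Example Ave. 3, open 24/7",
}

async def shelter(update: Update, context: ContextTypes.DEFAULT_TYPE) -> None:
    """Reply to `/shelter <city>` with a known safe space in that city."""
    city = " ".join(context.args or []).lower()
    info = SHELTERS.get(city)
    if info:
        await update.message.reply_text(info)
    else:
        await update.message.reply_text(
            "No verified shelter listed for that city yet. "
            "Please try the nearest larger city."
        )

if __name__ == "__main__":
    app = ApplicationBuilder().token("YOUR_BOT_TOKEN").build()
    app.add_handler(CommandHandler("shelter", shelter))
    app.run_polling()  # blocks and handles incoming updates
```

The hard part, of course, isn’t the handler; it’s keeping the data behind it verified and up to date when the situation on the ground changes by the hour.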
What does put me in a moral dilemma, however, is the fact that I have been very eager to build solutions for a war that is essentially in my neighborhood (I live in France), yet I have had much less desire to build something for other conflict zones like Mali or Afghanistan. Part of this stems from the fact that I don’t know very much about these countries, their culture, and their habits when it comes to software usage.
Nevertheless, these are not insurmountable challenges. And while I don’t see myself working for non-profits as a full-time career, being able to do this on the side is a good feeling, both morally and on a gut level.
Other great examples of software for good are medical technologies like surgical robots or X-ray image processing, or apps that help people reduce their carbon footprint or the amount of plastic waste they produce. And there are many more.
This is not about becoming some kind of messiah and doing only good stuff.
Killer drones might save lives if deployed correctly. Surveillance software, correctly used, might prevent terrorist attacks.
It all depends on the context. And every engineer’s moral compass.
Sometimes it can feel like we’re living in increasingly amoral times, in which individual success is valued ten times more than actions for the public good. However, the fact that almost every company — including Big Tech! — includes in its code of conduct some encouragement to do the right thing is a glimmer of hope.
Happy coding, and trust your gut instinct.