Book Report: Ethical Machines

Reid Blackman’s book on AI ethics provides a solid and practical overview of the field, but conspicuously avoids the trickier and more serious questions of technology ethics.

The ethics of Artificial Intelligence (AI) and Machine Learning (ML), and of internet technology in general, has become an increasingly lively topic in the last few years, particularly after the high-profile firing of Timnit Gebru from Google. The public demos of technologies like DALL-E, Stable Diffusion, and ChatGPT have added to the fervor of the discussion. On one hand, a generally left-leaning crowd is speaking as loudly as it can about the possible harms that unregulated AI can bring; on the other, data scientists, capitalists, and techno-futurists laud the technology and the benefits it can provide. I will not “both sides” this debate. Rather, it serves as a frame for what Reid Blackman’s recent text, Ethical Machines, leaves in and leaves out.

I’ve personally all but abandoned the topic of technology ethics, having spent a few years using my training as an aeronautical engineer, and later as a medical device researcher, to try to bring a framework-driven approach to the IT industry. I left the discussion because, frankly, I felt it was going nowhere. It has been a few years now since we showed that the Turkish o bir hemşire (whose pronoun o is gender-neutral) sometimes gets autotranslated to “she is a nurse” while o bir bilim adamı becomes “he is a scientist.” In 2023, anyone who cares about this is already aware of it, and anyone who’s aware of it and doesn’t care never will. The risk curve for these more fundamental examples of AI bias has stabilized. We’ve already seen previously “woke” CEOs make hard-right shifts.
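For anyone who wants to reproduce this probe, here is a minimal sketch. It assumes the Hugging Face transformers library and the open Helsinki-NLP/opus-mt-tr-en Marian model, which I use here only as a stand-in for whatever commercial translator one wants to test; results will vary by model and version.

```python
# Minimal sketch of the gender-bias translation probe described above.
# Assumes: pip install transformers sentencepiece
# The Helsinki-NLP/opus-mt-tr-en model is a stand-in for any
# Turkish-to-English system; outputs vary by model and version.
from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-tr-en")

# Turkish "o" is gender-neutral; English forces a pronoun choice,
# so any gendered output below is the model's own inference.
for sentence in ("o bir hemşire", "o bir bilim adamı"):
    result = translator(sentence)[0]["translation_text"]
    print(f"{sentence!r} -> {result!r}")
```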

The racism exhibited by many AI tools is shocking, but developers have gotten better at bottling up those output modes. It’s not perfect, not by any means, but it is also true that they have listened, and they are working to suppress or eliminate at least the most egregious examples. Whether that’s enough is a subject for a debate that we desperately need to have.

What I’ve found missing in these discussions is a structured framework for identifying and modeling ethical problems in the field of AI. The biggest fallacy I see is people asking whether something is “ethical,” as if there were a single universal ethical framework we all abide by. This is a trap; there are many ethical frameworks, some in tension with one another, which we apply day to day. The rules that a journalist holds themselves to are different from those of an activist. A doctor follows a different code than a lawyer. In medical research, we learned from humankind’s most tragic dances with the devil. Out of them came frameworks like the Nuremberg Code, the Declaration of Helsinki, and later the Belmont Report. These have provided us with concrete and clear guidelines for how to manage, criticize, and assess medical research from an ethical vantage point. While medical research abuse still moves us to outrage, we have tools and structures to handle such transgressions. It’s an imperfect system, but it is a system.

No such practical analogue exists in technology. Sure, some professional societies have ethical codes, but these are weak and rarely have teeth. What we need is a way to evaluate technology and AI ethics in a meaningful way. In medical research, we have the principles of beneficence, justice, and respect for persons. In AI, we can begin with bias, explainability, and privacy. This is precisely the structure that Blackman adopts in his work.

With this framework, Blackman takes aim at the fluffy nothingness of many companies’ ethics statements. Worse than meaningless, these statements provide no guidance on how to conduct AI development, what to avoid, or how to redress any ethical issues that may arise. To this end, Blackman’s framework at least gives constructive guidance.

The second reason I have stopped engaging in tech ethics discussions arises from nihilism. All the ethical codes and frameworks in the world won’t matter if they are never used. Ethical codes must have some bite. They must sometimes stop people from doing wrong things. They must sometimes close the valves on the flow of capital. This is the point that many are trying to make when speaking out against AI ethics as currently practiced. It is not only that the AI could be unfair, biased, privacy-violating, or opaque, but that there is nothing stopping companies from using, deploying, and profiting from it anyway. Something bad should happen to an AI developer who acts with willful, negligent, or reckless disregard for ethical norms. Not only does no such framework currently exist; none of our discussions about AI ethics seems willing to put one in place.

Blackman, like me, is a technology consultant with high-paying clients. He therefore studiously avoids this topic in his book, largely restricting the discussion of AI ethics to point treatments of point problems, not to establishing guiding AI ethics at a scale that shapes the technology toward a more positive and global vision of human advancement. I will not accuse him of ethics-washing, as he does not engage in it, but his text is a neoliberal treatment of the field: ethics issues are to be handled atomically, with at best high-level guidance from the executive team. His neoliberalism leaks into his privacy analysis in appalling ways. He does not seem willing to say that behaving ethically might sometimes mean leaving millions or billions of dollars on the table for the betterment of humankind. In medical research we have these discussions. Technology needs them, too. Blackman’s book came out before the public release of ChatGPT. In hindsight, what Blackman elides says as much as what he includes.

Posted: 29.01.2023

Updated: 24.04.2023