
Artificial Intelligence: Ethics Versus Public Policy

By Moshe Y. Vardi

The computing field is currently experiencing an image crisis. In 2017, Wall Street Journal columnist Peggy Noonan described Silicon Valley executives as “moral Martians who operate on some weird new postmodern ethical wavelength” [4]. Niall Ferguson—a historian at Stanford University’s Hoover Institution—defined cyberspace as “cyberia, a dark and lawless realm where malevolent actors range” [7]. The following year, Salesforce CEO Marc Benioff declared that a “crisis of trust” affects data privacy and cybersecurity.

Many people view this situation as a crisis of ethics. In October 2018, The New York Times reported that “[s]ome think chief ethics officers could help technology companies navigate political and social questions” [6]. Numerous academic institutions are hurriedly launching new courses on computing, ethics, and society. Others are taking broader initiatives and integrating ethics across their computing curricula. The ongoing narrative implies that (i) a deficit of ethics ails the modern technology community and (ii) an injection of ethics is the remedy.

The prospect of increasingly powerful artificial intelligence (AI), which has marched from milestone to milestone over the past decade, is of particular concern. In recent years, many challenging problems—such as machine vision and natural language processing—have proven amenable to machine learning (ML) in general and deep learning in particular. Growing concerns about AI have only intensified the ethics narrative. For example, the Vatican’s Rome Call for AI Ethics has found support from a wide range of organizations, including tech companies. Multiple tech companies are also involved with the Partnership on AI, which was established “to study and formulate best practices on AI technologies, to advance the public’s understanding of AI, and to serve as an open platform for discussion and engagement about AI and its influences on people and society” [5]. Facebook (now Meta) has donated millions of U.S. dollars to establish a new Institute for Ethics in Artificial Intelligence at the Technical University of Munich, since “ensuring the responsible and thoughtful use of AI is foundational to everything we do” [1]. Google announced that it “is committed to making progress in the responsible development of AI.”

[Figure: Big tech is watching you, but who is watching big tech? Public domain image.]

Nevertheless, the problem with present-day computing lies not with AI technology per se, but with its current use in the computing industry. AI is the fundamental technology behind “surveillance capitalism,” which Shoshana Zuboff defines as an economic system that centers on the commodification of personal data with the core purpose of profit-making [9]. Under the mantra of “information wants to be free,” several tech companies have become advertising companies and perfected the technology behind micro-targeted advertising, which matches ads with individual preferences. Zuboff has argued eloquently about the societal risks of surveillance capitalism. “We can have democracy, or we can have a surveillance society,” she wrote in a 2021 article in The New York Times, “but we cannot have both” [10]. Internet companies effectively harvest the grains of information that we share, then use them to construct heaps of data about us. Similarly, the grains of influence that internet companies provide yield a mound of influence of which we are unaware, as evidenced by the Cambridge Analytica scandal [2]. ML enables these outcomes by mapping user profiles to advertisements. AI also moderates content for social media users with a primary goal of maximizing engagement and—as a consequence—advertising revenues.

The AI-ethics narrative thus leaves me deeply skeptical. It is not that I am against ethics; instead, I am dubious of the diagnosis and remedy. By way of analogy, consider the Ford Model T: the first mass-produced and mass-consumed automobile. The Model T went into production in 1908 and jump-started the automobile age. But with the automobile came automobile crashes, which now kill more than one million people worldwide each year. Nevertheless, the fatality rate has been decreasing over the past 100 years thanks to improved road and vehicle safety, driver licensing, drunk-driving laws, and the like. The solution to automobile crashes is not ethics training for drivers; it is public policy, which makes transportation safety a public priority. I share this ethics skepticism with Dutch philosopher Ben Wagner, who wrote that “[m]uch of the debate about ethics seems to provide an easy alternative to government regulation” [8].

At the same time, I do believe that surveillance capitalism—while perfectly legal and enormously profitable—is unethical. For example, the Association for Computing Machinery’s (ACM) Code of Ethics and Professional Conduct starts with “[c]omputing professionals’ actions change the world. To act responsibly, they should reflect upon the wider impacts of their work, consistently supporting the public good.” It would be extremely difficult to argue that surveillance capitalism supports the public good. The strain between a legal, profitable, and arguably unethical business model on the one hand and a façade of ethical behavior on the other creates unsustainable tension within some tech companies. In December 2020, computer scientist Timnit Gebru found herself at the center of a public controversy that stemmed from her abrupt and contentious departure from Google as technical co-lead of the Ethical Artificial Intelligence Team; higher management had requested that she either withdraw an as-yet-unpublished paper that detailed multiple risks and biases of large language models, or remove the names of all Google co-authors. In the aftermath of Gebru’s dismissal, Google fired Margaret Mitchell, another top researcher on its AI ethics team. In response to these firings, the ACM Conference for Fairness, Accountability, and Transparency suspended its sponsorship relationship with Google, stating briefly that “having Google as a sponsor for the 2021 conference would not be in the best interests of the community” [3].

The biggest problem that computing faces today is not that AI technology is unethical—though machine bias is a serious issue—but that large and powerful corporations use AI technology to support a business model that is arguably unethical. The computing community must address its relationship with surveillance capitalism corporations. For example, the ACM’s A.M. Turing Award—the highest award in computing—is now accompanied by a prize of $1 million that is supported by Google. Yet relationships with tech companies are not the only quandary. We must also consider the way in which society views officers and technical leaders within these companies. Holding community members accountable for the decisions of the institutions that they lead raises serious questions. The time has come for difficult and nuanced conversations about responsible computing, ethics, corporate behavior, and professional responsibility.

That being said, it is unreasonable to expect for-profit corporations to avoid profitable and legal business models. Ethics cannot be the remedy for surveillance capitalism. If society finds the surveillance business model offensive, the remedy should be public policy—in the form of laws and regulations—rather than ethical outrage. Of course, we cannot divorce public policy from ethics. For instance, we ban human organ trading because we find it ethically repugnant, but the ban is enforced via public policy rather than ethical debate.

The information technology (IT) industry has successfully lobbied for decades against attempts to legislate and regulate it, under the mantra that “regulation stifles innovation.” Of course, regulation may stifle innovation. In fact, the whole point of regulation is to stifle certain types of innovation: precisely the kind that public policy wishes to stifle. At the same time, regulation also encourages innovation. Automobile regulation, for example, undoubtedly increased automobile safety and fuel efficiency. Regulation can be a blunt instrument and must be wielded carefully; otherwise, it may discourage innovation in unpredictable ways. Public policy is hard, but it is better than anarchy.

Do we need ethics? Of course! But the current situation is a crisis of public policy, not a crisis of ethics.


References
[1] Candela, J.Q. (2019, January 20). Facebook and the Technical University of Munich announce new independent TUM Institute for Ethics in Artificial Intelligence. Meta. Retrieved from https://about.fb.com/news/2019/01/tum-institute-for-ethics-in-ai.
[2] Confessore, N. (2018, April 4). Cambridge Analytica and Facebook: The scandal and the fallout so far. The New York Times. Retrieved from https://www.nytimes.com/2018/04/04/us/politics/cambridge-analytica-scandal-fallout.html.
[3] Johnson, K. (2021, March 2). AI ethics research conference suspends Google sponsorship. VentureBeat. Retrieved from https://venturebeat.com/2021/03/02/ai-ethics-research-conference-suspends-google-sponsorship.
[4] Noonan, P. (2017, October 5). The culture of death—and of disdain. The Wall Street Journal. Retrieved from https://www.wsj.com/articles/the-culture-of-deathand-of-disdain-1507244198.
[5] Partnership on AI. (2019). Report on algorithmic risk assessment tools in the U.S. criminal justice system. Retrieved from https://partnershiponai.org/paper/report-on-machine-learning-in-risk-assessment-tools-in-the-u-s-criminal-justice-system.
[6] Swisher, K. (2018, October 21). Who will teach Silicon Valley to be ethical? The New York Times. Retrieved from https://www.nytimes.com/2018/10/21/opinion/who-will-teach-silicon-valley-to-be-ethical.html.
[7] Vardi, M.Y. (2019). Are we having an ethical crisis in computing? Comm. ACM, 62(1), 7.
[8] Wagner, B. (2018). Ethics as an escape from regulation: From “ethics-washing” to ethics-shopping? In Being profiled: Cogitas ergo sum. Amsterdam, Netherlands: Amsterdam University Press.
[9] Zuboff, S. (2019). The age of surveillance capitalism: The fight for a human future at the new frontier of power. New York, NY: Public Affairs.
[10] Zuboff, S. (2021, January 29). The coup we are not talking about. The New York Times. Retrieved from https://www.nytimes.com/2021/01/29/opinion/sunday/facebook-surveillance-society-technology.html.

Moshe Y. Vardi is University Professor and the Karen Ostrum George Distinguished Service Professor in Computational Engineering at Rice University, where he leads an initiative on technology, culture, and society.
