Facebook, which was founded in 2004, grew to 100 million members in just four and a half years. The rate and scope of its expansion were unparalleled. Before anyone had a chance to appreciate the difficulties the social media network could pose, it had grown into an entrenched juggernaut.
In 2015, the Cambridge Analytica scandal highlighted the platform’s role in compromising citizens’ privacy and its potential for political influence. Around the same time, in Myanmar, social media amplified misinformation and calls for violence against the Rohingya, an ethnic minority in the country, culminating in a genocide that began in 2016.
The Wall Street Journal reported in 2021 that Instagram, which Facebook acquired in 2012, had conducted internal research indicating that the app was harmful to the mental health of teenage girls.
Facebook’s defenders argue that these consequences were unforeseeable. Critics argue that instead of moving fast and breaking things, social media companies should have planned ahead to avoid ethical disaster. However, all sides agree that emerging technologies have the potential to create ethical nightmares, which should make corporate executives, and society, extremely concerned.
We are at the start of a new technological revolution, this time with generative AI: models that can generate text, images, and other data. OpenAI’s ChatGPT reached 100 million users in just two months. Microsoft delivered a ChatGPT-powered Bing within months of ChatGPT’s release; Google demonstrated its latest large language model (LLM), Bard; and Meta published LLaMA. GPT-5 will most likely be here before we realize it.
Moreover, unlike social media, which remains heavily centralized, this technology is already in the hands of thousands of people. Stanford researchers built a ChatGPT-like model for around $600 and released it, Alpaca, as open source. More than 2,400 people had created their own versions by early April.
While generative AI is the focus of our attention right now, other technologies on the horizon promise to be equally disruptive. Quantum computing will make today’s data crunching look like kindergartners counting on their fingers. Blockchain technologies are being developed for uses beyond cryptocurrencies. Augmented and virtual reality, robotics, gene editing, and a slew of other technologies have the potential to transform the world for better or worse.
If history is any guide, the corporations that bring new technologies into the world will take a “let’s just see how this goes” attitude. History also indicates that this will be detrimental to the unwitting test subjects: the general public. It’s difficult not to worry that, in addition to the benefits they’ll provide, technological advances will bring with them a slew of societal-level consequences that we’ll spend the next 20 years attempting to repair.
It’s time to try something fresh. Companies developing these technologies must ask: “How do we develop, apply, and monitor them in ways that avoid worst-case scenarios?” Companies that acquire and, in certain circumstances, tailor these technologies (as corporations are doing now with ChatGPT) face an equally formidable question: “How do we design and deploy them in a way that keeps people (and our brand) safe?”
In this article, I shall attempt to persuade you of three points: First, organizations must explicitly recognize the ethical hazards posed by these new technologies, or, better yet, possible ethical nightmares. Ethical nightmares are not arbitrary. Systemic invasions of privacy, the spread of democracy-undermining falsehoods, and the delivery of inappropriate content to children are all on everyone’s “that’s terrible” list. I don’t care where your firm lies on the political spectrum — whether it’s Patagonia or Hobby Lobby — these are our ethical nightmares.
Second, because of how these technologies work — what makes them tick — the potential of ethical and reputational problems has skyrocketed.
Third, business leaders, not technologists, data scientists, engineers, coders, or mathematicians, are ultimately responsible for this effort. Senior executives decide what is made, how it gets made, and how carefully or recklessly it is deployed and monitored.
These technologies present frightening possibilities, yet the challenge of confronting them isn’t that difficult: Leaders must describe their worst-case scenarios, or ethical nightmares, and explain how they plan to avoid them. The first step is to become comfortable discussing ethics.
Business Leaders Can’t Be Afraid to Say “Ethics”
In 2018, I attended my first nonacademic conference after 20 years in academia, ten of which I had devoted to researching, teaching, and publishing on ethics. A Fortune 50 financial services company sponsored it, and the topic was “sustainability.” Having taught environmental ethics classes, I thought it would be fascinating to observe how firms view their responsibilities for their environmental effects. When I arrived, though, the speeches were about educating women all across the world, lifting people out of poverty, and improving everyone’s mental and physical health. Few people were discussing the environment.
It took me an embarrassingly long time to realize that in the corporate and nonprofit spheres, “sustainability” does not mean “practices that do not destroy the environment for future generations.” Instead, it refers to “practices in pursuit of ethical goals” and the claim that those practices benefit the bottom line. I couldn’t understand why businesses didn’t just say “ethics.”
This practice of replacing the word “ethics” with a vaguer term is widespread. Consider environmental, social, and governance (ESG) investing: investing in companies that avoid ethical risks (emissions, diversity, political activity, and so on) in the hope that those policies will safeguard earnings.
Some businesses claim to be “values driven,” “mission driven,” or “purpose driven,” although these terms rarely refer to ethics. “Customer obsessed” and “innovative” are not ethical values; a purpose or mission can be fully amoral (leaving immoral aside). So-called stakeholder capitalism is capitalism with a hazy commitment to the well-being of unidentified stakeholders (as if stakeholder interests never conflict). Finally, the field of AI ethics has expanded dramatically in the last five years or so. “We want AI ethics!” the public cried. “Yes, we, too, are for responsible AI!” came the contorted corporate reply.
Moral challenges do not vanish through semantic acrobatics. If we are to solve our problems properly, we must name them properly. Should personal data be off-limits for targeted marketing? When is deploying a black box model incompatible with ESG criteria? What if your mission of connecting people also connects white nationalists?
Let us use the transition from “AI ethics” to “responsible AI” as an example of the detrimental effects of changing language. To begin, when business leaders discuss “responsible” or “trustworthy” AI, they are referring to a broad range of topics: cybersecurity, regulatory compliance, legal concerns, and technical or engineering hazards. These are crucial, but the net result is that technologists, general counsels, risk officers, and cybersecurity engineers focus on areas in which they already have expertise, namely everything except ethics.
Second, when it comes to ethics, leaders fixate on very high-level, abstract principles and ideals, such as fairness and autonomy. Because ethics is only a small portion of the total “responsible AI” picture, companies frequently fail to drill down into the very real, concrete ways these questions play out in their products. Ethical nightmares that outpace obsolete policies and laws are left unnamed, and they remain as likely as they were before the “responsible AI” framework was deployed.
Third, the emphasis on identifying and pursuing “responsible AI” gives firms a hazy aim with hazy milestones. Organizational AI statements include phrases like “we are for transparency, explainability, and equity.” However, no corporation is (or should be) completely transparent with everyone; not every AI model must be explainable; and what constitutes equity is highly debatable. It’s no surprise that firms that “commit” to these ideals quickly discard them. There are no goals, no milestones, no requirements, and no description of what failure looks like.
However, when AI ethics fail, the consequences are clear. Ethical nightmares abound: “We discriminated against tens of thousands of people.” “We tricked people into giving up all that money.” “We systematically violated people’s privacy.” In short, if you understand your ethical nightmares, you understand what ethical failure looks like.
What Causes Digital Nightmares?
Understanding how emerging technologies work — what makes them tick — can help explain why the likelihood of ethical and reputational hazards has risen dramatically. I’ll concentrate on three of the most crucial technologies.
1. Artificial intelligence
Let us begin with a hotly debated technology: artificial intelligence, or AI. Machine learning (ML) accounts for the vast bulk of AI today.
At its most basic, “machine learning” is software that learns by example. And, just as people learn to discriminate on the basis of race, gender, ethnicity, or other protected characteristics by watching others, software does as well.
Assume you wish to train photo-recognition software to recognize images of your dog, Zeb. You show the software numerous examples and tell it, “That’s Zeb.” The software “learns” from those samples, and when you take a new photograph of your dog, it recognizes it as a photograph of Zeb and labels it “Zeb.” If it isn’t a photo of Zeb, the software labels it “not Zeb.” If you show your software examples of “interview-worthy” résumés, the approach is the same: It will use those examples to determine whether new résumés are “interview-worthy” or “not interview-worthy.” The same is true for university applications, mortgage applications, and parole applications.
In each case the software is recognizing and duplicating patterns. The problem is that these patterns are sometimes unethical. For example, if the examples of “interview-worthy” résumés reflect past or current biases against specific races, ethnicities, or genders, the software will detect and replicate those biases. Amazon once developed a résumé-screening AI that did exactly that, learning to penalize the résumés of women. And, in order to assess parole eligibility, the United States’ criminal justice system has deployed predictive algorithms that replicate past prejudices against Black defendants.
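To make "learning by example" concrete, here is a minimal sketch, with entirely hypothetical data and features, of a toy nearest-neighbor "résumé screener." It labels a new candidate the same way as the most similar historical example, and in doing so reproduces whatever bias the historical labels happen to contain:

```python
# A toy 1-nearest-neighbor "screener": it copies the label of the most
# similar historical example. All data and features are hypothetical.

def distance(a, b):
    # Squared Euclidean distance between two feature vectors.
    return sum((x - y) ** 2 for x, y in zip(a, b))

def predict(train, candidate):
    # Label the candidate like the closest training example.
    nearest = min(train, key=lambda example: distance(example[0], candidate))
    return nearest[1]

# Features: (years_of_experience, has_degree, proxy_feature).
# The third feature stands in for a proxy, such as a zip code, that
# correlates with a protected class. The historical labels track it.
train = [
    ((5, 1, 1), "interview"),
    ((2, 1, 1), "interview"),
    ((5, 1, 0), "reject"),
    ((3, 1, 0), "reject"),
]

# Two candidates with identical qualifications, differing only in the proxy:
print(predict(train, (5, 1, 1)))  # interview
print(predict(train, (5, 1, 0)))  # reject: the bias was learned, not coded
```

No one wrote a discriminatory rule here; the discrimination lives entirely in the training examples, which is exactly why good intentions on the part of the developers are not enough.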
It’s important to emphasize that the discriminatory pattern can be found and duplicated regardless of the intentions of the data scientists and engineers who wrote the software. In fact, Amazon’s data scientists noticed the flaw in the AI mentioned above and attempted to remedy it, but they were unsuccessful. Amazon made the correct decision to abandon the initiative. Had the tool been deployed, however, an unwitting hiring manager would have used a tool that discriminated unethically, regardless of the individual’s intentions or the organization’s stated principles.
Discriminatory effects are just one of the ethical nightmares that AI must avoid. There are also issues of privacy, the potential for AI models (particularly large language models like ChatGPT) to be used to manipulate people, the environmental cost of the massive computing power required, and a plethora of other use-case-specific hazards.
2. Quantum computing
The specifics of quantum computers are extremely sophisticated, but for our purposes we only need to know that they are computers capable of processing massive amounts of data. They can execute in minutes or even seconds calculations that would take thousands of years on today’s most powerful supercomputers. Companies like IBM and Google are investing billions of dollars in this hardware revolution, and we should expect to see greater integration of quantum computers into traditional computing processes every year.
Quantum computers add fuel to an existing challenge in machine learning: unexplainable, or black box, AI. In many circumstances, we simply don’t know why an AI tool generates the predictions it does. When the photo software examines all of Zeb’s photos, it analyzes them at the pixel level. More specifically, it is recognizing all of the pixels, as well as the myriad mathematical relationships between those pixels, that constitute “the Zeb pattern.”
Those mathematical Zeb patterns are astonishingly complex — far too sophisticated for mere mortals to comprehend — which means we have no idea why the software (rightly or wrongly) dubbed the latest photograph “Zeb.” And while we may not care about explanations in the case of Zeb, we may care a lot if the program advises refusing someone an interview (or a mortgage, or insurance, or admission).
Quantum computing threatens to make black box models completely impenetrable. Today, data scientists can provide explanations for an AI’s outputs that are simplified representations of what’s truly going on. But simplification can lead to distortion. And because quantum computers can process billions of data points, distilling that process into an explanation we can grasp — while remaining confident that the explanation is more or less correct — becomes increasingly challenging.
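The distortion risk can be illustrated with a deliberately tiny, hypothetical "model": one whose output depends entirely on an interaction between two features. A common simplified explanation, each feature's average marginal effect, concludes that neither feature matters at all:

```python
# A hypothetical "black box" whose decision is an interaction (XOR-like):
# it approves only when exactly one of two signals is present.
def black_box(a: int, b: int) -> int:
    return a ^ b

inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]

# A common simplification: report each feature's average marginal effect
# (flip one feature across all inputs and average the change in output).
def avg_effect(feature_index: int) -> float:
    total = 0
    for x in inputs:
        flipped = list(x)
        flipped[feature_index] ^= 1
        total += black_box(*flipped) - black_box(*x)
    return total / len(inputs)

print(avg_effect(0))  # 0.0 -- "feature 0 doesn't matter"
print(avg_effect(1))  # 0.0 -- "feature 1 doesn't matter"
# Yet the model's output depends entirely on both features together.
```

If a two-feature toy model can defeat a simplified explanation this badly, a model over billions of data points can do far worse, which is the heart of the black box problem.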
This raises a slew of ethical concerns: Under what circumstances can we rely on the results of a (quantum) black box model? What are the proper performance benchmarks? What should we do if the system looks to be faulty or behaves strangely? Do we accept the incomprehensible results of a machine that has previously proven reliable? Or do we reject such outputs in favor of our very restricted but understandable human reasoning?
3. Blockchain
Assume you and I, along with a few thousand of our friends, each have a magical notebook with the following features: When someone writes on a page, it appears in everyone else’s notebook at the same time. Nothing that has been written on a page can ever be erased. The content of the pages, as well as their order, is permanent; no one can remove or alter the pages. When you transfer an asset to someone, both your and their pages are instantly and automatically updated.
This is how blockchain works at a high level. Each blockchain adheres to a set of rules encoded in its software, and updates to those rules are made by whoever governs the blockchain. However, the quality of a blockchain’s governance, like any other kind of governance, depends on the answers to a series of critical questions.
As an example: What information belongs on the blockchain and what information does not? Who gets to decide what happens? What are the requirements for inclusion? Who keeps an eye on things? What is the procedure if a bug is discovered in the blockchain code? Who decides whether or not to make structural changes to a blockchain? What is the distribution of voting rights and power?
Bad blockchain governance can result in nightmarish scenarios: people losing their savings, having personal information revealed against their will, or having fraudulent information placed on their asset pages, enabling deceit and fraud.
Blockchain is most often associated with financial services, but every industry stands to integrate some kind of blockchain solution, each of which comes with particular pitfalls. For instance, we might use blockchain to store, access, and distribute information related to patient data, the inappropriate handling of which could lead to the ethical nightmare of widescale privacy violations.
Things seem even more perilous when we recognize that there isn’t just one type of blockchain, and that there are different ways of governing a blockchain. And because the basic rules of a given blockchain are very hard to change, early decisions about what blockchain to develop and how to maintain it are extremely important.
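The notebook analogy can be sketched as a minimal hash-chained ledger. This is a toy illustration only, with no networking, consensus, or cryptographic signatures, but it shows why earlier "pages" cannot be quietly altered:

```python
import hashlib

def block_hash(prev_hash: str, data: str) -> str:
    # Each block's hash commits to its own data AND the previous block's hash.
    return hashlib.sha256((prev_hash + data).encode()).hexdigest()

class Ledger:
    """A toy append-only ledger: the notebook whose pages cannot be altered."""

    def __init__(self):
        self.blocks = [("GENESIS", block_hash("", "GENESIS"))]

    def append(self, data: str) -> None:
        _, prev_hash = self.blocks[-1]
        self.blocks.append((data, block_hash(prev_hash, data)))

    def is_valid(self) -> bool:
        # Recompute every hash; tampering with any earlier "page" shows up.
        prev_hash = ""
        for data, h in self.blocks:
            if h != block_hash(prev_hash, data):
                return False
            prev_hash = h
        return True

ledger = Ledger()
ledger.append("Alice pays Bob 5")
ledger.append("Bob pays Carol 2")
print(ledger.is_valid())  # True

# Rewriting an earlier entry breaks the chain of hashes:
ledger.blocks[1] = ("Alice pays Bob 500", ledger.blocks[1][1])
print(ledger.is_valid())  # False
```

Note what the code does not answer: who runs `append`, who decides what counts as valid data, and what happens when the rules themselves need to change. Those are precisely the governance questions above.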
These Are Business Issues, Not (Just) Technological Ones
Companies’ ability to accept and exploit these technologies as they evolve will be critical to their ability to remain competitive. As a result, leaders will be required to ask and answer questions such as:
- What constitutes an unfair, unjust, or discriminatory distribution of goods and services?
- Is using a black box model acceptable in this context?
- Is the chatbot engaging in ethically unacceptable manipulation of users?
- Is the governance of this blockchain fair, reasonable, and robust?
- Is this augmented reality content appropriate for the intended audience?
- Is this our organization’s responsibility or is it the user’s or the government’s?
- Does this place an undue burden on users?
- Is this inhumane?
- Might this erode confidence in democracy when used or abused at scale?
Why does this obligation fall to company leaders rather than, say, the technologists tasked with implementing new tools and systems? After all, most leaders aren’t fluent in the code and math behind software that learns by example, the quantum physics underlying quantum computers, or blockchain cryptography. Shouldn’t the experts be in charge of such important decisions?
The problem is that these aren’t technical questions; they’re ethical and qualitative in nature. These are precisely the kinds of issues that company leaders, guided by qualified subject matter experts, are tasked with addressing. Offloading that obligation to coders, engineers, and IT teams is both unfair to those individuals and foolish for the firm. It’s understandable that leaders might find this duty overwhelming, but they are unquestionably responsible for it.
The Ethical Nightmare Challenge
I’ve attempted to persuade you of three claims. First, leaders and organizations must explicitly acknowledge the ethical nightmares that new technologies have created. Second, the operation of these technologies poses a major risk. Third, it is the responsibility of senior leaders to provide ethical guidance to their specific firms.
These claims support the following conclusion: organizations that use digital technology must address ethical nightmares before they harm people and brands. This is what I term the “ethical nightmare challenge.” To overcome it, businesses must develop an enterprise-wide digital ethical risk program.
The first section of the program, which I refer to as the content side, asks: What are the ethical nightmares we’re attempting to avoid, and what are their possible sources? The second section of the program, which I refer to as the structure side, addresses the question, “How do we systematically and comprehensively ensure those nightmares don’t become a reality?”
Ethical nightmares can be expressed with varying degrees of intricacy and sophistication. The industry you work in, the type of business you are, and the kinds of interactions you have with your clients, customers, and other stakeholders all shape your ethical nightmares. For example, if you’re a health care provider whose physicians use ChatGPT or another LLM to make diagnoses and treatment recommendations, your ethical nightmare may be widespread incorrect recommendations that your staff lacks the expertise to detect.
If your chatbot is undertrained on information pertaining to specific races and ethnicities, and neither the developers nor the clinicians are aware of this, your ethical nightmare would be systematically providing false diagnoses and poor treatment to people who have already been discriminated against. If you’re a financial services firm that uses blockchain to transact on behalf of clients, one ethical nightmare might be lacking the capacity to repair faults in the code — a result of the blockchain’s ill-defined governance. That could mean, for example, being unable to reverse fraudulent transactions.
It is important to note that articulating nightmares entails spelling out specifics and consequences. The more specific you can get — which depends on your understanding of the technologies, your industry, the various contexts in which your technologies will be deployed, your moral imagination, and your ability to think through the ethical implications of business operations — the easier it will be to build the appropriate structure to control for these risks.
While the methods for identifying nightmares are the same across businesses, the strategies for implementing suitable controls differ based on the organization’s size, existing governance structures, risk appetite, management culture, and other factors. Companies’ forays into this arena can be classed as formal or informal. In an ideal world, every organization would take the formal approach.
However, considerations such as limited time and resources, the rate at which a company (accurately or not) believes it will be affected by digital technology, and business demands in an unpredictable market can all make the informal approach seem reasonable. In such instances, the informal approach should be regarded as a first step that is preferable to doing nothing at all.
The formal approach is systematic and complete, and it takes a significant amount of time and resources to construct. In a nutshell, it is concerned with the ability to develop and implement an enterprise-wide digital ethical risk strategy. In general, there are four steps.
1. Education and alignment
First, all senior leaders must grasp the technology well enough to agree on what defines the organization’s ethical nightmares. Building and implementing a strong digital ethical risk strategy requires knowledge and leadership alignment.
Executive briefings, workshops, and seminars can provide this education. It should not, however, demand or attempt to teach math or coding. The training is designed to help nontechnologists and technologists alike understand the hazards their firm may face. Furthermore, it must be about the organization’s ethical nightmares, not about sustainability, ESG criteria, or “company values.”
2. Gap and feasibility analyses
Leaders must first understand their organization and the likelihood of their worst nightmares coming true before developing a strategy. As a result, the second stage is to conduct gap and feasibility evaluations to determine where your organization is currently, how far it is from adequately protecting itself from an ethical nightmare occurring, and what it will take in terms of people, processes, and technology to fill those gaps.
Leaders must understand where their digital technologies are and where they will most likely be designed or bought within their firm to do this. Because if you don’t understand how technologies work, how they’re used, or where they’re headed, you won’t be able to prevent nightmares.
Then a variety of questions present themselves:
- What policies are in place that address or fail to address your ethical nightmares?
- What processes are in place to identify ethical nightmares? Do they need to be augmented? Are new processes required?
- What level of awareness do employees have of these digital ethical risks? Are they capable of detecting signs of problems early? Does the culture make it safe for them to speak up about possible red flags?
- When an alarm is sounded, who responds, and on what grounds do they decide how to move forward?
- How do you operationalize and harmonize digital ethical risk assessment relative to existing enterprise-risk categories and operations?
The answers to questions like these will differ greatly among firms. This is one of the reasons digital ethical risk strategies are challenging to develop and implement: They have to be tailored to work with current governance structures, policies, processes, workflows, tools, and staff. It’s easy to say that “everyone needs a digital ethical risk board,” modeled after the institutional review boards that evolved in medicine to limit the ethical hazards of human subjects research. It is impossible, however, to continue with “and every one of them should look like this, act like this, and interact with other groups in the business like this.” Good strategy does not emerge from a one-size-fits-all answer.
3. Strategy creation
Building a company strategy based on the gap and feasibility evaluations is the third phase in the formal approach. This entails, among other things, defining goals and objectives, deciding on a metrics and KPI strategy (for assessing both compliance with the digital ethical risk program and its impact), creating a communications plan, and identifying key success factors for implementation.
Cross-functional participation is required. Technology, risk, compliance, general counsel, and cybersecurity leaders should all be involved. Direction should also come from the board of directors and the CEO. Without their enthusiastic support and encouragement, the program will get watered down.
The fourth and last step is strategy implementation, which includes process reconfiguration, training, support, and continual monitoring, including quality assurance and quality improvement.
New procedures, for example, should be customized by business domain or role to be consistent with current procedures and workflows. These protocols should clearly describe distinct departments’ and individuals’ roles and duties, as well as set explicit mechanisms for identifying, reporting, and addressing ethical issues. Furthermore, novel workflows must seek an ideal balance of human-computer interaction, which will vary depending on the types of jobs and the relative hazards involved, as well as develop human oversight of automated flows.
The informal approach, by contrast, typically entails endeavors such as tasking executives in distinct units of the business (such as HR, marketing, product lines, or R&D) with identifying the processes required to conduct an ethical nightmare check, and creating or leveraging an existing (ethical) risk board to advise personnel, either on individual projects or at a more institutional level.
This approach does not require official policy changes, departmental harmonization and integration, formal changes in governance structure, or comparable initiatives. While it packs a punch, it is neither systematic nor thorough. Ethical risks may slip through the cracks and end up on the front page.
In my work, I’ve found that the vast majority of organizations are run and staffed by good people who have no intention of harming anyone. But I’ve also noticed that “ethics” is a word most businesses avoid. It is regarded as subjective or “squishy,” and as falling outside the purview of business.
Both of these views are incorrect. Invading people’s privacy, automating discrimination at scale, weakening democracy, putting children in danger, and betraying people’s trust are all obvious wrongs. They are ethical nightmares on which almost everyone can agree.
Instead of understanding and attempting to prevent their participation in all of this, many leaders remain focused on business as usual: Roles and duties are fixed. Quarterly reports must be submitted. Shareholders are watching. People have day jobs; they cannot be moral guardians of the galaxy at the same time. In many ways, it is not evil but standard operating procedure that is the enemy of the good, or at least of the not-bad. Yesterday’s tools required bad intent from those who wielded them to cause havoc; today’s tools do not.
Give people the chance, the breathing room, to do the right thing, and they will gladly do it. Providing that chance entails not only allowing but also encouraging or requiring people to speak in the language of ethical nightmares. Make it a top priority. Integrate it into your current business strategy. Ensure that everyone in the organization can tell you about the company’s worst fears and can rattle off five or six things it does every day to prevent them from happening.
Leaders must recognize that creating a digital ethical risk plan is well within their capabilities. The majority of employees and customers want their companies to have a digital ethical risk strategy. Management should not be afraid.