Beyond the Business Case for Responsible Artificial Intelligence: Strategic CSR in Light of Digital Washing and the Moral Human Argument


1. Introduction

Ethical issues arising from artificial intelligence (AI) design and deployment in business organizations are increasingly debated. Undesirable social outcomes entailed by AI are shaping scholars’ and practitioners’ concerns about the future of protecting human rights, ensuring environmental sustainability, facing technological unemployment and reskilling of the workforce, dealing with racial and gender discrimination, and focusing on privacy and data control, among others. Artificial intelligence is here defined as “a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments” (OECD 2019 [1], p. 7). AI can be further understood as a collection of technologies with the distinctive “ability to interpret external data correctly, to learn from such data, and to use those learnings to achieve specific goals and tasks through flexible adaptation” [2].
In light of such disruption, corporate social responsibility (CSR) has become critical to ensuring companies’ stance on ethical AI, both by taming and preventing undesirable consequences and by supporting AI for positive social and environmental development for internal and external stakeholders [3]. Corporations are exposed to a number of unprecedented ethical dilemmas when making decisions over trade-offs between profit maximization through AI and ethical AI-driven practices [4]. Tensions arising from such dilemmas can be solved through the adoption of “washing” practices (also termed ethical digital washing, blue washing, AI ethics washing, or machine washing): “tech giants are working hard to assure us of their good intentions surrounding AI. But some of their public relations campaigns are creating the surface illusion of positive change without the verifiable reality. Call it ‘machinewashing’” [5]. Such a use of washing through symbolic actions and misleading claims has been discussed as a phenomenon pertaining not only to the domain of communication strategy but also, extensively, to the moral stance of a CSR strategy [6], especially in the field of ethical AI, where doing nothing (or only pretending to act) is already considered a source of potential harm due to the underestimation of future shortcomings.
Strategic CSR, aiming to extensively adopt ethical behavior in corporate practices while at the same time creating social and economic prosperity [7], has yet to confront the need to give additional weight to the moral arguments supporting corporate involvement in AI ethical responsibility. Although the relationship between CSR and corporate performance is a troubled one, with scholars struggling to find an unquestionable positive correlation between the two, CSR has indeed proven beneficial for the bottom line in a vast number of cases [8,9,10]. Major theories, such as creating shared value (CSV) [11], have been building on the so-called “business case” for creating economic value while exploiting the chance to provide a positive social impact or to solve a social issue yet untackled by the market [9]. Nonetheless, criticism towards such an approach has underlined how tensions between socially oriented and economic-based motives might arise, with significant consequences for organizational survival and success as well as for ethically desirable outcomes [12]. Although Porter and Kramer have already answered such criticisms of their CSV perspective by highlighting how the use of the business case is essential to draw attention and elicit positive social impacts, dilemmas arising in AI ethics present a degree of complexity [13] such that it would be highly difficult to come up with optimal economic and socially oriented solutions easily compatible with CSV strategies.
Such dilemmas are quite common within a widening grey area, where legislation is still lacking and public policies are falling behind the fast pace of technological advancement [14], leaving increased space for corporations either to play a critical role in protecting and enhancing human rights and human development [15] or to increase the chances of AI’s harmful consequences through irresponsible behavior [16]. Grey areas concern both businesses engaged in AI development, confronted with the problem of designing responsible AI and ethical machines (see, for example, the value-alignment problem [17]), and businesses purchasing new technologies and using them to increase their efficiency and performance. In the latter case, with businesses struggling not to fall behind innovative competitors in the technological race, the ethical concerns posed by the use of AI might easily be underestimated in favor of time- and resource-saving approaches [18].
While some scholars have argued that corporate interest in ethical AI stems from the concern that failing to operationalize ethical AI will negatively impact economic performance [19], others have sharply criticized current trends of ethical involvement by corporations—especially big tech players—by labeling it “ethical digital washing” [20]. To navigate such a debate, it is therefore urgent to return to the arguments concerning the possible existence (or not) of a “market for virtue” [21] in order to assess the motives and drivers of responsible AI behavior in business organizations.
This work aims to offer a critical review of the “business case” and the “moral case” for strategic CSR in light of artificial intelligence-related ethical dilemmas in organizational everyday life. It seeks to contribute to the subfield of strategic CSR and AI [3,22] by adding to it the need for a “moral argument” to support ethical behavior when confronting AI’s widespread ethical challenges.
Indeed, the paper enhances our knowledge of the debates arising from the overlap of CSR and the ethics of AI in at least three ways: first, by refashioning the classical debate on the business arguments and moral arguments supporting CSR in light of current challenges for the management of AI [23,24]; second, by examining already circulated cases of digital washing and corporate inconsistency, discussing them through the lenses of business ethics to point at washing practices as sources of underestimation of the dehumanization perils entailed by AI [25,26]; third, by presenting a “human argument” able to overcome the current limits of the classical “business argument” and the already circulated “moral argument” in order to support an integral ethical commitment to AI design and deployment for social purposes [27,28].
The purpose of this paper is to contribute to the normative literature in the field of business ethics and sustainability [29,30], conceived as a body of knowledge of cornerstone importance to inform applied ethics in business and administration practices [31] as well as future empirical research [32]. In more detail, the paper follows the “symbiotic approach” to business ethics’ normative/empirical split [32,33]. Such an approach entails a practical relationship between the two streams: normative research (the body to which this work pertains) is entrusted with morally rooted debates on business organizations and practices, while empirical research pursues enquiries that add to and further detail the problems morally debated in normative works. The latter approach is distinct from “theoretical integration”, where normative concepts can be directly drawn from empirical studies [32]. To form the arguments underpinning this work, the author has therefore drawn the major discussed concepts from major normative studies in the field (for example, the business case vs. the moral case for CSR is highlighted in cornerstone textbooks of the discipline [34]). Accordingly, the following sections are each dedicated to themes derived from an extensive literature review conducted by summarizing and critically assessing the inherent body of normative production. The descriptive use of cases, reported and commented upon, aims to offer relevant, real-life examples to which the normative arguments illustrated can be applied [35]. This work seeks to add to the literature adopting a “foundational normative method”: analyzing business problems by highlighting their rooting in ethical (or religious) theories, referring to such theories to discuss the inherent motivations and outputs of decision making, and evaluating their relevance for the theory [36].
This work does so with a specific focus on the intersection of the ethics of AI and strategic CSR ethical underpinnings.
The remainder of the paper, structured by devoting each section to summarizing a major debate in the field and one subsequent section to its refashioning in light of AI developments, is organized as follows: Section 2 presents the main arguments supporting the business case for strategic CSR, while Section 3 highlights its main strengths and limits when confronted with AI developments and the corresponding ethical downfalls. Section 4 presents the main arguments supporting the moral case underpinning CSR strategies, and Section 5 discusses such arguments in light of AI ethical disputes. Section 6 presents washing strategies and the related tensions arising from strategic CSR adoption, while Section 7 discusses digital ethical washing in corporations addressing AI design and how to counterbalance such a tendency by leveraging the “human argument”.

2. Strategic CSR and the Business Case

The link between corporate social responsibility (CSR), strategy, and competitive advantage is a classic exploration within CSR literature and business ethics debates [37]. Most of this debate has reached a consensus around the paradigm of “strategic CSR”. Camilleri and Sheehy [7] define the latter as the integration of responsible practices with corporate practices, thus strategically deploying CSR as a source of multiple targeted positive outcomes, such as competitive advantage, long-term economic sustainability, the discouragement of additional regulation, and first-mover advantage in setting high ethical standards [38]. In all its accounts, strategic CSR has the ultimate aim of creating economic and social value at the same time [39]. Central to such a view is the understanding of the practices, policies, and processes aimed at addressing social responsibility as tied to their impact on corporate performance. As Vishwanathan et al. [40] have underlined, among others, four mechanisms of CSR’s positive impact on corporate performance are as follows: “(1) enhancing firm reputation, (2) increasing stakeholder reciprocation, (3) mitigating firm risk, and (4) strengthening innovation capacity”. Other mediating mechanisms have been identified in strategic planning and the targeting of positive outcomes for the host country in MNCs [41]; the guaranteed efficiency of CSR activities through bettering stakeholder relationships, venturing new opportunities, focusing on brand equity, and leveraging communication [42]; substantive self-regulatory codes of conduct [43]; and enhanced social perception of the firm’s CSR through psychological factors such as warmth and competence [44]. Responsible features integrated within corporate governance have further been identified as a mediating factor in CSR investments’ positive impact on corporate performance [45].
Strategic CSR has been discussed as yielding positive outcomes for employees’ commitment [46] and for legitimacy in potentially adverse contexts [47], and as viable support for organizations to succeed in sustainability and to thrive. Nonetheless, an unequivocal causal relationship between strategic CSR and positive corporate performance remains empirically contested [48,49], although several studies confirm that cases of positive correlation are not rare and, under certain conditions, can lead to profit maximization. This is particularly true for long-term orientation strategies [50]. For instance, in cases of advertising or of leveraging consumers’ willingness to pay premium prices for products with embedded social value, this is shown in consumers’ appreciation of congruence between goals, actions, audiences, and the firm’s proposed strategy [10,51,52].
Such an extensive body of empirical literature relies on what has been summarized as “the business case” for strategic CSR. This stream of literature is closely tied to the classic debate on the business case for the very foundational models of CSR of any kind, which comprises a variety of approaches, mostly grouped as follows: “(1) cost and risk reduction; (2) gaining competitive advantage; (3) developing reputation and legitimacy; and (4) seeking win–win outcomes through synergistic value creation” [53]. The business case has been at the core of the now widespread approach of creating shared value proposed by Porter and Kramer [11], further popularized with the well-known motto of “doing good and doing well” [54]. Such a paradigm has clearly focused on the existing nexus between business and society as critical for strategic decision making. Nonetheless, as Carroll and Shabana discuss, when arguing for strategic investment in CSR, there is no need to prove a direct relationship between CSR and performance: such a link can be indirect or, if not clearly identified, other variables connected to responsible behavior might still be beneficial to firm survival, change, and sustainability. Together with external pressures, such as reputation, customers’ demands, and regulatory frameworks, the business case has indeed been identified as one of the principal drivers of responsible management [55]. This is true whether responsible strategies are adopted as a reaction to pressures or in order to keep ahead of potential future risks, regulations, and backlashes [56]. The business case has become so central that even critics have admitted its uncontestable relevance: “while profitability may not be the only reason corporations will or should behave virtuously, it has become the most influential” [21].

3. The Business Case for AI in Strategic CSR

Thanks to its long-term orientation, foreseeing future challenges and preparing in advance for possible business shortcomings in ethical and juridical compliance, strategic CSR has acquired new urgency in light of AI’s recent disruptive evolution. As AI is one of the major projected sources of increased business profitability and, at the same time, yields major social impacts, both positive and negative, responsible AI use is a field of increasing interest for CSR scholars and experts [14,57].
By focusing on corporate strategy towards AI as a driver of (or impediment to) competitive advantage, as an enabler of better performance, and as a means to reach new goals, the literature has addressed crucial aspects of novel strategic decision making, reforms in organizational culture, and the alignment of innovation capacity with newly set goals [58,59]. While the literature provides clear indications of AI’s potential to increase productivity, free humans from repetitive and unsatisfying tasks, and unleash the offering of new services and products, the same optimistic literature on the future of AI warns about the need to harness such potential with careful ethical evaluations and practices [60]. Balancing such potential against its related ethical risks concerns virtually every aspect of organizational life: building employees’ trust and confidence towards innovation challenges, enhancing consumers’ trust, keeping pace with the required upskilling and reskilling of the workforce, monitoring competitors’ ever-changing technological advancements, designing systems of control over machines’ decision making, and meeting the fast-growing need for compliance with public legislation [61].
The extent and features of the unprecedented time and energy consumption required for organizations to focus on AI as a driver of competitive advantage can easily lead them to underestimate or postpone the guard-railing of ethical downfalls, especially at a time when common paths and clear guidelines are yet to be established [62]. While the grey area between ethical conduct and legal obligations is expanding, due to AI advancing at a faster pace than law makers’ and policy makers’ interventions [63], business organizations’ role in preventing, taming, and addressing AI’s social and environmental impact is growing in centrality [64].
While organizations are generally embracing AI-related CSR strategies proactively [65], voluntary engagement in the effective taming of AI’s social impact is still in its infancy. Nonetheless, the business case for investing in CSR linked to AI has already gained popularity: by maximizing AI’s business potential while minimizing its threats, organizations aim to exploit the social disruption brought by AI for business opportunities [3,66]. Yet the ethical consequences of AI’s future steps foreshadow a much harder challenge than aligning business and ethical considerations in the short term, due to AI’s forecasted (although still debated) ability to surpass human work capabilities in many sectors [3]. Meeting such a challenge, which ranges broadly from ethical risk-management strategies to engaging employees in proactively embracing change and operationalizing ethical thinking throughout the organization by raising awareness [67], requires the mobilization of material and immaterial resources, which in turn needs a motivational background broader than the mere business case. In the next sections, the paper will discuss several arguments supporting the latter view in light of possible backlashes not only on organizations’ mishandling of ethical risks but also on humanity at large.

4. The Moral Case for Strategic CSR: Humanizing Stakeholders within Ethical Theories

In their cornerstone work “mapping the territory” of CSR theories, Garriga and Melé [68] underline, alongside the business argument and others, the ethical theories underpinning CSR as a way to reach a desirable society by focusing on the common good for its own sake. This body of literature comprises the so-called “moral arguments” supporting CSR. The theories belonging to this approach, mostly but not solely tied to Catholic social teaching and the Aristotelian tradition [69], display the following requirements for CSR: “(1) meeting objectives that produce long-term profits, (2) using business power in a responsible way, (3) integrating social demands and (4) contributing to a good society by doing what is ethically correct” [68]. Among these theories, normative stakeholder theories have a central role in stressing the relational value of business activities and the importance of interpersonal relationships at the individual and social level as an ontological component of business organizations [70]. Notably, CSR definitions often rely on stakeholder theory although, when focused on profit maximization, they tend to adopt an instrumental view of stakeholder relations [71,72]. On the other hand, normative stakeholder theory has largely contributed to underpinning ethical views of CSR due to its ability to stress the intrinsic value of stakeholders’ dignity and, consequently, of stakeholders’ demands. Such an approach has thus been fundamental in “humanizing” stakeholders, i.e., treating others with respect for their integral humanity instead of as a means to achieve economic benefits [73].
Strategic CSR also relies heavily on stakeholder relations as a key driver of economic and social value creation [74]. The literature on the topic has extensively addressed the role of stakeholder pressure in a firm’s strategic orientation to substantive CSR, depending on the stakeholders’ salience and resource availability [43], highlighting how firms decide to engage in symbolic or substantive actions depending on the targeted groups and the type of pressure applied. The burgeoning literature of empirical studies enquiring into the features and models of stakeholder management and engagement within strategic CSR, as well as into its role in bettering stakeholder relationships, relies, as mentioned, on a vast body of normative literature. Indeed, it is by relying on the moral arguments for CSR that responsible behaviors can be explained as an authentic strategic orientation towards stakeholders as persons, with due respect for their wellbeing, freedom, and psychological, physical, and spiritual integrity [75,76]. Such an understanding also helps in addressing virtuous models of stakeholder engagement [77], as well as offering a suitable framework to debate the role of worldviews, systems of belief, and ideologies—more than factual evidence—in leading executives to adopt CSR [78,79].
Moral arguments backing responsible behavior are thus relevant for strategic CSR, as they underline how the latter can rarely be pursued without a significant component of spiritual, ideal, value-laden influences, especially because these have a crucial impact on the quality of the stakeholder relations that are built [80,81]. If virtuous stakeholder engagement requires moral motives, then the latter can be understood as a prerequisite for any best practice of pursuing CSR, as it has been summarized: “Only when firms are able to pursue CSR activities with the support of their stakeholders can there be a market for virtue and a business case for CSR” [53] (p. 102). Drawing on such a perspective, the business argument backing CSR would be addressed as actually resulting from a prior moral engagement rather than as the major motive underpinning corporate positive social impact.

5. The Moral Case for Responsible AI

Two main concerns, and corresponding approaches, can be identified within current debates concerning business’ responsibilities for ethical AI. The first body of concerns deals with the responsibility of business actors investing in and designing new AI technologies (see, for example, the case of OpenAI starting as a non-profit and quickly developing into a highly profitable organization [82]). A second body of literature deals with the responsibility of a business in implementing and using AI for any given organizational scope, thereby incurring numerous ethical shortcomings (see, for example, the classic case of the ethical implications of self-driving cars [83], or the extensive concerns about AI bias in recruiting and human resource management [84]). Corporate responsibility thus has to be discussed concerning both the design and the use of technology. An underestimation of the moral arguments backing CSR, and hence of the suitable actions they foresee, depends on two main theoretical shortcomings: first, the understanding of the firm as solely an economic actor, thus overlooking its ontological being as a moral and social body [85]; second, and similarly reductionist, an understanding of technological progress as neutral and not imbued with moral considerations [86]. The conflation of the two in the field of organizational ethics concerning AI leads to overlooking the social impact of corporate power and behavior, which affects virtually all of CSR’s future perspectives [3].
Another moral argument relies on the intrinsic nature of the ethical questions entailed by AI, which are difficult to pose and face while prioritizing the bottom line over social concerns. CSR practices and policies meant to meet the ethical challenges arising with AI do not only have to operationalize often vague lists of principles and ethical codes but must also actually provide answers and set standards for fundamental questions, such as: Who should be held accountable for the design and impact of AI, and how? What are the purposes and the limits of organizational automation through AI? Which levels and models of stakeholder engagement are needed to prevent unethical outcomes? [14,87]. Most of these questions do not bear a direct link with corporate performance and, when they do, it is mostly in the form of a trade-off between speed in technology development and adoption and careful ethical guard-railing.
Most of these questions entail a refashioning of the moral cases for strategic CSR as well as for CSR in general terms. For instance, the classic supply-chain ethical conundrum of exploiting the Global South is now posing, in AI’s labor-intensive sectors, some of the very same issues already seen with MNCs’ use of overseas sweatshops to produce garments and low-quality products (e.g., the textile industry, with its scandals and boycotts; see, among many others, the Nike case) [88,89]. By relying on low- and under-paid labor, for example in the Philippines, to train algorithms and ChatGPT generative AI, Scale AI is facing accusations of creating “data sweatshops” [90], while competitors, such as Enabled Intelligence (https://enabledintelligence.net/news/opinion-ai-data-sweatshops-are-bad-news-and-threaten-national-security/ (accessed on 17 December 2023)), are using the scandal to advertise ethical stances that differentiate them. Nonetheless, reputational backlashes happen only when consumers and external stakeholders have a certain level of awareness and access to relevant information, which in the case of AI might be especially difficult to attain, as shared knowledge is rare. Most notably, even in the case of self-described transparent organizations, the sharing of information is scarce [91].
A third issue has been raised in the form of an ethical concern over the “power concentration” of AI developers and owners. Many have discussed how big tech companies are pursuing ambitious AI projects and publicly warning about possible threats to humanity while, at the same time, not foreseeing any change in their governance structures and internal processes that would fundamentally alter their problem-solving capacity towards ethical shortcomings [92]. Addressing what have been called “ethical nightmares” would instead force companies to adopt advanced frameworks to address potential harms before their consequences become widespread, relying on senior executives’ integrity and guidance to keep ahead of future challenges rather than perpetually trying to solve them ex post [93]. This would force them to adopt a concept of accountability that entails difficult but clear choices on governance models and pursued goals, not leaving current approaches unaltered [94]. The latter requires critically addressing the business–society relationship, exceeding the business case for social responsibility alone, to approach a collaborative mindset between private and public interest [95,96]. Indeed, for all the above-mentioned reasons and stakes, AI is the field par excellence in which considerations other than economic ones come into play when addressing the relationship between business and society. In the most notorious case, the issue has been addressed by turning public interest into private ownership instead of the other way around [82]. Concern has thus been raised about a progressive alteration of the business–society relationship in favor not of stakeholders but rather of the few people in charge of big corporations profiting from AI’s newest frontiers; Bietti has long argued that technologies reflect existing social power relations and are influenced in their design by power structures [20]. In the same vein, as Elliott et al. have discussed, digital corporate responsibility can face huge vested interests as stakes in a digital society, as the latter have to be faced not only by taming unwanted consequences at the micro level of communities and the general public but also at the meso and macro level of organizations as corporations and big tech owning technologies [24].
Another argument is that corporate culture is among the main critical variables on which to focus in building a significant CSR strategic orientation towards ethical AI and, as such, it requires beliefs and ideals about the desirability of moral behavior [23,97]. Indeed, even when irresponsibility is addressed from the standpoint of “practical ethics” (i.e., “immoral actions as violations of established habits of a culture” [57] (p. 787)), when deployed in organizations it entails tolerating or failing to prevent misbehavior as part of the organizational culture, especially when it takes the form of “ordinary irresponsibility”. Furthermore, culture, relying on values, places the latter at the center of the debate on the main drivers of ethical AI. Indeed, a focus on ethical challenges leads to discussing value-salient notions such as fairness, basic entitlements, and inequality [14,98], rather than a singular principle of profit maximization and an efficiency rationale. Aligning AI with such human values and evaluations is among the most difficult goals of ethical AI [17]. In addition, Lindebaum et al. have discussed (in line with the classic critique of the all-encompassing technique proposed by Adorno and Horkheimer [99]) the ways in which technological advancements can lead organizations to converge towards a singular principle of rationalization, progressively abandoning and underestimating value-laden processes and rationales [100]. The mechanization of values, just as of any intrinsic humanistic feature, leads to impoverished social intercourse and outcomes, thus dehumanizing the organization itself in the medium-long run. On the ethical spectrum ranging from the “use of AI for good” to the “malicious use of AI”, organizations’ moral concerns are at the core of choosing where to position themselves, through a responsible management of technology rooted in the understanding of the latter as neither univocal nor neutral [101,102].

This body of extensive concerns about the need to converge on moral arguments supporting and backing responsible AI design and development, thus refashioning traditional CSR debates, is further detailed by scholars analyzing the washing phenomena and, particularly, washing practices concerning AI ethics. In the next few paragraphs, some exemplary cases are illustrated and debated in their relevance for strategic CSR theory implications.

6. Washing and CSR Tensions

A further argument for shifting the focus from the business case to the moral underpinnings of social responsibility is tied to the tensions that commonly arise when implementing and integrating strategic CSR. Integrating CSR into core structures, activities, policies, and processes leads to several major risks and internal shortcomings, such as tensions arising from an unexpected incompatibility of CSR goals with previous or current business goals; inconsistent behaviors of the organization in light of organizational change; and clashes of views concerning competing decision-making rationales that make it difficult to align economic and social goals [103]. Tensions thus commonly arise both internally [104,105] and externally [106,107]. While some scholars underline how CSR can itself be the terrain upon which to overcome and navigate tensions by developing responsible practices and overcoming a top-down approach [103], others sharply criticize strategic CSR and CSV approaches by arguing that they rely on underestimating the occurrence of trade-off situations in favor of an assumed availability of win–win options between economic and social goals [12]. Indeed, according to the main critics of the CSV and strategic CSR visions, such perspectives fall short of understanding how conflicting social and economic goals may be in everyday settings.
Such tensions have a wide impact on organizations; far from being merely idealistic or tied to black-or-white tendencies to behave solely in the interest of shareholders or solely out of altruistic purposes of social impact, tensions may inform strategic decisions at all organizational levels, including the difficult dilemma of how to evaluate the convenience of CSR policies themselves [108]. Moral problems in business activities mostly arise in the form of ethical dilemmas and are often concealed in their nature to decision makers themselves; rarely do moral issues arise as clearly “black or white”, but rather in contexts where the moral choice to be made appears questionable and unclear. Hence, practical ethical behavior depends on (at least) two dimensions of ethical business problems: clear or unclear moral judgment (evaluating what is the right thing to do) and the level of motivation (the desire to do the right thing), which both determine the degree of urgency of ethical dilemmas [109]. “Washing” initiatives and policies have been identified as a common option, although less common than previously thought [25], chosen by corporations to face the above-mentioned tensions and to try to align the business case with “symbolic” CSR strategies [108,110]. Indeed, as CSR initiatives influence a variety of stakeholders and, especially, customer relationships [111], their feigned enactment for deceptive purposes can be enlisted within strategic deployment as well [6].
“Washing” is a polysemic term that has acquired numerous definitions and has been used to describe and analyze slightly different phenomena; while most of the literature focuses on greenwashing as “disinformation” and a communication strategy [112], some have underlined how washing can refer to more substantive practices, such as feigning ethical behavior by blaming unethical outcomes on other actors [113] and engaging in symbolic actions to divert attention from other questionable practices. Washing practices are not confined to sustainability issues but are also common in gender issues, sometimes under the label of “pinkwashing” [114], or in the form of “rainbow washing” concerning LGBTQ+ community support [115]; most of these can now be subsumed under the term “wokewashing”: highlighting a corporation’s sensitivity and actions in support of marginalized, stigmatized, and underprivileged social groups in the attempt (authentic or pretended) to be considered “awake”—conscious and active—in fighting social inequalities [116].
Research generally confirms that washing strategies negatively affect the bottom line [117]. Greenwashing, for instance, can backfire depending on the efficacy of signaling and the attentiveness of stakeholders, such as non-governmental organizations [118], especially when it comes to gaining environmental legitimacy. Indeed, inconsistencies between the corporate images and messages projected to an external audience and internal policies and practices are easy to spot and mostly lead to perceived inauthenticity [119]. Moreover, inauthentic involvement in social issues usually leads to overstated promises and stances, which negatively affect the bottom line when such promises go undelivered and unfulfilled [119]. Despite such wide consensus among scholars, the motives behind washing are still debated [120], along with models of stakeholder backlash and past examples of “shamed” organizations [121].

7. Ethical Digital Washing and the Need for a Moral Human Argument

Critics of current corporate involvement in self-regulation concerning the ethics of AI, especially in the form of lists of general principles, have argued that such an approach is unsuitable for solving the major social issues facing business organizations. This phenomenon has been discussed as a sign of the “uselessness” of the whole field of AI ethics [122]. Conversely, it has also been addressed as proof of the potential of ethical AI research, as only a part of the identified mitigating strategies is currently being adopted by corporations [65]. Among other major concerns, the notion of “washing” practices applied to AI ethical issues is spreading within the literature.
Machinewashing, also labeled “blue-washing” and “ethical digital washing”, has been defined as “misleading information about ethical AI communicated or omitted via words, visuals, or the underlying algorithm of AI itself. Furthermore, and going beyond greenwashing, machinewashing may be used for symbolic actions such as (covert) lobbying and prevention of stricter regulation.” [28]. Ethical digital washing, as a phenomenon of ethical instrumentalization, has been further detailed as “corporate practices that co-opt the value of ethical work” by limiting ethical experts’ intervention to symbolic hiring with no internal space of maneuver, employees’ hiring policies focused on maintaining an uncritical consensus, the use of nudging techniques to divert attention from intrinsic ethical issues of certain technologies, and focusing on the ethical design of specific technologies while ignoring or defunding actions to focus on system-level unethical consequences [20].
Washing practices in AI, as a counterweight to effective expert intervention by ethicists, can thus be conceived as a strategic shortcut around real commitment to the development and use of human-centered technologies. Such shortcuts can be viable and useful to businesses, as they exploit two current trends: first, narrow approaches to ethical practices in AI; second, existing imbalances of power. Concerning the first, as Van Maanen argues, top-down, principle-listing approaches to ethical behavior in technological advancement should be replaced by practices based on a bottom-up understanding of the human stakes involved in each critical situation. This practice would be better suited to the very nature of ethics, which resides in phronesis rather than episteme: “In contrast to episteme—whose statements have an idealized, atemporal, and necessary character—practicing phronesis is concrete, temporal, and presumptive. Phronesis is the art of judging what to do in concrete situations, without assuming that the judgments will hold for everyone, everywhere, and every time” [26], p. 9. Washing practices via ethical codes and lists of principles are thus to be understood as a way to exploit internal and external unawareness of the inherent realm of ethical conduct and, consequently, of the moral obligations of the corporation. Second, operationalizing principles into practices is naturally very difficult in a fast-changing technological ecosystem; nonetheless, critics of corporate ethical behavior point to power imbalances and malicious motives behind the inconsistencies and ineffectiveness of corporate action. One of the main contested terrains is companies’ ability to shape public discourse over AI promises and risks, whether through direct involvement in codes, the funding of studies, and dedicated departments, or through indirect influence via the funding of social and academic initiatives aimed at debating AI.
This ethical activism by corporations in the tech field has been labeled “owning ethics”, pointing to a process of institutionalization of this tendency [123]. In this light, critics of washing include within it the unethical behavior of business projects aimed at co-opting and influencing the debate on AI ethics [92] in media coverage, scholarly enquiry, and government action [124]. The well-known case of Timnit Gebru, fired from Google’s ethical AI team after raising concerns about machine learning threats [86], and the quick shutdown of the company’s ethics committee are only the tip of the iceberg. Such a conundrum is difficult to tackle: because of a legitimized concentration of power, because of a lack of suitable legislation, and because of what has been called “the ethicist dilemma”, ethics experts occupy the difficult position of having to communicate and initiate action in the face of ethical shortcomings [92].
To effectively tame machinewashing and ethical digital washing strategies, revisiting the debate on CSR’s underpinnings can be highly helpful. Indeed, there is a positive and formally universal consensus on the need to ethically guardrail AI development and use, although business responsibilities in doing so are neither clear nor univocal. To some, the extent of organizational moral responsibility in AI is tied to the degree of social consequences a given technology yields for humans [125], while others have argued for an integral responsibility to keep control over AI, which is potentially threatening to humankind as a whole [126]. The latter position has been developed into a “human” moral argument backing CSR: organizations must develop ethical AI because risks of dehumanization are at stake for too many people, with consequences too heavy to ignore.
The extensive impact of AI on internal and external stakeholders calls for an additional analysis of stakeholder engagement and management in AI development. Aiming at stakeholders’ wellbeing when confronted with AI elicits careful consideration of moral boundaries that exceed usual compliance. For example, in the case of algorithmic recruiting, its impact on internal and external stakeholders, through dehumanizing practices and the perils of bias and discrimination, adds dimensions of mistrust and moral hazard beyond those of prior human misbehavior. AI governance and accountability require corporate ethical behavior that goes beyond mere juridical compliance in order to prevent boycotts, employee distrust, protests, and other ethical troubles [91], as the stakes of AI deployment are strictly tied to the risk of spreading the dehumanization of stakeholders rather than humanizing them [73].
Indeed, claims of dehumanization perils tied to AI are multiplying even among AI’s own developers and leaders, the most prominent case being the open letter to pause AI experiments issued in 2023 on the Future of Life website, with signatories ranging from tech leaders to global philosophers, figures as prominent as Elon Musk and Yuval Noah Harari. Nonetheless, the implications of AI business practices are rarely fully and effectively considered in the majority of AI ethical guidelines issued so far (Attard-Frost et al., 2023 [27]), further fueling concerns about the “let’s see how it goes” approach taken by companies releasing new technologies [102]. Against this scenario, the role of the ethicist within big tech industries, and generally in AI-implementing organizations, emerges as highly relevant to prevent ethics from being “spoken by power” instead of ethics speaking to power, i.e., to promote efficacy in tech ethics initiatives and shape organizational engagement with them [127]. Such involvement can yield the necessary “cross-organizational awareness” that has been discussed as crucial to prevent the “ethical nightmares” entailed by the spread of AI [67].
Such reclaiming of the role of moral reasoning within the business and public spheres has indeed gained urgency because of the “inevitability and contingency” of big tech’s support for ethics if the latter is to be effective [127]. This further confers new centrality upon the CSR moral argument as the main argument for preventing AI’s dehumanization perils. At this stage, the “human argument” relies on a call to responsibility in light of perils such as losing control over the technology [128], the fate of creative and artistic work and jobs [129], the systematic undermining of human rights [96], and the diminished quality of interpersonal relationships [61,130], among others. Within such a scenario, an assessment of strategic CSR needs to consider carefully whether corporate actions enhance or diminish the negative and positive impacts of these disruptive challenges. Relying on moral rather than business arguments has never been so salient for keeping people and their wellbeing at the center of organizational behaviors and mission [131], as washing and wrongdoing will entail much more than eventual boycotts or reputational damage.

8. Concluding Remarks

To summarize, calls to tame and prevent undesirable consequences of AI on humanity are multiplying, with corporate ethical behavior at the top of these concerns. Hence, framing the stakes of corporate involvement in AI ethics becomes central to informing organizational decision making and pursuing organizational AI responsibility. The moral case for responsible AI in business greatly contributes to stressing how, within the current social order, business not only has a social responsibility to comply with current and future regulatory frameworks but also has a distinctive human responsibility to consider when evaluating ethical dilemmas and trade-offs between AI-driven increases in profitability and production and their consequences for stakeholders’ wellbeing. This paper contributes by identifying a distinctive “human argument” backing ethical AI design and development, adding to debates on the underpinning arguments for CSR in at least three ways: first, by identifying the limits of the business argument underpinning strategic CSR when confronted with AI development; second, by discussing corporate practices of digital ethical washing as harmful both to organizational reputation and compliance and to social and human wellbeing; third, by suggesting the cross-organizational involvement of ethicists as a necessary corporate practice to guardrail against and prevent ethical AI shortcomings.

Such a “human argument” can complement common moral arguments for CSR and strategic CSR, integrating “the business case” and overcoming its main above-mentioned limits. In particular, it supports strategic CSR beyond common considerations of legal enforcement and compliance, and regardless of image returns, as the latter may induce washing behaviors and strategies. In this way, the ethics of AI represents an exemplary case of strategic CSR focused on the comprehensive protection of human rights at every stage of design and implementation [95,96]. The perspective discussed in this paper can inform various streams of future empirical research: scholars interested in “human-centered AI” development can focus on detailing which new professional roles within the organization are envisioned to ensure responsible AI development throughout the entire pipeline; scholars involved in evaluating CSR policies can rely on the human argument to test whether ethical guidelines and codes of ethical conduct focus only on short-term ethical risks or foresee long-term AI impact; and comparative studies can be conducted on organizations relying on strongly accentuated business arguments versus organizations showing exceptional moral commitment to the ethical use of AI, in order to help shape, or critique, the “human argument” perspective.
Furthermore, this view is in line with cornerstone traditions within CSR and business ethics scholarship, such as political CSR, which urges businesses to protect and enhance human rights and desirable human conditions in all those cases in which the state and the public lack the will or the power to intervene [15,132]. Future paths of research can explore how businesses are interacting with the public sphere to contribute to legal frameworks on the ethics of AI, or how they are hindering the enforcement of responsible AI laws.

The practical implications of the paper can be envisioned for all organizations confronting ethical AI challenges: first, focusing on the long-term impact and social repercussions of decision making concerning AI without underestimating its ethical pitfalls; second, foreseeing strategic planning of CSR strategies involving AI, either by promoting external consulting on the ethical guardrailing and training of technologies or by internalizing ethical surveillance at all stages of AI implementation; third, informing AI-focused CSR initiatives with a holistic view of which stakeholders would be affected and how to address them through specific involvement programs, in order to prevent dehumanization processes from arising, ultimately by diverting resources that might otherwise go to washing practices toward ethical programs aimed at engaging targeted audiences within AI transition processes.
