Standards around responsible AI are beginning to emerge. In the United States, Alondra Nelson, Deputy Assistant to the President and OSTP Deputy Director for Science and Society, put forth the Blueprint for an AI Bill of Rights. In addition, the National Institute of Standards and Technology recently released guidance for developing responsible AI. In its Artificial Intelligence Risk Management Framework, published in January 2023, and its newly launched Trustworthy and Responsible AI Reference Center, NIST references the environmental impact of AI: “AI technologies, however, also pose risks that can negatively impact individuals, groups, organizations, communities, society, the environment, and the planet.” Despite these developments, stronger connections between environmental and social harms are still needed. At the Green Software Foundation, we believe that responsible AI must weigh AI’s carbon emissions alongside its social ramifications. Practitioners must pay attention to emerging standards and research findings to mitigate AI’s negative social and environmental impacts.
Many AI researchers have pointed out that social justice and environmental concerns are often related. In their famous “Stochastic Parrots” paper, Emily M. Bender et al. examine the environmental and ethical issues associated with ever-larger language models. The authors connect the environmental impact of ML to its other ethical implications, including how such models perpetuate inequality: through filtering mechanisms and harmful ideologies, they exhibit prejudice against LGBTQ+ individuals and ethnic minorities. Meanwhile, LMs tend to benefit those who already hold the most power and privilege. As the authors argue, “It is past time for researchers to prioritize energy efficiency and cost to reduce negative environmental impact and inequitable access to resources — both of which disproportionately affect people who are already in marginalized positions.” Mozilla researchers are examining case studies in which AI addresses environmental justice issues. Using the concept of “speculative friction,” they ask workshop participants to imagine new forms of governance and accessibility that could emerge from slowing down the production and deployment of AI.
Several respondents to the SOGS survey noted the importance of integrating responsible AI with green software initiatives. This intersection is likely to become a focus area for software practitioners, advocates, researchers, and policymakers, especially given the journalistic and regulatory attention now trained on generative AI.