It’s Time for Philanthropy to Get Involved Driving Equity in AI

Date: May 30, 2023

Sarah Di Troia

Senior Strategic Advisor, Product Innovation, Project Evident

AI is here, and it is changing the way many in the philanthropic and nonprofit sectors (and other fields, too, of course) work, or even think about their roles; for some, this is an exciting prospect, for others a deeply threatening one. For most of us, though, it lies in a grey area somewhere in between, even as we know its direct impact on us is imminent. In this series, we set out to get a snapshot of where AI stands right now: what it can actually do, how an expert advises those in philanthropy to approach it, how it’s already being used in the sector, and the ethical considerations and implications of the tool. 

In December 2020, Google fired Timnit Gebru, a leader of its ethical AI team. Since dismissing Gebru, Google has let go two other leading AI researchers. In November 2022, Twitter laid off all but one member of its Machine Learning Ethics, Transparency, and Accountability division. In January 2023, Microsoft eliminated its Ethics and Society team. In March 2023, Twitch (owned by Amazon) laid off or reassigned its Responsible AI group.

These recent dismissals of AI ethics experts by companies at the forefront of the AI transformation have occurred in parallel with growing societal concern about AI. AI will create huge societal benefits, but it also carries potential burdens: the unchecked replication of human bias, accelerating and cementing inequities. Currently, there is a lack of clear financial incentives to consider the ethical aspects of AI in Environmental, Social, and Governance (ESG) and impact reporting, and an absence of federal AI regulation. Teams focused on AI ethics are costly; as such, ethics staff are vulnerable to technology layoffs, arguably at the time when we need them most. AI is here, and it is clear that philanthropy has a powerful role to play in highlighting and shaping equity in AI, both in the social and education sector and in our broader society.

Despite this shedding of AI ethics experts by technology companies, many organizations, government agencies, and academic institutions across the globe are working on various aspects of ethical AI, as detailed by AIethicist.org, a global repository of current research on AI ethics. The challenge is that AI tools and processes, like other technology created by for-profits, are optimized for scalability and profitability, not equity. In advance of AI regulation that addresses bias, philanthropy has an influential role in shaping AI, both as a consumer that prioritizes equity in its purchasing decisions and as a supporter of AI and equity advocacy efforts.

AI will have a tremendous impact on our society and on how we work, including the work of philanthropists and social and education sector practitioners. Foundation leaders must therefore take steps to learn about AI and envision how it will reshape their own activities and those of their grantees, with a view toward driving more equitable outcomes. Thinking about the intersection of equity and AI is a strength of the nonprofit and philanthropic sector, and it is time for our sector to get active. Grantmakers have four powerful ways to shape the market for AI tools and services to be inclusive and equitable:

1. Encourage your team to start learning about AI. Foundations can host AI professional development and encourage staff to attend webinars and classes that build their knowledge of how AI can be used to create equitable outcomes in the social sector. Among other opportunities, Project Evident offers social sector AI case studies and hosts AI 101 webinars and master classes for funders and practitioners, and the Evaluators’ Institute at Claremont Graduate University offers a two-day class on Machine Learning for Evaluators. Encourage your staff to experiment with ChatGPT to draft communications or summarize knowledge in an area of interest. To shape the market for AI tools to be inclusive of equity, we first need to understand AI basics.

2. Align your AI administrative dollars with your equity values. Many foundations are interested in how AI could make their grantmaking processes more efficient. AI can lessen the burden of repetitive tasks, allowing humans to do more of what they are uniquely suited to: listening to community needs, engaging with social and education sector organizations, and thinking strategically. When evaluating AI vendors, or grants management vendors that incorporate AI, insist on understanding each company’s values. Ask about the data used to train the algorithm, how the company tested for bias, whether the algorithm is transparent enough for third parties to test it, and how it will be monitored over time. One of the reasons AI vendors don’t address equity is that for-profit customers don’t ask about it. In 2018, private giving from individuals, foundations, and businesses totaled over $425 billion; it is time we start asking.

3. Make more AI grants and align those gifts with your equity values. AI is quickly coming into the nonprofit sector, but we do not have the same 20-year runway the for-profit sector had to explore how AI could create value. Only a few pioneering funders are granting in the space, including Google.org, the Patrick J. McGovern Foundation, and Fast Forward. We need to catch up as a sector by funding AI “native” nonprofits (those that already deploy AI to drive equitable outcomes) as well as AI “explorers” (those that have data and are interested in piloting AI but are constrained in their access to capital and support). Within the framework of a general operating grant, foundations could prescribe that grant dollars spent on AI vendors follow similar criteria of aligned values and attention to bias. By tying grant dollars to equity values, funders can signal to AI vendors that equity is a third pillar, alongside scalability and profitability, if AI is to be a positive force in our society. And by funding more social and education sector AI experimentation, we can learn what works to drive equitable outcomes.

4. Support advocacy efforts that address AI and equity. For-profit customers don’t ask about equity in AI tools because the law does not require it. AI vendors don’t address equity because their customers don’t ask for it, and companies are not inclined to take on extra work that is not tied to profits. Philanthropy-supported coalitions can create the regulatory and market pressure needed to design AI tools and applications with equity in mind. Canada, the UK, and the EU are already meeting to discuss AI regulation that would safeguard against potential AI harms. Foundations and philanthropists have an essential role to play in supporting equity, rooted in their experience working alongside nonprofits to address the institutionalized bias that takes hold in our society when equity is not prioritized. As a sector, let’s make sure that AI is part of the solution, not a magnifier of the problem.

Sarah Di Troia is a senior strategic advisor, product innovation at Project Evident. Find her on LinkedIn and learn more about Project Evident at projectevident.org.

Editor’s Note: CEP publishes a range of perspectives. The views expressed here are those of the authors, not necessarily those of CEP.
