Responsible AI: How Philanthropy Can (and Should) Support the Movement

Date: March 5, 2024

Rachel Kimber

Speaker, Consultant, and Grantmaking Strategist

Joanna Drew

Founder, Hilo Consulting

Ravit Dotan

Founder and CEO, TechBetter

Mark Greer II

Co-Executive Director, Transforming Power Fund

2023 was the year AI ate the internet, according to The New Yorker and anyone not living under a rock. AI has so deeply penetrated our society that we can’t ignore the existential questions it raises, from the future of work to ethical dilemmas to questions about whether it will exacerbate bias and inequity or be used for the good of civil society.

While the for-profit sector is light-years ahead of its nonprofit counterpart, readily adopting AI into its operations and tech stacks, the social sector is lagging woefully, if unsurprisingly, behind, given the resources needed to embark on such a transformation. Recognizing the need for a fundamental cultural shift, philanthropy has a responsibility to encourage, fund, and accelerate the adoption of responsible AI in and for the social sector.

We are all individually called to experiment and learn about AI, warts and all, but funding is the critical element in ensuring that AI is harnessed for good. And we all know nonprofit infrastructure is perennially and abysmally underfunded. So take this as a call to action, dear philanthropic partners: your role is crucial in moving the sector forward and, equally crucial, in supporting responsible AI development and adoption.

Making Inroads on Responsible AI Adoption

Despite the lack of funding, the sector presses forward. Nathan Chappell, an author and thought leader at the intersection of AI and philanthropy, and Mallory Erickson, a fundraising consultant and coach, among others, have created Fundraising.AI, a steering committee to help fundraisers promote the development and use of responsible artificial intelligence in their work. The Fundraising.AI team hosted a two-day virtual summit in October, covering topics ranging from the evolution of artificial intelligence to practitioner use cases and the ethical implications of incorporating AI into fundraising, and offering a much-needed infusion of visionary thinking on AI in the social sector.

Soon after the summit, in December, the Technology Association of Grantmakers (TAG), in collaboration with Project Evident, released the “Responsible AI Adoption Framework.” The framework was developed to “guide grantmakers in the tactical and strategic adoption of AI.” Underscoring the importance of “human-centered adoption,” it outlines considerations for incorporating AI into philanthropy across three areas: individual use, organizational efficiency, and mission attainment. As version one of a living, breathing framework shaped by input from over 300 grantmakers, it gives us all a solid foundation to build on in 2024 and beyond.

Addressing the Digital Divide

We are now in the sixth wave of innovation, according to economists and historians who theorize that progress is marked by major technological advancements that lead to significant societal changes. But history tells us that waves of innovation, especially in the context of digital technologies, contribute to a digital divide. The benefits of technological advancement are rarely distributed evenly and tend to widen economic disparities.

The digital divide is now a digital chasm. Fundraisers and grantmakers alike acknowledge that the playing field has undergone seismic shifts that require our attention. But the digital divide that has been growing at least since the Internet became commonplace is not an inevitable outgrowth of technological advancement.

At the December Salesforce World Tour NYC, Roy Lee, associate dean of analytics and technology at NYU Stern School of Business, articulated a path forward that, critical as it is, is often overlooked: digital transformation succeeds only when champions, end users, and powerful (resourced) decision-makers leverage the people, processes, and tools we engage with every day. This is especially true for AI implementation and adoption.

This moment of accelerated innovation requires a holistic funding approach that considers not only the technological challenges but also the social and ethical implications. Philanthropy has a moral obligation to steward initiatives that address both the longstanding digital divide and emergent — and sometimes disruptive — technologies. It is not only an imperative for philanthropy but also an opportunity to harness AI for its own funding decisions and play a pivotal role in its development. It is much more effective to catch the wave than swim against the tide.

Ensuring a Sustainable AI Future

Turning back to the social sector: How are we — fundraisers, grantmakers, academics, nonprofit folks — going to ride this wave of innovation in service of social impact and civil society? These four crucial concepts serve as a possible framework to help philanthropy spearhead a collaborative approach to developing and adopting responsible AI.

1. Drive Sector-Level Vision

In philanthropy, we share a vision of contributing to something bigger than our individual selves, to help fund solutions to the world’s wicked problems. We can do this not only by vigorously supporting the social sector in developing and adopting AI tools across the board, but by tapping into the expansive power of predictive and generative AI as funders. These tools have the potential to reshape the way we use historical data to inform our funding choices while simultaneously staying attuned to changing community needs. If developed and used responsibly, AI gives philanthropy a 360-degree vantage point, driving our sector-level vision of how we create and maximize positive social impact.

2. Fund the Back-End to Front-Load Impact

The foundational barriers to AI adoption in the social sector are compounded by the scant resources (read: time and money) with which most nonprofits operate. And it is nonprofits, and by extension all of us in the global community, that stand to gain the most by adopting AI.

Philanthropy can and should play a crucial role in funding solutions that help the social sector and civil society access innovative technology, build capacity to manage its adoption and use, and share knowledge across the sector to improve the efficacy of the tools. The Patrick J. McGovern Foundation has set an example for philanthropy’s role in funding AI solutions, recently announcing $66.4 million in awards to 148 organizations that are “leveraging AI and data science to address the world’s most urgent and complex challenges.”

3. Develop Guidelines

One of the top-rated sessions at the Fundraising.AI summit in October was a discussion with Microsoft Legal Counsel Cass Matthews, who introduced the importance of governance and guardrails as the essential starting point when introducing novel technological solutions. We will accelerate the responsible use of AI once we determine the rules of engagement and start building collaborative learning playing fields.

While the shared terms of engagement are yet to be defined, once guidelines are proposed and iteratively reviewed, it is philanthropy’s responsibility to experiment with those boundaries and determine what works best. In fact, it is incumbent on all of us to experiment, play, and experiment more. We learn through play. And, incidentally, so does AI.

4. Invest in Equitable Innovation

We must also be aware of the transformative potential and inherent risks AI poses to vulnerable communities (Black, Indigenous, and other people of color, women, LGBTQ+ individuals, and others). One of the primary concerns is the risk of AI perpetuating systemic biases. For instance, facial recognition technology has higher error rates for people of color, leading to wrongful identifications and unfair surveillance. AI used in hiring processes to automatically scan and weed out applicants can also lead to algorithmic bias, potentially discriminating against legally protected candidates.

We must advocate for funding initiatives that rigorously audit AI systems for such biases and develop more equitable algorithms. This involves supporting research and advocacy that diversifies AI training data and promotes transparency in AI decision-making processes.

Beyond protection, philanthropy should also focus on supporting AI applications that directly benefit vulnerable communities. This includes investment in healthcare AI that accounts for diverse medical needs and historical health disparities, AI-driven environmental justice initiatives that help monitor and address pollution in marginalized areas, and AI tools that aid in equitable resource distribution in areas like education, housing, and business development.

As the entity that wields the most power in the social impact space (dollars equal power), philanthropy has a responsibility to thoughtfully address one of the biggest challenges we will face in the history of our sector. How we act now will determine the future of our work. We are creating history, and while we may not like change, we will like irrelevance a whole lot less.

Rachel Kimber is a grantmaking strategist and speaker/consultant on digital strategies and technological solutions for social impact. Joanna Drew is an independent grant writing consultant and founder of Hilo Consulting. Ravit Dotan, Ph.D. is an AI ethics advisor, researcher, and speaker. Mark Greer II is co-executive director at the Transforming Power Fund.

Editor’s Note: CEP publishes a range of perspectives. The views expressed here are those of the authors, not necessarily those of CEP.
