One Year After GPT-4: Where is Philanthropy?

Sarah Di Troia

We are closing in on one year since the release of GPT-4, the upgraded model behind ChatGPT, the first widely accessible generative AI large language model. Even though traditional AI has been part of the technology landscape for decades and of our consumer lives for the past ten years, GPT-4 catapulted AI into a reliable daily headline and a frequent topic of conversation. The conversation about AI among grantmakers, however, has not advanced dramatically since then, despite calls for grantmakers to do more.

Many funders remain in a holding pattern, unsure how to respond to the ease with which their staff and grantees can now access generative AI, let alone how to experiment with it for mission attainment or take a position on its impact on society.

The challenge is that AI is already here. In the fall of 2023, Project Evident partnered with Stanford's Institute for Human-Centered AI to conduct the first survey of AI interest and use across grantmakers and nonprofits in the social and education sectors. We believe it is important that the voice of the social sector is included in the AI discourse alongside industry and academia. This survey is a first step toward developing robust lines of communication between AI researchers and philanthropy, and toward assessing the needs of the field.

The survey revealed that 48 percent of funders and 66 percent of nonprofit respondents reported that their organization currently uses some type of AI. While the nonprofit field often lags industry in technology adoption, in this instance grantmakers are trailing their grantees. Given that nonprofits rely on funders for capital, this gap in levels of use could impede AI experimentation in the social and education sectors. The longer it takes for philanthropy and nonprofits to gain experience with AI, the greater the delay in having a strong, experience-based voice about AI's role in civil society.

What are grantmakers' barriers to funding and using AI? Part of the problem may be internal structures. Seventy-nine percent of grantmaker respondents reported having no specific technology grantmaking priority, and nearly as many have no plans to create one in the next year. Instead, most funding for technology is channeled through other priority funding areas. Because technology investments flow through other program areas, funders must educate more staff about AI to facilitate AI grantmaking.

Additionally, making technology grants primarily through priority funding areas could limit grantmakers' ability to address the underlying systemic factors that perpetuate the data divide between the commercial sector and the social and education sectors: that is, the unequal relationship between those with the skills and assets to collect and mine big data, and those whose data is collected. Systemic factors that contribute to the data divide include access to AI upskilling and professional development, technology talent, computing power, the investment capital required for a modern technology stack, and shared datasets. These barriers cut across individual grantees and program areas, and they may be overlooked when grantmakers look only through the lens of an individual program strategy.

Importantly, it is not skepticism about AI’s ability to create efficiencies and support mission attainment that limits grantmakers’ AI use. 78 percent of funders surveyed believe their organization would benefit from using more AI and over 90 percent of all survey respondents want to learn more. Rather, grantmakers cite fears of bias, uncertainty about how to use AI, and their own lack of expertise.

Bias in AI systems is the barrier to AI adoption most often cited by survey respondents, followed by difficulty envisioning how AI can be used and a lack of expertise inside the organization. These barriers underscore the need for AI education across the sector to support continued use and experimentation. Funders should invest in the development of unified, cost-effective, and scalable upskilling resources for both their own organizations and their grantees, with a focus on how AI can support mission attainment and how to avoid bias. They should also look to existing resources, like the framework for responsible AI adoption in philanthropy recently published by Project Evident and the Technology Association of Grantmakers. As part of this work, funders should invest in surfacing, developing, and disseminating case studies and stories of early adopters to track progress and share insights and findings.

By investing in AI projects in general, and specifically supporting the development of educational resources and case studies, we can begin to close the data divide with the commercial sector and give the social and education sectors a stronger voice in AI and civil society.

Sarah Di Troia is a senior strategic advisor, product innovation at Project Evident. Find her on LinkedIn and learn more about Project Evident at projectevident.org.

Editor’s Note: CEP publishes a range of perspectives. The views expressed here are those of the authors, not necessarily those of CEP.
