
Don’t Presuppose AI Has an Inevitably Positive Purpose

Date: January 8, 2026

Jeff Hauser

Founder and Executive Director, Revolving Door Project


The Center for Effective Philanthropy recently released a report titled “AI With Purpose” on how nonprofits are, or (for the most part) aren’t, integrating AI technologies into their daily operations. The report warns that “few foundations provide funding or monetary support for grantees’ use of AI,” partly because “foundation staff lack an understanding of AI’s nonprofit-related needs” — the first of which is “education” on what exactly the benefit of AI would be for them. Despite these challenges, the report concludes, “much opportunity remains” to “fund capacity building around equitable AI usage.”

Does it? Is this an untapped fountain of philanthropic potential — or are nonprofits not using AI because these technologies simply don’t offer much benefit for them?

It is genuinely unclear what the future looks like for the bundle of algorithmic content-generating technologies known colloquially as “AI.” These technologies very plausibly offer significant benefits to practitioners within hyper-technical, information-intensive industries based around the natural sciences, like medicine and climate research. But most people — and certainly most nonprofits and philanthropies — do not conduct the sort of scientific research where AI is proving to have the greatest benefit. For those of us who engage with governments and communities for a living, it’s hard to divine what exactly AI offers that we can’t get at comparable cost and higher quality elsewhere.

To most people, and most nonprofits, ChatGPT is AI. Fittingly, the CEP blog published an interview with OpenAI’s “head of global policy” (read: de facto chief lobbyist) Chris Lehane alongside the report. Lehane said what he gets paid to say: that AI’s “potential is enormous, and the moment to start is now,” because “The sooner nonprofit leaders understand AI, the better positioned they’ll be to harness its power to scale their impact and avoid falling behind.”

The problem is that CEP’s own report indicates that nonprofits are already experimenting with AI and simply are not finding the enormous potential Lehane promises. According to the report, two-thirds of nonprofits are already using AI, but mostly for routine tasks like internal communications.

This comports with the evidence from other white-collar professions. While more people are using ChatGPT and other (frequently free) large language models every year, users have found little commercial utility beyond drafting rote emails, condensing technical reports, and helping with the occasional bout of writer’s block. Any more demanding writing task requires actual judgment calls from the writer, which quickly runs into the problem of AI “hallucinations” — these tools don’t actually comprehend any of the words they’re generating, so they regularly produce misinformation. Some experts think this may be an inherent, unfixable aspect of the technology. It’s also the single largest concern about AI within the nonprofit sector, according to the CEP report.

Image-generating tools haven’t fared much better. It may be impressive to watch a computer assemble an image from a few written instructions, but the most useful commercial application of this technology seems to be little more than a way to avoid paying licensing fees to stock photo companies like Shutterstock. Meanwhile, the same “hallucination” problem produces images with extra limbs and flagrant misspellings, and the public has recoiled at the uncanny, art-by-committee visual style of AI “slop.”

These minimal benefits come with tremendous moral cost — and morality ought to be front of mind for any nonprofit. For one, AI tools, especially image generators, consume huge amounts of energy. Right now, that energy is largely produced via fossil fuels. This might be worth it for climate researchers compiling enormous amounts of technical data, but a nonprofit employee just looking for an image thumbnail to put on a blog post has plenty of other options that don’t harm the planet.

Then there’s the fear of dislocation for workers. Anyone working to fight poverty, hunger, and disinvestment in disadvantaged communities will face severe cognitive dissonance using AI to advance those goals; after all, the sky-high financial valuations of these technologies rest on a tacit promise of eventually replacing millions of people’s jobs. Far from benefiting education, AI appears to be hindering students’ development of critical thinking and research skills, and even fueling new forms of sexual harassment and bullying. And there’s growing concern over so-called “AI psychosis,” in which extensive chatbot use drives delusional and even suicidal thoughts.

All told, these technologies just don’t offer anything especially useful for the nonprofit workplace. And they come at a real cost to the sort of ethical principles that attract people to nonprofit work in the first place. Indeed, the framing of the report is rather telling; it takes as a given that nonprofits should be using AI, yet never really explains why. Implicitly, the many people foundering in their considerable effort to find white-collar utility in AI are accused of being impediments to progress — why can’t they see that the emperor’s AI clothing is both fashionable and functional, rather than failing to provide any actual cover?

In this respect, the CEP report is just echoing the most common framing of AI discourse nationwide. In politics, finance, education, and more, we are told constantly that AI is an inevitable future to which we must all adapt before our rivals do. But much of this is marketing puffery from an industry widely accused of inflating a financial asset bubble. The AI “industry” offers capital-intensive products that plenty of people seem willing to toy around with for free, but that almost no one seems willing to pay for.

Still, whatever happens in the commercial AI industry, nonprofits should not feel pressured to adopt any technology that offers little clear benefit and incurs major environmental, economic, and ethical costs. Workers and funders alike should do what drew them to philanthropy in the first place — follow their consciences, not hop on trends for their own sake (or at the instigation of marketers who arrive under the pretense of offering friendly advice).

Jeff Hauser is the founder and executive director of Revolving Door Project.

Editor’s Note: CEP publishes a range of perspectives. The views expressed here are those of the authors, not necessarily those of CEP.
