To accompany CEP’s new report on AI, Cascade Philanthropy Advisors President Lowell Weiss interviewed 15 leaders from foundations, nonprofits, and affinity groups. In the first post in this two-part series, Weiss shared his conversation with Chris Lehane, head of global policy at OpenAI, about the upsides and downsides of AI for the social sector. In this second and final post, Weiss shares insights from a number of the other interviews he conducted.
While technology expert Lucy Bernholz counsels foundations and nonprofits to avoid AI platforms altogether (“Agentic AI is not your friend,” she noted), almost all of my interviewees expressed the strong view that we must engage. In the representative words of Chantal Forster, who consults with foundation CEOs on AI, “We as a sector need to recognize that engaging with AI doesn’t equal endorsement. We must engage and … shape an equitable future for AI.”
I came away from these interviews with the view that learning about and engaging with AI should not just be a priority for our sector. It must be a priority for CEOs. “There’s a real need for more than keyboard warriors to understand AI,” explained Alicia Morrison, Mercy Corps’s interim Senior Director, Technology for Development. “Executives will need to make decisions about AI, and it’s not doing them any good to hide from it. There are huge opportunities and huge risks.”
On the opportunity side, for example, Mercy Corps has the ability to shift from reactive to proactive mode when disasters loom. In fact, it is already doing so. When war broke out in Sudan in 2023, the organization acquired 10 years of satellite imagery showing the health of the country’s crops in each of those years, based on how plants reflect light in different wavelengths. Then they used AI to compare real-time crop health with those historical patterns. “Our Sudan team could instantly see, in red, the areas where people were most vulnerable,” explained Morrison.
AI gave Mercy Corps the ability to deploy its resources where they were needed most — before disaster struck.
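Mercy Corps has not published the details of its analysis, but the general approach it describes — comparing current crop health against a multi-year baseline built from how vegetation reflects light — can be sketched in a few lines of Python. The vegetation index, array shapes, and threshold below are illustrative assumptions, not the organization’s actual pipeline.

```python
# Illustrative sketch: flag areas where current crop health falls well below
# a 10-year historical norm, using a standard vegetation index (NDVI).
# Band arrays, shapes, and the threshold are hypothetical.
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """Normalized Difference Vegetation Index from near-infrared and red bands."""
    return (nir - red) / (nir + red + 1e-9)

def vulnerability_map(current_ndvi: np.ndarray,
                      historical_ndvi: np.ndarray,  # shape: (years, height, width)
                      z_threshold: float = -1.5) -> np.ndarray:
    """Mark pixels whose current NDVI sits far below the multi-year mean."""
    mean = historical_ndvi.mean(axis=0)
    std = historical_ndvi.std(axis=0) + 1e-9
    z_score = (current_ndvi - mean) / std
    return z_score < z_threshold  # True = likely crop stress / vulnerable area
```

The design choice that matters here is the baseline: rather than judging a single image on its own, each location is compared with its own history, so the map highlights change, which is what an aid organization needs to act on before a crisis peaks.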
On the risk side, my interviewees helped me see that three major threats are already upon us.
Inaccurate and Biased Results: According to philanthropist and technology entrepreneur Bill Shihara, all AI companies — even the ones structured as nonprofits or B Corporations — are prioritizing speed over accuracy. “They are creating platforms that are only as smart as the data they have access to, and so much of the data they’re internalizing is deeply biased,” he said. “I predict that dynamic is going to get worse, not better.” To mitigate this risk, nonprofit leaders must keep “a human in the loop.”
In other words, we must not turn over decision-making to AI bots, and we must always scrutinize AI results rather than accept them as gospel.
Compromised Data: No AI platform is safe for private data, such as the outcomes data that foundations receive from grantees. Therefore, chief executives must create policies and other mechanisms to ensure that employees either a) use “closed systems” like Snowflake when analyzing data sets that contain personally identifiable information or b) strip out identifying information and use only ID numbers when working with open systems like ChatGPT.
“Even though Snowflake operates in the cloud, our ‘tenant’ is secure and isolated from others, meaning that only our organization can access and process the data within that environment,” explained Equal Opportunity Schools Chief Product and Strategy Officer Jessica Paulson. “The system is considered ‘closed’ because data never leaves our secure cloud ‘instance,’ and all AI processing happens within that controlled boundary.”
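To make option (b) above concrete, here is a minimal sketch of how identifying columns in a grantee data table could be swapped for ID numbers before anything is shared with an open system, while a lookup table stays on the organization’s own systems. The column names and helper function are hypothetical, not any vendor’s or organization’s actual process.

```python
# Illustrative sketch of option (b): replace names and emails with stable ID
# numbers before sending grantee records to an open system such as ChatGPT.
# Column names and the lookup structure are hypothetical.
import pandas as pd

def pseudonymize(df: pd.DataFrame, id_cols=("name", "email")) -> tuple[pd.DataFrame, dict]:
    """Return a copy of the data with identifying columns replaced by ID numbers,
    plus a lookup table kept locally so results can be re-linked later."""
    df = df.copy()
    lookup = {}
    for col in id_cols:
        codes, uniques = pd.factorize(df[col])
        lookup[col] = dict(enumerate(uniques))  # stays on your own systems
        df[col] = [f"{col.upper()}_{c:05d}" for c in codes]
    return df, lookup

# Usage: safe_df, lookup = pseudonymize(grantee_outcomes)
# Share only safe_df with the open system; keep `lookup` internal.
```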
Exacerbating Wicked Problems: AI could drive unemployment. Its data centers could reverse progress toward curbing carbon emissions. It could destabilize democracies with misinformation and Big Brother surveillance. And it could perpetuate and scale existing social inequities. What can be done to mitigate these risks?
The answer from my interviews was clear: Foundations and nonprofits must engage in public policy debates aimed at adopting appropriate protections. “What keeps me up at night? It’s the idea of the nonprofit sector disengaging on AI because of a lack of clarity about how AI adoption impacts societal structures and inequities,” according to Kapor Foundation Chief Research Officer Sonia Koshy, Ph.D.
MacArthur Foundation President John Palfrey is among the small number of leaders who appear ready to challenge their foundation and nonprofit peers to join forces and help bend the arc of AI toward justice. Writing in his 2025 annual letter, Palfrey said:
If we move together and with conviction, and ground our investments in inclusive values, we can build a better system that both protects the public interest from the worst excesses and realizes the opportunities AI could provide … The people who are impacted by AI systems should be included and empowered in their implementation.
A key tenet of both the disability rights and Indigenous Peoples’ movements is, “Nothing about us without us.” That’s also a useful formulation for all of us in the social sector when it comes to AI and other technologies that have tremendous potential both to enhance and to upend our lives.
Even AI itself agrees. I asked ChatGPT, “Should foundation and nonprofit leaders dedicate some of their valuable time to engaging in public policy related to AI?” Here was its no-nonsense response: “Leaders who stay silent risk letting purely commercial or governmental interests dominate these debates.” And then it offered to sketch a practical engagement roadmap that leaders could use as a guide. Let’s do that ourselves. And fast.
Lowell Weiss is the president of Cascade Philanthropy Advisors, a former deputy director of the Gates Foundation, and a former White House speechwriter.


