Artificial Intelligence Can Help Nonprofits Reach More Donors, But Fundraisers Can't Ignore Potential Pitfalls

My colleague always takes the time to write thoughtful notes to donors to our university, and I am moved by the effort she puts into it. She learns each donor's name and researches their stories so she can personalize each note.

I remember seeing her write to a longtime donor who had recently passed away. She wrote about how much her gifts had meant to the university and how her legacy would live on for generations to come…

Until those last two sentences, you were probably enjoying the personal tone and clear writing of this essay. Now you’re probably wondering how the editors at the Chronicle of Philanthropy missed an obvious error: Why was my colleague writing to someone who was dead?

Well, artificial intelligence is not perfect. But considering the minimal effort I put in, it’s not bad. To create the opening of this piece, I asked the new text-generating artificial intelligence system ChatGPT to “write a short, touching anecdote about watching a colleague who takes the time to write thoughtful notes to donors to our university.”

Here is the rest of the story ChatGPT wrote about my alleged colleague’s correspondence with the deceased donor: “She wrote about how her generosity had touched the lives of so many students and how her memory would live on at the university. I was so moved by her words and her effort to pay such an intimate and thoughtful tribute to this generous donor. It reminded me that our work has the power to touch lives in ways we did not expect.”

Aside from that fatal flaw, the overall effect is remarkably human. There is pathos, direction, and embellishment that I doubt I would have come up with on my own.

It was a fun exercise, but also an eye-opener. It showed me the potential of language tools like ChatGPT to change philanthropy in meaningful ways. Donor communications, proposal writing, giving profiles, giving reports, press releases – and, at the other end, philanthropic funding decisions – will, for better or worse, be increasingly influenced by AI.

Although I now work in the advancement department at the University of Utah, I started my career as a data scientist. It was common in the field to scoff at those who were too enamored with AI’s potential to destroy the world or to build a better one. To many of us, AI was just a glorified form of so-called “curve fitting” – it could find patterns in data like any statistician would, but it wasn’t capable of higher intelligence. Early text models confirmed this, producing barely convincing paragraphs and failing Turing tests, the method devised by famed computer scientist Alan Turing to tell people from machines.

More recently, the scope and scale of these models have given them a sophistication and set of capabilities that once seemed impossible. And while human-level AI is still a long way off, recent advances have shown that higher-order thinking isn’t necessary for some of its most important applications.

What does this mean for philanthropy? On the fundraising side, AI will create efficiencies and simplify work that was previously done by experts, such as gathering information on potential donors and developing marketing campaigns.

ChatGPT is still in a preview phase, but anyone can sign up and test it for free. If the eventual costs are reasonable (a big “if” given the computing power required), it’s easy to imagine small nonprofits creating funding requests on par with those of large nonprofits and big, sophisticated universities, like the one where I work.

AI will likely reduce the cost per dollar raised, especially for smaller organizations that lack the resources to communicate regularly with donors. This will help with accounting, research, hiring, and even more abstract tasks, such as designing a theory of change and demonstrating program impact. The overall effect could be to level the playing field, giving historically marginalized organizations the resources to compete for funding against the largest and best-resourced nonprofits.

The downsides of adopting ever more powerful AI are harder to predict. As many on social media have discovered, ChatGPT can be both confidently wrong and oddly persuasive in its incorrect reasoning. While the examples tend to be inconsistent and often depend on the wording of the prompt, this is a concerning flaw.

Finding beneficiaries

Donors will also rely more on AI to achieve their philanthropic goals. It will likely start with consulting firms using it to help foundations select from a sea of potential grant recipients. They could use prompts such as “find all Hispanic-serving institutions with below-average graduation rates that are led by a progressive president.” Even a small variation in the prompt – or, more concerning, in the underlying data – could yield seemingly correct results but miss the most impactful, deserving, or needy organizations. When I tried this prompt, ChatGPT produced a list of 10 universities, but a quick fact check revealed that the information it provided was, in several cases, incorrect or incomplete.

Research shows that algorithms often misbehave when they are built on fallible and biased data. “Big Data processes codify the past. They do not invent the future,” writes Cathy O’Neil in her book “Weapons of Math Destruction.”

Human contact is essential

The danger is that consultants, donors, and fundraisers will rush to use the technology before its limitations are understood. To ensure that philanthropy benefits from AI without exacerbating its potential for harm, a deliberate, human-led approach to monitoring the technology will be needed. O’Neil advocates for ethical audits that reveal bias and discrimination in these types of systems, a difficult task given the billions of data points and unpredictable responses of ChatGPT and other new systems.

I put this challenge to ChatGPT, and it produced an entirely reasonable and valuable response: “There needs to be an emphasis on using AI for the public good and avoiding the pitfalls of using it to amplify existing inequalities. This will require a holistic strategy focused on AI education, responsible use of data, and access to technology and resources for underserved and disadvantaged communities. In addition, there will need to be clear guidelines and ethical frameworks for the use of AI in philanthropy, and a focus on transparency and accountability. Finally, it will be important to ensure that those who make decisions about the use of AI in philanthropy come from diverse backgrounds and are aware of the potential consequences of their decisions.”

Ironically, the AI can write these guidelines, but it cannot implement them. It will be up to the humans of the nonprofit world to tackle the issues of transparency, accountability, and diversity that ChatGPT so accurately describes. Much like the fictional fundraiser it described, who wrote personal and thoughtful letters to each donor, it will take a human-centered and heartfelt approach to get it right.
