One of the most difficult tasks in nonprofit management is documenting the final outcomes an organization actually accomplishes. Organizations can and do measure outputs, that is, the number of times they do something or the number of people who participate. But measuring the impact caused by those activities is genuinely difficult for most nonprofits. Does the local soup kitchen really address the problem of poverty in the community, or does it just create dependency? Does a drug rehab program really help people break their habits, or were other factors involved that would have worked even without the program? Does a public campaign to combat racism really influence the community's behavior, and if so, how? These questions, and thousands more like them, can be answered with sophisticated and expensive research, but that research has not been affordable for most nonprofits. In the absence of research-driven data, most organizations have described their results by telling stories about individuals who have benefited from the program. The assumption is that if one person was impacted in a positive way, then other participants must be benefiting as well.
The nonprofit sector is now calling for a higher standard. This is an extremely important development, and it is long overdue. However, it will not be accomplished without struggle, missteps, and a great deal of trial and error. Charity Navigator will be adding results reporting to their rating system by 2016. Their hope is that within the next three years all nonprofits will produce consistent results data worthy of side-by-side comparisons. To begin this process, they will review five elements that they hope nonprofits will begin reporting on their own web pages. These elements include:
- Alignment of Mission, Solicitations and Resources
- Results Logic and Measures
- Validators
- Constituent Voice
- Published Evaluation Reports
We think this is an important step forward, but we fear Charity Navigator is being wildly optimistic, and their efforts could expose the sector to several risks. First, some organizations will not be able to gear up evaluations in time to meet Charity Navigator's schedule, and as a result their ratings will plummet, with unfortunate financial consequences. Rigorous professional evaluations, such as randomized controlled trials, can easily take three years after they are funded, approved, and designed. The second risk is that the pressure to produce results data will tempt organizations to dilute or diminish the task of evaluation. The result could be substandard evaluations producing inaccurate positive results. It is difficult to imagine that Charity Navigator will have the resources to judge the quality of those evaluations, so false positives are a real possibility. If that happens, donors could be misled by ratings based on poor information.
We support the notion of measuring results. In fact, we think it is critical! It needs to be a priority for every organization that solicits public funding. But Charity Navigator's past performance makes us a bit nervous about this new initiative. In our opinion, Charity Navigator's cookie-cutter approach to the financial evaluation of nonprofits fostered the public's misunderstanding of overhead, with significant negative consequences for the sector. We can only hope that this time they are leading by example and have thoroughly evaluated their earlier initiatives so those same mistakes will not be repeated. We would be less nervous if the results of such studies were posted on their website for all to see.
With or without Charity Navigator, evaluation data is critical and should be pursued as a top priority by all nonprofits.