Reap the Benefits of Your Hard Work: Practical Strategies for Using Impact Data


Can you imagine spending months tending to your garden, only to let the vegetables you’ve been growing rot before anyone can enjoy them? Hardly. Yet when it comes to data collection and use, many organizations do just that.

They toil away setting up plans to deliver services and track results. They pour sweat and tears into data collection. Then once the data rolls in, instead of using it to inform their decisions and future planning, they let it sit on a shelf: stacks of unread surveys, a funder-required database nobody opens, and so on.

Nobody wants their data to turn into a metaphorical bucket of rotten vegetables.  Yet it’s easy for it to happen if you’re not mindful.

In my almost 20 years of helping organizations get a handle on their outcomes, I’ve seen a few trends emerge that can make or break data use.  The good news is the pitfalls are avoidable if you are aware of them. And technology has made some of the necessary steps easier.

Here’s to hoping the experiences others have had – outlined below – can help you become a more results-oriented organization and reap the benefits of harvested data.

Issue #1: They Don’t Know What Data They Want or Need

Sometimes the thing keeping organizations from really using their data is that they’ve never thought carefully about what kind of data would be most useful to them.  Sure, they may have set up systems to ensure they can report to funders, but they haven’t considered what would be most useful to their internal conversations and decisions.

Using a framework like a logic model for planning can help organizations understand the relationship between what they do (services or activities) and the results they expect (outputs and outcomes).  It can thus help organizations more easily identify the data they require to determine if those desired results have been achieved.  Without one, choosing is more of a guessing game and can be swayed by external requests.

Sometimes the results an organization prioritizes by using a logic model are the same as those requested by their funders, but not always.  So thinking about what is most important for internal use – independent of external demands – can help ensure you are tracking what you want or need. Once the data starts rolling in, it can also make it easier to focus on your key results amidst all the other information you’re collecting.

Issue #2: They Don’t Have the Data They Want

As noted above, organizations should ideally think upfront about the data they want or need.  That way, they can collect it from the point their programs begin. So the first step to ensuring you have the data you want is to identify it and begin tracking before it’s too late.

The next step is designing data collection tools and processes that are feasible, which helps ensure you actually collect the information you’re seeking. In my experience, when things get busy, the first thing to fall by the wayside is whatever is not a natural part of the day-to-day work.  Often that is a cumbersome data collection process. That’s why it is so important for organizations to use techniques that are embedded in their work whenever possible.

For example, is there a point in your program when you already ask your clients certain relevant questions? If so, can you tweak your existing processes and make it easier to track and compile the desired data across all clients?  Is there a way technology might further streamline things?

Focus on creating a data collection plan that is both meaningful and manageable.  Otherwise, it’s possible you’ll get to the end of a program and not have any data to show for it.

Issue #3: They Don’t Have the Capacity to Easily Compile Data

When I teach workshops on results tracking, I often ask attendees to raise their hand if somewhere in their office there exists a stack of paper surveys that nobody has compiled.  Sheepishly, hands pop up from all corners. Why is this so common? Because organizations often don’t think about how their survey data is going to be compiled before the surveys go out.

Intake forms, treatment plans and even database fields frequently experience the same fate. Like paper surveys, they do a good job of capturing information on each individual participant, and can serve to inform the staff’s approach with that person, but don’t capture the data in a way that makes compilation and review possible.  Online platforms can help, but only if people are mindful upfront of how questions are asked and fields are designed. And that takes thinking ahead to what kind of data you want to review and how it will be compiled.
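To make the point concrete: when every response uses the same field names and a closed set of answer choices, compilation becomes almost trivial. Here is a minimal sketch in Python, using only the standard library; the field names and answer choices are invented for illustration, not drawn from any particular platform.

```python
from collections import Counter

# Hypothetical survey responses, one record per participant.
# Because each question has a fixed field name and a closed set of
# answer choices, tallying across all clients is a one-liner.
responses = [
    {"client_id": 1, "confidence": "Improved", "employed": "Yes"},
    {"client_id": 2, "confidence": "Improved", "employed": "No"},
    {"client_id": 3, "confidence": "No change", "employed": "Yes"},
]

def compile_question(records, field):
    """Tally the answers to one question across all participants."""
    return Counter(r[field] for r in records)

confidence_summary = compile_question(responses, "confidence")
employment_summary = compile_question(responses, "employed")
```

If instead each answer lived in a free-text box, or the same question were worded differently on different forms, no amount of after-the-fact scripting would produce a clean tally. That is the "thinking ahead" the paragraph above describes.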

Issue #4: They Don’t Have a User-Friendly Format to Review Compiled Data

The next challenge many organizations face is putting the compiled data into a useable format – something that highlights key points in a digestible way.  Snapshot reports or “dashboards” have become increasingly popular for this purpose. But a useful dashboard doesn’t happen automatically.

It requires first identifying what you want to review and when.  It also requires the person-power and/or technological ability to transform the information you’ve collected and compiled into a useable format.  Fortunately, more and more plug-ins exist to support this process, but they still require organizations to invest time and resources to make them productive tools.
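Even a modest script can do the "transform into a useable format" step. The sketch below turns a compiled count into a single dashboard line; the label and numbers are hypothetical, and a real dashboard tool would add charts and filters on top of the same idea.

```python
def dashboard_row(label, count, total):
    """Format one dashboard line as 'label: count (pct%)'."""
    pct = round(100 * count / total)
    return f"{label}: {count} ({pct}%)"

# Hypothetical compiled result: 42 of 60 clients reported improvement.
line = dashboard_row("Clients reporting improved confidence", 42, 60)
```

The point is not the formatting itself, but that someone has to decide which handful of numbers belong on the dashboard and what denominator each one uses.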

Issue #5: They Haven’t Set Aside Time to Review

Even with a dashboard or some other type of internal report, there is no guarantee organizations will actually use the data.  Why? Because they also have to allot specific time to review it.

That can mean putting it on the agenda at standing supervisory meetings each month, at quarterly board meetings, and/or at seasonal retreats. It might also mean creating stand-alone Results Learning Meetings done at the program, department and/or organizational level.  Bottom line, to build a learning culture, you must commit time for people to review and discuss results.

Issue #6: They Don’t Know How to Review & Use Data

For organizations that manage to navigate all of the potential roadblocks already outlined, it can feel like the job is done once the meetings have been scheduled and the dashboards have been created.  But one more hurdle often remains: coaching people on how to review and use data. Staring down a spreadsheet covered in numbers can be daunting. People may not know where to look or what to do with what they’re seeing, and that can keep them from making the most of it.

Reviewing and using data is a skill that needs to be cultivated like any other.  Reassuring people that there is no “wrong” way to review data can help, as can creating a set of guidelines to help inform the review.  For example: (1) What story does the data seem to be telling? (2) Is any of it surprising? (3) Does it leave you with other questions or items to discuss? (4) What can you do with what you’ve learned?

Assure people that the data is simply a starting point for conversation. The goal is to use it to see what’s happening, and then discuss as a group why and whether there is anything that can be done to influence future results.  You’re not going to have all the answers. The more important thing is to ask the questions and work as a team to come up with next steps.

More and more nonprofits are seeking to track and learn from their data, which is great news.  Knowing whether what you’re doing is working is key to making an impact. The other piece of great news is that technology has evolved a lot over the nearly 20 years I’ve been doing this work.  It’s made tracking, compiling and viewing results easier than ever before. But technology hasn’t removed people from the equation.

Technology doesn’t do the strategic thinking for you.  You must decide what data to track, commit to tracking it, and then set aside time to review and talk about what it means.  But if you do, then you should be able to use technology to support your efforts – and hopefully avoid ending up with a bucket of metaphorical rotten vegetables.

CATEGORIES: Best Practices