Vol. 46, No. 4

Evaluation and measurement in a time of transition: A conversation with Kim Silver

By Marilyn Cavicchia
Kim Silver

Most organizations (and most individuals, for that matter) have tried a lot of new things in this year of COVID-19. Many have been so successful that they seem worth continuing even after this crisis has passed. At the same time, many of us are eager to resume some of the old, familiar things we were doing, as soon as it's safe to do so.

At a time when it's still hard to predict what lies ahead of us, how can we get a handle on what to keep, what to go back to, what to change, and what to let go? 

Kim Silver, principal at The Silver Line, is an entrepreneurial social sector professional with more than 15 years of experience leading and managing nonprofits, foundations, government agencies, and consulting engagements. She has an extensive background in assessing social impact, designing feasibility studies, and creating new services, programs, and partnerships.

Bar Leader recently spoke with Silver about the role of evaluation, measurement, and data in making decisions—especially now. Here is that conversation, edited for length and clarity.

Bar Leader: Bar organizations, like many others, have been doing a lot of new things during the pandemic. In the months ahead, how can they decide what to continue or resume, without getting too "overstuffed"?

Kim Silver: It can be helpful to build a decision-making matrix, which starts with drafting a set of criteria that can be used as a filter, giving you a structured way to make those decisions.

One organization I worked with recently was considering whether to open another location, and as they looked at each criterion, they gave it a rating in terms of importance. Let me walk you through their criteria:

  • The level of impact that program or that opportunity or that activity could create for the organization. (And on the other hand, could stepping away from whatever it was help them increase their impact?) For this organization, “impact” meant, "Would this opportunity help them to serve more people, and serve more people at the level of quality that they want to?" That is how they meet their mission.
  • Alignment with values. This is an organization that has a very clearly articulated set of values that is a thread that runs through all of their programs.
  • Cost. There are a variety of different ways to think about cost, beyond the actual cost of the program. For this new or existing program, is there budget already? Is there a guaranteed revenue stream? Do we have to go raise more money for it? Would this new opportunity mean bringing on new staff to make it happen?
  • Ease of implementation. How hard is it for us to do this, and when do we recognize that it’s requiring too much time and effort? Is it something we could easily do with existing infrastructure and staff because we’ve already done it a thousand times and it’s just a cut and paste, or we’re implementing new content but using the same platform?
  • Sustainability. This is tied to cost in some ways: Does it require a lot of new resources or new investment to make it happen over the long term, or is it something that can be done inside of an existing budget, built into something we’re already doing, so it can live and breathe and not have to have its own separate line item?
  • Visibility and leadership role. This is an organization that really leads on the issues they’re involved with. Is this something that will help us gain, secure or maintain the visibility of the leadership role we already have?
  • Unique position. Do we have the knowledge, the resources and the expertise to do this in a way that nobody else could do it, meaning that if we can’t do it, it doesn’t happen? Would that create some unmet need that’s really important to us?
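Silver’s clients built this as a qualitative tool, but the same idea can be made concrete. Here is a minimal sketch of how a matrix like this might be scored: the criterion names come from her list, while the weights, the 1-to-5 scale, and the sample ratings are purely hypothetical illustrations, not her client’s actual tool.

```python
# A minimal sketch of a weighted decision matrix. Criterion names come from
# the interview; the weights and 1-5 ratings are hypothetical.

CRITERIA_WEIGHTS = {
    "impact": 5,
    "alignment_with_values": 5,
    "cost": 3,  # higher rating = more affordable
    "ease_of_implementation": 2,
    "sustainability": 3,
    "visibility_and_leadership": 2,
    "unique_position": 4,
}

def score_opportunity(ratings: dict) -> float:
    """Return a weighted average score (1-5 scale) for one opportunity."""
    total_weight = sum(CRITERIA_WEIGHTS.values())
    weighted_sum = sum(
        CRITERIA_WEIGHTS[criterion] * rating
        for criterion, rating in ratings.items()
    )
    return weighted_sum / total_weight

# Hypothetical ratings for a proposed new location, each on a 1-5 scale.
new_location = {
    "impact": 4,
    "alignment_with_values": 5,
    "cost": 2,
    "ease_of_implementation": 2,
    "sustainability": 3,
    "visibility_and_leadership": 4,
    "unique_position": 3,
}

print(f"Weighted score: {score_opportunity(new_location):.2f} / 5")
```

The number matters less than the conversation it forces: as Silver notes below, the rating process is where the board surfaces and tests its assumptions.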

BL: What can people learn from the process of building a tool like this?

KS: I built this set of filters with a group of board members and an executive director. The conversations that generated it were really important and so helpful, because the people who were going to be making really big decisions for this organization had such a variety of knowledge and expertise on whatever topic they were considering. So, for any board members who were kind of at a distance from what the staff experiences and what this organization’s clients experience, the structured conversations that the tool facilitated created a process that I think was even more valuable than just the tool itself.

It generated not only learning for the people who had to be caught up on some of the knowledge, but it also created a really safe space for people to ask questions and challenge assumptions, and for everyone to be able to come together and remove the emotion from the decision, because we’re all working from this clear set of things that we said were really important to us.

It also gave the staff clear language to use: “This doesn’t align with our values, even though it seems like it would give us a lot of visibility and we’re in a unique position. If we prioritize level of impact and alignment with values, making this choice for our organization doesn’t make sense.” It gave everyone a way to make rational arguments for or against different opportunities.

BL: Is it difficult for some organizations to dig into evaluation, measurement, and data right now because things still feel so strange?

KS: If it’s an intensive project, I totally agree. If it’s a light way of inviting feedback or trying to understand our market a little better, those are exercises that don’t require a ton of time and energy, or an external party coming in to help you do it. For organizations that know their client base pretty well, communicate with them regularly, and have a responsive audience, there are ways to invite intelligence without turning it into a big research project that has to be one more thing on someone’s plate. I’m a big advocate for asking, "How do you build evaluation and learning into everything you do?" That would make everything you do better, without it being a huge bear. It doesn’t have to be. It can be really practical and relevant and baked into your existing work.

You could have a set of common questions that lives across all of your work, even if it’s just, “This met my expectations,” “Exceeded my expectations,” “Did not meet my expectations,” and then “Tell us why.” Questions like that are just enormously enlightening. It’s less about, “Was I satisfied?” It’s more about, “I came here for something, and did I get it?”
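Tallying a common question like this takes very little machinery. Below is a minimal sketch, assuming responses collected as (rating, reason) pairs; the response wording comes from the interview, but the sample data and field layout are hypothetical.

```python
# A minimal sketch of tallying the common expectation question Silver
# describes. Sample responses are hypothetical.
from collections import Counter

responses = [
    ("Met my expectations", "Covered exactly what I needed."),
    ("Exceeded my expectations", "The breakout discussion was a bonus."),
    ("Did not meet my expectations", "Too basic for experienced members."),
    ("Met my expectations", ""),
]

# Count each rating option.
tally = Counter(rating for rating, _ in responses)
for rating, count in tally.most_common():
    print(f"{rating}: {count}")

# The free-text "Tell us why" answers are often the most enlightening part;
# here we just collect the non-empty ones for review.
reasons = [why for _, why in responses if why]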

BL: Let’s look at a specific example where organizations might be considering whether to keep something new they’ve been doing, go back to how things were, or arrive at something in between. A lot of bars seem to be realizing that the best model for their meetings and events going forward may be hybrid, with some in person and some virtual. How can they use data to help them make some of those decisions?

KS: I’m watching this with a number of my clients who are asking, “What are people going to be comfortable with, and what makes sense for us?” People don’t want to build their plans assuming that people will come in person. An organization can take the temperature of its members or clients with a two- or three-question survey that’s built into a marketing message. Or, at the end of Zoom meetings, invite people to take an exit survey. The goal is just to get an estimate of people’s comfort level and when they think they might be comfortable coming to an in-person event. But there are so many different factors that if people give a timeframe, just know that it could change tomorrow. You could also frame it as, “How can the organization be helpful?” and give virtual training and in-person events as options.

Also, invite people to reflect on existing programs. What worked about the virtual setup of this program, and what could be improved, given that it will probably continue to be virtual for a while? I think people are willing to share that real-time feedback, especially if it’s not a long and involved survey—just one or two questions. It doesn’t necessarily give us all the answers, but it gives us better information than we had before.

In general, I’m a big advocate of getting information and feedback, because you don’t know what you don’t know until someone shares it with you. If providing new things is getting more people engaged, getting the feedback can turn that flywheel and keep people’s energy up around it. If bar leaders know that the things they’re offering are valuable, useful, and relevant, because they’re hearing it from participants, that can be a real motivator.

A lot of times in organizations, you have people who are super risk averse and people who like to try new things every day. So, I think inviting feedback can also be a nice way to bring data into the conversation that can help fuel and inform decisions among very different types of colleagues.

Even during the pandemic, when everything keeps changing, asking what worked, what could be improved, and how the organization can help will at least give bar leaders some good, on-the-ground intelligence on how their members are feeling and what they can do to respond and stay in lockstep with them to make sure that they’re staying relevant and helpful.

Sometimes, too, there are things we can witness, even without feedback. If we’re trying new things, but we don’t see people signing up, we don’t see people staying on, we see people turning off their cameras, we see people dropping off when breakout groups come along, those are all things we can witness in a totally different way in a virtual world than we could in an in-person world.
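Most virtual platforms can export this kind of attendance data. As a rough illustration of pulling one of the signals Silver mentions out of such an export, here is a minimal sketch; the column names, the sample log, and the breakout start time are all hypothetical, and a real export (from Zoom, for example) will look different.

```python
# A minimal sketch of reading engagement signals out of a virtual event's
# attendance log. Columns and sample data are hypothetical.
import csv
import io

# Stand-in for a downloaded attendance export.
SAMPLE_LOG = """name,join_minute,leave_minute
A. Attendee,0,90
B. Attendee,5,40
C. Attendee,2,44
D. Attendee,10,88
"""

BREAKOUT_START = 45  # hypothetical: minute the breakout groups began

rows = list(csv.DictReader(io.StringIO(SAMPLE_LOG)))
left_early = sum(1 for r in rows if int(r["leave_minute"]) < BREAKOUT_START)
print(f"{left_early}/{len(rows)} attendees dropped off before the breakouts.")
```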

Using some of that data and insight could be a way to manage some of the tensions between someone who wants to go back to doing everything the way it was before and someone who wants everything new, new, new. There’s probably some data that could be the arbiter for how much we do either one of those things. 

BL: Let’s talk a bit about revenue, given that it’s taken a hit at a lot of bars, which might raise the stakes in terms of deciding how many new and old things they can sustain.

KS: When I do strategic planning with an organization, part of that is meeting the mission and having programmatic impact, but you can’t achieve those if you don’t have revenue to support your operations.

I would go back to the set of filters we discussed earlier. When you look across the programs or the initiatives that are planned or that are in place, what do we know about the actual cost of delivering them, and what do we know about the actual revenue that comes from them?

Those filters are so helpful because it’s not a black-and-white issue. It’s actually kind of nuanced, because you may have some programs that cost a good bit of money to run and don’t generate a ton of revenue, but they are the ones that are the most visible and that generate the most impact for the organization, and stepping away from those would probably not end up being the decision. But there may be other programs that are less costly, but they’re really complex to implement. I wonder, too, are bars thinking about new sources of revenue that are fundamentally different from membership and dues?

Think about where those cost centers are, and revenue generators, and give some thought to using a structured way to decide what makes sense to keep and what makes sense to sunset—and sunsetting things is OK!
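Silver’s point that this review is nuanced, not black and white, can be shown in miniature. Below is a minimal sketch of pairing net cost with the impact filter before flagging anything for sunset; all program names and figures are hypothetical, not from the interview.

```python
# A minimal sketch of the cost-center / revenue-generator review Silver
# suggests. All program names and figures are hypothetical.

programs = [
    # (name, annual_cost, annual_revenue, high_impact?)
    ("Flagship CLE series", 40_000, 55_000, True),
    ("Printed newsletter",  15_000,  2_000, False),
    ("Mentorship program",  20_000,  1_000, True),
]

for name, cost, revenue, high_impact in programs:
    net = revenue - cost
    # Net cost alone doesn't decide anything; pair it with the impact
    # filter before flagging a program as a sunset candidate.
    flag = "review for sunset" if net < 0 and not high_impact else "keep"
    print(f"{name}: net {net:+,} -> {flag}")
```

Note how the hypothetical mentorship program loses money but stays, which matches Silver’s observation that costly, high-impact programs are often the ones you keep.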

BL: For programs where it’s not so much about revenue, but about how people feel, how do you quantify that?

KS: Sometimes there isn’t a perfect way to quantify everything. When I think about something that’s really visible, and valuable for that reason, people can make a case for the visibility in a more qualitative description. For measuring whether it’s valuable to people, again, inviting feedback from participants is helpful.

One organization said, “All of our workshop feedback is positive. What are we going to learn from that?” You’re going to learn that you’re providing something really valuable. And if you have a presenter where you don’t get that same kind of feedback, you’ll know you should listen to that and make a change.

And again, you can also observe data you already have. For example, you as a bar leader could say, “We offered 25 CLEs last year; 10 of them were oversold and had a wait list, and another 10 were half full. What could we learn from the half-full ones?” Was it because we didn’t market them well, because the topic wasn’t relevant, or because the timing was bad?
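This kind of fill-rate review can come straight out of a registration report. Here is a minimal sketch of sorting sessions the way Silver’s example does; the session titles, capacities, and registration counts are hypothetical.

```python
# A minimal sketch of the CLE fill-rate review in Silver's example.
# Session names, capacities, and registration counts are hypothetical.

sessions = [
    # (title, capacity, registered)
    ("Ethics update",        100, 130),  # oversold, had a wait list
    ("Trust accounting 101", 100,  50),  # half full
    ("E-discovery basics",   100,  95),
]

for title, capacity, registered in sessions:
    fill = registered / capacity
    if fill > 1.0:
        note = "oversold / wait list"
    elif fill <= 0.5:
        note = "half full or less: check marketing, topic, timing"
    else:
        note = "healthy"
    print(f"{title}: {fill:.0%} ({note})")
```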

BL: It sounds as if you think measurement and evaluation can be more of a daily habit rather than always being a big project in itself.

KS: I find that organizations underestimate how much data, whether it’s qualitative or quantitative, lives inside of their organization. It does take a little time to mine that and really look at it. I’m always surprised: People are sitting on a lot of data! And that wasn’t true 10 or 15 years ago. But given how much we now know about every donor and every member, and how much AI is built into a lot of these systems, there’s a lot of information out there.

I think when people hear “measurement and evaluation,” it strikes fear in them, rather than being seen as a strategic tool that may already be embedded in their work, because they have a lot of data already. It doesn’t have to be done by someone with a PhD. It doesn’t have to be done by an external evaluator. You likely have someone on your team who knows what data you have. There could be a conversation between someone who has the questions they want to answer and someone who’s more familiar with the data, and then the two can do a little hunting to see what you have.

I think it’s really an organizational culture of learning and improvement. That’s the bigger thing here. It’s not just a matter of discovering the data. It’s very much leader driven and culture driven, having that mandate from someone who says, “Let’s learn and not guess, and let’s use the data we already have.”

It’s about having some comfort in that. Again, it’s gray; it’s not super black and white. It’s probably not going to be perfect data, but it could give us some new insights to inform the decisions we have to make.