After my graphic medicine collection was placed in the UAMS library, I paid close attention to how people interacted with it. Unfortunately, many people didn’t notice it or know about it due to a lack of marketing. However, those who picked up a book and/or checked one out responded positively. Several patrons left positive feedback on the comments page located with the collection: “Great collection!” “Thanks for providing such a collection.” One patron made reference to a specific title, saying, “Thank you for allowing me to experience it!” When I talked with patrons about why they did not check out a book, some responded that they wanted more information about what graphic medicine was. From this feedback, I chose my next step, although it took time to implement. I partnered with Leah Eisenberg, who works in the Department of Medical Humanities and Bioethics at the institution, and together we worked to bring MK Czerwiec in for a workshop, a lecture, and class discussions (see Advocacy Toolkit). After her visit, circulation of the graphic medicine books increased from an average of 1.7 checkouts per month to 11 checkouts per month, thanks to the boost in marketing and a deeper understanding of graphic medicine’s benefits.
So, using the tools in the previous entries of my series, you have piloted a graphic medicine collection and/or program. Overall, how did it go? Did people respond positively? Negatively? Most importantly, figure out the “why.” Why did they enjoy it or not? Why did they feel like it was applicable or not? Why did they think it was beneficial or not? Remember that this is an iterative process. You have been doing evaluations every step of the way so far. You tested the waters by determining what the decision makers and audience were interested in. You then ventured further with the advocacy toolkit, changing it up and keeping at it to get enough buy-in from decision makers to move forward. Then, based on everything you learned, you started with a pilot (a collection, a program, or both). From here, it is rinse and repeat. Keep evaluating how things are going. Look for ways to respond to the feedback and make the collection and/or program more applicable and beneficial to your audience.
Data and numbers have their limitations, but also their uses. A number doesn’t mean much without comparison and/or scale, so make sure to include context. Evaluating an increase in knowledge can be achieved with a pre-test and post-test. One example is measuring how people feel about comics using a tool like the Likert scale. You can include a statement like “Comics have a place in medicine” with a range from strongly disagree to strongly agree, or a statement like “Comics are only for discussing trivial matters” with the same range. Another data point is the number of people who participated, either by checking out a book or attending the program. This can be used to demonstrate interest. However, all of these numbers are only part of a deeper understanding. The best numbers are those that illustrate a larger meaning, especially those that can be displayed in a graph or chart. The data here are quantitative, expressed in numbers.
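As a rough sketch of the pre-test/post-test idea above, here is a small Python example that averages Likert responses (1 = strongly disagree, 5 = strongly agree) before and after an event. The response values are made up purely for illustration:

```python
# Hypothetical sketch: compare average pre- and post-event Likert
# responses (1-5 scale) for a statement like
# "Comics have a place in medicine."
from statistics import mean

pre_responses = [2, 3, 3, 2, 4, 3]   # made-up example data
post_responses = [4, 5, 4, 3, 5, 4]  # made-up example data

pre_avg = mean(pre_responses)
post_avg = mean(post_responses)
shift = post_avg - pre_avg

print(f"Pre-test average:  {pre_avg:.2f}")
print(f"Post-test average: {post_avg:.2f}")
print(f"Average shift:     {shift:+.2f}")
```

A positive shift suggests attitudes moved toward agreement after the event; pair the number with the statement wording and the number of respondents so it has context.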
Beyond the numbers, you have qualitative data, which is basically everything else. There can be a particularly meaningful quote from someone expressing their epiphany. In a class setting, observing participants’ body language can clue you in that everyone is leaning in and engaged by the end. Mostly, qualitative data includes what people said or did. It is common for people to discount or overlook qualitative data because it doesn’t fit into nice, neat boxes or charts. However, stories, quotes, and pictures can be incredibly powerful. All the numbers in the world sometimes have less power than a moving, emotional statement. Qualitative feedback can point to something you can directly improve or add to the collection or program.
This leads us to the growth/change portion of the conversation. Look for every opportunity to grow your collection and/or program. The best place to look for ideas to improve is the qualitative feedback. Make sure you are open to change. It is easy to stay focused on what you have already done, but tunnel vision is your biggest potential pitfall here. Be willing to shift focus to another aspect of graphic medicine or a different approach if people are indicating an interest in that. If you have a collection, see if you can bump the number of checkouts by adding programming that points to the collection. Alternatively, if you started with programming, create a collection to supplement that interest so people can browse and see for themselves where graphic medicine can be applicable. To know where you should adjust, examine the feedback. If possible, follow up directly with participants and ask specific questions about what they might be interested in. For these conversations, make sure the questions are open-ended to invite longer responses. If you get a short response, push for more detail.
From this, there are two important things to do. First, draft future plans, which will be the subject of the next post. Second, create a summary of the quantitative and qualitative results for the decision makers. Create a mini report to let them know how your pilot went. If it didn’t go as well as you hoped, make sure to include detailed information about how you plan to grow and adjust as a result of the feedback. Highlight what went best. Make it colorful. Include a chart or graph. Include pictures. These don’t have to be pictures of the participants; they could be an example you created or a photo of some of the titles in the collection or book club. Create a powerful, graphic story of your experience.
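If you want a quick feel for your before/after numbers while drafting the report (before building a polished chart in your spreadsheet tool of choice), even a tiny script can rough out the comparison. This hypothetical Python sketch uses checkout figures like the ones mentioned at the start of this post; the labels and numbers are illustrative assumptions:

```python
# Hypothetical sketch: a quick text "bar chart" comparing average
# monthly checkouts before and after an event.
checkouts = {"Before visit": 1.7, "After visit": 11.0}

for label, per_month in checkouts.items():
    bar = "#" * round(per_month)  # one "#" per checkout, rounded
    print(f"{label:>12}: {bar} ({per_month}/month)")
```

For the report itself, the same two numbers translate directly into a simple bar chart, which reads far better to decision makers than the raw figures alone.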
The next post will be the last in this series! I will be discussing what to do next. Let me know what you think! Comment or get in touch with me on Twitter @AJaggers324 or Instagram @AJaggers324, like and subscribe. Also if you would like to support me financially, you can go to patreon.com/ajaggers324. Thank you!