On March 15, 2012, Erin Aagesen of Wisconsin Literacy, Inc. provided a lively and thoughtful discussion of the "nuts and bolts" of conducting a community-based program evaluation.
We started with a brief description of the recent health literacy project “Let’s Talk About Flu.”
“We wanted to improve flu vaccination rates through plain language communication. Here's [a link to] the program: http://t.co/BzGOrfUw. With funding from Anthem BCBS, we developed a lesson book and a 1-hour workshop, and distributed flu vaccine vouchers from Walgreens.
During the fall and winter of 2011-2012, our staff conducted 53 workshops and served 921 adults with low health literacy.
We used principles from adult learning theory, including the most important: making the information relevant to participants’ lives.
We conducted the workshops in trusted locations where the target population regularly gathers… This was really important to us, and a reason we think we saw success. We delivered the workshops in trusted settings where people already live, work, study, and socialize.”
How did you determine that participants had low health literacy?
“Most participants were adults from our 63 member literacy agencies, who are reading below the 5th grade level. We also worked with populations in which there is generally a large overlap with low literacy, including [the] homeless.”
Can you explain the difference between program evaluation and research as it relates to community-based programs?
“According to @NIHforHealth: ‘Research is scientific inquiry... it produces generalizable knowledge that advances a field.’ It would be wonderful if we had a research partner for all community-based initiatives, but that’s not always realistic.
CBOs [Community-Based Organizations], it's OK not to be research experts. Define and systematically pursue your goals and methods in alignment with your mission. Program Eval[uation] takes into account program goals, stakeholder interests, and “real world” constraints. [The] goals of Program Eval[uation] [are to] judge… [the] merit or worth of [the] program [and to] provide… info[rmation] for decision making.”
A participant followed up with this question: How did you know the intended audience would attend [the] workshops? Did you do [a] needs assessment first?
Erin replied, “We scheduled during times those groups already meet: during literacy class time, lunch time at senior centers, etc.”
Another participant asked, “Did you have extra support for those who believed ‘I don't get flu’ or ‘shots don't work [or] are bad for me’?”
Erin responded, “We tried to facilitate a non-judgmental conversation about why people do or do not believe in vaccination. We also documented this info[rmation] on the pre- and post-test, so we can more effectively tailor future programming.”
Please describe the steps that you took to evaluate this project.
First Erin described the different types of evaluations that are undertaken, “Types of Eval[uation]s: Needs Assessment (What do we need?), Process Eval[uation]s (How did we do it?), Outcome Eval[uation]s (What happened as a result?)”
Then she got down to her process: “We used fantastic resources from @UWEXCoopExt (the University of Wisconsin-Extension Cooperative Extension) to develop our "Let's Talk About the Flu" eval[uation] plan.” A link to the Cooperative Extension’s website is http://t.co/s9Z4SSAO. According to the plan, the steps in Prog[ram] Eval[uation] are:
1. “Engage Stakeholders”
3. “Collect Data”
4. “Analyze & Interpret”
5. “Use”
“We followed the advice to ‘start with the end in mind.’ This meant step #1 was defining our desired outcomes. These were: improved knowledge of flu concepts, changed vaccination intentions, and improved flu vaccination rates. We used a pre- and post-test to measure flu prevention knowledge and intentions regarding vaccination. @Walgreens helped us track flu vaccine outcomes via a voucher system. We tracked on-site vaccinations the day of the workshop. We also solicited qualitative feedback from program partners and staff before, during, and after the project.”
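For readers curious about the mechanics of that kind of tally, below is a short, purely illustrative Python sketch of how anonymous pre- and post-test answers and vaccination flags might be rolled up into averages and rates like those reported later in the chat. The record fields and numbers are invented for this post; they are not Wisconsin Literacy's actual instrument or data.

# Purely illustrative: hypothetical, made-up records, not Wisconsin Literacy's data.
from statistics import mean

# One anonymous record per workshop participant (no names, averages only).
participants = [
    {"pre_score": 40, "post_score": 80, "pre_intends": False, "post_intends": True,  "vaccinated": True},
    {"pre_score": 60, "post_score": 80, "pre_intends": True,  "post_intends": True,  "vaccinated": False},
    {"pre_score": 55, "post_score": 90, "pre_intends": False, "post_intends": False, "vaccinated": False},
]

n = len(participants)

# Learning objective: average pre- and post-test knowledge scores.
avg_pre = mean(p["pre_score"] for p in participants)
avg_post = mean(p["post_score"] for p in participants)

# Belief objective: share intending to get a flu vaccine before vs. after the workshop,
# plus a count of how many changed from "no" to "yes".
pre_intent = sum(p["pre_intends"] for p in participants) / n
post_intent = sum(p["post_intends"] for p in participants) / n
changed = sum(1 for p in participants if p["post_intends"] and not p["pre_intends"])

# Behavioral objective: vaccination rate tracked via voucher redemptions or on-site shots.
vax_rate = sum(p["vaccinated"] for p in participants) / n

print(f"Knowledge: pre {avg_pre:.1f}%, post {avg_post:.1f}%")
print(f"Intention: pre {pre_intent:.0%}, post {post_intent:.0%}, changed to yes: {changed}")
print(f"Vaccinated: {vax_rate:.0%}")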
Did you pre-test your messages and lecture content with different audiences?
“Yes, a medical student intern pre-tested our lesson book with physicians, adult learners and adult literacy program directors. This was an essential step; we learned a great deal and revised our program and materials based on this feedback. We also had the experience of a prior year's project to draw from. We're all rushed, but I think scheduling time for feedback and revision upfront saved us time in the long run.”
Were freebies essential to get participation?
“We put together a freebie ‘flu kit’ with supplies from Walgreens: a thermometer, cough drops, tissues, and hand sanitizer. The Walgreens freebies were a hit! They got people engaged. For example, we taught numeracy concepts with the thermometer. I don’t think you always have to give people ‘freebies,’ but you should give them a compelling reason to participate. It helps when you find a partner whose goals align with yours. They [Walgreens] wanted to be a community resource.”
What barriers did you have to overcome to obtain the data that you have?
“It is challenging to obtain data from our audience (adults with low health literacy). They may be unable or reluctant to take tests. In prior years, we asked too many questions, and didn’t emphasize why it mattered to staff and participating agencies. This year, we had to prioritize. We asked 5 basic questions. We also tied data to reimbursement for participating programs.”
“Adult literacy learn[er]s are tested often, and we didn't want to be one more test. By def[inition], they were in our target pop[ulation]. Because it was prog[ram] eval[uation], not research, we also made a decision to make some data sacrifices to maintain our organizational mission. We did not ask learners to put their names on the tests, since stigma is an issue. So no individual data, just averages. The important thing is that we made these choices purposefully, based on our organizational mission and program plan. We did not have to sacrifice anything core to serving our program goals, just [the ability] to prove something generalizable.”
Did the process require translators?
“Yes, we budgeted for translators. They were arranged by the agencies, if needed, and we reimbursed them. The participating agencies were reimbursed for time, space, staff. Participants got freebies and a flu vaccine voucher.”
What results did you obtain from your evaluation?
“We went from a very low pre- and post-test completion rate in 2010 to over 85% in 2011. We were thrilled!”
“[The] flu knowledge (learning objective) results [were]: ave[rage] pre-test [score]: 55.7%; ave[rage] post-test [score]: 82.7%.”
“[The] intention to get a flu vaccine this year (belief objective) results: pre-test: 73.9%; post-test: 83.1%. Total changed: 109 (12%).”
“[The] flu vaccine (behavioral objective) results: 42.4% vaccinated; 12% before [the] workshop, 17.1% [after].”
“[Our] PR results: photos, videos, and press clips [can be seen at] http://t.co/VsarPgkH http://t.co/3n3N3BiM http://t.co/BDp7ReBn http://t.co/DfIwthXy.”
What advice would you give others who are intent on evaluating their health literacy programs?
“You have to prioritize. We were successful because we made some decisions about what was crucial data and what was not. [We] gather[ed] stakeholders (clients, staff, community, funders) to discuss key questions before developing… [our] evaluation plan. And of course, ‘begin with the end in mind!’”
A participant commented, “that’s many lives potentially saved!”
Erin responded, “Not to mention health care system costs! According to AHRQ, [the] ave[rage] cost for [an] outpatient visit in 2008 was $169.”
Do you have any final thoughts to share?
“I find this [advice from the University of Wisconsin-Extension Cooperative Extension] liberating: ‘There is no blueprint or recipe for conducting a good evaluation.’ Make it work for you! [It has been] awesome. We loved working on this project and are very pleased we can attach some outcomes to our hard work.”