Qualitative vs. Quantitative Data: What’s the difference?

In the world of program evaluation, terms like methods, data sources, and analysis are thrown around like candy in a parade. For those of us who do not regularly work on the evaluation side of programming, the frequent use of these terms can seem daunting and even downright annoying. As staff and directors of grant-funded programs, you will be advised to collect a variety of data on your student outcomes, and these data sources are often discussed in those same annoying terms. I hope to give you a little background on the difference between the two most commonly used data descriptors: qualitative and quantitative data. Here we go…

Quantitative Data

Beans. Yes, I said beans. I like to think of quantitative data as something that you can count, or QUANTify, like a handful of dried beans. In our programs, these quantitative data commonly look like district assessments, student attendance, activity participation, student GPA, etc. 

For us to demonstrate that our programs have an impact on the student outcomes our funders care about, these quantitative data sources must be collected. These are hard numbers. But don’t get me wrong: hard numbers are not limited to things like GPAs and attendance; they can also measure “soft constructs” such as social-emotional skills, feelings of belonging, and academic motivation. This is done through a thoughtful survey collection plan. When seeking to use quantitative data, get creative! It does not, and should not, always be about our traditional success indicators.
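
To make that a little more concrete, here is a minimal sketch, in Python, of how Likert-style survey items might be rolled up into a single quantitative score for a “soft” construct like belonging. The item names, the 1-to-5 scale, and the responses are all invented for illustration; your own survey collection plan will look different.

# Hypothetical example: turning Likert-style survey items (1 = strongly disagree,
# 5 = strongly agree) into one quantitative "belonging" score per student.
# Item names and responses are invented for illustration only.
responses = {
    "student_001": {"belong_1": 4, "belong_2": 5, "belong_3": 3},
    "student_002": {"belong_1": 2, "belong_2": 3, "belong_3": 2},
}

for student, items in responses.items():
    scale_score = sum(items.values()) / len(items)  # simple average across items
    print(f"{student}: belonging score = {scale_score:.2f}")

In practice you would want many more items and students, and ideally a validated survey instrument, but the idea is the same: a thoughtful survey turns something squishy into something you can count.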

Qualitative Data

Now, we all know that numbers are important and are the backbone of what we report to our funders, but I am going to shift to the more colorful of our data sources: qualitative data. Our programs, schools, and students are more than their test scores, attendance data, and activity counts; they have stories and complexities that numbers cannot always capture. If collected properly, qualitative data can serve as a meaningful backdrop that contextualizes what we are seeing in our “bean counts.” In the spirit of qualitative data collection, let me tell you a story.


Program A serves a cohort of high school students. To better understand their experiences with college visit events, program staff distribute a survey. The survey results (quantitative data) show that students rated their trip to Brainiac University very low. If program staff stopped there, they might simply conclude that the students are not interested in that university and decide not to schedule any more visits in the future. Thinking that there must be more to the story, Site Coordinator Wanda suggests that their evaluators conduct a focus group with students to ask some follow-up, open-ended questions about the college visit series. Through these focus groups, the evaluators learn that during the visit to Brainiac University, a single tour guide made an offensive comment, which cast the entire experience in a negative light for this group of students. Without the follow-up focus group, program staff would never have understood their students’ deeper experience of the college visit.


Qualitative data can be collected in a variety of ways, including focus groups, interviews, social media comments, and short-answer survey items.


Qualitative data can provide you with rich background information that allows you to get to know your students on a deeper level. Although it is rarely asked about in program APRs (Annual Progress Reports) and institutional reports, it can be an extremely useful tool both to increase our awareness of our students’ experiences and to demonstrate our impact on a human level.


Conclusion

In conclusion, when faced with the question, “What kind of data should I use?” remember to think outside the box! To answer questions about program impact, we can use both qualitative and quantitative data. I hope this post helped make these two types of data a bit clearer and gave you some ideas for how they can be used both to demonstrate program efficacy and to inform program improvement. Learning about data and program evaluation can feel overwhelming at times, but we are here to help!


Contributed By Lauren Coleman-Tempel

Lauren Coleman-Tempel, Ph.D. is the assistant director of Research, Evaluation & Dissemination for the University of Kansas Center for Educational Opportunity Programs (CEOP). She oversees multiple federally funded equity-based program evaluations including GEAR UP and TRIO and assists with the supervision of research and evaluation projects.

Follow @CEOPmedia on Twitter to learn more about how our Research, Evaluation, and Dissemination team leverages data and strategic dissemination to improve program outcomes while raising the visibility of college access programs.