Taking our Programs to the End-Zone: Formative v. Summative Evaluation
So, you have been tasked with creating a formative evaluation plan for your program. Your supervisor mentions in passing that there are some old summative evaluations you could look at to inform this endeavor. What the heck?! Where do you start? Here, of course!
In this post, we will walk through these two kinds of evaluations together and give you the skillset needed to accomplish the aforementioned task, as well as the foundational knowledge to impress your colleagues.
Let’s get started with a brief rundown of terms and why we should care.
- Formative evaluations (or formative assessments) have one simple overarching goal: improve outcomes.
- Summative evaluations (or summative assessments) can likewise be boiled down to a single goal: describe what happened.
These two methods of evaluating your programs should be used in tandem because they do different things. If we only used formative evaluations, we would never gain a comprehensive look back at the program’s outcomes. If we only utilized summative evaluations, we would be leaving opportunities for improvement on the table.
Let’s take a deeper dive into some examples of each method of evaluation using an analogy from an increasingly controversial American pastime: football!
Formative evaluations happen early in the “drive,” meaning they take place while there is still time to change the outcome. The earlier and more consistently you conduct formative evaluations within your programs, the better chance you have of changing the course of the program. Let’s say your team is at the 30-yard line, or in the middle of the first year of their 5-year grant. Conducting a formative evaluation, possibly an audit of all activities that have taken place so far, will allow program staff to see where holes in the service plan might lie. If staff wait until they are at the 10-yard line, or the final year of the grant, there is very little time to make changes to the program model.
Other examples of beneficial formative evaluations include semester and yearly reports, focused interviews or process evaluations, and even the Annual Performance Report (APR). For GEAR UP and TRIO programs, the APR is a report that feeds into the Final Performance Report, or FPR. You can improve upon each year’s APR; the FPR is set in stone.
Summative evaluation is just what it sounds like – a summary of what has happened in your program. It is a summation of the outcomes and performance indicators collected over a set period of time, which in your case is generally a grant cycle. This is similar to the final score and stat sheet from a football game. How many rushing yards did each team have? How many touchdown passes did each quarterback throw? What percentage of your students enrolled in a postsecondary program straight from high school?
These evaluations are frequently guided by your program’s objectives – both those set at the federal level and the internal objectives written into each grant. At the end of the project, a summative evaluation helps paint a picture of the final scoreboard and informs your audiences of how you measured up against your goals.
Now that we have discussed the differences between formative and summative evaluations, we will close with an example of how one program used both to build a robust evaluation plan. The following is a diagram of a 5-year grant’s evaluation plan.
As you can see, each year’s formative evaluations varied depending on program needs. These are flexible and can be as formal or informal as each program needs them to be to accomplish its goals. The formative evaluations then feed into the summative evaluation, which is a thorough report covering what happened during the 5 years of grant funding.
I would love to dig deeper into how both of these methods can be utilized to evaluate individual initiatives within a project, but that’s a 30-yard pass for another game…I mean, blog post.
Contributed By Lauren Coleman-Tempel
Lauren Coleman-Tempel, Ph.D. is the assistant director of Research, Evaluation & Dissemination for the University of Kansas Center for Educational Opportunity Programs (CEOP). She oversees multiple federally funded equity-based program evaluations including GEAR UP and TRIO and assists with the supervision of research and evaluation projects.
Follow @CEOPmedia on Twitter to learn more about how our Research, Evaluation, and Dissemination team leverages data and strategic dissemination to improve program outcomes while improving the visibility of college access programs.