Monthly Archives: January 2014
Program Evaluation and Band Wagons

View from the Summit

In social media, I find that themes seem to develop that carry through for a week or so.  For me, the theme that I am experiencing on Facebook this week is high school bands; my friends are talking about whether or not they were “band geeks” and who are some famous former high school band geeks.  I, personally, was a high school “choir geek,” but as the proud mother of two proud (former) high school band geeks, I certainly jumped on the band wagon as a band parent, chaperoning for many years.

All of these Facebook posts about band participation got me thinking about jumping on band wagons.  We do it all the time.  We do it with fashion; we do it with technology; and, yes, we do it with program evaluation.  We read or hear about some new way of collecting or analyzing information and, “poof,” our current processes are suddenly second rate.  We are magically transformed into followers of the new process.

Changing our processes can be a good thing or a bad thing.  Often, it depends on how much information we have about the new process.  Here are some things that we might want to consider before jumping in:

– What motivated the process developers to create this new process?  What were the problems with the old process that  they were trying to resolve?  How similar are their issues to the issues motivating us to look for a new process?

– What resources (financial, technological, personnel) did the process developers have at their disposal?  Do we have the same resources?  Can we readily acquire the resources that we will need?

– What went well with the new process?  What internal or external factors in the situation of the developers influenced their success?  How similar are these factors to our situation?

– What challenges did the process developers face?  How did they overcome those challenges?  What do they wish they were doing (or could be doing) differently?  How difficult was it to make course corrections, and did that influence the results of the process?  Will we face the same challenges, and if so, how might we be able to overcome them?

– What was the developers’ process for developing the process?  Did they build on someone else’s idea?  What changes did they make, and why?  Should we be asking the previous four sets of questions about some other process?

You get the idea.  We sometimes get information about seemingly great processes and jump on the band wagon without completely analyzing the situation.  When it works, it is great.  When it doesn’t, we are back on the sidelines looking for a new band wagon.

For example, I am currently looking at jumping on a band wagon of doing quality assurance reviews of “aged” cases.  I think that the regular QA questions (was eligibility done in a timely fashion, etc.?) will not be particularly insightful, especially in figuring out why the case has been open for years, but perhaps there are other factors in the assessment/eligibility process that might have foreshadowed the need for an unusually long rehabilitation journey.  Have any of you looked at aged cases?  How did you select them (based on the case’s age alone or also based on money spent)?  Do you have a process for that?  Do you have an evaluation instrument? Looking at the bullet points above, I need to look at a range of factors before I leap onto the band wagon!

Have you jumped on any band wagons?  What were the results (good or bad)?  Do you feel that you had the time to fully research your decision?  Is there a program evaluation idea/process that you are considering, but about which you would like more information?  We can help each other!  Leave your comments and questions below; check back to see if someone has asked a question that you can answer.  Let’s use this blog as a forum for professional development.


Greetings, Summit Group Members!

Allow me to introduce myself; I am HarrietAnn Litwin, a Management Analyst from Delaware DVR.  I was approached (nay, recruited) by a group of the Summit Navigators to write a semi-regular blog for the Summit Group web site.  In looking for a title, I have decided upon “View from the Summit” which is only slightly ironic as I live in a state where the highest point is just a bit over 400 feet above sea level!

I am not a program evaluation guru.  I arrived at VR with a Master’s in Rehab Counseling and a CRC.  Since starting as a Rehab Counselor for the Deaf in 1988, I have performed (and still perform) a large number of roles, as is particularly common in small agencies.  In the past few years, my roles have moved increasingly in the direction of program evaluation and quality assurance.  As is the case for many of you who started down the counselor path and moved onto the “central office” path, I am finding it necessary to learn while doing.

By attending the VR PEQA Summits, participating in Summit Reading Groups, and connecting with the Rehabilitation Program Evaluation Network (RPEN, a division of the National Rehab Association), I have found that I am not alone out here.

So, I invite you to join me on this trek.  The goal of View from the Summit is to spark discussions and conversations that lead to learning for all of us. This will only work if you, the members, become engaged.  I will come up with some of the topics. I encourage you to use the comment section not only to comment on what I have written, but also to suggest areas that we can explore together.

Other people are out there blogging about program evaluation.  I read the monthly article by Bob Behn, of Harvard, in his Performance Leadership Report.  In his November article, Bob wrote about a subject that is near and dear to many of us: “so I got the data; now what?”  Behn discusses the progression from measurement to management to leadership.  We collect vast amounts of data, such as RSA-911 reports, Monitoring Reviews, customer service surveys, quality assurance reviews, statewide needs assessments, etc.  It all tells us something, but the trick is to move through the information and select those pieces that are most important in making quality improvements and that are within our agency’s sphere of control.

Then, decisions need to be made on how to implement necessary change.  Is it a time issue, a resource allocation issue, a training issue, or something else?  Leadership discussion and buy-in, followed by leadership being out in front in making the change, are important, but ultimately, the buy-in of everyone implementing the improvement is crucial.

Keep an eye on the data.  If it is not moving in the desired direction, many things could be happening.  Perhaps more time is needed to see the change occur.  Perhaps a variable has been missed or the root cause was misjudged.  Perhaps the change, or the rationale behind it, needs to be revised or re-explained.  The role of the program evaluator is to lead leadership in continuous quality improvement.

Here is a link to Bob Behn’s article:

http://www.hks.harvard.edu/thebehnreport/All%20Issues/BehnReportNovember2013.pdf

The next move is yours! Please share a situation where you have faced the data “head-on” and worked with leadership and your entire VR agency to identify issues, create strategies, and implement successful (or problematic) change.