

“Big” Thinking in Performance Management

The other day, I was watching the movie Big. Now, generally, I find Big to be a very amusing movie; however, there is one short scene that always bugs me. When Josh arrives at work on his first day, he is led to his desk in the cubicle farm and immediately starts plugging away. Even at 12 years of age, he is able to complete his work at a fast pace. The part that bothers me every time I watch the movie is when his cubicle neighbor tells him to slow down and “pace yourself,” so as not to make the others look bad.

The scene bothers me on several counts. The cubicle neighbor knows that he is not working at optimal speed, and several factors may be at work. He may simply be lazy and not want to work too hard; after all, completing work efficiently just leads to more work being piled on (no good deed goes unpunished!). He may also feel that, as long as there always appears to be more work to do, his employment is secure. Either way, the cubicle neighbor influences, or at least attempts to influence, Josh to slow down, and it is unclear how many other co-workers are purposely working at a sub-optimal pace.

If the goal is to find out how much work there is to do and how many people it takes to do it, then fear that managers will “take advantage” of people who work efficiently, or uneasiness about job security, is a barrier to accurate results.

This is a performance management nightmare!

Even if we accept that the majority of employees work with reasonable efficiency, those who intentionally work at a sub-optimal level can make performance difficult to manage. Assuming that accurate performance measurement is important (which, for the sake of this blog, I am willing to do), I began wondering what strategies management might use to encourage employees to perform efficiently and, importantly, how we, as program evaluators, might contribute.

One way that program evaluators can contribute is through measurement of the work environment. Through an anonymous internal employee environment assessment, management can gain insight into employees’ perceptions and motivations. For example, employees might be asked to agree or disagree with statements about their individual beliefs, attitudes, and practices, as well as those of the organization. Management can then use that information to address beliefs and behaviors that run contrary to organizational performance, improving both individual and organizational efficiency and morale.
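To make this concrete, here is a minimal sketch, in Python, of how anonymized agree/disagree responses might be tallied by statement. The statements, field names, and data are hypothetical illustrations, not a validated instrument:

```python
# Minimal sketch: summarize anonymous agree/disagree survey responses.
# Statements and data are hypothetical illustrations only.
from collections import Counter

statements = {
    "Q1": "Finishing work quickly just leads to more work being piled on.",
    "Q2": "Working efficiently is recognized and rewarded here.",
}

# Each anonymous response maps a statement ID to "agree" or "disagree".
responses = [
    {"Q1": "agree", "Q2": "disagree"},
    {"Q1": "agree", "Q2": "agree"},
    {"Q1": "disagree", "Q2": "disagree"},
]

for qid, text in statements.items():
    counts = Counter(r[qid] for r in responses if qid in r)
    total = sum(counts.values())
    pct_agree = 100.0 * counts["agree"] / total if total else 0.0
    print(f"{qid} ({pct_agree:.0f}% agree, n={total}): {text}")
```

High agreement with a statement like Q1 would flag exactly the “no good deed goes unpunished” belief described above, giving management something specific to address.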

Program evaluators may also work with management to identify data and build dashboards that highlight efficient employees within the organization, with one important caveat: the comparisons must be between individuals in similar working environments. Public dashboard comparisons of performance in situations that employees believe to be dissimilar or unjustified can be demotivating and can actually decrease performance, while fair comparisons may lead to healthy competition.
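To illustrate the caveat, here is a minimal sketch, in Python, that compares each employee only to peers in the same working environment rather than to the organization as a whole. The field names and figures are hypothetical:

```python
# Minimal sketch: dashboard comparisons within similar working environments.
# Field names and numbers are hypothetical illustrations only.
from statistics import mean

records = [
    {"employee": "A", "office": "urban", "cases_closed": 40},
    {"employee": "B", "office": "urban", "cases_closed": 34},
    {"employee": "C", "office": "rural", "cases_closed": 18},
    {"employee": "D", "office": "rural", "cases_closed": 22},
]

# Group staff by working environment before making any comparison.
groups = {}
for rec in records:
    groups.setdefault(rec["office"], []).append(rec)

for office, staff in groups.items():
    benchmark = mean(rec["cases_closed"] for rec in staff)
    for rec in staff:
        delta = rec["cases_closed"] - benchmark
        print(f"{office}: employee {rec['employee']} is {delta:+.1f} vs. peer average")
```

Comparing the rural staff against the urban numbers would be exactly the kind of unjustified comparison warned about above; grouping by environment first keeps the competition fair.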

When watching Big, I wonder not only what Josh would have been thinking (had he been a real person) but also what the scriptwriters were thinking. I see the scene as a commentary on organizational performance and wonder whether a program evaluator could have helped that toy manufacturer identify the reasons for sub-optimal performance and address morale in the cubicle farm.

Please react by commenting on this blog.  Have you encountered any of the sub-optimal behaviors identified above?  How were they identified?  How were they addressed?  How does your organization handle performance management?

And, by the way, I hope that I will be seeing many of you at the upcoming 7th Annual Summit on Program Evaluation in Vocational Rehabilitation, September 8-9, in Louisville.

NOTE: I am sure that some of you have noticed that it has been a substantial amount of time since my last blog post. I have been extremely busy in my office, and I have also done a bit of personal traveling. I will endeavor to blog more often; however, if you are interested in submitting a guest blog, please contact me at HarrietAnn.Litwin@state.de.us.


Program Evaluation and Bandwagons

In social media, I find that themes develop and carry through for a week or so. The theme I am experiencing on Facebook this week is high school bands; my friends are discussing whether or not they were “band geeks” and naming famous former high school band geeks. I, personally, was a high school “choir geek,” but as the proud mother of two proud (former) high school band geeks, I certainly jumped on the bandwagon as a band parent, chaperoning for many years.

All of these Facebook posts about band participation got me thinking about jumping on bandwagons. We do it all the time. We do it with fashion; we do it with technology; and, yes, we do it with program evaluation. We read or hear about some new way of collecting or analyzing information and, “poof,” our current processes suddenly seem second rate, and we rush to adopt the new one.

Changing our processes can be a good thing or a bad thing.  Often, it depends on how much information we have about the new process.  Here are some things that we might want to consider before jumping in:

– What motivated the process developers to create this new process? What were the problems with the old process that they were trying to resolve? How similar are their issues to the issues motivating us to look for a new process?

– What resources (financial, technological, personnel) did the process developers have at their disposal?  Do we have the same resources?  Can we readily acquire the resources that we will need?

– What went well with the new process?  What internal or external factors in the situation of the developers influenced their success?  How similar are these factors to our situation?

– What challenges did the process developers face?  How did they overcome those challenges?  What do they wish they were doing (or could be doing) differently?  How difficult was it to make course corrections, and did that influence the results of the process?  Will we face the same challenges, and if so, how might we be able to overcome them?

– What was the developers’ own process for creating the new process? Did they build on someone else’s idea? What changes did they make, and why? Should we be asking the previous four bulleted questions about that other process?

You get the idea. We sometimes get information about seemingly great processes and jump on the bandwagon without completely analyzing the situation. When it works, it is great. When it doesn’t, we are back on the sidelines looking for a new bandwagon.

For example, I am currently looking at jumping on a bandwagon of doing quality assurance reviews of “aged” cases. I think that the regular QA questions (was eligibility determined in a timely fashion, etc.?) will not be particularly insightful, especially for figuring out why a case has been open for years, but perhaps there are other factors in the assessment/eligibility process that might have foreshadowed the need for an unusually long rehabilitation journey. Have any of you looked at aged cases? How did you select them (based on the case’s age alone, or also based on money spent)? Do you have a process for that? Do you have an evaluation instrument? Looking at the bullet points above, I need to examine a range of factors before I leap onto the bandwagon!
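For what it is worth, here is a minimal sketch, in Python, of how such a review sample might be pulled, flagging cases by age, by money spent, or both. The field names, thresholds, and data are hypothetical placeholders for an agency’s own case-management extract:

```python
# Minimal sketch: select "aged" cases for a QA review sample.
# Field names, thresholds, and data are hypothetical illustrations only.
from datetime import date

AGE_YEARS = 3          # cases open longer than this count as "aged"
SPEND_FLOOR = 10000.0  # optionally also flag high-expenditure cases

cases = [
    {"id": 101, "opened": date(2008, 3, 1), "spent": 4200.00},
    {"id": 102, "opened": date(2012, 9, 15), "spent": 18500.00},
    {"id": 103, "opened": date(2013, 1, 10), "spent": 950.00},
]

today = date.today()
sample = [
    c for c in cases
    if (today - c["opened"]).days > AGE_YEARS * 365
    or c["spent"] >= SPEND_FLOOR
]

for c in sorted(sample, key=lambda c: c["opened"]):
    print(f"case {c['id']}: opened {c['opened']}, ${c['spent']:,.2f} spent")
```

Whether age alone or age plus expenditures is the right filter is exactly the selection question posed above.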

Have you jumped on any bandwagons? What were the results (good or bad)? Do you feel that you had the time to fully research your decision? Is there a program evaluation idea or process that you are considering, but about which you would like more information? We can help each other! Leave your comments and questions below, and check back to see if someone has asked a question that you can answer. Let’s use this blog as a forum for professional development.


Greetings, Summit Group Members!

Allow me to introduce myself: I am HarrietAnn Litwin, a Management Analyst from Delaware DVR. I was approached (nay, recruited) by a group of the Summit Navigators to write a semi-regular blog for the Summit Group website. In looking for a title, I decided upon “View from the Summit,” which is only slightly ironic, as I live in a state where the highest point is just a bit over 400 feet above sea level!

I am not a program evaluation guru. I arrived at VR with a Master’s in Rehab Counseling and a CRC. Since starting as a Rehab Counselor for the Deaf in 1988, I have performed (and still perform) a large number of roles, as is particularly common in small agencies. In the past few years, my roles have moved increasingly in the direction of program evaluation and quality assurance. As is the case for many of you who started down the counselor path and moved into the “central office” path, I am finding it necessary to learn while doing.

By attending the VR PEQA Summits, participating in Summit Reading Groups, and connecting with the Rehabilitation Program Evaluation Network (RPEN, a division of the National Rehab Association), I have found that I am not alone out here.

So, I invite you to join me on this trek.  The goal of View from the Summit is to spark discussions and conversations that lead to learning for all of us. This will only work if you, the members, become engaged.  I will come up with some of the topics. I encourage you to use the comment section not only to comment on what I have written, but also to suggest areas that we can explore together.

Other people are out there blogging about program evaluation. I read the monthly article by Bob Behn, of Harvard, in his Performance Leadership Report. In his November article, Behn wrote about a subject that is near and dear to many of us: “So I got the data; now what?” He discusses the progression from measurement to management to leadership. We collect vast amounts of data, such as RSA-911 case service reports, monitoring reviews, customer service surveys, quality assurance reviews, and statewide needs assessments. It all tells us something, but the trick is to sift through the information and select those pieces that are most important for making quality improvements and that are within our agency’s sphere of control.

Then decisions need to be made on how to implement the necessary change. Is it a time issue, a resource allocation issue, a training issue, or something else? Leadership discussion and buy-in, followed by leadership out in front making the change, are important, but ultimately it is the buy-in of everyone implementing the improvement that is crucial.

Keep an eye on the data. If it is not moving in the desired direction, many things could be happening. Perhaps more time is needed to see the change occur. Perhaps a variable has been missed, or the root cause was misjudged. Perhaps the change, or the rationale behind it, needs to be revised or re-explained. The role of the program evaluator is to be a leader to leadership in continuous quality improvement.

Here is a link to Bob Behn’s article:

http://www.hks.harvard.edu/thebehnreport/All%20Issues/BehnReportNovember2013.pdf

The next move is yours! Please share a situation where you have faced the data “head-on” and worked with leadership and your entire VR agency to identify issues, create strategies, and implement successful (or problematic) change.


March 2013 Program Evaluator’s Moment

While I am no longer directly involved in performing tasks as a program evaluator, I keep in close contact with leadership in both the state agency and the state rehabilitation council (SRC). I feel strongly that the partnership between the state VR program and the SRC should be collaborative, should facilitate learning, and should serve the goals of improving VR performance and organizational effectiveness.

1) What thoughts do you have about the partnership between these two entities?

2) How can we foster a more collaborative relationship whereby we extend beyond participation in the development of the statewide needs assessment, customer satisfaction surveys, and State Plan development?

3) What other ways do you network with your SRC members?

If you have comments or questions and do not wish to post to the blog, please feel free to contact me:

Darlene Groomes

Associate Professor, School of Education and Human Services, Oakland University

By email: groomes@oakland.edu 

By telephone:  248-370-4237


An Upcoming Webinar

Webinar: New research tools for exploring disability- and rehabilitation-related national survey and administrative claims data

When: February 20, 2013, 2:00-3:00 p.m. (EST)

Population-based data sources such as national surveys and administrative claims data are valuable resources for testing hypotheses and generating national-level estimates about disability- and rehabilitation-related issues. Unfortunately, it can be difficult to identify which datasets are available and, further, which data are most appropriate for addressing a specific research interest.

This presentation will introduce two new, free, web-based resources designed to help researchers learn:

– What datasets related to disability and rehabilitation are out there?
– What topics are covered in each dataset?
– What are each dataset’s strengths and limitations?
– How do I access the datasets?

Presenters:

William Erickson and Arun Karpur

Employment & Disability Institute

Cornell University

The Rehabilitation Dataset Directory is a browsable, searchable database providing an overview, description, sample, and other pertinent information for more than 30 datasets. The Rehabilitation Research Cross-dataset Variable Catalog allows the exploration of variables organized by topic (including disability and health conditions, healthcare, health behaviors, and more) simultaneously across six major datasets.

To register for this free webinar, please go to: http://www.ilr.cornell.edu/edi/register/index.cfm?event=4211

The tools were developed by the Employment and Disability Institute (EDI) at Cornell University in collaboration with the Center for Rehabilitation Research using Large Datasets (CRRLD) at the University of Texas Medical Branch (UTMB). This work was funded by a sub-contract to EDI through funding from the CRRLD.

The Center for Rehabilitation Research using Large Datasets (CRRLD) is funded by the Eunice Kennedy Shriver National Institute of Child Health and Human Development and is part of the Medical Rehabilitation Infrastructure Network (Grant # R24 HD065702).