PART SIX

Evaluating Interventions

 

A foremost attraction of the ICC method of organizing change programs is that it has a built-in program evaluation. In fact, ICC is an intervention evaluation method. It starts by collecting facts about the present status of the system targeted for change, usually from three or more sources or perspectives. The facts are analyzed and used to establish goals for the intervention. In the action planning stage, measurable objectives are described with expected outcomes for various time periods. Action taking builds in regular reviews of progress, often repeating the same data collection methods used the first time. Thus, at any point, and certainly by closure, there is comparable data from at least three sources on system changes related to the intervention. The three data sources provide an opportunity to triangulate the data, a procedure that gives credibility and confirmability to any observed results of the intervention, assuring they are accurate and real.

In textbooks on program evaluation this ICC approach is called the "continuous monitoring" method. This method is more accurate in its reporting than some others, as it collects impact assessments at more than one or two points in time. Interventions are like the stock market, going up and down all the time, and the more of these ups and downs that can be recorded, the more complete the picture of the intervention and its effects is likely to be.
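
To make this concrete, here is a minimal sketch, in Python with invented sources and numbers, of what continuous monitoring with triangulation might look like: the same measure is collected from three sources at several points in the intervention, and a change is treated as credible only when all three sources agree on its direction.

    from statistics import mean

    # Hypothetical data: one satisfaction measure (1-10 scale) collected
    # from three sources at four points in an intervention.
    observations = {
        "entry":     {"staff": 4.1, "clients": 3.8, "managers": 4.5},
        "6 months":  {"staff": 5.0, "clients": 4.6, "managers": 5.2},
        "12 months": {"staff": 5.9, "clients": 5.5, "managers": 5.7},
        "closure":   {"staff": 6.4, "clients": 6.1, "managers": 6.6},
    }

    checkpoints = list(observations)
    for earlier, later in zip(checkpoints, checkpoints[1:]):
        changes = {src: observations[later][src] - observations[earlier][src]
                   for src in observations[earlier]}
        # Triangulation: credit the change only if all three sources
        # agree on its direction, not just on the average.
        agree = (all(c > 0 for c in changes.values())
                 or all(c < 0 for c in changes.values()))
        print(f"{earlier} -> {later}: mean change {mean(changes.values()):+.2f}, "
              f"sources agree: {agree}")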

Let me now put this "continuous monitoring" method of intervention evaluation into perspective. I hope this will encourage the reader to use the ICC model because of these evaluation strengths. For those interventions not able to use the ICC model, some very useful alternatives will be suggested.

Levels of Evaluation

LEVEL    TYPE        KEY QUESTION
LOW      INPUT       What time, resources and activities went into the intervention?
LOW      REACTION    How satisfied were the stakeholders with the program?
MEDIUM   LEARNING    What did the participants learn in the program?
HIGH     BEHAVIOUR   How did participants change their behaviour based on what they learned?
TOP      RESULTS     How useful were the results of these new behaviours?

 

An input evaluation looks at what was put into the intervention, in quantity and quality. For example, an input study of the Toronto Blue Jays baseball team would show the hours spent in training, the focus of the training (batting, fielding, base running, etc.) and describe the participation of the coaching staff. It might describe the 'power' meals the team ate and their nutritional qualities. It might also record the average hours of sleep members had each night, or the relaxing activities and atmosphere of the training location. Most input evaluations that I read in the human services describe the academic qualifications of the staff and their years of experience. Interesting information perhaps, but nothing that would satisfy a Jays fan who wants to know if the team is winning its games. Results is the name of this game, with everything riding on wins and losses, runs scored per game by the Jays and their opponents, individual batting averages, base running, fielding and injury comparisons. Above all, fans want to know whether the team's performance is improving and whether it has a chance at the pennant.

A reaction evaluation is typical of most group and organization interventions, especially those of a team building, group revitalization or training nature. I call this the 'happiness scale'. While I agree that happy participants are more likely to rehire the consultant, the assumption that happiness is a direct indicator of behaviour change and organization improvement is a bit thin.

Learning is the first step toward change, but not all learning, especially the acquisition of new knowledge, leads to changes in behaviour. Everyone knows people who have all the knowledge but don't practice what they preach. It is necessary to follow up with the stakeholders who gained new information and insights and determine what kind of behaviour change took place. It is like taking the graduates of a course in leadership, whose learnings were evaluated by a final exam grade, and following them up for a month to find out the extent to which they put the new learnings into practice.
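
A minimal sketch of that kind of follow-up, with hypothetical graduates, exam grades and one-month observations (none of these names or numbers come from the text):

    # Hypothetical leadership-course graduates: final exam grade (learning)
    # versus whether the new practices were observed a month later (behaviour).
    graduates = [
        {"name": "Graduate A", "exam": 92, "practices_observed": True},
        {"name": "Graduate B", "exam": 88, "practices_observed": False},
        {"name": "Graduate C", "exam": 75, "practices_observed": True},
        {"name": "Graduate D", "exam": 95, "practices_observed": False},
    ]

    passed = [g for g in graduates if g["exam"] >= 70]
    applied = [g for g in passed if g["practices_observed"]]
    print(f"{len(passed)} graduates passed the exam (learning), but only "
          f"{len(applied)} were observed applying it a month later (behaviour).")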

The results evaluation assesses the impact of the behaviour change on the targeted system. This is especially important in interventions where the goal is a change in the culture, as a culture is more than, and different from, the sum of individual behaviour changes. During the 1960s I was heavily involved in the Human Relations Training movement. In the early years our evaluations measured the learnings and attitude change of participants. We were delighted to find participants learned a great deal about themselves and their relationships with others. They became less authoritarian, dogmatic and prejudiced, and more cooperative and participative in their leadership style. Quite a few years later we started results evaluations and found over 5% of participants deciding to plan a new lifestyle, which included a career change. Some participants returned to work and gave their boss feedback about how they really thought things were going on the job. Others wanted to humanize their workplace; teachers introduced developmental discussions into their classrooms, and ministers opened their sermons into dialogues with parishioners. Our participants were seen by their colleagues as being open and personable, but perhaps flaky and less concerned about getting the job done. Clearly there were more results with sensitivity training than with any other kind of training, but it was difficult to decide which changes were helpful. The behaviour change and results were consistent with the self-actualizing objectives of the training, but in many cases were seen as counterproductive by the employer stakeholders (to say nothing of many spouses or partners). So in addition to knowing the Jays have had 21 wins and 8 losses, the concerned stakeholder (avid fan) would want to know where the team stands in the league and what its chances are of getting into the World Series.

Some program evaluators might feel that if you do a thorough results assessment you don't need the other methods. All of the methods are typically included in the ICC model: first, because they provide information necessary to the goal setting process, and second, because it is helpful to know which parts of the intervention are having the greatest impact, a cost/effectiveness consideration. Figure 17 compares the various evaluation methods and puts them into perspective.

Reviewing the ICC data collection methods recommended in the evaluating interventions template of Figure 17, all fit well into the continuous monitoring style. They all focus on outcomes and results. The Interest Census, which may have been done as a needs assessment or problem census, will show the extent to which stakeholders perceive their interests being met through the intervention. The same is true of the give/gets technique described in the entry phase. For example, residents of the Cree community in Mistassini, Quebec list housing (most are living year round in tents) and education as their top two problems. When a problem census is done again six months and eighteen months later and they are still the top two concerns, the outcome is that stakeholders still see them as the major concerns. When the third top concern of the first census, involvement in local decision making, disappears from the second and third censuses, the outcome is that the stakeholders no longer feel it is a problem. As an intervention progresses, the problems change and different problems arise.
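
A minimal sketch of how such repeated censuses can be compared, using the concerns named above (the later census entries and rankings are invented for illustration):

    # Ranked problem censuses at three points in the intervention.
    # The "start" concerns come from the Mistassini example; the later
    # lists are invented to illustrate the comparison.
    censuses = {
        "start":     ["housing", "education", "involvement in local decision making"],
        "6 months":  ["housing", "education", "youth recreation"],
        "18 months": ["housing", "education", "employment"],
    }

    baseline = censuses["start"]
    for when, concerns in censuses.items():
        if when == "start":
            continue
        persisting = [c for c in baseline if c in concerns]
        dropped = [c for c in baseline if c not in concerns]
        emerging = [c for c in concerns if c not in baseline]
        print(f"{when}: still named {persisting}; no longer named {dropped}; "
              f"newly raised {emerging}")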

FIGURE 17                Evaluating interventions

Sample timeline: Input —————> Intervention —————> Output —————> Results

Each evaluation method covers a different span of this timeline: the needs assessment sits at the very start, effort evaluation covers the inputs, process evaluation and program effectiveness cover the intervention itself, outcome and results evaluations cover the output and results, and continuous monitoring runs across the whole timeline.

  1. NEEDS ASSESSMENT (INTEREST CENSUS). What needs to be done?
  2. EFFORT EVALUATION. Input of time, energy, resources.
  3. PROCESS EVALUATION. How things were done and how participants reacted.
  4. PROGRAM EFFECTIVENESS. Learnings, skills, attitudes.
  5. CONTINUOUS MONITORING. ICC and SIR model.
  6. OUTCOME EVALUATION. Behaviour change and measurable objectives.
  7. RESULTS EVALUATION. Impact assessment.

 

The culture analysis works in much the same way, especially in the questionnaire or interview format, where the results are more easily quantified than observations. The outcome of intervention activity will show up as changes in participants' perception of their culture. The SWOT Summary, Stakeholder Analysis, and environmental scans all provide credible and confirmable qualitative measures of an intervention's outcome and results.

Consider a University that wants to increase the involvement and donations of its alumni. A stakeholder analysis starts with an alumni questionnaire recording attitudes, satisfaction and involvement related to the University and its activities. A repeat questionnaire to the same sample of hundreds of alumni eighteen months later has different results. Attitudes toward the University are much more positive and alumni are much more interested in participating in University activities. This data is supported by a 22% increase in registrations at university events (homecoming, short courses, regional meetings), by a 17% increase in total donations and by a 10% increase in the number of people donating. Quite credible measures of the outcomes and results of the intervention.
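
The arithmetic behind such figures is a simple baseline comparison. A sketch with invented baseline and follow-up counts, chosen only so the percentage changes match the example above:

    # Invented counts; only the resulting percentage changes (22%, 17%, 10%)
    # echo the alumni example in the text.
    measures = {
        "event registrations":  (1_000, 1_220),
        "total donations ($)":  (500_000, 585_000),
        "number of donors":     (2_000, 2_200),
    }

    for name, (baseline, followup) in measures.items():
        change = (followup - baseline) / baseline * 100
        print(f"{name}: {baseline} -> {followup} ({change:+.0f}%)")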

 

Measurable Objectives

In the goal setting and action planning phases I described the role of measurable objectives. These are specific activities whose expected outcomes can be measured. Figures 15 and 16 showed an example of how this was done. Measurable objectives are usually the single best approach to program evaluation, as they are specific to the project. Because they were established by the stakeholders during the goal setting and action planning activities, and monitored throughout the action taking phase of the intervention, they meet all the criteria of the ICC 'continuous monitoring' method of program evaluation.
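
As a rough sketch of how such objectives might be tracked through the action taking phase, consider a simple record of objective, target, review period and observed value (all names and numbers here are invented):

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class MeasurableObjective:
        """One objective from the action plan, reviewed at set intervals."""
        description: str
        target: float                     # the measurable expected outcome
        review_period: str
        observed: Optional[float] = None  # filled in during action taking

        def status(self) -> str:
            if self.observed is None:
                return "not yet reviewed"
            return "met" if self.observed >= self.target else "not met"

    objectives = [
        MeasurableObjective("committee meetings held", 3, "first quarter", observed=4),
        MeasurableObjective("new volunteers recruited", 10, "first quarter", observed=6),
        MeasurableObjective("regional conference attendance", 120, "year end"),
    ]

    for obj in objectives:
        print(f"{obj.description} (target {obj.target}, {obj.review_period}): {obj.status()}")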

The Centre Field Example shows an organization that, after many years of centralized programs focused on training, conferences and educational materials, decided to try an outreach project in local communities. The method was to hire local field consultants to work with the local coordinating committees, hoping to strengthen the committees and expand their influence in the community. This new field consultant program also represented a shift in service delivery focus: from experts doing training and providing technical educational material for people working in the field, to a community development focus hoping to influence local cultures or 'usual ways of doing things.' Evaluating the effectiveness of this new approach was a high priority for the Centre. After a year's experience, the 13 Field Consultants prepared the list of measurable indicators in one of their training sessions using the brainstorming technique (Figure 18).

Three other program evaluation tools are shown on the following pages; they were developed to follow up on three of the important measurable indicator areas.

  1. How well are the local committees doing (and are the Field Consultants helping them work more effectively)? Coordinating Committee Evaluation—Figure 19.

  2. How are the field consultants doing? Evaluation of their work by local committees. Centre Planning Questionnaire—Figure 20.

  3. Field consultants facilitate a regional conference once a year and increase participation in it. Field Consultant Action Plan—Figure 21.

 

