Discussion – Week 10

I’m studying and need help with a Social Science question to help me learn.

Discussion: Use of Research Methods in Program Evaluation

Planned program outcomes (that is, the goals, objectives, or intended impacts of a program) should specify what the program intends to accomplish in clear, measurable terms, so that the program’s success in achieving its goals can be determined.

Evaluations require collecting and interpreting data so that program officials can determine the program’s strengths and weaknesses, assess how well it is meeting its goals, make plans for improvements, and decide whether it is a cost-effective approach to mitigating or addressing a community need. Interpreting the results for stakeholders is a key component of the evaluation process.

Program objectives should be:

(1) actionable, focused on what the program staff and participants can do;

(2) relevant, realistic, and attainable;

(3) measurable or observable;

(4) under the control of the program staff or participants (they should not depend on the actions of others);

(5) timely and meaningful; and

(6) directed toward continuous improvement.

How do we evaluate these outcomes? How do we evaluate all the effects (outcomes) of a program (positive, negative, or neutral) so we can understand its impact on the recipients and the community?

Our objective this week is to explore research methods for evaluating program outcomes, focusing specifically on group programs, which are common in social work.

Just as with other types of programs, social workers must know how to select the appropriate research methods for evaluating program outcomes.

For this discussion, you will select research methods consisting of indicators of change, appropriate questions, and data collection methods that are appropriate for evaluating a foster parent training program described in the case study assigned for this week.

To prepare for this discussion, review the assigned case study entitled “Social Work Research: Planning a Program Evaluation” in Social Work Case Studies: Concentration Year (Plummer, Makris, & Brocksen, 2014).

In the assigned case study, Joan, a social worker and PhD student, plans to evaluate a new curriculum for foster parent training for a large agency with seven regional centers that train new foster parents.

According to Plummer, Makris, and Brocksen (2014), “[t]he primary goals of this new training program include reducing foster placement disruptions, improving the quality of services delivered, and increasing child well-being through better trained and skilled foster families”.

Joan intends to compare the new method of foster parent training with the current training method used by the agency to determine if the new curriculum is superior to the previous one. She plans to use the new curriculum in three of the regional centers and the current method in the agency’s other four regional centers and compare the effectiveness of each approach.

To ensure that the curriculum is the only difference between the new and current training methods, she assigns the same trainers to every regional center so that all the parents are taught by the same trainers. Both training methods will require the same number of hours of training consisting of a series of six three-hour sessions.

She will ensure that the types of people being trained as foster parents at each center are essentially the same (I hope – otherwise she can’t make this comparison!).

By using the same type of participants, personnel, and training schedule at each site, Joan can ensure that the only difference between sites is the curriculum. This arrangement is ideal for comparing the impact of the two programs.
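Joan’s design is essentially a between-groups comparison: outcomes from the three centers using the new curriculum versus the four centers using the current method. As a rough illustration of how such a comparison might eventually be analyzed once outcome data are collected, here is a minimal Python sketch. All of the numbers are invented for illustration; they do not come from the case study, and a real evaluation would use many more data points than one summary rate per center.

```python
# Hypothetical sketch: comparing placement-disruption rates (disruptions
# per 100 placements) between the two groups of regional centers.
# All figures below are invented for illustration only.
from statistics import mean, variance
from math import sqrt

new_curriculum = [12.0, 14.5, 11.0]        # three centers, new training
current_method = [18.0, 16.5, 19.0, 17.5]  # four centers, current training

def welch_t(a, b):
    """Welch's t statistic for two independent samples with unequal variances."""
    na, nb = len(a), len(b)
    va, vb = variance(a), variance(b)  # sample variances
    return (mean(a) - mean(b)) / sqrt(va / na + vb / nb)

t = welch_t(new_curriculum, current_method)
print(f"mean disruption rate (new curriculum): {mean(new_curriculum):.2f}")
print(f"mean disruption rate (current method): {mean(current_method):.2f}")
print(f"Welch t statistic: {t:.2f}")
```

In practice an evaluator would choose statistics suited to the actual indicators and sample sizes, but the sketch shows the basic logic of the comparison: compute each group’s average outcome, then ask whether the difference is larger than chance variation would suggest.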

___________________________________________________________

Joan comes to you and asks you to help her choose the best research methods for evaluating the outcomes of the two types of foster parent education classes.

Choosing research methods to evaluate program outcomes depends on many factors, including these, to name a few:

  • available resources for conducting the evaluation, including time, funds, number of staff available and their level of knowledge and experience with the evaluation process
  • the tools that are most likely to engage people to participate in the evaluation and share valuable information with the evaluators
  • the audience for the information that will be obtained by the evaluators (stakeholders, funders, government accountability officials, interested community members, program recipients, and others)
  • the stage of the program being evaluated (a program in progress or one that has been completed)
  • the goals of the evaluation, which may include these:
    • to understand how well the program met its goals
    • to assess whether the program maintained fidelity to the program plans outlined in the logic model
    • to discover what participants learn or what skills they acquire in the program
    • to assess changes in knowledge, attitudes, perceptions, values, or skills that can be attributed to participation in the program (positive, negative, or neutral)
    • to determine the expected short-term and long-term changes in participants or conditions as a result of the program, stated in behavioral terms or shifts in attitudes or opinions
    • to learn how the staff conducted the program and how closely they maintained fidelity to the program plan
    • to increase awareness of what works (and what doesn’t) to solve the problem or provide the service for which the program was created
    • to identify successful and less-than-successful program activities
    • to understand recipients’ opinions about the program activities they found most beneficial (caution: “feelings are often poor indicators that your service made lasting impact”; McNamara, 2002)
    • to audit the program for efficiency in terms of time and money
    • to learn about factors that aid or hinder program success
    • to select program components to improve, maintain, expand, or eliminate
    • to advocate for program continuation, particularly in the context of budget cuts
    • to make decisions about whether to continue, modify, or terminate the program

Once you know what is feasible in terms of resources, time, and funding, and what you or your stakeholders need to know, you can begin to consider the questions you would like to ask stakeholders, including program staff, program participants, and any others whom you think may have insight into the effectiveness of the program.

Taylor-Powell, Steele, and Douglah (2008) note that

[e]valuations [that are] focused on measuring outcomes include questions that ask about changes and levels of performance for individuals, families, groups, organizations, and communities. These changes may be related to knowledge, attitudes, skills, motivations, plans, decision making, behaviors, practices, policies, and social, economic, civic, and environmental conditions.

An excellent list of questions for use in a program evaluation is found in Building Capacity in Evaluating Outcomes (Taylor-Powell, Steele, & Douglah, 2008), published by University of Wisconsin-Extension, Program Development and Evaluation: https://www.wcasa.org/file_open.php?id=921. This booklet provides extensive guidance about the types of questions to ask in an outcome evaluation and the methods for gathering information about program outcomes.

In addition to the questions contained in the links, the following list provides outcome-focused questions. No single evaluation would try to answer all of them; they are provided to spark conversation and exploration […] in your discussion posts.

Sample evaluation questions about outcomes/impacts

  • What do people do differently as a result of the program?
  • What do people learn, gain, accomplish?
  • What changes occur [and for whom] as a result of the program?
  • [Do different changes occur for people with differences in] gender, socio-economic status, lifestyle, religion, education, ethnicity, previous experience, situation, etc.?
  • Who benefits [from the program] and how?
  • What are the [emotional, physical], social, economic, [and] environmental impacts [of the program] (positive and negative) on people, communities, the environment?
  • What is the extent of the impact?
  • To what extent have we reached our goals [or] performance targets?
  • What program factor[s] or activities relate to better outcomes? Which seem to help participants the most? The least?
  • Are participants satisfied with what they gained from the program?
  • How do the outcomes of this program compare to other, similar programs?
  • Is the program efficient? That is, does it produce beneficial results without excessive outlay of time, effort, and resources?
  • Are the results worth the money and time invested in it? How efficiently are clientele and agency resources being used?
  • How well does the program respond to the [statement of] need and program purpose?

After you know what information you would like to obtain for your evaluation, you must choose appropriate data collection tools. How will you get valid and useful answers to the questions about the issues that you want to learn about?

All data collection methods have advantages and disadvantages. Some considerations for choosing your data collection techniques are listed below:

  • How useful is your method in getting the information you seek?
  • What assures you of the validity of the data collection method?
  • Is the method reliable, that is, is it understood by your group leaders and respondents in the same way?
  • Is the method precise enough that the questions and answers are not ambiguous, enabling you to truly compare apples to apples?
  • Is the method feasible, given your budget, resources, and time? (Use your imagination on this.)
  • How expensive is it to collect data with this method?
  • Note that no method is perfect; some have advantages in one area but not in another. All these methods are respectable, and there is no “wrong” method. Select the method based on your level of evaluation and your interest, and assess it according to its strengths and weaknesses in this situation.

________________________________________________________


Inform Joan (and us) how she might conduct her program evaluation. Envision yourself as a consultant whom Joan contacts for assistance in planning her evaluation. She is not sure how to proceed because there are many elements of an evaluation from which she may choose, and she does not know the best methods for gathering the information she needs.

Your task: use the tables below.

  1. Identify two indicators of change or effectiveness that you think would be useful in an outcome evaluation of a foster parent training program (your choice: judgments about program activities from staff or participants; changes in behaviors, values, or attitudes; expected long-term changes; program efficiency in terms of time and finances; adherence (fidelity) to the program plan; or an indicator that interests you). Place them in Table 1 below.
  2. After selecting your indicators, explain how each is relevant to an outcome evaluation (2-3 sentences).

Table 1

Indicators of program effects:

Select TWO:

  1. judgments about program activities (staff or participants);
  2. changes in behaviors, attitudes, skills, values, etc.;
  3. expected long-term changes in participants or conditions;
  4. program efficiency;
  5. fidelity to the program plan (logic model);
  6. an indicator of interest to YOU

Whom would you ask?

How is this indicator relevant to an outcome evaluation?

List four questions that would effectively obtain information about the program’s outcomes on the indicators you chose in Table 1 for the foster parent training group. You may develop your own questions or adapt them from the list above or from this learning resource: Basic Guide to Program Evaluation (Including Outcomes Evaluation) (McNamara, 2002) (https://managementhelp.org/evaluation/program-evaluation-guide.htm#anchor1585345). Read the section entitled Outcomes-Based Evaluation for ideas on the purpose and methods for obtaining information on program outcomes.

Strengthening Nonprofits: A Capacity Builder’s Resource Library (https://www.acf.hhs.gov/sites/default/files/ocs/measuring_outcomes.pdf) may be useful in exploring the questions listed above in detail. Do not copy your questions from these lists. Notice how this publication describes the performance indicators of the outcomes. (Your list of four unique questions that apply to the foster parent training program will be placed in Table 2 below.)

  3. Select a data collection method you would use to obtain information for each question, and list its advantages and disadvantages in the template below. Use the table from Overview of Methods to Collect Information, found about halfway through the Basic Guide to Outcomes-Based Evaluation for Nonprofit Organizations with Very Limited Resources linked here. Read through the methods and their uses in the paragraphs that follow the table.
  4. Enter the indicators, questions, and data collection methods into Table 2 below.

Table 2.

Indicator from Table 1

Questions to ask to learn about the change in the indicator as a result of group participation

Data Collection Method, with advantages and disadvantages

Indicator #1:

Indicator #2:

  5. In one sentence for each indicator, explain your reason for choosing the data collection method for each of the questions you selected, based on the criteria listed in Dudley (2014, pp. 220-221): usefulness, validity, reliability, feasibility (use your imagination), and cost (use your knowledge), and its ability to prompt clear and precise information from the respondent.

Answer this question by completing these sentences (5a, 5b):

5a. My data collection methods for Indicator 1 [name] ________ include _____ [specify your selected methods] _____. (Enter these from Table 2.)

According to Dudley (2014), when applied to a program evaluation such as the one described in the case study they [are useful/have limited usefulness] [are valid/have limited validity] [are reliable/have limited reliability] [are feasible/have limited feasibility] [are cost-effective/are costly] in this situation because __________.

5b. My data collection methods for Indicator 2 [name] ________ include _____ [specify your selected methods] _____. (Enter these from Table 2.)

According to Dudley (2014), when applied to a program evaluation such as the one described in the case study they [are useful/have limited usefulness] [are valid/have limited validity] [are reliable/have limited reliability] [are feasible/have limited feasibility] [are cost-effective/are costly] in this situation because __________.
