Newsletter: May 2011 - Vol 11, Issue 5


The Varying Values of Evaluation


Dear Colleagues,

 

May Day, Cinco de Mayo, Mother's Day - these are all celebrations of events and people that we hold dear. Celebrations and rituals afford special moments in the day to actively value people, places, and memories that matter to us.

 

Related to our own engagement with "values and valuing in evaluation" (this year's conference theme), I point to three major ways in which values permeate our practice - through (1) the rich diversity of stakeholder interests and perspectives in the contexts in which we work, (2) our designs and methodologies, and (3) the role of evaluation in society - all as mediated by each evaluator's own values commitments. In this column, I'll take up the second of these avenues by which values traverse our evaluation practices.

 

I believe that the different values promoted in our varied evaluation designs and methodologies are all legitimate. They represent distinct historical branches in the evolution of the philosophy of science, the methodology of social science, and evaluation theory. On some branches, values of cultural responsiveness and local meaningfulness are foregrounded. On others, values of methodological rigor and political neutrality are privileged. The fruit of other branches comes from a valuing of learning and critical reflection. And there are the branches heavy with the challenges of advancing social justice and equity in democratic societies. In my view, this rich plurality of values stances in our evaluation methods and practice is generative and contributes in important ways to our community's vitality and health.

 

We share a common challenge, however. And that is the infiltration of politics into decisions about the methodology and thus the integrity of the work that we do. We live in a relentlessly data-driven and accountability-centered era. All decision makers these days demand "credible evidence" upon which to base their policy decisions. But, what constitutes credibility of evidence - especially given (1) the legitimate plurality of methods and approaches that inhabit our community, and (2) the legitimate plurality of interests and values engaged by the varied methods of evaluation? How do we preserve the methodological integrity of our work amidst political pressures for particular methods and data? How do we safeguard the difference between intelligent use of methods and methodolatry? How do we stand up for the importance of evaluation as an independent social practice, even as we acknowledge its values dimensions and commitments?

 

Until next month, regards,

Jennifer


Jennifer Greene

AEA President, 2011

jcgreene@illinois.edu

In This Issue
Policy Watch with George Grob
Statement on Cultural Competence
TechTalk with LaMarcus Bolton
New GEDI Chairs
Series on Graduate Programs
Book: Social Psychology and Evaluation
JMDE Republishes Popular Series
Data Den - Member Ethnicity
New Job Postings
Get Involved
About Us
Policy Watch - Safeguarding Evaluator Independence
From George Grob, Consultant to the Evaluation Policy Task Force


In March, we discussed the topic of threats to evaluator independence. This was spurred by a New York Times article about political pressures to suppress policy research on the dangers of the gas-drilling technique known as hydrofracking. AEA President Jennifer Greene signed a letter to the Times editor emphasizing the need to shield evaluators from such interference.

 

This leads to the more general question of how to safeguard evaluator independence through evaluation policies. Fortunately, Frederick M. Kaiser and Clinton T. Brass of the Congressional Research Service (CRS) developed an excellent treatise on this subject, Independent Evaluators of Federal Programs: Approaches, Devices, and Examples.

There is no simple answer to the question of just how independent an evaluator should be. For example, reasonable people may legitimately disagree about whether decisions on how broadly to distribute an evaluation report should remain entirely within the discretion of the evaluation commissioner. Similar disagreements can arise over an evaluator's right to access certain data sets, or over whether evaluators should include recommendations in their reports. There are also disagreements about circumstantial factors that may affect independence, such as whether evaluators are employees of the agency whose programs are being evaluated or work for consulting firms that are paid by such an agency.

The CRS report points out that one of the best ways to protect evaluator independence is through formal evaluation policies. This has been done through legislation for the Government Accountability Office and inspectors general, but it might also be accomplished through administrative procedures. The report describes possible attributes of such independence, including criteria for selection of the chief evaluator, recruitment of evaluators, tenure, funding, supervision, purpose of the office, scope and type of evaluations authorized, standards and procedures, reporting schedules, report availability and dissemination, and obligations of the evaluated program agencies to respond to evaluators' findings and recommendations.

The report concludes: "independent evaluators follow no single path or set of directions. Instead, they reveal numerous ways and directives for possible approaches to assess federal programs; provide relevant information and data to the executive, legislature, stakeholders, and the general public; enhance oversight of affected programs; and aid in the development of new legislation or executive directives."

AEA's Evaluation Roadmap for a More Effective Government provides advice on evaluator independence in several places, including the following:

"Independence. Although the heads of federal agencies and their component organizations should participate in establishing evaluation agendas, budgets, schedules, and priorities, the independence of evaluators must be maintained with respect to the design, conduct, and results of their evaluation studies." (p. 8)


Evaluators are often asked, or need, to talk about their independence with clients, and they may be called upon by agency officials to help develop policies or procedures about it. They may find the AEA Roadmap and the CRS report handy resources for insight on this matter.

 

Go to the EPTF Website and Join the EPTF Discussion List.

 
Members Approve Statement on Cultural Competence in Evaluation

From Cindy Crusto, Chair of the Task Force on Cultural Competence in Evaluation

 

In April 2011, the AEA membership approved the AEA Public Statement on Cultural Competence in Evaluation. This culminates six years of work and stems from the Building Diversity Initiative (BDI), a joint initiative with the W.K. Kellogg Foundation that began in 1999 to address the complexity of needs and expectations of evaluators working across cultures and in diverse communities. The intent of the BDI was (1) to improve the quality and effectiveness of evaluation by increasing the number of racially and ethnically diverse evaluators in the evaluation profession, and (2) to improve the capacity of all evaluators to work across cultures.

 

The Task Force on Cultural Competence in Evaluation, a Task Force of the AEA Diversity Committee, crafted the statement. The statement was informed by: a panel of experts in evaluation/research and/or cultural competence, a thorough review of other professional statements and guidelines addressing cultural competence, theory and research on culture and cultural competence, and feedback from multiple reviewers, including the AEA general membership and committees.

   

The statement affirms the significance of cultural competence in evaluation and informs the public of AEA's expectations for culturally competent evaluation practice. AEA's position is that cultural competence in evaluation practice is critical for the profession and for the greater good of society.

 

Because virtually all evaluators will, at some point in their careers, work outside the cultural contexts most familiar to them, cultural competence in evaluation is relevant for all evaluators and for all types of evaluations. AEA's commitment to the statement underscores that culture is an important dimension to which evaluators must attend and that cultural competence is essential to high-quality and ethical evaluation practice. This call for attention to culture and cultural competence is consistent with the AEA Guiding Principles, and the statement is an important step in expounding on them.

 

Important next steps for the statement relate to dissemination and to translation. AEA's Public Statement on Cultural Competence in Evaluation will be posted to the AEA website along with other resources the Task Force developed to further support knowledge, awareness, and skill development. Because the statement was, by design, not a "how to" guide, an important next step is to translate the principles presented in the statement into practice. Undoubtedly, rich discussions will ensue, which will influence all aspects of evaluation practice. The Task Force on Cultural Competence in Evaluation encourages you to attend its planned session at Evaluation 2011 that will focus on the practical application of concepts in the statement. Panelists will provide practice examples that illustrate aspects of the statement and attendees will reflect on the statement, presentations, and their work. Stay tuned for more details. 

TechTalk - Tools for Successful Collaboration
From LaMarcus Bolton, AEA Technology Director



Today, perhaps more than ever, successful collaboration can be a key determinant in the success of an evaluand. Collaboration is imperative among evaluators for many reasons. It allows evaluators to gain fresh insight by bringing others on board for their projects. In addition, for those working from an empowerment evaluation perspective, collaboration allows everyone involved to feel that they have a voice.

The benefits of collaboration are many. It can bring in feedback from outside experts in a given domain. With stakeholders, collaboration opens a new level of communication; in particular, stakeholders can contribute to projects that directly affect them.

Though there may be no true substitute for real-life teamwork, various technological tools can be beneficial when physically working together is not an option. Some of our aea365 contributors have highlighted several. Google Chat allows instant messaging between two or more parties. For those who prefer to collaborate via telephone, Skype is often suggested; it allows users to converse with other Skype users nation- and worldwide. For videoconferencing, ooVoo allows up to six users to video chat face-to-face on a computer or phone. Diigo allows team members to share files, webpages, bookmarks, and other file types in an online "cloud" so everyone is able to access them.

While AEA staff use a variety of the above, our most commonly used collaboration tool is Google Docs. Google Docs allows users to create documents, spreadsheets, and presentations, and to edit and collaborate on them with others in real time. We use Google Docs to circulate drafts among staff and others for editing, thus facilitating collaboration; a final draft is then exported to Microsoft Office programs such as Word. We also use Google Docs to house internal "living documents" that staff are constantly updating.

To learn more about collaborative tools like these, feel free to join Susan Kistler and me at this year's AEA/CDC Summer Institute, where we are presenting a session entitled "25+ Low-cost/No-cost Tools for Evaluators." To find out more, please visit the Summer Institute website.

And, if you have any favorite tech tools not discussed in this article, we would love to know! We are in the process of gathering tech tools of interest to evaluators. To share yours, reach me at marcus@eval.org.

 

The opinions above are my own and do not represent the official position of AEA nor are they an endorsement by the association. 


New Chairs for AEA's Graduate Education Diversity Internship Program

Join us as we extend a hearty welcome to Stewart Donaldson and Katrina Bledsoe as new co-chairs of AEA's Graduate Education Diversity Internship Program (GEDI). Founded in 2005 by AEA's 2012 President Rodney Hopson, the program welcomes graduate students from underrepresented communities into the field of evaluation. Hopson - recognized for spearheading the program with AEA's 2010 Robert Ingle Service Award - served as GEDI chair for five years, then was succeeded by Rita O'Sullivan and Michelle Jay, who will complete their terms this summer and serve as advisors throughout the transition.

 

Mel Mark served as chair of the team that reviewed applications for the two-year term. "Despite there being many strong proposals, the Donaldson and Bledsoe team easily rose to the top. They bring together an impressive array of evaluation expertise, experience in training and mentoring, attention to cultural competence, institutional support, a history of commitment to AEA and its mission and values, and networking with potential trainee mentors and workshop facilitators. It is clear that the GEDI program, an extremely important initiative of AEA, continues to be in good hands." 

 

Donaldson is Professor and Chair of Psychology at Claremont Graduate University, where he serves as Director of the Institute of Organizational and Program Evaluation Research (IOPER) and Dean of the School of Behavioral and Organizational Sciences (SBOS). Donaldson assumed the SBOS deanship in 2001, and under his leadership the school has thrived academically and financially to become one of the most popular places in the U.S. to apply for graduate training in applied psychology and evaluation science.

 

Bledsoe is a research scientist and senior evaluation specialist at the Education Development Center, Inc., specializing in applied social psychology, program evaluation, and community-based health. Her expertise is in conducting evaluations of programs such as drug and violence prevention, school-based health education, and mental health services, and she currently works on domestic and international projects related to suicide and violence prevention and education.

The GEDI program, which will welcome its eighth cohort of interns this fall, brings together a cohort of 6-10 outstanding graduate students from around the country for a 10-month internship, workshops, training, and networking and mentoring opportunities.  

"Over the years, the GEDI program has produced talented and up-and-coming evaluators in the field, all conducting evaluations in a variety of global contexts," says Bledsoe. "I am passionately committed to working with (and learning from!) my future colleagues. I really want to continue to think innovatively and broadly about training, mentoring, and evaluation in the way that Rodney, Rita, and Michelle have. The GEDI program has grown exponentially over the years because of their leadership and I look forward to their continued involvement as mentors and collaborators in this endeavor."


Adds Donaldson: "I am so excited for this opportunity to work with these outstanding graduate students and to build on the amazing success of the program under Rodney Hopson, Michelle Jay, and Rita O'Sullivan."

 

In addition to Mark, members of the review group included Lisa Aponte-Soto (University of Illinois at Chicago), Leon Caldwell (Annie E. Casey Foundation), Michelle Jay, current GEDI Co-chair (University of South Carolina), Ricardo Millett (Millett & Associates), and Rita O'Sullivan, current GEDI Co-chair (University of North Carolina at Chapel Hill).

 

Thanks to all for their service and congrats to Katrina and Stewart! Applications for this year's interns are being accepted now through June 23.

 

Go to the GEDI homepage  


New Webinar Series Spotlights Graduate Programs in Evaluation

The Graduate Student and New Evaluator (GSNE) Topical Interest Group (TIG) will launch a Coffee Break Demonstration series featuring graduate programs in evaluation around the country. Each month, a different program will be highlighted by a faculty member and a student. All webinars in the series will adhere to a similar format for easy comparison between programs. The sessions will cover the strengths of each program's faculty, explain what the student experience is like, and share examples of the types of projects in which students can expect to take part.

 

"GSNE is excited to be coordinating this webinar series with AEA," says Nora Gannon, chair of the GSNE TIG. "We sought to highlight breadth and depth of evaluation training available at institutions of higher education in the US and believe the results will be beneficial for the membership at large."

   

Four webinars in the series are already scheduled, with more on the way. The series will kick off in June with Jennifer Greene and Ayesha Boyce of the University of Illinois at Urbana-Champaign. In July, Tina Christie and Lisa Dillman will talk about UCLA's program. In August, Stewart Donaldson and John LaVelle will describe the program at Claremont. And in September, Chris Coryn and Jason Burkhardt will discuss the interdisciplinary program at Western Michigan University.

 

GSNE TIG leaders conceived the idea of a webinar series after reading LaVelle and Donaldson's March 2010 article in the American Journal of Evaluation, University-Based Evaluation Training Programs in the United States. LaVelle and Donaldson found evidence of 48 programs around the country; from those, GSNE TIG leaders selected a handful of programs to compare and contrast.

 

The entire series will be recorded and archived in AEA's eLibrary. As usual, the Coffee Break Demonstrations are free of charge to AEA members. To sign up for the series, visit our Coffee Break webinar webpage. Please note that you will need to register for each webinar individually.

Social Psychology and Evaluation


AEA members Melvin Mark, Stewart Donaldson and Bernadette Campbell are editors of a new book published by Guilford Press. Social Psychology and Evaluation contains 14 chapters that spotlight the background and history of social psychology and evaluation as well as implications for the future.  

From the Publisher's Site:

"This compelling work brings together leading social psychologists and evaluators to explore the intersection of these two fields and how their theory, practices, and research findings can enhance each other. An ideal professional reference or student text, the book examines how social psychological knowledge can serve as the basis for theory-driven evaluation; facilitate more effective partnerships with stakeholders and policy makers; and help evaluators ask more effective questions about behavior. Also identified are ways in which real-world evaluation findings can identify gaps in social psychological theory and test and improve the validity of social psychological findings; for example, in the areas of cooperation, competition, and intergroup relations. The volume includes a useful glossary of both fields' terms and offers practical suggestions for fostering cross-fertilization in research, graduate training, and employment opportunities. Each tightly edited chapter features an introduction and concluding reflection/discussion questions from the editors."  

From the Editors:

"Each of the editors share a background in both social psychology and evaluation," explains Mel Mark. "We value the connections that have occurred between the two fields, including the infusion into evaluation of human capital and methodological skills following people like Don Campbell, Tom Cook, and Peter Rossi, and more recently the common reliance in theory-driven evaluation on behavior change theories originating from social psychology. At the same time, we see many opportunities for mutual benefit that could come from strengthening the social psychology-evaluation relationship."

 

"We're thrilled by the excellent cast of characters who've agreed to contribute chapters. These include major figures in the development and application of behavior change theories, such as Al Bandura and Icek Ajzen, as well as other outstanding social psychologists and evaluators who in one way or another bridge the two fields. Because many of our authors and readers have their feet more firmly in one area or the other, we have written an introduction to and discussion of each chapter. We hope these will make the chapters more accessible to all readers, as well as stimulate further thinking. Most of all, we are excited by the possibility that the book will contribute to a future with more mutual enrichment of evaluation and of social psychology."   

About the Editors:

The editors are all actively engaged members of AEA and longtime contributors. Mark served as 2006 President of AEA, is a former editor of the American Journal of Evaluation and is a Professor and Head of Psychology at The Pennsylvania State University. Donaldson, a current Board member and the 1996 recipient of AEA's Marcia Guttentag New Evaluator Award, is Professor and Chair of Psychology at Claremont Graduate University, where he also serves as Director of the Institute of Organizational and Program Evaluation Research and as Dean of the School of Behavioral and Organizational Sciences. Campbell, former Chair of AEA's Theories of Evaluation Topical Interest Group, is an Assistant Professor at Carleton University.

 

Go to the Publisher's Site
JMDE Republishes Groundbreaking Occasional Paper Series

Earlier this year, the Journal of MultiDisciplinary Evaluation (JMDE) republished many of The Evaluation Center's Occasional Paper Series (OPS) papers, originally penned in the 1970s by influential evaluation scholars and pioneers in the field, including Michael Scriven, Dan Stufflebeam, Jim Sanders, Blaine Worthen, Gene Glass, Donald Campbell, and others. The series was founded by Stufflebeam in 1974 and brought together prominent thought leaders whose interaction was then conducted in person, by phone, or via hard-copy mail; there was no World Wide Web and no instantaneous electronic retrieval or sharing.

 

"For those of us who can barely remember how we got information before the Internet, it's hard to imagine how this sort of "gray literature" would be advertised and disseminated without the aid of the myriad web-based communication tools that are now at our disposal," says Lori Wingate, Assistant Director of the Evaluation Center at Western Michigan University. "Now, more than a decade into the 21st century, in the age of blogs, Tweets, Facebook, listservs, wikis, in addition to standard-fare websites, the idea of such mail-order scholarship seems almost prehistoric. But in 1974, OPS was cutting edge. The Evaluation Network - the first professional evaluation organization - was a year away from being created."

 

"When I came to work at the Center in 1997," Wingate recalls, "there was still a special room here called the "OPS Room" where these documents were stored in neatly labeled cubbies, ready for retrieval and mailing. In 2000, the OPS papers were scanned and put into PDF form so we could make them freely available ... Last year, the Center's website underwent a major overhaul and we carefully considered each component of the old site and what should be done with its content. Publication of the Occasional Papers had become a little too "occasional" to befit the label. But these early writings by some of the evaluation field's most influential figures were too important not to be preserved. Moreover, after thirty to forty years of exile in the "gray literature," they deserved to be dusted off, polished, and showcased. Accordingly, I suggested to JMDE Managing Editor Chris Coryn that he publish them in this journal so as to ensure their availability to the worldwide evaluation community far into the future. To his credit, Dr. Coryn was able to track down nearly all of the authors and obtain their permissions to reprint. You'll see the papers have all gotten facelifts by way of reformatting, but the content is 100 percent original."

 

JMDE is a free, open-access journal with nearly 6,000 subscribers internationally and can be viewed at http://jmde.com.


Data Den - Member Ethnicity
From Susan Kistler, AEA Executive Director

Welcome to the Data Den. Sit back, relax, and enjoy as we deliver regular servings of data deliciousness. We're hoping to increase transparency, demonstrate ways of displaying data, encourage action based on the data set, and improve access to information for AEA's members and decision-makers.

This month we're looking at the membership's composition. As Cindy Crusto noted in the article above, one goal of AEA's Building Diversity Initiative (BDI) was to increase the number of racially and ethnically diverse evaluators. One step in that direction is increasing the racial and ethnic diversity represented within the AEA membership.

We have race/ethnicity data for over 90% of the AEA membership (thank you to all of those who are kind enough to share this information on your membership profile). The graph below looks at two things: (1) the composition of the membership, year by year, for the past nine years, and (2) the composition of the cohort of new members who joined in 2010. The nine left-most columns include all members (including new members) in a given year; the right-most column, headed "Joined '10," includes only the just over 1,300 new members who joined in 2010.

When looking at the full membership, the change in composition is visible, yet relatively slow. Each year approximately 80% of the membership is retained from previous years. When we look at the composition of just the 2010 cohort of new members, we can see more clearly the increased diversity of our newest colleagues and the likely direction of the association in terms of its racial and ethnic composition.
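For readers who want to experiment with this kind of display themselves, here is a minimal sketch of one way to build a stacked composition-by-year chart of the sort described above, using Python and matplotlib. It is purely illustrative: the group labels and percentages are placeholders, not AEA data.

    # Hypothetical sketch of a stacked composition-by-year chart.
    # Group labels and percentages are placeholders, not AEA data.
    import numpy as np
    import matplotlib.pyplot as plt

    years = ["2002", "2003", "2004", "2005", "2006", "2007",
             "2008", "2009", "2010", "Joined '10"]
    composition = {  # illustrative percentages; each column sums to 100
        "Group A": np.array([80, 79, 78, 77, 76, 75, 74, 73, 72, 65]),
        "Group B": np.array([12, 12, 13, 13, 14, 14, 15, 15, 16, 20]),
        "Group C": np.array([ 8,  9,  9, 10, 10, 11, 11, 12, 12, 15]),
    }

    bottom = np.zeros(len(years))
    for label, values in composition.items():
        plt.bar(years, values, bottom=bottom, label=label)  # stack each group
        bottom += values

    plt.ylabel("Percent of members")
    plt.title("Member composition by year (illustrative)")
    plt.legend()
    plt.xticks(rotation=45, ha="right")
    plt.tight_layout()
    plt.show()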

Thank you to everyone in AEA who has invested themselves in helping the association to make the legacy of the BDI come to life.

Ethnicity by Year  

 

New Jobs & RFPs from AEA's Career Center  
What's new this month in the AEA Online Career Center? The following positions and Requests for Proposals (RFPs) have been added recently: 
  • SLEC Consultant at FSG (Boston, MA; San Francisco, CA; Seattle, WA, USA) 
  • RWJF Evaluation Fellowship Program at Duquesne University & OMG Center for Collaborative Learning (Multiple Sites) 
  • Senior Research Associate at Casey Family Programs (Seattle, WA, USA)   
  • Research Associate: Postsecondary at Research for Action Inc. (Philadelphia, PA, USA)
  • Senior Researcher - Job 6545 at American Institutes for Research (Chicago, IL, USA; or Washington, DC, USA) 
  • Research Associate at Human Resources Research Organization (Alexandria, VA, USA) 
  • Analyst, Assessment and Research - Strategic Compensation at Jefferson County Public Schools (Golden, CO, USA)  
  • Scan of the Arts and Health Field and Arts in Healthcare Network Evaluation at Society for the Arts in Healthcare (Washington, DC, USA)
  • Monitoring and Evaluation Specialist at Thailand Burma Border Consortium (Bangkok, THAILAND) 
  • Applied Research Position in Health Policy at Office of Evaluation and Inspections, Office of Inspector General, Dept. of Health and Human Services (Chicago, IL, USA)

Descriptions for each of these positions, and many others, are available in AEA's Online Career Center. According to Google Analytics, the Career Center received approximately 6,300 unique page views in the past month. It is an outstanding resource for posting your resume or position, or for finding your next employer, contractor, or employee. Job hunting? You can also sign up to receive notifications of new position postings via email or RSS feed.

 

Get Involved
About Us
The American Evaluation Association is an international professional association of evaluators devoted to the application and exploration of evaluation in all its forms.
 
The American Evaluation Association's mission is to:
  • Improve evaluation practices and methods
  • Increase evaluation use
  • Promote evaluation as a profession and
  • Support the contribution of evaluation to the generation of theory and knowledge about effective human action.
phone: 1-508-748-3326 or 1-888-232-2275