AEA Newsletter August 2008
Dear AEA Colleagues,

One of the most important evaluation initiatives in the United States federal government these days is the Program Assessment Rating Tool (PART), used by the Office of Management and Budget (OMB) to assess virtually every federal government program. A PART review asks approximately 25 general questions about a program's performance and management, including several questions explicitly about evaluation. The answers determine a program's overall rating, which is then published on OMB's website, http://www.expectmore.gov. The sometimes controversial PART system was the focus of the first AEA Public Issues Forum (see http://www.eval.org/part.asp), and the newly established AEA Evaluation Policy Task Force (EPTF) has identified PART as a priority area.
                Earlier this year, the EPTF contacted Robert Shea, the 
Associate Director of OMB for Administration and 
Government Performance, and a major architect of the 
PART system. I went with the EPTF's consultant, George Grob, to meet with Shea, with the goals of introducing the American Evaluation Association, emphasizing the important role professional evaluators can play in the systematic assessment of federal programs, and engaging him in a discussion
of the PART's evaluation approach.
                 
Shea described OMB's new initiative to review and improve the PART program and requested that we provide him with detailed comments on a key document cited in the OMB PART Guidance, entitled "What Constitutes Strong Evidence of a Program's Effectiveness?" (http://www.whitehouse.gov/omb/part/2004_program_eval.pdf). This document has been especially controversial because of the case it makes regarding the use of randomized controlled trials (RCTs). A formal request to review the document and to provide a thoughtful and balanced critique of it and its policy implications is exactly what the EPTF was hoping to encourage.
 
We worked hard in less than a week to produce a balanced critique, and I am delighted to share with you today our cover letter and the comments that we provided (download at http://www.eval.org/aea08.omb.guidance.responseF.pdf). We recommended that OMB develop new guidance for the evaluation components of PART that integrates evaluation more closely with information from other questions about program planning and management. This guidance should describe the variety of methods for assessing program effectiveness that are appropriate to the needs and development level of a program. We argued for a more balanced presentation of the role of RCTs, and suggested that there are important alternatives to RCTs for assessing effectiveness and that RCTs could be enhanced significantly when combined with additional methods that enable
identification of why and how observed effects occur. 
Finally, we called upon OMB to draw on broader 
expertise in the evaluation community to develop 
future guidance on evaluation for the PART program.
                 
We were delighted with the reception our comments received and with the subsequent invitation to present at the first meeting of the newly established Evaluation Workgroup of the cross-agency Performance Improvement Council. We continue to work with OMB staff and other federal administrators on efforts to address the major evaluation concerns in PART.
                 
I particularly want to thank all the members of the EPTF (Eleanor Chelimsky, Leslie Cooksy, Katherine Dawes, Patrick Grasso, Susan Kistler, Mel Mark, and Stephanie Shipman) and our consultant, George Grob, for their highly professional and energetic collaboration in preparing this document in such a short period of time.
                 
In the next newsletter we will share an interview we subsequently conducted with Robert Shea, in which he describes the challenges facing the PART system, addresses the role of RCTs in evaluating program effectiveness, and discusses how professional evaluators and AEA can help improve OMB's PART in the future.
                 
                Sincerely,
                 
                Bill Trochim, 2008 AEA President
 
PEPFAR Update
Evaluating the President's Emergency Plan for AIDS Relief: What a difference a word makes

The President's Emergency Plan for AIDS Relief (PEPFAR) program seeks to help prevent 7 million HIV infections, treat 2 million people with HIV/AIDS with antiretroviral therapy, and care for 10 million people with HIV/AIDS. Its legislative authority expires this year and AEA is working to ensure that the PEPFAR reauthorization documents incorporate thorough and thoughtful program evaluation.

In March, Victor Dukay of the Lundy Foundation
contacted AEA member Jody Fitzpatrick requesting 
help in convincing Congress to include evaluation 
funding in the reauthorization of PEPFAR. Jody relayed 
the request to the Evaluation Policy Task Force (EPTF) 
Chair, and the EPTF went to work through its policy 
consultant, George Grob. While working on the 
evaluation funding issue, we also discovered 
significant problems involving evaluation 
nomenclature.

Lawmakers had previously specified that
PEPFAR's funds ($6 billion this year) be used for 
services and prevention activities, but not specifically 
for evaluation. However, they did require the Institute of 
Medicine (IOM) to conduct an evaluation of PEPFAR's 
early implementation. IOM's report, PEPFAR Implementation: 
Progress and Promise, makes a compelling 
case 
for ongoing evaluation of the program.

Perhaps because of the IOM report, both the House
and Senate reauthorization bills authorize "program 
monitoring, impact evaluation research, and 
operations research" for PEPFAR. This is good news. 
However, while these terms are defined in the 
proposed legislation, they are in themselves 
confusing and possible impediments to program 
evaluation. For example, while the definition of 
operations research may suggest evaluation, there is 
a good chance that program implementers would look 
for operations research analysts to do this work rather 
than evaluators. And, it is possible that the original 
drafters of this language intended not the traditional 
field of operations research but the more relevant idea 
of research on operations, an interpretation much 
more consonant with evaluation. The current 
language 
could change the focus of the studies and diminish 
opportunities for evaluators to contribute to the 
improvement of PEPFAR. Similarly, "impact
evaluation research" sounds a lot more like research 
than impact evaluation, a problem that is not resolved 
by its definition.

These are just a few examples of how nuances in
legislative phrasing can have significant ramifications. 
Other language in these bills also affects the budget 
issues raised by the Lundy Foundation. Furthermore, 
this legislation may reach well beyond the PEPFAR 
program. It could, for example, be used as a 
precedent for incorporating evaluation funding 
requirements into other authorization bills, especially 
for international development programs.

Currently, AEA is working in concert with the Lundy
team to clarify and improve the language in the budget 
implementation reports that accompany this 
legislation. Our experience with the PEPFAR 
reauthorization is laying a foundation for future work in 
the policy arena.

This article was written by George Grob, consultant to AEA's Evaluation Policy Task Force.

Cultural Context
Nine interns complete nine-month graduate program

There were no caps or gowns, but there was pride
and celebration in abundance at recent 
commencement exercises for the latest participants in 
the AEA/Duquesne University Graduate Education 
Diversity Internship Program (GEDIP). The 
commencement took place at a June 25 luncheon 
held in Atlanta during the annual Summer Institute that 
is jointly hosted by the AEA and the Centers for 
Disease Control and Prevention. Nine graduate 
students from fields as diverse as applied 
anthropology, education, law, public health, and social 
work were applauded for completing the nine-month 
program. The GEDIP program provides graduate 
students of color and other underrepresented groups 
an opportunity to extend their research, theory, and 
practice capacities to evaluation.

"Without cultural context and cultural competency in
evaluation, there can be no evaluation," said Stafford 
Hood, Arizona State University, the commencement's 
featured speaker and a member of AEA's 
Nominations & Elections Committee. "You are now 
part of an extended family, and if you listen closely, you 
will hear the footsteps of those who follow after you 
and whom you will help train."

In addition to attending workshops and conferences,
making site visits to evaluation agencies, and 
participating in group telephone calls about 
evaluation, students were assigned to real-world 
evaluation projects. Half conducted traditional 
evaluations at sites within their geographical area, 
while the other half studied logic model use with a 
National Science Foundation Science, Technology, Engineering, and Mathematics (STEM) program.

Lisa Dirks, an Alaska resident who is pursuing a
master's degree in administration, worked on an 
evaluation project related to homelessness and alcoholism
reduction programs in the Anchorage area and said 
her work "helped me learn how to become a culturally 
responsive evaluator." Derrick Gervin, who is pursuing 
a Ph.D. in social work at Clark Atlanta University, said 
his work with the program gave him "the opportunity to 
watch and apply logic models and see how they
operate in practice."

Rodney Hopson, director of the program, noted that 25
students have come through the GEDIP internship 
program. "It seems like yesterday when the inaugural 
cohort came through this experience, and now they 
are doing post-docs, working as directors of public 
health agencies, doing HIV/AIDS work in Africa, 
entering PhD programs in Public Policy, Public Health, 
and other fields, and finishing their respective 
programs," Hopson said. "This group of cohort 
members named itself 'All Four Directions,' and 
follows the 'Power Ladies' and the 'Supersonics.' 
Each group not only has its own identity, they go on to 
contribute to the lives of communities, institutions, and 
individuals while developing incredible skills and 
learning from experts in the field. What an opportunity 
this has been!"

Members of the fifth cohort will be announced this
fall, following selection in late August/early 
September.

Scan Findings
Study shows evaluators wear many hats!

We all know that working as an evaluator means
having to be flexible, ready for change, and able to 
leap buildings in a single bound. One minute we're 
methodologists, in another we're content experts, and 
at other times we serve as mediator/negotiators. 
Depending on our projects, we even become experts 
at convening meetings, event planning and logistics, 
and troubleshooting electronic gadgets.

The Internal Scan of the membership of AEA has
turned up some interesting findings with regard to 
the "many hats" we wear in our evaluation work. For 
example: While almost all members are involved in conducting evaluations (91%), only 8% focus exclusively on this type of work. Other evaluation-related work includes, in descending order:
technical assistance, evaluation capacity building, 
training others in evaluation, writing about evaluation, 
planning/contracting for evaluations that others 
conduct, and teaching evaluation. In fact, as one 
respondent mentioned in an open-ended 
question, "Describing myself solely as an evaluator 
can be limiting in the work I do."

Members of AEA also conduct their evaluation work
across multiple content areas. The most common 
content area for AEA members is education 
(combining all categories of education) at 62% of 
members. Second is health/public health, at 41%. Overall, 81% of members work in one or both of these areas, and 22% work in both.

As another example of the many hats we wear, for
those who do evaluation work in health/public health, 
43% do work related to nonprofits, 37% work in the 
area of government, 34% do work in human services, 
34% do work in youth development, 30% work in K-12 
education, 30% do work related to evaluation 
methods, 30% do public policy/public administration 
work, 28% work with special needs populations, and 
27% work in child care/early childhood education.

But wait, there's more! Find out more about your
colleagues and friends who took part in the Internal 
Scan by checking out the report and loads of data 
available online.

This article was written by Leslie Goodyear, Chair of AEA's Internal Scan Task Force.

Go to the AEA Internal Scan webpage

Examining TIGs
Conference session explores TIG structure & TIG effectiveness

AEA's Membership Committee will be offering a
special session, "Examining AEA's Topical Interest Group (TIG) Structure: What Works, What Changes Are Needed," at the 2008 Evaluation Conference.
The session follows a survey of TIG leadership 
conducted earlier this year and will share findings relevant to all AEA members.

In 2007-08, AEA's Membership Committee undertook
an examination of its current TIG structure, including leadership, governance, activities, and benefits to TIG members. As part of this effort, the committee
conducted a survey of TIG leadership and examined 
similar structures in similar organizations. This Think 
Tank session will introduce participants to some of the key findings and begin a dialogue to identify common
elements that might enhance the existing TIG 
structure within AEA. This session will encourage 
participation from all AEA members and especially 
invite the current TIG leadership to join the discussion.

TIGs serve a critical function in the professional development of AEA members and in planning each annual meeting of the association, and they provide a forum for engagement among AEA members with similar interests, professional expertise, or needs.
Consequently, the TIGs are instrumental in furthering 
evaluation practice and literature by providing a 
professional 'home' within AEA of common thoughts 
and interests. However, anecdotal evidence suggests 
wide variability in the exact nature of each TIG, in its level of activity, and in the tangible services it provides to members.
This variability in itself may be beneficial to a degree, 
but given the rapid growth in AEA membership, there 
was a desire to explore whether more standardization 
and alternative modalities may be warranted at this 
time.

The Membership Committee's Think Tank will provide a timely forum for this effort, one in which communication between the membership and the leadership can be fostered. While several TIG-related think tanks are
being proposed, they each serve different purposes 
and will allow the TIGs and the Membership 
Committee to triangulate the knowledge gleaned from 
these sessions.

Go to Session Summary

Eval in Action
Lessons learned from expert evaluators

AEA members Jody Fitzpatrick, Christina Christie, and
Melvin Mark are editors of a new 472-page book 
published by SAGE that showcases the decisions 
made and the lessons learned through real-life 
evaluations by real-life evaluators. Evaluation in 
Action: Interviews with Expert Evaluators is 
intended for students, faculty, and professionals 
working in program evaluation.

From the publisher's website:

Evaluation in Action takes readers behind the
scenes of real evaluations and introduces them to the 
issues faced and decisions made by notable 
evaluators in the field. The book builds on "Exemplars,"
a popular section in the American Journal of 
Evaluation (AJE), in which a well-known 
evaluator is interviewed about an evaluation he or she 
has conducted.  Through a dialogue between the 
evaluator and the interviewer, the reader learns about 
the problems the evaluator faced in conducting the 
evaluation and the choices and compromises he or 
she chose to make.  The book includes twelve 
interviews illustrating a variety of evaluation practices 
in different settings, along with commentary and 
analysis concerning what the interviews teach us 
about evaluation practice and ways to inform our own 
practice.
The book features:

- Extended examples of how evaluation is actually practiced, the real pressures and choices evaluators face, the decisions they have to make, and a sense of how they make these decisions in the context of real-life evaluations.
- A guiding matrix and discussion of the different ways in which the interviews may be grouped and read, which will help students and practitioners looking for more information and insight on particular issues.
- Twelve interviews and cases chosen to represent (a) different settings (e.g., welfare reform, higher education, mental health, K-12 education, public health); (b) different types of evaluations (e.g., formative, summative, needs assessment, process, outcome); (c) different approaches (e.g., participatory, theory-based, research-oriented, decision-oriented); (d) different arenas (e.g., federal, regional, state, local); and (e) different levels of resources (large and small studies).
- Commentaries and analyses concerning what the interviews teach us about evaluation practice and ways to inform one's own practice, as well as discussion questions that provoke the reader to consider the key issues of the interview and how one interview and experience may contrast with another.
- Introductory and Summary chapters that cover the major types of evaluations and the lessons that emerge from the interviewees' experiences, all of which helps to firmly ground the information and issues presented in each interview.

Jody L. Fitzpatrick is Director of the Master's in Public
Administration Program and an Associate Professor 
with the School of Public Affairs at the University of 
Colorado Denver. Christina A. Christie is an Associate 
Professor and Associate Director of the Institute of 
Organizational and Program Evaluation Research in 
the School of Behavioral and Organizational Sciences 
at Claremont Graduate University. Melvin M. Mark is 
Professor and Head of Psychology at Penn State 
University.  A past president of the American 
Evaluation Association, he has also served as Editor 
of the American Journal of Evaluation, where he is now Editor Emeritus.

AEA members receive a 20 percent discount on books from SAGE when ordered directly from the publisher. The discount code for AEA members is SO5CAES, or members can call the Customer Care department at 1-800-818-7243.

Go to the Publisher's Website

Evaluator Competencies
Book spotlights challenges within changing organizations

We are reprinting an updated version of this
article.  
Our last issue did not acknowledge Marguerite 
Foxon's contribution to the book. Marguerite is a 
member of AEA and a co-author of Evaluator 
Competencies: Standards for the Practice of 
Evaluation in Organizations.

AEA members Marcie J. Bober, Marguerite Foxon, and
Darlene F. Russ-Eft are among five co-authors of 
Evaluator Competencies: Standards for the 
Practice of Evaluation in Organizations. Published 
by Jossey-Bass Publishing, the book focuses on the 
challenges and obstacles of conducting evaluations 
within dynamic, changing organizations, and provides 
methods and strategies for putting these 
competencies to use.

From the publisher's website:

The book is based on research conducted by the
International Board of Standards for Training, 
Performance, and Instruction and identifies the 
competencies needed by those undertaking 
evaluation efforts in organizational settings.
 "This book will be welcomed by Training, Learning, 
and HR professionals who have struggled with 
evaluation; it has been written with their specific needs in mind," says Foxon.

Bober adds that the research involved evaluators on
all continents except for Antarctica. "Thus, the 
identification and subsequent validation of the 
competencies attempts to suggest what is common 
throughout the world."

"The most rewarding aspect of the project involved the
opportunity to work with colleagues from several 
different countries and cultures," says Russ-Eft. "The 
diverse experiences and engaging ideas helped me 
(and others on the team and the ibstpi board) 
appreciate the complexity of the work of an evaluator."

Marcie J. Bober, Ph.D., is professor in and chair of the
Department of Educational Technology at San Diego 
State University. Marguerite Foxon, Ph.D., is a highly 
respected evaluation and performance improvement 
specialist who brings 25 years of experience in 
managing large-scale evaluation and global 
leadership development programs in Australia and 
the United States. Darlene F. Russ-Eft, Ph.D., is a 
professor in and chair of the Department of Adult 
Education and Higher Education Leadership within 
the College of Education at Oregon State University.

Jossey-Bass Publishing offers AEA members
special savings on its publications when ordered 
directly from the publisher.  To receive your 20% 
discount, please use the promotional code "AEAF8" 
online or by phone (1-800-225-5945).

Go to the Publisher's Website

Hawaii Conference
Hawaii-Pacific Evaluation Association hosts third annual conference

The Hawaii-Pacific Evaluation Association (H-PEA) will
be hosting its Third Annual Conference and Pre-
conference Workshops on September 4-5 at the Hilton 
Waikiki Prince Kuhio Hotel. Three half-day pre-
conference workshops will be held on Thursday, 
September 4, followed by an all-day conference on 
Friday, September 5. This year's conference 
theme, "Building An Evaluation 'Ohana' (Family)," 
focuses on evaluation capacity-building. In response 
to requests from H-PEA members, paper 
presentations and a poster session are being 
planned. Workshop presenters and conference 
keynote speakers include Hallie Preskill, Professor at 
Claremont Graduate University and 2007 President of 
the American Evaluation Association, and Tom Kelly, 
Evaluation Manager at the Annie E. Casey Foundation 
in Maryland. H-PEA, a local affiliate of AEA, was 
founded in 2005.

Go to the Hawaii-Pacific Evaluation Association Website

Australasian Conference
Meeting in Perth explores the value of evaluation

The Australasian Evaluation Society will hold its 2008
International Conference on September 8-12 in Perth, 
Western Australia. The theme of the Conference is 
Evaluation: Adding Value. Three sub-themes 
are designed to sharpen the focus of the Conference: 
- Value for Whom? provides a reminder that effective evaluation is 'audience driven' and invites us to consider whose interests an evaluation might serve, will serve, and should serve;
- Whose Values? questions the value basis upon which recommendations and program decisions will be made, and indicates that this should be a carefully considered decision in evaluation; and
- Optimising Value emphasises that evaluation inevitably involves 'trade-offs', in both the conduct of an evaluation and the utilization of the information obtained, and invites consideration of how the needs of the various program stakeholders might best be served.

Keynote Speakers will address various aspects of the
Conference theme and discuss specific evaluation 
issues related to the Conference sub-themes, as well 
as provide an international perspective. A special 
feature of the Conference will be an Industry Focus 
each day. Issues of evaluation in health, education, 
Indigenous affairs, performance monitoring, 
community services and environmental and natural 
resource management will all be highlighted. A 
number of specialized workshops, specifically 
designed to develop participants' knowledge and 
competencies relevant to evaluation and its practice, 
will also be available.

From the AES website:

The changing landscape of evaluation in Australasia,
and in the world more generally, requires evaluators 
to 'add value' to decision making about programs, 
policies and services through developing and 
assessing new and alternative procedures for 
evaluation. There are also particular evaluation 
knowledge and competencies which enhance the 
understanding and effectiveness of those involved 
with evaluation. It is this orientation to innovation and 
training which underpins the AES 2008 Perth 
Conference.
Go to the Australasian Evaluation Society's Website

Volunteer Bookgroup Leader Training
Task Force seeking members

Do you have a background in distance learning or
online community building? Then we would like to 
hear from you for possible participation in a task force 
of the Professional Development Committee.

We will be offering orientation for leaders for AEA's
online bookgroups this fall at the annual conference. 
Surveys from recent online bookgroups suggest 
that a key to improving the bookgroup program is
improving both the quality of the online dialogue and 
the opportunity for making peer-to-peer connections. 
We are bringing together a small team to guide the 
development of an agenda and materials for use at 
the fall training.

We aren't asking volunteers to write the materials (although you
are welcome to do so), but rather to contribute your 
knowledge and expertise around distance education 
and/or online dialogue and community building. We 
anticipate meeting for up to three one-hour conference 
calls during September and October and exchanging 
emails over the same period to guide the staff's 
development of the agenda and materials.

If you would like to be considered for participation,
please send an email to Susan Kistler, AEA's 
Executive Director, at susan@eval.org, 
indicating your interest as well as your background in 
distance education or online community building. 
Please note that we are not seeking guidance around 
the technology (a new technology platform is coming 
online for AEA in late fall), but rather around facilitating 
meaningful online dialogue among people previously 
unknown to one another.

Administrivia
Evaluation 08 registration rates healthy

Conference registration opens each year the first
week in July. In 2002, 112 people registered for the conference during the entire first month. In 2008, 161 people registered in the first week alone, and we cleared 500 in July, on the way to 2,500 or more registrants for the event. Register early to
ensure your first choice of 
workshops and lower registration rates!

Go to the Conference Website

Get Involved
Get the most out of your membership

As fall approaches, we draw nearer to AEA's annual
Evaluation conference and the fall academic year. As 
always, there are many ways right now to participate in 
the life of the association. Please click 
through to the appropriate item below to find out more.