Newsletter: September 2008 Vol 8, Issue 9


I'll grant you that the conference theme of "Evaluation Policy and Evaluation Practice" might at first glance strike many people as fairly staid. So, why do I find it so exciting? Most evaluators have no problem with the "practice" side of the theme. After all, that's what we do; it's who we are. There are lots of issues there to get jazzed about. But what's the deal with this "evaluation policy" thing?
My short answer is: evaluation policy affects everything we do in evaluation. When an organization or agency determines how often evaluations of programs must be done, it affects all of us practicing evaluators working in that area. When they stipulate the evaluation approaches or methods that must be used, they shape how we do evaluation. When they state the purpose of an evaluation or the stakeholders who should be involved, they affect the transparency and ethics of what can be done. When they specify who is qualified to do evaluation, they potentially limit our accessibility to evaluation work. When they set evaluation budgeting policies, they influence whether we'll have the resources to do our job well. So, evaluation policies drive everyday evaluation practice, and that's why they are critically important.
Many of the biggest controversies in the field of evaluation are about what the appropriate evaluation policy in a specific context should be. In many organizations evaluation policies aren't written or, when they are, they are not well-informed or well-crafted. Often they are essentially unwritten rules or norms that evolve over time. Sometimes you won't even know there is a policy until you propose something that gets rejected (sorry, that budget is higher than we usually allow). Sometimes even when written policies exist they are not followed or are poorly implemented.
If evaluation policies are so important for practice, what might we as evaluators do about them? There are lots of options for us to consider. We need to be developing a clearer sense of what evaluation policy is and might be, taxonomies and examples of different types of evaluation policies, and workable processes for helping organizations develop, revise and maintain them. Evaluators will need to play a major role in helping organizations and their many stakeholders and constituencies think these issues through. We should be advocates for transparency in evaluation policies and for helping assure that evaluation policies are both appropriate and useful in their application contexts. We can encourage organizations to develop written evaluation policies that are informed by the best evaluation thinking and are shaped by data about what works in evaluation practice.
As the leading professional and research association in evaluation, AEA needs to play a key role in encouraging the development and use of good evaluation policies through initiatives like our current Evaluation Policy Task Force. As we prepare for the conference this November, I hope you can find a few moments to think about the role of evaluation policies for practice and the ways evaluation practice can inform those policies. And, I'm hoping that the conference functions like an association-wide think-tank on the practical ways we can enhance the role of evaluation policy and its integral relationship to practice.  I hope to see you all in Denver.  
William Trochim
2008 President, American Evaluation Association
In This Issue
09 Election Results
08 Plenary Sessions
New Public Issues Forum Scheduled
International Travel Awards Announced
Interview with Ba Tall, IOCE President
AEA's Guttentag Award
Evaluating Scientific Research
Get Involved
Quick Links
09 Election Results

The results are in! Join us in welcoming AEA's new board members for 2009-2011.
Elected 2010 President:
Leslie Cooksy, University of Delaware
Elected to AEA's Board of Directors:
Katrina Bledsoe, The College of New Jersey
Beverly Parsons, InSites
Veronica Thomas, Howard University
Congratulations to all!

Presidential Strand Highlights Interplay of Evaluation Policy & Practice
At each year's Evaluation conference, the theme - selected by that year's AEA President - is woven into multiple sessions among the more than 500 available to 2,500 attendees from around the world. This year's theme is Evaluation Policy & Evaluation Practice. An evaluation policy is any rule or principle that a group or organization uses to guide its decisions and actions when doing evaluation. Evaluation policies can be explicit and written, but often are implicit and sometimes even improvised on the spot.
The Presidential Strand is a series of plenary addresses, expert lectures, panels, and other sessions related to the theme that run throughout the conference. The plenary sessions feature Lois-ellin Datta, Elliot Stern, and AEA President Bill Trochim, and offer distinct but complementary perspectives on the conference theme of evaluation policy. In the opening plenary, scheduled for 3:15 p.m. on Wednesday, November 5, Datta will provide an historical and meta-evaluative framework for the conference's policy discussions that draws on both an analysis of current policy issues and her own considerable experience in the evaluation policy world. Datta is a former president of the Evaluation Research Society and the 1981 recipient of AEA's Myrdal Practice Award. Stern, a past president of the European Evaluation Society and founding president of the UK Evaluation Society, will explore some of the diverse ways in which evaluation policy and practice are developing in different European arenas, offering a valuable international and comparative perspective. In his Presidential Address, Trochim will give an overall roadmap to the topic of evaluation policy, particularly highlighting the interplay of evaluation policy and evaluation practice. His presentation will offer a practical model for the development and revision of evaluation policies and will present a general taxonomy of different types of evaluation policies. In addition, he will summarize recent efforts to influence evaluation policy formation and consider how evaluation practice might best drive evaluation policy development. Trochim's plenary will be held Thursday and Stern's on Friday, both at 8 a.m.
The Presidential Strand also features expert lectures by Eleanor Chelimsky, Will Shadish, and Frans Leeuw, as well as several paper and panel sessions. The expert lecturers address issues of evaluation policy as manifested in organizational structure, evaluation policy focusing on method, and the development of a national evaluation policy. The Presidential Strand paper and panel sessions draw from the diversity of AEA to examine evaluation policy from a variety of perspectives, disciplines, and program areas. The Strand also includes a reception for the winners of the Student Travel Awards. All the Presidential Strand sessions are based on the idea that it's valuable for evaluation policy to be explicit and transparent. Evaluation policies profoundly affect the day-to-day work of all evaluators - in terms of the questions that are asked, the methods that are used, and the perspectives that are included. Through the conference's focus on the ways in which evaluation policy affects our practice, and how we can influence evaluation policy, attendees will develop a rich understanding of the nature, state, importance, and potential of evaluation policy and its integral relationship to evaluation practice, and will be able to explore their own opportunities for developing and influencing both.
For more information on this year's conference, visit
For a complete listing of Presidential Strand sessions, including the ones described above, go to the online Conference Program at and select "Presidential Strand" in the Sponsoring Group or TIG drop-down list.
This article was written by Leslie Cooksy and Mel Mark, 2008 co-chairs of the Presidential Strand.
New Public Issues Forum Focuses on the Politics of Evaluation
Just two days after the November election, AEA's 2008 Public Issues Forum will bring together a diverse panel of experts to discuss "the politics of evaluation" as seen through the eyes of researchers, practitioners, the media, and people from other countries. The forum, entitled Multiple Perspectives on the Politics of Evaluation, will take place from 1:40 p.m. to 3:10 p.m. on Thursday, November 6, in Capitol Ballroom-Section 5 of the Hyatt Regency Hotel. The forum is part of the Evaluation 2008 conference being held in Denver from November 5-8.
"Just like any other kind of policy, evaluation policies are negotiated in a political arena," said Leslie J. Cooksy, chair of AEA's Public Affairs Committee, which organizes the annual forum. "Politics often drives the kinds of programs that are evaluated and the kinds of questions that are asked about them. By looking at how politics plays out for evaluators in different contexts, the forum will provide insights into the political context of evaluation policy."
Katherine McMillan-Culp, a senior research scientist at Education Development Center (EDC), will start the panel discussion with a presentation on how evaluation politics are experienced at the local level, where evaluation practice may be driven by directives from state or federal funders. Leonard Bickman, who directs the Center for Evaluation and Program Improvement at Vanderbilt Peabody College, will discuss the interplay of politics and practice in evaluations of federal demonstrations. Lesley Dahlkemper, a national-award-winning education reporter for public radio who now heads a Denver communications firm that specializes in education issues, will discuss how politically sensitive evaluation findings are communicated to the public. Peter Dahler-Larsen, a professor in the Department of Political Science and Public Management at the University of Southern Denmark, will provide an international perspective, discussing cross-cultural similarities and differences in the ways that politics affects evaluation. AEA President William Trochim will serve as discussant, connecting the issues identified by each speaker to the theme of the conference.
The AEA launched the Public Issues Forum in the fall of 2006 to help spur interest in public policies that affect evaluators and to encourage outreach to policy-makers outside the evaluation community. Transcripts and audio archives of the 2006 and 2007 forums can be found online on the Public Issues Forum Website.
International Travel Award Winners Announced
While Evaluation 2008 is hosted by the American Evaluation Association, it's also an international event that attracts attendees from around the world. To build our knowledge of evaluation in international contexts as well as support attendees from developing countries and countries in transition, AEA annually awards travel stipends to a small number of competitively selected international evaluators.
The funds are raised through a silent auction held at each conference and this year's auction will be from 6:30 to 8:00 p.m. on Friday, November 7, in Centennial Ballroom D at Denver's Hyatt Regency Hotel. In the past, items sold at the auction have ranged from Indian jewelry and African artwork to bottles of wine and home-made preserves.
"I am certain that this conference will provide an excellent opportunity for sharing ideas and practices, and deepening mutual understanding of evaluation in different regions," says Liezl Coetzee, one of this year's travel awards winners.
A social and economic development consultant from Cape Town, South Africa, Coetzee will be a presenter at two sessions at Evaluation 2008 - International Evaluation Standards and Policies (Session 670), to be held from 3:25 p.m. to 4:10 p.m. on Friday, and Measurement and Data Quality in South Africa (Session 844), which will take place from 9:50 a.m. to 10:35 a.m. on Saturday.
Each award recipient is asked to share his or her knowledge and expertise via one or more conference presentations. The other three travel award winners are:
  • Debazou Yantio Yantio of Cameroon - Yantio heads the Unit of Programs for Sustainable Management of Natural Resources in the Cameroon Ministry of Agriculture and Rural Development. He will be a presenter at Methods and Data Integrity (Session 936) to be held from 4 p.m. to 5:30 p.m. on Saturday.
  • Kristine Grigoryan of Armenia - As a consultant with Armenia's Social Protection System Strengthening Project (SPSS), Grigoryan is involved in building evaluation capacity in the Ministry of Labor and in local Non-Government Organizations (NGOs). Her poster on Building Monitoring and Evaluation Capacity in Developing Countries will be one of over 100 on display from 6:30 p.m. to 8:00 p.m. on Wednesday, November 5 as part of the evening's reception and poster exhibition.
  • Jamila Assanova from Kazakhstan - Assanova is the executive director of Kazakhstan's Civil Society Development Association (ARGO) and will become Chair of the International Program Evaluation Network (IPEN) in September of 2008. Assanova's poster on Government Program Evaluation in Central Asian Countries will also be part of Wednesday's poster exhibition.

Be sure to meet our award winners when you are in Denver for the conference. The success of the silent auction, and of the international travel awards program, depends on the generosity of our many conference attendees. Please bring something from your home region to the conference to donate to the auction and help ensure the future of this program. Donations may be dropped off at the registration desk when you check in.

Interview with Ba Tall, IOCE President
Oumoul Khayri Ba Tall, a self-described "evaluation activist," is president of the International Organization for Cooperation in Evaluation (IOCE) and a board member of the African Evaluation Association (AfrEA). A citizen and resident of Mauritania, she started an accounting and business consultancy firm there in 1997 and entered the evaluation arena seven years ago. Her interests rapidly moved into the area of strengthening evaluation capacity-building efforts, and led her to help establish a national association in her country - Association Mauritanienne de Suivi-Evaluation (AMSE). She is the subject of the latest installment in AEA's series of interviews with notable people in the international evaluation community. Below are some excerpts:
"I claim that evaluation capacity building is the most important public investment that is worth making in this age of knowledge - yet we still need to agree on what makes evaluation so pertinent and powerful as a development tool. A close look at the different functions it can play will tell that evaluation as a means towards greater democracy will provide development stakeholders with the right information to engage in policy debate."
"Evaluation networks are developing rapidly in almost all the regions of the world... I am not assuming that effective evaluation networks will lead automatically to good evaluation standing in a given country. But in general, it is observed that evaluation networks because of their work on the ground, tend to be an effective way to build capacity in a given country, and even beyond the country."
"...It is true that there is a growing consensus that methods are driven by the questions and the context, and not the other way round. But in practice, there are too often these 'ready-to-use' toolkits and forms of all nature that evaluation commissioners or headquarters will still prescribe you to make use of, and hard core tenets of 'causal relationships' are still very much around."
"...If evaluation is to be respected and trusted, it has to be professionalized in some way. The Canadian Evaluation Society is working towards the development of credentials - a very ambitious project still under debate - but the process is launched, and we are all observing what will come next."
"My last word to AEA is an appeal for greater engagement with sister associations to systematize the knowledge sharing. I am amazed by how generously you folks share your knowledge and information, how much time you spend on the listservs responding to all questions - I joined EvalTalk when I was working on the Impact Evaluation - so this is a good school..."
Thanks to AEA's International Committee for its help in facilitating this series of interviews as part of our commitment to promote a broader understanding of global evaluation perspectives. To learn more about Ba Tall and her perspectives on international evaluation, you can read the complete interview online.
Go to the Full Online Interview
A History of AEA's Guttentag Award
The Marcia Guttentag Award is presented to a promising new evaluator who is within five years of completing his or her master's or doctoral degree and whose work is consistent with the AEA Guiding Principles for Evaluators.
The Marcia Guttentag New Evaluator Award is named in honor of a Harvard professor and social scientist whose life was tragically cut short at the age of 45. A scholar and activist, she already had authored or edited 13 books, 20 chapters, 15 monographs, and more than 50 papers when she died in 1977. She also was the founding president of the Evaluation Research Society a year earlier.

"She was a woman of unusual energy and vitality and was at the height of her powers when she died," states an article in the November 2000 edition of Canadian Psychology. "The vigor and importance of Marcia's research and writing are testified to by the fact that both are still actively influencing the field."

One of Guttentag's most noted works is Too Many Women? The Sex Ratio Question, which was published in 1983 by SAGE Publications (with Paul Secord as co-author) and earned a Distinguished Contribution to Scholarship Award from the American Sociological Association the following year.

Guttentag was also well known for her work in the arena of gender equity and women's issues. In 1976, she was recognized by TIME Magazine for reviewing the script of a Maude television episode that spotlighted manic depression. Guttentag took a proactive stance and alerted more than 200 CBS affiliates to be ready for a post-show blitz of phone calls. Thirty-five affiliates agreed to run public service spots telling viewers where they could get help and most stations were poised to refer callers to state or local mental health centers. 
That same year, Guttentag also served as president of the Society for Personality and Social Psychology. The Evaluation Research Society she founded merged with the Evaluation Network in 1986 to become the American Evaluation Association.
Guttentag had great influence in the field of evaluation. In 1975, she and Elmer Struening were editors of the influential Handbook of Evaluation Research, Vol. 2. In 1977, she was editor of the Evaluation Studies Review Annual, a series once on almost every evaluator's shelf. Her work on applications of Bayesian probability in evaluation was generative.

Established in 1981, the Guttentag Award seeks to recognize young professionals who exhibit similar promise and potential early in their careers and to support and encourage them in their endeavors. Previous recipients include Jana Kay Smith, Leslie Riggins Cooksy, Stewart Donaldson, Robin Miller, Rodney Hopson, Christina Christie, and E. Jane Davidson.

Eleanor Chelimsky, former president of both ERS and AEA, recalls a visionary who had rallied her peers to consider both the need for an organization representing evaluation and the readiness of the field. Guttentag convened a meeting where many spoke, but she stood out.
"She was at her absolute best that day," says Chelimsky. "I remember Marcia, building her case, making her points, never going beyond her data, but rousing each of us to a personal sense that not developing an evaluation society was what would have been truly unthinkable.
"Marcia remains a living presence in evaluation for her intellectual honesty, her courage, her driving creative vision, and her kindness."
Models for Evaluating Scientific Research 
AEA member Chris Coryn is the author of a new book published by VDM-Verlag. Models for Evaluating Scientific Research focuses on the complex decision-making that determines the best in scientific research.
From the Publisher's Website:
Due to its very nature, the evaluation of research permeates nearly every aspect of the work of researchers. They evaluate the work of others or have their own work evaluated. They evaluate hypotheses that come to mind, the previous literature, the quality of data, the explanatory power of theories, or the design of experiments or instruments. However, deciding when someone is or has become a first-rate or world-class researcher is an evaluation at a somewhat different level. It is a complex synthesis of judgments about how well the researcher does each of the constitutive types of evaluation, usually as evidenced in the work they are producing. In the last few decades the evaluation of research has become a high-stakes enterprise. With increasing political governance and federal budgets often in the billions, the livelihood of individual researchers, research groups, departments, programs, and entire institutions often hangs in the balance. In this book, the author systematically analyzes and compares the quality of the models used to fund and evaluate scientific research in sixteen countries.
As noted by John Hattie in the book's foreword, "I have come to know Dr. Coryn over the years he has written this book, and see that he has left few stones unturned. This book will assist others to see the diversity of approaches, the implications of the models, and the excitement of evaluation applied to an important and exciting set of questions."
Dr. Coryn is a well-published author, director of the Interdisciplinary Ph.D. in Evaluation (IDPE) program at The Evaluation Center, and an assistant professor in the Evaluation, Measurement, and Research Program in the College of Education's Department of Educational Learning, Research, and Technology at Western Michigan University. He also is former chair of AEA's Graduate Students and New Evaluators TIG.
Learn more at
Get the Most Out of Your Membership
As fall approaches, we draw nearer to AEA's annual Evaluation conference and the start of the academic year. As always, there are many ways right now to participate in the life of the association. Please click through to the appropriate item to learn more.
About Us
The American Evaluation Association is an international professional association of evaluators devoted to the application and exploration of evaluation in all its forms.
The American Evaluation Association's mission is to:
  • Improve evaluation practices and methods
  • Increase evaluation use
  • Promote evaluation as a profession and
  • Support the contribution of evaluation to the generation of theory and knowledge about effective human action.
phone: 1-508-748-3326 or 1-888-232-2275