Newsletter: April 2014 

Vol 14, Issue 4


Message from the President: 
Networking for the Common Good

 

Dear AEA Colleagues,

 

Last month, I drew attention to AEA's Topical Interest Groups (TIGs) and how they are an important way for members to connect and learn. This month, I'd like to draw your attention to another valuable avenue for building connections: Local Affiliates.

  

Local Affiliate organizations have no legal or financial ties to AEA, but they are aligned with the mission of AEA, and the relationship between Local Affiliates and AEA is mutually beneficial.

  

Some of the Local Affiliates predate AEA itself. For example, the Eastern Evaluation Research Society (EERS), an AEA Local Affiliate, is the oldest professional society for program evaluators in the United States.

  

Earlier this month, I had the privilege of participating in the annual EERS conference. In my comments at the conference, I reflected on the connection between the EERS theme of Balancing Rigor and Resources and the AEA 2014 conference theme of Visionary Evaluation for a Sustainable, Equitable Future. This is one example of the opportunities we have to build connections between Local Affiliates and AEA and look at how we are individually and collectively shaping the evaluation field.

  

Local Affiliates are 100 percent standalone organizations. They establish their own criteria for membership and determine their own dues and procedures for maintaining their finances and their data. Affiliates differ from one another not only in size but also in focus, target audience, and main activities. The number of affiliates has increased from about a dozen in the mid-to-late 1990s to 28 today. 

  

Did you know that there is also a Local Affiliate Collaborative (LAC)? Celebrating its 10th anniversary this year, the LAC is the most visible and lasting product of a 2004 grant from the W.K. Kellogg Foundation to help AEA and Local Affiliates do joint projects, share information, and support one another. Tom Chapel, a longtime AEA member and convener of the Atlanta-Area Evaluation Association (AaEA), led this grant and continues to be a leader on the LAC Steering Committee.

 

The LAC Steering Committee — a core of about 10 LAC zealots — meets by phone most months to work on products and events to support affiliates. The LAC is in the midst of producing a set of short "Tip Sheets," covering topics from recruitment to strategic planning. They have also produced tool kits on organizing conferences and starting new affiliates.

 

You don't need to be an AEA member to be in a Local Affiliate, and the Local Affiliates do not need to be composed of AEA members in order to connect with AEA. Groups can petition to become a Local Affiliate by submitting a proposal to AEA's executive director. Please contact her for details on the information she needs.

 

I'd love to hear from you about how you view the value of the Local Affiliate-AEA connection. 

  

Warm regards,

 

Beverly Parsons

AEA 2014 President

In This Issue
2014 Summer Institute
Walking the Talk
Face of AEA
Policy Watch
Diversity
Book Profile
eLearning
p2i
New Job Postings
Register
Get Involved
About Us
Quick Links
Important Note
To ensure this newsletter reaches you every month, add info@eval.org to your email contacts!
Join our Mailing List!
Register Now: 2014 Summer Institute, June 1-4 in Atlanta

Registration for the 2014 Summer Institute is open! Join the American Evaluation Association June 1-4, 2014, in Atlanta for this year's Summer Institute. Evaluators, applied researchers, grantmakers, foundation program officers, nonprofit administrators, and social science students are all invited to attend. Sessions are filled on a first-come, first-served basis. View session availability.

 

The Summer Institute will include three keynote addresses, five rotations of three-hour and 40-minute training sessions, plus two group lunches to allow for networking among conference attendees. View the agenda-at-a-glance.

 

Presenters include experts who have conducted evaluations in a variety of settings, nationally known authors, practitioners working on the cutting edge, and outstanding trainers. View the workshop and course descriptions.  

 

Register today!

AEA Values - Walking the Talk with Leah Neubauer

Are you familiar with AEA's values statement? What do these values mean to you in your service to AEA and in your own professional work? Each month, we'll be asking a member of the AEA community to contribute her or his own reflections on the association's values.  

 

AEA's Values Statement

The American Evaluation Association values excellence in evaluation practice, utilization of evaluation findings, and inclusion and diversity in the evaluation community.

 

i. We value high quality, ethically defensible, culturally responsive evaluation practices that lead to effective and humane organizations and ultimately to the enhancement of the public good.

ii. We value high quality, ethically defensible, culturally responsive evaluation practices that contribute to decision-making processes, program improvement, and policy formulation.

iii. We value a global and international evaluation community and understanding of evaluation practices.

iv. We value the continual development of evaluation professionals and the development of evaluators from under-represented groups.

v. We value inclusiveness and diversity, welcoming members at any point in their career, from any context, and representing a range of thought and approaches.

vi. We value efficient, effective, responsive, transparent, and socially responsible association operations.

 

 


 

I am Leah C. Neubauer, program manager and instructor in the Master of Public Health Program at DePaul University. I am the president of AEA's Chicagoland Local Affiliate and an active member of the Local Affiliates Collaborative (LAC) Steering Committee.

 

AEA's vision and values ground and shape my practice as an evaluator and educator. From my adult education doctoral training, I offer up the notion that our philosophies of evaluation practice are influenced and shaped by our personal and professional values. As such, my insights are anchored in my youth, my first evaluation job, and my AEA experiences.

 

While my 1980s childhood included no dreams of a future in the evaluation profession, my upbringing — as the oldest child in a large multigenerational, multiethnic, social-change-minded family — kept me in a constant state of assessment and evaluation. My family's desire to discuss opinions on everything provided foundational training in many useful evaluation skills and guiding values, such as: active learning, project management, credible summary statements (at the dinner table), robust stakeholder engagement, and, quite specifically, inclusion of multiple voices in everything (all the time!).

 

From these childhood moments, I formally found evaluation through urban HIV/AIDS activism and community-based programming in the late 1990s. I moved to Chicago for university and volunteered with a Latino-serving community-based HIV/AIDS organization. There I first witnessed the power of evaluation, knowing that data and outcomes were often the deciding factor in whether a program was funded. I was doing evaluation by day and spending my evenings as an HIV/AIDS outreach educator focused on behavior change. I started thinking about the role of learning and evaluation; I became quite interested in evaluative learning and capacity building. So while working with evaluation team members on "high-quality evaluation," I became quite serious about contributing to evaluative learning, organizational capacity building, and community enrichment in the same places we conducted evaluation. I believed that evaluation efforts could help keep doors open, while also aiming to enrich the lives of all individuals, organizations, and communities involved. 

 

From these community-based roots, I found AEA at the Portland conference in 2006. My mentor encouraged me to attend given my growing interest in HIV/AIDS and evaluation. I assumed that being an evaluator meant going to AEA, and I also knew that "the famous community psychologist Robin Miller" was a part of AEA. I had to be there. I journeyed to my first conference and was hooked, in large part because of the alignment between AEA's vision and my personal and professional values. It just felt right being at AEA. Folks at AEA cared about the same issues I did. Also, the people proved to be quite fun! 

 

Since my initial AEA experience, I have witnessed values in action through local affiliate work, international evaluation efforts, education in the public health classroom, and AEA's Cultural Competence Dissemination Working Group. I find this to be a very exciting and evolving time in AEA, particularly for fellow self-identified evaluators for social change. I welcome my seat at the AEA member table, enjoying and learning from colleagues who clearly embody a lifelong commitment to seeing the vision and values fully realized.

Face of AEA - Meet Michele Tarsilla

AEA's more than 7,800 members worldwide represent a range of backgrounds, specialties, and interest areas. Join us as we profile a different member each month via a short question-and-answer exchange. This month's profile spotlights Michele Tarsilla.

  

Name: Michele Tarsilla

Affiliation: Chair of the International and Cross-Cultural Evaluation TIG; Vice President of the Evaluation Capacity Development Group (ECDG)

Degrees: Ph.D., Interdisciplinary Evaluation, Western Michigan University; Graduate Diploma, Public Policy and Program Evaluation, Carleton University; M.S., Foreign Service, Georgetown University School of Foreign Service; B.A., International Political Science, Catholic University   

Years in Evaluation Field: 12

Joined AEA: 2008

  

Why do you belong to AEA? 

  

I decided to join AEA after working in evaluation in Africa and Latin America for a number of years. Becoming part of a professional evaluation association proved quite beneficial to me from the very beginning. On the one hand, I learned about cutting-edge evaluation theories and methods that rapidly helped me refine my own evaluation practice. On the other hand, I had the unique opportunity to exchange ideas with and learn from seasoned evaluation professionals conducting evaluations in international and cross-cultural settings. 

 

Over the years, I became increasingly involved in AEA activities and, by so doing, became aware of two of the association's distinct features. First, the incredible diversity of its membership: I learned about theories and methods of direct applicability to my international development evaluation assignments from AEA colleagues whose work was primarily domestic. Second, the continued encouragement for members to develop and disseminate innovative approaches and tools aimed at enhancing contemporary evaluation practice. I myself had a chance to use the annual conferences and the AEA365 blog as a privileged platform from which to launch my Evaluation Capacity Development framework, discuss my book chapter with Donna Mertens on new trends in mixed methods evaluation, and present the findings of my research with Michael Bamberger on the measurement of unintended and unexpected outcomes in performance and impact evaluations.

  

What's the most memorable or meaningful evaluation that you have been part of?  

  

This is a particularly challenging question, as every evaluation is unique and represents an enriching experience, both from a professional and personal standpoint. That said, of all the evaluations that I have worked on to date, there is one that I still remember quite vividly: the summative evaluation of a micro-loan program implemented by a credit union in Ontario, Canada, and primarily aimed at marginalized population groups. 

 

Based on the findings of this four-month evaluation, small loans granted to individuals with no or very low credit scores to start up a small business were effective in improving the economic conditions and social integration of both the borrowers and their households. This evaluation is particularly dear to me for two reasons. First, it prompted me to develop and exercise tactful responsiveness and cultural competence toward an unusually diverse sample of respondents in a very short period of time. Having to interview individuals with many different backgrounds, often with very limited language skills or low Internet savvy, and with little willingness to talk about their income and personal well-being with a stranger (me), forced me to think on my feet at all times and to adapt my evaluation strategy to unforeseen circumstances throughout the evaluation. Second, the evaluation findings were successfully used to lobby the provincial government of Ontario, as well as other local foundations, to sponsor similar micro-finance programs in the future and to make additional resources available to offset the risk of loan delinquency.

  

What advice would you give to those new to the field?

  

My first piece of advice is to look for as many and as diverse opportunities as possible to practice evaluation. Some of my colleagues are of the opinion that, in order to succeed in getting evaluation jobs, you need to evaluate programs in only one area over and over so as to become a recognized expert in a specific field. I personally believe that content expertise needs to be matched with methodological versatility and that, therefore, being open to working in a variety of areas will equip you with a stronger methodological toolbox and allow you to land a larger number of assignments in the longer term. I myself have developed my evaluation practice across a wide variety of settings over the last 10 years (from evaluating school programs in refugee camps in northwestern Kenya to assessing the gender-responsiveness of HIV and AIDS programs in the favelas of Rio de Janeiro, and from developing an M&E framework for a new Decent Work Program in Agriculture in Malawi and Tanzania to delivering workshops for commissioners, managers, and practitioners in over 25 countries).

 

My second piece of advice is to constantly look beyond your own area of practice and never stop searching for innovative evaluation methods and approaches used in other fields that might prove particularly well suited to addressing your evaluation questions. Such "evaluative curiosity," combined with the timely review of specialized literature (including open-access peer-reviewed journals) and periodic perusal of online evaluation exchanges sponsored by a variety of evaluation communities of practice, is likely to strengthen your professional skills. I myself am part of several online evaluation communities and, besides downloading the reports and tools they share that best suit my interests and needs, I actively contribute to some of their sponsored evaluation discussions from time to time.

 

My third piece of advice is to get used to filtering the incredibly large volume of evaluation-related information that one is exposed to on a daily basis. In particular, it is critical to be able to assess both the strengths and limitations of evaluation tools and methods which, despite being advertised as "hot" or "must-haves," might not be appropriate for your specific evaluation needs. 

Policy Watch - FY 2015 Guidance on Promoting Data for Evaluation  

From Cheryl Oros, Consultant to the Evaluation Policy Task Force (EPTF)

 

The Obama administration has issued further guidance on developing and using data to improve government programs via evidence-based decisions. Two recent documents explain the importance of data use and how to improve it: the 2015 Budget fact sheet on building and using evidence and the Office of Management and Budget memo (M-14-06), "Guidance for Providing and Using Administrative Data for Statistical Purposes."

 

Previously, we described the many investments in the administration's budget. These total billions of dollars in pilot and demonstration programs, outcome-focused grant reforms, and new strategies that pay only for empirically proven approaches. The budget also lists examples of programs to invest in, scale up, or change on the basis of strong evidence.  

 

The OMB memo is designed to promote opportunities and tools for addressing barriers to agencies' access to and use of administrative data for evidence-building. The memo notes that high-quality and reliable statistics provide the foundation for the research, evaluation, and analysis that help the federal government understand how public needs are changing, how well federal policy and programs are addressing those needs, and where greater progress can be achieved. Such statistical information can be created more efficiently through greater use of information that the government has already collected for programmatic and regulatory purposes. Information of this kind is often called "administrative data." 

 

The OMB memo encourages agencies to keep possible statistical purposes in mind in managing their administrative data. OMB further notes that agencies sometimes do not make full, appropriate use of non-public administrative data for statistical purposes, because they perceive the requirements and protections that apply to non-public data as being too complicated and burdensome to navigate.  

 

The memo requests that agencies:

 

  • Foster greater collaboration between program and statistical offices; develop strong data stewardship policies and practices around the statistical use of administrative data; require the documentation of quality control measures and key attributes of important datasets; and require the designation of responsibilities and practices through the use of agreements among these offices. 
  • Promote the use of administrative data for statistical purposes, in ways consistent with legal and policy requirements for such uses, including the need to continue to fully protect the privacy and confidentiality afforded to those providing data. 
  • Use suggested "best practice" tools, including guidance on the interaction of Privacy Act requirements and use of data, as well as model interagency agreements for developing their policies and practices for sharing data with other agencies. 
  • Report progress to OMB on implementing the memo and identify any barriers to moving forward.

 

Also, as part of the president's Management Agenda and Open Data Initiative, agencies are encouraged to seek ways to open up federal data for private sector innovation and public use. The OMB guidance aims to help agencies access and utilize existing federal data to answer important questions about program outcomes. Many of these efforts are ongoing and involve leveraging existing data, including linking data on program participants to administrative data on earnings, health, or other outcomes. 

Diversity - Applications for the 2014-2015 GEDI Cohort Now Open

From Zachary Grays, AEA Headquarters

 

The American Evaluation Association is proud to announce that we are now accepting applications for the 2014-2015 Graduate Education Diversity Internship! We had an overwhelming response from sites eager to host interns, and we are looking forward to matching them with the next group of future evaluators. It has been a little more than 10 years since the GEDI program set out to provide a unique internship and training opportunity for graduate students studying evaluation. Over 50 interns later, and thanks to the support of countless host sites and volunteer program chairs, the GEDI program has become a pillar of diversity at AEA. Our current interns are coming to the end of their journey, excited to leave their mark on the evaluation discipline and deepen the profession's capacity to work in racially, ethnically, and culturally diverse settings. 

 

The Graduate Education Diversity Internship, commonly referred to as GEDI, aims to expand the pool of graduate students of color and from other under-represented groups who have extended their research capacities to evaluation. Interns participate actively at their host sites during the academic year by applying skills acquired during classroom time to real-world evaluations. In addition to this invaluable experience on-site, interns meet both online and in person to build on their existing evaluation knowledge and skill set. Under the direction of Directors Dr. Stewart Donaldson and Dr. Ashaki Jackson and Program Liaison John LaVelle, this year's impressive interns — Crystal Coker, Shipi Kankane, Bailey Murph, and Anael Ngando — have spent the last nine months completing the rigorous body of work necessary to be recognized as successful GEDI graduates.  

 

"What I like a lot about the GEDI program is that it forces you to think about issues of culture in evaluation," said Anael Ngando, GEDI intern with Education Development Center Inc. "All the aspects of the program (assignments, workshops, meetings, webinars, etc.) contribute to improving interns' understanding of the relevance of culture in evaluation. The GEDI program gave me real-life, hands-on experience on the process, challenges, and rewards of conducting evaluation." 

 

In less than two months, our GEDI interns will conduct their final presentations and graduate from interns to GEDI alumni at the Summer Institute in Atlanta. It has been an absolute pleasure working with these very talented students this year, and we have no doubt they will make remarkable evaluators. 

 

Applications are now available for the next GEDI cohort. I encourage you all to circulate the call to those who would not only make excellent GEDIs but also have a passion for championing diversity and cultural competence in evaluation. 

 

Read more about the GEDI program.  

Book Profile - The Future of Evaluation in Society: A Tribute to Michael Scriven

Stewart Donaldson is the editor of The Future of Evaluation in Society: A Tribute to Michael Scriven (Evaluation and Society), a book published by Information Age Publishing. 

 

From the Publisher's Site:

 

The Evaluation and Society series presents authored manuscripts and edited volumes that advance our understanding of how evaluation theory and practice can contribute — meaningfully and consequentially — to the quality and improvement of both developed and developing societies. Volumes in the series will explore new and refined evaluation approaches and methodologies, research on evaluation's claims, and theoretical engagements with issues that bear on the productive and ethical use of evaluation in society. 

 

From the Editor:

 

Michael Scriven challenges us to examine the five great paradigm shifts that have revolutionized the foundation of evaluation, and that he believes will form the basis for a much brighter future for evaluation in society. Scriven's revolutionary ideas are admired and challenged in original chapters written by key thought leaders in evaluation, including Michael Quinn Patton, Ernest House, Daniel Stufflebeam, Robert Stake, Jennifer Greene, Karen Kirkhart, Melvin Mark, Rodney Hopson, and Christina Christie. This volume will be of great interest to evaluation scholars, practitioners, and students.

 

About the Editor:

 

Stewart I. Donaldson, Ph.D., is professor and director of the Claremont Evaluation Center, and dean of the Schools of Social Science, Policy & Evaluation and Community & Global Health at Claremont Graduate University. In 2013, he was honored with the American Evaluation Association's Paul F. Lazarsfeld Award for sustained lifetime written contributions to advancing evaluation theory and practice, and was elected president of AEA.  

eLearning Update - Discover Upcoming eStudy Courses and Coffee Break Demonstrations

Our eStudy program is made up of in-depth virtual professional development courses. Below are May's eStudy offerings: 

 

eStudy 042: Intermediate Developmental Evaluation: Applying Complexity Concepts to Enhance Innovation and Use - Michael Quinn Patton  

May 14, May 16, May 19, and May 27

2-3:30 p.m. ET 

 

This eStudy is geared toward an audience with an intermediate level of interest and/or expertise in developmental evaluation (DE). It will focus on actually doing developmental evaluations. DE is especially appropriate for innovative initiatives or organizations in dynamic and complex environments where participants, conditions, interventions, and context are turbulent, pathways for achieving desired outcomes are uncertain, and conflicts about what to do are high. DE supports reality-testing, innovation, and adaptation in complex dynamic systems where relationships among critical elements are nonlinear and emergent. Evaluation use in such environments focuses on continuous and ongoing adaptation, intensive reflective practice, and rapid, real-time feedback. The purpose of DE is to help develop and adapt the intervention (as distinct from improving a fixed model).

 

Read more and register.

 

eStudy 040: Digital Qualitative: Leveraging Technology for Deeper Insight - Bob Kahle 

May 20 and May 22

3-4:30 p.m. ET

 

This short course describes the range of new qualitative techniques available and explains how and when to use them to generate deeper insight as part of your evaluation efforts. This eStudy will occur in two 1.5-hour sessions and will include preparation materials sent before, between, and after the sessions.

 

Read more and register

____________________________________________________________________________________ 

 

Our Coffee Break Webinars are short, 20-minute presentations of tools or tips we think evaluators will find helpful in their work lives. Let's take a look at what's in the pipeline for May:

 

Special Series: How to Write an Article, Session 3 - Conceptualizing Your Experience - George Julnes

Thursday, May 1

2-2:20 p.m. ET 

 

Identify the "so what" question you are trying to answer; explain why this question is significant and to whom it is significant; and clearly express the answer to this question based on the evaluation experience you are describing.

 

Special Series: How to Write an Article, Session 4 - Submitting Your Article - Rachael Lawrence

Thursday, May 8

2-2:20 p.m. ET

 

In this session, Rachael Lawrence, managing editor of the American Journal of Evaluation, will demonstrate the submission process on Scholar One (also known as Manuscript Central) and illustrate common mistakes to be mindful of during submission. She will also provide her insights into the peer review process and a few pointers on how to successfully submit a quality paper to the journal. 

 

CBD182: Qualitative Data: Software solution for the evaluation process - Stuart Robertson

Thursday, May 15

2-2:20 p.m. ET 

 

Stuart Robertson, pre-sales specialist at QSR International, will introduce the use of NVivo in an evaluation context. NVivo is software that helps evaluators organize and analyze unstructured, non-numeric data as part of the evaluation process. Examples of useful tools include the Word Frequency Query, the Text Search Query, the Matrix Coding Query, and the Modeler. 

 

CBD184: Participatory Facilitation Methods for Evaluation - Rita S. Fierro & Alissa Schwartz

Thursday, May 29

2-2:20 p.m. ET

 

Participatory facilitation methods are currently at the forefront of organizational development and evaluation practices. As evaluators, we can overcome limitations in data collection and analysis by using participatory facilitation techniques and increasing our ability to sense, listen, and recognize both the dynamics in the room and the common threads among seemingly conflicting voices. This webinar will introduce you to the methodologies and principles of the Art of Hosting Meaningful Conversations, a holistic approach to facilitating meetings. 

 

You can pre-register for the webinar by clicking the link above. 

Potent Presentations Initiative - Our Favorite Underutilized Presentation Tools
From Stephanie Evergreen, Potent Presentations Initiative Coordinator  

 

Last month, we shared some of our most popular resources. This month, we share our quiet heroes: the tools we should all use more often to help us create dynamite presentations.

 

Ignite Presentation Planning Guide

 

Ignite sessions are a new format at the AEA conference, and they are increasingly popular outside conference settings as an efficient and effective way to convey information. If you don't already know the drill, an Ignite presentation is just five minutes long, consisting of 20 slides that automatically advance every 15 seconds. That isn't very much time! Each slide can really only convey one idea. Our Ignite Presentation Planning Guide will help you think through the single idea each slide will convey, making slide development a thousand times easier. There isn't much room to write your notes for each line, and that's the point! 

 

Rundown Template

 

Originally shared by presentation superstar Kathy McKnight, a rundown template is an outline of your presentation's content, with helpful bonus features. It is especially helpful for organizing longer talks, such as demonstrations and workshops. Download the Rundown Template and complete it by writing in the key points you need to make on each slide, the pace and timing you need to keep to end on time, the supporting documents or materials you need to have on hand, and any notes you need to write to yourself (such as "keep breathing!"). If you need an example, review Kathy's completed rundown.

 

Both of these tools have one thing in common: They are all about preparation. I suspect these tools are underutilized because most of us skimp on preparation, but it is one of the key elements that distinguishes potent presentations from weak ones. Check out these tools today!  

New Jobs & RFPs from AEA's Career Center  
What's new this month in the AEA Online Career Center? The following positions have been added recently: 

Descriptions for each of these positions, and many others, are available in AEA's Online Career Center. Job hunting? The Career Center is an outstanding resource for posting your resume or position, or for finding your next employer, contractor, or employee. You can also sign up to receive notifications of new position postings via email or RSS feed.

Register
Get Involved
About Us
AEA is an international professional association of evaluators devoted to the application and exploration of evaluation in all its forms.

 

The association's mission is to:
  • Improve evaluation practices and methods.
  • Increase evaluation use.
  • Promote evaluation as a profession.
  • Support the contribution of evaluation to the generation of theory and knowledge about effective human action.
phone: 1-202-367-1166 or 1-888-232-2275 (U.S. and Canada only) 
website: www.eval.org