The Promise of Evaluation: What Evaluation Offers Policymakers and Practitioners
Centre for the Study of Conflict
School of History, Philosophy and Politics,
Faculty of Humanities, University of Ulster

The Promise of Evaluation: What Evaluation Offers Policymakers and Practitioners

by Clem McCartney
Published by the University of Ulster, Coleraine 1992, reprinted 1994
ISBN 1 87120 613 8
Paperback 19 pp £1.00

Copies are available in bookshops or, by post, from:

Pat Shortt
Centre for the Study of Conflict
University of Ulster
Northern Ireland
BT52 1SA

T: (01265) 324666 or 324165
F: (01265) 324917

This material is copyright of the Centre for the Study of Conflict and the author(s) and is included on the CAIN web site with the permission of the publisher. Reproduction or redistribution for commercial purposes is not permitted.

The Promise of Evaluation
What Evaluation Offers Policymakers and Practitioners

by Clem McCartney

Centre for the Study of Conflict
University of Ulster




Descriptive Evaluation: What Happened?

Change Measurement: What Changes have Occurred?

Contextual Assessment: What else was Happening Alongside the Programme?

Impact Assessment: What was Achieved? What was the Outcome of the Programme?

Policy Process Analysis: Was the Programme Implemented as Planned? Why did Changes Occur?

Political Judgements: Is the Change Worthwhile?

Other Typologies: Functional and Methodological





The Centre for the Study of Conflict welcomes this study by Dr Clem McCartney on the subject of evaluation. The need for accountability has led to an increasing demand that the consequences of social action be examined with care and sophistication. This paper is an important contribution to thinking in this area. It is a thoughtful and closely analysed review of the components of the evaluative process by an experienced practitioner who has reflected on his own practice. I am sure that it will be found valuable by all those involved in the worlds of community relations and of social and community development, within Northern Ireland and further afield.

We are grateful for help in various forms from a range of funders and individuals in the production of this work, including Paul Sweeney and NIVT, Tony McCusker and Julie Mapstone, CCRU and PPRU, and Eric Adams and the Barrow and Geraldine S. Cadbury Trust.

Before publishing a research report, the Centre for the Study of Conflict submits it to members of a panel of external referees. The current membership of the External Advisory Board comprises:

Dr Halla Beloff, Department of Psychology, University of Edinburgh;
Dr Paul Brennan, UER Des Pays Anglophones, University of Paris III;
Professor Ronnie Buchanan, Institute of Irish Studies, Queen's University Belfast;
Professor Kevin Boyle, Centre for the Study of International Human Rights Law, University of Essex;
Professor John Fulton, School of Education, Queen's University Belfast;
Dr Richard Jenkins, Department of Social Anthropology, University College Swansea;
Dr Peter Lemish, Department of Education, University of Haifa;
Professor Ron McAllister, College of Arts and Sciences, Boston, USA;
Dr Dominic Murray, Department of Education, University College Cork;
Professor James O'Connell, School of Peace Studies, University of Bradford;
Professor John Rex, Centre for Research in Ethnic Studies, University of Warwick;
Professor Peter Stringer, Centre for Social Research, Queen's University Belfast;
Professor Joseph Thompson, Department of Politics, University of Villanova, Pennsylvania.

Seamus Dunn
Director, Centre for the Study of Conflict
March 1992.



This paper reflects the interest of the Centre for the Study of Conflict in the practice of evaluation. It is intended for those who wish to use the results of evaluation studies: policy makers, funders, programme managers, and practitioners. There is often confusion about what an evaluation study tells readers and how they can use it confidently in future planning. This is because there are many types of evaluation study, and many types of evaluation report, though superficially they may look similar. This paper identifies the different types of evaluation work and indicates the value and limitations of each, and to whom each should be particularly relevant. It should help, at the stage of commissioning evaluation studies, in clarifying what type of study should be undertaken. It is offered also in the hope that it will help readers to recognise the type of report they have been given, and to appreciate what answers it should provide and what answers it cannot provide.

In recent years, evaluation has come close to achieving the status of a talisman in the hands of social policy makers and practitioners. It is hoped that it can provide the answers to many, if not all, of the difficult decisions which have to be taken about social issues. Its range extends from decisions about renewing grants as small as £50 for voluntary community groups to assessments of whether major statutory services and programmes with budgets of billions of pounds should continue. A thorough and expensive evaluation of a comparatively small project may still be justified as a source of guidance for similar programmes on a wider scale.

There is no doubt that evaluation has an important role to play in the policy process; in fact it can fulfil a number of functions. But it is important to be realistic about what to expect from any specific evaluation study. Not all evaluations are the same. They do not address the same questions, and therefore they do not offer the same insights. The danger arises when evaluation is viewed as a uniform process, so that all evaluations are expected to offer similar types of insight. Evaluation is then unable to fulfil its promise, and may be criticised unfairly.

The purpose of this short paper is to consider the expectations which are held about evaluation and to compare them with what different types of evaluation can offer. It is hoped that this will allow all those concerned to share a common set of expectations, and that greater clarity will result. For any programme or service, the funders, policy makers, practitioners, service agencies, and the recipients of the programme are all interested in how it has been introduced and implemented. Many of their questions, and their expectations of an evaluation study, will conflict, but they may not always be aware of what type of evaluation will best answer their questions. Equally, they need to know what kinds of questions a specific type of evaluation will not answer adequately, so that they are not disappointed by the results, and do not try to shape them to give insights which the study cannot sustain.

There are particular difficulties in evaluating some programmes, and they highlight the issues in evaluation very graphically. One such area is community conflict and the examples in this paper are mostly taken from that type of work.

Evaluation, in the sense of learning from experience, has been part of human endeavour since the beginning of time. But evaluation as a systematic and conscious process is of recent origin. Evaluation is concerned with activities and programmes, and their impacts and achievements, whether intended or unforeseen. There are a number of aspects to any good evaluation. It starts with questions to be answered. It collects information or data as the basis for answering the questions. It then analyses the information in some way, depending on the type of questions being addressed. Finally it presents conclusions or suggested conclusions, which, in the hands of planners and practitioners, are a tool for developing future work. Clearly some conclusions will be of greater benefit in some circumstances than in others, depending on the nature of the policy and practice issues which need to be resolved.

This paper follows this pattern in order to distinguish different types of evaluation. It considers each of the main groups of questions with which evaluation is concerned, and tries to show what kind of information each can give, what the value of those findings is, and who is likely to be interested in them. It emphasises the importance of considering what the findings mean and what their limitations are, rather than making the easy assumption that the findings give a direct prescription for future action. Other terms, such as "monitoring" and "assessment", are often used interchangeably with "evaluation", but it causes less confusion if they are restricted to specific activities within the general field of evaluation. These terms will be referred to again when the relevant aspects of evaluation are discussed.

This paper therefore categorises elements of evaluation according to their possible functions or contributions to the development of policy and practice. For the sake of clarity, a number of important issues are only touched on briefly, as they are not central to the present discussion. The relationship between evaluators and those being evaluated has both practical and ethical implications, but that issue has been dealt with elsewhere (e.g. Weiss 1991). Other typologies, such as one based on methods of investigation, are not discussed in detail, but the final part of the paper relates the main kinds of methodology to the categorisation used here. The practical problems of information collection and analysis have been adequately discussed already (e.g. Ball 1988). The presumed dichotomy between quantitative and qualitative methods is also touched on only briefly. Here the emphasis on the functions of evaluation supersedes that controversy, because all methods can contribute to finding answers, and conclusions which are supported by a range of methods will be the most reliable. The danger lies in presuming that too much can be learnt from any one method.


© CCRU 1998-1999
site developed by: Martin Melaugh