Concept, need, goals and tools
Introduction
Evaluation is a systematic determination of a subject's merit, worth and
significance, using criteria governed by a set of standards. It can assist an organization,
program, project or any other intervention or initiative to assess any aim, realisable
concept/proposal, or any alternative, to help in decision-making; or to ascertain the degree of
achievement or value in regard to the aim and objectives and results of any such action that
has been completed. The primary purpose of evaluation, in addition to gaining insight into
prior or existing initiatives, is to enable reflection and assist in the identification of future
change.
Meaning of evaluation:
Evaluation is a process of information gathering, information processing, judgement forming and
decision making. It is a process of finding out the present status of the skills, knowledge and
functioning of an individual or group, which can be used purposefully in the future.
Evaluation is an integral component of all systems of education at all stages. It is
what enables educators, teachers, administrators, policy makers and the community to form an
idea of what is missing and what is available.
Definition of evaluation:
According to Anthony J. Nitko (1986), “Evaluation involves judging the value
or worth of a pupil, of an instructional method, or of an educational programme.”
According to Prof. N.M. Dandekar (1971), evaluation may be defined as a
“systematic process of determining the extent to which educational objectives are achieved
by pupils.”
Programme evaluation
Program evaluation is a systematic method for collecting, analyzing, and using
information to answer questions about projects, policies and programs, particularly about
their effectiveness and efficiency. Program evaluations can involve both quantitative and
qualitative methods of social research. People who do program evaluation come from many
different backgrounds, such as sociology, psychology, economics, social work, and public
policy. Some graduate schools also have specific training programs for program evaluation.
Definition of programme evaluation:
Programme evaluation is the systematic application of scientific methods to assess the design,
implementation, improvement or outcomes of a program (Rossi & Freeman, 1993; Short,
Hennessy, & Campbell, 1996).
Program evaluation is “…the systematic assessment of the operation and/or outcomes
of a program or policy, compared to a set of explicit or implicit standards as a means of
contributing to the improvement of the program or policy…”
Wikipedia defines program evaluation as “a systematic method for collecting, analysing, and
using information to answer questions about projects, policies and programs, particularly about
their effectiveness and efficiency.”
Need of programme evaluation:
Program evaluation is a valuable tool for program managers who are seeking to
strengthen the quality of their programs and improve outcomes for the children and
youth they serve.
Program evaluation answers basic questions about a program's effectiveness, and
evaluation data can be used to improve program services.
Evaluation can improve program design and implementation: it is important to periodically
assess and adapt your activities to ensure they are as effective as they can be.
Evaluation can help you identify areas for improvement and ultimately help you
realize your goals more efficiently.
Program evaluation is critical to assessing progress and maintaining alignment with
your organization's mission and community needs.
Program evaluations can also provide input for future program plans.
Program evaluation is needed to determine whether a program is efficient (uses resources
wisely to perform the needed work), effective (meets the performance measures or objectives
set), and implemented as stated.
Knowing a program's efficiency and effectiveness can support decision making, fix
accountability problems, and aid in planning.
It can also improve operations, resource reallocation and contract monitoring (Lane 1999).
Understanding this importance requires knowledge of the components of program evaluation.
Goals of programme evaluation
1. Information is needed to make current decisions about a product or program.
2. The information should be collectable and analysable in a low-cost and practical manner,
e.g., using questionnaires, surveys and checklists.
3. The information should be accurate.
4. The methods should capture all of the needed information.
5. Additional methods can and should be used if further information is needed.
6. The information should appear credible to decision makers, e.g., funders or top
management.
7. The methods should suit the nature of the audience, e.g., whether they will fill out
questionnaires carefully, engage in interviews or focus groups, or let you examine their
documentation.
8. The evaluators should be able to administer the methods now, or training should be
arranged.
9. The information should be analysable.
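As a minimal sketch of the low-cost collection and analysis mentioned above, the Python snippet below tallies responses to a single closed-ended checklist item. The response data and option labels are hypothetical, invented purely for illustration.

```python
from collections import Counter

# Hypothetical responses to one closed-ended checklist item.
responses = ["agree", "agree", "neutral", "disagree", "agree", "neutral"]

tally = Counter(responses)        # count how often each option was chosen
total = sum(tally.values())

for option, count in tally.most_common():
    print(f"{option}: {count} ({100 * count / total:.0f}%)")
# agree: 3 (50%)
# neutral: 2 (33%)
# disagree: 1 (17%)
```

Even a spreadsheet can do the same job; the point is that closed-ended items make this kind of tabulation cheap and repeatable.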
Tools of programme evaluation
1. End-of-term course evaluation form completed by students in the course
2. Reflective memo, completed by undergraduate instructors for each course taught,
discussed with the Associate Department Head in charge of teaching improvement
3. Interview questions asked of groups of senior students just prior to program completion
A programme evaluation is typically carried out in the following steps:
1. Define the key questions: Clearly define what questions the evaluation will be
designed to answer. Will program participants be compared to a control group of
nonparticipants, or will two different program model variations be compared to each
other? Interviewing and selecting a third-party evaluator (e.g., university researchers,
individual experts, or firms such as MDRC) can help raise and clarify key questions
for the evaluation to answer.
2. Design the evaluation: Together with the evaluator, design a rigorous study that will
answer your key questions as efficiently and affordably as possible. Different
questions and program models lend themselves to different evaluation methods (e.g.,
randomly assigning participants to different groups, or doing pre/post comparisons).
Longer study duration and larger sample sizes will allow higher levels of confidence
in the results, but also increase the expense of the study.
3. Conduct the study: Conduct the evaluation according to the design. The evaluator
may collect and track all necessary data during the study period, or the non-profit’s
internal data systems and staff may be part of the process.
4. Analyze the results: Analyze the data to answer the key questions and reveal any
additional key insights about the program that may emerge from the evaluation
process. If the program evaluation showed high levels of effectiveness and impact,
seek ways to build upon this success (e.g., strengthening or expanding the program,
publicizing results to seek additional funding). If the results were unclear or negative,
discuss potential causes and remedies (e.g., evaluation design changes, program
model changes).
5. Improve: Begin implementing changes to strengthen the program and the non-profit
as a whole.
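For the analysis step, a pre/post comparison can be summarised in a few lines of code. The sketch below uses plain Python with hypothetical assessment scores; a real evaluation would add a significance test and a much larger sample.

```python
import statistics

def pre_post_summary(pre, post):
    """Summarise a pre/post comparison: mean scores and per-participant change."""
    diffs = [after - before for before, after in zip(pre, post)]
    return {
        "pre_mean": statistics.mean(pre),
        "post_mean": statistics.mean(post),
        "mean_change": statistics.mean(diffs),
        "change_stdev": statistics.stdev(diffs),  # spread of individual changes
    }

# Hypothetical assessment scores for six participants.
pre_scores = [52, 60, 48, 55, 63, 50]
post_scores = [58, 66, 51, 62, 70, 55]

summary = pre_post_summary(pre_scores, post_scores)
print(summary)
```

The mean change answers "did participants improve on average?", while the standard deviation of the changes hints at how consistent that improvement was across participants.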
There are other programme evaluation tools such as:
1. Interview
2. Observation
3. Questionnaire
4. Case study
1. Interview
An interview is a conversation where questions are asked and answers are given. In
common parlance, the word "interview" refers to a one-on-one conversation with one
person acting in the role of the interviewer and the other in the role of the interviewee.
The interviewer asks questions, the interviewee responds, with participants taking
turns talking. Interviews usually involve a transfer of information from interviewee to
interviewer, which is usually the primary purpose of the interview, although
information transfers can happen in both directions simultaneously. One can contrast
an interview which involves bi-directional communication with a one-way flow of
information, such as a speech or oration.
2. Observation
An observational tool is used for viewing and recording the actions and
behaviors of participants. Observation is described as a systematic method,
which implies that the observation techniques are sensible and replicable
procedures, so that the research can be reproduced.
Here are some different types of observation methods that can be used
to observe a child:
Anecdotal Records. This observation is usually recorded after the event has occurred
and written in past tense.
Running Records.
Learning Stories.
Jottings.
Sociograms.
Time Samples.
Event Samples.
Photographs.
3. Questionnaire
A questionnaire is a research instrument consisting of a series of questions (or other
types of prompts) for the purpose of gathering information from respondents. The
questionnaire was invented by the Statistical Society of London in 1838.
A questionnaire consists of a number of questions that the respondent has to answer
in a set format. A distinction is made between open-ended and closed-ended
questions. An open-ended question asks the respondent to formulate his own answer,
whereas a closed-ended question has the respondent pick an answer from a given
number of options. The response options for a closed-ended question should be
exhaustive and mutually exclusive. Four types of response scales for closed-ended
questions are distinguished:
Dichotomous, where the respondent has two options
Nominal-polytomous, where the respondent has more than two unordered
options
Ordinal-polytomous, where the respondent has more than two ordered options
(Bounded) continuous, where the respondent is presented with a continuous
scale
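The four response-scale types can be modelled directly when questionnaire data are processed by software. The Python sketch below is a hypothetical representation (not a standard library API) that validates an answer against each scale type.

```python
from dataclasses import dataclass, field

@dataclass
class ClosedQuestion:
    """A closed-ended questionnaire item with one of the four response scales."""
    text: str
    scale: str                    # "dichotomous", "nominal", "ordinal" or "continuous"
    options: list = field(default_factory=list)  # exhaustive, mutually exclusive options
    bounds: tuple = (0.0, 1.0)    # lower/upper limits for a bounded continuous scale

    def is_valid(self, answer) -> bool:
        if self.scale == "continuous":
            low, high = self.bounds
            return low <= float(answer) <= high
        return answer in self.options

q1 = ClosedQuestion("Did you attend the program?", "dichotomous", options=["yes", "no"])
q2 = ClosedQuestion("Rate the program.", "ordinal",
                    options=["poor", "fair", "good", "excellent"])
q3 = ClosedQuestion("Hours attended per week?", "continuous", bounds=(0.0, 40.0))

print(q1.is_valid("yes"), q2.is_valid("great"), q3.is_valid(12.5))  # True False True
```

Checking answers against an exhaustive, mutually exclusive option list is exactly what makes closed-ended questions easy to tabulate later.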
4. Case study
A case study is a research method involving an up-close, in-depth, and detailed
examination of a subject of study (the case), as well as its related contextual
conditions.
Case studies can be produced by following a formal research method. These case
studies are likely to appear in formal research venues, such as journals and professional
conferences, rather than popular works. The resulting body of 'case study research'
has long had a prominent place in many disciplines and professions, ranging from
psychology, anthropology, sociology, and political science to education, clinical
science, social work, and administrative science.
Conclusion:
Evaluation is a process of information gathering, information processing, judgement forming
and decision making. Program evaluation is a systematic method for collecting, analysing,
and using information to answer questions about projects, policies and programs; it answers
basic questions about a program's effectiveness. There are some tools for programme
evaluation such as observation, case study, interview and questionnaire. An observational
tool is used for viewing and recording the actions and behaviors of participants. A case study
is a research method involving an up-close, in-depth, and detailed examination of a subject of
study. An interview is a one-on-one conversation with one person acting in the role of the
interviewer and the other in the role of the interviewee.
References:
1. http://www.deakin.edu.au/data/assets/pdffile/0005/268511/programme evaluation .pdf
2. https://www.dickinson.edu/download/downloads/id/3583/writingsamples
3. https://en.wikipedia.org/wiki/Program_evaluation.
4. http://www.mrc-ewl.cam.ac.uk/communications/evaluation-tools/.
5. https://www.opm.gov/wiki/uploads/docs/Wiki/OPM/training/Program%20Evaluation%20Beginners%20Guide.pdf