Use of evaluation results to enhance organisational effectiveness: Do evaluation findings improve organisational effectiveness?
Innocent K. Muhumuza 1
1 Planning, Monitoring and Evaluation, Caritas Switzerland, Kampala, Uganda
The purpose of this paper is to highlight the important factors to consider in designing and implementing evaluations that improve programme effectiveness (the extent to which a project or programme is successful in achieving its objectives). Specifically, the paper defines the terms evaluation and utilisation, discusses the types of use and the factors influencing utilisation, and presents a case study of utilisation.
Keywords: design, participation, ownership, utilisation, improved organisational effectiveness
Defining evaluation and why evaluations are conducted in organisations.
In contemporary project and programme management, the terms monitoring and evaluation (M&E) have tended to be treated as synonymous. However, there is a clear distinction between the two. Even though both contribute to enhancing organisational effectiveness, they answer distinct project management questions, and different institutions and scholars define evaluation differently. This paper focuses on
whether evaluations improve organisational effectiveness.
In the Organisation for Economic Cooperation and Development (OECD) glossary of key
terms (OECD, 2002) evaluation is defined as “The systematic and objective assessment of an
on-going or completed project, programme or policy, its design, implementation and results.
The aim is to determine the relevance and fulfillment of objectives, development efficiency,
effectiveness, impact and sustainability”.
The UNDP defines evaluation (UNDP, 2002) as “A time-bound exercise that attempts to
assess systematically and objectively the relevance, performance and success of ongoing and
completed programmes and projects”.
Whereas monitoring is the systematic collection and analysis of information as a project
progresses, evaluation is the comparison of actual project impacts against the agreed
strategic plans (Shapiro, 2010).
In light of the above, one would question the rationale for conducting evaluations.
It is worthwhile to note that interest in and demand for evaluation has been growing for decades among both public and non-governmental organisations, partly due to increased demand for accountability by donors and partly due to the quest to learn from experience.
The United Nations Office on Drugs and Crime (UNODC) advances learning and accountability as the two reasons for conducting evaluation. The presumption is that evaluation will improve the planning and delivery of interventions, support decision making based on findings, recommendations and lessons learned, and provide objective and up-to-date evidence of what UNODC has achieved and what impact has been produced with the resources provided. Evaluation also aims at accounting for the use of resources and for the results achieved.
The UNDP in its glossary of terms (UNDP, 2002), also points out that the aim of evaluation is
to determine the relevance and fulfillment of objectives, development efficiency,
effectiveness, impact and sustainability. An evaluation should provide information that is
credible and useful, enabling the incorporation of lessons learned into the decision–making
process of both recipients and donors. This is also a position that is held by the OECD.
The broader purpose of evaluation is to construct and provide judgements about facts and values to guide choice and action (Dasgupta, 2001). In a similar vein, Jackson and Kassam (1998) argue that monitoring and evaluation is a procedure of knowledge generation, self-assessment, and joint action in which stakeholders in a program collaboratively define the evaluation issues, collect and analyze data, and take action as a result of what they learn through this process.
The Centers for Disease Control and Prevention (CDC) recognizes that program staff may be pushed to do evaluation by external mandates from funders, authorizers, or others, or they may be pulled to do evaluation by an internal need to determine how the program is performing and what can be improved. This seems to imply that evaluations may not necessarily be conducted as a matter of need by the primary users, in this case the managers and implementers, but because they are demanded by funders and authorizers.
The Development Assistance Committee (DAC) stipulates that in conducting an evaluation, it
should answer questions on the relevance, efficiency, effectiveness, impact and sustainability of interventions, projects, programmes and policies.
The varying definitions of evaluation and reasons for conducting evaluations point to the fact that evaluations, when utilised, can actually improve effectiveness in the implementation of projects/programmes, which would ultimately result in organisational effectiveness. However, this is only possible if there is willingness to use the evaluation as an improvement tool.
Understanding evaluation use in development projects:
From the different definitions of evaluation and varying reasons why evaluations are
conducted, it is implied that evaluations should stimulate action. It is also an expectation among evaluators that their work will be useful to policy makers, program managers and other stakeholders in solving social problems. It is also argued that society justifies spending
large amounts of money for evaluations with the expectation that there will be immediate
payoffs, and so if evaluations are not useful then the funds should be expended on alternative uses (Shadish, Cook and Leviton, 1991). The argument by Shadish et al. is in line with the focus of this paper: the utilisation of evaluations to improve effectiveness. The key question here is: do evaluations actually stimulate action? The answer(s) to this question lead to analysing whether evaluations are utilised [used] or not. The term “evaluation utilisation” is used interchangeably with “evaluation use” in this paper.
Different scholars have had different perspectives on utilisation [use] of evaluations.
One way to look at evaluation utilisation is as the application of evaluation processes, products or findings to produce an effect (Johnson, Greenseid et al., 2009).
Evaluation use also concerns how real people in the real world apply evaluation findings and
experience and learn from the evaluation process (Patton, 2013).
Evaluation use is also looked at as “the way in which an evaluation and information from the
evaluation impacts the program being evaluated” (Alkin and Taut, 2003).
Whereas there has been effort to explicitly define the term evaluation utilisation, evaluations have been seen to be used in different ways. It may not matter in what ways evaluations are used as long as the use results in enhanced organisational effectiveness. A number of types of evaluation use are generally found in the literature: instrumental, process, conceptual, symbolic, legitimisation, interactive, and enlightenment use.
Instrumental use: when decision makers use the evaluation findings to modify the object of
the evaluation (i.e. the evaluand) in some way (Shulha & Cousins, 1997). Simply put, this is
the direct action that occurs as a result of an evaluation.
Conceptual use: when the findings of an evaluation help program staff understand the program in a new way (Weiss, 1979). This could be something newly understood about a program, its operations, participants or outcomes through the evaluation. This also implies that evaluation may not result in direct action but influences understanding.
Enlightenment use: when the evaluation findings add knowledge to the field and so may be used by anyone, not just those involved with the program or the evaluation of the program.
Symbolic use: occurs when an organisation establishes an evaluation unit or undertakes an evaluation study to signal that it is a good manager. The actual functions of the evaluation unit or the evaluation’s findings are of limited importance aside from their “public relations value”. The organisation or individuals use the mere existence of evaluations, and not any aspect of the results, to persuade or convince.
Legitimisation use: the evaluation is used to justify current views, interests, policies or
actions. The purpose of the evaluation is not to find answers to unanswered questions or find solutions, but simply to provide support for opinions or decisions already made.
Process use: this occurs when individuals change their thinking and behaviour, and programs/organisations change their procedures and culture, as a result of the learning that happens among those involved in the evaluation process (Patton, 1997). Process use is defined as “....cognitive, behavioural, program and organisational changes resulting from engagement in the evaluation process and thinking evaluatively” (Patton, 2003). Process use incorporates features from instrumental, enlightenment and conceptual use.
Evaluation Utilisation in practice:
When looking at an organisation that uses evaluation to improve its effectiveness, conceptual, instrumental and process uses are the best placed to enhance effectiveness. Of the three, however, process use stands out as the one most likely to help organisations improve their effectiveness, because it integrates most features of conceptual and instrumental use and goes beyond them to look at the changes in behaviour and cognition that result from engaging in an evaluation, which ultimately influence how organisations work. Embedded in process use, and critical to enhancing organisational effectiveness, is learning and applying the learning from the evaluation process. The case study below shows how evaluations can actually enhance organisational effectiveness.
The case study presented in this paper depicts an evaluation whose results were applied to programme development, an illustration that evaluations can indeed be utilised to enhance organisational effectiveness.
Case study: Evaluation of Re-integration of Ex-inmates, Advance Afrika-Uganda
The project design and evaluation
Advance Afrika (AA) piloted the Youth Entrepreneurship Enhancement Project (YEEP) that
was implemented together with the Uganda Prisons Service (UPS). The project aimed at
improving the livelihoods of vulnerable youth in Northern Uganda in the districts of Gulu,
Lira and Kitgum, which generally serve the Lango and Acholi sub-regions. The project was in response to the vulnerability of youth in Northern Uganda, marginalised by growing up in internally displaced people’s (IDP) camps or as former abductees of the rebel group the Lord’s Resistance Army, with a majority lacking formal education and therefore unskilled and/or disabled, and thus idle in the labour market. The project specifically targeted its interventions at youth ex-convicts aged 18-35 years in the three districts; by equipping the ex-convicts with entrepreneurship skills to generate income, the project had the potential to improve their quality of life. The project was implemented through prison staff and university youth facilitators at Gulu University.
An evaluation was conducted at the end of the one-year pilot phase (March 2014 - February 2015) to assess the relevance, efficiency, sustainability and impact of the project. The rationale of the evaluation was to document the experiences: successes, challenges, opportunities for further growth and lessons learnt during the pilot phase, in order to improve the current project design.
Participants in the evaluation
Staff of Advance Afrika, the Uganda Prisons Service, university youth facilitators and ex-convicts participated in the evaluation. Advance Afrika staff participated as implementers and respondents, UPS staff as co-implementers and secondary stakeholders, university youth facilitators as secondary stakeholders, and ex-convicts as beneficiaries and primary stakeholders. The evaluation was facilitated by an external evaluator; staff participated as respondents, and the analysis of the responses involved both the staff and the evaluator.
Key findings and recommendations
The project evaluation singled out eight critical areas for improvement. These were:
Training: Conduct refresher training for all UPS trainers with practical sessions, including face-to-face interaction with those with expertise in the field of entrepreneurship to clarify emerging issues. Also increase the duration of the training and integrate assessment of individual learners’ abilities.
Wider stakeholder buy-in: Encourage the trained social workers to extend the training to the wider prison staff in order to strengthen goodwill for the project.
Strengthening advocacy: Strengthen media publicity and conduct targeted and realistic advocacy with specific demands backed by irrefutable evidence.
Follow-up of ex-convicts: UPS to follow up on entrepreneurship development as part and parcel of its ordinary course of duty, and to ensure that UPS has competent social workers.
Training manual improvement: UPS management to take direct responsibility for the review of the manual so as to strengthen their commitment to learning and their internal capacities.
Monitoring and evaluation: Advance Afrika and its partners to develop a standard reporting guide for the project so that they do not waste time reporting on non-issues while glossing over key indicators of success.
Project targeting: The project to keep youth outside the project boundaries to a minimum in order to optimize the results for Northern Uganda.
Internal strengthening: The evaluation recommended trainings, such as Community Based Performance Monitoring training, to build the capacities of both Advance Afrika and key implementers.
Utilisation of the results:
The evaluation of the YEEP culminated in a two-year project, “Advancing Youth Entrepreneurship project (AYE), 2015-2016”, and a three-year project, “Social Reintegration and Economic Empowerment of Youths (SREE), 2016-2018”. Implementation of the eight evaluation recommendations was spread across these two projects: the training manual was revised; the duration of trainings was extended from five days to ten days; an online M&E system accessible to all stakeholders was developed; the geographical scope of the project was extended to include more prison units in more districts in the Lango and Acholi sub-regions; the training of more social workers was planned for; refresher trainings were planned for; the use of radio as a platform for advocacy and awareness creation was adopted; and staff were assigned specific caseloads of ex-convicts to follow up. The development of AYE and SREE, informed by the results of the YEEP, is a demonstration of how an evaluation can be used to enhance organisational effectiveness.
Understanding effectiveness of organisations
The OECD defines effectiveness as the extent to which the development intervention’s
objectives were achieved, or are expected to be achieved; taking into account their relative
importance (OECD, 2002), and the Development Assistance Committee (DAC) identifies effectiveness as one of the evaluation criteria.
For an evaluation to be utilised (or not) to enhance organisational effectiveness, we cannot deny the fact that there are enabling (or disabling) factors. These range from the quality of the evaluation to organisational, external, technological, relational and environmental factors.
Sandison (2005) identifies four factors influencing the utilization of evaluations: quality, organizational, relational and external factors. Preskill et al. (2003) identify five factors that influence evaluation use: organization characteristics, management support, advisory group characteristics, facilitation of the evaluation process, and the frequency, methods and quality of communication.
In this paper, I look at three broadly categorized factors that influence the utilization of evaluations to enhance organizational effectiveness: quality, organizational and external factors.
Quality factors: Adapting Sandison’s categorization, these relate to the purpose and design of the evaluation, the planning and timing of the evaluation, dissemination and the credibility of the evidence (Sandison, 2005). The needs and audiences for evaluations change over time, and so there is no such thing as “one size fits all”. Williams et al. (2002) allude to the same thought when they say “…..one size does not fit all and each purpose privileges different users”. Patton (1997) also argues that the purpose, approach, methodology and presentation of an evaluation should derive from the intended use by the intended user. Implied in this is that an evaluation should be tailored to meet the specific needs of its use, i.e. to meet the intended use by the intended user. This therefore also demands careful thinking about and selection of stakeholders, determining their level of participation in the evaluation process and their interests in the evaluation. This is critical for ensuring ownership of the results and influences their use. This view is supported by Williams et al. (2002): “…active participation of stakeholders at all stages of the evaluation cycle promotes use”.
Planning for the evaluation, including its timing, partly determines how deeply stakeholders will participate in the evaluation process, how much of their time they will need to devote to the evaluation, and how timely the evaluation is in meeting their current and future needs. If the evaluation will take a great amount of the stakeholders’ time, it is very likely that there will be partial participation or no participation at all. Also, if there is no perceived importance of how the evaluation meets the stakeholders’ current or future needs, then there is little compelling them to participate in the evaluation process, and therefore there will be limited or no attachment to the evaluation results. Another aspect that cannot be underestimated in planning for an evaluation is its timing: specifically, when the evaluation starts, when it is completed and when the results are made available. Oftentimes, evaluations are not utilized if the results are made available long after the key decisions have been made.
Dissemination and credibility of evidence: when the evaluation is completed, it is important that its results are shared with the different stakeholders. It is also key to note that different media for dissemination appeal differently to different stakeholders, and so the evaluator must pay particular attention to the medium and content of the dissemination (for instance through team discussions, workshops or management meetings). It is at dissemination that the stakeholders validate the evidence (and its quality) of the evaluation. Where the evidence is questionable, the chances of utilization are reduced. The evidence should be credible, well researched, objective and expert, and the report itself should be concise and easy to read and comprehend (Sandison, 2005). The quality of evidence is judged by its accuracy, representativeness, relevance, attribution, generalisability and clarity around concepts and methods (Clarke et al., 2014). If the evidence is of poor quality, the data used are doubted, and the recommendations are perceived as irrelevant, the evaluation can in no way be utilized. Feasible, specific, targeted, constructive and relevant recommendations promote use (Sandison, 2005). Credibility of the evidence also depends on the competence and reputation of the evaluator; these define the evaluator’s credibility. Where the credibility of the evaluator is questionable, then, no doubt, the evidence is questionable and so will not be taken seriously by the project teams.
Organizational factors: the different constituent components of organizations can in one
way or another influence the utilization of evaluations. These components include policies, budgets, structure, systems (including process and knowledge management systems) and
staff. Sandison (2005) identifies culture, structure and knowledge management as the
organizational factors that influence utilization.
On organizational culture, Sandison (2005) looks at a culture of learning and argues that in a learning organization, senior managers encourage openness to scrutiny and change, promote transparency and embed learning mechanisms. Staff members also value evaluation and have some understanding of the process. Performance is integral to working practice, managers actively support staff to learn, and the organization’s leaders promote and reward learning. Implied in this is that organizations should be open to sharing and willing to experiment and improve. But it is also important to note that learning occurs at different levels, i.e. collectively at the organizational level and/or individually at a personal level. It is imperative for managers and organizational leaders to avail avenues that facilitate the learning: sharing, doing, reflection and improvement. These could be formal, e.g. seminars, or informal, e.g. breakfast tables. In the absence of a learning culture, chances are high that evaluations will remain on the shelf.
Structure: Over time, due to the increasing demand for monitoring and evaluation, organizations have incorporated an M&E department or unit into their formal structures to support the evaluation function. It is important, though, that there is a good connection or linkage between the M&E department or unit and the other departments/units, e.g. communications and documentation, finance, advocacy and fundraising. This also requires that M&E staff are linked to key decision makers in the different departments for purposes of getting the decision makers to act, or push their teams to act, on evaluations. This is well put by Sandison (2005): “…..the evaluation unit is structurally linked to senior decision makers, adequately resourced, and competent. There are clear decision making structures, mechanisms and lines of authority in place. Vertical and horizontal links between managers, operational staff and policy makers enable dissemination and sharing learning. These are permanent opportunist mechanisms for facilitating organization wide involvement and learning”. Where organizational operations are highly decentralized with field offices, it remains important that the M&E staff be part of meetings with directors and senior management. The structural set-up of the organization can enable or completely disable the utilization of evaluations. M&E staff should be competent, and the M&E unit should be adequately resourced (financial, human and technological resources).
Systems: organizations have varying systems to support their operations. Among these, an organization should have an M&E system that also allows for sharing, learning and accountability. This means that dissemination and knowledge management should be deliberate and well planned for. Sandison (2005) argues that there should be systematic dissemination mechanisms and informal and formal knowledge-sharing networks and systems. Where dissemination of an evaluation happens because the evaluator is pre-conditioned to do so as a requirement for the completion of the evaluation, and not as a requirement of the organizational learning culture, then it will be no surprise that the evaluation will not be utilized.
Policies: A policy is a deliberate system of principles to guide decisions and achieve rational outcomes (Wikipedia). With the increased institutionalisation of M&E, some organisations and government departments have gone further to develop M&E policies to guide operations and practice. Such a policy also institutionalises evidence-based decision making, which indirectly demands that evaluations be utilised. Where such policies exist and there is goodwill from top management to implement the policy requirements, evaluations have very high chances of being utilised.
Budgets: Evaluations, or M&E more broadly, require a budget for implementation like other project interventions. This calls for budgeting for utilisation in the event that some of the recommendations cannot be integrated into current interventions. If such a provision is not there (as is sometimes the case), then the evaluation will be implemented selectively or not at all. Organisations often plan for the execution of the evaluation but not for the implementation of its recommendations.
External factors: external pressure on organizations or commissioners of evaluations may have an influence on the utilization of evaluations. Such pressure may come from donors, professional bodies/associations and project beneficiaries, and from the need to protect reputation and funding.
With the increasing professionalization of the evaluation practice, regional and national
associations have been formed and instituted standards for evaluations. The evaluation
standards can be seen to have an influence on the utilization of evaluations and
subsequently enhance (or not) the effectiveness of organizations. The African Evaluation Association stipulates four principles for evaluation: utility, feasibility, precision and quality, and respect and equity (AfrEA, 2006). The Uganda Evaluation Association considers five standards, namely utility, feasibility, quality and precision, ethical conduct, and capacity development (UEA, 2013). The American Joint Committee on Standards for Educational Evaluation (Sanders, 1994) considers four evaluation standards: utility, feasibility, propriety and accuracy. What appears to be a “constant” standard is the utility standard: it emphasizes that the evaluation should serve the information needs of the intended users. Implied in this is that the design of the evaluation should bear in mind the intended users and the intended use of the evaluation. The utility standards are intended to ensure that an evaluation will serve the information needs of the intended user (Sanders, 1994). Though this is the expectation, it does not guarantee the use of the results of the evaluation.
Project beneficiaries are a constituent force in the external environment of projects, with an influence over the success or failure of projects. Similarly, their role in the utilization (or non-use) of evaluations cannot be overlooked, though of course not as direct users. With the increasing shift from traditional evaluation approaches to more participatory approaches, beneficiaries are often involved in evaluations, but the results are rarely communicated to them. HAPI (2006) notes that humanitarian agencies are perceived to be good at accounting to official donors, fairly good at accounting to private donors and host governments, and very weak in accounting to beneficiaries. The increased demand for accountability among beneficiaries puts organizations to task to demonstrate accountability through evaluations, and also pins the organizations down to respond to the issues emerging from the evaluation. Where beneficiaries play a passive role in project and evaluation implementation, there are slim chances of them influencing the use of the evaluation results.
Organisations, big or small, national or local, young or old, are increasingly competing for financial resources from donor agencies. This implies submitting to donor requirements when applying for funds and religiously completing the application form, which also includes a section on project monitoring and evaluation. This creates a scenario in which organisations must demonstrate a solid approach to M&E in order to win the application. In this case, an evaluation may be designed and implemented for donor accountability and not to meet the needs of the organisation.
Organisations are also keen to protect their reputation and funding in the face of evaluations; they would not want to lose funding and face as a result of publicised criticism. The fear of repercussions from publicised criticism is real, and so organisations are determined to protect their reputation; the rush is not towards implementing recommendations but towards protecting the image of the organisation. But it is also true that in some cases funding decisions are not based on performance but on donor mandates and other factors. The National Audit Office (2006) affirms that among some donors, effectiveness is just one of a number of issues to consider when deciding whom to fund. In light of the fact that performance is just one factor among others, where an evaluation is perceived to pose no threat to funding streams, its results are of no consequence.
The general expectation is that when an evaluation is conducted, the results will be appealing and therefore compelling to apply. However, this is not always automatic. Even though different authors have different perceptions of use and the factors that influence use, no single factor can solely influence the utilisation of results.
For an evaluation whose utilisation will be high, planning for the evaluation, stakeholder participation and credibility of the evidence are paramount. From the case study, one will notice that the staff of the organisation participated in the evaluation as more than mere respondents. This builds credibility of the evidence generated and ownership of the results, since the analysis was done jointly, facilitated by an external person. This probably explains why the evaluation findings were so comprehensively utilised.
Whereas it is good to have well-structured organisations, with policies and protocols, care must be taken over how these could impact the learning and “doing” culture of the organisation. In the case study, the organisation is less structured, with no clearly designated M&E function. This could mean that there are no visible barriers to the learning and doing culture. Structure could, however, also imply barriers to the utilisation of evaluations: in a highly structured organisation, for instance, it is possible that the execution of ideas goes through levels, and where not all parties have an equal voice, the ideas of the stronger (respected) voice will carry the day.
The increasing role of donors, professional bodies and beneficiaries in how well evaluations are utilised cannot be overlooked. In the case study, the organisation is fairly young, with self-mounted pressure to demonstrate how well it can meet its objectives so as to win donor confidence. One could rightly say that the external and internal pressure compelling the organisation to grow bigger is a reason why the evaluation was utilised.
I am greatly indebted to Ms Kathrin Wyss, the Program Delegate, and Stefan Roesch, the Junior Program Officer, Caritas Switzerland-Uganda, for allowing me time within the busy work schedule and for their constant interest in supporting the preparation and review of this paper. Mr Ronald Rwankangi and the team at Advance Afrika, thank you for the constructive interaction that fed into this paper.
And finally, special thanks to Dr. Christ Kakuba (Makerere University, Kampala) and Mr.
Geoffrey Babughirana (World Vision Ireland) for the expertise, time and guidance provided
to make this paper worth reading.
1. Alkin, M.C and Taut, S.M: Unbundling evaluation use, 2003
2. Dasgupta, P.S.: Human Well Being and the Natural Environment, 2001
3. Fleischer, D. N., Christine, C. A: Results from a survey of U.S. American Evaluation
Members, June 2009.
4. Forss, K., Rebien, C. C., Carlsson, K.: Process Use of Evaluations. Types of use that
Precede Lessons Learned and Feedback, 2002.
5. Jackson, E. T. and Kassam, Y.: Knowledge shared: Participatory evaluation in
Development cooperation. Journal of Multidisciplinary Evaluation, 1998.
6. Johnson, K., Greenseid, L. O., Toal, S. A., King, J. A., Lawrenz, F., Volkov, B.: Research on evaluation use: a review of the empirical literature from 1986-2005, 2009.
7. National Audit Office: Report by the National Audit Office. Engaging with
Multilaterals, December, 2006.
8. Patton, M. Q: Utilisation-Focused evaluation, 1997
9. Patton, M. Q.: Utilization-Focused Evaluation in Africa, September 1999.
10. Preskill, H., Zuckerman, B., Mathews, B.: An exploratory study of process use: Findings and implications for future research. American Journal of Evaluation, 2003.
11. Sanders J. R: The program Evaluation Standards. Joint Committee on Standards for
Educational Evaluation, 1994
12. Sandison, P.: The Utilisation of Evaluations, 2005.
13. Shadish, W. R., Cook, T. D. and Leviton, L. C.: Foundations of program evaluation, 1991.
14. Shapiro, J.: Monitoring and Evaluation, 2010
15. Shulha, L. M., and Cousins, J. B.: Evaluation use: Theory, research and practice since 1986, 1997.
16. Weiss, C.H: The many meanings of research utilization, 1979