International Journal of Health Planning and Management
Int J Health Plann Mgmt 2010; 25: 74–90.
Published online in Wiley InterScience (www.interscience.wiley.com) DOI: 10.1002/hpm.1004
Designing a balanced scorecard for a tertiary
care hospital in Pakistan: a modified Delphi
group exercise
Fauziah Rabbani1,2*, Syed M. Wasim Jafri3†,‡, Farhat Abbas4§,¶, Mairaj Shah5‖,#, Syed Iqbal Azam1††, Babar Tasneem Shaikh1††, Mats Brommels6,7‡‡,§§,¶¶ and Goran Tomson2,7‖‖,##
1Department of Community Health Sciences, Aga Khan University, Karachi, Pakistan
2Department of Public Health Sciences, IHCAR Div International Health, Karolinska Institutet, Stockholm, Sweden
3Department of Medicine and Department of Continuing Professional Education, Aga Khan University, Karachi, Pakistan
4Department of Surgery, Aga Khan University, Karachi, Pakistan
5Aga Khan University Hospital, Karachi, Pakistan
6Department of Public Health, University of Helsinki, Finland
7Medical Management Centre at Karolinska Institutet, Stockholm, Sweden
SUMMARY
Balanced Scorecards (BSC) are being implemented in high income health settings linking
organizational strategies with performance data. At this private university hospital in Pakistan
an elaborate information system exists. This study aimed to make best use of available data for
better performance management. Applying the modified Delphi technique, an expert panel of clinicians and hospital managers reduced a long list of indicators to a manageable size.
Indicators from existing documents were evaluated for their importance, scientific soundness, appropriateness to the hospital's strategic plan, feasibility, and modifiability. Panel members individually rated each indicator on a scale of 1–9 for the above criteria. Median scores were
assigned. Of an initial set of 50 indicators, 20 were finally selected to be assigned to the four BSC quadrants. These were financial (n = 4), customer or patient (n = 4), internal business or quality of care (n = 7), and innovation/learning or employee perspectives (n = 5). A need for stringent definitions, international benchmarking, and standardized measurement methods was identified. BSC compels individual clinicians and managers to work jointly towards improving performance. This scorecard is now ready to be implemented by this hospital as a performance management tool for monitoring indicators, addressing measurement issues, and enabling comparisons with hospitals in other settings. Copyright © 2010 John Wiley & Sons, Ltd.

key words: balanced scorecard; performance management; indicators; modified Delphi; private hospital in Pakistan

*Correspondence to: Dr F. Rabbani, Professor, Dept of Community Health Sciences, PO Box 3500, Stadium Road, Karachi, Pakistan, and doctoral student, Dept of Public Health Sciences, IHCAR Div International Health, Nobels väg 9, Karolinska Institutet, SE 171 77 Stockholm, Sweden. E-mails: fauziah.rabbani@aku.edu; fauziahrabbani@yahoo.com
†Professor of Medicine. ‡Associate Dean. §Professor in the Section of Urology. ¶Chief Operating Officer (on site). ‖Manager Clinical Affairs. #CME. ††Assistant Professor. ‡‡Professor of Health Services Management. §§Guest Professor. ¶¶Director. ‖‖Professor. ##Director.

Copyright © 2010 John Wiley & Sons, Ltd.
INTRODUCTION
Hospital performance assessment is becoming increasingly important for different
stakeholders such as health care providers, decision makers, and purchasers of
health care. This is in response to growing demands to ensure transparency,
control, and reduce variations in clinical practice (Groene et al., 2008). With hospitals consuming more than half of the overall health care budget (McKee et al., 2002), recent hospital reforms highlight a quest for more efficient and effective hospital care. This can be achieved through generalizable, standardized, interpretable, and usable information for clinicians and health service managers (Willis et al., 2008).
Hospital management teams receive voluminous information from a wide variety of sources. Despite the widespread use of performance indicators, there is little research evidence on how to select the essential data for evidence-informed decision making (Ovretveit and Al Serouri, 2006). The worldwide health community, therefore, needs to focus on improving measurement of a small set of priority areas
(Murray, 2007). Formal consensus methods are a set of techniques that synthesize expert or stakeholder opinion to guide and prioritize group decisions in situations where information is lacking, contradictory, or overabundant (Campbell et al., 2002). Three main methods have been used in the health field: the Delphi, the Nominal Group Technique, and the Consensus Development Conference. Compared with other group processes, the Delphi technique gives all participants a greater opportunity to contribute ideas, minimizes domination of the process by more confident or outspoken individuals, makes results easy to interpret (ideas are generated, voted on or ranked, aggregated, and evaluated at the session itself), gives members a greater sense of accomplishment (results are available immediately after the session), and requires minimal resources while using time efficiently (Murphy et al., 1998).
Hospital functioning is a diverse and complex phenomenon to conceptualize. WHO strategic orientations encompass six interrelated dimensions: clinical effectiveness, safety, patient centeredness, responsive governance, staff orientation, and efficiency (Veillard et al., 2005). Though no performance management tool is ideal, this multidimensional view of hospital performance is captured in the balanced scorecard (BSC) through four perspectives of equal weight: (i) learning and growth (staff orientation and satisfaction),
(ii) internal processes (clinical outcomes and management of health services),
(iii) customer (patient) satisfaction, and (iv) financial efficiency/performance (Castaneda-Mendez et al., 1998). The BSC serves as a dashboard for meaningful decision making and quality improvement; it relates results to external references while promoting internal comparisons over time (Veillard et al., 2005).
The advantage that BSC has over other performance measurement tools is that it is
less of a diagnostic control system for highlighting abnormal activities and more of
an interactive system for providing signals to the organization about management
objectives, stimulating debate, improving quality, and achieving organizational learning (Gordon and Geiger, 1999). Through its use, various healthcare organizations in high income countries (HICs) improved their recruitment and retention processes, and employees gained a better understanding of organizational strategies, leading to overall performance improvements including reduced costs, better clinical outcomes, and increased staff and patient satisfaction (Curtright et al., 2000; Kaplan and Norton, 2000; Hospital Report, 2003; Mannion et al., 2005).
The implementation of management models is considered a step towards
maturity and a change discourse aiming for an efficient and modern organization
(Schalm, 2008). In the context of low income countries (LICs), however, we know
little about successful models to promote greater management effectiveness at the
hospital level (Hartwig et al., 2008). Evidence about BSC usage in LICs is
deficient mainly due to lack of committed leadership, cultural readiness, quality
information systems, viable strategic plans, and optimum resources (Rabbani et al.,
2007). Simple dissemination of written guidelines in LICs often proves ineffective (Rowe et al., 2005), and health managers face significant challenges in developing and managing appropriate systems (Green and Collins, 2003). Such faulty information systems result in a clear lack of knowledge about where to focus priorities, where improvement is needed, and whether ongoing initiatives are having a positive impact (Murray, 2007; Målqvist et al., 2008). Although a
partnership-mentoring model for enhancing management capacity in Ethiopian
hospitals has been tested (Hartwig et al., 2008), to our knowledge BSC specifically
has not been implemented in hospital settings in LICs. Recently BSC was applied
at a national (macro) level to demonstrate how provinces and the country are doing
in delivering the basic package of health services in Afghanistan (Peters et al.,
2007). This innovative adaptation of the BSC in Afghanistan at a macro level has
provided a useful tool to summarize the multidimensional nature of health services
and enabled managers to benchmark performance and identify strengths and
weaknesses in the Afghan context.
There is heavy emphasis on curative services, and hospitals consume 45% of the meager health budget in Pakistan (Abrejo et al., 2008). Despite this, the quality of health care in public hospitals is dismal, and 70% of health care is provided by private facilities (Ghaffar et al., 2000). We conducted this study at a private university hospital in Karachi, Pakistan to assess the feasibility of the modified Delphi group technique for reaching consensus on the indicators of an institutional level BSC, to identify the strengths and weaknesses of the data being generated, and to recommend ways to improve hospital performance measurement.
METHODS
This study was conducted at a large private university hospital in Karachi (largest and
most populous city of Pakistan) in 2006. The hospital offers quality care to
outpatients and inpatients of all socio-economic classes (Rafique et al., 2006). It operates 542 beds and offers a broad range of secondary and tertiary services to over 38 000 hospitalized patients and over 500 000 outpatients annually. Inpatients have an average length of stay of 3.9 days (AKUH Quality
Manual, 2007: internal document). There are currently 400 trainees (interns,
residents, and fellows) affiliated with the hospital. Clinical services offered by this
university hospital (with staffing details) are listed in Table 1. According to the hospital's strategic plan (Health Sciences Centre Committee, 2002; internal document), the vision is (i) to provide compassionate, accessible, good quality care that meets or exceeds expectations, (ii) to provide a work environment that fosters committed and motivated staff, and (iii) to enable leadership in research and education that improves national health.
This hospital has an extensive health information system in place. An internal situation analysis, however, identified the need for better integration of the information collected for evidence-informed decision making (Health Sciences Centre Committee, 2002: internal document). This report recommended that academicians and
administrators develop a road map together and foster a culture of team work,
shared vision, and institutional ownership. BSC serves as a road map for self-
assessment and continuous improvement towards excellence (Ruiz et al., 1999).
Table 1. University hospital clinical services*

Anesthesia (28)
Family medicine (12)
Medicine (54): Cardiology; Diabetes, Endocrinology, and Metabolism; Gastroenterology; General internal medicine; Hematology and Oncology; Neurology; Pulmonary and Critical care medicine
Obstetrics and Gynecology (15)
Pediatrics (23)
Pathology and Microbiology (33)
Psychiatry (7)
Radiology (21)
Surgery (52): Cardiothoracic surgery; Dental, oral and maxillofacial surgery; General surgery; Neurosurgery; Ophthalmology; Orthopedic surgery; Otolaryngology; Pediatric surgery; Urology
Ambulatory care services: Emergency medicine
Allied health services: Pharmacy; Physiotherapy; Nutrition
Diagnostic services: Cardiopulmonary; Clinical laboratories; Neurophysiology

*Number in parentheses is the total full-time faculty. Non-faculty employees are not listed.
Therefore, in 2006 a multidisciplinary team comprising hospital leadership (including the DG and CEO of the hospital, the Medical Director, and the Chief Operating Officer) agreed that the hospital needed to produce a BSC incorporating clinical and non-clinical metrics for better clinical outcomes and performance management. In 2008 a new Vice President (VP) for health services was appointed, with past experience as an Executive Director at Guy's and St Thomas' NHS Foundation Trust in London.
The newly appointed VP was responsible for corporate and clinical governance,
clinical operations and organization-wide performance measurement and manage-
ment. Under the leadership of VP, BSC was envisaged as an organizational
performance management pyramid (Figure 1) empowering all levels (from executive
to operational) with varying metrics and details. It would serve to link the hospital’s
strategic plan and individual department objectives. The frontline level was to look at
details with a large set of indicators tracked on a monthly/quarterly basis and
concerned with problem solving and improvements whereas the Board and executive
management would be more aligned towards long-term global trends, summary
reports generated biannually and concerned with overall strategy and governance.
Following an assessment for cultural readiness to implement the BSC (Rabbani
et al., 2008), a systematic development plan was used to design the BSC at this
hospital. The cultural assessment showed that the required prerequisites for BSC
implementation particularly conducive leadership, viable strategic plan, and a
functional management information system already existed at this hospital. The steps
used to design the BSC were in line with those outlined in earlier studies (Kaplan and
Figure 1. Proposed approach to develop a balanced scorecard
Norton, 1996; Wachtel et al., 1999; Oliveira, 2001; Peters et al., 2007). These are: (i) building the business case at the executive leadership level of the hospital (the Vice President hospital services, Chief Operating Officer, and Medical Director of the hospital were involved in this study); (ii) identifying strategies and tactical objectives; (iii) identifying performance measurements; (iv) identifying data sources; (v) building consensus around the indicators to create the BSC, based on background material obtained from internal documents; (vi) developing communication tools and targets for each measure based on benchmarking; and (vii) implementing, refining, evaluating, and reusing the BSC.
After working through initial steps this study focused on the creation of BSC for
use at the Medical Directorate level (Figure 1). A subsequent study (in progress) will
report on the adaptation and implementation of BSC at the frontline departmental
level. It is anticipated that results of the latter would be available in 2010 after which
the BSC would be cascaded upward to the CEO and Board level.
In order to select indicators for BSC, a multistage modified Delphi consensus
process developed by RAND (Marshall et al., 2006) was used. We used the modified
Delphi technique so that face-to-face panel discussions with experts in the field could
be conducted and face validity of indicators established. The face validity of the
indicators was defined as whether its meaning and relevance to the assessment under
consideration was self-evident and it superficially appeared to measure what it was
supposed to measure (McBurney, 2001). A panel of nine experts was selected based on
guidelines of the Delphi technique (Campbell et al., 2002). The group of experts was
identified from a variety of professional disciplines and the required range of
professional backgrounds. The panel represented hospital domains of marketing
(managers who conduct quarterly patient satisfaction surveys), clinical quality
assurance (clinicians, physician, and nurse managers who monitor quality care
indicators), human resource management (staff and managers who conduct annual
staff satisfaction surveys), and budget and planning (financial managers furnishing
financial reports). As recommended in other studies (Chung et al., 2008), it was ensured that the experts committed their time and involvement until the process was complete.
Ethical approval for the study was received from the Ethical Review Committee of
the Pakistani hospital where this study was implemented.
RESULTS
Short listing of indicators by the expert panel
Following an extensive review of existing internal documents (periodical quality
assurance, patient and employee satisfaction surveys, and financial reports), a
preliminary list of 50 indicators was formulated in line with the hospital's strategic plan.
No indicators were removed from consideration at this phase of the activity. The next
step was to prioritize key performance indicators based on the criteria of importance,
scientific soundness (credibility), appropriateness to hospital’s strategic plan,
feasibility (i.e., whether the measure was available easily as part of management
information system, could be collected accurately, reliably and at a reasonable cost)
and modifiability of the clinical outcome measures.
The panel used the modified Delphi technique (over a period of 6 months) during
face to face meetings to individually rate each indicator on a scale of 1–9 for the above
criteria. All criteria were given equal weightage. If an indicator was thought not to be amenable to action, it was dropped. Median scores and measures of disagreement for the whole panel and for individual ratings were discussed in subsequent meetings. Panel members were given an opportunity to change their ratings after the discussions.
Indicators receiving final scores of 7–9 were regarded as robust, 4–6 as equivocal, and
1–3 as weak (Figure 2). All indicators receiving scores of 7 or more (face validity) were
included in the final set. In addition, a small number of indicators which received
scores of 4–6 were retained if the panelists considered the indicators essential to
contribute to the overall balance and comprehensiveness of the final set.
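The selection rule described above (rate each indicator 1–9, take the panel median, and retain indicators scoring 7 or more) can be sketched in a few lines. This is a minimal illustration: the panel scores below are hypothetical examples, not the study's actual ratings.

```python
from statistics import median

# Hypothetical 1-9 ratings from a nine-member panel (illustrative only).
ratings = {
    "Net operating margin":       [9, 9, 8, 9, 8, 9, 8, 9, 9],
    "Length of stay (inpatient)": [7, 6, 8, 7, 7, 6, 7, 8, 7],
    "Staff absenteeism":          [3, 2, 4, 3, 2, 3, 3, 4, 2],
}

def classify(scores):
    """Classify an indicator by its median panel rating:
    7-9 robust, 4-6 equivocal, 1-3 weak."""
    m = median(scores)
    if m >= 7:
        return m, "robust"
    if m >= 4:
        return m, "equivocal"
    return m, "weak"

# Indicators with a median of 7 or more enter the final set.
final_set = {name for name, scores in ratings.items() if classify(scores)[0] >= 7}
```

Note that in the study an equivocal (4–6) indicator could still be retained by panel judgment, so a faithful implementation would treat the median threshold as a default rather than a hard rule.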
Twenty indicators (receiving a median score of 7 or more) were finally selected
(Table 2) and organized by the expert panel into the four BSC quadrants: financial,
customer (patient), internal business, and innovation/learning.
Indicators for innovation and learning quadrant of the BSC
The indicators for employee satisfaction (innovation/learning quadrant of BSC) were
selected from the annual faculty and staff surveys and included (i) satisfaction with
job; dimensions of training and skills, work load including double duties performed,
maximum use of staff abilities, decision-making authority, and motivation to strive
for excellence, (ii) collegial satisfaction (helping each other in times of need, respect
from the colleagues, discussion with colleagues to mutually resolve issues),
(iii) satisfaction with supervisor; dimensions of friendly working relationship, regular
feedback, satisfaction with appraisal system, recognition for doing a good job,
openness to suggestions, and good ideas, (iv) satisfaction with organization; annual
faculty and staff turnover, fair treatment without gender and religious discrimination,
opportunities for growth and improvement, proud to work, viewing organization as a
Figure 2. Short listing indicators for BSC using a modified Delphi process
Table 2. Shortlisted set of indicators for the BSC using the modified Delphi technique
Indicators
Financial perspective (FP) Median Mean Std. deviation
Average charges (inpatient) 8.00 8.22 0.44
Length of stay (inpatient) 7.00 6.89 0.78
Daily census (inpatient) 8.00 8.11 0.60
Net operating margin 9.00 8.67 0.50
Overall FP Median 8.00
Internal business perspective (IBP): clinical outcomes (efficiency and quality)
Laboratory report turnaround time 8.00 7.89 0.60
Radiology film rejection rate 8.00 8.33 0.50
Unplanned stay after day care procedure 7.00 6.78 1.20
Incidence of blood transfusion reaction 7.00 7.22 1.09
Nosocomial infection 8.00 8.00 0.71
Cross match to transfusion ratio 7.00 7.33 0.87
Needle stick injuries 8.00 7.89 1.17
Overall IBP Median 8.00
Human resource perspective (HRP)
Satisfaction with job 7.00 7.33 0.71
Satisfaction with colleagues 8.00 7.78 0.83
Satisfaction with on campus facilities 8.00 7.44 0.73
Satisfaction with organization 7.00 6.89 0.93
Satisfaction with supervisors 7.00 7.00 1.12
Overall HRP Median 7.00
Patient satisfaction perspective (PSP)
Satisfaction with physicians 7.00 7.11 0.78
Patient Complaints (inpatient) 8.00 7.56 0.88
Satisfaction with nursing services 7.00 7.33 0.87
Proportion of patients recommending this hospital to their families and friends 8.00 7.89 0.60
Overall PSP Median 7.50
long-term career choice, balance between work and personal life, and (v) satisfaction with various on-campus staff facilities (sports and gymnasium, utility shops, child day care center, cash withdrawal facilities, payment of utility bills, etc.).
Some of the indicators reviewed were not finally selected as they were considered to
be more specific to the Human Resources Department (HRD) and not directly
influencing clinical service provision at the hospital. These included (i) the performance
of HRD obtained through staff surveys (e.g., assistance provided to new employees,
managing employee discipline cases, promptness in responding to queries etc.), (ii)
indicators for employee safety and emergency preparedness obtained through Safety
and Security Department reports (e.g., fire emergency response time, number of injuries
per 100 full-time employees) and (iii) indicators of workforce management such as staff
absenteeism (doctors and nurses), number and type of employee illnesses.
Indicators for internal business quadrant of the BSC
Indicators for this quadrant were shortlisted from a larger list of quality care
indicators which the hospital (medical directorate) is monitoring through various
quality assurance teams for ongoing accreditation by the Joint Commission on
Accreditation of Healthcare Organizations USA (JCAHO). The selected indicators
included three indicators of efficiency: laboratory report turnaround time (number of samples reported within the acceptable time limit per total number of samples analyzed), radiology film rejection rate (number of rejected films out of the total number used, indicating film wastage, with implications for appropriate training of radiology staff in patient positioning and exposure techniques), and cross match to transfusion ratio (a proxy for the actual need to carry out blood transfusions). There were two indicators of quality of care and efficiency: unplanned stay after day care
procedure (number of unplanned overstay following day care surgery out of patients
undergoing day care surgery) and incidence of blood transfusion reactions (number
of blood transfusion associated reactions out of total units of blood transfused).
Needle stick injuries were selected as an indicator of staff compliance with safety techniques and quality procedures, and are also an indicator of employee safety. The rate of nosocomial infections (central line associated blood stream infections in intensive care units in relation to device days, following CDC guidelines of the National Nosocomial Infection Surveillance system) was selected as the indicator of infection control.
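The efficiency and quality measures just described are simple rates. The sketch below shows how each would be computed; the function names are ours and the counts in the usage assumptions are invented, purely to illustrate the numerators and denominators named in the text.

```python
# Illustrative formulas for the internal business quadrant indicators.

def lab_turnaround_pct(on_time, total):
    """Share of samples reported within the acceptable time limit."""
    return 100.0 * on_time / total

def film_rejection_rate(rejected, used):
    """Rejected radiology films out of total films used (wastage rate)."""
    return 100.0 * rejected / used

def crossmatch_to_transfusion(crossmatched, transfused):
    """C:T ratio - a proxy for the actual need for blood transfusions."""
    return crossmatched / transfused

def clabsi_per_1000_device_days(infections, device_days):
    """Central line associated bloodstream infections per 1000 device
    days (the CDC/NNIS-style denominator mentioned in the text)."""
    return 1000.0 * infections / device_days
```

A cross match to transfusion ratio close to 1 would suggest blood is requested only when genuinely needed, which is why the panel treated it as an efficiency proxy.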
The Health Management Information System provided information on mortality related to anesthesia (using American Society of Anesthesiologists guidelines); however, desirable information on adjusted case fatality rates for tracer conditions was not available. Similarly, peri-operative mortality (in elective procedures) and returns to the operating theatre during the same episode were considered as indicators of operative safety; however, due to residual confounding effects (age, severity of illness, etc.), these were not finally selected. Hospital acquired pressure ulcers, adverse event reporting, patient fall rates, and unplanned descents to the floor per 1000 patient days were initially selected as indicators of nursing quality, but the panelists were of the opinion that in-depth discussion was required with the Division of Nursing Services and that individual clinical units should consider these while developing their customized scorecards.
Indicators for customer (patient) satisfaction quadrant of the BSC
Quarterly patient satisfaction surveys by the hospital’s Marketing department and
quality reports from the Department of Clinical Affairs were used to select four indicators of patient satisfaction. The patient satisfaction survey captures, analyzes, and monitors patient satisfaction with outpatient, inpatient, diagnostic, and emergency services. For the purpose of this study only inpatient service indicators were discussed. These indicators are: (i) satisfaction with nursing services (dimensions: provision of
adequate information on health condition/medicines/ follow up care, courtesy,
listening, prompt response to call bell, respect for privacy, skilful insertion of cannulas/
IV lines, proper dressing, provision of special help when needed) (ii) satisfaction with
physicians (dimensions: daily visit of consultant, proper explanation and information
given and respect for privacy) (iii) recommending this service to family and friends (a
proxy indicator for a satisfied client), and (iv) percentage of patient complaints
(dimensions: care, delays, environment, attitude, availability of health staff,
communication, billing system, quality of food, cleanliness of washrooms, level of
noise, scheduled tests and investigation procedures on time etc.).
It is important to mention that the Department of Clinical Affairs and the
Hospital’s Risk Management Forum are already taking specific actions against those
patient complaints classified as ‘sensitive’ (resulting in potential defamation,
litigation, or compensation) and hence the latter was excluded from our list of
generic BSC indicators. Moreover, 'overall patient satisfaction' was a composite indicator that was shortlisted but not finally selected. Patients included in the sample are asked to respond to the question 'Were you overall satisfied with the quality of the service you received?' in terms of strongly agree, agree, neutral, disagree, and strongly disagree. Responses obtained on a Likert scale are converted to mean values and reported as a percentage; those who respond with strongly agree or agree are classified as overall satisfied. Sample size calculations for this indicator needed statistical refinement, and the panel could not rule out recall bias in these telephone interviews. Rates of medication error (including prescribing, administration, and dispensing) and rational use of antibiotics were considered as indicators of patient safety; however, since specific quality control committees were already monitoring these on a quarterly basis, they were dropped as key indicators for the institutional level BSC.
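The 'overall satisfied' classification described above amounts to a top-two-box calculation over the Likert responses. The sketch below illustrates it; the response list is invented purely for demonstration.

```python
# Responses counted as 'overall satisfied' (top two boxes of the scale).
SATISFIED = {"strongly agree", "agree"}

def pct_overall_satisfied(responses):
    """Percentage of respondents answering 'strongly agree' or 'agree'."""
    hits = sum(1 for r in responses if r in SATISFIED)
    return 100.0 * hits / len(responses)

# An invented sample of ten survey responses (not the hospital's data).
sample = ["strongly agree", "agree", "neutral", "disagree", "agree",
          "strongly agree", "agree", "strongly disagree", "agree", "agree"]
```

Note that this top-two-box share is distinct from the mean-score conversion the survey also reports; the panel's concerns about sample size and recall bias apply to both.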
Indicators for financial quadrant of the BSC
Four indicators were finally selected: (i) average charges per inpatient (total inpatient revenue divided by the total number of patients admitted over a period), a measure of patients' financial access to the hospital and of the cost-effectiveness of services, assessed by comparing the increase in average charges per inpatient with the average price increase and the inflation rate; (ii) inpatient length of stay (an indicator of efficiency); (iii) average daily census (average number of patients occupying a bed per day); and (iv) net operating margin (margin on gross revenues before interest and depreciation, an indicator of cost and productivity). These indicators are routinely generated by the Department of Budget and Planning and were shortlisted for the financial quadrant of the BSC.
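Under the definitions given above, the four financial indicators reduce to simple quotients. The sketch below uses our own function names, and the figures in the test assumptions are invented (loosely echoing the volumes reported for this hospital) purely for illustration.

```python
# Illustrative formulas for the financial quadrant indicators.

def average_charge_per_inpatient(total_inpatient_revenue, admissions):
    """Total inpatient revenue divided by total admissions in the period."""
    return total_inpatient_revenue / admissions

def average_length_of_stay(total_inpatient_days, discharges):
    """Average days each inpatient spends in hospital."""
    return total_inpatient_days / discharges

def average_daily_census(total_inpatient_days, days_in_period):
    """Average number of occupied beds per day over the period."""
    return total_inpatient_days / days_in_period

def net_operating_margin_pct(income_before_interest_and_depreciation,
                             gross_revenue):
    """Operating margin on gross revenues, before interest and
    depreciation, expressed as a percentage."""
    return 100.0 * income_before_interest_and_depreciation / gross_revenue
```

Tracking the change in average charge per inpatient against general price inflation, as the text describes, would then be a comparison of two period-on-period growth rates rather than a single quotient.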
FTE per adjusted occupied bed (an indicator of patient-staff ratios and efficiency, monitored for JCAHO), percentage of capital expenditure versus planned (the total capital budget consumed against the annual budget), and percentage of referrals from CHC, the low-cost outpatient clinic (another indicator of the financial accessibility of this hospital for all socioeconomic groups), were included in the initial list of 50 indicators but were later dropped due to the lack of available national and regional benchmarks and because some of this information was not considered relevant for general public disclosure.
DISCUSSION
To the best of our knowledge, this was the first time that experts (managers, academicians, and clinicians) were jointly involved in a scientific process (the modified Delphi group technique) to develop a BSC for a hospital in a LIC setting. Integrating the activities
of different departments is a difficult task for the management of the organization
(Axelsson and Axelsson, 2006). Designing the BSC was possible in our study
because most of the necessary prerequisites for successful BSC implementation in
LICs (committed leadership, viable strategic plans and information systems etc.)
were already in place and cultural readiness for BSC usage had previously been
assessed (Rabbani et al., 2007; Rabbani et al., 2008). The modified Delphi method
successfully incorporated views from health personnel and specialists in the
development of a BSC. The same has been reported from Canadian hospitals
(Robinson et al., 2003). Another study, of nine health provider organizations in the USA, emphasized the importance of extensive teaching, discussion, and consensus building to ensure successful BSC implementation (Inamdar et al., 2002).
Moreover, consensus techniques such as the modified Delphi have been utilized in Thailand to develop trauma care indicators (Suwaratchai et al., 2008). The recent use of formal consensus methods in Iran, Pakistan's neighboring Muslim country, to identify outcome-based indicators for rational drug prescribing and for effective academic leadership, thereby increasing the validity of the findings, is also encouraging (Bikmoradi et al., 2008; Esmaily et al., 2008). It is noteworthy that the
criteria (importance, scientific soundness, feasibility etc.) used by other studies (Idänpään-Heikkilä, 2006; Marshall et al., 2006; McLoughlin et al., 2006) to shortlist indicators also worked well in our setting.
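The rating-and-shortlisting step at the heart of such a modified Delphi round can be sketched in a few lines of code. This is a minimal illustration only; the indicator names, panel ratings, and cutoff below are hypothetical, not the study's actual data:

```python
from statistics import median

# Hypothetical panel ratings (on a 1-9 scale) for two candidate
# indicators; names and scores are illustrative assumptions.
ratings = {
    "average length of stay": [8, 7, 9, 8, 7, 8],
    "parking availability":   [4, 5, 3, 6, 4, 5],
}

CUTOFF = 7  # e.g., retain indicators whose median rating is 7 or above

# Shortlist indicators by the median of the panel's ratings.
shortlist = [name for name, scores in ratings.items()
             if median(scores) >= CUTOFF]
print(shortlist)  # -> ['average length of stay']
```

Using the median rather than the mean keeps a single outlying rater from pulling an indicator over or under the cutoff, which is why consensus methods typically report median ratings.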
Interestingly, the composition of the multidisciplinary panel in our study was quite similar to that in other studies on the development of a BSC for hospitals in HICs. In an Australian study, experts in the Delphi panel included both hospital managers and clinical practitioners (Xiao et al., 1997). In Taiwan (Huang et al., 2004) the team that developed the BSC included the president, vice president and all department directors of the study hospital. In another recent study, on the implementation of a BSC in a community hospital in the USA, the panel of experts included, among others, directors from patient care services and quality management (Lorden et al., 2008). It is envisaged that this involvement of staff at different levels within an organization during the development of a BSC will enhance acceptance of the scorecard when it is implemented.
The Balanced Scorecard provided an opportunity to capture indicators in four
aspects of hospital performance. This demonstrates that data which are routinely
collected by the hospital can be used to develop an integrated core of
multidisciplinary indicators. This institutional level BSC has been designed for
use at the level of the medical directorate. Subsequent studies can contextualize and customize the BSC for each implementing frontline clinical unit so that specialty-specific BSCs can be developed. Other studies have also used existing documents in a similar fashion to create effective BSCs (Wachtel et al., 1999; Idänpään-Heikkilä, 2006; Marshall et al., 2006; McLoughlin et al., 2006). In a recently concluded study
BSC was used to track certain nursing indicators in acute care Ontario hospitals using
secondary data (Hall et al., 2008).
There was a relatively high level of agreement about the usefulness of the 20 indicators finally selected in our study. These indicators were distributed across all four BSC quadrants: financial perspective (n = 4), internal business (n = 7), human resource perspective (n = 5) and patient satisfaction perspective (n = 4). The
indicators are similar to the ones shortlisted in both high (Baker and Pink, 1995) and
low income (Hansen et al., 2008) health settings. In the former study revenue
generated, patient volumes, patient and employee satisfaction, nosocomial
infections, average length of stay, and routine laboratory test turnaround time were
included among the 23 indicators shortlisted for the BSC developed for Canadian
hospitals. In the latter study in Afghanistan domains of patient and community, staff,
capacity for service, service provision, financial system, and overall vision were used
to monitor 29 indicators at provincial level. The latter are quite similar to the ones
selected in our setting. In another recent Canadian study following initial steps of
building executive commitment and strategic alignment, 23 indicators were
shortlisted which could be compared across various care centers using the same BSC
dimensions (Schalm, 2008).
The results from this investigation also reflect limitations of routinely collected
data. The Delphi process highlighted that certain indicators selected for BSC
(Table 3) had relatively lower face validity as assessed by their median ratings. This
was mainly due to lack of standardized definitions and measurement techniques,
reliable instruments, adequate sample sizes and response rates etc. Other studies
(Baker and Pink, 1995; Zeitlin et al., 2003) have also noted that methodological
shortcomings of many indicators have generated skepticism about the data sources,
consistency of reporting, derivation of the numbers, and their usefulness in offering
analogous estimates. It is to be noted that Pakistan does not have a national hospital database. Comparable national/regional targets, benchmarking, and a balance between process and outcome indicators were recommended. It was also noted that for the subsequent design of BSCs for frontline clinical departments, disaggregation of information by clinical specialty would be needed.
It is possible that despite efforts to capture all relevant indicators through publicly
available surveys and documents, certain valuable indicators may have been
overlooked. Some of the shortlisted indicators in the western studies included
allocative efficiency, vertical equity, survival rates and age, sex and disease specific
mortality, and morbidity ratios (Wachtel et al., 1999; Robinson et al., 2003; ten
Asbroek et al., 2004; Schalm, 2008). The BSC developed in our setting did not have
some of the more analytical indicators listed above. Such lack of in-depth outcome
data has been listed as a BSC implementation barrier elsewhere (Schalm, 2008). It is
important to mention that to date at this hospital only diagnostic services (laboratory,
radiology) and pharmacy (data on medications prescribed and dispensed) are fully
computerized. Although each patient visiting this hospital has a unique medical record number and information on patient characteristics (age, gender, diagnosis, length of stay, and clinical intervention performed) is computerized, presenting complaints, co-morbid conditions, and discharge summaries are still available only in paper files. Similar issues with patient data records have been
reported from Ethiopian hospitals (Hartwig et al., 2008). Moreover, it has been shown in Iran that hospitals collect a great deal of financial and clinical information in a fairly computerized but not well-organized format (Ghaffari et al., 2008). A need for an electronic patient record system (an e-health initiative) was therefore emphasized, to overcome some of these methodological barriers and produce more robust indicators in our setting.
Table 3. Measurement issues highlighted during the Delphi process

Financial perspective
Length of stay (inpatient). Source: monthly reports of the Department of Budget and Planning. Issues: cannot stratify by severity of illness, number of complications and co-morbidities; can be influenced by factors beyond the hospital environment (e.g., absence of nursing homes and home care facilities post discharge); can also increase because of management delays (scheduling of investigation procedures, timing of consultant ward rounds, weekend admissions etc.).

Internal business (efficiency and quality)
Unplanned stay after day care procedure. Source: quarterly quality improvement reports. Issues: cannot differentiate between surgical complications and management delays (e.g., delay in availability of the operation theatre and low staffing levels); cannot identify which procedure causes maximum delay.
Incidence of blood transfusion reaction. Source: quarterly quality improvement reports. Issues: a target based on international comparison needs to be set.
Cross match to transfusion ratio. Source: quarterly quality improvement reports. Issues: only the trend is currently being monitored.

Human resource perspective
Satisfaction with job. Source: annual employee satisfaction survey. Issues: low response rate to the survey; not disaggregated by department or designation; only quantitative information collected on a sliding scale.
Satisfaction with organization. Source: annual employee satisfaction survey. Issues: data collection instrument not validated; questions likely to evoke a positive response.
Satisfaction with supervisors. Source: annual employee satisfaction survey. Issues: no international/regional comparisons or targets.

Patient satisfaction perspective
Satisfaction with physicians; satisfaction with nursing services. Source: quarterly patient satisfaction surveys. Issues: do not convey underlying information about satisfaction with physician and nursing services by each clinical department, and therefore delay action.

Measurement issues related to 9 of the 20 BSC indicators are highlighted. These indicators received a relatively lower rating (median = 7).
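One of the Table 3 indicators, the cross-match to transfusion (C/T) ratio, is a simple quotient whose trend can be tracked directly from routine blood-bank counts. A minimal sketch follows; the quarterly figures are invented for illustration, and the 2.0 figure in the comment is a commonly cited blood-ordering benchmark rather than a target set by this hospital:

```python
# Illustrative trend computation for the cross-match to transfusion (C/T)
# ratio; the quarterly figures below are invented, not hospital data.
quarters = {
    "Q1": {"crossmatched": 480, "transfused": 240},
    "Q2": {"crossmatched": 450, "transfused": 250},
}

# C/T ratio per quarter: units cross-matched divided by units transfused.
ct_trend = {q: d["crossmatched"] / d["transfused"] for q, d in quarters.items()}

for q, ratio in ct_trend.items():
    # A C/T ratio of about 2.0 or below is commonly cited as a sign of
    # efficient blood ordering.
    print(f"{q}: C/T = {ratio:.2f}")
```

Monitoring only the trend, as the hospital currently does, sidesteps the benchmarking gap noted in the table, but setting an explicit target would make the indicator actionable.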
Despite these practical limitations, the Delphi group process led to a pragmatic
interpretation of existing data resulting in the design of a scorecard with
comprehensive indicators in multiple dimensions. It has been reported that for
hospital managers and those developing health policies, studies such as this provide
insight into the factors influencing hospital performance (Xiao et al., 1997). The 20
indicators which emerged from our study using the modified Delphi process have
highlighted the methodological challenges faced during the design of BSC. As a next
step this scorecard will now be customized for individual clinical departments of this
hospital and later implemented at the executive and board level. The BSC would
facilitate rational organization and management of data collection systems and serve
as an evaluation framework for monitoring improvement of clinical outcomes and
quality. Greater cohesion among hospital units is also expected. Lessons learnt will have an important bearing on hospital performance measurement initiatives in other settings.
ACKNOWLEDGEMENTS
The authors thank the senior Aga Khan University (AKU) and hospital (AKUH)
leadership—President Firoz Rasul, Vice President Health Services Dallas Ariotti,
Dean of the Medical college AKU, Mohammad Khurshid, Director General and
CEO AKUH Nadeem Khan, Dean of Research and Graduate Studies AKU, El-Nasir
Lalani, Director Human Resources AKU, Navroz Surani, and Chair, Dept of
Community Health Sciences (CHS) Dr Gregory Pappas—for encouraging us to
proceed with the work related to Balanced Scorecard (BSC) at AKU. This study is a
component of BSC studies underway. We thank Dr Naushaba Mobeen, Senior
Instructor Community Health Sciences and Dr Wasif Shahzad Manager Dept of
Medicine for assisting in initial meetings. We also express our gratitude to Mr Zafar
Tahir (CHS), Aslam Fareed and Muhammad Feisal (Marketing Department AKUH),
Ms Salma Jaffer (Manager JCIA Coordination AKUH), Ms Shamim Nayani (Senior Manager Employee Relations), Rehman Hirani, and Khurram Jamal (Dept of Budget
and Planning AKU). Ms Saira Nigar (CHS) assisted us in data analysis and
Ms Shafaq Ambreen, administrative officer (CHS) rendered untiring secretarial assist-
ance. We thank Bo Badr Saleem Lindblad, Professor Emeritus of International Child
Health, Department of Public Health Sciences, Division of International Health
(IHCAR), Karolinska Institutet Medical University, Stockholm, Sweden, and visit-
ing professor, AKU, Karachi, Pakistan, for his overall support. Thanks also go to
Mr Thomas Mellin at IHCAR, Department of Public Health Sciences, Karolinska
Institutet (KI), Sweden for connecting the first author to various information
technology resources during her visits to KI. The authors acknowledge our various
grant sources: Swedish South Asian Network (SASNET: grant ID; EPG05S:06),
WHO EMRO (project ID #: RPC 04/60) and Swedish Institute (Si: Id # 05655/2005).
The major support for this study came from AKU University Research Council
(URC, project ID 052013 CHS).
REFERENCES
Abrejo FG, Shaikh BT, Saleem S. 2008. ICPD to MDGs: missing links and common grounds.
Reprod Health 5(4): DOI: 10.1186/1742-4755-5-4
Axelsson R, Axelsson SB. 2006. Integration and collaboration in public health—a conceptual
framework. Int J Health Plann Manage 19: 383–398. DOI: 10.1002/hpm.826
Baker GR, Pink GH. 1995. A balanced scorecard for Canadian hospitals. Healthc Manage
Forum 8: 7–21.
Bikmoradi A, Brommels M, Shoghli A, Sohrabi Z, Masiello I. 2008. Requirements for
effective academic leadership in Iran: a nominal group technique exercise. BMC Med Educ
8: 24.
Campbell SM, Braspenning J, Hutchinson A, Marshall M. 2002. Research methods used in
developing and applying quality indicators in primary care. Qual Saf Health Care 11: 358–
364. DOI:10.1136/qhc.11.4.358
Castaneda-Mendez K, Mangan K, Lavery AM. 1998. The role and application of the balanced
scorecard in healthcare quality management. J Healthc Qual 20: 10–13.
Chung KP, Lai MS, Cheng SH, et al. 2008. Organization-based performance measures of
cancer care quality: core measure development for breast cancer in Taiwan. Eur J Cancer
Care (Engl) 17: 5–18. DOI: 10.1111/j.1365-2354.2007.00796.x
Curtright JW, Stolp-Smith SC, Edell ES. 2000. Strategic performance management: devel-
opment of a performance measurement system at mayo clinic. J Healthc Manag 45: 58–68.
Esmaily HM, Savage C, Vahidi R, Amini A, Zarrintan MH, Wahlstrom R. 2008. Identifying
outcome-based indicators and developing a curriculum for a continuing medical education
programme on rational prescribing using a modified Delphi process. BMC Med Educ 8(33):
DOI: 10.1186/1472-6920-8-33
Ghaffar A, Kazi BM, Salman M. 2000. Health care systems in transition III. Pakistan, Part I. an
overview of the health care system in Pakistan. J Public Health Med 22: 38.
Ghaffari S, Doran C, Wilson A, Aisbett C, Jackson T. 2008. Investigating DRG cost weights
for hospitals in low resource countries: an Iranian example. Int J Health Plann Manage
1–14. DOI: 10.1002/hpm.948
Gordon D, Geiger G. 1999. Strategic management of an electronic patient record project using
the balanced scorecard. J Healthc Inf Manag 13: 113–123.
Green A, Collins C. 2003. Health systems in developing countries: public sector managers and
the management of contradictions and change. Int J Health Plann Manage 18: S67–S78.
DOI: 10.1002/hpm.721
Groene O, Skau JKH, Frolich A. 2008. An international review of projects on hospital
performance assessment. Int J Qual Health Care 20(162): DOI: 10.1093/intqhc/mzn008
Hall LM, Peterson J, Baker GR, et al. 2008. Nurse staffing and system integration and change
indicators in acute care hospitals: evidence from a balanced scorecard. J Nurs Care Qual 23:
242–252. DOI: 10.1097/01.NCQ.0000310655.60187.92
Hansen PM, Peters DH, Niayesh H, Singh LP, Dwivedi V, Burnham G. 2008. Measuring and
managing progress in the establishment of basic health services: the Afghanistan health
sector balanced scorecard. Int J Health Plann Manage 23: 107–117. DOI: 10.1002/hpm.931
Hartwig K, Pashman J, Cherlin E, et al. 2008. Hospital management in the context of health
sector reform: a planning model in Ethiopia. Int J Health Plann Manage 23: 203–218. DOI:
10.1002/hpm.915
Hospital Report. 2003. Acute Care. A joint initiative of the University of Toronto, Ontario
Hospital. Association and the Government of Ontario. Canadian Institute for Health
Information Ontario.
Huang SH, Chen PL, Yang MC, Chang WY, Lee HJ. 2004. Using a balanced scorecard to
improve the performance of an emergency department. Nurs Econ 22: 140–146, 107.
Idänpään-Heikkilä U. 2006. Selecting indicators for the quality of cardiac care at the health
system level in organization for economic co-operation and development countries. Int J
Qual Health Care 18: 39–344. DOI: 10.1093/intqhc/mzl028
Inamdar N, Kaplan RS, Bower M. 2002. Applying the balanced scorecard in healthcare
provider organizations. J Healthc Manag 47: 179–196.
Kaplan RS, Norton DP. 1996. The balanced scorecard: translating strategy into action.
Harvard Business School Press: Boston, MA.
Kaplan RS, Norton DP. 2000. The Strategy-Focused Organization: How Balanced Scorecard
Companies Thrive in the New Business Environment. Harvard Business School Press:
Boston MA.
Lorden A, Coustasse A, Singh KP. 2008. The balanced scorecard framework-A case study of
patient and employee satisfaction: what happens when it does not work as planned? Health
Care Manage Rev 33(145): DOI: 10.1097/01.HMR.0000304503.27803
Målqvist M, Eriksson L, Nga NT, et al. 2008. Unreported births and deaths, a severe obstacle
for improved neonatal survival in low-income countries; a population based study. BMC Int
Health Hum Rights 8: 1–7. DOI: 10.1186/1472-698X-8-4
Mannion R, Davies H, Marshall M. 2005. Impact of star performance ratings in English acute
hospital trusts. J Health Serv Res Policy 10: 18–24.
Marshall M, Klazinga N, Leatherman S, et al. 2006. OECD health care quality indicator
project. The expert panel on primary care prevention and health promotion. Int J Qual
Health Care 18: 21–25. DOI: 10.1093/intqhc/mzl021
McBurney D. 2001. Research methods. Wadsworth Thomson Learning: Belmont, CA,
Stamford, USA.
McKee M, Healy J, Edwards N, Harrison A. 2002. Pressures for change. Hospitals in changing
Europe. World Health Organisation (WHO): European observatory on health care systems
series 36–58.
McLoughlin V, Millar J, Mattke S, et al. 2006. Selecting indicators for patient safety at the
health system level in OECD countries. Int J Qual Health Care 18: 14–20. DOI: 10.1093/
intqhc/mzl030
Murphy MK, Black NA, Lamping DL, et al. 1998. Consensus development methods, and their
use in clinical guideline development. Health Technol Assess 2: 1–88.
Murray CJL. 2007. Towards good practice for health statistics: lessons from the millennium
development goal health indicators. The Lancet 369: 862–873. DOI: 10.1016/S0140-
6736(07)60415-2
Oliveira J. 2001. The balanced scorecard: an integrative approach to performance evaluation.
Healthc Financ Manage 55: 42–46.
Ovretveit J, Al Serouri A. 2006. Hospital quality management system in a low income Arabic
country: an evaluation. Int J Health Care Qual Assur 19: 516–532. DOI: 10.1108/
09526860610686999
Peters DH, Noor AA, Singh LP, Kakar FK, Hansen PM, Burnham G. 2007. A balanced
scorecard for health services in Afghanistan. Bull World Health Organ 85: 146–151.
Rabbani F, Jafri W, Abbas A, et al. 2009. Culture and quality care perceptions in a Pakistani
hospital. Int J Health Care Qual Assur 22: 498–513. DOI: 10.1108/09526860910975607
Rabbani F, Jafri W, Abbas F, Pappas G, Brommels M, Tomson G. 2007. Reviewing the
application of the balanced scorecard with implications for low-income health settings.
J Healthc Qual 29: 21–34.
Rafique G, Azam SI, White F. 2006. Diabetes knowledge, beliefs and practices among people
with diabetes attending a university hospital in Karachi, Pakistan. East Mediterr Health J
12: 590–598.
Robinson VA, Hunter D, Shortt SE. 2003. Accountability in public health units: using a
modified nominal group technique to develop a balanced scorecard for performance
measurement. Can J Public Health 94: 391–396.
Rowe AK, de Savigny D, Lanata CF, Victora CG. 2005. How can we achieve and maintain
high-quality performance of health workers in low-resource settings? The Lancet 366:
1026–1035. DOI: 10.1016/S0140-6736(05)67028-6
Ruiz U, Simón J, Molina P, Jiménez J, Grandal J. 1999. A two-level integrated approach to
self-assessment in healthcare organisations. Int J Health Care Qual Assur 12: 135–142.
Schalm C. 2008. Implementing a balanced scorecard as a strategic management tool in a long-
term care organization. J Health Serv Res Policy 13: 8–14. DOI: 10.1258/jhsrp.2007.007013
Suwaratchai P, Sithisarankul P, Sriratanban J, Chenvidhyamd D, Phonburee W. 2008. Utilize
the modified Delphi technique to develop trauma care indicators. J Med Assoc Thai 91: 99–
103.
ten Asbroek AHA, Arah OA, Geelhoed J, Custers T, Delnoij DM, Klazinga NS. 2004.
Developing a national performance indicator framework for the Dutch health system. Int J
Qual Health Care 16: i65–i71. DOI: 10.1093/intqhc/mzh020
Veillard J, Champagne F, Klazinga N, Kazandjian V, Arah OA, Guisset AL. 2005.
A performance assessment framework for hospitals: the WHO regional office for Europe
PATH project. Int J Qual Health Care 17(487): DOI: 10.1093/intqhc/mzi072
Wachtel TL, Hartford CE, Hughes JA. 1999. Building a balanced scorecard for a burn center.
Burns 25: 431–437. DOI: 10.1016/S0305-4179(99)00024-8
Willis CD, Stoelwinder JU, Cameron PA. 2008. Interpreting process indicators in trauma care:
construct validity versus confounding by indication. Int J Qual Health Care 20: 1–8. DOI:
10.1093/intqhc/mzn027
Xiao J, Douglas D, Lee AH, Vemuri SR. 1997. A Delphi evaluation of the factors influencing
length of stay in Australian hospitals. Int J Health Plann Manage 12: 207–218. DOI:
10.1002/(SICI)1099-1751(199707/09)12:3
Zeitlin J, Wildman K, Bréart G, et al. 2003. Selecting an indicator set for monitoring and
evaluating perinatal health in Europe: criteria, methods and results from the PERISTAT
project. Eur J Obstet Gynecol Reprod Biol 111: 5–14. DOI: 10.1016/j.ejogrb.2003.09.002