2. EVOLUTION OF PEM
Pre-marketing clinical trials are effective in studying the efficacy of medicines, but they have limitations in establishing the clinically necessary safety of drugs:
• Small numbers of patients.
• The study products may be received for short durations (sometimes only a single dose), which may not be enough to detect rare ADRs.
• Pre-marketing development programmes are dynamic.
• Special populations are excluded.
3. • The contribution of the spontaneous reporting system in detecting hazards such as the oculomucocutaneous syndrome with practolol led Inman to establish the system of Prescription-Event Monitoring (PEM) at the Drug Safety Research Unit (DSRU) in Southampton in 1981.
• In New Zealand, the Medicines Adverse Reactions Committee (MARC) is responsible for conducting such studies for academic purposes, and the programme is known as the Intensive Medicines Monitoring Programme (IMMP).
4. WHAT IS PEM?
• A non-interventional observational cohort technique in which health professionals submit data on all clinical events reported by a patient after the prescribing of a new drug.
• It is a method of studying the safety of new medications as they are used by general practitioners.
• In PEM, the exposure data are national in scope throughout the collection period and are unaffected by the kinds of selection and exclusion criteria that characterise clinical trial data.
6. Patients prescribed monitored drugs, which include virtually all New Chemical Entities (NCEs), are studied. The criteria for a study drug are:
• NCE
• New pharmacological principle
• Predicted widespread use
• Suspected problems
• Identified but unquantified risks
7. • Information on the first 5,000-18,000 prescriptions for that drug is then obtained.
• Prescribers are contacted with a questionnaire to determine subsequent events or clinical outcomes.
• Experiences with the drug can then be examined and the incidence of various events estimated.
• Comparisons are made between the periods before and after drug use.
e.g.: The occurrence of jaundice with erythromycin estolate was identified by this method of study.
8. In one such study conducted by MARC, of a cohort of 3,926 patients taking perhexiline and 2,837 taking labetalol, 25% of all patients discontinued the drug under study. ADRs were the reason for stopping in 20% and 43% of cases for each drug, respectively.
• PEM provides clinically useful information: from these data, incidence densities are calculated for all events reported during treatment with the monitored drug.
9. • Incidence density:
– IDt = (No. of events during treatment for period ‘t’ / No. of patient-months of treatment for period ‘t’) × 1000
Numerator = No. of reports of each event
Denominator = No. of patients exposed to the drug
Definite time frame = the period of treatment for each patient
• These incidence densities (incidence rates) are ranked in order of frequency.
• The ranked lists indicate both the nature and the relative frequency of the events reported when these drugs are used in general practice.
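As a sketch, the incidence-density calculation above can be expressed in a few lines of Python. The event counts and patient-months below are hypothetical illustrations, not figures from any actual PEM study:

```python
def incidence_density(event_count, patient_months):
    """Events per 1000 patient-months of treatment for period t."""
    return event_count / patient_months * 1000

# Hypothetical events reported during month 1 of treatment
events_month_1 = {"nausea": 120, "headache": 85, "dizziness": 40}
patient_months = 9500  # hypothetical exposure for period t = month 1

ids = {event: incidence_density(n, patient_months)
       for event, n in events_month_1.items()}

# Rank events by incidence density, as PEM does
for event, rate in sorted(ids.items(), key=lambda kv: -kv[1]):
    print(f"{event}: {rate:.1f} per 1000 patient-months")
```

Ranking the resulting rates gives the kind of frequency-ordered event list the slide describes.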
10. • For example, a study was carried out to assess the sedative properties of four antihistamines on the market: loratadine, cetirizine, fexofenadine and acrivastine.
11. • Results: The odds ratios (adjusted for age and sex) for the incidence of sedation, compared with loratadine, were 0.63 (95% confidence interval 0.36 to 1.11; P = 0.1) for fexofenadine, 2.79 (1.69 to 4.58; P < 0.0001) for acrivastine, and 3.53 (2.07 to 5.42; P < 0.0001) for cetirizine. No increased risk of accident or injury was evident with any of the four drugs.
• Conclusions: Although the risk of sedation was low with all four drugs, fexofenadine and loratadine may be more appropriate for people working in safety-critical jobs.
• This study not only demonstrated and compared the sedative effects of the antihistamines; it also gave an idea of the incidence of other ADRs associated with the four drugs.
• In the UK, PEM studies with documented response rates have been carried out for over 60 drugs.
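For readers unfamiliar with how odds ratios like those quoted are derived, the sketch below computes an odds ratio and its 95% confidence interval from a 2x2 table. The counts are hypothetical and do not reproduce the study's actual data:

```python
import math

def odds_ratio_ci(a, b, c, d):
    """OR and 95% CI from a 2x2 table.
    a, b: exposed group with / without the event;
    c, d: comparator group with / without the event."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)  # SE of log(OR), Woolf's method
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return or_, lo, hi

# Hypothetical counts: 12/1000 sedated on drug A, 20/1000 on comparator
or_, lo, hi = odds_ratio_ci(12, 988, 20, 980)
print(f"OR = {or_:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```

Note the published figures were additionally adjusted for age and sex, which requires a regression model rather than this crude 2x2 calculation.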
12. ADVANTAGES
• Calculation of incidence density
• Carried out on a national scale
• Comparison of ‘reasons for withdrawal’ and incidence
density
• Outcome of exposed pregnancies
• Signal generation and exploration
• Delayed reactions can be detected
• Disease investigation
13. DISADVANTAGES
• No method of measuring compliance
• No method of determining non-prescription medication use
• Non-return of green forms
• Does not extend to hospital monitoring
• Data collection is an operational difficulty
15. HISTORY
• The term record linkage was first used by the chief of
the U.S. National Office of Vital Statistics, Dr. Halbert
L. Dunn in a talk given in Canada in 1946.
• Dr. Dunn advocated the use of a unique number (e.g.
birth registration number).
16. • Historically, record linkage was performed by clerks who would search and review lists to bring together the appropriate pairs of records for comparison, seek additional information when there were questionable matches, and finally make decisions regarding the linkages based on established rules.
17. HISTORY
• Formal development of a theory of record linkage
started with the pioneering work of Fellegi and Sunter
(1969).
• Several people have worked on extending or
modifying their procedure (Jaro 1989; Winkler 1994).
19. What is Record Linkage?
• Record linkage is the process of bringing together two or more records relating to the same individual (person), family, or entity (e.g. event, object, geography, business, etc.).
• It finds syntactically distinct data entries that refer to the same entity in two or more input files.
• It is part of the data cleaning process, a crucial first step in the knowledge discovery process.
22. DETERMINISTIC RECORD LINKAGE
• A pair of records is said to be a link if the two records agree exactly on each element within a collection of identifiers called the match key.
• ALL or NONE
• For example, when comparing two records on last name, street name, year of birth, and street number, the pair of records is deemed to be a link only if the names agree on all characters, the years of birth are the same, and the street numbers are identical.
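The all-or-none rule above can be sketched directly in code. The field names forming the match key are the ones given in the example; the record values are invented for illustration:

```python
# Match key from the slide's example: all four fields must agree exactly.
MATCH_KEY = ("last_name", "street_name", "birth_year", "street_number")

def is_link(rec_a, rec_b):
    """Deterministic linkage: link only if every match-key field agrees."""
    return all(rec_a[f] == rec_b[f] for f in MATCH_KEY)

a = {"last_name": "Smith", "street_name": "High St",
     "birth_year": 1970, "street_number": 12}
b = {"last_name": "Smith", "street_name": "High St",
     "birth_year": 1970, "street_number": 12}
c = dict(b, last_name="Smithe")  # one character off -> not a link

print(is_link(a, b))  # exact agreement on all fields
print(is_link(a, c))  # a single typo breaks the link entirely
```

The `Smithe` case shows the weakness of the deterministic approach: any single transcription error rejects an otherwise obvious match.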
23. PROBABILISTIC RECORD LINKAGE
• Formalized by Fellegi and Sunter (1969).
• Pairs of records are classified as links, possible links, or non-links.
• Here, we consider the probability of a match given the observed data.
• In probabilistic matching, a likelihood threshold is set (which can be varied in different circumstances) above which a pair of records is accepted as a match, relating to the same person, and below which the match is rejected.
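A minimal sketch of Fellegi-Sunter-style scoring follows. The m- and u-probabilities, the field names, and the thresholds are all illustrative assumptions, not values from any real system:

```python
import math

# m: P(field agrees | records truly match); u: P(field agrees | non-match)
WEIGHTS = {
    "last_name":  (0.95, 0.01),
    "birth_year": (0.98, 0.05),
    "postcode":   (0.90, 0.02),
}

def match_weight(rec_a, rec_b):
    """Sum of log2 likelihood ratios over the compared fields."""
    total = 0.0
    for field, (m, u) in WEIGHTS.items():
        if rec_a[field] == rec_b[field]:
            total += math.log2(m / u)              # agreement weight
        else:
            total += math.log2((1 - m) / (1 - u))  # disagreement weight
    return total

UPPER, LOWER = 10.0, 0.0  # thresholds; varied in different circumstances

def classify(rec_a, rec_b):
    """Classify a pair as link, possible link, or non-link."""
    w = match_weight(rec_a, rec_b)
    if w >= UPPER:
        return "link"
    if w <= LOWER:
        return "non-link"
    return "possible link"
```

Pairs scoring between the two thresholds become "possible links" for clerical review, which is exactly the three-way classification the slide describes.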
25. STANDARDIZATION
• In any dataset there are many manual errors, non-matching abbreviations, etc., which may present the same entity as separate records.
• First step: to clean and standardise the data.
• E.g.: for input data belonging to Mr. William Marcus Smith, entries could have been made by different individuals as:
– Smith W. M.
– William M. Smith
– W.M. Smith
– W.M. Smithe, etc.
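A deliberately crude standardisation rule can already collapse the variants above: strip punctuation, uppercase, and reduce each name token to its initial to form a comparison key. Real systems use far richer rules (nickname tables, phonetic codes, etc.); this is purely illustrative:

```python
def name_key(raw):
    """Crude canonical key: sorted initials of the name tokens."""
    tokens = raw.replace(".", " ").replace(",", " ").upper().split()
    return "".join(sorted(t[0] for t in tokens))

variants = ["Smith W. M.", "William M. Smith", "W.M. Smith", "W.M. Smithe"]
keys = {v: name_key(v) for v in variants}
print(keys)  # every variant collapses to the same key "MSW"
```

Such a coarse key would obviously over-match in a real file; in practice it would serve as a candidate-grouping step before a finer comparison, not as the final match decision.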
26. BLOCKING
• Used to reduce the search space (i.e. the number of record pairs to be compared).
• Similar records are grouped together into blocks or clusters.
• The data sets are split into smaller blocks and only records within the same block are compared.
• E.g. instead of making detailed comparisons of all 90 billion pairs from two lists of 300,000 records representing all businesses in a U.S. state, it may be sufficient to consider the set of 30 million pairs that agree on U.S. Postal ZIP code.
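Blocking on ZIP code can be sketched as follows; the five hypothetical business records stand in for the state-wide lists in the example:

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical records; in the slide's example these would be
# hundreds of thousands of business listings.
records = [
    {"name": "ACME LTD",  "zip": "30301"},
    {"name": "ACME LTD.", "zip": "30301"},
    {"name": "BETA CORP", "zip": "30301"},
    {"name": "GAMMA INC", "zip": "90210"},
    {"name": "GAMA INC",  "zip": "90210"},
]

# Group records into blocks keyed by ZIP code.
blocks = defaultdict(list)
for rec in records:
    blocks[rec["zip"]].append(rec)

# Only compare pairs within the same block.
pairs = [pair for block in blocks.values()
         for pair in combinations(block, 2)]

# Without blocking: C(5,2) = 10 pairs; with blocking: 3 + 1 = 4 pairs.
print(len(pairs))
```

The pair count drops from 10 to 4 even at this toy scale; at 300,000 records per list the same idea is what shrinks 90 billion candidate pairs to 30 million.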
27. MATCHING
Exact Matching
• Linkage of data for the same unit (e.g., establishment) from different files.
• Uses identifiers such as name, address, or tax unit number.
Statistical Matching
• Attempts to link files that may have few units in common.
• Linkages are based on similar characteristics rather than unique identifying information.
28. Requirements for defining a RLS
• The types of linkages required,
• Whether the linkage is performed in batch and/or interactive mode,
• The security provisions for confidential data files,
• The speed of operation needed,
• The volume of records that can be linked with the system,
• The initial cost of software, including licensing and maintenance costs,
• Whether the software is bundled with other software packages,
• The simplicity and flexibility in defining the rules used for linkages,
• The accuracy and statistical defensibility of the product,
• The availability of documentation and training, and
• The maintenance and support of the software.
30. USES
• The system is used to improve data quality and coverage, for long-term medical follow-up of cohorts, for creating patient-oriented rather than event-oriented data, for building new data sources, and for a range of other statistical purposes.
• It helps create a statistically relevant source of ‘new’ information.
• It answers research questions relating to genetics, occupational and environmental health, and medical research.
31. DRAWBACKS
• Issues of privacy and confidentiality
• Policies for conducting studies using such
systems must be transparent
32. APPLICATIONS
• Duplication in data is minimized
• Powerful tool for generating more value out of existing databases
• Large projects, such as the census of an entire country, can be planned
• More detailed information can be obtained
• It becomes easier to follow cohorts