Biologicals 39 (2011) 266–269


Statistical considerations for confirmatory clinical trials for similar biotherapeutic
products
Catherine Njue*
Biostatistics Unit, Centre for Vaccine Evaluation, Biologics and Genetic Therapies Directorate, Health Products and Food Branch, Health Canada, 251 Sir Frederick Banting Driveway,
AL 2201E, Tunney’s Pasture, Ottawa, ON K1A 0K9, Canada

Keywords: Statistical principles; Clinical trials; Equivalence design; Non-inferiority design

Abstract

For the purpose of comparing the efficacy and safety of a Similar Biotherapeutic Product (SBP) to
a Reference Biotherapeutic Product (RBP), the “Guidelines on Evaluation of Similar Biotherapeutic
Products (SBPs)” issued by the World Health Organisation (WHO) state that equivalence or non-inferiority
studies may be acceptable. While, in principle, equivalence trials are preferred, non-inferiority trials may
be considered if appropriately justified, such as for a medicinal product with a wide safety margin.
However, the statistical issues involved in the design, conduct, analysis and interpretation of equivalence
and non-inferiority trials are complex and subtle, and require that all aspects of these trials be given
careful consideration. These issues are important to ensure that equivalence and non-inferiority trials
provide the valid data necessary to draw reliable conclusions regarding the clinical similarity of an SBP
to an RBP.
Crown Copyright © 2011 Published by Elsevier Ltd. All rights reserved.

1. Introduction
According to the guidelines developed by the WHO for the evaluation of SBPs, clinical trials utilising
equivalence or non-inferiority designs will be acceptable for the comparison of the efficacy and safety
of the SBP to the RBP [1]. Equivalence trials are preferred to ensure that the SBP is not clinically less
or more effective than the RBP when used at the same dosage(s), but non-inferiority trials may be
considered if appropriately justified [1,2]. The choice of the clinical trial design depends on many
factors, and the specific design selected for a particular study must be explicitly stated in the trial
protocol. Specifying the design selected is critical, given that equivalence and non-inferiority trials are
designed to test different objectives. It should also be recognised that the statistical principles governing
equivalence and non-inferiority trials are complex and subtle, and must be given careful consideration [3].
This article addresses the statistical principles underlying equivalence and non-inferiority trials that must
be considered in the design, conduct, analysis and interpretation of trials intended to demonstrate that
the SBP is similar to the RBP.

* Tel.: +1 613 9141 6160; fax: +1 613 941 8933.
E-mail address: catherine.njue@hc-sc.gc.ca.

2. Equivalence vs. non-inferiority, and the choice of the
comparability margin
An equivalence trial is designed to show that a test treatment differs from the standard active
treatment by an amount that is clinically unimportant (the equivalence margin), while a non-inferiority
trial is designed to show that a test treatment is not less effective than a standard active treatment by
a small pre-defined margin (the non-inferiority margin) [4]. These are different objectives, and it is
vital that the specific design selected for a particular study be explicitly stated in the trial protocol [5].
The factors that should be considered in selecting the clinical trial design include the product in
question, the intended use of the product, the target population and disease prevalence [1].
Irrespective of the trial design selected, the next critical step is
the determination of the comparability margin. For the equivalence
trial, both the upper and lower equivalence limits are needed to
establish that the SBP response differs from the RBP response by
a clinically unimportant amount. For a non-inferiority trial, only
one limit is required to establish that the SBP response is not
clinically inferior to the RBP response by a pre-specified margin [6].
Determination of this margin must be given careful consideration
as both sample size calculations and interpretation of study results
depend on it. In practical terms, the comparability margin is
defined as the largest difference between the SBP and the RBP that
can be judged as clinically acceptable, and should be smaller than
differences observed in superiority trials of the RBP. The choice of
the margin is based upon a combination of clinical judgement and
statistical reasoning to deal with uncertainties and imperfect
information. Adequate evidence of the effect size of the RBP should
be provided to support the proposed margin. The magnitude and
variability of the effect size of the RBP derived from historical trials
should also be taken into consideration.
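
To make the distinction concrete, the two designs can be stated formally in terms of the treatment difference θ (SBP response minus RBP response, with a larger response indicating better efficacy). The notation below is one common formulation and is not taken from the article; δ_L and δ_U denote the lower and upper equivalence limits and δ_NI the non-inferiority margin.

```latex
% Equivalence (two one-sided tests): equivalence is concluded by rejecting H_0
H_0:\ \theta \le -\delta_L \ \text{ or } \ \theta \ge \delta_U
\quad \text{versus} \quad
H_1:\ -\delta_L < \theta < \delta_U

% Non-inferiority (one-sided): non-inferiority is concluded by rejecting H_0
H_0:\ \theta \le -\delta_{NI}
\quad \text{versus} \quad
H_1:\ \theta > -\delta_{NI}
```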
3. Sample size and power
It is important to note that equivalence and non-inferiority
designs test different hypotheses requiring different formulae for
power and sample size calculations [4]. For an equivalence trial, the
null hypothesis represents lack of equivalence between the test and
reference, while for a non-inferiority trial it represents inferiority
of the test to the reference. Therefore, sample size calculations
should be based on methods specifically designed for either
equivalence or non-inferiority trials. Details of the sample size
calculations should be provided in the study protocol. At a minimum, the basis of the estimates of
any quantities used in the sample size calculation should be clearly explained; such estimates are
usually derived from the results of earlier trials with the RBP or from the published literature [6].
Determination of the appropriate sample size depends on various factors, including the type of
primary endpoint (e.g. binary, quantitative, time-to-event), the pre-defined comparability margin,
the probability of a type I error (falsely rejecting the null hypothesis) and the probability of a type II
error (erroneously failing to reject the null hypothesis) [4,6]. The expected rates of patient dropouts
and withdrawals should also be taken into consideration. Keeping all factors the same, including the
comparability margin, the type I error rate and the type II error rate, an equivalence trial tends to
need a larger sample size than a non-inferiority trial. Sample sizes for equivalence or non-inferiority
trials are usually estimated under the assumption that there is no difference between the SBP and
the RBP. An equivalence trial could therefore be underpowered if the true difference is not zero.
Similarly, a non-inferiority trial could be underpowered if the SBP is actually less effective than the
RBP. To minimise the risk of an underpowered study, the equivalence or non-inferiority trial should
only be considered and designed once similarity of the SBP to the RBP has been ascertained using all
available comparative data that have been generated between the SBP and the RBP [1]. In addition,
keeping the probability of a type II error low by increasing the study power will increase the ability
of the study to show equivalence or non-inferiority of the SBP to the RBP.
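
As a rough numerical illustration of why, all else being equal, an equivalence design needs more subjects than a non-inferiority design, the sketch below applies the standard normal-approximation sample-size formulae for a difference in proportions, assuming equal true proportions, a symmetric ±10% margin and a one-sided 2.5% type I error rate. The function names and inputs are illustrative, and these simple formulae will not exactly reproduce the subject numbers in Table 1, which appear to have been calibrated differently.

```python
from scipy.stats import norm

def n_noninferiority(p, margin, alpha=0.025, power=0.80):
    """Approximate subjects per group for a non-inferiority trial with a binary
    endpoint, assuming both arms share the true proportion p (normal approximation)."""
    z_a, z_b = norm.ppf(1 - alpha), norm.ppf(power)
    return 2 * p * (1 - p) * (z_a + z_b) ** 2 / margin ** 2

def n_equivalence(p, margin, alpha=0.025, power=0.80):
    """Approximate subjects per group for an equivalence trial (two one-sided tests)
    with symmetric margins and an assumed true difference of zero."""
    z_a, z_b = norm.ppf(1 - alpha), norm.ppf(1 - (1 - power) / 2)
    return 2 * p * (1 - p) * (z_a + z_b) ** 2 / margin ** 2

for p in (0.60, 0.70, 0.80, 0.90):
    print(f"p = {p:.2f}: non-inferiority ~{n_noninferiority(p, 0.10):.0f} per group, "
          f"equivalence ~{n_equivalence(p, 0.10):.0f} per group")
```

Under these assumptions the equivalence design needs roughly a third more subjects per group than the non-inferiority design, which is consistent with the qualitative point made above.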
4. Important principles: assay sensitivity and the constancy
assumption
Potential differences between the SBP and the RBP should be
investigated in a sensitive and well established clinical model [1,2].
The sensitivity of the clinical model is important, and there should
be some confidence in the trial’s ability to detect a difference
between the SBP and the RBP if a real difference exists. This is the concept of assay sensitivity,
which refers to the ability of a specific trial to demonstrate a difference between treatments if such
a difference truly exists [7].
Assay sensitivity is crucial to any trial, but has different implications depending on the trial design. A superiority trial that lacks
assay sensitivity will fail to show superiority of the test treatment (a
failed study), and a superiority trial that demonstrates superiority
has simultaneously demonstrated assay sensitivity. However,
a non-inferiority or equivalence trial that lacks assay sensitivity
may find an ineffective treatment to be non-inferior or equivalent
and could lead to an erroneous conclusion of efficacy [4]. The effect
size of the RBP in the current non-inferiority or equivalence trial is
not measured directly relative to placebo, and assay sensitivity
must be deduced. In fact, the current non-inferiority or equivalence
trial must rely on the assumption of assay sensitivity based on
information external to the trial. Factors that can reduce assay sensitivity include poor compliance
with study medication, use of concomitant medications, missing data, patient withdrawals, variability
in measurements and poor trial quality. The design and
conduct of the non-inferiority or equivalence trial must attempt to
avoid or at least minimise these factors to ensure the credibility of
trial results.
Another important principle in equivalence and non-inferiority
trials is the constancy assumption, which means that the historical
active control effect size compared to placebo is unchanged in the
setting of the current trial [7]. To ensure the validity of the
constancy assumption, the equivalence or non-inferiority trial must
be designed and conducted in the same manner as the trial that
established the efficacy of the RBP. The study population,
concomitant therapy, endpoints, trial duration, assessments and
other important aspects of the trial should be similar to those in the
trial used to demonstrate the effectiveness of the RBP.
The trial should not only adhere closely to the conduct of the trial used to demonstrate the
effectiveness of the RBP, but should also be carried out to a high standard to ensure assay sensitivity.
5. Data analysis and analysis populations
Data analysis from equivalence and non-inferiority studies is
generally based on the use of two-sided confidence intervals
(typically at the 95% level) for the metric of treatment effect.
Examples of metrics of treatment effect include absolute difference
in proportions, relative risk, odds ratio and hazard ratio. Analysis of
non-inferiority trials can also be based on a one-sided confidence
interval (typically at the 97.5% level) [6]. For equivalence trials,
equivalence is demonstrated when the entire confidence interval
falls within the lower and upper equivalence limits. Non-inferiority
trials are one-sided, and statistical inference is based only on the
upper or lower confidence limit, whichever is appropriate for
a given study [8]. For example, if a lower non-inferiority margin is
defined, non-inferiority is demonstrated when the lower limit of
the confidence interval is above this margin. Likewise, if an upper
non-inferiority margin is defined, non-inferiority is demonstrated
when the upper limit of the confidence interval is below this
margin.
In a superiority trial, analysis based on the Intent-to-Treat (ITT) population is considered the
primary analysis and is regarded as conservative in this setting. However, for equivalence and
non-inferiority trials, the ITT analysis is not conservative, as it tends to bias the results towards
equivalence or non-inferiority. The per-protocol (PP) population excludes data from patients with
major protocol violations, and this exclusion can also substantially bias the results, especially if
a large proportion of subjects is excluded. Hence, a conservative analysis cannot always be defined
for equivalence or non-inferiority trials, and this presents challenges during the analysis of the trial
results [9]. Equivalence or non-inferiority trials should therefore be analysed using both the ITT and
PP approaches, and both sets of results should support the conclusion of equivalence or
non-inferiority.
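
The confidence-interval decision rules described above can be written down directly. The sketch below uses a two-sided 95% Wald (normal-approximation) interval for a difference in proportions purely for illustration; the article does not prescribe a particular interval method, and the margins and counts shown are made-up examples.

```python
import math
from scipy.stats import norm

def wald_ci_diff(x_test, n_test, x_ref, n_ref, level=0.95):
    """Two-sided Wald confidence interval for p_test - p_ref."""
    p_t, p_r = x_test / n_test, x_ref / n_ref
    se = math.sqrt(p_t * (1 - p_t) / n_test + p_r * (1 - p_r) / n_ref)
    z = norm.ppf(1 - (1 - level) / 2)
    d = p_t - p_r
    return d - z * se, d + z * se

def shows_equivalence(ci, lower_limit, upper_limit):
    """Equivalence: the entire confidence interval lies within the equivalence limits."""
    return ci[0] > lower_limit and ci[1] < upper_limit

def shows_noninferiority(ci, lower_margin):
    """Non-inferiority (higher response = better): lower confidence limit above the margin."""
    return ci[0] > lower_margin

ci = wald_ci_diff(x_test=165, n_test=200, x_ref=168, n_ref=200)  # made-up counts
print(ci, shows_equivalence(ci, -0.10, 0.10), shows_noninferiority(ci, -0.10))
```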
6. Trial results: possible outcomes and interpretation
Irrespective of the trial design, the totality of the observed results obtained from the clinical trial
will determine whether the
SBP and the RBP can be considered clinically similar. Fig. 1 shows
the possible results (confidence intervals) from a non-inferiority or
equivalence trial designed to compare a test to a reference for
which a higher value in the endpoint corresponds to better efficacy,
and the difference between treatments is defined as test minus
reference.
As illustrated in Fig. 1, where the lower limit of the equivalence region also represents the
non-inferiority margin, several outcomes can be obtained from the analysis based on the confidence
interval approach. The following is an interpretation of each outcome (a sketch mapping a single
confidence interval to these outcomes follows the list):
1. Trial failed, regardless of objective (Test is worse than Reference);
2. Trial failed, regardless of objective (Test is not better, but could be worse than Reference);
3. Trial failed, regardless of objective (Trial is completely non-informative: Test could be worse than or better than Reference);
4. Test equivalent and therefore, also non-inferior to Reference
(entire confidence interval within equivalence region);
5. Test not equivalent but non-inferior to Reference (lower limit of
confidence interval above non-inferiority margin);
6. Test not equivalent but statistically and clinically superior to
Reference (lower limit of confidence interval above equivalence
region).
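
A minimal sketch of how these six outcomes follow from a single confidence interval for (test minus reference) is given below, assuming, as in Fig. 1, that the lower equivalence limit doubles as the non-inferiority margin and that higher values mean better efficacy; the function name and the boundary-handling conventions are illustrative, not taken from the article.

```python
def classify_outcome(ci, eq_lower, eq_upper):
    """Map a two-sided CI for (test - reference) to outcomes 1-6 described above.
    eq_lower also serves as the non-inferiority margin; ties go to the 'failed' cases."""
    lo, hi = ci
    if hi <= eq_lower:
        return 1  # test worse than reference
    if lo <= eq_lower and hi <= eq_upper:
        return 2  # not better, but could be worse
    if lo <= eq_lower:
        return 3  # completely non-informative (CI spans the whole region)
    if lo > eq_upper:
        return 6  # statistically superior
    if hi < eq_upper:
        return 4  # equivalent (and therefore non-inferior)
    return 5      # non-inferior but not equivalent

print(classify_outcome((-0.04, 0.05), -0.10, 0.10))  # 4: equivalent
print(classify_outcome((-0.06, 0.12), -0.10, 0.10))  # 5: non-inferior only
```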
For comparing the SBP to the RBP, outcome 4 is the most
desirable, but as illustrated by the simulation study described
below, such an outcome can only be obtained from an adequately
powered equivalence trial, and would be unlikely to be obtained
from a trial that was designed as a non-inferiority study.
Outcome 5 has different implications for a non-inferiority trial compared to an equivalence trial,
resulting in a successful non-inferiority trial but a failed equivalence trial. Outcome 6 would
result in a failed equivalence trial, but would be acceptable in
a non-inferiority trial if it does not matter whether the test treatment can yield a much better response than the reference [4].
However, in the context of comparing the SBP to the RBP for the
purpose of demonstrating that the SBP is not clinically less or more
effective than the RBP when used at the same dosage(s), outcome 6
is not desirable as it suggests superiority of the SBP relative to the
RBP. As stated in the WHO guidance document, such a finding, if
clinically relevant, would contradict the principle of similarity and
a post-hoc justification that a finding of statistical superiority is not
clinically relevant would be difficult to defend [1]. If similarity of
the SBP to the RBP has been ascertained prior to initiation of the
confirmatory trial, outcome 6 is expected to be a rare event. This was revealed in the simulation
study described below, where the
proportion of simulated confidence intervals showing statistical
superiority was about 2%.
If the observed statistical superiority is considered clinically relevant, then the SBP would not be
considered similar to the RBP and should be developed as a stand-alone product [1,2]. This is
clearly an undesirable finding at the end of a confirmatory trial that marks the end of the
comparability exercise [1]. Demonstration that the superior efficacy of the SBP is not associated
with adverse events, if the SBP is prescribed at the same dosage as the RBP, would also be required
in all cases [1,2]. To minimise the risk of a superiority finding, all comparative data that have been generated
between the SBP and the RBP should be carefully reviewed and
analysed to ensure that similarity between the SBP and the RBP has
been established with a high degree of confidence prior to initiating
the confirmatory trial [1].
7. A simulation study
A direct simulation of 95% confidence intervals for differences in
proportions was carried out. Each simulation was set up to generate
data from a non-inferiority study with a binary primary endpoint
under the assumption that the proportions in the test and reference
groups are equal. The goal of the simulation study was to demonstrate that equivalence trials require more subjects than non-inferiority trials, and that an adequately powered equivalence
trial is needed to obtain outcome 4 (test equivalent to reference).
Two independent binomial samples of given sample sizes and
underlying proportions of success were generated, and the
approximate 95% two-sided confidence interval for the difference
in proportions (test proportion minus reference proportion) was
calculated for each simulation. A range of proportions was
considered, and both proportions were set equal. With the non-inferiority margin set to 10%, the simulations were carried out
10,000 times, with the number of subjects in each treatment group
selected in order for each simulated study to have a specified
power. From all the simulations, the observed frequency of confidence intervals supporting claims of non-inferiority (test proportion at most 10% lower) and the observed frequency of confidence
intervals supporting claims of equivalence (entire confidence
interval within the equivalence region) were calculated.
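
A minimal sketch of this type of simulation is shown below. The article does not specify the confidence-interval method or the software used, so the Wald interval, the NumPy random-number generator and the example inputs here are assumptions; the output will therefore approximate, not exactly reproduce, the figures in Table 1.

```python
import numpy as np
from scipy.stats import norm

def simulate(p, n, margin=0.10, n_sims=10_000, level=0.95, seed=1):
    """Simulate n_sims two-arm trials with a binary endpoint where both arms share
    the true proportion p and group size n. Returns the fraction of 95% Wald CIs
    for (test - reference) supporting non-inferiority and equivalence, respectively."""
    rng = np.random.default_rng(seed)
    z = norm.ppf(1 - (1 - level) / 2)
    p_t = rng.binomial(n, p, n_sims) / n
    p_r = rng.binomial(n, p, n_sims) / n
    diff = p_t - p_r
    se = np.sqrt(p_t * (1 - p_t) / n + p_r * (1 - p_r) / n)
    lo, hi = diff - z * se, diff + z * se
    non_inferior = np.mean(lo > -margin)                   # lower limit above -10%
    equivalent = np.mean((lo > -margin) & (hi < margin))   # whole CI inside +/-10%
    return non_inferior, equivalent

# Illustrative run only: p = 0.60 with 400 subjects per group (cf. Table 1)
print(simulate(p=0.60, n=400))
```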
The simulation results are summarised in Table 1. The results in Table 1 illustrate that outcome 4
(Test equivalent to Reference: entire confidence interval within equivalence region) can be difficult
to obtain from a trial designed as a non-inferiority study, and show that the proportion of simulated
confidence intervals supporting claims of equivalence is about 20% lower (for 80% power) and about
10% lower (for 90% power) compared to the proportion of simulated confidence intervals supporting
claims of non-inferiority. In other words, the absolute loss in power ranges from about 10% to 20%
depending on study power.

Table 1
Proportion of simulated confidence intervals supporting claims of non-inferiority or equivalence of test to reference.

Power   Proportion of        Required number of     Proportion of confidence      Proportion of confidence
        success (p1 = p2)    subjects (N1 = N2)     intervals supporting          intervals supporting
                                                    non-inferiority               equivalence
80%     0.60                 400                    80.1%                         60.5%
80%     0.70                 347                    79.9%                         59.5%
80%     0.80                 273                    80.5%                         60.8%
80%     0.90                 163                    80.5%                         60.9%
90%     0.60                 520                    89.9%                         79.8%
90%     0.70                 455                    89.7%                         79.3%
90%     0.80                 360                    90.4%                         80.7%
90%     0.90                 210                    90.0%                         80.0%

[Fig. 1. Possible results (confidence intervals) from a non-inferiority or equivalence trial. The figure shows six numbered confidence intervals (1-6) against a horizontal scale running from "Reference Superior" through the "Equivalence Region" to "Test Superior".]


8. Conclusion
Equivalence trials are preferred, and non-inferiority trials may be considered if appropriately
justified [1,2]. Prior to initiating an equivalence or non-inferiority trial for the purpose of
demonstrating that the SBP has similar efficacy and safety to the RBP, a careful evaluation of all
comparative data that have been generated between the SBP and the RBP should be performed to
ensure that similarity between the SBP and the RBP has been clearly established. This is important,
as the equivalence or non-inferiority study marks the last step of the comparability exercise, and the
study should be initiated only when there is unequivocal evidence that the SBP can be considered
similar to the RBP. It is also vital that the protocol of the trial designed to demonstrate equivalence
or non-inferiority of the SBP to the RBP contains a clear statement that this is the intention.
The choice of the RBP is critical. It should be a widely used therapy whose efficacy in the relevant
indication has been clearly established and quantified in well-designed and well-documented
placebo-controlled trials. The choice of the equivalence or non-inferiority margin must be clearly
justified. Equivalence and non-inferiority trials require different formulae for power and sample
size calculations, and the appropriate set of formulae should be used in each setting. Factors that
reduce the assay sensitivity of a trial should be avoided or minimised, and the equivalence or
non-inferiority trial must be designed and conducted in the same manner as the trial that established
the efficacy of the RBP to ensure the constancy assumption. The totality of the data obtained from
the equivalence or non-inferiority trial will determine whether the SBP can be considered clinically
similar to the RBP.

Conflict of interest

Author has no potential conflicts of interest.

Acknowledgements

The author thanks Mike Walsh, Centre for Vaccine Evaluation,
Biologics and Genetic Therapies Directorate, Health Canada and
Bob Li, Office of Science, Therapeutic Products Directorate, Health
Canada for their input and suggestions on the original draft; Keith
O’Rourke, Centre for Vaccine Evaluation, Biologics and Genetic
Therapies Directorate, Health Canada for the running the simulations; and Bill Casley, Centre for Vaccine Evaluation, Biologics and
Genetic Therapies Directorate, Health Canada for proof reading the
article.

References
[1] Guidelines on Evaluation of Similar Biotherapeutic Products (SBPs). World Health Organisation; 2010.
[2] Guidance for Sponsors: Information and submission requirements for subsequent entry biologics (SEBs). Health Canada; 2010.
[3] Jones B, Jarvis P, Lewis JA, Ebbutt AF. Trials to assess equivalence: the importance of rigorous methods. British Medical Journal 1996;313:36–9.
[4] Hwang IK. Active-controlled noninferiority/equivalence trials: methods and practice. In: Buncher CR, Tsay JY, editors. Statistics in the pharmaceutical industry. New York: Chapman and Hall; 2005. p. 193–230.
[5] CPMP Working Party on Efficacy of Medicinal Products, Note for Guidance. Biostatistical methodology in clinical trials in applications for marketing authorizations for medicinal products. Statistics in Medicine 1995;14:1659–82.
[6] Statistical Principles for Clinical Trials. ICH Topic E9; 1998.
[7] Choice of Control Group and Related Issues in Clinical Trials. ICH Topic E10; 2000.
[8] Guidelines on Clinical Evaluation of Vaccines: Regulatory expectations. WHO Technical Report Series 2004;924:35–101.
[9] Snapinn SM. Noninferiority trials. Current Controlled Trials in Cardiovascular Medicine 2000;1(1):19–21.

Contenu connexe

Tendances

Systematic reviews of adverse effects and other topics not – yet – covered by...
Systematic reviews of adverse effects and other topics not – yet – covered by...Systematic reviews of adverse effects and other topics not – yet – covered by...
Systematic reviews of adverse effects and other topics not – yet – covered by...Cochrane.Collaboration
 
Combination of informative biomarkers in small pilot studies and estimation ...
Combination of informative  biomarkers in small pilot studies and estimation ...Combination of informative  biomarkers in small pilot studies and estimation ...
Combination of informative biomarkers in small pilot studies and estimation ...LEGATO project
 
Statistical issues in subgroup analyses
Statistical issues in subgroup analysesStatistical issues in subgroup analyses
Statistical issues in subgroup analysesFrancois MAIGNEN
 
Superiority, non-inferiority, equivalence studies - what is the difference?
Superiority, non-inferiority, equivalence studies - what is the difference?Superiority, non-inferiority, equivalence studies - what is the difference?
Superiority, non-inferiority, equivalence studies - what is the difference?simonledinek
 
Superiority Trials Versus Non-Inferiority Trials to Demonstrate Effectiveness...
Superiority Trials Versus Non-Inferiority Trials to Demonstrate Effectiveness...Superiority Trials Versus Non-Inferiority Trials to Demonstrate Effectiveness...
Superiority Trials Versus Non-Inferiority Trials to Demonstrate Effectiveness...Kevin Clauson
 
Estimating design effect and calculating sample size for respondent driven sa...
Estimating design effect and calculating sample size for respondent driven sa...Estimating design effect and calculating sample size for respondent driven sa...
Estimating design effect and calculating sample size for respondent driven sa...Dung Tri
 
Sample size
Sample sizeSample size
Sample sizezubis
 
How to calculate Sample Size
How to calculate Sample SizeHow to calculate Sample Size
How to calculate Sample SizeMNDU net
 
Practical Methods To Overcome Sample Size Challenges
Practical Methods To Overcome Sample Size ChallengesPractical Methods To Overcome Sample Size Challenges
Practical Methods To Overcome Sample Size ChallengesnQuery
 
Sample Size Estimation and Statistical Test Selection
Sample Size Estimation  and Statistical Test SelectionSample Size Estimation  and Statistical Test Selection
Sample Size Estimation and Statistical Test SelectionVaggelis Vergoulas
 
Summary and Discussion Points for Pharmacokinetic (Category2) & Clinical Abus...
Summary and Discussion Points for Pharmacokinetic (Category2) & Clinical Abus...Summary and Discussion Points for Pharmacokinetic (Category2) & Clinical Abus...
Summary and Discussion Points for Pharmacokinetic (Category2) & Clinical Abus...nlevy-cooperman
 
Ich sample size
Ich sample sizeIch sample size
Ich sample sizeAyurdata
 
Interim analysis in clinical trials (1)
Interim analysis in clinical trials (1)Interim analysis in clinical trials (1)
Interim analysis in clinical trials (1)ADITYA CHAKRABORTY
 
Sample size and power calculations
Sample size and power calculationsSample size and power calculations
Sample size and power calculationsRamachandra Barik
 
Sampling techniques and size
Sampling techniques and sizeSampling techniques and size
Sampling techniques and sizeDr. Keerti Jain
 
5 essential steps for sample size determination in clinical trials slideshare
5 essential steps for sample size determination in clinical trials   slideshare5 essential steps for sample size determination in clinical trials   slideshare
5 essential steps for sample size determination in clinical trials slidesharenQuery
 
Non-inferiority and Equivalence Study design considerations and sample size
Non-inferiority and Equivalence Study design considerations and sample sizeNon-inferiority and Equivalence Study design considerations and sample size
Non-inferiority and Equivalence Study design considerations and sample sizenQuery
 
Adaptive Study Designs
Adaptive Study DesignsAdaptive Study Designs
Adaptive Study DesignsAnirudha Potey
 
Adaptive Clinical Trials
Adaptive Clinical TrialsAdaptive Clinical Trials
Adaptive Clinical TrialsJay1818mar
 

Tendances (20)

Systematic reviews of adverse effects and other topics not – yet – covered by...
Systematic reviews of adverse effects and other topics not – yet – covered by...Systematic reviews of adverse effects and other topics not – yet – covered by...
Systematic reviews of adverse effects and other topics not – yet – covered by...
 
Combination of informative biomarkers in small pilot studies and estimation ...
Combination of informative  biomarkers in small pilot studies and estimation ...Combination of informative  biomarkers in small pilot studies and estimation ...
Combination of informative biomarkers in small pilot studies and estimation ...
 
SAMPLE SIZE, CONSENT, STATISTICS
SAMPLE SIZE, CONSENT, STATISTICSSAMPLE SIZE, CONSENT, STATISTICS
SAMPLE SIZE, CONSENT, STATISTICS
 
Statistical issues in subgroup analyses
Statistical issues in subgroup analysesStatistical issues in subgroup analyses
Statistical issues in subgroup analyses
 
Superiority, non-inferiority, equivalence studies - what is the difference?
Superiority, non-inferiority, equivalence studies - what is the difference?Superiority, non-inferiority, equivalence studies - what is the difference?
Superiority, non-inferiority, equivalence studies - what is the difference?
 
Superiority Trials Versus Non-Inferiority Trials to Demonstrate Effectiveness...
Superiority Trials Versus Non-Inferiority Trials to Demonstrate Effectiveness...Superiority Trials Versus Non-Inferiority Trials to Demonstrate Effectiveness...
Superiority Trials Versus Non-Inferiority Trials to Demonstrate Effectiveness...
 
Estimating design effect and calculating sample size for respondent driven sa...
Estimating design effect and calculating sample size for respondent driven sa...Estimating design effect and calculating sample size for respondent driven sa...
Estimating design effect and calculating sample size for respondent driven sa...
 
Sample size
Sample sizeSample size
Sample size
 
How to calculate Sample Size
How to calculate Sample SizeHow to calculate Sample Size
How to calculate Sample Size
 
Practical Methods To Overcome Sample Size Challenges
Practical Methods To Overcome Sample Size ChallengesPractical Methods To Overcome Sample Size Challenges
Practical Methods To Overcome Sample Size Challenges
 
Sample Size Estimation and Statistical Test Selection
Sample Size Estimation  and Statistical Test SelectionSample Size Estimation  and Statistical Test Selection
Sample Size Estimation and Statistical Test Selection
 
Summary and Discussion Points for Pharmacokinetic (Category2) & Clinical Abus...
Summary and Discussion Points for Pharmacokinetic (Category2) & Clinical Abus...Summary and Discussion Points for Pharmacokinetic (Category2) & Clinical Abus...
Summary and Discussion Points for Pharmacokinetic (Category2) & Clinical Abus...
 
Ich sample size
Ich sample sizeIch sample size
Ich sample size
 
Interim analysis in clinical trials (1)
Interim analysis in clinical trials (1)Interim analysis in clinical trials (1)
Interim analysis in clinical trials (1)
 
Sample size and power calculations
Sample size and power calculationsSample size and power calculations
Sample size and power calculations
 
Sampling techniques and size
Sampling techniques and sizeSampling techniques and size
Sampling techniques and size
 
5 essential steps for sample size determination in clinical trials slideshare
5 essential steps for sample size determination in clinical trials   slideshare5 essential steps for sample size determination in clinical trials   slideshare
5 essential steps for sample size determination in clinical trials slideshare
 
Non-inferiority and Equivalence Study design considerations and sample size
Non-inferiority and Equivalence Study design considerations and sample sizeNon-inferiority and Equivalence Study design considerations and sample size
Non-inferiority and Equivalence Study design considerations and sample size
 
Adaptive Study Designs
Adaptive Study DesignsAdaptive Study Designs
Adaptive Study Designs
 
Adaptive Clinical Trials
Adaptive Clinical TrialsAdaptive Clinical Trials
Adaptive Clinical Trials
 

Similaire à Statistical considerations for confirmatory clinical trials for similar biotherapeutic products

Sample Size Calculation in Clinical Trials.pdf
Sample Size Calculation in Clinical Trials.pdfSample Size Calculation in Clinical Trials.pdf
Sample Size Calculation in Clinical Trials.pdfBindu238662
 
Bioequivalence studies ( Evaluation and Study design)
Bioequivalence studies ( Evaluation and Study design)Bioequivalence studies ( Evaluation and Study design)
Bioequivalence studies ( Evaluation and Study design)Selim Akhtar
 
Meta analysis
Meta analysisMeta analysis
Meta analysisJunaidAKG
 
Regulatory consideration for Biosimilars by FDA
Regulatory consideration for Biosimilars by FDARegulatory consideration for Biosimilars by FDA
Regulatory consideration for Biosimilars by FDAGopal Agrawal
 
Regulatory consideration for biosimilars by fda
Regulatory consideration for biosimilars by fdaRegulatory consideration for biosimilars by fda
Regulatory consideration for biosimilars by fdaGopal Agrawal
 
Clinical trial design, Trial Size, and Study Population
Clinical trial design, Trial Size, and Study Population Clinical trial design, Trial Size, and Study Population
Clinical trial design, Trial Size, and Study Population Shubham Chinchulkar
 
Simplifying study designs and statistical models for new dose & dosage forms ...
Simplifying study designs and statistical models for new dose & dosage forms ...Simplifying study designs and statistical models for new dose & dosage forms ...
Simplifying study designs and statistical models for new dose & dosage forms ...Bhaswat Chakraborty
 
When to Select Observational Studies as Evidence for Comparative Effectivenes...
When to Select Observational Studies as Evidence for Comparative Effectivenes...When to Select Observational Studies as Evidence for Comparative Effectivenes...
When to Select Observational Studies as Evidence for Comparative Effectivenes...Effective Health Care Program
 
Clinical Development of Biosimilars
Clinical Development of Biosimilars Clinical Development of Biosimilars
Clinical Development of Biosimilars Bhaswat Chakraborty
 
USFDA guidelines for bioanalytical method validation
USFDA guidelines for bioanalytical method validationUSFDA guidelines for bioanalytical method validation
USFDA guidelines for bioanalytical method validationbhatiaji123
 
Review on Bioanalytical Method Development in Human Plasma
Review on Bioanalytical Method Development in Human PlasmaReview on Bioanalytical Method Development in Human Plasma
Review on Bioanalytical Method Development in Human Plasmaijtsrd
 
Ich guidelines e9 to e12
Ich guidelines e9 to e12Ich guidelines e9 to e12
Ich guidelines e9 to e12Khushboo Bhatia
 
M10 bioanalytical method validation
M10 bioanalytical method validationM10 bioanalytical method validation
M10 bioanalytical method validationsawantanil
 
Clinical development of biopharmaceuticals in India
Clinical development of biopharmaceuticals in IndiaClinical development of biopharmaceuticals in India
Clinical development of biopharmaceuticals in IndiaBhaswat Chakraborty
 
L16 rm (systematic review and meta-analysis)-samer
L16 rm (systematic review and meta-analysis)-samerL16 rm (systematic review and meta-analysis)-samer
L16 rm (systematic review and meta-analysis)-samerDr Ghaiath Hussein
 
A gentle introduction to meta-analysis
A gentle introduction to meta-analysisA gentle introduction to meta-analysis
A gentle introduction to meta-analysisAngelo Tinazzi
 
Use of real-world data to provide evidence in.pptx
Use of real-world data to provide evidence in.pptxUse of real-world data to provide evidence in.pptx
Use of real-world data to provide evidence in.pptxDrPallaviKalbande1
 

Similaire à Statistical considerations for confirmatory clinical trials for similar biotherapeutic products (20)

Sample Size Calculation in Clinical Trials.pdf
Sample Size Calculation in Clinical Trials.pdfSample Size Calculation in Clinical Trials.pdf
Sample Size Calculation in Clinical Trials.pdf
 
Bioequivalence studies ( Evaluation and Study design)
Bioequivalence studies ( Evaluation and Study design)Bioequivalence studies ( Evaluation and Study design)
Bioequivalence studies ( Evaluation and Study design)
 
Meta analysis
Meta analysisMeta analysis
Meta analysis
 
Regulatory consideration for Biosimilars by FDA
Regulatory consideration for Biosimilars by FDARegulatory consideration for Biosimilars by FDA
Regulatory consideration for Biosimilars by FDA
 
Regulatory consideration for biosimilars by fda
Regulatory consideration for biosimilars by fdaRegulatory consideration for biosimilars by fda
Regulatory consideration for biosimilars by fda
 
Clinical trial design, Trial Size, and Study Population
Clinical trial design, Trial Size, and Study Population Clinical trial design, Trial Size, and Study Population
Clinical trial design, Trial Size, and Study Population
 
Simplifying study designs and statistical models for new dose & dosage forms ...
Simplifying study designs and statistical models for new dose & dosage forms ...Simplifying study designs and statistical models for new dose & dosage forms ...
Simplifying study designs and statistical models for new dose & dosage forms ...
 
When to Select Observational Studies as Evidence for Comparative Effectivenes...
When to Select Observational Studies as Evidence for Comparative Effectivenes...When to Select Observational Studies as Evidence for Comparative Effectivenes...
When to Select Observational Studies as Evidence for Comparative Effectivenes...
 
Clinical Development of Biosimilars
Clinical Development of Biosimilars Clinical Development of Biosimilars
Clinical Development of Biosimilars
 
Bioequivalence studies
Bioequivalence studiesBioequivalence studies
Bioequivalence studies
 
USFDA guidelines for bioanalytical method validation
USFDA guidelines for bioanalytical method validationUSFDA guidelines for bioanalytical method validation
USFDA guidelines for bioanalytical method validation
 
Review on Bioanalytical Method Development in Human Plasma
Review on Bioanalytical Method Development in Human PlasmaReview on Bioanalytical Method Development in Human Plasma
Review on Bioanalytical Method Development in Human Plasma
 
Ich guidelines e9 to e12
Ich guidelines e9 to e12Ich guidelines e9 to e12
Ich guidelines e9 to e12
 
M10 bioanalytical method validation
M10 bioanalytical method validationM10 bioanalytical method validation
M10 bioanalytical method validation
 
Meta analysis
Meta analysisMeta analysis
Meta analysis
 
Clinical development of biopharmaceuticals in India
Clinical development of biopharmaceuticals in IndiaClinical development of biopharmaceuticals in India
Clinical development of biopharmaceuticals in India
 
L16 rm (systematic review and meta-analysis)-samer
L16 rm (systematic review and meta-analysis)-samerL16 rm (systematic review and meta-analysis)-samer
L16 rm (systematic review and meta-analysis)-samer
 
Dhiwahar ppt
Dhiwahar pptDhiwahar ppt
Dhiwahar ppt
 
A gentle introduction to meta-analysis
A gentle introduction to meta-analysisA gentle introduction to meta-analysis
A gentle introduction to meta-analysis
 
Use of real-world data to provide evidence in.pptx
Use of real-world data to provide evidence in.pptxUse of real-world data to provide evidence in.pptx
Use of real-world data to provide evidence in.pptx
 

Plus de National Institute of Biologics

Defining your-target-product-profile in-vitro-diagnostic-products
Defining your-target-product-profile in-vitro-diagnostic-productsDefining your-target-product-profile in-vitro-diagnostic-products
Defining your-target-product-profile in-vitro-diagnostic-productsNational Institute of Biologics
 
Accelerating development and approval of targeted cancer therapies
Accelerating development and approval of targeted cancer therapiesAccelerating development and approval of targeted cancer therapies
Accelerating development and approval of targeted cancer therapiesNational Institute of Biologics
 
Canonical structures for the hypervariable regions of immunoglobulins
Canonical structures for the hypervariable regions of immunoglobulinsCanonical structures for the hypervariable regions of immunoglobulins
Canonical structures for the hypervariable regions of immunoglobulinsNational Institute of Biologics
 
Development trends for human monoclonal antibody therapeutics
Development trends for human monoclonal antibody therapeuticsDevelopment trends for human monoclonal antibody therapeutics
Development trends for human monoclonal antibody therapeuticsNational Institute of Biologics
 
Therapeutic fc fusion proteins and peptides as successful alternatives to ant...
Therapeutic fc fusion proteins and peptides as successful alternatives to ant...Therapeutic fc fusion proteins and peptides as successful alternatives to ant...
Therapeutic fc fusion proteins and peptides as successful alternatives to ant...National Institute of Biologics
 
Fc fusion proteins and fc rn - structural insights for longer-lasting and mor...
Fc fusion proteins and fc rn - structural insights for longer-lasting and mor...Fc fusion proteins and fc rn - structural insights for longer-lasting and mor...
Fc fusion proteins and fc rn - structural insights for longer-lasting and mor...National Institute of Biologics
 
Therapeutic antibodies for autoimmunity and inflammation
Therapeutic antibodies for autoimmunity and inflammationTherapeutic antibodies for autoimmunity and inflammation
Therapeutic antibodies for autoimmunity and inflammationNational Institute of Biologics
 
Introduction to current and future protein therapeutics - a protein engineeri...
Introduction to current and future protein therapeutics - a protein engineeri...Introduction to current and future protein therapeutics - a protein engineeri...
Introduction to current and future protein therapeutics - a protein engineeri...National Institute of Biologics
 
Pharmaceutical monoclonal antibodies production - guidelines to cell engine...
Pharmaceutical monoclonal antibodies   production - guidelines to cell engine...Pharmaceutical monoclonal antibodies   production - guidelines to cell engine...
Pharmaceutical monoclonal antibodies production - guidelines to cell engine...National Institute of Biologics
 
Intended use of reference products & who international standards or reference...
Intended use of reference products & who international standards or reference...Intended use of reference products & who international standards or reference...
Intended use of reference products & who international standards or reference...National Institute of Biologics
 
Evaluation of similar biotherapeutic products (SBP's) scientific principles ...
Evaluation of similar biotherapeutic products (SBP's)   scientific principles ...Evaluation of similar biotherapeutic products (SBP's)   scientific principles ...
Evaluation of similar biotherapeutic products (SBP's) scientific principles ...National Institute of Biologics
 

Plus de National Institute of Biologics (20)

Waters protein therapeutics application proctocols
Waters protein therapeutics application proctocolsWaters protein therapeutics application proctocols
Waters protein therapeutics application proctocols
 
Potential aggregation prone regions in biotherapeutics
Potential aggregation prone regions in biotherapeuticsPotential aggregation prone regions in biotherapeutics
Potential aggregation prone regions in biotherapeutics
 
How the biologics landscape is evolving
How the biologics landscape is evolvingHow the biologics landscape is evolving
How the biologics landscape is evolving
 
Evaluation of antibody drugs quality safety
Evaluation of antibody drugs quality safetyEvaluation of antibody drugs quality safety
Evaluation of antibody drugs quality safety
 
Approved m abs_feb_2015
Approved m abs_feb_2015Approved m abs_feb_2015
Approved m abs_feb_2015
 
Translating next generation sequencing to practice
Translating next generation sequencing to practiceTranslating next generation sequencing to practice
Translating next generation sequencing to practice
 
From biomarkers to diagnostics –the road to success
From biomarkers to diagnostics –the road to successFrom biomarkers to diagnostics –the road to success
From biomarkers to diagnostics –the road to success
 
Defining your-target-product-profile in-vitro-diagnostic-products
Defining your-target-product-profile in-vitro-diagnostic-productsDefining your-target-product-profile in-vitro-diagnostic-products
Defining your-target-product-profile in-vitro-diagnostic-products
 
Accelerating development and approval of targeted cancer therapies
Accelerating development and approval of targeted cancer therapiesAccelerating development and approval of targeted cancer therapies
Accelerating development and approval of targeted cancer therapies
 
Canonical structures for the hypervariable regions of immunoglobulins
Canonical structures for the hypervariable regions of immunoglobulinsCanonical structures for the hypervariable regions of immunoglobulins
Canonical structures for the hypervariable regions of immunoglobulins
 
Canonical correlation
Canonical correlationCanonical correlation
Canonical correlation
 
Development trends for human monoclonal antibody therapeutics
Development trends for human monoclonal antibody therapeuticsDevelopment trends for human monoclonal antibody therapeutics
Development trends for human monoclonal antibody therapeutics
 
Therapeutic fc fusion proteins and peptides as successful alternatives to ant...
Therapeutic fc fusion proteins and peptides as successful alternatives to ant...Therapeutic fc fusion proteins and peptides as successful alternatives to ant...
Therapeutic fc fusion proteins and peptides as successful alternatives to ant...
 
Fc fusion proteins and fc rn - structural insights for longer-lasting and mor...
Fc fusion proteins and fc rn - structural insights for longer-lasting and mor...Fc fusion proteins and fc rn - structural insights for longer-lasting and mor...
Fc fusion proteins and fc rn - structural insights for longer-lasting and mor...
 
Therapeutic antibodies for autoimmunity and inflammation
Therapeutic antibodies for autoimmunity and inflammationTherapeutic antibodies for autoimmunity and inflammation
Therapeutic antibodies for autoimmunity and inflammation
 
Introduction to current and future protein therapeutics - a protein engineeri...
Introduction to current and future protein therapeutics - a protein engineeri...Introduction to current and future protein therapeutics - a protein engineeri...
Introduction to current and future protein therapeutics - a protein engineeri...
 
Pharmaceutical monoclonal antibodies production - guidelines to cell engine...
Pharmaceutical monoclonal antibodies   production - guidelines to cell engine...Pharmaceutical monoclonal antibodies   production - guidelines to cell engine...
Pharmaceutical monoclonal antibodies production - guidelines to cell engine...
 
Intended use of reference products & who international standards or reference...
Intended use of reference products & who international standards or reference...Intended use of reference products & who international standards or reference...
Intended use of reference products & who international standards or reference...
 
How dissimilarly similar are biosimilars
How dissimilarly similar are biosimilarsHow dissimilarly similar are biosimilars
How dissimilarly similar are biosimilars
 
Evaluation of similar biotherapeutic products (SBP's) scientific principles ...
Evaluation of similar biotherapeutic products (SBP's)   scientific principles ...Evaluation of similar biotherapeutic products (SBP's)   scientific principles ...
Evaluation of similar biotherapeutic products (SBP's) scientific principles ...
 

Dernier

SGK ĐIỆN GIẬT ĐHYHN RẤT LÀ HAY TUYỆT VỜI.pdf
SGK ĐIỆN GIẬT ĐHYHN        RẤT LÀ HAY TUYỆT VỜI.pdfSGK ĐIỆN GIẬT ĐHYHN        RẤT LÀ HAY TUYỆT VỜI.pdf
SGK ĐIỆN GIẬT ĐHYHN RẤT LÀ HAY TUYỆT VỜI.pdfHongBiThi1
 
Using Data Visualization in Public Health Communications
Using Data Visualization in Public Health CommunicationsUsing Data Visualization in Public Health Communications
Using Data Visualization in Public Health Communicationskatiequigley33
 
SGK NGẠT NƯỚC ĐHYHN RẤT LÀ HAY NHA .pdf
SGK NGẠT NƯỚC ĐHYHN RẤT LÀ HAY NHA    .pdfSGK NGẠT NƯỚC ĐHYHN RẤT LÀ HAY NHA    .pdf
SGK NGẠT NƯỚC ĐHYHN RẤT LÀ HAY NHA .pdfHongBiThi1
 
SGK LEUKEMIA KINH DÒNG BẠCH CÂU HẠT HAY.pdf
SGK LEUKEMIA KINH DÒNG BẠCH CÂU HẠT HAY.pdfSGK LEUKEMIA KINH DÒNG BẠCH CÂU HẠT HAY.pdf
SGK LEUKEMIA KINH DÒNG BẠCH CÂU HẠT HAY.pdfHongBiThi1
 
Generative AI in Health Care a scoping review and a persoanl experience.
Generative AI in Health Care a scoping review and a persoanl experience.Generative AI in Health Care a scoping review and a persoanl experience.
Generative AI in Health Care a scoping review and a persoanl experience.Vaikunthan Rajaratnam
 
Clinical Research Informatics Year-in-Review 2024
Clinical Research Informatics Year-in-Review 2024Clinical Research Informatics Year-in-Review 2024
Clinical Research Informatics Year-in-Review 2024Peter Embi
 
How to cure cirrhosis and chronic hepatitis naturally
How to cure cirrhosis and chronic hepatitis naturallyHow to cure cirrhosis and chronic hepatitis naturally
How to cure cirrhosis and chronic hepatitis naturallyZurück zum Ursprung
 
historyofpsychiatryinindia. Senthil Thirusangu
historyofpsychiatryinindia. Senthil Thirusanguhistoryofpsychiatryinindia. Senthil Thirusangu
historyofpsychiatryinindia. Senthil Thirusangu Medical University
 
Trustworthiness of AI based predictions Aachen 2024
Trustworthiness of AI based predictions Aachen 2024Trustworthiness of AI based predictions Aachen 2024
Trustworthiness of AI based predictions Aachen 2024EwoutSteyerberg1
 
Male Infertility, Antioxidants and Beyond
Male Infertility, Antioxidants and BeyondMale Infertility, Antioxidants and Beyond
Male Infertility, Antioxidants and BeyondSujoy Dasgupta
 
ayurvedic formulations herbal drug technologyppt
ayurvedic formulations herbal drug technologypptayurvedic formulations herbal drug technologyppt
ayurvedic formulations herbal drug technologypptPradnya Wadekar
 
Mental health Team. Dr Senthil Thirusangu
Mental health Team. Dr Senthil ThirusanguMental health Team. Dr Senthil Thirusangu
Mental health Team. Dr Senthil Thirusangu Medical University
 
Pharmacokinetic Models by Dr. Ram D. Bawankar.ppt
Pharmacokinetic Models by Dr. Ram D.  Bawankar.pptPharmacokinetic Models by Dr. Ram D.  Bawankar.ppt
Pharmacokinetic Models by Dr. Ram D. Bawankar.pptRamDBawankar1
 
Red Blood Cells_anemia & polycythemia.pdf
Red Blood Cells_anemia & polycythemia.pdfRed Blood Cells_anemia & polycythemia.pdf
Red Blood Cells_anemia & polycythemia.pdfMedicoseAcademics
 
power point presentation of Clinical evaluation of strabismus
power point presentation of Clinical evaluation  of strabismuspower point presentation of Clinical evaluation  of strabismus
power point presentation of Clinical evaluation of strabismusChandrasekar Reddy
 
Basic structure of hair and hair growth cycle.pptx
Basic structure of hair and hair growth cycle.pptxBasic structure of hair and hair growth cycle.pptx
Basic structure of hair and hair growth cycle.pptxkomalt2001
 
Breast cancer -ONCO IN MEDICAL AND SURGICAL NURSING.pptx
Breast cancer -ONCO IN MEDICAL AND SURGICAL NURSING.pptxBreast cancer -ONCO IN MEDICAL AND SURGICAL NURSING.pptx
Breast cancer -ONCO IN MEDICAL AND SURGICAL NURSING.pptxNaveenkumar267201
 

Dernier (20)

SGK ĐIỆN GIẬT ĐHYHN RẤT LÀ HAY TUYỆT VỜI.pdf
SGK ĐIỆN GIẬT ĐHYHN        RẤT LÀ HAY TUYỆT VỜI.pdfSGK ĐIỆN GIẬT ĐHYHN        RẤT LÀ HAY TUYỆT VỜI.pdf
SGK ĐIỆN GIẬT ĐHYHN RẤT LÀ HAY TUYỆT VỜI.pdf
 
Using Data Visualization in Public Health Communications
Using Data Visualization in Public Health CommunicationsUsing Data Visualization in Public Health Communications
Using Data Visualization in Public Health Communications
 
SGK NGẠT NƯỚC ĐHYHN RẤT LÀ HAY NHA .pdf
SGK NGẠT NƯỚC ĐHYHN RẤT LÀ HAY NHA    .pdfSGK NGẠT NƯỚC ĐHYHN RẤT LÀ HAY NHA    .pdf
SGK NGẠT NƯỚC ĐHYHN RẤT LÀ HAY NHA .pdf
 
SGK LEUKEMIA KINH DÒNG BẠCH CÂU HẠT HAY.pdf
SGK LEUKEMIA KINH DÒNG BẠCH CÂU HẠT HAY.pdfSGK LEUKEMIA KINH DÒNG BẠCH CÂU HẠT HAY.pdf
SGK LEUKEMIA KINH DÒNG BẠCH CÂU HẠT HAY.pdf
 
Generative AI in Health Care a scoping review and a persoanl experience.
Generative AI in Health Care a scoping review and a persoanl experience.Generative AI in Health Care a scoping review and a persoanl experience.
Generative AI in Health Care a scoping review and a persoanl experience.
 
Clinical Research Informatics Year-in-Review 2024
Clinical Research Informatics Year-in-Review 2024Clinical Research Informatics Year-in-Review 2024
Clinical Research Informatics Year-in-Review 2024
 
How to cure cirrhosis and chronic hepatitis naturally
How to cure cirrhosis and chronic hepatitis naturallyHow to cure cirrhosis and chronic hepatitis naturally
How to cure cirrhosis and chronic hepatitis naturally
 
Rheumatoid arthritis Part 1, case based approach with application of the late...
Rheumatoid arthritis Part 1, case based approach with application of the late...Rheumatoid arthritis Part 1, case based approach with application of the late...
Rheumatoid arthritis Part 1, case based approach with application of the late...
 
historyofpsychiatryinindia. Senthil Thirusangu
historyofpsychiatryinindia. Senthil Thirusanguhistoryofpsychiatryinindia. Senthil Thirusangu
historyofpsychiatryinindia. Senthil Thirusangu
 
How to master Steroid (glucocorticoids) prescription, different scenarios, ca...
How to master Steroid (glucocorticoids) prescription, different scenarios, ca...How to master Steroid (glucocorticoids) prescription, different scenarios, ca...
How to master Steroid (glucocorticoids) prescription, different scenarios, ca...
 
Trustworthiness of AI based predictions Aachen 2024
Trustworthiness of AI based predictions Aachen 2024Trustworthiness of AI based predictions Aachen 2024
Trustworthiness of AI based predictions Aachen 2024
 
Biologic therapy ice breaking in rheumatology, Case based approach with appli...
Biologic therapy ice breaking in rheumatology, Case based approach with appli...Biologic therapy ice breaking in rheumatology, Case based approach with appli...
Biologic therapy ice breaking in rheumatology, Case based approach with appli...
 
Male Infertility, Antioxidants and Beyond
Male Infertility, Antioxidants and BeyondMale Infertility, Antioxidants and Beyond
Male Infertility, Antioxidants and Beyond
 
ayurvedic formulations herbal drug technologyppt
ayurvedic formulations herbal drug technologypptayurvedic formulations herbal drug technologyppt
ayurvedic formulations herbal drug technologyppt
 
Mental health Team. Dr Senthil Thirusangu
Mental health Team. Dr Senthil ThirusanguMental health Team. Dr Senthil Thirusangu
Mental health Team. Dr Senthil Thirusangu
 
Pharmacokinetic Models by Dr. Ram D. Bawankar.ppt
Pharmacokinetic Models by Dr. Ram D.  Bawankar.pptPharmacokinetic Models by Dr. Ram D.  Bawankar.ppt
Pharmacokinetic Models by Dr. Ram D. Bawankar.ppt
 
Red Blood Cells_anemia & polycythemia.pdf
Red Blood Cells_anemia & polycythemia.pdfRed Blood Cells_anemia & polycythemia.pdf
Red Blood Cells_anemia & polycythemia.pdf
 
power point presentation of Clinical evaluation of strabismus
power point presentation of Clinical evaluation  of strabismuspower point presentation of Clinical evaluation  of strabismus
power point presentation of Clinical evaluation of strabismus
 
Basic structure of hair and hair growth cycle.pptx
Basic structure of hair and hair growth cycle.pptxBasic structure of hair and hair growth cycle.pptx
Basic structure of hair and hair growth cycle.pptx
 
Breast cancer -ONCO IN MEDICAL AND SURGICAL NURSING.pptx
Breast cancer -ONCO IN MEDICAL AND SURGICAL NURSING.pptxBreast cancer -ONCO IN MEDICAL AND SURGICAL NURSING.pptx
Breast cancer -ONCO IN MEDICAL AND SURGICAL NURSING.pptx
 

Statistical considerations for confirmatory clinical trials for similar biotherapeutic products

  • 1. Biologicals 39 (2011) 266e269 Contents lists available at ScienceDirect Biologicals journal homepage: www.elsevier.com/locate/biologicals Statistical considerations for confirmatory clinical trials for similar biotherapeutic products Catherine Njue* Biostatistics Unit, Centre for Vaccine Evaluation, Biologics and Genetic Therapies Directorate, Health Products and Food Branch, Health Canada, 251 Sir Frederick Banting Driveway, AL 2201E, Tunney’s Pasture, Ottawa, ON K1A 0K9, Canada a b s t r a c t Keywords: Statistical principles Clinical trials Equivalence design Non-inferiority design For the purpose of comparing the efficacy and safety of a Similar Biotherapeutic Product (SBP) to a Reference Biotherapeutic Product (RBP), the “Guidelines on Evaluation of Similar Biotherapeutic Products (SBPs)” issued by the World Health Organisation (WHO), states that equivalence or noninferiority studies may be acceptable. While in principle, equivalence trials are preferred, noninferiority trials may be considered if appropriately justified, such as for a medicinal product with a wide safety margin. However, the statistical issues involved in the design, conduct, analysis and interpretation of equivalence and non-inferiority trials are complex and subtle, and require that all aspects of these trials be given careful consideration. These issues are important in order to ensure that equivalence and non-inferiority trials provide valid data that are necessary to draw reliable conclusions regarding the clinical similarity of an SBP to an RBP. Crown Copyright Ó 2011 Published by Elsevier Ltd. All rights reserved. 1. Introduction According to the guidelines developed by the WHO for the evaluation of SBPs, clinical trials utilising equivalence or noninferiority designs will be acceptable for the comparison of efficacy and safety of the SBP to the RBP [1]. Equivalence trials are preferred to ensure that the SBP is not clinically less or more effective than the RBP when used at the same dosage(s), but noninferiority trials may be considered, if appropriately justified [1,2]. The choice of the clinical trial design is dependent on many factors, and the specific design selected for a particular study must be explicitly stated in the trial protocol. Specifying the design selected is critical, given that equivalence and non-inferiority trials are designed to test different objectives. In addition, it should also be recognised that the statistical principles governing equivalence and non-inferiority trials are complex and subtle, and must be given careful consideration [3]. This article will address the statistical principles underlying equivalence and non-inferiority trials that must be considered in the design, conduct, analysis and interpretation of trials for the purpose of demonstrating that the SBP is similar to the RBP. * Tel.: þ1 613 9141 6160; fax: þ1 613 941 8933. E-mail address: catherine.njue@hc-sc.gc.ca. 2. Equivalence vs. non-inferiority, and the choice of the comparability margin An equivalence trial is designed to show that a test treatment differs from the standard active treatment by an amount that is clinically unimportant (the equivalence margin), while a noninferiority trial is designed to show that a test treatment is not less effective than a standard active treatment by a small predefined margin (the non-inferiority margin) [4]. These are different objectives, and it is vital that the specific design selected for a particular study be explicitly stated in the trial protocol [5]. 
The factors that should be considered in selecting the clinical trial design include the product in question, the intended use of the product, the target population and the disease prevalence [1]. Irrespective of the trial design selected, the next critical step is the determination of the comparability margin. For an equivalence trial, both an upper and a lower equivalence limit are needed to establish that the SBP response differs from the RBP response by no more than a clinically unimportant amount. For a non-inferiority trial, only one limit is required to establish that the SBP response is not clinically inferior to the RBP response by more than the pre-specified margin [6]. Determination of this margin must be given careful consideration, as both the sample size calculation and the interpretation of the study results depend on it. In practical terms, the comparability margin is defined as the largest difference between the SBP and the RBP that can be judged as clinically acceptable, and it should be smaller than the differences observed in superiority trials of the RBP.
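To make the role of the margin explicit, the two designs test the following hypotheses, stated here schematically (this notation is illustrative and is not taken from the article): let θ denote the true difference in the chosen efficacy measure (SBP minus RBP, with larger values favouring the SBP) and let δ > 0 be the pre-specified comparability margin.

\[
\begin{aligned}
\text{Equivalence:} \quad & H_0: |\theta| \ge \delta \quad \text{vs.} \quad H_1: |\theta| < \delta \\
\text{Non-inferiority:} \quad & H_0: \theta \le -\delta \quad \text{vs.} \quad H_1: \theta > -\delta
\end{aligned}
\]

Rejecting the null hypothesis in the first case establishes equivalence within plus or minus δ; rejecting it in the second establishes that the SBP is not worse than the RBP by more than δ.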
The choice of the margin is based upon a combination of clinical judgement and statistical reasoning to deal with uncertainties and imperfect information. Adequate evidence of the effect size of the RBP should be provided to support the proposed margin, and the magnitude and variability of the effect size of the RBP derived from historical trials should also be taken into consideration.

3. Sample size and power

It is important to note that equivalence and non-inferiority designs test different hypotheses and therefore require different formulae for power and sample size calculations [4]. For an equivalence trial, the null hypothesis represents lack of equivalence between the test and the reference, while for a non-inferiority trial it represents inferiority of the test to the reference. Sample size calculations should therefore be based on methods specifically designed for either equivalence or non-inferiority trials. Details of the sample size calculation should be provided in the study protocol. At a minimum, the basis for the estimates of any quantities used in the sample size calculation should be clearly explained; these estimates are usually derived from the results of earlier trials with the RBP or from the published literature [6].

Determination of the appropriate sample size depends on various factors, including the type of primary endpoint (e.g. binary, quantitative or time-to-event), the pre-defined comparability margin, the probability of a type I error (falsely rejecting the null hypothesis) and the probability of a type II error (erroneously failing to reject the null hypothesis) [4,6]. The expected rates of patient dropouts and withdrawals should also be taken into consideration. With all other factors held constant, including the comparability margin and the type I and type II error rates, an equivalence trial tends to require a larger sample size than a non-inferiority trial. Sample size estimation for equivalence or non-inferiority trials is usually carried out under the assumption that there is no difference between the SBP and the RBP; an equivalence trial could therefore be underpowered if the true difference is not zero and, similarly, a non-inferiority trial could be underpowered if the SBP is actually less effective than the RBP. To minimise the risk of an underpowered study, the equivalence or non-inferiority trial should only be considered and designed once similarity of the SBP to the RBP has been ascertained using all available comparative data that have been generated between the SBP and the RBP [1]. In addition, keeping the probability of a type II error low by increasing the study power will increase the ability of the study to show equivalence or non-inferiority of the SBP to the RBP.
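As a numerical illustration of the sample size difference, the following minimal sketch (not the author's code) computes approximate per-group sample sizes for a binary endpoint under the normal approximation, assuming equal true proportions, a 10% margin, a one-sided type I error of 2.5% and the two one-sided tests (TOST) formulation for equivalence. Published tables, including Table 1 below, may differ slightly depending on the exact approximation used (for example, whether a continuity correction is applied).

# Minimal sketch, not the author's code: approximate per-group sample sizes
# for a binary endpoint under the normal approximation, assuming the true
# proportions are equal (p1 = p2 = p) and the comparability margin is delta.
from scipy.stats import norm

def n_noninferiority(p, delta, alpha=0.025, power=0.80):
    # Non-inferiority: one-sided test at level alpha, true difference zero.
    z = norm.ppf(1 - alpha) + norm.ppf(power)
    return z ** 2 * 2 * p * (1 - p) / delta ** 2

def n_equivalence(p, delta, alpha=0.025, power=0.80):
    # Equivalence via two one-sided tests (TOST), true difference zero.
    z = norm.ppf(1 - alpha) + norm.ppf(1 - (1 - power) / 2)
    return z ** 2 * 2 * p * (1 - p) / delta ** 2

if __name__ == "__main__":
    for p in (0.60, 0.70, 0.80, 0.90):
        print(p, round(n_noninferiority(p, 0.10)), round(n_equivalence(p, 0.10)))

For the same power, margin and error rates, the equivalence formula uses the larger quantile z(1 − β/2) in place of z(1 − β), which is why the required sample size is larger.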
4. Important principles: assay sensitivity and the constancy assumption

Potential differences between the SBP and the RBP should be investigated in a sensitive and well-established clinical model [1,2]. The sensitivity of the clinical model is important, and there should be some confidence in the trial's ability to detect a difference between the SBP and the RBP if a real difference exists. This is the concept of assay sensitivity, which refers to the ability of a specific trial to demonstrate a difference between treatments if such a difference truly exists [7]. Assay sensitivity is crucial to any trial, but has different implications depending on the trial design. A superiority trial that lacks assay sensitivity will fail to show superiority of the test treatment (a failed study), and a superiority trial that demonstrates superiority has simultaneously demonstrated assay sensitivity. However, a non-inferiority or equivalence trial that lacks assay sensitivity may find an ineffective treatment to be non-inferior or equivalent, and could lead to an erroneous conclusion of efficacy [4]. The effect size of the RBP in the current non-inferiority or equivalence trial is not measured directly relative to placebo, so assay sensitivity must be deduced; in fact, the current non-inferiority or equivalence trial must rely on the assumption of assay sensitivity based on information external to the trial. Factors which can reduce assay sensitivity include poor compliance with study medication, concomitant medications, missing data, patient withdrawals, variability in measurements and poor trial quality. The design and conduct of the non-inferiority or equivalence trial must attempt to avoid, or at least minimise, these factors to ensure the credibility of the trial results.

Another important principle in equivalence and non-inferiority trials is the constancy assumption, which means that the historical active-control effect size compared to placebo is unchanged in the setting of the current trial [7]. To ensure the validity of the constancy assumption, the equivalence or non-inferiority trial must be designed and conducted in the same manner as the trial that established the efficacy of the RBP. The study population, concomitant therapy, endpoints, trial duration, assessments and other important aspects of the trial should be similar to those in the trial used to demonstrate the effectiveness of the RBP. Trial conduct should not only adhere closely to that of the trial used to demonstrate the effectiveness of the RBP, but the trial should also be conducted with high quality to ensure assay sensitivity.

5. Data analysis and analysis populations

Data analysis from equivalence and non-inferiority studies is generally based on the use of two-sided confidence intervals (typically at the 95% level) for the metric of treatment effect. Examples of metrics of treatment effect include the absolute difference in proportions, the relative risk, the odds ratio and the hazard ratio. Analysis of non-inferiority trials can also be based on a one-sided confidence interval (typically at the 97.5% level) [6]. For equivalence trials, equivalence is demonstrated when the entire confidence interval falls within the lower and upper equivalence limits. Non-inferiority trials are one-sided, and statistical inference is based only on the upper or lower confidence limit, whichever is appropriate for a given study [8]. For example, if a lower non-inferiority margin is defined, non-inferiority is demonstrated when the lower limit of the confidence interval is above this margin; likewise, if an upper non-inferiority margin is defined, non-inferiority is demonstrated when the upper limit of the confidence interval is below this margin.
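Purely as an illustration of the confidence-interval approach for a binary endpoint (not taken from the article), the following sketch computes a two-sided 95% Wald interval for the difference in proportions and applies the margin checks; the trial counts and the 10% margin are hypothetical.

# Illustrative sketch only: 95% Wald confidence interval for the difference
# in proportions (SBP minus RBP) and the corresponding margin checks.
from math import sqrt
from scipy.stats import norm

def diff_ci(x_t, n_t, x_r, n_r, level=0.95):
    p_t, p_r = x_t / n_t, x_r / n_r
    se = sqrt(p_t * (1 - p_t) / n_t + p_r * (1 - p_r) / n_r)
    z = norm.ppf(1 - (1 - level) / 2)
    d = p_t - p_r
    return d - z * se, d + z * se

lower, upper = diff_ci(280, 400, 285, 400)   # hypothetical response counts
margin = 0.10                                # hypothetical comparability margin
equivalent = (lower > -margin) and (upper < margin)   # entire CI inside the equivalence region
non_inferior = lower > -margin                        # lower limit above the non-inferiority margin

In practice the interval would be obtained with the method pre-specified in the protocol (for example a stratified or exact method); the Wald interval is used here only to keep the example short.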
In a superiority trial, the analysis based on the intent-to-treat (ITT) population is considered the primary analysis and is considered conservative in that setting. However, for equivalence and non-inferiority trials the ITT population is not considered conservative, as it tends to bias the results towards equivalence or non-inferiority. The per-protocol (PP) population excludes data from patients with major protocol violations, and excluding data from these patients can also substantially bias the results, especially if a large proportion of subjects is excluded. Hence, a conservative analysis cannot always be defined for equivalence or non-inferiority trials, and this presents challenges during the analysis of the trial results [9]. Equivalence or non-inferiority trials should therefore be analysed based on both the ITT and PP approaches, and both sets of results should support the conclusion of equivalence or non-inferiority.

6. Trial results: possible outcomes and interpretation

Irrespective of the trial design, the totality of the actual observed results obtained from the clinical trial will determine whether the SBP and the RBP can be considered clinically similar.
Fig. 1 shows the possible results (confidence intervals) from a non-inferiority or equivalence trial designed to compare a test to a reference for which a higher value of the endpoint corresponds to better efficacy, and the difference between treatments is defined as test minus reference. As illustrated in Fig. 1, where the lower limit of the equivalence region also represents the non-inferiority margin, several outcomes can be obtained from the analysis based on the confidence interval approach. The following is an interpretation of each outcome:

1. Trial failed, regardless of objective (Test is worse than Reference);
2. Trial failed, regardless of objective (Test is not better, but could be worse, than Reference);
3. Trial failed, regardless of objective (trial is completely non-informative: Test could be worse than or better than Reference);
4. Test equivalent and therefore also non-inferior to Reference (entire confidence interval within the equivalence region);
5. Test not equivalent but non-inferior to Reference (lower limit of the confidence interval above the non-inferiority margin);
6. Test not equivalent but statistically and clinically superior to Reference (lower limit of the confidence interval above the equivalence region).

For comparing the SBP to the RBP, outcome 4 is the most desirable, but as illustrated by the simulation study described below, such an outcome can only be obtained from an adequately powered equivalence trial and would be unlikely to be obtained from a trial designed as a non-inferiority study. Outcome 5 has different implications for a non-inferiority trial compared to an equivalence trial, resulting in a successful non-inferiority trial but a failed equivalence trial. Outcome 6 would result in a failed equivalence trial, but would be acceptable in a non-inferiority trial if it does not matter whether the test treatment can yield a much better response than the reference [4]. However, in the context of comparing the SBP to the RBP for the purpose of demonstrating that the SBP is not clinically less or more effective than the RBP when used at the same dosage(s), outcome 6 is not desirable, as it suggests superiority of the SBP relative to the RBP. As stated in the WHO guidance document, such a finding, if clinically relevant, would contradict the principle of similarity, and a post-hoc justification that a finding of statistical superiority is not clinically relevant would be difficult to defend [1]. If similarity of the SBP to the RBP has been ascertained prior to initiation of the confirmatory trial, outcome 6 is expected to be a rare event; this was borne out in the simulation study described below, where the proportion of simulated confidence intervals showing statistical superiority was about 2%. If the observed statistical superiority is considered clinically relevant, then the SBP would not be considered similar to the RBP and should be developed as a stand-alone product [1,2]. This is clearly an undesirable finding at the end of a confirmatory trial that marks the end of the comparability exercise [1]. In all cases, it would also be required to demonstrate that the superior efficacy of the SBP is not associated with adverse events when the SBP is prescribed at the same dosage as the RBP [1,2]. To minimise the risk of a superiority finding, all comparative data that have been generated between the SBP and the RBP should be carefully reviewed and analysed to ensure that similarity between the SBP and the RBP has been established with a high degree of confidence prior to initiating the confirmatory trial [1].
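For readers who want to operationalise these decision rules, the following minimal sketch (not from the article) maps a two-sided confidence interval for the difference (Test minus Reference) onto the six outcomes, assuming a symmetric equivalence region whose lower limit also serves as the non-inferiority margin; the way the boundary between outcomes 2 and 3 is drawn reflects one reasonable reading of Fig. 1.

# Illustrative sketch only: map a confidence interval for the difference
# (test minus reference, higher = better efficacy) onto outcomes 1-6 above,
# using a symmetric equivalence region (-margin, +margin).
def classify(ci_lower, ci_upper, margin):
    if ci_upper < -margin:
        return 1  # entire CI below the non-inferiority margin: test worse
    if ci_lower > margin:
        return 6  # entire CI above the equivalence region: test superior
    if ci_lower > -margin and ci_upper < margin:
        return 4  # entire CI within the equivalence region: equivalent
    if ci_lower > -margin:
        return 5  # non-inferior (lower limit above -margin) but not equivalent
    # CI extends below -margin: failed trial; outcome 2 if the test cannot be
    # better (upper limit below zero), otherwise outcome 3 (non-informative).
    return 2 if ci_upper < 0 else 3

print(classify(-0.05, 0.04, 0.10))   # hypothetical interval: prints 4 (equivalent)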
7. A simulation study

A direct simulation of 95% confidence intervals for differences in proportions was carried out. Each simulation was set up to generate data from a non-inferiority study with a binary primary endpoint under the assumption that the proportions in the test and reference groups are equal. The goal of the simulation study was to demonstrate that equivalence trials require more subjects than non-inferiority trials, and that an adequately powered equivalence trial is needed to obtain outcome 4 (test equivalent to reference). Two independent binomial samples with given sample sizes and underlying proportions of success were generated, and the approximate 95% two-sided confidence interval for the difference in proportions (test proportion minus reference proportion) was calculated for each simulated trial. A range of proportions was considered, with the two proportions always set equal. With the non-inferiority margin set to 10%, the simulations were carried out 10,000 times, with the number of subjects in each treatment group selected so that each simulated study had a specified power. From all the simulations, the observed frequency of confidence intervals supporting a claim of non-inferiority (test proportion at most 10% lower) and the observed frequency of confidence intervals supporting a claim of equivalence (entire confidence interval within the equivalence region) were calculated. The simulation results are summarised in Table 1.

The results in Table 1 illustrate that outcome 4 (test equivalent to reference: entire confidence interval within the equivalence region) can be difficult to obtain from a trial designed as a non-inferiority study: the proportion of simulated confidence intervals supporting a claim of equivalence is about 20% lower (for 80% power) and about 10% lower (for 90% power) than the proportion supporting a claim of non-inferiority. In other words, the absolute loss in power ranges from about 10% to 20%, depending on the study power.

Table 1
Proportion of simulated confidence intervals supporting claims of non-inferiority or equivalence of test to reference.

Power | Proportion of success (p1 = p2) | Required number of subjects (N1 = N2) | Proportion of CIs supporting non-inferiority | Proportion of CIs supporting equivalence
80%   | 0.60 | 400 | 80.1% | 60.5%
80%   | 0.70 | 347 | 79.9% | 59.5%
80%   | 0.80 | 273 | 80.5% | 60.8%
80%   | 0.90 | 163 | 80.5% | 60.9%
90%   | 0.60 | 520 | 89.9% | 79.8%
90%   | 0.70 | 455 | 89.7% | 79.3%
90%   | 0.80 | 360 | 90.4% | 80.7%
90%   | 0.90 | 210 | 90.0% | 80.0%

Fig. 1. Possible results (confidence intervals) from a non-inferiority or equivalence trial. [Figure not reproduced; the horizontal axis runs from "Reference Superior" through the "Equivalence Region" to "Test Superior", with the numbered intervals 1 to 6 corresponding to the outcomes listed in Section 6.]
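A minimal re-implementation sketch of the simulation just described is shown below. This is not the author's code: the Wald interval, the fixed seed and the per-group sizes taken from the 80%-power rows of Table 1 are assumptions, so the resulting frequencies will only approximately reproduce Table 1.

# Minimal sketch, not the author's code: simulate 95% Wald confidence
# intervals for the difference in proportions under equal true proportions,
# with a 10% non-inferiority margin and 10,000 replicates per scenario.
import numpy as np

rng = np.random.default_rng(2011)

def simulate(p, n, margin=0.10, n_sim=10_000, z=1.96):
    x_t = rng.binomial(n, p, n_sim)   # successes in the test group
    x_r = rng.binomial(n, p, n_sim)   # successes in the reference group
    p_t, p_r = x_t / n, x_r / n
    d = p_t - p_r
    se = np.sqrt(p_t * (1 - p_t) / n + p_r * (1 - p_r) / n)
    lower, upper = d - z * se, d + z * se
    non_inferior = np.mean(lower > -margin)
    equivalent = np.mean((lower > -margin) & (upper < margin))
    return non_inferior, equivalent

# Scenarios taken from the 80%-power rows of Table 1 (per-group sample sizes).
for p, n in [(0.60, 400), (0.70, 347), (0.80, 273), (0.90, 163)]:
    print(p, n, simulate(p, n))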
8. Conclusion

Equivalence trials are preferred, and non-inferiority trials may be considered if appropriately justified [1,2]. Prior to initiating an equivalence or non-inferiority trial for the purpose of demonstrating that the SBP has similar efficacy and safety to the RBP, a careful evaluation of all comparative data that have been generated between the SBP and the RBP should be performed to ensure that similarity between the SBP and the RBP has been clearly established. This is important, as the equivalence or non-inferiority study marks the last step of the comparability exercise, and the study should be initiated only when there is unequivocal evidence that the SBP can be considered similar to the RBP. It is also vital that the protocol of the trial designed to demonstrate equivalence or non-inferiority of the SBP to the RBP contains a clear statement that this is the intention.

The choice of the RBP is critical: it should be a widely used therapy whose efficacy in the relevant indication has been clearly established and quantified in well-designed and well-documented placebo-controlled trials. The choice of the equivalence or non-inferiority margin must be clearly justified. Equivalence and non-inferiority trials require different formulae for power and sample size calculations, and the appropriate set of formulae should be used in each setting. Factors which reduce the assay sensitivity of a trial should be avoided or minimised, and the equivalence or non-inferiority trial must be designed and conducted in the same manner as the trial that established the efficacy of the RBP to ensure the constancy assumption. The totality of the data obtained from the equivalence or non-inferiority trial will determine whether the SBP can be considered clinically similar to the RBP.

Conflict of interest

The author has no potential conflicts of interest.

Acknowledgements

The author thanks Mike Walsh, Centre for Vaccine Evaluation, Biologics and Genetic Therapies Directorate, Health Canada, and Bob Li, Office of Science, Therapeutic Products Directorate, Health Canada, for their input and suggestions on the original draft; Keith O’Rourke, Centre for Vaccine Evaluation, Biologics and Genetic Therapies Directorate, Health Canada, for running the simulations; and Bill Casley, Centre for Vaccine Evaluation, Biologics and Genetic Therapies Directorate, Health Canada, for proofreading the article.

References

[1] Guidelines on Evaluation of Similar Biotherapeutic Products (SBPs). World Health Organisation; 2010.
[2] Guidance for Sponsors: Information and submission requirements for subsequent entry biologics (SEBs). Health Canada; 2010.
[3] Jones B, Jarvis P, Lewis JA, Ebbutt AF. Trials to assess equivalence: the importance of rigorous methods. British Medical Journal 1996;313:36–9.
[4] Hwang IK. Active-controlled noninferiority/equivalence trials: methods and practice. In: Buncher CR, Tsay JY, editors. Statistics in the pharmaceutical industry. New York: Chapman and Hall; 2005. p. 193–230.
[5] CPMP Working Party on Efficacy of Medicinal Products, Note for Guidance: Biostatistical methodology in clinical trials in applications for marketing authorizations for medicinal products. Statistics in Medicine 1995;14:1659–82.
[6] Statistical Principles for Clinical Trials. ICH Topic E9; 1998.
[7] Choice of Control Group and Related Issues in Clinical Trials. ICH Topic E10; 2000.
[8] Guidelines on Clinical Evaluation of Vaccines: Regulatory Expectations. WHO Technical Report Series 2004;924:35–101.
[9] Snapinn SM. Noninferiority trials. Current Controlled Trials in Cardiovascular Medicine 2000;1(1):19–21.