ARCS Presentation 2015
1. The Evolving Role of the
Statistician in Clinical Research
Dr Elisa Young
2. About me
• Education:
• BSc (Med Lab Sci)
• PhD (Pharmacology)
• MSc (Statistics)
• Career: (>10 years)
• Statistician, Project Management, Laboratory
Management, Diagnostic Laboratory Scientist, R&D
Scientist
• Novotech (2 years)
• Statistical lead and statistical programmer on over 30
studies
3. Disclaimer
• This presentation represents my thoughts
and perspectives, based on my
experience.
• Perspectives are based on working for a
CRO with a small-scale statistics team and
may not be representative of other CROs,
nor pharmaceutical companies.
4. Hot Topic
Agenda
• How does the statistician fit into a clinical study team?
• How does the statistician contribute during each of the study
stages?
• How can an organisation get added-value from statisticians?
10. Randomization
Static: Assigns treatment in a sequence established
prior to any patients entering the study.
Dynamic: Generates random treatment assignments
based on the stratification levels of the patients that
have entered the trial as well as the patient entering the
study.
12. Risk Based Monitoring
• Driven by “key risk indicators”
• Review of statistical data over the course of the trial
• Very broad range of metrics to be monitored
• May involve complex modelling to estimate overall
risk and unveil data patterns
• Ownership and responsibilities?
13. Data Monitoring Committees and Interim Analyses
• Includes: FDA safety reports, SRCs, DSMBs,
IAs ……
• Varying purposes: Safety, efficacy, dose selection,
sample size re-estimation, futility
• Responsibilities:
• Organising provision of study output
• Committee member
• Statistical risk assessments
14. End of Study Analysis
• SAS programming
• Validation and QC review
• Interpretation
• Reporting
16. CDISC
• Clinical data standards for submission of databases
and supporting documentation to regulatory bodies
• To become mandatory in the next few years.
• Primarily the responsibility of statisticians &
programmers
• Entirely new work processes
• Entirely new skillset
• Significant effects on timelines, budgets & resources
18. Getting the most out of your
statistician/programmers
• Data Management Programming
o Data transposition for easier review
o Identify protocol deviations and eligibility violations
o Complex data review
• Comparing old and refreshed data
• Merging data from several sources
• Reconcile CRF data with external vendor data (e.g.
laboratory results, randomisation data, PK data)
19. • Cross-study analyses
• Combine/compare similar studies
• Site performance
• Organisational statistics
• Training
During my presentation this afternoon I aim to talk about three aspects of clinical statistics: how the statistician fits into the clinical study team; how, and what, the statistician contributes throughout a clinical study; and lastly, how an organisation can get the most out of its statisticians.
Throughout the presentation I’m going to highlight some “hot topics” which will hopefully provide some insight into some of the challenges currently being faced by clinical statisticians.
Taking a quick look at a typical clinical study team, there are of course many team members, representing various departments and areas of expertise. The statistician is just one of the team members of any study team and works closely with most of the other study members, as well as other departments such as QA, BD and HR.
For the purpose of this presentation I’ve split a clinical study into five phases: planning; start-up; maintenance; interim analyses and data monitoring committees; and end of study and reporting.
During the planning phase of studies, statisticians may be asked to provide advice on the study design. The level of involvement varies and may involve advice regarding statistical methodology, schedule of events and timing, or relate to study endpoints.
In addition, we’re often asked to provide estimates regarding sample size, to ensure that the study has adequate power to identify statistical significance.
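As a rough sketch of what such an estimate can look like, the standard normal-approximation formula for comparing two means is straightforward to compute. The effect size, standard deviation, alpha and power below are illustrative assumptions, not values from any particular study.

```python
from math import ceil
from statistics import NormalDist

def n_per_group(delta, sigma, alpha=0.05, power=0.80):
    """Approximate per-group sample size for a two-sample comparison
    of means (normal approximation, two-sided alpha). Illustrative only."""
    z = NormalDist().inv_cdf
    z_a = z(1 - alpha / 2)   # ~1.96 for alpha = 0.05
    z_b = z(power)           # ~0.84 for 80% power
    return ceil(2 * (z_a + z_b) ** 2 * (sigma / delta) ** 2)

# Detecting a 5-unit difference with SD 10 at 80% power:
print(n_per_group(delta=5, sigma=10))  # -> 63 per group
```

In practice a statistician would also consider dropout, the test actually planned, and any interim looks, so real estimates rarely come straight from this formula.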
Of course, none of these discussions are done in isolation. They are done in collaboration with the sponsor and principal investigators, amongst others, to ensure that the study design meets all clinical, regulatory and statistical standards.
One hot topic in the planning stages of clinical studies is the changes we’re seeing in bioequivalence studies. And there are a number of changes happening in this area.
The first is highly variable drugs. Essentially, BEQ study designs have quite specific guidelines, and study design and sample sizes very much revolve around the variation of a drug’s pharmacokinetic, or PK, profile. The more variable a drug is, the more subjects you’re going to need. Standard BEQ designs only allow for a certain level of variation, and so alternative designs for highly variable drugs need to be implemented. This often increases the expense of BEQ studies, places more subjects at risk, and ultimately limits the availability of generics; as such, sponsors are often looking at exploring newly published study designs to increase feasibility.
A second shift is in the type of statistics used to analyse BEQ studies, with many studies choosing to use Bayesian statistics. I won’t go into how Bayesian statistics differs from the standard frequentist approach in clinical trial analyses, other than to say that the thinking behind the two methodologies is really quite different, and depending on their training, this shift may require statisticians to undertake additional training to be able to keep up.
Other changes in BEQ studies relate to the type of endpoints being used to claim equivalence. Traditional study designs focus on PK outcomes, however studies may instead need to use pharmacodynamic or even clinical endpoints to claim equivalence. These approaches are usually specific to either an indication, such as an eye treatment where systemic PK levels are below the level of detection, or the chemical properties of the drug.
Once the study has been planned, it goes into the Start-Up phase, where the clinical and regulatory teams are hard at work getting everything in order to start the study. There are also a few activities for the statistician here. Firstly, we may be asked to either review the protocol, with a focus on the statistical section, or we may be asked to write that section from scratch. This input is actually really beneficial, as it is a great way to familiarise ourselves with the study and to ensure that any concerns can be raised before recruitment even begins.
Following ethics and governance approval, we’ll then be involved in generating randomisation schedules, as the study requires, and preparing code break envelopes.
Lastly, during start-up, the statistician is commonly asked to review eCRFs to ensure that the data collection is going to allow appropriate statistical analysis as stipulated by the protocol. We may also be asked to review Consistency Check Specifications, or CCS, which is essentially a list of all the ways that the integrity of the data will be checked, such as making sure that values are within a certain range, that dates make sense, and so on.
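To give a feel for the kind of rule a CCS might contain, the sketch below applies a simple range check and a date-logic check. The field names and limits are hypothetical, invented for illustration rather than taken from any real specification.

```python
from datetime import date

def check_record(rec):
    """Toy consistency checks of the kind a CCS might specify:
    a range check and a date-logic check. Field names are illustrative."""
    issues = []
    if not 30 <= rec["HR"] <= 200:                    # plausible-range check
        issues.append("heart rate out of plausible range")
    if rec["VISITDT"] < rec["CONSENTDT"]:             # dates must make sense
        issues.append("visit date precedes consent date")
    return issues

rec = {"HR": 250, "VISITDT": date(2015, 3, 1), "CONSENTDT": date(2015, 4, 1)}
print(check_record(rec))  # both checks fire for this record
```

A real CCS lists dozens of such rules, and the checks are typically implemented in the eDC system or in validated SAS programs rather than ad hoc code like this.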
A hot topic in the Start-Up phase of studies relates to randomization, with many studies now opting to use what’s called dynamic randomization, as opposed to a standard static randomization method.
To briefly differentiate, static randomization assigns treatment in a sequence established prior to any patients entering the study. So, the treatment allocation scheme is predefined and unchanged as patients enrol onto the study – the randomization list can be printed off for the whole study ahead of time. This method does not use any information on patients that have entered the trial. As an example you may have a study with a sample size of 6, with 3 active and 3 placebo, where the statistician uses a program to generate a list of the sequential order of treatment allocation. For instance, the first subject will be on active, the second on placebo, the third on placebo, through to the 6th patient.
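A static list of the kind just described can be sketched with simple permuted blocks. The treatment codes, block size and seed below are illustrative assumptions; a real schedule would be produced with validated software such as SAS.

```python
import random

def static_schedule(n_blocks, block=("A", "A", "P", "P"), seed=2015):
    """Build a fixed randomisation list using permuted blocks.
    'A' = active, 'P' = placebo (illustrative codes only)."""
    rng = random.Random(seed)      # fixed seed -> reproducible list
    schedule = []
    for _ in range(n_blocks):
        blk = list(block)
        rng.shuffle(blk)           # randomise order within each block
        schedule.extend(blk)
    return schedule

# A 12-subject list that can be printed before the first patient enrols:
schedule = static_schedule(3)
print(schedule)
```

Because the whole list exists up front, treatment balance is guaranteed within every block, which is exactly the property that makes the schedule printable ahead of time.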
Dynamic randomization on the other hand generates the random treatment assignment for each subject as they enter the study. This method takes into account the stratification levels, and treatment allocations, of the subjects that have already entered the study, as well as the patient now entering the study, aiming to maintain balance as you continue through enrollment. Because this method is subject-by-subject, there is no randomization schedule available up-front.
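One common dynamic method is minimisation in the style of Pocock and Simon: assign each new subject to whichever arm best balances the marginal counts on each stratification factor. The toy sketch below illustrates the idea; the arm labels, factor names and tie-break seed are hypothetical.

```python
import random

def minimise(enrolled, new_strata, factors, rng=random.Random(2015)):
    """Assign the next subject to the arm that minimises marginal
    imbalance across stratification factors (Pocock-Simon style sketch)."""
    arms = ("A", "P")
    imbalance = {}
    for arm in arms:
        total = 0
        for f in factors:
            counts = dict.fromkeys(arms, 0)
            for strata, assigned in enrolled:
                if strata[f] == new_strata[f]:   # same stratum level
                    counts[assigned] += 1
            counts[arm] += 1                     # trial placement of new subject
            total += max(counts.values()) - min(counts.values())
        imbalance[arm] = total
    best = min(imbalance.values())
    ties = [a for a in arms if imbalance[a] == best]
    return rng.choice(ties)                      # random tie-break

enrolled = [({"sex": "F", "age": "<65"}, "A")]
arm = minimise(enrolled, {"sex": "F", "age": "<65"}, ["sex", "age"])
print(arm)  # -> "P": balancing pushes the matching new subject to placebo
```

Note the assignment depends on who has already enrolled, which is why no schedule can be generated in advance.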
The best news about dynamic randomization is that it is done using the eDC platform, and therefore requires very little from the statistician, other than inputting the correct specifications into the back-end.
During much of the maintenance phase, statisticians may very well have very little to do, depending on the length of the study. We tend to kick back into action 3-4 months prior to the proposed database lock date, when we start to prepare a statistical analysis plan, or SAP, which describes, in detail, all of the analyses, including simple data listings, descriptive summaries, inferential analyses and figures, or graphs, which will be output at the end of the study. In conjunction, we’ll put together shells, or mocks, of each output, to detail how the output will look.
Once we have consensus and approval for the SAP and mocks, SAS programming will start to ensure that everything is ready to run by database lock. I’ll be discussing programming in more detail in a few slides time.
Like all other departments, the hot topic for this stage is Risk Based Monitoring.
For statisticians, who have traditionally been involved in the design of clinical trials and their analysis at the end, the introduction of RBM will now likely see us involved in checking the data over the course of the trial. RBM is driven by monitoring indicators of the quality and performance of investigational sites, and a common way to evaluate site performance will be through the monitoring of predefined metrics. These will likely include relevant indicators of quality such as accrual rate, frequency of adverse and serious adverse events, and frequency of data queries and time taken to resolve them. These metrics will need predefined limits or thresholds to trigger actions. Sophisticated RBM approaches go even further, using all available data to build and validate multivariate statistical models to indicate overall risk. For multi-site studies, central statistical monitoring may be required to compare the consistency of the data collected at each site with the data contributed by all other sites. This approach would provide a powerful way to detect abnormal data patterns which may indicate errors, misunderstandings, sloppiness, data fabrication or fraud.
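A very simplified version of such central monitoring might flag sites whose metric deviates from the all-site mean by more than a chosen number of standard deviations. The site IDs, query rates and threshold below are invented for illustration; real RBM models are far more sophisticated.

```python
from statistics import mean, stdev

def flag_sites(metric_by_site, threshold=2.0):
    """Flag sites whose metric (e.g. query rate per subject) sits more
    than `threshold` standard deviations from the all-site mean.
    A toy stand-in for central statistical monitoring."""
    values = list(metric_by_site.values())
    mu, sd = mean(values), stdev(values)
    return [site for site, v in metric_by_site.items()
            if sd > 0 and abs(v - mu) / sd > threshold]

rates = {"S01": 1.1, "S02": 0.9, "S03": 1.0, "S04": 1.2,
         "S05": 0.8, "S06": 1.0, "S07": 1.1, "S08": 4.8}
print(flag_sites(rates))  # S08 stands out from the other sites
```

Even this toy version shows the flavour of the approach: the data itself, rather than 100% source verification, points the monitors at the sites worth visiting.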
The impact on statisticians is likely to be highly variable, depending on the methodological approaches chosen by the sponsor. Not having direct experience with RBM yet, I think ownership and responsibilities is going to be key. There are so many departments that are critical in RBM, such as clinical operations, data management, statistics and even IT. The other key element will be need for considerable investment in sufficient training to educate people on new processes, and to get everybody comfortable with the new tools and software.
Overall, I think there’s no doubt there will be improvements in data quality by focusing efforts on the data most important to the trial, using statistics and graphics to prioritise and identify warning signs.
This next stage, which is really just a sub-stage of the Maintenance phase, is the occurrence of data monitoring and interim analyses, and also includes study updates submitted to the FDA. There are so many different terms used in studies, but essentially this stage represents any “look” at the data before the end of the study. These “looks” can be for a wide variety of reasons, such as reviewing safety or efficacy, or may be for dose selection in sequential dose-escalation study designs. It may be to re-estimate the sample size, or it may be for futility, to determine if the study needs to be stopped early.
The role of the statistician will vary according to the purpose of the study, but usually involves organising for the study output to be generated and provided to the appropriate personnel.
Another important role, which is rarely specified upfront, but is essential for the integrity of the study, is for the statistician to prepare a risk assessment, which aims to identify the various risks, and mitigation strategies, related to these data looks, specifically relating to unblinding and bias.
We then come to the end of the study, where things get very busy for a clinical statistician and statistical programmer. The programmer needs to complete all the programming to generate all the study output, and then all that output needs to be reviewed and validated. To help put things into perspective, each program can easily be over a hundred lines of code, and depending on the study there can be over 100 pieces of output, including listings, tables and figures.
Once output is validated, the statistician then needs to interpret the findings and prepare a statistical analysis report.
Looking at the SAS programming a little further – in order to generate all the output for any given study, the programmer needs to bring together all the study data. This primarily involves the study database, or eDC, but also includes CTMS data such as protocol deviations, safety laboratory results, randomisation schedules, PK and immunogenicity data and pharmacodynamic data, amongst other study-specific datasets. All this data is pooled together in SAS, where a study library of datasets is created. The programmer then needs to prepare some derived datasets, which combine some of the source datasets, such as combining the coding data for AEs, CMs and medical history with the raw data from the CRF page. From there, the programmer needs to write programs to generate all the listings, tables and figures, as well as transport and CDISC datasets.
Which brings me to the hot topic in this stage of the study, being CDISC.
To provide a brief overview, sponsors have always had to submit database data and supporting documentation through to the FDA, however the requirements, up until recently, have been pretty minimal. CDISC is a not-for-profit organization which established a set of standards for how database data should be managed, collected, archived and submitted, and although up until now these standards have been optional, recent FDA communication indicates that these standards will become the mandatory standard over the next couple of years.
From a CRO perspective, there has been a sharp increase in the past couple of years in the number of requests for clinical data using the CDISC standards. We are now at the stage where, if you are in the clinical trial business, it is good business to adopt and integrate CDISC standards into your organization.
The integration of CDISC standards into the organisation has primarily fallen on statisticians and statistical programmers. This is no simple integration, with the introduction of CDISC usually requiring significant changes in an organisation’s workflow. The flow of work is no longer from Data Management straight to analysis datasets; it is now from data management to what’s known as SDTM datasets, which are CDISC-compliant, and then to analysis datasets. This affects timelines, budget, and resources. In general, more of everything is going to be needed due to the increase in workload and number of statistical deliverables.
Lastly, there is also a matter of skillset. One key element of the CDISC standards is the documentation of the metadata – the data about the data, so field names, lengths, formats, and so on. A key part of this work requires knowledge not only of XML but also of XML Schema and XSL, which are languages you would learn in an IT course, not statistics.
In regards to reporting the results of any given study, statisticians are first and foremost involved by preparing a Statistical Analysis Report, which summarises all the data and output from the study.
In addition, statisticians may be asked to prepare in-text tables, collate the tables and figures for Section 14 and compile the listings required for Section 16, which is the appendices.
Lastly, we may be involved in the preparation of manuscripts, conference abstracts or presentations.
In addition to the typical activities of statisticians and SAS programmers in the conduct of a clinical study, there are many ways that you can get more out of them, at either an organisational or a study level.
The most important area to expand these roles is with Data Management, and in fact many large CROs will have SAS programmers who specifically work with Data Management. In this role, programmers can help data management by transposing data, so that the data is easier to review. This might be organising all the lab data so that a data associate can use Excel filters and pivot tables.
Programs can also be written to pick up protocol deviations and eligibility violations, either as a safety net to catch anything that slips through manual review, or as a more formal check for deviations that may be difficult to pick up without merging and rearranging data.
Programs can also be written to allow more complex data reviews. A few examples that I’ve been involved in: comparing old and refreshed data, which allows data associates to focus on data that is new, removed or amended since the last look at the data; merging data from several sources, which could help with cross-checking eligibility data against protocol violations and waivers; and lastly, using SAS programming to reconcile external vendor data with CRF data, to ensure that visits, times, and other information line up.
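A minimal sketch of the old-versus-refreshed comparison, assuming records are plain dicts keyed by a subject identifier (the field names here are illustrative, not from any real CRF):

```python
def diff_snapshots(old, new, key="SUBJID"):
    """Compare two data snapshots (lists of dicts keyed by subject ID)
    and classify records as added, removed, or amended since the last pull."""
    old_by_key = {r[key]: r for r in old}
    new_by_key = {r[key]: r for r in new}
    added   = sorted(new_by_key.keys() - old_by_key.keys())
    removed = sorted(old_by_key.keys() - new_by_key.keys())
    amended = sorted(k for k in old_by_key.keys() & new_by_key.keys()
                     if old_by_key[k] != new_by_key[k])
    return {"added": added, "removed": removed, "amended": amended}

old = [{"SUBJID": "001", "AETERM": "Headache"},
       {"SUBJID": "002", "AETERM": "Nausea"}]
new = [{"SUBJID": "001", "AETERM": "Migraine"},   # amended since last pull
       {"SUBJID": "003", "AETERM": "Rash"}]       # added; 002 removed
result = diff_snapshots(old, new)
print(result)
```

In practice this kind of comparison would be done in SAS against full study extracts, but the logic is the same: key the records, then set-difference the keys and field-compare the overlap.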
At a broader level, statisticians and SAS programmers could prepare cross-study analyses. This may be combining or comparing similar but separate studies. Or it could be at an organisational level, to look at site performance or look at recruitment for specific indications or regions, as sort of a higher level risk based monitoring.
There is then organisational statistics, and I really just add this as a way of saying that there are so many departments in a clinical organisation that use statistics. Whether it’s project management tracking how well projects are running, or QA wanting to compile results about CAPAs, or even HR wanting to analyse results of their most recent employee satisfaction survey. Statisticians and programmers can help with any of these, whether it’s with planning the what and how or helping with analysis and interpretation.
Lastly, there is training. I think it’s beneficial for everybody in a clinical study team to understand the fundamentals of clinical statistics, and specifically the effects of dirty or incomplete data on the ability to properly present and interpret such data. Such training should of course be specific to the audience, so the training one would give to data management would be quite different to that for project management, clinical or regulatory.
In summary, my take-away messages are these: statisticians are a critical element of any clinical study team. Keep us involved. Ask us questions, and ask more of us.