Timothy Dawes of Genentech and Elliot Hui of the University of California, Irvine share their well-received presentation from SLAS2017 in Washington, DC.
2. Next-Generation HTS
How much improvement would be required to make a paradigm shift in drug discovery?
● Screen the complete library in 1 day
● Screen 20 million compounds (see the throughput sketch after this list)
● Generate PK or Tox data during the screen
● What do we do with all the data?
● Cost is most important. Currently $1 per compound at a CRO.
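To put the one-day, 20-million-compound target in perspective, here is a rough back-of-the-envelope throughput calculation (a sketch, not part of the presentation; the 1536-well plate comparison is an assumption):

```python
# Rough throughput arithmetic for the "20 million compounds in 1 day" target.
# Only the 20M/day goal comes from the slide; the plate format is an assumed comparison point.

compounds = 20_000_000            # library size to screen in one day
seconds_per_day = 24 * 60 * 60    # 86,400 s

rate = compounds / seconds_per_day
print(f"Required sustained rate: {rate:.0f} compounds/s")            # ~231 compounds/s

# Equivalent load in 1536-well microtiter plates (assumed format):
wells_per_plate = 1536
plates = compounds / wells_per_plate
print(f"Plates per day: {plates:.0f}")                               # ~13,021 plates
print(f"Time budget per plate: {seconds_per_day / plates:.1f} s")    # ~6.6 s/plate
```

At $1 per compound, the same 20-million-compound screen would cost on the order of $20 million at current CRO pricing, which is why cost per data point dominates the discussion.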
3. Why is this needed?
What new approaches could become possible with dramatically faster HTS?
● Richer HTS (IC50, plus Tox and PK data; see the IC50 sketch after this list)
○ Chemical activity, etc.
○ Chemical optimizability
● Screening of larger libraries, biologics
○ Tracking and databasing are key
● Broad accessibility: eliminate the need for current infrastructure
● Success rate improvement
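Richer HTS implies per-compound concentration-response data rather than a single-point readout. Below is a minimal sketch of extracting an IC50 by fitting a four-parameter logistic (Hill) curve with SciPy; the data points and parameter values are illustrative placeholders, not from the presentation:

```python
import numpy as np
from scipy.optimize import curve_fit

def four_param_logistic(conc, bottom, top, ic50, hill):
    """Four-parameter logistic (Hill) dose-response model."""
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** hill)

# Illustrative 8-point dose-response data (concentration in uM, % activity remaining)
conc = np.array([0.001, 0.003, 0.01, 0.03, 0.1, 0.3, 1.0, 3.0])
activity = np.array([98, 95, 88, 70, 45, 22, 10, 5], dtype=float)

# Initial guesses: bottom, top, IC50, Hill slope
p0 = [0.0, 100.0, 0.1, 1.0]
params, _ = curve_fit(four_param_logistic, conc, activity, p0=p0)
bottom, top, ic50, hill = params
print(f"Estimated IC50: {ic50:.3f} uM (Hill slope {hill:.2f})")
```

Generating this kind of curve for every compound multiplies the number of data points per compound, which is exactly why the throughput gains discussed above are needed.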
4. What is stopping us?
Are we at the limit of microtiter plate scaling?
● Microtiter format not relevant for next gen?
● Faster readers required?
● Is assay development the true bottleneck?
● System reliability: less human babysitting.
● Number of plates per batch is a key bottleneck.
● New format needs to be compatible with the assays.
5. How can we get there?
What approaches and technologies may hold promise towards achieving next-gen HTS?
● Parallel screening systems? Cell-based, etc.
● Droplet microfluidics? Issues: diffusion of compounds out of drops; cell-based or high-content assays may not transfer.
● Barcoded libraries?
● Artificial intelligence required to process large amounts of new data?
● Annotation to bring experience from previous screens (successes and failures) to inform future screens.
7. Sun and Kennedy, PhD dissertation 2015
Droplet formation by automated sipping
Sequential addition of reaction reagents to individual droplets
8. Price and Paegel, Anal. Chem. 2015
Barcoded Library
Paegel employs combinatorial compounds tethered to individual beads with DNA barcodes. Reactions are performed in 100 pL droplets. Hits are sorted and then identified by DNA sequencing. Fast enough to screen more than 1 million compounds per day (only 30k demonstrated so far).
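For a sense of scale, a simple calculation based on the numbers quoted above (100 pL droplets, 1 million compounds per day); the 5 uL-per-well plate comparison is an assumption for illustration:

```python
# Scale estimate for droplet-based screening at the rate quoted above.
droplet_volume_pL = 100
compounds_per_day = 1_000_000
seconds_per_day = 24 * 60 * 60

droplet_rate = compounds_per_day / seconds_per_day
print(f"Droplet rate: ~{droplet_rate:.1f} droplets/s")               # ~11.6 droplets/s

total_volume_uL = droplet_volume_pL * compounds_per_day * 1e-6       # pL -> uL
print(f"Total assay volume: {total_volume_uL:.0f} uL")               # 100 uL for 1M assays

# Assumed comparison: the same screen in 1536-well plates at 5 uL per well
well_volume_uL = 5
plate_volume_L = well_volume_uL * compounds_per_day / 1e6
print(f"Equivalent plate-based volume: ~{plate_volume_L:.0f} L")     # ~5 L of assay volume
```

The reduction in reagent volume (from liters to microliters for a million assays) is a large part of the cost argument for droplet formats.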
10. Deep Learning Requires Deep Data
Machine learning started in the 1950s: decision trees, nearest neighbors, and neural nets, each requiring a single data set with a single algorithm or layer.
Deep learning requires larger data sets and employs multiple layers and algorithms (a minimal sketch follows at the end of this section).
We need to collect more data!! More layers.
● Would companies share data?
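To illustrate the single-layer versus multi-layer distinction drawn above, here is a minimal, generic sketch of a stacked fully connected network in NumPy; the layer sizes and inputs are placeholders, not anything from the presentation:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

# "Classic" machine learning: a single weight matrix / single transformation.
W_single = rng.normal(size=(128, 1))

# Deep learning: several stacked layers. Each layer adds representational
# capacity, but also parameters that need more data to fit reliably.
layer_sizes = [128, 64, 32, 1]                    # placeholder architecture
weights = [rng.normal(size=(m, n)) * 0.1
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

def forward(x, weights):
    """Forward pass through stacked fully connected layers with ReLU."""
    h = x
    for W in weights[:-1]:
        h = relu(h @ W)
    return h @ weights[-1]                        # linear output layer

x = rng.normal(size=(10, 128))                    # 10 samples x 128 assay features (placeholder)
print(forward(x, weights).shape)                  # (10, 1)
```

The more layers (and parameters) in the stack, the more screening data are needed to train it; that is the "deep learning requires deep data" point, and the motivation for asking whether companies would share data.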