How New York Genome Center Manages the Massive Data
Generated from DNA Sequencing
Transcript of a sponsored discussion on how the drive to better diagnose diseases and develop
more effective treatments is aided by swift, cost-efficient, and accessible big-data analytics
infrastructure.
Listen to the podcast. Find it on iTunes. Get the mobile app. Sponsor: Hewlett
Packard Enterprise.
Dana Gardner: Hello, and welcome to the next edition of the HPE Discover Podcast Series.
I'm Dana Gardner, Principal Analyst at Interarbor Solutions, your host and
moderator for this ongoing discussion on IT innovation and how it’s making
an impact on people’s lives.
Our next big-data use case leadership discussion examines how the non-profit
New York Genome Center manages and analyzes up to 12 terabytes of data
generated each day from its genome sequence appliances. We’ll learn how to
better diagnose disease and develop more effective treatments aided by swift,
cost-efficient, and accessible big-data analytics.
To hear how they exploit the vast data outputs to then speedily correlate them for time-sensitive
reporting, please join me in welcoming our guest.
We're here with Toby Bloom, Deputy Scientific Director for Informatics at the New York
Genome Center in New York. Welcome, Toby.
Toby Bloom: Hi. Thank you.
Gardner: First, tell us a little bit about your organization. It seems like a unique institution,
with a large variety of backers and consortium members. Tell us about it.
Bloom: New York Genome Center is about two-and-a-half years old. It was formed initially as a
collaboration among 12 of the large medical institutions in New York, Cornell, Columbia,
NewYork-Presbyterian Hospital, Mount Sinai, NYU, Einstein Montefiore, and Stony Brook
University. All of the big hospitals in New York decided that it would be better to have one
genome center than have to build 12 of them. So we were formed initially to be the center of
genomics in New York.
Gardner: And what does one do at a center of genomics?
Bloom: We're a biomedical research facility that has a large capacity to sequence genomes and
use the resulting data output to analyze the genomes, find the causes of disease, and hopefully
treatments of disease, and have a big impact on healthcare and on how medicine
works now.
Gardner: When it comes to doing this well, it sounds like you are generating an
awesome amount of data. What sort of data is that and where does it come from?
Bloom: Right now, we have a number of genome sequencing instruments that
produce about 12 terabytes of raw data per day. That raw data is basically lots of
strings of As, Cs, Ts and Gs, DNA data from genomes from patients who we're sequencing.
Those can be patients who are sick, where we're looking for a specific treatment. They can be
patients in large research studies, where we're trying to use and correlate a large number of
genomes to find the similarities that show us the cause of the disease.
Gardner: When we look at a typical big-data environment like in a corporation, it’s often
transactional information. It might be outputs from sensors or machines. How is this a different
data problem when you are dealing with DNA sequences?
Lots of data
Bloom: Some of it’s the same problem, and some of it’s different. We're bringing in lots of
data. The raw data, I said, is probably about 12 terabytes a day right now. That could easily
double in the next year. But then we analyze the data, and I probably store three to four times
that much data in a day.
In a lot of environments, you start with the raw data, you analyze it, and you cook it down to
your answers. In our environment, it just gets bigger and bigger for a long time, before we get
the answers and can make it smaller. So we're dealing with very large amounts of data.
We do have one research project now that is taking in streaming data from devices, and we think
over time we'll likely be taking in data from things like cardiac monitors, glucose
monitors, and other kinds of wearable medical devices. Right now, we
are taking in data off apps on smartphones that are tracking movement
for some patients in a rheumatoid arthritis study we're doing.
We have to analyze a bunch of different kinds of data together. We’d like to
bring in full medical records for those patients and integrate them with the genomic data. So we do
have a wide variety of data that we have to integrate, and a lot of it is quite large.
Gardner: When you were looking for the technological platforms and solutions to accommodate
your specific needs, how did that pan out? What works? What doesn’t work? And where are you
in terms of putting in place the needed infrastructure?
Bloom: The data that comes off the machines is in large files, and a lot of the complex analysis
we do, we do initially on those large files. I am talking about files that are from 150 to 500
gigabytes or maybe a terabyte each, and we do a lot of machine-learning analysis on those. We
do a bunch of Bayesian statistical analyses. There are a large number of methods we use to try to
extract the information from that raw data.
When we've figured out the variants and mutations in the DNA that we think are correlated with
the diseases we're interested in, we then want to load all of that into a
database with all of the other data we have to make it easy for researchers to use in a number of
different ways. We want to let them find more data like the data they have, so that they can get
statistical validation of their hypotheses.
We want them to be able to find more patients for cohorts, so they can sequence more and get
enough data. We need to be able to ask how likely it is that, if you have a given
genomic variant, you will get a given disease. Or, if you have the disease, how likely it is that you
have this variant. You can only do that if it’s easy to find all of that data together in one place in
an organized way.
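As an aside, the two questions Bloom describes are the conditional probabilities P(disease | variant) and P(variant | disease). A minimal sketch of how they come out of simple cohort counts, with all numbers hypothetical:

```python
# Hypothetical cohort counts -- illustrative only, not real study data.
have_variant = 5000   # patients carrying a given genomic variant
have_disease = 3000   # patients diagnosed with the disease
have_both = 1200      # patients with the variant AND the disease

# "If you have this variant, how likely is the disease?"  P(disease | variant)
p_disease_given_variant = have_both / have_variant

# "If you have the disease, how likely is this variant?"  P(variant | disease)
p_variant_given_disease = have_both / have_disease

print(f"P(disease | variant) = {p_disease_given_variant:.2f}")  # 0.24
print(f"P(variant | disease) = {p_variant_given_disease:.2f}")  # 0.40
```

Both questions matter for exactly the reason Bloom gives later: a variant can confer risk (a high P(variant | disease)) without determining the disease (a low P(disease | variant)).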
So we really need to load that data into a database and connect it to the medical records or the
symptoms and disease information we have about the patients and connect DNA data with RNA
data with epigenetic data with microbiome data. We needed a database to do that.
We looked at a number of different databases, but we had some very hard requirements to solve.
We were looking for one that could handle tens of trillions of rows in a table without falling
over, and that could answer queries fast across multiple tables of that size. We also need to be
able to easily change and add new kinds of data to it,
because we're always finding new kinds of data we want to correlate. So there are things like
that.
Simple answer
We need to be able to load terabytes of data a day. But more than anything, I had a lot of
conversations with statisticians about why they don’t like databases, about why they keep asking
me for all of the data in comma-delimited files instead of databases. And the answer, when you
boiled it down, was pretty simple.
When you have statisticians who are looking at data with huge numbers of attributes and huge
numbers of patients, the kinds of statistical analysis they're doing means they want to look at
some much smaller combinations of the attributes for all of the patients and see if they can find
correlations, and then change that and look at different subsets. That absolutely requires a
column-oriented database. A row-oriented relational database will bring in the whole database to
get you that data. It takes forever, and it’s too slow for them.
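A rough sketch of the access pattern Bloom describes (a toy model, not anything Vertica-specific; all names and data are hypothetical): the statisticians want a handful of attributes for every patient. In a column-oriented layout each attribute is stored contiguously, so only the requested columns are touched; a row-oriented layout must scan every full record to pick out those few fields.

```python
# Toy model of the two layouts. Names and data are hypothetical.
rows = [  # row-oriented: one record per patient, all attributes together
    {"patient": i, "variant_x": i % 2, "age": 40 + i % 30,
     "smoker": i % 3 == 0, "bmi": 22.0 + (i % 10)}
    for i in range(1000)
]

# Column-oriented: one array per attribute.
columns = {key: [r[key] for r in rows] for key in rows[0]}

# Statistician's request: just two attributes, for ALL patients.
wanted = ["variant_x", "smoker"]

# Row layout: every full record is scanned to extract two fields.
subset_from_rows = [[r[k] for k in wanted] for r in rows]

# Column layout: reads exactly the two requested arrays, nothing else.
subset_from_columns = list(zip(*(columns[k] for k in wanted)))

assert [tuple(x) for x in subset_from_rows] == subset_from_columns
```

With a few attributes out of thousands, the column layout reads a tiny fraction of the data, which is why this workload "absolutely requires" a column store.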
So, we started from that. We must have looked at four or five different databases. Vertica was the
one that could handle the scale and the speed, and was robust and reliable enough, and it is our
platform now. We're still loading in the first round of our data. We're still in the tens of billions of
rows, as opposed to trillions of rows, but we'll get there.
Gardner: You’re also in the healthcare field. So there are considerations around privacy,
governance, auditing, and, of course, price sensitivity, because you're a non-profit. How did that
factor into your decision? Is the use of off-the-shelf hardware a consideration, or off-the-shelf
storage? Are you looking at conversion infrastructure? How did you manage some of those cost
and regulatory issues?
Bloom: Regulatory issues are enormous. There are regulations on clinical data that we have to
deal with. There are regulations on research data that overlap and are not fully consistent with the
regulations on clinical data. We do have to be very careful about who has access to which sets of
data, and we have all of this data in one database, but that doesn’t mean any one person can
actually have access to all of that data.
We want it in one place, because over time, scientists integrate more and more data and get
permission to integrate larger and larger datasets, and we need that. There are studies we're doing
that are going to need over 100,000 patients in them to get statistical validity on the hypotheses.
So we want it all in one place.
What we're doing right now is keeping all of the access-control information about who can
access which datasets as data in the database, and we basically append clauses to every query to
filter down the data to what any particular user can use. Then we tell them the answers
for the datasets they can access, how much additional data exists that they couldn’t see, and, if
they need that information, how to go about requesting access to it.
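A minimal sketch of the query-rewriting approach Bloom outlines, with hypothetical table and column names: per-user dataset permissions live in the database, and an access-control clause is appended to each incoming query so only permitted datasets are returned.

```python
def restrict_query(base_query: str, permitted_datasets: list[str]) -> str:
    """Append an access-control filter so a user sees only permitted datasets.

    Hypothetical sketch: assumes a simple SELECT with no existing WHERE,
    GROUP BY, or ORDER BY clause; a real rewriter would parse the statement
    and splice the predicate into the right place.
    """
    placeholders = ", ".join(f"'{d}'" for d in permitted_datasets)
    return f"{base_query} WHERE dataset_id IN ({placeholders})"

# Hypothetical schema and dataset names, for illustration only.
q = restrict_query("SELECT patient_id, variant FROM patient_variants",
                   ["ra_study_2015", "public_reference"])
print(q)
```

The same permission table can then be queried a second time to report, as Bloom describes, how much data exists beyond what the user was allowed to see.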
Gardner: So you're able to manage some of those very stringent requirements around access
control. How about that infrastructure cost equation?
Bloom: Infrastructure cost is a real issue, but essentially, what we're dealing with is, if we're
going to do the work we need to do and deal with the data we have to deal with, there are two
options. We spend it on capital equipment or we spend it on operating costs to build it ourselves.
In this case, though not in all cases, it made much more sense to take advantage of existing
equipment and software rather than try to reproduce it, and to spend our time and our personnel's
time on things we couldn’t as easily get elsewhere.
A lot of work went into Vertica. We're not going to reproduce it very easily. The open-source
tools that are out there don’t match it yet. They may eventually, but they don’t now.
Getting it right
Gardner: When we think about the paybacks or determining return on investment (ROI) in a
business setting, there’s a fairly simple, straightforward formula. For you, how do you know
you’ve got this right? What are the equivalents of what we might refer to in the business
world as service-level agreements (SLAs) or key performance indicators (KPIs)? What are you
looking for that tells you you’ve got it right and you’re getting the job done, based
on all of its requirements and for all of these different constituencies?
Bloom: There’s a set of different things. The thing I am looking for first is whether the scientists
who we work with most closely, who will use this first, will be able to frame the questions they
want to ask in terms of the interface and infrastructure we’ve provided.
I want to know that we can answer the scientific questions that people have with the data we
have and that we’ve made it accessible in the right way. That we’ve integrated, connected and
aggregated the data in the right ways, so they can find what they are looking for. There's no easy
metric for that. There’s going to be a lot of beta testing.
The second thing is, are we hitting the performance standards we want? How much data can I
load how fast? How much data can I retrieve from a query? Those statisticians who don’t want to
use relational databases, still want to pull out all those columns and they want to do their
sophisticated analysis outside the database.
Eventually, I may convince them that they can leave the data in the database and run their R
scripts there, but right now they want to pull it out. I need to know that I can pull it out fast for
them, and that the way it’s organized won’t keep them from getting their data out.
Gardner: Let's step back to the big picture of what we can accomplish in a health-level payback.
When you’ve got the data managed, when you’ve got the input and output at a speed that’s
acceptable, when you’re able to manage all these different level studies, what sort of paybacks do
we get in terms of people’s health? How do we know we are succeeding when it comes to
disease, treatment, and understanding more about people and their health?
Bloom: The place where this database is going to be the most useful, not by any means the only
way it will be used, is in our investigations of common and complex diseases, and how we find
the causes of them and how we can get from causes to treatments.
I'm talking about looking at diseases like Alzheimer’s, asthma, diabetes, Parkinson’s, and ALS,
which is not so common, but certainly falls in the complex-disease category. These are diseases
that are caused by some combination of genomic variants, not by a single gene gone wrong.
There are a lot of complex questions we need to ask in finding those, and it takes a lot of
patients and a lot of genomes to answer them.
The payoff is that if we can use this data to collect enough information about enough diseases,
we can ask the questions: it looks like this genomic variant is correlated with this disease; how
many people in the database have this variant, and of those, how many actually have the disease?
And of those who have the disease, how many have this variant? I need to ask both of those
questions, because a lot of these variants confer risk, but they don’t absolutely give
you the disease.
If I am going to find the answers, I need to be able to ask those questions, and those are the
things that are really hard to do with the raw data in files. If I can do just that, think about the
impact on all of us. If we can find the molecular causes of Alzheimer’s, that could lead to
treatments or prevention, and the same goes for all of those other diseases as well.
Gardner: It’s a very compelling and interesting big data use case, one of the best I’ve heard.
I am afraid we’ll have to leave it there. We've been examining how the New York Genome
Center manages and analyzes vast data outputs to speedily correlate them for time-sensitive
reporting, and we’ve learned how the drive to better diagnose diseases and develop more
effective treatments is aided by swift, cost-efficient, and accessible big-data analytics
infrastructure.
So, join me in thanking our guest. We’ve been here with Toby Bloom, Deputy Scientific Director
for Informatics at the New York Genome Center. Thank you so much, Toby.
Bloom: Thank you, and thanks for inviting me.
Gardner: Thank you also to our audience for joining us for this big data innovation case study
discussion.
I'm Dana Gardner, Principal Analyst at Interarbor Solutions, your host for this ongoing series of
HPE-sponsored discussions. Thanks again for listening, and come back next time.
Copyright Interarbor Solutions, LLC, 2005-2016. All rights reserved.
You may also be interested in:
• Redmonk analysts on best navigating the tricky path to DevOps adoption
• DevOps by design: A practical guide to effectively ushering DevOps into any organization
• Need for Fast Analytics in Healthcare Spurs Sogeti Converged Solutions Partnership Model
• HPE's composable infrastructure sets stage for hybrid market brokering role
• Nottingham Trent University Elevates Big Data's Role to Improving Student Retention in Higher Education
• Forrester analyst Kurt Bittner on the inevitability of DevOps
• Agile on fire: IT enters the new era of 'continuous' everything
• Big data enables top user experiences and extreme personalization for Intuit TurboTax
• Feedback loops: The confluence of DevOps and big data
• IoT brings on development demands that DevOps manages best, say experts
• Big data generates new insights into what’s happening in the world's tropical ecosystems
• DevOps and security, a match made in heaven
• How Sprint employs orchestration and automation to bring IT into DevOps readiness

Cybersecurity Standards: The Open Group Explores Security and Ways to Assure ...
 

Similar to NY Genome Center Manages 12TB DNA Data Daily

Possible Solution for Managing the Worlds Personal Genetic Data - DNA Guide, ...
Possible Solution for Managing the Worlds Personal Genetic Data - DNA Guide, ...Possible Solution for Managing the Worlds Personal Genetic Data - DNA Guide, ...
Possible Solution for Managing the Worlds Personal Genetic Data - DNA Guide, ...DNA Compass
 
BIG DATA | How to explain it & how to use it for your career?
BIG DATA | How to explain it & how to use it for your career?BIG DATA | How to explain it & how to use it for your career?
BIG DATA | How to explain it & how to use it for your career?Tuan Yang
 
Big Data Analytics in Hospitals By Dr.Mahboob ali khan Phd
Big Data Analytics in Hospitals By Dr.Mahboob ali khan PhdBig Data Analytics in Hospitals By Dr.Mahboob ali khan Phd
Big Data Analytics in Hospitals By Dr.Mahboob ali khan PhdHealthcare consultant
 
New Health Data Deluges Require Secure Information Flow Enablement Via Standa...
New Health Data Deluges Require Secure Information Flow Enablement Via Standa...New Health Data Deluges Require Secure Information Flow Enablement Via Standa...
New Health Data Deluges Require Secure Information Flow Enablement Via Standa...Dana Gardner
 
Nature Drug Discovery Dec 2013
Nature Drug Discovery Dec 2013Nature Drug Discovery Dec 2013
Nature Drug Discovery Dec 2013charles10000
 
Nature Drug Discovery Dec 2013
Nature Drug Discovery Dec 2013Nature Drug Discovery Dec 2013
Nature Drug Discovery Dec 2013charles10000
 
Cerner’s Lifesaving Sepsis Control Solution Shows the Potential of Bringing M...
Cerner’s Lifesaving Sepsis Control Solution Shows the Potential of Bringing M...Cerner’s Lifesaving Sepsis Control Solution Shows the Potential of Bringing M...
Cerner’s Lifesaving Sepsis Control Solution Shows the Potential of Bringing M...Dana Gardner
 
Introduction to machine_learning_us
Introduction to machine_learning_usIntroduction to machine_learning_us
Introduction to machine_learning_usAnasua Sarkar
 
Inside the Operating Room of the Future: How Mass General is Unleashing the P...
Inside the Operating Room of the Future: How Mass General is Unleashing the P...Inside the Operating Room of the Future: How Mass General is Unleashing the P...
Inside the Operating Room of the Future: How Mass General is Unleashing the P...Elizabeth Mixson
 
Slides for rare disorders meeting
Slides for rare disorders meetingSlides for rare disorders meeting
Slides for rare disorders meetingSean Ekins
 
Data for EMR systems
Data for EMR systemsData for EMR systems
Data for EMR systemsSteven Fritz
 
Healthcare data's perfect storm
Healthcare data's perfect stormHealthcare data's perfect storm
Healthcare data's perfect stormHitachi Vantara
 
HEALTH PREDICTION ANALYSIS USING DATA MINING
HEALTH PREDICTION ANALYSIS USING DATA  MININGHEALTH PREDICTION ANALYSIS USING DATA  MINING
HEALTH PREDICTION ANALYSIS USING DATA MININGAshish Salve
 
Using a Big Data Solution Helps Conservation International Identify and Proac...
Using a Big Data Solution Helps Conservation International Identify and Proac...Using a Big Data Solution Helps Conservation International Identify and Proac...
Using a Big Data Solution Helps Conservation International Identify and Proac...Dana Gardner
 
Big Data and its Impact on Industry (Example of the Pharmaceutical Industry)
Big Data and its Impact on Industry (Example of the Pharmaceutical Industry)Big Data and its Impact on Industry (Example of the Pharmaceutical Industry)
Big Data and its Impact on Industry (Example of the Pharmaceutical Industry)Hellmuth Broda
 
Rock Report: Big Data by @Rock_Health
Rock Report: Big Data by @Rock_HealthRock Report: Big Data by @Rock_Health
Rock Report: Big Data by @Rock_HealthRock Health
 
eBook - Data Analytics in Healthcare
eBook - Data Analytics in HealthcareeBook - Data Analytics in Healthcare
eBook - Data Analytics in HealthcareNextGen Healthcare
 

Similar to NY Genome Center Manages 12TB DNA Data Daily (20)

Possible Solution for Managing the Worlds Personal Genetic Data - DNA Guide, ...
Possible Solution for Managing the Worlds Personal Genetic Data - DNA Guide, ...Possible Solution for Managing the Worlds Personal Genetic Data - DNA Guide, ...
Possible Solution for Managing the Worlds Personal Genetic Data - DNA Guide, ...
 
BIG DATA | How to explain it & how to use it for your career?
BIG DATA | How to explain it & how to use it for your career?BIG DATA | How to explain it & how to use it for your career?
BIG DATA | How to explain it & how to use it for your career?
 
Big Data Analytics in Hospitals By Dr.Mahboob ali khan Phd
Big Data Analytics in Hospitals By Dr.Mahboob ali khan PhdBig Data Analytics in Hospitals By Dr.Mahboob ali khan Phd
Big Data Analytics in Hospitals By Dr.Mahboob ali khan Phd
 
New Health Data Deluges Require Secure Information Flow Enablement Via Standa...
New Health Data Deluges Require Secure Information Flow Enablement Via Standa...New Health Data Deluges Require Secure Information Flow Enablement Via Standa...
New Health Data Deluges Require Secure Information Flow Enablement Via Standa...
 
Nature Drug Discovery Dec 2013
Nature Drug Discovery Dec 2013Nature Drug Discovery Dec 2013
Nature Drug Discovery Dec 2013
 
Nature Drug Discovery Dec 2013
Nature Drug Discovery Dec 2013Nature Drug Discovery Dec 2013
Nature Drug Discovery Dec 2013
 
Cerner’s Lifesaving Sepsis Control Solution Shows the Potential of Bringing M...
Cerner’s Lifesaving Sepsis Control Solution Shows the Potential of Bringing M...Cerner’s Lifesaving Sepsis Control Solution Shows the Potential of Bringing M...
Cerner’s Lifesaving Sepsis Control Solution Shows the Potential of Bringing M...
 
Introduction to machine_learning_us
Introduction to machine_learning_usIntroduction to machine_learning_us
Introduction to machine_learning_us
 
Inside the Operating Room of the Future: How Mass General is Unleashing the P...
Inside the Operating Room of the Future: How Mass General is Unleashing the P...Inside the Operating Room of the Future: How Mass General is Unleashing the P...
Inside the Operating Room of the Future: How Mass General is Unleashing the P...
 
Slides for rare disorders meeting
Slides for rare disorders meetingSlides for rare disorders meeting
Slides for rare disorders meeting
 
Data for EMR systems
Data for EMR systemsData for EMR systems
Data for EMR systems
 
Day 1: Real-World Data Panel
Day 1: Real-World Data Panel Day 1: Real-World Data Panel
Day 1: Real-World Data Panel
 
Healthcare data's perfect storm
Healthcare data's perfect stormHealthcare data's perfect storm
Healthcare data's perfect storm
 
HEALTH PREDICTION ANALYSIS USING DATA MINING
HEALTH PREDICTION ANALYSIS USING DATA  MININGHEALTH PREDICTION ANALYSIS USING DATA  MINING
HEALTH PREDICTION ANALYSIS USING DATA MINING
 
Using a Big Data Solution Helps Conservation International Identify and Proac...
Using a Big Data Solution Helps Conservation International Identify and Proac...Using a Big Data Solution Helps Conservation International Identify and Proac...
Using a Big Data Solution Helps Conservation International Identify and Proac...
 
The role of big data in medicine
The role of big data in medicineThe role of big data in medicine
The role of big data in medicine
 
Big Data and its Impact on Industry (Example of the Pharmaceutical Industry)
Big Data and its Impact on Industry (Example of the Pharmaceutical Industry)Big Data and its Impact on Industry (Example of the Pharmaceutical Industry)
Big Data and its Impact on Industry (Example of the Pharmaceutical Industry)
 
Rock Report: Big Data by @Rock_Health
Rock Report: Big Data by @Rock_HealthRock Report: Big Data by @Rock_Health
Rock Report: Big Data by @Rock_Health
 
eBook - Data Analytics in Healthcare
eBook - Data Analytics in HealthcareeBook - Data Analytics in Healthcare
eBook - Data Analytics in Healthcare
 
Improving EMRs 2009
Improving EMRs 2009Improving EMRs 2009
Improving EMRs 2009
 

Recently uploaded

Integration and Automation in Practice: CI/CD in Mule Integration and Automat...
Integration and Automation in Practice: CI/CD in Mule Integration and Automat...Integration and Automation in Practice: CI/CD in Mule Integration and Automat...
Integration and Automation in Practice: CI/CD in Mule Integration and Automat...Patryk Bandurski
 
SAP Build Work Zone - Overview L2-L3.pptx
SAP Build Work Zone - Overview L2-L3.pptxSAP Build Work Zone - Overview L2-L3.pptx
SAP Build Work Zone - Overview L2-L3.pptxNavinnSomaal
 
Leverage Zilliz Serverless - Up to 50X Saving for Your Vector Storage Cost
Leverage Zilliz Serverless - Up to 50X Saving for Your Vector Storage CostLeverage Zilliz Serverless - Up to 50X Saving for Your Vector Storage Cost
Leverage Zilliz Serverless - Up to 50X Saving for Your Vector Storage CostZilliz
 
SIP trunking in Janus @ Kamailio World 2024
SIP trunking in Janus @ Kamailio World 2024SIP trunking in Janus @ Kamailio World 2024
SIP trunking in Janus @ Kamailio World 2024Lorenzo Miniero
 
What's New in Teams Calling, Meetings and Devices March 2024
What's New in Teams Calling, Meetings and Devices March 2024What's New in Teams Calling, Meetings and Devices March 2024
What's New in Teams Calling, Meetings and Devices March 2024Stephanie Beckett
 
Gen AI in Business - Global Trends Report 2024.pdf
Gen AI in Business - Global Trends Report 2024.pdfGen AI in Business - Global Trends Report 2024.pdf
Gen AI in Business - Global Trends Report 2024.pdfAddepto
 
Story boards and shot lists for my a level piece
Story boards and shot lists for my a level pieceStory boards and shot lists for my a level piece
Story boards and shot lists for my a level piececharlottematthew16
 
New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024BookNet Canada
 
Vector Databases 101 - An introduction to the world of Vector Databases
Vector Databases 101 - An introduction to the world of Vector DatabasesVector Databases 101 - An introduction to the world of Vector Databases
Vector Databases 101 - An introduction to the world of Vector DatabasesZilliz
 
Unraveling Multimodality with Large Language Models.pdf
Unraveling Multimodality with Large Language Models.pdfUnraveling Multimodality with Large Language Models.pdf
Unraveling Multimodality with Large Language Models.pdfAlex Barbosa Coqueiro
 
Connect Wave/ connectwave Pitch Deck Presentation
Connect Wave/ connectwave Pitch Deck PresentationConnect Wave/ connectwave Pitch Deck Presentation
Connect Wave/ connectwave Pitch Deck PresentationSlibray Presentation
 
Training state-of-the-art general text embedding
Training state-of-the-art general text embeddingTraining state-of-the-art general text embedding
Training state-of-the-art general text embeddingZilliz
 
"LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks...
"LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks..."LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks...
"LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks...Fwdays
 
My INSURER PTE LTD - Insurtech Innovation Award 2024
My INSURER PTE LTD - Insurtech Innovation Award 2024My INSURER PTE LTD - Insurtech Innovation Award 2024
My INSURER PTE LTD - Insurtech Innovation Award 2024The Digital Insurer
 
Anypoint Exchange: It’s Not Just a Repo!
Anypoint Exchange: It’s Not Just a Repo!Anypoint Exchange: It’s Not Just a Repo!
Anypoint Exchange: It’s Not Just a Repo!Manik S Magar
 
Designing IA for AI - Information Architecture Conference 2024
Designing IA for AI - Information Architecture Conference 2024Designing IA for AI - Information Architecture Conference 2024
Designing IA for AI - Information Architecture Conference 2024Enterprise Knowledge
 
"ML in Production",Oleksandr Bagan
"ML in Production",Oleksandr Bagan"ML in Production",Oleksandr Bagan
"ML in Production",Oleksandr BaganFwdays
 
"Subclassing and Composition – A Pythonic Tour of Trade-Offs", Hynek Schlawack
"Subclassing and Composition – A Pythonic Tour of Trade-Offs", Hynek Schlawack"Subclassing and Composition – A Pythonic Tour of Trade-Offs", Hynek Schlawack
"Subclassing and Composition – A Pythonic Tour of Trade-Offs", Hynek SchlawackFwdays
 
Nell’iperspazio con Rocket: il Framework Web di Rust!
Nell’iperspazio con Rocket: il Framework Web di Rust!Nell’iperspazio con Rocket: il Framework Web di Rust!
Nell’iperspazio con Rocket: il Framework Web di Rust!Commit University
 
Ensuring Technical Readiness For Copilot in Microsoft 365
Ensuring Technical Readiness For Copilot in Microsoft 365Ensuring Technical Readiness For Copilot in Microsoft 365
Ensuring Technical Readiness For Copilot in Microsoft 3652toLead Limited
 

Recently uploaded (20)

Integration and Automation in Practice: CI/CD in Mule Integration and Automat...
Integration and Automation in Practice: CI/CD in Mule Integration and Automat...Integration and Automation in Practice: CI/CD in Mule Integration and Automat...
Integration and Automation in Practice: CI/CD in Mule Integration and Automat...
 
SAP Build Work Zone - Overview L2-L3.pptx
SAP Build Work Zone - Overview L2-L3.pptxSAP Build Work Zone - Overview L2-L3.pptx
SAP Build Work Zone - Overview L2-L3.pptx
 
Leverage Zilliz Serverless - Up to 50X Saving for Your Vector Storage Cost
Leverage Zilliz Serverless - Up to 50X Saving for Your Vector Storage CostLeverage Zilliz Serverless - Up to 50X Saving for Your Vector Storage Cost
Leverage Zilliz Serverless - Up to 50X Saving for Your Vector Storage Cost
 
SIP trunking in Janus @ Kamailio World 2024
SIP trunking in Janus @ Kamailio World 2024SIP trunking in Janus @ Kamailio World 2024
SIP trunking in Janus @ Kamailio World 2024
 
What's New in Teams Calling, Meetings and Devices March 2024
What's New in Teams Calling, Meetings and Devices March 2024What's New in Teams Calling, Meetings and Devices March 2024
What's New in Teams Calling, Meetings and Devices March 2024
 
Gen AI in Business - Global Trends Report 2024.pdf
Gen AI in Business - Global Trends Report 2024.pdfGen AI in Business - Global Trends Report 2024.pdf
Gen AI in Business - Global Trends Report 2024.pdf
 
Story boards and shot lists for my a level piece
Story boards and shot lists for my a level pieceStory boards and shot lists for my a level piece
Story boards and shot lists for my a level piece
 
New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
 
Vector Databases 101 - An introduction to the world of Vector Databases
Vector Databases 101 - An introduction to the world of Vector DatabasesVector Databases 101 - An introduction to the world of Vector Databases
Vector Databases 101 - An introduction to the world of Vector Databases
 
Unraveling Multimodality with Large Language Models.pdf
Unraveling Multimodality with Large Language Models.pdfUnraveling Multimodality with Large Language Models.pdf
Unraveling Multimodality with Large Language Models.pdf
 
Connect Wave/ connectwave Pitch Deck Presentation
Connect Wave/ connectwave Pitch Deck PresentationConnect Wave/ connectwave Pitch Deck Presentation
Connect Wave/ connectwave Pitch Deck Presentation
 
Training state-of-the-art general text embedding
Training state-of-the-art general text embeddingTraining state-of-the-art general text embedding
Training state-of-the-art general text embedding
 
"LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks...
"LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks..."LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks...
"LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks...
 
My INSURER PTE LTD - Insurtech Innovation Award 2024
My INSURER PTE LTD - Insurtech Innovation Award 2024My INSURER PTE LTD - Insurtech Innovation Award 2024
My INSURER PTE LTD - Insurtech Innovation Award 2024
 
Anypoint Exchange: It’s Not Just a Repo!
Anypoint Exchange: It’s Not Just a Repo!Anypoint Exchange: It’s Not Just a Repo!
Anypoint Exchange: It’s Not Just a Repo!
 
Designing IA for AI - Information Architecture Conference 2024
Designing IA for AI - Information Architecture Conference 2024Designing IA for AI - Information Architecture Conference 2024
Designing IA for AI - Information Architecture Conference 2024
 
"ML in Production",Oleksandr Bagan
"ML in Production",Oleksandr Bagan"ML in Production",Oleksandr Bagan
"ML in Production",Oleksandr Bagan
 
"Subclassing and Composition – A Pythonic Tour of Trade-Offs", Hynek Schlawack
"Subclassing and Composition – A Pythonic Tour of Trade-Offs", Hynek Schlawack"Subclassing and Composition – A Pythonic Tour of Trade-Offs", Hynek Schlawack
"Subclassing and Composition – A Pythonic Tour of Trade-Offs", Hynek Schlawack
 
Nell’iperspazio con Rocket: il Framework Web di Rust!
Nell’iperspazio con Rocket: il Framework Web di Rust!Nell’iperspazio con Rocket: il Framework Web di Rust!
Nell’iperspazio con Rocket: il Framework Web di Rust!
 
Ensuring Technical Readiness For Copilot in Microsoft 365
Ensuring Technical Readiness For Copilot in Microsoft 365Ensuring Technical Readiness For Copilot in Microsoft 365
Ensuring Technical Readiness For Copilot in Microsoft 365
 

NY Genome Center Manages 12TB DNA Data Daily

genomics in New York.

Gardner: And what does one do at a center of genomics?
Bloom: We're a biomedical research facility that has a large capacity to sequence genomes and use the resulting data output to analyze the genomes, find the causes of disease and, hopefully, treatments for disease, and have a big impact on healthcare and on how medicine works now.

Gardner: When it comes to doing this well, it sounds like you're generating an awesome amount of data. What sort of data is that, and where does it come from?

Bloom: Right now, we have a number of genome sequencing instruments that produce about 12 terabytes of raw data per day. That raw data is basically lots of strings of As, Cs, Ts, and Gs: DNA data from the genomes of patients we're sequencing. Those can be patients who are sick, for whom we're looking for a specific treatment. They can be patients in large research studies, where we're trying to correlate a large number of genomes to find the similarities that show us the cause of the disease.

Gardner: When we look at a typical big-data environment, like in a corporation, it's often transactional information. It might be outputs from sensors or machines. How is this a different data problem when you're dealing with DNA sequences?

Lots of data

Bloom: Some of it's the same problem, and some of it's different. We're bringing in lots of data. The raw data, as I said, is probably about 12 terabytes a day right now. That could easily double in the next year. But then we analyze the data, and I probably store three to four times that much data in a day.

In a lot of environments, you start with the raw data, you analyze it, and you cook it down to your answers. In our environment, it just gets bigger and bigger for a long time before we get the answers and can make it smaller. So we're dealing with very large amounts of data.
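To put those figures in perspective, here is a quick back-of-envelope calculation. It simply assumes the quoted rates hold steady all year, which is of course a simplification:

```python
# Rough arithmetic from the figures quoted above; assumes constant
# daily rates, which a real facility would not maintain exactly.
raw_tb_per_day = 12                                   # raw sequencer output
expansion_factor = 4                                  # "three to four times" stored; high end
stored_tb_per_day = raw_tb_per_day * expansion_factor   # 48 TB/day at the high end
stored_pb_per_year = stored_tb_per_day * 365 / 1024     # roughly 17 PB/year
```

Even at the low end of the stated range, that works out to more than a dozen petabytes of derived data per year.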
We do have one research project now that is taking in streaming data from devices, and we think over time we'll likely be taking in data from things like cardiac monitors, glucose monitors, and other kinds of wearable medical devices. Right now, we're taking in data off apps on smartphones that are tracking movement for some patients in a rheumatoid arthritis study we're doing.

We have to analyze a bunch of different kinds of data together. We'd like to bring in full medical records for those patients and integrate them with the genomic data. So we do have a wide variety of data that we have to integrate, and a lot of it is quite large.

Gardner: When you were looking for the technological platforms and solutions to accommodate your specific needs, how did that pan out? What works? What doesn't work? And where are you in terms of putting in place the needed infrastructure?
Bloom: The data that comes off the machines is in large files, and a lot of the complex analysis we do, we do initially on those large files. I'm talking about files that are from 150 to 500 gigabytes, or maybe a terabyte, each, and we do a lot of machine-learning analysis on those. We do a bunch of Bayesian statistical analyses. There are a large number of methods we use to try to extract the information from that raw data.

When we've figured out the variants and mutations in the DNA that we think are correlated with the disease we're interested in, we then want to load all of that into a database, along with all of the other data we have, to make it easy for researchers to use in a number of different ways.

We want to let them find more data like the data they have, so that they can get statistical validation of their hypotheses. We want them to be able to find more patients for cohorts, so they can sequence more and get enough data. We need to be able to ask how likely it is that, if you have a given genomic variant, you get a given disease; or, if you have the disease, how likely it is that you have this variant. You can only do that if it's easy to find all of that data together in one place, in an organized way.

So we really need to load that data into a database and connect it to the medical records, or the symptoms and disease information we have about the patients, and connect DNA data with RNA data, with epigenetic data, and with microbiome data. We needed a database to do that.

We looked at a number of different databases, but we had some very hard requirements to satisfy. We were looking for one that could handle trillions of rows in a table without falling over, tens of trillions of rows without falling over, and that could answer queries fast across multiple tables with tens of trillions of rows.
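The two-directional question Bloom describes, how likely the disease is given the variant and how likely the variant is given the disease, reduces to a pair of conditional frequencies once variant and disease facts sit together in one queryable place. A toy sketch with invented numbers (nothing here reflects the Center's actual data or schema):

```python
# Toy cohort of 100 patients: one (has_variant, has_disease) pair each.
# All counts are invented for illustration.
cohort = ([(True, True)] * 30 + [(True, False)] * 20 +
          [(False, True)] * 10 + [(False, False)] * 40)

disease_flags_among_carriers = [d for v, d in cohort if v]
variant_flags_among_sick = [v for v, d in cohort if d]

# "Of those who have the variant, how many actually have the disease?"
p_disease_given_variant = (
    sum(disease_flags_among_carriers) / len(disease_flags_among_carriers))  # 30/50 = 0.6

# "Of those who have the disease, how many have this variant?"
p_variant_given_disease = (
    sum(variant_flags_among_sick) / len(variant_flags_among_sick))          # 30/40 = 0.75
```

Against the real database this would be a pair of grouped counts in SQL; the point is that both directions of the question need the variant and disease facts joined in one place.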
We need to be able to easily change and add new kinds of data to it, because we're always finding new kinds of data we want to correlate. So there are things like that.

Simple answer

We need to be able to load terabytes of data a day. But more than anything, I had a lot of conversations with statisticians about why they don't like databases, and why they keep asking me for all of the data in comma-delimited files instead of databases. And the answer, when you boiled it down, was pretty simple.

When you have statisticians looking at data with huge numbers of attributes and huge numbers of patients, the kinds of statistical analysis they're doing mean they want to look at some much smaller combination of the attributes for all of the patients, see if they can find correlations, and then change that combination and look at different subsets. That absolutely requires a column-oriented database. A row-oriented relational database will bring in the whole table to get you that data. It takes forever, and it's too slow for them.

So we started from that. We must have looked at four or five different databases. Vertica was the one that could handle the scale and the speed, and was robust and reliable enough, and it's our platform now. We're still loading in the first round of our data. We're still in the tens of billions of rows, as opposed to trillions of rows, but we'll get there.

Gardner: You're also in the healthcare field, so there are considerations around privacy, governance, auditing and, of course, price sensitivity, because you're a non-profit. How did that factor into your decision? Is the use of off-the-shelf hardware a consideration, or off-the-shelf storage? Are you looking at converged infrastructure? How did you manage some of those cost and regulatory issues?

Bloom: Regulatory issues are enormous. There are regulations on clinical data that we have to deal with. There are regulations on research data that overlap with, and are not fully consistent with, the regulations on clinical data. We do have to be very careful about who has access to which sets of data. We have all of this data in one database, but that doesn't mean any one person can actually have access to all of it.

We want it in one place because, over time, scientists integrate more and more data and get permission to integrate larger and larger datasets, and we need that. There are studies we're doing that are going to need over 100,000 patients in them to get statistical validity on their hypotheses. So we want it all in one place.
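The access pattern Bloom describes, a handful of attributes across every patient, is exactly what a column store optimizes. A minimal illustration in plain Python of why the layout matters (attribute names are invented; real genomic tables are far wider):

```python
# Toy contrast between row-oriented and column-oriented layouts.

# Row-oriented layout: one record per patient, all attributes together.
rows = [
    {"patient_id": i,
     "variant_rs123": i % 2,
     "has_disease": i % 3 == 0,
     **{f"attr_{k}": 0 for k in range(1000)}}   # 1,000 attributes we don't need
    for i in range(10)
]

# Column-oriented layout: one array per attribute.
columns = {
    "variant_rs123": [r["variant_rs123"] for r in rows],
    "has_disease": [r["has_disease"] for r in rows],
}

# Row store: answering "variant vs. disease" touches every full record,
# dragging 1,000 unused attributes through memory just to read two fields.
pairs_from_rows = [(r["variant_rs123"], r["has_disease"]) for r in rows]

# Column store: only the two requested attribute arrays are read at all.
pairs_from_columns = list(zip(columns["variant_rs123"], columns["has_disease"]))

assert pairs_from_rows == pairs_from_columns
```

A columnar engine reads only the requested attribute arrays from disk, while a row store must haul each patient's full record through memory to extract two fields, which is the slowdown the statisticians were objecting to.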
What we're doing right now is keeping all of the access-control information, about who can access which datasets, as data in the database, and we basically append clauses to every query to filter the data down to what any particular user can use. Then we'll tell them the answers for the datasets they can see, how much data is there that they couldn't look at, and, if they need that information, how to go try to get access to it.

Gardner: So you're able to manage some of those very stringent requirements around access control. How about that infrastructure cost equation?

Bloom: Infrastructure cost is a real issue, but essentially what we're dealing with is this: if we're going to do the work we need to do and deal with the data we have to deal with, there are two options. We spend on capital equipment, or we spend on operating costs to build it ourselves.

In this case, not all cases, it seemed to make much more sense to take advantage of the equipment and software rather than trying to reproduce them, and to use our time and our personnel's time on other things that we couldn't as easily get. A lot of work went into Vertica. We're not going to reproduce it very easily. The open-source tools that are out there don't match it yet. They may eventually, but they don't now.
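The query-filtering idea Bloom outlines can be sketched as a rewrite step that appends an access-control predicate before execution. This is a hypothetical illustration, not the Center's actual code: the table and column names are invented, and production code would use bound parameters rather than string formatting, plus a real SQL parser rather than the naive clause splice shown here:

```python
# Hypothetical sketch: append an access-control clause to every query so
# users only see rows from datasets they are permitted to access.

def restrict_query(base_query: str, user_id: str) -> str:
    """Limit a query to rows in datasets the user may see.

    Naive: assumes any WHERE clause ends the query (no GROUP BY or
    ORDER BY after it) and that identifiers are trusted.
    """
    acl_clause = (
        "dataset_id IN (SELECT dataset_id FROM dataset_permissions "
        f"WHERE user_id = '{user_id}')"
    )
    joiner = " AND " if " where " in base_query.lower() else " WHERE "
    return base_query + joiner + acl_clause

q = restrict_query(
    "SELECT sample_id, variant FROM genotypes WHERE chrom = '7'",
    "researcher_42",
)
```

The appeal of the approach is that permissions live in the same database as the data, so a single rewrite rule covers every dataset without per-table application logic.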
Getting it right

Gardner: When we think about the paybacks, or determining return on investment (ROI), in a business setting, there's a fairly simple, straightforward formula. For you, how do you know you've got this right? What are you looking for, in what we might refer to in the business world as service-level agreements (SLAs) or key performance indicators (KPIs), that tells you you're getting the job done, based on all of its requirements and for all of these different constituencies?

Bloom: There's a set of different things. The thing I'm looking for first is whether the scientists we work with most closely, who will use this first, will be able to frame the questions they want to ask in terms of the interface and infrastructure we've provided.

I want to know that we can answer the scientific questions people have with the data we have, and that we've made it accessible in the right way; that we've integrated, connected, and aggregated the data in the right ways, so they can find what they're looking for. There's no easy metric for that. There's going to be a lot of beta testing.

The second thing is, are we hitting the performance standards we want? How much data can I load how fast? How much data can I retrieve from a query? Those statisticians who don't want to use relational databases still want to pull out all those columns, and they want to do their sophisticated analysis outside the database.

Eventually, I may convince them that they can leave the data in the database and run their R scripts there, but right now they want to pull it out. I need to know that I can pull it out fast for them, and that it's organized in a way that lets them get their data out without objection.

Gardner: Let's step back to the big picture of what we can accomplish in a health-level payback.
When you've got the data managed, when you've got the input and output at a speed that's acceptable, and when you're able to manage all these different levels of studies, what sort of paybacks do we get in terms of people's health? How do we know we're succeeding when it comes to disease, treatment, and understanding more about people and their health?

Bloom: The place where this database is going to be the most useful, though by no means the only way it will be used, is in our investigations of common and complex diseases: how we find their causes, and how we get from causes to treatments.

I'm talking about diseases like Alzheimer's, asthma, diabetes, Parkinson's, and ALS, which is not so common but certainly falls in the complex-disease category. These are diseases caused by some combination of genomic variants, not by a single gene gone wrong. There are a lot of complex questions we need to ask to find those, and it takes a lot of patience, and a lot of genomes, to answer them.
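The questions Bloom goes on to describe reduce to a pair of conditional counts over the cohort: of the people who carry a variant, how many have the disease, and of the people with the disease, how many carry the variant. A minimal sketch on synthetic data (purely illustrative, not Genome Center data or code):

```python
# Two-way conditional counts for a risk variant (synthetic cohort).
# Risk variants raise disease probability without guaranteeing it,
# so both directions of the count are informative.

cohort = [
    # (has_variant, has_disease) for each synthetic participant
    (True, True), (True, False), (True, True),
    (False, False), (False, True), (False, False),
]

carriers = [p for p in cohort if p[0]]
cases = [p for p in cohort if p[1]]

# Of those carrying the variant, how many actually have the disease?
penetrance = sum(1 for _, d in carriers if d) / len(carriers)
# Of those with the disease, how many carry the variant?
variant_share = sum(1 for v, _ in cases if v) / len(cases)

print(f"{penetrance:.2f} of carriers have the disease")   # 0.67
print(f"{variant_share:.2f} of cases carry the variant")  # 0.67
```

With the genotypes and phenotypes in a database, both counts are simple aggregate queries; against raw per-sample files, each one means scanning and joining everything by hand, which is the pain point Bloom describes next.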
The payoff is that, if we can use this data to collect enough information about enough diseases, we can ask questions like: it looks like this genomic variant is correlated with this disease, so how many people in your database have this variant, and of those, how many actually have the disease? And of the ones who have the disease, how many have this variant? I need to ask both of those questions, because a lot of these variants confer risk, but they don't absolutely give you the disease.

If I'm going to find the answers, I need to be able to ask those questions, and those are the things that are really hard to do with the raw data in files. If we can do just that, think about the impact on all of us. If we can find the molecular causes of Alzheimer's, that could lead to treatments or prevention, and the same goes for all of those other diseases.

Gardner: It's a very compelling and interesting big-data use case, one of the best I've heard. I'm afraid we'll have to leave it there.

We've been examining how the New York Genome Center manages and analyzes vast data outputs to speedily correlate for time-sensitive reporting, and we've learned how the drive to better diagnose diseases and develop more effective treatments is aided by swift, cost-efficient, and accessible big-data analytics infrastructure.

So, join me in thanking our guest. We've been here with Toby Bloom, Deputy Scientific Director for Informatics at the New York Genome Center. Thank you so much, Toby.

Bloom: Thank you, and thanks for inviting me.

Gardner: Thank you also to our audience for joining us for this big-data innovation case study discussion. I'm Dana Gardner, Principal Analyst at Interarbor Solutions, your host for this ongoing series of HPE-sponsored discussions. Thanks again for listening, and come back next time.
Copyright Interarbor Solutions, LLC, 2005-2016. All rights reserved.