5. Relational data warehouse
Massively parallel; petabyte scale
Fully managed
HDD and SSD platforms
$1,000/TB/year; starts at $0.25/hour
Amazon Redshift
a lot faster
a lot simpler
a lot cheaper
6. The Forrester Wave™ is copyrighted by Forrester Research, Inc. Forrester and Forrester Wave™ are trademarks of Forrester Research, Inc. The Forrester Wave™ is a graphical
representation of Forrester's call on a market and is plotted using a detailed spreadsheet with exposed scores, weightings, and comments. Forrester does not endorse any
vendor, product, or service depicted in the Forrester Wave. Information is based on best available resources. Opinions reflect judgment at the time and are subject to change.
Forrester Wave™ Enterprise Data Warehouse Q4 ’15
8. Use Case: Traditional Data Warehousing
Business Reporting
Advanced pipelines and queries
Secure and Compliant
Easy Migration – Point & Click using AWS Database Migration Service
Secure & Compliant – End-to-End Encryption. SOC 1/2/3, PCI-DSS, HIPAA and FedRAMP compliant
Large Ecosystem – Variety of cloud and on-premises BI and ETL tools
Japanese Mobile Phone Provider
Powering 100 marketplaces in 50 countries
World’s Largest Children’s Book Publisher
Bulk Loads and Updates
9. Use Case: Log Analysis
Log & Machine IoT Data
Clickstream Events Data
Time-Series Data
Cheap – Analyze large volumes of data cost-effectively
Fast – Massively Parallel Processing (MPP) and columnar architecture for fast queries and parallel loads
Near real-time – Micro-batch loading and Amazon Kinesis Firehose for near-real time analytics
Interactive data analysis and recommendation engine
Ride analytics for pricing and product development
Ad prediction and on-demand analytics
10. Use Case: Business Applications
Multi-Tenant BI Applications
Back-end services
Analytics as a Service
Fully Managed – Provisioning, backups, upgrades, security, compression all come built-in so you can focus on your business applications
Ease of Chargeback – Pay as you go, add clusters as needed. A few big common clusters, several data marts
Service Oriented Architecture – Integrated with other AWS services. Easy to plug into your pipeline
Infosys Information Platform (IIP)
Analytics-as-a-Service
Product and Consumer Analytics
11. Amazon Redshift architecture
Leader node
Simple SQL endpoint
Stores metadata
Optimizes query plan
Coordinates query execution
Compute nodes
Local columnar storage
Parallel/distributed execution of all queries, loads, backups, restores, resizes
Start at just $0.25/hour, grow to 2 PB (compressed)
DC1: SSD; scale from 160 GB to 326 TB
DS2: HDD; scale from 2 TB to 2 PB
[Diagram: BI tools, SQL clients, and analytics tools connect to the leader node over JDBC/ODBC; the leader node coordinates the compute nodes over 10 GigE (HPC) networking; compute nodes ingest from Amazon S3, Amazon EMR, Amazon DynamoDB, and SSH sources, and back up to / restore from Amazon S3.]
13. Benefit #1: Amazon Redshift is fast
Parallel and distributed
Query
Load
Export
Backup
Restore
Resize
14. Benefit #1: Amazon Redshift is fast
Hardware optimized for I/O intensive workloads, 4 GB/sec/node
Enhanced networking, over 1 million packets/sec/node
Choice of storage type, instance size
Regular cadence of auto-patched improvements
15. Benefit #1: Amazon Redshift is fast
“Did I mention that it’s ridiculously fast? We’re using it to provide our analysts with an alternative to Hadoop”
“After investigating Redshift, Snowflake, and BigQuery, we found that Redshift offers top-of-the-line performance at best-in-market price points”
“…[Redshift] performance has blown away everyone here. We generally see 50-100X speedup over Hive”
“We regularly process multibillion row datasets and we do that in a matter of hours. We are heading to up to 10 times more data volumes in the next couple of years, easily”
“We saw a 2X performance improvement on a wide variety of workloads. The more complex the queries, the higher the performance improvement”
“On our previous big data warehouse system, it took around 45 minutes to run a query against a year of data, but that number went down to just 25 seconds using Amazon Redshift”
16. And has gotten faster...
5X Query throughput improvement over the past year
Memory allocation (launched)
Improved commit and I/O logic (launched)
Queue hopping (launched)
Query monitoring rules (coming soon)
Power start (coming soon)
Short query bias (coming soon)
10X Vacuuming performance improvement
Ensures data is sorted for efficient and fast I/O
Reclaims space from deleted rows
Enhanced vacuum performance leads to better system throughput
Fast
Efficient
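As a rough sketch of the maintenance commands these vacuuming improvements speed up (the table name is illustrative):

-- Re-sorts rows and reclaims space left by deleted rows
VACUUM FULL events;
-- Refresh planner statistics after large loads or deletes
ANALYZE events;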
17. Amazon Redshift Cluster
The life of a query
[Diagram: a query arrives from a BI tool, SQL client, or analytics tool through the client connection to the leader node, is assigned to a WLM queue (Queue 1 or Queue 2), and is executed across the compute nodes.]
18. Amazon Redshift Workload Management
Coming soon: Short query bias
[Diagram: queries from BI tools, SQL clients, and analytics tools wait in WLM queues — Queue 1 with 4 slots, Queue 2 with 2 slots — before running; with short query bias, short queries go to the head of the queue.]
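For orientation, a session can steer its queries into a particular WLM queue and claim extra slots for a heavy statement; the query group name and slot count below are illustrative:

-- Route this session's queries to the WLM queue mapped to the 'reports' query group
SET query_group TO 'reports';
-- Claim 2 of that queue's slots for the next statement (useful for big joins or VACUUMs)
SET wlm_query_slot_count TO 2;
-- ... run the heavy query here ...
RESET wlm_query_slot_count;
RESET query_group;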
20. Amazon Redshift Cluster
Coming soon: Power start
[Diagram: queries from BI tools, SQL clients, and analytics tools flow through the leader node to the compute nodes.]
All queries receive a power start; shorter queries benefit the most.
21. Coming soon: Query monitoring rules
• Allows automatic handling of runaway (poorly written) queries
• Metrics with operators and values (e.g. query_cpu_time > 1000) create a predicate
• Multiple predicates can be AND-ed together to create a rule
• Multiple rules can be defined for a queue in WLM. These rules are OR-ed together
If { rule } then [action]
{ rule : metric operator value }, e.g., rows_scanned > 100000
• Metric: cpu_time, query_blocks_read, rows_scanned, query_execution_time, CPU & I/O skew per slice, join_row_count, etc.
• Operator : <, >, ==
• Value : integer
[action] : hop, log, abort
22. Coming soon: Query monitoring rules
Monitor and control cluster resources consumed by a query
Get notified, abort, and reprioritize long-running / bad queries
Pre-defined templates for common use cases
23. Coming soon: Query monitoring rules
Common use cases:
• Protect interactive queues
INTERACTIVE = { "query_execution_time > 15 sec" or "query_cpu_time > 1500 uSec" or "query_blocks_read > 18000 blocks" } [HOP]
• Monitor ad-hoc queues for heavy queries
AD-HOC = { "query_execution_time > 120" or "query_cpu_time > 3000" or "query_blocks_read > 180000" or "memory_to_disk > 400000000000" } [LOG]
• Limit the number of rows returned to a client
MAXLINES = { "RETURN_ROWS > 50000" } [ABORT]
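Until query monitoring rules ship, similar limits can be checked by hand. The sketch below assumes the STL_QUERY_METRICS system table and its query-level rows (segment = -1), with purely illustrative thresholds — verify column names against your cluster's documentation:

-- Recent statements that burned a lot of CPU (microseconds) or read many blocks
SELECT query, cpu_time, blocks_read, rows
FROM stl_query_metrics
WHERE segment = -1                      -- query-level totals (assumption; see docs)
  AND (cpu_time > 3000000 OR blocks_read > 180000)
ORDER BY cpu_time DESC
LIMIT 20;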
24. Benefit #2: Amazon Redshift is inexpensive
DS2 (HDD) — price per hour for a DS2.XL single node / effective annual price per TB compressed:
On-demand: $0.850/hr / $3,725
1-year reservation: $0.500/hr / $2,190
3-year reservation: $0.228/hr / $999
DC1 (SSD) — price per hour for a DC1.L single node / effective annual price per TB compressed:
On-demand: $0.250/hr / $13,690
1-year reservation: $0.161/hr / $8,795
3-year reservation: $0.100/hr / $5,500
Pricing is simple
Number of nodes x price/hour
No charge for leader node
No upfront costs
Pay as you go
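Worked example for the $999/TB/yr figure, assuming 2 TB of compressed storage per DS2.XL node: $0.228/hour × 8,760 hours/year ≈ $1,997 per node per year, and $1,997 ÷ 2 TB ≈ $999 per TB per year. The DC1 figures follow the same arithmetic with 160 GB of SSD per DC1.L node.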
25. Benefit #3: Amazon Redshift is fully managed
Continuous/incremental backups
Multiple copies within cluster
Continuous and incremental backups to Amazon S3
Continuous and incremental backups across regions
Streaming restore
26. Benefit #3: Amazon Redshift is fully managed
Fault tolerance
Disk failures
Node failures
Network failures
Availability Zone/region level disasters
27. Node fault tolerance
[Diagram: client, leader node, compute nodes, data-path monitoring agents.]
Node-level monitoring can detect SW/HW issues and take action
28. Node fault tolerance
[Diagram: client, leader node, compute nodes, data-path monitoring agents.]
Failure is detected at one of the compute nodes
29. Node fault tolerance
[Diagram: client, leader node, compute nodes, data-path monitoring agents.]
Redshift parks the connections
Next, the node is replaced
37. NTT Docomo: Japan’s largest mobile service provider
68 million customers
Tens of TBs per day of data across a mobile network
6 PB of total data (uncompressed)
Data science for marketing operations, logistics, and so on
Greenplum on-premises
Scaling challenges
Performance issues
Need same level of security
Need for a hybrid environment
38. 125 node DS2.8XL cluster
4,500 vCPUs, 30 TB RAM
2 PB compressed
10x faster analytic queries
50% reduction in time for new BI application deployment
Significantly less operations overhead
[Diagram: Data Source, ETL, AWS Direct Connect, Client, Forwarder, Loader, State Management, Sandbox, Amazon Redshift, S3.]
NTT Docomo: Japan’s largest mobile service provider
39. Nasdaq: powering 100 marketplaces in 50 countries
Orders, quotes, trade executions, market “tick” data from 7 exchanges
7 billion rows/day
Analyze market share, client activity, surveillance, billing, and so on
Microsoft SQL Server on-premises
Expensive legacy DW ($1.16M/yr.)
Limited capacity (1 yr. of data online)
Needed lower TCO
Must satisfy multiple security and regulatory requirements
Similar performance
40. 23 node DS2.8XL cluster
828 vCPUs, 5 TB RAM
368 TB compressed
2.7 T rows, 900 B derived
8 tables with 100 B rows
7 man-month migration
¼ the cost, 2x storage, room to grow
Faster performance, very secure
Nasdaq: powering 100 marketplaces in 50 countries
41. Amazon.com clickstream analytics
Web log analysis for Amazon.com
• PB-scale workload, 2 TB/day growing at 67% YoY
• Largest table: 400 TB
Understand customer behavior
Previous solution
• Legacy DW (Oracle)—could query across only 1 week of data per hour
• Hadoop—could query across 1 month of data per hour
42. Results with Amazon Redshift
• Query 15 months in 14 min
• Load 5B rows in 10 min
• 21B w/ 10B rows: 3 days to 2 hrs (Hive → Redshift)
• Load pipeline: 90 hrs to 8 hrs (Oracle → Redshift)
• 100 node DS2.8XL clusters
• Easy resizing
• Managed backups and restore
• Failure tolerance and recovery
• 20% time of one DBA
• Increased productivity
49. Resize
• Resize while remaining online
• Provision a new cluster in the background
• Copy data in parallel from node to node
• Only charged for source cluster
53. Single Column
• Table is sorted by 1 column
Date Region Country
2-JUN-2015 Oceania New Zealand
2-JUN-2015 Asia Singapore
2-JUN-2015 Africa Zaire
2-JUN-2015 Asia Hong Kong
3-JUN-2015 Europe Germany
3-JUN-2015 Asia Korea
[ SORTKEY ( date ) ]
• Best for:
• Queries that use 1st column (i.e. date) as primary filter
• Can speed up joins and group bys
• Quickest to VACUUM
54. Compound
• Table is sorted by the 1st column, then the 2nd column, etc.
Date Region Country
2-JUN-2015 Africa Zaire
2-JUN-2015 Asia Korea
2-JUN-2015 Asia Singapore
2-JUN-2015 Europe Germany
3-JUN-2015 Asia Hong Kong
3-JUN-2015 Asia Korea
[ COMPOUND SORTKEY ( date, region, country ) ]
• Best for:
• Queries that use 1st column as primary filter, then other cols
• Can speed up joins and group bys
• Slower to VACUUM
55. Interleaved
• Equal weight is given to each column.
Date Region Country
2-JUN-2015 Africa Zaire
3-JUN-2015 Asia Singapore
2-JUN-2015 Asia Korea
2-JUN-2015 Europe Germany
3-JUN-2015 Asia Hong Kong
2-JUN-2015 Asia Korea
[ INTERLEAVED SORTKEY ( date, region, country ) ]
• Best for:
• Queries that use different columns in filter
• Queries get faster the more columns used in the filter
• Slowest to VACUUM
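Taken together, the three styles are declared in DDL roughly as follows (table and column names are illustrative):

-- Single-column sort key
CREATE TABLE events_single (
  event_date DATE,
  region VARCHAR(20),
  country VARCHAR(40)
) SORTKEY (event_date);

-- Compound sort key: sorted by event_date, then region, then country
CREATE TABLE events_compound (
  event_date DATE,
  region VARCHAR(20),
  country VARCHAR(40)
) COMPOUND SORTKEY (event_date, region, country);

-- Interleaved sort key: equal weight given to each column
CREATE TABLE events_interleaved (
  event_date DATE,
  region VARCHAR(20),
  country VARCHAR(40)
) INTERLEAVED SORTKEY (event_date, region, country);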
57. ID Gender Name
101 M John Smith
292 F Jane Jones
139 M Peter Black
446 M Pat Partridge
658 F Sarah Cyan
164 M Brian Snail
209 M James White
306 F Lisa Green
ID Gender Name
101 M John Smith
306 F Lisa Green
ID Gender Name
292 F Jane Jones
209 M James White
ID Gender Name
139 M Peter Black
164 M Brian Snail
ID Gender Name
446 M Pat Partridge
658 F Sarah Cyan
EVEN
DISTSTYLE EVEN
58. ID Gender Name
101 M John Smith
292 F Jane Jones
139 M Peter Black
446 M Pat Partridge
658 F Sarah Cyan
164 M Brian Snail
209 M James White
306 F Lisa Green
KEY
ID Gender Name
101 M John Smith
306 F Lisa Green
ID Gender Name
292 F Jane Jones
209 M James White
ID Gender Name
139 M Peter Black
164 M Brian Snail
ID Gender Name
446 M Pat Partridge
658 F Sarah Cyan
DISTSTYLE KEY
59. ID Gender Name
101 M John Smith
292 F Jane Jones
139 M Peter Black
446 M Pat Partridge
658 F Sarah Cyan
164 M Brian Snail
209 M James White
306 F Lisa Green
KEY
ID Gender Name
101 M John Smith
139 M Peter Black
446 M Pat Partridge
164 M Brian Snail
209 M James White
ID Gender Name
292 F Jane Jones
658 F Sarah Cyan
306 F Lisa Green
DISTSTYLE KEY
60. ID Gender Name
101 M John Smith
292 F Jane Jones
139 M Peter Black
446 M Pat Partridge
658 F Sarah Cyan
164 M Brian Snail
209 M James White
306 F Lisa Green
101 M John Smith
292 F Jane Jones
139 M Peter Black
446 M Pat Partridge
658 F Sarah Cyan
164 M Brian Snail
209 M James White
306 F Lisa Green
101 M John Smith
292 F Jane Jones
139 M Peter Black
446 M Pat Partridge
658 F Sarah Cyan
164 M Brian Snail
209 M James White
306 F Lisa Green
101 M John Smith
292 F Jane Jones
139 M Peter Black
446 M Pat Partridge
658 F Sarah Cyan
164 M Brian Snail
209 M James White
306 F Lisa Green
101 M John Smith
292 F Jane Jones
139 M Peter Black
446 M Pat Partridge
658 F Sarah Cyan
164 M Brian Snail
209 M James White
306 F Lisa Green
ALL
DISTSTYLE ALL
61. • KEY
• Large Fact tables
• Large dimension tables
• ALL
• Medium dimension tables (1K – 2M)
• Small dimension tables
• EVEN
• Tables with no joins or group by
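In DDL the three choices look roughly like this (table and column names are illustrative):

-- Large fact table: distribute on the join key so matching rows land on the same slice
CREATE TABLE sales (
  customer_id INT,
  amount DECIMAL(10,2)
) DISTSTYLE KEY DISTKEY (customer_id);

-- Medium dimension table: copy the table to every node to avoid redistribution at join time
CREATE TABLE dim_region (
  region_id INT,
  region_name VARCHAR(40)
) DISTSTYLE ALL;

-- Staging table with no joins or GROUP BY: spread rows round-robin
CREATE TABLE staging_events (
  payload VARCHAR(65535)
) DISTSTYLE EVEN;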
Use multiple input files to maximize throughput
Use the COPY command
Each slice can load one file at a time
A single input file means only one slice is ingesting data
Instead of 100 MB/s, you’re only getting 6.25 MB/s (one slice out of 16)
Use multiple input files to maximize throughput
Use the COPY command
You need at least as many input files as you have slices
With 16 input files, all slices are working, so you maximize throughput
Get 100 MB/s per node; scale linearly as you add nodes
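A sketch of such a parallel load — the bucket, prefix, table, and IAM role below are placeholders; the prefix should resolve to at least as many (ideally compressed) files as the cluster has slices:

-- Every object under the prefix is loaded in parallel across the slices
COPY events
FROM 's3://my-bucket/events/2015-06-02/'
IAM_ROLE 'arn:aws:iam::123456789012:role/MyRedshiftCopyRole'
GZIP
DELIMITER '|';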
The main goal of this slide is to show platform completeness
Key talking points:
1/ Any big data application has a data acquisition phase, a storage need, and an analytics need
2/ Quick Service overviews. Go fast; especially on ones we talk about later.
Collect
Direct Connect – private, low latency connections between your data centers and ours. Most customers use a pair for redundancy and availability
Import/Export – for moving large volumes of data, FedEx is your highest-bandwidth option. With Snowball, we’ll ship you a ruggedized case; load it up and send it back
Kinesis – real time streaming data; has streams for custom apps, firehose for easy Redshift/s3 integration, Analytics for real time SQL
AWS IoT platform – complete suite for IoT devices to make it easy to manage them and get telemetry data into AWS
Store
S3 is the foundation of any big data app on AWS. Scalable, low cost, default landing zone for data ($0.023/GB-Mo and drops from there with scale. That’s $23/TB for a month, less than $300 for a year)
Glacier is the sister service for cold storage like data you need for compliance. Age data into it using lifecycle. $0.004/GB-Mo, or $48 per TB per year!
DynamoDB – NoSQL store; zero admin; JSON + Key Value with single digit millisecond latency. Great for high concurrency reads and writes
Elasticsearch – managed elasticsearch clusters for operational intelligence and search
Analyze
EMR for fully managed dynamic clusters for running Hadoop/Spark/Presto/HBase
Athena for interactive queries on S3 Data using Standard SQL with no infrastructure to manage
Redshift – fully managed, petabyte scale DW for $1,000/TB/year
ML – fully managed machine learning
EC2 – run anything you want that runs on Linux or windows
QuickSight for fast, cost-effective BI
Lambda – serverless compute for event driven computing
And rounding all this out, we have DMS for migrating databases and replicating OLTP to Redshift and Data Pipeline for scheduling and orchestration
Pace of innovation
Continuous deployment every 2 weeks
A different model than Oracle/Teradata, etc., which still require roughly a 6-month deployment plan vs. us
Redshift value proposition: fast, simple, cost-effective data warehousing. Massively parallel and columnar technology. Easily scalable to petabytes or more. Makes administration simple, no DBA needed. You can choose from hard disk drive or solid state drive platform depending on your needs. All this for $1,000/TB/year, which is one-tenth of the cost of traditional data warehousing solutions.
Amazon Redshift has been ranked as a leader in the recent data warehousing Forrester Wave
USE CASES – to use across the next several slides (6-10)
Amazon – Understand customer behavior, migrated from Oracle, PB-scale workload, 2 TB/day @ 67% YoY growth. Could query across 1 week in one hour with Oracle; now can query 15 months in 14 min with Redshift
Boingo – 2000+ Commercial Wi-Fi locations, 1 million+ Hotspots, 90M+ ad engagements in 100+ countries. Used Oracle, Rapid data growth slowed analytics, Admin overhead and Expensive (license, h/w, support). After migration 180x performance improvement and 7x cost savings
Finra (Financial Industry Regulatory Authority) – One of the largest independent securities regulators in the United States, established to monitor and regulate financial trading practices. Reacts to changing market dynamics while providing its analysts with the tools (Redshift) to interactively query multi-petabyte data sets. Captures, analyzes, and stores ~75 billion records daily. The company estimates it will save up to $20 million annually by using AWS instead of a physical data center infrastructure.
Desk.com – Ingests 200K case related events/hour and runs a user facing portal on Redshift
Fully Managed – We provision, backup, patch and monitor so you can focus on your data
Fast – Massively Parallel Processing and columnar architecture for fast queries and parallel loads
Nasdaq security – Ingests 5B rows/trading day, analyzes orders, trades and quotes to protect traders, report activity and develop their marketplaces
NTT Docomo - Redshift is NTT Docomo's primary analytics platform for data science and marketing and logistic analytics. Data is pre-processed on premises and loaded into a massive, multi-petabyte data warehouse on Amazon Redshift, which data scientists use as their primary analytics platform.
Pinterest uses Redshift for interactive data analysis. Redshift is used to store all web event data and uses for KPIs, recommendations and A/B experimentation.
Lyft uses Redshift for ride analytics across the world (rides / location data ) - Through analysis, company engineers estimated that up to 90% of rides during peak times had similar routes. This led to the introduction of Lyft Line – a service that allows customers to save up to 60% by carpooling with others who are going in the same direction.
Yelp has multiple deployments of RedShift with different data sets in use by product management, sales analytics, ads, SeatMe (Point of sale analytics) and many other teams.
Analyzes tens of millions of ads/day, 250M mobile events/day, ad campaign performance, and new feature usage
Accenture Insights Platform (AIP) is a scalable, on-demand, globally available analytics solution running on Amazon Redshift. AIP is Accenture's foundation for its big data offering to deliver analytics applications for healthcare and financial services.
High level overview of Amazon Redshift Architecture
Postgres SQL front end with MPP backend
Leader node that acts as a SQL end point and coordinates query execution with compute nodes
The compute nodes store data locally in columnar format
Ingest in parallel from S3, EMR, DDB and SSH
Column storage fetches only the specific columns that are required for queries
Customer typically get 3-5x compression. Means less I/O and more effective storage
Zone maps: Each column is divided into 1 MB blocks, and we store the min/max values of each block in memory, so we can quickly identify the blocks that need to be fetched for a query
Direct-attached storage/large block sizes also enable fast I/O
All the operations that you care about from a database management and performance perspective are all parallelized
Scale horizontally with the number of nodes
We work with EC2 and infrastructure teams closely to optimize the h/w for data warehousing needs.
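One way to see that compression on a real table (the name is illustrative): ANALYZE COMPRESSION samples the data and reports a recommended encoding and estimated reduction per column; COPY also applies automatic compression on the first load into an empty table.

-- Recommend column encodings based on a sample of the table's data
ANALYZE COMPRESSION events;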
Optimum was formerly called Cablevision
Will be more noticeable under heavy workload
Can be around 10-20%
Rules
Auto statistics collection
We keep track of the statistics
Usage patterns – avoid analyze if not required
When you have an interactive queue, you may want to protect it from runaway queries and hop out any heavier-than-allowed queries. Sometimes it’s not enough to limit execution time to 15 seconds, as some queries can consume large amounts of CPU or scan many blocks very quickly. With this rule you can simply hop such queries out before they consume excessive cluster resources and slow down the other queries in the “fast” lane
Since Redshift is accessible to your team(s), it is always good to monitor clusters before users’ queries impact the performance of the system. Using STL_QUERY_METRICS and logs generated by this rule, you can reach out to users to discuss (and help rewrite) their queries
Abort queries that exceed the limit of rows returned to a client (for excel or interactive use cases)
Prices start at $1,000/TB/yr. for a 3 year DS2.8XL RI.
RIs can provide more than a 70% discount.
Backups are managed, continuous and incremental
Multiple copies are made within and outside the cluster on S3
Streaming restore => While the restore happens in the background, the cluster is made available for reads/writes within minutes of initiating the restore operation (whether the backup is for a 1 TB or a 100 TB cluster)
Redshift monitors the health of a variety of components and failure conditions within an AZ and recovers from them automatically
For AZ/region level disasters, streaming restore will help get back to business within minutes
We can also fix SW issues without replacing the node if not needed
Python 2.7 – do you need support for other languages? Let us know
We respect the investments you made in your analytics platforms and work with a variety of ETL, BI and SI vendors.
Standard JDBC/ODBC drivers make the integration process seamless
We work with these vendors closely to certify the joint platforms
We believe in SOA
You pick and choose the components that work for your use case instead of bundling it all
We covered the intro, benefits and several use cases.
Next, we’re going to talk about how to get started and some useful tips.
Provisioning
Data Modeling
Data Loading
Querying
We’ll elaborate more during our deep dive session later today
Sort keys
Distribution Keys
Sorted columns enable fetching the minimum number of blocks required for query execution. In this example, an unsorted table almost leads to a full table scan, O(N), while a sorted table leads to one block scanned, O(1).
Goal is to sort data in an effective way to allow scans fetch only the relevant blocks
Compound – O(1); Interleaved – O(N^(1/3)) for any column
Goal is to distribute data evenly and co-locate joins/aggregations
We also have the AWS DMS (next session)
Redshift works with a customer’s BI tool of choice through Postgres drivers and a JDBC/ODBC connection. A number of partners shown here have certified integration with Redshift, meaning they have done testing to validate/build Redshift integration and make using Redshift easy from a UI perspective. If there are tools customers use that aren’t shown, we can work on getting them integrated.
Console metrics
Table level restore
List of queries