Hekaton –
SQL 2014 In-Memory OLTP
A journey BACK TO THE FUTURE
About me
Shy Engelberg, SQL Server consultant.
shy.engelberg@hynt.co.il
054-7717115
@ShyEngelberg
www.blogs.Microsoft.co.il/ShyEngel
www.hynt.co.il
www.facebook.com/HYNTil
Agenda
• What is Hekaton?
• Why now?
• Removing the performance bottlenecks
• Disk
• Locking
• Latches
• Logging
• Interpreted query language
• Performance results
• Integration into SQL Server
What is Hekaton?
• Hekaton - "hundred" (100) in Greek.
The Hekatoncheires were figures in an archaic stage of Greek mythology:
three giants of incredible strength and ferocity,
each of them having a hundred hands
(100 hands working together in parallel?)
and fifty heads. (Wikipedia)
What is Hekaton?
• Hekaton is a new database engine optimized for memory-
resident data and OLTP workloads. It is optimized for large
main memories and many-core processors.
(Ohhh, and it's fully durable!)
• Hekaton's new boring name is In-Memory OLTP.
• The research and development took 5 years!
• The initial goal was a 100x performance improvement.
100
The Past
• RAM prices were very high:
“…In 1990 I had an IBM PC 8088 it had a 20 meg HDD.
It came with 640KB RAM. It was great for the day.
It cost me $3000.00…”
• CPUs had a single core.
• SQL Server was designed when it could be assumed that main
memory was very expensive, so data needed to reside on disk
(except when it was actually needed for processing) -
Disk-optimized (Buffer pools, data pages, costing rules)
Today (and also 5 years ago)
• RAM prices are low, and CPU core counts are
increasing:
a server with 32 cores and 1TB of memory costs
about $50K.
($50K = HP DL980 with 2TB of RAM)
• The majority of OLTP databases fit entirely in 1TB and even the largest OLTP
databases can keep the active working set in memory.
• Unlike “Big Data,” most OLTP data volumes are growing at more modest
rates.
• Data management products are becoming “workload specific”
What should we do?
• Our goal is to gain a 10-100x throughput improvement
(for OLTP workloads).
This cannot be achieved by optimizing existing SQL Server
mechanisms.
• Recognizing the trends, the SQL Server team began building a database
engine optimized for large main memories and many-core
CPUs.
• Hekaton is not a response to competitors' offerings.
What is OLTP workload?
• OLTP – Online transactions processing:
• High concurrency
• Reads and writes at the same time
• Short transactions; the reliability of the data is very important
• Usually works with small datasets
• Used for retail, e-commerce, gaming, Forex, tourism reservation systems etc.
• As opposed to a DWH reporting workload:
• Writes usually happen in batches.
• A single row is not important; aggregations over many rows are.
• Not normalized.
• Usually low concurrency.
Do we have to build a new engine?
• Usually,
OLTP performance improvements came from:
• CPUs getting faster
• Software getting better
• But now,
CPUs are not getting any faster.
DBMSs have matured.
Yes, we must build a new engine.
What should we do?
In other words:
1. A specialized database engine tuned for OLTP workloads.
2. Fitting most or all of the data required by a workload into main
memory.
3. Lower latency for data operations.
Is it just an In-Memory DB?
CREATE TABLE [Customer]
(
    [CustomerID] INT NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    [Name] NVARCHAR(250) NOT NULL
        INDEX [IName] HASH WITH (BUCKET_COUNT = 1000000),
    [CustomerSince] DATETIME NULL
)
WITH
(
    MEMORY_OPTIMIZED = ON,
    DURABILITY = SCHEMA_AND_DATA
);
* Hekaton is enabled at the table level.
100x performance?
How will we do it?
- Architecture concepts:
1. Optimize indexes for main memory
2. Eliminate latches and locks
3. Compile requests to native code
*Me:
Use everything the hardware has to offer (multi-cores, large
memory), make it scalable, and don't keep anyone waiting.
The bottlenecks to overcome
Procedures interpretation
Logging
Latches
Concurrency Locking
Disk IO
The bottlenecks to overcome: Disk IO
Yes, In-Memory.
We want to reduce disk IO → we can take advantage of low memory prices → so we will place all data in memory.
Traditional DBMS
• Data is saved on disk.
• It is organized in 8KB pages.
• Every table and index is made up of extents: 8 pages = 64 KB.
• Ideally, the data of a single table resides physically contiguous on disk.
• Data is read from disk into a buffer pool in memory (in whole
pages, often whole extents).
• All data needed for processing is read into the buffer pool.
• Data is also updated in the buffer pool, and a process called
Checkpoint writes the data back to disk.
• SQL Server has an algorithm that decides which pages stay in the
buffer pool and which are evicted (only in case of memory
pressure).
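To make the buffer pool concrete, a quick sketch (a standard DMV query; the arithmetic assumes the 8KB page size above) showing how much of each database is currently cached:

-- Pages cached in the buffer pool, per database.
-- Each buffer holds one 8KB page, so pages * 8 / 1024 = MB.
SELECT DB_NAME(database_id) AS database_name,
       COUNT(*) * 8 / 1024  AS buffer_pool_mb
FROM sys.dm_os_buffer_descriptors
GROUP BY database_id
ORDER BY buffer_pool_mb DESC;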
Traditional DBMS
• A table can be a heap or a clustered index.
• A heap is a collection of pages, not sorted by anything, placed in
different places in the file.
• A clustered index is a B-tree that is sorted by one column (or
more); the leaf level of the tree holds pages with the
table's data. (There can be only one clustered index on a table.)
• A nonclustered index is a B-tree that is sorted by one column
(or more); the leaf level of the tree holds pages with a
pointer to the location of the row (in the clustered index or
heap).
Yes, In-Memory: memory-optimized
• The design principle:
optimize for byte-addressable memory instead of block-
addressable disk, so:
• Memory-optimized tables are not stored on pages like disk-based tables.
• No buffer pool.
• No clustered vs. nonclustered distinction:
Indexes include only pointers to rows.
Rows are not sorted in any way.
Indexes do not point to other indexes.
• Rows are never modified (for a different reason; we'll get to that).
Rows and Indexes

[Diagram: several rows, e.g. Data(1955, Marty), Data(1955, Doc), Data(1985, Marty), Data(1985, Doc).
Each row carries an IdxA pointer and an IdxB pointer.
Hash index IdxA (Year) has buckets such as 1955, 1985, 2014, ...;
hash index IdxB (Name) has buckets such as Marty, Doc, Einstein, ...
Each bucket points to a first matching row, and each row's index pointer chains to the next row in the same bucket.]

* This is a simple representation of the row and index; the real structure holds more data and might have a different
structure.
Data storage - conclusion
• A row consists of a header (will be
discussed later), index pointers, and data.
• Memory-optimized tables must have at least one
index created on them (the indexes are the only thing connecting rows to the
table).
• Records are always accessed via an index lookup.
• Since the number of index pointers is part of the row structure,
and rows are never modified, all indexes must be defined at the
time the table is created.
Must something be written to disk?
• Only data rows are written to disk (no indexes).
• More on that when we talk about durability.
The bottlenecks to overcome: Concurrency Locking
No more locks.
We want to reduce lock waits → so we can take advantage of multiple CPU cores → so we will remove all locks and use MVCC.
Locking
• To make sure a transaction is isolated, that it reads only
committed data, and that a row is changed by one user at a
time, we use a mechanism called locks.
• Before reading or writing a row, a process needs to place a lock
on the row.
• If there is already a lock on the row, the process needs to
check the locks for compatibility.
• Readers place shared locks on the rows they read.
• Writers need exclusive access, so they block both readers and
writers.
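As an illustration of the blocking this causes, a sketch (the Accounts table and values are hypothetical) with two sessions against a regular disk-based table:

-- Session 1 (writer): takes an exclusive lock and keeps the transaction open.
BEGIN TRAN;
UPDATE dbo.Accounts SET Balance = Balance - 100 WHERE AccountID = 7;

-- Session 2 (reader, default READ COMMITTED isolation):
SELECT Balance FROM dbo.Accounts WHERE AccountID = 7;
-- Blocks on session 1's exclusive lock until it commits or rolls back.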
No one needs to wait
• Optimistic multi-version concurrency control.
• Writers do not block readers. Writers do not block writers. No locks
are acquired. Never.
• Optimistic: transactions proceed under the (optimistic) assumption
that there will be no conflicts with other transactions.
• Multi-version: as in snapshot isolation, every transaction accesses
only the version of the data that is correct for the time it started.
• Data is never updated in place; every DML operation creates a new version of
the row.
• This new concurrency control mechanism is built into the Hekaton
data structures. It cannot be turned on or off.
Multi version: row structure

Row = Header [Start timestamp, End timestamp] | IdxA pointer | IdxB pointer | Data

Optimistic Multi-Version Concurrency control - Example

① Time 1: a row is inserted. One version exists: [1, ∞) "Marty".
② Readers follow the index to the row and see "Marty".
③ Time 3: Tx1 runs UPDATE Table1 SET Name = 'Doc'.
A new version [Tx1, ∞) "Doc" is created, and the old version's end timestamp is stamped with the transaction ID: [1, Tx1) "Marty".
When Tx1 commits at time 3, the transaction IDs are resolved to timestamps: "Marty" becomes [1, 3) and "Doc" becomes [3, ∞).
④ Time 4: two versions coexist: [1, 3) "Marty" and [3, ∞) "Doc".
⑤ Time 5: Tx100 (start time 5) runs SELECT Name FROM Table1.
It reads the version whose validity interval covers time 5 and returns "Doc".
⑥ Time 6: Tx2 runs UPDATE Table1 SET Name = 'Einstein'.
A new version [Tx2, ∞) "Einstein" is created and "Doc" is stamped [3, Tx2).
When Tx2 commits at time 6, "Doc" becomes [3, 6) and "Einstein" becomes [6, ∞).
A concurrent reader Tx200 (start time 4) still returns "Doc": the version [3, 6) is the one valid at time 4.
Multi version - conclusion
• All versions are equal: they act as rows and are
linked to the indexes, and to one another.
• To support the optimism, versions are validated for write
conflicts: an attempt to update a record that has been updated
since the transaction started fails.
• A garbage collector removes old versions from
memory.
• The model also supports REPEATABLE READ and SERIALIZABLE
isolation levels (but that's for another time).
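What that validation looks like in practice, as a sketch (Table1 and the row are hypothetical; 41302 is the update-conflict error number): two sessions update the same row under snapshot isolation.

-- Session 1:
BEGIN TRAN;
UPDATE dbo.Table1 WITH (SNAPSHOT) SET Name = 'Doc' WHERE Id = 1;
-- (transaction still open)

-- Session 2, before session 1 commits:
BEGIN TRAN;
UPDATE dbo.Table1 WITH (SNAPSHOT) SET Name = 'Einstein' WHERE Id = 1;
-- Fails immediately with an update-conflict error (41302): the row has been
-- updated by another transaction since this transaction started. No lock is
-- waited on; the losing transaction simply retries.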
The bottlenecks to overcome: Latches
No more threads waiting.
We want to reduce latch waits → so we can take advantage of multiple CPU cores → so we will use lock-free data structures.
Latches
• Latches are 'region locks':
a mechanism used to protect a region of code or data structures
against simultaneous thread access.
• Region locks implement an 'acquire/release' pattern: a lock is
first acquired, the protected region executes, and then the lock is
released.
• All shared data structures must be protected with latches.
• In a system with many cores or very high concurrency, the region
being protected becomes a bottleneck.
• Highly contended resources include the lock manager, the tail of the
transaction log, and the last page of a B-tree index.
Lock free mechanism
• Hekaton uses lock-free data structures.
• This is very complex algorithm stuff for university professors, so it
will not be explained now.
The bottlenecks to overcome: Logging
Logging and durability.
We want to reduce logging and disk-write time → so we can support durability → so we will minimize logging.
Durability?
• Durability was one of the main goals of the
development:
we want data to be available also after a shutdown or an unexpected
crash.
• RAM might be cheap, but it still can't survive a power
outage.
• Conclusion: we must write something to the disk.
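Durability is chosen per table. A sketch (the table is illustrative): SCHEMA_AND_DATA persists rows across restarts, while SCHEMA_ONLY persists only the table definition, so the table recovers empty; handy for staging or session-state data.

CREATE TABLE dbo.SessionState
(
    SessionID INT NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    Payload VARBINARY(8000) NULL
)
WITH (MEMORY_OPTIMIZED = ON,
      DURABILITY = SCHEMA_ONLY);  -- data is not logged and is lost on restart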
Logging today
• Because dirty data can be written to the data files by the
checkpoint, we need to write every action and
its UNDO information to the transaction log.
• Data is logged by physical structure: if a row changes, we see a
log record for every index on that table.
Logging
• To support the high throughput, the following
concepts are applied:
• Index operations are not logged (no log records for physical structure
modifications; the work is pushed to recovery).
• No undo information is logged; only committed transactions are.
• Each transaction is logged in a single, potentially large, log record.
(Fewer log records minimize the log-header overhead and reduce the contention for
inserting into the log buffer.)
• Hekaton tries to group multiple log records into one large I/O.
• Hekaton is designed to support multiple concurrently generated log streams
per database, to avoid any scaling bottlenecks at the tail of the log.
• Combine with "Delayed Durability" (new in 2014) and you have a hoverboard.
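A sketch of the "Delayed Durability" option mentioned above (the database name is illustrative): log flushes are batched instead of forced on every commit, trading a small loss window for throughput.

-- Allow delayed durability at the database level:
ALTER DATABASE MyOltpDb SET DELAYED_DURABILITY = ALLOWED;

-- Opt in per transaction at commit time:
BEGIN TRANSACTION;
    -- ... DML against memory-optimized tables ...
COMMIT TRANSACTION WITH (DELAYED_DURABILITY = ON);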
Checkpoint & Recovery
• We can't rely only on the transaction log for durability, because
no log truncation could occur and recovery would take forever.
• Checkpoint files are effectively a compressed version of the logged
transactions.
• Checkpointing is optimized for sequential access (data is only
written, never updated or deleted in place).
• Checkpoint-related I/O occurs incrementally and continuously.
• Multiple checkpoint files exist, to allow parallelism of the recovery
process.
• Indexes are rebuilt during recovery.
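You can watch these checkpoint files from the database itself; a sketch using the sys.dm_db_xtp_checkpoint_files DMV (run in the context of a database with a memory-optimized filegroup):

-- Checkpoint file pairs by type and state.
SELECT file_type_desc,     -- DATA or DELTA
       state_desc,         -- e.g. PRECREATED, UNDER CONSTRUCTION, ACTIVE
       COUNT(*) AS file_count
FROM sys.dm_db_xtp_checkpoint_files
GROUP BY file_type_desc, state_desc;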
The bottlenecks to overcome: Procedures interpretation
Native compilation.
We want to improve query speed → so we need to reduce the number of CPU instructions → so we developed natively compiled stored procedures.
Query Interpretation
• The current interpreter (which gets a physical query plan
as input) is totally generic and supports every table, every type,
etc.
• It performs many run-time checks during the execution of even
simple statements.
• It is not fast, but it was fast enough when data came from disk.
• Today, CPUs are not getting any faster, so
we need to lower the number of CPU instructions used to perform
query processing and business logic execution.
Native stored procedures compilation
• The primary goal is to support efficient execution
of compile-once-and-execute-many-times workloads, as
opposed to optimizing the execution of ad hoc queries.
• Natively compiled SPs may interact with Hekaton tables only.
• The In-Memory OLTP compiler leverages the
query optimizer to create an efficient execution plan for each of
the queries in the stored procedure.
• The stored procedure is translated into C and compiled to
native code (a DLL). The DLL is slim and specific to the query.
Improvements
• The procedure is compiled as a single function, so
we avoid costly argument passing between functions and
expensive function calls.
• Rows do not go through all operators when it's not needed.
• To avoid runtime checks, compiled stored procedures execute in
a predefined security context.
• Compiled stored procedures must be schema-bound, to avoid
costly schema locks.
Things to remember
• Natively compiled stored procedures are not
automatically recompiled if the data in the table changes.
• There are some limitations on the T-SQL surface area we can use
(for now).
• They must be compiled with a security context.
• Using natively compiled SPs gives us the
biggest performance boost!
Conclusion
• Procedures interpretation → natively compiled SPs
• Logging → minimal logging and checkpointing
• Latches → lock-free data structures
• Concurrency locking → multi-version concurrency control
• Disk IO → in-memory and memory-optimized data structures
Performance results
CPU Efficiency for Lookups
• Random lookups in a table with 10M rows
• All data in memory
• Intel Xeon W3520 2.67 GHz

Transaction size (#lookups) | SQL Table (M cycles) | Hekaton Table (M cycles) | Speedup
1                           | 0.734                | 0.040                    | 10.8x
10                          | 0.937                | 0.051                    | 18.4x
100                         | 2.72                 | 0.150                    | 18.1x
1,000                       | 20.1                 | 1.063                    | 18.9x
10,000                      | 201                  | 9.85                     | 20.4x

• Hekaton performance: 2.7M lookups/sec/core
CPU Efficiency for Updates
• Random updates, 10M rows, one index, snapshot isolation
• Log IO disabled (disk became the bottleneck)
• Intel Xeon W3520 2.67 GHz

Transaction size (#updates) | SQL Table (M cycles) | Hekaton Table (M cycles) | Speedup
1                           | 0.910                | 0.045                    | 20.2x
10                          | 1.38                 | 0.059                    | 23.4x
100                         | 8.17                 | 0.260                    | 31.4x
1,000                       | 41.9                 | 1.50                     | 27.9x
10,000                      | 439                  | 14.4                     | 30.5x

• Hekaton performance: 1.9M updates/sec/core
High Contention Throughput
Workload: read/insert into a table with a unique index.
Insert txn (50%): append a batch of 100 rows.
Read txn (50%): read the last inserted batch of rows.
More than just performance
SQL integration
• The engine is completely integrated with SQL 2014:
• No hidden licensing fees.
• No need to copy data.
• No need to support a new technology.
• No need to maintain 2 DBs.
• Migration can be done in stages.
• The Hekaton engine is transparent to the application.
How is it integrated?
• Use your existing DBs.
• In-Memory tables and disk tables can be joined together easily.
• Use the same installation and connection interface.
• Use the same T-SQL language.
• Backup the same way you always did.
• Manage and maintain the DB and storage in the same way and using
the same tools.
• Same tools you're used to: DMVs, SSMS, perf counters, Resource
Governor...
• Out-of-the-box Integration with SQL HA solutions.
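For example, a sketch of such a mixed query (table names are illustrative): a memory-optimized Customer table joined to a regular disk-based Order table in one plain T-SQL statement (interop mode).

SELECT o.OrderID, o.OrderDate, c.Name
FROM dbo.[Order] AS o            -- disk-based table
JOIN dbo.Customer AS c           -- memory-optimized table
    ON c.CustomerID = o.CustomerID
WHERE c.CustomerSince >= '20130101';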
Competitors
• IBM SolidDB
• Oracle TimesTen
• SAP In-Memory Computing
• Sybase In-Memory Databases
• VoltDB
How to get started
Some scripts and basic knowledge.

Migration is as easy as:
1. Upgrade your DB to run on a SQL 2014 instance.
2. Identify performance-bottleneck tables, create them as memory-optimized, and migrate the data.
3. Continue querying the DB without any change, using interop mode.
4. Identify required code changes and migrate procedures to native mode.

No additional hardware or licensing is required.
New tools help us identify potential Hekaton tables and problems.
Working with Hekaton
• Adding a filegroup
• Migrating/creating tables
• Migrating procedures
Adding a filegroup DEMO
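The demo boils down to something like this sketch (database name and path are illustrative): In-Memory OLTP requires a filegroup marked CONTAINS MEMORY_OPTIMIZED_DATA, which is where the checkpoint files live.

ALTER DATABASE MyOltpDb
    ADD FILEGROUP MyOltpDb_mod CONTAINS MEMORY_OPTIMIZED_DATA;

ALTER DATABASE MyOltpDb
    ADD FILE (NAME = 'MyOltpDb_mod', FILENAME = 'C:\Data\MyOltpDb_mod')
    TO FILEGROUP MyOltpDb_mod;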
Memory optimized tables and indexes

Create Table DDL:

CREATE TABLE [Customer](
    [CustomerID] INT NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    [Name] NVARCHAR(250) NOT NULL,
    [CustomerSince] DATETIME NULL
        INDEX [ICustomerSince] NONCLUSTERED
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);

• MEMORY_OPTIMIZED = ON: this table is memory-optimized.
• DURABILITY = SCHEMA_AND_DATA: this table is durable.
• Indexes are specified inline; NONCLUSTERED (range) indexes are supported.
• Hash index: choose BUCKET_COUNT at 1-2x the number of unique index key values.
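To check BUCKET_COUNT sizing after the fact, a sketch using the sys.dm_db_xtp_hash_index_stats DMV: many empty buckets suggest the count is too high; long chains suggest it is too low (or the keys are heavily duplicated).

SELECT OBJECT_NAME(hs.object_id) AS table_name,
       i.name AS index_name,
       hs.total_bucket_count,
       hs.empty_bucket_count,
       hs.avg_chain_length,
       hs.max_chain_length
FROM sys.dm_db_xtp_hash_index_stats AS hs
JOIN sys.indexes AS i
    ON i.object_id = hs.object_id AND i.index_id = hs.index_id;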
Memory-Optimized Indexes
• Exist only in memory.
• Rebuilt on database startup.
• Do not contain data rows: indexes contain memory pointers to the data
rows.
• No duplication of data.
• All indexes are covering (they point directly at the full row).
Working with tables DEMO
Natively optimized stored procedures

Accessing Memory Optimized Tables

Interpreted T-SQL access:
• Access both memory- and disk-based tables
• Less performant
• Virtually full T-SQL surface
• When to use: ad hoc queries; reporting-style queries; speeding up app migration

Natively compiled stored procs:
• Access only memory-optimized tables
• Maximum performance
• Limited T-SQL surface area
• When to use: OLTP-style operations; performance-critical business logic
Create Procedure DDL:

CREATE PROCEDURE [dbo].[InsertOrder] @id INT, @date DATETIME
WITH
    NATIVE_COMPILATION,
    SCHEMABINDING,
    EXECUTE AS OWNER
AS
BEGIN ATOMIC
    WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT,
          LANGUAGE = N'us_english')
    -- insert T-SQL here
END

• NATIVE_COMPILATION: this proc is natively compiled.
• SCHEMABINDING: native procs must be schema-bound.
• BEGIN ATOMIC: atomic blocks create a transaction if there is none; otherwise they create a savepoint.
• EXECUTE AS: an execution context is required.
• Session settings (isolation level, language) are fixed at create time.
Procedure Creation
CREATE PROC DDL → query optimization → code generation and compilation → procedure DLL produced → procedure DLL loaded
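Calling it is ordinary T-SQL (parameter values below are illustrative); the call dispatches into the loaded DLL rather than the interpreter.

EXEC [dbo].[InsertOrder] @id = 42, @date = '20140401';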
Working with procedures DEMO
Limitations on In-Memory OLTP in SQL 2014

Tables:
• Triggers: no DDL/DML triggers
• Data types: no LOBs, no XML, and no CLR data types
• Constraints: no FOREIGN KEY and no CHECK constraints
• No schema changes (ALTER TABLE): need to drop/recreate the table
• No add/remove index: need to drop/recreate the table

Natively compiled stored procedures:
• No outer join, no OR, no subqueries, no CASE
• Limited built-in functions (core math, date/time, and string functions are available)
Migration tools DEMO
The future never
looked brighter
Questions?
Thank you

Contenu connexe

Tendances

SQL Server In-Memory OLTP: What Every SQL Professional Should Know
SQL Server In-Memory OLTP: What Every SQL Professional Should KnowSQL Server In-Memory OLTP: What Every SQL Professional Should Know
SQL Server In-Memory OLTP: What Every SQL Professional Should KnowBob Ward
 
10 things, an Oracle DBA should care about when moving to PostgreSQL
10 things, an Oracle DBA should care about when moving to PostgreSQL10 things, an Oracle DBA should care about when moving to PostgreSQL
10 things, an Oracle DBA should care about when moving to PostgreSQLPostgreSQL-Consulting
 
Evan Ellis "Tumblr. Massively Sharded MySQL"
Evan Ellis "Tumblr. Massively Sharded MySQL"Evan Ellis "Tumblr. Massively Sharded MySQL"
Evan Ellis "Tumblr. Massively Sharded MySQL"Alexey Mahotkin
 
Sql server 2016 it just runs faster sql bits 2017 edition
Sql server 2016 it just runs faster   sql bits 2017 editionSql server 2016 it just runs faster   sql bits 2017 edition
Sql server 2016 it just runs faster sql bits 2017 editionBob Ward
 
IN-MEMORY DATABASE SYSTEMS FOR BIG DATA MANAGEMENT.SAP HANA DATABASE.
IN-MEMORY DATABASE SYSTEMS FOR BIG DATA MANAGEMENT.SAP HANA DATABASE.IN-MEMORY DATABASE SYSTEMS FOR BIG DATA MANAGEMENT.SAP HANA DATABASE.
IN-MEMORY DATABASE SYSTEMS FOR BIG DATA MANAGEMENT.SAP HANA DATABASE.George Joseph
 
Brk3288 sql server v.next with support on linux, windows and containers was...
Brk3288 sql server v.next with support on linux, windows and containers   was...Brk3288 sql server v.next with support on linux, windows and containers   was...
Brk3288 sql server v.next with support on linux, windows and containers was...Bob Ward
 
Jss 2015 in memory and operational analytics
Jss 2015   in memory and operational analyticsJss 2015   in memory and operational analytics
Jss 2015 in memory and operational analyticsDavid Barbarin
 
Ora mysql bothGetting the best of both worlds with Oracle 11g and MySQL Enter...
Ora mysql bothGetting the best of both worlds with Oracle 11g and MySQL Enter...Ora mysql bothGetting the best of both worlds with Oracle 11g and MySQL Enter...
Ora mysql bothGetting the best of both worlds with Oracle 11g and MySQL Enter...Ivan Zoratti
 
Get More Out of MySQL with TokuDB
Get More Out of MySQL with TokuDBGet More Out of MySQL with TokuDB
Get More Out of MySQL with TokuDBTim Callaghan
 
Colvin exadata mistakes_ioug_2014
Colvin exadata mistakes_ioug_2014Colvin exadata mistakes_ioug_2014
Colvin exadata mistakes_ioug_2014marvin herrera
 
SQL Server It Just Runs Faster
SQL Server It Just Runs FasterSQL Server It Just Runs Faster
SQL Server It Just Runs FasterBob Ward
 
Design Patterns for Distributed Non-Relational Databases
Design Patterns for Distributed Non-Relational DatabasesDesign Patterns for Distributed Non-Relational Databases
Design Patterns for Distributed Non-Relational Databasesguestdfd1ec
 
Connector/J Beyond JDBC: the X DevAPI for Java and MySQL as a Document Store
Connector/J Beyond JDBC: the X DevAPI for Java and MySQL as a Document StoreConnector/J Beyond JDBC: the X DevAPI for Java and MySQL as a Document Store
Connector/J Beyond JDBC: the X DevAPI for Java and MySQL as a Document StoreFilipe Silva
 
Build a modern data platform.pptx
Build a modern data platform.pptxBuild a modern data platform.pptx
Build a modern data platform.pptxIke Ellis
 
Kill mysql-performance
Kill mysql-performanceKill mysql-performance
Kill mysql-performancekriptonium
 
Brk2051 sql server on linux and docker
Brk2051 sql server on linux and dockerBrk2051 sql server on linux and docker
Brk2051 sql server on linux and dockerBob Ward
 
Factual presentation for pg west 2010
Factual presentation for pg west 2010Factual presentation for pg west 2010
Factual presentation for pg west 2010ericlui
 
Run Cloud Native MySQL NDB Cluster in Kubernetes
Run Cloud Native MySQL NDB Cluster in KubernetesRun Cloud Native MySQL NDB Cluster in Kubernetes
Run Cloud Native MySQL NDB Cluster in KubernetesBernd Ocklin
 
Oracle to Postgres Migration - part 1
Oracle to Postgres Migration - part 1Oracle to Postgres Migration - part 1
Oracle to Postgres Migration - part 1PgTraining
 

Tendances (20)

Running MySQL on Linux
Running MySQL on LinuxRunning MySQL on Linux
Running MySQL on Linux
 
SQL Server In-Memory OLTP: What Every SQL Professional Should Know
SQL Server In-Memory OLTP: What Every SQL Professional Should KnowSQL Server In-Memory OLTP: What Every SQL Professional Should Know
SQL Server In-Memory OLTP: What Every SQL Professional Should Know
 
10 things, an Oracle DBA should care about when moving to PostgreSQL
10 things, an Oracle DBA should care about when moving to PostgreSQL10 things, an Oracle DBA should care about when moving to PostgreSQL
10 things, an Oracle DBA should care about when moving to PostgreSQL
 
Evan Ellis "Tumblr. Massively Sharded MySQL"
Evan Ellis "Tumblr. Massively Sharded MySQL"Evan Ellis "Tumblr. Massively Sharded MySQL"
Evan Ellis "Tumblr. Massively Sharded MySQL"
 
Sql server 2016 it just runs faster sql bits 2017 edition
Sql server 2016 it just runs faster   sql bits 2017 editionSql server 2016 it just runs faster   sql bits 2017 edition
Sql server 2016 it just runs faster sql bits 2017 edition
 
IN-MEMORY DATABASE SYSTEMS FOR BIG DATA MANAGEMENT.SAP HANA DATABASE.
IN-MEMORY DATABASE SYSTEMS FOR BIG DATA MANAGEMENT.SAP HANA DATABASE.IN-MEMORY DATABASE SYSTEMS FOR BIG DATA MANAGEMENT.SAP HANA DATABASE.
IN-MEMORY DATABASE SYSTEMS FOR BIG DATA MANAGEMENT.SAP HANA DATABASE.
 
Brk3288 sql server v.next with support on linux, windows and containers was...
Brk3288 sql server v.next with support on linux, windows and containers   was...Brk3288 sql server v.next with support on linux, windows and containers   was...
Brk3288 sql server v.next with support on linux, windows and containers was...
 
Jss 2015 in memory and operational analytics
Jss 2015   in memory and operational analyticsJss 2015   in memory and operational analytics
Jss 2015 in memory and operational analytics
 
Ora mysql bothGetting the best of both worlds with Oracle 11g and MySQL Enter...
Ora mysql bothGetting the best of both worlds with Oracle 11g and MySQL Enter...Ora mysql bothGetting the best of both worlds with Oracle 11g and MySQL Enter...
Ora mysql bothGetting the best of both worlds with Oracle 11g and MySQL Enter...
 
Get More Out of MySQL with TokuDB
Get More Out of MySQL with TokuDBGet More Out of MySQL with TokuDB
Get More Out of MySQL with TokuDB
 
Colvin exadata mistakes_ioug_2014
Colvin exadata mistakes_ioug_2014Colvin exadata mistakes_ioug_2014
Colvin exadata mistakes_ioug_2014
 
SQL Server It Just Runs Faster
SQL Server It Just Runs FasterSQL Server It Just Runs Faster
SQL Server It Just Runs Faster
 
Design Patterns for Distributed Non-Relational Databases
Design Patterns for Distributed Non-Relational DatabasesDesign Patterns for Distributed Non-Relational Databases
Design Patterns for Distributed Non-Relational Databases
 
Connector/J Beyond JDBC: the X DevAPI for Java and MySQL as a Document Store
Connector/J Beyond JDBC: the X DevAPI for Java and MySQL as a Document StoreConnector/J Beyond JDBC: the X DevAPI for Java and MySQL as a Document Store
Connector/J Beyond JDBC: the X DevAPI for Java and MySQL as a Document Store
 
Build a modern data platform.pptx
Build a modern data platform.pptxBuild a modern data platform.pptx
Build a modern data platform.pptx
 
Kill mysql-performance
Kill mysql-performanceKill mysql-performance
Kill mysql-performance
 
Brk2051 sql server on linux and docker
Brk2051 sql server on linux and dockerBrk2051 sql server on linux and docker
Brk2051 sql server on linux and docker
 
Factual presentation for pg west 2010
Factual presentation for pg west 2010Factual presentation for pg west 2010
Factual presentation for pg west 2010
 
Run Cloud Native MySQL NDB Cluster in Kubernetes
Run Cloud Native MySQL NDB Cluster in KubernetesRun Cloud Native MySQL NDB Cluster in Kubernetes
Run Cloud Native MySQL NDB Cluster in Kubernetes
 
Oracle to Postgres Migration - part 1
Oracle to Postgres Migration - part 1Oracle to Postgres Migration - part 1
Oracle to Postgres Migration - part 1
 

Similaire à Hekaton introduction for .Net developers

Novedades SQL Server 2014
Novedades SQL Server 2014Novedades SQL Server 2014
Novedades SQL Server 2014netmind
 
SQL Server In-Memory OLTP introduction (Hekaton)
SQL Server In-Memory OLTP introduction (Hekaton)SQL Server In-Memory OLTP introduction (Hekaton)
SQL Server In-Memory OLTP introduction (Hekaton)Shy Engelberg
 
Business Insight 2014 - Microsofts nye BI og database platform - Erling Skaal...
Business Insight 2014 - Microsofts nye BI og database platform - Erling Skaal...Business Insight 2014 - Microsofts nye BI og database platform - Erling Skaal...
Business Insight 2014 - Microsofts nye BI og database platform - Erling Skaal...Microsoft
 
Inside sql server in memory oltp sql sat nyc 2017
Inside sql server in memory oltp sql sat nyc 2017Inside sql server in memory oltp sql sat nyc 2017
Inside sql server in memory oltp sql sat nyc 2017Bob Ward
 
Implementing the Databese Server session 02
Implementing the Databese Server session 02Implementing the Databese Server session 02
Implementing the Databese Server session 02Guillermo Julca
 
Building a Database for the End of the World
Building a Database for the End of the WorldBuilding a Database for the End of the World
Building a Database for the End of the Worldjhugg
 
SQL Server 2014 Mission Critical Performance - Level 300 Presentation
SQL Server 2014 Mission Critical Performance - Level 300 PresentationSQL Server 2014 Mission Critical Performance - Level 300 Presentation
SQL Server 2014 Mission Critical Performance - Level 300 PresentationDavid J Rosenthal
 
Why databases cry at night
Why databases cry at nightWhy databases cry at night
Why databases cry at nightMichael Yarichuk
 
Sql Server 2014 In Memory
Sql Server 2014 In MemorySql Server 2014 In Memory
Sql Server 2014 In MemoryRavi Okade
 
[DBA]_HiramFleitas_SQL_PASS_Summit_2017_Summary
[DBA]_HiramFleitas_SQL_PASS_Summit_2017_Summary[DBA]_HiramFleitas_SQL_PASS_Summit_2017_Summary
[DBA]_HiramFleitas_SQL_PASS_Summit_2017_SummaryHiram Fleitas León
 
Survey of Big Data Infrastructures
Survey of Big Data InfrastructuresSurvey of Big Data Infrastructures
Survey of Big Data Infrastructuresm.a.kirn
 
A Closer Look at Apache Kudu
A Closer Look at Apache KuduA Closer Look at Apache Kudu
A Closer Look at Apache KuduAndriy Zabavskyy
 
DatEngConf SF16 - Apache Kudu: Fast Analytics on Fast Data
DatEngConf SF16 - Apache Kudu: Fast Analytics on Fast DataDatEngConf SF16 - Apache Kudu: Fast Analytics on Fast Data
DatEngConf SF16 - Apache Kudu: Fast Analytics on Fast DataHakka Labs
 
Everything We Learned About In-Memory Data Layout While Building VoltDB
Everything We Learned About In-Memory Data Layout While Building VoltDBEverything We Learned About In-Memory Data Layout While Building VoltDB
Everything We Learned About In-Memory Data Layout While Building VoltDBjhugg
 
7. Key-Value Databases: In Depth
7. Key-Value Databases: In Depth7. Key-Value Databases: In Depth
7. Key-Value Databases: In DepthFabio Fumarola
 
20140128 webinar-get-more-out-of-mysql-with-tokudb-140319063324-phpapp02
20140128 webinar-get-more-out-of-mysql-with-tokudb-140319063324-phpapp0220140128 webinar-get-more-out-of-mysql-with-tokudb-140319063324-phpapp02
20140128 webinar-get-more-out-of-mysql-with-tokudb-140319063324-phpapp02Francisco Gonçalves
 

Similaire à Hekaton introduction for .Net developers (20)

Novedades SQL Server 2014
Novedades SQL Server 2014Novedades SQL Server 2014
Novedades SQL Server 2014
 
SQL Server In-Memory OLTP introduction (Hekaton)
SQL Server In-Memory OLTP introduction (Hekaton)SQL Server In-Memory OLTP introduction (Hekaton)
SQL Server In-Memory OLTP introduction (Hekaton)
 
Business Insight 2014 - Microsofts nye BI og database platform - Erling Skaal...
Business Insight 2014 - Microsofts nye BI og database platform - Erling Skaal...Business Insight 2014 - Microsofts nye BI og database platform - Erling Skaal...
Business Insight 2014 - Microsofts nye BI og database platform - Erling Skaal...
 
Inside sql server in memory oltp sql sat nyc 2017
Inside sql server in memory oltp sql sat nyc 2017Inside sql server in memory oltp sql sat nyc 2017
Inside sql server in memory oltp sql sat nyc 2017
 
Redshift deep dive
Redshift deep diveRedshift deep dive
Redshift deep dive
 
L6.sp17.pptx
L6.sp17.pptxL6.sp17.pptx
L6.sp17.pptx
 
Implementing the Databese Server session 02
Implementing the Databese Server session 02Implementing the Databese Server session 02
Implementing the Databese Server session 02
 
Building a Database for the End of the World
Building a Database for the End of the WorldBuilding a Database for the End of the World
Building a Database for the End of the World
 
SQL Server 2014 Mission Critical Performance - Level 300 Presentation
SQL Server 2014 Mission Critical Performance - Level 300 PresentationSQL Server 2014 Mission Critical Performance - Level 300 Presentation
SQL Server 2014 Mission Critical Performance - Level 300 Presentation
 
Why databases cry at night
Why databases cry at nightWhy databases cry at night
Why databases cry at night
 
Sql Server 2014 In Memory
Sql Server 2014 In MemorySql Server 2014 In Memory
Sql Server 2014 In Memory
 
[DBA]_HiramFleitas_SQL_PASS_Summit_2017_Summary
[DBA]_HiramFleitas_SQL_PASS_Summit_2017_Summary[DBA]_HiramFleitas_SQL_PASS_Summit_2017_Summary
[DBA]_HiramFleitas_SQL_PASS_Summit_2017_Summary
 
Survey of Big Data Infrastructures
Survey of Big Data InfrastructuresSurvey of Big Data Infrastructures
Survey of Big Data Infrastructures
 
A Closer Look at Apache Kudu
A Closer Look at Apache KuduA Closer Look at Apache Kudu
A Closer Look at Apache Kudu
 
DatEngConf SF16 - Apache Kudu: Fast Analytics on Fast Data
DatEngConf SF16 - Apache Kudu: Fast Analytics on Fast DataDatEngConf SF16 - Apache Kudu: Fast Analytics on Fast Data
DatEngConf SF16 - Apache Kudu: Fast Analytics on Fast Data
 
Everything We Learned About In-Memory Data Layout While Building VoltDB
Everything We Learned About In-Memory Data Layout While Building VoltDBEverything We Learned About In-Memory Data Layout While Building VoltDB
Everything We Learned About In-Memory Data Layout While Building VoltDB
 
7. Key-Value Databases: In Depth
7. Key-Value Databases: In Depth7. Key-Value Databases: In Depth
7. Key-Value Databases: In Depth
 
25 snowflake
25 snowflake25 snowflake
25 snowflake
 
20140128 webinar-get-more-out-of-mysql-with-tokudb-140319063324-phpapp02
20140128 webinar-get-more-out-of-mysql-with-tokudb-140319063324-phpapp0220140128 webinar-get-more-out-of-mysql-with-tokudb-140319063324-phpapp02
20140128 webinar-get-more-out-of-mysql-with-tokudb-140319063324-phpapp02
 
NoSQL.pptx
NoSQL.pptxNoSQL.pptx
NoSQL.pptx
 

Dernier

UiPath Community: Communication Mining from Zero to Hero
UiPath Community: Communication Mining from Zero to HeroUiPath Community: Communication Mining from Zero to Hero
UiPath Community: Communication Mining from Zero to HeroUiPathCommunity
 
Email Marketing Automation for Bonterra Impact Management (fka Social Solutio...
Email Marketing Automation for Bonterra Impact Management (fka Social Solutio...Email Marketing Automation for Bonterra Impact Management (fka Social Solutio...
Email Marketing Automation for Bonterra Impact Management (fka Social Solutio...Jeffrey Haguewood
 
All These Sophisticated Attacks, Can We Really Detect Them - PDF
All These Sophisticated Attacks, Can We Really Detect Them - PDFAll These Sophisticated Attacks, Can We Really Detect Them - PDF
All These Sophisticated Attacks, Can We Really Detect Them - PDFMichael Gough
 
Microservices, Docker deploy and Microservices source code in C#
Microservices, Docker deploy and Microservices source code in C#Microservices, Docker deploy and Microservices source code in C#
Microservices, Docker deploy and Microservices source code in C#Karmanjay Verma
 
Transcript: New from BookNet Canada for 2024: BNC SalesData and LibraryData -...
Transcript: New from BookNet Canada for 2024: BNC SalesData and LibraryData -...Transcript: New from BookNet Canada for 2024: BNC SalesData and LibraryData -...
Transcript: New from BookNet Canada for 2024: BNC SalesData and LibraryData -...BookNet Canada
 
[Webinar] SpiraTest - Setting New Standards in Quality Assurance
[Webinar] SpiraTest - Setting New Standards in Quality Assurance[Webinar] SpiraTest - Setting New Standards in Quality Assurance
[Webinar] SpiraTest - Setting New Standards in Quality AssuranceInflectra
 
Tampa BSides - The No BS SOC (slides from April 6, 2024 talk)
Tampa BSides - The No BS SOC (slides from April 6, 2024 talk)Tampa BSides - The No BS SOC (slides from April 6, 2024 talk)
Tampa BSides - The No BS SOC (slides from April 6, 2024 talk)Mark Simos
 
Generative AI - Gitex v1Generative AI - Gitex v1.pptx
Generative AI - Gitex v1Generative AI - Gitex v1.pptxGenerative AI - Gitex v1Generative AI - Gitex v1.pptx
Generative AI - Gitex v1Generative AI - Gitex v1.pptxfnnc6jmgwh
 
Testing tools and AI - ideas what to try with some tool examples
Testing tools and AI - ideas what to try with some tool examplesTesting tools and AI - ideas what to try with some tool examples
Testing tools and AI - ideas what to try with some tool examplesKari Kakkonen
 
JET Technology Labs White Paper for Virtualized Security and Encryption Techn...
JET Technology Labs White Paper for Virtualized Security and Encryption Techn...JET Technology Labs White Paper for Virtualized Security and Encryption Techn...
JET Technology Labs White Paper for Virtualized Security and Encryption Techn...amber724300
 
4. Cobus Valentine- Cybersecurity Threats and Solutions for the Public Sector
4. Cobus Valentine- Cybersecurity Threats and Solutions for the Public Sector4. Cobus Valentine- Cybersecurity Threats and Solutions for the Public Sector
4. Cobus Valentine- Cybersecurity Threats and Solutions for the Public Sectoritnewsafrica
 
Potential of AI (Generative AI) in Business: Learnings and Insights
Potential of AI (Generative AI) in Business: Learnings and InsightsPotential of AI (Generative AI) in Business: Learnings and Insights
Potential of AI (Generative AI) in Business: Learnings and InsightsRavi Sanghani
 
Abdul Kader Baba- Managing Cybersecurity Risks and Compliance Requirements i...
Abdul Kader Baba- Managing Cybersecurity Risks  and Compliance Requirements i...Abdul Kader Baba- Managing Cybersecurity Risks  and Compliance Requirements i...
Abdul Kader Baba- Managing Cybersecurity Risks and Compliance Requirements i...itnewsafrica
 
Bridging Between CAD & GIS: 6 Ways to Automate Your Data Integration
Bridging Between CAD & GIS:  6 Ways to Automate Your Data IntegrationBridging Between CAD & GIS:  6 Ways to Automate Your Data Integration
Bridging Between CAD & GIS: 6 Ways to Automate Your Data Integrationmarketing932765
 
How to Effectively Monitor SD-WAN and SASE Environments with ThousandEyes
How to Effectively Monitor SD-WAN and SASE Environments with ThousandEyesHow to Effectively Monitor SD-WAN and SASE Environments with ThousandEyes
How to Effectively Monitor SD-WAN and SASE Environments with ThousandEyesThousandEyes
 
Modern Roaming for Notes and Nomad – Cheaper Faster Better Stronger
Modern Roaming for Notes and Nomad – Cheaper Faster Better StrongerModern Roaming for Notes and Nomad – Cheaper Faster Better Stronger
Modern Roaming for Notes and Nomad – Cheaper Faster Better Strongerpanagenda
 
Assure Ecommerce and Retail Operations Uptime with ThousandEyes
Assure Ecommerce and Retail Operations Uptime with ThousandEyesAssure Ecommerce and Retail Operations Uptime with ThousandEyes
Assure Ecommerce and Retail Operations Uptime with ThousandEyesThousandEyes
 
Time Series Foundation Models - current state and future directions
Time Series Foundation Models - current state and future directionsTime Series Foundation Models - current state and future directions
Time Series Foundation Models - current state and future directionsNathaniel Shimoni
 
Microsoft 365 Copilot: How to boost your productivity with AI – Part one: Ado...
Microsoft 365 Copilot: How to boost your productivity with AI – Part one: Ado...Microsoft 365 Copilot: How to boost your productivity with AI – Part one: Ado...
Microsoft 365 Copilot: How to boost your productivity with AI – Part one: Ado...Nikki Chapple
 
2024 April Patch Tuesday
2024 April Patch Tuesday2024 April Patch Tuesday
2024 April Patch TuesdayIvanti
 

Dernier (20)

UiPath Community: Communication Mining from Zero to Hero
UiPath Community: Communication Mining from Zero to HeroUiPath Community: Communication Mining from Zero to Hero
UiPath Community: Communication Mining from Zero to Hero
 
Email Marketing Automation for Bonterra Impact Management (fka Social Solutio...
Email Marketing Automation for Bonterra Impact Management (fka Social Solutio...Email Marketing Automation for Bonterra Impact Management (fka Social Solutio...
Email Marketing Automation for Bonterra Impact Management (fka Social Solutio...
 
All These Sophisticated Attacks, Can We Really Detect Them - PDF
All These Sophisticated Attacks, Can We Really Detect Them - PDFAll These Sophisticated Attacks, Can We Really Detect Them - PDF
All These Sophisticated Attacks, Can We Really Detect Them - PDF
 
Microservices, Docker deploy and Microservices source code in C#
Microservices, Docker deploy and Microservices source code in C#Microservices, Docker deploy and Microservices source code in C#
Microservices, Docker deploy and Microservices source code in C#
 
Transcript: New from BookNet Canada for 2024: BNC SalesData and LibraryData -...
Transcript: New from BookNet Canada for 2024: BNC SalesData and LibraryData -...Transcript: New from BookNet Canada for 2024: BNC SalesData and LibraryData -...
Transcript: New from BookNet Canada for 2024: BNC SalesData and LibraryData -...
 
[Webinar] SpiraTest - Setting New Standards in Quality Assurance
[Webinar] SpiraTest - Setting New Standards in Quality Assurance[Webinar] SpiraTest - Setting New Standards in Quality Assurance
[Webinar] SpiraTest - Setting New Standards in Quality Assurance
 
Tampa BSides - The No BS SOC (slides from April 6, 2024 talk)
Tampa BSides - The No BS SOC (slides from April 6, 2024 talk)Tampa BSides - The No BS SOC (slides from April 6, 2024 talk)
Tampa BSides - The No BS SOC (slides from April 6, 2024 talk)
 
Generative AI - Gitex v1Generative AI - Gitex v1.pptx
Generative AI - Gitex v1Generative AI - Gitex v1.pptxGenerative AI - Gitex v1Generative AI - Gitex v1.pptx
Generative AI - Gitex v1Generative AI - Gitex v1.pptx
 
Testing tools and AI - ideas what to try with some tool examples
Testing tools and AI - ideas what to try with some tool examplesTesting tools and AI - ideas what to try with some tool examples
Testing tools and AI - ideas what to try with some tool examples
 
JET Technology Labs White Paper for Virtualized Security and Encryption Techn...
JET Technology Labs White Paper for Virtualized Security and Encryption Techn...JET Technology Labs White Paper for Virtualized Security and Encryption Techn...
JET Technology Labs White Paper for Virtualized Security and Encryption Techn...
 
4. Cobus Valentine- Cybersecurity Threats and Solutions for the Public Sector
4. Cobus Valentine- Cybersecurity Threats and Solutions for the Public Sector4. Cobus Valentine- Cybersecurity Threats and Solutions for the Public Sector
4. Cobus Valentine- Cybersecurity Threats and Solutions for the Public Sector
 
Potential of AI (Generative AI) in Business: Learnings and Insights
Potential of AI (Generative AI) in Business: Learnings and InsightsPotential of AI (Generative AI) in Business: Learnings and Insights
Potential of AI (Generative AI) in Business: Learnings and Insights
 
Abdul Kader Baba- Managing Cybersecurity Risks and Compliance Requirements i...
Abdul Kader Baba- Managing Cybersecurity Risks  and Compliance Requirements i...Abdul Kader Baba- Managing Cybersecurity Risks  and Compliance Requirements i...
Abdul Kader Baba- Managing Cybersecurity Risks and Compliance Requirements i...
 
Bridging Between CAD & GIS: 6 Ways to Automate Your Data Integration
Bridging Between CAD & GIS:  6 Ways to Automate Your Data IntegrationBridging Between CAD & GIS:  6 Ways to Automate Your Data Integration
Bridging Between CAD & GIS: 6 Ways to Automate Your Data Integration
 
How to Effectively Monitor SD-WAN and SASE Environments with ThousandEyes
How to Effectively Monitor SD-WAN and SASE Environments with ThousandEyesHow to Effectively Monitor SD-WAN and SASE Environments with ThousandEyes
How to Effectively Monitor SD-WAN and SASE Environments with ThousandEyes
 
Modern Roaming for Notes and Nomad – Cheaper Faster Better Stronger
Modern Roaming for Notes and Nomad – Cheaper Faster Better StrongerModern Roaming for Notes and Nomad – Cheaper Faster Better Stronger
Modern Roaming for Notes and Nomad – Cheaper Faster Better Stronger
 
Assure Ecommerce and Retail Operations Uptime with ThousandEyes
Assure Ecommerce and Retail Operations Uptime with ThousandEyesAssure Ecommerce and Retail Operations Uptime with ThousandEyes
Assure Ecommerce and Retail Operations Uptime with ThousandEyes
 
Time Series Foundation Models - current state and future directions
Time Series Foundation Models - current state and future directionsTime Series Foundation Models - current state and future directions
Time Series Foundation Models - current state and future directions
 
Microsoft 365 Copilot: How to boost your productivity with AI – Part one: Ado...
Microsoft 365 Copilot: How to boost your productivity with AI – Part one: Ado...Microsoft 365 Copilot: How to boost your productivity with AI – Part one: Ado...
Microsoft 365 Copilot: How to boost your productivity with AI – Part one: Ado...
 
2024 April Patch Tuesday
2024 April Patch Tuesday2024 April Patch Tuesday
2024 April Patch Tuesday
 

Hekaton introduction for .Net developers

  • 1. Hekaton – SQL 2014 In-Memory OLTP A journey BACK TO THE FUTURE
  • 2.
  • 3. About me Shy Engelberg, SQL Server consultant. shy.engelberg@hynt.co.il 054-7717115 @ShyEngelberg www.blogs.Microsoft.co.il/ShyEngel www.hynt.co.il www.facebook.com/HYNTil
  • 4. Agenda • What is Hekaton? • Why now? • Removing the performance bottlenecks • Disk • Locking • Latches • Logging • Interpreted query language • Performance results • Integration into SQL Server
  • 5. What is Hekaton? • Hekaton - "hundred“ (100) in Greek. figures in an archaic stage of Greek mythology, three giants of incredible strength and ferocity each of them having a hundred hands (100 hands working together in parallel?) and fifty heads. (Wikipedia)
  • 6. What is Hekaton? • Hekaton is a new database engine optimized for memory resident data and OLTP workloads. It is optimized for large main memories and many-core processors. (Ohhh, and it‟s fully durable!) • Hekaton‟s new boring name is In-Memory OLTP. • The research and development took 5 years! • The initial goal was to gain X100 Performance improvement.
  • 7. 100
  • 8.
  • 9. The Past • RAM prices were very high: “…In 1990 I had an IBM PC 8088 it had a 20 meg HDD. It came with 640KB RAM. It was great for the day. It cost me $3000.00…” • CPUs had a single core. • SQL Server was designed when it could be assumed that main memory was very expensive, so data needed to reside on disk (except when it was actually needed for processing) - Disk-optimized (Buffer pools, data pages, costing rules)
  • 10.
  • 11. Today (and also 5 years ago) • RAM prices are low, CPU cores amount is increasing: A server with 32 cores and 1TB of memory for about $50K. (50$ = HP DL980 with 2TB of RAM) • The majority of OLTP databases fit entirely in 1TB and even the largest OLTP databases can keep the active working set in memory. • Unlike “Big Data,” most OLTP data volumes are growing at more modest rates • Data management products are becoming “workload specific”
  • 12. What should we do? • Our goal is to gain a X10-100 throughput improvement. (for OLTP workloads) This cannot be achieved by optimizing existing SQL Server mechanisms. • Recognizing the trends, SQL Server began building a database engine optimized for large main memories and many-core CPU • Hekaton is not a response to competitor‟s offers.
  • 13. What is OLTP workload? • OLTP – Online transactions processing: • High concurrency • Reads and writes in the same time • Short transactions – the reliability of the data is very important • Usually working with small datasets. • Used for retail, e-commerce, gaming, Forex, tourism reservation systems etc. • As Opposed to – DWH reporting workload: • Writes usually happen in batches, • a single row is not important, but aggregation of many rows. • Not normalized. • Usually low concurrency.
  • 14. Do we have to build a new engine? • Usually, OLTP Performance improvement sources were: • CPU getting faster • Software getting better • But now, CPUs are not getting any faster. DBMS have matured. Yes, we must build a new engine.
  • 15. What should we do? In Other words: 1. Specialized database engine tuned for OLTP workloads. 2. Fitting most or all of data required by a workload into main- memory. 3. Lower latency time for data operations
  • 16. Is it just an In-Memory DB?
  • 17. Is it just an In-Memory DB? CREATE TABLE [Customer] ( [CustomerID] INT NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000), [Name] NVARCHAR(250) NOT NULL INDEX [IName] HASH WITH (BUCKET_COUNT = 1000000), [CustomerSince] DATETIME NULL ) WITH ( MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA); * Hekaton is defined at a table level.
  • 18. A X100 performance? How will we do it? – Architecture concepts: 1. Optimize indexes for main memory 2. Eliminate latches and locks 3. Compile requests to native code *Me: Use everything the hardware has to offer (multi-cores, large memory), make it scalable and don‟t keep no one waiting.
  • 19. The bottlenecks to overcome Procedures interpretation Logging Latches Concurrency Locking Disk IO
  • 20. The bottlenecks to overcome Disk IO
  • 22. Traditional DBMS • Data is saved on disk. • It is organized in 8KB pages. • Every table and index is made out of extents – 8 pages – 64 KB. • It is advised for data of a single table to reside physically next to each other. • Data is read from disk into a Buffer pool in memory (minimum of a read from disk is extent) • Every data needed for processing is read into buffer pool. • Data is also updated in the buffer pool and a process called Checkpoint writes the data back to the disk. • SQL has an algorithm that defines what pages should stay in the buffer pool and what should be removed (only in case of memory pressure)
  • 23. Traditional DBMS • A table can be a Heap or a Clustered Index. • A heap is a lot of pages, not sorted by anything, placed in different places in the file. • A clustered Index is a B-tree that is sorted by one column (or more) and the leaf level of the tree are pages that hold the table‟s data. (There can be only one clustered index on a table) • Non Clustered index ins a B-tree that is sorted by one column (or more) and the leaf level of the tree are pages that hold a pointer to the location of the row (on a Clustered index or a heap)
  • 24. Yes, In-Memory. Memory optimized • The design principle: Optimizing for byte-addressable memory instead of block addressable disk, so: • Memory-optimized tables are not stored on pages like disk-based tables. • No Buffer pool. • No Clustered and non-clustered indexes: Indexes include only pointers to rows. Rows are not sorted in any way. Indexes are not pointing to other indexes. • Rows are never modified (from a different reason, will get to that) Disk IO
  • 25. Rows and Indexes Data(1955,Marty) Data(1955,Marty) Data(1985,Marty)Data(1955,Doc) Data(1985,Doc) IdxA pointer IdxA pointer IdxA pointer IdxA pointer IdxA pointer 1955 2014 1985 … IdxA (Year) * This is a simple representation of the row and index, the real structure holds more data and might have a different structure. IdxB pointer IdxB pointer IdxB pointer IdxB pointer IdxB pointer Marty Einstein Doc … IdxB (Name) Disk IO
  • 26. Data storage - conclusion • The row is consisted from a header (will be discussed later), Index pointers and data. • Memory-optimized tables must have at least one index created on them. (the only thing connecting rows to the table is the indexes) • Records are always accessed via an index lookup. • since the number of index pointers is part of the row structure, and rows are never modified, all indexes must be defined at the time your table is created. Disk IO
  • 27. Something must be written to disk? • Only data rows are written to disk. (no indexes) • More on that when we talk durability. Disk IO
  • 28. The bottlenecks to overcome Concurrency Locking
  • 30. Locking • In order to make sure a transaction is isolated, it reads only committed data and a row can be changed by one user at a time, we use a mechanism called locks. • Before reading or writing a row, a process needs to place lock on the row. • If there is already a lock on the row, the process needs to compare the locks for compatability. • Reads can place locks on rows that read. • Writers needs exclusive acess, so they block both readers and writers.
  • 31. No one needs to wait • Optimistic multi-version concurrency control. • Writers do not block readers. Writers do not block writers. No locks are acquired. Never. • Optimistic - Transactions proceed under the (optimistic) assumption that there will be no conflicts with other transactions. • Multi-version - like snapshot isolation, every transaction access only the data‟s version that is correct for the time it started. • Data is never updated in place- every DML Creates a new version of the row. • This new concurrency control mechanism is built into the Hekaton data structures. It cannot be turned on or off. Concurrency Locking
  • 32.–44. Multi-version rows – Optimistic Multi-Version Concurrency control, an example • Each row version consists of a header holding a Start timestamp and an End timestamp, one pointer per index (IdxA pointer, IdxB pointer) and the data itself; the [Start, End) pair defines when the version is valid. • ① A row 'Marty' is inserted at time 1, so its version is [1, ∞). • ③ Tx1 runs UPDATE Table1 SET Name='Doc': a new version 'Doc' is created with timestamps [Tx1, ∞), and the End timestamp of 'Marty' is set to Tx1; concurrent readers still see 'Marty'. • When Tx1 commits at time 3, the versions become 'Marty' [1, 3) and 'Doc' [3, ∞). • ⑤ Tx100, a reader with start time 5 (SELECT Name FROM Table1), sees 'Doc', because 5 falls inside [3, ∞). • ⑥ Tx2 runs UPDATE Table1 SET Name='Einstein': a new version 'Einstein' [Tx2, ∞) is created and the End timestamp of 'Doc' is set to Tx2; when Tx2 commits at time 6, 'Doc' becomes [3, 6) and 'Einstein' [6, ∞). • Tx200, a reader with start time 4, still sees 'Doc', because 4 falls inside [3, 6) – every transaction reads the versions that were valid when it started. Concurrency Locking Multi version
  • 45. Multi version - conclusion • All versions are equal: they act as rows and are linked to the indexes, and to one another. • To support the optimism, versions are validated for write conflicts – an attempt to update a record that has been updated since the transaction started fails. • A garbage collection process removes old, no-longer-visible versions from memory. • The model also supports the REPEATABLE READ and SERIALIZABLE isolation levels (but that's for another time). Concurrency Locking
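Under this optimism the collision from the earlier locking example fails fast instead of waiting. A sketch, assuming the memory-optimized [Customer] table from earlier (the 41302 error number is the SQL 2014 write-conflict error):

-- Session 1:
BEGIN TRAN;
UPDATE dbo.Customer WITH (SNAPSHOT) SET Name = N'Doc' WHERE CustomerID = 1;
-- A new version is created; transaction still open.

-- Session 2, at the same time:
BEGIN TRAN;
UPDATE dbo.Customer WITH (SNAPSHOT) SET Name = N'Einstein' WHERE CustomerID = 1;
-- Fails immediately with a write-conflict error (41302):
-- the row already has an uncommitted new version. Nothing waits;
-- the losing transaction is rolled back and can simply be retried.
ROLLBACK;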
  • 46. The bottlenecks to overcome Latches
  • 47. No more threads waiting • We want to reduce latch waits, so we can take advantage of multiple CPU cores, so we will use lock-free data structures. Latches
  • 48. Latches • Latches are 'region locks': a mechanism used to protect a region of code or data structures against simultaneous thread access. • Region locks implement an 'acquire/release' pattern – a lock is first acquired, the protected region executes, and then the lock is released. • All shared data structures must be protected with latches. • In a system with many cores or very high concurrency, the protected region becomes a bottleneck. • Highly contended resources are the lock manager, the tail of the transaction log, and the last page of a B-tree index. Latches
  • 49. Lock free mechanism • Hekaton uses lock-free data structures. • This is very complex algorithm material for university professors, so it will not be explained now.
  • 50. The bottlenecks to overcome Logging
  • 51. Logging and durability • We want to reduce logging and disk write time, so we can support durability, so we will minimize logging. Logging
  • 52. Durability? • Durability was one of the main goals of the development: we want data to be available also after a shutdown or an unexpected crash. • RAM might be cheap, but it still can't survive a power outage. • Conclusion: we must write something to the disk. Logging
  • 53. Logging today • Because dirty data can be written to the data files by a checkpoint, we need to write every action and its UNDO information to the transaction log. • Data is logged by physical structure – if a row changes, we see a log record for every index on that table.
  • 54. Logging • In order to support the high throughput, the following concepts are applied: • Index operations are not logged (no log records for physical structure modifications – the work is pushed to recovery). • No undo information is logged – only committed transactions. • Each transaction is logged in a single, potentially large, log record (fewer log records minimize the log-header overhead and reduce the contention for inserting into the log buffer). • Hekaton tries to group multiple log records into one large I/O. • Hekaton is designed to support multiple concurrently generated log streams per database, to avoid any scaling bottleneck at the tail of the log. • Combine it with 'Delayed Durability' (new in 2014) and you have a hoverboard. Logging
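A sketch of the 'Delayed Durability' combination (the database name is illustrative; the syntax is the SQL 2014 form):

ALTER DATABASE SalesDB SET DELAYED_DURABILITY = ALLOWED;

BEGIN TRAN;
INSERT INTO dbo.Customer (CustomerID, Name) VALUES (1, N'Marty');
-- The commit returns as soon as the log record sits in the in-memory
-- log buffer; the flush to disk happens later, in batches, trading a
-- small durability window for throughput.
COMMIT TRAN WITH (DELAYED_DURABILITY = ON);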
  • 55. Checkpoint & Recovery • We can't count only on the transaction log for durability, because then no log truncation could occur and recovery would take forever. • Checkpoint files are essentially a compressed version of the logged transactions. • Checkpointing is optimized for sequential access (data is only appended, never updated or deleted in place). • Checkpoint-related I/O occurs incrementally and continuously. • Multiple checkpoint files exist, to allow parallelism of the recovery process. • Indexes are rebuilt during recovery. Logging
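One way to peek at these files is the checkpoint-file DMV – a sketch, with column names as documented for SQL 2014:

SELECT file_type_desc,        -- DATA or DELTA file
       state_desc,
       file_size_in_bytes
FROM sys.dm_db_xtp_checkpoint_files;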
  • 56. The bottlenecks to overcome Procedures interpretation
  • 57. Native compilation • We want to improve query speed, so we will need to reduce the amount of CPU instructions, so we developed natively compiled stored procedures. Procedures interpretation
  • 58. Query Interpretation • The current interpreter (which gets a physical query plan as input) is totally generic and supports every table, every type, etc. • It performs many run-time checks during the execution of even simple statements. • It is not fast, but it was fast enough when data came from disk. • Today, CPUs are not getting any faster, so we need to lower the number of CPU instructions used to perform query processing and business logic execution. Procedures interpretation
  • 59. Native stored procedures compilation • The primary goal is to support efficient execution of compile-once-and-execute-many-times workloads, as opposed to optimizing the execution of ad hoc queries. • Natively compiled SPs must interact with Hekaton tables only. • The In-Memory OLTP compiler leverages the query optimizer to create an efficient execution plan for each of the queries in the stored procedure. • The stored procedure is translated into C and compiled to native code (a DLL). The DLL is slim and specific to the query. Procedures interpretation
  • 60. Improvements • The procedure is compiled as a single function – we avoid costly argument passing between functions and expensive function calls. • Rows do not pass through all the operators when it is not needed. • To avoid run-time checks, compiled stored procedures execute in a predefined security context. • Compiled stored procedures must be schema-bound, to avoid costly schema locks. Procedures interpretation
  • 61. Things to remember • Natively compiled stored procedures are not automatically recompiled if the data in the table changes. • There are some limitations on the T-SQL surface area we can use (for now). • They must be compiled with a security context. • Using natively compiled SPs gives us the biggest performance boost! Procedures interpretation
  • 62. Conclusion • Procedures interpretation → natively compiled SPs • Logging → minimal logging and checkpointing • Latches → lock-free data structures • Concurrency Locking → multi-version concurrency control • Disk IO → in-memory and memory-optimized data structures
  • 64. CPU Efficiency for Lookups • Random lookups in a table with 10M rows • All data in memory • Intel Xeon W3520 2.67 GHz

Transaction size    CPU cycles (in millions)      Speedup
(#lookups)          SQL Table     Hekaton Table
1                   0.734         0.040           10.8X
10                  0.937         0.051           18.4X
100                 2.72          0.150           18.1X
1,000               20.1          1.063           18.9X
10,000              201           9.85            20.4X

  • Hekaton performance: 2.7M lookups/sec/core
  • 65. CPU Efficiency for Updates • Random updates, 10M rows, one index, snapshot isolation • Log IO disabled (disk became the bottleneck) • Intel Xeon W3520 2.67 GHz

Transaction size    CPU cycles (in millions)      Speedup
(#updates)          SQL Table     Hekaton Table
1                   0.910         0.045           20.2X
10                  1.38          0.059           23.4X
100                 8.17          0.260           31.4X
1,000               41.9          1.50            27.9X
10,000              439           14.4            30.5X

  • Hekaton performance: 1.9M updates/sec/core
  • 66. High Contention Throughput • Workload: read/insert into a table with a unique index • Insert txn (50%): append a batch of 100 rows • Read txn (50%): read the last inserted batch of rows
  • 68. SQL integration • The engine is completely integrated with SQL 2014: • No hidden licensing fees. • No need to copy data. • No need to support a new technology. • No need to maintain 2 DBs. • Migration can be done in stages. • The Hekaton engine is transparent to the application.
  • 69. How is it integrated? • Use your existing DBs. • In-Memory tables and disk tables can be joined together easily. • Use the same installation and connection interface. • Use the same T-SQL language. • Back up the same way you always did. • Manage and maintain the DB and storage in the same way and using the same tools. • The same tools you're used to – DMVs, SSMS, perf counters, Resource Governor… • Out-of-the-box integration with SQL HA solutions.
  • 70. Competitors • IBM SolidDB • Oracle TimesTen • SAP In-Memory Computing • Sybase In-Memory Databases • VoltDB
  • 71. How to get started – some scripts and basic knowledge
  • 72. Migration is as easy as: • Upgrade your DB to run on a SQL 2014 instance. • Identify performance-bottleneck tables, create them as memory-optimized and migrate the data. • Continue querying the DB without any change, using interop mode. • Identify required code changes and migrate procedures to native mode. • No additional hardware or licensing is required. • New tools help us identify potential Hekaton tables and problems.
  • 73. Working with Hekaton • Adding a filegroup (see the sketch below) • Migrating/creating tables • Migrating procedures
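The filegroup step is plain DDL – a sketch, with an illustrative database name and path:

ALTER DATABASE SalesDB
    ADD FILEGROUP SalesDB_mod CONTAINS MEMORY_OPTIMIZED_DATA;

ALTER DATABASE SalesDB
    ADD FILE (NAME = 'SalesDB_mod_dir', FILENAME = 'C:\Data\SalesDB_mod_dir')
    TO FILEGROUP SalesDB_mod;
-- The 'file' is really a FILESTREAM-based directory that will hold
-- the checkpoint data and delta files.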
  • 76. Create Table DDL

CREATE TABLE [Customer] (
    [CustomerID] INT NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    [Name] NVARCHAR(250) NOT NULL,
    [CustomerSince] DATETIME NULL
        INDEX [ICustomerSince] NONCLUSTERED
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);

  • MEMORY_OPTIMIZED = ON: this table is memory optimized • DURABILITY = SCHEMA_AND_DATA: this table is durable • Indexes are specified inline • NONCLUSTERED indexes are supported • Hash index: set BUCKET_COUNT to 1–2X the number of unique index key values
  • 77. Memory-Optimized Indexes • Exist only in memory • Rebuilt on database startup • Do not contain data rows – indexes contain memory pointers to the data rows • No duplication of data • All indexes are covering (they point at the full row)
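To sanity-check a hash index's BUCKET_COUNT once data is loaded, the hash-index stats DMV can be queried – a sketch, with column names as documented for SQL 2014:

SELECT OBJECT_NAME(object_id) AS table_name,
       total_bucket_count,
       empty_bucket_count,
       avg_chain_length,
       max_chain_length
FROM sys.dm_db_xtp_hash_index_stats;
-- Long average chains suggest the bucket count is too low;
-- many empty buckets merely waste some memory.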
  • 80. Accessing Memory Optimized Tables • Interpreted T-SQL access: can access both memory- and disk-based tables; less performant; virtually full T-SQL surface. When to use: ad hoc queries, reporting-style queries, speeding up app migration. • Natively compiled stored procs: access only memory-optimized tables; maximum performance; limited T-SQL surface area. When to use: OLTP-style operations, performance-critical business logic.
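For example, interop access lets a single query join a memory-optimized table to a regular one – a sketch, where dbo.OrderHistory is a hypothetical disk-based table:

SELECT c.[Name], o.OrderDate
FROM dbo.Customer AS c          -- memory-optimized
JOIN dbo.OrderHistory AS o      -- regular disk-based table
    ON o.CustomerID = c.CustomerID;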
  • 81. Create Procedure DDL

CREATE PROCEDURE [dbo].[InsertOrder] @id INT, @date DATETIME
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS
BEGIN ATOMIC WITH
    (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
    -- insert T-SQL here
END

  • NATIVE_COMPILATION: this proc is natively compiled • Native procs must be schema-bound • Atomic blocks: create a transaction if there is none, otherwise create a savepoint • An execution context is required • Session settings are fixed at create time
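Calling it is no different from any other procedure – a usage sketch for the signature above:

EXEC [dbo].[InsertOrder] @id = 1, @date = '2014-04-01';
-- The atomic block opens a transaction if none exists,
-- or creates a savepoint inside the caller's transaction.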
  • 82. Procedure Creation • CREATE PROC DDL → query optimization → code generation and compilation → procedure DLL produced → procedure DLL loaded
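The produced DLLs can be spotted among the server's loaded modules – a sketch, assuming the 'XTP Native DLL' description SQL 2014 reports for them:

SELECT name, description
FROM sys.dm_os_loaded_modules
WHERE description = 'XTP Native DLL';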
  • 84. Limitations on In-Memory OLTP in SQL 2014 • Tables: • Triggers: no DDL/DML triggers • Data types: no LOBs, no XML and no CLR data types • Constraints: no FOREIGN KEY and no CHECK constraints • No schema changes (ALTER TABLE) – need to drop/recreate the table • No add/remove index – need to drop/recreate the table • Natively compiled stored procedures: • No outer join, no OR, no subqueries, no CASE • Limited built-in functions (core math, date/time, and string functions are available)
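Some of these gaps can be worked around in code. Since FOREIGN KEY is unavailable, a native proc can validate the parent row itself – a sketch, assuming the [Customer] table above and a hypothetical memory-optimized dbo.Orders table (the existence check uses a variable assignment because subqueries such as EXISTS are not allowed):

CREATE PROCEDURE dbo.InsertOrderChecked @CustomerID INT, @OrderDate DATETIME
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS
BEGIN ATOMIC WITH
    (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
    -- Emulate the missing FOREIGN KEY: verify the parent row exists first.
    DECLARE @exists INT = 0;
    SELECT @exists = 1 FROM dbo.Customer WHERE CustomerID = @CustomerID;
    IF @exists = 0
        THROW 51000, 'Customer does not exist.', 1;
    INSERT INTO dbo.Orders (CustomerID, OrderDate)
        VALUES (@CustomerID, @OrderDate);
END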

Editor's notes

  1. Many of the largest financial, online retail and airline reservation systems fall between 500GB and 5TB, with working sets that are significantly smaller.
  2. By using the new memory-optimized tables you can speed up data access, in particular in concurrency situations, due to the lock- and latch-free architecture of the In-Memory OLTP Engine. This means that applications which suffer from a lot of contention between concurrent transactions can greatly benefit from just migrating your hot tables to memory-optimized.
  3. If most or all of an application’s data is able to be entirely memory resident, the costing rules that the SQL Server optimizer has used since the very first version become almost completely obsolete. Wait time required for disk reads, and other wait statistics – such as waiting for locks to be released, waiting for latches to be available, or waiting for log writes to complete – can become disproportionately large.
  4. There is no collection of pages or extents, no partitions or allocation units that can be referenced to get all the pages for a table.
  5. Scalability suffers when the system has shared memory locations that are updated concurrently.
  6. The log contains the logical effects of committed transactions, sufficient to redo the transaction. The changes are recorded as insertions and deletions of row versions, labeled with the table they belong to.