This phase of FTRA evaluation measures SQL Server performance for the FTDW workload in terms of two core metrics. The first, Maximum CPU Consumption Rate (MCR), is a measure of peak I/O processing throughput. The second, Benchmark CPU Consumption Rate (BCR), is a measure of actual I/O processing throughput for a query or a query-based workload.

What Is MCR?

The MCR calculation provides a per-core I/O throughput value in MB or GB per second. This value is measured by executing a predefined, read-only, nonoptimized query from buffer cache and dividing the amount of data processed, in MB or GB, by the execution time. Because MCR is run from cache, it represents the peak nonoptimized scan rate achievable through SQL Server for the system being evaluated. For this reason MCR provides a baseline peak rate for initial design purposes. It is not intended to indicate average or expected results for a real workload. Validated FTDW architectures will have aggregate baseline I/O throughput results that are at least 100 percent of the server-rated MCR. Another way to explain this is that MCR represents the best possible SQL Server processing rate for a reasonable, worst-case workload. MCR can also be used as a frame of reference when comparing other published and validated FTDW reference architectures for SQL Server 2012.

In summary:
- MCR is not definitive of actual results for a customer workload. It provides a maximum data-processing-rate baseline for SQL Server and a single query associated with the Fast Track workload.
- MCR is specific to a CPU and server. In general, rates for a given CPU do not vary greatly by server and motherboard architecture, but the final MCR should be determined by actual testing.
- The MCR throughput rating can be used as a comparative value against existing, published FTDW reference architectures. This can assist with hardware selection prior to component and application testing.
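As a rough illustration only, an MCR-style measurement could look like the sketch below. The table and column names (dbo.lineitem, l_orderkey, l_quantity) and the MAXDOP value are placeholder assumptions, not the validated Fast Track benchmark query; the query is run twice so the second execution reads entirely from buffer cache.

    -- Hedged sketch of an MCR-style measurement; names are placeholders.
    SET STATISTICS IO ON;
    SET STATISTICS TIME ON;

    SELECT l_orderkey, COUNT(*)
    FROM dbo.lineitem
    WHERE l_quantity < 25
    GROUP BY l_orderkey
    OPTION (MAXDOP 4);  -- fix the degree of parallelism used in the calculation

    -- From the second (cached) run's statistics output:
    --   MCR (MB/s per core) ~= (logical reads * 8 / 1024) / CPU seconds / MAXDOP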
Faster virtual address translation process

Large page support is enabled on Enterprise Edition systems when physical RAM is >= 8 GB (and the Lock Pages in Memory privilege is set). SQL Server will allocate buffer pool memory using large pages on 64-bit systems if large page support is enabled and trace flag 834 is enabled.

Large pages for the buffer pool are definitely not for everyone. You should only do this on a machine dedicated to SQL Server (and I mean dedicated), and only with careful consideration of settings like 'max server memory'. Furthermore, you should test the usage of this functionality to see whether you get any measurable performance gains before using it in production. SQL Server startup time can be significantly delayed when using trace flag 834.

http://blogs.msdn.com/b/psssql/archive/2009/06/05/sql-server-and-large-pages-explained.aspx
http://support.microsoft.com/kb/920093
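Note that trace flag 834 must be supplied as a startup parameter (-T834) rather than enabled at runtime with DBCC TRACEON. Once the instance is up, a quick sanity check along these lines shows whether large-page allocations are actually in use:

    -- Check whether the instance is allocating large pages;
    -- large_page_allocations_kb > 0 indicates trace flag 834 took effect.
    SELECT large_page_allocations_kb,
           locked_page_allocations_kb
    FROM sys.dm_os_process_memory;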
Column-based: the data of a column is grouped into the same pages. A page contains data from only one column, never from two.
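As a hedged sketch of how this column-wise storage is created in SQL Server 2012 (the table and column names are illustrative placeholders):

    -- Each column's data is stored in its own segments rather than row by row.
    CREATE NONCLUSTERED COLUMNSTORE INDEX ix_cs_FactSales
    ON dbo.FactSales (OrderDateKey, ProductKey, SalesAmount);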
1) Optimize for main memory data access: Storage-optimized engines (such as the traditional OLTP engine in SQL Server) retain hot data in a main memory buffer pool based upon access frequency. The data access and modification capabilities, however, are built around the viewpoint that data may be paged in or paged out to disk at any point. This perspective necessitates layers of indirection in buffer pools, extra code for sophisticated storage allocation and defragmentation, and logging of every minute operation that could affect storage. With Hekaton you place the tables used in the extreme TP portion of an application in memory-optimized main memory structures; the remaining application tables, such as reference data details or historical data, are left in traditional storage-optimized structures. This approach lets you memory-optimize hotspots without having to manage multiple data engines. Hekaton's main memory structures do away with the overhead and indirection of the storage-optimized view while still providing the full ACID properties expected of a database system. For example, durability in Hekaton is achieved by streamlined logging and checkpointing that uses only efficient sequential I/O.

2) Accelerate business logic processing: Given that the free ride on CPU clock rate is over, Hekaton must be more efficient in how it utilizes each core. Today, SQL Server's query processor compiles queries and stored procedures into a set of data structures which are evaluated by an interpreter. With Hekaton, queries and procedural logic in T-SQL stored procedures are compiled directly into machine code, with aggressive optimizations applied at compilation time. This allows the stored procedure to be executed at the speed of native code.

3) Provide frictionless scale-up: It's common to find 16 to 32 logical cores even on a 2-socket server nowadays. Storage-optimized engines rely on a variety of mechanisms such as locks and latches to provide concurrency control, and these mechanisms often have significant contention issues when scaling up with more cores. Hekaton implements a highly scalable concurrency control mechanism and uses a series of lock-free data structures to eliminate traditional locks and latches while guaranteeing the correct transactional semantics that ensure data consistency.

4) Built in to SQL Server: As I mentioned earlier, Hekaton is a new capability of SQL Server. This lays the foundation for a powerful customer scenario which has been proven out by our customer testing. Many existing TP systems have certain transactions or algorithms which benefit from Hekaton's extreme TP capabilities: for example, the matching algorithm in financial trading, resource assignment or scheduling in manufacturing, or matchmaking in gaming scenarios. Hekaton enables optimizing these aspects of a TP system for in-memory processing while the cooler data and processing continue to be handled by the rest of SQL Server.
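To make the hotspot idea concrete, here is a minimal, hypothetical sketch of a memory-optimized table for a hot path; the table name, bucket count, and durability choice are illustrative assumptions:

    -- Hedged sketch: a memory-optimized table for extreme-TP data.
    -- Assumes the database already has a MEMORY_OPTIMIZED_DATA filegroup.
    CREATE TABLE dbo.TradeOrders
    (
        OrderId  INT NOT NULL
            PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
        Symbol   NVARCHAR(10) NOT NULL,
        Quantity INT NOT NULL
    )
    WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);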
While Hekaton's memory-optimized tables must fully fit into main memory, the database as a whole need not. These in-memory tables can be used in queries just like any regular table, while already providing optimized, contention-free data operations at this stage. After migrating to optimized in-memory storage, stored procedures operating on these tables can be converted into natively compiled stored procedures, dramatically increasing the processing speed of in-database logic. Creating these natively compiled stored procedures is, again, done through T-SQL, as shown below:
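A hedged sketch, assuming the hypothetical dbo.TradeOrders table above:

    -- Natively compiled stored procedure over a memory-optimized table.
    CREATE PROCEDURE dbo.usp_InsertTradeOrder
        @OrderId INT, @Symbol NVARCHAR(10), @Quantity INT
    WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
    AS
    BEGIN ATOMIC WITH
        (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
        INSERT INTO dbo.TradeOrders (OrderId, Symbol, Quantity)
        VALUES (@OrderId, @Symbol, @Quantity);
    END;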