Tag Archives: Hekaton

Working with In-Memory OLTP (a.k.a. Hekaton) Enabled Databases, Memory Optimized Tables and Natively Compiled Stored Procedures and their Internals, with an Extensive List of Examples – Sql Server 2014

In this article we will go over the following topics with an extensive list of examples:

  • How To Create an In-Memory OLTP (a.k.a. Hekaton) Enabled Database and its Internals
  • CHECKPOINT Files and their Internals
  • How To Create Memory Optimized Tables and their Internals
  • How To Create Natively Compiled Stored Procedures and their Internals

This is the third article in the series of articles on the topic “In-Memory OLTP (a.k.a. HEKATON a.k.a. eXtreme Transaction Processing) and its Internals – Sql Server 2014”. I request you to go through the other two articles in this series, listed below, prior to this article.

Note: The learning experiences expressed in this article are based on Sql Server 2014 CTP2.

How To Create an In-Memory OLTP (a.k.a. Hekaton) Enabled Database and its Internals

To support the In-Memory OLTP features (i.e. memory resident tables, Natively Compiled Stored Procedures etc.), the database needs to have a MEMORY_OPTIMIZED_DATA filegroup. We can create only one MEMORY_OPTIMIZED_DATA filegroup per database. This memory optimized filegroup can have multiple containers/folders/volumes. Each of these containers will hold the Durable Memory Optimized Tables' checkpoint files (i.e. Data and Delta file pairs), containing a copy of the Memory Optimized Table data on disk.

Creating an In-Memory OLTP Database

Below is the script to create a brand new In-Memory OLTP (a.k.a. Hekaton) enabled database:

CREATE DATABASE SqlHintsXTPDemo
ON PRIMARY(NAME = SqlHints_DATA, 
  FILENAME = 'C:\XTPDBS\SqlHints_Data.mdf'),
FILEGROUP SqlHints_XTP_FG CONTAINS MEMORY_OPTIMIZED_DATA
	(NAME = SqlHints_XTP_CHKPOINT, 
		FILENAME = 'C:\XTPDBS\SqlHints_XTP_CHKPOINT')
LOG ON (NAME = SqlHints_LOG, 
                FILENAME='C:\XTPDBS\SqlHints_Log.ldf')

Immediately after the database creation, the checkpoint container folder (in this example “C:\XTPDBS\SqlHints_XTP_CHKPOINT”) and its sub-folders will be empty, as shown below:

[Image: Checkpoint container view immediately after database creation]

A few minutes after database creation, or immediately after creating a Memory Optimized Table, if we revisit the checkpoint container folder we will see FileStream files as shown below:

[Image: Checkpoint files after database creation]

Already Have a Regular Database? Want to Make It In-Memory OLTP Enabled?

The below script demonstrates converting a regular database to a database which can have Memory Optimized Tables.

--CREATE CLASSIC REGULAR DATABASE
CREATE DATABASE SqlHintsXTP
ON PRIMARY(NAME = SqlHints_DATA, 
	FILENAME = 'C:\XTPDBS\SqlHints_Data.mdf')
LOG ON (NAME = SqlHints_LOG, 
	FILENAME='C:\XTPDBS\SqlHints_Log.ldf')
GO
--ADD MEMORY_OPTIMIZED_DATA File Group to the DB 
ALTER DATABASE SqlHintsXTP 
ADD FILEGROUP SqlHints_XTP_FG CONTAINS MEMORY_OPTIMIZED_DATA
GO
--ADD FILEGROUP CONTAINER FOLDER WHICH will have the 
--MEMORY OPTIMIZED TABLES 
--CHECKPOINT FILES(i.e. DATA and DELTA File Pairs)
ALTER DATABASE SqlHintsXTP
ADD FILE(NAME = SqlHints_XTP_CHKPOINT, 
FILENAME = 'C:\XTPDBS\SqlHints_XTP_CHKPOINT') 
TO FILEGROUP SqlHints_XTP_FG

Creating a Memory Optimized Database with Multiple Containers for the Memory Optimized Tables' Checkpoint Files

Script:

CREATE DATABASE SqlHintsXTP
	ON PRIMARY(NAME = SqlHints_DATA, 
		FILENAME = 'C:\XTPDBS\SqlHints_Data.mdf'),
	FILEGROUP SqlHints_XTP_FG CONTAINS MEMORY_OPTIMIZED_DATA
		(NAME = SqlHints_XTP_CHKPOINT1, 
			FILENAME = 'C:\XTPDBS\SqlHints_XTP_CHKPOINT1'),
		(NAME = SqlHints_XTP_CHKPOINT2, 
			FILENAME = 'D:\XTPDBS\SqlHints_XTP_CHKPOINT2')
	LOG ON (NAME = SqlHints_LOG, 
		FILENAME='C:\XTPDBS\SqlHints_Log.ldf')

Physical Folders:
[Image: In-Memory OLTP database with multiple containers]

Points to Note About In-Memory OLTP Enabled Databases:

  • Memory Optimized databases don't support Database Mirroring or Replication, but the alternative AlwaysOn Availability Groups feature is supported.
  • They also don't support differential backups, but Full and Log backup and restore are supported.
  • Sql Server 2014 CTP2 supports a maximum of 4096 Data/Delta files per In-Memory OLTP database. The combined maximum size of Durable Memory Optimized Table data supported by Sql Server is 256 GB.
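Since differential backups are not supported, a full plus log backup strategy applies. A minimal sketch is shown below; the backup file paths here are hypothetical:

```sql
-- Full backup of an In-Memory OLTP enabled database; the memory optimized
-- filegroup is included automatically (differential backups are not supported)
BACKUP DATABASE SqlHintsXTPDemo
TO DISK = 'C:\Backups\SqlHintsXTPDemo_Full.bak'
WITH INIT
GO
-- Transaction log backup; includes the Memory Optimized Table log records
BACKUP LOG SqlHintsXTPDemo
TO DISK = 'C:\Backups\SqlHintsXTPDemo_Log.trn'
```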

CHECKPOINT Files and their Internals

As memory is volatile, a server restart will flush the Memory Optimized Table data from memory. In order to provide durability of the data for Memory Optimized Tables created with the durability option, the In-Memory OLTP engine writes every committed transaction's data to the transaction log. Whenever a CHECKPOINT fires, it moves the data written to the transaction log into the Data and Delta FileStream file pairs in the checkpoint container. On system restart, the data from these checkpoint files and the active transaction log is used to rebuild the Memory Optimized Tables and stream them back into main memory.

Data Checkpoint File:

  • Data files hold the Durable Memory Optimized Tables' data (i.e. inserted rows only).
  • Initially, the number of data files created will be equal to the number of cores on the database server.
  • The approximate size of each data file is 128 MB.

Delta Checkpoint File:

  • For each data file there will be a corresponding delta file.
  • It holds the IDs of the deleted rows, and only the IDs of deleted records which are present in the corresponding data file.

Checkpoint Metadata and its Mapping to the Physical Checkpoint Files on Disk:

The below image depicts the checkpoint files metadata maintained by Sql Server and its mapping to the physical checkpoint files on disk. The DMV sys.dm_db_xtp_checkpoint_files can be used to get the checkpoint files metadata maintained by Sql Server.

[Image: Checkpoint files metadata and its mapping to the physical checkpoint files]
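As a sketch, the query below pulls a few of these metadata columns from the DMV; the column names are as observed in CTP2 and may differ in later builds:

```sql
-- Checkpoint file pairs and their current state for the In-Memory OLTP
-- enabled database (run in the context of that database)
SELECT file_type_desc,       -- DATA or DELTA
       state_desc,           -- e.g. PRECREATED, UNDER CONSTRUCTION, ACTIVE
       file_size_in_bytes,
       relative_file_path
FROM sys.dm_db_xtp_checkpoint_files
```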

How To Create Memory Optimized Tables and their Internals

A database can have Memory Optimized Tables only if it is In-Memory OLTP enabled (i.e. the database needs to have a MEMORY_OPTIMIZED_DATA filegroup). A database can have both regular tables (i.e. disk-based tables) and Memory Optimized Tables.

Create Memory Optimized Table

Below is the script for Creating a Durable Memory Optimized Table.

CREATE TABLE dbo.Customer
(
  CustomerId INT NOT NULL PRIMARY KEY NONCLUSTERED 
          HASH WITH (BUCKET_COUNT = 1000000),
  FirstName NVARCHAR(50) NOT NULL,
  LastName  NVARCHAR(50) NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON,  DURABILITY = SCHEMA_AND_DATA)

  • MEMORY_OPTIMIZED: The value ON indicates that the table is Memory Optimized (i.e. a memory resident table).

  • DURABILITY: The value SCHEMA_AND_DATA means both the table schema and data are durable; SCHEMA_ONLY means only the table schema is durable and the data is not.

  • NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000): a HASH index. Choosing the BUCKET_COUNT is very important; it should be equal to or greater than the expected number of unique values for the index key. Performance degrades if the BUCKET_COUNT is less than the number of unique values for the index key. We will discuss this more in future articles on HASH and RANGE indexes.
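To sanity-check a chosen BUCKET_COUNT once the table holds data, the hash index statistics DMV can be queried; a sketch is below (a long average chain length combined with few empty buckets suggests the bucket count was chosen too small):

```sql
-- Hash bucket usage per Memory Optimized Table / hash index
SELECT OBJECT_NAME(hs.object_id) AS TableName,
       i.name                    AS IndexName,
       hs.total_bucket_count,
       hs.empty_bucket_count,    -- ideally a healthy share of buckets is empty
       hs.avg_chain_length,      -- long chains suggest BUCKET_COUNT is too low
       hs.max_chain_length
FROM sys.dm_db_xtp_hash_index_stats AS hs
JOIN sys.indexes AS i
  ON i.object_id = hs.object_id AND i.index_id = hs.index_id
```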

Restrictions/Limitations for Memory Optimized Tables:

  • LOB data types (XML, CLR and MAX data types) are not supported
  • No ALTER TABLE, and no CREATE, ALTER or DROP INDEX after table creation
  • Foreign Key and Check constraints are not supported
  • Identity columns are not supported
  • No unique indexes other than for the Primary Key
  • The maximum table row size is 8060 bytes, and it is enforced during table creation as well
  • DML triggers are not supported for Memory Optimized Tables
  • A Memory Optimized Table needs to have at least one index. Memory Optimized Tables don't use 8 KB data pages from extents to store the table data; instead each row has a special row structure, and indexes are used to link the rows together to form the complete table data.
  • The supported index types are non-clustered HASH and RANGE indexes
  • A table can have a maximum of 8 indexes

Native Compilation of Memory Optimized Tables

The below image depicts the steps performed by the In-Memory OLTP engine whenever a Memory Optimized Table is created:

[Image: Steps performed by the In-Memory OLTP engine during Memory Optimized Table creation]

During Memory Optimized Table creation, the In-Memory OLTP engine compiles the Memory Optimized Table programming constructs (i.e. SELECT/UPDATE/DELETE FROM ...) to native code (i.e. code that doesn't require further compilation; the CPU can directly execute these processor instructions). Basically, it provides faster access to the Memory Optimized Table by providing routines for directly accessing the table data. Whenever we create a Memory Optimized Table, the In-Memory OLTP engine performs the following steps:

  • Stores the table's metadata to provide schema durability (i.e. on system restart this metadata is used by Sql Server to regenerate the Memory Optimized Table dll)
  • Generates the C code for the Memory Optimized Table
  • Uses the Visual C compiler (shipped with Sql Server 2014) to compile the C code and generate native code (i.e. produces the table dll)
  • The generated table dll is loaded into the sqlservr.exe process

The DMV sys.dm_os_loaded_modules can be used to get the Natively Compiled Table and Stored Procedure dlls currently loaded into the sqlservr.exe process.

--DMV to get the list of all the Tables and Stored Procedure's 
--dll's which are loaded in the Sql Server's Process Memory
SELECT Name, Description 
FROM   sys.dm_os_loaded_modules 
WHERE  Description = 'XTP Native DLL'

[Image: sys.dm_os_loaded_modules query output]

We can use a query like the one below to get the list of all the Memory Optimized Tables in the current database. Note the durability column: the value 0 means both the schema and data of the table are durable, whereas the value 1 means only the schema is durable.

USE SqlHintsXTPDemo
GO
SELECT Name, durability_desc, durability
FROM sys.tables 
WHERE is_memory_optimized = 1

[Image: Query output listing the Memory Optimized Tables]

How To Create Natively Compiled Stored Procedures and their Internals

Below is the script for creating a simple Natively Compiled Stored Procedure. A Natively Compiled Stored Procedure can access Memory Optimized Tables but not disk-based tables, because disk-based tables are not natively compiled like Memory Optimized Tables, and they don't provide routines to access the table data directly.

Create Natively Compiled Stored Procedure

CREATE PROCEDURE dbo.AddCustomer
(
	@CustomerId INT,
    @FirstName NVARCHAR(50),
    @LastName NVARCHAR(50)
)
WITH NATIVE_COMPILATION
        ,SCHEMABINDING
        ,EXECUTE AS OWNER
AS
BEGIN ATOMIC WITH ( TRANSACTION ISOLATION LEVEL = SNAPSHOT ,
				LANGUAGE = 'us_english' )

    INSERT INTO dbo.Customer
    VALUES (@CustomerId, @FirstName, @LastName)
END

Where:

  • NATIVE_COMPILATION: Conveys to Sql Server that this stored procedure needs to be compiled to native code during its creation.
  • SCHEMABINDING: Blocks dropping of the tables referenced by the stored procedure.
  • EXECUTE AS OWNER: The context in which the stored procedure should execute. It can be EXECUTE AS OWNER, EXECUTE AS 'user_name' or EXECUTE AS SELF.
  • BEGIN ATOMIC: Guarantees the atomic execution (an implicit transaction) of the stored procedure body (i.e. either all statements succeed or all fail together).
  • WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = 'us_english'):
    TRANSACTION ISOLATION LEVEL: The transaction isolation level used for the atomic execution of the stored procedure.
    LANGUAGE: The language used for date/time formats and system messages in the stored procedure.
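The procedure is then executed like any other stored procedure; a short sketch using the dbo.Customer table created earlier:

```sql
-- Execute the natively compiled stored procedure
EXEC dbo.AddCustomer @CustomerId = 1,
                     @FirstName  = N'John',
                     @LastName   = N'Doe'
GO
-- Interpreted (non-native) T-Sql can still read the memory optimized table
SELECT CustomerId, FirstName, LastName
FROM dbo.Customer
WHERE CustomerId = 1
```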

When we issue a command to Sql Server to create a Natively Compiled Stored Procedure, the Sql Server engine's parser, algebrizer and query optimizer create the processing flow and query trees for the T-Sql statements in the stored procedure, and then an optimized execution plan is created for all the queries in it. The In-Memory OLTP engine takes this as input, generates C code, and then compiles the C code to native code using the Visual C compiler. The generated dll is loaded into Sql Server's process memory.

When a user executes the stored procedure (i.e. issues an EXEC command), the Sql Server engine's parser and algebrizer first extract the stored procedure name and parameters; if it is identified as a Natively Compiled Stored Procedure, the request is passed to the In-Memory OLTP engine, which locates the stored procedure dll's entry point. The dll then executes the stored procedure logic and returns the result.

[Image: Natively Compiled Stored Procedure creation internals]


Please correct me if my understanding is wrong. Comments and suggestions are always welcome.

Myths and Facts about In-Memory OLTP (a.k.a. Hekaton) – Sql Server 2014

This is the second article in the series of articles on the topic “In-Memory OLTP (a.k.a. HEKATON a.k.a. eXtreme Transaction Processing) and its Internals – Sql Server 2014”. Below is the list of other articles in this series:

Following are common myths/misconceptions about In-Memory OLTP (a.k.a. Hekaton). I too had some of these misconceptions when I initially heard of this concept; the In-Memory OLTP White Paper by Kalen Delaney cleared all my doubts.

Myth / Misconception vs. Reality:
Myth: As Memory Optimized Tables are memory resident, the data is not durable upon system restart/crash.

REALITY: This Statement is both TRUE and FALSE.

FALSE: Sql Server 2014 provides an option to create memory resident Memory Optimized Tables which can have durable data. If tables are created with this option, data will not be lost even upon a system restart/crash.

Data durability is achieved by the DATA and DELTA files in the memory optimized filegroup and by writing to the transaction log, though the amount of data written to the log is reduced considerably. Here the DATA files hold the inserted rows, whereas the DELTA files hold the IDs of the deleted rows. On system restart, the Memory Optimized Tables' data is regenerated from the DATA and DELTA files and the transaction log.

TRUE: If a memory resident Memory Optimized Table is created without the data durability option, then a system restart/crash etc. will clear the complete table data.

Myth: There is no Locking, Latching and Blocking.

REALITY: This statement is both TRUE and FALSE.

FALSE: Any modification to the data of Memory Optimized Tables created with the data durability option needs to be written to the transaction log. This writing of log data by the transaction log manager still involves latches. Logging for Memory Optimized Tables is much more efficient, and the amount of log data generated is much less, as no log records are written for indexes, schema/structure changes or UNDO information (i.e. only committed data is logged).

TRUE: Only for the Memory Optimized Tables which are created without the data durability option. For such tables there is no logging required, as they are designed to lose their data on system restart.

Myth: As there is no Locking, Latching and Blocking, this will result in inconsistent data and dirty reads.

REALITY: This statement is FALSE

FALSE: To achieve no locking, latching and blocking, the In-Memory OLTP engine uses true multi-version optimistic concurrency control with full ACID support. That is, transactions in In-Memory OLTP are Atomic, Consistent, Isolated and Durable.

For instance, for any request for a data row from a Memory Optimized Table, the In-Memory OLTP engine gives the version of the row which is current at the time of the request. Consider a scenario where two transactions get the same version of a record and try to update it with different values. In such a write-write conflict scenario, the transaction which updates last will fail, because the record it is manipulating is no longer the current version of the record when the transaction commits. Because of this, Microsoft suggests building an appropriate try-catch and retry mechanism into the application.
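A minimal retry sketch for such write-write conflicts is shown below, reusing the dbo.AddCustomer procedure from the companion article. The error numbers (41302 for update conflicts, 41305/41325 for validation failures) and the retry count of 3 are assumptions to be adapted for a real workload:

```sql
DECLARE @Retries INT = 3
WHILE @Retries > 0
BEGIN
    BEGIN TRY
        EXEC dbo.AddCustomer @CustomerId = 1,
                             @FirstName  = N'John',
                             @LastName   = N'Doe'
        SET @Retries = 0   -- success: stop retrying
    END TRY
    BEGIN CATCH
        -- Retry only the conflict/validation errors; re-throw anything else
        IF ERROR_NUMBER() IN (41302, 41305, 41325) AND @Retries > 1
            SET @Retries = @Retries - 1
        ELSE
        BEGIN
            SET @Retries = 0;
            THROW
        END
    END CATCH
END
```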

Myth: In-Memory OLTP databases are a new separate product.

REALITY: This statement is FALSE

FALSE: The In-Memory OLTP database engine component is fully integrated into the Sql Server 2014 relational database engine; it is not a separate database product. The same SSMS which is used for disk-based tables/indexes is used for Memory Optimized Tables and indexes. In that way, it provides the same development, management and administration experience.

Myth: IDENTITY columns are not supported for Memory Optimized Tables, so the best alternative is SEQUENCE.

REALITY: This statement is FALSE

FALSE: For Memory Optimized Tables, I assume SEQUENCE is not the best alternative to an IDENTITY column, as each request for the next sequence value is synchronized to assure a unique value is returned. In the case of a sequence object created with the NO CACHE option, each time a next sequence value is requested, the calculated value is written to a system table before being returned to the calling statement. If the sequence is created with a CACHE of size 50, it writes to the system table only after every 50 sequence values are served. For more details you can go through Sequence Cache Management and Internals. In either case it leads to resource contention and waiting for locks and latches, which indirectly blocks In-Memory OLTP from scaling.

If possible, please avoid SEQUENCE for Memory Optimized Tables. Try to use something like the UNIQUEIDENTIFIER data type with the NEWID() function to generate unique values.
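A hypothetical sketch of this approach (the table and column names here are invented for illustration):

```sql
CREATE TABLE dbo.OrderEvent
(
    -- UNIQUEIDENTIFIER key instead of IDENTITY (not supported) or
    -- SEQUENCE (a potential point of contention)
    OrderEventId UNIQUEIDENTIFIER NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    EventData NVARCHAR(100) NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA)
GO
-- Generate the unique key on insert with NEWID()
INSERT INTO dbo.OrderEvent (OrderEventId, EventData)
VALUES (NEWID(), N'Order created')
```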

I will explore more on this and come back with performance results with SEQUENCE, with UNIQUEIDENTIFIER, etc.

Myth: There is no performance difference between a Memory Optimized Table and a disk-based table whose data pages all reside in memory due to a table scan or DBCC PINTABLE.

REALITY: This statement is FALSE.

FALSE: The In-Memory OLTP database engine component doesn't need the buffer pool, and it doesn't use 8 KB data pages. Memory Optimized Tables provide better performance because the core engine uses a lock-free algorithm (i.e. multi-version optimistic concurrency control) which doesn't require any locks or latches when Memory Optimized Tables are referenced during transaction processing. For data durability it still writes to the transaction log, but the amount of data written to the log is reduced considerably. As the indexes of Memory Optimized Tables exist only in main memory, they are not persisted to disk, so only the committed table data is written to the log.

On the other hand, in the case of disk-based tables whose data resides in memory due to a table scan or DBCC PINTABLE, the buffer pool is still needed, and there will be locking, latching and blocking. Any modification to the table data will result in writing both table and index data to the transaction log.

Note: The learning experiences expressed in this article are based on Sql Server 2014 CTP2; things may change in the final release of Sql Server 2014.

Please correct me if any of my understanding is not correct.

Introduction to In-Memory OLTP (a.k.a. Hekaton) – Sql Server 2014

This is the first article in the series of the articles on the topic “In-Memory OLTP (a.k.a. HEKATON a.k.a. eXtreme Transaction Processing) and its Internals – Sql Server 2014”. Below is the list of other articles in this Series:

For In-Depth detailed information on the In-Memory OLTP(a.k.a. Hekaton), you may like to download the In-Memory OLTP White Paper written by Kalen Delaney or you can visit MSDN. You can as well download Sql Server 2014 CTP2.

WHAT is In-Memory Online Transaction Processing (a.k.a. Hekaton)?

In-Memory Online Transaction Processing, formerly known as Hekaton, is a database engine component optimized for accessing memory resident tables. This component is fully integrated into the Sql Server 2014 relational database engine; it is not a separate database product. It facilitates the creation of memory resident tables (i.e. Memory Optimized Tables) and indexes. It also provides the option to compile Transact-Sql stored procedures accessing Memory Optimized Tables to machine code. These Natively Compiled Stored Procedures referencing Memory Optimized Tables provide better performance compared to interpreted Transact-Sql statements or stored procedures accessing Memory Optimized Tables.

Memory Optimized Tables provide better performance because the core engine uses a lock-free algorithm (i.e. multi-version optimistic concurrency control) which doesn't require any locks or latches when Memory Optimized Tables are referenced during transaction processing. For supporting data durability it still writes to the transaction log, but the amount of data written to the log is reduced considerably. As the indexes of Memory Optimized Tables exist only in main memory, they are not persisted to disk, so only the committed table data is written to the log.

On average it improves the performance of OLTP applications by 10x without having to re-write the entire application (i.e. just by migrating the disk-based tables frequently used by the OLTP application to Memory Optimized Tables).

WHY In-Memory OLTP (a.k.a. Hekaton)?

The Sql Server database engine was designed in the days when main memory was very costly. As per this design, data is stored on disk and loaded into main memory as required for transaction processing, and any changes to the in-memory data are written back to disk. This disk IO is the main bottleneck for OLTP applications with a huge number of concurrent users, as it involves waiting for locks to be released, latches to become available, and log writes to complete.

As per the current trend, main memory is much less expensive, and enterprises can easily afford production database servers with main memory sized in terabytes. These declining memory prices made Microsoft re-think the original database engine, which was designed in the days when main memory was costly. The result of this re-think is the In-Memory OLTP (a.k.a. Hekaton) database engine component, which supports memory resident tables and indexes. The In-Memory OLTP engine uses a lock-free algorithm (i.e. multi-version optimistic concurrency control) which doesn't require any locks or latches when Memory Optimized Tables are referenced during transaction processing. For supporting data durability it still writes to the transaction log, but the amount of data written to the log is reduced considerably.

Key Features of In-Memory OLTP (a.k.a. Hekaton)
Memory Optimized Tables
  • The data of tables which are declared as Memory Optimized will reside in main memory.
  • Access to Memory Optimized Table data doesn't require the data to be loaded from disk to memory, as the complete table and index data is already present in memory.
  • In-Memory OLTP doesn't use data pages and extents for storing Memory Optimized Table data. Instead it uses special row structures. Each row has a row header, which contains the row's begin and end time; this begin and end time is used to determine the validity of the row. Apart from this, each row has index pointers and a count of the number of indexes referencing it. All the table data is linked by the indexes, hence the requirement of a minimum of one index per Memory Optimized Table.
  • Memory Optimized Table schema modification is not supported.
  • No Foreign Key or Check constraints are supported.
  • LOB data types are not supported.
  • Doesn't support Identity or calculated columns.
In-Memory Indexes
  • Indexes for Memory Optimized Tables exist only in main memory and are never stored on disk. Indexes are updated only in memory, and the updates are not logged/written to the transaction log.
  • On system restart they are re-created as the table data is streamed into memory.
  • Hash indexes and Range indexes are the types of indexes supported by Memory Optimized Tables.
  • Each Memory Optimized Table needs to have at least one non-clustered index, as the index is what links the complete table data together.
  • A Memory Optimized Table can have a maximum of 8 indexes.
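As a sketch, both supported index types can be declared inline at table creation time; the table and index names below are invented for illustration, and the inline INDEX syntax follows the Sql Server 2014 documentation (CTP builds may differ):

```sql
CREATE TABLE dbo.OrderHeader
(
    -- HASH index: best suited for point lookups on the key
    OrderId INT NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    -- Range (memory optimized NONCLUSTERED) index: suited for ordered scans
    OrderDate DATETIME2 NOT NULL
        INDEX IX_OrderHeader_OrderDate NONCLUSTERED,
    Amount MONEY NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA)
```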
Natively Compiled Stored Procedures
  • Memory Optimized Tables and the stored procedures referencing them can be compiled to machine code via the C code generator and Visual C compiler.
  • Aggressive optimization is done at compile time, i.e. when creating the Natively Compiled Stored Procedures.
  • A call to such a stored procedure is just a dll entry point.
  • Natively Compiled Stored Procedures referencing Memory Optimized Tables provide better performance compared to interpreted Transact-Sql statements or stored procedures accessing Memory Optimized Tables.
High Concurrency
  • In-Memory OLTP uses true multi-version optimistic concurrency control, which doesn't require any locks or latches when Memory Optimized Tables are referenced during transaction processing.
  • In-Memory OLTP maintains multiple versions of the rows. Each row has a row header, which contains the row's begin and end time; this begin and end time is used to determine the validity of the row.
  • For instance, for any request for a data row, the In-Memory OLTP engine gives the version of the row which is current at the time it is requested. Consider a scenario where two transactions get the same version of a record and try to update it with different values. In such a write-write conflict scenario, the transaction which updates last will fail, because the record it is manipulating is no longer the current version of the record when the transaction commits.
Data Durability
  • In-Memory OLTP provides an option to create Memory Optimized Tables which can have durable data. If tables are created with this option, data will not be lost even upon a system restart/shutdown/crash.
  • Data durability is achieved by the Data and Delta files in the memory optimized filegroup and by writing to the transaction log, though the amount of data written to the log is reduced considerably. Here the Data files of 128 MB size hold the inserted rows, whereas the Delta files hold the IDs of the deleted rows.
Optimized Transaction Log
  • There is no transaction logging for Memory Optimized Tables which are created without the data durability option.
  • Memory Optimized Tables which are created with the data durability option write to the same transaction log which is used by the disk-based tables.
  • But the amount of data written to the log is reduced considerably. This is because: 1) indexes for Memory Optimized Tables exist only in main memory and are never stored on disk; they are updated only in memory and are not logged/written to the transaction log; 2) only committed data is written to the transaction log, and UNDO information is not logged.
Backup & Restore
  • Sql Server's database backup option backs up the memory optimized filegroup.
  • Transaction log backups include Memory Optimized Table log records.
  • Differential backup is not supported.
No Buffer Pool
  • The In-Memory OLTP engine doesn't use the buffer pool, because the Memory Optimized Tables and indexes data is already in memory.
Sql Server Integration
  • The In-Memory OLTP database engine component is fully integrated into the Sql Server 2014 relational database engine; it is not a separate database product. The same SSMS which is used for disk-based tables/indexes is used for Memory Optimized Tables and indexes. In that way, it provides the same development, management and administration experience.
Garbage Collection
  • Update and Delete operations on a Memory Optimized Table create multiple versions of the rows, and the rows which become stale unnecessarily consume main memory. For this, the In-Memory OLTP engine has a garbage collection system which periodically scans for and removes such stale rows.

Note: The learning experiences expressed in this article are based on Sql Server 2014 CTP2; things may change in the final release of Sql Server 2014.

Please correct me if any of my understanding is not correct.

Microsoft Announces SQL Server 2014

Just 14 months after releasing Microsoft Sql Server 2012, Microsoft announced its plan for the release of the next version of Sql Server. This shows Microsoft's continued commitment and focus on mitigating the challenges posed by competitors and the new array of database technologies.

On June 3, 2013 at TechEd North America, Microsoft officially made the announcement about Sql Server's new version, i.e. Sql Server 2014. They have not mentioned when the trial version will be available for evaluation, but we can register at the Microsoft site to get notified when the trial version is ready for evaluation.

Some of the key features of Sql Server 2014 which I like are listed below. For detailed information on the features and the extensive feature list, please keep visiting the Microsoft site; at this link they have a couple of white papers and the Sql Server 2014 datasheet too. I will write more articles on it as and when I explore it and get to know more about it, and when MS provides the trial version for evaluation.

1) In-memory, online transaction processing (OLTP).

It greatly improves transaction processing speeds and reduces latency by virtue of working with in-memory data, as opposed to disk-based data. This technology is the result of a project that has been in the works for several years, code-named ‘Hekaton’.

2) Column Store Indexes:

Update-able Clustered Column Store Indexes.

3) Buffer Pool Extension to SSDs:

This can definitely improve query performance by allowing the use of non-volatile solid-state drives to reduce SQL Server memory pressure with no risk of data loss.

4) High Availability

SQL Server 2014 will have enhanced AlwaysOn, and it is going to support up to 8 secondaries. This AlwaysOn high availability technology, based on the principles of database mirroring, was introduced in Sql Server 2012, where it supported a maximum of 4 secondaries. With AlwaysOn, users will be able to fail over multiple databases in groups instead of individually. Also, secondary copies will be readable and can be used for database backups.

5) Extended Online Operations

SQL Server 2014 is going to support online index operations at the partition level, which will be a very nice feature for large databases where table data is partitioned.

6) Hadoop Connectivity

7) Simplified cloud backup, cloud disaster recovery, extending on-premises apps to the cloud, a new cloud migration wizard to easily move an on-premises Sql Server to Windows Azure, Sql Server in a Windows Azure Virtual Machine, and so on.