Category Archives: SQL Server Fundamentals

What is In-Memory OLTP in SQL Server 2014 (Code Name “Hekaton”)?

The terms "In-Memory OLTP" and "Hekaton" are often used interchangeably, but Microsoft officially announced this new technology as "In-Memory OLTP". "Hekaton" is the Greek word for "hundred," and the project was given this name for its goal of speeding up database operations by (possibly) 100x. It reliably increases application speed by around 10x, and by nearly 50x for new, optimized applications, but the actual gain depends on several parameters.

Hekaton works by providing in-memory storage for the most frequently used tables in SQL Server. Don't confuse this with an older technology you might have heard of, DBCC PINTABLE ( database_id , table_id ); the two are entirely different.

DBCC PINTABLE does not cause the table to be read into memory. As the pages from the table are read into the buffer cache by normal Transact-SQL statements, they are marked as pinned pages. SQL Server does not flush pinned pages when it needs space to read in a new page. SQL Server still logs updates to the page and, if necessary, writes the updated page back to disk. SQL Server does, however, keep a copy of the page available in the buffer cache until the table is unpinned with the DBCC UNPINTABLE statement.

DBCC PINTABLE is best used to keep small, frequently referenced tables in memory. The pages for the small table are read into memory once, and all future references to their data do not require a disk read; however, the internal structure remains page based.
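For reference, the usage looked like the sketch below (database and table names are illustrative; the command was deprecated and disabled starting with SQL Server 2005, where it has no effect):

```sql
-- Pin a small, frequently referenced lookup table (SQL Server 2000-era syntax)
DECLARE @db_id int, @tbl_id int
SET @db_id  = DB_ID('pubs')                 -- assumed database
SET @tbl_id = OBJECT_ID('pubs..authors')    -- assumed small table

DBCC PINTABLE (@db_id, @tbl_id)    -- pages stay in the buffer cache once read
-- ... later, release the pages so the buffer cache can reclaim them:
DBCC UNPINTABLE (@db_id, @tbl_id)
```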

Because the internal structure is still page based, there is definitely still locking, latching, and logging; pinned tables also use the same index structures, which likewise require locking and latching.

Although DBCC PINTABLE provides a useful performance enhancement, it has limitations. If a large table is pinned, it can start using a large portion of the buffer cache and not leave enough cache to service the other tables in the system adequately. If a table larger than the buffer cache is pinned, it can fill the entire buffer cache; in that case a sysadmin must shut down SQL Server, restart it, and then unpin the table. Pinning too many tables can cause the same problems as pinning a table larger than the buffer cache.



In-Memory OLTP in SQL Server 2014 is entirely different, though you could also call it the successor to the old DBCC PINTABLE idea. Memory-optimized tables are stored in an entirely different way from disk-based tables: they use completely different data and index structures, because this feature does not store data on pages or use a separate on-disk index mechanism. This removes the need for latching and largely solves the latch-contention problem that typically occurs on last-page inserts.


You can check this problem via DMVs and investigate the issue with the queries below.

Select * From sys.dm_os_waiting_tasks

Select wt.*, st.text
From sys.dm_os_waiting_tasks wt
left join sys.dm_exec_requests er
    on er.session_id = wt.session_id
outer apply sys.dm_exec_sql_text(er.sql_handle) st -- replaces the deprecated fn_get_sql
where wt.resource_description is not null


Now, as In-Memory OLTP works on the new structure described above, we get the advantages below.

  • Elimination of disk reads entirely, since data always resides in memory

  • Hash indexing (more efficient than on-disk table b-tree indexes)
  • Elimination of locking & latching mechanisms used in on-disk tables
  • Reduction in “blocking” via improved multi-version concurrency model

In-Memory OLTP Architecture



As you can see from the diagram above, tables and indexes are stored in memory in a different way. There is no buffer pool as in the conventional architecture, and since there is no 8 KB page-based data structure, Microsoft uses stream-based storage: files are only appended to when persisting the durable data.

Hence the major difference is that memory-optimized tables do not require pages to be read from disk; all the data is stored in memory all the time. A set of checkpoint files, used only for recovery purposes, is created in a file system filegroup to keep track of data changes, and the checkpoint files are append-only. Memory-optimized tables use the same transaction log that is used for operations on disk-based tables, and it is stored on disk, ensuring that after a system crash the rows of a memory-optimized table can be recreated from the checkpoint files and the transaction log.

You also have the option of non-durable tables. With this option only the table schema is durable, not the data, so these tables are recreated empty when SQL Server starts after a crash.
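The two durability options can be sketched as below (table and column names are illustrative assumptions, not from any real schema):

```sql
-- Non-durable: schema survives a restart, data does not (good for caches, staging)
CREATE TABLE dbo.SessionCache
(
    SessionId INT NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    Payload   NVARCHAR(2000)
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY);

-- Durable (the default): fully recoverable from checkpoint files + transaction log
CREATE TABLE dbo.Orders_InMem
(
    OrderId INT NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    Amount  MONEY
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
```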

Indexes in In-Memory OLTP are also different: they are not stored in a normal B-tree structure; instead, hash indexes are supported for searching records. Every memory-optimized table must have at least one index, because it is the index that links all the rows of a table together. Indexes on memory-optimized tables are never stored on disk; they exist only in memory and are rebuilt every time SQL Server restarts, as the data is loaded back into memory.

Below you can see three disk-based tables, T1, T2, and T3, with a filegroup on disk. Once we want some tables to live in memory, a new space is allocated in SQL Server memory, and in addition there is a new type of filegroup for stream-based storage that persists a copy of the data so it can be recovered after a crash. Now consider a scenario where we want to move some disk-based tables into memory: we first have to drop them and recreate them with special syntax, after which the situation looks as shown below. As you can see there is a separate filegroup called the memory-optimized filegroup, all DML operations are logged in the conventional log file, and the indexes and data exist in memory itself; note that there is no copy of the indexes on disk.
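The memory-optimized filegroup mentioned above can be added with DDL along these lines (the database name and path are assumptions for illustration):

```sql
-- Add a filegroup for stream-based (checkpoint file) storage
ALTER DATABASE SalesDB
    ADD FILEGROUP SalesDB_MemOpt CONTAINS MEMORY_OPTIMIZED_DATA;

-- The "file" is actually a container folder on disk, not a single data file
ALTER DATABASE SalesDB
    ADD FILE (NAME = 'SalesDB_MemOpt_Container',
              FILENAME = 'C:\Data\SalesDB_MemOpt')
    TO FILEGROUP SalesDB_MemOpt;
```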


Natively Compiled Stored Procedures

In-Memory OLTP introduces the concept of native compilation. SQL Server can natively compile stored procedures that access memory-optimized tables. Native compilation allows faster data access and more efficient query execution than interpreted (traditional) Transact-SQL.

Native compilation refers to the process of converting programming constructs to native code, consisting of processor instructions that are executed directly by the CPU, without the need for further compilation or interpretation.

In-Memory OLTP compiles Transact-SQL constructs to native code for fast runtime data access and query execution. Native compilation of tables and stored procedures produce DLLs that are loaded into the SQL Server process.

In-Memory OLTP compiles both memory-optimized tables and natively compiled stored procedures to native DLLs when they are created. In addition, the DLLs are recompiled after a database or server restart. The information necessary to recreate the DLLs is stored in the database metadata.

A natively compiled stored procedure is a way to perform the same operation with fewer instructions, which matters now that CPU clock rates have stalled. For example, a task that requires 1,000 instructions under the conventional architecture might be completed with 300 instructions in a natively compiled stored procedure.
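A minimal sketch of the syntax looks like this (the procedure, table, and column names are illustrative; the table must be memory-optimized):

```sql
CREATE PROCEDURE dbo.usp_GetOrder @OrderId INT
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS
BEGIN ATOMIC
    WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')

    -- Compiled to a DLL at CREATE time; runs as native machine code
    SELECT OrderId, Amount
    FROM dbo.Orders_InMem        -- assumed memory-optimized table
    WHERE OrderId = @OrderId;
END;
```

Note that NATIVE_COMPILATION requires SCHEMABINDING and an ATOMIC block; these are mandatory parts of the syntax, not style choices.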



Hence you get significant performance gains in many respects. The storage engine itself is lock- and latch-free, so there is no locking or latching; if your application has a performance bottleneck caused by latch contention, you can use In-Memory OLTP to remove it, and you will see great results once you move the suffering tables into memory. Natively compiled stored procedures can give a 10-30x benefit in execution time, and memory-optimized tables require less logging than disk-based tables since no index operations are logged. Some latency can still come from the log, however, because a transaction's commit does not complete until the log is hardened to disk; with good storage this should not be an issue.


Although In-Memory OLTP tables provide a lot of performance gains, they also have a lot of limitations, and these must be considered before you decide to implement the feature in production. The following are not supported:

  • Truncate Table
  • Merge
  • Dynamic and Keyset cursor
  • Cross Database queries
  • Cross Database transactions
  • Linked Server
  • Locking hints
  • Isolation Level Hints (ReadUncommitted, ReadCommitted, and ReadCommittedLock)
  • Memory Optimized table types and table variable are not supported in CTP1
  • Tables containing LOB columns such as text or xml, or rows wider than 8 kB, cannot be configured as "in-memory"; there are also some limits on SQL commands and transaction concurrency options
  • Default and check constraints are not supported in memory optimized tables
  • User defined data types within table definition are not supported
  • Expects the collation of the table or database to be BIN2 (Latin1_General_100_BIN2); testing with a database using collation SQL_Latin1_General_CP1_CI_AS resulted in an error
  • File SIZE or AUTOGROWTH can’t be set to the file stream data file (in the CREATE DATABASE syntax)
  • The non BIN2 collation is not supported with indexes on memory optimized tables
  • In memory table creation fails in the normal database as it is not supported (The feature ‘non-bin2 collation’ is not supported with indexes on memory optimized tables.)
  • The above statement doesn’t apply for a normal disk table residing in an in-memory database
  • Identity columns can't be defined in memory optimized tables
  • Data types IMAGE, TEXT, NVARCHAR(MAX), and VARCHAR(MAX) are not supported in memory optimized tables
  • Row size for a memory optimized table is limited to 8060 bytes
  • A memory optimized table can't be altered to add a FOREIGN KEY constraint, and an in-line foreign key constraint can't be created either
  • CREATE TRIGGER not supported by the memory optimized tables
  • CREATE STATISTICS on the tables wasn’t allowed when tried to create (CREATE and UPDATE STATISTICS for memory optimized tables requires the WITH FULLSCAN or RESAMPLE and the NORECOMPUTE options; the WHERE clause is not supported.)
  • Partition not supported
  • LOB data type not supported

How to check the SQL Server version and the latest SQL Server versions?

There are several ways to check the running SQL Server version: from Query Analyzer, the command line, the GUI, and even by checking some binary file versions.

Select @@version



Output= Microsoft SQL Server 2005 – 9.00.4035.00 (Intel X86)   Nov 24 2008 13:01:59   Copyright (c) 1988-2005 Microsoft Corporation  Enterprise Evaluation Edition on Windows NT 5.1 (Build 2600: Service Pack 3)


Moreover, you can check it with the query below; open a query window and execute the following command.

Select SERVERPROPERTY('ProductVersion') AS ProductVersion, SERVERPROPERTY('ProductLevel') AS ProductLevel, SERVERPROPERTY('Edition') AS Edition, SERVERPROPERTY('EngineEdition') AS EngineEdition;



Output= 9.00.4035.00            SP3      Enterprise Evaluation Edition          3


3 = Enterprise (This is returned for Enterprise, Enterprise Evaluation, Data Center, and Developer.)
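To turn the ProductVersion string into something easier to compare programmatically, you can extract the major version number; a small sketch (the version mapping in the comment reflects the commonly documented numbering):

```sql
-- PARSENAME splits a dot-separated string; part 4 of 'x.y.z.w' is 'x'
SELECT PARSENAME(CONVERT(VARCHAR(32), SERVERPROPERTY('ProductVersion')), 4)
       AS MajorVersion;
-- 9 = SQL Server 2005, 10 = 2008 / 2008 R2, 11 = 2012, 12 = 2014
```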

If you are a bit reluctant 😉 to open a query window and fire a query, the version can also be checked from the server properties as shown below.



There is another way to check the SQL Server version: from SQL Server Configuration Manager. Open Configuration Manager, open the properties of the SQL Server service, and click the Advanced tab; there you can check the file version and the version of the running SQL Server.


Sometimes, if SCOM (System Center Operations Manager) is monitoring a production server in your environment, you may still get alerts that the SQL Server version is not up to date even though it has been updated. In that case you can check the version of sqlservr.exe itself: navigate to the location where the binaries are placed, right-click the sqlservr.exe file, and check its product version.


Note: Mainstream support for SQL Server 2000 was only available until April 8, 2008, and extended support ran until April 9, 2013. If you need more details, please follow the Microsoft link below.

Note: Mainstream support for SQL Server 2005 was only available until April 12, 2011, and extended support runs until April 12, 2016. If you need more details, please follow the Microsoft link below.

You can also check the version and patch level of your SQL Server from the registry:

SQL Server Default Instance (2000)


SQL Server Default Instance(2005)

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft SQL Server\MSSQL.1\Setup

SQL Server Named Instance (2008)




Which SQL Server Version?

If someone asks you, as a DBA, which SQL Server edition they should go for, it's a very tricky question both for the DBA and for the customer, who doesn't want to shed an extra penny from his pocket.

I'll start this discussion with SQL Server 2000. MSDE, the free version of SQL Server 2000, didn't include management tools and had a performance governor that made the product unsuitable for all but the most trivial real-world or personal uses.

MSDE is the Microsoft Data Engine, a scaled-down version of Microsoft SQL Server 2000 intended primarily for development use or as a backend database for small office systems. While intended to place a lighter load on system resources than a full installation of Microsoft SQL Server 2000, it is otherwise a functionally identical SQL database engine to its larger sibling. It is limited to a 2 GB database, supports servers with at most 2 GB of RAM and two processors, and doesn't support advanced SQL Server features such as BI, Reporting Services, or Full-Text Indexing/Search. The biggest disadvantages of MSDE are that it does not include a GUI management tool and that it incorporates a performance governor. However, if you have purchased a license for SQL Server 2000, you can use the Enterprise Manager tools included with it to manage your MSDE.

The most significant issue with using MSDE is the performance governor. It artificially limits the performance of the database engine by throttling the number of queries it can process simultaneously, which is why MSDE is not recommended for a live production website. Hence the product is unsuitable for all but the most trivial real-world or personal uses.

If we talk about SQL Server 2005, then we have SQL Server Express Edition available free of cost. Moreover the SQL Server 2005 Express Service Pack 1 (SP1) now offers a useful set of GUI administration tools. Running SQL Server 2005 Express on a fast dual-core server with 1GB of memory will give you a remarkably capable platform for lower-end production uses. SQL Server 2005 Express with Advanced Services even includes support for Reporting Services.

For SQL Server 2005 and 2008, the free SQL Server Express Edition can legally be used on laptops that connect to a paid version (such as Standard Edition) on the server, as long as you don't mind Express Edition's limitations, such as its database size limit: 4 GB for the 2005 version or 10 GB for 2008.

There is also SQL Server Workgroup Edition. It is mostly used as a data management solution for small organizations that need a database with no limits on size or number of users. Workgroup Edition can serve as a front-end web server or support departmental or branch office operations. It includes the core database features of the SQL Server product line and is easy to upgrade to Standard or Enterprise Edition.

SQL Server 2005 Workgroup includes Management Studio, Import/Export, limited replication publishing, and backup log shipping; for all this you have to shed around $4,000 per processor or around $730 per 5 users.

For database developers, the other option is SQL Server 2008 Developer Edition. Unlike Express Edition, Developer Edition has no limitations on database size, performance characteristics, or functionality. However, it's only licensed for use by a single developer in a non-production development environment.

Moreover, you may use it for your own purposes in developing applications, but you may not allow others to access either the database or those applications. Before you put the applications into production, you must transfer the database to a fully licensed version of SQL Server.

Now we talk about the 2 primary editions for SQL Server: Standard and Enterprise. Standard edition provides basic SQL Server features and will serve the needs of most businesses. Enterprise edition offers additional functionality and a much heftier price tag.

If you are working with large volumes of data, such as terabyte-class databases or billion-row tables, and need functionality like table partitioning, parallel indexing, and indexed views that can significantly improve performance in large environments, then you should choose Enterprise Edition.

Moreover, if you are working in a high-availability, mission-critical environment in which you want the system running 24/7/365, then features like online indexing will allow you to maintain indexes without scheduling downtime, and Online Restore and Fast Recovery are both options you want when recovering from a failure.

Enterprise Edition is the only Edition that allows you to replicate data from Oracle.

Enterprise Edition also provide some more advanced BI features, such as parallel processing, cube partitioning, and text mining, hence if you are working with large BI solutions then Enterprise Edition is suitable for you.

If you have a limited budget, SQL Server Standard is significantly less expensive than Enterprise when licensed per CPU. In conclusion, if you do not fall into any of the categories above, Standard Edition will most likely suffice.

Some myths come to mind while choosing an edition.

Standard isn’t as stable as Enterprise.
Not true. Standard and Enterprise both use the same core; the only difference is the additional features that Enterprise provides.
My other database servers are Enterprise, so all of my new servers need to be Enterprise, too.
Not true. While having a homogeneous environment does simplify many things (planning, maintenance, etc.), you can easily mix and match SQL Server editions within an environment to best meet your needs (and budget!).
I need Enterprise edition in order to replicate partitioned tables.
Not true. Replication can exist between partitioned and non-partitioned tables. In fact, replication by default does not create partitioning schemas on the subscription database, so the destination table is not partitioned unless explicitly requested. That said, if the source tables are partitioned, there may be good cause for the destination tables to be partitioned too, and Enterprise edition should be seriously considered.
I need Enterprise edition in order to cluster.
Not true starting in SQL Server 2005. Both 2005 and 2008 Standard editions support 2 node clusters. In a lot of enterprises, you will see Enterprise installed for clustering only. If you have a two node active/passive cluster and that is your only reason: you are probably overpaying and could be fine with standard edition.
I need Enterprise edition to access more memory
Not true with 2005 or 2008. These versions of SQL will address as much memory as the OS allows in Standard or Enterprise edition.

Why do we need Service Pack, Cumulative Update and Hot Fixes?

A service pack (SP) is a collection of updates, fixes, and/or enhancements to a software program delivered in the form of a single installable package. Many companies, such as Microsoft, typically release a service pack when the number of individual patches to a given program reaches a certain (arbitrary) limit. Installing a service pack is easier and less error-prone than installing a high number of patches individually, even more so when updating multiple computers over a network.

Hence, in short, an SP is a roll-up of hot fixes with some strategic improvements from the engineering team.

Now let’s go into more details about SP, Cumulative Update and Hot fixes
When a new version of a product is being developed, it may be made available to selected customers and community members for early testing; these are sometimes called alpha builds of the product.

As development progresses and the product becomes more and more polished, it's provided to a wider audience. These used to be called beta releases, for example beta 1, beta 2, etc. However, a few years ago Microsoft changed the terminology for SQL Server pre-releases; they are now referred to as CTPs (Community Technology Previews). You can download the November CTP, for example.

As the product enters its final stages before release, when the feature set is complete and the product is undergoing final testing, it's called an RC (Release Candidate).

After a product has undergone significant testing and it's determined that no more changes will be made before release, it's sometimes said that the product has gone golden. This is also called a GA (General Availability) release, and once the bits have been turned over to a company to mass-produce the media (CDs, DVDs, etc.), the product is said to have RTM'd (Released To Manufacturing).

Usually sometime around the RTM, the product version is “launched”. The timing of the launch may or may not have any correlation with the time the product is actually available for purchase. The launch has more to do with marketing and product feature education than availability.

Finally the product is released! It’s available for purchase.
Here comes the part you are looking for….
Over time, hot fixes are created by the dev team to address specific product issues affecting certain customers. Sometimes the issue is so widespread that a GDR (General Distribution Release) is issued so that all customers receive the updates.

Since hot fixes and GDRs are designed to quickly address specific problems encountered by specific customers, they can be issued rather often. The rapidity of the hot fix and GDR releases makes it impractical for many IT shops to keep up with the pace. So, a CU (Cumulative Update) is created that contains all of the applicable hot fixes. This makes it easier for customers who haven't been directly affected by the issues that sparked the hot fixes to remain current.

Once a large enough collection of changes has been gathered, an SP (Service Pack) is issued. Historically, SPs have also been the release vehicle used to deliver new features that were not ready at the time of GA. For example, Database Mirroring was made available in SP1, and SP2 brought us custom reports such as the Performance Dashboard. Microsoft has since helped administrators better understand the difference between a CU, IU, and SP.

SP=Service Pack
CU = Cumulative Updates
IU= Infrastructure Updates
GDR=General Distribution Release

Bugs that have been fixed in SQL Server 2005 Service Pack 1
Bugs that have been fixed in SQL Server 2005 Service Pack 2
Bugs that have been fixed in SQL Server 2005 Service Pack 3
Bugs that have been fixed in SQL Server 2005 Service Pack 4
Bugs that have been fixed in SQL Server 2008 Service Pack 1
Bugs that have been fixed in SQL Server 2008 Service Pack 2
Bugs that have been fixed in SQL Server 2008 Service Pack 3
Bugs that have been fixed in SQL Server 2008 R2 Service Pack 1
Bugs that have been fixed in SQL Server 2008 R2 Service Pack 2

SQL Server Evolution and some Interesting Facts

Microsoft SQL Server is a relational database server produced by Microsoft; let's talk about how SQL Server evolved into the avatar we know today.

Prior to version 7.0, the code base for MS SQL Server was licensed from Sybase, and that was the beginning of Microsoft's entry into the enterprise-level database market, competing against Oracle, IBM, and, later, Sybase itself.

The first version, SQL Server 1.0 for OS/2 (around 1989), was essentially the same as Sybase SQL Server 3.0 on Unix, VMS, etc. Microsoft SQL Server 4.2 shipped around 1992 (available bundled with IBM OS/2 version 1.3). Later, Microsoft SQL Server 4.21 for Windows NT was released at the same time as Windows NT 3.1. Microsoft SQL Server 6.0 was the first version designed for Windows NT.

In 1996, Microsoft updated SQL Server with the 6.5 release, and after a two-year development cycle it released the vastly updated SQL Server 7.0 in 1998. SQL Server 7.0 embodied many radical changes in the underlying storage and database engine technology used in SQL Server.

Microsoft's move from SQL Server 7.0 to SQL Server 2000 was more evolutionary and didn't entail the same kinds of massive changes as the move from 6.5 to 7.0. Instead, SQL Server 2000 built incrementally on the new code base established in the 7.0 release. Starting with SQL Server 2000, Microsoft began releasing updates to the base release in the following year, starting with XML for SQL Server Web Release 1, which added several XML features including the ability to receive a result set as an XML document. Next, the web release was renamed to the more succinctly titled SQLXML 2.0, which, among other things, added the ability to update the SQL Server database using XML updategrams. This was quickly followed by the SQLXML 3.0 web release, which included the ability to expose stored procedures as web services. Two years later, this release history culminated in SQL Server 2005, which uses the same basic architecture established with SQL Server 7 and adds to it all the features introduced with SQL Server 2000 and its web releases, in conjunction with the integration of the .NET CLR and an array of powerful new BI functions.

SQL Server 2005 was designed to be comprehensive, integrated data management and analysis software, enabling organizations to reliably manage mission-critical information and confidently run today's increasingly complex business applications. It significantly enhances the database programming model by hosting the Microsoft .NET Framework 2.0 Common Language Runtime (CLR). This enables developers to write procedures, triggers, and functions in any of the CLR languages, particularly Microsoft Visual C# .NET, Microsoft Visual Basic .NET, and Microsoft Visual C++, and also allows them to extend the database with new types and aggregates. Apart from the CLR, the new SSIS (SQL Server Integration Services), SSRS (SQL Server Reporting Services), and SSAS (SQL Server Analysis Services) revolutionized scalable application development with reporting and data warehousing concepts.

In 2008, another sensational evolution happened for SQL Server: the birth of SQL Server 2008, with exceptionally good new features. SQL Server 2008 is an advanced version compared to SQL Server 2005, with support for handling spatial and geographic data, which was a major turning point for SQL Server. Until SQL Server 2008 it was difficult to maintain geographic information in SQL Server databases; the introduction of the new geographic and spatial data types rocked the world of application development and data handling.

This latest release of the SQL Server Database Engine introduces new features and enhancements that increase the power and productivity of architects, developers, and administrators who design, develop, and maintain data storage systems.

Features like Change Data Capture (CDC), data compression, Resource Governor, backup compression, partition-aligned indexed views, Central Management Servers, Policy-Based Management, and Transparent Data Encryption made SQL Server 2008 a good competitor against the other enterprise-level database engines available in the market.

Microsoft has since released a more advanced version, SQL Server 2012, which contains features like AlwaysOn, columnstore indexes, user-defined server roles, and enhanced auditing features.

Over the years SQL Server has released various versions and is capturing the enterprise-level database market at a very fast rate; the rich GUI and the new features arriving in each version act as a catalyst. SQL Server has around 18% to 20% market share, according to information on the web, and is growing at a very rapid rate.

What is SQL Server ?

SQL Server is Microsoft's relational database management system (RDBMS), alongside other RDBMSs such as Oracle, Sybase, MySQL, and DB2; Microsoft released it to compete against rivals like Oracle Database and MySQL. Like all major RDBMSs, SQL Server supports ANSI SQL, the standard SQL language; however, it also contains T-SQL, its own SQL implementation. SQL Server Management Studio (SSMS), previously known as Enterprise Manager in SQL 2000 and earlier versions, is SQL Server's main interface tool, and it supports 32-bit and 64-bit environments. SQL Server has been released in various editions:

  • Enterprise: Designed for large enterprises with complex data requirements, data warehousing, and web-enabled databases. Has all the features of SQL Server, and its license pricing is the most expensive.
  • Standard: Targeted toward small and medium organizations. Also supports e-commerce and data warehousing.
  • Workgroup: For small organizations. No size or user limits, and may be used as the backend database for small web servers or branch offices.
  • Express: Free for distribution. Has the fewest features and limits database size and users.

SQL Server also includes several add-on services. While these are not essential for the operation of the database system itself, they provide value-added services on top of the core database management system. These services either run as part of some SQL Server component or out-of-process as a Windows service, and they present their own APIs to control and interact with them.




Replication Services

SQL Server Replication Services are used by SQL Server to replicate and synchronize database objects, either in entirety or a subset of the objects present, across replication agents, which might be other database servers across the network, or database caches on the client side. Replication follows a publisher/subscriber model, i.e., the changes are sent out by one database server (“publisher”) and are received by others (“subscribers”).

Analysis Services

The SQL Server Analysis Services, or SSAS, is a multidimensional analysis tool that features Online Analytical Processing, powerful data mining capabilities, and deeper dimensions to business information within a relational database. Multidimensional analysis is an OLAP technique that produces the ability to analyze large quantities of data by storing data in axes and cells instead of the traditional relational two-dimensional view in rows and columns. SSAS places predictive analytic capabilities in the hands of information workers by creating an instant connection to backend data using familiar applications such as Microsoft Excel and SharePoint for analysis, visual presentation and collaboration.


Reporting Services

Microsoft has come up with its own reporting service alongside the SQL Server database: Microsoft SQL Server Reporting Services, also known as SSRS.

SSRS is a web-based solution in which reports can be shared among customers, stakeholders, and top management on a web platform. It is a comprehensive reporting platform whereby reports are stored on a centralized web server, which makes deployment quite simple.

This innovative approach that Microsoft introduced in SSRS enables users to create reports with less effort.



Integration Services

SQL Server Integration Services is most advanced tool and is used to integrate data from different data sources like Oracle, SQL, or Sybase. It is used for the Extraction Transform and Load capabilities for SQL Server for data warehousing needs. It includes tools Business Intelligence Studio to build data extraction workflows integration various functionality such as extracting data from various sources, querying data, transforming data including aggregating, duplication and merging data, and then loading the transformed data onto other sources, mail notifications.


Service Broker

Used inside an instance, it is used to provide an asynchronous programming environment. For cross instance applications, Service Broker communicates over TCP/IP and allows the different components to be synchronized together, via exchange of messages. The Service Broker, which runs as a part of the database engine, provides a reliable messaging and message queuing platform for SQL Server applications.
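The messaging flow can be sketched with the core DDL and DML below (all object names are illustrative; a real application would also split the send and receive sides into separate services and end the conversation when done):

```sql
-- Define the message vocabulary and the queue/service that use it
CREATE MESSAGE TYPE OrderMsg VALIDATION = WELL_FORMED_XML;
CREATE CONTRACT OrderContract (OrderMsg SENT BY INITIATOR);
CREATE QUEUE OrderQueue;
CREATE SERVICE OrderService ON QUEUE OrderQueue (OrderContract);

-- Send a message asynchronously on a conversation
DECLARE @h UNIQUEIDENTIFIER;
BEGIN DIALOG CONVERSATION @h
    FROM SERVICE OrderService TO SERVICE 'OrderService'
    ON CONTRACT OrderContract WITH ENCRYPTION = OFF;
SEND ON CONVERSATION @h MESSAGE TYPE OrderMsg (N'<order id="1"/>');

-- A receiver (possibly another session) picks the message off the queue
RECEIVE TOP (1) conversation_handle, message_body FROM OrderQueue;
```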

Full Text Search Service

SQL Server Full Text Search service is a specialized indexing and querying service for unstructured text stored in SQL Server databases. A full-text index can be created on any column with character-based text data, allowing words to be searched for in those columns. While such searches can be performed with the SQL LIKE operator, using the Full Text Search service can be much more efficient.
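The contrast can be sketched as below (the table, column, and the existence of a full-text index on it are assumptions for illustration):

```sql
-- LIKE with a leading wildcard scans every row and cannot use an index:
SELECT DocId FROM dbo.Documents WHERE Body LIKE '%warehouse%';

-- CONTAINS uses the full-text index instead of scanning:
SELECT DocId FROM dbo.Documents WHERE CONTAINS(Body, 'warehouse');

-- Full-text search also understands word forms (run, ran, running, ...):
SELECT DocId FROM dbo.Documents
WHERE CONTAINS(Body, 'FORMSOF(INFLECTIONAL, run)');
```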

Notification Services

Notification Services is a framework for building rich yet scalable event- and subscription-matching applications. Developers use Notification Services to build alerting applications that generate and deliver alerts for events (e.g., an alert that tells a customer that an out-of-stock item is available for order). Typically, events generate notifications that match criteria from user-generated subscriptions. MSN Alerts uses Notification Services to generate several million notifications per day. Notification Services is part of the Microsoft SQL Server Business Intelligence (BI) platform: when you look at Microsoft's three BI pillars (Integrate, Analyze, Report), you see that Notification Services is an important feature for the Reporting pillar.