
One or more subscriptions have been marked inactive

Replication became inactive

Sometimes you may notice that your replication has become inactive, and you never found out whether there was an error or whether the Distribution Agent job was failing. Even with SCOM configured, you never get a job failure alert.

In this scenario, if you look closely at the properties of the Distribution Agent and Log Reader Agent jobs, you will find that step 2 of both jobs, “Run agent.”, has a Retry attempts option in its advanced properties with the value 2147483647.


This is the Replication Agent default: retry 2147483647 times, once every minute. That is over 4,000 years; isn’t that strange?

2,147,483,647 minutes = 35,791,394 hours = 1,491,308 days ≈ 4,085 years.

How does this impact Replication?

With this kind of setup, when a replication agent configured for continuous retry fails, the agent does not report the failure back to the SQL Agent service; it simply waits one minute and tries again, for up to 4,000 years. Hence you will never get any kind of alert, even with SCOM configured.

Example: if the Distribution Agent, again running continuously, is unable to deliver data because of an error such as 20598 or 2627, the agent will go into infinite retry logic.

20598=Row was not found at the Subscriber when applying the replicated command.

2627 =Violation of PRIMARY KEY constraint ‘PK__A’. Cannot insert duplicate key in object ‘dbo.A’.

In this scenario, if it goes undetected, the agent stays in retry mode until the Distribution cleanup job removes the “cached” transactions from the distribution database. At that point the Distribution Agent no longer has the data needed to “catch up” the subscriber. As a result, the subscription is marked “inactive” and the following error is posted to the MSrepl_errors table in the distribution database.

18854=One or more subscriptions have been marked inactive. Drop and re-create all subscriptions for this node that are failing with this error.
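To see what the agents have been logging, here is a minimal sketch (assuming the default distribution database name, distribution):

-- Most recent replication errors recorded by the agents
Select top (50) [time], error_code, error_text
From distribution.dbo.MSrepl_errors
Order by [time] desc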

Which Replication Agents are set to retry with this default value?

You can look at the individual Agent job properties in SQL Server Management Studio or run the following query in the MSDB database.

Select * from msdb.dbo.sysjobsteps where retry_attempts=2147483647

How to fix this issue?

I would recommend setting all of your Replication Agent retry_attempts to a smaller value, say 10 (fail after roughly 10 minutes).
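As a minimal sketch of the change, using msdb.dbo.sp_update_jobstep (the job name is hypothetical; step 2 is the “Run agent.” step identified above):

Exec msdb.dbo.sp_update_jobstep
    @job_name       = N'MyPublisher-MyPubDB-MyPublication-SUBSCRIBER-1',  -- hypothetical job name
    @step_id        = 2,   -- the "Run agent." step
    @retry_attempts = 10,  -- fail after 10 retries instead of 2147483647
    @retry_interval = 1    -- minutes between retries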

Note: the actual value you select depends on your business needs and the criticality of replication. For extra measure, you can also add a notification alert so you are told when the job has failed.
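One hedged option for that notification is to attach an operator to the job with msdb.dbo.sp_update_job (operator and job names are illustrative):

Exec msdb.dbo.sp_update_job
    @job_name                   = N'MyPublisher-MyPubDB-MyPublication-SUBSCRIBER-1',
    @notify_level_email         = 2,           -- 2 = notify on failure
    @notify_email_operator_name = N'DBA Team'  -- hypothetical operator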

This should give you enough time to detect the problem, resolve the issue, and get the Replication Agent up and running before the subscriber is marked as “expired”.

Misconception about Online Rebuild Index Operation till SQL 2012

Microsoft introduced the online index rebuild operation in SQL Server 2005, and it has been available in every version since. So the question arises: is an online index rebuild really an online operation? The answer is yes, to some extent, but when it comes to DDL-specific operations during the rebuild things change; please follow the link below for more details.

Misconception about Online Rebuild Index Operation till SQL 2012
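For reference, a minimal sketch of the operation under discussion (the table name is illustrative; ONLINE = ON is an Enterprise Edition feature):

ALTER INDEX ALL ON dbo.MyTable  -- hypothetical table
REBUILD WITH (ONLINE = ON);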


How to handle Memory Pages Per Second is too High?

Sometimes when you configure SCOM (System Center Operations Manager, a monitoring tool) in your environment and configure the SQL Management Pack to monitor “Memory Pages Per Second”, you will get the following alert: “Memory Pages Per Second is too High. The threshold for the Memory\Pages/sec performance counter has been exceeded.”

Let’s first discuss how this alert is raised by System Center Operations Manager.

The Pages/sec counter is the rate at which pages are read from or written to disk to resolve hard page faults. In more detail, it measures the rate of pages read from and written to disk to resolve memory requests for pages that were not in memory; in other words, Pages/sec is the sum of pages input/sec and pages output/sec (pages output represents pages removed from memory and headed to disk). A hard page fault happens when the memory manager does not find the block of memory it needs in the cache: that memory has been swapped out to disk, and this degrades your server and SQL Server performance.

This rule generates an alert when Pages/sec exceeds the defined threshold for a sustained period of time; the default threshold is Memory\Pages/sec > 50.

When this performance threshold is exceeded, it often means the system is low on physical memory, because the operating system is reading from the hard disk to recover information from virtual memory. When the system is in this state, overall performance is significantly diminished, resulting in poor operating system and application performance.

Hence make sure this rule is tuned to the server configuration, keeping in mind the performance capacity of the server, before it is used. Before enabling the rule, create a baseline Pages/sec performance signature for each system. Once you have a baseline, set a default threshold that works for the largest percentage of managed nodes, and then use the override feature to define thresholds for the exceptional cases.

From the SQL Server perspective, receiving this kind of alert is a primary indicator that there could be memory pressure or a memory bottleneck, but that depends on several factors: how frequently you get the alert from the same server, how the server is configured, how much memory is allocated to SQL Server, the number of instances running on the server, and so on.

Note: a high number in the Pages/sec counter does not necessarily mean you have a memory problem; it could be a temporary issue. But if the same server keeps generating the same alert, monitor the counters below over a period of time to confirm whether you need to add more memory to the system to reduce page faults and improve performance.

Memory – Available MBytes

SQL Server: Buffer Manager – Page Life Expectancy (higher is better)

SQL Server: Memory Manager – Memory Grants Pending (lower is better)

SQL Server: Buffer Manager – Buffer Cache Hit Ratio (higher is better)

SQL Server: Memory Manager – Target Server Memory

SQL Server: Memory Manager – Total Server Memory

SQL Server: SQL Statistics – Batch Requests/sec

SQL Server: SQL Statistics – SQL Compilations/sec
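As a quick way to watch the SQL Server-side counters from this list, here is a sketch using the sys.dm_os_performance_counters DMV. Note that Memory\Pages/sec is an operating-system counter and is not exposed there, and that on a named instance the object_name values are prefixed MSSQL$<instance>: rather than SQLServer:.

Select [object_name], counter_name, cntr_value
From sys.dm_os_performance_counters
Where counter_name in (N'Page life expectancy', N'Memory Grants Pending',
                       N'Buffer cache hit ratio', N'Target Server Memory (KB)',
                       N'Total Server Memory (KB)', N'Batch Requests/sec',
                       N'SQL Compilations/sec')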

Hence, from the capacity-planning and performance perspective, keep watch for an upward trend in the value of Memory\Pages/sec, and add memory when paging operations absorb more than 20-50% of your total disk I/O bandwidth.

An occasional alert of this kind from SQL Server is OK; you can monitor for some time and close it if all goes fine. But if the alert is frequent, you have to investigate the counters above, because it could be a memory crunch or bottleneck. The trend in these counter values is more important than their absolute value at any given time; if you notice the average Page Life Expectancy dropping over time, that is a quite significant indication (and not a good one).

Conclusion: don’t ignore this alert if it comes frequently from certain servers; during that time, focus on the trend obtained from the counters above. (You can ignore the alert if you know that heavy activity runs on the server at certain times and causes it, and of course if the customer is not complaining.) If you are sure that for some servers you can’t do anything because their hardware lifecycle is over and you don’t want to spend more on them, redefine your threshold or override the alert with the help of the SCOM team in your environment. Otherwise, if the counters above suggest it is actually a serious issue, ask the customer to add more memory to their dedicated server; memory chips are quite cheap these days, but there is still a cost involved.


What is In-Memory OLTP in SQL Server 2014 (Code Name “Hekaton”)?

The terms In-Memory OLTP and Hekaton can be used interchangeably, but Microsoft officially announced this new technology as “In-Memory OLTP”. “Hekaton” is the Greek word for “hundred”, and the project was given this name for its potential to speed up database functions 100x. It certainly increases application speed by 10x, and by nearly 50x for new, optimized applications, but this depends on several parameters.

Hekaton works by providing in-memory storage for the most frequently used tables in SQL Server, but don’t confuse it with an older technology you might have heard of, DBCC PINTABLE (database_id, table_id); this is entirely different.

DBCC PINTABLE does not cause the table to be read into memory. As the pages from the table are read into the buffer cache by normal Transact-SQL statements, they are marked as pinned pages. SQL Server does not flush pinned pages when it needs space to read in a new page. SQL Server still logs updates to the page and, if necessary, writes the updated page back to disk. SQL Server does, however, keep a copy of the page available in the buffer cache until the table is unpinned with the DBCC UNPINTABLE statement.

DBCC PINTABLE is best used to keep small, frequently referenced tables in memory. The pages for the small table are read into memory once, and all future references to their data do not require a disk read, but the internal structure still remains page based.
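For historical context, a minimal sketch of how DBCC PINTABLE was used (it is deprecated and has no effect on SQL Server 2005 and later; the database and table names are hypothetical):

-- Deprecated: shown for context only; a no-op on SQL Server 2005 and later.
DECLARE @db_id int, @tbl_id int
SET @db_id  = DB_ID('MyDatabase')                   -- hypothetical database
SET @tbl_id = OBJECT_ID('MyDatabase..SmallLookup')  -- hypothetical small lookup table
DBCC PINTABLE (@db_id, @tbl_id)    -- pin: pages stay in the buffer cache once read
DBCC UNPINTABLE (@db_id, @tbl_id)  -- unpin when no longer needed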

Since the internal structure is page based, there will definitely be locking, latching, and logging; the indexes use the same page-based structure, which also requires locking and latching.

Although DBCC PINTABLE can provide a performance enhancement, it has some limitations. If a large table is pinned, it can start using a large portion of the buffer cache and not leave enough cache to service the other tables in the system adequately. If a table larger than the buffer cache is pinned, it can fill the entire buffer cache; in that case a sysadmin must shut down SQL Server, restart it, and then unpin the table. Pinning too many tables can cause the same problems as pinning a table larger than the buffer cache.


In-Memory OLTP in SQL Server 2014 is entirely different; you could also say it is an evolution of what DBCC PINTABLE attempted. Tables in In-Memory OLTP are stored in an entirely different way from disk-based tables: they use entirely different data and index structures, because this feature does not store data on pages or use a separate page-based index mechanism. This removes the need for latching and can solve the latch-contention problem that mostly happens in the case of last-page inserts.

[Diagram: latch contention on last-page inserts]

You can check for this problem via DMVs and investigate the issue with the queries below.

Select * From sys.dm_os_waiting_tasks

Select wt.*, st.text
From sys.dm_os_waiting_tasks wt
left join sys.dm_exec_requests er
    on er.session_id = wt.session_id
outer apply sys.dm_exec_sql_text(er.sql_handle) st  -- resolve the statement text for each waiting task
where wt.resource_description is not null


Now, since In-Memory OLTP works on the new structure described above, we get the advantages below:

  • Elimination of disk reads entirely, because data always resides in memory
  • Hash indexing (more efficient than the B-tree indexes of on-disk tables)
  • Elimination of the locking and latching mechanisms used in on-disk tables
  • Reduction in “blocking” via an improved multi-version concurrency model

In-Memory OLTP Architecture

[Diagrams: In-Memory OLTP architecture, memory-optimized tables and indexes alongside the conventional disk-based engine]

As you can see from the diagram above, tables and indexes are stored in memory in a different way: there is no buffer pool as in the conventional architecture, and since there is no 8 KB page-based data structure, Microsoft uses stream-based storage, where files are only appended to store the durable data.

Hence the major difference is that memory-optimized tables do not require pages to be read from disk; all the data is stored in memory all the time. A set of checkpoint files, used only for recovery purposes, is created in a filestream filegroup to keep track of data changes, and the checkpoint files are append-only. Memory-optimized tables use the same transaction log that is used for operations on disk-based tables, and it is stored on disk, to ensure that after a system crash the rows of a memory-optimized table can be recreated from the checkpoint files and the transaction log.

You also have the option of a non-durable table; in that case only the table schema is durable, not the data, so such tables are recreated empty when SQL Server starts after a crash.

Indexes in In-Memory OLTP are also different: they are not stored in a normal B-tree structure; instead, hash indexes are used to search for records. Every memory-optimized table must have at least one hash index, because it is the index that links all the rows of a table together. Indexes on memory-optimized tables are never stored on disk; they exist only in memory and are recreated every time SQL Server restarts and the data is loaded into memory.

Below you can see three disk-based tables, T1, T2, and T3, with their filegroup on disk; once we want some tables to live in memory, a new space is carved out in SQL Server memory.

[Diagram: disk-based tables T1, T2, T3 with a conventional filegroup]

In addition, there is a new type of filegroup for stream-based storage that persists a copy of the data, to ensure the data can be recovered in case of a crash. Now consider a scenario where we want to move some disk-based tables into memory: we first have to drop them and recreate them with special syntax, and the situation then looks as shown below. There is a separate filegroup called the memory-optimized filegroup, all DML operations are logged in the conventional log file, and the indexes and data exist in memory itself; note that there is no copy of the indexes on disk.

[Diagram: memory-optimized filegroup, with data and indexes kept in memory]
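As a minimal sketch of that setup (all names and paths are illustrative, following the SQL Server 2014 syntax and the CTP restrictions listed later, such as the BIN2 collation on character columns):

-- Add a memory-optimized filegroup and its container to an existing database.
ALTER DATABASE SalesDB ADD FILEGROUP SalesDB_mod CONTAINS MEMORY_OPTIMIZED_DATA;
ALTER DATABASE SalesDB ADD FILE
    (NAME = 'SalesDB_mod_file', FILENAME = 'D:\Data\SalesDB_mod')
    TO FILEGROUP SalesDB_mod;

-- A disk-based table cannot be converted in place: drop it, then recreate it in memory.
CREATE TABLE dbo.T1
(
    Id   INT NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1048576),
    Col1 NVARCHAR(100) COLLATE Latin1_General_100_BIN2 NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON,
      DURABILITY = SCHEMA_AND_DATA);  -- SCHEMA_ONLY gives the non-durable variant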

Natively Compiled Stored Procedures

In-Memory OLTP introduces the concept of native compilation. SQL Server can natively compile stored procedures that access memory-optimized tables. Native compilation allows faster data access and more efficient query execution than interpreted (traditional) Transact-SQL.

Native compilation refers to the process of converting programming constructs to native code, consisting of processor instructions that are executed directly by the CPU, without the need for further compilation or interpretation.

In-Memory OLTP compiles Transact-SQL constructs to native code for fast runtime data access and query execution. Native compilation of tables and stored procedures produce DLLs that are loaded into the SQL Server process.

In-Memory OLTP compiles both memory-optimized tables and natively compiled stored procedures to native DLLs when they are created. In addition, the DLLs are recompiled after a database or server restart. The information necessary to recreate the DLLs is stored in the database metadata.

A natively compiled stored procedure is a way to perform the same operation with fewer instructions, which matters because CPU clock rates have stalled. For example, a task that requires 1,000 instructions under the conventional architecture might be completed with 300 instructions in a natively compiled stored procedure.
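A minimal sketch of such a procedure (assuming the hypothetical dbo.T1 memory-optimized table sketched earlier):

-- Compiled to a DLL at create time; executes as native code with no interpretation.
CREATE PROCEDURE dbo.usp_InsertT1
    @Id int, @Col1 nvarchar(100)
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS
BEGIN ATOMIC WITH
    (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
    INSERT INTO dbo.T1 (Id, Col1) VALUES (@Id, @Col1);
END;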


Hence you will get large performance gains in many respects. The storage engine itself is lock- and latch-free, so there is no locking or latching; if your application has a performance bottleneck caused by latch contention, you can use In-Memory OLTP to remove that contention, and you will see great results once you move the suffering tables into memory. Natively compiled stored procedures can give a 10-30x benefit in execution time, and memory-optimized tables require less logging than disk-based tables because no index operations are logged. Some latency can still come from the log, because a transaction commit does not complete until the log is hardened to disk; with good storage, this will not be an issue.


Although In-Memory OLTP tables provide a lot of performance gains, they also have a lot of limitations, and these must be considered before you decide to implement the feature in production:

  • TRUNCATE TABLE
  • MERGE
  • Dynamic and keyset cursors
  • Cross-database queries
  • Cross-database transactions
  • Linked servers
  • Locking hints
  • Isolation level hints (READUNCOMMITTED, READCOMMITTED, and READCOMMITTEDLOCK)
  • Memory-optimized table types and table variables are not supported in CTP1
  • Tables containing LOB columns such as text or xml, or with a row width exceeding 8 KB, cannot be configured as “in-memory”; there are also limits on some SQL commands and transaction concurrency options
  • Default and check constraints are not supported on memory-optimized tables
  • User-defined data types within the table definition are not supported
  • Expects a BINARY collation (Latin1_General_100_BIN2) for the table or database (tested with a database using the SQL_Latin1_General_CP1_CI_AS collation, which resulted in an error)
  • SIZE and AUTOGROWTH can’t be set on the filestream data file (in the CREATE DATABASE syntax)
  • Non-BIN2 collations are not supported with indexes on memory-optimized tables
  • In-memory table creation fails in a normal database, as it is not supported (“The feature ‘non-bin2 collation’ is not supported with indexes on memory optimized tables.”)
  • The above statement doesn’t apply to a normal disk-based table residing in an in-memory database
  • Identity columns can’t be defined in memory-optimized tables
  • The data types IMAGE, TEXT, NVARCHAR(MAX), and VARCHAR(MAX) are not supported in memory-optimized tables
  • Row size for a memory-optimized table is limited to 8060 bytes
  • A memory-optimized table can’t be altered to add a FOREIGN KEY constraint, and an in-line foreign key constraint can’t be created either
  • CREATE TRIGGER is not supported on memory-optimized tables
  • CREATE STATISTICS on these tables wasn’t allowed when tried (“CREATE and UPDATE STATISTICS for memory optimized tables requires the WITH FULLSCAN or RESAMPLE and the NORECOMPUTE options; the WHERE clause is not supported.”)
  • Partitioning is not supported
  • LOB data types are not supported

How to extract SQL Server Instance Details

declare @RegPathParams sysname
declare @Arg sysname
declare @Param sysname
declare @MasterPath nvarchar(512)
declare @LogPath nvarchar(512)
declare @ErrorLogPath nvarchar(512)
declare @n int

select @n = 0
select @RegPathParams = N'Software\Microsoft\MSSQLServer\MSSQLServer' + '\Parameters'
select @Param = 'dummy'

-- Walk the SqlArg0, SqlArg1, ... registry values to find the -d (master data path),
-- -l (master log path) and -e (errorlog path) startup parameters.
while (not @Param is null)
begin
    select @Param = null
    select @Arg = 'SqlArg' + convert(nvarchar, @n)

    exec master.dbo.xp_instance_regread N'HKEY_LOCAL_MACHINE', @RegPathParams, @Arg, @Param OUTPUT

    if (@Param like '-d%')
    begin
        select @Param = substring(@Param, 3, 255)
        select @MasterPath = substring(@Param, 1, len(@Param) - charindex('\', reverse(@Param)))
    end
    else if (@Param like '-l%')
    begin
        select @Param = substring(@Param, 3, 255)
        select @LogPath = substring(@Param, 1, len(@Param) - charindex('\', reverse(@Param)))
    end
    else if (@Param like '-e%')
    begin
        select @Param = substring(@Param, 3, 255)
        select @ErrorLogPath = substring(@Param, 1, len(@Param) - charindex('\', reverse(@Param)))
    end

    select @n = @n + 1
end

declare @SmoRoot nvarchar(512)

exec master.dbo.xp_instance_regread N'HKEY_LOCAL_MACHINE', N'SOFTWARE\Microsoft\MSSQLServer\Setup', N'SQLPath', @SmoRoot OUTPUT

SELECT
    @ErrorLogPath AS [ErrorLogPath],
    @SmoRoot AS [RootDirectory],
    CAST(case when 'a' <> 'A' then 1 else 0 end AS bit) AS [IsCaseSensitive],
    @@MAX_PRECISION AS [MaxPrecision],
    CAST(FULLTEXTSERVICEPROPERTY('IsFullTextInstalled') AS bit) AS [IsFullTextInstalled],
    @LogPath AS [MasterDBLogPath],
    @MasterPath AS [MasterDBPath],
    SERVERPROPERTY(N'ProductVersion') AS [VersionString],
    CAST(SERVERPROPERTY(N'Edition') AS sysname) AS [Edition],
    CAST(SERVERPROPERTY(N'ProductLevel') AS sysname) AS [ProductLevel],
    CAST(SERVERPROPERTY('IsSingleUser') AS bit) AS [IsSingleUser],
    CAST(SERVERPROPERTY('EngineEdition') AS int) AS [EngineEdition],
    convert(sysname, serverproperty(N'Collation')) AS [Collation],
    CAST(SERVERPROPERTY(N'MachineName') AS sysname) AS [NetName],
    CAST(SERVERPROPERTY('IsClustered') AS bit) AS [IsClustered],
    SERVERPROPERTY(N'ResourceVersion') AS [ResourceVersionString],
    SERVERPROPERTY(N'ResourceLastUpdateDateTime') AS [ResourceLastUpdateDateTime],
    SERVERPROPERTY(N'CollationID') AS [CollationID],
    SERVERPROPERTY(N'ComparisonStyle') AS [ComparisonStyle],
    SERVERPROPERTY(N'SqlCharSet') AS [SqlCharSet],
    SERVERPROPERTY(N'SqlCharSetName') AS [SqlCharSetName],
    SERVERPROPERTY(N'SqlSortOrder') AS [SqlSortOrder],
    SERVERPROPERTY(N'SqlSortOrderName') AS [SqlSortOrderName],
    SERVERPROPERTY(N'BuildClrVersion') AS [BuildClrVersionString],
    SERVERPROPERTY(N'ComputerNamePhysicalNetBIOS') AS [ComputerNamePhysicalNetBIOS],
    'LastRestart' = (select crdate from sys.sysdatabases where name = 'tempdb')

Error: SQL Server 2008 Installation Error: “Invoke or Begin Invoke cannot be called on a control until the window handle has been created”


While installing SQL Server 2008 I received the above error, and the same error can be encountered when you install SQL Server 2008 on Windows 7. Later I came to know that this issue happens because of a behavior change in .NET that exposes an issue in the SQL Setup user interface (UI).

Resolution:

1. On the installation disc you will find two folders, x86 and x64.

2. Select the one matching your OS and go inside the folder.

3. Double-click LandingPage.exe.

4. Click the second option in the left-hand menu, “Installation”.

5. Start the standalone installation from the menu on the right side.

6. It will ask for the location of the files on the disc.

7. Point it to the location, e.g. /SQL Server Enterprise 2008/x86 (or x64, depending on your OS).

The installation will then complete successfully.

Workaround:

You can also install SQL Server using a slipstream setup that includes SP1 CU4, as Microsoft confirmed this is a known problem and corrected it in Cumulative Update 4 for SQL Server 2008 Service Pack 1.

Please refer to the link below for more details:

http://support.microsoft.com/kb/975055

SSRS Error: Exceeding Maximum Requests for One User

Suppose you intentionally want to restrict, or increase, the maximum number of requests one user can make to the report server.


There can be a scenario where one specific login creates too many connections to the report server via some application, causing an issue when the number of connections reaches a maximum limit called MaxActiveReqForOneUser. This parameter is defined in the RSReportServer.config file, found at a path like D:\Program Files\Microsoft SQL Server\MSRS10.MSSQLSERVER\Reporting Services\ReportServer.

Open this file in any text editor, set the value of your choice, and save the file.

<Add Key="MaxActiveReqForOneUser" Value="20"/>

In this way you can increase or restrict the number of requests from one user.