Category Archives: SQL Server Database Administration

How to add article in Transactional Replication

Let’s consider a scenario: we have a database with 3 tables, 2 of which are already added to replication, and we want to add the third table to the publication.

This matters most when we have a large database and we don’t want to reinitialize the replication.

To avoid a complete snapshot of the Publisher database and to avoid re-initialization, we need to make some changes to the existing publication. We need to set 2 properties to False, as shown below:

  1. allow_anonymous
  2. immediate_sync

Execute the commands below:

 USE [YourPublicationDB] -- replace with the name of your publication database
 GO
 EXEC sp_changepublication @publication = 'PubDB',
      @property = 'allow_anonymous',
      @value = 'false'
 GO
 EXEC sp_changepublication @publication = 'PubDB',
      @property = 'immediate_sync',
      @value = 'false'
 GO


Now add the new articles in existing publication
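If you prefer T-SQL over the GUI, the new article can also be added with the replication stored procedures. A minimal sketch, assuming the publication is named 'PubDB' and the new table is dbo.T3 (adjust the names to your environment):

```sql
-- Run in the publication database: add the new table as an article
EXEC sp_addarticle
     @publication   = N'PubDB',
     @article       = N'T3',
     @source_owner  = N'dbo',
     @source_object = N'T3';

-- Add subscriptions for the new article to the existing subscriptions
EXEC sp_refreshsubscriptions @publication = N'PubDB';
```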


Now start Snapshot Agent
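The Snapshot Agent can also be started from T-SQL instead of the GUI (publication name 'PubDB' assumed, as above):

```sql
-- Run in the publication database: starts the Snapshot Agent job for this publication
EXEC sp_startpublication_snapshot @publication = N'PubDB';
```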


Now you can notice that it creates a snapshot of only one article instead of all articles. Start the Log Reader Agent if it’s in the stopped state.


As you can see, a snapshot was generated for only one article, so this article will be replicated to all subscribers without impacting the existing replication.

Don’t forget to execute the commands below at the end to re-enable the disabled properties.

 USE [YourPublicationDB] -- replace with the name of your publication database
 GO
 EXEC sp_changepublication @publication = 'PubDB',
      @property = 'allow_anonymous',
      @value = 'true'
 GO
 EXEC sp_changepublication @publication = 'PubDB',
      @property = 'immediate_sync',
      @value = 'true'
 GO


Now you can verify the article on all your subscribers
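To double-check which articles the publication now contains, you can also query the Publisher (publication name 'PubDB' assumed):

```sql
-- Run in the publication database: lists all articles in the publication
EXEC sp_helparticle @publication = N'PubDB';
```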

If you want to know more about the immediate_sync property, follow the link below:

  1. Immediate_sync



SQL Performance Analyzer and Health Check

Sara Performance Analyzer

Sara Performance Analyzer (SQL Administration and Reporting Analysis) is a Windows-based application which helps you troubleshoot SQL Server issues. The application performs health checks on various parameters and provides a full report for your SQL Server. The tool provides detailed information on the parameters below:

  • Hardware
  • Disk Status
  • Services
  • Instance
  • DB Configuration
  • Backup Status
  • Temp DB Status
  • Performance
  • Job related information


Download the zip file, extract all the files into a folder, and run setup.exe. This will install the application with the name SARA on your machine.

>>Download Sara Performance Analyzer<<

How to run this tool, and the prerequisites for this application:

  • You must be SYSADMIN on the SQL Server and a server administrator on the machine to run this application
  • The application enables xp_cmdshell to execute some commands and then disables it afterwards, so test this in your DEV and TEST environments before executing it in production
  • Use SQL Authentication or the Integrated option
  • Enter the server name
  • Select the authentication type
  • Click Process

This application takes around 1-3 minutes to finish all tests and then displays the results in a data grid.

How many SQL Server instances can I host efficiently on an existing server?

It’s a very open-ended question. Before answering it, you need to understand what kind of applications you are running and how many transactions they make against the database. You need to understand the size of the databases and how much CPU/memory is required to run them on SQL Server. It also depends on the performance capacity of your physical server/VM: if you have a server with multiple cores, 100 gigs of memory, and a super-fast SAN, you can host multiple instances.

This kind of instance consolidation has its own pros and cons: it can be beneficial, but it can also be a headache for the DBA or architecture team.

Why Would You Use Multiple SQL Server Instances?

  • You can save a lot of hardware cost
  • SQL Server license costs are currently per CPU socket per machine, not per instance on a machine, so doubling up the instances on a machine can save a lot of cost
  • You can fully utilize the hardware performance to its limit

Performance benchmarking is very important for consolidation (running multiple instances on a server). The benchmark will tell you how much MIN/MAX CPU and MIN/MAX memory is required to run a SQL instance, as well as how much storage capacity is required. Beyond these, network bandwidth and I/O also play an important role in the decision to host multiple instances on one server.

Which important counters must I collect to define the benchmark of a SQL instance?

First, identify the physical configuration of the server on which you are hosting the SQL instance.


You need to study the trends of the performance counters below to define the minimum, average, and maximum values required for your SQL instance to run.

  1. Identify the CPU utilization trend over a period of time: this will let you decide how much computing power (how many processors) is required to host the SQL instance.
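One way to see this trend without an external tool is the scheduler monitor ring buffer, which keeps the last few hours of CPU utilization at roughly one-minute intervals. A sketch of that query:

```sql
-- Recent SQL Server vs. system CPU utilization from the scheduler monitor ring buffer
DECLARE @ts_now BIGINT;
SELECT @ts_now = cpu_ticks / (cpu_ticks / ms_ticks) FROM sys.dm_os_sys_info;

SELECT TOP (60)
       DATEADD(ms, -1 * (@ts_now - [timestamp]), GETDATE()) AS [Event Time],
       record.value('(./Record/SchedulerMonitorEvent/SystemHealth/ProcessUtilization)[1]', 'int')
                                                            AS [SQL Server CPU %],
       record.value('(./Record/SchedulerMonitorEvent/SystemHealth/SystemIdle)[1]', 'int')
                                                            AS [System Idle %]
FROM (SELECT [timestamp], CONVERT(XML, record) AS record
      FROM sys.dm_os_ring_buffers
      WHERE ring_buffer_type = N'RING_BUFFER_SCHEDULER_MONITOR'
        AND record LIKE N'%<SystemHealth>%') AS rb
ORDER BY [Event Time] DESC;
```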


2. Identify the CPU utilization per database: you can easily get the average CPU utilization of each database from T-SQL. A job can be configured on the server so you can plot a trend and actually see which database is consuming the most CPU.

 WITH db_cpu_stats
 AS (SELECT databaseid,
            DB_NAME(databaseid) AS [database name],
            SUM(total_worker_time) AS [cpu_time_ms]
     FROM sys.dm_exec_query_stats AS qs
     CROSS APPLY (SELECT CONVERT(int, value) AS [databaseid]
                  FROM sys.dm_exec_plan_attributes(qs.plan_handle)
                  WHERE attribute = N'dbid') AS f_db
     GROUP BY databaseid)
 SELECT [database name] AS 'Name',
        [cpu_time_ms] AS 'CPU Time [MS]',
        CAST([cpu_time_ms] * 1.0 / SUM([cpu_time_ms])
             OVER() * 100.0 AS decimal(5, 2)) AS 'CPU [%]'
 FROM db_cpu_stats
 WHERE databaseid <> 32767 -- ResourceDB
 ORDER BY CAST([cpu_time_ms] * 1.0 / SUM([cpu_time_ms])
              OVER() * 100.0 AS decimal(5, 2)) DESC
 OPTION (RECOMPILE)



3. Identify the memory utilization of each database:

DECLARE @total_buffer INT;

SELECT @total_buffer = cntr_value
FROM sys.dm_os_performance_counters
WHERE RTRIM([object_name]) LIKE '%Buffer Manager'
  AND counter_name = 'Database Pages';

;WITH src AS
(
    SELECT database_id, db_buffer_pages = COUNT_BIG(*)
    FROM sys.dm_os_buffer_descriptors
    --WHERE database_id BETWEEN 5 AND 32766
    GROUP BY database_id
)
SELECT [db_name] = CASE [database_id] WHEN 32767
                        THEN 'Resource DB'
                        ELSE DB_NAME([database_id]) END,
       db_buffer_MB = db_buffer_pages / 128,
       db_buffer_percent = CONVERT(DECIMAL(6,3),
           db_buffer_pages * 100.0 / @total_buffer)
FROM src
ORDER BY db_buffer_MB DESC;

4. Identify the usage of the counters below to define MIN/MAX memory and other memory configurations.

These counters help you understand the load on the SQL Server. Monitor them over a period of time and analyze the trend; that will give you the min, max, and average utilization of these parameters.
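These counters can also be sampled directly from T-SQL (e.g. by a scheduled job that logs them to a table). A minimal sketch reading a few of the memory counters discussed on this page:

```sql
-- Sample key memory counters from the performance counter DMV
SELECT RTRIM([object_name]) AS [Object], counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE (counter_name = N'Page life expectancy' AND [object_name] LIKE N'%Buffer Manager%')
   OR counter_name IN (N'Memory Grants Pending',
                       N'Target Server Memory (KB)',
                       N'Total Server Memory (KB)');
```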




5. Identify the database usage trends to define capacity management
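For the usage trend, a scheduled job can log the allocated file sizes per database; for example:

```sql
-- Current allocated size per database and file type; log this periodically to see growth trends
SELECT DB_NAME(database_id) AS [Database],
       type_desc            AS [File Type],
       SUM(size) * 8 / 1024 AS [Size MB]   -- size is reported in 8 KB pages
FROM sys.master_files
GROUP BY database_id, type_desc
ORDER BY [Database], [File Type];
```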


Apart from all this, you need to focus on the I/O subsystem. Your I/O subsystem must be fast, and further separating your log, data, and TEMPDB files will give you an additional advantage.

Once you define your benchmark on the basis of the above counters, it will help you determine the CPU, memory, storage, and bandwidth requirements for a SQL instance.

Who Detached a Database on SQL Server?

Recently we received a complaint from a customer that one of their databases gets detached automatically once a week, and they have no idea why this is happening.

The other problem is that when they try to attach it again, SQL Server says the file is already in use, even though the data file is not attached to any database.

Technically speaking, a SQL database can be detached only by a user with sysadmin rights, and only with manual intervention, i.e. either by script (a job, a .NET application, PowerShell, etc.) or through SSMS.

We can track this event from default trace as shown below.

 DECLARE @path NVARCHAR(260);
 SELECT @path = path FROM sys.traces WHERE is_default = 1;

 SELECT TE.name AS EventName,
        DT.TextData, DT.HostName, DT.StartTime, DT.ApplicationName, DT.LoginName
 FROM dbo.fn_trace_gettable(@path, DEFAULT) DT
 INNER JOIN sys.trace_events TE
     ON DT.EventClass = TE.trace_event_id
 WHERE DT.TextData LIKE 'DBCC DETACHDB%';


It has been seen in SQL Server that when you detach a database, the SQL service account loses its permissions on the data/log files. I would therefore suggest taking the database offline first and then detaching it, as this does not reset the file permissions. This permission reset is also why you can get a "file already in use" error even though the file is not in use; in that case, remove the unnecessary permissions and grant the SQL service account permissions on the files again.
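The offline-then-detach sequence described above looks like this (the database name is a placeholder):

```sql
-- Taking the database offline first preserves the NTFS permissions on the data/log files
ALTER DATABASE [MyDB] SET OFFLINE WITH ROLLBACK IMMEDIATE;

EXEC master.dbo.sp_detach_db @dbname = N'MyDB';
```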



How to handle the "Memory Pages Per Second is too High" alert

Sometimes when you configure SCOM (System Center Operations Manager, a monitoring tool) in your environment with a SQL management pack that monitors "Memory Pages Per Second", you will get the following alert: "Memory Pages Per Second is too High. The threshold for the Memory\Pages/sec performance counter has been exceeded."

Let’s first discuss how this alert is raised by System Center Operations Manager.

The Pages/sec counter is the rate at which pages are read from or written to disk to resolve hard page faults. In more detail, it identifies the rate of pages read from and written to disk to resolve memory requests for pages that were not in memory: Pages/sec is the sum of pages input/sec and pages output/sec (pages output represents pages removed from memory and headed to disk). A page fault happens when the memory manager does not find the block of memory it needs in the cache; that memory has been swapped out to disk, and this degrades your server's and SQL Server's performance.

This rule generates an alert when Pages/sec exceeds the defined threshold for a sustained period of time (the default is Memory\Pages/sec > 50).

When this threshold is exceeded, it often means that the system is low on physical memory, because the operating system is reading from the hard disk to recover information from virtual memory. When the system is in this state, overall system performance is significantly diminished, resulting in poor operating system and application performance.

Hence, make sure that this rule is tuned to the server's configuration, keeping in mind the performance capacity of the server, before it is used. Before enabling this rule, create a baseline Pages/sec performance signature for each system. Once you have a baseline, set a default threshold that works for the largest percentage of managed nodes, and then use the override feature to define thresholds for the exception cases.

From the SQL Server perspective, this kind of alert is a primary indicator that there could be a memory pressure/bottleneck issue, but that depends on several factors: how frequently you are getting this alert from the same server, how the server is configured, how much memory is allocated to SQL, the number of instances running on the server, etc.

Note: a high number in the Pages/sec counter does not necessarily mean that you have a memory problem; it could be a temporary issue. But if the same server keeps generating the same kind of alert, monitor the counters below over a period of time to confirm whether you need to add more memory to the system to reduce the number of page faults and improve performance.

Memory – Available MBytes

SQL Server: Buffer Manager – Page Life Expectancy (Higher is good)

SQL Server: Memory Manager – Memory Grants Pending (Lower is good)

SQL Server: Buffer Manager – Buffer Cache Hit Ratio (higher is better)

SQL Server: Memory Manager – Target Server Memory

SQL Server: Memory Manager – Total Server Memory

SQL Server: SQL Statistics – Batch Requests/sec

SQL Server: SQL Statistics – Compilations/sec
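Several of the SQL Server counters above can also be read straight from the DMVs. For example, the buffer cache hit ratio is a ratio counter and must be computed from its base counter:

```sql
-- Buffer cache hit ratio = ratio counter divided by its base counter
SELECT CAST(a.cntr_value * 100.0 / b.cntr_value AS DECIMAL(5, 2)) AS [Buffer Cache Hit Ratio %]
FROM sys.dm_os_performance_counters AS a
JOIN sys.dm_os_performance_counters AS b
  ON a.[object_name] = b.[object_name]
WHERE a.counter_name = N'Buffer cache hit ratio'
  AND b.counter_name = N'Buffer cache hit ratio base'
  AND a.[object_name] LIKE N'%Buffer Manager%';
```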

Hence, from the capacity planning and performance perspective, watch for an upward trend in the value of Memory\Pages/sec, and add memory when paging operations absorb more than 20-50% of your total disk I/O bandwidth.

An occasional alert of this kind from SQL is OK; you can monitor for some time and resolve it if all goes fine. But if the alert is frequent, then you have to investigate the above counters, because it could be a memory crunch/bottleneck. The trend of these counter values is more important than their absolute value at any given time; if you notice that the average Page Life Expectancy is dropping over time, that is a quite significant indication (and not a good one).

Conclusion: don’t ignore this alert if it’s coming frequently from certain servers; during that time, focus on the trend obtained from the above counters. (You can ignore the alert if you know that some heavy activity runs on the server at certain times and causes it, and of course if the customer is not complaining.) If you are sure that for some servers you can’t do anything because their hardware lifecycle is over and you don’t want to spend more on them, then redefine your threshold or override this alert with the help of the SCOM team in your environment. Otherwise, if on the basis of the above counters you feel that it’s actually a serious issue on the server, ask the customer to add more memory to their dedicated server; memory chips are quite cheap these days, but there is still a cost involved, so be sure before asking.


Error: SQL Server 2008 Installation Error: “Invoke or Begin Invoke cannot be called on a control until the window handle has been created”


While installing SQL Server 2008 I received the above error; you may encounter the same error when installing SQL Server 2008 on Windows 7. Later I came to know that this issue happens because of a behavior change in .NET that exposes an issue in the SQL Setup User Interface (UI).


1. In the installation disc you will find two folders, x86 and x64

2. Select the one matching your OS and go inside the folder

3. Click on LandingPage.exe

4. Click on the second option in the left-hand menu, "Installation"

5. Start the standalone installation from the menu on the right side

6. It will ask for the location of the files on the disc

7. Point it to the location, e.g. /SQL Server Enterprise 2008/x86 or x64 depending on your OS

It will complete the installation successfully.

 Workaround:

 You can also try the SQL Server slipstream setup with SP1 CU4, as Microsoft confirmed this is a problem and it is corrected in Cumulative Update 4 for SQL Server 2008 Service Pack 1.

Please refer to the Microsoft support article on Cumulative Update 4 for SQL Server 2008 Service Pack 1 for more detail.

How to extract historical deadlock from extended events



The system_health extended events session keeps deadlock reports in its ring buffer target; the query below pulls the historical deadlock graphs out of it.

DECLARE @ring XML;

SET @ring = (SELECT CAST(xet.[target_data] AS [xml])
             FROM sys.dm_xe_session_targets xet
             INNER JOIN sys.dm_xe_sessions xe
                 ON xe.[address] = xet.[event_session_address]
             WHERE xe.[name] = 'system_health');

SELECT COUNT(*) OVER ()                                           AS 'DEADLOCKS',
       ROW_NUMBER()
           OVER (
               ORDER BY syshealth.xevent.value('(@timestamp)[1]',
                                               'DATETIME'))       AS 'Sequence',
       syshealth.xevent.value('(@timestamp)[1]',
                              'DATETIME')                         AS 'Deadlock time',
       --SysHealth.XEvent.query('.') AS [DeadlockEvent],
       CAST(syshealth.xevent.value('(data/value)[1]',
                                   'NVARCHAR(MAX)') AS xml)       AS 'Deadlock graph'
       --SysHealth.XEvent.value('data[1]','NVARCHAR(MAX)') AS DeadlockGraph
FROM (SELECT @ring AS ring) AS buffer
CROSS APPLY ring.nodes('//RingBufferTarget/event') AS syshealth (xevent)
WHERE syshealth.xevent.value('(@name)[1]',
                             'varchar(100)') = 'xml_deadlock_report'
ORDER BY [Deadlock time] DESC;