Category Archives: Uncategorized

SSRS ERROR: Managed server timed out. Timeout=150 sec. GC extension timeout=30 sec.

We had an issue in which Reporting Services 2008, running on Windows Server 2008, suddenly stopped responding; connections to the report server failed, and even Reporting Services Configuration Manager could not load the Reporting Services configuration.

We explored the Reporting Services logs, which are generally available in the location below, and found something interesting.

ERROR: Managed server timed out. Timeout=150 sec. GC extension timeout=30 sec.

%ProgramFiles%\Microsoft SQL Server\MSSQL.x\Reporting Services\LogFiles

[Screenshots: Reporting Services log entries]

The investigation suggested a bug in Reporting Services: the major cause of the service becoming unresponsive or crashing was slow GC (garbage collection, which manages the allocation and release of memory for a managed application). This can be seen clearly in the logs; the entry “Granting additional timeout of 30 sec.” indicates that the managed server timed out while a GC was in progress, so slow GC was the likely reason.

Slow GC usually occurs when the box has a lot of RAM and the report server process is using a lot of it. The problem is that for managed applications a garbage collection (GC) occurs periodically, and during a GC the process is temporarily suspended.
Resolution:

Microsoft suggests that killing long-running queries or processes related to SSRS from a query window should resolve this issue.

http://connect.microsoft.com/SQLServer/feedback/details/519612/reporting-services-2008-crashing
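If you want to identify such long-running requests before killing anything, a minimal sketch (assuming the default ReportServer database names) is:

SELECT r.session_id, r.start_time, r.status, r.command,
       DB_NAME(r.database_id) AS database_name, t.text AS running_sql
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE DB_NAME(r.database_id) LIKE 'ReportServer%'
ORDER BY r.start_time;

-- KILL <session_id>   -- only after confirming the session is safe to terminate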

Resolution which worked for us

We tried several things, but the resolution that finally worked for us was to apply the latest service pack (SP3) for SQL Server 2008; after applying it, Reporting Services returned to its normal, consistent state.

 

Unable to add new Article in Replication

I encountered an interesting issue with transactional replication where I was unable to add a newly created table for some subscribers. My replication setup consists of 1 publisher and 7 subscribers.

I followed the steps below to add a new article to my existing publication; the same steps work fine in my development environment, which is almost identical to production.

  1. To avoid a complete snapshot of the publisher database I set allow_anonymous and immediate_sync to False (by default both are set to True):
    EXEC sp_changepublication @publication = 'PubDB',
    @property = 'allow_anonymous',
    @value = 'false'
    GO
    EXEC sp_changepublication @publication = 'PubDB',
    @property = 'immediate_sync',
    @value = 'false'
    GO
  2. Added the new table to the existing publication through the publication properties.
  3. Start Snapshot Agent
  4. The Snapshot Agent created a snapshot for the single new article; the generated scripts can be seen in the repl data folder.
  5. Started the Distribution Agent (skip this step if it is already running) to deliver the snapshot to the subscriber; the result should look as shown below.

Here I noticed that on 3 of the 7 sites the Distribution Agent did not apply the snapshot to the subscriber, and after some time the snapshot folder containing the schema, index and BCP files was deleted.

This was because my distribution cleanup job was configured to run every 5 minutes to control the size of the distribution database and of the shared drive containing the snapshots.

EXEC dbo.sp_MSdistribution_cleanup  @min_distretention = 0, @max_distretention = 72

Initially I focused on one of the properties, immediate_sync = TRUE, which I believed could be responsible for deleting the snapshot before it was applied to the subscriber.

But after looking into my distribution cleanup job properties and referring to the article, my assumption proved wrong: the difference is that in my case @min_distretention = 0, which is the “do not wait, delete the transaction once it has been delivered to the subscriber” case.

If “immediate sync” = TRUE, keep transactions EVEN IF they have been replicated to the subscriber, but delete any transactions older than 72 hours. Metadata that is younger than 72 hours but older than 4 hours is kept. This is conservative.

If “immediate sync” = FALSE, then keep transactions for at least 4 hours, but if a transaction has already been replicated (distributed) to the subscriber, go ahead and delete it EVEN IF it is not older than 72 hours. Metadata that is younger than 72 hours but older than 4 hours may be deleted quite aggressively.

Later I focused on the replication script and noticed a small change in the subscription properties of these 3 subscribers, as shown below; the change is small but its impact is big.

@sync_type = N'replication support only'

[Screenshot: subscription properties]

This property allows you to manually synchronize your data and schema on your publisher and your subscriber; the support objects needed to make replication work are then pushed automatically to the subscriber. The benefit is that you can still take advantage of the replication tools to set up replication, but you also have more control over getting your data and schema in sync. This is great if you already have a database in place and you only want to set up replication. Instead of having to drop and recreate the tables, this option just pushes out the objects needed for replication and does not affect your existing schema or data.

But you will not be able to add a new article to the publication once you configure a subscriber with this option; this is by design for the “replication support only” property.

Workaround:

There is no direct way to fix this: the property is not available in the GUI and cannot be modified, so you have to drop and recreate the subscription with @sync_type = N'automatic'. In a mission-critical application this may not be possible, so one approach is to manually apply the script on the subscriber: if you have the script copied from the repl data folder, execute it on all the subscribers that have this problem.
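If dropping and recreating the subscription is possible, a rough sketch of it (run at the publisher on the publication database; the subscriber and destination database names are placeholders, and 'PubDB' follows the earlier example) would be:

EXEC sp_dropsubscription
     @publication = N'PubDB',
     @article = N'all',
     @subscriber = N'SUBSERVER1',
     @destination_db = N'SubDB';
GO
EXEC sp_addsubscription
     @publication = N'PubDB',
     @subscriber = N'SUBSERVER1',
     @destination_db = N'SubDB',
     @subscription_type = N'push',
     @sync_type = N'automatic';
GO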


Important: While you are manually adding this article, ensure that no transactions are made against this table; otherwise you will encounter errors like “Object Not Found” or “Row Not Found”.

Resolution:

  • Create a new database with the same name on another SQL instance and change web.config to point to this database; meanwhile you can reconfigure your subscriber using the re-initialization method with @sync_type = N'automatic'.
  • If your application accesses the subscriber database through synonyms from another database, create a database with the same name (via restore) on the same SQL instance and point the synonyms to it; meanwhile you can reconfigure your subscriber using the re-initialization method with @sync_type = N'automatic'.

When was this option enabled on my subscription, and why?

There is no direct way to identify when this configuration was done, so you have to rely on the creation date of the distribution agent job or check the restore date of the subscriber database. These might give you some idea of whether replication was synchronized with this option in the past to avoid downtime.
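A minimal sketch of both checks (illustrative only): the creation dates of the distribution agent jobs on the distributor, and the restore history on the subscriber.

-- On the distributor: when were the distribution agent jobs created?
SELECT j.name, j.date_created, j.date_modified
FROM msdb.dbo.sysjobs AS j
JOIN msdb.dbo.syscategories AS c ON c.category_id = j.category_id
WHERE c.name = N'REPL-Distribution'
ORDER BY j.date_created;

-- On the subscriber: when was the subscriber database last restored?
SELECT destination_database_name, restore_date, restore_type
FROM msdb.dbo.restorehistory
ORDER BY restore_date DESC;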

 

SQL Server is using more memory than assigned: is this normal?

SQL Server is using more memory than assigned; is this normal?

First of all, make sure the minimum memory of SQL Server is set to 0 MB, because this ensures that even in the worst-case memory crunch the SQL Server service will come up. Keep in mind that by default SQL Server will consume as much memory as you allow, and apart from the database engine there are other components to consider, such as SSIS, SSRS and SSAS, because they consume memory outside of SQL Server's allocation. So if you are running these components on the server, you have to leave headroom for them (roughly 4 GB for the OS plus 2 GB for these components, excluding SQL Server's max memory). In this case SQL Server is assigned 13 GB out of a total of 16 GB, so by design SQL Server will try to capture all 13 GB. The most important and interesting point is that the MAX memory setting only defines the buffer pool size. So if you are a DBA and the Windows team is bugging you that your SQL Server is using more memory than assigned, tune them out for a while and investigate once they transfer the ticket to you.

[Screenshot: SQL Server minimum and maximum server memory settings]

[Screenshot: Performance Monitor memory counters]
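For reference, a minimal sketch of checking and setting these limits via T-SQL; the 13 GB figure is only illustrative and should be sized for your own server.

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'min server memory (MB)', 0;       -- service can still start under a memory crunch
EXEC sp_configure 'max server memory (MB)', 13312;   -- illustrative 13 GB cap (buffer pool on 2008 R2 and earlier)
RECONFIGURE;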

As I said above, you are only defining the MAX buffer pool size, not the complete memory for SQL Server; other memory allocations are required outside the buffer pool for SQLOS and the internal operations of SQL Server, and that allows it to allocate more than max server memory. Logically, in SQL Server 2008 R2 and all earlier versions, MAX memory just controls the size of the buffer pool; you could say you are defining an upper limit for the buffer pool, where it stores data pages and the procedure cache, but there are other memory clerks within SQL Server that also use memory. Hence it is sometimes normal to see SQL Server using more memory than assigned.

Now the question is: what action should a DBA take when server memory usage is at 98%?

Don't jump straight to the conclusion that there is memory pressure on the server; first we need to consider some performance counters.

  • Target vs Total SQL Memory

Most blogs will point to the Target vs. Total memory of SQL Server; however, I would say that Target Server Memory > Total Server Memory on its own is not a reliable indicator of memory pressure on SQL Server.

[Graph: Target Server Memory vs. Total Server Memory]
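A quick way to read these two counters from inside SQL Server (a sketch; the object_name value carries an instance prefix, hence the LIKE):

SELECT counter_name, cntr_value / 1024 AS value_mb
FROM sys.dm_os_performance_counters
WHERE [object_name] LIKE '%Memory Manager%'
  AND counter_name IN ('Target Server Memory (KB)', 'Total Server Memory (KB)');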

  • Page Life Expectancy

It is the expected time, in seconds, that a data file page read into the buffer pool (the in-memory cache of data file pages) will remain in memory before being pushed out of memory to make room for a different data file page. Another way to think of PLE (Page Life Expectancy) is as an instantaneous measure of the pressure on the buffer pool to make free space for pages being read from disk. For both of these definitions, a higher number is better.

However, it is very tricky when you have a NUMA configuration: the buffer pool is split up into buffer nodes, with one buffer node per NUMA node that SQL Server can see. Each buffer node tracks PLE separately, and the Buffer Manager: Page life expectancy counter is the average of the buffer node PLEs.

As a rough rule of thumb, a value of 300 seconds (5 minutes) or more is fine.
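A sketch for reading PLE overall and per NUMA node from the DMVs (the Buffer Node object only appears when multiple NUMA nodes are visible):

SELECT [object_name], counter_name, instance_name, cntr_value AS ple_seconds
FROM sys.dm_os_performance_counters
WHERE counter_name = 'Page life expectancy'
  AND ([object_name] LIKE '%Buffer Manager%' OR [object_name] LIKE '%Buffer Node%');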

  • Memory Grant Pending

Every query needs memory to execute, and a query memory grant is the part of server memory used to store temporary row data while sorting and joining rows. It is called a “grant” because the server requires queries to “reserve” memory before actually using it. This reservation improves query reliability under server load, because a query with reserved memory is less likely to hit an out-of-memory condition while running, and the server prevents one query from dominating the entire server memory.

If Memory Grants Pending is averaging over 0, queries can’t even start because they can’t get enough memory. You may have queries that have incorrect memory grants, or you may just desperately need more RAM.

In the graph below you can see that the yellow line is always 0, which means sufficient memory was available all the time; but if you look closely, my page life expectancy varies from 1 minute to 200 minutes, which indicates that something is happening on my server.

[Graph: Page Life Expectancy vs. Memory Grants Pending]
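A sketch for checking the counter, plus any sessions currently waiting on a memory grant:

SELECT cntr_value AS memory_grants_pending
FROM sys.dm_os_performance_counters
WHERE [object_name] LIKE '%Memory Manager%'
  AND counter_name = 'Memory Grants Pending';

SELECT session_id, requested_memory_kb, granted_memory_kb, wait_time_ms
FROM sys.dm_exec_query_memory_grants
WHERE grant_time IS NULL;   -- still waiting for a grant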

  • Buffer Cache Hit Ratio

It gives the ratio of data page requests satisfied from the SQL Server buffer cache to all data page requests. Pages that are not found in the buffer cache are read from disk, which is significantly slower and affects performance. This value should always be close to 100.

[Graph: Buffer Cache Hit Ratio]
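The ratio is exposed as two raw counters (a value and a base), so a sketch of computing the percentage:

SELECT 100.0 * a.cntr_value / b.cntr_value AS buffer_cache_hit_ratio_pct
FROM sys.dm_os_performance_counters AS a
JOIN sys.dm_os_performance_counters AS b
  ON a.[object_name] = b.[object_name]
WHERE a.counter_name = 'Buffer cache hit ratio'
  AND b.counter_name = 'Buffer cache hit ratio base'
  AND a.[object_name] LIKE '%Buffer Manager%';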

I have covered some important performance counters in one of my blog posts; below is the link where you can get more information.

http://coresql.com/2013/10/23/how-to-handle-memory-pages-per-second-is-too-high/

What is Integration Services Catalog or SSISDB in SQL 2012

Recently one of my friends, a BI expert, gave me the idea of using the Integration Services Catalog to manage SSIS packages in a more efficient way. He showed me a glimpse of this new feature and I have to say

it's a really useful and nice concept. I tried it and implemented it in one of my environments, worked out how to migrate a complete Integration Services catalog with all its environment variables from one server to another, and also implemented a mirroring solution on the Integration Services catalog database (SSISDB) to increase the availability of SSISDB and of the jobs and SSIS packages that depend on it.

Prior to SQL Server 2012 and SSISDB, all packages were stored either in MSDB or in the file system; there is also a third option, the SSIS Package Store. The SSIS Package Store is nothing but a combination of the SQL Server and file system deployments, as you can see when you connect to SSIS through SSMS: it looks like a store which has categorized its contents (packages) into different categories based on the taste of its manager (which is you, the package developer). So don't mistake it for something different from the two types of package deployment (MSDB and file system).

[Screenshot: SSIS package store in SSMS]

Because SSIS packages are really just XML files, many organizations have used the “copy and configure” approach to deployments. In such cases the packages are typically copied manually, or with batch scripts, to shared locations or to the MSDB database, where they are executed. Configuration information is then stored in configuration files or in special database tables meant only for storing SSIS configuration values.

The “copy and configure” approach can lead to problems. For example, consider a project where all the SSIS configuration parameters, including connection strings, were stored in a database table. As a result, whenever we restored a copy of the production database in the test environment, all the SSIS packages in the test environment would point to the production database.

In an attempt to solve these types of problems, SSIS 2012 provides some new package deployment features, including the new SSISDB catalog and database. SSIS 2012 stores all the packages, projects, parameters, permissions, server properties, and operational history in the SSISDB database, bringing together all the “moving parts” for any SSIS deployment. You access the SSISDB database in SQL Server Management Studio (SSMS) by expanding the Databases node in Object Explorer.

What is Integration Services Catalog?

The SSIS catalog is a new repository model for Integration Services, introduced in SQL Server 2012; this new repository model comes with a lot of features for developers and database administrators.

The SSISDB catalog is the central point for working with Integration Services (SSIS) projects that you’ve deployed to the Integration Services server. For example, you set project and package parameters, configure environments to specify runtime values for packages, execute and troubleshoot packages, and manage Integration Services server operations. The objects that are stored in the SSISDB catalog include projects, packages, parameters, environments, and operational history.

[Screenshot: Integration Services Catalog in SSMS Object Explorer]

The project deployment model of SSIS 2012 no longer stores packages in MSDB or on the file system. Instead, once you configure the Integration Services catalog on the server, a separate SSISDB database is created on the database server. The project deployment model also allows you to define different environments (for example DEV, CONS and PROD) to assign the correct values for the parameters used in your packages according to the environment against which they are executed.

Packages can be deployed to SSISDB, and you can consider this a best practice. This feature keeps a deployment history of packages (like a very basic version control), so you can even roll back to earlier revisions of your package. You no longer need XML files or dedicated SQL tables to save your configurations. Managing security is also very easy: everything can be handled via SQL Server security instead of DCOM.

Advantages of Integration Services Catalog (SSISDB)?

  • Packages can be directly deployed to SSISDB using SQL Server Data Tools for BI Studio. Packages keep a deployment history so you can even rollback some revisions of your package.
  • You can manage security through SQL Server because now everything can be handled via SQL Server Security.
  • Integration Services Catalog (SSISDB) comes with a new feature called Integration Services Dashboard, a report automatically built with report services template. Just click Integration Service Catalog and right click your packages to view “All Executions”.
  • You can see very detailed information about your packages, including execution time.

[Screenshot: Integration Services Dashboard]

  • You can parameterize your Connection Manager or just parts of it.

Example: You can parameterize the server name, and in your SSISDB you can create two (or more) environments called “CONS” and “PROD”. Then you can add variables to both of them and map them to the input parameters of your package. The main advantage is that you can deploy a package to SSISDB, link it to an environment, and not have to handle the connection strings yourself.

  • The main advantage of the new model is the configuration. You don't need XML files or dedicated SQL tables to save your configurations; you can use input parameters and map them to environments defined on the SQL Server.

How to configure Integration Services Catalog (SSISDB)?

How packages are stored in various version of SQL Server?

  • 2005 – stored in msdb.dbo.sysdtspackages90
  • 2008 – stored in msdb.dbo.sysssispackages (I seem to recall 2008 RTM using a different table, or reused the 90 table but that got patched out)
  • 2008 R2 – stored in msdb.dbo.sysssispackages
  • 2012 (package deployment model) – stored in msdb.dbo.sysssispackages
  • 2012 (project deployment model) – stored in SSISDB.catalog.packages*
  • 2014 (package deployment model) – stored in msdb.dbo.sysssispackages
  • 2014 (project deployment model) – stored in SSISDB.catalog.packages*
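As a quick sketch (assuming both stores exist on the instance), the storage locations listed above can be queried directly:

-- Package deployment model (MSDB)
SELECT name, description, verbuild
FROM msdb.dbo.sysssispackages;

-- Project deployment model (SSISDB catalog)
SELECT f.name AS folder_name, pr.name AS project_name, p.name AS package_name
FROM SSISDB.catalog.packages AS p
JOIN SSISDB.catalog.projects AS pr ON pr.project_id = p.project_id
JOIN SSISDB.catalog.folders  AS f  ON f.folder_id = pr.folder_id;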

How to backup, restore or move SSISDB from one server to another?

One or more subscriptions have been marked inactive

Replication became inactive

Sometimes you may notice that your replication has become inactive, yet you never learned whether there was an error or whether the Distribution Agent job was failing. Even if you have SCOM configured, you never get a job failure alert.

In this scenario, if you look closely at the properties of the Distribution Agent and Log Reader Agent jobs, you will find that step 2 of both jobs (“Run agent.”) has a Retry attempts value of 2147483647 in its advanced properties.

[Screenshot: job step advanced properties showing retry attempts]

This is the replication agent default: retry 2147483647 times, once every minute. That is over 4,000 years; isn't it strange?

2,147,483,647 minutes = 35,791,394 hours = 1,491,308 days ≈ 4,085 years.

How does this impact Replication?

With this kind of setup, when a replication agent configured for continuous retry fails, the agent does not report the failure back to the SQL Server Agent service; it waits 1 minute and then tries again, for up to 4,000 years. Hence you will never get any kind of alert, even with SCOM configured.

Example: if the Distribution Agent, again running continuously, is unable to deliver data because of an error such as 20598 or 2627, the agent will go into an effectively infinite retry loop.

20598=Row was not found at the Subscriber when applying the replicated command.

2627 =Violation of PRIMARY KEY constraint ‘PK__A’. Cannot insert duplicate key in object ‘dbo.A’.

In this scenario, if it goes undetected, the agent will stay in retry mode until the distribution cleanup job removes the “cached” transactions from the distribution database. At this point the Distribution Agent will no longer have the data needed to “catch up” the subscriber. As a result, the subscriber will be marked “inactive” and the following error will be posted to the MSrepl_errors table in the distribution database.

18854=One or more subscriptions have been marked inactive. Drop and re-create all subscriptions for this node that are failing with this error.

Which replication agents are marked to retry with this default value?

You can look at the individual Agent job properties in SQL Server Management Studio or run the following query in the MSDB database.

Select * from msdb.dbo.sysjobsteps where retry_attempts=2147483647

How to fix this issue?

I would recommend setting all of your replication agent retry_attempts values to something smaller, say 10 (fail after roughly 10 minutes).

Note: The actual value you select depends on your business needs and the criticality of replication. For extra measure, you can also add a notification so that you are alerted when the job fails.

This should give you enough time to detect the problem, resolve the issue, and get the Replication Agent up and running before the subscriber is marked as “expired”.
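A minimal sketch of making the change via T-SQL; the job name is a placeholder, and step 2 is assumed to be the “Run agent.” step shown above:

EXEC msdb.dbo.sp_update_jobstep
     @job_name       = N'<your distribution agent job>',
     @step_id        = 2,
     @retry_attempts = 10,   -- fail after roughly 10 minutes of retries
     @retry_interval = 1;    -- minutes between retries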

Misconception about Online Rebuild Index Operation till SQL 2012

Microsoft introduced the online index rebuild operation in SQL Server 2005, and it has been available in all SQL Server versions since. So the question arises: is the online rebuild really an online operation? The answer is YES to some extent, but when it comes to DDL operations issued while indexes are being rebuilt, things change; please follow the link below for more details.

Misconception about Online Rebuild Index Operation till SQL 2012
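For reference, this is the operation in question; the table and index names below are placeholders only:

ALTER INDEX IX_SampleIndex ON dbo.SampleTable
REBUILD WITH (ONLINE = ON);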

 

 

What is In-Memory OLTP in SQL Server 2014 (Code Name “Hekaton”)?

In-Memory OLTP and Hekaton are two terms that can be used interchangeably, but Microsoft officially announced the technology under the name “In-Memory OLTP”. “Hekaton” is the Greek word for “hundred,” and the name reflects the goal of speeding up database functions by up to 100x. It certainly increases application speed by around 10x, and by nearly 50x for new, optimized applications, but it depends on several parameters.

Hekaton works by providing in-memory storage for the most frequently used tables in SQL Server, but don't confuse it with an older technology you might have heard of, DBCC PINTABLE (database_id, table_id); it is entirely different.

DBCC PINTABLE does not cause the table to be read into memory. As the pages from the table are read into the buffer cache by normal Transact-SQL statements, they are marked as pinned pages. SQL Server does not flush pinned pages when it needs space to read in a new page. SQL Server still logs updates to the page and, if necessary, writes the updated page back to disk. SQL Server does, however, keep a copy of the page available in the buffer cache until the table is unpinned with the DBCC UNPINTABLE statement.

DBCC PINTABLE is best used to keep small, frequently referenced tables in memory. The pages for the small table are read into memory one time, then all future references to their data do not require a disk read, but still the internal structure will remain page based.

Hence, if the internal structure is page-based, there will definitely be locking, latching and logging; pinned tables also use the same index structures, which likewise require locking and latching.

Although DBCC PINTABLE is useful and provides a performance enhancement, it has some limitations. If a large table is pinned, it can start using a large portion of the buffer cache and not leave enough cache to service the other tables in the system adequately. If a table larger than the buffer cache is pinned, it can fill the entire buffer cache; in that case a sysadmin must shut down SQL Server, restart it, and then unpin the table. Pinning too many tables can cause the same problems as pinning a table larger than the buffer cache.

 

 

In SQL Server 2014, In-Memory OLTP is entirely different, or you could say it is an enhancement of the older DBCC PINTABLE idea. Tables in In-Memory OLTP are stored in an entirely different way from disk-based tables: they use entirely different data and index structures, because this feature does not store data in pages or use the same index mechanism. This removes the need for latching and can solve the latch contention problem that mostly happens with last-page inserts.

[Screenshots: last-page insert latch contention]

You can check this problem via DMVs and investigate the issue with the queries below.

Select * From sys.dm_os_waiting_tasks

SELECT wt.*, st.text
FROM sys.dm_os_waiting_tasks AS wt
LEFT JOIN sys.dm_exec_requests AS er
    ON er.session_id = wt.session_id
OUTER APPLY sys.dm_exec_sql_text(er.sql_handle) AS st   -- sys.dm_exec_sql_text used here in place of the older fn_get_sql
WHERE wt.resource_description IS NOT NULL


Now, since In-Memory OLTP works on the new structure described above, we get the advantages below.

  • Elimination of disk reads entirely, since the data always resides in memory
  • Hash indexing (more efficient than on-disk table B-tree indexes)
  • Elimination of the locking and latching mechanisms used in on-disk tables
  • Reduction in blocking via an improved multi-version concurrency model

In-Memory OLTP Architecture

[Diagrams: In-Memory OLTP architecture]

As you can see from the diagram above, the tables and indexes are stored in memory in a different way: there is no buffer pool as in the conventional architecture, and since there is no 8 KB page-based data structure, Microsoft uses stream-based storage in which files are only appended to store the durable data.

Hence the major difference is that memory-optimized tables do not require pages to be read from disk; all the data is stored in memory all the time. A set of checkpoint files, used only for recovery purposes, is created on a file system filegroup to keep track of the data changes, and the checkpoint files are append-only. Memory-optimized tables use the same transaction log that is used for operations on disk-based tables, and it is stored on disk, to ensure that in case of a system crash the rows of a memory-optimized table can be recreated from the checkpoint files and the transaction log.

You also have the option of a non-durable table: only the table schema is durable, not the data, so after a crash these tables are recreated without data when SQL Server starts.

Indexes in In-Memory OLTP are also different: they are not stored in a normal B-tree structure; instead, hash indexes are supported to search records. Every memory-optimized table must have at least one hash index, because it is the index that links all the rows of the table together. Indexes for memory-optimized tables are never stored on disk; they are stored in memory and recreated every time SQL Server restarts, as the data is inserted into memory.

Below you can see there are 3 disk-based tables T1, T2 and T3 with a filegroup on disk; but once we want some tables to be in memory, there will be new space in SQL Server memory, and in addition a new type of filegroup for stream-based storage to persist a copy of the data and ensure it can be recovered in case of a crash. Now consider a scenario where we want to move some disk-based tables to memory: we first have to drop them and recreate them with special syntax, and the situation will then look as shown below. As you can see, there is a separate filegroup called the memory-optimized filegroup, all DML operations are logged in the conventional log file, and the indexes and data exist in memory itself; note that there will be no copy of the indexes on disk.

[Diagrams: disk-based tables vs. memory-optimized tables and filegroups]
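A minimal sketch (all names, paths and bucket counts are illustrative) of adding a memory-optimized filegroup and creating a durable memory-optimized table with a hash index:

-- Illustrative database and path; adjust for your environment
ALTER DATABASE SalesDB
ADD FILEGROUP SalesDB_mod CONTAINS MEMORY_OPTIMIZED_DATA;
GO
ALTER DATABASE SalesDB
ADD FILE (NAME = 'SalesDB_mod_dir', FILENAME = 'C:\Data\SalesDB_mod_dir')
TO FILEGROUP SalesDB_mod;
GO
CREATE TABLE dbo.SalesOrder
(
    OrderID   INT       NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    OrderDate DATETIME2 NOT NULL,
    Amount    MONEY     NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);  -- SCHEMA_ONLY would make it non-durable
GO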

Natively Compiled Stored Procedures

In-Memory OLTP introduces the concept of native compilation. SQL Server can natively compile stored procedures that access memory-optimized tables. Native compilation allows faster data access and more efficient query execution than interpreted (traditional) Transact-SQL.

Native compilation refers to the process of converting programming constructs to native code, consisting of processor instructions that are executed directly by the CPU, without the need for further compilation or interpretation.

In-Memory OLTP compiles Transact-SQL constructs to native code for fast runtime data access and query execution. Native compilation of tables and stored procedures produces DLLs that are loaded into the SQL Server process.

In-Memory OLTP compiles both memory-optimized tables and natively compiled stored procedures to native DLLs when they are created. In addition, the DLLs are recompiled after a database or server restart. The information necessary to recreate the DLLs is stored in the database metadata.

A natively compiled stored procedure is a way to perform the same operation with fewer instructions, which matters because, as you can see, CPU clock rates have stalled at a certain point. For example, a task that requires 1,000 instructions to complete with the conventional architecture might be done with 300 instructions using a natively compiled stored procedure.

[Diagrams: CPU clock rate trend and natively compiled execution]
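A minimal sketch of a natively compiled stored procedure, assuming the illustrative dbo.SalesOrder table from the earlier example:

CREATE PROCEDURE dbo.usp_InsertSalesOrder
    @OrderID INT, @OrderDate DATETIME2, @Amount MONEY
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS
BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
    INSERT INTO dbo.SalesOrder (OrderID, OrderDate, Amount)
    VALUES (@OrderID, @OrderDate, @Amount);
END;
GO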

Hence you will get performance gains in many areas: the storage engine itself is lock- and latch-free, so there is no locking and latching, and if your application has a performance bottleneck because of latch contention, you can use In-Memory OLTP to remove that contention and see great results once you move the suffering tables into memory. Natively compiled stored procedures can give a 10-30x benefit by improving execution time, and memory-optimized tables also require less logging than disk-based tables because no index operations are logged. There can still be latency on the log, because a transaction commit does not complete until the log is hardened to disk, so with good storage this will not be an issue.


Although In-Memory OLTP tables provide a lot of performance gains, they have a lot of limitations too, and these must be considered before you decide to implement the feature in production:

  • TRUNCATE TABLE
  • MERGE
  • Dynamic and keyset cursors
  • Cross-database queries
  • Cross-database transactions
  • Linked servers
  • Locking hints
  • Isolation level hints (READUNCOMMITTED, READCOMMITTED, and READCOMMITTEDLOCK)
  • Memory-optimized table types and table variables are not supported in CTP1
  • Tables containing large binary columns such as text or xml, or with a row width exceeding 8 KB, cannot be configured as “in-memory”; there are also limits on SQL commands and transaction concurrency options
  • Default and check constraints are not supported in memory-optimized tables
  • User-defined data types within the table definition are not supported
  • The indexes expect a BIN2 collation for the table or database, such as Latin1_General_100_BIN2 (tested with a database using the SQL_Latin1_General_CP1_CI_AS collation, which resulted in an error)
  • FILE SIZE or AUTOGROWTH can't be set on the file stream data file (in the CREATE DATABASE syntax)
  • Non-BIN2 collations are not supported with indexes on memory-optimized tables
  • In-memory table creation fails in a normal database when the collation is not supported (“The feature ‘non-bin2 collation’ is not supported with indexes on memory optimized tables.”)
  • The above statement doesn't apply to a normal disk-based table residing in an in-memory database
  • Identity columns can't be defined in memory-optimized tables
  • The data types IMAGE, TEXT, NVARCHAR(MAX) and VARCHAR(MAX) are not supported in memory-optimized tables
  • Row size for a memory-optimized table is limited to 8,060 bytes
  • A memory-optimized table can't be altered to add a FOREIGN KEY constraint, and an inline FOREIGN KEY constraint can't be created either
  • CREATE TRIGGER is not supported on memory-optimized tables
  • CREATE STATISTICS on these tables wasn't allowed when tried (CREATE and UPDATE STATISTICS for memory-optimized tables require the WITH FULLSCAN or RESAMPLE and the NORECOMPUTE options; the WHERE clause is not supported)
  • Partitioning is not supported
  • LOB data types are not supported

How to extract SQL Server Instance Details

-- Read the startup parameters (-d, -l, -e) from the registry to find the
-- master database path, master log path and error log path.
declare @RegPathParams sysname
declare @Arg sysname
declare @Param sysname
declare @MasterPath nvarchar(512)
declare @LogPath nvarchar(512)
declare @ErrorLogPath nvarchar(512)
declare @n int

select @n = 0
select @RegPathParams = N'Software\Microsoft\MSSQLServer\MSSQLServer' + '\Parameters'
select @Param = 'dummy'

while (not @Param is null)
begin
    select @Param = null
    select @Arg = 'SqlArg' + convert(nvarchar, @n)

    exec master.dbo.xp_instance_regread N'HKEY_LOCAL_MACHINE', @RegPathParams, @Arg, @Param OUTPUT

    if (@Param like '-d%')
    begin
        select @Param = substring(@Param, 3, 255)
        select @MasterPath = substring(@Param, 1, len(@Param) - charindex('\', reverse(@Param)))
    end
    else if (@Param like '-l%')
    begin
        select @Param = substring(@Param, 3, 255)
        select @LogPath = substring(@Param, 1, len(@Param) - charindex('\', reverse(@Param)))
    end
    else if (@Param like '-e%')
    begin
        select @Param = substring(@Param, 3, 255)
        select @ErrorLogPath = substring(@Param, 1, len(@Param) - charindex('\', reverse(@Param)))
    end

    select @n = @n + 1
end

-- Instance root directory from the setup registry key
declare @SmoRoot nvarchar(512)
exec master.dbo.xp_instance_regread N'HKEY_LOCAL_MACHINE', N'SOFTWARE\Microsoft\MSSQLServer\Setup', N'SQLPath', @SmoRoot OUTPUT

-- Instance-level details
SELECT
    @ErrorLogPath AS [ErrorLogPath],
    @SmoRoot AS [RootDirectory],
    CAST(case when 'a' <> 'A' then 1 else 0 end AS bit) AS [IsCaseSensitive],
    @@MAX_PRECISION AS [MaxPrecision],
    CAST(FULLTEXTSERVICEPROPERTY('IsFullTextInstalled') AS bit) AS [IsFullTextInstalled],
    @LogPath AS [MasterDBLogPath],
    @MasterPath AS [MasterDBPath],
    SERVERPROPERTY(N'ProductVersion') AS [VersionString],
    CAST(SERVERPROPERTY(N'Edition') AS sysname) AS [Edition],
    CAST(SERVERPROPERTY(N'ProductLevel') AS sysname) AS [ProductLevel],
    CAST(SERVERPROPERTY('IsSingleUser') AS bit) AS [IsSingleUser],
    CAST(SERVERPROPERTY('EngineEdition') AS int) AS [EngineEdition],
    convert(sysname, serverproperty(N'Collation')) AS [Collation],
    CAST(SERVERPROPERTY(N'MachineName') AS sysname) AS [NetName],
    CAST(SERVERPROPERTY('IsClustered') AS bit) AS [IsClustered],
    SERVERPROPERTY(N'ResourceVersion') AS [ResourceVersionString],
    SERVERPROPERTY(N'ResourceLastUpdateDateTime') AS [ResourceLastUpdateDateTime],
    SERVERPROPERTY(N'CollationID') AS [CollationID],
    SERVERPROPERTY(N'ComparisonStyle') AS [ComparisonStyle],
    SERVERPROPERTY(N'SqlCharSet') AS [SqlCharSet],
    SERVERPROPERTY(N'SqlCharSetName') AS [SqlCharSetName],
    SERVERPROPERTY(N'SqlSortOrder') AS [SqlSortOrder],
    SERVERPROPERTY(N'SqlSortOrderName') AS [SqlSortOrderName],
    SERVERPROPERTY(N'BuildClrVersion') AS [BuildClrVersionString],
    SERVERPROPERTY(N'ComputerNamePhysicalNetBIOS') AS [ComputerNamePhysicalNetBIOS],
    'LastRestart' = (select crdate from sys.sysdatabases where name = 'tempdb')