Percy Reyes's Technical Blog

Just one hour ago, one of my tips was published, related to “Error 601: Could not continue scan with NOLOCK due to SQL Server data movement”. You can read it at

If you like this tip then you can follow me @percyreyes on twitter.

My latest tip has been published today, about “SQL Server Replication Error - The specified LSN for repldone log scan occurs before the current start of replication in the log”, and you can read it at

If you like this post then follow @percyreyes on twitter.

Today my tip about how to fix the “SQL Server Transactional Replication Error: Could not find stored procedure” has been published online; you can read it at

If you like this post then follow @percyreyes on twitter.

We know that SQL Server 2000 is now out of support from Microsoft, so many companies are planning to upgrade to a later version of SQL Server, such as SQL Server 2012 or SQL Server 2014. This post gives a recommendation that you should put into practice in order to avoid metadata inconsistency errors in the objects of the upgraded databases.

The inconsistency errors I am talking about can be found when you run the DBCC CHECKDB command to check the logical and physical integrity:
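For instance, the check can be run in the context of the upgraded database (the database name here is illustrative):

```sql
USE MyDatabase;
GO
-- Check the logical and physical integrity of all objects in the database
DBCC CHECKDB WITH NO_INFOMSGS;
```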


and here the following inconsistency error is likely to appear:

The In-row data RSVD page count for object ‘SchemaName.ObjectName’, index ID 0, partition ID 75863107960832, alloc unit ID 75863107960832 (type In-row data) is incorrect.
CHECKDB found 0 allocation errors and 1 consistency errors in table 'SchemaName.ObjectName' (object ID 1157579162).
CHECKDB found 0 allocation errors and 1 consistency errors in database 'MyDatabase'.

The output above tells us that there is one consistency error in the database "MyDatabase", and it is a logical inconsistency. The error indicates that the page and row counts for that table are inaccurate, and this type of error can be fixed by running the DBCC UPDATEUSAGE command, which reports and corrects these inaccuracies. After you execute the DBCC UPDATEUSAGE(0) command in the context of your database, you will see the following message indicating that the inaccuracies were fixed:
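The fix can be sketched as follows (the parameter 0 refers to the current database):

```sql
USE MyDatabase;
GO
-- Report and correct page and row count inaccuracies in the current database
DBCC UPDATEUSAGE(0);
```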

DBCC UPDATEUSAGE: Usage counts updated for table 'SchemaName.ObjectName' (index 'ObjectName' , partition 1):
        RSVD pages (In-row Data): changed from (-275) to (145) pages.
DBCC execution completed. If DBCC printed error messages, contact your system administrator.

At this point the consistency error has been fixed, so we can run DBCC CHECKDB again and it will finish without errors. The recommendation of this tip is to always execute DBCC UPDATEUSAGE after you upgrade a database from SQL Server 2000 to a later version.

If you like this post then follow @percyreyes on twitter.

Sometimes, when we have an AlwaysOn solution implemented and, for whatever reason, the WSFC cluster service goes down and is corrupted, we will need to reinstall everything, including a new cluster and new Availability Groups. But how do we uninstall or delete the Availability Groups correctly if the WSFC service is not running? We have no option but to delete the Availability Groups forcibly, leaving a dirty and incomplete uninstallation.
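The forced removal can be done with DROP AVAILABILITY GROUP, which removes the local metadata even when the cluster is unavailable (the group name below is illustrative):

```sql
-- Force-delete the Availability Group metadata on the local instance;
-- the former replica databases are left in a non-synchronizing state
DROP AVAILABILITY GROUP [MyAG];
```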

After deleting the Availability Groups, we will need to remove the nodes from the WSFC cluster, which can be done by uninstalling the WSFC cluster service.

Ok, now, after you have installed the failover cluster again and joined the nodes to the cluster, we have to create a new AlwaysOn Availability Group,


and when trying this we can get this error:


This error is due to the incorrect uninstallation that we did above. To fix it, we will have to disable AlwaysOn Availability Groups at the database engine service level


and then restart it.


Now we have to enable it again and then restart the database engine service.
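These disable/enable steps can also be scripted with the SQLPS cmdlets; this is a sketch, and the instance path is an assumption you would adapt:

```powershell
# Disable AlwaysOn Availability Groups on the instance (restarts the service)
Disable-SqlAlwaysOn -Path "SQLSERVER:\SQL\MyServer\DEFAULT" -Force

# ...then enable it again (restarts the database engine service once more)
Enable-SqlAlwaysOn -Path "SQLSERVER:\SQL\MyServer\DEFAULT" -Force
```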


After you have done these steps, you will be able to create the Availability Groups you want again. It is great!


If you like this post then follow @percyreyes on twitter.

Posted by Percy Reyes | no comments

The following issue can be raised when Transactional Replication and Change Data Capture (CDC) are deployed and running together, and something was done incorrectly when managing the CDC jobs. We know that when CDC is deployed, two jobs are created for the CDC process: cdc.MyDB_capture and cdc.MyDB_cleanup.

The cdc.MyDB_capture job executes the sys.sp_MScdc_capture_job stored procedure, which internally invokes sp_cdc_scan to read the transaction log and capture the changes made in the database. So this job is the agent of the CDC process that reads the transaction log, and it uses the Log Reader Agent (from transactional replication) to make that possible. This means that if you also have transactional replication running for your database, you will have another agent working on reading the transaction log: the Log Reader Agent. Now, the Log Reader Agent and the cdc.MyDB_capture job cannot both be running, otherwise this error will be raised:

The capture job cannot be used by Change Data Capture to extract changes from the log when transactional replication is also enabled on the same database. When Change Data Capture and transactional replication are both enabled on a database, use the logreader agent to extract the log changes.

This error occurs because it is not possible for two Log Reader Agent instances to be running and reading the same database together on the same SQL Server instance.

When transactional replication is configured, the cdc.MyDB_capture job is dropped automatically, and if you uninstall replication, the cdc.MyDB_capture job is created again. So if you already had CDC installed and then configured transactional replication, the cdc.MyDB_capture job should have been dropped: the Log Reader Agent (created as part of replication) has the higher priority. Therefore, if you have transactional replication running for your database and the cdc.MyDB_capture job is still enabled and running, you have to disable or drop this job manually, because it will keep failing and raising the error above.
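If the capture job is still present, it can be dropped manually; a minimal sketch, run in the context of the CDC-enabled database:

```sql
USE MyDB;
GO
-- Drop the CDC capture job; the replication Log Reader Agent
-- will scan the transaction log for CDC instead
EXEC sys.sp_cdc_drop_job @job_type = N'capture';
```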

If you like this post then follow @percyreyes on twitter.


Today I enjoyed talking about a new feature in SQL Server 2014: Backup Encryption. I gave this webcast session for the SQLPASS Peru chapter (follow on twitter @sqlpassperu and the fan page). You can download the presentation from here.

Please let me know if you have any comments or questions or concerns.

If you like this post then follow @percyreyes on twitter.

Sometimes you may have jobs that execute PowerShell scripts, and one day they can unexpectedly begin to fail without any apparent reason, giving this error:

AuthorizationManager check failed
At line:1 char:2
+ & <<<<  'S:\myfolder\script.ps1'
    + CategoryInfo          : NotSpecified: (:) [], PSSecurityException
    + FullyQualifiedErrorId : RuntimeException.
Process Exit Code 1. The step failed.

The first thing to check when troubleshooting this is that the PowerShell ExecutionPolicy is not set to “Restricted”, by executing:
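The current policy can be checked with:

```powershell
# Shows the effective execution policy (e.g. Restricted, RemoteSigned)
Get-ExecutionPolicy
```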


If it is, then set it to “RemoteSigned” or “Unrestricted” depending on your security policy.

Set-ExecutionPolicy RemoteSigned

But if the error continues, then you have to check and ensure that the Windows Management Instrumentation (WMI) service is enabled and running. If it is and the error still continues, then you have to restart the WMI service. After this, your job and script will run again without error.

If you like this post then follow @percyreyes on twitter.


If you have a database with Change Data Capture (CDC) enabled, you need to take this recommendation into account during the restore process. There is an option to be used as part of the RESTORE syntax: KEEP_CDC, which allows you to restore the CDC metadata as well when you restore the database to another SQL Server instance, or to the same instance with a different database name.
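A minimal sketch of such a restore, with illustrative file paths and logical file names:

```sql
-- Restore under a different database name, keeping the CDC metadata
-- thanks to the KEEP_CDC option
RESTORE DATABASE TESTDB2
FROM DISK = N'D:\Backup\TESTDB.bak'
WITH MOVE N'TESTDB_Data' TO N'D:\Data\TESTDB2.mdf',
     MOVE N'TESTDB_Log'  TO N'D:\Data\TESTDB2.ldf',
     KEEP_CDC;
```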

Now you can verify that the CDC metadata was restored as well, that is, your database and tables keep CDC enabled, along with all the data captured by CDC:
-- Checking if CDC is enabled for database TESTDB2.
select is_cdc_enabled,name from sys.databases where name='TESTDB2'
-- Checking if CDC is enabled for table MyTable.
select is_replicated, is_tracked_by_cdc, * from sys.tables
select * from [cdc].[change_tables]  
-- checking the data tracked for table MyTable.
select * from cdc.dbo_MyTable_CT 
This is not all you have to do; after this you will need to create the CDC jobs by executing:
EXEC sys.sp_cdc_add_job 'capture'
EXEC sys.sp_cdc_add_job 'cleanup' 
Now you can also verify that the jobs were created for CDC:
EXEC [sys].[sp_cdc_help_jobs] 
With all of this, the restore process of a database with CDC has been completed successfully.

If you like this post then follow @percyreyes on twitter.

As DBAs, we know it is sometimes necessary to modify the data and log properties of the tempdb database. By default SQL Server creates only one data file and one log file, but for the vast majority of situations we will need to create more files.

There are many recommendations about how to create them, and also about how many files to create in OLTP environments. We will not discuss the details here; for now, I recommend splitting your tempdb database into 4 data files (for 4, 8, or 16 cores) or 8 data files (for 16, 32, 64, or more cores) and only one log file, and trying to locate them on different RAID1, RAID5, or RAID10 drives. Don't forget that the number of data files doesn't just depend on the number of cores; it also depends on concurrency, the workloads on your server, and the performance of your queries.

Now I will show you a small script to split your default tempdb database into 8 data files while keeping the single log file.

USE master
ALTER DATABASE [tempdb] MODIFY FILE (NAME=N'templog', NEWNAME= N'tempdev_Log')
ALTER DATABASE [tempdb] MODIFY FILE (NAME=N'tempdev', NEWNAME=N'tempdev_Data01')
ALTER DATABASE [tempdb] MODIFY FILE (NAME=N'tempdev_Log',  FILENAME= N'D:\SQLTempDB\tempdev_Log.ldf', SIZE = 2048MB , FILEGROWTH = 2048MB)
ALTER DATABASE [tempdb] MODIFY FILE (NAME=N'tempdev_Data01',  FILENAME= N'D:\SQLTempDB\tempdev_Data01.mdf', SIZE = 512MB , FILEGROWTH = 1024MB )
ALTER DATABASE [tempdb] ADD FILE ( NAME = N'tempdev_Data02', FILENAME = N'D:\SQLTempDB\tempdev_Data02.ndf' , SIZE = 512MB , FILEGROWTH = 1024MB )
ALTER DATABASE [tempdb] ADD FILE ( NAME = N'tempdev_Data03', FILENAME = N'D:\SQLTempDB\tempdev_Data03.ndf' , SIZE = 512MB , FILEGROWTH = 1024MB )
ALTER DATABASE [tempdb] ADD FILE ( NAME = N'tempdev_Data04', FILENAME = N'D:\SQLTempDB\tempdev_Data04.ndf' , SIZE = 512MB , FILEGROWTH = 1024MB )
ALTER DATABASE [tempdb] ADD FILE ( NAME = N'tempdev_Data05', FILENAME = N'D:\SQLTempDB\tempdev_Data05.ndf' , SIZE = 512MB , FILEGROWTH = 1024MB )
ALTER DATABASE [tempdb] ADD FILE ( NAME = N'tempdev_Data06', FILENAME = N'D:\SQLTempDB\tempdev_Data06.ndf' , SIZE = 512MB , FILEGROWTH = 1024MB )
ALTER DATABASE [tempdb] ADD FILE ( NAME = N'tempdev_Data07', FILENAME = N'D:\SQLTempDB\tempdev_Data07.ndf' , SIZE = 512MB , FILEGROWTH = 1024MB )
ALTER DATABASE [tempdb] ADD FILE ( NAME = N'tempdev_Data08', FILENAME = N'D:\SQLTempDB\tempdev_Data08.ndf' , SIZE = 512MB , FILEGROWTH = 1024MB )

After the execution you need to restart the database engine so that the files are recreated in their new locations and all changes take effect. Let me know if you have any questions or comments about tempdb.

If you like this post then follow @percyreyes on twitter.


Tempdb is the shared database per instance in SQL Server, and it is used to store and manage temporary objects. Tempdb had a number of changes in SQL Server 2005: there are new tempdb usages and internal optimization enhancements. However, the tempdb architecture is mostly unchanged since SQL Server 2000.
In a user database, transactions have the ACID attributes: atomicity, consistency, isolation, and durability. In tempdb, transactions lose the durability attribute. Tempdb does not persist after SQL Server shuts down.
Most of these internal operations on tempdb do not generate log records because there is no need to roll back. So, these operations are faster.
Some of the database options cannot be modified for tempdb. Auto shrink is not allowed for tempdb. Database shrink and file shrink capabilities are limited.

The following restrictions also apply to tempdb:
    The database CHECKSUM option cannot be enabled.
    A database snapshot cannot be created on tempdb.
    DBCC CHECKALLOC and DBCC CHECKCATALOG are not supported.
    Only offline checking for DBCC CHECKTABLE is performed.

Tempdb Space Usage: The following types of objects can occupy tempdb space: Internal objects, Version stores and User objects.

Internal Objects: Internal object metadata is stored in memory and doesn't appear in catalog views such as sys.all_objects. Internal objects are used:

  • To store intermediate runs for sort.
  • To store intermediate results for hash joins and hash aggregates.
  • To store XML variables or other large object (LOB) data type variables. The LOB data type includes all of the large object types: text, image, ntext, varchar(max), varbinary(max), and all others.
  • By queries that need a spool to store intermediate results.
  • By keyset cursors to store the keys.
  • By static cursors to store a query result.
  • By Service Broker to store messages in transit.
  • By INSTEAD OF triggers to store data for internal processing.
  • DBCC CHECK internally uses a query that may need to spool intermediate results.
  • Query notification and event notification use Service Broker, so they need space in tempdb as well.

    Page allocations on internal objects and Updates to internal objects do not generate log records.
    Each internal object occupies at least nine pages (one IAM page and eight data pages) in tempdb.

Version Stores: These don't appear in catalog views such as sys.all_objects. They are used as follows:

  • Version stores are used to store row versions generated by transactions for features such as snapshot isolation, triggers, MARS (multiple active result sets), and online index build.
  • The online index build version store is for row versions from tables that have online index build operations on them.
  • The common version store is for row versions from all other tables in all databases.
  • The version store consists of append-only store units which are highly optimized for sequential inserts and random look up. Inserts into the version store do not generate log records.
  • Each unit can store many row versions. If there are versions to be stored, a new store unit is created about every minute.

    For snapshot isolation and read committed snapshot isolation, the versions are generated by DML operations in the database when the respective database options are enabled.
    For AFTER triggers, versions are generated for all the update operations by the transaction that fired the trigger during the INSERT, DELETE, or UPDATE statement in any database, independent of database options.
    INSTEAD OF triggers do not generate versions.
    For MARS, versions are generated by the UPDATE or DELETE statement when there is a pending SELECT on the same transaction and the same connection.
    Building an online index generates versions of the rows while the online index is being built.

User Objects: Appear in catalog views such as sys.all_objects. They are used:

  • sp_spaceused can show the size occupied by these objects.
  • User objects include both user-defined tables and indexes, and system catalog tables and indexes.
  • Operations on user objects in tempdb are mostly logged. Bulk copy program (BCP), bulk insert, SELECT INTO, and index rebuild operations are bulk logged.
  • User-defined tables include the global temporary tables such as ##t, and local temporary tables such as #t.
  • Local temporary tables also include table variables such as @t and the mapping index for online clustered index build with the SORT_IN_TEMPDB option.

Performance Enhancements in SQL Server 2005 or later:

  1. The tempdb logging optimization avoids logging the “after value” in certain log records in tempdb.
  2. Instant data file initialization works by not zeroing out the NTFS file when the file is created or when the size of the file is increased.
  3. There is less use of the UP type page latch when allocating pages and extents in tempdb.
  4. Proportional fill has been optimized to reduce UP latch contention.
  5. There is now deferred drop in tempdb.
  6. Worktable caching is improved.
  7. SQL Server 2005 and later cache the temporary table that is created by using a CREATE TABLE or SELECT INTO statement.

Recommendations for managing tempdb:

  • The tempdb files must be configured with an initial size and auto-growth based on your workloads. Do not leave them at the default sizes.
  • The tempdb files should be located on RAID0 (for better performance), RAID1 (if you have more writes than reads), or RAID5 (if you have more reads than writes). RAID10 is the best option, but not all companies can justify it.
  • The tempdb files should be located on separate disks to avoid contention issues and improve performance.
  • Tempdb should be created with one data file per physical processor core if there are eight cores or fewer. If there are more than eight cores, you can start with eight data files and then increase the number of files by four whenever there is PAGELATCH contention on one of the allocation bitmap pages (PFS, GAM, and SGAM pages), repeating until the issue is solved.
  • Do not forget: the tempdb data files or log file should not grow much in a short time. If this happens, something is wrong, and you need to analyze workload performance and find the queries you have to optimize.
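To see which of the object types described above is consuming tempdb space, the sys.dm_db_file_space_usage DMV can be queried; a sketch:

```sql
-- Breakdown of tempdb space (in MB) by internal objects,
-- version store, user objects, and free space
SELECT
    SUM(internal_object_reserved_page_count) * 8 / 1024.0 AS internal_objects_mb,
    SUM(version_store_reserved_page_count)   * 8 / 1024.0 AS version_store_mb,
    SUM(user_object_reserved_page_count)     * 8 / 1024.0 AS user_objects_mb,
    SUM(unallocated_extent_page_count)       * 8 / 1024.0 AS free_space_mb
FROM tempdb.sys.dm_db_file_space_usage;
```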

If you have any questions or comments about tempdb database please let me know.

If you like this post then follow @percyreyes on twitter.

Many times, as SQL DBAs, we have to deal with errors in SQL Server replication. The vast majority of errors are reported by the replication agents: the Distribution Agent, Log Reader Agent, Snapshot Agent, Queue Reader Agent, and so on. Now, what happens when, for example, one of the agents reports several errors, and the majority are known, duplicated, or related to FK/PK conflicts? Solving every error one by one, or reconfiguring replication, takes a lot of time, and we know time is money: the longer it takes, the more the operation of the business is impacted. So you need an option that allows the operation to continue while you work on fixing the errors as soon as possible. This option is the SkipErrors parameter.

Today in this post I will explain how to use the SkipErrors parameter, which allows you to skip specific errors so that data synchronization is not stopped and the operation of the business is not affected. The SkipErrors parameter is configurable in the profile of the Distribution Agent. It is most often used to get past errors related to primary key or foreign key conflicts raised within the Distribution Agent process at the moment the transactions are being replicated to the subscribers. However, you can use it to skip whatever error you decide.

The following picture shows an error (error code 547) about an FK conflict in the Distribution Agent process, and you will see how the transactions are queued up behind this error, so you will need to fix it to allow the rest of the transactions to be replicated. The Distribution Agent reads the msrepl_commands table sequentially to get the commands to execute on the subscribers; this means first in the queue, first out to be replicated to the subscribers.


Another common error where you can use SkipErrors is “The row was not found at the Subscriber when applying the replicated command”. The error code in this case is 20598.


The SkipErrors parameter takes as input the numbers of the errors that you want to skip, and it is configured in the Distribution Agent profile. You need to create a customized profile based on the default profile; the only change you need to make is to indicate the error numbers separated by a colon, as you can see in the following picture: “20598:547”.
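For reference, the same setting appears as the -SkipErrors switch on the Distribution Agent command line; this sketch is illustrative, with placeholder server and database names:

```
distrib.exe -Publisher [MyPublisher] -PublisherDB [MyPubDB]
            -Subscriber [MySubscriber] -SubscriberDB [MySubDB]
            -Distributor [MyDistributor]
            -SkipErrors "20598:547"
```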


After you have made this change, you need to restart the Distribution Agent. When the Distribution Agent starts up, it will load the new customized profile with the error codes to be skipped or ignored. You will see something like the following picture, with many errors being skipped.


Now, as you can see, many errors were skipped and the Distribution Agent is running without problems. This also means that the transactions with those errors were lost: you cannot recover them, because they were ignored, and this may affect your data consistency. Do not forget that you must use the SkipErrors parameter with caution, working as a team with your analysts or application administrators in order to make the best decision for your business.

If you have any questions or comments please let me know.

If you like this post then follow @percyreyes on twitter.


You can see an error like the one shown below when you try to connect to Object Explorer using an older version of SSMS (SQL Server Management Studio), for example, when you try to connect to SQL Server 2012 Object Explorer using SQL Server 2005 SSMS.


The error is raised because the newer version supports more features, and therefore there are more folders to display; SQL Server 2005 SSMS cannot display these new folders because the maximum array index is outside the bounds it supports. However, this error only occurs when you connect to Object Explorer by default; it does not happen if you connect to a query window by default. In these situations, the general recommendation is to always use the latest SSMS to connect to older versions. I hope this comment is useful for you!

If you like this post then follow @percyreyes on twitter.


Many Microsoft SQL Server eBooks are still available for free download. They are quick references and very useful resources to start your adventure in the database world. I hope you enjoy them, and do not forget to share them with your contacts.

5 Tips for a Smooth SSIS Upgrade to SQL Server 2012
A Hitchhiker's Guide to Microsoft StreamInsight Queries
Books Online: Backup and Restore of SQL Server Databases
Books Online: Data Analysis Expressions (DAX) Reference
Books Online: Data Mining Extensions (DMX) Reference
Books Online: Data Quality Services
Books Online: High Availability Solutions
Books Online: Master Data Services
Books Online: Monitor and Tune for Performance
Books Online: Multidimensional Expressions (MDX) Reference
Books Online: SQL Server Distributed Replay
Books Online: Transact-SQL Data Definition Language (DDL) Reference
Books Online: Transact-SQL Data Manipulation Language (DML) Reference
Books Online: XQuery Language Reference
Data Access for Highly-Scalable Solutions: Using SQL, NoSQL, and Polyglot Persistence
Extracting and Loading SharePoint Data in SQL Server Integration Services
Integration Services: Extending Packages with Scripting
Introducing Microsoft SQL Server 2008 R2
Introducing Microsoft SQL Server 2012
Master Data Services Capacity Guidelines
Master Data Services (MDS) Operations Guide
Microsoft SQL Server AlwaysOn Solutions Guide for High Availability and Disaster Recovery
Microsoft SQL Server Analysis Services Multidimensional Performance and Operations Guide
Multidimensional Model Programming
Optimized Bulk Loading of Data into Oracle
Planning Disaster Recovery for Microsoft SQL Server Reporting Services in SharePoint Integrated Mode
QuickStart: Learn DAX Basics in 30 Minutes
SQLCAT's Guide to BI and Analytics
SQLCAT's Guide to High Availability Disaster Recovery
SQLCAT's Guide to Relational Engine
SQL Server 2012 Tutorials: Analysis Services - Data Mining
SQL Server 2012 Tutorials: Analysis Services - Multidimensional Modeling
SQL Server 2012 Tutorials: Analysis Services - Tabular Modeling
SQL Server 2012 Tutorials: Reporting Services
SQL Server 2012 Tutorials: Writing Transact-SQL Statements
SQL Server 2012 Upgrade Technical Guide
SQL Server Community FAQs Manual

If you like this post then follow @percyreyes on twitter.


As the title of this post says: what can we do when our SQL Server Enterprise Evaluation period has expired? Or maybe the more important question would be: what does this type of message really mean? Will the database engine service not run anymore?


Don't worry, and be happy with this type of message if you are using Enterprise Evaluation Edition: your installed services will continue running without problems. In practice, this message only means that you cannot connect to your local database service using the local SQL Server Management Studio from your Enterprise Evaluation installation.

What you have to do to solve it is uninstall only the Management Studio Enterprise Evaluation Edition and then install a non-Evaluation edition of Management Studio, so that you can connect to your Enterprise Evaluation Edition database engine. You can continue using the Evaluation Edition!

If you like this post then follow @percyreyes on twitter.

When you have transactional replication environments with read-only subscribers (meaning changes are not propagated back to the publisher), it is very important to be clear that rows on subscribers must not be deleted directly. If a row on a subscriber that was replicated from the publisher is deleted, and then that same row is modified on the publisher, the following error can be raised:

The row was not found at the Subscriber when applying the replicated command

This issue occurs because the row to be updated no longer exists on the subscriber, and the Distribution Agent is trying, unsuccessfully, to propagate the change to it. We need to fix this as soon as possible, so that the replication queue does not grow too large and all subsequent queued commands are applied to the subscriber. To solve this case, you must review the commands stored in the distribution database by using sp_browsereplcmds in order to identify the transaction with this error, determine the affected row(s), and then either insert the missing row manually on the subscriber or delete the command from the queue. The latter can be appropriate if someone deleted the row by mistake or you don't need it anymore.
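The pending commands can be inspected in the distribution database, narrowing to the failing transaction by its sequence number where known (the sequence numbers below are placeholders):

```sql
USE distribution;
GO
-- List replicated commands in the queue, filtered to one transaction
EXEC sp_browsereplcmds
    @xact_seqno_start = '0x0000001900001926000400000000',
    @xact_seqno_end   = '0x0000001900001926000400000000';
```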

Another option is to use the SkipErrors parameter, which allows you to skip errors of a certain type, in this case error 20598; this means the transaction with the error is not committed, but subsequent transactions are. A last option could be the sp_setsubscriptionxactseqno stored procedure, which allows you to skip one or more transactions that cause errors. Please don't forget that errors should be skipped with caution and with a correct understanding of the error condition.

If you have any questions or comments please let me know.

If you like this post then follow @percyreyes on twitter.


When we have .NET applications working on SQL Server, one of these timeout errors can be raised. Developers will report it to you as an error within SQL Server and will ask you for a solution, because they think it is a timeout misconfiguration in SQL Server. Based on my experience troubleshooting these issues, I can tell you it may not be related to SQL Server, as most people first think. To verify it, you can read your SQL Server error log: you won't find any error about timeout events. What does this mean? The application never tried to connect to SQL Server, because it was still working at the application level. The timeout error is related to the application, at the .Net SqlClient Data Provider level, and to the CommandTimeout property value you have configured in your application connection. The CommandTimeout property specifies the number of seconds that the provider should wait for result sets before it times out. As you know, the default is 30 seconds, and in many cases this may not be enough due to application problems.

Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding.

Timeout expired. The timeout period elapsed prior to obtaining a connection from the pool. This may have occurred because all pooled connections were in use and max pool size was reached.

And what do we have to do to solve these timeout errors? First, check the code in your applications; you may find an infinite loop or slow code that spends so much time that the timeout is reached before your SQL query is even sent to the database engine, and then the timeout error is raised. To resolve this, your developers have to fix the .NET application code; I'm sorry, there is no other option! By the way, the following error could also be related to a timeout:

Unable to connect to SQL Server session database.

Most SQL queries tend to be slower when they go through the .Net SqlClient Data Provider to connect to SQL Server, and this is another reason why 30 seconds may not be enough. In other cases, one way to solve it is to increase the CommandTimeout property in the application connection, maybe to 60 seconds; if that is not enough, change it to 120 or 180, or maybe even use 0 (unlimited), but this must be a workaround while the problem is being traced and fixed. Have you ever faced this kind of issue? Let me know your comments or questions.

If you like this post then follow @percyreyes on twitter.


As DBAs, when we develop scripts that are very useful for our job, we can sometimes find bugs in the information collected by the dynamic management views. In my case, on SQL Server 2008 R2 SP2 many months ago, while analyzing the total elapsed time of a query in order to report it as the current total time, I detected an error at the 17th second of the execution of a particular query with session_id equal to 63, as you can see below in the pic.

In the following pic you can see two columns without names. Please pay attention to the second query: the second column is the total elapsed time according to sys.dm_exec_requests, and it should be 17 and not 938 seconds, because the previous value was 16 (follow the sequence of each result in the execution of the queries). The value of the third column is calculated by subtracting the start time (from sys.dm_exec_requests) from GETDATE(), so this time will always be accurate and real; it is used for comparison with the second column, and in the correct case both must be the same, but that is not what happens here.
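A comparison along those lines can be reproduced with a query such as the following (columns left unnamed, as in the pic; session_id 63 is the session from the example):

```sql
-- Second column: elapsed seconds as reported by the DMV (ms / 1000)
-- Third column: elapsed seconds recalculated from start_time, always accurate
SELECT session_id,
       total_elapsed_time / 1000,
       DATEDIFF(SECOND, start_time, GETDATE())
FROM sys.dm_exec_requests
WHERE session_id = 63;
```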


As stated earlier, for me the total_elapsed_time column returns inconsistent information about the elapsed execution time of a process when it exceeds 16 seconds. As a recommendation, you should be careful when working with DMVs. Tell me, have you identified any errors in DMVs similar to mine?

If you like this post then follow @percyreyes on twitter.

There are many situations where you may need to turn off an ANSI database option in order to solve an error like this:

Msg 8152, Level 16, State 14, Line 5
String or binary data would be truncated.

I know most people choose to set ANSI_WARNINGS to OFF as the final solution; however, this is not recommendable, because it can cause recompilation of each stored procedure where it was turned off. You should avoid having your stored procedures recompiled wherever possible, in order not to impact performance negatively. There are other options with the same effect.
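To make the trade-off concrete, here is a small sketch of what the setting changes (the table is illustrative):

```sql
CREATE TABLE #t (c varchar(5));

SET ANSI_WARNINGS OFF;
INSERT INTO #t VALUES ('1234567890');  -- succeeds, value silently truncated to '12345'

SET ANSI_WARNINGS ON;
INSERT INTO #t VALUES ('1234567890');  -- fails with error 8152

DROP TABLE #t;
```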


I recommend not changing the default configuration unless it is really necessary and the benefits are significant.

By the way, recompilation may sometimes be necessary to optimize our stored procedures or ad hoc queries, but this should be done explicitly by using the WITH RECOMPILE option, the RECOMPILE query hint, the sp_recompile system stored procedure, or another optimization strategy, and not because of a misconfiguration.

Please let me know your questions/comments if you have any.

If you like this post then follow @percyreyes on twitter.

Sometimes you will need to use extended procedures in SQL Server to get information about domain accounts, and you may see this error raised at execution time:

Error: 15404, State: 19. Could not obtain information about Windows NT group/user 'MyDomain\MyAccount', error code 0x5.
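For example, an extended procedure that can raise this error is xp_logininfo:

```sql
-- Queries Active Directory for information about the Windows account;
-- raises error 15404 when SQL Server cannot query the domain controller
EXEC xp_logininfo 'MyDomain\MyAccount';
```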

In order to solve this error, your network administrator has to configure the account you use to log in to SQL Server correctly in Active Directory, by following these steps:

Enable the 'Allowed to authenticate' security setting for the service account MyDomain\MyAccount on the domain controllers' computer objects in the domain MyDomain:

  1. Logon to the Domain Controller of domain MyDomain
  2. Open Active Directory Users and Computers (dsa.msc)
  3. Enable the 'Advanced Features' under the menu 'View'
  4. Navigate to the domain controllers computer object and open the property window
  5. Click on the security tab
  6. Add the SQL Service account "MyDomain\MyAccount" and enable the setting 'Allowed to authenticate'
  7. Click OK to close the window
  8. Repeat steps 4-7 on each Domain Controller computer object

With all of this, everything will work perfectly. Let me know if you have any questions/comments.

If you like this post then follow @percyreyes on twitter.
