Percy Reyes's Technical Blog

Sometimes, when we have an AlwaysOn solution implemented and, for whatever reason, the WSFC (Windows Server Failover Clustering) service goes down and becomes corrupted, we need to reinstall everything, including a new cluster and new Availability Groups. But how do we uninstall or delete the Availability Groups correctly if the WSFC service is not running? We have no option other than to drop the Availability Groups forcibly, which leaves a dirty and incomplete uninstallation.
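Because the cluster is down, the forced cleanup has to be done locally on each replica. A minimal sketch (the Availability Group name [MyAG] is a placeholder):

```sql
-- Run on each replica while the WSFC service is unavailable.
-- This removes only the local AG metadata; it cannot coordinate
-- with the other replicas because the cluster is down.
DROP AVAILABILITY GROUP [MyAG];
```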

After dropping the Availability Groups, we need to remove the nodes from the WSFC cluster, which can be done by uninstalling the WSFC feature.

OK, now after you have reinstalled the Failover Clustering feature and joined the nodes to the cluster again, we have to create a new AlwaysOn Availability Group


and when trying this we can get this error:


This error is caused by the incorrect uninstallation we performed above. To fix it, we have to disable AlwaysOn Availability Groups at the database engine service level


and then restart the service.


Now we have to enable it again and restart the database engine service once more


After you have completed these steps, you will be able to create the Availability Groups you want again. It is great!


If you like this post then follow @percyreyes on twitter.

Posted by Percy Reyes | no comments

The following issue can arise when Transactional Replication and Change Data Capture (CDC) are deployed and running together and something was done incorrectly while managing the CDC jobs. When CDC is deployed, two jobs are created for the CDC process: cdc.MyDB_capture and cdc.MyDB_cleanup.

The cdc.MyDB_capture job executes the sys.sp_MScdc_capture_job stored procedure, which internally invokes sp_cdc_scan to read the transaction log and capture the changes made in the database. In other words, this job is the agent of the CDC process that reads the transaction log, and it relies on the Log Reader Agent (from Transactional Replication) to make that possible. Consequently, if you also have Transactional Replication running for your database, you will have a second agent working on reading the transaction log: the Log Reader Agent. The Log Reader Agent and the cdc.MyDB_capture job cannot both be running, so this error will be raised:

The capture job cannot be used by Change Data Capture to extract changes from the log when transactional replication is also enabled on the same database. When Change Data Capture and transactional replication are both enabled on a database, use the logreader agent to extract the log changes.

This error occurs because it is not possible for two Log Reader Agent instances to read the same database in the same SQL Server instance at the same time.

When transactional replication is configured, the cdc.MyDB_capture job is dropped automatically, and if you uninstall replication, the cdc.MyDB_capture job is created again. So, if you already had CDC installed and then configured transactional replication, the cdc.MyDB_capture job should have been dropped. The Log Reader Agent (created as part of replication) has the higher priority. Therefore, if you have transactional replication running for your database and the cdc.MyDB_capture job is still enabled and running, you have to disable or drop that job manually, because it will keep failing and raising the error above.
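Either action is a one-liner; a sketch, assuming the database name MyDB from above:

```sql
-- Option 1: drop the CDC capture job; the Log Reader Agent
-- takes over scanning the log for CDC.
USE MyDB;
EXEC sys.sp_cdc_drop_job @job_type = N'capture';

-- Option 2: just disable the job in SQL Server Agent.
EXEC msdb.dbo.sp_update_job @job_name = N'cdc.MyDB_capture', @enabled = 0;
```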

If you like this post then follow @percyreyes on twitter.

Posted by Percy Reyes | no comments

Today I enjoyed talking about a new feature in SQL Server 2014: Backup Encryption. I gave this webcast session for the SQLPASS Peru chapter (follow them on twitter at @sqlpassperu and on their fan page). You can download the presentation from here.

Please let me know if you have any comments, questions, or concerns.

If you like this post then follow @percyreyes on twitter.

Sometimes you may have jobs that execute PowerShell scripts, and one day they can unexpectedly begin to fail without any apparent reason, giving this error:

AuthorizationManager check failed At line:1 char:2
+ & <<<< 'S:\myfolder\script.ps1'
+ CategoryInfo : NotSpecified: (:) [], PSSecurityException
+ FullyQualifiedErrorId : RuntimeException.
Process Exit Code 1. The step failed.

The first troubleshooting step is to check that the PowerShell ExecutionPolicy is not set to “Restricted” by executing:

Get-ExecutionPolicy
If it is, then set it to “RemoteSigned” or “Unrestricted”, depending on your security policy:

Set-ExecutionPolicy RemoteSigned

But if the error persists, then you have to check that the Windows Management Instrumentation (WMI) service is enabled and running. If it is and the error still continues, then restart the WMI service. After this, your job and script should run again without errors.

If you like this post then follow @percyreyes on twitter.

Posted by Percy Reyes | no comments

If you have a database with Change Data Capture (CDC) enabled, you need to take this recommendation into account during the restore process. There is an option in the RESTORE syntax, KEEP_CDC, which allows you to restore the CDC metadata as well when you restore the database to another SQL Server instance, or to the same instance under a different database name.
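A sketch of such a restore; the backup path and file names are illustrative, while the database name TESTDB2 matches the verification queries below:

```sql
-- Restore the database under a new name, keeping the CDC metadata.
RESTORE DATABASE TESTDB2
FROM DISK = N'D:\Backups\TESTDB.bak'
WITH MOVE N'TESTDB'     TO N'D:\SQLData\TESTDB2.mdf',
     MOVE N'TESTDB_log' TO N'D:\SQLData\TESTDB2_log.ldf',
     KEEP_CDC;
```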

Now you can verify that the CDC metadata was restored as well, that is, your database and tables keep CDC enabled, along with all the data captured by CDC:
-- Checking if CDC is enabled for database TESTDB2.
select name, is_cdc_enabled from sys.databases where name = 'TESTDB2'
-- Checking if CDC is enabled for table MyTable.
select name, is_replicated, is_tracked_by_cdc from sys.tables where name = 'MyTable'
select * from [cdc].[change_tables]
-- Checking the data tracked for table MyTable.
select * from cdc.dbo_MyTable_CT
This is not all you have to do; after this you will need to create the CDC jobs by executing:
EXEC sys.sp_cdc_add_job 'capture'
EXEC sys.sp_cdc_add_job 'cleanup' 
Now you can also verify that the jobs were created for CDC:
EXEC [sys].[sp_cdc_help_jobs] 
With all of this, the restore process of a database with CDC has been completed successfully.

If you like this post then follow @percyreyes on twitter.
Posted by Percy Reyes | no comments

As DBAs, we know it is sometimes necessary to modify the data and log file properties of the tempdb database. By default, SQL Server creates only one data file and one log file, but in the vast majority of situations we will need to create more files.

There are many recommendations about how to create them, and also about how many files to create in OLTP environments. We will not discuss the details today; for now, I recommend splitting your tempdb database into 4 data files (for 4, 8, or 16 cores) or 8 data files (for 16, 32, 64, or more cores) with only one log file, and trying to locate them on different RAID1, RAID5, or RAID10 drives. Don't forget that the number of data files doesn't just depend on the number of cores; it also depends on concurrency, the workloads on your server, and the performance of your queries.

Now I will show you a small script to split your default tempdb database into 8 data files while keeping the single log file.

USE master
ALTER DATABASE [tempdb] MODIFY FILE (NAME=N'templog', NEWNAME= N'tempdev_Log')
ALTER DATABASE [tempdb] MODIFY FILE (NAME=N'tempdev', NEWNAME=N'tempdev_Data01')
ALTER DATABASE [tempdb] MODIFY FILE (NAME=N'tempdev_Log',  FILENAME= N'D:\SQLTempDB\tempdev_Log.ldf', SIZE = 2048MB , FILEGROWTH = 2048MB)
ALTER DATABASE [tempdb] MODIFY FILE (NAME=N'tempdev_Data01',  FILENAME= N'D:\SQLTempDB\tempdev_Data01.mdf', SIZE = 512MB , FILEGROWTH = 1024MB )
ALTER DATABASE [tempdb] ADD FILE ( NAME = N'tempdev_Data02', FILENAME = N'D:\SQLTempDB\tempdev_Data02.ndf' , SIZE = 512MB , FILEGROWTH = 1024MB )
ALTER DATABASE [tempdb] ADD FILE ( NAME = N'tempdev_Data03', FILENAME = N'D:\SQLTempDB\tempdev_Data03.ndf' , SIZE = 512MB , FILEGROWTH = 1024MB )
ALTER DATABASE [tempdb] ADD FILE ( NAME = N'tempdev_Data04', FILENAME = N'D:\SQLTempDB\tempdev_Data04.ndf' , SIZE = 512MB , FILEGROWTH = 1024MB )
ALTER DATABASE [tempdb] ADD FILE ( NAME = N'tempdev_Data05', FILENAME = N'D:\SQLTempDB\tempdev_Data05.ndf' , SIZE = 512MB , FILEGROWTH = 1024MB )
ALTER DATABASE [tempdb] ADD FILE ( NAME = N'tempdev_Data06', FILENAME = N'D:\SQLTempDB\tempdev_Data06.ndf' , SIZE = 512MB , FILEGROWTH = 1024MB )
ALTER DATABASE [tempdb] ADD FILE ( NAME = N'tempdev_Data07', FILENAME = N'D:\SQLTempDB\tempdev_Data07.ndf' , SIZE = 512MB , FILEGROWTH = 1024MB )
ALTER DATABASE [tempdb] ADD FILE ( NAME = N'tempdev_Data08', FILENAME = N'D:\SQLTempDB\tempdev_Data08.ndf' , SIZE = 512MB , FILEGROWTH = 1024MB )

After the execution, you need to restart the database engine so that the files are recreated in their new locations and all changes take effect. Let me know if you have any questions or comments about tempdb.

If you like this post then follow @percyreyes on twitter.

Posted by Percy Reyes | no comments

Tempdb is the shared database of each SQL Server instance, and it is used to store and manage temporary objects. The tempdb database received a number of changes in SQL Server 2005; that is, there are new tempdb usages and internal optimization enhancements. However, the tempdb architecture is mostly unchanged since SQL Server 2000.
In a user database, transactions have the ACID attributes: atomicity, consistency, isolation, and durability. In tempdb, transactions lose the durability attribute: tempdb content does not persist after SQL Server shuts down.
Most internal operations on tempdb do not generate log records because there is no need to roll them back, so these operations are faster.
Some of the database options cannot be modified for tempdb. Auto shrink is not allowed for tempdb. Database shrink and file shrink capabilities are limited.

The following restrictions also apply to tempdb:
    The database CHECKSUM option cannot be enabled.
    A database snapshot cannot be created on tempdb.
    DBCC CHECKALLOC and DBCC CHECKCATALOG are not supported.
    Only offline checking for DBCC CHECKTABLE is performed.

Tempdb Space Usage: The following types of objects can occupy tempdb space: Internal objects, Version stores and User objects.
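On SQL Server 2005 or later, the space consumed by each of these categories can be checked with the sys.dm_db_file_space_usage DMV; a quick sketch:

```sql
-- How much tempdb space each object category is using (values in MB;
-- page counts are multiplied by the 8 KB page size).
SELECT
    SUM(user_object_reserved_page_count)     * 8 / 1024.0 AS user_objects_mb,
    SUM(internal_object_reserved_page_count) * 8 / 1024.0 AS internal_objects_mb,
    SUM(version_store_reserved_page_count)   * 8 / 1024.0 AS version_store_mb,
    SUM(unallocated_extent_page_count)       * 8 / 1024.0 AS free_space_mb
FROM tempdb.sys.dm_db_file_space_usage;
```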

Internal Objects: Internal object metadata is stored in memory and doesn't appear in catalog views such as sys.all_objects. Internal objects are used:

  • To store intermediate runs for sort.
  • To store intermediate results for hash joins and hash aggregates.
  • To store XML variables or other large object (LOB) data type variables. The LOB data type includes all of the large object types: text, image, ntext, varchar(max), varbinary(max), and all others.
  • By queries that need a spool to store intermediate results.
  • By keyset cursors to store the keys.
  • By static cursors to store a query result.
  • By Service Broker to store messages in transit.
  • By INSTEAD OF triggers to store data for internal processing.
  • DBCC CHECK internally uses a query that may need to spool intermediate results.
  • Query notification and event notification use Service Broker, so they need space in tempdb as well.

    Page allocations on internal objects and updates to internal objects do not generate log records.
    Each internal object occupies at least nine pages (one IAM page and eight data pages) in tempdb.

Version Stores: These do not appear in catalog views such as sys.all_objects. They work as follows:

  • Version stores are used to store row versions generated by transactions for features such as snapshot isolation, triggers, MARS (multiple active result sets), and online index build.
  • The online index build version store is for row versions from tables that have online index build operations on them.
  • The common version store is for row versions from all other tables in all databases.
  • The version store consists of append-only store units which are highly optimized for sequential inserts and random look up. Inserts into the version store do not generate log records.
  • Each unit can store many row versions. If there are versions to be stored, a new store unit is created about every minute.

    For snapshot isolation and read committed snapshot isolation, the versions are generated by DML operations in the database when the respective database options are enabled.
    For AFTER triggers, versions are generated for all the update operations by the transaction that fired the trigger during the INSERT, DELETE, or UPDATE statement in any database, independent of database options.
    INSTEAD OF triggers do not generate versions.
    For MARS, versions are generated by the UPDATE or DELETE statement when there is a pending SELECT on the same transaction and the same connection.
    Building an online index generates versions of the rows while the online index is being built.

User Objects: These appear in catalog views such as sys.all_objects. They work as follows:

  • sp_spaceused can show the size occupied by these objects.
  • User objects include both user-defined tables and indexes, and system catalog tables and indexes.
  • Operations on user objects in tempdb are mostly logged. Bulk copy program (BCP), bulk insert, SELECT INTO, and index rebuild operations are bulk logged.
  • User-defined tables include the global temporary tables such as ##t, and local temporary tables such as #t.
  • Local temporary tables also include table variables such as @t and the mapping index for online clustered index build with the SORT_IN_TEMPDB option.

Performance Enhancements in SQL Server 2005 or later:

  1. The tempdb logging optimization avoids logging the “after value” in certain log records in tempdb.
  2. Instant data file initialization works by not zeroing out the NTFS file when the file is created or when the size of the file is increased.
  3. There is less use of the UP type page latch when allocating pages and extents in tempdb.
  4. Proportional fill has been optimized to reduce UP latch contention.
  5. There is now deferred drop in tempdb.
  6. Worktable caching is improved.
  7. SQL Server 2005 or later caches the temporary table that is created by using a CREATE TABLE or SELECT INTO statement.

Recommendations for managing tempdb:

  • The tempdb files must be configured with an initial size and auto-growth based on your workloads. Do not leave them at the default sizes.
  • The tempdb files should be located on RAID0 (for better performance), RAID1 (if you have more writes than reads), or RAID5 (if you have more reads than writes). RAID10 is the best option, but not all companies can justify it.
  • The tempdb files must be located on separate disks to avoid contention issues and to improve performance.
  • The tempdb database should be created with one data file per physical processor if the number of CPUs is eight or less. If there are more than eight CPUs, you can start off with eight data files and afterwards increase the number of files by four whenever there is PAGELATCH contention on one of the allocation bitmap pages (PFS, GAM, and SGAM pages) until the issue is solved; if not, add four more files, and so on.
  • Do not forget: the tempdb data files or log file should not grow much in a short time. If this happens, something is working wrong, so you need to analyze the workload performance and detect the queries that you have to optimize.

If you have any questions or comments about tempdb database please let me know.

If you like this post then follow @percyreyes on twitter.

Many times, as SQL DBAs, we have to deal with errors in SQL Server Replication. The vast majority of errors are reported by the replication agents, such as the Distribution Agent, Log Reader Agent, Snapshot Agent, Queue Reader Agent, and so on. Now, what happens when, for example, one of the agents reports several errors and the majority are known and/or duplicated or related to FK/PK conflicts? Solving all the errors one by one, or reconfiguring the replication, takes a lot of time, and we know time is money: the longer it takes, the more the business operation is impacted. So you need an option that allows the operation to continue while you work to fix the errors as soon as possible. This option is the SkipErrors parameter.

Today in this post I will explain how to use the SkipErrors parameter, which allows you to skip specific errors so that data synchronization is not stopped and the operation of the business is not affected. The SkipErrors parameter is configurable in the profile of the Distribution Agent. It is most often used to handle errors related to Primary Key or Foreign Key conflicts raised within the Distribution Agent process at the moment transactions are being replicated to the subscribers. However, you can use it to skip whatever error you decide.

The following picture shows an error (error code 547) about an FK conflict in the Distribution Agent process, and you can see how the transactions are queuing up behind this error; you need to fix it so that the rest of the transactions can also be replicated. The Distribution Agent reads the msrepl_commands table sequentially to get the commands to execute on the subscribers; in other words, the first command into the queue is the first one out to be replicated to the subscribers.


Another common error where you can use SkipErrors is “The row was not found at the Subscriber when applying the replicated command”. The error code in this case is 20598.


The SkipErrors parameter takes as input the numbers of the errors that you want to skip, and it is configured in the Distribution Agent profile. You need to create a customized profile based on the default profile; the only change you then need to make is to indicate the error numbers separated by colons, as you can see in the following picture: “20598:547”.


After you have made this change, you need to restart the Distribution Agent. When the Distribution Agent starts up, it will load the new customized profile with the error codes to be skipped or ignored. You will see something like the following picture, with many errors being skipped.


Now, as you can see, many errors were skipped and the distribution is running without problems. This also means that the transactions with those errors were lost, and you cannot recover them because they were ignored, which may affect your data consistency. Do not forget that you must use the SkipErrors parameter with caution, working as a team with your analysts or application administrators in order to make the best decision for your business.
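If you later want to review what the agents ran into, the errors they log are kept in the distribution database; a quick sketch (the distribution database name may differ in your environment):

```sql
-- Recent errors recorded by the replication agents,
-- including those skipped via the SkipErrors parameter.
USE distribution;
SELECT TOP (50) time, error_code, error_text
FROM dbo.MSrepl_errors
ORDER BY time DESC;
```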

If you have any questions or comments please let me know.

If you like this post then follow @percyreyes on twitter.

Posted by Percy Reyes | no comments

You can see an error like the one shown below when you try to connect to Object Explorer using an older version of SSMS (SQL Server Management Studio), for example, when you try to connect to SQL Server 2012 Object Explorer using SQL Server 2005 SSMS.


The error is raised because the newer version supports more features and therefore has more folders to display, and this is where SQL Server 2005 SSMS cannot display these new folders: the maximum array index is outside the bounds that it supports. However, this error only occurs when you connect to Object Explorer by default; it does not happen if you connect to a query window by default. In these situations, the general recommendation is to always use the latest SSMS to connect to older versions. I hope this comment is useful for you!

If you like this post then follow @percyreyes on twitter.

Posted by Percy Reyes | no comments

Many Microsoft SQL Server eBooks are still available for free download. They are quick references and very useful resources to start your adventure in the database world. I hope you enjoy them, and do not forget to share them with your contacts.

5 Tips for a Smooth SSIS Upgrade to SQL Server 2012
A Hitchhiker's Guide to Microsoft StreamInsight Queries
Books Online: Backup and Restore of SQL Server Databases
Books Online: Data Analysis Expressions (DAX) Reference
Books Online: Data Mining Extensions (DMX) Reference
Books Online: Data Quality Services
Books Online: High Availability Solutions
Books Online: Master Data Services
Books Online: Monitor and Tune for Performance
Books Online: Multidimensional Expressions (MDX) Reference
Books Online: SQL Server Distributed Replay
Books Online: Transact-SQL Data Definition Language (DDL) Reference
Books Online: Transact-SQL Data Manipulation Language (DML) Reference
Books Online: XQuery Language Reference
Data Access for Highly-Scalable Solutions: Using SQL, NoSQL, and Polyglot Persistence
Extracting and Loading SharePoint Data in SQL Server Integration Services
Integration Services: Extending Packages with Scripting
Introducing Microsoft SQL Server 2008 R2
Introducing Microsoft SQL Server 2012
Master Data Services Capacity Guidelines
Master Data Services (MDS) Operations Guide
Microsoft SQL Server AlwaysOn Solutions Guide for High Availability and Disaster Recovery
Microsoft SQL Server Analysis Services Multidimensional Performance and Operations Guide
Multidimensional Model Programming
Optimized Bulk Loading of Data into Oracle
Planning Disaster Recovery for Microsoft SQL Server Reporting Services in SharePoint Integrated Mode
QuickStart: Learn DAX Basics in 30 Minutes
SQLCAT's Guide to BI and Analytics
SQLCAT's Guide to High Availability Disaster Recovery
SQLCAT's Guide to Relational Engine
SQL Server 2012 Tutorials: Analysis Services - Data Mining
SQL Server 2012 Tutorials: Analysis Services - Multidimensional Modeling
SQL Server 2012 Tutorials: Analysis Services - Tabular Modeling
SQL Server 2012 Tutorials: Reporting Services
SQL Server 2012 Tutorials: Writing Transact-SQL Statements
SQL Server 2012 Upgrade Technical Guide
SQL Server Community FAQs Manual

If you like this post then follow @percyreyes on twitter.

Posted by Percy Reyes | 2 comment(s)

As the title of this post says, what can we do when our SQL Server Enterprise Evaluation period has expired? Or maybe the more important question would be: what does this type of message really mean? Will the database engine service stop running?


Don't worry; if you are using the Enterprise Evaluation Edition, your installed services will continue running without problems. In practice, this message only means that you cannot connect to your local database service using the local SQL Server Management Studio from your Enterprise Evaluation installation.

What you have to do to solve it is uninstall only the Management Studio Enterprise Evaluation Edition and then install a non-Evaluation edition of Management Studio so that you can connect to your Enterprise Evaluation database engine. You can then continue using the Evaluation Edition!

If you like this post then follow @percyreyes on twitter.

When you have transactional replication environments with read-only subscribers (meaning changes are not propagated back to the Publisher), it is very important to be clear that rows on subscribers must not be deleted directly. If a row on the subscriber that was replicated from the publisher is ever deleted, and then that same row is modified on the publisher, the following error can occur:

The row was not found at the Subscriber when applying the replicated command

This issue occurs because the row to be updated no longer exists on the subscriber, so the Distribution Agent keeps trying to propagate the change to the subscriber unsuccessfully. We need to fix it as soon as possible so that the replication queue doesn't grow too large and all subsequent queued commands are applied to the subscriber. To solve this case, you must review the commands stored in the distribution database using sp_browsereplcmds in order to identify the transaction with this error, determine the affected row(s), and then either insert the missing row manually on the subscriber or delete the command from the queue. You can take this approach if someone deleted the row by mistake or you don't need it anymore.
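A sketch of how the pending commands can be inspected; the sequence numbers below are hypothetical placeholders that you would take from the failing transaction's details:

```sql
-- Inspect the replicated commands around the failing transaction.
-- @xact_seqno_start/@xact_seqno_end values here are placeholders.
USE distribution;
EXEC sp_browsereplcmds
    @xact_seqno_start = '0x0000001A000000B10003',
    @xact_seqno_end   = '0x0000001A000000B10003';
```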

Another option is to use the SkipErrors parameter, which allows you to skip errors of a certain type; in this case it would be error 20598, meaning the transaction with the error is not committed, but subsequent transactions are. A last option could be the sp_setsubscriptionxactseqno stored procedure, which allows you to skip one or more transactions that cause errors. Please don't forget that errors should be skipped with caution and with a correct understanding of the error condition.

If you have any questions or comments please let me know.

If you like this post then follow @percyreyes on twitter.

Posted by Percy Reyes | 1 comment(s)

When we have .NET applications working on SQL Server, one of these timeout errors can be raised. Developers will report it to you as an error within SQL Server and ask you for a solution, because they think it is a timeout misconfiguration in SQL Server. Based on my experience troubleshooting these issues, I can tell you it may not be related to SQL Server at all, as most people first assume. To verify it, you can read your SQL Server error log: you won't find any error about timeout events. What does this mean? The application never even tried to connect to SQL Server because it was still working at the application level; the timeout error belongs to the application, at the .NET SqlClient Data Provider level, and depends on the CommandTimeout property value configured in your application connection. The CommandTimeout property specifies the number of seconds that the provider should wait for result sets before timing out. As you know, the default is 30 seconds, and in many cases it may not be enough due to application problems.

Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding.

Timeout expired. The timeout period elapsed prior to obtaining a connection from the pool. This may have occurred because all pooled connections were in use and max pool size was reached.

And what do we have to do to solve these timeout errors? First, check the code within your applications: you may find an infinite loop or slow code, so that the operation spends so much time there that the timeout value is reached before your SQL query is even sent to the database engine, and then the timeout error is raised. To resolve this, your developers have to fix the .NET application code; I'm sorry, there is no other option! By the way, the following error could also be related to timeouts:

Unable to connect to SQL Server session database.

Some SQL queries tend to be slower when they are executed through the .NET SqlClient Data Provider, and this is another reason why 30 seconds may not be enough. In other cases, one way to work around it is to increase the CommandTimeout property in the application connection, to 60 seconds for example, and if that is not enough, to 120 or 180, or even 0 (unlimited), but this must be a workaround while the problem is being traced and fixed. Have you ever faced this kind of issue? Let me know your comments or questions.

If you like this post then follow @percyreyes on twitter.

Posted by Percy Reyes | no comments

As a DBA, when you develop scripts that are very useful for your job, you may sometimes notice bugs in the information collected by Dynamic Management Views. For example, in my case, on SQL Server 2008 R2 SP2 many months ago, when I was analyzing the total elapsed time of a query to report it as the current total time, I detected an error at the 17th second of execution for a particular query with session_id equal to 63, as you can see below in the picture.

In the following picture you can see two columns without names. Please focus on the second query: the second column is the total elapsed time according to sys.dm_exec_requests, and it should be 17, not 938 seconds, because the previous reading was 16 (follow the sequence of each result across the executions of the query). The value of the third column is calculated by subtracting the start_time value (from sys.dm_exec_requests) from GETDATE(), so this time will always be accurate and real; it is used to compare against the second column, and in a correct scenario both must be the same, which is not the case here.
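The screenshots are not reproduced here, but the comparison described above can be sketched roughly like this (the column expressions are my reconstruction, not the exact script from the post):

```sql
-- Compare the DMV-reported elapsed time with one derived from start_time.
SELECT session_id,
       total_elapsed_time / 1000               AS dmv_elapsed_seconds,  -- column 2
       DATEDIFF(SECOND, start_time, GETDATE()) AS real_elapsed_seconds  -- column 3
FROM sys.dm_exec_requests
WHERE session_id = 63; -- the session from the example
```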


As stated earlier, in my experience the total_elapsed_time column returns inconsistent information about the elapsed execution time of a process once it exceeds 16 seconds. As a recommendation, be careful when you work with DMVs. Tell me, have you identified any errors in a DMV similar to mine?

If you like this post then follow @percyreyes on twitter.

There are many situations where you may be tempted to turn off an ANSI database option in order to solve an error like this:

Msg 8152, Level 16, State 14, Line 5
String or binary data would be truncated.

I know most people choose to set ANSI_WARNINGS to OFF as the final solution; however, this is not recommendable, because it can cause RECOMPILATION of each stored procedure where it is turned off. You should avoid having your stored procedures recompiled wherever possible, in order not to impact performance negatively. There are other options with the same effect.
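As a quick illustration of what turning the option off actually does, a sketch using a temporary table:

```sql
CREATE TABLE #t (c varchar(5));

SET ANSI_WARNINGS ON;
INSERT INTO #t VALUES ('truncate me'); -- fails with Msg 8152

SET ANSI_WARNINGS OFF;
INSERT INTO #t VALUES ('truncate me'); -- succeeds, silently truncated to 'trunc'

DROP TABLE #t;
```

The error goes away, but so does the warning that data is being lost, which is usually the worse outcome.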


I recommend not changing the default configuration unless it is really necessary and the benefits are significant.

By the way, recompilation may sometimes be necessary to optimize our stored procedures or ad hoc queries, but this should be done explicitly using the WITH RECOMPILE option, the RECOMPILE query hint, the sp_recompile system stored procedure, or another optimization strategy, not as a side effect of a misconfiguration.

Please let me know if you have any questions or comments.

If you like this post then follow @percyreyes on twitter.

Sometimes you will need to use extended procedures in SQL Server to get information about domain accounts, and you may see this error raised at execution time:

Error: 15404, State: 19. Could not obtain information about Windows NT group/user 'MyDomain\MyAccount', error code 0x5.
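For instance, the error can typically be reproduced with a call like the following (the account name is a placeholder):

```sql
-- Queries Active Directory for information about a Windows account;
-- fails with error 15404 if the service account lacks AD permissions.
EXEC master..xp_logininfo N'MyDomain\MyAccount';
```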

In order to solve this error, your network administrator has to apply the correct configuration (for your SQL Server service account) in Active Directory by following these steps:

Enable the 'Allowed to authenticate' security setting for the service account MyDomain\MyAccount on the domain controller computer objects in domain MyDomain:

  1. Log on to the Domain Controller of domain MyDomain
  2. Open Active Directory Users and Computers (dsa.msc)
  3. Enable the 'Advanced Features' under the menu 'View'
  4. Navigate to the domain controllers computer object and open the property window
  5. Click on the security tab
  6. Add the SQL Service account "MyDomain\MyAccount" and enable the setting 'Allowed to authenticate'
  7. Click OK to close the window
  8. Repeat steps 4-7 on each Domain Controller computer object

With all of this, everything will work perfectly. Let me know if you have any questions or comments.

If you like this post then follow @percyreyes on twitter.

Posted by Percy Reyes | 1 comment(s)

In this post I will explain how to resolve a problem in SQL Server that has already been reported as a product bug, caused by a casting error from the DATETIMEOFFSET data type to SQL_VARIANT. The typical error message is the following:

An error occurred while executing batch. Error message is: Input string was not in a correct format.

The SQL_VARIANT data type actually cannot store DATETIMEOFFSET values; let's take this as one of its restrictions. Well, it seems the SQL Server team forgot this detail when implementing certain system procedures, because, for example, by running the following query we can reproduce the error:
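The restriction itself is easy to demonstrate in isolation; a minimal sketch, independent of the system view discussed below:

```sql
-- datetimeoffset is not among the types sql_variant can hold,
-- so this cast fails with an operand type clash error.
SELECT CAST(SYSDATETIMEOFFSET() AS sql_variant);
```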


Although the error has not yet appeared in the previous image, that does not mean it does not exist. What we must do now is scroll down to see the last rows, and at that point the bug fires, something like this:


Now we have the bug: SSMS shows the red lines that were generated, plus a dialog box with more details about the error. To fix it we can make a small change to that system view; but first, note that the original version looks like this:


What we must do is apply an explicit cast to varchar(26) on the current_value column, because among the values it returns there is one of type DATETIMEOFFSET, and it is at the point of converting it to SQL_VARIANT that the bug occurs. Now we must update the view so it looks like this:


Finally, the query can be run without problems, returning the following result, where we can see that UtilityDateCreated indeed holds a DATETIMEOFFSET value.


Errors of this kind should be resolved in a similar way. Microsoft will surely provide a fix for this problem in an upcoming Service Pack. I hope many of you find this useful.

If you like this post then follow @percyreyes on twitter.

Posted by Percy Reyes | 1 comment(s)

Denali improves the functionality of the ORDER BY clause by adding new arguments aimed at specific scenarios such as paging, that is, reading the results of a query in parts. This could be done in previous versions by combining TOP, OVER and ORDER BY; however, the performance of that solution is poor compared to what can be achieved with OFFSET and FETCH. OFFSET indicates the number of rows to skip before starting to read; in other words, it is the relative position where the read pointer is placed. Then, with FETCH, we indicate how many rows we want to read. For SQL Server, the value given to OFFSET becomes position zero, and it then performs a TOP according to the value given in the FETCH argument. That is how these options work internally, always based on the ascending or descending sort criteria of ORDER BY.

One recommendation when using these options is, whenever possible (or rather, always), to have an index (preferably clustered) on the column used in the ORDER BY clause, and to add WHERE filters on that column; that way the approach remains scalable and we avoid costly table scan or index scan operations. I have tested this against 10 million rows and it does indeed respond well (as a starting point, of course, using at least a clustered index and sometimes filters).
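As a minimal sketch of that recommendation (assuming a Customers table like Northwind's that does not already have a clustered index on the sort column), the supporting index could be created like this:

```sql
-- Clustered index on the column used by ORDER BY ... OFFSET/FETCH;
-- not needed if CustomerID is already the clustered primary key.
CREATE CLUSTERED INDEX IX_Customers_CustomerID
    ON dbo.Customers (CustomerID);
```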

For example, in the following query we place the reading start point at row 4 (position zero for SQL Server), and every remaining row is read starting from row 5 (which for SQL Server is position 1). The query returns 87 rows out of a total of 91.

SELECT CustomerID, CompanyName, ContactName, ContactTitle, Address --/..*.../
FROM dbo.Customers
ORDER BY CustomerID
OFFSET 4 ROWS; -- skip the first 4 rows and read the remaining 87
Now, using the DESC sort order, the results will be different:
SELECT CustomerID, CompanyName, ContactName, ContactTitle, Address --/..*.../
FROM dbo.Customers
ORDER BY CustomerID DESC
OFFSET 4 ROWS;

Each execution of a page query is independent of the others. That means that if we query a first page of 4 rows and then another of the same (or a different) size, the two queries are not related: the result is returned immediately to the client and server resources are released. No state is kept on the server, as usually happens with a cursor, which generally causes blocking problems and heavy resource consumption in many cases. Now, if we want to read only the first (or next) 3 rows after the OFFSET:

SELECT CustomerID, CompanyName, ContactName, ContactTitle, Address --/..*.../
FROM dbo.Customers
ORDER BY CustomerID
OFFSET 4 ROWS FETCH NEXT 3 ROWS ONLY;
To read the next 3 rows we simply change the OFFSET value:
SELECT CustomerID, CompanyName, ContactName, ContactTitle, Address --/..*.../
FROM dbo.Customers
ORDER BY CustomerID
OFFSET 7 ROWS FETCH NEXT 3 ROWS ONLY;
We can read each page of rows dynamically using variables, in some cases applying the OPTIMIZE FOR hint to tune the optimization for specific filter values:
DECLARE @Start INT, @Next INT;
SET @Start=0
SET @Next=3
SELECT CustomerID, CompanyName, ContactName, ContactTitle, Address --/..*.../
FROM dbo.Customers
ORDER BY CustomerID
OFFSET @Start ROWS FETCH NEXT @Next ROWS ONLY
OPTION(OPTIMIZE FOR (@Start=4,@Next=5))
We can use it inside a stored procedure:

CREATE PROCEDURE dbo.PageFetch(@PageNumber INT, @PageSize INT)
AS
BEGIN
    SELECT CustomerID, CompanyName, ContactName, ContactTitle, Address --/..*.../
    FROM dbo.Customers
    ORDER BY CustomerID
    OFFSET (@PageSize * (@PageNumber -1) ) ROWS
    FETCH NEXT @PageSize ROWS ONLY;
END

Now we can query whichever pages of rows we want; for example, the following calls are based on pages of 3 rows and read pages 2, 3 and 5.
EXEC dbo.PageFetch 2,3;
EXEC dbo.PageFetch 3,3;
EXEC dbo.PageFetch 5,3;

OFFSET and FETCH can be used inside views, subqueries, functions and derived tables. However, they also have limitations: one of them is that they cannot be used inside indexed views, or directly with TOP, OVER, INSERT, UPDATE, MERGE or DELETE, although they can appear in independent lower-level expressions, such as the subqueries those statements feed on. For example:
SELECT TOP(2) ROW_NUMBER() OVER(ORDER BY CustomerID) AS 'NRO', CompanyName, ContactName, ContactTitle FROM (
    SELECT CustomerID, CompanyName, ContactName, ContactTitle, Address --/..*.../
    FROM dbo.Customers
    ORDER BY CustomerID
    OFFSET 4 ROWS FETCH NEXT 5 ROWS ONLY -- example paging values
) AS T1

Using it with views:

CREATE VIEW dbo.v1
AS
SELECT CustomerID, CompanyName, ContactName, ContactTitle, Address --/..*.../
FROM dbo.Customers
ORDER BY CustomerID
OFFSET 0 ROWS FETCH NEXT 10 ROWS ONLY; -- example paging values
GO
SELECT CustomerID, CompanyName, ContactName FROM dbo.v1

In conclusion, OFFSET and FETCH are excellent improvements that make it easier to implement result-paging solutions; however, to make them scalable they must be used in combination with indexes and filters.

More information:

ORDER BY Clause (Transact-SQL)

If you like this post then follow @percyreyes on twitter.

Posted by Percy Reyes | 1 comment(s)

Among the compatibility levels that Denali "supposedly supports" is SQL Server 2000 (80). However, if you try to restore a backup of a database at that compatibility level, it is impossible so far, and I don't think that will change; I believe Denali RTM will not support compatibility level 80 either, since, as far as I know, each version of SQL Server only supports the two previous versions — in this case, compatibility levels 90 (SQL Server 2005) and 100 (SQL Server 2008/2008 R2). It seems the SQL Server folks simply forgot to remove it from the UI. No problem at this point, since it is only a CTP!


The alternative solution is to re-create the database and load the objects and data again; obviously this is cumbersome and not viable from a business point of view. When you try to restore a backup of a SQL Server 2000 (SP3a) database, the following error is raised:

Msg 3169, Level 16, State 1, Line 1
The database was backed up on a server running version 8.00.0760. That version is incompatible with this server, which is running version 11.00.1103. Either restore the database on a server that supports the backup, or use a backup that is compatible with this server.
Msg 3013, Level 16, State 1, Line 1
RESTORE DATABASE is terminating abnormally.

Attaching the database is not possible either:

Msg 1813, Level 16, State 2, Line 1
Could not open new database 'Northwind'. CREATE DATABASE is aborted.
Msg 1813, Level 16, State 2, Line 1
Could not open new database 'Northwind'. CREATE DATABASE is aborted.
Msg 950, Level 20, State 1, Line 1
Database 'Northwind' cannot be upgraded because its non-release version (0) is not supported by this version of SQL Server. You cannot open a database that is incompatible with this version of sqlservr.exe. You must re-create the database.

Another workaround is to attach the database first in SQL Server 2008 R2, and then attach it in SQL11. Surely this issue (that is, the leftover UI option, which reads like a hint of possible compatibility level 80 support) will be corrected when the RTM version is released, but for the moment this is what we have if you want to start testing an upgrade.

If you like this post then follow @percyreyes on twitter.

Posted by Percy Reyes | 1 comment(s)

In Denali CTP1 we have a very interesting new object called SEQUENCE for handling sequences of numbers in the range -2^31 to 2^31 - 1. SEQUENCE solves many problems related to the use of the IDENTITY property: it is not tied to a column and can be used to feed several columns in different tables. It is also useful as a limited alternative to ROW_NUMBER. Its use is simple, practical and flexible, and it offers better performance than IDENTITY. It cannot be used inside functions or with DISTINCT, UNION, EXCEPT and INTERSECT, although it can with UNION ALL; and to guarantee uniqueness it is recommended to create an index on the columns fed by its values. Be careful with indexes, though: they can improve queries, but they can also negatively impact update operations.

SEQUENCE will definitely be extremely useful; however, don't forget that everything has its usage scenarios and also its limitations. To create a SEQUENCE object, this is enough:
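A minimal definition consistent with the examples that follow (the name dbo.MySeq and the values 1 through 4 come from those examples; the data type is an assumption) would be:

```sql
-- Minimal sequence definition: starts at 1 and increments by 1
CREATE SEQUENCE dbo.MySeq
    AS INT
    START WITH 1
    INCREMENT BY 1;
```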

To use it:

select NEXT VALUE FOR dbo.MySeq
GO 4

GO 4 executes the batch four times, returning the numbers 1 through 4. If we run the statement again, the result will be the numbers 5 through 8, and so on. If at some point we want to reinitialize the sequence, we can achieve it like this:
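One way to do that restart, shown here as a hedged sketch, is with ALTER SEQUENCE and the RESTART argument:

```sql
-- Restart the sequence at its initial value
ALTER SEQUENCE dbo.MySeq
    RESTART WITH 1;
```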
And if we want the sequence to be cyclic, we simply use the CYCLE attribute in the object definition. On the other hand, as a basic alternative to ROW_NUMBER(), we can use it as follows:
select next value for dbo.MySeq as [nro],  Employees.FirstName   from Employees 
or also this way:
select next value for dbo.MySeq  over (order by customers.CustomerID ) as [nro], 
customers.CustomerID, Customers.ContactName
from Customers
You can review SEQUENCE in more detail in the references below, where there are plenty of examples to get familiar with.

Creating and Using Sequence Numbers
sp_sequence_get_range (Transact-SQL)
SQL Server v.Next (Denali) : Using SEQUENCE

If you like this post then follow @percyreyes on twitter.

Posted by Percy Reyes | 4 comment(s)