Percy Reyes's Technical Blog

Tempdb is a shared database, one per instance of SQL Server, used to store and manage temporary objects. Tempdb received a number of changes in SQL Server 2005: there are new tempdb usages and internal optimization enhancements. However, the tempdb architecture is mostly unchanged since SQL Server 2000.
In a user database, transactions have the ACID attributes: atomicity, consistency, isolation, and durability. In tempdb, transactions lose the durability attribute: tempdb does not persist after SQL Server shuts down.
Many internal operations on tempdb do not generate log records because there is no need to roll them back, so these operations are faster.
Some database options cannot be modified for tempdb. Auto shrink is not allowed, and database shrink and file shrink capabilities are limited.

The following restrictions also apply to tempdb:
    The database CHECKSUM option cannot be enabled.
    A database snapshot cannot be created on tempdb.
    DBCC CHECKALLOC and DBCC CHECKCATALOG are not supported.
    Only offline checking for DBCC CHECKTABLE is performed.

Tempdb Space Usage: The following types of objects can occupy tempdb space: internal objects, version stores, and user objects.
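
A quick way to see which of these three consumers is taking tempdb space is the sys.dm_db_file_space_usage DMV (available in SQL Server 2005 or later); a minimal sketch, with sizes reported in KB (8 KB per page):

SELECT SUM(user_object_reserved_page_count)     * 8 AS user_objects_kb,
       SUM(internal_object_reserved_page_count) * 8 AS internal_objects_kb,
       SUM(version_store_reserved_page_count)   * 8 AS version_store_kb,
       SUM(unallocated_extent_page_count)       * 8 AS free_kb
FROM tempdb.sys.dm_db_file_space_usage;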

Internal Objects: Internal object metadata is stored in memory and does not appear in catalog views such as sys.all_objects. Internal objects are used:

  • To store intermediate runs for sort.
  • To store intermediate results for hash joins and hash aggregates.
  • To store XML variables or other large object (LOB) data type variables. The LOB data type includes all of the large object types: text, image, ntext, varchar(max), varbinary(max), and all others.
  • By queries that need a spool to store intermediate results.
  • By keyset cursors to store the keys.
  • By static cursors to store a query result.
  • By Service Broker to store messages in transit.
  • By INSTEAD OF triggers to store data for internal processing.
  • DBCC CHECKDB internally uses a query that may need to spool intermediate results.
  • Query notification and event notification use Service Broker, so they need space in tempdb as well.

    Page allocations on internal objects and updates to internal objects do not generate log records.
    Each internal object occupies at least nine pages (one IAM page and eight data pages) in tempdb.

Version Stores: Version store contents do not appear in catalog views such as sys.all_objects. Version stores work as follows (a query to check the version store size appears after this list):

  • Version stores are used to store row versions generated by transactions for features such as snapshot isolation, triggers, MARS (multiple active result sets), and online index build.
  • The online index build version store is for row versions from tables that have online index build operations on them.
  • The common version store is for row versions from all other tables in all databases.
  • The version store consists of append-only store units, which are highly optimized for sequential inserts and random lookups. Inserts into the version store do not generate log records.
  • Each unit can store many row versions. If there are versions to be stored, a new store unit is created about every minute.

    For snapshot isolation and read committed snapshot isolation, the versions are generated by DML operations in the database when the respective database options are enabled.
    For AFTER triggers, versions are generated for all the update operations by the transaction that fired the trigger during the INSERT, DELETE, or UPDATE statement in any database, independent of database options.
    INSTEAD OF triggers do not generate versions.
    For MARS, versions are generated by the UPDATE or DELETE statement when there is a pending SELECT on the same transaction and the same connection.
    Building an online index generates versions of the rows while the online index is being built.
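
The current version store size can be checked from the performance counters exposed since SQL Server 2005; a small sketch (the object_name prefix varies between default and named instances, hence the LIKE):

SELECT counter_name, cntr_value AS size_kb
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%Transactions%'
  AND counter_name = 'Version Store Size (KB)';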

User Objects: User objects appear in catalog views such as sys.all_objects. Key points (a short example follows this list):

  • sp_spaceused can show the size occupied by these objects.
  • User objects include both user-defined tables and indexes, and system catalog tables and indexes.
  • Operations on user objects in tempdb are mostly logged. Bulk copy program (BCP), bulk insert, SELECT INTO, and index rebuild operations are bulk logged.
  • User-defined tables include global temporary tables such as ##t and local temporary tables such as #t.
  • User-defined tables also include table variables such as @t and the mapping index for an online clustered index build with the SORT_IN_TEMPDB option.
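
The three kinds of user-defined tables named above can be created as follows (a small sketch; the names are arbitrary):

CREATE TABLE #t  (id int);      -- local temporary table
CREATE TABLE ##t (id int);      -- global temporary table
DECLARE @t TABLE (id int);      -- table variable, also allocated in tempdb
EXEC tempdb.dbo.sp_spaceused;   -- database-level space summary for tempdb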

Performance Enhancements in SQL Server 2005 or later:

  1. The tempdb logging optimization avoids logging the “after value” in certain log records in tempdb.
  2. Instant data file initialization works by not zeroing out the NTFS file when the file is created or when the size of the file is increased.
  3. There is less use of the UP type page latch when allocating pages and extents in tempdb.
  4. Proportional fill has been optimized to reduce UP latch contention.
  5. There is now deferred drop in tempdb.
  6. Worktable caching is improved.
  7. SQL Server 2005 or later caches the temporary table that is created by using a CREATE TABLE or SELECT INTO statement.

Recommendations for managing tempdb:

  • The tempdb files must be configured with an initial size and auto-growth based on your workloads. Do not leave them at the default sizes.
  • The tempdb files should be placed on RAID 0 (for better performance), RAID 1 (if you have more writes than reads), or RAID 5 (if you have more reads than writes). RAID 10 is the best option, but not all companies can justify its cost.
  • The tempdb files should be located on separate disks to avoid contention issues and improve performance.
  • Tempdb should be created with one data file per physical processor if there are eight or fewer CPUs. If there are more than eight CPUs, start with eight data files; then, if PAGELATCH contention appears on one of the allocation bitmap pages (PFS, GAM, or SGAM pages), add four more files at a time until the issue is solved. A configuration sketch follows this list.
  • Do not forget: tempdb data files and the log file should not grow much in a short time. If this happens, something is wrong, and you need to analyze the workload and find the queries that have to be optimized.
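
As a starting point, a configuration sketch for an eight-core server (file names, paths, and sizes are assumptions you must adapt to your own workload and disks):

ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev,  SIZE = 4GB, FILEGROWTH = 512MB);
ALTER DATABASE tempdb ADD    FILE (NAME = tempdev2, FILENAME = 'T:\SQLDATA\tempdev2.ndf', SIZE = 4GB, FILEGROWTH = 512MB);
-- ...repeat ADD FILE until you have one data file per CPU (up to eight to start)...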

If you have any questions or comments about the tempdb database, please let me know.


One of the most important things in a database server environment is security, in order to avoid, for example, fundamental objects being dropped or modified by mistake or intentionally. What happens if the replication stored procedures on the subscribers are dropped? In that case, when a change is going to be replicated to the subscribers, an error will tell us that the needed stored procedures no longer exist, and the movement of transactions from the distributor to the subscribers stops.

image

In order to fix it, we need to recover these replication stored procedures and deploy them again on the subscribers. The deployment can take a long time because there may be many of them, so deploying them one by one is not productive. And what happens if you do not have a copy of these stored procedures? Do not worry: you can use sp_scriptpublicationcustomprocs to generate new stored procedures for your publication and deploy them. Run it in the published database:

exec sp_scriptpublicationcustomprocs @publication='mypublication'

The output is the creation code of the stored procedures for each article that belongs to that publication. Copy this code and execute it on the subscriber, and the error will be fixed.

In this post I will give a recommendation to fix a replication error like “The repldone log scan occurs before the current start of replication in the log”. But first we need to know that when the Log Reader Agent starts, it verifies that the last distributed transaction still exists in the transaction log; in other words, it verifies that the last transaction in the msrepl_transactions table of the distribution database matches the last replicated transaction in the published database transaction log. When this does not happen, an error can appear, and we can use DBCC OPENTRAN on the published database to confirm it:

Replicated Transaction Information:
        Oldest distributed LSN     : (26827:10393:204) >> 000068cb:00002899:00cc (must match the last row in msrepl_transactions table)
        Oldest non-distributed LSN : (26832:8908:1)    >> 000068d0:000022cc:0001  (oldest LSN in the transaction log)
DBCC execution completed. If DBCC printed error messages, contact your system administrator.

From the output we can confirm that the Log Reader Agent expects to find LSN 00002899 in VLF 000068cb, but the oldest LSN in the transaction log is 000022cc in another VLF, 000068d0. This means the Log Reader is trying to start reading at LSN 00002899, which is before the current start of replication in the log at LSN 000022cc. You may be thinking about moving the Log Reader to start at 000022cc to solve this error. Yes, this is possible, but the problem is that we do not know exactly how many transactions there are between 00002899 and 000022cc; if we move the Log Reader forward, we may miss some transactions and end up with inconsistent data. If your database is very large, a good option is to use sp_repldone to mark 000068cb:00002899:00cc as the last replicated transaction (use sp_repldone with extreme caution); if the database is not large, you can instead reconfigure replication for all publications in the database.
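
For reference, the typical "flush" invocation of sp_repldone looks like the sketch below; run it at the publisher in the published database, and only if you accept that any not-yet-delivered commands are discarded. The parameters are positional: xactid, xact_seqno, numtrans, time, reset.

EXEC sp_repldone NULL, NULL, 0, 0, 1;  -- marks everything in the log as distributed; use with extreme caution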

Now, sometimes this type of error can also appear when you restore a database that includes replication publications. In this case, an error like this will appear:

The specified LSN {00000000:00000000:0000} for repldone log scan occurs before the current start of replication in the log {000068cb:00002899:00cc}.
(Source: MSSQLServer, Error number: 18768)

image

You can see that the Log Reader wants to start reading at LSN {00000000:00000000:0000}; this is because the msrepl_transactions table is empty. In this specific case you can run sp_replrestart to restart the Log Reader and move it forward. Both sp_repldone and sp_replrestart are advanced stored procedures for dealing with internal replication issues; you must be careful and use them with extreme caution, because inadequate use can damage your replication setup. There are cases like this where you can make it easier: if you restore a database with publications, you also have another option that avoids running sp_replrestart, which is to reinitialize the subscriptions, or to drop and then reconfigure all publications. I hope these recommendations help you deal with this type of replication issue. Go carefully! :)
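
For completeness, sp_replrestart takes no parameters and is also run at the publisher, in the published database; a sketch:

USE MyPublishedDatabase;  -- hypothetical name
GO
EXEC sp_replrestart;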

As SQL Server DBAs we often have to deal with errors in SQL Server replication. The vast majority of errors are reported by the replication agents: the Distribution Agent, Log Reader Agent, Snapshot Agent, Queue Reader Agent, and so on. Now, what happens when one of the agents reports several errors, and most of them are known, duplicated, or related to FK/PK conflicts? Solving every error one by one, or reconfiguring replication, takes a lot of time, and while time passes the business operation is increasingly impacted. You need an option that allows the operation to continue while you work on fixing the errors as soon as possible. That option is the SkipErrors parameter.

In this post I will explain how to use the SkipErrors parameter, which allows skipping specific errors so that data synchronization is not stopped and the operation of the business is not affected. The SkipErrors parameter is configurable in the Distribution Agent profile. It is most often used to skip errors related to primary key or foreign key conflicts raised within the Distribution Agent process when transactions are being replicated to the subscribers; however, you can use it to skip whatever error you decide.

The following picture shows an error (error code 547) about an FK conflict in the Distribution Agent process, and you can see how the transactions are being queued behind this error; you need to fix it so that the rest of the transactions can also be replicated. The Distribution Agent reads the MSrepl_commands table sequentially to get the commands to execute on the subscribers: the first command into the queue is the first replicated to the subscribers.

 image
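
If you want to inspect the errors and the queued commands directly on the distributor, queries like these can help (a sketch, run in the distribution database):

USE distribution;
GO
-- Recent Distribution Agent errors, newest first (547 and 20598 show up here):
SELECT TOP (20) time, error_code, error_text
FROM dbo.MSrepl_errors
ORDER BY time DESC;
GO
-- Decode the replicated commands still queued in MSrepl_commands:
EXEC sp_browsereplcmds;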

Another common error where you can use SkipErrors is “The row was not found at the Subscriber when applying the replicated command”. The error code in this case is 20598.

 image

The SkipErrors parameter takes as input the numbers of the errors that you want to skip, and it is configured in the Distribution Agent profile. You need to create a customized profile based on the default profile; then the only change you need to make is to list the error numbers separated by colons, as you can see in the following picture: “20598:547”.

image

After you have made this change, you need to restart the Distribution Agent. When the Distribution Agent starts up, it loads the new customized profile with the error codes to be skipped or ignored. You will see something like the following picture, with many errors being skipped.

 image

Now, as you can see, many errors were skipped and the Distribution Agent is running without problems. This also means that the transactions with those errors were lost; you cannot recover them, because they were ignored, and this may affect your data consistency. Do not forget that the SkipErrors parameter must be used with caution, working as a team with your analysts or application administrators in order to make the best decision for your business.

If you have any questions or comments please let me know.


We know that SQL Server 2000 is now out of support with Microsoft, so in that sense many companies may be planning to upgrade to a more recent version of SQL Server, such as SQL Server 2012 or SQL Server 2008 R2. This post gives you a recommendation to put into practice in order to avoid metadata inconsistency errors in the upgraded databases.

After you have upgraded your database, you always need to issue the DBCC CHECKDB command to check its logical and physical integrity:

          DBCC CHECKDB(N'MyDatabase') WITH NO_INFOMSGS

and here the following inconsistency error is likely to appear:

Msg 2508, Level 16, State 3, Line 1

The In-row data RSVD page count for object ‘SchemaName.ObjectName’, index ID 0, partition ID 75863107960832, alloc unit ID 75863107960832 (type In-row data) is incorrect. Run DBCC UPDATEUSAGE.

CHECKDB found 0 allocation errors and 1 consistency errors in table 'SchemaName.ObjectName' (object ID 1157579162).

CHECKDB found 0 allocation errors and 1 consistency errors in database 'MyDatabase'.

The error indicates that there are page and row count inaccuracies for that table, so you need to issue the DBCC UPDATEUSAGE command to fix them. DBCC UPDATEUSAGE reports and corrects these inaccuracies; a minimal sketch (0 means the current database):
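
          USE MyDatabase;
          GO
          DBCC UPDATEUSAGE(0);

After executing it in the context of your database, you will see the following message indicating that the inaccuracies were fixed: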

DBCC UPDATEUSAGE: Usage counts updated for table 'SchemaName.ObjectName' (index 'ObjectName' , partition 1):

        RSVD pages (In-row Data): changed from (-275) to (145) pages.

DBCC execution completed. If DBCC printed error messages, contact your system administrator.

Now we are able to run DBCC CHECKDB again without errors. So the recommendation of this post is: always execute DBCC UPDATEUSAGE after you upgrade a database from SQL Server 2000 to a later version. Please let me know if you have any questions or comments.

You can see an error like the one shown below when you try to connect to Object Explorer using an older version of SSMS (SQL Server Management Studio); for example, when you try to connect to SQL Server 2012 Object Explorer using SQL Server 2005 SSMS.

image

The error is raised because the newer version supports more features and therefore has more folders to display, and SQL Server 2005 SSMS cannot display these new folders because the maximum array index is outside the bounds it supports. However, this error appears only when you connect to Object Explorer by default; it does not happen if you connect to a query window by default. In these situations, the general recommendation is to always use the latest SSMS to connect to older versions. I hope this comment is useful for you!


Many Microsoft SQL Server eBooks are still available for free download. They are quick references and very useful resources to start your adventure in the database world. I hope you enjoy them, and do not forget to share them with your contacts.

5 Tips for a Smooth SSIS Upgrade to SQL Server 2012
A Hitchhiker's Guide to Microsoft StreamInsight Queries
Books Online: Backup and Restore of SQL Server Databases
Books Online: Data Analysis Expressions (DAX) Reference
Books Online: Data Mining Extensions (DMX) Reference
Books Online: Data Quality Services
Books Online: High Availability Solutions
Books Online: Master Data Services
Books Online: Monitor and Tune for Performance
Books Online: Multidimensional Expressions (MDX) Reference
Books Online: SQL Server Distributed Replay
Books Online: Transact-SQL Data Definition Language (DDL) Reference
Books Online: Transact-SQL Data Manipulation Language (DML) Reference
Books Online: XQuery Language Reference
Data Access for Highly-Scalable Solutions: Using SQL, NoSQL, and Polyglot Persistence
Extracting and Loading SharePoint Data in SQL Server Integration Services
Integration Services: Extending Packages with Scripting
Introducing Microsoft SQL Server 2008 R2
Introducing Microsoft SQL Server 2012
Master Data Services Capacity Guidelines
Master Data Services (MDS) Operations Guide
Microsoft SQL Server AlwaysOn Solutions Guide for High Availability and Disaster Recovery
Microsoft SQL Server Analysis Services Multidimensional Performance and Operations Guide
Multidimensional Model Programming
Optimized Bulk Loading of Data into Oracle
Planning Disaster Recovery for Microsoft SQL Server Reporting Services in SharePoint Integrated Mode
QuickStart: Learn DAX Basics in 30 Minutes
SQLCAT's Guide to BI and Analytics
SQLCAT's Guide to High Availability Disaster Recovery
SQLCAT's Guide to Relational Engine
SQL Server 2012 Tutorials: Analysis Services - Data Mining
SQL Server 2012 Tutorials: Analysis Services - Multidimensional Modeling
SQL Server 2012 Tutorials: Analysis Services - Tabular Modeling
SQL Server 2012 Tutorials: Reporting Services
SQL Server 2012 Tutorials: Writing Transact-SQL Statements
SQL Server 2012 Upgrade Technical Guide
SQL Server Community FAQs Manual


As the title of this post says, what can we do when our SQL Server Enterprise Evaluation period has expired? Or maybe the more important question is: what does this type of message really mean? Will the database engine service stop running?

image

Don't worry, and be happy with this type of message if you are using the Enterprise Evaluation Edition: the installed services will continue running without problems. In practice, this message only means that you cannot connect to your local database service using the local SQL Server Management Studio of your Enterprise Evaluation installation.

What you have to do to solve it is uninstall only the Management Studio of the Enterprise Evaluation Edition and then install a non-evaluation edition of Management Studio, so that you can connect to the database engine of your Enterprise Evaluation Edition. You can continue using the Evaluation Edition!

When you have transactional replication environments with read-only subscribers (meaning that changes are not propagated back to the publisher), it is very important to be clear that rows on the subscribers must not be deleted directly. If a row on a subscriber that was replicated from the publisher is deleted, and the same row is later modified on the publisher, the following error can be raised:

The row was not found at the Subscriber when applying the replicated command

This issue arises because the row to be updated no longer exists on the subscriber, and the Distribution Agent keeps trying to propagate the change unsuccessfully. We need to fix it as soon as possible so that the replication queue does not grow too large and the subsequent queued commands are applied to the subscriber. To solve this case, review the commands stored in the distribution database using sp_browsereplcmds in order to identify the transaction with this error and the affected row(s), and then either insert the missing row manually on the subscriber or delete the command from the queue. This approach is appropriate if someone deleted the row by mistake or you do not need it anymore.
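
To narrow sp_browsereplcmds down to the failing transaction, you can pass the xact_seqno reported with the error; a sketch with a hypothetical LSN value:

USE distribution;
GO
EXEC sp_browsereplcmds
     @xact_seqno_start = '0x00000025000000B50004',
     @xact_seqno_end   = '0x00000025000000B50004';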

Another option we have is to use the SkipErrors parameter, which allows you to skip errors of a certain type; in this case it would be error 20598. The transaction with the error is not committed, but subsequent transactions are. A last option could be the sp_setsubscriptionxactseqno stored procedure, which allows you to skip one or more transactions that cause errors. Please do not forget that errors should be skipped with caution and with a correct understanding of the error condition.

If you have any questions or comments please let me know.


In general, for any database in SQL Server, an error like this can be raised: “Could not continue scan with NOLOCK due to data movement.” This error occurs because some data pages are missing at the current position of a scan running with the NOLOCK locking hint, so we may have a corruption issue in the database. Pages may have been deleted or moved by page splits caused by DML statements, so that SQL Server is not able to continue the scan operation.

In order to verify that it is a page corruption issue, you can run the DBCC CHECKDB command (first try the PHYSICAL_ONLY option); a sketch of that first pass against the distribution database used later in this post:
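
DBCC CHECKDB (N'distribution') WITH PHYSICAL_ONLY, NO_INFOMSGS;

As a result, you may see many errors: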

Msg 8909, Level 16, State 1, Line 1
Table error: Object ID 0, index ID -1, partition ID 0, alloc unit ID 0 (type Unknown), page ID (1:219353) contains an incorrect page ID in its page header. The PageId in the page header = (0:0).
Msg 8909, Level 16, State 1, Line 1
Table error: Object ID 0, index ID -1, partition ID 0, alloc unit ID 0 (type Unknown), page ID (1:219354) contains an incorrect page ID in its page header. The PageId in the page header = (0:0).
Msg 8909, Level 16, State 1, Line 1
Table error: Object ID 0, index ID -1, partition ID 0, alloc unit ID 0 (type Unknown), page ID (1:219355) contains an incorrect page ID in its page header. The PageId in the page header = (0:0).
Msg 8909, Level 16, State 1, Line 1
Table error: Object ID 0, index ID -1, partition ID 0, alloc unit ID 0 (type Unknown), page ID (1:219356) contains an incorrect page ID in its page header. The PageId in the page header = (0:0).
Msg 8909, Level 16, State 1, Line 1
Table error: Object ID 0, index ID -1, partition ID 0, alloc unit ID 144115188075921408 (type Unknown), page ID (1:219368) contains an incorrect page ID in its page header. The PageId in the page header = (5385:-1048674823).
Msg 8909, Level 16, State 1, Line 1
Table error: Object ID 0, index ID -1, partition ID 0, alloc unit ID 281493019951104 (type Unknown), page ID (1:219369) contains an incorrect page ID in its page header. The PageId in the page header = (47621:-415740325).
Msg 8939, Level 16, State 98, Line 1
Table error: Object ID 0, index ID -1, partition ID 0, alloc unit ID 8574986732829671424 (type Unknown), page (16640:570440960). Test (IS_OFF (BUF_IOERR, pBUF->bstat)) failed. Values are 133129 and -12.
Msg 8939, Level 16, State 98, Line 1
Table error: Object ID 0, index ID -1, partition ID 0, alloc unit ID 6989691077213028352 (type Unknown), page (29184:1694528512). Test (IS_OFF (BUF_IOERR, pBUF->bstat)) failed. Values are 133129 and -14.
Msg 8909, Level 16, State 1, Line 1

I could validate, for example, that allocation unit 144115188075921408 does not exist; the same was true for the rest of the allocation units shown. This is why I said “data pages are missing” and why “(type Unknown)” appears in each message. Well, at this point we know it is a critical issue, but how can it be fixed? Wait, first I want to show more of the messages the DBCC command returned:

Msg 8928, Level 16, State 1, Line 1
Object ID 501576825, index ID 1, partition ID 72057594040549376, alloc unit ID 72057594044940288 (type In-row data): Page (1:1408250) could not be processed.  See other errors for details.
Msg 8928, Level 16, State 1, Line 1
Object ID 501576825, index ID 1, partition ID 72057594040549376, alloc unit ID 72057594044940288 (type In-row data): Page (1:1408251) could not be processed.  See other errors for details.
Msg 8928, Level 16, State 1, Line 1
Object ID 501576825, index ID 1, partition ID 72057594040549376, alloc unit ID 72057594044940288 (type In-row data): Page (1:1408252) could not be processed.  See other errors for details.
Msg 8928, Level 16, State 1, Line 1
Object ID 501576825, index ID 1, partition ID 72057594040549376, alloc unit ID 72057594044940288 (type In-row data): Page (1:1408253) could not be processed.  See other errors for details.
Msg 8928, Level 16, State 1, Line 1
Object ID 501576825, index ID 1, partition ID 72057594040549376, alloc unit ID 72057594044940288 (type In-row data): Page (1:1408254) could not be processed.  See other errors for details.
Msg 8928, Level 16, State 1, Line 1
Object ID 501576825, index ID 1, partition ID 72057594040549376, alloc unit ID 72057594044940288 (type In-row data): Page (1:1408255) could not be processed.  See other errors for details.

CHECKDB found 0 allocation errors and 112 consistency errors in table 'MSrepl_commands' (object ID 501576825).
CHECKDB found 0 allocation errors and 176 consistency errors in database 'distribution'.
repair_allow_data_loss is the minimum repair level for the errors found by DBCC CHECKDB (distribution).
DBCC execution completed. If DBCC printed error messages, contact your system administrator.

We can see that the messages above do not contain “(type Unknown)”, which means these allocation units exist in the database but are damaged, and they belong to the MSrepl_commands table. However, there are cases where you do not know which object is affected, and to fix the issue we need to know the table or index to which the allocation units belong; for those situations the following can help.

First, we should look up the partition_id using the allocation_unit_id in the sys.allocation_units view:

select * from sys.allocation_units where  allocation_unit_id= 72057594044940288

image

After this, look up the object name in the sys.partitions view using the partition_id, as follows:

select object_name(object_id) as object_name , * from sys.partitions where partition_id=72057594040549376
image

In my case the previous query returned MSrepl_commands. With all of this, the next step is to fix the damaged objects. First, try to REBUILD your clustered index if you have one; however, another error like this can appear:

Msg 824, Level 24, State 2, Line 1
SQL Server detected a logical consistency-based I/O error: incorrect pageid (expected 1:219359; actual 0:0). It occurred during a read of page (1:219359) in database ID 8 at offset 0x0000006b1be000 in file 'S:\SQLDATA\distribution.MDF'.  Additional messages in the SQL Server error log or system event log may provide more detail. This is a severe error condition that threatens database integrity and must be corrected immediately. Complete a full database consistency check (DBCC CHECKDB). This error can be caused by many factors; for more information, see SQL Server Books Online.

So now you have no option other than trying DBCC CHECKTABLE with the REPAIR_REBUILD option; if rows are missing and pages are damaged, try the REPAIR_ALLOW_DATA_LOSS option (use it as a last resort). Please be aware of the impact of running DBCC with these advanced options, as sketched below.
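
A sketch for this post's scenario (MSrepl_commands in the distribution database); note that the repair options require single-user mode:

USE master;
GO
ALTER DATABASE distribution SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
GO
USE distribution;
GO
DBCC CHECKTABLE ('MSrepl_commands', REPAIR_REBUILD);
-- Last resort only; rows on damaged pages can be lost:
-- DBCC CHECKTABLE ('MSrepl_commands', REPAIR_ALLOW_DATA_LOSS);
GO
USE master;
GO
ALTER DATABASE distribution SET MULTI_USER;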

If you have any comments or questions, let me know.

When we have .NET applications working against SQL Server, one of the timeout errors below can be raised. Developers will report it to you as an error within SQL Server and will ask you for a solution, because they think it is a timeout misconfiguration in SQL Server. Based on my experience troubleshooting these issues, however, it may not be related to SQL Server at all, as most people first think; to verify it, read your SQL Server error log, and you will not find any error about timeout events. What does this mean? The application never actually sent the work to SQL Server; the timeout happened at the application level, at the .Net SqlClient Data Provider level, and depends on the CommandTimeout property value configured in your application connection. The CommandTimeout property specifies the number of seconds that the provider should wait for result sets before it times out. As you know, the default is 30 seconds, and in many cases it may not be enough due to application problems.

Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding.

Timeout expired. The timeout period elapsed prior to obtaining a connection from the pool. This may have occurred because all pooled connections were in use and max pool size was reached.

And what do we have to do to solve these timeout errors? First, check the code of your applications: you may find an infinite loop or slow code, so that the operation spends so much time at the application level that the timeout is reached before the SQL query is even sent to the database engine. To resolve this, your developers have to fix the .NET application code; I'm sorry, there is no other option! By the way, the following error could also be related to a timeout:

Unable to connect to SQL Server session database.

SQL queries tend to be slower when they go through the .Net SqlClient Data Provider to connect to SQL Server, and this is another reason why 30 seconds may not be enough. In other cases, one way to work around it is to increase the CommandTimeout property in the application connection: maybe 60 seconds, and if that is not enough, 120 or 180, or even 0 (unlimited). But this must remain a workaround while the underlying problem is traced and fixed. Have you ever faced this kind of issue? Let me know your comments or questions.


As DBAs, when developing scripts that are very useful for our job, we can sometimes find bugs in the information collected by the dynamic management views. In my case, on SQL Server 2008 R2 SP2 many months ago, while reporting the total elapsed time of a query as its current running time, I detected an error at the 17th second of execution of a particular query with session_id 63, as you can see in the picture below.

In the following picture you can see two columns without names. Focus on the second query: the second column is the total elapsed time according to sys.dm_exec_requests, and it should be 17 seconds, not 938, because the previous reading was 16 (follow the sequence of results across the executions). The value of the third column is calculated by subtracting the start_time value (from sys.dm_exec_requests) from GETDATE(), so this time is always accurate and real; it is used for comparison with the second column, and in the correct case both should be the same, which is not what happens here.

image
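
A minimal version of the comparison shown in the picture (session 63 was the one being watched in my test; total_elapsed_time is reported in milliseconds):

SELECT r.session_id,
       r.total_elapsed_time / 1000               AS elapsed_sec_reported,
       DATEDIFF(SECOND, r.start_time, GETDATE()) AS elapsed_sec_calculated
FROM sys.dm_exec_requests AS r
WHERE r.session_id = 63;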

As stated earlier, for me the total_elapsed_time column returns inconsistent information about the elapsed execution time of a process once it exceeds 16 seconds. As a recommendation, be careful when you are working with DMVs. Tell me, have you identified any errors in DMVs similar to mine?

There are many situations where you may need to turn off an ANSI database option in order to solve an error like this:

Msg 8152, Level 16, State 14, Line 5
String or binary data would be truncated.

I know most people choose to set ANSI_WARNINGS to OFF as the final solution; however, this is not advisable, because it can cause a RECOMPILATION of each stored procedure where it was turned off. You should avoid having your stored procedures recompiled wherever possible, in order not to impact performance negatively. The following options have the same effect (a sketch of the truncation behavior follows the list):

  • SET ANSI_WARNINGS
  • SET ANSI_DEFAULTS
  • SET ANSI_NULLS
  • SET ANSI_PADDING
  • SET CONCAT_NULL_YIELDS_NULL
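
The truncation behavior itself can be sketched like this (a hypothetical table; with ANSI_WARNINGS OFF the INSERT succeeds and the value is silently truncated instead of raising Msg 8152):

CREATE TABLE #t (c varchar(5));
GO
SET ANSI_WARNINGS ON;
INSERT INTO #t VALUES ('too long for c');   -- fails with Msg 8152
GO
SET ANSI_WARNINGS OFF;
INSERT INTO #t VALUES ('too long for c');   -- succeeds, stores 'too l'
SELECT c FROM #t;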

I recommend not changing the default configuration unless it is really necessary and the benefits are significant.

By the way, recompilation may sometimes be necessary to optimize our stored procedures or ad hoc queries, but it should be done explicitly by using the WITH RECOMPILE option, the RECOMPILE query hint, the sp_recompile system stored procedure, or another optimization strategy, and not because of a misconfiguration.

Please, let me know your questions/comments if you have.

Regards.

Sometimes you will need to use extended procedures in SQL Server to get information about domain accounts, and you may see this error raised at execution time:

Error: 15404, State: 19. Could not obtain information about Windows NT group/user 'MyDomain\MyAccount', error code 0x5.
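
For example, xp_logininfo is one of the extended procedures that performs this Active Directory lookup, and it fails with error 15404 when the service account cannot authenticate against the domain controller (the account name below is hypothetical):

EXEC master.dbo.xp_logininfo @acctname = N'MyDomain\MyAccount';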

In order to solve this error, your network administrator has to apply the correct configuration (for the account under which SQL Server is running) in Active Directory by following these steps:

Enable the 'Allowed to authenticate' security setting for the service account MyDomain\MyAccount on the domain controllers' computer objects in domain MyDomain:

  1. Logon to the Domain Controller of domain MyDomain
  2. Open Active Directory Users and Computers (dsa.msc)
  3. Enable the 'Advanced Features' under the menu 'View'
  4. Navigate to the domain controller's computer object and open the property window
  5. Click on the security tab
  6. Add the SQL Service account "MyDomain\MyAccount" and enable the setting 'Allowed to authenticate'
  7. Click OK to close the window
  8. Repeat steps 4-7 on each Domain Controller computer object

With all of this, everything will work perfectly.
Let me know if you have any questions/comments. Enjoy it.
Regards,


In this post I will show explicitly how to solve a problem in SQL Server that has already been reported as a product bug, which is caused by an error when casting DATETIMEOFFSET data to SQL_VARIANT. The typical error message is the following:

An error occurred while executing batch. Error message is: Input string was not in a correct format.

The SQL_VARIANT data type cannot actually store DATETIMEOFFSET data; let's take this as one of its restrictions. Well, it seems the SQL Server team forgot this detail when implementing certain system procedures; for example, by running the following query we can reproduce the error:

image
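
The underlying restriction can be sketched in isolation (a minimal sketch; the first SELECT should fail with an operand type clash, while casting to a string first — the workaround applied below — works):

SELECT CAST(SYSDATETIMEOFFSET() AS sql_variant);                       -- fails: datetimeoffset is not supported by sql_variant
SELECT CAST(CAST(SYSDATETIMEOFFSET() AS varchar(34)) AS sql_variant);  -- works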

Although the error has not yet been reproduced in the previous image, that does not mean it does not exist; what we must do now is scroll the cursor to see the last rows, and at that point the bug fires, something like this:

image

Well, now we have the bug: in SSMS we can see the red lines that were generated and, on top of them, the dialog box with more details about the error. To fix the error we can make a small change to that system view, but first let's look at the original version:

image

What we must do is correct it by adding an explicit cast of the current_value column to varchar(26), since among the values it returns there is one of the DATETIMEOFFSET data type, and it is at the point of converting it to SQL_VARIANT that the bug is triggered. Now we update the view so that it looks like this:

image

Finally, the query can be run without problems, returning the following result, where we can see that UtilityDateCreated indeed contains a value of type DATETIMEOFFSET:

image

Errors of this type should be solved in a similar way. Surely Microsoft will provide a fix for this problem in an upcoming Service Pack. I hope this helps many of you.


In Denali, the functionality of the ORDER BY clause has been improved by adding more arguments to address specific scenarios, such as paging: reading the results of a query in parts. This could be done in previous versions by combining TOP, OVER, and ORDER BY; however, the performance of that solution is poor compared with what can be achieved with OFFSET and FETCH. In OFFSET you indicate the number of rows to skip before starting to read; in other words, it is the relative position where the read pointer is placed. Then, with FETCH, you indicate the number of rows you want to read. Internally, SQL Server takes the OFFSET value as position zero and then does a TOP according to the value given in the FETCH argument. These options always rely on the ascending or descending sort order of the ORDER BY clause.

One recommendation when using these options is, whenever possible (or rather, always), to have an index (preferably clustered) on the column that participates in the ORDER BY clause, and to help it with WHERE filters on that column; that way the approach is sufficiently scalable and we avoid costly table scan or index scan operations. I have run tests on 10 million rows, and it does respond well (to begin with, at least, using at minimum a clustered index and sometimes filters).

For example, in the following query we place the starting read point at row 4 (position zero for SQL Server), and all the remaining rows are read starting from row 5 (which for SQL Server is position 1). The query returns 87 rows out of a total of 91.

SELECT CustomerID, CompanyName, ContactName, ContactTitle, Address --/..*.../  
FROM dbo.Customers
ORDER BY CustomerID
OFFSET (4) ROWS -- or OFFSET (4) ROW
 
image
Now, using the DESC sort order, the results will be different:
SELECT CustomerID, CompanyName, ContactName, ContactTitle, Address --/..*.../  
FROM dbo.Customers
ORDER BY CustomerID DESC
OFFSET (4) ROWS -- or OFFSET (4) ROW


image
Each execution of a page query is independent of the others. That means that if we query the first page of 4 rows and then another page of the same (or a different) size, the two queries are not related: the result is returned immediately to the client and the server resources are released. No state is kept on the server, as usually happens with a cursor, which generally causes blocking problems and considerable resource consumption in many cases. Now, if we want to read only the first (or next) 3 rows after the OFFSET:

SELECT CustomerID, CompanyName, ContactName, ContactTitle, Address --/..*.../
FROM dbo.Customers
ORDER BY CustomerID
OFFSET (4) ROWS -- or OFFSET (4) ROW
FETCH NEXT 3 ROWS ONLY -- or FETCH FIRST 3 ROWS ONLY
GO
 
image
To read the next 3 rows, we simply change the OFFSET value:
 
SELECT CustomerID, CompanyName, ContactName, ContactTitle, Address --/..*.../  
FROM dbo.Customers
ORDER BY CustomerID
OFFSET (7) ROWS -- or OFFSET (7) ROW
FETCH NEXT 3 ROWS ONLY -- or FETCH FIRST 3 ROWS ONLY
GO
image
We can read each page of rows dynamically by using variables, in some cases applying the OPTIMIZE FOR hint to tune the optimization for specific filter values:
 
DECLARE @Start INT, @Next INT
SET @Start=0
SET @Next=3
SELECT CustomerID, CompanyName, ContactName, ContactTitle, Address --/..*.../
FROM dbo.Customers
ORDER BY CustomerID
OFFSET (@Start) ROWS
FETCH NEXT @Next ROWS ONLY
OPTION(OPTIMIZE FOR (@Start=4,@Next=5))
GO
 
We can use it inside a stored procedure:

CREATE PROCEDURE dbo.PageFetch(@PageNumber INT, @PageSize INT)
AS
BEGIN
SELECT ROW_NUMBER() OVER (ORDER BY CustomerID) AS 'Nro',
CustomerID, CompanyName, ContactName, ContactTitle, Address --/..*.../
FROM dbo.Customers
ORDER BY CustomerID
OFFSET (@PageSize * (@PageNumber -1) ) ROWS
FETCH NEXT @PageSize ROWS ONLY
END

Now we can query whichever pages of rows we want; for example, the following calls are based on pages of 3 rows, and pages 2, 3, and 5 are read.
 
EXEC dbo.PageFetch 2,3;
GO
EXEC dbo.PageFetch 3,3;
GO
EXEC dbo.PageFetch 5,3;

image
OFFSET and FETCH can be used inside views, subqueries, functions, and derived tables; however, they also have limitations. One of them is that they cannot be used inside indexed views, or directly with TOP, OVER, INSERT, UPDATE, MERGE, or DELETE, although they can appear in independent lower-level expressions, such as the subqueries those statements feed on. For example:
SELECT  TOP(2) ROW_NUMBER() OVER(ORDER BY CustomerID) AS 'NRO', CompanyName, ContactName, ContactTitle  FROM (
SELECT CustomerID, CompanyName, ContactName, ContactTitle, Address --/..*.../
FROM dbo.Customers
ORDER BY CustomerID
OFFSET (4) ROWS -- or OFFSET (4) ROW
FETCH NEXT 3 ROWS ONLY -- or FETCH FIRST 3 ROWS ONLY
) AS T1

Using it with views:


CREATE VIEW dbo.v1
as
SELECT CustomerID, CompanyName, ContactName, ContactTitle, Address --/..*.../
FROM dbo.Customers
ORDER BY CustomerID
OFFSET (0) ROWS -- or OFFSET (0) ROW
GO
SELECT CustomerID, CompanyName, ContactName FROM dbo.v1
WHERE CustomerID LIKE 'AN%'

In conclusion, OFFSET and FETCH are excellent improvements that make it easy to implement paging solutions; however, to make a solution scalable you have to use them in combination with indexes and filters.

More information:

ORDER BY Clause (Transact-SQL)
http://msdn.microsoft.com/en-us/library/ms188385%28v=SQL.110%29.aspx

PercyReyes,


Within the list of compatibility levels that Denali “supposedly supports” we find SQL Server 2000 (80); however, if you want to restore a backup of a database with that compatibility level, it is impossible so far, and I do not think this will change: I believe Denali RTM will not support compatibility level 80 either, since, as far as I know, each version of SQL Server supports only the two previous versions, which in this case means compatibility levels 90 (SQL Server 2005) and 100 (SQL Server 2008/2008 R2). It seems the SQL Server folks forgot to remove it from the UI. No problem at this point, since it is only a CTP!

image

The alternative solution for this is to re-create the database and load the objects and data again; obviously this solution is cumbersome and not viable from a business point of view. When you proceed to restore a backup of a SQL Server 2000 (SP3a) database, the following error is raised:

Msg 3169, Level 16, State 1, Line 1
The database was backed up on a server running version 8.00.0760. That version is incompatible with this server, which is running version 11.00.1103. Either restore the database on a server that supports the backup, or use a backup that is compatible with this server.
Msg 3013, Level 16, State 1, Line 1
RESTORE DATABASE is terminating abnormally.

When you try to attach the database, it is not possible either:

Msg 1813, Level 16, State 2, Line 1
Could not open new database 'Northwind'. CREATE DATABASE is aborted.
Msg 1813, Level 16, State 2, Line 1
Could not open new database 'Northwind'. CREATE DATABASE is aborted.
Msg 950, Level 20, State 1, Line 1
Database 'Northwind' cannot be upgraded because its non-release version (0) is not supported by this version of SQL Server. You cannot open a database that is incompatible with this version of sqlservr.exe. You must re-create the database.

Another solution for these cases is to attach the database to SQL Server 2008 R2 first, and then attach it to SQL11. Surely this issue (that is, removing the option from the UI, which otherwise looks like a signal of possible compatibility level 80 support) will be corrected when the RTM version is released, but for the moment this is what we have if you want to start testing an upgrade.

Percy Reyes,


In Denali CTP1 we have a very interesting new object called SEQUENCE for handling sequences of numbers; with the INT type used below, the range goes from -2^31 to 2^31 - 1. SEQUENCE solves many problems related to the use of the IDENTITY property: it is not tied to a column and can be used for several columns in different tables. It is also useful as a limited alternative to ROW_NUMBER. Its use is simple, practical, and flexible, and it offers better performance than IDENTITY; it cannot be used inside functions or with DISTINCT, UNION, EXCEPT, or INTERSECT, but it can with UNION ALL. To guarantee uniqueness, it is recommended to create an index on the columns fed by the values of this object. Be careful with indexes: they can improve queries, but at the same time negatively impact update operations.

SEQUENCE will definitely be extremely useful; however, do not forget that everything has its usage scenarios and also its limitations. Creating a SEQUENCE object takes no more than this:

CREATE SEQUENCE dbo.MySeq
AS INT
START WITH 1
INCREMENT BY 1
MINVALUE 1
MAXVALUE 100
CYCLE
CACHE 20
;
 
To use it:
 
select NEXT VALUE FOR dbo.MySeq
GO 4
 
Result:
(no column name)
1
2
3
4
 
If we run the previous statement again, the result will be the numbers 5 through 8, and so on. If at some point we want to restart the sequence, we can do it like this:
ALTER SEQUENCE dbo.MySeq RESTART 
 
And if we want the sequence to be cyclic, we simply use the CYCLE attribute in the object definition. On the other hand, as a basic alternative to ROW_NUMBER() we can use it in the following way:
 
select next value for dbo.MySeq as [nro],  Employees.FirstName   from Employees 
image
or also this way:
select next value for dbo.MySeq  over (order by customers.CustomerID ) as [nro], 
customers.CustomerID, Customers.ContactName
from Customers
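
Coming back to the uniqueness recommendation from the beginning of this post, the sequence can feed a column default, and a unique index enforces uniqueness; a sketch (table and index names are hypothetical):

CREATE TABLE dbo.MyOrders
(
    OrderID   int  NOT NULL DEFAULT (NEXT VALUE FOR dbo.MySeq),
    OrderDate date NOT NULL
);
-- The sequence alone does not guarantee uniqueness (think of CYCLE or RESTART):
CREATE UNIQUE INDEX UX_MyOrders_OrderID ON dbo.MyOrders (OrderID);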
 
You can review SEQUENCE in more detail in the references below, where there are plenty of examples to get familiar with it.

Creating and Using Sequence Numbers
http://msdn.microsoft.com/en-us/library/ff878058(v=SQL.110).aspx
sp_sequence_get_range (Transact-SQL)
http://msdn.microsoft.com/en-us/library/ff878352(v=SQL.110).aspx
SQL Server v.Next (Denali) : Using SEQUENCE
http://sqlblog.com/blogs/aaron_bertrand/archive/2010/11/11/sql-server-11-denali-using-sequence.aspx

PercyReyes,


I am back again after a long while to invite you to two important events for anyone interested in SQL Server topics in their diverse varieties and flavors. The first event is known as the "24 Horas PASS-LATAM", an online event spread over two days (October 19 and 20) that guarantees 24 hours of total fun with SQL Server. More information: http://sqlpass-latam.org — register!

The other event will be on Saturday, October 23: an in-person event in the afternoon in Peru, focused on Data Mining and Database Server Administration topics with SQL Server 2008 R2. More information: http://lima.sqlpass.org/SQLServer2008R2EventLaunch/tabid/915/Default.aspx (SQL Server 2008 R2 Event Launch).

I hope you enjoy them!

Regards,


Microsoft Learning has released a completely free SQL Server 2008 clinic. The link is: https://www.microsoftelearning.com/eLearning/courseDetail.aspx?courseId=137158&tab=overview

The topics covered are the following:

  • Availability, Security, Scalability, and Resource Management and Troubleshooting Enhancements
  • Policy-Based Management and Multi-Server Administration
  • Performance Data Collection
  • Management Implications of New Features

I hope you can take advantage of it!
