Percy Reyes's Technical Blog

My latest tip, “Script out all SQL Server Indexes in a Database using T-SQL”, was published today, and you can read it at http://www.mssqltips.com/sqlservertip/3441/script-out-all-sql-server-indexes-in-a-database-using-tsql/

Please let me know if you have any comments or questions.


As many of you know, I have been writing tips for MSSQLTips since the beginning of this year. I encourage you to vote for me, as I am in the running for “Rookie of the Year”. If you want to vote for me, you can do so here: http://www.mssqltips.com/MSSQLTipsAuthorVoting.asp . I count on your vote. Thanks!

Please let me know if you have any comments.


My latest tip, “Script to Drop All Orphaned SQL Server Database Users”, was published today, and you can read it at http://www.mssqltips.com/sqlservertip/3439/script-to-drop-all-orphaned-sql-server-database-users/

Please let me know if you have any comments or questions.


My latest tip was published today at MSSQLTips.com; you can read it at http://www.mssqltips.com/sqlservertip/3392/microsoft-sql-server-2014-inmemory-oltp-how-fast-is-it/

If you like this post then follow @percyreyes on twitter.


A few hours ago my tip “How to use DBCC CHECKPRIMARYFILE to attach several SQL Server databases in seconds” was published at MSSQLTips.com. You can read it at http://www.mssqltips.com/sqlservertip/3342/how-to-use-dbcc-checkprimaryfile-to-attach-several-sql-server-databases-in-seconds/

If you like this tip then you can follow me @percyreyes on twitter.


We can get the error “The following system error occurred: (Microsoft.AnalysisServices)” when trying to add a user to the server administrator role in Analysis Services, either via the GUI or through code.

This type of error is sometimes raised when orphaned users are already added as server administrators. Orphaned users no longer match any Windows user in Active Directory: they were removed from AD but not from Analysis Services, so only their invalid user SIDs remain linked, as shown in the following picture.

[Screenshot: orphaned user SIDs listed as Analysis Services server administrators]

To fix it, first remove these orphaned entries and then add the Windows users you want.

If you like this tip then you can follow me @percyreyes on twitter.


Just one hour ago one of my tips was published at MSSQLTips.com: “Error 601: Could not continue scan with NOLOCK due to SQL Server data movement”. You can read it at http://www.mssqltips.com/sqlservertip/3289/error-601-could-not-continue-scan-with-nolock-due-to-sql-server-data-movement/

If you like this tip then you can follow me @percyreyes on twitter.

My latest tip, “SQL Server Replication Error - The specified LSN for repldone log scan occurs before the current start of replication in the log”, was published today at MSSQLTips.com, and you can read it at http://www.mssqltips.com/sqlservertip/3288/sql-server-replication-error--the-specified-lsn-for-repldone-log-scan-occurs-before-the-current-start-of-replication-in-the-log

If you like this post then follow @percyreyes on twitter.

My tip about how to fix the “SQL Server Transactional Replication Error: Could not find stored procedure” was published today at MSSQLTips.com; you can read it at http://www.mssqltips.com/sqlservertip/3287/sql-server-transactional-replication-error-could-not-find-stored-procedure-error-and-how-to-recover-it-by-using-spscriptpublicationcustomprocs/

If you like this post then follow @percyreyes on twitter.

We know that SQL Server 2000 is now out of support from Microsoft, so many companies are planning to upgrade to a later version of SQL Server such as SQL Server 2012 or SQL Server 2014. This post gives you a recommendation to put into practice in order to avoid metadata inconsistency errors in the upgraded databases.

The inconsistency errors I am talking about can be found when you run the DBCC CHECKDB command to check the logical and physical integrity of the database:

DBCC CHECKDB(N'MyDatabase') WITH NO_INFOMSGS

and the following inconsistency error is likely to appear:

The In-row data RSVD page count for object ‘SchemaName.ObjectName’, index ID 0, partition ID 75863107960832, alloc unit ID 75863107960832 (type In-row data) is incorrect.
Run DBCC UPDATEUSAGE.
CHECKDB found 0 allocation errors and 1 consistency errors in table 'SchemaName.ObjectName' (object ID 1157579162).
CHECKDB found 0 allocation errors and 1 consistency errors in database 'MyDatabase'.

The output above tells us that there is one consistency error in the database "MyDatabase", and that it is a logical inconsistency. The error indicates that the page and row counts for that table are inaccurate, and this type of error can be fixed by running the DBCC UPDATEUSAGE command, which reports and corrects these inaccuracies.
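A minimal run, assuming the affected database is named MyDatabase as in the example above (the argument 0 makes DBCC UPDATEUSAGE target the current database):

USE MyDatabase
GO
-- Report and correct page and row count inaccuracies for all objects in the current database
DBCC UPDATEUSAGE(0)
GO

After it executes, you will see the following message indicating that the inaccuracies were fixed: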

DBCC UPDATEUSAGE: Usage counts updated for table 'SchemaName.ObjectName' (index 'ObjectName' , partition 1):
        RSVD pages (In-row Data): changed from (-275) to (145) pages.
DBCC execution completed. If DBCC printed error messages, contact your system administrator.

At this point the consistency error is fixed, and we can run DBCC CHECKDB again and it will finish without errors. The recommendation of this tip: always execute DBCC UPDATEUSAGE after you upgrade a database from SQL Server 2000 to a later version.

If you like this post then follow @percyreyes on twitter.

Sometimes, when we have an AlwaysOn solution implemented and for whatever reason the WSFC cluster service goes down and is corrupted, we need to reinstall everything, including a new cluster and new Availability Groups. But how do you uninstall or delete an Availability Group correctly if the WSFC service is not running? We have no option but to drop the Availability Groups forcibly, which makes for a dirty and incomplete uninstallation, as sketched below.
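A minimal sketch of the forced cleanup, assuming an Availability Group named MyAG (DROP AVAILABILITY GROUP can remove the local metadata even when the WSFC service is unavailable):

-- Run on every replica that still lists the Availability Group
DROP AVAILABILITY GROUP [MyAG];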

After dropping the Availability Groups, we need to remove the nodes from the WSFC cluster, which can be done by uninstalling the WSFC cluster service.

OK, now, after you have reinstalled the failover cluster and joined the nodes to it, we have to create a new AlwaysOn Availability Group

[Screenshot: creating the new Availability Group]

and when trying this we can get the following error:

[Screenshot: error raised when creating the Availability Group]

This error is caused by the incorrect uninstallation we did above. To fix it, we have to disable AlwaysOn Availability Groups at the database engine service level

[Screenshot: disabling AlwaysOn Availability Groups in the service properties]

and then restart the service. Next, enable it again and restart the database engine service once more. After you have done these steps, you will be able to create the Availability Groups you want again. It is great!

If you like this post then follow @percyreyes on twitter.


The following issue can be raised when Transactional Replication and Change Data Capture (CDC) are deployed and running together and the CDC jobs were managed incorrectly. When CDC is deployed, two jobs are created for the CDC process: cdc.MyDB_capture and cdc.MyDB_cleanup.

The cdc.MyDB_capture job executes the sys.sp_MScdc_capture_job stored procedure, which internally invokes sp_cdc_scan to read the transaction log and capture the changes made in the database; this job is the agent of the CDC process that reads the transaction log. If you also have Transactional Replication running for your database, then you have another agent reading the transaction log: the Log Reader Agent. The Log Reader Agent and the cdc.MyDB_capture job cannot both be running, otherwise this error will be raised:

The capture job cannot be used by Change Data Capture to extract changes from the log when transactional replication is also enabled on the same database. When Change Data Capture and transactional replication are both enabled on a database, use the logreader agent to extract the log changes.

This error occurs because it is not possible for two log reader instances to be reading the same database's transaction log at the same time in one SQL Server instance.

When transactional replication is configured, the cdc.MyDB_capture job is dropped automatically, and if you uninstall replication, the cdc.MyDB_capture job is created again. So if you already had CDC installed and then configured transactional replication, the cdc.MyDB_capture job should have been dropped; the Log Reader Agent (created as part of replication) takes priority. If you have transactional replication running for your database and the cdc.MyDB_capture job is still enabled and running, you have to disable or drop this job manually, because it will keep failing and raising the error above; a sketch of the cleanup follows.
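A minimal sketch of that manual cleanup, assuming the database is named MyDB (sys.sp_cdc_drop_job removes only the job, not the CDC configuration of the database):

USE MyDB
GO
-- Drop the CDC capture job; the replication Log Reader Agent takes over the log scanning
EXEC sys.sp_cdc_drop_job @job_type = N'capture'
GO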

If you like this post then follow @percyreyes on twitter.


Today I enjoyed talking about a new feature in SQL Server 2014: Backup Encryption. I gave this webcast session for the SQLPASS Peru chapter (follow them on twitter @sqlpassperu and on the fan page). You can download the presentation from here.

Please let me know if you have any comments, questions, or concerns.

If you like this post then follow @percyreyes on twitter.

Sometimes you may have jobs that execute PowerShell scripts, and one day they unexpectedly begin to fail without any apparent reason, giving this error:

AuthorizationManager check failed
At line:1 char:2
+ & <<<<  'S:\myfolder\script.ps1'
    + CategoryInfo          : NotSpecified: (:) [], PSSecurityException
    + FullyQualifiedErrorId : RuntimeException
Process Exit Code 1.  The step failed.

To troubleshoot this, first check that the PowerShell ExecutionPolicy is not set to “Restricted” by executing:

Get-ExecutionPolicy

If it is, then set it to “RemoteSigned” or “Unrestricted” depending on your security policy.

Set-ExecutionPolicy RemoteSigned

If the error continues, check that the Windows Management Instrumentation (WMI) service is enabled and running; if it is and the error still persists, restart the WMI service. After this, your job and script should run again without error.

If you like this post then follow @percyreyes on twitter.


If you have a database with Change Data Capture (CDC) enabled, you need to take this recommendation into account during the restore process. There is an option in the RESTORE syntax, KEEP_CDC, which allows you to restore the CDC metadata as well when you restore the database to another SQL Server instance, or to the same instance with a different database name.

RESTORE DATABASE [TESTDB2] FROM  DISK = N'D:\SQLBackup\TESTDB.bak' WITH  FILE = 1, KEEP_CDC
 
Now you can verify that the CDC metadata was restored as well, that is, your database and tables keep CDC enabled, along with all the data captured by CDC:
 
-- Checking if CDC is enabled for database TESTDB2.
select is_cdc_enabled,name from sys.databases where name='TESTDB2'
-- Checking if CDC is enabled for table MyTable.
select is_replicated, is_tracked_by_cdc, * from sys.tables where name='MyTable'
select * from [cdc].[change_tables]  
-- checking the data tracked for table MyTable.
select * from cdc.dbo_MyTable_CT 
 
This is not all you have to do; next, you will need to create the CDC jobs by executing:
 
USE TESTDB2
EXEC sys.sp_cdc_add_job 'capture'
EXEC sys.sp_cdc_add_job 'cleanup' 
 
Now you can also verify that the CDC jobs were created:
 
USE TESTDB2
EXEC [sys].[sp_cdc_help_jobs] 
 
With all of this, the restore process of a database with CDC has been completed successfully.

If you like this post then follow @percyreyes on twitter.

As DBAs, we know it is sometimes necessary to modify the data and log file properties of the tempdb database. By default SQL Server creates only one data file and one log file, but in the vast majority of situations we will need to create more files.

There are many recommendations about how to create them, and about how many files to create in OLTP environments. We will not discuss the details today; for now, I recommend splitting your tempdb database into 4 data files (for 4, 8, or 16 cores) or 8 data files (for 16, 32, 64, or more cores) and only one log file, and trying to locate them on different drives (RAID1, RAID5, or RAID10). Don't forget that the number of data files does not just depend on the number of cores; it also depends on concurrency, the workloads on your server, and the performance of your queries.

Now I will show you a small script to split your default tempdb database into 8 data files while keeping the single log file.

USE master
GO
ALTER DATABASE [tempdb] MODIFY FILE (NAME=N'templog', NEWNAME= N'tempdev_Log')
GO
ALTER DATABASE [tempdb] MODIFY FILE (NAME=N'tempdev', NEWNAME=N'tempdev_Data01')
GO
ALTER DATABASE [tempdb] MODIFY FILE (NAME=N'tempdev_Log',  FILENAME= N'D:\SQLTempDB\tempdev_Log.ldf', SIZE = 2048MB , FILEGROWTH = 2048MB)
GO
ALTER DATABASE [tempdb] MODIFY FILE (NAME=N'tempdev_Data01',  FILENAME= N'D:\SQLTempDB\tempdev_Data01.mdf', SIZE = 512MB , FILEGROWTH = 1024MB )
GO
ALTER DATABASE [tempdb] ADD FILE ( NAME = N'tempdev_Data02', FILENAME = N'D:\SQLTempDB\tempdev_Data02.ndf' , SIZE = 512MB , FILEGROWTH = 1024MB )
GO
ALTER DATABASE [tempdb] ADD FILE ( NAME = N'tempdev_Data03', FILENAME = N'D:\SQLTempDB\tempdev_Data03.ndf' , SIZE = 512MB , FILEGROWTH = 1024MB )
GO
ALTER DATABASE [tempdb] ADD FILE ( NAME = N'tempdev_Data04', FILENAME = N'D:\SQLTempDB\tempdev_Data04.ndf' , SIZE = 512MB , FILEGROWTH = 1024MB )
GO
ALTER DATABASE [tempdb] ADD FILE ( NAME = N'tempdev_Data05', FILENAME = N'D:\SQLTempDB\tempdev_Data05.ndf' , SIZE = 512MB , FILEGROWTH = 1024MB )
GO
ALTER DATABASE [tempdb] ADD FILE ( NAME = N'tempdev_Data06', FILENAME = N'D:\SQLTempDB\tempdev_Data06.ndf' , SIZE = 512MB , FILEGROWTH = 1024MB )
GO
ALTER DATABASE [tempdb] ADD FILE ( NAME = N'tempdev_Data07', FILENAME = N'D:\SQLTempDB\tempdev_Data07.ndf' , SIZE = 512MB , FILEGROWTH = 1024MB )
GO
ALTER DATABASE [tempdb] ADD FILE ( NAME = N'tempdev_Data08', FILENAME = N'D:\SQLTempDB\tempdev_Data08.ndf' , SIZE = 512MB , FILEGROWTH = 1024MB )

After the execution, you need to restart the database engine so that the files are recreated and all the changes take effect. You can then verify the new file layout with the query below.
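A quick check to run after the restart (a minimal sketch; size is stored in 8 KB pages, so dividing by 128 gives MB):

USE tempdb
GO
-- List the tempdb files with their current size in MB
SELECT name, physical_name, size/128 AS size_mb FROM sys.database_files
GO

Let me know if you have any questions or comments about tempdb.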

If you like this post then follow @percyreyes on twitter.


Tempdb is the shared database per instance in SQL Server, used to store and manage temporary objects. Tempdb received a number of changes in SQL Server 2005: there are new tempdb usages and internal optimization enhancements. However, the tempdb architecture is mostly unchanged since SQL Server 2000.
In a user database, transactions have the ACID attributes: atomicity, consistency, isolation, and durability. In tempdb, transactions lose the durability attribute: tempdb does not persist after SQL Server shuts down.
Most internal operations on tempdb do not generate log records because there is no need to roll back, so these operations are faster.
Some database options cannot be modified for tempdb. Auto shrink is not allowed for tempdb, and database shrink and file shrink capabilities are limited.

The following restrictions also apply to tempdb:
    The database CHECKSUM option cannot be enabled.
    A database snapshot cannot be created on tempdb.
    DBCC CHECKALLOC and DBCC CHECKCATALOG are not supported.
    Only offline checking for DBCC CHECKTABLE is performed.

Tempdb Space Usage: The following types of objects can occupy tempdb space: internal objects, version stores, and user objects. A query to measure each type's footprint follows.
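A sketch of a space-usage check using the sys.dm_db_file_space_usage DMV (available since SQL Server 2005; the counters are in 8 KB pages, so multiplying by 8 gives KB):

USE tempdb
GO
-- Space reserved in tempdb by each consumer type, in KB
SELECT SUM(user_object_reserved_page_count) * 8 AS user_objects_kb,
       SUM(internal_object_reserved_page_count) * 8 AS internal_objects_kb,
       SUM(version_store_reserved_page_count) * 8 AS version_store_kb,
       SUM(unallocated_extent_page_count) * 8 AS free_space_kb
FROM sys.dm_db_file_space_usage
GO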

Internal Objects: Internal object metadata is stored in memory and does not appear in catalog views such as sys.all_objects. Internal objects are used:

  • To store intermediate runs for sort.
  • To store intermediate results for hash joins and hash aggregates.
  • To store XML variables or other large object (LOB) data type variables. The LOB data type includes all of the large object types: text, image, ntext, varchar(max), varbinary(max), and all others.
  • By queries that need a spool to store intermediate results.
  • By keyset cursors to store the keys.
  • By static cursors to store a query result.
  • By Service Broker to store messages in transit.
  • By INSTEAD OF triggers to store data for internal processing.
  • DBCC CHECK internally uses a query that may need to spool intermediate results.
  • Query notification and event notification use Service Broker, so they need space in tempdb as well.

    Page allocations for internal objects and updates to internal objects do not generate log records.
    Each internal object occupies at least nine pages (one IAM page and eight data pages) in tempdb.

Version Stores: Version stores do not appear in catalog views such as sys.all_objects. Key points:

  • Version stores are used to store row versions generated by transactions for features such as snapshot isolation, triggers, MARS (multiple active result sets), and online index build.
  • The online index build version store is for row versions from tables that have online index build operations on them.
  • The common version store is for row versions from all other tables in all databases.
  • The version store consists of append-only store units which are highly optimized for sequential inserts and random lookup. Inserts into the version store do not generate log records.
  • Each unit can store many row versions. If there are versions to be stored, a new store unit is created about every minute.

    For snapshot isolation and read committed snapshot isolation, the versions are generated by DML operations in the database when the respective database options are enabled.
    For AFTER triggers, versions are generated for all the update operations by the transaction that fired the trigger during the INSERT, DELETE, or UPDATE statement in any database, independent of database options.
    INSTEAD OF triggers do not generate versions.
    For MARS, versions are generated by the UPDATE or DELETE statement when there is a pending SELECT on the same transaction and the same connection.
    Building an online index generates versions of the rows while the online index is being built.

User Objects: User objects appear in catalog views such as sys.all_objects. Key points:

  • sp_spaceused can show the size occupied by these objects (see the example after this list).
  • User objects include both user-defined tables and indexes, and system catalog tables and indexes.
  • Operations on user objects in tempdb are mostly logged. Bulk copy program (BCP), bulk insert, SELECT INTO, and index rebuild operations are bulk logged.
  • User-defined tables include the global temporary tables such as ##t, and local temporary tables such as #t.
  • Local temporary tables also include table variables such as @t and the mapping index for online clustered index build with the SORT_IN_TEMPDB option.
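
For example, a quick look at a temporary table's footprint with sp_spaceused (a minimal sketch; #t is a hypothetical table):

CREATE TABLE #t (id INT)
INSERT INTO #t VALUES (1)
-- Report the space reserved and used by the temporary table in tempdb
EXEC tempdb.dbo.sp_spaceused '#t'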

Performance Enhancements in SQL Server 2005 or later:

  1. The tempdb logging optimization avoids logging the “after value” in certain log records in tempdb.
  2. Instant data file initialization works by not zeroing out the NTFS file when the file is created or when the size of the file is increased.
  3. There is less use of the UP type page latch when allocating pages and extents in tempdb.
  4. Proportional fill has been optimized to reduce UP latch contention.
  5. There is now deferred drop in tempdb.
  6. Worktable caching is improved.
  7. SQL Server 2005 or later caches the temporary table that is created by using a CREATE TABLE or SELECT INTO statement.

Recommendations for managing tempdb:

  • The tempdb files must be configured with an initial size and auto-growth based on your workloads. Do not leave them at the default sizes.
  • The tempdb files must be located on RAID0 (for better performance), RAID1 (if you have more writes than reads), or RAID5 (if you have more reads than writes). RAID10 is the best option, but not all companies can justify it.
  • The tempdb files must be located on separate disks to avoid contention issues and improve performance.
  • Tempdb must be created with one data file per physical processor if there are eight or fewer CPUs. If there are more than eight CPUs, start off with eight data files and then increase the number of files by four whenever PAGELATCH contention appears on one of the allocation bitmap pages (PFS, GAM, and SGAM pages), repeating until the issue is solved; a query to spot this contention follows this list.
  • Do not forget: the tempdb data files and log file should not grow too much in a short time. If this happens, something is wrong, and you need to analyze your workload and find the queries you have to optimize.
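
A sketch of a way to spot that allocation-page contention using sys.dm_os_waiting_tasks (tempdb is database ID 2, and resource_description has the form dbid:fileid:pageid):

-- Tasks currently waiting on page latches for tempdb pages
SELECT session_id, wait_type, wait_duration_ms, resource_description
FROM sys.dm_os_waiting_tasks
WHERE wait_type LIKE 'PAGELATCH%'
  AND resource_description LIKE '2:%'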

If you have any questions or comments about tempdb database please let me know.

If you like this post then follow @percyreyes on twitter.

As SQL DBAs, we often have to deal with errors in SQL Server replication. The vast majority of errors are reported by the replication agents: the Distribution Agent, Log Reader Agent, Snapshot Agent, Queue Reader Agent, and so on. Now, what happens when one of the agents reports several errors and the majority are known, duplicated, or related to FK/PK conflicts? Solving all the errors one by one, or reconfiguring replication, takes a lot of time, and the longer it takes the more the business is impacted. So you need an option that lets the operation continue while you work on fixing the errors as soon as possible. This option is the SkipErrors parameter.

In this post I will explain how to use the SkipErrors parameter, which allows specific errors to be skipped so that data synchronization is not stopped and the operation of the business is not affected. The SkipErrors parameter is configurable in the Distribution Agent profile. It is most often used to skip errors related to primary key or foreign key conflicts raised within the Distribution Agent process while transactions are being replicated to the subscribers, but you can use it to skip whatever errors you decide.

The following picture shows an FK error (error code 547) in the Distribution Agent process; you can see how the transactions are queuing up behind this error, so you need to fix it to allow the rest of the transactions to be replicated. The Distribution Agent reads the msrepl_commands table sequentially to get the commands to execute on the subscribers: first into the queue, first out to be replicated.

[Screenshot: Distribution Agent reporting error 547]

Another common error where you can use SkipErrors is “The row was not found at the Subscriber when applying the replicated command”. The error code in this case is 20598.

[Screenshot: Distribution Agent reporting error 20598]

The SkipErrors parameter takes as input the numbers of the errors you want to skip, and it is configured in the Distribution Agent profile. You need to create a customized profile based on the default profile, and then indicate the error numbers separated by colons, as you can see in the following picture: “20598:547”.

[Screenshot: customized Distribution Agent profile with SkipErrors set to 20598:547]
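
If you are not sure which error numbers the agent is hitting, you can check the MSrepl_errors table in the distribution database (a sketch, assuming the distribution database is named distribution):

USE distribution
GO
-- Most recent errors logged by the replication agents
SELECT TOP (20) time, error_code, error_text
FROM dbo.MSrepl_errors
ORDER BY time DESC
GO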

After you have made this change, you need to restart the Distribution Agent. When it starts up, it will load the new customized profile with the error codes to be skipped. You will see something like the picture below, with many errors being skipped.

[Screenshot: Distribution Agent skipping the configured errors]

As you can see, many errors were skipped and the Distribution Agent is running without problems. This also means that the transactions with these errors were lost: you cannot recover them because they were ignored, and this may affect your data consistency. Do not forget that you must use the SkipErrors parameter with caution, working together with your analysts or application administrators to make the best decision for your business.

If you have any questions or comments please let me know.

If you like this post then follow @percyreyes on twitter.


You can get an error like the one shown below when you try to connect to Object Explorer using an older version of SSMS (SQL Server Management Studio), for example, connecting to SQL Server 2012 Object Explorer using SQL Server 2005 SSMS.

[Screenshot: index-out-of-bounds error when connecting Object Explorer]

The error is raised because the newer version supports more features and therefore has more folders to display, and SQL Server 2005 SSMS cannot display these new folders because the maximum array index is outside the bounds it supports. However, this error only occurs when you connect to Object Explorer; it does not happen when you open a query connection. In these situations, the general recommendation is to always use the latest SSMS to connect to older versions. I hope this comment is useful for you!

If you like this post then follow @percyreyes on twitter.


Many Microsoft Server eBooks are still available for free download. They are quick references and very useful resources to start your adventure in the database world. I hope you enjoy them, and do not forget to share them with your contacts.

5 Tips for a Smooth SSIS Upgrade to SQL Server 2012
A Hitchhiker's Guide to Microsoft StreamInsight Queries
Books Online: Backup and Restore of SQL Server Databases
Books Online: Data Analysis Expressions (DAX) Reference
Books Online: Data Mining Extensions (DMX) Reference
Books Online: Data Quality Services
Books Online: High Availability Solutions
Books Online: Master Data Services
Books Online: Monitor and Tune for Performance
Books Online: Multidimensional Expressions (MDX) Reference
Books Online: SQL Server Distributed Replay
Books Online: Transact-SQL Data Definition Language (DDL) Reference
Books Online: Transact-SQL Data Manipulation Language (DML) Reference
Books Online: XQuery Language Reference
Data Access for Highly-Scalable Solutions: Using SQL, NoSQL, and Polyglot Persistence
Extracting and Loading SharePoint Data in SQL Server Integration Services
Integration Services: Extending Packages with Scripting
Introducing Microsoft SQL Server 2008 R2
Introducing Microsoft SQL Server 2012
Master Data Services Capacity Guidelines
Master Data Services (MDS) Operations Guide
Microsoft SQL Server AlwaysOn Solutions Guide for High Availability and Disaster Recovery
Microsoft SQL Server Analysis Services Multidimensional Performance and Operations Guide
Multidimensional Model Programming
Optimized Bulk Loading of Data into Oracle
Planning Disaster Recovery for Microsoft SQL Server Reporting Services in SharePoint Integrated Mode
QuickStart: Learn DAX Basics in 30 Minutes
SQLCAT's Guide to BI and Analytics
SQLCAT's Guide to High Availability Disaster Recovery
SQLCAT's Guide to Relational Engine
SQL Server 2012 Tutorials: Analysis Services - Data Mining
SQL Server 2012 Tutorials: Analysis Services - Multidimensional Modeling
SQL Server 2012 Tutorials: Analysis Services - Tabular Modeling
SQL Server 2012 Tutorials: Reporting Services
SQL Server 2012 Tutorials: Writing Transact-SQL Statements
SQL Server 2012 Upgrade Technical Guide
SQL Server Community FAQs Manual

If you like this post then follow @percyreyes on twitter.
