This blog is for SQL developers who would like to transform themselves into DBAs.
Tuesday, September 7, 2010
SQL Monitoring Tool – SQL Profiler
Now let us see how to use SQL Profiler and when to use it.
Go to Start Menu -> Run and type profiler. In the UI provide the server, user and password, then choose the default trace and click on Run, and we can see the commands which are fired against the server.
The basic information we receive from Profiler is TextData, ApplicationName, LoginName, Reads, Writes, CPU, Duration, SPID, StartTime and EndTime, of which the main column is TextData, which shows the data sent from the client to the server along with the command executed.
Say, for example, I execute a procedure Proc_Test with Parameter1 value 'Test' and Parameter2 value 120; then TextData would show the value as
EXEC Proc_Test 'Test',120
This is very useful for troubleshooting. Many times we end up with application users saying it is a query or procedure bug, and we finally find that the application passed a hard-coded value because the application developer forgot to change the code, or the parameter values were jumbled. So the first thing any database developer needs to do when a bug is logged saying no rows were returned or wrong data was fetched from a procedure is to run Profiler and confirm that correct values are passed to the database before analyzing the issue. This saves a lot of human error and speeds up the resolution of issues.
There is other useful information available which helps us troubleshoot from a performance perspective:
Reads - How many reads were performed for each statement, which helps us determine how much data is read from disk instead of the buffer.
Writes - How many writes were performed for each statement, from which we can again know how much I/O is being consumed.
CPU - The amount of CPU time the query consumed.
Duration - The amount of time it took to execute the statement.
Needless to say, Duration is the first column to look at: locate the statements with the highest duration, then take the statement from TextData and analyze further whether the reads and writes for those statements are appropriate, or whether tuning is required from an index perspective or the query logic needs a re-write. To get statement level we need to use the SQL:BatchCompleted event, as RPC:Completed would just return the procedure level and not the statement level.
Also, as responsible developers we need to understand that SQL Profiler itself consumes some amount of CPU on the server as it monitors the server continuously, which is why it is restricted to sysadmins. So use SQL Profiler cautiously and do not choose it for every troubleshooting issue; talk to the DBA when in real need, and the DBA can grant the user permission to run it.
One more good option we get from Profiler is to save these traces as a trace file or to a SQL Server table, so we can analyze the traces later for troubleshooting.
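For example, a trace file saved from Profiler can be pulled back into a table with the built-in fn_trace_gettable function and queried like any other table; the file path and table name below are just placeholders:
-- Load a saved trace file into a table for analysis (hypothetical path and table name)
SELECT * INTO Trace_Analysis
FROM sys.fn_trace_gettable('D:\Traces\MyTrace.trc', DEFAULT)
-- For instance, the 10 longest-running statements captured:
SELECT TOP 10 TextData, Duration, Reads, Writes, CPU
FROM Trace_Analysis
ORDER BY Duration DESC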
In the coming weeks we will see some more features of SQL Profiler to leverage this tool in a still better way to solve many of our issues.
Dynamic Statements - sp_executesql Vs EXEC
The greatest advantage of a stored procedure, apart from reusability, is that its code is compiled and the execution plan is cached, so SQL Server saves some execution time by not invoking the compiler to create an execution plan each and every time the statements are executed. What would happen if we need to use dynamic statements in our procedure due to a business need: would they be compiled once and then cached, or is an execution plan created each time?
The answer is: it depends on how we invoke the dynamic statements. Generally there are 2 ways of invoking dynamic statements: the EXEC statement, and sp_executesql. Of these, sp_executesql is the one which should be used, as its plan is cached.
As always let us see an example to understand this better:
IF EXISTS ( SELECT 'X' FROM sysobjects where TYPE = 'U' and name = 'TEST_SYS_CACHE')
DROP TABLE TEST_SYS_CACHE
GO
CREATE TABLE TEST_SYS_CACHE
( Col1_Int INT IDENTITY(1,1) NOT NULL,
Col2_Varchar VARCHAR(255) NOT NULL,
Col3_Varchar VARCHAR(2000) NOT NULL,
COl4_Flag BIT NOT NULL
)
--Populate 1000 rows
DECLARE @iCount INTEGER
SELECT @iCount = 1
WHILE @iCount <= 1000
BEGIN
INSERT INTO TEST_SYS_CACHE(Col2_Varchar,Col3_Varchar,COl4_Flag)
SELECT 'ABC' + CONVERT(VARCHAR,@iCount),'\\ABC' + CONVERT(VARCHAR,@iCount) + '\Share',1
SELECT @iCount = @iCount + 1
END
DBCC FREEPROCCACHE
--Execute this query multiple times and you would see usecount is always 1
DECLARE @Col_Value_E INT
DECLARE @Qry_Cache_E VARCHAR(7000)
SELECT @Col_Value_E = 100
SELECT @Qry_Cache_E = 'SELECT * FROM TEST_SYS_CACHE WHERE Col1_Int = ' + CONVERT(VARCHAR,@Col_Value_E)
EXEC (@Qry_Cache_E)
SELECT sql,usecounts,* from sys.syscacheobjects WHERE objtype = 'Prepared'
DBCC FREEPROCCACHE
--Execute this query multiple times and you would see usecount keeps increasing
DECLARE @Col_Value INT
DECLARE @Qry_Cache NVARCHAR(4000)
SELECT @Col_Value = 100
SELECT @Qry_Cache = 'SELECT * FROM TEST_SYS_CACHE WHERE Col1_Int = @Col_Value'
EXEC sp_executesql @Qry_Cache, N'@Col_Value INT', @Col_Value = @Col_Value
SELECT sql,usecounts,* from sys.syscacheobjects WHERE objtype = 'Prepared'
/* Output
sql usecounts
(@1 tinyint)SELECT * FROM [TEST_SYS_CACHE] WHERE [Col1_Int]=@1 1
sql usecounts
(@Col_Value INT)SELECT * FROM TEST_SYS_CACHE WHERE Col1_Int = @Col_Value 4
*/
So from the above we can infer that the sp_executesql plan is cached and reused for subsequent runs, but with EXEC the plan is recreated each time.
Tuesday, August 17, 2010
How to remove a specific query plan from cache.
Example: Error Line: 262; Error Severity: 16; Error State: 4;
Error Message: Could not complete cursor operation because the table schema changed after the cursor was declared.
In such cases the below query comes in handy to clear the specific plan from the cache.
-- Use the below query to get the handle of the query which is being run
SELECT plan_handle, st.text FROM sys.dm_exec_cached_plans CROSS APPLY sys.dm_exec_sql_text(plan_handle) AS st
WHERE text LIKE N'SELECT * FROM TableA%'
GO
-- Use the plan handle to remove the specific plan from the cache
DBCC FREEPROCCACHE (0x060006001ECA270EC0215D05000000000000000000000000)
GO
Monday, August 9, 2010
Initiate a Backup of database
The below command takes the database name as an input parameter, creates a folder based on the database name and places the backup file inside it. The backup file name is made unique by including the date and time (down to the second) in the file name.
DECLARE @DBNAME VARCHAR(100)
DECLARE @EXEC_STR VARCHAR(100)
DECLARE @Backup_Folder VARCHAR(5000)
DECLARE @Backup_Path VARCHAR(5000)
SELECT @DBNAME = 'master'
SELECT @Backup_Folder = 'E:\' + @DBNAME + '\Backup\'
SELECT @EXEC_STR = 'md ' + @Backup_Folder
exec master..xp_cmdshell @EXEC_STR
SELECT @Backup_Path = @Backup_Folder + @DBNAME + '_Full_' + replace(replace(convert(varchar(20),getdate(),113),':','_'),' ' , '_') + '.bak'
SELECT @Backup_Path
BACKUP DATABASE @DBNAME TO DISK = @Backup_Path WITH NOFORMAT
, NOINIT
, NAME = 'Full Database Backup'
, SKIP
, STATS = 5
GO
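As a quick sanity check (not a restore), the generated file can then be verified; the file name below is hypothetical, following the naming pattern produced above:
RESTORE VERIFYONLY
FROM DISK = 'E:\master\Backup\master_Full_07_Sep_2010_14_30_05.bak'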
Calculate BACKUP & Restore Time Estimate
SELECT command,
s.text,
start_time,
percent_complete,
CAST(((DATEDIFF(s,start_time,GetDate()))/3600) as varchar) + ' hour(s), '
+ CAST((DATEDIFF(s,start_time,GetDate())%3600)/60 as varchar) + 'min, '
+ CAST((DATEDIFF(s,start_time,GetDate())%60) as varchar) + ' sec' as running_time,
CAST((estimated_completion_time/3600000) as varchar) + ' hour(s), '
+ CAST((estimated_completion_time %3600000)/60000 as varchar) + 'min, '
+ CAST((estimated_completion_time %60000)/1000 as varchar) + ' sec' as est_time_to_go,
dateadd(second,estimated_completion_time/1000, getdate()) as est_completion_time
FROM sys.dm_exec_requests r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) s
WHERE r.command in ('RESTORE DATABASE', 'BACKUP DATABASE', 'RESTORE LOG', 'BACKUP LOG')
Friday, August 6, 2010
What is Selectivity
The ratio of the number of distinct values in the indexed column to the number of records in the table is called the selectivity of an index. So it is obvious that the ideal selectivity would be 1 (or close to 1), which means most of the values in the column are unique.
Examples of Good Selectivity:
Consider a table having say 1000 records where one of the indexed columns has 900 distinct values; the selectivity would be 900/1000 = 0.90.
Indexes on such columns are the most selective and the most effective in optimizing performance. So, the selectivity of an index expresses what fraction of the rows in the table carry distinct values in the indexed column.
Examples of Bad Selectivity:
Consider a table having say 1000 records where one of the indexed columns has 50 distinct values; the selectivity would be 50/1000 = 0.05.
Here it is evident that a full table scan is more efficient than using such an index, where much more I/O is needed to repeatedly scan the index and the table.
B-TREE indexes improve the performance of queries which select a small percentage of rows from a table. As a rule of thumb, we should create indexes on tables that are often queried for less than 15% of the table's rows.
Let us understand selectivity by means of an example.
IF EXISTS ( SELECT 'X' FROM sysobjects where TYPE = 'U' and name = 'TEST_INDEX_SEL')
DROP TABLE TEST_INDEX_SEL
GO
CREATE TABLE TEST_INDEX_SEL
( Col1_Int INT IDENTITY(1,1) NOT NULL,
Col2_Varchar VARCHAR(255) NOT NULL,
Col3_Varchar VARCHAR(2000) NOT NULL,
COl4_Flag BIT NOT NULL
)
--Populate 1000 rows
DECLARE @iCount INTEGER
SELECT @iCount = 1
WHILE @iCount <= 1000
BEGIN
INSERT INTO TEST_INDEX_SEL(Col2_Varchar,Col3_Varchar,COl4_Flag)
SELECT 'ABC' + CONVERT(VARCHAR,@iCount),'\\ABC' + CONVERT (VARCHAR,@iCount) + '\Share',1
SELECT @iCount = @iCount + 1
END
--Update some rows randomly for sampling
UPDATE TEST_INDEX_SEL
SET Col2_Varchar = 'ABC2',
Col3_Varchar = '\\ABC5\Share',
COl4_Flag = 0
WHERE Col1_Int IN ( SELECT TOP 100 COL1_INT FROM TEST_INDEX_SEL ORDER BY NEWID())
UPDATE TEST_INDEX_SEL
SET Col2_Varchar = 'ABC203',
Col3_Varchar = '\\ABC764\Share',
COl4_Flag = 0
WHERE Col1_Int IN ( SELECT TOP 100 COL1_INT FROM TEST_INDEX_SEL ORDER BY NEWID())
UPDATE TEST_INDEX_SEL
SET Col3_Varchar = '\\ABC378\Share',
COl4_Flag = 0
WHERE Col1_Int IN ( SELECT TOP 100 COL1_INT FROM TEST_INDEX_SEL ORDER BY NEWID())
Now, for example, the selectivity of Col2_Varchar can be calculated as follows:
SELECT COUNT (DISTINCT Col2_Varchar) FROM TEST_INDEX_SEL
-- Output
-- 808
SELECT COUNT ('X') FROM TEST_INDEX_SEL
-- Output
-- 1000
So the selectivity is arrived at by dividing these 2 values, which is not bad at 0.808, just about 20% away from the ideal value of 1:
SELECT 808.0/1000 -- = 0.808
--To find for all columns in that table
SELECT CONVERT(NUMERIC(5,2),COUNT (DISTINCT Col2_Varchar))/
(SELECT COUNT ('X') FROM TEST_INDEX_SEL),
CONVERT(NUMERIC(5,2),COUNT (DISTINCT Col3_Varchar))/
(SELECT COUNT ('X') FROM TEST_INDEX_SEL),
CONVERT(NUMERIC(5,2),COUNT (DISTINCT COl4_Flag))/
(SELECT COUNT ('X') FROM TEST_INDEX_SEL)
FROM TEST_INDEX_SEL T
/* Output
Col2_Selecvity Col3_Selectivity Col4_Selectivity
0.8080000000000 0.7260000000000 0.0020000000000
*/
From the above we can infer that it is ideal to create an index on Col2 if it is used in WHERE clauses, while it would not help much to create one on Col4, as it has low selectivity. This is the reason why queries against tables filtering on a bit column holding 0 or 1 tend to be slow: such columns have very low selectivity.
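To cross-check this against what the optimizer itself tracks, one can create an index and look at the density ('All density' = 1 / number of distinct values, the inverse view of selectivity) reported by DBCC SHOW_STATISTICS; the index name here is just an assumption for illustration:
CREATE INDEX IX_TEST_SEL_COL2 ON TEST_INDEX_SEL(Col2_Varchar)
DBCC SHOW_STATISTICS ('TEST_INDEX_SEL', IX_TEST_SEL_COL2)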
Wednesday, August 4, 2010
Query to change default data and log path
USE master
GO
EXEC xp_instance_regwrite N'HKEY_LOCAL_MACHINE', N'Software\Microsoft\MSSQLServer\MSSQLServer', N'DefaultData', REG_SZ, N'D:\SQLDB\DATA'
GO
EXEC xp_instance_regwrite N'HKEY_LOCAL_MACHINE', N'Software\Microsoft\MSSQLServer\MSSQLServer', N'DefaultLog', REG_SZ, N'Z:\SQLDB\LOG'
GO
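The values can be read back with xp_instance_regread (undocumented, like its write counterpart) to confirm the change; note that the SQL Server service must be restarted before new databases pick up the new defaults:
EXEC xp_instance_regread N'HKEY_LOCAL_MACHINE', N'Software\Microsoft\MSSQLServer\MSSQLServer', N'DefaultData'
GO
EXEC xp_instance_regread N'HKEY_LOCAL_MACHINE', N'Software\Microsoft\MSSQLServer\MSSQLServer', N'DefaultLog'
GO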
Create Linked Server using Query
EXEC master.dbo.sp_addlinkedserver @server=N'Linked_Server_Name', @srvproduct=N'', @provider=N'SQLNCLI', @datasrc=N'SQL_Instance'
GO
EXEC master.dbo.sp_addlinkedsrvlogin @rmtsrvname=N'Linked_Server_Name',@useself=N'False',@locallogin=NULL,@rmtuser=N'username',@rmtpassword='password'
GO
EXEC master.dbo.sp_serveroption @server=N'Linked_Server_Name', @optname=N'collation compatible', @optvalue=N'false'
GO
EXEC master.dbo.sp_serveroption @server=N'Linked_Server_Name', @optname=N'data access', @optvalue=N'true'
GO
EXEC master.dbo.sp_serveroption @server=N'Linked_Server_Name', @optname=N'dist', @optvalue=N'false'
GO
EXEC master.dbo.sp_serveroption @server=N'Linked_Server_Name', @optname=N'pub', @optvalue=N'false'
GO
EXEC master.dbo.sp_serveroption @server=N'Linked_Server_Name', @optname=N'rpc', @optvalue=N'true'
GO
EXEC master.dbo.sp_serveroption @server=N'Linked_Server_Name', @optname=N'rpc out', @optvalue=N'true'
GO
EXEC master.dbo.sp_serveroption @server=N'Linked_Server_Name', @optname=N'sub', @optvalue=N'false'
GO
EXEC master.dbo.sp_serveroption @server=N'Linked_Server_Name', @optname=N'connect timeout', @optvalue=N'0'
GO
EXEC master.dbo.sp_serveroption @server=N'Linked_Server_Name', @optname=N'collation name', @optvalue=null
GO
EXEC master.dbo.sp_serveroption @server=N'Linked_Server_Name', @optname=N'lazy schema validation', @optvalue=N'false'
GO
EXEC master.dbo.sp_serveroption @server=N'Linked_Server_Name', @optname=N'query timeout', @optvalue=N'0'
GO
EXEC master.dbo.sp_serveroption @server=N'Linked_Server_Name', @optname=N'use remote collation', @optvalue=N'true'
GO
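Once created, the linked server can be smoke-tested with sp_testlinkedserver and a simple four-part query (Linked_Server_Name as above is a placeholder):
EXEC sp_testlinkedserver N'Linked_Server_Name'
GO
SELECT name FROM [Linked_Server_Name].master.sys.databases
GO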
Friday, July 30, 2010
Avoid Functions on Indexed Columns
When functions are wrapped around indexed columns in the WHERE clause, SQL Server cannot use the indexes, which results in poor performance of a query.
Some of the functions we generally use in the WHERE clause are CONVERT, SUBSTRING, DATEDIFF, etc. Let us understand the performance implications by means of the example below.
SET NOCOUNT ON
SET STATISTICS TIME OFF
IF EXISTS ( SELECT 'X' FROM sysobjects where TYPE = 'U' and name = 'TEST_FUNCTION_PERF')
DROP TABLE TEST_FUNCTION_PERF
GO
--Create a table with 3 columns
CREATE TABLE TEST_FUNCTION_PERF
( Col1_Int INT IDENTITY(1,1) NOT NULL,
Col2_Date DATETIME NOT NULL,
Col3_Varchar VARCHAR(255) NOT NULL
)
--Let us create a clustered index on the date column; it can be on any column which we would use in the where clause
CREATE CLUSTERED INDEX IX_TEST ON TEST_FUNCTION_PERF(Col2_Date)
--Populate table with 90000 records
DECLARE @iCount INTEGER
SELECT @iCount = 1
WHILE @iCount <= 90000
BEGIN
INSERT INTO TEST_FUNCTION_PERF(Col2_Date,Col3_Varchar)
SELECT GETDATE()- @iCount,'ABC' + CONVERT(VARCHAR,@iCount)
SELECT @iCount = @iCount + 1
END
Now let us enable statistics to know how our queries perform in terms of CPU time and elapsed time, for the 3 different queries below which produce the same result of 59 rows.
SET STATISTICS TIME ON
--Query 1
SELECT COL2_DATE FROM TEST_FUNCTION_PERF WHERE CONVERT(VARCHAR(10),COL2_DATE,101) > GETDATE()- 60
--Query 2
SELECT COL2_DATE FROM TEST_FUNCTION_PERF WHERE DATEDIFF(DD,COL2_DATE,GETDATE()) < 60
--Query 3
SELECT COL2_DATE FROM TEST_FUNCTION_PERF WHERE COL2_DATE > DATEADD(DD,-60,GETDATE())
/*
SQL Server Execution Times:
CPU time = 359 ms, elapsed time = 438 ms.
SQL Server Execution Times:
CPU time = 31 ms, elapsed time = 63 ms.
SQL Server Execution Times:
CPU time = 0 ms, elapsed time = 0 ms.
*/
If you look at the execution plans of these queries, the first 2 queries would have gone for an Index Scan, while the last one uses an Index Seek, which is optimal. So the best way to write such queries is the Query 3 format, i.e. by keeping indexed columns free of functions.
Friday, July 9, 2010
Query to get User and Role mappings
select 'EXEC sp_addrolemember ''' + g.name + ''',[' + u.name + ']'
from sysusers u, sysusers g, sysmembers m
where g.uid = m.groupuid
and g.issqlrole = 1
and u.uid = m.memberuid
--SQL 2005
select 'EXEC sp_addrolemember @rolename =''' + c.name + ''',@membername = ''' + b.name + '''' from sys.database_role_members a, sysusers b , sys.database_principals c
where b.uid=a.member_principal_id
and a.role_principal_id=c.principal_id
Query to find when the last Log Backup happened
select backup_finish_date,database_name
from msdb..backupset o
where backup_finish_date = ( select max(backup_finish_date)
from msdb..backupset i
where i.database_name = o.database_name
and type = 'L')
and type = 'L'
and database_name in ( select name from master..sysdatabases)
Query to find when the last Full Backup happened
select backup_finish_date,database_name
from msdb..backupset o
where backup_finish_date = ( select max(backup_finish_date)
from msdb..backupset i
where i.database_name = o.database_name
and type = 'D')
and type = 'D'
and database_name in ( select name from master..sysdatabases)
Understand Isolation Levels with Query Example
- Read Uncommitted
- Read Committed
- Repeatable Read
- Serializable
It is better to understand each of these by example rather than by reading through definitions of what each means.
IF EXISTS ( SELECT 'X' FROM sysobjects where TYPE = 'U' and name = 'DOCS_HISTORY')
DROP TABLE DOCS_HISTORY
GO
-- Create table named DOCS_HISTORY
CREATE TABLE [dbo].[DOCS_HISTORY](
[DOC_ID] [int] IDENTITY(1,1) NOT NULL,
[DOC_NAME] [varchar](100) NOT NULL,
[DOC_SIZE] [bigint] NULL,
[DOC_UPD_DATE] [datetime] NULL,
CONSTRAINT [PK_DOCS_HISTORY] PRIMARY KEY CLUSTERED([DOC_ID] ASC))
-- Insert a committed row in DOCS_HISTORY
BEGIN TRAN
INSERT INTO [DOCS_HISTORY](DOC_NAME,DOC_SIZE,DOC_UPD_DATE)
SELECT 'TESTDOC1',100,getdate()-3
COMMIT
-- Insert an uncommitted row in DOCS_HISTORY
BEGIN TRAN
INSERT INTO [DOCS_HISTORY](DOC_NAME,DOC_SIZE,DOC_UPD_DATE)
SELECT 'TESTDOC2',200,getdate()-2
If we execute the below commands in another session in SQL Server Management Studio:
-- Read Uncommitted Isolation Level returns 2 rows and this can be used for generating reports
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
SELECT * FROM DOCS_HISTORY
-- Read Committed isolation level, which is the default, doesn't return rows; the query keeps on running as it is blocked by the uncommitted transaction
SET TRANSACTION ISOLATION LEVEL READ COMMITTED
SELECT * FROM DOCS_HISTORY
This is the reason you see DBAs running around to the person who is blocking other users because of an uncommitted transaction. So please keep transactions short, and commit or roll back the transaction once it is complete.
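A quick way to spot who is blocked and by whom is the sys.dm_exec_requests DMV (SQL 2005 and later); a minimal sketch:
SELECT session_id, blocking_session_id, wait_type, wait_time
FROM sys.dm_exec_requests
WHERE blocking_session_id <> 0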
For understanding Repeatable Read we need 3 sessions in SQL Server Management Studio.
--Management Studio Window 1
SET TRANSACTION ISOLATION LEVEL READ COMMITTED
BEGIN TRAN
SELECT * FROM [DOCS_HISTORY]
WAITFOR DELAY '00:01:00'
SELECT * FROM [DOCS_HISTORY]
COMMIT
--Management Studio Window 2
BEGIN TRAN
UPDATE [DOCS_HISTORY]
SET DOC_SIZE = DOC_SIZE + 500,
DOC_UPD_DATE = GETDATE()
COMMIT
-- Management Studio Window 3
BEGIN TRAN
INSERT INTO [DOCS_HISTORY](DOC_NAME,DOC_SIZE,DOC_UPD_DATE)
SELECT 'TESTDOC3',300,getdate()-1
COMMIT TRAN
All 3 statements would get executed, but Window 1 would return 2 different outputs under Read Committed, even though they are inside one transaction.
When the same commands are executed under the Repeatable Read isolation level (SET TRANSACTION ISOLATION LEVEL REPEATABLE READ), newer inserts are allowed to happen, but Window 2's update is not allowed to complete: since the same rows were read inside a transaction in Window 1, the update waits until Window 1 completes and only then updates the rows.
So Repeatable Read does not allow values to be changed for rows that were read inside a transaction until it completes; however, it allows newer records to be inserted. Such a record is a phantom record, as it was not present during execution of the 1st select statement.
The last isolation level is Serializable, and many would have guessed from the previous isolation levels what it does: it does not allow any data inconsistencies, i.e. it allows neither dirty reads, non-repeatable reads nor phantom reads.
Let us try to understand this also by means of same example.
--Management Studio Window 1
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
BEGIN TRAN
SELECT * FROM [DOCS_HISTORY]
WAITFOR DELAY '00:01:00'
SELECT * FROM [DOCS_HISTORY]
COMMIT
--Management Studio Window 2
BEGIN TRAN
UPDATE [DOCS_HISTORY]
SET DOC_SIZE = DOC_SIZE + 500,
DOC_UPD_DATE = GETDATE()
COMMIT
-- Management Studio Window 3
BEGIN TRAN
INSERT INTO [DOCS_HISTORY](DOC_NAME,DOC_SIZE,DOC_UPD_DATE)
SELECT 'TESTDOC3',300,getdate()-1
COMMIT TRAN
BEGIN TRAN
INSERT INTO [DOCS_HISTORY](DOC_NAME,DOC_SIZE,DOC_UPD_DATE)
SELECT 'TESTDOC4',400,getdate()
COMMIT TRAN
Run Window 1 first, followed by Window 2 and Window 3. You would notice that Window 2 and Window 3 are in a wait state and do not execute until Window 1 is completed.
This is the topmost isolation level, with no data inconsistencies between reads, but it is also the isolation level most likely to cause blocking and deadlocks.
There is also a new isolation level introduced in SQL 2005, SNAPSHOT isolation, which is very handy and behaves like a mix of SERIALIZABLE and READ COMMITTED: inside the transaction it always shows the snapshot of the data taken at the start of the transaction, and it does not block new inserts or updates from happening.
You can use the same example above to understand it, with the isolation level set as SET TRANSACTION ISOLATION LEVEL SNAPSHOT (note that the database must first have the ALLOW_SNAPSHOT_ISOLATION option enabled), and study the results.
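A minimal sketch of enabling that database option (the database name is hypothetical):
ALTER DATABASE TestDB SET ALLOW_SNAPSHOT_ISOLATION ON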
So one needs to look at the business model and choose the appropriate isolation level based on it; READ COMMITTED is the most widely used isolation level.
P.S:- Thanks to Guru Charan for explaining this.
Friday, July 2, 2010
TOP Clause Enhancements - SQL 2005
If anyone asks you how often you display, say, all the records of a table on a user interface, the answer would be: not always. We usually show only 20, or at most 100, records on one page, and if the user wants to see more he can always move to the next page to see the next set of records.
And when you want to display only, say, the top 20 of 100 documents, sorted by size, the TOP clause in SQL Server becomes handy.
The TOP clause was already present in SQL 2000 for serving this basic need, and it has been enhanced in SQL 2005, which makes the programmer's life still easier.
Drawbacks of TOP in SQL 2000:
1) The TOP clause is not dynamic and so cannot change based on user input.
Ex: Say the user asks the UI to return the top 30 rows and later needs the top 40 rows; this change of parameter is not available with the TOP clause, which in turn means TOP is not parameterized.
P.S: The same can be achieved by SET ROWCOUNT (a sketch follows below), but we need to remember to set it back to 0, otherwise all queries in the session will be capped at the same number of rows.
2) TOP Clause can be used only in SELECT statements and cannot be used in INSERT/UPDATE/DELETE statements.
Performance Drawback:
TOP is faster compared to SET ROWCOUNT, as the query optimizer can use the value of the expression in the TOP clause when generating an execution plan for the query. Since SET ROWCOUNT is set outside the statement that executes the query, its value cannot be used to generate the query plan.
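A minimal sketch of the SET ROWCOUNT workaround mentioned in drawback 1, using the DOCS_HISTORY table created below:
SET ROWCOUNT 30
SELECT * FROM [DOCS_HISTORY] ORDER BY DOC_UPD_DATE DESC
SET ROWCOUNT 0 -- remember to reset, or every later query in the session is capped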
IF EXISTS ( SELECT 'X' FROM sysobjects where TYPE = 'U' and name = 'DOCS_HISTORY')
DROP TABLE DOCS_HISTORY
GO
-- Create table named DOCS_HISTORY
CREATE TABLE [dbo].[DOCS_HISTORY](
[DOC_ID] [int] IDENTITY(1,1) NOT NULL,
[DOC_NAME] [varchar](100) NOT NULL,
[DOC_SIZE] [bigint] NULL,
[DOC_UPD_DATE] [datetime] NULL,
CONSTRAINT [PK_DOCS_HISTORY] PRIMARY KEY CLUSTERED([DOC_ID] ASC))
-- Insert into DOCS_HISTORY
INSERT INTO [DOCS_HISTORY](DOC_NAME,DOC_SIZE,DOC_UPD_DATE)
SELECT 'TESTDOC1',100,getdate()-10
INSERT INTO [DOCS_HISTORY](DOC_NAME,DOC_SIZE,DOC_UPD_DATE)
SELECT 'TESTDOC2',200,getdate()-9
INSERT INTO [DOCS_HISTORY](DOC_NAME,DOC_SIZE,DOC_UPD_DATE)
SELECT 'TESTDOC3',300,getdate()-8
INSERT INTO [DOCS_HISTORY](DOC_NAME,DOC_SIZE,DOC_UPD_DATE)
SELECT 'TESTDOC4',400,getdate()-7
INSERT INTO [DOCS_HISTORY](DOC_NAME,DOC_SIZE,DOC_UPD_DATE)
SELECT 'TESTDOC5',500,getdate()-6
INSERT INTO [DOCS_HISTORY](DOC_NAME,DOC_SIZE,DOC_UPD_DATE)
SELECT 'TESTDOC6',600,getdate()-5
INSERT INTO [DOCS_HISTORY](DOC_NAME,DOC_SIZE,DOC_UPD_DATE)
SELECT 'TESTDOC7',700,getdate()-4
INSERT INTO [DOCS_HISTORY](DOC_NAME,DOC_SIZE,DOC_UPD_DATE)
SELECT 'TESTDOC8',800,getdate()-3
INSERT INTO [DOCS_HISTORY](DOC_NAME,DOC_SIZE,DOC_UPD_DATE)
SELECT 'TESTDOC9',900,getdate()-2
INSERT INTO [DOCS_HISTORY](DOC_NAME,DOC_SIZE,DOC_UPD_DATE)
SELECT 'TESTDOC10',1000,getdate()-1
-- User decides on how many rows to see in UI which is given by parameter @UI_Records
DECLARE @UI_Records INT
SELECT @UI_Records = 5
SELECT TOP (@UI_Records) * FROM [DOCS_HISTORY]
ORDER BY DOC_UPD_DATE DESC
Paging Logic
Suppose you have shown 5 rows in a UI and want to return the next 5 rows (rows 6 to 10) when the user clicks the Next button on screen. One can use the below code to return rows 6 to 10; this works faster than holding the full recordset in .Net and implementing paging there.
SELECT * FROM ( SELECT TOP 5 * FROM (SELECT TOP 10 * FROM [DOCS_HISTORY] ORDER BY DOC_UPD_DATE ASC
) AS DOCS_HISTORY1 ORDER BY DOC_UPD_DATE DESC
) AS DOCS_HISTORY2 ORDER BY DOC_UPD_DATE DESC
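A commonly used alternative in SQL 2005 is the ROW_NUMBER() function, which states the paging intent directly; a minimal sketch returning the same rows 6 to 10:
SELECT DOC_ID, DOC_NAME, DOC_SIZE, DOC_UPD_DATE
FROM ( SELECT ROW_NUMBER() OVER (ORDER BY DOC_UPD_DATE DESC) AS RowNum, *
FROM [DOCS_HISTORY] ) AS T
WHERE RowNum BETWEEN 6 AND 10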
--Update only top 5 records whose size is > 300 with today date
UPDATE TOP (@UI_Records) [DOCS_HISTORY]
SET DOC_UPD_DATE = GETDATE()
WHERE DOC_SIZE > 300
--Delete top 5 records ordered by Doc_Upd_Date Descending
DELETE FROM [DOCS_HISTORY]
WHERE [DOC_ID] IN ( SELECT TOP (@UI_Records) [DOC_ID] FROM [DOCS_HISTORY]
ORDER BY DOC_UPD_DATE ASC)
Friday, June 25, 2010
OUTPUT is a new clause which can be paired with INSERT, UPDATE and DELETE statements to get the before and after image of the data being modified. It basically uses 2 magic tables called Inserted and Deleted for its operation.
The Inserted table contains a row for each row that is inserted, and the Deleted table contains a row for each row that is deleted, whereas for an update both tables contain one row per updated row: Deleted holds the pre-image of the data and Inserted holds the new data.
For Ex:
Col1 Col2
1 Peter
So if I update Peter to Peter Drucker, then the Inserted table would have (post-image):
Col1 Col2
1 Peter Drucker
And the Deleted table would have (pre-image):
Col1 Col2
1 Peter
This feature was available only inside triggers in SQL 2000; in SQL 2005 it is exposed and can be used even inside a stored procedure.
The ability to immediately access data changes without requiring additional reads from the database can enhance application throughput: when data modification statements are executed in SQL Server 2005, you can capture the data that has been modified and store it for use in subsequent transactions.
One can use this technique:
- To reduce trips from a .Net or Java application to the database, as the capture is combined with the INSERT/UPDATE/DELETE.
- To prevent unnecessary reading of your online transaction processing (OLTP) tables, which enhances application performance.
- To implement custom data auditing without the use of triggers.
Let us see some examples on how to use various OUTPUT clauses.
Using the OUTPUT clause to return the last value inserted into an identity column, which in SQL 2000 was possible only by means of the @@IDENTITY function:
CREATE TABLE [dbo].[DOCS_HISTORY]
( [DOC_ID] [int] IDENTITY(1,1) NOT NULL,
[DOC_NAME] [varchar](100) )
INSERT INTO [DOCS_HISTORY]
OUTPUT INSERTED.[DOC_ID],INSERTED.[DOC_NAME]
SELECT 'test docs'
Using OUTPUT and OUTPUT INTO to show the output and capture the before and after image into an audit table:
DECLARE @Tbl_Docs TABLE
( [DOC_ID_NEW] INT
, [DOC_NAME_NEW] VARCHAR(100),
[DOC_ID_OLD] INT ,
[DOC_NAME_OLD] VARCHAR(100),
CR_DATE DATETIME)
UPDATE [DOCS_HISTORY]
SET DOC_NAME = 'test documents'
OUTPUT INSERTED.[DOC_ID],
INSERTED.[DOC_NAME],
DELETED.[DOC_ID],
DELETED.[DOC_NAME],
GETDATE()
INTO @Tbl_Docs
OUTPUT INSERTED.[DOC_ID],
INSERTED.[DOC_NAME],
DELETED.[DOC_ID],
DELETED.[DOC_NAME]
WHERE DOC_ID = 100001
This is a very useful feature for auditing the pre and post image of data, which was only possible by means of a trigger in SQL 2000.
Similarly, OUTPUT can be used with the DELETE statement as well, to capture the data that was deleted via the DELETED table.
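A minimal sketch of that DELETE form (the DOC_ID value is hypothetical):
DELETE FROM [DOCS_HISTORY]
OUTPUT DELETED.[DOC_ID], DELETED.[DOC_NAME]
WHERE DOC_ID = 100001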
Friday, June 18, 2010
Using Schema Binding Option in User Defined Functions
The general notion is that using user-defined functions (UDFs) slows down your query performance, and so a UDF was not considered ideal even when it does not touch any tables inside the function.
If one looks at the execution plan where UDFs are involved, you can find an item named Table Spool, which in plain terms stores data from its input in a temporary table to optimize rewinds. This spool is used only when schema binding is not set for the function, and it causes an overhead in terms of performance.
Before going to much detail into our example let us understand what is schema binding?
Let us suppose I decide to drop or rename a table in a database; then all of my existing procedures/functions/views that reference it would get affected at execution time with an error like "Object does not exist". Schema binding binds the object to the schema of the underlying tables, so the tables cannot be dropped or altered in a way that breaks the bound object. It can be specified while creating the object (for functions, not procedures, in SQL 2005), as in:
Ex: CREATE FUNCTION dbo.FnA(@In INT) RETURNS INT WITH SCHEMABINDING AS BEGIN RETURN @In END
Let us see the above in terms of a example.
1) First we create the DOCS_HISTORY table with two columns (DOC_ID and DOC_NAME).
2) Insert some 10000 rows into the table.
3) Create 2 functions, one with schema binding and one without, which just prefix the value 'W' or 'WO' to the input.
4) Call the 2 functions in an UPDATE, and use the GETDATE function before and after to track which function performs better.
SET NOCOUNT ON
-- Create DOCS_HISTORY Table
IF EXISTS(SELECT 'X' FROM sysobjects WHERE type = 'U' and name = 'DOCS_HISTORY')
DROP TABLE DOCS_HISTORY
GO
CREATE TABLE DOCS_HISTORY
(
DOC_ID INT NOT NULL IDENTITY (1, 1),
DOC_NAME VARCHAR(100) NOT NULL
)
GO
ALTER TABLE DOCS_HISTORY ADD CONSTRAINT PK_DOCS_HISTORY PRIMARY KEY CLUSTERED(DOC_ID)
GO
--Insert 10000 rows
DECLARE @I INT
SELECT @I = 1
WHILE @I <= 10000
BEGIN
INSERT INTO DOCS_HISTORY(DOC_NAME)
SELECT 'DOC' + CONVERT(VARCHAR,@I)
SELECT @I = @I + 1
END
GO
--Create the function without schema binding
IF EXISTS(SELECT 'X' FROM sysobjects WHERE type = 'FN' and name = 'Return_WO_SchemaBinding')
DROP FUNCTION dbo.Return_WO_SchemaBinding
GO
CREATE FUNCTION dbo.Return_WO_SchemaBinding(@Input VARCHAR(100))
RETURNS VARCHAR(102)
AS
BEGIN
RETURN 'WO' + @Input
END
GO
--Create the function with schema binding
IF EXISTS(SELECT 'X' FROM sysobjects WHERE type = 'FN' and name = 'Return_W_SchemaBinding')
DROP FUNCTION dbo.Return_W_SchemaBinding
GO
CREATE FUNCTION dbo.Return_W_SchemaBinding(@Input VARCHAR(100))
RETURNS VARCHAR(101)
WITH SCHEMABINDING
AS
BEGIN
RETURN 'W' + @Input
END
GO
--Track GETDATE() before and after each update
SELECT GETDATE()
UPDATE DOCS_HISTORY SET DOC_NAME = dbo.Return_WO_SchemaBinding(DOC_NAME)
SELECT GETDATE()
UPDATE DOCS_HISTORY SET DOC_NAME = dbo.Return_W_SchemaBinding(DOC_NAME)
SELECT GETDATE()
With schema binding the update took 2 seconds to complete compared to 5 seconds without, which gives an amazing 60% better performance.
Friday, June 11, 2010
INTERSECT and EXCEPT Clause
These keywords come from set theory in mathematics, and the same names are adopted in SQL Server to make the coding style easier for developers.
INTERSECT and EXCEPT are exciting features which will save you tons of coding time, as they are a replacement for IN, EXISTS, NOT IN and NOT EXISTS.
They take the inputs and combine them into a single set. As with UNION, the columns in the sets must match in number and type.
Syntax:
SET_A SET_OPERATOR SET_B
where SET_A and SET_B are SELECT statements, and SET_OPERATOR is one of the following:
UNION - include all values in SET_A or SET_B.
INTERSECT - include only values that exist in both SET_A and SET_B.
EXCEPT - include values that exist in SET_A and not in SET_B.
Let us see an example to better understand this.
Example:-
Consider users who are defined in the managers role, whose entries are present in the MGR_PERM table, and users who are defined in the admin role, who are present in the ADM_PERM table. We can use the SET operators to find users who are in either one, in both, or in only one.
CREATE TABLE MGR_PERM
(
USERID VARCHAR(50),
USERNAME VARCHAR(100)
)
CREATE TABLE ADM_PERM
(
USERID VARCHAR(50),
USERNAME VARCHAR(100)
)
INSERT INTO MGR_PERM(USERID)
VALUES ('E0001')
INSERT INTO MGR_PERM(USERID)
VALUES ('E0002')
INSERT INTO MGR_PERM(USERID)
VALUES ('E0003')
INSERT INTO MGR_PERM(USERID)
VALUES ('E0004')
INSERT INTO MGR_PERM(USERID)
VALUES ('E0005')
INSERT INTO ADM_PERM(USERID)
VALUES('E0003')
INSERT INTO ADM_PERM(USERID)
VALUES('E0004')
INSERT INTO ADM_PERM(USERID)
VALUES('E0006')
INSERT INTO ADM_PERM(USERID)
VALUES('E0007')
SELECT * FROM MGR_PERM
SELECT * FROM ADM_PERM
-- List users who have Manager or Admin permissions
SELECT * FROM MGR_PERM
UNION
SELECT * FROM ADM_PERM
-- Result E0001, E0002, E0003, E0004, E0005, E0006, E0007
-- List users who have both Manager and Admin Permissions
SELECT * FROM MGR_PERM
INTERSECT
SELECT * FROM ADM_PERM
-- Result E0003, E0004
-- List users who are managers but not administrators
SELECT * FROM MGR_PERM
EXCEPT
SELECT * FROM ADM_PERM
-- Result E0001, E0002,E0005
-- List users who are administrators but not managers
SELECT * FROM ADM_PERM
EXCEPT
SELECT * FROM MGR_PERM
-- Result E0006, E0007
Difference Between Inner Join and INTERSECT
SELECT * FROM ADM_PERM
INTERSECT
SELECT * FROM MGR_PERM
-- Returns E0003, E0004
SELECT * FROM ADM_PERM
INNER JOIN MGR_PERM
ON ADM_PERM.USERID = MGR_PERM.USERID
AND ADM_PERM.USERNAME = MGR_PERM.USERNAME
-- Returns 0 rows as one NULL is not equal to another NULL
Thursday, June 3, 2010
Common Table Expressions (CTE)
A CTE is similar to a derived table in that it is not stored as an object and lasts only for the duration of the query. It is especially useful for writing recursive queries: in SQL 2000 we needed to write many lines of code to find a manager/employee hierarchy, and the same can be achieved optimally from SQL 2005 onwards using a CTE.
Below is a coding example.
/* Table which had Employee and Their Manager Names */
CREATE TABLE EMP
( EMP_ID varchar(10),
EMP_NAME varchar(50),
MGR_ID varchar(10)
)
/* Sample Data to be inserted */
/* Insert CEO data, who doesn't report to anyone and is the boss */
INSERT INTO EMP (EMP_ID,EMP_NAME,MGR_ID)
SELECT 'E0001','PAUL - CEO',NULL
/* Insert President Data who reports to CEO */
INSERT INTO EMP (EMP_ID,EMP_NAME,MGR_ID)
SELECT 'E0002','SAM - PRES','E0001'
/* Insert VP Data who reports to President */
INSERT INTO EMP (EMP_ID,EMP_NAME,MGR_ID)
SELECT 'E0003','MIKE - VP','E0002'
/* Insert another VP whose reporting is not finalized yet and so is null */
INSERT INTO EMP (EMP_ID,EMP_NAME,MGR_ID)
SELECT 'E0004','ROB - VP',NULL
/* Insert Director data, reporting to the new VP who doesn't have a reporting manager */
INSERT INTO EMP (EMP_ID,EMP_NAME,MGR_ID)
SELECT 'E0005','NATE - DIRECTOR','E0004'
/* Usage of Common Table Expressions (CTE):
This lists the employees with their respective
managers, the levels and the root manager IDs */
WITH EMP_HIER(EMP_ID, EMP_NAME, MGR_ID, LEVELS, ROOT_MGR_ID)
AS
(
SELECT EMP_ID, EMP_NAME, MGR_ID, 0, EMP_ID
FROM EMP
WHERE MGR_ID IS NULL
UNION ALL
SELECT e.EMP_ID, e.EMP_NAME, e.MGR_ID, LEVELS + 1, eh.ROOT_MGR_ID
FROM EMP e JOIN EMP_HIER eh
ON eh.EMP_ID = e.MGR_ID
)
SELECT * FROM EMP_HIER
Output
EMP_ID EMP_NAME MGR_ID LEVELS ROOT_MGR_ID
E0001 PAUL - CEO NULL 0 E0001
E0004 ROB - VP NULL 0 E0004
E0005 NATE - DIRECTOR E0004 1 E0004
E0002 SAM - PRES E0001 1 E0001
E0003 MIKE - VP E0002 2 E0001
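One caution with recursive CTEs: they stop with an error after 100 recursion levels by default. For deep hierarchies the limit can be adjusted with the MAXRECURSION hint on the outer statement, as in this sketch reusing the CTE above:
WITH EMP_HIER(EMP_ID, EMP_NAME, MGR_ID, LEVELS, ROOT_MGR_ID)
AS
(
SELECT EMP_ID, EMP_NAME, MGR_ID, 0, EMP_ID
FROM EMP
WHERE MGR_ID IS NULL
UNION ALL
SELECT e.EMP_ID, e.EMP_NAME, e.MGR_ID, LEVELS + 1, eh.ROOT_MGR_ID
FROM EMP e JOIN EMP_HIER eh
ON eh.EMP_ID = e.MGR_ID
)
SELECT * FROM EMP_HIER
OPTION (MAXRECURSION 1000)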
Wednesday, April 21, 2010
SQL Queries to get Physical Node Names and Shared Drive from Cluster
Query to get the physical nodes of SQL cluster
SELECT * FROM Fn_VirtualServerNodes()
Query to get the shared drives for a clustered instance
SELECT * FROM fn_servershareddrives()
SQL Query to find if SQL Server is clustered
USE MASTER
GO
SELECT @@servername 'ServerName',
ServerType = CASE ServerProperty('IsClustered')
WHEN 1 THEN 'Cluster'
WHEN 0 THEN 'Standalone'
ELSE 'Unknown' END,
@@version 'Version',
LEFT(CAST(SERVERPROPERTY('ProductLevel') as varchar(10)), 3) AS Level,
getdate() 'Date'
Not a valid win32 Application - SQL 2008
Many a time we get weird errors when installing on specific servers, and we need to locate whether it is an issue with the machine we are installing on or with the setup files we have copied.
One such case with setup files is described below, where the error message on the popup message box is weird; we get some more info in the error logs to troubleshoot, and ultimately we find it is an issue with some other DLL which needs to be present for setup to continue.
If you encounter the 'not a valid win32 image' error during a SQL 2008 install or upgrade, it is due to a corrupt DLL, and the same is reported as bad in detail_global_rules.txt.
To locate the corruption, use the Windows tool called depends.exe (Dependency Walker, a free download from the Microsoft website).
Let us take for example that Microsoft.SqlServer.Configuration.SqlServer_ConfigExtension.dll is the corrupt DLL reported. Use Dependency Walker to test it from the install directory where the setup files are present; for instance, if
D:\SQL2008\x86 is the install folder, then the command to use would be:
From the command prompt
D:\sql2008\x86>depends Microsoft.SqlServer.Configuration.SqlServer_ConfigExtension.dll
In the window panel you get the DLL which shows as faulty (needless to say, warnings can be ignored).
If you double-click the DLL which gives the error, you get the full path of the DLL reporting the error; the same can be copied fresh from a SQL 2008 install CD, and we are good to go with the SQL 2008 install.
How to Avoid Cursors - Way to Avoid Cursors - SQL Server
Many of us already know that cursors cause performance issues: the data traversed by a cursor is stored in tempdb, which means a lot of disk I/O is performed, causing degraded performance. Now let us see how to eliminate cursors from our code. First we create a sample table called Employee.
/* Creation of Employee Table Data */
CREATE TABLE Employee( Emp_ID INT IDENTITY(1,1),Emp_Name VARCHAR(100),Emp_City VARCHAR(50))
Secondly we insert sample data to the employee table
/* Insert Sample Data into Employee Table Data */
INSERT INTO Employee (Emp_Name,Emp_City) SELECT 'Emp1','MAS'
INSERT INTO Employee (Emp_Name,Emp_City) SELECT 'Emp2','HYD'
INSERT INTO Employee (Emp_Name,Emp_City) SELECT 'Emp3',NULL
INSERT INTO Employee (Emp_Name,Emp_City) SELECT 'Emp4','HYD'
INSERT INTO Employee (Emp_Name,Emp_City) SELECT 'Emp5','SBC'
To fetch employee data row by row with a cursor, we use the below code:
/*Cursors Usage */
SET NOCOUNT ON
DECLARE @EMP_ID VARCHAR(50)
DECLARE @EMP_NAME VARCHAR(50)
DECLARE EMP_CUR CURSOR FOR
SELECT EMP_ID, EMP_NAME FROM Employee
OPEN EMP_CUR
FETCH NEXT FROM EMP_CUR INTO @EMP_ID ,@EMP_NAME
WHILE @@FETCH_STATUS = 0
BEGIN
SELECT @EMP_ID,@EMP_NAME
FETCH NEXT FROM EMP_CUR INTO @EMP_ID ,@EMP_NAME
END
CLOSE EMP_CUR
DEALLOCATE EMP_CUR
SET NOCOUNT OFF
Now let's see the way to avoid the cursor using a simple WHILE loop:
/* Code to avoid cursor */
DECLARE @MinSlNo INT
DECLARE @MaxSlNo INT
DECLARE @TBL_EMP TABLE (SNo INT IDENTITY(1,1),
EMP_ID VARCHAR(50),
EMP_NAME VARCHAR(50))
INSERT INTO @TBL_EMP (EMP_ID,EMP_NAME)
SELECT EMP_ID,EMP_NAME FROM Employee
SELECT @MinSlNo = 1
SELECT @MaxSlNo = MAX(SNo) FROM @TBL_EMP
WHILE @MinSlNo <= @MaxSlNo
BEGIN
SELECT EMP_ID,EMP_NAME
FROM @TBL_EMP
WHERE SNo = @MinSlNo
SELECT @MinSlNo = @MinSlNo + 1
END
Rebuild Log in SQL 2005 - DBCC REBUILD_LOG - SQL Server detected a logical consistency-based I/O error
Error: 824, Severity: 24, State: 2
SQL Server detected a logical consistency-based I/O error: incorrect checksum (expected: 0xb28f8c1a; actual: 0xb37b1a45). It occurred during a read of page (2:0) in database ID 5 at offset 0000000000000000 in file 'D:\Log\TestRebuildLog.ldf'
Rebuilding the log should be the last option (not the first) in case of log corruption, used only after we have tried all other recovery or corruption-fix options. However, the old command does not work in SQL 2005 or higher.
DBCC REBUILD_LOG gives an error message:
Msg 2526, Level 16, State 3, Line 1
Incorrect DBCC statement. Check the documentation for the correct DBCC syntax and options.
So the command to use would be as follows:
Command:
ALTER DATABASE TestRebuildLog REBUILD LOG ON (NAME=TestRebuildLog_Log,FILENAME='D:\Data\TestRebuildLog_Log.ldf')
Monday, April 12, 2010
Run Profiler Trace from TSQL Command
-- Create a Queue
declare @rc int
declare @TraceID int
declare @maxfilesize bigint
select @maxfilesize = 5
-- Please replace the text InsertFileNameHere, with an appropriate
-- filename prefixed by a path, e.g., c:\MyFolder\MyTrace. The .trc extension
-- will be appended to the filename automatically. If you are writing from
-- remote server to local drive, please use UNC path and make sure server has
-- write access to your network share
exec @rc = sp_trace_create @TraceID output, 0, N'InsertFileNameHere',
@maxfilesize, NULL
if (@rc != 0)
goto error
-- Client side File and Table cannot be scripted
-- Set the events
declare @on bit
select @on = 1
exec sp_trace_setevent @TraceID, 78, 7, @on
exec sp_trace_setevent @TraceID, 78, 8, @on
exec sp_trace_setevent @TraceID, 78, 9, @on
exec sp_trace_setevent @TraceID, 78, 6, @on
exec sp_trace_setevent @TraceID, 78, 10, @on
exec sp_trace_setevent @TraceID, 78, 14, @on
exec sp_trace_setevent @TraceID, 78, 26, @on
exec sp_trace_setevent @TraceID, 78, 3, @on
exec sp_trace_setevent @TraceID, 78, 11, @on
exec sp_trace_setevent @TraceID, 78, 35, @on
exec sp_trace_setevent @TraceID, 78, 51, @on
exec sp_trace_setevent @TraceID, 78, 12, @on
exec sp_trace_setevent @TraceID, 78, 60, @on
exec sp_trace_setevent @TraceID, 74, 7, @on
exec sp_trace_setevent @TraceID, 74, 8, @on
exec sp_trace_setevent @TraceID, 74, 9, @on
exec sp_trace_setevent @TraceID, 74, 6, @on
exec sp_trace_setevent @TraceID, 74, 10, @on
exec sp_trace_setevent @TraceID, 74, 14, @on
exec sp_trace_setevent @TraceID, 74, 26, @on
exec sp_trace_setevent @TraceID, 74, 3, @on
exec sp_trace_setevent @TraceID, 74, 11, @on
exec sp_trace_setevent @TraceID, 74, 35, @on
exec sp_trace_setevent @TraceID, 74, 51, @on
exec sp_trace_setevent @TraceID, 74, 12, @on
exec sp_trace_setevent @TraceID, 74, 60, @on
exec sp_trace_setevent @TraceID, 53, 7, @on
exec sp_trace_setevent @TraceID, 53, 8, @on
exec sp_trace_setevent @TraceID, 53, 9, @on
exec sp_trace_setevent @TraceID, 53, 6, @on
exec sp_trace_setevent @TraceID, 53, 10, @on
exec sp_trace_setevent @TraceID, 53, 14, @on
exec sp_trace_setevent @TraceID, 53, 26, @on
exec sp_trace_setevent @TraceID, 53, 3, @on
exec sp_trace_setevent @TraceID, 53, 11, @on
exec sp_trace_setevent @TraceID, 53, 35, @on
exec sp_trace_setevent @TraceID, 53, 51, @on
exec sp_trace_setevent @TraceID, 53, 12, @on
exec sp_trace_setevent @TraceID, 53, 60, @on
exec sp_trace_setevent @TraceID, 70, 7, @on
exec sp_trace_setevent @TraceID, 70, 8, @on
exec sp_trace_setevent @TraceID, 70, 9, @on
exec sp_trace_setevent @TraceID, 70, 6, @on
exec sp_trace_setevent @TraceID, 70, 10, @on
exec sp_trace_setevent @TraceID, 70, 14, @on
exec sp_trace_setevent @TraceID, 70, 26, @on
exec sp_trace_setevent @TraceID, 70, 3, @on
exec sp_trace_setevent @TraceID, 70, 11, @on
exec sp_trace_setevent @TraceID, 70, 35, @on
exec sp_trace_setevent @TraceID, 70, 51, @on
exec sp_trace_setevent @TraceID, 70, 12, @on
exec sp_trace_setevent @TraceID, 70, 60, @on
exec sp_trace_setevent @TraceID, 77, 7, @on
exec sp_trace_setevent @TraceID, 77, 8, @on
exec sp_trace_setevent @TraceID, 77, 9, @on
exec sp_trace_setevent @TraceID, 77, 6, @on
exec sp_trace_setevent @TraceID, 77, 10, @on
exec sp_trace_setevent @TraceID, 77, 14, @on
exec sp_trace_setevent @TraceID, 77, 26, @on
exec sp_trace_setevent @TraceID, 77, 3, @on
exec sp_trace_setevent @TraceID, 77, 11, @on
exec sp_trace_setevent @TraceID, 77, 35, @on
exec sp_trace_setevent @TraceID, 77, 51, @on
exec sp_trace_setevent @TraceID, 77, 12, @on
exec sp_trace_setevent @TraceID, 77, 60, @on
exec sp_trace_setevent @TraceID, 14, 7, @on
exec sp_trace_setevent @TraceID, 14, 8, @on
exec sp_trace_setevent @TraceID, 14, 1, @on
exec sp_trace_setevent @TraceID, 14, 9, @on
exec sp_trace_setevent @TraceID, 14, 2, @on
exec sp_trace_setevent @TraceID, 14, 6, @on
exec sp_trace_setevent @TraceID, 14, 10, @on
exec sp_trace_setevent @TraceID, 14, 14, @on
exec sp_trace_setevent @TraceID, 14, 26, @on
exec sp_trace_setevent @TraceID, 14, 3, @on
exec sp_trace_setevent @TraceID, 14, 11, @on
exec sp_trace_setevent @TraceID, 14, 35, @on
exec sp_trace_setevent @TraceID, 14, 51, @on
exec sp_trace_setevent @TraceID, 14, 12, @on
exec sp_trace_setevent @TraceID, 14, 60, @on
exec sp_trace_setevent @TraceID, 17, 7, @on
exec sp_trace_setevent @TraceID, 17, 8, @on
exec sp_trace_setevent @TraceID, 17, 1, @on
exec sp_trace_setevent @TraceID, 17, 9, @on
exec sp_trace_setevent @TraceID, 17, 2, @on
exec sp_trace_setevent @TraceID, 17, 6, @on
exec sp_trace_setevent @TraceID, 17, 10, @on
exec sp_trace_setevent @TraceID, 17, 14, @on
exec sp_trace_setevent @TraceID, 17, 26, @on
exec sp_trace_setevent @TraceID, 17, 3, @on
exec sp_trace_setevent @TraceID, 17, 11, @on
exec sp_trace_setevent @TraceID, 17, 35, @on
exec sp_trace_setevent @TraceID, 17, 51, @on
exec sp_trace_setevent @TraceID, 17, 12, @on
exec sp_trace_setevent @TraceID, 17, 60, @on
exec sp_trace_setevent @TraceID, 100, 7, @on
exec sp_trace_setevent @TraceID, 100, 8, @on
exec sp_trace_setevent @TraceID, 100, 1, @on
exec sp_trace_setevent @TraceID, 100, 9, @on
exec sp_trace_setevent @TraceID, 100, 6, @on
exec sp_trace_setevent @TraceID, 100, 10,@on
exec sp_trace_setevent @TraceID, 100, 14, @on
exec sp_trace_setevent @TraceID, 100, 26, @on
exec sp_trace_setevent @TraceID, 100, 3, @on
exec sp_trace_setevent @TraceID, 100, 11, @on
exec sp_trace_setevent @TraceID, 100, 35, @on
exec sp_trace_setevent @TraceID, 100, 51, @on
exec sp_trace_setevent @TraceID, 100, 12, @on
exec sp_trace_setevent @TraceID, 100, 60, @on
exec sp_trace_setevent @TraceID, 10, 7, @on
exec sp_trace_setevent @TraceID, 10, 15, @on
exec sp_trace_setevent @TraceID, 10, 31, @on
exec sp_trace_setevent @TraceID, 10, 8, @on
exec sp_trace_setevent @TraceID, 10, 1, @on
exec sp_trace_setevent @TraceID, 10, 9, @on
exec sp_trace_setevent @TraceID, 10, 2, @on
exec sp_trace_setevent @TraceID, 10, 10, @on
exec sp_trace_setevent @TraceID, 10, 26, @on
exec sp_trace_setevent @TraceID, 10, 3, @on
exec sp_trace_setevent @TraceID, 10, 11, @on
exec sp_trace_setevent @TraceID, 10, 35, @on
exec sp_trace_setevent @TraceID, 10, 51, @on
exec sp_trace_setevent @TraceID, 10, 12, @on
exec sp_trace_setevent @TraceID, 10, 60, @on
exec sp_trace_setevent @TraceID, 10, 6, @on
exec sp_trace_setevent @TraceID, 10, 14, @on
exec sp_trace_setevent @TraceID, 11, 7, @on
exec sp_trace_setevent @TraceID, 11, 8, @on
exec sp_trace_setevent @TraceID, 11, 1, @on
exec sp_trace_setevent @TraceID, 11, 9, @on
exec sp_trace_setevent @TraceID, 11, 2, @on
exec sp_trace_setevent @TraceID, 11, 6, @on
exec sp_trace_setevent @TraceID, 11, 10, @on
exec sp_trace_setevent @TraceID, 11, 14, @on
exec sp_trace_setevent @TraceID, 11, 26, @on
exec sp_trace_setevent @TraceID, 11, 3, @on
exec sp_trace_setevent @TraceID, 11, 11, @on
exec sp_trace_setevent @TraceID, 11, 35, @on
exec sp_trace_setevent @TraceID, 11, 51, @on
exec sp_trace_setevent @TraceID, 11, 12, @on
exec sp_trace_setevent @TraceID, 11, 60, @on
exec sp_trace_setevent @TraceID, 72, 7, @on
exec sp_trace_setevent @TraceID, 72, 8, @on
exec sp_trace_setevent @TraceID, 72, 9, @on
exec sp_trace_setevent @TraceID, 72, 6, @on
exec sp_trace_setevent @TraceID, 72, 10, @on
exec sp_trace_setevent @TraceID, 72, 14, @on
exec sp_trace_setevent @TraceID, 72, 26, @on
exec sp_trace_setevent @TraceID, 72, 3, @on
exec sp_trace_setevent @TraceID, 72, 11, @on
exec sp_trace_setevent @TraceID, 72, 35, @on
exec sp_trace_setevent @TraceID, 72, 51, @on
exec sp_trace_setevent @TraceID, 72, 12, @on
exec sp_trace_setevent @TraceID, 72, 60, @on
exec sp_trace_setevent @TraceID, 71, 7, @on
exec sp_trace_setevent @TraceID, 71, 8, @on
exec sp_trace_setevent @TraceID, 71, 9, @on
exec sp_trace_setevent @TraceID, 71, 6, @on
exec sp_trace_setevent @TraceID, 71, 10, @on
exec sp_trace_setevent @TraceID, 71, 14, @on
exec sp_trace_setevent @TraceID, 71, 26, @on
exec sp_trace_setevent @TraceID, 71, 3, @on
exec sp_trace_setevent @TraceID, 71, 11, @on
exec sp_trace_setevent @TraceID, 71, 35, @on
exec sp_trace_setevent @TraceID, 71, 51, @on
exec sp_trace_setevent @TraceID, 71, 12, @on
exec sp_trace_setevent @TraceID, 71, 60, @on
exec sp_trace_setevent @TraceID, 12, 7, @on
exec sp_trace_setevent @TraceID, 12, 15, @on
exec sp_trace_setevent @TraceID, 12, 31, @on
exec sp_trace_setevent @TraceID, 12, 8, @on
exec sp_trace_setevent @TraceID, 12, 1, @on
exec sp_trace_setevent @TraceID, 12, 9, @on
exec sp_trace_setevent @TraceID, 12, 6, @on
exec sp_trace_setevent @TraceID, 12, 10, @on
exec sp_trace_setevent @TraceID, 12, 14, @on
exec sp_trace_setevent @TraceID, 12, 26, @on
exec sp_trace_setevent @TraceID, 12, 3, @on
exec sp_trace_setevent @TraceID, 12, 11, @on
exec sp_trace_setevent @TraceID, 12, 35, @on
exec sp_trace_setevent @TraceID, 12, 51, @on
exec sp_trace_setevent @TraceID, 12, 12, @on
exec sp_trace_setevent @TraceID, 12, 60, @on
exec sp_trace_setevent @TraceID, 13, 7, @on
exec sp_trace_setevent @TraceID, 13, 8, @on
exec sp_trace_setevent @TraceID, 13, 1, @on
exec sp_trace_setevent @TraceID, 13, 9, @on
exec sp_trace_setevent @TraceID, 13, 6, @on
exec sp_trace_setevent @TraceID, 13, 10, @on
exec sp_trace_setevent @TraceID, 13, 14, @on
exec sp_trace_setevent @TraceID, 13, 26, @on
exec sp_trace_setevent @TraceID, 13, 3, @on
exec sp_trace_setevent @TraceID, 13, 11, @on
exec sp_trace_setevent @TraceID, 13, 35, @on
exec sp_trace_setevent @TraceID, 13, 51, @on
exec sp_trace_setevent @TraceID, 13, 12, @on
exec sp_trace_setevent @TraceID, 13, 60, @on
-- Set the Filters
declare @intfilter int
declare @bigintfilter bigint
exec sp_trace_setfilter @TraceID, 1, 0, 6, N'%T_Table1%'
exec sp_trace_setfilter @TraceID, 1, 1, 6, N'%P_Table2%'
exec sp_trace_setfilter @TraceID, 10, 0, 7, N'SQL Server Profiler - f4575bc8-bfed-43c3-83c0-a4ef9f643adc'
exec sp_trace_setfilter @TraceID, 35, 0, 6, N'testdb'
-- Set the trace status to start
exec sp_trace_setstatus @TraceID, 1
-- display trace id for future references
select TraceID=@TraceID
goto finish
error:
select ErrorCode=@rc
finish:
go
The TraceID would be 2 if no other trace is running, and the below statement
SELECT * FROM sys.fn_trace_getinfo(2)
will give you info on whether this trace is running and also where the files are being written.
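To stop the trace and remove its definition later, sp_trace_setstatus is used again (assuming the trace id is 2 as above):
EXEC sp_trace_setstatus 2, 0 -- stop the trace
EXEC sp_trace_setstatus 2, 2 -- close the trace and delete its definition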
SQL Code Reviews
- Write comments in your stored procedures, triggers and SQL batches generously, whenever something is not very obvious. This helps other programmers understand your code clearly. Don’t worry about the length of the comments, as it won’t impact the performance, unlike interpreted languages like ASP 2.0.
- Do not prefix your stored procedure names with 'sp_'. The prefix sp_ is reserved for system stored procedures that ship with SQL Server. Whenever SQL Server encounters a procedure name starting with sp_, it first tries to locate the procedure in the master database, then looks for any qualifiers (database, owner) provided, then tries dbo as the owner. So you can really save time in locating the stored procedure by avoiding the sp_ prefix. But there is an exception: when creating general-purpose stored procedures that are called from all your databases, go ahead and prefix those stored procedure names with sp_ and create them in the master database.
- Use SET NOCOUNT ON at the beginning of your SQL batches, stored procedures and triggers in production environments, as this suppresses messages like ‘(1 row(s) affected)’ after executing INSERT, UPDATE, DELETE and SELECT statements. This in turn improves the performance of the stored procedures by reducing the network traffic.
- Do not use SELECT * in your queries. Always write the required column names after the SELECT statement, like SELECT CustomerID, CustomerFirstName, City. This technique results in less disk IO and less network traffic and hence better performance.
- Do not use column numbers in the ORDER BY clause, as it impairs the readability of the SQL statement. Further, changing the order of columns in the SELECT list has no impact on the ORDER BY when the columns are referred to by name instead of number. Consider the following example, in which the second query is more readable than the first one:
SELECT OrderID, OrderDate FROM Orders ORDER BY 2
SELECT OrderID, OrderDate FROM Orders ORDER BY OrderDate
- Try to avoid wildcard characters at the beginning of a word while searching using the LIKE keyword, as that results in an index scan, which defeats the purpose of having an index. The first of the following statements results in an index scan, while the second results in an index seek:
1. SELECT LocationID FROM Locations WHERE Specialities LIKE '%pples'
2. SELECT LocationID FROM Locations WHERE Specialities LIKE 'A%s'
Also avoid searching with not-equals operators (<> and NOT), as they result in table and index scans. If you must do heavy text-based searches, consider using the Full-Text Search feature of SQL Server for better performance.
- Use 'derived tables' wherever possible, as they perform better. Consider the following query to find the second highest salary from an Employees table:
SELECT MIN(Salary)
FROM Employees
WHERE EmpID IN (SELECT TOP 2 EmpID FROM Employees ORDER BY Salary DESC)
The same query can be re-written using a derived table, as shown below, and it performs twice as fast as the above query:
SELECT MIN(Salary)
FROM (SELECT TOP 2 Salary FROM Employees ORDER BY Salary DESC) AS A
This is just an example; the results might differ in different scenarios depending upon the database design, indexes, volume of data etc. So test all the possible ways a query could be written and go with the efficient one. With some practice and an understanding of how the SQL Server optimizer works, you will be able to come up with the best possible queries without this trial-and-error method.
- Try to avoid server-side cursors as much as possible. Always stick to a 'set-based approach' instead of a 'procedural approach' for accessing and manipulating data. Cursors can be avoided with plain SELECT statements in many cases. If a cursor is unavoidable, use a simple WHILE loop instead to loop through the table. I personally tested and concluded that a WHILE loop is faster than a cursor most of the time. But for a WHILE loop to replace a cursor you need a column (primary key or unique key) to identify each row uniquely, and I personally believe every table must have a primary or unique key.
- Avoid the creation of temporary tables while processing data, as much as possible, as creating a temporary table means more disk IO. Consider advanced SQL or views or table variables of SQL Server 2000 or derived tables, instead of temporary tables. Keep in mind that, in some cases, using a temporary table performs better than a highly complicated query.
- While designing your database, design it keeping ‘performance’ in mind. You can’t really tune performance later, when your database is in production, as it involves rebuilding tables/indexes, re-writing queries. Use the graphical execution plan in Query Analyzer or SHOWPLAN_TEXT or SHOWPLAN_ALL commands to analyze your queries. Make sure your queries do ‘Index seeks’ instead of ‘Index scans’ or ‘Table scans’. A table scan or an index scan is a very bad thing and should be avoided where possible (sometimes when the table is too small or when the whole table needs to be processed, the optimizer will choose a table or index scan).
- Use the more readable ANSI-standard join clauses instead of the old-style joins. With ANSI joins, the WHERE clause is used only for filtering data, whereas with older-style joins the WHERE clause handles both the join condition and filtering data. The first of the following two queries shows an old-style join, while the second one shows the new ANSI join syntax:
SELECT a.au_id, t.title
FROM titles t, authors a, titleauthor ta
WHERE a.au_id = ta.au_id
AND ta.title_id = t.title_id
AND t.title LIKE '%Computer%'
SELECT a.au_id, t.title
FROM authors a
INNER JOIN titleauthor ta ON a.au_id = ta.au_id
INNER JOIN titles t ON ta.title_id = t.title_id
WHERE t.title LIKE '%Computer%'
Be aware that the old-style *= and =* left and right outer join syntax may not be supported in a future release of SQL Server, so you are better off adopting the ANSI-standard outer join syntax.
- Views are generally used to show specific data to specific users based on their interest. Views are also used to restrict access to the base tables by granting permission only on the views. Yet another significant use of views is that they simplify your queries: incorporate your frequently required, complicated joins and calculations into a view so that you don't have to repeat those joins/calculations in all your queries; instead, just select from the view.
- Use ‘User Defined Datatypes’, if a particular column repeats in a lot of your tables, so that the datatype of that column is consistent across all your tables.
- Do not let your front-end applications query/manipulate the data directly using SELECT or INSERT/UPDATE/DELETE statements. Instead, create stored procedures and let your applications access these stored procedures. This keeps the data access clean and consistent across all the modules of your application, while at the same time centralizing the business logic within the database.
- Try not to use the text and ntext datatypes for storing large textual data. The 'text' datatype has some inherent problems associated with it: you cannot directly write or update text data using INSERT or UPDATE statements (you have to use special statements like READTEXT, WRITETEXT and UPDATETEXT), and there are a lot of bugs associated with replicating tables containing text columns. So, if you don't have to store more than 8 KB of text, use the char(8000) or varchar(8000) datatypes.
- If you have a choice, do not store binary files, image files (binary large objects or BLOBs) etc. inside the database. Instead, store the path to the binary/image file in the database and use that as a pointer to the actual file. Retrieving and manipulating these large binary files is better performed outside the database, and after all, a database is not meant for storing files.
- Use the char datatype for a column only when the column is non-nullable. If a char column is nullable, it is still treated as a fixed length column in SQL Server 7.0+, so a char(100), when NULL, will still eat up 100 bytes, resulting in wasted space. Use varchar(100) in this situation. Of course, variable length columns do have a slight processing overhead over fixed length columns, so carefully choose between char and varchar depending on the length of the data you are going to store.
- Avoid dynamic SQL statements as much as possible. Dynamic SQL tends to be slower than static SQL, as SQL Server must generate an execution plan at runtime every time. IF and CASE statements come in handy to avoid dynamic SQL. Another major disadvantage of using dynamic SQL is that it requires the users to have direct access permissions on all accessed objects like tables and views. Generally, users are given access to the stored procedures which reference the tables, but not directly to the tables; in that case, dynamic SQL will not work. Consider the following scenario, where a user named dSQLuser is added to the pubs database and is granted access to a procedure named dSQLproc, but not to any tables in the pubs database. The procedure dSQLproc executes a direct SELECT on the titles table and that works. The second statement runs the same SELECT on the titles table using dynamic SQL, and it fails with the following error:

Server: Msg 229, Level 14, State 5, Line 1
SELECT permission denied on object 'titles', database 'pubs', owner 'dbo'.

To reproduce the above problem, use the following commands:
sp_addlogin 'dSQLuser'
GO
sp_defaultdb 'dSQLuser', 'pubs'
USE pubs
GO
sp_adduser 'dSQLuser', 'dSQLuser'
GO
CREATE PROC dSQLProc
AS
BEGIN
    SELECT * FROM titles WHERE title_id = 'BU1032' -- This works
    DECLARE @str CHAR(100)
    SET @str = 'SELECT * FROM titles WHERE title_id = ''BU1032'''
    EXEC (@str) -- This fails
END
GO
GRANT EXEC ON dSQLProc TO dSQLuser
GO

Now log in to the pubs database as dSQLuser and execute the procedure dSQLproc to see the problem.
- Consider the following drawbacks before using the IDENTITY property for generating primary keys. IDENTITY is very much SQL Server specific, and you will have problems if you want to support different database backends for your application. IDENTITY columns have other inherent problems too: they run out of numbers sooner or later, numbers can't be reused automatically after deleting rows, and replication and IDENTITY columns don't always get along well. So, consider coming up with an algorithm to generate a primary key in the front-end or from within the inserting stored procedure. There could be issues with generating your own primary keys too, like concurrency while generating the key, or running out of values. So, consider both options and go with the one that suits you well.
- Minimize the usage of NULLs, as they often confuse front-end applications unless the applications are coded intelligently to eliminate NULLs or convert them into some other form. Any expression that deals with NULL results in a NULL output. The ISNULL and COALESCE functions are helpful in dealing with NULL values. Here's an example that explains the problem. Consider the following Customers table, which stores the names of the customers; the middle name can be NULL:

CREATE TABLE Customers
(
    FirstName varchar(20),
    MiddleName varchar(20),
    LastName varchar(20)
)

Now insert a customer into the table whose name is Tony Blair, without a middle name:

INSERT INTO Customers (FirstName, MiddleName, LastName)
VALUES ('Tony', NULL, 'Blair')

The following SELECT statement returns NULL instead of the customer name:

SELECT FirstName + ' ' + MiddleName + ' ' + LastName FROM Customers

To avoid this problem, use ISNULL as shown below:

SELECT FirstName + ' ' + ISNULL(MiddleName + ' ', '') + LastName FROM Customers
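The same can be written with the ANSI-standard COALESCE function, which also accepts more than two arguments:

SELECT FirstName + ' ' + COALESCE(MiddleName + ' ', '') + LastName FROM Customers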
- Use Unicode datatypes like nchar, nvarchar and ntext if your database is going to store not just plain English characters but a variety of characters used all over the world. Use these datatypes only when they are absolutely needed, as they take up twice as much space as the non-Unicode datatypes.
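When you do use them, remember to prefix Unicode string literals with N, or the literal is first converted through the non-Unicode code page (a small illustration; the table name is made up):

CREATE TABLE Greetings (Message nvarchar(50))

-- The N prefix keeps the characters as Unicode all the way through
INSERT INTO Greetings (Message) VALUES (N'こんにちは')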
- Always use a column list in your INSERT statements. This helps avoid problems when the table structure changes (like adding a column). Here's an example which shows the problem. Consider the following table:

CREATE TABLE EuropeanCountries
(
    CountryID int PRIMARY KEY,
    CountryName varchar(25)
)

Here's an INSERT statement without a column list that works perfectly:

INSERT INTO EuropeanCountries
VALUES (1, 'Ireland')

Now, let's add a new column to this table:

ALTER TABLE EuropeanCountries
ADD EuroSupport bit

Now run the above INSERT statement again. You get the following error from SQL Server:

Server: Msg 213, Level 16, State 4, Line 1
Insert Error: Column name or number of supplied values does not match table definition.

This problem can be avoided by writing an INSERT statement with a column list, as shown below:

INSERT INTO EuropeanCountries (CountryID, CountryName)
VALUES (2, 'England')

- Perform all your referential integrity checks and data validations using constraints (foreign key and check constraints) instead of triggers, as constraints are faster. Use triggers only for auditing, custom tasks and validations that cannot be performed using constraints. Constraints save you time as well, as you don't have to write code for these validations; the RDBMS does all the work for you. A small sketch follows.
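For instance, a foreign key and a check constraint can handle validations that might otherwise be written as triggers (the table and constraint names here are made up):

CREATE TABLE Orders
(
    OrderID int PRIMARY KEY,
    CountryID int NOT NULL
        CONSTRAINT FK_Orders_Countries
        REFERENCES EuropeanCountries (CountryID),
    Quantity int NOT NULL
        CONSTRAINT CK_Orders_Quantity CHECK (Quantity > 0)
)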
- Always access tables in the same order in all your stored procedures and triggers, consistently. This helps avoid deadlocks. Other things to keep in mind to avoid deadlocks are: keep your transactions as short as possible; touch as little data as possible during a transaction; never, ever wait for user input in the middle of a transaction; do not use higher level locking hints or restrictive isolation levels unless they are absolutely needed; and make your front-end applications deadlock-intelligent, that is, able to resubmit the transaction in case the previous attempt fails with error 1205. In your applications, process all the results returned by SQL Server immediately, so that the locks on the processed rows are released and there is no blocking.
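On SQL Server 2005 and later, the resubmit-on-1205 logic can also live in T-SQL itself. A minimal sketch, where the UPDATE is just a placeholder for the real transaction body:

DECLARE @retry int
SET @retry = 3

WHILE @retry > 0
BEGIN
    BEGIN TRY
        BEGIN TRAN
        UPDATE Orders SET Quantity = Quantity + 1 WHERE OrderID = 1
        COMMIT TRAN
        SET @retry = 0 -- success, stop retrying
    END TRY
    BEGIN CATCH
        IF XACT_STATE() <> 0 ROLLBACK TRAN
        -- 1205 is the deadlock victim error: retry; otherwise give up
        IF ERROR_NUMBER() = 1205
            SET @retry = @retry - 1
        ELSE
            SET @retry = 0
    END CATCH
END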
- Offload tasks like string manipulations, concatenations, row numbering, case conversions, type conversions etc. to the front-end applications if these operations are going to consume significant CPU cycles on the database server (it's okay to do simple string manipulations on the database end, though). Also try to do basic validations in the front-end itself during data entry; this saves unnecessary network roundtrips.
- If back-end portability is your concern, stay away from bit manipulations with T-SQL, as this is very much RDBMS specific. Further, using bitmaps to represent different states of a particular entity conflicts with the normalization rules.
- Consider adding a @Debug parameter to your stored procedures. This can be of bit data type. When a 1 is passed for this parameter, print all the intermediate results, variable contents using SELECT or PRINT statements and when 0 is passed do not print debug information. This helps in quick debugging of stored procedures, as you don’t have to add and remove these PRINT/SELECT statements before and after troubleshooting problems.
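A minimal sketch of the @Debug pattern (the procedure and table names are made up):

CREATE PROC GetOrderTotal
    @OrderID int,
    @Debug bit = 0
AS
BEGIN
    DECLARE @Total money

    SELECT @Total = SUM(Quantity) FROM Orders WHERE OrderID = @OrderID

    -- Print intermediate results only when debugging is requested
    IF @Debug = 1
        PRINT 'OrderID ' + CAST(@OrderID AS varchar(10))
            + ' total: ' + ISNULL(CAST(@Total AS varchar(20)), 'NULL')

    SELECT @Total AS OrderTotal
END
GO

-- Normal call:          EXEC GetOrderTotal 1
-- Troubleshooting call: EXEC GetOrderTotal 1, @Debug = 1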
- Do not call functions repeatedly within your stored procedures, triggers, functions and batches. For example, you might need the length of a string variable in many places in your procedure; don't call the LEN function every time it's needed. Instead, call LEN once, store the result in a variable, and use that.
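For example:

DECLARE @str varchar(100), @len int
SET @str = 'Some string whose length is needed in several places'

-- Call LEN once and reuse the result
SET @len = LEN(@str)

IF @len > 50 PRINT 'Long string: ' + CAST(@len AS varchar(10))
IF @len > 0  PRINT 'Non-empty string of length ' + CAST(@len AS varchar(10))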
- Make sure your stored procedures always return a value indicating the status. Standardize on the return values of stored procedures for success and failures. The RETURN statement is meant for returning the execution status only, but not data. If you need to return data, use OUTPUT parameters.
- If your stored procedure always returns a single row resultset, consider returning the resultset using OUTPUT parameters instead of a SELECT statement, as ADO handles output parameters faster than resultsets returned by SELECT statements.
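A sketch combining the last two points, with RETURN carrying only the status and an OUTPUT parameter carrying the data (the names are made up):

CREATE PROC GetEmployeeSalary
    @EmployeeID int,
    @Salary money OUTPUT
AS
BEGIN
    SELECT @Salary = Salary FROM Employees WHERE EmployeeID = @EmployeeID

    IF @Salary IS NULL
        RETURN 1 -- failure: no such employee
    RETURN 0     -- success
END
GO

-- Calling pattern:
DECLARE @Salary money, @rc int
EXEC @rc = GetEmployeeSalary 100, @Salary OUTPUT
IF @rc = 0 PRINT 'Salary: ' + CAST(@Salary AS varchar(20))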
- Always check the global variable @@ERROR immediately after executing a data manipulation statement (like INSERT/UPDATE/DELETE), so that you can roll back the transaction in case of an error (@@ERROR will be greater than 0 in case of an error). This is important because, by default, SQL Server will not roll back all the previous changes within a transaction if a particular statement fails. This behavior can be changed by executing SET XACT_ABORT ON. The @@ROWCOUNT variable also plays an important role in determining how many rows were affected by a previous data manipulation (or retrieval) statement, and based on that you could choose to commit or roll back a particular transaction. A minimal sketch of this pattern appears after the next tip.
- To make SQL statements more readable, start each clause on a new line and indent when needed. For example:

SELECT title_id, title
FROM titles
WHERE title LIKE 'Computing%'
   OR title LIKE 'Gardening%'
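Here is the @@ERROR pattern mentioned above as a minimal sketch (the UPDATE against the pubs titles table is just a placeholder):

BEGIN TRAN

UPDATE titles SET price = price * 1.1 WHERE type = 'business'

-- Roll back and bail out if the statement failed
IF @@ERROR <> 0
BEGIN
    ROLLBACK TRAN
    RETURN
END

COMMIT TRAN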
- Though we survived Y2K, always store 4 digit years in dates (especially when using char or int datatype columns) instead of 2 digit years, to avoid confusion and problems. This is not a problem with datetime columns, as the century is stored even if you specify a 2 digit year, but it's always a good practice to specify 4 digit years even with datetime datatype columns.
- In your queries and other SQL statements, always represent dates in an unambiguous format. The safest format for string literals is yyyymmdd (with no separators), which SQL Server interprets correctly regardless of the DATEFORMAT or language settings; yyyy/mm/dd is generally safe as well. This also prevents the following error while working with dates:

Server: Msg 242, Level 16, State 3, Line 2
The conversion of a char data type to a datetime data type resulted in an out-of-range datetime value.
- As is true in any other programming language, do not use GOTO, or use it sparingly. Excessive usage of GOTO leads to hard-to-read, hard-to-maintain code.
- Do not forget to enforce unique constraints on your alternate keys.
- Always be consistent with the usage of case in your code. On a case insensitive server your code might work fine, but it will fail on a case sensitive SQL Server if the code is not consistent in case. For example, if you create a table on a server or database that has a case-sensitive or binary sort order, all references to the table must use the same case that was specified in the CREATE TABLE statement. If you name the table 'MyTable' in the CREATE TABLE statement and use 'mytable' in a SELECT statement, you get an 'object not found' or 'invalid object name' error.
- Though T-SQL has no concept of constants (like the ones in the C language), variables will serve the same purpose. Using variables instead of constant values within your SQL statements improves the readability and maintainability of your code. Consider the following example:

UPDATE dbo.Orders
SET OrderStatus = 5
WHERE OrdDate < '2001/10/25'

The same update statement can be re-written in a more readable form as shown below:

DECLARE @ORDER_PENDING int
SET @ORDER_PENDING = 5

UPDATE dbo.Orders
SET OrderStatus = @ORDER_PENDING
WHERE OrdDate < '2001/10/25'
- Do not depend on undocumented functionality. The reasons being: you will not get support from Microsoft when something goes wrong with your undocumented code, and undocumented functionality is not guaranteed to exist (or to behave the same way) in a future release or service pack, thereby breaking your code.
- Try not to use system tables directly. System table structures may change in a future release. Wherever possible, use the sp_help* stored procedures or the INFORMATION_SCHEMA views instead. There will be situations where you cannot avoid accessing the system tables, though!
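For example, instead of querying sysobjects or syscolumns directly:

-- List the user tables in the current database
SELECT TABLE_SCHEMA, TABLE_NAME
FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_TYPE = 'BASE TABLE'

-- Or get similar information through a documented procedure
EXEC sp_help 'Customers'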