Many of us make the common mistake of applying functions to column names in a WHERE clause. The function transforms the column value, which prevents SQL Server from using indexes on that column and results in poor query performance.
Some of the functions we commonly use in a WHERE clause are CONVERT, SUBSTRING, DATEDIFF, etc. Let us understand the performance implications by means of an example below.
SET NOCOUNT ON
SET STATISTICS TIME OFF
IF EXISTS ( SELECT 'X' FROM sysobjects where TYPE = 'U' and name = 'TEST_FUNCTION_PERF')
DROP TABLE TEST_FUNCTION_PERF
GO
--Create a table with 3 columns
CREATE TABLE TEST_FUNCTION_PERF
( Col1_Int INT IDENTITY(1,1) NOT NULL,
Col2_Date DATETIME NOT NULL,
Col3_Varchar VARCHAR(255) NOT NULL
)
--Create a clustered index on the date column; it can be on any column that we would use in the WHERE clause
CREATE CLUSTERED INDEX IX_TEST ON TEST_FUNCTION_PERF(Col2_Date)
--Populate table with 90000 records
DECLARE @iCount INTEGER
SELECT @iCount = 1
WHILE @iCount <= 90000
BEGIN
INSERT INTO TEST_FUNCTION_PERF(Col2_Date,Col3_Varchar)
SELECT GETDATE() - @iCount, 'ABC' + CONVERT(VARCHAR,@iCount)
SELECT @iCount = @iCount + 1
END
Now let us enable statistics to see how our queries perform in terms of CPU time and elapsed time for the 3 different queries below, each of which produces the same result of 59 rows.
SET STATISTICS TIME ON
--Query 1
SELECT COL2_DATE FROM TEST_FUNCTION_PERF
WHERE CONVERT(VARCHAR(10),COL2_DATE,101) > GETDATE() - 60
--Query 2
SELECT COL2_DATE FROM TEST_FUNCTION_PERF
WHERE DATEDIFF(DD,COL2_DATE,GETDATE()) < 60
--Query 3
SELECT COL2_DATE FROM TEST_FUNCTION_PERF
WHERE COL2_DATE > DATEADD(DD,-60,GETDATE())
/*
--Query 1
SQL Server Execution Times:
CPU time = 359 ms, elapsed time = 438 ms.
--Query 2
SQL Server Execution Times:
CPU time = 31 ms, elapsed time = 63 ms.
--Query 3
SQL Server Execution Times:
CPU time = 0 ms, elapsed time = 0 ms.
*/
If you look at the execution plans of these queries, the first two go for an Index Scan while the last one does an Index Seek, which is optimal. The best way to write such queries is the Query 3 format, i.e. keeping the indexed column free of functions and moving any computation to the other side of the comparison.
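The same principle applies to SUBSTRING, one of the functions listed above. Below is a minimal sketch of the rewrite; note that it assumes a nonclustered index on Col3_Varchar, which the script above does not create.
CREATE NONCLUSTERED INDEX IX_TEST2 ON TEST_FUNCTION_PERF(Col3_Varchar)
--Non-sargable: the function hides the indexed column, forcing a scan
SELECT Col3_Varchar FROM TEST_FUNCTION_PERF
WHERE SUBSTRING(Col3_Varchar,1,4) = 'ABC1'
--Sargable rewrite: the bare column can seek on the index
SELECT Col3_Varchar FROM TEST_FUNCTION_PERF
WHERE Col3_Varchar LIKE 'ABC1%'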
Friday, July 9, 2010
Query to get User and Role mappings
--SQL 2000
select 'EXEC sp_addrolemember ''' + g.name + ''',[' + u.name + ']'
from sysusers u, sysusers g, sysmembers m
where g.uid = m.groupuid
and g.issqlrole = 1
and u.uid = m.memberuid
--SQL 2005
select 'EXEC sp_addrolemember @rolename = ''' + c.name + ''', @membername = ''' + b.name + ''''
from sys.database_role_members a, sysusers b, sys.database_principals c
where b.uid = a.member_principal_id
and a.role_principal_id = c.principal_id
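On SQL 2005 you can also stay entirely on the new catalog views and skip the sysusers compatibility view; a sketch of the same script generator (database roles have type 'R' in sys.database_principals):
select 'EXEC sp_addrolemember @rolename = ''' + r.name + ''', @membername = ''' + m.name + ''''
from sys.database_role_members drm
join sys.database_principals r on r.principal_id = drm.role_principal_id
join sys.database_principals m on m.principal_id = drm.member_principal_id
where r.type = 'R'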
Labels: Basics - SQL, SQL - Queries, SQL 2005 - General
Query to find when last Log Backup happened
select backup_finish_date,database_name
from msdb..backupset o
where backup_finish_date = ( select max(backup_finish_date)
from msdb..backupset i
where i.database_name = o.database_name
and type = 'L')
and type = 'L'
and database_name in ( select name from master..sysdatabases)
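If you only need the database name and the date, the correlated subquery can be simplified to a plain aggregate; an equivalent sketch:
select database_name, max(backup_finish_date) as last_log_backup
from msdb..backupset
where type = 'L'
and database_name in ( select name from master..sysdatabases)
group by database_name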
Labels: Basics - SQL, SQL - Queries, SQL 2005 - General
Query to find when last Full Backup happened
select backup_finish_date,database_name
from msdb..backupset o
where backup_finish_date = ( select max(backup_finish_date)
from msdb..backupset i
where i.database_name = o.database_name
and type = 'D')
and type = 'D'
and database_name in ( select name from master..sysdatabases)
Labels: Basics - SQL, SQL - Queries, SQL 2005 - General
Understand Isolation Levels with Query Example
Isolation levels in databases such as SQL Server, Oracle, and DB2 are used for preventing data consistency issues. It is essential for developers to understand how isolation works, because the same code behaves differently when different isolation levels are specified. The isolation levels in SQL Server are:
- Read Uncommitted
- Read Committed
- Repeatable Read
- Serializable
It is easier to understand each of these through an example rather than by reading definitions of what each means.
IF EXISTS ( SELECT 'X' FROM sysobjects where TYPE = 'U' and name = 'DOCS_HISTORY')
DROP TABLE DOCS_HISTORY
GO
-- Create table named DOCS_HISTORY
CREATE TABLE [dbo].[DOCS_HISTORY](
[DOC_ID] [int] IDENTITY(1,1) NOT NULL,
[DOC_NAME] [varchar](100) NOT NULL,
[DOC_SIZE] [bigint] NULL,
[DOC_UPD_DATE] [datetime] NULL,
CONSTRAINT [PK_DOCS_HISTORY] PRIMARY KEY CLUSTERED([DOC_ID] ASC))
-- Insert a committed row in DOCS_HISTORY
BEGIN TRAN
INSERT INTO [DOCS_HISTORY](DOC_NAME,DOC_SIZE,DOC_UPD_DATE)
SELECT 'TESTDOC1',100,getdate()-3
COMMIT
-- Insert an uncommitted row in DOCS_HISTORY
BEGIN TRAN
INSERT INTO [DOCS_HISTORY](DOC_NAME,DOC_SIZE,DOC_UPD_DATE)
SELECT 'TESTDOC2',200,getdate()-2
If we execute the commands below in another session in SQL Server Management Studio:
-- Read Uncommitted Isolation Level returns 2 rows (including the uncommitted one) and can be used for generating reports
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
SELECT * FROM DOCS_HISTORY
-- Read Committed Isolation Level, which is the default, does not return rows; the query keeps running because it is blocked by the uncommitted transaction
SET TRANSACTION ISOLATION LEVEL READ COMMITTED
SELECT * FROM DOCS_HISTORY
This is the reason you see DBAs running to the person who is blocking other users with an uncommitted transaction. Please keep transactions short, and commit or roll back as soon as the work is complete.
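If you need to find who is doing the blocking, a small sketch using the SQL 2005 dynamic management views (it assumes you have the VIEW SERVER STATE permission):
-- Sessions that are currently blocked, and the session blocking them
select session_id, blocking_session_id, wait_type, wait_time
from sys.dm_exec_requests
where blocking_session_id <> 0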
To understand Repeatable Read we need 3 sessions in SQL Server Management Studio.
--Management Studio Window 1
SET TRANSACTION ISOLATION LEVEL READ COMMITTED
BEGIN TRAN
SELECT * FROM [DOCS_HISTORY]
WAITFOR DELAY '00:01:00'
SELECT * FROM [DOCS_HISTORY]
COMMIT
--Management Studio Window 2
BEGIN TRAN
UPDATE [DOCS_HISTORY]
SET DOC_SIZE = DOC_SIZE + 500,
DOC_UPD_DATE = GETDATE()
COMMIT
-- Management Studio Window 3
BEGIN TRAN
INSERT INTO [DOCS_HISTORY](DOC_NAME,DOC_SIZE,DOC_UPD_DATE)
SELECT 'TESTDOC3',300,getdate()-1
COMMIT TRAN
All 3 statements get executed, but Window 1 returns 2 different outputs under Read Committed, even though both selects are inside one transaction; this is a non-repeatable read.
When the same commands are executed under the Repeatable Read isolation level (SET TRANSACTION ISOLATION LEVEL REPEATABLE READ), the newer insert is still allowed, but Window 2 cannot complete: the rows it wants to update were read inside the open transaction in Window 1, so it waits until Window 1 completes and only then applies the update.
So Repeatable Read does not allow values to change for rows that were read inside a transaction until that transaction completes. However, it still allows new records to be inserted; such a record is a phantom, since it was not present during the execution of the first select statement.
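For completeness, this is the only change needed in Window 1 to repeat the experiment under Repeatable Read (Window 2 and Window 3 stay exactly as above):
--Management Studio Window 1 (Repeatable Read variant)
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ
BEGIN TRAN
SELECT * FROM [DOCS_HISTORY]
WAITFOR DELAY '00:01:00'
SELECT * FROM [DOCS_HISTORY]
COMMIT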
The last isolation level is Serializable, and many would have guessed from the previous levels what it does: it does not allow any data inconsistencies, i.e. no dirty reads, no non-repeatable reads, and no phantom reads.
Let us try to understand this too by means of the same example.
--Management Studio Window 1
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
BEGIN TRAN
SELECT * FROM [DOCS_HISTORY]
WAITFOR DELAY '00:01:00'
SELECT * FROM [DOCS_HISTORY]
COMMIT
--Management Studio Window 2
BEGIN TRAN
UPDATE [DOCS_HISTORY]
SET DOC_SIZE = DOC_SIZE + 500,
DOC_UPD_DATE = GETDATE()
COMMIT
-- Management Studio Window 3
BEGIN TRAN
INSERT INTO [DOCS_HISTORY](DOC_NAME,DOC_SIZE,DOC_UPD_DATE)
SELECT 'TESTDOC3',300,getdate()-1
COMMIT TRAN
BEGIN TRAN
INSERT INTO [DOCS_HISTORY](DOC_NAME,DOC_SIZE,DOC_UPD_DATE)
SELECT 'TESTDOC4',400,getdate()
COMMIT TRAN
Run Window 1 first, followed by Window 2 and Window 3. You will notice that Window 2 and Window 3 sit in a wait state and do not execute until Window 1 completes.
This is the strictest isolation level, with no data inconsistencies between reads, but it is also the isolation level most likely to cause blocking and deadlocks.
There is also a new isolation level introduced in SQL 2005: SNAPSHOT isolation. It is very handy and behaves like a mix of SERIALIZABLE and READ COMMITTED; it always works from a snapshot taken at the start of the transaction and returns that view inside the transaction, yet it does not block new inserts or updates from other sessions.
You can reuse the example above with the isolation level set via SET TRANSACTION ISOLATION LEVEL SNAPSHOT and observe the results.
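Note that snapshot isolation must first be enabled at the database level, or the SELECT fails with an error; a sketch, where TESTDB is a placeholder for your database name:
-- One-time setup per database
ALTER DATABASE TESTDB SET ALLOW_SNAPSHOT_ISOLATION ON
-- Then, in the reading session
SET TRANSACTION ISOLATION LEVEL SNAPSHOT
SELECT * FROM DOCS_HISTORY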
So one needs to look at the business model and choose the appropriate isolation level accordingly; READ COMMITTED is the most widely used isolation level.
P.S:- Thanks to Guru Charan for explaining this.
Labels: SQL 2005 - General, SQL 2005 - Programming
Friday, July 2, 2010
TOP Clause Enhancements - SQL 2005
If anyone asks how often you display all the records of a table on a user interface, the answer would be almost never: we typically show 20, or at most 100, records per page, and if the user wants to see more he moves to the next page for the next set of records.
So when you want to display, say, only 20 of 100 huge documents, sorted by size, the TOP clause in SQL Server comes in handy.
The TOP clause was already present in SQL 2000 to serve this basic need, and it has been enhanced in SQL 2005 in ways that make the programmer's life still easier.
Drawbacks of TOP in SQL 2000:
1) The TOP clause is not dynamic, so it cannot change based on user input.
Ex: Say the user asks the UI for the top 30 rows and later wants the top 40. This change of parameter is not possible with the TOP clause; in other words, TOP cannot be parameterized.
P.S:- The same can be achieved with SET ROWCOUNT, but we need to remember to set it back to 0, otherwise all subsequent queries return at most that many rows, since it applies at the session level; see the sketch after this list.
2) The TOP clause can be used only in SELECT statements and cannot be used in INSERT/UPDATE/DELETE statements.
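A sketch of the SQL 2000-era SET ROWCOUNT workaround mentioned above, including the mandatory reset:
-- SQL 2000 workaround: limit rows via SET ROWCOUNT (session-wide!)
DECLARE @UI_Records INT
SELECT @UI_Records = 30
SET ROWCOUNT @UI_Records
SELECT * FROM [DOCS_HISTORY] ORDER BY DOC_UPD_DATE DESC
SET ROWCOUNT 0 -- always reset, or every later query is capped too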
Performance Drawback:
TOP is faster than SET ROWCOUNT because the query optimizer can use the value of the expression in the TOP clause while generating the execution plan for the query. SET ROWCOUNT, on the other hand, is set outside the statement that executes the query, so its value cannot be used to generate the query plan.
IF EXISTS ( SELECT 'X' FROM sysobjects where TYPE = 'U' and name = 'DOCS_HISTORY')
DROP TABLE DOCS_HISTORY
GO
-- Create table named DOCS_HISTORY
CREATE TABLE [dbo].[DOCS_HISTORY](
[DOC_ID] [int] IDENTITY(1,1) NOT NULL,
[DOC_NAME] [varchar](100) NOT NULL,
[DOC_SIZE] [bigint] NULL,
[DOC_UPD_DATE] [datetime] NULL,
CONSTRAINT [PK_DOCS_HISTORY] PRIMARY KEY CLUSTERED([DOC_ID] ASC))
-- Insert into DOCS_HISTORY
INSERT INTO [DOCS_HISTORY](DOC_NAME,DOC_SIZE,DOC_UPD_DATE)
SELECT 'TESTDOC1',100,getdate()-10
INSERT INTO [DOCS_HISTORY](DOC_NAME,DOC_SIZE,DOC_UPD_DATE)
SELECT 'TESTDOC2',200,getdate()-9
INSERT INTO [DOCS_HISTORY](DOC_NAME,DOC_SIZE,DOC_UPD_DATE)
SELECT 'TESTDOC3',300,getdate()-8
INSERT INTO [DOCS_HISTORY](DOC_NAME,DOC_SIZE,DOC_UPD_DATE)
SELECT 'TESTDOC4',400,getdate()-7
INSERT INTO [DOCS_HISTORY](DOC_NAME,DOC_SIZE,DOC_UPD_DATE)
SELECT 'TESTDOC5',500,getdate()-6
INSERT INTO [DOCS_HISTORY](DOC_NAME,DOC_SIZE,DOC_UPD_DATE)
SELECT 'TESTDOC6',600,getdate()-5
INSERT INTO [DOCS_HISTORY](DOC_NAME,DOC_SIZE,DOC_UPD_DATE)
SELECT 'TESTDOC7',700,getdate()-4
INSERT INTO [DOCS_HISTORY](DOC_NAME,DOC_SIZE,DOC_UPD_DATE)
SELECT 'TESTDOC8',800,getdate()-3
INSERT INTO [DOCS_HISTORY](DOC_NAME,DOC_SIZE,DOC_UPD_DATE)
SELECT 'TESTDOC9',900,getdate()-2
INSERT INTO [DOCS_HISTORY](DOC_NAME,DOC_SIZE,DOC_UPD_DATE)
SELECT 'TESTDOC10',1000,getdate()-1
-- User decides on how many rows to see in UI which is given by parameter @UI_Records
DECLARE @UI_Records INT
SELECT @UI_Records = 5
SELECT TOP (@UI_Records) * FROM [DOCS_HISTORY]
ORDER BY DOC_UPD_DATE DESC
Paging Logic
Suppose you have shown 5 rows in the UI and, when the user clicks the Next button, want to return the next 5 rows, i.e. rows 6 to 10. The code below does this directly in SQL, which is faster than holding the full recordset in .Net and paging there.
SELECT * FROM ( SELECT TOP 5 * FROM ( SELECT TOP 10 * FROM [DOCS_HISTORY] ORDER BY DOC_UPD_DATE DESC
) AS DOCS_HISTORY1 ORDER BY DOC_UPD_DATE ASC
) AS DOCS_HISTORY2 ORDER BY DOC_UPD_DATE DESC
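Since this post is about SQL 2005 enhancements, it is worth noting the cleaner paging alternative that SQL 2005 added, ROW_NUMBER(); a sketch for the same page (rows 6 to 10):
SELECT * FROM (
SELECT ROW_NUMBER() OVER (ORDER BY DOC_UPD_DATE DESC) AS RowNum, *
FROM [DOCS_HISTORY]
) AS Paged
WHERE RowNum BETWEEN 6 AND 10
ORDER BY DOC_UPD_DATE DESC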
--Update 5 records whose size is > 300 with today's date (note: TOP in UPDATE affects an arbitrary 5 matching rows, since ORDER BY is not allowed here)
UPDATE TOP (@UI_Records) [DOCS_HISTORY]
SET DOC_UPD_DATE = GETDATE()
WHERE DOC_SIZE > 300
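If the update must target a specific top 5 (for example the 5 most recently updated large documents), one pattern is to push the ordering into a subquery, mirroring the DELETE example below; a sketch:
UPDATE [DOCS_HISTORY]
SET DOC_UPD_DATE = GETDATE()
WHERE [DOC_ID] IN ( SELECT TOP (@UI_Records) [DOC_ID] FROM [DOCS_HISTORY]
WHERE DOC_SIZE > 300
ORDER BY DOC_UPD_DATE DESC)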
--Delete the top 5 records ordered by Doc_Upd_Date ascending (i.e. the 5 oldest)
DELETE FROM [DOCS_HISTORY]
WHERE [DOC_ID] IN ( SELECT TOP (@UI_Records) [DOC_ID] FROM [DOCS_HISTORY]
ORDER BY DOC_UPD_DATE ASC)
Labels: SQL 2005 - General, SQL 2005 - Programming