
Sunday, August 23, 2015

Change Tracking May Lead to Blocking and Deadlocks


Change Tracking: may not be a viable solution on OLTP databases.


CT (change tracking) is primarily used for synchronizing data.

As per Books Online: "It tracks which rows are changed. The latest data can be obtained directly from the table that is being tracked. It is a lightweight method to track changes, unlike triggers, which need programming along with additional tables and a cleanup process."

But it is not so lightweight in terms of the blocking problems it can raise.

Change tracking creates a few internal tables, and you have no control over them to maintain or troubleshoot when performance issues arise.
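You can see these internal side tables through the catalog views; below is a quick check, a sketch using sys.internal_tables (available in SQL Server 2008 and later):

-- lists the internal change tracking side tables (one per tracked table);
-- the table name ends with the object_id of the tracked user table,
-- and they cannot be indexed, altered or cleaned up manually
SELECT name, internal_type_desc, create_date
FROM sys.internal_tables
WHERE internal_type_desc = 'CHANGE_TRACKING';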

In this scenario, we want to synchronize two tables.

The steps involved in synchronizing two tables using change tracking are:

1.     Populate the target table with an initial full set of data from the source table
2.     Identify the changes on the source table using change tracking (see the sketch after this list)
3.     Synchronize the tables by applying the changes (insert, update, delete) that happened on the source table to the destination table
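Steps 2 and 3 are typically done with CHANGETABLE(CHANGES ...) together with CHANGE_TRACKING_CURRENT_VERSION(). Below is a minimal sketch, assuming the tracked table is dbo.Customer with primary key IID (as in the demo further down) and a hypothetical target table dbo.Customer_Target with the same structure:

-- minimal sketch of steps 2 and 3; dbo.Customer_Target is a hypothetical target table
DECLARE @last_sync_version bigint = 0; -- version persisted after the previous sync cycle
DECLARE @current_version bigint = CHANGE_TRACKING_CURRENT_VERSION();

-- step 2: identify the rows changed since the last sync
SELECT ct.IID, ct.SYS_CHANGE_OPERATION -- I = insert, U = update, D = delete
FROM CHANGETABLE(CHANGES dbo.Customer, @last_sync_version) AS ct;

-- step 3 (update portion only): apply the changed rows to the target
UPDATE t
SET t.PersonID = s.PersonID, t.ModifiedDate = s.ModifiedDate
FROM dbo.Customer_Target AS t
JOIN CHANGETABLE(CHANGES dbo.Customer, @last_sync_version) AS ct ON ct.IID = t.IID
JOIN dbo.Customer AS s ON s.IID = ct.IID
WHERE ct.SYS_CHANGE_OPERATION = 'U';

-- persist @current_version and pass it as @last_sync_version in the next cycle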


There are many methods available, viz. triggers, timestamp tracking, log shipping, replication, mirroring (with snapshots), etc., and each method has its own limitations.

But if the solution uses change tracking, then beware of the problems with deadlocks.

"A deadlock occurs when two or more tasks permanently block each other by each task having a lock on a resource which the other tasks are trying to lock.

Image: Deadlock trace captured from Profiler




Steps:

1.     Create a Test Database
2.     Create a test table with test data
3.     Enable Change Tracking on database
4.     Enable Change Tracking on Table
5.     Run Update Statements
6.     Catch Deadlocks
7.     Conclusion
8.     Alternate solutions




1.     Create a Test Database
Create database TestDB
Go
Use TestDB
Go



2.     Create a test table with test data
USE TestDB

GO

create table [Customer](
    IID INT IDENTITY (1,1) NOT NULL PRIMARY KEY CLUSTERED,
[CustomerID] [int] NOT NULL ,
[PersonID] [int] NULL,
[StoreID] [int] NULL,
[TerritoryID] [int] NULL,
[AccountNumber]  varchar(30),
[rowguid] [uniqueidentifier] ROWGUIDCOL  NOT NULL,
[ModifiedDate] [datetime] NOT NULL)

GO

insert into [Customer] SELECT * FROM AdventureWorks2008.Sales.Customer


3.     Enable Change Tracking on database

ALTER DATABASE TESTDB SET CHANGE_TRACKING = ON (CHANGE_RETENTION = 7 DAYS, AUTO_CLEANUP = ON);


4.     Enable Change Tracking on Table


USE TestDB

GO

ALTER TABLE customer  enable CHANGE_TRACKING WITH (TRACK_COLUMNS_UPDATED = Off);
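To confirm change tracking is enabled at both levels, you can check the catalog views (standard views, shown here as a quick sanity check):

USE TestDB
GO
-- database-level change tracking settings (retention and auto cleanup)
SELECT DB_NAME(database_id) AS db_name, is_auto_cleanup_on, retention_period, retention_period_units_desc
FROM sys.change_tracking_databases;

-- tables with change tracking enabled in the current database
SELECT OBJECT_NAME(object_id) AS table_name, is_track_columns_updated_on
FROM sys.change_tracking_tables;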

5.     Run Update Statements
-- open a new session(1) and run the below statement
while 1=1
BEGIN

insert into [Customer] SELECT * FROM AdventureWorks2008.Sales.Customer
END;
-- open a 2nd new session(2) and run the below statement
while 1=1
BEGIN
    update Customer set PersonID =customerid
END;



6.     Catch Deadlocks

select XEventData.XEvent.value('(data/value)[1]', 'varchar(max)') as DeadlockGraph
FROM
(select CAST(target_data as xml) as TargetData
from sys.dm_xe_session_targets st
join sys.dm_xe_sessions s on s.address = st.event_session_address
where name = 'system_health' and target_name = 'ring_buffer') AS Data
CROSS APPLY TargetData.nodes ('//RingBufferTarget/event') AS XEventData (XEvent)
where XEventData.XEvent.value('@name', 'varchar(4000)') = 'xml_deadlock_report'
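The query above returns the deadlock graph as text; if you prefer to open it as XML in SSMS, the same value can be cast to xml (note that on some SQL Server 2008 builds the deadlock XML in system_health is malformed, so the cast may fail there):

-- same query, returning xml so the deadlock graph can be opened directly in SSMS
select CAST(XEventData.XEvent.value('(data/value)[1]', 'varchar(max)') as xml) as DeadlockGraph
FROM
(select CAST(target_data as xml) as TargetData
from sys.dm_xe_session_targets st
join sys.dm_xe_sessions s on s.address = st.event_session_address
where name = 'system_health' and target_name = 'ring_buffer') AS Data
CROSS APPLY TargetData.nodes ('//RingBufferTarget/event') AS XEventData (XEvent)
where XEventData.XEvent.value('@name', 'varchar(4000)') = 'xml_deadlock_report'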

Image: xml_deadlock_report output from the query above




7.     Conclusion:

Be cautious before implementing change tracking on tables with a high volume of concurrent inserts and updates. Change tracking creates internal tables, and these are the main source of the deadlocks.
On every modification of the source table, one record is written to the internal change tracking table. So if transaction A is inserting records every second while transaction B is updating records at the same time, the two transactions block each other on the internal tables, which leads to deadlocks.


Choose this option wisely, or write your own custom code using other features where you have full control.
The major challenge in synchronizing tables is identifying modifications (updates) rather than inserts/deletes, since inserts/deletes can be identified with an anchor table by comparing primary keys, as in any data warehouse environment.


8.     Alternate solutions

Since the major challenge is identifying modifications, there are other options, listed below, to identify modified rows; each has its own limitations, so choose wisely (a sketch of option ii) follows the list).
i)       Binary checksum
ii)      Timestamp (rowversion)
iii)     Non-key attributes within the table, maintained by the application to track changes, e.g. dt_modified
iv)    Triggers
v)     CDC – change data capture
vi)    CT – change tracking
vii)   T-Replication
viii)  3rd party tools
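For example, option ii) can be implemented with a rowversion (timestamp) column used as a high-water mark; a minimal sketch, using a hypothetical source table dbo.Src:

-- option ii) sketch: rowversion-based change detection (dbo.Src is a hypothetical table)
CREATE TABLE dbo.Src
( Id int PRIMARY KEY,
  Payload varchar(100),
  rv rowversion ); -- bumped automatically on every insert/update of the row

DECLARE @last_rv binary(8) = 0x0; -- high-water mark persisted after the previous sync run

-- rows inserted or updated since the last run
SELECT Id, Payload, rv
FROM dbo.Src
WHERE rv > @last_rv
ORDER BY rv;

-- after applying them to the target, store MAX(rv) as the new high-water mark
-- note: rowversion does not detect deletes; those still need the anchor/primary key comparison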


Also refer to the article by Brent Ozar at the link below; it covers change tracking performance in detail.

http://www.brentozar.com/archive/2014/06/performance-tuning-sql-server-change-tracking/


Friday, August 2, 2013

backups on shared database instances without maintenance plans

 

Managing database backups on shared database instances without the Maintenance Plan Wizard.

Download Code

Problem

Shared database instances host many databases with different recovery models and different backup schedules, and with them come the challenges of managing the database backups. Brent Ozar clearly described a few of these challenges in his post: http://www.brentozar.com/archive/2012/12/backups-gone-bad-the-danger-of-differentials/

Maintenance plans are very helpful for creating and scheduling backups, but they have very limited features, and you can't control the backups of all the databases in shared database instances.


| Feature | Maintenance Plans | This Solution |
| --- | --- | --- |
| Include databases in the existing plan | manual | dynamic query passed as a parameter, e.g. select name from sys.databases where name in ('db_1','db_2') |
| Copy/move the backups to a secondary location | — | supported: set the job parameter I_Optional_RemoteBkpPath with the secondary location path |
| Track the status (backed up, copied, deleted) of each backup, including its secondary location | — | supported: select * from BKPS_DB_AUTO_LOG |
| Delete old backups based on dependencies instead of age | based on age | based on your own query |
| Restart the job from the point of failure | — | supported |
| Dashboard monitoring | — | supported: create your own dashboard using the BKPS_DB_AUTO_LOG table |


The framework/solution below addresses all the problems listed above. You can modify it as per your requirements to control multiple instances with multiple databases, and you can configure it and check whether the backup jobs meet your DRP requirements.

  1. Backs up all the databases based on their recovery model: by default it takes full database backups of all the databases, and takes t-log backups only if the recovery model is full or bulk-logged (see the sketch below).
  2. No manual intervention is required to include new databases in the full backups.
  3. Tracks and moves the backup files from the local server to the secondary location once all the backups in a batch are completed.
  4. Deletes the old backup files in sequence (full + dependent differential and t-log backups) instead of by the number of days old, at the secondary location.
  5. A daily report to ensure that all the backups are in place.
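For item 1, the idea is to build the BACKUP command per database from its recovery model. A simplified sketch of that logic follows; it is illustrative only (the downloadable procedure BKP_DBS_MULTI implements the full version, and the folder path here is an assumption):

-- simplified sketch: full backup for every database, log backup only for FULL/BULK_LOGGED
-- (WITH COMPRESSION requires an edition that supports backup compression)
DECLARE @db sysname, @cmd nvarchar(max);

DECLARE dbs CURSOR FOR
    SELECT name FROM sys.databases
    WHERE state_desc = 'ONLINE' AND name <> 'tempdb';

OPEN dbs;
FETCH NEXT FROM dbs INTO @db;
WHILE @@FETCH_STATUS = 0
BEGIN
    SET @cmd = N'BACKUP DATABASE ' + QUOTENAME(@db)
             + N' TO DISK = N''D:\Backups\' + @db + N'_FULL.bak'' WITH COMPRESSION, CHECKSUM;';
    EXEC (@cmd);

    -- log backup only when the recovery model is FULL or BULK_LOGGED
    IF EXISTS (SELECT 1 FROM sys.databases
               WHERE name = @db AND recovery_model_desc IN ('FULL','BULK_LOGGED'))
    BEGIN
        SET @cmd = N'BACKUP LOG ' + QUOTENAME(@db)
                 + N' TO DISK = N''D:\Backups\' + @db + N'_LOG.trn'' WITH COMPRESSION, CHECKSUM;';
        EXEC (@cmd);
    END

    FETCH NEXT FROM dbs INTO @db;
END
CLOSE dbs;
DEALLOCATE dbs;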

Solution

Image: SSIS package control flow

Description:

The steps are largely self-descriptive.

  1. Get DB List: gets the list of databases returned by the input variable I_SqlDBList and stores it in the local variable "dblist".
  2. Get Next Execution ID: inserts a record into dbo.BKPS_DB_AUTO_LOG with an auto-increment number for each execution.
  3. Loop All DB List: loops through all the databases in the list.
    1. creates a directory for each database in the local folder
    2. backs up the database
  4. Get files to move from local to remote location: if the optional variable I_Optional_RemoteBkpPath is not null, assigns the list of files to move to the local variable "moveFileList".
  5. Move and delete old files: loops through the list of backup files assigned in step 4.
    1. moves one file at a time in the loop
    2. get old file list to delete: gets the list of old files that depend on the file that was just moved
    3. Loop old backups: loops through the old files one at a time
      1. delete old backups: deletes each file and logs the status of the deletion in the log table

Input Variables Description
Set the variables whose names start with I_ ; the rest are local variables.

Image: SSIS package variables

Variable Name: Description

I_BkpExtension: Backup file extension. Don't include the ".".

I_BkpIsCompressed: Enter 1 to compress the backups.

I_BkpIsVerify: Enter 1 to verify the backup.

I_BkpType: Enter the backup type: Full/Diff/Log.

I_LocalBkpPath: Enter the local database backup path. If you want to take the backups directly to a remote location, enter the remote location path here.

I_Optional_RemoteBkpPath: Optional. Enter the remote location to which the local backups are moved.

I_Optional_SqlRetention: Optional. Enter the select query that produces the list of backup files to delete.

Eg: the query below produces the list of old backups, keeping the most recent full backup, the most recent differential, and all log backups taken after that differential.

select files_to_delete from vw_keep_recent_full_nd_diff

I_SqlDBList: Enter the select query that lists the databases to back up.

Eg: the value below includes the databases X, Y and Z.

select name as dblist from master.sys.databases where name in ('X','Y','Z')

You can filter the databases based on your own criteria.
For example, to take log backups of all databases in the full recovery model, set the variable value to "select name as dblist from sys.databases where recovery_model_desc='full'".

SSIS Flow Steps Description

  Database Objects

  1. BKPS_DB_AUTO_LOG (table): the tracking table; stores the status of the backup executions and the locations of the backup files.
  2. fn_split_files (user-defined function)
  3. BKP_INS_BKP_STATS (procedure): inserts the status of each backup operation into the tracking table.
  4. BKP_DBS_MULTI (procedure): takes the backups.
  5. vw_keep_recent_full_nd_diff (view): lists all the old files to delete from the remote location. You can change this view as per your retention plan.

Steps to configure the job on your lab server

1. Download all the files.

2. Save BKPS_DB_AUTO.dtsx as an external file on the C drive.

3. Execute all the scripts (1-6) in the MSDB database.
Note: you should review and modify the view vw_keep_recent_full_nd_diff that selects the old backups to delete; I have provided this view for simulation purposes only.

4. Create and configure the SQL Server Agent job.
After executing the script "6.create Job BKP_DBS_AUTO", it creates a job named "BKP_DBS_AUTO".

If you already have a job with the same name, modify the first line in the script with a new name for the job.
Go to SQL Server Agent, right-click the "BKP_DBS_AUTO" job, open the first step and modify the location of the SSIS package.

Image: job step with the SSIS package location

Click on the [Set Values] tab on the same screen and modify the values of all the variables as in the example below. Refer to the "Input Variables Description" section in this article for the description of these variables.

Image: job step Set Values tab with the variable values

5. Execute the job. After successful completion of the job, you will see the records in the msdb.dbo.BKPS_DB_AUTO_LOG table.

6. Execute the job several times and check the status column in the same table. It deletes all the old backup files and keeps only the recent ones. If you don't want to delete the old backups, don't pass any value to the variable I_Optional_SqlRetention.

Next:

To create multiple jobs, you can script out the existing job and change the variable values.

More on deleting old backups:

The attached download files include a view "vw_keep_recent_full_nd_diff". This view reads the tracking table "bkp_file_locations" and produces the list of old remote backup files to delete, keeping the most recent full backup, the most recent differentials, and all log files taken after the most recent differential backup.

Image: output of vw_keep_recent_full_nd_diff

I created this as a view so the list of old backup files to delete can be reused.

create view vw_keep_recent_full_nd_diff
as
--select BFL.DBNAME, BFL.IID, RECENT.IID, BFL.BKP_TYPE, RECENT.BKP_TYPE
select BFL.BKP_REMOTE_PATH
from BKP_file_locations BFL
left outer join
    (select MAX(IID) AS IID, DBNAME, BKP_TYPE
     from bkp_file_locations
     where bkp_type <> 'LOG'
     group by DBNAME, BKP_TYPE) recent
  on recent.dbname = BFL.dbname and recent.BKP_TYPE = BFL.BKP_TYPE and recent.IID = BFL.IID
where RECENT.IID is null
  and BFL.status = 'moved'

To test which backup files are deleted, run the query below and check the differences in the status column.

select iid,DBNAME,bkp_type,status from BKP_file_locations

order by dbname

Below is the output for the databases X and Y

Image: output for the databases X and Y

Tuesday, April 23, 2013

sql server partition table online from a non-partitioned table


step-by-step process to convert an existing non-partitioned table into a partitioned table


steps :
  1. create table orders
  2. populate test data from the past 12 months to the current date
  3. lock the table till the switch happens, create the partition function
  4. create the partition scheme
  5. create the partitioned table
  6. create check constraints; check constraints are needed to switch the table into the new partition
  7. switch the existing table to the temp table, rename the temp table to the newly partitioned table, and finally drop the existing table
  8. insert new test data
  9. check the rows on each partition
-------------------------------------------------------
-- Create test database
-------------------------------------------------------
create database poc
GO
use POC
-------------------------------------------------------
-- 1 Create the new Orders table.
-------------------------------------------------------

CREATE TABLE Orders
(iid int identity(1,1),
  nage int,
  CreatedDate datetime
)
GO
-------------------------------------------------------
-- 2 populate test data with past 12 months to current date
-------------------------------------------------------
-- insert the orders from 2012 december to till this month
insert into Orders select 10,'2012-dec-01'
insert into Orders select 10,'2013-jan-01'
insert into Orders select 10,'2013-jan-15'
insert into Orders select 10,'2013-jan-31'
insert into Orders select 10,'2013-feb-01'
insert into Orders select 10,'2013-feb-15'
insert into Orders select 10,'2013-feb-28'
insert into Orders select 10,'2013-mar-01'
insert into Orders select 10,'2013-mar-15'
insert into Orders select 10,'2013-mar-31'
insert into Orders select 10,'2013-apr-01'
insert into Orders select 10,'2013-apr-15'
-------------------------------------------------------
-- 3 create partition function
-- create new table with partition.
-- create partition starting next month, since you want to archive all the old data after 1 month or so
-------------------------------------------------------

BEGIN TRAN
-- lock the table so nothing changes between now and the switch (step 3)
select * from orders with (holdlock , tablockx) where 1=0


create partition function PF_MONTHLY_ORDERS (datetime)
as RANGE RIGHT for values(
'2001-01-01', -- lower boundary: minimum existing date minus a month or more; all existing rows land in the partition starting here
'2013-05-01', -- next boundary: max existing date + 1 month; all existing rows fall below this boundary (the current month here is April 2013)
'2013-06-01', -- add one boundary per month for the next 12 months
'2013-07-01', '2013-08-01','2013-09-01', '2013-10-01', '2013-11-01', '2013-12-01'
)
-------------------------------------------------------
-- 4 create partition scheme
-------------------------------------------------------

create PARTITION SCHEME PSCH_MONTHLY_ORDERS  as partition PF_MONTHLY_ORDERS  all
to ([primary])
go
-------------------------------------------------------
-- 5 create partition table
-- ansi setting must be on as mentioned below
-------------------------------------------------------
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
SET ANSI_PADDING OFF
GO
CREATE TABLE [Orders_temp]
(iid int identity(1,1),
  nage int,
  CreatedDate datetime
) ON PSCH_MONTHLY_ORDERS(CreatedDate)
GO
SET ANSI_PADDING OFF
GO
-------------------------------------------------------
-- 6 create constraints on both the tables
-- check constraints are needed to switch the tables; without them it is not possible to move the data
-------------------------------------------------------

alter table orders add constraint chkPrtn1 check (CreatedDate >='2001-01-01'  and CreatedDate is not null)
alter table orders add constraint chkPrtn2 check (CreatedDate < '2013-05-01' and CreatedDate is not null)


alter table orders_temp add constraint chkPrtn3 check (CreatedDate >='2001-01-01'  and CreatedDate is not null)
alter table orders_temp add   constraint chkPrtn4 check (CreatedDate < '2020-01-01' and CreatedDate is not null)
--------------------------------------------------------------------------------------------------------------------------
-------------------------------------------------------
-- 7  switch the data to the newly partitioned table
-------------------------------------------------------
-- the source table is not partitioned, so no source partition number is specified
alter table orders switch to orders_temp partition 2

alter table orders_temp drop constraint chkPrtn3
alter table orders_temp drop constraint chkPrtn4

exec sp_rename orders ,orders_x
exec sp_rename orders_temp ,orders
select * from Orders
COMMIT TRAN

-- !! ROLLBACK OR COMMIT THE TRANSACTION
-- !! CHECK AND DROP THE TABLE orders_x
--------------------------------------------------------------------------------------------------------------------------
-- 8 insert new test data
insert into Orders select 10,'2013-apr-30' -- this row goes to partition 2
insert into Orders select 10,'2013-may-30' -- this row goes to partition 3

--------------------------------------------------------------------------------------------------------------------------
--  9 check the rows on each partition
SELECT [CreatedDate],
        $partition.PF_MONTHLY_ORDERS([CreatedDate])
            AS 'Partition Number'
FROM Orders

SELECT $partition.PF_MONTHLY_ORDERS([CreatedDate])
            AS 'Partition Number'
    , min([CreatedDate]) AS 'Min Order Date'
    , max([CreatedDate]) AS 'Max Order Date'
    , count(*) AS 'Rows In Partition'
FROM Orders
GROUP BY $partition.PF_MONTHLY_ORDERS([CreatedDate])

SELECT sprv.value AS [Value], sprv.boundary_id AS [ID] ,*
FROM sys.partition_functions AS spf
INNER JOIN sys.partition_range_values sprv
ON sprv.function_id=spf.function_id WHERE (spf.name=N'PF_MONTHLY_ORDERS') ORDER BY [ID] ASC

------ full schema query
DECLARE @TableName sysname = 'orders';
SELECT p.partition_number, fg.name, p.rows,*
FROM sys.partitions p
    INNER JOIN sys.allocation_units au
    ON au.container_id = p.hobt_id
    INNER JOIN sys.filegroups fg
    ON fg.data_space_id = au.data_space_id
WHERE p.object_id = OBJECT_ID(@TableName)


--------------other meta data queries
DECLARE @TableName sysname = 'orders';
SELECT distinct p.partition_number, fg.name, p.rows,sprv.value AS [Value]
FROM sys.partitions p
    INNER JOIN sys.allocation_units au     ON au.container_id = p.hobt_id
    INNER JOIN sys.filegroups fg           ON fg.data_space_id = au.data_space_id
    INNER JOIN sys.destination_data_spaces dds           ON fg.data_space_id = dds.data_space_id
    INNER JOIN sys.partition_schemes ps ON ps.data_space_id = dds.partition_scheme_id
    INNER JOIN sys.partition_functions AS spf  ON spf.function_id = ps.function_id
    INNER JOIN sys.partition_parameters pp ON pp.function_id=spf.function_id
    INNER JOIN sys.partition_range_values sprv ON sprv.function_id=pp.function_id and sprv.parameter_id= pp.parameter_id
WHERE p.object_id = OBJECT_ID(@TableName)
and (spf.name=N'PF_MONTHLY_ORDERS')

SELECT sprv.value AS [Value], sprv.boundary_id AS [ID] ,*
FROM sys.partition_functions AS spf
INNER JOIN sys.partition_range_values sprv
ON sprv.function_id=spf.function_id WHERE (spf.name=N'PF_MONTHLY_ORDERS') ORDER BY [ID] ASC
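Once the table is partitioned, new monthly boundaries have to be added before data for those months arrives. A minimal sketch of the ongoing maintenance, assuming everything stays on [PRIMARY] as in this demo:

-- add the next monthly boundary before data for that month arrives
ALTER PARTITION SCHEME PSCH_MONTHLY_ORDERS NEXT USED [PRIMARY];
ALTER PARTITION FUNCTION PF_MONTHLY_ORDERS() SPLIT RANGE ('2014-01-01');

-- conversely, an old month can be switched out to an archive table and its
-- boundary removed with MERGE RANGE once the data is gone, for example:
-- ALTER PARTITION FUNCTION PF_MONTHLY_ORDERS() MERGE RANGE ('2001-01-01');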

Thursday, February 14, 2013

SQL Server I/O Errors List


If you are frequently encountering I/O errors like the one below, and you want to analyze exactly when they happen, you need
to export all the I/O errors into a table so that you can sort them by date and time and see the trends.
The script below exports all the I/O errors from the SQL Server error log into a temp table #IO_ERRORS.
Output:

Image: sample output of the query below


use tempdb

create table #IO_ERRORS(LogDate datetime,ProcessInfo varchar(30),text varchar(max))
declare @error_msg  varchar(2000) = 'requests taking longer than 15 seconds to complete on file'

insert into #IO_ERRORS EXEC master.dbo.xp_readerrorlog 0, 1,@error_msg  , '', NULL, NULL, N'desc'
insert into #IO_ERRORS EXEC master.dbo.xp_readerrorlog 1, 1, @error_msg  , '', NULL, NULL, N'desc'
insert into #IO_ERRORS EXEC master.dbo.xp_readerrorlog 2, 1, @error_msg  , '', NULL, NULL, N'desc'
insert into #IO_ERRORS EXEC master.dbo.xp_readerrorlog 3, 1, @error_msg  , '', NULL, NULL, N'desc'
insert into #IO_ERRORS EXEC master.dbo.xp_readerrorlog 4, 1, @error_msg  , '', NULL, NULL, N'desc'
insert into #IO_ERRORS EXEC master.dbo.xp_readerrorlog 5, 1, @error_msg  , '', NULL, NULL, N'desc'
insert into #IO_ERRORS EXEC master.dbo.xp_readerrorlog 6, 1, @error_msg  , '', NULL, NULL, N'desc'

select LogDate, physical_name,
occurrences = cast(SUBSTRING(TEXT, CHARINDEX('SQL Server has encountered ',TEXT)+27,
CHARINDEX('occurrence(s) of I/O requests taking longer than ',TEXT)-CHARINDEX('SQL Server has encountered ',TEXT)-27) as int)
--,CAST(logdate as DATE) as dt, DATEPART(HH,logdate   ) as h ,DATEPART(MINUTE,logdate   ) as minu
from #IO_ERRORS a
JOIN sys.master_files b on a.text like  '%'+b.physical_name +'%'
order by 3 desc
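To see the trend over time, the same temp table can be aggregated per file and per hour (a small follow-up query on the #IO_ERRORS table populated above):

-- aggregate the long-I/O messages per file and per hour to spot trends
select b.physical_name,
       cast(a.LogDate as date) as dt,
       datepart(hour, a.LogDate) as hr,
       count(*) as messages
from #IO_ERRORS a
join sys.master_files b on a.text like '%' + b.physical_name + '%'
group by b.physical_name, cast(a.LogDate as date), datepart(hour, a.LogDate)
order by dt, hr;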

Friday, December 28, 2012

sql server update outdated statistics

 

Even though auto update statistics is ON, SQL Server won't update the statistics until a table reaches a certain modification threshold: http://support.microsoft.com/kb/195565

If you are using SQL Server 2008 R2 or a later version, the link below will be helpful: http://support.microsoft.com/kb/2754171?wa=wsignin1.0

But if you are using 2005 or 2008 (not R2), or don't want to enable the trace flag in R2 as mentioned in the above link, then the procedure below helps:

1. It updates the statistics when the table was modified within a 24-hour delay from the last statistics update.
2. It updates only the tables which were modified, instead of all the tables in the database.


CREATE PROC PR_UPDATE_STATS
AS
with cte as
(
select SCHEMA_NAME(b.schema_id) as tbl_schema, OBJECT_NAME(b.object_id) as tbl_name,
       STATS_DATE(s.object_id, s.stats_id) as stat_dt, last_user_update,
       (select MAX(last_user_dt) from (values (last_user_seek),(last_user_scan),(last_user_lookup)) as all_val(last_user_dt)) as last_seeks_lkps_scans_datetime
from sys.dm_db_index_usage_stats c
join sys.tables b on b.object_id = c.object_id
join sys.stats s on b.object_id = s.object_id
where c.database_id = DB_ID() -- only usage stats of the current database
)

select 'UPDATE STATISTICS ' + QUOTENAME(tbl_schema) + '.' + QUOTENAME(tbl_name) + ';' as cmd,
       tbl_schema, tbl_name,
       MAX(last_user_update) as last_update,
       MAX(stat_dt) as last_stat_dt,
       MAX(last_seeks_lkps_scans_datetime) as last_select
into #t
from cte
group by tbl_schema, tbl_name
having datediff(dd, MAX(stat_dt), MAX(last_user_update)) >= 1 -- stats which are outdated for 1 day

declare @cmd varchar(256)
declare cur cursor for select cmd from #t

open cur
    fetch next from cur into @cmd

while @@fetch_status = 0
    begin
        exec (@cmd)
        fetch next from cur into @cmd
    end

close cur

deallocate cur

 

This is a simple and straightforward solution that updates statistics based on which tables have been updated since the last statistics update. But if you want to control update statistics based on a threshold (number of updates), then you have to collect metrics on how many updates happen on each table, by capturing sys.dm_db_index_usage_stats on a daily basis, and modify the stored procedure by adding a parameter such as last_user_update > @threshold_limit.
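A minimal sketch of that idea follows; the snapshot table and the @threshold_limit value are illustrative, not part of the procedure above:

-- illustrative sketch: snapshot user updates daily, then compare against a threshold
CREATE TABLE dbo.index_usage_snapshot
( capture_date date NOT NULL DEFAULT (CONVERT(date, GETDATE())),
  object_id int NOT NULL,
  user_updates bigint NOT NULL );

-- collect once a day (e.g. from an agent job)
INSERT INTO dbo.index_usage_snapshot (object_id, user_updates)
SELECT object_id, SUM(user_updates)
FROM sys.dm_db_index_usage_stats
WHERE database_id = DB_ID()
GROUP BY object_id;

-- tables whose update count since yesterday's snapshot exceeds the threshold
DECLARE @threshold_limit bigint = 1000;
SELECT OBJECT_NAME(cur.object_id) AS tbl_name,
       SUM(cur.user_updates) - prev.user_updates AS updates_since_snapshot
FROM sys.dm_db_index_usage_stats AS cur
JOIN dbo.index_usage_snapshot AS prev
     ON prev.object_id = cur.object_id
    AND prev.capture_date = CONVERT(date, DATEADD(day, -1, GETDATE()))
WHERE cur.database_id = DB_ID()
GROUP BY cur.object_id, prev.user_updates
HAVING SUM(cur.user_updates) - prev.user_updates > @threshold_limit;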

Thursday, December 20, 2012

sql server when was the database table accessed last time


-- this works on SQL Server 2008 and later
with cte as
(
select
(select min(last_user_dt) from (values (last_user_update),(last_user_seek),(last_user_scan),(last_user_lookup)) as all_val(last_user_dt)) as dt
,DB_NAME(database_id) as dbName
from sys.dm_db_index_usage_stats
)
select min(dt) as since from cte where
dbName = 'NORTHWIND'



-- to check when the database was last accessed, use the approach described here:
-- source: https://www.mssqltips.com/sqlservertip/3171/identify-sql-server-databases-that-are-no-longer-in-use/


Thursday, October 11, 2012

sql server script to know when a table was last accessed


If the database has many unwanted tables which are not being used by any of the applications, and you want to identify those unused objects so that you can remove them or move them to a separate database, the system view sys.dm_db_index_usage_stats (SQL Server 2008) contains information about when each table was last accessed.
Query-1 below produces the list of tables that are being used as well as those that are not; a NULL in the [recent_datetime] column means the table has not been used since the server started.
Note: the counters in this DMV reset when the server is restarted.
-- Query-1
with cte_recent as
(
select SCHEMA_NAME(B.schema_id) +'.'+object_name(b.object_id) as tbl_name,
(select MAX(last_user_dt) from (values (last_user_seek),(last_user_scan),(last_user_lookup)) as all_val(last_user_dt)) as access_datetime FROM sys.dm_db_index_usage_stats a
right outer join sys.tables b on a.object_id =  b.object_id
)
select tbl_name,max(access_datetime) as recent_datetime  from cte_recent
group by tbl_name
order by recent_datetime desc , 1
Output:

Image: sample output of Query-1
To keep track of the tables that are not being used, it is better to store the results in a separate table and merge into it before SQL Server is restarted, or on your own schedule (for example, if SQL Server is installed on a cluster, schedule it as a job), and wait for a year or so before deciding to remove those tables or move them to a different database.
Create a table TABLES_LAST_ACCESSED with 2 columns using the script below.
--Query-2
create table TABLES_LAST_ACCESSED (TBL_NAME VARCHAR(256),recent_datetime DATETIME)
sync the table with the script (Query-3) below
-- Query-3
with cte_recent as
(
select SCHEMA_NAME(B.schema_id) +'.'+object_name(b.object_id) as tbl_name,
(select MAX(last_user_dt) from (values (last_user_seek),(last_user_scan),(last_user_lookup)) as all_val(last_user_dt)) as access_datetime FROM sys.dm_db_index_usage_stats a
right outer join sys.tables b on a.object_id =  b.object_id
)
MERGE TABLES_LAST_ACCESSED AS target
USING (select tbl_name,max(access_datetime) as recent_datetime  from cte_recent group by tbl_name) AS source
ON (Target.tbl_name = Source.tbl_name)
when matched then
        UPDATE SET Target.recent_datetime = Source.recent_datetime
when not matched then
        INSERT (tbl_name,recent_datetime)VALUES (source.tbl_name,source.recent_datetime);
Query any table using select, insert, update or delete, and then execute the statement below; the tables you accessed most recently should be at the top.
SELECT * FROM  TABLES_LAST_ACCESSED ORDER BY 2 DESC
For more information on this DMV http://msdn.microsoft.com/en-us/library/ms188755.aspx




SQL Server 2005 : http://stackoverflow.com/questions/711394/how-do-you-find-the-last-time-a-database-was-accessed





