Saturday, 19 April 2014

T-SQL script to back up all user databases with time-stamped backup file names.

use master;
go

DECLARE UserDatabases_Cursor CURSOR FOR
-- Select user database names (database_id 1-4 are the system databases).
select name
from sys.databases
where database_id > 4;

DECLARE @dbName sysname;
DECLARE @backupPath varchar(100);
DECLARE @backupQuery varchar(500);

-- make sure that the below path exists
set @backupPath = 'C:\SQLBackupFolder\';

OPEN UserDatabases_Cursor;
FETCH NEXT FROM UserDatabases_Cursor INTO @dbName;

WHILE (@@FETCH_STATUS = 0)
BEGIN
-- Backup SQL statement with a time-stamped file name
set @backupQuery = 'backup database ' + QUOTENAME(@dbName)
    + ' to disk = ''' + @backupPath + @dbName + '_['
    + REPLACE(REPLACE(convert(varchar(20), getdate(), 120), ':', '-'), ' ', '_')
    + '].bak''';

-- Print SQL statement
print @backupQuery;

-- Execute backup script
EXEC (@backupQuery);

-- Get next database
FETCH NEXT FROM UserDatabases_Cursor INTO @dbName;
END

CLOSE UserDatabases_Cursor;
DEALLOCATE UserDatabases_Cursor;

GO

Monday, 14 April 2014

SQL Server transaction logs

What is a transaction log?


                  A transaction log is a file that is an integral part of every SQL Server database. It contains the log records produced during the logging process. The transaction log is the most important component of a SQL Server database when it comes to disaster recovery; however, to be of any use it must not be corrupted. After each database modification (each transaction), a log record is written to the transaction log. All the changes are written sequentially.

What does a SQL Server transaction log store?


                A transaction log stores every transaction made to a SQL Server database, except a few that are minimally logged, such as BULK INSERT or SELECT INTO. Internally it is split into smaller parts called Virtual Log Files (VLFs). When one VLF becomes full, logging continues in the next available VLF in the transaction log. The transaction log can therefore be pictured as a circular file: when logging reaches the end of the file it starts again from the beginning, but only if all the requirements have been met and the inactive parts have been truncated. The truncation process is necessary to mark the inactive parts so they can be reused and overwritten.
A log record is no longer needed in the transaction log if all of the following are true:
  • The transaction of which it is part has committed
  • The database pages it changed have all been written to disk by a checkpoint
  • The log record is not needed for a backup (full, differential, or log)
  • The log record is not needed for any feature that reads the log (such as database mirroring or replication) [1]
The logical log is the active part of the transaction log. A Log Sequence Number (LSN) identifies every record in the transaction log. The MinLSN is the starting point of the oldest active transaction in the online transaction log.
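The VLF layout and the active portion of the log can be inspected with the (undocumented but long-standing) DBCC LOGINFO command; a sketch, using an illustrative database name:

```sql
-- List the VLFs of the chosen database.
-- Status = 2 marks an active VLF; Status = 0 marks a reusable one.
USE master;   -- any database name; master is used here only as an example
GO
DBCC LOGINFO;
GO
```

One row is returned per VLF, so the row count also shows how fragmented the log file is.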

SQL Server transaction log structure


Can SQL Server database work without a transaction log?


No, that is not possible due to the SQL Server design and ACID compliance. ACID stands for atomicity, consistency, isolation, and durability. Every transaction must fulfill these characteristics:
  • An atomic transaction is either fully completed, or is not begun at all
  • A transaction enforces consistency in the system state by ensuring that at the end of any transaction the system is in a valid state
  • When a transaction runs in isolation, it appears to be the only action that the system is carrying out at one time
  • A transaction is durable meaning that once it has been successfully completed, all of the changes it made to the system are permanent [2]


Can one SQL Server database have more than one transaction log?


Yes, that is possible, but it is only recommended in specific situations. Adding multiple transaction log files will not improve the performance of a SQL Server database in any way: writing can occur to only one file at a time, so parallel I/O operations on the log are not possible.
Having multiple transaction log files is advisable only when the first one is full or its disk drive is running out of space. Either way, these problems should be addressed earlier and handled by creating transaction log backups and monitoring the available space on the disk drive.
Monitoring disk space used by SQL logs


Why is the SQL Server transaction log growing?


Every transaction is recorded in the online transaction log. During normal SQL Server operation the transaction log grows as changes are made to a database, so maintaining the transaction log is crucial for proper SQL Server operation.
There are three recovery models in SQL Server, and transaction log growth manifests differently depending on which one is used:
  • Simple recovery model – transaction log backups are not supported. The truncation process is automatic and space is reclaimed for reuse. There is a data-loss risk because changes made since the most recent database backup are exposed. Under the Simple recovery model there is little chance of the transaction log growing – only in specific situations, such as a long-running transaction or a transaction that creates many changes
  • Bulk-logged recovery model – transaction log backups are supported and needed on a regular basis. There is no automatic truncation process; transaction log backups must be taken regularly to mark unused space as available for overwriting. The Bulk-logged recovery model reduces transaction log space usage by using minimal logging for most bulk operations
  • Full recovery model – transaction log backups are supported and needed on a regular basis. There is no data-loss risk under normal circumstances. There is no automatic truncation process; transaction log backups must be taken regularly to mark unused space as available for overwriting. Under the Full recovery model the chance of transaction log growth is greatest because all transactions are fully logged
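The current recovery model can be checked and changed with statements like the following (the database name is illustrative):

```sql
-- Check the recovery model of every database on the instance.
SELECT name, recovery_model_desc
FROM sys.databases;

-- Switch a database to the Full recovery model.
ALTER DATABASE suresh SET RECOVERY FULL;
```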

How to maintain a transaction log in SQL Server?


Transaction log maintenance is an important task in SQL Server administration. Monitoring is recommended on a daily basis, or even more often if a SQL Server database has a high amount of traffic. The transaction log space can be monitored by using the DBCC SQLPERF (LOGSPACE) command:

Monitoring the SQL transaction log space by using the DBCC SQLPERF (LOGSPACE) command

  • Database Name – Name of the database for the log statistics displayed
  • Log Size (MB) – Current size allocated to the log. This value is always smaller than the amount originally allocated for log space because the Database Engine reserves a small amount of disk space for internal header information
  • Log Space Used (%) – Percentage of the log file currently occupied with transaction log information
  • Status – Status of the log file. Always 0 [3]
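For example:

```sql
-- Returns Database Name, Log Size (MB), Log Space Used (%) and Status
-- for every database on the instance.
DBCC SQLPERF (LOGSPACE);
```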
The transaction log should be backed up on a regular basis to avoid auto-growth operations and a full transaction log file. Space in the transaction log is made reusable (truncated) by a log backup, taken either through SQL Server Management Studio by selecting Transaction Log as the backup type, or by executing the following T-SQL command:
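A minimal sketch of such a log backup (database name and path are illustrative):

```sql
-- Back up the transaction log; the backed-up portion of the log
-- becomes eligible for truncation and reuse.
BACKUP LOG suresh
TO DISK = 'F:\SQL_BACKUPS\suresh_log.trn';
```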


The backed-up space can then be reused and overwritten by new transactions. Note that some operations cannot be combined and must be executed separately:


Do I need SQL Server transaction log backups?


Yes – transaction log backups are among the most important resources when it comes to disaster recovery. They are not needed (and not available) only when the Simple recovery model is used, but then there is data-loss exposure. The majority of database administrators use a 15-minute interval, or even less, for high-traffic SQL Server databases. Transaction log backups are also important because, when taken, they mark inactive VLFs as reusable.


Saturday, 12 April 2014

High Availability Features

High Availability Features

* In order to reduce or avoid server or database downtime, so that applications and users can keep working with the server continuously, SQL Server supports three levels of high-availability (DR) features.

                1. Server Level High Availability features
                                * Clustering
                2. Database Level High Availability features
                                * T.Log Shipping
                                * Database Mirroring
                3. Object Level High Availability features
                                * Replication

1. Log Shipping
--------------------
* It is a simple method of keeping a copy of an entire database on a different server.
* It works with Backup, Copy and Restore jobs.
* We need 3 servers
                * Primary server
                * Secondary server
                * Monitor server (optional)
* In the primary server the database is in the online state, where the applications or users are connected.
* In the secondary server the database is in standby mode, where we can read data.
* In secondary server T.Log backups are applied either with
                * Norecovery or
                * Standby
* We can have multiple secondary servers.

Advantages
-----------------
                * To reduce downtime for the applications or users
                * To implement load balancing, i.e. we can allow reads from the secondary server and
manipulations on the primary server.

Points to Remember:
-----------------------------

* We can ship all databases except master, model, read-only databases and databases whose recovery model is SIMPLE.
* We should have sysadmin privileges on both servers.
* Changing the recovery model from FULL or BULK_LOGGED to SIMPLE will break the log shipping process.
* We cannot take ad-hoc transaction log backups on the primary server, as they break the log shipping log backup chain.
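If an ad-hoc log backup is unavoidable, taking it WITH COPY_ONLY (available in SQL Server 2005 and later) leaves the log shipping backup chain intact; a sketch with illustrative names:

```sql
-- A COPY_ONLY log backup does not truncate the log
-- and does not break the log backup (LSN) chain.
BACKUP LOG suresh
TO DISK = 'F:\AdHoc\suresh_copyonly.trn'
WITH COPY_ONLY;
```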

Architecture
------------------

Log Shipping Jobs
------------------------
* Log shipping supports 4 jobs
                * Backup Job
                * Copy Job
                * Restore Job
                * Alert Job

1. Backup Job
-------------------
                * It is created in primary server for every log shipping configuration.
                * It takes backup of T.Log file periodically and deletes old backups and old history information.
                * We have to grant read-write permissions on the backup folder to the primary server service account
    and read permission to the secondary server account.

2. Copy Job
----------------
                * Created in secondary server for every log shipping configuration.
                * Copy the backup files from backup folder into copy folder.
                * It deletes old files and old history from copy folder.
                * On the backup folder we have to grant read permission to the secondary server account, and read-
   write permissions on the copy folder.

3. Restore Job
--------------------
                * It is created in secondary server for every log shipping configuration.
                * It restores the files from Copy folder into secondary server in standby mode.
4. Alert Job
----------------
                * It is created in Monitor server.
                * If monitor server is not used it is created in primary and secondary servers.
                * Only one instance of Alert Job is created.


Requirements
-------------------
* Minimum 2 servers are required.
* Database must be in FULL or BULK LOGGED recovery model.
* Any of the editions
                * Enterprise Edition
                * Standard Edition
                * Workgroup Edition
* Both the servers should have same collation settings.

Steps
--------

1.       Go to SSMS and connect to the 3 instances
2.       Note down their service accounts
SYS1 (SQL Server 9.0.5000 – WIPRO\administrator)
SYS2 (SQL Server 9.0.5000 – WIPRO\administrator)
SYS3 (SQL Server 9.0.5000 – WIPRO\administrator)

          
               
3.       Create the following folders
                * In Primary Server F:\logshipping_backup
                * Grant read write permissions on this folder to Primary Server account
                * Grant read permission to secondary server account

                * In Secondary server E:\logshipping_copy
                * Grant read write permissions to secondary server account


4.       Test the sharing features as
                * In Primary server
                                * Start --> Run --> \\sys1\logshipping_backup
                * In Secondary Server
                                * Start --> Run --> \\sys3\logshipping_copy
5.       In Primary server
                1. Create database with the name: suresh
                                CREATE DATABASE suresh


                2. Create sample table (EMP TABLE)

CREATE TABLE EMP
(
EMPNO NUMERIC(4) ,
ENAME VARCHAR(10),
JOB VARCHAR(9),
MGR NUMERIC(4),
HIREDATE DATETIME,
SAL NUMERIC(7, 2),
COMM NUMERIC(7, 2),
DEPTNO NUMERIC(2)
);
INSERT INTO EMP VALUES(7369,'SMITH','CLERK',7902,'1980-12-17',800, 300, 20);
INSERT INTO EMP VALUES(7499,'ALLEN','SALESMAN',7698,'1981-02-20',1600,300,30);
INSERT INTO EMP VALUES(7521,'WARD','SALESMAN',7698,'1981-02-22',1250,500,30);
INSERT INTO EMP VALUES(7566,'JONES','MANAGER',7839,'1981-04-02',2975,300,20);
                3. Take a full backup and a log backup
BACKUP DATABASE suresh TO DISK='F:\SQL_BACKUPS\suresh_full.bak'
BACKUP LOG suresh TO DISK='F:\SQL_BACKUPS\suresh_log.trn'
(Not required to perform manually)


Note:
                We have to configure on existing database.
6.       Go to Secondary Server          
* Create folder with the name d:\SureshFiles and grant read write permissions to service account.
                (Not required to perform manually)
               
* R.C on database (suresh) --> Restore --> Database…


                   To Database:   suresh
                   Select From Device:
                   Click on browse button --> Add--> select suresh_full.bak file (\\sys1\SQL_BACKUPS\suresh_full.bak) --> OK






                * Select checkbox under Restore option.



               * Go to options --> select recovery state : with standby
                --> OK


7.       Configuring Log Shipping
               
* Go to Primary Server
                * Right click on Sales db
                * Tasks --> Ship Transaction Log

                * Select checkbox "Enable this as primary database...."
                * Click on "Backup Settings" --> Enter backup folder path
                                \\SYS1\LSBackup_Sales


                * Click on "Schedule" Change the time to 5minutes
                                Occurs Every: 5 minutes
                * OK
                * Under "Secondary Databases" click on Add button

                * Secondary server instance = Click on "Connect" button --> Select secondary server instance


                * Select the option "No, the secondary database is initialized" (we already restored the full backup in the previous step; otherwise choose "Yes, generate a full backup...")



                * Click on Copy Files tab --> enter Destination folder as
                                \\SYS3\LSCopy_Sales
                * Click on schedule Change the time to 5minutes
                                Occurs Every: 5 minutes
                * Click on Restore transaction log tab -->
               

* Select "Standby Mode" option and checkbox "Disconnect users in the database when..."
                * Click on schedule Change the time to 5minutes
                                Occurs Every: 5 minutes
                * OK
                * OK
                * OK

Observations
------------------
1.       Go to primary server --> SQL Server Agent --> Jobs --> View the backup job with the name 
LSBackup_Sales
2.       If no monitor server is used, check that 2 alerts are created in the Alerts folder.
                Log shipping Primary Server Alert
                Log shipping Secondary Server Alert
3.       In Secondary server verify that 2 jobs are created
                * Copy                  (LSCopy_Sales)
                * Restore            (LSRestore_Sales)
4.       All the above (Backup, Copy, Restore) jobs use the "sqllogship.exe" file.
5.       The Alert job calls the following SP
                sys.sp_check_log_shipping_monitor_alert
6.       sqlmaint.exe is responsible for updating backup, copy and restore information in the monitor    
     server / primary / secondary servers.
7.       Linked Servers are created in the primary and secondary servers, related to the monitor, as
LOGSHIPLINK_<MonitorServerName>
8.       The following SPs are used to configure Log Shipping
                1. master.dbo.sp_add_log_shipping_secondary_primary
                2. master.dbo.sp_add_log_shipping_secondary_database

Configuring Alerts
-------------------------
* Once we configure Log shipping 2 alerts are created automatically.
* We have to configure response for the alerts.

Steps
--------
* Go to Primary server --> SQL Server Agent --> Alerts --> Right Click on Log shipping Primary server alert --> Properties --> Response -->Notify Operators --> Select existing operator and select Email
                * Go to secondary server configure "Log shipping secondary alert…"

Monitoring Log Shipping
----------------------------------
1.       Using MSDB tables and Views
                * Go to primary server --> MSDB
                                1. Log_shipping_primary_databases
                                                * Consists of information related to
                                                                * Log Backup folder path
                                                                * Last Log backup file name
                2. Log_shipping_primary_secondaries
                                * Consists of details of secondary server name and database name
                * Go to Secondary Server --> MSDB
                                1. log_shipping_secondary
                                                * Details of copy folder and last copied file
                                2. log_shipping_secondary_databases
                                                * Details of last restore file
Creating Linked Server in Server1 (primary) for Monitoring
--------------------------------------------------------------------------------
1.       Go to secondary server Server2 and create a login which can access msdb tables.
                use master
                go
                create login link_login with password='hyd'
                go
                use msdb
                go
                create user link_login for login link_login
                go
                grant select to link_login


* Go to primary server(server1) --> server Objects -->R.C on Linked Server -->
                Enter server name : SERVER2 -->
                Select "SQL Server"
                --> Select Security   --> Be made using .... Enter login name and password --> OK
                Enter login name: link_login
                password=hyd
               OK
* Go to Server1 write the following script

select t1.primary_database AS DBName, t1.last_backup_file,
       t2.last_copied_file, t3.last_restored_file
from msdb.dbo.log_shipping_primary_databases t1
join [SERVER2].msdb.dbo.log_shipping_secondary t2
  on t1.primary_database = t2.primary_database
join [SERVER2].msdb.dbo.log_shipping_secondary_databases t3
  on t2.secondary_id = t3.secondary_id

2. Using Log Shipping Status Report
-------------------------------------------------
* Go to Primary Server --> R.C on Server name --> Reports --> Standard Reports --> Transaction Log Shipping Status

3. Using Agent Job History
------------------------------------
* We can monitor the job history, and if there are any issues we can resolve them.

Points to Remember
-----------------------------
* As part of log shipping only database users are transferred to the secondary server (database).
* Logins are not transferred as part of log shipping. We have to transfer them manually or by creating a job.
* Example latency calculation:
                Log backup schedule     = 15 min
                Copy job schedule       = 15 min
                Restore job schedule    = 15 min

If changes are made on the primary server at 8:00 am, the maximum time for them to reach the secondary server is 8:45 am (15 + 15 + 15).
* If a table is truncated on the primary server, it is truncated in the secondary server database as well, because TRUNCATE is a minimally logged (not unlogged) operation.
* Log shipping stops working if we change the database recovery model from FULL or BULK_LOGGED to SIMPLE.

FAQ: - If we take a FULL backup on the primary server, does log shipping continue?

Ans:
                * A full backup does not break the log backup LSN chain. Hence the log shipping process continues.
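On SQL Server 2005 and later, an ad-hoc full backup can additionally be taken WITH COPY_ONLY so it does not disturb the differential base either; a sketch with illustrative names:

```sql
-- COPY_ONLY full backup: independent of the normal backup sequence.
BACKUP DATABASE suresh
TO DISK = 'F:\AdHoc\suresh_copyonly.bak'
WITH COPY_ONLY;
```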



Log Shipping Issues
---------------------------

1. Out of sync
-------------------
* Raised when the T-Log backups are not applied on the secondary server, with error number 14421. Possible causes:
                * There is no disk space on the secondary server for restoring backups.
                * The path needed to create the files does not exist on the secondary server.
                * An ad-hoc T-Log backup was taken on the primary server, e.g.:
                                                9:00        T.Log     1000
                                                9:10        T.Log     1050 (Ad-hoc)
                                                9:15        T.Log     1100
                * The Agent service or msdb is not working on the secondary server.
                * A T-Log backup is missing or corrupted before restoring.
Solution
------------
* Restore the ad-hoc log backup taken at 9:10 manually on the secondary server with NORECOVERY.
* Restore the next log backup generated by the log shipping process, i.e. the one from 9:15.
* If any T-Log backups are missing or corrupted:
                                * Disable the log shipping jobs.
                                * Take a differential/full backup and apply it on the secondary server.
                                * Enable the jobs.
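The manual catch-up restores described above would look roughly like this (file names and paths are illustrative):

```sql
-- Apply the ad-hoc 9:10 backup first, then the regular 9:15 backup,
-- both leaving the database ready for further log restores.
RESTORE LOG suresh
FROM DISK = 'E:\AdHoc\suresh_0910.trn'
WITH NORECOVERY;

RESTORE LOG suresh
FROM DISK = 'E:\logshipping_copy\suresh_0915.trn'
WITH NORECOVERY;
```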

2. Copy and Restore jobs are not running
--------------------------------------------------------
* Check the secondary server date and time, i.e. whether the secondary server date and time are behind the primary server.
                * Change the start date for the above jobs to match the secondary server date.

FAQ: - If the secondary server is down, or the Agent service on the secondary server is not running, or msdb is not online, the copy and restore jobs fail. Once the Agent service is started, how are the backup files copied?
Ans:
                * All the backup files since the last copied file are copied at once by the copy job.

FAQ: - On the secondary server the restore job was failing with the error message
"it is too early to apply the logs". What may be the possible reasons?

Ans:
                * An earlier log backup in the sequence has not been restored yet. Locate any previous backup file and restore it first.

Performing fail over
-----------------------------
* Failover is the process of making the secondary database available to the applications or users when the primary server/database has failed.
* The log shipping failover process is not automatic; it is a manual failover.
Steps
--------
1.       Perform one last T.Log backup in primary server if possible
2.       Disable Log shipping jobs
3.       Restore if any backups are not applied at secondary server with NORECOVERY.
4.       Restore tail log backup with RECOVERY.
5.       Transfer the Logins related to that db.
6.       Make the secondary server available to the users.
7.       Configure Log shipping.
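Steps 1, 3 and 4 can be sketched as follows (database name, paths and file names are illustrative):

```sql
-- 1. On the primary (if accessible): tail-log backup,
--    leaving the old primary in the restoring state.
BACKUP LOG suresh
TO DISK = 'F:\Failover\suresh_tail.trn'
WITH NORECOVERY;

-- 3. On the secondary: apply any remaining shipped log backups.
RESTORE LOG suresh
FROM DISK = 'E:\logshipping_copy\suresh_last.trn'
WITH NORECOVERY;

-- 4. Apply the tail-log backup and bring the database online.
RESTORE LOG suresh
FROM DISK = '\\SYS1\Failover\suresh_tail.trn'
WITH RECOVERY;
```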





TRANSFERRING LOGINS FROM ONE INSTANCE TO ANOTHER INSTANCE
----------------------------------------------------------------------------------------------
--The first 2 steps should be executed on the primary server
--Copy the step-2 output and run it on the standby server

--Step1--
USE master
GO
IF OBJECT_ID ('sp_hexadecimal') IS NOT NULL
DROP PROCEDURE sp_hexadecimal
GO
CREATE PROCEDURE sp_hexadecimal
@binvalue varbinary(256),
@hexvalue varchar(256) OUTPUT
AS
DECLARE @charvalue varchar(256)
DECLARE @i int
DECLARE @length int
DECLARE @hexstring char(16)
SELECT @charvalue = '0x'
SELECT @i = 1
SELECT @length = DATALENGTH (@binvalue)
SELECT @hexstring = '0123456789ABCDEF'
WHILE (@i <= @length)
BEGIN
DECLARE @tempint int
DECLARE @firstint int
DECLARE @secondint int
SELECT @tempint = CONVERT(int, SUBSTRING(@binvalue,@i,1))
SELECT @firstint = FLOOR(@tempint/16)
SELECT @secondint = @tempint - (@firstint*16)
SELECT @charvalue = @charvalue +
SUBSTRING(@hexstring, @firstint+1, 1) +
SUBSTRING(@hexstring, @secondint+1, 1)
SELECT @i = @i + 1
END
SELECT @hexvalue = @charvalue
GO
IF OBJECT_ID ('sp_help_revlogin') IS NOT NULL
DROP PROCEDURE sp_help_revlogin
GO
CREATE PROCEDURE sp_help_revlogin @login_name sysname = NULL AS
DECLARE @name sysname
DECLARE @xstatus int
DECLARE @binpwd varbinary (256)
DECLARE @txtpwd sysname
DECLARE @tmpstr varchar (256)
DECLARE @SID_varbinary varbinary(85)
DECLARE @SID_string varchar(256)
IF (@login_name IS NULL)
DECLARE login_curs CURSOR FOR
SELECT sid, name, xstatus, password
FROM master..sysxlogins
WHERE srvid IS NULL
AND name <> 'sa'
ELSE
DECLARE login_curs CURSOR FOR
SELECT sid, name, xstatus, password
FROM master..sysxlogins
WHERE srvid IS NULL
AND name = @login_name
OPEN login_curs
FETCH NEXT
FROM login_curs INTO @SID_varbinary, @name, @xstatus, @binpwd
IF (@@fetch_status = -1)
BEGIN
PRINT 'No login(s) found.'
CLOSE login_curs
DEALLOCATE login_curs
RETURN -1
END
SET @tmpstr = '/* sp_help_revlogin script '
PRINT @tmpstr
SET @tmpstr = '** Generated '
+ CONVERT (varchar, GETDATE()) + ' on ' + @@SERVERNAME + ' */'
PRINT @tmpstr
PRINT ''
PRINT 'DECLARE @pwd sysname'
WHILE (@@fetch_status <> -1)
BEGIN
IF (@@fetch_status <> -2)
BEGIN
PRINT ''
SET @tmpstr = '-- Login: ' + @name
PRINT @tmpstr
IF (@xstatus & 4) = 4
BEGIN -- NT authenticated account/group
IF (@xstatus & 1) = 1
BEGIN -- NT login is denied access
SET @tmpstr = 'EXEC master..sp_denylogin ''' + @name + ''''
PRINT @tmpstr
END
ELSE
BEGIN -- NT login has access
SET @tmpstr = 'EXEC master..sp_grantlogin ''' + @name + ''''
PRINT @tmpstr
END
END
ELSE
BEGIN -- SQL Server authentication
IF (@binpwd IS NOT NULL)
BEGIN -- Non-null password
EXEC sp_hexadecimal @binpwd, @txtpwd OUT
IF (@xstatus & 2048) = 2048
SET @tmpstr = 'SET @pwd = CONVERT (varchar(256), ' + @txtpwd + ')'
ELSE
SET @tmpstr = 'SET @pwd = CONVERT (varbinary(256), ' + @txtpwd + ')'
PRINT @tmpstr
EXEC sp_hexadecimal @SID_varbinary,@SID_string OUT
SET @tmpstr = 'EXEC master..sp_addlogin ''' + @name
+ ''', @pwd, @sid = ' + @SID_string + ', @encryptopt = '
END
ELSE
BEGIN
-- Null password
EXEC sp_hexadecimal @SID_varbinary,@SID_string OUT
SET @tmpstr = 'EXEC master..sp_addlogin ''' + @name
+ ''', NULL, @sid = ' + @SID_string + ', @encryptopt = '
END
IF (@xstatus & 2048) = 2048
-- login upgraded from 6.5
SET @tmpstr = @tmpstr + '''skip_encryption_old'''
ELSE
SET @tmpstr = @tmpstr + '''skip_encryption'''
PRINT @tmpstr
END
END
FETCH NEXT
FROM login_curs INTO @SID_varbinary, @name, @xstatus, @binpwd
END
CLOSE login_curs
DEALLOCATE login_curs
RETURN 0
GO
----- End Script -----
Note: - This script creates two stored procedures in the master database, named sp_hexadecimal and sp_help_revlogin.

--step2:                   Run the following statement.
EXEC master..sp_help_revlogin
GO

--step3:                   The above SP generates some output as follows

/* sp_help_revlogin script
** Generated May 25 2009 9:11PM on ONLINE */
-- Login: BUILTIN\Administrators
CREATE LOGIN [BUILTIN\Administrators] FROM WINDOWS WITH DEFAULT_DATABASE = [master]

-- Login: NT AUTHORITY\SYSTEM
CREATE LOGIN [NT AUTHORITY\SYSTEM] FROM WINDOWS WITH DEFAULT_DATABASE = [master]
Run the above output (for the required logins) on the standby server instance.

--You can download from
......................................................


Administering and Managing Log Shipping
=======================================

Log shipping is one of four SQL Server 2005 high-availability solutions.

Other SQL Server 2005 high-availability alternatives include
                * Database Mirroring
                * Failover Clustering
                * Peer-to-Peer Replication.

Note: - Database mirroring and Peer-to-Peer Replication are new technologies introduced with SQL Server 2005

Log shipping offers increased database availability and database protection by maintaining a warm standby database on another instance of SQL Server 2005.

Unlike database mirroring, a log shipping failover is not done automatically; it is a manual failover.

Factors affecting Log Shipping
=================================
The size of the transaction log backups, speed of the network, and length of time the restoration takes all play a significant role in planning and scheduling the log shipping solution.

Log Shipping Design Considerations
===================================

*             SQL Server Version: SQL Server 2005 Standard, Workgroup and Enterprise Editions can be used for log shipping. All servers must be configured with the same case-sensitivity settings.

*             Recovery Mode: The recovery mode of the source database on the primary server must be        configured as full or bulk-logged. Because the transaction log is a key part of log shipping, the simple recovery model cannot be used.

*             Monitor Server: The monitor server is optional. This server is important because it provides a central location for the log shipping status and alert messages.

*             Security: Sysadmin role is required on each SQL Server that will take part in the Log Shipping. The agent account used to run the backup job on the primary server must be able to read and write to the backup location.

*             Backup & Restore T Log Location: The backup storage location is used to store the transaction log backups created by the backup job. It is highly recommended to host the transaction log backups on a fault-tolerant server independent of the log shipping primary or secondary servers having enough disk space. Similarly on backup server, enough drives should be available.




Viewing Log Shipping Reports
=============================

* Log shipping reports can be viewed from the primary, secondary, and monitor servers. However, viewing the reports from the monitor server is most effective because the monitor server contains records from both the primary and secondary servers. Viewing the log shipping report from the primary and secondary servers shows only half the data.






Log Shipping Tables
====================

                Table Name                              Description
                ----------------                        ---------------

log_shipping_monitor_alert              Stores the alert job ID.

log_shipping_monitor_error_detail       Stores error details for log shipping jobs.

log_shipping_monitor_history_detail     Contains history details for log shipping agents.

log_shipping_monitor_primary            Stores one monitor record for the primary database in each
                                        log shipping configuration, including information about the
                                        last backup file and last restored file that is useful for
                                        monitoring.

log_shipping_monitor_secondary          Stores one monitor record for each secondary database,
                                        including information about the last backup file and last
                                        restored file that is useful for monitoring.
Reasons of Log shipping Failure
==============================

* Network Failure
* No Disk space on Secondary Server
* Read/Write permission removed from Log folder
* Password Expired of SS Agent Service
* Copy / Restore job is not running

Prerequisite to configure LS
-------------------------------------

* 2 Servers ( Primary & Secondary)
* Recovery model should be FULL
* SQL 2005 Enterprise/Standard edition
* Same collation on both Servers
* Agent should be in Automatic Mode
* One shared folder on primary server to keep log backups
-------------------------------------------------------------------






Script to check Log shipping Status - SQL2000
------------------------------------------------------------

select p.primary_Server ,p.primary_database ,s.secondary_server, p.last_backup_file,s.last_copied_file,
a.       s.last_restored_file
from msdb..Log_shipping_monitor_primary p , msdb..Log_shipping_monitor_secondary S
where p.primary_database = S.primary_database




What is .TUF file in Log Shipping?
===========================

TUF file is a Microsoft SQL Server Transaction Undo file. .TUF File contains the information regarding any modifications that were made as part of incomplete transactions at the time the backup was performed.

A transaction undo(.TUF) file is required if a database is loaded in read-only state. In this state, further transaction log backups may be applied.


TUF File in Log Shipping
====================

The transaction undo file contains modifications that were not committed on the source database but were in progress when the transaction log was backed up AND when the log was restored to another
database, you left the database in a state that allowed addition transaction log backups to be restored to it (at some point in the future. When another transaction log is restored, SQL Server uses data
from the undo file and the transaction log to continue restoring the incomplete transactions (assuming that they are completed in the next transaction log file). Following the restore, the undo file will be re-
written with any transactions that, at that point, are incomplete.
Hope its not too geeky.

Question: In my environment there is an issue with Log shipping destination file path, I've to change the file path on the destination, I've changed and LS copy is working fine and LS restore is failing because it is trying find the .tuf file on the old path which is not exists on the destination.

I don't want to do full restore for 30+ databases, so I'm trying to update the .tuf path on msdb on destination server but I couldn't find out the path details on any of the log shipping system tables. I knew the last restored file path details can be found ondbo.log_shipping_monitor_secondary ,dbo.log_shipping_secondary_databases
tables, updating these tables not helping to resolve my issue.

High Availability Features
==========================

* To reduce or avoid server or database downtime, so that applications and users can keep working with the server, SQL Server supports 3 types of DR features.

                1. Server Level High Availability features
                                * Clustering
                2. Database Level High Availability features
                                * T.Log Shipping
                                * Database Mirroring
                3. Object Level High Availability features
                                * Replication

1. Log Shipping
--------------------
* It is a simple method of keeping a copy of an entire database on a different server.
* It works with backup, copy and restore jobs.
* We need 3 servers
                * Primary server
                * Secondary server
                * Monitor server (optional)
* On the primary server the database is online, and the applications or users are connected to it.
* On the secondary server the database is in standby mode, where we can read data.
* On the secondary server, T.Log backups are applied with either
                * NORECOVERY or
                * STANDBY
* We can have multiple secondary servers.
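As a rough sketch of the two restore modes (the database name, backup file and .tuf paths here are illustrative, not taken from the configuration below):

```sql
-- NORECOVERY: database stays in "Restoring..." state; it cannot be read,
-- but further log backups can still be applied.
RESTORE LOG suresh
FROM DISK = 'E:\logshipping_copy\suresh_log.trn'
WITH NORECOVERY;

-- STANDBY: database is readable between restores; uncommitted transactions
-- are parked in an undo (.tuf) file so the next log backup can still be applied.
RESTORE LOG suresh
FROM DISK = 'E:\logshipping_copy\suresh_log.trn'
WITH STANDBY = 'E:\logshipping_copy\suresh.tuf';
```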

Advantages
-----------------
                * To reduce downtime for the applications or users
                * To implement load balancing, i.e. reads can be served from the secondary server while
data modifications happen on the primary server.

Points to Remember:
-----------------------------

* We can ship all databases except master, model, read-only databases, and databases whose recovery model is SIMPLE.
* We should have sysadmin privileges on both servers.
* Changing the recovery model from FULL or BULK_LOGGED to SIMPLE will break the log shipping process.
* We cannot take ad-hoc transaction log backups on the primary server, as that breaks the log shipping log backup chain.
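If an ad-hoc log backup is genuinely needed on the primary, a COPY_ONLY log backup (available from SQL Server 2005 onwards) does not break the chain; the path below is only an example:

```sql
-- COPY_ONLY does not affect the log backup sequence,
-- so the log shipping backup job's chain stays intact.
BACKUP LOG suresh
TO DISK = 'F:\AdhocBackups\suresh_copyonly.trn'
WITH COPY_ONLY;
```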

Architecture
------------------

Log Shipping Jobs
------------------------
* Log shipping supports 4 jobs
                * Backup Job
                * Copy Job
                * Restore Job
                * Alert Job

1. Backup Job
-------------------
                * It is created on the primary server for every log shipping configuration.
                * It takes a T.Log backup periodically and deletes old backup files and old history information.
                * We have to grant read/write permissions on the backup folder to the primary server service account,
    and read permission to the secondary server account.

2. Copy Job
----------------
                * Created on the secondary server for every log shipping configuration.
                * Copies the backup files from the backup folder into the copy folder.
                * Deletes old files and old history information from the copy folder.
                * On the backup folder we have to grant read permission to the secondary server account, and read/write
    permissions on the copy folder.

3. Restore Job
--------------------
                * It is created on the secondary server for every log shipping configuration.
                * It restores the files from the copy folder onto the secondary server in standby mode.
4. Alert Job
----------------
                * It is created on the monitor server.
                * If no monitor server is used, it is created on the primary and secondary servers.
                * Only one instance of the alert job is created.


Requirements
-------------------
* Minimum 2 servers are required.
* Database must be in FULL or BULK LOGGED recovery model.
* Any of the editions
                * Enterprise Edition
                * Standard Edition
                * Workgroup Edition
* Both the servers should have same collation settings.

Steps
--------

1. Go to SSMS and connect to the 3 instances
2. Note down their service accounts
SYS1 (SQL Server 9.0.5000 – WIPRO\administrator)
SYS2 (SQL Server 9.0.5000 – WIPRO\administrator)
SYS3 (SQL Server 9.0.5000 – WIPRO\administrator)

          
               
3. Create the following folders
                * In Primary Server F:\logshipping_backup
                * Grant read write permissions on this folder to Primary Server account
                * Grant read permission to secondary server account

                * In Secondary server E:\logshipping_copy
                * Grant read write permissions to secondary server account


4. Test the sharing as follows
                * In Primary server
                                * Start --> Run --> \\sys1\logshipping_backup
                * In Secondary Server
                                * Start --> Run --> \\sys3\logshipping_copy
5. In Primary Server
                1. Create database with the name: suresh
                                CREATE DATABASE suresh


                2. Create sample table (EMP TABLE)

CREATE TABLE EMP
(
EMPNO NUMERIC(4) ,
ENAME VARCHAR(10),
JOB VARCHAR(9),
MGR NUMERIC(4),
HIREDATE DATETIME,
SAL NUMERIC(7, 2),
COMM NUMERIC(7, 2),
DEPTNO NUMERIC(2)
);
INSERT INTO EMP VALUES(7369,'SMITH','CLERK',7902,'1980-12-17',800, 300, 20);
INSERT INTO EMP VALUES(7499,'ALLEN','SALESMAN',7698,'1981-02-20',1600,300,30);
INSERT INTO EMP VALUES(7521,'WARD','SALESMAN',7698,'1981-02-22',1250,500,30);
INSERT INTO EMP VALUES(7566,'JONES','MANAGER',7839,'1981-04-02',2975,300,20);
                3. Take a full backup & a log backup
BACKUP DATABASE suresh TO DISK='F:\SQL_BACKUPS\suresh_full.bak'
BACKUP LOG suresh TO DISK='F:\SQL_BACKUPS\suresh_log.trn'
(Not required to perform manually)


Note:
                We have to configure log shipping on an existing database.
6. Go to Secondary Server
* Create folder with the name d:\SureshFiles and grant read write permissions to service account.
                (Not required to perform manually)
               
* R.C on database (suresh) --> Restore --> Database…


                   To Database:   suresh
                   Select From Device:
                   Click on browse button --> Add--> select suresh_full.bak file (\\sys1\SQL_BACKUPS\suresh_full.bak) --> OK






                * Select checkbox under Restore option.



               * Go to options --> select recovery state : with standby
                --> OK


7. Configuring Log Shipping
               
* Go to Primary Server
                * Right click on Sales db
                * Tasks --> Ship Transaction Log

                * Select checkbox "Enable this as primary database...."
                * Click on "Backup Settings" --> Enter backup folder path
                                \\SYS1\LSBackup_Sales


                * Click on "Schedule" Change the time to 5minutes
                                Occurs Every: 5 minutes
                * OK
                * Under "Secondary Databases" click on Add button

                * Secondary server instance = Click on "Connect" button --> Select secondary server instance


                * Select the option "No, the secondary database is initialized", since the full backup was restored manually in step 6 (otherwise choose "Yes, generate a full backup of the primary database")



                * Click on Copy Files tab --> enter Destination folder as
                                \\SYS3\LSCopy_Sales
                * Click on schedule Change the time to 5minutes
                                Occurs Every: 5 minutes
                * Click on Restore transaction log tab -->
               

* Select "Standby Mode" option and checkbox "Disconnect users in the database when..."
                * Click on schedule Change the time to 5minutes
                                Occurs Every: 5 minutes
                * OK
                * OK
                * OK

Observations
------------------
1. Go to the primary server --> SQL Server Agent --> Jobs --> view the backup job with the name
LSBackup_Sales
2. If no monitor server is used, check that 2 alerts are created in the Alerts folder.
                Log shipping Primary Server Alert
                Log shipping Secondary Server Alert
3. On the secondary server verify that 2 jobs are created
                * Copy                  (LSCopy_Sales)
                * Restore            (LSRestore_Sales)
4. All the above (backup, copy, restore) jobs use "sqllogship.exe".
5. The alert job calls the following SP:
                sys.sp_check_log_shipping_monitor_alert
6. sqlmaint.exe is responsible for updating backup, copy and restore information on the monitor/primary/secondary servers.
7. Linked servers named LOGSHIPLINK_<MonitorServerName> are created on the primary and secondary servers, pointing to the monitor server.
8. The following SPs are used to configure log shipping:
                1. master.dbo.sp_add_log_shipping_primary_database (on the primary server)
                2. master.dbo.sp_add_log_shipping_secondary_primary and
                   master.dbo.sp_add_log_shipping_secondary_database (on the secondary server)

Configuring Alerts
-------------------------
* Once we configure Log shipping 2 alerts are created automatically.
* We have to configure response for the alerts.

Steps
--------
* Go to Primary server --> SQL Server Agent --> Alerts --> Right Click on Log shipping Primary server alert --> Properties --> Response -->Notify Operators --> Select existing operator and select Email
                * Go to secondary server configure "Log shipping secondary alert…"

Monitoring Log Shipping
----------------------------------
1.       Using MSDB tables and Views
                * Go to primary server --> MSDB
                                1. Log_shipping_primary_databases
                                                * Consists of information related to
                                                                * Log Backup folder path
                                                                * Last Log backup file name
                2. Log_shipping_primary_secondaries
                                * Consists of details of secondary server name and database name
                * Go to Secondary Server --> MSDB
                                1. log_shipping_secondary
                                                * Details of copy folder and last copied file
                                2. log_shipping_secondary_databases
                                                * Details of last restore file
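Using the secondary-server tables above, a quick "copied vs restored" check might look like this (a sketch; run it in the secondary server's msdb):

```sql
-- Last file copied by the copy job vs last file applied by the restore job.
SELECT s.secondary_server,
       sd.secondary_database,
       s.last_copied_file,
       sd.last_restored_file,
       sd.last_restored_date
FROM msdb.dbo.log_shipping_secondary AS s
JOIN msdb.dbo.log_shipping_secondary_databases AS sd
  ON s.secondary_id = sd.secondary_id;
```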
Creating Linked Server in Server1 (primary) for Monitoring
--------------------------------------------------------------------------------
1.       Go to secondary server Server2 and create a login which can access msdb tables.
                use master
                go
                create login link_login with password='hyd'
                go
                use msdb
                go
                create user link_login for login link_login
                go
                grant select to link_login


* Go to the primary server (server1) --> Server Objects --> R.C on Linked Servers --> New Linked Server...
                Enter server name: SERVER2
                Select "SQL Server"
                --> Security --> "Be made using this security context" -->
                Enter login name: link_login
                Password: hyd
                --> OK
* Go to Server1 write the following script

select t1.primary_database AS DBName, t1.last_backup_file,
       t2.last_copied_file, t3.last_restored_file
from msdb.dbo.log_shipping_primary_databases t1
join [SERVER2].msdb.dbo.log_shipping_secondary t2
  on t1.primary_database = t2.primary_database
join [SERVER2].msdb.dbo.log_shipping_secondary_databases t3
  on t2.secondary_id = t3.secondary_id

2. Using Log Shipping Status Report
-------------------------------------------------
* Go to Primary Server --> R.C on Server name --> Reports --> Standard Reports --> Transaction Log Shipping Status

3. Using Agent Job History
------------------------------------
* We can monitor the jobs history and if any issues are there we can resolve it.

Points to Remember
-----------------------------
* As part of Log shipping only database users are transferred to secondary server (db).
* Logins are not transferred as part of log shipping. We have to transfer manually or by creating job.
* Example: with the schedules
                Log backup schedule = 15 min
                Copy job schedule = 15 min
                Restore job schedule = 15 min
  a change made on the primary server at 8:00 AM can take until 8:45 AM (15+15+15) to reach the secondary server.
* If a table is truncated on the primary server, it is truncated on the secondary database as well, because TRUNCATE is a (minimally) logged operation and is carried over in the log backups.
* Log shipping is stopped if we change the db recovery model from FULL or BULK_LOGGED to SIMPLE.

FAQ: - If we take a FULL backup on the primary server, does log shipping continue?

Ans:
                * A full backup does not break the log backup LSN chain, so the log shipping process continues.
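One way to see the LSN chain for yourself is to query the backup history in msdb on the primary (the database name here is just an example):

```sql
-- Each log backup's first_lsn should equal the previous log backup's last_lsn;
-- a gap here means a log backup is missing from the chain.
SELECT backup_start_date, type, first_lsn, last_lsn
FROM msdb.dbo.backupset
WHERE database_name = 'suresh'
  AND type = 'L'        -- 'L' = log backup, 'D' = full, 'I' = differential
ORDER BY backup_start_date;
```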



Log Shipping Issues
---------------------------

1. Out of Sync
-------------------
* It is raised when the T.Log backups are not applied on the secondary server, with error number 14421. Possible causes:
                * There is no disk space on the secondary server for restoring backups.
                * The path needed to create the files does not exist on the secondary server.
                * An ad-hoc T.Log backup was taken on the primary server, e.g.:
                                                9:00        T.Log     1000
                                                9:10        T.Log     1050 (ad-hoc)
                                                9:15        T.Log     1100
Solution
------------
* Restore the ad-hoc log backup taken at 9:10 manually on the secondary server with NORECOVERY.
* Then restore the next log backup generated by the log shipping process, i.e. the 9:15 one.
* Other causes: the Agent service or msdb is not working on the secondary server, or a T.Log backup is missing/corrupted before restoring. In that case:
                                * Disable the log shipping jobs.
                                * Take a differential/full backup and apply it on the secondary server.
                                * Enable the jobs.
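Restoring the missed ad-hoc backup by hand on the secondary follows the usual pattern (file names and shares below are illustrative):

```sql
-- Apply the ad-hoc 9:10 log backup that the restore job knows nothing about;
-- NORECOVERY keeps the database ready for further log restores.
RESTORE LOG suresh
FROM DISK = '\\SYS1\AdhocBackups\suresh_0910.trn'
WITH NORECOVERY;

-- The next log-shipping restore job run (the 9:15 file onwards) can then continue.
```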

2. Copy and Restore Jobs Are Not Running
--------------------------------------------------------
* Check the secondary server date and time, i.e. whether the secondary server's clock is behind the primary server's.
                * Change the start date of the above jobs to match the secondary server date.

FAQ: - If the secondary server is down, or the Agent service on the secondary server is not running, or msdb is not online, the copy and restore jobs fail. Once the Agent service is started, how are the backup files copied?
Ans:
                * The copy job copies all the backup files since the last copied file in one run.

FAQ: - On the secondary server the restore job was failing with the error message
"it is too early to apply the logs". What may be the possible reasons?

Ans:
                * A log backup earlier in the sequence has not been restored yet; find and restore the missing previous backup file first.

Performing fail over
-----------------------------
* Failover is the process of making the secondary database available to the applications or users when the primary server/database fails.
* The log shipping failover process is not automatic.
Steps
--------
1. Perform one last (tail) T.Log backup on the primary server, if possible.
2. Disable the log shipping jobs.
3. Restore any backups not yet applied on the secondary server WITH NORECOVERY.
4. Restore the tail log backup WITH RECOVERY.
5. Transfer the logins related to that database.
6. Make the secondary server available to the users.
7. Configure log shipping again.
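Steps 1, 3 and 4 above amount to something like the following (paths and file names are examples only):

```sql
-- On the primary (if it is still reachable): tail-log backup WITH NORECOVERY
-- leaves the old primary in a restoring state, ready to become a secondary later.
BACKUP LOG suresh
TO DISK = '\\SYS1\logshipping_backup\suresh_tail.trn'
WITH NORECOVERY;

-- On the secondary: apply any remaining log backups WITH NORECOVERY first,
-- then the tail backup WITH RECOVERY to bring the database online.
RESTORE LOG suresh
FROM DISK = 'E:\logshipping_copy\suresh_tail.trn'
WITH RECOVERY;
```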





TRANSFERRING LOGINS FROM ONE INSTANCE TO ANOTHER INSTANCE
----------------------------------------------------------------------------------------------
--The first 2 steps should be executed on the primary server
--Copy the step 2 output and run it on the standby server

--Step1--
USE master
GO
IF OBJECT_ID ('sp_hexadecimal') IS NOT NULL
DROP PROCEDURE sp_hexadecimal
GO
CREATE PROCEDURE sp_hexadecimal
@binvalue varbinary(256),
@hexvalue varchar(256) OUTPUT
AS
DECLARE @charvalue varchar(256)
DECLARE @i int
DECLARE @length int
DECLARE @hexstring char(16)
SELECT @charvalue = '0x'
SELECT @i = 1
SELECT @length = DATALENGTH (@binvalue)
SELECT @hexstring = '0123456789ABCDEF'
WHILE (@i <= @length)
BEGIN
DECLARE @tempint int
DECLARE @firstint int
DECLARE @secondint int
SELECT @tempint = CONVERT(int, SUBSTRING(@binvalue,@i,1))
SELECT @firstint = FLOOR(@tempint/16)
SELECT @secondint = @tempint - (@firstint*16)
SELECT @charvalue = @charvalue +
SUBSTRING(@hexstring, @firstint+1, 1) +
SUBSTRING(@hexstring, @secondint+1, 1)
SELECT @i = @i + 1
END
SELECT @hexvalue = @charvalue
GO
IF OBJECT_ID ('sp_help_revlogin') IS NOT NULL
DROP PROCEDURE sp_help_revlogin
GO
CREATE PROCEDURE sp_help_revlogin @login_name sysname = NULL AS
DECLARE @name sysname
DECLARE @xstatus int
DECLARE @binpwd varbinary (256)
DECLARE @txtpwd sysname
DECLARE @tmpstr varchar (256)
DECLARE @SID_varbinary varbinary(85)
DECLARE @SID_string varchar(256)
IF (@login_name IS NULL)
DECLARE login_curs CURSOR FOR
SELECT sid, name, xstatus, password
FROM master..sysxlogins
WHERE srvid IS NULL
AND name <> 'sa'
ELSE
DECLARE login_curs CURSOR FOR
SELECT sid, name, xstatus, password
FROM master..sysxlogins
WHERE srvid IS NULL
AND name = @login_name
OPEN login_curs
FETCH NEXT
FROM login_curs INTO @SID_varbinary, @name, @xstatus, @binpwd
IF (@@fetch_status = -1)
BEGIN
PRINT 'No login(s) found.'
CLOSE login_curs
DEALLOCATE login_curs
RETURN -1
END
SET @tmpstr = '/* sp_help_revlogin script '
PRINT @tmpstr
SET @tmpstr = '** Generated '
+ CONVERT (varchar, GETDATE()) + ' on ' + @@SERVERNAME + ' */'
PRINT @tmpstr
PRINT ''
PRINT 'DECLARE @pwd sysname'
WHILE (@@fetch_status <> -1)
BEGIN
IF (@@fetch_status <> -2)
BEGIN
PRINT ''
SET @tmpstr = '-- Login: ' + @name
PRINT @tmpstr
IF (@xstatus & 4) = 4
BEGIN -- NT authenticated account/group
IF (@xstatus & 1) = 1
BEGIN -- NT login is denied access
SET @tmpstr = 'EXEC master..sp_denylogin ''' + @name + ''''
PRINT @tmpstr
END
ELSE
BEGIN -- NT login has access
SET @tmpstr = 'EXEC master..sp_grantlogin ''' + @name + ''''
PRINT @tmpstr
END
END
ELSE
BEGIN -- SQL Server authentication
IF (@binpwd IS NOT NULL)
BEGIN -- Non-null password
EXEC sp_hexadecimal @binpwd, @txtpwd OUT
IF (@xstatus & 2048) = 2048
SET @tmpstr = 'SET @pwd = CONVERT (varchar(256), ' + @txtpwd + ')'
ELSE
SET @tmpstr = 'SET @pwd = CONVERT (varbinary(256), ' + @txtpwd + ')'
PRINT @tmpstr
EXEC sp_hexadecimal @SID_varbinary,@SID_string OUT
SET @tmpstr = 'EXEC master..sp_addlogin ''' + @name
+ ''', @pwd, @sid = ' + @SID_string + ', @encryptopt = '
END
ELSE
BEGIN
-- Null password
EXEC sp_hexadecimal @SID_varbinary,@SID_string OUT
SET @tmpstr = 'EXEC master..sp_addlogin ''' + @name
+ ''', NULL, @sid = ' + @SID_string + ', @encryptopt = '
END
IF (@xstatus & 2048) = 2048
-- login upgraded from 6.5
SET @tmpstr = @tmpstr + '''skip_encryption_old'''
ELSE
SET @tmpstr = @tmpstr + '''skip_encryption'''
PRINT @tmpstr
END
END
FETCH NEXT
FROM login_curs INTO @SID_varbinary, @name, @xstatus, @binpwd
END
CLOSE login_curs
DEALLOCATE login_curs
RETURN 0
GO
----- End Script -----
Note: - This script creates two stored procedures in the master database: the sp_hexadecimal stored procedure and the sp_help_revlogin stored procedure.

--Step2: Run the following statement.
EXEC master..sp_help_revlogin
GO

--step3:                   The above SP generates some output as follows

/* sp_help_revlogin script
** Generated May 25 2009 9:11PM on ONLINE */
-- Login: BUILTIN\Administrators
CREATE LOGIN [BUILTIN\Administrators] FROM WINDOWS WITH DEFAULT_DATABASE = [master]

-- Login: NT AUTHORITY\SYSTEM
CREATE LOGIN [NT AUTHORITY\SYSTEM] FROM WINDOWS WITH DEFAULT_DATABASE = [master]
Run the above output (for required logins) in Stand by server instance.

--You can download from
......................................................


Administering and Managing Log Shipping
=======================================

Log shipping is one of four SQL Server 2005 high-availability solutions.

Other SQL Server 2005 high-availability alternatives include
                * Database Mirroring
                * Failover Clustering
                * Peer-to-Peer Replication.

Note: - Database mirroring and Peer-to-Peer Replication are new technologies introduced with SQL Server 2005

Log shipping offers increased database availability and database protection by maintaining a warm standby database on another instance of SQL Server 2005.

Unlike database mirroring, a log shipping failover is not done automatically; it is a manual failover.

Factors affecting Log Shipping
=================================
The size of the transaction log backups, speed of the network, and length of time the restoration takes all play a significant role in planning and scheduling the log shipping solution.

Log Shipping Design Considerations
===================================

* SQL Server Version: SQL Server 2005 Standard, Workgroup and Enterprise Editions can be used for log shipping. All servers must be configured with the same case-sensitivity settings.

* Recovery Model: The recovery model of the source database on the primary server must be configured as full or bulk-logged. Because the transaction log is a key part of log shipping, the simple recovery model cannot be used.

* Monitor Server: The monitor server is optional, but important because it provides a central location for log shipping status and alert messages.

* Security: The sysadmin role is required on each SQL Server that takes part in log shipping. The agent account used to run the backup job on the primary server must be able to read and write to the backup location.

* Backup & Restore T.Log Location: The backup storage location holds the transaction log backups created by the backup job. It is highly recommended to host these backups on a fault-tolerant server, independent of the log shipping primary and secondary servers, with enough disk space.




Viewing Log Shipping Reports
=============================

* Log shipping reports can be viewed from the primary, secondary, and monitor servers. However, viewing the reports from the monitor server is most effective because the monitor server contains records from both the primary and secondary servers. Viewing the log shipping report from the primary and secondary servers shows only half the data.






Log Shipping Tables
====================

                Table Name                              Description
                ----------------                        ---------------

log_shipping_monitor_alert              Stores the alert job ID.

log_shipping_monitor_error_detail       Stores error details for log shipping jobs.

log_shipping_monitor_history_detail     Contains history details for log shipping agents.

log_shipping_monitor_primary            Stores one monitor record for the primary database in each log shipping configuration, including information about the last backup file and last restored file, which is useful for monitoring.

log_shipping_monitor_secondary          Stores one monitor record for each secondary database, including information about the last backup file and last restored file, which is useful for monitoring.
Reasons for Log Shipping Failure
================================

* Network failure
* No disk space on the secondary server
* Read/write permission removed from the log backup folder
* SQL Server Agent service account password expired
* Copy/restore job is not running

Prerequisites to configure LS
-------------------------------------

* 2 servers (primary & secondary)
* Recovery model should be FULL
* SQL 2005 Enterprise/Standard edition
* Same collation on both servers
* SQL Server Agent service should be set to Automatic start mode
* One shared folder on the primary server to keep the log backups
-------------------------------------------------------------------






Script to check Log shipping Status - SQL2000
------------------------------------------------------------

select p.primary_server, p.primary_database, s.secondary_server,
       p.last_backup_file, s.last_copied_file, s.last_restored_file
from msdb..log_shipping_monitor_primary p
join msdb..log_shipping_monitor_secondary s
  on p.primary_database = s.primary_database




What is a .TUF file in Log Shipping?
====================================

A .TUF file is a Microsoft SQL Server transaction undo file. It contains information about any modifications that were part of incomplete transactions at the time the log backup was taken.

A transaction undo (.TUF) file is required if a database is loaded in read-only (standby) state. In this state, further transaction log backups may still be applied.


TUF File in Log Shipping
========================

The transaction undo file contains modifications that were not committed on the source database but were in progress when the transaction log was backed up, in the case where, when the log was restored to another database, you left that database in a state that allows additional transaction log backups to be restored to it at some point in the future. When another transaction log is restored, SQL Server uses the data from the undo file and the transaction log to continue restoring the incomplete transactions (assuming they complete in the next transaction log file). Following the restore, the undo file is rewritten with whatever transactions are incomplete at that point.
Hope it's not too geeky.

Question: In my environment there is an issue with the log shipping destination file path. I had to change the file path on the destination; after the change the LS copy job works fine, but the LS restore job fails because it tries to find the .tuf file on the old path, which does not exist on the destination.

I don't want to do a full restore for 30+ databases, so I tried to update the .tuf path in msdb on the destination server, but I couldn't find the path details in any of the log shipping system tables. I know the last restored file path details can be found in the dbo.log_shipping_monitor_secondary and dbo.log_shipping_secondary_databases tables, but updating these tables did not resolve my issue.

Where are the .tuf file path details in msdb?
Ans: The .tuf file path is none other than the backup_destination_directory column in log_shipping_secondary in the secondary server's msdb. This is updated automatically when you change the folder name on the LS setup page, but the .tuf file must still be available in the old directory when the next restore happens.

SELECT backup_destination_directory FROM dbo.log_shipping_secondary

If you change the path of this directory, what SQL Server does is this: when the next restore happens, it first tries to copy the .tuf file from the old directory to the new directory, and only then goes ahead with the restore operation. If SQL Server cannot find the .tuf file in the old directory, or the old directory itself is lost, there is no other way than reconfiguring your LS setup from scratch.

What is the undo file? Why is it required?

The undo file is needed in standby state because, while restoring a log backup, uncommitted transactions are recorded to the undo file and only committed transactions are written to disk, thereby allowing users to read the database. When you restore the next t-log backup, SQL Server fetches the uncommitted transactions from the undo file and checks against the new t-log backup whether each one has committed. If it has committed, the transaction is written to disk; otherwise it stays in the undo file until it is committed or rolled back.