Showing posts with label log.

Friday, March 30, 2012

Having trouble installing SQL Server 2005 SP1

I'm having a problem installing SQL Server 2005 SP1 on Windows Server 2003 R2 SP2.

The log file talks about SQL Server Express, but I have SQL Server 2005.

03/19/2007 13:49:07.203 ================================================================================
03/19/2007 13:49:07.203 Hotfix package launched
03/19/2007 13:49:32.984 Attempting to install instance: SQL Server Native Client
03/19/2007 13:49:33.000 Attempting to install target: OMNIWEBPACS
03/19/2007 13:49:33.000 Attempting to install file: sqlncli.msi
03/19/2007 13:49:33.015 Attempting to install file: \\OMNIWEBPACS\d$\bc06cd32e1991973c97744b9c4\HotFixSqlncli\Files\sqlncli.msi
03/19/2007 13:49:33.015 Creating MSI install log file at: C:\WINDOWS\Hotfix\Redist9\Logs\Redist9_Hotfix_KB913090_sqlncli.msi.log
03/19/2007 13:49:33.015 Successfully opened registry key: Software\Policies\Microsoft\Windows\Installer
03/19/2007 13:49:33.015 Failed to read registry key: Debug
03/19/2007 13:49:34.062 MSP returned 0: The action completed successfully.
03/19/2007 13:49:34.062 Successfully opened registry key: Software\Policies\Microsoft\Windows\Installer
03/19/2007 13:49:34.062 Failed to read registry key: Debug
03/19/2007 13:49:34.062 Successfully installed file: \\OMNIWEBPACS\d$\bc06cd32e1991973c97744b9c4\HotFixSqlncli\Files\sqlncli.msi
03/19/2007 13:49:34.062 Successfully installed target: OMNIWEBPACS
03/19/2007 13:49:34.062 Successfully installed instance: SQL Server Native Client
03/19/2007 13:49:34.062
03/19/2007 13:49:34.062 Product Status Summary:
03/19/2007 13:49:34.062 Product: SQL Server Native Client
03/19/2007 13:49:34.062 SQL Server Native Client (RTM ) - Success
03/19/2007 13:49:34.062
03/19/2007 13:49:34.062 Product: Setup Support Files
03/19/2007 13:49:34.062 Setup Support Files (RTM ) - Not Applied
03/19/2007 13:49:34.062
03/19/2007 13:49:34.062 Product: Database Services
03/19/2007 13:49:34.062 Database Services (SP1 2047 ENU) - NA
03/19/2007 13:49:34.062 Details: Instances of SQL Server Express cannot be updated by using this Service Pack installer. To update instances of SQL Server Express, use the SQL Server Express Service Pack installer.
03/19/2007 13:49:34.062
03/19/2007 13:49:34.062 Product: Database Services
03/19/2007 13:49:34.062 Database Services (RTM 1399 ENU) - Not Applied
03/19/2007 13:49:34.062
03/19/2007 13:49:34.062 Product: Integration Services
03/19/2007 13:49:34.062 Integration Services (RTM 1399 ENU) - Not Applied
03/19/2007 13:49:34.062
03/19/2007 13:49:34.062 Product: Client Components
03/19/2007 13:49:34.062 Client Components (RTM 1399 ENU) - Not Applied
03/19/2007 13:49:34.062
03/19/2007 13:49:34.062 Product: MSXML 6.0 Parser
03/19/2007 13:49:34.062 MSXML 6.0 Parser (RTM ) - Not Applied
03/19/2007 13:49:34.062
03/19/2007 13:49:34.062 Product: SQLXML4
03/19/2007 13:49:34.062 SQLXML4 (RTM ) - Not Applied
03/19/2007 13:49:34.062
03/19/2007 13:49:34.062 Product: Backward Compatibility
03/19/2007 13:49:34.078 Backward Compatibility (RTM ) - Not Applied
03/19/2007 13:49:34.078
03/19/2007 13:49:34.078 Product: Microsoft SQL Server VSS Writer
03/19/2007 13:49:34.078 Microsoft SQL Server VSS Writer (RTM ) - Not Applied
03/19/2007 13:49:34.078

Any ideas?

It means that you are applying the wrong package. You will need one of the following ENU packages, depending on the processor architecture:

SQLServer2005SP1-KB913090-x86-ENU.exe

SQLServer2005SP1-KB913090-x64-ENU.exe

SQLServer2005SP1-KB913090-ia64-ENU.exe
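Before grabbing one of these, it can help to confirm which edition and build is actually installed. A quick check (SERVERPROPERTY is standard T-SQL; the column aliases are just for readability):

SELECT SERVERPROPERTY('Edition') AS Edition,               -- e.g. Standard Edition, Express Edition
       SERVERPROPERTY('ProductLevel') AS ProductLevel,     -- RTM, SP1, ...
       SERVERPROPERTY('ProductVersion') AS ProductVersion  -- 9.00.1399 = RTM, 9.00.2047 = SP1

If Edition comes back as Express Edition, that would also explain the "SQL Server Express cannot be updated" message in the log above.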

Wednesday, March 28, 2012

Having a problem using the timestamp of the last successful job run

Hi,
I am using a MS SQL DTS package to import some log data
from an OpenEnterprise server to the MS SQL Server.
I am trying to use the timestamp of the MS SQL job
which last ran successfully.
I was able to create a DTS package which imports the last
24 hours of log data from a table of a database engine
named "polyhedra", which is used in a SCADA system, to a MS
SQL Server table.
I made a schedule which imports the log data for the last
24 hours. My query statement looks like this:
Select ID, name, timestamp, realvalue, delete
from realsamples
where timestamp > now()-hours(24)
But instead of hours(24), I need to use the timestamp of when
my job last ran successfully, because in case any of
the servers fails to run for more than 24 hours, I would not
get the data older than the last 24 hours.
Please help me to solve the problem.
Thank you in advance.
You can get information about jobs (including the times when the job started
and whether it completed successfully) using the sp_help_jobhistory stored
procedure. See Books Online for the full syntax and details.
Jacco Schalkwijk
SQL Server MVP
"M Sikder" <anonymous@.discussions.microsoft.com> wrote in message
news:01a101c3bc3d$ed71f000$a301280a@.phx.gbl...
> Hi,
> I am using the MS sql DTS package to import some log data
> from openenterprise server to the MS Sql server.
> I am trying use the timestamps values of the MS SQL job
> which ran successfully last time.
> I was able to create an DTS pachage which imports the last
> 24 hours log data from a table of a database engine
> name "polyhedra" which is used in a SCADA system to a MS
> SQL sever table.
> I made a schedule which imports the log data for the last
> 24 hours. My query statement look like as follows:
> Select ID, name, timestamp, realvalue, delete
> from realsamples
> where timestamp > now()-hours(24)
> But instead of hours(24), I need to use the timestamp when
> my job last ran successfully. Because incase the any of
> the server fails to run for more than 24 hours I will not
> get the data older than last 24 hours.
> Please help me to solve the problem.
> Thank you in advance.
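As a rough sketch of the sp_help_jobhistory suggestion above (the job name is a placeholder; run_date and run_time come back as integers in yyyymmdd and hhmmss form):

EXEC msdb.dbo.sp_help_jobhistory
    @job_name = 'Import Polyhedra log data',  -- placeholder, use the real job name
    @run_status = 1,                          -- 1 = succeeded
    @mode = 'FULL'

The most recent row for the job gives the start time of the last successful run, which the DTS package can then use instead of the fixed 24-hour window.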

Monday, March 19, 2012

Hardware specs (No. of RAID Controllers)

We want to go with the SQL Server recommendation of separate drives for
backups, data, log, tempdb and OS.
So we are thinking of
2*72 RAID 1 for OS
2*72 RAID 1 for Tempdb
2*72 RAID 1 for Log
4*72 RAID 10 for Data
3*72 RAID 5 for backups,etc.
So the question is: should I use one RAID controller to tie all these drives
to? Just looking at it from a price perspective. If not, then 2, and if so,
how should I distribute the above layout across them?
Let me know if 1 RAID controller would work.
Thanks
I would combine the tempdb and log drive and treat tempdb files like the
rest of the databases. It is better to have logs on 0+1. If the controller
is decent, one will be able to handle that spindle count.
Jason Massie
www: http://statisticsio.com
rss: http://feeds.feedburner.com/statisticsio
"Hassan" <hassan@.test.com> wrote in message
news:OMbGoL8TIHA.4684@.TK2MSFTNGP06.phx.gbl...
> We want to go with SQL recommendations of seperate drives for
> backups,data,log,tempdb and OS.
> So we are thinking of
> 2*72 RAID 1 for OS
> 2*72 RAID 1 for Tempdb
> 2*72 RAID 1 for Log
> 4*72 RAID 10 for Data
> 3*72 RAID 5 for backups,etc.
> So the question is should I use one RAID controller to tie all these
> drives too ? Just looking at it from a price perspective. If not, then 2
> and if so, what areas should I distribute the above layout to ?
> Let me know if 1 RAID controller would work.
> Thanks
|||Are you saying something like this?
2*72 RAID 1 for OS
4*72 RAID 10 for Tempdb(data and log) and Log Files
4*72 RAID 10 for Data Files
3*72 RAID 5 for backups,etc.
"jason" <jason-r3move@.statisticsio.com> wrote in message
news:185DB471-3652-44A0-AAD0-33E056397B92@.microsoft.com...
>I would combine the tempdb and log drive and treat tempdb files like the
>rest of the databases. It is better to have logs on 0+1. If the controller
>is decent, one will be able to handle that spindle count.
> --
> Jason Massie
> www: http://statisticsio.com
> rss: http://feeds.feedburner.com/statisticsio
> "Hassan" <hassan@.test.com> wrote in message
> news:OMbGoL8TIHA.4684@.TK2MSFTNGP06.phx.gbl...
>
|||>I would combine the tempdb and log drive and treat tempdb files like the
>rest of the databases. It is better to have logs on 0+1. If the controller
>is decent, one will be able to handle that spindle count.
Logs shouldn't be shared with other files. Log writes during COMMIT are
synchronous and so response time will be negatively affected due to
competing with other file I/O.
Hope this helps.
Dan Guzman
SQL Server MVP
"jason" <jason-r3move@.statisticsio.com> wrote in message
news:185DB471-3652-44A0-AAD0-33E056397B92@.microsoft.com...
>I would combine the tempdb and log drive and treat tempdb files like the
>rest of the databases. It is better to have logs on 0+1. If the controller
>is decent, one will be able to handle that spindle count.
> --
> Jason Massie
> www: http://statisticsio.com
> rss: http://feeds.feedburner.com/statisticsio
> "Hassan" <hassan@.test.com> wrote in message
> news:OMbGoL8TIHA.4684@.TK2MSFTNGP06.phx.gbl...
>
|||is it best to put both the log files for the data and tempdb together ? Or
have tempdb on its own disks that include both its data files and log files
Thanks
"Dan Guzman" <guzmanda@.nospam-online.sbcglobal.net> wrote in message
news:7C67FDC2-4E72-4DEE-80F7-F5EEDCC44DD2@.microsoft.com...
> Logs shouldn't be shared with other files. Log writes during COMMIT are
> synchronous and so response time will be negatively affected due to
> competing with other file I/O.
> --
> Hope this helps.
> Dan Guzman
> SQL Server MVP
> "jason" <jason-r3move@.statisticsio.com> wrote in message
> news:185DB471-3652-44A0-AAD0-33E056397B92@.microsoft.com...
>
|||> is it best to put both the log files for the data and tempdb together ? Or
> have tempdb on its own disks that include both its data files and log
> files
I would place the tempdb log file to the same drive/array as the other log
files and on a controller with a lot of write cache. Tempdb data files may
be heavily used and separating data files from log files will help optimize
the sequential writes to the log.
Hope this helps.
Dan Guzman
SQL Server MVP
"Hassan" <hassan@.test.com> wrote in message
news:OqcmMl8TIHA.280@.TK2MSFTNGP03.phx.gbl...
> is it best to put both the log files for the data and tempdb together ? Or
> have tempdb on its own disks that include both its data files and log
> files
> Thanks
> "Dan Guzman" <guzmanda@.nospam-online.sbcglobal.net> wrote in message
> news:7C67FDC2-4E72-4DEE-80F7-F5EEDCC44DD2@.microsoft.com...
>
|||> 2*72 RAID 1 for OS
> 4*72 RAID 10 for Log Files
> 4*72 RAID 10 for Data Files
> 3*72 RAID 5 for backups,etc.
And treat tempdb as any other database with this spindle count. Data files
on the data drive. Log files on the log drive. When you don't have many
spindles, it is best to keep it simple.
There is even a case that could be made for this config:
> 2*72 RAID 1 for OS
> 8*72 RAID 10 for Data Files and Log Files
> 3*72 RAID 5 for backups,etc.
"Hassan" <hassan@.test.com> wrote in message
news:umf6Sb8TIHA.5288@.TK2MSFTNGP04.phx.gbl...
> Are you saying as such ?
> 2*72 RAID 1 for OS
> 4*72 RAID 10 for Tempdb(data and log) and Log Files
> 4*72 RAID 10 for Data Files
> 3*72 RAID 5 for backups,etc.
> "jason" <jason-r3move@.statisticsio.com> wrote in message
> news:185DB471-3652-44A0-AAD0-33E056397B92@.microsoft.com...
>
|||Would you also do it with one controller ?
"Dan Guzman" <guzmanda@.nospam-online.sbcglobal.net> wrote in message
news:467C193D-BFB7-45C1-BC99-DF7084FF714C@.microsoft.com...
> I would place the tempdb log file to the same drive/array as the other log
> files and on a controller with a lot of write cache. Tempdb data files
> may be heavily used and separating data files from log files will help
> optimize the sequential writes to the log.
> --
> Hope this helps.
> Dan Guzman
> SQL Server MVP
> "Hassan" <hassan@.test.com> wrote in message
> news:OqcmMl8TIHA.280@.TK2MSFTNGP03.phx.gbl...
>
|||> Would you also do it with one controller ?
It depends on budget and controller capabilities. A single controller is
probably adequate as long as it's a decent one with features like lots of
cache, configurable cache read/write ratio and battery backup.
Hope this helps.
Dan Guzman
SQL Server MVP
"Hassan" <hassan@.test.com> wrote in message
news:usb%23Wc9TIHA.1208@.TK2MSFTNGP03.phx.gbl...
> Would you also do it with one controller ?
> "Dan Guzman" <guzmanda@.nospam-online.sbcglobal.net> wrote in message
> news:467C193D-BFB7-45C1-BC99-DF7084FF714C@.microsoft.com...
>


Monday, March 12, 2012

Hardware planning for growth

We are configuring a new database server at work. I have decided to do a RAID
1 array for the transaction log and a RAID 5 array for the data.
The fastest growing part of our database is a table that stores photos. It
is currently about 12Gb in size. We expect it to grow over the next few years by
17Gb, 26Gb, 33Gb, then plateau at 37Gb of growth a year. At the end of year 5
that puts us at about 165Gb, and at the end of year 10 at about 500Gb.
Although these are rough projections, I still need to be able to begin
planning accordingly.
What is the best hardware configuration to allow this kind of growth? I
would like something that is expandable, so that I can buy according to our
current needs, and add to it as needed.
Any advice is greatly appreciated.
-Dan
Hi
In effect, you can only plan for the next 3-4 years. By then, the current
disk subsystems will need to be retired.
In effect, you need about 150Gb for Data and probably 400Gb for backups
during this period. A few 147Gb drives will do the job nicely.
For best performance, have a separate SCSI controller for the logs, backup
and data drives. You may want to split the data up into 2 drive sets to get
better throughput.
With the size and price of drives, do you really need RAID-5? What about
RAID-10? The write performance is just so much better.
Do you need high availability? If so, you need to look at Windows Clusters.
This will guide you in the direction of the drive enclosure as it will need
to be a SAN. EMC has a few nice babies.
Regards
Mike
"Dan" wrote:

> We are configuring a new database server at work. I have decided to do a RAID
> 1 Array for the transaction log and a RAID 5 array for the data.
> The fastest growing part of our database is a table that stores photos. It
> is currently about 12Gb in size. We expect it to grow the next few years by
> 17Gb, 26Gb, 33Gb, then platueau at 37Gb growth a year. At the end of year 5
> that puts us at about 165Gb, and at the end of year 10 at about 500Gb.
> Although these are rough projections, I still need to be able to begin
> planning accordingly.
> What is the best hardware configuration to allow this kind of growth? I
> would like something that is expandable, so that I can buy according to our
> current needs, and add to it as needed.
> Any advice is greatly appreciated.
> -Dan
|||Hi Dan,
Statistically speaking the company you are working for will not be in
business in 10 years! Anyhow. Let me tell you what I did for one of our
businesses that had a quad-processor (700 MHZ) server running SQL Server:
1-got a dual processor (3 GHZ) server.
2-Got 15000 RPM drives (a 50% disk speed increase right off the bat!)
3-Created 3 RAID 1 arrays: one for the OS, one for the DB and one for the transaction
log. Make sure that your controller supports split seeks, which divides the
read I/Os among the mirrored pair, allowing two disks to seek simultaneously.
Ideally you can use RAID 10.
4-Be careful with the edition of SQL Server. The server edition will limit
SQL to use a max. of 2 GB. Also, you will not be able to create indexes on
views.
Microsoft SQL Administrator's companion book has a section on Capacity
Planning (for memory, processor, disk-subsystem) that you can have a look.
The problem we had with our quad-processor server was that we had to pay
close to 10,000$ to get 2 more CPUs since they were older CPUs. Note also
that storage often becomes a problem. With 200 servers to maintain by our
team, we often need to plan some upgrades for servers that are running low in
HD space (after a few years), so make sure you have enough space on your
database server. Last thing: it is very hard to load balance a SQL Server
(manual process as far as I know) because chances are that new servers will
get data from your database server in the future and/or the load will
increase.
"Dan" wrote:

> We are configuring a new database server at work. I have decided to do a RAID
> 1 Array for the transaction log and a RAID 5 array for the data.
> The fastest growing part of our database is a table that stores photos. It
> is currently about 12Gb in size. We expect it to grow the next few years by
> 17Gb, 26Gb, 33Gb, then platueau at 37Gb growth a year. At the end of year 5
> that puts us at about 165Gb, and at the end of year 10 at about 500Gb.
> Although these are rough projections, I still need to be able to begin
> planning accordingly.
> What is the best hardware configuration to allow this kind of growth? I
> would like something that is expandable, so that I can buy according to our
> current needs, and add to it as needed.
> Any advice is greatly appreciated.
> -Dan


Wednesday, March 7, 2012

Hard time mastering log files

Hi,
I have a small DB, about 200MB, but my transaction log file regularly grows to
over 900MB.
The only way I have been able to manage the log file is to detach the
database, delete the log file and reattach the DB.
Is there any better way to manage growing log files?
Thanks in advance
Christian
Hi,
Check the recovery model for your database using the command below:
SELECT DATABASEPROPERTYEX('<dbname>', 'Recovery')
If the recovery model is Simple, the LDF should only grow big when you have some
batch operation. In that case, once the transaction is over you can
shrink the LDF file using the DBCC SHRINKFILE command.
If the recovery model is either FULL or BULK_LOGGED, then you need to take
transaction log backups using the BACKUP LOG command. This
will keep the LDF file from growing.
Thanks
Hari
SQL Server MVP
"Christian Perthen" <abracadabara@.dontreplytothidress.com> wrote in
message news:e2YgKk8QFHA.3928@.TK2MSFTNGP09.phx.gbl...
> Hi,
> I have a small DB, about 200MB, but my transaction log files grows
> regulary
> over 900MB.
> The only way I have been able to manage the log file is to detach the
> database, delete the log file and reattach the DB.
> Is there any better way to manage growing log files?
> Thanks in advance
> Christian
>
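A minimal sketch of the two options Hari describes, assuming a database named MyDb whose logical log file name is MyDb_log (both names and the backup path are placeholders):

-- Full or bulk-logged recovery: back up the log regularly so the space can be reused
BACKUP LOG MyDb TO DISK = 'D:\Backups\MyDb_log.trn'

-- Simple recovery: after a large batch operation, shrink the log file back down
DBCC SHRINKFILE (MyDb_log, 100)  -- logical log file name, target size in MB

Regular log backups (or simple recovery plus an occasional shrink after batch loads) avoid having to detach the database and delete the log file.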

Monday, February 27, 2012

Handling large amounts of data for reporting?

Hi all,
I have an application which logs a considerable amount of data. Each day,
we log about 50,000 to 100,000 rows of data.
We like to report on this data... currently I'm using a stored procedure to
calculate the statistics, however since this is an ad hoc, reports take a
while to generate.
So how do you guys handle large amounts of data? Is there a good way to
precalculate a a set of statistics to handle ad hoc queries (i.e. By Hour,
By Day, By W, By Month). Our application also provides near realtime
statistics... so precalculation has to be done on a continual basis. Does
.NET have any statistics classes that might help out with this sort of
thing? I don't think .NET performance counters will work since they don't
log persistent data.
Any ideas?
Thanks!
Lucas Tam (REMOVEnntp@.rogers.com)
Please delete "REMOVE" from the e-mail address when replying.
[url]http://members.ebay.com/aboutme/spot18/[/url]
> So how do you guys handle large amounts of data?
Usually, you can run queries in the background that will aggregate to the
various dimensions you want to measure (e.g. ).
In one current system, we aggregate data by hour, then when someone wants to
do a "this w" or "this month" type of query, we simply union the
aggregated data (super fast) from before the last hour where stats were
rolled up, against the raw data (which is also relatively fast, using the
same kind of query that would "roll it up") and then apply group by to the
derived table. This way the majority of the crunching is already done, and
you are applying operations on raw data to only the last hour or less.
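A rough T-SQL sketch of that pattern (the table and column names are invented for illustration; EventStatsHourly is the hourly roll-up maintained by a background job, EventLog is the raw data):

DECLARE @LastRollupHour datetime
SET @LastRollupHour = '20050828 13:00'  -- placeholder: start of the last hour already rolled up

SELECT EventHour, SUM(EventCount) AS EventCount
FROM (
    -- pre-aggregated history
    SELECT EventHour, EventCount
    FROM dbo.EventStatsHourly
    WHERE EventHour < @LastRollupHour
    UNION ALL
    -- raw rows since the last roll-up, aggregated on the fly
    SELECT DATEADD(hour, DATEDIFF(hour, 0, EventTime), 0) AS EventHour,
           COUNT(*) AS EventCount
    FROM dbo.EventLog
    WHERE EventTime >= @LastRollupHour
    GROUP BY DATEADD(hour, DATEDIFF(hour, 0, EventTime), 0)
) AS combined
GROUP BY EventHour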
You might also look at Analysis Services or other BI products, but I think
the lag of rolling data into cubes/dimensions will fail your realtime
requirement.|||Do you need to report against a big table that grows 100K rows daily or
do you need to report against a table that has 100K rows?
100,000 rows of data is usually NOT a large amount on decent hardware.|||"AK" <AK_TIREDOFSPAM@.hotmail.COM> wrote in news:1125092267.248510.101290
@.g49g2000cwa.googlegroups.com:

> Do you need to report against a big table that grows 100K rows daily or
> do you need to report against a table that has 100K rows?
> 100,000 rows of data is usually NOT a large amount on decent hardware.
I need to report against a table that grows by ~100K row daily.
Lucas Tam (REMOVEnntp@.rogers.com)
Please delete "REMOVE" from the e-mail address when replying.
[url]http://members.ebay.com/aboutme/spot18/[/url]|||You may also want to explore partitioned views depending on the types
of queries you're looking at; we partition our data daily (about 800k
rows a day), and the optimizer knows where to look for the data we want
to retrieve.
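As a rough sketch of that approach using a SQL Server 2000-style local partitioned view (all names and date ranges here are invented):

-- one table per day, each constrained to its own date range
CREATE TABLE dbo.EventLog_20050827 (
    EventTime datetime NOT NULL CHECK (EventTime >= '20050827' AND EventTime < '20050828'),
    EventData varchar(100) NOT NULL
)
CREATE TABLE dbo.EventLog_20050828 (
    EventTime datetime NOT NULL CHECK (EventTime >= '20050828' AND EventTime < '20050829'),
    EventData varchar(100) NOT NULL
)
GO
-- the view unions the daily tables; the CHECK constraints let the optimizer
-- skip tables that cannot contain the requested date range
CREATE VIEW dbo.EventLogAll AS
    SELECT EventTime, EventData FROM dbo.EventLog_20050827
    UNION ALL
    SELECT EventTime, EventData FROM dbo.EventLog_20050828
GO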
Stu|||Summarized datasets, federated views, and OLAP cubes are some of the basic
components, but the general solution is called data warehousing. Basically,
the concept is to export a pre-calculated and pre-summarized subset of data
from the OLTP system to another database or server for the purpose of reporting
or analysis. The goal is to design a system that is optimized to suit your
own specific analytic needs, so the appropriate implementation details can
vary from one warehouse to the next.
This document provides a good overview:
http://userfs.cec.wustl.edu/~cse530...ng-Combined.ppt
"Lucas Tam" <REMOVEnntp@.rogers.com> wrote in message
news:Xns96BEAE3D2946Bnntprogerscom@.127.0.0.1...
> Hi all,
>
> I have an application which logs a considerable amount of data. Each day,
> we log about 50,000 to 100,000 rows of data.
> We like to report on this data... currently I'm using a stored procedure
> to
> calculate the statistics, however since this is an ad hoc, reports take a
> while to generate.
> So how do you guys handle large amounts of data? Is there a good way to
> precalculate a a set of statistics to handle ad hoc queries (i.e. By Hour,
> By Day, By W, By Month). Our application also provides near realtime
> statistics... so precalculation has to be done on a continual basis. Does
> .NET have any statistics classes that might help out with this sort of
> thing? I don't think .NET performance counters will work since they don't
> log persistent data.
> Any ideas?
> Thanks!
> --
> Lucas Tam (REMOVEnntp@.rogers.com)
> Please delete "REMOVE" from the e-mail address when replying.
> [url]http://members.ebay.com/aboutme/spot18/[/url]|||"JT" <someone@.microsoft.com> wrote in
news:Oq9egaKrFHA.3452@.TK2MSFTNGP14.phx.gbl:

> The goal is to design a system that is optimized to suit your
> own specific analytic needs, so the appropriate implementation details
> can very from one warehouse to the next.
Hmmm... that's the problem.
A lot of the data our application generates is being used by external
clients, and they all have differing reporting requirements.
Not to mention, management seems to change the way queries are calculated
every so often.
Any ideas?
Would the Star/Snowflake structures be my best solution?
Thanks!
Lucas Tam (REMOVEnntp@.rogers.com)
Please delete "REMOVE" from the e-mail address when replying.
Newmarket Volvo Sucks! http://newmarketvolvo.tripod.com

Sunday, February 19, 2012

H

Hi
How do I change the log shipping monitor to another server
without setting up log shipping again?
Thanks
H
These steps are probably worth a shot. I'd however carefully test it in a
test environment first:
The information about the monitoring of the log shipping is stored to the
following locations:
- On the Primary server, in the msdb.dbo.log_shipping_monitor table
- On the Standby server, in the msdb.dbo.log_shipping_monitor table
- On the log shipping Monitor, in the msdb.dbo.log_shipping_primaries and
the msdb.dbo.log_shipping_secondaries tables.
1. Each time a log shipping related job is executed, its completion status
is updated into the tables of the Monitor server. Obviously, if the log
shipping monitor server needs to be changed, the information about it must
be updated in all of the locations. Beware that in log shipping, a different
monitor server should not be specified if the secondary already has a monitor
defined.
2. Script the "Log Shipping Alert - Backup" and "Log Shipping Alert -
Restore" jobs from the current monitor server To do this in the Enterprise
manager go to SQL Server Agent -->Jobs --> "Log shipping Alert-Backup"
-->Right click-->Generate SQL Script.Run these scripts on the New Monitor
Server to create the Alerts there.
3. Run the following query on the current monitor server and save the output:
select maintenance_plan_id from msdb.dbo.log_shipping_primaries
select secondary_plan_id from msdb.dbo.log_shipping_secondaries
4. On the new log shipping Monitor, there are a couple of alternatives:
- Use Query Analyzer or other similar tool to execute
sp_add_log_shipping_primary and sp_add_log_shipping_secondary stored
procedures on the new Monitor server to define the monitored servers. This
alternative is suitable if the Monitor role is moved to another server.
- Restore the whole MSDB database to the new Monitor server. This
alternative is suitable if the server needs to be replaced or if MSDB
contains other crucial information that needs to be moved, too (DTS
packages, jobs, operators, alerts, etc.) Be aware about some complications
that may turn up in case the machine name is different than the original
one. For more information see KB article 281642.KB.EN-US: BUG: Error 14274
When Updating a SQL Agent Job After Renaming Windows Server
A sample execution of the sp_add_log_shipping_primary and
sp_add_log_shipping_secondary stored procedures may look like:
DECLARE @.myprimary_id int
SELECT @.myprimary_id = 1
EXEC msdb.dbo.sp_add_log_shipping_primary
@.primary_server_name = 'primary_server_name', -- the primary server name
@.primary_database_name = 'primary_database_name', -- the primary db name
@.maintenance_plan_id = 'C354D59F-FACF-43FA-8D52-CEE8DC62FCC2', -- ID of the
maintenance plan that backs up the transaction log
@.backup_threshold = 45, -- the length of time, in minutes, after the last
backup before a threshold_alert error is raised
@.threshold_alert = 14420, -- the error raised when the backup threshold is
exceeded
@.threshold_alert_enabled = 1, -- whether an alert will be raised when
backup_threshold is exceeded
@.planned_outage_start_time = 0, -- the time, in HHMMSS format, a planned
outage starts.
-- During a planned outage, alerts will not be raised if the backup
threshold is exceeded
@.planned_outage_end_time = 0, -- the time, in HHMMSS format, a planned
outage ends
@.planned_outage_weekday_mask = 0, -- the day of the week that a planned
outage occurs
@.primary_id = @.myprimary_id -- OUTPUT parameter, unique ID for the new
primary server and database pair
--Execute the following select statement on the New monitor Server to get
--the Primary_ID that needs to be used in the sp_add_log_shipping secondary
statement:
select primary_id from msdb.dbo.log_shipping_primaries where
primary_server_name = '<primary server name>' and primary_database_name ='<primary database name>'
EXEC msdb.dbo.sp_add_log_shipping_secondary
@.primary_id = @.myprimary_id, -- the ID of the primary server
@.secondary_server_name = 'secondary_server_name', -- the name of the
secondary server
@.secondary_database_name = 'seconday_database_name', -- the secondary
db name
@.secondary_plan_id = '616679D6-CA8A-4098-A694-9502C390D2CA', -- the
ID of the log shipping plan on the secondary server found in
msdb..log_shipping_plans table as obtained in Step 2.
@.copy_enabled = 1, -- whether the copy for the database is enabled on the
secondary server
@.load_enabled = 1, -- whether the load for the database is enabled on the
secondary server
@.out_of_sync_threshold = 45, -- the length of time, in minutes, after the
last load before an error is raised
@.threshold_alert = '14421', -- whether an alert will be raised when an
out-of-sync threshold is exceeded
@.threshold_alert_enabled = 1, -- whether an alert will be raised when
backup_threshold is exceeded
@.planned_outage_start_time = 0, -- the time, in HHMMSS format, a planned
outage starts.
-- During a planned outage, alerts will not be raised if the backup
threshold is exceeded
@.planned_outage_end_time = 0, -- the time, in HHMMSS format, a
planned outage ends
@.planned_outage_weekday_mask = 0 -- the day of the week that a
planned outage occurs
GO
The values for the parameters can be completed from the values saved in
step 3 above.
5. Execute the following statement on the Primary and secondary servers to
get the current monitor login information. Note down the Logon Type
information.
select * from msdb.dbo.log_shipping_monitor
go
6. Execute the following command on both Primary and all secondary servers
to change the monitor information. Use the logon type that was found in the
step above
sp_define_log_shipping_monitor @.monitor_name ='<New monitor server name>',
@.logon_type =logon_type, -- this was written down in the previous
step
@.password = NULL, -- if the logon type is 1 ( NT authentication) then
the password is ignored
@.delete_existing=1
go
7. On the old log shipping Monitor, delete the log shipping information
from the msdb.dbo.log_shipping_primaries and the
msdb.dbo.log_shipping_secondaries tables. Executing only the
sp_delete_log_shipping_primary stored procedure is sufficient to delete the
related entries from the msdb.dbo.log_shipping_secondaries table if the
@.delete_secondaries parameter is set. A sample execution of the procedure
may look like:
USE master
GO
EXEC sp_delete_log_shipping_primary
@.primary_server_name = 'primary_server_name', -- the primary server
name
@.primary_database_name = 'primary_database_name', -- the primary db name
@.delete_secondaries = 1 -- the delete action is also applied to
log_shipping_secondaries table. Bear in mind that the entries in the
Monitor's system tables are per server and also per database.
It is not recommended to do direct inserts into the
msdb.dbo.log_shipping_primaries and msdb.dbo.log_shipping_secondaries
tables. Direct updates of system tables is never recommended - use the
designated stored procedures instead. Note that the procedure is just the
same even if either the Primary or any of the Standby servers has been
assigned the Monitor role. Just treat it as a separate server and follow
the steps above.
Vikram Jayaram
Microsoft, SQL Server
This posting is provided "AS IS" with no warranties, and confers no rights.
Subscribe to MSDN & use http://msdn.microsoft.com/newsgroups.