Wednesday, March 7, 2012

Possible Solution to Keeping Log Size Under Control During INDEXDEFRAG

If I just backed up the log to a network drive every 15 minutes or so
during an INDEXDEFRAG wouldn't that keep the file size under control? I
have an extensive defrag that I've only been able to get partially
through because of space and I figure this will let me run it completely
in one shot.

Hi Brad,
Backing up the transaction log at regular intervals during an INDEXDEFRAG
will indeed prevent you from running out of transaction log space. DBCC
INDEXDEFRAG, in contrast to DBCC DBREINDEX, uses a large number of small
transactions to reorganise the indexes. So there are no long-running
transactions that will prevent the transaction log from being emptied when it
is backed up.
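For what it's worth, the interleaving could be sketched like this in T-SQL (SQL 2000 syntax, where DBCC INDEXDEFRAG takes a database, table, and index id; all names and paths here are placeholders):

```sql
-- Hypothetical names throughout; adjust for your environment.
-- Defragment one index, then back up the log so the many small
-- transactions INDEXDEFRAG generates can be truncated from the log.
DBCC INDEXDEFRAG (MyDatabase, 'dbo.BigTable', 1)   -- clustered index (indid 1)
BACKUP LOG MyDatabase
    TO DISK = '\\backupserver\logs\MyDatabase_01.trn' WITH INIT

DBCC INDEXDEFRAG (MyDatabase, 'dbo.BigTable', 2)   -- a nonclustered index
BACKUP LOG MyDatabase
    TO DISK = '\\backupserver\logs\MyDatabase_02.trn' WITH INIT
```

Each backup goes to its own file so the log chain stays intact for recovery.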
--
Jacco Schalkwijk
SQL Server MVP
Brad,
Since INDEXDEFRAG uses a lot of small transactions you can certainly back up
the log during it to keep its size reasonable. But how much gets logged, and
how fast, depends on the fragmentation level of the index and the hardware
you're running it on. As such it is hard to say whether 15 minutes is
appropriate or not; only you can determine that with testing. Most people run
log backups every 15 minutes or less anyway. Since you're apparently doing
log backups at fairly long intervals, I assume it is not that busy a system.
If that is the case, you might consider doing a DBREINDEX every few days or
once a week to keep the fragmentation down. You can do INDEXDEFRAGs in
between if necessary. You might want to check this out:
http://www.microsoft.com/technet/prodtechnol/sql/2000/maintain/ss2kidbp.mspx
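As a rough sketch of the trade-off above (table name and fillfactor are just examples), you can measure fragmentation first and then decide between the two:

```sql
-- Report logical scan fragmentation for the table's indexes (SQL 2000)
DBCC SHOWCONTIG ('dbo.BigTable') WITH FAST

-- Full rebuild: more thorough than INDEXDEFRAG, but it runs as one long
-- transaction and holds locks for the duration, so schedule it in a quiet
-- window. '' means all indexes on the table; 90 is an example fillfactor.
DBCC DBREINDEX ('dbo.BigTable', '', 90)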
Andrew J. Kelly SQL MVP
It actually is a very busy system, but we are in the process of changing
our backup strategy. Right now I am rolling my own log shipping (since
it's Standard Edition, not Enterprise), but I am still doing the backups
manually to oversee the process.
The main reason I'm doing the log shipping myself is that the standby
server only has web connectivity into the environment, so I have a
process that backs up the log on the production server, zips it up and
tells the standby server that the log is ready. The standby then pulls
down the zipped log with wget, unzips it and applies it to the local
database. It's working pretty well.

Trying to guess the appropriate log backup interval may require a lot of
trial and error.
There may be an easier way. You can set up an alert that fires whenever the
log file reaches a specified % (try maybe 60%). When it fires, the alert
can be configured to execute a job that backs up the log file.
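A sketch of such an alert (the database name and job name are placeholders, and the response job that backs up the log is assumed to already exist in SQL Server Agent):

```sql
-- Fire when MyDatabase's log passes 60% used, and respond by running an
-- existing SQL Server Agent job that backs up the log.
EXEC msdb.dbo.sp_add_alert
    @name = 'MyDatabase log 60 percent full',
    @performance_condition = 'SQLServer:Databases|Percent Log Used|MyDatabase|>|60',
    @job_name = 'Backup MyDatabase Log',
    @delay_between_responses = 300   -- seconds; keeps the alert from refiring constantly
```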
Mark
Oops, sorry about the "from" name. Just set up my newsgroups on a new
computer and did not configure that correctly.
Mark
