Hey all,
I've got a very long stored proc that runs intensive updates on a particular table. The locks always escalate from Intent eXclusive (IX) to eXclusive (X). After some reading online, I've decided to implement the approach in KB 323630 (http://support.microsoft.com/default.aspx?scid=kb;en-us;323630#kb2). The idea is to start a transaction under another spid and hold a lock on the table that is incompatible with a full table lock, so the stored procedure I'm running can't escalate its locks. The solution works, but it unfortunately means I have to hold an update lock on the table for the whole stored proc, which I would rather not do.
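For reference, here is roughly what I'm doing now, following the KB article; it runs on a second connection while the stored proc runs on the first, and the table name and delay are placeholders:

-- Session 2: hold a table lock that is incompatible with escalation to X.
BEGIN TRAN
SELECT * FROM dbo.MyBigTable WITH (UPDLOCK, HOLDLOCK) WHERE 1 = 0
-- Keep the transaction (and therefore the lock) open while the proc runs:
WAITFOR DELAY '01:00:00'
COMMIT TRAN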
Is it possible to spawn another stored proc/function/transaction under another spid from within my stored proc? I'm hoping the answer to my question isn't here (http://www.dbforums.com/t994076.html).
Is it maybe possible to open another connection from within my stored proc? On a similar note, would it be possible to communicate between connections somehow without using a table?
Thanks,
-Kilka
There are many ways to do this, but Transact-SQL is a bit limited in this area. It can be done, but it is brute force and ugly at best.
Have you investigated DTS? At least in my experience, it handles this kind of processing much better than Transact-SQL can.
-PatP
I've worked with DTS, and I know the only way it could potentially help me here is if I used scripting (ActiveX or similar) to achieve the same thing.
I'm trying to keep everything in a scheduled stored proc. If at all possible, I want to do everything in T-SQL for performance reasons. This is something that takes hours to run, so any little performance hit has a big impact.
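One thing I may look into (this assumes SQL Server Agent is available; the job name is made up): a proc can start an Agent job, and sp_start_job returns immediately while the job runs asynchronously under its own spid.

-- Job 'RunHelperProc' is created beforehand with one T-SQL step.
-- From inside the main stored proc:
EXEC msdb.dbo.sp_start_job @job_name = N'RunHelperProc';
-- Control returns at once; the job executes on a separate spid.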
Pre-defined lists of values as report parameter input
Has anyone come up with a solution for the following scenario yet?
We have many reports which have input parameters that are lists of values.
As an example, the user might want to report on 30 of 100 available product
codes, or 12 of 50,000 account numbers.
One idea would be to store the lists of values that users want to report
on in a general-purpose Lists table: one column would hold a list name and
the other a delimited list of values. This Lists table could then be used
as the input to a drop-down box for the relevant report parameter
(sketched below).
Has anybody tried something similar, or does anyone have any other
suggestions?
(We're aware that multi-select list boxes will be available in SSRS 2005,
but that would still be a bit tiresome for a user who needs to select a few
values from a long list.)
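Roughly the shape we have in mind; all object, column, and parameter names below are illustrative:

CREATE TABLE dbo.Lists (
    ListName   varchar(100)  NOT NULL PRIMARY KEY, -- e.g. 'MyProductCodes'
    ListValues varchar(8000) NOT NULL              -- comma-delimited values
)

INSERT INTO dbo.Lists (ListName, ListValues)
VALUES ('MyProductCodes', 'P001,P007,P012')

-- Dataset query behind the report parameter's drop-down:
SELECT ListName FROM dbo.Lists ORDER BY ListName

-- The report query then filters against the chosen list (@ListName):
SELECT p.*
FROM dbo.Products p
JOIN dbo.Lists l ON l.ListName = @ListName
WHERE CHARINDEX(',' + p.ProductCode + ',', ',' + l.ListValues + ',') > 0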
Regards,
John Marsh
Hi John,
I have done something similar.
I have a lookup parameter page: a web page with a list box whose values are
populated by calling a separate parameter report. The param report contains
two fields, forming a name-value pair. The report is called programmatically
and the result is returned in XML format, so it can be bound to a dataset and
from there to the list box or drop-down box.
This acts as the list of values for the report parameter input.
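For what it's worth, the dataset behind the param report can be as simple as a name-value query; the table and column names here are just examples:

SELECT ProductName AS Name, ProductCode AS Value
FROM dbo.Products
ORDER BY ProductName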
Regards
Bava
"John Marsh" wrote:
> Has anyone come up with a solution for the following scenario yet?
> We have many reports which have input parameters that are lists of values.
> As an example the user might want to report on 30 of 100 available product
> codes, or 12 of 50,000 account numbers.
> What might be an idea would be to store the lists of values that users want
> to report on, in a general purpose Lists table. One column would be a List
> Name and the other a delimited list of values. This List table could then be
> used as the input to a Drop Down box for the relevant report parameter.
> Has anybody tried something similar, or does anyone have any other
> suggestions?
> (We're aware that multi-select list boxes will be available in SSRS 2005,
> but that would still be a bit tiresome for a user that needs to select a few
> values from a long list).
> Regards,
> John Marsh
>
Possible Solution to Keeping Log Size Under Control During INDEXDEFRAG
If I just backed up the log to a network drive every 15 minutes or so
during an INDEXDEFRAG, wouldn't that keep the file size under control? I
have an extensive defrag that I've only been able to get partially
through because of space and I figure this will let me run it completely
in one shot.
Hi Brad,
Backing up the transaction log at regular intervals during an INDEXDEFRAG
will indeed prevent you from running out of transaction log space. DBCC
INDEXDEFRAG, in contrast to DBCC DBREINDEX, uses a large number of small
transactions to reorganise the indexes, so there are no long-running
transactions that will prevent the transaction log from being emptied when
it is backed up.
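For concreteness, a minimal sketch of such a periodic log backup; the database name and UNC path are placeholders:

-- Run every 15 minutes (e.g. from a scheduled job) while the defrag runs:
BACKUP LOG MyDatabase
TO DISK = '\\BackupServer\SQLBackups\MyDatabase_log.trn'
WITH NOINIT -- append; use unique file names if each backup must be kept separately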
Jacco Schalkwijk
SQL Server MVP
"Brad" <brad@.seesigifthere.com> wrote in message
news:MPG.1b76bd01ae8a965e989685@.news...
> If I just backed up the log to a network drive every 15 minutes or so
> during an INDEXDEFRAG wouldn't that keep the file size under control? I
> have an extensive defrag that I've only been able to get partially
> through because of space and I figure this will let me run it completely
> in one shot.
Brad,
Since IndexDefrag uses a lot of small transactions, you can certainly back up
the log during it to keep its size reasonable. But how much gets logged, and
how fast, depends on the fragmentation level of the index and the hardware
you're running it on. As such, it is hard to say whether 15 minutes is
appropriate or not; only you can determine that with testing. Most people
run log backups every 15 minutes or less anyway. Since you're apparently
doing log backups at fairly long intervals, I assume it is not that busy of
a system. If that is the case, you might consider doing a DBREINDEX every
few days or once a week to keep the fragmentation down. You can do index
defrags in between if necessary. You might want to check this out:
http://www.microsoft.com/technet/prodtechnol/sql/2000/maintain/ss2kidbp.mspx
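A minimal sketch of that rotation; the database, table, and index names are placeholders:

-- Every few days or weekly, in a quiet window: full rebuild of the table's indexes.
DBCC DBREINDEX ('dbo.MyBigTable')

-- In between, as needed: online defrag of a specific index.
DBCC INDEXDEFRAG (MyDatabase, 'dbo.MyBigTable', 'IX_MyBigTable')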
Andrew J. Kelly SQL MVP
"Brad" <brad@.seesigifthere.com> wrote in message
news:MPG.1b76bd01ae8a965e989685@.news...
> If I just backed up the log to a network drive every 15 minutes or so
> during an INDEXDEFRAG wouldn't that keep the file size under control? I
> have an extensive defrag that I've only been able to get partially
> through because of space and I figure this will let me run it completely
> in one shot.
In article <Oy7PTC9dEHA.3964@.TK2MSFTNGP10.phx.gbl>,
sqlmvpnooospam@.shadhawk.com says...
> "Brad" <brad@.seesigifthere.com> wrote in message
> news:MPG.1b76bd01ae8a965e989685@.news...
> Since IndexDefrag uses a lot of small transactions you can certainly backup
> the log during it to keep it's size reasonable. But how much gets logged
> and how fast depends on the fragmentation level of the Index and the
> hardware your running it on. As such it is hard to say whether 15 minutes
> is appropriate or not. Only you can determine that with testing. Most
> people run log backups every 15 minutes or less anyway. Since your
> apparently doing log backups at fairly long intervals I assume it is not
> that busy of a system. If that is the case you might consider doing a
> DBREINDEX every few days or once a week to keep the fragmentation down. You
> can do indexdefrags in between if necessary. You might want to check this
> out:
> http://www.microsoft.com/technet/pro.../ss2kidbp.mspx
It actually is a very busy system, but we are in the process of changing
our backup strategy. Right now I am rolling my own log shipping (since
it's Standard Edition and not Enterprise), but I am still doing the backups
manually to oversee the process.
The main reason I'm doing the log shipping myself is that the standby
server only has web connectivity into the environment, so I have a
process that backs up the log on the production server, zips it up and
tells the standby server that the log is ready. The standby then pulls
down the zipped log with wget, unzips it and applies it to the local
database. It's working pretty well.
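The apply step on the standby is essentially a restore that leaves the database able to accept further logs; roughly this, with placeholder names and paths:

RESTORE LOG MyDatabase
FROM DISK = 'D:\LogShip\MyDatabase_log.trn'
WITH STANDBY = 'D:\LogShip\MyDatabase_undo.dat'
-- STANDBY keeps the database readable between restores;
-- NORECOVERY also works if read access isn't needed.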
Trying to guess the appropriate log backup interval may require a lot of
trial and error.
There may be an easier way: you can set up an alert that fires whenever the
log file reaches a specified percentage (try maybe 60%). When it fires, the
alert can be configured to execute a job that backs up the log file.
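A sketch of wiring that up; the alert, job, and database names are placeholders, and the job itself is assumed to contain the BACKUP LOG step:

EXEC msdb.dbo.sp_add_alert
    @name = N'MyDatabase log 60 percent full',
    @performance_condition = N'SQLServer:Databases|Percent Log Used|MyDatabase|>|60',
    @job_name = N'Backup MyDatabase Log'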
Mark
"Brad" <brad@.seesigifthere.com> wrote in message
news:MPG.1b76bd01ae8a965e989685@.news...
> If I just backed up the log to a network drive every 15 minutes or so
> during an INDEXDEFRAG wouldn't that keep the file size under control? I
> have an extensive defrag that I've only been able to get partially
> through because of space and I figure this will let me run it completely
> in one shot.
Oops, sorry about the "from" name. Just set up my newsgroups on a new
computer and did not configure that correctly.
Mark
"msnews.microsoft.com" <nosapm@.nospam.com> wrote in message
news:ujNUzGDeEHA.1048@.tk2msftngp13.phx.gbl...
> Trying to guess the appropriate log backup interval may require a lot of
> trial and error.
> There may be an easier way: you can set up an alert that fires whenever the
> log file reaches a specified % (try maybe 60%). When it fires, the alert
> can be configured to execute a job that backs up the log file.
> Mark
> "Brad" <brad@.seesigifthere.com> wrote in message
> news:MPG.1b76bd01ae8a965e989685@.news...
> > If I just backed up the log to a network drive every 15 minutes or so
> > during an INDEXDEFRAG wouldn't that keep the file size under control? I
> > have an extensive defrag that I've only been able to get partially
> > through because of space and I figure this will let me run it completely
> > in one shot.