Monday, March 26, 2012

I concur

For your consideration and comments:
I want to consider changing the way our transaction logs grow and what the initial growth defaults are to see if I can improve processing performance for one particular process (and perhaps others as well).
I want to get away from allocating new space as a percentage of current space, because as the log gets significantly larger, that 20% comes to represent a lot of megabytes, resulting in some very large growth increments after a run of this update-intensive process.
I don't know whether a fixed growth amount is better than a percentage, but my guess is that as the growth increment gets progressively larger, the time it takes to acquire that larger space becomes a significant drag compared with always acquiring a smaller, fixed amount, even though we may need to grow more often. Eventually, at the larger sizes, we probably allocate much more space than we can use, so the time spent acquiring it is wasted. I have seen one particular process use over 3 GB of transaction log space. (FYI - this is the most complex and calculation-intensive process I've run across in about 30 years of development experience.) That is over 2.7 GB more than any of our other processes requires.

Example: If I set the transaction log's growth default to 20% and the log is currently a huge 3,209 MB, the next growth tries to acquire 642 MB. That can take a significant amount of time, and I may only need a small fraction of it.
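As a starting point (a sketch, assuming SQL Server 2005 or later), you can check what each file's current size and growth setting actually are from the database itself; is_percent_growth tells you whether you're on the 20%-style setting:

  -- Run in the database in question: lists each file's current size and
  -- growth setting. growth is in 8 KB pages for fixed growth, or a
  -- percentage when is_percent_growth = 1.
  SELECT name,
         type_desc,
         size * 8 / 1024 AS current_size_mb,
         CASE WHEN is_percent_growth = 1
              THEN CAST(growth AS varchar(10)) + ' percent'
              ELSE CAST(growth * 8 / 1024 AS varchar(10)) + ' MB'
         END AS growth_setting
  FROM sys.database_files;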

Naturally, getting space in smaller chunks goes much faster than acquiring large chunks, and my experience is that the time to acquire many small chunks is not proportional to the time to acquire fewer, much larger chunks. I'm not sure why, but I'd guess that because each small allocation completes and returns to the process quickly, I am not suffering all the waiting behind higher-priority tasks that would occur during the much longer time it takes to acquire the large chunks.
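One way to put numbers on that hunch (a sketch, assuming SQL Server 2005 or later, where the default trace is enabled out of the box): the default trace records every autogrow event along with how long it took, so you can compare growth duration against chunk size directly.

  -- Pull autogrow events from the default trace to see how long each
  -- growth actually took and how much space it added.
  DECLARE @tracepath NVARCHAR(260);
  SELECT @tracepath = path FROM sys.traces WHERE is_default = 1;

  SELECT te.name                  AS event_name,   -- Data/Log File Auto Grow
         t.DatabaseName,
         t.FileName,
         t.StartTime,
         t.Duration / 1000        AS duration_ms,  -- Duration is in microseconds
         t.IntegerData * 8 / 1024 AS growth_mb     -- IntegerData = 8 KB pages grown
  FROM fn_trace_gettable(@tracepath, DEFAULT) AS t
  JOIN sys.trace_events AS te ON te.trace_event_id = t.EventClass
  WHERE te.name IN (N'Data File Auto Grow', N'Log File Auto Grow')
  ORDER BY t.StartTime DESC;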

Does anyone out there have experience to share about this? Are there some blogs or message threads I should read?


When SQL Server allocates additional space to a database file, it takes time (as you've seen). If you grow by a percentage, then as the file gets bigger, the size of each new chunk gets bigger too. If it takes too long for the transaction log to grow to fit your transaction, the transaction can time out and roll back, returning an out-of-space error to the client.

I always set up all database files to grow by a set amount, based on the load on the system and its growth requirements. I try to set the files to grow between 100 and 500 MB at a time. That way the system grows by a decent amount without having to wait very long for the files to grow, and it also helps keep disk fragmentation down.
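For example (a sketch; the database and logical file names here are placeholders you would replace with your own), switching a log file from percentage growth to a fixed 256 MB increment looks like this:

  -- Grow the log in fixed 256 MB chunks instead of a percentage.
  ALTER DATABASE [YourDatabase]
  MODIFY FILE (NAME = N'YourDatabase_log', FILEGROWTH = 256MB);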

|||

IMHO, AUTOGROW with a fixed amount of MB is better than leaving it at a percentage. It is also better to size the database up front, by estimating its growth over the next year, to avoid these intermittent growth events degrading performance while an intensive query is being processed.
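Something like this does the pre-sizing (a sketch; the names are placeholders, and the 4 GB target is just an assumption based on the roughly 3.2 GB log mentioned above plus some headroom). Growing the log once, up front, means autogrow rarely has to fire mid-process:

  -- Pre-size the log once so autogrow rarely fires during heavy processing.
  ALTER DATABASE [YourDatabase]
  MODIFY FILE (NAME = N'YourDatabase_log', SIZE = 4096MB);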

See http://www.windowsitpro.com/Article/ArticleID/46220/46220.html for similar information and the gotchas.

|||

I agree 100%.

If your log or DB grows, it's best that it grow in fixed increments.

Ideally, you'll adjust the size ahead of time so that autogrow does not have to happen at all.
