SQL .LDF log file multiple GB after backup
MOSS, SQL Server 2005, stand-alone box.

I noticed that disk space was running lower than I would expect. I went through our Site Collections and confirmed that we have less than 1GB of total storage used across them. Searching for disk usage on the file system, I found two LDF files taking up multiple GB of disk: the LDF for our config DB was ~10GB and the LDF for one of our content DBs was ~6GB. I understood that running a SQL backup or an stsadm catastrophic backup should take care of this file and commit all the transactions, thus reducing the size of the file.

I performed SQL backups successfully to a different drive (since this drive was nearly full), but the LDF files only decreased by a few MB. I asked a SQL DBA about this and was shown the initial size value. (From SQL Server Management Studio, expand Databases, right-click the DB, select Properties, then select "Files" in the top left pane; the MDF and LDF files are listed with an Initial Size column.) Sure enough, the initial size for the config DB was 10482MB and the initial size for the content DB in question was 5511MB. I tried to manually adjust this value but it did not take effect.

I then attempted to shrink the transaction log file, since the backup was not reducing it. (Right-click the DB, select Tasks > Shrink > Files, select the log file from the first drop-down; under shrink actions, shrinking to 0 or even 1GB would have been nice at this point.) This didn't help.

My questions:
1. Why is the LDF receiving such a high initial value? SharePoint created these databases programmatically. Other content DBs in this farm have an initial value of < 100MB.
2. Is there a safe way to decrease the footprint of these files?
3. Are there steps I can take to keep SharePoint from setting such a high initial value in the future?
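For anyone following along, the same size information can be pulled from a query window instead of the Properties dialog. A minimal sketch (nothing here is specific to this farm; it lists every log file on the instance):

    -- Report size and percentage used for every transaction log on the instance
    DBCC SQLPERF(LOGSPACE);

    -- List physical log files and their current sizes (size is in 8KB pages)
    SELECT  DB_NAME(database_id) AS database_name,
            name                 AS logical_name,
            physical_name,
            size * 8 / 1024      AS size_mb
    FROM    sys.master_files
    WHERE   type_desc = 'LOG';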
March 11th, 2008 12:22am
I finally solved this with the following steps:

- Set the recovery model to Simple for the databases in question. I waited a weekend and ran more DB backups, but the log file didn't reduce its footprint and the initial value stayed put.
- Shrink the log file. This finally did it, but it required the recovery model change to make any difference. I left this as my last resort, as I have heard that shrinking the transaction log file is a bad thing.

Can anyone confirm that shrinking the log file is a bad thing (with an explanation)?

Granted, changing the DB to the Simple recovery model will keep me from getting point-in-time backups. What if you need point-in-time backups? Do you then need several times your content DB size in disk space for your transaction log files? Is there a way to get point-in-time recovery without having to go back to your content DB's birth? Maybe you want point-in-time logs kept for a week.
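In case it helps someone else, the two steps above can also be done in T-SQL instead of Management Studio. A minimal sketch, assuming a database named WSS_Content with a log file whose logical name is WSS_Content_log (both names are placeholders; check sys.database_files for the real logical name):

    USE [master];
    GO
    -- Step 1: switch to the Simple recovery model so the inactive log can be reused
    ALTER DATABASE [WSS_Content] SET RECOVERY SIMPLE;
    GO
    USE [WSS_Content];
    GO
    -- Step 2: shrink the log file down to roughly 1GB (target size is in MB)
    DBCC SHRINKFILE (N'WSS_Content_log', 1024);
    GO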
March 20th, 2008 9:41pm
If this is a production SharePoint site, what I would do is set the recovery model back to Full and schedule transaction log backups; the inactive portion of the log is truncated (marked reusable) after each backup, which will keep the file small.
If you use Simple mode, you can only restore to your last backup; you cannot use the log to recover changes made between that backup and now.
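A minimal sketch of that routine, using the same placeholder database name as above and an assumed local backup path (neither comes from this thread):

    -- Switch back to Full recovery so log backups are possible
    ALTER DATABASE [WSS_Content] SET RECOVERY FULL;
    GO
    -- Take a full backup first: the log backup chain starts from a full backup
    BACKUP DATABASE [WSS_Content]
        TO DISK = N'D:\Backups\WSS_Content.bak';
    GO
    -- Back up the transaction log; this marks the inactive portion reusable
    BACKUP LOG [WSS_Content]
        TO DISK = N'D:\Backups\WSS_Content_log.trn';
    GO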
March 20th, 2008 10:44pm
I am very new to WSS and need some help with the log file issue. Everyone suggests using SQL Management Studio, but how do I connect to the database remotely, since my server's drive is almost full? And how do I find the database name? I used all the defaults during the install.

I have installed SQL Management Studio Express on my computer and want to connect to our test WSS v3 server. Can this be done, or do I need to install the utility on the server? Can I force the install to another partition other than C:?
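One caveat worth knowing: a default WSS v3 Basic install uses the Windows Internal Database, which only accepts local connections (via the named pipe \\.\pipe\MSSQL$MICROSOFT##SSEE\sql\query), so in that case Management Studio Express would have to run on the server itself. Once connected, a quick sketch to find the database names:

    -- List every database on the instance; SharePoint databases typically
    -- have names starting with WSS_Content, SharePoint_Config, etc.
    SELECT name, create_date
    FROM   sys.databases
    ORDER BY name;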
August 19th, 2008 12:53am
We're generating HUGE LDF files (150GB). I think this is related to SharePoint logging a continual saving of a file. I'm not sure how this is happening, but at one time I set up alerts to notify me if a file was modified. Unfortunately I got thousands of emails; it looked like the file was modified thousands of times.
I'm guessing this is related to having a workflow that triggers on a file being modified, which in turn modifies the file itself. Has anyone else had this issue?
Is there any way to update a document based on it being modified without triggering a cascade of events?
February 1st, 2011 6:07pm
To answer a few of these questions and be clear about the transaction log:
- First, a transaction log doesn't shrink by taking a log backup, and setting the recovery model to Simple doesn't shrink the file either. The only way to shrink an ldf file is with DBCC SHRINKFILE or from Management Studio.
- An ldf file will only shrink back to the last active portion of the log. If the active portion sits at the end of the file, the shrink will effectively do nothing. There are undocumented commands (such as DBCC LOGINFO) to see where the active portion of the log file is; see the sketch after this list.
- The ldf file is used in a rotating fashion: as long as there is free space in the log file (at the beginning or the end), SQL Server will use it. That space is freed when you take a log backup (FULL recovery model) or when a checkpoint is written to the database (SIMPLE recovery model).
- If your log file is 100GB in size and there is no space left, then you need it. If it is (more or less) 99 percent free, the log file has grown at some point, and there can be many causes. Perhaps, and likely on SharePoint, users have uploaded or changed a lot of documents. I've seen one situation where users had a tool to import a file share into SharePoint; in one weekend they decided to import 250GB of file shares, which caused an incidental growth spike.
- As a best practice, your log file should never have to grow and should never be more than 90% used. Growing a log file causes processes to wait, e.g. you cannot update a record while the log file is growing, and if you don't have a fast disk subsystem the wait will be noticeable, depending on the growth size.
- The initial size for the log file is determined by the model database, as is the growth setting of the database.
- Switching to the SIMPLE recovery model doesn't shrink your log file; it just clears some space inside it.
- Shrinking the log file doesn't hurt performance as much as shrinking a data file, but there is no use in shrinking a log file below its minimal working size. If your log file regularly sits around 4GB, shrinking it to 1GB will just cause the log to grow again at some point; setting its size to 4GB is the better option.
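As a sketch of the commands mentioned above (DBCC LOGINFO is undocumented, so treat its output as subject to change; database and logical file names are placeholders):

    USE [WSS_Content];
    GO
    -- Undocumented: one row per virtual log file (VLF); Status = 2 marks
    -- the active portion, so an active VLF at the end blocks any shrink
    DBCC LOGINFO;
    GO
    -- Presize the log to its working size (4GB here) instead of letting
    -- it autogrow in small increments
    ALTER DATABASE [WSS_Content]
        MODIFY FILE (NAME = N'WSS_Content_log', SIZE = 4096MB);
    GO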
If you need recovery to a point in time, use the FULL recovery model. If you can afford to lose the data between full backups, you can use the SIMPLE recovery model. Although the FULL recovery model logs more actions, it provides the highest recovery possibilities. In the case of SharePoint you want to use the same recovery model on all databases.
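To verify that all databases on the instance agree, a quick query (sys.databases is available from SQL Server 2005 onward):

    -- Show the recovery model of every database on the instance
    SELECT name, recovery_model_desc
    FROM   sys.databases
    ORDER BY name;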
Robert Hartskeerl - http://sqlblog.hartskeerl.nl - http://twitter.com/rhartskeerl
February 14th, 2011 3:49am