SQL .LDF log file multiple GB after backup
MOSS, SQL Server 2005, stand-alone box.

I noticed disk space was running lower than I would expect. I went through our Site Collections and confirmed that we have less than 1 GB of total storage used across them. Searching the file system for disk usage, I found two LDF files taking up multiple GB of disk: the LDF for our config DB was ~10 GB and the LDF for one of our content DBs was ~6 GB. I knew that running a SQL backup or an stsadm catastrophic backup should take care of these files by committing all the transactions, thus reducing their size.

I performed SQL backups successfully to a different drive (since this drive was nearly full), but the LDF files only decreased by a few MB. I asked a SQL DBA about this and was shown the initial size value (from SQL Server Management Studio, expand Databases, right-click the DB, select Properties, then select "Files" in the top-left pane; the MDF and LDF files are listed with an Initial Size column). Sure enough, the initial size for the config DB was 10482 MB and the initial size for the content DB in question was 5511 MB. I tried to manually adjust this value but it did not take effect.

I then attempted to shrink the transaction log file, since the backup was not reducing it (right-click the DB, select Tasks > Shrink > Files, select the log file in the first drop-down; under shrink actions, shrinking to 0 or even 1 GB would have been nice at this point). This didn't help.

My questions:
1. Why does the LDF have such a high initial size? SharePoint created these databases programmatically, and other content DBs in this farm have an initial size of < 100 MB.
2. Is there a safe way to decrease the footprint of these files?
3. Are there steps I can take to keep SharePoint from setting such a high initial size in the future?
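For reference, the same checks can be done in T-SQL rather than through the SSMS GUI. This is a sketch; `WSS_Content` and `WSS_Content_log` are placeholder names, so substitute your actual database name and the logical log file name it reports:

```sql
-- Run in the context of the database in question.
-- size is reported in 8 KB pages, so convert to MB.
USE WSS_Content;
SELECT name, type_desc, size * 8 / 1024 AS size_mb
FROM sys.database_files;

-- Attempt to shrink the log file to roughly 1 GB
-- (first argument is the logical file name from the query above)
DBCC SHRINKFILE (WSS_Content_log, 1024);
```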
March 11th, 2008 12:22am
I finally solved this with the following steps:

- Set the recovery model to Simple for the databases in question. I waited a weekend and ran more DB backups, but the log file didn't reduce its footprint; the initial size stayed put.
- Shrink the log file. This finally did it, but it required the recovery model change to make any difference. I left this as my last resort, as I have heard that shrinking the transaction log file is a bad thing.

Can anyone confirm that shrinking the log file is a bad thing (with an explanation)?

Granted, changing the DB to the Simple recovery model means I give up point-in-time restores. What if you need point-in-time restores? Do you then need several times your content DB's size in disk space for your transaction log files? Is there a way to get point-in-time restores without keeping logs going all the way back to your content DB's birth? Maybe you want point-in-time logs kept for a week.
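The two steps above can also be run as T-SQL. A sketch, assuming `WSS_Content` is the database and `WSS_Content_log` is its logical log file name (both placeholders):

```sql
-- Find the logical name of the log file first
USE WSS_Content;
SELECT name FROM sys.database_files WHERE type_desc = 'LOG';

-- Step 1: switch to the Simple recovery model
ALTER DATABASE WSS_Content SET RECOVERY SIMPLE;

-- Step 2: shrink the log file to a target size in MB (here, 1 GB)
DBCC SHRINKFILE (WSS_Content_log, 1024);
```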
March 20th, 2008 9:41pm
If this is a production SharePoint site, what I would do is set the recovery model back to Full and run regular transaction log backups; the log is truncated after each log backup, which will keep the LDF files small.
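A transaction log backup looks like this (a sketch; the database name and backup path are placeholders for your own):

```sql
-- Requires the Full (or Bulk-Logged) recovery model; the inactive
-- portion of the log is truncated once the backup completes
BACKUP LOG WSS_Content
TO DISK = 'D:\Backups\WSS_Content_log.trn';
```

Schedule this (for example, as a SQL Server Agent job) at an interval matching how much work you can afford to lose between backups.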
If you use the Simple recovery model, you can only restore to your last backup; you cannot replay the log to recover changes made between that backup and now.
March 20th, 2008 10:44pm
I am very new to WSS and need some help with the log file issue. Everyone suggests using SQL Management Studio, but how do I connect to the database remotely, since my server's drive is almost full? And how do I find the database name? I used all the defaults during the install.

I have installed SQL Management Studio Express on my computer and want to connect to our test WSS v3 server. Can this be done, or do I need to install the utility on the server itself? Can I force the install to a partition other than C:?
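Once connected, the database names and physical file locations can be listed from any database context (a sketch; works on SQL Server 2005 and later):

```sql
-- List every database with its physical data/log file paths and sizes
SELECT d.name AS database_name,
       mf.type_desc,
       mf.physical_name,
       mf.size * 8 / 1024 AS size_mb
FROM sys.master_files AS mf
JOIN sys.databases AS d ON d.database_id = mf.database_id
ORDER BY d.name;
```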
August 19th, 2008 12:53am
We're generating HUGE LDF files (150 GB). I think this is related to SharePoint continually logging repeated saves of a file. I'm not sure how this is happening, but at one time I set up alerts to notify me when a file was modified, and unfortunately I got thousands of emails; it looked like the file was being modified thousands of times.
I'm guessing this is related to having a workflow that triggers when a file is modified and which in turn modifies the file itself. Has anyone else had this issue?
Is there any way to update a document in response to its being modified without triggering a cascade of events?
February 1st, 2011 6:07pm