Exchange 2007 CCR build
Hello all,

As an interim measure until I go to Exchange 2010 across my org, I will be building a couple of CCR clusters to complement the 6 SCC clusters I currently have. This "is" a financial decision, as the SAN supporting the SCC environment is getting too expensive for us to justify.

In sizing the CCR, however, I am struggling to marry up the Exchange sizing calculator's suggestions with what I would class as "reality". According to the sizing calc, I would need upwards of 100 SAS disks at 10k RPM to satisfy 4,000 heavy users... this just sounds wrong. Would a couple of people who are running fairly heavy mailbox servers in a CCR mind getting in touch and lending your wisdom as to how your service is performing? I am "fairly" certain I will still go ahead with CCR, but I'd like some idea of what each cluster can support in terms of user load; it will help me figure out whether I need more than 2 clusters for the users I intend to move onto them.

I am not concerned about disk space as such in this CCR design; it is the I/O requirements. What I want is to be able to host 3,000-4,000 heavy users on a CCR mailbox server with DAS attached. My provisional specs for a single CCR cluster are:

1) Two servers with 48GB RAM and 2 x 4-core CPUs - very highly specced, as you can see ;-)
2) Two DAS units (an HP 2700 device, I think it was, that I found), each with 25 SFF SAS 10k RPM disks at 450GB capacity per disk.

In basic terms, I could expect roughly 25 x 160 IOPS per tray... the question is whether that would satisfy the read/write IOPS requirements. It's all theoretical right now, but in theory I would get 4,000 total IOPS from this, and I can only really afford a RAID 5 configuration. So, in theory, I am looking to run these 4,000 mailboxes on maybe 3,000 IOPS (factoring in 25% degradation during a RAID rebuild). With 48GB RAM per server, I imagine the read/write ratio would be about 1:1, and I also imagine per-mailbox IOPS would be relatively small (maybe 0.5).

That gives me 2,000 IOPS needed to run the server on a day-to-day basis, with about 1,000 in contingency. Factor in AV and message scanning, as well as some overhead, and it sounds like I have enough... yet the Exchange calculator suggests I "should" be using RAID 10 (100 disks) just for the performance side of it! OK, with RAID 10, 50 spindles would give me ample IOPS, but I really hope/think that the generous RAM (48GB) and reasonable-quality disks (25 SAS disks) will give me enough to handle 4,000 heavy users. I certainly should have enough disk space.

Has anyone got experience of running this load on a CCR, and what configuration did you have to use? Any tips much appreciated, folks.

T
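For what it's worth, the back-of-envelope numbers above can be sketched out with the standard RAID write penalties factored in (RAID 5 costs roughly 4 back-end I/Os per host write, RAID 10 roughly 2). The per-disk and per-mailbox figures are the post's own estimates, not measured values:

```python
# Back-of-envelope IOPS check for the 4,000-mailbox CCR node described above.
# Figures come from the post; RAID write penalties are standard rules of thumb
# (RAID 5 = 4 back-end I/Os per host write, RAID 10 = 2).

DISKS_PER_TRAY = 25
IOPS_PER_DISK = 160        # 10k RPM SAS rule of thumb
MAILBOXES = 4000
IOPS_PER_MAILBOX = 0.5     # "heavy user" estimate from the post
READ_RATIO = 0.5           # assumed 1:1 read:write mix

def effective_host_iops(raw_iops, write_penalty, read_ratio):
    """Host IOPS a RAID set can serve, given its back-end write penalty."""
    # Each host read costs 1 back-end I/O; each host write costs `write_penalty`.
    return raw_iops / (read_ratio + (1 - read_ratio) * write_penalty)

raw = DISKS_PER_TRAY * IOPS_PER_DISK          # 4,000 back-end IOPS per tray
required = MAILBOXES * IOPS_PER_MAILBOX       # 2,000 host IOPS needed

for name, penalty in [("RAID 5", 4), ("RAID 10", 2)]:
    host = effective_host_iops(raw, penalty, READ_RATIO)
    print(f"{name}: ~{host:.0f} host IOPS vs {required:.0f} required")
```

Under these assumptions, a RAID 5 tray delivers around 1,600 host IOPS against the 2,000 required, while RAID 10 comes in around 2,700 — which may be why the calculator keeps pushing RAID 10 despite the healthy raw spindle count.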
February 8th, 2011 6:13am

Well, if it's a financial decision as you state, are you going about this in the right manner? Going 2007 CCR doubles your storage, so that's twice as much money as you need to spend. Going SAS disks is a waste of money, because when you do go to 2010 you will have grossly over-specced and somewhat under-sized your disks for the future. The only way a SAN can get "too expensive" is if you have had it for four years and the vendor is charging you for extended support. Those are three reasons your plans should be reviewed with management. Tell them that you can do all the stuff you say and keep within financial constraints, but that in 18 months' time you will have proven that a good chunk of it was wasted money. 600GB SAS disks versus 2TB SATA disks? Point that out.

Why CCR? What's the business justification for it if you're currently running SCC? If you do this, you need to be aware that you are never going to go to Exchange 2010. You will instead skip it and go to Exchange 20-next, and you should acknowledge that you will be out of support by the time you do that upgrade.

The calculator is maybe telling you that each server needs 50 disks. IOPS for 4,000 users in 2007 could be around 3,000 IOPS. That can be met with 20 disks, which is 12TB, or 3GB per mailbox. Then add disks to account for RAID. That makes your 100 disks about right (25 disks for I/O and capacity at 2GB per mailbox, multiplied by 2 for RAID 10 makes 50, and multiplied by 2 again makes the 100 for the environment).

BE AWARE: I haven't got any access to your sizing calculator, and these numbers are for illustrative purposes. I do not need feedback from anyone about whether they are right or need to be tweaked. They are, again, illustrative for this case, to indicate that 100 isn't such a crazy number of disks. Thank you!

I'd have a good hard look. If there's ever a reason not to skip a version, your case is it. It's 2011.
Don't go deploying 2007 on a disk type that effectively locks you into a version that won't be supported by the time the disks you've put it on go EOL.

Mark Arnold, Exchange MVP.
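The illustrative arithmetic in the reply can be laid out like this (the per-disk IOPS figure is an assumed rule of thumb; the rest are the reply's own illustrative numbers, explicitly not calculator output):

```python
# Rough reconstruction of the illustrative 100-disk count from the reply.
# IOPS_PER_DISK is an assumed 10k-SAS rule of thumb, not a figure from the reply.
import math

USERS = 4000
IOPS_NEEDED = 3000           # "around 3,000 IOPS" per the reply
IOPS_PER_DISK = 160          # assumed rule of thumb

# Spindles needed for I/O alone, before RAID overhead.
io_spindles = math.ceil(IOPS_NEEDED / IOPS_PER_DISK)   # 19, i.e. ~20 disks

base_spindles = 25               # I/O plus capacity headroom, per the reply
per_node = base_spindles * 2     # RAID 10 mirrors every spindle -> 50
per_cluster = per_node * 2       # CCR keeps a full second copy on the passive node

print(io_spindles, per_node, per_cluster)   # 19 50 100
```

The point being made: RAID 10 and CCR each double the spindle count, so a modest 25-disk baseline quickly becomes 100 disks for the environment.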
February 8th, 2011 8:47am

Cheers for the thoughts, Mark.

>> The only way a SAN can "get too expensive" is if you have had it for four years and the vendor is charging you for extended support.

The SAN is only midway through its lifecycle. We are paying 15k for 4TB of usable storage, which amounts to 2TB usable once 2-way replication comes in. The business offered 2GB mailboxes to a 55,000-user environment, and lo and behold, people are enjoying that. The business wanted 2010 in by the end of this summer (2011), but was only providing 75k to implement it! So we were set an almost impossible task, to my mind. It would involve having to use SAN storage as direct-attached storage, an awful return on investment for the product. We would have enough project funds to purchase a couple of servers and probably 4 DAS units (racks and UPS would also be needed); we would then be in a cycle of bunny-hopping mailbox servers and connecting unsuitable SAN storage directly to servers once we'd managed to extract it from the SAN storage pool. Co-existence becomes extended: every connection we have is EWS/Outlook Anywhere/IMAP and is remote, nothing within a domain. Co-existence would be very complicated, so unless funds were provided for 2010, I suggested an alternative way of getting cheaper storage with 2007: CCR. However, if 100 or so disks really are needed to support 4,000 heavy users, then it is certainly not "much" cheaper at all once other costs are factored in (racking, power).

>> Why CCR? What's the business justification for it if you're currently running SCC?

In fact, it was because I reckoned it would be half the price of our current SCC SAN storage. I thought a 25-disk tray per CCR node, so 50 for a cluster, would provide enough IOPS even if configured with RAID 5. However, I am not quite as certain it does...

I could imagine getting cheaper SAS disks, so that I can afford 50 per CCR node, and it would probably still be cheaper than the 15k per SAN node for 2TB of usable storage. HOWEVER, your point about these disks then being over-specced for 2010 (for I/O, though not for size) is a very good one. It certainly seems as if the most intelligent decision IS for us to go to 2010 soon, in which case I really need to petition for funds to be brought forward 2 years to cover the cost... and if they cannot do that, well, what can I say?

One point, however: you mention that SAS would be entirely over-specced for 2010. I've seen enough notional postings around the web where SATA disks are still not being wholly endorsed as an enterprise-class solution for 2010... am I completely wide of the mark here? At the end of the day, if we keep buying these SAN nodes to tide us over, we will have overspent significantly on direct-attached storage, as that is what we would be expected to use as our initial 2010 DAS! My aim was to make 2007 storage cheaper than it currently is, because at the end of the day it appears there are no funds to invest in an Exchange 2010 project. "Exchange 2013", or whatever is next, will probably be the most recent version we can go to! 2007 out of support by then, you think? I suspect another product will be around and implemented by 2014/2015, won't it?
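As a rough sanity check on the cost argument, the cost per usable TB can be sketched like this. The SAN figures are from the post; the DAS tray price is a placeholder assumption that would need replacing with a real quote:

```python
# Cost-per-usable-TB comparison: SAN figures from the post; the DAS tray
# price is a PLACEHOLDER assumption, not a real quote.

SAN_COST = 15_000            # per SAN node, as quoted in the post
SAN_USABLE_TB = 2            # 4 TB usable, halved by 2-way replication

TRAY_DISKS = 25
DISK_GB = 450
TRAY_COST = 12_000           # hypothetical DAS tray price -- substitute a real quote

# Usable capacity per tray under each RAID layout (ignoring hot spares).
raid5_usable_tb = (TRAY_DISKS - 1) * DISK_GB / 1000      # one parity disk's worth lost
raid10_usable_tb = (TRAY_DISKS // 2) * DISK_GB / 1000    # mirrored pairs, odd disk idle

print(SAN_COST / SAN_USABLE_TB)        # 7500.0 per usable TB on the SAN
print(TRAY_COST / raid5_usable_tb)     # per usable TB on RAID 5 DAS
print(TRAY_COST / raid10_usable_tb)    # per usable TB on RAID 10 DAS
```

Even with RAID 10 halving the tray's usable capacity (and CCR requiring a full second copy on the passive node), the per-TB gap versus the SAN figure quoted above is large, which is presumably what motivated the CCR proposal in the first place.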
February 8th, 2011 11:42am

This topic is archived. No further replies will be accepted.
