BranchCache between two 2012 R2 servers

Hi,

I find the BranchCache documentation lacking, as is more and more usual with MS. There's a lot of info on what it does and a lot of marketing jabbajabba, but not much technical info, at least not for troubleshooting. I have a remote site with only one 2012 R2 server and no clients. That server holds a copy of some data from the main site. In several places there is some info on how 'well BranchCache works with deduplication in 2012 R2'.

My goal is to use BranchCache to limit WAN bandwidth usage while syncing the data over SMB. What I have done so far: enabled BranchCache for network files on the main site's file server, configured its GPO to 0 ms latency, allowed hashing for all shares (just to be sure), and additionally enabled BranchCache on the specific share.
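For reference, the server-side pieces can also be set up locally instead of (or alongside) GPO. A minimal sketch, assuming the 2012 R2 role-service name `FS-BranchCache`; the policy path is from memory:

```powershell
# On the main-site file server (the content server):
Install-WindowsFeature FS-BranchCache   # "BranchCache for Network Files" role service

# Hash publication itself is a policy setting; it lives under
# Computer Configuration > Administrative Templates > Network > Lanman Server >
# "Hash Publication for BranchCache" (allow publication on all shares,
# or only on shares where BranchCache is enabled).
```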

On the remote site, having just one server there, I enabled BranchCache in distributed mode, as I see no use for hosted mode in this environment (but please prove me wrong if needed!). It's enabled through GPO, and it is in fact enabled:
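For completeness, the local (non-GPO) equivalent on the branch server would be something like this sketch, assuming the BranchCache PowerShell module that ships with 2012 R2:

```powershell
Install-WindowsFeature BranchCache   # the BranchCache feature itself
Enable-BCDistributed                 # switch the client component to distributed-cache mode
Get-BCStatus                         # verify: CurrentClientMode should read DistributedCache
```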

PS C:\Users\<me>> get-bcstatus

BranchCacheIsEnabled        : True
BranchCacheServiceStatus    : Running
BranchCacheServiceStartType : Automatic


ClientConfiguration:

    CurrentClientMode           : DistributedCache
    HostedCacheServerList       :
    HostedCacheDiscoveryEnabled : False


ContentServerConfiguration:

    ContentServerIsEnabled : True


HostedCacheServerConfiguration:

    HostedCacheServerIsEnabled        : False
    ClientAuthenticationMode          : Domain
    HostedCacheScpRegistrationEnabled : False


NetworkConfiguration:

    ContentRetrievalUrlReservationEnabled : True
    HostedCacheHttpUrlReservationEnabled  : True
    HostedCacheHttpsUrlReservationEnabled : True
    ContentRetrievalFirewallRulesEnabled  : True
    PeerDiscoveryFirewallRulesEnabled     : True
    HostedCacheServerFirewallRulesEnabled : True
    HostedCacheClientFirewallRulesEnabled : True


HashCache:

    CacheFileDirectoryPath               : C:\Windows\ServiceProfiles\NetworkService\AppData\Local\PeerDistPub
    MaxCacheSizeAsPercentageOfDiskVolume : 1
    MaxCacheSizeAsNumberOfBytes          : 533169397
    CurrentSizeOnDiskAsNumberOfBytes     : 29433856
    CurrentActiveCacheSize               : 0


DataCache:

    CacheFileDirectoryPath               : C:\Windows\ServiceProfiles\NetworkService\AppData\Local\PeerDistRepub
    MaxCacheSizeAsPercentageOfDiskVolume : 5
    MaxCacheSizeAsNumberOfBytes          : 2665846985
    CurrentSizeOnDiskAsNumberOfBytes     : 29433874
    CurrentActiveCacheSize               : 0

    DataCacheExtensions:

Both the main-site and remote-site volumes are deduplicated, by the way. As I understand from the scarce info available, that should in fact help BranchCache, as the files are already hashed. No matter how often I copy a single file from the main site, I never get any results. I have Perfmon open with all BranchCache counters, but they don't reflect a single action or byte at all. I have used https://mizitechinfo.wordpress.com/2014/12/30/step-by-step-deploy-configure-branchcache-in-windows-server-2012-r2/, https://gallery.technet.microsoft.com/Windows-Server-2012-R2-and-c18a6dd1 and https://technet.microsoft.com/library/jj572990 to no avail.
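Instead of eyeballing Perfmon, the same counters can be sampled from PowerShell while a test copy runs; a sketch, assuming the counter set is simply named 'BranchCache' on this build:

```powershell
# Take three 5-second samples of every BranchCache counter during a test copy
# and show only the ones with a non-zero value
Get-Counter -Counter '\BranchCache\*' -SampleInterval 5 -MaxSamples 3 |
    ForEach-Object { $_.CounterSamples | Where-Object CookedValue -gt 0 }
```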

I am now installing Windows 8.1 Enterprise, as here and there I read that you need Enterprise to use this. However, all client components seem to be available in 2012 R2 as well.



My concrete questions:

- Is it at all possible to use 2012 R2 as a client? About the same question is asked here: https://social.technet.microsoft.com/Forums/windowsserver/en-US/551c55ab-7e49-4a18-8315-13fcf3cab522/branchcache-client-on-a-rd-host?forum=winserverfiles, but there is no answer.

- What should I expect BranchCache to do together with dedupe?





January 28th, 2015 2:40pm

I hadn't tried hashgen yet, nor am I familiar with it. I assume the source data (at the main site) needs to be hashed with this? I ran it:

C:\Users\<me>>hashgen "v:\New folder"
Processing directory v:\New folder

 File hpacuoffline-8.75-12.0.iso processed successfully for hash version 1.
 File hpacuoffline-8.75-12.0.iso processed successfully for hash version 2.
 File VSE880POC975906.zip processed successfully for hash version 1.
 File VSE880POC975906.zip processed successfully for hash version 2.

 Processing complete.
 4 Files processed successfully.
 0 Files processed unsuccessfully.

I know I only need v2 hashes when using 2012 R2 exclusively, but I generated both just to be sure. I copied these files to the remote site, both push and pull, but it won't work. Still 0 bytes cached.

After that I let the remote site generate hashes as well, but there are still 0 bytes in the cache, and the copy isn't any faster (the two test files had already been at the destination once while hashing).
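A related option I haven't tried: 2012 R2 also has cmdlets to stage hashes and data into a package on the content server and import it on the branch server, which should pre-seed the remote cache. A sketch, assuming the `Publish-BCFileContent`, `Export-BCCachePackage` and `Import-BCCachePackage` cmdlets; all paths and the package file name are hypothetical:

```powershell
# On the content server: generate hashes and stage the content data
Publish-BCFileContent -Path 'V:\New folder' -StageData -StagingPath 'C:\BCStage'
Export-BCCachePackage -Destination 'C:\BCStage'

# Transport the resulting package to the branch server out of band, then there:
Import-BCCachePackage -Path 'C:\BCStage\PeerDistPackage.zip'
```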

I am at the moment not even sure anymore how the process works; I've read too much about it today, I guess. When I have a BranchCache-enabled share at site A and a distributed (or even hosted, for that matter) cache at site B, and I copy a file a few times (three, I believe), the data should end up in the distributed cache, correct? In addition I believe, though the information on this is very scarce, that when we copy from and to deduplicated volumes, BranchCache can use those hashes as well and really only transfer changed blocks from site A to B, as opposed to the whole file if it isn't available. A bit like source-side deduplication, more or less. That is in fact what I need to achieve. The files that need to be synced are about 98% identical from day to day, but large in individual size. I don't want to transfer 100 GB when only 1 GB is actually new data. That's my goal.

 Thanks for your help so far, greatly appreciated!


January 28th, 2015 9:27pm

Good idea to try BITS; I hadn't thought of that. However, it's still not working. The transfer works fine, but no hashing is used, nor is anything cached.
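For the record, the BITS attempt looked roughly like this sketch (server, share and path names are hypothetical); note that only pulls initiated on the branch server can be BranchCache-aware:

```powershell
Import-Module BitsTransfer
# Pull the test file from the main site to the local repository over SMB via BITS
Start-BitsTransfer -Source '\\mainserver\data\1.iso' -Destination 'D:\repo\1.iso'
```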

About the Standard/Enterprise thing: I wonder about that, as 2012 R2 has no Enterprise edition; it was merged into Standard (i.e. most features that required Enterprise before are available in 2012 (R2) Standard). The only 'higher' option is Datacenter, but I don't think that should matter for BranchCache.


January 30th, 2015 7:49am

On the main site I have BranchCache for Network Files enabled, and the hashing policy set to 'allow all shares to be hashed'. On the remote site I have BranchCache for Network Files as well as BranchCache itself installed; if you don't, you can't even enable a caching server. I can't get over the idea that you could install and configure it fine but not be able to use it. That's weird beyond belief.

I can try with Windows 8.1, but I am not sure it will run on my remote HP server hardware. I could of course always virtualize it, but that's another bunch of overhead. In addition, while I know they share about the same kernel, I don't think I want to use a desktop OS for a backup repository :)

I just don't get, first, why it would not be possible between servers, and second, why there is no serious documentation on this, once again...

I'll try a test setup with Windows 8.1 though.
January 31st, 2015 11:23am

I said servers, but I meant server. I am testing by copying a 100 MB file from the very same share each time. What I expected is the second or maybe third copy to consume less bandwidth. Alas, it's not working like that (yet...).

By the way, what I've read and pieced together from the scarce documentation available is that when used together with dedup, any block that already exists in either file is not transferred again. That way it should in fact work if I copy the same file from two servers. I'm looking up the source of that info.

January 31st, 2015 5:12pm

FIXED IT (I think)

So, you need to install the Desktop Experience feature on your server. This is because BranchCache over SMB is linked to Offline Files.

Install the feature under 'User Interfaces and Infrastructure' (it needs a reboot).

Go to Sync Center and enable Offline Files (another friggin' reboot).

Then retry your tests. I just dragged a file from the other server's share, and it went straight into the BranchCache cache.
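For reference, the same feature install from PowerShell would be something like this sketch (feature name as I recall it on 2012 R2):

```powershell
# Installs Desktop Experience (found under User Interfaces and Infrastructure)
# and reboots the server when the install completes
Install-WindowsFeature Desktop-Experience -Restart
```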

Cheers

Phil

http://2pintsoftware.com



  • Proposed as answer by Phil Wilcock Saturday, January 31, 2015 6:46 PM
January 31st, 2015 6:46pm

Finally had time to spend on this, and I'm glad to report I finally got it working. The issue was that I had probably stared myself blind at it: I forgot to install the Desktop Experience feature on the source server in the datacenter, which made BranchCache over SMB not work. It works now, although it has some quirks: it only works when the remote server pulls data from the source; when the source pushes data to the remote, it doesn't. It works together with dedup rather well: as soon as dedup has run, those hashes are used.

So some quirks I have to work out but for now I'm good :) Thanks for your help!


March 17th, 2015 10:26am

It still doesn't work the way I thought it would. I have two test files, 1.iso and 2.iso, each about 150 MB in size. When I copy them, regular bandwidth is used, as expected. Then I reboot the branch server to be sure the filesystem cache is flushed and the like. Copying them over again is done in a second. However, when I combine the two files into one larger file, I would expect it to be fast again, as the content of that new file is exactly the same as that of the two separate files. That file travels the line completely again, though.

Even if I copy one of the test files on the source server to another file, creating two identical files with only a different name, the whole file travels the line again.

So I get the impression BranchCache does use dedup hashes to avoid spending CPU on creating new hashes, but that it does not use a 'global' store of hashes; it works on a per-file level. As far as my tests go, I cannot get it to do source-side dedup. That's what I want to achieve: to copy only unique data across, but on a volume-wide basis. We make one full backup every week, which differs only a few percent from last week's backup. I'd like to transfer only the unique blocks, as the rest is already at the destination side.


March 18th, 2015 2:49pm

But why would I need a large BC cache if BC can use the dedup store (which it claims to do), where 95% of my data already is?

Are you able to find the name or thread of the tool you mentioned? I've been working with Replacador, but that isn't really what I want, as it syncs a whole disk rather than files. Too little flexibility there.


Thanks for your reply!
June 9th, 2015 10:41am

I did check with the MS devs, though, and in theory the BC cache size should be the same as your deduped content. So if you dedup the source (which I assume you have), get the size of that. Then set the destination machine's BC cache a bit larger than that. As content is pulled from the source (a push will never be BC-aware, AFAIK) to match the destination, it should all go via the BC cache, and dedup will go into action, as the BC cache is deduped.
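A sketch of what that sizing would look like with the BC cmdlets; the percentage is a hypothetical value, so pick one that yields a bit more than the deduped source size:

```powershell
Get-BCDataCache              # check the current data-cache location and limits
Set-BCCache -Percentage 50   # let the local BC cache use up to 50% of the volume
```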

Worth a try?

//Andreas

June 23rd, 2015 8:36pm

I finally found out why BC wasn't working for me. BranchCache for Network Files, i.e. over SMB, at least in my environment, seems to have a limit of about 3.8 GB per file; not 4 GB, which would sound like a more logical limit. I have backup files ranging from about 2 GB up to 1.2 TB in size. All files up to about 3.8 GB work perfectly fine and get 'BranchCached' pretty well. Whenever I transfer a larger file over SMB, it no longer works; the BC counters don't go up. It might have to do with the Offline Files dependency; I don't even understand why MS made it dependent on that anyway. I gave it the largest amount of space, but that didn't help.
The solution for me is as simple as moving to IIS and using HTTP(S) to transfer the files. It needs a little adaptation in my scripts, but that's fine. I've tried a 140 GB file with that, and while it isn't finished yet, the BC counters go up just as expected.
I'll let you know when I finally got what I wanted to achieve.
July 15th, 2015 9:28am

Yeah, make sure you use BITS and not the regular HTTP BranchCache.

A writeup on the topic: http://2pintsoftware.com/branchcache-de-duplication/
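In practice that means pulling with BITS rather than doing a plain HTTP download; a sketch (URL and paths hypothetical):

```powershell
Import-Module BitsTransfer
# BITS pull over HTTP from the IIS site at the main site;
# BranchCache participates in the transfer on the pulling side
Start-BitsTransfer -Source 'http://mainserver/backups/full.vbk' -Destination 'D:\repo\full.vbk'
```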

//Andreas

July 24th, 2015 7:01am

Just yesterday I had about given up on BranchCache. I've let it run for about a week and a half now, and my BC cache has built up to 1.1 TB, using BITS. Yet hardly any blocks are coming from the cache. To test this, I copy the next week's full backup, where the previous week's full is already there and in the cache. About 1-2% comes from the BC cache, while all the rest has to travel the line, still ending up in the cache. One might say the blocks are just different; if they weren't, the cache wouldn't build up. Yet deduplication dedupes both files against each other to be practically identical.

So I am terribly confused, and also frustrated, to say the least: dedup works fine, hence the blocks are equal; BC doesn't work well, hence the blocks aren't equal.

July 24th, 2015 7:15am

OK, that's not good. How are you copying it, HTTP or BITS?

What file sizes do you have the most of? I think I am going to test this; I need some TB disks, though!

Did it work if you started with a smaller size and then built up gradually? Where does it stop working?

What size is your de-duped source volume?

//Andreas

July 24th, 2015 11:00am

Also, can you check the BranchCache + Dedup + VSS logs for any errors?
July 24th, 2015 3:29pm

Sorry for the late reply; I've been quite busy with other challenges. There are two things. First, I am currently transferring another full backup, so again one which differs only by a few percent. For now it seems to work rather well. The one thing I should note is that the initial sync, done when I brought the remote backup server to the main site, was transferred over HTTP rather than BITS. This is of course reflected in the BC stats. For some reason, the next full transfer I did through BITS just had a very, very low cache hit ratio. I assume ALL BC caches, be it SMB, HTTP or BITS, go into the same store? For some reason, the second BITS transfer I am doing now runs perfectly fine, and is actually mostly capped by disk space.

A second thing is that I found the I/O performance very low. At the moment, as I am still testing, I am running this on rather old HP G6 hardware with 6x 1 TB disks on a Smart Array 410i with 512 MB cache. However, as I have only 6 spindles in this machine, I created just one big array, so working with BC means a lot of random I/O. Still, performance was very, very bad. Process Monitor as well as Resource Monitor revealed BC was working from the pagefile all the time, generating lots and lots of small I/Os to the pagefile (which is on the very same disk array). The machine has plenty of RAM available, though, so I disabled paging completely (I'm not fond of paging anyway), and BC now works much more smoothly, with about a third of the IOPS I consumed with paging enabled.

So again I have the feeling it's going to be ok ;) But I'll update you again when I've done more testing.

August 3rd, 2015 3:27am

This topic is archived. No further replies will be accepted.
