Converged Networking: Understanding VLANs

Hi

I think I might have a little gap in my understanding of converged networking in Hyper-V. I have never used it before, but it looks fairly simple to set up; it's the whole VLAN ID thing that I don't really get. I have followed these two examples for my configuration. I have read other sources too, but they don't cover the physical switches, which is where I think my knowledge gap is.

https://marckean.wordpress.com/2012/09/26/windows-server-2012-hyper-v-server-command-line-configuration-2/

https://technet.microsoft.com/fr-fr/library/dn550728.aspx

When I first looked at this I thought it wouldn't involve any VLAN configuration on the physical switch; I assumed that the virtual switch on Hyper-V handled all that. However, when I use VLAN IDs as per the examples above, I cannot route traffic through my main network between the two hosts. (I have two hosts, each with 8 NICs, all plugged into my main network switches, which are also shared by all my clients, servers, laptops, mobiles, etc. I create one team using all 8 NICs.) I create the virtual networks and give them IP addresses, but I can't ping at all from one host to another (for example, to ping between the cluster networks I use ping -S 10.10.7.1 10.10.7.2).

As soon as I remove the VLAN IDs (I just comment out the VLAN ID lines and run the script again) and again set IP addresses on the vNICs used for the cluster, I can ping between the two hosts. My management network uses a 192.1.x.x range and the cluster network uses a 10.10.x.x range. My cluster validation all passes; each vNIC (cluster, migration, SMB) is on its own subnet and this all gets green ticks.

I have two questions about this:

  1. In order to get VLAN IDs working, do I need to configure the physical ports on the physical switches to use specific VLANs, and create a trunk to the switch in my other building for the other node? I am assuming yes, since I can't get this to work.
  2. Without VLAN IDs, is there any real drawback, provided I keep the networks in their own separate subnets, or should I really be using VLANs?

Many thanks

Steve

January 30th, 2015 9:19am

Hi Steve,

>>In order to get VLAN IDs working, do I need to configure the physical ports on the physical switches to use specific VLANs, and create a trunk to the switch in my other building for the other node?

Yes, your understanding is correct. Please change the switch port to trunk mode and allow the specified VLANs on the trunk port.
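
For illustration only (Cisco IOS-style syntax shown purely as an example; the interface name and VLAN IDs are assumptions to adapt to your environment), a server-facing port would look something like:

```
interface GigabitEthernet1/0/1
 switchport mode trunk
 switchport trunk allowed vlan 10,11,12
```

Repeat for every port that carries the team.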

>>Doing it without VLAN IDs, is there any real drawback to this provided I do keep the networks in their own separate subnets or should I really be using VLANs?

VLANs are used to keep traffic for different subnets separated at Layer 2 when it shares the same physical NICs. If you have enough physical NICs to give each subnet its own dedicated links, you can leave VLAN tagging disabled.

Best Regards.

February 2nd, 2015 2:47pm

Just to add to what has been said:

Remember, VLANs are used to separate networks/subnets. If you want to route between different networks/subnets, you need a Layer 3 device (such as a router).

February 2nd, 2015 3:50pm

Hi

Thanks for your reply on this. Since all my network cables form one "team", and virtual network adapters are then created off of this team with different VLAN IDs, I assume I would have to allow multiple VLAN IDs onto each port. So, for example, if Cluster was VLAN 10, Heartbeat VLAN 11 and iSCSI VLAN 12, I'd need to allow VLAN IDs 10, 11 and 12 on port 1 of my switch, and 10, 11, 12 on port 2, and 10, 11, 12 on port 3, and so on?

In answer to my second question: I am trying to do it by making a team, creating a virtual switch from that team, and then virtual networks off that virtual switch, so I can make more effective use of my bandwidth rather than dedicating an entire 1 Gbps link to heartbeat, another 1 Gbps to backup heartbeat, and so on. In this manner I could have far more management-level virtual NICs than physical ones if I wanted.
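
To make that concrete, the shape of what I'm building is roughly this (a sketch only; the team name, member NIC names, and VLAN IDs are placeholders, and the scripts in the links above are the full versions):

```powershell
# Team the physical NICs (switch-independent, so the physical switches need no LACP)
New-NetLbfoTeam -Name "ConvergedTeam" -TeamMembers "NIC1","NIC2" `
    -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic
# One virtual switch on top of the team
New-VMSwitch -Name "ConvergedSwitch" -NetAdapterName "ConvergedTeam" `
    -AllowManagementOS $false -MinimumBandwidthMode Weight
# A management-OS vNIC per traffic class, each tagged with its own VLAN
Add-VMNetworkAdapter -ManagementOS -Name "Cluster" -SwitchName "ConvergedSwitch"
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "Cluster" -Access -VlanId 10
Add-VMNetworkAdapter -ManagementOS -Name "Heartbeat" -SwitchName "ConvergedSwitch"
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "Heartbeat" -Access -VlanId 11
```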

So, to extend your answer slightly: would there actually be a need for me to VLAN the cluster networks, or could I just put each one in its own subnet by setting the IP address and subnet mask of each virtual NIC? I am thinking from a performance perspective, or for some other reason; I am unsure which is best. I don't want to over-complicate the solution, but obviously I want to do it the most modern, best way :)

Referencing the first web link I posted might clarify what my setup most closely resembles.

thanks

Steve

February 2nd, 2015 3:52pm

One subnet per vNIC is correct. Referencing your first web link, here is an example of what to do on the cluster workload vNICs:


New-NetIPAddress -InterfaceAlias "LiveMig" -IPAddress 10.10.0.1 -PrefixLength 24
New-NetIPAddress -InterfaceAlias "CSV" -IPAddress 10.10.1.1 -PrefixLength 24
New-NetIPAddress -InterfaceAlias "Cluster HB" -IPAddress 10.10.2.1 -PrefixLength 24

Adapt according to your specifications...
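
Assuming those vNICs also need tagging (VLAN IDs 10-12 here are placeholders, not your real ones), the matching VLAN assignment would be along these lines:

```powershell
# Tag each management-OS vNIC with its VLAN (IDs are examples only)
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "LiveMig" -Access -VlanId 10
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "CSV" -Access -VlanId 11
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "Cluster HB" -Access -VlanId 12
```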



March 27th, 2015 4:40pm

I think your first link is the more helpful: you can see on the diagram just under the physical switches, they have noted a list of the VLANs that are also configured on each of the vNICs.
This is actually trying to tell you: you must trunk each VLAN used on the vNICs between the physical NICs and the Physical switches.

Once the traffic is isolated by VLAN, the Hyper-V switch maintains that isolation. It functions as a Layer 2 switch only (i.e., devices within a VLAN/subnet can see each other).

Devices on different VLANs/subnets can only communicate at Layer3 (routing).

There is more than one way to provide Layer 3 routing between VLANs, but the basic way is as hinted at by your first link's diagram: the VLANs are extended onto the physical network, and each VLAN has an IP address configured on it (e.g. on the physical network's "core" switch).

A device on VLAN 11 attempting to send a packet to a device on VLAN 12 will see from its own subnet mask that the other device is on a different subnet, so it will not try to find it directly. Instead, it will encapsulate the packet into a frame and address that frame with the MAC address of the IP address it has configured as its "default gateway".

So your default gateway for each VLAN is the address on the physical network's "core" switch where interVLAN routing is planned to occur for that VLAN.
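
As a sketch of what that looks like on an HP-style core switch (the addresses and VLAN IDs below are assumptions, not your real ones), the routing switch gets one IP address per VLAN, and those addresses become the default gateways:

```
ip routing
vlan 11
   ip address 10.10.1.254 255.255.255.0
vlan 12
   ip address 10.10.2.254 255.255.255.0
```

A host on VLAN 11 would then use 10.10.1.254 as its default gateway.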


April 13th, 2015 7:18am


Hi

Thanks for replying. The thing I am a little confused about is how to configure the VLANs on the switches (if I need to do that at all). The first link shows that two physical NICs are teamed together, then split out into vNICs. Those vNICs are still travelling down two physical cables, but there are multiple VLAN IDs within there.

On my switch, do I configure, for example, VLAN 10 and tie it to port 1 of switch 1 and port 1 of switch 2, and then VLAN 11 to port 1 of switch 1 and port 1 of switch 2, and so on?

I have 6 x 1 Gbps cables in the server I am looking at doing this converged design on. I want to team them, with 3 cables connecting to switch 1 (ports 1-3) and 3 cables connecting to switch 2 (ports 1-3). I am going to have multiple VLANs (heartbeat, storage, iSCSI, sync, backup, live migration and so on), so will I be creating VLAN 10 on all 6 ports, and then VLAN 11 on all 6 ports, and so on?

The design looks very similar to that of the other thread of mine you commented on at https://social.technet.microsoft.com/Forums/en-US/7c73bd5b-f7b6-4b11-902b-17256b83e34a/struggling-to-max-out-the-network-adapters?forum=winserverPN

I essentially want to keep the switch part of that diagram, but the networking from the servers will be converged. My current design maps one VLAN per physical switch port; I don't know if multiple VLANs can be assigned to each physical port, or even whether this is what I actually need to do.

thanks

Steve

April 15th, 2015 6:34am

so will i be creating VLAN 10 on all 6 ports, and then VLAN 11 on all 6 ports and so on?

Right. Whatever number of Ethernet cards you use, the physical switch ports of a converged network must allow multiple VLANs. So make sure these ports are set to trunk (or general) mode. You can't use one VLAN per physical switch port with a converged topology.
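
For example, on HP ProCurve/ProVision switches there is no per-port "trunk mode" as such; you make a port carry multiple VLANs by tagging it into each one (the port numbers and VLAN IDs here are assumptions to adapt):

```
vlan 10
   tagged 1/3-1/5,2/3-2/5
vlan 11
   tagged 1/3-1/5,2/3-2/5
```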

April 15th, 2015 8:38am

Ahh, that's great. Is this what's referred to as dynamic VLANs, 802.1Q?

thanks

Steve

April 15th, 2015 9:33am

"so will i be creating VLAN 10 on all 6 ports, and then VLAN 11 on all 6 ports and so on?"

 Yes!

The *very* important reason you have ALL VLANs on ALL ports is so that any one link failure doesn't cause a break in communications on any VLAN.
In any case, if your switchports are configured differently, you can't do link-aggregation on them.

Forget dynamic VLANs; you never want anything dynamic happening, especially at the data centre end, where everything needs to be up and running according to design.

802.1Q = VLAN tagging: it describes a Layer 2 frame format that includes the extra bits necessary to carry a VLAN ID.
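
Purely as an illustration of what those extra bits are: the tag is 4 bytes inserted into the Ethernet frame, a TPID of 0x8100 followed by a 16-bit TCI whose low 12 bits carry the VLAN ID. A quick PowerShell sketch of composing the TCI:

```powershell
# Illustrative only: compose the 16-bit 802.1Q TCI for VLAN 11
$pcp = 0    # 3-bit Priority Code Point
$dei = 0    # 1-bit Drop Eligible Indicator
$vid = 11   # 12-bit VLAN ID
$tci = ($pcp -shl 13) -bor ($dei -shl 12) -bor $vid
'TCI = 0x{0:X4}, carried VLAN = {1}' -f $tci, ($tci -band 0x0FFF)
```

For VLAN 11 with priority 0 this prints TCI = 0x000B, carried VLAN = 11.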

April 16th, 2015 12:38am

Ha, thanks, that was a missing piece of the puzzle about the VLANs on all ports; I suspected so but wasn't 100% sure. When you say "In any case, if your switchports are configured differently, you can't do link-aggregation on them", what do you mean by "differently"? Are you referring to LACP between the individual ports that I assign to VLANs, or to the LACP trunk I have between the two fibres at the back end of the switches? As shown in my diagram on the other thread, I have a trunk between the two 10 Gbps fibres that link my two buildings together. I don't have LACP configured on any individual ports, and I don't really intend to at this stage because I don't see the benefit (the switches are used for cross-site redundancy and performance).

thanks

Steve

April 16th, 2015 3:33am

Oh yes, one more thing: should my ports be tagged or untagged for this scenario? HP configured the last set as untagged, but those aren't used for a converged scenario.

Steve

April 16th, 2015 4:36am

Generally, all your operational VLANs should be tagged on each side of every link they have to cross.

I see the Trk1 on your cross-site link.
I was referring to the individual links that are coming out of the Hyper-V nodes. These are the links that should probably be changed so they are all configured identically (all 6 carry all 6 VLANs, all tagged). That way, when one link goes down, you don't lose access to a VLAN.

April 16th, 2015 9:39pm

Ahh, I see. What I was going to do (as per the first link in this thread) was create a team out of all 6 NICs, with 3 going into one switch and 3 into the other. Then I was going to create as many vNICs as I need for the cluster, as per the section "Example of converged networking: routing traffic through one Hyper-V virtual switch" in my second link, modifying that script as appropriate.

My vNICs are: heartbeat, iSCSI, live migration, storage sync (I use a virtual SAN that requires this), and backup traffic. Each of these channels uses a different VLAN. I will configure the switch to say that port 1 belongs to all five of these VLANs, and port 2 also, and so on. What I am hoping for is that if any single cable gets disconnected, I will be down to 5 physical cables, but because they're in a team everything should stay up with reduced bandwidth; if an entire switch fails, I lose half of my bandwidth but the team should stay up - right?

does this sound correct?

thanks

Steve

April 17th, 2015 10:40am

Yes, that sounds like the approach to take.
April 20th, 2015 2:10am

Hi

This seemed to work fine, thanks. I configured it at the weekend and now my vNICs with the VLANs are all talking to each other. I am just experimenting with the different load balancing and teaming options to get the best performance. My switches are HP 2920s, and I should be able to use the LACP option in Server 2012 R2, but it doesn't seem to work. My switches have LACP configured on ports 1/A1 and 2/A1 (these are the fibres which link the two buildings; the trunk is called trk1, and each of my new VLANs is part of this trunk). Am I right in saying that ports 1/3-1/5 and 2/3-2/5 (the ports my cables connect to) also need to be configured in their own trunk, trk2, and then those VLANs added to that trk2?

On a separate issue, I'd like to ask you this, Olwen, because you seem quite knowledgeable about the switching stuff. We have a set of switches in the room next door which is an absolute mess, but I can only get one cable to connect to a D-Link switch in our server room; connecting anything else causes a network flood and devices can't be contacted, and the impact is immediate after the extra cables are connected. I have drawn probably the worst diagram man can ever draw in Paint, but I hope it illustrates the point.

The red box at the bottom is where our 20 Mbps fibre comes into the building (in yellow). The green link connects to an HP switch (I forget the model number; all the blue switches are 24-port). Each switch is connected via the uplinks (not stacked with modules). We have a network cable that connects from switch 3 to the top switch in red, an 8-port, and there is one cable which connects to our server room D-Link switch. That entire set of switches is therefore going through a single 1 Gbps cable; the D-Link then connects to servers, NAS, and IP camera recording equipment. The moment I try to connect any of those other green cables with the red X, it causes network flooding.

Previously, before I virtualised the rack servers, each of those green cables connected to a NIC on a server and didn't cause a problem. All the other blue switch ports connect to patch panels going off through the building; I haven't drawn those lines in. We have connected nearly all 96 ports of those 4 switches, so going through 1 x 1 Gbps to reach the infrastructure is not great.

I am trying to increase the bandwidth that gets into our D-Link switch. Is it just a case of connecting those green cables from the top red switch into some of the other HP switches below it, or is it more complicated than that? I really apologise if the diagram is so bad it can't be understood.

All ports on all switches are 1 Gbps in speed, including the uplinks.

I would appreciate any advice on this; I know it's a bit off topic.

thanks

Steve

April 21st, 2015 2:49pm

3 things spring to mind:

 - the network flood would indicate the D-Link does not have Spanning Tree enabled (even if it supports it).
However, fixing this only solves the flood issue, without giving you any extra bandwidth.

 

 - multiple links up and running would require that the D-Link switch supports 802.3ad link aggregation. Check; if it supports it, enable it, then connect.

 

 - Your design could do with a revamp. Too much daisy-chaining. The following are the sorts of things you should be upgrading:

1. The D-Link switch should be replaced by a 10Gb-capable switch, or, even better, by a stack of 2x 10Gb-capable switches to provide some redundancy for your critical server infrastructure.

2. The link between the two cabinets should be upgraded to 2x 10Gb links.

3. The blue switches in cabinet #1 are access switches. Ideally these should be stackable switches, to eliminate the daisy-chaining, and *each* switch should have its own uplink to the top-of-rack switch, ideally a 10Gb uplink.

4. The cabinet #1 top-of-rack switch should be a 10Gb-capable switch, and ideally a stacked pair to provide redundancy.

I don't know what kind of switches you use, but you can get very good value 10Gb switches from HP these days, and you can get 10Gb SR optics from somewhere like SmartOptics for about $120 each.

Unfortunately, a lot of organisations look at the cost of implementing 10Gb by looking at the kind of prices Cisco still charges for 10Gb optics, which is prohibitively expensive. Even HP's own branded optics are 5x more expensive than the perfectly good alternatives.

April 26th, 2015 8:50pm

After reading some more, I suspected I would need LACP configured. The D-Link is new, but cheap rubbish really (it just crashed on us about an hour ago).

Would it be possible to connect an uplink directly into one of the ports on the D-Link from each of the HP switches, or would this again cause flooding?

The HP switches are likely to support LACP, but the D-Link is unlikely to. It is a managed switch, but we have to buy an additional adapter to go into the SFP ports to manage it, so we have left it as unmanaged for now.

Stackable: I agree, but this is the way it is at the moment; I'm not sure if they will pay for stacking modules or not.

If we can't link each switch individually to a port on the D-Link, then it may well be time to look at some new switches. $120 seems fairly cheap for 10 Gbps; food for thought.

Any ideas on the trk2 part of my last question? I shouldn't need to create a second trunk for LACP on those other ports, should I? Hooking into the first trunk should be enough. I'm just trying to find the best-performing teaming option in Server 2012 and would have liked to test that scenario.

cheers mate

Steve

April 27th, 2015 7:21am

As I understand it, your Trk1 is the connection between two switches in different buildings.

The trunk between a server and a switch is a different trunk (different path between different devices) so that would be Trk2.

A trunk between the switch and a *second* server has to be a different trunk again.

A trunk is a collection of links that all join the same two devices.
The paths can be a bit different, but not so different that the latencies are noticeably different.
The interfaces used have to be all the same type - all 1Gb or all 10Gb, no mixture of the two.

(A stacked switch becomes a single virtual switch and so counts as a single device for the purpose of trunking. You can see the benefit of stacking switches like this. Don't be fooled by what some switches call "stacking" - it is only real stacking if it turns the multiple switches into a single virtual switch).
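
Assuming your 2920s are stacked (the 1/A1 and 2/A1 port names suggest so), an HP ProVision-style sketch of that second trunk might look like the following. The port numbers are the ones from your earlier post, so treat this as an outline to adapt; note that an LACP trunk on the switch only works if the server team is also set to LACP mode, while a switch-independent team needs no trk at all, just the tagged ports:

```
trunk 1/3-1/5,2/3-2/5 trk2 lacp
vlan 10
   tagged trk1,trk2
vlan 11
   tagged trk1,trk2
```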

April 27th, 2015 9:36pm

1. All switch ports connecting devices that need to talk on a given VLAN will need to be tagged with that VLAN.

2. VLANs stop devices on one VLAN from talking to devices on another.

If you only use different subnets, devices on one subnet can still be reached from the next. This is a drawback, as it compromises the isolation of your network.

Example.

I have a customer wifi connection which runs on its own VLAN; this stops customers being able to access anything on our corporate network.

April 27th, 2015 9:52pm

Ahh, I see, OK, thanks. So I will need to create a second trunk that includes 6 ports (3 ports in each physical switch), and the same again in the other building; I assume I then include each of my VLANs in this new trunk.

I'm also assuming I need to keep the VLANs included in trunk 1 so they can still talk to the switches in the other building, or am I wrong on this?

cheers.

Steve

April 28th, 2015 4:10am

Yes, all VLANs go on the trunks, and yes, all 6 VLANs need to go on your Trk1.

Think of the VLANs as bits of coloured string, as per your diagram in the other thread.

All Servers on a Green IP address need an uninterrupted Green bit of string connected between them.

Think of your physical links as hollow pipes that you put coloured bits of string through to achieve connectivity.

April 28th, 2015 9:25pm

(Oops. I should have said Blue or Yellow, you use Green elsewhere...).

...but you've used Blue and Yellow twice, for two different subnets. What you really have there are 6 different colours coming out of your Hyper-V nodes.

I use different colours for VLANs on a diagram if I have 2 or 3 VLANs. Any more than that isn't practical, and instead I label each physical link with the list of VLANs running through it.

 

This is kind of how I would do a VLAN diagram:


April 28th, 2015 9:26pm

This topic is archived. No further replies will be accepted.
