Transport Nodes

A transport node within NSX is a physical or virtual server that is responsible for "transporting" network traffic.

Figure 13 illustrates the different types of transport nodes. The transport nodes are divided into two categories:

  1. Host Transport Nodes
    • This can be either a Bare Metal Server, ESXi Host, or KVM host.
    • The ESXi and KVM hosts are used as Hypervisors to host Virtual Machines.
  2. Edge Transport Nodes
    • This can be either a Virtual Edge or a Bare Metal Edge.

Figure 13: Untitled.png
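
If you want to check which Host and Edge Transport Nodes are registered in your own environment, the NSX Manager REST API can list them. Below is a minimal Python sketch; the Manager URL and credentials are placeholders, and the /api/v1/transport-nodes endpoint plus the HostNode/EdgeNode resource types in node_deployment_info are assumptions you should verify against the API guide of your NSX version.

# Minimal sketch: list registered transport nodes and show their node type.
# Assumptions: NSX-T Manager endpoint /api/v1/transport-nodes, basic auth, and
# "HostNode"/"EdgeNode" resource types in node_deployment_info -- verify per version.
import requests

NSX_MANAGER = "https://nsx-manager.lab.local"      # placeholder
AUTH = ("admin", "VMware1!VMware1!")               # placeholder credentials

resp = requests.get(f"{NSX_MANAGER}/api/v1/transport-nodes",
                    auth=AUTH, verify=False)       # lab only: no cert validation
resp.raise_for_status()

for tn in resp.json().get("results", []):
    node_type = tn.get("node_deployment_info", {}).get("resource_type", "Unknown")
    print(f"{tn['display_name']:30} {node_type}")  # e.g. HostNode / EdgeNode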

Bare Metal Server

A Bare Metal Transport Node Server is a server that is not hosting any Virtual Machines and is not a Hypervisor.

This is a server with a regular operating system (for example, Windows Server). When this server becomes a transport node, you install the NSX bits on it so that it can be enabled for the supported NSX services (for example, to be part of an overlay network provided by NSX).

ESXi Server

The ESXi server is VMware’s Hypervisor that is primarily used to host VMware Virtual Machines. When this server becomes a transport node, you install the NSX bits on it so that it can be enabled for the supported NSX services (for example, to be part of an overlay network provided by NSX).

The main difference between the Bare Metal Server and the ESXi server is that the supported NSX services (for example overlay networks) are now provided to the Virtual Machines hosted by the ESXi server.

KVM Server

The KVM server is not in scope here, but it is comparable with the ESXi host in terms of functionality: it is also a Hypervisor, offering the capability to host KVM-based Virtual Machines.

Edge Transport Nodes

The Edge Transport Nodes (Virtual or Bare Metal) are used to provide network services. They also enable North/South network connectivity. The North/South network connectivity is processed by Virtual Gateways that are hosted inside the Edge Transport Nodes.

The (virtual) Tier-0 Gateway is responsible for all North/South network connectivity from the NSX overlay networks to the Physical Network and for offering network services.

The (virtual) Tier-1 Gateway is connected to a Tier-0 Gateway and is also responsible for offering network services. It is possible to have multiple Tier-1 Gateways connected to a single Tier-0 Gateway.

In the NSX Networking section, I will explain more about the Tier Gateways and Segments and the relation between these objects.

Transport Nodes (cluster) Design

For availability, recoverability, and configuration-consistency purposes, it is possible to configure the Transport Nodes in a cluster.

The ESXi Host Transport Nodes are typically already configured in a cluster on the vSphere level. This means that a collection of ESXi servers is already configured on the vCenter Server as a vSphere Cluster. This cluster is inherited by NSX when you want to enable NSX on all the ESXi servers in the cluster. A Host Transport Node Profile is used to configure the ESXi hosts that are part of a vSphere Cluster.

To form a cluster of Edge Transport Nodes, an Edge Cluster needs to be created inside NSX. It is NOT possible to mix Virtual and Bare Metal Edges within the same Edge Cluster.

Figure 14: Untitled%201.png
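
For reference, the Edge Cluster itself can also be created through the NSX API. The sketch below is a hedged example: it assumes the NSX-T Manager endpoint /api/v1/edge-clusters and member objects that reference Edge Transport Node IDs, and all names/IDs are placeholders, so verify the payload against your NSX version.

# Minimal sketch: create an NSX Edge Cluster from two existing Edge Transport Nodes.
# Remember (per the text above): do not mix Virtual and Bare Metal Edges in one cluster.
import requests

NSX_MANAGER = "https://nsx-manager.lab.local"      # placeholder
AUTH = ("admin", "VMware1!VMware1!")               # placeholder credentials

edge_cluster = {
    "display_name": "edge-cluster-01",
    "members": [
        {"transport_node_id": "EDGE-TN-UUID-1"},   # placeholder Edge Transport Node IDs
        {"transport_node_id": "EDGE-TN-UUID-2"},
    ],
}

resp = requests.post(f"{NSX_MANAGER}/api/v1/edge-clusters",
                     json=edge_cluster, auth=AUTH, verify=False)  # lab only
resp.raise_for_status()
print("Created Edge Cluster:", resp.json()["id"])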

ESXi Host Transport Nodes

From a Physical (physical NICs) and Virtual Network (VDS) level, it is possible to have different designs that result in different configurations. Each type of design/configuration has its pros and cons. In this section, I will discuss a few of these configurations and why one can be better than the others.

Host Transport Node VSS/VDS and pNIC Design

Figure 15 illustrates two ESXi hosts that have a 2-pNIC (two physical Network Interface Cards) configuration. Because the ESXi hosts are part of a vSphere Cluster, they have the option to make use of a Virtual Distributed Switch (VDS).

The VDS is a shared vCenter Server (or vSphere) object that all ESXi Servers in the cluster can make use of.

Let's focus on one ESXi host and its two pNICs for now, as the second host works and operates the same way.

ESXi host #1:

  • The ESXi host makes use of one (shared) VDS.
  • The ESXi host has two physical Network Interfaces that are mapped to the (shared) VDS.
  • The VDS has three Distributed Virtual Port Groups (DVPGs) with different VLANs and a different VMkernel (vmk) interface mapped to each DVPG (and VLAN).
    • The DVPGs are shared, so ESXi host #2 will also have access to them for its own (local) vmk interfaces.
    • VLAN 10 = ESXi Management Network Traffic.
    • VLAN 20 = vMotion Network Traffic.
    • VLAN 30 = VSAN Network Traffic.
  • The ESXi host has the NSX Tunnel Endpoint Interface (TEP) configured because the host is configured as an NSX Transport Node.
    • The TEP interface is an interface (just like the vmk interfaces) local to the ESXi host.
    • The number of TEP interfaces used depends on the Uplink Profile configuration.
    • In this case, only one TEP interface is configured.
    • The TEP interface will use a dedicated VLAN (not shown in the figure).

Pros:

  • You can argue that you have fewer objects to manage (only one VDS) and therefore management is simpler.

Cons:

  • In a configuration like this, all network traffic (Management, vMotion, VSAN, NSX Uplink, NSX Overlay) will use the same (two) interfaces.
    • In this case, you need additional configuration to have some form of traffic prioritization, as VSAN can be heavy on traffic consumption and you do not want to lose management of your host when VSAN is pumping a huge amount of VSAN traffic across the (shared) interfaces.

Figure 15: Untitled%202.png

Figure 16 illustrates two ESXi hosts that have a 4-pNIC (four physical Network Interface Cards) configuration.

The only difference with Figure 15 is the number of pNICs. This means that the network traffic can now be spread across four pNICs instead of two, offering a better way to share the network traffic load and send/process more traffic (throughput) than would be possible with two interfaces.

Pros:

  • You can argue that you have fewer objects to manage (only one VDS) and therefore management is simpler.
  • You have additional interfaces available to process more traffic.

Cons:

  • In a configuration like this, all network traffic (Management, vMotion, VSAN, NSX Uplink, NSX Overlay) will use the same (four) interfaces.
    • In this case, you need additional configuration to have some form of traffic prioritization, as VSAN can be heavy on traffic consumption and you do not want to lose management of your host when VSAN is pumping a huge amount of VSAN traffic across the (shared) interfaces.
  • You have more interfaces, which introduces additional cost.

Figure 16: Untitled%203.png

Figure 17 has the same number of pNICs as Figure 16. The difference now is that I use two dedicated interfaces that I map to a dedicated (local) vSphere Standard Switch (VSS).

Let's focus on one ESXi host and its four pNICs for now, as the second host works and operates the same way.

ESXi host #1:

  • The ESXi host makes use of one (shared) VDS.
  • The ESXi host makes use of one (local) VSS.
  • The ESXi host has two physical Network Interfaces that are mapped to the (shared) VDS.
  • The ESXi host has two physical Network Interfaces that are mapped to the (local) VSS.
  • The VDS has two Distributed Virtual Port Groups (DVPGs) with different VLANs and a different VMkernel (vmk) interface mapped to each DVPG (and VLAN).
    • The DVPGs are shared, so ESXi host #2 will also have access to them for its own (local) vmk interfaces.
    • VLAN 20 = vMotion Network Traffic.
    • VLAN 30 = VSAN Network Traffic.
  • The VSS has one Virtual Port Group (PG) with a different VLAN and a different VMkernel (vmk) interface mapped to the PG (and VLAN).
    • VLAN 10 = ESXi Management Network Traffic.
  • The ESXi host has the NSX Tunnel Endpoint Interface (TEP) configured because the host is configured as an NSX Transport Node.
    • The TEP interface is an interface (just like the vmk interfaces) local to the ESXi host.
    • The number of TEP interfaces used depends on the Uplink Profile configuration.
    • In this case, only one TEP interface is configured.
    • The TEP interface will use a dedicated VLAN (not shown in the figure).


Pros:

  • You have additional interfaces available to process more traffic.
  • The ESXi Management traffic is using two dedicated pNICs.

Cons:

  • You can argue that you have more objects to manage (one VSS and one VDS) and therefore management is harder.
  • In a configuration like this, all network traffic (vMotion, VSAN, NSX Uplink, NSX Overlay) will use the same (two) interfaces.
    • In this case, you need additional configuration to have some form of traffic prioritization, as VSAN can be heavy on traffic consumption and you do not want to lose the vMotion capability of your host when VSAN is pumping a huge amount of VSAN traffic across the (shared) interfaces.
  • You have more interfaces, which introduces additional cost.

Figure 17: Untitled%204.png

In Figure 18, I am adding two additional pNICs per ESXi host (bringing the total number of interfaces to six). I am also adding an additional VDS. With this, I am dedicating different pNIC pairs to different Virtual Switches.

Let's focus on one ESXi host and its six pNICs for now, as the second host works and operates the same way.

ESXi host #1:

  • The ESXi host makes use of two (shared) VDSs.
    • One VDS is dedicated to NSX uplink and overlay network traffic.
    • One VDS is dedicated to vMotion and VSAN network traffic.
  • The ESXi host makes use of one (local) VSS.
    • This VSS is dedicated to ESXi management network traffic.
  • The ESXi host has two physical Network Interfaces that are mapped to the (shared) VDS for NSX.
  • The ESXi host has two physical Network Interfaces that are mapped to the (shared) VDS for vMotion and VSAN.
  • The ESXi host has two physical Network Interfaces that are mapped to the (local) VSS for ESXi Management.
  • The NSX VDS has no Distributed Virtual Port Groups (DVPGs) but has the NSX Tunnel Endpoint Interface (TEP) configured because the host is configured as an NSX Transport Node.
    • The TEP interface is an interface (just like the vmk interfaces) local to the ESXi host.
    • The number of TEP interfaces used depends on the Uplink Profile configuration.
    • In this case, only one TEP interface is configured.
    • The TEP interface will use a dedicated VLAN (not shown in the figure).
  • The other VDS has two Distributed Virtual Port Groups (DVPGs) with different VLANs and a different VMkernel (vmk) interface mapped to each DVPG (and VLAN).
    • The DVPGs are shared, so ESXi host #2 will also have access to them for its own (local) vmk interfaces.
    • VLAN 20 = vMotion Network Traffic.
    • VLAN 30 = VSAN Network Traffic.
  • The VSS has one Virtual Port Group (PG) with a different VLAN and a different VMkernel (vmk) interface mapped to the PG (and VLAN).
    • VLAN 10 = ESXi Management Network Traffic.

Pros:

  • You have additional interfaces available to process more traffic.
  • The ESXi Management traffic is using two dedicated pNICs.
  • The NSX (NSX Uplink, NSX Overlay) traffic is using two dedicated pNICs.
  • In a configuration like this, the network traffic will be spread across different dedicated pNICs and Virtual Switches.
    • In this case, you still need additional configuration to have some form of traffic prioritization, as VSAN can be heavy on traffic consumption and you do not want to lose the vMotion capability of your host when VSAN is pumping a huge amount of VSAN traffic across the (shared) interfaces.

Cons:

  • You can argue that you have more objects to manage (one VSS and two VDSs) and therefore management is harder.
  • You have more interfaces, which introduces additional cost.

Figure 18: Untitled%205.png

⚠️ As you can see, you can have different configurations depending on the number of pNICs and Virtual Switches (VSSs and VDSs) and the way you want your network traffic to be distributed. I have only highlighted a few, just to give you an idea of what is possible and why.

Edge Transport Nodes

An Edge Transport Node can either be Virtual or Bare Metal.

In this section, I will discuss the most common ways of designing and implementing both form factors.

Virtual Edge Transport Nodes

A Virtual Edge is typically hosted as a Virtual Machine (VM) on ESXi hosts. Like every other Virtual Machine, the Virtual Edge also makes use of the Virtual Port Groups offered by the Virtual Switch (VSS or VDS).

An Edge VM typically has four Virtual Network Interface Cards (vNICs), and these vNICs are mapped to Port Groups to allow the correct transport of traffic coming from the Edge VM.

An Edge VM is responsible for North/South connectivity, overlay network routing, and various other network services supported by NSX. All these different traffic types use different VLANs and different pNIC and vNIC interfaces. How this works exactly depends on the way you design it.

The required VLANs for an Edge Transport Node to operate are:

  • Management VLAN (10)
  • Host TEP VLAN (40)
  • Edge TEP VLAN (50)
  • BGP Uplink VLAN #1 (100)
  • BGP Uplink VLAN #2 (200)

In Figure 19, I am hosting two Edge VMs on a single ESXi host. In this case, this ESXi host is also an NSX (Host) Transport Node. This ESXi host has two pNICs.

⚠️ An Edge VM can also be hosted on an ESXi host that is not an NSX Transport node.

This means that all the traffic that is required for the Edge VMs to fully operate needs to be shared/split/divided across these two Physical Network Interface Cards. The way this is done in Figure 19 is that I have created three Port Groups:

  • Management Port group (10)
  • Trunk Port Group A (all VLANs)
  • Trunk Port Group B (all VLANs)
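
To give an idea of what the trunk Port Groups just listed look like in vSphere terms, below is a minimal pyVmomi sketch that creates one of them on the (shared) VDS. The vCenter name, credentials, VDS name, and port group name are placeholder assumptions, and trunking the full 0-4094 VLAN range is just one possible choice; in practice you would limit the trunk to the VLANs the Edge VM actually needs (Edge TEP 50, BGP uplinks 100/200).

# Minimal pyVmomi sketch: create a VLAN trunk Distributed Port Group ("PG-Trunk-A")
# on an existing VDS so the Edge VM vNICs can carry multiple VLANs.
# Host/credential/switch names are placeholders for this example.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()                       # lab only
si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ctx)
content = si.RetrieveContent()

# Find the (shared) VDS by name -- "VDS-01" is an assumed name.
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.DistributedVirtualSwitch], True)
dvs = next(d for d in view.view if d.name == "VDS-01")

spec = vim.dvs.DistributedVirtualPortgroup.ConfigSpec()
spec.name = "PG-Trunk-A"
spec.type = "earlyBinding"
spec.numPorts = 8

port_cfg = vim.dvs.VmwareDistributedVirtualSwitch.VmwarePortConfigPolicy()
port_cfg.vlan = vim.dvs.VmwareDistributedVirtualSwitch.TrunkVlanSpec(
    inherited=False,
    vlanId=[vim.NumericRange(start=0, end=4094)])            # trunk all VLANs (example)
spec.defaultPortConfig = port_cfg

dvs.AddDVPortgroup_Task([spec])                              # returns a vCenter task
Disconnect(si)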

Table 3 provides an overview of how the traffic flows.

Table 3: Virtual Edge Traffic Mapping on a 2-pNIC Host (Figure 19)

Property             | Port Group             | Host pNIC                           | Edge vNIC
---------------------|------------------------|-------------------------------------|------------------
Management Traffic   | PG VLAN 10 Management  | pNIC #1 (active), pNIC #2 (standby) | vNIC #1
Host TEP Traffic     | N/A                    | pNIC #1 (active), pNIC #2 (standby) | N/A
Edge TEP Traffic     | PG Trunk A, PG Trunk B | pNIC #1 (active), pNIC #2 (active)  | vNIC #2, vNIC #3
BGP Uplink 1 Traffic | PG Trunk A             | pNIC #1 (active), pNIC #2 (standby) | vNIC #2
BGP Uplink 2 Traffic | PG Trunk B             | pNIC #1 (standby), pNIC #2 (active) | vNIC #3

Figure 19: Untitled%206.png

You can see here how the traffic is distributed, and with only two pNICs to work with on the ESXi server level, this is the most optimal way of doing it. So, in this case (Figure 19), both Edge VMs share the same Port Groups and the same pNICs.

In Figure 20, I have added two additional pNICs to the ESXi host, and I have also added two additional Port Groups. With this design, I can now dedicate two physical interfaces (pNIC #1 and pNIC #2) to the NSX Uplink and Overlay network traffic coming from and going to Edge VM #1, and two other physical interfaces (pNIC #3 and pNIC #4) to the NSX Uplink and Overlay network traffic coming from and going to Edge VM #2.

Figure 20: Untitled%207.png

Bare Metal Edge Transport Nodes

A Bare Metal Edge is typically a physical server.

A Bare Metal Edge can have as many pNICs as you want, and these pNICs will process the traffic based on your configuration.

Just like an Edge VM, a Bare Metal Edge is also responsible for North/South connectivity, overlay network routing, and various other network services supported by NSX. All these different traffic types also use different VLANs and, depending on the number of pNICs, different pNICs as well. How this works exactly depends on the way you design it.

The required VLANs for an Edge Transport Node to operate are:

  • Management VLAN (10)
  • Edge TEP VLAN (50)
  • BGP Uplink VLAN #1 (100)
  • BGP Uplink VLAN #2 (200)

In Figure 21, I am hosting two Bare Metal Edges, and each of these Edges has two physical Network Interface Cards.

This means that all the traffic that is required for the Bare Metal Edges to fully operate needs to be shared/split/divided across these two Physical Network Interface Cards.

Table 4 provides an overview of how the traffic flows.

Table 4: 2-pNIC Bare Metal Edge Traffic Mapping (Figure 21)

Property             | Host pNIC
---------------------|-------------------------------------
Management Traffic   | pNIC #1 (active), pNIC #2 (standby)
Edge TEP Traffic     | pNIC #1 (active), pNIC #2 (active)
BGP Uplink 1 Traffic | pNIC #1
BGP Uplink 2 Traffic | pNIC #2

⚠️ Network traffic can either be sent using both interfaces (active/active) or using only one active interface with the other one as standby (active/standby). This depends on how you configure the Uplink Profiles, which in turn depends on your (customer) requirements.

Figure 21: Untitled%208.png

Because I only have two physical interfaces on the Bare Metal Edge to process network traffic, all NSX traffic (management, overlay, and BGP uplink traffic) will be sent across these interfaces.

In Figure 22, I have added two additional pNICs to the Bare Metal Edge. The configuration is still the same as for the Bare Metal Edge in Figure 21, but I now have the flexibility to use dedicated Management interfaces, which is shown in Figure 23.

Figure 22: Untitled%209.png

Figure 23: Untitled%2010.png

Table 5 provides an overview of how the traffic flows.

Table 5: 4-pNIC Bare Metal Edge Traffic Mapping (Figure 23)

Property             | Host pNIC
---------------------|------------------------------------------------
Management Traffic   | pNIC #1 (active), pNIC #2 (standby)
Edge TEP Traffic     | pNIC #3 (active), pNIC #4 (active)
BGP Uplink 1 Traffic | pNIC #3 for Edge BM #1, pNIC #4 for Edge BM #2
BGP Uplink 2 Traffic | pNIC #4 for Edge BM #1, pNIC #3 for Edge BM #2

Uplink Profiles

The number of physical interfaces determines how traffic is (or can be) processed. You can configure (or influence) this using Uplink Profiles.

⚠️ An Uplink Profile can be applied to (ESXi) Host Transport Nodes and to Edge (Virtual and Bare Metal) Transport Nodes. It is best practice to create dedicated/separate Uplink Profiles for each different type of Transport Node you are planning to use.

In Figure 24, Figure 25, and Figure 26, you see that I have created and applied an Uplink Profile to an ESXi host. To keep things simple, the following examples use ESXi hosts with two pNICs.


Figure 24 illustrates an uplink profile that is configured with "Failover Order". Failover Order means that only one pNIC is used as the active pNIC and the remaining pNICs are on standby (not used).

The blue VMs (Web, App, and DB) that are hosted on the ESXi host will all send their traffic to Uplink-1, which is attached to pNIC #1. pNIC #2 is unused until pNIC #1 becomes unavailable for whatever reason.

Figure 24: Untitled%2011.png
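
For reference, an Uplink Profile with this Failover Order teaming policy can also be created through the NSX API. The sketch below is a hedged example assuming the NSX-T Manager endpoint /api/v1/host-switch-profiles and the UplinkHostSwitchProfile payload with a FAILOVER_ORDER teaming policy; the names, MTU, and transport VLAN values are placeholders, so verify the fields against your NSX version.

# Minimal sketch: Uplink Profile with "Failover Order" teaming
# (uplink-1 active, uplink-2 standby). Field names follow the assumed
# UplinkHostSwitchProfile schema of the Manager API -- verify per version.
import requests

NSX_MANAGER = "https://nsx-manager.lab.local"      # placeholder
AUTH = ("admin", "VMware1!VMware1!")               # placeholder credentials

uplink_profile = {
    "resource_type": "UplinkHostSwitchProfile",
    "display_name": "uplink-profile-failover-order",
    "mtu": 1700,                                   # example overlay MTU
    "transport_vlan": 40,                          # example Host TEP VLAN
    "teaming": {
        "policy": "FAILOVER_ORDER",
        "active_list": [{"uplink_name": "uplink-1", "uplink_type": "PNIC"}],
        "standby_list": [{"uplink_name": "uplink-2", "uplink_type": "PNIC"}],
    },
}

resp = requests.post(f"{NSX_MANAGER}/api/v1/host-switch-profiles",
                     json=uplink_profile, auth=AUTH, verify=False)  # lab only
resp.raise_for_status()
print("Created Uplink Profile:", resp.json()["id"])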

Figure 25 illustrates an uplink profile configured with "Load Balance Source ID". Load Balance Source ID means that both pNICs are used as active pNICs.

The blue VMs Web and DB that are hosted on the ESXi host will send their traffic to Uplink-1, which is attached to pNIC #1. The other blue VM, App, will send its traffic to Uplink-2, which is attached to pNIC #2.

When one pNIC is unavailable for whatever reason, the other one will be used.


⚠️ The way NSX determines which Uplink/pNIC to select is based on the Virtual Port ID on the VDS to which the VM is attached.

Figure 25: Untitled%2012.png
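
The equivalent Uplink Profile for this teaming mode only differs in the teaming block. The fragment below is a hedged sketch assuming the same UplinkHostSwitchProfile payload as in the previous example, with LOADBALANCE_SRCID as the assumed policy value.

# Teaming fragment for "Load Balance Source ID": both uplinks active, no standby.
# "LOADBALANCE_SRCID" is the assumed enum value in the Manager API -- verify per version.
teaming_loadbalance_srcid = {
    "policy": "LOADBALANCE_SRCID",
    "active_list": [
        {"uplink_name": "uplink-1", "uplink_type": "PNIC"},
        {"uplink_name": "uplink-2", "uplink_type": "PNIC"},
    ],
}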

Figure 26 illustrates an uplink profile configured with a "LAG". A LAG (Link Aggregation Group) means that both pNICs are used as active pNICs, just like with Load Balance Source ID. The difference is that with this profile setting, multiple physical interfaces are combined to logically form a single interface. This is also called a Port Channel (or EtherChannel). The traffic distribution between the ESXi host and the Physical Network (Switch) is now done based on the selected LAG algorithm.

Figure 26 uses Source and Destination IP Addresses as the algorithm, which means that the traffic path selected between the physical interfaces (between the ESXi server and the switch) will be based on the source and destination IP addresses.

Figure 26: Untitled%2013.png
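
A LAG-based Uplink Profile adds a lags section and then uses the LAG as the active uplink in the teaming policy. The fragment below is an assumption-heavy sketch: the lags field and the ACTIVE/SRCDESTIPVPORT values (the latter being the closest match I know of to a source/destination IP based hash) follow the Manager API UplinkHostSwitchProfile schema as I understand it, so double-check the exact enum names for your NSX version, and make sure the physical switch side is configured for LACP accordingly.

# Fragment: Uplink Profile using a 2-member LACP LAG as the single logical uplink.
# Enum values (ACTIVE, SRCDESTIPVPORT) are assumptions -- verify in the NSX API guide.
uplink_profile_lag = {
    "resource_type": "UplinkHostSwitchProfile",
    "display_name": "uplink-profile-lag",
    "transport_vlan": 40,                           # example Host TEP VLAN
    "lags": [{
        "name": "lag-1",
        "mode": "ACTIVE",                           # LACP active
        "number_of_uplinks": 2,
        "load_balance_algorithm": "SRCDESTIPVPORT", # src/dst IP (and port) based hashing
    }],
    "teaming": {
        "policy": "FAILOVER_ORDER",
        "active_list": [{"uplink_name": "lag-1", "uplink_type": "LAG"}],
    },
}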

NSX Host Transport Profiles, NSX Uplink Profiles, and NSX IP Pools

An IP Pool is used to specify the IP addresses that Host Transport Nodes should use for their TEP IP addresses.

The Uplink Profile (as I have just described) determines the way network traffic will flow, depending on the number of interfaces.
The Host Transport Profile is used to apply settings to multiple Host Transport Nodes that are part of a vSphere Cluster.


The Host Transport Profile inherits, among other settings, the settings from the Uplink Profile; the Uplink Profile in turn inherits the settings from the IP Pool.

Figure 27: Untitled%2014.png

For this reason, when you want to configure your ESXi hosts for NSX (making them Host Transport Nodes) and you are using the Host Transport Profile for this, you first need to configure the IP Pool and the Uplink Profile as dependencies (Figure 27).
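
A hedged sketch of that first dependency, creating the TEP IP Pool, is shown below. It assumes the NSX Policy API endpoints under /policy/api/v1/infra/ip-pools with an IpAddressPoolStaticSubnet; the pool name, CIDR, allocation range, and gateway are placeholder values.

# Minimal sketch: create a static IP Pool for Host TEP addresses via the Policy API.
# Endpoints and the IpAddressPoolStaticSubnet resource type are assumptions based on
# the NSX-T Policy API -- verify the paths and fields for your NSX version.
import requests

NSX_MANAGER = "https://nsx-manager.lab.local"      # placeholder
AUTH = ("admin", "VMware1!VMware1!")               # placeholder credentials

# 1. The pool object itself.
requests.put(f"{NSX_MANAGER}/policy/api/v1/infra/ip-pools/host-tep-pool",
             json={"display_name": "host-tep-pool"},
             auth=AUTH, verify=False).raise_for_status()        # lab only

# 2. A static subnet with the allocation range used for the TEP interfaces.
subnet = {
    "resource_type": "IpAddressPoolStaticSubnet",
    "display_name": "host-tep-subnet",
    "cidr": "192.168.40.0/24",                                  # example Host TEP VLAN 40 subnet
    "allocation_ranges": [{"start": "192.168.40.11", "end": "192.168.40.50"}],
    "gateway_ip": "192.168.40.1",
}
requests.put(f"{NSX_MANAGER}/policy/api/v1/infra/ip-pools/host-tep-pool"
             "/ip-subnets/host-tep-subnet",
             json=subnet, auth=AUTH, verify=False).raise_for_status()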

NSX Uplink Profiles, NSX IP Pools, and Edge Transport Nodes

IP Pools and Uplink Profiles are also used for Edge Transport Nodes.

Edge Transport Nodes (Edge VMs and Bare Metal Edges) do not have Transport Node Profiles as you have seen for hosts; Uplink Profiles are applied directly to the Edges. You can see this in Figures 28 and 29.

Untitled%2015.png

Inside the Uplink Profile, you still need to specify an IP Pool, which you need to create up front.

Figure 28: Untitled%2016.png

Installing NSX on a standalone Host Transport Node

When you do not want to make use of the Host Transport Profile and only want to configure NSX on standalone hosts, you can also apply the Uplink Profiles directly to a single host (Figure 30) or to a subset of selected hosts. The principle is the same as for the Edge Transport Nodes.

Untitled%2017.png
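
Below is an assumption-heavy sketch of what configuring a single, standalone host as a Host Transport Node can look like through the NSX-T Manager API (POST /api/v1/transport-nodes). The host node ID, VDS name, uplink mapping, and profile/pool/transport-zone IDs are placeholders, and the exact host_switch_spec fields vary between NSX versions, so treat this as an outline rather than a definitive payload.

# Sketch: turn one already-registered ESXi host (fabric node) into a Host Transport Node.
# All IDs/names are placeholders; field names follow the assumed StandardHostSwitchSpec
# schema of the Manager API -- verify against the API guide of your NSX version.
import requests

NSX_MANAGER = "https://nsx-manager.lab.local"      # placeholder
AUTH = ("admin", "VMware1!VMware1!")               # placeholder credentials

transport_node = {
    "display_name": "esxi-01-tn",
    "node_id": "HOST-NODE-UUID",                   # existing (discovered) host node ID
    "host_switch_spec": {
        "resource_type": "StandardHostSwitchSpec",
        "host_switches": [{
            "host_switch_name": "VDS-01",          # the vCenter VDS used for NSX
            "host_switch_type": "VDS",
            "host_switch_mode": "STANDARD",
            "host_switch_profile_ids": [
                {"key": "UplinkHostSwitchProfile", "value": "UPLINK-PROFILE-UUID"}
            ],
            "ip_assignment_spec": {
                "resource_type": "StaticIpPoolSpec",
                "ip_pool_id": "HOST-TEP-POOL-UUID"
            },
            "transport_zone_endpoints": [{"transport_zone_id": "OVERLAY-TZ-UUID"}],
            "uplinks": [                           # map profile uplinks to VDS uplinks
                {"uplink_name": "uplink-1", "vds_uplink_name": "Uplink 1"},
                {"uplink_name": "uplink-2", "vds_uplink_name": "Uplink 2"},
            ],
        }],
    },
}

resp = requests.post(f"{NSX_MANAGER}/api/v1/transport-nodes",
                     json=transport_node, auth=AUTH, verify=False)  # lab only
resp.raise_for_status()
print("Host Transport Node created:", resp.json()["id"])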

Installing NSX on a (vSphere) cluster of Host Transport Nodes (using Host Transport Profiles)

The Host Transport Profile is used to apply settings to multiple Host Transport Nodes that are part of a vSphere Cluster. In Figure 31, you can see that the Host Transport Profile is used to apply these settings to a Host Transport Node Cluster (vSphere Cluster).

Figure 29: Untitled%2018.png
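
In API terms, this is typically a two-step operation: create a Transport Node Profile that carries the same host_switch_spec structure as in the standalone example above, and then attach it to the vSphere cluster (its compute collection in NSX). The sketch below is a hedged outline assuming the Manager API endpoints /api/v1/transport-node-profiles and /api/v1/transport-node-collections; all IDs are placeholders and should be verified against your NSX version.

# Sketch: apply NSX to a whole vSphere cluster by attaching a Transport Node Profile
# to the cluster's compute collection. Endpoints/fields are assumptions -- verify per version.
import requests

NSX_MANAGER = "https://nsx-manager.lab.local"      # placeholder
AUTH = ("admin", "VMware1!VMware1!")               # placeholder credentials

# 1. Transport Node Profile (same host_switch_spec structure as the standalone example).
host_switch_spec = {
    "resource_type": "StandardHostSwitchSpec",
    "host_switches": [{
        "host_switch_name": "VDS-01",
        "host_switch_type": "VDS",
        "host_switch_mode": "STANDARD",
        "host_switch_profile_ids": [{"key": "UplinkHostSwitchProfile",
                                     "value": "UPLINK-PROFILE-UUID"}],
        "ip_assignment_spec": {"resource_type": "StaticIpPoolSpec",
                               "ip_pool_id": "HOST-TEP-POOL-UUID"},
        "transport_zone_endpoints": [{"transport_zone_id": "OVERLAY-TZ-UUID"}],
    }],
}

tnp = {"display_name": "compute-cluster-tnp", "host_switch_spec": host_switch_spec}
tnp_resp = requests.post(f"{NSX_MANAGER}/api/v1/transport-node-profiles",
                         json=tnp, auth=AUTH, verify=False)      # lab only
tnp_resp.raise_for_status()

# 2. Attach the profile to the vSphere cluster (its compute collection in NSX).
tnc = {
    "display_name": "compute-cluster-01-attach",
    "compute_collection_id": "COMPUTE-COLLECTION-UUID",          # the vSphere cluster
    "transport_node_profile_id": tnp_resp.json()["id"],
}
requests.post(f"{NSX_MANAGER}/api/v1/transport-node-collections",
              json=tnc, auth=AUTH, verify=False).raise_for_status()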

Continue with the next section >> NSX License and Compute Managers