Lab: Install and configure a (nested) vSphere SDDC

Latest revision as of 20:19, 15 March 2024

In this lab I am working with the following software and versions:

Software                          Version   Filename
VMware vCenter Server Appliance   7.00U3G   VMware-VCSA-all-7.0.3-20150588.iso
VMware ESXi Server                7.00U3F   VMware-VMvisor-Installer-7.0U3f-20036589.x86_64.iso
  1. Deploy a vCenter Server
  2. Deploy 6 x ESXi hosts
    a. 3 x Compute hosts
    b. 3 x Edge hosts
  3. The vSphere environment has one Workload cluster and one Edge cluster that need to be configured:
    a. Prepare the full cluster for Compute with DRS and HA.
    b. Prepare another full cluster for Edge with DRS and HA.
  4. Create a VDS and Port Groups
  5. Add the ESXi hosts to the vSphere Clusters
  6. Configure VSAN

One of the prerequisites when you are installing NSX is a working SDDC based on vSphere.

There are different designs available when it comes to the deployment of your vSphere Infrastructure (see Compute, Storage, and vSphere Infrastructure).

In this section I will show you how to deploy and configure a vCenter Server with some ESXi hosts so that you can use this environment as a starting point to install and configure NSX.

As I am doing a nested deployment, and one can easily lose track of what is happening where, I have created the figure below for better understanding.

Figure 1: Untitled.png

⚠️ In this article I am building the nested lab as per Figure 1.

When I add NSX later in this nested setup Figure 2 would look something like this:

Figure 2: Untitled%201.png

When I would deploy this in a non-nested environment Figure 3 would look like this:

Figure 3: Untitled%202.png

The Steps

When you have DNS working (see Lab: Install and configure a DNS Server using BIND), the next step is to deploy your vSphere infrastructure.

  • STEP 1: Deploy a vCenter Server (Appliance)
  • STEP 2: Configure vCenter Server
  • STEP 3: Deploy the ESXi
  • STEP 4: Add the ESXi Servers to a vSphere Cluster and configure VSAN
  • STEP 5: Install the Licenses for the vCenter Server, ESXi Servers and VSAN

STEP 1: Deploy a vCenter Server (Appliance)

The vCenter Server is deployed on another vSphere environment (nested) and this requires a different installation approach using two stages. You will need to deploy the vCenter Server using the .ova file. The .ova file can be retrieved from the vCenter Server ISO file that you can download from the VMware website.

⚠️ There are multiple ways to deploy a vCenter Server, for example using the VCSA deploy tool with the JSON file as input. This method will not be used in this article.

STAGE 1 Deployment

Create a new Virtual Machine:

Untitled%203.png

Select: Deploy from template:

Untitled%204.png

Choose the OVA/OVF Template you want to deploy from:

Untitled%205.png

Provide a Virtual Machine name:

Untitled%206.png

Select the correct Resource Pool (the one with your name on it):

Untitled%207.png

Review the details:

Untitled%208.png

Accept the licence agreement:

Untitled%209.png

Select the Storage:

Untitled%2010.png

Select the destination network:

Untitled%2011.png

Specify the template specific properties like passwords, IP address, DNS, default gateway settings, etc.:

Untitled%2012.png

Untitled%2013.png

Untitled%2014.png

Untitled%2015.png

Untitled%2016.png

Review the Summary before you start the actual deploy:

Untitled%2017.png

Power on the Virtual Machine:

Untitled%2018.png

STAGE 2 Deployment

Powering on the Virtual Machine only completes STAGE 1 of the vCenter Server deployment; you still need to complete the STAGE 2 deployment.

To do this, you need to browse from the Stepstone to https://10.203.100.5:5480 (the vCenter Server Management IP address). Log in to the vCenter Server to complete the STAGE 2 deployment.

Untitled%2019.png
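Before clicking through the VAMI it can help to confirm that TCP port 5480 is actually reachable from the Stepstone; a minimal sketch using only the standard library (the IP address matches the deployment above, the helper function name is my own):

```python
import socket

def is_port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True when a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# The VAMI (vCenter Server Management Interface) listens on TCP 5480
print(is_port_open("10.203.100.5", 5480))  # True once the appliance is powered on
```

The same check works for any of the other management endpoints in the lab (e.g. the vSphere Client on port 443).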

Click next:

Untitled%2020.png

Verify all the parameters, and select to synchronise time with a selected NTP server.

Untitled%2021.png

Specify the SSO configuration details:

Untitled%2022.png

Configure CEIP (or not):

Untitled%2023.png

Review the final STAGE 2 configuration details and click finish to start the STAGE 2 deployment:

Untitled%2024.png

Untitled%2025.png

Watch the progress screen until it finishes.

When the STAGE 2 deployment is finished, you will sometimes get a completion screen; at other times the auto-refresh does not work properly and the session times out. The STAGE 2 deployment takes around 20 minutes, and afterwards you can refresh the screen, log in again, and look at the vCenter Server Appliance summary screen:

Untitled%2026.png

Now you are ready to log into the vCenter Server GUI:

Untitled%2027.png

STEP 2: Configure vCenter Server

Create (virtual) Data Center with vSphere Clusters

After the Data Center and vSphere Cluster objects are created this looks something like the screenshot below.

Untitled%2028.png

Configure DRS on the vSphere Clusters

Verify that DRS is enabled and set to “Fully Automated”; if not, apply the settings below.

Verify this on the Compute Cluster.

Untitled%2029.png

Verify this on the Edge Cluster.

Untitled%2030.png

Create VDS and VDS Port Groups

Browse to the Networking tab and right-click on the Data Center object and select Distributed Switch → New Distributed Switch to create a new Distributed Switch.

Untitled%2031.png

Provide a name for the new Distributed Switch:

Untitled%2032.png

Select a Distributed Switch version:

Untitled%2033.png

Set the number of uplinks:

Untitled%2034.png

Review the details and click on finish:

Untitled%2035.png

Verify if the Distributed Switch has been created successfully:

Untitled%2036.png

Now that the Distributed Switch is created it is time to create some Port Groups. The details can be found in the table below:

Port Group Name   VLAN Type    VLAN ID/trunk range
Management        VLAN         100
NSXEdgeUplink1    VLAN Trunk   114, 116, 118
NSXEdgeUplink2    VLAN Trunk   114, 117, 118
vMotion           VLAN         111
vSAN              VLAN         112
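The table above can also be captured as data so the VLAN plan is easy to sanity-check before you start clicking through the Port Group wizard; a small sketch (the names and VLAN IDs come from the table, the validation helper is my own):

```python
# Port Group -> (VLAN type, VLAN IDs), as per the table above
PORT_GROUPS = {
    "Management":     ("VLAN", [100]),
    "NSXEdgeUplink1": ("VLAN Trunk", [114, 116, 118]),
    "NSXEdgeUplink2": ("VLAN Trunk", [114, 117, 118]),
    "vMotion":        ("VLAN", [111]),
    "vSAN":           ("VLAN", [112]),
}

def validate_vlan_plan(plan: dict) -> list:
    """Return a list of problems; an empty list means the plan looks sane."""
    problems = []
    for name, (vlan_type, vlans) in plan.items():
        if vlan_type == "VLAN" and len(vlans) != 1:
            problems.append(f"{name}: an access Port Group must carry exactly one VLAN")
        for vid in vlans:
            if not 1 <= vid <= 4094:  # valid 802.1Q VLAN ID range
                problems.append(f"{name}: VLAN {vid} out of range")
    return problems

print(validate_vlan_plan(PORT_GROUPS))  # → []
```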



⚠️ The NSXEdgeUplink Port Groups will be created now, and will be used for the Virtual NSX Edge Transport Nodes.

Right-click on the Distributed Switch and select Distributed Port Group → New Distributed Port Group.

Untitled%2037.png

Provide a name for the new Distributed Port Group:

Untitled%2038.png

Set the VLAN Type to be VLAN and specify the VLAN id as provided in the table:

Untitled%2039.png

Review the details and click on finish:

Untitled%2040.png

Verify if the Distributed Port Group has been created successfully:

Untitled%2041.png

⚠️ Repeat the same steps to create the vMotion and VSAN Distributed Port Groups.

The Distributed Port Groups that are going to be used for the Virtual Edge Transport Nodes are slightly different.

Provide a name for the new Distributed Port Group:

Untitled%2042.png

Set the VLAN Type to be VLAN Trunking and specify the VLAN trunk range as provided in the table:

The VLANS that are trunked are the BGP Uplink VLANs, the TEP and RTEP VLANs.

Untitled%2043.png

Review the details and click on finish:

Untitled%2044.png

Verify if the Distributed Port Group has been created successfully:

Untitled%2045.png

⚠️ Repeat the same steps to create the second NSXEdgeUplink2 Distributed Port Group.

When all the required Distributed Port Groups are created it should look like this:

Untitled%2046.png

STEP 3: Deploy the ESXi

Now that your vCenter Server is up and running and reachable, the next step is to deploy the (nested) ESXi Compute and Edge hosts. Per Pod I am deploying a total of six (nested) ESXi hosts (as per the diagrams in the diagrams section above). For Pod 100 the details for the (nested) ESXi hosts to deploy are found in the table below:

ESXi Hostname     VM Name              vmk0 IP address    Purpose
Pod-100-ESXi-31   IH-Pod-100-ESXi-31   10.203.100.31/24   Compute Host
Pod-100-ESXi-32   IH-Pod-100-ESXi-32   10.203.100.32/24   Compute Host
Pod-100-ESXi-33   IH-Pod-100-ESXi-33   10.203.100.33/24   Compute Host
Pod-100-ESXi-91   IH-Pod-100-ESXi-91   10.203.100.91/24   Edge Host
Pod-100-ESXi-92   IH-Pod-100-ESXi-92   10.203.100.92/24   Edge Host
Pod-100-ESXi-93   IH-Pod-100-ESXi-93   10.203.100.93/24   Edge Host
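The naming and addressing follow a simple pod-based pattern (the pod number is reused in the VM name and the third octet of the management IP), so the table can be generated for any pod; a sketch of that pattern (the function and its output layout are my own, derived from the table above):

```python
def esxi_hosts(pod: int) -> list:
    """Generate the six nested ESXi hosts for a pod: .31-.33 Compute, .91-.93 Edge."""
    hosts = []
    for last_octet in (31, 32, 33, 91, 92, 93):
        purpose = "Compute Host" if last_octet < 90 else "Edge Host"
        hosts.append({
            "hostname": f"Pod-{pod}-ESXi-{last_octet}",
            "vm_name": f"IH-Pod-{pod}-ESXi-{last_octet}",
            "vmk0": f"10.203.{pod}.{last_octet}/24",
            "purpose": purpose,
        })
    return hosts

for h in esxi_hosts(100):
    print(h["hostname"], h["vmk0"], h["purpose"])
```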



⚠️ In this article I am only deploying Compute and Edge Hosts, but the same steps below can also be used to deploy your Management hosts (Figure 2). It all depends on the amount of available resources you have.

I am using an ova template (Nested_ESXi7.0_Appliance_Template_v1.0) that has been provided by William Lam (https://williamlam.com/nested-virtualization/nested-esxi-virtual-appliance).

Untitled%2047.png

Create a new Virtual Machine:

Untitled%2048.png

Select: Deploy from template:

Untitled%2049.png

Choose the OVA/OVF Template you want to deploy from:

Untitled%2050.png

Provide a Virtual Machine name:

Untitled%2051.png

Select the correct Resource Pool (the one with your name on it):

Untitled%2052.png

Review the details:

Untitled%2053.png

Accept the licence agreement:

Untitled%2054.png

Select the Storage:

Untitled%2055.png

Select the destination network:

Untitled%2056.png

Specify the template specific properties like passwords, IP address, DNS, default gateway settings, etc.:

Untitled%2057.png

Untitled%2058.png

Untitled%2059.png

Review the Summary before you start the actual deploy:

Untitled%2060.png

Now that the first (nested) ESXi host is deployed, we need to add an additional Virtual Network adapter and change the CPU, RAM, and Hard Disk settings. Let's edit the settings of the newly created (nested) ESXi host:

Untitled%2061.png

Add a Network Adapter so that we can dedicate the first NIC to management.

vNIC#               Port Group
Network Adapter 1   Pod-100-Mgmt
Network Adapter 2   Pod-100-Trunk
Network Adapter 3   Pod-100-Trunk

Change the CPU, RAM, and Hard Disk settings:

Property      Value
vCPU          8
RAM           16 GB
Hard Disk 1   8 GB
Hard Disk 2   20 GB
Hard Disk 3   180 GB
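With six nested hosts per pod at these sizes, it is worth checking the aggregate footprint against what your physical host can provide before cloning; a quick calculation (the per-host numbers follow from the table above, the helper is my own):

```python
HOSTS_PER_POD = 6
VCPU_PER_HOST = 8
RAM_GB_PER_HOST = 16
DISK_GB_PER_HOST = 8 + 20 + 180  # Hard Disks 1-3

def pod_footprint(hosts: int = HOSTS_PER_POD) -> dict:
    """Aggregate vCPU/RAM/disk demand of the nested ESXi VMs in one pod."""
    return {
        "vcpu": hosts * VCPU_PER_HOST,
        "ram_gb": hosts * RAM_GB_PER_HOST,
        "disk_gb": hosts * DISK_GB_PER_HOST,
    }

print(pod_footprint())  # → {'vcpu': 48, 'ram_gb': 96, 'disk_gb': 1248}
```

Note that the disk figure is the provisioned maximum; with thin provisioning the actual usage will be much lower.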

Edit the VM settings:

Untitled%2062.png

Now that you have changed the settings, you can clone this Virtual Machine five times and specify the (nested) ESXi host-specific parameters in the clone wizard. Clone the Virtual Machine:

Untitled%2063.png

Provide a Virtual Machine name:

Untitled%2064.png

Select the correct Resource Pool (the one with your name on it):

Untitled%2065.png

Select the Storage:

Untitled%2066.png

As you are cloning this exact VM, the Virtual Machines will be identical; you don't need any OS or hardware customization, and you can choose for yourself whether to power the VM on after deployment. I prefer powering them all up at the same time.

Untitled%2067.png

Specify the template specific properties like passwords, IP address, DNS, default gateway settings, etc. Most of the details are already predefined; the only things you need to change are the hostname and the management IP address:

Untitled%2068.png

Untitled%2069.png

Now you have ESXI-31 and ESXI-32; repeat the cloning steps for all the (nested) ESXi hosts you require in your SDDC lab / Pod. I deployed everything according to the diagram in Figure 1.

STEP 4: Add the ESXi Servers to a vSphere Cluster and configure VSAN

Add ESXi hosts to vSphere Clusters

ESXi hostname                 Username   Password
pod-100-computea-1.sddc.lab   root       VMware1!
pod-100-computea-2.sddc.lab   root       VMware1!
pod-100-computea-3.sddc.lab   root       VMware1!
pod-100-edge-1.sddc.lab       root       VMware1!
pod-100-edge-2.sddc.lab       root       VMware1!
pod-100-edge-3.sddc.lab       root       VMware1!



Right-click on the Compute Cluster and select Add Hosts.

Untitled%2070.png

Specify the FQDNs for all the hosts you need to add; if the credentials are the same, you can check the box and only specify the credentials for the first host you are adding.

Untitled%2071.png

Accept the SHA1 thumbprints/certificates:

Untitled%2072.png

Look at the summary:

Untitled%2073.png

Review the Summary and Finish.

Untitled%2074.png

Verify if the hosts are added successfully:

Untitled%2075.png

⚠️ Repeat the same steps to add the ESXi Edge hosts to the vSphere Edge Cluster.

When all the hosts are added to the clusters this should look something like this:

Untitled%2076.png

Add hosts to VDS and move vmk0 to VDS (Port Group)

Now that the VDS is created, the Port Groups are in place, and the ESXi hosts are added to the cluster, you can add the hosts to the VDS and migrate the vmk0 management port to the VDS (Port Group).

Go to the Networking tab and right-click on the VDS and select Add and Manage Hosts.

Untitled%2077.png

Select “Add hosts”:

Untitled%2078.png

Select all the hosts you want to add to the VDS:

Untitled%2079.png

Assign the pNICs (vmnics) to the VDS Uplinks.

In my case I have 5 pNICs available on my ESXi hosts, and I will use vmnic1 and vmnic2 as shown below:

Untitled%2080.png

Move the vmk0 from the VSS to the VDS (Management Port Group). Click on “Assign”:

Untitled%2081.png

Select the Management Port Group and click on “assign” again:

Untitled%2082.png

You do not have any (Compute) Virtual Machines so there is nothing to migrate:

Untitled%2083.png

Review the Summary and Finish.

Untitled%2084.png

Verify if all the hosts have been added to the VDS correctly:

Untitled%2085.png

Create vMotion vmk interface on ESXi hosts

To allow vMotion, it is good practice to use a dedicated vmk interface and VLAN. The table below shows the IP addresses that I am using per host for the vMotion vmk interfaces.

ESXi hostname                 vMotion IP address   VDS Portgroup   Subnetmask      Gateway
pod-100-computea-1.sddc.lab   10.203.101.111       vMotion         255.255.255.0   10.203.101.1
pod-100-computea-2.sddc.lab   10.203.101.112       vMotion         255.255.255.0   10.203.101.1
pod-100-computea-3.sddc.lab   10.203.101.113       vMotion         255.255.255.0   10.203.101.1
pod-100-edge-1.sddc.lab       10.203.101.191       vMotion         255.255.255.0   10.203.101.1
pod-100-edge-2.sddc.lab       10.203.101.192       vMotion         255.255.255.0   10.203.101.1
pod-100-edge-3.sddc.lab       10.203.101.193       vMotion         255.255.255.0   10.203.101.1
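It is easy to mistype one of these addresses in the wizard, so the plan can be checked against the subnet and gateway with the standard `ipaddress` module (the addresses come from the table above, the checker function is my own; the same check applies to the vSAN addressing later on):

```python
import ipaddress

VMOTION_NET = ipaddress.ip_network("10.203.101.0/24")  # VLAN 111, mask 255.255.255.0
GATEWAY = ipaddress.ip_address("10.203.101.1")
VMOTION_IPS = {
    "pod-100-computea-1.sddc.lab": "10.203.101.111",
    "pod-100-computea-2.sddc.lab": "10.203.101.112",
    "pod-100-computea-3.sddc.lab": "10.203.101.113",
    "pod-100-edge-1.sddc.lab": "10.203.101.191",
    "pod-100-edge-2.sddc.lab": "10.203.101.192",
    "pod-100-edge-3.sddc.lab": "10.203.101.193",
}

def check_plan(ips: dict, net, gw) -> bool:
    """Every vmk IP must be inside the subnet, unique, and not collide with the gateway."""
    addrs = [ipaddress.ip_address(ip) for ip in ips.values()]
    return (all(a in net for a in addrs)
            and len(set(addrs)) == len(addrs)
            and gw in net
            and gw not in addrs)

print(check_plan(VMOTION_IPS, VMOTION_NET, GATEWAY))  # → True
```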

When you go to the Networking tab and the Distributed Port Groups sub-tab, you can right-click on the vMotion Port Group and select “Add VMkernel Adapters”.

Untitled%2086.png

Click on “+Attached hosts”:

Untitled%2087.png

Select all the hosts you want to add a vMotion vmk interface:

Untitled%2088.png

Enable the vMotion service by checking the checkbox:

Untitled%2089.png

Use static IPv4 settings, and use the IP address settings provided in the table:

Untitled%2090.png

Review the Summary and Finish.

Untitled%2091.png

Verify if the vmk vMotion interfaces have a virtual switch port assigned:

Untitled%2092.png

Configure vSAN

Before you can configure vSAN you need to add a vmk interface dedicated for vSAN. This is similar to the steps you just followed for the vMotion vmk, but now with a different Port Group and VLAN.

ESXi hostname                 vSAN IP address   VDS Portgroup   Subnetmask      Gateway
pod-100-computea-1.sddc.lab   10.203.102.111    vSAN            255.255.255.0   10.203.102.1
pod-100-computea-2.sddc.lab   10.203.102.112    vSAN            255.255.255.0   10.203.102.1
pod-100-computea-3.sddc.lab   10.203.102.113    vSAN            255.255.255.0   10.203.102.1
pod-100-edge-1.sddc.lab       10.203.102.191    vSAN            255.255.255.0   10.203.102.1
pod-100-edge-2.sddc.lab       10.203.102.192    vSAN            255.255.255.0   10.203.102.1
pod-100-edge-3.sddc.lab       10.203.102.193    vSAN            255.255.255.0   10.203.102.1

When you go to the Networking tab and the Distributed Port Groups sub-tab, you can right-click on the vSAN Port Group and select “Add VMkernel Adapters”.

Untitled%2093.png

Select all the hosts you want to add a vSAN vmk interface:

Untitled%2094.png

Select all the hosts you want to add a vSAN vmk interface:

Untitled%2095.png

Verify the selected hosts (again):

Untitled%2096.png

Enable the vSAN service by checking the checkbox:

Untitled%2097.png

Use static IPv4 settings, and use the IP address settings provided in the table:

Untitled%2098.png

Review the Summary and Finish.

Verify if the vmk vSAN interfaces have a virtual switch port assigned:

Untitled%2099.png

Now that the vSAN vmk is in place you can start to configure vSAN.

Each host has three disks:

  1. 1 x 10 GB (ESXi boot disk)
  2. 1 x 20 GB (vSAN cache disk)
  3. 1 x 180 GB (vSAN capacity disk)
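With three hosts per cluster, each contributing one 180 GB capacity disk, you can estimate the raw and usable datastore size up front; a rough sketch (the arithmetic follows from the disk list above; halving usable space under FTT=1 RAID-1 mirroring is standard vSAN behaviour, though real-world overheads vary):

```python
def vsan_capacity(hosts: int, capacity_disk_gb: int, ftt: int = 1) -> dict:
    """Rough vSAN sizing: raw = sum of capacity disks across hosts.
    RAID-1 mirroring stores ftt+1 copies of every object, so usable
    capacity is roughly raw / (ftt + 1), ignoring slack and metadata."""
    raw = hosts * capacity_disk_gb
    usable = raw // (ftt + 1)
    return {"raw_gb": raw, "approx_usable_gb": usable}

# One 3-node cluster (Compute or Edge), 180 GB capacity disk per host
print(vsan_capacity(3, 180))  # → {'raw_gb': 540, 'approx_usable_gb': 270}
```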

When you look at the Storage Devices on the ESXi hosts it looks something like this:

Untitled%20100.png

To enable vSAN, select the vSphere Cluster object and, in the Configure tab, select Services in the vSAN section. Click on “Configure vSAN” to open the wizard.

Untitled%20101.png

Select “Single site cluster”:

Untitled%20102.png

Leave all the service settings at their defaults:

Untitled%20103.png

The wizard will detect which disk is the cache disk and which is the capacity disk. Just make sure this is correct and change it if required:

Untitled%20104.png

Review the Summary and Finish.

Untitled%20105.png

Verify if the vSAN datastore is created:

Untitled%20106.png

You can also verify what hosts are part of this vSAN datastore.

Untitled%20107.png

Rename the vSAN datastore.

Untitled%20108.png

Name it to something that is related to the cluster name:

Untitled%20109.png

Verify if the name has been changed:

Untitled%20110.png

⚠️ Repeat the same steps to enable vSAN on the vSphere Edge Cluster.

When you have enabled vSAN on the Compute and Edge vSphere clusters, your datastores view will look something like this:

Untitled%20111.png

STEP 5: Install the Licenses for the vCenter Server, ESXi Servers and VSAN

You must have noticed the message in the orange bar at the top related to licenses.

Untitled%20112.png

For the products above I will need three licenses:

Product                  License key
vCenter Server License   K42HJ-XXXXX-XXXXX-XXXXX-XXXXX
ESXi Server License      1J4T3-XXXXX-XXXXX-XXXXX-XXXXX
VSAN License             X02TH-XXXXX-XXXXX-XXXXX-XXXXX
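vSphere license keys follow a fixed five-groups-of-five format, so a quick format check can catch copy/paste errors before you hit the UI; a small sketch (the regex and helper are my own, the masked keys are from the table above — this only validates the shape of a key, not whether it is a valid license):

```python
import re

# Five groups of five alphanumeric characters, separated by dashes
KEY_FORMAT = re.compile(r"^[0-9A-Z]{5}(-[0-9A-Z]{5}){4}$")

def looks_like_license_key(key: str) -> bool:
    """Check the 25-character, dash-separated shape of a vSphere license key."""
    return KEY_FORMAT.fullmatch(key.strip().upper()) is not None

print(looks_like_license_key("K42HJ-XXXXX-XXXXX-XXXXX-XXXXX"))  # → True
print(looks_like_license_key("K42HJ-XXXXX"))                    # → False
```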

You can add new licenses either by clicking on the button in the top bar, or through the menu by clicking on “Administration”:

Untitled%20113.png

Click on “Licenses”:

Untitled%20114.png

Copy/Paste all the license keys:

Untitled%20115.png

Provide a recognisable license name:

Untitled%20116.png

Review the Summary and Finish.

Untitled%20117.png

Verify if all the licenses have been added correctly:

Untitled%20118.png

vCenter Server

Assign the vCenter Server license:

Untitled%20119.png

Assign the vCenter Server license:

Untitled%20120.png

Verify if the vCenter Server license is assigned:

Untitled%20121.png

ESXi Server(s)

Assign the ESXi Server licenses:

Untitled%20122.png

Click yes:

Untitled%20123.png

Assign the ESXi Server licenses:

Untitled%20124.png

Verify if the ESXi Server licenses are assigned:

Untitled%20125.png

vSAN

Assign the vSAN Server licenses:

Untitled%20126.png

Click yes:

Untitled%20127.png

Assign the vSAN Server licenses:

Untitled%20128.png

Verify if the vSAN Server licenses are assigned:

Untitled%20129.png

Refresh the screen and notice that the orange bar is gone.

Untitled%20130.png

Continue with >> Lab: NSX Manager deployment (Single site)