Homelab Setup Part 3 - vCenter Install & Configure

To set up the NUC and install ESXi, follow part one of this guide. To create the basic vSAN configuration for pre-6.7 vCenter deployments, follow part two of this guide.

vCenter Appliance Installation

With the Intel NUCs set up (and a vSAN datastore created if running a pre-6.7 vCenter), it is time to install the vCenter Server Appliance (vCSA). As you probably know, the Windows vCenter Server is deprecated, so the vCSA will be deployed. Download the ISO, mount it on your PC and run the UI installer located at "Drive:\vcsa-ui-installer\win32\installer.exe". The process to deploy the vCSA has been documented extensively on many blogs, including this one from Virtual Reality. For reference, I am installing the vCSA with the following configuration. Note that we don't yet have working DNS, so the FQDN is set to the IP address of the vCSA itself and the DNS server to the IP address of the default gateway (a trick from Virtually Ghetto).

  • vCenter Server with Embedded Platform Services Controller (PSC)
  • ESXi Host: 192.168.0.100 (enter the credentials for root user)
  • Certificate: Accepted
  • VM Name: vcenter-1
  • vCSA Root Password: ****************
  • Deployment Size: Tiny
  • Storage Size: Default
  • Storage: Install on a new vSAN Cluster containing the target host
  • Datacenter Name: Homelab Datacenter
  • Cluster Name: Homelab Cluster
  • Enable Thin Disk Mode: Yes
  • Enable Deduplication and compression: Yes
  • Network: VM Network
  • IP Assignment: Static
  • FQDN: 192.168.0.110 (this removes the requirement of working DNS)
  • IP Address: 192.168.0.110
  • Subnet Mask: 24 (prefix length, i.e. 255.255.255.0)
  • Default Gateway: 192.168.0.1
  • DNS Servers: 192.168.0.1 (this removes the requirement of working DNS)
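As an aside, the same deployment can be scripted with the CLI installer that ships on the same ISO (vcsa-cli-installer\win32\vcsa-deploy.exe) plus a JSON template. Below is a minimal sketch mirroring the settings above. Note this is an assumption-heavy outline: the key names are from the 6.7-era sample templates and change between versions, and the vSAN-bootstrap options are different again, so start from the sample templates bundled on the ISO rather than this fragment.

```json
{
    "__version": "2.13.0",
    "__comments": [
        "Sketch only - key names vary between vCSA versions and the vSAN bootstrap",
        "options differ; start from the samples under vcsa-cli-installer\\templates\\install."
    ],
    "new_vcsa": {
        "esxi": {
            "hostname": "192.168.0.100",
            "username": "root",
            "password": "<esxi-root-password>",
            "deployment_network": "VM Network",
            "datastore": "<target-datastore>"
        },
        "appliance": {
            "deployment_option": "tiny",
            "name": "vcenter-1",
            "thin_disk_mode": true
        },
        "network": {
            "ip_family": "ipv4",
            "mode": "static",
            "ip": "192.168.0.110",
            "prefix": "24",
            "gateway": "192.168.0.1",
            "dns_servers": ["192.168.0.1"],
            "system_name": "192.168.0.110"
        },
        "os": {
            "password": "<vcsa-root-password>",
            "ntp_servers": "uk.pool.ntp.org",
            "ssh_enable": true
        },
        "sso": {
            "password": "<sso-admin-password>",
            "domain_name": "vsphere.local"
        }
    },
    "ceip": {
        "settings": {
            "ceip_enabled": false
        }
    }
}
```

The deployment would then be kicked off with something like vcsa-deploy install template.json - check vcsa-deploy --help on your version for the exact verification and EULA flags.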

Once stage 1 has completed and the appliance has been deployed, the vCSA needs to be configured in stage 2. I have highlighted my configuration below:

  • NTP Servers: uk.pool.ntp.org (future AD DNS / NTP server)
  • SSH Access: Enabled (I always do this for troubleshooting, however the service should ALWAYS be left DISABLED once in production)
  • Create a new SSO domain
  • Single Sign-On domain name: vsphere.local (do not use your AD domain name as it will cause authentication conflicts)
  • Single Sign-On username: administrator
  • Configure CEIP: Enable or Disable

vCenter Server Appliance Configuration

Once this has completed, you will be able to access the vCenter server at https://<vcenter-ip-address>, where you can launch the HTML5 or Flex client and confirm that you can log in with the username administrator@vsphere.local and the password you created in the installer. Once you are logged in, you will see a licensing warning. Select Manage licenses and add your vCenter Server licenses. Once the license is added, locate the vCenter Server instance and assign the license to it.

One of the many benefits of vSphere is the vSphere Distributed Switch (vDS). Making use of this in a lab environment where only one host has been added so far is a little tricky, but William Lam has a great article over at Virtually Ghetto which documents the process: disable Network Rollback through the vCenter Advanced Settings, create a vDS with a Management and a VM-Network Distributed Port Group (DPG), and ensure the DPG for the vCSA appliance uses Ephemeral - no binding rather than Static binding. Then right-click the Datacenter, select Migrate VMs to Another Network, and specify the Standard Switch network as the source and your vDS network as the destination. On each of the following screens, select the correct NIC mappings to ensure that your physical NIC is migrated from the Standard Switch to the vDS.

Even if you elect to migrate the vCSA network as part of the wizard, that step will fail. Instead, log into the ESXi Host UI, manually edit the vCSA's VM settings and select the new vDS network. Give the vCSA a reboot and it should come back up without issues. I did, however, hit an HTTP 503 Service Unavailable error; the solution was to SSH into the vCSA and run the two commands below - the first to check the service status (which showed a lot of services down) and the second to bring everything back up. I never encountered this issue again, so hopefully it was a one-off bug.

service-control --status
service-control --start --all
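For reference, the Network Rollback change mentioned earlier is made through a vCenter advanced setting rather than a command. A minimal sketch, assuming vCSA shell access - the setting key is taken from William Lam's article, so verify it against your vCenter version:

```
# In the vSphere client: vCenter Server > Configure > Advanced Settings, add:
#   config.vpxd.network.rollback = false
# Then restart the vCenter service (vpxd) from the vCSA shell so it takes effect:
service-control --stop vpxd
service-control --start vpxd
```

Remember to re-enable rollback (set the value back to true) once the vDS migration is done, as it protects against management-network misconfigurations.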

With vCenter up and running again, log in and select Policies and Profiles, VM Storage Policies, vSAN Default Storage Policy. Select Edit, change the value for "Primary level of failures to tolerate" from 1 to 0 and enable Force Provisioning. This allows VMs to be provisioned on the single-host vSAN datastore before the domain controller and the vSAN witness appliance are deployed.

I won't go into the details in this blog as there are plenty of guides around, but the next step is to deploy a Windows Server as an Active Directory Domain Controller. I have deployed the server dc-1 with IP address 192.168.0.120 and created a domain called homelab.local following my guide here. Once this is complete, create DNS forward and reverse lookup records for the ESXi hosts (1-3) and the vCSA. You can then reconfigure the vCSA with the correct DNS settings via the appliance management interface (https://192.168.0.110:5480), and the ESXi servers via the Direct Console User Interface (DCUI). Once the DNS settings are correct everywhere, remove the current hosts from the vCenter Server inventory and re-add them using their FQDNs (esx-1.homelab.local and esx-2.homelab.local in the vSAN Cluster, and esx-3.homelab.local in the Datacenter).
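For reference, these are the records involved, shown here in BIND zone-file notation (in Windows DNS you would create them through the DNS Manager console); the names and IPs match the addressing table below:

```
; Forward lookup zone: homelab.local
esx-1   IN  A    192.168.0.100
esx-2   IN  A    192.168.0.101
esx-3   IN  A    192.168.0.102
vcsa    IN  A    192.168.0.110
dc-1    IN  A    192.168.0.120

; Reverse lookup zone: 0.168.192.in-addr.arpa
100     IN  PTR  esx-1.homelab.local.
101     IN  PTR  esx-2.homelab.local.
102     IN  PTR  esx-3.homelab.local.
110     IN  PTR  vcsa.homelab.local.
120     IN  PTR  dc-1.homelab.local.
```

A quick nslookup esx-1.homelab.local 192.168.0.120 (and the reverse, nslookup 192.168.0.100 192.168.0.120) confirms both directions resolve before the hosts are re-added.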

Name                               IP Address
ISP Router                         192.168.0.1
DHCP Clients (Home PCs & Laptops)  192.168.0.10 - 192.168.0.98
TP-Link Switch                     192.168.0.99
Intel NUC (esx-1)                  192.168.0.100
Intel NUC (esx-2)                  192.168.0.101
vSAN Witness (esx-3)               192.168.0.102 (management), 192.168.0.103 (vSAN witness)
vCenter Server (vcsa)              192.168.0.110
AD Domain Controller (dc-1)        192.168.0.120 (homelab.local)