vRealize Automation 8.x – Change Internal Kubernetes IP Range

Reading Time: 2 minutes

vRealize Automation (vRA) 8.2 added support for changing the IP address range used by the internal Kubernetes cluster, which is great news as many organisations already use the 10.244.0.0/21 address space. Unfortunately, if your network already uses the ranges 10.244.0.0/22 or 10.244.4.0/22, you will likely encounter issues, such as provisioning timeouts.

To resolve this issue in vRA 8.2, run the commands below to view your current configuration and to update to a new range.

Please note that this procedure is specific to vRA 8.2. For any other version, change the internal Kubernetes IP range through vRealize Suite Lifecycle Manager.

View your Current Configuration

View your current configuration by running the command below. Two /22 subnets are shown: the cluster CIDR and the service CIDR.
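For example (this assumes the vracli network k8s-subnets subcommand prints the current ranges when run without arguments):

    vracli network k8s-subnets

With the defaults in place, this should report a cluster CIDR of 10.244.0.0/22 and a service CIDR of 10.244.4.0/22.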

Set your New Configuration

To update the internal Kubernetes IP ranges, run the command below, substituting the 192.168.0.0/22 and 192.168.4.0/22 subnets with your own.
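The command takes the same form as the vracli syntax referenced in the comments below:

    vracli network k8s-subnets --cluster-cidr 192.168.0.0/22 --service-cidr 192.168.4.0/22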

Once it is reconfigured, redeploy the application by running the ‘clean’ command.
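On the appliance this is typically done with the deployment script; the path below assumes the standard vRA 8.x script location:

    /opt/scripts/deploy.sh --onlyClean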

Followed by the ‘deploy’ command.
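Assuming the same script location:

    /opt/scripts/deploy.sh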

Reset to Original Configuration

To reset the internal Kubernetes IP ranges back to the default subnets, run the command below.
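A sketch using the default subnets mentioned in the introduction (cluster CIDR 10.244.0.0/22, service CIDR 10.244.4.0/22):

    vracli network k8s-subnets --cluster-cidr 10.244.0.0/22 --service-cidr 10.244.4.0/22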

Once it is reconfigured, redeploy the application by running the ‘clean’ command.
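As in the previous section, assuming the standard script location:

    /opt/scripts/deploy.sh --onlyClean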

Followed by the ‘deploy’ command.
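And the deploy script once more:

    /opt/scripts/deploy.sh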

Known Issue

There is a bug in vRealize Automation 8.2 that prevents the vRA appliance from deploying or starting up properly when the internal Kubernetes IP address space is changed to use the 192.168.0.0/16 address space. The procedure in this post works around the issue while VMware develops a patch to resolve it permanently.

3 comments

  1. We had attempted this in our environment unsuccessfully. After opening a case with VMware, we were informed that these steps are incorrect, and were provided the following procedure:

    • execute ‘vracli upgrade exec -y --prepare --profile k8s-subnets’
    • take a VM snapshot
    • change the subnets on the same node the upgrade command was started on: ‘vracli network k8s-subnets --cluster-cidr 10.224.0.0/22 --service-cidr 10.224.4.0/22’
    • execute the upgrade with the profile: ‘vracli upgrade exec’

    1. Hey Josh,
      Cheers for this. Just out of curiosity, was your environment vRA 8.2 specifically? I have also added an extra disclaimer as it has certainly changed for newer versions of vRA.

  2. I can confirm this blog’s procedure works for vRA 8.2 and standalone vRO 8.4.

    Josh’s command ‘vracli -profile k8s-subnets’ is not recognized by the system in these versions.

    Thanks Gary 🙂
