vRealize Automation 8.x – Troubleshooting


With the introduction of vRealize Automation (vRA) 8.0, the traditional appliance VAMI page is gone. It is replaced by the vRA CLI (vracli) and the Kubernetes command-line tools. This post covers some of the more common CLI commands you may need. To use the commands below, connect to the appliance over SSH and log in with the root username and password.

Check Pods / ‘Services’ Status

Although the traditional vRA services are replaced with Kubernetes containers, you can still check their running status using the command below. It shows the running status, the age and the number of restarts for each pod or ‘service’.
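The command itself appears to be missing from this page; on a vRA 8.x appliance the pods run in the prelude namespace, so the check is typically:

```shell
# List all vRA pods ('services') with their READY state, restart count and age
kubectl get pods -n prelude
```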

Display vRA Cluster Status
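The command for this section is also missing from the page; on vRA 8.x the cluster status is usually displayed with:

```shell
# Show the overall health of the vRA cluster nodes and the embedded database
vracli status
```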

Verify the vRA Deployment Status

The output of this command will be “Deployment not complete” if the appliance is still deploying or starting up; otherwise it will show “Deployment complete”.
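The missing command here is most likely:

```shell
# Prints "Deployment complete" once all services are up,
# or "Deployment not complete" while the appliance is still starting
vracli status deploy
```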

Check Deployment Log File

The deployment log file is located at the path below.
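The path was dropped from this page; on vRA 8.x appliances the first-boot deployment log is typically written to /var/log/deploy.log:

```shell
# Follow the deployment log while the appliance starts up
tail -f /var/log/deploy.log
```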

Generate a Log Bundle

The command below will generate a log bundle; the output file can be found at /root/log-bundle-xxxxxxxxx.tar.xz. For my environment, the log bundle took around 20 minutes to complete and was 60 MB in size, however HA environments are likely to take longer and produce significantly larger bundles. The --collector-timeout flag can be used to set a timeout for each log collection (default 1000 seconds). The --include-cold-storage flag may be requested by GSS if the issue you are troubleshooting was not recent, as it includes older log files in the log bundle; collection will be slower and the output file will be larger.
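The command referenced above is typically run from /root, and the optional flags are passed as shown (the flag values below are examples, not recommendations):

```shell
# Generate a standard log bundle in the current directory
vracli log-bundle

# Example with the optional flags discussed above
vracli log-bundle --collector-timeout 2000 --include-cold-storage
```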

Stopping / Shut down vRA Cluster

This command will shut down vRealize Automation on all of the cluster nodes by stopping the services, sleeping for 2 minutes, and cleaning the current deployment before shutting down the appliance. Check the official docs here for up-to-date procedures.
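The stripped-out command appears to be the graceful shutdown sequence from the VMware documentation, which on vRA 8.x looks like this (verify against the official docs for your exact version):

```shell
# Stop all vRA services, wait 2 minutes, then clean the current deployment
/opt/scripts/svc-stop.sh
sleep 120
/opt/scripts/deploy.sh --onlyClean
```

Once this completes, shut down the guest operating system of each appliance.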

Starting vRA Cluster

Power on each of the appliances and wait for them to boot completely before proceeding. Wait for the appliance console to show the blue welcome page. Ensure that all prerequisite servers are also started, such as VMware Identity Manager (vIDM). This command will run the deploy.sh script to deploy all Prelude services, and then the kubectl command will show the status of all the running pods or ‘services’. This process can take 20+ minutes. If the appliance has insufficient memory, the timeout will occur at 30 minutes. Check the official docs here for up-to-date procedures.
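Based on the description above, the missing command pair is most likely:

```shell
# Redeploy all Prelude services, then watch the pod status
/opt/scripts/deploy.sh
kubectl get pods -n prelude
```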

vRA 8.x Error – Bad Gateway

After starting up your vRA appliances, you may find that the UI loads but shows a Bad Gateway error. This usually means the appliance is still starting up. Presuming the appliance has enough resources assigned to it, the UI will eventually load, and as above, the status of the deployment can be checked using the command below. Check the READY column and confirm that all pods are ready for use. A READY value of 0/1 means the pod is not yet available. Once all pods are listed as 1/1 or 2/2, the UI will be available for use.
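As elsewhere in this post, the check is presumably the pod listing:

```shell
# Confirm every pod shows READY 1/1 (or 2/2) before expecting the UI to load
kubectl get pods -n prelude
```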

vRA just not working…

After trying all of the above, sometimes vRA just won’t come back online after a failure. If this is the case, run the command above to check the status of the pods; if they are all online except the postgres database pod, try the command below to restart the kubelet service. Once this is run, leave it alone for the next 30 minutes while vRA restarts itself and tries to come back online cleanly.
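The kubelet restart referenced above would be the standard systemd restart:

```shell
# Restart the kubelet; vRA will then attempt to bring its pods back up cleanly
systemctl restart kubelet
```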

Remove VM from Inventory without deleting the VM

Whilst this is 100% not supported, vautomation.dev provides a very useful article on how to remove VMs from inventory without deleting the underlying VM by accessing the internal vRA database.

Remove vRA integration with vRealize Log Insight

Run the command below to remove the integration between vRA and vRLI. You should also remove the corresponding configuration from the vRLI interface.
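The missing command is most likely the vracli vrli subcommand:

```shell
# Remove the vRLI log-forwarding integration from the vRA side
vracli vrli unset
```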

To add the integration back again, run the command below and substitute vrli8.homelab.local with the FQDN or URL of your vRLI instance.
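Again assuming the vracli vrli subcommand, re-adding the integration would look like this:

```shell
# Point vRA log forwarding at your vRLI instance (substitute your own FQDN)
vracli vrli set https://vrli8.homelab.local
```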


  1. Hi Gary,
    I have deployed vRA 8 and also synchronized Active Directory, but I am unable to access vRA using an AD account.
    I have tested both admin and member accounts.

    1. Hey vSphere25,
      A couple things to check.
      – In vIDM 8, have you synced your user ID / group so that it is visible in vIDM?
      – If the user is visible in vIDM, is the user visible in vRA 8 under the “Identity & Access Management” tab?
      – If the user is visible, have the Service Roles been assigned to the user?
      – If the user has the Service Roles assigned, ensure that you are selecting your AD domain at the login screen and also check the user name format. Check in vIDM if the samAccountName or userPrincipalName is being synced.

  2. Hi Gary,
    I have deployed vRA8.1 suite: LCM, vIDM and the vRA appliance.
    I can SSH into the vRA appliance, but I can’t access it through the UI.
    Are there some configs/settings that I am missing?

    1. Hi,
      Have you tried accessing the https://IP Address or https://FQDN?
      If neither are working, what is the error? A HTTP error or maybe a browser error?
      Don’t forget to SSH into the appliance and run the commands on this page to check the health status of the deployment and the services / pods.

  3. Hi Gary,
    I have deployed vRA 8.1 single-node architecture and for some reason I can’t see any compute on the Compute Resources tab. I have a cloud account for vCenter 6.5 and vRA has discovered the tags of the cluster, but there is still no compute found.
    The weird part is that everything else, like the VM, network and storage policy resources, is fine.
    What could I be missing?

    1. Hi,
      Make sure you have a Cloud account, a Cloud Zone and give the Project access to the compute. You should see the clusters then under Resources > Compute.
      Check out this page also for a hint on setting these up here.

  4. Hi Gary,
    I’m using vRA 8.1, clustered deployment.
    At times I see error like :
    “Unable to sync the status of this resource” for multiple VMs deployed through vRA and due to this further provisioning of VMs gets impacted.
    All the pods are running and cluster is healthy.
    Can you suggest, if I’m missing something here?

    1. Hey A.J,
      For this issue, I would contact VMware Support / GSS to assist. This certainly shouldn’t be happening.

    2. AJ – did you ever get a resolution for this? We’ve run into the same issue and VMware is currently stumped.

  5. After upgrading to 8.4.0 or 8.4.1 I am unable to deploy. When I try to deploy from the catalog, the input screen comes up with a red box that says [object Object]. This happens in different places throughout vRA.

    1. Hey Holly,
      I have heard of a few people having issues upgrading to 8.4.x. My suggestion is to contact GSS; otherwise give 8.4.2 a go, as it might fix your issue.
