How to Resolve Horizon View vCenter Connection Errors

I was recently working with a client who needed me to migrate Horizon View VMs to new storage. I thought it would be as easy as changing the storage settings for the pool and performing a rebalance across the cluster. Unfortunately, no rebalance operation was successful, and I saw the following errors:

Provisioning error occurred for Machine XXX: Refit operation rebalance failed

vCenter at address <vCenter Address> has been temporarily disabled (this error was typically followed by another notification that the same vCenter had been re-enabled)

I was able to resolve the issue by following VMware KB 1030996. In the case of this customer, there was only one working production pool. To test whether the issue was specific to the pool in use, I created a new temporary pool, ran recompose actions against it, and looked for errors in the event log. There were none.

Creating a new temporary pool proved critical to resolving this issue. The crux of the problem, as laid out in the KB, is that there were two vCenter entries in the View Composer database: in my case, one for the IP address and one for the FQDN (the FQDN being the correct entry). The View Composer database entry for the new temporary pool showed the correct Deployment Group ID. I was able to take that ID and replace it in the entries for the production pool. After that was done, I was able to rebalance the production pool without any errors.
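
For reference, the kind of database change the KB walks you through looks roughly like the sketch below, run against the View Composer database with sqlcmd (or SQL Server Management Studio). Everything in angle brackets is a placeholder, and the table and column names are not the real Composer schema, so follow the KB for the exact object names and back up the database before changing anything.

# Sketch only - object names in angle brackets are placeholders, not the actual Composer schema. Back up the database first.
# 1. Note the Deployment Group ID that the new, working temporary pool is using.
sqlcmd -S <SQL server> -d <Composer database> -Q "SELECT * FROM <deployment group table> WHERE <pool column> = '<temporary pool>'"
# 2. Point the production pool's rows at that same (correct) ID.
sqlcmd -S <SQL server> -d <Composer database> -Q "UPDATE <deployment group table> SET <deployment group ID column> = '<ID from step 1>' WHERE <pool column> = '<production pool>'"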

How to Detach/Attach Nutanix AHV Disks

This is the workflow for migrating a Nutanix AHV disk from one VM to another. One reason you might do this is that you have an old Server 2008 file server you want to migrate to Server 2019. Nutanix handles this differently than VMware: AHV does not allow you to detach and reattach disks as you would in vSphere.

The process is outlined in both Nutanix KB 3577 and KB 8062. Unfortunately, the way they are worded can be confusing for anyone without a lot of Nutanix experience. The process requires you to open an SSH session (using PuTTY, Tera Term, etc.) to one of the CVMs in the cluster.

Step 1. Find the VM Disk File

To find the VM Disk file you will use the command below:

acli vm.get <VM name> include_vmdisk_paths=1 | grep -E 'disk_list|vmdisk_nfs_path|vmdisk_size|vmdisk_uuid'

The output should look like the picture below.

Make note of the disk size and disk path. The disk size should correspond with the data disk you would like to migrate to your new VM.

Step 2. Create an Image from the VM Disk

To create a disk image you must run the following command:

acli image.create <image name> source_url=<url of source VM disk> container=<target container name> image_type=kDiskImage

Note: To get the source URL, prepend nfs://127.0.0.1 to the NFS path output from Step 1.

For example - nfs://127.0.0.1/BronzeTier_StorageContainer01/.acropolis/vmdisk/be06372a-b8c5-4544-b451-12b608615248  
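
Putting it all together, the full command would look something like the line below. The image name here is just an example, and I am assuming the image is created in the same container that holds the source disk; substitute your own image and target container names.

acli image.create FileServer-DataDisk source_url=nfs://127.0.0.1/BronzeTier_StorageContainer01/.acropolis/vmdisk/be06372a-b8c5-4544-b451-12b608615248 container=BronzeTier_StorageContainer01 image_type=kDiskImage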

The output of the command should appear as shown below.

Step 3. Attach the Disk to the New VM

Attaching the disk to the new VM can be done from Prism Element on the same cluster.

1. Locate the VM in the VM menu in Prism and click Update.

2. From the Update screen, select +Add New Disk.

3. In the Add Disk menu, select ‘Clone from Image Service’ from the Operation drop-down menu.

4. In the Image menu, select the image you created in Step 2 and click Add.

Once this is completed you can log into your VM and initialize the disk in the operating system.
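
Alternatively, if you would rather stay in the CVM SSH session than use Prism, the attach can also be done with acli by cloning a disk from the image created in Step 2. The line below is a sketch with placeholder VM and image names, so verify the exact syntax against the acli help on your AOS version before running it.

acli vm.disk_create <new VM name> clone_from_image=<image name from Step 2>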

DaaS: Staying Connected, Anywhere, Anytime

This is an article that I originally wrote for my job. I am reposting it here with a few changes.

The pandemic has brought its share of challenges. One of the greatest challenges has been how to give workers the connectivity and access necessary to do their jobs when working from home. This has been especially true for organizations that previously had few to no remote workers. In a previous article, we talked about on-prem VDI and how it has matured over the years. Desktops-as-a-Service (DaaS) is the latest stage of VDI maturity.

What is DaaS?

Traditional VDI is best defined as a technology in which a desktop, running in its own virtual machine, is provisioned to a user on a one-to-one basis. It relies on the same hypervisor technology used to virtualize servers (ESXi, XenServer, AHV, etc.) to virtualize a desktop operating system for an end user, and users interact with the desktop through either a thin client or an HTML5 browser. The difference with DaaS is that it runs in the public cloud, while traditional VDI runs on-premises in a private cloud.

What are the advantages of DaaS?

Manageability

Manageability is DaaS’ greatest strength versus physical desktops and even traditional VDI. With physical desktops, IT staff must manage on-premises hardware: everything from firmware updates to component failures. Even with on-premises VDI, the physical host servers must be managed and maintained. With DaaS, there is no hardware on-prem. There are no support calls for a drive failure or to troubleshoot a misconfiguration on a top-of-rack switch. This frees IT staff to work on other tasks.

Scalability

With no hardware on-prem and everything in the public cloud, organizations can quickly and easily spin up hundreds or thousands of virtual desktops for users around the world. This contrasts with traditional on-prem VDI, in which an organization can quickly consume all available capacity and then wait weeks or even months until new hardware can be installed. Moreover, organizations with a seasonal workforce (or project-based teams) consume only as many resources as they need at the time. There are no unused resources, in stark contrast to what happens in many organizations today.

Security

When using a properly configured DaaS solution, an organization can ensure that data never leaves their environment. Moreover, there are settings that only allow connections from trusted IP addresses. Furthermore, DaaS allows for the automation of patching the desktop operating system (OS), which is often the greatest security vulnerability most organizations face.

What use cases are best suited for DaaS?

DaaS is suited for all the same use cases as traditional VDI. In three specific use cases, DaaS is far and away the superior choice:

  • Disaster Recovery – This is a perfect application for DaaS. Desktop images can be stored in the public cloud and virtual desktops only need to be spun up during a DR event. This is both resource and cost effective.
  • Remote and Contract Employees – Employees who have a short-term contract or who are remote and rarely, if ever, come into the office are great candidates for DaaS. This keeps the organization from procuring long-term resources unnecessarily.
  • Test and Dev – Many organizations struggle to provision adequate test and development environments. DaaS allows them to do so without having to use old or out of date gear.

Conclusion

DaaS is the evolution of traditional on-prem VDI. This pandemic has proven that organizations need tools that allow them to nimbly navigate the current landscape. DaaS’ manageability, scalability, and security features make it an excellent choice to assist organizations in navigating this evolving landscape.

IT Security Policies: Your First Line of Defense in Cybersecurity

This is an article that I originally wrote for my job. I am reposting it here with a few changes.

Technology Can’t Do Everything

You walk into the office Monday morning, attempt to log in to your desktop, and realize that you can’t because you’ve been hacked or a ransomware note is ominously dominating your screen. The first thing you may think to do is look at logs and use the other tools of the trade to figure out how this happened.

You find out later that the breach was caused by a phishing attack on an unsuspecting employee. This innocuous failure of operational security (OpSec) by one of your employees resulted in tremendous losses in man-hours, money, and reputation.

Often when we think of cybersecurity, the first things that come to mind are firewalls, endpoint protection, Security Information and Event Management (SIEM) solutions, and the like. While these products and solutions are a vital part of cybersecurity, they can only marginally influence human behavior. This is where policies are effective: they bridge the divide between technology and employee behavior, complementing technology by outlining expectations and defining consequences for noncompliance.

What’s the purpose?

To better understand the role of IT security policies as a part of a cybersecurity strategy, we need to understand why we have them in the first place and what we are trying to accomplish. Put simply, we want to keep our organization’s information safe. We accomplish this by ensuring three things:

  • Confidentiality – information must not be made available or disclosed to unauthorized individuals, entities, or processes.
  • Integrity – data must not be altered or destroyed in an unauthorized manner, and accuracy and consistency must be preserved regardless of changes.
  • Availability – information must be accessible and usable on demand by authorized entities.

To that end we often must build a strategy that incorporates technological and policy solutions which balance information security with the needs of the organization.

The Human Side of Tech

Now that we have briefly gone over the purpose of IT security policies, we must look at how they should be implemented. Effective policies not only protect data and help the organization avoid liability, but also take into consideration the culture of the organization and its employees. For example, an organization with a large remote workforce should require multi-factor authentication (MFA) to log in to applications, whereas a small organization with all employees working in one office could consider MFA optional.

Additionally, effective policy always reflects the following ideals:

  • Clear – vague policies confuse IT system users and leave room for bad actors to claim a plausible misunderstanding of the rules.
  • Consequential – policies without an enforcement mechanism with clear consequences for violations are not likely to be followed in large organizations.
  • Current – policies should be reviewed and modified periodically to reflect the technology and security posture of the organization as it is today.

Bottom Line

Until killer robots and rogue AI become our overlords, humans are going to be both the center and the weakest link of any cybersecurity strategy. And while technology will always be a huge part of cybersecurity, implementing effective IT security policies must not be overlooked.

Data Protection: The Fancy New Name for Backups?

This is an article that I originally wrote for my job. I am reposting it here with a few changes.

When I first saw the words Data Protection, I thought, “ugh, here is another new way to do backups that I have to keep up with.” The more I read about it, the more I understood that data protection and backups are like squares and rectangles. All data protection includes backups, but not all backups are data protection.

The Problem with Backups

Ransomware, data corruption, and data loss all strike fear into the hearts of the modern IT professional. All three have come to be known colloquially as RGEs, or Resume Generating Events, and have kept many an IT professional from sleeping well at night. As the modern datacenter has evolved from rows of tower servers running bare-metal workloads to racks full of blades running hyperconverged platforms, the way administrators back up and recover data has struggled to keep pace. The traditional solution has always been the 3-2-1 rule of backups coined by Peter Krogh. The rule states:

  • Keep three copies of your data
  • Use two different types of storage medium
  • Keep one copy of the data offsite

These were great rules in the past, but with the added complexity and amounts of data to be protected in the modern datacenter, these rules do not effectively mitigate RGEs.

What is Data Protection?

Traditional backup and restore consists of grouping workloads together in a backup schedule, backing them up, occasionally checking backup integrity, and restoring when necessary. This was fine when the datacenter was nothing more than a server room and the business could afford downtime, but in the modern datacenter it is woefully insufficient. The pain points of this strategy are all challenges that modern data protection has sought to mitigate using the following five strategies:

Centralized management – this allows the administrator to manage data protection across on-premises and public clouds.

Cross Cloud and Hypervisor Support – giving the administrator the ability to archive and/or setup disaster recovery in the public cloud or across hypervisors.

Data Lifecycle Management – automates moving backups and snapshots between hot, cold, and archival tiers.

Application Aware – uses VSS or CBT to capture database tables and logs.

Mitigates Malware and Other Threats – immutable data to resist ransomware and use of artificial intelligence to detect anomalies.

Avoiding Resume Generating Events

The solution seems simple: use a modern data protection solution. The reality is that many organizations have different reporting requirements, software and hardware stacks, budgetary constraints, and levels of operational intelligence to consider when making a purchasing decision. Considering these challenges, there are two main architectures to evaluate when looking at a modern data protection solution:

Hardware Appliance

These solutions are characteristically the easiest to install and maintain, though typically at a higher cost. The advantages are an integrated hardware and software stack and the ability to almost instantly live mount restores. Examples of solutions in this category are Rubrik, Cohesity, and Commvault.

Software Solution

Traditionally these solutions are lower in cost and have all the features of modern data protection, but the administrator typically loses the ease of use and elegance of the hardware appliance. The leader in this category is Veeam, with HYCU and Nakivo being great alternatives.

The modern datacenter continues to present numerous challenges to organizations, and data protection is no different. As always, any organization should look to its trusted advisor (VAR or MSP) to guide them in making an informed decision.

Hybrid Cloud Considerations

This is an article that I originally wrote for my job. I am reposting it here.

The Problem

The cloud continues to be a hot topic in 2019. Public cloud initiatives have been at the forefront of enterprise digital transformation for the past few years. As we discussed last month, the cloud is not a complete solution for most modern enterprises. Although the public cloud is great for its agility, scalable workloads, and reliability, many enterprise customers are hampered by the “Three C’s” of cost, connectivity, and complexity. In addition, they face pressure from other business units to be more agile; those units often take matters into their own hands and create the problem of shadow IT. This becomes even more of an issue when using a multi-cloud strategy. So, what is the solution? The solution is to combine the current on-premises private cloud with the public cloud to create a hybrid cloud infrastructure.

What is hybrid cloud?

Hybrid cloud refers to using on-premises infrastructure in combination with public cloud infrastructure. This allows enterprises to combine the best of both worlds: the scalability and reliability of the public cloud for web-tier and disaster recovery workloads, and the fixed costs and connectivity of the private cloud for ERP workloads.

Hybrid Cloud Solutions

The DIY Approach

This approach is not for the faint of heart. It is typically the most complicated way to create a hybrid cloud, requiring deep knowledge and understanding not only of on-premises enterprise and cloud architecture, but also of how to integrate them properly. It demands a new set of tools and skills, such as learning cloud storage, networking, instance types, and, most importantly, how to manage costs. Businesses with sufficient technical resources can overcome these barriers and create a robust hybrid cloud solution. Unfortunately, this is the first approach for many businesses. They often become overwhelmed and ultimately reduce their presence in the public cloud drastically, which discourages them from beginning any new public cloud projects.

The Single Hypervisor Approach

The single hypervisor approach is exemplified by Azure Stack and VMware Cloud on AWS. These solutions remove much of the complexity found in the DIY approach. Due to the tight integration between the hypervisor and the management stack, very few new skills are needed: an administrator who can manage vSphere in the private cloud has little to learn to manage VMware Cloud on AWS, and the same is true for Azure Stack and Windows Admin Center. The issues that remain are cost and lock-in. Both solutions carry financial costs often far above the DIY approach, putting them out of reach of many smaller enterprises. Additionally, each effectively locks the enterprise into a particular vendor’s ecosystem or creates knowledge silos within the organization. This ends up negating a lot of the agility that brought enterprises to the public cloud in the first place.

The Enterprise Multi-Cloud Strategy

The enterprise multi-cloud approach is the natural evolution of hybrid cloud. It allows enterprises to take advantage of the benefits of each of the three major cloud providers’ (AWS, Azure, and GCP) offerings, while being able to easily move workloads between cloud providers and the private cloud and to manage costs along the way. This is exemplified by Nutanix and its products Xi Beam and Calm, which give enterprises the insight and tools they need to optimize and automate their public cloud workloads. Centralized financial governance is one of the most important components of the multi-cloud strategy; Xi Beam not only centralizes financial governance but also allows for remediation of under- and over-utilized public cloud resources. Additionally, Xi Beam offers compliance governance with automated audit checks, which removes another layer of complexity from the multi-cloud strategy. Another important component of the multi-cloud strategy is automation: Calm gives enterprises the ability to quickly provision applications and allows for self-service resource consumption by other business units, enabling the agility for which the public cloud is well known while also mitigating shadow IT.

Where Do We Go from Here?

Hybrid cloud is the enterprise infrastructure model of the foreseeable future. The control, flexibility, and ease of use it offers have made the pure public cloud model unattractive and the pure private cloud model obsolete. It is important for each enterprise to evaluate its needs and technical resources to decide which hybrid cloud model best suits it.