Traveling During the Pandemic: Mexico City

Wait, what?

Mexico City is one of my favorite cities in the world. It’s a leading city of Spanish-speaking Latin America, with world-class museums, restaurants, and nightlife. When I was a kid my parents told me stories about their adventures in Mexico in the early eighties. I was also influenced by reading the book 1493. I was always intrigued by the place but prioritized travel further afield. I went for the first time in 2016 and fell in love with the city, but wasn’t able to visit again until October of 2020.

Getting There

There are currently no restrictions or special requirements to fly from the US to Mexico, nor is there a test required beforehand. As a precaution, I always use the Kayak travel restrictions site to verify before I book a flight.

Hilton Reforma

I took a direct flight on Aeroméxico. Covid has caused them to make a few changes to their service (just like all other airlines). First, masks are obligatory throughout the entire flight, unless eating or drinking. Second, boarding is from back to front, including business class and frequent fliers. Third, there was no meal service, only drinks. All of these things were perfectly reasonable.

Entering the country was very quick and easy. The only difference was that you had to present a completed form (Cuestionario de identificación de factores de riesgo en viajeros) to the immigration officials. This form asks you if you have any Covid symptoms. The form is handed out on the plane and also available in the immigration hall.  

Cost

In October 2020 the Mexican Peso was hovering around 21 pesos to the dollar. As with most global cities, your mileage may vary. Mexico City can be very cheap or very expensive depending on your tastes. I went to a pretty cosmopolitan seafood restaurant with a friend, and it was around $80 for two with drinks and tip. But I also had meals for $5 and ice cream on the street for 25 cents.

The Language

As with all global cities, you can get around with English. But knowing a few words of Spanish will go a long way, especially when negotiating with street vendors. Most international hotel brands and hotels catering to a tourist or business clientele will have English-speaking staff. As always, people are usually very forgiving when you try to use Spanish to communicate with them.

Things to Do

There are so many things to do in Mexico City, even during the pandemic. There are a few changes. Some museums and destinations, like the Frida Kahlo Museum, require you to buy tickets online. This allows them to limit the amount of people entering every hour. Another change is that there may be a curfew beginning at 11pm in certain areas. So, you may find that some restaurants or bars close earlier than their pre-pandemic hours indicate. Also, while it may not be a requirement where you’re coming from, wearing a mask outside is mandatory in Mexico City. Some areas have police enforcing this ordinance.

On the way to the Zocalo

Even with those restrictions, there are so many things to do. There’s great art at the Palacio de Bellas Artes, which is very close to the Diego Rivera Museum. Both of those places are located next to Alameda Central, which is one of the nicest parks in Mexico City. My descriptions don’t even begin to scratch the surface. There’s a good guide at WikiTravel, which I recommend.

Safety

Just a quick word on safety. I have never had an incident in Mexico City. Just like anywhere else, you have to be careful. I don’t carry my passport or large sums of money with me when I’m walking around town. Nor do I wear expensive jewelry or talk on the phone when I’m in an unfamiliar area. Also, I make sure to take an Uber or a licensed taxi when I need to travel long distances. These are the same precautions I have taken in all my travels and they have served me well.

Bottom Line:

Mexico City is a great place to travel to even during the pandemic. Obviously everyone has to decide on their own personal risk tolerance, but if you decide to go there is plenty to do and see.

How to Resolve Horizon View vCenter Connection Errors

I was recently working with a client that needed me to migrate Horizon View VMs to new storage. I thought it would be as easy as changing the storage settings for the pool and performing a rebalance across the cluster. Unfortunately, no rebalance operation was successful and I saw the following errors:

Provisioning error occurred for Machine XXX: Refit operation rebalance failed

vCenter at address <vCenter Address> has been temporarily disabled (this error was typically followed by another notification that the same vCenter had been enabled)

I was able to resolve the issue by following VMware KB 1030996. In the case of this customer, there was only one working production pool. To test whether the issue was with the pool in use, I created a new temporary pool. I then tried recompose actions and looked for errors in the event log. There were none.

Creating a new temporary pool proved critical to resolving this issue. The crux of the problem, as laid out in the KB, is that there are two vCenter entries in the Composer database. In my case, one entry used the IP address and the other the FQDN (the FQDN being the correct entry). The correct Deployment Group ID was displayed in the View Composer database entry for the new temporary pool I created. I was able to take that ID and replace it in the entries for the current production pool. After that was done, I was able to easily rebalance the production pool.
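As a rough sketch of what that database fix looks like (the table and column names below are placeholders for illustration, not the actual View Composer schema; follow KB 1030996 for the real queries, and always back up the Composer database first):

```sql
-- Placeholder schema for illustration only; the real Composer tables differ.
-- 1. Find the Deployment Group ID recorded for the new, working temporary pool:
SELECT deployment_group_id
FROM composer_pool_entries
WHERE pool_name = 'TempPool';

-- 2. Point the production pool's entries at that same ID:
UPDATE composer_pool_entries
SET deployment_group_id = '<ID returned above>'
WHERE pool_name = 'ProductionPool';
```

The key idea is simply that the healthy pool tells you which Deployment Group ID is valid, and the broken pool's rows are updated to match it.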

How to Detach/Attach Nutanix AHV Disks

This is the workflow on how to migrate a Nutanix AHV disk from one VM to another. A reason you may do this is because you have an old Server 2008 file server that you want to migrate to Server 2019. Nutanix handles this differently than VMware in that AHV does not allow you to detach and reattach disks as you would in vSphere.

The process to do this is outlined in both Nutanix KB 3577 and KB 8062. Unfortunately, the way it is worded can be confusing to those without a lot of Nutanix experience. This process requires you to open an SSH session (using PuTTY, Tera Term, etc.) to one of the CVMs in the cluster.

Step 1. Find the VM Disk File

To find the VM Disk file you will use the command below:

acli vm.get <VM name> include_vmdisk_paths=1 | grep -E 'disk_list|vmdisk_nfs_path|vmdisk_size|vmdisk_uuid'

The output should look like the picture below.

Make note of the disk size and disk path. The disk size should correspond with the data disk you would like to migrate to your new VM.

Step 2. Create an Image from the VM Disk

To create a disk image you must run the following command:

acli image.create <image name> source_url=<url of source VM disk> container=<target container name> image_type=kDiskImage

Note: To get the source URL you will have to prepend nfs://127.0.0.1 to the NFS path output from Step 1.

For example - nfs://127.0.0.1/BronzeTier_StorageContainer01/.acropolis/vmdisk/be06372a-b8c5-4544-b451-12b608615248  
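Putting that note into a quick shell snippet (the NFS path here is just the example value from above; substitute your own path from the Step 1 output):

```shell
# Example NFS path taken from the Step 1 output (yours will differ).
NFS_PATH="/BronzeTier_StorageContainer01/.acropolis/vmdisk/be06372a-b8c5-4544-b451-12b608615248"

# Prepend nfs://127.0.0.1 to form the source_url for image.create.
SOURCE_URL="nfs://127.0.0.1${NFS_PATH}"
echo "$SOURCE_URL"
```

The printed URL is what you pass as the source_url parameter to the image.create command above.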

The output of the command should appear as shown below.

Step 3. Attach the disk to the new VM.

Attaching the disk to the new VM can be done from Prism Element on the same cluster.

1. Locate the VM in the VM menu in Prism and click Update.

2. From the Update screen, select +Add New Disk.

3. In the Add Disk menu, select ‘Clone from Image Service’ from the Operation drop-down menu.

4. In the Image menu, select the image you created in Step 2 and click Add.

Once this is completed you can log into your VM and initialize the disk in the operating system.

DaaS: Staying Connected, Anywhere, Anytime

Photo by Caspar Camille Rubin on Unsplash

This is an article that I originally wrote for my job. I am reposting it here with a few changes.

The pandemic has brought its share of challenges. One of the greatest challenges has been how to give workers the connectivity and access necessary to do their jobs when working from home. This has been especially true for organizations that previously had few to no remote workers. In a previous article, we talked about on-prem VDI and how it has matured over the years. Desktops-as-a-Service (DaaS) is the latest stage of VDI maturity.

What is DaaS?

Traditional VDI is best defined as a technology in which one desktop, running in its own virtual machine, is provisioned to a user on a one-to-one basis. It uses the same hypervisor technology used to virtualize server virtual machines (ESXi, XenServer, AHV, etc.) to virtualize a desktop operating system for an end user. Users then interact with the desktop using either a thin client or an HTML5 browser. The difference is that DaaS runs in a public cloud, while traditional VDI is on-premises in a private cloud.

What are the advantages of DaaS?

Manageability

Manageability is DaaS’ greatest strength versus physical desktops and even traditional VDI. With physical desktops, IT staff must manage on-premises hardware; this implies everything from firmware updates to component failure. Even with on-premises VDI, the physical host servers must be managed and maintained. With DaaS, there is no hardware on-prem. There are no support calls for a drive failure or to troubleshoot a misconfiguration on a top-of-rack switch. This frees IT staff to work on other tasks.

Scalability

With no hardware on-prem and everything in the public cloud, organizations can quickly and easily spin up hundreds or thousands of virtual desktops for users around the world. This contrasts with traditional on-prem VDI, in which an organization can quickly use all available capacity and then wait weeks or even months until new hardware can be installed. Moreover, organizations with a seasonal workforce (or project-based teams) will only consume as many resources as they need at that time. There are no unused resources, which is in stark contrast to what happens in many organizations today.

Security

When using a properly configured DaaS solution, an organization can ensure that data never leaves their environment. Moreover, there are settings that only allow connections from trusted IP addresses. Furthermore, DaaS allows for the automation of patching the desktop operating system (OS), which is often the greatest security vulnerability most organizations face.

What use cases are best suited for DaaS?

DaaS is suited for all the same use cases as traditional VDI. In three specific use cases, DaaS is far and away the superior choice:

  • Disaster Recovery – This is a perfect application for DaaS. Desktop images can be stored in the public cloud and virtual desktops only need to be spun up during a DR event. This is both resource and cost effective.
  • Remote and Contract Employees – Employees who have a short-term contract or who are remote and rarely, if ever, come into the office are great candidates for DaaS. This keeps the organization from procuring long-term resources unnecessarily.
  • Test and Dev – Many organizations struggle to provision adequate test and development environments. DaaS allows them to do so without having to use old or out of date gear.

Conclusion

DaaS is the evolution of traditional on-prem VDI. This pandemic has proven that organizations need tools that allow them to nimbly navigate the current landscape. DaaS’ manageability, scalability, and security features make it an excellent choice to assist organizations in navigating this evolving landscape.

IT Security Policies: Your First Line of Defense in Cybersecurity

This is an article that I originally wrote for my job. I am reposting it here with a few changes.

Technology Can’t do Everything

You walk into the office Monday morning, attempt to log in to your desktop, and realize that you can’t because you’ve been hacked or there is a ransomware note ominously dominating your screen. The first thing you may think of is to look at logs and use the other tools of the trade to figure out how this happened.

You find out later that this breach was caused by a phishing attack on an unsuspecting employee. This innocuous failure of operational security (OpSec) by one of your employees resulted in tremendous losses in man-hours, money, and reputation.

Often when we think of cybersecurity, the first thing that usually comes to mind are firewalls, endpoint protection, Security Information and Event Management (SIEM) solutions, and the like. While these products and solutions are a vital part of cybersecurity, they can only marginally influence human behavior. This is where policies are effective; they can bridge the divide between technology and employee behavior, complementing technology by outlining expectations and defining consequences for noncompliance.

What’s the purpose?

To better understand the role of IT security policies as a part of a cybersecurity strategy, we need to understand why we have them in the first place and what we are trying to accomplish. Put simply, we want to keep our organization’s information safe. We accomplish this by ensuring three things:

  • Confidentiality – information must not be made available or disclosed to unauthorized individuals, entities, or processes.
  • Integrity – data must not be altered or destroyed in an unauthorized manner, and accuracy and consistency must be preserved regardless of changes.
  • Availability – information must be accessible and usable on demand by authorized entities.

To that end we often must build a strategy that incorporates technological and policy solutions which balance information security with the needs of the organization.

The Human Side of Tech

Now that we have briefly gone over the purpose of IT security policies, we must look at how they should be implemented. Effective policies not only protect data and help the organization avoid liability, but also take into consideration the culture of the organization and its employees. For example, an organization with a large remote workforce should require multi-factor authentication (MFA) for logging in to applications, whereas a small organization with all employees working in one office could consider MFA optional.

Additionally, effective policy always reflects the following ideals:

  • Clear – vague policies confuse IT system users and leave room for bad actors to claim a plausible misunderstanding of the rules.
  • Consequential – policies without an enforcement mechanism with clear consequences for violations are not likely to be followed in large organizations.
  • Current – policies should be reviewed and modified periodically to reflect the technology and security posture of the organization as it is today.

Bottom Line

Until killer robots and rogue AI become our overlords, humans are going to be the center, and the weakest link, of any cybersecurity strategy. And while technology will always be a huge part of cybersecurity, implementing effective IT security policies must not be overlooked.

Data Protection: The Fancy New Name for Backups?

This is an article that I originally wrote for my job. I am reposting it here with a few changes.

When I first saw the words Data Protection, I thought, “ugh, here is another new way to do backups that I have to keep up with.” The more I read about it, the more I understood that data protection and backups are like squares and rectangles. All data protection includes backups, but not all backups are data protection.

The Problem with Backups

Ransomware, data corruption, and data loss all strike fear into the hearts of the modern IT professional. All three of these things have come to be known colloquially as RGEs or Resume Generating Events and have kept many an IT professional from sleeping well at night. As the modern datacenter has evolved from rows of tower servers running bare metal workloads to racks full of blades running hyperconverged platforms, how an administrator backs up and recovers data has struggled to keep up. The traditional solution has always been to use the 3-2-1 rule of backups coined by Peter Krogh. The rule states:

  • Keep three copies of your data
  • Use two different types of storage medium
  • Keep one copy of the data offsite

These were great rules in the past, but with the added complexity and amounts of data to be protected in the modern datacenter, these rules do not effectively mitigate RGEs.

What is Data Protection?

Traditional backup and restore consists of grouping workloads together in a backup schedule, backing them up, occasionally checking backup integrity, and restoring when necessary. This was fine when the datacenter was nothing more than a server room and the business could afford downtime, but in the modern datacenter this is woefully insufficient. The pain points of this strategy are all challenges that modern data protection has sought to mitigate using the following five strategies:

Centralized management – this allows the administrator to manage data protection across on-premises and public clouds.

Cross Cloud and Hypervisor Support – giving the administrator the ability to archive and/or setup disaster recovery in the public cloud or across hypervisors.

Data Lifecycle Management – automates moving backups and snapshots between hot, cold, and archival tiers.

Application Aware – uses VSS or CBT to capture database tables and logs.

Mitigates Malware and Other Threats – immutable data to resist ransomware and use of artificial intelligence to detect anomalies.

Avoiding Resume Generating Events

The solution seems simple: use a modern data protection solution. The reality is that many organizations have different reporting requirements, software and hardware stacks, budgetary constraints, and levels of operational maturity to consider when making a purchasing decision. Considering these challenges, there are two main architectures to consider when looking at a modern data protection solution:

Hardware Appliance

These solutions are characteristically the easiest to install and maintain, though typically at a higher cost. The advantage is an integrated hardware and software stack, and the ability to almost instantly live mount restores. Examples of solutions in this category are Rubrik, Cohesity, and Commvault.

Software Solution

Traditionally these solutions are lower in cost and will have all the features of modern data protection, but the administrator will typically lose the ease of use and elegance of the hardware appliance. The leader in this category is Veeam, with HYCU and Nakivo being great alternatives.

The modern datacenter continues to present numerous challenges to organizations, and data protection is no different. As always, any organization should look to its trusted advisor (VAR or MSP) to guide them in making an informed decision.

Hybrid Cloud Considerations

This is an article that I originally wrote for my job. I am reposting it here.

The Problem

The cloud continues to be a hot topic in 2019. Public cloud initiatives have been at the forefront of enterprise digital transformation for the past few years. As we discussed last month, the cloud is not a complete solution for most modern enterprises. Although the public cloud is great for its agility, scalable workloads, and reliability, many enterprise customers are hampered by the “Three C’s” of cost, connectivity, and complexity. In addition, they face pressure from other business units to be more agile; these units often take matters into their own hands, creating the problem of shadow IT. This becomes even more of an issue when using a multi-cloud strategy. So, what is the solution? The solution is to combine the current on-premises private cloud with the public cloud to create a hybrid cloud infrastructure.


What is hybrid cloud?

Hybrid cloud refers to using both on-premises infrastructure in combination with public cloud infrastructure. This allows enterprises to combine the best of both worlds: scalability and reliability for web-tier and disaster recovery workloads found in the public cloud, with the fixed cost and connectivity for ERP workloads found in the private cloud.


Hybrid Cloud Solutions


The DIY Approach

This approach is not for the faint of heart. It is typically the most complicated way to create a hybrid cloud. It requires deep knowledge and understanding of not only on-premises enterprise and cloud architecture, but also how to integrate them properly. This requires a new set of tools and skills, such as learning cloud storage; networking; instance types; and, most importantly, how to manage costs. Businesses with sufficient technical resources can overcome these barriers and create a robust hybrid cloud solution. Unfortunately, this is the first approach for many businesses. Oftentimes they become overwhelmed and ultimately reduce their presence in the public cloud drastically, which discourages them from beginning any new public cloud projects.


The Single Hypervisor Approach

The single hypervisor approach is one that is exemplified by Azure Stack and VMware Cloud on AWS. These solutions remove a lot of the complexity found in the DIY approach. Due to the tight integration between the hypervisor and management stack, very few new skills are needed. An administrator that can manage vSphere in the private cloud has little to learn to be able to manage VMware Cloud on AWS. The same is true for Azure Stack and Windows Admin Center. The issues that remain are the costs and lock-in. Both of these solutions have financial costs that are often far above the DIY approach, putting them out of reach of many smaller enterprises. Additionally, each of these solutions effectively locks the enterprise into a particular vendor’s ecosystem or creates knowledge silos within the organization. This ends up negating a lot of the agility that brought enterprises to the public cloud in the first place.


The Enterprise Multi-Cloud Strategy

The enterprise multi-cloud approach is the natural evolution of hybrid cloud. It allows enterprises to take advantage of the benefits in each of the three major cloud providers’ (AWS, Azure, and GCP) offerings, while also being able to easily move workloads between cloud providers and the private cloud, all while managing costs. This is exemplified by Nutanix and its products Xi Beam and Calm. These solutions give enterprises the insight and tools they need to optimize and automate their public cloud workloads. Centralized financial governance is one of the most important components of the multi-cloud strategy. Xi Beam not only centralizes financial governance but also allows for remediation of under- and over-utilized public cloud resources. Additionally, Xi Beam offers compliance governance with automated audit checks, which removes another layer of complexity from the multi-cloud strategy. Another important component of the multi-cloud strategy is automation. Calm gives enterprises the ability to quickly provision applications and allow for self-service resource consumption by other business units, enabling the agility for which the public cloud is well known, as well as mitigating shadow IT.


Where Do We Go from Here?

Hybrid cloud is the enterprise infrastructure model of the foreseeable future. Its control, flexibility, and ease of use have made the pure public cloud model unattractive and the pure private cloud model obsolete. It is important for each enterprise to evaluate its needs and technical resources to decide which of the hybrid cloud models best suits it.

Why Bogota is One of My Favorite Cities in the World

Colombia, Really?

View of Bogota from Monserrate

I have been traveling to Colombia for three years. I originally went for two reasons. First, it is the birthplace of my favorite author, Gabriel Garcia Marquez. Second, because I’m a cheap traveler and I had a hunch that the places most people are scared to go to offer the most value. This has turned out to be one of my best hunches ever. I have fallen in love with the people, the regions and the culture. I have been to six major cities in Colombia, most of them multiple times, yet Bogota is my favorite. It is a city of over eight million people, full of culture, nightlife, and tons of things to do.

Getting There

One of the great things about Bogota is that it’s close. It’s a direct flight from almost all the major cities in the US; from DFW it’s only about six hours. Compared to the over 16 hours it takes to get to Dubai (another of my favorite cities), it’s a short flight. Also, it’s a major hub for Avianca and LATAM, serving as a gateway to South America. Colombia is also one of the many countries that are visa-free for US passport holders. In addition, during many parts of the year, a round-trip ticket costs less than $500.

Cost

For me, this is one of the areas where Colombia, and Bogota in particular, shines. Right now, the exchange rate is 2,886 Colombian pesos to one US dollar. I usually pay $35-40 a night for my hotel stays. For really simple meals I pay anywhere from $3-5, including a drink. I’ve even gone to have Japanese food with friends in one of the ritzy areas of the city, and for 10 people the bill came to $125.

The Language

While Spanish isn’t absolutely necessary to enjoy this city, knowing Spanish definitely makes it much more enjoyable. Also, without knowing any Spanish, you’ll have to stay at some of the more expensive hotels with English-speaking staff.

It is my opinion that the Spanish spoken in Bogota, Colombia is some of the easiest to understand in the world. I’ve been to places like the Dominican Republic where the speed of speech, slang, and accent made it very hard to understand people. In Bogota, that’s not much of an issue. If you’re a person that has spent years learning to speak Spanish, but has never had enough practice, Bogota is the place for you. I’ve also found that people are generally forgiving and patient with mistakes made while trying to speak the language.

Things to Do

Bike Tour in La Candelaria

As the capital and biggest city of Colombia, Bogota has a lot to offer. There is an old church up the mountain called Monserrate. There are bike tours of La Candelaria, which is an old part of the city with some buildings that date from the 16th century. This is actually my favorite part of town. On the weekends and during certain holidays, one of the streets is closed to traffic, and there are events and parades. This part of town is also home to the Gold Museum. In addition, outside of the city there are emerald mines that are open to tourists.

Safety

Just a quick word on safety. I have never had an incident in Colombia. Just like anywhere else, you have to be careful. I don’t carry my passport or large sums of money with me when I’m walking around town. Nor do I wear expensive jewelry or talk on the phone when I’m in an unfamiliar area. Also, I make sure to take an Uber or a licensed taxi when I need to travel long distances. These are the same precautions I have taken in all my travels and they have served me well.

Bottom Line: Bogota is a wonderful city with great people and culture. It’s cheap, easy to get to, and will surprise you if you give it the chance.