Digital Transformation Spotlight: Guyana and Its Potential

Discover the potential and progress of digital transformation in Guyana. Learn about the country’s digital governance roadmap, e-governance, and efforts to expand broadband access to create a digital state. Explore the opportunities and benefits of ICT to help drive economic growth and transform public services.

Digital transformation is a term that has been used frequently in IT discussions. We live in an era where digital technologies are expanding exponentially, and digital transformation is reshaping the business landscape globally. However, this transformation has not yet taken root in many places, including Guyana, a developing country in South America.

As of January 2021, internet penetration in Guyana was only 37.3%, presenting significant opportunities for digital transformation. One of the most significant roadblocks to progress is people’s hesitation to switch from traditional methods to digital ones due to privacy concerns. Building trust is crucial to developing a positive digital mindset and getting people to embrace digital transformation fully.

To address this issue, the Guyanese government has engaged the Estonian government to help develop a “Digital Governance Roadmap for the Govt of Guyana.” This roadmap focuses on improving the country’s technological infrastructure by implementing suggestions for e-governance, improving data quality, planning and implementing connectivity and broadband access, and developing cybersecurity strategies.

Digital transformation is especially critical in Guyana’s oil and gas sector, which presents unique challenges, such as remote locations, difficult terrain, and harsh weather conditions. However, digital transformation can help address these challenges by improving communication and collaboration between teams, streamlining processes, and providing real-time data for decision-making.

The government’s efforts to embrace digital transformation have already shown significant progress in Guyana’s urban and rural areas. Broadband access is being rolled out to isolated rural areas to foster greater national integration, and some 116 government bureaus and departments, nine student hostels and dormitories, and three nursing establishments have been connected to enhance resident engagement and foster social advancement.

The oil and gas industry requires a highly-skilled workforce, and digital transformation can play a critical role in providing training and development opportunities for workers. This can include online courses, virtual reality simulations, and other digital learning tools that help workers gain the skills they need to succeed in the industry.

In addition, digital transformation can play a role in creating a more sustainable and environmentally responsible oil and gas sector. This includes the use of data to monitor and reduce emissions, as well as using digital tools to track and manage waste and water usage.

However, digital transformation needs to be more widely adopted in Guyana’s rural areas, where internet penetration is low and people have not yet fully embraced digital technologies. The potential benefits for these areas are enormous. For example, a digital economy can reduce Guyana’s dependency on external goods and services and help the country become a leading innovator and developer in the Caribbean region.

The country’s efforts to embrace digital transformation have already yielded significant progress, as seen in the collaboration with the Inter-American Development Bank and the Estonian government. Guyana’s government has recognized the importance of digital transformation and is taking concrete steps, guided by the roadmap described above, to improve the country’s technological infrastructure.

One of the significant successes of Guyana’s digital transformation is linking every person, community, and government agency within the nation. This is particularly transformative in countries where government services have had a more challenging time penetrating beyond urban areas. Also, with digitization, even the most remote parts of Guyana can be connected and fully integrated with its central city hubs, the Caribbean, and beyond.

Another success story in Guyana’s digital transformation is the use of information and communication technology to add value to the manufacturing and service sectors and boost economic and digital transformation. This is especially important considering Guyana’s relatively recent entry into the oil and gas sector.

Through the adoption of new technologies and a focus on broadband access and e-governance, Guyana can create a more connected and efficient industry that is better equipped to address the unique challenges presented by the oil and gas industry.

However, digital transformation requires more than just infrastructure and training. It also requires a shift in mindset, both for individuals and organizations. The Guyanese government has recognized this and is working to promote a positive digital mindset by building trust and demonstrating the benefits of digital transformation to citizens.

As digital technologies continue to expand exponentially, the potential for digital transformation in Guyana is vast. By embracing new technologies, improving connectivity and infrastructure, and promoting a positive digital mindset, Guyana can create a more connected, efficient, and sustainable future for its citizens.

In conclusion, digital transformation is a crucial element of modern business and technology, and it is vital to the growth and development of countries worldwide. Guyana is a prime example of how a developing nation can embrace digital transformation to create a more connected and efficient industry, enhance public services, and promote economic growth. With the continued adoption of new technologies and infrastructure, Guyana can create a more sustainable and prosperous future for its people, with opportunities for digital innovation and industry growth.

How to Resolve Connection Errors Between Horizon and vCenter

Learn how to resolve connection errors between Horizon and vCenter with our step-by-step guide. Follow our troubleshooting steps to get your virtual desktop infrastructure running smoothly.

I was recently working with a client who needed me to migrate Horizon View VMs to new storage. I thought it would be as easy as changing the storage settings for the pool and performing a rebalance across the cluster. Unfortunately, no rebalance operation was successful, and I saw the following errors:

Provisioning error occurred for Machine XXX: Refit operation rebalance failed

vCenter at address <vCenter Address> has been temporarily disabled (this error was typically followed by another notification that the same vCenter had been enabled)

I was able to resolve the issue by following VMware KB 1030996. In the case of this customer, there was only one working production pool. To confirm whether the problem was specific to that pool, I created a new temporary pool, tried recompose actions against it, and looked for errors in the event log. There were none.

Creating a new temporary pool proved critical to resolving this issue. The crux of the problem, as laid out in the KB, is that there were two vCenter entries in the composer database: in my case, one for the IP address and one for the FQDN (the FQDN being the correct entry). The View Composer database entry for the new temporary pool displayed the correct Deployment Group ID. I took that ID and replaced it in the entries for the current production pool. After that was done, I was able to easily rebalance the production pool.
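
To make the fix concrete, here is a rough sketch of the kind of inspection and update the KB walks through, run with sqlcmd against the composer database. The database, table, and column names below are hypothetical placeholders rather than the actual View Composer schema, so follow the KB exactly and back up the database before changing anything:

sqlcmd -S <SQL Server> -d <ComposerDB> -Q "SELECT * FROM dbo.VcEntries" (hypothetical table; expect to see both the IP-based and the FQDN-based vCenter entries)

sqlcmd -S <SQL Server> -d <ComposerDB> -Q "UPDATE dbo.PoolEntries SET DeploymentGroupId = '<ID from temporary pool>' WHERE PoolName = '<production pool>'" (hypothetical table and columns; applies the known-good Deployment Group ID to the production pool entries)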

How to Detach/Attach Nutanix AHV Disks

Learn how to detach and attach Nutanix AHV disks in this step-by-step guide. Properly detaching and attaching disks is crucial for maintaining the performance and integrity of your Nutanix AHV environment. Follow these instructions to ensure a smooth and successful process.

This is the workflow for migrating a Nutanix AHV disk from one VM to another. One reason you might do this is that you have an old Server 2008 file server that you want to migrate to Server 2019. Nutanix handles this differently than VMware in that AHV does not allow you to detach and reattach disks as you would in vSphere.

The process is outlined in both Nutanix KB 3577 and KB 8062. Unfortunately, the way it is worded can be confusing to those without a lot of Nutanix experience. This process requires you to open an SSH session (using PuTTY, Tera Term, etc.) to one of the CVMs in the cluster.
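
For example, from a workstation with an SSH client available (the IP address is a placeholder; nutanix is the standard CVM user):

ssh nutanix@<CVM IP address>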

Step 1. Find the VM Disk File

To find the VM disk file, use the command below:

acli vm.get <VM name> include_vmdisk_paths=1 | grep -E 'disk_list|vmdisk_nfs_path|vmdisk_size|vmdisk_uuid'

The output lists each virtual disk’s NFS path, size, and UUID.
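
Because the grep filters the listing down to those fields, the result should resemble the sketch below (the size value, 100 GiB in bytes, is illustrative; the path and UUID reuse the example from Step 2):

disk_list {
  vmdisk_nfs_path: "/BronzeTier_StorageContainer01/.acropolis/vmdisk/be06372a-b8c5-4544-b451-12b608615248"
  vmdisk_size: 107374182400
  vmdisk_uuid: "be06372a-b8c5-4544-b451-12b608615248"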

Make note of the disk size and disk path. The disk size should correspond with the data disk you would like to migrate to your new VM.

Step 2. Create an Image from the VM Disk

To create a disk image, run the following command:

acli image.create <image name> source_url=<url of source VM disk> container=<target container name> image_type=kDiskImage

Note: To get the source URL, prepend nfs://127.0.0.1 to the NFS path output from Step 1.

For example: nfs://127.0.0.1/BronzeTier_StorageContainer01/.acropolis/vmdisk/be06372a-b8c5-4544-b451-12b608615248
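
Putting the pieces together, a complete command might look like the following (the image name and target container here are illustrative):

acli image.create Srv2008-DataDisk source_url=nfs://127.0.0.1/BronzeTier_StorageContainer01/.acropolis/vmdisk/be06372a-b8c5-4544-b451-12b608615248 container=BronzeTier_StorageContainer01 image_type=kDiskImage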

When the command completes successfully, the new image appears in the cluster’s image list (you can verify this with acli image.list).

Step 3. Attach the Disk to the New VM

Attaching the disk to the new VM can be done from Prism Element on the same cluster.

1. Locate the VM from the VM menu in Prism and click Update.

2. From the Update screen, select +Add New Disk.

3. In the Add Disk menu, select ‘Clone from Image Service’ from the Operation drop-down menu.

4. In the Image menu, select the image you created in Step 2 and click Add.

Once this is completed, you can log in to your VM and bring the disk online in the operating system.
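
For a Windows guest, a minimal diskpart sketch looks like this (the disk number is an assumption, so confirm it with list disk before selecting):

diskpart
DISKPART> list disk
DISKPART> select disk 1
DISKPART> online disk
DISKPART> attributes disk clear readonly

Once the disk is online, assign a drive letter in Disk Management if one is not mapped automatically.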

DaaS: Staying Connected, Anywhere, Anytime

The pandemic has brought its share of challenges. One of the greatest challenges has been how to give workers the connectivity and access necessary to do their jobs when working from home. This has been especially true for organizations that previously had few to no remote workers. In a previous article, we talked about on-prem VDI and how it has matured over the years. Desktops-as-a-Service (DaaS) is the latest stage of VDI maturity.

What is DaaS?

Traditional VDI is best defined as a technology in which one desktop, running in its own virtual machine, is provisioned to a user on a one-to-one basis. It uses the same hypervisor technology used to virtualize server virtual machines (ESXi, XenServer, AHV, etc.) to virtualize a desktop operating system for an end user, who then interacts with the desktop through either a thin client or an HTML5 browser. The difference is that DaaS runs in a public cloud, while traditional VDI runs on-premises in a private cloud.

What are the advantages of DaaS?

Manageability

Manageability is DaaS’ greatest strength versus physical desktops and even traditional VDI. With physical desktops, IT staff must manage on-premises hardware; this implies everything from firmware updates to component failure. Even with on-premises VDI, the physical host servers must be managed and maintained. With DaaS, there is no hardware on-prem. There are no support calls for a drive failure or to troubleshoot a misconfiguration on a top-of-rack switch. This frees IT staff to work on other tasks.

Scalability

With no hardware on-prem and everything in the public cloud, organizations can quickly and easily spin up hundreds or thousands of virtual desktops for users around the world. This contrasts with traditional on-prem VDI, in which an organization can quickly exhaust all available capacity and then wait weeks or even months until new hardware can be installed. Moreover, organizations with a seasonal workforce (or project-based teams) will only consume as many resources as they need at that time. There are no unused resources, which is in stark contrast to what happens in many organizations today.

Security

When using a properly configured DaaS solution, an organization can ensure that data never leaves its environment. Moreover, there are settings that allow connections only from trusted IP addresses. Furthermore, DaaS allows for automated patching of the desktop operating system (OS); an unpatched OS is often the greatest security vulnerability an organization faces.

What use cases are best suited for DaaS?

DaaS is suited for all the same use cases as traditional VDI. In three specific use cases, DaaS is far and away the superior choice:

  • Disaster Recovery – This is a perfect application for DaaS. Desktop images can be stored in the public cloud and virtual desktops only need to be spun up during a DR event. This is both resource and cost effective.
  • Remote and Contract Employees – Employees who have a short-term contract or who are remote and rarely, if ever, come into the office are great candidates for DaaS. This keeps the organization from procuring long-term resources unnecessarily.
  • Test and Dev – Many organizations struggle to provision adequate test and development environments. DaaS allows them to do so without having to rely on old or out-of-date gear.

Conclusion

DaaS is the evolution of traditional on-prem VDI. This pandemic has proven that organizations need tools that allow them to nimbly navigate the current landscape. DaaS’ manageability, scalability, and security features make it an excellent choice to assist organizations in navigating this evolving landscape.

IT Security Policies: Your First Line of Defense in Cybersecurity

Technology Can’t Do Everything

You walk into the office Monday morning, attempt to log in to your desktop, and realize that you can’t because you’ve been hacked or a ransomware note is ominously dominating your screen. The first thing you may think of is to look at logs and use the other tools of the trade to figure out how this happened.

You find out later that the breach was caused by a phishing attack on an unsuspecting employee. This seemingly innocuous failure of operational security (OpSec) by one of your employees resulted in tremendous losses in man-hours, money, and reputation.

When we think of cybersecurity, the first things that usually come to mind are firewalls, endpoint protection, Security Information and Event Management (SIEM) solutions, and the like. While these products and solutions are a vital part of cybersecurity, they can only marginally influence human behavior. This is where policies are effective; they bridge the divide between technology and employee behavior, complementing technology by outlining expectations and defining consequences for noncompliance.

What’s the purpose?

To better understand the role of IT security policies as a part of a cybersecurity strategy, we need to understand why we have them in the first place and what we are trying to accomplish. Put simply, we want to keep our organization’s information safe. We accomplish this by ensuring three things:

  • Confidentiality – information must not be made available or disclosed to unauthorized individuals, entities, or processes.
  • Integrity – data must not be altered or destroyed in an unauthorized manner, and accuracy and consistency must be preserved regardless of changes.
  • Availability – information must be accessible and usable on demand by authorized entities.

To that end, we often must build a strategy that incorporates technological and policy solutions which balance information security with the needs of the organization.

The Human Side of Tech

Now that we have briefly gone over the purpose of IT security policies, we must look at how they should be implemented. Effective policies not only protect data and help the organization avoid liability, but also take into consideration the culture of the organization and its employees. For example, an organization with a large remote workforce should require Multi-Factor Authentication (MFA) for logging in to applications. In contrast, a small organization with all employees working in one office could consider MFA optional.

Additionally, effective policy always reflects the following ideals:

  • Clear – vague policies confuse IT system users and leave room for bad actors to claim a plausible misunderstanding of the rules.
  • Consequential – policies without an enforcement mechanism and clear consequences for violations are not likely to be followed in large organizations.
  • Current – policies should be reviewed and modified periodically to reflect the technology and security posture of the organization as it is today.

Bottom Line

Until killer robots and rogue AI become our overlords, humans will remain both the center and the weakest link of any cybersecurity strategy. And while technology will always be a huge part of cybersecurity, implementing effective IT security policies must not be overlooked.

Data Protection: The Fancy New Name for Backups?

When I first saw the words Data Protection, I thought, “ugh, here is another new way to do backups that I have to keep up with.” The more I read about it, the more I understood that data protection and backups are like squares and rectangles. All data protection includes backups, but not all backups are data protection.

The Problem with Backups

Ransomware, data corruption, and data loss all strike fear into the hearts of the modern IT professional. All three of these things have come to be known colloquially as RGEs or Resume Generating Events and have kept many an IT professional from sleeping well at night. As the modern datacenter has evolved from rows of tower servers running bare metal workloads to racks full of blades running hyperconverged platforms, how an administrator backs up and recovers data has struggled to keep up. The traditional solution has always been to use the 3-2-1 rule of backups coined by Peter Krogh. The rule states:

  • Keep three copies of your data
  • Use two different types of storage medium
  • Keep one copy of the data offsite

These were great rules in the past, but with the added complexity and amounts of data to be protected in the modern datacenter, these rules do not effectively mitigate RGEs.

What is Data Protection?

Traditional backup and restore consists of grouping workloads together in a backup schedule, backing them up, occasionally checking backup integrity, and restoring when necessary. This was fine when the datacenter was nothing more than a server room and the business could afford downtime, but in the modern datacenter it is woefully insufficient. The pain points of this strategy are all challenges that modern data protection has sought to mitigate using the following five strategies:

Centralized Management – allows the administrator to manage data protection across on-premises and public clouds.

Cross-Cloud and Hypervisor Support – gives the administrator the ability to archive and/or set up disaster recovery in the public cloud or across hypervisors.

Data Lifecycle Management – automates moving backups and snapshots between hot, cold, and archival tiers.

Application Aware – uses Volume Shadow Copy Service (VSS) or Changed Block Tracking (CBT) to capture consistent database tables and logs.

Mitigates Malware and Other Threats – keeps backup data immutable to resist ransomware and uses artificial intelligence to detect anomalies.

Avoiding Resume Generating Events

The solution seems simple: use a modern data protection solution. The reality is that many organizations have different reporting requirements, software and hardware stacks, budgetary constraints, and levels of operational intelligence to consider when making a purchasing decision. Considering these challenges, there are two main architectures to evaluate when looking at a modern data protection solution:

Hardware Appliance

These solutions are characteristically the easiest to install and maintain, though typically at a higher cost. The advantages are an integrated hardware and software stack and the ability to live-mount restores almost instantly. Examples of solutions in this category are Rubrik, Cohesity, and Commvault.

Software Solution

Traditionally, these solutions are lower in cost and have all the features of modern data protection, but the administrator typically loses the ease of use and elegance of the hardware appliance. The leader in this category is Veeam, with HYCU and Nakivo being great alternatives.

The modern datacenter continues to present numerous challenges to organizations, and data protection is no different. As always, any organization should look to its trusted advisor (VAR or MSP) to guide them in making an informed decision.

Hybrid Cloud Considerations

The Problem

The cloud continues to be a hot topic in 2019. Public cloud initiatives have been at the forefront of enterprise digital transformation for the past few years. As we discussed last month, the cloud is not a complete solution for most modern enterprises. Although the public cloud is great for its agility, scalable workloads, and reliability, many enterprise customers are hampered by the “Three C’s” of cost, connectivity, and complexity. In addition, they face pressure from other business units to be more agile; those units often take matters into their own hands, creating the problem of shadow IT. This becomes even more of an issue when using a multi-cloud strategy. So, what is the solution? It is to combine the current on-premises private cloud with the public cloud to create a hybrid cloud infrastructure.

What is hybrid cloud?

Hybrid cloud refers to using both on-premises infrastructure in combination with public cloud infrastructure. This allows enterprises to combine the best of both worlds: scalability and reliability for web-tier and disaster recovery workloads found in the public cloud, with the fixed cost and connectivity for ERP workloads found in the private cloud.

Hybrid Cloud Solutions

The DIY Approach

This approach is not for the faint of heart. It is typically the most complicated way to create a hybrid cloud. It requires deep knowledge and understanding of not only on-premises enterprise and cloud architecture, but also how to integrate them properly. This demands a new set of tools and skills, such as cloud storage, networking, instance types, and, most importantly, cost management. Businesses with sufficient technical resources can overcome these barriers and create a robust hybrid cloud solution. Unfortunately, this is the first approach many businesses take. Oftentimes they become overwhelmed and ultimately reduce their presence in the public cloud drastically, which discourages them from beginning any new public cloud projects.

The Single Hypervisor Approach

The single hypervisor approach is one that Azure Stack and VMware Cloud on AWS exemplify. These solutions remove a lot of the complexity found in the DIY approach. Due to the tight integration between the hypervisor and management stack, very few new skills are needed. An administrator who can manage vSphere in the private cloud has little to learn to manage VMware Cloud on AWS. The same is true for Azure Stack and Windows Admin Center. The issues that remain are cost and lock-in. Both of these solutions carry financial costs often far above the DIY approach, putting them out of reach of many smaller enterprises. Additionally, each of these solutions effectively locks the enterprise into a particular vendor’s ecosystem or creates knowledge silos within the organization. This ends up negating a lot of the agility that brought enterprises to the public cloud in the first place.

The Enterprise Multi-Cloud Strategy

The enterprise multi-cloud approach is the natural evolution of hybrid cloud. It allows enterprises to take advantage of the benefits in each of the three major cloud providers’ (AWS, Azure, and GCP) offerings, while also being able to easily move workloads between cloud providers and the private cloud and to manage costs. This is exemplified by Nutanix and its products Xi Beam and Calm. These solutions give enterprises the insight and tools they need to optimize and automate their public cloud workloads. Centralized financial governance is one of the most important components of the multi-cloud strategy. Xi Beam not only centralizes financial governance but also allows for the remediation of under- and over-utilized public cloud resources. Additionally, Xi Beam offers compliance governance with automated audit checks, which removes another layer of complexity from the multi-cloud strategy. Another important component of the multi-cloud strategy is automation. Calm gives enterprises the ability to quickly provision applications and allow self-service resource consumption for other business units, enabling the agility for which the public cloud is well known while also mitigating shadow IT.

Where Do We Go from Here?

Hybrid cloud is the enterprise infrastructure model of the foreseeable future. The control, flexibility, and ease it offers have made the pure public cloud model unattractive and the pure private cloud model obsolete. It is important for each enterprise to evaluate its needs and technical resources to decide which hybrid cloud model best suits it.
