
Recent Developments in the VMware-Broadcom Acquisition

The completion of Broadcom’s $69 billion acquisition of VMware has introduced significant changes and uncertainties for current VMware clients. The transition to a subscription-based model and other strategic shifts are key developments that could impact existing customers.

Key Issues and Potential Cost Increases:

  1. Shift to Subscription-Based Model:
    • Broadcom has ended the sale of perpetual VMware licenses and is moving customers to subscription-based licensing. Organisations that relied on perpetual licenses must now budget for recurring subscription fees, which can raise ongoing licensing costs.
  2. Simplification of VMware’s Product Portfolio:
    • Broadcom plans to streamline VMware’s product offerings, which includes significant changes to existing services and support structures. While this aims to make the portfolio simpler, it may also result in the discontinuation of certain products and services that customers currently rely on, forcing them to adopt new solutions that could be more costly or less suited to their needs.
  3. Increased Costs and Reduced Flexibility:
    • As Broadcom integrates VMware’s operations, customers may face higher costs for support and licensing due to the company’s focus on maximizing returns from its investments. Additionally, the shift to a subscription model reduces the flexibility customers previously had with perpetual licenses, potentially leading to higher total costs of ownership in the long term.

Proxmox: A Viable Alternative

Given these potential issues and impending cost increases, businesses should consider exploring alternatives to VMware. Proxmox stands out as a robust and cost-effective solution for several reasons:

  • Open Source and Cost-Effective: Proxmox is built on open-source technologies, eliminating high licensing fees and offering businesses full control over their infrastructure.
  • Comprehensive Features: It offers extensive features such as live migration, high availability, and built-in backup and recovery, providing enterprise-grade capabilities without additional costs.
  • Community Support and Flexibility: Proxmox benefits from an active community that continually enhances the platform, ensuring robust support and flexibility for various business needs.

To assist businesses in transitioning smoothly from VMware to Proxmox, we have developed an exclusive guide. Sign up now to download the guide and ensure a seamless migration to a more flexible and cost-efficient virtualization solution.

VMware vs. Proxmox: Comparison Summary

  • Cost Efficiency: Proxmox is generally more cost-effective, eliminating high licensing fees.
  • Flexibility: Proxmox offers greater flexibility with its open-source model, avoiding vendor lock-in.
  • Ease of Use: Proxmox provides a more user-friendly interface, suitable for a range of businesses from SMBs to larger enterprises.
  • Support Options: While VMware offers extensive support through paid contracts, Proxmox benefits from a vibrant community and optional paid support.

By considering these factors and exploring Proxmox as an alternative, businesses can avoid the potential pitfalls associated with the recent VMware-Broadcom acquisition and ensure a more stable and cost-effective virtualization strategy.

If you have any questions or need help with the migration, our team of experts is ready to assist you.

Are you considering migration or need help optimizing your cloud environment?

Do not hesitate to contact us. We offer expert consultations and support that allow you to use the cloud efficiently and easily transition to other platforms. This approach gives you not only better control over your technologies but also greater cost control and independence from specific providers.


Ondřej Flídr

“Migrate to the cloud, ditch your hardware, and save money.”

We’ve heard this advice from all sides for the past seven years. Unfortunately, it often becomes clear too late that migrating to the cloud, or even adopting a cloud-native approach, not only fails to deliver the desired savings but can also lead to unpredictable costs and vendor lock-in. Cloud pricing is very complex, and what initially appears to be a clear advantage often turns out to be an unpleasant surprise.

Attempting to exit the cloud can then result in a major dilemma: should you rewrite half of your application and two-thirds of your business processes, or continue paying more and more for cloud services?

Can we avoid problems with cloud migration?

The simplistic advice would be: “Just don’t go to the cloud.” But that’s an oversimplified view. For example, at the beginning of a project, launching in the cloud is a very reasonable and rational choice, and it would be unfortunate to reject it outright. There are also other specific situations and projects for which a cloud-native approach is the best option. Therefore, we recommend that instead of making categorical judgments for or against the cloud, you should keep in mind when entering the cloud that you might want to leave one day.

With this mindset, you won’t get locked into the world of cloud services, and by sticking to proven standard protocols and procedures, you can maintain flexibility. Even if you are already in the cloud, it’s worth looking at your setup and considering how to escape potential lock-in. In today’s article, we will look at several key services that our clients most often deal with when leaving AWS and how to make their transfer as smooth as possible. This is by no means an exhaustive list of everything you might encounter, but from experience, I can say that getting on top of these few services can save a lot of trouble during the transition.

Schedule a free consultation with our experts

Elastic Kubernetes Service (EKS)

In recent years, Kubernetes (K8s) has become the de facto standard for running containerized applications. Its simplicity, automatic scaling, high availability, and easy cluster expansion with additional nodes are all advantages that K8s offers over traditional infrastructure. Kubernetes grew out of Google’s experience with its internal cluster managers and was open-sourced in 2014; it quickly spread beyond Google’s data centres and conquered the world. On AWS, EKS can run either on dedicated EC2 instances or serverless via Fargate.

How to migrate EKS from the public cloud?

The great advantage of EKS is its universality. Because it is fully compatible with upstream Kubernetes, the transition to your own infrastructure is relatively seamless: in most cases you can take your existing deployment manifests and apply them to the new cluster.

One thing to watch, however, is the ingress configuration. In EKS, AWS typically handles ingress through its own load balancer controller, so on your own K8s cluster the ingress resources need to be adjusted. In most cases this is a simple rewrite of the ingress class from the AWS class to NGINX, as sketched below.
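For illustration, here is a minimal sketch of that rewrite using the official kubernetes Python client. It assumes your kubeconfig already points at the new cluster and that the ingresses use the ingressClassName field (older setups may use the kubernetes.io/ingress.class annotation instead); the class names are illustrative only.

```python
# Hypothetical sketch: bulk-switch ingress classes from the AWS load balancer
# controller to ingress-nginx after moving deployments to a self-managed cluster.
# Assumes the official `kubernetes` Python client and a kubeconfig that already
# points at the target cluster; the class names are illustrative.
from kubernetes import client, config

config.load_kube_config()          # uses the current kubeconfig context
net = client.NetworkingV1Api()

for ing in net.list_ingress_for_all_namespaces().items:
    if ing.spec.ingress_class_name == "alb":            # AWS-specific class
        patch = {"spec": {"ingressClassName": "nginx"}}  # ingress-nginx class
        net.patch_namespaced_ingress(
            name=ing.metadata.name,
            namespace=ing.metadata.namespace,
            body=patch,
        )
        print(f"Re-pointed {ing.metadata.namespace}/{ing.metadata.name} to nginx")
```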

EC2

EC2 virtual servers, along with S3 and RDS, are among the oldest and still most popular services that AWS offers. These servers run Linux or Windows on x86 or ARM (Graviton) instances and are very similar in nature to traditional servers, which makes migrating them out of the cloud easier. Although the configuration of operating systems and applications on EC2 may be comparable to other virtualization platforms or even physical servers, you cannot assume that an EC2 configuration will work elsewhere without adjustments.

How to migrate EC2 from the public cloud?

Migrating EC2 instances is relatively straightforward thanks to their similarity to traditional servers. However, it is important to adjust cloud-specific configurations, such as load balancers and network settings, to match the new hosting environment. Special attention should also be paid to the operating system: if you run Amazon Linux, which is not available outside AWS, you will typically need to move the workload to a comparable distribution such as RHEL, Debian, or Ubuntu and adjust it accordingly.
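One possible starting point (an assumption on our part rather than a universal recipe, and not every AMI is eligible for export) is AWS VM Import/Export, which can turn an existing AMI into a portable disk image that other hypervisors can import. A boto3 sketch with placeholder identifiers:

```python
# Hypothetical sketch: export an existing AMI as a portable VMDK using the
# AWS VM Import/Export service, so it can be imported into another hypervisor
# (e.g. Proxmox/KVM). The bucket name and AMI ID are placeholders, and the
# "vmimport" service role must already exist in the account.
import boto3

ec2 = boto3.client("ec2", region_name="eu-central-1")

task = ec2.export_image(
    ImageId="ami-0123456789abcdef0",           # placeholder AMI ID
    DiskImageFormat="VMDK",                     # also supports RAW / VHD
    S3ExportLocation={
        "S3Bucket": "my-export-bucket",         # placeholder bucket
        "S3Prefix": "exports/",
    },
)
print("Export task started:", task["ExportImageTaskId"])

# Check the task status; once it completes, download the image from S3.
status = ec2.describe_export_image_tasks(
    ExportImageTaskIds=[task["ExportImageTaskId"]]
)["ExportImageTasks"][0]["Status"]
print("Current status:", status)
```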

S3

The S3 object storage, one of AWS’s first products, now has several open-source alternatives such as MinIO and Ceph (via the RADOS Gateway), which are compatible with the S3 API and often offer additional features.

How to migrate S3 from the public cloud?

These open-source alternatives can be used for migrating S3. The key is to adjust S3 endpoints in applications to point to the new location. This is usually simple if your applications already use an abstraction layer for working with S3.

In the infrastructures we build for clients at vshosting, you will most often encounter MinIO or Ceph-based S3 (RADOS Gateway), because they provide excellent compatibility and scalability.
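In practice, switching an application over is often just a matter of pointing its S3 client at the new endpoint. A minimal boto3 sketch, with placeholder endpoint and credentials:

```python
# Minimal sketch: point an application's S3 client at a self-hosted,
# S3-compatible endpoint (MinIO or Ceph RGW) instead of AWS. The endpoint URL
# and credentials are placeholders; bucket and object operations stay unchanged.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.internal.example.com",   # MinIO / Ceph RGW endpoint
    aws_access_key_id="MINIO_ACCESS_KEY",              # placeholder credentials
    aws_secret_access_key="MINIO_SECRET_KEY",
)

# The rest of the S3 code is identical to the AWS version.
s3.upload_file("backup.tar.gz", "my-bucket", "backups/backup.tar.gz")
print(s3.list_objects_v2(Bucket="my-bucket", Prefix="backups/")["KeyCount"])
```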

RDS

The RDS relational database service supports the most widespread database systems such as MySQL/MariaDB, PostgreSQL, MSSQL, and Oracle.

How to migrate RDS from the public cloud?

When migrating from RDS or its Aurora variant, the main challenge is usually the data transfer itself, as AWS gives you limited options for attaching an external replica to copy the data seamlessly. In practice, the most common approach is to export the data from RDS and import it into your own database solution, which requires application downtime and can be time-consuming if you have a large amount of data.
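As an illustration only, here is a rough sketch of that dump-and-restore step for a MySQL-flavoured RDS instance. The hostnames, credentials, and database name are placeholders, and the application should be stopped or switched to read-only while it runs.

```python
# Hedged sketch: pipe a logical dump from the RDS endpoint straight into the
# new self-hosted MySQL/MariaDB server. All identifiers are placeholders.
import subprocess

RDS_HOST = "mydb.abc123.eu-central-1.rds.amazonaws.com"  # placeholder source
NEW_HOST = "db01.internal.example.com"                    # placeholder target

dump = subprocess.Popen(
    ["mysqldump", "-h", RDS_HOST, "-u", "admin", "-pSECRET",
     "--single-transaction", "--routines", "--triggers", "appdb"],
    stdout=subprocess.PIPE,
)
restore = subprocess.run(
    ["mysql", "-h", NEW_HOST, "-u", "root", "-pSECRET", "appdb"],
    stdin=dump.stdout,
    check=True,
)
dump.stdout.close()
dump.wait()
print("Dump/restore finished with return codes:", dump.returncode, restore.returncode)
```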

Elastic Container Service (ECS)

If you want to start using application containerization but don’t want to jump straight into full-fledged K8s, you might opt for ECS. Like EKS, this simpler runtime environment for Docker containers can run either on dedicated EC2 instances or serverless via Fargate. However, the similarity and interoperability end there: ECS is Amazon’s proprietary solution, and there is no fully compatible alternative to it.

How to migrate ECS from the public cloud?

Since ECS is an Amazon-only service, migration requires significant adjustments. On a classic Docker host or Kubernetes cluster you can reuse the existing container images, but the deployment method itself has to be rewritten from scratch, because ECS task and service definitions are not transferable. From experience, we can recommend that if you are considering dockerization, skip the ECS step and move straight to EKS. The slightly more complicated start will quickly pay off, not only in much greater clarity but also in the ability to easily migrate from the cloud to your own K8s.
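To give a feel for what that rewrite involves, below is an illustrative sketch that maps the essentials of a trimmed-down ECS container definition onto a Kubernetes Deployment manifest. Real task definitions carry far more (task roles, awslogs configuration, service discovery) that has no one-to-one equivalent and has to be redesigned.

```python
# Illustrative sketch only: translate the basics of a (simplified) ECS
# container definition into a Kubernetes Deployment manifest. Values are
# placeholders taken from a fictional task definition.
import yaml  # PyYAML

ecs_container = {  # trimmed example container definition
    "name": "web",
    "image": "123456789012.dkr.ecr.eu-central-1.amazonaws.com/web:1.4.2",
    "portMappings": [{"containerPort": 8080}],
    "environment": [{"name": "APP_ENV", "value": "production"}],
}

deployment = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": ecs_container["name"]},
    "spec": {
        "replicas": 2,
        "selector": {"matchLabels": {"app": ecs_container["name"]}},
        "template": {
            "metadata": {"labels": {"app": ecs_container["name"]}},
            "spec": {"containers": [{
                "name": ecs_container["name"],
                "image": ecs_container["image"],
                "ports": [{"containerPort": p["containerPort"]}
                          for p in ecs_container["portMappings"]],
                "env": [{"name": e["name"], "value": e["value"]}
                        for e in ecs_container["environment"]],
            }]},
        },
    },
}

print(yaml.safe_dump(deployment, sort_keys=False))
```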

Lambda

Lambda is a serverless environment for running code. It was one of the first, and remains one of the most widely used, function-as-a-service platforms for event-driven workloads. On AWS, Lambda functions can be written in languages such as Node.js or Python, or in PHP via a custom runtime.

How to migrate Lambda from the public cloud?

When transitioning to your own solution, consider whether you need to maintain the serverless, event-driven concept or whether you can switch to full-fledged containerization. If a serverless approach is necessary, you can use one of several projects that implement Lambda-like behaviour in a classic K8s cluster, such as Knative. The downside of this transition is the need to operate the entire K8s cluster on which the functions run. With Knative, you can run functions written in various languages like PHP, Node.js, Python, or Go, so there is no need to rewrite the application itself.
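If the functions are simple HTTP-triggered handlers, you may not even need a Lambda-compatible layer. A minimal sketch, assuming a Python handler and Flask (both illustrative), wraps the existing handler in a small HTTP service that Knative, or any container platform, can run; Knative injects the PORT variable the process should listen on.

```python
# Minimal sketch: wrap an existing Lambda-style handler in a tiny HTTP service
# so it can run as a container on Knative (or any other platform). The handler
# name and payload shape are illustrative; Flask must be installed in the image.
import json
import os
from flask import Flask, request, jsonify

app = Flask(__name__)

def handler(event, context=None):
    """The original Lambda-style function, unchanged."""
    name = event.get("name", "world")
    return {"statusCode": 200, "body": json.dumps({"message": f"Hello, {name}"})}

@app.route("/", methods=["POST"])
def invoke():
    event = request.get_json(force=True) or {}   # the HTTP body becomes the "event"
    return jsonify(handler(event))

if __name__ == "__main__":
    # Knative injects PORT; default to 8080 for local testing.
    app.run(host="0.0.0.0", port=int(os.environ.get("PORT", "8080")))
```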

Final Notes: Gaining Control Over Your Cloud Strategy

As you can see, migrating from the cloud can be straightforward if alternatives to the most commonly used services are considered in advance. However, AWS offers other services that may not be as easily migratable. It is crucial to carefully consider all aspects before deploying new technologies in the cloud. There are certainly applications that are worth running in the cloud long-term (e.g., AI or storage in exabyte scales), but most projects can suffice with solutions that don’t close the door to moving to cheaper and more transparent alternatives.

Are you considering migration or need help optimizing your cloud environment?

Do not hesitate to contact us. We offer expert consultations and support that allow you to use the cloud efficiently and easily transition to other platforms. This approach gives you not only better control over your technologies but also greater cost control and independence from specific providers.


Lucie Rybičková Javorská

The Digital Dilemma: IT’s Environmental Impact

With the digital universe expanding rapidly, over 4 billion active internet users are part of a network whose environmental impact is becoming increasingly apparent. Smart devices, internet infrastructure, and the systems that support them are responsible for approximately 3.7% of greenhouse gas emissions, and this figure is projected to double by 2025. This stark reality calls for immediate action on sustainable IT practices.

Eco-Friendly IT Strategies: What Steps Can We Take?

1. Energy Management and Reduced Carbon Footprint

Data centres and server farms consume huge amounts of energy, leading to extremely high carbon emissions. Increasing their energy efficiency is therefore pivotal in the Green IT strategy. This can be achieved, for example, by introducing advanced cooling technologies and the use of renewable energy sources. 

2. Combating Electronic Waste

The rapid turnover of electronic devices is generating large amounts of waste. To counter this, Green IT emphasizes recycling, developing more durable products, and embracing devices with modular designs that are easier to repair and upgrade, thus reducing e-waste.

3. Software Optimisation for Energy Efficiency

Energy-efficient software development is another critical area. Employing advanced algorithms and streamlined coding techniques can substantially lower the energy required by software.

4. Incorporating Green IT Practices in Business

An increasing number of companies are recognising the importance of Green IT and implementing eco-friendly operational practices. Paperless offices, energy-saving devices, and encouraging remote work are some of the ways to reduce the environmental impact of day-to-day business activities.

Vision, Challenges, and the Way Forward

Implementing Green IT comes with its own set of challenges, including high initial costs, technological barriers, and a general lack of consumer awareness. However, the growing interest in eco-friendly IT practices highlights them as essential to building a sustainable future.

Business-Environment Synergy

Adopting Green IT approaches such as energy-efficient hardware, recyclable materials and eco-friendly operating practices offers dual benefits. They not only aid the environment but also enhance business efficiency. These practices enable organisations to ensure regulatory compliance, cut operating costs and elevate their corporate reputation, effectively merging ecological responsibility with business success.

Impact Through Design

A prime example is website design optimization. Simply creating more user-friendly websites that make finding the desired content easier can significantly reduce unnecessary data traffic and its associated environmental impact.

The Role of Innovations

Sustainable IT is not just about the present, but also the future. Technological advancements such as energy-efficient data centres and smart grids will play a crucial role in shaping a more sustainable future, in which IT managers, professionals, and all technology users must play an active role.

vshosting: A Case Study in Sustainability

Vshosting stands as a testament to sustainable IT practice. We thought about sustainability and the environment from the outset. Our data centre was designed for maximum efficiency and we continue to invest heavily in renewable energy sources to reduce our ecological footprint. 

Running modern hardware, along with optimising ventilation and cooling, has led to further reductions in energy consumption.

Alongside efficient waste management, waste reduction, and recycling programs, these initiatives have earned our data centre green energy certification, which it has held since 2022.

Conclusion: Green IT – A Responsibility and Opportunity

Green IT is not merely a trend; it’s a responsibility and an opportunity for today’s businesses. By adopting eco-friendly hardware, promoting recyclable materials, and implementing environmentally conscious practices, companies can reduce costs and enhance their corporate reputation. More importantly, they contribute to a sustainable future, where technology and ecology exist in harmony.


Lucie Rybičková Javorská

Businesses have relied on the cloud to drive innovation for nearly two decades, with public cloud being hailed as the transformative force empowering companies to unlock new opportunities via global infrastructures, scalability and software-defined solutions. But with the technology evolving at breakneck speed and as consumption patterns change, evaluating your cloud strategy is essential to future-proof your business as market conditions shift. 

Just like how we check in on our business goals to gauge progress, it is equally important to regularly review our cloud strategy. If you’re embarking on a cloud reassessment journey, here are considerations you should keep in mind:

Configuring for improved performance 

Cloud infrastructure is not static; it requires regular fine-tuning to keep applications running and to prevent outages. Determining where and how each workload runs is crucial in this step. Typically, more demanding workloads may struggle in the public cloud, while others may need to rely on high-performance networks to work well. Applications that rely on low latency, or that are unsuitable for distributed computing infrastructures, are often best suited to on-prem environments.

These are general guidelines on how best to distribute your workloads for enhanced performance. A deeper assessment can help pinpoint other existing performance bottlenecks, such as overutilised resources or inefficient configurations.

How emerging technologies will influence cloud workload

Technologies, along with business needs, are constantly evolving, and cloud providers are striving to keep pace. New technologies like serverless computing, edge computing, artificial intelligence (AI), and machine learning (ML) bring opportunities for business efficiencies and performance improvements. But integrating them into the mix introduces unique sets of requirements, which can place varying demands on cloud workloads.

Managing spiraling costs 

Cloud computing remains a key part of the IT modernisation strategy. But the cost of cloud services is no longer falling at the rate it once did, when hyperscalers priced their services at margins that undercut on-prem alternatives. Major cloud service providers, including IBM Cloud, Salesforce, and ServiceNow, have announced price hikes that reflect this trend. The emergence of new technologies (mentioned above) has also added complexity to cloud pricing, with the costs of running these models often passed on to end users and customers at a premium.

Given the potential of cloud spending to spiral if left unchecked, cost optimisation is fundamental to cloud management. Regular reviews of cloud costs can help identify savings opportunities, and some workloads can stand to benefit from reevaluating infrastructure needs.

Data privacy and compliance

Cost and performance considerations aside, a business may choose to maintain certain workloads on-prem due to stringent data privacy and compliance requirements. End users typically lack visibility into the underlying hardware and the infrastructure hosting these workloads and data. This poses clear challenges for businesses obligated to meet data security and other regulatory requirements, such as clear auditing or data residency proof.

This is especially relevant in industries such as healthcare or finance, where handling sensitive customer data is subject to strict regulatory mandates. Compliance with regulations such as GDPR, PCI DSS, and SOC 2 is therefore essential for protecting sensitive data, and frequent reviews are required to ensure cloud deployments remain compliant over time.

So, should you move back on-prem? 

In assessing their options, businesses may find themselves considering moving certain workloads or resources back on-prem.

On the upside, operating on-prem gives businesses full control over their infrastructure and access to all log files, along with the ability to troubleshoot, correct, and audit all activity within the data centre. This level of control lets businesses take proactive steps to protect their environment and address issues promptly, at least in an ideal world.

But this decision still warrants careful consideration, given the scale of the reverse migration and its implications across the business. In the same vein, we also understand businesses’ concerns about cloud providers’ ability to meet specific uptime and resilience expectations, where a shortfall can have severe repercussions in the event of, say, an outage.

Meeting in the middle with multi or hybrid cloud strategies

Each operational approach comes with its own advantages and drawbacks. And with that in mind, keeping workloads distributed across multiple cloud providers and environments may still be the best way forward, for now. Organisations have for years embraced a hybrid, multi-cloud strategy, leveraging the strengths of different environments to flexibly run workloads and manage data where it makes the most sense for them. 

At the end of the day, it is not about favouring one environment over the other, but rather rationalising your usage, and continuously refining your consumption. It is about deploying each workload responsibly, with return on investment (ROI) in mind, ensuring that resources are ultimately utilised most effectively. 

If you’d like to speak to an expert about reviewing your cloud strategy, please don’t hesitate to reach out to one of our friendly vshosting experts. 


Lucie Rybičková Javorská

Several reasons are driving companies to move away from cloud services. One primary reason is cost. While migrating to the cloud can save money initially, it can become more expensive as a company grows, especially regarding data transfer. Another reason is performance. Some applications require lower latency, more readily achievable with local servers. Moreover, concerns about data security, privacy, and adhering to industry regulations are prompting more companies to manage their data and applications in-house.

The Role of Managed Service Providers

Managed Service Providers (MSPs), particularly those with their own data centres, are vital in facilitating cloud repatriation. They don’t just provide technical know-how but also the infrastructure necessary for a successful move. By collaborating with MSPs, companies can enjoy a controlled and secure environment akin to a private cloud. This customised setting offers better flexibility, scalability, and security.

Benefits and Challenges

Reverting to local infrastructure brings advantages such as enhanced control over costs and tailored performance for specific applications. It also aids in meeting stringent regulatory requirements and gives companies greater command over their IT resources. However, the transition is not without its complexities. It requires thoughtful planning and testing and might be time-consuming and resource-intensive, potentially leading to operational disruptions. Moreover, a long-term IT strategy is essential for this change.

Planning for Success

Before committing to cloud repatriation, companies should thoroughly analyse whether this move supports their long-term objectives. Careful planning and extensive testing are crucial to minimise disruptions. The local infrastructure must be prepared to support future expansion and comply with all security and compliance regulations.

Cloud repatriation can be a beneficial strategy for companies aiming to optimise their IT infrastructure. With proper preparation and implementation, it can enhance performance, control costs, and increase security. If you’re considering cloud repatriation for your company, contact us for tailored advice. Let’s work together to create an IT environment that is secure, high-performing, and cost-efficient.


We have successfully assisted with migrations for hundreds of clients over the course of 17 years. Join them.

  1. Schedule a consultation

    Simply leave your details. We’ll get back to you as soon as possible.

  2. Free solution proposal

    A no-commitment discussion about how we can help. We’ll propose a tailored solution.

  3. Professional implementation

    We’ll create the environment for a seamless migration according to the agreed proposal.

Leave us your email or telephone number




    Or contact us directly

    +420 246 035 835 Available 24/7
    consultation@vshosting.eu
    We'll get back to you right away.