Lucie Rybičková Javorská

In the face of cost-cutting exercises and the push for efficiency, Gartner’s 2024 worldwide IT spending forecast predicts that IT services spending is poised to claim the lion’s share of overall IT budgets. This is prompting a strategic emphasis on leveraging IT services to enhance operational efficiency, against a backdrop of economic uncertainty and spiralling operational costs.

With this in mind, we look at some of the key technological trends we expect to see this year, and what businesses across the board should prepare for:

Hosting infrastructures will be expected to adapt and perform

In the aftermath of major events like supply chain disruptions, political upheaval and the AI boom, one key takeaway is that nothing remains the status quo for long. This points to the need for flexible, adaptable hosting infrastructure. Businesses will now look to IT partners who can roll with the punches and scale up or down based on ongoing demand, workloads and needs.

Simply put, there’s a rising demand for hosting that flexes with shifting needs and workloads.

Continuous shift towards cloud optimisation

Businesses migrated to the cloud with the goal of improving performance while streamlining operational costs. But the era of the one-size-fits-all cloud solution is giving way, and will continue to do so in 2024, to a more tailored and dynamic approach. Cloud optimisation strategies, integrating both public and private clouds, will continue to offer the most significant cost-saving opportunities.

This year, we can expect to see a greater focus on fighting ‘cloud-flation’: businesses will look to optimise processes where possible, adjust resources based on workload requirements, and eliminate underutilised workloads. This includes phasing out generic hosting capabilities, eliminating duplicated workloads and streamlining day-to-day IT storage operations for enhanced efficiency, among other measures.

Moving workloads back on prem

As part of the ongoing cloud optimisation exercise, we expect to see more businesses turning to cloud repatriation: the process of moving data and workloads from a public cloud environment back on-prem. According to research by Citrix, a significant number (93%) of IT leaders have been involved in a cloud repatriation project in the last three years, which is unsurprising considering spiralling cloud costs and the growing need for a scalable approach to cloud infrastructure.

Businesses will also want a safe and secure way to do so.

Ransomware-resistant backup

In trying to safeguard data while migrating workloads, it’s important not to overlook the data stored in backup systems, which form a crucial part of a business’s response and recovery process. Increasingly, threat actors are targeting backup systems and infrastructure, maliciously deleting or destroying stored data to disrupt operations. Alarmingly, in 75% of these cases, they successfully cripple victims’ ability to recover.

While common practice when dealing with ransomware attacks includes avoiding payment, all backup strategies should also adhere to principles that make them resilient to destructive actions. This includes measures such as soft-delete practices, blocking any deletion or alteration of a backup once it is created, and introducing delays before destructive requests, including those from customers, are executed.
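As an illustration, the immutability and delayed-deletion principles above might be modelled like this. This is a minimal sketch; `BackupStore`, the 72-hour window and all other names are hypothetical and not any particular vendor’s API:

```python
from datetime import datetime, timedelta

# Hypothetical hold period before a deletion request is actually executed.
RETENTION_DELAY = timedelta(hours=72)

class BackupStore:
    """Toy model of a ransomware-resistant backup policy:
    backups are immutable once written, deletions are soft (queued),
    and nothing is purged until a delay window has passed."""

    def __init__(self):
        self._objects = {}          # backup_id -> payload
        self._pending_deletes = {}  # backup_id -> time the request was made

    def write(self, backup_id, payload):
        # Block alteration: an existing backup can never be overwritten.
        if backup_id in self._objects:
            raise PermissionError("backups are immutable once created")
        self._objects[backup_id] = payload

    def request_delete(self, backup_id, now):
        # Soft delete: only record the request; nothing is removed yet.
        self._pending_deletes.setdefault(backup_id, now)

    def purge(self, now):
        # Honour only requests older than the delay window, giving
        # operators time to spot and cancel a malicious deletion.
        for backup_id, requested in list(self._pending_deletes.items()):
            if now - requested >= RETENTION_DELAY:
                self._objects.pop(backup_id, None)
                del self._pending_deletes[backup_id]
```

The point of the delay is that an attacker who compromises a credential can request deletions, but the data survives until the window elapses, leaving time to respond.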

Protecting company data is ultimately a multi-layered effort. But as the threat landscape continues to evolve, businesses need to stay proactive and work with their IT service providers to understand how the services they provide can strengthen defences against these evolving risks, including AI-powered DDoS attacks and tools like WormGPT.

Staying green in a hybrid cloud environment

While companies are actively working to optimise their cloud spending, there’s a growing focus on the environmental footprint they leave behind. Traditionally centred on direct emissions, the rising demand to address Scope 3 emissions is exerting pressure on businesses. Shifting away from physical servers and data centres used to be sufficient, but with the demands of hybrid cloud setups, businesses are now turning to greener hosting providers that prioritise energy efficiency.

In the context of regulations and initiatives across Europe, including in the Netherlands and Germany, the rules governing the environmental impact of data centres will continue to evolve. As businesses are expected to comply with transparent reporting requirements, working with a responsible hosting provider helps align them with current and future environmental regulations, mitigating potential regulatory risks.

Mainstream adoption of self-hosted large-language models 

ChatGPT propelled AI into mainstream consciousness when it was introduced in 2022. Since then, companies of all sizes have tried to harness generative AI and integrate it into their business practices to streamline the worker and customer experience. We can expect to see rising adoption of self-hosted large language models within hosting services, which can in turn empower businesses with advanced capabilities for customer engagement, content creation, data analysis, and overall operational efficiency.

Containerisation and deploying Kubernetes

Simplifying Kubernetes usage is an important part of a smart cloud strategy. With Kubernetes becoming a go-to for managing containerised applications, it’s more important than ever to manage costs and ensure resources are used effectively. Containers will play a key role in this, contributing to cost optimisation, reducing time-to-market, maximising resource utilisation and lowering infrastructure overhead.

If you’re considering optimising your cloud strategy or are eager to explore how managed hosting can bolster your business with future-ready infrastructure, don’t hesitate to connect with our experts here at vshosting.


Several reasons are driving companies to move away from cloud services. One primary reason is cost. While migrating to the cloud can save money initially, it can become more expensive as a company grows, especially regarding data transfer. Another reason is performance. Some applications require lower latency, more readily achievable with local servers. Moreover, concerns about data security, privacy, and adhering to industry regulations are prompting more companies to manage their data and applications in-house.

The Role of Managed Service Providers

Managed Service Providers (MSPs), particularly those with their own data centres, are vital in facilitating cloud repatriation. They don’t just provide technical know-how but also the infrastructure necessary for a successful move. By collaborating with MSPs, companies can enjoy a controlled and secure environment akin to a private cloud. This customised setting offers better flexibility, scalability, and security.

Benefits and Challenges

Reverting to local infrastructure brings advantages such as enhanced control over costs and tailored performance for specific applications. It also aids in meeting stringent regulatory requirements and gives companies greater command over their IT resources. However, the transition is not without its complexities. It requires thoughtful planning and testing and might be time-consuming and resource-intensive, potentially leading to operational disruptions. Moreover, a long-term IT strategy is essential for this change.

Planning for Success

Before committing to cloud repatriation, companies should thoroughly analyse whether this move supports their long-term objectives. Careful planning and extensive testing are crucial to minimise disruptions. The local infrastructure must be prepared to support future expansion and comply with all security and compliance regulations.

Cloud repatriation can be a beneficial strategy for companies aiming to optimise their IT infrastructure. With proper preparation and implementation, it can enhance performance, control costs, and increase security. If you’re considering cloud repatriation for your company, contact us for tailored advice. Let’s work together to create an IT environment that is secure, high-performing, and cost-efficient.


It’s time to start preparing your e-shop infrastructure for the busy season. Think ahead and get ready for the high demand for your products.

Now is the perfect time to prepare your infrastructure for the busy season. Many e-shops focus mainly on marketing campaigns to boost sales, but they forget that if these marketing efforts are successful, their e-shop might not be able to handle the influx of customers. 
If preparations only start in the fall, the infrastructure upgrade might not be completed in time, leading to lost orders. If the website is slow or crashes, customers will shop elsewhere – it’s as simple as that. Don’t believe us? We recently conducted a survey showing how much revenue businesses lose if their website takes longer than 3 seconds to load.

So, think ahead and prepare your e-shop for the anticipated demand for your products. Now is your chance to outsmart the competition and steal their thunder with a fully functional website.

Here are 7 important tips on how to keep your website functional.

1) Verify Your Solution’s Capacity

Ask yourself and your employees about the past season. How much did the demand on your infrastructure increase during that time? It’s crucial to compare this with your average annual operation. Once you have this comparison, you can roughly calculate the expected increase for this year using a simple formula. 

However, it’s entirely possible that this year’s main season will be even more successful. That’s why we recommend planning for an additional 20% performance reserve; you should also consult your hosting provider on this matter.
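The rough calculation above can be sketched in a few lines. The figures and the `seasonal_capacity` helper are made up for the example, not a sizing formula from any provider:

```python
def seasonal_capacity(last_avg, last_peak, this_avg, reserve=0.20):
    """Project this season's peak from last year's amplification,
    then add the recommended 20% performance reserve.
    All loads in the same unit, e.g. requests per second."""
    peak_ratio = last_peak / last_avg        # how much the season amplified normal traffic
    expected_peak = this_avg * peak_ratio    # assume the same amplification this year
    return expected_peak * (1 + reserve)

# Illustrative figures: last year 400 req/s on average, 1,200 at peak;
# this year the e-shop averages 500 req/s.
print(round(seasonal_capacity(400, 1200, 500)))  # 1800
```

As the next paragraph notes, real performance rarely scales linearly, so treat a figure like this as a starting point for a conversation with your provider.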

Increasing the capacity of your hosting solution is best tailored directly to your needs because, in many cases, performance cannot be linearly increased. It also depends on the software technologies your e-shop utilises.

If you are unsure about any of the above, feel free to contact us at consultation@vshosting.co.uk

2) Make Sure To Include Your Marketing Team In Discussions

A surprisingly successful marketing campaign might be every e-shop owner’s dream, but only if the e-shop is prepared for it. If it’s not, there’s a risk that the website might crash due to the increased traffic, and the money spent on the campaign would go to waste. 

That’s why it’s essential to communicate your planned campaigns with your hosting provider well in advance. This is especially crucial if you’re planning a larger campaign, such as a TV ad. So, make sure to have a prepared plan for an unexpectedly successful campaign.

3) Consider Premium Database Backups

Think ahead; it’s about your reputation and money – both are extremely important.

Prepare a backup plan for emergencies and consider premium database backup. After all, databases are the heart of every e-shop.

For instance, if your database suddenly stops working – whether due to accidental deletion by a careless colleague or a failed disk drive – orders will soon start disappearing. You won’t know what customers bought and for how much, let alone where to ship their purchases. You might have a backup of the database, but the file is likely quite large, so restoring it could take several hours. With premium backup and “point-in-time recovery”, data restoration is lightning-fast (up to 10 times faster!), and it minimises data loss by restoring the database to the state just before the failure.

Contact us for more information – we can tailor the backup solution to your database technology. 

4) Make sure to have a backup plan

Regardless of how thoroughly you prepare your infrastructure for the season, unexpected issues can always arise. This is why we always recommend having a disaster recovery plan in place. It’s a recovery plan designed primarily to ensure that you don’t lose anything in a crisis situation and know exactly how to proceed.

Do you have such documentation ready? Do you have an idea of how long it would take to recover the data if needed? It’s a worst-case scenario, but if it happens, it’s critical. It’s not just about your company’s reputation but especially about your profits and customer data. You need to respond quickly and efficiently, and salvage as much as possible.

Check out our guide on what disaster recovery plans should include.

5) Switch to a dedicated solution

Many clients switch to VPS hosting when they outgrow their existing solutions, especially when using shared infrastructure in web hosting. They start hitting performance limits, typically during traffic peaks, when numerous e-commerce websites compete for limited shared resources.

A prime example is the Christmas season, which sees the highest demand on e-commerce infrastructure. This period, between September 1st and December 23rd, often accounts for more than half of an e-commerce business’s annual revenue. Slow loading times – or worse, downtime – are simply not affordable.

From our 15 years of experience, we can say that if you’re on web hosting or any shared solution and expect higher traffic than usual, you should at least consider a dedicated solution. For growing and successful e-commerce businesses, we recommend transitioning to an infrastructure where all the resources are dedicated solely to them, ideally several months before the peak season.

6) Don’t underestimate the importance of security

Security is crucial, whether it’s about protecting data or ensuring the availability of your e-shop. 

Security is closely related to software updates, which include security patches. That’s why, with managed services, we monitor the software’s current status and alert clients if it becomes outdated. We also perform operating system updates upon agreement.

As a successful company, you can also expect DDoS attacks – either from random hackers or directly from competitors. Unfortunately, these incidents are becoming more frequent. That’s why we recommend not underestimating security. At vshosting, we have our own anti-DDoS protection, which reliably shields our clients from such attacks. Don’t forget about training your employees, who are your first line of defence against phishing attacks.

7) Test the quality of your provider (and consider switching in time if necessary) 

To prepare the infrastructure for your e-shop, a reliable partner is key. What are your experiences with your current hosting provider? Do they proactively address issues before they occur? And what are the experiences of their other clients? We always recommend checking their references.

At vshosting, we believe that if something goes wrong, you should have access to a team of senior administrators and technicians who will immediately start working on the problem. Even at 2 in the morning, including weekends. That’s why we provide this quality of service to all clients without exception. Whether it’s a large e-shop with a billion-pound turnover or a smaller e-shop that is just starting out.

We hope you have found these tips helpful. If you have any questions on how to keep your e-shop prepared for the busy season, please contact us at consultation@vshosting.co.uk


With that being said, every innovation comes with its price, and in the cloud, it is indeed true that ‘the sky is the limit.’ Many companies are starting to realise that while the benefits of the cloud are immense, they don’t necessarily need them for their entire project.

What exactly is Hybrid-Cloud?

Companies can identify and separate server usage into a permanent load (known as base load) and peak demands. They then need to consider whether it’s worth moving the base load back from the cloud to on-premise hardware, saving a significant amount of money and using the cloud only to cover peak performance needs. This is when we start talking about a hybrid solution: infrastructure spread between the public cloud and one’s own or rented hardware.
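One hedged way to quantify that split is to treat the load level you sit at or below most of the time as base load and everything above it as burst. The 90th-percentile threshold and the numbers here are illustrative, not a sizing rule:

```python
def split_base_and_burst(hourly_load, base_percentile=0.9):
    """Treat the load level covered ~90% of the time as base load
    (a candidate for on-premise hardware) and the rest as burst
    capacity to be covered by the public cloud."""
    ordered = sorted(hourly_load)
    base = ordered[int(len(ordered) * base_percentile) - 1]
    burst = max(hourly_load) - base
    return base, burst

# An illustrative week of hourly CPU demand (arbitrary units):
# mostly flat, with a handful of campaign-driven spikes.
load = [40] * 150 + [45] * 10 + [120] * 8
base, burst = split_base_and_burst(load)
print(base, burst)  # 45 75
```

In this toy week, dedicated hardware sized for the base load would cover all but 18 hours, and the cloud would only need to absorb the spikes.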

Why should businesses consider hybrid cloud in conjunction with AWS?

In practice, there are two main reasons to consider hybrid cloud: location and cost.

Location is usually not so critical, as Amazon currently offers eight computing locations (regions) within Europe and more than 25 edge locations for CDN.

When it comes to cost, the situation becomes quite interesting. AWS and public clouds, in general, offer a plethora of services beyond traditional virtual servers, in the form of serverless and software-as-a-service offerings. Whether it’s databases, storage, emailing, or more complex tools, clouds can practically cover any requirements but come at a significant cost. However, many of these services can be found within on-premise solutions at a lower cost and higher quality. In the case of AWS, these typically include large databases, various queuing services (SQS), and servers to handle base load.

AWS as one of the layers of hybrid architecture

Overall, it can be said that AWS services can be divided into two categories – true SaaS and services running on Amazon-managed virtual machines. The first category includes SQS, SES, and storage, while the second includes databases. Many clients who come to us with their AWS solutions rely solely on the assumption that AWS services simply do not fail, and try to reduce costs by skipping fault tolerance. This makes sense in the short term, but in the long run it leads to significant losses and problems, especially when an AWS service stops functioning. There can be many reasons, but the most common cause is hardware failure at Amazon. These are also the services that can benefit the most from a hybrid architecture.

For this example, we will look at a Czech e-shop. The e-shop has several servers running the application itself, a database, an email service (SES), storage, and a CDN for content distribution. Most of the time there is a relatively constant load on the servers, and only occasionally, during marketing campaigns, is there a need to cover peak performance demands. At the same time, all components need to be fault-tolerant, which dramatically increases the cost, especially for databases. The situation can then look like the diagram below.

In principle, there is nothing wrong with this setup – except for the single-node RDS, which, in the event of a failure, causes the entire application to crash. Thanks to CloudFront CDN, data reaches customers quickly, dynamically scaled instances cover peak loads, and SES takes care of emailing. For a larger e-shop with approximately 300 GB of data in the database and half a million emails sent monthly, we are talking about a cost of around $3,000 per month just for the infrastructure. Don’t believe it? Just take a look at the calculator below.

If we wanted to ensure high availability for RDS, the cost would be even higher. Additionally, we need to factor in the time of the people managing the system (even if it’s just developing IaC scripts and monitoring logs), easily pushing costs well over 250,000 CZK monthly (approximately the price of a lightly used Škoda Fabia).

Now, let’s see the savings we can achieve by moving some services from AWS to dedicated hardware. The foundation will be a 3-node private cloud on the Proxmox platform. We will move practically everything essential for the e-shop’s operation to this cluster – a database in a master-master setup for fault tolerance, S3 storage, and virtual servers. Due to the placement in the vshosting data center, we can eliminate CDN. In AWS, only the SES email service, DNS Route53, and potential EC2 servers for handling peak traffic, if needed, will remain.

For such a hybrid solution, you will pay around 120,000 CZK monthly (including AWS hardware and AWS management). By implementing a hybrid cloud approach, you reduce costs by more than 50% compared to running purely in AWS. For this price, you not only get a better technical solution but also 24/7 monitoring, management by our experienced administrators, and backups to a geographically separate location.
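Using the article’s own round figures, the saving works out like this. This is a back-of-the-envelope check, not a quote:

```python
# Back-of-the-envelope check of the savings, using the article's round figures.
aws_monthly_czk = 250_000     # estimated all-in AWS cost
hybrid_monthly_czk = 120_000  # hybrid: dedicated cluster plus remaining AWS services

savings = 1 - hybrid_monthly_czk / aws_monthly_czk
print(f"hybrid saves {savings:.0%} per month")  # hybrid saves 52% per month
```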

Impressive, right?
Are you interested in a hybrid solution and want to know how you could save on costs? Email us at consultation@vshosting.co.uk


Why choose a private cloud

Moving applications to the cloud has a number of undeniable benefits, but its most popular form offered by global providers has a number of operational pitfalls: you give up the ability to customize your entire environment, allow a third party to have access to your data, and run the risk of vendor lock-in.

If you are one of the more demanding clients, you need an exclusive, custom-made cloud for your application – a private cloud that, unlike a public cloud, fully adapts to the needs of your application.

The private cloud takes the benefits of the public cloud and grafts them onto a solution that you have full control over and where your freedom is not restricted in any way.

Example of Private cloud infrastructure

What a vshosting~ private cloud on the VMware platform looks like

We build each private cloud on the latest hardware from HP, DELL, Supermicro, Intel, and AMD. If you choose the VMware platform, we will use the vSphere tool for virtualization, in the Standard or Enterprise version. Taking care of the overall design, its implementation, and smooth operation in high-availability mode is a matter of course at vshosting~, all including professional advice from our administrators.

The cloud solutions we provide always include comprehensive server infrastructure management and exceptional 24/7 support with 60-second response times. The private cloud on VMware is no exception.

Our experienced administrators use advanced server management software, VMware vCenter (Standard version), which serves as a centralized platform for controlling the vSphere environment. Whether your applications run on Windows or Linux, we provide server management for you.

Are you considering a private cloud solution on VMware? Get in touch with our consultants: consultation@vshosting.eu and discuss the options and features of VMware. No strings attached.


What sets the Platform for Kubernetes service from vshosting~ apart from similar solutions by Amazon, Google or Microsoft? There is a surprising number of differences.

Kubernetes services development

Clients often ask us how our new Platform for Kubernetes service differs from similar products by Amazon, Google or Microsoft, for example. There are in fact a great many differences so we decided to dig into the details in this article.

Individual infrastructure design

The majority of traditional cloud providers offer an infrastructure platform, but the design and individual creation of the infrastructure is left to the client – or rather to their developers. The overwhelming majority of developers will of course tell you that they’d rather deal with development than read a 196-page guide to using Amazon EKS. Furthermore, unlike most manuals, you really need to read this one since setting up Kubernetes on Amazon isn’t very intuitive at all. 

At vshosting~ we know how frustrating this is for most companies. The development team should be able to concentrate on development and not waste time on something that they’re not specialized in. Therefore, we make sure that unlike traditional cloud services, our Kubernetes solution is tailor-made for each client. With us, you can skip the complex task of choosing from predefined packages, reading overly long manuals, and having to work out which type of infrastructure best meets your needs. We will design Kubernetes infrastructure exactly according to the needs of your application, including load balancing, networking, storage and other essentials. 

In addition, we would love to help you analyze your application before switching to Kubernetes, if you don’t already use it. Based on your requirements we’ll recommend you a selection of the most suitable technologies (consultation is included in the price!), so that everything runs as it should and any subsequent scaling is as straightforward as possible. 

In terms of scaling, with Zerops it’s simple: again, there is no choosing from performance packages. At vshosting~ you simply scale according to your current needs, no hassle. We also offer the option of fine-grained scaling of only the required resources. Does your application need more RAM or disk space because of customer growth? No problem.

After we create a customized infrastructure design, we’ll carry out the individual installation and setup of Kubernetes and load balancers before putting it into live operation. Just for some perspective, with Google, Amazon or Microsoft, all of this would be on your shoulders. At vshosting~ we carefully fine-tune everything in consultation with you. Once launched, Kubernetes will run on our cloud or on the highest quality hardware in our own data center, ServerPark.

The option of combining physical servers and the cloud

Another benefit of Kubernetes from vshosting~ is the option of combining physical servers with the cloud – other Kubernetes providers do not allow this at all. With this option you can start testing Kubernetes on a lower performance Virtual Machine and only then transfer the project into production by adding physical servers (all at runtime) with the possibility of maintaining the existing VMs for development. 

For comparison: Google will for example offer you either the option of on-prem Google Kubernetes Engine or a cloud variant, but you have to choose one or the other. What’s more, you have to manage the on-prem variant “off your own back”. You won’t find the option of combining physical servers with the cloud with Amazon or Microsoft. 

You save up to 50% compared to global Kubernetes providers. Take a look at how we compare.

Global Kubernetes providers

With us you can combine physical servers with the cloud as you see fit and we’ll also take care of administration – leaving you to focus on development. We’ll oversee the management of the operating systems for all Kubernetes nodes and load balancers and we’ll provide regular upgrades of operating systems, kernels etc. (and even an upgrade of Kubernetes, if agreed).

High level of SLA and senior support 24/7

One of the most important criteria in choosing a good Kubernetes platform is its availability. You might be surprised to learn that neither Microsoft AKS nor Google GKE provides an SLA (financially-backed service level agreement); they only claim to “strive to ensure at least 99.5% availability”.

Although Amazon talks about a 99.9% SLA, when you look at their credit return conditions, in reality it’s only a guarantee of 95% availability, since Amazon only returns 100% of credit below this level of availability. If availability drops only slightly below 99.9%, they return just 10% of credit.

At vshosting~ we contractually guarantee 99.97% availability – more than Amazon’s somewhat theoretical SLA and significantly more than the 99.5% not guaranteed by Microsoft and Google. In reality, availability with vshosting~ is more like 99.99%. In addition, our Managed Kubernetes solution works in high-availability cluster mode, which means that if one of the servers or part of the cloud malfunctions, the whole solution immediately starts on a reserve server or another part of the cloud.
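To put those percentages in perspective, availability translates into allowed downtime roughly as follows (a simple 30-day approximation):

```python
def monthly_downtime_minutes(availability_pct, days=30):
    """Allowed downtime per month implied by an availability percentage."""
    total_minutes = days * 24 * 60
    return total_minutes * (1 - availability_pct / 100)

# The SLA levels discussed above: 99.5% allows about 216 minutes of
# downtime a month, while 99.97% allows only about 13.
for sla in (99.5, 99.9, 99.97, 99.99):
    print(f"{sla}% -> {monthly_downtime_minutes(sla):.1f} min/month")
```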

We also guarantee high-speed connectivity and unlimited data flows worldwide. In addition, we ensure dedicated Internet bandwidth for every client. Our network has a capacity of up to 1 Tbps and each route is backed up many times over.

Thanks to the high-availability cluster mode, high network capacity, and backup connections, the Kubernetes solution from vshosting~ is particularly resistant to outages of any part of the cluster. Furthermore, our experienced team continuously monitors your solution and quickly identifies any issues before they can affect the end user. We also have robust AntiDDoS protection which effectively defends the entire cluster against cyber attacks.

Debugging and monitoring of the entire infrastructure

Unlike traditional cloud providers, at vshosting~ our team of senior administrators and technicians monitor your solution continuously 24 hours a day directly from our data center and will react to any problems which may arise within 60 seconds – even on Saturday at 2am. These experts continuously monitor dozens of parameters relating to the entire solution (hardware, load balancers, Kubernetes) and as a result are able to prevent most issues before they become a problem. In addition, we guarantee to repair or replace a malfunctioning server within 60 minutes. 

To keep things as simple as possible, we’ll provide you with just one service contact for all your services – whether it’s about Kubernetes itself, its administration or anything to do with infrastructure. We’ll take care of routine maintenance and complex debugging. Included in the Platform for Kubernetes price, we also offer consultation on specific Dockerfile formats (3 hours a month).


Imagine you’re in the middle of the peak season, marketing campaigns are in full swing and orders are pouring in. Sounds nice, doesn’t it? Unless the database suddenly stops working that is. Perhaps an inattentive colleague accidentally deletes it. Or maybe the disk array fails – it doesn’t matter in the end, the result is the same. Orders start falling into a black hole. You have no idea what someone bought and for how much, let alone where to send it to them. Of course, you have a database backup, but the file is quite large and it can take several hours to restore.

Now what?

Roll up your sleeves, start pulling the necessary information manually from email logs and other dark corners. And hope that nothing escapes you. But those few hours of recovery will be really long and incredibly expensive. Some orders will certainly be lost and you will be catching up with the hours of database downtime for a few more days.

Standard database backup (and why recovery takes so long)

Standard backup, which most larger e-shops are used to, is carried out using the so-called “dump” method, where the entire database is saved as a single file. The file contains sequences of commands that can be edited as needed. This method is very simple to implement. Another advantage is that the backup can be performed directly on the server the database is running on.

However, a significant disadvantage of the dump is the time needed to restore the database from such a backup, especially for large databases. As each command must be replayed separately from the saved file into the database, the whole process can take several hours. At the same time, you can only restore the data contained in the last dump – the latest entries in the database that have not yet been backed up are lost. The result is the unpleasant scenario described in the introduction: a lot of manual work and lost sales.

Want to dive deeper into tailored backup solutions? Visit our detailed page to learn how we can help protect your business.

Premium backup with Point-in-time recovery

In order for our clients to avoid similar problems, we offer them premium database backups. This service allows for very fast recovery of databases, to the state just before the moment of failure. We achieve this by combining snapshot backups with binary log replication.

How does it work exactly?

We create an asynchronous replica from the primary database to the backup server. On this backup server, we make a backup using a snapshot. In parallel, we continuously copy binary logs to the backup server, which record all changes in the primary database. In the event of an accident, the logs will help us determine exactly when the problem occurred. At the same time, thanks to them, we have records of operations that immediately preceded the accident and are not backed up by a snapshot.

By combining these two methods, we can, in case of failure, quickly restore the database to its original state (so-called point-in-time recovery).

First, we restore the latest snapshot backup and copy it from the backup server to the primary server. Then we scan the binary logs to find the point just before the destructive operation occurred and replay them to restore the most recent data.
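The replay step can be sketched with the stock mysqlbinlog tool, which prints the logged changes as SQL to be piped into the restored server. The log file names and timestamp below are hypothetical; in practice the stop point is found by inspecting the logs for the destructive statement first.

```python
# Build a mysqlbinlog command that replays changes up to just before
# the destructive operation. File names and timestamp are illustrative.

def binlog_replay_command(logs: list[str], stop_datetime: str) -> list[str]:
    """Replay binary logs up to (but not including) the failure moment.

    mysqlbinlog emits SQL, which would then be piped into the mysql
    client on the freshly restored primary server.
    """
    return [
        "mysqlbinlog",
        f"--stop-datetime={stop_datetime}",  # everything *before* the failure
        *logs,
    ]

cmd = binlog_replay_command(
    ["mysql-bin.000142", "mysql-bin.000143"],  # logs newer than the snapshot
    "2024-01-15 14:02:00",                     # just before the bad query
)
print(" ".join(cmd))
```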

The whole process can be as much as 10 times faster than recovery from a dump; it is limited only by the disk write speed and the network connection. With a database of around 100 GB, the entire recovery takes on the order of tens of minutes.
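A back-of-the-envelope check of that figure: restore time is bounded by the slower of disk write speed and network throughput. The speeds below are illustrative assumptions, not measurements.

```python
# Lower-bound restore time for copying a snapshot across the network
# to the primary server's disk. Speeds are assumed example values.

def restore_minutes(db_gb: float, disk_write_mb_s: float, net_mb_s: float) -> float:
    """Minutes needed to move db_gb gigabytes through the slower path."""
    bottleneck = min(disk_write_mb_s, net_mb_s)  # MB/s of the slower path
    return db_gb * 1024 / bottleneck / 60        # seconds -> minutes

# 100 GB database, SSD writing ~400 MB/s, 1GE network (~115 MB/s):
print(round(restore_minutes(100, 400, 115)))  # ~15 minutes
```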

What is needed for implementation?

Unlike the classic dump backup, which you can perform directly on the primary server, the premium option requires a dedicated backup server. This server should have performance similar to that of the production server. The size of its storage also matters: we recommend about twice the volume of the primary database’s disk. This capacity should allow snapshots to be kept for at least the last 48 hours (if you opt for hourly backups).
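How far the “twice the primary volume” rule of thumb stretches depends mainly on the change rate of your data. The churn figure below is an assumption for illustration.

```python
# Estimate how many hours of hourly snapshots fit into the backup
# storage next to the base copy. The churn rate is an assumed figure.

def snapshot_hours_kept(primary_gb: float, storage_gb: float,
                        hourly_churn_gb: float) -> int:
    """Hours of hourly snapshots that fit alongside the base copy."""
    spare = storage_gb - primary_gb          # space left after the base copy
    return int(spare // hourly_churn_gb)

# 100 GB database, 200 GB backup storage, ~2 GB of changes per hour:
print(snapshot_hours_kept(100, 200, 2))  # 50 hours – comfortably over 48
```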

We will be happy to recommend the ideal storage volume for your database: it depends on the frequency of backups, the number of changes in your database, and other factors. Book a free consultation at consultation@vshosting.eu.

Premium backup also depends on the choice of database technology. Because it relies on binary logs, it can only be implemented for relational databases such as MariaDB or PostgreSQL. NoSQL databases lack a comparable transaction log and are therefore not compatible with this method.

Another condition is a more conservative database configuration on the backup server: the storage must be consistent at all times so that ZFS snapshots are usable. Performance tuning that trades consistency for speed, which is feasible on the primary server, cannot be used on the backup server. To compensate, the backup server needs faster storage than the primary.

Is the premium database backup for you?

If you can’t afford to lose any data in your business, let alone run for hours without a database, our premium backup with point-in-time recovery is right for you. A typical example of a project that benefits the most is an online store with a large database, where every hour of downtime costs thousands of euros. In that case, the investment in the backup server needed for premium backup pays off very quickly.

Conversely, if you have a smaller database with just a few changes per hour, you’re probably perfectly fine opting for a standard dump backup.

If you have any questions, we’ll be happy to advise you free of charge: consultation@vshosting.eu.


But your cargo will probably sink, and you will scramble to try and save as much of it as possible. To avoid this, we recommend that you prepare in advance for the expected fluctuations in infrastructure load. Among the most demanding e-commerce events are Christmas and Black Friday.

Online stores traditionally start preparing for Christmas in the summer. That’s when it’s time to think not only about marketing campaigns to attract as many customers as possible, but also about the technical infrastructure that needs to withstand their influx to the website. The key is to know the average traffic to your site and what the last season was like. With this comparison, you can roughly calculate the increase you can expect this year.

But what if you do even better than last year? That would be great, but be prepared for this very desirable scenario too. We recommend keeping extra infrastructure capacity of approx. 20% on top of the estimate from the calculation above. However, it is always best to consult your hosting provider directly about tailor-made capacity reserves.

How to calculate the necessary capacity

For a simple estimate of infrastructure capacity, it is enough to compare the expected numbers with the current data. If you assume that the application scales linearly, take last year’s high-season increase in traffic compared to the average traffic in the first half of the year. Apply that percentage increase to this year’s traffic and you’ll find out what system resources you’ll need this time around.
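The linear-scaling estimate above, with the ~20% reserve recommended earlier, works out like this. The traffic numbers are illustrative assumptions.

```python
# Project this year's peak load from last year's seasonal factor,
# assuming the application scales linearly. Numbers are examples.

def estimated_peak_rps(last_avg: float, last_peak: float,
                       this_year_avg: float, reserve: float = 0.20) -> float:
    """Peak load to provision for, including a safety reserve."""
    seasonal_factor = last_peak / last_avg  # e.g. 3x at the high season
    return this_year_avg * seasonal_factor * (1 + reserve)

# Last year: average 200 req/s, peak 600 req/s. This year's average: 260 req/s.
print(round(estimated_peak_rps(200, 600, 260)))  # 936 req/s to provision for
```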

The advantages of this method are its speed and minimal cost. However, it is only a rough approximation. A more accurate alternative would be the so-called performance test. During this process, we simulate large traffic using an artificial load, while monitoring which components of the infrastructure become bottlenecks. This method also reveals configuration or technological limitations. However, it is fair to mention that performance tests are time-consuming as well as highly specific depending on the technologies used. For small and medium-sized online stores, they can therefore be unnecessarily expensive.

Pro tip: the popular Redis database, for example, is single-threaded, so once the performance of a single core is saturated, Redis has reached its maximum. It doesn’t matter that the server has dozens of free cores available: a single-threaded application simply cannot use them.

Getting technical: 4 things to watch

CPU – beware of misleading CPU usage graphs when hyperthreading is enabled. A graph that aggregates utilization across all processor cores greatly distorts the actually available performance. Although hyperthreading theoretically doubles the number of cores, in practice it doesn’t add twice the power. If you see values above 50% on such a graph, you are very close to the maximum, which typically sits somewhere between 60 and 70%, depending on the type of load.
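One way to see why the real ceiling sits around 60–70% on such a graph: the first 50% corresponds to the full physical cores, and the sibling threads add only a fraction of a core each. The ~25% SMT gain used below is an assumption for illustration; real gains depend on the workload.

```python
# Graph percentage at which real capacity runs out on a 2-way
# hyperthreaded CPU. The SMT throughput gain is an assumed figure.

def effective_graph_max(smt_gain: float = 0.25) -> float:
    """Aggregated-graph % where actual compute capacity is exhausted."""
    # Physical cores fill the first half of the graph; sibling threads
    # contribute only smt_gain of a core each on top of that.
    return (1 + smt_gain) / 2 * 100

print(effective_graph_max())             # 62.5 – inside the 60–70% range above
print(round(effective_graph_max(0.30), 1))  # 65.0 with a more optimistic gain
```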

RAM – memory usage usually does not grow linearly. For databases, for example, some memory allocations are global while others are made separately for each connection. It is often forgotten that RAM must not fill up completely: if it does, a single small allocation request is enough for the kernel to kill the process that asked for the memory.

The operating system typically uses spare memory as a disk cache, which has a positive effect on performance. If there is not enough memory for caching, the number of disk operations increases.

Disks – low disk speeds are a common reason why some operations become slow or stop working entirely at high loads. Whether the solution is sufficient will only show at high load or during a performance test. Disk load can be reduced by more intensive caching, which in turn requires more RAM. Another option is to upgrade, for example, from SATA/SAS SSDs to NVMe disks.

Capacity also needs to be considered, because it too can affect overall performance. All file systems using COW (copy-on-write) – for example the ZFS we use, or file systems such as btrfs or WAFL – need spare capacity to run, and they share an unpleasant trait: once about 90% of the capacity is occupied, performance starts to degrade rapidly. Do not underestimate this – in times of heavy load, more data is created and capacity is consumed faster than usual.
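A simple guard against the copy-on-write capacity cliff described above: alert well before the pool reaches the ~90% mark. The 90% threshold is the article’s figure; the warning margin is an assumed safety buffer.

```python
# Classify pool usage against the ~90% COW performance cliff.
# The 10% warning margin is an assumed safety buffer.

def pool_status(used_gb: float, total_gb: float,
                degrade_at: float = 0.90, warn_margin: float = 0.10) -> str:
    """Return 'ok', 'warning', or 'critical' for a COW pool's usage."""
    usage = used_gb / total_gb
    if usage >= degrade_at:
        return "critical: COW performance degradation likely"
    if usage >= degrade_at - warn_margin:
        return "warning: approaching the 90% cliff, free up space"
    return "ok"

print(pool_status(700, 1000))  # ok
print(pool_status(850, 1000))  # warning
print(pool_status(920, 1000))  # critical
```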

Network layer – especially important for cluster solutions, where servers communicate heavily with each other and the internal communication speed can easily become insufficient. Redundancy is also worth considering – the vshosting~ standard is to double the network layer using LACP, so, for example, we bond 2x 1GE into one 2GE interface. This creates 2GE of capacity, but in practice it is not wise to use more than 1GE of it, because beyond that point the server loses its redundancy.

Even a 10GE interface does not guarantee that the solution will suffice under all circumstances. All it takes is a small developer error in which a simple query transfers a large amount of unnecessary data from the database (typically select * from… with the application then taking just the first X rows), and even such a large bandwidth is easily depleted.
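The LACP sizing rule above can be put as a one-liner: bonding n links gives n times the capacity, but to survive one link failure you should plan to use only (n - 1) links’ worth of it.

```python
# Usable capacity of an LACP bond that must remain redundant:
# one link's worth is held back to survive a single link failure.

def safe_lacp_capacity_ge(links: int, link_speed_ge: int = 1) -> int:
    """GE of bandwidth you can use while still tolerating one link loss."""
    return (links - 1) * link_speed_ge

print(safe_lacp_capacity_ge(2))      # 1 GE usable out of a 2 GE bond
print(safe_lacp_capacity_ge(2, 10))  # 10 GE usable out of a 20 GE bond
```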

Can we help evaluate your infrastructure capacity? Email us at consultation@vshosting.eu.


For most of us, 2020 developed completely differently than we imagined at the time of our New Year’s toast. The pandemic turned everything upside down and much still remains that way. Not all the changes were for the worse, however, and many plans got implemented despite the virus. Here are vshosting~ milestones of last year.

Turnover plus 20%

Each year since our founding in 2006, we have been proud of our quick growth. But 2020 was really something. Already in the spring, the coronavirus switched digitalization over to rocket fuel and everyone rushed to be online like never before. For us, it immediately meant another business peak comparable to Christmas, and since then the situation has calmed down only partially.

It was challenging, but we are pleased to say that we helped dozens of clients move their business primarily into the world of the Internet. We also helped another hundred or so businesses dramatically strengthen their existing e-shops. In numbers, this represents a year-on-year increase in turnover of as much as 20%.

Data center capacity grew by 2,500 servers

Thanks to the rapid influx of new clients (thank you!), our ServerPark data center has begun to burst at the seams. That is why we started to build the last stage of the data center, which we successfully completed last fall.

We used our technological reserves to the maximum and installed large 52U racks, increasing the data center’s capacity by about 2,500 servers! To keep up with such a boost, we also added 2 new transformer stations (800 kVA and 630 kVA) and expanded the cooling capacity with 4 additional condensers on the roof and 2 new air conditioning units (each with an output of 100 kW).

Connectivity increased to 2 x 100 Gbps

We also keep up with the growing data flows of our clients. vshosting~ now has a fully redundant capacity of 2 x 100 Gbps to the NIX.CZ node – five times the original 2 x 20 Gbps! We also upgraded the network technologies connecting our ServerPark data center with the TTC DC1 data center (used for backups in a geographically separate location and for connections with other operators).

The new technology has a transmission speed of 2 x 100 Gbps and connects both data centers via two independent optical routes. All data transfers, including backups, are even faster and more secure than before.

German investor patronage

2020 marked a huge milestone for us: vshosting~ became part of the German investment group Contabo. Contabo operates growing hosting businesses across Europe as well as in the USA, and thanks to their support, we can grow at an even faster pace than before.

In addition to hosting, our new investors are also helping us kick off the long-awaited Zerops project, which we launched in a private version in August last year.

What will 2021 bring?

We will not be idle this year either as we are preparing a lot of news for you. First of all, the public version of Zerops won’t be long in coming – we plan to launch it in April 2021. You can look forward to a fully functional, automated platform for developers that will make their coding dramatically easier.

Another novelty will be the launch of an AWS-based solution. We thus cater to our clients who have shown interest in this opportunity, and we offer a more advantageous alternative for those who do not want to give up AWS completely.

Last but not least, we will dive into our long-awaited expansion to the Hungarian market, where we want to offer reliable hosting services for e-commerce projects.


In August 2015, we officially opened our own ServerPark data center in Hostivař, Prague. As a hosting service provider, vshosting~ had already been operating for 9 years at that time. However, with the launch of ServerPark, a new era began. The most important business lessons we have learned in the 5 years that followed could be summarized thusly: paranoia = the key to success, there’s no such thing as “can’t”, and herding cats.

Paranoia = the key to success

As hosting providers, we have paranoia in the job description already. If you also run a data center, this is doubly true. Maximum security and ultra-high availability of services are of the utmost importance for our clients. For this reason, we have implemented even stricter measures in our data center compared to the industry standard. We have doubled all the infrastructure and even added an extra reserve for each element in the data center.

In contrast, most other data centers settle for mere duplication of the infrastructure, or even just a single reserve. It’s much cheaper, and the chances of, say, more than half of your air conditioning units failing are minimal, right?

You know what they say: just because we’re paranoid doesn’t mean they’re not after us. Or that three air conditioning units cannot break at the same time. Therefore, we were not satisfied with this standard and we can say that it has paid off several times in those 5 years. Speaking of air conditioning: for example, we once performed an inspection of individual devices (for which, of course, it is necessary to turn them off) and suddenly a compressor got stuck in one of the spare air conditioning units. It would be a disaster for a standard data center, but it didn’t even faze us. Well, we just had to order the compressor…

Unlike most other data centers, even after 5 years, we can boast of completely uninterrupted operation. Extra care pays off, and this is doubly true for a data center.

We also rely on large stocks of hardware. So large that many other hosting providers would find them unnecessary. However, we have once again confirmed that if we want to provide clients with excellent service in all circumstances, “supply paranoia” pays off.

A great example was the beginning of this year’s coronavirus crisis. Due to the increased interest in online shopping, many e-shops needed to significantly increase capacity – even double it in some cases. At the same time, the pandemic around the world has essentially halted the flow of hardware supplies. If we had to order the necessary servers, the client’s infrastructure would simply collapse under the onslaught of customers before the hardware would arrive. But we simply pulled the servers out of the warehouse and installed them almost immediately.

There’s no such thing as “can’t”

Another lesson we learned from running a data center is the need to think out of the box. As hosting service providers and data center operators, we must fulfill the client’s ideas about their server infrastructure from start to finish. Having the “we can’t do that” attitude is simply not an option in this business. You either come up with a solution or you’re done for in this industry.

In 5 years of operating the data center, we have faced our fair share of challenges. Whether it’s designing a giant infrastructure for the biggest local e-commerce players or migrating an entire hosting solution to a brand-new technology unknown to us, we’ve learned never to give up before the fight. The results are usually completely new, unique solutions, courtesy of our amazing admins. It is said that an administrator’s work is typically quite routine. Well, not in our company. We have more than enough challenges for each and every one of them.

Of course, there are also requirements that simply cannot be met. The laws of physics are, after all, still in place, and sometimes the price of a solution is completely outside business reality. Even in such cases, however, we have found that it is always worthwhile to look for alternative solutions. They won’t match the original specification completely, but you can come pretty close.

Summing up: we always look for ways to honor our client’s wishes, not for reasons why we cannot.

Herding cats aka how to manage a company with teams all over Prague

When we were building the data center, there were about 20 of us in the company. Above the data hall, we added a floor with offices and facilities with then seemingly unreasonably large capacity of 50 people. We figured that it would take us a good number of years to grow so much. In the end, it took only three.

Today there are about eighty of us (plus 5 dogs, 1 lizard, and 1 hedgehog working part-time). We’ve had zero chance of fitting into the data center for quite some time, so we’ve undertaken slight decentralization. The developers are based in Holešovice and sales and marketing reside in Karlín. From the point of view of company management and HR, such a fragmentation of the company presents a great challenge.

What were the lessons learned? Primarily, that effective communication is really hard but totally worth it. After all, many growing companies run into some type of communication trouble: once there’s more than 25 of you, it is no longer enough to naturally pass on information while waiting for the coffee to brew. When you combine this growth with a division of teams to different locations, the effect multiplies.

We have learned (and in fact are still learning) to share information between teams more regularly and in a more structured way to avoid misunderstandings. Because misunderstandings give rise to unnecessary conflicts, inefficient work, and general frustration. On the other hand, we are no fans of endless meetings with everyone. So what’s the best way to go about it?

For example, we regularly send everyone an in-house newsletter, to which each team in the company contributes updates about what they are doing, what new things they are preparing, and what has been particularly successful. Thanks to this, even a new salesman knows what technicians are doing for the success of our company, and admins understand why marketing wants them to check articles. We break our team stereotypes and constantly remind ourselves that we all pull together.

Our wonderful HR team also makes sure to show up in all our offices every week. That way they have a very good idea of the atmosphere everywhere in the company and the preferences of specific teams. A pleasant side effect is the spontaneous post-work tasting of alcoholic beverages, during which, as is well known, relationships are strengthened the most.

After 5 years of operating a data center and 14 running the whole company, we are by no means experts. However, we keep going forward, never stop working on ourselves, and most importantly: we still love it.

We have successfully assisted with migrations for hundreds of clients over the course of 17 years. Join them.

  1. Schedule a consultation

    Simply leave your details. We’ll get back to you as soon as possible.

  2. Free solution proposal

    A no commitment discussion about how we can help. We’ll propose a tailored solution.

  3. Professional implementation

    We’ll create the environment for a seamless migration according to the agreed proposal.
