vshosting~

With that being said, every innovation comes with its price, and in the cloud, it is indeed true that ‘the sky is the limit.’ Many companies are starting to realise that while the benefits of the cloud are immense, they don’t necessarily need them for their entire project.

What exactly is Hybrid-Cloud?

Companies can identify and separate server usage into a permanent load (known as base load) and peak demands. They then need to consider whether it’s worth moving the base load from the cloud back to on-premise hardware, saving a significant amount of money and using the cloud only to cover peak performance needs. This is when we start talking about a hybrid solution: infrastructure spread between the public cloud and one’s own or rented hardware.
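The base/peak split described above can be sketched in a few lines. This is a minimal illustration with made-up load numbers; the use of the minimum as the base-load threshold is an assumption for the sketch, not a sizing rule.

```python
# Sketch: split an hourly load series into base load and peak demand.
# The numbers and the min() threshold are illustrative assumptions only.
hourly_load = [40, 42, 41, 45, 44, 43, 90, 120, 60, 44, 43, 42]  # e.g. % of one server

base = min(hourly_load)                       # permanent load, present every hour
peak_extra = [h - base for h in hourly_load]  # the burst the cloud would absorb

print(f"base load: {base}")             # candidate for on-premise hardware
print(f"max burst: {max(peak_extra)}")  # candidate for cloud autoscaling
```

The base portion is what you would consider moving to your own hardware; only the `peak_extra` part still needs elastic cloud capacity.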

Why should businesses consider hybrid cloud in conjunction with AWS?

In practice, there are two main reasons to consider hybrid cloud: location and cost.

Location is usually not critical, as Amazon currently offers eight computing locations (regions) within Europe and more than 25 edge locations for its CDN.

When it comes to cost, the situation becomes quite interesting. AWS and public clouds in general offer a plethora of services beyond traditional virtual servers, in the form of serverless and software-as-a-service offerings. Whether it’s databases, storage, emailing, or more complex tools, the cloud can cover practically any requirement, but at a significant cost. However, many of these services can be replaced by on-premise solutions at a lower cost and higher quality. In the case of AWS, these typically include large databases, various queuing services (SQS), and the servers handling the base load.

AWS as one of the layers of hybrid architecture

Overall, AWS services can be divided into two categories – true SaaS and services running on Amazon-managed virtual machines. The first category includes SQS, SES, or storage, while the second includes databases. Many clients who come to us with their AWS solutions rely on the assumption that AWS services simply do not fail, and they try to reduce costs by skipping fault tolerance. This makes sense in the short term, but in the long run it leads to significant losses and problems, especially when an AWS service stops functioning. There can be many reasons, but the most common cause is hardware failure on Amazon’s side. These are also the services that can benefit the most from a hybrid architecture.

For this example, we will be looking at a Czech e-shop. The e-shop has several servers for running the application itself, a database, an email service (SES), storage, and a CDN for content distribution. Most of the time, there is a relatively constant load on the servers; only occasionally, during marketing campaigns, is there a need to cover peak performance demands. At the same time, all components need to be fault-tolerant, which dramatically increases the cost, especially for databases. The situation can then look like the diagram below.

In principle, there is nothing wrong with this setup – except for the single-node RDS, which, in the event of a failure, brings down the entire application. Thanks to the CloudFront CDN, data reaches customers quickly, dynamically scaled instances cover peak loads, and SES takes care of emailing. For a larger e-shop with approximately 300 GB of data in the database and half a million emails sent monthly, we are talking about a cost of around $3,000 per month just for the infrastructure. Don’t believe it? Just run the numbers through the AWS pricing calculator.

If we wanted to ensure high availability for RDS, the cost would be even higher. Additionally, we need to factor in the time of the people managing the system (even if it’s just developing IaC scripts and monitoring logs), easily pushing total costs over 250,000 CZK monthly (approximately the price of a lightly used Skoda Fabia).
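One way to sanity-check that 250,000 CZK figure is a back-of-the-envelope model. Every input below (exchange rate, personnel cost) is an assumption for the sketch, not a quote; only the $3,000 infrastructure estimate comes from the text above.

```python
# Illustrative sanity check of the ~250,000 CZK monthly figure.
# All inputs are loudly assumed values, not real prices.
usd_to_czk = 23.0        # assumed USD/CZK exchange rate
aws_infra_usd = 3000     # AWS infrastructure estimate from the text
ops_cost_czk = 180_000   # assumed fully loaded monthly cost of admin/IaC/monitoring time

total_czk = aws_infra_usd * usd_to_czk + ops_cost_czk
print(round(total_czk))  # lands in the ballpark of 250,000 CZK
```

Even before adding a multi-AZ RDS setup, the infrastructure plus personnel total sits near the quarter-million mark.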

Now, let’s see the savings we can achieve by moving some services from AWS to dedicated hardware. The foundation will be a 3-node private cloud on the Proxmox platform. We will move practically everything essential for the e-shop’s operation to this cluster – a database in a master–master setup for fault tolerance, S3-compatible storage, and virtual servers. Thanks to the placement in the vshosting~ data center, we can eliminate the CDN. Only the SES email service, Route 53 DNS, and, if needed, EC2 servers for handling peak traffic will remain in AWS.

For such a hybrid solution, you will pay around 120,000 CZK monthly (including hardware and AWS management). By adopting a hybrid cloud approach, you reduce costs by more than 50% compared to running purely in AWS. For this price, you not only get a better technical solution but also 24/7 monitoring, management by our experienced administrators, and backups to a geographically separate location.

Impressive, right?
Are you interested in a hybrid solution and want to know how you could save on costs? Email us at consultation@vshosting.co.uk



Why choose a private cloud

Moving applications to the cloud has a number of undeniable benefits, but its most popular form offered by global providers has a number of operational pitfalls: you give up the ability to customize your entire environment, allow a third party to have access to your data, and run the risk of vendor lock-in.

If you are one of the more demanding clients, you need an exclusive cloud for your application: one made just for you, custom-built. You need a private cloud that, unlike a public cloud, fully adapts to the needs of your application.

The private cloud takes the benefits of the public cloud and grafts them onto a solution that you have full control over and where your freedom is not restricted in any way.

Example of Private cloud infrastructure

What a vshosting~ private cloud on the VMware platform looks like

We build each private cloud on the latest hardware from HP, DELL, Supermicro, Intel, and AMD. If you choose the VMware platform, we will use the vSphere tool for virtualization, in the Standard or Enterprise version. Taking care of the overall design and its implementation, as well as ensuring smooth operation in high-availability mode, is a matter of course at vshosting~. All of this includes professional advice from our administrators.

The cloud solutions we provide always include comprehensive server infrastructure management and exceptional 24/7 support with 60-second response times. The private cloud on VMware is no exception.

Our experienced administrators use advanced server management software, VMware vCenter (Standard version), which serves as a centralized platform for controlling the vSphere environment. Whether your applications run on Windows or Linux, we provide server management for you.

Are you considering a private cloud solution on VMware? Get in touch with our consultants: consultation@vshosting.eu and discuss the options and features of VMware. No strings attached.



What sets the Platform for Kubernetes service from vshosting~ apart from similar solutions by Amazon, Google, or Microsoft? There are a surprising number of differences.

Kubernetes services development

Clients often ask us how our new Platform for Kubernetes service differs from similar products by Amazon, Google or Microsoft, for example. There are in fact a great many differences so we decided to dig into the details in this article.

Individual infrastructure design

The majority of traditional cloud providers offer an infrastructure platform, but the design and individual creation of the infrastructure is left to the client – or rather to their developers. The overwhelming majority of developers will of course tell you that they’d rather deal with development than read a 196-page guide to using Amazon EKS. Furthermore, unlike most manuals, you really need to read this one since setting up Kubernetes on Amazon isn’t very intuitive at all. 

At vshosting~ we know how frustrating this is for most companies. The development team should be able to concentrate on development and not waste time on something that they’re not specialized in. Therefore, we make sure that unlike traditional cloud services, our Kubernetes solution is tailor-made for each client. With us, you can skip the complex task of choosing from predefined packages, reading overly long manuals, and having to work out which type of infrastructure best meets your needs. We will design Kubernetes infrastructure exactly according to the needs of your application, including load balancing, networking, storage and other essentials. 

In addition, we would love to help you analyze your application before switching to Kubernetes, if you don’t already use it. Based on your requirements we’ll recommend you a selection of the most suitable technologies (consultation is included in the price!), so that everything runs as it should and any subsequent scaling is as straightforward as possible. 

In terms of scaling, it’s simple. Again, there is no choosing from performance packages: at vshosting~ you simply scale according to your current needs, no hassle. We also offer fine-grained scaling of only the required resources. Does your application need more RAM or disk space because of customer growth? No problem.

After we create a customized infrastructure design, we’ll carry out the individual installation and setup of Kubernetes and load balancers before putting it into live operation. Just for some perspective: with Google, Amazon, or Microsoft, all of this would be on your shoulders. At vshosting~ we carefully fine-tune everything in consultation with you. Once launched, Kubernetes will run on our cloud or on the highest quality hardware in our own data center, ServerPark.

The option of combining physical servers and the cloud

Another benefit of Kubernetes from vshosting~ is the option of combining physical servers with the cloud – other Kubernetes providers do not allow this at all. With this option you can start testing Kubernetes on a lower performance Virtual Machine and only then transfer the project into production by adding physical servers (all at runtime) with the possibility of maintaining the existing VMs for development. 

For comparison: Google, for example, will offer you either on-prem Google Kubernetes Engine or a cloud variant, but you have to choose one or the other. What’s more, you have to manage the on-prem variant on your own. You won’t find the option of combining physical servers with the cloud at Amazon or Microsoft either.

You save up to 50% compared to global Kubernetes providers. Take a look at how we compare.

[Comparison table: vshosting~ vs. global Kubernetes providers]

With us you can combine physical servers with the cloud as you see fit and we’ll also take care of administration – leaving you to focus on development. We’ll oversee the management of the operating systems for all Kubernetes nodes and load balancers and we’ll provide regular upgrades of operating systems, kernels etc. (and even an upgrade of Kubernetes, if agreed).

High level of SLA and senior support 24/7

One of the most important criteria in choosing a good Kubernetes platform is its availability. You might be surprised to learn that neither Microsoft AKS nor Google GKE provides an SLA (a financially-backed service level agreement); they only claim to “strive to ensure at least 99.5% availability”.

Although Amazon talks about a 99.9% SLA, when you look at their credit return conditions, in reality it’s only a guarantee of 95% availability, since Amazon only returns 100% of credit below that level. If availability drops only slightly below 99.9%, they return just 10% of credit.

At vshosting~ we contractually guarantee 99.97% availability – more than Amazon’s somewhat theoretical SLA and significantly more than the 99.5% not guaranteed by Microsoft and Google. In reality, availability with vshosting~ is closer to 99.99%. In addition, our Managed Kubernetes solution works in high-availability cluster mode, which means that if one of the servers or a part of the cloud malfunctions, the whole solution immediately starts on a reserve server or on another part of the cloud.
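To make these percentages tangible, here is a small sketch converting availability figures into allowed downtime per month (assuming a 30-day month for simplicity):

```python
# Convert availability percentages into allowed downtime per 30-day month.
MINUTES_PER_MONTH = 30 * 24 * 60  # 43,200 minutes

def downtime_minutes(availability_pct: float) -> float:
    """Minutes per month a service may be down and still meet the target."""
    return MINUTES_PER_MONTH * (1 - availability_pct / 100)

for pct in (99.5, 99.9, 99.97):
    print(f"{pct}% availability -> {downtime_minutes(pct):.1f} min/month")
```

The gap is stark: 99.5% permits about 3.6 hours of downtime a month, while 99.97% permits only about 13 minutes.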

We also guarantee high-speed connectivity and unlimited data flows to the whole world. In addition, we ensure dedicated Internet bandwidth for every client. Our network has a capacity of up to 1 Tbps and each route is backed up many times over.

Thanks to the high-availability cluster mode, high network capacity, and backup connectivity, the Kubernetes solution from vshosting~ is particularly resistant to outages of any part of the cluster. Furthermore, our experienced team continuously monitors your solution and quickly identifies any emerging issues before they can affect the end user. We also have robust AntiDDoS protection which effectively defends the entire cluster against cyber attacks.

Debugging and monitoring of the entire infrastructure

Unlike traditional cloud providers, at vshosting~ our team of senior administrators and technicians monitor your solution continuously 24 hours a day directly from our data center and will react to any problems which may arise within 60 seconds – even on Saturday at 2am. These experts continuously monitor dozens of parameters relating to the entire solution (hardware, load balancers, Kubernetes) and as a result are able to prevent most issues before they become a problem. In addition, we guarantee to repair or replace a malfunctioning server within 60 minutes. 

To keep things as simple as possible, we’ll provide you with just one service contact for all your services – whether it’s about Kubernetes itself, its administration, or anything to do with infrastructure. We’ll take care of routine maintenance and complex debugging. Included in the Platform for Kubernetes price, we also offer consultation regarding specific Dockerfile formats (3 hours a month).



In August 2015, we officially opened our own ServerPark data center in Hostivař, Prague. As a hosting service provider, vshosting~ had already been operating for 9 years at that time. However, with the launch of ServerPark, a new era began. The most important business lessons we have learned in the 5 years that followed can be summarized as follows: paranoia = the key to success, there’s no such thing as “can’t”, and herding cats.

Paranoia = the key to success

As hosting providers, we have paranoia in the job description already. If you also run a data center, this is doubly true. Maximum security and ultra-high availability of services are of the utmost importance for our clients. For this reason, we have implemented even stricter measures in our data center compared to the industry standard. We have doubled all the infrastructure and even added an extra reserve for each element in the data center.

In contrast, most other data centers opt for mere duplication of the infrastructure, or even just a single reserve. It’s much cheaper, and the chances of, say, more than half of your air conditioning units failing are minimal, right?

You know what they say: just because we’re paranoid doesn’t mean they’re not after us. Or that three air conditioning units cannot break at the same time. Therefore, we were not satisfied with this standard and we can say that it has paid off several times in those 5 years. Speaking of air conditioning: for example, we once performed an inspection of individual devices (for which, of course, it is necessary to turn them off) and suddenly a compressor got stuck in one of the spare air conditioning units. It would be a disaster for a standard data center, but it didn’t even faze us. Well, we just had to order the compressor…

Unlike most other data centers, even after 5 years, we can boast of completely uninterrupted operation. Extra care pays off, and this is doubly true for a data center.

We also rely on large stocks of hardware. So large that many other hosting providers would find them unnecessary. However, we have once again confirmed that if we want to provide clients with excellent service in all circumstances, “supply paranoia” pays off.

A great example was the beginning of this year’s coronavirus crisis. Due to the increased interest in online shopping, many e-shops needed to significantly increase capacity – even double it in some cases. At the same time, the pandemic essentially halted the flow of hardware supplies around the world. If we had had to order the necessary servers, the clients’ infrastructure would simply have collapsed under the onslaught of customers before the hardware arrived. Instead, we just pulled the servers out of the warehouse and installed them almost immediately.

There’s no such thing as “can’t”

Another lesson we learned from running a data center is the need to think out of the box. As hosting service providers and data center operators, we must fulfill the client’s ideas about their server infrastructure from start to finish. Having the “we can’t do that” attitude is simply not an option in this business. You either come up with a solution or you’re done for in this industry.

In 5 years of operating the data center, we have faced our fair share of challenges. Whether it’s designing a giant infrastructure for the biggest local e-commerce players or migrating an entire hosting solution to a brand new technology unknown to us, we’ve learned to never give up before the fight. The results are usually completely new, unique solutions, courtesy of our amazing admins. It is said that an administrator’s work is typically quite monotonous. Well, not in our company. We have more than enough challenges for each and every one of them.

Of course, there are also requirements that simply cannot be met. The laws of physics are, after all, still in place. Also, sometimes the price of a solution is completely outside business reality. Even in such cases, however, we have come to the conclusion that it is always worthwhile to look for alternative solutions. They will not match the original specifications completely, but you can come pretty close.

Summing up: we always look for ways to honor our client’s wishes, not for reasons why we cannot.

Herding cats aka how to manage a company with teams all over Prague

When we were building the data center, there were about 20 of us in the company. Above the data hall, we added a floor with offices and facilities with a then seemingly unreasonable capacity of 50 people. We figured that it would take us a good number of years to grow that much. In the end, it took only three.

Today there are about eighty of us (plus 5 dogs, 1 lizard, and 1 hedgehog working part-time). We’ve had zero chance of fitting into the data center for quite some time, so we’ve undertaken slight decentralization. The developers are based in Holešovice and sales and marketing reside in Karlín. From the point of view of company management and HR, such a fragmentation of the company presents a great challenge.

What were the lessons learned? Primarily, that effective communication is really hard but totally worth it. After all, many growing companies run into some type of communication trouble: once there’s more than 25 of you, it is no longer enough to naturally pass on information while waiting for the coffee to brew. When you combine this growth with a division of teams to different locations, the effect multiplies.

We have learned (and in fact are still learning) to share information between teams more regularly and in a more structured way to avoid misunderstandings. Because misunderstandings give rise to unnecessary conflicts, inefficient work, and general frustration. On the other hand, we are no fans of endless meetings with everyone. So what’s the best way to go about it?

For example, we regularly send everyone an in-house newsletter, to which each team in the company contributes updates about what they are doing, what new things they are preparing, and what has been particularly successful. Thanks to this, even a new salesperson knows what the technicians are doing for the success of our company, and admins understand why marketing wants them to review articles. We break down barriers between teams and constantly remind ourselves that we all pull together.

Our wonderful HR team also makes sure to show up in all our offices every week. That way, they have a very good idea of the atmosphere everywhere in the company and the preferences of specific teams. Pleasant side effects are the spontaneous post-work tastings of alcoholic beverages, during which, as is well known, relationships are strengthened the most.

After 5 years of operating a data center and 14 running the whole company, we are by no means experts. However, we keep going forward, never stop working on ourselves, and most importantly: we still love it.



We made major upgrades to the infrastructure of one of the biggest e-commerce projects in the Czech Republic and Slovakia: GymBeam. And these are not just minor improvements – we exchanged all the hardware in the application part of their cluster and installed extra-powerful servers (8 of those bad boys in total).

How did the installation go, what does it mean for GymBeam, which advantages do EPYC servers provide, and should you be thinking of this upgrade yourself? You’ll find out all that and more in this article. 

What’s so epic about EPYC servers?

Until recently, we focused on Intel Xeon processors at vshosting~. These have dominated the server market (and not only that one) for many years. In the last couple of years, however, the significant improvements in AMD’s (Advanced Micro Devices) portfolio and manufacturing technology caught our attention.

AMD now offers processors with a better price/performance ratio, a higher number of cores per CPU, and better energy management (thanks, among other things, to a different manufacturing process – AMD Zen 2 at 7 nm vs. Intel Xeon Scalable at 14 nm). These are the processors installed in the AMD EPYC servers we have used for the new GymBeam infrastructure.

AMD EPYC server processors

These are state-of-the-art servers with record-breaking processors offering up to 64 cores and 128 threads (!!!). Compared to the standard Intel Xeon Scalable line, where we offer processors with a maximum of 28 cores per CPU, that is more than double the number of computing cores.

The EPYC server processors are manufactured using a 7 nm process and a chiplet design with multiple dies per package, which allows all 64 cores to be packed into a single CPU and ensures truly noteworthy performance.

How did the installation go

The installation of the first servers based on this new platform went flawlessly. Our first step was a careful selection of components and unification of the platform for all future installations. The most important part at the very beginning was choosing the best possible platform architecture together with our suppliers and specialists: the best chassis, server board, and peripherals, including more powerful 20,000 RPM fans for sufficient cooling. We will apply this setup to all future AMD EPYC installations. We were determined that the new platform reflect the high standard of our other deployments – no room for compromise.

EPYC server platform

As a result, the AMD EPYC servers joined our “fleet” without a hitch. The servers are based on chassis and motherboards from SuperMicro, and we can offer both 1 Gbps and 10 Gbps connectivity, with hard disks connected either on-board or via a physical RAID controller, according to the customer’s preferences. We continue to use drives from our standard offer, namely SATA3/SAS2 or PCI-e NVMe. Read more about the differences between SATA and NVMe disks.

Because this is a new platform for us, we have of course stocked our warehouse with spare equipment and are ready to deploy it immediately should any issue arise in production.

Advantages of the hardware for GymBeam’s business

The difference compared to the previous Intel processors is huge: besides the larger number of cores, the computing power per core is also higher. Another performance increase comes from enabling simultaneous multithreading (Intel’s Hyper-Threading). We turn this off on Intel processors for security reasons, but on the AMD EPYC processors there is no reason to do so (as of yet, anyway).

The result of the overall increase in performance is, firstly, a significant acceleration in web loading due to higher performance per core. This is especially welcomed by GymBeam customers, for whom shopping in the online store has now become even more pleasant. Speeding up the web will also improve SEO and raise search engine “karma” overall.

In addition to faster loading, GymBeam gained a large performance reserve for its marketing campaigns. The new infrastructure can handle even a several-fold increase in traffic in the case of intensive advertising.

Last but not least, at GymBeam they can now be sure they are running on the best hardware available 🙂

Would you benefit from upgrading to the EPYC servers?

Did the mega-powerful EPYC processors catch your interest? Are you now considering whether they would pay off in your case? When it comes to optimizing your price/performance ratio, the number one question is how much performance your project’s infrastructure needs.

It makes sense to consider AMD EPYC processors in a situation where your existing processors are running out of steam and upgrading to a higher Intel Xeon line would not make economic sense. That limit currently sits at about 2× 14-core to 2× 16-core configurations; above this performance level, Intel’s prices are disproportionately high at the moment.

Of course, the reason for the upgrade does not have to be purely technical or economic – the feeling that you run services on the fastest and best the market has to offer, of course, also has its value.



There can be quite a few reasons to leave web hosting: maybe you need more setup flexibility, higher resource limits, better availability or performance, or you want to use specific software that doesn’t quite agree with your web hosting. Web hosting has simply become too small for you.

On the other hand, web hosting provides a relatively high level of user-friendliness: you control it using a graphic user interface, the provider takes care of everything regarding both hardware and software, and you don’t actually have to worry about much of that behind the scenes stuff. Upgrading to a new hosting solution that would provide the same level of comfort is, therefore, no easy task.

Where to then?

Web hosting alternatives that offer more flexibility and performance are plentiful on the market. Among the closest from a user’s perspective are VPS and managed services. You may also consider a dedicated server or getting your own physical server.

Because reality can be quite a bitch sometimes, each of these options comes with both advantages and disadvantages. Let’s take a look at them. 

VPS

At first glance, the most attractive option is the so-called VPS, i.e. a virtual private server. Compared to web hosting, a VPS gives you complete freedom to install whatever software you want and set up everything the way you like. A VPS is also quite cheap, which proves very attractive, especially for projects that are just starting out.

However, full freedom comes with a caveat – you have to take care of everything yourself. And we do mean everything. Installations, updates, security measures, any changes, problem-solving, backups, etc. etc. You’ll need your own administrator to make sure everything works the way it’s supposed to and that your project stays safe.

There are many threats lurking in the shadows that you’ll have to identify and stay clear of when managing your own server.

Own physical server

Another option you can move to from web hosting is getting your own server. That way, you’ll get a lot more performance than with web hosting or a VPS, as well as a lot of flexibility. On the other hand, this solution has similar disadvantages to a VPS – you have to deal with everything on your own. Which is costly, annoying, and pretty dangerous to boot (unless your administrator is one hell of a guy who works 24/7).

Besides software management, you also have to take care of all things hardware: cooling, constant energy supply (not quite as easy as it sounds – plugging it in doesn’t cover it), and so on. All things considered, getting your own physical server thus comes with the highest risk of outages, cyber attacks, and other fun things like that.

Dedicated server

A dedicated server, unlike your own physical server, is actually a server as a service, where your provider takes care of all things hardware and sometimes even does the initial installation. The server provider also deals with all hardware-related issues – e.g. at vshosting~, we guarantee solving any hardware problem within 60 minutes, day or night.

Your dedicated server is placed in a data center which tends to be well protected from power outages or intruders. In addition, it has much better connectivity than a server plugged into your own makeshift server room.

Compared to web hosting, a dedicated server offers much higher performance and you can install pretty much whatever you want on it. The software side of things is still on you, though, just like with a VPS or your own server. Therefore, you’ll have a hard time getting by without an administrator (and thus extra costs).

Managed services

The most pleasant upgrade from web hosting is, without a doubt, to a managed server or even a more robust managed service. From a user’s perspective, these kinds of services are like web hosting on steroids: the service is equally easy to use and the hosting provider takes care of all operational things (both software and hardware). At the same time, you get much higher performance at your disposal as well as a lot more flexibility regarding settings, software compatibility, and the like. 

In effect, this means you don’t need an administrator, can forget about what’s going on with the server behind the scenes and everything works as it should. And if not, it’s your provider’s job to fix it ASAP. You can focus on your core business.

Depending on the extent of your project and the technologies used, all you need to do is decide whether you’ll go with a managed server, a more robust managed cluster, or e.g. a managed solution for Kubernetes.

Cloud or metal?

If you do opt for a managed server, you’ll probably run into the “cloud vs. physical server” dilemma. In our experience, it’s hard to say point-blank that one is better than the other – it depends on your specific situation.

For a smaller but quickly growing online project where high availability and flexibility are key, the cloud is what you should go for. Thanks to lower performance requirements, the cloud will also be the more frugal option for you. And if you eventually outgrow it, it’s easy to move to a physical solution afterwards.

However, if your business requires high performance or the storage of large amounts of data, it pays off to jump straight to a physical server solution. It packs a much bigger performance punch and becomes much cheaper per unit of performance than the cloud.
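The "cheaper per unit" claim can be illustrated with a toy crossover calculation. Both prices below are illustrative assumptions, not vshosting~ or cloud list prices; the point is only the shape of the comparison (linear per-core cloud cost vs. a flat dedicated-server cost).

```python
# Sketch: rough crossover point where a dedicated server beats cloud pricing.
# Both prices are illustrative assumptions, not real list prices.
cloud_czk_per_core = 900   # assumed monthly cloud price per vCPU
metal_czk_flat = 18_000    # assumed monthly price of a 32-core dedicated server

crossover_cores = metal_czk_flat / cloud_czk_per_core
print(f"dedicated hardware wins above ~{crossover_cores:.0f} cores of steady load")
```

Below the crossover, the cloud’s pay-per-core model is cheaper; above it, the flat-priced physical server wins, and the gap widens as the steady load grows.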

Not all managed services are created equal 

Managed services have been gaining popularity thanks to their user-friendliness. Unfortunately, not every provider treats “managed” as a truly fully managed service, and clients can thus be unpleasantly surprised.

Ideally, when it comes to a managed service, the provider handles the initial installation, all of the server monitoring, and operating system updates as well as all of the issues that might arise – be it software or hardware related. That’s how we do it at vshosting~.

Some other providers, however, understand a managed service as only the initial software installation and subsequent care for hardware. Alternatively, they may be prepared to handle software-related problems but charge extra fees for that. That’s why we recommend having a very close look at what your chosen managed service truly encompasses.

The best solution for your project

Have you found your pick?

We know from experience how difficult making this choice can be. Every project is individual, and you’ll typically get the best results with a hosting solution customized just for your project.

So if you’re still wondering about the best option for you, shoot us an email. Our experts will be happy to advise you.


Damir Špoljarič

Take a look at the most important reasons why hundreds of clients opt for dedicated servers from vshosting~.

Customized configuration

More than 40 % of our clients take advantage of our individualized configuration option for dedicated servers. We provide clients with dedicated servers with high-performing processors and up to 1 TB of RAM.

Immediate service and upgrades

We are the only company in the Czech Republic to guarantee the repair or replacement of a server within 60 minutes in the unlikely event of a malfunction. On average, though, we repair a server within 25 minutes :-). We also keep hundreds of replacement servers in stock directly in the data center and are therefore able to swap them immediately. In addition, we can upgrade your server or change its configuration just as fast.

Monitoring and remote management included

We monitor each dedicated server, and in the event of its unavailability (e.g. due to overload) we contact the client within 5 minutes. If the unavailability is caused by a hardware issue, we then assist them in resolving it.

All dedicated servers are plugged into our central management network, so clients can use our web application KVM Proxy to manage them. They can monitor the server’s hardware log, access the server console, execute a remote boot, or physically restart the server. All of this can be done from a single central interface, which is especially appreciated by clients with a number of dedicated servers.

Really good connectivity

Each server is connected at 2× 1 Gbps (redundancy thanks to LACP) to two distinct switches in active-active mode, so the full speed of 2 Gbps can be used. We also offer 2× 10 Gbps connections.

vshosting~ makes no false promises, and our backbone network is prepared for large data streams. We use the best network hardware from top suppliers.

We are members of 4 peering centers in 3 countries and continuously expand our European infrastructure. In the near future, we also plan an upgrade to 2x 100 Gbps into NIX.CZ. Our technology is ready for that already.

Redundancy

We only use servers with two power sources or servers with STS switch connection in order to fully utilize the features of our top of the line data center and its energy infrastructure. Thanks to that, we are able to guarantee extra high availability.

Take a look behind the scenes – into the backup systems of our data center:

Complete management of the physical infrastructure – everything you need

We provide comprehensive infrastructure as a service, even for the largest internet projects. Here are the most popular add-on services we offer with dedicated servers:

– private networks between servers (client VLAN)
– lease of NetApp storage or space in our CloudStorage SSD/SAS for central storage (NFS/iSCSI)
– top of the line premium AntiDDoS (Radware solution)
– VPN as a Service
– vshosting~ CDN

And much more.

Read more about our dedicated servers.


vshosting~

Are you picking out a new dedicated or managed server and wondering which SSD disk would be best for you? At vshosting~, we put great emphasis on maximum quality, but because our clients’ needs often differ, we offer two types of professional SSD disks: 2.5″ SATA SSD and 2.5″ NVMe (both from Intel). Let’s take a look at the differences between each type.

SATA disk models

2.5″ SATA SSD Intel, S4510 series

– sequential read and write operations in the hundreds of MB per second

– typically up to 500 MB/s, depending on the exact model

2.5″ SATA SSD Intel, S4610 series

– better disks with higher durability than S4510

– more suitable for database servers than S4510

– sequential read and write operations on a similar level as the S4510 series

NVMe disk models

2.5″ NVMe (PCIe 3.1 x4 interface) Intel, P4510 series

– sequential read and write operations in the thousands of MB per second

– typically up to 3,200 MB/s, depending on the exact model

– this is several times faster than the standard SATA SSD disks

2.5″ NVMe (PCIe 3.1 x4 interface) Intel, P4610 series

– better disks with higher durability than P4510

– more suitable for database servers than P4510

– sequential read and write operations on a similar or slightly better level than the P4510 series

We design both SATA and NVMe servers so that the hard disks are hot-swappable, i.e. changeable while the machine and the system are running. Any repairs or replacements are therefore very easy. We don’t use NVMe disks with the M.2 PCIe interface, which are installed directly on the motherboard and as such are physically inaccessible to the technicians.

RAID on NVMe disks compared to SATA SSD

We mostly configure SATA disks in hardware RAID (RAID is managed by a separate disk controller). NVMe disks, however, use the PCIe interface, so disk controllers of this kind either lack the necessary performance or are unreasonably expensive. As a result, NVMe disks are attached directly to the motherboard connectors in the server, where each PCIe link is served by the CPU itself. We build NVMe servers on Supermicro platforms, as Supermicro has developed a suitable solution for this situation.

As opposed to SATA disks, RAID can be handled in two ways with NVMe disks. The first option is installing an additional hardware key onto the motherboard, which activates the Intel VROC (Virtual RAID on CPU) feature. In the server’s BIOS, we can then configure RAID 0/1/10/5 from the NVMe disks, and the operating system works with one virtual disk. The second option is not configuring RAID at the system level at all and instead handling it in software within the OS (e.g. using ZFS).

So SATA or NVMe?

Simply put, NVMe disks are up to 6.5× faster than SATA, which is a big plus. On the other hand, they are a bit more expensive, so the choice between the two isn’t exactly straightforward.

NVMe disks are the more suitable solution for someone looking for extreme data throughput on storage, be it for a demanding database server, web server, or anything else where high load is expected. At vshosting~, we can operate these disks both in managed servers with Linux (Ubuntu 18, Debian) and in dedicated servers running Windows Server 2016, Windows Server 2019, or Linux (Ubuntu 18, Debian). In the case of dedicated servers, OS management is up to each client.

However, it is worth considering that we only operate NVMe disks with the current Intel Xeon Scalable CPUs. In contrast, SATA disks can be combined with pretty much any generation of Intel Xeon processors thanks to our disk controllers.


vshosting~

Our clients often ask how our new service Platform for Kubernetes differs from similar products provided by e.g. Amazon, Google, etc. There are quite a few distinctions so we decided to describe them in detail in this article.

Individualized Infrastructure Design

Most of the traditional clouds provide a platform for the infrastructure but the design and creation itself remains the clients’ responsibility – or more accurately, the clients’ developers’ responsibility. Most developers, however, would much rather spend their time developing (surprise!) as opposed to reading a 196-page manual on how to use Amazon EKS. Unlike most manuals in life, this one really needs to be read – setting up Kubernetes on Amazon is not particularly intuitive.

At vshosting~, we understand how frustrating this can be for many companies. The development team should concern itself with development, not waste time on something outside its expertise. Therefore, unlike traditional clouds, we put great emphasis on custom designing the Kubernetes solution ourselves for each client. There’s no need to engage in complicated selection among predefined packages, read lengthy manuals, or rack your brain over the best infrastructure design. We’ll prepare the Kubernetes infrastructure precisely based on the needs of your application, including load balancing, networking, storage, and other necessities.

In addition, we’re happy to assist you in analyzing your application’s readiness for a move to Kubernetes, if it isn’t running on it yet. Based on your requirements, we’ll also help you select the most suitable technologies (at no extra cost!) to make sure everything works the way it should and that future scaling is as easy as possible.

Speaking of scaling: that’s exceedingly simple with our Kubernetes solution. Again: no package selection required. At vshosting~, you simply scale up or down with full flexibility, exactly according to your current needs. We also offer fine-grained scaling of only the necessary resources. Does your application need more RAM or disk space because you got a lot of new clients? No problem.

Once we finish designing your fully customized infrastructure, we conduct an individualized installation and set up Kubernetes and load balancers before transferring everything to live traffic. Just to clarify – all of these tasks would be your responsibility if using Google’s, Amazon’s, or Microsoft’s Kubernetes solution. We’ll carefully tweak everything in close cooperation with you. After launching, Kubernetes will run on our cloud or hardware in our own data center ServerPark.

Option to Combine Physical Servers with Cloud 

Another advantage of Kubernetes from vshosting~ is that you can combine cloud and physical servers as needed – other Kubernetes providers don’t offer this. Thanks to this feature, you can, for example, start testing Kubernetes on a lower-performance virtual machine and only then move the project to production by adding physical servers (all with zero downtime), optionally keeping the current VMs for development purposes.

Point of comparison: e.g. Google offers either the option of on-prem Google Kubernetes Engine or running Kubernetes in the cloud but you have to choose one or the other. Plus you have to manage the on-prem variant on your own. You won’t find a physical server + cloud combo option at Amazon or Microsoft either.

At vshosting~, you can mix and match physical servers and cloud as you please and we take care of the entire management to boot. You can focus solely on development and leave the operations to us. We take care of managing the operating systems of all Kubernetes nodes and load balancers, ensure upgrades of operating systems, kernel, etc. (we can even upgrade Kubernetes itself if you like). 

High SLA and 24×7 Senior Support 

One of the most important criteria when choosing a good Kubernetes platform is its availability. Which is why it may come as a surprise that neither Microsoft AKS nor Google GKE offers an SLA (i.e. a “financially-backed service level agreement”) and they only claim that they’ll “do their best to ensure the availability of at least 99.5%”.

Amazon EKS does mention a 99.9% SLA but considering their credit refund conditions, it is, in fact, more of a 95% availability guarantee – only below that level does Amazon refund 100% of your credit. In the event of only a small drop below 99.9% availability, just 10% of your credit gets refunded.

At vshosting~, we contractually guarantee 99.97% availability: that is even more than the somewhat theoretical SLA at Amazon and much more than the non-guaranteed 99.5% availability at Microsoft and Google. In reality, our availability hovers around 99.99%. In addition, our managed Kubernetes solution also operates in high-availability cluster mode, so if a server or a part of the cloud malfunctions, the solution immediately starts running on a backup server or in a different part of the cloud.

Moreover, we guarantee high-speed connectivity as well as unlimited data transfers to anywhere in the world. Each client also gets guaranteed dedicated bandwidth. Our network has a capacity of up to 1 Tbps and every pathway is redundant several times over.

Thanks to the high-availability cluster mode, high network capacity, and redundant connectivity, the vshosting~ Kubernetes solution is exceptionally resistant to outages of any part of the cluster. Besides, our experienced teams continually monitor your solution and quickly identify emerging problems before they become visible to the end user. We also have robust AntiDDoS protection which effectively prevents cyberattacks on the cluster.

Debugging and Monitoring of the Entire Infrastructure

In contrast to the traditional clouds, at vshosting~, teams of senior administrators and technicians sitting directly in our data center watch over your solution 24/7. In the event of a problem, they react within 60 seconds – even at, say, 2 am on a Saturday. These experts monitor dozens of parameters of the entire solution (hardware, load balancers, Kubernetes) and as a result can eliminate most problems before they start causing trouble. On top of all that, we guarantee a repair or an exchange of a malfunctioning server within 60 minutes.

For maximum simplification, you’ll get a single contact from us that you can use for all services you have with us: be it Kubernetes itself, its management, or anything regarding infrastructure. We’ll take care of standard maintenance as well as complicated debugging. Consultations regarding the concrete form of Docker files (3 hours monthly) are also included in the price of our Platform for Kubernetes service.


vshosting~

Database servers are a key part of any web project’s infrastructure, and as the project grows, so does the significance of its database. Sooner or later, however, we reach a point where database performance requirements can no longer be met by merely adding memory and improving processors. Increasing resources within one server has its limits, and eventually it becomes necessary to distribute the load among multiple servers.

Before implementing such a step, it is more than appropriate to clarify what we aim to accomplish. Some load distribution models will only allow us to manage an increase in the number of requests, while others can also solve the issue of potential unavailability of one of the machines.

Scaling, High Availability, and Other Important Terms

First of all, let’s take a look at the basic terms we’ll need today. There aren’t many of them, but without knowing them we won’t be able to move on. Experienced scalers can feel free to skip this section.

Scalability

The ability of a system (in our case a database environment) to react to increased or decreased resource need. In practice, we distinguish between two basic types of scaling: vertical and horizontal.

In the case of vertical scaling, we increase the resources the given database has at its disposal. Typically, this means adding accessible server memory and increasing the number of cores. Practically any application can be scaled vertically but sooner or later we run into hardware limits of the platform.

An alternative to this is horizontal scaling where we increase the performance of the application by adding more servers. This way we can increase the application’s performance almost limitlessly, however, the application must account for this kind of distribution.

High Availability

The ability of a system to react to a part of that system being down. The prerequisite for high availability is the ability to run the application in question in multiple instances.

The other instances can be fully replaceable and process requests in parallel (in this case we’re talking about active-active setup) or they can be in standby mode, where they only mirror data but aren’t able to process requests (so-called active-passive setup). Should a problem occur, one of the instances in passive mode is selected and turned into an active one.

Master node

The driving component of the system. In the case of databases, the master node is an instance operating in both read and write mode. If we have multiple full-featured master nodes, we speak of a so-called multi-master setup.

Slave node

A backup copy of data. In a standard situation, it only mirrors data and operates in read-only mode. In the event of a master node failure, one of the slave nodes is selected and turned into a master node. Once the original master node is operational again, the new master either returns to being a slave node, or it remains the master and the original master becomes a slave node.
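The promotion step described above can be sketched in a few lines: when the master fails, promote the slave that lags the least behind it. A minimal, hypothetical Python sketch follows; the node names and lag values are made up, and in MySQL the lag would come from each slave's replication status.

```python
# Hypothetical failover helper: promote the slave with the most recent
# copy of the data, i.e. the smallest replication delay.
def pick_new_master(slaves):
    # slaves maps node name -> replication lag in seconds
    # (None means the node is not replicating at all)
    candidates = {name: lag for name, lag in slaves.items() if lag is not None}
    if not candidates:
        raise RuntimeError("no healthy slave available for promotion")
    return min(candidates, key=candidates.get)

# db3 has the freshest data; db4 is broken and is skipped
new_master = pick_new_master({"db2": 3, "db3": 0, "db4": None})
print(new_master)  # db3
```

Real cluster managers also have to fence the old master and repoint the remaining slaves, but the selection criterion is the same.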

Asynchronous replication

After data is inserted into the master node, the insertion is confirmed to the client and written into the transaction log. At a later time, the change is replicated to the slave nodes. Until the replication is completed, the new or changed data is only available on the master node, and should it fail, the data would become inaccessible. Asynchronous replication is typical for MySQL.
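The stale-read window this creates can be shown with a toy in-memory model (purely illustrative, not a real MySQL client): the master confirms the write immediately, and a slave only sees it once replication catches up.

```python
# Toy in-memory model of asynchronous replication (illustrative only).
class Master:
    def __init__(self):
        self.data = {}
        self.log = []                      # transaction log replayed by slaves

    def write(self, key, value):
        self.data[key] = value
        self.log.append((key, value))
        return "OK"                        # confirmed before slaves have the change

class Slave:
    def __init__(self, master):
        self.master = master
        self.data = {}
        self.applied = 0                   # position in the master's log

    def replicate(self):
        # Runs some time after the write was already confirmed to the client.
        for key, value in self.master.log[self.applied:]:
            self.data[key] = value
        self.applied = len(self.master.log)

master = Master()
slave = Slave(master)
master.write("user:1", "alice")            # client already received "OK"
stale = slave.data.get("user:1")           # None - not replicated yet
slave.replicate()
fresh = slave.data.get("user:1")           # "alice" - slave has caught up
```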

Synchronous replication

The data insertion is confirmed to the client only after the data is saved to all nodes in the cluster. The risk of new data loss is eliminated in this case (the data is either changed everywhere or nowhere) but the solution is significantly more prone to issues in the network connecting the nodes.

Should the network be down, the performance of the cluster becomes temporarily degraded. Alternatively, the reception of new data-change requests may even be temporarily suspended. This type of replication is used in multi-master setups in combination with the Galera plugin.

Master-Slave Replication

Master-slave is the basic type of database cluster replication. In this setup, there is a single master node that receives all request types. The slave node (or multiple slave nodes) mirror changes using asynchronous replication. Slave nodes don’t necessarily have the newest copy of the data at their disposal.

Should the master node fail, the slave node with the newest copy of the data is selected to become the new master. Each slave node evaluates how far it lags behind the master node. This value can be found in the Seconds_behind_master variable and it is essential to monitor it. An increasing value indicates a problem replicating changes from the master node.
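A monitoring check for that variable might look like the following sketch; the 30-second threshold and the message format are made-up choices, and in practice the value would come from `SHOW SLAVE STATUS`.

```python
# Hypothetical check of replication delay; in production the status dict
# would be populated from `SHOW SLAVE STATUS` on each slave node.
def check_replication(status, max_lag_seconds=30):
    lag = status.get("Seconds_behind_master")
    if lag is None:
        # NULL means replication is not running at all - the worst case.
        return "CRITICAL: replication stopped"
    if lag > max_lag_seconds:
        return f"WARNING: slave is {lag}s behind the master"
    return "OK"

print(check_replication({"Seconds_behind_master": 2}))     # OK
print(check_replication({"Seconds_behind_master": 120}))   # WARNING: ...
print(check_replication({"Seconds_behind_master": None}))  # CRITICAL: ...
```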

The slave node operates in a read-only mode and can thus deal with select type requests. In this case, we’re talking about the so-called read/write split, which we’ll discuss in a moment.

Master-Master Replication

A master-master setup is one with two master nodes. Both are able to handle all types of requests, but replication between them is asynchronous. The disadvantage is that data inserted into one node may not be immediately accessible from the other. In practice, we set this up so that each node is also a slave node to the other.

This setup is advantageous when we install a load balancer in front of the MySQL servers, which directs half of the connections to each machine. Each node is an independent master and doesn’t know that the other server is a master too. It is, therefore, necessary to set the auto-increment step to 2. If we don’t do this, collisions of primary keys that use auto-increment will ensue.
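In MySQL this is configured with the auto_increment_increment and auto_increment_offset variables. The sketch below simulates why a step of 2 with a different offset on each master prevents collisions:

```python
# Two masters handing out auto-increment ids with step 2 and different
# offsets: the sequences interleave, so primary keys never collide.
def id_sequence(offset, increment=2):
    value = offset
    while True:
        yield value
        value += increment

master_a = id_sequence(offset=1)   # 1, 3, 5, 7, ...
master_b = id_sequence(offset=2)   # 2, 4, 6, 8, ...

ids_a = [next(master_a) for _ in range(4)]
ids_b = [next(master_b) for _ in range(4)]
assert not set(ids_a) & set(ids_b)  # no collision between the two masters
```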

Each of the master nodes can have additional slave nodes that can be used for data reading (read/write split) and as a backup.

Multi-Master Replication

If there are more than two master nodes in a cluster, we’re talking about a multi-master setup. This setup cannot be built with basic MySQL; an implementation of the wsrep protocol, e.g. Galera, has to be used.

Wsrep implements synchronous replication and as such is very sensitive to network issues. In addition, it requires time synchronization of all nodes. On the other hand, it allows all request types to be sent to all nodes in the cluster, which makes it very suitable for load balancing. One disadvantage is that all replicated tables have to use the InnoDB engine; tables using a different engine will not be replicated.

Sharding

Sharding is the division of data into logical segments. In MySQL, the term partitioning is used to describe this type of data storage; in essence, it means that the data of a single table is divided among several servers, or among several tables or data files within a single server.

Data sharding is appropriate if the data forms separate groups. Typical examples are historical records (sharding according to time) or user data (sharding according to user ID). Thanks to such data division, we can effectively combine different storage types, storing the most recent data on fast SSD disks and older, rarely used data on cheaper rotational disks.
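As a hypothetical illustration, shard routing often amounts to a tiny pure function: one hashing users across shards, another routing records to a storage tier by age. The shard count and cutoff year below are made-up values.

```python
# Illustrative shard routing; shard_count and ssd_cutoff are assumptions.
def shard_for_user(user_id, shard_count=4):
    # Sharding by user ID: all of a user's rows live on exactly one shard.
    return user_id % shard_count

def storage_for_record(year, ssd_cutoff=2020):
    # Sharding by time: recent data on fast SSDs, older data on cheap disks.
    return "ssd" if year >= ssd_cutoff else "hdd"

print(shard_for_user(10))          # 2
print(storage_for_record(2023))    # ssd
print(storage_for_record(2012))    # hdd
```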

Sharding is very often used in NoSQL databases, e.g. ElasticSearch.

Read-Write Splitting

In the Master-Slave replication mode, we have the performance of slave nodes at our disposal but cannot use it for write operations. However, if we have an application where most of the requests are just selects (typical in web projects), we can use their performance for read operations. In this case, the application directs write operations (insert, delete, update) to the master node but sends selects to the group of slave nodes.

Thanks to the fact that a single master node can have many slave nodes, this read/write splitting will help us increase the response rate of the entire application by distributing the read operations.

This behavior doesn’t require any configuration on the database server side, but it needs to be handled on the application side. The easiest option is to maintain two connections in the application: one for read and one for write operations. The application then decides which connection to use for each request based on its type.
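A minimal sketch of such routing follows; the connection names are placeholders, and a real application would hold actual database handles instead of strings.

```python
# Sketch of application-side read/write splitting: route each statement
# by its first keyword. Connection names are placeholders.
WRITE_VERBS = {"insert", "update", "delete", "replace", "create", "alter", "drop"}

def pick_connection(sql):
    # Look at the first word of the statement to classify it.
    verb = sql.lstrip().split(None, 1)[0].lower()
    return "master" if verb in WRITE_VERBS else "slave_pool"

print(pick_connection("SELECT id FROM users"))          # slave_pool
print(pick_connection("UPDATE users SET name = 'x'"))   # master
print(pick_connection("INSERT INTO logs VALUES (1)"))   # master
```

Note that this naive classifier sends anything that isn't a recognized write verb to the slaves, so statements that must see the freshest data (or writes wrapped in stored procedures) may need explicit routing to the master.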

The second option, useful if we are unable to implement read/write splitting at the application level, is to use an application proxy that understands requests and automatically sends them to the appropriate nodes. The application then maintains only one connection to the proxy and doesn’t concern itself with request types. A typical example of this solution is MaxScale, a commercial product that nevertheless provides a free version limited to three database nodes.

We Scale for You

Don’t have the capacity to maintain and scale your databases? We’ll do it for you.

We will take care of even very complex maintenance and optimization of a wide range of databases. We’ll ensure their maximum stability, availability, and scalability. Our admin team manages tens of thousands of database servers and clusters, so you’ll be in the hands of true experts.


We have successfully assisted with migrations for hundreds of clients over the course of 17 years. Join them.

  1. Schedule a consultation

    Simply leave your details. We’ll get back to you as soon as possible.

  2. Free solution proposal

    A no commitment discussion about how we can help. We’ll propose a tailored solution.

  3. Professional implementation

    We’ll create the environment for a seamless migration according to the agreed proposal.

Leave us your email or telephone number




    Or contact us directly

    +420 246 035 835 Available 24/7
    consultation@vshosting.eu
    We'll get back to you right away.