vshosting~

We made major upgrades to the infrastructure of one of the biggest e-commerce projects in the Czech Republic and Slovakia: GymBeam. And they’re not just some minor improvements – we exchanged all the hardware in the application part of their cluster and installed extra-powerful servers (8 of those bad boys in total). 

How did the installation go, what does it mean for GymBeam, which advantages do EPYC servers provide, and should you be thinking of this upgrade yourself? You’ll find out all that and more in this article. 

What’s so epic about EPYC servers?

Until recently, we’ve been focusing on Intel Xeon processors at vshosting~. These have been dominating (not only) the server market for many years. In the last couple of years, however, the significant improvements in the portfolio and manufacturing technology of AMD (Advanced Micro Devices) caught our attention.

AMD now offers processors with a better price/performance ratio, a higher number of cores per CPU, and better energy management (thanks in part to a different manufacturing technology – AMD Zen 2 at 7 nm vs. Intel Xeon Scalable at 14 nm). These processors power the AMD EPYC servers we have used for the new GymBeam infrastructure.

AMD EPYC server processors

They are the most modern servers, featuring record-breaking processors with up to 64 cores and 128 threads (!!!). Compared to the standard Intel Xeon Scalable line, where we offer processors with a maximum of 28 cores per CPU, that is more than double the number of computing cores.

The EPYC server processors are manufactured using a 7 nm process and a multi-chiplet package design, which allows all 64 cores to be packed into a single CPU and ensures truly noteworthy performance.

How did the installation go

The installation of the first servers based on this new platform went flawlessly. Our first step was a careful selection of components and platform unification for all future installations. The most important part at the very beginning was choosing the best possible platform architecture together with our suppliers and specialists: the best chassis, server board, and peripherals, including more powerful 20,000 RPM fans for sufficient cooling. We will apply this setup to all future AMD EPYC installations. We were determined that the new platform should reflect the high standard of our other deployments – no room for compromise. 

EPYC server platform

As a result, the AMD EPYC servers joined our “fleet” without a hitch. The servers are based on chassis and motherboards from SuperMicro, and we can offer both 1 Gbps and 10 Gbps connectivity, with drives attached either on-board or via a hardware RAID controller, according to the customer’s preferences. We continue to use the drives from our standard offer, namely SATA3/SAS2 or PCI-e NVMe. Read more about the differences between SATA and NVMe disks.

Because this is a new platform for us, we have of course stocked our warehouse with spare equipment and are ready to use it immediately should any issue arise in production.

Advantages of the hardware for GymBeam’s business

The difference compared to the previous Intel processors is huge: besides the larger number of cores, even the computing power per core is higher. Another performance boost comes from enabling simultaneous multithreading. We turn this off on Intel processors (where it’s called Hyper-Threading) for security reasons, but with the AMD EPYC processors there’s no reason to do so (as of yet, anyway). 

The result of the overall increase in performance is, firstly, a significant acceleration in web loading due to higher performance per core. This is especially welcomed by GymBeam customers, for whom shopping in the online store has now become even more pleasant. Speeding up the web will also improve SEO and raise search engine “karma” overall.

In addition to faster loading, GymBeam gained a large performance reserve for its marketing campaigns. The new infrastructure can handle even a several-fold increase in traffic in the case of intensive advertising.

Last but not least, at GymBeam they can now be sure they are running on the best hardware available 🙂

Would you benefit from upgrading to the EPYC servers?

Have the mega-powerful EPYC processors caught your interest, and are you now wondering whether they would pay off in your case? When it comes to optimizing your price/performance ratio, the number one question is how powerful an infrastructure your project needs.

It makes sense to consider AMD EPYC processors when your existing processors are running out of breath and upgrading to a higher Intel Xeon line would not make economic sense. That limit currently sits at about 2× 14-core to 2× 16-core configurations; above this performance level, Intel’s pricing is disproportionately high at the moment.

Of course, the reason for the upgrade does not have to be purely technical or economic – the feeling that you run your services on the fastest and best the market has to offer also has its value.



As the popular SEO joke goes, when you need to hide something really well, you put it on the second page of Google search results. That you can very easily end up in even better-concealed places due to your hosting provider is not quite as well known, though. 

The search algorithms of Google and other search engines are strictly confidential. However, many of the factors that can kick you out of the coveted first page have been well documented by SEO experts. Typically they are low-quality texts on the website, content copied from elsewhere, or unseemly SEO practices.

The effect of hosting quality is often neglected. Yet it influences your SEO quite a bit, from three main angles: speed, outages, and location.

1) Website speed

The faster your website loads, the better your search engine ranking becomes. Of course, top-notch website speed alone will not ensure the first spot in the results for you. On the other hand, there’s no way you’ll make it to the top without it. Combined with other SEO strategies, speeding up your site will help you climb the Google result ladder. 

Why is that? Website speed isn’t just some abstract Google metric. Slow loading is the leading cause of website visitors leaving. That leads to worsening metrics such as bounce rate and time spent on the site, which in turn causes bad “search engine karma” and pushes you further down in the search results. 

In our experience, 3 seconds is a good benchmark. If it takes your site longer to load, your hosting might be at fault. Find out from your provider whether it’s caused by low performance, sharing resources with other users, or a suboptimal location of the servers that store your data (we’ll get back to the location issue in a second). 

For more tips regarding increasing website speed, check out our dedicated article.

Tip: Not sure how fast your website is? We recommend the PageSpeed Insights tool from Google. It gives you the option to measure the speed of both the desktop and mobile versions of your website. 
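Behind the scenes, PageSpeed Insights is backed by a public JSON API (at the time of writing, the v5 `runPagespeed` endpoint at `https://www.googleapis.com/pagespeedonline/v5/runPagespeed?url=<your-url>`). A minimal sketch of pulling the headline numbers out of such a response – the sample payload below is a heavily trimmed, hypothetical stand-in for the real (much larger) JSON, but the field names follow the v5 response shape:

```python
import json

# A trimmed, hypothetical example of the JSON shape returned by the
# PageSpeed Insights API v5 (the real response contains many more audits).
sample_response = json.dumps({
    "lighthouseResult": {
        "categories": {"performance": {"score": 0.82}},
        "audits": {
            "largest-contentful-paint": {"displayValue": "2.4 s"},
        },
    }
})

def performance_summary(raw: str) -> str:
    """Extract the 0-100 performance score and LCP from a PSI response."""
    data = json.loads(raw)
    lighthouse = data["lighthouseResult"]
    score = round(lighthouse["categories"]["performance"]["score"] * 100)
    lcp = lighthouse["audits"]["largest-contentful-paint"]["displayValue"]
    return f"performance score: {score}/100, LCP: {lcp}"

print(performance_summary(sample_response))
# prints: performance score: 82/100, LCP: 2.4 s
```

Calling the API on a schedule and logging this summary is an easy way to notice a speed regression before your rankings do.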

2) Outages

Hosting outages are an unpleasant affair from start to finish – starting with unrealized sales, followed by loss of customer trust, and topped off with damaged SEO. That’s why we recommend you avoid them altogether by choosing a reliable hosting provider.

Wait a second – what do outages have to do with SEO? Unfortunately, quite a bit. For example, Google penalizes websites that have been down for a while. Thus you’re risking a drop in your search result position. Getting back up is no easy task so prevention is key. 

3) Location

The location in which the servers storing your data are placed is closely related to the speed of your website and therefore also SEO. It is primarily about how great the distance is between the servers and the visitor to your site. The longer the distance your data has to travel, the longer it takes for it to reach the user. If the distance is, say, 500 kilometers and your hosting provider uses quality operators, everything works great.
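The physics behind that rule of thumb is easy to sketch: light in optical fiber travels at roughly 200,000 km/s (about two thirds of the speed of light in a vacuum), and every request needs a round trip. The numbers below are illustrative; real-world latency is always higher due to routing, switching, and TCP/TLS handshakes – this is only the physical floor:

```python
# Back-of-the-envelope latency from distance alone. Real latency adds
# routing hops, queuing, and handshake overhead on top of this minimum.
FIBER_SPEED_KM_PER_S = 200_000  # approximate speed of light in fiber

def min_round_trip_ms(distance_km: float) -> float:
    """Theoretical minimum round-trip time for a given server distance."""
    return 2 * distance_km / FIBER_SPEED_KM_PER_S * 1000

print(min_round_trip_ms(500))    # 5.0 ms  – same region, barely noticeable
print(min_round_trip_ms(9700))   # 97.0 ms – roughly Prague to Los Angeles
```

Since every image, script, and stylesheet pays that round-trip cost (often several times), tens of milliseconds per packet quickly add up to seconds of load time, which is exactly what a CDN eliminates.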

However, if you have servers in ServerPark in Prague and a visitor to your site is sitting at a Starbucks in Los Angeles, they may be unpleasantly surprised by the slow loading. The ideal solution in this case is the so-called CDN, which periodically caches the content of your website (i.e. stores it in locations that are closer to site visitors). The result is a significant acceleration of your site loading time and thus an improved position in search results.

When choosing a CDN, focus on where the provider has its points of presence (PoPs) – that is, for which locations it can ensure fast delivery of your content. For example, vshosting~ has PoPs all over Europe and North America.

Top SEO is not just about keywords and interesting content. You also need the support of a quality hosting partner to reach the top of the search. Make sure your SEO efforts are not sabotaged by poor hosting quality.




We all know that backups are important. But what next? Is it enough to simply “have a backup” and be done with it? You can probably tell already that it won’t be that simple. Here are the top 5 questions everyone should think about as well as discuss with their hosting provider. 

1. How fast would the data recovery be?

An often overlooked but essential question is the speed of data recovery from your backup solution. If you don’t pay attention to this, you can very easily end up in a situation where the recovery of your project takes 3 days (versus your expectation of 1 hour max). If you are a busy online store, for instance, 3 days offline constitutes a catastrophe.

The speed of recovery depends primarily on the amount of data you’re restoring and on the technology used. The data volume is given by the nature and size of your project – if you run an online business with a huge customer database, you’ll hardly be able to shrink it. However, even if there’s a lot of data to contend with, you can look for a solution that allows for faster recovery (e.g. snapshot technology is much faster than rsync). Therefore, ask your provider how long data recovery would take in your case and if there are any options to speed it up. 
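A quick back-of-the-envelope estimate shows why this matters. The figures below (500 GB of data, 100 MB/s vs. a throttled 10 MB/s restore path) are hypothetical, and a real restore adds overhead such as database replay on top of the raw transfer:

```python
def recovery_hours(data_gb: float, throughput_mb_per_s: float) -> float:
    """Rough time to restore a backup from raw transfer speed alone."""
    seconds = data_gb * 1024 / throughput_mb_per_s
    return seconds / 3600

# 500 GB restored over a path that sustains 100 MB/s:
print(round(recovery_hours(500, 100), 1))   # 1.4 hours
# The same data at 10 MB/s (e.g. a throttled or shared link):
print(round(recovery_hours(500, 10), 1))    # 14.2 hours
```

The same data volume can mean a lunch-break outage or a multi-day one depending purely on the restore path, which is why the technology question belongs in the contract, not the post-mortem.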

2. Which backup frequency is best for me?

Another crucial aspect of backups is their frequency. For example, at vshosting~, we include a standard backup package with each managed service that backs up all the data once a day. But if you decide that’s not good enough for you, we can easily provide more frequent backups – say, once every hour (if your production configuration allows for it). 

Of course, the more frequent the backups the more expensive your solution becomes because you need more storage space and infrastructure capacity. Especially if you want to keep all backup versions for 30 days or even longer. So, food for thought – how often do you need to back things up and how many versions do you need to keep stored? 
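The storage math behind that trade-off is simple to sketch. The numbers below are hypothetical, and the `incremental_ratio` parameter is an illustrative simplification of how incremental or deduplicated backups reduce the per-version footprint:

```python
def backup_storage_gb(data_gb: float, backups_per_day: int,
                      retention_days: int, incremental_ratio: float = 1.0) -> float:
    """Storage needed to keep every backup version for the retention window.

    incremental_ratio: fraction of the data each backup actually stores
    (1.0 = full copies; e.g. 0.05 if only ~5% changes between backups).
    """
    versions = backups_per_day * retention_days
    return data_gb * incremental_ratio * versions

# 200 GB of data, one full backup per day, 30-day retention:
print(backup_storage_gb(200, 1, 30))           # 6000.0 GB
# Hourly backups, assuming ~5% change between versions:
print(backup_storage_gb(200, 24, 30, 0.05))    # 7200.0 GB
```

Note how going from daily to hourly backups multiplies the version count by 24, so incremental technology is usually what makes high-frequency backups affordable at all.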

Explore more about our backup technologies on our backup solutions page.

3. What if the backup fails or gets delayed?

With projects that require backing up a huge volume of data, there’s a risk of the backup not completing within the given time frame. For instance, if you run backups once a day, the backup process needs to finish in 24 hours. If it doesn’t, a delay can occur or the backup can fail entirely. 

At vshosting~, we prevent this from happening by using the ZFS filesystem as the default filesystem for all of our managed services. This filesystem natively supports backups via snapshots, just like the ones you know from virtual server backups. The snapshots ensure that the entire server is backed up as a single file. As a result, the process is super fast – almost immediate in fact. Even data recovery becomes vastly sped up thanks to snapshot technology (compared to rsync for example). 

4. Where is my data stored?

From a security point of view, it is absolutely crucial that the backup is stored in a completely different location than the primary data. Ideally in another data center at the opposite end of town. In the event of a disaster at the location of your primary data, the backups will not be compromised.

It’s actually similar to backing up your computer to an external drive at home. After completing the backup, it is ideal to take the drive to your mother-in-law, for example, in case your apartment catches fire or something.

5. How is the backed-up data secured?

Apart from backing your data up to a separate location, it is essential from a security point of view how easily an unauthorized person can access your data. The main defense against this is data encryption and limited access to data. At vshosting~, encryption is a standard measure and the backups of our clients can only be accessed from our internal network. However, you cannot rely on such a standard with all providers.

Don’t settle for having “some backup” from your hosting provider. Be demanding and ask for specifics. Your project deserves the best care.



DevOps and containerization are among the most popular IT buzzwords these days. Not without reason. A combination of these two approaches happens to be one of the main reasons why developer work keeps getting more efficient. In this article, we’ll focus on 9 main reasons why even your project could benefit from DevOps and containers. 

A couple of introductory remarks

DevOps is a blend of two words: Development and Operations. It’s a software development approach that emphasizes cooperation between developers and the IT specialists who keep the applications running. This brings many advantages, the most important of which we will discuss shortly.

Containerization fits into DevOps perfectly. We can see it as a supportive instrument of the DevOps approach. Similar to physical containers that standardized the transportation of goods, software containers represent a standard “transportation” unit of software. Thanks to that, IT experts can implement them across environments with hardly any adjustments (just like you can easily transfer a physical container from a ship to a train or a truck).

Top 9 DevOps and container advantages

1) Team synergies

With the DevOps approach, developers and administrators collaborate closely and all of them participate in all parts of the development process. These two worlds have traditionally been separated but their de facto merging brings forth many advantages. 

Close cooperation leads to increased effectiveness of the entire process of development and administration and thus to its acceleration. Another aspect is that the cooperation of colleagues from two different areas often results in various innovative, out of the box solutions that would otherwise remain undiscovered. 

2) Transparent communication

A common issue, not only in IT companies, is the quality of communication (or rather the lack thereof). Everybody is swamped with work and focuses solely on their own tasks. However, this can easily result in miscommunication and incorrect assumptions and, by extension, in conflicts and unnecessary workload. 

Establishing transparent and regular communication between developers and administrators is a big part of DevOps. Because of this, everyone feels more like a part of the same team. Both groups are also included in all phases of application development. 

3) Fewer bugs and other misfortunes

Another great DevOps principle is the frequent releasing of smaller parts of applications (instead of fewer releases of large bits). That way, the risk of faulty code affecting the entire application is pretty much eliminated. In other words: if something does go wrong, at least it doesn’t break the app as a whole. Together with a focus on thorough testing, this approach leads to a much lower number of bugs and other issues.

If you decide to combine containers with DevOps, you can benefit from their standardization. Standardization, among other things, ensures that the development, testing, and production environments (i.e. where the app runs) are defined identically. This dramatically reduces the occurrence of bugs that didn’t show up during development and testing and only present themselves when released into production. 

4) Easier bug hunting (and fixing)

Occasional bug fixes and the smooth operation of the app are also made possible by the methodical storage of all code versions that’s typical for DevOps. As a result, it becomes very easy to identify any problem that might arise when releasing a new app version.

If an error does occur, you can simply switch the app back to its previous version – it takes a few minutes at the most. The developers can then take their time finding and fixing the bug while the user is none the wiser. Not to mention the bug hunting is so much easier because of the frequent releases of small bits of code. 

5) Hassle-free scalability and automation

Container technology makes scaling easy too and allows the DevOps team to automate certain tasks. For example, the creation and deployment of containers can be automated via API which saves precious development time (and cost). 

When it comes to scalability, you can run the application in any number of container instances according to your immediate need. The number of containers can be increased (e.g. during the Christmas season) or decreased almost immediately. You’ll thus be able to save a significant amount of infrastructure costs in the periods when the demand for your products is not as high. At the same time, if the demand suddenly shoots up – say that you’re an online pharmacy during a pandemic – you can increase capacity in a flash. 
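The core of such a scaling decision fits in a few lines. This is a simplified sketch, not a production autoscaler (real platforms like Kubernetes also smooth over short spikes and rate-limit scaling changes); the per-container capacity and headroom figures are hypothetical:

```python
import math

def replicas_needed(requests_per_s: float, capacity_per_container: float,
                    headroom: float = 1.2) -> int:
    """Number of container instances for the current load.

    headroom adds a safety margin so a traffic spike doesn't saturate
    the containers before the next scaling decision runs.
    """
    return max(1, math.ceil(requests_per_s * headroom / capacity_per_container))

# Assuming each container comfortably handles ~100 requests/s:
print(replicas_needed(300, 100))    # 4  – normal traffic
print(replicas_needed(1500, 100))   # 18 – a Christmas-season spike
print(replicas_needed(40, 100))     # 1  – quiet period, scale down
```

Because containers start in seconds, recomputing this on a short interval is what lets you pay for 1 instance at night and 18 during a campaign, instead of provisioning for the peak year-round.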

6) Detailed monitoring of business metrics

DevOps and containerization go hand in hand with detailed monitoring, which helps you quickly identify any issues. Monitoring, however, is also key for measuring business indicators. Those allow you to evaluate whether a recently released update helps achieve your goals or not. 

For example: imagine that you’ve decided to redesign the homepage of your online store with the objective of increasing the number of orders by 10 %. Thanks to detailed monitoring, you can see whether you’re hitting the 10 % goal or not shortly after the homepage release. On the other hand, if you made 5 changes in the online store all at once, evaluating their individual impact would be much more difficult. Say the collective result of the 5 changes is a 7 % increase in orders. Which of the new features contributed most to the increase? And don’t some of them cause the order number to drop? Who knows.
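The evaluation itself is trivial once the monitoring data is in hand. A minimal sketch, with hypothetical daily-order figures standing in for whatever your monitoring actually reports:

```python
def order_change_pct(before_per_day: float, after_per_day: float) -> float:
    """Percentage change in average daily orders after a release."""
    return (after_per_day - before_per_day) / before_per_day * 100

goal_pct = 10.0
before, after = 1200, 1356   # average daily orders, hypothetical figures
change = order_change_pct(before, after)
print(f"{change:.1f} % change -> goal {'met' if change >= goal_pct else 'missed'}")
# prints: 13.0 % change -> goal met
```

The hard part is not this arithmetic but isolating the cause, which is exactly why releasing one change at a time makes the number meaningful.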

7) Faster and more agile development

All of the above results in significant acceleration of the entire development process – from writing the code to its successful release. The increase in speed can reach 60 % or even more (!). 

How much efficiency DevOps will provide (and how much savings and extra revenue) depends on many factors. The most important ones are your development team size and the degree of supportive tool use – e.g. containers, process automation, and the choice of flexible infrastructure. Simply put, the bigger your team and the more you utilize automation and infrastructure flexibility, the more efficient the entire process will become. 

8) Decreased development costs 

It is hardly a surprise that faster development, better communication and cooperation preventing unnecessary work, and fewer bugs lead to lowering development costs. Especially in companies with large IT departments, the savings can reach dozens of percent (!).

Oftentimes the synergies and higher efficiency show that you don’t need to have, say, 20 IT specialists in the team. Perhaps just 17 or so will suffice. That’s one heck of a saving right there as well.

9) Happier customers

Speeding up development also makes your customers happy. Your business is able to more flexibly react to their requests and e.g. promptly add that new feature to your online store that your customers have been asking for. Thanks to the previously mentioned detailed monitoring, you can easily see which of the changes are welcomed by your users and which you should rather throw out of the window. This way, you’ll be able to better differentiate yourself from the competition and build up a tribe of fans that will rarely go get their stuff anywhere else. 

Key takeaways

To sum it all up, from a developer’s point of view, DevOps together with containers simplify and speed up work, improve communication with administrators, and drastically reduce the occurrence of bugs. Business-wise this translates to radical cost reductions and more satisfied customers (and thus increased revenues). The resulting equation “increased revenues + decreased costs = increased profitability” requires no further commentary. 

In order for everything to run as it should, you’ll also need a great infrastructure provider – typically some form of a Kubernetes platform. For most of us, what first comes to mind are the traditional clouds of American companies. Unfortunately, according to our clients’ experience, the user (un)friendliness of these providers won’t make things easier for you. Another option is a provider that will get the Kubernetes platform ready for you, give you much needed advice as well as nonstop phone support. And for a lower price. Not to toot our own horn but these are exactly the criteria that our Kubernetes platform fits perfectly. 

Example of infrastructure utilizing container technology – vshosting~



How do you recognize a great hosting provider from a sub-par one? To quote one of our colleagues: “It’s damn hard.”

Unfortunately, many of you will only learn the true qualities of your hosting provider when things get tough. A great hosting company solves problems proactively, quickly and nonstop. (And most of the issues it manages to quench before they fully manifest.) That’s why a lot of people only change their hosting provider after some very bad experience.

However, by then the inability of a hosting company will have usually cost you considerable money. Plus when looking for a new hosting provider, you know little more than before. The only improvement is that you can remove your existing provider from a long list of options.

Damn hard, yet not impossible

To determine a hosting provider’s quality is truly difficult, but there are some indicators that can help you decide nonetheless. Award plus points to hosting companies with their own data center, to those who have support staff directly in the data center 24/7, and to those providers that are happy to prepare a customized solution for you. 

On the other hand, if a hosting company is renting space in someone else’s data center hundreds of miles away from its headquarters, we recommend you rule it out at once. How will they fix your server if something goes wrong? The same goes for a provider with support staff that only work on weekdays from 9-5. It is well known that bugs and other problems notoriously rear their ugly head at 2 am on Saturdays. Ideally, just before Christmas.

Similarly, if a provider tries to squeeze your unique online project into one of their cookie-cutter solutions and doesn’t want to hear about customization, we recommend you run for the hills.

All of these markers can help you narrow down the list of potential providers. Unfortunately, it’s often not enough to make a final decision. Besides, some companies will tell you exactly what you want to hear, and then it becomes very difficult to figure out what’s true and what isn’t.

How to size up an intangible product

Hosting is a virtual service that you can’t just look at and evaluate. Therefore, some providers can promise you wonderful things that you’ll have difficulty verifying. For this reason, we recommend you go beyond the above-mentioned research and visit the company’s headquarters and data center.

Look into the data center itself, the backup elements they use, and even how many people are manning the support. It’s also essential whether the support staff are experienced professionals or whether it’s obvious they’re just some random students. Focus also on how the company manages server monitoring (does it allow them to discover a problem before it fully manifests itself?). All of this will help you judge how well the hosting provider takes care of servers entrusted to them. 

This tactic will further help you narrow down the list of potential hosting companies to the highest quality ones. You’re still not out of the woods, though. Many aspects are hard for you to gauge and clever salespeople can manage to hide a few skeletons under the data center floorboards. Therefore, it’s time to focus on quality indicator number one: references.

References, references, references

You’ve probably heard that in real estate what matters most are three things: location, location, location. Well, when it comes to hosting, it’s all about references. Thanks to them you can accurately estimate whether the given company can provide even more complicated solutions, whether they can customize infrastructure to your project and whether they’re a good fit for your business at all. 

Try to find out if your potential hosting providers have large and well-known companies that require customized complex infrastructure among their clients (such as Pilulka.cz). If so, it’s a great indicator that the hosting company can manage even extensive projects and won’t have a problem tailoring the hosting solution to your needs – no matter the size of your business. To give you an idea: at vshosting~, we host both the largest Czech projects with clusters composed of tens of servers and respectable clients who only have a single cloud server.

At the same time, focus on reference clients that have a business model similar to yours. Do you sell clothes online? Then you’ll be interested to know which hosting provider has e.g. Trenýrkárna.cz as a client. Do you run a digital agency? In that case, you’ll want to host with a company that provides infrastructure for e.g. Blueghost. And so on – you get the idea.

It is also important whether the hosting provider has experience with the technologies you use in your application. Therefore, ask also about concrete clients that chose the same technologies as yourself. Do you, for example, run a highly loaded MySQL database and need someone to take care of it, optimize it and ensure its operation in a high availability mode? That’s our daily bread at vshosting~ – we even take care of MySQL databases with terabytes of data in volume! 

But how can you rule out those companies that have no qualms about putting up a bunch of impressive logos on their website without actually having those companies as clients? Simple: ask for contact information of these clients and verify the references. Respectable hosting companies will have no problem giving them to you. On the other hand, if a company gives you a bunch of excuses about why they can’t give you their clients’ contact info, that’s a serious red flag.

References in the hosting business simply serve as insurance that whatever it is providers promise you, they can also deliver. 



At vshosting~, we make it our mission to not only provide our clients with top-notch hosting services but also to advise them well. In the 14 years that we’ve been on the market, we’ve seen quite a lot already. Therefore, we have a pretty good idea about what works and what spells trouble. One of the key (and very costly) issues we see time and again is a shared infrastructure for both development and production. This tends to be the case even with large online projects that risk losing enormous amounts of money should something go awry.

Considering how big a risk a shared dev and production environment poses, something going awry is only a matter of time. Why is this so dangerous? And how do you set up your infrastructure to eliminate the risks? We put together the most important points. 

Development vs. production environment 

The development (and testing) environment should serve one purpose only: developing and testing new software and features. This encompasses not only changes in your main application but also e.g. updates to the software stack on the server. In the dev environment, developers should be able to experiment without worrying about endangering production.

The production environment, on the other hand, is where the app runs for everyone to see. For instance, an online store, where customers browse and search for items, add them to carts and pay for orders. Production is simply all that is visible for your average user plus all the things in the background that are key for app operation (e.g. databases, warehousing systems, etc.).

But most importantly: the production environment is where the money is made. Therefore, we should keep our distance from it and play it soothing classical music, as any problem in production rapidly translates into lost revenue.

Risks posed by a shared infrastructure

If you don’t separate development from production, it can easily happen that your developers will release insufficiently tested software, which will in turn damage or break the entire application. In other words: your production will go down. Should you be sufficiently unlucky, it will take your developers several hours or even days to fix the app. If your app is a large online store, this translates into tens of thousands of euros in lost revenue. Not to mention the extra development expenditures.

Such a situation becomes especially painful if it occurs during a time of high traffic on your website. In the case of online stores, this is typically the period before Christmas – take a look at how much just an hour-long outage would cost you. It’s not just Christmas, though – take any period in which you invest in e.g. a TV commercial. This is a very expensive endeavor and cannot simply be turned off because your online store is down.

Unfortunately, we’ve witnessed way too many nightmarish situations like this. This is why we recommend all our clients develop software in a separate environment and release it into production only after verifying it in a testing environment. The same goes for upgrading the software stack of their production servers. Only by thoroughly testing outside of production can you avoid discovering a critical bug in production on a Friday night just before Christmas.

Inside a separate development environment, you can deploy new app versions (e.g. an updated online store) risk-free. There you can also test everything well before deployment to production. It will also allow you to update server components to their new versions (database, PHP, etc.) and test their compatibility and function. Only when you are certain everything works the way it should can you move it to production. All in all, you’ll save yourself lots of trouble and cut costs to boot.

How to separate development from production

When choosing a hosting solution, take the issue of separating development and production into consideration. Ideally, you should run development and testing on a separate server and production should “live” on another one (or on a cluster of servers). At vshosting~, we’re happy to offer you a free consultation regarding the best solution for your project – just drop us a line.

We’ll help you design appropriate configuration for your development and testing environment so that it fully reflects that of production but at the same time doesn’t cost you a fortune in unused performance you don’t need. As the development environment receives little traffic, it doesn’t have to be as robust. For example, if your production consists of a cluster of three powerful servers, one smaller virtual one will likely be just enough for your development purposes. We recommend using the cloud for development because it’s typically the most cost-efficient option.

If you opt for one of our managed hosting services, we’re even happy to create the development environment for you. Simply put, we’ll “clone” your production and adjust it in such a way that the environment remains identical but its performance is not unnecessarily high. That way, you’ll get all the benefits of separating development from production and save time and money while at it. You’ll then be able to make all your changes in development and, only after successful testing, transfer them to production.


vshosting~

It likely comes as no surprise that at vshosting~, we take security very seriously. Sometimes we joke that our measures are bordering on paranoia. But that’s our job. Only thanks to extremely strict measures and crisis scenarios fine-tuned to the last detail are we able to operate a data center that hasn’t experienced an outage since its opening in 2015 and provide our clients with maximum reliability.

In this article, we’ll take you behind the scenes and show you how we protect clients’ servers and data from three typical threats: server sabotage or theft, a prolonged blackout, and cooling system failure.

Apocalyptic scenario 1: Server sabotage or theft

If some random vandals, or worse, your competitors, got their hands on your servers, that would spell real trouble. Not only would your applications (e.g. your online store) stop working but the thieves could access all your data. Fortunately, if you’ve entrusted your infrastructure to vshosting~, you don’t have to worry about this ever happening.

Our data center ServerPark is an impenetrable reinforced concrete cube with armored doors surrounded by a tall fence with barbed wire to boot.


ServerPark data center

Not even that was sufficiently secure for us, though, so we added a sophisticated security system complete with cameras. The system activates the moment anyone, for instance, climbs over the fence or tries to break in through one of the doors. The only way into the server room is with a combination of several keys, chips, and an access code. And if that weren’t enough, each server rack is locked as well, so getting to a server without clearance is next to impossible.

It is worth mentioning that we also protect our clients against cyber sabotage: DDoS attacks. These can be ordered online easily (and cheaply), and the attackers can then overload your application, rendering it inoperable. That’s why we developed our own anti-DDoS protection system, which effectively shields our clients’ servers. Saboteurs will have no luck even if they decide to take the software route.

Apocalyptic scenario 2: Several days of blackout

Thieves, saboteurs, and other villains are taken care of, but what if, say, there was a power outage? Any data center consumes a huge amount of electricity – so how would we manage a blackout? And what if the power outage lasted for a full week? It is exactly for these cases that we’ve installed a complex system at ServerPark comprising UPSs (backup battery power sources), diesel generators, and a diesel tank.

2 out of 3 diesel generators at the ServerPark data center

We also operate all of these elements in so-called nx2 and n+1 modes. That means we’ve installed two independent power supply branches (nx2). Each branch is assigned one dedicated and one backup UPS (n+1) and has its own diesel generator and switchboard. On top of that, we have an extra generator that switches on automatically should either of the other two malfunction.

Each power supply branch also has its own set of batteries, and each set is composed of 3 independent strings. This is because, for technical reasons, the batteries within each string are connected in series; poor contact between two batteries could therefore take down the entire string. We also fit every server with 2 separate power sources, connected simultaneously to both of our power supply branches: to independent UPSs, switchboards, and generators.

So what would happen if there was a power outage? The data center would automatically switch to battery system power while our diesel generators would start turning on. Our batteries can fully supply ServerPark for more than 20 minutes. This provides ample time for the generators to start operating at full efficiency. After that, the data center would be fully powered by diesel generators. Thanks to our extensive diesel supply, we could operate like this for more than two weeks. To give you an idea, that’s several times more than most hospitals.
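The arithmetic behind that chain is simple to verify. The 20-minute battery figure comes from the description above; the tank volume and fuel-burn rate below are hypothetical placeholders, not our actual numbers:

```python
# Rough endurance arithmetic for the backup power chain described above.
# The 20-minute battery figure is from the article; the tank volume and
# fuel-burn rate are hypothetical placeholders.

battery_minutes = 20            # batteries carry the full load for 20+ minutes
generator_start_minutes = 2     # generators reach full output well within that window

diesel_liters = 40_000          # hypothetical tank volume
burn_rate_lph = 100             # hypothetical consumption, liters per hour at load

hours_on_diesel = diesel_liters / burn_rate_lph
days_on_diesel = hours_on_diesel / 24

assert generator_start_minutes < battery_minutes  # batteries bridge the startup gap
print(f"{days_on_diesel:.1f} days on diesel")     # → 16.7 days, i.e. over two weeks
```

The key design point is the handover: the batteries only need to bridge the few minutes it takes the generators to spin up, while the diesel supply determines how long the island can run.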

Apocalyptic scenario 3: Cooling system malfunction

We’ve handled the blackout then but there are other potential problems that could arise. A data center is full of electronics after all – what if some of it malfunctions? And what if the malfunction occurs in a key element, such as the cooling system? 

Servers create a lot of heat which is why they need to be cooled constantly to prevent overheating. If their temperature rose too high, it could cause server damage, destruction or even a fire. That’s why we implemented a robust cooling infrastructure along with a professional FM200 gas fire extinguishing system. Fire extinguishing should be off the table though – each of our servers has a safety switch that turns them off if they get too hot.

FM200 fire extinguishing system in the server room at ServerPark

Our cooling system is just as robust as our power supply one: we have twice as many air conditioning units and other elements as we need plus an extra one in reserve. Many data centers only have that one reserve but we didn’t consider it safe enough. Cooling system failure in our data center is, therefore, about as likely as you getting hit by lightning while the sky is clear.
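The value of that 2n + 1 arrangement can be made concrete with a little probability arithmetic. The sketch below assumes independent failures and a hypothetical 1% per-unit failure probability – illustrative numbers, not measurements from our data center:

```python
# Why 2n + 1 cooling units make a total failure astronomically unlikely.
# p is a hypothetical per-unit probability of being broken at any moment.
from math import comb

def failure_probability(n_needed: int, n_installed: int, p: float) -> float:
    """Probability that fewer than n_needed units are working,
    assuming independent failures (binomial model)."""
    max_broken = n_installed - n_needed  # we can tolerate this many broken units
    return sum(comb(n_installed, k) * p**k * (1 - p)**(n_installed - k)
               for k in range(max_broken + 1, n_installed + 1))

# Needing 3 units but installing 7 (2n + 1), with p = 1% per unit:
print(failure_probability(3, 7, 0.01))   # ≈ 2e-9
```

With only one spare (n + 1, i.e. 4 units installed), the same model gives a risk on the order of 10⁻⁵ – several thousand times higher, which is exactly why we didn't consider a single reserve safe enough.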

As you can see, our data center ServerPark is ready for the worst. Be it an attempted sabotage, a power outage, or an equipment malfunction, the quality of our services will remain constant. Thanks to our no-compromise security (and many other benefits), even the biggest Czech and Slovak internet companies have entrusted us with their online projects. And if you’re curious how we’re maintaining 100% operation during the coronavirus pandemic, check out our previous article.


Damir Špoljarič

Throughout the past 14 years, we have experienced plenty of difficult times, technical problems, and other complications. The current pandemic is different in many ways: it is impossible to prepare for such a situation in advance and design a detailed crisis procedure for it. Still, this is nothing compared to companies that had to “shut down” completely.

During the first days, we made some elementary changes (which we subsequently shared on social media). We have, perhaps, approached the situation with too much paranoia, but our intense measures have two goals: to delay (or entirely prevent) infection in our company and to maintain operation at all costs.

Preventing the infection

First and foremost, we’ve made wearing face masks mandatory for all personnel in the building. This was days before the government announced this measure country-wide.

We also banned the use of public transport and introduced shared rides using our company cars. At the same time, we allowed as many people as possible to work from home – though in our case that is quite complicated – which significantly reduced our on-site staffing.

In an effort to prevent contamination, we also restricted outside persons from entering the building and asked clients to visit only in the most urgent cases. We subsequently tightened this measure and introduced a total ban on entry for outside persons, clients, and suppliers alike, except in emergencies. Our teams of admins and technicians handle all other issues on behalf of clients so that visits are not necessary.

Furthermore, we measure the temperature of everyone entering the building and disinfect workstations, door handles, and other surfaces daily.

Preparing for anything

One of the early measures was also filling up our diesel tanks with fuel for our generators (tens of thousands of liters purchased). We don’t expect the state to “pull the plug” on electricity for companies but we want to be ready for anything. This way, we can endure a several-week-long blackout without a hitch.

On top of everything, we split the management team in such a way that its members cannot all get infected at the same time. The objective is for full managerial operation of the company to remain intact even if someone falls ill, our strict measures notwithstanding.

Paradoxically, our volume of work during this crisis has remained the same or even increased. Most of our clients are online stores, which are currently experiencing demand comparable to the pre-Christmas season. Food is not the only thing in demand – electronics and even sports goods are selling well too. Thanks to our wonderful colleagues, however, we are managing everything just fine.

Full lockdown? We’re ready

We’ve also prepared the company for an emergency “island-like” operation in the event that the state enforces a full lockdown after all (data centers fall under telecommunications and are thus not exempt from it). The building is now fully stocked with food and other essentials to make continual operation of the company and the data center possible.

A big thank you to all our colleagues, who have approached the situation responsibly and devotedly. It is in tense situations like these that the health and strength of a company’s teams come to light. At the same time, I would like to assure our clients that vshosting~ is still running at 100% and is ready for every crisis scenario.

Damir Špoljarič
CEO



vshosting~

There can be quite a few reasons to leave web hosting: maybe you need more setup flexibility, higher limits, better availability or performance, or you want to use specific software that doesn’t quite agree with your web hosting. Web hosting has simply become too small for you.

On the other hand, web hosting provides a relatively high level of user-friendliness: you control it using a graphic user interface, the provider takes care of everything regarding both hardware and software, and you don’t actually have to worry about much of that behind the scenes stuff. Upgrading to a new hosting solution that would provide the same level of comfort is, therefore, no easy task.

Where to then?

Web hosting alternatives that offer more flexibility and performance are plentiful on the market. Among the closest ones from a user’s perspective are a VPS and managed services. You may also consider a dedicated server or getting your own physical server.

Because reality can be quite a bitch sometimes, each of these options comes with both advantages and disadvantages. Let’s take a look at them. 

VPS

At first glance, the most attractive option is the so-called VPS, i.e. a virtual private server. Compared to web hosting, a VPS gives you complete freedom to install whatever software you want and set everything up the way you like. A VPS is also quite cheap, which proves very attractive, especially for projects that are just starting out.

However, full freedom comes with a caveat – you have to take care of everything yourself. And we do mean everything: installations, updates, security measures, any changes, problem-solving, backups, and so on. You’ll need your own administrator to make sure everything works the way it’s supposed to and that your project stays safe.

There are many threats lurking in the shadows that you’ll have to identify and stay clear of when managing your own server.

Own physical server

Another option you can transfer to from web hosting is getting your own server. That way, you’ll get a lot more performance than with web hosting or a VPS as well as a lot of flexibility. On the other hand, this solution has similar disadvantages as a VPS – you have to deal with everything on your own. Which is costly, annoying, and pretty dangerous to boot (unless your administrator is one hell of a guy who works 24/7).

Besides software management, you also have to take care of all things hardware – cooling, a constant energy supply (not quite as easy as it sounds; just plugging it in doesn’t cover it), and more. All things considered, getting your own physical server comes with the highest risk of outages, cyberattacks, and other fun things like that.

Dedicated server

A dedicated server, unlike your own physical server, is actually a server as a service: your provider takes care of all things hardware and sometimes even does the initial installation. The server provider also deals with all hardware-related issues – e.g. at vshosting~, we guarantee solving any hardware problem within 60 minutes, day or night.

Your dedicated server is placed in a data center which tends to be well protected from power outages or intruders. In addition, it has much better connectivity than a server plugged into your own makeshift server room.

Compared to web hosting, a dedicated server offers much higher performance and you can install pretty much whatever you want on it. The software side of things is still on you, though, just like with a VPS or your own server. You’ll therefore have a hard time getting by without an administrator (and the extra costs that come with one).

Managed services

The most pleasant upgrade from web hosting is, without a doubt, to a managed server or even a more robust managed service. From a user’s perspective, these kinds of services are like web hosting on steroids: the service is equally easy to use and the hosting provider takes care of all operational things (both software and hardware). At the same time, you get much higher performance at your disposal as well as a lot more flexibility regarding settings, software compatibility, and the like. 

In effect, this means you don’t need an administrator, can forget about what’s going on with the server behind the scenes and everything works as it should. And if not, it’s your provider’s job to fix it ASAP. You can focus on your core business.

Depending on the extent of your project and the technologies used, all you need to do is decide whether you’ll go with a managed server, a more robust managed cluster, or e.g. a managed solution for Kubernetes.

Cloud or metal?

If you do opt for a managed server, you’ll probably run into the “cloud vs. physical server” dilemma. In our experience, it’s hard to say point-blank that one is better than the other – it depends on your specific situation.

For a smaller but quickly growing online project where high availability and flexibility is key, cloud is what you should go for. Thanks to a lower performance requirement, cloud will also be the more frugal option for you. And if you grow out of it eventually, it’s easy to transfer to a physical solution after. 

However, if your business requires high performance or the storage of large amounts of data, it pays to jump straight to a physical server. It packs a much bigger performance punch and becomes much cheaper per unit of performance than the cloud.

Not all managed services are created equal 

Managed services have been gaining popularity thanks to their user-friendliness. Unfortunately, not every provider means a truly complete service by “managed”, and clients can thus be unpleasantly surprised.

Ideally, when it comes to a managed service, the provider handles the initial installation, all of the server monitoring, and operating system updates as well as all of the issues that might arise – be it software or hardware related. That’s how we do it at vshosting~.

Some other providers, however, understand a managed service as only the initial software installation and subsequent care for hardware. Alternatively, they may be prepared to handle software-related problems but charge extra fees for that. That’s why we recommend having a very close look at what your chosen managed service truly encompasses.

The best solution for your project

Have you found your pick?

We know from experience how difficult making this choice can be. Everything is individual and you’ll typically get the best results if you have your hosting solution customized just for your project.

So if you’re still wondering about the best option for you, shoot us an email. Our experts will be happy to advise you on what option is best for you.


Damir Špoljarič

Take a look at the most important reasons why hundreds of clients opt for dedicated servers from vshosting~.

Customized configuration

More than 40% of our clients take advantage of our individualized configuration option for dedicated servers. We provide dedicated servers with high-performing processors and up to 1 TB of RAM.

Immediate service and upgrades

We are the only company in the Czech Republic to guarantee the repair or exchange of a server within 60 minutes in the unlikely event of a malfunction. On average, though, we repair a server within 25 minutes :-). We also keep hundreds of replacement servers in stock directly in the data center and can therefore exchange them immediately. In addition, we can upgrade your server or change its configuration just as fast.

Monitoring and remote management included

We monitor each dedicated server, and in the event of its unavailability (e.g. due to overload) we contact the client within 5 minutes. If the inaccessibility is caused by a hardware issue, we then assist them in resolving it.
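As a rough illustration of what such an availability check involves – not our actual monitoring stack; the URL and alert action here are placeholders – a minimal poll might look like this:

```python
# Bare-bones availability check in the spirit of the monitoring described
# above: poll a server and raise an alert if it stops responding.
# The URL and the alerting action are illustrative placeholders.
import urllib.request
import urllib.error

def is_reachable(url: str, timeout: float = 5.0) -> bool:
    """Return True if the server answers with an HTTP status under 500."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status < 500
    except urllib.error.HTTPError as err:
        return err.code < 500          # 4xx means the server is up, just refusing
    except (urllib.error.URLError, OSError):
        return False                   # no answer at all: treat as down

if not is_reachable("https://example.com"):
    print("ALERT: server unreachable – notify the client within 5 minutes")
```

Real monitoring runs checks like this from multiple locations and at short intervals, so a single flaky probe doesn't trigger a false alarm.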

All dedicated servers are plugged into our central management network, so clients can use our web application KVM Proxy to manage them. They can monitor the server hardware log, access the server console, execute a remote boot, or physically restart the server. All of this is done from one central interface with maximum comfort – something especially appreciated by clients who run a number of dedicated servers.

Really good connectivity

Each server is connected via 2x 1 Gbps links (redundant thanks to LACP) to two distinct switches in active-active mode, so the full speed of 2 Gbps can be used. We also offer speeds of 2x 10 Gbps.

vshosting~ makes no false promises and our backbone network is prepared for large data streams. Out of all hosting companies, we use the best network hardware from top suppliers.

We are members of 4 peering centers in 3 countries and continuously expand our European infrastructure. In the near future, we also plan an upgrade to 2x 100 Gbps into NIX.CZ. Our technology is ready for that already.

Redundancy

We only use servers with two power sources, or servers connected via an STS switch, in order to fully utilize the features of our top-of-the-line data center and its energy infrastructure. Thanks to that, we are able to guarantee extra-high availability.

Take a look behind the scenes – into the backup systems of our data center:

Complete management of the physical infrastructure – everything you need

We provide comprehensive infrastructure as a service, even for the largest internet projects. Here are the most popular add-on services we offer with dedicated servers:

– private networks between servers (client VLAN)
– lease of NetApp storage or space in our CloudStorage SSD/SAS for central storage (NFS/iSCSI)
– top-of-the-line premium AntiDDoS protection (Radware solution)
– VPN as a Service
– vshosting~ CDN

And much more.

Read more about our dedicated servers.


We have successfully assisted with migrations for hundreds of clients over the course of 17 years. Join them.

  1. Schedule a consultation

    Simply leave your details. We’ll get back to you as soon as possible.

  2. Free solution proposal

    A no commitment discussion about how we can help. We’ll propose a tailored solution.

  3. Professional implementation

    We’ll create the environment for a seamless migration according to the agreed proposal.

Leave us your email or telephone number




    Or contact us directly

    +420 246 035 835 Available 24/7
    consultation@vshosting.eu
    We'll get back to you right away.