Kubernetes is a popular, cloud-native container orchestration system. Over the last few years, more and more companies and organizations have adopted Kubernetes in production environments. As adoption grows, there is often pressure to migrate applications that are currently deployed by other means onto Kubernetes.

However, migrating a system into a cloud-native environment can be a daunting undertaking: it differs from traditional architectures in a variety of ways. Done well, migrating these applications to Kubernetes can help organizations adopt DevOps practices and unify their operations onto a single set of cloud tooling and expertise.

This article describes the process of migration and will cover the following topics:

– Onboarding the existing client’s infrastructure
– Working with the client to define their needs, expected load, niche, and remaining requirements
– Preparing the infrastructure for the migration
– Transforming the client’s services and applying them to the created infrastructure
– Transferring the data from their existing infrastructure
– SLA

Onboarding the existing client’s infrastructure

When we start with a new client, we meet to discuss their expectations and take a deeper look into what their current infrastructure looks like. Prior to the meeting, our DevOps team representative asks for the following technical information:

Where is the system hosted (cloud, hosting provider, or on-premises)?

Migrating from the existing infrastructure means changing the host system. Our stack consists of Docker and Kubernetes, which:
– are reliable, modern technologies that fit the client’s needs
– can handle both small sites and huge, complex applications
– are easy to manage and update.

Which services are present (e.g. WordPress, Nginx, NodeJS, etc.)?

Usually, our development team leader creates a skeleton structure in the source control system (usually GitLab) that mirrors the client’s current structure. This makes transferring the app and all service communications faster, more reliable, and easier for developers to work with.

Which DNS solution is used?

Sometimes, simple DNS records are not enough. We offer services to delegate domain names to WAF systems such as Cloudflare or Incapsula. After the records are transferred, we configure security, SSL, caching, and other options that make the site more secure.

Are the services dockerized or daemonized?

This is a very important point. If the services are not dockerized, we put in the effort to move them into Docker containers. Docker containers are the modern way of packaging and running applications, for many reasons (too many for today’s post). Kubernetes is a container orchestration tool that allows us to manage complicated structures in the most rational way.
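
To make the point concrete, here is a minimal sketch of how a freshly dockerized service might be handed to Kubernetes as a Deployment. The service name, image path, and port below are hypothetical placeholders rather than a real client setup:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app                 # hypothetical service name
  labels:
    app: web-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
        - name: web-app
          # placeholder image built from the newly dockerized service
          image: registry.example.com/client/web-app:1.0.0
          ports:
            - containerPort: 8080
          resources:
            requests:
              cpu: 100m
              memory: 128Mi
            limits:
              cpu: 500m
              memory: 256Mi
```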

Which OS is used on the host system?

Usually, this is related to whether or not the services are dockerized or daemonized. See above.

Which database(s) are used?

We work with different databases. Some of them are deployed as containers, but some run as managed cloud services because of their stateful nature. For example, in contrast with Redis, it is not considered best practice to run MySQL as a container.
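
As a rough sketch of that split, assume a managed MySQL instance outside the cluster and Redis inside it; the application can still address both through in-cluster service names. The database endpoint below is a made-up placeholder:

```yaml
# The app connects to "mysql" in-cluster, but the database itself is a managed
# cloud service outside the cluster (placeholder endpoint).
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  type: ExternalName
  externalName: client-db.example-region.rds.amazonaws.com
---
# Redis, by contrast, can run in-cluster as an ordinary Deployment.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
        - name: redis
          image: redis:7-alpine
          ports:
            - containerPort: 6379
```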

Is there any backup solution for any of the components?

A backup solution is very important. During the onboarding stage, we work together with the client to determine which things have to be backed up; usually, these are static files, configuration, and databases. Upon agreement, we configure the backup processes and store backups on different servers to account for fault tolerance.
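
As one example of what such a backup process can look like on Kubernetes, the sketch below schedules a nightly MySQL dump onto a separate volume. The schedule, Secret, and PVC names are assumptions, not an actual client configuration:

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: mysql-backup
spec:
  schedule: "0 2 * * *"                    # every night at 02:00
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: dump
              image: mysql:8.0
              command: ["/bin/sh", "-c"]
              args:
                - |
                  mysqldump -h "$DB_HOST" -u "$DB_USER" -p"$DB_PASSWORD" \
                    --all-databases > /backup/dump-$(date +%F).sql
              envFrom:
                - secretRef:
                    name: mysql-backup-credentials   # hypothetical Secret with DB_HOST/DB_USER/DB_PASSWORD
              volumeMounts:
                - name: backup
                  mountPath: /backup
          volumes:
            - name: backup
              persistentVolumeClaim:
                claimName: backup-storage            # hypothetical PVC on separate storage
```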

Is there any Git repo with the code?

A repo with code makes a transfer much easier. With a repo, developers can understand what the project is and what to expect. We can start to work on it on the fly or migrate it to a GitLab server where we can cover all of the requirements (CI/CD, Version Control, Creating Environments, Storing Credentials, etc.).

Which email solution is used (if any)?

Emails are still a primary communication platform, often used in many applications. We will need to configure and manage external or internal email solutions for the client.

Which SSL certificate issuer is used (if any)?

Certificates can be purchased or issued for free. If using a purchased certificate, all we ask is that it is shared with us so it can be attached to the services. If using a freely issued certificate, renewal has to be automated because of the relatively short certificate lifetime. This is why we suggest using WAF certificates (Cloudflare, Incapsula, etc.) as a free and reliable source of SSL certificates. Let’s Encrypt is often used as an end-point SSL certificate issuer with Docker/Kubernetes services.
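
When Let’s Encrypt is used inside Kubernetes, renewal is usually automated with cert-manager. Below is a minimal sketch of a cluster-wide issuer, assuming cert-manager and an nginx ingress controller are already installed; the contact email is a placeholder:

```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: ops@example.com                  # placeholder contact address
    privateKeySecretRef:
      name: letsencrypt-prod-account-key
    solvers:
      - http01:
          ingress:
            class: nginx                    # assumes an nginx ingress controller
```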

Client-focused

We work with clients to define their needs, the expected load, their niche, and the rest of the requirements.
The infrastructure we create depends on the client’s needs. In most cases, the client wants their app to be reachable from the outside, scalable, reliable, and highly available. This is why we chose Kubernetes as the container orchestration tool: it allows us to run almost any modern application in a short time with less effort. Sometimes clients have their own development team and we are responsible only for the underlying infrastructure. On the DevOps side, we can create the structure for almost any type of application, working with tools such as Terraform, Kubernetes and Helm, and Rancher, or deploying Kubernetes on-premises. Linnovate also offers developers on demand, available to clients when they need to augment their team.

Preparing the infrastructure for the migration

The final structure, which can handle an application of almost any type, consists of:
— A Kubernetes cluster deployed with Terraform (IaC)
— Auto-generated SSL certificates
— A dynamic storage provisioner
— Applications that can be scaled depending on load and needs (see the autoscaler sketch after this list)
— Configuration that can be changed without any downtime
— Updates that do not cause any downtime
— An easy-to-monitor alerting/logging setup, usually:
— Prometheus + Grafana + AlertManager to gather metrics
— EFK to gather, parse, and view the logs
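
To illustrate the scaling point above, one common mechanism is a HorizontalPodAutoscaler. The sketch below assumes a metrics server is installed and targets a hypothetical Deployment named web-app:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web-app                 # hypothetical Deployment to scale
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70  # add replicas when average CPU exceeds 70%
```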

Harbor (The storage for Docker images/Helm Charts)
— A secure way of storing Docker images and Helm charts
— The ability to control access with robot accounts (without access to the WebUI)

GitLab (source control system for developers and the main CI/CD platform)
— A widely used source code version control system that allows you to do almost everything with the code

Rancher (allows developers and clients to see what actually happens with their application)
— A very friendly WebUI that is attractive to clients; it helps you manage, scale, change, and update the application on the fly.

Transforming the client’s services to the newly created infrastructure

Here’s what we’ll do to deploy the application on the pre-created structure.

— Split the application among Docker containers (one service: one container). A simple example would be splitting a web service + PHP-FPM into two containers: one for Nginx with ports 80/443 open, and another for PHP-FPM with port 9000 open. This makes them independent of each other and, as a result, easy to manage and scale (see the docker-compose sketch after this list).
— Create the code structure:
— Put each service into separate repositories
— Put all application services under GENERIC_REPO (Parent repo) as submodules
— Create docker-compose.yml for local development in GENERIC_REPO
— Create Kubernetes/Helm templates
— Configure CI/CD that does the following (a pipeline sketch also follows this list):
— Pushes the image and Helm chart to Harbor
— Deploys dev/stage/prod environments
— Implements versioning for the application
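
To make the one-service-one-container split concrete, here is a minimal docker-compose.yml sketch for local development in GENERIC_REPO, assuming the Nginx config proxies PHP requests to php-fpm:9000; image tags and paths are placeholders:

```yaml
version: "3.8"
services:
  nginx:
    image: nginx:1.25-alpine
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx/default.conf:/etc/nginx/conf.d/default.conf:ro   # assumed config with fastcgi_pass php-fpm:9000
      - ./app:/var/www/html:ro
    depends_on:
      - php-fpm
  php-fpm:
    image: php:8.2-fpm-alpine
    expose:
      - "9000"
    volumes:
      - ./app:/var/www/html
```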

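And here is a rough sketch of the CI/CD flow itself, assuming a GitLab runner with Docker and Helm available; the Harbor URL, project name, and chart layout are made-up placeholders:

```yaml
stages:
  - build
  - deploy

build-and-push:
  stage: build
  script:
    # build and push the application image to Harbor (placeholder registry path)
    - docker build -t harbor.example.com/client/web-app:$CI_COMMIT_SHORT_SHA .
    - docker push harbor.example.com/client/web-app:$CI_COMMIT_SHORT_SHA
    # package and push the Helm chart to Harbor's OCI registry
    - helm package chart/ --version 0.1.0-$CI_COMMIT_SHORT_SHA
    - helm push web-app-0.1.0-$CI_COMMIT_SHORT_SHA.tgz oci://harbor.example.com/client/charts

deploy-dev:
  stage: deploy
  environment: dev
  script:
    - >-
      helm upgrade --install web-app chart/
      --namespace dev
      --set image.tag=$CI_COMMIT_SHORT_SHA
```
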
Transfer data from the existing infrastructure

Things that are transferred during the migration:
– Source Code – the heart of the app
– Databases
– Wiki with known issues/additional info
– DNS configuration (if there is more than just an “A” record)

SLA

After the application is deployed and passes QA, DNS records can be switched. We continuously monitor the application using tools deployed on the cluster as well as external tools such as UpTimeRobot. The SLA means that we will take action if something goes down; we are ready 24/7 to resolve any issue.
Here are some issues we’ve provided support for:
— Site gives an error in UpTimeRobot
— Resource usage is extremely high for a defined period of time
— The CI/CD flow is not working for production
— A custom alert from Prometheus that monitors specific log lines or metrics (see the sketch after this list)
— DNS issues (cache on the WAF, accidental deletion, etc.)
— Database backup failure
— Storage is full and has to be expanded
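
For the custom Prometheus alerts mentioned above, here is a minimal sketch of a rule, assuming the Prometheus Operator is installed (it watches PrometheusRule objects) and using a hypothetical metric name:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: custom-app-alerts
  labels:
    release: prometheus              # assumed label the operator is configured to select
spec:
  groups:
    - name: app.rules
      rules:
        - alert: HighErrorRate
          # app_http_requests_total is a placeholder metric name
          expr: rate(app_http_requests_total{status=~"5.."}[5m]) > 1
          for: 10m
          labels:
            severity: critical
          annotations:
            summary: "5xx error rate has been above 1 req/s for 10 minutes"
```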

Kubernetes helps organizations achieve their DevOps goals and streamline operations. However, because some applications require a great deal of effort to migrate, K8Support by Linnovate offers migration services. The first and foremost step toward a successful migration is establishing its goals and gathering relevant data. The series of questions presented above helps gather the information about the application that is relevant to migration, and that information determines how to represent the application in Kubernetes. With it in hand, we can run a smooth migration process, and companies can streamline their operations.

For more information about our Kubernetes deployment, migration, and support services, click here.

To receive articles and instructables, case studies and strategies for Kubernetes and Cloud, click here.
