What would the cloud be without virtualization? The provision of virtual IT resources and the associated abstraction from the physical computing machine paved the way for the cloud age. All technologies used in the context of cloud computing are based on the virtualization of IT resources such as hardware, software, storage or network components. Technically, these forms of virtualization sometimes differ considerably.
Virtualization is an abstraction of physical IT resources. Hardware and software components can be abstracted. An IT component created as part of virtualization is referred to as a virtual or logical component and can be used in exactly the same way as its physical counterpart.
The central advantage of virtualization is the abstraction layer between the physical resource and the virtual image. This is the basis of various cloud services that are becoming increasingly important in everyday business.
The Kernel-based Virtual Machine (KVM) is an open source virtualization technology integrated into Linux. KVM turns Linux into a hypervisor, enabling a host machine to run several isolated virtual environments, known as guests or virtual machines (VMs).
KVM is part of the mainline Linux kernel and is included in every current Linux distribution. It therefore benefits immediately from every new kernel feature, fix, or improvement, without additional technical effort.
Another advantage of KVM is that guest systems run at near-native speed, i.e. a guest responds almost as quickly as a system running directly on the hardware.
How does KVM work?
KVM converts Linux into a type 1 (bare-metal) hypervisor. All hypervisors need the same operating-system-level components to run VMs: a memory manager, a process scheduler, an input/output (I/O) stack, device drivers, a security manager, a network stack, and more. KVM has all of these components because it is built into the Linux kernel. Each VM is implemented as a regular Linux process, scheduled by the standard Linux scheduler, with dedicated virtual hardware such as a network card, graphics card, CPU(s), memory, and disks.
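On a Linux host, you can check whether KVM is usable and boot a minimal guest with a few commands. This is a sketch: the disk image and installation ISO names below are illustrative, and qemu-kvm is assumed to be installed.

```shell
# Check that the CPU exposes hardware virtualization (Intel VT-x or AMD-V);
# a count greater than 0 means KVM can use hardware acceleration.
grep -Ec '(vmx|svm)' /proc/cpuinfo

# Verify that the KVM kernel modules are loaded.
lsmod | grep kvm

# Create a disk and boot a minimal guest directly with QEMU/KVM
# (disk.qcow2 and install.iso are assumed paths on this host).
qemu-img create -f qcow2 disk.qcow2 20G
qemu-system-x86_64 -enable-kvm -m 2048 -smp 2 \
  -drive file=disk.qcow2,format=qcow2 \
  -cdrom install.iso
```

Once the guest is running, it appears as an ordinary process in `top` or `ps` on the host, exactly as described above.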
KVM uses a combination of Security-Enhanced Linux (SELinux) and secure virtualization (sVirt) for improved VM security and isolation. SELinux establishes security boundaries around each VM. sVirt extends the capabilities of SELinux: it applies mandatory access control (MAC) to guest VMs and prevents manual labeling errors.
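On an SELinux-enabled host with running KVM guests, the sVirt labeling can be inspected directly; each guest process and its disk images carry a unique, dynamically generated category so that one compromised guest cannot touch another guest's resources. A sketch, assuming the default libvirt image directory:

```shell
# Show the SELinux/sVirt context of running QEMU guest processes;
# each VM gets its own MCS category pair (e.g. svirt_t:s0:c123,c456).
ps -eZ | grep qemu

# Disk images are labeled to match the VM that owns them.
ls -Z /var/lib/libvirt/images/
```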
KVM uses the memory management functions of Linux, including non-uniform memory access (NUMA) and kernel same-page merging (KSM). A VM's memory can be swapped, backed by large volumes for better performance, and shared or backed by a disk file.
Performance and scalability
KVM inherits the performance of Linux and scales to match load as the number of guest machines and requests grows. Large application workloads can be virtualized with KVM. In addition, KVM forms the basis of many enterprise virtualization setups, for example for data centers and private clouds (via OpenStack®).
Scheduling and resource management
In the KVM model, a VM is a Linux process that is scheduled and managed by the kernel. The Linux scheduler allows fine-grained control over the resources assigned to a process and guarantees a quality of service for specific processes. In KVM, this includes the Completely Fair Scheduler (CFS), control groups (cgroups), network namespaces, and real-time extensions.
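Because each VM is a process under the Linux scheduler, its CPU allocation can be tuned like any other workload, for example via libvirt's virsh tool. A sketch, where the domain name "guest01" is illustrative:

```shell
# Show the scheduler parameters of a running guest.
virsh schedinfo guest01

# Double the CPU weight of this VM relative to its siblings;
# cpu_shares maps onto the cgroup CPU controller.
virsh schedinfo guest01 --set cpu_shares=2048

# Cap the VM at 50% of one CPU: 50,000 microseconds of runtime
# per 100,000 microsecond scheduling period.
virsh schedinfo guest01 --set vcpu_quota=50000 --set vcpu_period=100000
```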
Lower latency and higher prioritization
The Linux kernel contains real-time enhancements that allow VM-based apps to run with lower latency and higher prioritization (compared to bare metal). The kernel also splits processes that require long computing times into smaller components, which are then scheduled and processed accordingly.
Linux Containers (LXC) are roughly comparable to jails under BSD or containers (zones) under Solaris. Within the running host system, a closed and secure area is set up for the guest, inside which individual services or entire systems are virtualized. In LXC jargon, this environment is called a container. Compared to many other solutions, LXC works directly at the operating-system level. Whereas VMware or VirtualBox emulate a complete PC including BIOS and hardware and are therefore well suited for installing other operating systems, LXC uses the existing hardware and the existing kernel to provide its containers.
LXC is suitable for environments that require separate instances with their own resources for security reasons, as well as for areas where a certain degree of flexibility must be guaranteed. If necessary, a container can be moved to another host in a few simple steps. The approach suits providers who want to offer virtual servers at low cost as well as software developers who need to work in defined environments.
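The typical LXC container lifecycle looks like this on a host with the LXC tools installed; the container name "web01" and the Debian release are illustrative:

```shell
# Create a Debian container from the public image server.
lxc-create --name web01 --template download -- \
  --dist debian --release bookworm --arch amd64

# Start it and check its state.
lxc-start --name web01
lxc-info --name web01

# Run a command inside the container, then stop it.
lxc-attach --name web01 -- hostname
lxc-stop --name web01
```

Because the container shares the host kernel, it is ready within seconds: there is no BIOS, bootloader, or kernel of its own to start.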
Docker is a containerization technology that enables the creation and operation of Linux® containers.
With Docker, containers can be treated like extremely lightweight, modular virtual machines. These containers provide flexibility: you can create, use, copy, and move them between environments, which in turn helps optimize apps for the cloud.
Docker technology uses the Linux kernel and its features, such as cgroups and namespaces, to isolate processes so that they can run independently. This independence is the purpose of the container: the ability to run multiple processes and apps separately from one another. Your infrastructure is better utilized while the security that comes from working with separate systems is maintained.
Container tools, including Docker, operate with an image-based delivery model. This makes it easy to share an application, or a package of services with all of its dependencies, across multiple environments. Docker also automates the deployment of the application (or the combination of processes that make up an application) within this container environment.
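The image-based workflow can be sketched in a few commands. The image name "demo-app" and the registry URL are illustrative, and a running Docker daemon is assumed:

```shell
# Describe the image declaratively in a Dockerfile.
cat > Dockerfile <<'EOF'
FROM alpine:3.19
COPY hello.sh /hello.sh
CMD ["/bin/sh", "/hello.sh"]
EOF

printf 'echo "hello from a container"\n' > hello.sh

# Build the image, run a container from it, and share it via a registry.
docker build -t demo-app:1.0 .
docker run --rm demo-app:1.0
docker tag demo-app:1.0 registry.example.com/demo-app:1.0
docker push registry.example.com/demo-app:1.0
```

Because the image carries the application and all of its dependencies, the same artifact runs identically on a developer laptop, a test server, or in the cloud.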
These tools build on the same kernel features as LXC and give users unprecedented access to applications: versions can be provided, verified, and distributed more quickly. That is what makes Docker user-friendly and unique.
Although Docker technology was originally built on LXC technology, which is mostly associated with "traditional" Linux containers, Docker is not the same as Linux containers; it has since freed itself from this dependency. LXC was useful as lightweight virtualization but did not offer a good developer or user experience. Docker technology offers more than the ability to run containers: it also simplifies the process of creating and building containers, shipping images, and versioning images, among other things.
Docker on its own is ideal for managing individual containers. However, as you begin to use more and more containers and containerized apps, broken down into hundreds of parts, management and orchestration can become very difficult. At some point you have to take a step back and group containers to provide services such as networking, security, and telemetry across all of them. This is where Kubernetes comes in.
The biggest difference is that with LXC and Docker, all containers share the host's kernel. That means you can only run Linux containers on a Linux host. With KVM, in contrast, a complete virtual machine is emulated, so you can also run Windows systems on Linux hosts.
Containers are sufficient for most tasks: to simply isolate a few programs or users from the host system, or to operate a small additional Linux environment, containers are the better choice because they save resources. However, if the goal is to run a Linux system with a different kernel version, or even Windows, then you need KVM.
Proxmox VE (Proxmox Virtual Environment, or PVE) is a Debian-based open source virtualization platform for operating virtual machines, with a web interface for setting up and controlling x86 virtualization. The environment is based on QEMU with the Kernel-based Virtual Machine (KVM). In addition to classic virtual machines, which also allow the use of virtual appliances, PVE offers Linux Containers (LXC).
The web interface simplifies much of the routine work, such as setting up, starting, and stopping VMs, creating backups, managing the network infrastructure, and handling the ongoing operation of virtual machines and their associated storage on the host system. In addition, several PVE host systems can be formed into clusters based on the Corosync Cluster Engine; these are managed together, and virtual machines and their virtual disks can be moved between them. This also enables the construction of high-availability clusters.
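The web interface wraps the same operations that are available on the PVE command line. A sketch of typical tasks, where the VM ID 100, the storage name "local-lvm", and the node names are illustrative:

```shell
# Create and start a KVM guest managed by Proxmox VE.
qm create 100 --name web01 --memory 2048 --cores 2 \
  --net0 virtio,bridge=vmbr0 \
  --scsi0 local-lvm:32
qm start 100

# Back up the running VM with a snapshot.
vzdump 100 --mode snapshot

# Live-migrate the VM to another node in the cluster.
qm migrate 100 node2 --online

# Form a cluster (run on the first node; the others join it).
pvecm create mycluster
```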
OpenStack® can be used to implement your own cloud architecture on standard hardware. The software project is not new, but so far companies have adopted it rather cautiously. OpenStack® is an open source project consisting of a series of software components for building cloud platforms. In principle, these components should make it easier for companies to build their own clouds. You could also call it an open "cloud operating system" that holds a large number of pre-initialized resources within a data center.
For example, a company's IT department can act as an internal cloud service provider. The administrators provide all OpenStack cloud resources and control them via a dashboard. At the same time, the individual departments can provision these resources for their projects via a web interface as needed. So if a department needs cloud infrastructure resources for a specific project, it assembles them individually from the OpenStack pool.
The digitization of business life, of essential production processes, and of private life is in full swing. At the same time, threats from server failures, viruses, and cybercrime are increasing. This is compounded by the neglect of IT security in both the private sphere and the business world. Genuinely necessary protective mechanisms are usually only considered once the damage has already occurred and restoring the IT infrastructure has caused enormous costs.
Linux offers you a secure basis for your IT infrastructure right from the start: on the one hand because it was conceived from the beginning as an operating system geared toward network operation, on the other because the free availability of the source code makes it very hard for defective or misused functions to go unnoticed. In addition, "open source" has always meant continuous improvement by innovative specialists from all over the world. More and more users now trust Linux, which among other things provides the kernel for the numerous Android installations, including companies and institutions such as Siemens, BMW, Lufthansa, Deutsche Post AG, Greenpeace, and state institutions including the Federal Commissioner for Data Protection.
Whether you are a corporation, a medium-sized business, a craft enterprise, or a sole trader with the corresponding IT infrastructure who wants to fully satisfy customers with your products, or a private individual with support requests: your IT infrastructure should work reliably around the clock. As an expert in this field, IT-LINUXMAKER can protect your information effectively and quickly. With the services of IT-LINUXMAKER you secure your competitive advantage through the stability of your IT infrastructure and your data.
You can find all checklists for safe digital work here:
Our fees depend on the service/product and the scope. Therefore, we can only state our fees in an offer if we already know your request.