Virtualization Tools from the Major Operating System Vendors. Introducing Virtualization as a Solution for Centralized Enterprise Infrastructure Management

Virtualization technologies have a history of more than forty years. However, after a period of triumphant use in the 1970s and 1980s, primarily on IBM mainframes, the concept faded into the background in the design of corporate information systems. The point is that virtualization is historically tied to shared computing centers, where a single set of hardware had to host several logically independent systems. From the mid-1980s onward, however, the computer industry came to be dominated by a decentralized model of information systems, built first on minicomputers and then on x86 servers.

Virtualization for the x86 Architecture

With the arrival of personal computers, the problem of hardware virtualization seemed to disappear by definition, since each user had an entire computer, with its own OS, at their disposal. But as PC power grew and the reach of x86 systems expanded, the situation changed quickly. The "dialectical spiral" of development made another turn, and at the turn of the century a new cycle of centralization of computing resources began. At the start of this decade, against the background of enterprises' growing interest in the efficiency of their computing assets, a new stage in the development of virtualization technologies got under way, this time associated chiefly with the x86 architecture.

It should be stressed at once that although the ideas behind x86 virtualization contained, in theory, nothing previously unknown, the phenomenon was qualitatively new for IT compared with the situation twenty years earlier. In the hardware and software architecture of mainframes and Unix computers, virtualization questions were addressed at the most basic level. x86 systems, by contrast, were not built for data-center operation at all, so their evolution toward virtualization has been a complex process with many competing approaches to the problem.

Another, perhaps even more important, point is the qualitatively different business models of the mainframe and x86 worlds. In the first case we are dealing essentially with a single-vendor hardware and software complex supporting a fairly limited range of application software for a rather narrow circle of large customers. In the second, we have a decentralized community of hardware manufacturers, suppliers of basic software, and a huge army of application software developers.

The use of x86 virtualization tools began in the late 1990s on workstations: as the number of client OS versions grew, so did the number of people (software developers, technical support staff, software specialists) who needed to keep copies of several different operating systems on hand at once.

Virtualization for server infrastructure came into use somewhat later, driven primarily by the need to consolidate computing resources. Here two independent directions formed immediately:

  • support for heterogeneous operating environments (including legacy applications). This case is most common within corporate information systems. Technically, the problem is solved by running several virtual machines simultaneously on one computer, each containing its own OS instance. This mode is implemented using two fundamentally different approaches: full virtualization and paravirtualization;
  • support for homogeneous computing environments, which is most characteristic of application hosting providers. Virtual machines can be used here as well, but it is much more efficient to create isolated containers on top of a single OS kernel.

The next stage in the life of x86 virtualization technologies began in 2004-2006 with the start of their mass adoption in corporate systems. Whereas developers had earlier focused mainly on creating virtual environments, the tasks of managing these solutions and integrating them into the overall corporate IT infrastructure now came to the fore. At the same time, demand from personal users grew noticeably (in the 1990s these were developers and testers; now they are end users, both professional and home).

Summing up, the following main customer scenarios for applying virtualization technologies can be distinguished:

  • software development and testing;
  • modeling real systems on research test stands;
  • server consolidation to improve hardware utilization;
  • server consolidation as part of supporting legacy applications;
  • demonstration and study of new software;
  • deployment and updating of application software within existing information systems;
  • end-user work (mostly home users) on PCs with heterogeneous operating environments.

Basic Virtualization Options

As noted earlier, the challenges of developing virtualization technologies largely come down to overcoming the legacy features of the x86 hardware and software architecture, and several basic methods exist for this.

Full (native) virtualization. Unmodified instances of guest operating systems are used, and their execution is supported by a common emulation layer running on top of the host OS, which is an ordinary operating system (Fig. 1). This technology is used, in particular, in VMware Workstation, VMware Server (formerly GSX Server), Parallels Desktop, Parallels Server, Microsoft Virtual PC, Microsoft Virtual Server, and Virtual Iron. The advantages of this approach are the relative simplicity of implementation and the versatility and reliability of the solution; all control functions are taken over by the host OS. Its drawbacks are high additional overhead on hardware resources, no accommodation of guest OS specifics, and less flexibility than desired in the use of hardware.

Paravirtualization. The guest OS kernel is modified to include a new API set through which it can work with the hardware directly without conflicting with other virtual machines (VMs; Fig. 2). There is then no need for a full-fledged host OS: its functions are performed by a special system called a hypervisor. This option is today the most active direction of server virtualization development and is used in VMware ESX Server, Xen (and solutions from other vendors based on this technology), and Microsoft Hyper-V. The advantages of this technology are that no host OS is needed (VMs are installed essentially on "bare metal") and hardware resources are used efficiently. Its drawbacks are the complexity of implementing the approach and the need to create a specialized hypervisor OS.
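The contrast between the two approaches can be sketched in a toy Python model. This is purely illustrative, with invented class and method names; real hypervisors intercept privileged CPU instructions, not method calls. Under full virtualization the unmodified guest attempts a privileged operation and the emulation layer handles the resulting trap, while a paravirtualized guest calls the hypervisor's API directly:

```python
class Hypervisor:
    """Owns the real resource (here, a fake I/O port) on behalf of guests."""
    def __init__(self):
        self.log = []

    def emulate_privileged(self, guest, instruction):
        # Full virtualization: the guest issued a privileged instruction,
        # the hardware trapped, and the hypervisor emulates it transparently.
        self.log.append(f"trap from {guest}: emulating {instruction}")
        return "emulated-result"

    def hypercall(self, guest, request):
        # Paravirtualization: the modified guest kernel calls the
        # hypervisor API directly -- no trap, lower overhead.
        self.log.append(f"hypercall from {guest}: {request}")
        return "hypercall-result"


class UnmodifiedGuest:
    """Guest OS that does not know it is virtualized."""
    def __init__(self, name, hv):
        self.name, self.hv = name, hv

    def do_io(self):
        # Tries to execute a privileged instruction; it must be trapped.
        return self.hv.emulate_privileged(self.name, "OUT 0x3F8")


class ParavirtualGuest:
    """Guest OS whose kernel was modified to use the hypervisor API."""
    def __init__(self, name, hv):
        self.name, self.hv = name, hv

    def do_io(self):
        return self.hv.hypercall(self.name, "write serial")


hv = Hypervisor()
print(UnmodifiedGuest("legacy-os", hv).do_io())   # emulated-result
print(ParavirtualGuest("xen-linux", hv).do_io())  # hypercall-result
```

The trade-off described above is mirrored here: trapping keeps the guest unmodified but adds overhead on every privileged operation, while hypercalls make each transition cheaper at the price of modifying the guest kernel.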

Operating-system-level virtualization. This option uses a single host OS kernel to create independent parallel operating environments (Fig. 3). For the guest software, only its own network and hardware environment is created. This option is used in Virtuozzo (for Linux and Windows), OpenVZ (the free variant of Virtuozzo), and Solaris Containers. Advantages: highly efficient use of hardware resources, low overhead, excellent manageability, and minimized licensing costs. Disadvantage: only homogeneous computing environments can be implemented.

Application virtualization involves strongly isolating application software while managing its interaction with the OS: for every application instance, all of its main components are virtualized, including files (system files among them), the registry, fonts, INI files, COM objects, and services (Fig. 4). The application runs without an installation procedure in the traditional sense and can be launched directly from external media (for example, from a flash drive or a network folder). From the IT department's point of view this approach has obvious advantages: faster deployment and easier management of desktop systems, and minimization not only of conflicts between applications but also of the need to test applications for compatibility. This is precisely the kind of virtualization implemented in the Sun Java Virtual Machine, Microsoft Application Virtualization (formerly SoftGrid), Thinstall (acquired by VMware in early 2008), and Symantec/Altiris.

Choosing a Virtualization Solution

Saying "product A is a software virtualization solution" is not enough to understand its real capabilities. For that, one must look in more detail at the various characteristics of the products on offer.

The first of these concerns support for various operating systems as host and guest systems, as well as provisioning applications in virtual environments. When choosing a virtualization product, the customer must also keep in mind a wide range of technical characteristics: the level of application performance loss caused by the new operating layer, the additional computing resources required by the virtualization mechanism itself, and the range of supported peripherals.

Beyond mechanisms for creating virtual execution environments, management tasks now come to the fore: converting physical environments into virtual ones and back, restoring a system after a failure, moving virtual environments from one computer to another, software deployment and administration, security, and so on.

Finally, the cost of the virtualization infrastructure is important. Bear in mind that the main item in the cost structure here may be not so much the price of the virtualization tools themselves as the opportunity to save on licenses for base operating systems or business applications.

The Main Players in the x86 Virtualization Market

The virtualization tools market began to form less than ten years ago and by now has taken on a fairly definite shape.

Founded in 1998, VMware is one of the pioneers of virtualization technologies for x86 computers and today holds the leading position in this market (by some estimates, its share is 70-80%). Since 2004 it has been a subsidiary of EMC Corporation, but it operates autonomously in the market under its own brand. According to EMC, VMware's staff has grown from 300 to 3,000 people during this time, and sales have doubled every year. According to officially announced figures, the company's annual revenue (from sales of virtualization tools and related services) is now approaching $1.5 billion. These figures reflect the overall growth in market demand for virtualization tools.

Today VMware offers a comprehensive third-generation virtualization platform, VMware Virtual Infrastructure 3, which includes tools both for individual PCs and for the data center. The key component of this software package is the VMware ESX Server hypervisor. Companies can also use the free VMware Server product, on the basis of which pilot projects can be run.

Parallels is the new (since January 2008) name of SWsoft, also a veteran of this technology market. Its key product is Parallels Virtuozzo Containers, an OS-level virtualization solution that allows a set of isolated containers (virtual servers) to run on a single Windows or Linux server. To automate the business processes of hosting providers, the Parallels Plesk Control Panel is offered. In recent years the company has also been actively developing desktop virtualization tools: Parallels Workstation (for Windows and Linux) and Parallels Desktop for Mac (for Mac OS on x86 computers). In 2008 it announced a new product, Parallels Server, which supports a server-side virtual machine mechanism running different operating systems (Windows, Linux, Mac OS).

Microsoft entered the virtualization tools market in 2003 with the acquisition of Connectix, releasing its first product, Virtual PC, for desktop PCs. Since then it has steadily expanded its offerings in this area and has today almost completed the formation of a virtualization platform comprising the following components:

  • Server virtualization. Two different technological approaches are offered here: Microsoft Virtual Server 2005 and the new Hyper-V Server solution (so far available as a beta).
  • PC virtualization. Performed using the freely distributed Microsoft Virtual PC 2007 product.
  • Application virtualization. For these tasks, Microsoft Application Virtualization (formerly called SoftGrid) is offered.
  • Presentation virtualization. Implemented with Microsoft Windows Server Terminal Services; in essence, the long-familiar terminal access mode.
  • Integrated virtual system management. The key role here is assigned to System Center Virtual Machine Manager, released at the end of last year.

Sun Microsystems offers a multi-level line of virtualization tools: traditional OS, resource management, OS-level virtualization, virtual machines, and hard partitioning (domains). This sequence is built on the principle of increasing application isolation (at the cost of decreasing flexibility). All Sun virtualization technologies are implemented within the Solaris operating system. The x64 architecture is supported in hardware throughout, although these technologies were originally tuned to UltraSPARC systems. Other operating systems, including Windows and Linux, can be used inside the virtual machines.

Citrix Systems is a recognized leader in remote application access infrastructure. It seriously strengthened its position in virtualization by buying XenSource, the developer of Xen, one of the leading OS virtualization technologies, for $500 million in 2007. Just before the deal, XenSource had introduced a new version of its flagship product, XenEnterprise, based on the Xen 4 kernel. The acquisition caused some confusion in the IT industry, since Xen is an open project whose technologies underlie commercial products of vendors such as Sun, Red Hat, and Novell. A certain ambiguity about Citrix's future promotion of Xen, including in marketing terms, still remains. The release of the company's first Xen-based product, Citrix XenDesktop (for PC virtualization), is scheduled for the first half of 2008, to be followed by an updated version of XenServer.

In November 2007, Oracle announced its entry into the virtualization market, presenting Oracle VM for virtualizing server applications from this corporation and other manufacturers. The new solution includes an open-source server software component and an integrated browser-based management console for creating and administering pools of virtual servers running on x86 and x86-64 systems. Experts saw in the move Oracle's unwillingness to support users who run its products in other manufacturers' virtual environments. The Oracle VM solution is known to be implemented on the basis of the Xen hypervisor. The uniqueness of Oracle's step is that it appears to be the first case in the history of computer virtualization in which the technology is tailored not to an operating environment but to specific applications.

The Virtualization Market Through IDC's Eyes

The x86 virtualization market is at a stage of rapid development, and its structure has not yet settled. This complicates estimates of its absolute size and comparative analysis of the products presented in it. This thesis is confirmed by the IDC report "Enterprise Virtualization Software: Customer Needs and Strategies", published last November. Of greatest interest in that document is its proposed structure of server virtualization software, in which IDC identifies four main components (Fig. 5).

Virtualization platform. Its foundation is the hypervisor, along with the basic resource management elements and the application programming interface (API). Key characteristics include the number of supported sockets, the number of processors supported by a single virtual machine, the number of guest systems permitted per license, and the range of supported operating systems.

Virtual machine management. Includes tools for managing hosts and virtual servers. Today the vendors' offerings differ most noticeably here, both in the set of functions and in scalability. But IDC is confident that the capabilities of the leading suppliers' tools will quickly converge and that physical and virtual servers will be managed through a single interface.

Virtual machine infrastructure. A broad set of additional tools performing tasks such as software migration, automatic restart, virtual machine load balancing, and so on. According to IDC, it is the capabilities of this software that will decisively influence customers' choice of supplier, and it is at the level of these tools that competition between vendors will unfold.

Virtualization solutions. Sets of products that tie the basic technologies listed above to specific types of applications and business processes.

In its overall analysis of the market situation, IDC distinguishes three camps of participants. The first dividing line runs between those who virtualize on top of the OS (SWsoft and Sun) and those who virtualize below the OS level (VMware, XenSource, Virtual Iron, Red Hat, Microsoft, Novell). The first option yields the most efficient solutions in terms of performance and overhead, but implements only homogeneous computing environments; the second makes it possible to run several operating systems of different types on one computer. Within the second group, IDC draws another border between suppliers of standalone virtualization products (VMware, XenSource, Virtual Iron) and operating system manufacturers that include virtualization tools in the OS (Microsoft, Red Hat, Novell).

From our point of view, IDC's proposed market structuring is not very precise. First, for some reason IDC does not distinguish between two fundamentally different kinds of virtual machines: those using a host OS (VMware, Virtual Iron, Microsoft) and those using a hypervisor (VMware, XenSource, Red Hat, Microsoft, Novell). Second, when speaking of hypervisors, it is useful to separate those who use their own base technologies (VMware, XenSource, Virtual Iron, Microsoft) from those who license others' (Red Hat, Novell). Finally, it should be said that SWsoft and Sun have in their arsenal not only OS-level virtualization technologies but also virtual machine support.

Annotation: Information technology has brought much that is useful and interesting to modern society. Every day, inventive and talented people come up with ever newer applications of computers as effective tools for production, entertainment, and collaboration. A multitude of software and hardware products, technologies, and services allow us to improve the convenience and speed of working with information. It is harder and harder to single out the technologies that are genuinely useful and learn to apply them with maximum benefit. This lecture discusses another incredibly promising and truly effective technology rapidly breaking into the computing world: virtualization, which occupies a key place in the concept of cloud computing.

The purpose of this lecture is to present virtualization technologies, their terminology, varieties, and main advantages; to introduce the principal solutions of the leading IT vendors; and to examine the features of the Microsoft virtualization platform.

Virtualization technologies

According to statistics, the average processor utilization of servers running Windows does not exceed 10%; Unix systems do better, but still average no more than 20%. This low efficiency is explained by the "one application - one server" approach widely used since the early 1990s, whereby a company buys a new server for each new application. In practice this means rapid growth of the server fleet and, as a consequence, rising costs of administration, power, and cooling, as well as the need for additional rooms to house ever more servers and for additional server OS licenses.
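The consolidation arithmetic behind this is simple. A minimal sketch, taking the 10% Windows utilization figure above and assuming a 70% target load for a consolidated host (the function name and the target value are illustrative, not from the text):

```python
import math

def hosts_needed(server_loads, target_utilization=0.70):
    """Number of equally sized physical hosts needed to absorb the given
    per-server CPU loads without exceeding the target utilization.
    Deliberately simplified: ignores memory, I/O and failover headroom."""
    total_load = sum(server_loads)
    return max(1, math.ceil(total_load / target_utilization))

# Ten Windows servers, each at the 10% average load cited above:
print(hosts_needed([0.10] * 10))  # 2 physical hosts instead of ten
```

Even under this crude model, ten under-loaded machines collapse onto two, which is exactly the saving on administration, power, and cooling described above.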

Virtualizing a physical server's resources makes it possible to distribute them flexibly among applications, each of which "sees" only the resources assigned to it and "believes" it has been allocated a separate server; that is, a "one server - multiple applications" approach is implemented without reducing the performance, availability, or security of the server applications. In addition, virtualization solutions make it possible to run different operating systems in separate partitions by emulating their system calls to the server's hardware resources.


Fig. 2.1.

Virtualization rests on one computer's ability to do the work of several by distributing its resources across several environments. Using virtual servers and virtual desktops, several operating systems and many applications can be hosted in a single location, so physical and geographical restrictions cease to matter. Besides saving energy and cutting costs through more efficient use of hardware, a virtual infrastructure provides high resource availability, a more effective management system, improved security, and a better disaster recovery system.

In the broad sense, virtualization means concealing the real implementation of a process or object from the one who uses it. The product of virtualization is something convenient to use that in fact has a more complex, or entirely different, structure than the one perceived when working with the object. In other words, representation is separated from implementation. Virtualization is designed to abstract software away from hardware.

In computing, "virtualization" usually means the abstraction of computing resources: presenting the user with a system that "encapsulates" (hides within itself) its own implementation. Simply put, the user works with a convenient representation of the object, and it does not matter how the object is actually arranged.

The possibility of running several virtual machines on one physical machine now draws great interest among computing specialists, not only because it increases the flexibility of the IT infrastructure, but because virtualization genuinely saves money.

The development of virtualization technologies goes back more than forty years. IBM was the first to think of creating virtual environments for different user tasks, at that time on mainframes. In the 1960s virtualization was of purely scientific interest: an original solution for isolating computer systems within a single physical computer. After the advent of the personal computer, interest in virtualization weakened somewhat owing to the rapid development of operating systems, whose requirements were well matched to the hardware of the time. However, the rapid growth in hardware capacity at the end of the 1990s made the IT community recall virtualization technologies for software platforms once again.

In 1999 VMware introduced virtualization for x86-based systems as an effective means of transforming them into a shared general-purpose hardware infrastructure offering full isolation, mobility, and a wide choice of operating systems for application environments. VMware was among the first to bet seriously and exclusively on virtualization, and time has shown that bet to be fully justified. Today VMware offers a comprehensive fourth-generation virtualization platform, VMware vSphere 4, which includes tools both for the individual PC and for the data center. The key component of this software package is the VMware ESX Server hypervisor. Later, companies such as Parallels (formerly SWsoft), Oracle (Sun Microsystems), and Citrix Systems (XenSource) joined the battle for a place in this fashionable area of information technology.

Microsoft entered the virtualization tools market in 2003 with the acquisition of Connectix, releasing its first product, Virtual PC, for desktop PCs. Since then it has steadily expanded its range of offerings and has today almost completed the formation of a virtualization platform that includes solutions such as Windows Server 2008 R2 with the Hyper-V component, Microsoft Application Virtualization, Virtual Desktop Infrastructure (VDI), Remote Desktop Services, and System Center Virtual Machine Manager.

Today virtualization technology suppliers offer reliable, easy-to-manage platforms, and the market for these technologies is experiencing a real boom. According to leading experts, virtualization is now among the three most promising computer technologies. Many experts predict that by 2015 about half of all computer systems will be virtual.

Interest in virtualization technologies is currently extraordinary. The computing power of today's processors is growing rapidly, and the question is not so much where to spend that power. The current fashion for dual-core and multi-core systems, which has already reached personal computers (laptops and desktops), is perfectly suited to realizing the rich potential of OS and application virtualization, raising the convenience of computer use to a new level. Virtualization technology is becoming one of the key components (marketing included) of the newest and future processors from Intel and AMD, of Microsoft operating systems, and of products from a number of other companies.

Advantages of virtualization

The main advantages of virtualization technologies are as follows:

  1. Efficient use of computing resources. Instead of three, or even ten, servers loaded at 5-20%, one server loaded at 50-70% can be used. Among other things, this saves electricity and substantially reduces capital outlay: one high-end server is bought to perform the functions of five to ten machines. Virtualization achieves markedly more efficient resource use because it pools standard infrastructure resources and overcomes the limitations of the outdated "one application per server" model.
  2. Reduced infrastructure costs. Virtualization cuts the number of servers and associated IT equipment in the data center. As a result, the requirements for maintenance, power, and cooling shrink, and far less money is spent on them.
  3. Lower software costs. Some software manufacturers have introduced separate licensing schemes specifically for virtual environments. For example, buying one license for Microsoft Windows Server 2008 Enterprise gives you the right to use it simultaneously on one physical server and four virtual instances (within that server), while Windows Server 2008 Datacenter is licensed only by processor count and may be used simultaneously on an unlimited number of virtual servers.
  4. Greater flexibility and responsiveness. Virtualization offers a new method of managing IT infrastructure and helps administrators spend less time on repetitive tasks such as provisioning, configuration, monitoring, and maintenance. Many system administrators have had the unpleasant experience of a server crash: you cannot simply pull out the hard disk, move it to another server, and start everything up as before, and reinstallation means hunting for drivers, configuring, and launching, all of which takes time and resources. With a virtual server, instant startup on any hardware is possible, and if no similar server is available, a ready-made virtual machine with an installed and configured server can be downloaded from libraries maintained by the developers of hypervisors (virtualization software).
  5. Incompatible applications can run on one computer. With virtualization, a single server can host Linux and Windows servers, gateways, databases, and other applications that could not coexist on one non-virtualized system.
  6. Higher application availability and business continuity. Thanks to reliable backup and migration of entire virtual environments without service interruption, planned downtime can be reduced and rapid recovery in critical situations ensured. The "fall" of one virtual server does not bring down the other virtual servers. Moreover, if one physical server fails, a backup server can take over automatically, imperceptibly to users and without a reboot, thereby ensuring business continuity.
  7. Easy archiving. Since a virtual machine's hard disk is usually stored as a file of a particular format on some physical medium, virtualization makes it possible to archive and back up the entire virtual machine simply by copying that file to backup media. Another wonderful feature is the ability to bring a server up from the archive: you can restore an archived server without destroying the current one and inspect its state as of some past period.
  8. Better infrastructure manageability. Centralized management of the virtual infrastructure reduces server administration time and provides load balancing and live migration of virtual machines.
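The licensing arithmetic from point 3 can be sketched as follows. This is a simplified illustration assuming licenses can be stacked on one host; the helper function is hypothetical, and real Microsoft licensing terms carry many additional conditions:

```python
import math

def enterprise_licenses_needed(vm_count, vms_per_license=4):
    """Windows Server 2008 Enterprise licenses needed for one physical
    host: each license covers the host plus up to 4 VM instances on it
    (per the terms described above). Assumes licenses can be stacked."""
    return math.ceil(vm_count / vms_per_license)

print(enterprise_licenses_needed(4))   # 1 license covers 4 VMs
print(enterprise_licenses_needed(10))  # 3 licenses for 10 VMs
```

Past a certain VM density, this stacking is why the per-processor Datacenter edition with unlimited virtual instances becomes the cheaper option.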

We will call a virtual machine a software or hardware environment that hides the real implementation of a process or object from its visible representation.

A virtual machine is a fully isolated software container that runs its own OS and applications just like a physical computer. It behaves exactly like a physical computer and contains its own virtual (i.e., software-implemented) RAM, hard disk, and network adapter.

An operating system cannot distinguish a virtual machine from a physical one, and neither can applications or other computers on the network. Even the virtual machine itself considers itself a "real" computer. Nevertheless, virtual machines consist solely of software components and include no hardware, which gives them a number of unique advantages over physical equipment.


Fig. 2.2.

Consider the main features of virtual machines in more detail:

  1. Compatibility. Virtual machines are usually compatible with all standard computers. Like a physical computer, the virtual machine runs running its own guest operating system and performs its own applications. It also contains all components, standard for physical computer (motherboard, video card, network controller, etc.). Therefore, virtual machines are fully compatible with all standard operating systems, applications and device drivers. The virtual machine can be used to perform any software suitable for the appropriate physical computer.
  2. Isolation. Virtual machines are completely isolated from each other, as if they were separate physical computers. They can share the physical resources of a single computer while remaining fully isolated from one another. For example, if four virtual machines run on one physical server and one of them fails, the availability of the other three is unaffected. Isolation is an important reason why applications running in a virtual environment enjoy much higher availability and security than applications running in a standard, non-virtualized system.
  3. Encapsulation. Virtual machines completely encapsulate their computing environment. A virtual machine is a software container that bundles, or "encapsulates," a complete set of virtual hardware resources, together with the OS and all its applications, in a single software package. Thanks to encapsulation, virtual machines are remarkably mobile and easy to manage. For example, a virtual machine can be moved or copied from one location to another like any other software file, and it can be stored on any standard data medium, from a compact USB flash drive to a corporate storage network.
  4. Hardware independence. Virtual machines are fully independent of the underlying physical hardware on which they run. For example, a virtual machine's virtual components (CPU, network card, SCSI controller) can be configured with settings that do not coincide at all with the characteristics of the underlying physical hardware. Virtual machines on the same physical server can even run different operating systems (Windows, Linux, etc.). Combined with the encapsulation and compatibility properties, hardware independence makes it possible to move virtual machines freely from one x86 computer to another without changing device drivers, the OS, or applications. It also makes it possible to run a mix of completely different OSes and applications on one physical computer.

Consider the main varieties of virtualization:

  • server virtualization (full virtualization and paravirtualization),
  • virtualization at the operating system level,
  • application virtualization,
  • presentation (desktop) virtualization.

Virtual Environment Concept

A new direction of virtualization that gives a unified, holistic picture of the entire network infrastructure using an aggregation technique.

Types of virtualization

Virtualization is a general term covering the abstraction of resources across many aspects of computing. The types of virtualization are described below.

Software virtualization

Dynamic translation

With dynamic (binary) translation, problematic guest OS instructions are intercepted by the hypervisor. After these instructions are replaced with safe ones, control is returned to the guest OS.
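The intercept-and-replace step can be illustrated with a toy sketch. The "instructions" and substitute names here are invented for the example; a real binary translator works on machine code, not strings:

```python
# A toy sketch of dynamic (binary) translation: privileged guest
# "instructions" are intercepted and replaced with safe emulations
# before the block is handed back to the guest.

PRIVILEGED = {"cli", "hlt", "out"}  # instructions the guest may not run directly

def translate_block(block: list[str]) -> list[str]:
    """Rewrite one basic block: privileged instructions are substituted
    with calls into the hypervisor; safe instructions pass through."""
    translated = []
    for insn in block:
        if insn in PRIVILEGED:
            translated.append(f"vmm_emulate_{insn}")  # safe substitute
        else:
            translated.append(insn)                   # runs natively
    return translated
```

This is why dynamic translation works with unmodified guest OSes: the guest never knows its privileged instructions were rewritten.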

Paravirtualization

Paravirtualization is a virtualization technique in which guest operating systems are prepared for execution in a virtualized environment by slightly modifying their kernel. The operating system interacts with a hypervisor program that provides it with a guest API, instead of directly using resources such as the page table.

Paravirtualization achieves higher performance than dynamic translation.

Paravirtualization is applicable only if the guest OS has open source code that may be modified under its license, or if the hypervisor and the guest OS were developed by the same vendor with paravirtualization of the guest OS in mind (although a hypervisor may itself run on top of a lower-level hypervisor, as with paravirtualization of the hypervisor itself).
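The "guest asks the hypervisor instead of touching the hardware" idea can be sketched as follows. All class and method names are illustrative; real paravirtual interfaces (e.g., Xen hypercalls) are far richer:

```python
# Minimal sketch of paravirtualization: the guest kernel is modified
# to request page-table updates through a hypercall API instead of
# writing the page tables itself.

class Hypervisor:
    def __init__(self):
        self.page_tables = {}  # guest_id -> {virtual page: physical frame}

    def hypercall_map_page(self, guest_id: int, vpage: int, pframe: int):
        """Validate and apply a page mapping on behalf of a guest."""
        if pframe < 0:
            raise ValueError("invalid physical frame")
        self.page_tables.setdefault(guest_id, {})[vpage] = pframe

class ParavirtGuestKernel:
    """A 'slightly modified' guest kernel: it calls the hypervisor
    rather than touching the hardware page tables directly."""
    def __init__(self, guest_id: int, hypervisor: Hypervisor):
        self.guest_id = guest_id
        self.hv = hypervisor

    def map_page(self, vpage: int, pframe: int):
        self.hv.hypercall_map_page(self.guest_id, vpage, pframe)
```

Because the guest cooperates instead of being trapped and translated, there is no interception overhead, which is where the performance advantage over dynamic translation comes from.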

The term first arose in the Denali project.

Built-in virtualization

Benefits:

  • Sharing of resources by both OSes (directories, printers, etc.).
  • A convenient interface for applications from different systems (overlapping application windows, the same window minimization as in the host system).
  • With fine tuning for the hardware platform, performance differs little from that of the native OS. Fast switching between systems (less than 1 second).
  • A simple procedure for updating the guest OS.
  • Two-way virtualization (applications of one system run in the other, and vice versa).

Implementation:

Hardware virtualization

Benefits:

  • Development of virtualization software platforms is simplified by the provision of hardware management interfaces and support for virtual guest systems. This reduces the complexity and development time of virtualization systems.
  • Virtualization platforms can run faster: virtual guest systems are managed directly by a small intermediate software layer, the hypervisor, which yields a gain in speed.
  • Security improves: it becomes possible to switch at the hardware level between several running, independent virtualization platforms. Each virtual machine can work independently in its own hardware space, fully isolated from the others. This eliminates the performance loss of maintaining a host platform and increases security.
  • The guest system is no longer tied to the host platform architecture or to the implementation of the virtualization platform. Hardware virtualization technology makes it possible to launch 64-bit guest systems on 32-bit host systems (with 32-bit host virtualization environments).

Examples of application:

  • Test laboratories and training: virtual machines are convenient for testing applications that affect operating system settings, such as installers. Because virtual machines are simple to deploy, they are also often used for teaching new products and technologies.
  • Distribution of pre-installed software: many software developers create ready-made virtual machines with pre-installed products and provide them on a free or commercial basis. Such services are provided by VMware VMTN and Parallels PTN.

Server virtualization

  1. placing several logical servers within one physical server (consolidation);
  2. combining several physical servers into one logical server to solve a specific task. Examples: Oracle Real Application Clusters, grid technologies, high-performance clusters.

Example products:

  • SVISTA
  • twoOStwo
  • Red Hat Enterprise Virtualization for Servers
  • PowerVM

In addition, server virtualization simplifies restoring failed systems on any available computer, regardless of its specific configuration.

Virtualization of workstations

Virtualization of resources

  • Partitioning. Resource virtualization can be thought of as dividing one physical server into several parts, each of which its owner sees as a separate server. This need not be virtual machine technology; it can also be implemented at the OS kernel level.

In systems with a type 2 hypervisor, both OSes (the guest and the host under the hypervisor) consume physical resources and require separate licensing. Virtual servers operating at the OS kernel level lose almost no speed, which makes it possible to run hundreds of virtual servers, requiring no additional licenses, on one physical server.

Dividing shared disk space or network bandwidth into a certain number of smaller components of the same type makes the resources easier to use.

An example implementation of resource partitioning is the OpenSolaris Crossbow project, which makes it possible to create several virtual network interfaces on top of a single physical one.

  • Aggregation, distribution, or addition of many resources into larger resources or resource pools. For example, symmetric multiprocessor systems combine many processors; RAID arrays and disk managers combine many disks into one large logical disk; RAID and network equipment combine several channels so that they appear as a single broadband channel. At a meta level, computer clusters do all of the above. Sometimes this also includes network file systems abstracted from the data stores on which they are built, for example VMware VMFS, Solaris/OpenSolaris ZFS, and NetApp WAFL.
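The aggregation idea behind RAID and disk managers can be shown with a toy striped "logical disk". This is a pure illustration of RAID 0-style address mapping, not a real storage driver; all names are invented:

```python
# Sketch of resource aggregation: several small "disks" are presented
# as one large logical disk by striping addresses across them.

class StripedDisk:
    def __init__(self, disks: list[bytearray], stripe: int = 4):
        self.disks = disks    # the small physical resources being combined
        self.stripe = stripe  # bytes per stripe unit

    def _locate(self, addr: int):
        """Map a logical address to (physical disk, local offset)."""
        stripe_no, offset = divmod(addr, self.stripe)
        disk_no = stripe_no % len(self.disks)
        local = (stripe_no // len(self.disks)) * self.stripe + offset
        return self.disks[disk_no], local

    def write(self, addr: int, value: int):
        disk, local = self._locate(addr)
        disk[local] = value

    def read(self, addr: int) -> int:
        disk, local = self._locate(addr)
        return disk[local]
```

The user of the logical disk sees one flat address range; which physical disk actually holds each byte is hidden, which is exactly the abstraction the paragraph above describes.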

Virtualization of applications

Advantages:

  • isolated application execution: no incompatibilities or conflicts;
  • a pristine state every time: the registry is not cluttered and no configuration files are left behind, which matters for the server;
  • lower resource requirements compared with emulating an entire OS.


Virtualization in computing is the process of presenting a set of computing resources, or their logical combination, in a way that gives some advantage over the original configuration. It is a new virtual view of the resources, not constrained by the implementation, geographic position, or physical configuration of the component parts. Virtualized resources typically include computing power and data storage.

"Over the past few years, the server virtualization market has been very many. In many organizations, more than 75% of the virtual servers are talking about a high level of saturation, "said Michael Warrilow to research director of research in Gartner.

According to the analysts, attitudes toward virtualization now differ by organization size more than ever. The popularity of virtualization among companies with larger IT budgets remained level in 2014-2015; such companies continue to use virtualization actively, and saturation in this segment is high. Among organizations with smaller IT budgets, the popularity of virtualization is expected to decline over the next two years (through the end of 2017). This trend is already being observed.

"Physicalization"

According to Gartner's observations, companies are increasingly resorting to so-called "physicalization," i.e., running servers without virtualization software. It is expected that by the end of 2017 more than 20% of companies will have less than a third of the operating systems on their x86-architecture servers virtualized. For comparison, in 2015 there were half as many such organizations.

Analysts note that companies have their reasons for abandoning virtualization. Today customers have new options: they can use software-defined infrastructure or hyperconverged integrated systems. The appearance of these options forces virtualization technology providers to act more energetically: to expand the out-of-the-box functionality of their solutions, simplify interaction with their products, and shorten customers' payback periods.

Hyperconverged integrated systems

In early May 2016, Gartner published a forecast for hyperconverged integrated systems. According to the analysts, in 2016 this segment will grow by 79% compared with 2015, to almost $2 billion, and will reach the mainstream stage within five years.

In the coming years, the hyperconverged integrated systems segment will show the highest growth rates of any integrated-systems category. By the end of 2019 it will grow to about $5 billion and will take 24% of the integrated systems market, Gartner predicts, noting that the growth of this direction will cannibalize other market segments.

Among hyperconverged integrated systems (HCIS), analysts include hardware-software platforms that combine software-defined compute nodes, a software-defined storage system, standard related equipment, and a common management console.

Types of virtualization

Virtualization is a general term covering the abstraction of resources across many aspects of computing. Some of the most characteristic examples of virtualization are given below.

Paravirtualization

Paravirtualization is a virtualization technique in which guest operating systems are prepared for execution in a virtualized environment by slightly modifying their kernel. The operating system interacts with a hypervisor program that provides it with a guest API, instead of directly using resources such as the page table. The code related to virtualization is localized directly in the operating system. Paravirtualization requires that the guest operating system be modified for the hypervisor, which is a drawback of the method, since such modification is possible only if the guest OS has open source code that may be modified under its license. At the same time, paravirtualization offers performance close to that of a real non-virtualized system, as well as the ability to support different operating systems simultaneously, as in full virtualization.

Infrastructure virtualization

By this term we mean creating an IT infrastructure that does not depend on the hardware: for example, when the service you need runs on a guest virtual machine and, in principle, it does not particularly matter which physical server it resides on.

Virtualization of servers, desktops, applications: there are many methods of building such an independent infrastructure. In the server case, several virtual, or "guest," machines run on a single physical (host) server by means of special software called a hypervisor.

Modern virtualization systems, in particular VMware and Citrix XenServer, for the most part work on the bare-metal principle, that is, they are installed directly on "bare iron."

Example

A virtual system built not on a bare-metal hypervisor but on the combination of the Linux CentOS 5.2 operating system and VMware Server, on an Intel SR1500PAL server platform: two Intel Xeon 3.2/1/800 processors, 4 GB RAM, two 36 GB HDDs in RAID1, and four 146 GB HDDs in RAID10 with a total capacity of 292 GB. Four virtual machines are placed on the host machine:

  • a Postfix mail server based on the FreeBSD (UNIX) operating system; the POP3 protocol is used to deliver mail to end users;
  • a Squid proxy server based on the same FreeBSD system;
  • a dedicated domain controller, DNS, and DHCP server based on Windows 2003 Server Standard Edition;
  • a Windows XP management workstation for official purposes.

Server virtualization

  • A virtual machine is an environment that appears to the "guest" operating system as hardware. In reality, however, it is a software environment simulated by the host system's software. This simulation must be reliable enough that the guest system's drivers can work stably. With paravirtualization, the virtual machine does not simulate the hardware but instead offers a special API for interacting with the guest OS.

Subject: Getting acquainted with virtual machines. Methods of installing UNIX-like and Windows-like OSes on a virtual machine.

Purpose: to study software products for virtualization, learn how to install various OSes on a virtual machine, and acquire the skills to configure them.

Theoretical information

Virtualization is the isolation of computing processes and resources from each other. It is a new virtual view of the resources of the component parts, not constrained by the implementation, physical configuration, or geographic position. Virtualized resources typically include computing power and data storage. In a broad sense, virtualization means concealing the real implementation of a process or object behind its presentation to the one who uses it. In computer technology, the term "virtualization" usually refers to the abstraction of computing resources and the provision to the user of a system that "encapsulates" (hides within itself) its own implementation. Simply put, the user works with a convenient representation of the object, and it does not matter how the object is arranged in reality.

The term "virtualization" itself appeared in computer technology in the 1960s, together with the term "virtual machine," meaning virtualization of a hardware-software platform.

Types of virtualization

The concept of virtualization can be divided into two fundamentally different categories:

    virtualization platforms

The product of this type of virtualization is virtual machines: software abstractions running on the platform of real hardware-software systems.

    virtualization of resources

This type of virtualization aims to combine or simplify the presentation of hardware resources to the user and to obtain certain user abstractions of equipment: namespaces, networks, and so on.

In the course of the laboratory work we will get acquainted with platform virtualization for running guest OSes.

Platform virtualization means creating software systems based on existing hardware-software complexes, either dependent on or independent of them. The system that provides the hardware resources and software is called the host, and the simulated system is called the guest. For guest systems to function stably on the host platform, the host's software and hardware must be reliable enough and must provide the necessary set of interfaces for access to its resources.

Virtual machine (Virtual Machine):

a software and/or hardware system that emulates the hardware of some platform (the target, or guest, platform) and executes programs for the target platform on the host platform;

or one that virtualizes some platform and creates environments on it that isolate programs, and even operating systems, from each other (a sandbox).

There are several types of platform virtualization, each with its own approach to the concept of "virtualization."

Full emulation (simulation)

With this type of virtualization, the virtual machine fully virtualizes all the hardware while keeping the guest operating system unchanged. This approach makes it possible to emulate various hardware architectures. Its main drawback is that emulating the hardware very significantly slows down the guest system, which makes working with it quite inconvenient.
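Why full emulation is flexible but slow becomes obvious from a toy sketch: every single guest instruction is interpreted in host software. The instruction set below is invented for the example; a real emulator (e.g., a full-system QEMU in TCG mode) does the same thing at machine-code level:

```python
# A toy illustration of full emulation: every instruction of the
# "guest" architecture is interpreted in software, one by one.

def emulate(program: list[tuple], registers: dict) -> dict:
    """Interpret a tiny guest program; each tuple is (op, dst, value)."""
    pc = 0                             # software-simulated program counter
    while pc < len(program):
        op, dst, value = program[pc]
        if op == "load":               # load an immediate into a register
            registers[dst] = value
        elif op == "add":              # add an immediate to a register
            registers[dst] += value
        else:
            raise ValueError(f"unknown guest instruction: {op}")
        pc += 1
    return registers

regs = emulate([("load", "r0", 5), ("add", "r0", 3)], {"r0": 0})
# regs["r0"] is now 8
```

Each guest instruction costs many host instructions (fetch, decode, dispatch, execute), which is exactly the overhead the paragraph above describes; in exchange, nothing prevents the emulated architecture from being entirely different from the host's.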

Partial Emulation (Native Virtualization)

In this case, the virtual machine virtualizes only the amount of hardware needed for the guest to run in isolation. This approach allows running only guest operating systems developed for the same architecture as the host. In this way, several instances of guest systems can run simultaneously. This type of virtualization significantly increases guest performance compared with full emulation and is widely used. Additionally, to raise performance, virtualization platforms using this approach employ a special "layer" between the guest operating system and the hardware (a hypervisor) that lets the guest system access hardware resources directly. The hypervisor, also called a virtual machine monitor, is one of the key concepts in the world of virtualization.

Examples of native virtualization products: VMware products (Workstation, Server, Player), Microsoft Virtual PC, VirtualBox, Parallels Desktop and others.

Partial virtualization, also known as "address space virtualization"

With this approach, the virtual machine simulates several instances of the hardware environment (but not all of it), in particular address spaces. This type of virtualization makes it possible to share resources and isolate processes, but not to run separate instances of guest operating systems. Strictly speaking, with this form of virtualization the user does not create virtual machines; rather, individual processes are isolated at the operating system level.
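The address-space idea can be sketched with a toy manager: each process sees its own addresses, while the system maps them onto shared physical memory. All names are illustrative; real MMUs do this in hardware with page tables:

```python
# Sketch of address-space virtualization: each process gets a private
# virtual-to-physical mapping over one shared physical memory, which
# isolates processes without creating full virtual machines.

class AddressSpaceManager:
    def __init__(self, physical_size: int):
        self.memory = bytearray(physical_size)  # shared physical memory
        self.maps = {}                          # pid -> {virtual addr: physical addr}
        self.next_free = 0

    def alloc(self, pid: int, vaddr: int):
        """Back a process-private virtual address with a physical cell."""
        self.maps.setdefault(pid, {})[vaddr] = self.next_free
        self.next_free += 1

    def write(self, pid: int, vaddr: int, value: int):
        self.memory[self.maps[pid][vaddr]] = value

    def read(self, pid: int, vaddr: int) -> int:
        return self.memory[self.maps[pid][vaddr]]
```

Two processes may use the same virtual address and still never see each other's data, which is the isolation-without-VMs point made above.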

Paravirtualization

With paravirtualization there is no need to simulate the hardware; instead (or in addition), a special programming interface (API) is used to interact with the guest operating system.

Operating-system-level virtualization

The essence of this type of virtualization is virtualizing a physical server at the operating system level in order to create several protected virtualized servers on one physical machine. Here the guest system shares one host operating system kernel with the other guest systems. A virtual machine in this case is an environment for applications that run in isolation. This type of virtualization is used in hosting systems, when several virtual client servers are needed within one kernel instance.
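The shared-kernel arrangement can be sketched with a toy model: one "kernel" object serves several containers, each with its own private filesystem view. All names are illustrative; real systems implement this with mechanisms such as chroot, jails, or namespaces:

```python
# Sketch of OS-level virtualization: one shared kernel, several
# isolated containers, each with a private filesystem view.

class Kernel:
    def __init__(self):
        self.containers = {}  # container name -> private filesystem dict

    def create_container(self, name: str):
        """Each container starts with its own private view."""
        self.containers[name] = {"/etc/hostname": name}

    def write_file(self, container: str, path: str, data: str):
        self.containers[container][path] = data  # lands only in that container

    def read_file(self, container: str, path: str) -> str:
        return self.containers[container][path]
```

There is exactly one kernel instance, which is why such virtual servers lose almost no speed, but also why every guest must be compatible with that one kernel.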

Application-level virtualization

This type of virtualization is unlike all the others: whereas in the previous cases virtual environments or virtual machines are created to isolate applications, here the application itself is placed in a container together with everything it needs to operate: registry files, configuration files, user and system objects. The result is an application that does not require installation on a compatible platform. When such an application is transferred to another machine and launched, the virtual environment created for the program prevents conflicts between it and the operating system, as well as with other applications. This method of virtualization resembles the behavior of interpreters of various programming languages (not for nothing does the interpreter, the Java Virtual Machine (JVM), also fall into this category).
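The "application bundled with its own registry and configuration" idea can be sketched as follows. The class, the demo application, and the key names are all invented for illustration; real application virtualization products redirect actual registry and file-system calls:

```python
import copy

# Sketch of application-level virtualization: the application runs
# against a private copy of its "registry" and configuration, so its
# writes never touch or conflict with the host's settings.

class AppContainer:
    def __init__(self, name: str, registry: dict, config: dict):
        self.name = name
        self.registry = copy.deepcopy(registry)  # bundled, not shared
        self.config = copy.deepcopy(config)

    def run(self, app):
        """Execute the application against the container's private state."""
        return app(self.registry, self.config)

def demo_app(registry, config):
    registry["last_run"] = "ok"      # this write lands in the container only
    return config["greeting"]
```

Moving the application to another machine then amounts to moving the container, since everything the program needs travels with it.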

Brief certificate of virtual machines:

Oracle VirtualBox is a cross-platform, free (GNU GPL) virtualization software package for Microsoft Windows, Linux, FreeBSD, Mac OS X, Solaris/OpenSolaris, ReactOS, DOS, and other operating systems. Both 32-bit and 64-bit OS versions are supported.

VMware Workstation allows you to create and run several virtual machines (x86 architecture) simultaneously, each with its own guest operating system. Both 32-bit and 64-bit OS versions are supported.

VMware Player is a free (for personal, non-commercial use) software product designed to create (starting with version 3.0) and run ready-made virtual machines (created in VMware Workstation or VMware Server). It is a free solution with limited functionality compared to VMware Workstation.

Microsoft Virtual PC is a virtualization software package for the Windows operating system.