Clouds: Legends and Myths. Cloud Computing Security Threats and How to Protect Against Them

When Eric Schmidt, now the head of Google, first used the term "cloud" to refer to a distributed computing system on the web, he hardly suspected it was one of those words that often appear in legends. In almost all the myths of the peoples of the world, divine beings live very close to the sky, on the clouds. As a result, the term "cloud computing" is very popular among marketers, since it leaves room for creativity. We will also try to put these myths into words and understand how organically they combine with IT.

Death of Merlin

One of the characters in the cycle of legends about King Arthur and his Round Table is the magician and wizard Merlin, who helped Arthur in his reign. It is telling that Merlin ended up imprisoned in the clouds. Wanting to boast to a young sorceress and show off his magical power, he built a castle of clouds and invited her to examine it. The sorceress, however, turned out to be cunning and imprisoned the magician in his own cloud castle. No one saw Merlin after that, so it is believed that he died somewhere there, in the cloud castle he himself had built.

Now the "magicians of IT" have also built a whole mythology around distributed computing, so in order not to end up imprisoned in these castles, you should first figure out what these clouds actually are, that is, separate the marketing from the substance.

Initially, there was only one cloud: it was with this symbol that the Internet was traditionally drawn. This cloud stood for the collection of all computers connected by the IP protocol, each with its own IP address. Over time, server farms began to be set aside within the Internet, installed at providers, and it was on these that web projects were based. At the same time, to handle high load and ensure fault tolerance, the largest web systems became multi-tier and distributed.

In a typical such system, the following layers could be distinguished: a reverse proxy, which also acts as a load balancer and SSL termination point; the web server itself; then an application server, a DBMS, and a storage system. At each layer there could be several elements performing the same function, so it was not always clear exactly which components processed a given user request. And when something is not clear, that is the cloud. So people began to say that user requests are executed somewhere in a "cloud" of a large number of servers. This is how the term "cloud computing" came into being.

Although cloud computing was initially associated with public web projects such as portals, with the development of distributed fault-tolerant web systems it began to be used for internal corporate tasks as well. This was the time of the boom in corporate portals based on the web technologies that had been honed in public systems. At the same time, corporate systems began to be consolidated into data centers that were easier and cheaper to maintain.

However, it would be inefficient to allocate a separate server to each element of the cloud, since not all elements are loaded equally, so the virtualization industry developed in parallel. In public clouds it proved quite popular, since it allowed access rights to be differentiated and provided quick transfer of an element of a distributed system to another physical host. Without virtualization, cloud computing would be less dynamic and scalable, which is why clouds are now typically made up of virtual machines.

Cloud computing is mainly associated with renting applications, and three types of such services are defined: IaaS (infrastructure as a service), PaaS (platform as a service), and SaaS (software as a service). Sometimes security as a service is also lumped under SaaS, but in order not to confuse cloud security services with software rental, it is better to call it ISaaC, Information Security as a Cloud. Such services are also beginning to be offered. However, do not confuse application outsourcing with cloud computing, as clouds can be private, public, and hybrid. Each of these types of cloud has its own peculiarities when organizing a security system.

The three steps of Vishnu

The god Vishnu in Hindu mythology is known for having conquered the space for human life with three steps: the first was made on earth, the second in the clouds, and the third in the highest abode. According to the Rig Veda, it was by this action that Vishnu conquered all these spaces for people.

Modern IT is also taking a similar "second step," from the ground to the clouds. However, in order not to fall from these clouds back to the ground, it is worth taking care of security. The first part analyzed the structure of the cloud in such detail precisely in order to understand what threats cloud computing faces. From the above, the following classes of threats can be distinguished:

    Traditional attacks on software... These exploit vulnerabilities in network protocols, operating systems, software components, and so on. They are traditional threats, and protecting against them takes no more than an antivirus, a firewall, an IPS, and the other familiar components. It is only important that these protections be adapted to the cloud infrastructure and work effectively in a virtualized environment.

    Functional attacks on cloud elements... This type of attack stems from the layered nature of the cloud and the general security principle that a system's overall protection equals that of its weakest link. Thus a successful DoS attack on the reverse proxy installed in front of the cloud will block access to the entire cloud, even though all communications inside the cloud continue to work without interference. Similarly, an SQL injection passed through the application server gives access to system data regardless of the access rules in the data storage layer. To protect against functional attacks, each cloud layer needs its own specific protection: for the proxy, protection against DoS attacks; for the web server, page integrity control; for the application server, a web application firewall; for the DBMS layer, protection against SQL injection; for the storage system, backup and access control. Each of these defense mechanisms already exists separately, but they have not been brought together for comprehensive protection of the cloud, so the task of integrating them into a single system must be solved while the cloud is being built.
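
As a small illustration of the DBMS-layer defense just mentioned, parameterized queries are the standard way to neutralize SQL injection at the application tier. A minimal sketch in Python with the standard sqlite3 module (the table and its fields are invented for the example):

```python
import sqlite3

# In-memory database with a sample users table (names are illustrative).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (login TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def find_user(login: str):
    # UNSAFE variant: string concatenation lets "' OR '1'='1" match every row:
    #   conn.execute("SELECT login FROM users WHERE login = '" + login + "'")
    # SAFE variant: the placeholder passes the value as data, never as SQL.
    cur = conn.execute("SELECT login FROM users WHERE login = ?", (login,))
    return cur.fetchall()

print(find_user("alice"))        # [('alice',)]
print(find_user("' OR '1'='1"))  # [] - the injection payload matches nothing
```

The same placeholder discipline applies to any DBMS driver; a web application firewall in front of the application server is then a second line of defense, not the only one.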

    Client attacks... This type of attack is well known in the web environment, but it is also relevant for the cloud, since clients usually connect to the cloud through a browser. It includes attacks such as Cross Site Scripting (XSS), web session hijacking, password theft, man-in-the-middle, and others. Traditionally, the defense against these attacks has been strong authentication and encrypted communication with mutual authentication, but not all cloud creators can afford such expensive and usually not very convenient means of protection. Therefore, this branch of information security still has unsolved problems and room for new means of protection.
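
One client-side defense that any cloud front end can afford is output escaping against XSS. A minimal sketch in Python using the standard html module (the render_comment helper is invented for the example):

```python
import html

def render_comment(user_input: str) -> str:
    # Escaping turns markup characters into HTML entities, so a script tag
    # submitted by a client is displayed as text instead of executing.
    return "<p>" + html.escape(user_input) + "</p>"

print(render_comment("hello"))
# <p>hello</p>
print(render_comment("<script>alert('xss')</script>"))
# <p>&lt;script&gt;alert(&#x27;xss&#x27;)&lt;/script&gt;</p>
```

Real web frameworks apply such escaping automatically in their template engines; the point is that untrusted input must never reach the page unescaped.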

    Virtualization threats... Since the platform for cloud components has traditionally been a virtualized environment, attacks on the virtualization system threaten the entire cloud as a whole. This type of threat is unique to cloud computing, so we will take a closer look at it below. Solutions for some virtualization threats are only now beginning to appear, and this industry is quite new, so the existing solutions are still immature. It is quite possible that the information security market will soon develop means of protection against this type of threat.

    Comprehensive cloud threats... Controlling and managing clouds is also a security issue: how do you ensure that all cloud resources are accounted for, that there are no uncontrolled virtual machines, that no unnecessary business processes are running, and that the mutual configuration of the cloud's layers and elements is not violated? This type of threat concerns the manageability of the cloud as a unified information system and the detection of abuse or other disruptions in its operation, which can lead to unnecessary expenses for keeping the information system healthy. For example, if a cloud can detect a virus in a submitted file, how do you prevent the theft of such detectors? This type of threat is the most high-level, and I suspect there is no universal means of protection against it: for each cloud, its overall protection must be built individually. A general risk management model can help here, but it still needs to be correctly applied to cloud infrastructures.

The first two types of threats have already been studied sufficiently, and defenses have been developed for them, but they still need to be adapted for the cloud. For example, firewalls are designed to protect a perimeter, but in the cloud it is not easy to allocate a perimeter to an individual client, which makes protection much more difficult. Firewall technology therefore needs to be adapted to cloud infrastructure; work in this direction is now being actively carried out, for example, by Check Point.

A new type of threat for cloud computing is virtualization problems. The fact is that when using this technology, additional elements appear in the system that can be attacked. These include a hypervisor, a system for transferring virtual machines from one host to another, and a virtual machine management system. Let us consider in more detail what kind of attacks the listed elements can be subjected to.

    Hypervisor attacks... The key element of a virtual system is the hypervisor, which divides the resources of a physical computer among virtual machines. Interfering with the hypervisor's operation can allow one virtual machine to access the memory and resources of another, intercept its network traffic, take away its physical resources, and even evict the virtual machine from the server entirely. So far, few hackers understand exactly how the hypervisor works, so attacks of this type are practically nonexistent, but that does not guarantee they will not appear in the future.

    Virtual machine migration... Note that a virtual machine is a file that can be launched for execution on different cloud nodes, and virtual machine management systems provide mechanisms for transferring virtual machines from one host to another. However, a virtual machine's file can be stolen and an attempt made to run it outside the cloud. It is impossible to carry a physical server out of a data center, but a virtual machine can be stolen over the network without any physical access to the servers. True, a single virtual machine outside the cloud has little practical value: to reconstruct a similar cloud, you would need to steal at least one virtual machine from each layer, plus the data from the storage system. Nevertheless, virtualization does make theft of parts of the cloud, or of all of it, quite possible. In other words, interference with the mechanisms for transferring virtual machines creates new risks for the information system.

    Control system attacks... The sheer number of virtual machines used in clouds, especially public clouds, requires management systems that can reliably control the creation, migration, and disposal of virtual machines. Interference with these management systems can lead to the appearance of invisible virtual machines, the blocking of some machines, and the substitution of unauthorized elements into cloud layers. All this allows attackers to obtain information from the cloud or to capture parts of it, or the cloud as a whole.

It should be noted that so far all the threats listed above are purely hypothetical, since there is practically no information about real attacks of this type. But once virtualization and clouds become popular enough, all these types of attack may become quite real. They should therefore be kept in mind already at the design stage of cloud systems.

Beyond the seventh heaven

The apostle Paul claimed to have known a man who was caught up to the seventh heaven. Since then, the phrase "seventh heaven" has been firmly associated with paradise. Not all Christian saints, however, were honored to visit even the first heaven; nevertheless, there is hardly a person who would not dream of glimpsing the seventh heaven with at least one eye.

Perhaps it was this legend that prompted the creators at Trend Micro to name one of their cloud protection projects Cloud Nine, which is clearly above the seventh. The name is now given to a wide variety of things: songs, detective stories, computer games; but it is quite possible that it was inspired by the Christian legend of Paul.

So far, however, Trend Micro has only disclosed that Cloud Nine will involve data encryption in the cloud. It is data encryption that makes it possible to protect against most threats to data in the public cloud, so projects of this kind will now be actively developed. Let us imagine what other protection tools could be useful for mitigating the risks described above.

First of all, you need reliable authentication, both of cloud users and of its components. For this you can most likely use ready-made single sign-on (SSO) systems based, for example, on Kerberos and protocols for mutual hardware authentication. Next, you will need identity management systems that let you configure user access rights to different systems with role-based management. Of course, you will have to tinker with defining the roles and the minimum rights for each role, but once the system is configured, it can be used for a long time.
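
The role-based model just described can be reduced to a small sketch: roles map to minimal permission sets, users map to roles, and every access is checked against that mapping. A hedged Python illustration (all role, permission, and user names are invented for the example):

```python
# Minimal role-based access control: each role carries the least set of
# permissions it needs, and a user acts only through an assigned role.
ROLES = {
    "auditor":  {"logs:read"},
    "operator": {"logs:read", "vm:start", "vm:stop"},
    "admin":    {"logs:read", "vm:start", "vm:stop", "vm:create", "iam:grant"},
}
USERS = {"bob": "auditor", "eve": "operator"}

def is_allowed(user: str, permission: str) -> bool:
    # Unknown users and unknown roles are denied by default.
    role = USERS.get(user)
    return role is not None and permission in ROLES.get(role, set())

print(is_allowed("bob", "logs:read"))  # True  - within the auditor role
print(is_allowed("bob", "vm:create"))  # False - requires the admin role
```

Production identity management systems add delegation, auditing, and directory integration on top, but the deny-by-default check at the core is the same.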

When all participants in the process and their rights are defined, you need to monitor compliance with these rights and detect administrative errors. This requires systems that process events from the protection tools of the cloud's elements, plus additional protective mechanisms such as firewalls, antiviruses, IPS, and others. It is worth choosing the variants that can operate in a virtualized environment; they will be more effective.

In addition, you should use some kind of fraud detection engine to catch fraudulent use of the cloud, that is, to reduce the hardest risk of all: interference with business processes. True, the market today most likely has no fraud detection engine that can work with clouds; nevertheless, technologies for detecting fraud and abuse have already been worked out for telephony. Since a billing system will have to be implemented in clouds anyway, a fraud detection engine should be connected to it. In this way, it will at least be possible to control threats to cloud business processes.

What other defense mechanisms can be used to protect the clouds? The question is still open.

There are several ways to build a corporate IT infrastructure, and deploying all resources and services on a cloud platform is just one of them. However, prejudices about the security of cloud solutions often stand in the way. In this article we will look at how the security system is organized in the cloud of one of the best-known Russian providers, Yandex.

A fairy tale is a lie, but there is a hint in it

The beginning of this story could be told like a famous fairy tale. Once upon a time there were three administrators in a company: the senior one was a clever fellow, the middle one was so-so, and the junior one was... an "any-key" trainee who created users in Active Directory and tweaked the Cisco boxes. The time came for the company to expand, and the king, that is, the boss, summoned his admin host. I want, he said, new web services for our clients, our own file storage, managed databases, and virtual machines for software testing.

The junior immediately suggested building their own infrastructure from scratch: purchasing servers, installing and configuring software, expanding the main Internet channel and adding a backup one for reliability. The company would feel calmer, too: the hardware is always at hand, anything can be replaced or reconfigured at any moment, and he himself would get an excellent opportunity to level up his admin skills. They ran the numbers and shed a tear: the company could not afford such costs. Large businesses can do this, but for medium and small businesses it is too expensive. After all, you need not only to purchase equipment, fit out a server room, hang air conditioners, and set up fire alarms; you also need to organize shift duty to keep order day and night and fend off network attacks from Internet ruffians. And for some reason the administrators did not want to work nights and weekends. Unless it was for double pay.

The senior admin looked thoughtfully through the terminal window and suggested putting all the services in the cloud. But then his colleagues began to scare each other with horror stories: they say cloud infrastructure has unprotected interfaces and APIs, poorly isolates the workloads of different clients so that your own resources may suffer, and is vulnerable to data theft and external attacks. And in general, it is frightening to hand over control of critical data and software to strangers with whom you have not eaten a pood of salt or drunk a bucket of beer.

The middle one suggested placing the entire IT system in a provider's data center, on the provider's channels. And so they decided. However, several surprises awaited our trio, and not all of them were pleasant.

First, any network infrastructure requires security and protection tools, which, of course, were deployed, configured, and launched. But the cost of the hardware resources these tools consume, as it turned out, must be paid by the client, and a modern information security system consumes considerable resources.

Secondly, the business kept growing, and the infrastructure built at the outset quickly hit its scalability ceiling. Moreover, simply changing the tariff was not enough to expand it: many services would have to be moved to other servers and reconfigured, and some things redesigned from scratch.

Finally, one day, due to a critical vulnerability in one of the applications, the entire system crashed. The admins quickly restored it from backups, but they could not quickly work out the cause of what had happened, because they had forgotten to set up backups for the logging services. Valuable time was lost, and time, as popular wisdom has it, is money.

Tallying the costs and summing up led the company's management to a sobering conclusion: the admin who had from the very beginning suggested the IaaS ("infrastructure as a service") cloud model was right. As for the security of such platforms, that is worth discussing separately, and we will do so using the example of the most popular such service: Yandex.Cloud.

Security in Yandex.Cloud

Let's start, as the Cheshire Cat advised Alice, at the beginning: with the question of division of responsibility. In Yandex.Cloud, as in any other similar platform, the provider is responsible for the security of the services provided to users, while the client is responsible for the correct operation of the applications he develops, for organizing and restricting remote access to allocated resources, for configuring databases and virtual machines, and for control over logging. For this, however, he is given all the necessary tools.

The security of Yandex cloud infrastructure has several levels, each of which implements its own protection principles and uses a separate arsenal of technologies.

Physical layer

It is no secret that Yandex has its own data centers, served by its own security departments. We are talking not only about video surveillance and access control services designed to keep outsiders out of the server rooms, but also about climate control, fire extinguishing, and uninterruptible power supply systems. Stern security guards are of little use if your server rack is flooded by the fire sprinklers or overheats after an air conditioner failure. In Yandex data centers, that will definitely not happen.

In addition, the Cloud's hardware is physically separated from the "big Yandex": they are located in different racks, but undergo the same regular routine maintenance and component replacement. At the boundary of these two infrastructures, hardware firewalls are used, and inside the Cloud, a software host-based firewall. In addition, the top-of-rack switches apply access control lists (ACLs), which greatly enhances the security of the entire infrastructure. Yandex continuously scans the Cloud from the outside for open ports and configuration errors, so that a potential vulnerability can be recognized and eliminated in advance. For employees working with Cloud resources, a centralized authentication system using SSH keys with a role-based access model has been implemented, and all administrator sessions are logged. This approach is part of the secure-by-default model widely used by Yandex: security is built into the IT infrastructure at the design and development stage rather than added later, when everything is already in operation.

Infrastructure level

At the “hardware and software logic” level, Yandex.Cloud uses three infrastructure services: Compute Cloud, Virtual Private Cloud, and Yandex Managed Services. And now about each of them in a little more detail.

Compute Cloud

This service provides scalable computing power for various tasks, such as hosting web projects and high-load services, testing and prototyping, or temporary migration of IT infrastructure for the period of repair or replacement of its own equipment. You can manage the service through the console, command line (CLI), SDK, or API.

Compute Cloud security rests on the fact that all client virtual machines use at least two cores and there is no overcommitment in memory allocation. Since only the client's code is executed on a core, the system is not susceptible to vulnerabilities like L1TF, Spectre, and Meltdown, or to side-channel attacks.

In addition, Yandex uses its own QEMU/KVM build, in which everything unnecessary is disabled, leaving only the minimum set of code and libraries needed for the hypervisors to run. The processes are launched under AppArmor-based instrumentation, which uses security policies to determine which system resources a given application can access, and with which privileges. AppArmor running on top of each virtual machine reduces the risk of a client application reaching the hypervisor from the VM. To receive and process logs, Yandex has built a pipeline that delivers data from AppArmor and the sandboxes to its own Splunk installation.

Virtual private cloud

The Virtual Private Cloud service allows you to create cloud networks that carry information between different resources and connect them to the Internet. Physically, the service is backed by three independent data centers. Logical isolation in this environment is performed at the level of multi-protocol label switching (MPLS). In addition, Yandex constantly fuzzes the interface between the SDN and the hypervisor: from the virtual machines' side, a stream of malformed packets is continuously sent to the external environment in order to get a response from the SDN, analyze it, and close possible configuration gaps. DDoS protection is enabled automatically when virtual machines are created.

Yandex Managed Services

Yandex Managed Services is a software environment for managing various services: DBMSs, Kubernetes clusters, and virtual servers in the Yandex.Cloud infrastructure. Here the service takes over most of the security work: all backups, backup encryption, vulnerability management, and so on are provided automatically by the Yandex.Cloud software.

Incident response tools

To respond to information security incidents in a timely manner, the source of the problem must be identified in time. This requires reliable monitoring tools that work around the clock without interruption. Such systems inevitably consume resources, but Yandex.Cloud does not pass the cost of the security tools' computing power on to platform users.

When choosing the toolkit, Yandex was guided by another important requirement: if a 0-day vulnerability in one of the applications is successfully exploited, the attacker must not get beyond the application host, while the security team must learn of the incident instantly and react as needed.

Last but not least, it was desirable that all the tools be open source. These criteria are fully met by the AppArmor + Osquery combination, which was chosen for Yandex.Cloud.

AppArmor

AppArmor was mentioned above: it is a proactive protection tool based on customizable security profiles. The profiles rely on Mandatory Access Control (MAC), implemented via LSM directly in the Linux kernel itself since version 2.6. Yandex developers chose AppArmor for the following reasons:

  • lightness and speed, since the tool relies on part of the Linux kernel;
  • it is an open source solution;
  • AppArmor can be deployed very quickly on Linux without writing any code;
  • flexible configuration is possible using configuration files.
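
To give a feel for such profiles, below is a minimal illustrative AppArmor profile. This is a sketch, not Yandex's actual policy: the binary name and paths are invented for the example, and real profiles normally live under /etc/apparmor.d/.

```
# Illustrative profile for a hypothetical /usr/bin/report-agent binary
#include <tunables/global>

/usr/bin/report-agent {
  #include <abstractions/base>

  # Read-only access to its own configuration
  /etc/report-agent/** r,

  # Write access only to its own log file
  /var/log/report-agent.log w,

  # Everything else, including users' home directories, is denied
  deny /home/** rwx,
}
```

Anything not explicitly allowed by the profile is refused by the kernel, which is exactly the confinement property described above for processes running under AppArmor.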

Osquery

Osquery is a system security monitoring tool developed at Facebook and now successfully used across many IT industries. The tool is cross-platform and open source.

With the help of Osquery, you can collect information about the state of various components of the operating system, accumulate it, transform it into a standardized JSON format, and send it to a chosen recipient. The tool exposes the system's state to your application through standard SQL queries over tables backed by a RocksDB database, and you can customize how often, and under what conditions, these queries are executed and processed.

Many features are already implemented in the standard tables: for example, you can get a list of the processes running on the system, the installed packages, the current set of iptables rules, crontab entries, and so on. Support for receiving and parsing events from the kernel audit system is available out of the box (in Yandex.Cloud it is used to handle AppArmor events).
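
To sketch how such results might be consumed downstream, here is a hedged Python example that parses rows in the JSON shape osquery emits. The sample data is hard-coded; in practice the JSON would come from the osquery daemon or from a CLI call such as `osqueryi --json "SELECT name, pid FROM processes"`.

```python
import json

# Sample rows in the shape osquery produces for a processes query.
# Note that osquery returns column values as strings.
sample = """
[
  {"name": "sshd", "pid": "812"},
  {"name": "osqueryd", "pid": "1024"},
  {"name": "nginx", "pid": "1337"}
]
"""

rows = json.loads(sample)

# Route the rows to a recipient; here we simply index them by pid.
by_pid = {int(row["pid"]): row["name"] for row in rows}

print(by_pid[1337])  # nginx
print(len(rows))     # 3
```

Because the format is plain JSON, the same consumer code works whether the rows arrive from a log shipper, a message queue, or a direct query.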

Osquery itself is written in C++ and distributed as open source; you can modify it, add new tables to the main code base, and create your own extensions in C, Go, or Python.

A useful feature of Osquery is its distributed query system, which lets you execute queries in real time on all virtual machines in the network. This can be useful, for example, if a vulnerability is found in a package: with a single query you can get a list of the machines on which that package is installed. This capability is widely used when administering large distributed systems with complex infrastructure.

Conclusions

If we return to the story told at the very beginning of this article, we will see that the fears that made our heroes refuse to deploy infrastructure on a cloud platform proved unfounded, at least where Yandex.Cloud is concerned. The security of the cloud infrastructure created by Yandex has a multi-layered, defense-in-depth architecture and therefore provides a high level of protection against most currently known threats.

At the same time, thanks to the savings on routine hardware maintenance and on paying for the resources consumed by the monitoring and incident prevention systems that Yandex takes upon itself, using Yandex.Cloud saves small and medium-sized businesses considerable money. Of course, completely disbanding the IT department or the information security department (especially if both roles are combined in one team) will not be possible, but Yandex.Cloud will significantly reduce labor costs and overhead.

Since Yandex.Cloud provides its customers with a secure infrastructure and all the necessary security tools, they can focus on business processes, leaving the servicing and monitoring of hardware to the provider. This does not eliminate the need for day-to-day administration of VMs, databases, and applications, but that range of tasks would have to be handled in any case. On the whole, we can say that Yandex.Cloud saves not only money but also time, and the latter, unlike the former, is an irreplaceable resource.

GRIGORIEV¹ Vitaly Robertovich, Candidate of Technical Sciences, Associate Professor; KUZNETSOV² Vladimir Sergeevich

PROBLEMS OF IDENTIFICATION OF VULNERABILITIES IN A CLOUD COMPUTING MODEL

The article provides an overview of approaches to building a conceptual model of cloud computing, as well as a comparison of existing views on identifying the vulnerabilities inherent in systems built on this model.

Keywords: cloud computing, vulnerability, threat core, virtualization.

The purpose of this article is to review the approaches to building a conceptual cloud computing model outlined in the NIST Cloud Computing Reference Architecture, and to compare the views on vulnerabilities in this computing model held by the leading organizations in the field and by the main players in the cloud computing market.

Cloud computing is a model that provides convenient, on-demand network access to a shared pool of configurable computing resources (networks, servers, data stores, applications, and services) that can be rapidly provisioned with minimal management effort or service provider interaction. This definition by the National Institute of Standards and Technology (NIST) is widely accepted throughout the industry. It comprises five essential characteristics, three service models, and four deployment models.

Five main characteristics

On-demand self-service - users can obtain, control, and manage computing resources without the assistance of system administrators.

Broad network access - computing services are provided over standard networks and heterogeneous devices.

Rapid elasticity - IT resources can be quickly scaled in any direction as needed.

Resource pooling - IT resources are shared by different applications and users in a non-dedicated mode.

Measured service - the use of IT resources is tracked per application and per user, as a rule to provide billing for the public cloud and internal chargeback for the use of private clouds.

Three service models

Software as a Service (SaaS) - Typically, applications are delivered to end users as a service through a web browser. There are hundreds of SaaS offerings today, ranging from horizontal enterprise applications to industry-specific offerings, as well as consumer applications such as email.

Platform as a Service (PaaS) - An application development and deployment platform is provided as a service to developers, who use it to build, deploy, and manage SaaS applications. The platform typically includes databases, middleware, and development tools, all provided as a service over the Internet. PaaS often targets a programming language or API, such as Java or Python. A virtualized, clustered distributed computing architecture often serves as the base for PaaS systems, since the grid structure of the network resource provides the necessary elastic scalability and resource pooling.

Infrastructure as a Service (IaaS) - Servers, storage, and networking hardware are provided as a service. This infrastructure hardware is often virtualized, so virtualization, management, and operating system software are also part of IaaS.

1 - MSTU MIREA, Associate Professor of the Department of Information Security;

2 - Moscow State University of Radioelectronics and Automation (MSTU MIREA), student.

Four deployment models

Private clouds - Designed for the exclusive use of a single organization and typically controlled, managed, and hosted by private data centers. Hosting and management of private clouds can be outsourced to an external service provider, but the cloud often remains in the exclusive use of one organization.

Public clouds - Shared by many organizations (users), maintained and managed by external service providers.

Community clouds - Used by a group of related organizations looking to take advantage of a shared cloud computing environment. For example, a group may be composed of various branches of the armed forces, all universities in a given region, or all suppliers of a major manufacturer.

Hybrid clouds - Appear when an organization uses both a private and a public cloud for the same application in order to take advantage of both. For example, in a "cloudburst" (storm) scenario, the user organization runs the application on the private cloud under normal load, and when the load peaks, for example, at the end of a quarter or during the holiday season, it taps the capacity of the public cloud, returning those resources to the general pool when they are no longer needed.

Fig. 1 shows the conceptual model of cloud computing from the document "NIST Cloud Computing Reference Architecture". The model in the standard highlights the main participants of the cloud system: the cloud consumer, cloud provider, cloud auditor, cloud broker, and cloud carrier. Each participant is a person or organization performing its own functions in implementing or providing cloud computing.

[Fig. 1. Conceptual model developed by NIST specialists. The figure depicts the cloud consumer; the cloud provider with its layer stack (a service level offering Software, Platform, and Infrastructure as a Service, an abstraction level, and a physical level) together with cloud service support, customization, and portability functions; the cloud auditor (security audit, confidentiality audit, audit of provided services); the cloud broker; and the cloud carrier.]

Cloud consumer - a person or organization that maintains business interactions with the other actors in the network and uses services from cloud providers. Cloud provider - a person or organization responsible for the availability of services provided to interested consumers. Cloud auditor - a participant that can conduct independent evaluations of cloud services, operations, and the security of a cloud implementation. Cloud broker - a participant that manages the usage, performance, and delivery of cloud services to the consumer, and negotiates the interactions between cloud providers and cloud consumers. Cloud carrier - an intermediary that provides connectivity and delivery of cloud services between cloud providers and cloud consumers.

Advantages and Challenges of Cloud Computing

Recent surveys of IT specialists show that cloud computing offers two main advantages when organizing distributed services: speed and cost. Thanks to self-service access to a pool of computing resources, users can join the processes of interest to them in a matter of minutes, rather than in weeks or months, as was the case in the past. Computing capacity can also change rapidly thanks to the elastically scalable grid computing environment. Since in cloud computing users pay only for what they use, and scalability and automation reach a high level, the cost-to-efficiency ratio of the services provided is also a very attractive factor for all participants in the exchange processes.

The same polls show that a number of serious considerations hold some companies back from moving to the cloud. Among these considerations, cloud computing security leads by a wide margin.

For an adequate assessment of security in cloud systems, it makes sense to explore how the main market players view the threats in this area. We will compare the current approaches to cloud threats presented in the NIST Cloud Computing Standards Roadmap with those offered by IBM, Oracle, and VMware.

US National Institute of Standards and Technology Cloud Computing Security Standard

The NIST Cloud Computing Standards Roadmap covers the potential types of attacks on cloud computing services:

♦ compromising the confidentiality and availability of data transmitted by cloud providers;

♦ attacks that proceed from the structural features and capabilities of the cloud computing environment to amplify and increase the damage from attacks;

♦ unauthorized consumer access (through incorrect authentication or authorization, or vulnerabilities introduced through periodic maintenance) to software, data and resources used by an authorized cloud service consumer;

♦ an increase in the level of network attacks, such as DoS, exploiting software, the development of which did not take into account the threat model for distributed Internet resources, as well as vulnerabilities in resources that were accessible from private networks;

♦ limited possibilities for data encryption in an environment with a large number of participants;

♦ lock-in resulting from the use of non-standard APIs, which make it difficult for the cloud consumer to migrate to a new cloud provider when availability requirements are not met;

♦ attacks that exploit the physical abstraction of cloud resources and exploit flaws in audit records and procedures;

♦ attacks on virtual machines that have not been updated accordingly;

♦ attacks exploiting inconsistencies in global and private security policies.

The standard also highlights the main security objectives for cloud computing:

♦ protection of user data from unauthorized access, disclosure, modification or viewing; implies the support of the identification service in such a way that the consumer has the ability to perform identification and access control policies on authorized users who have access to cloud services; this approach implies the ability of the consumer to provide access to his data selectively to other users;

♦ protection against supply chain threats; includes confirmation of the degree of trust and reliability of the service provider to the same extent as the degree of confidence in the software and hardware used;

♦ prevention of unauthorized access to cloud computing resources; includes creating secure domains that are logically separate from resources (for example, logically separating workloads running on the same physical server through a hypervisor in a multitenant environment) and using secure default configurations;

♦ development of web applications deployed in the cloud for the threat model of distributed Internet resources and the integration of security functions into the software development process;

♦ protection of Internet browsers from attacks to mitigate end-user security weaknesses; includes taking measures to protect the Internet connection of personal computers through the use of secure software, firewalls (firewalls) and periodic installation of updates;

♦ deployment of access control and intrusion detection technologies by the cloud provider, and an independent assessment to verify their presence; includes (but is not limited to) traditional perimeter security measures combined with a domain security model; traditional perimeter security includes limiting physical access to the network and devices, protecting individual components from exploitation by deploying updates, secure-by-default settings, disabling all unused ports and services, role-based access control, monitoring of audit records, minimizing the privileges used, anti-virus packages, and encrypted connections;

♦ defining trusted boundaries between the service provider(s) and consumers to make clear who bears the responsibility for providing security;

♦ support for portability, so that the consumer can change the cloud provider when needed to meet requirements for integrity, availability, and confidentiality; this includes the ability to close an account on demand and to copy data from one service provider to another.
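Several of the measures listed above, role-based access control and least privilege in particular, can be illustrated with a minimal sketch. The roles and permission names below are hypothetical, not taken from any real system:

```python
# Minimal role-based access control sketch: each role gets only the
# smallest permission set it needs (principle of least privilege).
ROLE_PERMISSIONS = {
    "auditor":  {"logs:read"},
    "operator": {"vm:start", "vm:stop", "logs:read"},
    "admin":    {"vm:start", "vm:stop", "vm:delete", "logs:read", "user:manage"},
}

def is_allowed(role, action):
    """Deny by default: unknown roles and unlisted actions are refused."""
    return action in ROLE_PERMISSIONS.get(role, set())

print(is_allowed("auditor", "logs:read"))    # True
print(is_allowed("operator", "vm:delete"))   # False
```

The deny-by-default lookup is the essential point: any role or action not explicitly granted is rejected, which is exactly the "minimizing used privileges" objective named in the standard.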

Thus, the NIST Cloud Computing Standards Roadmap defines a basic list of attacks on cloud systems and a list of basic tasks that should be tackled by applying appropriate measures.

Let's formulate the threats to the information security of the cloud system:

♦ U1 - threats to data (compromise of confidentiality, availability, etc.);

♦ U2 - threats generated by the structural features and capabilities of the architecture used to implement distributed computing;

♦ U3 - threats of unauthorized access to resources (through incorrect authentication or authorization);

♦ U4 - threats associated with an incorrect threat model;

♦ U5 - threats related to incorrect use of encryption (it is necessary to use encryption in an environment where there are several data streams);

♦ U6 - threats associated with the use of non-standard APIs during development;

♦ U7 - virtualization threats;

♦ U8 - threats exploiting inconsistencies in global security policies.

IBM's perspective on cloud computing security

The document Cloud Security Guidance: IBM Recommendations for the Implementation of Cloud Security allows us to draw conclusions about IBM's security vision. Based on this document, we can expand the previously proposed list of threats, namely:

♦ U9 - threats associated with third-party access to physical resources/systems;

♦ U10 - threats associated with incorrect disposal (life cycle) of personal information;

♦ U11 - threats related to the violation of regional, national and international laws concerning the processed information.

IBM, Oracle and VMware Approaches to Cloud Computing Security

The documentation provided by these companies describing their views on security in their systems does not fundamentally differ from the above threats.

Table 1 lists the main classes of vulnerabilities that these companies formulate for their products. The table makes it possible to see the lack of full coverage of the threats by the companies studied and to formulate the "core of threats" common to their cloud systems:

♦ threat to data;

♦ threats based on the structure/capabilities of distributed computing;

♦ threats associated with an incorrect threat model;

♦ virtualization threats.

Conclusion

An overview of the main classes of vulnerabilities in the cloud platform allows us to conclude that currently there are no ready-made solutions for fully protecting the cloud due to the variety of attacks that use these vulnerabilities.

Table 1. Classes of vulnerabilities

Source      | U1 U2 U3 U4 U5 U6 U7 U8 U9 U10 U11
NIST        | +  +  +  +  +  +  +  +  -  -   -
IBM         | +  +  +  +  +  -  +  -  +  +   +
Sun/Oracle  | +  +  +  +  -  -  +  -  -  +   -
VMware      | +  +  +  +  -  -  +  -  -  -   -

It should be noted that the constructed table of vulnerability classes (Table 1), which integrates the approaches of the leading players in this industry, is not limited to the threats presented in it. For example, it does not reflect the threats associated with blurring the boundaries between environments with different levels of data confidentiality, or with blurring the boundaries of responsibility for information security between the service consumer and the cloud provider.
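Table 1 can be read as a family of sets: the threat classes marked "+" by every source are simply their intersection. A small illustrative sketch (labels exactly as in the table):

```python
# Declared threat classes per source, transcribed from Table 1.
declared = {
    "NIST":       {"U1", "U2", "U3", "U4", "U5", "U6", "U7", "U8"},
    "IBM":        {"U1", "U2", "U3", "U4", "U5", "U7", "U9", "U10", "U11"},
    "Sun/Oracle": {"U1", "U2", "U3", "U4", "U7", "U10"},
    "VMware":     {"U1", "U2", "U3", "U4", "U7"},
}

# Classes acknowledged by all four sources at once.
shared = set.intersection(*declared.values())
print(sorted(shared))  # ['U1', 'U2', 'U3', 'U4', 'U7']
```

Per the table, U1 through U4 and U7 are declared by every source, which matches the data, architecture, threat-model, and virtualization classes singled out in the text as the shared core.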

It becomes obvious that protection for a complex cloud system must be developed for the specific implementation. The absence of FSTEC and FSB standards for cloud systems also plays an important role in implementing secure computing in virtual environments. It makes sense to use the "core of threats" highlighted in this work when studying the task of constructing a unified model of vulnerability classes. This article is a survey; in the future, we plan to analyze in detail the classes of threats associated with virtualization and to develop approaches to creating a protection system that prevents the implementation of these threats.

Literature

1. Cloud Security Guidance: IBM Recommendations for the Implementation of Cloud Security, ibm.com/redbooks, November 2, 2009.

2. http://www.vmware.com/technical-resources/security/index.html.

3. NIST Cloud Computing Reference Architecture, National Institute of Standards and Technology, Special Publication 500-292, September 2011.

4. NIST Cloud Computing Standards Roadmap, National Institute of Standards and Technology, Special Publication 500-291, July 2011.

5. http://www.oracle.com/technetwork/indexes/documentation/index.html.

Cloud computing collectively refers to a large pool of easily used and readily available virtualized resources (such as hardware systems, services, etc.). These resources can be dynamically reallocated (scaled) to adapt to dynamically changing load, ensuring optimal resource utilization. This resource pool is usually provided on a pay-as-you-go basis. At the same time, the owner of the cloud guarantees the quality of service based on certain agreements with the user.

In accordance with all of the above, the following main features of cloud computing can be distinguished:

1) cloud computing is a new paradigm for the provision of computing resources;

2) basic infrastructure resources (hardware resources, data storage systems, system software) and applications are provided as services;

3) these services can be provided by an independent provider for external users on a pay-as-you-go basis, the main features of cloud computing are virtualization and dynamic scalability;

4) cloud services can be provided to the end user through a web browser or through a specific API (Application Programming Interface).

The general cloud computing model consists of an external and an internal part. These two elements are connected over a network, in most cases over the Internet. Through the external part, the user interacts with the system; the inner part is actually the cloud itself. The front end consists of a client computer or a network of enterprise computers and applications used to access the cloud. The inner part is represented by applications, computers, servers and data stores that create a cloud of services through virtualization (Fig. 1).

When virtual machines (VMs) are moved from the physical data center (DC) to external clouds, or when IT services are provided outside the secure perimeter in private clouds, the network perimeter loses its meaning and the overall security level drops considerably.

If in traditional data centers engineers' access to servers is strictly controlled at the physical level, in cloud computing engineers' access occurs via the Internet, which gives rise to the corresponding threats. Accordingly, strict access control for administrators is critical, as is ensuring the control and transparency of changes at the system level.

Virtual machines are dynamic. Their volatility makes it very difficult to create and maintain a coherent security system: vulnerabilities and configuration errors can spread out of control, and it is very difficult to record the state of protection at any particular point in time for a subsequent audit.

Cloud computing servers use the same OS and the same web applications as on-premises virtual and physical servers. Accordingly, for cloud systems, the threat of remote hacking or malware infection is just as high.

Another threat is the threat to data integrity: compromise and data theft. The integrity of the operating system and application files, as well as internal activity, must be monitored.

The use of multi-tenant cloud services makes it difficult to comply with the requirements of standards and laws, including the requirements on the use of cryptographic tools to protect sensitive information such as cardholder data and personally identifiable information. This, in turn, poses the daunting task of providing reliable protection of, and secure access to, sensitive data.
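One common way to keep cardholder data out of a multi-tenant environment is tokenization: the sensitive value is replaced by an opaque token, and only a masked form is shown. The sketch below is a toy illustration only, not a PCI-compliant scheme; the key, function name, and token format are all hypothetical:

```python
import hashlib
import hmac

SECRET_KEY = b"demo-tenant-key"  # hypothetical per-tenant key, for illustration only

def tokenize_pan(pan: str):
    """Replace a card number (PAN) with a keyed, deterministic token and
    return a masked form that keeps only the last four digits for display."""
    digest = hmac.new(SECRET_KEY, pan.encode(), hashlib.sha256).hexdigest()
    return "tok_" + digest[:16], "****-****-****-" + pan[-4:]

token, masked = tokenize_pan("4111111111111111")
print(masked)  # ****-****-****-1111
```

Because the token is derived with a keyed HMAC rather than a plain hash, a tenant without the key cannot recompute or correlate tokens, which is the property that matters in a shared environment.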

Based on the analysis of possible threats in cloud computing, a comprehensive hardware and software protection of cloud computing security is proposed, which includes five technologies: firewall, intrusion detection and prevention, integrity control, log analysis, and protection against malicious software.

Cloud computing providers use virtualization to provide their customers with access to low-cost computing resources. Client VMs share the same hardware resources, which is necessary to achieve the greatest economic efficiency. Enterprise customers interested in cloud computing as a way to expand their internal IT infrastructure must weigh the threats such a move poses. In addition to the traditional network-protection mechanisms of data centers (edge firewalls, demilitarized zones, network segmentation, network monitoring tools, intrusion detection and prevention systems), software data-protection mechanisms should also be used on the virtualization servers or on the VMs themselves, since with the transfer of VMs to public cloud services the corporate network perimeter gradually loses its meaning and the least protected nodes begin to significantly affect the overall security level. It is the impossibility of physically separating VMs and of using hardware security appliances to repel attacks between them that makes it necessary to place the protection mechanism on the virtualization server or on the VMs themselves. Implementing a comprehensive protection method on the virtual machine itself, including software implementations of a firewall, intrusion detection and prevention, integrity control, log analysis, and protection against malicious code, is the most effective way to protect integrity, comply with regulatory requirements, and observe security policies when moving virtual resources from the intranet to the cloud.
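One of the five proposed technologies, integrity control, reduces to a simple idea: record cryptographic digests of watched objects and later compare them against the baseline. A minimal sketch (in-memory file contents stand in for real files; paths are illustrative):

```python
import hashlib

def snapshot(files):
    """Record SHA-256 digests of the watched objects.
    `files` maps a name to its current byte content."""
    return {name: hashlib.sha256(data).hexdigest() for name, data in files.items()}

def detect_changes(baseline, current):
    """Return the names whose digest differs from (or is absent in) the baseline."""
    return [name for name, digest in current.items() if baseline.get(name) != digest]

baseline = snapshot({"/etc/passwd": b"root:x:0:0", "/bin/login": b"\x7fELF..."})
tampered = snapshot({"/etc/passwd": b"root:x:0:0", "/bin/login": b"\x7fELF...backdoor"})
print(detect_changes(baseline, tampered))  # ['/bin/login']
```

A real integrity-control agent adds scheduling, a protected digest store, and alerting, but the detection step is exactly this baseline comparison.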

Literature:

1. Radchenko G.I. Distributed Computing Systems: a tutorial. 2012. pp. 146-149.

2. Kondrashin M. Security of cloud computing // Storage News. - 2010. - No. 1.

Coursework by discipline

Information security software and hardware

"Information security in cloud computing: vulnerabilities, methods and means of protection, tools for auditing and incident investigation."

Introduction

1. History and key development factors

2. Definition of cloud computing

3. Reference architecture

4. Service Level Agreement

5. Methods and means of protection in cloud computing

6. Security of cloud models

7. Security audit

8. Investigation of incidents and forensics in cloud computing

9. Threat model

10. International and domestic standards

11. Territorial identity of the data

12. State standards

13. Cloud Security Means

14. Practical part

Conclusion

Literature

Introduction

The growing popularity of cloud computing is explained by the fact that for comparatively little money the customer gets access to highly reliable infrastructure with the required performance, without the need to purchase, install, and maintain expensive computers. Service availability reaches 99.9%, which also saves on computing resources. And, more importantly, scalability is almost unlimited: with ordinary hosting, a sharp surge in load risks bringing the service down for several hours, whereas in the cloud additional resources are available on request.
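The 99.9% availability figure translates directly into a downtime budget; a quick calculation shows what it allows per year:

```python
def max_downtime_hours_per_year(availability):
    """Convert an availability fraction into the maximum yearly downtime, in hours
    (365 days * 24 hours)."""
    return round((1 - availability) * 365 * 24, 2)

print(max_downtime_hours_per_year(0.999))   # 8.76  ("three nines")
print(max_downtime_hours_per_year(0.9999))  # 0.88  ("four nines")
```

That is, 99.9% still permits almost nine hours of outage per year; each extra "nine" cuts the budget by a factor of ten.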

The main problem of cloud computing is the non-guaranteed level of security of the processed information, the degree of protection of resources and, often, a completely absent regulatory framework.

The purpose of the study will be to provide an overview of the existing cloud computing market and the means to ensure security in them.

Keywords: cloud computing, security, information.

1. History and key development factors

The idea of what we now call cloud computing was first voiced by J. C. R. Licklider in 1970. In those years he was responsible for the creation of ARPANET (Advanced Research Projects Agency Network). His idea was that every person on earth would be connected to a network from which he would receive not only data but also programs. Another scientist, John McCarthy, put forward the idea that computing power would be provided to users as a service. After this, the development of cloud technologies paused until the 1990s, when a number of factors contributed to it.

1. The expansion of Internet bandwidth in the 1990s did not in itself allow a significant leap in cloud technology, since almost no company or technology of that time was ready for it. However, the very fact that the Internet became faster gave impetus to the early development of cloud computing.

2. One of the most significant milestones was the launch of Salesforce.com in 1999, the first company to provide access to its application through a website and, in fact, the first to offer its software on a software-as-a-service (SaaS) basis.

The next step was the development of a cloud web service by Amazon in 2002. This service made it possible to store information and perform calculations.

In 2006, Amazon launched a service called Elastic Compute Cloud (EC2) as a web service that allowed its users to run their own applications. Amazon EC2 and Amazon S3 were the first widely available cloud computing services.

Another milestone in the development of cloud computing came with the creation by Google of the Google Apps platform for web applications in the business sector.

Virtualization technologies have played a significant role in the development of cloud technologies, in particular, software that allows you to create a virtual infrastructure.

The development of hardware contributed not so much to the rapid growth of cloud technologies as to the availability of this technology for small businesses and individuals. As for technological progress, the creation of multi-core processors and the increase in the capacity of information storage played a significant role.

2. Definition of cloud computing

As defined by the US National Institute of Standards and Technology:

Cloud computing is a model for providing ubiquitous and convenient on-demand network access to a shared pool of configurable computing resources (e.g. networks, servers, storage systems, applications, and services) that can be rapidly provisioned and released with minimal management effort and minimal interaction with the service provider.

The cloud model supports high service availability and is described by five essential characteristics, three service models and four deployment models.

Programs are launched and display their results in a standard web browser window on a local PC, while all the applications and data needed for work are located on a remote server on the Internet. The computers that perform cloud computing are called a "computing cloud". The load between the computers included in the "computing cloud" is distributed automatically. The simplest example of cloud computing is p2p networks.
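The automatic load distribution mentioned above can be sketched as a trivial round-robin dispatcher; this is a toy model, not a real balancer, and the node names are made up:

```python
import itertools

class CloudPool:
    """Toy dispatcher: spreads incoming requests over the machines of a
    'computing cloud' in round-robin order."""
    def __init__(self, nodes):
        self._cycle = itertools.cycle(nodes)

    def dispatch(self, request):
        node = next(self._cycle)
        return f"{request} -> {node}"

pool = CloudPool(["node-a", "node-b", "node-c"])
for r in ["req1", "req2", "req3", "req4"]:
    print(pool.dispatch(r))  # req4 wraps around to node-a
```

Production balancers add health checks and weighting, but the principle is the same: the user never chooses, or needs to know, which node serves a given request.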

To implement cloud computing, middleware products created using special technologies are used. They serve as an intermediate link between the equipment and the user and provide monitoring of the status of equipment and programs, equal load distribution and timely provision of resources from a common pool. One of these technologies is virtualization in computing.

Virtualization in computing is the process of representing a set of computing resources, or their logical combination, in a way that gives some advantage over the original configuration. It is a new virtual view of the resources, not limited by the implementation, physical configuration, or geographic location of their constituent parts. Typically, virtualized resources include computing power and data storage. Scientifically, virtualization is the isolation of computing processes and resources from each other.

An example of virtualization is symmetric multiprocessor computer architectures, which use more than one processor. Operating systems are usually configured so that multiple processors appear as a single processor unit. This is why software applications can be written for one logical (virtual) computing module, which is much easier than working with a large number of different processor configurations.

For especially large and resource-intensive calculations, grid calculations are used.

Grid computing (from grid: lattice, network) is a form of distributed computing in which a "virtual supercomputer" is represented as clusters of networked, loosely coupled, heterogeneous computers working together to perform a huge number of tasks (operations, jobs).

This technology is used to solve scientific and mathematical problems that require significant computing resources. Grid computing is also used in commercial infrastructure to solve time-consuming tasks such as economic forecasting, seismic analysis, and the development and study of the properties of new drugs.

From the perspective of a networked organization, the grid is a consistent, open and standardized environment that provides flexible, secure, coordinated separation of computing and storage resources that are part of this environment within a single virtual organization.

Paravirtualization is a virtualization technique that provides virtual machines with a programming interface similar to, but not identical to, the underlying hardware. The goal of this modified interface is to reduce the time the guest operating system spends performing operations that are much harder to do in a virtual environment than in a non-virtualized one.

There are special "hooks" that allow the guest and host systems to request and acknowledge these complex tasks, which could also be performed in the virtual environment, but at a much slower rate.

A hypervisor (or virtual machine monitor) is a program or hardware scheme that enables the simultaneous, parallel execution of several or even many operating systems on the same host computer. The hypervisor also provides isolation of the OSes from one another, protection and security, resource sharing between the running OSes, and resource management.

A hypervisor can also (but does not have to) provide the OS running on the same host computer with the means of communication and interaction with each other (for example, through file exchange or network connections) as if these OSs were running on different physical computers.

The hypervisor itself is, in a way, a minimal operating system (a microkernel or nanokernel). It provides the operating systems running under its control with a virtual machine service, virtualizing or emulating the real (physical) hardware of the particular machine, and manages these virtual machines, allocating and freeing resources for them. The hypervisor allows the independent "power-on", reboot, and "shutdown" of any of the virtual machines with a particular OS. An operating system running in a virtual machine under a hypervisor may, but does not have to, "know" that it is running in a virtual machine and not on real hardware.
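The resource-allocation role of a hypervisor described above can be illustrated with a toy model: a fixed host pool from which guest VMs are carved out and to which their resources return on shutdown. All names and sizes are illustrative only:

```python
class TinyHypervisor:
    """Toy model of a hypervisor's resource accounting: allocate host
    CPU/RAM to guest VMs and reclaim it when a VM is shut down."""
    def __init__(self, cpus, ram_gb):
        self.free = {"cpus": cpus, "ram_gb": ram_gb}
        self.vms = {}

    def start_vm(self, name, cpus, ram_gb):
        if cpus > self.free["cpus"] or ram_gb > self.free["ram_gb"]:
            raise RuntimeError("insufficient host resources")
        self.free["cpus"] -= cpus
        self.free["ram_gb"] -= ram_gb
        self.vms[name] = {"cpus": cpus, "ram_gb": ram_gb}

    def stop_vm(self, name):
        res = self.vms.pop(name)  # "shutdown": return resources to the pool
        self.free["cpus"] += res["cpus"]
        self.free["ram_gb"] += res["ram_gb"]

hv = TinyHypervisor(cpus=8, ram_gb=32)
hv.start_vm("guest1", cpus=2, ram_gb=8)
hv.start_vm("guest2", cpus=4, ram_gb=16)
print(hv.free)  # {'cpus': 2, 'ram_gb': 8}
hv.stop_vm("guest1")
print(hv.free)  # {'cpus': 4, 'ram_gb': 16}
```

A real hypervisor schedules and isolates the guests as well; this sketch captures only the bookkeeping side of "allocating and freeing resources".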

Cloud service models

The options for providing computing power vary greatly. Everything related to cloud computing is usually labeled with the word aaS, which simply stands for "as a Service".

Software as a Service (SaaS) - the provider provides the client with a ready-to-use application. Applications are accessible from various client devices through thin-client interfaces, such as a web browser (for example, webmail), or through program interfaces. The consumer does not control the underlying cloud infrastructure, including networks, servers, operating systems, and storage systems, or even individual application capabilities, with the exception of a limited set of user-specific application configuration settings.

In the SaaS model, customers pay not to own the software as such, but to rent it (that is, use it through a web interface). Thus, in contrast to the classical software licensing scheme, the customer incurs relatively small recurring costs, and he does not need to invest significant funds for the purchase of software and its support. The periodic payment scheme assumes that if the need for software is temporarily absent, the customer can suspend its use and freeze payments to the developer.
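The cost trade-off described above can be made concrete with a toy calculation; every figure below is hypothetical and serves only to show the shape of the comparison:

```python
def license_cost(upfront, annual_support, years):
    """Classical licensing: a large upfront purchase plus yearly support fees."""
    return upfront + annual_support * years

def saas_cost(monthly_fee, active_months):
    """SaaS rent: pay only for the months the service is actually used,
    so pausing use freezes the payments."""
    return monthly_fee * active_months

print(license_cost(10_000, 2_000, 3))  # 16000 over three years
print(saas_cost(300, 36))              # 10800 if used all 36 months
print(saas_cost(300, 20))              # 6000 if payments are frozen for 16 months
```

The point is not that one model is always cheaper, but that SaaS converts a fixed capital cost into a variable operating cost that tracks actual use.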

From a developer's point of view, the SaaS model allows you to effectively combat unlicensed use of software (piracy), since the software itself does not reach the end customers. In addition, the SaaS concept can often reduce the cost of deploying and implementing information systems.

Fig. 1. Typical SaaS layout

Platform as a Service (PaaS) - the provider offers the client a software platform and tools for designing, developing, testing and deploying user applications. At the same time, the consumer does not control the underlying cloud infrastructure, including networks, servers, operating systems and storage systems, but has control over the deployed applications and, possibly, some configuration parameters of the hosting environment.

Fig. 2. Typical PaaS layout

Infrastructure as a Service (IaaS) - the provider offers the client computing resources for rent: servers, storage systems, network equipment, operating systems and system software, virtualization systems, and resource management systems. The consumer does not control the underlying cloud infrastructure but has control over operating systems, storage systems, and deployed applications, and possibly limited control over the choice of network components (for example, a host with firewalls).

Fig. 3. Typical IaaS layout

Additionally distinguish services such as:

Communications as a Service (Com-aaS) - communication services are provided as services; usually this is IP telephony, mail, and instant messaging (chats, IM).

Cloud data storage - the user is provided with a certain amount of space for storing information. Since information is stored in a distributed and duplicated manner, such storage provides a much greater degree of data safety than local servers.

Workplace as a Service (WaaS) - a user with an insufficiently powerful computer can buy computing resources from the supplier and use his PC as a terminal for accessing the service.

Antivirus cloud - the infrastructure used to process information coming from users in order to recognize new, previously unknown threats in a timely manner. A cloud antivirus requires no extra actions from the user: it simply sends a request about a suspicious program or link, and when the danger is confirmed, all the necessary actions are performed automatically.

Deployment models

Among the deployment models, there are 4 main types of infrastructure.

Private cloud - infrastructure intended for use by a single organization comprising several consumers (for example, divisions of one organization), possibly also by that organization's clients and contractors. A private cloud can be owned, managed, and operated by the organization itself or by a third party (or some combination thereof), and it can physically exist both inside and outside the owner's jurisdiction.

Fig. 4. Private cloud

Public cloud - infrastructure intended for free use by the general public. A public cloud can be owned, managed, and operated by commercial, academic, and government organizations (or any combination thereof). The public cloud physically exists in the jurisdiction of its owner, the service provider.

Fig. 5. Public cloud.

Hybrid cloud - a combination of two or more different cloud infrastructures (private, community or public) that remain unique entities but are interconnected by standardized or proprietary technologies for transferring data and applications (for example, the short-term use of public cloud resources to balance load between clouds).

Fig. 6. Hybrid cloud.

Community cloud - a type of infrastructure intended for use by a specific community of consumers from organizations with shared concerns (e.g., mission, security requirements, policies, and compliance with various regulations). A community cloud may be jointly owned, managed and operated by one or more of the community organizations or by a third party (or some combination thereof), and it may physically exist both within and outside the owner's jurisdiction.

Fig. 7. Description of cloud properties

Basic properties

NIST, in its document "The NIST Definition of Cloud Computing", defines the following characteristics of clouds:

On-demand self-service. The consumer has the ability to access the provided computing resources unilaterally as needed, automatically, without the need to interact with the employees of each service provider.

Broad network access. The provided computing resources are available over the network through standard mechanisms for various platforms, thin and thick clients (mobile phones, tablets, laptops, workstations, etc.).

Resource pooling. The provider's computing resources are pooled to serve many consumers in a multi-tenant model. Pools include a variety of physical and virtual resources that can be dynamically assigned and reassigned to meet customer needs. The consumer does not need to know the exact location of the resources, but may be able to specify their location at a higher level of abstraction (for example, country, region, or data center). Examples of such resources include storage systems, computing power, memory and network bandwidth.

Rapid elasticity. Resources can be elastically provisioned and released, in some cases automatically, to scale rapidly with demand. To the consumer, the available resources appear unlimited: they can be appropriated in any quantity at any time.

Measured service. Cloud systems automatically manage and optimize resources using measurement tools implemented at the level of abstraction appropriate to the kind of service (for example, managing external storage, processing, bandwidth, or active user sessions). Resource usage can be monitored and controlled, which provides transparency for both the provider and the consumer of the service.
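The pay-per-use idea behind measured service can be sketched as a simple metering loop. The resource names and rates below are invented for illustration; a real provider meters far more dimensions:

```python
from collections import defaultdict

class UsageMeter:
    """Toy pay-per-use meter: records resource-consumption samples
    per tenant and produces a bill from illustrative rates."""
    RATES = {"cpu_hours": 0.05, "gb_stored": 0.02, "gb_transferred": 0.09}

    def __init__(self):
        self.usage = defaultdict(lambda: defaultdict(float))

    def record(self, tenant, resource, amount):
        if resource not in self.RATES:
            raise ValueError(f"unmetered resource: {resource}")
        self.usage[tenant][resource] += amount

    def bill(self, tenant):
        # Sum rate * consumed amount over every metered resource.
        return round(sum(self.RATES[r] * a
                         for r, a in self.usage[tenant].items()), 2)

meter = UsageMeter()
meter.record("acme", "cpu_hours", 120)      # CPU-hours this period
meter.record("acme", "gb_stored", 500)      # average GB stored
meter.record("acme", "gb_transferred", 40)  # egress traffic
print(meter.bill("acme"))  # 6.0 + 10.0 + 3.6 = 19.6
```

The same usage records that drive billing also give the consumer the transparency the definition mentions: both sides can audit exactly what was consumed.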

Fig. 8. Structural diagram of a cloud server

Pros and cons of cloud computing

Advantages

· Reduced requirements for the computing power of the PC (the only indispensable condition is Internet access);

· Fault tolerance;

· Security;

· High speed of data processing;

· Reduced costs for hardware and software, maintenance and electricity;

· Saving of disk space (both data and programs are stored on the Internet);

· Live migration - transfer of a virtual machine from one physical server to another without interrupting the virtual machine or stopping services;

· In late 2010, during the DDoS attacks against companies that refused to provide resources to WikiLeaks, another advantage of cloud computing was revealed: all the companies that opposed WikiLeaks were attacked, but only Amazon proved insensitive to these attacks, since it used cloud computing. ("Anonymous: serious threat or mere annoyance", Network Security, N1, 2011).

Disadvantages

· Dependence of the safety of user data on the companies providing cloud computing services;

· Permanent network connection - to access "cloud" services, a permanent Internet connection is required. However, nowadays this is not such a big drawback, especially with the spread of 3G and 4G cellular technologies;

· Software and its customization - there are restrictions on the software that can be deployed in the "clouds" and provided to the user. The user is limited to the software offered and sometimes cannot customize it for his own purposes;

· Confidentiality - the confidentiality of data stored in public "clouds" is currently a matter of much controversy, but in most cases experts agree that it is not recommended to store a company's most valuable documents in a public "cloud", since currently there is no technology that guarantees 100% confidentiality of stored data; this is why the use of encryption in the cloud is a must;

· Reliability - as for the reliability of stored information, it can be said with confidence that information lost in the "cloud" is lost forever;

· Security - the "cloud" itself is a fairly reliable system, but on penetrating it an attacker gains access to a huge data store; malicious software can also be used in such an attack;

· High cost of equipment - to build its own cloud, a company must allocate significant material resources, which is not beneficial for newly created and small companies.

3. Reference architecture

The NIST Cloud Computing Reference Architecture contains five main actors. Each actor plays a role and performs actions and functions. The reference architecture is presented as successive diagrams with an increasing level of detail.

Fig. 9. Conceptual diagram of the reference architecture

Cloud Consumer - a person or organization that maintains a business relationship with Cloud Providers and uses their services.

Cloud Consumers are divided into three groups:

· SaaS - uses applications to automate business processes.

· PaaS - develops, tests, deploys and manages applications deployed in the cloud environment.

· IaaS - creates, manages IT infrastructure services.

Cloud Provider - a person, organization or entity responsible for making a cloud service available to Cloud Consumers.

· SaaS - installs, manages, maintains and provides software deployed on the cloud infrastructure.

· PaaS - provides and manages the cloud infrastructure and middleware; provides development and administration tools.

· IaaS - provides and maintains servers, databases and computing resources; provides the cloud structure to the consumer.

The activities of Cloud Providers are divided into five main types of activity:

Service Deployment:

o Private cloud - served by one organization. The infrastructure is managed both by the organization itself and by a third party and can be deployed both by the Provider (off premise) and by the organization (on premise).

o Community cloud - the infrastructure is shared by several organizations with similar requirements (security, regulatory compliance).

o Public cloud - the infrastructure is used by a large number of organizations with different requirements. Off premise only.

o Hybrid cloud - infrastructure combines different infrastructures based on similar technologies.

Service management

o Service level - defines the basic services provided by the Provider.

§ SaaS - applications used by the Consumer, accessed in the cloud from special client programs.

§ PaaS - containers for consumer applications, plus development and administration tools.

§ IaaS - computing power, databases and fundamental resources, on top of which the Consumer deploys his own infrastructure.

o Level of abstraction and resource control

§ Management of the hypervisor and virtual components required to implement the infrastructure.

o Level of physical resources

§ Computer equipment

§ Engineering infrastructure

Security

o Availability

o Confidentiality

o Identification

o Security monitoring and incident handling

o Security policies

Privacy

o Protection of processing, storage and transfer of personal data.

Cloud Auditor - a participant who can independently assess cloud services, the operation of information systems, and the performance and security of a cloud implementation.

It can give its own assessment of security, privacy, performance and other things in accordance with the approved documents.

Fig. 10. Provider activities

Cloud Broker - an entity that manages the use, performance and delivery of cloud services, and establishes the relationship between Providers and Consumers.

As cloud computing develops, the integration of cloud services may become too difficult for the consumer to handle alone, so a broker provides:

o Service intermediation - enhancing a given service and providing new capabilities

o Aggregation - combining various services in order to provide them to the Consumer as one

Cloud Carrier (communication operator) - an intermediary providing connectivity and transport (communication services) for the delivery of cloud services from Providers to Consumers. The carrier:

o Provides access through communication devices

o Provides connectivity in accordance with the SLA.

Among the five actors presented, a cloud broker is optional, since cloud consumers can receive services directly from the cloud provider.

The actors are introduced in order to work out the relationships between the parties involved.

4. Service Level Agreement

A service level agreement is a document describing the level of service delivery expected by a customer from a supplier, based on the metrics applicable to a given service, and setting out the provider's responsibility if the agreed metrics are not met.

Here are some indicators, in one form or another, found in operator documents:

ASR (Answer Seizure Ratio) - a parameter that determines the quality of a telephone connection in a given direction. ASR is calculated as a percentage of the number of phone connections established as a result of calls to the total number of calls made in a given direction.

PDD (Post Dial Delay) - parameter defining the period of time (in seconds) elapsed from the moment of the call to the moment of establishing the telephone connection.

Service availability ratio - the ratio of the interruption time in service provision to the total time during which the service is to be provided.

Packet loss ratio - the ratio of lost data packets to the total number of packets transmitted over the network during a certain period of time.

Packet transmission delay - the time interval required to transmit a packet of information between two network devices.

Reliability of information transfer - the ratio of the number of erroneously transmitted data packets to the total number of transmitted data packets.

Periods of work, time of notification of subscribers and time of restoration of services.

For example, a service availability of 99.99% means the operator guarantees no more than 4.3 minutes of downtime per month; 99.9% means the service may be unavailable for 43.2 minutes; and 99% means the outage can last more than 7 hours. Some practices differentiate network availability and assume a lower value of the parameter during off-hours. Different values of the indicators are also set for different types of services (traffic classes). For voice, for example, latency matters most and must be minimal, while the required bandwidth is low, and some packets can be lost without loss of quality (up to about 1%, depending on the codec). For data transmission, bandwidth comes first, and packet loss should tend to zero.
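The downtime figures above follow directly from the availability percentage; a small sketch, assuming a 30-day billing month, reproduces them:

```python
def max_downtime_minutes(availability_pct, period_minutes=30 * 24 * 60):
    """Maximum downtime permitted by an availability guarantee
    over a billing period (default: a 30-day month, 43 200 minutes)."""
    return period_minutes * (1 - availability_pct / 100)

for pct in (99.99, 99.9, 99.0):
    print(f"{pct}% -> {max_downtime_minutes(pct):.1f} min/month")
# 99.99% -> 4.3 min, 99.9% -> 43.2 min, 99% -> 432.0 min (about 7.2 hours)
```

The same arithmetic works in reverse when negotiating an SLA: decide how much downtime the business can tolerate, then derive the availability percentage to demand.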

5. Methods and means of protection in cloud computing

Confidentiality must be ensured throughout the chain, including the cloud provider, the consumer, and the communications that link them.

The Provider's task is to ensure both physical and software integrity of data from third parties' attacks. The consumer must put in place appropriate policies and procedures "on their territory" to exclude the transfer of access rights to information to third parties.

The tasks of ensuring the integrity of information in the case of using individual "cloud" applications can be solved - thanks to modern database architectures, backup systems, integrity check algorithms and other industrial solutions. But that's not all. New challenges can arise when it comes to integrating multiple cloud applications from different vendors.
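One of the simplest integrity-check mechanisms mentioned above can be sketched with checksums: record SHA-256 digests at backup time and compare them against the restored data. File names and contents here are invented for illustration:

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    """SHA-256 digest of an object's contents, as a hex string."""
    return hashlib.sha256(data).hexdigest()

# On backup: store a manifest of checksums alongside the data.
backup = {"config.yml": b"retention: 30d\n", "users.db": b"\x00\x01binary"}
manifest = {name: sha256_hex(data) for name, data in backup.items()}

def verify(restored: dict, manifest: dict) -> list:
    """Return the names of objects whose contents no longer match
    the checksums recorded at backup time."""
    return [name for name, data in restored.items()
            if sha256_hex(data) != manifest[name]]

restored = dict(backup)
restored["users.db"] = b"\x00\x01tampered"  # simulate corruption in transit
print(verify(restored, manifest))  # ['users.db']
```

Industrial backup systems layer the same idea with signed manifests and per-block checksums, which is what makes integration across multiple cloud applications auditable.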

In the near future, for companies looking for a secure virtual environment, the only option is to create a private cloud system. The fact is that private clouds, unlike public or hybrid systems, most closely resemble the virtualized infrastructures that the IT departments of large corporations have already learned to implement and over which they can maintain complete control. Information security flaws in public cloud systems pose a major challenge; most break-in incidents occur in public clouds.

6. Security of cloud models

The level of risk in the three cloud models is very different, and the ways to address security issues also differ depending on the level of interaction. The security requirements remain the same, but the level of security control changes across different models, SaaS, PaaS, or IaaS. From a logical point of view, nothing changes, but the possibilities of physical implementation are radically different.

Fig. 11. The most urgent information security threats

In the SaaS model, the application runs on the cloud infrastructure and is accessible through a web browser. The client has no control over the network, servers, operating systems, storage, or even some application capabilities. For this reason, in the SaaS model the primary responsibility for security falls almost entirely on the vendors.

Problem number 1 is password management. In the SaaS model, applications live in the cloud, so the main risk lies in using multiple accounts to access them. Organizations can solve this problem by unifying accounts for cloud and on-premises systems. With single sign-on, users access workstations and cloud services with a single account. This approach reduces the likelihood of orphaned accounts remaining open to unauthorized use after employees leave.
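The single sign-on idea can be sketched with an HMAC-signed token: an identity provider vouches for the user once, and every service that trusts it verifies the token instead of keeping its own password database. The token format and shared secret below are invented for illustration; real deployments use standards such as SAML or OpenID Connect:

```python
import hashlib
import hmac
import time

SECRET = b"idp-shared-secret"  # hypothetical key shared by the identity provider

def issue_token(user, ttl=3600, now=None):
    """Identity-provider side: sign 'user|expiry' so any trusting
    service can verify the user without its own credential store."""
    expiry = int((now if now is not None else time.time()) + ttl)
    payload = f"{user}|{expiry}"
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}|{sig}"

def verify_token(token, now=None):
    """Service side: return the user name if the token is genuine
    and unexpired, otherwise None."""
    payload, _, sig = token.rpartition("|")
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None  # forged or corrupted token
    user, _, expiry = payload.partition("|")
    if (now if now is not None else time.time()) > int(expiry):
        return None  # expired token
    return user

token = issue_token("alice", now=0)
print(verify_token(token, now=10))     # alice
print(verify_token(token, now=99999))  # None (expired)
```

Because every service consults the same identity provider, disabling one account immediately locks the former employee out of all of them, which is exactly the orphaned-account problem described above.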

According to CSA's explanation, PaaS assumes that customers build applications using vendor-supported programming languages and tools and then deploy them to the cloud infrastructure. As in the SaaS model, the customer cannot manage or control the infrastructure - networks, servers, operating systems, or storage systems - but has control over application deployment.

In a PaaS model, users must pay attention to application security as well as API management issues such as validation, authorization, and verification.

Problem number 1 is data encryption. The PaaS model is inherently secure, but the risk lies in inadequate system performance: encryption is recommended when exchanging data with PaaS providers, and it requires additional processing power. Nevertheless, in any solution the transmission of confidential user data must take place over an encrypted channel.

In the IaaS model, customers have no control over the underlying cloud infrastructure, but they do control operating systems, storage and application deployment, and possibly have limited control over the choice of network components.

This model provides several built-in security capabilities but does not protect the infrastructure itself, which means that users must manage and secure operating systems, applications and content themselves, usually through an API.

If this is translated into the language of protection methods, then the provider must provide:

· Reliable control of access to the infrastructure itself;

· Infrastructure resiliency.

At the same time, the cloud consumer takes on much more protection functions:

· Firewalling within the infrastructure;

· Protection against intrusions into the network;

· Protection of operating systems and databases (access control, protection against vulnerabilities, control of security settings);

· Protection of end applications (anti-virus protection, access control).

Thus, most of the protection measures fall on the shoulders of the consumer. The provider can provide standard recommendations for protection or turnkey solutions, which will simplify the task for end users.

Table 1. Delineation of responsibility for security between the client and the service provider (P - supplier, K - client)

                       Enterprise server   IaaS   PaaS   SaaS
Application                   K             K      K      P
Data                          K             K      K      P
Runtime environment           K             K      P      P
Middleware                    K             K      P      P
Operating system              K             K      P      P
Virtualization                K             P      P      P
Server                        K             P      P      P
Data warehouses               K             P      P      P
Network hardware              K             P      P      P



7. Security audit

The tasks of the Cloud Auditor are essentially the same as those of the auditor of conventional systems. Cloud security audit is subdivided into Supplier audit and User audit. The User's audit is carried out at the User's request, while the Supplier's audit is one of the most important conditions for doing business.

It consists of:

· Initiation of the audit procedure;

· Collection of audit information;

· Analysis of audit data;

· Preparation of an audit report.

At the stage of initiating the audit procedure, the issues of the powers of the auditor, the timing of the audit must be resolved. The obligatory assistance of employees to the auditor should also be stipulated.

In general, the auditor conducts an audit to determine the reliability of:

· Virtualization systems, hypervisor;

· Servers;

· Data warehouses;

· Network equipment.

If the Supplier uses the IaaS model, then checking the components listed above will be enough to identify vulnerabilities.

When using the PaaS model, the following are additionally checked:

· Operating system,

· Middleware,

· Runtime environment.

When using the SaaS model, vulnerabilities are also checked in:

· Data storage and processing systems,

· Applications.

Security audits are performed using the same methods and tools as audits of conventional servers, but unlike a conventional server, in cloud technologies the hypervisor is additionally checked for stability. In the cloud, the hypervisor is one of the core technologies, and it should therefore receive particular emphasis during an audit.

8. Investigation of incidents and forensics in cloud computing

Information security measures can be divided into preventive (for example, encryption and other access control mechanisms) and reactive (investigations). The proactive aspect of cloud security is an area of active research, while the reactive aspect has received much less attention.

Investigation of incidents (including investigation of crimes in the information sphere) is a well-known section of information security. The objectives of such investigations are usually:

Proof that the crime / incident occurred

Recovering the events surrounding the incident

Identification of offenders

Proof of the involvement and responsibility of offenders

Proof of dishonest intentions on the part of the perpetrators.

A new discipline - computer forensics - has appeared in response to the need for forensic analysis of digital systems. The objectives of computer forensics are usually as follows:

Recovering data that may have been deleted

Recovery of events that occurred inside and outside the digital systems associated with the incident

Identification of users of digital systems

Detection of the presence of viruses and other malicious software

Detection of the presence of illegal materials and programs

Cracking passwords, encryption keys and access codes

Ideally, computer forensics is a kind of time machine for an investigator: it can travel to any moment in the past of a digital device and provide the investigator with information about:

people who used the device at a certain point

user actions (for example, opening documents, accessing a website, printing data in a word processor, etc.)

data stored, created and processed by the device at a specific time.

Cloud services replacing stand-alone digital devices should provide a similar level of forensic readiness. However, this requires overcoming the challenges associated with resource pooling, multitenancy, and cloud computing infrastructure resilience. The main tool in incident investigation is the audit trail.

Audit trails - designed to record the history of user logins, administrative tasks, and data changes - are an essential part of a security system. In the cloud, the audit trail is not only a tool for investigations but also a tool for calculating the cost of server usage. While the audit trail does not close security holes, it provides a critical view of what is happening and suggests how to correct the situation.

Creating archives and backups is important, but cannot replace a formal audit trail that records who did what, when, and what. The audit trail is one of the main tools of a security auditor.
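A structured audit record answering "who did what, to what, when, and with what result" might look like the following sketch; the field names are an assumption for illustration, not a standard:

```python
import json
import time

def audit_event(actor, action, target, outcome="success"):
    """Build one append-only audit record as a JSON line:
    who (actor) did what (action), to what (target), when (ts),
    and with what result (outcome)."""
    record = {
        "ts": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "actor": actor,
        "action": action,
        "target": target,
        "outcome": outcome,
    }
    return json.dumps(record, sort_keys=True)

# Both successful and denied actions are logged - denials are often
# the first sign of an incident in progress.
print(audit_event("alice", "login", "portal"))
print(audit_event("alice", "delete", "vm-042", outcome="denied"))
```

Emitting one self-contained JSON line per event keeps the trail machine-parseable for both investigators and the billing system mentioned above; in production such lines are shipped to write-once storage so an intruder cannot rewrite history.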

The service agreement usually mentions which audit logs will be kept and provided to the User.

9. Threat model

In 2010, CSA conducted an analysis of the main security threats in cloud technologies. The result of this work was the document "Top Threats to Cloud Computing v1.0", which currently describes the threat model and the intruder model most completely. At the moment, a more complete, second version of the document is being developed.

The current document describes attackers for the three service models: SaaS, PaaS and IaaS. Seven main attack vectors have been identified. For the most part, the attacks under consideration are the same attacks faced by conventional, "non-cloud" servers, onto which the cloud infrastructure imposes certain peculiarities. For example, attacks on vulnerabilities in server software are joined by attacks on the hypervisor, which is also part of the server software stack.

Security threat # 1

Inappropriate and dishonest use of cloud technologies.

Description:

To obtain resources from a cloud IaaS provider, a user only needs a credit card. The ease of registration and resource allocation allows spammers, virus authors and others to use the cloud service for criminal purposes. Previously, this kind of attack was observed only in PaaS, but recent studies have shown that IaaS can also be used for DDoS attacks, hosting malicious code, building botnets, and more.

For example, IaaS services have been used to build a botnet based on the "Zeus" Trojan, to store the code of the "InfoStealer" Trojan, and to post information about various MS Office and Adobe PDF vulnerabilities.

In addition, botnet networks use IaaS to manage their peers and send spam. Because of this, some IaaS services were blacklisted, and their users were completely ignored by mail servers.

Remediation:

· Improvement of user registration procedures

· Improvement of credit card verification procedures and monitoring of the use of payment means

· Comprehensive study of the network activity of service users

· Monitoring of the main blacklists for the appearance of the cloud provider's network there.

Service Models Affected:

Security threat # 2

Insecure Programming Interfaces (APIs)

Description:

Cloud infrastructure providers provide users with a set of programming interfaces for managing resources, virtual machines, or services. The security of the entire system depends on the security of these interfaces.

Anonymous access to the interface and the transmission of credentials in clear text are the main hallmarks of insecure APIs. Limited monitoring of API usage, lack of logging systems, as well as unknown relationships between various services only increase the risks of hacking.

Remediation:

· Analysis of the security model of the cloud provider

· Ensuring that strong encryption algorithms are used

· Ensuring that strong authentication and authorization methods are used

· Understanding the whole chain of dependencies between different services.

Service models affected:

Security threat # 3

Internal offenders

Description:

The problem of illegal access to information from within is extremely dangerous. Often, on the side of the provider, a system for monitoring employee activity is not implemented, which means that an attacker can gain access to the client's information using his official position. Since the provider does not disclose its recruitment policy, the threat can come from both an amateur hacker and an organized criminal structure that has infiltrated the ranks of the provider's employees.

At the moment, there are no examples of this kind of abuse.

Remediation:

· Implementation of strict rules for the procurement of equipment and the use of appropriate systems for detecting unauthorized access

· Regulating the rules for hiring employees in public contracts with users

· Creation of a transparent security system, along with the publication of security audit reports on the provider's internal systems

Service models affected:

Fig. 12. Example of an insider

Security threat # 4

Vulnerabilities in cloud technologies

Description:

IaaS service providers abstract hardware resources using virtualization systems. However, the hardware may have been designed without shared resources in mind. To minimize the impact of this factor, the hypervisor controls virtual machines' access to hardware resources; yet even hypervisors can contain serious vulnerabilities whose exploitation can lead to privilege escalation or illegal access to the physical equipment.

In order to protect systems from such problems, it is necessary to implement mechanisms for isolating virtual environments and systems for detecting failures. Virtual machine users should not have access to shared resources.

There are examples of potential vulnerabilities, as well as theoretical methods of bypassing isolation in virtual environments.

Remediation:

· Implementation of the most advanced methods of installation, configuration and protection of virtual environments

· Use of intrusion detection systems

· Application of strong authentication and authorization rules for administrative work

· Tightening of the requirements on the time to apply patches and updates

· Timely vulnerability scanning and detection procedures.

Security threat # 5

Loss or leakage of data

Description:

Data loss can happen for a thousand reasons. For example, deliberate destruction of the encryption key will result in the encrypted information being unrecoverable. Deletion of data or a part of data, illegal access to important information, changes in records or failure of the medium are also examples of such situations. In a complex cloud infrastructure, the likelihood of each of the events increases due to the close interaction of components.

Incorrect application of authentication, authorization and audit rules, incorrect use of encryption rules and methods, and equipment failure can lead to data loss or leakage.
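The key-destruction scenario above ("crypto-shredding") is easy to demonstrate with a toy stream cipher built from SHA-256 in counter mode. This construction is for illustration only and is not a real cipher; production systems use vetted algorithms such as AES-GCM:

```python
import hashlib
import secrets

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """Toy SHA-256 counter-mode keystream cipher (illustration only).
    XOR is its own inverse, so the same function encrypts and decrypts."""
    out = bytearray()
    for block in range(0, len(data), 32):
        ks = hashlib.sha256(key + block.to_bytes(8, "big")).digest()
        chunk = data[block:block + 32]
        out.extend(b ^ k for b, k in zip(chunk, ks))
    return bytes(out)

key = secrets.token_bytes(32)
ciphertext = keystream_xor(key, b"customer record #17")

# While the key exists, the data is recoverable:
assert keystream_xor(key, ciphertext) == b"customer record #17"

# "Crypto-shredding": destroying the only copy of the key renders
# every ciphertext encrypted under it permanently unrecoverable.
key = None
```

This cuts both ways: deliberate key destruction is a fast way to erase data scattered across a provider's replicas, while accidental key loss is exactly the unrecoverable-data failure the threat describes, which is why a reliable key management system appears in the remediation list.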

Remediation:

· Use of a reliable and secure API

· Encryption and protection of data in transit

· Analysis of the data protection model at all stages of system operation

· Implementation of a reliable encryption key management system

· Selection and purchase of only the most reliable media

· Timely data backups

Service models affected:

Security threat # 6

Identity theft and illegal access to the service

Description:

This kind of threat is not new; millions of users face it every day. The attackers' main target is the user name (login) and password. In the context of cloud systems, a stolen password and user name increase the risk that data stored in the provider's cloud infrastructure will be misused, and the attacker can also exploit the victim's reputation for his own activities.

Remediation:

· Prohibition on sharing or transferring accounts

· Use of two-factor authentication methods

· Proactive monitoring of unauthorized access

· Description of the cloud provider's security model.
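The two-factor authentication mentioned above is often implemented with time-based one-time passwords. A minimal sketch of the standard TOTP construction (RFC 6238: HMAC-SHA1 over a 30-second counter with dynamic truncation) follows; the shared secret is an invented example:

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, for_time=None, step=30, digits=6):
    """RFC 6238 TOTP: HMAC-SHA1 over the current 30-second time
    counter, dynamically truncated to a short one-time code."""
    t = for_time if for_time is not None else time.time()
    counter = int(t // step)
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# The server and the user's token device derive the same code
# independently from the shared secret and the current time, so a
# stolen password alone is no longer enough to log in.
print(totp(b"shared-secret"))
```

The code changes every 30 seconds, which is what makes a phished or keylogged one-time code nearly worthless to an attacker minutes later.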

Service models affected:

Security threat # 7

Other vulnerabilities

Description:

The use of cloud technologies for doing business allows the company to focus on its business, leaving the care of IT infrastructure and services to a cloud provider. By advertising its service, the cloud provider seeks to show all the possibilities, while revealing the details of the implementation. This can pose a serious threat, as knowledge of the internal infrastructure gives an attacker the ability to find an unpatched vulnerability and launch an attack on the system. In order to avoid such situations, cloud providers may not provide information about the internal structure of the cloud, however, this approach also does not increase trust, since potential users do not have the ability to assess the degree of data security. In addition, this approach limits the ability to find and eliminate vulnerabilities in a timely manner.

Examples:

· Amazon's refusal to conduct an EC2 cloud security audit

· A vulnerability in processing software that led to a breach of the security system of the Heartland data center

· Disclosure of log data

· Full or partial disclosure of data about the system architecture and details of the installed software

Remediation:

· Use of vulnerability monitoring systems.

Service models affected:

10. Legal basis

According to experts, 70% of security problems in the cloud can be avoided if you correctly draw up a service agreement.

The basis for such an agreement can be the "Cloud Bill of Rights".

The Cloud Bill of Rights was drawn up back in 2008 by James Urquhart. He published the material on his blog, where it generated so much interest and controversy that the author periodically updates his "manuscript" to keep pace with reality.

Article 1 (in part): Clients own their data

· No manufacturer (or supplier) should, in interacting with customers on any plan, dispute the rights to any data uploaded, created, generated or modified, or any other rights to the data that the customer holds.

· Manufacturers should initially provide minimal access to customer data at the stage of developing solutions and services.

· Customers own their data, which means that they are responsible for ensuring that the data complies with legal regulations and laws.

· Since compliance, security and data-safety issues are critical, it is imperative that the customer be able to determine where his own data is located. Otherwise, manufacturers must give users full guarantees that their data is stored in accordance with all rules and regulations.

Article 2: Manufacturers and Customers jointly own and manage service levels in the system

· Manufacturers own and must do everything in order to meet the level of service for each client individually. All the necessary resources and efforts made to achieve the proper level of service in working with clients should be free for the client, that is, not included in the cost of the service.

· Customers, in turn, are responsible for and own the level of service provided to their own internal and external customers. When using the manufacturer's solutions to provide their own services, the responsibility of the client and the level of such service should not entirely depend on the manufacturer.

· If it is necessary to integrate the systems of the manufacturer and the customer, the manufacturers should offer the customers the possibility of monitoring the integration process. If the client has corporate standards for the integration of information systems, the manufacturer must comply with these standards.

· Under no circumstances should manufacturers close customer accounts for political statements, inappropriate speech, religious comments, unless it is contrary to specific legal regulations, is not an expression of hatred, etc.

Article 3: Manufacturers Own Their Interfaces

· Manufacturers are not required to provide standard or open source interfaces unless otherwise specified in customer agreements. Manufacturers have rights to interfaces. If the manufacturer does not consider it possible to provide the client with the opportunity to refine the interface in a familiar programming language, the client can purchase from the manufacturer or third-party developers services for finalizing the interfaces in accordance with their own requirements.

· The client, however, has the right to use the purchased service for their own purposes, as well as to expand its capabilities, replicate it, and improve it. This clause does not release customers from their obligations under patent and intellectual property law.

The above three articles are the foundation for customers and vendors in the cloud. Their full text is freely available on the Internet. Of course, this bill is not a complete legal document, much less an official one; its articles can be changed and expanded at any time, just as the bill can be supplemented with new articles. It is an attempt to formalize "ownership" in the cloud in order to bring at least some standardization to this freedom-loving area of knowledge and technology.

Relationship between the parties

By far the best cloud security expert is the Cloud Security Alliance (CSA). The organization has released and recently updated a guide that includes hundreds of nuances and best practices to consider when assessing risks in cloud computing.

Another organization dealing with aspects of cloud security is the Trusted Computing Group (TCG). It is the author of several standards in this and other areas, including the widely used Trusted Storage, Trusted Network Connect (TNC), and Trusted Platform Module (TPM) specifications.

These organizations have jointly worked out a number of questions that the customer and the provider should address when concluding a contract. Working through these questions resolves most of the problems that arise when using the cloud, including force majeure, switching cloud service providers, and other situations.

1. Safety of stored data. How does the service provider ensure the safety of the stored data?

The best measure for protecting data stored in a data warehouse is to use encryption technologies. The provider must always encrypt client information stored on its servers to prevent unauthorized access, and must permanently delete data when it is no longer needed and will not be required in the future.
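The "permanent deletion" requirement above can be illustrated with a minimal sketch. This is a best-effort overwrite-then-unlink routine, not a production tool: on journaling or copy-on-write filesystems and on SSDs the old blocks may survive, which is exactly why real providers combine deletion with encryption at rest and key destruction.

```python
import os
import secrets
import tempfile

def wipe_and_delete(path: str, passes: int = 3) -> None:
    """Overwrite a file with random bytes before unlinking it.

    A best-effort sketch of 'permanent deletion'. The flush/fsync
    calls push each overwrite pass to the underlying device before
    the file is removed.
    """
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(secrets.token_bytes(size))
            f.flush()
            os.fsync(f.fileno())
    os.remove(path)
```

After the call, the file no longer exists and its previous contents were overwritten several times in place.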

2. Data protection during transmission. How does the provider ensure the safety of data during its transfer (inside the cloud and on the way from / to the cloud)?

The transmitted data must always be encrypted and accessible only to an authenticated user. This ensures the data cannot be read or altered by anyone who gains access to it through untrusted nodes on the network. These technologies have been developed over "thousands of man-years" and have produced reliable protocols and algorithms (for example, TLS, IPsec, and AES). Providers should use these protocols rather than invent their own.
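The advice to reuse vetted protocols rather than invent new ones is visible even in standard library defaults. A short sketch using Python's `ssl` module: the default context already requires certificate verification and hostname checking, and we only tighten the minimum protocol version.

```python
import ssl

# Build a client-side TLS context with certificate verification on.
# ssl.create_default_context() enables hostname checking and
# certificate validation out of the box; a provider or client should
# rely on such vetted defaults instead of rolling its own transport
# encryption.
context = ssl.create_default_context()

# Refuse legacy protocol versions explicitly.
context.minimum_version = ssl.TLSVersion.TLSv1_2
```

Wrapping a socket with this context (`context.wrap_socket(sock, server_hostname=...)`) then gives an encrypted, authenticated channel for data in transit.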

3. Authentication. How does the provider know the authenticity of the client?

The most common authentication method is password protection. However, providers seeking to offer greater reliability are turning to stronger tools such as certificates and tokens. In addition to supporting more secure authentication means, providers should be able to work with standards such as LDAP and SAML. This is necessary so that the provider can interact with the client's identity system when authorizing users and determining the permissions to grant them; the provider then always has up-to-date information about authorized users. The worst-case scenario is when the client hands the provider a fixed list of authorized users: difficulties typically arise when an employee is dismissed or moved to another position.
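Even for plain password authentication, the provider should never store passwords directly. A minimal sketch of salted password hashing with the standard library's PBKDF2 (the iteration count here is illustrative; real deployments tune it to their hardware):

```python
import hashlib
import hmac
import secrets

def hash_password(password, salt=None):
    """Derive a salted PBKDF2-HMAC-SHA256 digest for storage."""
    salt = salt or secrets.token_bytes(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest

def verify_password(password, salt, digest):
    """Recompute the digest and compare in constant time."""
    _, candidate = hash_password(password, salt)
    return hmac.compare_digest(candidate, digest)
```

The constant-time comparison (`hmac.compare_digest`) avoids leaking information through timing differences.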

4. User isolation. How are the data and applications of one customer separated from the data and applications of other customers?

Best option: each client uses an individual virtual machine (VM) and virtual network. The separation between VMs, and therefore between users, is provided by the hypervisor. Virtual networks, in turn, are deployed using standard technologies such as VLAN (Virtual Local Area Network), VPLS (Virtual Private LAN Service), and VPN (Virtual Private Network).

Some providers put all customers' data in a single software environment and try to isolate customers from each other through changes in the application code. This approach is reckless and unreliable. First, an attacker can find a flaw in the non-standard code that lets him access data he should not see. Second, a bug in the code can cause one client to accidentally "see" another client's data. Both kinds of incident have occurred recently. Using separate virtual machines and virtual networks to segregate user data is therefore the more reasonable approach.
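The class of bug described above can be shown in a deliberately simplified, hypothetical sketch (the data and function names are invented for illustration). When all tenants share one store, every query must remember to filter by tenant; one forgotten filter leaks another customer's records, whereas hypervisor-level isolation removes this whole class of mistake from application code.

```python
# Hypothetical shared-store model: all tenants' records in one list.
RECORDS = [
    {"tenant": "acme",   "doc": "invoice-1"},
    {"tenant": "globex", "doc": "payroll-7"},
]

def query_unsafe():
    # Buggy path: the tenant filter was forgotten, so every
    # caller sees every customer's documents.
    return [r["doc"] for r in RECORDS]

def query_scoped(tenant):
    # Correct path: tenant scoping enforced on every access.
    return [r["doc"] for r in RECORDS if r["tenant"] == tenant]
```

One missing `if r["tenant"] == tenant` clause is all it takes for "acme" to see "globex" data; per-VM isolation makes such a slip structurally impossible.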

5. Regulatory issues. How well is the provider complying with the laws and regulations applicable to the cloud computing industry?

Depending on the jurisdiction, laws, regulations, and special provisions vary. For example, they may prohibit exporting data, mandate strictly defined safeguards, require compliance with certain standards, or demand auditability. Ultimately, they may require that government agencies and courts be able to access information when needed. A provider's carelessness about these matters can expose its customers to significant costs and legal consequences.

The provider must follow strict rules and adhere to a unified legal and regulatory strategy. This concerns the security of user data, its export, compliance with standards, auditing, and the retention and deletion of data, as well as information disclosure (the latter is especially important when the information of several clients may be stored on one physical server). Clients are strongly encouraged to engage specialists who will study this issue thoroughly.

6. Reaction to incidents. How does the provider respond to incidents, and to what extent may its customers be involved in the incident?

Sometimes not everything goes according to plan. The service provider is therefore obliged to follow specific, documented rules of conduct for unforeseen circumstances. Providers must identify incidents and minimize their consequences by informing users about the current situation; ideally, they should regularly supply clients with as much detail as possible on the issue. It is then up to customers to assess the likelihood of a security issue and take the necessary action.

10. International and domestic standards

The evolution of cloud technology is outpacing the effort to create and modify the required industry standards, many of which have not been updated in years. Therefore, lawmaking in the field of cloud technologies is one of the most important steps towards ensuring security.

The IEEE, one of the world's largest standards development organizations, has announced the launch of a dedicated Cloud Computing Initiative. This is the first international cloud standardization initiative; until now, cloud computing standards have been dominated by industry consortia. The initiative currently includes two projects: IEEE P2301, "Draft Guide for Cloud Portability and Interoperability Profiles", and IEEE P2302, "Draft Standard for Intercloud Interoperability and Federation".

Within the IEEE Standards Association, two new working groups have been created for projects IEEE P2301 and IEEE P2302, respectively. IEEE P2301 will contain profiles of existing and pending standards for applications, portability, management, and interoperability, as well as file formats and operating agreements. The information in the document will be logically structured for the different target audiences: vendors, service providers, and other interested market participants. Upon completion, the standard is expected to be usable in the procurement, development, construction, and use of cloud products and services based on standard technologies.

The IEEE P2302 standard will describe the underlying topology, protocols, functionality, and management methods required for the interaction of various cloud structures (for example, for the interaction between a private cloud and a public one such as EC2). This standard will enable providers of cloud products and services to reap economic benefits from economies of scale, while providing transparency to users of services and applications.

ISO is preparing a dedicated standard for cloud computing security. Its main focus is resolving the organizational issues that clouds raise. Owing to the complexity of ISO's harmonization procedures, however, the final version of the document is not expected before 2013.

The value of the document is that not only government organizations (NIST, ENISA) are involved in its preparation, but also representatives of expert communities and associations such as ISACA and CSA. Moreover, one document contains recommendations for both cloud service providers and their consumers - client organizations.

The main purpose of this document is to describe in detail the best practices related to the use of cloud computing from an information security perspective. The standard does not concentrate only on technical aspects but rather on organizational ones that must not be forgotten in the transition to cloud computing: the division of rights and responsibilities, agreements with third parties, the management of assets owned by the various participants in the cloud process, personnel management, and so on.

The new document largely incorporates materials previously developed in the IT industry.

Australian government

After months of brainstorming, the Australian government released a series of cloud-based migration guides on February 15, 2012, on the Australian Government Information Management Office (AGIMO) blog.

To make it easier for companies to migrate to the cloud, recommendations (Better Practice Guides) have been prepared on using cloud services in compliance with the requirements of the Financial Management and Accountability Act 1997. The guides deal with financial, legal, and data protection issues in general terms.

The guidelines talk about the need to constantly monitor and control the use of cloud services through daily analysis of bills and reports. This will help avoid hidden markups and dependence on cloud service providers.

The first guide is titled Privacy and Cloud Computing for Australian Government Agencies (9 pages). This document focuses on privacy and data security issues.

In addition to this guide, Negotiating the Cloud - Legal Issues in Cloud Computing Agreements (19 pages) has been prepared to help readers understand the clauses included in such contracts.

The final, third handbook, Financial Considerations for Government use of Cloud Computing (6 pages), discusses the financial issues that a company should look out for if it decides to use cloud computing in its business.

In addition to those covered in the guides, there are a number of other issues that need to be addressed when using cloud computing, including issues related to government, procurement and business management policy.

Public discussion of this policy paper provides an opportunity for stakeholders to consider and comment on the following issues of concern:

· Unauthorized access to classified information;

· Loss of access to data;

· Failure to ensure the integrity and authenticity of data; and

· Understanding the practical aspects of providing cloud services.

11. Territorial identity of the data

There are a number of regulations in different countries that require sensitive data to remain within the country. While storing data within a given territory may not seem difficult at first glance, cloud service providers often cannot guarantee it. In highly virtualized systems, data and virtual machines can move from one country to another for various purposes, such as load balancing and fault tolerance.

Some of the major players in the SaaS market (such as Google and Symantec) can guarantee storage of data in the respective country. But these are rather exceptions; in general, meeting these requirements is still quite rare. Even if the data does remain in the country, customers have no way to verify it. Nor should one forget about the mobility of company employees: if a specialist working in Moscow travels to New York, it is better (or at least faster) for him to receive the data from a data center in the United States. Guaranteeing residency in that case is an order of magnitude harder.

12. State standards

At the moment there is no serious regulatory framework for cloud technologies in our country, although work in this area is already under way. Decree of the President of the Russian Federation No. 146 of February 8, 2012 determined that the federal executive bodies authorized in the field of data security for information systems built with supercomputer and grid technologies are the FSB of Russia and the FSTEC of Russia.

In connection with this decree, the powers of these services have been expanded. The FSB of Russia now develops and approves regulatory and methodological documents on ensuring the security of these systems, organizes and conducts research in the field of information security.

The service also carries out expert cryptographic, engineering-cryptographic and special studies of these information systems and prepares expert opinions on proposals for work on their creation.

The document also stipulates that the FSTEC of Russia develops a strategy and determines priority areas for ensuring the security of information in information systems created using supercomputer and grid technologies that process restricted data, and also monitors the state of work to ensure this security.

FSTEC commissioned a study that produced a beta version of a "terminology system in the field of cloud technologies".

As far as one can tell, this entire terminology system is an adapted translation of two documents: the "Focus Group on Cloud Computing Technical Report" and "The NIST Definition of Cloud Computing". That these two documents are not entirely consistent with each other is a separate issue; what is immediately visible is that the authors of the Russian terminology system simply provided no references to these English-language sources.

The point is that such work must begin with a discussion of the concept, goals, objectives, and methods for achieving them. There are many questions and comments. The main methodological note: the study must state very clearly what problem it solves and what its purpose is. "Creating a terminology system" cannot be a goal; it is a means, and what it is meant to achieve remains unclear.

Not to mention that any proper study should include a survey of the status quo.

It is difficult to discuss the results of a study without knowing the original formulation of the problem and how the authors solved it.

But one fundamental mistake of the terminology system is plainly visible: the "cloud" subject cannot be discussed in isolation from the "non-cloud" one, outside the general IT context. Yet that context is nowhere visible in the study.

As a result, such a terminology system will be impossible to apply in practice; it can only confuse the situation further.

13. Cloud Security Means

A cloud server protection system should, in its minimum configuration, secure the network equipment, the data storage, the server, and the hypervisor. Additionally, an anti-virus can be placed in a dedicated core to prevent infection of the hypervisor through a virtual machine, along with a data encryption system for storing user information in encrypted form and tools for establishing encrypted tunnels between the virtual server and the client machine.

This requires a server that supports virtualization. Solutions of this kind are offered by Cisco, Microsoft, VMware, Xen, and KVM.

It is also permissible to use a classic server, and provide virtualization on it using a hypervisor.

Any servers with compatible processors are suitable for virtualization of operating systems for x86-64 platforms.

Such a solution will simplify the transition to computing virtualization without making additional financial investments in hardware upgrades.

Scheme of work:

Fig. 11. An example of a "cloud" server

Fig. 12. Server response to equipment failure

At the moment, the market for cloud computing security tools is still quite empty. And this is not surprising. In the absence of a regulatory framework and uncertainty about future standards, development companies do not know what to focus their efforts on.

However, even in such conditions, specialized software and hardware systems appear that make it possible to secure the cloud structure from the main types of threats.

The main threats these systems address are integrity violation, hypervisor compromise, and insiders; the corresponding protection mechanisms are identification, authentication, and encryption.

Accord-V

The Accord-V hardware and software system is designed to protect the virtualization infrastructure of VMware vSphere 4.1, VMware vSphere 4.0, and VMware Infrastructure 3.5.

Accord-V provides protection for all components of the virtualization environment: the ESX servers and the virtual machines themselves, the vCenter management servers, and additional servers running VMware services (for example, VMware Consolidated Backup).

The following protection mechanisms are implemented in the Accord-V hardware and software complex:

· Step-by-step control of the integrity of the hypervisor, virtual machines, files inside virtual machines and infrastructure management servers;

· Differentiation of access for administrators of virtual infrastructure and security administrators;

· Differentiation of user access inside virtual machines;

· Hardware identification of all users and administrators of the virtualization infrastructure.
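The first mechanism in the list above, step-by-step integrity control, reduces in essence to comparing cryptographic digests of watched files against a trusted baseline. A minimal, generic sketch of that idea (the function names are illustrative, not part of Accord-V):

```python
import hashlib
import os
import tempfile

def file_digest(path):
    """SHA-256 digest of a single file's contents."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def snapshot(paths):
    """Record a trusted baseline of digests for the files to watch
    (e.g., hypervisor binaries, VM configuration files,
    management-server files)."""
    return {p: file_digest(p) for p in paths}

def changed(baseline, paths):
    """Return the files whose current digest no longer matches
    the baseline, i.e., candidates for an integrity alert."""
    return [p for p in paths if file_digest(p) != baseline.get(p)]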

INFORMATION ABOUT THE AVAILABILITY OF CERTIFICATES:

The FSTEC of Russia certificate of conformity No. 2598 dated March 20, 2012 certifies that the Accord-V hardware and software complex for protecting information from unauthorized access complies with the requirements of the guidance documents "Computer facilities. Protection against unauthorized access to information. Indicators of security against unauthorized access to information" (State Technical Commission of Russia, 1992) at security class 5, and "Protection against unauthorized access to information. Part 1. Information protection software. Classification by the level of control over the absence of undeclared capabilities" (State Technical Commission of Russia, 1999) at control level 4, as well as with technical specifications TU 4012-028-11443195-2010. It can also be used to build automated systems up to security class 1G inclusive and to protect information in personal data information systems up to class K1 inclusive.

vGate R2

vGate R2 is a certified means of protecting information from unauthorized access and of monitoring the enforcement of information security policies in virtual infrastructures based on VMware vSphere 4 and VMware vSphere 5. The R2 edition is the version of the product intended for protecting information in the virtual infrastructures of companies whose information systems are subject to requirements for certified information security tools.

Allows you to automate the work of administrators to configure and operate the security system.

Helps counteract errors and abuse in virtual infrastructure management.

Allows you to bring the virtual infrastructure in line with legislation, industry standards and world best practices.


Fig. 13. vGate R2 announced capabilities

Thus, to summarize, here are the main tools that vGate R2 possesses to protect the service provider's data center from internal threats emanating from its own administrators:

· Organizational and technical separation of powers for vSphere administrators;

· Allocation of a separate IS administrator role to manage the security of the vSphere-based data center resources;

· Division of the cloud into security zones, within which administrators with the appropriate level of authority operate;

· Integrity control of virtual machines;

· The ability to obtain a report on the security of the vSphere infrastructure at any time, as well as to audit information security events.
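The separation of duties described above (infrastructure administrators versus a dedicated IS administrator) boils down to a role-to-permission mapping enforced on every action. A deliberately simplified, hypothetical model of that idea (role and action names are invented for illustration, not taken from vGate):

```python
# Hypothetical role model: a virtual-infrastructure (VI) admin may
# operate VMs but may not touch security functions; only the IS
# admin manages policy and reads the audit log.
PERMISSIONS = {
    "vi_admin": {"start_vm", "stop_vm", "migrate_vm"},
    "is_admin": {"read_audit_log", "set_policy"},
}

def allowed(role, action):
    """Check whether a role is permitted to perform an action;
    unknown roles get no permissions at all."""
    return action in PERMISSIONS.get(role, set())
```

In a real product this check sits in front of every management operation, so a VI administrator cannot quietly disable auditing or read protected VM data.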

In principle, this is almost everything needed to protect the infrastructure of a virtual data center against internal threats at the level of the virtual infrastructure. Protection is, of course, also needed at the level of hardware, applications, and the guest OS, but that is a separate problem, which is likewise addressed by the products of the Security Code company.

Fig. 14. Server structure.

To secure such a facility, the protections listed in Table 2 must be provided.

To do this, I suggest using the software product vGate R2. It will allow you to solve such problems as:

· Stronger authentication for virtual infrastructure administrators and information security administrators.

· Protection of virtual infrastructure management tools from tampering.

· Protection of ESX-servers from tampering.

· Mandatory access control.

· Monitoring the integrity of the configuration of virtual machines and trusted boot.

· Control of access of VI administrators to data of virtual machines.

· Registration of events related to information security.

· Monitoring the integrity and protection against tampering of the components of the information security system.

· Centralized management and monitoring.

Table 2. Security Needs Mapping for the PaaS Model

The FSTEC of Russia certificate (SVT class 5, NDV level 4) permits the product to be used in automated systems up to security class 1G inclusive and in personal data information systems (ISPDn) up to class K1 inclusive. The solution costs 24,500 rubles per physical processor on the protected host.

In addition, protection against insiders requires a physical security alarm system. The server protection market offers such solutions in abundance; the price of a solution with controlled-area access restriction, an alarm, and a video surveillance system starts at about 200,000 rubles.

For estimation purposes, let us take 250,000 rubles.

To protect the virtual machines from virus infections, McAfee Total Protection for Virtualization will run on a dedicated server core. The solution costs from 42,200 rubles.

Symantec Netbackup will be used to prevent data loss on the storages. It allows you to reliably back up information and system images.

The total cost of implementing such a project will be:

A Microsoft implementation of a similar design solution can be downloaded here: http://www.microsoft.com/en-us/download/confirmation.aspx?id=2494

Conclusion

Cloud technologies are currently one of the most actively developing areas of the IT market. If their growth rate does not slow, by 2015 they will bring the treasuries of European countries more than 170 million euros per year. In our country, cloud technologies are treated with caution, partly because of ossified management attitudes and partly because of a lack of confidence in their security. Yet this class of technology, with all its advantages and disadvantages, is the new locomotive of IT progress.

To the application "on the other side of the cloud," it does not matter at all whether you form your request on a computer with an x86 processor from Intel, AMD, or VIA, or compose it on a phone or smartphone built around an ARM processor (Freescale, OMAP, Tegra). By and large, it will not even matter whether you are running Google Chrome OS, Android, Intel Moblin, Windows CE, Windows Mobile, Windows XP/Vista/7, or something even more exotic, as long as the request is composed correctly and intelligibly, and your system can handle the response it receives.

Security is one of the main issues in cloud computing, and solving it will improve the quality of services in the computing sphere. Much, however, remains to be done in this direction.

In our country, it is worth starting with a unified vocabulary of terms for the entire IT field, then developing standards based on international experience and putting forward requirements for security systems.

Literature

1. Financial Considerations for Government Use of Cloud Computing. Australian Government, 2010.

2. Privacy and Cloud Computing for Australian Government Agencies. Australian Government, 2007.

3. Negotiating the Cloud - Legal Issues in Cloud Computing Agreements. Australian Government, 2009.

4. Journal "Modern Science: Actual Problems of Theory and Practice", 2012.
