How to make a cluster of two computers. Desktop cluster

First, decide which components and resources you will need: one head node, at least a dozen identical compute nodes, an Ethernet switch, a power distribution unit and a rack. Work out the power, cooling and floor-space requirements, decide which IP addresses you will assign to the nodes, which software you will deploy, and which technologies you will use to provide parallel computing capacity (more on this below).

  • Although the hardware itself is expensive, all the programs mentioned in this article are free, and most of them are open source.
  • If you want to find out how fast your supercomputer could theoretically be, use this tool:

Assemble the nodes. You will either need to build the compute nodes yourself or buy pre-assembled servers.

  • Choose server chassis that make the most rational use of space and power and that cool efficiently.
  • Alternatively, you can "recycle" a dozen or so used, somewhat outdated servers - even if their combined weight exceeds that of new components, you will save a decent amount. All processors, network adapters and motherboards should be identical so that the machines work well together. Do not forget RAM and hard drives for every node, and at least one optical drive for the head node.
  • Install the servers in the rack. Start from the bottom so that the rack does not become top-heavy. You will need a friend's help - assembled servers can be very heavy, and sliding them onto the rails that hold them in the rack is quite difficult.

    Install the Ethernet switch next to the rack. It is worth configuring it right away: set a jumbo frame size of 9000 bytes, assign the static IP address you chose in step 1 and turn off unnecessary protocols such as SMTP. A sketch of the matching node-side settings follows below.
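    Since the compute nodes will run Linux (see below), the matching node-side settings can be applied with the iproute2 tools. This is only a sketch: the interface name eth0 and the 192.168.1.0/24 subnet are placeholders, not values taken from this article.

      # enable 9000-byte jumbo frames to match the switch
      ip link set dev eth0 mtu 9000
      # assign a static address from the range chosen in step 1 and bring the link up
      ip addr add 192.168.1.101/24 dev eth0
      ip link set dev eth0 up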

    Install the power distribution unit (PDU, or Power Distribution Unit). Depending on the maximum load your nodes draw, you may need 220-volt power for a high-performance computer.

  • When everything is installed, proceed to configuration. Linux is the de facto standard system for high-performance computing (HPC) clusters - not only is it an ideal environment for scientific computing, but you also do not have to pay to install it on hundreds or even thousands of nodes. Imagine what it would cost to install Windows on all of them!

    • Start by installing the latest BIOS version for the motherboard and the latest firmware from the manufacturer; it should be the same on every server.
    • Install your preferred Linux distribution on every node, and a distribution with a graphical interface on the head node. Popular choices: CentOS, openSUSE, Scientific Linux, Red Hat and SLES.
    • The author highly recommends the Rocks Cluster Distribution. In addition to installing all the cluster programs and tools, Rocks provides an excellent way to quickly "roll out" many copies of the system to identical servers using PXE boot and Red Hat's Kickstart procedure.
  • Install a message-passing interface, a resource manager and the other required libraries. If you did not install Rocks in the previous step, you will have to set up the software for parallel computing by hand; a sample job script follows this list.

    • To begin with, you will need a portable batch system (PBS), for example the Torque Resource Manager, which lets you split and distribute tasks across several machines.
    • Add the Maui Cluster Scheduler on top of Torque to complete the installation.
    • Next, install the message-passing interface, which is needed so that individual processes on each individual node can share data. Open MPI is the easiest option.
    • Do not forget the multi-threaded math libraries and compilers that will "build" your programs for distributed computing. Did I mention you should just install Rocks?
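    As a rough illustration of how Torque ties this together, here is a minimal job script; the job name, the node and core counts and the ./hello_mpi binary are hypothetical, and mpirun is assumed to come from the MPI implementation you installed.

      #!/bin/sh
      #PBS -N hello_mpi
      #PBS -l nodes=2:ppn=4
      #PBS -l walltime=00:10:00
      # run from the directory the job was submitted from
      cd $PBS_O_WORKDIR
      # launch the (hypothetical) MPI program on 8 processes
      mpirun -np 8 ./hello_mpi

    Submit it with "qsub job.sh"; Maui then decides when and where it runs.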
  • Connect the computers into a network. The head node sends tasks to the compute nodes, which in turn must return the results and also exchange messages with each other. The faster all of this happens, the better.

    • Use a private Ethernet network to connect all the nodes into the cluster.
    • The head node can also act as an NFS, PXE, DHCP, TFTP and NTP server on that Ethernet network.
    • You must separate this network from the public one to make sure its packets do not collide with other traffic on the LAN. A sketch of a minimal name-resolution and NFS setup follows this list.
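    A minimal sketch of what the private network can look like from the head node, assuming a hypothetical 192.168.1.0/24 subnet and host names; the NFS export line is just one common way to share home directories with the nodes.

      # /etc/hosts (copied to every node)
      192.168.1.100  head
      192.168.1.101  node01
      192.168.1.102  node02

      # /etc/exports on the head node: share /home with the private subnet over NFS
      /home 192.168.1.0/24(rw,sync,no_root_squash)

    After editing /etc/exports, run "exportfs -a" on the head node.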
  • Test the cluster. The last thing to do before giving users access to the computing facilities is to benchmark its performance. The HPL (High-Performance Linpack) benchmark is a popular way to measure the computing speed of a cluster. You need to compile it from source with the highest degree of optimization your compiler allows for the architecture you have chosen; a sketch of the build and run follows this list.

    • You must, of course, compile with every optimization option available for the platform you have chosen. For example, with an AMD CPU, compile with Open64 and an appropriate -O optimization level.
    • Compare your results with Top500.org to pit your cluster against the 500 fastest supercomputers in the world!
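    A sketch of a typical HPL build and run, assuming an MPI library and a BLAS library are already installed; the HPL version, the template makefile, the architecture name and the process count are examples and must match your own toolchain.

      tar xzf hpl-2.3.tar.gz && cd hpl-2.3
      # start from the closest template and edit compiler, MPI and BLAS paths in it
      cp setup/Make.Linux_PII_CBLAS Make.mycluster
      make arch=mycluster
      cd bin/mycluster
      # edit HPL.dat first: problem size N, block size NB, P x Q process grid
      mpirun -np 16 ./xhpl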

    Creating a cluster based on Windows 2000/2003. Step by step

    A cluster is a group of two or more servers that work together to provide uninterrupted operation of a set of applications or services and that clients perceive as a single system. Cluster nodes are joined together by network hardware, shared resources and server software.

    Microsoft Windows 2000/2003 supports two clustering technologies: Network Load Balancing clusters and server clusters.

    In the first case (load-balancing clusters), the Network Load Balancing service gives services and applications a high level of reliability and scalability by combining up to 32 servers into a single cluster. Client requests are distributed among the cluster nodes transparently. If a node fails, the cluster automatically reconfigures itself and switches the clients to one of the available nodes. This configuration is also called Active-Active mode, because one application runs on several nodes at once.

    A server cluster distributes its load among the cluster's servers, with each server carrying its own load. If a node in the cluster fails, the applications and services configured to run in the cluster are transparently restarted on one of the remaining nodes. Server clusters use shared disks to exchange data inside the cluster and to provide transparent access to clustered applications and services. They require special hardware, but this technology provides a very high level of reliability, because the cluster itself has no single point of failure. This configuration is also called Active-Passive mode: an application in the cluster runs on one node, with the shared data located on external storage.

    A cluster approach to organising the internal network gives the following advantages:

    • High availability: if a service fails on a cluster node that is configured for joint operation in the cluster, the cluster software restarts that application on another node. Users will either notice only a short delay in whatever operation they were performing or will not notice the failure at all.
    • Scalability: for applications running in the cluster, adding servers to the cluster means an increase in capability - fault tolerance, load distribution, and so on.
    • Manageability: using a single interface, administrators can manage applications and services, define how the cluster reacts to a node failure, distribute the load among the cluster nodes and drain the load from a node for preventive maintenance.

    In this article I will try to pull together my experience in building Windows-based cluster systems and give a short step-by-step guide to creating a two-node server cluster with shared data storage.

    Software Requirements

    • Microsoft Windows 2000 Advanced (or Datacenter) Server, or Microsoft Windows 2003 Server Enterprise Edition, installed on all cluster servers.
    • An installed DNS service. Let me explain. If you build the cluster on two domain controllers, it is much more convenient to use the DNS service, which you install in any case when creating Active Directory. If you build the cluster on two servers that are members of a Windows NT domain, you will have to use either the WINS service or map machine names to addresses in the hosts file.
    • Terminal Services for remote server management. This is not required, but if Terminal Services is available it is convenient to manage the servers from your own workstation.

    Hardware requirements

    • It is best to choose the hardware for the cluster nodes from the Cluster Service Hardware Compatibility List (HCL). Per Microsoft's recommendations, the hardware must be tested for compatibility with Cluster Services.
    • Accordingly, you will need two servers, each with two network adapters, and a SCSI adapter with an external interface for connecting an external disk array.
    • An external array with two external interfaces. Each cluster node is connected to one of the interfaces.

    Comment: to build a two-node cluster it is not at all necessary to have two absolutely identical servers. After a failure on the first server you will have some time to analyse and restore the operation of the main node, while the second node keeps the system as a whole available. However, this does not mean that the second server will stand idle. Both cluster nodes can happily go about their business and solve different tasks, while a particular critical resource is configured to run in the cluster, increasing its fault tolerance.

    Network Settings Requirements

    • Unique NetBIOS name for cluster.
    • Five unique static IP addresses: two for the network adapters on the private cluster network, two for the network adapters on the public network and one for the cluster itself.
    • A domain account for the Cluster Service.
    • All cluster nodes must be either member servers in the domain or domain controllers.
    • Each server must have two network adapters: one for the connection to the public network, the second for data exchange between the cluster nodes (the private network).

    Comment: per Microsoft's recommendations, your server should have two network adapters, one for the public network and one for data exchange inside the cluster. Can a cluster be built on a single interface? Probably yes, but I have not tried it.

    Cluster installation

    When designing a cluster, you must understand that using one physical network both for cluster traffic and for the local network increases the failure rate of the whole system. It is therefore highly desirable to use a dedicated subnet on a separate physical network segment for cluster data exchange, and another subnet for the local network. This increases the reliability of the system as a whole.

    When building a two-node cluster, a single switch is used for the public network, and the two cluster servers can be connected to each other directly with a crossover cable, as shown in the figure.

    The installation of a two-node cluster can be divided into five steps:

    • Installing and configuring nodes in a cluster.
    • Installing and configuring a shared resource.
    • Verify disk configuration.
    • Configuring the first cluster node.
    • Configuring the second node in the cluster.

    This step-by-step guide will allow you to avoid errors during installation and save a lot of time. So, let's begin.

    Installing and configuring nodes

    Let us simplify the task a little. Since all cluster nodes must be either domain members or domain controllers, the first cluster node will be the root holder of the AD (Active Directory) directory and will also run the DNS service. The second cluster node will be a full domain controller.

    I will skip the installation of the operating system, assuming you will have no problems with it. But I do want to clarify the configuration of the network devices.

    Network settings

    Before starting the installation of the cluster and Active Directory, you must complete the network settings. I will divide the network settings into four stages. For name resolution it is desirable to have a DNS server that already holds records for the cluster servers.

    Each server has two network cards. One card will be used for data exchange between the cluster nodes, the second will serve the clients on our network. Accordingly, the first will be called Private Cluster Connection and the second Public Cluster Connection.

    The network adapter settings are identical on both servers. Accordingly, I will show how to configure one network adapter and give a table with the network settings of all four network adapters on the two cluster nodes. To configure a network adapter, perform the following steps:

    • My Network Places → Properties
    • Private Cluster Connection → Properties → Configure → Advanced

      This item needs explanation. The point is that, per Microsoft's firm recommendation, a fixed, optimal adapter speed should be set on all network adapters of the cluster nodes, as shown in the following figure.

    • Internet Protocol (TCP/IP) → Properties → Use the following IP: 192.168.30.1

      (For the second node, use the address 192.168.30.2.) Enter the subnet mask 255.255.255.252. As the DNS server address for both nodes, use 192.168.100.1.

    • Additionally, on the Advanced → WINS tab, select Disable NetBIOS over TCP/IP. For the network adapters on the public network (Public), this step is omitted.
    • Do the same with the network card for the LAN connection, Public Cluster Connection. Use the addresses given in the table. The only difference in the configuration of the two network cards is that Public Cluster Connection does not need NetBIOS over TCP/IP turned off.

    To configure all the network adapters on the cluster nodes, use the following table (a command-line sketch follows it):

    Node  Network name                 IP address      Mask             DNS server
    1     Public Cluster Connection    192.168.100.1   255.255.255.0    192.168.100.1
    1     Private Cluster Connection   192.168.30.1    255.255.255.252  192.168.100.1
    2     Public Cluster Connection    192.168.100.2   255.255.255.0    192.168.100.1
    2     Private Cluster Connection   192.168.30.2    255.255.255.252  192.168.100.1
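    On Windows 2000/2003 the same addresses can also be set from the command line with netsh. This is only a sketch for node 1 (repeat it with the node 2 addresses from the table), and it assumes the connections have already been renamed as described above.

      rem Node 1; use the node 2 addresses from the table on the second server.
      netsh interface ip set address "Private Cluster Connection" static 192.168.30.1 255.255.255.252
      netsh interface ip set dns "Private Cluster Connection" static 192.168.100.1
      netsh interface ip set address "Public Cluster Connection" static 192.168.100.1 255.255.255.0
      netsh interface ip set dns "Public Cluster Connection" static 192.168.100.1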

    Installing Active Directory.

    Since the goal of this article is not to describe the installation of Active Directory, I will omit this item; plenty of recommendations and books have been written about it. Pick a domain name, such as mycompany.ru, install Active Directory on the first node, and add the second node to the domain as a domain controller. When you are done, check the configuration of the servers and of Active Directory.

    Setting Cluster User Account

    • Start → Programs → Administrative Tools → Active Directory Users and Computers
    • Add a new user, for example, ClusterService.
    • Check the boxes: User Cannot Change Password and Password Never Expires.
    • Also add this user to the Administrators group and grant it the Log on as a service right (rights are assigned in Local Security Policy and Domain Controller Security Policy). A command-line sketch follows this list.
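    The same account can also be created from the command line with the built-in net tools; a sketch, with MYCOMPANY standing in for the NetBIOS name of the example domain. The Log on as a service right still has to be granted through the security policy consoles mentioned above.

      rem create the account (you will be prompted for the password) and forbid password changes
      net user ClusterService * /add /passwordchg:no /domain
      rem add it to the local Administrators group on each node
      net localgroup Administrators MYCOMPANY\ClusterService /add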

    Setting up an external data array

    When configuring an external data array for a cluster, remember that before installing Cluster Service on the nodes you must first configure the disks on the external array, and only then install Cluster Service, first on the first node and only then on the second. If you break this order, the installation will fail and you will not reach the goal. Can it be fixed? Probably yes; when the error appears you will have time to correct the settings. But Microsoft is such a mysterious thing that you never know which rake you will step on next. It is easier to keep step-by-step instructions in front of you and not forget to press the buttons. Step by step, configuring the external array looks like this:

    1. Both servers must be turned off, the external array is enabled, connected to both servers.
    2. Turn on the first server. We get access to the disk array.
    3. Check that the external disk array is set up as a Basic disk. If it is not, convert it using the Revert to Basic Disk option.
    4. On the external disk, through Computer Management → Disk Management, create a small partition. Microsoft recommends at least 50 MB; I recommend creating a partition of 500 MB or a little more, which is quite enough to hold the cluster data. The partition must be formatted with NTFS.
    5. On both cluster nodes this partition will be given the same letter, for example Q. Accordingly, when creating the partition on the first server, select Assign the following drive letter: Q.
    6. You can partition the remaining space on the disk as you wish. Of course, it is highly desirable to use the NTFS file system. For example, when configuring the DNS and WINS services, their main databases will be moved to a shared disk (not to the system volume Q, but to the second partition you created). For security reasons it will also be more convenient for you to use NTFS volumes.
    7. Close Disk Management and check access to the newly created partition. For example, you can create a text file test.txt on it, write to it and delete it. If everything went well, the configuration of the external array on the first node is finished.
    8. Now turn off the first server. The external array must remain powered on. Turn on the second server and check access to the created partition. Also check that the letter assigned to the partition is identical to the one we chose, that is, Q.

    This completes the configuration of the external array. A scripted variant is sketched below.
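    On Windows Server 2003 the quorum partition can also be created with a diskpart script (Windows 2000 only ships the Disk Management snap-in, so use the GUI steps above there). The disk number and the volume label are examples.

      rem quorum.txt - diskpart script
      rem select the external array (check the number with "list disk" first)
      select disk 1
      create partition primary size=500
      assign letter=Q

    Run it and format the new partition with NTFS:

      diskpart /s quorum.txt
      format Q: /FS:NTFS /Q /V:Quorum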

    Cluster Service Software Installation

    Configuration of the first cluster node

    Before you start installing the Cluster Service software, all cluster nodes must be turned off and all external arrays must be turned on. Let us turn to the configuration of the first node: the external array is on, the first server is on. The entire installation is done through the Cluster Service Configuration Wizard:


    Configuration of the second cluster node

    To install and configure the second cluster node, the first node must be turned on and all the shared disks must be online. The procedure for setting up the second node closely resembles the one described above, with minor differences. Use the following instructions:

    1. In the Create or Join a Cluster dialog box, select The second or next node in the cluster and click Next.
    2. Enter the cluster name we set earlier (MyCluster in this example) and click Next.
    3. After the second node connects to the cluster, the Cluster Service Configuration Wizard automatically takes all the settings from the main node. To start the Cluster Service, use the account name we created earlier.
    4. Enter the account password and click Next.
    5. In the next dialog box, click Finish to complete the installation.
    6. Cluster Service will be launched on the second node.
    7. Close the Add / Remove Programs window.

    To install additional cluster nodes, use the same instructions. A few commands for checking the result are sketched below.
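    To check the result from any node you can use the cluster.exe command-line tool installed together with Cluster Service; the cluster name follows the MyCluster example above, and the exact group name may differ on your system.

      rem list the nodes and their state, then the resource groups and their owners
      cluster /cluster:MyCluster node
      cluster /cluster:MyCluster group
      rem as a quick test, fail the core group over to the other node
      cluster /cluster:MyCluster group "Cluster Group" /move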

    Postscript, thanks

    So that you do not get lost among all the stages of the cluster installation, here is a small table that lists the main steps.

    Step    Node 1    Node 2    External array

    Working on one machine is no longer fashionable, or: building a cluster at home.

    1. Introduction

    Many of you have several Linux machines on your local network whose processors sit idle most of the time. Many have also heard of systems in which machines are combined into a single supercomputer. But few have actually tried such experiments at work or at home. Let's try to put together a small cluster. By building a cluster you can genuinely speed up some of your tasks, for example compilation or running several resource-hungry processes at the same time. In this article I will try to explain how, without much effort, you can join the machines of a local network into a single cluster based on MOSIX.

    2. What MOSIX is

    MOSIX is a patch for the Linux kernel plus a set of utilities that lets processes on your machine move (migrate) to other nodes of the local network. You can get it at http://www.mosix.cs.huji.ac.il; it is distributed as source code under the GPL license. Patches exist for all stable versions of the Linux kernel.

    3. Installing the software

    Before you start the installation, I recommend downloading from the MOSIX site not only MOSIX itself but also the accompanying utilities - mproc, mexec and others.
    The MOSIX archive contains an installation script, mosix_install. Do not forget to unpack the kernel sources into /usr/src/linux-*, for example /usr/src/linux-2.2.13, then run mosix_install and answer all of its questions, specifying the boot manager (LILO), the path to the kernel sources and the runlevels.
    When configuring the kernel, enable the options CONFIG_MOSIX, CONFIG_BINFMT_ELF and CONFIG_PROC_FS. All of these options are described in detail in the MOSIX installation guide.
    Installed? Well then, reboot your Linux with the new kernel, whose name will look very much like mosix-2.2.13. The whole sequence is sketched below.
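    Condensed into commands, the installation described above might look roughly like this; the archive names are examples, and mosix_install itself asks the questions about the boot manager, the source path and the runlevels.

      # unpack the kernel sources where mosix_install expects them
      cd /usr/src && tar xzf linux-2.2.13.tar.gz
      # unpack the MOSIX archive (name is an example) and run its installer
      tar xzf MOSIX-*.tar.gz && cd MOSIX-*
      ./mosix_install
      # during kernel configuration enable CONFIG_MOSIX, CONFIG_BINFMT_ELF and CONFIG_PROC_FS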

    4. Configuration

    A freshly installed MOSIX has no idea which machines on your network it should join into the cluster. Configuring this is very simple. If you have just installed MOSIX and your distribution is SuSE- or RedHat-compatible, go to the /etc/rc.d/init.d directory and give the command mosix start. On the first run this script asks you to configure MOSIX and opens a text editor on the file /etc/mosix.map, which holds the list of the cluster's nodes. If you have only two or three machines and their IP addresses follow one another in sequence, write it like this:



    1 10.152.1.1 5

    Here the first parameter is the number of the starting node, the second is the IP address of the first node, and the last is the number of nodes counting from the current one. That is, we now have five nodes in the cluster, whose IP addresses end in 1, 2, 3, 4 and 5.
    Or another example:

    Node number   IP              Nodes from this one
    ______________________________________
    1             10.152.1.1      1
    2             10.150.1.55     2
    4             10.150.1.223    1

    With this configuration we get the following layout:
    IP of node 1: 10.150.1.1
    IP of node 2: 10.150.1.55
    IP of node 3: 10.150.1.56
    IP of node 4: 10.150.1.223
    Now you need to install MOSIX on every machine of the future cluster and create the same configuration file /etc/mosix.map on each of them.

    After restarting MOSIX, your machine will already be working in the cluster, which you can see by starting the monitor with the mon command. If you see only your own machine in the monitor, or do not see any at all, then, as they say, start digging. Most likely the error is in /etc/mosix.map.
    So, you can see the cluster but have not conquered it yet. What next? Next is very simple :-) - build the utilities for working with the modified /proc from the mproc package. In particular, that package contains a nice modification of top called mtop, which adds display of the node, sorting by node, moving a process from the current node to another one, and setting the minimum node CPU load above which processes start migrating to other MOSIX nodes.
    Start mtop, pick any non-sleeping process you like (I recommend starting bzip) and boldly press the "g" key on your keyboard; at the prompt, enter the PID of the process chosen as the victim and then the number of the node you want to send it to. After that, watch the results shown by the mon command: that machine should start taking on the load of the chosen process.
    Keep watching mtop: the #N column shows the number of the node where the process is running.
    But that is not all: surely you do not really want to send processes to other nodes by hand? I did not. MOSIX has decent built-in load balancing inside the cluster, which lets it spread the load more or less evenly across all the nodes. This, however, is where we have to do some work. To begin with, I will explain how to do the fine tuning (tune) for two cluster nodes, during which MOSIX gathers information about the speed of the processors and the network.
    Remember once and for all: tune may be run only in single-user mode. Otherwise you will either get a not entirely correct result or your machine may simply hang.
    So, let us run tune. After switching the operating system into single mode, for example with the command init 1 or init s, run the prep_tune script, which brings up the network interfaces and starts MOSIX. After that, on one of the machines run tune, give it the number of the other node to tune against and wait for the result: the utility will ask you to enter six numbers obtained by running the command tune -a <node> on the other node. The operation then has to be repeated on the other node with tune -a <node>, and the resulting six numbers entered on the first one. After this tuning, the file /etc/overheads should appear on your system, containing the information for MOSIX as a set of numeric values. If for some reason tune could not create it, simply copy the file mosix.cost from the current directory to /etc/overheads. That helps ;-). The whole procedure is sketched below.
    When tuning a cluster of more than two machines, use the tune_kernel utility, which also ships with MOSIX. It lets you configure the cluster in a simpler and more familiar way by answering a few questions and running the tuning against two machines of the cluster.
    By the way, from my own experience I can say that while tuning the cluster I recommend not loading the network at all; on the contrary, suspend all active operations on the local network.

    5. Managing the cluster

    For managing a cluster node there is a small set of commands, among them:

    mosctl - control over a node. Lets you change node parameters such as block, stay, lstay, delay and so on. Let us look at a few of this utility's parameters (usage examples follow below):
    stay - stops processes migrating from the current machine to other nodes. Cancelled with the nostay or -stay parameter.
    lstay - forbids only local processes from migrating; processes that arrived from other machines may keep doing so. Cancelled with the nolstay or -lstay parameter.
    block - forbids remote/guest processes from running on this node. Cancelled with the noblock or -block parameter.
    bring - brings back all processes of the current node that are running on other machines of the cluster. This parameter may not take effect until the migrated process receives an interrupt from the system.
    setdecay - sets the time after which a process begins to migrate. After all, if a process runs for less than a second, there is little point in moving it to another machine on the network. It is exactly this time that is set with mosctl and the setdecay parameter. Example:
    mosctl setdecay 1 500 200
    sets the migration delay to other nodes to 500 milliseconds if the process was started as "slow" and to 200 milliseconds for "fast" processes. Note that the slow parameter must always be greater than or equal to the fast parameter.
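    A few usage examples built only from the mosctl parameters listed above:

      mosctl stay                 # keep local processes from migrating away
      mosctl nostay               # allow migration again
      mosctl lstay                # pin only local processes; guest processes may still migrate
      mosctl block                # refuse guest processes on this node
      mosctl bring                # pull our migrated processes back home
      mosctl setdecay 1 500 200   # 500 ms delay for "slow" processes, 200 ms for "fast"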

    mosrun - starts an application in the cluster. For example, mosrun -e -j5 make starts make on node 5 of the cluster, and all of its child processes will also run on node 5. There is one nuance here, and a rather significant one: if the child processes finish faster than the delay set with mosctl, they will not migrate to other nodes of the cluster. mosrun has quite a few other interesting parameters; you can learn about them in detail from its manual page (man mosrun).

    mon - as we already know, the cluster monitor, which in pseudo-graphical form shows the load of every working node of your cluster and the amount of free and used memory on the nodes, and gives a lot of other, no less interesting information.

    mtop - a version of the top command modified for use on cluster nodes. It shows dynamic information about the processes running on this node and the nodes your processes have migrated to.

    mps - likewise a modified version of the ps command, with one more field added: the number of the node a process has migrated to.

    Those are, in my view, all the main utilities. Of course you can get by even without them, for example by using /proc/mosix to control the cluster. Besides the basic information about the node's settings, the processes started from other nodes and so on, it also lets you change some of the parameters.

    6. Experimenting

    Unfortunately, I did not manage to make a single process run on several nodes at once. The most I achieved while experimenting with the cluster was using another node to run resource-hungry processes.
    Let us look at one of the examples.
    Suppose two machines (two nodes) are running in the cluster, one of them number 1 (a 366 MHz Celeron), the other number 5 (a PIII 450). We will experiment on node 5; node 1 was idle at the time. ;-)
    So, on node 5 we start the crark utility to brute-force the password of a rar archive. If any of you have tried working with such utilities, you know that the password-cracking process "eats" up to 99 percent of the CPU. We then observe that the process stays on this node, node 5. Reasonable, since this node's performance is almost twice that of node 1.
    Next we simply start building KDE 2.0. Looking at the process table, we see that crark has successfully migrated to node 1, freeing up the processor and memory (yes, memory is freed too) for make. And as soon as make finished its work, crark returned home to node 5.
    An interesting effect occurs if you start crark on the slower node 1.
    There we observe practically the opposite result: the process immediately migrates to node 5, the faster machine. It comes back only when the owner of the fifth computer starts doing something on the system.

    7. Usage

    Finally, let us work out why and how to use the cluster in everyday life.
    First of all, remember once and for all: a cluster pays off only when your network contains a number of machines that often sit idle and you want to use their resources, for example for building KDE or for any other serious workloads. With a cluster of 10 machines you can compile up to 10 heavy programs, in the same C++, at once. Or brute-force some password without interrupting that process for a second, regardless of the load on your own computer.
    And in general, it is simply fun ;-).

    8. Conclusion

    In conclusion I want to say that this article does not cover all of MOSIX's capabilities, simply because I have not gotten to them yet. If I do, expect a sequel. :-)