Standard RAID levels
In computer storage, the standard RAID levels comprise a basic set of RAID (redundant array of independent disks) configurations that employ the techniques of striping, mirroring, or parity to create large reliable data stores from multiple general-purpose computer hard disk drives (HDDs). The most common types are RAID 0 (striping), RAID 1 and its variants (mirroring), RAID 5 (distributed parity), and RAID 6 (dual parity). RAID levels and their associated data formats are standardized by the Storage Networking Industry Association (SNIA) in the Common RAID Disk Data Format (DDF) standard.[1] While most RAID levels can provide good protection against and recovery from hardware defects or defective sectors/read errors (hard errors), they do not provide any protection against data loss due to catastrophic failures (fire, water) or soft errors such as user error, software malfunction, or malware infection. For valuable data, RAID is only one building block of a larger data loss prevention and recovery scheme – it cannot replace a backup plan.

RAID 0

RAID 0 (also known as a stripe set or striped volume) splits ("stripes") data evenly across two or more disks, without parity information, redundancy, or fault tolerance. Since RAID 0 provides no fault tolerance or redundancy, the failure of one drive will cause the entire array to fail; because data is striped across all disks, the failure results in total data loss. This configuration is typically implemented with speed as the intended goal.[2][3] RAID 0 is normally used to increase performance, although it can also be used as a way to create a large logical volume out of two or more physical disks.[4]

A RAID 0 setup can be created with disks of differing sizes, but the storage space added to the array by each disk is limited to the size of the smallest disk. For example, if a 120 GB disk is striped together with a 320 GB disk, the size of the array will be 120 GB × 2 = 240 GB. However, some RAID implementations allow the remaining 200 GB to be used for other purposes.

In a two-disk layout, the data is distributed into Ax stripes, with A1:A2 forming the first stripe, A3:A4 the second one, and so on. Once the stripe size is defined during the creation of a RAID 0 array, it needs to be maintained at all times. Since the stripes are accessed in parallel, an n-drive RAID 0 array appears as a single large disk with a data rate n times higher than the single-disk rate.

Performance

A RAID 0 array of n drives provides data read and write transfer rates up to n times as high as the individual drive rates, but with no data redundancy. As a result, RAID 0 is primarily used in applications that require high performance and are able to tolerate lower reliability, such as in scientific computing[5] or computer gaming.[5] Some benchmarks of desktop applications show RAID 0 performance to be marginally better than a single drive.[6][7] Another article examined these claims and concluded that "striping does not always increase performance (in certain situations it will actually be slower than a non-RAID setup), but in most situations it will yield a significant improvement in performance".[8][9] Synthetic benchmarks show different levels of performance improvements when multiple HDDs or SSDs are used in a RAID 0 setup, compared with single-drive performance. However, some synthetic benchmarks also show a drop in performance for the same comparison.[10][11]
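The striping arithmetic described above can be sketched in a few lines of Python. This is a minimal illustration rather than any particular RAID implementation; the helper names and the block-to-disk mapping convention are assumptions made for the example, and the 120 GB/320 GB figures simply mirror the text.

# Minimal sketch of RAID 0 striping arithmetic (illustrative only, not a real RAID driver).

def raid0_capacity(disk_sizes_gb):
    """Usable capacity: every disk contributes only as much as the smallest member."""
    return min(disk_sizes_gb) * len(disk_sizes_gb)

def raid0_locate(block, n_disks):
    """Map a logical stripe-unit number to (disk index, offset on that disk)."""
    return block % n_disks, block // n_disks

# The 120 GB + 320 GB example from the text: usable capacity is 2 x 120 GB = 240 GB.
print(raid0_capacity([120, 320]))               # -> 240

# With two disks, consecutive logical units alternate between disk 0 and disk 1.
print([raid0_locate(b, 2) for b in range(4)])   # -> [(0, 0), (1, 0), (0, 1), (1, 1)]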
RAID 1

(See also: RAID 1E.)

RAID 1 consists of an exact copy (or mirror) of a set of data on two or more disks; a classic RAID 1 mirrored pair contains two disks. This configuration offers no parity, striping, or spanning of disk space across multiple disks, since the data is mirrored on all disks belonging to the array, and the array can only be as big as the smallest member disk. This layout is useful when read performance or reliability is more important than write performance or the resulting data storage capacity.[12][13] The array will continue to operate so long as at least one member drive is operational.[15]

Performance

Any read request can be serviced and handled by any drive in the array; thus, depending on the nature of the I/O load, random read performance of a RAID 1 array may equal up to the sum of each member's performance (a theoretical maximum; in practice it can be as low as single-disk performance), while the write performance remains at the level of a single disk. However, if disks with different speeds are used in a RAID 1 array, overall write performance is equal to the speed of the slowest disk.[13][14] Synthetic benchmarks show varying levels of performance improvements when multiple HDDs or SSDs are used in a RAID 1 setup, compared with single-drive performance. However, some synthetic benchmarks also show a drop in performance for the same comparison.[10][11]

RAID 2

RAID 2, which is rarely used in practice, stripes data at the bit (rather than block) level, and uses a Hamming code for error correction. The disks are synchronized by the controller to spin at the same angular orientation (they reach index at the same time), so RAID 2 generally cannot service multiple requests simultaneously.[15][16] However, with a high-rate Hamming code, many spindles would operate in parallel to transfer data simultaneously, so that "very high data transfer rates" are possible,[17] as for example in the DataVault, where 32 data bits were transmitted simultaneously. With all hard disk drives implementing internal error correction, the complexity of an external Hamming code offered little advantage over parity, so RAID 2 has been rarely implemented; it is the only original level of RAID that is not currently used.[15][16]

RAID 3

RAID 3, which is rarely used in practice, consists of byte-level striping with a dedicated parity disk. One of the characteristics of RAID 3 is that it generally cannot service multiple requests simultaneously, because any single block of data will, by definition, be spread across all members of the set and will reside in the same physical location on each disk. Therefore, any I/O operation requires activity on every disk and usually requires synchronized spindles. This makes it suitable for applications that demand the highest transfer rates in long sequential reads and writes, for example uncompressed video editing. Applications that make small reads and writes from random disk locations will get the worst performance out of this level.[16] The requirement that all disks spin synchronously (in lockstep) added design considerations to a level that provided no significant advantages over other RAID levels, so RAID 3 quickly fell out of use and is now obsolete.[15] Both RAID 3 and RAID 4 were quickly replaced by RAID 5.[18] RAID 3 was usually implemented in hardware, and the performance issues were addressed by using large disk caches.[16]
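The dedicated parity used by RAID 3 (and, at block granularity, by RAID 4 below) is a plain XOR across the data disks. The following sketch, with hypothetical helper names, shows how a parity byte is computed and how the byte of a failed disk is rebuilt from the survivors; it illustrates the concept only, not a specific implementation.

# Parity and reconstruction for one byte position across the data disks (sketch).
from functools import reduce

def parity(data_bytes):
    """Parity byte: XOR of the corresponding byte on every data disk."""
    return reduce(lambda a, b: a ^ b, data_bytes, 0)

def rebuild_missing(surviving_bytes, parity_byte):
    """Rebuild the failed disk's byte: XOR of the parity byte with all surviving data bytes."""
    return reduce(lambda a, b: a ^ b, surviving_bytes, parity_byte)

# Three data disks plus one dedicated parity disk, one byte position shown.
data = [0x4F, 0xA2, 0x17]
p = parity(data)                       # stored on the parity disk

# Disk 1 (holding 0xA2) fails; its byte is recovered from the others plus parity.
recovered = rebuild_missing([data[0], data[2]], p)
assert recovered == 0xA2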
RAID 4

RAID 4 consists of block-level striping with a dedicated parity disk. As a result of its layout, RAID 4 provides good performance of random reads, while the performance of random writes is low due to the need to write all parity data to a single disk.[19] As an example of the layout, a read request for block A1 would be serviced by disk 0. A simultaneous read request for block B1 would have to wait, but a read request for B2 could be serviced concurrently by disk 1.

RAID 5

RAID 5 consists of block-level striping with distributed parity. Unlike in RAID 4, parity information is distributed among the drives. It requires that all drives but one be present to operate. Upon failure of a single drive, subsequent reads can be calculated from the distributed parity such that no data is lost.[20] RAID 5 requires at least three disks.[21]

In comparison to RAID 4, RAID 5's distributed parity evens out the stress of a dedicated parity disk among all RAID members. Additionally, write performance is increased, since all RAID members participate in the serving of write requests. Although it will not be as efficient as a striping (RAID 0) setup, because parity must still be written, parity writing is no longer a bottleneck.[22]

Since parity calculation is performed on the full stripe, small changes to the array experience write amplification: in the worst case, when a single logical sector is to be written, the original sector and the corresponding parity sector need to be read, the original data is removed from the parity, the new data is calculated into the parity, and both the new data sector and the new parity sector are written.
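The small-write (read-modify-write) sequence just described reduces to XOR arithmetic: the new parity is the old parity with the old data XORed out and the new data XORed in. A minimal sketch, with hypothetical names, for a single sector:

def raid5_small_write(old_data, new_data, old_parity):
    """Read-modify-write parity update for one sector.

    new_parity = old_parity XOR old_data XOR new_data:
    XORing out the old data removes it from the parity,
    XORing in the new data adds it back.
    """
    return bytes(p ^ od ^ nd for p, od, nd in zip(old_parity, old_data, new_data))

# One 4-byte "sector" striped with two other data sectors; parity is their XOR.
d0, d1, d2 = b"\x10\x20\x30\x40", b"\x01\x02\x03\x04", b"\xAA\xBB\xCC\xDD"
parity = bytes(a ^ b ^ c for a, b, c in zip(d0, d1, d2))

# Overwrite d1 with new contents and update parity without touching d0 or d2.
new_d1 = b"\xFF\xEE\xDD\xCC"
parity = raid5_small_write(d1, new_d1, parity)

# The updated parity still equals the XOR of the current data sectors.
assert parity == bytes(a ^ b ^ c for a, b, c in zip(d0, new_d1, d2))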
RAID 6

RAID 6 extends RAID 5 by adding another parity block; thus, it uses block-level striping with two parity blocks distributed across all member disks.[23]

According to the Storage Networking Industry Association (SNIA), the definition of RAID 6 is: "Any form of RAID that can continue to execute read and write requests to all of a RAID array's virtual disks in the presence of any two concurrent disk failures. Several methods, including dual check data computations (parity and Reed–Solomon), orthogonal dual parity check data and diagonal parity, have been used to implement RAID Level 6."[24]

Performance

RAID 6 does not have a performance penalty for read operations, but it does have a performance penalty on write operations because of the overhead associated with parity calculations. Performance varies greatly depending on how RAID 6 is implemented in the manufacturer's storage architecture—in software, firmware, or by using firmware and specialized ASICs for intensive parity calculations. RAID 6 can read up to the same speed as RAID 5 with the same number of physical drives.[25]

When either diagonal or orthogonal dual parity is used, a second parity calculation is necessary for write operations. This doubles CPU overhead for RAID 6 writes versus single-parity RAID levels. When a Reed–Solomon code is used, the second parity calculation is unnecessary. Reed–Solomon has the advantage of allowing all redundancy information to be contained within a given stripe.

Simplified parity example

Suppose we would like to distribute our data over n chunks. Our goal is to define two parity values P and Q, known as syndromes, resulting in a system of n + 2 physical drives that is resilient to the loss of any two of them. In order to generate more than a single independent syndrome, we will need to perform our parity calculations on data chunks of size k > 1. A typical choice in practice is a chunk size of k = 8, i.e. striping the data per byte. We will denote the base-2 representation of a data chunk D as d_{k−1} d_{k−2} ... d_0, where each d_i is either 0 or 1.

If we are using a small number of chunks n ≤ k, we can use a simple parity computation, which will help motivate the use of the Reed–Solomon system in the general case. For our first parity value P, we compute the simple XOR of the data across the stripes, as with RAID 5. This is written

    P = D_0 ⊕ D_1 ⊕ ... ⊕ D_{n−1},

where ⊕ denotes the XOR operator. The second parity value is analogous, but with each data chunk bit-shifted a different amount. Writing shift^i(D) for the chunk D bit-shifted by i positions, we define

    Q = shift^0(D_0) ⊕ shift^1(D_1) ⊕ ... ⊕ shift^{n−1}(D_{n−1}).

In the event of a single drive failure, the data can be recomputed from P just like with RAID 5. We will show that we can also recover from the simultaneous failure of two drives. If we lose a data chunk and P, we can recover from Q and the remaining data by cancelling the surviving terms out of Q (x ⊕ x = 0). Suppose that, on a system of n chunks, the drive containing chunk D_i has failed. We can compute

    Q ⊕ shift^0(D_0) ⊕ ... ⊕ shift^{i−1}(D_{i−1}) ⊕ shift^{i+1}(D_{i+1}) ⊕ ... ⊕ shift^{n−1}(D_{n−1}) = shift^i(D_i)

and recover the lost data D_i by undoing the bit shift. We can also recover from the failure of two data disks by computing the XOR of P and of Q with the remaining data. If, in the previous example, chunk D_j had been lost as well, we would compute

    P ⊕ (⊕_{ℓ ≠ i,j} D_ℓ) = D_i ⊕ D_j
    Q ⊕ (⊕_{ℓ ≠ i,j} shift^ℓ(D_ℓ)) = shift^i(D_i) ⊕ shift^j(D_j).

On a bitwise level, this represents a system of 2k equations in 2k unknowns, which uniquely determines the lost data.

This system no longer works when applied to a larger number of drives, n > k. This is because if we repeatedly apply the shift operator k times to a chunk of length k, we end up back where we started. If we tried to apply the algorithm above to a system containing 2k data disks, the second set of equations would reduce to a shifted copy of the first set, since shift^{k+ℓ} = shift^ℓ; this gives only half as many independent equations as we need to solve for the missing values.
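A small sketch of this simplified scheme, assuming k = 8-bit chunks and interpreting the shift as a circular rotation of the chunk (so that it repeats after k applications and can always be undone); the function names and that interpretation are assumptions made for illustration only.

K = 8  # chunk size in bits (per-byte striping)

def rot(d, i):
    """Circularly shift an 8-bit chunk left by i positions (i may be negative)."""
    i %= K
    return ((d << i) | (d >> (K - i))) & 0xFF

def syndromes(chunks):
    """P is the plain XOR of the chunks; Q XORs each chunk shifted by its index."""
    p = q = 0
    for i, d in enumerate(chunks):
        p ^= d
        q ^= rot(d, i)
    return p, q

def recover_from_q(chunks, q, lost):
    """Recover chunk `lost` when that chunk and P are gone, using Q and the survivors."""
    acc = q
    for i, d in enumerate(chunks):
        if i != lost:
            acc ^= rot(d, i)      # cancel the surviving terms out of Q
    return rot(acc, -lost)        # undo the shift applied to the lost chunk

data = [0x0F, 0x3C, 0xA5, 0x96]   # n = 4 chunks (n <= k)
p, q = syndromes(data)
assert recover_from_q(data, q, 2) == data[2]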
General parity system

It is possible to support a far greater number of drives by choosing the parity function more carefully. The issue we face is to ensure that a system of equations over the finite field Z_2 has a unique solution, so we will turn to the theory of polynomial equations. Consider the Galois field GF(m) with m = 2^k. This field is isomorphic to a polynomial field F_2[x]/(p(x)) for a suitable irreducible polynomial p(x) of degree k over Z_2. We will represent the data elements D as polynomials in the Galois field. Let D_0, ..., D_{n−1} ∈ GF(m) correspond to the stripes of data across the hard drives, encoded as field elements in this manner. We will use ⊕ to denote addition in the field, and concatenation to denote multiplication. The reuse of ⊕ is intentional: because addition in the finite field corresponds to the XOR operator, computing the sum of two elements is equivalent to computing XOR on the polynomial coefficients.

A generator of a field is an element g such that g^i is different for each non-negative i < m − 1. This means that each element of the field, except the value 0, can be written as a power of g. A finite field is guaranteed to have at least one generator. Pick one such generator g, and define P and Q as follows:

    P = ⊕_i D_i = D_0 ⊕ D_1 ⊕ ... ⊕ D_{n−1}
    Q = ⊕_i g^i D_i = g^0 D_0 ⊕ g^1 D_1 ⊕ ... ⊕ g^{n−1} D_{n−1}

As before, the first checksum P is just the XOR of each stripe, though interpreted now as a polynomial. The effect of g^i can be thought of as the action of a carefully chosen linear feedback shift register on the data chunk.[26] Unlike the bit shift in the simplified example, which could only be applied k times before the encoding began to repeat, applying the operator g multiple times is guaranteed to produce unique invertible functions, which allows a chunk length of k to support up to 2^k − 1 data pieces.

If one data chunk is lost, the situation is similar to the one before. In the case of two lost data chunks, we can compute the recovery formulas algebraically. Suppose that D_i and D_j are the lost values with i ≠ j; then, using the other values of D, we find constants A and B:

    A = P ⊕ (⊕_{ℓ ≠ i,j} D_ℓ) = D_i ⊕ D_j
    B = Q ⊕ (⊕_{ℓ ≠ i,j} g^ℓ D_ℓ) = g^i D_i ⊕ g^j D_j

We can solve for D_i in the second equation and plug it into the first to find

    D_j = (g^{j−i} ⊕ 1)^{−1} (g^{−i} B ⊕ A),

and then D_i = A ⊕ D_j.

Unlike P, the computation of Q is relatively CPU intensive, as it involves polynomial multiplication in F_2[x]/(p(x)). This can be mitigated with a hardware implementation or by using an FPGA.
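The recovery above can be sketched concretely over GF(2^8). The tables below use the generator g = 2 and the reducing polynomial x^8 + x^4 + x^3 + x^2 + 1 (0x11D), the combination used by Linux's RAID-6 implementation; the helper names are illustrative and the snippet is a sketch of the algebra, not a production implementation.

# GF(2^8) arithmetic with generator g = 2 and reducing polynomial 0x11D.
GF_EXP, GF_LOG = [0] * 512, [0] * 256
x = 1
for i in range(255):
    GF_EXP[i] = x
    GF_LOG[x] = i
    x <<= 1
    if x & 0x100:
        x ^= 0x11D
for i in range(255, 512):
    GF_EXP[i] = GF_EXP[i - 255]

def gf_mul(a, b):
    return 0 if a == 0 or b == 0 else GF_EXP[GF_LOG[a] + GF_LOG[b]]

def gf_pow_g(i):            # g^i, valid for negative i as well
    return GF_EXP[i % 255]

def gf_inv(a):
    return GF_EXP[255 - GF_LOG[a]]

def syndromes(data):
    """P = XOR of chunks, Q = XOR of g^i * D_i (all arithmetic in GF(2^8))."""
    p = q = 0
    for i, d in enumerate(data):
        p ^= d
        q ^= gf_mul(gf_pow_g(i), d)
    return p, q

def recover_two(data, p, q, i, j):
    """Rebuild lost chunks D_i and D_j from P, Q and the surviving chunks.

    (For the demo the full stripe is passed in and the 'lost' positions are skipped.)
    """
    a, b = p, q
    for l, d in enumerate(data):
        if l not in (i, j):
            a ^= d                         # A = P xor surviving chunks = D_i xor D_j
            b ^= gf_mul(gf_pow_g(l), d)    # B = Q xor surviving terms = g^i D_i xor g^j D_j
    dj = gf_mul(gf_inv(gf_pow_g(j - i) ^ 1), gf_mul(gf_pow_g(-i), b) ^ a)
    di = a ^ dj
    return di, dj

data = [0x11, 0x22, 0x33, 0x44, 0x55]
p, q = syndromes(data)
assert recover_two(data, p, q, 1, 3) == (data[1], data[3])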
Comparison

(See also: Nested RAID levels § Comparison.)

The standard RAID levels can be compared in terms of considerations such as their minimum number of drives, space efficiency, fault tolerance, and read and write performance.

System implications

In measurements of the I/O performance of five filesystems with five storage configurations (single SSD, RAID 0, RAID 1, RAID 10, and RAID 5), it was shown that F2FS on RAID 0 and RAID 5 with eight SSDs outperforms EXT4 by 5 times and 50 times, respectively. The measurements also suggest that the RAID controller can be a significant bottleneck in building a RAID system with high-speed SSDs.[29]

Nested RAID

(Main article: Nested RAID levels.)

Nested RAID levels are combinations of two or more standard RAID levels. They are also known as RAID 0+1 or RAID 01, RAID 0+3 or RAID 03, RAID 1+0 or RAID 10, RAID 5+0 or RAID 50, RAID 6+0 or RAID 60, and RAID 10+0 or RAID 100.

Non-standard variants

(Main articles: Non-standard RAID levels and Non-RAID drive architectures.)

In addition to standard and nested RAID levels, alternatives include non-standard RAID levels and non-RAID drive architectures. Non-RAID drive architectures are referred to by similar terms and acronyms, notably JBOD ("just a bunch of disks"), SPAN/BIG, and MAID ("massive array of idle disks").

References

1. "Common RAID Disk Data Format (DDF)". SNIA.org. Storage Networking Industry Association. http://www.snia.org/tech_activities/standards/curr_standards/ddf/. Retrieved 2013-04-23.
2. "RAID 0 Data Recovery". DataRecovery.net. https://www.datarecovery.net/RAID/raid-0-data-recovery.html. Retrieved 2015-04-30.
3. "Understanding RAID". CRU-Inc.com. http://www.cru-inc.com/data-protection-topics/understanding-raid/. Retrieved 2015-04-30.
4. "How to Combine Multiple Hard Drives Into One Volume for Cheap, High-Capacity Storage" (2013-02-26). LifeHacker.com. http://lifehacker.com/5986883/how-to-combine-multiple-hard-drives-into-one-volume-for-cheap-high-capacity-storage. Retrieved 2015-04-30.
5. de Kooter, Sebastiaan (2015-04-13). "Gaming storage shootout 2015: SSD, HDD or RAID 0, which is best?". GamePlayInside.com. http://www.gameplayinside.com/optimize/gaming-storage-shootout-2015-ssd-hdd-or-raid-0-which-is-best/. Retrieved 2015-09-22.
6. "Western Digital's Raptors in RAID-0: Are two drives better than one?" (July 1, 2004). AnandTech.com. http://www.anandtech.com/storage/showdoc.aspx?i=2101. Retrieved 2007-11-24.
7. "Hitachi Deskstar 7K1000: Two Terabyte RAID Redux" (April 23, 2007). AnandTech.com. http://www.anandtech.com/storage/showdoc.aspx?i=2974. Retrieved 2007-11-24.
8. "RAID 0: Hype or blessing?" (August 7, 2004). Tweakers.net. Persgroep Online Services. http://tweakers.net/reviews/515/1/raid-0-hype-or-blessing-pagina-1.html. Retrieved 2008-07-23.
9. "Does RAID0 Really Increase Disk Performance?" (November 1, 2006). HardwareSecrets.com. http://www.hardwaresecrets.com/does-raid0-really-increase-disk-performance/.
10. Larabel, Michael (2014-10-22). "Btrfs RAID HDD Testing on Ubuntu Linux 14.10". Phoronix. https://www.phoronix.com/scan.php?page=article&item=btrfs_raid01_linux316&num=1. Retrieved 2015-09-19.
11. Larabel, Michael (2014-10-29). "Btrfs on 4 × Intel SSDs In RAID 0/1/5/6/10". Phoronix. https://www.phoronix.com/scan.php?page=article&item=btrfs_4way_ssdraid&num=1. Retrieved 2015-09-19.
12. "FreeBSD Handbook: 19.3. RAID 1 – Mirroring" (2014-03-23). FreeBSD.org. http://www.freebsd.org/doc/handbook/geom-mirror.html. Retrieved 2014-06-11.
13. "Which RAID Level is Right for Me?: RAID 1 (Mirroring)". Adaptec.com. Adaptec. http://www.adaptec.com/en-us/_common/compatibility/_education/raid_level_compar_wp.htm#2.2. Retrieved 2014-01-02.
14. "Selecting the Best RAID Level: RAID 1 Arrays (Sun StorageTek SAS RAID HBA Installation Guide)" (2010-12-23). Docs.Oracle.com. Oracle Corporation. https://docs.oracle.com/cd/E19691-01/820-1847-20/appendixf.html#50515995_74175. Retrieved 2014-01-02.
15. Vadala, Derek (2003). Managing RAID on Linux. O'Reilly Series (illustrated ed.). O'Reilly. p. 6. ISBN 9781565927308.
16. Marcus, Evan; Stern, Hal (2003). Blueprints for high availability (2nd, illustrated ed.). John Wiley and Sons. p. 167. ISBN 9780471430261.
17. The RAIDbook, 4th Edition. The RAID Advisory Board. June 1995. p. 101.
18. Meyers, Michael; Jernigan, Scott (2003). Mike Meyers' A+ Guide to Managing and Troubleshooting PCs (illustrated ed.). McGraw-Hill Professional. p. 321. ISBN 9780072231465.
19. Natarajan, Ramesh (2011-11-21). "RAID 2, RAID 3, RAID 4 and RAID 6 Explained with Diagrams". TheGeekStuff.com. http://www.thegeekstuff.com/2011/11/raid2-raid3-raid4-raid6/. Retrieved 2015-01-02.
20. Chen, Peter; Lee, Edward; Gibson, Garth; Katz, Randy; Patterson, David (1994). "RAID: High-Performance, Reliable Secondary Storage". ACM Computing Surveys. 26 (2): 145–185. doi:10.1145/176979.176981.
21. "RAID 5 Data Recovery FAQ". VantageTech.com. Vantage Technologies. http://www.vantagetech.com/faq/raid-5-recovery-faq.html. Retrieved 2014-07-16.
22. Koren, Israel. "Basic RAID Organizations". ECS.UMass.edu. University of Massachusetts. http://www.ecs.umass.edu/ece/koren/architecture/Raid/basicRAID.html. Retrieved 2014-11-04.
23. "Sun StorageTek SAS RAID HBA Installation Guide, Appendix F: Selecting the Best RAID Level: RAID 6 Arrays" (2010-12-23). Docs.Oracle.com. https://docs.oracle.com/cd/E19494-01/820-1260-15/appendixf.html#50548797_51002. Retrieved 2015-08-27.
24. "Dictionary R". SNIA.org. Storage Networking Industry Association. http://www.snia.org/education/dictionary/r/. Retrieved 2007-11-24.
25. Faith, Rickard E. (13 May 2009). "A Comparison of Software RAID Types". http://alephnull.com/benchmarks/sata2009/raidtype.html.
26. Anvin, H. Peter (May 21, 2009). "The Mathematics of RAID-6". Kernel.org. Linux Kernel Organization. https://www.kernel.org/pub/linux/kernel/people/hpa/raid6.pdf. Retrieved November 4, 2009.
27. Radu, Mihaela (2013). "Using Markov models to estimate the reliability of RAID architectures". IEEE Long Island Systems, Applications and Technology Conference (LISAT). http://ieeexplore.ieee.org/document/6578246/citations.
28. Greenan, Kevin M.; Plank, James S.; Wylie, Jay J. (2010). "Mean time to meaningless: MTTDL, Markov models, and storage system reliability". USENIX HotStorage. https://www.usenix.org/legacy/event/hotstorage10/tech/full_papers/Greenan.pdf.
29. Park, Chanhyun; Lee, Seongjin; Won, Youjip (2014). "An Analysis on Empirical Performance of SSD-Based RAID". Information Sciences and Systems 2014. pp. 395–405. doi:10.1007/978-3-319-09465-6_41. ISBN 978-3-319-09464-9.