
Advanced Data Protection for Storage: DDP (Dynamic Disk Pools)

SANtricity RAID Protection
- Volume groups: RAID 0, 1, 10, 5, and 6; RAID levels can be intermixed; various group sizes are supported
- Dynamic disk pools: minimum 11 SSDs, maximum 120 SSDs, up to 10 disk pools per system
[Figure: volume groups and a disk pool, each mapping SSDs to volumes and host LUNs]

SANtricity RAID Levels
- RAID 0: striped
- RAID 1 (10): mirrored and striped
- RAID 5: data disks and rotating parity, i.e. block-level striping with distributed parity (see the parity sketch below)
- RAID 6 (P+Q): data disks and rotating dual parity
[Figure: stripe layouts showing Data, Mirror, Parity, and Q Parity block placement]
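To make the parity mechanics concrete, here is a minimal, self-contained Python sketch (not SANtricity code) of the XOR parity that RAID 5 rotates across its drives: the parity block is the bytewise XOR of the data blocks in a stripe, so any single lost block can be recomputed from the survivors. RAID 6 adds a second, independently computed Q syndrome (Reed-Solomon over GF(2^8)), which is omitted here for brevity.

```python
# Minimal sketch of RAID 5-style XOR parity (illustration only).
# Each stripe holds N data blocks plus one parity block; parity is the
# bytewise XOR of the data blocks, so any one lost block is recoverable.

def xor_blocks(blocks):
    """Bytewise XOR of equal-length blocks."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

data = [b"AAAA", b"BBBB", b"CCCC"]      # one stripe: 3 data blocks
parity = xor_blocks(data)               # the rotating parity block

lost_index = 1                          # pretend the drive holding block 1 failed
survivors = [d for i, d in enumerate(data) if i != lost_index]
recovered = xor_blocks(survivors + [parity])
assert recovered == data[lost_index]    # XOR of survivors + parity = lost block
```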

Traditional RAID Volumes
- Disk drives are organized into RAID groups
- Volumes reside across the drives in a RAID group
- Performance is dictated by the number of spindles
- Hot spares sit idle until a drive fails; their spare capacity is "stranded"
[Figure: 24-drive system with two 10-drive groups (8+2) and 4 hot spares]

Traditional RAID: Drive Failure
- Data is reconstructed onto a hot spare
- A single drive is responsible for all writes, which becomes the bottleneck (see the estimate below)
- Reconstruction happens linearly, one stripe at a time
- All volumes in that group are significantly impacted
[Figure: the same 24-drive system rebuilding onto a single hot spare]
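The scale of that bottleneck is easy to estimate. The sketch below is back-of-envelope arithmetic only; both throughput figures are assumptions chosen for illustration, not measured rates.

```python
# Back-of-envelope: a traditional rebuild is capped by the one hot spare's
# sustained write rate. Rates here are illustrative assumptions.
capacity_tb = 3.0                # failed drive's capacity
full_rate_mb_s = 100.0           # spare writing flat out, no host I/O
throttled_mb_s = 25.0            # rebuild deprioritized behind production I/O

mb = capacity_tb * 1_000_000
print(f"dedicated rebuild: ~{mb / full_rate_mb_s / 3600:.1f} hours")      # ~8.3 hours
print(f"throttled rebuild: ~{mb / throttled_mb_s / 3600 / 24:.1f} days")  # ~1.4 days
```

Even these generous assumptions land in the "10+ hours to several days" window the next slide describes.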

The Problem: The Large-Disk-Drive Challenge
- Staggering amounts of data to store, protect, and access
- Some sites have thousands of large-capacity drives
- Drive failures are continual, particularly with NL-SAS drives
- Production I/O is impacted during rebuilds, by up to 40% in many cases
- As drive capacities continue to grow, traditional RAID protection is pushed to its limit: drive transfer rates have not kept up with capacities, and larger drives mean longer rebuilds, anywhere from 10+ hours to several days
[Slide graphic: 64TB+]

Dynamic Disk Pools: Maintain SLAs During Drive Failure
- "Stay in the green": the performance drop is minimized following a drive failure
- Dynamic rebalance completes up to 8x faster than traditional RAID in random environments and up to 2x faster in sequential environments
- A large pool of spindles behind every volume reduces hot spots; each volume is spread across all drives in the pool (see the simulation sketch below)
- Dynamic distribution and redistribution are nondisruptive background operations
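A toy model of the hot-spot claim: if the same stream of small random I/Os is spread over a 10-drive group versus a 24-drive pool, the busiest drive's share of the work shrinks roughly in proportion to the pool width. This is a simulation written for illustration, not a SANtricity benchmark.

```python
import random

def max_drive_share(n_drives, n_ios=100_000):
    """Spread random I/Os across n_drives; return the busiest drive's share."""
    load = [0] * n_drives
    for _ in range(n_ios):
        load[random.randrange(n_drives)] += 1
    return max(load) / n_ios

random.seed(0)
print(f"10-drive group: busiest drive ~{max_drive_share(10):.1%} of I/O")
print(f"24-drive pool:  busiest drive ~{max_drive_share(24):.1%} of I/O")
```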

Traditional RAID Technology vs. Innovative Dynamic Disk Pools
- Balanced: the algorithm randomly spreads data across all drives, balancing the workload and rebuilding when necessary.
- Easy: no RAID sets or idle spares to manage; spare capacity is active on all drives.
- Combining effort: all drives in the pool sustain the workload, which suits mixed virtual workloads and fast reconstruction when needed.
- Flexible: add ANY* number of drives for additional capacity; the system automatically rebalances data for optimal performance.

"With Dynamic Disk Pools, you can add or lose disk drives without impact, reconfiguration, or headaches."
* After the minimum of 11 drives.

Data Rebalancing in Minutes vs. Days
[Chart: rebuild/rebalance time in hours for 300GB, 900GB, 2TB, and 3TB drives, RAID 6 vs. DDP. RAID 6 rebuild times grow with capacity, from about 1.3 days through 2.5 days to more than 4 days for the largest drive, while DDP rebalances in an estimated 96 minutes, a roughly 99% reduction in exposure. Typical rebalancing improvements are based on a 24-disk mixed workload.]
Business impact: maintain business SLAs through a drive failure. A quick arithmetic check of those figures follows.
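The chart's headline numbers can be sanity-checked with simple arithmetic (my calculation, using the chart's own figures):

```python
# Sanity check on the rebuild comparison, using the chart's figures.
raid6_minutes = 4 * 24 * 60     # "more than 4 days" for the largest drive
ddp_minutes = 96                # DDP estimate from the chart

improvement = 1 - ddp_minutes / raid6_minutes
print(f"exposure reduced by ~{improvement:.0%}")
# ~98%; consistent with the ~99% claim, since "more than 4 days"
# is only a lower bound on the RAID 6 rebuild time.
```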

RAID Level Comparison

RAID-0
- Description: data is striped across multiple SSDs.
- Min/max SSDs: 1 / system maximum
- Usable capacity as % of raw: 100%
- Application: IOPS | MB/s
- Advantages: performance, due to parallel operation of the accesses.
- Disadvantages: no redundancy; if one drive fails, data is lost.

RAID-1 and 1+0
- Description: RAID 1 uses mirroring to write data to two duplicate SSDs simultaneously; RAID 10 stripes data across a set of mirrored SSD pairs.
- Min/max SSDs: 2 / system maximum
- Usable capacity as % of raw: 50%
- Application: IOPS
- Advantages: performance, as multiple requests can be fulfilled simultaneously; also offers the highest data availability.
- Disadvantages: storage costs are doubled.

RAID-5
- Description: SSDs operate independently; user data and redundant information (parity) are striped across the SSDs. The equivalent capacity of one SSD is used for redundant information.
- Min/max SSDs: 3 / 30
- Usable capacity as % of raw: 67% to 97%
- Application: IOPS | MB/s
- Advantages: good for reads, small IOPS, many concurrent IOPS, and random I/O; parity uses a small portion of raw capacity.
- Disadvantages: writes are particularly demanding.

RAID-6
- Description: SSDs operate independently; user data and redundant information (dual parity) are striped across the SSDs. The equivalent capacity of two SSDs is used for redundant information.
- Min/max SSDs: 5 / 30
- Usable capacity as % of raw: 60% to 93%
- Application: IOPS | MB/s
- Advantages: good for reads, small IOPS, many concurrent IOPS, and random I/O; dual parity uses a small portion of raw capacity.
- Disadvantages: writes are particularly demanding.
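The usable-capacity ranges in the table follow from standard parity arithmetic: RAID 5 gives up the equivalent of one drive and RAID 6 two, so the usable fraction improves as the group widens from the minimum toward the 30-drive maximum. A quick check:

```python
# Where the table's capacity ranges come from: (n - parity) / n.

def usable_fraction(n_drives, parity_drives):
    return (n_drives - parity_drives) / n_drives

print(f"RAID 5, 3 drives:  {usable_fraction(3, 1):.0%}")   # 67% (table minimum)
print(f"RAID 5, 30 drives: {usable_fraction(30, 1):.0%}")  # 97% (table maximum)
print(f"RAID 6, 5 drives:  {usable_fraction(5, 2):.0%}")   # 60%
print(f"RAID 6, 30 drives: {usable_fraction(30, 2):.0%}")  # 93%
```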

Dynamic Disk Pools Overview
- DDP dynamically distributes data, spare capacity, and parity information across a pool of SSDs
- All drives are active (no idle hot spares), and spare capacity is available to all volumes
- Data is dynamically recreated and redistributed whenever the pool grows or shrinks

DDP: Simplicity, Performance, Protection
- Simplified administration: no RAID sets or hot spares to manage; data is automatically balanced within the pool; flexible disk pool sizing optimizes capacity utilization
- Consistent performance: data is distributed throughout the pool (no hot spots); the performance drop is minimized during a drive rebuild; significantly faster return to an optimal state
- Relentless data protection: significantly faster rebuild times, as data is reconstructed throughout the disk pool; prioritized reconstruction minimizes exposure

DDP Insight: How It Works
- Each DDP volume is composed of some number of 4GB "virtual stripes" called dynamic stripes (D-stripes)
- Each D-stripe resides on a pseudo-randomly selected set of 10 drives from within the pool (see the toy allocator below)
- D-stripes are allocated at volume-creation time, sequentially on a per-volume basis
[Figure: D-stripes distributed across a 24-SSD pool]
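A toy allocator makes the D-stripe idea concrete. This is a sketch under the slide's stated parameters (4GB D-stripes, 10 drives per stripe, a 24-SSD pool); the real SANtricity allocator also balances how often each drive is chosen, which this naive random version does not.

```python
import random

POOL_SIZE = 24       # drives in the pool (the slide's 24-SSD example)
STRIPE_WIDTH = 10    # each D-stripe spans 10 drives
STRIPE_GB = 4        # each D-stripe holds 4GB

def allocate_volume(volume_gb, seed=42):
    """Toy allocator: one pseudo-random 10-drive set per 4GB D-stripe."""
    rng = random.Random(seed)
    n_stripes = -(-volume_gb // STRIPE_GB)   # ceiling division
    return [sorted(rng.sample(range(POOL_SIZE), STRIPE_WIDTH))
            for _ in range(n_stripes)]

stripes = allocate_volume(100)               # a 100GB volume -> 25 D-stripes
drives_used = {d for stripe in stripes for d in stripe}
print(f"{len(stripes)} D-stripes touch {len(drives_used)} of {POOL_SIZE} drives")
```

Even this naive version shows why every volume ends up spread across essentially the whole pool.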

DDP SSD Failure
- For each D-stripe that has data on the failed SSD: segments on the other SSDs are read to recreate the data, and a new SSD is chosen to receive the segments from the failed SSD
- Rebuild operations run in parallel across all SSDs

DDP Multiple Disk Failure
- If two SSDs have failed, the system rebuilds critical segments first (shown in brown and light blue on the slide; see the prioritization sketch below)
- If additional SSDs fail, new critical segments are identified and rebuilt (blue, orange, and pink)

DDP: Adding SSDs to a Pool
- Add a single SSD or multiple SSDs simultaneously
- The pool immediately rebalances data to maintain equilibrium
- Segments are just m…
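When more than one drive fails, ordering the work matters. The sketch below is an illustration of the "critical segments first" idea, not NetApp's algorithm, and it assumes each 10-drive D-stripe carries dual parity (8+2), as the multiple-failure behavior on the slide implies: a stripe that has lost two drives is one more failure away from data loss, so it jumps the queue.

```python
# Toy prioritization of DDP rebuild work after multiple drive failures.
# Assumes dual parity per D-stripe, so "2 drives lost" means critical.

def rebuild_order(stripes, failed):
    """Return the affected stripes, most-degraded first."""
    affected = [(len(set(s) & failed), s) for s in stripes]
    return [s for lost, s in sorted(affected, key=lambda t: -t[0]) if lost > 0]

stripes = [
    {0, 1, 2, 3, 4, 5, 6, 7, 8, 9},          # loses two drives -> critical
    {3, 10, 11, 12, 13, 14, 15, 16, 17, 18},  # loses one drive -> degraded
    {2, 5, 6, 8, 11, 14, 19, 20, 21, 23},     # untouched by the failures
]
print(rebuild_order(stripes, failed={0, 3}))  # critical stripe listed first
```

Because each rebuilt stripe writes its recreated segments to spare capacity on a different surviving drive, the rebuild writes fan out across the whole pool rather than funneling into one hot spare.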
