Erasure Coded pool
==================

Purpose
-------

Erasure-coded pools require less storage space than replicated
pools. For example, an object stored with the default erasure code
profile (k=2, m=1) consumes 1.5 times its size, whereas the same
object in a three-replica pool consumes 3 times its size. Erasure
coding has higher computational requirements and supports only a
subset of the operations allowed on an object (for instance, partial
writes are not supported).

Use cases
---------

Cold storage
~~~~~~~~~~~~

An erasure-coded pool is created to store a large number of 1GB
objects (imaging, genomics, etc.) and 10% of them are read per
month. New objects are added every day and the objects are not
modified after being written. On average there is one write for 10,000
reads.

A replicated pool is created and set as a cache tier for the
erasure-coded pool. An agent demotes objects (i.e. moves them from the
replicated pool to the erasure-coded pool) if they have not been
accessed in a week.

The CRUSH rule of the erasure-coded pool targets hardware designed for
cold storage, with high latency and slow access times. The CRUSH rule
of the replicated pool targets faster hardware to provide better
response times.
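
A minimal sketch of this setup, assuming illustrative pool names
cold-storage and hot-storage and a one-week eviction threshold::

 $ ceph osd pool create cold-storage 12 12 erasure
 $ ceph osd pool create hot-storage 12 12 replicated
 $ ceph osd tier add cold-storage hot-storage
 $ ceph osd tier cache-mode hot-storage writeback
 $ ceph osd tier set-overlay cold-storage hot-storage
 $ # hit set tracking is required for a writeback cache tier
 $ ceph osd pool set hot-storage hit_set_type bloom
 $ # evict objects not accessed for a week (604800 seconds)
 $ ceph osd pool set hot-storage cache_min_evict_age 604800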

Cheap multidatacenter storage
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Ten datacenters are connected with dedicated network links. Each
datacenter contains the same amount of storage with no power-supply
backup and no air-cooling system.

An erasure-coded pool is created with a CRUSH rule that ensures no
data is lost even if three datacenters fail simultaneously. With the
erasure code configured to split each object into six data chunks
(k=6) and create three coding chunks (m=3), the storage overhead is
50% (nine chunks stored for six chunks of data). Surviving the same
failures with replication would require four copies of each object, a
300% overhead.
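
A minimal sketch of such a profile, assuming the CRUSH map already
contains datacenter buckets (the profile and pool names are
illustrative)::

 $ ceph osd erasure-code-profile set multidc \
     k=6 m=3 crush-failure-domain=datacenter
 $ ceph osd pool create multidcpool 12 12 erasure multidc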

Interface
---------

Set up an erasure-coded pool::

 $ ceph osd pool create ecpool 12 12 erasure
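
Optionally, check that the pool works by writing an object and reading
it back; the object name NYAN and its payload are illustrative::

 $ echo ABCDEFGHI | rados --pool ecpool put NYAN -
 $ rados --pool ecpool get NYAN -
 ABCDEFGHI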

Set up an erasure-coded pool and the associated CRUSH rule::

 $ ceph osd crush rule create-erasure ecruleset
 $ ceph osd pool create ecpool 12 12 erasure \
     default ecruleset

Set the failure domain of the profile to osd, instead of host which is
the default, allowing chunks of the same object to be stored on
different OSDs of the same host::

 $ ceph osd erasure-code-profile set myprofile \
     crush-failure-domain=osd
 $ ceph osd erasure-code-profile get myprofile
 k=2
 m=1
 plugin=jerasure
 technique=reed_sol_van
 crush-failure-domain=osd
 $ ceph osd pool create ecpool 12 12 erasure myprofile

Control the parameters of the erasure code plugin::

 $ ceph osd erasure-code-profile set myprofile \
     k=3 m=1
 $ ceph osd erasure-code-profile get myprofile
 k=3
 m=1
 plugin=jerasure
 technique=reed_sol_van
 $ ceph osd pool create ecpool 12 12 erasure \
     myprofile

Choose an alternate erasure code plugin::

 $ ceph osd erasure-code-profile set myprofile \
     plugin=example technique=xor
 $ ceph osd erasure-code-profile get myprofile
 k=2
 m=1
 plugin=example
 technique=xor
 $ ceph osd pool create ecpool 12 12 erasure \
     myprofile

Display the default erasure code profile::

  $ ceph osd erasure-code-profile ls
  default
  $ ceph osd erasure-code-profile get default
  k=2
  m=1
  plugin=jerasure
  technique=reed_sol_van
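
A pool created as erasure without naming a profile uses this default
profile; the association can be checked on the pool (a sketch, reusing
the ecpool created above)::

  $ ceph osd pool get ecpool erasure_code_profile
  erasure_code_profile: default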

Create a profile that distributes data over six OSDs (k+m=6) and can sustain the loss of three of them (m=3) without losing data::

  $ ceph osd erasure-code-profile set myprofile k=3 m=3
  $ ceph osd erasure-code-profile get myprofile
  k=3
  m=3
  plugin=jerasure
  technique=reed_sol_van
  $ ceph osd erasure-code-profile ls
  default
  myprofile
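
A pool created from this profile stores k+m = 6 chunks per object;
this is reflected in the pool size (the pool name ecpool6 is
illustrative)::

  $ ceph osd pool create ecpool6 12 12 erasure myprofile
  $ ceph osd pool get ecpool6 size
  size: 6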

Remove a profile that is no longer in use (removing a profile still referenced by a pool fails with EBUSY)::

  $ ceph osd erasure-code-profile ls
  default
  myprofile
  $ ceph osd erasure-code-profile rm myprofile
  $ ceph osd erasure-code-profile ls
  default
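
As a sketch of the EBUSY case (the profile and pool names are
illustrative), removing a profile fails while a pool still references
it; deleting the pool first clears the reference::

  $ ceph osd erasure-code-profile set busyprofile k=2 m=1
  $ ceph osd pool create busypool 12 12 erasure busyprofile
  $ ceph osd erasure-code-profile rm busyprofile   # fails with EBUSY
  $ ceph osd pool delete busypool busypool --yes-i-really-really-mean-it
  $ ceph osd erasure-code-profile rm busyprofile   # succeeds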

Set the CRUSH root of the profile to ssd (instead of default)::

 $ ceph osd erasure-code-profile set myprofile \
     crush-root=ssd
 $ ceph osd erasure-code-profile get myprofile
 k=2
 m=1
 plugin=jerasure
 technique=reed_sol_van
 crush-root=ssd
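
This assumes the CRUSH map already contains a root bucket named ssd; a
pool created from this profile will place all its chunks under that
root (the pool name ssdpool is illustrative)::

 $ ceph osd pool create ssdpool 12 12 erasure myprofile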