$NetBSD: storage,v 1.11 2015/11/20 08:13:41 dholland Exp $

NetBSD Storage Roadmap
======================

This is a small roadmap document dealing with the storage and file
systems side of the operating system. It discusses elements, projects,
and goals that are under development or under discussion, and it is
divided into three categories based on perceived priority.

The following elements, projects, and goals are considered strategic
priorities for the project:

1. Improving iscsi
2. nfsv4 support
3. A better journaling file system solution
4. Getting zfs working for real
5. Seamless full-disk encryption
6. Finish tls-maxphys

The following elements, projects, and goals are not strategic
priorities but are still important undertakings worth doing:

7. nvme support
8. lfs64
9. Per-process namespaces
10. lvm tidyup
11. Flash translation layer
12. Shingled disk support
13. ext3/ext4 support
14. Port hammer from Dragonfly
15. afs maintenance
16. execute-in-place

The following elements, projects, and goals are perhaps less pressing;
this doesn't mean one shouldn't work on them, but the expected payoff
is perhaps less than for other things:

17. coda maintenance


Explanations
============

1. Improving iscsi
------------------

Both the existing iscsi target and initiator are fairly bad code, and
neither works terribly well. Fixing this is fairly important as iscsi
is where it's at for remote block devices. Note that there appears to
be no compelling reason to move the target to the kernel or otherwise
make major architectural changes.

- As of November 2015 nobody is known to be working on this.
- There is currently no clear timeframe or release target.
- Contact agc for further information.


2. nfsv4 support
----------------

nfsv4 is at this point the de facto standard for FS-level (as opposed
to block-level) network volumes in production settings. The legacy nfs
code currently in NetBSD only supports nfsv2 and nfsv3.

The intended plan is to port FreeBSD's nfsv4 code, which also includes
nfsv2 and nfsv3 support, and eventually transition to it completely,
dropping our current nfs code. (Which is kind of a mess.) So far the
only step that has been taken is to import the code from FreeBSD. The
next step is to update that import (since it was done a while ago now)
and then work on getting it to configure and compile.

- As of November 2015 nobody is working on this, and a volunteer to
  take charge is urgently needed.
- There is no clear timeframe or release target, although having an
  experimental version ready for -8 would be great.
- Contact dholland for further information.


3. A better journaling file system solution
-------------------------------------------

WAPBL, the journaling FFS that NetBSD rolled out some time back, has a
critical problem: it does not address the historic ffs behavior of
allowing stale on-disk data to leak into user files in crashes. And
because WAPBL runs faster, this happens more often and with more data.
This situation is both a correctness and a security liability. Fixing
it has turned out to be difficult. It is not really clear what the
best option at this point is:

+ Fixing WAPBL (e.g. to flush newly allocated/newly written blocks to
disk early) has been examined by several people who know the code base
and judged difficult. Still, it might be the best way forward.

+ There is another journaling FFS: the Harvard one done by Margo
Seltzer's group some years back. We have a copy of this, but as it was
written in BSD/OS circa 1999 it needs a lot of merging, and then will
undoubtedly also need a certain amount of polishing to be ready for
production use. It does record-based rather than block-based
journaling and does not share the stale data problem.

+ We could bring back softupdates (in the softupdates-with-journaling
form found today in FreeBSD) -- this code is even more complicated
than the softupdates code we removed back in 2009, and it's not clear
that it's any more robust either. However, it would solve the stale
data problem if someone wanted to port it over. It isn't clear that
this would be any less work than getting the Harvard journaling FFS
running... or than writing a whole new file system either.

+ We could write a whole new journaling file system. (That is, not
FFS. Doing a new journaling FFS implementation is probably not
sensible relative to merging the Harvard journaling FFS.) This is a
big project.

Right now it is not clear which of these avenues is the best way
forward. Given the general manpower shortage, it may be that the best
way is whatever looks best to someone who wants to work on the
problem.
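
To make the stale data problem concrete, here is a toy model of the
failure mode (the names and structure below are invented for
illustration and bear no relation to the actual WAPBL code): the
journal commits a metadata update synchronously while the matching
data write is still sitting in the buffer cache, so after a crash and
journal replay the file points at whatever its newly allocated block
held before.

    /* Toy model of the stale-data window; illustration only. */
    #include <stdio.h>
    #include <string.h>

    #define NBLOCKS 8
    #define BLKSIZE 16

    /* "Disk": freed blocks keep their old contents until overwritten. */
    static char disk[NBLOCKS][BLKSIZE];

    /* Metadata covered by the journal: block pointer and size. */
    struct inode {
        int blockno;            /* -1 if no block allocated */
        int size;
    };

    int
    main(void)
    {
        struct inode ino = { -1, 0 };

        /* Block 3 once belonged to another user's (secret) file. */
        strcpy(disk[3], "old secret data");

        /*
         * Append to our file: the journal commits the metadata
         * update (allocation of block 3, new size) synchronously...
         */
        ino.blockno = 3;
        ino.size = 10;

        /*
         * ...but the data write sits in the buffer cache and the
         * machine crashes before it is flushed, so this never runs:
         *
         *      strcpy(disk[3], "new data");
         */

        /* After replay the metadata is consistent, and yet: */
        printf("file contents: %.*s\n", ino.size, disk[ino.blockno]);
        /* prints "old secret": stale data leaked into the file */
        return 0;
    }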

- As of November 2015 nobody is working on fixing WAPBL. There has
  been some interest in the Harvard journaling FFS but no significant
  progress. Nobody is known to be working on or particularly
  interested in porting softupdates-with-journaling. And, while
  dholland has been mumbling for some time about a plan for a
  specific new file system to solve this problem, there isn't any
  realistic prospect of significant progress on that in the
  foreseeable future, and nobody else is known to have even that
  much, or to be working on it.
- There is no clear timeframe or release target, but given that WAPBL
  has been disabled by default for new installs in -7, this problem
  can reasonably be said to have become critical.
- Contact joerg or martin regarding WAPBL; contact dholland regarding
  the Harvard journaling FFS.


4. Getting zfs working for real
-------------------------------

ZFS has been almost working for years now. It is high time we got it
really working. One of the things this entails is updating the ZFS
code, as what we have is rather old. The Illumos version is probably
what we want for this.

- There has been intermittent work on zfs, but as of November 2015
  nobody is known to be actively working on it.
- There is no clear timeframe or release target.
- Contact riastradh or ?? for further information.


5. Seamless full-disk encryption
--------------------------------

(This is only sort of a storage issue.) We have cgd, and it is
believed to still be cryptographically suitable, at least for the time
being. However, we don't have any of the following things:

+ An easy way to install a machine with full-disk encryption. It
should really just be a checkbox item in sysinst, or not much more
than that.

+ Ideally, also an easy way to turn on full-disk encryption for a
machine that's already been installed, though this is harder.

+ A good story for booting off a disk that is otherwise encrypted;
obviously one cannot encrypt the bootblocks, but it isn't clear where
in the boot process the encrypted volume should take over, or how to
make a best effort at protecting the unencrypted elements needed to
boot. (At least, in the absence of something like UEFI secure boot
combined with a cryptographic oracle to sign your bootloader image so
UEFI will accept it.) There's also the question of how one runs
cgdconfig(8) and where the cgdconfig binary comes from.
172 1.1 agc
173 1.10 dholland + A reasonable way to handle volume passphrases. MacOS apparently uses
174 1.10 dholland login passwords for this (or as passphrases for secondary keys, or
175 1.10 dholland something) and this seems to work well enough apart from the somewhat
176 1.10 dholland surreal experience of sometimes having to log in twice. However, it
177 1.10 dholland will complicate the bootup story.
178 1.1 agc
179 1.10 dholland Given the increasing regulatory-level importance of full-disk
180 1.10 dholland encryption, this is at least a de facto requirement for using NetBSD
181 1.10 dholland on laptops in many circumstances.
182 1.1 agc
183 1.10 dholland - As of November 2015 nobody is known to be working on this.
184 1.10 dholland - There is no clear timeframe or release target.
185 1.10 dholland - Contact dholland for further information.
186 1.5 agc

6. Finish tls-maxphys
---------------------

The tls-maxphys branch changes MAXPHYS (the maximum size of a single
I/O request) from a global fixed constant to a value that's probed
separately for each particular I/O channel based on its
capabilities. Large values are highly desirable for e.g. feeding large
disk arrays but do not work with all hardware.

The code is nearly done and just needs more testing and support in
more drivers.
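
Roughly, the idea is the following (a sketch with invented names, not
the actual code on the branch): each layer in an I/O path advertises
its own transfer limit, and the effective limit for a device is the
minimum along the path rather than a single compile-time constant.

    /* Sketch of per-path transfer limits; names are invented. */
    #include <stdio.h>

    struct io_node {
        const char *name;
        size_t maxxfer;         /* this layer's transfer limit */
        struct io_node *parent; /* next layer up toward the CPU */
    };

    /* Effective "maxphys": the minimum over the whole path. */
    static size_t
    path_maxphys(const struct io_node *dev)
    {
        size_t m = (size_t)-1;

        for (; dev != NULL; dev = dev->parent)
            if (dev->maxxfer < m)
                m = dev->maxxfer;
        return m;
    }

    int
    main(void)
    {
        struct io_node pci  = { "pci",  1024 * 1024, NULL };
        struct io_node ahci = { "ahci",  128 * 1024, &pci };
        struct io_node wd0  = { "wd0",  1024 * 1024, &ahci };

        /* The controller is the bottleneck: 128 KB, not 1 MB. */
        printf("wd0 maxphys = %zu\n", path_maxphys(&wd0));
        return 0;
    }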

- As of November 2015 nobody is known to be working on this.
- There is no clear timeframe or release target.
- Contact tls for further information.


7. nvme support
---------------

nvme ("NVM Express") is a hardware interface standard for PCI-attached
SSDs. NetBSD currently has no driver for these; unfortunately, while
both FreeBSD and OpenBSD do, neither of their drivers is likely
directly suitable: the FreeBSD driver is severely overcomplicated and
the OpenBSD driver won't be MPSAFE. (And there isn't much point in a
non-MPSAFE nvme driver.)

Relatedly, the I/O path needs to be restructured to avoid software
bottlenecks on the way to an nvme device: these devices are fast
enough that things like disksort() do not make sense.
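
For example (a sketch using invented names, not the actual bufq code):
on a rotational disk the O(n) sorted insertion done by disksort() pays
for itself by reducing seek time, but an SSD has no seeks, so the scan
is pure overhead, much of it spent holding whatever lock protects the
queue.

    /* Sorted insertion vs. plain FIFO dispatch; illustration only. */
    #include <sys/queue.h>
    #include <stdio.h>

    struct buf {
        long b_blkno;
        TAILQ_ENTRY(buf) b_entries;
    };
    TAILQ_HEAD(bufq, buf);

    /* Rotational path: keep the queue sorted by block number. */
    static void
    enqueue_sorted(struct bufq *q, struct buf *bp)
    {
        struct buf *it;

        TAILQ_FOREACH(it, q, b_entries) {
            if (bp->b_blkno < it->b_blkno) {
                TAILQ_INSERT_BEFORE(it, bp, b_entries);
                return;
            }
        }
        TAILQ_INSERT_TAIL(q, bp, b_entries);
    }

    /* nvme path: no seek penalty, so just append and go. */
    static void
    enqueue_fifo(struct bufq *q, struct buf *bp)
    {
        TAILQ_INSERT_TAIL(q, bp, b_entries);
    }

    int
    main(void)
    {
        struct bufq q = TAILQ_HEAD_INITIALIZER(q);
        struct buf b1 = { 100 }, b2 = { 50 }, b3 = { 75 }, *bp;

        enqueue_sorted(&q, &b1);
        enqueue_sorted(&q, &b2); /* O(n) scan an SSD never repays */
        enqueue_fifo(&q, &b3);   /* what an nvme queue should do */

        TAILQ_FOREACH(bp, &q, b_entries)
            printf("dispatch blkno %ld\n", bp->b_blkno);
        return 0;
    }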

Semi-relatedly, it is also time for scsipi to become MPSAFE.

- As of November 2015 nobody is known to be working on this.
- There is no clear timeframe or release target.
- Contact msaitoh or agc for further information.


8. lfs64
--------

LFS currently only supports volumes up to 2 TB. As LFS is of interest
for use on shingled disks (which are larger than 2 TB) and also for
use on disk arrays (ditto), this is something of a problem. A 64-bit
version of LFS for large volumes is in the works.

- As of November 2015 dholland is working on this.
- It is close to being ready for at least experimental use and is
  expected to be in 8.0.
- Responsible: dholland


9. Per-process namespaces
-------------------------

Support for per-process variation of the file system namespace enables
a number of things: more flexible chroots, for example, and also
potentially more efficient pkgsrc builds. dholland thought up a
somewhat hackish but low-footprint way to implement this.

- As of November 2015 dholland is working on this.
- It is scheduled to be in 8.0.
- Responsible: dholland


10. lvm tidyup
--------------

[agc says someone should look at our lvm stuff; XXX fill this in]

- As of November 2015 nobody is known to be working on this.
- There is no clear timeframe or release target.
- Contact agc for further information.


11. Flash translation layer
---------------------------

SSDs ship with firmware called a "flash translation layer" that
arbitrates between the block device software expects to see and the
raw flash chips. FTLs handle wear leveling, lifetime management, and
also internal caching, striping, and other performance concerns. While
NetBSD has a file system for raw flash (chfs), it seems that, given
the things NetBSD is often used for, it ought to come with a flash
translation layer as well.
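
The core mechanism is easy to state, even though production FTLs are
far more involved (the sketch below uses invented names and ignores
garbage collection): flash pages cannot be rewritten in place, so
every logical write goes to a fresh physical page and a mapping table
is updated, leaving the old page as garbage to be reclaimed later by
erasing whole blocks; wear leveling lives in the choice of which free
page to hand out next.

    /* Minimal page-mapped FTL sketch; illustration only. */
    #include <stdio.h>

    #define NPAGES 128          /* physical flash pages */

    static int l2p[NPAGES];     /* logical page -> physical page */
    static int stale[NPAGES];   /* physical pages awaiting erase */
    static int next_free;       /* naive free-page allocator */

    static void
    ftl_init(void)
    {
        for (int i = 0; i < NPAGES; i++)
            l2p[i] = -1;
    }

    /* Out-of-place write: remap, invalidate the old physical page. */
    static int
    ftl_write(int lpage)
    {
        int old = l2p[lpage];

        if (next_free >= NPAGES)
            return -1;          /* a real FTL garbage-collects here */
        if (old != -1)
            stale[old] = 1;
        l2p[lpage] = next_free++;
        return l2p[lpage];
    }

    int
    main(void)
    {
        ftl_init();
        printf("logical 7 -> physical %d\n", ftl_write(7));
        printf("logical 7 -> physical %d\n", ftl_write(7));
        printf("physical 0 now stale: %d\n", stale[0]);
        return 0;
    }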

Note that this is an area where writing your own is probably a bad
plan; it is complicated, with a lot of prior art that's also
reportedly full of patent mines. There are a couple of open FTL
implementations that we might be able to import.

- As of November 2015 nobody is known to be working on this.
- There is no clear timeframe or release target.
- Contact dholland for further information.


12. Shingled disk support
-------------------------

Shingled disks (or more technically, disks with "shingled magnetic
recording" or SMR) can only write whole tracks at once. Thus, to
operate effectively they require translation support similar to the
flash translation layers found in SSDs. The nature and structure of
shingle translation layers is still being researched; however, at some
point we will want to support these things in NetBSD.
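
The underlying constraint can be modeled as a write pointer per zone
(a sketch with invented names; real drives expose something similar
through zoned-device command sets): within a zone, writes must land
exactly at the current write pointer, and reclaiming space means
resetting, i.e. discarding, the whole zone. A shingle translation
layer exists to hide this append-only discipline from ordinary
block-device consumers.

    /* Write-pointer discipline of one SMR zone; illustration only. */
    #include <stdio.h>

    struct zone {
        long start;             /* first sector of the zone */
        long nsectors;          /* zone length in sectors */
        long wp;                /* next writable sector */
    };

    /* Accept a write only if it is an append at the write pointer. */
    static int
    zone_write(struct zone *z, long sector, long count)
    {
        if (sector != z->wp ||
            z->wp + count > z->start + z->nsectors)
            return -1;          /* a translation layer redirects this */
        z->wp += count;
        return 0;
    }

    static void
    zone_reset(struct zone *z)
    {
        z->wp = z->start;       /* discards everything in the zone */
    }

    int
    main(void)
    {
        struct zone z = { 0, 524288, 0 };   /* 256 MB, 512 B sectors */

        printf("append at wp: %d\n", zone_write(&z, 0, 8));  /* ok */
        printf("rewrite: %d\n", zone_write(&z, 0, 8));  /* refused */
        zone_reset(&z);
        printf("after reset: %d\n", zone_write(&z, 0, 8));   /* ok */
        return 0;
    }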

- As of November 2015 one of dholland's coworkers is looking at this.
- There is no clear timeframe or release target.
- Contact dholland for further information.


13. ext3/ext4 support
---------------------

We would like to be able to read and write Linux ext3fs and ext4fs
volumes. (We can already read clean ext3fs volumes as they're the same
as ext2fs, modulo volume features our ext2fs code does not support;
but we can't write them.)

Ideally someone would write ext3 and/or ext4 code, whether integrated
with or separate from the ext2 code we already have. It might also
make sense to port or wrap the Linux ext3 or ext4 code so it can be
loaded as a GPL'd kernel module; it isn't clear if that would be more
or less work than doing an implementation.

Note however that implementing ext3 has already defeated several
people; this is a harder project than it looks.

- As of November 2015 nobody is known to be working on this.
- There is no clear timeframe or release target.
- Contact ?? for further information.


14. Port hammer from Dragonfly
------------------------------

While the motivation for and role of hammer are perhaps not super
persuasive, it would still be good to have it. Porting it from
Dragonfly is probably not that painful (compared to, say, zfs), but
as the Dragonfly and NetBSD VFS layers have diverged in different
directions from the original 4.4BSD, it may not be entirely trivial
either.

- As of November 2015 nobody is known to be working on this.
- There is no clear timeframe or release target.
- There probably isn't any particular person to contact; for VFS
  concerns contact dholland or hannken.


15. afs maintenance
-------------------

AFS needs periodic care and feeding to continue working as NetBSD
changes, because the kernel-level bits aren't kept in the NetBSD tree
and don't get updated with other things. This is an ongoing issue that
always seems to need more manpower than it gets. It might make sense
to import some of the kernel AFS code, or maybe even just some of the
glue layer that it uses, in order to keep it more current.

- jakllsch sometimes works on this.
- We would like every release to have working AFS by the time it's
  released.
- Contact jakllsch or gendalia about AFS; for VFS concerns contact
  dholland or hannken.


16. execute-in-place
--------------------

It is likely that the future includes non-volatile storage (so-called
"nvram") that looks like RAM from the perspective of software. Most
importantly: the storage is memory-mapped rather than looking like a
disk controller. There are a number of things NetBSD ought to have to
be ready for this, of which probably the most important is
"execute-in-place": when an executable is run from such storage, and
mapped into user memory with mmap, the storage hardware pages should
be able to appear directly in user memory. Right now they get
gratuitously copied into RAM, which is slow and wasteful. There are
also other reasons (e.g. embedded device ROMs) to want execute-in-
place support.
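
For reference, the user-visible operation at issue is just mmap(2) (an
ordinary userland sketch, nothing XIP-specific): today the kernel
backs such a mapping with page-cache copies in RAM; with
execute-in-place on memory-like storage, the same mapping would point
at the medium's own pages.

    /* Map a file the way the loader maps an executable's text. */
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int
    main(int argc, char **argv)
    {
        struct stat st;
        const char *p;
        int fd;

        if (argc != 2 || (fd = open(argv[1], O_RDONLY)) == -1)
            return 1;
        if (fstat(fd, &st) == -1)
            return 1;

        /* This is roughly what execve(2) does for each segment. */
        p = mmap(NULL, (size_t)st.st_size, PROT_READ | PROT_EXEC,
            MAP_PRIVATE, fd, 0);
        if (p == MAP_FAILED)
            return 1;

        /* Faulting a page in is where the extra copy happens today. */
        printf("first byte: 0x%02x\n", (unsigned char)p[0]);

        munmap((void *)p, (size_t)st.st_size);
        close(fd);
        return 0;
    }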

Note that at the implementation level this is a UVM issue rather than
strictly a storage issue.

Also note that one does not need access to nvram hardware to work on
this issue; given the performance profiles touted for nvram
technologies, a plain RAM disk like md(4) is sufficient both
structurally and for performance analysis.

- As of November 2015 nobody is known to be working on this. Some
  time back, uebayasi wrote some preliminary patches, but they were
  rejected by the UVM maintainers.
- There is no clear timeframe or release target.
- Contact dholland for further information.


17. coda maintenance
--------------------

Coda only sort of works. [And I think it's behind relative to
upstream, or something of the sort; XXX fill this in.] Also the code
appears to have an ugly incestuous relationship with FFS. This should
really be cleaned up. That or maybe it's time to remove Coda.

- As of November 2015 nobody is known to be working on this.
- There is no clear timeframe or release target.
- There isn't anyone in particular to contact.


Alistair Crooks, David Holland
Fri Nov 20 02:17:53 EST 2015