$NetBSD: storage,v 1.13 2016/05/05 06:17:45 dholland Exp $

NetBSD Storage Roadmap
======================

This is a small roadmap document; it deals with the storage and file
systems side of the operating system. It discusses elements, projects,
and goals that are under development or under discussion, and it is
divided into three categories based on perceived priority.

The following elements, projects, and goals are considered strategic
priorities for the project:

 1. Improving iscsi
 2. nfsv4 support
 3. A better journaling file system solution
 4. Getting zfs working for real
 5. Seamless full-disk encryption
 6. Finish tls-maxphys

The following elements, projects, and goals are not strategic
priorities but are still important undertakings worth doing:

 7. nvme support
 8. lfs64
 9. Per-process namespaces
 10. lvm tidyup
 11. Flash translation layer
 12. Shingled disk support
 13. ext3/ext4 support
 14. Port hammer from Dragonfly
 15. afs maintenance
 16. execute-in-place

The following elements, projects, and goals are perhaps less pressing;
this doesn't mean one shouldn't work on them but the expected payoff
is perhaps less than for other things:

 17. coda maintenance


Explanations
============

1. Improving iscsi
------------------

Both the existing iscsi target and initiator are fairly bad code, and
neither works terribly well. Fixing this is fairly important, as iscsi
is the de facto standard for remote block devices. Note that there
appears to be no compelling reason to move the target to the kernel or
otherwise make major architectural changes.

 - As of November 2015 nobody is known to be working on this.
 - There is currently no clear timeframe or release target.
 - Contact agc for further information.


2. nfsv4 support
----------------

nfsv4 is at this point the de facto standard for FS-level (as opposed
to block-level) network volumes in production settings. The legacy nfs
code currently in NetBSD only supports nfsv2 and nfsv3.

The plan is to port FreeBSD's nfsv4 code, which also includes nfsv2
and nfsv3 support, and eventually transition to it completely,
dropping our current nfs code (which is kind of a mess). So far the
only step that has been taken is to import the code from FreeBSD. The
next step is to update that import (since it was done a while ago now)
and then work on getting it to configure and compile.

 - As of November 2015 nobody is working on this, and a volunteer to
   take charge is urgently needed.
 - There is no clear timeframe or release target, although having an
   experimental version ready for -8 would be great.
 - Contact dholland for further information.


3. A better journaling file system solution
-------------------------------------------

WAPBL, the journaling FFS that NetBSD rolled out some time back, has a
critical problem: it does not address the historic ffs behavior of
allowing stale on-disk data to leak into user files in crashes. (For
example, if a crash occurs after a file extension's metadata has been
journaled but before the new data blocks have been written, then after
journal replay the file contains whatever happened to be on disk in
those blocks, potentially including other users' deleted data.) And
because WAPBL runs faster, this happens more often and with more data.
This situation is both a correctness and a security liability. Fixing
it has turned out to be difficult. It is not really clear what the
best option at this point is:

+ Fixing WAPBL (e.g. to flush newly allocated/newly written blocks to
disk early) has been examined by several people who know the code base
and judged difficult. Also, some other problems have come to light
more recently, e.g. PR 50725, PR 47146, and a problem where truncating
large sparse files takes ~forever. Also see PR 45676. Still, it might
be the best way forward.

+ There is another journaling FFS: the Harvard one done by Margo
Seltzer's group some years back. We have a copy of this, but as it was
written in BSD/OS circa 1999 it needs a lot of merging, and then will
undoubtedly also need a certain amount of polishing to be ready for
production use. It does record-based rather than block-based
journaling and does not share the stale data problem.

+ We could bring back softupdates (in the softupdates-with-journaling
form found today in FreeBSD) -- this code is even more complicated
than the softupdates code we removed back in 2009, and it's not clear
that it's any more robust either. However, it would solve the stale
data problem if someone wanted to port it over. It isn't clear that
this would be any less work than getting the Harvard journaling FFS
running... or than writing a whole new file system either.

+ We could write a whole new journaling file system. (That is, not
FFS. Doing a new journaling FFS implementation is probably not
sensible relative to merging the Harvard journaling FFS.) This is a
big project.

Right now it is not clear which of these avenues is the best way
forward. Given the general manpower shortage, it may be that the best
way is whatever looks best to someone who wants to work on the
problem.

 - As of November 2015 nobody is working on fixing WAPBL. There has
   been some interest in the Harvard journaling FFS but no significant
   progress. Nobody is known to be working on or particularly
   interested in porting softupdates-with-journaling. And, while
   dholland has been mumbling for some time about a plan for a
   specific new file system to solve this problem, there isn't any
   realistic prospect of significant progress on that in the
   foreseeable future, and nobody else is known to have or be working
   on even that much.
 - There is no clear timeframe or release target; but given that WAPBL
   has been disabled by default for new installs in -7 this problem
   can reasonably be said to have become critical.
 - Contact joerg or martin regarding WAPBL; contact dholland regarding
   the Harvard journaling FFS.


4. Getting zfs working for real
-------------------------------

ZFS has been almost working for years now. It is high time we got it
really working. One of the things this entails is updating the ZFS
code, as what we have is rather old. The Illumos version is probably
what we want for this.

 - There has been intermittent work on zfs, but as of November 2015
   nobody is known to be actively working on it.
 - There is no clear timeframe or release target.
 - Contact riastradh or ?? for further information.


5. Seamless full-disk encryption
--------------------------------

(This is only sort of a storage issue.) We have cgd, and it is
believed to still be cryptographically suitable, at least for the time
being. However, we don't have any of the following things:

+ An easy way to install a machine with full-disk encryption. It
should really just be a checkbox item in sysinst, or not much more
than that.

+ Ideally, also an easy way to turn on full-disk encryption for a
machine that's already been installed, though this is harder.

+ A good story for booting off a disk that is otherwise encrypted;
obviously one cannot encrypt the bootblocks, but it isn't clear where
in the boot process the encrypted volume should take over, or how to
make a best effort at protecting the unencrypted elements needed to
boot. (At least, in the absence of something like UEFI secure boot
combined with a cryptographic oracle to sign your bootloader image so
UEFI will accept it.) There's also the question of how one runs
cgdconfig(8) and where the cgdconfig binary comes from.

+ A reasonable way to handle volume passphrases. MacOS apparently uses
login passwords for this (or as passphrases for secondary keys, or
something), and this seems to work well enough apart from the somewhat
surreal experience of sometimes having to log in twice. However, it
will complicate the bootup story.

Given the increasing regulatory-level importance of full-disk
encryption, this is at least a de facto requirement for using NetBSD
on laptops in many circumstances.

 - As of November 2015 nobody is known to be working on this.
 - There is no clear timeframe or release target.
 - Contact dholland for further information.


6. Finish tls-maxphys
---------------------

The tls-maxphys branch changes MAXPHYS (the maximum size of a single
I/O request) from a global fixed constant to a value that's probed
separately for each particular I/O channel based on its
capabilities. Large values are highly desirable for e.g. feeding large
disk arrays but do not work with all hardware.

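As a rough sketch of the idea (the structure, field, and function below
are hypothetical illustrations, not the actual tls-maxphys interfaces):

    /*
     * Illustrative only: instead of clamping every transfer to the
     * global MAXPHYS constant, clamp it to a limit probed separately
     * for each I/O channel.
     */
    #include <sys/param.h>                  /* MAXPHYS, MIN */

    struct io_channel {                     /* hypothetical */
            size_t ioc_maxphys;             /* probed from the adapter/device */
    };

    static size_t
    clamp_transfer(const struct io_channel *ioc, size_t len)
    {
            /* Fall back to the old global limit if nothing was probed. */
            size_t limit = (ioc->ioc_maxphys != 0) ? ioc->ioc_maxphys : MAXPHYS;

            return MIN(len, limit);
    }
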
The code is nearly done and just needs more testing and support in
more drivers.

 - As of November 2015 nobody is known to be working on this.
 - There is no clear timeframe or release target.
 - Contact tls for further information.


7. nvme support
---------------

nvme ("NVM Express") is a hardware interface standard for PCI-attached
SSDs. NetBSD now has a driver for these; however, it was ported from
OpenBSD and is not (yet) MPSAFE. This is, unfortunately, a fairly
serious limitation given the point and nature of nvme devices.

Relatedly, the I/O path needs to be restructured to avoid software
bottlenecks on the way to an nvme device: they are fast enough that
things like disksort() do not make sense.

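To make that concrete, here is a hedged sketch (the softc and function
are hypothetical; only the bufq(9) interface is real): a driver for a
device with no seek penalty would pick a plain first-come first-served
queue rather than "disksort".

    #include <sys/param.h>
    #include <sys/bufq.h>

    struct fastssd_softc {                  /* hypothetical softc */
            struct bufq_state *sc_bufq;
    };

    static int
    fastssd_init_queue(struct fastssd_softc *sc)
    {
            /* "fcfs": no rotational latency to hide, so don't sort. */
            return bufq_alloc(&sc->sc_bufq, "fcfs", BUFQ_SORT_RAWBLOCK);
    }
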
Semi-relatedly, it is also time for scsipi to become MPSAFE.

 - As of May 2016 a port of OpenBSD's driver has been committed. This
   will be in -8.
 - However, the driver still needs to be made MPSAFE, and we still
   need to attend to scsipi and various other I/O path bottlenecks.
 - There is no clear timeframe or release target for these points.
 - Contact msaitoh or agc for further information.


8. lfs64
--------

LFS currently only supports volumes up to 2 TB. As LFS is of interest
for use on shingled disks (which are larger than 2 TB) and also for
use on disk arrays (ditto), this is something of a problem. A 64-bit
version of LFS for large volumes is in the works.

 - As of November 2015 dholland is working on this.
 - It is close to being ready for at least experimental use and is
   expected to be in 8.0.
 - Responsible: dholland


9. Per-process namespaces
-------------------------

Support for per-process variation of the file system namespace enables
a number of things: more flexible chroots, for example, and also
potentially more efficient pkgsrc builds. dholland thought up a
somewhat hackish but low-footprint way to implement this.

 - As of November 2015 dholland is working on this.
 - It is scheduled to be in 8.0.
 - Responsible: dholland


10. lvm tidyup
--------------

[agc says someone should look at our lvm stuff; XXX fill this in]

 - As of November 2015 nobody is known to be working on this.
 - There is no clear timeframe or release target.
 - Contact agc for further information.


11. Flash translation layer
---------------------------

SSDs ship with firmware called a "flash translation layer" that
arbitrates between the block device software expects to see and the
raw flash chips. FTLs handle wear leveling, lifetime management, and
also internal caching, striping, and other performance concerns. While
NetBSD has a file system for raw flash (chfs), it seems that given the
things NetBSD is often used for, it ought to come with a flash
translation layer as well.

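Purely to illustrate the concept (this is a toy, not a proposal for an
implementation), the core of an FTL is an out-of-place-update mapping
from logical block addresses to physical flash pages:

    /*
     * Toy illustration only.  A real FTL also does garbage collection,
     * wear leveling, bad-block handling, and power-fail-safe metadata.
     */
    #include <stdint.h>

    #define TOY_NPAGES      1024

    struct toy_ftl {
            uint32_t l2p[TOY_NPAGES];       /* logical -> physical page map */
            uint32_t next_free;             /* next erased physical page */
    };

    /* Rewrites go to a fresh page; the old page becomes stale (to be GC'd). */
    static uint32_t
    toy_ftl_write(struct toy_ftl *ftl, uint32_t lpage)
    {
            uint32_t ppage = ftl->next_free++;

            ftl->l2p[lpage] = ppage;
            return ppage;
    }
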
Note that this is an area where writing your own is probably a bad
plan; it is a complicated area with a lot of prior art that's also
reportedly full of patent mines. There are a couple of open FTL
implementations that we might be able to import.

 - As of November 2015 nobody is known to be working on this.
 - There is no clear timeframe or release target.
 - Contact dholland for further information.


12. Shingled disk support
-------------------------

Shingled disks (or more technically, disks with "shingled magnetic
recording" or SMR) can only write whole tracks at once. Thus, to
operate effectively they require translation support similar to the
flash translation layers found in SSDs. The nature and structure of
shingle translation layers is still being researched; however, at some
point we will want to support these things in NetBSD.

 - As of November 2015 one of dholland's coworkers is looking at this.
 - There is no clear timeframe or release target.
 - Contact dholland for further information.


13. ext3/ext4 support
---------------------

We would like to be able to read and write Linux ext3fs and ext4fs
volumes. (We can already read clean ext3fs volumes as they're the same
as ext2fs, modulo volume features our ext2fs code does not support;
but we can't write them.)

Ideally someone would write ext3 and/or ext4 code, whether integrated
with or separate from the ext2 code we already have. It might also
make sense to port or wrap the Linux ext3 or ext4 code so it can be
loaded as a GPL'd kernel module; it isn't clear if that would be more
or less work than writing our own implementation.

Note, however, that implementing ext3 has already defeated several
people; this is a harder project than it looks.

 - As of May 2016 there is a GSoC project to implement read-only ext4
   support, but (it not being summer yet) no particular progress.
 - There is no clear timeframe or release target.
 - Contact ?? for further information.


14. Port hammer from Dragonfly
------------------------------

While the motivation for and role of hammer are perhaps not super
persuasive, it would still be good to have it. Porting it from
Dragonfly is probably not that painful (compared to, say, zfs), but as
the Dragonfly and NetBSD VFS layers have diverged in different
directions from the original 4.4BSD, it may not be entirely trivial
either.

 - As of November 2015 nobody is known to be working on this.
 - There is no clear timeframe or release target.
 - There probably isn't any particular person to contact; for VFS
   concerns contact dholland or hannken.


15. afs maintenance
-------------------

AFS needs periodic care and feeding to continue working as NetBSD
changes, because the kernel-level bits aren't kept in the NetBSD tree
and don't get updated with other things. This is an ongoing issue that
always seems to need more manpower than it gets. It might make sense
to import some of the kernel AFS code, or maybe even just some of the
glue layer that it uses, in order to keep it more current.

 - jakllsch sometimes works on this.
 - We would like every release to have working AFS by the time it's
   released.
 - Contact jakllsch or gendalia about AFS; for VFS concerns contact
   dholland or hannken.


16. execute-in-place
--------------------

It is likely that the future includes non-volatile storage (so-called
"nvram") that looks like RAM from the perspective of software. Most
importantly: the storage is memory-mapped rather than looking like a
disk controller. There are a number of things NetBSD ought to have to
be ready for this, of which probably the most important is
"execute-in-place": when an executable is run from such storage, and
mapped into user memory with mmap, the storage hardware pages should
be able to appear directly in user memory. Right now they get
gratuitously copied into RAM, which is slow and wasteful. There are
also other reasons (e.g. embedded device ROMs) to want
execute-in-place support.

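From the user level the interface involved is just ordinary mmap(2);
what execute-in-place changes is what backs the mapping. A minimal
sketch (the path is made up) of the kind of mapping whose pages should
come straight from the storage hardware rather than from copies in RAM:

    #include <sys/mman.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <stdio.h>

    int
    main(void)
    {
            int fd = open("/nvram/prog", O_RDONLY);     /* hypothetical path */
            if (fd == -1)
                    return 1;

            /* With XIP, these pages would be the storage pages themselves. */
            void *p = mmap(NULL, 4096, PROT_READ | PROT_EXEC, MAP_PRIVATE,
                fd, 0);
            if (p == MAP_FAILED) {
                    close(fd);
                    return 1;
            }

            printf("mapped at %p\n", p);
            munmap(p, 4096);
            close(fd);
            return 0;
    }
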
Note that at the implementation level this is a UVM issue rather than
strictly a storage issue.

Also note that one does not need access to nvram hardware to work on
this issue; given the performance profiles touted for nvram
technologies, a plain RAM disk like md(4) is sufficient both
structurally and for performance analysis.

 - As of November 2015 nobody is known to be working on this. Some
   time back, uebayasi wrote some preliminary patches, but they were
   rejected by the UVM maintainers.
 - There is no clear timeframe or release target.
 - Contact dholland for further information.


17. coda maintenance
--------------------

Coda only sort of works. [And I think it's behind relative to
upstream, or something of the sort; XXX fill this in.] Also the code
appears to have an ugly incestuous relationship with FFS. This should
really be cleaned up. That or maybe it's time to remove Coda.

 - As of November 2015 nobody is known to be working on this.
 - There is no clear timeframe or release target.
 - There isn't anyone in particular to contact.


Alistair Crooks, David Holland
Fri Nov 20 02:17:53 EST 2015
Sun May  1 16:50:42 EDT 2016 (some updates)