$NetBSD: storage,v 1.12 2016/05/01 20:51:36 dholland Exp $

NetBSD Storage Roadmap
======================

This is a small roadmap document, and deals with the storage and file
systems side of the operating system. It discusses elements, projects,
and goals that are under development or under discussion; and it is
divided into three categories based on perceived priority.

The following elements, projects, and goals are considered strategic
priorities for the project:

 1. Improving iscsi
 2. nfsv4 support
 3. A better journaling file system solution
 4. Getting zfs working for real
 5. Seamless full-disk encryption
 6. Finish tls-maxphys

The following elements, projects, and goals are not strategic
priorities but are still important undertakings worth doing:

 7. nvme support
 8. lfs64
 9. Per-process namespaces
 10. lvm tidyup
 11. Flash translation layer
 12. Shingled disk support
 13. ext3/ext4 support
 14. Port hammer from Dragonfly
 15. afs maintenance
 16. execute-in-place

The following elements, projects, and goals are perhaps less pressing;
this doesn't mean one shouldn't work on them, but the expected payoff
is perhaps less than for other things:

 17. coda maintenance


Explanations
============

1. Improving iscsi
------------------

Both the existing iscsi target and initiator are fairly bad code, and
neither works terribly well. Fixing this is fairly important, as iscsi
is the de facto standard for remote block devices. Note that there
appears to be no compelling reason to move the target to the kernel or
otherwise make major architectural changes.

 - As of November 2015 nobody is known to be working on this.
 - There is currently no clear timeframe or release target.
 - Contact agc for further information.


2. nfsv4 support
----------------

nfsv4 is at this point the de facto standard for FS-level (as opposed
to block-level) network volumes in production settings. The legacy nfs
code currently in NetBSD only supports nfsv2 and nfsv3.

The intended plan is to port FreeBSD's nfsv4 code, which also includes
nfsv2 and nfsv3 support, and eventually transition to it completely,
dropping our current nfs code. (Which is kind of a mess.) So far the
only step that has been taken is to import the code from FreeBSD. The
next step is to update that import (since it was done a while ago now)
and then work on getting it to configure and compile.

 - As of November 2015 nobody is working on this, and a volunteer to
   take charge is urgently needed.
 - There is no clear timeframe or release target, although having an
   experimental version ready for -8 would be great.
 - Contact dholland for further information.


3. A better journaling file system solution
-------------------------------------------

WAPBL, the journaling FFS that NetBSD rolled out some time back, has a
critical problem: it does not address the historic ffs behavior of
allowing stale on-disk data to leak into user files in crashes. (That
is, blocks newly added to a file shortly before a crash can turn out
to contain whatever unrelated data previously occupied them on disk,
because the metadata pointing at them becomes durable before the data
itself is written; a toy illustration appears at the end of this
item.) And because WAPBL runs faster, this happens more often and with
more data. This situation is both a correctness and a security
liability. Fixing it has turned out to be difficult. It is not really
clear what the best option at this point is:

+ Fixing WAPBL (e.g. to flush newly allocated/newly written blocks to
disk early) has been examined by several people who know the code base
and judged difficult. Still, it might be the best way forward.

+ There is another journaling FFS: the Harvard one done by Margo
Seltzer's group some years back. We have a copy of this, but as it was
written in BSD/OS circa 1999 it needs a lot of merging, and then will
undoubtedly also need a certain amount of polishing to be ready for
production use. It does record-based rather than block-based
journaling and does not share the stale data problem.

+ We could bring back softupdates (in the softupdates-with-journaling
form found today in FreeBSD) -- this code is even more complicated
than the softupdates code we removed back in 2009, and it's not clear
that it's any more robust either. However, it would solve the stale
data problem if someone wanted to port it over. It isn't clear that
this would be any less work than getting the Harvard journaling FFS
running... or than writing a whole new file system either.

+ We could write a whole new journaling file system. (That is, not
FFS. Doing a new journaling FFS implementation is probably not
sensible relative to merging the Harvard journaling FFS.) This is a
big project.

Right now it is not clear which of these avenues is the best way
forward. Given the general manpower shortage, it may be that the best
way is whatever looks best to someone who wants to work on the
problem.

 - As of November 2015 nobody is working on fixing WAPBL. There has
   been some interest in the Harvard journaling FFS but no significant
   progress. Nobody is known to be working on or particularly
   interested in porting softupdates-with-journaling. And, while
   dholland has been mumbling for some time about a plan for a
   specific new file system to solve this problem, there isn't any
   realistic prospect of significant progress on that in the
   foreseeable future, and nobody else is known to have or be working
   on even that much.
 - There is no clear timeframe or release target; but given that WAPBL
   has been disabled by default for new installs in -7, this problem
   can reasonably be said to have become critical.
 - Contact joerg or martin regarding WAPBL; contact dholland regarding
   the Harvard journaling FFS.
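
To make the stale-data window in item 3 concrete, here is a toy
user-level simulation. It is not WAPBL code and all the names in it
are made up; it only demonstrates the ordering problem: the metadata
update (the block pointer) becomes durable before the data block is
written, so a crash in between leaves the file pointing at whatever
the block used to contain.

    /* Toy illustration only; no relation to the real WAPBL code. */
    #include <stdio.h>
    #include <string.h>

    #define BLKSIZE 8
    #define NBLOCKS 4

    static char disk[NBLOCKS][BLKSIZE]; /* the "platter" */
    static int filemap[NBLOCKS];        /* the file's block pointers */
    static int filelen = 0;             /* number of blocks in the file */

    static void
    append_block(const char *data, int crash_before_data_write)
    {
        int blk = filelen;      /* "allocate" the next free block */

        /*
         * Step 1: update the metadata (block pointer).  In the real
         * system this is what the journal makes durable at commit.
         */
        filemap[filelen++] = blk;

        /* Step 2: write the data block... unless we crash first. */
        if (!crash_before_data_write)
            memcpy(disk[blk], data, BLKSIZE);
    }

    int
    main(void)
    {
        /* The disk still holds somebody's previously deleted data. */
        memset(disk, 'S', sizeof(disk));

        append_block("newdata", 1);  /* crash before the data write */

        /*
         * After replay the metadata is consistent, but reading the
         * file exposes the stale block contents.
         */
        printf("file block 0: %.8s\n", disk[filemap[0]]);
        return 0;
    }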


4. Getting zfs working for real
-------------------------------

ZFS has been almost working for years now. It is high time we got it
really working. One of the things this entails is updating the ZFS
code, as what we have is rather old. The Illumos version is probably
what we want for this.

 - There has been intermittent work on zfs, but as of November 2015
   nobody is known to be actively working on it.
 - There is no clear timeframe or release target.
 - Contact riastradh or ?? for further information.


5. Seamless full-disk encryption
--------------------------------

(This is only sort of a storage issue.) We have cgd, and it is
believed to still be cryptographically suitable, at least for the time
being. However, we don't have any of the following things:

+ An easy way to install a machine with full-disk encryption. It
should really just be a checkbox item in sysinst, or not much more
than that.

+ Ideally, also an easy way to turn on full-disk encryption for a
machine that's already been installed, though this is harder.

+ A good story for booting off a disk that is otherwise encrypted;
obviously one cannot encrypt the bootblocks, but it isn't clear where
in boot the encrypted volume should take over, or how to make a best
effort at protecting the unencrypted elements needed to boot. (At
least, in the absence of something like UEFI secure boot combined with
a cryptographic oracle to sign your bootloader image so UEFI will
accept it.) There's also the question of how one runs cgdconfig(8) and
where the cgdconfig binary comes from.

+ A reasonable way to handle volume passphrases. MacOS apparently uses
login passwords for this (or as passphrases for secondary keys, or
something) and this seems to work well enough apart from the somewhat
surreal experience of sometimes having to log in twice. However, it
will complicate the bootup story.

Given the increasing regulatory-level importance of full-disk
encryption, this is at least a de facto requirement for using NetBSD
on laptops in many circumstances.

 - As of November 2015 nobody is known to be working on this.
 - There is no clear timeframe or release target.
 - Contact dholland for further information.


6. Finish tls-maxphys
---------------------

The tls-maxphys branch changes MAXPHYS (the maximum size of a single
I/O request) from a global fixed constant to a value that's probed
separately for each particular I/O channel based on its
capabilities. Large values are highly desirable for e.g. feeding large
disk arrays but do not work with all hardware.

The code is nearly done and just needs more testing and support in
more drivers. A minimal sketch of the idea appears at the end of this
item.

 - As of November 2015 nobody is known to be working on this.
 - There is no clear timeframe or release target.
 - Contact tls for further information.
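
As a minimal sketch of the per-channel limit described in item 6 (this
is illustration only, not code from the tls-maxphys branch; the
structure and function names are hypothetical):

    #include <stdio.h>
    #include <stddef.h>

    #ifndef MAXPHYS
    #define MAXPHYS (64 * 1024)   /* stand-in for the global constant */
    #endif

    struct io_channel {
        size_t ch_maxphys;   /* limit probed for this channel; 0 = unknown */
    };

    /* Clamp a transfer to what this particular I/O channel can take. */
    static size_t
    clamp_transfer(const struct io_channel *ch, size_t want)
    {
        size_t limit = (ch->ch_maxphys != 0) ? ch->ch_maxphys : MAXPHYS;

        return (want < limit) ? want : limit;
    }

    int
    main(void)
    {
        struct io_channel old_ide = { .ch_maxphys = 0 };
        struct io_channel big_array = { .ch_maxphys = 1024 * 1024 };

        printf("1MB request -> %zu on old_ide, %zu on big_array\n",
            clamp_transfer(&old_ide, 1024 * 1024),
            clamp_transfer(&big_array, 1024 * 1024));
        return 0;
    }

The point is simply that a probed value lets a capable controller see
the whole request, while hardware that cannot handle large transfers
still gets a conservative limit.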


7. nvme support
---------------

nvme ("NVM Express") is a hardware interface standard for PCI-attached
SSDs. NetBSD now has a driver for these; however, it was ported from
OpenBSD and is not (yet) MPSAFE. This is, unfortunately, a fairly
serious limitation given the point and nature of nvme devices.

Relatedly, the I/O path needs to be restructured to avoid software
bottlenecks on the way to an nvme device: they are fast enough that
things like disksort() do not make sense.

Semi-relatedly, it is also time for scsipi to become MPSAFE.

 - As of May 2016 a port of OpenBSD's driver has been committed. This
   will be in -8.
 - However, the driver still needs to be made MPSAFE, and we still
   need to attend to scsipi and various other I/O path bottlenecks.
 - There is no clear timeframe or release target for these points.
 - Contact msaitoh or agc for further information.


8. lfs64
--------

LFS currently only supports volumes up to 2 TB. As LFS is of interest
for use on shingled disks (which are larger than 2 TB) and also for
use on disk arrays (ditto), this is something of a problem. A 64-bit
version of LFS for large volumes is in the works.

 - As of November 2015 dholland is working on this.
 - It is close to being ready for at least experimental use and is
   expected to be in 8.0.
 - Responsible: dholland


9. Per-process namespaces
-------------------------

Support for per-process variation of the file system namespace enables
a number of things: more flexible chroots, for example, and also
potentially more efficient pkgsrc builds. dholland thought up a
somewhat hackish but low-footprint way to implement this.

 - As of November 2015 dholland is working on this.
 - It is scheduled to be in 8.0.
 - Responsible: dholland


10. lvm tidyup
--------------

[agc says someone should look at our lvm stuff; XXX fill this in]

 - As of November 2015 nobody is known to be working on this.
 - There is no clear timeframe or release target.
 - Contact agc for further information.


11. Flash translation layer
---------------------------

SSDs ship with firmware called a "flash translation layer" that
arbitrates between the block device that software expects to see and
the raw flash chips. FTLs handle wear leveling, lifetime management,
and also internal caching, striping, and other performance concerns.
While NetBSD has a file system for raw flash (chfs), it seems that
given the things NetBSD is often used for it ought to come with a
flash translation layer as well. (A toy sketch of the basic idea
appears at the end of this item.)

Note that this is an area where writing your own is probably a bad
plan; it is a complicated area with a lot of prior art that's also
reportedly full of patent mines. There are a couple of open FTL
implementations that we might be able to import.

 - As of November 2015 nobody is known to be working on this.
 - There is no clear timeframe or release target.
 - Contact dholland for further information.
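
As a toy illustration of item 11 (this is not any existing FTL, and a
real one must also handle erase blocks, garbage collection, and
power-loss recovery): the core job is to redirect each logical block
write to a fresh flash page, because raw flash cannot be rewritten in
place, and to spread those writes across pages for wear leveling.

    #include <stdio.h>

    #define NPAGES 8
    #define NLBAS  4

    static int l2p[NLBAS];          /* logical block -> physical page */
    static int page_state[NPAGES];  /* 0 = free, 1 = live, 2 = stale */
    static int erase_count[NPAGES]; /* crude wear-leveling input */

    /* Pick the least-worn free page: trivial wear leveling. */
    static int
    pick_free_page(void)
    {
        int best = -1, i;

        for (i = 0; i < NPAGES; i++)
            if (page_state[i] == 0 &&
                (best == -1 || erase_count[i] < erase_count[best]))
                best = i;
        return best;
    }

    /* "Overwrite" a logical block by remapping it to a new page. */
    static void
    ftl_write(int lba)
    {
        int newpage = pick_free_page();

        if (newpage == -1)
            return;                    /* no space; a real FTL would GC */
        if (l2p[lba] != -1)
            page_state[l2p[lba]] = 2;  /* old copy becomes stale */
        page_state[newpage] = 1;
        l2p[lba] = newpage;
        printf("LBA %d now lives in flash page %d\n", lba, newpage);
    }

    int
    main(void)
    {
        int i;

        for (i = 0; i < NLBAS; i++)
            l2p[i] = -1;

        ftl_write(0);   /* first write of logical block 0 */
        ftl_write(0);   /* rewriting it moves it to another page */
        return 0;
    }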


12. Shingled disk support
-------------------------

Shingled disks (or more technically, disks with "shingled magnetic
recording" or SMR) can only write whole tracks at once. Thus, to
operate effectively they require translation support similar to the
flash translation layers found in SSDs. The nature and structure of
such translation layers is still being researched; however, at some
point we will want to support these things in NetBSD.

 - As of November 2015 one of dholland's coworkers is looking at this.
 - There is no clear timeframe or release target.
 - Contact dholland for further information.


13. ext3/ext4 support
---------------------

We would like to be able to read and write Linux ext3fs and ext4fs
volumes. (We can already read clean ext3fs volumes as they're the same
as ext2fs, modulo volume features our ext2fs code does not support;
but we can't write them.)

Ideally someone would write ext3 and/or ext4 code, whether integrated
with or separate from the ext2 code we already have. It might also
make sense to port or wrap the Linux ext3 or ext4 code so it can be
loaded as a GPL'd kernel module; it isn't clear if that would be more
or less work than doing an implementation.

Note however that implementing ext3 has already defeated several
people; this is a harder project than it looks.

 - As of May 2016 there is a GSoC project to implement read-only ext4
   support, but (it not being summer yet) no particular progress.
 - There is no clear timeframe or release target.
 - Contact ?? for further information.


14. Port hammer from Dragonfly
------------------------------

While the motivation for and role of hammer are perhaps not super
persuasive, it would still be good to have it. Porting it from
Dragonfly is probably not that painful (compared to, say, zfs), but as
the Dragonfly and NetBSD VFS layers have diverged in different
directions from the original 4.4BSD, it may not be entirely trivial
either.

 - As of November 2015 nobody is known to be working on this.
 - There is no clear timeframe or release target.
 - There probably isn't any particular person to contact; for VFS
   concerns contact dholland or hannken.


15. afs maintenance
-------------------

AFS needs periodic care and feeding to continue working as NetBSD
changes, because the kernel-level bits aren't kept in the NetBSD tree
and don't get updated with other things. This is an ongoing issue that
always seems to need more manpower than it gets. It might make sense
to import some of the kernel AFS code, or maybe even just some of the
glue layer that it uses, in order to keep it more current.

 - jakllsch sometimes works on this.
 - We would like every release to have working AFS by the time it
   ships.
 - Contact jakllsch or gendalia about AFS; for VFS concerns contact
   dholland or hannken.


16. execute-in-place
--------------------

It is likely that the future includes non-volatile storage (so-called
"nvram") that looks like RAM from the perspective of software. Most
importantly: the storage is memory-mapped rather than looking like a
disk controller. There are a number of things NetBSD ought to have to
be ready for this, of which probably the most important is
"execute-in-place": when an executable is run from such storage, and
mapped into user memory with mmap, the storage hardware pages should
be able to appear directly in user memory. Right now they get
gratuitously copied into RAM, which is slow and wasteful. There are
also other reasons (e.g. embedded device ROMs) to want
execute-in-place support. (A small user-level illustration of the
mapping path involved appears at the end of this item.)

Note that at the implementation level this is a UVM issue rather than
strictly a storage issue.

Also note that one does not need access to nvram hardware to work on
this issue; given the performance profiles touted for nvram
technologies, a plain RAM disk like md(4) is sufficient both
structurally and for performance analysis.

 - As of November 2015 nobody is known to be working on this. Some
   time back, uebayasi wrote some preliminary patches, but they were
   rejected by the UVM maintainers.
 - There is no clear timeframe or release target.
 - Contact dholland for further information.
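
The following ordinary POSIX program shows the user-level view of the
path that execute-in-place would optimize in item 16; it simply maps a
file (the kernel image by default, just as a convenient large
read-only file) and touches it. The XIP work itself would live in UVM,
not in code like this.

    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int
    main(int argc, char **argv)
    {
        const char *path = (argc > 1) ? argv[1] : "/netbsd";
        struct stat st;
        void *p;
        int fd;

        if ((fd = open(path, O_RDONLY)) == -1 || fstat(fd, &st) == -1) {
            perror(path);
            return 1;
        }

        /*
         * Today, faulting on this mapping copies file pages into RAM
         * through the page cache.  With execute-in-place on
         * memory-like storage, the device's own pages could appear
         * in the mapping directly.  For a program's text, execve()
         * sets up an equivalent mapping.
         */
        p = mmap(NULL, (size_t)st.st_size, PROT_READ, MAP_SHARED, fd, 0);
        if (p == MAP_FAILED) {
            perror("mmap");
            return 1;
        }

        printf("first byte of %s: 0x%02x\n", path, *(unsigned char *)p);
        munmap(p, (size_t)st.st_size);
        close(fd);
        return 0;
    }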
Porting it from 328Dragonfly is probably not that painful (compared to, say, zfs) but as 329the Dragonfly and NetBSD VFS layers have diverged in different 330directions from the original 4.4BSD, may not be entirely trivial 331either. 332 333 - As of November 2015 nobody is known to be working on this. 334 - There is no clear timeframe or release target. 335 - There probably isn't any particular person to contact; for VFS 336 concerns contact dholland or hannken. 337 338 33915. afs maintenance 340------------------- 341 342AFS needs periodic care and feeding to continue working as NetBSD 343changes, because the kernel-level bits aren't kept in the NetBSD tree 344and don't get updated with other things. This is an ongoing issue that 345always seems to need more manpower than it gets. It might make sense 346to import some of the kernel AFS code, or maybe even just some of the 347glue layer that it uses, in order to keep it more current. 348 349 - jakllsch sometimes works on this. 350 - We would like every release to have working AFS by the time it's 351 released. 352 - Contact jakllsch or gendalia about AFS; for VFS concerns contact 353 dholland or hannken. 354 355 35616. execute-in-place 357-------------------- 358 359It is likely that the future includes non-volatile storage (so-called 360"nvram") that looks like RAM from the perspective of software. Most 361importantly: the storage is memory-mapped rather than looking like a 362disk controller. There are a number of things NetBSD ought to have to 363be ready for this, of which probably the most important is 364"execute-in-place": when an executable is run from such storage, and 365mapped into user memory with mmap, the storage hardware pages should 366be able to appear directly in user memory. Right now they get 367gratuitously copied into RAM, which is slow and wasteful. There are 368also other reasons (e.g. embedded device ROMs) to want execute-in- 369place support. 370 371Note that at the implementation level this is a UVM issue rather than 372strictly a storage issue. 373 374Also note that one does not need access to nvram hardware to work on 375this issue; given the performance profiles touted for nvram 376technologies, a plain RAM disk like md(4) is sufficient both 377structurally and for performance analysis. 378 379 - As of November 2015 nobody is known to be working on this. Some 380 time back, uebayasi wrote some preliminary patches, but they were 381 rejected by the UVM maintainers. 382 - There is no clear timeframe or release target. 383 - Contact dholland for further information. 384 385 38617. coda maintenance 387-------------------- 388 389Coda only sort of works. [And I think it's behind relative to 390upstream, or something of the sort; XXX fill this in.] Also the code 391appears to have an ugly incestuous relationship with FFS. This should 392really be cleaned up. That or maybe it's time to remove Coda. 393 394 - As of November 2015 nobody is known to be working on this. 395 - There is no clear timeframe or release target. 396 - There isn't anyone in particular to contact. 397 398 399Alistair Crooks, David Holland 400Fri Nov 20 02:17:53 EST 2015 401Sun May 1 16:50:42 EDT 2016 (some updates) 402 403