$NetBSD: storage,v 1.22 2017/04/19 21:48:58 jdolecek Exp $

NetBSD Storage Roadmap
======================

This is a small roadmap document, and deals with the storage and file
systems side of the operating system. It discusses elements, projects,
and goals that are under development or under discussion; and it is
divided into three categories based on perceived priority.

The following elements, projects, and goals are considered strategic
priorities for the project:

 1. Improving iscsi
 2. nfsv4 support
 3. A better journaling file system solution
 4. Getting zfs working for real
 5. Seamless full-disk encryption
 6. Finish tls-maxphys

The following elements, projects, and goals are not strategic
priorities but are still important undertakings worth doing:

 7. nvme support
 8. lfs64
 9. Per-process namespaces
 10. lvm tidyup
 11. Flash translation layer
 12. Shingled disk support
 13. ext3/ext4 support
 14. Port hammer from Dragonfly
 15. afs maintenance
 16. execute-in-place
 17. extended attributes for acl and capability storage

The following elements, projects, and goals are perhaps less pressing;
this doesn't mean one shouldn't work on them but the expected payoff
is perhaps less than for other things:

 18. coda maintenance


Explanations
============

1. Improving iscsi
------------------

Both the existing iscsi target and initiator are fairly bad code, and
neither works terribly well. Fixing this is fairly important as iscsi
is where it's at for remote block devices. Note that there appears to
be no compelling reason to move the target to the kernel or otherwise
make major architectural changes.

 - As of January 2017 nobody is known to be working on this.
 - There is currently no clear timeframe or release target.
 - Contact agc for further information.


2. nfsv4 support
----------------

nfsv4 is at this point the de facto standard for FS-level (as opposed
to block-level) network volumes in production settings. The legacy nfs
code currently in NetBSD only supports nfsv2 and nfsv3.

The intended plan is to port FreeBSD's nfsv4 code, which also includes
nfsv2 and nfsv3 support, and eventually transition to it completely,
dropping our current nfs code. (Which is kind of a mess.) So far the
only step that has been taken is to import the code from FreeBSD. The
next step is to update that import (since it was done a while ago now)
and then work on getting it to configure and compile.

 - As of January 2017 pgoyette has done a bit of prodding of the code
   recently, but otherwise nobody is working on this, and a volunteer
   to take charge and move it forward rapidly is urgently needed.
 - There is no clear timeframe or release target, although having an
   experimental version ready for -8 would be great.
 - Contact dholland for further information.


3. A better journaling file system solution
-------------------------------------------

WAPBL, the journaling FFS that NetBSD rolled out some time back, has a
critical problem: it does not address the historic ffs behavior of
allowing stale on-disk data to leak into user files in crashes. And
because it runs faster, this happens more often and with more data.
This situation is both a correctness and a security liability. Fixing
it has turned out to be difficult. It is not really clear what the
best option at this point is:

+ Fixing WAPBL (e.g. by flushing newly allocated/newly written blocks
to disk early) has been examined by several people who know the code
base and judged difficult. Also, some other problems have come to
light more recently, e.g. PR 50725 and PR 45676. Still, it might be
the best way forward.

+ There is another journaling FFS: the Harvard one done by Margo
Seltzer's group some years back. We have a copy of this, but as it was
written in BSD/OS circa 1999 it needs a lot of merging, and then will
undoubtedly also need a certain amount of polishing to be ready for
production use. It does record-based rather than block-based
journaling and does not share the stale data problem.

+ We could bring back softupdates (in the softupdates-with-journaling
form found today in FreeBSD) -- this code is even more complicated
than the softupdates code we removed back in 2009, and it's not clear
that it's any more robust either. However, it would solve the stale
data problem if someone wanted to port it over. It isn't clear that
this would be any less work than getting the Harvard journaling FFS
running... or than writing a whole new file system either.

+ We could write a whole new journaling file system. (That is, not
FFS. Doing a new journaling FFS implementation is probably not
sensible relative to merging the Harvard journaling FFS.) This is a
big project.

Right now it is not clear which of these avenues is the best way
forward. Given the general manpower shortage, it may be that the best
way is whatever looks best to someone who wants to work on the
problem.

 - There has been some interest in the Harvard journaling FFS but no
   significant progress. Nobody is known to be working on or
   particularly interested in porting softupdates-with-journaling.
   And, while dholland has been mumbling for some time about a plan
   for a specific new file system to solve this problem, there isn't
   any realistic prospect of significant progress on that in the
   foreseeable future, and nobody else is known to have or be working
   on even that much.
 - There is no clear timeframe or release target; but given that WAPBL
   has been disabled by default for new installs in -7, this problem
   can reasonably be said to have become critical.
 - jdolecek is working on fixing WAPBL; the goal is to get it fixed
   enough to be safe to re-enable as the default for -8.
 - Contact joerg or martin regarding WAPBL; contact dholland regarding
   the Harvard journaling FFS.


4. Getting zfs working for real
-------------------------------

ZFS has been almost working for years now. It is high time we got it
really working. One of the things this entails is updating the ZFS
code, as what we have is rather old. The Illumos version is probably
what we want for this.

 - There has been intermittent work on zfs, but as of January 2017
   nobody is known to be actively working on it.
 - There is no clear timeframe or release target.
 - Contact riastradh or ?? for further information.


5. Seamless full-disk encryption
--------------------------------

(This is only sort of a storage issue.) We have cgd, and it is
believed to still be cryptographically suitable, at least for the time
being. However, we don't have any of the following things:

+ An easy way to install a machine with full-disk encryption. It
should really just be a checkbox item in sysinst, or not much more
than that.

+ Ideally, also an easy way to turn on full-disk encryption for a
machine that's already been installed, though this is harder.

+ A good story for booting off a disk that is otherwise encrypted;
obviously one cannot encrypt the bootblocks, but it isn't clear where
in boot the encrypted volume should take over, or how to make a best
effort at protecting the unencrypted elements needed to boot. (At
least, in the absence of something like UEFI secure boot combined with
a cryptographic oracle to sign your bootloader image so UEFI will
accept it.) There's also the question of how one runs cgdconfig(8) and
where the cgdconfig binary comes from.

+ A reasonable way to handle volume passphrases. MacOS apparently uses
login passwords for this (or as passphrases for secondary keys, or
something) and this seems to work well enough apart from the somewhat
surreal experience of sometimes having to log in twice. However, it
will complicate the bootup story.

Given the increasing regulatory-level importance of full-disk
encryption, this is at least a de facto requirement for using NetBSD
on laptops in many circumstances.

 - As of January 2017 nobody is known to be working on this.
 - There is no clear timeframe or release target.
 - Contact dholland for further information.


6. Finish tls-maxphys
---------------------

The tls-maxphys branch changes MAXPHYS (the maximum size of a single
I/O request) from a global fixed constant to a value that's probed
separately for each particular I/O channel based on its capabilities.
Large values are highly desirable for e.g. feeding large disk arrays
and SSDs, but do not work with all hardware.

The code is nearly done and just needs more testing and support in
more drivers. A rough sketch of the per-device clamp idea follows
below.

 - As of January 2017 nobody is known to be working on this.
 - There is no clear timeframe or release target.
 - Contact tls for further information.
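
To make the idea concrete, here is a rough and entirely hypothetical
sketch, in the spirit of the minphys()-style clamps drivers already
provide; the xx_softc layout and the sc_maxphys field are invented for
illustration and are not taken from the tls-maxphys branch:

    /*
     * Hypothetical per-device clamp in the style of minphys().  The
     * sc_maxphys field stands in for a limit probed at attach time.
     */
    #include <sys/param.h>
    #include <sys/buf.h>

    struct xx_softc {
            int sc_maxphys;    /* probed per-channel transfer limit */
    };

    static void
    xx_minphys(struct xx_softc *sc, struct buf *bp)
    {
            /* Clamp the request to what this particular I/O channel
             * can handle, rather than to the global MAXPHYS. */
            if (bp->b_bcount > sc->sc_maxphys)
                    bp->b_bcount = sc->sc_maxphys;
    }

Note that the existing minphys() callbacks take only the buf, so part
of the real work is plumbing a per-device limit through the I/O path
at all.

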
7. nvme support
---------------

nvme ("NVM Express") is a hardware interface standard for PCI-attached
SSDs. NetBSD now has a driver for these.

The driver is now MPSAFE and already uses bufq fcfs (i.e. no
disksort()), so the most obvious software bottlenecks have been
addressed. It still needs more testing on real hardware, and it may be
good to investigate some further optimizations, such as DragonFly's
pbuf(9) or something similar.

Semi-relatedly, it is also time for scsipi to become MPSAFE.

 - As of May 2016 a port of OpenBSD's driver has been committed. This
   will be in -8.
 - The nvme driver is a backend to ld(4), which is MPSAFE, but we
   still need to attend to I/O path bottlenecks. Better
   instrumentation is needed.
 - Flush cache commands issued via DIOCCACHESYNC currently don't wait
   for completion. The driver must not poll, since polling corrupts
   the command queue, but it should use a condition variable to wait
   for the flush to actually finish (see the sketch after this list).
 - There is no clear timeframe or release target for these points.
 - Contact msaitoh or agc for further information.
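
As a rough illustration of the condition variable approach, the sketch
below shows the general shape such a fix might take using the kernel's
mutex(9) and condvar(9) primitives. All of the names (the xx_softc
fields, xx_flush_issue(), xx_flush_done()) are hypothetical, and the
sketch glosses over the driver's real interrupt-level locking, which
is part of what makes the actual fix nontrivial:

    /*
     * Sketch of waiting for a cache flush with condvar(9) instead of
     * polling.  All names are made up; this is not the nvme(4) code.
     */
    #include <sys/param.h>
    #include <sys/mutex.h>
    #include <sys/condvar.h>

    struct xx_softc {
            kmutex_t   sc_lock;      /* mutex_init(.., MUTEX_DEFAULT, IPL_NONE) */
            kcondvar_t sc_flush_cv;  /* cv_init(.., "xxflush") */
            bool       sc_flush_done;
    };

    /* Issuer side, e.g. the DIOCCACHESYNC path: queue the command,
     * then sleep until the completion path reports it finished. */
    static int
    xx_flush_issue(struct xx_softc *sc)
    {
            mutex_enter(&sc->sc_lock);
            sc->sc_flush_done = false;
            /* ... submit the flush command to the hardware queue ... */
            while (!sc->sc_flush_done)
                    cv_wait(&sc->sc_flush_cv, &sc->sc_lock);
            mutex_exit(&sc->sc_lock);
            return 0;
    }

    /* Completion side, called once the flush command completes
     * (assumed to run in a context that may take an adaptive mutex). */
    static void
    xx_flush_done(struct xx_softc *sc)
    {
            mutex_enter(&sc->sc_lock);
            sc->sc_flush_done = true;
            cv_broadcast(&sc->sc_flush_cv);
            mutex_exit(&sc->sc_lock);
    }

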
8. lfs64
--------

LFS currently only supports volumes up to 2 TB. As LFS is of interest
for use on shingled disks (which are larger than 2 TB) and also for
use on disk arrays (ditto), this is something of a problem. A 64-bit
version of LFS for large volumes is in the works.

 - dholland was working on this in fall 2015 but time to finish it
   dried up.
 - The goal now is to get a few remaining things done in time for 8.0
   so it will at least be ready for experimental use there.
 - Responsible: dholland


9. Per-process namespaces
-------------------------

Support for per-process variation of the file system namespace enables
a number of things; more flexible chroots, for example, and also
potentially more efficient pkgsrc builds. dholland thought up a
somewhat hackish but low-footprint way to implement this, and has a
preliminary implementation, but concluded the scheme was too fragile
for production. A different approach is probably needed, although the
existing code could be tidied up and committed if that seems
desirable.

 - As of January 2017 nobody is working on this.
 - Contact: dholland


10. lvm tidyup
--------------

[agc says someone should look at our lvm stuff; XXX fill this in]

 - As of January 2017 nobody is known to be working on this.
 - There is no clear timeframe or release target.
 - Contact agc for further information.


11. Flash translation layer
---------------------------

SSDs ship with firmware called a "flash translation layer" that
arbitrates between the block device software expects to see and the
raw flash chips. FTLs handle wear leveling, lifetime management, and
also internal caching, striping, and other performance concerns. While
NetBSD has a file system for raw flash (chfs), it seems that given the
things NetBSD is often used for, it ought to come with a flash
translation layer as well. (A toy sketch of the core remapping idea
appears after the notes below.)

Note that this is an area where writing your own is probably a bad
plan; it is a complicated area with a lot of prior art that's also
reportedly full of patent mines. There are a couple of open FTL
implementations that we might be able to import.

 - As of January 2017 nobody is known to be working on this.
 - There is no clear timeframe or release target.
 - Contact dholland for further information.
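
Purely to illustrate the kind of remapping an FTL performs (and not as
a starting point for writing one, per the caveat above), here is a toy
sketch: writes always go to a fresh flash page and a mapping table
records where each logical block currently lives. Real FTLs layer wear
leveling, garbage collection, and power-fail safety on top of this.

    /*
     * Toy model of FTL-style out-of-place writes.  A logical block
     * maps to whichever physical flash page holds its latest data.
     * Illustration only; no real FTL is anywhere near this simple.
     */
    #include <stdint.h>
    #include <stdlib.h>

    #define NO_PAGE UINT32_MAX      /* "never written" marker */

    struct toy_ftl {
            uint32_t *l2p;          /* logical block -> physical page */
            uint32_t nblocks;       /* logical blocks exposed upward */
            uint32_t npages;        /* physical flash pages available */
            uint32_t next_free;     /* naive bump allocator for pages */
    };

    static struct toy_ftl *
    toy_ftl_create(uint32_t nblocks, uint32_t npages)
    {
            struct toy_ftl *f;

            if ((f = calloc(1, sizeof(*f))) == NULL)
                    return NULL;
            if ((f->l2p = malloc(nblocks * sizeof(uint32_t))) == NULL) {
                    free(f);
                    return NULL;
            }
            for (uint32_t i = 0; i < nblocks; i++)
                    f->l2p[i] = NO_PAGE;
            f->nblocks = nblocks;
            f->npages = npages;
            return f;
    }

    /* Write: flash pages can't be rewritten in place, so take a fresh
     * page and remember the new location; the old page becomes
     * garbage for a (not modelled) collector to reclaim. */
    static int
    toy_ftl_write(struct toy_ftl *f, uint32_t lba)
    {
            if (lba >= f->nblocks || f->next_free >= f->npages)
                    return -1;
            f->l2p[lba] = f->next_free++;
            return 0;
    }

    /* Read: look up where the logical block currently lives. */
    static uint32_t
    toy_ftl_read(const struct toy_ftl *f, uint32_t lba)
    {
            return (lba < f->nblocks) ? f->l2p[lba] : NO_PAGE;
    }

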
12. Shingled disk support
-------------------------

Shingled disks (or, more technically, disks with "shingled magnetic
recording" or SMR) can only write whole tracks at once. Thus, to
operate effectively they require translation support similar to the
flash translation layers found in SSDs. The nature and structure of
shingle translation layers is still being researched; however, at some
point we will want to support these things in NetBSD.

 - As of 2016 one of dholland's coworkers was looking at this.
 - There is no clear timeframe or release target.
 - Contact dholland for further information.


13. ext3/ext4 support
---------------------

We would like to be able to read and write Linux ext3fs and ext4fs
volumes. (We can already read clean ext3fs volumes as they're the same
as ext2fs, modulo volume features our ext2fs code does not support;
but we can't write them.)

Ideally someone would write ext3 and/or ext4 code, whether integrated
with or separate from the ext2 code we already have. It might also
make sense to port or wrap the Linux ext3 or ext4 code so it can be
loaded as a GPL'd kernel module; it isn't clear whether that would be
more or less work than doing an implementation.

Note however that implementing ext3 has already defeated several
people; this is a harder project than it looks.

 - GSoC 2016 brought support for extents, and also read-only support
   for dir hashes; jdolecek also implemented several frequently used
   ext4 features, so it should be possible to mount most contemporary
   ext file systems read-write.
 - Read-write support for dir_nhash and xattr is still needed
   (semi-easy), and eventually journaling (hard).
 - There is no clear timeframe or release target.
 - jdolecek is working on improving ext3/ext4 support (particularly
   journaling).


14. Port hammer from Dragonfly
------------------------------

While the motivation for and role of hammer isn't perhaps super
persuasive, it would still be good to have it. Porting it from
Dragonfly is probably not that painful (compared to, say, zfs), but as
the Dragonfly and NetBSD VFS layers have diverged in different
directions from the original 4.4BSD, it may not be entirely trivial
either.

 - As of January 2017 nobody is known to be working on this.
 - There is no clear timeframe or release target.
 - There probably isn't any particular person to contact; for VFS
   concerns contact dholland or hannken.


15. afs maintenance
-------------------

AFS needs periodic care and feeding to continue working as NetBSD
changes, because the kernel-level bits aren't kept in the NetBSD tree
and don't get updated with other things. This is an ongoing issue that
always seems to need more manpower than it gets. It might make sense
to import some of the kernel AFS code, or maybe even just some of the
glue layer that it uses, in order to keep it more current.

 - jakllsch sometimes works on this.
 - We would like every release to have working AFS by the time it's
   released.
 - Contact jakllsch or gendalia about AFS; for VFS concerns contact
   dholland or hannken.


16. execute-in-place
--------------------

It is likely that the future includes non-volatile storage (so-called
"nvram") that looks like RAM from the perspective of software. Most
importantly: the storage is memory-mapped rather than looking like a
disk controller. There are a number of things NetBSD ought to have to
be ready for this, of which probably the most important is
"execute-in-place": when an executable is run from such storage, and
mapped into user memory with mmap, the storage hardware pages should
be able to appear directly in user memory. Right now they get
gratuitously copied into RAM, which is slow and wasteful. There are
also other reasons (e.g. embedded device ROMs) to want
execute-in-place support.

Note that at the implementation level this is a UVM issue rather than
strictly a storage issue.

Also note that one does not need access to nvram hardware to work on
this issue; given the performance profiles touted for nvram
technologies, a plain RAM disk like md(4) is sufficient both
structurally and for performance analysis.

 - As of January 2017 nobody is known to be working on this. Some
   time back, uebayasi wrote some preliminary patches, but they were
   rejected by the UVM maintainers.
 - There is no clear timeframe or release target.
 - Contact dholland for further information.


17. use extended attributes for ACL and capability storage
-----------------------------------------------------------

Currently there is some support for extended attributes in ffs, but
nothing really uses it. It would be nice if we came up with a standard
format to store ACLs and capabilities, like Linux has. The various
tools must be modified to understand this and be able to copy the
attributes if requested. Also, tools to manipulate the data will need
to be written. (A sketch of the existing userland interface appears
below.)
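
For reference, the userland side of the existing extended attribute
support looks roughly like the sketch below, using extattr_set_file(2)
and extattr_get_file(2). The attribute name and the notion of storing
a serialized ACL blob are invented for illustration; no on-disk format
or naming convention is implied, and the target file system must have
extended attribute support enabled.

    /*
     * Minimal sketch: stash and fetch an opaque blob in an extended
     * attribute.  The name "acl-sketch" is hypothetical, and the
     * system namespace requires appropriate privileges.
     */
    #include <sys/types.h>
    #include <sys/extattr.h>

    #include <err.h>
    #include <stdio.h>

    int
    main(int argc, char *argv[])
    {
            const char *path = (argc > 1) ? argv[1] : "testfile";
            const char *name = "acl-sketch";  /* hypothetical name */
            const char blob[] = "serialized-acl-goes-here";
            char buf[256];
            ssize_t n;

            if (extattr_set_file(path, EXTATTR_NAMESPACE_SYSTEM, name,
                blob, sizeof(blob)) == -1)
                    err(1, "extattr_set_file");

            n = extattr_get_file(path, EXTATTR_NAMESPACE_SYSTEM, name,
                buf, sizeof(buf));
            if (n == -1)
                    err(1, "extattr_get_file");

            printf("read back %zd bytes: %.*s\n", n, (int)n, buf);
            return 0;
    }

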
18. coda maintenance
--------------------

Coda only sort of works. [And I think it's behind relative to
upstream, or something of the sort; XXX fill this in.] Also, the code
appears to have an ugly incestuous relationship with FFS. This should
really be cleaned up. That, or maybe it's time to remove Coda.

 - As of January 2017 nobody is known to be working on this.
 - There is no clear timeframe or release target.
 - There isn't anyone in particular to contact.
 - Circa 2012 christos made it work read-write and split it into
   modules. Since then christos has not tested it.


Alistair Crooks, David Holland
Fri Nov 20 02:17:53 EST 2015
Sun May 1 16:50:42 EDT 2016 (some updates)
Fri Jan 13 00:40:50 EST 2017 (some more updates)