# $NetBSD: TODO,v 1.3 1999/03/15 00:46:47 perseant Exp $

- If we put an LFS onto a striped disk, we want to be able to specify
  the segment size to be equal to the stripe size, regardless of whether
  this is a power of two; also, the first segment should just eat the
  label pad, like the segments eat the superblocks.  Then, we could
  neatly lay out the segments along stripe boundaries.

- Working fsck_lfs.  (Have something that will verify, need something
  that will fix too.  Really, need a general-purpose external
  partial-segment writer.)

- Roll-forward agent, *at least* to verify the newer superblock's
  checkpoint (easy) but also to create a valid checkpoint for
  post-checkpoint writes (requires an external partial-segment writer).

- Blocks created in the cache are currently not marked in any way,
  except that b_blkno == b_lblkno, which can happen naturally too.  LFS
  needs to know for accounting.

- Inode blocks are currently the same size as the fs block size; but all
  the ones I've seen are mostly empty, and this will be especially true
  if atime information is kept in the ifile instead of the inode.  Could
  we shrink the inode block size to 512?  Or parametrize it at fs
  creation time?

- Get rid of DEV_BSIZE, pay attention to the media block size at mount time.

- More fs ops need to call lfs_imtime.  Which ones?  (Blackwell et al., 1995)

- lfs_vunref_head exists so that vnodes loaded solely for cleaning can
  be put back on the *head* of the vnode free list.  Make sure we
  actually do this, since we now take IN_CLEANING off during segment write.

- Investigate the "unlocked access" in lfs_bmapv, see if we could wait
  there most of the time?  Are we getting inconsistent data?

- Change the free_lock to be fs-specific, and change the dirvcount to be
  subsystem-wide.

- The cleaner could be enhanced to be controlled from other processes,
  and possibly perform additional tasks:

  - Backups.  At a minimum, turn the cleaner off and on to allow
    effective live backups.  More aggressively, the cleaner itself could
    be the backup agent, and dump_lfs would merely be a controller.

  - Cleaning time policies.  Be able to tweak the cleaner's thresholds
    to allow more thorough cleaning during policy-determined idle
    periods (regardless of actual idleness) or put off until later
    during short, intensive write periods.

  - File coalescing and placement.  During periods we expect to be idle,
    coalesce fragmented files into one place on disk for better read
    performance.  Ideally, move files that have not been accessed in a
    while to the extremes of the disk, thereby shortening seek times for
    files that are accessed more frequently (though how the cleaner
    should communicate "please put this near the beginning or end of the
    disk" to the kernel is a very good question; flags to lfs_markv?).

  - Versioning.  When it cleans a segment it could write data for files
    that were less than n versions old to tape or elsewhere.  Perhaps it
    could even write them back onto the disk, although that requires
    more thought (and kernel mods).

- Move lfs_countlocked() into vfs_bio.c, to replace count_locked_queue;
  perhaps keep the name, replace the function.  Could it count referenced
  vnodes as well, if it was in vfs_subr.c instead?
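
  A minimal sketch of the kind of shared counter meant above, assuming it
  lives in vfs_bio.c next to the bufqueues[]/BQ_LOCKED definitions and that
  the traditional b_freelist/b_bufsize buffer fields apply; the name
  count_locked_buffers() is made up here for illustration:

    /*
     * Walk the locked queue once, reporting both the buffer count
     * (what count_locked_queue gives callers today) and the byte
     * total that LFS wants for its own accounting.
     * Sketch only: queue and field names are assumptions.
     */
    void
    count_locked_buffers(int *countp, long *bytesp)
    {
        struct buf *bp;
        int n = 0;
        long bytes = 0;

        for (bp = bufqueues[BQ_LOCKED].tqh_first; bp != NULL;
            bp = bp->b_freelist.tqe_next) {
            n++;
            bytes += bp->b_bufsize;
        }
        *countp = n;
        *bytesp = bytes;
    }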

- If we clean a DIROP vnode, and we toss a fake buffer in favor of a
  pending held real buffer, we risk writing part of the dirop during a
  synchronous checkpoint.  This is bad.  Now that we're doing `stingy'
  cleaning, is there a good reason to favor real blocks over fake ones?

- Why not delete the lfs_bmapv call, and just mark everything dirty that
  isn't deleted/truncated?  Get some numbers about what percentage of
  the stuff that the cleaner thinks might be live is live.  If it's
  high, get rid of lfs_bmapv.

- There is a nasty problem in that it may take *more* room to write the
  data to clean a segment than is returned by the new segment, because
  indirect blocks in segment 2 are dirtied by the data being copied
  into the log from segment 1.  The suggested solution at this point is
  to detect it when we have no space left on the filesystem, write the
  extra data into the last segment (leaving no clean ones), make it a
  checkpoint, and shut down the file system for fixing by a utility
  reading the raw partition.  The argument is that this should never
  happen and is practically impossible to fix, since the cleaner would
  theoretically have to build a model of the entire filesystem in
  memory to detect the condition occurring.  A file coalescing cleaner
  will help avoid the problem, and one that reads/writes from the raw
  disk could fix it.

- Overlap the version and nextfree fields in the IFILE.

- Change things so that we only have to search one sector of the inode
  block file for the inode, by using sector addresses in the ifile
  instead of logical disk addresses.

- Fix the use of the ifile version field to use the generation number
  instead.

- Need to keep vnode v_numoutput up to date for pending writes?

- If we delete a file that's being executed, the version number isn't
  updated, and fsck_lfs has to figure this out; the case is the same as
  having an inode that no directory references, so the file should be
  reattached into lost+found.

- Investigate whether the access time should be part of the IFILE:
    pro: theoretically, saves disk writes
    con: caching inodes should obviate this advantage;
         the IFILE is already humongous

- Currently there's no notion of write error checking.
  + Failed data/inode writes should be rescheduled (kernel-level bad
    blocking).
  + Failed superblock writes should cause selection of a new superblock
    for checkpointing.

- Future fantasies:
  - unrm, versioning
  - transactions
  - extended cleaner policies (hot/cold data, data placement)

- Problem with the concept of multiple buffer headers referencing the segment:
  Positives:
    Don't lock down 1 segment per file system of physical memory.
    Don't copy from buffers to segment memory.
    Don't tie down the bus to transfer 1M.
    Works on controllers supporting less than large transfers.
    Disk can start writing immediately instead of waiting 1/2 rotation
      and the full transfer.
  Negatives:
    Have to do the segment write and then the segment summary write,
      since the latter is what verifies that the segment is okay.  (Is
      there another way to do this?)

- The algorithm for selecting the disk addresses of the super-blocks
  has to be available to the user program which checks the file system.
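
  As a strawman for sharing that algorithm, something along these lines
  could be compiled into both newfs_lfs and fsck_lfs.  The even-spacing
  policy and the lfs_sb_addrs() name are assumptions made purely for
  illustration, not a description of what newfs_lfs currently does:

    #include <sys/types.h>

    #define MAXNUMSB    10      /* assumed cap on superblock copies */

    /*
     * Fill in up to MAXNUMSB candidate superblock disk addresses,
     * spread evenly across the segments (one at the start of every
     * "step"th segment, beginning at segstart).  Returns the number
     * of addresses filled in.  Sketch only; the real placement
     * policy may differ.
     */
    int
    lfs_sb_addrs(daddr_t *sboffs, int nseg, daddr_t segstart,
        unsigned long ssize_blks)
    {
        int i, nsb, step;

        if (nseg <= 0)
            return (0);
        nsb = (nseg < MAXNUMSB) ? nseg : MAXNUMSB;
        step = nseg / nsb;
        for (i = 0; i < nsb; i++)
            sboffs[i] = segstart + (daddr_t)(i * step) * ssize_blks;
        return (nsb);
    }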