# $NetBSD: TODO,v 1.9 2005/04/01 21:59:46 perseant Exp $

- Now that our cache is basically all of physical memory, we need to make
  sure that segwrite is not starving other important things.  Need a way
  to prioritize which blocks are most important to write, and write only
  those, saving the rest for later.  Does this change our notion of what
  a checkpoint is?

- Investigate alternate inode locking strategy: Inode locks are useful
  for locking against simultaneous changes to inode size (balloc,
  truncate, write) but because the assignment of disk blocks is also
  covered by the segment lock, we don't really need to pay attention to
  the inode lock when writing a segment, right?  If this is true, the
  locking problem in lfs_{bmapv,markv} goes away and lfs_reserve can go,
  too.

- Get rid of DEV_BSIZE, pay attention to the media block size at mount time.

- More fs ops need to call lfs_imtime.  Which ones?  (Blackwell et al., 1995)

- lfs_vunref_head exists so that vnodes loaded solely for cleaning can
  be put back on the *head* of the vnode free list.  Make sure we
  actually do this, since we now take IN_CLEANING off during segment write.

- The cleaner could be enhanced to be controlled from other processes,
  and possibly perform additional tasks:

  - Backups.  At a minimum, turn the cleaner off and on to allow
    effective live backups.  More aggressively, the cleaner itself could
    be the backup agent, and dump_lfs would merely be a controller.

  - Cleaning time policies.  Be able to tweak the cleaner's thresholds
    to allow more thorough cleaning during policy-determined idle
    periods (regardless of actual idleness) or put off until later
    during short, intensive write periods.

  - File coalescing and placement.  During periods we expect to be idle,
    coalesce fragmented files into one place on disk for better read
    performance.  Ideally, move files that have not been accessed in a
    while to the extremes of the disk, thereby shortening seek times for
    files that are accessed more frequently (though how the cleaner
    should communicate "please put this near the beginning or end of the
    disk" to the kernel is a very good question; flags to lfs_markv?).

  - Versioning.  When it cleans a segment it could write data for files
    that were less than n versions old to tape or elsewhere.  Perhaps it
    could even write them back onto the disk, although that requires
    more thought (and kernel mods).

- Move lfs_countlocked() into vfs_bio.c, to replace count_locked_queue;
  perhaps keep the name, replace the function.  Could it count referenced
  vnodes as well, if it was in vfs_subr.c instead?

- Why not delete the lfs_bmapv call, just mark everything dirty that
  isn't deleted/truncated?  Get some numbers about what percentage of
  the stuff that the cleaner thinks might be live is live.  If it's
  high, get rid of lfs_bmapv.
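
  One way to get those numbers would be to instrument the cleaner in
  userland, right after its lfs_bmapv call.  The sketch below is a
  hypothetical, minimal version of that instrumentation: block_info_like
  stands in for the real BLOCK_INFO structure, it assumes lfs_bmapv has
  rewritten bi_daddr to each block's current disk address, and it counts
  a block as live iff that address still falls inside the segment being
  cleaned.  report_liveness, seg_start, and seg_end are illustrative
  names, not existing code.

    /*
     * Report what fraction of the blocks the cleaner thought might be
     * live actually still live in the segment [seg_start, seg_end).
     */
    #include <stdio.h>
    #include <sys/types.h>

    struct block_info_like {        /* stand-in for BLOCK_INFO */
            daddr_t bi_daddr;       /* current address, per lfs_bmapv */
    };

    static void
    report_liveness(const struct block_info_like *bi, int nblocks,
        daddr_t seg_start, daddr_t seg_end)
    {
            int i, live;

            live = 0;
            for (i = 0; i < nblocks; i++)
                    if (bi[i].bi_daddr >= seg_start &&
                        bi[i].bi_daddr < seg_end)
                            live++;
            if (nblocks > 0)
                    printf("cleaner: %d of %d candidates live (%.1f%%)\n",
                        live, nblocks, 100.0 * live / nblocks);
    }

  If the reported percentage stays high across real workloads, that is
  the evidence this item asks for.
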
- There is a nasty problem in that it may take *more* room to write the
  data to clean a segment than is returned by the new segment, because
  of indirect blocks in segment 2 being dirtied by the data being copied
  into the log from segment 1.  The suggested solution at this point is
  to detect it when we have no space left on the filesystem, write the
  extra data into the last segment (leaving no clean ones), make it a
  checkpoint, and shut down the file system for fixing by a utility
  reading the raw partition.  The argument is that this should never
  happen, and that it is practically impossible to fix, since the
  cleaner would theoretically have to build a model of the entire
  filesystem in memory to detect the condition occurring.  A file
  coalescing cleaner will help avoid the problem, and one that reads
  and writes the raw disk could fix it.

- Need to keep vnode v_numoutput up to date for pending writes?

- If we delete a file that is being executed, the version number is not
  updated, and fsck_lfs has to figure this out; the case is the same as
  an inode that no directory references, so the file should be
  reattached into lost+found.

- Currently there's no notion of write error checking.
  + Failed data/inode writes should be rescheduled (kernel-level
    bad-block handling).
  + Failed superblock writes should cause selection of a new superblock
    for checkpointing.

- Future fantasies:
  - unrm, versioning
  - transactions
  - extended cleaner policies (hot/cold data, data placement)

- Problem with the concept of multiple buffer headers referencing the
  segment:
  Positives:
    Don't lock down 1 segment per file system of physical memory.
    Don't copy from buffers to segment memory.
    Don't tie down the bus to transfer 1M.
    Works on controllers that do not support large transfers.
    The disk can start writing immediately, instead of waiting 1/2
    rotation and the full transfer.
  Negatives:
    Have to do the segment write, then the segment summary write, since
    the latter is what verifies that the segment is okay.  (Is there
    another way to do this?)

- The algorithm for selecting the disk addresses of the superblocks
  has to be available to the user program that checks the file system
  (fsck_lfs).
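
  As a sketch of one way to satisfy that: keep the placement computation
  in a single function and compile it into both the program that builds
  the file system and the one that checks it.  Everything below is
  hypothetical: SB_COPIES stands in for LFS_MAXNUMSB, place_superblocks
  does not exist anywhere, and the even-spacing policy is an assumption
  for illustration, not necessarily what the real tools compute.

    #include <sys/types.h>

    #define SB_COPIES       10      /* stand-in for LFS_MAXNUMSB */

    /*
     * Fill sb_addr[] with one candidate superblock address (in disk
     * sectors) per copy, spreading the copies evenly across nsegs
     * segments of segsize sectors each, starting at offset label_off.
     */
    static void
    place_superblocks(daddr_t sb_addr[], u_int32_t nsegs,
        u_int32_t segsize, daddr_t label_off)
    {
            u_int32_t i, stride;

            stride = (nsegs > SB_COPIES) ? nsegs / SB_COPIES : 1;
            for (i = 0; i < SB_COPIES; i++)
                    sb_addr[i] = label_off +
                        (daddr_t)(i * stride) * segsize;
    }

  Building one shared function into both tools, rather than re-deriving
  the addresses in each, keeps the two programs from drifting apart.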