#   $NetBSD: TODO,v 1.4 2000/11/17 19:14:41 perseant Exp $

- If we put an LFS onto a striped disk, we want to be able to specify
  the segment size to be equal to the stripe size, regardless of whether
  this is a power of two; also, the first segment should just eat the
  label pad, like the segments eat the superblocks.  Then, we could
  neatly lay out the segments along stripe boundaries. [v2]

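  (A sketch, not existing code, of the layout arithmetic this implies:
  segment size equal to a non-power-of-two stripe size, with segment 0
  shortened by a hypothetical LABEL_PAD the way other segments lose
  room to superblocks.)

        #include <stdio.h>
        #include <stdint.h>

        #define LABEL_PAD 8192  /* hypothetical label/bootblock pad, bytes */

        /*
         * Byte offset of segment n when segment size == stripe size.
         * Segment 0 starts after the label pad but still ends on a
         * stripe boundary, so it is simply shorter than the others.
         */
        static uint64_t
        seg_start(uint64_t stripesize, unsigned n)
        {
            return (n == 0) ? LABEL_PAD : (uint64_t)n * stripesize;
        }

        int
        main(void)
        {
            uint64_t stripe = 3 * 64 * 1024;    /* 3-way stripe: not 2^n */
            unsigned n;

            for (n = 0; n < 4; n++)
                printf("segment %u starts at byte %llu\n",
                    n, (unsigned long long)seg_start(stripe, n));
            return 0;
        }
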
- A working fsck_lfs.  (We have something that will verify; we still
  need something that will fix, too.  Really, we need a general-purpose
  external partial-segment writer.)

- Roll-forward agent, *at least* to verify the newer superblock's
  checkpoint (easy) but also to create a valid checkpoint for
  post-checkpoint writes (requires an external partial-segment writer).

- Inode blocks are currently the same size as the fs block size; but all
  the ones I've seen are mostly empty, and this will be especially true
  if atime information is kept in the ifile instead of the inode.  Could
  we shrink the inode block size to 512?  Or parametrize it at fs
  creation time?

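  (For scale, the arithmetic, assuming the traditional 128-byte on-disk
  inode; the block sizes are illustrative.)

        #include <stdio.h>

        #define DINODE_SIZE 128 /* assumed on-disk inode size, bytes */

        int
        main(void)
        {
            int sizes[] = { 8192, 4096, 1024, 512 };
            int i;

            /* How many on-disk inodes each inode-block size can hold. */
            for (i = 0; i < 4; i++)
                printf("%4d-byte inode block holds %2d inodes\n",
                    sizes[i], sizes[i] / DINODE_SIZE);
            return 0;
        }
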
- Get rid of DEV_BSIZE; pay attention to the media block size at mount time.

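  (A sketch of what the conversions might look like when parametrized by
  a per-mount sector size instead of the compile-time DEV_BSIZE; the
  structure and field names are hypothetical stand-ins for per-mount
  state.)

        #include <stdint.h>

        struct media_info {             /* hypothetical per-mount state */
            uint32_t secsize;           /* sector size from the disklabel */
        };

        /* Replacements for btodb()/dbtob()-style conversions that bake
         * in DEV_BSIZE at compile time. */
        static inline uint64_t
        bytes_to_sectors(const struct media_info *mi, uint64_t bytes)
        {
            return bytes / mi->secsize;
        }

        static inline uint64_t
        sectors_to_bytes(const struct media_info *mi, uint64_t sectors)
        {
            return sectors * mi->secsize;
        }
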
- More fs ops need to call lfs_imtime.  Which ones?  (Blackwell et al., 1995)

- lfs_vunref_head exists so that vnodes loaded solely for cleaning can
  be put back on the *head* of the vnode free list.  Make sure we
  actually do this, since we now take IN_CLEANING off during segment write.

- Investigate the "unlocked access" in lfs_bmapv; see whether we could
  wait there most of the time.  Are we getting inconsistent data?

- Change the free_lock to be fs-specific, and change the dirvcount to be
  subsystem-wide.

- The cleaner could be enhanced to be controlled from other processes,
  and possibly perform additional tasks:

  - Backups.  At a minimum, turn the cleaner off and on to allow
    effective live backups (see the sketch after this list).  More
    aggressively, the cleaner itself could be the backup agent, and
    dump_lfs would merely be a controller.

  - Cleaning-time policies.  Be able to tweak the cleaner's thresholds
    to allow more thorough cleaning during policy-determined idle
    periods (regardless of actual idleness), or to put cleaning off
    until later during short, intensive write periods.

  - File coalescing and placement.  During periods we expect to be idle,
    coalesce fragmented files into one place on disk for better read
    performance.  Ideally, move files that have not been accessed in a
    while to the extremes of the disk, thereby shortening seek times for
    files that are accessed more frequently (though how the cleaner
    should communicate "please put this near the beginning or end of the
    disk" to the kernel is a very good question; flags to lfs_markv?).

  - Versioning.  When it cleans a segment it could write data for files
    that were less than n versions old to tape or elsewhere.  Perhaps it
    could even write them back onto the disk, although that requires
    more thought (and kernel mods).

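  (A user-level sketch of the simplest external control: pausing and
  resuming the cleaner from another process via signals, which would
  already be enough to quiesce it for a live backup.  The signal choice
  and loop body are illustrative, not the existing lfs_cleanerd
  interface.)

        #include <signal.h>
        #include <unistd.h>

        static volatile sig_atomic_t paused;

        static void
        pause_handler(int sig)
        {
            (void)sig;
            paused = 1;         /* SIGUSR1: stop cleaning (backup running) */
        }

        static void
        resume_handler(int sig)
        {
            (void)sig;
            paused = 0;         /* SIGUSR2: resume cleaning */
        }

        int
        main(void)
        {
            signal(SIGUSR1, pause_handler);
            signal(SIGUSR2, resume_handler);

            for (;;) {
                if (paused) {           /* an external process idled us */
                    sleep(1);
                    continue;
                }
                /* ... pick a segment, lfs_bmapv/lfs_markv it, etc. ... */
                sleep(1);
            }
            /* NOTREACHED */
        }
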
- Move lfs_countlocked() into vfs_bio.c, to replace count_locked_queue;
  perhaps keep the name, replace the function.  Could it count referenced
  vnodes as well, if it was in vfs_subr.c instead?

- Why not delete the lfs_bmapv call and just mark everything dirty that
  isn't deleted/truncated?  Get some numbers on what percentage of the
  stuff the cleaner thinks might be live actually is live.  If it's
  high, get rid of lfs_bmapv.

- There is a nasty problem in that it may take *more* room to write the
  data to clean a segment than is returned by the new segment, because
  indirect blocks in segment 2 are dirtied by the data being copied
  into the log from segment 1.  The suggested solution at this point is
  to detect the situation when we have no space left on the filesystem,
  write the extra data into the last segment (leaving no clean ones),
  make it a checkpoint, and shut down the file system for fixing by a
  utility reading the raw partition.  The argument is that this should
  never happen and is practically impossible to fix, since the cleaner
  would theoretically have to build a model of the entire filesystem in
  memory to detect the condition occurring.  A file-coalescing cleaner
  will help avoid the problem, and one that reads/writes the raw disk
  could fix it.

- Overlap the version and nextfree fields in the IFILE.

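  (One possible layout, sketched below: a free entry needs if_nextfree
  but no meaningful version, and an allocated entry needs the version
  but is off the free list.  This presumes the version can be recovered
  elsewhere, e.g. from the inode's generation number as the next item
  suggests; the layout is an illustration, not the on-disk format.)

        #include <stdint.h>

        struct ifile_sketch {
            union {
                uint32_t if_version;    /* valid while allocated */
                uint32_t if_nextfree;   /* next free inumber, while free */
            } if_u;
            int32_t if_daddr;           /* address of the inode block */
        };
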
- Change the inode lookup so that only one sector of the inode block
  has to be searched for the inode, by keeping sector addresses in the
  ifile instead of logical disk addresses.

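  (A sketch of the lookup this would allow: scan only the inodes in one
  512-byte sector rather than a whole fs-block-sized inode block.  The
  dinode layout is a hypothetical stand-in.)

        #include <stddef.h>
        #include <stdint.h>

        #define SECTOR_SIZE 512

        struct dinode_sketch {          /* stand-in for the on-disk inode */
            uint32_t di_inumber;        /* inode number stamped in it */
            char     di_rest[124];      /* remaining fields; 128 bytes total */
        };

        /* Search one sector's worth of inodes for `ino'; NULL if absent. */
        static struct dinode_sketch *
        find_dinode(void *sector, uint32_t ino)
        {
            struct dinode_sketch *dip = sector;
            size_t i, n = SECTOR_SIZE / sizeof(*dip);   /* 4 per sector */

            for (i = 0; i < n; i++)
                if (dip[i].di_inumber == ino)
                    return &dip[i];
            return NULL;
        }
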
- Fix the use of the ifile version field to use the generation number instead.

- Need to keep vnode v_numoutput up to date for pending writes?

- If we delete a file that's being executed, the version number isn't
  updated, and fsck_lfs has to figure this out; the case is the same as
  having an inode that no directory references, so the file should be
  reattached into lost+found.

- Investigate: should the access time be part of the IFILE?
        pro: theoretically, saves disk writes
        con: caching inodes should obviate this advantage;
             the IFILE is already humongous

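  (The "humongous" concern is easy to quantify; a back-of-the-envelope
  sketch, with made-up figures:)

        #include <stdio.h>
        #include <stdint.h>

        int
        main(void)
        {
            /* Cost of keeping atime in the IFILE: extra bytes per entry
             * times the number of inodes.  Both figures are invented. */
            uint64_t ninodes = 1000000;     /* inodes in the filesystem */
            uint64_t extra = 8;             /* added bytes per IFILE entry */

            printf("IFILE grows by %llu KB\n",
                (unsigned long long)(ninodes * extra / 1024));
            return 0;
        }
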
- Currently there's no notion of write error checking.
  + Failed data/inode writes should be rescheduled (kernel-level bad blocking).
  + Failed superblock writes should cause selection of a new superblock
    for checkpointing.

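  (A sketch of the superblock-selection idea: on a failed checkpoint
  write, step to the next superblock location and retry.  write_sb() is
  a stand-in for the real I/O path; only LFS_MAXNUMSB is borrowed from
  the real headers.)

        #include <stdio.h>

        #define LFS_MAXNUMSB 10 /* number of superblock copies */

        /* Stand-in for writing the checkpoint into superblock `i';
         * pretend the first copy sits on a bad block. */
        static int
        write_sb(int i)
        {
            return (i == 0) ? -1 : 0;
        }

        int
        main(void)
        {
            int i;

            for (i = 0; i < LFS_MAXNUMSB; i++) {
                if (write_sb(i) == 0) {
                    printf("checkpoint recorded in superblock %d\n", i);
                    return 0;
                }
                fprintf(stderr, "superblock %d failed; trying next\n", i);
            }
            fprintf(stderr, "no writable superblock left\n");
            return 1;
        }
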
- Future fantasies:
  - unrm, versioning
  - transactions
  - extended cleaner policies (hot/cold data, data placement)

- Problem with the concept of multiple buffer headers referencing the segment:
  Positives:
    Don't lock down 1 segment per file system of physical memory.
    Don't copy from buffers to segment memory.
    Don't tie down the bus to transfer 1M.
    Works on controllers that do not support large transfers.
    Disk can start writing immediately instead of waiting 1/2 rotation
        and the full transfer.
  Negatives:
    Have to do the segment write and then the segment summary write,
    since the latter is what verifies that the segment is okay.  (Is
    there another way to do this?)

- The algorithm for selecting the disk addresses of the superblocks
  has to be available to the user program that checks the file system.

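  (One way to satisfy this: keep the placement policy in a single pure
  function that the kernel, newfs_lfs and fsck_lfs all compile.  The
  spacing rule below is an illustration, not the actual newfs_lfs
  algorithm; only LFS_MAXNUMSB is borrowed from the real headers.)

        #include <stdio.h>
        #include <stdint.h>

        #define LFS_MAXNUMSB 10 /* number of superblock copies */

        /* Compute the sector addresses of the superblock copies from the
         * same few parameters the kernel and the checker both know. */
        static void
        sb_addresses(uint64_t fssectors, uint64_t segsectors,
            uint64_t labelpad, uint64_t sboffs[LFS_MAXNUMSB])
        {
            uint64_t nsegs = fssectors / segsectors;
            uint64_t interval = nsegs / LFS_MAXNUMSB;
            int i;

            if (interval == 0)
                interval = 1;
            for (i = 0; i < LFS_MAXNUMSB; i++)
                sboffs[i] = labelpad + (uint64_t)i * interval * segsectors;
        }

        int
        main(void)
        {
            uint64_t sb[LFS_MAXNUMSB];
            int i;

            /* 1 GB of 512-byte sectors, 1 MB segments, 8 KB label pad. */
            sb_addresses(2097152, 2048, 16, sb);
            for (i = 0; i < LFS_MAXNUMSB; i++)
                printf("superblock %d at sector %llu\n",
                    i, (unsigned long long)sb[i]);
            return 0;
        }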
    130