#   $NetBSD: TODO,v 1.3 1999/03/15 00:46:47 perseant Exp $

- If we put an LFS onto a striped disk, we want to be able to specify
  the segment size to be equal to the stripe size, regardless of whether
  this is a power of two; also, the first segment should just eat the
  label pad, like the segments eat the superblocks.  Then, we could
  neatly lay out the segments along stripe boundaries.

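The desired layout is simple arithmetic; the helper below is a sketch
only (seg_start is a hypothetical name, not newfs_lfs code), assuming
all quantities are in sectors:

```c
#include <assert.h>

/*
 * Sketch only: where segment n would start if segments were pinned to
 * stripe boundaries.  Segment 0 is shortened by the label pad, the
 * way other segments are shortened by superblocks, so every later
 * segment begins exactly on a stripe boundary.
 */
static unsigned long
seg_start(unsigned long stripesize, unsigned long labelpad, unsigned long n)
{
	if (n == 0)
		return labelpad;	/* first segment eats the label pad */
	return n * stripesize;		/* all others are stripe-aligned */
}
```

With a 1024-sector stripe and a 16-sector label pad, segment 0 runs
from sector 16 to 1024, and every later segment starts on a stripe.
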
- A working fsck_lfs.  (We have something that will verify; we also
  need something that will repair.  Really, we need a general-purpose
  external partial-segment writer.)

- Roll-forward agent, *at least* to verify the newer superblock's
  checkpoint (easy) but also to create a valid checkpoint for
  post-checkpoint writes (requires an external partial-segment writer).

- Blocks created in the cache are currently not marked in any way,
  except that b_blkno == b_lblkno, which can also happen naturally.
  LFS needs to know which blocks are newly created, for accounting.

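One possible fix, sketched below with a user-space stand-in for struct
buf: claim a free flag bit when the block is created, instead of
relying on the ambiguous equality test.  B_LFS_NEWBLK and struct
fakebuf are hypothetical names, not existing kernel identifiers:

```c
#include <assert.h>

/* Hypothetical flag and stand-in for struct buf; illustration only. */
#define B_LFS_NEWBLK	0x01

struct fakebuf {
	long	b_lblkno;	/* logical block number */
	long	b_blkno;	/* disk address */
	int	b_flags;
};

/* Current heuristic: ambiguous, since the equality can occur naturally. */
static int
is_new_heuristic(const struct fakebuf *bp)
{
	return bp->b_blkno == bp->b_lblkno;
}

/* Explicit mark: would be set when the block is created in the cache. */
static int
is_new_explicit(const struct fakebuf *bp)
{
	return (bp->b_flags & B_LFS_NEWBLK) != 0;
}
```

A block whose disk address happens to equal its logical block number
looks "new" to the heuristic even when it isn't; an explicit flag
removes the ambiguity.
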
- Inode blocks are currently the same size as the fs block size; but all
  the ones I've seen are mostly empty, and this will be especially true
  if atime information is kept in the ifile instead of the inode.  Could
  we shrink the inode block size to 512?  Or parametrize it at fs
  creation time?

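The waste is easy to quantify.  Assuming the traditional 128-byte
on-disk inode (illustrative numbers, not a measurement):

```c
#include <assert.h>

#define DINODE_SIZE	128	/* traditional sizeof(struct dinode) */

/* How many on-disk inodes fit in one inode block of the given size. */
static unsigned
inodes_per_block(unsigned blksize)
{
	return blksize / DINODE_SIZE;
}
```

An 8K inode block holds 64 inodes; if most such blocks carry only a
few live inodes, a 512-byte inode block (4 inodes) wastes far less
log bandwidth per write.
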
- Get rid of DEV_BSIZE; pay attention to the media block size at mount time.

- More fs ops need to call lfs_imtime.  Which ones?  (Blackwell et al., 1995)

- lfs_vunref_head exists so that vnodes loaded solely for cleaning can
  be put back on the *head* of the vnode free list.  Make sure we
  actually do this, since we now take IN_CLEANING off during segment write.

- Investigate the "unlocked access" in lfs_bmapv: could we wait there
  most of the time?  Are we getting inconsistent data?

- Change the free_lock to be fs-specific, and change the dirvcount to be
  subsystem-wide.

- The cleaner could be enhanced to be controlled from other processes,
  and possibly perform additional tasks:

  - Backups.  At a minimum, turn the cleaner off and on to allow
    effective live backups.  More aggressively, the cleaner itself could
    be the backup agent, and dump_lfs would merely be a controller.

  - Cleaning time policies.  Be able to tweak the cleaner's thresholds
    to allow more thorough cleaning during policy-determined idle
    periods (regardless of actual idleness), or to put cleaning off
    during short, intensive write periods.

  - File coalescing and placement.  During periods we expect to be idle,
    coalesce fragmented files into one place on disk for better read
    performance.  Ideally, move files that have not been accessed in a
    while to the extremes of the disk, thereby shortening seek times for
    files that are accessed more frequently (though how the cleaner
    should communicate "please put this near the beginning or end of the
    disk" to the kernel is a very good question; flags to lfs_markv?).

  - Versioning.  When it cleans a segment, the cleaner could write data
    for files that were less than n versions old to tape or elsewhere.
    Perhaps it could even write them back onto the disk, although that
    requires more thought (and kernel mods).

- Move lfs_countlocked() into vfs_bio.c, to replace count_locked_queue;
  perhaps keep the name, replace the function.  Could it count referenced
  vnodes as well, if it were in vfs_subr.c instead?

- If we clean a DIROP vnode, and we toss a fake buffer in favor of a
  pending held real buffer, we risk writing part of the dirop during a
  synchronous checkpoint.  This is bad.  Now that we're doing `stingy'
  cleaning, is there a good reason to favor real blocks over fake ones?

- Why not delete the lfs_bmapv call and just mark everything dirty that
  isn't deleted/truncated?  Get some numbers on what percentage of the
  data the cleaner thinks might be live actually is live.  If that
  percentage is high, get rid of lfs_bmapv.

- There is a nasty problem in that it may take *more* room to write the
  data to clean a segment than is returned by the new segment, because
  indirect blocks in segment 2 are dirtied by the data being copied
  into the log from segment 1.  The suggested solution at this point is
  to detect it when we have no space left on the filesystem, write the
  extra data into the last segment (leaving no clean ones), make it a
  checkpoint, and shut down the file system for fixing by a utility
  that reads the raw partition.  The argument is that this should never
  happen, and is practically impossible to fix, since the cleaner would
  theoretically have to build a model of the entire filesystem in
  memory to detect the condition occurring.  A file coalescing cleaner
  will help avoid the problem, and one that reads/writes from the raw
  disk could fix it.

- Overlap the version and nextfree fields in the IFILE.

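The two fields are never valid at the same time (version while the
inode is allocated, nextfree while it is on the free list), so they
could share storage.  A sketch of the overlap; the field names only
loosely follow the real IFILE entry, and this is illustration, not
the actual struct:

```c
#include <assert.h>
#include <stdint.h>

struct ifile_entry {
	union {
		uint32_t if_version;	/* valid while the inode is in use */
		uint32_t if_nextfree;	/* valid while the inode is free */
	} if_u;
	uint32_t if_daddr;		/* disk address of the inode block */
};
```

The overlap saves four bytes per IFILE entry.
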
- Change the code so that it searches only one sector of the inode
  block file for the inode, by using sector addresses in the ifile
  instead of logical disk addresses.

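The payoff: with only a logical disk address, the whole inode block
must be scanned for the right inode; a sector address narrows the
scan to one sector.  Illustrative arithmetic (8K blocks, 512-byte
sectors, 128-byte on-disk inodes):

```c
#include <assert.h>

#define SECTOR_SIZE	512
#define DINODE_SIZE	128

/* Inodes examined per lookup when only a logical block address is known. */
static unsigned
scan_whole_block(unsigned blksize)
{
	return blksize / DINODE_SIZE;
}

/* Inodes examined per lookup when the ifile stores a sector address. */
static unsigned
scan_one_sector(void)
{
	return SECTOR_SIZE / DINODE_SIZE;
}
```
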
- Fix the use of the ifile version field to use the generation number instead.

- Need to keep vnode v_numoutput up to date for pending writes?

- If we delete a file that is being executed, the version number isn't
  updated, and fsck_lfs has to figure this out; the case is the same as
  an inode that no directory references, so the file should be
  reattached into lost+found.

- Investigate: should the access time be part of the IFILE?
        pro: theoretically, saves disk writes
        con: caching inodes should obviate this advantage;
             the IFILE is already humongous

- Currently there's no notion of write error checking.
  + Failed data/inode writes should be rescheduled (kernel-level bad
    blocking).
  + Failed superblock writes should cause selection of a new superblock
    for checkpointing.

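For the superblock case, the simplest policy is to rotate to the next
of the filesystem's superblock copies on a failed write.  LFS keeps
LFS_MAXNUMSB superblock locations; the function itself is a sketch,
not existing code:

```c
#include <assert.h>

#define LFS_MAXNUMSB	10	/* number of superblock locations in LFS */

/*
 * Sketch: after a failed superblock write, pick the next superblock
 * copy for the following checkpoint, wrapping around.
 */
static int
next_superblock(int cur)
{
	return (cur + 1) % LFS_MAXNUMSB;
}
```
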
- Future fantasies:
  - unrm, versioning
  - transactions
  - extended cleaner policies (hot/cold data, data placement)

- Problem with the concept of multiple buffer headers referencing the segment:
  Positives:
    Don't lock down 1 segment per file system of physical memory.
    Don't copy from buffers to segment memory.
    Don't tie down the bus to transfer 1M.
    Works on controllers that do not support large transfers.
    Disk can start writing immediately, instead of waiting 1/2 rotation
        and the full transfer.
  Negatives:
    Have to do the segment write and then the segment summary write,
    since the latter is what verifies that the segment is okay.  (Is
    there another way to do this?)

- The algorithm for selecting the disk addresses of the superblocks
  has to be available to the user programs that check the file system.