#	$NetBSD: README,v 1.2 1994/06/29 06:46:43 cgd Exp $

#	@(#)README	8.1 (Berkeley) 6/11/93

The file system is reasonably stable, but incomplete.  There are
places where cleaning performance can be improved dramatically (see
comments in lfs_syscalls.c).  For details on the implementation,
performance, and why garbage collection always wins, see Dr. Margo
Seltzer's thesis, available for anonymous ftp from toe.cs.berkeley.edu
as the file pub/personal/margo/thesis.ps.Z, or the January 1993
USENIX paper.

Missing Functionality:
	Multiple block sizes and/or fragments are not yet implemented.

----------
The disk is laid out in segments.  The first segment starts 8K into the
disk (the first 8K is used for boot information).  Each segment is composed
of the following:

	An optional super block
	One or more groups of:
		segment summary
		0 or more data blocks
		0 or more inode blocks

The segment summary and inode/data blocks start after the super block (if
present), and grow toward the end of the segment.

	_______________________________________________
	|         |            |         |            |
	| summary | data/inode | summary | data/inode |
	|  block  |   blocks   |  block  |   blocks   | ...
	|_________|____________|_________|____________|

The data/inode blocks following a summary block are described by the
summary block.  In order to permit the segment to be written in any order
and in a forward direction only, a checksum is calculated across the
blocks described by the summary.  Additionally, the summary is checksummed
and timestamped.  Both of these are intended for recovery; the former is
to make it easy to determine that it *is* a summary block and the latter
is to make it easy to determine when recovery is finished for partially
written segments.  These checksums are also used by the cleaner.

	Summary block (detail)
	________________
	| sum cksum    |
	| data cksum   |
	| next segment |
	| timestamp    |
	| FINFO count  |
	| inode count  |
	| flags        |
	|______________|
	|   FINFO-1    | 0 or more file info structures, identifying the
	|     .        | blocks in the segment.
	|     .        |
	|     .        |
	|   FINFO-N    |
	|   inode-N    |
	|     .        |
	|     .        |
	|     .        | 0 or more inode daddr_t's, identifying the inode
	|   inode-1    | blocks in the segment.
	|______________|

Inode blocks are blocks of on-disk inodes in the same format as those in
the FFS.  However, spare[0] contains the inode number of the inode so we
can find a particular inode on a page.  They are packed page_size /
sizeof(inode) to a block.  Data blocks are exactly as in the FFS.  Both
inodes and data blocks move around the file system at will.

The file system is described by a super-block which is replicated and
occurs as the first block of the first and other segments.  (The maximum
number of super-blocks is MAXNUMSB.)  Each super-block maintains a list
of the disk addresses of all the super-blocks.  The super-block maintains
a small amount of checkpoint information, essentially just enough to find
the inode for the IFILE (fs->lfs_idaddr).

The IFILE is visible in the file system, as inode number IFILE_INUM.  It
contains information shared between the kernel and various user processes.

	Ifile (detail)
	________________
	| cleaner info | Cleaner information per file system.  (Page
	|              | granularity.)
	|______________|
	| segment      | Space available and last modified times per
	| usage table  | segment.  (Page granularity.)
	|______________|
	|   IFILE-1    | Per inode status information: current version #,
	|     .        | whether currently allocated, last access time and
	|     .        | current disk address of containing inode block.
	|     .        | If the current disk address is LFS_UNUSED_DADDR, the
	|   IFILE-N    | inode is not in use, and it's on the free list.
	|______________|



First Segment at Creation Time:
_____________________________________________________________
|        |       |         |       |       |       |       |
| 8K pad | Super | summary | inode | ifile | root  | l + f |
|        | block |         | block |       | dir   | dir   |
|________|_______|_________|_______|_______|_______|_______|
	  ^
	  Segment starts here.

Some differences from the Sprite LFS implementation:

1. The Sprite implementation placed the ifile metadata and the super block
   at fixed locations.  This implementation replicates the super block
   and puts each copy at a fixed location.  The checkpoint data is divided
   into two parts -- just enough information to find the IFILE is stored in
   two of the super blocks, although it is not toggled between them as in
   the Sprite implementation.  (This was deliberate, to avoid a single
   point of failure.)  The remaining checkpoint information is treated as
   a regular file, which means that the cleaner info, the segment usage
   table and the ifile meta-data are stored in normal log segments.
   (Tastes great, less filling...)

2. The segment layout is radically different in Sprite; this implementation
   uses something a lot like network framing, where data/inode blocks are
   written asynchronously, and a checksum is used to validate any set of
   summary and data/inode blocks.  Sprite writes summary blocks synchronously
   after the data/inode blocks have been written and the existence of the
   summary block validates the data/inode blocks.  This permits us to write
   everything contiguously, even partial segments and their summaries, whereas
   Sprite is forced to seek (from the end of the data/inode blocks to the
   summary, which lives at the end of the segment).  Additionally, writing
   the summary synchronously should cost about 1/2 a rotation per summary.

3. Sprite LFS distinguishes between different types of blocks in the segment.
   Other than inode blocks and data blocks, we don't.

4. Sprite LFS traverses the IFILE looking for free blocks.  We maintain a
   free list threaded through the IFILE entries.

5. The cleaner runs in user space, as opposed to kernel space.  It shares
   information with the kernel by reading/writing the IFILE and through
   cleaner specific system calls.