/src/sys/ufs/lfs
Name                  Date         Size
CHANGES               07-Jan-2025  8K
lfs.h                 17-Sep-2025  50.2K
lfs_accessors.h       15-Sep-2025  46.9K
lfs_alloc.c           17-Sep-2025  31.3K
lfs_balloc.c          05-Sep-2020  19.9K
lfs_bio.c             15-Sep-2025  21.5K
lfs_cksum.c           02-Aug-2015  4.3K
lfs_debug.c           23-Feb-2020  10.6K
lfs_extern.h          17-Sep-2025  10.8K
lfs_inode.c           23-Apr-2020  27.8K
lfs_inode.h           23-Mar-2022  10.3K
lfs_itimes.c          10-Jun-2017  3.7K
lfs_kernel.h          20-Jun-2016  5K
lfs_pages.c           11-Apr-2023  26.5K
lfs_rename.c          20-Oct-2021  32K
lfs_rfw.c             17-Sep-2025  27.7K
lfs_segment.c         17-Sep-2025  78.9K
lfs_subr.c            04-Sep-2025  18.1K
lfs_syscalls.c        18-Feb-2020  27.1K
lfs_vfsops.c          19-Sep-2025  73.5K
lfs_vnops.c           17-Sep-2025  61K
Makefile              28-Jul-2015  170
README                15-Mar-1999  5.9K
README.wc             06-Jun-2013  706
TODO                  11-Dec-2005  5.3K
ulfs_bmap.c           30-Mar-2017  12.6K
ulfs_bswap.h          19-Apr-2018  2.7K
ulfs_dinode.h         20-Jun-2016  2.8K
ulfs_dirhash.c        07-Aug-2022  33.7K
ulfs_dirhash.h        19-Aug-2021  5.2K
ulfs_extattr.c        10-Feb-2024  41.1K
ulfs_extattr.h        20-Jun-2016  4.7K
ulfs_extern.h         18-Jul-2021  5.2K
ulfs_inode.c          05-Sep-2020  7.4K
ulfs_inode.h          17-Feb-2024  8.1K
ulfs_lookup.c         08-Sep-2024  37.5K
ulfs_quota.c          19-Jun-2016  24.2K
ulfs_quota.h          20-Jun-2016  6.1K
ulfs_quota1.c         29-Jun-2021  23.4K
ulfs_quota1.h         20-Jun-2016  4.4K
ulfs_quota1_subr.c    25-Jul-2021  3.1K
ulfs_quota2.c         28-May-2022  41.1K
ulfs_quota2.h         06-Jun-2013  4.7K
ulfs_quota2_subr.c    24-Aug-2023  4.3K
ulfs_quotacommon.h    08-Jun-2013  3.1K
ulfs_readwrite.c      20-Oct-2021  14.7K
ulfs_snapshot.c       05-Sep-2020  2.8K
ulfs_vfsops.c         17-Jan-2020  6.3K
ulfs_vnops.c          27-Mar-2022  31.2K
ulfsmount.h           20-Jun-2016  4.2K

README

#	$NetBSD: README,v 1.3 1999/03/15 00:46:47 perseant Exp $

#	@(#)README	8.1 (Berkeley) 6/11/93

The file system is reasonably stable...I think.

For details on the implementation, performance, and why garbage
collection always wins, see Dr. Margo Seltzer's thesis, available for
anonymous ftp from toe.cs.berkeley.edu in the directory
pub/personal/margo/thesis.ps.Z, or the January 1993 USENIX paper.

----------
The disk is laid out in segments.  The first segment starts 8K into the
disk (the first 8K is used for boot information).  Each segment is composed
of the following:

	An optional super block
	One or more groups of:
		segment summary
		0 or more data blocks
		0 or more inode blocks

The segment summary and inode/data blocks start after the super block (if
present), and grow toward the end of the segment.

	_______________________________________________
	|         |            |         |            |
	| summary | data/inode | summary | data/inode |
	|  block  |   blocks   |  block  |   blocks   | ...
	|_________|____________|_________|____________|

The data/inode blocks following a summary block are described by the
summary block.  In order to permit the segment to be written in any order
and in a forward direction only, a checksum is calculated across the
blocks described by the summary.  Additionally, the summary is checksummed
and timestamped.  Both of these are intended for recovery; the former
makes it easy to determine that it *is* a summary block, and the latter
makes it easy to determine when recovery is finished for partially
written segments.  These checksums are also used by the cleaner.

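As an illustration of how a checksum can validate blocks that may be only
partially on disk, here is a minimal sketch that folds one word from each
described block into a single sum.  The function name and the choice of
checksumming only the first 32-bit word per block are assumptions made
for brevity; the real routines live in lfs_cksum.c.

	#include <sys/types.h>

	/*
	 * Fold the first 32-bit word of each block described by a
	 * summary into one checksum.  If any described block never
	 * reached the disk, the stale word there breaks the sum, so
	 * recovery can tell a partially written segment from a
	 * complete one.
	 */
	u_int32_t
	demo_datasum(const void **blocks, size_t nblocks)
	{
		u_int32_t sum = 0;
		size_t i;

		for (i = 0; i < nblocks; i++)
			sum += *(const u_int32_t *)blocks[i];
		return sum;
	}
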
	Summary block (detail)
	________________
	| sum cksum    |
	| data cksum   |
	| next segment |
	| timestamp    |
	| FINFO count  |
	| inode count  |
	| flags        |
	|______________|
	|   FINFO-1    | 0 or more file info structures, identifying the
	|     .        | blocks in the segment.
	|     .        |
	|     .        |
	|   FINFO-N    |
	|   inode-N    |
	|     .        |
	|     .        |
	|     .        | 0 or more inode daddr_t's, identifying the inode
	|   inode-1    | blocks in the segment.
	|______________|

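In C, the fixed header of the summary block maps onto a structure along
the lines of the sketch below.  Field names and widths here are
illustrative only; the real on-disk definition is struct segsum in lfs.h
and lfs_accessors.h.

	#include <sys/types.h>

	/* Sketch of the summary-block header drawn above. */
	struct demo_segsum {
		u_int32_t ss_sumsum;	/* checksum of the summary itself */
		u_int32_t ss_datasum;	/* checksum of the described blocks */
		u_int32_t ss_next;	/* disk address of the next segment */
		u_int32_t ss_create;	/* creation timestamp, for recovery */
		u_int16_t ss_nfinfo;	/* number of FINFO structures */
		u_int16_t ss_ninos;	/* number of inodes described */
		u_int16_t ss_flags;	/* summary flags */
		u_int16_t ss_pad;
		/*
		 * FINFOs grow forward from here; inode block addresses
		 * grow backward from the end of the block, as drawn.
		 */
	};
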
Inode blocks are blocks of on-disk inodes in the same format as those in
the FFS.  However, spare[0] contains the inode number of the inode so we
can find a particular inode on a page.  They are packed page_size /
sizeof(inode) to a block.  Data blocks are exactly as in the FFS.  Both
inodes and data blocks move around the file system at will.

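Because inodes move around, a particular inode must be located inside an
inode block by the inode number stored in it.  A minimal sketch of that
scan, with a hypothetical demo_dinode standing in for the real FFS-format
on-disk inode:

	#include <sys/types.h>
	#include <stddef.h>

	/* Stand-in for the on-disk inode; the real one is FFS-format,
	 * with the inode number kept in a spare field (spare[0]). */
	struct demo_dinode {
		u_int32_t di_inumber;	/* which inode this is */
		/* ... remaining FFS-style fields ... */
	};

	/*
	 * Scan one inode block for a given inode number.  Inodes are
	 * packed page_size / sizeof(inode) to a block and sit at no
	 * fixed offset, so a linear scan keyed on the embedded number
	 * is required.
	 */
	struct demo_dinode *
	demo_find_inode(struct demo_dinode *blk, size_t inodes_per_block,
	    u_int32_t inumber)
	{
		size_t i;

		for (i = 0; i < inodes_per_block; i++)
			if (blk[i].di_inumber == inumber)
				return &blk[i];
		return NULL;	/* stale block: the inode moved elsewhere */
	}
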
The file system is described by a super-block which is replicated and
occurs as the first block of the first and other segments.  (The maximum
number of super-blocks is MAXNUMSB.)  Each super-block maintains a list
of the disk addresses of all the super-blocks.  The super-block maintains
a small amount of checkpoint information, essentially just enough to find
the inode for the IFILE (fs->lfs_idaddr).

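The checkpoint-relevant super-block fields reduce to something like the
following sketch.  The names follow the README's own MAXNUMSB and
fs->lfs_idaddr; the value of DEMO_MAXNUMSB and the field types are
assumptions, and the full definition lives in lfs.h.

	#include <sys/types.h>

	#define DEMO_MAXNUMSB	10	/* assumed value of MAXNUMSB */

	/*
	 * Checkpoint data carried by each super-block replica: where
	 * the replicas live, and where the IFILE's inode is.  Finding
	 * the IFILE inode is enough to bootstrap, since the rest of
	 * the checkpoint state is stored in the IFILE itself.
	 */
	struct demo_sb_checkpoint {
		int32_t	lfs_sboffs[DEMO_MAXNUMSB]; /* all super-block addresses */
		int32_t	lfs_idaddr;		   /* IFILE inode disk address */
	};
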
The IFILE is visible in the file system, as inode number IFILE_INUM.  It
contains information shared between the kernel and various user processes.

	Ifile (detail)
	________________
	| cleaner info | Cleaner information per file system.  (Page
	|              | granularity.)
	|______________|
	| segment      | Space available and last modified times per
	| usage table  | segment.  (Page granularity.)
	|______________|
	|   IFILE-1    | Per-inode status information: current version #,
	|     .        | whether currently allocated, last access time, and
	|     .        | current disk address of containing inode block.
	|     .        | If current disk address is LFS_UNUSED_DADDR, the
	|   IFILE-N    | inode is not in use, and it is on the free list.
	|______________|

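A per-inode IFILE entry therefore carries roughly the fields sketched
below.  Names and widths are illustrative; the real IFILE, cleaner-info,
and segment-usage layouts are defined in lfs.h.

	#include <sys/types.h>

	#define DEMO_LFS_UNUSED_DADDR	0	/* assumed out-of-band address */

	/*
	 * Sketch of one per-inode IFILE entry.  When if_daddr is
	 * DEMO_LFS_UNUSED_DADDR the inode is unallocated, and
	 * if_nextfree threads it onto the free list (difference #4
	 * below).
	 */
	struct demo_ifile_entry {
		u_int32_t if_version;	/* current version number */
		int32_t	  if_daddr;	/* address of containing inode block */
		u_int32_t if_nextfree;	/* next free inode, when free */
		u_int32_t if_atime;	/* last access time */
	};
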

First Segment at Creation Time:
_____________________________________________________________
|        |       |         |       |       |       |       |
| 8K pad | Super | summary | inode | ifile | root  | l + f |
|        | block |         | block |       | dir   | dir   |
|________|_______|_________|_______|_______|_______|_______|
         ^
         Segment starts here.

Some differences from the Sprite LFS implementation:

1. The Sprite LFS implementation placed the ifile metadata and the super
   block at fixed locations.  This implementation replicates the super
   block and puts each copy at a fixed location.  The checkpoint data is
   divided into two parts -- just enough information to find the IFILE is
   stored in two of the super blocks, although it is not toggled between
   them as in the Sprite implementation.  (This was deliberate, to avoid
   a single point of failure.)  The remaining checkpoint information is
   treated as a regular file, which means that the cleaner info, the
   segment usage table and the ifile meta-data are stored in normal log
   segments.  (Tastes great, less filling...)

2. The segment layout is radically different in Sprite; this implementation
   uses something a lot like network framing, where data/inode blocks are
   written asynchronously, and a checksum is used to validate any set of
   summary and data/inode blocks.  Sprite writes summary blocks synchronously
   after the data/inode blocks have been written, and the existence of the
   summary block validates the data/inode blocks.  This permits us to write
   everything contiguously, even partial segments and their summaries, whereas
   Sprite is forced to seek (from the end of the data/inode blocks to the
   summary which lives at the end of the segment).  Additionally, writing the
   summary synchronously should cost about 1/2 a rotation per summary.

3. Sprite LFS distinguishes between different types of blocks in the segment.
   Other than inode blocks and data blocks, we don't.

4. Sprite LFS traverses the IFILE looking for free inodes.  We maintain a
   free list threaded through the IFILE entries; see the allocation sketch
   below.

5. The cleaner runs in user space, as opposed to kernel space.  It shares
   information with the kernel by reading/writing the IFILE and through
   cleaner-specific system calls.

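As a sketch of difference #4, the allocation path can simply pop the head
of the free list instead of scanning the IFILE.  The demo_* names below
are hypothetical stand-ins (the real code lives in lfs_alloc.c), and the
entry layout repeats, abridged, the IFILE sketch from earlier.

	#include <sys/types.h>

	/* Abridged per-inode IFILE entry (hypothetical names). */
	struct demo_ifile_entry {
		u_int32_t if_version;	/* inode version number */
		int32_t	  if_daddr;	/* LFS_UNUSED_DADDR when free */
		u_int32_t if_nextfree;	/* next inode on the free list */
	};

	/*
	 * Pop the head of the inode free list threaded through the
	 * IFILE entries.  Assumes inode number 0 terminates the list.
	 */
	u_int32_t
	demo_alloc_inode(struct demo_ifile_entry *ifile, u_int32_t *free_head)
	{
		u_int32_t ino = *free_head;

		if (ino == 0)
			return 0;			/* no free inodes */
		*free_head = ifile[ino].if_nextfree;	/* unlink new head */
		ifile[ino].if_nextfree = 0;
		return ino;
	}
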

README.wc

Line counts.

(Part of the premise of splitting lfs from ufs is that in the long run
the size of a standalone lfs will be substantially smaller than the
size of lfs plus the size of ufs. This file is for keeping track of
this proposition.)

As of 20130604 (before the split):

		.h	.c	total
lfs		1467	13858	15325
ufs		2056	12919	14975

As of 20130605 (copied all ufs files verbatim):

		.h	.c	total
lfs-native	1467	13858	15325
lfs-ulfs	2070	12938	15008
lfs-total	3537	26796	30333

A few extra lines appeared when copying ufs because I preserved a copy of
the old rcsids.

As of 20130606 (committed the initial split and made things buildable):

		.h	.c	total
lfs-native	1482	13858	15340
lfs-ulfs	1994	13028	15022
lfs-total	3476	26886	30362