I recently overheard that the config service would keep
its database in a directory tree where each record would
be a file.  I don't think this is a very good idea.

The time required to load the data from the disc, using this
scheme, becomes dominated by the open and close times.  Each open
requires reading the file's inode, and each close requires writing
it back.  You also give up the advantage of read-ahead since
each record is less than a block.

The amount of disc space wasted is very high.  The numbers I have heard
are 200,000 attributes at 50 characters each.  Each inode costs 64 bytes,
and each block costs 1K.  200,000 * (64 + 1024) = 217,600,000.  The wasted
space in each block is 1024 - 50 or 974 per attribute.  That is 194,800,000
total.  I think that a single file could be used instead.

Also, the probability of the file system getting corrupted is proportional
to the amount of file creation and unlinking going on in the system
when it goes down.  Fsck times will also be longer with more files on the disc.

There is no need for such a subterfuge.
UNIX makes writing data access programs very easy.  The read/write
pointer for a file can be changed with the inexpensive lseek(2) call.
Given that we can define the maximum size of each data item, we
could use one file for each type of record and calculate the
location of the record.  An lseek(2) later and we have our info.
