Media Longevity/Care

Guy Sotomayor ggs at shiresoft.com
Wed Mar 16 17:02:03 CST 2005


On Wed, 2005-03-16 at 14:17 -0800, Eric Smith wrote:
> Scott wrote:
> > The phenomenon of failure from the outside in has interesting
> > possibilities: have software write a known pattern in the outermost
> > section of the disc and pop up a warning when it starts to deteriorate.
> > Hopefully this would give enough time to move data to another disk.
> 
> It doesn't have to be a known pattern.  There's so much error correction
> going on that as soon as you start detecting read errors (i.e., so many
> errors that the error correction fails), you know you've got problems.
> Writing all zeros, all ones, alternating bits or bytes, or random bits
> would all work equally well for this test, and if they read at all, there's
> no point in comparing the data to known values.
> 
> The reason all zeros or all ones is a reasonable test is that the data
> goes through a scrambling process and is recorded with a DC-free code.
> No matter what you write, there ends up being a mix of pit lengths.
> (You could construct a pathological case, but it would actually be
> quite hard to do so and to write it with any guarantee that it comes
> out the way you expect.)
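The read-and-watch-for-errors approach above can be sketched in a few lines. This is a hypothetical illustration, not a real surface-scan tool: it assumes a device or image file whose failed reads surface as an OSError (or a short read at end of medium), and `SECTOR` here is just the 2048 user-data bytes of a Mode 1 sector.

```python
import io

SECTOR = 2048  # user-data bytes per CD-ROM Mode 1 sector

def scan(dev, n_sectors):
    """Read sectors sequentially; return indices of sectors that failed."""
    bad = []
    for i in range(n_sectors):
        try:
            dev.seek(i * SECTOR)
            data = dev.read(SECTOR)
            if len(data) != SECTOR:
                bad.append(i)          # short read: ran off the medium
        except OSError:
            bad.append(i)              # drive gave up: error correction failed
    return bad

# demo on an in-memory "disc" whose last sector is truncated
disc = io.BytesIO(b"\x00" * (SECTOR * 3 + 100))
print(scan(disc, 4))                   # -> [3]
```

As Eric notes, there is no need to compare the data against known values; the drive's own ECC failure (surfacing as a read error) is the signal.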

Actually you want to find patterns (and they do exist) that are
pathological (ie worst case) for the encoder/decoder *and* for the error
correction being used.

Data CD-ROMs use eight-to-fourteen modulation (EFM) and CIRC (Cross-
Interleaved Reed-Solomon Coding), which combines two Reed-Solomon
codes, for error correction.
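A toy sketch of why the cross-interleaving in CIRC matters. These are not the real CIRC parameters (CIRC uses (32,28) and (28,24) Reed-Solomon codes with per-symbol delay lines); this just shows a contiguous burst on the medium being dispersed so that each codeword sees only one bad symbol:

```python
def interleave(data, width):
    """Write row-by-row into a width-column grid, read out column-by-column."""
    rows = [data[i:i+width] for i in range(0, len(data), width)]
    return bytes(rows[r][c] for c in range(width) for r in range(len(rows)))

def deinterleave(data, width):
    """Inverse of interleave."""
    depth = len(data) // width
    cols = [data[i:i+depth] for i in range(0, len(data), depth)]
    return bytes(cols[c][r] for r in range(depth) for c in range(width))

msg = bytes(range(32))                  # 8 "codewords" of 4 symbols each
tx = bytearray(interleave(msg, 4))
tx[8:12] = b"\xff" * 4                  # a 4-symbol burst on the "disc"
rx = deinterleave(bytes(tx), 4)

# which codeword each corrupted symbol lands in after deinterleaving:
errs = [i // 4 for i in range(32) if rx[i] != msg[i]]
print(errs)                             # -> [0, 1, 2, 3]: one error per codeword
```

A burst that would swamp any single codeword is spread across four, each of which a modest Reed-Solomon code can then repair.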

An aspect of this is that the error correction chosen is (usually)
based on the type and frequency of errors you expect (ie single random
bits, runs of bad bits, etc).  Things get dicey when the errors
actually encountered aren't what the original design anticipated.
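A toy example of that mismatch, using a 3x repetition code (chosen purely for illustration; nothing on a CD works this way): it corrects the isolated single-copy errors it was designed for, but a short burst outside that error model defeats it.

```python
def encode(bits):
    """Repeat each bit three times."""
    return [b for b in bits for _ in range(3)]

def decode(coded):
    """Majority vote over each group of three copies."""
    return [int(sum(coded[i:i+3]) >= 2) for i in range(0, len(coded), 3)]

msg = [1, 0, 1, 1, 0, 0, 1, 0]
coded = encode(msg)

# isolated errors (the design assumption): at most one copy of any bit
scattered = coded[:]
for i in (0, 5, 8):
    scattered[i] ^= 1
print(decode(scattered) == msg)        # -> True: majority vote recovers

# a 3-bit burst (outside the design assumption) flips all copies of bit 1
burst = coded[:]
for i in (3, 4, 5):
    burst[i] ^= 1
print(decode(burst) == msg)            # -> False: the burst wins the vote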
-- 

TTFN - Guy
