From: Digest <deadmail>
To: "OS/2GenAu Digest"<deadmail>
Date: Mon, 14 Oct 2002 00:02:04 +1000
Subject: [os2genau_digest] No. 469
Reply-To: <deadmail>
X-List-Unsubscribe: www.os2site.com/list/

**************************************************
Sunday 13 October 2002
 Number  469
**************************************************

Subjects for today
 
1  [os2genau] Zipping files > 2GB : Ed Durrant <edurrant at bigpond dot net dot au>
2  Re: [os2genau] Zipping files > 2GB : "Ian Manners" <deadmail>
3  [os2genau] File systems : "Alan Duval" <amoht at ozemail dot com dot au>
4  Re: [os2genau] File systems : Ed Durrant <edurrant at bigpond dot net dot au>

**= Email   1 ==========================**

Date:  Sun, 13 Oct 2002 18:47:24 +1000
From:  Ed Durrant <edurrant at bigpond dot net dot au>
Subject:  [os2genau] Zipping files > 2GB

I'm trying to zip a VirtualPC virtual hard disk file, and whether I
zip it with ZIP or PKZIP, the resulting zip file is corrupt (it
reports an invalid end-of-file error). Is this a restriction of the
zip format, that it can't zip files that are greater than 2GB in
size? Has anyone got an alternative suggestion that they know works
with large files?

Cheers/2

Ed
----------------------------------------------------------------------------------
 

**= Email   2 ==========================**

Date:  Sun, 13 Oct 2002 20:18:49 +1000 (EST)
From:  "Ian Manners" <deadmail>
Subject:  Re: [os2genau] Zipping files > 2GB

Hi Ed

>I'm trying to zip a VirtualPC virtual hard disk file, and whether I
>zip it with ZIP or PKZIP, the resulting zip file is corrupt (it

I use RAR for all my backups; that way, if I need to replace
a HD I simply LVM or FDISK, format, then unrar the image
back. It's also easy to retrieve individual files if needed.

http://www.os2site dot com/sw/util/archiver/

RAR250P.EXE is the last GUI version, the more recent
ones are command line only.

RAR happily rar's up 11GB of os2site dot com.

>reports an invalid end-of-file error). Is this a restriction of the
>zip format, that it can't zip files that are greater than 2GB in

I believe that zip has a file and/or size limit.
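A common way around such archiver limits is to split the image into
pieces first and archive those. A minimal sketch using the standard
split/cat utilities (the file names and sizes here are invented for
illustration; a real VHD would be split with something like -b 2000m):

```shell
# Create a small stand-in for a large disk image.
dd if=/dev/zero of=disk.img bs=1024 count=64 2>/dev/null

# Split it into fixed-size pieces small enough for the archiver's
# limit; each piece can then be zipped or rar'd individually.
split -b 16k disk.img disk.img.part_

# To restore, reassemble the pieces in order and verify the result
# matches the original image byte for byte.
cat disk.img.part_* > restored.img
cmp disk.img restored.img && echo "images match"
```

The glob expands in alphabetical order, which is the order split
created the pieces in, so plain cat is enough to reassemble.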

Cheers
Ian B Manners


Not tonight dear, I have a modem.
----------------------------------------------------------------------------------
 

**= Email   3 ==========================**

Date:  Sun, 13 Oct 2002 21:46:30 -0400 (EDT)
From:  "Alan Duval" <amoht at ozemail dot com dot au>
Subject:  [os2genau] File systems

Hi,

I have just read the following article.

   - The performance of plain HPFS is not very good. The maximum
     cache size is ridiculously small, but that doesn't explain the
     surprisingly poor performance of large sequential reads.
   - The performance delta between HPFS386 and JFS is very small.
   - HPFS386 is the fastest on writes; JFS is somewhat slowed down
     by the journaling overhead.
   - JFS clearly has the best read throughput, most likely due to a
     straighter path through the kernel. I suspect that FAT has a
     similar advantage (though for different reasons).
   - The differences in raw read throughput are simply amazing. The
     winner (JFS) was very nearly 100% faster than the loser (HPFS).
   - I was impressed by JFS's performance because the theoretical
     maximum throughput of UW SCSI is 40MB/sec. I consider achieving
     slightly over 75% of the theoretical maximum at application
     level quite good.
   - It is necessary to differentiate between the filesystem layout
     on the storage media and the actual filesystem driver. The
     latter is obviously tremendously important, as the comparison of
     HPFS versus HPFS386 shows. The performance difference is
     striking when we consider that both IFSs organize the data on
     the storage media in exactly the same way.
   - It is interesting that out of only three tests and four
     filesystems, no filesystem consistently scored best or worst.
     That shows how difficult it is to pick a winner.

So which filesystem is the best? The answer is "it depends" - that
is, it depends on the user's needs. To make things simpler, let's
first see which filesystems are not the best:

   - FAT - the performance isn't terribly good even with a big FAT
     cache. And when it comes to features, FAT is the clear loser.
     The lack of long filenames and the maximum volume size limit of
     2GB preclude FAT from serious use. Its only saving grace is wide
     compatibility with other OSes, and the fact that FAT is still a
     good filesystem for floppies.
   - HPFS - the features are almost as good as HPFS386's, but the
     performance isn't stellar. The extremely small cache size limit
     seems to be HPFS's worst deficiency, but sequential read
     performance isn't very impressive either - HPFS was by far the
     slowest in that test, slower even than FAT. My recommendation:
     use plain HPFS as little as possible.

That leaves two contestants ahead of the pack: HPFS386 and JFS. There
is no clear winner. There is little difference between these two IFSs
performance-wise. HPFS386 is faster on writes, but JFS has a clear
edge when it comes to reading big chunks of data. Both have very
efficient caches - in the build test the CPU was 100% utilized almost
all the time with both filesystems. Unless you actually take a
stopwatch, both IFSs perform equally well, although each of them has
specific strengths and weaknesses.


As I have never heard of HPFS386 before, could someone tell me how
one would get it, and how it could be installed for OS/2 to use?

Also, how do people use OS/2 with JFS if you can't boot from it?


Alan Duval


----------------------------------------------------------------------------------
 

**= Email   4 ==========================**

Date:  Sun, 13 Oct 2002 21:57:31 +1000
From:  Ed Durrant <edurrant at bigpond dot net dot au>
Subject:  Re: [os2genau] File systems

HPFS386 is a 32-bit Installable File System that was supplied with
OS/2 Warp Server Advanced. It was never intended to be used on a
client system; however, some people have managed this.

JFS under OS/2 is used on data drives; at present it cannot be used
as the boot drive's file system (but that could change soon ....).
Again, JFS is meant for use with Warp Server for e-business rather
than the OS/2 Warp client; however, since there are now great
similarities between the client and server OSes, it will indeed also
run on the client OS.

One MAJOR advantage of JFS over HPFS, HPFS386 and NTFS is that,
should a system fail and leave a drive "dirty", JFS only needs to go
back and re-apply any missed changes from its journal. It does not
need to checkdisk the complete drive as the other file systems do.
This might not sound like a big advantage, however when you have 100
or 200 GB disk arrays in servers, a full checkdisk can take 6-12
HOURS to complete! JFS takes less than 5 minutes.
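The difference Ed describes can be sketched with a toy model (all
file names here are invented, and a real JFS journal records metadata
blocks rather than whole files): after a crash, recovery work scales
with the size of the journal, not the size of the volume.

```shell
# Build a toy "volume" of 500 files, plus a journal that lists only
# the files touched since the last clean shutdown.
mkdir -p volume
for i in $(seq 1 500); do echo "old" > "volume/file$i"; done
printf "file3\nfile97\nfile412\n" > journal.log

# Journal replay: revisit only the dirty files recorded in the log...
while read f; do echo "new" > "volume/$f"; done < journal.log
echo "replayed $(wc -l < journal.log | tr -d ' ') entries"

# ...whereas a full checkdisk-style pass must examine every file:
echo "a full scan would touch $(ls volume | wc -l | tr -d ' ') files"
```

Replay touches 3 files here; a full scan would touch all 500, which
is why the gap grows to hours on a 100-200 GB array.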

Microsoft is about to release, with their .NET Server 2003, a
similar file system. They are not calling it JFS, but to all intents
and purposes it is "their version" of JFS.

Cheers/2

Ed.  

Alan Duval wrote:
> As I have never heard of HPFS386 before, could someone tell me how would one get this and how could it
> be installed for OS/2 to use it?
> 
> Also how do people use OS/2 with JFS if you can't boot it?
> 
> Alan Duval

----------------------------------------------------------------------------------
 

