From bqt at update.uu.se  Fri Jun  1 01:02:27 2018
From: bqt at update.uu.se (Johnny Billquist)
Date: Thu, 31 May 2018 17:02:27 +0200
Subject: [TUHS] Control-T (was top)
In-Reply-To: <mailman.1.1527732002.17884.tuhs@minnie.tuhs.org>
References: <mailman.1.1527732002.17884.tuhs@minnie.tuhs.org>
Message-ID: <f237df92-45a8-6820-8f74-fc4ec7ae34e0@update.uu.se>

On 31.05.18 04:00, Pete Turnbull <pete at dunnington.plus.com> wrote:
> On 30/05/2018 23:10, Johnny Billquist wrote:
>> On 30.05.18 02:19, Clem cole wrote:
>>> 1). If you grab ^s/^q from the alphabet it means you can send raw 8
>>> bit data because they are valid characters (yes you can use escape
>>> chars but it gets very bad)
>> But this has nothing to do with 8 bit data.
> [...]
>> What you are talking about is simply the problem when you have inband
>> signalling, and is a totally different problem than 8bit data.
> True, but what I believe Clem meant was "binary data".  He did write
> "raw data".

Clem originally said:
"And the other issue was besides not enough buffering DZ11s and the TU58 
assumes  software (^S/^Q) for the flow control (DH11s with the DM11 
option has hardware (RTS/CTS) to control the I/O rate.  So this of 
course made 8bit data diificult too."

And that is what I objected to. If we're talking about just getting data 
through a communications channel cleanly, then obviously inband 
signalling like XON/XOFF is a problem, but that was not what Clem claimed.

By the way, while I'm on this topic. The TU58 (aka DECtape II - curse 
that name) initially did have overrun problems, and that was because the 
protocol did not have proper handshaking. There is flow control when the 
TU58 sends data to the host, but there is no handshaking, which 
amplifies the problem with flow control on that device. The protocol the 
TU58 uses is called RSP, for Radial Serial Protocol. DEC realized the 
problem and modified the protocol; the result was called MRSP, or 
Modified Radial Serial Protocol, which addressed the specific problem of 
handshaking when sending data to the host.

But in addition to the lack of handshaking, the TU58 was normally 
expected to be connected to a DL11, and the DL11 is a truly stupid, 
simple device that has only a one-character buffer. So that interface can 
easily drop characters on reception, flow control or not. The TU58 
itself is not really to blame, and if you have an updated TU58 that uses 
MRSP, you are better off.

>> RTS/CTS signalling is certainly a part of the standard, but it is not
>> meant for flow control. It is for half duplex and the way to signal when
>> you want to change the direction of communication.
>>
>> It is meant for *half duplex* communication. Not flow control.
> Actually, for both, explicitly in the standard.  The standard (which I
> have in front of me, RS232C August 1969 reaffirmed June 1981) does
> indeed explain at length how to use it for half duplex turnaround but
> also indicates how to use it for flow control.  It states the use on
> one-way channels and full-duplex channels to stop transmission in a
> paragraph before the paragraph discussing half duplex.

Using RTS/CTS for flow control did eventually make it into the 
standard, but that only happened around 1990 with RS-232-E. See 
https://groups.google.com/forum/#!original/comp.dcom.modems/iOZRZkTKc-o/ZcdcMgUmBDkJ 
for some of the background story on that.

And that obviously is way after DEC was making any of these serial 
interfaces.

Wikipedia has a pretty decent explanation of how these signals were 
defined back in the day:
"The RTS and CTS signals were originally defined for use with 
half-duplex (one direction at a time) modems such as the Bell 202. These 
modems disable their transmitters when not required and must transmit a 
synchronization preamble to the receiver when they are re-enabled. The 
DTE asserts RTS to indicate a desire to transmit to the DCE, and in 
response the DCE asserts CTS to grant permission, once synchronization 
with the DCE at the far end is achieved. Such modems are no longer in 
common use. There is no corresponding signal that the DTE could use to 
temporarily halt incoming data from the DCE. Thus RS-232's use of the 
RTS and CTS signals, per the older versions of the standard, is asymmetric."

See Wikipedia itself if you want to get even more signals explained.

If you have a copy of the 1969 standards document, could you share it? I 
tried to locate a copy just now, but failed. I know I've read it in the 
past, but I have no recollection of where I had it then, or where I got 
it from.

>> The AT "standard" was never actually a standard. It's only a defacto
>> standard, but is just something Hayes did, as I'm sure you know.
> Except for ITU-T V.250, which admittedly is a Recommendation not a Standard.

This also becomes a question of whom you accept as an authority to 
establish standards. One could certainly, in one sense, say that the 
Hayes AT commands were a standard. But all of that also happened long 
after DEC made these interfaces, and as such it is also pretty 
irrelevant here. The link above to the Usenet post from 1990 is, by the 
way, from the standards representative at Hayes. So they were certainly 
working with the standards bodies to establish how to do things.

   Johnny

-- 
Johnny Billquist                  || "I'm on a bus
                                   ||  on a psychedelic trip
email: bqt at softjar.se             ||  Reading murder books
pdp is alive!                     ||  tryin' to stay hip" - B. Idol


From cym224 at gmail.com  Fri Jun  1 07:42:44 2018
From: cym224 at gmail.com (Nemo)
Date: Thu, 31 May 2018 17:42:44 -0400
Subject: [TUHS] Control-T (was top)
In-Reply-To: <f237df92-45a8-6820-8f74-fc4ec7ae34e0@update.uu.se>
References: <mailman.1.1527732002.17884.tuhs@minnie.tuhs.org>
 <f237df92-45a8-6820-8f74-fc4ec7ae34e0@update.uu.se>
Message-ID: <CAJfiPzwygjhfPVHuRa0w8fB=gh6zS7xFwqxOu+JQ3P6WNtevZw@mail.gmail.com>

On 31/05/2018, Johnny Billquist <bqt at update.uu.se> wrote (in part):
[Wonderful stuff clipped out.]

This really brought back fond memories of writing software to
interface with modems.

> This also becomes a question of who you accept as an authority to
> establish standards. One could certainly in one sense say that the Hayes
> AT commands were a standard. But all of that also happened long after
> DEC made these interfaces, and as such are also pretty irrelevant. The
> link to the post in news back in 1990 is from the standards
> representative from Hayes, by the way. So they were certainly working
> with the standards bodies to establish how to do things.

Indeed, now we have the ETSI AT commands. #6-)

N.

>
>    Johnny


From norman at oclsc.org  Sat Jun  2 07:35:08 2018
From: norman at oclsc.org (Norman Wilson)
Date: Fri, 01 Jun 2018 17:35:08 -0400
Subject: [TUHS] Serial communications (was Control-T (was top))
Message-ID: <1527888911.6660.for-standards-violators@oclsc.org>

Clem Cole:

  WE212 modems worked on dz11 for input but just barely and could not use
  a WE dialer for output (who's number I now forget). Remember the AT
  command dialer stuff was not originally a standard from the telcos.
  The modems were independent of the dialer (which used the RS-4xx
  standard and I also have some where and I forget the number).

=====

There was certainly some WECo modems-in-a-rack-with-one-dialer setup
that could be driven by DZ11s.  I know that because that is what the
UUCP node `research' used in the 1980s, when I was at Bell Labs.

I don't know what the modem hardware was; maybe it was newer than
what Clem describes, or maybe someone (likely Bill Marshall or Joe
Condon) made some special hardware to make it work.  But the UNIX
side of things was a VAX-11/750 running Research UNIX with a DZ11;
I'm quite sure of that.

I'm also sure that the modems were genuine WECo and spoke the 212
protocol, max speed 1200bps.

Aside: when I arrived at the Labs, the standard in the Computing
Science Center was that everyone got a Jerq terminal (AT&T 5620,
commercial descendant of the Blit) at work and at home.  For the
home end everyone got a modem with attached telephone (that's the
way the official Bell stuff came back then) and a second phone
line paid for by the Labs.  There was a pool of modems dedicated
to dialin for people working from home.

A few months later, it was discovered that a Japanese modem
manufacturer (Fujitsu? I forget) had a 9600bps modem that required
a four-wire leased line but had a reasonable cost, and that New
Jersey Bell had a new offering for leased lines at a reasonable
cost too.  So the Labs paid for a pair of the modems and a leased
line for each of us, and we all brought the old 212 modems back
in when convenient, and most of the second phone lines were
cancelled.  (A few people wanted to keep them, because they were
business lines from which one could make calls to BTL offices etc
without having to submit your home phone bills for reimbursement.)

The modems had LCD status displays.  In the early samples we had
for initial testing, I vaguely remember a few spelling errors in
the status messages; in particular SIGNAL QUARITY IS POOR.  The
error had been fixed in the big bulk order.

There was one more step in the evolution of RS-232-to-home: several
years later New Jersey Bell began offering a new service called
Co-LAN to those served by certain central offices (including me).
The customer was given a box with input phone jack, output phone
jack, and a DB25 connector.  The latter afforded a serial connection
to the CO, multiplexed with the POTS signal (presumably a forerunner
of what is now called DSL).  At the CO, the serial line was plugged
into a Datakit switch.  So I would turn on my terminal and get a
Datakit prompt, type a network address referring to a system on
our Datakit network at MH, type a network-access password, and
I was connected the same as if I were at work, at 9600 bps.

New Jersey Bell hoped to sell both the service and Datakit to
other corporations; hence the password, so customers could get
only to the networks for which they were authorized.  I don't
know whether that happened much.  It was sure convenient for
Bell Labs, though.

Norman Wilson
Toronto ON


From dave at horsfall.org  Sat Jun  2 08:09:01 2018
From: dave at horsfall.org (Dave Horsfall)
Date: Sat, 2 Jun 2018 08:09:01 +1000 (EST)
Subject: [TUHS] Serial communications (was Control-T (was top))
In-Reply-To: <1527888911.6660.for-standards-violators@oclsc.org>
References: <1527888911.6660.for-standards-violators@oclsc.org>
Message-ID: <alpine.BSF.2.21.999.1806020807530.37599@aneurin.horsfall.org>

On Fri, 1 Jun 2018, Norman Wilson wrote:

> The modems had LCD status displays.  In the early samples we had for 
> initial testing, I vaguely remember a few spelling errors in the status 
> messages; in particular SIGNAL QUARITY IS POOR.  The error had been 
> fixed in the big bulk order.

Well, they were Japanese modems, after all... :-)

-- Dave


From dave at horsfall.org  Sun Jun  3 09:35:22 2018
From: dave at horsfall.org (Dave Horsfall)
Date: Sun, 3 Jun 2018 09:35:22 +1000 (EST)
Subject: [TUHS] In memory of: J. Presper Eckert
Message-ID: <alpine.BSF.2.21.999.1806030932050.37599@aneurin.horsfall.org>

(Yeah, I was asked to not use "RIP", so...)

We lost J. Presper Eckert back in 1995; he was a co-inventor of ENIAC (one 
of the world's first "electronic brains").

-- 
Dave Horsfall DTM (VK2KFU)  "Those who don't understand security will suffer."


From mike.ab3ap at gmail.com  Sun Jun  3 10:16:47 2018
From: mike.ab3ap at gmail.com (Mike Markowski)
Date: Sat, 2 Jun 2018 20:16:47 -0400
Subject: [TUHS] In memory of: J. Presper Eckert
In-Reply-To: <alpine.BSF.2.21.999.1806030932050.37599@aneurin.horsfall.org>
References: <alpine.BSF.2.21.999.1806030932050.37599@aneurin.horsfall.org>
Message-ID: <19da2a9d-cf3d-504b-1aa8-15af7adc20e9@gmail.com>

On 06/02/2018 07:35 PM, Dave Horsfall wrote:
> (Yeah, I was asked to not use "RIP", so...)
> 
> We lost J. Presper Eckert back in 1995; he was a co-inventor of ENIAC 
> (one of the world's first "electronic brains").

Around 1991 or so, four of us, including Mike Muuss who is mentioned 
here from time to time, used the old Ballistic Research Lab ENIAC room 
(small cavern!) as our office.  Some old ENIAC plans were even found in 
an old closet and given to the museum.  Chunks of the computer are still 
on display.

Mike Markowski


From ron at ronnatalie.com  Mon Jun  4 03:42:00 2018
From: ron at ronnatalie.com (Ron Natalie)
Date: Sun, 3 Jun 2018 13:42:00 -0400
Subject: [TUHS] In memory of: J. Presper Eckert
In-Reply-To: <19da2a9d-cf3d-504b-1aa8-15af7adc20e9@gmail.com>
References: <alpine.BSF.2.21.999.1806030932050.37599@aneurin.horsfall.org>
 <19da2a9d-cf3d-504b-1aa8-15af7adc20e9@gmail.com>
Message-ID: <013401d3fb62$2da41090$88ec31b0$@ronnatalie.com>

The ENIAC went to the Smithsonian.   It was on display for years in the History and Technology Museum (now American History).   Some of the guys you probably knew moved it down there (Don Merritt was one of
them).    They said they actually brought it up and running after it moved.

For years, we knew that room as the BRLESC room, named for the computer that superseded the ENIAC.    Pieces of the BRLESC were still there when we were using that room.   I have some of the power supply parts still.   I remember there was a big circuit breaker marked "FILAMENTS."   It's been a long time since we had computers with filaments.   The BRLESC had one of the first machine-independent languages.   FORAST was a locally developed language which ran on both machines (the BRLESC and the ORDVAC, such great computer names back then).    Irv Chidsey (whom you might have met as well) used to lament the day they had to switch from FORAST to FORTRAN.

A small plaque on the post in that room compared the ENIAC's compute power to the then-current HP-65 programmable calculator.   The room also had one of the earliest raised floors that I was aware of (though they carpeted over it when they turned the room into offices).

While Mike and I still shared an office in 394, the ENIAC room was where IMP 29 on the ARPANET sat, along with a PDP-11/40 system that ran a terminal server called ANTS (ArpaNet Terminal Server), complete with little ants silkscreened on the rack tops.    When the ARPANET went to long leaders, Mike replaced that software with a UNIX host, giving BRL its first real host on the ARPANET.   Years later I recycled those racks (discarding the 11/40) to hold BRL Gateways (retaining the ants).     Subsequently the Honeywell IMP was replaced with C-30's and relocated to the computer room downstairs, which contained the last CDC 7600 ever built.     That ran until right about the time I left the BRL in 1987.     Oddly enough a lot of the junk (timeplexors, etc...) associated with the 7600 ended up on my hand receipt.    The staff was not amused when I put a turn-in tag (a way to get rid of surplus equipment) on the 7600 itself after it was scheduled to be decommissioned.    We had already removed the HEP and replaced it with the Patton Cray X-MP.   Supercomputer UNIX was there to stay.

--
Around 1991 or so, four of us, including Mike Muuss who is mentioned 
here from time to time, used the old Ballistic Research Lab ENIAC room 
(small cavern!) as our office.  Some old ENIAC plans were even found in 
an old closet and given to the museum.  Chunks of the computer are still 
on display.

 



From pnr at planet.nl  Mon Jun  4 20:18:25 2018
From: pnr at planet.nl (Paul Ruizendaal)
Date: Mon, 4 Jun 2018 12:18:25 +0200
Subject: [TUHS] ANTS (was: In memory of: J. Presper Eckert)
In-Reply-To: <mailman.1.1528077602.18558.tuhs@minnie.tuhs.org>
References: <mailman.1.1528077602.18558.tuhs@minnie.tuhs.org>
Message-ID: <D9276431-DD9D-4BF2-B7D4-D4D853840887@planet.nl>

ANTS was written by Gary Grossman and the experience with ANTS and ANTS 2 was the direct inspiration for NCP Unix (which was probably what Mike installed):
http://chiselapp.com/user/pnr/repository/TUHS_wiki/wiki?name=ncpunix

The source for NCP Unix is available on the Unix Tree webpage:
https://minnie.tuhs.org//cgi-bin/utree.pl?file=SRI-NOSC


> Date: Sun, 3 Jun 2018 13:42:00 -0400
> From: "Ron Natalie" <ron at ronnatalie.com>
> 
> While Mike and I still shared an office in 394, the ENIAC room was where the IMP 29 on the ARPANET was and a PDP-11/40 system that ran a terminal server called ANTS (ArpaNet Terminal Server) complete with little ants silkscreened on the rack tops. When the ARPANET went to long leaders, Mike replaced that software with a UNIX host giving the BRL their real first HOST on the Arpanet. Years later I recycled those racks (discarding the 11/40) to hold BRL Gateways (retaining the ants).



From ron at ronnatalie.com  Mon Jun  4 22:28:24 2018
From: ron at ronnatalie.com (Ron Natalie)
Date: Mon, 4 Jun 2018 08:28:24 -0400
Subject: [TUHS] ANTS (was: In memory of: J. Presper Eckert)
In-Reply-To: <D9276431-DD9D-4BF2-B7D4-D4D853840887@planet.nl>
References: <mailman.1.1528077602.18558.tuhs@minnie.tuhs.org>
 <D9276431-DD9D-4BF2-B7D4-D4D853840887@planet.nl>
Message-ID: <002b01d3fbff$88de7930$9a9b6b90$@ronnatalie.com>

The kernel remained the JHU/BRL Unix kernel we used everywhere else (a
hacked version of V6 where we put a filesystem switch in to allow mounting
of either V6 or V7 filesystems).   I don't know where he got the NCP code
that he jammed into it, but it was likely the NCP Unix project you
mentioned.    I was actually not there at the time (my roommate Bob Miles
was) and this was one of the "January 1" cutover dates (a few years later,
we'd be jamming 4BSD TCP into that kernel for the TCP/IP conversion).    I
do remember watching Mike building the thing on the 11/70 and lugging the
RK05 over to the other building to test it.

-----Original Message-----
From: TUHS [mailto:tuhs-bounces at minnie.tuhs.org] On Behalf Of Paul
Ruizendaal
Sent: Monday, June 4, 2018 6:18 AM
To: tuhs at minnie.tuhs.org
Subject: [TUHS] ANTS (was: In memory of: J. Presper Eckert)

ANTS was written by Gary Grossman and the experience with ANTS and ANTS 2
was the direct inspiration for NCP Unix (which was probably what Mike
installed):
http://chiselapp.com/user/pnr/repository/TUHS_wiki/wiki?name=ncpunix

The source for NCP Unix is available on the Unix Tree webpage:
https://minnie.tuhs.org//cgi-bin/utree.pl?file=SRI-NOSC


> Date: Sun, 3 Jun 2018 13:42:00 -0400
> From: "Ron Natalie" <ron at ronnatalie.com>
> 
> While Mike and I still shared an office in 394, the ENIAC room was where
the IMP 29 on the ARPANET was and a PDP-11/40 system that ran a terminal
server called ANTS (ArpaNet Terminal Server) complete with little ants
silkscreened on the rack tops. When the ARPANET went to long leaders, Mike
replaced that software with a UNIX host giving the BRL their real first HOST
on the Arpanet. Years later I recycled those racks (discarding the 11/40) to
hold BRL Gateways (retaining the ants).




From dave at horsfall.org  Thu Jun  7 11:32:24 2018
From: dave at horsfall.org (Dave Horsfall)
Date: Thu, 7 Jun 2018 11:32:24 +1000 (EST)
Subject: [TUHS] In Memorium: Alan Turing
Message-ID: <alpine.BSF.2.21.999.1806071123240.68981@aneurin.horsfall.org>

We lost the Father of Computing, Alan Turing, on this day when he suicided 
in 1954 (long story).  Just imagine where computing would've been now...

Yes, there are various theories surrounding his death, such as a jealous 
lover, or the FBI knowing that he knew about Venona and could be 
compromised as a result of his sexuality, etc.  Unless they speak up 
(and they ain't), we will never know.

Unix reference?  Oh, that...  No computable devices (read his paper), no 
computers...  And after dealing with a shitload of OSs in my career, I 
daresay that there is no more computable OS than Unix.  Sorry, penguins, 
but you seem to require a fancy graphical interface these days, just like 
Windoze.

-- 
Dave Horsfall DTM (VK2KFU)  "Those who don't understand security will suffer."


From lm at mcvoy.com  Thu Jun  7 12:34:57 2018
From: lm at mcvoy.com (Larry McVoy)
Date: Wed, 6 Jun 2018 19:34:57 -0700
Subject: [TUHS] In Memorium: Alan Turing
In-Reply-To: <alpine.BSF.2.21.999.1806071123240.68981@aneurin.horsfall.org>
References: <alpine.BSF.2.21.999.1806071123240.68981@aneurin.horsfall.org>
Message-ID: <20180607023457.GN25584@mcvoy.com>

If there are people here who knew him I'd love to learn more.

On Thu, Jun 07, 2018 at 11:32:24AM +1000, Dave Horsfall wrote:
> We lost the Father of Computing, Alan Turing, on this day when he suicided
> in 1954 (long story).  Just imagine where computing would've been now...
> 
> Yes, there are various theories surrounding his death, such as a jealous
> lover, the FBI knowing that he knew about Verona and could be compromised as
> a result of his sexuality, etc.  Unless they speak up (and they ain't), we
> will never know.
> 
> Unix reference?  Oh, that...  No computable devices (read his paper), no
> computers...  And after dealing with a shitload of OSs in my career, I
> daresay that there is no more computable OS than Unix.  Sorry, penguins, but
> you seem to require a fancy graphical interface these days, just like
> Windoze.
> 
> -- 
> Dave Horsfall DTM (VK2KFU)  "Those who don't understand security will suffer."

-- 
---
Larry McVoy            	     lm at mcvoy.com             http://www.mcvoy.com/lm 


From ggm at algebras.org  Thu Jun  7 12:59:43 2018
From: ggm at algebras.org (George Michaelson)
Date: Thu, 7 Jun 2018 12:59:43 +1000
Subject: [TUHS] In Memorium: Alan Turing
In-Reply-To: <20180607023457.GN25584@mcvoy.com>
References: <alpine.BSF.2.21.999.1806071123240.68981@aneurin.horsfall.org>
 <20180607023457.GN25584@mcvoy.com>
Message-ID: <CAKr6gn1yCF3+=r5we-dtpgeRWJsAhju_USZbuVC6zThZXZb_Gg@mail.gmail.com>

Prefaced by "not me" obviously, I'm too young. But. Anyway: FWIW...

I've never bought into the conspiracy theory. Homosexuality was 
illegal in and of itself, and if you were caught it was really hard to 
avoid court. No amount of connections was going to keep Turing out of 
the legal system. The 'medical cure' path was a well-trodden (and 
obviously hugely wrong) one, which he acceded to because, if he 
didn't, his security status would have been withdrawn (frankly, it was 
going anyway, given how far outside the mainstream he was), but the 
hormone therapy completely screwed him up mentally. It carried a huge 
emotional burden. I'm told the physical symptoms, things like 
gynecomastia (man boobs), are pretty awful if you aren't primed for them.

My dad knew him; they met at conferences. Sid died in the 1990s, and my 
memory of talking to him about Turing is that he liked him and 
respected him, but was a bit distant. Sid was in the communist party, 
so for obvious reasons Turing, or anyone else with a security 
clearance, wasn't going to get very chummy. Sid certainly respected 
him, but he had a very old-school outlook on homosexuality and said 
"poor chap: he was mentally ill" and left it at that. My mum rolled 
her eyes. She'd been to art college, and had a more liberal view of 
things. Sid's group at Imperial (ICCE, Tocher) had built a machine out 
of relays, cheap from the post office. I think we can probably guess 
why a large number of post office relays were available cheaply in the 
early 1950s. Turing's group was further along the road and had more 
money, so the stuff at Manchester and the NPL was going on in 
parallel. A whole bunch of these early 1950s machines are written up 
in [1], including Sid's stuff.

Lots of the Manchester staff are still around. Retired, but not gone. 
Tony Brooker for instance (I think he was subsequently at Essex) and 
Tommy Thomas (he went to CSIRO in Australia before he retired). I doubt 
they're on this list, but you never know. I don't think people from 
Bell who crossed over with Turing in his pre-computing-era role are 
around; his post-computing-era role is somewhat complicated by a 
rather thick brick wall of not-invented-here between the US and the 
UK. A whole bunch of ideas about computing basically didn't pass 
across the Atlantic ocean in either direction, and each economy's 
history of computing is very "one eyed" at times (that's my subjective 
opinion, but I'm biased).

I think most of the Bletchley cohort who were men of army recruitment 
age are gone now, and the TRE engineers are mostly gone AFAIK. There 
are quite a few women (they tend to live longer), but given the 
sensibilities of the time, I would be surprised if they want to talk 
much. I knew an Anglican priest who was heading to HK for the ministry 
before WWII, who was moved sideways into Japanese and undoubtedly 
worked on the US/UK/AU shared decrypts from the Japanese crypto 
campaign. He simply refused to talk about it, at any level.

Sid was a professor at Edinburgh, alongside Donald Michie, who had 
worked with Turing. They (Sid and Donald) fought like cats and dogs 
over the AI story; there was no love lost there, nor much mutual 
respect, so nothing came from that side, and Donald is dead too. I 
never really spoke to him about anything. By the time I had any sense 
of interesting questions to ask, it wasn't going to happen.

Andrew Hodges did a tour when his biography of Turing first came out.
I went to a series of seminars at Leeds Uni he gave which were
fascinating.

I don't want to over-state it, because it was back in 1983 or so, but 
I don't sense there is a hell of a lot of personal biographical detail 
beyond what has already come out. Again, I'm biased, and probably 
wrong there. But it feels like a lot of not much.

[1]  B.V. Bowden ( ed.) Faster Than Thought ( A Symposium on Digital
Computing Machines ) Sir Isaac Pitman & Sons Ltd. 1953
https://archive.org/details/FasterThanThought

On Thu, Jun 7, 2018 at 12:34 PM, Larry McVoy <lm at mcvoy.com> wrote:
> If there are people here who knew him I'd love to learn more.
>
> On Thu, Jun 07, 2018 at 11:32:24AM +1000, Dave Horsfall wrote:
>> We lost the Father of Computing, Alan Turing, on this day when he suicided
>> in 1954 (long story).  Just imagine where computing would've been now...
>>
>> Yes, there are various theories surrounding his death, such as a jealous
>> lover, the FBI knowing that he knew about Verona and could be compromised as
>> a result of his sexuality, etc.  Unless they speak up (and they ain't), we
>> will never know.
>>
>> Unix reference?  Oh, that...  No computable devices (read his paper), no
>> computers...  And after dealing with a shitload of OSs in my career, I
>> daresay that there is no more computable OS than Unix.  Sorry, penguins, but
>> you seem to require a fancy graphical interface these days, just like
>> Windoze.
>>
>> --
>> Dave Horsfall DTM (VK2KFU)  "Those who don't understand security will suffer."
>
> --
> ---
> Larry McVoy                  lm at mcvoy.com             http://www.mcvoy.com/lm


From dave at horsfall.org  Thu Jun  7 18:03:40 2018
From: dave at horsfall.org (Dave Horsfall)
Date: Thu, 7 Jun 2018 18:03:40 +1000 (EST)
Subject: [TUHS] In Memorium: Alan Turing
In-Reply-To: <CAKr6gn1yCF3+=r5we-dtpgeRWJsAhju_USZbuVC6zThZXZb_Gg@mail.gmail.com>
References: <alpine.BSF.2.21.999.1806071123240.68981@aneurin.horsfall.org>
 <20180607023457.GN25584@mcvoy.com>
 <CAKr6gn1yCF3+=r5we-dtpgeRWJsAhju_USZbuVC6zThZXZb_Gg@mail.gmail.com>
Message-ID: <alpine.BSF.2.21.999.1806071802000.68981@aneurin.horsfall.org>

On Thu, 7 Jun 2018, George Michaelson wrote:

> Prefaced by "not me" obviously, I'm too young. But. Anyway: FWIW...

Thank you, George, for that fascinating personal history!

-- Dave


From toby at telegraphics.com.au  Thu Jun  7 23:05:30 2018
From: toby at telegraphics.com.au (Toby Thain)
Date: Thu, 7 Jun 2018 09:05:30 -0400
Subject: [TUHS] 1950s computers - Re:  In Memorium: Alan Turing
In-Reply-To: <CAKr6gn1yCF3+=r5we-dtpgeRWJsAhju_USZbuVC6zThZXZb_Gg@mail.gmail.com>
References: <alpine.BSF.2.21.999.1806071123240.68981@aneurin.horsfall.org>
 <20180607023457.GN25584@mcvoy.com>
 <CAKr6gn1yCF3+=r5we-dtpgeRWJsAhju_USZbuVC6zThZXZb_Gg@mail.gmail.com>
Message-ID: <26e35dfa-ea87-6c95-bd7f-066d89e65d77@telegraphics.com.au>

On 2018-06-06 10:59 PM, George Michaelson wrote:
> ...
> My dad ... Sids group at Imperial (ICCE, Tocher) had built a machine out
> of relays, cheap from the post office. I think we can probably guess
> why a large number of post office relays were available cheaply in the
> early 1950s. Turings group was further along the road and had more
> money, so the stuff at Manchester and the NPL was going on in
> parallel. A whole bunch of these early 1950s machines are written up
> in [1] including Sids stuff.
> 


For more technical details on the surprisingly modern Manchester Atlas
(and other machines), Per Brinch Hansen's "Classic Operating Systems" is
a good anthology. ( https://dl.acm.org/citation.cfm?id=360596 )

--Toby

> [1]  B.V. Bowden ( ed.) Faster Than Thought ( A Symposium on Digital
> Computing Machines ) Sir Isaac Pitman & Sons Ltd. 1953
> https://archive.org/details/FasterThanThought
> 



From ron at ronnatalie.com  Fri Jun  8 03:35:52 2018
From: ron at ronnatalie.com (Ron Natalie)
Date: Thu, 7 Jun 2018 13:35:52 -0400
Subject: [TUHS] NROFF & Model 37s
Message-ID: <020201d3fe85$fbb2f070$f318d150$@ronnatalie.com>

Hopkins had a KSR37 that was our standard word processing output for a long time before the daisy wheel printers started showing up.
It even had the "greek box" so when eqn or whatever wanted that, it just sent shift-in/shift-out (control-n, -o).     Years later I managed to pick up a surplus ASR37 from Rocky Flats.    I had it in my kitchen for years on a modem.    It was great fun to have one of the few terminals for which nroff would send all those ESC-8/9 things for the vertical positioning without needing an output filter.    No greek box, though.   It also had a giant NEWLINE key and didn't need to have the nl mode turned on.   Amusingly the thing would sit there quiet until the modem was powered up, and then DSR ready would bring it to life.    When CD came up, a giant green PROCEED light illuminated above the keyboard.   The paper tape unit was a monster side car.   I never got around to programming the "here-is" drum.

I think RS took it and left it behind someone's car at one point as a practical joke.



From rminnich at gmail.com  Fri Jun  8 08:47:48 2018
From: rminnich at gmail.com (ron minnich)
Date: Thu, 7 Jun 2018 16:47:48 -0600
Subject: [TUHS] where did the term daemon come from?
Message-ID: <CAP6exYLc-G+qtK0Ty5nYiavTBF2Vt072DDBbR7+hiUKuOWv23Q@mail.gmail.com>

Someone asked me today, and I remember using that term back in 1975,
but I can't recall the origin.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://minnie.tuhs.org/pipermail/tuhs/attachments/20180607/5b1f4cd1/attachment.html>

From rminnich at gmail.com  Fri Jun  8 08:48:43 2018
From: rminnich at gmail.com (ron minnich)
Date: Thu, 7 Jun 2018 16:48:43 -0600
Subject: [TUHS] NROFF & Model 37s
In-Reply-To: <020201d3fe85$fbb2f070$f318d150$@ronnatalie.com>
References: <020201d3fe85$fbb2f070$f318d150$@ronnatalie.com>
Message-ID: <CAP6exYJsurH5NfhDbKTbi6H5GD1MTALtJNVaduC6w2KELk5L2w@mail.gmail.com>

rocky flats? Did you put a geiger counter near it :-)

On Thu, Jun 7, 2018 at 10:36 AM Ron Natalie <ron at ronnatalie.com> wrote:

> Hopkins had a KSR37 that was our standard word processing output for a
> long time before the daisy wheel printers started showing up.
> It even had the "greek box" so when eqn or whatever wanted that, it just
> sent shift-in/shift-out (control-n, -o).     Years later I managed to pick
> up a surplus ASR37 from Rocky Flats.    I had it in my kitchen for years on
> a modem.    It was great fun to have one of the few terminals that nroff
> would send all those ESC-8/9 things for the vertical positioning without
> needing an output filter.    No greek box, though.   It also had a giant
> NEWLINE key and didn't need to have the nl mode turned on.   Amusingly the
> thing would sit there quiet until the modem was powered up and then DSR
> ready would bring it to life.    When CD came up a giant green PROCEED
> light illuminated above the keyboard.   The paper tape unit was a monster
> side car.   I never got around to programming the "here-is" drum.
>
> I think RS took it and left it behind someone's car at one point as a practical
> joke.
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://minnie.tuhs.org/pipermail/tuhs/attachments/20180607/4105cc3c/attachment.html>

From reed at reedmedia.net  Fri Jun  8 09:05:06 2018
From: reed at reedmedia.net (Jeremy C. Reed)
Date: Thu, 7 Jun 2018 18:05:06 -0500 (CDT)
Subject: [TUHS] where did the term daemon come from?
In-Reply-To: <CAP6exYLc-G+qtK0Ty5nYiavTBF2Vt072DDBbR7+hiUKuOWv23Q@mail.gmail.com>
References: <CAP6exYLc-G+qtK0Ty5nYiavTBF2Vt072DDBbR7+hiUKuOWv23Q@mail.gmail.com>
Message-ID: <alpine.NEB.2.20.1806071801320.18911@t1.m.reedmedia.net>

See this long thread
https://minnie.tuhs.org//pipermail/tuhs/2018-March/thread.html#13142

Sorry the thread is long, but some of it tells us where it comes from.

(It predates Unix. http://multicians.org/mgd.html )


From cym224 at gmail.com  Fri Jun  8 11:54:38 2018
From: cym224 at gmail.com (Nemo)
Date: Thu, 7 Jun 2018 21:54:38 -0400
Subject: [TUHS] where did the term daemon come from?
In-Reply-To: <alpine.NEB.2.20.1806071801320.18911@t1.m.reedmedia.net>
References: <CAP6exYLc-G+qtK0Ty5nYiavTBF2Vt072DDBbR7+hiUKuOWv23Q@mail.gmail.com>
 <alpine.NEB.2.20.1806071801320.18911@t1.m.reedmedia.net>
Message-ID: <CAJfiPzyD=uK0DqwDfA7b41AP_pk9T-Sc_aQJHYqkucmUdtgBfQ@mail.gmail.com>

On 07/06/2018, Jeremy C. Reed <reed at reedmedia.net> wrote:
> See this long thread
> https://minnie.tuhs.org//pipermail/tuhs/2018-March/thread.html#13142
>
> Sorry the thread is long, but some of it tells us where it comes from.
>
> (It predates Unix. http://multicians.org/mgd.html )

And speaking of daemons, does anyone know the origin of that
photograph on the Web showing someone in priestly garb shaking an
aspergillum at a row of servers, with the caption "Daemon Stop!"?

N.


From ron at ronnatalie.com  Fri Jun  8 14:31:37 2018
From: ron at ronnatalie.com (Ron Natalie)
Date: Fri, 8 Jun 2018 00:31:37 -0400
Subject: [TUHS] NROFF & Model 37s
In-Reply-To: <CAP6exYJsurH5NfhDbKTbi6H5GD1MTALtJNVaduC6w2KELk5L2w@mail.gmail.com>
References: <020201d3fe85$fbb2f070$f318d150$@ronnatalie.com>
 <CAP6exYJsurH5NfhDbKTbi6H5GD1MTALtJNVaduC6w2KELk5L2w@mail.gmail.com>
Message-ID: <029f01d3fee1$977ee990$c67cbcb0$@ronnatalie.com>

No, but at least it didn’t glow when I turned the lights off.    I was living in Denver at the time.   I can’t believe I paid to have that thing shipped back to Maryland when I took the job at BRL.

 

From: ron minnich [mailto:rminnich at gmail.com] 
Sent: Thursday, June 7, 2018 6:49 PM
To: Ron Natalie
Cc: tuhs at tuhs.org
Subject: Re: [TUHS] NROFF & Model 37s

 

rocky flats? Did you put a geiger counter near it :-)

 

On Thu, Jun 7, 2018 at 10:36 AM Ron Natalie <ron at ronnatalie.com> wrote:

Hopkins had a KSR37 that was our standard word processing output for a long time before the daisy wheel printers started showing up.
It even had the "greek box", so when eqn or whatever wanted that, it just sent shift-in/shift-out (control-n, -o).     Years later I managed to pick up a surplus ASR37 from Rocky Flats.    I had it in my kitchen for years on a modem.    It was great fun to have one of the few terminals to which nroff would send all those ESC-8/9 things for vertical positioning without needing an output filter.    No greek box, though.   It also had a giant NEWLINE key and didn't need to have the nl mode turned on.   Amusingly, the thing would sit there quiet until the modem was powered up, and then DSR ready would bring it to life.    When CD came up, a giant green PROCEED light illuminated above the keyboard.   The paper tape unit was a monster side car.   I never got around to programming the "here-is" drum.

I think RS took it and left it behind someone's car at one point as a practical joke.

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://minnie.tuhs.org/pipermail/tuhs/attachments/20180608/faee8824/attachment.html>

From doug at cs.dartmouth.edu  Sat Jun  9 04:04:16 2018
From: doug at cs.dartmouth.edu (Doug McIlroy)
Date: Fri, 08 Jun 2018 14:04:16 -0400
Subject: [TUHS] NROFF & Model 37s
Message-ID: <201806081804.w58I4Glc091367@tahoe.cs.Dartmouth.EDU>

> I think RS took it and left it behind someone's car at one as a practical joke.

That behemoth was built to last. It would surely stop a car.
Moving it single-handed would be quite a feat. When Andy Hall 
retired his, he was able to push it as far as his back porch,
where it remained for more than a year.

Under the keyboard was a remarkable mechanical crossbar
encoder that dripped machine oil. Fortunately the operator's
knees were protected by a drip pan. The oil smell led me
to keep mine in my workshop; its fragrance reminded
me of the first major piece of computing equipment I saw
as a child: Vannevar Bush's mechanical differential
analyzer, a room-size table of shafts, gears and integrators
that was redolent of the machine shop.

Doug


From dave at horsfall.org  Sat Jun  9 07:51:44 2018
From: dave at horsfall.org (Dave Horsfall)
Date: Sat, 9 Jun 2018 07:51:44 +1000 (EST)
Subject: [TUHS] NROFF & Model 37s
In-Reply-To: <201806081804.w58I4Glc091367@tahoe.cs.Dartmouth.EDU>
References: <201806081804.w58I4Glc091367@tahoe.cs.Dartmouth.EDU>
Message-ID: <alpine.BSF.2.21.999.1806090739330.68981@aneurin.horsfall.org>

On Fri, 8 Jun 2018, Doug McIlroy wrote:

> Under the keyboard was a remarkable mechanical crossbar encoder that 
> dripped machine oil. [...]

It may interest some here that that crossbar arrangement, along with the 
distributor and its selector magnets, actually form a hardware shift 
register...

Here's a Gootoob video (11m): it's a mechanical marvel!

http://www.youtube.com/watch?v=HcMHam54EOI
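
The shift-register behaviour described above can be sketched in software: the keyboard's crossbar sets a character's bits in parallel, and the distributor then clocks them out serially onto the line, framed by start and stop bits. A toy model follows; the 8-bit framing, LSB-first order, and lack of parity are simplifying assumptions, not a claim about the Model 37's exact mechanics:

```python
def serialize_char(code, bits=8):
    """Model a Teletype's parallel-in/serial-out path: the keyboard sets
    the character's bits in parallel (the crossbar encoder), then the
    distributor shifts them out one at a time, framed by a start bit (0,
    'spacing') and a stop bit (1, 'marking'), least significant bit first."""
    frame = [0]                                      # start bit
    frame += [(code >> i) & 1 for i in range(bits)]  # data bits, LSB first
    frame += [1]                                     # stop bit
    return frame
```

Each call yields the 10-element bit stream a receiver would see on the wire for one character.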

-- Dave VK2KFU, who was into RTTY once


From doug at cs.dartmouth.edu  Sun Jun 10 06:48:01 2018
From: doug at cs.dartmouth.edu (Doug McIlroy)
Date: Sat, 09 Jun 2018 16:48:01 -0400
Subject: [TUHS] NROFF & Model 37s
Message-ID: <201806092048.w59Km1xP064399@tahoe.cs.Dartmouth.EDU>

> Here's a Gootoob video (11m): it's a mechanical marvel!

A wonderful model, but quite unlike a TTY 37, where sender
and receiver are the same machine.

Doug


From dave at horsfall.org  Wed Jun 13 09:29:34 2018
From: dave at horsfall.org (Dave Horsfall)
Date: Wed, 13 Jun 2018 09:29:34 +1000 (EST)
Subject: [TUHS] Happy birthday, Leonard Kleinrock!
Message-ID: <alpine.BSF.2.21.999.1806130925360.68981@aneurin.horsfall.org>

An ARPAnet pioneer, apparently he was born on this day in 1934, and was 
famous for sending the first message over the ARPAnet in October 1969 (it 
got as far as "LO" before crashing).

-- Dave


From arnold at skeeve.com  Fri Jun 15 19:12:46 2018
From: arnold at skeeve.com (arnold at skeeve.com)
Date: Fri, 15 Jun 2018 03:12:46 -0600
Subject: [TUHS] maybe off-topic: Unix on a microcontroller
Message-ID: <201806150912.w5F9CkZp004310@freefriends.org>

https://hackaday.com/2018/06/03/its-unix-on-a-microcontroller/

A neat hack, if nothing else. :-)

Arnold


From lars at nocrew.org  Fri Jun 15 19:22:52 2018
From: lars at nocrew.org (Lars Brinkhoff)
Date: Fri, 15 Jun 2018 09:22:52 +0000
Subject: [TUHS] maybe off-topic: Unix on a microcontroller
In-Reply-To: <201806150912.w5F9CkZp004310@freefriends.org>
 (arnold@skeeve.com's message of "Fri, 15 Jun 2018 03:12:46 -0600")
References: <201806150912.w5F9CkZp004310@freefriends.org>
Message-ID: <7w8t7gcsk3.fsf@junk.nocrew.org>

Arnold wrote:
> https://hackaday.com/2018/06/03/its-unix-on-a-microcontroller/

It says PDP-11 right in the article, so it must be on topic.

RetroBSD is a different take.  Instead of running a PDP-11 emulator on
the microcontroller, they ported 2.11BSD to run on the metal:

http://retrobsd.org/


From a.phillip.garcia at gmail.com  Fri Jun 15 21:19:28 2018
From: a.phillip.garcia at gmail.com (A. P. Garcia)
Date: Fri, 15 Jun 2018 07:19:28 -0400
Subject: [TUHS] core
Message-ID: <CAFCBnZshnexAs8WH8HQmCXU88gLxiW4=FJ2EmnC9ge283ivAXg@mail.gmail.com>

jay forrester first described an invention called core memory in a lab
notebook 69 years ago today.

kill -3 mailx
core dumped
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://minnie.tuhs.org/pipermail/tuhs/attachments/20180615/3ec03f65/attachment.html>

From steffen at sdaoden.eu  Fri Jun 15 23:50:27 2018
From: steffen at sdaoden.eu (Steffen Nurpmeso)
Date: Fri, 15 Jun 2018 15:50:27 +0200
Subject: [TUHS] core
In-Reply-To: <CAFCBnZshnexAs8WH8HQmCXU88gLxiW4=FJ2EmnC9ge283ivAXg@mail.gmail.com>
References: <CAFCBnZshnexAs8WH8HQmCXU88gLxiW4=FJ2EmnC9ge283ivAXg@mail.gmail.com>
Message-ID: <20180615135027.3uOZn%steffen@sdaoden.eu>

A. P. Garcia wrote in <CAFCBnZshnexAs8WH8HQmCXU88gLxiW4=FJ2EmnC9ge283ivA\
Xg at mail.gmail.com>:
 |jay forrester first described an invention called core memory in a \
 |lab notebook 69 years ago today.
 |
 |kill -3 mailx
 |
 |core dumped

No no, that will not die.  It says "Quit", and then exits.
But maybe it will be able to create text/html via MIME alternative
directly a few years from now.  In general I am talking
about my own maintained branch here only, of course.
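
For the curious, the signal mechanics behind the joke: kill -3 sends SIGQUIT, whose default disposition is to terminate the process *with* a core dump; a program that installs its own handler, as mailx evidently does, can print its message and exit cleanly instead. A small Python illustration (the handler name and message are mine, not mailx's code):

```python
import signal

# SIGQUIT is signal number 3 on Linux and the BSDs; its default action
# terminates the process and dumps core.
assert signal.SIGQUIT == 3

def on_quit(signum, frame):
    # A handler in the spirit of mailx: acknowledge the signal and exit
    # tidily rather than letting the default core-dumping action run.
    raise SystemExit('"Quit"')

# Installing the handler overrides the default disposition, so ^\ or
# `kill -3` no longer produces a core file.
signal.signal(signal.SIGQUIT, on_quit)
```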

 --End of <CAFCBnZshnexAs8WH8HQmCXU88gLxiW4=FJ2EmnC9ge283ivAXg at mail.gmail\
 .com>

A nice weekend everybody, may it overtrump the former.

--steffen
|
|Der Kragenbaer,                The moon bear,
|der holt sich munter           he cheerfully and one by one
|einen nach dem anderen runter  wa.ks himself off
|(By Robert Gernhardt)


From clemc at ccc.com  Sat Jun 16 00:21:44 2018
From: clemc at ccc.com (Clem Cole)
Date: Fri, 15 Jun 2018 10:21:44 -0400
Subject: [TUHS] core
In-Reply-To: <CAFCBnZshnexAs8WH8HQmCXU88gLxiW4=FJ2EmnC9ge283ivAXg@mail.gmail.com>
References: <CAFCBnZshnexAs8WH8HQmCXU88gLxiW4=FJ2EmnC9ge283ivAXg@mail.gmail.com>
Message-ID: <CAC20D2NqrMmwF9YVNBzXZ9bw4Ojsdkco7aNoKdU_6pfFW7WMOQ@mail.gmail.com>

On Fri, Jun 15, 2018 at 7:19 AM, A. P. Garcia <a.phillip.garcia at gmail.com>
wrote:

> jay forrester first described an invention called core memory in a lab
> notebook 69 years ago today.
>
Be careful -- Forrester named it, put it into an array, and built a
random access memory with it, but An Wang invented and patented the basic
technology we now call 'core' in 1955: patent 2,708,722
<https://patents.google.com/patent/US2708722A/en>  (calling it `dynamic
memory').   As I understand it (I'm old, but not that old, so I cannot
speak from experience; I was a youngling when this patent came about),
Wang thought Forrester's use of his idea was great, but Wang's patent was
the broader one.

There is an interesting history of Wang and his various fights at: An Wang
- The Man Who Might Have Invented The Personal Computer
<http://www.i-programmer.info/history/people/550-an-wang-wang-laboratories.html>

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://minnie.tuhs.org/pipermail/tuhs/attachments/20180615/c0b856a0/attachment.html>

From imp at bsdimp.com  Sat Jun 16 00:25:03 2018
From: imp at bsdimp.com (Warner Losh)
Date: Fri, 15 Jun 2018 08:25:03 -0600
Subject: [TUHS] maybe off-topic: Unix on a microcontroller
In-Reply-To: <7w8t7gcsk3.fsf@junk.nocrew.org>
References: <201806150912.w5F9CkZp004310@freefriends.org>
 <7w8t7gcsk3.fsf@junk.nocrew.org>
Message-ID: <CANCZdfop7o89gJMC79myU_KRC7kuzXfnmh82k3ben=mVVJ41dg@mail.gmail.com>

On Fri, Jun 15, 2018 at 3:22 AM, Lars Brinkhoff <lars at nocrew.org> wrote:

> Arnold wrote:
> > https://hackaday.com/2018/06/03/its-unix-on-a-microcontroller/
>
> It says PDP-11 right in the article, so it must be on topic.
>
> RetroBSD is a different take.  Instead of running a PDP-11 emulator on
> the microcontroller, they ported 2.11BSD to run on the metal:
>
> http://retrobsd.org/


It looks like RetroBSD hasn't been active in the last couple of years,
though. A cool accomplishment, but with some caveats. All the networking
is in userland, not the kernel, for example.

Warner
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://minnie.tuhs.org/pipermail/tuhs/attachments/20180615/9c68991a/attachment.html>

From jpl.jpl at gmail.com  Sat Jun 16 01:11:19 2018
From: jpl.jpl at gmail.com (John P. Linderman)
Date: Fri, 15 Jun 2018 11:11:19 -0400
Subject: [TUHS] core
In-Reply-To: <CAC20D2NqrMmwF9YVNBzXZ9bw4Ojsdkco7aNoKdU_6pfFW7WMOQ@mail.gmail.com>
References: <CAFCBnZshnexAs8WH8HQmCXU88gLxiW4=FJ2EmnC9ge283ivAXg@mail.gmail.com>
 <CAC20D2NqrMmwF9YVNBzXZ9bw4Ojsdkco7aNoKdU_6pfFW7WMOQ@mail.gmail.com>
Message-ID: <CAC0cEp-PUxcAipNhyX8qDO2MsXD0akkXMoM7XRyPR8C-TWDHcQ@mail.gmail.com>

Excerpt from the article Clem pointed to:

The company was called Wang Laboratories and it specialised in magnetic
memories. He used his contacts to sell magnetic cores which he built and
sold for $4 each.


4 bucks a bit!

On Fri, Jun 15, 2018 at 10:21 AM, Clem Cole <clemc at ccc.com> wrote:

>
>
> On Fri, Jun 15, 2018 at 7:19 AM, A. P. Garcia <a.phillip.garcia at gmail.com>
> wrote:
>
>> jay forrester first described an invention called core memory in a lab
>> notebook 69 years ago today.
>>
> ​Be careful -- Forrester named it and put it into an array and build a
> random access memory with it, but An Wang invented and patented basic
> technology we now call 'core' in 1955  2,708,722
> <https://patents.google.com/patent/US2708722A/en>  (calling it `dynamic
> memory')​.   As I understand it (I'm old, but not that old so I can not
> speak from experience, as I was a youngling when this patent came about),
> Wang thought Forrester's use of his idea was great, but Wang's patent was
> the broader one.
>
> There is an interesting history of Wang and his various fights at: An
> Wang - The Man Who Might Have Invented The Personal Computer
> <http://www.i-programmer.info/history/people/550-an-wang-wang-laboratories.html>
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://minnie.tuhs.org/pipermail/tuhs/attachments/20180615/95290b4a/attachment.html>

From lm at mcvoy.com  Sat Jun 16 01:21:27 2018
From: lm at mcvoy.com (Larry McVoy)
Date: Fri, 15 Jun 2018 08:21:27 -0700
Subject: [TUHS] core
In-Reply-To: <CAC0cEp-PUxcAipNhyX8qDO2MsXD0akkXMoM7XRyPR8C-TWDHcQ@mail.gmail.com>
References: <CAFCBnZshnexAs8WH8HQmCXU88gLxiW4=FJ2EmnC9ge283ivAXg@mail.gmail.com>
 <CAC20D2NqrMmwF9YVNBzXZ9bw4Ojsdkco7aNoKdU_6pfFW7WMOQ@mail.gmail.com>
 <CAC0cEp-PUxcAipNhyX8qDO2MsXD0akkXMoM7XRyPR8C-TWDHcQ@mail.gmail.com>
Message-ID: <20180615152127.GC24509@mcvoy.com>

And today it is $.000000009 / bit.

On Fri, Jun 15, 2018 at 11:11:19AM -0400, John P. Linderman wrote:
> Excerpt from the article Clem pointed to:
> 
> The company was called Wang Laboratories and it specialised in magnetic
> memories. He used his contacts to sell magnetic cores which he built and
> sold for $4 each.
> 
> 
> 4 bucks a bit!
> 
> On Fri, Jun 15, 2018 at 10:21 AM, Clem Cole <clemc at ccc.com> wrote:
> 
> >
> >
> > On Fri, Jun 15, 2018 at 7:19 AM, A. P. Garcia <a.phillip.garcia at gmail.com>
> > wrote:
> >
> >> jay forrester first described an invention called core memory in a lab
> >> notebook 69 years ago today.
> >>
> > Be careful -- Forrester named it and put it into an array and build a
> > random access memory with it, but An Wang invented and patented basic
> > technology we now call 'core' in 1955  2,708,722
> > <https://patents.google.com/patent/US2708722A/en>  (calling it `dynamic
> > memory').   As I understand it (I'm old, but not that old so I can not
> > speak from experience, as I was a youngling when this patent came about),
> > Wang thought Forrester's use of his idea was great, but Wang's patent was
> > the broader one.
> >
> > There is an interesting history of Wang and his various fights at: An
> > Wang - The Man Who Might Have Invented The Personal Computer
> > <http://www.i-programmer.info/history/people/550-an-wang-wang-laboratories.html>
> >
> >

-- 
---
Larry McVoy            	     lm at mcvoy.com             http://www.mcvoy.com/lm 


From jnc at mercury.lcs.mit.edu  Sat Jun 16 01:25:42 2018
From: jnc at mercury.lcs.mit.edu (Noel Chiappa)
Date: Fri, 15 Jun 2018 11:25:42 -0400 (EDT)
Subject: [TUHS] core
Message-ID: <20180615152542.E1EC918C08C@mercury.lcs.mit.edu>

    > From: "John P. Linderman"

    > 4 bucks a bit!

When IBM went to license the core patent(s?) from MIT, they offered MIT a
choice of a large lump sum (the number US$20M sticks in my mind), or US$.01 a
bit.

The negotiators from MIT took the lump sum.
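
A back-of-envelope check puts the choice in perspective (treating the recalled US$20M figure and the US$.01/bit royalty as the assumptions they are):

```python
# At $0.01 per bit, how much core memory would the industry have had to
# ship before the per-bit royalty overtook the lump sum?
lump_sum_dollars = 20_000_000
royalty_per_bit = 0.01                              # US$0.01 a bit

break_even_bits = lump_sum_dollars / royalty_per_bit   # 2 billion bits
break_even_bytes = break_even_bits / 8                 # 250 million bytes
break_even_mib = break_even_bytes / 2**20              # roughly 238 MiB

# Core production eventually dwarfed that figure, which is what makes
# the negotiators' choice of the lump sum so memorable.
```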

When I first heard this story, I thought it was corrupt folklore, but I later
found it in one of the IBM histories.

This story repeats itself over and over again, though: one of the Watsons
saying there was probably a market for <single-digit> computers; Ken Olsen
saying people wouldn't want computers in their homes; etc., etc.

      Noel


From imp at bsdimp.com  Sat Jun 16 05:12:12 2018
From: imp at bsdimp.com (Warner Losh)
Date: Fri, 15 Jun 2018 13:12:12 -0600
Subject: [TUHS] maybe off-topic: Unix on a microcontroller
In-Reply-To: <m2d0wr50jx.fsf@thuvia.hamartun.priv.no>
References: <201806150912.w5F9CkZp004310@freefriends.org>
 <7w8t7gcsk3.fsf@junk.nocrew.org>
 <CANCZdfop7o89gJMC79myU_KRC7kuzXfnmh82k3ben=mVVJ41dg@mail.gmail.com>
 <m2d0wr50jx.fsf@thuvia.hamartun.priv.no>
Message-ID: <CANCZdfpMz2-PRc80bryTZ5VJ+Oj-TPKkqENdaNXiVn8kce9bpA@mail.gmail.com>

On Fri, Jun 15, 2018 at 1:09 PM, Tom Ivar Helbekkmo <tih at hamartun.priv.no>
wrote:

> Warner Losh <imp at bsdimp.com> writes:
>
> > It looks like retrobsd hasn't been active in the last couple of years
> > though. A cool accomplishment, but with some caveats. All the network
> > is in userland, not the kernel, for example.
>
> Isn't 2.11BSD networking technically in userland?  I forget.  Johnny?
>

It looks to be integrated into the kernel.

Warner
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://minnie.tuhs.org/pipermail/tuhs/attachments/20180615/b8f845e5/attachment.html>

From tih at hamartun.priv.no  Sat Jun 16 05:09:38 2018
From: tih at hamartun.priv.no (Tom Ivar Helbekkmo)
Date: Fri, 15 Jun 2018 21:09:38 +0200
Subject: [TUHS] maybe off-topic: Unix on a microcontroller
In-Reply-To: <CANCZdfop7o89gJMC79myU_KRC7kuzXfnmh82k3ben=mVVJ41dg@mail.gmail.com>
 (Warner Losh's message of "Fri, 15 Jun 2018 08:25:03 -0600")
References: <201806150912.w5F9CkZp004310@freefriends.org>
 <7w8t7gcsk3.fsf@junk.nocrew.org>
 <CANCZdfop7o89gJMC79myU_KRC7kuzXfnmh82k3ben=mVVJ41dg@mail.gmail.com>
Message-ID: <m2d0wr50jx.fsf@thuvia.hamartun.priv.no>

Warner Losh <imp at bsdimp.com> writes:

> It looks like retrobsd hasn't been active in the last couple of years
> though. A cool accomplishment, but with some caveats. All the network
> is in userland, not the kernel, for example.

Isn't 2.11BSD networking technically in userland?  I forget.  Johnny?

-tih
-- 
Most people who graduate with CS degrees don't understand the significance
of Lisp.  Lisp is the most important idea in computer science.  --Alan Kay
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 487 bytes
Desc: not available
URL: <http://minnie.tuhs.org/pipermail/tuhs/attachments/20180615/12778883/attachment.sig>

From dave at horsfall.org  Sat Jun 16 09:05:09 2018
From: dave at horsfall.org (Dave Horsfall)
Date: Sat, 16 Jun 2018 09:05:09 +1000 (EST)
Subject: [TUHS] core
In-Reply-To: <20180615152542.E1EC918C08C@mercury.lcs.mit.edu>
References: <20180615152542.E1EC918C08C@mercury.lcs.mit.edu>
Message-ID: <alpine.BSF.2.21.999.1806160855070.68981@aneurin.horsfall.org>

On Fri, 15 Jun 2018, Noel Chiappa wrote:

> This story repeats itself over and over again, though: one of the 
> Watsons saying there was probably a market for <single-digit> 
> computers; Ken Olsen saying people wouldn't want computers in their 
> homes; etc., etc.

I seem to recall reading somewhere that these were urban myths...  Does 
anyone have actual references in their contexts?

E.g. Watson was talking about the multi-megabuck 704/709/7094 etc, and I 
think that Olsen's quote was about the DEC-System 10...

-- Dave, who has been known to be wrong before


From lyndon at orthanc.ca  Sat Jun 16 09:22:22 2018
From: lyndon at orthanc.ca (Lyndon Nerenberg)
Date: Fri, 15 Jun 2018 16:22:22 -0700 (PDT)
Subject: [TUHS] core
In-Reply-To: <alpine.BSF.2.21.999.1806160855070.68981@aneurin.horsfall.org>
References: <20180615152542.E1EC918C08C@mercury.lcs.mit.edu>
 <alpine.BSF.2.21.999.1806160855070.68981@aneurin.horsfall.org>
Message-ID: <alpine.BSF.2.21.999.1806151618550.51464@orthanc.ca>

>> This story repeats itself over and over again, though: one of the Watsons 
>> saying there was probably a market for <single-digit> computers; Ken 
>> Olsen saying people wouldn't want computers in their homes; etc., etc.
>
> I seem to recall reading somewhere that these were urban myths...  Does 
> anyone have actual references in their contexts?
>
> E.g. Watson was talking about the multi-megabuck 704/709/7094 etc, and I 
> think that Olsen's quote was about the DEC-System 10...

At the time, those were the only computers out there.  The "personal 
computer" was the myth of the day.  Go back and look at the prices IBM and 
DEC were charging for their gear.  Even in the modern context, that 
hardware was more expensive than the ridiculously inflated prices of 
housing in Vancouver.

--lyndon


From grog at lemis.com  Sat Jun 16 11:08:04 2018
From: grog at lemis.com (Greg 'groggy' Lehey)
Date: Sat, 16 Jun 2018 11:08:04 +1000
Subject: [TUHS] core
In-Reply-To: <CAC20D2NqrMmwF9YVNBzXZ9bw4Ojsdkco7aNoKdU_6pfFW7WMOQ@mail.gmail.com>
References: <CAFCBnZshnexAs8WH8HQmCXU88gLxiW4=FJ2EmnC9ge283ivAXg@mail.gmail.com>
 <CAC20D2NqrMmwF9YVNBzXZ9bw4Ojsdkco7aNoKdU_6pfFW7WMOQ@mail.gmail.com>
Message-ID: <20180616010804.GA28267@eureka.lemis.com>

On Friday, 15 June 2018 at 10:21:44 -0400, Clem Cole wrote:
> On Fri, Jun 15, 2018 at 7:19 AM, A. P. Garcia <a.phillip.garcia at gmail.com>
> wrote:
>
>> jay forrester first described an invention called core memory in a lab
>> notebook 69 years ago today.
>>
> Be careful -- Forrester named it and put it into an array and build a
> random access memory with it, but An Wang invented and patented basic
> technology we now call 'core' in 1955  2,708,722
> <https://patents.google.com/patent/US2708722A/en>  (calling it `dynamic
> memory').

The patent may date from 1955, but by that time it was already in use.
Whirlwind I used it in 1949, and the IBM 704 (1954) used it for main
memory.  There are some interesting photos at
https://en.wikipedia.org/wiki/Magnetic-core_memory.

Greg
--
Sent from my desktop computer.
Finger grog at lemis.com for PGP public key.
See complete headers for address and phone numbers.
This message is digitally signed.  If your Microsoft mail program
reports problems, please read http://lemis.com/broken-MUA
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 163 bytes
Desc: not available
URL: <http://minnie.tuhs.org/pipermail/tuhs/attachments/20180616/ef7eede2/attachment.sig>

From cym224 at gmail.com  Sat Jun 16 12:00:33 2018
From: cym224 at gmail.com (Nemo Nusquam)
Date: Fri, 15 Jun 2018 22:00:33 -0400
Subject: [TUHS] core
In-Reply-To: <20180616010804.GA28267@eureka.lemis.com>
References: <CAFCBnZshnexAs8WH8HQmCXU88gLxiW4=FJ2EmnC9ge283ivAXg@mail.gmail.com>
 <CAC20D2NqrMmwF9YVNBzXZ9bw4Ojsdkco7aNoKdU_6pfFW7WMOQ@mail.gmail.com>
 <20180616010804.GA28267@eureka.lemis.com>
Message-ID: <721e2cb8-cf2f-ff45-07ce-21bd617903b9@gmail.com>

On 06/15/18 21:08, Greg 'groggy' Lehey wrote:
> On Friday, 15 June 2018 at 10:21:44 -0400, Clem Cole wrote:
>> On Fri, Jun 15, 2018 at 7:19 AM, A. P. Garcia <a.phillip.garcia at gmail.com>
>> wrote:
>>
>>> jay forrester first described an invention called core memory in a lab
>>> notebook 69 years ago today.
>>>
>> Be careful -- Forrester named it and put it into an array and build a
>> random access memory with it, but An Wang invented and patented basic
>> technology we now call 'core' in 1955  2,708,722
>> <https://patents.google.com/patent/US2708722A/en>  (calling it `dynamic
>> memory').
>
> The patent may date from 1955,

The patent issued in 1955 but the priority date is listed as 1949-10-21.

N.


From jpl.jpl at gmail.com  Sat Jun 16 12:17:50 2018
From: jpl.jpl at gmail.com (John P. Linderman)
Date: Fri, 15 Jun 2018 22:17:50 -0400
Subject: [TUHS] core
In-Reply-To: <20180616010804.GA28267@eureka.lemis.com>
References: <CAFCBnZshnexAs8WH8HQmCXU88gLxiW4=FJ2EmnC9ge283ivAXg@mail.gmail.com>
 <CAC20D2NqrMmwF9YVNBzXZ9bw4Ojsdkco7aNoKdU_6pfFW7WMOQ@mail.gmail.com>
 <20180616010804.GA28267@eureka.lemis.com>
Message-ID: <CAC0cEp_mVvQxtQUwgBxPMKQ2a3HZFDgLgVsM43V7QseJr6BuiA@mail.gmail.com>

Forrester was my freshman adviser. A remarkably nice person. For those of
us who weren't going home for Thanksgiving, he invited us to his home. Well
above the call of duty. It's my understanding that his patent wasn't
anywhere near the biggest moneymaker. A patent for the production of
penicillin was, as I understood, the biggie.

On Fri, Jun 15, 2018 at 9:08 PM, Greg 'groggy' Lehey <grog at lemis.com> wrote:

> On Friday, 15 June 2018 at 10:21:44 -0400, Clem Cole wrote:
> > On Fri, Jun 15, 2018 at 7:19 AM, A. P. Garcia <
> a.phillip.garcia at gmail.com>
> > wrote:
> >
> >> jay forrester first described an invention called core memory in a lab
> >> notebook 69 years ago today.
> >>
> > Be careful -- Forrester named it and put it into an array and build a
> > random access memory with it, but An Wang invented and patented basic
> > technology we now call 'core' in 1955  2,708,722
> > <https://patents.google.com/patent/US2708722A/en>  (calling it `dynamic
> > memory').
>
> The patent may date from 1955, but by that time it was already in use.
> Whirlwind I used it in 1949, and the IBM 704 (1954) used it for main
> memory.  There are some interesting photos at
> https://en.wikipedia.org/wiki/Magnetic-core_memory.
>
> Greg
> --
> Sent from my desktop computer.
> Finger grog at lemis.com for PGP public key.
> See complete headers for address and phone numbers.
> This message is digitally signed.  If your Microsoft mail program
> reports problems, please read http://lemis.com/broken-MUA
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://minnie.tuhs.org/pipermail/tuhs/attachments/20180615/528dea34/attachment.html>

From clemc at ccc.com  Sat Jun 16 13:06:50 2018
From: clemc at ccc.com (Clem cole)
Date: Fri, 15 Jun 2018 23:06:50 -0400
Subject: [TUHS] core
In-Reply-To: <20180616010804.GA28267@eureka.lemis.com>
References: <CAFCBnZshnexAs8WH8HQmCXU88gLxiW4=FJ2EmnC9ge283ivAXg@mail.gmail.com>
 <CAC20D2NqrMmwF9YVNBzXZ9bw4Ojsdkco7aNoKdU_6pfFW7WMOQ@mail.gmail.com>
 <20180616010804.GA28267@eureka.lemis.com>
Message-ID: <6D29F004-730D-46B6-A5D5-6220CF41A53D@ccc.com>

Greg -- Sorry if my words came to be interpreted to imply that 1955 was the date of the invention.  I only wanted to clarify that Wang had the idea of magnetic (core) memory before Forrester.  1955 is the date of the patent grant, as you mentioned.  My primary point was to be careful about giving all the credit to Forrester.  It took both, as I understand the history.  Again, I was not part of that fight 😉

It seems the courts have said what I mentioned - it was Wang’s original idea, and I just wanted the history stated more clearly.   That said, obviously Forrester improved on it (made it practical).  And IBM, btw, needed licenses for both to build their products.  

FWIW: I’ve been on a couple of corporate patent committees.  I called him on the comment because I knew that we sometimes use the core history as an example when we try to teach young inventors how to develop a proper invention disclosure (and how I was taught about some of these issues).  I have seen what Forrester did used as an example of an improvement on, and differentiation from, a previous idea.  That said, Wang had the fundamental patent, and Forrester needed to rely on Wang’s idea to “reduce to practice” his own.  As I said, IBM needed to license both in the end to make a product.  

What we try to teach is how something is new (novel, in patent-speak) and to make sure they disclose what their ideas are built upon.   And, as importantly: are you truly novel, and if you are, can you build on that previous idea without a license? 

This is actually pretty important to get right and is not just an academic exercise.  For instance, a few years ago I was granted a fundamental computer synchronization patent for use in building supercomputers out of smaller computers (i.e. clusters).  When we wrote the disclosure we had to show what was the same, what was built upon, and what made it new/different.  Computer synchronization is an old idea, but what we did was quite new (and frankly would not have been considered in the 60s, as those designers did not have some of the issues of scale we have today).  But because we were able to show both the base and how it was novel, the application went right through in both the USA and internationally. 

So, back to my point: Forrester came up with the idea and practical scheme of using Wang’s concept of magnetic memory in an array, which, as I understand it, was what made Wang’s idea practical.  Just as my scheme did not invent synchronization; traditional 1960s-style schemes are simply impractical with today’s technology. 

Sent from my PDP-7 Running UNIX V0 expect things to be almost but not quite. 

> On Jun 15, 2018, at 9:08 PM, Greg 'groggy' Lehey <grog at lemis.com> wrote:
> 
>> On Friday, 15 June 2018 at 10:21:44 -0400, Clem Cole wrote:
>> On Fri, Jun 15, 2018 at 7:19 AM, A. P. Garcia <a.phillip.garcia at gmail.com>
>> wrote:
>> 
>>> jay forrester first described an invention called core memory in a lab
>>> notebook 69 years ago today.
>>> 
>> Be careful -- Forrester named it and put it into an array and build a
>> random access memory with it, but An Wang invented and patented basic
>> technology we now call 'core' in 1955  2,708,722
>> <https://patents.google.com/patent/US2708722A/en>  (calling it `dynamic
>> memory').
> 
> The patent may date from 1955, but by that time it was already in use.
> Whirlwind I used it in 1949, and the IBM 704 (1954) used it for main
> memory.  There are some interesting photos at
> https://en.wikipedia.org/wiki/Magnetic-core_memory.
> 
> Greg
> --
> Sent from my desktop computer.
> Finger grog at lemis.com for PGP public key.
> See complete headers for address and phone numbers.
> This message is digitally signed.  If your Microsoft mail program
> reports problems, please read http://lemis.com/broken-MUA


From dave at horsfall.org  Sat Jun 16 16:36:48 2018
From: dave at horsfall.org (Dave Horsfall)
Date: Sat, 16 Jun 2018 16:36:48 +1000 (EST)
Subject: [TUHS] core
In-Reply-To: <alpine.BSF.2.21.999.1806151618550.51464@orthanc.ca>
References: <20180615152542.E1EC918C08C@mercury.lcs.mit.edu>
 <alpine.BSF.2.21.999.1806160855070.68981@aneurin.horsfall.org>
 <alpine.BSF.2.21.999.1806151618550.51464@orthanc.ca>
Message-ID: <alpine.BSF.2.21.999.1806161611250.68981@aneurin.horsfall.org>

On Fri, 15 Jun 2018, Lyndon Nerenberg wrote:

>> E.g. Watson was talking about the multi-megabuck 704/709/7094 etc, and 
>> I think that Olsen's quote was about the DEC-System 10...
>
> At the time, those were the only computers out there.  The "personal 
> computer" was the myth of the day.  Go back and look at the prices IBM 
> and DEC were charging for their gear.  Even in the modern context, that 
> hardware was more expensive than the ridiculously inflated prices of 
> housing in Vancouver.

Precisely, but idiots keep repeating those "quotes" (if they were repeated 
accurately at all) in some sort of an effort to make the so-called 
"experts" look silly; a form of reverse jealousy/snobbery or something? 
It really pisses me off, and I'm sure that there's a medical term for 
it...

I came across a web site that had these populist quotes, alongside the 
original *taken in context*, but I'm damned if I can find it now.

I mean, how many people could afford a 7094, FFS?  The power bill, the air 
conditioning, the team of technicians, the warehouse of spare parts...

Hell, I'll bet that my iPhone has more power than our System-360/50, but 
it has nowhere near the sheer I/O throughput of a mainframe :-)

-- Dave


From jnc at mercury.lcs.mit.edu  Sat Jun 16 22:51:32 2018
From: jnc at mercury.lcs.mit.edu (Noel Chiappa)
Date: Sat, 16 Jun 2018 08:51:32 -0400 (EDT)
Subject: [TUHS] core
Message-ID: <20180616125132.2663418C0A7@mercury.lcs.mit.edu>

    > From: Dave Horsfall <dave at horsfall.org>

    >> one of the Watson's saying there was a probably market for
    >> <single-digit> of computers; Ken Olsen saying people wouldn't want
    >> computers in their homes; etc, etc.

    > I seem to recall reading somewhere that these were urban myths...  Does
    > anyone have actual references in their contexts?

Well, for the Watson one, there is some controversy:

  https://en.wikipedia.org/wiki/Thomas_J._Watson#Famous_attribution

My guess is that he might actually have said it, and it was passed down orally
for a while before it was first written down. The thing is that he is alleged
to have said it in 1943, and there probably _was_ a market for only 5 of the
kind of computing devices available at that point (e.g. the Mark I).

    > E.g. Watson was talking about the multi-megabuck 704/709/7094 etc

No. The 7094 is circa 1960, almost 20 years later.


    > Olsens's quote was about the DEC-System 10...

Again, no. He did say it, but it was not about PDP-10s:

  https://en.wikiquote.org/wiki/Ken_Olsen

"Olsen later explained that he was referring to smart homes rather than
personal computers." Which sounds plausible (in the sense of 'what he meant',
not 'it was correct'), given where he said it (a World Future Society
meeting).

	Noel


From clemc at ccc.com  Sat Jun 16 23:11:17 2018
From: clemc at ccc.com (Clem cole)
Date: Sat, 16 Jun 2018 09:11:17 -0400
Subject: [TUHS] core
In-Reply-To: <20180616125132.2663418C0A7@mercury.lcs.mit.edu>
References: <20180616125132.2663418C0A7@mercury.lcs.mit.edu>
Message-ID: <05790A38-D097-4F12-8C6D-3CC6D153546D@ccc.com>

And I believe at the time, KO was commenting in terms of Bell’s ‘minimal computer’ definition (the ‘mini’, aka the 12-bit systems) of the day: DEC’s PDP-8, not the 10.  IIRC, the 8 pretty much had a base price in the $30K range in the mid-to-late 60s.  FWIW, the original 8 was built from discrete bipolar (matched) transistors on DEC flip chips, and was physically about two or three 19” relay racks in size.  Later models used TTL and got down to a single 3U ‘drawer.’

Clem

Also please remember that originally, mini did not mean small.  That was a computer-press redefinition once the single-chip ‘micro’ computers came into being in the mid-1970s.

Sent from my PDP-7 Running UNIX V0 expect things to be almost but not quite. 

On Jun 16, 2018, at 8:51 AM, Noel Chiappa <jnc at mercury.lcs.mit.edu> wrote:

>> From: Dave Horsfall <dave at horsfall.org>
> 
>>> one of the Watson's saying there was a probably market for
>>> <single-digit> of computers; Ken Olsen saying people wouldn't want
>>> computers in their homes; etc, etc.
> 
>> I seem to recall reading somewhere that these were urban myths...  Does
>> anyone have actual references in their contexts?
> 
> Well, for the Watson one, there is some controversy:
> 
>  https://en.wikipedia.org/wiki/Thomas_J._Watson#Famous_attribution
> 
> My guess is that he might actually have said it, and it was passed down orally
> for a while before it was first written down. The thing is that he is alleged
> to have said it in 1943, and there probably _was_ a market for only 5 of the
> kind of computing devices available at that point (e.g. the Mark I).
> 
>> E.g. Watson was talking about the multi-megabuck 704/709/7094 etc
> 
> No. The 7094 is circa 1960, almost 20 years later.
> 
> 
>> Olsens's quote was about the DEC-System 10...
> 
> Again, no. He did say it, but it was not about PDP-10s:
> 
>  https://en.wikiquote.org/wiki/Ken_Olsen
> 
> "Olsen later explained that he was referring to smart homes rather than
> personal computers." Which sounds plausible (in the sense of 'what he meant',
> not 'it was correct'), given where he said it (a World Future Society
> meeting).
> 
>    Noel


From doug at cs.dartmouth.edu  Sat Jun 16 22:58:23 2018
From: doug at cs.dartmouth.edu (Doug McIlroy)
Date: Sat, 16 Jun 2018 08:58:23 -0400
Subject: [TUHS] core
Message-ID: <201806161258.w5GCwNDN014646@tahoe.cs.Dartmouth.EDU>

> jay forrester first described an invention called core memory in a lab
> notebook 69 years ago today.

Core memory wiped out competing technologies (Williams tube, mercury
delay line, etc) almost instantly and ruled for over twenty years. Yet
late in his life Forrester told me that the Whirlwind-connected
invention he was most proud of was marginal testing: running the
voltage up and down once a day to cause shaky vacuum tubes to
fail during scheduled maintenance rather than randomly during
operation. And indeed Whirlwind racked up a notable record of
reliability.

Doug


From jnc at mercury.lcs.mit.edu  Sat Jun 16 23:37:16 2018
From: jnc at mercury.lcs.mit.edu (Noel Chiappa)
Date: Sat, 16 Jun 2018 09:37:16 -0400 (EDT)
Subject: [TUHS] core
Message-ID: <20180616133716.2302F18C0A7@mercury.lcs.mit.edu>

    > From: Dave Horsfall

    > idiots keep repeating those "quotes" ... in some sort of an effort to
    > make the so-called "experts" look silly; a form of reverse
    > jealousy/snobbery or something?  It really pisses me off

You've just managed to hit one of my hot buttons.

I can't speak to the motivations of everyone who repeats these stories, but my
professional career has been littered with examples of poor vision from
technical colleagues (some of whom should have known better), against which I
(in my role as an architect, which is necessarily somewhere where long-range
thinking is - or should be - a requirement) have struggled again and again -
sometimes successfully, more often, not.

So I chose those two only because they are well-known examples - but, as you
correctly point out, they are poor examples, for a variety of reasons. But
they perfectly illustrate something I am _all_ too familiar with, and which
happens _a lot_. And the original situation I was describing (the MIT core
patent) really happened - see "Memories that Shaped an Industry", page 211.

Examples of poor vision are legion - and more importantly, often/usually seen
to be such _at the time_ by some people - who were not listened to.

Let's start with the UNIBUS. Why does it have only 18 address lines? (I have
this vague memory of a quote from Gordon Bell admitting that was a mistake,
but I don't recall exactly where I saw it.) That very quickly became a major
limitation. I'm not sure why they did it (the number of contacts on a standard
DEC connector probably had something do with it, connected to the fact that
the first PDP-11 had only 16-bit addressing anyway), but it should have been
obvious that it was not enough.
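To put numbers on the UNIBUS limitation, a quick sketch of the address-space arithmetic (the 8 KB I/O page figure is the usual PDP-11 convention; treat it as an assumption here):

```python
# 18 UNIBUS address lines, byte-addressed as on the PDP-11.
ADDRESS_BITS = 18
total_bytes = 2 ** ADDRESS_BITS          # 262144 bytes = 256 KB

# By convention the top 8 KB of the space was the I/O page,
# leaving even less for actual memory.
IO_PAGE = 8 * 1024
usable = total_bytes - IO_PAGE

print(total_bytes, usable)               # 262144 253952
```

A quarter megabyte, minus the I/O page, for every device and all of memory: it is easy to see why that ran out quickly.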

And a major one from the very start of my career: the decision to remove the
variable-length addresses from IPv3 and substitute the 32-bit addresses of
IPv4. Alas, I was too junior to be in the room that day (I was just down the
hall), but I've wished forever since that I was there to propose an alternate
path (the details of which I will skip) - that would have saved us all
countless billions (no, I am not exaggerating) spent on IPv6. Dave Reed was
pretty disgusted with that at the time, and history shows he was right.

Dave also tried to get a better checksum algorithm into TCP/UDP (I helped him
write the PDP-11 code to prove that it was plausible) but again, it got turned
down. (As did his wonderful idea for bottom-up address allocation, instead of
top-down. Oh well.) People have since discussed issues with the TCP/IP
checksum, but it's too late now to change it.
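For readers who haven't seen it, this is roughly the checksum in question: a 16-bit ones'-complement sum (RFC 1071 describes it). A minimal sketch, which also demonstrates one of its well-known weaknesses:

```python
def internet_checksum(data: bytes) -> int:
    """16-bit ones'-complement checksum used by IP/TCP/UDP (RFC 1071)."""
    if len(data) % 2:
        data += b'\x00'                      # pad odd-length input
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
    while total >> 16:                       # fold carry bits back in
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

# One often-discussed weakness: addition is commutative, so swapping
# whole 16-bit words leaves the checksum unchanged.
a = internet_checksum(b'\x12\x34\x56\x78')
b = internet_checksum(b'\x56\x78\x12\x34')
print(a == b)                                # True
```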

One place where I _did_ manage to win was in adding subnetting support to
hosts (in the Host Requirements WG); it was done the way I wanted, with the
result that when CIDR came along, even though it hadn't been foreseen at the
time we did subnetting, it required _no_ hosts changes of any kind. But
mostly I lost. :-(
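The host-side point can be illustrated with a modern sketch: a host that stores an explicit (address, mask) pair, rather than inferring the mask from the old class A/B/C rules, handles any CIDR prefix unchanged. Python's `ipaddress` module makes the behavior easy to show (the addresses here are illustrative):

```python
import ipaddress

# A classless /22 prefix on what classful rules would have called a
# class C address.  A host keeping an explicit (address, mask) pair
# computes the right network with no special-casing at all.
iface = ipaddress.ip_interface('192.168.4.17/22')
print(iface.network)          # 192.168.4.0/22
print(iface.network.netmask)  # 255.255.252.0
```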

So, is poor vision common? All too common.

       Noel


From jnc at mercury.lcs.mit.edu  Sat Jun 16 23:49:28 2018
From: jnc at mercury.lcs.mit.edu (Noel Chiappa)
Date: Sat, 16 Jun 2018 09:49:28 -0400 (EDT)
Subject: [TUHS] core
Message-ID: <20180616134928.1574518C0A7@mercury.lcs.mit.edu>

    > From: Clem Cole

    > The 8 pretty much had a base price in the $30k range in the mid to late
    > 60s. 

His statement was made in 1977 (ironically, the same year as the Apple
II).

(Not really that relevant, since he was apparently talking about 'smart
homes'; still, the history of DEC and personal computers is not a happy one;
perhaps why that quotation was taken up.)

    > Later models used TTL and got down to a single 3U 'drawer'.

There was eventually a single-chip micro version, done in the mid-70's; it
was used in a number of DEC word-processing products.

	Noel


From usotsuki at buric.co  Sun Jun 17 00:10:43 2018
From: usotsuki at buric.co (Steve Nickolas)
Date: Sat, 16 Jun 2018 10:10:43 -0400 (EDT)
Subject: [TUHS] core
In-Reply-To: <20180616134928.1574518C0A7@mercury.lcs.mit.edu>
References: <20180616134928.1574518C0A7@mercury.lcs.mit.edu>
Message-ID: <alpine.BSF.2.02.1806161010160.10239@frieza.hoshinet.org>

On Sat, 16 Jun 2018, Noel Chiappa wrote:

>    > From: Clem Cole
>
>    > The 8 pretty much had a base price in the $30k range in the mid to late
>    > 60s.
>
> His statement was made in 1977 (ironically, the same year as the Apple
> II).
>
> (Not really that relevant, since he was apparently talking about 'smart
> homes'; still, the history of DEC and personal computers is not a happy one;
> perhaps why that quotation was taken up.)
>
>    > Later models used TTL and got down to a single 3U 'drawer'.
>
> There was eventually a single-chip micro version, done in the mid-70's; it
> was used in a number of DEC word-processing products.
>
> 	Noel
>

Wasn't a one-chip PDP11 used by Tengen in a few arcade games, like 
Paperboy?

-uso.


From clemc at ccc.com  Sun Jun 17 00:10:24 2018
From: clemc at ccc.com (Clem Cole)
Date: Sat, 16 Jun 2018 10:10:24 -0400
Subject: [TUHS] core
In-Reply-To: <20180616134928.1574518C0A7@mercury.lcs.mit.edu>
References: <20180616134928.1574518C0A7@mercury.lcs.mit.edu>
Message-ID: <CAC20D2Ndy8KKiupAQqEvN1fAKgG2Axpj5uXFbJGQYTO1FRRmvA@mail.gmail.com>

Thanks, I thought it was about 10 years earlier.   It means that the 16-bit
systems were definitely the norm, the 32-bit systems were well under
design, and the micros already birthed.  That said, as I pointed out in my
paper last summer, in 1977 a PDP-11 that was able to run UNIX (an 11/34 with
max memory) ran between $50K-$150K depending on how it was configured, and an
11/70 was closer to $250K.   To scale, in 2017 dollars we calculated that
comes to $208K/$622K/$1M; and as I also pointed out, a graduate researcher
in those days cost about $5-$10K per year.


On Sat, Jun 16, 2018 at 9:49 AM, Noel Chiappa <jnc at mercury.lcs.mit.edu>
wrote:

>     > From: Clem Cole
>
>     > The 8 pretty much had a base price in the $30k range in the mid to
> late
>     > 60s.
>
> His statement was made in 1977 (ironically, the same year as the Apple
> II).
>
> (Not really that relevant, since he was apparently talking about 'smart
> homes'; still, the history of DEC and personal computers is not a happy
> one;
> perhaps why that quotation was taken up.)
>
>     > Later models used TTL and got down to a single 3U 'drawer'.
>
> There was eventually a single-chip micro version, done in the mid-70's; it
> was used in a number of DEC word-processing products.
>
>         Noel
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://minnie.tuhs.org/pipermail/tuhs/attachments/20180616/0a63f178/attachment.html>

From pechter at gmail.com  Sun Jun 17 00:13:10 2018
From: pechter at gmail.com (William Pechter)
Date: Sat, 16 Jun 2018 10:13:10 -0400
Subject: [TUHS] core
In-Reply-To: <alpine.BSF.2.02.1806161010160.10239@frieza.hoshinet.org>
References: <20180616134928.1574518C0A7@mercury.lcs.mit.edu>
 <alpine.BSF.2.02.1806161010160.10239@frieza.hoshinet.org>
Message-ID: <35B2FC9E-CFE4-4F62-9D98-9BACB28EFCB3@gmail.com>

Yup.  IIRC they used the T11 -- the same chip in the Vax86xx front end. 

Bill

On June 16, 2018 10:10:43 AM EDT, Steve Nickolas <usotsuki at buric.co> wrote:
>On Sat, 16 Jun 2018, Noel Chiappa wrote:
>
>>    > From: Clem Cole
>>
>>    > The 8 pretty much had a base price in the $30k range in the mid
>to late
>>    > 60s.
>>
>> His statement was made in 1977 (ironically, the same year as the
>Apple
>> II).
>>
>> (Not really that relevant, since he was apparently talking about
>'smart
>> homes'; still, the history of DEC and personal computers is not a
>happy one;
>> perhaps why that quotation was taken up.)
>>
>>    > Later models used TTL and got down to a single 3U 'drawer'.
>>
>> There was eventually a single-chip micro version, done in the
>mid-70's; it
>> was used in a number of DEC word-processing products.
>>
>> 	Noel
>>
>
>Wasn't a one-chip PDP11 used by Tengen in a few arcade games, like 
>Paperboy?
>
>-uso.

-- 
Sent from my Android device with K-9 Mail. Please excuse my brevity.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://minnie.tuhs.org/pipermail/tuhs/attachments/20180616/d5cda7d3/attachment.html>

From stewart at serissa.com  Sun Jun 17 00:34:00 2018
From: stewart at serissa.com (Lawrence Stewart)
Date: Sat, 16 Jun 2018 10:34:00 -0400
Subject: [TUHS] core
In-Reply-To: <CAC20D2Ndy8KKiupAQqEvN1fAKgG2Axpj5uXFbJGQYTO1FRRmvA@mail.gmail.com>
References: <20180616134928.1574518C0A7@mercury.lcs.mit.edu>
 <CAC20D2Ndy8KKiupAQqEvN1fAKgG2Axpj5uXFbJGQYTO1FRRmvA@mail.gmail.com>
Message-ID: <F0915590-3DE6-4336-9CE3-684C8E240876@serissa.com>

5-10K for a grad student seems low for the late ’70s.  I was an RA at Stanford then, and they paid about $950/month.  The loaded cost would have been about twice that.

My lab had an 11/34 with V7 and we considered ourselves quite well off.

-L

> On 2018, Jun 16, at 10:10 AM, Clem Cole <clemc at ccc.com> wrote:
> 
> Thanks, I thought it was  about 10 years earlier.   It means that the 16 bit systems were  definitely the norm and the 32 bit system were well under design and the micros already birthed.  That said, as I pointed out in my paper last summer, in 1977, a PDP-11 that was able to run UNIX (11/34 with max memory) ran between $50-150K depending how it was configured and an 11/70 was closer to $250K.   To scale, In 2017 dollars, we calculated that comes to $208K/$622K/$1M and as I also pointed out, a graduate researcher in those days cost about $5-$10K per year.
> 

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://minnie.tuhs.org/pipermail/tuhs/attachments/20180616/22efa541/attachment.html>

From lm at mcvoy.com  Sun Jun 17 01:28:57 2018
From: lm at mcvoy.com (Larry McVoy)
Date: Sat, 16 Jun 2018 08:28:57 -0700
Subject: [TUHS] core
In-Reply-To: <F0915590-3DE6-4336-9CE3-684C8E240876@serissa.com>
References: <20180616134928.1574518C0A7@mercury.lcs.mit.edu>
 <CAC20D2Ndy8KKiupAQqEvN1fAKgG2Axpj5uXFbJGQYTO1FRRmvA@mail.gmail.com>
 <F0915590-3DE6-4336-9CE3-684C8E240876@serissa.com>
Message-ID: <20180616152857.GC12485@mcvoy.com>

I was a grad student at UWisc in 1986 and got paid $16K/year. 

On Sat, Jun 16, 2018 at 10:34:00AM -0400, Lawrence Stewart wrote:
> 5-10K for a grad student seems low for the late '70s.  I was an RA at Stanford then and they paid about $950/month.  The loaded cost would have been about twice that.
> 
> My lab had an 11/34 with V7 and we considered ourselves quite well off.
> 
> -L
> 
> > On 2018, Jun 16, at 10:10 AM, Clem Cole <clemc at ccc.com> wrote:
> > 
> > Thanks, I thought it was  about 10 years earlier.   It means that the 16 bit systems were  definitely the norm and the 32 bit system were well under design and the micros already birthed.  That said, as I pointed out in my paper last summer, in 1977, a PDP-11 that was able to run UNIX (11/34 with max memory) ran between $50-150K depending how it was configured and an 11/70 was closer to $250K.   To scale, In 2017 dollars, we calculated that comes to $208K/$622K/$1M and as I also pointed out, a graduate researcher in those days cost about $5-$10K per year.
> > 
> 

-- 
---
Larry McVoy            	     lm at mcvoy.com             http://www.mcvoy.com/lm 


From clemc at ccc.com  Sun Jun 17 04:59:27 2018
From: clemc at ccc.com (Clem Cole)
Date: Sat, 16 Jun 2018 14:59:27 -0400
Subject: [TUHS] core
In-Reply-To: <20180616133716.2302F18C0A7@mercury.lcs.mit.edu>
References: <20180616133716.2302F18C0A7@mercury.lcs.mit.edu>
Message-ID: <CAC20D2PRmftgpZatQAd4+NPQSK1YEv6ZocOKoCrLsjO408ktVw@mail.gmail.com>

below...

On Sat, Jun 16, 2018 at 9:37 AM, Noel Chiappa <jnc at mercury.lcs.mit.edu>
wrote:
>
>
> I can't speak to the motivations of everyone who repeats these stories,
> but my
> professional career has been littered with examples of poor vision from
> technical colleagues (some of whom should have known better), against
> which I
> (in my role as an architect, which is necessarily somewhere where
> long-range
> thinking is - or should be - a requirement) have struggled again and again
> -
> sometimes successfully, more often, not.
>
Amen, although sadly many if not all of us have a few of these
stories.  In fact, I'm fighting another one of these battles right now.🤔
My experience is that more often than not, it's less a failure to see what
a successful future might bring, and more often one of, well, '*we don't
need to do that now / it costs too much / we don't have the time*.'

That said, DEC was the epitome of the old line about perfection being the
enemy of success.    I like to say to my colleagues: pick the things
that are going to really matter.  Make those perfect and bet the company on
them.  But think in terms of what matters.   As you point out, address size
issues are killers and you need to get those right at time t0.

Without saying too much: many firms, like my own, think in terms of
computation (math libraries, CPU kernels), but frankly, if I cannot get the
data to/from the CPU's functional units, or the data is stored in the wrong
place, or I have much of the main memory tied up in the OS managing
different types of user memory, it doesn't matter [HPC customers in
particular pay for getting a job done -- they really don't care how -- just
get it done, and done fast].

To me, it becomes a matter of 'value' -- our HW folks know a crappy
computational system will doom the device, so that is what they put their
effort into building.   My argument has often been that the messaging
systems, memory hierarchy, and housekeeping are what you have to get right
at this point.  No amount of SW will fix HW that lacks the right support
in those places (not that lots of compute is bad, but it is actually not
the big issue in HPC when you get down to it these days).



>
> Let's start with the UNIBUS. Why does it have only 18 address lines? (I
> have
> this vague memory of a quote from Gordon Bell admitting that was a mistake,
> but I don't recall exactly where I saw it.)

I think it was part of the same paper where he made the observation that
the greatest mistake an architecture can have is too few address bits.
My understanding is that the problem was that the UNIBUS was perceived as
an I/O bus, and, as I was pointing out, the folks creating it/running the
team did not value it, so in the name of 'cost', more bits were not
considered important.

I used to know and work with the late Henk Schalke, who ran UNIBUS (HW)
engineering at DEC for many years.    Henk was notoriously frugal (we might
even say 'cheap'), so I can imagine that he did not want to spend on
anything that he thought was wasteful.   Just like the Amdahl/Brooks story
I retold about the 8-bit byte, with Amdahl thinking Brooks was nuts:
I don't know for sure, but I can see it happening without someone really
arguing with Henk as to why 18 bits was not 'good enough.'  I can imagine
the conversation going something like this.  Someone like me saying:
*"Henk, 18 bits is not going to cut it."*   He might have replied something
like: *"Bool sheet"* [a Dutchman's way of cursing in English], *"we already
gave you two more bits than you can address"* (actually he'd then probably
stop mid-sentence and translate in his head from Dutch to English, which
was always interesting when you argued with him).

Note: I'm not blaming Henk, just stating that his thinking was very much
that way, and I suspect he was not alone.  Only someone like Gordon at
the time could have overruled it, and I don't think the problems were
foreseen, as Noel notes.




>
> And a major one from the very start of my career: the decision to remove
> the
> variable-length addresses from IPv3 and substitute the 32-bit addresses of
> IPv4.
>
I always wondered about the back story on that one.  I do seem to remember
that there had been a proposal for variable-length addresses at one point,
but I never knew why it was not picked.   As you say, I was certainly way
too junior to have been part of that discussion.  We just had some of the
documents from you guys and we were told to try to implement them.  My
guess is this is an example of folks thinking variable-length addressing
was wasteful.  32 bits seemed infinite in those days, and nobody expected
the network to scale to the size it is today and will grow to in the
future [I do remember, before Noel and team came up with ARP, somebody
quipped that Xerox Ethernet's 48 bits were too big and IP's 32 bits were
too small.  The original hack I did was: since we used 3Com boards and
they all shared the upper 3 bytes of the MAC address, to map the lower 24
to the IP address -- we were not connecting to the global network, so it
worked.  Later we used a lookup table, until the ARP trick was created].
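A reconstruction of the pre-ARP hack Clem describes (not his original code; the vendor prefix and addresses are illustrative): with every board sharing one 3-byte vendor prefix, the low 24 bits of the MAC can simply carry the low 24 bits of the IP address.

```python
# Pre-ARP hack: every board shares one vendor prefix (upper 3 bytes of
# the MAC), so the lower 24 bits of the MAC carry the low 24 bits of the
# 32-bit IP address directly -- fine on an isolated LAN where all hosts
# share the upper 8 bits of their IP addresses.
VENDOR_PREFIX = bytes([0x02, 0x60, 0x8C])   # illustrative 3Com-style prefix

def ip_to_mac(ip: str) -> bytes:
    octets = [int(o) for o in ip.split('.')]
    return VENDOR_PREFIX + bytes(octets[1:])    # low 24 bits of the IP

def mac_to_ip(mac: bytes, net: int) -> str:
    return '.'.join(str(b) for b in (net,) + tuple(mac[3:]))

mac = ip_to_mac('10.1.2.3')
print(mac.hex())                  # 02608c010203
print(mac_to_ip(mac, 10))         # 10.1.2.3
```

The scheme breaks as soon as two networks with different vendor prefixes, or different upper IP octets, need to talk, which is exactly the gap ARP later filled.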



>
> One place where I _did_ manage to win was in adding subnetting support to
> hosts (in the Host Requirements WG); it was done the way I wanted, with the
> result that when CIDR came along, even though it hadn't been forseen at the
> time we did subnetting, it required _no_ hosts changes of any kind.

Amen, and thank you.




> But
> mostly I lost. :-(
>
I know the feeling.  Too many battles that in hindsight you think: darn, if
they had only listened.    FWIW: if you try to mess with Intel OPA2 fabric
these days, there is a back story.  A few years ago I had quite a battle
with the HW folks, but I won that one.   The SW cannot tell the difference
between on-die and off-die, so the OS does not have to manage it.   Huge
difference in OS performance and space efficiency.   But I suspect that
there are some HW folks that spit on the floor when I come in the room.
We'll see if I am proven right in the long run; but at 1M cores I don't
want to think of the OS mess of managing two different types of memory
for the message system.



>
> So, is poor vision common? All too common.

Indeed.   But to be fair, you can also end up being like DEC: often late
to the market.


My example is Alpha (and 64 bits vs 32 bits).   No attempt to support
32-bit mode was really made, because 64-bit was the future.  Sr. folks
considered a 32-bit mode wasteful.  The argument was that adding it was not
only technically not a good idea, but it would suck up engineering
resources to implement it in both HW and SW.  Plus, we were coming from the
VAX, so folks had to recompile all the code anyway (god forbid that the SW
might not be 64-bit clean, mind you).  [VMS did a few hacks, but Tru64
stayed 'clean.']
Similarly (back to the UNIX theme for this mailing list), Tru64 was a
rewrite of OSF/1, but hardly any OSF code was left in the DEC kernel by the
time it shipped.  Each subsystem was rewritten/replaced to 'make it perfect'
[always with a good argument, mind you, but never looking at the long-term
issues].    Those two choices cost 3 years in market acceptance.   By the
time Alphas hit the street, it did not matter.  I think in both cases Alpha
would have been better accepted if DEC had shipped earlier with a few hacks,
and then improved Tru64 as a better version was developed (*i.e.* replacing
the memory system, the I/O system, the TTY handler, and the FS, just to
name a few that got rewritten from OSF/1 because folks thought they were
'weak').

The trick, in my mind, is to identify the real technical features you
cannot fix later, and get those right at the beginning.   Then place the
bet on those features, and develop as fast as you can, doing the best with
them you are able given your constraints.  Then slowly improve over time
the things that mattered less at the beginning, as you have a revenue
stream.  If you wait for perfection, you get something like Alpha, which
was a great architecture (particularly compared to INTEL*64) but, in the
end, did not matter.

Clem
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://minnie.tuhs.org/pipermail/tuhs/attachments/20180616/59b657c8/attachment.html>

From clemc at ccc.com  Sun Jun 17 05:07:02 2018
From: clemc at ccc.com (Clem Cole)
Date: Sat, 16 Jun 2018 15:07:02 -0400
Subject: [TUHS] core
In-Reply-To: <alpine.BSF.2.21.999.1806161611250.68981@aneurin.horsfall.org>
References: <20180615152542.E1EC918C08C@mercury.lcs.mit.edu>
 <alpine.BSF.2.21.999.1806160855070.68981@aneurin.horsfall.org>
 <alpine.BSF.2.21.999.1806151618550.51464@orthanc.ca>
 <alpine.BSF.2.21.999.1806161611250.68981@aneurin.horsfall.org>
Message-ID: <CAC20D2MEogGGZh+nXGVfmWMup8GPdPySA-dKYexM2k1OT9TdPA@mail.gmail.com>

On Sat, Jun 16, 2018 at 2:36 AM, Dave Horsfall <dave at horsfall.org> wrote:

> ​...​
>
> Hell, I'll bet that my iPhone has more power than our System-360/50, but
> it has nowhere near the sheer I/O throughput of a mainframe :-)


I sent Dave's comment to the chief designer of the Model 50 (my friend
Russ Robelen).   His reply is cut/pasted below.  For context, he refers to
a very rare book, ‘IBM’s 360 and Early 370 Systems’ by Pugh, Johnson &
J.H. Palmer, which Russ and other IBMers consider the biblical text on
the 360:



As to Dave’s comment: I'll bet that my iPhone has more power than our
System-360/50, but it has nowhere near the sheer I/O throughput of a
mainframe.


The ratio of bits of I/O to CPU MIPS was very high back in those days,
particularly for the Model 50, which was considered a ‘commercial machine’
vs a ‘scientific machine’.  The machine was doing payroll and inventory
management: high on I/O, low on compute.  Much different today, even for an
iPhone.  The latest iPhone runs on an ARMv8 derivative of Apple's "Swift" *dual
core *architecture called "Cyclone", and it runs at 1.3 GHz.  The Mod 50 ran
at 2 MHz.  The ARMv8 is a 64-bit machine.  The Mod 50 was a 32-bit machine.
The Mod 50 had no cache (memory ran at 0.5 MHz).  Depending on what
instruction mix you want to use, I would put the iPhone at conservatively
1,500 times the Mod 50.  I might add, a typical Mod 50 system with I/O sold
for $1M.
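Russ's figure can be sanity-checked with back-of-envelope arithmetic. The Model 50 MIPS number below is an assumed illustrative value (figures around 0.1-0.2 MIPS are commonly quoted), not a measurement:

```python
# Rough sanity check of the "conservatively 1,500 times" estimate.
# mod50_mips is an assumption for illustration only.
mod50_mips = 0.15
iphone_mhz = 1300            # 1.3 GHz, from the text
iphone_ipc = 1.0             # deliberately modest: one instruction/cycle, one core
iphone_mips = iphone_mhz * iphone_ipc
print(round(iphone_mips / mod50_mips))   # 8667
```

Even counting a single core at one instruction per cycle, the ratio comes out well above 1,500, which supports calling the estimate conservative.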


On the question of memory cost - this is from the bible on 360 mentioned
earlier.

*For example, the Model 50 main memory with a read-write cycle of 2
microseconds cost .8 cents per bit*.

*Page 194, Chapter 4, ‘IBM’s 360 and Early 370 Systems’*
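The quoted per-bit price makes for a quick worked example; the 256 KB configuration size here is an assumption for illustration:

```python
# Cost of a hypothetical 256 KB Model 50 core memory at 0.8 cents per bit.
cents_per_bit = 0.8
config_bytes = 256 * 1024
bits = config_bytes * 8
cost_dollars = bits * cents_per_bit / 100
print(f"${cost_dollars:,.0f}")   # $16,777 (in 1960s dollars)
```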




From bqt at update.uu.se  Sun Jun 17 05:13:35 2018
From: bqt at update.uu.se (Johnny Billquist)
Date: Sat, 16 Jun 2018 21:13:35 +0200
Subject: [TUHS] maybe off-topic: Unix on a microcontroller
In-Reply-To: <mailman.1.1529114402.31005.tuhs@minnie.tuhs.org>
References: <mailman.1.1529114402.31005.tuhs@minnie.tuhs.org>
Message-ID: <a2dde2ad-1569-3b26-d5ee-d5886a986dd8@update.uu.se>

On 2018-06-16 04:00, Tom Ivar Helbekkmo<tih at hamartun.priv.no> wrote:
> Warner Losh<imp at bsdimp.com>  writes:
> 
>> It looks like retrobsd hasn't been active in the last couple of years
>> though. A cool accomplishment, but with some caveats. All the network
>> is in userland, not the kernel, for example.
> Isn't 2.11BSD networking technically in userland?  I forget.  Johnny?

No, networking in 2.11BSD is not in userland. But it's not a part of 
/unix either. It's a separate image (/netnix) that gets loaded at boot 
time, but it's run in the context of the kernel.

I'd have to go and check this if anyone wants details. It's been quite a 
while since I was fooling around inside there. Or maybe someone else 
remembers more details on how it integrates.

   Johnny

-- 
Johnny Billquist                  || "I'm on a bus
                                   ||  on a psychedelic trip
email: bqt at softjar.se             ||  Reading murder books
pdp is alive!                     ||  tryin' to stay hip" - B. Idol


From jnc at mercury.lcs.mit.edu  Sun Jun 17 05:24:02 2018
From: jnc at mercury.lcs.mit.edu (Noel Chiappa)
Date: Sat, 16 Jun 2018 15:24:02 -0400 (EDT)
Subject: [TUHS] maybe off-topic: Unix on a microcontroller
Message-ID: <20180616192402.120E718C0A7@mercury.lcs.mit.edu>

    > From: Johnny Billquist

    > It's a separate image (/netnix) that gets loaded at boot time, but it's
    > run in the context of the kernel.

ISTR reading that it runs in Supervisor mode (no doubt so it could use the
Supervisor mode virtual address space, and not have to go crazy with overlays
in the Kernel space).

Never looked at the code, though.

	Noel


From bqt at update.uu.se  Sun Jun 17 08:14:00 2018
From: bqt at update.uu.se (Johnny Billquist)
Date: Sun, 17 Jun 2018 00:14:00 +0200
Subject: [TUHS] core
In-Reply-To: <mailman.1.1529175600.3826.tuhs@minnie.tuhs.org>
References: <mailman.1.1529175600.3826.tuhs@minnie.tuhs.org>
Message-ID: <86dcd19d-f805-f338-f190-ab38d1ac82c1@update.uu.se>

On 2018-06-16 21:00, Clem Cole <clemc at ccc.com> wrote:
> below...
> On Sat, Jun 16, 2018 at 9:37 AM, Noel Chiappa <jnc at mercury.lcs.mit.edu> wrote:
>>
>> Let's start with the UNIBUS. Why does it have only 18 address lines? (I
>> have
>> this vague memory of a quote from Gordon Bell admitting that was a mistake,
>> but I don't recall exactly where I saw it.)
> ​I think it was part of the same paper where he made the observation that
> the greatest mistake an architecture can have is too few address bits.​

I think the paper you both are referring to is the "What have we learned 
from the PDP-11", by Gordon Bell and Bill Strecker in 1977.

https://gordonbell.azurewebsites.net/Digital/Bell_Strecker_What_we%20_learned_fm_PDP-11c%207511.pdf

There are some additional comments in 
https://gordonbell.azurewebsites.net/Digital/Bell_Retrospective_PDP11_paper_c1998.htm

>   My understanding is that the problem was that UNIBUS was perceived as an
> I/O bus and as I was pointing out, the folks creating it/running the team
> did not value it, so in the name of 'cost', more bits was not considered
> important.

Hmm. I'm not aware of anyone perceiving the Unibus as an I/O bus. It was 
very clearly designed as the system bus for all needs by DEC, and was 
used just like that until the 11/70, which introduced a separate memory 
bus. In all previous PDP-11s, both memory and peripherals were connected 
on the Unibus.

Why it only has 18 bits, I don't know. It might have been a reflection 
of the fact that most things at DEC were either 12 or 18 bits at the 
time, and 12 was obviously not going to cut it. But that is pure 
speculation on my part.

But, if you read that paper again (the one from Bell), you'll see that 
he was pretty much a source for the Unibus as well, and the whole idea 
of having it for both memory and peripherals. But that does not tell us 
anything about why it got 18 bits. It also, incidentally, has 18 data 
bits, but that is mostly ignored by all systems. I believe the KS-10 
made use of that, though. And maybe the PDP-15. And I suspect the same 
would be true for the address bits. Neither system was likely involved 
when the Unibus was created, but both made fortuitous use of it when 
they were designed.

> I used to know and work with the late Henk Schalke, who ran Unibus (HW)
> engineering at DEC for many years.    Henk was notoriously frugal (we might
> even say 'cheap'), so I can imagine that he did not want to spend on
> anything that he thought was wasteful.   Just like I retold the
> Amdahl/Brooks story of the 8-bit byte and Amdahl thinking Brooks was nuts;
> I don't know for sure, but I can see that without someone really arguing
> with Henk as to why 18 bits was not 'good enough.' I can imagine the
> conversation going something like:  Someone like me saying: *"Henk, 18 bits
> is not going to cut it."*   He might have replied something like:   *"Bool
> sheet *[a Dutchman's way of cursing in English], *we already gave you two
> more bits than you can address* (actually he'd then probably stop mid
> sentence and translate in his head from Dutch to English - which was always
> interesting when you argued with him).

Quite possible. :-)

> Note: I'm not blaming Henk, just stating that his thinking was very much
> that way, and I suspect he was not alone.  Only someone like Gordon at
> the time could have overruled it, and I don't think the problems were
> foreseen as Noel notes.

Bell in retrospect thinks that they should have realized this problem, 
but it would appear they really did not consider it at the time. Or 
maybe just didn't believe in what they predicted.

   Johnny

-- 
Johnny Billquist                  || "I'm on a bus
                                   ||  on a psychedelic trip
email: bqt at softjar.se             ||  Reading murder books
pdp is alive!                     ||  tryin' to stay hip" - B. Idol


From krewat at kilonet.net  Sun Jun 17 08:38:47 2018
From: krewat at kilonet.net (Arthur Krewat)
Date: Sat, 16 Jun 2018 18:38:47 -0400
Subject: [TUHS] core
In-Reply-To: <86dcd19d-f805-f338-f190-ab38d1ac82c1@update.uu.se>
References: <mailman.1.1529175600.3826.tuhs@minnie.tuhs.org>
 <86dcd19d-f805-f338-f190-ab38d1ac82c1@update.uu.se>
Message-ID: <89cede81-501b-b2cf-50b7-1d678c5c2316@kilonet.net>

On 6/16/2018 6:14 PM, Johnny Billquist wrote:
> I believe the KS-10 made use of that, though.

Absolutely.

The 36-bit transfer mode requires one memory cycle
(rather than two) for each word transferred. In 36-bit
mode, the PDP-11 word count must be even (an odd word count
would hang the UBA). Only one device on each UBA can do
36-bit transfers; this is because the UBA has only one
buffer to hold the left 18 bits, while the right 18 bits
come across the UNIBUS. On the 2020 system, the disk is the
only device using the 36-bit transfer mode.
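The buffering scheme described above can be sketched in a few lines; the function name and example values are illustrative, not from any DEC source:

```python
# The UBA holds the left 18 bits in its single buffer until the right
# 18 bits arrive over the UNIBUS, then stores one 36-bit word per
# memory cycle. A minimal model of that packing:
MASK18 = (1 << 18) - 1

def pack36(left18, right18):
    """Combine two 18-bit UNIBUS transfers into one 36-bit PDP-10 word."""
    assert 0 <= left18 <= MASK18 and 0 <= right18 <= MASK18
    return (left18 << 18) | right18

print(oct(pack36(0o600000, 0o000123)))   # 0o600000000123
```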



From jnc at mercury.lcs.mit.edu  Sun Jun 17 08:57:41 2018
From: jnc at mercury.lcs.mit.edu (Noel Chiappa)
Date: Sat, 16 Jun 2018 18:57:41 -0400 (EDT)
Subject: [TUHS] core
Message-ID: <20180616225741.F121718C0A7@mercury.lcs.mit.edu>

    > From: Johnny Billquist

    > incidentally have 18 data bits, but that is mostly ignored by all
    > systems. I believe the KS-10 made use of that, though. And maybe the
    > PDP-15.

The 18-bit data thing is a total kludge; they recycled the two bus parity
lines as data lines.

The first device that I know of that used it is the RK11-E:

  http://gunkies.org/wiki/RK11_disk_controller#RK11-E

which is the same cards as the RK11-D, with a jumper set for 18-bit operation,
and a different clock crystal. The other UNIBUS interface that could do this
was the RH11 MASSBUS controller. Both were originally done for the PDP-15;
they were used with the UC15 Unichannel.

The KS10:

  http://gunkies.org/wiki/KS10

wound up using the 18-bit RH11 hack, but that was many years later.

      Noel


From clemc at ccc.com  Sun Jun 17 10:15:14 2018
From: clemc at ccc.com (Clem cole)
Date: Sat, 16 Jun 2018 20:15:14 -0400
Subject: [TUHS] core
In-Reply-To: <86dcd19d-f805-f338-f190-ab38d1ac82c1@update.uu.se>
References: <mailman.1.1529175600.3826.tuhs@minnie.tuhs.org>
 <86dcd19d-f805-f338-f190-ab38d1ac82c1@update.uu.se>
Message-ID: <20B7C3F5-2E44-41AB-91E4-510451428C83@ccc.com>

Hmm. I think you are trying to put too fine a point on it.  Having had this conversation with a number of folks who were there, you’re right that the ability to put memory on the Unibus at the lower end was clearly there, but I’ll take Dave Cane and Henk’s word for it in calling it an I/O bus, since they were the primary HW folks behind a lot of it.  Btw, its replacement, Dave’s BI, was clearly designed with I/O as the primary point. It was supposed to be open sourced in today’s parlance, but DEC closed it at the end.  The whole reason was to have an I/O bus that 3rd parties could build I/O boards around.  Plus, by then DEC had started doing split transactions on the memory bus a la the SMI, which was supposed to be private after the BI mess.  And then by the time of Alpha, BI morphed into PCI, which was made open and of course is now Intel’s PCIe.  But that said, even today we need to make large memory windows available thru it for things like messaging and GPUs - so the differences can get really squishy.  OPA2 has a full MMU and can do a lot of cache protocols just because the message HW really looks to memory a whole lot like a CPU core.  And I expect future GPUs to work the same way. 

And at this point (on die) the differences between a memory and I/O bus are often driven by power and die size.  The memory bus folks are often willing to pay more to keep the memory hierarchy a bit more sane.  I/O buses will often let the SW deal with consistency in return for massive scale.  

Sent from my PDP-7 Running UNIX V0 expect things to be almost but not quite. 

> On Jun 16, 2018, at 6:14 PM, Johnny Billquist <bqt at update.uu.se> wrote:
> 
>> On 2018-06-16 21:00, Clem Cole <clemc at ccc.com> wrote:
>> below... > On Sat, Jun 16, 2018 at 9:37 AM, Noel Chiappa 
> <jnc at mercury.lcs.mit.edu> wrote:
>>> 
>>> Let's start with the UNIBUS. Why does it have only 18 address lines? (I
>>> have
>>> this vague memory of a quote from Gordon Bell admitting that was a mistake,
>>> but I don't recall exactly where I saw it.)
>> ​I think it was part of the same paper where he made the observation that
>> the greatest mistake an architecture can have is too few address bits.​
> 
> I think the paper you both are referring to is the "What have we learned from the PDP-11", by Gordon Bell and Bill Strecker in 1977.
> 
> https://gordonbell.azurewebsites.net/Digital/Bell_Strecker_What_we%20_learned_fm_PDP-11c%207511.pdf
> 
> There is some additional comments in https://gordonbell.azurewebsites.net/Digital/Bell_Retrospective_PDP11_paper_c1998.htm
> 
>>  My understanding is that the problem was that UNIBUS was perceived as an
>> I/O bus and as I was pointing out, the folks creating it/running the team
>> did not value it, so in the name of 'cost', more bits was not considered
>> important.
> 
> Hmm. I'm not aware of anyone perceiving the Unibus as an I/O bus. It was very clearly designed a the system bus for all needs by DEC, and was used just like that until the 11/70, which introduced a separate memory bus. In all previous PDP-11s, both memory and peripherals were connected on the Unibus.
> 
> Why it only have 18 bits, I don't know. It might have been a reflection back on that most things at DEC was either 12 or 18 bits at the time, and 12 was obviously not going to cut it. But that is pure speculation on my part.
> 
> But, if you read that paper again (the one from Bell), you'll see that he was pretty much a source for the Unibus as well, and the whole idea of having it for both memory and peripherals. But that do not tell us anything about why it got 18 bits. It also, incidentally have 18 data bits, but that is mostly ignored by all systems. I believe the KS-10 made use of that, though. And maybe the PDP-15. And I suspect the same would be true for the address bits. But neither system was probably involved when the Unibus was created, but made fortuitous use of it when they were designed.
> 
>> I used to know and work with the late Henk Schalke, who ran Unibus (HW)
>> engineering at DEC for many years.    Henk was notoriously frugal (we might
>> even say 'cheap'), so I can imagine that he did not want to spend on
>> anything that he thought was wasteful.   Just like I retold the
>> Amdahl/Brooks story of the 8-bit byte and Amdahl thinking Brooks was nuts;
>> I don't know for sure, but I can see that without someone really arguing
>> with Henk as to why 18 bits was not 'good enough.' I can imagine the
>> conversation going something like:  Someone like me saying: *"Henk, 18 bits
>> is not going to cut it."*   He might have replied something like:   *"Bool
>> sheet *[a dutchman's way of cursing in English], *we already gave you two
>> more bit than you can address* (actually he'd then probably stop mid
>> sentence and translate in his head from Dutch to English - which was always
>> interesting when you argued with him).
> 
> Quite possible. :-)
> 
>> Note: I'm not blaming Henk, just stating that his thinking was very much
>> that way, and I suspect he was not not alone.  Only someone like Gordon and
>> the time could have overruled it, and I don't think the problems were
>> foreseen as Noel notes.
> 
> Bell in retrospect thinks that they should have realized this problem, but it would appear they really did not consider it at the time. Or maybe just didn't believe in what they predicted.
> 
>  Johnny
> 
> -- 
> Johnny Billquist                  || "I'm on a bus
>                                  ||  on a psychedelic trip
> email: bqt at softjar.se             ||  Reading murder books
> pdp is alive!                     ||  tryin' to stay hip" - B. Idol


From bqt at update.uu.se  Sun Jun 17 11:50:54 2018
From: bqt at update.uu.se (Johnny Billquist)
Date: Sun, 17 Jun 2018 03:50:54 +0200
Subject: [TUHS] core
In-Reply-To: <20B7C3F5-2E44-41AB-91E4-510451428C83@ccc.com>
References: <mailman.1.1529175600.3826.tuhs@minnie.tuhs.org>
 <86dcd19d-f805-f338-f190-ab38d1ac82c1@update.uu.se>
 <20B7C3F5-2E44-41AB-91E4-510451428C83@ccc.com>
Message-ID: <2a35f8dd-e8b8-937a-1af9-d18ac31b8be2@update.uu.se>

On 2018-06-17 02:15, Clem cole wrote:
> Hmm. I think you are trying to put to fine a point on it.  Having had this conversation with a number of folks who were there, you’re right that the ability to memory on the Unibus at the lower end was clearly there but I’ll take Dave Cane and Henk’s word for it as calling it an IO bus who were the primary HW folks behind a lot of it.  Btw it’s replacement, Dave’s BI, was clearly designed with IO as the primary point. It was supposed to be open sourced in today’s parlance but DEC closed it at the end.  The whole reason was to have an io bus the 3rd parties could build io boards around.  Plus,  By then DEC started doing split transactions on the memory bus ala the SMI which was supposed to be private. After the BI mess,   And then By the time of Alpha BI morphed into PCI which was made open and of course is now Intel’s PCIe.   But that said, even today we need to make large memory windows available thru it for things like messaging and GPUs - so the differences can get really squishy.   OPA2 has a full MMU and can do a lot of cache protocols just because the message HW really looks to memory a whole lot like a CPU core.  And I expect future GPUs to work the same way.

Well, it is an easily observable fact that before the PDP-11/70, all 
PDP-11 models had their memory on the Unibus. So it was not only "an 
ability at the lower end", but the only option for a number of years. 
The need to grow beyond 18-bit addresses, as well as speed demands, 
drove the PDP-11/70 to abandon the Unibus for memory. The same was true 
for the PDP-11/44, PDP-11/24, PDP-11/84 and PDP-11/94, which also had 
22-bit addressing of memory and followed the PDP-11/70 in keeping 
memory separate from the Unibus. All other Unibus machines had their 
memory on the Unibus.

After Unibus (if we skip Q-bus) you had SBI, which was also used both 
for controllers (well, mostly bus adaptors) and memory, for the 
VAX-11/780. However, evaluation of where the bottleneck was on that 
machine led to the memory being moved away from SBI for the VAX-86x0 
machines. SBI never was used much for any direct controllers, but 
instead you had Unibus adapters, so here the Unibus was used as a pure 
I/O bus.

And BI came after that. (I'm ignoring the CMI of the VAX-11/750 and 
others here...)
By the time of the VAX, the Unibus was clearly only an I/O bus (for VAX 
and high-end PDP-11s). No VAX ever had memory on the Unibus. But the 
Unibus was not designed with the VAX in mind.

But unless my memory fails me, VAXBI had CPUs, memory and bus adapters 
as the most common kinds of devices (or "nodes", as I think they are 
called) on BI. Which really helped when you started doing multiprocessor 
systems, since you then had a shared bus for all memory and CPU 
interconnect. You could also have Unibus adapters on VAXBI, but it was 
never common.

And after VAXBI you got XMI. Which is also what early large Alphas used. 
Also with CPUs, memory and bus adapters. And in XMI machines, you 
usually have VAXBI adapters for peripherals, and of course on an XMI 
machine, you would not hook up CPUs or memory on the VAXBI, so in an XMI 
machine, VAXBI became de facto an I/O bus.

Not sure there is any relationship at all between VAXBI and PCI, by the way.

   Johnny

-- 
Johnny Billquist                  || "I'm on a bus
                                   ||  on a psychedelic trip
email: bqt at softjar.se             ||  Reading murder books
pdp is alive!                     ||  tryin' to stay hip" - B. Idol

From bqt at update.uu.se  Sun Jun 17 19:36:00 2018
From: bqt at update.uu.se (Johnny Billquist)
Date: Sun, 17 Jun 2018 11:36:00 +0200
Subject: [TUHS] maybe off-topic: Unix on a microcontroller
In-Reply-To: <mailman.1.1529200802.30486.tuhs@minnie.tuhs.org>
References: <mailman.1.1529200802.30486.tuhs@minnie.tuhs.org>
Message-ID: <d5f35579-d901-3d67-3611-60c0d806d500@update.uu.se>

On 2018-06-17 04:00, jnc at mercury.lcs.mit.edu  (Noel Chiappa) wrote:
>      > From: Johnny Billquist
> 
>      > It's a separate image (/netnix) that gets loaded at boot time, but it's
>      > run in the context of the kernel.
> 
> ISTR reading that it runs in Supervisor mode (no doubt so it could use the
> Supervisor mode virtual address space, and not have to go crazy with overlays
> in the Kernel space).

Yes. That rings a bell now that you mention it. Pretty sure you are correct.

	Johnny

-- 
Johnny Billquist                  || "I'm on a bus
                                   ||  on a psychedelic trip
email: bqt at softjar.se             ||  Reading murder books
pdp is alive!                     ||  tryin' to stay hip" - B. Idol


From tih at hamartun.priv.no  Sun Jun 17 20:27:33 2018
From: tih at hamartun.priv.no (Tom Ivar Helbekkmo)
Date: Sun, 17 Jun 2018 12:27:33 +0200
Subject: [TUHS] maybe off-topic: Unix on a microcontroller
In-Reply-To: <20180616192402.120E718C0A7@mercury.lcs.mit.edu> (Noel Chiappa's
 message of "Sat, 16 Jun 2018 15:24:02 -0400 (EDT)")
References: <20180616192402.120E718C0A7@mercury.lcs.mit.edu>
Message-ID: <m2fu1lr9m2.fsf@thuvia.hamartun.priv.no>

Noel Chiappa <jnc at mercury.lcs.mit.edu> writes:

> ISTR reading that it runs in Supervisor mode (no doubt so it could use the
> Supervisor mode virtual address space, and not have to go crazy with overlays
> in the Kernel space).

Indeed.  From "Installing and Operating 2.11BSD on the PDP-11":

    The networking in 2.11BSD runs in supervisor mode, separate from
    the mainstream kernel.  There is room without overlaying to hold
    both a SL/IP and ethernet driver.  This is a major win, as it
    allows the networking to maintain its mbufs in normal data
    space, among other things.

-tih
-- 
Most people who graduate with CS degrees don't understand the significance
of Lisp.  Lisp is the most important idea in computer science.  --Alan Kay

From ron at ronnatalie.com  Sun Jun 17 20:33:02 2018
From: ron at ronnatalie.com (Ronald Natalie)
Date: Sun, 17 Jun 2018 06:33:02 -0400
Subject: [TUHS] core
In-Reply-To: <2a35f8dd-e8b8-937a-1af9-d18ac31b8be2@update.uu.se>
References: <mailman.1.1529175600.3826.tuhs@minnie.tuhs.org>
 <86dcd19d-f805-f338-f190-ab38d1ac82c1@update.uu.se>
 <20B7C3F5-2E44-41AB-91E4-510451428C83@ccc.com>
 <2a35f8dd-e8b8-937a-1af9-d18ac31b8be2@update.uu.se>
Message-ID: <298B82DF-31A4-42EF-89A7-CBCFDECB32D3@ronnatalie.com>

> 
> Well, it is an easy observable fact that before the PDP-11/70, all PDP-11 models had their memory on the Unibus. So it was not only "an ability at the lower end", b

That’s not quite true.   While the 18-bit-addressed machines all had their memory directly accessible from the Unibus, the 45/50/55 had a separate bus for the (then new) semiconductor (bipolar or MOS) memory.




From dfawcus+lists-tuhs at employees.org  Sun Jun 17 22:15:16 2018
From: dfawcus+lists-tuhs at employees.org (Derek Fawcus)
Date: Sun, 17 Jun 2018 13:15:16 +0100
Subject: [TUHS] core
In-Reply-To: <20180616133716.2302F18C0A7@mercury.lcs.mit.edu>
References: <20180616133716.2302F18C0A7@mercury.lcs.mit.edu>
Message-ID: <20180617121515.GA73135@accordion.employees.org>

On Sat, Jun 16, 2018 at 09:37:16AM -0400, Noel Chiappa wrote:
> 
> And a major one from the very start of my career: the decision to remove the
> variable-length addresses from IPv3 and substitute the 32-bit addresses of
> IPv4.

Are you able to point to any document which still describes that variable length scheme?

I see that IEN 28 defines a variable length scheme (using version 2),
and that IEN 41 defines a different variable length scheme,
but is proposing to use version 4.

(IEN 44 looks a lot like the current IPv4).

DF


From jnc at mercury.lcs.mit.edu  Mon Jun 18 00:36:10 2018
From: jnc at mercury.lcs.mit.edu (Noel Chiappa)
Date: Sun, 17 Jun 2018 10:36:10 -0400 (EDT)
Subject: [TUHS] core
Message-ID: <20180617143610.B53FE18C0A7@mercury.lcs.mit.edu>

    > From: Derek Fawcus

    > Are you able to point to any document which still describes that
    > variable length scheme? I see that IEN 28 defines a variable length
    > scheme (using version 2)

That's the one; Version 2 of IP, but it was for Version 3 of TCP (described
here: IEN-21, Cerf, "TCP 3 Specification", Jan-78 ).

    > and that IEN 41 defines a different variable length scheme, but is
    > proposing to use version 4.

Right, that's a draft only (no code ever written for it), from just before the
meeting that substituted 32-bit addresses.

    > (IEN 44 looks a lot like the current IPv4).

Because it _is_ the current IPv4 (well, modulo the class A/B/C addressing
stuff). :-)

	Noel


From dfawcus+lists-tuhs at employees.org  Mon Jun 18 01:58:52 2018
From: dfawcus+lists-tuhs at employees.org (Derek Fawcus)
Date: Sun, 17 Jun 2018 16:58:52 +0100
Subject: [TUHS] core
In-Reply-To: <20180617143610.B53FE18C0A7@mercury.lcs.mit.edu>
References: <20180617143610.B53FE18C0A7@mercury.lcs.mit.edu>
Message-ID: <20180617155852.GA12237@accordion.employees.org>

On Sun, Jun 17, 2018 at 10:36:10AM -0400, Noel Chiappa wrote:
>     > From: Derek Fawcus
> 
>     > Are you able to point to any document which still describes that
>     > variable length scheme? I see that IEN 28 defines a variable length
>     > scheme (using version 2)
> 
> That's the one; Version 2 of IP, but it was for Version 3 of TCP (described
> here: IEN-21, Cerf, "TCP 3 Specification", Jan-78 ).

Ah - thanks.

So my scan of it suggests that only the host part of the address was 
extensible; but then, I guess the CIDR scheme could have eventually been 
applied to that portion.

The other thing obviously missing in the IEN 28 version is the TTL (which
appeared by IEN 41).

>     > and that IEN 41 defines a different variable length scheme, but is
>     > proposing to use version 4.
> 
> Right, that's a draft only (no code ever written for it), from just before the
> meeting that substituted 32-bit addresses.
> 
>     > (IEN 44 looks a lot like the current IPv4).
> 
> Because it _is_ the current IPv4 (well, modulo the class A/B/C addressing
> stuff). :-)

I wrote 'a lot', because it has the DF flag in the TOS field, and an OP bit
in the flags field; the CIDR vs A/B/C stuff didn't really change the rest.
But yeah - essentially what we still use now. Now 40 years on and still going.

The other bit I find amusing is the various movements of the port numbers:
obviously they were originally part of the combined header (e.g. IEN 26),
then the IEN 21 TCP has them in the middle of its header, and by IEN 44 they
sit in sort of their own header, split from the TCP header, which now omits ports.

Eventually they end up in the current location, as part of the start of the
TCP header (and UDP, etc), essentially combining that 'Port Header' with
whatever transport follows.
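That final layout is easy to see in any modern packet: the first four bytes of the TCP (or UDP) header are the source and destination ports. A small sketch, with made-up port values:

```python
import struct

# Minimal 20-byte TCP header with only the two 16-bit port fields filled
# in: source port 443, destination port 50000, everything else zeroed.
tcp_header = struct.pack("!HH", 443, 50000) + bytes(16)
src, dst = struct.unpack_from("!HH", tcp_header, 0)
print(src, dst)   # 443 50000
```

This placement at offset zero is also exactly why ICMP's "first N bytes after the IP header" convention is enough to recover the ports.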

DF


From tytso at mit.edu  Mon Jun 18 03:33:41 2018
From: tytso at mit.edu (Theodore Y. Ts'o)
Date: Sun, 17 Jun 2018 13:33:41 -0400
Subject: [TUHS] core
In-Reply-To: <20180616133716.2302F18C0A7@mercury.lcs.mit.edu>
References: <20180616133716.2302F18C0A7@mercury.lcs.mit.edu>
Message-ID: <20180617173341.GB31064@thunk.org>

On Sat, Jun 16, 2018 at 09:37:16AM -0400, Noel Chiappa wrote:
> I can't speak to the motivations of everyone who repeats these stories, but my
> professional career has been littered with examples of poor vision from
> technical colleagues (some of whom should have known better), against which I
> (in my role as an architect, which is necessarily somewhere where long-range
> thinking is - or should be - a requirement) have struggled again and again -
> sometimes successfully, more often, not....
> 
> Examples of poor vision are legion - and more importantly, often/usually seen
> to be such _at the time_ by some people - who were not listened to.

To be fair, it's really easy to be wise after the fact.  Let's
start with Unix; Unix is very bare-bones, while other OS architects
wanted to add lots of features that were spurned for simplicity's
sake.  Or we could compare X.500 versus LDAP, and X.400 versus SMTP.

It's easy to mock decisions that weren't forward-thinking enough; but
it's also really easy to mock failed protocols and designs that
collapsed of their own weight because architects added too much "maybe
it will be useful in the future".

The architects that designed the original (never shipped) Microsoft
Vista thought they had great "vision".  Unfortunately they added way
too much complexity and overhead to an operating system just in time
to watch Moore's law fail to provide enough headroom to support all
of that overhead.  Adding a database into the kernel and making it a
fundamental part of the file system?  OK, stupid?  How about adding
all sorts of complexity to VMS and network protocols to support
record-oriented files?

Sometimes an architect succeeding in adding their "vision" to a
product can be the worst thing that ever happened to an operating
system or, say, the entire OSI networking protocol suite.

> So, is poor vision common? All too common.

Definitely.  The problem is it's hard to figure out in advance which
is poor vision versus brilliant engineering to cut down the design so
that it is "as simple as possible", but nevertheless, "as complex as
necessary".

					- Ted


From jnc at mercury.lcs.mit.edu  Mon Jun 18 03:58:14 2018
From: jnc at mercury.lcs.mit.edu (Noel Chiappa)
Date: Sun, 17 Jun 2018 13:58:14 -0400 (EDT)
Subject: [TUHS] core
Message-ID: <20180617175814.D8BAD18C0A7@mercury.lcs.mit.edu>

    > From: "Theodore Y. Ts'o"

    > To be fair, it's really easy to be wise after the fact.

Right, which is why I added the caveat "seen to be such _at the time_ by some
people - who were not listened to".

    > failed protocols and designs that collapsed of their own weight because
    > architects added too much "maybe it will be useful in the future"

And there are also designs which failed because their designers were too
un-ambitious! Converting to a new system has a cost, and if the _benefits_
(which more or less has to mean new capabilities) of the new thing don't
outweigh the costs of conversion, it too will be a failure.

    > Sometimes an architect succeeding in adding their "vision" to a
    > product can be the worst thing that ever happened

A successful architect has to pay _very_ close attention to both the 'here and
now' (it has to be viable for contemporary use, on contemporary hardware, with
contemporary resources), and also the future (it has to have 'room to grow').

It's a fine edge to balance on - but for an architecture to be a great
success, it _has_ to be done.

    > The problem is it's hard to figure out in advance which is poor vision
    > versus brilliant engineering to cut down the design so that it is "as
    > simple as possible", but nevertheless, "as complex as necessary".
      
Absolutely. But it can be done. Let's look (as an example) at that IPv3->IPv4
addressing decision.

One of two things was going to be true of the 'Internet' (that name didn't
exist then, but it's a convenient tag): i) it was going to be a failure (in
which case it probably didn't matter what was done), or ii) it was going to be
a success, in which case that 32-bit field was clearly going to be a crippling
problem.

With that in hand, there was no excuse for that decision.

I understand why they ripped out variable-length addresses (I was just about
to start writing router code, and I know how hard it would have been), but in
light of the analysis immediately above, there was no excuse for looking
_only_ at the here and now, and not _also_ looking to the future.

	Noel


From nobozo at gmail.com  Mon Jun 18 05:50:39 2018
From: nobozo at gmail.com (Jon Forrest)
Date: Sun, 17 Jun 2018 12:50:39 -0700
Subject: [TUHS] core
In-Reply-To: <20180617173341.GB31064@thunk.org>
References: <20180616133716.2302F18C0A7@mercury.lcs.mit.edu>
 <20180617173341.GB31064@thunk.org>
Message-ID: <4505da5a-3e91-5e4a-fa22-135ee4ab4e91@gmail.com>



On 6/17/2018 10:33 AM, Theodore Y. Ts'o wrote:

> The architects that designed the original (never shipped) Microsoft
> Vista thought they had great "vision". 

There's a very fine boundary between "vision" and "hallucination".

Jon



From jnc at mercury.lcs.mit.edu  Mon Jun 18 07:18:32 2018
From: jnc at mercury.lcs.mit.edu (Noel Chiappa)
Date: Sun, 17 Jun 2018 17:18:32 -0400 (EDT)
Subject: [TUHS] core
Message-ID: <20180617211832.5C6B618C092@mercury.lcs.mit.edu>

    > From: Derek Fawcus <dfawcus+lists-tuhs at employees.org>

    > my scan of it suggests that only the host part of the address which were
    > extensible

Well, the division into 'net' and 'rest' does not appear to have been a hard
one at that point, as it later became.

    > The other thing obviously missing in the IEN 28 version is the TTL

Yes... interesting!

    > it has the DF flag in the TOS field, and an OP bit in the flags field

Yeah, small stuff like that got added/moved/removed around a lot.


    > the CIDR vs A/B/C stuff didn't really change the rest.

It made packet processing in routers quite different; routing lookups, the
routing table, etc became much more complex (I remember that change)! Also in
hosts, which had not yet had their understanding of fields in the addresses
lobotomized away (RFC-1122, Section 3.3.1).

Yes, the impact on code _elsewhere_ in the stack was minimal, because the
overall packet format didn't change, and addresses were still 32 bits, but...


    > The other bit I find amusing are the various movements of the port
    > numbers

Yeah, there was a lot of discussion about whether they were properly part of
the internetwork layer, or the transport. I'm not sure there's really a 'right'
answer; PUP:

  http://gunkies.org/wiki/PARC_Universal_Packet

made them part of the internetwork header, and seemed to do OK.

I think we eventually decided that we didn't want to mandate a particular port
name size across all transports, and moved it out. This had the down-side that
there are some times when you _do_ want to have the port available to an
IP-only device, which is why ICMP messages return the first N bytes of the
data _after_ the IP header (since it's not clear where the port field[s] will
be).
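
To make that concrete (a sketch in modern Python, my own illustration rather than anything from the period): an ICMP error carries the original IP header plus the first 8 data bytes, which is just enough to dig the ports back out wherever the variable-length header happens to put them.

```python
import struct

def ports_from_icmp_payload(payload: bytes):
    """Recover transport-layer ports from the 'original datagram' bytes
    that an ICMP error message carries (IP header + first 8 data bytes).
    Works regardless of IP option length, because the IHL field says
    where the header ends -- which is the whole point of returning
    bytes _after_ the header."""
    ihl = (payload[0] & 0x0F) * 4        # header length, in bytes
    if payload[9] in (6, 17):            # protocol field: TCP or UDP
        return struct.unpack('!HH', payload[ihl:ihl + 4])
    return None                          # e.g. a routing protocol: no ports

# A minimal 20-byte IPv4 header (version 4, IHL 5, protocol 17 = UDP),
# followed by the first 8 bytes of a UDP datagram.
hdr = bytes([0x45]) + bytes(8) + bytes([17]) + bytes(10)
udp = struct.pack('!HHHH', 1234, 53, 8, 0)
assert ports_from_icmp_payload(hdr + udp) == (1234, 53)
```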

But I found, working with PUP, there were some times when the defined ports
didn't always make sense with some protocols (although PUP didn't really have
a 'protocol' field per se); the interaction of 'PUP type' and 'socket' could
sometimes be confusing/problematic. So I personally felt that was at least as
good a reason to move them out. 'Ports' make no sense for routing protocols,
etc.

Overall, I think in the end, TCP/IP got that all right - the semantics of the
'protocol' field are clear and simple, and ports in the transport layer have
worked well; I can't think of any places (other than routers which want to
play games with connections) where not having ports in the internetwork layer
has been an issue.

    Noel


From bqt at update.uu.se  Mon Jun 18 15:58:56 2018
From: bqt at update.uu.se (Johnny Billquist)
Date: Mon, 18 Jun 2018 07:58:56 +0200
Subject: [TUHS] core
In-Reply-To: <mailman.1.1529287201.13697.tuhs@minnie.tuhs.org>
References: <mailman.1.1529287201.13697.tuhs@minnie.tuhs.org>
Message-ID: <de343543-a222-e490-1cb0-cf59242a2684@update.uu.se>

On 2018-06-18 04:00, Ronald Natalie <ron at ronnatalie.com> wrote:
>> Well, it is an easy observable fact that before the PDP-11/70, all PDP-11 models had their memory on the Unibus. So it was not only "an ability at the lower end", b
> That’s not quite true.   While the 18 bit addressed machines all had their memory directly accessible from the Unibus, the 45/50/55 had a separate bus for the (then new) semiconductor (bipolar or MOS) memory.

Eh... Yes... But...
The "separate" bus for the semiconductor memory is just a second Unibus, 
so the statement is still true. All (earlier) PDP-11 models had their 
memory on the Unibus. Including the 11/45,50,55.
It's just that those models have two Unibuses.

   Johnny

-- 
Johnny Billquist                  || "I'm on a bus
                                   ||  on a psychedelic trip
email: bqt at softjar.se             ||  Reading murder books
pdp is alive!                     ||  tryin' to stay hip" - B. Idol


From tfb at tfeb.org  Mon Jun 18 19:16:26 2018
From: tfb at tfeb.org (Tim Bradshaw)
Date: Mon, 18 Jun 2018 10:16:26 +0100
Subject: [TUHS] core
In-Reply-To: <20180617175814.D8BAD18C0A7@mercury.lcs.mit.edu>
References: <20180617175814.D8BAD18C0A7@mercury.lcs.mit.edu>
Message-ID: <24E1059F-F3ED-45EE-915E-6F32EFBAFA50@tfeb.org>

On 17 Jun 2018, at 18:58, Noel Chiappa <jnc at mercury.lcs.mit.edu> wrote:
> 
> One of two things was going to be true of the 'Internet' (that name didn't
> exist then, but it's a convenient tag): i) It was going to be a failure (in
> which case, it probably didn't matter what was done, or ii) it was going to be
> a success, in which case that 32-bit field was clearly going to be a crippling
> problem.

There's a third possibility: if implementation of the specification is too complicated it will fail, while if the implementation is simple enough it may succeed, in which case the specification which allowed the simple implementation will eventually become a problem.

I don't know whether IP falls into that category, but I think a lot of things do.

--tim
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://minnie.tuhs.org/pipermail/tuhs/attachments/20180618/1e4c3b3a/attachment.html>

From tfb at tfeb.org  Mon Jun 18 19:25:03 2018
From: tfb at tfeb.org (Tim Bradshaw)
Date: Mon, 18 Jun 2018 10:25:03 +0100
Subject: [TUHS] core
In-Reply-To: <CAC20D2MEogGGZh+nXGVfmWMup8GPdPySA-dKYexM2k1OT9TdPA@mail.gmail.com>
References: <20180615152542.E1EC918C08C@mercury.lcs.mit.edu>
 <alpine.BSF.2.21.999.1806160855070.68981@aneurin.horsfall.org>
 <alpine.BSF.2.21.999.1806151618550.51464@orthanc.ca>
 <alpine.BSF.2.21.999.1806161611250.68981@aneurin.horsfall.org>
 <CAC20D2MEogGGZh+nXGVfmWMup8GPdPySA-dKYexM2k1OT9TdPA@mail.gmail.com>
Message-ID: <EE68EB27-E780-476A-867C-61DF328D1B9C@tfeb.org>

Apropos of the 'my iPhone has more power than our System-360/50, but it has nowhere near the sheer I/O throughput of a mainframe' comment: there's obviously no doubt that devices like phones (and laptops, desktops &c) are I/O-starved compared to serious machines, but comparing the performance of an iPhone and a 360/50 seems to be a matter of choosing how fine the dust you want the 360/50 to be ground into should be.

The 360/50 could, I think, transfer 4 bytes every 2 microseconds to/from main memory, which is 2MB/s (16Mb/s).  I've just measured my iPhone (6): it can do about 36Mb/s ... over WiFi, backed by a 4G cellular connection (this is towards the phone, the other direction is much worse).  Over WiFi with something serious behind it, it gets 70-80Mb/s in both directions.  I have no idea what the raw WiFi bandwidth limit is, still less what the raw memory bandwidth is or its bandwidth to 'disk' (ie flash or whatever the storage in the phone is), but the phone has much more bandwidth over a partly wireless network to an endpoint tens or hundreds of miles away than the 360/50 had to main memory.
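
(For the record, the back-of-envelope arithmetic behind that memory figure, assuming the 4-bytes-per-2-microseconds cycle is right:)

```python
# 360/50 main memory: one 4-byte fetch every 2 microseconds.
bytes_per_s = 4 / 2e-6
assert bytes_per_s == 2_000_000            # 2 MB/s...
assert bytes_per_s * 8 / 1e6 == 16.0       # ...i.e. 16 Mbit/s
```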

From tfb at tfeb.org  Mon Jun 18 21:06:23 2018
From: tfb at tfeb.org (Tim Bradshaw)
Date: Mon, 18 Jun 2018 12:06:23 +0100
Subject: [TUHS] In Memorium: Alan Turing
In-Reply-To: <CAKr6gn1yCF3+=r5we-dtpgeRWJsAhju_USZbuVC6zThZXZb_Gg@mail.gmail.com>
References: <alpine.BSF.2.21.999.1806071123240.68981@aneurin.horsfall.org>
 <20180607023457.GN25584@mcvoy.com>
 <CAKr6gn1yCF3+=r5we-dtpgeRWJsAhju_USZbuVC6zThZXZb_Gg@mail.gmail.com>
Message-ID: <7AB73139-9EEA-4B9C-81A7-A1EE8D0970E5@tfeb.org>

On 7 Jun 2018, at 03:59, George Michaelson <ggm at algebras.org> wrote:
> 
> 
> Sid was a professor at Edinburgh, alongside Donald Michie who had
> worked with Turing. They (Sid and Donald) fought like cats and dogs
> over the AI story, there was no love lost there nor much mutual
> respect, so nothing came from that side, and Donald is dead too. I
> never really spoke to him about anything. By the time I had any sense
of interesting questions to ask, it wasn't going to happen.

This is tangential perhaps, but, having been at Edinburgh in the (late) 1980s and 1990s -- I met Donald Michie a couple of times and I think my wife was taught by your father -- I wonder if anyone is properly writing down (I mean, writing book(s) about) the history of electronic computing, and AI in particular, in the UK?  Although (almost?) all the first generation of people must be gone now, there must be many people who knew them well &/or were their students.  But in due course there won't be, and all the first-hand information will be gone.  It probably wasn't really *possible* to write it down in a sensible way until the BP stuff became public, because so much of the early part would have made no sense ('how did so many people know what to do?'), but there's a window when it is possible to make sense of it, which is probably still open now but will close.

If there are good books on this I'd be interested.

--tim
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://minnie.tuhs.org/pipermail/tuhs/attachments/20180618/8f81c830/attachment.html>

From ron at ronnatalie.com  Mon Jun 18 22:39:29 2018
From: ron at ronnatalie.com (Ronald Natalie)
Date: Mon, 18 Jun 2018 08:39:29 -0400
Subject: [TUHS] core
In-Reply-To: <de343543-a222-e490-1cb0-cf59242a2684@update.uu.se>
References: <mailman.1.1529287201.13697.tuhs@minnie.tuhs.org>
 <de343543-a222-e490-1cb0-cf59242a2684@update.uu.se>
Message-ID: <C3BAEEAA-F448-4E6B-8A76-B7E2E492B242@ronnatalie.com>



> On Jun 18, 2018, at 1:58 AM, Johnny Billquist <bqt at update.uu.se> wrote:
> 
> On 2018-06-18 04:00, Ronald Natalie <ron at ronnatalie.com> wrote:
>>> Well, it is an easy observable fact that before the PDP-11/70, all PDP-11 models had their memory on the Unibus. So it was not only "an ability at the lower end", b
>> That’s not quite true.   While the 18 bit addressed machines all had their memory directly accessible from the Unibus, the 45/50/55 had a separate bus for the (then new) semiconductor (bipolar or MOS) memory.
> 
> Eh... Yes... But...
> The "separate" bus for the semiconductor memory is just a second Unibus, so the statement is still true. All (earlier) PDP-11 models had their memory on the Unibus. Including the 11/45,50,55.
> It's just that those models have two Unibuses.

No, you are confusing different things, I think.   The fastbus (where the memory was) is a distinct bus.    The fastbus was dual ported, with the CPU on one port; a second UNIBUS could be connected to the other port.



From dot at dotat.at  Mon Jun 18 22:36:50 2018
From: dot at dotat.at (Tony Finch)
Date: Mon, 18 Jun 2018 13:36:50 +0100
Subject: [TUHS] core
In-Reply-To: <20180616133716.2302F18C0A7@mercury.lcs.mit.edu>
References: <20180616133716.2302F18C0A7@mercury.lcs.mit.edu>
Message-ID: <alpine.DEB.2.11.1806181335590.916@grey.csi.cam.ac.uk>

Noel Chiappa <jnc at mercury.lcs.mit.edu> wrote:
>
> (As did [Dave Reed's] wonderful idea for bottom-up address allocation,
> instead of top-down. Oh well.)

Was this written down anywhere?

Tony.
-- 
f.anthony.n.finch  <dot at dotat.at>  http://dotat.at/
the widest possible distribution of wealth


From jnc at mercury.lcs.mit.edu  Tue Jun 19 00:51:27 2018
From: jnc at mercury.lcs.mit.edu (Noel Chiappa)
Date: Mon, 18 Jun 2018 10:51:27 -0400 (EDT)
Subject: [TUHS] core
Message-ID: <20180618145127.BB45318C08A@mercury.lcs.mit.edu>

    > From: Tony Finch <dot at dotat.at>

    > Was this written down anywhere?

Alas, no. It was a presentation at a group seminar, and used either hand-drawn
transparencies, or a white-board - don't recall exactly which. I later tried to
dig it up for use in Nimrod, but without success.

As best I now recall, the concept was that instead of the namespace having a
root at the top, from which you had to allocate downward (and then recurse),
it built _upward_ - if two previously un-connected chunks of graph wanted to
unite in a single system, they allocated a new naming layer on top, in which
each existing system appeared as a constituent.

Or something like that! :-)
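
A toy rendering of the idea in code (my reconstruction of the concept, not anything of Dave's):

```python
def unite(*systems):
    """Bottom-up allocation: previously unconnected naming systems join
    by creating a new layer *above* themselves, in which each appears
    as one constituent.  No pre-existing global root is ever needed."""
    return {str(i): s for i, s in enumerate(systems)}

lan_a = {'alpha': 'host', 'beta': 'host'}
lan_b = {'gamma': 'host'}
internet = unite(lan_a, lan_b)                # the new top layer
# 'alpha' is now reachable as ('0', 'alpha'); each old name survives intact
assert internet['0']['alpha'] == 'host'
bigger = unite(internet, {'delta': 'host'})   # and the process can recurse
assert bigger['0']['0']['alpha'] == 'host'
```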


The issue with 'top-down' is that you have to have some global 'authority' to
manage the top level - hand out chunks, etc, etc. (For a spectacular example
of where this can go, look at NSAP's.) And what do you do when you run out of
top-level space? (Although in the NSAP case, they had such a complex top
couple of layers, they probably would have avoided that issue. Instead, they
had the problem that their name-space was spectacularly ill-suited to path
selection [routing], since in very large networks, interface names
[addresses] must have a topological aspect if the path selection is to
scale. Although looking at the Internet nowadays, perhaps not!)

'Bottom-up' is not without problems of course (e.g. what if you want to add
another layer, e.g. to support potentially-nested virtual machines).

I'm not sure how well Dave understood the issue of path selection scaling at
the time he proposed it - it was very early on, '78 or so - since we didn't
understand path selection then as well as we do now. IIRC, I think he was
mostly interested in it as a way to avoid having to have an assignment
authority. The attraction for me was that it was easier to ensure that the
names had the needed topological aspect.

      Noel


From clemc at ccc.com  Tue Jun 19 00:56:21 2018
From: clemc at ccc.com (Clem Cole)
Date: Mon, 18 Jun 2018 10:56:21 -0400
Subject: [TUHS] core
In-Reply-To: <20180617173341.GB31064@thunk.org>
References: <20180616133716.2302F18C0A7@mercury.lcs.mit.edu>
 <20180617173341.GB31064@thunk.org>
Message-ID: <CAC20D2MfPSYDhgUFKBU0rJAtbv8mE611eqoH1+yQeCKQpe2riA@mail.gmail.com>

On Sun, Jun 17, 2018 at 1:33 PM, Theodore Y. Ts'o <tytso at mit.edu> wrote:

> On Sat, Jun 16, 2018 at 09:37:16AM -0400, Noel Chiappa wrote:
> > I can't speak to the motivations of everyone who repeats these stories,
> but my
> > professional career has been littered with examples of poor vision from
> > technical colleagues (some of whom should have known better), against
> which I
> > (in my role as an architect, which is necessarily somewhere where
> long-range
> > thinking is - or should be - a requirement) have struggled again and
> again -
> > sometimes successfully, more often, not....
> >
> > Examples of poor vision are legion - and more importantly, often/usually
> seen
> > to be such _at the time_ by some people - who were not listened to.
>
> To be fair, it's really easy to be wise to after the fact.  Let's
> start with Unix; Unix is very bare-bones, when other OS architects
> wanted to add lots of features that were spurned for simplicity's
> sake.

Amen brother.  I refer to this as figuring out and understanding what
matters and what is really just window dressing.  That is much easier to
do after the fact, and for those of us who lived UNIX, we spent a lot of
time defending it.  Many of the 'attacks' were from systems like VMS and
RSX that were thought to be more 'complete' or 'professional.'


> Or we could compare X.500 versus LDAP, and X.400 and SMTP.
>
Hmmm. I'll accept X.500, but SMTP I always said was hardly 'simple' -
although compared to what it replaced (FTPing files and remote execution)
it was.


>
> It's easy to mock decisions that weren't forward-thinking enough; but
> it's also really easy to mock failed protocols and designs that
> collapsed of their own weight because architects added too much "maybe
> it will be useful in the future".
>
+1


>
> ​...
>  Adding a database into the kernel and making it a
> fundamental part of the file system?  OK, stupid?  How about adding
> all sorts of complexity in VMS and network protocols to support
> record-oriented files?
>
tjt once put it well:  'It's not so bad that RMS has 250-1000 options, but
someone has to check for each of them on every I/O.'



>
> Sometimes having architects being successful to add their "vision" to
> a product can be the worst thing that ever happened to an operating system
> or, say, the entire OSI networking protocol suite.
>
I'll always describe it as having 'good taste.'  And part of 'good taste'
is learning what really works and what really does not.  BTW: having good
taste in one thing does not necessarily give you license in another area.
And I think that is a common issue.   "Hey, we were successful here, we
must be geniuses..."   Much of DEC's SW was good, but not all of it, as an
example.  Or to pick on my own BSD experience, sendmail is a great example
of something that solved a problem we had, but boy do I wish Eric had not
screwed the SMTP daemon into it ....



>
> > So, is poor vision common? All too common.
>
> Definitely.  The problem is it's hard to figure out in advance which
> is poor vision versus brilliant engineering to cut down the design so
> that it is "as simple as possible", but nevertheless, "as complex as
> necessary".

Exactly...   or as was said before:  *as simple as possible, but not
simpler.*

But I like to add that understanding 'possible' is different from 'it
works.'
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://minnie.tuhs.org/pipermail/tuhs/attachments/20180618/a92d1f5e/attachment.html>

From imp at bsdimp.com  Tue Jun 19 00:58:53 2018
From: imp at bsdimp.com (Warner Losh)
Date: Mon, 18 Jun 2018 08:58:53 -0600
Subject: [TUHS] core
In-Reply-To: <20180618145127.BB45318C08A@mercury.lcs.mit.edu>
References: <20180618145127.BB45318C08A@mercury.lcs.mit.edu>
Message-ID: <CANCZdfqUpG1i6sr34nE3AYO_7QXo1dVnr+WhvB9Hb8F9p1pWHQ@mail.gmail.com>

On Mon, Jun 18, 2018 at 8:51 AM, Noel Chiappa <jnc at mercury.lcs.mit.edu>
wrote:

> I'm not sure how well Dave understood the issue of path selection scaling
> at
> the time he proposed it - it was very early on, '78 or so - since we didn't
> understand path selection then as well as we do now. IIRC, I think he was
> mostly was interested in it as a way to avoid having to have an asssignment
> authority. The attraction for me was that it was easier to ensure that the
> names had the needed topological aspect.


I always thought the domain names were backwards, but that's because I
wanted command completion to work on them. I have not reevaluated this
view, though, since the very early days of the internet and I suppose
completion would be less useful than I'd originally supposed because the
cost to get the list is high or administratively blocked (tell me all the
domains that start with, in my notation, "com.home" when I hit <tab> in a
fast, easy way).
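
What 'backwards' means here, in a couple of lines (using my notation from above):

```python
def completion_key(name: str) -> str:
    """Most-significant label first, so prefix completion (and sorting)
    groups all the names under one domain together."""
    return '.'.join(reversed(name.split('.')))

assert completion_key('home.example.com') == 'com.example.home'
# Sorted this way, everything under example.com is one contiguous run:
names = ['a.example.com', 'b.example.org', 'c.example.com']
assert sorted(map(completion_key, names)) == [
    'com.example.a', 'com.example.c', 'org.example.b']
```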

Warner
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://minnie.tuhs.org/pipermail/tuhs/attachments/20180618/99b969a2/attachment.html>

From tfb at tfeb.org  Tue Jun 19 01:33:57 2018
From: tfb at tfeb.org (Tim Bradshaw)
Date: Mon, 18 Jun 2018 16:33:57 +0100
Subject: [TUHS] core
In-Reply-To: <24E1059F-F3ED-45EE-915E-6F32EFBAFA50@tfeb.org>
References: <20180617175814.D8BAD18C0A7@mercury.lcs.mit.edu>
 <24E1059F-F3ED-45EE-915E-6F32EFBAFA50@tfeb.org>
Message-ID: <5808F9AB-0860-457B-BA04-4DBD67B5ACFB@tfeb.org>


> On 18 Jun 2018, at 15:33, Noel Chiappa <jnc at mercury.lcs.mit.edu> wrote:
> 
> When you say "if implementation of the specification is too complicated it
> will fail", I think you mean 'the specification is so complex that _any_
> implementation must _necessarily_ be so complex that all the implementations,
> and thus the specification itself, will fail'?

Yes, except that I don't think the specification itself has to be complex, it just needs to require that implementations be hard, computationally expensive, or both.  For instance a specification for integer arithmetic in a programming language which required that (a/b)*b be equal to a except where b is zero isn't complicated, I think, but it requires implementations which are either hard, computationally expensive, or both.
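
To make the (a/b)*b requirement concrete (a quick Python sketch of mine):

```python
from fractions import Fraction

a, b = 7, 2
# The cheap implementation: machine integer division truncates...
assert (a // b) * b == 6          # ...so (a/b)*b is not a
# An implementation honouring the spec needs exact rationals, i.e.
# arbitrary-precision arithmetic on every division -- the 'hard or
# computationally expensive' implementation the simple spec demands:
fa, fb = Fraction(a), Fraction(b)
assert (fa / fb) * fb == fa
```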

> 
> And for "in which case the specification which allowed the simple
> implementation will eventually become a problem", I think that's part of your
> saying 'you get to pick your poison; either the spec is so complicated it
> fails right away, or if it's simple enough to succeed in the short term, it
> will neccessarily fail in the long term'?

Yes, with the above caveat about the spec not needing to be complicated, and I'd say something like 'become limiting in the long term' rather than 'fail in the long term'.

So I think your idea was that, if there's a choice between a do-the-right-thing, future-proof (variable-length-address field) solution  or a will-work-for-now, implementationally-simpler (fixed-length) solution, then you should do the right thing, because if the thing fails it makes no odds and if it succeeds then you will regret the second solution. My caveat is that success (in the sense of wide adoption) is not independent of which solution you pick, and in particular that will-work-for-now solutions are much more likely to succeed than do-the-right-thing ones.

As I said, I don't know whether this applies to IP.

--tim

From jnc at mercury.lcs.mit.edu  Tue Jun 19 03:56:38 2018
From: jnc at mercury.lcs.mit.edu (Noel Chiappa)
Date: Mon, 18 Jun 2018 13:56:38 -0400 (EDT)
Subject: [TUHS] core
Message-ID: <20180618175638.383D418C086@mercury.lcs.mit.edu>

    > From: Clem Cole

    > My experience is that more often than not, it's less a failure to see
    > what a successful future might bring, and often one of well '*we don't
    > need to do that now/costs too much/we don't have the time*.'

Right, which is why I later said a "successful architect has to pay _very_
close attention to both the 'here and now' (it has to be viable for
contemporary use, on contemporary hardware, with contemporary resources)".
They need to be sensitive to the (real and valid) concerns of the people who
are looking at today.

By the same token, though, the people you mention need to be sensitive to the
long-term picture. Too often they just blow it off, and focus _solely_ on
today. (I had a really bad experience with someone like that just before I
retired.) They need to understand, and accept, that that's just as serious an
error as an architect who doesn't care about 'workable today'.

In retrospect, I'm not sure you can fix people who are like that. I think the
only solution is to find an architect who _does_ respect the 'here and now',
and put them in control; that's the only way. IBM kinda-sorta did this with
the /360, and I think it showed/shows.


    > I can imagine that he did not want to spend on anything that he thought
    > was wasteful.

Understandable. But see above...

The art is in finding a path that leaves the future open (i.e.  reduces future
costs, when you 'hit the wall'), without running up costs now.

A great example is the QBUS 22-bit expansion - and I don't know if this was
thought out beforehand, or if they just lucked out. (Given that the expanded
address pins were not specifically reserved for that, probably the latter.
Sigh - even with the experience of the UNIBUS, they didn't learn!)
Anyway.. lots of Q18 devices (well, not DMA devices) work fine on Q22 because
of the BBS7 signal, which indicates an I/O device register is being looked
at. Without that, Q18 devices would have either i) had to incur the cost now
of more bus address line transceivers, or ii) stopped working when the bus was
upgraded to 22 address lines.
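
As a sketch of that decode logic (field widths are from the 8KB I/O page; the register address below is illustrative, not from a DEC print set):

```python
def q18_device_responds(bbs7: bool, addr_low13: int, csr_offset: int) -> bool:
    """Why BBS7 saved Q18 peripherals on a Q22 bus: the device never
    decodes the upper address lines at all.  BBS7 asserts 'this cycle
    is in the I/O page', and the device only matches its register
    offset within that 8KB page (13 bits)."""
    return bbs7 and (addr_low13 & 0x1FFF) == csr_offset

# Whether addressing is 16, 18, or 22 bits, the I/O page moves (it is the
# top 8KB of a larger space), but the device's comparison is unchanged --
# no new address-line transceivers needed.
offset = 0o17560 & 0x1FFF        # a register offset within the I/O page
assert q18_device_responds(True, offset, offset)
assert not q18_device_responds(False, offset, offset)
```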

They managed to have their cake (fairly minimal costs now) and eat it too
(later expansion).


    > Just like I retold the Amdahl/Brooks story of the 8-bit byte and Amdahl
    > thinking Brooks was nuts

Don't think I've heard that one?


    >> the decision to remove the variable-length addresses from IPv3 and
    >> substitute the 32-bit addresses of IPv4.

    > I always wondered about the back story on that one.

My understanding is that the complexity of variable-length address support
(which impacted TCP, as well as IP) was impacting the speed/schedule for
getting stuff done. Remember, it was a very small effort, code-writing
resources were limited, etc.

(I heard, at the time, from someone who was there, that one implementer was
overheard complaining to Vint about the number of pointer registers available
at interrupt time in a certain operating system. I don't think it was _just_
that, but rather the larger picture of the overall complexity cost.)

    > 32-bits seemed infinite in those days and nobody expected the network
    > to scale to the size it is today and will grow to in the future

Yes, but like I said: they failed to ask themselves 'what are things going to
look like in 10 years if this thing is a success'? Heck, it didn't even last
10 years before they had to start kludging (adding A/B/C addresses)!

And ARP, well done as it is (its ability to handle just about any combo of
protocol and hardware addresses is because DCP and I saw eye-to-eye about
generality), is still a kludge. (Yes, yes, I know it's another binding layer,
and in some ways, another binding layer is never a bad thing, but...) The IP
architectural concept was to carry local hardware addresses in the low part of
the IP address. Once Ethernet came out, that was toast.


    >> So, is poor vision common? All too common.

    > But to be fair, you can also end up with being like DEC and often late
    > to the market.

Gotta do a perfect job of balance on that knife edge - like an Olympic gymnast
on the beam...

This is particularly true with comm system architecture, which has about the
longest lifetime of _any_ system. If someone comes up with a new editor or OS
paradigm, people can convert to it if they want. But converting to a new
communication system - if you convert, you cut yourself off. A new one has to
be a _huge_ improvement over the older gear (as TCP/IP was) before conversion
makes sense.

So networking architects have to pay particularly strong attention to the long
term - or should, if they are to be any good.


    > I think in both cases it would have allowed Alpha to be better
    > accepted if DEC had shipped earlier with a few hacks, but then improved
    > Tru64 as a better version was developed (*i.e.* replace the memory
    > system, the I/O system, the TTY handler, the FS just to name a few that
    > got rewritten from OSF/1 because folks thought they were 'weak').

But you can lose with that strategy too.

Multics had a lot of sub-systems re-written from the ground up over time, and
the new ones were always better (faster, more efficient) - as is common when
you have the experience/knowledge of the first pass.

Unfortunately, by that time it had the reputation as 'horribly slow and
inefficient', and in a lot of ways, never kicked that:

  http://www.multicians.org/myths.html

Sigh, sometimes you can't win!

      Noel


From clemc at ccc.com  Tue Jun 19 04:51:02 2018
From: clemc at ccc.com (Clem Cole)
Date: Mon, 18 Jun 2018 14:51:02 -0400
Subject: [TUHS] core
In-Reply-To: <20180618175638.383D418C086@mercury.lcs.mit.edu>
References: <20180618175638.383D418C086@mercury.lcs.mit.edu>
Message-ID: <CAC20D2MR9OHLabn-P7v=vd3VJ_3j1CgOgjMaghTD_4zaPWAcjA@mail.gmail.com>

On Mon, Jun 18, 2018 at 1:56 PM, Noel Chiappa <jnc at mercury.lcs.mit.edu>
wrote:

>
>     > Just like I retold the Amdahl/Brooks story of the 8-bit byte and
> Amdahl
>     > thinking Brooks was nuts
>
> Don't think I've heard that one?

Apologies for the repeat, if I have sent this to TUHS before (I know I have
mentioned it in other places).  Somebody else on this list mentioned David
Brailsford's YouTube video, which has more details, but he had some of it
not quite right.  I had watched it and we were communicating.  The actual
story behind the byte is a bit deeper than he describes in the lecture, for
which he thanked me - as I have since introduced him to my old friend &
colleague Russ Robelen, who was the chief designer of the Model 50 and
later led the ACS system with John Cocke working for him.  Brailsford is
right about the results of byte addressing, and I do think his lecture is
excellent and you can learn a great deal.  That said, Russ tells the
details of the story like this:

Gene Amdahl wanted the byte to be 6 bits and felt that 8 bits was
'wasteful' of his hardware.   Amdahl also did not see why more than 24 bits
for a word was really needed, and most computations used words of course.
4 6-bit bytes in a word seemed satisfactory to Gene.   Fred Brooks kept
kicking Amdahl out of his office and told him flatly that until he came
back with things that were powers of 2, don't bother - we couldn't program
it.   The 32-bit word was a compromise, but note the original address was
only 24 bits (3 8-bit bytes), although Brooks made sure all address
functions stored all 32 bits - which as Gordon Bell pointed out later was
the #1 thing that saved System 360 and made it last.
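
The programming issue Brooks was pointing at can be shown in a few lines (my sketch, not anything from Brooks):

```python
BYTES_PER_WORD = 4              # System/360: 8-bit bytes, 32-bit words

def word_and_offset(byte_addr: int):
    """With a power-of-two bytes-per-word ratio, splitting a byte
    address into (word, byte-within-word) is a shift and a mask; with
    any other ratio it is a genuine divide on every memory reference."""
    return byte_addr >> 2, byte_addr & 0x3

assert word_and_offset(13) == (3, 1)    # byte 13 = word 3, byte 1
```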


BTW: Russ and Ed Sussenguth invented speculative execution for the ACS
system a couple of years later.  I was harassing him last January, because
for 40 years we have been using his cool idea and it came back to bite us
at Intel.  Here is the message cut/pasted from his email for context:

"I see you are still leading a team at Intel developing super computers
and associated technologies. It certainly is exciting times in high speed
computing.  It brings back memories of my last work at IBM 50 years ago on
the ACS project. You know you are old when you publish in the IEEE Annals
of the History of Computing. One of the co-authors, Ed Sussenguth, passed
away before our paper was published in 2016.
  https://www.computer.org/csdl/mags/an/2016/01/man2016010060.html
Some of the work we did way back then has made the news in an unusual way
with the recent revelations on Spectre and Meltdown. I read the 'Spectre
Attacks: Exploiting Speculative Execution' paper yesterday trying to
understand how speculative execution was being exploited.  At ACS we were
the first group at IBM to come up with the notion of the Branch Table and
other techniques for speeding up execution.

I wish you were closer. I'd love to hear your views on the state of
computing today. I have a framed micrograph of the IBM Power 8 chip on the
wall in my office. In many ways the Power Series is an outgrowth of ACS.
I still try to keep up with what is happening in my old field. The recent
advances by Google in Deep Learning are breathtaking to me.  Advances like
AlphaGo Zero I never expected to see in my lifetime."



>
> But you can lose with that strategy too.
>
> Multics had a lot of sub-systems re-written from the ground up over time,
> and
> the new ones were always better (faster, more efficient) - a common even
> when
> you have the experience/knowledge of the first pass.
>
> Unfortunately, by that time it had the reputation as 'horribly slow and
> inefficient', and in a lot of ways, never kicked that:
>
>   http://www.multicians.org/myths.html
>
> Sigh, sometimes you can't win!

Yep - although I think that may have been a case of economics.   Multics,
for all its great ideas, was just a huge system, when Moore's law started
to make smaller systems possible.  So I think your comment about thinking
about what you need now and what you will need in the future was part of
the issue.


I look at Multics vs Unix the same way I look at TNC vs Beowulf clusters.
At the time we did TNC, we worked really hard to nail the transparency
thing, and we did.   It was (is) awesome.  But it cost.   Tru64 (VMS and
other SSI) style clusters are not around today.  The hack that is Beowulf
is what lived on.   The key is that it was good enough, and for most
people that extra work we did to get rid of those seams just was not worth
it.   And because Beowulf was economically successful, things were
implemented for it that were never even considered for Tru64 and the SSI
style systems.     To me, Multics and Unix have the same history.
Multics was (is) cool; Unix is similar but different, and took the lead.
The key is that it was not a straight path.  Once Unix took over, history
went in a different direction.

Clem
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://minnie.tuhs.org/pipermail/tuhs/attachments/20180618/41cca787/attachment.html>

From bqt at update.uu.se  Tue Jun 19 07:13:32 2018
From: bqt at update.uu.se (Johnny Billquist)
Date: Mon, 18 Jun 2018 23:13:32 +0200
Subject: [TUHS] core
In-Reply-To: <20180618121738.98BD118C087@mercury.lcs.mit.edu>
References: <20180618121738.98BD118C087@mercury.lcs.mit.edu>
Message-ID: <f6d2ac5f-0e1f-0216-22dc-2913ac71fea8@update.uu.se>

On 2018-06-18 14:17, Noel Chiappa wrote:
>      > The "separate" bus for the semiconductor memory is just a second Unibus
> 
> Err, no. :-) There is a second UNIBUS, but... its source is a second port on
> the FASTBUS memory, the other port goes straight to the CPU. The other UNIBUS
> comes out of the CPU. It _is_ possible to join the two UNIBI together, but
> on machines which don't do that, the _only_ path from the CPU to the FASTBUS
> memory is via the FASTBUS.

Ah. You and Ron are right. I am confused.

So there were some previous PDP-11 models that did not have their memory 
on the Unibus. The 11/45, 50 and 55 accessed memory from the CPU not 
through the Unibus, but through the fastbus, which was a pure memory bus, 
as far as I understand. You (obviously) could also have memory on the 
Unibus, but that would then be slower.

Ah, and there is a jumper to tell which addresses are served by the 
fastbus, and the rest then go to the Unibus. Thanks, I had missed these 
details before. (To be honest, I have never actually worked on any of 
those machines.)

   Johnny

-- 
Johnny Billquist                  || "I'm on a bus
                                   ||  on a psychedelic trip
email: bqt at softjar.se             ||  Reading murder books
pdp is alive!                     ||  tryin' to stay hip" - B. Idol


From dave at horsfall.org  Tue Jun 19 10:07:42 2018
From: dave at horsfall.org (Dave Horsfall)
Date: Tue, 19 Jun 2018 10:07:42 +1000 (EST)
Subject: [TUHS] In memorium: T.J. Watson
Message-ID: <alpine.BSF.2.21.999.1806190955470.68981@aneurin.horsfall.org>

We lost the founder of IBM, Thomas J. Watson, on this day in 1956 (and I 
have no idea whether or not he was buried 9-edge down).

Oh, and I cannot find any hard evidence that he said "Nobody ever got 
fired for buying IBM"; can anyone help?  I suspect that it was a media 
beat-up from his PR department i.e. "fake news"...

-- 
Dave Horsfall DTM (VK2KFU)  "Those who don't understand security will suffer."


From doug at cs.dartmouth.edu  Tue Jun 19 21:50:27 2018
From: doug at cs.dartmouth.edu (Doug McIlroy)
Date: Tue, 19 Jun 2018 07:50:27 -0400
Subject: [TUHS] core
Message-ID: <201806191150.w5JBoR34045549@tahoe.cs.Dartmouth.EDU>

> As best I now recall, the concept was that instead of the namespace having a
> root at the top, from which you had to allocate downward (and then recurse),
> it built _upward_ - if two previously un-connected chunks of graph wanted to
> unite in a single system, they allocated a new naming layer on top, in which
> each existing system appeared as a constituent.

The Newcastle Connection (aka Unix United) implemented this idea.
Name spaces could be pasted together simply by making .. at the
roots point "up" to a new superdirectory. I do not remember whether
UIDS had to agree across the union (as in NFS) or were mapped (as
in RFS).
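The ".." pasting trick is easy to sketch. Below is a toy model (my own
illustration, not Newcastle Connection code; the machine and path names
are invented) of uniting two rooted name spaces by allocating a new
naming layer on top:

```python
class Dir:
    def __init__(self, name):
        self.name = name
        self.entries = {}
        self.parent = self          # a root's ".." initially points to itself

    def mkdir(self, name):
        child = Dir(name)
        child.parent = self
        self.entries[name] = child
        return child

def unite(*roots, top_name="super"):
    """Paste formerly separate name spaces together by allocating a
    new naming layer on top; each old root's ".." now points "up"."""
    top = Dir(top_name)
    for r in roots:
        top.entries[r.name] = r
        r.parent = top
    return top

def resolve(d, path):
    """Walk a relative path such as '../machB/usr/doug'."""
    for comp in path.split("/"):
        if comp in ("", "."):
            continue
        d = d.parent if comp == ".." else d.entries[comp]
    return d

# Two machines, each with its own rooted tree:
a = Dir("machA"); a.mkdir("usr")
b = Dir("machB"); b.mkdir("usr").mkdir("doug")

unite(a, b)
# From machA's root, ".." now crosses into the union:
print(resolve(a, "../machB/usr/doug").name)   # -> doug
```

The only change to each existing system is where its root's ".." points,
which is what made the scheme so simple to retrofit.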

Doug


From jnc at mercury.lcs.mit.edu  Tue Jun 19 22:23:59 2018
From: jnc at mercury.lcs.mit.edu (Noel Chiappa)
Date: Tue, 19 Jun 2018 08:23:59 -0400 (EDT)
Subject: [TUHS] core
Message-ID: <20180619122359.1525D18C084@mercury.lcs.mit.edu>

    > From: Doug McIlroy <doug at cs.dartmouth.edu>

    > Core memory wiped out competing technologies (Williams tube, mercury
    > delay line, etc) almost instantly and ruled for over twenty years.

I never lived through that era, but reading about it, I'm not sure people now
can really fathom just how big a step forward core was - how expensive, bulky,
flaky, low-capacity, etc, etc prior main memory technologies were.

In other words, there's a reason they were all dropped like hot potatoes in
favour of core - which, looked at from our DRAM-era perspective, seems
quaintly dinosaurian. Individual pieces of hardware you can actually _see_
with the naked eye, for _each_ bit? But that should give some idea of how much
worse everything before it was, that it killed them all off so quickly!

There's simply no question that without core, computers would not have
advanced (in use, societal importance, technical depth, etc) at the speed
they did. It was one of the most consequential steps in the development
of computers to what they are today: up there with transistors, ICs, DRAM
and microprocessors.

    > Yet late in his life Forrester told me that the Whirlwind-connected
    > invention he was most proud of was marginal testing

Given the above, I'm totally gobsmacked to hear that. Margin testing was
important, yes, but not even remotely on the same quantum level as core.

In trying to understand why he said that, I can only suppose that he felt that
core was 'the work of many hands', which it was (see e.g. "Memories That
Shaped an Industry", pg. 212, and the things referred to there), and so he
only deserved a share of the credit for it.

Is there any other explanation? Did he go into any depth as to _why_ he felt
that way?

       Noel


From clemc at ccc.com  Tue Jun 19 22:58:36 2018
From: clemc at ccc.com (Clem Cole)
Date: Tue, 19 Jun 2018 08:58:36 -0400
Subject: [TUHS] core
In-Reply-To: <20180619122359.1525D18C084@mercury.lcs.mit.edu>
References: <20180619122359.1525D18C084@mercury.lcs.mit.edu>
Message-ID: <CAC20D2OWCgm0GXg2-RYcnX97Sm_n3TUpL-YUSfUayFWVUNYhog@mail.gmail.com>

On Tue, Jun 19, 2018 at 8:23 AM, Noel Chiappa <jnc at mercury.lcs.mit.edu>
wrote:

>     > From: Doug McIlroy <doug at cs.dartmouth.edu>
>
>     > Yet late in his life Forrester told me that the Whirlwind-connected
>     > invention he was most proud of was marginal testing
>
> Given the above, I'm totally gobsmacked to hear that. Margin testing was
> important, yes, but not even remotely on the same quantum level as core.

Wow -- I had exactly the same reaction.  To me, core was the second
most important invention (semiconductor switching being the first) for
making computing practical.  I was thinking that systems must have been
really bad (worse than I knew) from a reliability standpoint if he put
marginal testing up there as more important than core.

Like you, I thought core memory was pretty darned important.  I never used
a system that had Williams tubes, although we had one in storage, so I knew
what it looked like and knew how much more 'dense' core was compared to
it - which is still pretty amazing compared to today.  To give the modern
user a sense of scale, a 1M core box for the IBM 360 (of which we had 4)
was made up of 4 19" relay racks, each about 54" high and 24" deep.  If
you go to CMU Computer Photos from Chris Hausler
<http://www.silogic.com/Athena/CMU%20Photos%20from%20Chris%20Hausler.html>
and scroll down you can see some pictures of the old 360 (including me in
them, circa 75/76, in front of it) to gauge the size.



FWIW:
I broke in with MECL, which Motorola invented/developed for IBM's System
360; it and TTL were the first logic families I learned to design with.
I remember the margin pots on the front of the 360 that we used when we
were trying to find weak gates, which happened about once every 10 days.

The interesting part to me is that I suspect the PDP-10s and the Univac
1108 broke as often as the 360 did, but I have fewer memories of chasing
problems with them.  Probably because the 'down' time was less of an
issue there, disrupting fewer people.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://minnie.tuhs.org/pipermail/tuhs/attachments/20180619/7991b444/attachment.html>

From jpl.jpl at gmail.com  Tue Jun 19 23:53:36 2018
From: jpl.jpl at gmail.com (John P. Linderman)
Date: Tue, 19 Jun 2018 09:53:36 -0400
Subject: [TUHS] core
In-Reply-To: <CAC20D2OWCgm0GXg2-RYcnX97Sm_n3TUpL-YUSfUayFWVUNYhog@mail.gmail.com>
References: <20180619122359.1525D18C084@mercury.lcs.mit.edu>
 <CAC20D2OWCgm0GXg2-RYcnX97Sm_n3TUpL-YUSfUayFWVUNYhog@mail.gmail.com>
Message-ID: <CAC0cEp_yOaic-xej_C4SPFgP1GhWa-_3zMUL0taBOOusq6gLMA@mail.gmail.com>

If I read the wikipedia entry for Whirlwind correctly (not a safe
assumption), it was tube based, and I think there was a tradeoff of speed,
as determined by power, and tube longevity. Given the purpose, early
warning of air attack, speed was vital, but so, too, was keeping it alive.
So a means of finding a "sweet spot" was really a matter of national
security. I can understand Forrester's pride in that context.

On Tue, Jun 19, 2018 at 8:58 AM, Clem Cole <clemc at ccc.com> wrote:

>
>
> On Tue, Jun 19, 2018 at 8:23 AM, Noel Chiappa <jnc at mercury.lcs.mit.edu>
> wrote:
>
>>     > From: Doug McIlroy <doug at cs.dartmouth.edu>
>>
>>     > Yet late in his life Forrester told me that the Whirlwind-connected
>>     > invention he was most proud of was marginal testing
>>
>> Given the above, I'm totally gobsmacked to hear that. Margin testing was
>> important, yes, but not even remotely on the same quantum level as core.
>
> ​Wow -- I had exactly the same reaction.     To me, core was the second
> most important invention (semiconductors switching being he first) for
> making computing practical.   I was thinking that systems must have been
> really bad (worse than I knew) from a reliability stand point if he put
> marginal testing up there as more important than core.
>
> Like you, I thought core memory was pretty darned important.  I never used
> a system that had Williams tubes, although we had one in storage so I knew
> what it looked like and knew how much more 'dense' core was compared to
> it.   Which is pretty amazing still compare today.  For the modern user,
> the IBM 360 a 1M core box (which we had 4) was made up of  4 19" relay
> racks, each was about 54" high and 24" deep.    If you go to
> CMU Computer Photos from Chris Hausler
> <http://www.silogic.com/Athena/CMU%20Photos%20from%20Chris%20Hausler.html>
> ​ and scroll down you can see some pictures of the old 360 (including a
> copy of me in them circa 75/76 in front of it) to gage the size).
>
>
>
> FWIW:
> I broke in with MECL which Motorola invented / developed for IBM for
> System 360 and it (and TTL) were the first logic families I learned with
> which to design.   I remember the margin pots on the front of the 360 that
> we used when we were trying to find weak gates, which happened about ones
> every 10 days.
>
> The interesting part to me is that I'm suspect the PDP-10's and the Univac
> 1108 broke as often as the 360 did, but I have fewer memories of chasing
> problems with them.   Probably because it was a less of an issue that was
> causing so many people to be disrupted by the 'down' time.
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://minnie.tuhs.org/pipermail/tuhs/attachments/20180619/3b104dd1/attachment.html>

From peter at rulingia.com  Wed Jun 20 06:45:36 2018
From: peter at rulingia.com (Peter Jeremy)
Date: Wed, 20 Jun 2018 06:45:36 +1000
Subject: [TUHS] core
In-Reply-To: <EE68EB27-E780-476A-867C-61DF328D1B9C@tfeb.org>
References: <20180615152542.E1EC918C08C@mercury.lcs.mit.edu>
 <alpine.BSF.2.21.999.1806160855070.68981@aneurin.horsfall.org>
 <alpine.BSF.2.21.999.1806151618550.51464@orthanc.ca>
 <alpine.BSF.2.21.999.1806161611250.68981@aneurin.horsfall.org>
 <CAC20D2MEogGGZh+nXGVfmWMup8GPdPySA-dKYexM2k1OT9TdPA@mail.gmail.com>
 <EE68EB27-E780-476A-867C-61DF328D1B9C@tfeb.org>
Message-ID: <20180619204536.GA91748@server.rulingia.com>

On 2018-Jun-18 10:25:03 +0100, Tim Bradshaw <tfb at tfeb.org> wrote:
>Apropos of the 'my iPhone has more power than our System-360/50, but it has nowhere near the sheer I/O throughput of a mainframe' comment: there's obviously no doubt that devices like phones (and laptops, desktops &c) are I/O-starved compared to serious machines, but comparing the performance of an iPhone and a 360/50 seems to be a matter of choosing how fine the dust you want the 360/50 to be ground into should be.
>
>The 360/50 could, I think, transfer 4 bytes every 2 microseconds to/from main memory, which is 20Mb/s.  I've just measured my iPhone (6): it can do about 36Mb/s ... over WiFi, backed by a 4G cellular connection

One way of looking at this actually backs up the claim: An iPhone has maybe
3 orders of magnitude more CPU power than a 360/50 but only a couple of
times the I/O bandwidth.  So it's actually got maybe 2 orders of magnitude
less relative I/O throughput.
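For what it's worth, plugging the thread's own rough ratios into a quick
script (these are the assumptions quoted above, not measurements) gives
the same conclusion:

```python
import math

# Peter's own rough ratios, taken as given (assumptions, not measurements):
cpu_ratio = 1_000      # iPhone vs 360/50 CPU power: ~3 orders of magnitude
io_ratio  = 36 / 20    # ~36 Mb/s over WiFi vs ~20 Mb/s channel bandwidth

relative_io = io_ratio / cpu_ratio
print(f"relative I/O throughput ratio: {relative_io:.4g}")
print(f"~{abs(math.log10(relative_io)):.1f} orders of magnitude less")
```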

-- 
Peter Jeremy
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 963 bytes
Desc: not available
URL: <http://minnie.tuhs.org/pipermail/tuhs/attachments/20180620/fd11d6bf/attachment.sig>

From davida at pobox.com  Wed Jun 20 08:55:05 2018
From: davida at pobox.com (David Arnold)
Date: Wed, 20 Jun 2018 08:55:05 +1000
Subject: [TUHS] core
In-Reply-To: <20180619204536.GA91748@server.rulingia.com>
References: <20180615152542.E1EC918C08C@mercury.lcs.mit.edu>
 <alpine.BSF.2.21.999.1806160855070.68981@aneurin.horsfall.org>
 <alpine.BSF.2.21.999.1806151618550.51464@orthanc.ca>
 <alpine.BSF.2.21.999.1806161611250.68981@aneurin.horsfall.org>
 <CAC20D2MEogGGZh+nXGVfmWMup8GPdPySA-dKYexM2k1OT9TdPA@mail.gmail.com>
 <EE68EB27-E780-476A-867C-61DF328D1B9C@tfeb.org>
 <20180619204536.GA91748@server.rulingia.com>
Message-ID: <F1BC14F5-0E4A-4A18-914D-BB273753912B@pobox.com>

Does the screen count as I/O?

I’d suggest that it’s just that the balance is (intentionally) quite different.  If you squint right, a GPU could look like a channelized I/O controller. 




d

> On 20 Jun 2018, at 06:45, Peter Jeremy <peter at rulingia.com> wrote:
> 
>> On 2018-Jun-18 10:25:03 +0100, Tim Bradshaw <tfb at tfeb.org> wrote:
>> Apropos of the 'my iPhone has more power than our System-360/50, but it has nowhere near the sheer I/O throughput of a mainframe' comment: there's obviously no doubt that devices like phones (and laptops, desktops &c) are I/O-starved compared to serious machines, but comparing the performance of an iPhone and a 360/50 seems to be a matter of choosing how fine the dust you want the 360/50 to be ground into should be.
>> 
>> The 360/50 could, I think, transfer 4 bytes every 2 microseconds to/from main memory, which is 20Mb/s.  I've just measured my iPhone (6): it can do about 36Mb/s ... over WiFi, backed by a 4G cellular connection
> 
> One way of looking at this actually backs up the claim: An iPhone has maybe
> 3 orders of magnitude more CPU power than a 360/50 but only a couple of
> times the I/O bandwidth.  So it's actually got maybe 2 orders of magnitude
> less relative I/O throughput.
> 
> -- 
> Peter Jeremy



From peter at rulingia.com  Wed Jun 20 15:04:54 2018
From: peter at rulingia.com (Peter Jeremy)
Date: Wed, 20 Jun 2018 15:04:54 +1000
Subject: [TUHS] core
In-Reply-To: <F1BC14F5-0E4A-4A18-914D-BB273753912B@pobox.com>
References: <20180615152542.E1EC918C08C@mercury.lcs.mit.edu>
 <alpine.BSF.2.21.999.1806160855070.68981@aneurin.horsfall.org>
 <alpine.BSF.2.21.999.1806151618550.51464@orthanc.ca>
 <alpine.BSF.2.21.999.1806161611250.68981@aneurin.horsfall.org>
 <CAC20D2MEogGGZh+nXGVfmWMup8GPdPySA-dKYexM2k1OT9TdPA@mail.gmail.com>
 <EE68EB27-E780-476A-867C-61DF328D1B9C@tfeb.org>
 <20180619204536.GA91748@server.rulingia.com>
 <F1BC14F5-0E4A-4A18-914D-BB273753912B@pobox.com>
Message-ID: <20180620050454.GC91748@server.rulingia.com>

On 2018-Jun-20 08:55:05 +1000, David Arnold <davida at pobox.com> wrote:
>Does the screen count as I/O?

I was thinking about that as well.  1080p30 video is around 2MBps as H.264 or
about 140MBps as 6bpp raw.  The former is negligible, the latter is still shy
of the disparity in CPU power, especially if you take into account the GPU
power needed to do the decoding.
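As a sanity check on the raw figure (my assumption: reading "6bpp" as 6
bits per colour channel, i.e. 18 bits/pixel, which lands almost exactly
on the quoted number):

```python
# Raw 1080p30 bandwidth, reading "6bpp" as 6 bits per colour channel
# (18 bits/pixel - an assumption, since 6 bits/pixel total gives ~47 MB/s):
width, height, fps = 1920, 1080, 30
bits_per_pixel = 6 * 3          # 6 bits for each of R, G, B

raw_bytes_per_sec = width * height * fps * bits_per_pixel // 8
print(raw_bytes_per_sec / 1e6)  # -> 139.968 (MB/s), matching ~140MBps above
```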

>I’d suggest that it’s just that the balance is (intentionally) quite different.  If you squint right, a GPU could look like a channelized I/O controller. 

I agree.  Even back then, there was a difference between commercial-oriented
mainframes (the 1401 and 360/50 lineage - which stressed lots of I/O) and the
scientific mainframes (709x, 360/85 - which stressed arithmetic capabilities).

One, not too inaccurate, description of the BCM2835 (RPi SoC) is that it's a
GPU and the sole job of the CPU is to push data into the GPU.

-- 
Peter Jeremy
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 963 bytes
Desc: not available
URL: <http://minnie.tuhs.org/pipermail/tuhs/attachments/20180620/a47a1e36/attachment.sig>

From imp at bsdimp.com  Wed Jun 20 15:41:41 2018
From: imp at bsdimp.com (Warner Losh)
Date: Tue, 19 Jun 2018 23:41:41 -0600
Subject: [TUHS] core
In-Reply-To: <20180620050454.GC91748@server.rulingia.com>
References: <20180615152542.E1EC918C08C@mercury.lcs.mit.edu>
 <alpine.BSF.2.21.999.1806160855070.68981@aneurin.horsfall.org>
 <alpine.BSF.2.21.999.1806151618550.51464@orthanc.ca>
 <alpine.BSF.2.21.999.1806161611250.68981@aneurin.horsfall.org>
 <CAC20D2MEogGGZh+nXGVfmWMup8GPdPySA-dKYexM2k1OT9TdPA@mail.gmail.com>
 <EE68EB27-E780-476A-867C-61DF328D1B9C@tfeb.org>
 <20180619204536.GA91748@server.rulingia.com>
 <F1BC14F5-0E4A-4A18-914D-BB273753912B@pobox.com>
 <20180620050454.GC91748@server.rulingia.com>
Message-ID: <CANCZdfrxEipUKKN4BZzUm0NYSjUFJYuaFp8NA2Xgq+Eb+806sw@mail.gmail.com>

On Tue, Jun 19, 2018 at 11:04 PM, Peter Jeremy <peter at rulingia.com> wrote:

> On 2018-Jun-20 08:55:05 +1000, David Arnold <davida at pobox.com> wrote:
> >Does the screen count as I/O?
>
> I was thinking about that as well.  1080p30 video is around 2MBps as H.264
> or
> about 140MBps as 6bpp raw.  The former is negligible, the latter is still
> shy
> of the disparity in CPU power, especially if you take into account the GPU
> power needed to do the decoding.
>
> >I’d suggest that it’s just that the balance is (intentionally) quite
> different.  If you squint right, a GPU could look like a channelized I/O
> controller.
>
> I agree.  Even back then, there was a difference between
> commercial-oriented
> mainframes (the 1401 and 360/50 lineage - which stressed lots of I/O) and
> the
> scientific mainframes (709x, 360/85 - which stressed arithmetic
> capabilities).
>

So what could an old mainframe do as far as I/O was concerned? Google
didn't provide me a straightforward answer...

Warner
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://minnie.tuhs.org/pipermail/tuhs/attachments/20180619/634c13d9/attachment.html>

From grog at lemis.com  Wed Jun 20 18:10:32 2018
From: grog at lemis.com (Greg 'groggy' Lehey)
Date: Wed, 20 Jun 2018 18:10:32 +1000
Subject: [TUHS] Old mainframe I/O speed (was: core)
In-Reply-To: <CANCZdfrxEipUKKN4BZzUm0NYSjUFJYuaFp8NA2Xgq+Eb+806sw@mail.gmail.com>
References: <20180615152542.E1EC918C08C@mercury.lcs.mit.edu>
 <alpine.BSF.2.21.999.1806160855070.68981@aneurin.horsfall.org>
 <alpine.BSF.2.21.999.1806151618550.51464@orthanc.ca>
 <alpine.BSF.2.21.999.1806161611250.68981@aneurin.horsfall.org>
 <CAC20D2MEogGGZh+nXGVfmWMup8GPdPySA-dKYexM2k1OT9TdPA@mail.gmail.com>
 <EE68EB27-E780-476A-867C-61DF328D1B9C@tfeb.org>
 <20180619204536.GA91748@server.rulingia.com>
 <F1BC14F5-0E4A-4A18-914D-BB273753912B@pobox.com>
 <20180620050454.GC91748@server.rulingia.com>
 <CANCZdfrxEipUKKN4BZzUm0NYSjUFJYuaFp8NA2Xgq+Eb+806sw@mail.gmail.com>
Message-ID: <20180620081032.GF28267@eureka.lemis.com>

On Tuesday, 19 June 2018 at 23:41:41 -0600, Warner Losh wrote:
> On Tue, Jun 19, 2018 at 11:04 PM, Peter Jeremy <peter at rulingia.com> wrote:
>
>> On 2018-Jun-20 08:55:05 +1000, David Arnold <davida at pobox.com> wrote:
>>> Does the screen count as I/O?
>>
>> I was thinking about that as well.  1080p30 video is around 2MBps
>> as H.264 or about 140MBps as 6bpp raw.  The former is negligible,
>> the latter is still shy of the disparity in CPU power, especially
>> if you take into account the GPU power needed to do the decoding.
>>
>>> I'd suggest that it's just that the balance is (intentionally) quite
>> different.  If you squint right, a GPU could look like a channelized I/O
>> controller.
>>
>> I agree.  Even back then, there was a difference between
>> commercial-oriented mainframes (the 1401 and 360/50 lineage - which
>> stressed lots of I/O) and the scientific mainframes (709x, 360/85 -
>> which stressed arithmetic capabilities).
>
> So what could an old mainframe do as far as I/O was concerned? Google
> didn't provide me a straight forward answer...

Looking at something like the IBM 370 series (mid-1970s), I/O was
performed by the channels, effectively separate processors with a very
limited instruction set.  Others, like the UNIVAC 1100 series, could
perform I/O directly or via separate processors.  This was similar on
the /360, but very different on the 1401.

In each case, from my recollection, main memory and the peripheral
were the bottleneck.  For the UNIVAC 1108 (1965, the one of which I
have the best recollection), memory was 36 bits every 750 ns, and you
could expect it to be interleaved at least 2 ways, so you could
transfer data across two channels to a FH 432 drum at in the order of
2.5 MW/s.  This could lead to underruns depending on what else was
going on in the system.  Other peripherals were slower, so this would
have been about the maximum.
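A quick check of that figure from the cycle time given above:

```python
# One 36-bit word per 750 ns per bank, two-way interleaved (figures as
# given in the message above; the drum itself is not modelled here):
cycle_ns   = 750
interleave = 2

words_per_sec = interleave * 1e9 / cycle_ns
print(f"{words_per_sec / 1e6:.2f} MW/s")   # -> 2.67 MW/s
```

which is indeed "in the order of 2.5 MW/s" as a ceiling, before any
contention from the rest of the system.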

Greg

--
Sent from my desktop computer.
Finger grog at lemis.com for PGP public key.
See complete headers for address and phone numbers.
This message is digitally signed.  If your Microsoft mail program
reports problems, please read http://lemis.com/broken-MUA
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 163 bytes
Desc: not available
URL: <http://minnie.tuhs.org/pipermail/tuhs/attachments/20180620/c48c3727/attachment.sig>

From dave at horsfall.org  Wed Jun 20 20:06:04 2018
From: dave at horsfall.org (Dave Horsfall)
Date: Wed, 20 Jun 2018 20:06:04 +1000 (EST)
Subject: [TUHS] core
In-Reply-To: <CAFCBnZshnexAs8WH8HQmCXU88gLxiW4=FJ2EmnC9ge283ivAXg@mail.gmail.com>
References: <CAFCBnZshnexAs8WH8HQmCXU88gLxiW4=FJ2EmnC9ge283ivAXg@mail.gmail.com>
Message-ID: <alpine.BSF.2.21.999.1806202003050.68981@aneurin.horsfall.org>

(Yeah, I'm a bit behind right now; I got involved in some electronics stuff
for a change.)

On Fri, 15 Jun 2018, A. P. Garcia wrote:

> jay forrester first described an invention called core memory in a lab
> notebook 69 years ago today.

Odd, I don't have that in my history notes; may I use it?

-- Dave


From dave at horsfall.org  Wed Jun 20 21:10:18 2018
From: dave at horsfall.org (Dave Horsfall)
Date: Wed, 20 Jun 2018 21:10:18 +1000 (EST)
Subject: [TUHS] core
In-Reply-To: <201806161258.w5GCwNDN014646@tahoe.cs.Dartmouth.EDU>
References: <201806161258.w5GCwNDN014646@tahoe.cs.Dartmouth.EDU>
Message-ID: <alpine.BSF.2.21.999.1806202035140.68981@aneurin.horsfall.org>

On Sat, 16 Jun 2018, Doug McIlroy wrote:

>> jay forrester first described an invention called core memory in a lab 
>> notebook 69 years ago today.
>
> Core memory wiped out competing technologies (Williams tube [...]

Ah, the Williams tube.  Was that not the one where women clerks (they 
weren't called programmers) wearing nylon stockings were not allowed 
anywhere near them, because of the static zaps thus generated?  Or was 
that yet another urban myth that appears to be polluting my history file?

How said women were determined to be wearing nylons and thus banned, well,
I have no idea...  Hell, even a popular computer weekly in the 80s used to 
joke that "software" was a female computer programmer; I'm sure that Rear 
Admiral Grace Hopper (USN retd, decd) would've had a few words to say 
about that sexism...

-- Dave


From paul.winalski at gmail.com  Thu Jun 21 02:33:05 2018
From: paul.winalski at gmail.com (Paul Winalski)
Date: Wed, 20 Jun 2018 12:33:05 -0400
Subject: [TUHS] Old mainframe I/O speed (was: core)
In-Reply-To: <20180620081032.GF28267@eureka.lemis.com>
References: <20180615152542.E1EC918C08C@mercury.lcs.mit.edu>
 <alpine.BSF.2.21.999.1806160855070.68981@aneurin.horsfall.org>
 <alpine.BSF.2.21.999.1806151618550.51464@orthanc.ca>
 <alpine.BSF.2.21.999.1806161611250.68981@aneurin.horsfall.org>
 <CAC20D2MEogGGZh+nXGVfmWMup8GPdPySA-dKYexM2k1OT9TdPA@mail.gmail.com>
 <EE68EB27-E780-476A-867C-61DF328D1B9C@tfeb.org>
 <20180619204536.GA91748@server.rulingia.com>
 <F1BC14F5-0E4A-4A18-914D-BB273753912B@pobox.com>
 <20180620050454.GC91748@server.rulingia.com>
 <CANCZdfrxEipUKKN4BZzUm0NYSjUFJYuaFp8NA2Xgq+Eb+806sw@mail.gmail.com>
 <20180620081032.GF28267@eureka.lemis.com>
Message-ID: <CABH=_VReMmH1Smkb8aZnuFZh2Dct7E-d0X_KxwdHWicWaz42vg@mail.gmail.com>

On 6/20/18, Greg 'groggy' Lehey <grog at lemis.com> wrote:
>
> Looking at something like the IBM 370 series (mid-1970s), I/O was
> performed by the channels, effectively separate processors with a very
> limited instruction set.  Others, like the UNIVAC 1100 series, could
> perform I/O directly or via separate processors.  This was similar on
> the /360, but very different on the 1401.

All of the System/360 series except the model 25 used separate channel
processors to perform I/O.  Once the I/O was initiated, the channel
performed data transfer to and from main storage (IBM didn't use the
term "memory") completely independently from the CPU.  The S/360 model
25 was the last of the 360 series and was really a 16-bit minicomputer
microprogrammed to execute the S/360 instruction set.  It had what
they called "integrated channels", meaning that I/O was handled by CPU
microcode.

IBM used the model 25 I/O design in the S/370 models 135 and 145.  The
models 115 and 125 were actually four 16-bit processors on a bus along
with main memory.  One of them, the service processor, handled system
power-on, power-off, microcode load, diagnostics, and the system
console (a modified 3277 transaction terminal).  One was the CPU and
executed the S/370 instruction set in microcode.  The remaining two
processors acted as a byte and block multiplexer channel.  This meant
that I/O proceeded independently from the CPU, as with the old 360s.
It also meant that if the system was reading cards, punching cards,
and printing all at the same time (often the case when spooling), a
model 125 outperformed a 145 in compute speed, because the 145 had to
execute a microcode loop for every byte of I/O.

The 158 and 168 appear to have had independent channels, but housed in
the same boxes as the CPU as opposed to stand-alone units as with
S/360.

-Paul W.


From paul.winalski at gmail.com  Thu Jun 21 02:55:58 2018
From: paul.winalski at gmail.com (Paul Winalski)
Date: Wed, 20 Jun 2018 12:55:58 -0400
Subject: [TUHS] core
In-Reply-To: <2a35f8dd-e8b8-937a-1af9-d18ac31b8be2@update.uu.se>
References: <mailman.1.1529175600.3826.tuhs@minnie.tuhs.org>
 <86dcd19d-f805-f338-f190-ab38d1ac82c1@update.uu.se>
 <20B7C3F5-2E44-41AB-91E4-510451428C83@ccc.com>
 <2a35f8dd-e8b8-937a-1af9-d18ac31b8be2@update.uu.se>
Message-ID: <CABH=_VTh5A5_C1MPk5rXOF4bUDnr8PcQ38fJ5KMKQixLTHXApA@mail.gmail.com>

On 6/16/18, Johnny Billquist <bqt at update.uu.se> wrote:
>
> After Unibus (if we skip Q-bus) you had SBI, which was also used both
> for controllers (well, mostly bus adaptors) and memory, for the
> VAX-11/780. However, evaluation of where the bottleneck was on that
> machine led to the memory being moved away from SBI for the VAX-86x0
> machines. SBI never was used much for any direct controllers, but
> instead you had Unibus adapters, so here the Unibus was used as a pure
> I/O bus.

You forgot the Massbus--commonly used for disk I/O.

SBI on the 11/780 connected the CPU (CPUs, for the 11/782), memory
controller, Unibus controllers, Massbus controllers, and CI
controller.

CI (computer interconnect) was the communications medium for
VAXclusters (including the HSC50 intelligent disk controller).

DEC later introduced another interconnect system called BI, intended
to replace Unibus and Massbus.

As we used to say in our software group about all the various busses
and interconnects:

    E Unibus Plurum

-Paul W.


From doug at cs.dartmouth.edu  Thu Jun 21 06:11:24 2018
From: doug at cs.dartmouth.edu (Doug McIlroy)
Date: Wed, 20 Jun 2018 16:11:24 -0400
Subject: [TUHS] core
Message-ID: <201806202011.w5KKBOT5040344@tahoe.cs.Dartmouth.EDU>

>     > Yet late in his life Forrester told me that the Whirlwind-connected
>     > invention he was most proud of was marginal testing

> I'm totally gobsmacked to hear that. Margin testing was
> important, yes, but not even remotely on the same quantum
> level as core.

> In trying to understand why he said that, I can only suppose that he felt that
> core was 'the work of many hands'...and so he only deserved a share of the
> credit for it.

It is indeed a striking comment. Forrester clearly had grave concerns
about the reliability of such a huge aggregation of electronics. I think
jpl gets to the core of the matter, regardless of national security:

> Whirlwind ... was tube based, and I think there was a tradeoff of speed,
> as determined by power, and tube longevity. Given the purpose, early
> warning of air attack, speed was vital, but so, too, was keeping it alive.
> So a means of finding a "sweet spot" was really a matter of national
> security. I can understand Forrester's pride in that context.

If you extrapolate the rate of replacement of vacuum tubes in a 5-tube
radio to a 5000-tube computer (to say nothing of the 50,000-tube machines
for which Whirlwind served as a prototype), computing looks like a
crap shoot. In fact, thanks to the maintenance protocol, Whirlwind
computed reliably--a sine qua non for the nascent industry.
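To make the extrapolation concrete: under a crude assumption that tube
failures are independent, a machine's time between failures scales as
1/n in the tube count. The per-tube MTBF below is purely illustrative,
not a figure from the thread:

```python
# Crude model: independent tube failures, so a machine of n tubes fails
# roughly n times as often as one tube.  The 10,000-hour per-tube MTBF
# is an illustrative guess, not a figure from the thread.
tube_mtbf_hours = 10_000.0

for n_tubes in (5, 5_000, 50_000):
    machine_mtbf = tube_mtbf_hours / n_tubes
    print(f"{n_tubes:>6} tubes: ~{machine_mtbf:.2f} h between tube failures")
```

Even with generous tube lifetimes, the naive extrapolation predicts a
failure every couple of hours at Whirlwind's scale - hence the value of
marginal testing, which caught weakening tubes before they failed.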



From bqt at update.uu.se  Thu Jun 21 07:35:09 2018
From: bqt at update.uu.se (Johnny Billquist)
Date: Wed, 20 Jun 2018 23:35:09 +0200
Subject: [TUHS] core
In-Reply-To: <CABH=_VTh5A5_C1MPk5rXOF4bUDnr8PcQ38fJ5KMKQixLTHXApA@mail.gmail.com>
References: <mailman.1.1529175600.3826.tuhs@minnie.tuhs.org>
 <86dcd19d-f805-f338-f190-ab38d1ac82c1@update.uu.se>
 <20B7C3F5-2E44-41AB-91E4-510451428C83@ccc.com>
 <2a35f8dd-e8b8-937a-1af9-d18ac31b8be2@update.uu.se>
 <CABH=_VTh5A5_C1MPk5rXOF4bUDnr8PcQ38fJ5KMKQixLTHXApA@mail.gmail.com>
Message-ID: <60d0717e-b6ed-fc18-b799-76ea4811a6b6@update.uu.se>

On 2018-06-20 18:55, Paul Winalski wrote:
> On 6/16/18, Johnny Billquist <bqt at update.uu.se> wrote:
>>
>> After Unibus (if we skip Q-bus) you had SBI, which was also used both
>> for controllers (well, mostly bus adaptors) and memory, for the
>> VAX-11/780. However, evaluation of where the bottleneck was on that
>> machine led to the memory being moved away from SBI for the VAX-86x0
>> machines. SBI never was used much for any direct controllers, but
>> instead you had Unibus adapters, so here the Unibus was used as a pure
>> I/O bus.
> 
> You forgot the Massbus--commonly used for disk I/O.

I didn't try for pure I/O buses. But yes, the Massbus certainly also 
existed.

> SBI on the 11/780 connected the CPU (CPUs, for the 11/782), memory
> controller, Unibus controllers, Massbus controllers, and CI
> controller.

Right.

> CI (computer interconnect) was the communications medium for
> VAXclusters (including the HSC50 intelligent disk controller).

But CI I wouldn't even call a bus.

> DEC later introduced another interconnect system called BI, intended
> to replace Unibus and Massbus.

I think that is where my comment started.

	Johnny

-- 
Johnny Billquist                  || "I'm on a bus
                                   ||  on a psychedelic trip
email: bqt at softjar.se             ||  Reading murder books
pdp is alive!                     ||  tryin' to stay hip" - B. Idol


From paul.winalski at gmail.com  Thu Jun 21 08:24:38 2018
From: paul.winalski at gmail.com (Paul Winalski)
Date: Wed, 20 Jun 2018 18:24:38 -0400
Subject: [TUHS] core
In-Reply-To: <60d0717e-b6ed-fc18-b799-76ea4811a6b6@update.uu.se>
References: <mailman.1.1529175600.3826.tuhs@minnie.tuhs.org>
 <86dcd19d-f805-f338-f190-ab38d1ac82c1@update.uu.se>
 <20B7C3F5-2E44-41AB-91E4-510451428C83@ccc.com>
 <2a35f8dd-e8b8-937a-1af9-d18ac31b8be2@update.uu.se>
 <CABH=_VTh5A5_C1MPk5rXOF4bUDnr8PcQ38fJ5KMKQixLTHXApA@mail.gmail.com>
 <60d0717e-b6ed-fc18-b799-76ea4811a6b6@update.uu.se>
Message-ID: <CABH=_VQGfV6PnydJS87R6AFt_f-SSSQD2-5JXPyjVLx9_QJ9AA@mail.gmail.com>

On 6/20/18, Johnny Billquist <bqt at update.uu.se> wrote:
>
>> CI (computer interconnect) was the communications medium for
>> VAXclusters (including the HSC50 intelligent disk controller).
>
> But CI I wouldn't even call a bus.

You're right--it's a high-speed point-to-point communications network
with a star topology.

-Paul W.


From tfb at tfeb.org  Thu Jun 21 09:53:52 2018
From: tfb at tfeb.org (Tim Bradshaw)
Date: Thu, 21 Jun 2018 00:53:52 +0100
Subject: [TUHS] core
In-Reply-To: <201806202011.w5KKBOT5040344@tahoe.cs.Dartmouth.EDU>
References: <201806202011.w5KKBOT5040344@tahoe.cs.Dartmouth.EDU>
Message-ID: <580E4D90-FB9F-4C7B-8904-7994C49806E7@tfeb.org>

Making tube (valve) machines reliable was something originally sorted out by Tommy Flowers, who understood, and convinced people with some difficulty I think, that you could make a machine with one or two thousand valves (1,600 for Mk 1, 2,400 for Mk 2) reliable enough to use, and then produced ten or eleven Colossi from 1943 on which were used to great effect.  So by the time of Whirlwind it was presumably well-understood that this was possible.

(This is not meant to detract from what Whirlwind did in any way.)

> On 20 Jun 2018, at 21:11, Doug McIlroy <doug at cs.dartmouth.edu> wrote:
> 
> If you extrapolate the rate of replacement of vacuum tubes in a 5-tube
> radio to a 5000-tube computer (say nothing of the 50,000-tube machines
> for which Whirlwind served as a prototype), computing looks like a
> crap shoot. In fact, thanks to the maintenance protocol, Whirlwind
> computed reliably--a sine qua non for the nascent industry.



From peter at rulingia.com  Thu Jun 21 13:05:05 2018
From: peter at rulingia.com (Peter Jeremy)
Date: Thu, 21 Jun 2018 13:05:05 +1000
Subject: [TUHS] Old mainframe I/O speed (was: core)
In-Reply-To: <CABH=_VReMmH1Smkb8aZnuFZh2Dct7E-d0X_KxwdHWicWaz42vg@mail.gmail.com>
References: <alpine.BSF.2.21.999.1806151618550.51464@orthanc.ca>
 <alpine.BSF.2.21.999.1806161611250.68981@aneurin.horsfall.org>
 <CAC20D2MEogGGZh+nXGVfmWMup8GPdPySA-dKYexM2k1OT9TdPA@mail.gmail.com>
 <EE68EB27-E780-476A-867C-61DF328D1B9C@tfeb.org>
 <20180619204536.GA91748@server.rulingia.com>
 <F1BC14F5-0E4A-4A18-914D-BB273753912B@pobox.com>
 <20180620050454.GC91748@server.rulingia.com>
 <CANCZdfrxEipUKKN4BZzUm0NYSjUFJYuaFp8NA2Xgq+Eb+806sw@mail.gmail.com>
 <20180620081032.GF28267@eureka.lemis.com>
 <CABH=_VReMmH1Smkb8aZnuFZh2Dct7E-d0X_KxwdHWicWaz42vg@mail.gmail.com>
Message-ID: <20180621030505.GA89671@server.rulingia.com>

On 2018-Jun-20 12:33:05 -0400, Paul Winalski <paul.winalski at gmail.com> wrote:
>All of the System/360 series except the model 25 used separate channel
>processors to perform I/O.

The S360 architecture defined separate main CPU and I/O channel processors
and the actual implementation varied between models.  IBM stressed the
compatibility between models so it can be difficult to determine what the
actual implementation did in hardware vs microcode.  At least the model 30
also emulated the channel processor using the main CPU.  [1] confirms this
for the multiplexor[2] channels and implies it for the selector[3] channels.
The "CPU interference factors" in [5](p65) suggest the model 50 also
emulated the channel processors.

The idea of separate I/O processors was also used in the CDC6600.

>term "memory") completely independently from the CPU.  The S/360 model
>25 was the last of the 360 series and was really a 16-bit minicomputer
>microprogrammed to execute the S/360 instruction set.

Note that most S360 machines were microcoded with the native ALU size
varying between 8 and 32 bits.  The model 25 was also the only S360 with
writable microcode and there was a microcoded APL implementation for it so
it "natively" executed APL.  I'm not sure if there were any other novel
microcode sets for it.

Going back to Greg's question of actual I/O performance: A model 50 could
support 3 selector channels, with a nominal rate of 800kBps each[6].  Since
each selector channel could only perform a single I/O operation at a time, I
believe the actual rate was effectively limited to the fastest device on the
channel - which [5] indicates was 340kBps for a 7340-3 Hypertape at 3022bpi.
That implies a total of 1020kBps of I/O.  The "CPU interference" indicates
that each byte transferred blocked the CPU for 0.95us, so 1020kBps of I/O
would also steal 97% of the CPU-storage bandwidth.
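The arithmetic in the paragraph above can be checked in a few lines. All the figures (3 selector channels, 340kBps per channel, 0.95us of CPU-storage interference per byte) come from the message itself; the script just redoes the multiplication.

```python
# Back-of-the-envelope check of the S/360 Model 50 I/O figures quoted above.

CHANNELS = 3              # selector channels on a Model 50
RATE_KBPS = 340           # fastest device per channel (7340-3 Hypertape at 3022bpi)
INTERFERENCE_US = 0.95    # CPU-storage time stolen per byte transferred

total_kbps = CHANNELS * RATE_KBPS                     # aggregate I/O rate
stolen = total_kbps * 1_000 * INTERFERENCE_US / 1e6   # fraction of each second

print(f"aggregate I/O: {total_kbps} kBps")            # 1020 kBps
print(f"CPU-storage bandwidth stolen: {stolen:.0%}")  # ~97%
```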

[1] http://bitsavers.org/pdf/ibm/360/funcChar/GA24-3231-7_360-30_funcChar.pdf
[2] "multiplexer" channels were used for low speed devices - card readers,
    card punches, printers, serial communications.
[3] "selector" channels were used for high speed devices - tape, DASD[4]
[4] Direct Access Storage Device - IBM speak for "disk"
[5] http://bitsavers.org/pdf/ibm/360/funcChar/A22-6898-1_360-50_funcChar_1967.pdf
[6] https://www-03.ibm.com/ibm/history/exhibits/mainframe/mainframe_PP2050.html
-- 
Peter Jeremy
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 963 bytes
Desc: not available
URL: <http://minnie.tuhs.org/pipermail/tuhs/attachments/20180621/e11c4f3a/attachment.sig>

From doug at cs.dartmouth.edu  Thu Jun 21 23:46:19 2018
From: doug at cs.dartmouth.edu (Doug McIlroy)
Date: Thu, 21 Jun 2018 09:46:19 -0400
Subject: [TUHS] core
Message-ID: <201806211346.w5LDkJJG126945@tahoe.cs.Dartmouth.EDU>

Tim Bradshaw wrote:
"Making tube (valve) machines reliable was something originally sorted out by Tommy Flowers, who understood, and convinced people with some difficulty I think, that you could make a machine with one or two thousand valves (1,600 for Mk 1, 2,400 for Mk 2) reliable enough to use, and then produced ten or eleven Colossi from 1943 on which were used to great effect in.  So by the time of Whirlwind it was presumably well-understood that this was possible."

"Colossus: The Secrets of Bletchley Park's Codebreaking Computers"
by Copeland et al has little to say about reliability. But one
ex-operator remarks, "Often the machine broke down."

Whether it was the (significant) mechanical part or the electronics
that typically broke is unclear. Failures in a machine that's always
doing the same thing are easier to detect quickly than failures in
a machine that has a varied load. Also the task at hand could fail
for many other reasons (e.g. mistranscribed messages) so there was
no presumption of correctness of results--that was determined by
reading the decrypted messages.  So I think it's a stretch to
argue that reliability was known to be a manageable issue.

Doug


From paul.winalski at gmail.com  Fri Jun 22 00:00:04 2018
From: paul.winalski at gmail.com (Paul Winalski)
Date: Thu, 21 Jun 2018 10:00:04 -0400
Subject: [TUHS] Old mainframe I/O speed (was: core)
In-Reply-To: <20180621030505.GA89671@server.rulingia.com>
References: <alpine.BSF.2.21.999.1806151618550.51464@orthanc.ca>
 <alpine.BSF.2.21.999.1806161611250.68981@aneurin.horsfall.org>
 <CAC20D2MEogGGZh+nXGVfmWMup8GPdPySA-dKYexM2k1OT9TdPA@mail.gmail.com>
 <EE68EB27-E780-476A-867C-61DF328D1B9C@tfeb.org>
 <20180619204536.GA91748@server.rulingia.com>
 <F1BC14F5-0E4A-4A18-914D-BB273753912B@pobox.com>
 <20180620050454.GC91748@server.rulingia.com>
 <CANCZdfrxEipUKKN4BZzUm0NYSjUFJYuaFp8NA2Xgq+Eb+806sw@mail.gmail.com>
 <20180620081032.GF28267@eureka.lemis.com>
 <CABH=_VReMmH1Smkb8aZnuFZh2Dct7E-d0X_KxwdHWicWaz42vg@mail.gmail.com>
 <20180621030505.GA89671@server.rulingia.com>
Message-ID: <CABH=_VSzhumPL4xUfLpwpJRH9Qgv5TLy8M5ombeiw2VnGikdYg@mail.gmail.com>

On 6/20/18, Peter Jeremy <peter at rulingia.com> wrote:
>
> Note that most S360 machines were microcoded with the native ALU size
> varying between 8 and 32 bits.  The model 25 was also the only S360 with
> writable microcode and there was a microcoded APL implementation for it so
> it "natively" executed APL.  I'm not sure if there were any other novel
> microcode sets for it.

Yes, the model 25's microcode was in core.  I remember we had to
reload it from punch cards at one point when IBM issued an update.  I
didn't know about the custom APL microcode.  I do recall that the disk
controller logic, as well as the "selector channel", was in CPU
microcode.  After the model 25 was withdrawn, IBM released the sources
for the microcode to customers.  There were several hacks in there to
slow down the disk I/O so that it didn't outperform the model 30.

> [3] "selector" channels were used for high speed devices - tape, DASD[4]
> [4] Direct Access Storage Device - IBM speak for "disk"

They used the term DASD because it covered non-disk devices such as
drums and the 2321 data cell drive (aka "noodle picker").

-Paul W.


From paul.winalski at gmail.com  Fri Jun 22 00:09:58 2018
From: paul.winalski at gmail.com (Paul Winalski)
Date: Thu, 21 Jun 2018 10:09:58 -0400
Subject: [TUHS] core
In-Reply-To: <20180619122359.1525D18C084@mercury.lcs.mit.edu>
References: <20180619122359.1525D18C084@mercury.lcs.mit.edu>
Message-ID: <CABH=_VTNYg=6qr=B+yAkEJghQ4AZm7RNZm4UWDFS==q+WBByDQ@mail.gmail.com>

On 6/19/18, Noel Chiappa <jnc at mercury.lcs.mit.edu> wrote:
>
> In other words, there's a reason they were all dropped like hot potatoes in
> favour of core - which, looked at from our DRAM-era perspective, seems
> quaintly dinosaurian. Individual pieces of hardware you can actually _see_
> with the naked eye, for _each_ bit? But that should give some idea of how
> much
> worse everything before it was, that it killed them all off so quickly!

When we replaced our S/360 model 25 with a S/370 model 125, I remember
being shocked when I found out that the semiconductor memory on the
125 was volatile.  You mean we're going to have to reload the OS every
time we power the machine off?  Yuck!!

-Paul W.


From arrigo at alchemistowl.org  Fri Jun 22 00:49:21 2018
From: arrigo at alchemistowl.org (Arrigo Triulzi)
Date: Thu, 21 Jun 2018 16:49:21 +0200
Subject: [TUHS] Old mainframe I/O speed (was: core)
In-Reply-To: <CABH=_VSzhumPL4xUfLpwpJRH9Qgv5TLy8M5ombeiw2VnGikdYg@mail.gmail.com>
References: <alpine.BSF.2.21.999.1806151618550.51464@orthanc.ca>
 <alpine.BSF.2.21.999.1806161611250.68981@aneurin.horsfall.org>
 <CAC20D2MEogGGZh+nXGVfmWMup8GPdPySA-dKYexM2k1OT9TdPA@mail.gmail.com>
 <EE68EB27-E780-476A-867C-61DF328D1B9C@tfeb.org>
 <20180619204536.GA91748@server.rulingia.com>
 <F1BC14F5-0E4A-4A18-914D-BB273753912B@pobox.com>
 <20180620050454.GC91748@server.rulingia.com>
 <CANCZdfrxEipUKKN4BZzUm0NYSjUFJYuaFp8NA2Xgq+Eb+806sw@mail.gmail.com>
 <20180620081032.GF28267@eureka.lemis.com>
 <CABH=_VReMmH1Smkb8aZnuFZh2Dct7E-d0X_KxwdHWicWaz42vg@mail.gmail.com>
 <20180621030505.GA89671@server.rulingia.com>
 <CABH=_VSzhumPL4xUfLpwpJRH9Qgv5TLy8M5ombeiw2VnGikdYg@mail.gmail.com>
Message-ID: <CE5AFD89-9E9E-41DD-A243-6DD471B54B2A@alchemistowl.org>

On 21 Jun 2018, at 16:00, Paul Winalski <paul.winalski at gmail.com> wrote:
[...]
> for the microcode to customers.  There were several hacks in there to
> slow down the disk I/O so that it didn't outperform the model 30.

Is this the origin of the lore on “the IBM slowdown device”?

I seem to recall there was also some trickery at the CPU level so that you could “field upgrade” between two models by removing it, but a) I cannot find the source and b) my Pugh book is far away, so I cannot scan through it.

Arrigo 





From tfb at tfeb.org  Fri Jun 22 02:13:00 2018
From: tfb at tfeb.org (Tim Bradshaw)
Date: Thu, 21 Jun 2018 17:13:00 +0100
Subject: [TUHS] core
In-Reply-To: <201806211346.w5LDkJJG126945@tahoe.cs.Dartmouth.EDU>
References: <201806211346.w5LDkJJG126945@tahoe.cs.Dartmouth.EDU>
Message-ID: <9F50E0F4-25A5-40F9-90D7-B454BE0EF9D6@tfeb.org>

On 21 Jun 2018, at 14:46, Doug McIlroy <doug at cs.dartmouth.edu> wrote:
> 
> Whether it was the (significant) mechanical part or the electronics
> that typically broke is unclear. Failures in a machine that's always
> doing the same thing are easier to detect quickly than failures in
> a machine that has a varied load. Also the task at hand could fail
> for many other reasons (e.g. mistranscribed messages) so there was
> no presumption of correctness of results--that was determined by
> reading the decrypted messages.  So I think it's a stretch to
> argue that reliability was known to be a manageable issue.

I think it's reasonably well-documented that Flowers understood things about making valves (tubes) reliable that were not previously known: Colossus was well over ten times larger than any previous valve-based system at the time and there was huge scepticism about making it work, to the extent that he funded the initial one substantially himself as BP wouldn't.  For instance he understood that you should never turn the things off: even quite significant maintenance was done on Colossi with them on (I believe they were kept on even when the room flooded on occasion, which led to fairly exciting electrical conditions).  He also did things with heaters (the heater voltage was ramped up & down to avoid thermal stresses when powering things on or off) and other tricks such as soldering the valves into their bases to avoid connector problems.

The mechanical part (the tape reader) was in fact one significant reason for Colossus: the previous thing, Heath Robinson, had used two tapes which needed to be kept in sync (or, rather, needed to be allowed to drift out of sync in a controlled way with respect to each other), and this did not work well at all as the tapes would stretch & break.  Colossus generated one of the tapes (the one corresponding to the Lorenz machine's settings) electronically and would then sync itself to the tape with the message text on it.

It's also not the case that the Colossi did only one thing: they were not general-purpose machines but they were more general-purpose than they needed to be and people found all sorts of ways of getting them to compute properties they had not originally been intended to compute.  I remember talking to Donald Michie about this (he was one of the members of the Newmanry which was the bit of BP where the Colossi were).

There is a paper, by Flowers, in the Annals of the History of Computing which discusses a lot of this.  I am not sure if it's available online (or where my copy is).

Further, of course, you can just ask people who run one about reliability: there's a reconstructed one at TNMOC which is well worth seeing (there's a Heath Robinson as well), and the people there are very willing to talk about reliability -- I learnt about the heater-voltage-ramping thing by asking what the huge rheostat was for, and later seeing it turned on in the morning -- one of the problems with the reconstruction is that they are required to turn it off at night (this is the same problem that afflicts people who look after steam locomotives, which in real life would almost never have been allowed to get cold).

As I said, I don't want to detract from Whirlwind in any way, but it is the case that Tommy Flowers did sort out significant aspects of the reliability of relatively large valve systems.

--tim



From beebe at math.utah.edu  Fri Jun 22 03:07:54 2018
From: beebe at math.utah.edu (Nelson H. F. Beebe)
Date: Thu, 21 Jun 2018 11:07:54 -0600
Subject: [TUHS] core
Message-ID: <CMM.0.96.0.1529600874.beebe@gamma.math.utah.edu>

Tim Bradshaw <tfb at tfeb.org> commented on a paper by Tommy Flowers on
the design of the Colossus: here is the reference:

	Thomas H. Flowers, The Design of Colossus, Annals of the
	History of Computing 5(3) 239--253 July/August 1983
	https://doi.org/10.1109/MAHC.1983.10079

Notice that it appeared in the Annals..., not the successor journal
IEEE Annals....

There is a one-column obituary of Tommy Flowers at

	http://doi.ieeecomputersociety.org/10.1109/MC.1998.10137

Last night, I finished reading this recent book:

	Thomas Haigh and Mark (Peter Mark) Priestley and Crispin Rope
	ENIAC in action: making and remaking the modern computer
	MIT Press 2016
	ISBN 0-262-03398-4
	https://doi.org/10.7551/mitpress/9780262033985.001.0001

It has extensive commentary about the ENIAC at the Moore School of
Engineering at the University of Pennsylvania in Philadelphia, PA.  Its
construction began in 1943, with a major instruction set redesign in
1948, and was shutdown permanently on 2 October 1955 at 23:45.

The book notes that poor reliability of vacuum tubes and thousands of
soldered connections was a huge problem, and in the early years, only
about 1 hour out of 24 was devoted to useful runs; the rest of the
time was used for debugging, problem setup (which required wiring
plugboards), testing, and troubleshooting.  Even so, runs generally
had to be repeated to verify that the same answers could be obtained:
often, they differed.

The book also reports that reliability was helped by never turning off
power: tubes were more susceptible to failure when power was restored.

The book reports that reliability of the ENIAC improved significantly
when on 28 April 1948, Nick Metropolis (co-inventor of the famous
Monte Carlo method) had the clock rate reduced from 100kHz to 60kHz.
It was only several years later that, with vacuum tube manufacturing
improvements, the clock rate was eventually moved back to 100kHz.

-------------------------------------------------------------------------------
- Nelson H. F. Beebe                    Tel: +1 801 581 5254                  -
- University of Utah                    FAX: +1 801 581 4148                  -
- Department of Mathematics, 110 LCB    Internet e-mail: beebe at math.utah.edu  -
- 155 S 1400 E RM 233                       beebe at acm.org  beebe at computer.org -
- Salt Lake City, UT 84112-0090, USA    URL: http://www.math.utah.edu/~beebe/ -
-------------------------------------------------------------------------------


From jnc at mercury.lcs.mit.edu  Fri Jun 22 03:40:35 2018
From: jnc at mercury.lcs.mit.edu (Noel Chiappa)
Date: Thu, 21 Jun 2018 13:40:35 -0400 (EDT)
Subject: [TUHS] core
Message-ID: <20180621174035.AC42B18C073@mercury.lcs.mit.edu>

    > From: Tim Bradshaw

    > There is a paper, by Flowers, in the Annals of the History of Computing
    > which discusses a lot of this. I am not sure if it's available online

Yes:

  http://www.ivorcatt.com/47c.htm

(It's also available behind the IEEE pay-wall.) That issue of the Annals (5/3)
has a couple of articles about Colossus; the Coombs one on the design is
available here:

  http://www.ivorcatt.com/47d.htm

but doesn't have much on the reliability issues; the Chandler one on
maintenance has more, but alas is only available behind the paywall:

  https://www.computer.org/csdl/mags/an/1983/03/man1983030260-abs.html

Brian Randell did a couple of Colossus articles (in "History of Computing in
the 20th Century" and "Origin of Digital Computers") for which he interviewed
Flowers and others, there may be details there too.

	Noel


From paul.winalski at gmail.com  Fri Jun 22 06:39:23 2018
From: paul.winalski at gmail.com (Paul Winalski)
Date: Thu, 21 Jun 2018 16:39:23 -0400
Subject: [TUHS] Old mainframe I/O speed (was: core)
In-Reply-To: <CE5AFD89-9E9E-41DD-A243-6DD471B54B2A@alchemistowl.org>
References: <alpine.BSF.2.21.999.1806151618550.51464@orthanc.ca>
 <alpine.BSF.2.21.999.1806161611250.68981@aneurin.horsfall.org>
 <CAC20D2MEogGGZh+nXGVfmWMup8GPdPySA-dKYexM2k1OT9TdPA@mail.gmail.com>
 <EE68EB27-E780-476A-867C-61DF328D1B9C@tfeb.org>
 <20180619204536.GA91748@server.rulingia.com>
 <F1BC14F5-0E4A-4A18-914D-BB273753912B@pobox.com>
 <20180620050454.GC91748@server.rulingia.com>
 <CANCZdfrxEipUKKN4BZzUm0NYSjUFJYuaFp8NA2Xgq+Eb+806sw@mail.gmail.com>
 <20180620081032.GF28267@eureka.lemis.com>
 <CABH=_VReMmH1Smkb8aZnuFZh2Dct7E-d0X_KxwdHWicWaz42vg@mail.gmail.com>
 <20180621030505.GA89671@server.rulingia.com>
 <CABH=_VSzhumPL4xUfLpwpJRH9Qgv5TLy8M5ombeiw2VnGikdYg@mail.gmail.com>
 <CE5AFD89-9E9E-41DD-A243-6DD471B54B2A@alchemistowl.org>
Message-ID: <CABH=_VR91sbBbf5pxBeB5AktQp_qXoVc8zghDKXEJPOXf21yYg@mail.gmail.com>

On 6/21/18, Arrigo Triulzi <arrigo at alchemistowl.org> wrote:
> On 21 Jun 2018, at 16:00, Paul Winalski <paul.winalski at gmail.com> wrote:
> [...]
>> for the microcode to customers.  There were several hacks in there to
>> slow down the disk I/O so that it didn't outperform the model 30.
>
> Is this the origin of the lore on “the IBM slowdown device”?
>
> I seem to recall there was also some trickery at the CPU level so that you
> could “field upgrade” between two models by removing it but a) I cannot find
> the source and b) my Pugh book is far and cannot scan through it.

I don't know about that for IBM systems, but DEC pulled that trick
with the VAX 8500 series.  Venus, the successor to the 11/780, was
originally to be called the 11/790 and was done in ECL by the PDP-10
folks.  The project suffered many delays and badly missed its initial
market window.  It eventually was released as the VAX 8600.  It had a
rather short market life because by that time the next generation CPU,
codenamed Nautilus and implemented in TTL, was nearly ready for market
and offered comparable performance.  There was also a slower and lower
cost system in that series codenamed Skipjack.  When it finally came
time to market these machines, it was found that the product line
needed a reduced cost version of Skipjack.  Rather than design a new
CPU, they just put NOPs in the Skipjack microcode to slow it down.
The official code name for this machine was Flounder, but within DEC
engineering we called it "Wimpjack".  Customers could buy a field
upgrade for Flounder microcode that restored it to Skipjack
performance levels.

-Paul W.


From beebe at math.utah.edu  Fri Jun 22 08:44:28 2018
From: beebe at math.utah.edu (Nelson H. F. Beebe)
Date: Thu, 21 Jun 2018 16:44:28 -0600
Subject: [TUHS] core
Message-ID: <CMM.0.96.0.1529621068.beebe@gamma.math.utah.edu>

The discussion of the last few days about the Colossus is getting
somewhat off-topic for Unix heritage and history, but perhaps this
list of titles (truncated to 80 characters) and locations may be of
interest to some TUHS list readers.

------------------------------------------------------------------------

    Papers and book chapters about the Colossus at Bletchley Park

Journal- and subject specific Bibliography files are found at

	http://www.math.utah.edu/pub/tex/bib/NAME.bib

and author-specific ones under

	http://www.math.utah.edu/pub/bibnet/authors/N/

where N is the first letter of the bibliography filename.

The first line in each group contains the bibliography filename, and
the citation label of the cited publication.  DOIs are supplied
whenever they are available.  Paragraphs are sorted by filename.

adabooks.bib             	 Randell:1976:C
The COLOSSUS

annhistcomput.bib        	 Good:1979:EWC
Early Work on Computers at Bletchley
https://doi.org/10.1109/MAHC.1979.10011

annhistcomput.bib        	 deLeeuw:2007:HIS
Tunny and Colossus: breaking the Lorenz Schl{\"u}sselzusatz traffic
https://www.sciencedirect.com/science/article/pii/B978044451608450016X

babbage-charles.bib      	 Randell:1976:C
The COLOSSUS

cacm2010.bib             	 Anderson:2013:MNF
Max Newman: forgotten man of early British computing
https://doi.org/10.1145/2447976.2447986

computer1990.bib         	 Anonymous:1996:UWI
Update: WW II Colossus computer is restored to operation

cryptography.bib         	 Good:1979:EWC
Early Work on Computers at Bletchley

cryptography1990.bib     	 Anonymous:1997:CBP
The Colossus of Bletchley Park, IEEE Rev., vol. 41, no. 2, pp. 55--59 [Book Revi
https://doi.org/10.1109/MAHC.1997.560758

cryptography2000.bib     	 Rojas:2000:FCH
The first computers: history and architectures

cryptography2010.bib     	 Copeland:2010:CBG
Colossus: Breaking the German `Tunny' Code at Bletchley Park. An Illustrated His

cryptologia.bib          	 Michie:2002:CBW
Colossus and the Breaking of the Wartime ``Fish'' Codes

debroglie-louis.bib      	 Homberger:1970:CMN
The Cambridge mind: ninety years of the booktitle Cambridge Review, 1879--1969

dijkstra-edsger-w.bib    	 Randell:1980:C
The COLOSSUS

dyson-freeman-j.bib      	 Brockman:2015:WTA
What to think about machines that think: today's leading thinkers on the age of

fparith.bib              	 Randell:1977:CGC
Colossus: Godfather of the Computer

ieeeannhistcomput.bib    	 Anonymous:1997:CBP
The Colossus of Bletchley Park, IEEE Rev., vol. 41, no. 2, pp. 55--59 [Book Revi
https://doi.org/10.1109/MAHC.1997.560758

lncs1997b.bib            	 Carter:1997:BLC
The Breaking of the Lorenz Cipher: An Introduction to the Theory behind the Oper

lncs2000.bib             	 Sale:2000:CGL
Colossus and the German Lorenz Cipher --- Code Breaking in WW II

master.bib               	 Randell:1977:CGC
Colossus: Godfather of the Computer

mathcw.bib               	 Randell:1982:ODC
The Origins of Digital Computers: Selected Papers
http://dx.doi.org/10.1007/978-3-642-61812-3

rutherford-ernest.bib    	 Homberger:1970:CMN
The Cambridge mind: ninety years of the booktitle Cambridge Review, 1879--1969

rutherfordj.bib          	 Copeland:2010:CBG
Colossus: Breaking the German `Tunny' Code at Bletchley Park. An Illustrated His

sigcse1990.bib           	 Plimmer:1998:MIW
Machines invented for WW II code breaking
https://doi.org/10.1145/306286.306309

turing-alan-mathison.bib 	 Randell:1976:C
The COLOSSUS

von-neumann-john.bib     	 Randell:1982:ODC
The Origins of Digital Computers: Selected Papers

-------------------------------------------------------------------------------
- Nelson H. F. Beebe                    Tel: +1 801 581 5254                  -
- University of Utah                    FAX: +1 801 581 4148                  -
- Department of Mathematics, 110 LCB    Internet e-mail: beebe at math.utah.edu  -
- 155 S 1400 E RM 233                       beebe at acm.org  beebe at computer.org -
- Salt Lake City, UT 84112-0090, USA    URL: http://www.math.utah.edu/~beebe/ -
-------------------------------------------------------------------------------


From gtaylor at tnetconsulting.net  Fri Jun 22 09:07:48 2018
From: gtaylor at tnetconsulting.net (Grant Taylor)
Date: Thu, 21 Jun 2018 17:07:48 -0600
Subject: [TUHS] core
In-Reply-To: <CMM.0.96.0.1529621068.beebe@gamma.math.utah.edu>
References: <CMM.0.96.0.1529621068.beebe@gamma.math.utah.edu>
Message-ID: <f1a2f732-400f-8044-1c90-9a8500a17d15@spamtrap.tnetconsulting.net>

On 06/21/2018 04:44 PM, Nelson H. F. Beebe wrote:
> The discussion of the last few days about the Colossus is getting somewhat 
> off-topic for Unix heritage and history

This is the the first time that this problem has come up.

Would it be worth while to have another (sub)mailing list that such 
topics can be moved to?

The fact that the topics linger makes me think that people are 
interested in having the conversations.  So perhaps another venue is 
worthwhile.

Sort of like cctalk & cctech.

I make a motion for tuhs-off-topic.



-- 
Grant. . . .
unix || die

-------------- next part --------------
A non-text attachment was scrubbed...
Name: smime.p7s
Type: application/pkcs7-signature
Size: 3982 bytes
Desc: S/MIME Cryptographic Signature
URL: <http://minnie.tuhs.org/pipermail/tuhs/attachments/20180621/dcdd2ccb/attachment.bin>

From toby at telegraphics.com.au  Fri Jun 22 09:38:45 2018
From: toby at telegraphics.com.au (Toby Thain)
Date: Thu, 21 Jun 2018 19:38:45 -0400
Subject: [TUHS] core
In-Reply-To: <f1a2f732-400f-8044-1c90-9a8500a17d15@spamtrap.tnetconsulting.net>
References: <CMM.0.96.0.1529621068.beebe@gamma.math.utah.edu>
 <f1a2f732-400f-8044-1c90-9a8500a17d15@spamtrap.tnetconsulting.net>
Message-ID: <d3dfdfd7-32d6-97fb-8d80-f24deb8bf430@telegraphics.com.au>

On 2018-06-21 7:07 PM, Grant Taylor via TUHS wrote:
> On 06/21/2018 04:44 PM, Nelson H. F. Beebe wrote:
>> The discussion of the last few days about the Colossus is getting
>> somewhat off-topic for Unix heritage and history
> 
> This is the the first time that this problem has come up.
> 
> Would it be worth while to have another (sub)mailing list that such
> topics can be moved to?
> 
> Given that the topics linger makes me think that people are interested
> in having the conversations.  So perhaps another venue is worthwhile.
> 
> Sort of like cctalk & cctech.

...seems like you just named an appropriate venue for that conversation...

--T

> 
> I make a motion for tuhs-off-topic.
> 
> 
> 



From wkt at tuhs.org  Fri Jun 22 09:47:06 2018
From: wkt at tuhs.org (Warren Toomey)
Date: Fri, 22 Jun 2018 09:47:06 +1000
Subject: [TUHS] off-topic list
In-Reply-To: <f1a2f732-400f-8044-1c90-9a8500a17d15@spamtrap.tnetconsulting.net>
References: <CMM.0.96.0.1529621068.beebe@gamma.math.utah.edu>
 <f1a2f732-400f-8044-1c90-9a8500a17d15@spamtrap.tnetconsulting.net>
Message-ID: <20180621234706.GA23316@minnie.tuhs.org>

On Thu, Jun 21, 2018 at 05:07:48PM -0600, Grant Taylor via TUHS wrote:
>This is the the first time that this problem has come up.

No :-)

>Would it be worth while to have another (sub)mailing list that such 
>topics can be moved to? I make a motion for tuhs-off-topic.

I'm happy to make a mailing list here for it. But perhaps a name that
reflects its content. Computing history is maybe too generic. Ideas?

	Warren


From gtaylor at tnetconsulting.net  Fri Jun 22 11:11:08 2018
From: gtaylor at tnetconsulting.net (Grant Taylor)
Date: Thu, 21 Jun 2018 19:11:08 -0600
Subject: [TUHS] off-topic list
In-Reply-To: <20180621234706.GA23316@minnie.tuhs.org>
References: <CMM.0.96.0.1529621068.beebe@gamma.math.utah.edu>
 <f1a2f732-400f-8044-1c90-9a8500a17d15@spamtrap.tnetconsulting.net>
 <20180621234706.GA23316@minnie.tuhs.org>
Message-ID: <5833157e-5d40-11ca-24c3-df5917895003@spamtrap.tnetconsulting.net>

On 06/21/2018 05:47 PM, Warren Toomey wrote:
>> This is the the first time that this problem has come up.

This is *not* the first time that this problem has come up.

Apparently I can't articulate myself properly.

> No :-)

;-)

> I'm happy to make a mailing list here for it. But perhaps a name that 
> reflects its content. Computing history is maybe too generic. Ideas?

"tuhs-open-discussion"  It's a play on Open Systems and still related to 
things tangentially related to Unix.

"tuhs-cantina"

¯\_(ツ)_/¯



-- 
Grant. . . .
unix || die


From jnc at mercury.lcs.mit.edu  Fri Jun 22 12:21:19 2018
From: jnc at mercury.lcs.mit.edu (Noel Chiappa)
Date: Thu, 21 Jun 2018 22:21:19 -0400 (EDT)
Subject: [TUHS] off-topic list
Message-ID: <20180622022119.1370B18C073@mercury.lcs.mit.edu>

    > From: Warren Toomey

    > Computing history is maybe too generic.

There already is a "computer-history" list, hosted at the Postel Institute:

  http://www.postel.org/computer-history/

Unlike its sibling "Internet-history" list, it didn't catch on, though. (No
traffic for some years now.)

	Noel



From robert at timetraveller.org  Fri Jun 22 13:53:41 2018
From: robert at timetraveller.org (Robert Brockway)
Date: Fri, 22 Jun 2018 13:53:41 +1000 (AEST)
Subject: [TUHS] off-topic list
In-Reply-To: <20180621234706.GA23316@minnie.tuhs.org>
References: <CMM.0.96.0.1529621068.beebe@gamma.math.utah.edu>
 <f1a2f732-400f-8044-1c90-9a8500a17d15@spamtrap.tnetconsulting.net>
 <20180621234706.GA23316@minnie.tuhs.org>
Message-ID: <alpine.DEB.2.20.1806221349280.9560@mira.opentrend.net>

On Fri, 22 Jun 2018, Warren Toomey wrote:

> I'm happy to make a mailing list here for it. But perhaps a name that
> reflects its content. Computing history is maybe too generic. Ideas?

Other groups have often elected to go with -chat.

[TUHS-chat]?

Rob


From dave at horsfall.org  Fri Jun 22 14:18:01 2018
From: dave at horsfall.org (Dave Horsfall)
Date: Fri, 22 Jun 2018 14:18:01 +1000 (EST)
Subject: [TUHS] off-topic list
In-Reply-To: <20180621234706.GA23316@minnie.tuhs.org>
References: <CMM.0.96.0.1529621068.beebe@gamma.math.utah.edu>
 <f1a2f732-400f-8044-1c90-9a8500a17d15@spamtrap.tnetconsulting.net>
 <20180621234706.GA23316@minnie.tuhs.org>
Message-ID: <alpine.BSF.2.21.999.1806221416210.68981@aneurin.horsfall.org>

On Fri, 22 Jun 2018, Warren Toomey wrote:

> I'm happy to make a mailing list here for it. But perhaps a name that
> reflects its content. Computing history is maybe too generic. Ideas?

COFF - Computer Old Fart Followers.

-- Dave


From fair-tuhs at netbsd.org  Fri Jun 22 15:32:39 2018
From: fair-tuhs at netbsd.org (Erik E. Fair)
Date: Thu, 21 Jun 2018 22:32:39 -0700
Subject: [TUHS] Old mainframe I/O speed (was: core)
In-Reply-To: <CABH=_VR91sbBbf5pxBeB5AktQp_qXoVc8zghDKXEJPOXf21yYg@mail.gmail.com>
References: <CE5AFD89-9E9E-41DD-A243-6DD471B54B2A@alchemistowl.org>
Message-ID: <27914.1529645559@cesium.clock.org>

The VAX 8800 was also the advent of DEC's BI bus, an attempt to lock third-party I/O devices out of the VAX market and prevent "unauthorized" competition with DEC's own overpriced and underperforming I/O devices.

In late 1988 or early 1989 my group at Apple ordered a VAX-8810 to replace two 11/780s on the promise from DEC that all our UniBus and MASSbus peripherals would still work ... which we knew (from others on the Internet who'd tried and reported their experiences) to be a lie.

After allowing DEC field circus to embarrass themselves for a while trying to make it work, we cancelled our 8810 order and bought two 8650s instead (they cost half as much!), which we knew would run 4.3 BSD UNIX (unlike the 8800 series where we'd be stuck with Ultrix) and where all our old but still useful peripherals would still work. Surprise, DEC - your customers talk to each other and compare notes.

IIRC, as a consolation for DEC, we still bought a heavily discounted 6000 series BI machine with all new I/O to handle some other tasks that the 8650s weren't doing while also making clear to DEC that we understood the game they'd tried to play with us.

After that, Apple engineering concentrated on Sun & SGI gear, along with our Cray running UniCOS, but Apple IS&T (corporate IT) bought quite a bit of VAX gear to run VMS for certain applications they had to support.

Part of being in the Unix community was participating in it as a community and sharing experiences like this for the benefit of all (and to keep hardware vendors honest) - the Unix-Wizards Internet mailing list and the comp.unix USENET newsgroup were invaluable in this regard.

	Erik Fair


>Date: Thu, 21 Jun 2018 16:39:23 -0400
>From: Paul Winalski <paul.winalski at gmail.com>
>
>On 6/21/18, Arrigo Triulzi <arrigo at alchemistowl.org> wrote:
>> On 21 Jun 2018, at 16:00, Paul Winalski <paul.winalski at gmail.com> wrote:
>> [...]
>>> for the microcode to customers.  There were several hacks in there to
>>> slow down the disk I/O so that it didn't outperform the model 30.
>>
>> Is this the origin of the lore on “the IBM slowdown device”?
>>
>> I seem to recall there was also some trickery at the CPU level so that you
>> could “field upgrade” between two models by removing it but a) I cannot find
>> the source and b) my Pugh book is far away and I cannot scan through it.
>
>I don't know about that for IBM systems, but DEC pulled that trick
>with the VAX 8500 series.  Venus, the successor to the 11/780, was
>originally to be called the 11/790 and was done in ECL by the PDP-10
>folks.  The project suffered many delays and badly missed its initial
>market window.  It eventually was released as the VAX 8600.  It had a
>rather short market life because by that time the next generation CPU,
>codenamed Nautilus and implemented in TTL, was nearly ready for market
>and offered comparable performance.  There was also a slower and lower
>cost system in that series codenamed Skipjack.  When it finally came
>time to market these machines, it was found that the product line
>needed a reduced cost version of Skipjack.  Rather than design a new
>CPU, they just put NOPs in the Skipjack microcode to slow it down.
>The official code name for this machine was Flounder, but within DEC
>engineering we called it "Wimpjack".  Customers could buy a field
>upgrade for Flounder microcode that restored it to Skipjack
>performance levels.
>
>-Paul W.


From krewat at kilonet.net  Fri Jun 22 21:44:28 2018
From: krewat at kilonet.net (Arthur Krewat)
Date: Fri, 22 Jun 2018 07:44:28 -0400
Subject: [TUHS] off-topic list
In-Reply-To: <alpine.BSF.2.21.999.1806221416210.68981@aneurin.horsfall.org>
References: <CMM.0.96.0.1529621068.beebe@gamma.math.utah.edu>
 <f1a2f732-400f-8044-1c90-9a8500a17d15@spamtrap.tnetconsulting.net>
 <20180621234706.GA23316@minnie.tuhs.org>
 <alpine.BSF.2.21.999.1806221416210.68981@aneurin.horsfall.org>
Message-ID: <94c51dad-33b7-d514-f364-1f4c1a701017@kilonet.net>

I was trying to come up with an acronym involving "old" - you got it ;)


On 6/22/2018 12:18 AM, Dave Horsfall wrote:
> COFF - Computer Old Fart Followers. 



From jnc at mercury.lcs.mit.edu  Fri Jun 22 23:11:29 2018
From: jnc at mercury.lcs.mit.edu (Noel Chiappa)
Date: Fri, 22 Jun 2018 09:11:29 -0400 (EDT)
Subject: [TUHS] Old mainframe I/O speed (was: core)
Message-ID: <20180622131129.C82D018C073@mercury.lcs.mit.edu>

    > From: "Erik E. Fair"

    > ordered a VAX-8810 to replace two 11/780s on the promise from DEC that
    > all our UniBus and MASSbus peripherals would still work ... which we
    > knew (from others on the Internet who'd tried and reported their
    > experiences) to be a lie.

Just out of curiosity, why'd you all order something you knew wouldn't work?
So you could get a better deal out of DEC for whatever you ordered instead,
later, as they tried to make it up to you all for trying to sell you something
broken?

	Noel



From clemc at ccc.com  Fri Jun 22 23:32:17 2018
From: clemc at ccc.com (Clem Cole)
Date: Fri, 22 Jun 2018 09:32:17 -0400
Subject: [TUHS] Old mainframe I/O speed (was: core)
In-Reply-To: <27914.1529645559@cesium.clock.org>
References: <CE5AFD89-9E9E-41DD-A243-6DD471B54B2A@alchemistowl.org>
 <CABH=_VR91sbBbf5pxBeB5AktQp_qXoVc8zghDKXEJPOXf21yYg@mail.gmail.com>
 <27914.1529645559@cesium.clock.org>
Message-ID: <CAC20D2NSjx3iD86S2g9zykP=EawMDTT2+741hVCrkPPfpbWGkA@mail.gmail.com>

On Fri, Jun 22, 2018 at 1:32 AM, Erik E. Fair <fair-tuhs at netbsd.org> wrote:

> The VAX 8800 was also the advent of the DEC BI bus attempt to lock
> third-party I/O devices out of the VAX market and prevent "unauthorized"
> competition with their own overpriced and underperforming I/O devices.
>
Interesting story on the BI.   My friend Dave Cane, who had been #2 on
the 780 and led the 750 project, was the primary force behind the BI,
which was developed by the team in the Laboratory Products Division
(LDP).  The BI was supposed to be (i.e. designed as) an 'open' bus, and
DEC was originally going to license the driver chips to a number of 3rd
parties (I believe Western Digital, Mostek, TI).   The whole idea was to
create a 3rd-party market for I/O devices for LDP.   By making sure
everyone used the same interface chips, there was a reasonable belief
that the boards would not break bus protocol or have some of the issues
that they had had with Omnibus and Unibus.

But ... once the BI was completed and actually put into the 8800 and the
main line systems, DEC central marketing made it private and locked up.
Dave quit (his resignation letter was sent out on the engineering
mailing list -- KO and GB were not happy -- and one of the things he
cites is the fact that he thought taking the BI private was going to be
bad).

FYI: Masscomp was formed shortly thereafter (by mostly ex-LDP, 750 and
VMS folks) and Dave used Multibus and later VMEbus for the I/O (but he
did a private SMI-like split-transaction memory bus).

One of the other BI people (whose name now escapes me, although I can
see his face in my mind; maybe I'll think of it later) would go on to do
PCI for Alpha a couple of years later.   As I said, DEC did manage to
get that one public, after the BI was made private as Erik points out.



-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://minnie.tuhs.org/pipermail/tuhs/attachments/20180622/86e2680a/attachment.html>

From lm at mcvoy.com  Sat Jun 23 00:28:46 2018
From: lm at mcvoy.com (Larry McVoy)
Date: Fri, 22 Jun 2018 07:28:46 -0700
Subject: [TUHS] off-topic list
In-Reply-To: <alpine.BSF.2.21.999.1806221416210.68981@aneurin.horsfall.org>
References: <CMM.0.96.0.1529621068.beebe@gamma.math.utah.edu>
 <f1a2f732-400f-8044-1c90-9a8500a17d15@spamtrap.tnetconsulting.net>
 <20180621234706.GA23316@minnie.tuhs.org>
 <alpine.BSF.2.21.999.1806221416210.68981@aneurin.horsfall.org>
Message-ID: <20180622142846.GS21272@mcvoy.com>

For the record, I'm fine with old stuff getting discussed on TUHS.
Even not Unix stuff.  We wandered into Linux/ext2 with Ted, that was fun.
We've wandered into the VAX history (UWisc got an 8600 and called it
"speedy" - what a mistake) and I learned stuff I didn't know, so that
was fun (and sounded just like history at every company I've worked for,
sadly :)

I think this list has self selected for adults who are reasonable.  So
we mostly are fine.

Warren, I'd ask your heavy hitters, like Ken, if it's ok if the list
wanders a bit.  If he and his ilk are fine, I'd just leave it all on
one list.  It's a fun list.
-- 
---
Larry McVoy            	     lm at mcvoy.com             http://www.mcvoy.com/lm 


From tfb at tfeb.org  Sat Jun 23 00:46:06 2018
From: tfb at tfeb.org (Tim Bradshaw)
Date: Fri, 22 Jun 2018 15:46:06 +0100
Subject: [TUHS] off-topic list
In-Reply-To: <20180622142846.GS21272@mcvoy.com>
References: <CMM.0.96.0.1529621068.beebe@gamma.math.utah.edu>
 <f1a2f732-400f-8044-1c90-9a8500a17d15@spamtrap.tnetconsulting.net>
 <20180621234706.GA23316@minnie.tuhs.org>
 <alpine.BSF.2.21.999.1806221416210.68981@aneurin.horsfall.org>
 <20180622142846.GS21272@mcvoy.com>
Message-ID: <DE56C21F-CF7D-4B44-BC43-0C27CBD6DD7A@tfeb.org>

On 22 Jun 2018, at 15:28, Larry McVoy <lm at mcvoy.com> wrote:
> 
> For the record, I'm fine with old stuff getting discussed on TUHS.
> Even not Unix stuff.  We wandered into Linux/ext2 with Ted, that was fun.
> We've wandered into the VAX history (UWisc got an 8600 and called it
> "speedy" - what a mistake) and I learned stuff I didn't know, so that
> was fun (and sounded just like history at every company I've worked for,
> sadly :)

As a perpetrator of some of this off-topicness I think (on reflection, I originally also wondered about a different list) that a good approach would be to allow anyone, if something is just clearly off-topic, to say 'please take this to a more appropriate forum', but if no-one does so it should be fine.  Obviously this requires people to actually do that, but I hope no-one sits and fumes without saying anything.  Equally obviously this is just my opinion.

(My main problem with off-topic things is now that the OSX mail client no longer seems to be able to reliably thread things which I find an astonishing regression, so I have to delete several threads.  I suspect it gets precisely no care and feeding from Apple, at least not from anyone who understands email.)

--tim


From lm at mcvoy.com  Sat Jun 23 00:54:02 2018
From: lm at mcvoy.com (Larry McVoy)
Date: Fri, 22 Jun 2018 07:54:02 -0700
Subject: [TUHS] off-topic list
In-Reply-To: <DE56C21F-CF7D-4B44-BC43-0C27CBD6DD7A@tfeb.org>
References: <CMM.0.96.0.1529621068.beebe@gamma.math.utah.edu>
 <f1a2f732-400f-8044-1c90-9a8500a17d15@spamtrap.tnetconsulting.net>
 <20180621234706.GA23316@minnie.tuhs.org>
 <alpine.BSF.2.21.999.1806221416210.68981@aneurin.horsfall.org>
 <20180622142846.GS21272@mcvoy.com>
 <DE56C21F-CF7D-4B44-BC43-0C27CBD6DD7A@tfeb.org>
Message-ID: <20180622145402.GT21272@mcvoy.com>

On Fri, Jun 22, 2018 at 03:46:06PM +0100, Tim Bradshaw wrote:
> (My main problem with off-topic things is now that the OSX mail client no longer seems to be able to reliably thread things which I find an astonishing regression, so I have to delete several threads.  I suspect it gets precisely no care and feeding from Apple, at least not from anyone who understands email.)

Try mutt, it's what I use and it threads topics just fine.


From ralph at inputplus.co.uk  Sat Jun 23 00:52:14 2018
From: ralph at inputplus.co.uk (Ralph Corderoy)
Date: Fri, 22 Jun 2018 15:52:14 +0100
Subject: [TUHS] off-topic list
In-Reply-To: <20180622142846.GS21272@mcvoy.com>
References: <CMM.0.96.0.1529621068.beebe@gamma.math.utah.edu>
 <f1a2f732-400f-8044-1c90-9a8500a17d15@spamtrap.tnetconsulting.net>
 <20180621234706.GA23316@minnie.tuhs.org>
 <alpine.BSF.2.21.999.1806221416210.68981@aneurin.horsfall.org>
 <20180622142846.GS21272@mcvoy.com>
Message-ID: <20180622145214.54A691FBFC@orac.inputplus.co.uk>

Larry wrote:
> For the record, I'm fine with old stuff getting discussed on TUHS.
> Even not Unix stuff.

+1

> I think this list has self selected for adults who are reasonable.
> So we mostly are fine.

And when it's trying to recreate the Usenet glory days of `Is this the
longest thread ever?', Warren steps in to admonish.

Cheers, Ralph.


From spedraja at gmail.com  Sat Jun 23 01:13:04 2018
From: spedraja at gmail.com (SPC)
Date: Fri, 22 Jun 2018 17:13:04 +0200
Subject: [TUHS] off-topic list
In-Reply-To: <20180622145214.54A691FBFC@orac.inputplus.co.uk>
References: <CMM.0.96.0.1529621068.beebe@gamma.math.utah.edu>
 <f1a2f732-400f-8044-1c90-9a8500a17d15@spamtrap.tnetconsulting.net>
 <20180621234706.GA23316@minnie.tuhs.org>
 <alpine.BSF.2.21.999.1806221416210.68981@aneurin.horsfall.org>
 <20180622142846.GS21272@mcvoy.com>
 <20180622145214.54A691FBFC@orac.inputplus.co.uk>
Message-ID: <CACytpF_Y-DYSezPdDhRK=m6dpjoWgruMsH-cLJYAtYR9oiuV6g@mail.gmail.com>


On Fri, 22 Jun 2018 at 16:59, Ralph Corderoy (<ralph at inputplus.co.uk>)
wrote:

> Larry wrote:
> > For the record, I'm fine with old stuff getting discussed on TUHS.
> > Even not Unix stuff.
>

Me too. But...

Just in case, try something like 'old-iron', 'tuhs-old-iron-chat',
'tuhs-alt-chat'...

Anyway, as a more-or-less community manager on modern social networks, I
think it won't be easy to separate the topics of both lists.

Gracias | Regards - Saludos | Greetings | Freundliche Grüße | Salutations
-- 
*Sergio Pedraja*
--
http://www.linkedin.com/in/sergiopedraja
-----
Don't believe everything you see, nor believe you are seeing everything
-----
"The state of a Backup is unknown
until you try to restore it" (- nixCraft)
-----

From steffen at sdaoden.eu  Sat Jun 23 01:17:51 2018
From: steffen at sdaoden.eu (Steffen Nurpmeso)
Date: Fri, 22 Jun 2018 17:17:51 +0200
Subject: [TUHS] off-topic list
In-Reply-To: <20180622145402.GT21272@mcvoy.com>
References: <CMM.0.96.0.1529621068.beebe@gamma.math.utah.edu>
 <f1a2f732-400f-8044-1c90-9a8500a17d15@spamtrap.tnetconsulting.net>
 <20180621234706.GA23316@minnie.tuhs.org>
 <alpine.BSF.2.21.999.1806221416210.68981@aneurin.horsfall.org>
 <20180622142846.GS21272@mcvoy.com>
 <DE56C21F-CF7D-4B44-BC43-0C27CBD6DD7A@tfeb.org>
 <20180622145402.GT21272@mcvoy.com>
Message-ID: <20180622151751.BEK9i%steffen@sdaoden.eu>

Larry McVoy wrote in <20180622145402.GT21272 at mcvoy.com>:
 |On Fri, Jun 22, 2018 at 03:46:06PM +0100, Tim Bradshaw wrote:
 |> (My main problem with off-topic things is now that the OSX mail client \
 |> no longer seems to be able to reliably thread things which I find \
 |> an astonishing regression, so I have to delete several threads.  \
 |> I suspect it gets precisely no care and feeding from Apple, at least \
 |> not from anyone who understands email.)
 |
 |Try mutt, it's what I use and it threads topics just fine.
 --End of <20180622145402.GT21272 at mcvoy.com>

I understood that as a request that people seem to have forgotten
that In-Reply-To: should be removed when doing the "Yz (Was: Xy)",
so that a new thread is created, actually.  Or take me: i seem to
never have learned that at first!

--steffen
|
|Der Kragenbaer,                The moon bear,
|der holt sich munter           he cheerfully and one by one
|einen nach dem anderen runter  wa.ks himself off
|(By Robert Gernhardt)


From clemc at ccc.com  Sat Jun 23 01:28:55 2018
From: clemc at ccc.com (Clem Cole)
Date: Fri, 22 Jun 2018 11:28:55 -0400
Subject: [TUHS] off-topic list
In-Reply-To: <20180622142846.GS21272@mcvoy.com>
References: <CMM.0.96.0.1529621068.beebe@gamma.math.utah.edu>
 <f1a2f732-400f-8044-1c90-9a8500a17d15@spamtrap.tnetconsulting.net>
 <20180621234706.GA23316@minnie.tuhs.org>
 <alpine.BSF.2.21.999.1806221416210.68981@aneurin.horsfall.org>
 <20180622142846.GS21272@mcvoy.com>
Message-ID: <CAC20D2NQ60Y1VSGNOdbzzSeTFvCGxmzY+H_dXqYN6DvVbLTEQw@mail.gmail.com>

On Fri, Jun 22, 2018 at 10:28 AM, Larry McVoy <lm at mcvoy.com> wrote:

> I think this list has self selected for adults

...right ... problem is, if someone never grew up, how would they know ...

Seriously, I think you pretty much nailed it.   It is about being
respectful of everyone on the list, particularly those who were there
and lived the history.  It allows us all to learn from some fun memories
and broaden our understanding beyond what we thought we knew as complete
'fact'.   I've enjoyed filling in tidbits I've collected along my
strange journey, as well as having others clue me in on some details I
did not know.     As Larry said, it really is an interesting community,
and I think this list has helped to set the historical record straight
in a couple of places that I am aware of.


From tfb at tfeb.org  Sat Jun 23 02:07:57 2018
From: tfb at tfeb.org (Tim Bradshaw)
Date: Fri, 22 Jun 2018 17:07:57 +0100
Subject: [TUHS] off-topic list
In-Reply-To: <20180622145402.GT21272@mcvoy.com>
References: <CMM.0.96.0.1529621068.beebe@gamma.math.utah.edu>
 <f1a2f732-400f-8044-1c90-9a8500a17d15@spamtrap.tnetconsulting.net>
 <20180621234706.GA23316@minnie.tuhs.org>
 <alpine.BSF.2.21.999.1806221416210.68981@aneurin.horsfall.org>
 <20180622142846.GS21272@mcvoy.com>
 <DE56C21F-CF7D-4B44-BC43-0C27CBD6DD7A@tfeb.org>
 <20180622145402.GT21272@mcvoy.com>
Message-ID: <0EB33B2A-581F-4B3D-A498-E6E98B6B43CE@tfeb.org>

On 22 Jun 2018, at 15:54, Larry McVoy <lm at mcvoy.com> wrote:
> 
> Try mutt, it's what I use and it threads topics just fine.

The trouble is I have > 10 years of mail sitting in the OSX mail system and although I could probably export it all (it at least used to be relatively straightforward to do this) the sheer terror of doing that is, well, terrifying because there's a lot of stuff in there that matters.

Using the system-provided mail tool was a stupid decision, and one I managed to avoid with the browser &c, but it's too late now.

(Now this is an off-topic discussion from a discussion about off-topicness.)

From lm at mcvoy.com  Sat Jun 23 02:45:03 2018
From: lm at mcvoy.com (Larry McVoy)
Date: Fri, 22 Jun 2018 09:45:03 -0700
Subject: [TUHS] off-topic list
In-Reply-To: <20180622145214.54A691FBFC@orac.inputplus.co.uk>
References: <CMM.0.96.0.1529621068.beebe@gamma.math.utah.edu>
 <f1a2f732-400f-8044-1c90-9a8500a17d15@spamtrap.tnetconsulting.net>
 <20180621234706.GA23316@minnie.tuhs.org>
 <alpine.BSF.2.21.999.1806221416210.68981@aneurin.horsfall.org>
 <20180622142846.GS21272@mcvoy.com>
 <20180622145214.54A691FBFC@orac.inputplus.co.uk>
Message-ID: <20180622164503.GA21272@mcvoy.com>

On Fri, Jun 22, 2018 at 03:52:14PM +0100, Ralph Corderoy wrote:
> And when it's trying to recreate the Usenet glory days of `Is this the
> longest thread ever?', Warren steps in to admonish.

I've been trying (not always succeeding) to not help things get to that
point.  Warren is awesome but we all should have a goal of not "needing"
him to step in.  

--lm


From gtaylor at tnetconsulting.net  Sat Jun 23 03:17:04 2018
From: gtaylor at tnetconsulting.net (Grant Taylor)
Date: Fri, 22 Jun 2018 11:17:04 -0600
Subject: [TUHS] off-topic list
In-Reply-To: <20180622142846.GS21272@mcvoy.com>
References: <CMM.0.96.0.1529621068.beebe@gamma.math.utah.edu>
 <f1a2f732-400f-8044-1c90-9a8500a17d15@spamtrap.tnetconsulting.net>
 <20180621234706.GA23316@minnie.tuhs.org>
 <alpine.BSF.2.21.999.1806221416210.68981@aneurin.horsfall.org>
 <20180622142846.GS21272@mcvoy.com>
Message-ID: <adb496d2-b035-d8fc-c44f-57099ec8aca5@spamtrap.tnetconsulting.net>

On 06/22/2018 08:28 AM, Larry McVoy wrote:
> For the record, I'm fine with old stuff getting discussed on TUHS. 
> Even not Unix stuff.

I agree.

It seems like a number of people are okay with non-TUHS specific topics 
being discussed on TUHS.  I'm sure there are others that want non-TUHS 
specific topics to stay off TUHS.  I feel like each group is entitled to 
their opinions.

So, perhaps we could leverage technology to satisfy both groups.

I believe that Warren hosts TUHS using Mailman.  I'm fairly certain that 
Mailman supports topics / keywords.  Thus we could possibly configure 
Mailman to support such topics and allow subscribers to select which 
topics they care about and, ever so important, whether they want to 
receive messages that don't match any defined topics.
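
Mailman's topic filters are, at heart, regexes matched against the 
Subject: and Keywords: headers.  A rough sketch of that mechanism (the 
topic names and patterns here are invented for illustration, not 
anything configured on this list):

```python
import re

# Hypothetical topic definitions, in the style of Mailman's topic
# filters: each topic is a regex tried against Subject: and Keywords:.
TOPICS = {
    "unix-history": re.compile(r"\b(unix|pdp|vax)\b", re.IGNORECASE),
    "off-topic": re.compile(r"off.?topic", re.IGNORECASE),
}

def match_topics(subject, keywords=""):
    """Return the names of all topics whose pattern hits either header."""
    text = f"{subject}\n{keywords}"
    return [name for name, pat in TOPICS.items() if pat.search(text)]

print(match_topics("[TUHS] off-topic list"))   # ['off-topic']
print(match_topics("VAX 8600 microcode"))      # ['unix-history']
```

A subscriber would then opt in or out of each topic name, plus the 
catch-all for messages matching no topic at all.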

> We wandered into Linux/ext2 with Ted, that was fun.  We've wandered 
> into the VAX history (UWisc got an 8600 and called it "speedy" - what a 
> mistake) and I learned stuff I didn't know, so that was fun (and sounded 
> just like history at every company I've worked for, sadly :)

I routinely find things being discussed that I didn't know I was 
interested in, but learned that I was while reading.  I almost always 
learn something from every (major) thread.

> I think this list has self selected for adults who are reasonable. 
> So we mostly are fine.

:-)



-- 
Grant. . . .
unix || die

-------------- next part --------------
A non-text attachment was scrubbed...
Name: smime.p7s
Type: application/pkcs7-signature
Size: 3982 bytes
Desc: S/MIME Cryptographic Signature
URL: <http://minnie.tuhs.org/pipermail/tuhs/attachments/20180622/04150480/attachment.bin>

From gtaylor at tnetconsulting.net  Sat Jun 23 03:27:30 2018
From: gtaylor at tnetconsulting.net (Grant Taylor)
Date: Fri, 22 Jun 2018 11:27:30 -0600
Subject: [TUHS] off-topic list
In-Reply-To: <20180622151751.BEK9i%steffen@sdaoden.eu>
References: <CMM.0.96.0.1529621068.beebe@gamma.math.utah.edu>
 <f1a2f732-400f-8044-1c90-9a8500a17d15@spamtrap.tnetconsulting.net>
 <20180621234706.GA23316@minnie.tuhs.org>
 <alpine.BSF.2.21.999.1806221416210.68981@aneurin.horsfall.org>
 <20180622142846.GS21272@mcvoy.com>
 <DE56C21F-CF7D-4B44-BC43-0C27CBD6DD7A@tfeb.org>
 <20180622145402.GT21272@mcvoy.com> <20180622151751.BEK9i%steffen@sdaoden.eu>
Message-ID: <b6ef82de-739a-ed8e-0e91-3abfa2fb5f07@spamtrap.tnetconsulting.net>

On 06/22/2018 09:17 AM, Steffen Nurpmeso wrote:
> I understood that as a request that people seem to have forgotten that 
> In-Reply-To: should be removed when doing the "Yz (Was: Xy)", so that 
> a new thread is created, actually.  Or take me: i seem to never have 
> learned that at first!

I agree that removing In-Reply-To and References is a good thing to do 
when breaking a thread.  But not all MUAs make that possible, much less 
easy.  Which means that you're left with starting a new message to the 
same recipients.
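
The header surgery being discussed is small; a minimal sketch using 
Python's stdlib email package (the function name and sample message-ids 
are mine, not from any MUA):

```python
import email.message

def break_thread(reply, new_topic):
    """Detach a reply into a new thread: retitle it 'Yz (Was: Xy)' and
    drop the threading headers so MUAs treat it as a fresh thread."""
    old_subject = reply["Subject"] or ""
    del reply["Subject"]
    reply["Subject"] = f"{new_topic} (Was: {old_subject})"
    # Removing these is what actually detaches the message: threading
    # MUAs follow In-Reply-To/References, not the subject line.
    del reply["In-Reply-To"]
    del reply["References"]
    return reply

msg = email.message.EmailMessage()
msg["Subject"] = "top"
msg["In-Reply-To"] = "<abc@example.org>"
msg["References"] = "<abc@example.org>"
msg = break_thread(msg, "Control-T")
print(msg["Subject"])        # Control-T (Was: top)
print(msg["In-Reply-To"])    # None
```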

I've also seen how (sub)threads tend to drift from their original intent 
and then someone modifies the subject some time later.



-- 
Grant. . . .
unix || die


From fair-tuhs at netbsd.org  Sat Jun 23 03:49:54 2018
From: fair-tuhs at netbsd.org (Erik E. Fair)
Date: Fri, 22 Jun 2018 10:49:54 -0700
Subject: [TUHS] Old mainframe I/O speed (was: core)
In-Reply-To: <20180622131129.C82D018C073@mercury.lcs.mit.edu>
Message-ID: <425.1529689794@cesium.clock.org>


>Date: Fri, 22 Jun 2018 09:11:29 -0400 (EDT)
>From: jnc at mercury.lcs.mit.edu (Noel Chiappa)
>
>    > From: "Erik E. Fair"
>
>    > ordered a VAX-8810 to replace two 11/780s on the promise from DEC that
>    > all our UniBus and MASSbus peripherals would still work ... which we
>    > knew (from others on the Internet who'd tried and reported their
>    > experiences) to be a lie.
>
>Just out of curiosity, why'd you all order something you knew wouldn't work?
>So you could get a better deal out of DEC for whatever you ordered instead,
>later, as they tried to make it up to you all for trying to sell you something
>broken?

Precisely. It worked too, at some cost in our time. The DEC salespeople were willing to put their lie in writing, you see ...

One of those 8650s was "apple.com" (host) for quite a number of years, as the 11/780 before it: DNS primary NS for the domain, SMTP server, NTP server (VAXen had decent, low-drift hardware clocks), UUCP/USENET host (as "apple" in that world), NNTP server - it was our public face to the world. I was given the explicit mandate to make it so when I was hired in 1988.

Unix was the OS for a wide range of facilities within Apple. Probably still is (I've been gone from there since 1997, but I still hear from folks within from time to time). As hardware got cheaper and more capable, other systems were added to the mix to provide anonymous FTP (ftp.apple.com started as a Mac IIci running A/UX under my desk), HTTP service, and so on.

The main thing that changed over time was what hardware (and version of Unix) we were running for whatever task or service (the RISC bloom was wonderful to see, even if the vendors tried bending Unix in to a proprietary lock-in thing - it's rather sad that we're mostly stuck with the awful x86 ISA after all that), and the overall character of the system use. When I arrived, Unix was used as a now-classical interactive timesharing system (with Macs as terminals - does anyone else remember the wonderful "UnixWindows" multi-windowing terminal emulator for MacOS, with its associated Unix back-end?), and by the time I left, Macs were TCP/IP hosts (peers) themselves, speaking as clients (IMAP, NNTP, HTTP) over our networks to Unix machines as servers.

	Erik Fair


From ca6c at bitmessage.ch  Sat Jun 23 03:29:04 2018
From: ca6c at bitmessage.ch (=?iso-8859-1?B?Q+Fn?=)
Date: Fri, 22 Jun 2018 13:29:04 -0400
Subject: [TUHS] off-topic list
In-Reply-To: <alpine.BSF.2.21.999.1806221416210.68981@aneurin.horsfall.org>
References: <CMM.0.96.0.1529621068.beebe@gamma.math.utah.edu>
 <f1a2f732-400f-8044-1c90-9a8500a17d15@spamtrap.tnetconsulting.net>
 <20180621234706.GA23316@minnie.tuhs.org>
 <alpine.BSF.2.21.999.1806221416210.68981@aneurin.horsfall.org>
Message-ID: <20180622172904.GB17134@syrtis>

Dave Horsfall wrote:
 
> COFF - Computer Old Fart Followers.

Is it "Old farts who follow computers" or "Followers of computer old
farts"? Or "old farts who follow other old farts"? :)

-- 
caóc



From crossd at gmail.com  Sat Jun 23 04:00:43 2018
From: crossd at gmail.com (Dan Cross)
Date: Fri, 22 Jun 2018 14:00:43 -0400
Subject: [TUHS] off-topic list
In-Reply-To: <20180622142846.GS21272@mcvoy.com>
References: <CMM.0.96.0.1529621068.beebe@gamma.math.utah.edu>
 <f1a2f732-400f-8044-1c90-9a8500a17d15@spamtrap.tnetconsulting.net>
 <20180621234706.GA23316@minnie.tuhs.org>
 <alpine.BSF.2.21.999.1806221416210.68981@aneurin.horsfall.org>
 <20180622142846.GS21272@mcvoy.com>
Message-ID: <CAEoi9W5QJxg=HKDkzyab9UP0xXu3Gho3QDaMoH52WsqSgR-KZA@mail.gmail.com>

On Fri, Jun 22, 2018 at 10:29 AM Larry McVoy <lm at mcvoy.com> wrote:

> For the record, I'm fine with old stuff getting discussed on TUHS.
> Even not Unix stuff.


With the caveat that I must acknowledge that I've been guilty of wandering
off topic more often than I should, it occurs to me that when discussing
the history of something, very often the histories of other things
necessarily feed into that history and become intertwined and inseparable.
There was a lot of "stuff" happening in the computer industry at the time
Unix was created, in its early years, and on in its (continuing)
evolution, and that "stuff" surely impacted Unix in one way or another. If
we really want to understand where Unix came from and why it is, we must
open ourselves to understanding those influences as well.

That said, of course, there's a balance. Having a place one could point to
and respectfully say, "Hey, this has gone on for 50+ messages; could you
move it over to off-tuhs?" might be useful for folks who want to deep-dive
on something.

We wandered into Linux/ext2 with Ted, that was fun.
>

Indeed. I'll go further and confess that I use TUHS as a learning resource
that influences my work professionally. There's a lot of good information
that comes across this list that gets filed away in my brain and manifests
itself in surprising ways by informing my work. I selfishly want that to
continue.

        - Dan C.

From steffen at sdaoden.eu  Sat Jun 23 05:25:05 2018
From: steffen at sdaoden.eu (Steffen Nurpmeso)
Date: Fri, 22 Jun 2018 21:25:05 +0200
Subject: [TUHS] off-topic list
In-Reply-To: <b6ef82de-739a-ed8e-0e91-3abfa2fb5f07@spamtrap.tnetconsulting.net>
References: <CMM.0.96.0.1529621068.beebe@gamma.math.utah.edu>
 <f1a2f732-400f-8044-1c90-9a8500a17d15@spamtrap.tnetconsulting.net>
 <20180621234706.GA23316@minnie.tuhs.org>
 <alpine.BSF.2.21.999.1806221416210.68981@aneurin.horsfall.org>
 <20180622142846.GS21272@mcvoy.com>
 <DE56C21F-CF7D-4B44-BC43-0C27CBD6DD7A@tfeb.org>
 <20180622145402.GT21272@mcvoy.com> <20180622151751.BEK9i%steffen@sdaoden.eu>
 <b6ef82de-739a-ed8e-0e91-3abfa2fb5f07@spamtrap.tnetconsulting.net>
Message-ID: <20180622192505.mfig_%steffen@sdaoden.eu>

Grant Taylor via TUHS wrote in <b6ef82de-739a-ed8e-0e91-3abfa2fb5f07 at spa\
mtrap.tnetconsulting.net>:
 |On 06/22/2018 09:17 AM, Steffen Nurpmeso wrote:
 |> I understood that as a request that people seem to have forgotten that 
 |> In-Reply-To: should be removed when doing the "Yz (Was: Xy)", so that 
 |> a new thread is created, actually.  Or take me: i seem to never have 
 |> learned that at first!
 |
 |I agree that removing In-Reply-To and References is a good thing to do 
 |when breaking a thread.  But not all MUAs make that possible, much less 
 |easy.  Which means that you're left with starting a new message to the 
 |same recipients.

True, but it has been possible for some time in the thing I maintain
too, unfortunately.  It will be easy starting with the next release,
however.

 |I've also seen how (sub)threads tend to drift from their original intent 
 |and then someone modifies the subject some time later.

Yes.  Yes.  And then, whilst not breaking the thread stuff as
such, there is the "current funny thing to do", which also impacts
thread visualization sometimes.  For example replacing spaces with
tabulators so that the "is the same thread subject" compression
cannot work, so one first thinks the subject has really changed.
At the moment top posting seems to be a fascinating thing to do.
And you seem to be using DMARC, which irritates the list-reply
mechanism of at least my MUA.

--steffen
|
|Der Kragenbaer,                The moon bear,
|der holt sich munter           he cheerfully and one by one
|einen nach dem anderen runter  wa.ks himself off
|(By Robert Gernhardt)


From gtaylor at tnetconsulting.net  Sat Jun 23 07:04:43 2018
From: gtaylor at tnetconsulting.net (Grant Taylor)
Date: Fri, 22 Jun 2018 15:04:43 -0600
Subject: [TUHS] off-topic list
In-Reply-To: <20180622192505.mfig_%steffen@sdaoden.eu>
References: <CMM.0.96.0.1529621068.beebe@gamma.math.utah.edu>
 <f1a2f732-400f-8044-1c90-9a8500a17d15@spamtrap.tnetconsulting.net>
 <20180621234706.GA23316@minnie.tuhs.org>
 <alpine.BSF.2.21.999.1806221416210.68981@aneurin.horsfall.org>
 <20180622142846.GS21272@mcvoy.com>
 <DE56C21F-CF7D-4B44-BC43-0C27CBD6DD7A@tfeb.org>
 <20180622145402.GT21272@mcvoy.com> <20180622151751.BEK9i%steffen@sdaoden.eu>
 <b6ef82de-739a-ed8e-0e91-3abfa2fb5f07@spamtrap.tnetconsulting.net>
 <20180622192505.mfig_%steffen@sdaoden.eu>
Message-ID: <89e5ae21-ccc0-5c84-837b-120a1a7d9e26@spamtrap.tnetconsulting.net>

On 06/22/2018 01:25 PM, Steffen Nurpmeso wrote:
> True, but possible since some time, for the thing i maintain, too, 
> unfortunately.  It will be easy starting with the next release, however.

I just spent a few minutes looking at how to edit headers in reply 
messages in Thunderbird and I didn't quickly find it.  (I do find an 
Add-On that allows editing messages in reader, but not the composer.)

> Yes.  Yes.  And then, whilst not breaking the thread stuff as such, 
> there is the "current funny thing to do", which also impacts thread 
> visualization sometimes.  For example replacing spaces with tabulators 
> so that the "is the same thread subject" compression cannot work, so 
> one first thinks the subject has really changed.

IMHO that's the wrong way to thread.  I believe threading should be done 
by the In-Reply-To: and References: headers.

I consider Subject: based threading to be a hack.  But it's a hack that 
many people use.  I think Thunderbird even uses it by default.  (I've 
long since disabled it.)
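Header-based threading of the kind described here can be sketched in a few lines. This is an illustrative sketch only, not any MUA's actual code: real implementations, such as the jwz algorithm, also build pseudo-parents for messages that were never seen.

```python
# Illustrative sketch: thread messages by References:/In-Reply-To:
# rather than by Subject:.
from email.parser import Parser

def thread_key(raw_message, roots):
    """Map a raw RFC 822 message to its thread root's Message-ID.

    `roots` is a dict carried across calls, mapping every Message-ID
    seen so far (including ones only seen in References:) to its root.
    """
    msg = Parser().parsestr(raw_message, headersonly=True)
    mid = (msg.get("Message-ID") or "").strip()
    refs = ((msg.get("References") or "") + " " +
            (msg.get("In-Reply-To") or "")).split()
    root = next((roots[r] for r in refs if r in roots), None)
    if root is None:                      # no known ancestor: new thread
        root = refs[0] if refs else mid
    for r in refs + ([mid] if mid else []):
        roots[r] = root                   # remember ancestry for later mail
    return root
```

Removing In-Reply-To: and References: when changing topic, as discussed above, is exactly what makes such a follow-up start a fresh thread.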

> At the moment top posting seems to be a fascinating thing to do.

I blame ignorance and the prevalence of tools that encourage such behavior.

> And you seem to be using DMARC, which irritates the list-reply mechanism 
> of at least my MUA.

Yes, I do use DMARC, as well as DKIM and SPF (w/ -all).  I don't see how 
my using that causes problems with "list-reply".

My working understanding is that "list-reply" should reply to the list's 
posting address in the List-Post: header.

List-Post: <mailto:tuhs at minnie.tuhs.org>

What am I missing or not understanding?
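The list-reply rule described here can be sketched as below. This is only an illustration of the RFC 2369 List-Post: convention, not the logic of any particular MUA; note that the header value may also be "NO" (posting forbidden), which is one reason a MUA cannot rely on it alone.

```python
# Sketch: derive the list-reply target from an RFC 2369 List-Post: header.
import re

def list_post_address(header_value):
    """Return the mailto: address from a List-Post: value, or None."""
    m = re.search(r"<\s*mailto:([^>?]+)", header_value)
    return m.group(1).strip() if m else None
```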



-- 
Grant. . . .
unix || die

-------------- next part --------------
A non-text attachment was scrubbed...
Name: smime.p7s
Type: application/pkcs7-signature
Size: 3982 bytes
Desc: S/MIME Cryptographic Signature
URL: <http://minnie.tuhs.org/pipermail/tuhs/attachments/20180622/b2198276/attachment.bin>

From bakul at bitblocks.com  Sat Jun 23 06:55:20 2018
From: bakul at bitblocks.com (Bakul Shah)
Date: Fri, 22 Jun 2018 13:55:20 -0700
Subject: [TUHS] off-topic list
In-Reply-To: <0EB33B2A-581F-4B3D-A498-E6E98B6B43CE@tfeb.org>
References: <CMM.0.96.0.1529621068.beebe@gamma.math.utah.edu>
 <f1a2f732-400f-8044-1c90-9a8500a17d15@spamtrap.tnetconsulting.net>
 <20180621234706.GA23316@minnie.tuhs.org>
 <alpine.BSF.2.21.999.1806221416210.68981@aneurin.horsfall.org>
 <20180622142846.GS21272@mcvoy.com>
 <DE56C21F-CF7D-4B44-BC43-0C27CBD6DD7A@tfeb.org>
 <20180622145402.GT21272@mcvoy.com>
 <0EB33B2A-581F-4B3D-A498-E6E98B6B43CE@tfeb.org>
Message-ID: <CEF331FB-9DEF-4EFF-B7D8-B5E93859C22C@bitblocks.com>

On Jun 22, 2018, at 9:07 AM, Tim Bradshaw <tfb at tfeb.org> wrote:
> 
> On 22 Jun 2018, at 15:54, Larry McVoy <lm at mcvoy.com> wrote:
>> 
>> Try mutt, it's what I use and it threads topics just fine.
> 
> The trouble is I have > 10 years of mail sitting in the OSX mail system and although I could probably export it all (it at least used to be relatively straightforward to do this) the sheer terror of doing that is, well, terrifying because there's a lot of stuff in there that matters.

At least on my Mac messages seem to be stored one per file.
Attachments are stored separately. Doesn't seem hard to figure out.
You can periodically rsync to a different place and experiment.

> Using the system-provided mail tool was a stupid decision, and one I managed to avoid with the browser &c, but it's too late now.

There is not much available that is decent. I too use Apple Mail
but also have a separate MH store that goes way back. MH is great
for bulk operations but not for viewing MIME infested mail or
simple things like attaching documents. What I really want is a
combined MUA.

> (Now this is an off-topic discussion from a discussion about off-topicness.)

meta-meta-meta!



From doug at cs.dartmouth.edu  Sat Jun 23 08:23:58 2018
From: doug at cs.dartmouth.edu (Doug McIlroy)
Date: Fri, 22 Jun 2018 18:23:58 -0400
Subject: [TUHS] off-topic list
Message-ID: <201806222223.w5MMNwob041114@tahoe.cs.Dartmouth.EDU>

Because some "off-topic" discussions wander into esoterica that's beyond
me, I superficially thought an off-topic siding would be a good thing for
some trains of thought. But then I wondered: if I ignore the siding, how
will I hear about new topics that get initiated there? I could
miss good stuff. So I'm quite happy with the current arrangement where
occasionally ever-patient Warren gives a nudge. (I'm a digest reader. If
every posting came as a distinct message, I might feel otherwise.)

Doug


From jpl.jpl at gmail.com  Sat Jun 23 09:20:26 2018
From: jpl.jpl at gmail.com (John P. Linderman)
Date: Fri, 22 Jun 2018 19:20:26 -0400
Subject: [TUHS] off-topic list
In-Reply-To: <201806222223.w5MMNwob041114@tahoe.cs.Dartmouth.EDU>
References: <201806222223.w5MMNwob041114@tahoe.cs.Dartmouth.EDU>
Message-ID: <CAC0cEp9qdGr=U2yk24WDFyVr=E61WfyZ30JoRfKa-P=TEc2v8A@mail.gmail.com>

I’m more interested in Unix history than old hardware, but the volume of
this group is well within my tolerance for annoyance. I agree with Doug.
Post at will, I’ll grouse when it becomes intolerable.

On Fri, Jun 22, 2018 at 6:24 PM Doug McIlroy <doug at cs.dartmouth.edu> wrote:

> Because some "off-topic" discussions wander into esoterica that's beyond
> me, I superficially thought an off-topic siding would be a good thing for
> some trains of thought. But then I wondered: if I ignore the siding, how
> will I hear about new topics that get initiated there? I could
> miss good stuff. So I'm quite happy with the current arrangement where
> occasionally ever-patient Warren gives a nudge. (I'm a digest reader. If
> every posting came as a distinct message, I might feel otherwise.)
>
> Doug
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://minnie.tuhs.org/pipermail/tuhs/attachments/20180622/ffcee08f/attachment.html>

From wkt at tuhs.org  Sat Jun 23 10:22:28 2018
From: wkt at tuhs.org (Warren Toomey)
Date: Sat, 23 Jun 2018 10:22:28 +1000
Subject: [TUHS] off-topic list
In-Reply-To: <201806222223.w5MMNwob041114@tahoe.cs.Dartmouth.EDU>
References: <201806222223.w5MMNwob041114@tahoe.cs.Dartmouth.EDU>
Message-ID: <20180623002228.GA4616@minnie.tuhs.org>

All, based on the comments we've had in the past day or so, it seems that
you are mostly happy to stick with one list, and to tolerate a bit of
off-topic material. 

I'm also aware that this list is approaching its own quarter-century
anniversary and it has become an important historical record.

So feel free to drift away from Unix now and then. But please self-regulate:
if you (as an individual) think the S/N ratio is dropping, please ask the
list to improve it.

As always, I'll insert my own nudges and requests based on my own eclectic
set of rules :-)

Cheers all, Warren


From scj at yaccman.com  Sat Jun 23 02:36:12 2018
From: scj at yaccman.com (Steve Johnson)
Date: Fri, 22 Jun 2018 09:36:12 -0700
Subject: [TUHS] off-topic list
In-Reply-To: <0EB33B2A-581F-4B3D-A498-E6E98B6B43CE@tfeb.org>
Message-ID: <3f7910445c3dda57fec2c64e5d922dea9a531700@webmail.yaccman.com>

I, for one, am happy to see some off-topic stuff.   The people on
this list, for the most part, represent a way of looking at the world
that is,
sadly, rather uncommon these days.   I enjoy looking at more
contemporary events through those eyes, and musing on the changes that
have taken place...

Steve

----- Original Message -----
From:
 "Tim Bradshaw" <tfb at tfeb.org>

To:
"Larry McVoy" <lm at mcvoy.com>
Cc:
"The Eunuchs Hysterical Society" <tuhs at tuhs.org>
Sent:
Fri, 22 Jun 2018 17:07:57 +0100
Subject:
Re: [TUHS] off-topic list

 On 22 Jun 2018, at 15:54, Larry McVoy <lm at mcvoy.com [1]> wrote:

Try mutt, it's what I use and it threads topics just fine.

The trouble is I have > 10 years of mail sitting in the OSX mail
system and although I could probably export it all (it at least used
to be relatively straightforward to do this) the sheer terror of doing
that is, well, terrifying because there's a lot of stuff in there that
matters.

Using the system-provided mail tool was a stupid decision, and one I
managed to avoid with the browser &c, but it's too late now.

(Now this is an off-topic discussion from a discussion about
off-topicness.)
 

Links:
------
[1] mailto:lm at mcvoy.com

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://minnie.tuhs.org/pipermail/tuhs/attachments/20180622/25ad3896/attachment.html>

From wkt at tuhs.org  Sat Jun 23 15:15:21 2018
From: wkt at tuhs.org (Warren Toomey)
Date: Sat, 23 Jun 2018 15:15:21 +1000
Subject: [TUHS] Slight TUHS Mailing List Policy Change
Message-ID: <20180623051521.GA22333@minnie.tuhs.org>

 [ A few list members asked me to re-post this out of the existing
   thread, so that it is more prominent. ]

All, based on the comments we've had in the past day or so, it seems that
you are mostly happy to stick with one list, and to tolerate a bit of
off-topic material.

I'm also aware that this list is approaching its own quarter-century
anniversary and it has become an important historical record.

So feel free to drift away from Unix now and then. But please self-regulate:
if you (as an individual) think the S/N ratio is dropping, please ask the
list to improve it.

As always, I'll insert my own nudges and requests based on my own eclectic
set of rules :-)

Cheers all, Warren


From wkt at tuhs.org  Sat Jun 23 15:32:16 2018
From: wkt at tuhs.org (Warren Toomey)
Date: Sat, 23 Jun 2018 15:32:16 +1000
Subject: [TUHS] Request: Unix Photos, scanned ephemera, anecdotes, stories,
 interviews
Message-ID: <20180623053216.GA23860@minnie.tuhs.org>

All, I've had a fair bit of positive feedback for my TUHS work. In reality
I'm just the facilitator, collecting the stuff that you send me and keeping
the mailing list going.

I think we've captured nearly all we can of the 1970s Unix in terms of
software. After that it becomes commercial, but I am building up the
"Hidden Unix" archive to hold that. Just wish I could open that up ...

What we haven't yet done a good job of is collecting other things: photos,
stories, anecdotes, scanned ephemera.

Photos & scanned things: I'm very happy to collect these, but does anybody
know of an existing place that accepts (and makes available online) photos
and scanned ephemera? They are a bit out of scope for bitsavers as far as
I can tell, but I'm happy to be corrected. Al? Other comments here?

Stories & anecdotes: definitely type them in & e-mail them in and/or e-mail
them to me if you want me just to preserve them. There is the Unix wiki I
started here: https://wiki.tuhs.org/doku.php?id=start, but maybe there is
already a better place. Gunkies?

Interviews: Sometimes it's easier to glean stories & knowledge with interviews.
I've never tried this but perhaps it's time. Who is up for an audio
interview? I'll worry about the technical details eventually, but is there
interest?

All of the above would slot in with the upcoming 50th anniversary. If you
do have photos, bits of paper, stories to tell etc., then let's try to
preserve them so that they are not lost.

Cheers all, Warren


From khm at sciops.net  Sat Jun 23 15:41:08 2018
From: khm at sciops.net (Kurt H Maier)
Date: Fri, 22 Jun 2018 22:41:08 -0700
Subject: [TUHS] Request: Unix Photos, scanned ephemera, anecdotes,
 stories, interviews
In-Reply-To: <20180623053216.GA23860@minnie.tuhs.org>
References: <20180623053216.GA23860@minnie.tuhs.org>
Message-ID: <20180623054108.GA98978@wopr>

On Sat, Jun 23, 2018 at 03:32:16PM +1000, Warren Toomey wrote:
> 
> All of the above would slot in with the upcoming 50th anniversary. If you
> do have photos, bits of paper, stories to tell etc., then let's try to
> preserve them so that they are not lost.

Archive.org is great at this sort of thing.  It may be worth reaching
out to Jason Scott (jason at textfiles.com) for assistance with some of the
digital artifacts.  His personal area of interest is more in the 80s BBS
era, but he knows the ins and outs of preserving things in context.  It
doesn't hurt that he's a great guy, too.

khm


From crossd at gmail.com  Sat Jun 23 15:48:50 2018
From: crossd at gmail.com (Dan Cross)
Date: Sat, 23 Jun 2018 01:48:50 -0400
Subject: [TUHS] Request: Unix Photos, scanned ephemera, anecdotes,
 stories, interviews
In-Reply-To: <20180623053216.GA23860@minnie.tuhs.org>
References: <20180623053216.GA23860@minnie.tuhs.org>
Message-ID: <CAEoi9W7jh2Vmobj_tN-WSXfLH7Fj2Kq_qVXOZdkGimUit99oPg@mail.gmail.com>

I wonder if you've talked with Peter Salus: he must have had a veritable
trove of interesting and useful source material for the 25 Years of Unix
book.

On Sat, Jun 23, 2018, 1:32 AM Warren Toomey <wkt at tuhs.org> wrote:

> All, I've had a fair bit of positive feedback for my TUHS work. In reality
> I'm just the facilitator, collecting the stuff that you send me and keeping
> the mailing list going.
>
> I think we've captured nearly all we can of the 1970s Unix in terms of
> software. After that it becomes commercial, but I am building up the
> "Hidden Unix" archive to hold that. Just wish I could open that up ...
>
> What we haven't yet done a good job of is collecting other things: photos,
> stories, anecdotes, scanned ephemera.
>
> Photos & scanned things: I'm very happy to collect these, but does anybody
> know of an existing place that accepts (and makes available online) photos
> and scanned ephemera? They are a bit out of scope for bitsavers as far as
> I can tell, but I'm happy to be corrected. Al? Other comments here?
>
> Stories & anecdotes: definitely type them in & e-mail them in and/or e-mail
> them to me if you want me just to preserve them. There is the Unix wiki I
> started here: https://wiki.tuhs.org/doku.php?id=start, but maybe there is
> already a better place. Gunkies?
>
> Interviews: Sometimes it's easier to glean stories & knowledge with
> interviews.
> I've never tried this but perhaps it's time. Who is up for an audio
> interview? I'll worry about the technical details eventually, but is there
> interest?
>
> All of the above would slot in with the upcoming 50th anniversary. If you
> do have photos, bits of paper, stories to tell etc., then let's try to
> preserve them so that they are not lost.
>
> Cheers all, Warren
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://minnie.tuhs.org/pipermail/tuhs/attachments/20180623/6ea47da5/attachment.html>

From dave at horsfall.org  Sat Jun 23 16:08:23 2018
From: dave at horsfall.org (Dave Horsfall)
Date: Sat, 23 Jun 2018 16:08:23 +1000 (EST)
Subject: [TUHS] Old mainframe I/O speed (was: core)
In-Reply-To: <CE5AFD89-9E9E-41DD-A243-6DD471B54B2A@alchemistowl.org>
References: <alpine.BSF.2.21.999.1806151618550.51464@orthanc.ca>
 <alpine.BSF.2.21.999.1806161611250.68981@aneurin.horsfall.org>
 <CAC20D2MEogGGZh+nXGVfmWMup8GPdPySA-dKYexM2k1OT9TdPA@mail.gmail.com>
 <EE68EB27-E780-476A-867C-61DF328D1B9C@tfeb.org>
 <20180619204536.GA91748@server.rulingia.com>
 <F1BC14F5-0E4A-4A18-914D-BB273753912B@pobox.com>
 <20180620050454.GC91748@server.rulingia.com>
 <CANCZdfrxEipUKKN4BZzUm0NYSjUFJYuaFp8NA2Xgq+Eb+806sw@mail.gmail.com>
 <20180620081032.GF28267@eureka.lemis.com>
 <CABH=_VReMmH1Smkb8aZnuFZh2Dct7E-d0X_KxwdHWicWaz42vg@mail.gmail.com>
 <20180621030505.GA89671@server.rulingia.com>
 <CABH=_VSzhumPL4xUfLpwpJRH9Qgv5TLy8M5ombeiw2VnGikdYg@mail.gmail.com>
 <CE5AFD89-9E9E-41DD-A243-6DD471B54B2A@alchemistowl.org>
Message-ID: <alpine.BSF.2.21.999.1806231550410.68981@aneurin.horsfall.org>

On Thu, 21 Jun 2018, Arrigo Triulzi wrote:

>> for the microcode to customers.  There were several hacks in there to 
>> slow down the disk I/O so that it didn't outperform the model 30.
>
> Is this the origin of the lore on “the IBM slowdown device”?

If I remember my computer lore correctly, wasn't the difference between 
the Cyber 72 and the 73 just a timing capacitor, if you knew which board?

And I still reckon that the olde 360/20 (I did see the console for one) 
should never have been part of the 360 series;

     * It had a HALT instruction!

     * It had about half the instruction set.

     * Half the number of registers, and width.

     * Floating point?  What's that?

Etc.  A triumph of marketing over engineering...

-- Dave, who did his CompSci thesis on the 360...

From grog at lemis.com  Sat Jun 23 17:16:21 2018
From: grog at lemis.com (Greg 'groggy' Lehey)
Date: Sat, 23 Jun 2018 17:16:21 +1000
Subject: [TUHS] Request: Unix Photos, scanned ephemera, anecdotes,
 stories, interviews
In-Reply-To: <CAEoi9W7jh2Vmobj_tN-WSXfLH7Fj2Kq_qVXOZdkGimUit99oPg@mail.gmail.com>
References: <20180623053216.GA23860@minnie.tuhs.org>
 <CAEoi9W7jh2Vmobj_tN-WSXfLH7Fj2Kq_qVXOZdkGimUit99oPg@mail.gmail.com>
Message-ID: <20180623071621.GB23824@eureka.lemis.com>

On Saturday, 23 June 2018 at  1:48:50 -0400, Dan Cross wrote:
> I wonder if you've talked with Peter Salus: he must have had a veritable
> trove of interesting and useful source material for the 25 Years of Unix
> book.

Yes, as I read the previous messages I wondered why he hasn't been
mentioned before.  I'm also surprised that he's not on this list.

Greg
--
Sent from my desktop computer.
Finger grog at lemis.com for PGP public key.
See complete headers for address and phone numbers.
This message is digitally signed.  If your Microsoft mail program
reports problems, please read http://lemis.com/broken-MUA
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 163 bytes
Desc: not available
URL: <http://minnie.tuhs.org/pipermail/tuhs/attachments/20180623/a71423b2/attachment.sig>

From dave at horsfall.org  Sat Jun 23 17:32:58 2018
From: dave at horsfall.org (Dave Horsfall)
Date: Sat, 23 Jun 2018 17:32:58 +1000 (EST)
Subject: [TUHS] Request: Unix Photos, scanned ephemera, anecdotes,
 stories, interviews
In-Reply-To: <20180623053216.GA23860@minnie.tuhs.org>
References: <20180623053216.GA23860@minnie.tuhs.org>
Message-ID: <alpine.BSF.2.21.999.1806231627050.68981@aneurin.horsfall.org>

On Sat, 23 Jun 2018, Warren Toomey wrote:

> All of the above would slot in with the upcoming 50th anniversary. If 
> you do have photos, bits of paper, stories to tell etc., then let's try 
> to preserve them so that they are not lost.

That would depend upon the Statute of Limitations (I'm in NSW, Australia)
and boy, can I tell some stories from the 70s/80s...

The snag is that Australia has about the toughest defamation laws in the 
world.

Let's see: the boss of XXX Dept was known to be having it off with his 
secretary (despite being married, but not to her).

Said boss was also observed rolling along the corridors in the 
afternoon, and the operators had to intervene whenever he was 
observed simply bouncing off the computer room door.

A boss of another dept was known to show his staff how to roll and smoke a 
joint.

Should we need a "sealed section"?

-- Dave


From bqt at update.uu.se  Sat Jun 23 20:32:24 2018
From: bqt at update.uu.se (Johnny Billquist)
Date: Sat, 23 Jun 2018 12:32:24 +0200
Subject: [TUHS] Old mainframe I/O speed (was: core)
In-Reply-To: <mailman.1.1529690481.3725.tuhs@minnie.tuhs.org>
References: <mailman.1.1529690481.3725.tuhs@minnie.tuhs.org>
Message-ID: <48b2a3f8-66ca-2527-f471-062eead1c6fe@update.uu.se>

On 2018-06-22 20:01, Clem Cole<clemc at ccc.com> wrote:
> One of the other BI people, who's name now escapes me, although I can see
> his face in my mind, maybe I'll think of it later), would go on to do the
> PCI for Alpha a couple of years later.   As I said, DEC did manage to get
> that one public, after the BI was made private as Erik points out.

Clem, I think I saw you say something similar in an earlier post.
To me it sounds as if you are saying that DEC did/designed PCI.
Are you sure about that? As far as I know, PCI was designed and created 
by Intel, and the first users were just plain PC machines.
Alpha did eventually also get PCI, but it was not where it started, and 
DEC had no control at all about PCI being public.

Might you have been thinking of Turbobus, Futurebus, or some other thing 
that DEC did? Or do you have some more information about DEC being the 
creator of PCI?

   Johnny

-- 
Johnny Billquist                  || "I'm on a bus
                                   ||  on a psychedelic trip
email: bqt at softjar.se             ||  Reading murder books
pdp is alive!                     ||  tryin' to stay hip" - B. Idol


From akosela at andykosela.com  Sat Jun 23 21:22:45 2018
From: akosela at andykosela.com (Andy Kosela)
Date: Sat, 23 Jun 2018 06:22:45 -0500
Subject: [TUHS] Request: Unix Photos, scanned ephemera, anecdotes,
 stories, interviews
In-Reply-To: <20180623054108.GA98978@wopr>
References: <20180623053216.GA23860@minnie.tuhs.org>
 <20180623054108.GA98978@wopr>
Message-ID: <CALMnNGgepWPXYqSeSSSTnWToTw+hPbU86Nuz6KtxyApBgq8feg@mail.gmail.com>

What about starting the TUHS channel on YouTube collecting pieces of UNIX
history in video format?  Personally I really enjoy watching video
presentations about UNIX and other computer technologies of the past.  It
gives you a much deeper insight into this forgotten and ancient world we
all know and miss.  Those videos are truly acting as time capsules.

Perhaps one of my favorite is this one, taken from the AT&T Archives:

  https://youtu.be/tc4ROCJYbm0

--Andy
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://minnie.tuhs.org/pipermail/tuhs/attachments/20180623/638e711c/attachment.html>

From clemc at ccc.com  Sat Jun 23 21:39:46 2018
From: clemc at ccc.com (Clem cole)
Date: Sat, 23 Jun 2018 07:39:46 -0400
Subject: [TUHS] Old mainframe I/O speed (was: core)
In-Reply-To: <48b2a3f8-66ca-2527-f471-062eead1c6fe@update.uu.se>
References: <mailman.1.1529690481.3725.tuhs@minnie.tuhs.org>
 <48b2a3f8-66ca-2527-f471-062eead1c6fe@update.uu.se>
Message-ID: <EDFBE1AB-2288-49ED-AE30-A5DF504C960F@ccc.com>

PCI was a late-1980s DEC bus design that was released via license, a la the Ethernet experience of the Xerox/DEC/Intel blue book.  DEC had mostly learned its lesson that interface standards were better shared.  I've forgotten now the name of the person who led the team.  I did not know him very well.  I can picture his face, as I said.  

Sent from my PDP-7 Running UNIX V0 expect things to be almost but not quite. 

> On Jun 23, 2018, at 6:32 AM, Johnny Billquist <bqt at update.uu.se> wrote:
> 
>> On 2018-06-22 20:01, Clem Cole<clemc at ccc.com> wrote:
>> One of the other BI people, who's name now escapes me, although I can see
>> his face in my mind, maybe I'll think of it later), would go on to do the
>> PCI for Alpha a couple of years later.   As I said, DEC did manage to get
>> that one public, after the BI was made private as Erik points out.
> 
> Clem, I think I saw you say something similar in an earlier post.
> To me it sounds as if you are saying that DEC did/designed PCI.
> Are you sure about that? As far as I know, PCI was designed and created by Intel, and the first users were just plain PC machines.
> Alpha did eventually also get PCI, but it was not where it started, and DEC had no control at all about PCI being public.
> 
> Might you have been thinking of Turbobus, Futurebus, or some other thing that DEC did? Or do you have some more information about DEC being the creator of PCI?
> 
>  Johnny
> 
> -- 
> Johnny Billquist                  || "I'm on a bus
>                                  ||  on a psychedelic trip
> email: bqt at softjar.se             ||  Reading murder books
> pdp is alive!                     ||  tryin' to stay hip" - B. Idol


From bqt at update.uu.se  Sat Jun 23 21:57:51 2018
From: bqt at update.uu.se (Johnny Billquist)
Date: Sat, 23 Jun 2018 13:57:51 +0200
Subject: [TUHS] Old mainframe I/O speed
In-Reply-To: <EDFBE1AB-2288-49ED-AE30-A5DF504C960F@ccc.com>
References: <mailman.1.1529690481.3725.tuhs@minnie.tuhs.org>
 <48b2a3f8-66ca-2527-f471-062eead1c6fe@update.uu.se>
 <EDFBE1AB-2288-49ED-AE30-A5DF504C960F@ccc.com>
Message-ID: <04190921-6f50-a643-63f7-41f3bfd0b7e5@update.uu.se>

On 2018-06-23 13:39, Clem cole wrote:
> PCI was a late-1980s DEC bus design that was released via license, a la the Ethernet experience of the Xerox/DEC/Intel blue book.  DEC had mostly learned its lesson that interface standards were better shared.  I've forgotten now the name of the person who led the team.  I did not know him very well.  I can picture his face, as I said.

It's just that this sounds so much like the TURBOchannel (not Turbobus 
as I wrote previously). That bus exactly matches your description in 
details, timeline, and circumstances, while PCI, to my knowledge, 
doesn't match at all.

And my recollection also matches Wikipedia, which even gives the PCI 
V1.0 spec being released in 1992. (See 
https://en.wikipedia.org/wiki/Conventional_PCI)

Compare to TURBOchannel: https://en.wikipedia.org/wiki/TURBOchannel

   Johnny


> 
> Sent from my PDP-7 Running UNIX V0 expect things to be almost but not quite.
> 
>> On Jun 23, 2018, at 6:32 AM, Johnny Billquist <bqt at update.uu.se> wrote:
>>
>>> On 2018-06-22 20:01, Clem Cole<clemc at ccc.com> wrote:
>>> One of the other BI people, who's name now escapes me, although I can see
>>> his face in my mind, maybe I'll think of it later), would go on to do the
>>> PCI for Alpha a couple of years later.   As I said, DEC did manage to get
>>> that one public, after the BI was made private as Erik points out.
>>
>> Clem, I think I saw you say something similar in an earlier post.
>> To me it sounds as if you are saying that DEC did/designed PCI.
>> Are you sure about that? As far as I know, PCI was designed and created by Intel, and the first users were just plain PC machines.
>> Alpha did eventually also get PCI, but it was not where it started, and DEC had no control at all about PCI being public.
>>
>> Might you have been thinking of Turbobus, Futurebus, or some other thing that DEC did? Or do you have some more information about DEC being the creator of PCI?
>>
>>   Johnny
>>
>> -- 
>> Johnny Billquist                  || "I'm on a bus
>>                                   ||  on a psychedelic trip
>> email: bqt at softjar.se             ||  Reading murder books
>> pdp is alive!                     ||  tryin' to stay hip" - B. Idol


-- 
Johnny Billquist                  || "I'm on a bus
                                   ||  on a psychedelic trip
email: bqt at softjar.se             ||  Reading murder books
pdp is alive!                     ||  tryin' to stay hip" - B. Idol


From steffen at sdaoden.eu  Sun Jun 24 00:49:59 2018
From: steffen at sdaoden.eu (Steffen Nurpmeso)
Date: Sat, 23 Jun 2018 16:49:59 +0200
Subject: [TUHS] off-topic list
In-Reply-To: <89e5ae21-ccc0-5c84-837b-120a1a7d9e26@spamtrap.tnetconsulting.net>
References: <CMM.0.96.0.1529621068.beebe@gamma.math.utah.edu>
 <f1a2f732-400f-8044-1c90-9a8500a17d15@spamtrap.tnetconsulting.net>
 <20180621234706.GA23316@minnie.tuhs.org>
 <alpine.BSF.2.21.999.1806221416210.68981@aneurin.horsfall.org>
 <20180622142846.GS21272@mcvoy.com>
 <DE56C21F-CF7D-4B44-BC43-0C27CBD6DD7A@tfeb.org>
 <20180622145402.GT21272@mcvoy.com> <20180622151751.BEK9i%steffen@sdaoden.eu>
 <b6ef82de-739a-ed8e-0e91-3abfa2fb5f07@spamtrap.tnetconsulting.net>
 <20180622192505.mfig_%steffen@sdaoden.eu>
 <89e5ae21-ccc0-5c84-837b-120a1a7d9e26@spamtrap.tnetconsulting.net>
Message-ID: <20180623144959.M9byU%steffen@sdaoden.eu>

Hello.

Grant Taylor via TUHS wrote in <89e5ae21-ccc0-5c84-837b-120a1a7d9e26 at spa\
mtrap.tnetconsulting.net>:
 |On 06/22/2018 01:25 PM, Steffen Nurpmeso wrote:
 |> True, but possible since some time, for the thing i maintain, too, 
 |> unfortunately.  It will be easy starting with the next release, however.
 |
 |I just spent a few minutes looking at how to edit headers in reply 
 |messages in Thunderbird and I didn't quickly find it.  (I do find an 
 |Add-On that allows editing messages in reader, but not the composer.)

Oh, I do not know: i have never used a graphical MUA, only pine,
then mutt, and now, while anytime before now and then anyway, BSD
Mail.  Only once i implemented RFC 2231 support did i start up
Apple Mail (of an up-to-date Snow Leopard i had back then) to see
how they do it, and that was an eye-opener, because they, i think,
misinterpreted RFC 2231 to only allow MIME parameters to be split
into ten sections.  (But then i recall that i retested that
with a Tiger Mail once i had a chance to, and there the bug was
corrected.)

Graphical user interfaces are a difficult abstraction, or tedious
to use.  I have to use a graphical browser, and it is always
terrible to enable Cookies for a site.  For my thing i hope we
will at some future day be so resilient that users can let go of
the nmh mailer without losing any freedom.

I mean, user interfaces are really a pain, and i think this will
not go away until we come to that brain implant which i have no
doubt will arrive some day, and then things may happen with
a think.  Things like emacs or Acme i can understand, and the
latter is even Unix like in the way it works.

Interesting that most old a.k.a. established Unix people give up
that Unix freedom of everything-is-a-file, that was there for
email access via nupas -- the way i have seen it in Plan9 (i never
ran a research Unix), at least -- in favour of a restrictive
graphical user interface!

 |> Yes.  Yes.  And then, whilst not breaking the thread stuff as such, 
 |> there is the "current funny thing to do", which also impacts thread 
 |> visualization sometimes.  For example replacing spaces with tabulators 
 |> so that the "is the same thread subject" compression cannot work, so 
 |> one first thinks the subject has really changed.
 |
 |IMHO that's the wrong way to thread.  I believe threading should be done 
 |by the In-Reply-To: and References: headers.
 |
 |I consider Subject: based threading to be a hack.  But it's a hack that 
 |many people use.  I think Thunderbird even uses it by default.  (I've 
 |long since disabled it.)

No, we use the same threading algorithm that Zawinski described
([1], "the threading algorithm that was used in Netscape Mail and
News 2.0 and 3.0").  I meant, in a threaded display, successive
follow-up messages which belong to the same thread will not
reiterate the Subject:, because it is the same one as before, and
that is irritating.

  [1] http://www.jwz.org/doc/threading.html
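The gist of that algorithm, reduced to its linking step, might be sketched as follows (a deliberate simplification: the real thing also creates placeholder containers for missing parents, prunes empty ones, and falls back to Subject: grouping; the message tuples are hypothetical):

```python
# Link each message under the last identifier of its References: chain
# (or its In-Reply-To:), in the spirit of the Netscape 2.0/3.0 threader
# Zawinski describes.

def build_threads(messages):
    """messages: list of (message_id, [referenced ids, oldest first])."""
    children = {}                      # parent id -> [child ids]
    has_parent = set()
    known = {mid for mid, _ in messages}
    for mid, refs in messages:
        parent = refs[-1] if refs else None
        if parent in known:            # unknown parents: treat child as root
            children.setdefault(parent, []).append(mid)
            has_parent.add(mid)
    roots = [mid for mid, _ in messages if mid not in has_parent]
    return roots, children

roots, children = build_threads(
    [('<a@x>', []), ('<b@x>', ['<a@x>']), ('<c@x>', ['<a@x>', '<b@x>'])])
```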

 |> At the moment top posting seems to be a fascinating thing to do.
 |
 |I blame ignorance and the prevalence of tools that encourage such behavior.

I vote for herd instinct, but that is a non-scientific view.  Where
i live we had the "fold rear view mirrors even if not done
automatically by the car" phase until some young men started
smashing non-folded ones with baseball bats (say: who else would
smash rear-view mirrors; i have not seen anyone doing it, but
a lot of damaged mirrors by then), and for a couple of years now
we have "cut trees and hedges and replace with grass", which is
really, really terrible, and i am not living in the same place as
before.  Many birds, bats etc. think the same, so i am not alone;
i hope this ends soon.  I can take a walk to reach nightingales,
though, and still.  I would hope for "turning off the lights to be
able to see a true night sky", but .. i am dreaming.

 |> And you seem to be using DMARC, which irritates the list-reply mechanism 
 |> of at least my MUA.
 |
 |Yes I do use DMARC as well as DKIM and SPF (w/ -all).  I don't see how 
 |me using that causes problems with "list-reply".
 |
 |My working understanding is that "list-reply" should reply to the list's 
 |posting address in the List-Post: header.
 |
 |List-Post: <mailto:tuhs at minnie.tuhs.org>
 |
 |What am I missing or not understanding?

That is not how it works for the MUAs i know.  It is an
interesting idea.  And in fact it is used during the "mailing-list
address-massage" if possible.  But one must or should
differentiate between a subscribed list and a non-subscribed
list, for example.  This does not work without additional
configuration (e.g., we have `mlist' and `mlsubscribe' commands to
make mailing-lists known to the machine), though List-Post: we use
for automatic configuration (as via `mlist').
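A List-Post:-driven list reply could indeed be sketched in a few lines (hypothetical helper; a real MUA must still weigh Mail-Followup-To:, Reply-To:, and subscription state):

```python
import re

# Pull the posting address out of a List-Post: header (RFC 2369).
# Sketch only: ignores comments, multiple URIs, and URI query parts.

def list_post_address(header_value):
    """Return the mailto: target of a List-Post: value, or None
    (RFC 2369 allows a bare "NO" to forbid posting)."""
    match = re.search(r'<mailto:([^>?]+)', header_value)
    return match.group(1) if match else None

assert list_post_address('<mailto:tuhs@minnie.tuhs.org>') == 'tuhs@minnie.tuhs.org'
assert list_post_address('NO (posting disallowed)') is None
```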

And all in all this is a complicated topic (there are
Mail-Followup-To: and Reply-To:, for example), and before you say
"But what i want is a list reply!", yes, of course you are right.
But.  For my thing i hope i have found a sensible way through
this, and initially also one that does not deter users of console
MUA number one (mutt).

   --End of <89e5ae21-ccc0-5c84-837b-120a1a7d9e26 at spamtrap.tnetconsulting.net>

Cheerio.

--steffen
|
|Der Kragenbaer,                The moon bear,
|der holt sich munter           he cheerfully and one by one
|einen nach dem anderen runter  wa.ks himself off
|(By Robert Gernhardt)


From toby at telegraphics.com.au  Sun Jun 24 01:25:35 2018
From: toby at telegraphics.com.au (Toby Thain)
Date: Sat, 23 Jun 2018 11:25:35 -0400
Subject: [TUHS] off-topic list
In-Reply-To: <20180623144959.M9byU%steffen@sdaoden.eu>
References: <CMM.0.96.0.1529621068.beebe@gamma.math.utah.edu>
 <f1a2f732-400f-8044-1c90-9a8500a17d15@spamtrap.tnetconsulting.net>
 <20180621234706.GA23316@minnie.tuhs.org>
 <alpine.BSF.2.21.999.1806221416210.68981@aneurin.horsfall.org>
 <20180622142846.GS21272@mcvoy.com>
 <DE56C21F-CF7D-4B44-BC43-0C27CBD6DD7A@tfeb.org>
 <20180622145402.GT21272@mcvoy.com> <20180622151751.BEK9i%steffen@sdaoden.eu>
 <b6ef82de-739a-ed8e-0e91-3abfa2fb5f07@spamtrap.tnetconsulting.net>
 <20180622192505.mfig_%steffen@sdaoden.eu>
 <89e5ae21-ccc0-5c84-837b-120a1a7d9e26@spamtrap.tnetconsulting.net>
 <20180623144959.M9byU%steffen@sdaoden.eu>
Message-ID: <915cccf1-50b2-0ed7-c4e2-caa81b87394b@telegraphics.com.au>

On 2018-06-23 10:49 AM, Steffen Nurpmeso wrote:
> Hello.
> 
> Grant Taylor via TUHS wrote in <89e5ae21-ccc0-5c84-837b-120a1a7d9e26 at spa\
> mtrap.tnetconsulting.net>:
>  |On 06/22/2018 01:25 PM, Steffen Nurpmeso wrote:
>  |> True, but possible since some time, for the thing i maintain, too, 
>  |> unfortunately.  It will be easy starting with the next release, however.
>  |
>  |I just spent a few minutes looking at how to edit headers in reply 
>  |messages in Thunderbird and I didn't quickly find it.  (I do find an 
>  |Add-On that allows editing messages in reader, but not the composer.)
> 
> Oh, I do not know: i have never used a graphical MUA, only pine,
> then mutt, and now, while anytime before now and then anyway, BSD
> Mail.  Only once i implemented RFC 2231 support ...


Like most retrocomputing lists, only one off-topic list is ever really
needed: "Email RFCs and etiquette"





From clemc at ccc.com  Sun Jun 24 01:35:55 2018
From: clemc at ccc.com (Clem cole)
Date: Sat, 23 Jun 2018 11:35:55 -0400
Subject: [TUHS] Old mainframe I/O speed
In-Reply-To: <04190921-6f50-a643-63f7-41f3bfd0b7e5@update.uu.se>
References: <mailman.1.1529690481.3725.tuhs@minnie.tuhs.org>
 <48b2a3f8-66ca-2527-f471-062eead1c6fe@update.uu.se>
 <EDFBE1AB-2288-49ED-AE30-A5DF504C960F@ccc.com>
 <04190921-6f50-a643-63f7-41f3bfd0b7e5@update.uu.se>
Message-ID: <1A6A6B78-974A-4B4B-B29A-70DC15088038@ccc.com>

Ah.  Maybe I understand where you are coming from (or maybe not).  What the formal marketing names were on the street - I never much worried about that.  I’ve always followed the engineering path between the teams on the inside and the technologies, and never cared what the marketing people named them. 

What we now call PCI was developed as the I/O bus for TurboLaser as part of Alpha.  It needed to be cheap, fast and expandable to 64 bits.  Intel and the PC did not have anything coming that could do that, and the DEC folks knew that.  

Anyway.  You may also remember Intel tripped over 10 patents in the mid 90s when Penguin magically caught up in one generation and DEC sued Intel - my favorite Andy Grove quote - “there is nothing left to steal.”  One of the patents was part of the PCI bus technology.  You are probably correct that it was sourced at DEC as part of the TURBOchannel program - I don’t remember.  But the result of the suit was that the guts of PCI were licensed by Intel from DEC.  I played a very, very small part in it all, a long time ago.  The NDAs have probably all expired, but I generally don’t talk much more about it than what I have.   

When it was all said and done AMD got the Alpha memory bus (K7 and EV6 are electrical brothers) and the industry got PCI.    


BTW.  When I came to Intel I do know there was still grumbling about license fees to HP at the time.  I’m not sure how all that was finally resolved, but I believe it was settled as part of the Itanium stuff, though I’ve not been a part of any of that.  

Sent from my PDP-7 Running UNIX V0 expect things to be almost but not quite. 

> On Jun 23, 2018, at 7:57 AM, Johnny Billquist <bqt at update.uu.se> wrote:
> 
>> On 2018-06-23 13:39, Clem cole wrote:
>> PCI was a late 1980s DEC design bus design that where released via license ala the Ethernet experience of  the xerox/dec/Intel blue book.  DEC had mostly learned it lesson that interface standards were better shared.  I’ve forgotten now the name of the person who lead the team.  I did not know him very well.  I can picture his face as I said.
> 
> It's just that this sounds so much like the TURBOchannel (not Turbobus as I wrote previously). That bus exactly matches your description of details, timelines and circumstances, while the PCI, to my knowledge don't match at all.
> 
> And my recollection also matches Wikipedia, which even gives the PCI V1.0 spec being released in 1992. (See https://en.wikipedia.org/wiki/Conventional_PCI)
> 
> Compare to TURBOchannel: https://en.wikipedia.org/wiki/TURBOchannel
> 
>  Johnny
> 
> 
>> Sent from my PDP-7 Running UNIX V0 expect things to be almost but not quite.
>>>> On Jun 23, 2018, at 6:32 AM, Johnny Billquist <bqt at update.uu.se> wrote:
>>>> 
>>>> On 2018-06-22 20:01, Clem Cole<clemc at ccc.com> wrote:
>>>> One of the other BI people, who's name now escapes me, although I can see
>>>> his face in my mind, maybe I'll think of it later), would go on to do the
>>>> PCI for Alpha a couple of years later.   As I said, DEC did manage to get
>>>> that one public, after the BI was made private as Erik points out.
>>> 
>>> Clem, I think I saw you say something similar in an earlier post.
>>> To me it sounds as if you are saying that DEC did/designed PCI.
>>> Are you sure about that? As far as I know, PCI was designed and created by Intel, and the first users were just plain PC machines.
>>> Alpha did eventually also get PCI, but it was not where it started, and DEC had no control at all about PCI being public.
>>> 
>>> Might you have been thinking of Turbobus, Futurebus, or some other thing that DEC did? Or do you have some more information about DEC being the creator of PCI?
>>> 
>>>  Johnny
>>> 
>>> -- 
>>> Johnny Billquist                  || "I'm on a bus
>>>                                  ||  on a psychedelic trip
>>> email: bqt at softjar.se             ||  Reading murder books
>>> pdp is alive!                     ||  tryin' to stay hip" - B. Idol
> 
> 
> -- 
> Johnny Billquist                  || "I'm on a bus
>                                  ||  on a psychedelic trip
> email: bqt at softjar.se             ||  Reading murder books
> pdp is alive!                     ||  tryin' to stay hip" - B. Idol


From clemc at ccc.com  Sun Jun 24 01:38:06 2018
From: clemc at ccc.com (Clem cole)
Date: Sat, 23 Jun 2018 11:38:06 -0400
Subject: [TUHS] Old mainframe I/O speed
In-Reply-To: <1A6A6B78-974A-4B4B-B29A-70DC15088038@ccc.com>
References: <mailman.1.1529690481.3725.tuhs@minnie.tuhs.org>
 <48b2a3f8-66ca-2527-f471-062eead1c6fe@update.uu.se>
 <EDFBE1AB-2288-49ED-AE30-A5DF504C960F@ccc.com>
 <04190921-6f50-a643-63f7-41f3bfd0b7e5@update.uu.se>
 <1A6A6B78-974A-4B4B-B29A-70DC15088038@ccc.com>
Message-ID: <0257BE8F-74CB-487C-B526-13691494FD7E@ccc.com>

#%^* autocorrect. Pentium sigh.  

Sent from my PDP-7 Running UNIX V0 expect things to be almost but not quite. 

> On Jun 23, 2018, at 11:35 AM, Clem cole <clemc at ccc.com> wrote:
> 
> Ah.  Maybe I understand where you are coming (or may be not).  What the formal marketing  names were on the street - I never much worried about.  I’ve always followed the engineering path between the teams on the inside and the technologies and never cared what the marketing people named them. 
> 
> What we now call pci was developed as the io bus for turbo laser as part of  Alpha.  It needed to be cheap, fast and expandable to 64 bits.  Intel and the PC did not have anything coming that could do that and the DEC folks knew that.  
> 
> Anyway.  You may also remember intel tripped over 10 patents in the mid 90s when Penguin magically caught up in one generation and DEC sued Intel - my favorite Andy Grove quote - “there is nothing left to steal.”  One of the patents was part of the pci bus technology.  You are probably correct that it was sourced at dec as part of the turbochannel program - I don’t remember.   But the result of the suit was that the guts of pci was licensed by intel from DEC.  I played a very very small part of it all that a long time ago.  The NDAs have probably all expired but I generally don’t talk much more about it that what I have.   
> 
> When it was all said and done AMD got the Alpha memory bus (K7 and EV6 are electrical brothers) and the industry got PCI.    
> 
> 
> BTW.  When I came to Intel I do know there was still grumbling about license fees to then HP.   I’m not sure how all that was finally resolved but I believe it has been as part of the Itainium stuff but I’ve not been a part of any of that.  
> 
> Sent from my PDP-7 Running UNIX V0 expect things to be almost but not quite. 
> 
>>> On Jun 23, 2018, at 7:57 AM, Johnny Billquist <bqt at update.uu.se> wrote:
>>> 
>>> On 2018-06-23 13:39, Clem cole wrote:
>>> PCI was a late 1980s DEC design bus design that where released via license ala the Ethernet experience of  the xerox/dec/Intel blue book.  DEC had mostly learned it lesson that interface standards were better shared.  I’ve forgotten now the name of the person who lead the team.  I did not know him very well.  I can picture his face as I said.
>> 
>> It's just that this sounds so much like the TURBOchannel (not Turbobus as I wrote previously). That bus exactly matches your description of details, timelines and circumstances, while the PCI, to my knowledge don't match at all.
>> 
>> And my recollection also matches Wikipedia, which even gives the PCI V1.0 spec being released in 1992. (See https://en.wikipedia.org/wiki/Conventional_PCI)
>> 
>> Compare to TURBOchannel: https://en.wikipedia.org/wiki/TURBOchannel
>> 
>> Johnny
>> 
>> 
>>> Sent from my PDP-7 Running UNIX V0 expect things to be almost but not quite.
>>>>> On Jun 23, 2018, at 6:32 AM, Johnny Billquist <bqt at update.uu.se> wrote:
>>>>> 
>>>>> On 2018-06-22 20:01, Clem Cole<clemc at ccc.com> wrote:
>>>>> One of the other BI people, who's name now escapes me, although I can see
>>>>> his face in my mind, maybe I'll think of it later), would go on to do the
>>>>> PCI for Alpha a couple of years later.   As I said, DEC did manage to get
>>>>> that one public, after the BI was made private as Erik points out.
>>>> 
>>>> Clem, I think I saw you say something similar in an earlier post.
>>>> To me it sounds as if you are saying that DEC did/designed PCI.
>>>> Are you sure about that? As far as I know, PCI was designed and created by Intel, and the first users were just plain PC machines.
>>>> Alpha did eventually also get PCI, but it was not where it started, and DEC had no control at all about PCI being public.
>>>> 
>>>> Might you have been thinking of Turbobus, Futurebus, or some other thing that DEC did? Or do you have some more information about DEC being the creator of PCI?
>>>> 
>>>> Johnny
>>>> 
>>>> -- 
>>>> Johnny Billquist                  || "I'm on a bus
>>>>                                 ||  on a psychedelic trip
>>>> email: bqt at softjar.se             ||  Reading murder books
>>>> pdp is alive!                     ||  tryin' to stay hip" - B. Idol
>> 
>> 
>> -- 
>> Johnny Billquist                  || "I'm on a bus
>>                                 ||  on a psychedelic trip
>> email: bqt at softjar.se             ||  Reading murder books
>> pdp is alive!                     ||  tryin' to stay hip" - B. Idol


From rminnich at gmail.com  Sun Jun 24 03:02:18 2018
From: rminnich at gmail.com (ron minnich)
Date: Sat, 23 Jun 2018 10:02:18 -0700
Subject: [TUHS] Old mainframe I/O speed (was: core)
In-Reply-To: <alpine.BSF.2.21.999.1806231550410.68981@aneurin.horsfall.org>
References: <alpine.BSF.2.21.999.1806151618550.51464@orthanc.ca>
 <alpine.BSF.2.21.999.1806161611250.68981@aneurin.horsfall.org>
 <CAC20D2MEogGGZh+nXGVfmWMup8GPdPySA-dKYexM2k1OT9TdPA@mail.gmail.com>
 <EE68EB27-E780-476A-867C-61DF328D1B9C@tfeb.org>
 <20180619204536.GA91748@server.rulingia.com>
 <F1BC14F5-0E4A-4A18-914D-BB273753912B@pobox.com>
 <20180620050454.GC91748@server.rulingia.com>
 <CANCZdfrxEipUKKN4BZzUm0NYSjUFJYuaFp8NA2Xgq+Eb+806sw@mail.gmail.com>
 <20180620081032.GF28267@eureka.lemis.com>
 <CABH=_VReMmH1Smkb8aZnuFZh2Dct7E-d0X_KxwdHWicWaz42vg@mail.gmail.com>
 <20180621030505.GA89671@server.rulingia.com>
 <CABH=_VSzhumPL4xUfLpwpJRH9Qgv5TLy8M5ombeiw2VnGikdYg@mail.gmail.com>
 <CE5AFD89-9E9E-41DD-A243-6DD471B54B2A@alchemistowl.org>
 <alpine.BSF.2.21.999.1806231550410.68981@aneurin.horsfall.org>
Message-ID: <CAP6exYKAmSchEzKuyRm7huvwt67QNp__x7mD7S9VUXmjVnN4Zw@mail.gmail.com>

A complete summary of slowdown devices and eunuch hardware would make a
fascinating document. Every vendor I know of at some point had the "wire
wrap" clock-x-2 upgrade, there is of course the infamous 486/487 story, and
Jon Hall used to love telling the story of the VAX backplane with the glue
in the board slots, which clever customers managed to damage and have
repaired with a non-glued-up backplane.

And on a bigger scale is the tragedy of, e.g., DEC's use of its ownership of
Alpha firmware to hamstring its customer-competitors who tried to build
systems with Alpha.
"What are you doing to our Alpha customers? They're selling systems with
Alpha!"
"They're system competitors, we must crush them"

It would be neat to collect them all somewhere ...
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://minnie.tuhs.org/pipermail/tuhs/attachments/20180623/7ee7e3ae/attachment.html>

From gtaylor at tnetconsulting.net  Sun Jun 24 04:49:36 2018
From: gtaylor at tnetconsulting.net (Grant Taylor)
Date: Sat, 23 Jun 2018 12:49:36 -0600
Subject: [TUHS] off-topic list
In-Reply-To: <20180623144959.M9byU%steffen@sdaoden.eu>
References: <CMM.0.96.0.1529621068.beebe@gamma.math.utah.edu>
 <f1a2f732-400f-8044-1c90-9a8500a17d15@spamtrap.tnetconsulting.net>
 <20180621234706.GA23316@minnie.tuhs.org>
 <alpine.BSF.2.21.999.1806221416210.68981@aneurin.horsfall.org>
 <20180622142846.GS21272@mcvoy.com>
 <DE56C21F-CF7D-4B44-BC43-0C27CBD6DD7A@tfeb.org>
 <20180622145402.GT21272@mcvoy.com> <20180622151751.BEK9i%steffen@sdaoden.eu>
 <b6ef82de-739a-ed8e-0e91-3abfa2fb5f07@spamtrap.tnetconsulting.net>
 <20180622192505.mfig_%steffen@sdaoden.eu>
 <89e5ae21-ccc0-5c84-837b-120a1a7d9e26@spamtrap.tnetconsulting.net>
 <20180623144959.M9byU%steffen@sdaoden.eu>
Message-ID: <ce6f617c-cf8e-63c6-8186-27e09c78020c@spamtrap.tnetconsulting.net>

On 06/23/2018 08:49 AM, Steffen Nurpmeso wrote:
> Hello.

Hi,

> Oh, I do not know: i have never used a graphical MUA, only pine, then 
> mutt, and now, while anytime before now and then anyway, BSD Mail.

I agree that text mode MUAs from before the turn of the century do have 
a LOT more functionality than most GUI MUAs that came after that point 
in time.

Thankfully we are free to use what ever MUA we want to.  :-)

> they i think misinterpreted RFC 2231 to only allow MIME parameters to be 
> split in ten sections.

I've frequently found things that MUAs (and other applications) don't do 
properly.  That's when it becomes a learning game to see what subset of 
the defined standard was implemented incorrectly and deciding if I want 
to work around it or not.

> Graphical user interfaces are a difficult abstraction, or tedious to use. 
> I have to use a graphical browser, and it is always terrible to enable 
> Cookies for a site.

Cookies are their own problem as of late.  All the "We use 
cookies...<bla>...<bla>...<bla>...." warnings that we now seem to have 
to accept get really annoying.  I want a cookie, or more likely a 
header, that says "I accept (first party) cookies." as a signal to not 
pester me.

> For my thing i hope we will at some future day be so resilient that 
> users can let go the nmh mailer without loosing any freedom.

Why do you have to let go of one tool?  Why can't you use a suite of 
tools that collectively do what you want?

> I mean, user interfaces are really a pain, and i think this will not 
> go away until we come to that brain implant which i have no doubt will 
> arrive some day, and then things may happen with a think.  Things like 
> emacs or Acme i can understand, and the latter is even Unix like in the 
> way it works.

You can keep the brain implant.  I have a desire to not have one.

> Interesting that most old a.k.a. established Unix people give up that 
> Unix freedom of everything-is-a-file, that was there for email access via 
> nupas -- the way i have seen it in Plan9 (i never ran a research Unix), 
> at least -- in favour of a restrictive graphical user interface!

Why do you have to give up one tool to start using a different tool?

I personally use Thunderbird as my primary MUA but weekly use mutt 
against the same mailbox w/ data going back 10+ years.  I extensively 
use Procmail to file messages into the proper folders.  I recently wrote 
a script that checks (copies of) messages that are being written to 
folders to move a message with that Message-ID from the Inbox to the 
Trash.  (The point being to remove copies that I got via To: or CC: when 
I get a copy from the mailing list.)
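The cleanup described there can be sketched as a tiny filter (the data shapes and the helper name are assumptions for illustration, not the actual script):

```python
# When a list folder receives a message, any copy of the same
# Message-ID still sitting in the inbox (the direct To:/Cc: copy)
# can be moved to the Trash.

def duplicates_to_trash(inbox, filed_ids):
    """inbox: mapping of inbox key -> Message-ID; filed_ids: set of
    Message-IDs already written to list folders.  Returns the inbox
    keys whose message should be moved to the Trash."""
    return [key for key, mid in inbox.items() if mid in filed_ids]

inbox = {'1': '<a@x>', '2': '<b@x>', '3': '<c@x>'}
filed = {'<b@x>', '<c@x>'}
assert duplicates_to_trash(inbox, filed) == ['2', '3']
```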

It seems to me like I'm doing things that are far beyond what 
Thunderbird can do by leveraging other tools to do things for me.  I 
also have a handful of devices checking the same mailbox.

> No, we use the same threading algorithm that Zawinski described ([1], 
> "the threading algorithm that was used in Netscape Mail and News 2.0 
> and 3.0").  I meant, in a threaded display, successive follow-up messages 
> which belong to the same thread will not reiterate the Subject:, because 
> it is the same one as before, and that is irritating.
> 
>    [1] http://www.jwz.org/doc/threading.html

I *LOVE* threaded views.  I've been using threaded view for longer than 
I can remember.  I can't fathom not using threaded view.

I believe I mistook your statement to mean that you wanted to thread 
based on Subject: header, not the In-Reply-To: or References: header.

>>> And you seem to be using DMARC, which irritates the list-reply mechanism
>>> of at least my MUA.
>> 
>> Yes I do use DMARC as well as DKIM and SPF (w/ -all).  I don't see how
>> me using that causes problems with "list-reply".
>> 
>> My working understanding is that "list-reply" should reply to the list's
>> posting address in the List-Post: header.
>> 
>> List-Post: <mailto:tuhs at minnie.tuhs.org>
>> 
>> What am I missing or not understanding?
> 
> That is not how it works for the MUAs i know.  It is an interesting idea. 
> And in fact it is used during the "mailing-list address-massage" 
> if possible.  But one must or should differentiate in between a 
> subscribed list and a non-subscribed list, for example.  This does 
> not work without additional configuration (e.g., we have `mlist' and 
> `mlsubscribe' commands to make known mailing-lists to the machine), 
> though List-Post: we use for automatic configuration (as via `mlist').
> 
> And all in all this is a complicated topic (there are Mail-Followup-To: 
> and Reply-To:, for example), and before you say "But what i want is a 
> list reply!", yes, of course you are right.  But.  For my thing i hope 
> i have found a sensible way through this, and initially also one that 
> does not deter users of console MUA number one (mutt).

How does my use of DMARC irritate the list-reply mechanism of your MUA?

DMARC is completely transparent to message contents.  Sure, DKIM adds 
headers with a signature.  But I don't see anything about DKIM's use 
that has any impact on how any MUA handles a message.

Or are you referring to the fact that some mailing lists modify the 
From: header to be DMARC compliant?

Please elaborate on what you mean by "DMARC irritate the list-reply 
mechanism of your MUA".



-- 
Grant. . . .
unix || die


From tih at hamartun.priv.no  Sun Jun 24 07:05:02 2018
From: tih at hamartun.priv.no (Tom Ivar Helbekkmo)
Date: Sat, 23 Jun 2018 23:05:02 +0200
Subject: [TUHS] off-topic list
In-Reply-To: <ce6f617c-cf8e-63c6-8186-27e09c78020c@spamtrap.tnetconsulting.net>
 (Grant Taylor via TUHS's message of "Sat, 23 Jun 2018 12:49:36 -0600")
References: <CMM.0.96.0.1529621068.beebe@gamma.math.utah.edu>
 <f1a2f732-400f-8044-1c90-9a8500a17d15@spamtrap.tnetconsulting.net>
 <20180621234706.GA23316@minnie.tuhs.org>
 <alpine.BSF.2.21.999.1806221416210.68981@aneurin.horsfall.org>
 <20180622142846.GS21272@mcvoy.com>
 <DE56C21F-CF7D-4B44-BC43-0C27CBD6DD7A@tfeb.org>
 <20180622145402.GT21272@mcvoy.com>
 <20180622151751.BEK9i%steffen@sdaoden.eu>
 <b6ef82de-739a-ed8e-0e91-3abfa2fb5f07@spamtrap.tnetconsulting.net>
 <20180622192505.mfig_%steffen@sdaoden.eu>
 <89e5ae21-ccc0-5c84-837b-120a1a7d9e26@spamtrap.tnetconsulting.net>
 <20180623144959.M9byU%steffen@sdaoden.eu>
 <ce6f617c-cf8e-63c6-8186-27e09c78020c@spamtrap.tnetconsulting.net>
Message-ID: <m2r2kxqkn5.fsf@thuvia.hamartun.priv.no>

Grant Taylor via TUHS <tuhs at minnie.tuhs.org> writes:

> How does my use of DMARC irritate the list-reply mechanism of your MUA?
>
> DMARC is completely transparent to message contents.  Sure, DKIM adds
> headers with a signature.  But I don't see anything about DKIM's use
> that has any impact on how any MUA handles a message.
>
> Or are you referring to the fact that some mailing lists modify the
> From: header to be DMARC compliant?
>
> Please elaborate on what you mean by "DMARC irritate the list-reply
> mechanism of your MUA".

Thanks, Grant!  It's always a pleasure when someone decides to follow
these things through, and I'm glad to see you doing it now (even though
I certainly wouldn't have the stamina to do it myself).
-tih
-- 
Most people who graduate with CS degrees don't understand the significance
of Lisp.  Lisp is the most important idea in computer science.  --Alan Kay
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 487 bytes
Desc: not available
URL: <http://minnie.tuhs.org/pipermail/tuhs/attachments/20180623/2611654b/attachment.sig>

From mparson at bl.org  Sun Jun 24 07:21:04 2018
From: mparson at bl.org (Michael Parson)
Date: Sat, 23 Jun 2018 16:21:04 -0500 (CDT)
Subject: [TUHS] off-topic list
In-Reply-To: <ce6f617c-cf8e-63c6-8186-27e09c78020c@spamtrap.tnetconsulting.net>
References: <CMM.0.96.0.1529621068.beebe@gamma.math.utah.edu>
 <f1a2f732-400f-8044-1c90-9a8500a17d15@spamtrap.tnetconsulting.net>
 <20180621234706.GA23316@minnie.tuhs.org>
 <alpine.BSF.2.21.999.1806221416210.68981@aneurin.horsfall.org>
 <20180622142846.GS21272@mcvoy.com>
 <DE56C21F-CF7D-4B44-BC43-0C27CBD6DD7A@tfeb.org>
 <20180622145402.GT21272@mcvoy.com> <20180622151751.BEK9i%steffen@sdaoden.eu>
 <b6ef82de-739a-ed8e-0e91-3abfa2fb5f07@spamtrap.tnetconsulting.net>
 <20180622192505.mfig_%steffen@sdaoden.eu>
 <89e5ae21-ccc0-5c84-837b-120a1a7d9e26@spamtrap.tnetconsulting.net>
 <20180623144959.M9byU%steffen@sdaoden.eu>
 <ce6f617c-cf8e-63c6-8186-27e09c78020c@spamtrap.tnetconsulting.net>
Message-ID: <alpine.NEB.2.20.1806231615130.17586@neener.bl.org>

On Sat, 23 Jun 2018, Grant Taylor via TUHS wrote:
<snip>

> I recently wrote a script that checks (copies of) messages that are
> being written to folders to move a message with that Message-ID from
> the Inbox to the Trash.  (The point being to remove copies that I got
> via To: or CC: when I get a copy from the mailing list.)

The first rule in my .procmailrc does this with formail:

:0 Wh: msgid.lock
| /usr/local/bin/formail -D 8192 .msgid.cache

Doesn't move duplicates to the trash; it just prevents multiple copies
of the same message-id from being delivered at all.  The 8192 specifies
how large a cache (stored in ~/.msgid.cache) of message-ids to keep
track of.
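The effect of that recipe can be mimicked in a few lines (formail itself keeps the cache as a fixed-size byte buffer in the file, so 8192 counts bytes rather than messages; this in-memory version is only an illustration):

```python
from collections import OrderedDict

# Accept a message only if its Message-ID has not been seen lately,
# remembering the newest ids in a bounded cache, in the spirit of
# `formail -D 8192 .msgid.cache`.

class MsgidCache:
    def __init__(self, limit):
        self.limit = limit             # ids kept (formail caps bytes instead)
        self.seen = OrderedDict()

    def is_duplicate(self, msgid):
        if msgid in self.seen:
            return True
        self.seen[msgid] = None
        while len(self.seen) > self.limit:
            self.seen.popitem(last=False)   # evict the oldest id
        return False

cache = MsgidCache(limit=2)
first = cache.is_duplicate('<a@x>')    # first sighting: deliver
again = cache.is_duplicate('<a@x>')    # duplicate: drop
```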

-- 
Michael Parson
Pflugerville, TX
KF5LGQ


From steffen at sdaoden.eu  Sun Jun 24 08:38:51 2018
From: steffen at sdaoden.eu (Steffen Nurpmeso)
Date: Sun, 24 Jun 2018 00:38:51 +0200
Subject: [TUHS] off-topic list
In-Reply-To: <ce6f617c-cf8e-63c6-8186-27e09c78020c@spamtrap.tnetconsulting.net>
References: <CMM.0.96.0.1529621068.beebe@gamma.math.utah.edu>
 <f1a2f732-400f-8044-1c90-9a8500a17d15@spamtrap.tnetconsulting.net>
 <20180621234706.GA23316@minnie.tuhs.org>
 <alpine.BSF.2.21.999.1806221416210.68981@aneurin.horsfall.org>
 <20180622142846.GS21272@mcvoy.com>
 <DE56C21F-CF7D-4B44-BC43-0C27CBD6DD7A@tfeb.org>
 <20180622145402.GT21272@mcvoy.com> <20180622151751.BEK9i%steffen@sdaoden.eu>
 <b6ef82de-739a-ed8e-0e91-3abfa2fb5f07@spamtrap.tnetconsulting.net>
 <20180622192505.mfig_%steffen@sdaoden.eu>
 <89e5ae21-ccc0-5c84-837b-120a1a7d9e26@spamtrap.tnetconsulting.net>
 <20180623144959.M9byU%steffen@sdaoden.eu>
 <ce6f617c-cf8e-63c6-8186-27e09c78020c@spamtrap.tnetconsulting.net>
Message-ID: <20180623223851.LcBjy%steffen@sdaoden.eu>

Grant Taylor via TUHS wrote in <ce6f617c-cf8e-63c6-8186-27e09c78020c at spa\
mtrap.tnetconsulting.net>:
 |On 06/23/2018 08:49 AM, Steffen Nurpmeso wrote:
 |> Oh, I do not know: i have never used a graphical MUA, only pine, then 
 |> mutt, and now, while anytime before now and then anyway, BSD Mail.
 |
 |I agree that text mode MUAs from before the turn of the century do have 
 |a LOT more functionality than most GUI MUAs that came after that point 
 |in time.
 |
 |Thankfully we are free to use what ever MUA we want to.  :-)

Absolutely true.  And i hope that, unlike with web browsers, no
(let me call it) pseudo feature race gets started that results in
less diversity instead of anything else.

 |> they i think misinterpreted RFC 2231 to only allow MIME parameters to be 
 |> split in ten sections.
 |
 |I've frequently found things that MUAs (and other applications) don't do 
 |properly.  That's when it becomes a learning game to see what subset of 
 |the defined standard was implemented incorrectly and deciding if I want 
 |to work around it or not.

If there is the freedom for a decision.  That is how it goes, yes.
For example one may have renamed "The Ethiopian Jewish Exodus --
Narratives of the Migration Journey to Israel 1977 - 1985" to
"Falasha" in order to use the book as an email attachment.  Which
would be pretty disappointing.  But it is also nice to see that
there are standards which were not fully thought through and
required backward incompatible hacks to fill the gaps.  Others
like DNS are pretty perfect and scale fantastically.  Thus.

 |> Graphical user interfaces are a difficult abstraction, or tedious \
 |> to use. 
 |> I have to use a graphical browser, and it is always terrible to enable 
 |> Cookies for a site.
 |
 |Cookies are their own problem as of late.  All the "We use 
 |cookies...<bla>...<bla>...<bla>...." warnings that we now seem to have 
 |to accept get really annoying.  I want a cookie, or more likely a 
 |header, that says "I accept (first party) cookies." as a signal to not 
 |pester me.

Ah, it has become a real pain.  It is ever so astounding how much
HTML5 machinery, scripting and third-party analytics can be
involved in a page that would have looked better twenty years
ago, or say whenever CSS 2.0 came around.  Today almost each and
every commercial site i visit is in practice a denial-of-service
attack.  For me and my little box at least, and without my gaining
any benefit from it!

The thing is, i do not want to be lied to, but what those boxes
you refer to say is often a plain lie.  It has nothing to do with
my security, or performance boosts, or whatever ... at most in
a legally unobjectionable way.  Very disappointing.

 |> For my thing i hope we will at some future day be so resilient that 
 |> users can let go the nmh mailer without loosing any freedom.
 |
 |Why do you have to let go of one tool?  Why can't you use a suite of 
 |tools that collectively do what you want?

It is just me and my desire to drive this old thing forward so
that it finally will be usable for something real.  Or what i deem
that to be.  I actually have no idea about nmh, but i for one
think the sentence of the old BSD Mail manual stating it is an
"intelligent mail processing system, which has a command syntax
reminiscent of ed(1) with lines replaced by messages" has always
been a bit excessive.  And the fork i maintain additionally said
it "is also usable as a mail batch language, both for sending and
receiving mail", adding onto that.

 |> I mean, user interfaces are really a pain, and i think this will not 
 |> go away until we come to that brain implant which i have no doubt will 
 |> arrive some day, and then things may happen with a think.  Things like 
 |> emacs or Acme i can understand, and the latter is even Unix like in the 
 |> way it works.
 |
 |You can keep the brain implant.  I have a desire to not have one.

Me too, me too.  Oh, me too.  Unforgotten, those American black
and white films with all those sneaky alien attacks; i was
a teenager when i saw them.  Yeah, but in that respect i am afraid
the real rabble behind the real implants will be even worse, and
drive General Motors or something. ^.^  I have no idea..  No,
but.. i really do not think future humans will get around it; just
imagine automatic 911 / 112, biological anomaly detection, and
then you do not need to learn Latin vocabulary but can buy the
entire set for $99.95, and have time for learning something really
important.  For example.
It is all right, just be careful and ensure that not all policemen
go to the tourist agency to book a holiday in Texas at the very
same time.  I think this is manageable.

 |> Interesting that most old a.k.a. established Unix people give up that 
 |> Unix freedom of everything-is-a-file, that was there for email access \
 |> via 
 |> nupas -- the way i have seen it in Plan9 (i never ran a research Unix), 
 |> at least -- in favour of a restrictive graphical user interface!
 |
 |Why do you have to give up one tool to start using a different tool?
 |
 |I personally use Thunderbird as my primary MUA but weekly use mutt 
 |against the same mailbox w/ data going back 10+ years.  I extensively 
 |use Procmail to file messages into the proper folders.  I recently wrote 
 |a script that checks (copies of) messages that are being written to 
 |folders to move a message with that Message-ID from the Inbox to the 
 |Trash.  (The point being to remove copies that I got via To: or CC: when 
 |I get a copy from the mailing list.)
 |
 |It seems to me like I'm doing things that are far beyond what 
 |Thunderbird can do by leveraging other tools to do things for me.  I 
 |also have a handful of devices checking the same mailbox.

You say it.  In the research Unix nupas mail (as i know it from
Plan9) all those things could have been done with a shell script
and standard tools like grep(1) and such.  Even Thunderbird would
simply be a maybe even little nice graphical application for
display purposes.  The actual understanding of storage format and
email standards would lie solely within the provider of the file
system.  Now you use several programs which all ship with all the
knowledge.  I for one just want my thing to be easy and reliable
controllable via a shell script.  You could replace procmail
(which is i think perl and needs quite some perl modules) with
a monolithic possibly statically linked C program.  Then.  With
full error checking etc.  This is a long road ahead, for my thing.

 |> No, we use the same threading algorithm that Zawinski described ([1], 
 |> "the threading algorithm that was used in Netscape Mail and News 2.0 
 |> and 3.0").  I meant, in a threaded display, successive follow-up \
 |> messages 
 |> which belong to the same thread will not reiterate the Subject:, because 
 |> it is the same one as before, and that is irritating.
 |> 
 |>    [1] http://www.jwz.org/doc/threading.html
 |
 |I *LOVE* threaded views.  I've been using threaded view for longer than 
 |I can remember.  I can't fathom not using threaded view.
 |
 |I believe I mistook your statement to mean that you wanted to thread 
 |based on Subject: header, not the In-Reply-To: or References: header.

It seems you did not have a chance not to.  My fault, sorry.

 |>>> And you seem to be using DMARC, which irritates the list-reply \
 |>>> mechanism
 |>>> of at least my MUA.
 |>> 
 |>> Yes I do use DMARC as well as DKIM and SPF (w/ -all).  I don't see how
 |>> me using that causes problems with "list-reply".
 |>> 
 |>> My working understanding is that "list-reply" should reply to the list's
 |>> posting address in the List-Post: header.
 |>> 
 |>> List-Post: <mailto:tuhs at minnie.tuhs.org>
 |>> 
 |>> What am I missing or not understanding?
 |> 
 |> That is not how it works for the MUAs i know.  It is an interesting \
 |> idea. 
 |> And in fact it is used during the "mailing-list address-massage" 
 |> if possible.  But one must or should differentiate in between a 
 |> subscribed list and a non-subscribed list, for example.  This does 
 |> not work without additional configuration (e.g., we have `mlist' and 
 |> `mlsubscribe' commands to make known mailing-lists to the machine), 
 |> though List-Post: we use for automatic configuration (as via `mlist').
 |> 
 |> And all in all this is a complicated topic (there are Mail-Followup-To: 
 |> and Reply-To:, for example), and before you say "But what i want is a 
 |> list reply!", yes, of course you are right.  But.  For my thing i hope 
 |> i have found a sensible way through this, and initially also one that 
 |> does not deter users of console MUA number one (mutt).
 |
 |How does my use of DMARC irritate the list-reply mechanism of your MUA?

So ok, it does not, actually.  It chooses your "Grant Taylor via
TUHS" which ships with the TUHS address, so one may even see this
as an improvement to DMARC-less list replies, which would go to
TUHS, with or without the "The Unix Heritage Society".
I am maybe irritated by the 'dkim=fail reason="signature
verification failed"' your messages produce.  It would not be good
to filter out failing DKIMs, at least on TUHS.

 |DMARC is completely transparent to message contents.  Sure, DKIM adds 
 |headers with a signature.  But I don't see anything about DKIM's use 
 |that has any impact on how any MUA handles a message.
 |
 |Or are you referring to the fact that some mailing lists modify the 
 |From: header to be DMARC compliant?

Ach, it has been said without much thought.  Maybe yes.  There is
a Reply-To: of yours.

 |Please elaborate on what you mean by "DMARC irritate the list-reply 
 |mechanism of your MUA".

My evil subconsciousness maybe does not like the DMARC and DKIM
standards.  That could be it.  I do not know anything better,
maybe the next OpenPGP and S/MIME standards will include
a checksum of some headers in signed-only data, will specify that
some headers have to be repeated in the MIME part and extend the
data to-be-verified to those, whatever.  I have not read the
OpenPGP draft nor anything S/MIMEish after RFC 5751.

A nice sunday i wish.

--steffen
|
|Der Kragenbaer,                The moon bear,
|der holt sich munter           he cheerfully and one by one
|einen nach dem anderen runter  wa.ks himself off
|(By Robert Gernhardt)


From gtaylor at tnetconsulting.net  Sun Jun 24 09:31:06 2018
From: gtaylor at tnetconsulting.net (Grant Taylor)
Date: Sat, 23 Jun 2018 17:31:06 -0600
Subject: [TUHS] off-topic list
In-Reply-To: <alpine.NEB.2.20.1806231615130.17586@neener.bl.org>
References: <CMM.0.96.0.1529621068.beebe@gamma.math.utah.edu>
 <f1a2f732-400f-8044-1c90-9a8500a17d15@spamtrap.tnetconsulting.net>
 <20180621234706.GA23316@minnie.tuhs.org>
 <alpine.BSF.2.21.999.1806221416210.68981@aneurin.horsfall.org>
 <20180622142846.GS21272@mcvoy.com>
 <DE56C21F-CF7D-4B44-BC43-0C27CBD6DD7A@tfeb.org>
 <20180622145402.GT21272@mcvoy.com> <20180622151751.BEK9i%steffen@sdaoden.eu>
 <b6ef82de-739a-ed8e-0e91-3abfa2fb5f07@spamtrap.tnetconsulting.net>
 <20180622192505.mfig_%steffen@sdaoden.eu>
 <89e5ae21-ccc0-5c84-837b-120a1a7d9e26@spamtrap.tnetconsulting.net>
 <20180623144959.M9byU%steffen@sdaoden.eu>
 <ce6f617c-cf8e-63c6-8186-27e09c78020c@spamtrap.tnetconsulting.net>
 <alpine.NEB.2.20.1806231615130.17586@neener.bl.org>
Message-ID: <ea179b08-d58c-e116-ae07-cee882f306f8@spamtrap.tnetconsulting.net>

On 06/23/2018 03:21 PM, Michael Parson wrote:
> The first rule in my .procmailrc does this with formail:

That works well for many.  But it does not work at all for me.

There are many additional factors that I have to consider:

  - I use different email addresses for different things.
  - Replies come to one email address.
  - Emails from the mailing list come to a different address.
  - Replies are direct and arrive sooner.
  - Emails from the mailing list are indirect and arrive later.
This means that the emails I get from the mailing list are to different 
addresses than messages that were CCed directly to me.
  - I want the copy from the mailing list.
  - I want to not arbitrarily toss the direct reply in case the message 
never arrives from the mailing list.

> Doesn't move duplicates to the trash, it just prevents multiple copies 
> of the same message-id from being delivered at all.  The 8192 specifies 
> how large of a cache (stored in ~/.msgid.cache) of message-ids to keep 
> track of.

In light of the above facts, I can't simply reject subsequent copies of 
Message-IDs as that does not accomplish the aforementioned desires.

I also want to move the existing message to the Trash folder instead of 
blindly removing it so that I have the option of going to Trash and 
recovering it via my MUA if I feel the need or desire to do so.
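
As a sketch, the core of that move-to-Trash step could look like this
(the Maildir paths, the function name, and the plain grep-based lookup
are illustrative assumptions, not the actual implementation):

```shell
# Given the mailing-list copy of a message on stdin, move any earlier
# direct copy with the same Message-ID from the inbox to the trash.
move_dup_to_trash() {
    inbox=$1
    trash=$2
    # Take the value of the first Message-ID: header on stdin.
    msgid=$(sed -n 's/^[Mm]essage-[Ii][Dd]:[[:space:]]*//p' | sed 1q)
    [ -n "$msgid" ] || return 0
    for f in "$inbox"/*; do
        [ -f "$f" ] || continue
        # -F: the id is a literal string, not a regular expression.
        grep -q -F "$msgid" "$f" && mv "$f" "$trash"/
    done
}
```

Invoked as, say, move_dup_to_trash ~/Maildir/cur ~/Maildir/.Trash/cur
< list-copy.eml, leaving the list copy itself to be filed normally.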

What I do is quite likely extremely atypical.  But it does what I want. 
I have my own esoteric reasons (some are better than others) for doing 
what I do.  I'm happy to share if anyone is interested.



-- 
Grant. . . .
unix || die


From lm at mcvoy.com  Sun Jun 24 09:36:06 2018
From: lm at mcvoy.com (Larry McVoy)
Date: Sat, 23 Jun 2018 16:36:06 -0700
Subject: [TUHS] off-topic list
In-Reply-To: <ea179b08-d58c-e116-ae07-cee882f306f8@spamtrap.tnetconsulting.net>
References: <DE56C21F-CF7D-4B44-BC43-0C27CBD6DD7A@tfeb.org>
 <20180622145402.GT21272@mcvoy.com>
 <20180622151751.BEK9i%steffen@sdaoden.eu>
 <b6ef82de-739a-ed8e-0e91-3abfa2fb5f07@spamtrap.tnetconsulting.net>
 <20180622192505.mfig_%steffen@sdaoden.eu>
 <89e5ae21-ccc0-5c84-837b-120a1a7d9e26@spamtrap.tnetconsulting.net>
 <20180623144959.M9byU%steffen@sdaoden.eu>
 <ce6f617c-cf8e-63c6-8186-27e09c78020c@spamtrap.tnetconsulting.net>
 <alpine.NEB.2.20.1806231615130.17586@neener.bl.org>
 <ea179b08-d58c-e116-ae07-cee882f306f8@spamtrap.tnetconsulting.net>
Message-ID: <20180623233606.GH24200@mcvoy.com>

On Sat, Jun 23, 2018 at 05:31:06PM -0600, Grant Taylor via TUHS wrote:
> On 06/23/2018 03:21 PM, Michael Parson wrote:
> >The first rule in my .procmailrc does this with formail:
> 
> That works well for many.  But it does not work at all for me.
> 
> There are many additional factors that I have to consider:
> 
>  - I use different email addresses for different things.

I think you misunderstood what formail is doing.  It's looking
at the Message-ID header.  The one for the email you sent
looks like:

Message-ID: <ea179b08-d58c-e116-ae07-cee882f306f8 at spamtrap.tnetconsulting.net>

All formail is doing is keeping a recent cache of those ids and only letting
one get through.

So suppose someone reply-alls to you and the list (very common).  Without
the formail trick you'll see that message twice.
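
For reference, the rule being described is, give or take, the classic
duplicate-killer from the procmailex(5) man page (a reconstruction,
not necessarily Michael's literal recipe):

```procmail
# W: wait for formail's exit code; h: feed it the header only;
# the trailing colon requests a lockfile around the cache.
:0 Wh: .msgid.lock
| formail -D 8192 .msgid.cache
```

formail -D exits successfully when the Message-ID is already in the
8192-byte cache, so procmail treats the message as delivered and drops
it; a new id is appended to the cache and the message falls through to
the rest of the rcfile.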


From lm at mcvoy.com  Sun Jun 24 09:37:51 2018
From: lm at mcvoy.com (Larry McVoy)
Date: Sat, 23 Jun 2018 16:37:51 -0700
Subject: [TUHS] off-topic list
In-Reply-To: <20180623233606.GH24200@mcvoy.com>
References: <20180622145402.GT21272@mcvoy.com>
 <20180622151751.BEK9i%steffen@sdaoden.eu>
 <b6ef82de-739a-ed8e-0e91-3abfa2fb5f07@spamtrap.tnetconsulting.net>
 <20180622192505.mfig_%steffen@sdaoden.eu>
 <89e5ae21-ccc0-5c84-837b-120a1a7d9e26@spamtrap.tnetconsulting.net>
 <20180623144959.M9byU%steffen@sdaoden.eu>
 <ce6f617c-cf8e-63c6-8186-27e09c78020c@spamtrap.tnetconsulting.net>
 <alpine.NEB.2.20.1806231615130.17586@neener.bl.org>
 <ea179b08-d58c-e116-ae07-cee882f306f8@spamtrap.tnetconsulting.net>
 <20180623233606.GH24200@mcvoy.com>
Message-ID: <20180623233751.GI24200@mcvoy.com>

Never mind, I didn't read far enough into your email, my bad.

On Sat, Jun 23, 2018 at 04:36:06PM -0700, Larry McVoy wrote:
> On Sat, Jun 23, 2018 at 05:31:06PM -0600, Grant Taylor via TUHS wrote:
> > On 06/23/2018 03:21 PM, Michael Parson wrote:
> > >The first rule in my .procmailrc does this with formail:
> > 
> > That works well for many.  But it does not work at all for me.
> > 
> > There are many additional factors that I have to consider:
> > 
> >  - I use different email addresses for different things.
> 
> I think you misunderstood what formail is doing.  It's looking
> at the Message-ID header.  The one for the email you sent
> looks like:
> 
> Message-ID: <ea179b08-d58c-e116-ae07-cee882f306f8 at spamtrap.tnetconsulting.net>
> 
> All formail is doing is keeping a recent cache of those ids and only letting
> one get through.
> 
> So suppose someone reply-alls to you and the list (very common).  Without
> the formail trick you'll see that message twice.

-- 
---
Larry McVoy            	     lm at mcvoy.com             http://www.mcvoy.com/lm 


From wkt at tuhs.org  Sun Jun 24 09:43:45 2018
From: wkt at tuhs.org (Warren Toomey)
Date: Sun, 24 Jun 2018 09:43:45 +1000
Subject: [TUHS] Request: Unix Photos, scanned ephemera, anecdotes,
 stories, interviews
In-Reply-To: <20180623053216.GA23860@minnie.tuhs.org>
References: <20180623053216.GA23860@minnie.tuhs.org>
Message-ID: <20180623234345.GA30787@minnie.tuhs.org>

On Sat, Jun 23, 2018 at 03:32:16PM +1000, Warren Toomey wrote:
>What we haven't done a good job yet is to collect other things: photos,
>stories, anecdotes, scanned ephemera.

A few updates. I e-mailed Peter Salus about QCU primary sources. He wrote:

  Sorry to say (a) most of the photos were borrowed (from, e.g. Kirk
  McKusick and Debbie Scherrer) and then returned.  The transcripts of
  interviews, etc., were on a DEC back-up tape that went astray about 20
  years ago, but even then, I knew no one who could read it.  (You know,
  when QCU was in process, Addison-Wesley was just installing a way to
  receive mss. electronically.)

So I'll hit on these people and others soon :-)

Jason Scott from archive.org wrote:

   We're absolutely a good home for scanned ephemera and photos. We do
   a lot of it. Examples of collections we have:
   https://archive.org/details/bitsavers
   https://archive.org/details/manuals
   https://archive.org/details/catalogs
   You can begin to scan things and we can host it generally, and then as
   the set grows we can make you a collection.

Therefore, I propose that we start uploading scans of Unix photos and
memorabilia to archive.org and then ask Jason to make a collection to
hold all of this. How does that sound?

Cheers, Warren


From wkt at tuhs.org  Sun Jun 24 09:44:41 2018
From: wkt at tuhs.org (Warren Toomey)
Date: Sun, 24 Jun 2018 09:44:41 +1000
Subject: [TUHS] (Meta) mail about mail about mail ...
Message-ID: <20180623234441.GB30787@minnie.tuhs.org>

I think it might be time for "Peter Out" to arrive here :-)

Cheers, Warren


From peter at peteradams.org  Sun Jun 24 10:08:21 2018
From: peter at peteradams.org (Peter Adams)
Date: Sat, 23 Jun 2018 14:08:21 -1000
Subject: [TUHS] Request: Unix Photos, scanned ephemera, anecdotes,
	stories, interviews
Message-ID: <B8F60D55-0C78-477F-92C9-2FE0B329B753@peteradams.org>

Over the last few years I’ve photographed many of the people listed on the wiki. 

You can see the photos here:

http://facesofopensource.com

-P- 

--
Peter Adams
http://www.peteradamsphoto.com

> On Jun 22, 2018, at 7:32 PM, Warren Toomey <wkt at tuhs.org> wrote:
> 
> All, I've had a fair bit of positive feedback for my TUHS work. In reality
> I'm just the facilitator, collecting the stuff that you send me and keeping
> the mailing list going.
> 
> I think we've captured nearly all we can of the 1970s Unix in terms of
> software. After that it becomes commercial, but I am building up the
> "Hidden Unix" archive to hold that. Just wish I could open that up ...
> 
> What we haven't done a good job yet is to collect other things: photos,
> stories, anecdotes, scanned ephemera.
> 
> Photos & scanned things: I'm very happy to collect these, but does anybody
> know of an existing place that accepts (and makes available online) photos
> and scanned ephemera? They are a bit out of scope for bitsavers as far as
> I can tell, but I'm happy to be corrected. Al? Other comments here?
> 
> Stories & anecdotes: definitely type them in & e-mail them in and/or e-mail
> them to me if you want me just to preserve them. There is the Unix wiki I
> started here: https://wiki.tuhs.org/doku.php?id=start, but maybe there is
> already a better place. Gunkies?
> 
> Interviews: Sometimes it's easier to glean stories & knowledge with interviews.
> I've never tried this but perhaps it's time. Who is up to have an audio
> interview? I'll worry about the technical details eventually, but is there
> interest?
> 
> All of the above would slot in with the upcoming 50th anniversary. If you
> do have photos, bits of paper, stories to tell etc., then let's try to
> preserve them so that they are not lost.
> 
> Cheers all, Warren
> 



From gtaylor at tnetconsulting.net  Sun Jun 24 10:18:42 2018
From: gtaylor at tnetconsulting.net (Grant Taylor)
Date: Sat, 23 Jun 2018 18:18:42 -0600
Subject: [TUHS] off-topic list
In-Reply-To: <20180623223851.LcBjy%steffen@sdaoden.eu>
References: <CMM.0.96.0.1529621068.beebe@gamma.math.utah.edu>
 <f1a2f732-400f-8044-1c90-9a8500a17d15@spamtrap.tnetconsulting.net>
 <20180621234706.GA23316@minnie.tuhs.org>
 <alpine.BSF.2.21.999.1806221416210.68981@aneurin.horsfall.org>
 <20180622142846.GS21272@mcvoy.com>
 <DE56C21F-CF7D-4B44-BC43-0C27CBD6DD7A@tfeb.org>
 <20180622145402.GT21272@mcvoy.com> <20180622151751.BEK9i%steffen@sdaoden.eu>
 <b6ef82de-739a-ed8e-0e91-3abfa2fb5f07@spamtrap.tnetconsulting.net>
 <20180622192505.mfig_%steffen@sdaoden.eu>
 <89e5ae21-ccc0-5c84-837b-120a1a7d9e26@spamtrap.tnetconsulting.net>
 <20180623144959.M9byU%steffen@sdaoden.eu>
 <ce6f617c-cf8e-63c6-8186-27e09c78020c@spamtrap.tnetconsulting.net>
 <20180623223851.LcBjy%steffen@sdaoden.eu>
Message-ID: <09ee8833-c8c0-8911-751c-906b737209b7@spamtrap.tnetconsulting.net>

On 06/23/2018 04:38 PM, Steffen Nurpmeso wrote:
> Absolutely true.  And hoping that, different to web browsers, no let me 
> call it pseudo feature race is started that results in less diversity 
> instead of anything else.

I'm not sure I follow.  I feel like we do have the choice of MUAs or web 
browsers.  Sadly some choices are lacking compared to other choices. 
IMHO the maturity of some choices and lack thereof in other choices 
does not mean that we don't have choices.

> If there is the freedom for a decision.  That is how it goes, yes.  For 
> example one may have renamed "The Ethiopian Jewish Exodus -- Narratives 
> of the Migration Journey to Israel 1977 - 1985" to "Falasha" in order to 
> use the book as an email attachment.  Which would be pretty disappointing.

I don't follow what you're getting at.

> But it is also nice to see that there are standards which were not fully 
> thought through and required backward incompatible hacks to fill the gaps.

Why is that a "nice" thing?

> Others like DNS are pretty perfect and scale fantastic.  Thus.

Yet I frequently see DNS problems for various reasons.  Not the least of 
which is that many clients do not gracefully fall back to the secondary 
DNS server when the primary is unavailable.

> Ah, it has become a real pain.  It is ever so astounding how much HTML5 
> specifics, scripting and third party analytics can be involved for a page 
> that would have looked better twenty years ago, or say whenever CSS 2.0 
> came around.

I'm going to disagree with you there.  IMHO the standard is completely 
separate with what people do with it.

> Today almost each and every commercial site i visit is in practice 
> a denial of service attack.  For me and my little box at least, and 
> without gaining any benefit from that!

I believe those webmasters have made MANY TERRIBLE choices and have 
ended up with the bloat that they now have.  -  I do not blame the 
technology.  I blame what the webmasters have done with the technology.

Too many people do not know what they are doing and load "yet another 
module" to do something that they can already do with what's already 
loaded on their page.  But they don't know that.  They just glue things 
together until they work.  Even if that means that they are loading the 
same thing multiple times because multiple of their 3rd party components 
loads a common module themselves.  This is particularly pernicious with 
JavaScript.

> The thing is, i do not want be lied to, but what those boxes you refer 
> to say are often plain lies.  It has nothing to do with my security, 
> or performance boosts, or whatever ... maximally so in a juristically 
> unobjectionable way.  Very disappointing.

I don't understand what you're saying.

Who is lying to you?  How and where are they lying to you?

What boxes are you saying I'm referring to?

> It is just me and my desire to drive this old thing forward so that it 
> finally will be usable for something real.  Or what i deem for that.

I don't see any reason that you can't continue to use and improve what 
ever tool you want to.

> I actually have no idea of nmh, but i for one think the sentence of 
> the old BSD Mail manual stating it is an "intelligent mail processing 
> system, which has a command syntax reminiscent of ed(1) with lines 
> replaced by messages" has always been a bit excessive.  And the fork i 
> maintain additionally said it "is also usable as a mail batch language, 
> both for sending and receiving mail", adding onto that.

What little I know of the MH-type mail stores and their associated 
utilities suggests they are indeed quite powerful.  I think they 
operate under the premise that each message is its own file and that 
you work in something akin to a shell, if not your actual OS shell.  I 
think the MH commands are quite literally unix commands that can be 
called from the unix shell.  I think this is in the spirit of simply 
enhancing the shell to seem as if it has email abilities via the MH 
commands.  Use any traditional unix text processing utilities you want 
to manipulate email.

MH has always been attractive to me, but I've never used it myself.

> You say it.  In the research Unix nupas mail (as i know it from Plan9) 
> all those things could have been done with a shell script and standard 
> tools like grep(1) and such.

I do say that and I do use grep, sed, awk, formail, procmail, cp, mv, 
and any number of traditional unix file / text manipulation utilities on 
my email weekly.  I do this both with the Maildir (which is quite 
similar to MH) on my mail server and to the Thunderbird message store 
that is itself a variant of Maildir with a file per message in a 
standard directory structure.

> Even Thunderbird would simply be a maybe even little nice graphical 
> application for display purposes.

The way that I use Thunderbird, that's exactly what it is.  A friendly 
and convenient GUI front end to access my email.

> The actual understanding of storage format and email standards would 
> lie solely within the provider of the file system.

The emails are stored in discrete files that themselves are effectively 
mbox files with one message therein.

> Now you use several programs which all ship with all the knowledge.

I suppose if you count grepping for a line in a text file as knowledge of 
the format, okay.

egrep "^Subject: " message.txt

There's nothing special about that.  It's a text file with a line that 
looks like this:

Subject: Re: [TUHS] off-topic list

> I for one just want my thing to be easy and reliable controllable via 
> a shell script.

That's a laudable goal.  I think MH is very conducive to doing that.

> You could replace procmail (which is i think perl and needs quite some 
> perl modules) with a monolithic possibly statically linked C program.

I'm about 95% certain that procmail is its own monolithic C program. 
I've never heard of any reference to Perl in association with procmail. 
Are you perhaps thinking of a different local delivery agent?

> Then.  With full error checking etc.  This is a long road ahead, for 
> my thing.

Good luck to you.

> So ok, it does not, actually.  It chooses your "Grant Taylor via TUHS" 
> which ships with the TUHS address, so one may even see this as an 
> improvement to DMARC-less list replies, which would go to TUHS, with or 
> without the "The Unix Heritage Society".

Please understand, that's not how I send the emails.  I send them with 
my name and my email address.  The TUHS mailing list modifies them.

Aside:  I support the modification that it is making.

> I am maybe irritated by the 'dkim=fail reason="signature verification 
> failed"' your messages produce.  It would not be good to filter out 
> failing DKIMs, at least on TUHS.

Okay.  That is /an/ issue.  But I believe it's not /my/ issue to solve.

My server DKIM signs messages that it sends out.  From everything that 
I've seen and tested (and I actively look for problems) the DKIM 
signatures are valid and perfectly fine.

That being said, the TUHS mailing list modifies messages in the following 
ways:

1)  Modifies the From: when the sending domain uses DMARC.
2)  Modifies the Subject to prepend "[TUHS] ".
3)  Modifies the body to append a footer.

All three of these actions modify the data that receiving DKIM filters 
calculate hashes based on.  Since the data changed, obviously the hash 
will be different.
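
For illustration, a DKIM-Signature header has roughly this shape (the
domain, selector, hashes, and signature below are placeholders, not a
real header from this list):

```
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=example.net;
        s=selector1; h=from:subject:date:message-id:to;
        bh=<base64 body hash>; b=<base64 signature>
```

The h= tag enumerates the signed headers, so modifications (1) and (2)
invalidate the header hash, and the appended footer (3) invalidates
the bh= body hash.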

I do not fault TUHS for this.

But I do wish that TUHS stripped DKIM and associated headers of messages 
going into the mailing list.  By doing that, there would be no data to 
compare to that wouldn't match.

I think it would be even better if TUHS would DKIM sign messages as they 
leave the mailing list's mail server.

> Ach, it has been said without much thought.  Maybe yes.  There is a 
> Reply-To: of yours.

I do see the Reply-To: header that you're referring to.  I don't know 
where it's coming from.  I am not sending it.  I just confirmed that 
it's not in my MUA's config, nor is it in my sent Items.

> My evil subconsciousness maybe does not like the DMARC and DKIM standards. 
> That could be it.  I do not know anything better, maybe the next 
> OpenPGP and S/MIME standards will include a checksum of some headers 
> in signed-only data, will specify that some headers have to be repeated 
> in the MIME part and extend the data to-be-verified to those, whatever. 
> I have not read the OpenPGP draft nor anything S/MIMEish after RFC 5751.

I don't know if PGP or S/MIME will ever mandate anything about headers 
which are structurally outside of their domain.

I would like to see an option in MUAs that support encrypted email for 
something like the following:

    Subject:  (Subject in encrypted body.)

Where the encrypted body included a header like the following:

    Encrypted-Subject: Re: [TUHS] off-topic list

I think that MUAs could then display the subject that was decrypted out 
of the encrypted body.

> A nice sunday i wish.

You too.



-- 
Grant. . . .
unix || die


From gtaylor at tnetconsulting.net  Sun Jun 24 10:20:28 2018
From: gtaylor at tnetconsulting.net (Grant Taylor)
Date: Sat, 23 Jun 2018 18:20:28 -0600
Subject: [TUHS] off-topic list
In-Reply-To: <20180623233751.GI24200@mcvoy.com>
References: <20180622145402.GT21272@mcvoy.com>
 <20180622151751.BEK9i%steffen@sdaoden.eu>
 <b6ef82de-739a-ed8e-0e91-3abfa2fb5f07@spamtrap.tnetconsulting.net>
 <20180622192505.mfig_%steffen@sdaoden.eu>
 <89e5ae21-ccc0-5c84-837b-120a1a7d9e26@spamtrap.tnetconsulting.net>
 <20180623144959.M9byU%steffen@sdaoden.eu>
 <ce6f617c-cf8e-63c6-8186-27e09c78020c@spamtrap.tnetconsulting.net>
 <alpine.NEB.2.20.1806231615130.17586@neener.bl.org>
 <ea179b08-d58c-e116-ae07-cee882f306f8@spamtrap.tnetconsulting.net>
 <20180623233606.GH24200@mcvoy.com> <20180623233751.GI24200@mcvoy.com>
Message-ID: <4407a9b6-5491-34c0-5c3f-57d9681ee36a@spamtrap.tnetconsulting.net>

On 06/23/2018 05:37 PM, Larry McVoy wrote:
> Never mind, I didn't read far enough into your email, my bad.

;-)

I'm always happy to have people (politely) challenge me.  I find it 
keeps me on my toes and helps me make sure that I didn't make a mistake.

So thank you for challenging me.  :-)



-- 
Grant. . . .
unix || die


From norman at oclsc.org  Sun Jun 24 13:08:19 2018
From: norman at oclsc.org (Norman Wilson)
Date: Sat, 23 Jun 2018 23:08:19 -0400 (EDT)
Subject: [TUHS] off-topic list
Message-ID: <20180624030819.390F24422F@lignose.oclsc.org>

Grant Taylor:

  Why do you have to give up one tool to start using a different tool?

====

I hereby declare this part of the conversation very much
on-topic for TUHS.

The question of what tools should exist, what should do what,
whether to make a new tool or add something to an existing
one, is a continuing thread in the history of UNIX and its
use and abuse.

Norman Wilson
Toronto ON


From norman at oclsc.org  Sun Jun 24 13:14:54 2018
From: norman at oclsc.org (Norman Wilson)
Date: Sat, 23 Jun 2018 23:14:54 -0400 (EDT)
Subject: [TUHS] Old mainframe I/O speed (was: core)
Message-ID: <20180624031454.4F9484422F@lignose.oclsc.org>

Ron Minnich:

  Jon Hall used to love telling the story of the VAX backplane with the glue
  in the board slots, which clever customers managed to damage and have
  repaired with a non-glued-up backplane.

=====

It wasn't exactly a VAX backplane; it was a QBus backplane,
though I don't know whether this marketing-induced castration
was performed anywhere but on the backplanes of certain
MicroVAX models.

I think one of my saved-from-the-dumpster BA23s had one
of those backplanes.  I just declared it to be a source
of spare parts, other than backplanes.

Norman Wilson
Toronto ON


From michael at kjorling.se  Sun Jun 24 20:04:38 2018
From: michael at kjorling.se (Michael =?utf-8?B?S2rDtnJsaW5n?=)
Date: Sun, 24 Jun 2018 10:04:38 +0000
Subject: [TUHS] off-topic list
In-Reply-To: <09ee8833-c8c0-8911-751c-906b737209b7@spamtrap.tnetconsulting.net>
References: <DE56C21F-CF7D-4B44-BC43-0C27CBD6DD7A@tfeb.org>
 <20180622145402.GT21272@mcvoy.com>
 <20180622151751.BEK9i%steffen@sdaoden.eu>
 <b6ef82de-739a-ed8e-0e91-3abfa2fb5f07@spamtrap.tnetconsulting.net>
 <20180622192505.mfig_%steffen@sdaoden.eu>
 <89e5ae21-ccc0-5c84-837b-120a1a7d9e26@spamtrap.tnetconsulting.net>
 <20180623144959.M9byU%steffen@sdaoden.eu>
 <ce6f617c-cf8e-63c6-8186-27e09c78020c@spamtrap.tnetconsulting.net>
 <20180623223851.LcBjy%steffen@sdaoden.eu>
 <09ee8833-c8c0-8911-751c-906b737209b7@spamtrap.tnetconsulting.net>
Message-ID: <20180624100438.GY10129@h-174-65.A328.priv.bahnhof.se>

On 23 Jun 2018 18:18 -0600, from tuhs at minnie.tuhs.org (Grant Taylor via TUHS):
>> Now you use several programs which all ship with all the knowledge.
> 
> I suppose if you count grepping for a line in a text file as
> knowledge of the format, okay.
> 
> egrep "^Subject: " message.txt
> 
> There's nothing special about that.  It's a text file with a line
> that looks like this:
> 
> Subject: Re: [TUHS] off-topic list

The problem, of course (and I hope this is keeping this Unixy enough),
with that approach is that it won't handle headers split across
multiple lines (I'm looking at you, Received:, but you aren't alone),
and that it'll match lines in the body of the message as well (such as
the "Subject: " line in the body of your message), unless the body
happens to be e.g. Base64 encoded which instead complicates searching
for non-header material.

For sure neither is insurmountable even with standard tools, but it
does require a bit more complexity than a simple egrep to properly
parse even a single message, let alone a combination of multiple ones
(as seen in mbox mailboxes, for example). At that point having
specific tools, such as formail, that understand the specific file
format does start to make sense...

There isn't really much conceptual difference between writing, say,
    formail -X Subject: < message.txt
and
    egrep "^Subject: " message.txt
but the way the former handles certain edge cases is definitely better
than that of the latter.
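
As a sketch, the header/body separation and unfolding that formail
gets right can be approximated with standard awk (illustrative only,
not a formail replacement):

```shell
# Print the (unfolded) Subject: header, and stop at the blank line
# that ends the header section so body lines can never match.
printf 'Subject: Re: [TUHS]\n off-topic list\nX-Foo: bar\n\nSubject: in the body\n' |
awk '
    /^$/     { exit }                   # blank line: end of headers
    /^[ \t]/ { sub(/^[ \t]+/, " ")      # folded continuation line
               hdr = hdr $0; next }
    { if (hdr ~ /^Subject:/) print hdr  # previous header is complete
      hdr = $0 }
    END { if (hdr ~ /^Subject:/) print hdr }'
```

which prints the single unfolded line "Subject: Re: [TUHS] off-topic
list" and ignores the Subject: in the body.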

Make everything as simple as possible, but not simpler. (That goes for
web pages, too, by the way.)


> But I do wish that TUHS stripped DKIM and associated headers of
> messages going into the mailing list.  By doing that, there would be
> no data to compare to that wouldn't match.
> 
> I think it would be even better if TUHS would DKIM sign messages as
> they leave the mailing list's mail server.

I believe the correct way would indeed be to validate, strip and
possibly re-sign. That way, everyone would (should) be making correct
claims about a message's origin.

FWIW, SPF presents a similar problem with message forwarding without
address rewriting... so it's definitely not just DKIM.

-- 
Michael Kjörling • https://michael.kjorling.se • michael at kjorling.se
  “The most dangerous thought that you can have as a creative person
              is to think you know what you’re doing.” (Bret Victor)


From paul.winalski at gmail.com  Sun Jun 24 23:03:39 2018
From: paul.winalski at gmail.com (Paul Winalski)
Date: Sun, 24 Jun 2018 09:03:39 -0400
Subject: [TUHS] Old mainframe I/O speed (was: core)
In-Reply-To: <20180624031454.4F9484422F@lignose.oclsc.org>
References: <20180624031454.4F9484422F@lignose.oclsc.org>
Message-ID: <CABH=_VTOZbdzfXrYYbb13XG5EZTxXXYDTGUaWLPRrWYx5MBuzA@mail.gmail.com>

On 6/23/18, Norman Wilson <norman at oclsc.org> wrote:
> Ron Minnich:
>
>   Jon Hall used to love telling the story of the VAX backplane with the
> glue
>   in the board slots, which clever customers managed to damage and have
>   repaired with a non-glued-up backplane.

That was the VAXstation-11/RC.  Marketing wanted a VAXstation with
fewer backplane slots that it could sell at a cheaper price.  Rather
than manufacture a different board, they just filled the extra
backplane slots with glue to render them unusable.  "RC" officially
stood for "restricted configuration", but we in Engineering called it
"resin caulked".

-Paul W.


From jnc at mercury.lcs.mit.edu  Sun Jun 24 23:14:58 2018
From: jnc at mercury.lcs.mit.edu (Noel Chiappa)
Date: Sun, 24 Jun 2018 09:14:58 -0400 (EDT)
Subject: [TUHS] off-topic list
Message-ID: <20180624131458.6E96518C082@mercury.lcs.mit.edu>

    > On 06/23/2018 04:38 PM, Steffen Nurpmeso wrote:

    > Others like DNS are pretty perfect and scale fantastic.

It's perhaps worth noting that today's DNS is somewhat different from the
original; some fairly substantial changes were made early on (although maybe
it was just in the security, I don't quite recall).

(The details escape me at this point, but at one point I did a detailed study
of DNS, and DNS security, for writing the security architecture document for
the resolution system in LISP - the networking one, not the language.)

    Noel


From stewart at serissa.com  Mon Jun 25 00:41:42 2018
From: stewart at serissa.com (Lawrence Stewart)
Date: Sun, 24 Jun 2018 10:41:42 -0400
Subject: [TUHS] Old mainframe I/O speed (was: core)
In-Reply-To: <CABH=_VTOZbdzfXrYYbb13XG5EZTxXXYDTGUaWLPRrWYx5MBuzA@mail.gmail.com>
References: <20180624031454.4F9484422F@lignose.oclsc.org>
 <CABH=_VTOZbdzfXrYYbb13XG5EZTxXXYDTGUaWLPRrWYx5MBuzA@mail.gmail.com>
Message-ID: <07BD6DF6-66B3-4EF3-B3B0-F2E2EBB3A209@serissa.com>



> On 2018, Jun 24, at 9:03 AM, Paul Winalski <paul.winalski at gmail.com> wrote:
> 
> On 6/23/18, Norman Wilson <norman at oclsc.org> wrote:
>> Ron Minnich:
>> 
>>  Jon Hall used to love telling the story of the VAX backplane with the
>> glue
>>  in the board slots, which clever customers managed to damage and have
>>  repaired with a non-glued-up backplane.
> 
> That was the VAXstation-11/RC.  Marketing wanted a VAXstation with
> fewer backplane slots that it could sell at a cheaper price.  Rather
> than manufacture a different board, they just filled the extra
> backplane slots with glue to render them unusable.  "RC" officially
> stood for "restricted configuration", but we in Engineering called it
> "resin caulked".
> 
> -Paul W.

Some customers of the 11/RC figured out how to buy the full backplane as “spare parts”, which worked for a while.

Every industry with up-front R&D or capital costs has the problem that the marginal cost of goods is much lower than the average cost.  It happens to airlines, chip companies, system companies, and especially commercial software companies.  Introducing product differentiation is one solution: glue, microcode NOPs, or DRM licence unlock codes all work, but they tend to damage your reputation.

-L

From krewat at kilonet.net  Mon Jun 25 01:47:32 2018
From: krewat at kilonet.net (Arthur Krewat)
Date: Sun, 24 Jun 2018 11:47:32 -0400
Subject: [TUHS] Old mainframe I/O speed (was: core)
In-Reply-To: <07BD6DF6-66B3-4EF3-B3B0-F2E2EBB3A209@serissa.com>
References: <20180624031454.4F9484422F@lignose.oclsc.org>
 <CABH=_VTOZbdzfXrYYbb13XG5EZTxXXYDTGUaWLPRrWYx5MBuzA@mail.gmail.com>
 <07BD6DF6-66B3-4EF3-B3B0-F2E2EBB3A209@serissa.com>
Message-ID: <2a7b500f-3637-6756-ec0e-d0d5f6534147@kilonet.net>

On 6/24/2018 10:41 AM, Lawrence Stewart wrote:
> Glue or microcode NOPs or DRM licence unlock codes work, but they tend to damage your reputation.

I was called in at the last minute to help a consulting firm that was 
having a hard time convincing their customer that they knew what they 
were doing.  They had spec'd out a SunFire 4800 cluster back in the 
mid-2000s that would run 10 or so medium-to-large OLTP and DW Oracle 
instances.  I came across the notion of "Capacity On Demand" (COD) CPU 
licensing: you would buy a complete system, full of CPUs and memory, but 
only license a subset of the CPUs (and associated memory).

The customer thought, "Great!  We can save a few $'s, and if we want, we 
can turn on the extra capacity when/if we need it."

After reading all the documentation, I was on a conference call with 
some Sun engineers, the sales rep, and my customer's team (including 
some of the consultants, who were a little too "wet behind the ears").

I point-blank asked the engineers: "I see in the documentation that if 
you use COD, memory interleaving is turned off, which only makes sense. 
Since we're only licensing 3 of every 4 CPUs, doesn't that mean we're 
only going to get half, if not one quarter, of the platform's advertised 
memory bandwidth?"  (Single vs. two-way vs. four-way interleaving; with 
an odd number of CPUs, no interleaving.)  Reluctantly, one of the 
engineers agreed that was indeed the case.  The other "engineers" had no 
freakin' clue, but muttered something about "we have to remember that 
for next time".

I roughly calculated the difference in my head and said: "For an extra 
2% of the entire project cost (IBM Shark, Oracle licenses, and Sun 
hardware combined), we're going to hobble these systems that much?" 
After the consulting firm I was sub-contracting for balked at telling 
the customer about this extra cost, I mentioned it in the presentation 
for the customer's CIO.  She perked up her ears and immediately said: 
"We'll spend the extra for that much performance.  What were you guys 
thinking?" (referring to the original consulting firm's own "Sun 
expert", with whom I'd had a lot of arguments, and who actually quit the 
day they signed the contract with Sun).

I wonder, to this day, how many Sun customers were sold this COD concept 
only to suffer through 1/2 or 1/4 the memory bandwidth. This was for the 
entire SunFire 3800/4800/4810/6800/E12K/E15K line.

I went on to support that system for 5 more years as the customer 
wouldn't let the consulting firm even THINK of letting me leave ;)

art k.



From norman at oclsc.org  Mon Jun 25 04:49:30 2018
From: norman at oclsc.org (Norman Wilson)
Date: Sun, 24 Jun 2018 14:49:30 -0400
Subject: [TUHS] Old mainframe I/O speed (was: core)
Message-ID: <1529866175.26954.for-standards-violators@oclsc.org>

Paul Winalski:

  Rather than design a new
  CPU, they just put NOPs in the Skipjack microcode to slow it down.
  The official code name for this machine was Flounder, but within DEC
  engineering we called it "Wimpjack".  Customers could buy a field
  upgrade for Flounder microcode that restored it to Skipjack
  performance levels.

====

As I remember it, once it came out that the upgrade
merely removed gratuitous nops, customers raised sufficient
fuss that the denopped microcode was made available for
free (perhaps only to those with service contracts) and
Flounder (VAX 8500) was no longer sold.

Norman Wilson
Toronto ON


From norman at oclsc.org  Mon Jun 25 04:49:46 2018
From: norman at oclsc.org (Norman Wilson)
Date: Sun, 24 Jun 2018 14:49:46 -0400
Subject: [TUHS] Old mainframe I/O speed (was: core)
Message-ID: <1529866190.27006.for-standards-violators@oclsc.org>

Paul Winalski:

  That was the VAXstation-11/RC.

===

Yep, that's the name.

My first batch of discarded MicroVAX IIs were the original
backbone routers for a large university campus, installed
ca. 1990.  That backbone ran over serial-line connections,
at 56Kbps, which was quite impressive for the day given the
physical distances involved.

Either they had a bunch of Qbus backplanes lying around, or
someone computed that the cost of an 11/RC plus a backplane
was appreciably less than a system with an unobstructed
backplane.  In any case, they swapped most of the backplanes
themselves.  The one I got that still had the glue in was an
anomaly; maybe it was a spare chassis.

The MicroVAX routers ran Ultrix, and some of them had uptimes
of five years when they were finally shut down to be discarded.
All the hardware I rescued tested out fine, and some of it is
still running happily in my basement.  I've had a few disk
failures over the years, and I think lost one power supply
back around Y2K and maybe had a DZV11 fail, but that's it.
We don't make hardware like that any more.

Norman Wilson
Toronto ON


From bakul at bitblocks.com  Mon Jun 25 10:43:10 2018
From: bakul at bitblocks.com (Bakul Shah)
Date: Sun, 24 Jun 2018 17:43:10 -0700
Subject: [TUHS] mail (Re:  off-topic list
In-Reply-To: <09ee8833-c8c0-8911-751c-906b737209b7@spamtrap.tnetconsulting.net>
References: <CMM.0.96.0.1529621068.beebe@gamma.math.utah.edu>
 <f1a2f732-400f-8044-1c90-9a8500a17d15@spamtrap.tnetconsulting.net>
 <20180621234706.GA23316@minnie.tuhs.org>
 <alpine.BSF.2.21.999.1806221416210.68981@aneurin.horsfall.org>
 <20180622142846.GS21272@mcvoy.com>
 <DE56C21F-CF7D-4B44-BC43-0C27CBD6DD7A@tfeb.org>
 <20180622145402.GT21272@mcvoy.com> <20180622151751.BEK9i%steffen@sdaoden.eu>
 <b6ef82de-739a-ed8e-0e91-3abfa2fb5f07@spamtrap.tnetconsulting.net>
 <20180622192505.mfig_%steffen@sdaoden.eu>
 <89e5ae21-ccc0-5c84-837b-120a1a7d9e26@spamtrap.tnetconsulting.net>
 <20180623144959.M9byU%steffen@sdaoden.eu>
 <ce6f617c-cf8e-63c6-8186-27e09c78020c@spamtrap.tnetconsulting.net>
 <20180623223851.LcBjy%steffen@sdaoden.eu>
 <09ee8833-c8c0-8911-751c-906b737209b7@spamtrap.tnetconsulting.net>
Message-ID: <AE998A8F-3541-41E6-87F3-266340768C26@bitblocks.com>

On Jun 23, 2018, at 5:18 PM, Grant Taylor via TUHS <tuhs at minnie.tuhs.org> wrote:
> 
>> I actually have no idea of nmh, but i for one think the sentence of the old BSD Mail manual stating it is an "intelligent mail processing system, which has a command syntax reminiscent of ed(1) with lines replaced by messages" has always been a bit excessive.  And the fork i maintain additionally said it "is also usable as a mail batch language, both for sending and receiving mail", adding onto that.
> 
> What little I know about the MH type mail stores and associated utilities suggests they are indeed quite powerful.  I think they operate under the premise that each message is its own file and that you work in something akin to a shell, if not your actual OS shell.  I think the MH commands are quite literally unix commands that can be called from the unix shell.  I think this is in the spirit of simply enhancing the shell to seem as if it has email abilities via the MH commands.  Use any traditional unix text processing utilities you want to manipulate email.
> 
> MH has always been attractive to me, but I've never used it myself.

One of the reasons I continue using MH (now nmh) as my primary
email system is that it works so well with Unixy tools.  I
can, e.g., do

    pick +TUHS -after '31 Dec 2017' -and -from bakul | xargs scan

to see summary lines of the messages I posted to this group this
year.  Using MH tools with non-MH tools is a bit clunky though
useful; I just have to cd to the right Mail directory first.

A mail message is really a structured object and if we 
represent it as a directory, all the standard unix tools can
be used. This is what Presotto's upasfs on plan9 does. With it
you can do things like

    cd /mail/fs/mbox
    grep -l foo */from | sed 's/from/to/' | xargs grep -l bar

But you soon realize standard unix tools are quite a bit
clunky in dealing with structured objects - often you want to
deal with a set of sub-components simultaneously (e.g. the
pick command above).  This problem is actually pretty common in
the unix world, and you have to have context/target-specific
tools.  At the same time, a unix sensibility can guide the
architecture / design.  Something like the MH toolset would be
easier to implement if you assume an underlying mbox
filesystem.  Conversely, the same sort of tools can perhaps be
useful elsewhere.
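A toy sketch of that directory mapping (the message, paths, and per-field file names are invented for illustration; the real upasfs is far richer, and this ignores folded headers and MIME entirely):

```shell
# One message, exploded into a per-field directory so that plain
# grep/sed/xargs can address individual components, upasfs-style.
msgdir=/tmp/msgfs/1
mkdir -p "$msgdir"

cat > /tmp/one.msg <<'EOF'
From: bakul@example.org
To: tuhs@example.org
Subject: structured mail as a filesystem

body text here
EOF

# Headers become one file each (lowercased name, value as content);
# everything after the blank line goes into "body".
awk -v dir="$msgdir" '
    inbody       { print > (dir "/body"); next }
    /^$/         { inbody = 1; next }
    /^[^ \t:]+:/ {
        name = tolower(substr($0, 1, index($0, ":") - 1))
        val  = substr($0, index($0, ":") + 2)
        print val > (dir "/" name)
    }
' /tmp/one.msg

# Standard tools now see message components as ordinary files:
grep -l bakul "$msgdir"/from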

\vague-idea{
Seems to me that we have taken the unix (& plan9) model for
granted and not really explored it at this level much. That
is, can we map structured objects into trees/graphs in an
efficient way such that standard tools can be used on them?
Can we extend standard tools to explore substructures in a
more generic way?
}



From dave at horsfall.org  Mon Jun 25 11:38:56 2018
From: dave at horsfall.org (Dave Horsfall)
Date: Mon, 25 Jun 2018 11:38:56 +1000 (EST)
Subject: [TUHS] off-topic list
In-Reply-To: <20180624131458.6E96518C082@mercury.lcs.mit.edu>
References: <20180624131458.6E96518C082@mercury.lcs.mit.edu>
Message-ID: <alpine.BSF.2.21.999.1806251121440.68981@aneurin.horsfall.org>

On Sun, 24 Jun 2018, Noel Chiappa wrote:

> It's perhaps worth noting that today's DNS is somewhat different from 
> the original; some fairly substantial changes were made early on 
> (although maybe it was just in the security, I don't quite recall).

The only updates I've seen to BIND in recent years were security-related, 
not functionality.  Yeah, I could switch to another DNS server, but like 
Sendmail vs. Postfix[*] it's better the devil you know etc...

That said, if I had to set up a brand new server then it would run in 
parallel with the old one for a while, serving a different set of domains 
so that I wouldn't care if I screwed things up (I've got quite a few 
domains that I could subscribe to existing lists in parallel).

[*]
And I won't even consider switching to EMACS from VI :-)

-- Dave


From gtaylor at tnetconsulting.net  Mon Jun 25 11:46:51 2018
From: gtaylor at tnetconsulting.net (Grant Taylor)
Date: Sun, 24 Jun 2018 19:46:51 -0600
Subject: [TUHS] off-topic list
In-Reply-To: <alpine.BSF.2.21.999.1806251121440.68981@aneurin.horsfall.org>
References: <20180624131458.6E96518C082@mercury.lcs.mit.edu>
 <alpine.BSF.2.21.999.1806251121440.68981@aneurin.horsfall.org>
Message-ID: <8a81f6fb-5689-f0b4-49e3-5871c4d3a402@spamtrap.tnetconsulting.net>

On 06/24/2018 07:38 PM, Dave Horsfall wrote:
> The only updates I've seen to BIND in recent years were 
> security-related, not functionality.  Yeah, I could switch to another 
> DNS server, but like Sendmail vs. Postfix[*] it's better the devil you 
> know etc...

I've seen the following features introduced within the last five years:

  - Response Policy Zones have added (sub)features and matured.
  - Response Policy Service (think milter-like functionality for BIND).
  - Catalog Zones
  - Query Minimization
  - EDNS0 Client Subnet support is being worked on.

That's just what comes to mind quickly.



-- 
Grant. . . .
unix || die


From lyndon at orthanc.ca  Mon Jun 25 11:15:52 2018
From: lyndon at orthanc.ca (Lyndon Nerenberg)
Date: Sun, 24 Jun 2018 18:15:52 -0700
Subject: [TUHS] mail (Re:  off-topic list
In-Reply-To: <AE998A8F-3541-41E6-87F3-266340768C26@bitblocks.com>
References: <CMM.0.96.0.1529621068.beebe@gamma.math.utah.edu>
 <f1a2f732-400f-8044-1c90-9a8500a17d15@spamtrap.tnetconsulting.net>
 <20180621234706.GA23316@minnie.tuhs.org>
 <alpine.BSF.2.21.999.1806221416210.68981@aneurin.horsfall.org>
 <20180622142846.GS21272@mcvoy.com>
 <DE56C21F-CF7D-4B44-BC43-0C27CBD6DD7A@tfeb.org>
 <20180622145402.GT21272@mcvoy.com> <20180622151751.BEK9i%steffen@sdaoden.eu>
 <b6ef82de-739a-ed8e-0e91-3abfa2fb5f07@spamtrap.tnetconsulting.net>
 <20180622192505.mfig_%steffen@sdaoden.eu>
 <89e5ae21-ccc0-5c84-837b-120a1a7d9e26@spamtrap.tnetconsulting.net>
 <20180623144959.M9byU%steffen@sdaoden.eu>
 <ce6f617c-cf8e-63c6-8186-27e09c78020c@spamtrap.tnetconsulting.net>
 <20180623223851.LcBjy%steffen@sdaoden.eu>
 <09ee8833-c8c0-8911-751c-906b737209b7@spamtrap.tnetconsulting.net>
 <AE998A8F-3541-41E6-87F3-266340768C26@bitblocks.com>
Message-ID: <42849F79-D132-4059-8A94-FFF8B141C49E@orthanc.ca>


> On Jun 24, 2018, at 5:43 PM, Bakul Shah <bakul at bitblocks.com> wrote:
> 
> \vague-idea{
> Seems to me that we have taken the unix (& plan9) model for
> granted and not really explored it at this level much. That
> is, can we map structured objects into trees/graphs in an
> efficient way such that standard tools can be used on them?
> Can we extend standard tools to explore substructures in a
> more generic way?
> }

One of the main reasons I bailed out of nmh development was the disinterest in trying new and offside ways of addressing the mail store model.  I have been an MH user since the mid-80s, and I still think it is more functional than any other MUA in use today (including Alpine).  Header voyeurs will note I'm sending this from Mail.app.  As was mentioned earlier in this conversation, there is no one MUA, just as there is no one text editing tool.  I use a half dozen MUAs on a daily basis, depending on my needs.

But email, as a structured subset of text, deserves its own set of dedicated tools.  formail and procmail are immediate examples.  As MIME intruded on the traditional ASCII-only UNIX mbox format, grep and friends became less helpful.  HTML just made it worse.

Plan9's upas/fs abstraction of the underlying mailbox helps a lot.  By undoing all the MIME encoding, and translating the character sets to UTF8, grep and friends work again.  That filesystem abstraction has so much more potential, too.  E.g. pushing a pgpfs layer in there to transparently decode PGP content.

I really wish there was a way to bind Plan9-style file servers into the UNIX filename space.  Matt Blaze's CFS is the closest example of this I have seen.  Yet even it needs superuser privs.  (Yes, there's FUSE, but it just re-invents the NFS interface with no added benefit.)

Meanwhile, I have been hacking on a version of nmh that uses a native UTF8 message store.  I.e. it undoes all the MIME encoding, and undoes non-UTF8 charsets.  So grep and friends work, once again.

--lyndon



From ggm at algebras.org  Mon Jun 25 12:44:15 2018
From: ggm at algebras.org (George Michaelson)
Date: Mon, 25 Jun 2018 12:44:15 +1000
Subject: [TUHS] mail (Re: off-topic list
In-Reply-To: <42849F79-D132-4059-8A94-FFF8B141C49E@orthanc.ca>
References: <CMM.0.96.0.1529621068.beebe@gamma.math.utah.edu>
 <f1a2f732-400f-8044-1c90-9a8500a17d15@spamtrap.tnetconsulting.net>
 <20180621234706.GA23316@minnie.tuhs.org>
 <alpine.BSF.2.21.999.1806221416210.68981@aneurin.horsfall.org>
 <20180622142846.GS21272@mcvoy.com>
 <DE56C21F-CF7D-4B44-BC43-0C27CBD6DD7A@tfeb.org>
 <20180622145402.GT21272@mcvoy.com> <20180622151751.BEK9i%steffen@sdaoden.eu>
 <b6ef82de-739a-ed8e-0e91-3abfa2fb5f07@spamtrap.tnetconsulting.net>
 <20180622192505.mfig_%steffen@sdaoden.eu>
 <89e5ae21-ccc0-5c84-837b-120a1a7d9e26@spamtrap.tnetconsulting.net>
 <20180623144959.M9byU%steffen@sdaoden.eu>
 <ce6f617c-cf8e-63c6-8186-27e09c78020c@spamtrap.tnetconsulting.net>
 <20180623223851.LcBjy%steffen@sdaoden.eu>
 <09ee8833-c8c0-8911-751c-906b737209b7@spamtrap.tnetconsulting.net>
 <AE998A8F-3541-41E6-87F3-266340768C26@bitblocks.com>
 <42849F79-D132-4059-8A94-FFF8B141C49E@orthanc.ca>
Message-ID: <CAKr6gn0DDZg9vUZSkxYvCLWTPOxNRK72_7bNv5U5LrEEeOa7LQ@mail.gmail.com>

Your email, Lyndon, was uncannily like what I wanted to say.  I think.

a) (n)MH is great
b) I can't live in one mail UI right now for a number of reasons.
That's unfortunate
3) integration of MH into pop/imap was abortive and requires effort.
If that's improved, I'd love to know
4) we stopped advancing the art (tm) for handling data via pipes and
grep-like workflows

The plan9 plumber is amazing.  I remain in awe of the work done to get
there, but it's observable that it isn't well applied anywhere else in
the ecology, that I can see.  Which is odd, and a shame given Pike works
at Google now.  I guess people move into different roles (I know I
have)

If you put your version of NMH up anywhere I'd love to see it.

The one thing I do know is that I have far more email accounts than I
should (three, with two subvariants) and far more UIs than I want
(four)


From dave at horsfall.org  Mon Jun 25 12:53:04 2018
From: dave at horsfall.org (Dave Horsfall)
Date: Mon, 25 Jun 2018 12:53:04 +1000 (EST)
Subject: [TUHS] off-topic list
In-Reply-To: <alpine.NEB.2.20.1806231615130.17586@neener.bl.org>
References: <CMM.0.96.0.1529621068.beebe@gamma.math.utah.edu>
 <f1a2f732-400f-8044-1c90-9a8500a17d15@spamtrap.tnetconsulting.net>
 <20180621234706.GA23316@minnie.tuhs.org>
 <alpine.BSF.2.21.999.1806221416210.68981@aneurin.horsfall.org>
 <20180622142846.GS21272@mcvoy.com>
 <DE56C21F-CF7D-4B44-BC43-0C27CBD6DD7A@tfeb.org>
 <20180622145402.GT21272@mcvoy.com> <20180622151751.BEK9i%steffen@sdaoden.eu>
 <b6ef82de-739a-ed8e-0e91-3abfa2fb5f07@spamtrap.tnetconsulting.net>
 <20180622192505.mfig_%steffen@sdaoden.eu>
 <89e5ae21-ccc0-5c84-837b-120a1a7d9e26@spamtrap.tnetconsulting.net>
 <20180623144959.M9byU%steffen@sdaoden.eu>
 <ce6f617c-cf8e-63c6-8186-27e09c78020c@spamtrap.tnetconsulting.net>
 <alpine.NEB.2.20.1806231615130.17586@neener.bl.org>
Message-ID: <alpine.BSF.2.21.999.1806251240100.68981@aneurin.horsfall.org>

On Sat, 23 Jun 2018, Michael Parson wrote:

> The first rule in my .procmailrc does this with formail:

Anyone with any concept of security will not be running Procmail; it's not 
even supported by its author any more, due to its opaque syntax and likely 
vulnerabilities (it believes user-supplied headers and runs shell commands 
based upon them).

-- Dave VK2KFU


From lm at mcvoy.com  Mon Jun 25 13:04:00 2018
From: lm at mcvoy.com (Larry McVoy)
Date: Sun, 24 Jun 2018 20:04:00 -0700
Subject: [TUHS] mail (Re: off-topic list
In-Reply-To: <CAKr6gn0DDZg9vUZSkxYvCLWTPOxNRK72_7bNv5U5LrEEeOa7LQ@mail.gmail.com>
References: <b6ef82de-739a-ed8e-0e91-3abfa2fb5f07@spamtrap.tnetconsulting.net>
 <20180622192505.mfig_%steffen@sdaoden.eu>
 <89e5ae21-ccc0-5c84-837b-120a1a7d9e26@spamtrap.tnetconsulting.net>
 <20180623144959.M9byU%steffen@sdaoden.eu>
 <ce6f617c-cf8e-63c6-8186-27e09c78020c@spamtrap.tnetconsulting.net>
 <20180623223851.LcBjy%steffen@sdaoden.eu>
 <09ee8833-c8c0-8911-751c-906b737209b7@spamtrap.tnetconsulting.net>
 <AE998A8F-3541-41E6-87F3-266340768C26@bitblocks.com>
 <42849F79-D132-4059-8A94-FFF8B141C49E@orthanc.ca>
 <CAKr6gn0DDZg9vUZSkxYvCLWTPOxNRK72_7bNv5U5LrEEeOa7LQ@mail.gmail.com>
Message-ID: <20180625030400.GA17529@mcvoy.com>

On Mon, Jun 25, 2018 at 12:44:15PM +1000, George Michaelson wrote:
> your email Lyndon, was uncannily like what I wanted to say. I think.
> 
> a) (n)MH is great
> b) I can't live in one mail UI right now for a number of reasons.
> Thats unfortunate
> 3) integration of MH into pop/imap was abortive and requires effort.
> If thats improved, I'd love to know

I use rackspace for mcvoy.com but plain old mutt for reading it here
and sending.  I use fetchmail to get the mail locally.  Works for me
because mcvoy.com used to be a mail server and is still set up from
those times to send mail.  Kind of hacky but rackspace does the spam
filtering and I got sick of babysitting that.

> 4) we stopped advancing the art (tm) for handling data via pipes and
> grep-like workflows

So the source management system I started, BitKeeper, has sort of an
answer for the processing question.  It's a stretch, but if you look 
at it enough, a mail message is a little like a commit; they both have
fields.

Below is an example of the little "language" we built for processing
deltas in a revision history graph.  Some notes on how it works:

	:SOMETHING: means take struct delta->SOMETHING and replace the
	:SOMETHING: with that value.

	Control statements begin with $, so $if (expr).
	From awk we get $begin and $end (this whole language is very
	awk-like; where awk would consider a line, we hand the language
	a struct delta).

	We invented a $each(:MULTI_LINE_FILE:) {
		:MULTI_LINE_FILE:
	}
	that is an iterator; the variable in the body evaluates to 
	the next line of the multi-line thing.  Weird but it works.

	$json(:FIELD:) json encodes the field.

	"text" is just printed.

	We gave you 10 registers/variables in $0 .. $9, they default to
	false.

This little script is running through each commit and printing out the
information in json.  Examples after the script.

It's important to understand that the $begin/end are run before/after
the deltas and then the main part of the script is run once per delta,
just like awk.

# dspec-v2
# The dspec used by 'bk changes -json'

$begin {
	"[\n"
}

$if (:CHANGESET: && !:COMPONENT_V:) {
	$if($0 -eq 1) {
		"\},\n"
	}
	"\{\n"
	"  \"key\": \":MD5KEY:\",\n"
	"  \"user\": \":USER:\",\n"
	"  \"host\": \":HOST:\",\n"
	"  \"date\": \":Dy:-:Dm:-:Dd:T:T::TZ:\",\n"
	"  \"serial\": :DS:,\n"
	"  \"comments\": \"" $each(:C:){$json{(:C:)}\\n} "\",\n"
        $if (:TAGS:) {
             "  \"tags\": [ "
             $each(:TAGS:){:JOIN:"\""(:TAGS:)"\""}
             " ],\n"
        }
        "  \"parents\": [ "
            $if(:PARENT:){"\"" :MD5KEY|PARENT: "\""}
            $if(:PARENT: && :MPARENT:){," "}
            $if(:MPARENT:){"\"" :MD5KEY|MPARENT: "\""}
            " ]\n"
	${0=1}		 		# we need to close off the changeset
}

$end {
	$if($0 -eq 1) {
		"\}\n"
	}
	"]\n"
}

So here is human readable output:

$ bk changes -1
ChangeSet at 1.2926, 2018-03-12 08:00:33-04:00, rob at bugs.(none)
  L: Fix bug where "defaultx:" would be scanned as a T_DEFAULT
  followed by a T_COLON. The "x" and anything else after
  "default" and before the colon would be ignored.
  
  So if you ever had an option name that began with "default",
  it wouldn't be scanned correctly.
  
  This bug was reported by user GNX on the BitKeeper user's forum
  (Little language area).

And here is the same thing run through that script above.

$ bk changes -1 --json
[
{
  "key": "5aa66be1MaS_1t5lQkNCflPexCwd2w",
  "user": "rob",
  "host": "bugs.(none)",
  "date": "2018-03-12T08:00:33-04:00",
  "serial": 11178,
  "comments": "L: Fix bug where \"defaultx:\" would be scanned as a T_DEFAULT\nfollowed by a T_COLON. The \"x\" and anything else after\n\"default\" and before the colon would be ignored.\n\nSo if you ever had an option name that began with \"default\",\nit wouldn't be scanned correctly.\n\nThis bug was reported by user GNX on the BitKeeper user's forum\n(Little language area).\n",
  "parents": [ "5a2d8748bf8TYIOquTa3CZInTjC7KQ" ]
}
]


If you have read this far, maybe some ideas from this stuff could be used
to get a sort of processing system for structured data.  You'd need a
plugin for each structure, but the harness itself could be reused.


From bakul at bitblocks.com  Mon Jun 25 13:15:49 2018
From: bakul at bitblocks.com (Bakul Shah)
Date: Sun, 24 Jun 2018 20:15:49 -0700
Subject: [TUHS] mail (Re:  off-topic list
In-Reply-To: <42849F79-D132-4059-8A94-FFF8B141C49E@orthanc.ca>
References: <CMM.0.96.0.1529621068.beebe@gamma.math.utah.edu>
 <f1a2f732-400f-8044-1c90-9a8500a17d15@spamtrap.tnetconsulting.net>
 <20180621234706.GA23316@minnie.tuhs.org>
 <alpine.BSF.2.21.999.1806221416210.68981@aneurin.horsfall.org>
 <20180622142846.GS21272@mcvoy.com>
 <DE56C21F-CF7D-4B44-BC43-0C27CBD6DD7A@tfeb.org>
 <20180622145402.GT21272@mcvoy.com> <20180622151751.BEK9i%steffen@sdaoden.eu>
 <b6ef82de-739a-ed8e-0e91-3abfa2fb5f07@spamtrap.tnetconsulting.net>
 <20180622192505.mfig_%steffen@sdaoden.eu>
 <89e5ae21-ccc0-5c84-837b-120a1a7d9e26@spamtrap.tnetconsulting.net>
 <20180623144959.M9byU%steffen@sdaoden.eu>
 <ce6f617c-cf8e-63c6-8186-27e09c78020c@spamtrap.tnetconsulting.net>
 <20180623223851.LcBjy%steffen@sdaoden.eu>
 <09ee8833-c8c0-8911-751c-906b737209b7@spamtrap.tnetconsulting.net>
 <AE998A8F-3541-41E6-87F3-266340768C26@bitblocks.com>
 <42849F79-D132-4059-8A94-FFF8B141C49E@orthanc.ca>
Message-ID: <8FBA4480-4ABE-4C65-8B0C-8B2B696A29A0@bitblocks.com>

On Jun 24, 2018, at 6:15 PM, Lyndon Nerenberg <lyndon at orthanc.ca> wrote:
> 
> But email, as a structured subset of text, deserves its own set of dedicated tools.  formail and procmail are immediate examples.  As MIME intruded on the traditional ASCII-only UNIX mbox format, grep and friends became less helpful.  HTML just made it worse.

Storing messages in mbox format files is like keeping
tarfiles around. You wouldn't (usually) run grep on a
tarfile - you'd unpack it and then select files based
on extensions etc before running grep. In the same way
running grep on what should be a transport-only mail
format doesn't make much sense.
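Following that analogy, "unpacking" an mbox is a one-liner (a rough sketch with made-up sample data and paths; unlike formail -s it does no ">From " unstuffing):

```shell
mkdir -p /tmp/unpacked
cat > /tmp/test.mbox <<'EOF'
From alice Sun Jun 24 09:03:39 2018
Subject: first message

one
From bob Sun Jun 24 10:41:42 2018
Subject: second message

two
EOF

# Split on the mbox "From " separator, one file per message, after
# which grep and friends operate on messages instead of the archive.
awk '/^From / { n++; if (out) close(out); out = "/tmp/unpacked/msg." n }
     out { print > out }' /tmp/test.mbox

ls /tmp/unpacked
```

After the split you would still want the per-message header handling discussed earlier, but at least the unit of work is now a message rather than a concatenated archive.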

> Meanwhile, I have been hacking on a version of nmh that uses a native UTF8 message store.  I.e. it undoes all the MIME encoding, and undoes non-UTF8 charsets.  So grep and friends work, once again.

Note that Mail.app already does this (more or less).

I have a paper design for (n)mh<->imap connection &
associated command changes. I didn't want to touch nmh
code so I have been doing some prototyping in Go but
any project progress is in fits and starts....

From gtaylor at tnetconsulting.net  Mon Jun 25 15:40:52 2018
From: gtaylor at tnetconsulting.net (Grant Taylor)
Date: Sun, 24 Jun 2018 23:40:52 -0600
Subject: [TUHS] off-topic list
In-Reply-To: <alpine.BSF.2.21.999.1806251240100.68981@aneurin.horsfall.org>
References: <CMM.0.96.0.1529621068.beebe@gamma.math.utah.edu>
 <f1a2f732-400f-8044-1c90-9a8500a17d15@spamtrap.tnetconsulting.net>
 <20180621234706.GA23316@minnie.tuhs.org>
 <alpine.BSF.2.21.999.1806221416210.68981@aneurin.horsfall.org>
 <20180622142846.GS21272@mcvoy.com>
 <DE56C21F-CF7D-4B44-BC43-0C27CBD6DD7A@tfeb.org>
 <20180622145402.GT21272@mcvoy.com> <20180622151751.BEK9i%steffen@sdaoden.eu>
 <b6ef82de-739a-ed8e-0e91-3abfa2fb5f07@spamtrap.tnetconsulting.net>
 <20180622192505.mfig_%steffen@sdaoden.eu>
 <89e5ae21-ccc0-5c84-837b-120a1a7d9e26@spamtrap.tnetconsulting.net>
 <20180623144959.M9byU%steffen@sdaoden.eu>
 <ce6f617c-cf8e-63c6-8186-27e09c78020c@spamtrap.tnetconsulting.net>
 <alpine.NEB.2.20.1806231615130.17586@neener.bl.org>
 <alpine.BSF.2.21.999.1806251240100.68981@aneurin.horsfall.org>
Message-ID: <1e57a799-813a-4a3d-bda8-f460220ac0ea@spamtrap.tnetconsulting.net>

On 06/24/2018 08:53 PM, Dave Horsfall wrote:
> Anyone with any concept of security will not be running Procmail;

I'm going to have to throw a flag and cry foul on that play.

1)  "Anyone (with)" is a rather large group.
2)  "any concept of security" is a rather large (sub)group.
3)  "will not" is rather absolute.

I do believe that I have a better concept of security than many (but not 
all) of my colleagues.

  - I've got leading (if not bleeding) edge email security.
  - I've got full disk encryption on multiple server and workstations.
  - I use encrypted email whenever I can.
  - I play with 802.1ae MACsec (encrypted Ethernet frames).
  - I use salted hashes in proof of concepts.
  - I advocate for proper use of sudo...
  - ...and go out of my way to educate others on how to use sudo properly.

I could go on, but you probably don't care.  In short, I believe I fall 
squarely in categories #1 and #2.

Seeing as how I run procmail I invalidate #3.

So, I ask that you retract or amend your statement, or at least admit 
its (partial) inaccuracies.

> it's not even supported by its author any more,

Many of the software packages that TUHS subscribers run on physical and 
/ or virtual systems are not supported by their authors any more.  Some 
of them are directly connected to the Internet too.

How many copies of (Open)VMS are running on (virtual) VAX (emulators)? 
Don't like (Open)VMS?  Then how about ancient versions of BSD or AT&T 
SYS V?  How many people are running a wide array of ancient BBSs on as 
many platforms?

How many people in corporate offices are running software that went End 
of Support 18 months ago?

Lack of support does not make something useless.

> due to its opaque syntax

I'm not aware of Procmail ever having claimed to have simple syntax.  I 
also believe that Procmail is not alone in this.

m4 is known for being obtuse, as is Sendmail, both of which are still 
used too.  SQL is notorious for being finicky.  I think there's a lot of 
C and C++ code that can fall in the same category.  (LISP … enough said)

> and likely vulnerabilities

Everything has vulnerabilities.  It's about how risky the (known) 
vulnerabilities are, and how likely they are to be exploited.  It's a 
balancing act.  Every administrator (or company directing said 
administrator) performs a risk assessment and makes a decision.

> (it believes user-supplied headers

Does the latest and greatest SMTP server from Google believe the 
information that the user supplies to it?  What about the Nginx web 
server that seems to be in vogue, does it believe the verb, URL, HTTP 
version and Host: header that users supply?

Does Mailman that hosts the TUHS mailing list believe the information 
that minnie provides that was originally user supplied?

Does your web browser believe and act on the information that the web 
server you are connecting to provided?

Applications are designed to trust the information that is provided to 
them.  Sure, run some sanity checks on it.  But ultimately it's the job 
of software to act on the user supplied information.

> and runs shell commands based upon them).

I've seen exceedingly few procmail recipes that actually run shell 
commands.  Almost all of the procmail recipes that I've seen and use do 
a very simple match for fixed text and file the message away in a 
folder.  These are functions built into procmail and NOT shell commands.

The very few procmail recipes that I've seen that do run shell commands 
are passing the message into STDIN of another utility that is itself 
designed to accept user supplied data, ideally in a safe way.
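
For what it's worth, a recipe of the simple kind described above might
look roughly like this (the list address and folder name here are
invented for illustration):

```
# Hypothetical .procmailrc recipe: file list traffic into its own folder.
# The match and the delivery are procmail built-ins - no shell is involved.
:0:
* ^TO_list@example\.org
lists/example/
```

The `:0:` flags use a local lockfile, the `*` line is the header match,
and the final line is the destination folder; nothing in it spawns a
shell command.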

So, I believe your statement, "Anyone with any concept of security will 
not be running Procmail", is false, literally from word one.

If you want to poke fun at something, take a look at SpamAssassin and 
its Perl, both of which are still actively supported.  Or how about all 
the NPM stuff that people are using in web pages, pulled in from places 
that God only knows.



-- 
Grant. . . .
unix || die


From arnold at skeeve.com  Mon Jun 25 16:15:42 2018
From: arnold at skeeve.com (arnold at skeeve.com)
Date: Mon, 25 Jun 2018 00:15:42 -0600
Subject: [TUHS] off-topic list
In-Reply-To: <alpine.BSF.2.21.999.1806251240100.68981@aneurin.horsfall.org>
References: <CMM.0.96.0.1529621068.beebe@gamma.math.utah.edu>
 <f1a2f732-400f-8044-1c90-9a8500a17d15@spamtrap.tnetconsulting.net>
 <20180621234706.GA23316@minnie.tuhs.org>
 <alpine.BSF.2.21.999.1806221416210.68981@aneurin.horsfall.org>
 <20180622142846.GS21272@mcvoy.com>
 <DE56C21F-CF7D-4B44-BC43-0C27CBD6DD7A@tfeb.org>
 <20180622145402.GT21272@mcvoy.com>
 <20180622151751.BEK9i%steffen@sdaoden.eu>
 <b6ef82de-739a-ed8e-0e91-3abfa2fb5f07@spamtrap.tnetconsulting.net>
 <20180622192505.mfig_%steffen@sdaoden.eu>
 <89e5ae21-ccc0-5c84-837b-120a1a7d9e26@spamtrap.tnetconsulting.net>
 <20180623144959.M9byU%steffen@sdaoden.eu>
 <ce6f617c-cf8e-63c6-8186-27e09c78020c@spamtrap.tnetconsulting.net>
 <alpine.NEB.2.20.1806231615130.17586@neener.bl.org>
 <alpine.BSF.2.21.999.1806251240100.68981@aneurin.horsfall.org>
Message-ID: <201806250615.w5P6FgHA018820@freefriends.org>

Dave Horsfall <dave at horsfall.org> wrote:

> On Sat, 23 Jun 2018, Michael Parson wrote:
>
> > The first rule in my .procmailrc does this with formail:
>
> Anyone with any concept of security will not be running Procmail; it's not 
> even supported by its author any more, due to its opaque syntax and likely 
> vulnerabilities (it believes user-supplied headers and runs shell commands 
> based upon them).
>
> -- Dave VK2KFU

So what is the alternative?  I've been using it for years with
a pretty static setup to route incoming mail to different places.
I need *something* to do what it does.

Thanks,

Arnold


From bakul at bitblocks.com  Mon Jun 25 17:27:42 2018
From: bakul at bitblocks.com (Bakul Shah)
Date: Mon, 25 Jun 2018 00:27:42 -0700
Subject: [TUHS] off-topic list
In-Reply-To: Your message of "Mon, 25 Jun 2018 00:15:42 -0600."
 <201806250615.w5P6FgHA018820@freefriends.org>
References: <CMM.0.96.0.1529621068.beebe@gamma.math.utah.edu>
 <f1a2f732-400f-8044-1c90-9a8500a17d15@spamtrap.tnetconsulting.net>
 <20180621234706.GA23316@minnie.tuhs.org>
 <alpine.BSF.2.21.999.1806221416210.68981@aneurin.horsfall.org>
 <20180622142846.GS21272@mcvoy.com>
 <DE56C21F-CF7D-4B44-BC43-0C27CBD6DD7A@tfeb.org>
 <20180622145402.GT21272@mcvoy.com> <20180622151751.BEK9i%steffen@sdaoden.eu>
 <b6ef82de-739a-ed8e-0e91-3abfa2fb5f07@spamtrap.tnetconsulting.net>
 <20180622192505.mfig_%steffen@sdaoden.eu>
 <89e5ae21-ccc0-5c84-837b-120a1a7d9e26@spamtrap.tnetconsulting.net>
 <20180623144959.M9byU%steffen@sdaoden.eu>
 <ce6f617c-cf8e-63c6-8186-27e09c78020c@spamtrap.tnetconsulting.net>
 <alpine.NEB.2.20.1806231615130.17586@neener.bl.org>
 <alpine.BSF.2.21.999.1806251240100.68981@aneurin.horsfall.org>
 <201806250615.w5P6FgHA018820@freefriends.org>
Message-ID: <20180625072751.47D0F156E7DB@mail.bitblocks.com>

On Mon, 25 Jun 2018 00:15:42 -0600 arnold at skeeve.com wrote:
> 
> > On Sat, 23 Jun 2018, Michael Parson wrote:
> >
> > > The first rule in my .procmailrc does this with formail:
> >
> > Anyone with any concept of security will not be running Procmail; it's not 
> > even supported by its author any more, due to its opaque syntax and likely 
> > vulnerabilities (it believes user-supplied headers and runs shell commands 
> > based upon them).
> >
> > -- Dave VK2KFU
> 
> So what is the alternative?  I've been using it for years with
> a pretty static setup to route incoming mail to different places.
> I need *something* to do what it does.

My crude method has worked better than anything else for me
[in use for over two decades].

As I read only a subset of messages from mailing lists, if I
directly filed such messages into their own folders, I would
either have to waste more time scanning much larger mail
folders &/or miss paying attention to some messages even
once[1].

Fortunately, in MH one can use named sequences (which map to
sets of picked messages). In essence, I use sequences as "work
space" and other folders as storage space.

For example

  $ <run spam filtering script>
  $ pick -seq me -to bakul -or -cc bakul -or -bcc bakul 
  $ pick -seq tuhs -to tuhs at tuhs -or -cc tuhs at tuhs
  ...

When I have some idle time, I type

  $ inc # to incorporate new messages into inbox
  $ pickall # my script for creating sequences

Next I scan these sequences in a priority order to see if
anything seems interesting and then process these messages.
Once done, I file them into their own folders and move on to
the next sequence. The whole process takes a few minutes at
most[2] and at the end the inbox is "zeroed"! By zeroing it
each time, I ensure that the next time I will be processing
only new messages, and typically spend less than a second per
message summary line.

[1] This happens to me on Apple Mail.
[2] Unless I decide to reply!


From dot at dotat.at  Mon Jun 25 22:45:21 2018
From: dot at dotat.at (Tony Finch)
Date: Mon, 25 Jun 2018 13:45:21 +0100
Subject: [TUHS] off-topic list
In-Reply-To: <20180624131458.6E96518C082@mercury.lcs.mit.edu>
References: <20180624131458.6E96518C082@mercury.lcs.mit.edu>
Message-ID: <alpine.DEB.2.11.1806251309390.916@grey.csi.cam.ac.uk>

Noel Chiappa <jnc at mercury.lcs.mit.edu> wrote:
>
> It's perhaps worth noting that today's DNS is somewhat different from the
> original; some fairly substantial changes were made early on (although maybe
> it was just in the security, I don't quite recall).

The key early changes were described in RFC 973 (1986): bigger TTLs,
MX records, CNAME and wildcard clarifications.

Next, I think, was NOTIFY / IXFR / UPDATE in 1996/7 which made the whole
system (potentially) a lot more dynamic.

RFC 2181 (also 1997) is important because it includes the standardized
pre-DNSSEC answer to the 1990s cache poisoning attacks found by Bellovin
and others. (Though I think a lot of this was put in place well before the
RFC was published.) This greatly restricted the gossip protocol aspect of
the DNS (records in the additional section).

There was a lot of churn related to IPv6 easy renumbering, which has all
been thrown away apart from DNAME.

There was also a lot of churn around DNSSEC, going right back into the
1990s, which finally settled on what we have now by about 2008. Along the
way they discovered a lot more unclarified edge cases in things like
wildcards. DNSSEC turned the DNS into a somewhat half-arsed PKI. It could
also allow implementations to bring back gossip, though there are
performance and packet size constraints that make it tricky.

The half-arsedness of DNSSEC is mostly related to the administrative
aspects of registrations and transfers and so forth, which are frequently
not very confidence-inspiring. Some of this is due to the way EPP works
(and its predecessor the registry-registrar protocol), but it's mostly
because there's no standard interface between domain owners, DNS
operators, and registrars. (And registrars don't want one because it would
commoditize them. There's probably a David Clark-style Tussle in
Cyberspace case study in here somewhere.)

Tony.
-- 
f.anthony.n.finch  <dot at dotat.at>  http://dotat.at/
work to the benefit of all


From mparson at bl.org  Mon Jun 25 22:52:38 2018
From: mparson at bl.org (Michael Parson)
Date: Mon, 25 Jun 2018 07:52:38 -0500
Subject: [TUHS] off-topic list
In-Reply-To: <201806250615.w5P6FgHA018820@freefriends.org>
References: <CMM.0.96.0.1529621068.beebe@gamma.math.utah.edu>
 <f1a2f732-400f-8044-1c90-9a8500a17d15@spamtrap.tnetconsulting.net>
 <20180621234706.GA23316@minnie.tuhs.org>
 <alpine.BSF.2.21.999.1806221416210.68981@aneurin.horsfall.org>
 <20180622142846.GS21272@mcvoy.com>
 <DE56C21F-CF7D-4B44-BC43-0C27CBD6DD7A@tfeb.org>
 <20180622145402.GT21272@mcvoy.com> <20180622151751.BEK9i%steffen@sdaoden.eu>
 <b6ef82de-739a-ed8e-0e91-3abfa2fb5f07@spamtrap.tnetconsulting.net>
 <20180622192505.mfig_%steffen@sdaoden.eu>
 <89e5ae21-ccc0-5c84-837b-120a1a7d9e26@spamtrap.tnetconsulting.net>
 <20180623144959.M9byU%steffen@sdaoden.eu>
 <ce6f617c-cf8e-63c6-8186-27e09c78020c@spamtrap.tnetconsulting.net>
 <alpine.NEB.2.20.1806231615130.17586@neener.bl.org>
 <alpine.BSF.2.21.999.1806251240100.68981@aneurin.horsfall.org>
 <201806250615.w5P6FgHA018820@freefriends.org>
Message-ID: <d50d5ada2d25cf5c536f205037e1e942@bl.org>

On 2018-06-25 01:15, arnold at skeeve.com wrote:
> Dave Horsfall <dave at horsfall.org> wrote:
> 
>> On Sat, 23 Jun 2018, Michael Parson wrote:
>> 
>> > The first rule in my .procmailrc does this with formail:
>> 
>> Anyone with any concept of security will not be running Procmail; it's 
>> not
>> even supported by its author any more, due to its opaque syntax and 
>> likely
>> vulnerabilities (it believes user-supplied headers and runs shell 
>> commands
>> based upon them).
>> 
>> -- Dave VK2KFU
> 
> So what is the alternative?  I've been using it for years with
> a pretty static setup to route incoming mail to different places.
> I need *something* to do what it does.

Sieve[0] is what I've seen suggested a bit.  I've skimmed over the docs 
some, but haven't invested the time in figuring out how much effort 
would be involved in replacing procmail with it; at first glance, it 
does not seem to be a drop-in replacement.
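
For comparison, the common procmail case of "match a header, file into a
folder" looks like this in Sieve (RFC 5228); the list address and folder
name are placeholders:

```
require ["fileinto"];

# File list traffic into its own folder; anything that does not match
# falls through to the implicit "keep" (normal inbox delivery).
if address :is ["to", "cc"] "list@example.org" {
    fileinto "lists.example";
}
```

The shape is similar, but conditions, actions, and extensions differ
enough that a large .procmailrc would need a real rewrite.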

-- 
Michael Parson
Pflugerville, TX
KF5LGQ

[0] http://sieve.info/


From arnold at skeeve.com  Mon Jun 25 23:41:56 2018
From: arnold at skeeve.com (arnold at skeeve.com)
Date: Mon, 25 Jun 2018 07:41:56 -0600
Subject: [TUHS] off-topic list
In-Reply-To: <d50d5ada2d25cf5c536f205037e1e942@bl.org>
References: <CMM.0.96.0.1529621068.beebe@gamma.math.utah.edu>
 <f1a2f732-400f-8044-1c90-9a8500a17d15@spamtrap.tnetconsulting.net>
 <20180621234706.GA23316@minnie.tuhs.org>
 <alpine.BSF.2.21.999.1806221416210.68981@aneurin.horsfall.org>
 <20180622142846.GS21272@mcvoy.com>
 <DE56C21F-CF7D-4B44-BC43-0C27CBD6DD7A@tfeb.org>
 <20180622145402.GT21272@mcvoy.com>
 <20180622151751.BEK9i%steffen@sdaoden.eu>
 <b6ef82de-739a-ed8e-0e91-3abfa2fb5f07@spamtrap.tnetconsulting.net>
 <20180622192505.mfig_%steffen@sdaoden.eu>
 <89e5ae21-ccc0-5c84-837b-120a1a7d9e26@spamtrap.tnetconsulting.net>
 <20180623144959.M9byU%steffen@sdaoden.eu>
 <ce6f617c-cf8e-63c6-8186-27e09c78020c@spamtrap.tnetconsulting.net>
 <alpine.NEB.2.20.1806231615130.17586@neener.bl.org>
 <alpine.BSF.2.21.999.1806251240100.68981@aneurin.horsfall.org>
 <201806250615.w5P6FgHA018820@freefriends.org>
 <d50d5ada2d25cf5c536f205037e1e942@bl.org>
Message-ID: <201806251341.w5PDfuj9007252@freefriends.org>

Michael Parson <mparson at bl.org> wrote:

> On 2018-06-25 01:15, arnold at skeeve.com wrote:
> > So what is the alternative?  I've been using it [procmail] for years with
> > a pretty static setup to route incoming mail to different places.
> > I need *something* to do what it does.
>
> Sieve[0] is what I've seen suggested a bit.  I've skimmed over the docs 
> some, but haven't invested the time in figuring out how much effort 
> would be involved in replacing procmail with it, at first glance, it 
> does not seem to be a drop-in replacement.
>
> -- 
> Michael Parson
> Pflugerville, TX
> KF5LGQ
>
> [0] http://sieve.info/

Thanks for the answer.  This looks medium complicated.

It's starting to feel like I could do this myself in my favorite
programming language (gawk) simply by writing a library to parse 
the headers and then writing awk code to check things and send emails
to the right place.
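
That approach can be sketched in a few lines (everything here is
invented for illustration): an awk body reads the header block from
stdin and prints the name of a destination folder, which a wrapper
script could hand to the actual delivery step.

```shell
# Sketch: classify a message by its To:/Cc: headers and print a folder name.
# The sample message and the "tuhs" rule are purely illustrative.
printf 'From: dave@example.org\nTo: tuhs@example.org\nSubject: hi\n\nbody\n' |
awk '
    /^$/ { exit }                                  # blank line ends the headers
    tolower($0) ~ /^(to|cc):.*tuhs@/ { print "tuhs"; found = 1; exit }
    END { if (!found) print "inbox" }              # default destination
'
# prints: tuhs
```

A real version would also do locking and error handling, which is much
of what procmail actually provides beyond the matching itself.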

If I didn't already have more side projects than I can currently handle ...

Thanks,

Arnold


From arnold at skeeve.com  Mon Jun 25 23:56:04 2018
From: arnold at skeeve.com (arnold at skeeve.com)
Date: Mon, 25 Jun 2018 07:56:04 -0600
Subject: [TUHS] off-topic list
In-Reply-To: <201806251341.w5PDfuj9007252@freefriends.org>
References: <CMM.0.96.0.1529621068.beebe@gamma.math.utah.edu>
 <f1a2f732-400f-8044-1c90-9a8500a17d15@spamtrap.tnetconsulting.net>
 <20180621234706.GA23316@minnie.tuhs.org>
 <alpine.BSF.2.21.999.1806221416210.68981@aneurin.horsfall.org>
 <20180622142846.GS21272@mcvoy.com>
 <DE56C21F-CF7D-4B44-BC43-0C27CBD6DD7A@tfeb.org>
 <20180622145402.GT21272@mcvoy.com>
 <20180622151751.BEK9i%steffen@sdaoden.eu>
 <b6ef82de-739a-ed8e-0e91-3abfa2fb5f07@spamtrap.tnetconsulting.net>
 <20180622192505.mfig_%steffen@sdaoden.eu>
 <89e5ae21-ccc0-5c84-837b-120a1a7d9e26@spamtrap.tnetconsulting.net>
 <20180623144959.M9byU%steffen@sdaoden.eu>
 <ce6f617c-cf8e-63c6-8186-27e09c78020c@spamtrap.tnetconsulting.net>
 <alpine.NEB.2.20.1806231615130.17586@neener.bl.org>
 <alpine.BSF.2.21.999.1806251240100.68981@aneurin.horsfall.org>
 <201806250615.w5P6FgHA018820@freefriends.org>
 <d50d5ada2d25cf5c536f205037e1e942@bl.org>
 <201806251341.w5PDfuj9007252@freefriends.org>
Message-ID: <201806251356.w5PDu4Wq009050@freefriends.org>

> > [0] http://sieve.info/

This looks like it might be useful:

https://github.com/tonioo/sievelib

It's a client-side library and would need a wrapper to make
it standalone.

Thanks,

Arnold


From clemc at ccc.com  Tue Jun 26 00:18:29 2018
From: clemc at ccc.com (Clem Cole)
Date: Mon, 25 Jun 2018 10:18:29 -0400
Subject: [TUHS] off-topic list
In-Reply-To: <09ee8833-c8c0-8911-751c-906b737209b7@spamtrap.tnetconsulting.net>
References: <CMM.0.96.0.1529621068.beebe@gamma.math.utah.edu>
 <f1a2f732-400f-8044-1c90-9a8500a17d15@spamtrap.tnetconsulting.net>
 <20180621234706.GA23316@minnie.tuhs.org>
 <alpine.BSF.2.21.999.1806221416210.68981@aneurin.horsfall.org>
 <20180622142846.GS21272@mcvoy.com>
 <DE56C21F-CF7D-4B44-BC43-0C27CBD6DD7A@tfeb.org>
 <20180622145402.GT21272@mcvoy.com> <20180622151751.BEK9i%steffen@sdaoden.eu>
 <b6ef82de-739a-ed8e-0e91-3abfa2fb5f07@spamtrap.tnetconsulting.net>
 <20180622192505.mfig_%steffen@sdaoden.eu>
 <89e5ae21-ccc0-5c84-837b-120a1a7d9e26@spamtrap.tnetconsulting.net>
 <20180623144959.M9byU%steffen@sdaoden.eu>
 <ce6f617c-cf8e-63c6-8186-27e09c78020c@spamtrap.tnetconsulting.net>
 <20180623223851.LcBjy%steffen@sdaoden.eu>
 <09ee8833-c8c0-8911-751c-906b737209b7@spamtrap.tnetconsulting.net>
Message-ID: <CAC20D2MvgZN1P5wcZ4g_Gab6j5KPgKg+yhieU-_OS5-20xtjGA@mail.gmail.com>

On Sat, Jun 23, 2018 at 8:18 PM, Grant Taylor via TUHS <tuhs at minnie.tuhs.org
> wrote:
>
>
> What little I know about the MH type mail stores and associated utilities
> are indeed quite powerful.

Yep, their power and flaw all rolled together, actually.  Until I had
Pachyderm on Tru64/Alpha with AltaVista under the covers (which was gmail's
predecessor), I ran a flavor of MH from the time Bruce (Borden - MH's
author) first released it for the 6th edition on the Rand USENIX tape -
I'm going to guess for about 25 years.  Although for the last 8-10 years,
I ran a curses-based post-processor user interface called 'HM' (also from
Rand) that split the screen in two.




>   I think they operate under the premise that each message is it's own
> file

Correct - which is great, other than on small systems it chews up inodes
and disk space, which for v6 and v7 could be a problem.  But it means
everything was always ASCII and easy to grok, and any tool from an editor
to a macro processor could be inserted.  It also meant that, unlike AT&T
"mail", the division between the MUA and the MTA was first drawn by the
Rand code and understood in Unix, and used in the original UofI ArpaNet
code (before Kurt's delivermail [sendmail's predecessor], which was part
of UCB Mail, or the MIT mailer ArpaNet hacks that would come later).
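
As a trivial illustration of that point (paths and headers invented for
the example): because each message is an ordinary file, stock Unix tools
operate on the mail store directly, with no mail-specific tooling.

```shell
# Build a two-message toy MH-style folder, then use plain grep to find
# the message files from a given sender.
dir=$(mktemp -d)
printf 'From: dmr@research\nSubject: C\n\ntext\n'     > "$dir/1"
printf 'From: ken@research\nSubject: pipes\n\ntext\n' > "$dir/2"
grep -l '^From: ken' "$dir"/*    # lists the file(s) holding matching messages
rm -r "$dir"
```

The same trick works with sed, sort, an editor, or anything else that
reads plain files.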

BTW: I may have the original Rand MH release somewhere.  We ran it at
Tektronix on V6 on the 11/60 and then V7 on the TekLabs 11/70, as I brought
it with me.  We hacked the MTA portion to talk SMTP under Bruce's UNET
code to our VMS/SMTPD at some point.



> and that you work in something akin to a shell if not your actual OS shell.

Exactly.  Your shell, or emacs if you so desired - whatever your native
system interface was.  HM took the idea a little further to make things
more screen oriented, and later versions of MH picked up some of the HM
stuff, I'm told; but I had started to use Pachyderm - which was search
based.



>   I think the MH commands are quite literally unix command that can be
> called from the unix shell.  I think this is in the spirit of simply
> enhancing the shell to seem as if it has email abilities via the MH
> commands.  Use any traditional unix text processing utilities you want to
> manipulate email.

Absolutely.  I do find myself pulling things out of gmail sometimes so I
can do Unix tricks to inbound mail that gmail will not let me do.  And
when I want to do anything really automated on the send side, I have MH
installed and it calls the local MTA.  But I admit, the indexing that
search gives you is incredibly powerful for day-to-day use and I could
not go back to MH.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://minnie.tuhs.org/pipermail/tuhs/attachments/20180625/1f6e02af/attachment.html>

From ats at offog.org  Mon Jun 25 23:59:56 2018
From: ats at offog.org (Adam Sampson)
Date: Mon, 25 Jun 2018 14:59:56 +0100
Subject: [TUHS] off-topic list
In-Reply-To: <201806250615.w5P6FgHA018820@freefriends.org> (arnold's message
 of "Mon, 25 Jun 2018 00:15:42 -0600")
References: <CMM.0.96.0.1529621068.beebe@gamma.math.utah.edu>
 <f1a2f732-400f-8044-1c90-9a8500a17d15@spamtrap.tnetconsulting.net>
 <20180621234706.GA23316@minnie.tuhs.org>
 <alpine.BSF.2.21.999.1806221416210.68981@aneurin.horsfall.org>
 <20180622142846.GS21272@mcvoy.com>
 <DE56C21F-CF7D-4B44-BC43-0C27CBD6DD7A@tfeb.org>
 <20180622145402.GT21272@mcvoy.com>
 <20180622151751.BEK9i%steffen@sdaoden.eu>
 <b6ef82de-739a-ed8e-0e91-3abfa2fb5f07@spamtrap.tnetconsulting.net>
 <20180622192505.mfig_%steffen@sdaoden.eu>
 <89e5ae21-ccc0-5c84-837b-120a1a7d9e26@spamtrap.tnetconsulting.net>
 <20180623144959.M9byU%steffen@sdaoden.eu>
 <ce6f617c-cf8e-63c6-8186-27e09c78020c@spamtrap.tnetconsulting.net>
 <alpine.NEB.2.20.1806231615130.17586@neener.bl.org>
 <alpine.BSF.2.21.999.1806251240100.68981@aneurin.horsfall.org>
 <201806250615.w5P6FgHA018820@freefriends.org>
Message-ID: <y2aefgvhspv.fsf@offog.org>

arnold at skeeve.com writes:

> So what is the alternative?  I've been using it for years with
> a pretty static setup to route incoming mail to different places.
> I need *something* to do what it does.

I use maildrop from the Courier project, which does pretty much the same
job as procmail, with a C-ish filtering language:

http://www.courier-mta.org/maildrop/

I switched over from procmail to maildrop in 2004 and it's worked nicely
for me. There are also a couple of different filtering languages that
Exim understands, if that's your MTA of choice.
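
For comparison with procmail's recipe style, the same "file list mail
into a folder" rule might look roughly like this in maildrop's filtering
language (the address and Maildir path are placeholders):

```
# ~/.mailfilter - deliver list traffic to a Maildir folder; anything
# that falls off the end of the file gets normal default delivery.
if (/^To:.*list@example\.org/)
    to "$HOME/Maildir/.lists.example/"
```

The trailing "/" on the destination tells maildrop to deliver in
Maildir format rather than mbox.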

-- 
Adam Sampson <ats at offog.org>                         <http://offog.org/>


From jnc at mercury.lcs.mit.edu  Tue Jun 26 00:44:54 2018
From: jnc at mercury.lcs.mit.edu (Noel Chiappa)
Date: Mon, 25 Jun 2018 10:44:54 -0400 (EDT)
Subject: [TUHS] off-topic list
Message-ID: <20180625144454.EAB7918C082@mercury.lcs.mit.edu>

    > From: Clem Cole

    > I may have the the original Rand MH release somewhere.

There's this:

  https://minnie.tuhs.org//cgi-bin/utree.pl?file=SRI-NOSC/mh

Not sure how modified from the formal release this is, it may be pretty much
the original (it's certainly quite old - pre-TCP/IP).

    Noel


From gtaylor at tnetconsulting.net  Tue Jun 26 01:05:18 2018
From: gtaylor at tnetconsulting.net (Grant Taylor)
Date: Mon, 25 Jun 2018 09:05:18 -0600
Subject: [TUHS] off-topic list
In-Reply-To: <201806250615.w5P6FgHA018820@freefriends.org>
References: <CMM.0.96.0.1529621068.beebe@gamma.math.utah.edu>
 <f1a2f732-400f-8044-1c90-9a8500a17d15@spamtrap.tnetconsulting.net>
 <20180621234706.GA23316@minnie.tuhs.org>
 <alpine.BSF.2.21.999.1806221416210.68981@aneurin.horsfall.org>
 <20180622142846.GS21272@mcvoy.com>
 <DE56C21F-CF7D-4B44-BC43-0C27CBD6DD7A@tfeb.org>
 <20180622145402.GT21272@mcvoy.com> <20180622151751.BEK9i%steffen@sdaoden.eu>
 <b6ef82de-739a-ed8e-0e91-3abfa2fb5f07@spamtrap.tnetconsulting.net>
 <20180622192505.mfig_%steffen@sdaoden.eu>
 <89e5ae21-ccc0-5c84-837b-120a1a7d9e26@spamtrap.tnetconsulting.net>
 <20180623144959.M9byU%steffen@sdaoden.eu>
 <ce6f617c-cf8e-63c6-8186-27e09c78020c@spamtrap.tnetconsulting.net>
 <alpine.NEB.2.20.1806231615130.17586@neener.bl.org>
 <alpine.BSF.2.21.999.1806251240100.68981@aneurin.horsfall.org>
 <201806250615.w5P6FgHA018820@freefriends.org>
Message-ID: <53713b13-0f08-5ff6-87bb-730e6b58c24e@spamtrap.tnetconsulting.net>

On 06/25/2018 12:15 AM, arnold at skeeve.com wrote:
> So what is the alternative?  I've been using it for years with a 
> pretty static setup to route incoming mail to different places.  I need 
> *something* to do what it does.

As others have pointed out, Sieve and Maildrop are a couple of options.

I've looked at both of them a few times, somewhat in earnest when I last 
built my mail server, but didn't feel that they would be drop-in 
replacements for my existing LDA needs.

Perhaps my limitations are of my own making.  I use Maildir and want an 
LDA that is compatible with that.  At (currently) 453 procmail recipes, 
I'd like something that is closer to a drop-in replacement.  I will 
rewrite things if I must, but I'd rather not if it's not required.

I think one of the reasons the LDA space is languishing is that many 
mail servers seem to be migrating to mail stores that are more 
centralized / virtualized under one unix account, and those come with 
their own purpose-built LDA.

The other option that Arnold mentioned, writing your own LDA, seems 
somewhat unpleasant to me.  Can it be done?  Absolutely.  I'm afraid 
that anything I would produce would likely share security problems 
similar to those Procmail may very well have.  Which leaves me 
wondering: am I better off using a tool with a questionable security 
posture that others are looking at and trying to abuse, or my own tool 
with a completely unknown security posture that nobody else is looking 
at?  Thus I choose the lesser of the evils and continue to use Procmail.

Thanks to Michael and Adam for mentioning Sieve and Maildrop 
(respectively) while I was too lazy to comment on my phone in bed.



-- 
Grant. . . .
unix || die

-------------- next part --------------
A non-text attachment was scrubbed...
Name: smime.p7s
Type: application/pkcs7-signature
Size: 3982 bytes
Desc: S/MIME Cryptographic Signature
URL: <http://minnie.tuhs.org/pipermail/tuhs/attachments/20180625/2d1d316a/attachment.bin>

From clemc at ccc.com  Tue Jun 26 01:44:19 2018
From: clemc at ccc.com (Clem Cole)
Date: Mon, 25 Jun 2018 11:44:19 -0400
Subject: [TUHS] off-topic list
In-Reply-To: <20180625144454.EAB7918C082@mercury.lcs.mit.edu>
References: <20180625144454.EAB7918C082@mercury.lcs.mit.edu>
Message-ID: <CAC20D2MHSomazArsH4CnJAUqfBOV5N8bBzgT4RgL0Gf81UP0Xw@mail.gmail.com>

Noel, I did a quick look at that code.  That is some of it for sure -
it looks like the parts in /usr/bin and maybe some of /usr/lib (MH was
scattered all over the file system in the traditional UNIX manner -- lots
of small programs, each doing one job only, each fitting in the PDP-11's
small address space just fine).  The docs are missing and the MTA part is
not there (which I think I remember was called 'submit' - but I could be
very wrong on that).  It's the second version, because that code is using
stdio; the first version used a Rand I/O library, if I remember right (not
the portable C library).

Clem

PS: For all you younger readers of this list, you need to remember that
for early C, I/O was specifically not defined as part of the language (in
some sense it still is not), so many early programs had their own
libraries, and it's a good way to date things.  If the code is using
stdio, it is actually more 'modern' in the life of the PDP-11 [post
'typesetter C'].  BTW: I can say that, in the mid 1970s, I personally
found the lack of defined I/O confusing when I was learning the language
(remember, Dennis had not yet written the 'White Book', which was really
part of V7).  It was one of my bitches about C compared to BLISS, which
was what I was coming from and was a very rich system at CMU - while I/O
in C was really a pain because every program did it differently -
everybody wrote their own routines - which I thought was silly.  Soon
thereafter the 'portable C library' appeared, and then, with typesetter
C, stdio.


On Mon, Jun 25, 2018 at 10:44 AM, Noel Chiappa <jnc at mercury.lcs.mit.edu>
wrote:

>     > From: Clem Cole
>
>     > I may have the the original Rand MH release somewhere.
>
> There's this:
>
>   https://minnie.tuhs.org//cgi-bin/utree.pl?file=SRI-NOSC/mh
>
> Not sure how modified from the formal release this is, it may be pretty
> much
> the original (it's certainly quite old - pre-TCP/IP).
>
>     Noel
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://minnie.tuhs.org/pipermail/tuhs/attachments/20180625/d3ecee23/attachment.html>

From jon at fourwinds.com  Tue Jun 26 01:47:21 2018
From: jon at fourwinds.com (Jon Steinhart)
Date: Mon, 25 Jun 2018 08:47:21 -0700
Subject: [TUHS] off-topic list [ really mh ]
In-Reply-To: <CAC20D2MHSomazArsH4CnJAUqfBOV5N8bBzgT4RgL0Gf81UP0Xw@mail.gmail.com>
References: <20180625144454.EAB7918C082@mercury.lcs.mit.edu>
 <CAC20D2MHSomazArsH4CnJAUqfBOV5N8bBzgT4RgL0Gf81UP0Xw@mail.gmail.com>
Message-ID: <201806251547.w5PFlLJ2003448@darkstar.fourwinds.com>

Clem Cole writes:
>
> Noel, I did a quick look at that code.   That is some of it for sure -
> looks like the parts in /usr/bin and maybe some of /usr/lib (MH was scatter
> all of the file system in traditional UNIX manner -- lots of small programs
> - each to do one job only - each fit in a small address PDP-11 just fine).
> The docs are missing and the MTA part is not there (which I think I
> remember was called 'submit' - but I could be very wrong on that).   It's
> the second version because that code is using stdio, the first version used
> a Rand IO library, if I remember right (not the portable C library).
>
> Clem

BTW, I would think that the original code is somewhere in the nmh archives too.


From steffen at sdaoden.eu  Tue Jun 26 01:51:03 2018
From: steffen at sdaoden.eu (Steffen Nurpmeso)
Date: Mon, 25 Jun 2018 17:51:03 +0200
Subject: [TUHS] off-topic list
In-Reply-To: <09ee8833-c8c0-8911-751c-906b737209b7@spamtrap.tnetconsulting.net>
References: <CMM.0.96.0.1529621068.beebe@gamma.math.utah.edu>
 <f1a2f732-400f-8044-1c90-9a8500a17d15@spamtrap.tnetconsulting.net>
 <20180621234706.GA23316@minnie.tuhs.org>
 <alpine.BSF.2.21.999.1806221416210.68981@aneurin.horsfall.org>
 <20180622142846.GS21272@mcvoy.com>
 <DE56C21F-CF7D-4B44-BC43-0C27CBD6DD7A@tfeb.org>
 <20180622145402.GT21272@mcvoy.com> <20180622151751.BEK9i%steffen@sdaoden.eu>
 <b6ef82de-739a-ed8e-0e91-3abfa2fb5f07@spamtrap.tnetconsulting.net>
 <20180622192505.mfig_%steffen@sdaoden.eu>
 <89e5ae21-ccc0-5c84-837b-120a1a7d9e26@spamtrap.tnetconsulting.net>
 <20180623144959.M9byU%steffen@sdaoden.eu>
 <ce6f617c-cf8e-63c6-8186-27e09c78020c@spamtrap.tnetconsulting.net>
 <20180623223851.LcBjy%steffen@sdaoden.eu>
 <09ee8833-c8c0-8911-751c-906b737209b7@spamtrap.tnetconsulting.net>
Message-ID: <20180625155103.6iH7s%steffen@sdaoden.eu>

Grant Taylor via TUHS wrote in <09ee8833-c8c0-8911-751c-906b737209b7 at spa\
mtrap.tnetconsulting.net>:
 |On 06/23/2018 04:38 PM, Steffen Nurpmeso wrote:
 |> Absolutely true.  And hoping that, different to web browsers, no let me 
 |> call it pseudo feature race is started that results in less diversity 
 |> instead of anything else.
 |
 |I'm not sure I follow.  I feel like we do have the choice of MUAs or web 
 |browsers.  Sadly some choices are lacking compared to other choices. 
 |IMHO the maturity of some choices and lack there of in other choices 
 |does not mean that we don't have choices.

Interesting.  I do not see a real choice for me.  Netsurf may be
one, but it could not do Javascript when i last looked, so it will
not work out for many sites, and increasingly so.  Just an hour ago
i had a fight with Chromium on my "secure box" i use for the stuff
which knows about bank accounts and passwords in general, and
(beside the fact that it crashes when trying to prepare PDF
printouts) it is just unusably slow.  I fail to understand (well,
actually i fail to understand a lot of things, but) why you need
a GPU process for a 2-D HTML page.  And then this:

  libGL error: unable to load driver: swrast_dri.so
  libGL error: failed to load driver: swrast
  [5238:5251:0625/143535.184020:ERROR:bus.cc(394)] Failed to connect to the bus: Failed to connect to socket /var/run/dbus/system_bus_socket: No such file or directory
  [5238:5279:0625/143536.401172:ERROR:bus.cc(394)] Failed to connect to the bus: Could not parse server address: Unknown address type (examples of valid types are "tcp" and on UNIX "unix")
  libGL error: unable to load driver: swrast_dri.so
  libGL error: failed to load driver: swrast
  [5293:5293:0625/143541.299106:ERROR:gl_implementation.cc(292)] Failed to load /usr/lib/chromium/swiftshader/libGLESv2.so: Error loading shared library /usr/lib/chromium/swiftshader/libGLESv2.so: No such file or directory
  [5293:5293:0625/143541.404733:ERROR:viz_main_impl.cc(196)] Exiting GPU process due to errors during initialization
[..repeating..]
  [5238:5269:0625/143544.747819:ERROR:browser_gpu_channel_host_factory.cc(121)] Failed to launch GPU process.
  [5238:5238:0625/143544.749814:ERROR:gpu_process_transport_factory.cc(1009)] Lost UI shared context.

I mean, it is not there, ok?
I can hardly imagine how hard it is to write a modern web browser,
with the highly integrative DOM tree, CSS and Javascript and such;
and the devil is in the details anyway.
I realize that Chromium does not seem to offer options for Cookies
and such - i have a different, elder browser for that.
All of that is off-topic anyway, but i do not know why you say we
have options.

 |> If there is the freedom for a decision.  That is how it goes, yes.  For 
 ..
 |> But it is also nice to see that there are standards which were not fully 
 |> thought through and required backward incompatible hacks to fill \
 |> the gaps.
 |
 |Why is that a "nice" thing?

Good question.  Then again, young men (and women) need to have
a chance to do anything at all, practically speaking.  For
example, we see many hundreds of new RFCs per year.  That is
more than appeared in roughly the first two decades all in all.
And all is getting better.

 |> Others like DNS are pretty perfect and scale fantastic.  Thus.
 |
 |Yet I frequently see DNS problems for various reasons.  Not the least of 
 |which is that many clients do not gracefully fall back to the secondary 
 |DNS server when the primary is unavailable.

Really?  Not that i know of.  Resolvers should be capable of
providing quality of service if multiple name servers are known,
i would say.  This is even in RFC 1034 as i see it, SLIST and
SBELT, where the latter is filled in from the "nameserver" entries
of /etc/resolv.conf; it should have multiple of those, then.  (Or
point to localhost and run a true local resolver or something like
dnsmasq.)
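A minimal sketch of that fallback idea, with placeholder addresses (the file name and addresses here are made up for illustration; a stub resolver builds its SBELT from such "nameserver" lines and tries the next server on timeout):

```shell
# Hypothetical resolv.conf contents: two servers the resolver can try
# in order, plus shorter timeouts so the fallback actually happens.
printf '%s\n' \
    'nameserver 192.0.2.1' \
    'nameserver 192.0.2.2' \
    'options timeout:2 attempts:2' > resolv.conf.example

# Count the entries the resolver would have to fall back across:
grep -c '^nameserver' resolv.conf.example
```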

I do see DNS failures via Wifi but that is not the fault of DNS,
but of the provider i use.

P.S.: actually there are only three things i have ever hated about
DNS, and i came to it in 2004 with EDNS etc. already all around.
The first is that backward compatibility was chosen for domain
names, and therefore we gained IDNA, which is a terribly
complicated brainfuck thing that actually caused
incompatibilities, but these were then waved through and deemed
ok.  That is absurd.

(If UTF-8 is too lengthy, i would have used UTF-7 for this; the
average octet/character ratio is still good enough for most things
on the internet.  An EDNS bit could have been used for extended
domain name/label lengths, and if that had been done 25 years ago
we would have been fine in practice.  Until then, registrars and
administrators would have had to decide whether they wanted to use
extended names or not.  And labels of 25 characters are not common
even today, nowhere i have ever looked.)

And the third is DNSSEC, whose standard i read and to which i said
"no".  Just last year, or the year before that, we finally got DNS
via TCP/TLS and DNS via DTLS, that is, normal transport security!
Twenty years too late, but i really had good days when i saw those
standards flying by!  Now all we need are zone administrators
who publish certificates via DNS, and DTLS and TCP/TLS consumers
which can integrate those into their own local pool (for at least
their runtime).

 |> Ah, it has become a real pain.  It is ever so astounding how much HTML5 
 |> specifics, scripting and third party analytics can be involved for \
 |> a page 
 |> that would have looked better twenty years ago, or say whenever CSS 2.0 
 |> came around.
 |
 |I'm going to disagree with you there.  IMHO the standard is completely 
 |separate from what people do with it.

They used <object> back in the 90s; an equivalent should have
made it into the standard back then.  I mean, this direction was
clear, right?  Maybe then the right people would still have more
influence.  Not that it would matter; all those many people
from the industry will always sail their own regatta and use what
is hip.

 |> Today almost each and every commercial site i visit is in practice 
 |> a denial of service attack.  For me and my little box at least, and 
 |> without gaining any benefit from that!
 |
 |I believe those webmasters have made MANY TERRIBLE choices and have 
 |ended up with the bloat that they now have.  -  I do not blame the 
 |technology.  I blame what the webmasters have done with the technology.
 |
 |Too many people do not know what they are doing and load "yet another 
 |module" to do something that they can already do with what's already 
 |loaded on their page.  But they don't know that.  They just glue things 
 |together until they work.  Even if that means that they are loading the 
 |same thing multiple times because multiple of their 3rd party components 
 |loads a common module themselves.  This is particularly pernicious with 
 |JavaScript.

Absolutely.  And i am watching the car industry, as it presents
itself in Germany, pretty closely, and they are all a nuisance
in how they seem to track each other and step into each other's
footsteps.  We had those all-script-based slide effects, and they
all need high definition images and need to load them all at once,
non-progressively.  It becomes absurd if you need to download 45
megabytes of data, and >43 of those are an image of a car in
nature with a very clean model wearing an axe.  This is "cool"
only if you have a modern environment and the data locally
available.  In my humble opinion.

  ...
 |> I actually have no idea of nmh, but i for one think the sentence of 
 |> the old BSD Mail manual stating it is an "intelligent mail processing 
 |> system, which has a command syntax reminiscent of ed(1) with lines 
 |> replaced by messages" has always been a bit excessive.  And the fork i 
 |> maintain additionally said it "is also usable as a mail batch language, 
 |> both for sending and receiving mail", adding onto that.
 |
 |What little I know about the MH type mail stores and associated 
 |utilities are indeed quite powerful.  I think they operate under the 
 |premise that each message is its own file and that you work in 
 |something akin to a shell if not your actual OS shell.  I think the MH 
 |commands are quite literally unix commands that can be called from the 
 |unix shell.  I think this is in the spirit of simply enhancing the shell 
 |to seem as if it has email abilities via the MH commands.  Use any 
 |traditional unix text processing utilities you want to manipulate email.
 |
 |MH has always been attractive to me, but I've never used it myself.

I actually hate the concept very, very much ^_^, for me it has
similarities with brainfuck.  I could not use it.

 |> You say it.  In the research Unix nupas mail (as i know it from Plan9) 
 |> all those things could have been done with a shell script and standard 
 |> tools like grep(1) and such.
 |
 |I do say that and I do use grep, sed, awk, formail, procmail, cp, mv, 
 |and any number of traditional unix file / text manipulation utilities on 
 |my email weekly.  I do this both with the Maildir (which is quite 
 |similar to MH) on my mail server and to the Thunderbird message store 
 |that is itself a variant of Maildir with a file per message in a 
 |standard directory structure.
 |
 |> Even Thunderbird would simply be a maybe even little nice graphical 
 |> application for display purposes.
 |
 |The way that I use Thunderbird, that's exactly what it is.  A friendly 
 |and convenient GUI front end to access my email.
 |
 |> The actual understanding of storage format and email standards would 
 |> lie solely within the provider of the file system.
 |
 |The emails are stored in discrete files that themselves are effectively 
 |mbox files with one message therein.
 |
 |> Now you use several programs which all ship with all the knowledge.
 |
 |I suppose if you count grepping for a line in a text file as knowledge of 
 |the format, okay.
 |
 |egrep "^Subject: " message.txt
 |
 |There's nothing special about that.  It's a text file with a line that 
 |looks like this:
 |
 |Subject: Re: [TUHS] off-topic list

Except that this will only work for all-English text, as otherwise
character sets come into play: text may be in a different
character set, mail standards may impose a content-transfer
encoding, and then what you are looking at is actually a database,
not the data as such.
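A concrete sketch of that point, with a made-up header (an RFC 2047 encoded-word in its base64 flavour); the raw line that grep sees is not readable text until it is decoded:

```shell
# A non-ASCII subject typically arrives as an encoded-word like this
# (the content here is fabricated for illustration):
printf 'Subject: =?UTF-8?B?SGVsbG8gd29ybGQ=?=\n' > message.txt

grep '^Subject: ' message.txt          # still encoded, a "database" entry

# Strip the encoded-word wrapper and decode the base64 payload:
grep '^Subject: ' message.txt |
    sed 's/^Subject: =?UTF-8?B?\(.*\)?=$/\1/' |
    base64 -d                          # -> Hello world
```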

This is what i find so impressive about that Plan9 approach, where
the individual subparts of the message are available as files in
the filesystem, subjects etc. as such, decoded and available as
normal files.  I think this really is .. impressive.

 |> I for one just want my thing to be easy and reliable controllable via 
 |> a shell script.
 |
 |That's a laudable goal.  I think MH is very conducive to doing that.

Maybe.

 |> You could replace procmail (which is i think perl and needs quite some 
 |> perl modules) with a monolithic possibly statically linked C program.
 |
 |I'm about 95% certain that procmail is its own monolithic C program. 
 |I've never heard of any reference to Perl in association with procmail. 
 |Are you perhaps thinking of a different local delivery agent?

Oh really?  Then likely so, yes.  I have never used this.

 |> Then.  With full error checking etc.  This is a long road ahead, for 
 |> my thing.
 |
 |Good luck to you.

Thanks, eh, thanks.  Time will bring.. or not.

 |> So ok, it does not, actually.  It chooses your "Grant Taylor via TUHS" 
 |> which ships with the TUHS address, so one may even see this as an 
 |> improvement to DMARC-less list replies, which would go to TUHS, with or 
 |> without the "The Unix Heritage Society".
 |
 |Please understand, that's not how I send the emails.  I send them with 
 |my name and my email address.  The TUHS mailing list modifies them.
 |
 |Aside:  I support the modification that it is making.

It is of course no longer the email that left you.  It is not
just that headers are added to record the delivery path.  I do
have a bad feeling about these, but technically i do not seem to
have an opinion.

 |> I am maybe irritated by the 'dkim=fail reason="signature verification 
 |> failed"' your messages produce.  It would not be good to filter out 
 |> failing DKIMs, at least on TUHS.
 |
 |Okay.  That is /an/ issue.  But I believe it's not /my/ issue to solve.
 |
 |My server DKIM signs messages that it sends out.  From everything that 
 |I've seen and tested (and I actively look for problems) the DKIM 
 |signatures are valid and perfectly fine.
 |
 |That being said, the TUHS mailing list modifies message in the following 
 |ways:
 |
 |1)  Modifies the From: when the sending domain uses DMARC.
 |2)  Modifies the Subject to prepend "[TUHS] ".
 |3)  Modifies the body to append a footer.
 |
 |All three of these actions modify the data that receiving DKIM filters 
 |calculate hashes based on.  Since the data changed, obviously the hash 
 |will be different.
 |
 |I do not fault TUHS for this.
 |
 |But I do wish that TUHS stripped DKIM and associated headers of messages 
 |going into the mailing list.  By doing that, there would be no data to 
 |compare to that wouldn't match.
 |
 |I think it would be even better if TUHS would DKIM sign messages as they 
 |leave the mailing list's mail server.

Well, this adds the burden onto TUHS.  Just like i have said.. but
you know, more and more SMTP servers connect directly via STARTTLS
or TCP/TLS right away.  The TUHS Postfix server does not seem to do
so on the sending side -- you know, i am not an administrator; not
until the 15th of March this year did i realize that my Postfix
did not reiterate all the smtpd_* variables as smtp_*
ones, resulting in my outgoing client connections having an
entirely different configuration than what i provided for what
i thought was "the server".  Then i did, among other things:

  smtpd_tls_security_level = may
 +smtp_tls_security_level = $smtpd_tls_security_level

But if TUHS did, why should it create a DKIM signature?
There is an ongoing effort to ensure SMTP uses TLS all along the
route; i seem to recall i have seen RFCs pass by which accomplish
that.  Or only drafts??  Hmmm.

  ...
 |I don't know if PGP or S/MIME will ever mandate anything about headers 
 |which are structurally outside of their domain.
 |
 |I would like to see an option in MUAs that support encrypted email for 
 |something like the following:
 |
 |    Subject:  (Subject in encrypted body.)
 |
 |Where the encrypted body included a header like the following:
 |
 |    Encrypted-Subject: Re: [TUHS] off-topic list
 |
 |I think that MUAs could then display the subject that was decrypted out 
 |of the encrypted body.

Well, S/MIME does indeed specify this mode of encapsulating the
entire message including the headers, and requires MUAs to
completely ignore the outer envelope in this case.  (With an RFC
discussing problems of this approach.)  The BSD Mail clone
i maintain does not support this yet, along with other important
aspects of S/MIME, like the possibility to "self-encrypt" (so that
the message can be read again, a.k.a. so that the encrypted version
lands on disk in a record, not the plaintext one).  I hope it will
be part of the OpenPGP, actually privacy, rewrite this summer.

--steffen
|
|Der Kragenbaer,                The moon bear,
|der holt sich munter           he cheerfully and one by one
|einen nach dem anderen runter  wa.ks himself off
|(By Robert Gernhardt)


From paul.winalski at gmail.com  Tue Jun 26 02:03:24 2018
From: paul.winalski at gmail.com (Paul Winalski)
Date: Mon, 25 Jun 2018 12:03:24 -0400
Subject: [TUHS] off-topic list
In-Reply-To: <CAC20D2MHSomazArsH4CnJAUqfBOV5N8bBzgT4RgL0Gf81UP0Xw@mail.gmail.com>
References: <20180625144454.EAB7918C082@mercury.lcs.mit.edu>
 <CAC20D2MHSomazArsH4CnJAUqfBOV5N8bBzgT4RgL0Gf81UP0Xw@mail.gmail.com>
Message-ID: <CABH=_VTWhyyAd+HskNM6L-8eTdBTooGB+USTo+fOy3jLDYN9Jg@mail.gmail.com>

On 6/25/18, Clem Cole <clemc at ccc.com> wrote:
>
> BTW: I can say, in the mid 1970s, I personally found the lack of defined I/O
> confusing when I was learning the language and (remember Dennis has not yet
> written the 'White Book' which was really part of V7).   It was one of my
> bitches about C compared to BLISS, which was what I was coming from and was
> a very rich system at CMU - while I/O in C was really a pain because every
> program did it differently - everybody wrote their own routines - which I
> thought was silly.   Soon thereafter the 'portable C library' appeared and
> then with typesetter C, stdio.

???

The BLISS language doesn't have any I/O capability built into the
language (as BASIC, Fortran, COBOL, and PL/I do).  Being intended
as a systems programming language, it expects programmers to
write their own I/O routines using the target OS's I/O services.

-Paul W.


From jnc at mercury.lcs.mit.edu  Tue Jun 26 02:10:16 2018
From: jnc at mercury.lcs.mit.edu (Noel Chiappa)
Date: Mon, 25 Jun 2018 12:10:16 -0400 (EDT)
Subject: [TUHS] off-topic list
Message-ID: <20180625161016.C16BA18C082@mercury.lcs.mit.edu>

    > From: Clem Cole

    > the MTA part is not there

That system was using the MMDF MTA:

  https://minnie.tuhs.org//cgi-bin/utree.pl?file=SRI-NOSC/mmdf

written by David Crocker while he was at UDel (under Farber).

	Noel



From steffen at sdaoden.eu  Tue Jun 26 02:10:52 2018
From: steffen at sdaoden.eu (Steffen Nurpmeso)
Date: Mon, 25 Jun 2018 18:10:52 +0200
Subject: [TUHS] off-topic list
In-Reply-To: <20180624100438.GY10129@h-174-65.A328.priv.bahnhof.se>
References: <DE56C21F-CF7D-4B44-BC43-0C27CBD6DD7A@tfeb.org>
 <20180622145402.GT21272@mcvoy.com> <20180622151751.BEK9i%steffen@sdaoden.eu>
 <b6ef82de-739a-ed8e-0e91-3abfa2fb5f07@spamtrap.tnetconsulting.net>
 <20180622192505.mfig_%steffen@sdaoden.eu>
 <89e5ae21-ccc0-5c84-837b-120a1a7d9e26@spamtrap.tnetconsulting.net>
 <20180623144959.M9byU%steffen@sdaoden.eu>
 <ce6f617c-cf8e-63c6-8186-27e09c78020c@spamtrap.tnetconsulting.net>
 <20180623223851.LcBjy%steffen@sdaoden.eu>
 <09ee8833-c8c0-8911-751c-906b737209b7@spamtrap.tnetconsulting.net>
 <20180624100438.GY10129@h-174-65.A328.priv.bahnhof.se>
Message-ID: <20180625161052.6PXXL%steffen@sdaoden.eu>

Michael Kjörling wrote in <20180624100438.GY10129 at h-174-65.A328.priv.ba\
hnhof.se>:
 |On 23 Jun 2018 18:18 -0600, from tuhs at minnie.tuhs.org (Grant Taylor \
 |via TUHS):
 |>> Now you use several programs which all ship with all the knowledge.
 |> 
 |> I suppose if you count greping for a line in a text file as
 |> knowledge of the format, okay.
 |> 
 |> egrep "^Subject: " message.txt
 |> 
 |> There's nothing special about that.  It's a text file with a line
 |> that looks like this:
 |> 
 |> Subject: Re: [TUHS] off-topic list
 |
 |The problem, of course (and I hope this is keeping this Unixy enough),
 |with that approach is that it won't handle headers split across
 |multiple lines (I'm looking at you, Received:, but you aren't alone),
 |and that it'll match lines in the body of the message as well (such as
 |the "Subject: " line in the body of your message), unless the body
 |happens to be e.g. Base64 encoded which instead complicates searching
 |for non-header material.
 |
 |For sure neither is insurmountable even with standard tools, but it
 |does require a bit more complexity than a simple egrep to properly
 |parse even a single message, let alone a combination of multiple ones
 |(as seen in mbox mailboxes, for example). At that point having
 |specific tools, such as formail, that understand the specific file
 |format does start to make sense...
 |
 |There isn't really much conceptual difference between writing, say,
 |    formail -X Subject: < message.txt
 |and
 |    egrep "^Subject: " message.txt
 |but the way the former handles certain edge cases is definitely better
 |than that of the latter.
 |
 |Make everything as simple as possible, but not simpler. (That goes for
 |web pages, too, by the way.)
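As an aside on the quoted comparison: the edge-case handling formail provides can be sketched with standard tools alone.  The message below is fabricated for illustration; the awk program unfolds RFC 5322 continuation lines (leading whitespace) and stops at the first blank line, so body text is never matched:

```shell
# A made-up message with a folded Received: header and a decoy
# "Subject:" line in the body:
printf 'Received: from a\n\tby b\nSubject: Re: [TUHS] off-topic list\n\nSubject: in body\n' > message.txt

# Unfold headers, stop at the header/body separator, then match:
awk 'NR>1 && /^[^ \t]/ {print line; line=""}
     /^$/ {exit}                    # headers end at first blank line
     {line = line ? line " " substr($0, match($0, /[^ \t]/)) : $0}
     END {if (line) print line}' message.txt |
    grep '^Subject:'                # -> Subject: Re: [TUHS] off-topic list
```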

 |> But I do wish that TUHS stripped DKIM and associated headers of
 |> messages going into the mailing list.  By doing that, there would be
 |> no data to compare to that wouldn't match.
 |> 
 |> I think it would be even better if TUHS would DKIM sign messages as
 |> they leave the mailing list's mail server.
 |
 |I believe the correct way would indeed be to validate, strip and
 |possibly re-sign. That way, everyone would (should) be making correct
 |claims about a message's origin.

DKIM reuses the *SSL key infrastructure, which is good.  (Many eyes
see the code in question.)  It places records in DNS, which is
also good, now that we have DNS over TCP/TLS and (likely) DTLS.
In practice however things may differ, and to me DNS security is
all in all not a given until transport layer security is used
throughout.

I personally still do not like DKIM.  I have opendkim around and
thought about it, but i do not use it.  I would rather wish that
public TLS certificates could also be used in the same way that
public S/MIME certificates or OpenPGP public keys work; then, only
by going to a TLS endpoint securely once, there could be
end-to-end security.  I am not a cryptographer, however.  (I also
have not read the TLS v1.3 standard, which is about to become
reality.)  The thing however is that for DKIM a lonesome user
cannot do anything -- you either need to have your own SMTP
server, or you need to trust your provider.  But our own user
interface is completely detached.  (I mean, at least if no MTA is
used one could do the DKIM stuff, too.)

 |FWIW, SPF presents a similar problem with message forwarding without
 |address rewriting... so it's definitely not just DKIM.
 --End of <20180624100438.GY10129 at h-174-65.A328.priv.bahnhof.se>

--steffen


From jon at fourwinds.com  Tue Jun 26 01:28:52 2018
From: jon at fourwinds.com (Jon Steinhart)
Date: Mon, 25 Jun 2018 08:28:52 -0700
Subject: [TUHS] off-topic list [ really mh ]
In-Reply-To: <CAC20D2MvgZN1P5wcZ4g_Gab6j5KPgKg+yhieU-_OS5-20xtjGA@mail.gmail.com>
References: <CMM.0.96.0.1529621068.beebe@gamma.math.utah.edu>
 <f1a2f732-400f-8044-1c90-9a8500a17d15@spamtrap.tnetconsulting.net>
 <20180621234706.GA23316@minnie.tuhs.org>
 <alpine.BSF.2.21.999.1806221416210.68981@aneurin.horsfall.org>
 <20180622142846.GS21272@mcvoy.com>
 <DE56C21F-CF7D-4B44-BC43-0C27CBD6DD7A@tfeb.org>
 <20180622145402.GT21272@mcvoy.com> <20180622151751.BEK9i%steffen@sdaoden.eu>
 <b6ef82de-739a-ed8e-0e91-3abfa2fb5f07@spamtrap.tnetconsulting.net>
 <20180622192505.mfig_%steffen@sdaoden.eu>
 <89e5ae21-ccc0-5c84-837b-120a1a7d9e26@spamtrap.tnetconsulting.net>
 <20180623144959.M9byU%steffen@sdaoden.eu>
 <ce6f617c-cf8e-63c6-8186-27e09c78020c@spamtrap.tnetconsulting.net>
 <20180623223851.LcBjy%steffen@sdaoden.eu>
 <09ee8833-c8c0-8911-751c-906b737209b7@spamtrap.tnetconsulting.net>
 <CAC20D2MvgZN1P5wcZ4g_Gab6j5KPgKg+yhieU-_OS5-20xtjGA@mail.gmail.com>
Message-ID: <201806251528.w5PFSqaL000557@darkstar.fourwinds.com>

I'm a big fan of mh (nmh now) and a sometimes maintainer.
What I love about it is that it's a separate set of commands as
opposed to an integrated blob.  That means that I can script it.

I also love the one-file-per-message thing, although it's somewhat
less useful with MIME; can't just grep anymore.

One of the features that was added to nmh allows it to invoke external
programs when doing its own message processing.  I use this to maintain
a parallel ElasticSearch database of my mail; just a simple word to
message folder/file thing.  I actually go through the trouble of
decoding many MIME types when building this database.  This gives me
the ability to very quickly locate messages based on their contents.


From steffen at sdaoden.eu  Tue Jun 26 02:26:16 2018
From: steffen at sdaoden.eu (Steffen Nurpmeso)
Date: Mon, 25 Jun 2018 18:26:16 +0200
Subject: [TUHS] mail (Re:  off-topic list
In-Reply-To: <42849F79-D132-4059-8A94-FFF8B141C49E@orthanc.ca>
References: <CMM.0.96.0.1529621068.beebe@gamma.math.utah.edu>
 <f1a2f732-400f-8044-1c90-9a8500a17d15@spamtrap.tnetconsulting.net>
 <20180621234706.GA23316@minnie.tuhs.org>
 <alpine.BSF.2.21.999.1806221416210.68981@aneurin.horsfall.org>
 <20180622142846.GS21272@mcvoy.com>
 <DE56C21F-CF7D-4B44-BC43-0C27CBD6DD7A@tfeb.org>
 <20180622145402.GT21272@mcvoy.com> <20180622151751.BEK9i%steffen@sdaoden.eu>
 <b6ef82de-739a-ed8e-0e91-3abfa2fb5f07@spamtrap.tnetconsulting.net>
 <20180622192505.mfig_%steffen@sdaoden.eu>
 <89e5ae21-ccc0-5c84-837b-120a1a7d9e26@spamtrap.tnetconsulting.net>
 <20180623144959.M9byU%steffen@sdaoden.eu>
 <ce6f617c-cf8e-63c6-8186-27e09c78020c@spamtrap.tnetconsulting.net>
 <20180623223851.LcBjy%steffen@sdaoden.eu>
 <09ee8833-c8c0-8911-751c-906b737209b7@spamtrap.tnetconsulting.net>
 <AE998A8F-3541-41E6-87F3-266340768C26@bitblocks.com>
 <42849F79-D132-4059-8A94-FFF8B141C49E@orthanc.ca>
Message-ID: <20180625162616.7JTz0%steffen@sdaoden.eu>

Lyndon Nerenberg wrote in <42849F79-D132-4059-8A94-FFF8B141C49E at orthanc.ca>:
 |> On Jun 24, 2018, at 5:43 PM, Bakul Shah <bakul at bitblocks.com> wrote:
 |> 
 |> \vague-idea{
 |> Seems to me that we have taken the unix (& plan9) model for
 |> granted and not really explored it at this level much. That
 |> is, can we map structured objects into trees/graphs in an
 |> efficient way such that standard tools can be used on them?
 |> Can we extend standard tools to explore substructures in a
 |> more generic way?
 |>}
 |
 |One of the main reasons I bailed out of nmh development was the disinter\
 |est in trying new and offside ways of addressing the mail store model. \
 | I have been an MH user since the mid-80s, and I still think it is \
 |more functional than any other MUA in use today (including Alpine). \
 | Header voyeurs will note I'm sending this from Mail.app.  As was mentio\
 |ned earlier in this conversation, there is no one MUA, just as there \
 |is no one text editing tool.  I used a half dozen MUAs on a daily basis, \
 |depending on my needs.
 |
 |But email, as a structured subset of text, deserves its own set of \
 |dedicated tools.  formail and procmail are immediate examples.  As \
 |MIME intruded on the traditional ASCII-only UNIX mbox format, grep \
 |and friends became less helpful.  HTML just made it worse.
 |
 |Plan9's upas/fs abstraction of the underlying mailbox helps a lot. \
 | By undoing all the MIME encoding, and translating the character sets \
 |to UTF8, grep and friends work again.  That filesystem abstraction \
 |has so much more potential, too.  E.g. pushing a pgpfs layer in there \
 |to transparently decode PGP content.
 |
 |I really wish there was a way to bind Plan9-style file servers into \
 |the UNIX filename space.  Matt Blaze's CFS is the closest example of \
 |this I have seen.  Yet even it needs superuser privs.  (Yes, there's \
 |FUSE, but it just re-invents the NFS interface with no added benefit.)
 |
 |Meanwhile, I have been hacking on a version of nmh that uses a native \
 |UTF8 message store.  I.e. it undoes all the MIME encoding, and undoes \
 |non-UTF8 charsets.  So grep and friends work, once again.

The SMTPUTF8 standard can do this, and nmh now can, too.  There was
a conference presentation by, i think, a Microsoft employee that was
linked on one of the lists i track.  He also said this: "finally
being able to just look at the mail as it is (again)".  He showed
one in an editor, too.  But the presentation could be seen as the
whale of a lie, as propaganda: there was no traceroute, no
signatures, no attachments.  Cool and so on, but not very realistic.

I do not want to look at the plain source of a mail message in
almost all cases; that is otherwise mostly for development.
Therefore i am having trouble understanding all this.  I mean,
i need a PDF viewer to look into a PDF, [..etc...]; just the same
is true for email.  Maybe it is because i like MBOX a lot: it is
just a database, a single file, that i can easily take and move.
I have never understood why people get excited about Maildir; you
have trees full of files with names which hurt the eyes, and they
miss a From_ line (and are thus not valid MBOX files; i did not
mention that in the other mail).
Despite that, i do much miss having a possibility to look at things
easily.
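A minimal illustration of the MBOX property being praised here: one plain file, with message boundaries marked by "From_" lines that ordinary tools can see (the mailbox below is fabricated for illustration):

```shell
# A tiny two-message mbox file; each message starts with a "From " line:
printf '%s\n' \
    'From a@example.org Mon Jun 25 18:26:16 2018' \
    'Subject: one' '' 'body one' '' \
    'From b@example.org Mon Jun 25 18:27:00 2018' \
    'Subject: two' '' 'body two' > mbox.example

# Counting messages is just counting the From_ separator lines:
grep -c '^From ' mbox.example    # -> 2
```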

--steffen


From steffen at sdaoden.eu  Tue Jun 26 02:41:53 2018
From: steffen at sdaoden.eu (Steffen Nurpmeso)
Date: Mon, 25 Jun 2018 18:41:53 +0200
Subject: [TUHS] off-topic list
In-Reply-To: <20180624131458.6E96518C082@mercury.lcs.mit.edu>
References: <20180624131458.6E96518C082@mercury.lcs.mit.edu>
Message-ID: <20180625164153.8OSfj%steffen@sdaoden.eu>

Noel Chiappa wrote in <20180624131458.6E96518C082 at mercury.lcs.mit.edu>:
 |> On 06/23/2018 04:38 PM, Steffen Nurpmeso wrote:
 |
 |> Others like DNS are pretty perfect and scale fantastic.
 |
 |It's perhaps worth noting that today's DNS is somewhat different from the
 |original; some fairly substantial changes were made early on (although \
 |maybe
 |it was just in the security, I don't quite recall).

No.. not that i know of?

 |(The details escape me at this point, but at one point I did a detailed \
 |study
 |of DNS, and DNS security, for writing the security architecture document \
 |for
 |the resolution system in LISP - the networking one, not the language.)

It is basically still the same as Mockapetris designed it, or it
was like this in 2004..2005 at least.  We have seen many new types
and extensions and clarifications (many early on, after the DNS
RFCs 1035+ were published, for example RFC 1122, "Requirements for
Internet Hosts -- Communication Layers"), like EDNS to extend the
datagram packet size and such, and then luckily someone from the
IETF really waved through transport layer security for DNS, via TCP
and also via DTLS, which made me really happy.  (RFC 8310, and 7858
(TCP) and 8094 (UDP).)  There were a lot of RFCs regarding zone
transfer; i have to admit that i never read those, as i never had
anything to do with the server side of DNS.

But the DNS concept by itself still scales and is unchanged?!?
I would expect that in the future more and more software becomes
capable of following chains of trust from zone to zone upwards, so
that individual zones can use zone-specific TLS certificates,
signed only by zones higher up the hierarchy... or by a member of
the CA pool; the root zone is pretty much U.S.A., which is possibly
a bit unfair.

--steffen


From steffen at sdaoden.eu  Tue Jun 26 02:44:04 2018
From: steffen at sdaoden.eu (Steffen Nurpmeso)
Date: Mon, 25 Jun 2018 18:44:04 +0200
Subject: [TUHS] off-topic list
In-Reply-To: <8a81f6fb-5689-f0b4-49e3-5871c4d3a402@spamtrap.tnetconsulting.net>
References: <20180624131458.6E96518C082@mercury.lcs.mit.edu>
 <alpine.BSF.2.21.999.1806251121440.68981@aneurin.horsfall.org>
 <8a81f6fb-5689-f0b4-49e3-5871c4d3a402@spamtrap.tnetconsulting.net>
Message-ID: <20180625164404.xXWXh%steffen@sdaoden.eu>

Grant Taylor via TUHS wrote in <8a81f6fb-5689-f0b4-49e3-5871c4d3a402 at spa\
mtrap.tnetconsulting.net>:
 |On 06/24/2018 07:38 PM, Dave Horsfall wrote:
 |> The only updates I've seen to BIND in recent years were 
 |> security-related, not functionality.  Yeah, I could switch to another 
 |> DNS server, but like Sendmail vs. Postfix[*] it's better the devil you 
 |> know etc...
 |
 |I've seen the following features introduced within the last five years:
 |
 |  - Response Policy Zone has added (sub)features and matured.
 |  - Response Policy Service (think milter like functionality for BIND).
 |  - Catalog Zones
 |  - Query Minimization
 |  - EDNS0 Client Subnet support is being worked on.
 |
 |That's just what comes to mind quickly.

But that indeed misses the greatest achievement of all imho:
transport layer security support for DNS, via those standards
which drive the web, are seen by thousands and thousands of eyes,
and are used a billion times a day at least, and that is DNS over
TCP/TLS and DTLS!

--steffen


From clemc at ccc.com  Tue Jun 26 03:22:31 2018
From: clemc at ccc.com (Clem Cole)
Date: Mon, 25 Jun 2018 13:22:31 -0400
Subject: [TUHS] off-topic list
In-Reply-To: <CABH=_VTWhyyAd+HskNM6L-8eTdBTooGB+USTo+fOy3jLDYN9Jg@mail.gmail.com>
References: <20180625144454.EAB7918C082@mercury.lcs.mit.edu>
 <CAC20D2MHSomazArsH4CnJAUqfBOV5N8bBzgT4RgL0Gf81UP0Xw@mail.gmail.com>
 <CABH=_VTWhyyAd+HskNM6L-8eTdBTooGB+USTo+fOy3jLDYN9Jg@mail.gmail.com>
Message-ID: <CAC20D2ONWKZfVVYoqZg1knbc7_1V2=s06hNwEjZbQ4CFHc=SAg@mail.gmail.com>

On Mon, Jun 25, 2018 at 12:03 PM, Paul Winalski <paul.winalski at gmail.com>
wrote:

>
> The BLISS language doesn't have any I/O capability built into the
> language (as do BASIC,  Fortran, COBOL, PL/I).

Sorry for the strange side trip ...  you didn't parse my words carefully
(which I know can sometimes be a challenge).  What I said was that CMU had
a rich set of BLISS system services, where I/O was one set of those
services.  I did not say it was part of the language.  But I/O
was very much part of the way we programmed, and we moved code that was
not intended to be machine specific between the PDP10 and the 11's
reasonably freely, particularly since the PDP11 compiler was a cross
compiler that ran on the 10.

From clemc at ccc.com  Tue Jun 26 03:37:16 2018
From: clemc at ccc.com (Clem Cole)
Date: Mon, 25 Jun 2018 13:37:16 -0400
Subject: [TUHS] off-topic list
In-Reply-To: <20180625161016.C16BA18C082@mercury.lcs.mit.edu>
References: <20180625161016.C16BA18C082@mercury.lcs.mit.edu>
Message-ID: <CAC20D2O=ncZ3ybjgTyMcM5+rGBPnJX1-E-312sPcm4wb+GE8hA@mail.gmail.com>

Ah ... Makes sense.  MMDF was possibly my favorite Unix MTA.   We shipped
it as Masscomp's default mail system for a long time.   It was only after I
left that they broke down and switched to sendmail to be like Sun and much
of the rest of the Internet.

It's interesting: MMDF had a child, PMDF (the rewrite in Pascal), which
became the default mailer for a lot of VMS systems, particularly ones
that had IP connections.  I know of no one still running MMDF at this point
(even me), but I do know of a couple of folks running PMDF.  A few years
ago, I gave up on MMDF and switched to Bernstein's qmail because of DNS
issues.    In many ways, MMDF and qmail are a lot alike in the way they
work under the covers, but to give Bernstein credit, he had really walked
through qmail doing a security audit, and I was unwilling to take the time
to do that for MMDF; and I knew that any MTA on the Internet had to be
hardened.  I'm sure MMDF could be attacked with stack overwrites and
strcpy(3)-style attacks, because when Crocker wrote it, that was not what
was being considered.

On Mon, Jun 25, 2018 at 12:10 PM, Noel Chiappa <jnc at mercury.lcs.mit.edu>
wrote:

>     > From: Clem Cole
>
>     > the MTA part is not there
>
> That system was using the MMDF MTA:
>
>   https://minnie.tuhs.org//cgi-bin/utree.pl?file=SRI-NOSC/mmdf
>
> written by David Crocker while he was at UDel (under Farber).
>
>         Noel
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://minnie.tuhs.org/pipermail/tuhs/attachments/20180625/9e226741/attachment.html>

From gtaylor at tnetconsulting.net  Tue Jun 26 04:21:09 2018
From: gtaylor at tnetconsulting.net (Grant Taylor)
Date: Mon, 25 Jun 2018 12:21:09 -0600
Subject: [TUHS] off-topic list
In-Reply-To: <20180625155103.6iH7s%steffen@sdaoden.eu>
References: <CMM.0.96.0.1529621068.beebe@gamma.math.utah.edu>
 <f1a2f732-400f-8044-1c90-9a8500a17d15@spamtrap.tnetconsulting.net>
 <20180621234706.GA23316@minnie.tuhs.org>
 <alpine.BSF.2.21.999.1806221416210.68981@aneurin.horsfall.org>
 <20180622142846.GS21272@mcvoy.com>
 <DE56C21F-CF7D-4B44-BC43-0C27CBD6DD7A@tfeb.org>
 <20180622145402.GT21272@mcvoy.com> <20180622151751.BEK9i%steffen@sdaoden.eu>
 <b6ef82de-739a-ed8e-0e91-3abfa2fb5f07@spamtrap.tnetconsulting.net>
 <20180622192505.mfig_%steffen@sdaoden.eu>
 <89e5ae21-ccc0-5c84-837b-120a1a7d9e26@spamtrap.tnetconsulting.net>
 <20180623144959.M9byU%steffen@sdaoden.eu>
 <ce6f617c-cf8e-63c6-8186-27e09c78020c@spamtrap.tnetconsulting.net>
 <20180623223851.LcBjy%steffen@sdaoden.eu>
 <09ee8833-c8c0-8911-751c-906b737209b7@spamtrap.tnetconsulting.net>
 <20180625155103.6iH7s%steffen@sdaoden.eu>
Message-ID: <c4debab5-181d-c071-850a-a8865d6e98d0@spamtrap.tnetconsulting.net>

On 06/25/2018 09:51 AM, Steffen Nurpmeso wrote:
> I cannot really imagine how hard it is to write a modern web browser, 
> with the highly integrative DOM tree, CSS and Javascript and such, 
> however, and the devil is in the details anyway.

Apparently it's difficult.  Or people aren't sufficiently motivated.

I know that I personally can't do it.

> Good question.  Then again, young men (and women) need to have a chance 
> to do anything at all.  Practically speaking.  For example we see almost 
> thousands of new RFCs per year.  That is more than in the first about 
> two decades all in all.  And all is getting better.

Yep.  I used to have aspirations of reading RFCs.  But work and life 
happened, then the rate of RFC publishing exploded, and I gave up.

> Really?  Not that i know of.  Resolvers should be capable to provide 
> quality of service, if multiple name servers are known, i would say.

I typically see this from the authoritative server side.  I've 
experienced this (at least) once myself, and multiple colleagues have 
also experienced it (at least) once themselves.

If the primary name server is offline for some reason (maintenance, 
outage, hardware, what have you) clients start reporting a problem that 
manifests itself as a complete failure.  They seem to not fall back to 
the secondary name server.

I'm not talking about "slowdown" complaints.  I'm talking about 
"complete failure" and "inability to browse the web".

I have no idea why this is.  All the testing that I've done indicates 
that clients fall back to secondary name servers.  Yet multiple 
colleagues and I have experienced this across multiple platforms.

It's because of these failure symptoms that I'm currently working with a 
friend & colleague to implement VRRP between his name servers so that 
either of them can serve traffic for both DNS service IPs / Virtual IPs 
(VIPs) in the event that the other server is unable to do so.

We all agree that a 5 ~ 90 second (depending on config) outage with 
automatic service continuation after VIP migration is a LOT better than 
end users experiencing what are effectively service outages of the 
primary DNS server.
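A minimal sketch of that arrangement with keepalived (a common VRRP
implementation); the interface name, router IDs, priorities, and addresses
are all illustrative, and the second name server would run the mirror
image with the MASTER/BACKUP roles swapped:

```
# /etc/keepalived/keepalived.conf on name server A (illustrative values)
vrrp_instance DNS_VIP_1 {
    state MASTER              # A normally owns the first service VIP
    interface eth0
    virtual_router_id 51
    priority 150              # higher priority wins the election
    advert_int 1
    virtual_ipaddress {
        192.0.2.53/24         # DNS service VIP 1
    }
}

vrrp_instance DNS_VIP_2 {
    state BACKUP              # A claims VIP 2 only if B stops advertising
    interface eth0
    virtual_router_id 52
    priority 100
    advert_int 1
    virtual_ipaddress {
        192.0.2.54/24         # DNS service VIP 2
    }
}
```

Failover happens within a few advertisement intervals, which lines up
with the 5 ~ 90 second window mentioned above.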

Even during the outages, the number of queries to the secondary DNS 
server(s) doesn't increase the way you would expect as clients migrate 
from the offline primary to the online secondary.

In short, this is a big unknown that I, and multiple colleagues, have 
seen and can't explain.  So we have each independently decided to 
implement solutions to keep the primary DNS IP address online using 
whatever method we prefer.

> This is even RFC 1034 as i see, SLIST and SBELT, whereas the latter 
> i filled in from "nameserver" as of /etc/resolv.conf, it should have 
> multiple, then.  (Or point to localhost and have a true local resolver 
> or something like dnsmasq.)

I completely agree that, per all documentation, things SHOULD work with 
just the secondary.  Yet in my experience from a recursive DNS server 
operator's standpoint, things don't work.
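For reference, the SBELT-style fallback lives in the stub resolver's
configuration; a typical /etc/resolv.conf with multiple nameservers looks
like this (addresses are illustrative, and the options line is
glibc-specific):

```
nameserver 192.0.2.1      # primary recursive resolver
nameserver 192.0.2.2      # tried only after the first one times out
options timeout:2 attempts:2
```

In theory the second entry covers a dead primary; the failures described
above happen despite configurations like this.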

> I do see DNS failures via Wifi but that is not the fault of DNS, but of 
> the provider i use.

Sure.  That's the nature of a wireless network.  It's also unrelated to 
the symptoms I describe above.

> P.S.: actually the only three things i have ever hated about DNS, and 
> i came to that in 2004 with EDNS etc. all around yet, is that backward 
> compatibly has been chosen for domain names, and therefore we gained 
> IDNA, which is a terribly complicated brainfuck thing that actually 
> caused incompatibilities, but these where then waved through and ok. 
> That is absurd.

I try to avoid dealing with IDNs.  Fortunately I'm fairly lucky in doing so.

> And the third is DNSSEC, which i read the standard of and said "no". 
> Just last year or the year before that we finally got DNS via TCP/TLS 
> and DNS via DTLS, that is, normal transport security!

My understanding is that DNSSEC provides verifiable authenticity of the 
DNS data itself.  It's also my understanding that DTLS, DoH, DNSCrypt, 
etc. provide DNS /transport/ encryption (authentication & privacy).

As I understand it, there is nothing about DTLS, DoH, DNSCrypt, etc. that 
prevents servers on the far end of said transports from modifying queries 
/ responses that pass through them, or from serving up spoofed content.

To me, DNSSEC serves a completely different need than DTLS, DoH, 
DNSCrypt, etc.

Please correct me and enlighten me if I'm wrong or have oversimplified 
things.

> Twenty years too late, but i really had good days when i saw those 
> standards flying by!  Now all we need are zone administrators which 
> publish certificates via DNS and DTLS and TCP/TLS consumers which can 
> integrate those in their own local pool (for at least their runtime).

DANE has been waiting on that for a while.

I think DANE does require DNSSEC.  I think DKIM does too.

> I actually hate the concept very, very much ^_^, for me it has 
> similarities with brainfuck.  I could not use it.

Okay.  To each his / her own.

> Except that this will work only for all-english, as otherwise character 
> sets come into play, text may be in a different character set, mail 
> standards may impose a content-transfer encoding, and then what you are 
> looking at is actually a database, not the data as such.

Hum.  I've not considered non-English as I so rarely see it in raw email 
format.

> This is what i find so impressive about that Plan9 approach, where 
> the individual subparts of the message are available as files in the 
> filesystem, subjects etc. as such, decoded and available as normal files. 
> I think this really is .. impressive.

I've never had the pleasure of messing with Plan9.  It's on my 
proverbial To-Do-Someday list.  It does sound very interesting.

> Thanks, eh, thanks.  Time will bring.. or not.

;-)

> It is of course not the email that leaves you no more.  It is not just 
> headers are added to bring the traceroute path.  I do have a bad feeling 
> with these, but technically i do not seem to have an opinion.

I too have some unease with things while not having any technical 
objection to them.

> Well, this adds the burden onto TUHS.  Just like i have said.. but 
> you know, more and more SMTP servers connect directly via STARTSSL or 
> TCP/TLS right away.  TUHS postfix server does not seem to do so on the 
> sending side

The nature of email is (and has been) changing.

We're no longer trying to use 486s to manage mainstream email.  My 
opinion is that we have the CPU cycles to support current standards.

> -- you know, i am not an administor, no earlier but on the 15th of March 
> this year i realized that my Postfix did not reiterate all the smtpd_* 
> variables as smtp_* ones, resulting in my outgoing client connections 
> to have an entirely different configuration than what i provided for 
> what i thought is "the server", then i did, among others
> 
> smtpd_tls_security_level = may
> +smtp_tls_security_level = $smtpd_tls_security_level

Oy vey.
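For anyone else bitten by the same trap: Postfix's client-side smtp_*
parameters do not inherit from the server-side smtpd_* ones, so outbound
TLS has to be set explicitly, e.g. (the parameter names are real Postfix
ones; the value is illustrative):

```
# main.cf: smtpd_* governs inbound (server) connections,
# smtp_* governs outbound (client) connections -- no inheritance.
smtpd_tls_security_level = may
smtp_tls_security_level  = $smtpd_tls_security_level
```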

> But if TUHS did, why should it create a DKIM signature?  Ongoing is the 
> effort to ensure SMTP uses TLS all along the route, i seem to recall i 
> have seen RFCs pass by which accomplish that.  Or only drafts??  Hmmm.

SMTP over TLS (STARTTLS) is just a transport.  I can send anything I 
want across said transport.

DKIM is about enabling downstream authentication in email.  Much like 
DNSSEC does for DNS.

There is nothing that prevents sending false information down a secure 
communications channel.

> Well S/MIME does indeed specify this mode of encapsulating the entire 
> message including the headers, and enforce MUAs to completely ignore the 
> outer envelope in this case.  (With a RFC discussing problems of this 
> approach.)

Hum.  That's contrary to my understanding.  Do you happen to have RFC 
and section numbers handy?  I've wanted to, needed to, go read more on 
S/MIME.  It now sounds like I'm missing something.

I wonder if the same can be said about PGP.

> The BSD Mail clone i maintain does not support this yet, along with other 
> important aspects of S/MIME, like the possibility to "self-encrypt" (so 
> that the message can be read again, a.k.a. that the encrypted version 
> lands on disk in a record, not the plaintext one).  I hope it will be 
> part of the OpenPGP, actually privacy rewrite this summer.

I thought that many MUAs handled that problem by adding the sender as an 
additional recipient in the S/MIME structure.  That way the sender could 
extract the ephemeral key using their private key and decrypt the 
encrypted message that they sent.



-- 
Grant. . . .
unix || die

-------------- next part --------------
A non-text attachment was scrubbed...
Name: smime.p7s
Type: application/pkcs7-signature
Size: 3982 bytes
Desc: S/MIME Cryptographic Signature
URL: <http://minnie.tuhs.org/pipermail/tuhs/attachments/20180625/e20d8d0e/attachment.bin>

From gtaylor at tnetconsulting.net  Tue Jun 26 04:48:33 2018
From: gtaylor at tnetconsulting.net (Grant Taylor)
Date: Mon, 25 Jun 2018 12:48:33 -0600
Subject: [TUHS] off-topic list
In-Reply-To: <20180625161052.6PXXL%steffen@sdaoden.eu>
References: <DE56C21F-CF7D-4B44-BC43-0C27CBD6DD7A@tfeb.org>
 <20180622145402.GT21272@mcvoy.com> <20180622151751.BEK9i%steffen@sdaoden.eu>
 <b6ef82de-739a-ed8e-0e91-3abfa2fb5f07@spamtrap.tnetconsulting.net>
 <20180622192505.mfig_%steffen@sdaoden.eu>
 <89e5ae21-ccc0-5c84-837b-120a1a7d9e26@spamtrap.tnetconsulting.net>
 <20180623144959.M9byU%steffen@sdaoden.eu>
 <ce6f617c-cf8e-63c6-8186-27e09c78020c@spamtrap.tnetconsulting.net>
 <20180623223851.LcBjy%steffen@sdaoden.eu>
 <09ee8833-c8c0-8911-751c-906b737209b7@spamtrap.tnetconsulting.net>
 <20180624100438.GY10129@h-174-65.A328.priv.bahnhof.se>
 <20180625161052.6PXXL%steffen@sdaoden.eu>
Message-ID: <5da463dd-fb08-f601-68e3-197e720d5716@spamtrap.tnetconsulting.net>

On 06/25/2018 10:10 AM, Steffen Nurpmeso wrote:
> DKIM reuses the *SSL key infrastructure, which is good.

Are you saying that DKIM relies on the traditional PKI via CA 
infrastructure?  Or are you saying that it uses similar technology that 
is completely independent of the PKI / CA infrastructure?

> (Many eyes see the code in question.)  It places records in DNS, which 
> is also good, now that we have DNS over TCP/TLS and (likely) DTLS. 
> In practice however things may differ and to me DNS security is all in 
> all not given as long as we get to the transport layer security.

I believe that a secure DNS /transport/ is not sufficient.  Simply 
securing the communications channel does not mean that the entity on the 
other end is not lying, particularly when you're not talking to the 
authoritative server but relying on caching recursive resolvers.

> I personally do not like DKIM still, i have opendkim around and 
> thought about it, but i do not use it, i would rather wish that public 
> TLS certificates could also be used in the same way as public S/MIME 
> certificates or OpenPGP public keys work, then only by going to a TLS 
> endpoint securely once, there could be end-to-end security.

All S/MIME interactions that I've seen do use certificates from a 
well-known CA via the PKI.

(My understanding of) what you're describing is encryption of data in 
flight.  That does nothing to encrypt / protect data at rest.

S/MIME /does/ provide encryption / authentication of data in flight 
/and/ data at rest.

S/MIME and PGP can also be used for things that never cross the wire.

> I am not a cryptographer, however.  (I also have not read the TLS v1.3 
> standard which is about to become reality.)  The thing however is that 
> for DKIM a lonesome user cannot do anything -- you either need to have 
> your own SMTP server, or you need to trust your provider.

I don't think that's completely accurate.  DKIM is a method of signing 
(via cryptographic hash) headers as you see (send) them.  I see no 
reason why a client can't add DKIM headers / signature to messages it 
sends to the MSA.

Granted, I've never seen this done.  But I don't see anything preventing 
it from being the case.
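The flow a client-side signer would go through can be sketched like so.
This is a toy only, NOT RFC 6376: the tag names (v=, d=, s=, h=, bh=, b=)
are real DKIM tags, but the HMAC key here merely stands in for the RSA /
Ed25519 key whose public half would be published in DNS, and the
canonicalization is greatly simplified:

```python
import base64
import hashlib
import hmac

def toy_dkim_sign(headers, body, key, selector="sel1", domain="example.org"):
    """Toy sketch of the DKIM signing flow (NOT RFC 6376: real DKIM
    signs with an RSA/Ed25519 key published in DNS; HMAC is used here
    only so the example is self-contained)."""
    # 1. Hash the body and base64 it (the bh= tag).
    bh = base64.b64encode(hashlib.sha256(body.encode()).digest()).decode()
    # 2. Canonicalize the headers being signed (roughly "relaxed":
    #    lower-case names, collapsed whitespace).
    signed = ["from", "to", "subject"]
    canon = "".join(
        "%s:%s\r\n" % (name, " ".join(headers[name].split()))
        for name in signed if name in headers
    )
    # 3. Sign the canonicalized headers plus the body hash (the b= tag).
    sig = base64.b64encode(
        hmac.new(key, (canon + bh).encode(), hashlib.sha256).digest()
    ).decode()
    # 4. Return the header a client could prepend before talking to the MSA.
    return ("DKIM-Signature: v=1; d=%s; s=%s; h=%s; bh=%s; b=%s"
            % (domain, selector, ":".join(signed), bh, sig))

hdr = toy_dkim_sign(
    {"from": "alice@example.org", "to": "bob@example.net", "subject": "hi"},
    "hello\r\n", key=b"secret")
print(hdr)
```

Nothing in that flow needs to run on the MTA; the MSA just sees one more
header on the submitted message.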

> But our own user interface is completely detached.  (I mean, at least 
> if no MTA is used one could do the DKIM stuff, too.)

I know that it is possible to do things on the receiving side.  I've got 
the DKIM Verifier add-on installed in Thunderbird, which gives me client 
side UI indication if the message that's being displayed still passes 
DKIM validation or not.  The plugin actually calculates the DKIM hash 
and compares it locally.  It's not just relying on a header added by 
receiving servers.



-- 
Grant. . . .
unix || die

-------------- next part --------------
A non-text attachment was scrubbed...
Name: smime.p7s
Type: application/pkcs7-signature
Size: 3982 bytes
Desc: S/MIME Cryptographic Signature
URL: <http://minnie.tuhs.org/pipermail/tuhs/attachments/20180625/b3da55a5/attachment.bin>

From gtaylor at tnetconsulting.net  Tue Jun 26 04:59:10 2018
From: gtaylor at tnetconsulting.net (Grant Taylor)
Date: Mon, 25 Jun 2018 12:59:10 -0600
Subject: [TUHS] mail (Re: off-topic list
In-Reply-To: <20180625162616.7JTz0%steffen@sdaoden.eu>
References: <CMM.0.96.0.1529621068.beebe@gamma.math.utah.edu>
 <f1a2f732-400f-8044-1c90-9a8500a17d15@spamtrap.tnetconsulting.net>
 <20180621234706.GA23316@minnie.tuhs.org>
 <alpine.BSF.2.21.999.1806221416210.68981@aneurin.horsfall.org>
 <20180622142846.GS21272@mcvoy.com>
 <DE56C21F-CF7D-4B44-BC43-0C27CBD6DD7A@tfeb.org>
 <20180622145402.GT21272@mcvoy.com> <20180622151751.BEK9i%steffen@sdaoden.eu>
 <b6ef82de-739a-ed8e-0e91-3abfa2fb5f07@spamtrap.tnetconsulting.net>
 <20180622192505.mfig_%steffen@sdaoden.eu>
 <89e5ae21-ccc0-5c84-837b-120a1a7d9e26@spamtrap.tnetconsulting.net>
 <20180623144959.M9byU%steffen@sdaoden.eu>
 <ce6f617c-cf8e-63c6-8186-27e09c78020c@spamtrap.tnetconsulting.net>
 <20180623223851.LcBjy%steffen@sdaoden.eu>
 <09ee8833-c8c0-8911-751c-906b737209b7@spamtrap.tnetconsulting.net>
 <AE998A8F-3541-41E6-87F3-266340768C26@bitblocks.com>
 <42849F79-D132-4059-8A94-FFF8B141C49E@orthanc.ca>
 <20180625162616.7JTz0%steffen@sdaoden.eu>
Message-ID: <8c0da561-f786-8039-d2fc-907f2ddd09e3@spamtrap.tnetconsulting.net>

On 06/25/2018 10:26 AM, Steffen Nurpmeso wrote:
> I have never understood why people get excited about Maildir, you have 
> trees full of files with names which hurt the eyes,

First, a single Maildir is a parent directory and three sub-directories. 
Many Maildir-based message stores are collections of multiple individual 
Maildirs.

Second, the names may look ugly, but they are assigned independently of 
the contents of the message.

Third, I prefer file-per-message as opposed to a monolithic mbox for 
robustness reasons: message corruption impacts a single message and not 
the entire mail folder (mbox).

Aside:  I already have a good fast database that most people call a file 
system and it does a good job tracking metadata for me.

Fourth, I found maildir to be faster on older / slower servers because 
it doesn't require copying (backing up) the monolithic mbox file prior 
to performing an operation on it.  It splits reads & writes into much 
smaller chunks that are more friendly (and faster) to the I/O sub-system.
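The on-disk layout described above is easy to poke at with Python's
stdlib mailbox module; a minimal sketch (the paths and addresses are
temporary / illustrative):

```python
import mailbox
import os
import tempfile

# A Maildir is just a directory with tmp/, new/ and cur/ sub-directories;
# mailbox.Maildir lays them out for us.
root = tempfile.mkdtemp()
path = os.path.join(root, "Maildir")
md = mailbox.Maildir(path, create=True)

# Delivery writes the message under tmp/ and then renames it into new/,
# which is why no locking is needed and corruption stays per-message.
key = md.add("From: alice@example.org\r\nSubject: test\r\n\r\nhello\r\n")

print(sorted(os.listdir(path)))  # the three canonical sub-directories
print(len(md))                   # one message in the store
```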

Could many of the same things be said about MH?  Very likely.  What I 
couldn't find for MH, at the time I went looking at and comparing 
(typical) Unix mail stores, was a readily available POP3 and IMAP 
interface.  Seeing as how POP3 and IMAP were a hard requirement for me, 
MH was a non-starter.

> they miss a From_ line (and are thus not valid MBOX files, i did not 
> miss that in the other mail).

True.  Though I've not found that to be a limitation.  I'm fairly 
confident that the Return-Path: header that is added / replaced by my 
MTA does functionally the same thing.
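For concreteness, the two carry the same envelope-sender information in
different places (addresses and date are illustrative):

```
From alice@example.org Mon Jun 25 12:00:00 2018
Return-Path: <alice@example.org>
```

The first is the mbox "From_" separator line; the second is the header an
MTA stamps on at final delivery.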

I'm sure similar differences can be found between many different solutions 
in Unix's history: SysV-style init.d scripts vs BSD-style /etc/rc.d 
init scripts vs systemd or Solaris's SMF (I think that's its name).

To each his / her own.



-- 
Grant. . . .
unix || die

-------------- next part --------------
A non-text attachment was scrubbed...
Name: smime.p7s
Type: application/pkcs7-signature
Size: 3982 bytes
Desc: S/MIME Cryptographic Signature
URL: <http://minnie.tuhs.org/pipermail/tuhs/attachments/20180625/4e2db240/attachment.bin>

From gtaylor at tnetconsulting.net  Tue Jun 26 05:35:23 2018
From: gtaylor at tnetconsulting.net (Grant Taylor)
Date: Mon, 25 Jun 2018 13:35:23 -0600
Subject: [TUHS] off-topic list
In-Reply-To: <CAC20D2O=ncZ3ybjgTyMcM5+rGBPnJX1-E-312sPcm4wb+GE8hA@mail.gmail.com>
References: <20180625161016.C16BA18C082@mercury.lcs.mit.edu>
 <CAC20D2O=ncZ3ybjgTyMcM5+rGBPnJX1-E-312sPcm4wb+GE8hA@mail.gmail.com>
Message-ID: <3950d997-d310-7cfc-30bf-237e9d872739@spamtrap.tnetconsulting.net>

On 06/25/2018 11:37 AM, Clem Cole wrote:
> Ah ... Makes sense.  MMDF was possibly my favorite Unix MTA.

Hum.  I have no experience with MMDF.  Perhaps I should play.

> We shipped it as Masscomp's default Mail System for a longtime.

I never knew that MMDF was used anywhere other than SCO Unix.  (I don't 
know which SCO product.)

According to Wikipedia, PMDF was used on VMS.  Now I wonder if there's 
any relation between PMDF and what I've frequently heard referred to as 
Mail-11.

> It was only after I left that they broken down and switched to sendmail 
> to be like Sun and much of the rest of the internet.

The Wikipedia article also indicates that PMDF became Sun Java System 
Messaging Server.  Which seems to counter Clem's comment.

Or perhaps, as is typical for Sun, there are multiple solutions to the 
same "problem": ship sendmail with the base OS but sell a larger product 
that (hypothetically) does a superset of functions.

> any MTA on the internet had to be hardenned.  I'm sure MMDF could be 
> attacked with stack overwrites and strcpy(3) style attacks because when 
> Crocker wrote it, that was not what was being considered.

Thankfully you don't have to put an MTA directly on the internet to be 
able to play with it.  It's trivial to put an MTA behind a smart host 
that that shields the (potentially) vulnerable MTA from the brunt of the 
Internet.



-- 
Grant. . . .
unix || die

-------------- next part --------------
A non-text attachment was scrubbed...
Name: smime.p7s
Type: application/pkcs7-signature
Size: 3982 bytes
Desc: S/MIME Cryptographic Signature
URL: <http://minnie.tuhs.org/pipermail/tuhs/attachments/20180625/800cc3bf/attachment.bin>

From clemc at ccc.com  Tue Jun 26 06:09:08 2018
From: clemc at ccc.com (Clem Cole)
Date: Mon, 25 Jun 2018 16:09:08 -0400
Subject: [TUHS] off-topic list
In-Reply-To: <3950d997-d310-7cfc-30bf-237e9d872739@spamtrap.tnetconsulting.net>
References: <20180625161016.C16BA18C082@mercury.lcs.mit.edu>
 <CAC20D2O=ncZ3ybjgTyMcM5+rGBPnJX1-E-312sPcm4wb+GE8hA@mail.gmail.com>
 <3950d997-d310-7cfc-30bf-237e9d872739@spamtrap.tnetconsulting.net>
Message-ID: <CAC20D2Nt+CuDriSh18ypOiG3+pJm+yvy4-VQ0wYUa8A7QiCrYA@mail.gmail.com>

Below...

On Mon, Jun 25, 2018 at 3:35 PM, Grant Taylor via TUHS <tuhs at minnie.tuhs.org
> wrote:

> On 06/25/2018 11:37 AM, Clem Cole wrote:
>
>> Ah ... Makes sense.  MMDF was possibly my favorite Unix MTA.
>>
>
> Hum.  I have no experience with MMDF.  Perhaps I should play.

Warning.... there's a lot of stuff that is pre-Internet in the guts (i.e.
it was designed during the Arpanet era), so IP support got added to it.
And even in the last versions, it's missing a lot (i.e. I'm the person
that hacked^h^h^h^h^h^hadded the original BSD resolver client code to it
one weekend, lord knows how many years ago).   Again, this is why qmail
became the replacement.  Bernstein is an amazing coder and I respect him
immensely, although his personality leaves a little to be desired...




>
>
> We shipped it as Masscomp's default Mail System for a longtime.
>>
>
> I never knew that MMDF was used anywhere other than SCO Unix.  (I don't
> know which SCO product.)
>
Ahem ...   We did it before they did (by a number of years, actually).   It
was also the mailer for CSNET/PhoneNET, when BBN picked it up.



> According to Wikipedia PMDF was used on VMS.  Now I wonder if there's any
> relation to PMDF and what I've frequently heard referred to a Mail-11.

Mail-11 is DEC's mail standard, and is mostly a protocol spec.

PMDF was a pseudo-open-source rewrite of MMDF (from one of the midwestern
universities, I believe) that got taken closed, and I never knew all of
the politics.  Larry M, correct me here; he might know some of it, as I'm
thinking PMDF came out of Wisconsin originally.  Whoever wrote it took the
CSNET C MMDF implementation and rewrote it in Pascal for VMS - this was
during the height of the C vs Pascal war in CS departments and also the
time of the UNIX vs VMS wars.     The DEC Pascal compiler was very good
and was an excellent teaching compiler.   Paul W might remember the
ordering of the releases from the compiler group, but I think VMS Pascal
was released before VAX-11 C -- which I think played into the MMDF/PMDF
thing.   As I recall, VMS Pascal definitely was bundled in the university
package and was 'cheaper' if you were willing to run VMS instead of Unix
at your university.       Anyway, the folks that did PMDF formed a small
firm and sold it for a while.   There was a commercial IP implementation
for VMS from France called TUV and, IIRC, the TUV folks bought PMDF, and
the whole thing got sold to a lot of people and had quite a ride.





>
>
> It was only after I left that they broken down and switched to sendmail to
>> be like Sun and much of the rest of the internet.
>>
>
> The Wikipedia article also indicates that PMDF became Sun Java System
> Messaging Server.  Which seems to counter Clem's comment.
>
I know nothing about that.   I wonder if the Pascal version got
reimplemented in Java at some point.  I do not know.  That would not
surprise me.



>
> Or, perhaps as typical for Sun, there are multiple solutions to the same
> ""problem.  Ship Sendmail with the base OS but sell a larger product that
> (hypothetically) does a super set of functions.

That would sound more like it.   Also left and right hands not talking to
each other.   Sun had become a large place by that point.



>
>
> any MTA on the internet had to be hardenned.  I'm sure MMDF could be
>> attacked with stack overwrites and strcpy(3) style attacks because when
>> Crocker wrote it, that was not what was being considered.
>>
>
> Thankfully you don't have to put an MTA directly on the internet to be
> able to play with it.  It's trivial to put an MTA behind a smart host that
> that shields the (potentially) vulnerable MTA from the brunt of the
> Internet.

Sure, but it's more work than I want to mess with these days.   Best wishes
and have at it 😘


-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://minnie.tuhs.org/pipermail/tuhs/attachments/20180625/85497bef/attachment.html>

From lyndon at orthanc.ca  Tue Jun 26 06:15:24 2018
From: lyndon at orthanc.ca (Lyndon Nerenberg)
Date: Mon, 25 Jun 2018 13:15:24 -0700 (PDT)
Subject: [TUHS] off-topic list
In-Reply-To: <3950d997-d310-7cfc-30bf-237e9d872739@spamtrap.tnetconsulting.net>
References: <20180625161016.C16BA18C082@mercury.lcs.mit.edu>
 <CAC20D2O=ncZ3ybjgTyMcM5+rGBPnJX1-E-312sPcm4wb+GE8hA@mail.gmail.com>
 <3950d997-d310-7cfc-30bf-237e9d872739@spamtrap.tnetconsulting.net>
Message-ID: <alpine.BSF.2.21.999.1806251313020.51914@orthanc.ca>

> The Wikipedia article also indicates that PMDF became Sun Java System 
> Messaging Server.  Which seems to counter Clem's comment.

Commercial PMDF was sold/maintained by Innosoft (Ned Freed, Chris Newman, 
...)  Innosoft was bought up by Sun prior to its assimilation by the 
Oracle borg ...


From gtaylor at tnetconsulting.net  Tue Jun 26 06:47:23 2018
From: gtaylor at tnetconsulting.net (Grant Taylor)
Date: Mon, 25 Jun 2018 14:47:23 -0600
Subject: [TUHS] off-topic list
In-Reply-To: <CAC20D2Nt+CuDriSh18ypOiG3+pJm+yvy4-VQ0wYUa8A7QiCrYA@mail.gmail.com>
References: <20180625161016.C16BA18C082@mercury.lcs.mit.edu>
 <CAC20D2O=ncZ3ybjgTyMcM5+rGBPnJX1-E-312sPcm4wb+GE8hA@mail.gmail.com>
 <3950d997-d310-7cfc-30bf-237e9d872739@spamtrap.tnetconsulting.net>
 <CAC20D2Nt+CuDriSh18ypOiG3+pJm+yvy4-VQ0wYUa8A7QiCrYA@mail.gmail.com>
Message-ID: <f8d76407-f217-f444-653d-caac36012d1b@spamtrap.tnetconsulting.net>

On 06/25/2018 02:09 PM, Clem Cole wrote:
> Below...

;-)

> Warning.... there's a lot of stuff that is pre-Internet in the guts (i.e. 
> it was designed during the Arpanet era), so IP support got added to it.   And 
> even in the last versions, it's missing a lot (i.e. I'm the person that 
> hacked^h^h^h^h^h^hadded the original BSD resolver client code to it one 
> weekend, lord knows how many years ago).   Again, this is why qmail 
> became the replacement.  Bernstein is an amazing coder and I respect him 
> immensely, although his personality leaves a little to be desired...

I consider myself so advised.

Wait, are you saying that qmail was written as a replacement for PMDF? 
Or that you used qmail as your replacement for PMDF?

> Ahem ...   We did it before they did (by a number of years, actually).  
>   It was also the mailer for CSNET/PhoneNET, when BBN picked it up.

I'm sorry, I was not meaning to imply anything other than my ignorance 
of MMDF history.

> Mail-11 is DEC's mail standard, and is mostly a protocol spec.

ACK

> PMDF was a pseudo-open-source rewrite of MMDF (from one of the midwestern 
> universities, I believe) that got taken closed, and I never knew all of 
> the politics.  Larry M, correct me here; he might know some of it, as I'm 
> thinking PMDF came out of Wisconsin originally.  Whoever wrote it took 
> the CSNET C MMDF implementation and rewrote it in Pascal for VMS - 
> this was during the height of the C vs Pascal war in CS departments and 
> also the time of the UNIX vs VMS wars.     The DEC Pascal compiler was 
> very good and was an excellent teaching compiler.   Paul W might remember 
> the ordering of the releases from the compiler group, but I think VMS 
> Pascal was released before VAX-11 C -- which I think played into the 
> MMDF/PMDF thing.   As I recall, VMS Pascal definitely was bundled in the 
> university package and was 'cheaper' if you were willing to run VMS 
> instead of Unix at your university.       Anyway, the folks that did 
> PMDF formed a small firm and sold it for a while.   There was a 
> commercial IP implementation for VMS from France called TUV and, IIRC, 
> the TUV folks bought PMDF, and the whole thing got sold to a lot of 
> people and had quite a ride.

Interesting.

> I know nothing about that.   I wonder if the Pascal version got 
> reimplemented in Java at some point.  I do not know.  That would not 
> surprise me.

"In 1999 PMDF was translated from Pascal to C. The C version of PMDF 
became the basis of the Sun Java System Messaging Server of Sun 
Microsystems"

I wouldn't bet that (the C version of) PMDF was reimplemented in Java 
just because the name contained Java.  I seem to recall Sun putting Java 
in the name of many products at the time.

> That would sound more like it.   Also left and right hands not talking 
> to each other.   Sun had become a large place by that point.

ACK

> Sure, but it's more work than I want to mess with these days.   Best 
> wishes and have at it 😘

Fair enough.

Thank you for the history on MMDF / PMDF.  #learningIsFun



-- 
Grant. . . .
unix || die

-------------- next part --------------
A non-text attachment was scrubbed...
Name: smime.p7s
Type: application/pkcs7-signature
Size: 3982 bytes
Desc: S/MIME Cryptographic Signature
URL: <http://minnie.tuhs.org/pipermail/tuhs/attachments/20180625/3c4555a4/attachment.bin>

From clemc at ccc.com  Tue Jun 26 07:15:00 2018
From: clemc at ccc.com (Clem Cole)
Date: Mon, 25 Jun 2018 17:15:00 -0400
Subject: [TUHS] off-topic list
In-Reply-To: <f8d76407-f217-f444-653d-caac36012d1b@spamtrap.tnetconsulting.net>
References: <20180625161016.C16BA18C082@mercury.lcs.mit.edu>
 <CAC20D2O=ncZ3ybjgTyMcM5+rGBPnJX1-E-312sPcm4wb+GE8hA@mail.gmail.com>
 <3950d997-d310-7cfc-30bf-237e9d872739@spamtrap.tnetconsulting.net>
 <CAC20D2Nt+CuDriSh18ypOiG3+pJm+yvy4-VQ0wYUa8A7QiCrYA@mail.gmail.com>
 <f8d76407-f217-f444-653d-caac36012d1b@spamtrap.tnetconsulting.net>
Message-ID: <CAC20D2OQcoURrODRr6VfsKyYmg-__G2seTsuLedss+WL5m-ySg@mail.gmail.com>

On Mon, Jun 25, 2018 at 4:47 PM, Grant Taylor via TUHS <tuhs at minnie.tuhs.org
> wrote:

>
>
> Wait, are you saying that qmail was written as a replacement for PMDF? Or
> that you used qmail as your replacement for PMDF?

Unclear - why Bernstein does anything is beyond me.  Best I can tell, he
wrote it because he (and a lot of us) disliked sendmail.   I think he felt
MMDF was too much of a mess by that point and it was time to start over.
He had already done a replacement for BIND.  But it would primarily have
been a replacement for MMDF, I would have thought; PMDF originally was
written in Pascal for VMS, although PMDF might have been moved back to
UNIX by then.  The dates are fuzzy.


> "In 1999 PMDF was translated from Pascal to C. The C version of PMDF
> became the basis of the Sun Java System Messaging Server of Sun
> Microsystems"
>

My memory is hazy; I seem to remember there was one more heavyweight mail
system for UNIX after MMDF, which came from the Brits in the 1980s/early
90s in the same vein, whose name I cannot think of.   That must have been
PMDF now that I think about it.    Simon Rosenthal was one of the guys
involved in it.  He did the support for it for us at LCC.  Thinking about
it, we must have been running that version at Locus before I went to DEC.
If I can find Simon or the other Andy Tannenbaum (trb), I'll ask them if
either of them remembers.  But I can say the date has to be wrong on this
quote you mention.  I would think the translation from Pascal back to C
would have been 10 years earlier, and that would make more sense.   By
1999, I had left LCC.    As I said, Simon would be a good person to ask
if I can dig him up.



> I wouldn't bet that (the C version of) PMDF was reimplemented in Java just
> because the name contained Java.  I seem to recall Sun putting Java in the
> name of many products at the time.

Right....

Clem

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://minnie.tuhs.org/pipermail/tuhs/attachments/20180625/b9e31820/attachment.html>

From dave at horsfall.org  Tue Jun 26 12:00:49 2018
From: dave at horsfall.org (Dave Horsfall)
Date: Tue, 26 Jun 2018 12:00:49 +1000 (EST)
Subject: [TUHS] Happy birthday, Maurice Wilkes!
Message-ID: <alpine.BSF.2.21.999.1806261015400.68981@aneurin.horsfall.org>

As I continue to push the boundaries of subjective topicism, I'd like 
to mention that today was when Sir Maurice Wilkes FRS FReng was born way 
back in 1913.

He had a bit to do with EDSAC, microprogramming, and that sort of thing.

And dammit, I missed Alan Turing's birthday last Saturday (23rd June, 1912),
and Konrad Zuse's back on 22nd June, 1910, along with the Qwerty keyboard
being patented by one Christopher Sholes back in 1868 (long story).

-- Dave


From ralph at inputplus.co.uk  Tue Jun 26 17:16:04 2018
From: ralph at inputplus.co.uk (Ralph Corderoy)
Date: Tue, 26 Jun 2018 08:16:04 +0100
Subject: [TUHS] Old RAND MH Source.
In-Reply-To: <201806251547.w5PFlLJ2003448@darkstar.fourwinds.com>
References: <20180625144454.EAB7918C082@mercury.lcs.mit.edu>
 <CAC20D2MHSomazArsH4CnJAUqfBOV5N8bBzgT4RgL0Gf81UP0Xw@mail.gmail.com>
 <201806251547.w5PFlLJ2003448@darkstar.fourwinds.com>
Message-ID: <20180626071604.414D21F97D@orac.inputplus.co.uk>

Hi Jon,

> > > https://minnie.tuhs.org//cgi-bin/utree.pl?file=SRI-NOSC/mh
> >
> > Noel, I did a quick look at that code.   That is some of it for sure
>
> BTW, I would think that the original code is somewhere in the nmh
> archives too.

No, I don't think there's anything going back this far, e.g. no
`pickup.c' in git, and I think David Levine, another maintainer, along
with you and me, would like to expand the collection as he often looks
back to understand the present.

Clem, if you do dig out old MH source then please point the
https://lists.nongnu.org/mailman/listinfo/nmh-workers list at it, or me
directly if you'd prefer.  Amongst nmh's current users is its co-author,
https://en.wikipedia.org/wiki/Norman_Shapiro, and I don't think he has
source that old otherwise he'd have passed it on.

-- 
Cheers, Ralph.
https://plus.google.com/+RalphCorderoy


From arnold at skeeve.com  Tue Jun 26 17:01:00 2018
From: arnold at skeeve.com (arnold at skeeve.com)
Date: Tue, 26 Jun 2018 01:01:00 -0600
Subject: [TUHS] off-topic list
In-Reply-To: <CAC20D2OQcoURrODRr6VfsKyYmg-__G2seTsuLedss+WL5m-ySg@mail.gmail.com>
References: <20180625161016.C16BA18C082@mercury.lcs.mit.edu>
 <CAC20D2O=ncZ3ybjgTyMcM5+rGBPnJX1-E-312sPcm4wb+GE8hA@mail.gmail.com>
 <3950d997-d310-7cfc-30bf-237e9d872739@spamtrap.tnetconsulting.net>
 <CAC20D2Nt+CuDriSh18ypOiG3+pJm+yvy4-VQ0wYUa8A7QiCrYA@mail.gmail.com>
 <f8d76407-f217-f444-653d-caac36012d1b@spamtrap.tnetconsulting.net>
 <CAC20D2OQcoURrODRr6VfsKyYmg-__G2seTsuLedss+WL5m-ySg@mail.gmail.com>
Message-ID: <201806260701.w5Q710Y9019898@freefriends.org>

Clem Cole <clemc at ccc.com> wrote:

> > "In 1999 PMDF was translated from Pascal to C. The C version of PMDF
> > became the basis of the Sun Java System Messaging Server of Sun
> > Microsystems"
> >
>
> But I can say the date has to be wrong on this quote
> you mention.  I would think the translation from Pascal back to C would
> have been 10 year earlier and that would make more sense.   By 1999, I had
> left LCC.    As I said, Simon would be a good person to ask if I can dig
> him up.

It sounds more like:

Early 1980s: C MMDF translated to VMS Pascal.
1999: PMDF translated anew to C by Sun.

I vaguely remember having MMDF at Georgia Tech, I think in the 4.1 BSD
time frame; it was used for CSNet and UUCP for USENET / UUCP mail.  Then
when 4.2 came along, I think it was dropped for Sendmail.

But I *really* don't remember the details, just that MMDF was in use
on one of the systems I used a long time ago.

Thanks,

Arnold


From ralph at inputplus.co.uk  Tue Jun 26 17:49:52 2018
From: ralph at inputplus.co.uk (Ralph Corderoy)
Date: Tue, 26 Jun 2018 08:49:52 +0100
Subject: [TUHS] off-topic list [ really mh ]
In-Reply-To: <201806251528.w5PFSqaL000557@darkstar.fourwinds.com>
References: <CMM.0.96.0.1529621068.beebe@gamma.math.utah.edu>
 <f1a2f732-400f-8044-1c90-9a8500a17d15@spamtrap.tnetconsulting.net>
 <20180621234706.GA23316@minnie.tuhs.org>
 <alpine.BSF.2.21.999.1806221416210.68981@aneurin.horsfall.org>
 <20180622142846.GS21272@mcvoy.com>
 <DE56C21F-CF7D-4B44-BC43-0C27CBD6DD7A@tfeb.org>
 <20180622145402.GT21272@mcvoy.com> <20180622151751.BEK9i%steffen@sdaoden.eu>
 <b6ef82de-739a-ed8e-0e91-3abfa2fb5f07@spamtrap.tnetconsulting.net>
 <20180622192505.mfig_%steffen@sdaoden.eu>
 <89e5ae21-ccc0-5c84-837b-120a1a7d9e26@spamtrap.tnetconsulting.net>
 <20180623144959.M9byU%steffen@sdaoden.eu>
 <ce6f617c-cf8e-63c6-8186-27e09c78020c@spamtrap.tnetconsulting.net>
 <20180623223851.LcBjy%steffen@sdaoden.eu>
 <09ee8833-c8c0-8911-751c-906b737209b7@spamtrap.tnetconsulting.net>
 <CAC20D2MvgZN1P5wcZ4g_Gab6j5KPgKg+yhieU-_OS5-20xtjGA@mail.gmail.com>
 <201806251528.w5PFSqaL000557@darkstar.fourwinds.com>
Message-ID: <20180626074952.6B38D1F97D@orac.inputplus.co.uk>

Hi Jon,

> I'm a big fan of mh (nmh now) and a sometimes maintainer.

Ditto × 2.  :-)  https://www.nongnu.org/nmh/

> What I love about it is that it's a separate set of commands as
> opposed to an integrated blob.  That means that I can script.

Yes, this is key.  I don't procmail(1) my emails into folders for later
reading by theme; instead I have a shell script that runs through the
inbox looking for emails to process.

pick(1) finds things of interest, e.g.
    pick --list-id '<tuhs\.minnie\.tuhs\.org>'
I then display those emails in a variety of ways:
    one-line summary of each;
    scrape and summarise signal from third-party noise with sed(1),
        etc., having decoded the MIME; 
    read each in full in less(1) having configured `K', `S', ... to
        update nmh's sequences called `keep', `spam', ..., and move onto
        the next email.
And finally have read(1) prompt me for an action with a common default,
delete, refile, etc.
Then the script does it all again for the pick from the inbox.

The result is I'm led through all the routine processing, mainly
hitting Enter with the odd bit of `DDDKSD'.  A bit similar to using the
rn(1) Usenet reader's `do the next thing' `Space' key.

In interactive use, old-MH hands don't type `pick -subject ...', but
instead have a personal ~/bin/* that suits their needs.  For me, `s'
shows an email, `sc' gives a scan(1) listing, one per line, of the end
of the current folder;  just enough based on `tput lines'.  I search
with ~/bin/-sub, and so on.

Much more flexible than a silo like mail(1) by benefiting from the
`programming tools' model.
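In command form, the loop described above might look roughly like the
sketch below (folder and sequence names are illustrative, not anyone's
actual ~/bin scripts):

    pick +inbox --list-id '<tuhs\.minnie\.tuhs\.org>' -sequence picked
    scan picked                      # one-line summary of each match
    show picked | less               # read; keys update keep/spam sequences
    refile picked -src +inbox +tuhs  # or rmm to delete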

-- 
Cheers, Ralph.
https://plus.google.com/+RalphCorderoy


From dot at dotat.at  Tue Jun 26 18:27:50 2018
From: dot at dotat.at (Tony Finch)
Date: Tue, 26 Jun 2018 09:27:50 +0100
Subject: [TUHS] off-topic list
In-Reply-To: <alpine.BSF.2.21.999.1806251313020.51914@orthanc.ca>
References: <20180625161016.C16BA18C082@mercury.lcs.mit.edu>
 <CAC20D2O=ncZ3ybjgTyMcM5+rGBPnJX1-E-312sPcm4wb+GE8hA@mail.gmail.com>
 <3950d997-d310-7cfc-30bf-237e9d872739@spamtrap.tnetconsulting.net>
 <alpine.BSF.2.21.999.1806251313020.51914@orthanc.ca>
Message-ID: <alpine.DEB.2.11.1806260919560.916@grey.csi.cam.ac.uk>

Lyndon Nerenberg <lyndon at orthanc.ca> wrote:

> > The Wikipedia article also indicates that PMDF became Sun Java System
> > Messaging Server.  Which seems to counter Clem's comment.
>
> Commercial PMDF was sold/maintained by Innosoft (Ned Freed, Chris Newman, ...)
> Innosoft was bought up by Sun prior to its assimilation by the Oracle borg ...

They are still active though I can't remember what their software is
called these days. It has grown a lot of features, e.g. IMAP message
store.

According to the MMDF FAQ, MMDF was also the basis for PP, but my PP
manual says MMDF was just the inspiration, and it suggests that Kille and
his team rewrote it from scratch. PP was the UK Grey Book / X.400 mailer,
and (according to its manual) its name definitely did not stand for
Postman Pat. PP is still around as the (commercial) ISODE mail server.

When I worked at Demon Internet in 1997-2000, they were using MMDF for
their mail servers - I gather this came from their origins as a SCO
consultancy.

At Cambridge, our central email relay was named the "PP Switch" in about
1991 (IIRC) and it is still to this day called ppsw.cam.ac.uk.

Tony.
-- 
f.anthony.n.finch  <dot at dotat.at>  http://dotat.at/
Lundy, Fastnet, Irish Sea: Variable 3 or 4, occasionally east 5 in Lundy and
Fastnet. Smooth or slight, occasionally moderate in southwest Fastnet. Fair.
Good.


From dfawcus+lists-tuhs at employees.org  Tue Jun 26 19:05:39 2018
From: dfawcus+lists-tuhs at employees.org (Derek Fawcus)
Date: Tue, 26 Jun 2018 10:05:39 +0100
Subject: [TUHS] off-topic list
In-Reply-To: <201806250615.w5P6FgHA018820@freefriends.org>
References: <20180622145402.GT21272@mcvoy.com>
 <20180622151751.BEK9i%steffen@sdaoden.eu>
 <b6ef82de-739a-ed8e-0e91-3abfa2fb5f07@spamtrap.tnetconsulting.net>
 <20180622192505.mfig_%steffen@sdaoden.eu>
 <89e5ae21-ccc0-5c84-837b-120a1a7d9e26@spamtrap.tnetconsulting.net>
 <20180623144959.M9byU%steffen@sdaoden.eu>
 <ce6f617c-cf8e-63c6-8186-27e09c78020c@spamtrap.tnetconsulting.net>
 <alpine.NEB.2.20.1806231615130.17586@neener.bl.org>
 <alpine.BSF.2.21.999.1806251240100.68981@aneurin.horsfall.org>
 <201806250615.w5P6FgHA018820@freefriends.org>
Message-ID: <20180626090539.GB96296@accordion.employees.org>

On Mon, Jun 25, 2018 at 12:15:42AM -0600, arnold at skeeve.com wrote:
> 
> So what is the alternative?

Well, I've generally found that > 90% of my mail filtering needs
can be handled by the use of extension addresses (see my From: line).

I started doing that with qmail back in the 90s, and these days
tend to use a similar mechanism which is present in postfix.
I've even found that some sendmail installs (as a previous employer used)
support a similar mechanism.

It is only occasionally that I have to resort to procmail (say for
parsing stuff out of Atlassian tools messages), and I can then make
its use simpler by having recipes which match a bit, then feed
the message back in to an extension address.  At which point delivery
may occur, or some other procmail rules may be run.
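For instance, a recipe along those lines might look like the fragment
below (the match pattern and the extension address are made up for
illustration; `!' re-injects the message through the MTA, where the usual
user+extension delivery takes over):

    # Illustrative ~/.procmailrc fragment -- hypothetical addresses.
    :0
    * ^From:.*@atlassian\.example\.org
    ! dfawcus+atlassian@example.org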

DF


From dfawcus+lists-tuhs at employees.org  Tue Jun 26 18:57:56 2018
From: dfawcus+lists-tuhs at employees.org (Derek Fawcus)
Date: Tue, 26 Jun 2018 09:57:56 +0100
Subject: [TUHS] off-topic list
In-Reply-To: <CAC20D2OQcoURrODRr6VfsKyYmg-__G2seTsuLedss+WL5m-ySg@mail.gmail.com>
References: <20180625161016.C16BA18C082@mercury.lcs.mit.edu>
 <CAC20D2O=ncZ3ybjgTyMcM5+rGBPnJX1-E-312sPcm4wb+GE8hA@mail.gmail.com>
 <3950d997-d310-7cfc-30bf-237e9d872739@spamtrap.tnetconsulting.net>
 <CAC20D2Nt+CuDriSh18ypOiG3+pJm+yvy4-VQ0wYUa8A7QiCrYA@mail.gmail.com>
 <f8d76407-f217-f444-653d-caac36012d1b@spamtrap.tnetconsulting.net>
 <CAC20D2OQcoURrODRr6VfsKyYmg-__G2seTsuLedss+WL5m-ySg@mail.gmail.com>
Message-ID: <20180626085756.GA96296@accordion.employees.org>

On Mon, Jun 25, 2018 at 05:15:00PM -0400, Clem Cole wrote:
> > Wait, are you saying that qmail was written as a replacement for PMDF? Or
> > that you used qmail as your replacement for PMDF?
> 
> Unclear - why Borstein does anything in my mind.

Bernstein.

> Best I can tell he wrote
> because he (and a lot of us) disliked sendmail.   I think he felt MMDF was
> too much of a mess by that point and it was time to start over.   He had
> already did a replacement for bind.  But it would have primarily been a

As I recall qmail came before djbdns.  I started using qmail in the 90s in
its pre-1.0 version because I wanted an SMTP MTA, didn't want to use
sendmail, and had wasted too much time trying to get my head around MMDF.

qmail had the advantage of being small and simple, with easily understood
distinct parts, and one could easily extend it - sort of in the tools
approach, by plugging bits into its dataflow pipeline.

The code was a bit 'interesting', but not too awkward.

DF


From tfb at tfeb.org  Tue Jun 26 21:29:27 2018
From: tfb at tfeb.org (Tim Bradshaw)
Date: Tue, 26 Jun 2018 12:29:27 +0100
Subject: [TUHS] off-topic list
In-Reply-To: <f8d76407-f217-f444-653d-caac36012d1b@spamtrap.tnetconsulting.net>
References: <20180625161016.C16BA18C082@mercury.lcs.mit.edu>
 <CAC20D2O=ncZ3ybjgTyMcM5+rGBPnJX1-E-312sPcm4wb+GE8hA@mail.gmail.com>
 <3950d997-d310-7cfc-30bf-237e9d872739@spamtrap.tnetconsulting.net>
 <CAC20D2Nt+CuDriSh18ypOiG3+pJm+yvy4-VQ0wYUa8A7QiCrYA@mail.gmail.com>
 <f8d76407-f217-f444-653d-caac36012d1b@spamtrap.tnetconsulting.net>
Message-ID: <05839822-7BBF-4071-871B-E463308C1B26@tfeb.org>


> On 25 Jun 2018, at 21:47, Grant Taylor via TUHS <tuhs at minnie.tuhs.org> wrote:
> 
> I wouldn't bet that (the C version of) PMDF was reimplemented in Java just because the name contained Java.  I seem to recall Sun putting Java in the name of many products at the time.

This is true: Java was (they thought) so important that they labelled everything with it.  They changed the stock symbol from SUNW to JAVA at some point, even.  That has to be a really good example of fiddling while Rome burns.


From dot at dotat.at  Tue Jun 26 23:09:27 2018
From: dot at dotat.at (Tony Finch)
Date: Tue, 26 Jun 2018 14:09:27 +0100
Subject: [TUHS] off-topic list
In-Reply-To: <CAC20D2Nt+CuDriSh18ypOiG3+pJm+yvy4-VQ0wYUa8A7QiCrYA@mail.gmail.com>
References: <20180625161016.C16BA18C082@mercury.lcs.mit.edu>
 <CAC20D2O=ncZ3ybjgTyMcM5+rGBPnJX1-E-312sPcm4wb+GE8hA@mail.gmail.com>
 <3950d997-d310-7cfc-30bf-237e9d872739@spamtrap.tnetconsulting.net>
 <CAC20D2Nt+CuDriSh18ypOiG3+pJm+yvy4-VQ0wYUa8A7QiCrYA@mail.gmail.com>
Message-ID: <alpine.DEB.2.11.1806261408380.916@grey.csi.cam.ac.uk>

Clem Cole <clemc at ccc.com> wrote:

> Anyway, the folks that did PMDF formed a small firm and sold it for a
> while.  There was a commercial IP implementation from France call TUV
> for VMS and IIRC, the TUV folks bought PMDF and whole thing got sold to
> a lot people and had quite a ride .

Still has, it seems! http://www.process.com/products/pmdf/

Tony.
-- 
f.anthony.n.finch  <dot at dotat.at>  http://dotat.at/
East Forties: Northerly 4 or 5. Slight, occasionally moderate until later.
Fair. Good, occasionally poor.


From michael at kjorling.se  Wed Jun 27 01:57:58 2018
From: michael at kjorling.se (Michael =?utf-8?B?S2rDtnJsaW5n?=)
Date: Tue, 26 Jun 2018 15:57:58 +0000
Subject: [TUHS] off-topic list
In-Reply-To: <CAC20D2Nt+CuDriSh18ypOiG3+pJm+yvy4-VQ0wYUa8A7QiCrYA@mail.gmail.com>
References: <20180625161016.C16BA18C082@mercury.lcs.mit.edu>
 <CAC20D2O=ncZ3ybjgTyMcM5+rGBPnJX1-E-312sPcm4wb+GE8hA@mail.gmail.com>
 <3950d997-d310-7cfc-30bf-237e9d872739@spamtrap.tnetconsulting.net>
 <CAC20D2Nt+CuDriSh18ypOiG3+pJm+yvy4-VQ0wYUa8A7QiCrYA@mail.gmail.com>
Message-ID: <20180626155758.GC29822@h-174-65.A328.priv.bahnhof.se>

On 25 Jun 2018 16:09 -0400, from clemc at ccc.com (Clem Cole):
> Borstien

Actually, I'm pretty sure his name is Bernstein.

Debian's package repository seems to agree, calling the original
author of qmail Daniel Bernstein.

-- 
Michael Kjörling • https://michael.kjorling.se • michael at kjorling.se
  “The most dangerous thought that you can have as a creative person
              is to think you know what you’re doing.” (Bret Victor)


From beebe at math.utah.edu  Wed Jun 27 03:54:24 2018
From: beebe at math.utah.edu (Nelson H. F. Beebe)
Date: Tue, 26 Jun 2018 11:54:24 -0600
Subject: [TUHS]  PDP-11 legacy, C, and modern architectures
Message-ID: <CMM.0.96.0.1530035664.beebe@gamma.math.utah.edu>

There is a provocative article published today in the latest issue of
Communications of the ACM:

	David Chisnall
	C is not a low-level language
	Comm ACM 61(7) 44--48 July 2018
	https://doi.org/10.1145/3209212

Because C is the implementation language of choice for a substantial
part of the UNIX world, it seems useful to announce the new article to
TUHS list members.

David Chisnall discusses the PDP-11 legacy, the design of C, and the
massive parallelism available in modern processors that is not so easy
to exploit in C, particularly, portable C.  He also observes:

>> ...
>> A processor designed purely for speed, not for a compromise between
>> speed and C support, would likely support large numbers of threads,
>> have wide vector units, and have a much simpler memory model. Running
>> C code on such a system would be problematic, so, given the large
>> amount of legacy C code in the world, it would not likely be a
>> commercial success.
>> ...

-------------------------------------------------------------------------------
- Nelson H. F. Beebe                    Tel: +1 801 581 5254                  -
- University of Utah                    FAX: +1 801 581 4148                  -
- Department of Mathematics, 110 LCB    Internet e-mail: beebe at math.utah.edu  -
- 155 S 1400 E RM 233                       beebe at acm.org  beebe at computer.org -
- Salt Lake City, UT 84112-0090, USA    URL: http://www.math.utah.edu/~beebe/ -
-------------------------------------------------------------------------------


From imp at bsdimp.com  Wed Jun 27 04:04:38 2018
From: imp at bsdimp.com (Warner Losh)
Date: Tue, 26 Jun 2018 12:04:38 -0600
Subject: [TUHS] off-topic list
In-Reply-To: <alpine.DEB.2.11.1806261408380.916@grey.csi.cam.ac.uk>
References: <20180625161016.C16BA18C082@mercury.lcs.mit.edu>
 <CAC20D2O=ncZ3ybjgTyMcM5+rGBPnJX1-E-312sPcm4wb+GE8hA@mail.gmail.com>
 <3950d997-d310-7cfc-30bf-237e9d872739@spamtrap.tnetconsulting.net>
 <CAC20D2Nt+CuDriSh18ypOiG3+pJm+yvy4-VQ0wYUa8A7QiCrYA@mail.gmail.com>
 <alpine.DEB.2.11.1806261408380.916@grey.csi.cam.ac.uk>
Message-ID: <CANCZdfpEuvUbBXUyxoKCHoA4cLkRHJ-8KoFOSVBgdOsAOsX6PQ@mail.gmail.com>

On Tue, Jun 26, 2018 at 7:09 AM, Tony Finch <dot at dotat.at> wrote:

> Clem Cole <clemc at ccc.com> wrote:
>
> > Anyway, the folks that did PMDF formed a small firm and sold it for a
> > while.  There was a commercial IP implementation from France call TUV
> > for VMS and IIRC, the TUV folks bought PMDF and whole thing got sold to
> > a lot people and had quite a ride .
>
> Still has, it seems! http://www.process.com/products/pmdf/


It was TGV that produced MultiNet. It was not from France, but named for
the fast train in France. They were located in Santa Cruz. They were the
main competitor to TWG, The Wollongong Group, in the VMS TCP/IP space.

Warner
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://minnie.tuhs.org/pipermail/tuhs/attachments/20180626/8d80a5b4/attachment.html>

From ron at ronnatalie.com  Wed Jun 27 04:52:23 2018
From: ron at ronnatalie.com (Ronald Natalie)
Date: Tue, 26 Jun 2018 14:52:23 -0400
Subject: [TUHS] PDP-11 legacy, C, and modern architectures
In-Reply-To: <CMM.0.96.0.1530035664.beebe@gamma.math.utah.edu>
References: <CMM.0.96.0.1530035664.beebe@gamma.math.utah.edu>
Message-ID: <1BC6FEE2-8989-4A67-A5E3-24E3436FE70E@ronnatalie.com>

Try this link:  https://queue.acm.org/detail.cfm?id=3212479

> On Jun 26, 2018, at 1:54 PM, Nelson H. F. Beebe <beebe at math.utah.edu> wrote:
> 
> There is a provocative article published today in the latest issue of
> Communications of the ACM:
> 
> 	David Chisnall
> 	C is not a low-level language
> 	Comm ACM 61(7) 44--48 July 2018
> 	https://doi.org/10.1145/3209212
> 

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://minnie.tuhs.org/pipermail/tuhs/attachments/20180626/234c9d5e/attachment.html>

From ron at ronnatalie.com  Wed Jun 27 05:01:50 2018
From: ron at ronnatalie.com (Ronald Natalie)
Date: Tue, 26 Jun 2018 15:01:50 -0400
Subject: [TUHS] PDP-11 legacy, C, and modern architectures
In-Reply-To: <CMM.0.96.0.1530035664.beebe@gamma.math.utah.edu>
References: <CMM.0.96.0.1530035664.beebe@gamma.math.utah.edu>
Message-ID: <BE30EA2F-6E25-4197-92B4-940BF8BBF0AD@ronnatalie.com>

I’m not sure I buy his arguments.    First off, he argues that a true low-level language requires knowledge of the “irrelevant”, and then he goes and argues that with C you need such knowledge on anything other than a PDP-11.
His argument that whoever he surveyed is ignorant of how C handles padding is equally pointless.     Further, his architecture world seems to be roughly limited to PDP-11’s and Intel x86 chips.

I’ve ported UNIX to a number of machines from the Denelcor HEP MIMD supercomputer to various micros from x86 to i860 etc…  In addition, I’ve ported high-performance computer graphics applications to just about every UNIX platform available (my app is pretty much an OS unto itself) including the x86 of various flavors, MIPS, 68000, Sparc, Stellar, Ardent, Apollo DN1000, HP9000, Itanium, Alpha, PA-RISC, i860 (of various configurations), etc…   All done in C.   All done with exacting detail.   Yes, you do have to understand the underlying code being generated, but it is not as bad as he thinks.    In fact, all his arguments suggest that C does fit his definition of a low-level language.



From steffen at sdaoden.eu  Wed Jun 27 06:38:38 2018
From: steffen at sdaoden.eu (Steffen Nurpmeso)
Date: Tue, 26 Jun 2018 22:38:38 +0200
Subject: [TUHS] off-topic list
In-Reply-To: <c4debab5-181d-c071-850a-a8865d6e98d0@spamtrap.tnetconsulting.net>
References: <CMM.0.96.0.1529621068.beebe@gamma.math.utah.edu>
 <f1a2f732-400f-8044-1c90-9a8500a17d15@spamtrap.tnetconsulting.net>
 <20180621234706.GA23316@minnie.tuhs.org>
 <alpine.BSF.2.21.999.1806221416210.68981@aneurin.horsfall.org>
 <20180622142846.GS21272@mcvoy.com>
 <DE56C21F-CF7D-4B44-BC43-0C27CBD6DD7A@tfeb.org>
 <20180622145402.GT21272@mcvoy.com> <20180622151751.BEK9i%steffen@sdaoden.eu>
 <b6ef82de-739a-ed8e-0e91-3abfa2fb5f07@spamtrap.tnetconsulting.net>
 <20180622192505.mfig_%steffen@sdaoden.eu>
 <89e5ae21-ccc0-5c84-837b-120a1a7d9e26@spamtrap.tnetconsulting.net>
 <20180623144959.M9byU%steffen@sdaoden.eu>
 <ce6f617c-cf8e-63c6-8186-27e09c78020c@spamtrap.tnetconsulting.net>
 <20180623223851.LcBjy%steffen@sdaoden.eu>
 <09ee8833-c8c0-8911-751c-906b737209b7@spamtrap.tnetconsulting.net>
 <20180625155103.6iH7s%steffen@sdaoden.eu>
 <c4debab5-181d-c071-850a-a8865d6e98d0@spamtrap.tnetconsulting.net>
Message-ID: <20180626203838.m_JEb%steffen@sdaoden.eu>

Hello.  And sorry for the late reply, "lovely weather we're
having"..

Grant Taylor via TUHS wrote in <c4debab5-181d-c071-850a-a8865d6e98d0 at spa\
mtrap.tnetconsulting.net>:
 |On 06/25/2018 09:51 AM, Steffen Nurpmeso wrote:
 ...
 |> I cannot really imagine how hard it is to write a modern web browser, 
 |> with the highly integrative DOM tree, CSS and Javascript and such, 
 |> however, and the devil is in the details anyway.
 |
 |Apparently it's difficult.  Or people aren't sufficiently motivated.
 |
 |I know that I personally can't do it.
 |
 |> Good question.  Then again, young men (and women) need to have a chance 
 |> to do anything at all.  Practically speaking.  For example we see almost 
 |> thousands of new RFCs per year.  That is more than in the first about 
 |> two decades all in all.  And all is getting better.
 |
 |Yep.  I used to have aspirations of reading RFCs.  But work and life 
 |happened, then the rate of RFC publishing exploded, and I gave up.

I think it is something comparable.

 |> Really?  Not that i know of.  Resolvers should be capable to provide 
 |> quality of service, if multiple name servers are known, i would say.
 |
 |I typically see this from the authoritative server side.  I've 
 |experienced this (at least) once myself and have multiple colleagues 
 |that have also experienced this (at least) once themselves too.
 |
 |If the primary name server is offline for some reason (maintenance, 
 |outage, hardware, what have you) clients start reporting a problem that 
 |manifests itself as a complete failure.  They seem to not fall back to 
 |the secondary name server.
 |
 |I'm not talking about "slowdown" complaints.  I'm talking about 
 |"complete failure" and "inability to browse the web".
 |
 |I have no idea why this is.  All the testing that I've done indicate 
 |that clients fall back to secondary name servers.  Yet multiple 
 |colleagues and myself have experienced this across multiple platforms.
 |
 |It's because of these failure symptoms that I'm currently working with a 
 |friend & colleague to implement VRRP between his name servers so that 
 |either of them can serve traffic for both DNS service IPs / Virtual IPs 
 |(VIPs) in the event that the other server is unable to do so.
 |
 |We all agree that a 5 ~ 90 second (depending on config) outage with 
 |automatic service continuation after VIP migration is a LOT better than 
 |end users experiencing what are effectively service outages of the 
 |primary DNS server.
 |
 |Even in the outages, the number of queries to the secondary DNS 
 |server(s) don't increase like you would expect as clients migrate from 
 |the offline primary to the online secondary.
 |
 |In short, this is a big unknown that I, and multiple colleagues, have 
 |seen and can't explain.  So, we have each independently decided to 
 |implement solutions to keep the primary DNS IP address online using what 
 |ever method we prefer.

That is beyond me, i have never dealt with the server side.  I had
a pretty normal QOS client: a sequentially used searchlist,
configurable DNS::conf_retries, creating SERVFAIL cache entries if
all bets were off.  Placing failing servers last in the SBELT, but
keeping them; i see a dangling TODO note for an additional place for
failing nameservers, to have them rest for a while.
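Concretely, the SBELT in such a resolver is seeded from the `nameserver'
lines of /etc/resolv.conf, something like the fragment below (addresses
are from the documentation range, purely illustrative; the `options' line
is a glibc-style extension):

    # /etc/resolv.conf -- illustrative
    nameserver 192.0.2.1        # tried first
    nameserver 192.0.2.2        # fallback when the first times out
    options timeout:2 attempts:3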

 |> This is even RFC 1034 as i see, SLIST and SBELT, whereas the latter 
 |> i filled in from "nameserver" as of /etc/resolv.conf, it should have 
 |> multiple, then.  (Or point to localhost and have a true local resolver 
 |> or something like dnsmasq.)
 |
 |I completely agree that per all documentation, things SHOULD work with 
 |just the secondary.  Yet my experience from recursive DNS server 
 |operator stand point, things don't work.

That is bad.

 |> I do see DNS failures via Wifi but that is not the fault of DNS, but of 
 |> the provider i use.
 |
 |Sure.  That's the nature of a wireless network.  It's also unrelated to 
 |the symptoms I describe above.

Well, in fact i think there is something like a scheduler error
around there, too, because sometimes i am placed nowhere and get
no bits at all, most often at minute boundaries, and for minutes.
But well...

 |> P.S.: actually the only three things i have ever hated about DNS, and 
 |> i came to that in 2004 with EDNS etc. all around yet, is that backward 
 |> compatibly has been chosen for domain names, and therefore we gained 
 |> IDNA, which is a terribly complicated brainfuck thing that actually 
 |> caused incompatibilities, but these where then waved through and ok. 
 |> That is absurd.
 |
 |I try to avoid dealing with IDNs.  Fortunately I'm fairly lucky in \
 |doing so.

I censor this myself.  I for one would vote for UTF-7 with
a different ACE (ASCII compatible encoding) prefix, UTF-7 is
somewhat cheap to implement and used for, e.g., the pretty
omnipresent IMAP.  I mean, i (but who am i) have absolutely
nothing against super smart algorithms somewhere, for example in
a software that domain registrars have to run in order to grant
domain names.  But the domain name as such should just be it,
otherwise it is .. not.  Yes that is pretty Hollywood but if
i mistype an ASCII hostname i also get a false result.

Anyway: fact is that for example German IDNA 2008 rules are
incompatible with the earlier ones: really, really bizarre.

 |> And the third is DNSSEC, which i read the standard of and said "no". 
 |> Just last year or the year before that we finally got DNS via TCP/TLS 
 |> and DNS via DTLS, that is, normal transport security!
 |
 |My understanding is that DNSSEC provides verifiable authenticity of the 
 |information transported by DNSSEC.  It's also my understanding that 
 |DTLS, DOH, DNScrypt, etc provide DNS /transport/ encryption 
 |(authentication & privacy).
 |
 |As I understand it, there is nothing about DTLS, DOH, DNScrypt, etc that 
 |prevent servers on the far end of said transports from modifying queries 
 |/ responses that pass through them, or serving up spoofed content.
 |
 |To me, DNSSEC serves a completely different need than DTLS, DOH, 
 |DNScrypt, etc.
 |
 |Please correct me and enlighten me if I'm wrong or have oversimplified 
 |things.

No, a part is that you can place signatures which can be used to
verify the content that is actually delivered.  This is right.
And yes, this is different to transport layer security.  And DNS
is different in that data is cached anywhere in the distributed
topology, so having the data ship with verifiable signatures may
sound sound.  But when i look around this is not what is in use,
twenty years thereafter.  I mean you can look up netbsd.org and see
how it works, and if you have a resolver which can make use of it
that would be ok (mine can not), but as long as those signatures are
not mandatory they can be left out somewhere on the way, can they not.
My VM hoster offers two nameservers and explicitly does not
support DNSSEC for example (or did so once we made the contract,
about 30 months ago).

Then again i wish (i mean, it is not that bad in respect to this,
politics or philosophy are something different) i could reach out
securely to the mentioned servers.  In the end you need to put
trust in someone, i mean, most of you have smartphones, and some
processors run entire operating systems aside the running
operating system, and the one software producer who offers its
entire software open on a public hoster is condemned whereas
others who do not publish any source code are taken along to the
toilet and into the bedroom; that is anything but in order.

So i am sitting behind that wall and have to take what i get.
And then i am absolutely convinced that humans make a lot of
faults, and anything that does not autoconfigure correctly is
prone to misconfiguration and errors, whatever may be the cause,
from a hoped-for girl friend to a dying suffering wife, you know.

I think i personally could agree with these signatures if the
transport would be secure, and if i could make a short single
connection to the root server of a zone and get the certificate
along with the plain TLS handshake.  What i mean is, the DNS would
automatically provide signatures based on the very same key that
is used for TLS, and the time to live for all delivered records is
on par with the lifetime of that certificate at maximum.
No configuration at all, automatically secure, a lot less code to
maintain.  And maybe even knowledge whether signatures are to be
expected for a zone, so that anything else can be treated as
spoofed.

All this is my personal blabla though.  I think that thinking is
not bad, however.  Anyway anybody who is not driving all the
server her- or himself is putting trust since decades and all the
time, and if i can have a secure channel to those people i (put)
trust (in) then this is a real improvement.  If those people do
the same then i have absolutely zero problems with only encrypted
transport as opposed to open transport and signed data.

 |> Twenty years too late, but i really had good days when i saw those 
 |> standards flying by!  Now all we need are zone administrators which 
 |> publish certificates via DNS and DTLS and TCP/TLS consumers which can 
 |> integrate those in their own local pool (for at least their runtime).
 |
 |DANE has been waiting on that for a while.
 |
 |I think DANE does require DNSSEC.  I think DKIM does too.

I have the RFCs around... but puuh.  DKIM says

   The DNS is proposed as the initial mechanism for the public keys.
   Thus, DKIM currently depends on DNS administration and the security
   of the DNS system.  DKIM is designed to be extensible to other key
   fetching services as they become available.
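For illustration, such a DKIM public key typically lives in a TXT record under a selector label in the signing domain's zone; the selector, domain, and (truncated) key material below are made up:

```
sel2018._domainkey.example.org. 3600 IN TXT (
        "v=DKIM1; k=rsa; "
        "p=MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQ..." )
```

A verifier reconstructs the query name from the s= and d= tags of the DKIM-Signature header and fetches this record over ordinary DNS, which is exactly why the security of DKIM leans on the security of the DNS.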

 |> I actually hate the concept very, very much ^_^, for me it has 
 |> similarities with brainfuck.  I could not use it.
 |
 |Okay.  To each his / her own.

Of course.

 |> Except that this will work only for all-english, as otherwise character 
 |> sets come into play, text may be in a different character set, mail 
 |> standards may impose a content-transfer encoding, and then what you are 
 |> looking at is actually a database, not the data as such.
 |
 |Hum.  I've not considered non-English as I so rarely see it in raw email 
 |format.
 |
 |> This is what i find so impressive about that Plan9 approach, where 
 |> the individual subparts of the message are available as files in the 
 |> filesystem, subjects etc. as such, decoded and available as normal \
 |> files. 
 |> I think this really is .. impressive.
 |
 |I've never had the pleasure of messing with Plan9.  It's on my 
 |proverbial To-Do-Someday list.  It does sound very interesting.

To me it is more about some of the concepts in there, i actually
cannot use it with the defaults.  I cannot use the editors Sam or
Acme, and i do not actually like using the mouse, which is
a problem.

 |> Thanks, eh, thanks.  Time will bring.. or not.
 |
 |;-)

Oh, i am hoping for "will", but it takes a lot of time.  It would
have been easier to start from scratch, and, well, in C++.  I have
never been a real application developer, i am more one for
libraries, or say, interfaces which enable you to do something.
Staying a bit in the back, but providing support as necessary.
Well.  I hope we get there.

 |> It is of course not the email that leaves you no more.  It is not just 
 |> headers are added to bring the traceroute path.  I do have a bad feeling 
 |> with these, but technically i do not seem to have an opinion.
 |
 |I too have some unease with things while not having any technical 
 |objection to them.
 |
 |> Well, this adds the burden onto TUHS.  Just like i have said.. but 
 |> you know, more and more SMTP servers connect directly via STARTTLS or 
 |> TCP/TLS right away.  TUHS postfix server does not seem to do so on the 
 |> sending side
 |
 |The nature of email is (and has been) changing.
 |
 |We're no longer trying to use 486s to manage mainstream email.  My 
 |opinion is that we have the CPU cycles to support current standards.
 |
 |> -- you know, i am not an administrator, no earlier but on the 15th of March 
 |> this year i realized that my Postfix did not reiterate all the smtpd_* 
 |> variables as smtp_* ones, resulting in my outgoing client connections 
 |> to have an entirely different configuration than what i provided for 
 |> what i thought is "the server", then i did, among others
 |> 
 |> smtpd_tls_security_level = may +smtp_tls_security_level = 
 |> $smtpd_tls_security_level
 |
 |Oy vey.

Hm.

 |> But if TUHS did, why should it create a DKIM signature?  Ongoing is the 
 |> effort to ensure SMTP uses TLS all along the route, i seem to recall i 
 |> have seen RFCs pass by which accomplish that.  Or only drafts??  Hmmm.
 |
 |SMTP over TLS (STARTTLS) is just a transport.  I can send anything I 
 |want across said transport.

That is true.

 |DKIM is about enabling downstream authentication in email.  Much like 
 |DNSSEC does for DNS.
 |
 |There is nothing that prevents sending false information down a secure 
 |communications channel.

I mean, that argument is an all-destroying hammer.  But it is
true.  Of course.

 |> Well S/MIME does indeed specify this mode of encapsulating the entire 
 |> message including the headers, and enforce MUAs to completely ignore the 
 |> outer envelope in this case.  (With a RFC discussing problems of this 
 |> approach.)
 |
 |Hum.  That's contrary to my understanding.  Do you happen to have RFC 
 |and section numbers handy?  I've wanted to, needed to, go read more on 
 |S/MIME.  It now sounds like I'm missing something.

In the draft "Considerations for protecting Email header with
S/MIME" by Melnikov there is:

    [RFC5751] describes how to protect the Email message
    header [RFC5322], by wrapping the message inside a message/rfc822
    container [RFC2045]:

    In order to protect outer, non-content-related message header
    fields (for instance, the "Subject", "To", "From", and "Cc"
    fields), the sending client MAY wrap a full MIME message in a
    message/rfc822 wrapper in order to apply S/MIME security services
    to these header fields.  It is up to the receiving client to
    decide how to present this "inner" header along with the
    unprotected "outer" header.

    When an S/MIME message is received, if the top-level protected
    MIME entity has a Content-Type of message/rfc822, it can be
    assumed that the intent was to provide header protection.  This
    entity SHOULD be presented as the top-level message, taking into
    account header merging issues as previously discussed.

I am a bit behind; the discussion after the Efail report this
spring has revealed some new developments to me that i did not
know about, and i have not yet found the time to look at those.

 |I wonder if the same can be said about PGP.

Well yes, i think the same is true for those, i have already
received encrypted mails which ship a MIME multipart message, the
first part being 'Content-Type: text/rfc822-headers;
protected-headers="v1"', the second providing the text.

 |> The BSD Mail clone i maintain does not support this yet, along with \
 |> other 
 |> important aspects of S/MIME, like the possibility to "self-encrypt" (so 
 |> that the message can be read again, a.k.a. that the encrypted version 
 |> lands on disk in a record, not the plaintext one).  I hope it will be 
 |> part of the OpenPGP, actually privacy rewrite this summer.
 |
 |I thought that many MUAs handled that problem by adding the sender as an 
 |additional recipient in the S/MIME structure.  That way the sender could 
 |extract the ephemeral key using their private key and decrypt the 
 |encrypted message that they sent.

Yes, that is how this is done.  The former maintainer rather
implemented this as something like GnuPG's --hidden-recipient
option, more or less: each receiver gets her or his own S/MIME
encrypted copy.  The call chain itself never sees such a mail.

 --End of <c4debab5-181d-c071-850a-a8865d6e98d0 at spamtrap.tnetconsulting.net>

Grant Taylor via TUHS wrote in <5da463dd-fb08-f601-68e3-197e720d5716 at spa\
mtrap.tnetconsulting.net>:
 |On 06/25/2018 10:10 AM, Steffen Nurpmeso wrote:
 |> DKIM reuses the *SSL key infrastructure, which is good.
 |
 |Are you saying that DKIM relies on the traditional PKI via CA 
 |infrastructure?  Or are you saying that it uses similar technology that 
 |is completely independent of the PKI / CA infrastructure?

I mean it uses those *SSL tools and thus an infrastructure that is
standardized as such and very widely used, seen by many eyes.

 |> (Many eyes see the code in question.)  It places records in DNS, which 
 |> is also good, now that we have DNS over TCP/TLS and (likely) DTLS. 
 |> In practice however things may differ and to me DNS security is all in 
 |> all not given as long as we get to the transport layer security.
 |
 |I believe that a secure DNS /transport/ is not sufficient.  Simply 
 |securing the communications channel does not mean that the entity on the 
 |other end is not lying.  Particularly when not talking to the 
 |authoritative server, likely by relying on caching recursive resolvers.

That record distribution as above, yes, but still: caching those
TSIGs, or whatever their name was, is not mandatory, i think.

 |> I personally do not like DKIM still, i have opendkim around and 
 |> thought about it, but i do not use it, i would rather wish that public 
 |> TLS certificates could also be used in the same way as public S/MIME 
 |> certificates or OpenPGP public keys work, then only by going to a TLS 
 |> endpoint securely once, there could be end-to-end security.
 |
 |All S/MIME interactions that I've seen do use certificates from a well 
 |know CA via the PKI.
 |
 |(My understanding of) what you're describing is encryption of data in 
 |flight.  That does nothing to encrypt / protect data at rest.
 |
 |S/MIME /does/ provide encryption / authentication of data in flight 
 |/and/ data at rest.
 |
 |S/MIME and PGP can also be used for things that never cross the wire.

As above, i just meant that if the DNS server is TLS protected,
that key could be used to automatically sign the entire zone data,
so that the entire signing and verification is automatic and can
be deduced from the key used for and by TLS.  Using the same
library, a single configuration line etc.  Records could then be
cached anywhere just the same as now.  Just an idea.

You can use self-signed S/MIME, you can use an OpenPGP key without
any web of trust.  Unlike with the latter, where anyone can upload
her or his key to the pool of OpenPGP (in practice rather GnuPG-only,
i would say) servers (sks-keyservers.net, which used a self-signed
TLS certificate last time i looked, btw., fingerprint
79:1B:27:A3:8E:66:7F:80:27:81:4D:4E:68:E7:C4:78:A4:5D:5A:17),
there is no such possibility for the former.  So whereas everybody
can look up the fingerprint i claim via hkps:// and can be pretty
sure that this really is me, no such thing exists for S/MIME.
(I for one was very disappointed when the new German passport was
developed around Y2K and no PGP or S/MIME support was included.
The Netherlands, i think, are much better in that.  But this is
a different story.)

But i think things happen here, that HTTP .well-known/ mechanism
seems to come into play for more and more things, programs learn
to use TOFU (trust on first use) and manage a dynamic local pool
of trusted certificates.  (I do not know exactly, but i have seen
messages fly by which made me think the mutt MUA put some work
into that, for example.)  So this is decentralized, then.

 |> I am not a cryptographer, however.  (I also have not read the TLS v1.3 
 |> standard which is about to become reality.)  The thing however is that 
 |> for DKIM a lonesome user cannot do anything -- you either need to have 
 |> your own SMTP server, or you need to trust your provider.
 |
 |I don't think that's completely accurate.  DKIM is a method of signing 
 |(via cryptographic hash) headers as you see (send) them.  I see no 
 |reason why a client can't add DKIM headers / signature to messages it 
 |sends to the MSA.
 |
 |Granted, I've never seen this done.  But I don't see anything preventing 
 |it from being the case.
 |
 |> But our own user interface is completely detached.  (I mean, at least 
 |> if no MTA is used one could do the DKIM stuff, too.)
 |
 |I know that it is possible to do things on the receiving side.  I've got 
 |the DKIM Verifier add-on installed in Thunderbird, which gives me client 
 |side UI indication if the message that's being displayed still passes 
 |DKIM validation or not.  The plugin actually calculates the DKIM hash 
 |and compares it locally.  It's not just relying on a header added by 
 |receiving servers.

I meant that the MUA could calculate the DKIM stuff itself, but
this works only if the MTA does not add or change headers.  That
is what i referred to; DKIM verification could also be done by
a MUA, if it implemented it.  Mine cannot.
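As a rough illustration of the arithmetic such a verifying MUA or plugin performs: the bh= value in a DKIM-Signature header is, for the "simple" body canonicalization, just a base64-encoded SHA-256 over the CRLF-normalized body with trailing empty lines removed. This is a sketch of that one piece, not a full RFC 6376 implementation (header canonicalization and the RSA signature check are omitted):

```python
import base64
import hashlib

def dkim_body_hash(body: bytes) -> str:
    # "simple" body canonicalization (RFC 6376, section 3.4.3):
    # normalize line endings to CRLF and strip trailing empty lines,
    # leaving exactly one CRLF at the end of the body.
    lines = body.replace(b"\r\n", b"\n").split(b"\n")
    while lines and lines[-1] == b"":
        lines.pop()
    canon = b"\r\n".join(lines) + b"\r\n"
    return base64.b64encode(hashlib.sha256(canon).digest()).decode()

print(dkim_body_hash(b"hello\nworld\n"))
```

A verifier recomputes this over the received body and compares it against the bh= tag; any MTA that rewrites the body in transit breaks the match, which is exactly the fragility described above.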

 --End of <5da463dd-fb08-f601-68e3-197e720d5716 at spamtrap.tnetconsulting.net>

Grant Taylor via TUHS wrote in <8c0da561-f786-8039-d2fc-907f2ddd09e3 at spa\
mtrap.tnetconsulting.net>:
 |On 06/25/2018 10:26 AM, Steffen Nurpmeso wrote:
 |> I have never understood why people get excited about Maildir, you have 
 |> trees full of files with names which hurt the eyes,
 |
 |First, a single Maildir is a parent directory and three sub-directories. 
 |  Many Maildir based message stores are collections of multiple 
 |individual Maildirs.

This is Maildir++, i think that was the name, yes.

 |Second, the names may look ugly, but they are named independently of the 
 |contents of the message.
 |
 |Third, I prefer the file per message as opposed to monolithic mbox for 
 |performance reasons.  Thus message corruption impacts a single message 
 |and not the entire mail folder (mbox).

Corruption should not happen, then.  This is true for each
database or repository, i think.

 |Aside:  I already have a good fast database that most people call a file 
 |system and it does a good job tracking metadata for me.

This can be true, yes.  I know of file systems which get very
slow when there are a lot of files in a directory, or which even
run into limits in the worst case.  (I have indeed once seen the
latter on a FreeBSD system with what i thought were good file
system defaults.)

 |Fourth, I found maildir to be faster on older / slower servers because 
 |it doesn't require copying (backing up) the monolithic mbox file prior 
 |to performing an operation on it.  It splits reads & writes into much 
 |smaller chunks that are more friendly (and faster) to the I/O sub-system.

I think this depends.  Mostly anything incoming is an appending
write, and in my use case everything is then moved away in one
go, too.  Then it definitely depends on the disks you have.
Before, i was using a notebook which had a 3600rpm hard disk,
then you want it compact.  And then it also depends on the
cleverness of the software.  Unfortunately my MUA cannot, but you
could have an index like git has, or like the already mentioned
Zawinski describes as it was used for Netscape.  I think the
enterprise mail server Dovecot also uses MBOX plus an index by
default, but i could be mistaken.  I mean, if you control the
file you do not need to perform an action immediately, for
example -- and then, when synchronization time happens, you end
up with a potentially large write on a single file, instead of
having to fiddle around with a lot of files.  But your experience
may vary.
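Python's standard mailbox module shows the two layouts under discussion side by side; the paths below are throwaway temporaries:

```python
import mailbox
import os
import tempfile

tmp = tempfile.mkdtemp()

# Maildir: one file per message, delivered atomically via tmp/
# and renamed into new/; cur/ holds messages already seen.
md = mailbox.Maildir(os.path.join(tmp, "maildir"), create=True)
md.add(b"From: a@example.org\n\nfirst message\n")
print(sorted(os.listdir(os.path.join(tmp, "maildir"))))

# mbox: all messages appended to one file, each introduced by
# a "From " delimiter line.
mbpath = os.path.join(tmp, "folder.mbox")
mb = mailbox.mbox(mbpath, create=True)
mb.add(b"From: a@example.org\n\nfirst message\n")
mb.flush()
with open(mbpath, "rb") as f:
    print(f.readline())  # the From_ delimiter line
```

The corruption and I/O-pattern trade-offs above follow directly from these layouts: the Maildir add touches one small new file, while rewriting an mbox folder means rewriting one large file.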

 |Could many of the same things be said about MH?  Very likely.  What I 
 |couldn't say about MH at the time I went looking and comparing (typical) 
 |unix mail stores was the readily available POP3 and IMAP interface. 
 |Seeing as how POP3 and IMAP were a hard requirement for me, MH was a 
 |non-starter.
 |
 |> they miss a From_ line (and are thus not valid MBOX files, i did not 
 |> miss that in the other mail).
 |
 |True.  Though I've not found that to be a limitation.  I'm fairly 
 |confident that the Return-Path: header that is added / replaced by my 
 |MTA does functionally the same thing.

With From_ line i meant that legendary line which i think was
introduced in Unix V5 mail and which precedes each mail message
in a MBOX file, and which causes that ">From" quoting at the
beginning of lines in non-MIME-willing (or so configured) MUAs.
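That quoting convention can be sketched in a few lines of Python (classic mboxo-style escaping; the reverse transformation is famously lossy, since a genuine ">From " cannot be told apart from a quoted one):

```python
def mboxo_quote(body: str) -> str:
    # Prefix ">" to any body line beginning with "From ", so the
    # line cannot be mistaken for the message delimiter when the
    # mail is appended to an mbox file.
    return "\n".join(
        ">" + line if line.startswith("From ") else line
        for line in body.split("\n")
    )

print(mboxo_quote("From here on things worked.\nAll fine.\n"))
```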

 |I'm sure similar differences can be had between many different solutions 
 |in Unix's history.  SYS V style init.d scripts vs BSD style /etc/rc\d 
 |style init scripts vs systemd or Solaris's SMD (I think that's its name).

Oh!  I am happy to have the former two on my daily machines
again, and i am again able to tell you what is actually going on
there (in user space), even without following some external
software development.  Just in case of interest.

 |To each his / her own.

It seems you will never be able to 1984 that from the top,
especially not over time.

 --End of <8c0da561-f786-8039-d2fc-907f2ddd09e3 at spamtrap.tnetconsulting.net>

Ciao.

--steffen
|
|Der Kragenbaer,                The moon bear,
|der holt sich munter           he cheerfully and one by one
|einen nach dem anderen runter  wa.ks himself off
|(By Robert Gernhardt)


From clemc at ccc.com  Wed Jun 27 06:56:34 2018
From: clemc at ccc.com (Clem Cole)
Date: Tue, 26 Jun 2018 16:56:34 -0400
Subject: [TUHS] Masscomp Engineering Mailing List,
 Custom Products Engineering and the Manatee
Message-ID: <CAC20D2N72Ot_jfSb3ER5ZZe0fMuVnZSpdWqrG-94-6wT+m0b3Q@mail.gmail.com>

Semi-Unix History....

Dave's "this date in history" stuff has reminded me of a great Masscomp
story that really needs to be written down and not lost to Unix history.
I'm hoping Warren can forgive me a little, as the Unix history is the
Masscomp part and the people involved; but I think it is fun and should be
more widely known.

Some of you may know who Jack Burness is (and may even be part of his humor
mailing list - which is one of the most amazing who's-whos of the
industry).  Jack is probably most infamous for being the author of RT-11
Moonlander <https://en.wikipedia.org/wiki/Lunar_Lander_(video_game_genre)>.
Some of his stunts over his career are legends; as our old friend and
colleague Mike Leibensperger once said, "I want to be like Jack when he
grows up" (at ~70, I can assure you that he still has not).

After Jack left DEC, he became the original one-man Masscomp Graphics
group.  While the year really does not matter, at some point Jack had been
given a tear-off desk calendar as a Christmas present from his girlfriend,
called the 'Sex Fact of the Day Calendar.'   Like Dave's daily reminders to
us on TUHS, every morning he typed in the one-line factoid and passed it on
to the Masscomp Engineering Mailing List.   This is important because the
VCs had just given us a strait-laced ex-IBM guy as a president named Gus
Klein (aka Mr. Potatohead), who did not read email, and of course did not
read the engineering mailing list.   He wanted the workspace to be
'professional' and look like his idea of an 'office', and his 'memos' were
as amazing as you can imagine.

The other important note to the story is that the late Roger Gourd, our
direct boss, had just created Masscomp's new Custom Product Engineering
(CPE) group, to be the analog of DEC's CSS.   When Roger had left DEC for
Gould he had, of course, moved to south Florida, where they were based,
and become quite a fisherman and protector of natural resources.  When he
was recruited to move back to New England to take on the reins of
development at Masscomp, he brought some of south Florida back with him.

Well, from Jack we had learned two important facts, and from those facts
realized that Mediterranean fishermen were liars: at some time in March it
was reported that fishermen in the Med who caught a female manatee in
their nets and brought it into their boat considered it bad luck unless
they sodomized it, because manatees were thought to be mermaids; and some
time in May we learned that the Roman Catholic Church would excommunicate
any fisherman if it was discovered that said fisherman had caught a manatee
and that manatee had been used as a sex toy.

Upon learning of this interesting catch-22, Gourd immediately sponsors the
"Save the Manatee Society in South Florida" and makes it the Group Mascot
for CPE.  Well, manatees start popping up all over engineering.  Mr.
Potatohead is clueless of the significance, of course.  At his memorial I
gave Roger's widow a stuffed manatee I still had from my desk from those
days; she knew, and laughed and laughed, knowing that Roger would have
approved.

BTW: I do, however, still keep 'Darth Tater' on my desk at Intel.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://minnie.tuhs.org/pipermail/tuhs/attachments/20180626/c89bbc30/attachment.html>

From steffen at sdaoden.eu  Wed Jun 27 07:09:06 2018
From: steffen at sdaoden.eu (Steffen Nurpmeso)
Date: Tue, 26 Jun 2018 23:09:06 +0200
Subject: [TUHS] off-topic list
In-Reply-To: <20180626155758.GC29822@h-174-65.A328.priv.bahnhof.se>
References: <20180625161016.C16BA18C082@mercury.lcs.mit.edu>
 <CAC20D2O=ncZ3ybjgTyMcM5+rGBPnJX1-E-312sPcm4wb+GE8hA@mail.gmail.com>
 <3950d997-d310-7cfc-30bf-237e9d872739@spamtrap.tnetconsulting.net>
 <CAC20D2Nt+CuDriSh18ypOiG3+pJm+yvy4-VQ0wYUa8A7QiCrYA@mail.gmail.com>
 <20180626155758.GC29822@h-174-65.A328.priv.bahnhof.se>
Message-ID: <20180626210906.ibNMf%steffen@sdaoden.eu>

Michael Kjörling wrote in <20180626155758.GC29822 at h-174-65.A328.priv.ba\
hnhof.se>:
 |On 25 Jun 2018 16:09 -0400, from clemc at ccc.com (Clem Cole):
 |> Borstien
 |
 |Actually, I'm pretty sure his name is Bernstein.

Absolutely, that is his name.  I did not dare to ask, and
suspected some internal joke or story, given that he was called
Bor as in Bored, and Stein is Stone, whereas Bernstein is
actually the German word for amber.

 |Debian's package repository seems to agree, calling the original
 |author of qmail Daniel Bernstein.

--steffen
|
|Der Kragenbaer,                The moon bear,
|der holt sich munter           he cheerfully and one by one
|einen nach dem anderen runter  wa.ks himself off
|(By Robert Gernhardt)


From krewat at kilonet.net  Wed Jun 27 07:16:56 2018
From: krewat at kilonet.net (Arthur Krewat)
Date: Tue, 26 Jun 2018 17:16:56 -0400
Subject: [TUHS] PDP-11 legacy, C, and modern architectures
In-Reply-To: <BE30EA2F-6E25-4197-92B4-940BF8BBF0AD@ronnatalie.com>
References: <CMM.0.96.0.1530035664.beebe@gamma.math.utah.edu>
 <BE30EA2F-6E25-4197-92B4-940BF8BBF0AD@ronnatalie.com>
Message-ID: <1f8043fd-e8d6-a5e6-5849-022d1a41f5bf@kilonet.net>



On 6/26/2018 3:01 PM, Ronald Natalie wrote:
> I’m not sure I buy his arguments.
I was going to say it was total and complete BS, at least based on the 
quoted statement in the initial email. But I decided not to send it. ;)

I wrote a POSIX thread based queuing system a few years back that could 
handle thousands of threads on a dual processor SPARC-10 before it just 
completely locked up Solaris (I think 9). It was targeted at larger 
systems, and it could easily scale as far as I wanted it to.

While you could argue that pthreads are not "C", the language was quite 
happy doing what I asked of it.

Sometimes, I wonder... Programmers are supposed to be smarter than the 
language. Not the other way around.

art k.





From clemc at ccc.com  Wed Jun 27 07:16:36 2018
From: clemc at ccc.com (Clem Cole)
Date: Tue, 26 Jun 2018 17:16:36 -0400
Subject: [TUHS] off-topic list
In-Reply-To: <CANCZdfpEuvUbBXUyxoKCHoA4cLkRHJ-8KoFOSVBgdOsAOsX6PQ@mail.gmail.com>
References: <20180625161016.C16BA18C082@mercury.lcs.mit.edu>
 <CAC20D2O=ncZ3ybjgTyMcM5+rGBPnJX1-E-312sPcm4wb+GE8hA@mail.gmail.com>
 <3950d997-d310-7cfc-30bf-237e9d872739@spamtrap.tnetconsulting.net>
 <CAC20D2Nt+CuDriSh18ypOiG3+pJm+yvy4-VQ0wYUa8A7QiCrYA@mail.gmail.com>
 <alpine.DEB.2.11.1806261408380.916@grey.csi.cam.ac.uk>
 <CANCZdfpEuvUbBXUyxoKCHoA4cLkRHJ-8KoFOSVBgdOsAOsX6PQ@mail.gmail.com>
Message-ID: <CAC20D2MEsnhU-Vw_te_hTPA5ht4V5h=CUnni9TqXXZfHrKucWg@mail.gmail.com>

On Tue, Jun 26, 2018 at 2:04 PM, Warner Losh <imp at bsdimp.com> wrote:
Ok, that all sounds right and I'll take your word for it.   I followed it
only from the side and not directly as a customer, since by then I was
really not doing much of anything with VMS.  That said, I had thought some
of the original folks that were part of the PMDF work were the same crew
that did SOL (Michel Gien - the Pascal rewrite of UNIX - whom I knew in
those days from the OS side of the world).  I also thought the reason why
the firm was named after the TGV (and yes, I stand corrected on the name)
was because they were French, and at the time the French bullet train was
known for being one of the fastest in the world, and the French were very
proud of it.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://minnie.tuhs.org/pipermail/tuhs/attachments/20180626/05d46970/attachment.html>

From clemc at ccc.com  Wed Jun 27 07:18:18 2018
From: clemc at ccc.com (Clem Cole)
Date: Tue, 26 Jun 2018 17:18:18 -0400
Subject: [TUHS] off-topic list
In-Reply-To: <20180626210906.ibNMf%steffen@sdaoden.eu>
References: <20180625161016.C16BA18C082@mercury.lcs.mit.edu>
 <CAC20D2O=ncZ3ybjgTyMcM5+rGBPnJX1-E-312sPcm4wb+GE8hA@mail.gmail.com>
 <3950d997-d310-7cfc-30bf-237e9d872739@spamtrap.tnetconsulting.net>
 <CAC20D2Nt+CuDriSh18ypOiG3+pJm+yvy4-VQ0wYUa8A7QiCrYA@mail.gmail.com>
 <20180626155758.GC29822@h-174-65.A328.priv.bahnhof.se>
 <20180626210906.ibNMf%steffen@sdaoden.eu>
Message-ID: <CAC20D2Mh8fF6NUSbbUyGJ_JimNN17u=N7tSD3hSv10FR-QzPEA@mail.gmail.com>

On Tue, Jun 26, 2018 at 5:09 PM, Steffen Nurpmeso <steffen at sdaoden.eu>
wrote:

>
>  |Actually, I'm pretty sure his name is Bernstein.
>

It is.   Many pardons....
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://minnie.tuhs.org/pipermail/tuhs/attachments/20180626/2c31eeaf/attachment.html>

From beebe at math.utah.edu  Wed Jun 27 07:21:03 2018
From: beebe at math.utah.edu (Nelson H. F. Beebe)
Date: Tue, 26 Jun 2018 15:21:03 -0600
Subject: [TUHS] PDP-11 legacy, C, and modern architectures
In-Reply-To: <f1a49622-ac75-dc08-3ad9-7e443763065b@texoma.net>
Message-ID: <CMM.0.96.0.1530048063.beebe@gamma.math.utah.edu>

>> DOI not found ...
>> Could it be that the write-up is going to take some time for the 
>> general public to see it?

Yes, that is a common problem with ACM journal publication
announcements: the URL

	https://dl.acm.org/citation.cfm?id=3209212

takes you today to a page that offers both HTML and PDF views of David
Chisnall's new article ``C is not a low-level language''.

-------------------------------------------------------------------------------
- Nelson H. F. Beebe                    Tel: +1 801 581 5254                  -
- University of Utah                    FAX: +1 801 581 4148                  -
- Department of Mathematics, 110 LCB    Internet e-mail: beebe at math.utah.edu  -
- 155 S 1400 E RM 233                       beebe at acm.org  beebe at computer.org -
- Salt Lake City, UT 84112-0090, USA    URL: http://www.math.utah.edu/~beebe/ -
-------------------------------------------------------------------------------


From ckeck at texoma.net  Wed Jun 27 04:03:24 2018
From: ckeck at texoma.net (Cornelius Keck)
Date: Tue, 26 Jun 2018 13:03:24 -0500
Subject: [TUHS] PDP-11 legacy, C, and modern architectures
In-Reply-To: <CMM.0.96.0.1530035664.beebe@gamma.math.utah.edu>
References: <CMM.0.96.0.1530035664.beebe@gamma.math.utah.edu>
Message-ID: <f1a49622-ac75-dc08-3ad9-7e443763065b@texoma.net>

Now, that sounds interesting.. only hiccup is that I'm getting a "DOI 
Not Found" for 10.1145/3209212. Could it be that the write-up is going 
to take some time for the general public to see it?

Nelson H. F. Beebe wrote:
> There is a provocative article published today in the lastest issue of
> Communications of the ACM:
>
> 	David Chisnall
> 	C is not a low-level language
> 	Comm ACM 61(7) 44--48 July 2018
> 	https://doi.org/10.1145/3209212
>
> Because C is the implementation language of choice for a substantial
> part of the UNIX world, it seems useful to announce the new article to
> TUHS list members.
>
> David Chisnall discusses the PDP-11 legacy, the design of C, and the
> massive parallelism available in modern processors that is not so easy
> to exploit in C, particularly, portable C.  He also observes:
>
>>> ...
>>> A processor designed purely for speed, not for a compromise between
>>> speed and C support, would likely support large numbers of threads,
>>> have wide vector units, and have a much simpler memory model. Running
>>> C code on such a system would be problematic, so, given the large
>>> amount of legacy C code in the world, it would not likely be a
>>> commercial success.
>>> ...
>


From lm at mcvoy.com  Wed Jun 27 07:50:12 2018
From: lm at mcvoy.com (Larry McVoy)
Date: Tue, 26 Jun 2018 14:50:12 -0700
Subject: [TUHS] PDP-11 legacy, C, and modern architectures
In-Reply-To: <1f8043fd-e8d6-a5e6-5849-022d1a41f5bf@kilonet.net>
References: <CMM.0.96.0.1530035664.beebe@gamma.math.utah.edu>
 <BE30EA2F-6E25-4197-92B4-940BF8BBF0AD@ronnatalie.com>
 <1f8043fd-e8d6-a5e6-5849-022d1a41f5bf@kilonet.net>
Message-ID: <20180626215012.GE8150@mcvoy.com>

On Tue, Jun 26, 2018 at 05:16:56PM -0400, Arthur Krewat wrote:
> 
> 
> On 6/26/2018 3:01 PM, Ronald Natalie wrote:
> >I’m not sure I buy his arguments.
> I was going to say it was total and complete BS, at least based on the
> quoted statement in the initial email. But I decided not to send it. ;)
> 
> I wrote a POSIX thread based queuing system a few years back that could
> handle thousands of threads on a dual processor SPARC-10 before it just
> completely locked up Solaris (I think 9). It was targeted at larger systems,
> and it could easily scale as far as I wanted it to.
> 
> While you could argue that pthreads are not "C", the language was quite
> happy doing what I asked of it.

So I agree, I had the same initial reaction.  But I read the paper a
second time, and the point about Fortran, all these years later, still
being a thing resonated.  The hardware guys stand on their heads to
give us coherent caches.

> Sometimes, I wonder... Programmers are supposed to be smarter than the
> language. Not the other way around.

That's a great quote.  But I do sort of grudgingly see the author's 
point of view, at least somewhat.

--lm


From ron at ronnatalie.com  Wed Jun 27 07:54:32 2018
From: ron at ronnatalie.com (Ronald Natalie)
Date: Tue, 26 Jun 2018 17:54:32 -0400
Subject: [TUHS] PDP-11 legacy, C, and modern architectures
In-Reply-To: <20180626215012.GE8150@mcvoy.com>
References: <CMM.0.96.0.1530035664.beebe@gamma.math.utah.edu>
 <BE30EA2F-6E25-4197-92B4-940BF8BBF0AD@ronnatalie.com>
 <1f8043fd-e8d6-a5e6-5849-022d1a41f5bf@kilonet.net>
 <20180626215012.GE8150@mcvoy.com>
Message-ID: <B9CFE92A-0090-4576-8E11-83EB63AE05EF@ronnatalie.com>

> 
> So I agree, had the same initial reaction.  But I read the paper a 
> second time and the point about Fortran, all these years later, still
> being a thing resonated.  The hardware guys stand on their heads to
> give us coherent caches.  

Fortran is a higher-level language.    It gives the compiler more flexibility in deciding what the programmer intended and how to automatically optimize for the platform.
C is often a “You asked for it, you got it” type paradigm.




From lm at mcvoy.com  Wed Jun 27 07:59:05 2018
From: lm at mcvoy.com (Larry McVoy)
Date: Tue, 26 Jun 2018 14:59:05 -0700
Subject: [TUHS] PDP-11 legacy, C, and modern architectures
In-Reply-To: <B9CFE92A-0090-4576-8E11-83EB63AE05EF@ronnatalie.com>
References: <CMM.0.96.0.1530035664.beebe@gamma.math.utah.edu>
 <BE30EA2F-6E25-4197-92B4-940BF8BBF0AD@ronnatalie.com>
 <1f8043fd-e8d6-a5e6-5849-022d1a41f5bf@kilonet.net>
 <20180626215012.GE8150@mcvoy.com>
 <B9CFE92A-0090-4576-8E11-83EB63AE05EF@ronnatalie.com>
Message-ID: <20180626215905.GH8150@mcvoy.com>

On Tue, Jun 26, 2018 at 05:54:32PM -0400, Ronald Natalie wrote:
> > 
> > So I agree, had the same initial reaction.  But I read the paper a 
> > second time and the point about Fortran, all these years later, still
> > being a thing resonated.  The hardware guys stand on their heads to
> > give us coherent caches.  
> 
> Fortran is a higher-level language.    It gives the compiler more flexibility in deciding what the programmer intended and how to automatically optimize for the platform.
> C is often a “You asked for it, you got it” type paradigm.

I think you are more or less agreeing with the author.  (I also think, as
Unix die hards, we all bridle a little when anyone dares to say anything
negative about C.  We should resist that if it gets in the way of making
things better.)

The author at least has me thinking about how you could make a C like 
language that didn't ask as much from the hardware.
-- 
---
Larry McVoy            	     lm at mcvoy.com             http://www.mcvoy.com/lm 


From khm at sciops.net  Wed Jun 27 07:56:04 2018
From: khm at sciops.net (Kurt H Maier)
Date: Tue, 26 Jun 2018 14:56:04 -0700
Subject: [TUHS] PDP-11 legacy, C, and modern architectures
In-Reply-To: <f1a49622-ac75-dc08-3ad9-7e443763065b@texoma.net>
References: <CMM.0.96.0.1530035664.beebe@gamma.math.utah.edu>
 <f1a49622-ac75-dc08-3ad9-7e443763065b@texoma.net>
Message-ID: <20180626215604.GA94145@wopr>

On Tue, Jun 26, 2018 at 01:03:24PM -0500, Cornelius Keck wrote:
> Now, that sounds interesting.. only hiccup is that I'm getting a "DOI 
> Not Found" for 10.1145/3209212. Could it be that the write-up is going 
> to take some time for the general public to see it?

ACM Communications is behind a paywall.  If you have a subscription it's
available here:

https://cacm.acm.org/magazines/2018/7/229036-c-is-not-a-low-level-language/abstract


khm


From bakul at bitblocks.com  Wed Jun 27 08:20:40 2018
From: bakul at bitblocks.com (Bakul Shah)
Date: Tue, 26 Jun 2018 15:20:40 -0700
Subject: [TUHS] PDP-11 legacy, C, and modern architectures
In-Reply-To: <20180626215905.GH8150@mcvoy.com>
References: <CMM.0.96.0.1530035664.beebe@gamma.math.utah.edu>
 <BE30EA2F-6E25-4197-92B4-940BF8BBF0AD@ronnatalie.com>
 <1f8043fd-e8d6-a5e6-5849-022d1a41f5bf@kilonet.net>
 <20180626215012.GE8150@mcvoy.com>
 <B9CFE92A-0090-4576-8E11-83EB63AE05EF@ronnatalie.com>
 <20180626215905.GH8150@mcvoy.com>
Message-ID: <117BE24B-C768-43ED-A470-40FCD1662ECD@bitblocks.com>

On Jun 26, 2018, at 2:59 PM, Larry McVoy <lm at mcvoy.com> wrote:
> 
> On Tue, Jun 26, 2018 at 05:54:32PM -0400, Ronald Natalie wrote:
>>> 
>>> So I agree, had the same initial reaction.  But I read the paper a 
>>> second time and the point about Fortran, all these years later, still
>>> being a thing resonated.  The hardware guys stand on their heads to
>>> give us coherent caches.  
>> 
>> Fortran is a higher-level language.    It gives the compiler more flexibility in deciding what the programmer intended and how to automatically optimize for the platform.
>> C is often a “You asked for it, you got it” type of paradigm.
> 
> I think you are more or less agreeing with the author.  (I also think, as
> Unix die hards, we all bridle a little when anyone dares to say anything
> negative about C.  We should resist that if it gets in the way of making
> things better.)

With new attacks like TLBleed etc. it is becoming increasingly clear that
caching (hidden memory to continue with the illusion of a simple memory
model) itself is a potential security issue. I didn't think anything the
author said was particularly controversial any more. A lot of processor
evolution seems to have been to accommodate C's simple memory model.

What is remarkable is how long this illusion has been maintained and
how far we have gotten with it.

> The author at least has me thinking about how you could make a C like 
> language that didn't ask as much from the hardware.

Erlang. Actor, vector & dataflow languages. Actually even C itself
can be used if it is used only on individual simple cores and instead
of caches any accessible memory is made explicit. Not sure if there
is a glue language for mapping & scheduling computation to a set of
simple cores with local memory and high speed links to their neighbors.




From akosela at andykosela.com  Wed Jun 27 08:33:29 2018
From: akosela at andykosela.com (Andy Kosela)
Date: Tue, 26 Jun 2018 17:33:29 -0500
Subject: [TUHS] PDP-11 legacy, C, and modern architectures
In-Reply-To: <20180626215905.GH8150@mcvoy.com>
References: <CMM.0.96.0.1530035664.beebe@gamma.math.utah.edu>
 <BE30EA2F-6E25-4197-92B4-940BF8BBF0AD@ronnatalie.com>
 <1f8043fd-e8d6-a5e6-5849-022d1a41f5bf@kilonet.net>
 <20180626215012.GE8150@mcvoy.com>
 <B9CFE92A-0090-4576-8E11-83EB63AE05EF@ronnatalie.com>
 <20180626215905.GH8150@mcvoy.com>
Message-ID: <CALMnNGhKmKrkQYEKQd=zGaBY74oUf1HSS1PbpnWoMiSgnsr=Tw@mail.gmail.com>

On Tuesday, June 26, 2018, Larry McVoy <lm at mcvoy.com> wrote:

> On Tue, Jun 26, 2018 at 05:54:32PM -0400, Ronald Natalie wrote:
> > >
> > > So I agree, had the same initial reaction.  But I read the paper a
> > > second time and the point about Fortran, all these years later, still
> > > being a thing resonated.  The hardware guys stand on their heads to
> > > give us coherent caches.
> >
> > Fortran is a higher-level language.    It gives the compiler more
> flexibility in deciding what the programmer intended and how to
> automatically optimize for the platform.
> > C is often a “You asked for it, you got it” type of paradigm.
>
> I think you are more or less agreeing with the author.  (I also think, as
> Unix die hards, we all bridle a little when anyone dares to say anything
> negative about C.  We should resist that if it gets in the way of making
> things better.)
>
> The author at least has me thinking about how you could make a C like
> language that didn't ask as much from the hardware.
> --
>
>
David Chisnall is known for pushing Go as a next-generation C.  He even
wrote a book about it.  I think he has a point in saying that Go was
created as a direct remedy to many things in C.  Most of its features come
from decades of experience working with C, and seeing ways in which it can
be improved.

--Andy
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://minnie.tuhs.org/pipermail/tuhs/attachments/20180626/be7fca1f/attachment.html>

From krewat at kilonet.net  Wed Jun 27 08:33:37 2018
From: krewat at kilonet.net (Arthur Krewat)
Date: Tue, 26 Jun 2018 18:33:37 -0400
Subject: [TUHS] PDP-11 legacy, C, and modern architectures
In-Reply-To: <117BE24B-C768-43ED-A470-40FCD1662ECD@bitblocks.com>
References: <CMM.0.96.0.1530035664.beebe@gamma.math.utah.edu>
 <BE30EA2F-6E25-4197-92B4-940BF8BBF0AD@ronnatalie.com>
 <1f8043fd-e8d6-a5e6-5849-022d1a41f5bf@kilonet.net>
 <20180626215012.GE8150@mcvoy.com>
 <B9CFE92A-0090-4576-8E11-83EB63AE05EF@ronnatalie.com>
 <20180626215905.GH8150@mcvoy.com>
 <117BE24B-C768-43ED-A470-40FCD1662ECD@bitblocks.com>
Message-ID: <769e0f86-eb90-0ec5-e744-319ab74dbd85@kilonet.net>

On 6/26/2018 6:20 PM, Bakul Shah wrote:
> it is becoming increasingly clear that
> caching (hidden memory to continue with the illusion of a simple memory
> model) itself is a potential security issue.

Then let's discuss why caching is the problem. If thread X reads memory 
location A, why is thread Y able to access that cached value? Shouldn't 
that cached value be associated with memory location A which I would 
assume would be in a protected space that thread Y shouldn't be able to 
access?

I know the nuts and bolts of how this cache exploit works, that's not 
what I'm asking.

What I'm asking is, why is cache accessible in the first place? Any 
cache offset should have the same memory protection as the value it 
represents. Isn't this the CPU manufacturer's fault?


art k.







From ggm at algebras.org  Wed Jun 27 09:45:03 2018
From: ggm at algebras.org (George Michaelson)
Date: Wed, 27 Jun 2018 09:45:03 +1000
Subject: [TUHS] off-topic list
In-Reply-To: <CAC20D2Mh8fF6NUSbbUyGJ_JimNN17u=N7tSD3hSv10FR-QzPEA@mail.gmail.com>
References: <20180625161016.C16BA18C082@mercury.lcs.mit.edu>
 <CAC20D2O=ncZ3ybjgTyMcM5+rGBPnJX1-E-312sPcm4wb+GE8hA@mail.gmail.com>
 <3950d997-d310-7cfc-30bf-237e9d872739@spamtrap.tnetconsulting.net>
 <CAC20D2Nt+CuDriSh18ypOiG3+pJm+yvy4-VQ0wYUa8A7QiCrYA@mail.gmail.com>
 <20180626155758.GC29822@h-174-65.A328.priv.bahnhof.se>
 <20180626210906.ibNMf%steffen@sdaoden.eu>
 <CAC20D2Mh8fF6NUSbbUyGJ_JimNN17u=N7tSD3hSv10FR-QzPEA@mail.gmail.com>
Message-ID: <CAKr6gn0_WSF0gKOKD6Tqm5wEPFxhoE3aeFodnGhcbY8Xw3fHpA@mail.gmail.com>

Nathaniel Borenstein did much work defining MIME. Maybe that's your moment
of confusion?

On Wed, Jun 27, 2018 at 7:18 AM, Clem Cole <clemc at ccc.com> wrote:

>
>
> On Tue, Jun 26, 2018 at 5:09 PM, Steffen Nurpmeso <steffen at sdaoden.eu>
> wrote:
>
>>
>>  |Actually, I'm pretty sure his name is Bernstein.
>> ​
>>
> ​It is​.   Many pardons....
> ᐧ
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://minnie.tuhs.org/pipermail/tuhs/attachments/20180627/36455f52/attachment.html>

From bakul at bitblocks.com  Wed Jun 27 09:53:00 2018
From: bakul at bitblocks.com (Bakul Shah)
Date: Tue, 26 Jun 2018 16:53:00 -0700
Subject: [TUHS] PDP-11 legacy, C, and modern architectures
In-Reply-To: <769e0f86-eb90-0ec5-e744-319ab74dbd85@kilonet.net>
References: <CMM.0.96.0.1530035664.beebe@gamma.math.utah.edu>
 <BE30EA2F-6E25-4197-92B4-940BF8BBF0AD@ronnatalie.com>
 <1f8043fd-e8d6-a5e6-5849-022d1a41f5bf@kilonet.net>
 <20180626215012.GE8150@mcvoy.com>
 <B9CFE92A-0090-4576-8E11-83EB63AE05EF@ronnatalie.com>
 <20180626215905.GH8150@mcvoy.com>
 <117BE24B-C768-43ED-A470-40FCD1662ECD@bitblocks.com>
 <769e0f86-eb90-0ec5-e744-319ab74dbd85@kilonet.net>
Message-ID: <FCFFF195-EA89-4F86-9A5C-B0946191E73D@bitblocks.com>

On Jun 26, 2018, at 3:33 PM, Arthur Krewat <krewat at kilonet.net> wrote:
> 
> On 6/26/2018 6:20 PM, Bakul Shah wrote:
>> it is becoming increasingly clear that
>> caching (hidden memory to continue with the illusion of a simple memory
>> model) itself is a potential security issue.
> 
> Then let's discuss why caching is the problem. If thread X reads memory location A, why is thread Y able to access that cached value? Shouldn't that cached value be associated with memory location A which I would assume would be in a protected space that thread Y shouldn't be able to access?
> 
> I know the nuts and bolts of how this cache exploit works, that's not what I'm asking.
> 
> What I'm asking is, why is cache accessible in the first place? Any cache offset should have the same memory protection as the value it represents. Isn't this the CPU manufacturer's fault?

As I understand it, the difference in cache access vs
other caches/memory access times allows for timing attacks.
By its nature a cache is much smaller than the next level
cache or memory so there will have to be a way to evict
stale data from it and there will be (false) sharing and
consequent access time difference. Knowledge of specific
attacks can help devise specific fixes but I don't think
we can say unequivocally we have seen the worst of it. 



From bakul at bitblocks.com  Wed Jun 27 10:11:21 2018
From: bakul at bitblocks.com (Bakul Shah)
Date: Tue, 26 Jun 2018 17:11:21 -0700
Subject: [TUHS] PDP-11 legacy, C, and modern architectures
In-Reply-To: <CALMnNGhKmKrkQYEKQd=zGaBY74oUf1HSS1PbpnWoMiSgnsr=Tw@mail.gmail.com>
References: <CMM.0.96.0.1530035664.beebe@gamma.math.utah.edu>
 <BE30EA2F-6E25-4197-92B4-940BF8BBF0AD@ronnatalie.com>
 <1f8043fd-e8d6-a5e6-5849-022d1a41f5bf@kilonet.net>
 <20180626215012.GE8150@mcvoy.com>
 <B9CFE92A-0090-4576-8E11-83EB63AE05EF@ronnatalie.com>
 <20180626215905.GH8150@mcvoy.com>
 <CALMnNGhKmKrkQYEKQd=zGaBY74oUf1HSS1PbpnWoMiSgnsr=Tw@mail.gmail.com>
Message-ID: <870B7FD6-B6CF-44FC-ABBD-60BD0285524E@bitblocks.com>

On Jun 26, 2018, at 3:33 PM, Andy Kosela <akosela at andykosela.com> wrote:
>  
> David Chisnall is known for pushing Go as a next-generation C.  He even wrote a book about it.  I think he has a point in saying that Go was created as a direct remedy to many things in C.  Most of its features come from decades of experience working with C, and seeing ways in which it can be improved.

I primarily write code in Go these days and like it a lot (as
a "better" C) but I am not sure it will have C's longevity.
It still uses a flat shared memory model. This is harder and
harder for hardware to emulate efficiently (and comes with
more complexity) at smaller and smaller minimum feature sizes
and higher & higher CPU clock rates & on-chip comm speeds. We
need something other than a better C to squeeze maximum
performance out of a CPU built out of 100s to 1000s of cores.

From tytso at mit.edu  Wed Jun 27 12:18:43 2018
From: tytso at mit.edu (Theodore Y. Ts'o)
Date: Tue, 26 Jun 2018 22:18:43 -0400
Subject: [TUHS] PDP-11 legacy, C, and modern architectTures
In-Reply-To: <20180626215905.GH8150@mcvoy.com>
References: <CMM.0.96.0.1530035664.beebe@gamma.math.utah.edu>
 <BE30EA2F-6E25-4197-92B4-940BF8BBF0AD@ronnatalie.com>
 <1f8043fd-e8d6-a5e6-5849-022d1a41f5bf@kilonet.net>
 <20180626215012.GE8150@mcvoy.com>
 <B9CFE92A-0090-4576-8E11-83EB63AE05EF@ronnatalie.com>
 <20180626215905.GH8150@mcvoy.com>
Message-ID: <20180627021843.GA31920@thunk.org>

One of the things which is probably causing a lot of hang ups is the
phrase "low-level language".  What is exactly meant by that?  The
article actually uses multiple definitions of "low-level language",
and switches between them when it is convenient, or as a rhetorical
trick.

There is at least one of his points that I very much agree with, which
is that the abstract model exported by the C programming paradigm
is very different from the totality of what the CPU can do.  Consider
the oft-made admonition that most humans can't program in assembler
more efficiently than the compiler.  Think about that for a moment.
That saying is essentially telling us that the compiler is capable of
doing a lot of things behind our back (which goes against the common
understanding of what a low-level language does), and second, that
modern CPUs have gotten too complicated for most humans to be able to
program directly, either comfortably or practically.

Also, all of these compiler optimizations are not _mandatory_.  They
are there because people really want performance.  You are free to
disable them if you care about predictability and avoiding the "any
sufficiently advanced compiler is indistinguishable from a malicious
adversary" dynamic.  In fact, if you are writing a program that is,
more often than not, I/O bound and/or fundamentally bound by memory
bandwidth, turning off a lot of these compiler optimizations might be
the right thing to do, as they may not buy you much, and may give you
compiles with the GCC flags:

	-fno-asynchronous-unwind-tables -fno-delete-null-pointer-checks \
	-fstack-protector-strong -fno-var-tracking-assignments \
	-femit-struct-debug-baseonly -fno-var-tracking \
	-fno-inline-functions-called-once -fno-strict-overflow \
	-fno-merge-all-constants -fmerge-constants -fno-stack-check

By the way, I disagree with Chisnall's assertion that low-level
languages are fast.  (If you believe that compilers can often do a
better job than humans writing assembly, does that mean that since
assembly code isn't fast it's not a low-level language?)  And the
performance wars are an important part of the dynamic --- at the end of the
day, the vast majority of people choose performance as the most
important driver of their purchasing decision.  They may pay lip
service to other things, including security, robustness, etc., but if
you take a look at what they actually _do_, far more often than not,
they choose fast over slow.

I see this an awful lot in file system design.  People *talk* a great
game about how they care about atomicity, and durability, etc., but
when you look at their actual behavior as opposed to expressed
desires, more often than not what they choose is again, fast over 
slow.  And so I have an extension to the fictional dialog from Pillai's
presentation for his 2014 OSDI paper:

Experienced Academic: Real file systems don't do that.

   But <...> does just that.

Academic: My students would flunk class if they built a file system that way.

Cynical Industry Developer's rejoinder: A file system designed the way
   you propose would likely be a commercial failure, and if it was the
   only/default FS for an OS, would probably drag the OS down to
   failure with it.


And this goes to Spectre, Meltdown, TLBleed, and so on.  Intel built
processors this way because the market _demanded_ it.  It's for the
same reason that Ford Motor Company built gas-guzzling pickup trucks.
Whether or not Miami Beach will end up drowning under rising sea
levels never entered into the consumer's or the manufacturer's
consideration, at least not in any real way.

It's very easy to criticize Intel for engineering their CPU's the way
they did, but if you want to know who to blame for massively high
ILP's and caching, and compilers that do all sorts of re-ordering and
loop rewriting behind the developer's back, just look in the mirror.
It's the same as why we have climate change and sea level rise.

"We have met the enemy and he is us". -- Pogo (Walt Kelly)

						- Ted


From tytso at mit.edu  Wed Jun 27 12:22:08 2018
From: tytso at mit.edu (Theodore Y. Ts'o)
Date: Tue, 26 Jun 2018 22:22:08 -0400
Subject: [TUHS] PDP-11 legacy, C, and modern architectTures
In-Reply-To: <20180627021843.GA31920@thunk.org>
References: <CMM.0.96.0.1530035664.beebe@gamma.math.utah.edu>
 <BE30EA2F-6E25-4197-92B4-940BF8BBF0AD@ronnatalie.com>
 <1f8043fd-e8d6-a5e6-5849-022d1a41f5bf@kilonet.net>
 <20180626215012.GE8150@mcvoy.com>
 <B9CFE92A-0090-4576-8E11-83EB63AE05EF@ronnatalie.com>
 <20180626215905.GH8150@mcvoy.com>
 <20180627021843.GA31920@thunk.org>
Message-ID: <20180627022208.GB31920@thunk.org>

On Tue, Jun 26, 2018 at 10:18:43PM -0400, Theodore Y. Ts'o wrote:
> There is a reason why the Linux kernel
> compiles with the GCC flags:
> 
> 	-fno-asynchronous-unwind-tables -fno-delete-null-pointer-checks \
> 	-fstack-protector-strong -fno-var-tracking-assignments \
> 	-femit-struct-debug-baseonly -fno-var-tracking \
> 	-fno-inline-functions-called-once -fno-strict-overflow \
> 	-fno-merge-all-constants -fmerge-constants -fno-stack-check

Oops, I forgot a biggy: -fno-strict-aliasing

						- Ted


From mutiny.mutiny at rediffmail.com  Sun Jun 24 17:50:42 2018
From: mutiny.mutiny at rediffmail.com (Mutiny)
Date: 24 Jun 2018 07:50:42 -0000
Subject: [TUHS] =?utf-8?q?Old_mainframe_I/O_speed_=28was=3A_core=29?=
In-Reply-To: <48b2a3f8-66ca-2527-f471-062eead1c6fe@update.uu.se>
Message-ID: <1529749991.S.4934.17164.f4-234-225.1529826641.32118@webmail.rediffmail.com>

>From: Johnny Billquist <bqt at update.uu.se>
>To me it sounds as if you are saying that DEC did/designed PCI.
>Are you sure about that? As far as I know, PCI was designed and created
>by Intel, and the first users were just plain PC machines.

Work on PCI began at Intel's Architecture Development Lab c. 1990.
https://en.wikipedia.org/wiki/Conventional_PCI#History
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://minnie.tuhs.org/pipermail/tuhs/attachments/20180624/d243b3d3/attachment.html>

From arnold at skeeve.com  Wed Jun 27 16:10:34 2018
From: arnold at skeeve.com (arnold at skeeve.com)
Date: Wed, 27 Jun 2018 00:10:34 -0600
Subject: [TUHS] PDP-11 legacy, C, and modern architectures
In-Reply-To: <870B7FD6-B6CF-44FC-ABBD-60BD0285524E@bitblocks.com>
References: <CMM.0.96.0.1530035664.beebe@gamma.math.utah.edu>
 <BE30EA2F-6E25-4197-92B4-940BF8BBF0AD@ronnatalie.com>
 <1f8043fd-e8d6-a5e6-5849-022d1a41f5bf@kilonet.net>
 <20180626215012.GE8150@mcvoy.com>
 <B9CFE92A-0090-4576-8E11-83EB63AE05EF@ronnatalie.com>
 <20180626215905.GH8150@mcvoy.com>
 <CALMnNGhKmKrkQYEKQd=zGaBY74oUf1HSS1PbpnWoMiSgnsr=Tw@mail.gmail.com>
 <870B7FD6-B6CF-44FC-ABBD-60BD0285524E@bitblocks.com>
Message-ID: <201806270610.w5R6AYl8032471@freefriends.org>

Bakul Shah <bakul at bitblocks.com> wrote:

> I primarily write code in Go these days and like it a lot (as
> a "better" C) but I am not sure it will have C's longevity.
> It still uses a flat shared memory model.

Digital Mars's D flips it around. Everything is thread-local storage
unless you explicitly mark something as shared. This makes a ton
of sense to me.

Arnold


From arnold at skeeve.com  Wed Jun 27 16:27:49 2018
From: arnold at skeeve.com (arnold at skeeve.com)
Date: Wed, 27 Jun 2018 00:27:49 -0600
Subject: [TUHS] PDP-11 legacy, C, and modern architectures
In-Reply-To: <1f8043fd-e8d6-a5e6-5849-022d1a41f5bf@kilonet.net>
References: <CMM.0.96.0.1530035664.beebe@gamma.math.utah.edu>
 <BE30EA2F-6E25-4197-92B4-940BF8BBF0AD@ronnatalie.com>
 <1f8043fd-e8d6-a5e6-5849-022d1a41f5bf@kilonet.net>
Message-ID: <201806270627.w5R6Rnlu002094@freefriends.org>

Arthur Krewat <krewat at kilonet.net> wrote:

> Sometimes, I wonder... Programmers are supposed to be smarter than the 
> language. Not the other way around.
>
> art k.

After many years working in a variety of places, I have come to the
conclusion that Sturgeon's Law ("90% of everything is crud") applies to
working programmers as well.

:-(

W.R.T. the statement as given, the point is good; when a language is
huge and complex (yes C++, I'm looking at you) it becomes really hard
to be effective in it unless you are a super-star genius.

Arnold


From tfb at tfeb.org  Wed Jun 27 18:30:06 2018
From: tfb at tfeb.org (Tim Bradshaw)
Date: Wed, 27 Jun 2018 09:30:06 +0100
Subject: [TUHS] PDP-11 legacy, C, and modern architectures
In-Reply-To: <117BE24B-C768-43ED-A470-40FCD1662ECD@bitblocks.com>
References: <CMM.0.96.0.1530035664.beebe@gamma.math.utah.edu>
 <BE30EA2F-6E25-4197-92B4-940BF8BBF0AD@ronnatalie.com>
 <1f8043fd-e8d6-a5e6-5849-022d1a41f5bf@kilonet.net>
 <20180626215012.GE8150@mcvoy.com>
 <B9CFE92A-0090-4576-8E11-83EB63AE05EF@ronnatalie.com>
 <20180626215905.GH8150@mcvoy.com>
 <117BE24B-C768-43ED-A470-40FCD1662ECD@bitblocks.com>
Message-ID: <43A3E865-2557-47C6-8359-DEAFDF8571E6@tfeb.org>

On 26 Jun 2018, at 23:20, Bakul Shah <bakul at bitblocks.com> wrote:
> 
> With new attacks like TLBleed etc. it is becoming increasingly clear that
> caching (hidden memory to continue with the illusion of a simple memory
> model) itself is a potential security issue. I didn't think anything the
> author said was particularly controversial any more. A lot of processor
> evolution seems to have been to accommodate C's simple memory model.

That's the strangest thing to see: *why do people think the point he's making is in any way controversial*, because it's so obvious.

(But then I'm also annoyed by the paper because I've been talking about 'giant PDP-11s' for a long time and now he's stolen (obviously not stolen: independently come up with) my term, pretty much.)
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://minnie.tuhs.org/pipermail/tuhs/attachments/20180627/340e771a/attachment.html>

From dot at dotat.at  Wed Jun 27 21:26:26 2018
From: dot at dotat.at (Tony Finch)
Date: Wed, 27 Jun 2018 12:26:26 +0100
Subject: [TUHS] PDP-11 legacy, C, and modern architectures
In-Reply-To: <B9CFE92A-0090-4576-8E11-83EB63AE05EF@ronnatalie.com>
References: <CMM.0.96.0.1530035664.beebe@gamma.math.utah.edu>
 <BE30EA2F-6E25-4197-92B4-940BF8BBF0AD@ronnatalie.com>
 <1f8043fd-e8d6-a5e6-5849-022d1a41f5bf@kilonet.net>
 <20180626215012.GE8150@mcvoy.com>
 <B9CFE92A-0090-4576-8E11-83EB63AE05EF@ronnatalie.com>
Message-ID: <alpine.DEB.2.11.1806271209230.916@grey.csi.cam.ac.uk>

Ronald Natalie <ron at ronnatalie.com> wrote:

> C is often a “You asked for it, you got it” type of paradigm.

Sadly these days it's more like, you asked for a VAX, you got a
Deathstation 9000. (Sadly the classic DS9000 web page has disappeared
and it was never saved by archive.org.)

http://wikibin.org/articles/deathstation-9000.html

It's worth reading Chisnall's other paper (cited by the CACM article) on
formalizing de-facto C. The background for all this is that Robert
Watson's team in Cambridge's Computer Lab has been working on a
capability-secure RISC processor for a number of years, with the goal of
being able to retro-fit hardware accelerated memory security to existing
software. Which means running C on hardware that doesn't look much like a
VAX. So it's helpful to get a better idea of exactly how far you can
deviate from the gcc/clang model of DS9000.

https://dl.acm.org/citation.cfm?id=2908081

Tony.
-- 
f.anthony.n.finch  <dot at dotat.at>  http://dotat.at/
Southeast Iceland: Variable 3 or 4. Slight or moderate. Fog patches,
occasional rain at first. Moderate or good, occasionally very poor.

From clemc at ccc.com  Wed Jun 27 23:59:43 2018
From: clemc at ccc.com (Clem Cole)
Date: Wed, 27 Jun 2018 09:59:43 -0400
Subject: [TUHS] Old mainframe I/O speed (was: core)
In-Reply-To: <1529749991.S.4934.17164.f4-234-225.1529826641.32118@webmail.rediffmail.com>
References: <48b2a3f8-66ca-2527-f471-062eead1c6fe@update.uu.se>
 <1529749991.S.4934.17164.f4-234-225.1529826641.32118@webmail.rediffmail.com>
Message-ID: <CAC20D2MMQKg6wnTCtM9WsVtt+BuCf2a0Jfhkk5NpZtq1Hx25Jw@mail.gmail.com>

Don't you love Wikipedia.   As has been said by others, follow the money
(or in this case the lawsuits)....
ᐧ

On Sun, Jun 24, 2018 at 3:50 AM, Mutiny <mutiny.mutiny at rediffmail.com>
wrote:

> >From: Johnny Billquist <bqt at update.uu.se>
> >To me it sounds as if you are saying that DEC did/designed PCI.
> >Are you sure about that? As far as I know, PCI was designed and created
> >by Intel, and the first users were just plain PC machines.
>
> Work on PCI began at Intel's Architecture Development Lab c. 1990.
> https://en.wikipedia.org/wiki/Conventional_PCI#History
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://minnie.tuhs.org/pipermail/tuhs/attachments/20180627/4bf50e3c/attachment.html>

From clemc at ccc.com  Thu Jun 28 00:33:52 2018
From: clemc at ccc.com (Clem Cole)
Date: Wed, 27 Jun 2018 10:33:52 -0400
Subject: [TUHS] PDP-11 legacy, C, and modern architectures
In-Reply-To: <alpine.DEB.2.11.1806271209230.916@grey.csi.cam.ac.uk>
References: <CMM.0.96.0.1530035664.beebe@gamma.math.utah.edu>
 <BE30EA2F-6E25-4197-92B4-940BF8BBF0AD@ronnatalie.com>
 <1f8043fd-e8d6-a5e6-5849-022d1a41f5bf@kilonet.net>
 <20180626215012.GE8150@mcvoy.com>
 <B9CFE92A-0090-4576-8E11-83EB63AE05EF@ronnatalie.com>
 <alpine.DEB.2.11.1806271209230.916@grey.csi.cam.ac.uk>
Message-ID: <CAC20D2P6LcfP9HD6VrmGH35NkUL873h51ZJLs5HABB1B2ygADg@mail.gmail.com>

I guess my take on it is mixed.   I see some of his points but overall I
disagree with most of them.  I firmly believe if you look at anything long
enough you will find flaws.  There is no perfect.   I think Fortran, C,
even Algol are a credit to what people were able to think about at the
time and how well they lasted.   As I have said in other places, Fortran is
not going away. Clem Cole's answer to Is the Future of Fortran Programming Dead
<https://www.quora.com/Is-the-future-of-Fortran-programming-dead/answer/Clem-Cole>
also
applies to C.   It's just not broken and he's wrong.   Go, Rust *et al.* are
not going to magically overtake C, just as Fortran has not been displaced in
my lifetime (BTW, I >>like<< both Go and Rust and think they are
interesting new languages).   He thinks C is no longer a low-level language
because when Ken abstracted the PDP-7 into B and then Dennis abstracted the
PDP-11 into C, the systems were simple.  The HW designers are in a giant
fake-out at this point, so things that used to work like 'register' no
longer make sense.  Now it's the compiler that binds to the primitives
available to the functions under the covers and there is more to use than
the PDP-11 and PDP-7 offered.    But wait, that is not always true.   So I
think he's wrong.   I think you leave the language alone and if the HW
moves on, great.   But if we have a simple system like you have on the Atmel
chips that most Arduinos and lots of other embedded C programs use, C is
very low level and most of his arguments go away.

Cken
ᐧ
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://minnie.tuhs.org/pipermail/tuhs/attachments/20180627/7d723694/attachment.html>

From clemc at ccc.com  Thu Jun 28 00:38:46 2018
From: clemc at ccc.com (Clem Cole)
Date: Wed, 27 Jun 2018 10:38:46 -0400
Subject: [TUHS] PDP-11 legacy, C, and modern architectures
In-Reply-To: <CAC20D2P6LcfP9HD6VrmGH35NkUL873h51ZJLs5HABB1B2ygADg@mail.gmail.com>
References: <CMM.0.96.0.1530035664.beebe@gamma.math.utah.edu>
 <BE30EA2F-6E25-4197-92B4-940BF8BBF0AD@ronnatalie.com>
 <1f8043fd-e8d6-a5e6-5849-022d1a41f5bf@kilonet.net>
 <20180626215012.GE8150@mcvoy.com>
 <B9CFE92A-0090-4576-8E11-83EB63AE05EF@ronnatalie.com>
 <alpine.DEB.2.11.1806271209230.916@grey.csi.cam.ac.uk>
 <CAC20D2P6LcfP9HD6VrmGH35NkUL873h51ZJLs5HABB1B2ygADg@mail.gmail.com>
Message-ID: <CAC20D2Oy1-JymrniBdZgjGoX0Gei4VzMTupH+yjSX6WVLdpS_w@mail.gmail.com>

I need to get a keyboard whose keys don't stick.... sigh....  Clem
ᐧ

On Wed, Jun 27, 2018 at 10:33 AM, Clem Cole <clemc at ccc.com> wrote:

> I guess my take on it is mixed.   I see some of his points but overall I
> disagree with most of them.  I firmly believe if you look at anything long
> enough you will find flaws.  There is no perfect.   I think Fortran, C,
> even Algol are a credit to what people were able to think about at the
> time and how well they lasted.   As I have said in other places, Fortran is
> not going away. Clem Cole's answer to Is the Future of Fortran Programming
> Dead
> <https://www.quora.com/Is-the-future-of-Fortran-programming-dead/answer/Clem-Cole> also
> applies to C.   It's just not broken and he's wrong.   Go, Rust *et al.*
> are not going to magically overtake C, just as Fortran has not been displaced
> in my lifetime (BTW, I >>like<< both Go and Rust and think they are
> interesting new languages).   He thinks C is no longer a low-level language
> because when Ken abstracted the PDP-7 into B and then Dennis abstracted the
> PDP-11 into C, the systems were simple.  The HW designers are in a giant
> fake-out at this point, so things that used to work like 'register' no
> longer make sense.  Now it's the compiler that binds to the primitives
> available to the functions under the covers and there is more to use than
> the PDP-11 and PDP-7 offered.    But wait, that is not always true.   So I
> think he's wrong.   I think you leave the language alone and if the HW
> moves on, great.   But if we have a simple system like you have on the Atmel
> chips that most Arduinos and lots of other embedded C programs use, C is
> very low level and most of his arguments go away.
>
> Cken
> ᐧ
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://minnie.tuhs.org/pipermail/tuhs/attachments/20180627/d18c9d52/attachment.html>

From paul.winalski at gmail.com  Thu Jun 28 01:30:47 2018
From: paul.winalski at gmail.com (Paul Winalski)
Date: Wed, 27 Jun 2018 11:30:47 -0400
Subject: [TUHS] PDP-11 legacy, C, and modern architectures
In-Reply-To: <CAC20D2P6LcfP9HD6VrmGH35NkUL873h51ZJLs5HABB1B2ygADg@mail.gmail.com>
References: <CMM.0.96.0.1530035664.beebe@gamma.math.utah.edu>
 <BE30EA2F-6E25-4197-92B4-940BF8BBF0AD@ronnatalie.com>
 <1f8043fd-e8d6-a5e6-5849-022d1a41f5bf@kilonet.net>
 <20180626215012.GE8150@mcvoy.com>
 <B9CFE92A-0090-4576-8E11-83EB63AE05EF@ronnatalie.com>
 <alpine.DEB.2.11.1806271209230.916@grey.csi.cam.ac.uk>
 <CAC20D2P6LcfP9HD6VrmGH35NkUL873h51ZJLs5HABB1B2ygADg@mail.gmail.com>
Message-ID: <CABH=_VScs_KqX0v0-Cc6LaOjwnLvQk0B+wYzjWngYuu+YYsoQA@mail.gmail.com>

What Clem said.  Chisnall is right about C having been designed for a
sequential-programming world.  That's why Fortran (with array and
other parallel/vector operations built in) rules in the HPTC parallel
programming space.  But I don't buy most of his arguments.  Making
parallel programming easy and natural has been an unsolved problem
during my entire 30+ year career in designing software development
tools.  It's still an unsolved problem.  Modern compiler technology
helps to find the hidden parallelism in algorithms expressed
sequentially, but I think the fundamental problem is that most human
beings have great difficulty conceptualizing parallel algorithms.
It's also always been true that to get maximum performance you have to
somehow get close to the specific hardware you're using--either by
explicitly programming for it, or by having a compiler do that for
you.

Note also that there have been extensions to C/C++ to support
parallelism.  Cilk, for example.
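A sketch of what Cilk adds (the serial-elision macro here is only so the
fragment compiles with an ordinary C compiler; with a real Cilk toolchain
cilk_for is a keyword and iterations may run in parallel):

```c
#include <stddef.h>

/* Cilk extends C with a cilk_for keyword whose iterations may run in
 * parallel across worker threads.  Its "serial elision" -- reading
 * cilk_for as plain for -- is the same program with the same meaning,
 * which is what this fallback macro provides for plain C compilers. */
#ifndef cilk_for
#define cilk_for for
#endif

void scale(double *a, double s, size_t n)
{
    cilk_for (size_t i = 0; i < n; i++)
        a[i] *= s;          /* each iteration is independent */
}
```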

-Paul W.


From tfb at tfeb.org  Thu Jun 28 02:55:04 2018
From: tfb at tfeb.org (Tim Bradshaw)
Date: Wed, 27 Jun 2018 17:55:04 +0100
Subject: [TUHS] PDP-11 legacy, C, and modern architectures
In-Reply-To: <CABH=_VScs_KqX0v0-Cc6LaOjwnLvQk0B+wYzjWngYuu+YYsoQA@mail.gmail.com>
References: <CMM.0.96.0.1530035664.beebe@gamma.math.utah.edu>
 <BE30EA2F-6E25-4197-92B4-940BF8BBF0AD@ronnatalie.com>
 <1f8043fd-e8d6-a5e6-5849-022d1a41f5bf@kilonet.net>
 <20180626215012.GE8150@mcvoy.com>
 <B9CFE92A-0090-4576-8E11-83EB63AE05EF@ronnatalie.com>
 <alpine.DEB.2.11.1806271209230.916@grey.csi.cam.ac.uk>
 <CAC20D2P6LcfP9HD6VrmGH35NkUL873h51ZJLs5HABB1B2ygADg@mail.gmail.com>
 <CABH=_VScs_KqX0v0-Cc6LaOjwnLvQk0B+wYzjWngYuu+YYsoQA@mail.gmail.com>
Message-ID: <FBAD3CDE-F23E-4BB3-AB9B-F03AC9C9D332@tfeb.org>

On 27 Jun 2018, at 16:30, Paul Winalski <paul.winalski at gmail.com> wrote:
> 
> What Clem said.  Chisnall is right about C having been designed for a
> sequential-programming world.  That's why Fortran (with array and
> other parallel/vector operations built in) rules in the HPTC parallel
> programming space.  But I don't buy most of his arguments.  Making
> parallel programming easy and natural has been an unsolved problem
> during my entire 30+ year career in designing software development
> tools.  It's still an unsolved problem.  [...]

I think that's right.  The missing bit is that once the only people who had to worry about processors with a lot of parallelism were the HPC people, who fortunately often had algorithms which parallelised rather well.  Now you have to worry about it if you want to write programs for the processor in your laptop, and probably for the processor in your watch.  Or you would, if the designers of those processors had not gone to heroic lengths to make them look like giant PDP-11s.  Unfortunately, as has become apparent, those heroic lengths haven't been heroic enough, and will presumably fall apart increasingly rapidly from now on.

So he's right: the giant PDP-11 thing is a disaster, but he's wrong about its cause: it's not caused by C, but by the fact that writing programs for what systems really need to look like is just an unsolved problem.  It might have helped if we had not spent forty years sweeping it busily under the carpet.

A thing that is also coming, of course, which he does not talk about, is that big parallel machines are also going to start getting increasingly constrained by physics, which means that a lot of the tricks that HPC people use will start to fall apart as well.



From clemc at ccc.com  Thu Jun 28 03:40:54 2018
From: clemc at ccc.com (Clem Cole)
Date: Wed, 27 Jun 2018 13:40:54 -0400
Subject: [TUHS] Frank Heart,
	Who Helped to Design the ArpaNet IMPs at BBN Dies at 89
Message-ID: <CAC20D2NT5g7WZeE0gxDA8dh4JA2f0KEGEf=TxV+c017fcA6JJg@mail.gmail.com>

Note the NY Times headlines this as 'pre-Internet', which is a little sad,
but what can you do:

https://www.nytimes.com/2018/06/25/technology/frank-heart-who-linked-computers-before-the-internet-dies-at-89.html
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://minnie.tuhs.org/pipermail/tuhs/attachments/20180627/4ee7dd4a/attachment.html>

From bqt at update.uu.se  Thu Jun 28 04:43:56 2018
From: bqt at update.uu.se (Johnny Billquist)
Date: Wed, 27 Jun 2018 20:43:56 +0200
Subject: [TUHS] Old mainframe I/O speed
In-Reply-To: <CAC20D2MMQKg6wnTCtM9WsVtt+BuCf2a0Jfhkk5NpZtq1Hx25Jw@mail.gmail.com>
References: <48b2a3f8-66ca-2527-f471-062eead1c6fe@update.uu.se>
 <1529749991.S.4934.17164.f4-234-225.1529826641.32118@webmail.rediffmail.com>
 <CAC20D2MMQKg6wnTCtM9WsVtt+BuCf2a0Jfhkk5NpZtq1Hx25Jw@mail.gmail.com>
Message-ID: <37241c8b-dc57-e720-dd63-aaff8dcb9bd3@update.uu.se>

On 2018-06-27 15:59, Clem Cole wrote:
> Don't you love Wikipedia.   As it has been said by others follow the 
> money (or in this case the lawsuits)....

Well, in all fairness, what you are now claiming is that Intel ripped 
off DEC IP when designing PCI. That is not the same thing as claiming 
that DEC designed PCI themselves.

I have not really looked into details on TURBOchannel compared to PCI to 
say if they are related in some way, but they are definitely not 
identical. So PCI did originate within Intel, and the Wikipedia article 
is correct (and even if you point fingers at Wikipedia for various 
issues about money and whatnot, you most of the time do have source 
references that can be checked, and which are not fabricated by Wikipedia.)

And yes, Intel certainly ripped off DEC IP a couple of times, so I would 
not be surprised if this turns out to be another example. But DEC did 
not design PCI.

   Johnny

> 
> On Sun, Jun 24, 2018 at 3:50 AM, Mutiny <mutiny.mutiny at rediffmail.com 
> <mailto:mutiny.mutiny at rediffmail.com>> wrote:
> 
>      >From: Johnny Billquist <bqt at update.uu.se <mailto:bqt at update.uu.se>>
>     >To me it sounds as if you are saying that DEC did/designed PCI.
>     >Are you sure about that? As far as I know, PCI was designed and created
>     >by Intel, and the first users were just plain PC machines.
> 
>     Work on PCI began at Intel's Architecture Development Lab c. 1990.
>     https://en.wikipedia.org/wiki/Conventional_PCI#History
>     <https://en.wikipedia.org/wiki/Conventional_PCI#History>
> 
> 


-- 
Johnny Billquist                  || "I'm on a bus
                                   ||  on a psychedelic trip
email: bqt at softjar.se             ||  Reading murder books
pdp is alive!                     ||  tryin' to stay hip" - B. Idol


From mparson at bl.org  Thu Jun 28 07:33:30 2018
From: mparson at bl.org (Michael Parson)
Date: Wed, 27 Jun 2018 16:33:30 -0500 (CDT)
Subject: [TUHS] off-topic list
In-Reply-To: <CAC20D2MEsnhU-Vw_te_hTPA5ht4V5h=CUnni9TqXXZfHrKucWg@mail.gmail.com>
References: <20180625161016.C16BA18C082@mercury.lcs.mit.edu>
 <CAC20D2O=ncZ3ybjgTyMcM5+rGBPnJX1-E-312sPcm4wb+GE8hA@mail.gmail.com>
 <3950d997-d310-7cfc-30bf-237e9d872739@spamtrap.tnetconsulting.net>
 <CAC20D2Nt+CuDriSh18ypOiG3+pJm+yvy4-VQ0wYUa8A7QiCrYA@mail.gmail.com>
 <alpine.DEB.2.11.1806261408380.916@grey.csi.cam.ac.uk>
 <CANCZdfpEuvUbBXUyxoKCHoA4cLkRHJ-8KoFOSVBgdOsAOsX6PQ@mail.gmail.com>
 <CAC20D2MEsnhU-Vw_te_hTPA5ht4V5h=CUnni9TqXXZfHrKucWg@mail.gmail.com>
Message-ID: <alpine.NEB.2.20.1806271632090.7100@neener.bl.org>

On Tue, 26 Jun 2018, Clem Cole wrote:

> Date: Tue, 26 Jun 2018 17:16:36 -0400
> From: Clem Cole <clemc at ccc.com>
> To: Warner Losh <imp at bsdimp.com>
> Cc: TUHS main list <tuhs at minnie.tuhs.org>,
>     Grant Taylor <gtaylor at tnetconsulting.net>
> Subject: Re: [TUHS] off-topic list
> 
> On Tue, Jun 26, 2018 at 2:04 PM, Warner Losh <imp at bsdimp.com> wrote:
> ​Ok, that all sounds right and I'll take your word for it.  I
> followed it only from the side and not directly as a customer, since
> by then I was really not doing much VMS anything.  That said, I had
> thought some of the original folks that were part of the PMDF work
> were the same crew that did SOL (Michel Gien - the Pascal rewrite of
> UNIX - whom I knew in those days from the OS side of the world).  I
> also thought the reason why the firm was named after the TGV (and
> yes I stand corrected on the name) was because they were French and at
> the time the French bullet train was known for being one of the fastest
> in the world and the French were very proud of it.

I always thought TGV was "Three Guys and a VAX".

-- 
Michael Parson
Pflugerville, TX
KF5LGQ


From clemc at ccc.com  Thu Jun 28 08:27:34 2018
From: clemc at ccc.com (Clem cole)
Date: Wed, 27 Jun 2018 18:27:34 -0400
Subject: [TUHS] off-topic list
In-Reply-To: <alpine.NEB.2.20.1806271632090.7100@neener.bl.org>
References: <20180625161016.C16BA18C082@mercury.lcs.mit.edu>
 <CAC20D2O=ncZ3ybjgTyMcM5+rGBPnJX1-E-312sPcm4wb+GE8hA@mail.gmail.com>
 <3950d997-d310-7cfc-30bf-237e9d872739@spamtrap.tnetconsulting.net>
 <CAC20D2Nt+CuDriSh18ypOiG3+pJm+yvy4-VQ0wYUa8A7QiCrYA@mail.gmail.com>
 <alpine.DEB.2.11.1806261408380.916@grey.csi.cam.ac.uk>
 <CANCZdfpEuvUbBXUyxoKCHoA4cLkRHJ-8KoFOSVBgdOsAOsX6PQ@mail.gmail.com>
 <CAC20D2MEsnhU-Vw_te_hTPA5ht4V5h=CUnni9TqXXZfHrKucWg@mail.gmail.com>
 <alpine.NEB.2.20.1806271632090.7100@neener.bl.org>
Message-ID: <1D3E6798-6370-4C7C-98F9-3399AA88C76E@ccc.com>

Makes sense. 

Sent from my PDP-7 Running UNIX V0 expect things to be almost but not quite. 

> On Jun 27, 2018, at 5:33 PM, Michael Parson <mparson at bl.org> wrote:
> 
>> On Tue, 26 Jun 2018, Clem Cole wrote:
>> 
>> Date: Tue, 26 Jun 2018 17:16:36 -0400
>> From: Clem Cole <clemc at ccc.com>
>> To: Warner Losh <imp at bsdimp.com>
>> Cc: TUHS main list <tuhs at minnie.tuhs.org>,
>>    Grant Taylor <gtaylor at tnetconsulting.net>
>> Subject: Re: [TUHS] off-topic list
>> On Tue, Jun 26, 2018 at 2:04 PM, Warner Losh <imp at bsdimp.com> wrote:
>> ​Ok, that all sounds right and I'll take your word for it.  I
>> followed it only from the side and not directly as a customer, since
>> by then I was really not doing much VMS anything.  That said, I had
>> thought some of the original folks that were part of the PMDF work
>> were the same crew that did SOL (Michel Gien - the Pascal rewrite of
>> UNIX - whom I knew in those days from the OS side of the world).  I
>> also thought the reason why the the firm was named after the TGV (and
>> yes I stand corrected on the name) was because they were French and at
>> the time the French bullet train was know for being one of the fastest
>> in the world and the French were very proud of it.
> 
> I always thought TGV was "Three Guys and a VAX".
> 
> -- 
> Michael Parson
> Pflugerville, TX
> KF5LGQ


From scj at yaccman.com  Thu Jun 28 02:00:16 2018
From: scj at yaccman.com (Steve Johnson)
Date: Wed, 27 Jun 2018 09:00:16 -0700
Subject: [TUHS] PDP-11 legacy, C, and modern architectures
In-Reply-To: <CMM.0.96.0.1530035664.beebe@gamma.math.utah.edu>
Message-ID: <af780f9fb5c14e37f12ce5c2a4e40376669c730f@webmail.yaccman.com>


I agree that C is a bad language for parallelism, and, like it or not,
that's what today's hardware is giving us -- not speed, but many
independent processors.  But I'd argue that its problem isn't that it
is not low-level, but that it is not high-level enough.  A language
like MATLAB, whose basic data object is an N-dimensional tensor, can
make impressive use of parallel hardware.

Consider matrix multiplication.   Multiplying two NxN arrays to get
another NxN array is a classic data-parallel problem -- each value in
the result matrix is completely independent of every other one -- in
theory, we could dedicate a processor to each output element, and
would not need any cache coherency or locking mechanism -- just let
them go at it -- the trickiest part is deciding you are finished.

The reason we know we are data parallel is not because of any feature
of the language -- it's because of the mathematical structure of the
problem.  While it's easy to write a matrix multiply function in C
(as it is in most languages), just the fact that the arguments are
pointers is enough to make data parallelism invisible from within the
function.  You can bolt on additional features that, in effect, tell
the compiler it should treat the inputs as independent and
non-overlapping, but this is just the tip of the iceberg -- real
parallel problems see this in spades.
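As a minimal C sketch of that bolt-on: the restrict qualifiers below promise the compiler that the three matrices do not overlap, which is exactly the information that bare pointers hide.

```c
#include <stddef.h>

/* Naive NxN multiply over row-major arrays.  The restrict qualifiers
 * tell the compiler the three matrices cannot alias, so every
 * c[i*n + j] is independent of every other and the loops are free to
 * be vectorized or parallelized.  Without restrict, the mere
 * possibility of overlap forces conservative, sequential code. */
void matmul(size_t n, const double *restrict a,
            const double *restrict b, double *restrict c)
{
    for (size_t i = 0; i < n; i++)
        for (size_t j = 0; j < n; j++) {
            double sum = 0.0;
            for (size_t k = 0; k < n; k++)
                sum += a[i*n + k] * b[k*n + j];
            c[i*n + j] = sum;
        }
}
```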

The other hardware factor that comes into play is that hardware,
especially memories, have physical limits in what they can do.  So
the "ideal" matrix multiply with a processor for each output element
would suffer because many of the processors would be trying to read
the same memory at the same time.  Some would be bound to fail,
requiring the ability to stack requests and restart them, as well as
pause the processor until the data was available.   (note that, in
this and many other cases, we don't need cache coherency because the
input data is not changing while we are using it).  The obvious way
around this is to divide the memory into many small memories that are
close to the processors, so memory access is not the bottleneck.

And this is where C (and Python) fall shortest.  The idea that there
is one memory space of semi-infinite size, and all pointers point into
it and all variables live in it almost forces attempts at parallelism
to be expensive and performance-killing.  And yet, because of C's
limited, "low-level" approach to data, we are stuck.  Being able to
declare that something is a tensor that will be unchanging when used,
can be distributed across many small memories to prevent data
bottlenecks when reading and writing, and changed only in limited and
controlled ways is the key to unlocking serious performance.

Steve

PS: for some further thoughts, see
https://wavecomp.ai/blog/auto-hardware-and-ai


-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://minnie.tuhs.org/pipermail/tuhs/attachments/20180627/29438507/attachment.html>

From bakul at bitblocks.com  Thu Jun 28 14:12:37 2018
From: bakul at bitblocks.com (Bakul Shah)
Date: Wed, 27 Jun 2018 21:12:37 -0700
Subject: [TUHS] PDP-11 legacy, C, and modern architectures
In-Reply-To: <af780f9fb5c14e37f12ce5c2a4e40376669c730f@webmail.yaccman.com>
References: <af780f9fb5c14e37f12ce5c2a4e40376669c730f@webmail.yaccman.com>
Message-ID: <81277CC3-3C4A-49B8-8720-CFAD22BB28F8@bitblocks.com>

On Jun 27, 2018, at 9:00 AM, Steve Johnson <scj at yaccman.com> wrote:
> 
> I agree that C is a bad language for parallelism, and, like it or not, that's what today's hardware is giving us -- not speed, but many independent processors.  But I'd argue that its problem isn't that it is not low-level, but that it is not high-level enough.  A language like MATLAB, whose basic data object is an N-dimensional tensor, can make impressive use of parallel hardware.
> 
> Consider matrix multiplication.   Multiplying two NxN arrays to get another NxN array is a classic data-parallel problem -- each value in the result matrix is completely independent of every other one -- in theory, we could dedicate a processor to each output element, and would not need any cache coherency or locking mechanism -- just let them go at it -- the trickiest part is deciding you are finished.
> 
> The reason we know we are data parallel is not because of any feature of the language -- it's because of the mathematical structure of the problem.  While it's easy to write a matrix multiply function in C (as it is in most languages), just the fact that the arguments are pointers is enough to make data parallelism invisible from within the function.  You can bolt on additional features that, in effect, tell the compiler it should treat the inputs as independent and non-overlapping, but this is just the tip of the iceberg -- real parallel problems see this in spades.  
> 
> The other hardware factor that comes into play is that hardware, especially memories, have physical limits in what they can do.  So the "ideal" matrix multiply with a processor for each output element would suffer because many of the processors would be trying to read the same memory at the same time.  Some would be bound to fail, requiring the ability to stack requests and restart them, as well as pause the processor until the data was available.   (note that, in this and many other cases, we don't need cache coherency because the input data is not changing while we are using it).  The obvious way around this is to divide the memory into many small memories that are close to the processors, so memory access is not the bottleneck.
> 
> And this is where C (and Python) fall shortest.  The idea that there is one memory space of semi-infinite size, and all pointers point into it and all variables live in it almost forces attempts at parallelism to be expensive and performance-killing.  And yet, because of C's limited, "low-level" approach to data, we are stuck.  Being able to declare that something is a tensor that will be unchanging when used, can be distributed across many small memories to prevent data bottlenecks when reading and writing, and changed only in limited and controlled ways is the key to unlocking serious performance.
> 
> Steve
> 
> PS: for some further thoughts, see https://wavecomp.ai/blog/auto-hardware-and-ai

Very well put. The whole concept of address-spaces is rather
low level.

There is in fact a close parallel to this model that is in 
current use. Cloud computing is essentially a collection of
"micro-services", orchestrated to provide some higher level
service. External to some micro-service X, all other services
care about is how to reach X and what comm. protocol to use to
talk to it but not about any details of how it is implemented.
Here concerns are more about reliability, uptime, restarts,
updates, monitoring, load balancing, error handling, DoS,
security, access-control, latency, network address space &
traffic management, dynamic scaling, etc. A subset of these
concerns would apply to parallel computers as well.

Current cloud computing solutions to these problems are quite
messy, complex and heavyweight. There is a lot of scope here
for simplification.... 







From arnold at skeeve.com  Thu Jun 28 15:57:14 2018
From: arnold at skeeve.com (arnold at skeeve.com)
Date: Wed, 27 Jun 2018 23:57:14 -0600
Subject: [TUHS] off-topic list
In-Reply-To: <1D3E6798-6370-4C7C-98F9-3399AA88C76E@ccc.com>
References: <20180625161016.C16BA18C082@mercury.lcs.mit.edu>
 <CAC20D2O=ncZ3ybjgTyMcM5+rGBPnJX1-E-312sPcm4wb+GE8hA@mail.gmail.com>
 <3950d997-d310-7cfc-30bf-237e9d872739@spamtrap.tnetconsulting.net>
 <CAC20D2Nt+CuDriSh18ypOiG3+pJm+yvy4-VQ0wYUa8A7QiCrYA@mail.gmail.com>
 <alpine.DEB.2.11.1806261408380.916@grey.csi.cam.ac.uk>
 <CANCZdfpEuvUbBXUyxoKCHoA4cLkRHJ-8KoFOSVBgdOsAOsX6PQ@mail.gmail.com>
 <CAC20D2MEsnhU-Vw_te_hTPA5ht4V5h=CUnni9TqXXZfHrKucWg@mail.gmail.com>
 <alpine.NEB.2.20.1806271632090.7100@neener.bl.org>
 <1D3E6798-6370-4C7C-98F9-3399AA88C76E@ccc.com>
Message-ID: <201806280557.w5S5vESf027039@freefriends.org>

I'd heard "Two Guys and a Vax"...

Clem cole <clemc at ccc.com> wrote:

> Makes sense. 
>
> Sent from my PDP-7 Running UNIX V0 expect things to be almost but not quite. 
>
> > On Jun 27, 2018, at 5:33 PM, Michael Parson <mparson at bl.org> wrote:
> > 
> >> On Tue, 26 Jun 2018, Clem Cole wrote:
> >> 
> >> Date: Tue, 26 Jun 2018 17:16:36 -0400
> >> From: Clem Cole <clemc at ccc.com>
> >> To: Warner Losh <imp at bsdimp.com>
> >> Cc: TUHS main list <tuhs at minnie.tuhs.org>,
> >>    Grant Taylor <gtaylor at tnetconsulting.net>
> >> Subject: Re: [TUHS] off-topic list
> >> On Tue, Jun 26, 2018 at 2:04 PM, Warner Losh <imp at bsdimp.com> wrote:
> >> ​Ok, that all sounds right and I'll take your word for it.  I
> >> followed it only from the side and not directly as a customer, since
> >> by then I was really not doing much VMS anything.  That said, I had
> >> thought some of the original folks that were part of the PMDF work
> >> were the same crew that did SOL (Michel Gien - the Pascal rewrite of
> >> UNIX - whom I knew in those days from the OS side of the world).  I
> >> also thought the reason why the the firm was named after the TGV (and
> >> yes I stand corrected on the name) was because they were French and at
> >> the time the French bullet train was know for being one of the fastest
> >> in the world and the French were very proud of it.
> > 
> > I always thought TGV was "Three Guys and a VAX".
> > 
> > -- 
> > Michael Parson
> > Pflugerville, TX
> > KF5LGQ


From tytso at mit.edu  Fri Jun 29 00:15:38 2018
From: tytso at mit.edu (Theodore Y. Ts'o)
Date: Thu, 28 Jun 2018 10:15:38 -0400
Subject: [TUHS] PDP-11 legacy, C, and modern architectures
In-Reply-To: <81277CC3-3C4A-49B8-8720-CFAD22BB28F8@bitblocks.com>
References: <af780f9fb5c14e37f12ce5c2a4e40376669c730f@webmail.yaccman.com>
 <81277CC3-3C4A-49B8-8720-CFAD22BB28F8@bitblocks.com>
Message-ID: <20180628141538.GB663@thunk.org>

Bakul, I think you and Steve have a very particular set of programming
use cases in mind, and are then over-generalizing this to assume that
these are the only problems that matter.  It's the same mistake
Chisnall made when he asserted that it is a myth that writing
parallel programs is "hard" for humans, and that "all you needed" was
the right language.

The problem is that not all people are interested in solving problems
which are amenable to embarrassingly parallel algorithms.  Not all
programmers are interested in doing matrix multiply, or writing using
the latest hyped architecture (whether you call it by the new name,
"microservices", or the older hyped name which IBM tried to promote,
"Service Oriented Architecture", or SOA).

I'll note that Sun made a big bet (one of its last failed bets) on
this architecture in the form of the Niagara architecture, with a large
number of super "wimpy" cores.  It was the same basic idea --- we
can't make big fast cores (since that would lead to high ILP's,
complex register rewriting, and lead to cache-oriented security
vulnerabilities like Spectre and Meltdown) --- so instead, let's make
lots of tiny wimpy cores, and let programmers write highly threaded
programs!  They essentially made a bet on the web-based microservice
model which you are promoting.

And the Market spoke.  And shortly thereafter, Java fell under the
control of Oracle....  And Intel would proceed to further dominate the
landscape.

					- Ted


From perry at piermont.com  Fri Jun 29 00:25:16 2018
From: perry at piermont.com (Perry E. Metzger)
Date: Thu, 28 Jun 2018 10:25:16 -0400
Subject: [TUHS] email filtering (was Re:  off-topic list)
In-Reply-To: <20180626090539.GB96296@accordion.employees.org>
References: <20180622145402.GT21272@mcvoy.com>
 <20180622151751.BEK9i%steffen@sdaoden.eu>
 <b6ef82de-739a-ed8e-0e91-3abfa2fb5f07@spamtrap.tnetconsulting.net>
 <20180622192505.mfig_%steffen@sdaoden.eu>
 <89e5ae21-ccc0-5c84-837b-120a1a7d9e26@spamtrap.tnetconsulting.net>
 <20180623144959.M9byU%steffen@sdaoden.eu>
 <ce6f617c-cf8e-63c6-8186-27e09c78020c@spamtrap.tnetconsulting.net>
 <alpine.NEB.2.20.1806231615130.17586@neener.bl.org>
 <alpine.BSF.2.21.999.1806251240100.68981@aneurin.horsfall.org>
 <201806250615.w5P6FgHA018820@freefriends.org>
 <20180626090539.GB96296@accordion.employees.org>
Message-ID: <20180628102516.0aa3e3f4@jabberwock.cb.piermont.com>

On Mon, Jun 25, 2018 at 12:15:42AM -0600, arnold at skeeve.com wrote:
>
> So what is the alternative?

I subscribe to a truly ridiculous number of email lists, and I run my
own mail server, so I thought I'd mention my Rube Goldberg style setup
given that it's being discussed. For those that don't like how long
this is, the main keys are "IMAP + sieve for filing all mailing lists
into their own IMAP folder". The rest is pretty boring.

Boring part:

0) My MTA is Postfix, which allows even a relatively small site to get
pretty damn good anti-spam capabilities. It's also really well written
(it's a flock of small tools each of which does only one thing well)
and quite secure, which is important. I have most of the interesting
bells and whistles (like opportunistic TLS) turned on.

1) Email for me goes through procmail to run it through a logging
system, then Spam Assassin to tag whatever spam got past Postfix, and
finally it's passed to Dovecot's "sieve" system for final filing in
the IMAP server.

2) Sieve breaks up my incoming email into many, many
mailboxes. There's a box for every mailing list I'm on, so I can read
them more or less like newsgroups. I don't put any email for a list
into my main inbox, which gets only mail addressed to me personally so
I can reply to such messages very fast.

Sieve is a nice, standardized language for mailbox filtering, and the
Dovecot implementation works quite well. It's flexible enough that I
can deal with the fact that many mailing lists are hard to pick out
from the mail stream thanks to their operators not following standards.

Spam that made it past Postfix into Spam Assassin might or might not
really be spam, so I have Spam Assassin tag it rather than deleting
it, and Sieve is instructed to file that into its own box just in
case. A cron job expires spam after a couple of weeks so that mailbox
doesn't get too large.
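A minimal Sieve fragment in the spirit of the above might look like this
(the folder names and the List-Id value here are hypothetical examples,
not my actual configuration):

```sieve
require ["fileinto"];

# File each list into its own folder, keyed on the List-Id header.
if header :contains "list-id" "tuhs.minnie.tuhs.org" {
    fileinto "lists.tuhs";
}

# Mail that Spam Assassin has tagged goes to its own box, not the trash,
# in case of false positives; a cron job expires it later.
if header :contains "X-Spam-Flag" "YES" {
    fileinto "spam";
}
```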

3) As I hinted, my email is stored in an IMAP server, specifically
Dovecot. This allows me to read my mail with a dozen different tools,
including my phone, a couple different MUAs on the desktop, etc. I
dislike (see below) that this limits my choice in tools to process the
mail, but I really need the multi-MUA setup, and I can't afford to
move the mail onto any of my diverse end systems because I need to be
able to read it on all of them.

4) I used MH for many years, and I really wish MH/NMH worked with IMAP
properly, because like others here, I really loved its scriptability,
but the need to use multiple MUAs trumps it at this point. It's also
necessary to communicate with muggles/mundanes/ordinary folk too often
and most of them use HTML email and so I need to be able to read it
even if I don't generate it, which means some use of GUI emailers.

5) As for the future, I'm hoping at some point to have something that
works sort of like MH did but which can speak native IMAP, and I'm
hoping to start using Emacs as an MUA most of the time if I can find a
client that will handle HTML email + IMAP a bit better than most of
them do. (Stop looking at me that way for saying Emacs — I've been
using it as my editor since 1983 and my fingers just don't know any
better any more.)

Perry
-- 
Perry E. Metzger		perry at piermont.com


From steffen at sdaoden.eu  Fri Jun 29 00:36:35 2018
From: steffen at sdaoden.eu (Steffen Nurpmeso)
Date: Thu, 28 Jun 2018 16:36:35 +0200
Subject: [TUHS] PDP-11 legacy, C, and modern architectures
In-Reply-To: <20180627021843.GA31920@thunk.org>
References: <CMM.0.96.0.1530035664.beebe@gamma.math.utah.edu>
 <BE30EA2F-6E25-4197-92B4-940BF8BBF0AD@ronnatalie.com>
 <1f8043fd-e8d6-a5e6-5849-022d1a41f5bf@kilonet.net>
 <20180626215012.GE8150@mcvoy.com>
 <B9CFE92A-0090-4576-8E11-83EB63AE05EF@ronnatalie.com>
 <20180626215905.GH8150@mcvoy.com> <20180627021843.GA31920@thunk.org>
Message-ID: <20180628143635.7UYJ_%steffen@sdaoden.eu>

Theodore Y. Ts'o wrote in <20180627021843.GA31920 at thunk.org>:
  ...
 |There is at least one of his points that I very much agree with, which
 |is the abstract model which is exported by the C programming paradigm
 |is very different from what totality of what the CPU can do.  Consider
 ...
 |That saying is essentially telling us that the compiler is capable of
 |doing a lot of things behinds our back (which goes against the common
 |understanding of what low-level language does), and second, that
 |modern CPU's has gotten too complicated for most humans to be able to
 |program directly, either comfortably or practically.
 ...
 |It's very easy to criticize Intel for engineering their CPU's the way
 |they did, but if you want to know who to blame for massively high
 |ILP's and caching, and compilers that do all sorts of re-ordering and
 |loop rewriting behind the developer's back, just look in the mirror.
 |It's the same as why we have climate change and sea level rise.
 |
 |"We have met the enemy and he is us". -- Pogo (Walt Kelly)

And my thinking is that the C language has been misused.  If you
can cast in stone a language policy that then allows compilers to
apply optimization X, and it gains your product ten percent
performance without the need to spend a single minute of paid
manpower, of course you will take it.

But even otherwise.  Take loop unrolling, for example -- how could
a compiler know whether I want a loop to be unrolled or not?
It may have very clever heuristics, but besides having to be coded
themselves, those heuristics will not know whether I desire to make
this piece of code fast or not, or whether it will ever execute
at all.

In my opinion this "giving away decisions to the compiler" made
the C language worse, not better.  I think it would be better if
one could place attributes, like @unroll, @inline (since inline is
only a hint and placing not-to-be-inlined after their use cases
does not matter to intelligent CCs: @no-inline would be needed),
@prefetch[[X]], and @parallel.  And @align[[X]] and all these.
"@parallel @unroll for(;;)" could for example automatically split
a loop into X subranges.  Or it could hint secure prediction.  Or
at least indicate the programmer's intention or clearance of that,
if the hardware is capable.  And a good compiler could indicate
whether the desire could ever be fulfilled or not.  This looks
like the Julia language, but there these are user-defined "macros"
not compiler hints.  I mean, at least until quantum computers
happen, C seems to map to what I know about hardware pretty well.
It is up to the programmer to localize data and avoid that threads
trample on each other.  It is not as easy as if the language
offers a completely asynchronous shot with a single operator, but
in an ideal world you can get the same with a much smaller
footprint.
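Some of these knobs do already exist, though as compiler-specific
extensions rather than language features.  A hedged sketch of what
GCC offers today (both are requests the compiler may ignore, and
other compilers will simply skip the unknown pragma):

```c
#include <stddef.h>

/* Rough stand-ins for @unroll and @prefetch.  "#pragma GCC unroll 4"
 * (GCC 8+) requests four-way unrolling of the loop that follows;
 * __builtin_prefetch hints that a[i + 16] will be wanted soon.
 * Neither changes the result; they only express the programmer's
 * intent about speed, much as the attributes proposed above would. */
double sum_hinted(const double *a, size_t n)
{
    double s = 0.0;
#pragma GCC unroll 4
    for (size_t i = 0; i < n; i++) {
        if (i + 16 < n)
            __builtin_prefetch(&a[i + 16]);
        s += a[i];
    }
    return s;
}
```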

Unfortunately what you say seems to be right, and recalling what
the largest German computer magazine had to say about compilers,
it was mostly a series of graphs indicating the speed of the
generated code.  Whereas I had to deal with working around
compiler bugs for the same (free) CCs at the same time.  (Of
course, or maybe not of course, people did care to get them fixed,
but the story was not about "the walk", but what the eagles saw.)
There were two or three large interesting articles with quite some
context (except Mr. Stiller's "Prozessorgeflüster"; I recall he
once won a bottle of wine from some Intel manager), about at the
same time, one about the ARM architecture, one accompanying Donald
Knuth's 64th birthday regarding MMIX, and if you want one about
Smalltalk.  And indeed most think it is all about education,
despite the fact that each epoch seems to end regardless of what you do.
It has always been the fight to make people reflect, realize, and
be good with it.

--steffen
|
|Der Kragenbaer,                The moon bear,
|der holt sich munter           he cheerfully and one by one
|einen nach dem anderen runter  wa.ks himself off
|(By Robert Gernhardt)


From lm at mcvoy.com  Fri Jun 29 00:40:17 2018
From: lm at mcvoy.com (Larry McVoy)
Date: Thu, 28 Jun 2018 07:40:17 -0700
Subject: [TUHS] PDP-11 legacy, C, and modern architectures
In-Reply-To: <20180628141538.GB663@thunk.org>
References: <af780f9fb5c14e37f12ce5c2a4e40376669c730f@webmail.yaccman.com>
 <81277CC3-3C4A-49B8-8720-CFAD22BB28F8@bitblocks.com>
 <20180628141538.GB663@thunk.org>
Message-ID: <20180628144017.GB21688@mcvoy.com>

On Thu, Jun 28, 2018 at 10:15:38AM -0400, Theodore Y. Ts'o wrote:
> And the Market spoke.  And shortly thereafter, Java fell under the
> control of Oracle....  And Intel would proceed to further dominate the
> landscape.

Yep.  Lots of cpus are nice when doing a parallel make but there is 
always some task that just uses one cpu.  And then you want the fastest
one you can get.  Lots of wimpy cpus is just, um, wimpy.


From perry at piermont.com  Fri Jun 29 00:43:29 2018
From: perry at piermont.com (Perry E. Metzger)
Date: Thu, 28 Jun 2018 10:43:29 -0400
Subject: [TUHS] PDP-11 legacy, C, and modern architectures
In-Reply-To: <20180628141538.GB663@thunk.org>
References: <af780f9fb5c14e37f12ce5c2a4e40376669c730f@webmail.yaccman.com>
 <81277CC3-3C4A-49B8-8720-CFAD22BB28F8@bitblocks.com>
 <20180628141538.GB663@thunk.org>
Message-ID: <20180628104329.754d2c19@jabberwock.cb.piermont.com>

On Thu, 28 Jun 2018 10:15:38 -0400 "Theodore Y. Ts'o" <tytso at mit.edu>
wrote:
> I'll note that Sun made a big bet (one of its last failed bets) on
> this architecture in the form of the Niagara architecture, with a
> large number of super "wimpy" cores.  It was the same basic idea
> --- we can't make big fast cores (since that would lead to high
> ILP's, complex register rewriting, and lead to cache-oriented
> security vulnerabilities like Spectre and Meltdown) --- so instead,
> let's make lots of tiny wimpy cores, and let programmers write
> highly threaded programs!  They essentially made a bet on the
> web-based microservice model which you are promoting.
>
> And the Market spoke.  And shortly thereafter, Java fell under the
> control of Oracle....  And Intel would proceed to further dominate
> the landscape.

I'll be contrary for a moment.

Huge numbers of wimpy cores is the model already dominating the
world. Clock rates aren't rising any longer, but (in spite of claims
to the contrary) Moore's law continues, very slightly with shrinkage
of feature size (which is about to end) and more dominantly with
increasing the number of transistors per square mil by going into
3D. Dynamic power usage also scales as the square of clock rate, so
larger numbers of lower clocked cores save a boatload of heat, and at
some point you have too many transistors in too small an area to take
heat out if you're generating too much.
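The heat argument above can be put in numbers with a toy model. This is a
sketch only, using the quadratic power/frequency scaling stated above (real
chips also scale voltage with frequency, so the exponent is often nearer
three); every constant is invented for illustration:

```python
# Toy model: dynamic power per core ~ f^2, per the quadratic scaling
# claimed above.  Compare one fast core with n slower cores delivering
# the same aggregate cycles per second on perfectly parallel work.

def dynamic_power(freq_ghz, k=1.0):
    """Relative dynamic power of one core, P = k * f^2 (arbitrary units)."""
    return k * freq_ghz ** 2

one_fast = dynamic_power(4.0)        # one 4 GHz core
four_wimpy = 4 * dynamic_power(1.0)  # four 1 GHz cores, same total cycles

# 16.0 vs 4.0: at equal throughput, the wimpy cores dissipate far less.
print(one_fast, four_wimpy)
```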

Some data points:

1. All the largest compute platforms out there (Google, Amazon, etc.)
   are based on vast numbers of processors integrated into a giant
   distributed system. You might not see this as evidence for the
   trend, but it is. No one can make a single processor that's much
   faster than what you get for a few hundred bucks from Intel or AMD,
   so the only way to get more compute is to scale out, and this is
   now so common that no one even thinks of it as odd.

2. The most powerful compute engines out there within a single box
   aren't Intel microprocessors, they're GPUs, and anyone doing really
   serious computing now uses GPUs to do it. Machine learning,
   scientific computing, etc. has become dependent on the things, and
   they're basically giant bunches of tiny processors. Ways to program
   these things have become very important.

   Oh, and your iPhone or Android device is now pretty lopsided. By
   far most of the compute power in it comes from its GPUs, though
   there are a ridiculous number of general purpose CPUs in these
   things too.

3. Even "normal" hacking on "normal" CPUs on a singe box now runs on
   lots of fairly wimpy processors. I do lots of compiler hacking
   these days, and my normal lab machine has 64 cores, 128
   hyperthreads, and a half T of RAM. It rebuilds one system I need to
   recompile a lot, which takes like 45 minutes to build on my laptop,
   in two minutes. Note that this box is on the older side; the
   better ones in the lab have a lot more RAM, newer and better
   processors, more of them, etc.

   This box also costs a ridiculously small fraction of what I cost, a
   serious inversion of the old days when I started out and a machine
   cost a whole lot more compared to a human.

   Sadly my laptop is stalled out and hasn't gotten any better in
   forever, but the machines in the lab still keep getting
   better. However, the only way to take advantage of that is
   parallelism. Luckily parallel builds work pretty well.
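The parallel-build point can be sketched with a toy dispatcher: independent
translation units are exactly the shape of work `make -j` farms out. The
"compiler" and the file names below are made up for illustration:

```python
# Toy parallel "build": compile N independent translation units with a
# worker pool, the same embarrassingly parallel shape `make -j` exploits.
from concurrent.futures import ThreadPoolExecutor

def compile_unit(name):
    # Stand-in for invoking a real compiler on one source file.
    return name.rsplit(".", 1)[0] + ".o"

units = ["main.c", "util.c", "tty.c", "mem.c"]

# The compiles have no interdependencies, so they can run concurrently;
# Executor.map still returns results in input order.
with ThreadPoolExecutor(max_workers=4) as pool:
    objects = list(pool.map(compile_unit, units))

print(objects)  # ['main.o', 'util.o', 'tty.o', 'mem.o']
```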

Perry
-- 
Perry E. Metzger		perry at piermont.com


From perry at piermont.com  Fri Jun 29 00:55:38 2018
From: perry at piermont.com (Perry E. Metzger)
Date: Thu, 28 Jun 2018 10:55:38 -0400
Subject: [TUHS] PDP-11 legacy, C, and modern architectures
In-Reply-To: <20180628144017.GB21688@mcvoy.com>
References: <af780f9fb5c14e37f12ce5c2a4e40376669c730f@webmail.yaccman.com>
 <81277CC3-3C4A-49B8-8720-CFAD22BB28F8@bitblocks.com>
 <20180628141538.GB663@thunk.org> <20180628144017.GB21688@mcvoy.com>
Message-ID: <20180628105538.65f82615@jabberwock.cb.piermont.com>

On Thu, 28 Jun 2018 07:40:17 -0700 Larry McVoy <lm at mcvoy.com> wrote:
> On Thu, Jun 28, 2018 at 10:15:38AM -0400, Theodore Y. Ts'o wrote:
> > And the Market spoke.  And shortly thereafter, Java fell under the
> > control of Oracle....  And Intel would proceed to further
> > dominate the landscape.  
> 
> Yep.  Lots of cpus are nice when doing a parallel make but there is 
> always some task that just uses one cpu.  And then you want the
> fastest one you can get.  Lots of wimpy cpus is just, um, wimpy.

And yet, there are few single core devices I can buy any more other
than embedded processors. Even the $35 Raspberry Pis are now four core
machines, and I'm sure they'll be eight core devices soon.

If you want a single core Unix machine, you need to buy the $5
Raspberry Pi Zero, which is the only single core Unix box I still can
think of on the market.

Perry
-- 
Perry E. Metzger		perry at piermont.com


From lm at mcvoy.com  Fri Jun 29 00:56:09 2018
From: lm at mcvoy.com (Larry McVoy)
Date: Thu, 28 Jun 2018 07:56:09 -0700
Subject: [TUHS] PDP-11 legacy, C, and modern architectures
In-Reply-To: <20180628104329.754d2c19@jabberwock.cb.piermont.com>
References: <af780f9fb5c14e37f12ce5c2a4e40376669c730f@webmail.yaccman.com>
 <81277CC3-3C4A-49B8-8720-CFAD22BB28F8@bitblocks.com>
 <20180628141538.GB663@thunk.org>
 <20180628104329.754d2c19@jabberwock.cb.piermont.com>
Message-ID: <20180628145609.GD21688@mcvoy.com>

On Thu, Jun 28, 2018 at 10:43:29AM -0400, Perry E. Metzger wrote:
> On Thu, 28 Jun 2018 10:15:38 -0400 "Theodore Y. Ts'o" <tytso at mit.edu>
> wrote:
> > I'll note that Sun made a big bet (one of its last failed bets) on
> > this architecture in the form of the Niagra architecture, with a
> > large number of super "wimpy" cores.  It was the same basic idea
> > --- we can't make big fast cores (since that would lead to high
> > ILP's, complex register rewriting, and lead to cache-oriented
> > security vulnerabilities like Spectre and Meltdown) --- so instead,
> > let's make lots of tiny wimpy cores, and let programmers write
> > highly threaded programs!  They essentially made a bet on the
> > web-based microservice model which you are promoting.
> >
> > And the Market spoke.  And shortly thereafter, Java fell under the
> > control of Oracle....  And Intel would proceed to further dominate
> > the landscape.
> 
> I'll be contrary for a moment.
> 
> Huge numbers of wimpy cores is the model already dominating the
> world. 

Got a source that backs up that claim?  I was recently dancing with
Netflix and they don't match your claim, nor do the other content
delivery networks, they want every cycle they can get.


From lm at mcvoy.com  Fri Jun 29 00:58:25 2018
From: lm at mcvoy.com (Larry McVoy)
Date: Thu, 28 Jun 2018 07:58:25 -0700
Subject: [TUHS] PDP-11 legacy, C, and modern architectures
In-Reply-To: <20180628105538.65f82615@jabberwock.cb.piermont.com>
References: <af780f9fb5c14e37f12ce5c2a4e40376669c730f@webmail.yaccman.com>
 <81277CC3-3C4A-49B8-8720-CFAD22BB28F8@bitblocks.com>
 <20180628141538.GB663@thunk.org> <20180628144017.GB21688@mcvoy.com>
 <20180628105538.65f82615@jabberwock.cb.piermont.com>
Message-ID: <20180628145825.GE21688@mcvoy.com>

On Thu, Jun 28, 2018 at 10:55:38AM -0400, Perry E. Metzger wrote:
> On Thu, 28 Jun 2018 07:40:17 -0700 Larry McVoy <lm at mcvoy.com> wrote:
> > On Thu, Jun 28, 2018 at 10:15:38AM -0400, Theodore Y. Ts'o wrote:
> > > And the Market spoke.  And shortly thereafter, Java fell under the
> > > control of Oracle....  And Intel would proceed to further
> > > dominate the landscape.  
> > 
> > Yep.  Lots of cpus are nice when doing a parallel make but there is 
> > always some task that just uses one cpu.  And then you want the
> > fastest one you can get.  Lots of wimpy cpus is just, um, wimpy.
> 
> And yet, there are few single core devices I can buy any more other
> than embedded processors. Even the $35 Raspberry Pis are now four core
> machines, and I'm sure they'll be eight core devices soon.
> 
> If you want a single core Unix machine, you need to buy the $5
> Raspberry Pi Zero, which is the only single core Unix box I still can
> think of on the market.

You completely missed my point.  I never said I was in favor of single
cpu systems; I said I want the speed of a single cpu to be fast no matter
how many of them I get.  The opposite of wimpy.

Which was, I think, Ted's point as well when he said the market rejected
the idea of lots of wimpy cpus.


From imp at bsdimp.com  Fri Jun 29 01:07:28 2018
From: imp at bsdimp.com (Warner Losh)
Date: Thu, 28 Jun 2018 09:07:28 -0600
Subject: [TUHS] PDP-11 legacy, C, and modern architectures
In-Reply-To: <20180628145609.GD21688@mcvoy.com>
References: <af780f9fb5c14e37f12ce5c2a4e40376669c730f@webmail.yaccman.com>
 <81277CC3-3C4A-49B8-8720-CFAD22BB28F8@bitblocks.com>
 <20180628141538.GB663@thunk.org>
 <20180628104329.754d2c19@jabberwock.cb.piermont.com>
 <20180628145609.GD21688@mcvoy.com>
Message-ID: <CANCZdfqXjhssV+CqDLvY_fxeRm0zdFt0YwRHMDzYdD2XRjbAWg@mail.gmail.com>

On Thu, Jun 28, 2018 at 8:56 AM, Larry McVoy <lm at mcvoy.com> wrote:

> On Thu, Jun 28, 2018 at 10:43:29AM -0400, Perry E. Metzger wrote:
> > On Thu, 28 Jun 2018 10:15:38 -0400 "Theodore Y. Ts'o" <tytso at mit.edu>
> > wrote:
> > > I'll note that Sun made a big bet (one of its last failed bets) on
> > > this architecture in the form of the Niagra architecture, with a
> > > large number of super "wimpy" cores.  It was the same basic idea
> > > --- we can't make big fast cores (since that would lead to high
> > > ILP's, complex register rewriting, and lead to cache-oriented
> > > security vulnerabilities like Spectre and Meltdown) --- so instead,
> > > let's make lots of tiny wimpy cores, and let programmers write
> > > highly threaded programs!  They essentially made a bet on the
> > > web-based microservice model which you are promoting.
> > >
> > > And the Market spoke.  And shortly thereafter, Java fell under the
> > > control of Oracle....  And Intel would proceed to further dominate
> > > the landscape.
> >
> > I'll be contrary for a moment.
> >
> > Huge numbers of wimpy cores is the model already dominating the
> > world.
>
> Got a source that backs up that claim?  I was recently dancing with
> Netflix and they don't match your claim, nor do the other content
> delivery networks, they want every cycle they can get.
>

Well, we want to be able to manage 100G or more of encrypted traffic sanely.

We currently get this by lots (well 20) of not-so-wimpy cores to do all the
work since none of the offload solutions can scale.

The problem is that there are no systems with lots (100's) of wimpy cores
that we can do the offload with that also have enough bandwidth to keep up.
And even if there were, things like NUMA and slow interprocessor connects
make the boatloads of cores a lot trickier to utilize
than they should be....

Then again, a lot of what we do is rather special case, even if we do use
off the shelf technology to get there...

Warner
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://minnie.tuhs.org/pipermail/tuhs/attachments/20180628/e1a40966/attachment.html>

From clemc at ccc.com  Fri Jun 29 01:37:43 2018
From: clemc at ccc.com (Clem Cole)
Date: Thu, 28 Jun 2018 11:37:43 -0400
Subject: [TUHS] PDP-11 legacy, C, and modern architectures
In-Reply-To: <20180628144017.GB21688@mcvoy.com>
References: <af780f9fb5c14e37f12ce5c2a4e40376669c730f@webmail.yaccman.com>
 <81277CC3-3C4A-49B8-8720-CFAD22BB28F8@bitblocks.com>
 <20180628141538.GB663@thunk.org> <20180628144017.GB21688@mcvoy.com>
Message-ID: <CAC20D2NMPscxBpHif4YY8Uwnj8gd_9x997iU1AYHaWHPaR4cqA@mail.gmail.com>

On Thu, Jun 28, 2018 at 10:40 AM, Larry McVoy <lm at mcvoy.com> wrote:

> Yep.  Lots of cpus are nice when doing a parallel make but there is
> always some task that just uses one cpu.  And then you want the fastest
> one you can get.  Lots of wimpy cpus is just, um, wimpy.
>

Larry Stewart would be better placed to reply as SiCortex's CTO - but that
was the basic logic behind their system -- lots of cheap MIPS chips. Truth is
they made a pretty neat system and it scaled pretty well.   My observation
is that they, like most of the attempts I have been a part of, show that *in
the end architecture does not matter nearly as much as economics*.

In my career I have built 4 or 5 special-architecture systems.  You can
basically live through one or two generations using some technology
argument and 'win'.   But in the end, people buy computers to do a job and
they really don't give a s*t about how the job gets done, as long as it
gets done cheaply.   Whoever wins the economic war has the 'winning'
architecture.   Look, x86/Intel*64 would never win awards as a 'Computer
Science Architecture'; likewise on the SW side: Fortran *vs*. Algol
*etc*...; Windows beat UNIX Workstations for the same reasons... as we
all know.

Hey, I used to race sailboats ...  there is a term called a 'sea lawyer' -
where you are screaming that you have been fouled while you are drowning
as your boat is sinking.   I keep thinking about it here.   You can scream
all you want about the goodness or badness of an architecture or language,
but in the end, users really don't care.   They buy computers to do a job.
You really cannot forget that that is the purpose.

As Larry says: lots of wimpy cpus is just wimpy.    Hey, Intel, nVidia and
AMD's job is to sell expensive hot rocks.   They are going to do what they
can to make those rocks useful for people.  They want to help people get
their jobs done -- period. That is what they do.   The Atmel and RPi folks
take the 'jelly bean' approach - selling enough to make it worth it for
the chip manufacturer, and if the simple machine can do the customer's job,
very cool.  In those cases simple is good (hey, the PDP-11 is pretty
complex compared to, say, the 6502).

So, I think the author of the paper trashing C as too high level misses
the point, and arguing about architecture is silly.  In the end it is
about what it costs to get the job done.   People will use whatever is
the most economical for them.

Clem
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://minnie.tuhs.org/pipermail/tuhs/attachments/20180628/b9cd7ad5/attachment.html>

From tfb at tfeb.org  Fri Jun 29 01:39:58 2018
From: tfb at tfeb.org (Tim Bradshaw)
Date: Thu, 28 Jun 2018 16:39:58 +0100
Subject: [TUHS] PDP-11 legacy, C, and modern architectures
In-Reply-To: <20180628145825.GE21688@mcvoy.com>
References: <af780f9fb5c14e37f12ce5c2a4e40376669c730f@webmail.yaccman.com>
 <81277CC3-3C4A-49B8-8720-CFAD22BB28F8@bitblocks.com>
 <20180628141538.GB663@thunk.org> <20180628144017.GB21688@mcvoy.com>
 <20180628105538.65f82615@jabberwock.cb.piermont.com>
 <20180628145825.GE21688@mcvoy.com>
Message-ID: <2B710879-7659-47A4-AA86-03F232F7B78B@tfeb.org>

> On 28 Jun 2018, at 15:58, Larry McVoy <lm at mcvoy.com> wrote:
> 
> You completely missed my point, I never said I was in favor of single
> cpu systems, I said I the speed of a single cpu to be fast no matter
> how many of them I get.  The opposite of wimpy.

And this also misses the point, I think.  Defining a core as 'wimpy' or not is dependent on when you make the definition: the Cray-1 was not wimpy when it was built, but it is now.  The interesting question is what happens to the performance of serial code on a core over time.  For a long time it has increased, famously, approximately exponentially.  There is good evidence that this is no longer the case and that per-core performance will fall off (or has fallen off in fact) that curve and may even become asymptotically constant.  If that's true, then in due course *all cores will become 'wimpy'*, and to exploit the performance available from systems we will *have* to deal with parallelism.

(Note I've said 'core' not 'CPU' for clarity even when it's anachronistic: I never know what the right terminology is now.)


From lm at mcvoy.com  Fri Jun 29 02:02:02 2018
From: lm at mcvoy.com (Larry McVoy)
Date: Thu, 28 Jun 2018 09:02:02 -0700
Subject: [TUHS] PDP-11 legacy, C, and modern architectures
In-Reply-To: <2B710879-7659-47A4-AA86-03F232F7B78B@tfeb.org>
References: <af780f9fb5c14e37f12ce5c2a4e40376669c730f@webmail.yaccman.com>
 <81277CC3-3C4A-49B8-8720-CFAD22BB28F8@bitblocks.com>
 <20180628141538.GB663@thunk.org> <20180628144017.GB21688@mcvoy.com>
 <20180628105538.65f82615@jabberwock.cb.piermont.com>
 <20180628145825.GE21688@mcvoy.com>
 <2B710879-7659-47A4-AA86-03F232F7B78B@tfeb.org>
Message-ID: <20180628160202.GF21688@mcvoy.com>

On Thu, Jun 28, 2018 at 04:39:58PM +0100, Tim Bradshaw wrote:
> > On 28 Jun 2018, at 15:58, Larry McVoy <lm at mcvoy.com> wrote:
> > 
> > You completely missed my point, I never said I was in favor of single
> > cpu systems, I said I the speed of a single cpu to be fast no matter
> > how many of them I get.  The opposite of wimpy.
>
> And this also misses the point, I think.  Defining a core as 'wimpy'
> or not is dependent on when you make the definition: the Cray-1 was not
> wimpy when it was built, but it is now.

That's not what I, or Ted, or the Market was saying.  We were not comparing
yesterday's cpu against todays.  We were saying that at any given moment,
a faster processor is better than more processors that are slower.

That's not an absolute, obviously.  If I am running AWS and I get 10x
the total CPU processing speed at the same power budget, yeah, that's
interesting.

But for people who care about performance, and there are a lot of them,
more but slower is less desirable than fewer but faster.  There's too much
stuff that hasn't been (or can't be) parallelized (and I'll note here
that I'm the guy that built the first clustered server at Sun; I can
argue the parallel case just fine).
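The "too much serial stuff" point is what Amdahl's law quantifies: the
unparallelized fraction caps the speedup no matter how many cores you add,
so single-core speed still matters. A small sketch (the 95% parallel
fraction is an arbitrary illustration, not a measurement):

```python
# Amdahl's law: with parallel fraction p of the work and n cores,
#   speedup(p, n) = 1 / ((1 - p) + p / n)

def amdahl_speedup(p, n):
    """Ideal speedup of a job with parallel fraction p on n cores."""
    return 1.0 / ((1.0 - p) + p / n)

# Even with 95% of the work parallelized, 64 wimpy cores deliver only
# ~15.4x, and the limit as n -> infinity is 1/0.05 = 20x.  The serial
# 5% is entirely at the mercy of single-core speed.
print(round(amdahl_speedup(0.95, 64), 1))  # 15.4
```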

People still care about performance and always will.  Yeah, for your
laptop or whatever you could probably use what you have for the next
10 years and be fine.   But when you are doing real work, sorting
the genome, machine learning, whatever, performance is a thing and
lots of wimpy cpus are not.


From tfb at tfeb.org  Fri Jun 29 02:41:24 2018
From: tfb at tfeb.org (Tim Bradshaw)
Date: Thu, 28 Jun 2018 17:41:24 +0100
Subject: [TUHS] PDP-11 legacy, C, and modern architectures
In-Reply-To: <20180628160202.GF21688@mcvoy.com>
References: <af780f9fb5c14e37f12ce5c2a4e40376669c730f@webmail.yaccman.com>
 <81277CC3-3C4A-49B8-8720-CFAD22BB28F8@bitblocks.com>
 <20180628141538.GB663@thunk.org> <20180628144017.GB21688@mcvoy.com>
 <20180628105538.65f82615@jabberwock.cb.piermont.com>
 <20180628145825.GE21688@mcvoy.com>
 <2B710879-7659-47A4-AA86-03F232F7B78B@tfeb.org>
 <20180628160202.GF21688@mcvoy.com>
Message-ID: <79022674-0FFA-4B1B-8A27-4C403D51540E@tfeb.org>

On 28 Jun 2018, at 17:02, Larry McVoy <lm at mcvoy.com> wrote:
> 
> But when you are doing real work, sorting
> the genome, machine learning, whatever, performance is a thing and
> lots of wimpy cpus are not.

But lots of (relatively) wimpy CPUs is what physics says you will have and you really can't argue with physics.

I think this is definitely off-topic now so I won't reply further: feel free to mail me privately.


From paul.winalski at gmail.com  Fri Jun 29 02:45:39 2018
From: paul.winalski at gmail.com (Paul Winalski)
Date: Thu, 28 Jun 2018 12:45:39 -0400
Subject: [TUHS] PDP-11 legacy, C, and modern architectures
In-Reply-To: <20180628141538.GB663@thunk.org>
References: <af780f9fb5c14e37f12ce5c2a4e40376669c730f@webmail.yaccman.com>
 <81277CC3-3C4A-49B8-8720-CFAD22BB28F8@bitblocks.com>
 <20180628141538.GB663@thunk.org>
Message-ID: <CABH=_VQyoKgCcfRGJgpac5freEGBAeGetr9v9L8bTh0Oe6PYzA@mail.gmail.com>

On 6/28/18, Theodore Y. Ts'o <tytso at mit.edu> wrote:
>
> It's the same mistake
> Chisnall made when he asserted it was a myth that parallel programming
> is "hard" for humans, and that "all you needed" was
> the right language.

I've heard the "all you need is the right language" solution to the
parallel processing development problem since I joined DEC in 1980.
Here we are in 2018 and nobody's found that "right language" yet.

Parallel programming *is* hard for humans.  Very few people can cope
with it, or with the nasty bugs that crop up when you get it wrong.

> The problem is that not all people are interested in solving problems
> which are amenable to embarassingly parallel algorithms.

Most interesting problems in fact are not embarrassingly parallel.
They tend to have data interdependencies.

There have been some advancements in software development tools to
make parallel programming easier.  Modern compilers are getting pretty
good at loop analysis to discover opportunities for parallel execution
and vectorization in sequentially-written code.
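One hedged illustration of why that loop analysis matters: iterations that
are independent can be vectorized or farmed out across cores, while a
loop-carried dependency cannot be parallelized without restructuring the
algorithm. A toy sketch:

```python
# Independent iterations: each a[i] depends only on b[i], so a compiler
# (or a pool of cores) may process them in any order.
b = [1, 2, 3, 4]
a = [2 * x for x in b]

# Loop-carried dependency: each prefix[i] needs prefix[i-1], so the
# obvious loop is inherently serial.  (Parallel prefix-sum algorithms
# exist, but they require restructuring, not just "the right language".)
prefix = []
total = 0
for x in b:
    total += x
    prefix.append(total)

print(a, prefix)  # [2, 4, 6, 8] [1, 3, 6, 10]
```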

-Paul W.


From paul.winalski at gmail.com  Fri Jun 29 02:59:28 2018
From: paul.winalski at gmail.com (Paul Winalski)
Date: Thu, 28 Jun 2018 12:59:28 -0400
Subject: [TUHS] PDP-11 legacy, C, and modern architectures
In-Reply-To: <79022674-0FFA-4B1B-8A27-4C403D51540E@tfeb.org>
References: <af780f9fb5c14e37f12ce5c2a4e40376669c730f@webmail.yaccman.com>
 <81277CC3-3C4A-49B8-8720-CFAD22BB28F8@bitblocks.com>
 <20180628141538.GB663@thunk.org> <20180628144017.GB21688@mcvoy.com>
 <20180628105538.65f82615@jabberwock.cb.piermont.com>
 <20180628145825.GE21688@mcvoy.com>
 <2B710879-7659-47A4-AA86-03F232F7B78B@tfeb.org>
 <20180628160202.GF21688@mcvoy.com>
 <79022674-0FFA-4B1B-8A27-4C403D51540E@tfeb.org>
Message-ID: <CABH=_VSUSX4WtnbcnaiUJPSLJKtvRdDvTfMwfmsD=NVds037Zg@mail.gmail.com>

In re modern architectures, Admiral Grace Hopper gave a talk at DEC's
Nashua software development plant ca. 1982 on the future of computer
architecture.  She predicted the modern multi-core situation.  As she
put it, if you were a carter and had to haul double the load in your
cart, you wouldn't breed a horse twice as big--you'd hook up a
team of horses.

Another problem with solving problems fast is I/O bandwidth.  Much
digital audio file processing, for example, is an embarrassingly parallel
problem.  But no matter how many cores you throw at the problem, and
no matter how fast they are, the time it takes to process the audio
file is limited by how fast you can get the original off the disk and
the modified file back onto the disk.  Or main memory, for that
matter.  Relative to the processor speed, a cache miss takes an
eternity to resolve.  Get your cache management wrong and your program
ends up running an order of magnitude slower.  It's a throwback to the
situation in the 1960s, where compute speeds were comparable to main
memory speeds, but vastly higher than the I/O transmission rates to
disk and tape.  Only now first-level cache is the new "main memory",
and for practical purposes main memory is a slow storage medium.
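A roofline-style toy model makes the bandwidth point concrete: total time
is bounded by the slower of compute and data movement, and only the compute
term scales with cores. All the numbers below are invented for illustration:

```python
# Roofline-ish toy: processing a file is bounded by the slower of compute
# and I/O.  Adding cores speeds up only the compute term; the I/O (or
# memory) term is a hard floor.

def run_time(bytes_to_move, io_bw_bytes_s, work_ops, ops_per_s, cores):
    compute = work_ops / (ops_per_s * cores)  # scales with core count
    io = bytes_to_move / io_bw_bytes_s        # does not scale with cores
    return max(compute, io)

# Made-up workload: 1 GB file, 500 MB/s disk, 1e10 ops, 1e9 ops/s/core.
t1 = run_time(1e9, 5e8, 1e10, 1e9, cores=1)    # compute-bound: 10 s
t64 = run_time(1e9, 5e8, 1e10, 1e9, cores=64)  # I/O-bound: 2 s floor

print(t1, t64)  # 10.0 2.0 -- 64 cores buy 5x, not 64x
```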

-Paul W.


From lm at mcvoy.com  Fri Jun 29 03:09:55 2018
From: lm at mcvoy.com (Larry McVoy)
Date: Thu, 28 Jun 2018 10:09:55 -0700
Subject: [TUHS] PDP-11 legacy, C, and modern architectures
In-Reply-To: <79022674-0FFA-4B1B-8A27-4C403D51540E@tfeb.org>
References: <af780f9fb5c14e37f12ce5c2a4e40376669c730f@webmail.yaccman.com>
 <81277CC3-3C4A-49B8-8720-CFAD22BB28F8@bitblocks.com>
 <20180628141538.GB663@thunk.org> <20180628144017.GB21688@mcvoy.com>
 <20180628105538.65f82615@jabberwock.cb.piermont.com>
 <20180628145825.GE21688@mcvoy.com>
 <2B710879-7659-47A4-AA86-03F232F7B78B@tfeb.org>
 <20180628160202.GF21688@mcvoy.com>
 <79022674-0FFA-4B1B-8A27-4C403D51540E@tfeb.org>
Message-ID: <20180628170955.GH21688@mcvoy.com>

On Thu, Jun 28, 2018 at 05:41:24PM +0100, Tim Bradshaw wrote:
> On 28 Jun 2018, at 17:02, Larry McVoy <lm at mcvoy.com> wrote:
> > But when you are doing real work, sorting
> > the genome, machine learning, whatever, performance is a thing and
> > lots of wimpy cpus are not.
> 
> But lots of (relatively) wimpy CPUs is what physics says you will have
> and you really can't argue with physics.

I'm not sure how people keep missing the original point.  Which was:
the market won't choose a bunch of wimpy cpus when it can get faster
ones.  It wasn't about the physics (which I'm not arguing with), it 
was about a choice between lots of wimpy cpus and a smaller number of
fast cpus.  The market wants the latter, as Ted said, Sun bet heavily
on the former and is no more.

If you want to bet on what Sun did, feel free, but do so knowing that
people have tried to tell you that is a failed approach.


From mparson at bl.org  Fri Jun 29 04:36:59 2018
From: mparson at bl.org (Michael Parson)
Date: Thu, 28 Jun 2018 13:36:59 -0500 (CDT)
Subject: [TUHS] off-topic list
In-Reply-To: <201806280557.w5S5vESf027039@freefriends.org>
References: <20180625161016.C16BA18C082@mercury.lcs.mit.edu>
 <CAC20D2O=ncZ3ybjgTyMcM5+rGBPnJX1-E-312sPcm4wb+GE8hA@mail.gmail.com>
 <3950d997-d310-7cfc-30bf-237e9d872739@spamtrap.tnetconsulting.net>
 <CAC20D2Nt+CuDriSh18ypOiG3+pJm+yvy4-VQ0wYUa8A7QiCrYA@mail.gmail.com>
 <alpine.DEB.2.11.1806261408380.916@grey.csi.cam.ac.uk>
 <CANCZdfpEuvUbBXUyxoKCHoA4cLkRHJ-8KoFOSVBgdOsAOsX6PQ@mail.gmail.com>
 <CAC20D2MEsnhU-Vw_te_hTPA5ht4V5h=CUnni9TqXXZfHrKucWg@mail.gmail.com>
 <alpine.NEB.2.20.1806271632090.7100@neener.bl.org>
 <1D3E6798-6370-4C7C-98F9-3399AA88C76E@ccc.com>
 <201806280557.w5S5vESf027039@freefriends.org>
Message-ID: <alpine.NEB.2.20.1806281333350.23807@neener.bl.org>

On Wed, 27 Jun 2018, arnold at skeeve.com wrote:
> Date: Wed, 27 Jun 2018 23:57:14 -0600
> From: arnold at skeeve.com
> To: mparson at bl.org, clemc at ccc.com
> Cc: tuhs at minnie.tuhs.org
> Subject: Re: [TUHS] off-topic list
>
> Clem cole <clemc at ccc.com> wrote:
>
>> Makes sense.
>>
>> Sent from my PDP-7 Running UNIX V0 expect things to be almost but not quite.
>>
>>> On Jun 27, 2018, at 5:33 PM, Michael Parson <mparson at bl.org> wrote:
>>>
>>>> On Tue, 26 Jun 2018, Clem Cole wrote:
>>>>
>>>> Date: Tue, 26 Jun 2018 17:16:36 -0400
>>>> From: Clem Cole <clemc at ccc.com>
>>>> To: Warner Losh <imp at bsdimp.com>
>>>> Cc: TUHS main list <tuhs at minnie.tuhs.org>,
>>>>    Grant Taylor <gtaylor at tnetconsulting.net>
>>>> Subject: Re: [TUHS] off-topic list
>>>> On Tue, Jun 26, 2018 at 2:04 PM, Warner Losh <imp at bsdimp.com> wrote:
>>>> ​Ok, that all sounds right and I'll take your word for it.  I
>>>> followed it only from the side and not directly as a customer, since
>>>> by then I was really not doing much VMS anything.  That said, I had
>>>> thought some of the original folks that were part of the PMDF work
>>>> were the same crew that did SOL (Michel Gien - the Pascal rewrite of
>>>> UNIX - whom I knew in those days from the OS side of the world).  I
>>>> also thought the reason why the firm was named after the TGV (and
>>>> yes I stand corrected on the name) was because they were French and at
>>>> the time the French bullet train was known for being one of the fastest
>>>> in the world and the French were very proud of it.
>>>
>>> I always thought TGV was "Three Guys and a VAX".
> 
> I'd heard "Two Guys and a Vax"...

Digging through google search results, I've found stuff suggesting that
it started as Two Guys and at some point they added a Third Guy.  My
VAX/VMS knowledge is long rotted, haven't touched it since the early 90s,
but ISTR having TGV Multinet on the VMS system I used, and "Three Guys
and a VAX" was the expansion I remember reading at the time.

-- 
Michael Parson
Pflugerville, TX
KF5LGQ


From perry at piermont.com  Fri Jun 29 05:42:46 2018
From: perry at piermont.com (Perry E. Metzger)
Date: Thu, 28 Jun 2018 15:42:46 -0400
Subject: [TUHS] PDP-11 legacy, C, and modern architectures
In-Reply-To: <20180628145609.GD21688@mcvoy.com>
References: <af780f9fb5c14e37f12ce5c2a4e40376669c730f@webmail.yaccman.com>
 <81277CC3-3C4A-49B8-8720-CFAD22BB28F8@bitblocks.com>
 <20180628141538.GB663@thunk.org>
 <20180628104329.754d2c19@jabberwock.cb.piermont.com>
 <20180628145609.GD21688@mcvoy.com>
Message-ID: <20180628154246.3a1ce74a@jabberwock.cb.piermont.com>

On Thu, 28 Jun 2018 07:56:09 -0700 Larry McVoy <lm at mcvoy.com> wrote:
> > Huge numbers of wimpy cores is the model already dominating the
> > world.   
> 
> Got a source that backs up that claim?  I was recently dancing with
> Netflix and they don't match your claim, nor do the other content
> delivery networks, they want every cycle they can get.

Netflix has how many machines? I'd say in general that principle
holds: this is the age of huge distributed computation systems, the
most you can pay for a single core before it tops out is in the
hundreds of dollars, not in the millions like it used to be. The high
end isn't very high up, and we scale by adding boxes and cores, not
by getting single CPUs that are unusually fast.

Taking the other way of looking at it, from what I understand,
CDN boxes are about I/O and not CPU, though I could be wrong. I can
ask some of the Netflix people, a former report of mine is one of the
people behind their front end cache boxes and we keep in touch.

Perry
-- 
Perry E. Metzger		perry at piermont.com


From paul.winalski at gmail.com  Fri Jun 29 05:55:15 2018
From: paul.winalski at gmail.com (Paul Winalski)
Date: Thu, 28 Jun 2018 15:55:15 -0400
Subject: [TUHS] PDP-11 legacy, C, and modern architectures
In-Reply-To: <20180628154246.3a1ce74a@jabberwock.cb.piermont.com>
References: <af780f9fb5c14e37f12ce5c2a4e40376669c730f@webmail.yaccman.com>
 <81277CC3-3C4A-49B8-8720-CFAD22BB28F8@bitblocks.com>
 <20180628141538.GB663@thunk.org>
 <20180628104329.754d2c19@jabberwock.cb.piermont.com>
 <20180628145609.GD21688@mcvoy.com>
 <20180628154246.3a1ce74a@jabberwock.cb.piermont.com>
Message-ID: <CABH=_VSwE51_a0XpMYfD0nzo0Wn1pRFtPCJdaq3vYGU6S=awkw@mail.gmail.com>

On 6/28/18, Perry E. Metzger <perry at piermont.com> wrote:
>
> Taking the other way of looking at it, from what I understand,
> CDN boxes are about I/O and not CPU, though I could be wrong.

That, and power consumption/heat dissipation.

-Paul W.


From perry at piermont.com  Fri Jun 29 06:37:49 2018
From: perry at piermont.com (Perry E. Metzger)
Date: Thu, 28 Jun 2018 16:37:49 -0400
Subject: [TUHS] PDP-11 legacy, C, and modern architectures
In-Reply-To: <20180628160202.GF21688@mcvoy.com>
References: <af780f9fb5c14e37f12ce5c2a4e40376669c730f@webmail.yaccman.com>
 <81277CC3-3C4A-49B8-8720-CFAD22BB28F8@bitblocks.com>
 <20180628141538.GB663@thunk.org> <20180628144017.GB21688@mcvoy.com>
 <20180628105538.65f82615@jabberwock.cb.piermont.com>
 <20180628145825.GE21688@mcvoy.com>
 <2B710879-7659-47A4-AA86-03F232F7B78B@tfeb.org>
 <20180628160202.GF21688@mcvoy.com>
Message-ID: <20180628163749.24410d93@jabberwock.cb.piermont.com>

So let's not forget: the original question was, do people need to do a
lot of parallel and concurrent computing these days. Keep that in
mind while thinking about this.

On Thu, 28 Jun 2018 09:02:02 -0700 Larry McVoy <lm at mcvoy.com> wrote:
> But for people who care about performance, and there are a lot of
> them, more but slower is less desirable than less but faster.

Sure, but where can you get fewer but faster any more? Yes, you can
get it up to a point, but that point tops out fast. Yes, you can get a
4GHz processor from Intel instead of a bunch of fairly low end
Broadcom ARM chips. However, that top end Intel chip isn't much faster
than it was a few years ago, and it's a pretty cheap and slow chip
compared to what people need, so one of them still isn't going to do
it for you, and so you still need to go parallel, and that means you
still need to write parallel code.

Thirty years ago, you could pay as much money as you wanted, up to
tens of millions, to get a faster single CPU for your work. The
difference between the processors on an IBM PC class machine and on a
Cray was Really Really Different. These days, Intel won't sell you
anything that costs more than a couple thousand dollars, and that
couple thousand dollar thing has many CPUs. The most you can pay per
core, for the highest end, is in the low hundreds depending on the
moment. Taking inflation into account, that's a silly low amount of
money. IBM will sell you some slightly more expensive high end POWER
stuff, but very few people buy that and besides that, there's pretty
much nothing.

So it doesn't matter even if you'd rather spend 100x to get a core
that's 10x faster than the top of what is offered, the 10x faster
thing doesn't exist. You're stuck. You've got top of the line 64 bit
x86 and maybe POWER and there's nothing else. 

So, yes, I agree, all things being equal, people will prefer to buy
the faster stuff, but at the moment, no one can get it, so instead,
we're in an age of loads of parallel machines and cores. Your maximal
fast core is in the hundreds of dollars, but you've got millions of
dollars of computing to do, so you buy tons of processors instead.

> People still care about performance and always will.  Yeah, for your
> laptop or whatever you could probably use what you have for the next
> 10 years and be fine.   But when you are doing real work, sorting
> the genome, machine learning, whatever, performance is a thing and
> lots of wimpy cpus are not.

For all those pieces of work, people use hundreds, thousands, or
hundreds of thousands of cores, depending on the job. Machine
learning, shotgun sequencing, etc., all depend on parallelism these
days. Sometimes people need to buy top end processors for that, but
even then, they have to buy a _ton_ of top end processors because any
given one is too small to do a significant fraction of the work.

So, circling back to the original discussion, languages that don't
let you express such algorithms well are now a problem.

Perry
-- 
Perry E. Metzger		perry at piermont.com


From imp at bsdimp.com  Fri Jun 29 06:42:47 2018
From: imp at bsdimp.com (Warner Losh)
Date: Thu, 28 Jun 2018 14:42:47 -0600
Subject: [TUHS] PDP-11 legacy, C, and modern architectures
In-Reply-To: <20180628154246.3a1ce74a@jabberwock.cb.piermont.com>
References: <af780f9fb5c14e37f12ce5c2a4e40376669c730f@webmail.yaccman.com>
 <81277CC3-3C4A-49B8-8720-CFAD22BB28F8@bitblocks.com>
 <20180628141538.GB663@thunk.org>
 <20180628104329.754d2c19@jabberwock.cb.piermont.com>
 <20180628145609.GD21688@mcvoy.com>
 <20180628154246.3a1ce74a@jabberwock.cb.piermont.com>
Message-ID: <CANCZdfrj1t=DvgBmYfBNuEUzXDyFZiY=uCzK4a_2rqvtPmO_NA@mail.gmail.com>

On Thu, Jun 28, 2018 at 1:42 PM, Perry E. Metzger <perry at piermont.com>
wrote:

> On Thu, 28 Jun 2018 07:56:09 -0700 Larry McVoy <lm at mcvoy.com> wrote:
> > > Huge numbers of wimpy cores is the model already dominating the
> > > world.
> >
> > Got a source that backs up that claim?  I was recently dancing with
> > Netflix and they don't match your claim, nor do the other content
> > delivery networks, they want every cycle they can get.
>
> Netflix has how many machines?


We generally say we have tens of thousands of machines deployed worldwide
in our CDN. We don't give out specific numbers though.


> I'd say in general that principle
> holds: this is the age of huge distributed computation systems, the
> most you can pay for a single core before it tops out is in the
> hundreds of dollars, not in the millions like it used to be. The high
> end isn't very high up, and we scale by adding boxes and cores, not
> by getting single CPUs that are unusually fast.
>
> Taking the other way of looking at it, from what I understand,
> CDN boxes are about I/O and not CPU, though I could be wrong. I can
> ask some of the Netflix people, a former report of mine is one of the
> people behind their front end cache boxes and we keep in touch.


I can tell you it's about both. We recently started encrypting all traffic,
which requires a crapton of CPU. Plus, we're doing sophisticated network
flow modeling to reduce congestion, which takes CPU. On our 100G boxes,
which we get in the low 90's encrypted, we have some spare CPU, but almost
no spare memory bandwidth and our PCI lanes are full of either 100G network
traffic or 4-6 NVMe drives delivering content up at about 85-90Gbps.

Most of our other boxes are the same, with the exception of the 'storage'
tier boxes. Those are definitely hard disk I/O bound.

Warner
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://minnie.tuhs.org/pipermail/tuhs/attachments/20180628/ee0f0b40/attachment.html>

From stewart at serissa.com  Fri Jun 29 06:37:53 2018
From: stewart at serissa.com (Lawrence Stewart)
Date: Thu, 28 Jun 2018 16:37:53 -0400
Subject: [TUHS] PDP-11 legacy, C, and modern architectures
In-Reply-To: <CAC20D2NMPscxBpHif4YY8Uwnj8gd_9x997iU1AYHaWHPaR4cqA@mail.gmail.com>
References: <af780f9fb5c14e37f12ce5c2a4e40376669c730f@webmail.yaccman.com>
 <81277CC3-3C4A-49B8-8720-CFAD22BB28F8@bitblocks.com>
 <20180628141538.GB663@thunk.org> <20180628144017.GB21688@mcvoy.com>
 <CAC20D2NMPscxBpHif4YY8Uwnj8gd_9x997iU1AYHaWHPaR4cqA@mail.gmail.com>
Message-ID: <0B27CA75-F9AC-4DAE-95D9-858155980B12@serissa.com>

Thanks for the promotion to CTO Clem!  I was merely the software architect at SiCortex.

The SC systems had 6-core MIPS-64 CPUs at 700 MHz, two-channel DDR-2, and a really fast interconnect.  (Seriously fast for its day: 800 ns PUT, 1.6 uS GET, north of 2 GB/sec end-to-end; this was in 2008.)  The Achilles heel was low memory bandwidth due to a core limitation of a single outstanding miss.  The new chip would have fixed that (and given about 8x performance) but we ran out of money in 2009, which was not a good time to look for more.

We had delighted customers who appreciated the reliability and the network.  For latency-limited codes we did extremely well (GUPS) and still did well on the rest from a flops/watt perspective.  However, lots of commercial prospects didn’t have codes that needed the network and did need single-stream performance.  We talked to Urs Hölzle at Google and he was very clear - they needed fast single threads.  The low power was very nice, … but we were welcome to try and parallelize their benchmarks.

Which brings me back to the original issue - does C constrain our architectural thinking? 

I’ve spent a fair amount of time recently digging into Nyx, which is an adaptive mesh refinement cosmological hydrodynamics code.  The framework is in C++ because the inheritance stuff makes it straightforward to adapt the AMR machinery to different problems.  This isn’t the kind of horrible C++ that you can’t tell what is going to happen, but pretty close to C style in which you can visualize what the compiler will do.  The “solvers” tend to be Fortran modules, because, I think, Fortran is just sensible about multidimensional arrays and indexing in a way you have to use weird macros to replicate in C.  It isn’t I think that C or C++ compilers cannot generate good code - it is about the syntax for arrays.

For anyone interested in architectural arm wrestling, memory IS the main issue.  It is worth reading the papers on BLIS, an analytical model for writing Basic Linear Algebra libraries.  Once you figure out the flops per byte, you are nearly done - the rest is complicated but straightforward code tuning.  Matrix multiply has O(n^3) computation for O(n^2) memory and that immediately says you can get close to 100% of the ALUs running if you have a clue about blocking in the caches.  This is just as easy or hard to do in C as in Fortran.  The kernels tend to wind up in asm(“”) no matter what you wish for just in order to get the prefetch instructions placed just so.  As far as I can tell, compilers still do not have very good models for cache hierarchies although there isn’t really any reason why they shouldn’t.  Similarly, if your code is mainly doing inner products, you are doomed to run at memory speeds rather than ALU speeds.  Multithreading usually doesn’t help, because often other cores are farther away than main memory.

My summary of the language question comes down to: if you knew what code would run fast, you could code it in C.  Thinking that a new language will explain how to make it run fast is just wishful thinking.  It just pushes the problem onto the compiler writers, and they don’t know how to code it to run fast either.  The only argument I like for new languages is that at least they might be able to let you describe the problem in a way that others will recognize.  I’m sure everyone here has had the sad experience of trying to figure out what is the idea behind a chunk of code.  Comments are usually useless.  I wind up trying to match the physics papers with the math against the code and it makes my brain hurt.  It sure would be nice if there were a series of representations between math and hardware transitioning from why to what to how.  I think that is what Steele was trying to do with Fortress.

I do think the current environment is the best for architectural innovation since the ‘90s.  We have The Machine, we have Dover Micro trying to add security, we have Microsoft’s EDGE stuff, and the multiway battle between Intel/AMD/ARM and the GPU guys and the FPGA guys.  It is a lot more interesting than 2005!  

> On 2018, Jun 28, at 11:37 AM, Clem Cole <clemc at ccc.com> wrote:
> 
> 
> 
> On Thu, Jun 28, 2018 at 10:40 AM, Larry McVoy <lm at mcvoy.com <mailto:lm at mcvoy.com>> wrote:
> Yep.  Lots of cpus are nice when doing a parallel make but there is 
> always some task that just uses one cpu.  And then you want the fastest
> one you can get.  Lots of wimpy cpus is just, um, wimpy.
> 
> Larry Stewart would be better to reply as SiCortex's CTO - but that was the basic logic behind their system -- lots of cheap MIPS chips. Truth is they made a pretty neat system and it scaled pretty well.   My observation is that, like most of the attempts I have been a part of, in the end architecture does not matter nearly as much as economics.
> 
> In my career I have built 4 or 5 special-architecture systems.  You can basically live through one or two generations using some technology argument and 'win'.   But in the end, people buy computers to do a job and they really don't give a s*t about how the job gets done, as long as it gets done cheaply.   Whoever wins the economic war has the 'winning' architecture.   Look, x86/Intel*64 would never win awards as a 'Computer Science Architecture'; likewise on the SW side, Fortran vs. Algol etc.; Windows beat UNIX workstations for the same reasons... as we all know.
> 
> Hey, I used to race sailboats ...  there is a term called a 'sea lawyer' - where you are screaming you have been fouled but you're drowning as your boat is sinking.   I keep thinking about it here.   You can scream all you want about goodness or badness of architecture or language, but in the end, users really don't care.   They buy computers to do a job.   You really cannot forget that that is the purpose.
> 
> As Larry says: lots of wimpy CPUs is just wimpy.    Hey, Intel, nVidia and AMD's job is to sell expensive hot rocks.   They are going to do what they can to make those rocks useful for people.  They want to help people get their jobs done -- period. That is what they do.   Atmel and the RPi folks take the 'jelly bean' approach - which is one of selling enough to make it worth it for the chip manufacturer, and if the simple machine can do the customer's job, very cool.  In those cases simple is good (hey, the PDP-11 is pretty complex compared to, say, the 6502).
> 
> So, I think the author of the paper trashing C as too high level misses the point, and arguing about architecture is silly.  In the end it is about what it costs to get the job done.   People will use whatever is the most economical for them.
> 
> Clem
> 

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://minnie.tuhs.org/pipermail/tuhs/attachments/20180628/2e08440e/attachment.html>

From perry at piermont.com  Fri Jun 29 06:47:21 2018
From: perry at piermont.com (Perry E. Metzger)
Date: Thu, 28 Jun 2018 16:47:21 -0400
Subject: [TUHS] PDP-11 legacy, C, and modern architectures
In-Reply-To: <CABH=_VQyoKgCcfRGJgpac5freEGBAeGetr9v9L8bTh0Oe6PYzA@mail.gmail.com>
References: <af780f9fb5c14e37f12ce5c2a4e40376669c730f@webmail.yaccman.com>
 <81277CC3-3C4A-49B8-8720-CFAD22BB28F8@bitblocks.com>
 <20180628141538.GB663@thunk.org>
 <CABH=_VQyoKgCcfRGJgpac5freEGBAeGetr9v9L8bTh0Oe6PYzA@mail.gmail.com>
Message-ID: <20180628164721.25c3f4d4@jabberwock.cb.piermont.com>

On Thu, 28 Jun 2018 12:45:39 -0400 Paul Winalski
<paul.winalski at gmail.com> wrote:
> On 6/28/18, Theodore Y. Ts'o <tytso at mit.edu> wrote:
> >
> > It's the same mistake
> > Chisnall made when he asserted that it was a myth that parallel
> > programming by humans was "hard", and that "all you needed"
> > was the right language.  
> 
I've heard the "all you need is the right language" solution to the
> parallel processing development problem since I joined DEC in 1980.
> Here we are in 2018 and nobody's found that "right language" yet.

Dunno. Rust does some amazing things because it has a linear type
system, which means both that it can be a fully safe language even
though it doesn't have a garbage collector, and that it can allow
sharing of memory without any fear of multiple writers touching the
same block of memory.

I used to think that there hadn't been much progress in computer
science in decades and then I fell down the rabbit hole of modern
type theory. The evolution of type systems over the last few decades
has changed the game in a lot of ways. Most people aren't aware of
the progress that has been made, which is a shame.

> There have been some advancements in software development tools to
> make parallel programming easier.  Modern compilers are getting
> pretty good at loop analysis to discover opportunities for parallel
> execution and vectorization in sequentially-written code.

You're not mentioning things like linear types, effect systems, etc.

Perry
-- 
Perry E. Metzger		perry at piermont.com


From stewart at serissa.com  Fri Jun 29 06:52:26 2018
From: stewart at serissa.com (Lawrence Stewart)
Date: Thu, 28 Jun 2018 16:52:26 -0400
Subject: [TUHS] PDP-11 legacy, C, and modern architectures
In-Reply-To: <20180628154246.3a1ce74a@jabberwock.cb.piermont.com>
References: <af780f9fb5c14e37f12ce5c2a4e40376669c730f@webmail.yaccman.com>
 <81277CC3-3C4A-49B8-8720-CFAD22BB28F8@bitblocks.com>
 <20180628141538.GB663@thunk.org>
 <20180628104329.754d2c19@jabberwock.cb.piermont.com>
 <20180628145609.GD21688@mcvoy.com>
 <20180628154246.3a1ce74a@jabberwock.cb.piermont.com>
Message-ID: <FF50BD04-5137-47AA-A3C6-96168392856C@serissa.com>


> On 2018, Jun 28, at 3:42 PM, Perry E. Metzger <perry at piermont.com> wrote:
> 
> On Thu, 28 Jun 2018 07:56:09 -0700 Larry McVoy <lm at mcvoy.com> wrote:
>>> Huge numbers of wimpy cores is the model already dominating the
>>> world.   
>> 
>> Got a source that backs up that claim?  I was recently dancing with
>> Netflix and they don't match your claim, nor do the other content
>> delivery networks, they want every cycle they can get.
> 
> Netflix has how many machines? I'd say in general that principle
> holds: this is the age of huge distributed computation systems, the
> most you can pay for a single core before it tops out is in the
> hundreds of dollars, not in the millions like it used to be. The high
> end isn't very high up, and we scale by adding boxes and cores, not
> by getting single CPUs that are unusually fast.
> 
> Taking the other way of looking at it, from what I understand,
> CDN boxes are about I/O and not CPU, though I could be wrong. I can
> ask some of the Netflix people, a former report of mine is one of the
> people behind their front end cache boxes and we keep in touch.
> 
> Perry
> -- 
> Perry E. Metzger		perry at piermont.com

Some weird stuff gets built for CDNs!  We had a real-time video transcoding project at Quanta using Tilera chips to do transcoding on demand for retrofitting systems in China with millions of old cable boxes.  Not I/O limited at all!  There was a <lot> of I/O but still more computing.
-L



From perry at piermont.com  Fri Jun 29 07:03:17 2018
From: perry at piermont.com (Perry E. Metzger)
Date: Thu, 28 Jun 2018 17:03:17 -0400
Subject: [TUHS] PDP-11 legacy, C, and modern architectures
In-Reply-To: <CANCZdfrj1t=DvgBmYfBNuEUzXDyFZiY=uCzK4a_2rqvtPmO_NA@mail.gmail.com>
References: <af780f9fb5c14e37f12ce5c2a4e40376669c730f@webmail.yaccman.com>
 <81277CC3-3C4A-49B8-8720-CFAD22BB28F8@bitblocks.com>
 <20180628141538.GB663@thunk.org>
 <20180628104329.754d2c19@jabberwock.cb.piermont.com>
 <20180628145609.GD21688@mcvoy.com>
 <20180628154246.3a1ce74a@jabberwock.cb.piermont.com>
 <CANCZdfrj1t=DvgBmYfBNuEUzXDyFZiY=uCzK4a_2rqvtPmO_NA@mail.gmail.com>
Message-ID: <20180628170317.14d65067@jabberwock.cb.piermont.com>

On Thu, 28 Jun 2018 14:42:47 -0600 Warner Losh <imp at bsdimp.com> wrote:
> > > Got a source that backs up that claim?  I was recently dancing
> > > with Netflix and they don't match your claim, nor do the other
> > > content delivery networks, they want every cycle they can get.  
> >
> > Netflix has how many machines?  
> 
> We generally say we have tens of thousands of machines deployed
> worldwide in our CDN. We don't give out specific numbers though.

Tens of thousands of machines is a lot more than one. I think the
point stands. This is the age of distributed and parallel systems.

> > Taking the other way of looking at it, from what I understand,
> > CDN boxes are about I/O and not CPU, though I could be wrong. I
> > can ask some of the Netflix people, a former report of mine is
> > one of the people behind their front end cache boxes and we keep
> > in touch.  
> 
> I can tell you it's about both. We recently started encrypting all
> traffic, which requires a crapton of CPU. Plus, we're doing
> sophisticated network flow modeling to reduce congestion, which
> takes CPU. On our 100G boxes, which we get in the low 90's
> encrypted, we have some spare CPU, but almost no spare memory
> bandwidth and our PCI lanes are full of either 100G network traffic
> or 4-6 NVMe drives delivering content up at about 85-90Gbps.
> 
> Most of our other boxes are the same, with the exception of the
> 'storage' tier boxes. Those we're definitely hard disk I/O bound.

I believe all of this, but I think it is consistent with the point.
You're not trying to buy $100,000 CPUs that are faster than the
several-hundred-per-core things you can get, because no one sells
them. You're building systems that scale out by adding more CPUs
and more boxes. You might want very high end CPUs even, but the high
end isn't vastly better than the low, and there's a limit to what you
can spend per CPU because there just aren't better ones on the market.

So, all of this means that, architecturally, we're no longer in an
age where things get designed to run on one processor. Systems
have to be built to be parallel and distributed. Our kernels are
no longer one fast core and need to handle multiprocessing and all
it entails. Our software needs to run multicore if it's going to
take advantage of the expensive processors and motherboards we've
bought. Thread pools, locking, IPC, and all the rest are now a way of
life. We've got ways to avoid some of those things by using share
nothing and message passing, but even so, the fact that we've
structured our software to deal with parallelism is unavoidable.

Why am I belaboring this? Because the original point, that language
support for building distributed and parallel systems does help,
isn't wrong. There are a lot of projects out there using things like
Erlang and managing nearly miraculous feats of uptime because of it.
There are people replacing C++ with Rust because they can't reason
about concurrency well enough without language support and Rust's
linear types mean you can't write code that accidentally shares
memory between two writers. The stuff does matter.

Perry
-- 
Perry E. Metzger		perry at piermont.com


From perry at piermont.com  Fri Jun 29 07:07:05 2018
From: perry at piermont.com (Perry E. Metzger)
Date: Thu, 28 Jun 2018 17:07:05 -0400
Subject: [TUHS] PDP-11 legacy, C, and modern architectures
In-Reply-To: <FF50BD04-5137-47AA-A3C6-96168392856C@serissa.com>
References: <af780f9fb5c14e37f12ce5c2a4e40376669c730f@webmail.yaccman.com>
 <81277CC3-3C4A-49B8-8720-CFAD22BB28F8@bitblocks.com>
 <20180628141538.GB663@thunk.org>
 <20180628104329.754d2c19@jabberwock.cb.piermont.com>
 <20180628145609.GD21688@mcvoy.com>
 <20180628154246.3a1ce74a@jabberwock.cb.piermont.com>
 <FF50BD04-5137-47AA-A3C6-96168392856C@serissa.com>
Message-ID: <20180628170705.54d0b0ea@jabberwock.cb.piermont.com>

On Thu, 28 Jun 2018 16:52:26 -0400 Lawrence Stewart
<stewart at serissa.com> wrote: 
> Some weird stuff gets built for CDNs!  We had a real-time video
> transcoding project at Quanta using Tilera chips to do transcoding
> on demand for retrofitting systems in China with millions of old
> cable boxes.  Not I/O limited at all!  There was a <lot> of I/O but
> still more computing.

Of course, Tilera is a many core architecture, so again, I think
this supports the original point, which is we're now in the parallel
and distributed age, and that demands software design techniques to
match. (Indeed, If I Recall Correctly, the guys at MIT that designed
the original Tilera stuff are now looking at building 1000 core
devices.)

Perry
-- 
Perry E. Metzger		perry at piermont.com


From doug at cs.dartmouth.edu  Fri Jun 29 08:19:57 2018
From: doug at cs.dartmouth.edu (Doug McIlroy)
Date: Thu, 28 Jun 2018 18:19:57 -0400
Subject: [TUHS] please ignore this message
Message-ID: <201806282219.w5SMJvwp010240@tahoe.cs.Dartmouth.EDU>

This tests a guess about anomalous behavior of the
mailing-list digest. I apologize for inflicting it
on everybody.

Doug
-------------- next part --------------
junk.txt
-------------- next part --------------
junk.diff

From tytso at mit.edu  Fri Jun 29 08:29:54 2018
From: tytso at mit.edu (Theodore Y. Ts'o)
Date: Thu, 28 Jun 2018 18:29:54 -0400
Subject: [TUHS] PDP-11 legacy, C, and modern architectures
In-Reply-To: <20180628170317.14d65067@jabberwock.cb.piermont.com>
References: <af780f9fb5c14e37f12ce5c2a4e40376669c730f@webmail.yaccman.com>
 <81277CC3-3C4A-49B8-8720-CFAD22BB28F8@bitblocks.com>
 <20180628141538.GB663@thunk.org>
 <20180628104329.754d2c19@jabberwock.cb.piermont.com>
 <20180628145609.GD21688@mcvoy.com>
 <20180628154246.3a1ce74a@jabberwock.cb.piermont.com>
 <CANCZdfrj1t=DvgBmYfBNuEUzXDyFZiY=uCzK4a_2rqvtPmO_NA@mail.gmail.com>
 <20180628170317.14d65067@jabberwock.cb.piermont.com>
Message-ID: <20180628222954.GD8521@thunk.org>

On Thu, Jun 28, 2018 at 05:03:17PM -0400, Perry E. Metzger wrote:
> 
> Tens of thousands of machines is a lot more than one. I think the
> point stands. This is the age of distributed and parallel systems.

This is the age of distributed systems, yes.  I'm not so sure about
"parallel".  And the point remains that for many problems, you need
fewer strong cores, and a crapton of weak cores is not as useful.

Of course we should parallelize work where we can.  The point is that
very often, we can't.  And if you are really worried about potential
problems with Spectre and Meltdown, what that means is that sharing
caches is perilous.  So if you have 128 wimpy cores, you need 128
separate I and D caches.  If you have 32 stronger cores, you need 32
separate I and D caches.

And the fact remains that humans really suck at parallel programming.
Use a separate core for each HTTP request, with a load balancer to
split the incoming requests across tens or hundreds of servers?  Sure!
But using several dozen cores for each HTTP request?  That's a much
bigger lift.

You're conflating "distributed" and "parallel" computing, and they are
really quite different.

      	    	     	  	    	   - Ted


From lm at mcvoy.com  Fri Jun 29 10:18:31 2018
From: lm at mcvoy.com (Larry McVoy)
Date: Thu, 28 Jun 2018 17:18:31 -0700
Subject: [TUHS] PDP-11 legacy, C, and modern architectures
In-Reply-To: <20180628222954.GD8521@thunk.org>
References: <af780f9fb5c14e37f12ce5c2a4e40376669c730f@webmail.yaccman.com>
 <81277CC3-3C4A-49B8-8720-CFAD22BB28F8@bitblocks.com>
 <20180628141538.GB663@thunk.org>
 <20180628104329.754d2c19@jabberwock.cb.piermont.com>
 <20180628145609.GD21688@mcvoy.com>
 <20180628154246.3a1ce74a@jabberwock.cb.piermont.com>
 <CANCZdfrj1t=DvgBmYfBNuEUzXDyFZiY=uCzK4a_2rqvtPmO_NA@mail.gmail.com>
 <20180628170317.14d65067@jabberwock.cb.piermont.com>
 <20180628222954.GD8521@thunk.org>
Message-ID: <20180629001831.GA29490@mcvoy.com>

On Thu, Jun 28, 2018 at 06:29:54PM -0400, Theodore Y. Ts'o wrote:
> On Thu, Jun 28, 2018 at 05:03:17PM -0400, Perry E. Metzger wrote:
> > 
> > Tens of thousands of machines is a lot more than one. I think the
> > point stands. This is the age of distributed and parallel systems.
> 
> This is the age of distributed systems, yes.  I'm not so sure about
> "parallel".  And the point remains that for many problems, you need
> fewer strong cores, and a crapton of weak cores is not as useful.

As usual, Ted gets it.

> You're conflating "distributed" and "parallel" computing, and they are
> really quite different.

Precisely (and well put Ted!)

Perry, please take this in the spirit in which it is intended, but you're
arguing with people who have been around the block (there are people
on this list that have 5 decades of going around the block - looking at
you Ken).  I designed the first clustered product at Sun, I was the 4th
guy at Google (working on clusters there), Ted is a Linux old timer,
Clem goes back in Unix farther than I do, Ken did much of Unix, etc.
There are a ton of people on this list that make me look like a nobody,
you want to be careful in that crowd.

This is a really poor place for a younger person to come in and make
loud points, that is frowned upon.  It's a fantastic place for a younger
person to come in and learn.  All of us old farts want to pass on what
we know and will gladly do so.  But some of us old farts, like me, are
really tired of arguing with people that don't see the whole picture.
This is not the place to bring the whole picture into focus for you,
sorry.  If you want to argue about stuff I'll eventually go away and so
will other old farts.

I kinda think you don't want to chase me away or other old farts away,
this is a place where we (mostly) talk about Unix history and if the
old farts go away, so does the history.

I'm not saying you can't voice your opinion and argue all you want, just
saying this might not be the list for that.  But that's just my view,
Warren will step in if he needs to.

Cheers and welcome,

--lm


From jnc at mercury.lcs.mit.edu  Fri Jun 29 11:02:44 2018
From: jnc at mercury.lcs.mit.edu (Noel Chiappa)
Date: Thu, 28 Jun 2018 21:02:44 -0400 (EDT)
Subject: [TUHS] PDP-11 legacy, C, and modern architectures
Message-ID: <20180629010244.0E45E18C08A@mercury.lcs.mit.edu>

    > From: Larry McVoy

    > This is a really poor place for a younger person to come in and make
    > loud points, that is frowned upon.  It's a fantastic place for a younger
    > person to come in and learn.

But don't forget Clarke's Third Law! And maybe you can remember what it's like
when you're young... :-)

But Ted does have a point. 'Distributed' != 'parallel'.

    Noel


From jnc at mercury.lcs.mit.edu  Fri Jun 29 11:06:01 2018
From: jnc at mercury.lcs.mit.edu (Noel Chiappa)
Date: Thu, 28 Jun 2018 21:06:01 -0400 (EDT)
Subject: [TUHS] PDP-11 legacy, C, and modern architectures
Message-ID: <20180629010601.DC1B918C08A@mercury.lcs.mit.edu>

    > But don't forget Clarke's Third Law!

Ooops. Read my Web search results wrong. (Should have taken the time to click
through, sigh.) Meant 'Clarke's First Law'.

	 Noel


From jpl.jpl at gmail.com  Fri Jun 29 11:32:57 2018
From: jpl.jpl at gmail.com (John P. Linderman)
Date: Thu, 28 Jun 2018 21:32:57 -0400
Subject: [TUHS] please ignore this message
In-Reply-To: <201806282219.w5SMJvwp010240@tahoe.cs.Dartmouth.EDU>
References: <201806282219.w5SMJvwp010240@tahoe.cs.Dartmouth.EDU>
Message-ID: <CAC0cEp-p2cnhXd37msaRCazMacNXHnM3CEe98vSvzUw2cpd9cw@mail.gmail.com>

With gmail, both attachments showed just (constant-width) single-line names
of the attachments, "junk.txt" and "junk.diff" (without the quotes).

On Thu, Jun 28, 2018 at 6:19 PM, Doug McIlroy <doug at cs.dartmouth.edu> wrote:

> This tests a guess about anomalous behavior of the
> mailing-list digest. I apologize for inflicting it
> on everybody.
>
> Doug
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://minnie.tuhs.org/pipermail/tuhs/attachments/20180628/c0336f8d/attachment.html>

From bakul at bitblocks.com  Fri Jun 29 12:02:11 2018
From: bakul at bitblocks.com (Bakul Shah)
Date: Thu, 28 Jun 2018 19:02:11 -0700
Subject: [TUHS] PDP-11 legacy, C, and modern architectures
In-Reply-To: Your message of "Thu, 28 Jun 2018 10:15:38 -0400."
 <20180628141538.GB663@thunk.org>
References: <af780f9fb5c14e37f12ce5c2a4e40376669c730f@webmail.yaccman.com>
 <81277CC3-3C4A-49B8-8720-CFAD22BB28F8@bitblocks.com>
 <20180628141538.GB663@thunk.org>
Message-ID: <20180629020219.1AAB4156E517@mail.bitblocks.com>

On Thu, 28 Jun 2018 10:15:38 -0400 "Theodore Y. Ts'o" <tytso at mit.edu> wrote:
> Bakul, I think you and Steve have a very particular set of programming
> use cases in mind, and are then over-generalizing this to assume that
> these are the only problems that matter.  It's the same mistake
> Chisnall made when he asserted that it was a myth that parallel
> programming by humans was "hard", and that "all you needed" was
> the right language.

Let me try to explain my thinking on this topic.

1. Cobol/C/C++/Fortran/Go/Java/Perl/Python/Ruby/... will be
   around for a long time.  Lots of computing platforms &
   existing codes use them, so these languages will continue
   to be used. This is not particularly interesting or worth
   debating.

2. A lot of processor architecture evolution in the last few
   decades has been to make such programs run as fast as
   possible but this emulation of flat shared/virtual address
   spaces and squeezing out parallelism at run time from
   sequential code is falling further and further behind.

   The original Z80 had 8.5E3 transistors while an 8 core
   Ryzen has 4.8E9 transistors (about 560K Z80s worth of
   transistors).  And yet, it is only 600K times faster in
   Dhrystone MIPS even though it has > thousand times faster
   clock rate, its data bus is 8 times wider and there is no
   data/address multiplexing.

   I make this crude comparison to point out how much less
   efficiently these resources are used due to the needs of
   the above languages.

3. As Perry said, we are using parallel and distributed
   computing more and more. Even the Raspberry Pi Zero has a
   GPU several times more powerful than its puny ARM "cpu"!
   Most all cloud services use multiple cores & nodes.  We may
   not set up our services but we certainly use quite a few of
   them via the Internet. Even on my laptop at present there
   are 555 processes and 2998 threads. Most of these are
   indeed "embarrassingly" parallel -- most of them don't talk
   to each other!

   Even local servers in any organization run a large number
   of processes.

   Things like OpenCL are being used more and more to benefit
   from whatever parallelism we can squeeze out of a GPU for
   specialized applications.

4. The reason most people prefer to use one very high perf.
   CPU rather than a bunch of "wimpy" processors is *because*
   most of our tooling uses only sequential languages with
   very little concurrency. And just as in the case of
   processors, most of our OSes also allow use of very little
   parallelism. And most performance metrics focus on single
   CPU performance. This is what is optimized so given these
   assumptions using faster and faster CPUs makes the most
   sense, but we are running out of that trick.

5. You may well be right that most people don't need faster
   machines. Or that machines optimized for parallel languages
   and codes may never succeed commercially.

   But as a techie I am more interested in what can be built
   (as opposed to what will sell). It is not a question of
   whether problems amenable to parallel solutions are the
   *only problems that matter*.  

   I think about these issues because
   a) I find them interesting (one among many).
   b) Currently we are using resources rather inefficiently
      and I'm interested in what can be done about it.
   c) This is the only direction in future that may yield
      faster and faster solutions for a large set of problems.

   And in *this context* our current languages do fall short.

6. The conventional wisdom is that parallel languages are a failure
   and parallel programming is *hard*.  Hoare's CSP and
   Dijkstra's "elephants made out of mosquitos" papers are
   over 40 years old. But I don't think we have a lot of
   experience with parallel languages to know one way or
   another. We are building ad hoc distributed systems but we
   don't have a theory as to how they behave under stress.
   But see also [1]

7. As to distributed vs parallel systems, my point was that
   even if they are different, there are a number of subproblems
   common to them both. These should be investigated further
   (beyond using ad hoc solutions like Kubernetes). It may even
   be possible to separate out a reliability layer to simplify a
   more abstract layer that is more common to both. Not
   unlike the way disks handle reliability so that at a higher
   level we can treat them like a sequence of blocks (or how
   IP & TCP handle reliability).

Here is a somewhat tenuous justification for why this topic does
make sense on this list: Unix provides *composable* tools. It
isn't just that one Unix program did one thing well, but that it
was easy to make them work together to achieve a goal, and a
few abstractions were useful from most programs. You hid
device peculiarities in device drivers and filesystem
peculiarities in filesystem code and so on. I think these same
design principles can help with distributed/parallel systems.

Even if we get an easy to use parallel language, we may still
have to worry about placement and load balancing and latencies
and so forth. And for that we may need a glue language.

[1] What I have discovered is that it takes some experience
and experimenting to think in a particular way that is natural
to the language at hand.  As an example, when programming in k
I often start out with a sequential, loopy program.  But if I
can think about it in terms of "array" operations, I can iterate
fast and come up with a better solution. Not having to think
about locations and pointers makes this iterative process very
fast.

Similarly it takes a while to switch to writing forth (or
postscript) code idiomatically. Writing idiomatic code in any
language is not just a matter of learning the language but of being
comfortable with it, knowing what works well and in what
situation.

I suspect the same is true with parallel programming as well.

Also note that Unix hackers routinely write simple parallel
programs (shell pipelines) but these may seem quite foreign
to people who grew up using just a GUI.


From michael at kjorling.se  Fri Jun 29 15:58:18 2018
From: michael at kjorling.se (Michael =?utf-8?B?S2rDtnJsaW5n?=)
Date: Fri, 29 Jun 2018 05:58:18 +0000
Subject: [TUHS] PDP-11 legacy, C, and modern architectures
In-Reply-To: <20180628222954.GD8521@thunk.org>
References: <af780f9fb5c14e37f12ce5c2a4e40376669c730f@webmail.yaccman.com>
 <81277CC3-3C4A-49B8-8720-CFAD22BB28F8@bitblocks.com>
 <20180628141538.GB663@thunk.org>
 <20180628104329.754d2c19@jabberwock.cb.piermont.com>
 <20180628145609.GD21688@mcvoy.com>
 <20180628154246.3a1ce74a@jabberwock.cb.piermont.com>
 <CANCZdfrj1t=DvgBmYfBNuEUzXDyFZiY=uCzK4a_2rqvtPmO_NA@mail.gmail.com>
 <20180628170317.14d65067@jabberwock.cb.piermont.com>
 <20180628222954.GD8521@thunk.org>
Message-ID: <20180629055818.GR29822@h-174-65.A328.priv.bahnhof.se>

On 28 Jun 2018 18:29 -0400, from tytso at mit.edu (Theodore Y. Ts'o):
> And if you are really worried about potential
> problems with Spectre and Meltdown, what that means is that sharing
> caches is perilous.  So if you have 128 wimpy cores, you need 128
> separate I and D caches.  If you have 32 stronger cores, you need 32
> separate I and D caches.

What's more, I suspect that in order to get good performance out of
those wimpy cores, you'd need _more_ cache per core rather than less
or the same, simply because there's less of an advantage in raw clock speed.
One doesn't have to look hard to find examples of where adding or
increasing cache in a CPU (these days on-die) have, at least for
workloads that are able to use such cache effectively, led to huge
improvements in overall performance, even at similar clock rates.

Of course, I can't help but find it interesting that we're having this
discussion at all about a language that is approaching 50 years old by
now (Wikipedia puts the earliest design in 1969, which sounds about
right, and even K&R C is 40 years old by now). Sure, C has evolved --
for example, C11 added language constructs for multithreaded
programming, including the _Thread_local storage class specifier --
but it's still in active use and it's still recognizably an evolved
version of the language specified in K&R. I can pull out the manual
for a pre-ANSI C compiler and look at the code samples, and sure there
are things about that code that a modern compiler barfs at, but it's
quite easy to just move a few things around a little and end up with
pretty close to modern C (albeit code that doesn't take advantage of
new features, obviously). I wonder how many of today's programming
languages we'll be able to say the same thing about in 2040-2050-ish.

-- 
Michael Kjörling • https://michael.kjorling.se • michael at kjorling.se
  “The most dangerous thought that you can have as a creative person
              is to think you know what you’re doing.” (Bret Victor)


From wkt at tuhs.org  Fri Jun 29 17:48:30 2018
From: wkt at tuhs.org (Warren Toomey)
Date: Fri, 29 Jun 2018 17:48:30 +1000
Subject: [TUHS] Any Definitive Unix Celebrations Next Year?
Message-ID: <20180629074830.GA9188@minnie.tuhs.org>

All, I'm planning to come to the Usenix ATC next year on July 10-12
on the assumption that there will be some Unix 50th celebration there.
Does anybody know of other events in the date range June 28 to July 9,
as those are the dates I'll be able to get off work?

And/or, if I'm in the U.S on these dates, does anybody want to catch up?

Just trying to build an itinerary for this time next year.

Thanks, Warren


From wkt at tuhs.org  Fri Jun 29 17:53:10 2018
From: wkt at tuhs.org (Warren Toomey)
Date: Fri, 29 Jun 2018 17:53:10 +1000
Subject: [TUHS] Any Good dmr Anecdotes?
Message-ID: <20180629075310.GA9477@minnie.tuhs.org>

We do have ken on the list, so I won't be so presumptuous as to ask for ken-related
anecdotes, but would anybody like to share some dmr anecdotes?

I never met Dennis in person, but he was generous with his time about my
interest in Unix history; and also with sharing the material he still had.

Dennis was very clever, though. He would bring out a new artifact and say:
well, here's what I still have of X. Pity it will never execute again, sigh.

I'm sure he knew that I would take that as a challenge. Mind you, it worked,
which is why we now have the first Unix kernel in C, the 'nsys' kernel, and
the first two C compilers, in executable format.

Any other good anecdotes?

Cheers, Warren


From steve at quintile.net  Fri Jun 29 18:51:55 2018
From: steve at quintile.net (Steve Simon)
Date: Fri, 29 Jun 2018 09:51:55 +0100
Subject: [TUHS] Faster cpus at any cost
In-Reply-To: <mailman.1.1530237601.20140.tuhs@minnie.tuhs.org>
Message-ID: <a1117f6b1a451ac858ba5cd8c1e84280@quintile.net>

I know this is a dangerous game to play, but what of the future?

I am intrigued by the idea of true optical computers;
perhaps the $10M super-computer will return?

GPUs have definitely taken over in some areas - even where
SIMD would not seem a good fit. My previous employer does motion
estimation of realtime video and has moved from custom electronics
and FPGAs to using off the shelf GPUs in PCs.

-Steve



From ches at cheswick.com  Fri Jun 29 20:53:02 2018
From: ches at cheswick.com (ches@Cheswick.com)
Date: Fri, 29 Jun 2018 06:53:02 -0400
Subject: [TUHS] Any Good dmr Anecdotes?
In-Reply-To: <20180629075310.GA9477@minnie.tuhs.org>
References: <20180629075310.GA9477@minnie.tuhs.org>
Message-ID: <7A8E301D-D8F7-40DA-9042-ABCA887821EC@cheswick.com>

Dennis, do you have any recommendations on good books to use to learn C?

I don’t know, I never had to learn C.   -dmr

Message by ches. Tappos by iPad.





From jpl.jpl at gmail.com  Fri Jun 29 22:51:32 2018
From: jpl.jpl at gmail.com (John P. Linderman)
Date: Fri, 29 Jun 2018 08:51:32 -0400
Subject: [TUHS] Any Good dmr Anecdotes?
In-Reply-To: <7A8E301D-D8F7-40DA-9042-ABCA887821EC@cheswick.com>
References: <20180629075310.GA9477@minnie.tuhs.org>
 <7A8E301D-D8F7-40DA-9042-ABCA887821EC@cheswick.com>
Message-ID: <CAC0cEp95R=e=Op__mi7g+w+8aY2JEYGVGUSZ-P=LUQRMy9XBdw@mail.gmail.com>

I heard about this second hand, so anyone with first hand knowledge should
feel free to correct me.

When AT&T took an ill-fated plunge into chip manufacture, our chips were on
the large side. Dennis noted that "When Intel spoils a wafer, they turn the
chips into tie-tacks. When we spoil a wafer, we turn the chips into belt
buckles". This so infuriated the VP in charge of manufacture that he wanted
"this dmr guy" fired. Needless to say, that didn't happen.


From steffen at sdaoden.eu  Fri Jun 29 22:54:12 2018
From: steffen at sdaoden.eu (Steffen Nurpmeso)
Date: Fri, 29 Jun 2018 14:54:12 +0200
Subject: [TUHS] please ignore this message
In-Reply-To: <CAC0cEp-p2cnhXd37msaRCazMacNXHnM3CEe98vSvzUw2cpd9cw@mail.gmail.com>
References: <201806282219.w5SMJvwp010240@tahoe.cs.Dartmouth.EDU>
 <CAC0cEp-p2cnhXd37msaRCazMacNXHnM3CEe98vSvzUw2cpd9cw@mail.gmail.com>
Message-ID: <20180629125412.Lv7iZ%steffen@sdaoden.eu>

John P. Linderman wrote in <CAC0cEp-p2cnhXd37msaRCazMacNXHnM3CEe98vSvzUw\
2cpd9cw at mail.gmail.com>:
 |With gmail, both attachments showed just (constant-width) single-line \
 |names of the attachments, "junk.txt" and "junk.diff" (without the quotes).
 |
 |On Thu, Jun 28, 2018 at 6:19 PM, Doug McIlroy <[1]doug at cs.dartmouth.edu[\
 |/1]> wrote:
 |
 |  [1] mailto:doug at cs.dartmouth.edu
 |
 ||This tests a guess about anomalous behavior of the
 ||mailing-list digest. I apologize for inflicting it
 ||on everybody.
 |
 ||Doug

Both attachments were declared as text/plain, only the filename
differed.  This is possibly not what Doug McIlroy wanted to test.

Mr. Doug McIlroy, in order to convince your version of Heirloom
mailx to create the correct MIME type for .diff files you need to
provide a mime.types(5) file, as in

  echo 'text/x-diff diff patch' >> ~/.mime.types

Heirloom mailx will also look for this in /etc/mime.types.
(And i think there may also be a difference in the TUHS mailman
instance or configuration, as opposed to the GNU managed one that
drives the groff list.  I suppose you referred to the problem of
the groff digest you are reading.)

--steffen
|
|Der Kragenbaer,                The moon bear,
|der holt sich munter           he cheerfully and one by one
|einen nach dem anderen runter  wa.ks himself off
|(By Robert Gernhardt)


From tytso at mit.edu  Fri Jun 29 22:58:48 2018
From: tytso at mit.edu (Theodore Y. Ts'o)
Date: Fri, 29 Jun 2018 08:58:48 -0400
Subject: [TUHS] PDP-11 legacy, C, and modern architectures
In-Reply-To: <20180629020219.1AAB4156E517@mail.bitblocks.com>
References: <af780f9fb5c14e37f12ce5c2a4e40376669c730f@webmail.yaccman.com>
 <81277CC3-3C4A-49B8-8720-CFAD22BB28F8@bitblocks.com>
 <20180628141538.GB663@thunk.org>
 <20180629020219.1AAB4156E517@mail.bitblocks.com>
Message-ID: <20180629125848.GD1231@thunk.org>

On Thu, Jun 28, 2018 at 07:02:11PM -0700, Bakul Shah wrote:
> 3. As Perry said, we are using parallel and distributed
>    computing more and more. Even the RaspberryPi Zero has a
>    several times more powerful GPU than its puny ARM "cpu"!
>    Most all cloud services use multiple cores & nodes.  We may
>    not set up our services but we certainly use quite a few of
>    them via the Internet. Even on my laptop at present there
>    are 555 processes and 2998 threads. Most of these are
>    indeed "embarrassingly" parallel -- most of them don't talk
>    to each other!

In order to think clearly about the problem, it's important to
distinguish between parallel and distributed computing.  Parallel
computing to me means that you have a large number of CPU-bound
threads that are all working on the same problem.  What is meant by
"the same problem" is tricky, and we need to distinguish between
stupid ways of breaking up the work --- for example, in Service
Oriented Architectures, you might do an RPC call to multiply a dollar
value by 1.05 to calculate the sales tax --- sure, you can call that
"distributed" computing, or even "parallel" computing because it's a
different thread (even if it is I/O bound waiting for the next RPC
call, and most of the CPU power is spent marshalling and unmarshalling
the parameter and return values).  But it's a __dumb__ way of breaking
up the problem.  At least, unless the problem is to sell lots of extra
IBM hardware and make IBM shareholders lots of money, in which case,
it's brilliant.  :-)

It's also important to distinguish between CPU-bound and I/O-bound
threads.  You may have 2998 threads, but I bet they are mostly I/O
bound, and are there for programmer convenience.  Very often such
threads are not actually a terribly efficient way to break up the
problem.  In my career, in cases where the number of threads was
significantly greater than the number of CPUs, you could actually
make a tradeoff between programmer convenience and CPU efficiency
by taking those hundreds of threads and transforming each into a
PC and a small state structure, ending up with something much
closer to a continuation-based implementation that uses
significantly fewer threads.  That
particular architecture still had cores that were mostly I/O bound,
but it meant we could use significantly cheaper CPU's, and it saved
millions and millions of dollars.

All of this is to point out that talking about 2998 threads really
doesn't mean much.  We shouldn't be talking about threads; we should
be talking about how many CPU cores can we usefully keep busy at the
same time.  Most of the time, for desktops and laptops --- except for
brief moments when you are running "make -j32" (and that's only for us
weird programmer-types; we aren't actually the common case) --- the
user-facing CPU is twiddling its fingers.

> 4. The reason most people prefer to use one very high perf.
>    CPU rather than a bunch of "wimpy" processors is *because*
>    most of our tooling uses only sequential languages with
>    very little concurrency.

The problem is that I've been hearing this excuse for two decades.
And there have been people who have been working on this problem.  And
at this point, there's been a bit of "parallelism winter" that is much
like the "AI winter" in the 80's.  Lots of people have been promising
wonderful results for a long time; Sun bet their company (and lost) on
it; and there haven't been much in the way of results.

Sure, there are specialized cases where this has been useful ---
making better nuclear bombs with which to kill ourselves, predicting
the weather, etc.  But for the most part, there hasn't been much
improvement for anything other than super-specialized use cases.
Machine learning might be another area, but that's one where we're
seeing specialized chips that do one thing and exactly one
thing.  Whether it's running a neural network, or doing AES encryption
in-line, this is not an example of better parallel programming
languages or better software tooling.

> 5. You may well be right that most people don't need faster
>    machines. Or that machines optimized for parallel languages
>    and codes may never succeed commercially.
> 
>    But as a techie I am more interested in what can be built
>    (as opposed to what will sell). It is not a question of
>    whether problems amenable to parallel solutions are the
>    *only problems that matter*.

If we can build something which is useful, the money will take care of
itself.  That means generally useful.  The market of weather
prediction or people interested in building better nuclear bombs is
fairly small compared to the entire computing market.

As a techie, what I am interested in is building something that is
useful.  But part of being useful is that it has to make economic
sense.  That problably makes me a lousy acdemic, but I'm a cynical
industry engineer, not an academic.

> 6. The conventional wisdom is parallel languages are a failure
>    and parallel programming is *hard*.  Hoare's CSP and
>    Dijkstra's "elephants made out of mosquitos" papers are
>    over 40 years old.

It's a failure because there hasn't been *results*.  There are
parallel languages that have been proposed by academics --- I just
don't think they are any good, and they certainly haven't proven
themselves to end-users.

>    We are doing adhoc distributed systems but we
>    don't have a theory as to how they behave under stress.
>    But see also [1]

Actually, there are plenty of people at the hyper-scaler cloud
companies (e.g., Amazon, Facebook, Google, etc.) who understand very
well how they behave under stress.  Many of these companies regularly
experiment with putting their systems under stress to see how they
behave.  More importantly, they will concoct full-blown scenarios
(sometimes with amusing back-stories such as extra-dimensional aliens
attacking Moffet Field) to test how *humans* and their *processes*
managing these large-scale distributed systems react under stress.

> Here is a somewhat tenuous justification for why this topic does
> make sense on this list: Unix provides *composable* tools.

How many of these cases were these composable tools actually ones
where it allowed CPU resources to be used more efficiently?  A
pipeline that involves sort, awk, sed, etc. certainly is better
because you didn't have to write an ad-hoc program.  And i've written
lots of Unix pipelines in my time.  But in how many cases were these
pipelines actually CPU bound?  I think if you were to examine the
picture closely, they all tended to be I/O bound, not CPU bound.

So while Unix tools' composability is a very good thing, I would
question whether they have proven to be a useful tool in terms of
being able to use computational resources more efficiently, and how
much they really leveraged computational parallelism.

						- Ted


From ralph at inputplus.co.uk  Fri Jun 29 23:00:53 2018
From: ralph at inputplus.co.uk (Ralph Corderoy)
Date: Fri, 29 Jun 2018 14:00:53 +0100
Subject: [TUHS] please ignore this message
In-Reply-To: <20180629125412.Lv7iZ%steffen@sdaoden.eu>
References: <201806282219.w5SMJvwp010240@tahoe.cs.Dartmouth.EDU>
 <CAC0cEp-p2cnhXd37msaRCazMacNXHnM3CEe98vSvzUw2cpd9cw@mail.gmail.com>
 <20180629125412.Lv7iZ%steffen@sdaoden.eu>
Message-ID: <20180629130053.2912021A56@orac.inputplus.co.uk>

Hi,

Steffen Nurpmeso wrote:
> John P. Linderman wrote:
> > Doug McIlroy wrote:
> > > please ignore this message
>
> This is possibly not what Doug McIlroy wanted to test.

No, you're probably guessing correctly WRT groff.
I suggest we all just follow Doug's request.

-- 
Cheers, Ralph.
https://plus.google.com/+RalphCorderoy


From ron at ronnatalie.com  Fri Jun 29 23:16:12 2018
From: ron at ronnatalie.com (ron at ronnatalie.com)
Date: Fri, 29 Jun 2018 09:16:12 -0400
Subject: [TUHS] ATT Hardware
Message-ID: <00f101d40fab$5af29aa0$10d7cfe0$@ronnatalie.com>

The recent reference to Dennis's comments on AT&T chip production had me
feeling nostalgic for the 3B line of computers.  In the late 80's I was in
charge of all the UNIX systems (among other things) at the state university
system in New Jersey.   As a result we got a lot of this hardware gifted to
us.    The 3B5 and 3B2s were pretty doggy compared with the stuff on the
market then.   The best thing I could say about the 3B5 is that it stood up
well to having many gallons of water dumped on it (that's another story,
Rutgers had the computer center under a seven story building and it still
had a leaky roof).    The 3B20 was another thing.   It was a work of
telephone company art.    You knew this when it came time to power it
down: you turned a knob inside the rack and held a button down until it
clicked off.    This is pretty akin to how you'd do things on classic phone
equipment (for instance, the same procedure is used to loopback the old 303
"broadband" 50K modems that the Arpanet/Milnet was built out of).    Of
course, the 3B20 was built as phone equipment.    It just got sort of
"recycled" as a GP computer.

 


From jpl.jpl at gmail.com  Sat Jun 30 00:55:48 2018
From: jpl.jpl at gmail.com (John P. Linderman)
Date: Fri, 29 Jun 2018 10:55:48 -0400
Subject: [TUHS] ATT Hardware
In-Reply-To: <00f101d40fab$5af29aa0$10d7cfe0$@ronnatalie.com>
References: <00f101d40fab$5af29aa0$10d7cfe0$@ronnatalie.com>
Message-ID: <CAC0cEp_vkP9TDVFBWNxuq=LXsmWhs1-5C2Z6qfvKzia2EgQPgA@mail.gmail.com>

You just pushed my "3B button". Rudd Canaday (who had a hand in the design
of the original UNIX file system) wanted to create a message-based
"Database Machine". We planned to base it on the MERT (Multi-Environment
Real Time) UNIX variant on VAXen, and had some of the architects of MERT on
the team with real expertise in VAX architecture. Unfortunately, just as we
were getting under way, the AT&T chip project needed clients, so we were
told "Thou shalt use AT&T computers".  Not only did we have no expertise,
the documentation was almost non-existent, so the only way to learn was
trial and error. They installed two 3B20-ish computers that looked like
racks of telephone equipment (because that's pretty much what they were).
The first time we lost power in the computer room, and tried to bring
it/them back up, all the fuses blew. The AT&T techs looked astonished, and
asked if we lost power often (in a switching office, battery backups
ensured that power was never lost). We told them power went down every few
months. They showed us how to power things up by removing all the fuses,
then using a charging device (we called it the fuse-gooser) to charge up a
capacitor, insert a fuse, and repeat until all the fuses had been
reinstalled. Eventually, one of our people discovered an (undocumented, of
course) dial which could be used to ramp the voltage up from 0 to full, so
we didn't have to go through the fuse routine. "Production" versions of the
3B20's had a lead-acid battery built in.

There was no floating point. (Why would a switch need floating point?).
There were things I wanted to do with awk that didn't need floating point,
so I just fiddled the code so AWKFLOAT was typedef-ed to int, and it darn
near worked. The only hitch was a couple of appearances of "%g" in print
statements. I couldn't typedef them away, but I suggested to the ANSI C
folks that I could have done that if the appearance of adjacent string
literals was treated as their concatenation, and they bought it. My only
contribution to ANSI C, courtesy of crappy hardware.

We were also gifted a 3B2. We brought it up single user, and it took 20
seconds to run a ps command. Our computers were theme-named after birds
(the 3B20 pair were heckle and jeckle), so we named the 3B2 junco. Our
director told us we couldn't do that, we had to play nice with the chip
folks. So we renamed it jay. But we all knew what the j stood for.


From clemc at ccc.com  Sat Jun 30 01:07:25 2018
From: clemc at ccc.com (Clem Cole)
Date: Fri, 29 Jun 2018 11:07:25 -0400
Subject: [TUHS] ATT Hardware
In-Reply-To: <CAC0cEp_vkP9TDVFBWNxuq=LXsmWhs1-5C2Z6qfvKzia2EgQPgA@mail.gmail.com>
References: <00f101d40fab$5af29aa0$10d7cfe0$@ronnatalie.com>
 <CAC0cEp_vkP9TDVFBWNxuq=LXsmWhs1-5C2Z6qfvKzia2EgQPgA@mail.gmail.com>
Message-ID: <CAC20D2MKYnyYcuAgR46yh=Hyqp4Z++pkSGFrB6j7G84VhXmmMA@mail.gmail.com>

On Fri, Jun 29, 2018 at 10:55 AM, John P. Linderman <jpl.jpl at gmail.com>
wrote:

> (we called it the fuse-gooser) to charge up a capacitor, insert a fuse,
>


Yeah, it was a wild bit of mechanical design -- it pulled out on a small
rope/wire thingy.  I used to say the 3B was the only computer I knew with a
'pull starter' like on a lawn mower engine.

From web at loomcom.com  Sat Jun 30 01:26:15 2018
From: web at loomcom.com (Seth J. Morabito)
Date: Fri, 29 Jun 2018 15:26:15 +0000
Subject: [TUHS] ATT Hardware
In-Reply-To: <00f101d40fab$5af29aa0$10d7cfe0$@ronnatalie.com>
References: <00f101d40fab$5af29aa0$10d7cfe0$@ronnatalie.com>
Message-ID: <87muvdlilk.fsf@loomcom.com>


ron at ronnatalie.com writes:

> The recent reference to the Dennis's comments on ATT chip production had me
> feeling nostalgic to the 3B line of computers.
> [...]

Oh I love hearing anecdotes about AT&T hardware. It should go without
saying that the 3B2, even with all its horrible flaws, is pretty special
to my heart, given all the effort I put into emulating it! I've learned
to really like the WE32000 architecture. It's just so well suited for
UNIX and C. It's a pity the clock speed was so slow, and that the
3B2/310 and 3B2/400 were so limited in memory. A 4MB maximum was not a
lot for a serious multi-user machine, even for 1985.

But, I have absolutely no experience with the 3B5 and 3B20. I would love
to hear more about them from those of you with experience. Were they
ever a marketing success? (And here, by marketing success, I mean as a
general purpose UNIX computer, not as a telephone switch)

Emulating a 3B5 or 3B20 would be fun, but I've seen even less internals
documentation about them than I have about the 3B2, so I fear it's a
hopeless task.

-Seth
--
Seth Morabito
https://loomcom.com/
web at loomcom.com


From tfb at tfeb.org  Sat Jun 30 01:32:59 2018
From: tfb at tfeb.org (tfb at tfeb.org)
Date: Fri, 29 Jun 2018 16:32:59 +0100
Subject: [TUHS] PDP-11 legacy, C, and modern architectures
In-Reply-To: <20180628170955.GH21688@mcvoy.com>
References: <af780f9fb5c14e37f12ce5c2a4e40376669c730f@webmail.yaccman.com>
 <81277CC3-3C4A-49B8-8720-CFAD22BB28F8@bitblocks.com>
 <20180628141538.GB663@thunk.org> <20180628144017.GB21688@mcvoy.com>
 <20180628105538.65f82615@jabberwock.cb.piermont.com>
 <20180628145825.GE21688@mcvoy.com>
 <2B710879-7659-47A4-AA86-03F232F7B78B@tfeb.org>
 <20180628160202.GF21688@mcvoy.com>
 <79022674-0FFA-4B1B-8A27-4C403D51540E@tfeb.org>
 <20180628170955.GH21688@mcvoy.com>
Message-ID: <8881414B-FF5C-4BD9-B518-AD22366DE4BC@tfeb.org>

On 28 Jun 2018, at 18:09, Larry McVoy <lm at mcvoy.com> wrote:
> 
> I'm not sure how people keep missing the original point.  Which was:
> the market won't choose a bunch of wimpy cpus when it can get faster
> ones.  It wasn't about the physics (which I'm not arguing with), it 
> was about a choice between lots of wimpy cpus and a smaller number of
> fast cpus.  The market wants the latter, as Ted said, Sun bet heavily
> on the former and is no more.

[I said I wouldn't reply more: I'm weak.]

I think we have been talking at cross-purposes, which is probably my fault.  I think you've been using 'wimpy' to mean 'intentionally slower than they could be' while I have been using it to mean 'of very tiny computational power compared to the power of the whole system'.  Your usage is probably more correct in terms of the way the term has been used historically.

But I think my usage tells you something important: that the performance of individual cores will, inevitably, become increasingly tiny compared to the performance of the system they are in, and will almost certainly become asymptotically constant (ie however much money you spend on an individual core it will not be very much faster than the one you can buy off-the-shelf). So, if you want to keep seeing performance improvements (especially if you want to keep seeing exponential improvements for any significant time), then you have no choice but to start thinking about parallelism.

The place I work now is an example of this.  Our machines have the fastest cores we could get.  But we need nearly half a million of them to do the work we want to do (this is across three systems).

I certainly don't want to argue that choosing intentionally slower cores than you can get is a good idea in general (although there are cases where it may be, including, perhaps, some HPC workloads).

---

However let me add something about the Sun T-series machines, which were 'wimpy cores' in the 'intentionally slower' sense.  When these started appearing I worked in a canonical Sun customer at the time: a big retail bank.  And the reason we did not buy lots of them was nothing to do with how fast they were (which was more than fast enough), it was because Sun's software was inadequate.

To see why, consider what retail banks' IT looked like in the late 2000s.  We had a great mass of applications, the majority of which ran on individual Solaris instances (at least two, live and DR, per application).  A very high proportion (not all) of these applications had utterly negligible computational requirements.  But they had very strong requirements on availability, or at least the parts of the business which owned them said they did and we could not argue with that, especially given that this was 2008 and we knew that if we had a visible outage there was a fair chance that it would be misread as the bank failing, resulting in a cascade failure of the banking system and the inevitable zombie apocalypse.  No one wanted that.

Some consolidation had already been done: we had a bunch of 25ks, many of which were split into lots of domains.  The smallest domain on a 25k was a single CPU board which was 4 sockets and therefore 8 or 16 (I forget how many cores there were per socket) cores.  I think you could not partition a 25k like that completely because you ran out of IO assemblies, so some domains had to be bigger.

This smallest domain was huge overkill for many of these applications, and 25ks were terribly expensive as well.

So, along came the first T-series boxes and they were just obviously ideal: we could consolidate lots and lots of these things onto a single T-series box, with a DR partner, and it would cost some tiny fraction of what a 25k cost, and use almost no power (DC power was and is a real problem).

But we didn't do that: we did some experiments, and some things moved I think, but on the whole we didn't move.  The reason we didn't move was nothing, at all, to do with performance, it was, as I said, software, and in particular virtualisation. Sun had two approaches to this, neither of which solved the problems that everyone had.

At the firmware level there were LDOMs (which I think did not work very well early on or may not have existed) which let you cut up a machine into lots of smaller ones with a hypervisor in the usual way.  But all of these smaller machines shared the same hardware of course.  So if you had a serious problem on the machine, then all of your LDOMs went away, and all of the services on that machine had an outage, at once.  This was not the case on a 25k: if a CPU or an IO board died it would affect the domain it was part of, but everything else would carry on.

At the OS level there were zones (containers).  Zones had the advantage that they could look like Solaris 8 (the machine itself, and therefore the LDOMs it got split into, could only run Solaris 10), which all the old applications were running, and they could be very fine-grained.  But they weren't really very isolated from each other (especially in hindsight), they didn't look *enough* like Solaris 8 for people to be willing to certify the applications on them, and they still had the all-your-eggs-in-one-basket problem if the hardware died.

The thing that really killed it was the eggs-in-one-basket problem.  We had previous experience with consolidating a lot of applications onto one OS & hardware instance, and no-one wanted to go anywhere near that.  If you needed to get an outage (say to install a critical security patch, or because of failing hardware) you had to negotiate this with *all* the application teams, all of whom had different requirements and all of whom regarded their application as the most important thing the bank ran (some of them might be right).  It could very easily take more than a year to get an outage on the big shared-services machines, and when the outage happened you would have at least 50 people involved to stop and restart everything.  It was just a scarring nightmare.

So, to move to the T-series machines what we would have needed was a way of partitioning the machine in such a way that the partitions ran Solaris 8 natively, and in such a way that the partitions could be moved, live, to other systems to deal with the eggs-in-one-basket problem.  Sun didn't have that, anywhere near (they knew this I think, and they got closer later on, but it was too late).

So, the machines failed for us.  But this failure was nothing, at all, to do with performance, let alone performance per core, which was generally more than adequate.  Lots of wimpy, low power, CPUs was what we needed, in fact: we just needed the right software on top of them, which was not there.

(As an addendum: what eventually happened / is happening I think is that applications are getting recertified on Linux/x86 sitting on top of ESX, *which can move VMs live between hosts*, thus solving the problem.)

From perry at piermont.com  Sat Jun 30 01:41:24 2018
From: perry at piermont.com (Perry E. Metzger)
Date: Fri, 29 Jun 2018 11:41:24 -0400
Subject: [TUHS] PDP-11 legacy, C, and modern architectures
In-Reply-To: <20180629001831.GA29490@mcvoy.com>
References: <af780f9fb5c14e37f12ce5c2a4e40376669c730f@webmail.yaccman.com>
 <81277CC3-3C4A-49B8-8720-CFAD22BB28F8@bitblocks.com>
 <20180628141538.GB663@thunk.org>
 <20180628104329.754d2c19@jabberwock.cb.piermont.com>
 <20180628145609.GD21688@mcvoy.com>
 <20180628154246.3a1ce74a@jabberwock.cb.piermont.com>
 <CANCZdfrj1t=DvgBmYfBNuEUzXDyFZiY=uCzK4a_2rqvtPmO_NA@mail.gmail.com>
 <20180628170317.14d65067@jabberwock.cb.piermont.com>
 <20180628222954.GD8521@thunk.org>
 <20180629001831.GA29490@mcvoy.com>
Message-ID: <20180629114124.3529a1a8@jabberwock.cb.piermont.com>

On Thu, 28 Jun 2018 17:18:31 -0700 Larry McVoy <lm at mcvoy.com> wrote:
> On Thu, Jun 28, 2018 at 06:29:54PM -0400, Theodore Y. Ts'o wrote:
> > On Thu, Jun 28, 2018 at 05:03:17PM -0400, Perry E. Metzger
> > wrote:  
> > > 
> > > Tens of thousands of machines is a lot more than one. I think
> > > the point stands. This is the age of distributed and parallel
> > > systems.  
> > 
> > This is the age of distributed systems, yes.  I'm not so sure
> > about "parallel".  And the point remains that for many problems,
> > you need fewer strong cores, and a crapton of weak cores is not
> > as useful.  
> 
> As usual, Ted gets it.

My laptop's GPUs are a lot more powerful than the CPU and do much
more most of the time, and they're ridiculously parallel. Everything
from weather prediction to machine learning to Google's search
stuff runs _parallel_, not just distributed. All the simulations I do
of molecular systems run parallel, on lots and lots of machines.

> Perry, please take this in the spirit in which is intended, but
> you're arguing with people who have been around the block (there
> are people on this list that have 5 decades of going around the
> block - looking at you Ken).

You don't remember that you've known me for thirty years or so?

Hell, you used to help me out. Viktor and I and a couple of other
people built the predecessor of Jumpstart at Lehman Brothers like 25
years ago, it was called PARIS, and you were the one who did stuff
like telling us the secret IOCTL to turn off sync FFS metadata
writes in SunOS so we could saturate our disk controllers during
installs. (I guess it wasn't that secret but it was a big help, we got
the bottleneck down to being network bandwidth and could install a
single workstation from "boot net" to ready for the trader in five
minutes.)

I guess I wasn't that memorable, but I'm sure at least Ted remembers
me, we've been paling around at conferences and IETF meetings for
decades.

Anyway, I've also been doing this for quite a while. Not as long as
many people here, I'm way younger than the people who were hacking in
the 1960s on IBM 360s (I was learning things like reading back then,
not computer science), but my first machine was a PDP-8e with an
ASR-33 teletype.

> This is a really poor place for a younger person

I wish I was young. People do still tell me that I look like I'm in my
30s but I think that's just that my hair isn't gray yet for some
unaccountable reason.

Perry
-- 
Perry E. Metzger		perry at piermont.com


From emu at e-bbes.com  Sat Jun 30 01:43:50 2018
From: emu at e-bbes.com (emanuel stiebler)
Date: Fri, 29 Jun 2018 11:43:50 -0400
Subject: [TUHS] PDP-11 legacy, C, and modern architectures
In-Reply-To: <CABH=_VQyoKgCcfRGJgpac5freEGBAeGetr9v9L8bTh0Oe6PYzA@mail.gmail.com>
References: <af780f9fb5c14e37f12ce5c2a4e40376669c730f@webmail.yaccman.com>
 <81277CC3-3C4A-49B8-8720-CFAD22BB28F8@bitblocks.com>
 <20180628141538.GB663@thunk.org>
 <CABH=_VQyoKgCcfRGJgpac5freEGBAeGetr9v9L8bTh0Oe6PYzA@mail.gmail.com>
Message-ID: <ee9e0f4f-520b-a8a5-cfad-9eee0d19f4fe@e-bbes.com>

On 2018-06-28 12:45, Paul Winalski wrote:
> On 6/28/18, Theodore Y. Ts'o <tytso at mit.edu> wrote:

> Parallel programming *is* hard for humans.  Very few people can cope
> with it, or with the nasty bugs that crop up when you get it wrong.

I'm not so sure about that. Look how many processes/threads(?) some hardware
guys program in VHDL/Verilog at a time, how many of those run in
different clock domains, in parallel, and the stuff works.

I think it is just a matter of getting used to thinking this way ...

>> The problem is that not all people are interested in solving problems
>> which are amenable to embarrassingly parallel algorithms.
> 
> Most interesting problems in fact are not embarrassingly parallel.
> They tend to have data interdependencies.
> 
> There have been some advancements in software development tools to
> make parallel programming easier.  Modern compilers are getting pretty
> good at loop analysis to discover opportunities for parallel execution
> and vectorization in sequentially-written code.

I was actually just musing about this, but we have multi-threaded
architectures, we have languages which would support this.

Probably we are just missing the problems we would like to solve with them?

;-)
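The loop-analysis point above is easy to illustrate: the first loop below has fully independent iterations, which a vectorizing compiler can handle automatically; the second has a loop-carried dependency that defeats naive vectorization. A minimal C sketch:

```c
#include <stddef.h>

/* Independent iterations: a compiler's loop analysis can
 * vectorize or parallelize this freely. */
void scale(double *a, const double *b, double k, size_t n)
{
    for (size_t i = 0; i < n; i++)
        a[i] = k * b[i];
}

/* Loop-carried dependency: each iteration needs the previous
 * result, so straightforward vectorization is impossible.
 * (A parallel prefix sum exists, but it is a different
 * algorithm, not something a vectorizer derives for you.) */
void prefix_sum(double *a, size_t n)
{
    for (size_t i = 1; i < n; i++)
        a[i] += a[i - 1];
}
```

Compilers report exactly this distinction (e.g. GCC's -fopt-info-vec will flag the first loop as vectorized and stay silent on the second).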


From perry at piermont.com  Sat Jun 30 02:09:42 2018
From: perry at piermont.com (Perry E. Metzger)
Date: Fri, 29 Jun 2018 12:09:42 -0400
Subject: [TUHS] PDP-11 legacy, C, and modern architectures
In-Reply-To: <8881414B-FF5C-4BD9-B518-AD22366DE4BC@tfeb.org>
References: <af780f9fb5c14e37f12ce5c2a4e40376669c730f@webmail.yaccman.com>
 <81277CC3-3C4A-49B8-8720-CFAD22BB28F8@bitblocks.com>
 <20180628141538.GB663@thunk.org> <20180628144017.GB21688@mcvoy.com>
 <20180628105538.65f82615@jabberwock.cb.piermont.com>
 <20180628145825.GE21688@mcvoy.com>
 <2B710879-7659-47A4-AA86-03F232F7B78B@tfeb.org>
 <20180628160202.GF21688@mcvoy.com>
 <79022674-0FFA-4B1B-8A27-4C403D51540E@tfeb.org>
 <20180628170955.GH21688@mcvoy.com>
 <8881414B-FF5C-4BD9-B518-AD22366DE4BC@tfeb.org>
Message-ID: <20180629120942.6cd4b67d@jabberwock.cb.piermont.com>

On Fri, 29 Jun 2018 16:32:59 +0100 tfb at tfeb.org wrote:
> On 28 Jun 2018, at 18:09, Larry McVoy <lm at mcvoy.com> wrote:
> > 
> > I'm not sure how people keep missing the original point.  Which
> > was: the market won't choose a bunch of wimpy cpus when it can
> > get faster ones.  It wasn't about the physics (which I'm not
> > arguing with), it was about a choice between lots of wimpy cpus
> > and a smaller number of fast cpus.  The market wants the latter,
> > as Ted said, Sun bet heavily on the former and is no more.
> 
> [I said I wouldn't reply more: I'm weak.]
> 
> I think we have been talking at cross-purposes, which is probably
> my fault.  I think you've been using 'wimpy' to mean 'intentionally
> slower than they could be' while I have been using it to mean 'of
> very tiny computational power compared to the power of the whole
> system'.  Your usage is probably more correct in terms of the way
> the term has been used historically.
>
> But I think my usage tells you something important: that the
> performance of individual cores will, inevitably, become
> increasingly tiny compared to the performance of the system they
> are in, and will almost certainly become asymptotically constant
> (ie however much money you spend on an individual core it will not
> be very much faster than the one you can buy off-the-shelf).

You've touched the point with a needle. This is exactly what I was
getting at. It's not a question of whether you want a better single
CPU or not, they don't exist, and one CPU is a tiny fraction of the
power in a modern distributed or parallel system.

> So, if you want to keep seeing performance improvements (especially
> if you want to keep seeing exponential improvements for any
> significant time), then you have no choice but to start thinking
> about parallelism.
>
> The place I work now is an example of this.  Our machines have the
> fastest cores we could get.  But we need nearly half a million of
> them to do the work we want to do (this is across three systems).

And that's been the case for a long time now. Supercomputers are all
giant arrays of CPUs, there hasn't been a world-beating single core
supercomputer in decades.

> I certainly don't want to argue that choosing intentionally slower
> cores than you can get is a good idea in general (although there
> are cases where it may be, including, perhaps, some HPC workloads).

And anything where power usage is key. If you're doing something that
computationally requires top-end single core performance you run the
4GHz core. Otherwise, you save a boatload of power and also
_cooling_ by going a touch slower. Dynamic power rises as the square
of the clock rate so you get quite a lot out of backing off just a
bit.

But it doesn't matter much even if you want to run flat out, because
no single processor can do what you want any more. There's a serious
limit to how much money you can throw at the vendors to give you a
faster core, and it tops out (per core) at a lot less than you're
paying your top engineers per day.

> However let me add something about the Sun T-series machines, which
> were 'wimpy cores' in the 'intentionally slower' sense.  When these
> started appearing I worked in a canonical Sun customer at the time:
> a big retail bank.  And the reason we did not buy lots of them was
> nothing to do with how fast they were (which was more than fast
> enough), it was because Sun's software was inadequate.

Your memory of this is the same as mine. I was also consulting to quite
similar customers at the time (most of my career has been consulting
to large investment banks. Haven't done much on the retail side though
that's changed in recent years.)

> (As an addendum: what eventually happened / is happening I think is
> that applications are getting recertified on Linux/x86 sitting on
> top of ESX, *which can move VMs live between hosts*, thus solving
> the problem.)

At the investment banks, the appetite for new Sun hardware when cheap
Linux on Intel was available was just not there. As you noted, too
many eggs in one basket was one problem, but another was the
combination of better control on an open source OS and just plain
cheaper and (by then sometimes even nicer) hardware.

Sun seemed to get fat in the .com bubble when people were throwing
money at their stuff like crazy, and never quite adjusted to the fact
that the world was changing and people wanted lots of cheap boxes more
than they wanted a few expensive ones.

Perry
-- 
Perry E. Metzger		perry at piermont.com


From ron at ronnatalie.com  Sat Jun 30 02:27:29 2018
From: ron at ronnatalie.com (ron at ronnatalie.com)
Date: Fri, 29 Jun 2018 12:27:29 -0400
Subject: [TUHS] C Threading
Message-ID: <012201d40fc6$137b3590$3a71a0b0$@ronnatalie.com>

Thread local storage and starting threads up is largely a rather
inconsequential implementation detail.   When it comes down to actual
parallel programming, of which I have done more than a little, the big thing
is thread synchronization.    It's rather hardware dependent.    You can
pretty much entirely wipe out any parallelism gains with a synchronization
call that results in a context switch or even a serious cache impact.    On
one side you have machines like the Denelcor HEP, where every memory word had
a pair of semaphores on it, the instructions could stall the process
while waiting for them, and the hardware would schedule the other threads.
On the other hand you have your x86, with which you can do a few clever things
using atomic operations and inlined assembler, but a lot of the
"standard" (boost, pthread, etc...) synchs will kill you.




From ron at ronnatalie.com  Sat Jun 30 02:29:13 2018
From: ron at ronnatalie.com (ron at ronnatalie.com)
Date: Fri, 29 Jun 2018 12:29:13 -0400
Subject: [TUHS] ATT Hardware
In-Reply-To: <87muvdlilk.fsf@loomcom.com>
References: <00f101d40fab$5af29aa0$10d7cfe0$@ronnatalie.com>
 <87muvdlilk.fsf@loomcom.com>
Message-ID: <013201d40fc6$5118ad60$f34a0820$@ronnatalie.com>

My favorite 3B2ism was that the power switch was soft (uncommon then, not so
much now).   I seem to recall that if the logged in user wasn't in a
particular group, pushing the power button was a no-op.   You didn't have
sufficient privs to operate the power.




From ron at ronnatalie.com  Sat Jun 30 02:31:22 2018
From: ron at ronnatalie.com (ron at ronnatalie.com)
Date: Fri, 29 Jun 2018 12:31:22 -0400
Subject: [TUHS] ATT Hardware
In-Reply-To: <CAC0cEp_vkP9TDVFBWNxuq=LXsmWhs1-5C2Z6qfvKzia2EgQPgA@mail.gmail.com>
References: <00f101d40fab$5af29aa0$10d7cfe0$@ronnatalie.com>
 <CAC0cEp_vkP9TDVFBWNxuq=LXsmWhs1-5C2Z6qfvKzia2EgQPgA@mail.gmail.com>
Message-ID: <014a01d40fc6$9e3c24f0$dab46ed0$@ronnatalie.com>

 

  We were also gifted a 3B2. We brought it up single user, and it took 20 seconds to run a ps command. Our computers were theme-named after birds (the 3B20 pair were heckle and jeckle), so we named the 3B2 junco. Our director told us we couldn't do that, we had to play nice with the chip folks. So we renamed it jay. But we all knew what the j stood for.

 

Not unlike the “J” prefix in all the 5620 software, the last vestiges of the jab at PERQ calling DMD predecessor the JERQ.

 

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://minnie.tuhs.org/pipermail/tuhs/attachments/20180629/46d14b5e/attachment.html>

From imp at bsdimp.com  Sat Jun 30 02:59:03 2018
From: imp at bsdimp.com (Warner Losh)
Date: Fri, 29 Jun 2018 10:59:03 -0600
Subject: [TUHS] C Threading
In-Reply-To: <012201d40fc6$137b3590$3a71a0b0$@ronnatalie.com>
References: <012201d40fc6$137b3590$3a71a0b0$@ronnatalie.com>
Message-ID: <CANCZdfr7OLSsmBCV_1L0e+b0cd82k9H9MbyMEcWotCh5FBASOg@mail.gmail.com>

On Fri, Jun 29, 2018 at 10:27 AM, <ron at ronnatalie.com> wrote:

> Thread local storage and starting threads up is largely a rather
> inconsequential implementation detail.   When it comes down to actual
> parallel programming, of which I have done more than a little, the big
> thing
> is thread synchronization.    It's rather hardware dependent.    You can
> pretty much entirely wipe out any parallelism gains with a synchronization
> call that results in a context switch or even a serious cache impact.    On
> one side you have machines like the Denelcor HEP where every memory word
> had
> a pair of semaphores on it and the instructions could stall the process
> while waiting for them and the hardware would schedule the other threads.
> On the other hand you have your x86, which you can do a few clever things
> with some atomic operations and inlined assembler but a lot of the
> "standard" (boost, pthread, etc...) synchs will kill you.
>

C11 also defines thread APIs and atomic operations sufficient to do many
types of locking. POSIX layers on threads as well that could be implemented
using those atomics.

Warner
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://minnie.tuhs.org/pipermail/tuhs/attachments/20180629/b3be640d/attachment.html>

From bakul at bitblocks.com  Sat Jun 30 03:14:30 2018
From: bakul at bitblocks.com (Bakul Shah)
Date: Fri, 29 Jun 2018 10:14:30 -0700
Subject: [TUHS] Faster cpus at any cost
In-Reply-To: Your message of "Fri, 29 Jun 2018 09:51:55 +0100."
 <a1117f6b1a451ac858ba5cd8c1e84280@quintile.net>
References: <a1117f6b1a451ac858ba5cd8c1e84280@quintile.net>
Message-ID: <20180629171437.D2B08156E410@mail.bitblocks.com>

On Fri, 29 Jun 2018 09:51:55 +0100 "Steve Simon" <steve at quintile.net> wrote:
> I know this is a dangerous game to play, but what of the future?

[In the same spirit :-)]
The spin orbit torque (SOT) MRAM devices seem very promising.

Spin transfer torque (STT) and SOT MRAMs are both seen as persistent
memories that will work beyond the feature sizes where
current flash-based CMOS devices won't, as flash relies on
retaining charge while STT/SOT devices depend on magnetism
(the spin direction is switched with a tiny bit of electrical
energy). What is more, they don't have the write limits
of flash memory, nor the very long write times (STT/SOT writes
are on the order of 100ps to 10ns as opposed to 1us to 1ms for
flash, and take a million times less energy to write).

STT-MRAMs are already being used in small sizes (given no
write limits & high speed, they make a good cache layer for
SSDs).  But STT devices have a number of limits that SOT devices
don't have.  SOTs can be written about 10 times faster and they can
be used at even smaller feature sizes.

The really interesting part is that logic gates have been constructed
using the same technology. Unlike traditional charge-based
devices where CPU and massive memory are kept separate, here
logic and memory can be on the same chip. In fact the same
device can perform logic and retain the results, and the logic
can be electrically reconfigured.  These gates can be an order
of magnitude smaller (compared to 14nm FinFET CMOS) and are
ultra energy efficient => much less heat. And massive
parallelism.

Too early to tell whether this actually pans out or scales up
to billions of gates.

And of course, if we are to believe the crowd here, this will
be an utter failure since it won't really help us run C programs
faster :-)

Also, I just read this stuff; I have no insight and I may have
misconstrued everything!

A reference that may be of interest:
    https://www.nature.com/articles/s41598-017-14783-1.pdf

> I am intrigued by the idea of true optical computers,
> perhaps the $10M super-computer will return?

Optics will more likely be used for a communication layer.


From lm at mcvoy.com  Sat Jun 30 03:51:26 2018
From: lm at mcvoy.com (Larry McVoy)
Date: Fri, 29 Jun 2018 10:51:26 -0700
Subject: [TUHS] PDP-11 legacy, C, and modern architectures
In-Reply-To: <8881414B-FF5C-4BD9-B518-AD22366DE4BC@tfeb.org>
References: <81277CC3-3C4A-49B8-8720-CFAD22BB28F8@bitblocks.com>
 <20180628141538.GB663@thunk.org> <20180628144017.GB21688@mcvoy.com>
 <20180628105538.65f82615@jabberwock.cb.piermont.com>
 <20180628145825.GE21688@mcvoy.com>
 <2B710879-7659-47A4-AA86-03F232F7B78B@tfeb.org>
 <20180628160202.GF21688@mcvoy.com>
 <79022674-0FFA-4B1B-8A27-4C403D51540E@tfeb.org>
 <20180628170955.GH21688@mcvoy.com>
 <8881414B-FF5C-4BD9-B518-AD22366DE4BC@tfeb.org>
Message-ID: <20180629175126.GB10867@mcvoy.com>

On Fri, Jun 29, 2018 at 04:32:59PM +0100, tfb at tfeb.org wrote:
> On 28 Jun 2018, at 18:09, Larry McVoy <lm at mcvoy.com> wrote:
> > 
> > I'm not sure how people keep missing the original point.  Which was:
> > the market won't choose a bunch of wimpy cpus when it can get faster
> > ones.  It wasn't about the physics (which I'm not arguing with), it 
> > was about a choice between lots of wimpy cpus and a smaller number of
> > fast cpus.  The market wants the latter, as Ted said, Sun bet heavily
> > on the former and is no more.
> 
> [I said I wouldn't reply more: I'm weak.]
> 
> I think we have been talking at cross-purposes, which is probably
> my fault.  I think you've been using 'wimpy' to mean 'intentionally
> slower than they could be' while I have been using it to mean 'of very
> tiny computational power compared to the power of the whole system'.
> Your usage is probably more correct in terms of the way the term has
> been used historically.

Not "intentionally" as "let me slow this down" but as in "it's faster 
and cheaper to make a slower cpu so I'll just give you more of them".

The market has shown, repeatedly, that more slow cpus are not as fun
as fewer, faster cpus.

It's not a hard concept and I struggle to understand why it's a point
to discuss.

> But I think my usage tells you something important: that the performance
> of individual cores will, inevitably, become increasingly tiny compared
> to the performance of the system they are in ....

Yeah, so what?  That wasn't the point being discussed though you and Perry
keep pushing it.  


From web at loomcom.com  Sat Jun 30 03:51:43 2018
From: web at loomcom.com (Seth J. Morabito)
Date: Fri, 29 Jun 2018 17:51:43 +0000
Subject: [TUHS] ATT Hardware
In-Reply-To: <014a01d40fc6$9e3c24f0$dab46ed0$@ronnatalie.com>
References: <00f101d40fab$5af29aa0$10d7cfe0$@ronnatalie.com>
 <CAC0cEp_vkP9TDVFBWNxuq=LXsmWhs1-5C2Z6qfvKzia2EgQPgA@mail.gmail.com>
 <014a01d40fc6$9e3c24f0$dab46ed0$@ronnatalie.com>
Message-ID: <87wouhtr9s.fsf@loomcom.com>


ron at ronnatalie.com writes:

> Not unlike the “J” prefix in all the 5620 software, the last vestiges
> of the jab at PERQ calling DMD predecessor the JERQ.

Oh... I guess this should have been obvious to me, but I had no idea
this is where the JERQ moniker came from. You learn something new every
day!


-Seth
--
Seth Morabito
https://loomcom.com/
web at loomcom.com


From lm at mcvoy.com  Sat Jun 30 04:01:09 2018
From: lm at mcvoy.com (Larry McVoy)
Date: Fri, 29 Jun 2018 11:01:09 -0700
Subject: [TUHS] PDP-11 legacy, C, and modern architectures
In-Reply-To: <20180629114124.3529a1a8@jabberwock.cb.piermont.com>
References: <81277CC3-3C4A-49B8-8720-CFAD22BB28F8@bitblocks.com>
 <20180628141538.GB663@thunk.org>
 <20180628104329.754d2c19@jabberwock.cb.piermont.com>
 <20180628145609.GD21688@mcvoy.com>
 <20180628154246.3a1ce74a@jabberwock.cb.piermont.com>
 <CANCZdfrj1t=DvgBmYfBNuEUzXDyFZiY=uCzK4a_2rqvtPmO_NA@mail.gmail.com>
 <20180628170317.14d65067@jabberwock.cb.piermont.com>
 <20180628222954.GD8521@thunk.org>
 <20180629001831.GA29490@mcvoy.com>
 <20180629114124.3529a1a8@jabberwock.cb.piermont.com>
Message-ID: <20180629180109.GC10867@mcvoy.com>

> You don't remember that you've known me for thirty years or so?
> 
> Hell, you used to help me out. Viktor and I and a couple of other
> people built the predecessor of Jumpstart at Lehman Brothers like 25
> years ago, it was called PARIS, and you were the one who did stuff
> like telling us the secret IOCTL to turn off sync FFS metadata
> writes in SunOS so we could saturate our disk controllers during
> installs. (I guess it wasn't that secret but it was a big help, we got
> the bottleneck down to being network bandwidth and could install a
> single workstation from "boot net" to ready for the trader in five
> minutes.)

Heh, check this out:

$ call Viktor
Viktor Dukhovni                 718-754-2126 (W/Morgan Stanley)

Sorry Perry, I had completely forgotten about all that stuff.  Now that
you mention it, I do remember your install stuff, it was very cool.
But I'm horrible with names, when I was teaching at Stanford I'd have
students come up to me from a couple of semesters ago and I'd have no
idea what their names were.  Sorry about that.

> Anyway, I've also been doing this for quite a while. Not as long as
> many people here, I'm way younger than the people who were hacking in
> the 1960s on IBM 360s (I was learning things like reading back then,
> not computer science), but my first machine was a PDP-8e with an
> ASR-33 teletype.

Well welcome to the old farts club, I'll cut you some slack :)
I still think you are missing the point I was trying to make,
it's amusing, a bit, that you are preaching what you are to me
as a guy who moved Sun in that direction.  I'm not at all against
your arguments, I was just making a different point.


From toby at telegraphics.com.au  Sat Jun 30 04:12:06 2018
From: toby at telegraphics.com.au (Toby Thain)
Date: Fri, 29 Jun 2018 14:12:06 -0400
Subject: [TUHS] Faster cpus at any cost
In-Reply-To: <a1117f6b1a451ac858ba5cd8c1e84280@quintile.net>
References: <a1117f6b1a451ac858ba5cd8c1e84280@quintile.net>
Message-ID: <6650bebe-ebb3-3ee4-807c-96e9a3131f23@telegraphics.com.au>

On 2018-06-29 4:51 AM, Steve Simon wrote:
> I know this is a dangerous game to play, but what of the future?
> 
> I am intrigued by the idea of true optical computers,
> perhaps the $10M super-computer will return?
> 
> GPUs have definitely taken over in some areas - even where
> SIMD would not seem a good fit. My previous employer does motion
> estimation of realtime video and has moved from custom electronics
> and FPGAs to using off the shelf GPUs in PCs.

On this topic, 10 years old, but the Kogge whitepaper on "Exascale
Computing" might be of interest:

https://www.researchgate.net/publication/242366160_ExaScale_Computing_Study_Technology_Challenges_in_Achieving_Exascale_Systems

--T

> 
> -Steve
> 
> 



From perry at piermont.com  Sat Jun 30 04:13:04 2018
From: perry at piermont.com (Perry E. Metzger)
Date: Fri, 29 Jun 2018 14:13:04 -0400
Subject: [TUHS] Faster cpus at any cost
In-Reply-To: <a1117f6b1a451ac858ba5cd8c1e84280@quintile.net>
References: <mailman.1.1530237601.20140.tuhs@minnie.tuhs.org>
 <a1117f6b1a451ac858ba5cd8c1e84280@quintile.net>
Message-ID: <20180629141304.4f10ed7f@jabberwock.cb.piermont.com>

On Fri, 29 Jun 2018 09:51:55 +0100 "Steve Simon" <steve at quintile.net>
wrote:
> I know this is a dangerous game to play, but what of the future?
> 
> I am intrigued by the idea of true optical computers,
> perhaps the $10M super-computer will return?

One problem with optical is feature size. Even ultraviolet 200nm light
waves are pretty big compared to the feature size we now have in the
best chips (which is under 10nm at this point, though it will soon
stall out.) If you want EM waves that are close to the size of
current features, you're in the range of X-rays and aren't going to
have an easy time manipulating them.

All that said, a different kind of gedankenexperiment:

Say you wanted to store data as densely as possible. Lets ignore how
you would manage to read it and consider things on the same order of
magnitude of "as dense as we're going to get". If you stored 1s and
0s as C12 and C13 atoms in a diamond lattice, you get about
1.75e23 bits per cc, or about 22 zettabytes per cc. (Someone should
check my math.) I think it might be hard to do more than an order of
magnitude better than that. So, that's a crazy amount of storage, but
it looks like a pretty strong limit.

> GPUs have definitely taken over in some areas - even where
> SIMD would not seem a good fit. My previous employer does motion
> estimation of realtime video and has moved from custom electronics
> and FPGAs to using off the shelf GPUs in PCs.

Not surprised to hear. They're kind of everywhere at this point,
especially in scientific computation.

Perry
-- 
Perry E. Metzger		perry at piermont.com


From tfb at tfeb.org  Sat Jun 30 04:27:39 2018
From: tfb at tfeb.org (Tim Bradshaw)
Date: Fri, 29 Jun 2018 19:27:39 +0100
Subject: [TUHS] PDP-11 legacy, C, and modern architectures
In-Reply-To: <20180629175126.GB10867@mcvoy.com>
References: <81277CC3-3C4A-49B8-8720-CFAD22BB28F8@bitblocks.com>
 <20180628141538.GB663@thunk.org> <20180628144017.GB21688@mcvoy.com>
 <20180628105538.65f82615@jabberwock.cb.piermont.com>
 <20180628145825.GE21688@mcvoy.com>
 <2B710879-7659-47A4-AA86-03F232F7B78B@tfeb.org>
 <20180628160202.GF21688@mcvoy.com>
 <79022674-0FFA-4B1B-8A27-4C403D51540E@tfeb.org>
 <20180628170955.GH21688@mcvoy.com>
 <8881414B-FF5C-4BD9-B518-AD22366DE4BC@tfeb.org>
 <20180629175126.GB10867@mcvoy.com>
Message-ID: <9749B511-FFA0-461D-9511-9B60B76BF3EE@tfeb.org>

On 29 Jun 2018, at 18:51, Larry McVoy <lm at mcvoy.com> wrote:
> 
> Yeah, so what?  That wasn't the point being discussed though you and Perry
> keep pushing it.  

Fuck this.  Warren: can you unsubscribe me please?


From perry at piermont.com  Sat Jun 30 04:41:00 2018
From: perry at piermont.com (Perry E. Metzger)
Date: Fri, 29 Jun 2018 14:41:00 -0400
Subject: [TUHS] PDP-11 legacy, C, and modern architectures
In-Reply-To: <20180629125848.GD1231@thunk.org>
References: <af780f9fb5c14e37f12ce5c2a4e40376669c730f@webmail.yaccman.com>
 <81277CC3-3C4A-49B8-8720-CFAD22BB28F8@bitblocks.com>
 <20180628141538.GB663@thunk.org>
 <20180629020219.1AAB4156E517@mail.bitblocks.com>
 <20180629125848.GD1231@thunk.org>
Message-ID: <20180629144100.474eccf4@jabberwock.cb.piermont.com>

On Fri, 29 Jun 2018 08:58:48 -0400 "Theodore Y. Ts'o" <tytso at mit.edu>
wrote:
> All of this is to point out that talking about 2998 threads really
> doesn't mean much.  We shouldn't be talking about threads; we should
> be talking about how many CPU cores can we usefully keep busy at the
> same time.  Most of the time, for desktops and laptops, except for
> brief moments when you are running "make -j32" (and that's only for
> us weird programmer-types; we aren't actually the common case),
> most of the time, the user-facing CPU is twiddling its fingers.

On the other hand, when people are doing machine learning stuff on
GPUs or TPUs, or are playing a video game on a GPU, or are doing
weather prediction or magnetohydrodynamics simulations, or are
calculating the bonding energies of molecules, they very much _are_
going to use each and every computation unit they can get their hands
on. Yah, a programmer doesn't use very much CPU unless they're
compiling, but the world's big iron isn't there to make programmers
happy these days.

Even if "all" you're doing is handling instant message traffic for a
couple hundred million people, you're going to be doing a lot of stuff
that isn't easily classified as concurrent vs. parallel
vs. distributed any more, and the edges all get funny.

> > 4. The reason most people prefer to use one very high perf.
> >    CPU rather than a bunch of "wimpy" processors is *because*
> >    most of our tooling uses only sequential languages with
> >    very little concurrency.
>
> The problem is that I've been hearing this excuse for two decades.
> And there have been people who have been working on this problem.
> And at this point, there's been a bit of "parallelism winter" that
> is much like the "AI winter" in the 80's.

The parallelism winter already happened, in the late 1980s. Larry may
think I'm a youngster, but I worked over 30 years ago on a machine
called "DADO" built at Columbia with 1023 processors arranged in a
tree with MIMD and SIMD features depending on how it was
configured. Later I worked at Bellcore on something called the Y machine
built for massive simulations.

All that parallel stuff died hard because general purpose CPUs kept
getting faster too quickly for it to be worthwhile. Now, however,
things are different.

> Lots of people have been promising wonderful results for a long time;
> Sun bet their company (and lost) on it; and there hasn't been much
> in the way of results.

I'm going to dispute that. Sun bet the company on the notion that
people wanted machines they could sell them at ridiculously high
margins with pretty low performance per dollar when they could go out
and buy Intel boxes running RHEL instead. As has been said elsewhere
in the thread, what failed wasn't Niagara as much as Sun's entire
attempt to sell people what were effectively mainframes at a point
where they no longer wanted giant boxes with low throughput per
dollar.

Frankly, Sun jumped the shark long before. I remember some people (Hi
Larry!) valiantly adding minor variants to the SunOS release
number (what did it get to? SunOS 4.1.3_u1b or some such?) at a point
where the firm had committed to Solaris. They stopped shipping
compilers so they could charge you for them, stopped maintaining the
desktop, stopped updating the userland utilities so they got
ridiculously behind (Solaris still, when I last checked, had a /bin/sh
that wouldn't handle $( ) ), and put everything into selling bigger
and bigger iron, which paid off for a little while, until it didn't
any more.

> Sure, there are specialized cases where this has been useful ---
> making better nuclear bombs with which to kill ourselves, predicting
> the weather, etc.  But for the most part, there hasn't been much
> improvement for anything other than super-specialized use cases.

Your laptop's GPUs beat its CPUs pretty badly on power, and you're
using them every day. Ditto for the GPUs in your cellphone. You're
invoking big parallel back-ends at Amazon, Google, and Apple all the
time over the network, too, to do things like voice recognition and
finding the best route on a large map. They might be "specialized"
uses, but you're invoking them enough times an hour that maybe they're
not that specialized?

> > 6. The conventional wisdom is parallel languages are a failure
> >    and parallel programming is *hard*.  Hoare's CSP and
> >    Dijkstra's "elephants made out of mosquitos" papers are
> >    over 40 years old.
>
> It's a failure because there hasn't been *results*.  There are
> parallel languages that have been proposed by academics --- I just
> don't think they are any good, and they certainly haven't proven
> themselves to end-users.

Er, Erlang? Rust? Go has CSP and is in very wide deployment? There's
multicore stuff in a bunch of functional languages, too.

If you're using Firefox right now, a large chunk of the code is
running multicore using Rust to assure that parallelism is safe. Rust
does that using type theory work (on linear types) from the PL
community. It's excellent research that's paying off in the real
world.

(And on the original point, you're also using the GPU to render most
of what you're looking at, invoked from your browser, whether Firefox,
Chrome, or Safari. That's all parallel stuff and it's going on every
time you open a web page.)

Perry
-- 
Perry E. Metzger		perry at piermont.com


From perry at piermont.com  Sat Jun 30 04:48:17 2018
From: perry at piermont.com (Perry E. Metzger)
Date: Fri, 29 Jun 2018 14:48:17 -0400
Subject: [TUHS] ATT Hardware
In-Reply-To: <00f101d40fab$5af29aa0$10d7cfe0$@ronnatalie.com>
References: <00f101d40fab$5af29aa0$10d7cfe0$@ronnatalie.com>
Message-ID: <20180629144817.18f76be7@jabberwock.cb.piermont.com>

On Fri, 29 Jun 2018 09:16:12 -0400 <ron at ronnatalie.com> wrote:
> The recent reference to the Dennis's comments on ATT chip
> production had me feeling nostalgic to the 3B line of computers.
> In the late 80's I was in charge of all the UNIX systems (among
> other things) at the state university system in New Jersey.   As a
> result we got a lot of this hardware gifted to us.    The 3B5 and
> 3B2s were pretty doggy compared with the stuff on the market
> then.   The best thing I could say about the 3B5 is that it stood
> up well to having many gallons of water dumped on it (that's
> another story, Rutgers had the computer center under a seven story
> building and it still had a leaky roof).

We had huge numbers of 3B2s at Columbia that were gifted to us by
AT&T. They didn't know what to do with the things, so the undergrads
were subjected to using them for their labs for a few classes like
computer graphics. The blits attached to them were neat, though. If
only the same could have been said for the overall system.

> The 3B20 was another
> thing.   It was a work of telephone company art.    You knew this
> when it came to power it down where you turned a knob inside the
> rack and held a button down until it clicked off.

We had one of those donated, too. It was put into an extra machine
room and not used for very much, I think because the version of the
OS it came with didn't really do networking at a point where
everything else at the Columbia CS department did.

At one point we considered reusing its disk pack drives for some of
the Vaxes but unfortunately the cabling was incompatible. 

Perry
-- 
Perry E. Metzger		perry at piermont.com


From perry at piermont.com  Sat Jun 30 05:02:16 2018
From: perry at piermont.com (Perry E. Metzger)
Date: Fri, 29 Jun 2018 15:02:16 -0400
Subject: [TUHS] PDP-11 legacy, C, and modern architectures
In-Reply-To: <20180629175126.GB10867@mcvoy.com>
References: <81277CC3-3C4A-49B8-8720-CFAD22BB28F8@bitblocks.com>
 <20180628141538.GB663@thunk.org> <20180628144017.GB21688@mcvoy.com>
 <20180628105538.65f82615@jabberwock.cb.piermont.com>
 <20180628145825.GE21688@mcvoy.com>
 <2B710879-7659-47A4-AA86-03F232F7B78B@tfeb.org>
 <20180628160202.GF21688@mcvoy.com>
 <79022674-0FFA-4B1B-8A27-4C403D51540E@tfeb.org>
 <20180628170955.GH21688@mcvoy.com>
 <8881414B-FF5C-4BD9-B518-AD22366DE4BC@tfeb.org>
 <20180629175126.GB10867@mcvoy.com>
Message-ID: <20180629150216.35c1a11b@jabberwock.cb.piermont.com>

On Fri, 29 Jun 2018 10:51:26 -0700 Larry McVoy <lm at mcvoy.com> wrote:
> > But I think my usage tells you something important: that the
> > performance of individual cores will, inevitably, become
> > increasingly tiny compared to the performance of the system they
> > are in ....  
> 
> Yeah, so what?  That wasn't the point being discussed though you
> and Perry keep pushing it.  

Perhaps we're not agreed on what was originally motivating the
discussion. I believe we started with whether modern parallel
computing environments might need something more than C. I think the
original hardware question this spawned was: is dealing with lots of
cores simultaneously here to stay, and do you get something out of
having language support to help with it? We were trying to address
that question, and given that, I think we've been on point.

Single programs that have to handle large numbers of threads and
cores are now common. Every interesting use of the GPU on your
machine is like this, including the classic ones like rendering
images to your screen, but also newer ones like being exploited
for all sorts of purposes by your browser that aren't related to
video as such. Your desktop browser is a fine example of other sorts
of parallelism, too: it uses loads of parallelism (not merely
concurrency) on a modern desktop to deal with rapid processing
of the modern hideous and bloated browsing stack.

As for whether there's an advantage to modern languages here, the
answer there is also "yes": Mozilla only really managed to get a
bunch of parallelism into Firefox because they used a language (Rust)
with a pretty advanced linear type system to assure that they didn't
have concurrency problems.

Perry
-- 
Perry E. Metzger		perry at piermont.com


From perry at piermont.com  Sat Jun 30 05:07:42 2018
From: perry at piermont.com (Perry E. Metzger)
Date: Fri, 29 Jun 2018 15:07:42 -0400
Subject: [TUHS] PDP-11 legacy, C, and modern architectures
In-Reply-To: <20180629180109.GC10867@mcvoy.com>
References: <81277CC3-3C4A-49B8-8720-CFAD22BB28F8@bitblocks.com>
 <20180628141538.GB663@thunk.org>
 <20180628104329.754d2c19@jabberwock.cb.piermont.com>
 <20180628145609.GD21688@mcvoy.com>
 <20180628154246.3a1ce74a@jabberwock.cb.piermont.com>
 <CANCZdfrj1t=DvgBmYfBNuEUzXDyFZiY=uCzK4a_2rqvtPmO_NA@mail.gmail.com>
 <20180628170317.14d65067@jabberwock.cb.piermont.com>
 <20180628222954.GD8521@thunk.org>
 <20180629001831.GA29490@mcvoy.com>
 <20180629114124.3529a1a8@jabberwock.cb.piermont.com>
 <20180629180109.GC10867@mcvoy.com>
Message-ID: <20180629150742.5d6cf508@jabberwock.cb.piermont.com>

On Fri, 29 Jun 2018 11:01:09 -0700 Larry McVoy <lm at mcvoy.com> wrote:
> Well welcome to the old farts club, I'll cut you some slack :)
> I still think you are missing the point I was trying to make,
> it's amusing, a bit, that you are preaching what you are to me
> as a guy who moved Sun in that direction.  I'm not at all against
> your arguments, I was just making a different point.

I think we may be mostly talking past each other. To me, the
underlying question began at the start of this thread (see the
subject which we should have changed a long time ago): "is there any
benefit to new sorts of programming languages to deal with the modern
multiprocessor world".

I think we're now at the point where dealing with fleets of
processors is the norm, and on the languages side, I think Erlang was
a good early exemplar on that, and now that we have Rust I think the
answer is a definitive "yes". Go's CSP stuff is clearly also intended
to address this. Having language support so you don't have to handle
concurrent and parallel stuff all on your own is really nice.
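[As an illustration of what that language support looks like — my sketch, not something from the thread — here is the CSP idiom as Go expresses it: goroutines that share nothing and communicate only over channels, so the synchronization lives in the runtime rather than in hand-rolled thread code.]

```go
// A minimal CSP-style pipeline: two goroutines communicate purely
// over channels, with no locks or shared mutable state.
package main

import "fmt"

// square reads ints from in and writes their squares to out,
// closing out when the input is exhausted.
func square(in <-chan int, out chan<- int) {
	for n := range in {
		out <- n * n
	}
	close(out)
}

func main() {
	in := make(chan int)
	out := make(chan int)

	go square(in, out) // a worker "process" in the CSP sense

	go func() { // producer
		for i := 1; i <= 4; i++ {
			in <- i
		}
		close(in)
	}()

	sum := 0
	for s := range out { // main acts as the consumer
		sum += s
	}
	fmt.Println(sum) // 1 + 4 + 9 + 16 = 30
}
```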

Perry
-- 
Perry E. Metzger		perry at piermont.com


From dave at horsfall.org  Sat Jun 30 09:55:02 2018
From: dave at horsfall.org (Dave Horsfall)
Date: Sat, 30 Jun 2018 09:55:02 +1000 (EST)
Subject: [TUHS] Any Good dmr Anecdotes?
In-Reply-To: <20180629075310.GA9477@minnie.tuhs.org>
References: <20180629075310.GA9477@minnie.tuhs.org>
Message-ID: <alpine.BSF.2.21.999.1806300949310.68981@aneurin.horsfall.org>

On Fri, 29 Jun 2018, Warren Toomey wrote:

> We do have ken on the list, so I won't be presumptuous to ask for 
> ken-related anecdotes, but would anybody like to share some dmr 
> anecdotes?

Wasn't it Dennis whose morphed picture featured on a t-shirt at an AUUG 
conference in the 80s?  I can't seem to find a reference right now...

-- Dave


From grog at lemis.com  Sat Jun 30 10:06:25 2018
From: grog at lemis.com (Greg 'groggy' Lehey)
Date: Sat, 30 Jun 2018 10:06:25 +1000
Subject: [TUHS] Any Good dmr Anecdotes?
In-Reply-To: <alpine.BSF.2.21.999.1806300949310.68981@aneurin.horsfall.org>
References: <20180629075310.GA9477@minnie.tuhs.org>
 <alpine.BSF.2.21.999.1806300949310.68981@aneurin.horsfall.org>
Message-ID: <20180630000625.GB17378@eureka.lemis.com>

On Saturday, 30 June 2018 at  9:55:02 +1000, Dave Horsfall wrote:
> On Fri, 29 Jun 2018, Warren Toomey wrote:
>
>> We do have ken on the list, so I won't be presumptuous to ask for
>> ken-related anecdotes, but would anybody like to share some dmr
>> anecdotes?
>
> Wasn't it Dennis whose morphed picture featured on a t-shirt at an AUUG
>> conference in the 80s?  I can't seem to find a reference right now...

No, that was Peter Weinberger, as dmr confirmed.  There's quite a
story behind what happened to the stencil.  I'll let Warren tell it,
but if you want to cheat, check
http://www.lemis.com/grog/diary-oct2002.php#17

Greg
--
Sent from my desktop computer.
Finger grog at lemis.com for PGP public key.
See complete headers for address and phone numbers.
This message is digitally signed.  If your Microsoft mail program
reports problems, please read http://lemis.com/broken-MUA
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 163 bytes
Desc: not available
URL: <http://minnie.tuhs.org/pipermail/tuhs/attachments/20180630/877ed1ed/attachment.sig>

From wkt at tuhs.org  Sat Jun 30 10:21:12 2018
From: wkt at tuhs.org (Warren Toomey)
Date: Sat, 30 Jun 2018 10:21:12 +1000
Subject: [TUHS] Moderation on for a bit
Message-ID: <20180630002112.GA17705@minnie.tuhs.org>

OK, so now I'm a bit p*ssed off. We lost Johnny a few days ago because of
the traffic on the list and now Tim is asking to be unsubscribed.

First up: Unix Heritage. I don't mind a bit of drift but seriously?! I
put in a nudge to wind up the comp.arch chat but that didn't help.

Secondly: a few of you need to pull your heads in.

The TUHS list is going to be moderated for the next couple of days. If you
really feel it necessary, you can post a _summary_ of your comp.arch position 
and then stop.

Posts on Unix anecdotes and on-topic stuff I will let through.

Cheers, Warren


From scj at yaccman.com  Sat Jun 30 10:50:21 2018
From: scj at yaccman.com (Steve Johnson)
Date: Fri, 29 Jun 2018 17:50:21 -0700
Subject: [TUHS] Any Good dmr Anecdotes?
In-Reply-To: <7A8E301D-D8F7-40DA-9042-ABCA887821EC@cheswick.com>
Message-ID: <3386fb80b5282f7bca0ccf34252182c2398232c1@webmail.yaccman.com>

That so sounds like Dennis!   I can add a couple of other things
about Dennis, outside of work.  

A group of us used to go into New York from time to time to take in a
concert, and Dennis was often one of the group.   One time in the
dead of winter, we went to an Early Music concert in a cold drafty
church--it was part of a series called "Music before
1620".    The musicians were almost unable to get their instruments
to stay in tune, and the whole thing was kind of a train wreck.   On
the way back, Dennis said "It's clear that pitch was invented sometime
after 1620."

On another occasion, we found ourselves in Brooklyn having blintzes
outside on picnic tables.  My 4-year-old son was with us.  My wife
was telling people how my son was starting to ask questions about
sex.   My son, hearing his name, said "What's sex?".   Dennis said
"See."

A work-related anecdote.  There was a manager in USG who managed to
get on the nerves of many of us in Research.   One day we came in to
discover that he had been promoted and took a job in Japan.   We
were discussing this at lunch, and someone said "Why Japan?".  
Dennis said "They haven't opened their office on the moon yet."

Steve

PS:  I certainly remember the belt buckle quip.  That chip was so
big that they could not finish the test patterns before the chip came
back.   So they generated random tests and compared the results
until they found two that were the same.  They concluded that these
two chips were fabricated correctly, and went on from there.   The
chip had a number of flaws, and we scrambled to fix them in
software.  One was that the branch instruction garbled the last 4
bits of the branch target.   So one of the guys hacked the assembler
to make every label the center of a "target" of 32 NOP instructions. 
We were able to get the chip to run, and even run an asteroid
game....  The guy who made the fix put up a sign outside of his office
that said "BellMac chips fixed while U wait."   The same VP was
rather PO'd about this as well...

----- Original Message -----
From: "ches at Cheswick.com" <ches at cheswick.com>
To:"Warren Toomey" <wkt at tuhs.org>
Cc:<tuhs at tuhs.org>
Sent:Fri, 29 Jun 2018 06:53:02 -0400
Subject:Re: [TUHS] Any Good dmr Anecdotes?

 Dennis, do you have any recommendations on good books to use to
learn C?

 I don’t know, I never had to learn C. -dmr

 Message by ches. Tappos by iPad.

 > On Jun 29, 2018, at 3:53 AM, Warren Toomey <wkt at tuhs.org> wrote:
 > 
 > We do have ken on the list, so I won't be presumptuous to ask for
ken-related
 > anecdotes, but would anybody like to share some dmr anecdotes?
 > 
 > I never met Dennis in person, but he was generous with his time
about my
 > interest in Unix history; and also with sharing the material he
still had.
 > 
 > Dennis was very clever, though. He would bring out a new artifact
and say:
 > well, here's what I still have of X. Pity it will never execute
again, sigh.
 > 
 > I'm sure he knew that I would take that as a challenge. Mind you,
it worked,
 > which is why we now have the first Unix kernel in C, the 'nsys'
kernel, and
 > the first two C compilers, in executable format.
 > 
 > Any other good anecdotes?
 > 
 > Cheers, Warren


-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://minnie.tuhs.org/pipermail/tuhs/attachments/20180629/632177f8/attachment.html>

From lm at mcvoy.com  Sat Jun 30 11:18:40 2018
From: lm at mcvoy.com (Larry McVoy)
Date: Fri, 29 Jun 2018 18:18:40 -0700
Subject: [TUHS] Moderation on for a bit
In-Reply-To: <20180630002112.GA17705@minnie.tuhs.org>
References: <20180630002112.GA17705@minnie.tuhs.org>
Message-ID: <20180630011840.GB18031@mcvoy.com>

For the record, it was me that made Tim want to bail.  I should have
used more tact, and if Tim is still here I apologize, you're a good guy
and the last thing I want to do is chase people like you off.

I'll try and be better, I love this list, it's one of the few that I'm
still on because of the people on it.

On Sat, Jun 30, 2018 at 10:21:12AM +1000, Warren Toomey wrote:
> OK, so now I'm a bit p*ssed off. We lost Johnny a few days ago because of
> the traffic on the list and now Tim is asking to be unsubscribed.
> 
> First up: Unix Heritage. I don't mind a bit of drift but seriously?! I
> put in a nudge to wind up the comp.arch chat but that didn't help.
> 
> Secondly: a few of you need to pull your heads in.
> 
> The TUHS list is going to be moderated for the next couple of days. If you
> really feel it necessary, you can post a _summary_ of your comp.arch
> position and then stop.
> 
> Posts on Unix anecdotes and on-topic stuff I will let through.
> 
> Cheers, Warren

-- 
---
Larry McVoy            	     lm at mcvoy.com             http://www.mcvoy.com/lm 


From imp at bsdimp.com  Sat Jun 30 10:57:53 2018
From: imp at bsdimp.com (Warner Losh)
Date: Fri, 29 Jun 2018 18:57:53 -0600
Subject: [TUHS] Unix Kernel Size
Message-ID: <CANCZdfp5xy8+Z3gpZ4+OacM_WLqBwdz_hajufKGvi-MFS4jWXw@mail.gmail.com>

Greetings,

I'd like to thank everybody that sent me data for my unix kernel size
stuff. There are two artifacts I've created. One I think I've shared before,
which is my spreadsheet:
https://docs.google.com/spreadsheets/d/13C77pmJFw4ZBmGJuNarBUvWBxBKWXG-jtvARxJDHiXs/edit?usp=sharing

The second are some simplified graphs and the first half of my BSDcan talk:

http://www.bsdcan.org/2018/schedule/events/934.en.html

The video should be available soon for my talk as well, but it's not up
yet. If there's interest, I'll post an update when it is.

Warner
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://minnie.tuhs.org/pipermail/tuhs/attachments/20180629/92e2de56/attachment.html>

From lm at mcvoy.com  Sat Jun 30 11:31:28 2018
From: lm at mcvoy.com (Larry McVoy)
Date: Fri, 29 Jun 2018 18:31:28 -0700
Subject: [TUHS] Any Good dmr Anecdotes?
Message-ID: <20180630013128.GC18031@mcvoy.com>

I've told this story so much that my kids hear me start it and go "is that
the Unix guy?  Yeah, we've heard this".  And I think many of you have heard
it as well so you can hit delete, but for the newbies to the list here goes.

Decades ago I was a grad student at UWisc and pretty active on comp.arch
and comp.unix-wizards (I was not a wizard, but I've lived through, and
written up, the process of restoring a Masscomp after having done some
variant of rm -rf /, so more of a stupid wizard wannabe).

From time to time, some Unix kernel thing would come up and I'd email
...!research!dmr and ask him how that worked.  

He *always* replied.  To me, a nobody.  All he cared about was that the question
wasn't retarded (and I bet to him some of mine were but the questions showed
that I was thinking and that was good enough for him).  I remember a long
discussion about something, I think PIPEBUF but not sure, and at some point
he sent me his phone number and said "call me".  Email was too slow.

So yeah, one of the inventors of Unix was cool enough to take some young
nobody and educate him.  That's Dennis.

I've tried to pass some of that energy forward to my kids, telling them
that if you want to learn, smart people like that and they will help you.

Dennis was a humble man, a smart man, and a dude willing to pass on what
he knew.

I miss him and cherish the interactions I had with him.


From nobozo at gmail.com  Sat Jun 30 11:45:25 2018
From: nobozo at gmail.com (Jon Forrest)
Date: Fri, 29 Jun 2018 18:45:25 -0700
Subject: [TUHS] Any Good dmr Anecdotes?
In-Reply-To: <20180630013128.GC18031@mcvoy.com>
References: <20180630013128.GC18031@mcvoy.com>
Message-ID: <256ad6e7-ad2f-a56f-7873-d0d3a382dc06@gmail.com>


In the early 1980s my girlfriend (now my wife) and I were visiting
Bell Lab's Murray Hill research facility where a friend of mine
worked. My friend had nothing to do with Unix, but I asked him
to take us by the area where the Unix work was being done.

What my wife still remembers to this day is when we saw Dennis
Ritchie sitting at a terminal, with about 7 guys surrounding him.
They were all giggling loudly at something on the screen. My wife
couldn't understand what could be so funny to cause all those guys to
react that way.

Jon Forrest


From mike.ab3ap at gmail.com  Sat Jun 30 12:27:38 2018
From: mike.ab3ap at gmail.com (Mike Markowski)
Date: Fri, 29 Jun 2018 22:27:38 -0400
Subject: [TUHS] Masscomp oops [was: Any Good dmr Anecdotes?]
In-Reply-To: <20180630013128.GC18031@mcvoy.com>
References: <20180630013128.GC18031@mcvoy.com>
Message-ID: <353c4003-d61c-356f-2601-53ce03453c68@gmail.com>

On 06/29/2018 09:31 PM, Larry McVoy wrote:
> [...] (I was not a wizard but I've lived through, and
> written up, the process of restoring a Masscomp after having done some
> variant of rm -rf / so a stupid wizard wanna be?)

That reminds me of preparing a Masscomp to bring out in the field for 
radar data acquisition.  After some minor incident, I needed to run an 
fsck when a cute, slender gal came into the lab.  In my flustered state, 
hoping to start a witty conversation, I typed mkfs...  Unfortunately, no 
wizards around, let alone dmr, to save the day - which turned into a 
very late day.  On the plus side, I ended up marrying the girl.

Mike Markowski



From norman at oclsc.org  Sat Jun 30 21:15:07 2018
From: norman at oclsc.org (Norman Wilson)
Date: Sat, 30 Jun 2018 07:15:07 -0400
Subject: [TUHS] ATT Hardware
Message-ID: <1530357310.5184.for-standards-violators@oclsc.org>

Ron Natalie:

  My favorite 3B2ism was that the power switch was soft (uncommon then, not so
  much now).   I seem to recall that if the logged in user wasn't in a
  particular group, pushing the power button was a no-op.   You didn't have
  sufficient privs to operate the power.

====

Surely you mean the current user didn't have sufficient power.

Norman Wilson
Toronto ON


From arrigo at alchemistowl.org  Sat Jun 30 21:44:30 2018
From: arrigo at alchemistowl.org (Arrigo Triulzi)
Date: Sat, 30 Jun 2018 13:44:30 +0200
Subject: [TUHS] Any Good dmr Anecdotes?
In-Reply-To: <3386fb80b5282f7bca0ccf34252182c2398232c1@webmail.yaccman.com>
References: <3386fb80b5282f7bca0ccf34252182c2398232c1@webmail.yaccman.com>
Message-ID: <5D272962-0063-4D28-B551-F381D3D10239@alchemistowl.org>

My memories of dmr are limited to one encounter when he came to Italy, more precisely to the University of Milan, in the late 70s or early 80s (cannot remember exactly, there’s a picture in Peter Salus’ book though).

I was a child, and had been introduced to Lisp as part of an experiment in teaching primary school children, but my dad, at the time teaching robotics in the nascent “Cybernetics” group of the Physics department, was starting me on C. 

As I was told this visitor was the R in the “K&R” book, I felt I could finally ask “someone who knew” how printf() worked with a variable number of arguments. I was at best 10, and dmr patiently sat down and explained it to me in terms I could understand. I remember that he asked me if I understood pointers; I told him it was like putting a big arrow which you could move around, pointing to a house instead of actually using the house number, and he smiled, then took the explanation on from there.

I wish I could have met him again in my life to thank him for that time he dedicated to a child to demystify printf().

Arrigo 




