Conical inductors--still $10!...

On Thursday, 23 July 2020 at 20:34:25 UTC+2, John Larkin wrote:
On Thu, 23 Jul 2020 10:36:20 -0700 (PDT), Lasse Langwadt Christensen
<langwadt@fonz.dk> wrote:

On Thursday, 23 July 2020 at 19:06:48 UTC+2, John Larkin wrote:
On Thu, 23 Jul 2020 17:39:57 +0100, Tom Gardner
<spamjunk@blueyonder.co.uk> wrote:

On 23/07/20 16:13, jlarkin@highlandsniptechnology.com wrote:
On Thu, 23 Jul 2020 10:36:08 -0400, Phil Hobbs
<pcdhSpamMeSenseless@electrooptical.net> wrote:

On 2020-07-22 20:14, John Larkin wrote:

I actually designed a CPU with all TTL logic. It had three
instructions and a 20 kHz 4-phase clock. It was actually produced, for
a shipboard data logger. MACRO-11 had great macro tools, so we used
that to make a cross assembler.

When I was at Tulane, the EE department acquired a gigantic (basically
a room full) military surplus computer that used a drum memory for
program and data. The logic modules were big gold-plated hermetic cans
that plugged in. The programmer had to distribute the opcodes at
optimal angular positions on the spinning drum.

I have a book, IBM's Early Computers. In early days, nobody was
entirely sure what a computer was.


It's a fun book, and does a lot to deflate the Harvard spin, which is
always good.

The sequel on the 360 and early 370s is a good read too, as is "The
Mythical Man-Month" by Fred Brooks, who was in charge of OS/360, at the
time by far the largest programming project in the world. As he says,
"How does a software project go a year late? One day at a time."

Obligatory Real Programmer reference:

http://www.cs.utah.edu/~elb/folklore/mel.html

Cheers

Phil Hobbs

Burroughs programmed their computers in Algol. There was never any
other assembler or compiler. I was told that, after the Algol compiler
was written in Algol, two guys hand-compiled it to machine code,
working side-by-side and checking every opcode. That was the bootstrap
compiler.

Isn't our ancient and settled idea of what a computer is, and what an
OS and languages are, overdue for the next revolution?

The trick will be to get a revolution which starts from
where we are. There is no chance of completely throwing
out all that has been achieved until now, however appealing
that might be.

I know of two plausible starting points...

1) The Mill Processor, as described by Ivan Godard over
on comp.arch. This has many innovative techniques that,
in effect, bring DSP processor parallelism when executing
standard languages such as C. It appears that there's an
order of magnitude to be gained.

Incidentally, Godard's background is the Burroughs/Unisys
Algol machines, plus /much/ more.


2) xCORE processors are commercially available (unlike the
Mill). They start from presuming that embedded programs can
be highly parallel /iff/ the hardware and software allows
programmers to express it cleanly. They merge Hoare's CSP
with innovative hardware to /guarantee/ *hard* realtime
performance. In effect they have occupied a niche that is
halfway between conventional processors and FPGA.

I've used them, and they are *easy* and fun to use.
(Cf. C on a conventional processor!)

We don't need more compute power. We need reliability and user
friendliness.

Executing buggy C faster won't help. Historically, adding resources
(virtual memory, big DRAM, threads, more MIPS) makes things worse.

For Pete's sake, we still have buffer overrun exploits. We still have
image files with trojans. We still have malicious web pages.

a tool that can cut wood can cut your hand; the only way to totally prevent that
is to add safety features until it cannot cut anything anymore

Why not design a compute architecture that is fundamentally safe?
Instead of endlessly creating and patching bugs.

a saw that can't cut anything is fundamentally safe; it is also useless
 
On Thursday, July 23, 2020 at 11:34:25 AM UTC-7, John Larkin wrote:

Why not design a compute architecture that is fundamentally safe?
Instead of endlessly creating and patching bugs.

It has already been done: the abacus. The only problems that remain
are operator errors.

The flaws in computer architecture are only visible because the
computers are useful regardless. Naval architecture, on the other hand,
always has its flaws sink out of sight...
 
On 7/23/20 2:40 PM, Tom Gardner wrote:

Python, on the other hand, cannot make use of multicore
parallelism due to its global interpreter lock :)

For multicore use see
https://docs.python.org/3/library/multiprocessing.html
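
For readers who have not used it: what that module buys you is
process-level parallelism. A minimal, untested sketch (the function name
is just illustrative); each worker is a separate OS process with its own
interpreter, so the GIL is side-stepped for CPU-bound work:

    # stdlib only: fan a CPU-bound function out over one process per core
    from multiprocessing import Pool

    def crunch(n):                     # stand-in for real CPU-bound work
        return sum(i * i for i in range(n))

    if __name__ == "__main__":         # guard needed where "spawn" is the start method
        with Pool() as pool:           # defaults to one worker per CPU core
            results = pool.map(crunch, [10**6] * 8)
        print(results)

The price is that arguments and results are pickled and shipped between
processes, so it only pays off for fairly coarse-grained jobs.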
 
On 7/23/20 2:40 PM, Tom Gardner wrote:

Python, on the other hand, cannot make use of multicore
parallelism due to its global interpreter lock :)

For multicore use see
https://docs.python.org/3/library/multiprocessing.html
 
On 7/23/20 2:40 PM, Tom Gardner wrote:

Python, on the other hand, cannot make use of multicore
parallelism due to its global interpreter lock :)

For multicore use see
https://docs.python.org/3/library/multiprocessing.html
 
On 23/07/20 21:50, Dennis wrote:
On 7/23/20 2:40 PM, Tom Gardner wrote:


Python, on the other hand, cannot make use of multicore
parallelism due to its global interpreter lock :)


For multicore use see
https://docs.python.org/3/library/multiprocessing.html

How can I put this... Maybe an analogy (in the full
realisation that analogies are dangerously misleading)...

Just because I can run several compilation processes
(e.g. cc, ld) at the same time doesn't mean the cc compiler
or ld linker is meaningfully parallel.

That Python library is a thin veneer over the operating
system calls. It adds no parallelism that is not present
in the operating system; essentially it avoids all the
interesting problems and punts them to the operating system.
Hence it is only coarse-grained parallelism, and is not
sufficiently novel to be able to advance the ability to
create and control parallel computation.

In order to be interesting in this regard, I would want
to see either a much higher-level, very coarse-grained
abstraction (e.g. mapreduce), or finer-grained abstractions
as found in, say, CSP-derived languages/libraries, or Java,
or Erlang.
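
To make the coarse/fine distinction concrete, here is a rough, untested
Python sketch of a two-stage pipeline in which processes communicate over
queues, i.e. channels by another name. It has the *shape* of a CSP
decomposition, but each stage is still a heavyweight OS process; in a
CSP-derived language such as occam or xC (or in Erlang's actor model) the
same decomposition is cheap enough to use at a very fine grain:

    from multiprocessing import Process, Queue

    def producer(out_q):
        for i in range(5):
            out_q.put(i)
        out_q.put(None)                        # sentinel: end of stream

    def doubler(in_q, out_q):
        while (item := in_q.get()) is not None:
            out_q.put(item * 2)
        out_q.put(None)

    if __name__ == "__main__":
        a, b = Queue(), Queue()
        stages = [Process(target=producer, args=(a,)),
                  Process(target=doubler, args=(a, b))]
        for p in stages:
            p.start()
        while (result := b.get()) is not None:
            print(result)
        for p in stages:
            p.join()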
 
On 23/07/20 16:13, jlarkin@highlandsniptechnology.com wrote:
On Thu, 23 Jul 2020 10:36:08 -0400, Phil Hobbs
<pcdhSpamMeSenseless@electrooptical.net> wrote:

On 2020-07-22 20:14, John Larkin wrote:

I actually designed a CPU with all TTL logic. It had three
instructions and a 20 kHz 4-phase clock. It was actually produced, for
a shipboard data logger. MACRO-11 had great macro tools, so we used
that to make a cross assembler.

When I was at Tulane, the EE department acquired a gigantic (basically
a room full) military surplus computer that used a drum memory for
program and data. The logic modules were big gold-plated hermetic cans
that plugged in. The programmer had to distribute the opcodes at
optimal angular positions on the spinning drum.

I have a book, IBM's Early Computers. In early days, nobody was
entirely sure what a computer was.


It's a fun book, and does a lot to deflate the Harvard spin, which is
always good.

The sequel on the 360 and early 370s is a good read too, as is "The
Mythical Man-Month" by Fred Brooks, who was in charge of OS/360, at the
time by far the largest programming project in the world. As he says,
"How does a software project go a year late? One day at a time."

Obligatory Real Programmer reference:

http://www.cs.utah.edu/~elb/folklore/mel.html

Cheers

Phil Hobbs

Burroughs programmed their computers in Algol. There was never any
other assembler or compiler. I was told that, after the Algol compiler
was written in Algol, two guys hand-compiled it to machine code,
working side-by-side and checking every opcode. That was the bootstrap
compiler.

Isn't our ancient and settled idea of what a computer is, and what an
OS and languages are, overdue for the next revolution?

The trick will be to get a revolution which starts from
where we are. There is no chance of completely throwing
out all that has been achieved until now, however appealing
that might be.

I know of two plausible starting points...

1) The Mill Processor, as described by Ivan Godard over
on comp.arch. This has many innovative techniques that,
in effect, bring DSP processor parallelism when executing
standard languages such as C. It appears that there's an
order of magnitude to be gained.

Incidentally, Godard's background is the Burroughs/Unisys
Algol machines, plus /much/ more.


2) xCORE processors are commercially available (unlike the
Mill). They start from presuming that embedded programs can
be highly parallel /iff/ the hardware and software allows
programmers to express it cleanly. They merge Hoare's CSP
with innovative hardware to /guarantee/ *hard* realtime
performance. In effect they have occupied a niche that is
halfway between conventional processors and FPGA.

I've used them, and they are *easy* and fun to use.
(Cf. C on a conventional processor!)
 
On 23/07/20 16:30, pcdhobbs@gmail.com wrote:
Isn't our ancient and settled idea of what a computer is, and what an OS
and languages are, overdue for the next revolution?

In his other famous essay, "No Silver Bullet", Brooks points out that the
factors-of-10 productivity improvements of the early days were gained by
getting rid of extrinsic complexity--crude tools, limited hardware, and so
forth.

Now the issues are mostly intrinsic to an artifact built of thought. So apart
from more and more Python libraries, I doubt that there are a lot more orders
of magnitude available.
Not in a single processor (except perhaps the Mill).

But with multiple processors there can be significant
improvement - provided we are prepared to think in
different ways, and the tools support it.

Examples: mapreduce, or xC on xCORE processors.
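
As a toy illustration of the first of those, a map/reduce word count in
stock Python: the map phase runs in parallel worker processes, the
reduce phase merges the partial results. (A sketch only; a real
mapreduce framework also handles distribution, shuffling and fault
tolerance.)

    from multiprocessing import Pool
    from collections import Counter
    from functools import reduce

    def map_phase(chunk):              # count words in one chunk of text
        return Counter(chunk.split())

    if __name__ == "__main__":
        chunks = ["the quick brown fox", "the lazy dog", "the fox again"]
        with Pool() as pool:
            partials = pool.map(map_phase, chunks)
        totals = reduce(lambda a, b: a + b, partials, Counter())
        print(totals.most_common(3))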
 
On Thu, 23 Jul 2020 17:39:57 +0100, Tom Gardner
<spamjunk@blueyonder.co.uk> wrote:

On 23/07/20 16:13, jlarkin@highlandsniptechnology.com wrote:
On Thu, 23 Jul 2020 10:36:08 -0400, Phil Hobbs
<pcdhSpamMeSenseless@electrooptical.net> wrote:

On 2020-07-22 20:14, John Larkin wrote:

I actually designed a CPU with all TTL logic. It had three
instructions and a 20 kHz 4-phase clock. It was actually produced, for
a shipboard data logger. MACRO-11 had great macro tools, so we used
that to make a cross assembler.

When I was at Tulane, the EE department acquired a gigantic (basically
a room full) military surplus computer that used a drum memory for
program and data. The logic modules were big gold-plated hermetic cans
that plugged in. The programmer had to distribute the opcodes at
optimal angular positions on the spinning drum.

I have a book, IBM's Early Computers. In early days, nobody was
entirely sure what a computer was.


It's a fun book, and does a lot to deflate the Harvard spin, which is
always good.

The sequel on the 360 and early 370s is a good read too, as is "The
Mythical Man-Month" by Fred Brooks, who was in charge of OS/360, at the
time by far the largest programming project in the world. As he says,
"How does a software project go a year late? One day at a time."

Obligatory Real Programmer reference:

http://www.cs.utah.edu/~elb/folklore/mel.html

Cheers

Phil Hobbs

Burroughs programmed their computers in Algol. There was never any
other assembler or compiler. I was told that, after the Algol compiler
was written in Algol, two guys hand-compiled it to machine code,
working side-by-side and checking every opcode. That was the bootstrap
compiler.

Isn't our ancient and settled idea of what a computer is, and what an
OS and languages are, overdue for the next revolution?

The trick will be to get a revolution which starts from
where we are. There is no chance of completely throwing
out all that has been achieved until now, however appealing
that might be.

I know of two plausible starting points...

1) The Mill Processor, as described by Ivan Godard over
on comp.arch. This has many innovative techniques that,
in effect, bring DSP processor parallelism when executing
standard languages such as C. It appears that there's an
order of magnitude to be gained.

Incidentally, Godard's background is the Burroughs/Unisys
Algol machines, plus /much/ more.


2) xCORE processors are commercially available (unlike the
Mill). They start from presuming that embedded programs can
be highly parallel /iff/ the hardware and software allows
programmers to express it cleanly. They merge Hoare's CSP
with innovative hardware to /guarantee/ *hard* realtime
performance. In effect they have occupied a niche that is
halfway between conventional processors and FPGA.

I've used them, and they are *easy* and fun to use.
(Cf. C on a conventional processor!)

We don't need more compute power. We need reliability and user
friendliness.

Executing buggy C faster won't help. Historically, adding resources
(virtual memory, big DRAM, threads, more MIPS) makes things worse.

For Pete's sake, we still have buffer overrun exploits. We still have
image files with trojans. We still have malicious web pages.
 
On Thursday, 23 July 2020 at 19:06:48 UTC+2, John Larkin wrote:
On Thu, 23 Jul 2020 17:39:57 +0100, Tom Gardner
<spamjunk@blueyonder.co.uk> wrote:

On 23/07/20 16:13, jlarkin@highlandsniptechnology.com wrote:
On Thu, 23 Jul 2020 10:36:08 -0400, Phil Hobbs
<pcdhSpamMeSenseless@electrooptical.net> wrote:

On 2020-07-22 20:14, John Larkin wrote:

I actually designed a CPU with all TTL logic. It had three
instructions and a 20 kHz 4-phase clock. It was actually produced, for
a shipboard data logger. MACRO-11 had great macro tools, so we used
that to make a cross assembler.

When I was at Tulane, the EE department acquired a gigantic (basically
a room full) military surplus computer that used a drum memory for
program and data. The logic modules were big gold-plated hermetic cans
that plugged in. The programmer had to distribute the opcodes at
optimal angular positions on the spinning drum.

I have a book, IBM's Early Computers. In early days, nobody was
entirely sure what a computer was.


It's a fun book, and does a lot to deflate the Harvard spin, which is
always good.

The sequel on the 360 and early 370s is a good read too, as is "The
Mythical Man-Month" by Fred Brooks, who was in charge of OS/360, at the
time by far the largest programming project in the world. As he says,
"How does a software project go a year late? One day at a time."

Obligatory Real Programmer reference:

http://www.cs.utah.edu/~elb/folklore/mel.html

Cheers

Phil Hobbs

Burroughs programmed their computers in Algol. There was never any
other assembler or compiler. I was told that, after the Algol compiler
was written in Algol, two guys hand-compiled it to machine code,
working side-by-side and checking every opcode. That was the bootstrap
compiler.

Isn't our ancient and settled idea of what a computer is, and what an
OS and languages are, overdue for the next revolution?

The trick will be to get a revolution which starts from
where we are. There is no chance of completely throwing
out all that has been achieved until now, however appealing
that might be.

I know of two plausible starting points...

1) The Mill Processor, as described by Ivan Godard over
on comp.arch. This has many innovative techniques that,
in effect, bring DSP processor parallelism when executing
standard languages such as C. It appears that there's an
order of magnitude to be gained.

Incidentally, Godard's background is the Burroughs/Unisys
Algol machines, plus /much/ more.


2) xCORE processors are commercially available (unlike the
Mill). They start from presuming that embedded programs can
be highly parallel /iff/ the hardware and software allows
programmers to express it cleanly. They merge Hoare's CSP
with innovative hardware to /guarantee/ *hard* realtime
performance. In effect they have occupied a niche that is
halfway between conventional processors and FPGA.

I've used them, and they are *easy* and fun to use.
(Cf. C on a conventional processor!)

We don't need more compute power. We need reliability and user
friendliness.

Executing buggy C faster won't help. Historically, adding resources
(virtual memory, big DRAM, threads, more MIPS) makes things worse.

For Pete's sake, we still have buffer overrun exploits. We still have
image files with trojans. We still have malicious web pages.

a tool that can cut wood can cut your hand; the only way to totally prevent that
is to add safety features until it cannot cut anything anymore
 
