Conical inductors--still $10!...

On Tuesday, August 11, 2020 at 10:50:28 AM UTC-4, jla...@highlandsniptechnology.com wrote:
On Tue, 11 Aug 2020 10:02:32 +0100, Martin Brown
'''newspam'''@nonad.co.uk> wrote:

On 23/07/2020 19:34, John Larkin wrote:
On Thu, 23 Jul 2020 10:36:20 -0700 (PDT), Lasse Langwadt Christensen
langwadt@fonz.dk> wrote:

Thursday, 23 July 2020 at 19.06.48 UTC+2, John Larkin wrote:

We don't need more compute power. We need reliability and user
friendliness.

Executing buggy c faster won't help. Historically, adding resources
(virtual memory, big DRAM, threads, more MIPS) makes things worse.

For Pete's sake, we still have buffer overrun exploits. We still have
image files with trojans. We still have malicious web pages.

a tool that can cut wood can cut your hand; the only way to totally prevent that
is to add safety features until it cannot cut anything anymore

Why not design a compute architecture that is fundamentally safe?
Instead of endlessly creating and patching bugs.

It has been tried and it all ended in tears. Viper was supposed to be a
correct-by-design CPU, but it all ended in recrimination and litigation.

Humans make mistakes and the least bad solution is to design tools that
can find the most commonly made mistakes as rapidly as possible. Various
dataflow methods can catch a whole host of classic bugs before the code
is even run but industry seems reluctant to invest so we have the status
quo. C isn't a great language for proof of correctness but the languages
that tried to force good programmer behaviour have never made any
serious penetration into the commercial market. I know this to my cost
as I have in the past been involved with compilers.


No language will ever force good programmer behavior. No software can
ever prove that other software is correct, or even point at most of
the bugs.

Proper hardware protections can absolutely firewall a heap of bad
code. In fact, make it un-runnable.


Ship it and be damned software development culture persists and it
existed long before there were online updates over the internet.

If a piece of code violates the rules, it should be killed and never
allowed to run again. Software vendors would notice that pretty quick.

Online code updates should of course be disallowed by default. It's an
invitation to ship crap code now and assume it will be fixed some day.
And that the users will find the bugs and the black-hats will find the
vulnerabilities.

Why is there no legal liability for bad code?

Benjamin, I've got one word for you: EULA!

Many states have passed laws making the opening of shrink wrap the same as signing a contract, and likewise clicking a button on a web page means you agree to a contract you have never read.

I've seen web sites that have broken links for the "terms and conditions". The law should be written so that this makes them subject to charges of fraud.

Similar things are done at face-to-face contract signings. I had power of attorney for a friend once who was out of the country and selling a house. They handed me the contract to sign, which I read, then turned it over to find the back had the proverbial small print, but printed dark gray on a light gray background!!! I cried foul, but it didn't go far. The lady offered to read it to me.

WTF is wrong with people? Why would they want to pull crap like this?

--

Rick C.

--+ Get 1,000 miles of free Supercharging
--+ Tesla referral code - https://ts.la/richard11209
 
On Tuesday, August 11, 2020 at 10:54:08 AM UTC-4, jla...@highlandsniptechnology.com wrote:
On Tue, 11 Aug 2020 11:42:10 GMT, Jan Panteltje
pNaOnStPeAlMtje@yahoo.com> wrote:

On a sunny day (Tue, 11 Aug 2020 10:02:32 +0100) it happened Martin Brown
'''newspam'''@nonad.co.uk> wrote in <rgtmr9$60l$1@gioia.aioe.org>:

Humans make mistakes and the least bad solution is to design tools that
can find the most commonly made mistakes as rapidly as possible. Various
dataflow methods can catch a whole host of classic bugs before the code
is even run but industry seems reluctant to invest so we have the status
quo. C isn't a great language for proof of correctness but the languages
that tried to force good programmer behaviour have never made any
serious penetration into the commercial market. I know this to my cost
as I have in the past been involved with compilers.

Ship it and be damned software development culture persists and it
existed long before there were online updates over the internet.

I think it is not that hard to write code that simply works and does what it needs to do.
The problem I see is that many people who write code do not seem to understand
that there are 3 requirements:

0) you need to understand the hardware your code runs on.

That's impossible. Not even Intel understands Intel processors, and
they keep a lot secret too.

1) you need to know how to code and the various coding systems used.

There are not enough people who can do that.

2) you need to know 100% about what you are coding for.

Generally impossible too.

This is why all the really smart people are in software.

--

Rick C.

-+- Get 1,000 miles of free Supercharging
-+- Tesla referral code - https://ts.la/richard11209
 
On Wednesday, August 12, 2020 at 12:54:08 AM UTC+10, jla...@highlandsniptechnology.com wrote:
On Tue, 11 Aug 2020 11:42:10 GMT, Jan Panteltje
pNaOnSt...@yahoo.com> wrote:

On a sunny day (Tue, 11 Aug 2020 10:02:32 +0100) it happened Martin Brown
'''newspam'''@nonad.co.uk> wrote in <rgtmr9$60l$1...@gioia.aioe.org>:

Humans make mistakes and the least bad solution is to design tools that
can find the most commonly made mistakes as rapidly as possible. Various
dataflow methods can catch a whole host of classic bugs before the code
is even run but industry seems reluctant to invest so we have the status
quo. C isn't a great language for proof of correctness but the languages
that tried to force good programmer behaviour have never made any
serious penetration into the commercial market. I know this to my cost
as I have in the past been involved with compilers.

Ship it and be damned software development culture persists and it
existed long before there were online updates over the internet.

I think it is not that hard to write code that simply works and does what it needs to do.
The problem I see is that many people who write code do not seem to understand
that there are 3 requirements:

0) you need to understand the hardware your code runs on.

That's impossible. Not even Intel understands Intel processors, and
they keep a lot secret too.

It's not impossible, but it may limit your choice of hardware.

1) you need to know how to code and the various coding systems used.

There are not enough people who can do that.

Then we\'d better make the coding systems more transparent, and work out how to train more people to a higher level.

2) you need to know 100% about what you are coding for.

Generally impossible too.

You need to get a lot closer to 100% than the people who want to get the job done usually seem to imagine.
Waving your hands in the air and declaring it impossible isn't a constructive approach.

--
Bill Sloman, Sydney
 
On a sunny day (Tue, 11 Aug 2020 14:00:18 +0100) it happened Martin Brown
<'''newspam'''@nonad.co.uk> wrote in <rgu4p4$1usg$1@gioia.aioe.org>:

On 11/08/2020 12:42, Jan Panteltje wrote:
On a sunny day (Tue, 11 Aug 2020 10:02:32 +0100) it happened Martin Brown
'''newspam'''@nonad.co.uk> wrote in <rgtmr9$60l$1@gioia.aioe.org>:

Humans make mistakes and the least bad solution is to design tools that
can find the most commonly made mistakes as rapidly as possible. Various
dataflow methods can catch a whole host of classic bugs before the code
is even run but industry seems reluctant to invest so we have the status
quo. C isn't a great language for proof of correctness but the languages
that tried to force good programmer behaviour have never made any
serious penetration into the commercial market. I know this to my cost
as I have in the past been involved with compilers.

Ship it and be damned software development culture persists and it
existed long before there were online updates over the internet.

I think it is not that hard to write code that simply works and does what it needs to do.

Although I tend to agree with you, I think part of the problem is that
the people who are any good at it discover pretty early on that, for
typical university-scale projects, they can hack it out from the solid in
the last week before the assignment is due to be handed in.

This method does not scale well to large scale software projects.

The problem I see is that many people who write code do not seem to understand
that there are 3 requirements:

0) you need to understand the hardware your code runs on.
1) you need to know how to code and the various coding systems used.
2) you need to know 100% about what you are coding for.

What I see in the world of bloat we live in is
0) no clue
1) 1 week tinkering with C++ or snake languages.
2) Huh? that is easy ..

Although I have an interest in computer architecture, I would say that
today 0) is almost completely irrelevant to most programming problems
(unless it is on a massively parallel or Harvard-architecture CPU).

Teaching of algorithms and complexity is where things have gone awry.
Programmers should not be reinventing the square (or, if you are very
lucky, hexagonal) wheel every time; they should know about round wheels and
where to find them. Knuth was on the right path but events overtook him.

And then blame everything on the languages and compilers if it goes wrong.

Compilers have improved a long way since the early days but they could
do a lot more to prevent compile time detectable errors being allowed
through into production code. Such tools are only present in the high
end compilers rather than the ones that students use at university.

And then there are hackers, and NO system is 100% secure.

Again you can automate some of the most likely hacker tests and see if
you can break things that way. They are not called script kiddies for
nothing. Regression testing is powerful for preventing bugs from
reappearing in a large codebase.
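
(A minimal sketch of the kind of regression check described above; the function, its old off-by-one bug, and the test values are invented for illustration. Once a failing case is found and fixed, it gets pinned down with an assert so it cannot silently come back.)

/* regress.c - minimal regression check (hypothetical example).
   Once a bug is fixed, the failing input is added here so the old
   behaviour cannot silently reappear in a later build. */
#include <assert.h>
#include <stdio.h>

/* Function that once had an off-by-one bug: it should count the
   number of set bits in x. */
static int popcount8(unsigned char x)
{
    int n = 0;
    for (int i = 0; i < 8; i++)   /* the old bug stopped at bit 6 */
        n += (x >> i) & 1;
    return n;
}

int main(void)
{
    /* Ordinary cases. */
    assert(popcount8(0x00) == 0);
    assert(popcount8(0x0F) == 4);

    /* The cases that used to fail: the top bit was missed. */
    assert(popcount8(0x80) == 1);
    assert(popcount8(0xFF) == 8);

    puts("regression tests passed");
    return 0;
}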

Some open source code I wrote and published has run for 20 years without problems.
I know it can be hacked...

I once ported a big mainframe package onto a Z80 for a bet. It needed an
ice pack for my head and a lot of overlays. It was code that we had
ported to everything from a Cray X-MP downwards. We always learned
something new from every port. The Cyber-76 was entertaining because our
unstated assumption of IBM FORTRAN 32 bit and 64 bit reals was violated.

The Z80 implementation of Fortran was rather less forgiving than the
mainframes (being a strict interpretation of the Fortran IV standard)

We will see ever more bloat as cluelessness is built upon cluelessness;
the problem here is that industry / capitalism likes that.
Sell more bloat, sell more hardware, obsolete things ever faster,
keep spitting out new standards ever faster,

I do think that software has become ever more over-complicated in an
attempt to make it more user-friendly. OTOH we now have almost fully
working voice communication with the likes of Alexa, aka she who must not
be named (or she lights up and tries to interpret your commands).
(And there are no teleprinter noises off, a la the original Star Trek.)

I agree, and funny you mention the Z80: I got an email from somebody a few months ago saying they are using my dz80 disassembler:
http://panteltje.com/panteltje/z80/index.html
I have had more mail about that in the past.
And this newsreader is basically from 1998 (with some new stuff added later, when they attempted to make Usenet HTML,
but that HTML did not catch on):
http://panteltje.com/panteltje/newsflex/index.html

Some webcam software I wrote is used for what you just described.
About 0):
I programmed a lot of PICs, in asm, close to the hardware.
In a very small code space you get near-zero boot time, near-zero power consumption and very high speed.
Try real-time video processing on one of those <X GHz Y cores>:
https://www.youtube.com/watch?v=xS_K4caj7vc

So the hardware it runs on is extremely important!
The Java way pretends it is not. Today I had to work my way through 2 bank sites,
and it always gets your adrenaline going, the logins are so slow and weird: is my browser still good enough?
And to run the latest browser you need to update the OS, and the OS wants better hardware...

As to compiler warnings in gcc, I always use -Wall, but when porting code from x86 to ARM I found gcc gives mysterious warnings.
One I have not been able to resolve; I googled for that warning, others had it too,
and finally I left it that way.
For the rest, most of what I have written compiles cleanly, unlike the endless warning listings I have seen from others.
Compilers are not perfect; I like asm because there is no misunderstanding about what I want to do.
And really, asm is NOT harder but in fact much simpler than C++.
C++ is, in my view, a crime against humanity.
To look at computing as objects is basically wrong.
It gets worse with operator overloading and does not stop there.
And indeed the compiler writers hardly agree on what is what.
C is simple: you can define structures and call those objects if you are so inclined,
and you can specify the bit width and basically everything.
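
(For instance, a small sketch of what specifying bit widths looks like in plain C; the "status register" layout here is invented purely for illustration.)

/* bitfields.c - plain C structure with explicit bit widths.
   The status-register layout is invented for illustration only. */
#include <stdint.h>
#include <stdio.h>

struct status_reg {
    uint8_t ready    : 1;   /* 1 bit  */
    uint8_t error    : 1;   /* 1 bit  */
    uint8_t channel  : 3;   /* 3 bits, values 0..7 */
    uint8_t reserved : 3;   /* padding up to 8 bits */
};

int main(void)
{
    struct status_reg r = { .ready = 1, .error = 0, .channel = 5 };

    printf("ready=%d error=%d channel=%d size=%zu byte(s)\n",
           r.ready, r.error, r.channel, sizeof r);
    return 0;
}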
Anyway, I am getting carried away.
Would it not be nice if those newbie programmers started with some embedded asm, just to know
what is happening under the hood, so to speak?
 
Tuesday, 11 August 2020 at 16.50.28 UTC+2, jla...@highlandsniptechnology.com wrote:
On Tue, 11 Aug 2020 10:02:32 +0100, Martin Brown
'''newspam'''@nonad.co.uk> wrote:

On 23/07/2020 19:34, John Larkin wrote:
On Thu, 23 Jul 2020 10:36:20 -0700 (PDT), Lasse Langwadt Christensen
langwadt@fonz.dk> wrote:

Thursday, 23 July 2020 at 19.06.48 UTC+2, John Larkin wrote:

We don\'t need more compute power. We need reliability and user
friendliness.

Executing buggy c faster won\'t help. Historically, adding resources
(virtual memory, big DRAM, threads, more MIPS) makes things worse.

For Pete\'s sake, we still have buffer overrun exploits. We still have
image files with trojans. We still have malicious web pages.

a tool that can cut wood can cut your hand, only way totally prevent that
is to add safety features until it cannot cut anything anymore

Why not design a compute architecture that is fundamentally safe?
Instead of endlessly creating and patching bugs.

It has been tried and it all ended in tears. Viper was supposed to be
correct by design CPU but it all ended in recrimination and litigation.

Humans make mistakes and the least bad solution is to design tools that
can find the most commonly made mistakes as rapidly as possible. Various
dataflow methods can catch a whole host of classic bugs before the code
is even run but industry seems reluctant to invest so we have the status
quo. C isn\'t a great language for proof of correctness but the languages
that tried to force good programmer behaviour have never made any
serious penetration into the commercial market. I know this to my cost
as I have in the past been involved with compilers.


No language will ever force good programmer behavior. No software can
ever prove that other software is correct, or even point at most of
the bugs.

Proper hardware protections can absolutely firewall a heap of bad
code. In fact, make it un-runnable.

what's the definition of "bad code"?

Ship it and be damned software development culture persists and it
existed long before there were online updates over the internet.

If a piece of code violates the rules, it should be killed and never
allowed to run again. Software vendors would notice that pretty quick.

what are the rules?
 
>This is why all the really smart people are in software.

That's your best deadpan line to date. Keep 'em coming!

Cheers

Phil Hobbs
 
On Tue, 11 Aug 2020 08:46:38 -0700 (PDT), Lasse Langwadt Christensen
<langwadt@fonz.dk> wrote:

Tuesday, 11 August 2020 at 16.50.28 UTC+2, jla...@highlandsniptechnology.com wrote:
On Tue, 11 Aug 2020 10:02:32 +0100, Martin Brown
'''newspam'''@nonad.co.uk> wrote:

On 23/07/2020 19:34, John Larkin wrote:
On Thu, 23 Jul 2020 10:36:20 -0700 (PDT), Lasse Langwadt Christensen
langwadt@fonz.dk> wrote:

Thursday, 23 July 2020 at 19.06.48 UTC+2, John Larkin wrote:

We don\'t need more compute power. We need reliability and user
friendliness.

Executing buggy c faster won\'t help. Historically, adding resources
(virtual memory, big DRAM, threads, more MIPS) makes things worse.

For Pete\'s sake, we still have buffer overrun exploits. We still have
image files with trojans. We still have malicious web pages.

a tool that can cut wood can cut your hand, only way totally prevent that
is to add safety features until it cannot cut anything anymore

Why not design a compute architecture that is fundamentally safe?
Instead of endlessly creating and patching bugs.

It has been tried and it all ended in tears. Viper was supposed to be
correct by design CPU but it all ended in recrimination and litigation.

Humans make mistakes and the least bad solution is to design tools that
can find the most commonly made mistakes as rapidly as possible. Various
dataflow methods can catch a whole host of classic bugs before the code
is even run but industry seems reluctant to invest so we have the status
quo. C isn\'t a great language for proof of correctness but the languages
that tried to force good programmer behaviour have never made any
serious penetration into the commercial market. I know this to my cost
as I have in the past been involved with compilers.


No language will ever force good programmer behavior. No software can
ever prove that other software is correct, or even point at most of
the bugs.

Proper hardware protections can absolutely firewall a heap of bad
code. In fact, make it un-runnable.

what's the definition of "bad code"?

Code that can contain or allow viruses, trojans, spyware, or
ransomware, or can modify the OS, or use excess resources. That should
be obvious.

A less severe class of "bad" is code that doesn't perform its intended
function properly, or crashes. If that annoys people, they can stop
using it.

Ship it and be damned software development culture persists and it
existed long before there were online updates over the internet.

If a piece of code violates the rules, it should be killed and never
allowed to run again. Software vendors would notice that pretty quick.


what are the rules?

Don't access outside your assigned memory map. Don't execute anything
but what's in read-only code space. Don't overflow stacks or buffers.
Don't access any system resources that you are not specifically
assigned access to (which includes devices and IP addresses.) Don't
modify drivers or the OS. The penalty for violation is instant death.
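
(Much of that first rule is what the MMU already enforces on any modern OS. A minimal POSIX sketch, not from the thread itself, of the hardware catching an out-of-bounds write and the OS killing the process:)

/* protect.c - sketch of hardware memory protection in action (POSIX).
   The page is mapped read-only; the write below triggers a fault that
   the OS turns into SIGSEGV, killing the process on the spot. */
#include <stdio.h>
#include <sys/mman.h>

int main(void)
{
    /* Ask the OS for one read-only page. */
    char *p = mmap(NULL, 4096, PROT_READ,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED) {
        perror("mmap");
        return 1;
    }

    printf("about to violate the rules...\n");
    fflush(stdout);

    p[0] = 42;              /* hardware traps this write: instant death */

    printf("never reached\n");
    return 0;
}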

Let's get rid of virtual memory too.

Some of those rules just make programmers pay more attention, which is
nice but not critical. What really matters is that the hardware and OS
detect violations and kill the offending process.

Hardware designers usually get things right, which is why FPGAs seldom
have bugs but procedural code is littered with errors. Programmers
can't control states, if they understand the concept at all.

Most of the protections we need here were common in 1975. Microsoft
and Intel weren't paying attention, and a culture of sloppiness and
tolerance of hazard resulted.



--

John Larkin Highland Technology, Inc

Science teaches us to doubt.

Claude Bernard
 
On Thu, 23 Jul 2020 20:40:42 +0100, Tom Gardner
<spamjunk@blueyonder.co.uk> wrote:

On 23/07/20 18:06, John Larkin wrote:
On Thu, 23 Jul 2020 17:39:57 +0100, Tom Gardner
spamjunk@blueyonder.co.uk> wrote:

On 23/07/20 16:13, jlarkin@highlandsniptechnology.com wrote:
On Thu, 23 Jul 2020 10:36:08 -0400, Phil Hobbs
pcdhSpamMeSenseless@electrooptical.net> wrote:

On 2020-07-22 20:14, John Larkin wrote:

I actually designed a CPU with all TTL logic. It had three
instructions and a 20 KHz 4-phase clock. It was actually produced, for
a shipboard data logger. MACRO-11 had great macro tools, so we used
that to make a cross assembler.

When I was at Tulane, the EE department acquired a gigantic (basically
a room full) military surplus computer that used a drum memory for
program and data. The logic modules were big gold-plated hermetic cans
that plugged in. The programmer had to distribute the opcodes at
optimal angular positions on the spinning drum.

I have a book, IBM's Early Computers. In early days, nobody was
entirely sure what a computer was.


It's a fun book, and does a lot to deflate the Harvard spin, which is
always good.

The sequel on the 360 and early 370s is a good read too, as is "The
Mythical Man Month" by Fred Brooks, who was in charge of OS/360, at the
time by far the largest programming project in the world. As he says,
"How does a software project go a year late? One day at a time."

Obligatory Real Programmer reference:

http://www.cs.utah.edu/~elb/folklore/mel.html

Cheers

Phil Hobbs

Burroughs programmed their computers in Algol. There was never any
other assembler or compiler. I was told that, after the Algol compiler
was written in Algol, two guys hand-compiled it to machine code,
working side-by-side and checking every opcode. That was the bootstrap
compiler.

Isn't our ancient and settled idea of what a computer is, and what an
OS and languages are, overdue for the next revolution?

The trick will be to get a revolution which starts from
where we are. There is no chance of completely throwing
out all that has been achieved until now, however appealing
that might be.

I know of two plausible starting points...

1) The Mill Processor, as described by Ivan Godard over
on comp.arch. This has many innovative techniques that,
in effect, bring DSP processor parallelism when executing
standard languages such as C. It appears that there's an
order of magnitude to be gained.

Incidentally, Godard's background is the Burroughs/Unisys
Algol machines, plus /much/ more.


2) xCORE processors are commercially available (unlike the
Mill). They start from presuming that embedded programs can
be highly parallel /iff/ the hardware and software allows
programmers to express it cleanly. They merge Hoare\'s CSP
with innovative hardware to /guarantee/ *hard* realtime
performance. In effect they have occupied a niche that is
halfway between conventional processors and FPGA.

I've used them, and they are *easy* and fun to use.
(Cf C on a conventional processor!)

We don't need more compute power. We need reliability and user
friendliness.

Executing buggy c faster won't help. Historically, adding resources
(virtual memory, big DRAM, threads, more MIPS) makes things worse.

For Pete's sake, we still have buffer overrun exploits. We still have
image files with trojans. We still have malicious web pages.

Yes indeed. C and C++ are an *appalling*[1] starting point!

Absolutely.

But better alternatives are appearing...

Wasting some execution speed on a pseudocode approach is worthwhile.
The x86 runtime can be made more reliable than random machine code
compiler applications could ever be... until we get rid of x86. That
would be a Python-like language (which is, in appearance and
implementation, awfully similar to DEC's Basic-Plus, which was
bulletproof.)

xC has some C syntax but removes the dangerous bits and
adds parallel constructs based on CSP; effectively the
hard real time RTOS is in the language and the xCORE
processor hardware.

Rust is gaining ground; although Torvalds hates and
prohibits C++ in the Linux kernel, he has hinted he won't
oppose seeing Rust in the Linux kernel.

Go is gaining ground at the application and server level;
it too has CSP constructs to enable parallelism.

Python, on the other hand, cannot make use of multicore
parallelism due to its global interpreter lock :)

[1] cue comments from David Brown ;}

--

John Larkin Highland Technology, Inc

Science teaches us to doubt.

Claude Bernard
 
On Tue, 11 Aug 2020 08:51:44 -0700 (PDT), pcdhobbs@gmail.com wrote:

This is why all the really smart people are in software.

That's your best deadpan line to date. Keep 'em coming!

Cheers

Phil Hobbs

“Anybody who can go down 3000 feet in a mine can sure as hell learn to
program as well... Anybody who can throw coal into a furnace can learn
how to program, for God’s sake!”

All too funny.



--

John Larkin Highland Technology, Inc

Science teaches us to doubt.

Claude Bernard
 
On 11/08/20 17:18, jlarkin@highlandsniptechnology.com wrote:
On Thu, 23 Jul 2020 20:40:42 +0100, Tom Gardner
spamjunk@blueyonder.co.uk> wrote:

On 23/07/20 18:06, John Larkin wrote:
On Thu, 23 Jul 2020 17:39:57 +0100, Tom Gardner
spamjunk@blueyonder.co.uk> wrote:

On 23/07/20 16:13, jlarkin@highlandsniptechnology.com wrote:
On Thu, 23 Jul 2020 10:36:08 -0400, Phil Hobbs
pcdhSpamMeSenseless@electrooptical.net> wrote:

On 2020-07-22 20:14, John Larkin wrote:

I actually designed a CPU with all TTL logic. It had three
instructions and a 20 KHz 4-phase clock. It was actually produced, for
a shipboard data logger. MACRO-11 had great macro tools, so we used
that to make a cross assembler.

When I was at Tulane, the EE department acquired a gigantic (basically
a room full) military surplus computer that used a drum memory for
program and data. The logic modules were big gold-plated hermetic cans
that plugged in. The programmer had to distribute the opcodes at
optimal angular positions on the spinning drum.

I have a book, IBM\'s Early Computers. In early days, nobody was
entirely sure what a computer was.


It\'s a fun book, and does a lot to deflate the Harvard spin, which is
always good.

The sequel on the 360 and early 370s is a good read too, as is \"The
Mythical Man Month\" by Fred Brooks, who was in charge of OS/360, at the
time by far the largest programming project in the world. As he says,
\"How does a software project go a year late? One day at a time.\"

Obligatory Real Programmer reference:

http://www.cs.utah.edu/~elb/folklore/mel.html

Cheers

Phil Hobbs

Burroughs programmed their computers in Algol. There was never any
other assembler or compiler. I was told that, after the Algol compiler
was written in Algol, two guys hand-compiled it to machine code,
working side-by-side and checking every opcode. That was the bootstrap
compiler.

Isn\'t our ancient and settled idea of what a computer is, and what an
OS and languages are, overdue for the next revolution?

The trick will be to get a revolution which starts from
where we are. There is no chance of completely throwing
out all that has been achieved until now, however appealing
that might be.

I know of two plausible starting points...

1) The Mill Processor, as described by Ivan Godard over
on comp.arch. This has many innovative techniques that,
in effect, bring DSP processor parallelism when executing
standard languages such as C. It appears that there\'s an
order of magnitude to be gained.

Incidentally, Godard\'s background is the Burroughs/Unisys
Algol machines, plus /much/ more.


2) xCORE processors are commercially available (unlike the
Mill). They start from presuming that embedded programs can
be highly parallel /iff/ the hardware and software allows
programmers to express it cleanly. They merge Hoare\'s CSP
with innovative hardware to /guarantee/ *hard* realtime
performance. In effect they have occupied a niche that is
halfway between conventional processors and FPGA.

I\'ve used them, and they are *easy* and fun to use.
(Cf C on a conventional processor!)

We don\'t need more compute power. We need reliability and user
friendliness.

Executing buggy c faster won\'t help. Historically, adding resources
(virtual memory, big DRAM, threads, more MIPS) makes things worse.

For Pete\'s sake, we still have buffer overrun exploits. We still have
image files with trojans. We still have malicious web pages.

Yes indeed. C and C++ are an *appalling*[1] starting point!

Absolutely.


But better alternatives are appearing...

Wasting some execution speed on a pseudocode approach is worthwhile.
The x86 runtime can be made more reliable than random machine code
compiler applications could ever be... until we get rid of x86.

In that case it would be unnecessary and impossible for Intel
to change the operation of their processors after the processors
are operational and installed on boards in customer
premises.

Intel does just that; it is the key to their being able to
(partially) contain and mitigate the recent security flaws.
Fundamentally the x86 ISA is unchanged, but the implementation
of the ISA is changed.


That
would be a Python-like language (which is, in appearance and
implementation, awfully similar to Dec\'s Basic-Plus, which was
bulletproof.)

Er. No, those systems most definitely weren't bulletproof.

Some of my friends comprehensively owned the university PDP11
in the late 70s. The university sysadmins were never able to
nail the perps, nor were they able to regain control.

The perps used many techniques, including replacing the
system monitoring and control programs with their own
doctored versions. Naturally part of the doctoring was
to prevent the programs from being able to detect they
had been doctored, nor to detect the extra programs that
were always running.

Another friend also managed to subvert a DEC VAX a few
years later.

Fundamentally, if you have physical access to a machine,
it is game over!


xC has some C syntax but removes the dangerous bits and
adds parallel constructs based on CSP; effectively the
hard real time RTOS is in the language and the xCORE
processor hardware.

Rust is gaining ground; although Torvalds hates and
prohibits C++ in the Linux kernel, he has hinted he won\'t
oppose seeing Rust in the Linux kernel.

Go is gaining ground at the application and server level;
it too has CSP constructs to enable parallelism.

Python, on the other hand, cannot make use of multicore
parallelism due to its global interpreter lock :)

[1] cue comments from David Brown ;}
 
On 11/08/2020 15:50, jlarkin@highlandsniptechnology.com wrote:
On Tue, 11 Aug 2020 10:02:32 +0100, Martin Brown
'''newspam'''@nonad.co.uk> wrote:

On 23/07/2020 19:34, John Larkin wrote:
On Thu, 23 Jul 2020 10:36:20 -0700 (PDT), Lasse Langwadt Christensen
langwadt@fonz.dk> wrote:

Thursday, 23 July 2020 at 19.06.48 UTC+2, John Larkin wrote:

We don\'t need more compute power. We need reliability and user
friendliness.

Executing buggy c faster won\'t help. Historically, adding resources
(virtual memory, big DRAM, threads, more MIPS) makes things worse.

For Pete\'s sake, we still have buffer overrun exploits. We still have
image files with trojans. We still have malicious web pages.

a tool that can cut wood can cut your hand, only way totally prevent that
is to add safety features until it cannot cut anything anymore

Why not design a compute architecture that is fundamentally safe?
Instead of endlessly creating and patching bugs.

It has been tried and it all ended in tears. Viper was supposed to be
correct by design CPU but it all ended in recrimination and litigation.

Humans make mistakes and the least bad solution is to design tools that
can find the most commonly made mistakes as rapidly as possible. Various
dataflow methods can catch a whole host of classic bugs before the code
is even run but industry seems reluctant to invest so we have the status
quo. C isn\'t a great language for proof of correctness but the languages
that tried to force good programmer behaviour have never made any
serious penetration into the commercial market. I know this to my cost
as I have in the past been involved with compilers.

No language will ever force good programmer behavior. No software can
ever prove that other software is correct, or even point at most of
the bugs.

There are several examples of such. Z and VDM are amongst the foremost
specification/proof languages that can do it if used correctly. The snag
is that it takes a very skilled, trained mathematician to use them and they
are in short supply. We need to deskill programming to a point where the
machine takes on some of the grunt work that catches people out.

Even wizards make the occasional fence post error; it is just that we
*expect* to make them sometimes and test for any rough edges.

Proper hardware protections can absolutely firewall a heap of bad
code. In fact, make it un-runnable.

It just makes it so that when it dies horribly it doesn't take anything
else with it. OS/2 had pretty effective user-mode segmentation defences;
Windows effectively dismantled them and did a VHS vs Betamax takeover.

Ship it and be damned software development culture persists and it
existed long before there were online updates over the internet.

If a piece of code violates the rules, it should be killed and never
allowed to run again. Software vendors would notice that pretty quick.

Sometimes you need to break the rules to get things done. Mickeysoft in
its infinite wisdom withdrew the ability to use 80 bit reals in C after
v6.0. I used to keep a copy on hand for awkward numerical work. I have
now made a small library of routines that will subvert the CPU into the
80 bit mode when I really need that extra precision. Unfortunately it
means I have to look at the optimiser output and sometimes hand tweak it
so that all crucial intermediate results stay on the stack.
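
(Not Martin's library, just for reference: with gcc or clang targeting x86 the same 80-bit x87 format is still reachable from standard C as long double, as this small sketch shows. On targets where long double is only 64 bits it degenerates to plain double.)

/* extended.c - reaching the x87 80-bit format from standard C on x86
   with gcc/clang, where long double maps to the extended type. */
#include <float.h>
#include <stdio.h>

int main(void)
{
    double      d  = 1.0;
    long double ld = 1.0L;

    /* Machine epsilon shrinks from ~2.2e-16 to ~1.1e-19. */
    printf("double      digits=%2d  epsilon=%Lg\n", DBL_DIG,  (long double)DBL_EPSILON);
    printf("long double digits=%2d  epsilon=%Lg\n", LDBL_DIG, LDBL_EPSILON);

    /* A sum that loses the small term in double but keeps it in 80 bits. */
    d  += 1e-18;
    ld += 1e-18L;
    printf("1 + 1e-18 as double:      %.21g\n", d);
    printf("1 + 1e-18 as long double: %.21Lg\n", ld);
    return 0;
}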

One interesting observation on IEEE FP, based on my recent work: 2^x-1
and y*log2(1+x) are implemented very nicely, and cos(x) is OK because
the identity cos(x) = 1 - 2*sin(x/2)^2 allows cos(x)-1 to be computed
reliably to full numerical precision.

No such easily computed closed expression is available for x-sin(x), a
functional form that appears in several important physics problems. Most
practitioners are forced to roll their own and some inevitably get it
wrong. You have to add the terms together from smallest to largest to
avoid cumulative rounding errors from distorting the result.
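
(A rough sketch of the roll-your-own approach described here, one way to do it rather than Martin's code: for small x, sum the Taylor series x - sin(x) = x^3/3! - x^5/5! + x^7/7! - ... from the smallest term upward instead of subtracting sin(x) from x directly, which cancels catastrophically.)

/* xminussin.c - x - sin(x) for small x without catastrophic cancellation.
   Sum the Taylor series x^3/3! - x^5/5! + x^7/7! - ...
   accumulating from the smallest term up to the largest. */
#include <math.h>
#include <stdio.h>

static double x_minus_sin(double x)
{
    if (fabs(x) > 0.5)                 /* large x: direct form is fine */
        return x - sin(x);

    double term[20];
    int    n = 0;
    double t = x * x * x / 6.0;        /* x^3/3! */

    /* Generate terms until they stop mattering. */
    for (int k = 1; n < 20 && fabs(t) > 1e-40; k++) {
        term[n++] = t;
        t *= -x * x / ((2.0 * k + 2.0) * (2.0 * k + 3.0));
    }

    /* Add smallest to largest so rounding errors do not accumulate. */
    double sum = 0.0;
    for (int i = n - 1; i >= 0; i--)
        sum += term[i];
    return sum;
}

int main(void)
{
    double x = 1e-3;
    printf("naive : %.17g\n", x - sin(x));     /* loses ~7 digits to cancellation */
    printf("series: %.17g\n", x_minus_sin(x)); /* full precision */
    return 0;
}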

Online code updates should of course be disallowed by default. It's an
invitation to ship crap code now and assume it will be fixed some day.
And that the users will find the bugs and the black-hats will find the
vulnerabilities.

It was ever thus. All that has changed is the frequency and gulp size of
updates - some really hurt now when you are on a wet string connection.
Why is there no legal liability for bad code?

Probably because there is so much of it about and many US lobbyists.

--
Regards,
Martin Brown
 
On Tue, 11 Aug 2020 18:00:09 +0100, Martin Brown
<'''newspam'''@nonad.co.uk> wrote:

On 11/08/2020 15:50, jlarkin@highlandsniptechnology.com wrote:
On Tue, 11 Aug 2020 10:02:32 +0100, Martin Brown
\'\'\'newspam\'\'\'@nonad.co.uk> wrote:

On 23/07/2020 19:34, John Larkin wrote:
On Thu, 23 Jul 2020 10:36:20 -0700 (PDT), Lasse Langwadt Christensen
langwadt@fonz.dk> wrote:

Thursday, 23 July 2020 at 19.06.48 UTC+2, John Larkin wrote:

We don\'t need more compute power. We need reliability and user
friendliness.

Executing buggy c faster won\'t help. Historically, adding resources
(virtual memory, big DRAM, threads, more MIPS) makes things worse.

For Pete\'s sake, we still have buffer overrun exploits. We still have
image files with trojans. We still have malicious web pages.

a tool that can cut wood can cut your hand, only way totally prevent that
is to add safety features until it cannot cut anything anymore

Why not design a compute architecture that is fundamentally safe?
Instead of endlessly creating and patching bugs.

It has been tried and it all ended in tears. Viper was supposed to be
correct by design CPU but it all ended in recrimination and litigation.

Humans make mistakes and the least bad solution is to design tools that
can find the most commonly made mistakes as rapidly as possible. Various
dataflow methods can catch a whole host of classic bugs before the code
is even run but industry seems reluctant to invest so we have the status
quo. C isn\'t a great language for proof of correctness but the languages
that tried to force good programmer behaviour have never made any
serious penetration into the commercial market. I know this to my cost
as I have in the past been involved with compilers.

No language will ever force good programmer behavior. No software can
ever prove that other software is correct, or even point at most of
the bugs.

There are several examples of such. Z and VDM are amongst the foremost
specification proof languages that can do it if used correctly. The snag
is it takes a very skilled trained mathematician to use them and they
are in short supply. We need to deskill programming to a point where the
machine takes on some of the grunt work that catches people out.

Even wizards make the occasional fence post error it is just that we
*expect* to make them sometimes and test for any rough edges.

Proper hardware protections can absolutely firewall a heap of bad
code. In fact, make it un-runnable.

It just makes it so that when it dies horribly it doesn\'t take anything
else with it. OS/2 had pretty effective user mode segmentation defences
Windows effectively dismantled them and did a VHS vs Betamax takeover.

Ship it and be damned software development culture persists and it
existed long before there were online updates over the internet.

If a piece of code violates the rules, it should be killed and never
allowed to run again. Software vendors would notice that pretty quick.

Sometimes you need to break the rules to get things done. Mickeysoft in
its infinite wisdom withdrew the ability to use 80 bit reals in C after
v6.0.

That's just another variable type in PowerBasic.



--

John Larkin Highland Technology, Inc

Science teaches us to doubt.

Claude Bernard
 
On Tuesday, August 11, 2020 at 12:11:02 PM UTC-4, jla...@highlandsniptechnology.com wrote:
On Tue, 11 Aug 2020 08:46:38 -0700 (PDT), Lasse Langwadt Christensen
langwadt@fonz.dk> wrote:

Tuesday, 11 August 2020 at 16.50.28 UTC+2, jla...@highlandsniptechnology.com wrote:
On Tue, 11 Aug 2020 10:02:32 +0100, Martin Brown
\'\'\'newspam\'\'\'@nonad.co.uk> wrote:

On 23/07/2020 19:34, John Larkin wrote:
On Thu, 23 Jul 2020 10:36:20 -0700 (PDT), Lasse Langwadt Christensen
langwadt@fonz.dk> wrote:

Thursday, 23 July 2020 at 19.06.48 UTC+2, John Larkin wrote:

We don\'t need more compute power. We need reliability and user
friendliness.

Executing buggy c faster won\'t help. Historically, adding resources
(virtual memory, big DRAM, threads, more MIPS) makes things worse..

For Pete\'s sake, we still have buffer overrun exploits. We still have
image files with trojans. We still have malicious web pages.

a tool that can cut wood can cut your hand, only way totally prevent that
is to add safety features until it cannot cut anything anymore

Why not design a compute architecture that is fundamentally safe?
Instead of endlessly creating and patching bugs.

It has been tried and it all ended in tears. Viper was supposed to be
correct by design CPU but it all ended in recrimination and litigation.

Humans make mistakes and the least bad solution is to design tools that
can find the most commonly made mistakes as rapidly as possible. Various
dataflow methods can catch a whole host of classic bugs before the code
is even run but industry seems reluctant to invest so we have the status
quo. C isn\'t a great language for proof of correctness but the languages
that tried to force good programmer behaviour have never made any
serious penetration into the commercial market. I know this to my cost
as I have in the past been involved with compilers.


No language will ever force good programmer behavior. No software can
ever prove that other software is correct, or even point at most of
the bugs.

Proper hardware protections can absolutely firewall a heap of bad
code. In fact, make it un-runnable.

what's the definition of "bad code"?

Code that can contain or allow viruses, trojans, spyware, or
ransomware, or can modify the OS, or use excess resources. That should
be obvious.

An interesting dichotomy. It is ok to disrupt the world's computing resources that the world economy depends on in order to prevent computer viruses, but it's not ok to disrupt the economy in countries where a virus is killing 10 thousand people per week.


A less severe class of "bad" is code that doesn't perform its intended
function properly, or crashes. If that annoys people, they can stop
using it.

I use a programming language called Forth. Some of the Forth programming environments are recognized by AVS as infected when they are not. It is virtually impossible to get the AVS companies to provide any info on how to write code to prevent false-positive detection. It becomes a guessing game.

I think this silly idea would result in nearly every program being flagged as "bad" in one way or another.


Ship it and be damned software development culture persists and it
existed long before there were online updates over the internet.

If a piece of code violates the rules, it should be killed and never
allowed to run again. Software vendors would notice that pretty quick.


what are the rules?


Don\'t access outside your assigned memory map. Don\'t execute anything
but what\'s in read-only code space. Don\'t overflow stacks or buffers.
Don\'t access any system resources that you are not specifically
assigned access to (which includes devices and IP addresses.) Don\'t
modify drivers or the OS. The penalty for violation is instant death.

The company, the developer or the user?


> Let's get rid of virtual memory too.

You can still run CP/M on a Z80 if you'd like. They are pretty fast these days... oops, that's Z80 emulations on real computers.


Some of those rules just make programmers pay more attention, which is
nice but not critical. What really matters is that the hardware and OS
detect violations and kill the offending process.

Obviously this is needed because there is zero incentive to make software not crash presently.

I really don't know about this guy. A lot of times he sees the world through crap-colored glasses. My laptop virtually never crashes, other than the Microsoft-mandated crashes it does periodically to update the OS.

LTspice is the biggest crasher on my system. Should LTspice be blocked from running?


Hardware designers usually get things right, which is why FPGAs seldom
have bugs but procedural code is littered with errors. Programmers
can\'t control states, if they understand the concept at all.

FPGAs have fewer bugs because they can be tested better and are typically a lot simpler than the hardware they run on. It is very hard to test all the millions or billions of permutations in the software.

If you want software to be more reliable, don\'t ask it to do such complex tasks.


Most of the protections we need here were common in 1975. Microsoft
and Intel weren\'t paying attention, and a culture of sloppiness and
tolerance of hazard resulted.

So you must still be running CP/M then?

--

Rick C.

-++ Get 1,000 miles of free Supercharging
-++ Tesla referral code - https://ts.la/richard11209
 
On 2020-08-11 12:10, jlarkin@highlandsniptechnology.com wrote:
On Tue, 11 Aug 2020 08:46:38 -0700 (PDT), Lasse Langwadt Christensen
langwadt@fonz.dk> wrote:

Tuesday, 11 August 2020 at 16.50.28 UTC+2, jla...@highlandsniptechnology.com wrote:
On Tue, 11 Aug 2020 10:02:32 +0100, Martin Brown
'''newspam'''@nonad.co.uk> wrote:

On 23/07/2020 19:34, John Larkin wrote:
On Thu, 23 Jul 2020 10:36:20 -0700 (PDT), Lasse Langwadt Christensen
langwadt@fonz.dk> wrote:

Thursday, 23 July 2020 at 19.06.48 UTC+2, John Larkin wrote:

We don\'t need more compute power. We need reliability and user
friendliness.

Executing buggy c faster won\'t help. Historically, adding resources
(virtual memory, big DRAM, threads, more MIPS) makes things worse.

For Pete\'s sake, we still have buffer overrun exploits. We still have
image files with trojans. We still have malicious web pages.

a tool that can cut wood can cut your hand, only way totally prevent that
is to add safety features until it cannot cut anything anymore

Why not design a compute architecture that is fundamentally safe?
Instead of endlessly creating and patching bugs.

It has been tried and it all ended in tears. Viper was supposed to be
correct by design CPU but it all ended in recrimination and litigation.

Humans make mistakes and the least bad solution is to design tools that
can find the most commonly made mistakes as rapidly as possible. Various
dataflow methods can catch a whole host of classic bugs before the code
is even run but industry seems reluctant to invest so we have the status
quo. C isn\'t a great language for proof of correctness but the languages
that tried to force good programmer behaviour have never made any
serious penetration into the commercial market. I know this to my cost
as I have in the past been involved with compilers.


No language will ever force good programmer behavior. No software can
ever prove that other software is correct, or even point at most of
the bugs.

Proper hardware protections can absolutely firewall a heap of bad
code. In fact, make it un-runnable.

what\'s the definition of \"bad code\"?

Code that can contain or allow viruses, trojans, spyware, or
ransomware, or can modify the OS, or use excess resources. That should
be obvious.

A less severe class of \"bad\" is code that doesn\'t perform its intended
function properly, or crashes. If that annoys people, they can stop
using it.




Ship it and be damned software development culture persists and it
existed long before there were online updates over the internet.

If a piece of code violates the rules, it should be killed and never
allowed to run again. Software vendors would notice that pretty quick.


what are the rules?


Don\'t access outside your assigned memory map. Don\'t execute anything
but what\'s in read-only code space. Don\'t overflow stacks or buffers.
Don\'t access any system resources that you are not specifically
assigned access to (which includes devices and IP addresses.) Don\'t
modify drivers or the OS. The penalty for violation is instant death.

Let\'s get rid of virtual memory too.

Seconded. My boxes generally have minimal swap space.

Some of those rules just make programmers pay more attention, which is
nice but not critical. What really matters is that the hardware and OS
detect violations and kill the offending process.

Hardware designers usually get things right, which is why FPGAs seldom
have bugs but procedural code is littered with errors. Programmers
can\'t control states, if they understand the concept at all.

Most of the protections we need here were common in 1975. Microsoft
and Intel weren\'t paying attention, and a culture of sloppiness and
tolerance of hazard resulted.

It's got a lot harder to do since 1975. See e.g. this very readable and
illuminating paper, entitled "C is not a low-level language. Your
computer is not a fast PDP-11."
<https://cacm.acm.org/magazines/2018/7/229036-c-is-not-a-low-level-language/fulltext>

Cheers

Phil Hobbs

--
Dr Philip C D Hobbs
Principal Consultant
ElectroOptical Innovations LLC / Hobbs ElectroOptics
Optics, Electro-optics, Photonics, Analog Electronics
Briarcliff Manor NY 10510

http://electrooptical.net
http://hobbs-eo.com
 
On 11/08/20 19:30, Phil Hobbs wrote:
On 2020-08-11 12:10, jlarkin@highlandsniptechnology.com wrote:
Don\'t access outside your assigned memory map. Don\'t execute anything
but what\'s in read-only code space. Don\'t overflow stacks or buffers.
Don\'t access any system resources that you are not specifically
assigned access to (which includes devices and IP addresses.) Don\'t
modify drivers or the OS. The penalty for violation is instant death.

Let\'s get rid of virtual memory too.

Seconded.  My boxes generally have minimal swap space.

Yup.

I can't remember the last time my programs exceeded physical memory.
disk is the new tape
dram is the new core
cache is the new ram
and I\'m not sure where NUMA fits into that.


Most of the protections we need here were common in 1975. Microsoft
and Intel weren\'t paying attention, and a culture of sloppiness and
tolerance of hazard resulted.

Regrettably not.

Human stupidity, laziness and misunderstanding are constants.


It's got a lot harder to do since 1975.  See e.g. this very readable and
illuminating paper, entitled "C is not a low-level language. Your computer is
not a fast PDP-11."
https://cacm.acm.org/magazines/2018/7/229036-c-is-not-a-low-level-language/fulltext

Yes, C hit an abstraction (all the world is a PDP11) that
was good for a decade, but since then has caused untold pain.

New computation models are like scientific theories: they
change one (programmer) death at a time.

The new computational models presume multicore and distributed
processing. Good.

Now all we have to do is get programmers to understand the concepts
of partial system failure, that "a single universal time" is
heretical, and the eight laws of distributed programming.
 
On 24/07/2020 23:34, John Larkin wrote:
On Fri, 24 Jul 2020 23:15:27 +0100, Tom Gardner
spamjunk@blueyonder.co.uk> wrote:

As for \"better\" languages, they help by reducing the
opportunities for making boring old preventable mistakes.

It should be flat impossible for any application program to compromise
the OS, or any other unrelated application. Intel and Microsoft are
just criminally stupid. I don't understand why they are not liable for
damages.

There is nothing much wrong with the Intel hardware: it has been able to
support fully segmented, protected address spaces for processes ever
since the 386. IBM OS/2 (and to a lesser extent NT) was quite capable of
terminating a rogue process with extreme prejudice and no side effects.

Other CPUs have more elegant instruction sets but that is not relevant.

The trouble is that Windows made some dangerous compromises to make
games go 5% faster or whatever the actual figure may be. There is far
too much privileged kernel code and not enough parameter checking.

In addition, too many Windows users sit with superuser privileges all
the time, and that leaves them a lot more open to malware.

We are in the dark ages of computing. Like steam engines blowing up
and poaching everybody nearby.

More like medieval cathedral builders making very large buildings - if
it is still standing in five years time then it was a good one.

Ely and Durham cathedrals came uncomfortably close to falling down due
to different design defects. Several big UK churches famously have
crooked spires to say nothing of the leaning tower of Pisa.

--
Regards,
Martin Brown
 
On 11/08/2020 17:10, jlarkin@highlandsniptechnology.com wrote:
On Tue, 11 Aug 2020 08:46:38 -0700 (PDT), Lasse Langwadt Christensen
langwadt@fonz.dk> wrote:

Tuesday, 11 August 2020 at 16.50.28 UTC+2, jla...@highlandsniptechnology.com wrote:
On Tue, 11 Aug 2020 10:02:32 +0100, Martin Brown
'''newspam'''@nonad.co.uk> wrote:

On 23/07/2020 19:34, John Larkin wrote:
On Thu, 23 Jul 2020 10:36:20 -0700 (PDT), Lasse Langwadt Christensen
langwadt@fonz.dk> wrote:

Thursday, 23 July 2020 at 19.06.48 UTC+2, John Larkin wrote:

We don\'t need more compute power. We need reliability and user
friendliness.

Executing buggy c faster won\'t help. Historically, adding resources
(virtual memory, big DRAM, threads, more MIPS) makes things worse.

For Pete\'s sake, we still have buffer overrun exploits. We still have
image files with trojans. We still have malicious web pages.

a tool that can cut wood can cut your hand, only way totally prevent that
is to add safety features until it cannot cut anything anymore

Why not design a compute architecture that is fundamentally safe?
Instead of endlessly creating and patching bugs.

It has been tried and it all ended in tears. Viper was supposed to be
correct by design CPU but it all ended in recrimination and litigation.

Humans make mistakes and the least bad solution is to design tools that
can find the most commonly made mistakes as rapidly as possible. Various
dataflow methods can catch a whole host of classic bugs before the code
is even run but industry seems reluctant to invest so we have the status
quo. C isn\'t a great language for proof of correctness but the languages
that tried to force good programmer behaviour have never made any
serious penetration into the commercial market. I know this to my cost
as I have in the past been involved with compilers.


No language will ever force good programmer behavior. No software can
ever prove that other software is correct, or even point at most of
the bugs.

Proper hardware protections can absolutely firewall a heap of bad
code. In fact, make it un-runnable.

what\'s the definition of \"bad code\"?

Code that can contain or allow viruses, trojans, spyware, or
ransomware, or can modify the OS, or use excess resources. That should
be obvious.

It is very obvious that you have no understanding of the basics of
computing. The halting problem shows that what you want is impossible.

You cannot tell reliably what code will do until it gets executed.

A less severe class of \"bad\" is code that doesn\'t perform its intended
function properly, or crashes. If that annoys people, they can stop
using it.

Most decent software does what it is supposed to most of the time. Bugs
typically reside for a long time in seldom-trodden paths that should
never normally happen, like error recovery in weird situations.

C invites certain dangerous practices that attackers ruthlessly exploit,
like loops that copy until they hit a null byte.
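
(A minimal illustration of the practice being described; buffer and input are invented for the example. The classic copy-until-NUL loop happily runs past the end of the destination, while a bounded copy cannot.)

/* overrun.c - the copy-until-NUL habit and its bounded alternative.
   Buffer and input sizes are invented for the example. */
#include <stdio.h>

static void unsafe_copy(char *dst, const char *src)
{
    while ((*dst++ = *src++) != '\0')  /* stops only at the NUL byte: */
        ;                              /* nothing limits it to dst's size */
}

static void bounded_copy(char *dst, size_t n, const char *src)
{
    snprintf(dst, n, "%s", src);       /* truncates and always terminates */
}

int main(void)
{
    char small[8];
    const char *attacker = "longer than eight bytes";

    bounded_copy(small, sizeof small, attacker);
    printf("bounded: \"%s\"\n", small);

    /* unsafe_copy(small, attacker);   <- would overrun 'small' and
                                          corrupt whatever follows it */
    (void)unsafe_copy;
    return 0;
}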

Ship it and be damned software development culture persists and it
existed long before there were online updates over the internet.

If a piece of code violates the rules, it should be killed and never
allowed to run again. Software vendors would notice that pretty quick.

what are the rules?


Don't access outside your assigned memory map. Don't execute anything
but what's in read-only code space. Don't overflow stacks or buffers.

That is motherhood and apple pie. It allows other programs and tasks to
keep running and was one of the strengths of IBM's OS/2, but apart from
in bank machines and air traffic control hardly anyone adopted it :(

IBM soured the pitch by delivering it late and not quite working, and by
conflating it with the horrible PS/2 hardware lock-in that forced their
competitors to collaborate and design the EISA bus; the rest is history.

Don't access any system resources that you are not specifically
assigned access to (which includes devices and IP addresses.) Don't
modify drivers or the OS. The penalty for violation is instant death.

You are going to waste a lot of time checking against all these
rules, which will themselves contain inconsistencies after a while.

> Let's get rid of virtual memory too.

Why? Disk is so much cheaper than RAM and plentiful. SSDs are fast too.

Some of those rules just make programmers pay more attention, which is
nice but not critical. What really matters is that the hardware and OS
detect violations and kill the offending process.

One that you can do either in hardware or software is to catch any
attempt to fetch an undefined value from memory. These days there are a
few sophisticated compilers that can do this at *compile* time.

One I know (Russian as it happens) by default compiles a hard runtime
trap at the location of the latent fault. I have mine set to warning.
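
(The same class of latent fault in a form even mainstream compilers will flag at compile time; this example is illustrative and is not taken from the compiler Martin mentions. gcc or clang with -Wall warns about the path that reads y before it is set.)

/* undef.c - fetch of a possibly undefined value, caught at compile time.
   gcc -Wall -O2 undef.c (or clang -Wall) reports something like
   "warning: 'y' may be used uninitialized". */
#include <stdio.h>

int scale(int x)
{
    int y;                  /* no initial value */

    if (x > 0)
        y = 2 * x;          /* the x <= 0 path never sets y ... */

    return y + 1;           /* ... yet it is read here */
}

int main(void)
{
    printf("%d\n", scale(3));
    return 0;
}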

Hardware designers usually get things right, which is why FPGAs seldom
have bugs but procedural code is littered with errors. Programmers
can\'t control states, if they understand the concept at all.

Oh rubbish. You should stop using simulators and see how far you get,
since all software is so buggy that you can't trust it, can you?

Most of the protections we need here were common in 1975. Microsoft
and Intel weren\'t paying attention, and a culture of sloppiness and
tolerance of hazard resulted.

Intel hardware has the capability to do full segmented protected modes
where you only get allocated the memory you ask for and get zapped by
the OS if you try anything funny. But the world went with Windows :(

I blame IBM for their shambolic marketing of OS/2.

--
Regards,
Martin Brown
 
On 25/07/2020 23:37, Tom Gardner wrote:
On 25/07/20 19:51, Phil Hobbs wrote:
Check out Qubes OS, which is what I run daily.  It addresses most of
the problems you note by encouraging you to run browsers in disposable
VMs and otherwise containing the pwnage.

I did.

It doesn't like Nvidia graphics cards, and that's all my
new machine has :(

Assuming it is a Pentium class machine and you don't do much video
editing, 3D rendering or gaming you may find that for 2D graphics the
built in Intel graphics 4000 is actually faster at 2D than most high
performance graphics cards. No use to you at all if you are using
programs that subvert the graphics card to do computation though.

There may be a way in your BIOS to disable the Nvidia card temporarily
and check it out. It's a waste to be running a texture rendering engine
when all you are doing is graphs and web browsing.

My office machine uses Intel 4000 graphics only and consumes under 60W
when not working hard and 100W flat out. A graphics card would double or
triple that consumption. Only snag I see is that I cannot run the latest
AI chess engines on it since they do require a GPU cluster.

--
Regards,
Martin Brown
 
On 12/08/20 11:05, Martin Brown wrote:
On 25/07/2020 23:37, Tom Gardner wrote:
On 25/07/20 19:51, Phil Hobbs wrote:
Check out Qubes OS, which is what I run daily.  It addresses most of the
problems you note by encouraging you to run browsers in disposable VMs and
otherwise containing the pwnage.

I did.

It doesn't like Nvidia graphics cards, and that's all my
new machine has :(

Assuming it is a Pentium class machine and you don't do much video editing, 3D
rendering or gaming you may find that for 2D graphics the built in Intel
graphics 4000 is actually faster at 2D than most high performance graphics
cards. No use to you at all if you are using programs that subvert the graphics
card to do computation though.

There may be a way in your BIOS to disable the Nvidia card temporarily and check
it out. It's a waste to be running a texture rendering engine when all you are
doing is graphs and web browsing.

I would have tried that, but my AMD 3700X doesn't have inbuilt
graphics - so it has to be an external card.



My office machine uses Intel 4000 graphics only and consumes under 60W when not
working hard and 100W flat out. A graphics card would double or triple that
consumption. Only snag I see is that I cannot run the latest AI chess engines on
it since they do require a GPU cluster.
 
On Wed, 12 Aug 2020 08:33:20 +0100, Martin Brown
<'''newspam'''@nonad.co.uk> wrote:

On 11/08/2020 17:10, jlarkin@highlandsniptechnology.com wrote:
On Tue, 11 Aug 2020 08:46:38 -0700 (PDT), Lasse Langwadt Christensen
langwadt@fonz.dk> wrote:

On Tuesday, 11 August 2020 at 16:50:28 UTC+2, jla...@highlandsniptechnology.com wrote:
On Tue, 11 Aug 2020 10:02:32 +0100, Martin Brown
'''newspam'''@nonad.co.uk> wrote:

On 23/07/2020 19:34, John Larkin wrote:
On Thu, 23 Jul 2020 10:36:20 -0700 (PDT), Lasse Langwadt Christensen
langwadt@fonz.dk> wrote:

On Thursday, 23 July 2020 at 19:06:48 UTC+2, John Larkin wrote:

We don't need more compute power. We need reliability and user
friendliness.

Executing buggy C faster won't help. Historically, adding resources
(virtual memory, big DRAM, threads, more MIPS) makes things worse.

For Pete's sake, we still have buffer overrun exploits. We still have
image files with trojans. We still have malicious web pages.

a tool that can cut wood can cut your hand; the only way to totally prevent that
is to add safety features until it cannot cut anything anymore

Why not design a compute architecture that is fundamentally safe?
Instead of endlessly creating and patching bugs.

It has been tried and it all ended in tears. Viper was supposed to be a
correct-by-design CPU but it all ended in recrimination and litigation.

Humans make mistakes and the least bad solution is to design tools that
can find the most commonly made mistakes as rapidly as possible. Various
dataflow methods can catch a whole host of classic bugs before the code
is even run but industry seems reluctant to invest so we have the status
quo. C isn't a great language for proof of correctness but the languages
that tried to force good programmer behaviour have never made any
serious penetration into the commercial market. I know this to my cost
as I have in the past been involved with compilers.


No language will ever force good programmer behavior. No software can
ever prove that other software is correct, or even point at most of
the bugs.

Proper hardware protections can absolutely firewall a heap of bad
code. In fact, make it un-runnable.

what's the definition of "bad code"?

Code that can contain or allow viruses, trojans, spyware, or
ransomware, or can modify the OS, or use excess resources. That should
be obvious.
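
The "excess resources" part at least has a long-standing, if blunt, OS-level answer in POSIX rlimits; a rough sketch, with the limits chosen arbitrarily for illustration:

#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
    /* Cap CPU time at 5 seconds; past the limit the kernel sends SIGXCPU
       and, if that is ignored, eventually SIGKILL. */
    struct rlimit cpu = { .rlim_cur = 5, .rlim_max = 5 };
    if (setrlimit(RLIMIT_CPU, &cpu) != 0)
        perror("setrlimit(RLIMIT_CPU)");

    /* Cap the address space at 256 MiB (RLIMIT_AS is Linux/XSI, not
       universal); allocations beyond that simply fail. */
    struct rlimit mem = { .rlim_cur = 256UL << 20, .rlim_max = 256UL << 20 };
    if (setrlimit(RLIMIT_AS, &mem) != 0)
        perror("setrlimit(RLIMIT_AS)");

    /* ... the untrusted or resource-hungry work would run here ... */
    return 0;
}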

It is very obvious that you have no understanding of the basics of
computing. The halting problem shows that what you want is impossible.

I've written maybe a million lines of code, mostly realtime stuff, and
three RTOSs and two or three compilers, and actually designed one CPU
from MSI TTL chips, that went into production. I contributed code to
FOCAL (I'm named in the source) and met with some of the guys that
invented the PDP-11 architecture, before they did it. Got slightly
involved in the dreadful HP 2114 thing too.

Have you done anything like that?

Bulletproof memory management is certainly not impossible. It's just
that not enough people care.

"Computer Science" theory has almost nothing to do with computers.
I've told that story before.

You cannot tell reliably what code will do until it gets executed.

You can stop it from ransoming all the data on all of your servers
because some nurse opened an email attachment.

A less severe class of "bad" is code that doesn't perform its intended
function properly, or crashes. If that annoys people, they can stop
using it.

Most decent software does what it is supposed to most of the time. Bugs
typically reside for a long time in seldom trodden paths that should
never normally happen like error recovery in weird situations.

The real dollar cost of bad software is gigantic. There should be no
reason for a small or mid-size company to continuously pay IT security
consultants, or to run AV software.

C invites certain dangerous practices that attackers ruthlessly exploit
like loops copying until they hit a null byte.

Let bad programs malfunction or crash. But don't allow a stack or
buffer overflow to poke exploits into code space. The idea of
separating data, code, and stack isn't hard to understand, or even
hard to implement.
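
The hardware support for that separation has been around for years (the NX bit, DEP, W^X). A minimal sketch of the mechanism, assuming a POSIX system with NX enforcement: a buffer is mapped writable but not executable, so even if an attacker fills it with machine code, jumping into it gets the process killed.

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/mman.h>

int main(void)
{
    size_t pagesz = (size_t)sysconf(_SC_PAGESIZE);

    /* A data page: readable and writable, deliberately NOT executable. */
    unsigned char *buf = mmap(NULL, pagesz, PROT_READ | PROT_WRITE,
                              MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (buf == MAP_FAILED) {
        perror("mmap");
        return 1;
    }

    /* Pretend an overflow just deposited shellcode here (0xC3 is x86 ret). */
    memset(buf, 0xC3, pagesz);

    /* Treat the data page as code.  (The object-to-function-pointer cast is
       not strict ISO C, but it is the usual way to demonstrate this.)  On
       NX-capable hardware the instruction fetch faults and the kernel kills
       the process with SIGSEGV. */
    void (*fn)(void) = (void (*)(void))buf;
    fn();

    printf("never reached on a W^X system\n");
    return 0;
}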

We probably need to go to pseudocode-only programs. The machine needs
to be protected from programmers and from bad architectures. Most
programmers never learn about machine-level processes.

Or push everything into the cloud and not actually run application
programs on a flakey box or phone.

Ship it and be damned software development culture persists and it
existed long before there were online updates over the internet.

If a piece of code violates the rules, it should be killed and never
allowed to run again. Software vendors would notice that pretty quick.

what are the rules?


Don't access outside your assigned memory map. Don't execute anything
but what's in read-only code space. Don't overflow stacks or buffers.

That is motherhood and apple pie. It allows other programs and tasks to
keep running and was one of the strengths of IBM's OS/2 but apart from
in bank machines and air traffic control hardly anyone adopted it :(

My point. Why do you call me ignorant for wanting hardware-based
security?

IBM soured the pitch by delivering it late and not quite working and
conflating it with the horrible PS/2 hardware lock-in that forced their
competitors to collaborate and design the EISA bus; the rest is history.

Don't access any system resources that you are not specifically
assigned access to (which includes devices and IP addresses.) Don't
modify drivers or the OS. The penalty for violation is instant death.

You are going to waste a lot of time checking against all these
rules, which will themselves contain inconsistencies after a while.

Let's get rid of virtual memory too.

Why? Disk is so much cheaper than RAM and plentiful. SSDs are fast too.

Some of those rules just make programmers pay more attention, which is
nice but not critical. What really matters is that the hardware and OS
detect violations and kill the offending process.

One that you can do either in hardware or software is to catch any
attempt to fetch an undefined value from memory. These days there are a
few sophisticated compilers that can do this at *compile* time.

The problem circles back: the compilers are written, and run, the same
way as the application programs. The software bad guys will always be
more creative than the software defenders.

One I know (Russian as it happens) by default compiles a hard runtime
trap at the location of the latent fault. I have mine set to warning.

Hardware designers usually get things right, which is why FPGAs seldom
have bugs but procedural code is littered with errors. Programmers
can't control states, if they understand the concept at all.

Oh rubbish. You should stop using simulators and see how far you get -
since all software is so buggy that you can't trust it, can you?

I\'ve done nontrivial OTP (antifuse) CPLDs and FPGAs that worked first
pass, without simulation. First pass. You just need to use state
machines and think before you compile. People who build dams
understand the concept. Usually.

Have you ever written any code past Hello, World! that compiled
error-free and ran correctly the very first time? That's unheard of.



--

John Larkin Highland Technology, Inc

Science teaches us to doubt.

Claude Bernard
 
