The point of this post is this: How have the things programmers must do on a daily basis strayed so far from reality? (Note: I am not here to beat up Lisa Lippincott or her presentation, to say her analysis is incorrect or wrong, or anything like that.)
I am writing to question how something as simple as a few gates performing a simple, repeatable Boolean operation gets magnified into an ultra-complex "type theory" system, and to ask why that is needed when the theory is clearly so much more complex than the operations it describes.
In my view "types" as implemented in C++, particularly the features added over the last decade or so, are the downfall of C++. C and C++ operations were originally derived from the PDP 11/20 so the notion of the C type "int" is really just a PDP 11/20 16-bit register. The operation of add, for example, is a 2's complement addition using a 74181 ALU.
During this time other computers, such as the UNIVAC 1108, used 1's complement arithmetic instead of 2's complement (which means that instead of an extra negative value you have a negative zero, along with symmetric positive and negative values).
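For concreteness, here is a small sketch (my own illustration, not taken from any particular machine's manual) of that difference for a 16-bit word:

```cpp
#include <cstdint>
#include <cstdio>

int main() {
    // Two's complement negation: invert all bits, then add one.
    // The range is asymmetric (-32768 .. +32767) and there is only one zero.
    uint16_t twos_neg_zero = (uint16_t)(~(uint16_t)0 + 1);   // negating zero gives zero back: 0x0000

    // One's complement negation: just invert all bits.
    // The range is symmetric (-32767 .. +32767), but zero has two encodings.
    uint16_t ones_pos_zero = 0x0000;                    // +0
    uint16_t ones_neg_zero = (uint16_t)~ones_pos_zero;  // 0xFFFF is "negative zero"

    printf("two's complement -0: 0x%04X\n", (unsigned)twos_neg_zero);
    printf("one's complement +0: 0x%04X, -0: 0x%04X\n",
           (unsigned)ones_pos_zero, (unsigned)ones_neg_zero);
    return 0;
}
```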
As an impressionable youth with a knowledge of 7400-series logic, I found the function of the PDP 11 arithmetic unit fairly obvious (the full set of PDP 11/20 instructions is here):
You were given a full set of 2's complement 16-bit values.
Choosing any two 16-bit input values from that set and applying addition, subtraction or another arithmetic operation gives you a 16-bit output value from that same set (note: there was no multiply instruction, hence multiplication is not in the set of arithmetic operations). The operations are therefore closed over this set.
Additional signals for overflow, carry, zero, positive and negative reflect side effects of each operation (recorded as logical bits), as the sketch below illustrates.
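A rough behavioral model of that arithmetic unit, written in C++ rather than gates (the structure and names here are my own, not DEC's), might look like this:

```cpp
#include <cstdint>
#include <cstdio>

// Condition bits produced as side effects of each operation.
struct Flags { bool n, z, v, c; };   // negative, zero, overflow, carry

// 16-bit two's complement ADD: two 16-bit inputs, one 16-bit output, four flags.
uint16_t add16(uint16_t a, uint16_t b, Flags &f) {
    uint32_t wide = (uint32_t)a + (uint32_t)b;   // keep the 17th bit around
    uint16_t r = (uint16_t)wide;                 // the result stays in the 16-bit set

    f.c = ((wide >> 16) & 1) != 0;                    // carry out of bit 15
    f.v = (~(a ^ b) & (a ^ r) & 0x8000) != 0;         // signed overflow: same-sign inputs, different-sign result
    f.z = (r == 0);
    f.n = (r & 0x8000) != 0;
    return r;
}

int main() {
    Flags f;
    uint16_t r = add16(0x7FFF, 0x0001, f);   // 32767 + 1 wraps to -32768
    printf("result=0x%04X  N=%d Z=%d V=%d C=%d\n", (unsigned)r, f.n, f.z, f.v, f.c);
    return 0;
}
```

The whole contract fits in a dozen lines: a closed set of 16-bit values plus a handful of condition bits.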
Learning integer type operations in C (which carry over exactly into C++'s basic types) was simple:
Learn the above rules.
Learn that shifting an int left or right may, depending on computer architecture, shift in a bit value you didn't expect.
Learn that the obvious and valuable hardware knowledge of overflow and carry was not available in C (see the sketch after this list).
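To make those last two points concrete, here is a short example (mine, not from the original post) of what the language does and does not promise:

```cpp
#include <climits>
#include <cstdio>

int main() {
    int neg = -8;

    // Right-shifting a negative signed value is implementation-defined in C and C++:
    // the architecture may shift in the sign bit (arithmetic shift) or zeros (logical shift).
    printf("-8 >> 1 = %d (implementation-defined)\n", neg >> 1);

    // Signed overflow is undefined behavior; the carry/overflow the hardware
    // computed is simply not visible from the language.
    int big = INT_MAX;
    (void)big;
    // big + 1;  // undefined behavior -- the compiler may assume it never happens

    // Unsigned arithmetic is defined to wrap modulo 2^N, but there is still
    // no standard way to read the carry flag the ALU produced.
    unsigned u = UINT_MAX;
    printf("UINT_MAX + 1u = %u (wraps to zero)\n", u + 1u);
    return 0;
}
```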
This last point is important and, from my perspective, a key item which makes C a "bogus" language.
Other languages of the era, for example PL/I, would trigger faults if a program generated an overflow with, say, an addition operation.
Hence, from the dawn of "C" time, overflow and carry were the programmer's responsibility, as was whatever bit value gets shifted in from the left or right.
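As an aside (my own example, and specific to GCC and Clang rather than to standard C or C++), the overflow information the hardware computes can be reached today, but only through a compiler builtin; the language itself still offers nothing for plain int arithmetic:

```cpp
#include <cstdio>

int main() {
    int a = 2000000000, b = 2000000000, sum;

    // GCC/Clang builtin: returns true if the mathematically correct result
    // does not fit in the destination; otherwise stores the sum.
    if (__builtin_add_overflow(a, b, &sum)) {
        printf("overflow detected -- the programmer must handle it\n");
    } else {
        printf("sum = %d\n", sum);
    }
    return 0;
}
```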
However, in the 1970s and 1980s this mattered a lot more than it does today. Today Intel and ARM are by far the dominant hardware architectures, and these details have been smoothed over by decades of language fiddling.
Given this, what's the point?
The point is simple: during the last almost 50 years, the six points above have served to describe exactly what an int does in C and C++. Not just the 16-bit form but all the forms of int.
Why?
Because that's what the hardware did for 70-ish years and still does.
So a mental hardware model is the correct model because it accurately models reality.
It's also far, far simpler than the complex "type theory" model of addition. Further, the basic binary operations of addition are unlikely to change much at this point.
I find it fascinating how far programming languages have gotten from the devices they generate code for. Unfortunately, the more "abstraction" you apply, the harder the code is to understand.
Think about it: Type theory is complex and difficult. Hardware adders are relatively simple, at least compared to type theory.
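To put a number on "relatively simple": a full 16-bit ripple-carry adder, written out gate by gate, fits comfortably on one screen. This is my own sketch of the textbook circuit, not any particular part:

```cpp
#include <cstdint>
#include <cstdio>

// A one-bit full adder expressed with the same AND/OR/XOR gates a 7400-series
// part provides: sum = a XOR b XOR cin, carry out = majority(a, b, cin).
static void full_adder(int a, int b, int cin, int &sum, int &cout) {
    sum  = a ^ b ^ cin;
    cout = (a & b) | (a & cin) | (b & cin);
}

// Sixteen of them chained carry-to-carry form a 16-bit ripple-carry adder.
static uint16_t ripple_add16(uint16_t a, uint16_t b, int &carry_out) {
    uint16_t result = 0;
    int carry = 0;
    for (int i = 0; i < 16; ++i) {
        int s, c;
        full_adder((a >> i) & 1, (b >> i) & 1, carry, s, c);
        result |= (uint16_t)(s << i);
        carry = c;
    }
    carry_out = carry;
    return result;
}

int main() {
    int carry;
    uint16_t r = ripple_add16(1234, 4321, carry);
    printf("1234 + 4321 = %u, carry out = %d\n", (unsigned)r, carry);
    return 0;
}
```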
And, unfortunately, the wrong kinds of abstraction are applied. It's one thing to create a set of routines, an interface, a simple LALR language, etc. to capture an abstraction and quite another to push the abstractions into places where they can't be seen.
What do I mean by this?
Routines that do a specific task are "visible" - a programmer can look at a library, use the debugger, etc. to gain understanding of what's happening.
But as more and more is "abstracted", either into the language itself or into operator "overloads", things become less and less clear.
Somehow a simple macro is more dangerous than a templated lambda...?
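For concreteness, here is the comparison people usually have in mind, as my own sketch rather than anything from the post: the macro's hazard (double evaluation of its argument) is at least visible in the expanded source, while the lambda hides a template instantiation behind an ordinary-looking call.

```cpp
#include <cstdio>

// The classic macro pitfall: arguments are substituted textually, so a
// side-effecting argument is evaluated more than once.
#define SQUARE(x) ((x) * (x))

// A generic lambda does the same job with ordinary function-call semantics:
// the argument is evaluated exactly once.
auto square = [](auto x) { return x * x; };

static int calls = 0;
static int next_value() { ++calls; return 3; }

int main() {
    calls = 0;
    int m = SQUARE(next_value());   // expands to ((next_value()) * (next_value()))
    printf("macro:  %d, next_value() called %d times\n", m, calls);

    calls = 0;
    int l = square(next_value());   // next_value() called exactly once
    printf("lambda: %d, next_value() called %d times\n", l, calls);
    return 0;
}
```

Whether the hidden instantiation behind the lambda is really clearer than the visible macro expansion is exactly the question being raised here.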