
Wednesday, February 29, 2012

HDMI and 21 Metric Tons of Cocaine

There are some interesting new computers becoming available very soon.

One is the Raspberry Pi.  This is a $35 USD computer that has "... RCA and HDMI outputs that allow them to be plugged into a television. Input devices, such as mouse and keyboard, can be plugged in via a USB port" according to this at Ars Technica.

Another is the Cotton Candy from cstick.  According to the cstick site "The size, raw horsepower and combined HDMI, USB and MicroUSB connectivity bring unprecedented flexibility to the portable market. ... Cotton Candy is the world’s first any screen cloud computer, weighing only 21 grams. It’s unique architecture will allow the device to serve as an ideal companion to smartphones, tablets, notebook PCs and Macs as well as will add smart capabilities to existing displays, set top boxes and game consoles."

So people are starting to create very small, cheap USB computers with HDMI ports on them.

Not only do these devices have HDMI ports, they also ship with support for video codecs.  Video codecs are software modules used to decode video streams.

The Raspberry Pi is essentially "open source," which means that the designs are public - all the software and circuit diagrams.  So anybody anywhere in the world with this information can build the device.

(There is also the Arduino - which is similar but larger and supports other functions as well - but also open source.  We are using this over at Synthodeon for some work we are doing.)

What does this mean?

I think it's a worldwide, world-class piracy effort.

One that Congress, the RIAA, and everyone else doesn't have a clue is coming their way.

These devices are designed to run video to large screen displays via HDMI and to plug into computers and smartphones.  Basically a smart USB to HDMI adapter.

And because it's a computer it can interact easily with whatever you plug it into.  The Cotton Candy also supports WiFi - so I can plug my TV into my Cotton Candy via HDMI and potentially stream video (or music) to it directly over my WiFi (the Cotton Candy just needs to be plugged into a powered USB hub near the TV set).
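To see how little software that actually takes, here's a minimal sketch in Python (my own illustration - the port is an arbitrary choice and the file name is made up; the real Cotton Candy stack is Android/Linux) of the "serving" side of such a setup:

```python
# Minimal sketch: share a folder of media over WiFi so a small
# HDMI-attached computer on the same network can stream from it.
# Standard library only.
import http.server
import socketserver

PORT = 8000  # any open port on the machine doing the sharing

# SimpleHTTPRequestHandler serves files from the current directory.
handler = http.server.SimpleHTTPRequestHandler

with socketserver.TCPServer(("", PORT), handler) as httpd:
    # On the TV-side device, point any media player that can read
    # an HTTP stream at http://<this-machine's-ip>:8000/movie.mp4
    print("Serving media on port", PORT)
    httpd.serve_forever()
```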

Pirating music or video isn't a problem on a computer as long as you like to watch/listen to it there.  But having music and video on the big screen at a party is much better.  So why not build a device that makes this easy to do?

Of course these websites don't specifically talk about illegitimate uses for these computers - only the legitimate ones, like education and so on.

But don't be fooled - why else would you include HDMI output?

The Raspberry Pi was so popular the day it came out that the website went down under the load of everyone trying to purchase it.

Both devices will run Linux - an operating system that's free and one which supports a lot of free video technology as well.  So once convenient apps are developed, using these devices will be as simple as downloading and installing some software packages with a few clicks of the mouse.

There is, I think, a lot of interesting irony here - perhaps to the point of "irony poisoning" for the traditional content providers.

Over at least the last decade we have endeavored to teach our children about "sharing" in a big way.  No need for war, no need for boundaries, help out your fellow man, share what you have, etc.

As these kids have wound their way through places like engineering school they have taken this to heart and have started to build technological products to support it.  After all, they probably grew up sharing pirated video and music on LimeWire and Napster.

Why not take their college-provided skills and use them to actually take this form of sharing to the next level by building a perfectly legal "sharing" infrastructure?

Sadly, the old geezers at places like the RIAA, the US Congress, or Capitol Records probably don't view the world this way.  Unfortunately for them, though, all that money they spent sending their kids to college to learn how to be "better, kinder, sharing citizens" is probably going to come back to bite them.

Because that's exactly what they are going to do with content: share it.

And because these same kids were taught that there are no national borders and so on they will happily share that content with folks in parts of the world where there is little if any concern for the rights of copyright holders.

While the US government might feel they are making a dent in all this by taking down MegaUpload in reality they are probably about at the same point where the US DEA was when, years ago, they destroyed 21 metric tons of pure cocaine in South America.

Ah - they thought - at least that will sting (they looked for a blip in street-level cocaine prices).

In fact, it didn't - so much cocaine flowed into the US at that time that destroying 21 metric tons had no impact - more probably flows today.

The same will be true for these kids and their $25 computers.  No laws will stop them.

Beyond this I would be concerned if I were a cellphone provider.  These same $25 computers hold all the non-phone "smarts" of most smartphones (Blackberry, iPhone).

The Cotton Candy will drive TV screens and run Android software.  So today I could probably build a portable Skype phone with it (displays are cheap and easily interfaced too).

Tomorrow someone will figure out (if they haven't already) how to access cellphone networks with "home brew" technology.  They will publish "shareware" designs for chips.  These designs will find their way to China for cheap manufacture and show up as USB sticks.

Then there will be lawsuits and eventually, after a visit to the US Supreme Court, the "private" networks of the cellphone companies will have to "open up" to allow properly formed "home brew" phones.  (As was the fate of the old "Bell Network" of phones: the government said "your service is ubiquitous and therefore must be opened up to allow other things to connect."  Soon after, the "Bell Network" simply vanished...)

The kiddies will install their first freeware cell tower in a big city (after suing the US government for access to private network bands held by "greedy" companies that don't want to share).

Goodbye Verizon, Sprint and AT&T.

Meanwhile, grandma, who lives off the stock she was left in these companies, will be thrown out into the street as their value (like that of the record companies) plummets.

Isn't sharing fun?
 

Tuesday, February 28, 2012

CPR and Heroic Measures: Are They Worth It?

We've all seen it on TV: Joe keels over grasping his chest, or the car with little Suzy smashes into a tree, or Uncle Bob is found slumped over his kitchen table.

A hero rushes up and administers CPR saving the person's life.

But how real is this?

According to this Science Blog article "77 percent of TV patients survived immediately after the cardiac arrest, and 67 percent appeared to enjoy long-term survival. Among actual patients, survival ranges from 2 percent to 30 percent for out-of-hospital cardiac arrests and 6.5 percent to 15 percent for in-hospital arrests."

At worst a 2% chance, at best a 30% chance outside a hospital - and 6.5% to 15% inside one.

Yet TV portrays a 67% - 77% success rate...

This made me think about the AED devices I see hanging around everywhere.  Automated External Defibrillators.  Three... Two... One... Clear - ZZZZZAP! And grandma or little Johnny comes back to life.

Now I wondered "how would you know when to use an AED?"

Not being a doctor I really wouldn't have any idea when someone's heart was in fibrillation.  A quick check of Wikipedia shows that AEDs know when to apply themselves - I suppose the internal computer checks the electrical signals from the heart and, if it detects common forms of heart arrhythmia, jolts the patient.

Though I think it would be something of a "tough call" to rip open the shirt of someone who's collapsed and slap on the paddles.  What if they had choked?  The AED wouldn't do much good.

Apparently neither would CPR.

I looked around for some statistics on the success of AEDs but I could not find anything other than a variety of anecdotal stories where the devices "saved someone's life."  You would think that if there were stats on this they would be trumpeted to the high heavens by the manufacturers.  But since there aren't any out there to find, at least on Google, my guess is that there really aren't any (yet?)...

This leads to other conundrums as well: do I use CPR or the AED?  Which one first? 

Will the "patient" sue me? 

The real question, I think, is what would professionals do?

In this case medical professionals. 

So take a look at this WSJ article "Why Doctors Die Differently."  The upshot here is that in a 2003 study, compared to about 20% of us non-medical professionals, 64% of doctors take steps "specifying what steps should and should not be taken to save their lives should they become incapacitated" (bolding mine).

That's right - those who work in the field don't want medical magic applied to "save them" from dying.

I wonder why?

For one thing having some control over your own death offers a sense of closure for you and your family.  No one has to stand around in a hospital room wringing their hands every day.  You can do things that are important to you - spend time with family and loved ones.  And so on.

Doctors, more than likely, understand that most "lifesaving" care is ultimately futile - especially in the "long run" which might be a few extra months or a year - and particularly if you are left with a diminished capacity during that time.

The WSJ article describes a few anecdotal stories where somebody, upon discovering they are doomed, simply goes home and enjoys the remainder of their life.

Imagine.

More interesting is that the "cost" of this kind of death, at least according to the WSJ article, is often ridiculously low: say $20 of pills a month.

Yet TV and big pharma don't show much interest in pushing this kind of death.

I wonder why not?

Perhaps they don't make as much money from it...?

One medical study that I have never seen or heard of - and I suspect I never will - is this one: what will it cost to die if I am first "saved" by medicine?

Let's say I am a smoker - so there is an increased chance I would die of heart disease - bang - one day I simply fall over dead with a heart attack.  Total cost: an ambulance ride to the ER where I am pronounced DOA - a few grand and off to the funeral home.

Now let's say we could somehow estimate the cost of two outcomes: A) I survive the heart attack - what would the "cost" of keeping me alive be?  And B) I recover 100% but die of Alzheimer's after 10 years of wasting away.

Did society really "save money" by having me quit smoking?

Was my life really better off "dropping over dead" or lingering in misery for 10 years?

My mom is in her eighties.  She still holds the depression-era "doctors are gods" model in her mind.  So anytime she has a problem her first choice is to call one or run to the hospital.

Over the last seven or eight years since my father became ill and passed away (Alzheimer's) I have worked hard to convince her that once the "medical system" gets hold of her she is doomed.  I have heard too many stories about aged parents, friends and relatives who enter the hospital for seemingly minor issues only to be pronounced dead a few weeks later having never left.

Mom's been in relatively good health (not that she would agree) and only been in the hospital once in that time for a couple of days.  I work hard to keep her out because I know that once they get her the same will happen to her.

Mom still drives and gets around on her own and her only medication is a blood pressure pill.

Not bad for someone in her eighties.

A while back she was in the hospital for a gastrointestinal problem for a couple of days.

I think she saw the writing on the wall and realized this was not a "safe" place to be.  I think it was kind of an epiphany for her because before this running to the hospital was top on her list.  I am not sure exactly what happened but in the end I think she saw that her interests were not the primary interests of the hospital and that, ultimately, she would lose control of her own destiny there.

Heady stuff for someone with her background and predisposition for seeking medical care as a first choice.

Medical treatment today, I think, is only about control and money - and it's getting worse.

But TV and popular culture display a far different picture than reality - and even those professionals making a living at it don't seem to really trust it.

I've come to believe that these days it's so bad that even my mom, who came from an era where doctors were the ultimate authority, now sees it all for what it is.

A shame.

Monday, February 27, 2012

Decrypting Your Fifth Amendment Right

A few weeks ago I wrote "Ramona Fricosu and Your Privacy." 

Ramona, it seems, holds the password that decrypts a laptop the court wants files from.  Ramona recently asked the court to drop the demand because revealing the password would be a violation of her Fifth Amendment right against self-incrimination.

Ramona recently lost (see this).

However, Ramona lost because it was the wrong time in her case to make such a claim.  Instead she must wait until her case is won or lost (though the case may change based on another defendant supplying passwords) and file an appeal.

However, the principle was decided, I guess in Ramona's ultimate favor, in a separate United States Court of Appeals for the Eleventh Circuit decision here.   I suspect that this will go to the Supreme Court ultimately - though the result will not change.

While a different case from Ramona's altogether, the principles are the same.

The details are as follows.

"John Doe" had a laptop on which the government suspected Mr. Doe had stored child pornography.  (Unlike Fricosu's case, where the laptop was known through other means to contain documents related to Fricosu's guilt.)

Suspected is the first key.

In deciding the case the Appeals Court relied on two cases.

The first is Fisher v. United States.  In Fisher the case revolved around a defendant supplying documents requested by the IRS.

The principle, it seems, is whether or not the government is requesting information from the suspect that it does not already have: "The existence and location of the papers are a foregone conclusion and the taxpayer adds little or nothing to the sum total of the Government’s information by conceding that he in fact has the papers. Under these circumstances by enforcement of the summons “no constitutional rights are touched. The question is not of testimony but of surrender.”"

The second is United States v. Hubbell.  This harkens back to the presidency of Bill Clinton and his (and his wife's) relationship to the "Whitewater Development Corporation."

In this case the government asked Hubbell to turn over eleven categories of documents.  However, the government did not know in advance what was in them.  In Hubbell the Supreme Court found first that "the Fifth Amendment privilege is not triggered where the Government merely compels some physical act ...[where the]... individual is not called upon to make use of the contents of his or her mind."

Secondly, "... under the “foregone conclusion” doctrine, an act of production is not testimonial—even if the act conveys a fact regarding the existence or location, possession, or authenticity of the subpoenaed materials—if the Government can show with “reasonable particularity” that, at the time it sought to compel the act of production, it already knew of the materials, thereby making any testimonial aspect a “foregone conclusion.”"

What does this mean?

It would seem that as long as the government does not know - but merely suspects - what your encrypted material contains, i.e., you're not simply surrendering what it already knows about, your encrypted materials are "safe": compelling decryption would violate your Fifth Amendment rights.

The government could, on the other hand, simply seize the encrypted material and, for example, guess at the password - so best not to use passwords like "1234" (see this).
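To see how little protection such a password offers, here's a toy sketch (my own illustration, not any real forensic tool - the salt, iteration count, and four-digit PIN space are all assumptions):

```python
# Toy sketch: why "1234" offers essentially no protection.
# We derive a key from each candidate PIN and compare it against
# the key that "protects" the data. Real disk encryption differs
# in detail, but the guessing loop is exactly this simple.
import hashlib

SALT = b"example-salt"   # made-up salt for illustration
ITERATIONS = 10_000      # real systems use more; it barely helps here

def derive_key(password: str) -> bytes:
    return hashlib.pbkdf2_hmac("sha256", password.encode(), SALT, ITERATIONS)

target_key = derive_key("1234")  # the key our "suspect" chose

# Exhausting all 10,000 four-digit PINs takes seconds to minutes,
# even with deliberately slow key stretching.
for n in range(10_000):
    pin = f"{n:04d}"
    if derive_key(pin) == target_key:
        print("password recovered:", pin)
        break
```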

Ms. Fricosu's case aside, I feel that these are the correct conclusions.

However, this is probably a very serious turn of events for "government" and for "us."

This means, for example, that if you put everything on your hard drive, tell no one about what its contents are, and encrypt the drive, then government agents knocking on your door will not be able to access the information unless you provide it to them.

So theoretically someone could run, say, a Ponzi scheme, like Madoff, but simply emit statements to investors and never divulge the underlying process or details, which remain encrypted.

So long as you violated no other law (a stretch, admittedly) you would be very difficult to prosecute.  Clearly the government could separately piece together your actions, such as by using third-party input like bank statements, and claim that decryption of the Ponzi scheme's records is merely a "surrender" of what it already knows.

For whistle-blowers, on the other hand, things might be improved by this decision.  For example, I take information from a wrong-doer, encrypt it, tell no one, but "blow the whistle" (declare so-and-so guilty of X).  So long as I only declare the "conclusion" from the encrypted data I think it would not be available to the courts...

For "us" this means that in order to prevent scenarios such as the ones I just described yet more laws will be passed criminalizing encryption, withholding information, and so on.

More laws.

More nonsense.

Less freedom.

Friday, February 24, 2012

Your Name, Your Destiny?

Lady with an Ermine
The other day Mrs. Wolf and I were sitting in a restaurant.

Somehow the topic of names and their meaning came up.  A quick search on the iPhone revealed that mine meant "fox."   Some poking around led us to the word "stoat" - another English word (and animal).  The stoat is a type of weasel (Mustela erminea).

(No, we are not talking about Lyra Belacqua's dæmon Pantalaimon though I suppose the picture right could be her.)

Names are always an interesting topic.

Recently I came across this PDF on how first names are a good predictor of income and social status.

On page 21 of the paper you find this: "It would thus seem that first names retain a strong role overall in determining lifetime outcomes even after controlling for a respondent’s labor market experience" as well as this from the introduction: "We find evidence that first name features are independent predictors of lifetime outcomes that are likely related to labor productivity such as education, happiness and early fertility."

Now this is an interesting idea.  How your parents name you, i.e., your first name, predicts much about your future.

Kind of gives new meaning to the old idea of "fortune tellers."

The paper itself is fairly long and detailed but at the end of the day they reach what I think are reasonable conclusions (specifically that your name does convey information about you beyond the obvious).  And the conclusions go well beyond simply guesstimating, for example, one's race from a name.

The paper also talks about how, using a child's name, one can make reliable predictions about the parents and whether the person with a given name will bear a child before age 25!

Interesting stuff because clearly the individual with a given name was, well, given the name and had no choice in the matter.

I guess this means that, with only a couple of syllables, we can encode a number of reasonably accurate and predictable outcomes for your life...

Or can we?

The paper talks a bit about the "popularity" of names and its impact on this.

Now, over the years, I've been involved in naming a few children and from my experience the process is not so straightforward.

It seems pretty clear that a child's name has a lot to do with family, i.e., do you name the male child as a "Jr" for example or name little Suzy after great aunt "Suzanne," and so forth.

And clearly you're less likely to pick a name that falls outside your "racial heritage," i.e., Asian parents would unlikely pick a Native American name, black parents a name from India, and so forth.

In the past new parents were often relatively young, say in their early 20's, so their life's fortunes were less set and the selection of a child's name was going to reflect that as well.  If you're a social person you may gravitate toward names your peers are using for their children, for example.

And it's likely your peers are going to be, well, your peers.  If you're poor it's unlikely you're going to be hanging out with rich folks who are also having children at the same time.  (Though again, in the olden days, moms typically spent at least a couple of days in the hospital, where in my experience you'd find a diverse mix of social, racial and economic backgrounds.)

So to some degree the findings of this paper would seem to bear out reality.

On the other hand the supposed "science" of picking the "right" name for your child has been around for a long time - see this for example, and this site on baby names.

It seems from this that there's a lot of "voodoo" involved - parents wanting the best name for their child.  But picking the "best" name also tells us about the culture and thinking of the parents.  Did they "buy up" by going to a fortune teller or numerology expert to pick a "better" name for their child?

Then, on the other hand, there's a lot of cultural bias today as well.

For example, someone looking at job applicants might associate, say "Asian" names with greater proficiency in math or science than, say, "white" names, or a southern name, like "Jethro" with laziness.

So I think there is a bias from that perspective.

Overall I'd have to agree that, at least to a certain extent, a name does capture a lot of sociological and economic information about your birth situation - and all in only a few syllables.

I'd also have to say that my old friend Kahneman and his "System 1" play a large role in this as well.  People have life experience with people and their names, and they are going to have associations between types of names and races and socio-economic backgrounds.

I personally don't see how this could be avoided because ethnic groups tend to name their children relative to familial tendencies.  And outsiders are going to have experience with this.

A more interesting question is whether this is a meaningful avenue of study.

Clearly family history, ethnic and racial issues, as well as economics all come into play when naming a child.  Child naming is an important part of many cultures.

Why would we expect it not to convey this sort of information?

A young, uneducated white couple in Arkansas is simply not likely to pick a name popular with Japanese mothers.

This seems obvious.

I think that this study leads to a potentially dangerous intellectual idea - namely that simply "hearing" a name creates bias.  Bias as in, for example, a "hiring bias."

If I merely hear your name do I associate it with some racial or ethnic class which I am predisposed to thinking is unsuitable for a particular job?

Following this line of thinking all job seekers will have their names encoded as numeric values so that we cannot determine anything about them from their name.

Of course then we would be biased by the numerological aspects of their randomizing number (here is applicant "100300666" for your consideration)...

Thursday, February 23, 2012

Monk: The Birth of Be Bop

I've been spending some time recently reading about and listening to the music of Thelonious Monk.  Monk was a jazz innovator in the US during the late 1940's and 50's.

I had never heard of him until a few years ago when I watched some sort of black and white movie on his life (probably "Thelonious Monk: Straight, No Chaser" 1988).  Recently I have been reading "Thelonious Monk - The Life and Times of an American Original" by Robin D. G. Kelley.

The movie is very dark and unusual - I suppose because it contains a lot of late 1950's and 60's black and white footage.

What stood out in my mind was Monk's music.

I have always had an interest in jazz - though being uneducated in music I never knew "what" it was - only what I liked.  Until I saw this movie I would have to say that most of my interest in jazz revolved around people like Bill Evans.

Now most jazz is about harmonization and many people don't like it because of this.  Typically modern and popular songs revolve around a melody - people like melodies because they are easy to remember (who can't recall the melody of, for example, Judy Garland's "Somewhere (Over the Rainbow)"?).

Popular music is often very simple in terms of structure as well - many successful songwriters, like Johnny Cash, used only a few chords to create memorable songs such as "I Walk the Line."

Jazz, however, is different.

Certainly it has melody and rhythm - but how it's played is much different.

In Jazz the point of the playing is harmonization (think barber shop quartet).  While someone might be playing the melody other instruments are adding harmony to that sound.  The "point," if you will, being that the melody and song are enhanced by the skill of those adding the harmony.

Jazz also involves improvisation.  Improvisation is playing some parts of the song uniquely while still retaining, for example, the melody.  Unique elements can include extending the melody, harmonization, adding chords based on the melody, and so on.

While Monk is a jazz musician his ideas about harmony are extremely interesting (of course, I am no jazz expert).

Monk creates music based on what are called dissonant harmonies - harmonies that, at least at first blush, sound wrong or out of place.  His genius, of course, is doing this in such a way that the songs are still wonderful songs.

Monk was also a pianist and played a lot of unusual-sounding piano for his songs.

This style today is collectively called "be bop."

To my ear his best recordings are his earliest done for "Blue Note."


For some reason today Monk has been lost to time - at least outside the jazz world.

New York, where Monk lived and played, produced a great number of popular musicians at the time: Charlie Parker, Sonny Rollins, Fats Navarro, and Dizzy Gillespie - just to name a very, very few.

Like all "creative geniuses" Monk was thought to suffer from some form of bipolar disorder - though at least in Kelley's book I find this disingenuous.  (I am tired of reading about "stayed up for days" and "worked through the night" and so on as "mental illness."  Perhaps instead "mental illness" is living a life so boring and free of passion that you have no need to do those things.)

I am still reading the book so I am sure the "mental illness" aspect will be played up at the end.

Both Thelonious and his wife Nellie had the benefit of one of the greatest (if not the greatest) patrons of jazz in the US:  Baroness Pannonica (Nica) de Koenigswarter (1913 - 1988).


Nica was a Rothschild (of the famed Rothschild dynasty) and discovered US jazz in the early 1950's.  Before this she was a pilot and fought the Nazis in France as part of the resistance.  She spent much of her life promoting jazz and supporting jazz musicians, including Monk.

(And no, they were never romantically linked.)

As a patron she literally drove musicians to gigs in her Bentley, helped pay the rent, and much else - including living under the threat of three years in prison for "taking the fall" for a musician friend over $10 worth of marijuana she declared as her own when pulled over by the police (she was ultimately let off).

A few of Monk's recorded performances below (many more are on YouTube):



Wednesday, February 22, 2012

Climate Change and Quantum Noise

From the WSJ response to "No Need to Panic..."


There is a lot of interesting debate in the WSJ opinion pages regarding "climate change."

On January 27, 2012 the WSJ published an opinion piece "No Need to Panic About Global Warming."  Needless to say it generated a lot of interest.

The main points of the article were:

- CO2 is not a pollutant.

- The temperature of the planet hasn't changed significantly in 10 years.

- Why are significant changes in public policy being based on these non-facts?

The article was authored by some sixteen or so different scientists - though none from the "Climate Research" area.

A huge attack followed with a slew of letters and comments decrying how wrong this analysis was, how the scientists lacked standing, and so forth.

All standard fare.

Recently a second opinion column appeared by these same scientists.  This time answering the critics of their first article.

One of the most important comments was that regarding the standard IPCC climate modeling.  The chart above (from the WSJ) shows the various IPCC predictions (starting in 1990) about the "surface global temperature."  Each is significantly wrong and each significantly over-predicts a temperature increase that fails to materialize.

The article goes on to discuss the various political and policy science aspects of the arguments.

But the issue for this post is not "Climate Change."  I mention this because it segues nicely into the real topic: predictability.

Many of you have no doubt heard of the "butterfly effect."  The idea is that a butterfly can flap its wings in Peking and a day later you get rain instead of sunshine in Central Park (or a year later you get a hurricane that destroys New Orleans instead of a sunny day).

Now in point of fact it turns out that there may not really be such a thing as the butterfly effect.

And the reason is quite interesting.

The basic scenario of things like climate modeling implicitly includes the following: I take a "snapshot" of the Earth in, say, 1950.  I run a simulation (A) of it forward to 2012 with man dumping CO2 into the atmosphere, and I run a simulation (B) of it forward without man.  By comparing the two I show that man's CO2 is warming the planet (or, with or without our butterfly, changing the weather).

Simulation (A) shows one thing, (B) another.

The problem is that, at the quantum level (as well as at any level of mathematical precision in the "simulation" of the climate), even if I ran (A) or (B) through a second time I would not get the same result.  (See this blog by Graham Morehead at Nature.)

The reason for this is that at the quantum level there is a lot of "noise" - heat transfer and other non-predictable quantum events will not be the same from run to run - they cannot be.  This is mimicked (perhaps not intentionally) in various climate models by the finite precision of the calculations (imagine 3.14159.......234... in one simulation and 3.14159......235... in another).

The difference between the 4 and the 5 in the two values, perhaps many decimal places to the right, can play a significant role in the outcome down the line.
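You can see this effect with a toy "simulation" run twice (a minimal sketch; the logistic map below is a classic chaotic toy, not anything an actual climate model uses):

```python
# Two runs of the same nonlinear recurrence, differing only in the
# 15th decimal place of the starting value. The logistic map stands
# in for any model with sensitive dependence on initial conditions.
x_a = 0.123456789012345   # run (A)
x_b = 0.123456789012346   # run (B): "noise" far to the right

for step in range(60):
    x_a = 3.9 * x_a * (1.0 - x_a)   # same equation, same parameter
    x_b = 3.9 * x_b * (1.0 - x_b)

# After a few dozen steps the two runs bear no resemblance at all.
print(x_a, x_b, abs(x_a - x_b))
```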

Well, you might say, the model of climate is a mathematical one and it should always give the same result, i.e., it must be precise.

Unfortunately you'd be wrong because of the quantum effects.  The real "earth" is not "repeatable" in the same sense as a precise mathematical simulation.  The quantum effects of everyday things add noise and noise changes the predictability of the model.

And climate models, which are all about heat, have to accurately portray randomness because, in fact, there is unpredictable quantum heat noise in the real world.

(What is heat, after all, but higher energy quantum activity; quantum in the unpredictable sense of not knowing what the vibrating atoms will exactly do.  There are statistical means to talk about this but they are not exact in the predictive sense.)

So what's the upshot of all this?

For one thing, there cannot be a precise mathematical model of climate: a faithful model would have to include quantum thermodynamic "noise," and that noise is inherently unpredictable.

So even if you had a model, running it over and over would give different results.

Nonsense you might say...

Well, try this simple experiment at home.   Take a thermometer and measure the temperature in your living room at five different random points around the room.  Chances are that these measurements will vary.

Why?

Because if the locations are in fact random, some measurements will come out by, say, a heating vent or window, and others will not.

If someone else took five measurements at another random set of points at the same time they would likely get different results than you.

So even if you "averaged" these readings the results would be different.
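Here's a minimal sketch of that experiment (the "temperature field" below is entirely invented - the point is only that random sample points yield a different average every time):

```python
# Simulate five thermometer readings at random spots in a room
# that is warmer near the heating vent and cooler near the window.
import random

def read_thermometer(x: float) -> float:
    # Made-up field: x=0 is the drafty window, x=1 the heating vent,
    # plus a little instrument noise.
    return 67.0 + 7.0 * x + random.gauss(0, 0.2)

for trial in range(3):
    readings = [read_thermometer(random.random()) for _ in range(5)]
    print(f"trial {trial}: average = {sum(readings) / 5:.2f} F")
```

Each run of the loop prints a different average, just as two people with thermometers would.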

The bottom line is that the mathematics and physics of temperature make modeling climate accurately impossible.

So of course the IPCC will be wrong - they have to be.

Tuesday, February 21, 2012

The Dogma of Stupid: US Infrastructure Security

There has been some recent news "chatter" about the threat to our country's infrastructure posed by the hacker group known as "Anonymous."

You've probably seen them, or at least pictures of them, with the "V for Vendetta" masks on.

About six months ago the US Department of Homeland Security warned us about them: "Several racist, homophobic, hateful, and otherwise maliciously intolerant cyber and physical incidents throughout the past decade have been attributed to Anonymous, though recently, their targets and apparent motivations have evolved to what appears to be a hacktivist agenda."  More recently, US General Keith Alexander of the NSA brought these fears to light (see this).

But this really makes me wonder...

For one thing, STUXNET and its friends, like Duqu, have been around a while and are specifically designed to attack the kind of infrastructure you see in "the real world" - Iranian centrifuges notwithstanding.

Unfortunately for all of us, Microsoft Windows and its variants (like Windows XP, Windows 2000, Windows 2003 Server, Windows 2008 Server) as well as things like the Microsoft web server IIS have well and truly taken over the world.  There are probably very few businesses left on earth that do not have some form of Windows-based software in use.

Sadly, many large infrastructure-type businesses, e.g., power plants, waste processing facilities, water treatment plants, and so on, have picked up Windows systems over the years as well.  Now, in and of themselves, the only real danger from these Windows-based systems in a stand-alone environment, i.e., not connected to the internet, is that they might crash - probably an acceptable risk for run-of-the-mill systems in offices.

For industrial controllers, i.e., software running the power plant or waste water system, again not connected to the internet, the danger is even less.  Typically these commercial systems are well tested and debugged before being put into production.

The real danger here is that, for whatever reason, some moron has decided to connect these systems to the internet.

And this is where the fun begins.  Most computers running industrial equipment have a very specific job to do.  For example, in a waste water system this might involve monitoring pumps, pump speeds, valve positions, that sort of thing.  The infrastructure of the plant was designed to work a certain way, i.e., fill tank A, when tank A reaches point X turn on agitator motor C for 10 minutes, open valve D, ...

The Windows-based controller, when installed, is programmed to perform these tasks.  It receives inputs from various sensors, it controls various devices, and it does its job.
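In rough Python, the controller's whole job looks something like this (a sketch only - the tank, setpoint, and device names are invented stand-ins; real controllers run ladder logic or vendor-specific languages, but the shape of the work is the same):

```python
# Sketch of the control loop described above: watch a sensor,
# fire a fixed sequence when a setpoint is reached. Hardware
# reads/writes are stubbed out with prints and a random level.
import random
import time

SETPOINT_X = 0.8  # fill fraction of tank A that triggers the sequence

def level_of_tank_a() -> float:
    return random.random()        # stand-in for a real sensor read

def run_agitator_c(minutes: int) -> None:
    print(f"agitator motor C on for {minutes} minutes")

def open_valve_d() -> None:
    print("valve D opened")

for _ in range(20):               # a real controller loops forever
    if level_of_tank_a() >= SETPOINT_X:
        run_agitator_c(10)
        open_valve_d()
    time.sleep(0.1)               # poll the sensor
```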

Normally these systems had no need to work over or use the internet - and until probably 15 years ago or so they didn't.  After all, why would they?

Updates and monitoring were handled with dialup systems - you remember - 1200 baud or 56K modems - that worked over phone lines.  These dialups spoke specific "languages" and talked only to specific phone numbers, so hacking into them really required a lot of specialized knowledge and access.

But more recently all this has changed.  For many reasons, cost and convenience chief among them, the dogma of stupid drove everyone to connect these systems to the internet.  Now, rather than paying for a special phone line, the remote technician can simply "telnet" in and fix the system.  Diagnostic data can simply flow automatically to remote systems for monitoring.

(This is well documented in the IT world.)

Seems simple enough.

Until Anonymous shows up.

Now Anonymous doesn't have to be very smart to hack these systems - they're probably not really even geniuses.

Why?

Because Microsoft has already done their work for them.

You see, when all these Windows-based systems were connected to the internet, all the well-understood flaws of Windows - particularly in an internet environment - were also exposed.

Long ago Bill Gates (remember him?) thought the internet was a "joke."  "Who would use that?" he wondered.

So when Windows was being built the whole notion of security was sort of, well, not worried about.  Microsoft, you see, liked code to be efficient, not secure.  Secure was expensive because the software, to be secure, had to do more work, and more work means more processing, which means the "user experience" is "slow."

So everywhere in Microsoft's code all sorts of shortcuts were taken to ensure things were fast.  No one ever considered the consequences of this (and to be fair this was not an issue at the time).  Billions and billions of lines of code were written.

All of it buggy.

So for the last ten or fifteen years hackers have been figuring out these bugs and exploiting them.  Hacking into remote Windows systems is trivial - there are websites around that provide tool kits and instructions for writing viruses for Windows.

So Anonymous really doesn't have to do any work to be successful, especially at the hardware level.

On top of this, lots of folks at these various infrastructure businesses (like the local "Water Works") don't worry (or even know) about configuring these controllers for security.  So most passwords are defaults, like "1234."  And once someone figures out one, there are probably hundreds more with the same password.

Do you think they patch their systems?  Do they upgrade from old, insecure versions to the latest ones?

Nope - the budget crisis and their pension obligations have left them all cash-strapped - so no patches or updates for many years.

(Of course, there are many, many other types of hacking as well involving WiFi, exploiting humans, etc.)

All of this is available on the internet by simply Googling "buffer overflow example code windows."

But the US government doesn't want you to know all this - because then they, the collective "government," look stupid.  Stupid for installing probably millions of Windows-based systems at every level of infrastructure and hooking them up to the internet.

So instead they work to create a "dogma" about Anonymous - evil racist, homophobic bastards.  No doubt they will come and eat your children.

The truth, of course, is that Anonymous is simply pointing out that the government (and industry) is simply following the dogma of stupid.  Not checking passwords, not securing things, hooking mission-critical systems to the internet, that sort of thing.

Basically the government (and industry) are the ones at fault.  Hooking up known-to-have-lax-security mission critical systems to the internet.

But you'd never get that (or all this) by reading the Homeland Security PDF.

Now ask yourself.

Does the local nuke plant really have that much better security on their software systems?

There's a good reason, I think, that Anonymous chose the "V for Vendetta" masks as their moniker.

The movie is about how the "repressed" everyday "joe" stages a revolt against a controlling, totalitarian and stupid government.  V is the "leader" of this revolt and wears the "V" mask.

All this is old news.

Anonymous are indeed hackers, but they are resting on the shoulders of those who came before them in terms of technology and technique.

They seem to be more "political" than anything else.

Could Anonymous take down the power grid?

You bet...  And they probably wouldn't even break a sweat.  How many old Windows computers in the guise of "industrial controllers" do you think are wound into important elements of power grid control in the US?  Most significant US power grid failures result from the propagation of a single failure - just watch the Discovery Channel for details (I wonder if Anonymous does?).

How many of these controllers probably have the default password of "1234"?

Figure this out and you will see that the probability of attack is basically 100% and only a matter of time.
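The arithmetic behind that claim is simple (a back-of-the-envelope sketch; the fraction and controller count below are guesses, not measured figures):

```python
# If even a small fraction of internet-reachable controllers still
# uses a default password, the odds that at least one falls to a
# casual scan are effectively certain.
p = 0.02      # assumed fraction still using "1234" (a guess)
n = 5_000     # assumed number of reachable controllers (a guess)

p_at_least_one = 1 - (1 - p) ** n
print(f"P(at least one vulnerable controller) = {p_at_least_one:.12f}")
# prints a value indistinguishable from 1.0
```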

This is the dogma of stupid.

Monday, February 20, 2012

A Delightfully Appetizing Lunch

I wrote this morning in my last post about the epidemic of stupid plaguing our country.

Sadly, I forgot to mention the wonderful new wave of social manners that has come along for the ride.

Today I was at a local mall for a business meeting.  We sat in the food court along with probably several hundred other people.

As we were discussing our project my associate gasped and mentioned something about a "shitty diaper."  He turned away.  Since we were having what amounted to a complex IT design discussion I was not sure what he was talking about or why he looked ill.

"Behind you," he said.

As I turned around I saw a mother changing her infant's (well, actually, probably a one-year-old's) diaper on a table in the food court.

I did not witness the contents of the diaper but my associate apparently did.

Now I don't get out to the mall much but this is apparently a new aspect of modern social behavior I am unfamiliar with.


As you can see this involved a full diaper change - right there on the table - a table that I considered sitting at before choosing the seat I actually selected.  Had I arrived an hour later I suppose our meeting would, well, er, have involved excrement.

This woman looks quite focused on her task.

Of course, the restrooms are perhaps 100 feet away - too far, I suppose, to walk to.  And when I personally checked the restroom (because I did not see fit to relieve myself at the table) there was a modern family restroom available (no doubt with a changing table).


Modern cellphones are wondrous things as far as cameras are concerned - though in this case it only serves to amplify my belief that the very social fabric of our country is coming apart.

My guess is that this will become the norm soon enough.

Fortunately for me I don't go to the mall much...

An Epidemic of Stupidity

The US House of Representatives in the movie "Idiocracy"
A while back I wrote about "The 'Risk' of the Cure."  In it I touched on ideas about the cost of risky things.  The other day I came upon this Milwaukee Journal Sentinel article on opiate pain killers.

Basically the article describes a "big university science" industry that takes money from big pharma and promotes various "big pharma" causes.  In this case the article talks about Purdue Pharma "fraudulently misleading doctors by claiming, with no proof, that its narcotic painkiller OxyContin was less addictive, less likely to cause withdrawal and less subject to abuse than other pain medications."

Really?  Not addictive?

(The folks described in this article were promoting these pain killers for "every day" pain - not their initial focus of post-operative and end-of-life pain reduction.)

These doctors went around promoting this in the context of everyday pain for a decade.

Yet college-educated doctors couldn't put two and two together to see that opiates were dangerously addictive.  The FDA did nothing.

Now let's look at this web site: Opioid Abuse Facts.

Here we find that opioid addiction costs the US nearly $500 billion (with a 'B') each year.  Of course, like I wrote in "The Risk..." this includes all sorts of "lost work," justice, and other costs.  But given the current economic state of affairs half a trillion US dollars is a lot of money.

Couple that with the annual cost of alcohol problems and you are looking at three quarters of a trillion dollars per year.

I think that we, as a people, really need to start re-examining what's going on here.

For one thing I think that today's university environment is producing graduates who are not up to the standards of a few decades ago.  This Washington Post article describes how 36% of college graduates, tested at graduation, show that they have not progressed academically during their four years of college.

The cost of this?

Close to a trillion (with a 'T') dollars annually in the US according to this.

Universities are turning out students who know nothing and have learned nothing.  It's little wonder that they move into post-graduate jobs and then become involved with "big pharma" promotion based on obvious nonsense.

Yet this education is expensive.


More and more it's becoming obvious, at least to me, that here in the USA we spend literally trillions of dollars each year on nothing - on wasted effort: addiction, bogus education, and other similar goose chases.

In each case, at the root, is some sort of basic deception: opioid pain killers aren't addicting or little Johnny has a college degree (when in fact he demonstrably learned nothing).

Then we, as a nation, spend hundreds of billions of dollars covering over these facts.  We employ tens or hundreds of thousands of people in support of the lie.  We build up government bureaus and institutions.  We develop ad campaigns to promote them.  We hire ad agencies to weave these things into our societal fabric.

In the case of opiate pain killer addiction the death rate is around 10,000 per year in the US.  Just ahead of HIV and just behind murder.

Yet there is no campaign to "save the addicts."

No promotion to build upon research to save the lives of these victims.

Even though these addicts are our own family and friends.

The problem here in the US is an epidemic.

An epidemic of stupidity.

People so focused on the "love of money" that they are willing to promote the destruction of others (either their health or their financial health) to benefit themselves.

Clearly universities are of no help because they offer no value in this regard to the student: no ethics, no right versus wrong.  The student doesn't gain a perspective beyond what they started with in high school (where, no doubt, the addictive behavior began...).

Yet we pour good money after bad into these "institutions".

No one says "let's put on the brakes" so we can examine what we are really buying.

A large percentage of our US economy is made up of things that deal with addiction and the results of addiction.

Probably trillions of US dollars each year.

No one says a word about it.  No one fights it.  It's accepted as a simple "fact of life."

Yet forty years ago, in the ancient past, when things were not so "progressive," things were different.  Sure there was addiction - but far less.  Sure there was bad education - but far less.  People knew better - knew how to tell when their money was going toward something bad.  A bad education was a waste of money (but today it's not their money being spent, so no one cares).

Imagine the government deficit for 2012 (some 1.3 trillion US dollars) in the context of these two items alone: addiction and lousy, valueless education.

Each year we are borrowing the full amount to "cover" these costs.

Meanwhile our children's futures are destroyed by addiction and lousy education.

Children who, in the future, will be voting and participating in dealing with these problems.

But will they be equipped?

Certainly not on the educational front.

And likely they will be battling their own addiction problems as well.

So we borrow this money and, instead of building something of value with it for the future, we piss it away on addiction.  On bad education.

And these victims will be leading our country someday soon...

Friday, February 17, 2012

Ötzi the Iceman: No Healthcare

Ötzi the Iceman - from Wikipedia
I read an opinion piece by J. D. Kleinke in the WSJ about "runaway" health spending (Kleinke is a resident fellow for the American Enterprise Institute).

Basically he says the rate of increase of the cost of healthcare relative to everything else is diminishing to the point where one should not say healthcare costs are running away.

He presents a couple of examples in his article pointing out that people are discovering "... a $5 generic drug might work just as well as a $50 branded one" and "... a nurse practitioner in an urgent care clinic can spot an ear infection for $30 a whole lot faster than an emergency-room physician can for $1,000."

Now this is all well and good but what bugs me is that he, and in fact, no one, is really looking beyond the existing paradigm here.

Antibiotics and cholesterol drugs are, as I have written extensively here (example and example), bad for you.  So, while I am glad that you are now harming yourself with less expensive forms of poison and less expensive delivery systems, I am still sad that you are poisoning yourself in the first place.

And this is really the problem with articles like this.

Am I glad that little Jr. is diagnosed somewhere less invasive than an ER?  Of course.  Not only is the local health clinic more pleasant but it's less expensive too.

But no one is asking what's really wrong with little Jr. in the first place.

I contend that in fact little Jr. is actually malnourished and his sinus hygiene is probably being neglected.  No more iodine in his diet.  No fatty acids in his parents' brains to help them think clearly about the problem.  Everyone too busy to do their own research on the internet to see what these antibiotics are really doing.

And on and on.

There is a relative in my life (not my own) who is about to have "tubes" put into his ears.

His parents believe that this is the only solution to his chronic earache problems.

Another relative (a different one - er, actually two) has chronic sinus problems and operations.

Again, thinking only modern medicine can solve these problems.

But people like Ötzi the Iceman (pictured above) lived sophisticated lives 5,300 years ago: complex shoes, implements, clothing and the like.

But what did Ötzi do without the local urgent care?

No doubt he figured out how to solve his own health problems - something that many "moderns", like my relatives, apparently are unable to do.

I've gone down this road so far that I no longer believe there is such a thing as a "sore throat" and that throat pain is simply a sign of sinus problems that have not been addressed.

I used to get sore throats all the time - but now I don't, er, rather, I get them and then I fix them with low-cost tools at home and they just go away.  I only have sore throat pain if I am lazy and don't attend to them.

I used to suffer a variety of running-related problems - but no more.  I think this is related to my on-going study of cod liver oil.  I think that low cholesterol is really a "death sentence" for healthy people and I have written extensively about it before here in "Lower Cholesterol = Memory Loss."

No one questions basic assumptions (like the one that says "cholesterol is bad") and everyone assumes that the flow of information from their video set (phone, TV, whatever) is basically correct and must not be questioned.

My guess is that Ötzi did not carry an iPhone.  Hence no one told him what to eat save for his wife.  No one told him cholesterol is bad.  So he ate it.

Perhaps, though he lived 5,300 years ago, he was smarter than your average Lipitor-consuming Joe (or Joette) today.

Why?

His brain had enough cholesterol to function correctly.

Certainly during his time there were problems like blindness, deafness, and so on.  And modern medical treatment has addressed those.

But today, 5,300 years later, we have taken it too far.  To the point where it runs our lives.

My guess is that if someone 5,300 years in the future studied our time on earth via video they would conclude that we all had problems pooping.  (Thousands of hours of TV time devoted to various products that support varying degrees of "digestive health.")

Another easily curable problem if you simply do the work of understanding how your lifestyle is killing you.

Even today people from primitive societies who come to live in the USA fall ill because of our lifestyle and nutrition problems (see this I wrote a while back for documentation).

Poor Ötzi... (No, it's not me in the picture, but it's close...)

He doesn't know what he's missing.

Thursday, February 16, 2012

Apple, iDon't

Dear Mr. Cook,

I have been a steady and reliable purchaser of all of your computer and iOS-based products for almost a decade.  I use your Mac OS X products extensively in my software development business (we create products for processing PDF files, MIDI, and other performance-related technologies on iOS).  I develop for all platforms (Mac, Windows and Linux) on my Apple devices.

During this time I have always been able to count on Apple's devotion to creating reliable, stable software products and, in particular, products that, unlike Windows, were without all the "distractions" provided by Microsoft.

Each new release of OS X has been stable, worked well, and been a true improvement over the last.  I recall the care Apple put into creating the OS X UI - for example, the trade show efforts to really develop a great new UI for the Mac.

But it seems things changed with Lion.

I am not sure what happened but from my perspective, as someone who uses your products to make a living writing software professionally, you have lost your way.

First of all, Lion is unreliable in many ways as is its software development tool XCode 4.2.  I have blogged about this extensively (see this as an example).  While I understand the need to lead in the "device world" you must also be aware of the danger of backsliding and losing your way in the process.

Issuing products that are "not ready for prime time" is an example of this.  The performance of Lion with its 64-bit OS has been sadly lacking from my perspective.  I have two laptops - basically identical - one running 10.6 and the other running Lion (10.7).  The Lion machine is sluggish.

Software development on Lion and XCode 4.2 is nasty and unpleasant.  Things don't seem to work quite right and seemingly simple issues (which come up frequently) take too many hours to resolve.

On the audio/MIDI side Lion has issues as well (see this as one example).  For example, I have blogged about mounting simple USB-based devices and how things there don't work well.


Mr. Cook, I love your products, but I think the collective "you" is becoming far too infatuated with the iOS world.  As an iOS/App developer I agree that the future is there.  But please, not at the expense of making software development for that platform as painful, stupid and slow as it was in the Microsoft world.

I have thirty five years of experience developing software and I expect that your tools should make it easy to develop for your products.  Up until OS X 10.6 and XCode 3.2.6 you were ahead.

Now, sadly, I must start to at least think about alternatives.  I already have had to keep several versions of OS X-based laptops around to ensure that features that I relied on have not disappeared.  From the look of this WSJ article I will have to worry more about this in just a few more months.

Developing non-trivial Apps with iOS and Mac software is hard - particularly with your App stores and their requirements for compatibility with their respective sets of rules.  For a small developer it seems that you release software so quickly that compatibility with one version is not even finished before the next OS or development platform is released.

In closing I'd like to bring up the famous "Lemmings" ad from 1985 with the blindfolded folks following each other off the cliff.  I feel that Apple today is like that train of blindfolded "Windows" followers - except the wonder of iOS has replaced Windows.

You are "dumbing down" our great OS so that the masses can use it.  Unfortunately, to make really great, smart software you need development tools that have not been "dumbed down."

Those of us developing for your new environment, iOS and now future OS X platforms, need to be able to focus on developing great products and not on relearning a new release environment every six months - particularly if the learning takes more than six months to accomplish.

Sincerely,
The Lone Wolf

Wednesday, February 15, 2012

Ancient Computing, Ancient Art

In about 1976 I worked in the University of Wisconsin Computer Science Laboratory.  It was a basement area in the "Computer Science" building which housed the massive Univac 1108/1110.

For a small place there was a surprising amount of old computer equipment around.

One of the computers was an ancient IBM 7094.  This was a "mainframe" from the era of the Apollo moon missions (it was used to calculate the orbits for the lunar lander) and, as I understand it, one of the first computers to perform "time sharing" - letting more than one user onto the computer interactively at a time.

I worked for Paul Pierce (his museum website here) at the time, the director of the lab.

This computer came with its own generator to convert three-phase power to the bizarre, specialized DC voltage levels it required for operation.  The generator - basically a large three-phase AC motor that turned a DC generator - was about 3 feet deep, 5 feet high and about 8 feet long.  It put out some serious noise while the computer was running.

The computer was, I think, a "36-bit" machine.  (I know the IBM 1401 was a "decimal" computer - it did not use binary but instead "decimal" numbers for memory and processing.  IBM was a big supporter of decimal in the 1950s.  However, I think the 7090/7094 marked a transition for them to binary.)
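
To make the decimal-versus-binary distinction concrete, here is a minimal sketch in Python (purely a modern illustration - neither machine ran anything remotely like this, of course) of the same number stored as pure binary versus binary-coded decimal (BCD), where each decimal digit gets its own group of bits:

    # Illustration only: pure binary vs. binary-coded decimal (BCD).
    # A binary machine like the 7090/7094 stores the value itself in
    # base 2; a decimal machine like the 1401 stores each decimal
    # digit separately.

    def to_bcd(n: int) -> str:
        """Encode each decimal digit of n as its own 4-bit group."""
        return " ".join(format(int(d), "04b") for d in str(n))

    n = 1963
    print(format(n, "b"))   # pure binary:  11110101011
    print(to_bcd(n))        # BCD: 0001 1001 0110 0011

The binary form is more compact; the decimal form maps directly onto human arithmetic, which is part of why IBM favored it for its 1950s business machines.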

It had a large console of switches and lights, a 300 MB disk drive, and a printer.  It ran on punched cards.  The best Paul could do with it at the time was enter and run programs via the switches and lights (actual bulbs).

This was one of the first commercial computers built from transistors.  It came with a library of books containing the exact circuit diagrams and layouts.  I recall him fixing something on one of the boards.  The diagnostics could identify specific transistors that were not working.

Eventually he collected this (?) machine and others into a sort of museum for old computers.

I bring this up because I found this NASA blog.


NASA is retiring their Z9 mainframe.  This is a newer (vintage 2005) series mainframe - a descendant, some 40 years on, of the old 7094.

The Z9 is a far cry from the 7094.  It runs Java, it's 64-bit, it can run Linux, and it runs all sorts of backward-compatible mainframe software.  It represents decades of refinement over those original primitive designs.

But if you think about how these z/machines are designed, they are really just collections of processors, like any other large-scale server farm - specialized processors linked with special hardware so they can work together quickly.

Today what used to be a "mainframe" is now really more of a "supercomputer" - basically specialized hardware that addresses specific tasks typical servers cannot perform: protein folding, simulations, modelling, and so on.

The US still leads in supercomputers according to this article, but others such as Japan are able to build individual computers which are faster.  (For example the Japanese "K" computer is currently the world's fastest according to www.top500.org.)

There have been a lot of consequences to all this.  For one thing, in the past, supercomputers were very large, specialized pieces of hardware.  I had a college friend, for example, who worked on the chemistry of future mainframes.  But alas, as these systems shrank in physical size, so did the need for specialized hardware, connectors and so forth.

Today the "future mainframe" technology is used instead commercially in an IBM spinoff company to build other types of specialized hardware.

Today's iPhone, of course, has more horsepower than the 7090/7094 series of computers.

And in those days the only output was on a line printer.  Unlike today, where there are small hi-res color screens, in those days you only had this:

[image]

and this:

[image]

Of course, the principles are the same as your iPhone screen - only the medium has changed.

Tuesday, February 14, 2012

DMSO - Fifty Odd Years of History

I have been reading a bit about DMSO (Dimethyl Sulfoxide).

This is a chemical, known since the 1800s, that has been ascribed a number of miraculous abilities: curing cancer, stopping pain, stopping sinus problems, healing sores and injuries, and acting as an analgesic, anti-oxidant and anti-inflammatory.

One of the things that is, to me, most interesting is that DMSO has the properties of a solvent and it can penetrate the skin.

Why is this interesting?

Well, for one thing, if you apply DMSO to your skin it flows through your skin and, within a few minutes, you can "taste" it in your mouth and others can smell it on your breath.

The other interesting thing is that if you paint, say, Lugol's iodine on your skin and then paint DMSO over it, the Lugol's is dissolved by the DMSO and transported into your body within a few minutes.

DMSO first came to the attention of "modern medicine" around 1963 when a doctor named Stanley Jacob at the University of Oregon became involved with DMSO (which is a byproduct of making paper) and began to study its medical properties.

DMSO is not, however, without controversy.

Initially the FDA and Dr. Jacob had a variety of issues over how Dr. Jacob was investigating DMSO and its medical uses.  (There are many books and web sites chronicling this: dmso.org, "The Persecuted Drug: The Story of DMSO" (1980) - chapter 6 here, this, and of course Wikipedia.  The bottom line is that it is approved by the FDA for only a few very specific human uses.)

Today there are something like a million published scientific articles on DMSO - none of which really "proves" anything one way or the other about DMSO and its effects on humans.  Is it dangerous?  To some degree, yes - just like anything else.  Does it help people, for example, with arthritis?  Yes.

I recall the controversy from the 1970's.  DMSO was one of the first examples at the time of "big government" interfering with people doing what they wanted to do.  DMSO was seen as a cheap and wondrous chemical that would cure many diseases and diminish a variety of pains, with the evil FDA preventing people from accessing it.

My own interest here is this: considering the effectiveness of iodine, I wonder if using Lugol's or povidone iodine with DMSO, for example, would create an effective treatment for things like MRSA or other bacterial skin infections.

Iodine, from my own experimentation, seems to work well, but if you have a problem under your skin it's more difficult to apply.  DMSO might provide a simple and effective solution.

In researching povidone iodine I discovered that it's one of the few 100% effective treatments for MRSA (see this).

I was also wondering if such a mixture would work for cold sores...

I have used the various cold sore medicines but I really don't know much about them or how they work - plus they are expensive.  Others have written about this as well using other solutions.

So I have been looking around for "medical grade" DMSO - apparently there is no such thing.  The best I can find is the 99.98% pure stuff (not injectable, not a drug) at this site:  www.jacoblab.com.

This site is related to the Dr. Jacob who first became involved with DMSO as I described above.

(Note that DMSO and various mixtures all have been in use in the veterinary world for many years.)

Quality DMSO is cheap, as is povidone iodine: a jug (1 gallon) of povidone iodine can set you back around $18 USD (at Amazon); DMSO, depending on the source, runs about $15 USD for 8 oz or so.

In any case I think that having these solutions handy is a good thing.

I bought a used copy of the "The Persecuted Drug" mentioned above - copyright 1980.  Interesting stuff but not much science.

I think that DMSO, which is a relatively simple molecule, says a lot about how little we know.  A million scientific papers - is the stuff good for you over the long run?

Who knows...

A lot of people are willing to try it based on the hype.

I've known about it for 40 years or so...  And so far nothing concrete.

Still, people swear by it and use it - in some cases use it a lot - and it seems to work.

I am certain that because DMSO is cheap and plentiful there isn't much interest in commercializing it.  What would be the point?  No matter how wonderful it really is...

Monday, February 13, 2012

Education: Falling Further Behind

[image caption: When people still cared about learning and education.]
In the spring of 1963 I had a chance to spend part of the day in my future grade school.

I still recall the experience today.  There was a single, large room in the basement of the local church.  All of the classes were there save for some older kids who were in different rooms.  Each "class" was separated by groups of desks.

There was a single teacher - a nun.

I was assigned a temporary desk where I sat.  Of course, at six years old I had very little understanding of what was going on.  There was "lunch" and recess.

One of the older kids showed me around the playground - right next to the graveyard.

That fall I entered school.  By then a new addition to the church had been completed and the basement school room was no more.  My first grade teacher, Sister Ann Joseph, was also a nun.

All the nuns lived in a convent behind the church, next to the playground and graveyard.  It was a large, forbidding-looking building - dark and scary to a six year old.

Nuns in those days were tough - some had lived and worked in South-Side Chicago schools.  They didn't take any shit.

They also worked basically for room and board.  You didn't pay a nun like you would a "lay teacher."  The nuns provided education as part of their "calling."  Not all the nuns in the convent taught.  I suppose they just lived there until they died - the younger nuns taking care of the older ones.

If you were a lay teacher in 1963 the average salary was around $5,100.00 USD per year.  In 2007 dollars that would be about $34,200.00 USD.  My guess is that in 1963 you got paid only for the time you worked - about 70% of the working days in a year.

In 2007, in the local Penn Hills school district, teachers are paid around $51,000.00 USD per year.

That is almost 50% above normal inflation over the same period.
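
A quick back-of-the-envelope check of those figures - a minimal sketch in Python using only the numbers above:

    # How far above inflation is the 2007 salary?
    salary_1963 = 5100            # average lay teacher salary, 1963 USD
    salary_1963_in_2007 = 34200   # that same salary expressed in 2007 dollars
    salary_2007 = 51000           # approximate Penn Hills teacher salary, 2007 USD

    premium = salary_2007 / salary_1963_in_2007 - 1
    print(f"{premium:.0%} above inflation")   # prints "49% above inflation"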

And then we find things like this: states passing laws to require kids to be held back if they don't pass specific tests in third grade.

In my day few kids were "held back" - maybe one or two I can remember in my time in Catholic grade school.  To be held back your behavior had to be horrendous and you had to work hard at not trying in school.  Particularly because the nuns viewed your learning as something God had ordained, which they were required to carry out properly or burn in hell.

Almost an educational jihad, if you will.  And the nuns were serious about it.

But now the jihad seems to be about money - "federal funding."  Federal law, which provides money to school districts, requires that children reach a certain level of proficiency on standardized tests in order for the district to get the money it expects.

Now this federal/school policy seems to me to be at odds with "learning."

Almost as if the adults have set a bar for the children to reach without any consideration for what the child actually learns (save test-taking skills).  If the child fails to reach that bar then the child is punished.

Yet isn't the job of the school to educate the child - whether it's hard or easy, whether it takes a little time or a long time?

In 1963 the class was divided into three groups based on your ability to learn: fastest learners in one group, slower in the next, slowest in the last.

This ensured that groups moved along at a comfortable pace. 

Today, again through laws created by adults, this is not allowed.  Everyone must pretend to learn at the same pace because that way everyone is perceived by the adults as "equal."  But this is simply like pretending not to keep score in a T-Ball game - the kids know the score - it's only the adults creating a fiction that the score "doesn't matter."

Which is what's been done to education.

Now children will be punished by law if they "fail to take the test properly."

Seems pretty Draconian to me - kids are not all the same and certainly children struggling to keep up should not be stigmatized by law...

But that's what you get when the jihad becomes about money.

Learning ability and intelligence, just like looks, vision, hair color, facial forms, body type, stamina, overall health and so forth, all vary from child to child.  And each child may have gifts in some areas and deficits in others.

Isn't it our job as adults to make sure the child learns?

Another thing common in the "olden days" was to direct children into areas where they were best suited.  For example, in my day farming was a key area.  Some kids just liked farming and weren't interested in "higher education." 

Why educate them as if they were going to college?  Would college skills help them if a cow fell and was injured in the field?  Many kids in my class were interested in various trades - cars, that sort of thing.  Again, perhaps they were gifted body men and liked doing that work.

It seems as if education is now a business run by detached adults who care little for learning and care a lot about funding and money.

Little wonder we are falling behind the rest of the world in education.

Friday, February 10, 2012

Orwell's Big Brother in Your Pocket

[image caption: Criminals implicated in deforesting Central Africa (Nature)]
Modern communications - Twitter, Facebook, email, cellphones, etc. - offer a bounty of mechanisms to convey our thoughts and feelings to others: our spouses, friends, family.

But I think that there is a new and darker side of this: conformity memes.

First off, for those who do not know, a meme is a word meaning "idea" or "thought."  It was coined by the British evolutionary biologist Richard Dawkins in his book "The Selfish Gene."  Dawkins used it to describe how things like the idea of an "arch" in building are transmitted across time and culture.  In today's world it's seen more as something which is almost alive or takes on a life of its own, e.g., global warming, the destruction of rain forests, the evil nature of George Bush, that sort of thing.

Typically a meme enters our culture inadvertently but spreads as those who associate themselves with the underlying idea spread that idea.

For example, Al Gore's movie "An Inconvenient Truth" and its relationship to world wide interest in global warming and its consequences is a good representation of the meme "global warming is killing us."

The idea is simple - CO2 pouring out of man-made devices is poisoning our world.

As I wrote in my series of posts on Kahneman, the "System 1" portion of our mind grasps this simple threat and brings it to the forefront of our consciousness, e.g., "the sky is falling."  We really cannot help this.

Now, even fifty years ago, prior to widespread communications, an idea like this - or a movie, or even a book - would have had to find a means to gain general traction in the mind of the public.  Certainly Al Gore could have made this movie fifty years ago, and its content, which does not rely on anything really current other than the notion of man using fossil fuels, would be just as relevant.

However, there were far fewer societal mechanisms that would allow these ideas to propagate.  For example, movies by-and-large only showed up in movie theaters or on commercial network TV.  Yes, there were public television stations and 16mm movies at the local Rotary Club, but to reach even these "markets" a movie would have to clear significant hurdles: the cost of duplicating film, getting the movie into the right hands, and so on.

Save for 16mm black and white "stag" films from that era, there were probably no other types of movies that made it on even a semi-regular basis into people's homes.

Similarly for audio.  Virtually all audio was "manufactured" and "distributed" either via records and record companies or via the radio (AM, FM, shortwave).

The world was divided into "news" - facts, like the "Cleveland Plain Dealer" farm report - and editorials.

Notions of "the sky is falling" had to beat out necessary news, more immediate fears like the "Cuban Missle Crisis" and nuclear proliferation, and so forth.

There was a high hysteresis level that screened out all but the most important news.  (And during this time news, being reported on FCC-licensed radio frequencies, was required to be "balanced.")

But over the last fifty years the "cost" of getting a "sky is falling" meme launched has gone from tremendous (both in terms of real dollars as well as the "cost" to overcome the media outlet hysteresis for such stories) down to basically zero.

But there is more.

In the 1972 Olympics eleven members of the Israeli Olympic team were killed by a terrorist group known as "Black September."  This was remarkable to me because, prior to that, you could really only read about these types of events in newspapers or watch "second hand" B/W news footage with Walter Cronkite.

However, in 1972 the Olympics, which held large sway over the public's imagination, brought this event directly into our homes via live TV coverage.

Terrorism as a "meme" moved from abstract black and white newsprint to color video.

With each new step in the "media" capability of the public - VCR, home video cameras, YouTube, cellphone video - this "meme" has made its way closer and closer to our very own pocket or purse.

But there is another effect here which is also interesting.

As the meme of "terrorism" has spread and multiplied via media the very "definition" of what terrorism is has "flattened out" as well.

In 1972 "terrorism" required a "nation" or "nationals" to conduct it - it was done with guns or maybe bombs and airplanes.

But over the years the notion "flattened out" to include much more.  For many years in the 1980's, for example, while you might be able to carry a bomb onto an airplane due to lax security, you always thought twice before making a "bomb" joke that someone might overhear.

Columbine brought "terrorism" to our schools and with it metal detectors.

Today metal detectors are common fare in public buildings of all sorts, including many high schools.

And the "defense" against terrorism is the vigilant pursuit by the "authorities" of anything that Big Government considers such: jokes, comments, writings, and so on.

So by casting a very wide net the idea is to capture terrorists before they strike.

And because terrorists are now prevalent in our very pockets via smartphone videos we all "know" the cost of it.

Or do we?

According to this you are 390 times more likely to be killed in an automobile wreck than in an international terrorist attack.  In 2010 some 32,708 people died in automobiles according to Wikipedia.  That works out to some 83 or so deaths of US citizens due to international terrorism per year.

A number dwarfed by lightning strike deaths in the US (some 750 deaths per year).
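
The division behind that figure is easy to check - a minimal sketch in Python using only the numbers cited above:

    # Rough check of the "390 times more likely" claim above.
    auto_deaths_2010 = 32708   # US automobile deaths in 2010 (Wikipedia)
    relative_risk = 390        # "390 times more likely" figure cited above

    terror_deaths_per_year = auto_deaths_2010 / relative_risk
    print(round(terror_deaths_per_year, 1))   # prints 83.9 - the "83 or so" above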

Yet today even Tweeting about "blowing up an airport" in general, i.e., tweeting to someone else in anger over a delayed flight, can get you arrested, at least in the UK, as a terrorist.

Memes in this form are not limited to "terrorism."

Another case is the "deforestation" meme.

We all know what this is: giant corporate loggers chainsawing down huge, old-growth rain forests to install strip malls.

In the 1970's the term was only used in reference to things like Agent Orange and Vietnam.

But today the "destruction of the rain forest" is drilled into every 6 year old's head at daycare, on TV, on the internet.

Today "deforestation" is a meme that can include, well, even primitives living in ancient hunter-gather societies as well: see this Nature article.

Really, early African humans (some 3,000 years ago) "deforesting" entire swaths of central Africa?

Fifty years ago this would have been heralded as progress - a primitive society pulling itself from the "stone age" into modern times.

But no more - the meme of "progress" is today evil and outmoded (save for preventing terrorism, I suppose).  Much better to have those ancient Africans in the Nature article living in squalid, primitive huts, dying young of disease and starvation, than have them "deforesting" the central African plain.

And the Nature article describes this discovery as "implicated" - implicated like "I was implicated in a terrorist plot" (never mind the silly "science" described).

No, these are ancient peoples trying to live a better life through smelting iron and farming.

Not terrorists or common criminals implicated in some Ponzi scheme.

The new communications models mean that merely thinking outside a commonly accepted meme like airport terrorism or deforestation can cause big trouble.

These new memes are very powerful because modern governments act on them (such as in the Tweeting case I mentioned above) against citizens.

And you can bet that the "discovery" of those evil primitives implicated in deforesting Central Africa millennia ago was also government-funded research.  No doubt these primitive folks (pictured above) are contributing substantially to Global Warming as well - burning fires, smelting iron, chopping down forests.  Perhaps they will be implicated in a vast African "global warming conspiracy" and have to pay reparations for their acts.

If your thinking does not "conform" to what these memes stand for, you are in trouble.

Before our very eyes George Orwell's 1984 is being implemented right in our own pockets.