
Thursday, March 31, 2011

Global Dimwits

Lest you think that this blog is just random rambling, let me point out that many topics covered here are well (years) ahead of the curve.

Recently I came across this article at www.wired.com about jet contrails and their contribution to global warming.  Basically it discusses the impact of high cirrus clouds on global temperature and how the creation of these clouds from jet contrails is impacting climate.

Readers of my blogs would not have found this surprising.

About five years ago I wrote "Global Dimming" on my personal blog.  The premise is based on this statement: "DR DAVID TRAVIS (University of Wisconsin, Whitewater) We found that the change in temperature range during those three days was just over one degrees C." (Originally taken from a BBC program here.)

During 9/11 all commercial planes in the US were grounded for several days.  Dr. Travis is pointing out that during this time the daily temperature range changed by just over one degree (1° C).

Now this is what I like about modern science.

Rather than investigate an actual occurrence of contrail-driven climate change, such as the incident around 9/11, the scientists in the Wired article talk about their climate model (ECHAM4) and how to simulate this effect there?!?

Clearly the model does not predict this effect from whatever input it takes in, so what use will the model be after the fact?

And this is what's wrong with science and scientific endeavors today.

Models don't tell you anything that is not derived from the assumptions you put into the model in the first place and basically, from my perspective, they just confirm (reaffirm) what you already thought you knew when you created the model.  Certainly models might be useful in pinpointing specific problems or providing different perspectives, but they do not "create new knowledge".

Models for things like airplane wings and flight do a very good job of predicting behavior.  But these are closed systems in the sense that air flowing over a wing has a very small set of fixed properties - humidity, density, temperature, and so on.  No airplane will suddenly find itself flying through water or jello and the models would utterly fail were that the case.

But climate is an open system.  That is, previously unknown factors - like cirrus clouds forming from high-altitude jet contrails, which have a direct, observable effect on temperature - were not known to the makers of the ECHAM4 climate model.  They came into play "after the fact" and hence the model did not consider or predict them.

So if the model did not "know" about this effect how could it accurately predict the future?

How could it accurately predict anything at all?

And this is what I point out in my original article.

I've had discussions about this with self-appointed climate "experts" in the intervening five or so years.  All either expressed amazement at the effect or were certain it was already "in the model".

On March 18th I posted this image of fallout from Fukushima reaching the US:

As time goes by and the truth "leaks out" about what's really happening, the maps become more dire:
And then there is this nifty animation of radioactive releases from Fukushima blowing around the world:

http://www.nytimes.com/interactive/2011/03/16/science/plume-graphic.html?ref=science#

So what does this mean?  Are we in any more danger today than we were a couple of weeks ago?

No... The objective level of danger has been consistent all along.

What's changing is the "model" of the danger.  And the model is not based on science or engineering or anything else.  It's based on "chatter" and "noise" in the public news space.

Like the climate model the "nuclear fallout models" are simply that - models.  They cannot predict what they cannot know - namely how much and what type of radioactive particles will be put into the air at Fukushima.  And, like climate models, only after something happens can the models be updated to include whatever was not previously known.

The bottom line is that no one knows nor can know what's going to happen.

Clearly plutonium is already escaping from these plants.

As to where it goes, no one knows....

Wednesday, March 30, 2011

Seeing Inside Fukushima for $1,500 USD


It seems very likely that one of the reasons that no one knows what's going on inside the Fukushima nuclear reactors is that no one has the right technology to do so.

For example, if these were my personal nuclear plants, I would very much want to know why things were the way they were.  So how would I do that?

One way would be to use something like this:
These quadro-copters are sold by Ascending Technologies in Germany and offer a way to view things from a great distance.  The commercial versions (listed here) offer the means to photograph or take videos remotely.

As you can see in the videos the copters are quite agile and would be able to maneuver around something like the wreckage or damaged buildings at Fukushima.

Now it's quite possible that the radiation levels would pose a problem for these devices but my guess is that for a short-term view of the situation they would work fine.   The levels may be so high that you might not even want the devices to return.

The copters are sold in Germany here and, for the higher-end models, cost around 1,000 euros.

Slap on a nice $100 USD WiFi camera (some examples here) and there you would have a way to see into the places in the plant where no man can go.

One also imagines that for the underground areas, where there is little room to maneuver a flying machine, a cheap RC car with the aforementioned WiFi camera would again do the trick.

Reading the various press reports about the Fukushima situation makes it clear that at a minimum seriously radioactive water is leaking out of the spent fuel pools and probably out of the reactors (probably through broken pipes, fittings, or a cracked containment).  No doubt the radioactivity is so high near these areas that it would be difficult for people to go in and examine these problems first hand.

To me this would be a solution.

Secondly, I think it's a mistake for there to be so little information available outside Japan on this issue.

Having cameras and other technology probing the insides of the reactor complex is only going to help someone figure out solutions.  Certainly there are likely things that TEPCO doesn't want to be seen but at this point I think any issue of embarrassment or legality is moot.

There are experts around the world with computers who could access these videos and make suggestions and/or recommendations.

Japan is also expert in various types of robotic technologies (I have also posted about other companies here with advanced robots).  Once leaking pipes or fittings are found how is anyone going to be able to go into the plant and fix them?  With plutonium and other poisonous radioactive compounds being emitted into the air and water no human would last long enough to do any good.

So robotic repair technologies will probably be the only answer.  For example, there are many dexterous industrial robots in use for building cars.  Take some of these and modify them: add a moving platform, cameras, welding tools, etc. so that they can repair things.

Just my $.02 USD...

Tuesday, March 29, 2011

While My Reactor Gently Weeps

It's interesting to me to watch how news of the disaster in Fukushima is "leaking" (no pun intended) out.

One point of interest is the concept of "melt down".

A "melt down" is the ultimate sin of the nuclear industry - made famous by Jane Fonda in 1979 in the movie "The China Syndrome".  The idea, of course, is that when the nuclear reactor insides melt down into a pool of seething hot radioactive mess the molten mess then proceeds to burn through the bottom of the reactor containment vessel and head straight for China.  This was also the fear of most 1970's environmentalists protesting these plants at the time.

No one wants to be associated with a "melt down".

Number two on the list of evil sins is "core breach" or "containment failure".  In the US (unlike Russia) all reactors have a "containment dome" surrounding them.  Unlike Fukushima, where you see the innards of the reactor systems inside the exploded outer building shell, US reactors are built inside a giant reinforced concrete dome.  The idea being that should something like Fukushima happen here the explosive hydrogen gas, radioactive steam, etc. would all be "contained" in the dome.  (Though I doubt having a dome full of radioactive water, steam and explosive hydrogen would be any easier to manage.)  Obviously this system is also designed to contain things like radioactive Iodine, Cesium, Uranium and Plutonium should there be problems.

So what do we see at Fukushima?

Well, first of all, no one will ever admit to any of the big nuclear sins - that would be catastrophic - both in terms of panic and in terms of finances.

No, instead we get news information leaking out bit by bit - like a leaky faucet.

So let's look at today's evidence: Plutonium in the soil.

The first question you have to ask is "how did it get there?"  Of course it's only "a tiny amount" and "not dangerous"…

But plutonium essentially does not occur on earth naturally - it's a man-made element - bred inside nuclear reactors (reactor number 3 in Fukushima).  So if it's not inside the reactor any more it must have gotten out somehow…?

How could it do that?  Well no one is admitting to a "containment breach" - but let's look further.

If you read the news reports they all talk about radioactive water (at a very high level) in the basements of the reactor buildings.  Now, taking their word that there was no "containment breach", we read that there may be problems with the cooling systems (looking at the pictures I have been posting it's hard to imagine how much of anything would be functioning inside the damaged reactor buildings, much less complete cooling systems).  They admit here and there that there might be some "problems" with the cooling systems, the pumps, the pipes, and so on.  Well - if water and plutonium are leaking out I would agree there must be a problem somewhere.

So while the reactor containment might not have failed (with a hole or other problem) apparently that's not the same as the pipes and things connected to the reactor containment failing.  They keep pumping water (sea and fresh) into the reactors to "cool" them.  Where does it go?

Now apparently the distinction of molten fuel burning down through the containment versus dribbling out through broken pipes is rather a fine point.

The reactor building explodes, the piping, pumps and cooling systems fail, water and plutonium leak out - but, the reactor containment doesn't fail.  (Apparently the MOX plutonium/uranium fuel has also come out of the fuel rods in reactor number 3.  It's normally inside a pellet that's inside a metal fuel rod.  The half-life of Plutonium-239 in MOX is 24,000 years and just a few milligrams of Pu-239 escaping into the atmosphere will contaminate whatever it falls on for tens of thousands of years.)  So it seems that the insides of the reactor systems and the containment might be described as "in disarray" but don't worry - no big sins have been committed.

Perhaps fairies transported it to the outside...

I suspect that as time goes on all of the gory details of a "containment breach" and "melt down" will leak out (again no pun intended) - but it will be done on Friday night at 11:00 PM after something else happens so that the details will be lost on the public.

Clean, safe nuclear energy.

Monday, March 28, 2011

Inside Fukushima

More images at NGS and Gawker.
Years ago I worked in the defense software industry.  We worked indirectly on various things related to flight control, radar, space flight and so on.

During that time I had a lot of opportunity to talk to and work with people that were developing front line systems - flight avionics (things having to do with controlling an airplane in flight).  In those days (the early 1980's) there weren't flight simulators to test things on before the airplane, missile or helicopter first flew.  Things had to be designed and built right "out of the box".  The flap controller on a commercial airliner, for example, had three completely separate computer systems to interpret the motion of the flap controls.  At least two out of the three had to agree on what was to be done in order for the flaps to move, and the system could still operate if only one computer was functioning.
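
To make the voting idea concrete, here is a minimal sketch of 2-of-3 voting (my own illustration, written in Swift; the real avionics code was nothing like this and the names are hypothetical):

  enum FlapCommand: Equatable {
      case hold
      case move(degrees: Int)
  }

  // Accept a command only if at least two of the three independent
  // channels agree; otherwise take no action.
  func vote(_ a: FlapCommand, _ b: FlapCommand, _ c: FlapCommand) -> FlapCommand? {
      if a == b || a == c { return a }
      if b == c { return b }
      return nil
  }

  // Two channels agree, so the command is accepted:
  let command = vote(.move(degrees: 5), .move(degrees: 5), .hold)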

Now these systems were well defined and "closed" in the sense that input, e.g., from controls operated by pilots during flight, was limited.  The flaps would only move so far up and down, the controls only so far back and forth, hence there were limits as to what the inputs and outputs might be.

People that worked for me in those areas eventually moved on and did some work for a local nuclear energy company working on nuclear reactor simulators.

The idea behind the simulators was to create the experience of operating a nuclear reactor without any of the danger of melting it down.  Now, as far as I know today, and certainly not in the 1970's when Japan's reactors were built, computers do not operate nuclear reactors.  People do.  The vast control rooms you see in images, with thousands of dials, controls and gauges on the walls, were set up for operators to control the reactor directly.  So the simulators provided responses to the input of the operators.

Now on the flight control side, as I said, things were fairly limited.  The limits, for example of how much stress the wing would take, were calculated and tested outside the flight software.  (Somewhere there are videos of the first Boeing 747 wing being stress tested to failure.  This test tells the engineers exactly how much stress the wing can endure during flight.  The wing fails in the test at just about the point the engineers predict.)  Things like the flap controllers are designed to prevent the airplane from operating in a way that comes even close to putting that much stress on the wing.

The aerodynamics of flight are a fairly well understood engineering discipline these days.  For example, it's possible to model the plane in any attitude (yaw, pitch, roll), to calculate the airflow over the surfaces of the wings, and to determine how the plane will (or will not) recover if placed in that situation.  Plus, in flight, a pilot's experience (as well as gravity) helps to right any wrong.

On the nuclear side, things are not so clear cut - particularly in training.   There are hundreds or thousands of valves, controls, and sensors in a nuclear reactor.  There are engineering modifications, particularly as in Fukushima, which have been made over four decades of operation.  There is staff turn-over.  There are numerous regulations and requirements placed on the design, function and operation of the plant.  There are engineering issues (errors, omissions, design changes) that impact the functioning of the plant and its systems.

Simulating this is not so easy.  Sure, it's easy to simulate things as they should be.  But what about an incorrectly connected or broken sensor?  (Three Mile Island, for example, began with a problem in a secondary (non-critical) system and was followed by a "stuck valve" which operators failed to recognize.)

How would this be represented in a simulator?  It can only be simulated if someone can A) figure out that it would be a realistic problem, B) determine what would happen should it occur, and C) accurately predict how to fix it.

It's very unlikely anyone would be able to create or guess all possible such scenarios and program them into a simulator.

The bottom line here is that something like a nuclear power system is very unpredictable and very prone to human error.  Not just operational error but systemic error, in the sense that bureaucratic meddling over time in terms of design, function, safety, etc. creates a compounding of unpredictability that no simulator can express.

Leaving operators to guess at problems and fixes - just like Three Mile Island.

So in Fukushima you have the same issue, except compounded at a national level - regulators, companies, people, protestors, all meddling in a forty-year-old design and creating an unsafe "safe" system.

(Simple explanation of the Fukushima problems and why the three safety systems failed here.)

Friday, March 25, 2011

Fukushima Reactor Wreckage

I've been thinking about the picture associated with this post.  It's from a video on the WSJ (www.wsj.com) site - a recent picture showing two of the Fukushima reactor buildings.

If you follow the railing along the bottom of the picture (silver/grey) to the very right of the image you will see a small bulge.  This is the hood of a car (to give some idea of scale).

You can see the reinforced concrete reactor building smoking in the foreground.

One has to wonder what sort of explosive force is required to destroy a reinforced concrete structure like this.  (Reinforced concrete contains steel reinforcing rods, e.g., "rebar", to make it stronger.)

The second reactor building, to the right of the smoking, collapsed one, doesn't look to be in much better shape.

We have been reading how the plants have been reconnected to the power grid in order to run things like cooling pumps.  These pictures might give one second thoughts about the integrity of the cooling systems.





The above picture, from a slightly different angle, provides additional perspective on the extent of the damage.

Had this been a regular coal or gas-fired plant you'd probably see construction equipment on-site removing debris and clearing access for repair crews.

Unfortunately, in the case of nuclear power, this is not possible.  I read one article where workers at these plants accidentally stepped into a puddle - a puddle with 10,000 times the normal amount of radiation - and were immediately burned and had to be taken to the hospital.

One puddle.

Here is a schematic of what's inside the wrecked building.


As you can see the top of the reactor appears to extend up through the second-to-topmost floor of the building.  From the pictures it's pretty clear that for at least one reactor the building no longer has that floor.

Supposedly the reactor containment (the kind of metal tube in the center) is still intact.  This would prevent nuclear material from the core leaking out.  I'm not so sure from these pictures that things around the core are still in good shape.

From the perspective of the plant operations personnel things are pretty grim.  Here is a link to images from inside the plants.  From these images it looks like some of the control rooms have been destroyed along with the reactor buildings.

And finally, a picture from the air.

Thursday, March 24, 2011

US Nukes and US Geographic Faults

Nukes and US Faults
I find it interesting, for something that is supposed to be as safe as nuclear power, how little is really known about its "safety".

Here's what I mean...

The Wall Street Journal has an excellent article here on safety zones surrounding US reactors.  In the US there is a 10 mile "evacuation zone" around each of the 100 or so nuclear power plants.  This zone, based on a 1978 report on possible melt down scenarios, is the radius around the plant that would have to be cleared.

Now, interestingly, a zone that size wasn't nearly enough even in Japan - where the US recommended a 50 mile radius.

Now why is a 50 mile radius good in Japan for US citizens but not here in our own country?

I have no statistics but I would guess that a very significant percentage of the US urban population lives within 25 to 50 miles of a nuclear plant.  So, given any type of situation like that in Japan, it would be very, very hard to evacuate everyone.

For example, I used to live in New York City.  The Indian Point nuclear plant is located about 35 miles up the Hudson River from Manhattan.  According to the article about 20 million people live within 50 miles of that plant.  Having seen the effect even a simple loss of power has on the city (watching tens of thousands of people walk across the Brooklyn Bridge) it's hard to imagine what the effect of a serious nuclear problem would be.

Now you might say that NYC does not have earthquakes so what's the worry?

Well, you probably haven't heard of the New Madrid earthquake of 1812.  This earthquake was powerful enough, according to eyewitnesses, to cause the Mississippi River to run backwards for a while.

However, there is one nuclear reactor very near the epicenter of this quake and many not far to the north in Illinois.

So are we safe?

Another interesting bit of data is that there are only simulations to tell us how nuclear fallout travels through the air and sea.   Since no one really knows and no one is likely to do any tests to find out (things like radioactive Iodine and Cesium are heavy) these "radii of safety" are just guesses.

That's right - your safety from nuclear fallout is just a guess.

And no one really knows where there might be another earthquake, e.g., southern Illinois.  Given the  unpredictability of earthquakes to begin with it seems very unlikely that any bureaucrat really has any good idea what's safe and what's not.

It's also clear from the recent warnings in Japan regarding radioactive Iodine-131 in Tokyo's water supply that the effects of fallout can reach far outside these radii - for example, polluting water which will become useless for human consumption downstream.

Something that no one had predicted.

Unfortunately nuclear power's safety is not just questionable, it's downright scary.

And then there is the issue of liability.  Companies like GE have worked hard to limit the liability for nuclear disaster (see this).  What this means is that the risks of building these plants are ultimately carried by the taxpayers (whatever is not covered by the utility's insurance carrier).  So mistakes will be very costly for all of us.

So you really have to wonder about the safety of all this.

(Map is a composite of the WSJ nuke map and this USGS US Fault map.  After creating this I can see that Illinois, North and South Carolina, Louisiana, and the West Coast (California, Oregon and Washington state) are all potential problem spots...)

Wednesday, March 23, 2011

Judge Chin to Google: Don't Be Evil

Judge Denny Chin
So I have been reading about Google's plan to scan all the books (both with claimed and unclaimed copyright ownership) it can get its hands on in order to create a giant digital library.  This has been an on-going project dating back to 2004.

What's interesting to me here is what's going on around the actual project.

First of all, the mighty "Don't Be Evil" company, according to Judge Denny Chin, began scanning books into its digital library project without permission of the original authors (see this, page #4).

Now I would have thought that violating copyright law was something "evil".  After all, the author, publisher, and so forth all have an economic interest in published works - many make their living by writing.  So wouldn't taking away someone's livelihood be an act of "evil"?  After all, Google does not own these books.

Then there is the "violation of the law" aspect.  Copyright law, for whatever you think it's worth, has a well established place in society.  While that place has been pushed on pretty hard by the digital age, nonetheless it still garners a certain amount of respect - except, apparently, at Google.

Now Google has interesting ideas about all this.

According to this article, Eric Schmidt, CEO of Google from 2001 until recently, seems to think that "evil" is a scalable concept and that sometimes doing small evils is okay if the larger result is for the "greater good".

Now I find this whole discussion very interesting.

The "ends justify the means" is a form of Consequentialism (see this).  Consequentialism is a belief system where one accepts that only the results of one's actions are the basis for judging actions.

Here Google's claim is that a world wide digital library would be nice to have: people in remote areas would have access to books that would otherwise not be available to them, scholars access to inaccessible works, and so on.  I don't think anyone would disagree that it would be a "nice thing" to have.  But Google's notion here is obviously either very naive or very insidious.

By this same argument one could assert that putting every company's software into a giant library without their permission would also be a good thing.  Certainly it would allow everyone to access and review said software without the owner's permission.

I wonder if Google would like their software placed into a public library for free searching and free excerpts?

The problem with consequentialism is that it's simply a matter of convenient self-serving justification for the perpetrator of a crime.

I can argue that by breaking into Fort Knox and distributing all the gold to the poor, society would be a better place.  And measured by my distributing billions of dollars to the poor, I would in fact be a really great, stand up guy.  The only problem would be that I was breaking the law by committing any number of federal, state and local crimes.

Google further tried to perpetrate this crime by creating a faux "settlement" with groups claiming to represent various groups of authors.  Not all authors - but groups representing authors.

This is another "sleight of hand" aspect to consequentialism being exploited by Google.  When you cannot directly jump to the "greater good", create a settlement with a group that ostensibly represents "everyone" - even though it doesn't - and then claim all is well.  This looks like everyone's issues are addressed when in fact the "group" is usually in a position to benefit from the "new" greater good - especially over those whom it does not represent.

Fortunately Judge Denny Chin sees this Google grab to usurp copyright for exactly what it is - which is why he is requiring that authors "opt in" rather than forcing those who do not want their works involved in Google's effort to "opt out".

No doubt this creates a legal headache for Google, with its 12 million scanned books, because now it will have to clear copyright on each and every one.

Somehow I don't feel bad...

Tuesday, March 22, 2011

If Then Else

One thing I have learned over the years in software development is that it's usually either A) the problems you don't foresee or B) the quick, last-minute change that's totally obvious that causes the most problems.

A lot of things that I work on tend to have long lifetimes.  This is good because it generally allows me to have a sustained business relationship as well.  On the other hand things that I worked on, say, five years ago are much less obvious today than they were at the time.  I have also learned to be very careful not to "second guess" what I was thinking at the time.

So recently a customer from another country asked me to make a change to a system that had been in production for a couple of years.  The system is large and complex and controlled by some XML configuration files.  I have a duplicate system here so I was able to work through the requirements, modify the XML configurations files, and test the results.

With that completed I had to transfer the edits I made locally to the XML provided by the customer that was in use for their system - which I did.

The only problem was that I only saved some of the changes into the file.  I made all the changes but inadvertently forgot to hit "save" one last time.

Of course it was my intention to save the file before copying it to email - but I didn't check it completely.  The first few edits made it but not all of them.  When I opened the file to check I saw the first edit or two (as they were near the top of the file) so I assumed the rest were there.

Another thing I have discovered over the years has to do with the concept of if-then-else.

When working on programs you will often write code that looks like this:

  if (A) then B

This seems simple enough.

But as time has gone by I have discovered a few things...

For one, most programming languages allow B to be several statements.  Usually they are grouped by special characters, e.g., '{' and '}' that mean do all of the grouped things.


  if (A) then { E, F, G }


Again it seems simple enough - in this case E, F and G are all processed if, and only if, A is true.  Otherwise none of E, F and G will be processed.


Over the years, though, I have found that I should write this:

  if (A) then { B }


even if there is only B to perform if A is true.

(I get flak for this from other programmers.)

However, what I have discovered is that during the ensuing years after code is written you often come back and add things, e.g.,


  if (A) then { B, C }



So my idea, which works well, is to always assume that there will be more to do if A is true and to always write { B } even though I could write just B.

The reason is that later on - usually years later - I will need to add something that needs to happen along with B, but I will forget in my haste that the form without '{' and '}' only allows one thing to be the consequence of A, e.g.,

   if (A) then { B, C }

means if A is true do B and C.

  if (A) then B, C

means if A is true do B, but always do C regardless of A.

Not the same thing.

You might say "well, that's just your own fault" - and it is.  But consider this.  Usually when I am creating a software product I have a lot of time to do it - time to carefully consider what's going on.  In later years, during maintenance, I am making changes at the behest of a customer.

This means I am pressed for time because often there is some production work being held up and some new work that cannot be done without the change.

So it's easy to be less careful and make foolish mistakes in haste.

Adding the '{' and '}' makes it easier to get things right later.

It's always completely clear where what is done as a consequence of A starts and ends.

 

Monday, March 21, 2011

Radiation, Food and Iodine

It's interesting to see how the insidious spread of radiation in Japan's food supply is so quickly dismissed when news of the invasion of Libya takes the front page.

I think the idea is that "the masses" - that is us - need to be focused elsewhere so that when it comes time to upgrade our "clean, safe" nuclear reactors we will have forgotten all about Japan and its problems.

Currently Japan is experiencing an increase of Iodine-131 in its food supply.

Iodine-131 is a direct product of fission - about 3% of the byproducts of fission in uranium and plutonium.  Though its half-life is only eight days it is still very dangerous.

Iodine-131 decays through the emission of beta radiation (described in this post).  Since iodine is a critical nutrient required for the functioning of the human body, and because most (maybe 94%) of the people on the planet are iodine deficient, radioactive iodine is readily absorbed by the body.  The radioactive iodine travels directly to your thyroid where its beta radiation destroys this important gland, leaving you without a thyroid.

This is why you see people rushing out to buy "Iodine pills" in places like California.

Now there are some interesting issues related to this.  First of all the daily recommended dosage of iodine for humans is 150 mcg (that's micrograms).  As I have written in the link above I believe this is far too low - people should consume more like 2 mg (milligrams) per day (more than ten times as much).   The "Iodine pills" sold to prevent radioactive iodine damage to your thyroid have a dosage of between 65 mg and 120 mg (that's from Googling around for "iodine pills dosage radiation") - about 30 times even that 2 mg daily figure.
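
To keep the units straight (my arithmetic, not from any of the linked sources): 150 mcg is 0.15 mg, so 2 mg per day is roughly 13 times the official recommendation, and a single 65-120 mg pill is roughly 30-60 times the 2 mg figure - over 400 times the official daily amount.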

Now the Japanese are one of the few people who are generally not Iodine deficient.  This is because of their diet.  They eat a lot of seaweed and seafood-based products. Since seawater has a high concentration of natural iodine the life in it does as well - and since this is the bulk of what Japan consumes in its diet they generally are not iodine deficient.

So actually they are probably less likely than most other cultures to be damaged by Iodine-131.

On the other hand, those that are consuming the "Iodine pills" in fear of the Japanese radiation are very likely getting way too much iodine - which is also dangerous.

The problem with all of this is that once the radioactive isotopes like Cesium-137 and Iodine-131 are in the environment they are hard to detect without special equipment and/or labs.  So that means the average Joe at home would likely never know if he were consuming radiation-tainted food.  Radiation is also not routinely checked in food - either in Japan or in the USA.  So unless a producer or government agency specifically works to detect contamination it will likely never be discovered.

All this leads to yet another reason why nuclear power is bad:  radiation contamination is not something that is easy, cheap or quick to detect.  So if and when it occurs no one will likely know.

Secondly, radiation in food accumulates in your body - that is, if it's part of an element like iodine, which is required for proper bodily function and health, your body will absorb and retain it.  The radioactive decay of that retained element is what does the damage to your health.

Sadly as I watched coverage of the crisis in Japan no one bothered to explain any of this to the public.  While the underlying physics are complicated and will make most people's eyes roll back into their heads the common sense aspects of it are not hard to grasp.

But I suppose factual explanations don't sell news.

Friday, March 18, 2011

Cesium-137 in your Seafood

Cesium-137 blowing safely out to sea..!!
As the Japanese reactor crisis continues we now find that radioactive byproducts are blowing safely out to sea...? (Depiction at the right from Wikipedia.)

Well, (no pun intended) let's see now...

Reactors (and associated things like pools full of fuel rods) with problems tend to behave in predictable ways.  They emit gases like xenon, tritium and krypton.  They emit radioactive cesium-137, strontium, tellurium and iodine.

Radiation from these things comes in the form of alpha, beta and gamma.

Alpha radiation is basically a helium nucleus (two protons and two neutrons) - a helium atom with its electrons stripped away.  Beta radiation consists of electrons.  Gamma rays are high-frequency electromagnetic radiation produced by nuclear decay and sub-atomic particle interactions.

Alpha radiation, since it consists of relatively heavy atomic nuclei, is easily stopped by various everyday materials (clothing, skin, and so on).  Though in large or highly energetic doses it poses danger (because the particles are moving very fast) it's generally something that you can easily avoid, say by going indoors.  On the other hand, material emitting alpha radiation is dangerous if inhaled or swallowed.

The reason for this is that the material is then inside the body and the alpha particles are directly striking the internal surfaces of your lungs or digestive system.  Because these materials may remain in place over a period of time, the alpha particles can damage your tissues.

Beta radiation involves electrons striking the body, rather than atomic nuclei.  Electrons are smaller and lighter and therefore, with the same amount of energy, travel faster and can penetrate further into tissues than alpha radiation.  As the electrons penetrate the body they can strike molecules and alter their physical structure.  In particular, beta radiation can alter DNA to cause spontaneous mutations, e.g., cancer.

Shielding from beta radiation is more involved because beta radiation can penetrate about 100 times further than alpha particles, i.e., clothes, buildings and so on are insufficient.

Both alpha and beta radiation cause relatively localized damage, i.e., radiation burns, because they are relatively non-penetrating.

Gamma radiation is the most dangerous of the three.  Gamma radiation is electromagnetic radiation - like light or radio waves - and is highly penetrating and ionizing.  Extensive shielding (like thick lead walls) is required to stop gamma radiation.

Gamma radiation, like beta radiation, can alter the structure of molecules and cells within the body.  Because it's penetrating, the effect occurs throughout your entire body, i.e., the rays pass through your body altering cells and molecules throughout.  The result is radiation sickness.

The cesium-137 from Fukushima is "blowing harmlessly out to sea"?

I wouldn't be so sure.  First of all cesium-137 has a "half-life" of 30 years.  That means if you have one gram of it, in 30 years only half of that gram will still be cesium-137 - the rest will be the other things that cesium decays into.
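
To put that in formula form (my own illustration): the amount remaining after t years is N(t) = N(0) x (1/2)^(t/30), so after 30 years half remains, after 60 years a quarter, after 90 years an eighth, and so on.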

Cesium-137 also easily dissolves in water.

So as this cesium blows out to sea it will be integrated into the aquatic life - fish, plants, animals.  It will become embedded in their flesh, cells and tissues.

And for 30 years anything or any one that consumes any of these things will have cesium-137 embedded in them as well because cesium-137 is not altered by digestion.

Once the cesium is so embedded it can emit gamma radiation directly into your body destroying your tissues from the inside.

Cesium-137 in the air is readily breathed in and, once in your lungs, sticks to the mucous membranes, again emitting gamma radiation directly into your internal organs.

For hundreds of years these materials will exist in the aquatic environment - posing a risk to us, our children, our grandchildren, our great-grandchildren, and so on.

Safe, clean nuclear energy.

Thursday, March 17, 2011

Diablo Canyon - America's Fukushima?

America's Fukushima? - On shore one mile from a fault.
So I am watching the coverage of the Japanese nuclear problems on CNN (Anderson Cooper 360).

Dr. Gupta is there.

He explains about the type of suits the people remaining at the Fukushima nuclear plants are wearing.  He shows the suit (some sort of plastic/nylon) and a respirator.

Then he explains the respirator will prevent the wearer from breathing gamma radiation...

(Homer Simpson moment "Duh!")

Gamma rays are exactly that - rays (not particles or gases).  They are like light or X-rays - not bad odors.  They travel in straight lines.   They penetrate plastic and nylon and disrupt molecules inside your cells - much like X-rays.

You don't "breathe" them in...  Good thing Gupta is a doctor.  Who knows what he might have said otherwise.


Now they show pictures of the empty "cooling pools" where the spent fuel rods sit.  These are at the top (!) of the buildings housing the reactors.  The pools (formerly glowing nuclear blue with Cherenkov radiation) are now empty.

Of course the fuel is still "hot" - radiation hot.  Nuclear fission in a reactor only consumes one or two percent of the fuel in the rods.  The rods would last longer except that the fission reaction in the reactor splits apart various atoms in the rods, and the byproducts of these splits absorb neutrons and slow down the reaction.

So the rods in the pools are still 98% full of fuel.

Since the rods sit in pools above the reactor vessels I imagine that the extra radiation from the not-fully-in-control reactor below is contributing to the heating - particularly after the water is gone from the pool.

You have to ask yourself "Where did the water in the pools go?" and "Why is it not there?"

No doubt the plant requires electricity to run the pumps which run the water into and out of the reactor as well as into the pools.  Electricity not available in a disaster.

(Terrorists take note: Crashing airplanes into the mess of wires and pools and equipment outside the reactor will create a lot of havoc.)

What are these governments thinking?

What are these power companies thinking?

What were the engineers thinking?

Oh wait - they aren't and weren't...

Clean safe nuclear energy.

Now I'm watching Glenn Beck explain how when all the fuel pellets fall out of the melting rods and land at the bottom of the pool everything is perfectly safe.

Glenn Beck is ignorant of finance and science.

The helicopters are now supposedly dropping water into the pools - it's convenient they are at the top (!) of the reactor buildings - with fire retardant equipment.  Large helicopters with big tanks of water.

Most of the water in the live RT video blows away in the wind.  I doubt it's doing much to fill up the pools.

Science, government and media at its best...

Lost.

GE BWR Mark I reactors - the top designers resigned in the 1970's because they thought the reactors would not stand up to natural disasters (video here).

Spent fuel is safe - at least that's what they say...  Let us ship it through your neighborhood.

After all of this things will be the same - SNAFU.

My sympathies are with the Japanese people.  This is a horrific situation.

Sadly, my point here is that ignorance is obviously compounding the problem.  Everyone pretends the water dropped by the helicopters is actually doing something, but it's clear from the videos that it cannot be - so why do it?

Politics and new media require immediate gratification - which means that an added burden to placate these elements is added to every decision.

What you are watching on TV could easily be happening in Diablo Canyon in California - which, by the way, was not required to have an emergency plan for an earthquake (or tsunami I suppose) even though it's located less than a mile from an off-shore fault.  I suppose there is some good news in that the plant does not have a GE BWR Mark I reactor.

Wednesday, March 16, 2011

Why Nuclear Power Cannot Be Safe

Let's take a look at the nuclear power industry from a slightly different perspective.

There are a number of ways to categorize safety but I would like to lay out my own personal idea with regard to nuclear power.

First of all, no one is "safe" - there are, at any given time, any number of potential natural disasters that could bring harm to you.  For example, being struck by lightning.  From this site the odds are around 1 in 200 that your house will be struck by lightning and 1 in 280,000 that you will be.  There are similar risks and statistics for flooding, earthquakes, and so on.

What's more interesting is that these are odds people live with and accept every day without much consideration - mostly because the events are relatively unlikely.  There are many people who live in floodplains, for example, who know they could suffer damage at any point, yet they still live there.  Similarly with earthquakes, etc.

However, since these types of disasters are natural, i.e., types of disaster that people have little or no control over, as well as unlikely, people seem to have little problem with them until they strike.

The next type of "safe" is more interesting.

How "safe" are you in your car?

According to this site if you drive an average amount for 50 years your chances of dying in a car wreck are about 1 in 100.  So if I drove 7,500 miles per year x 50 years = 375,000 miles, that works out to odds of roughly 1 in 37,500,000 of dying on any given mile driven.   My chances of a dangerous non-fatal accident are much greater.

But the point here is not the numbers - it's the idea that when people drive their own cars they feel much safer - perhaps safer than they actually are.

During the oil crisis in the 1970's speed limits were reduced from 65 mph to 55 mph.  There were also measurably fewer accidents.  Yet people wanted the higher speed limit because they valued their time more highly than the potential risk the higher speed limit represented.

Next we can look at things like bridges, tunnels and other public works.  These all carry a risk of failure and who can say they haven't driven through a tunnel or over a long bridge and felt a twinge of fear?  However, these types of structures have been present in human lives for several millennia so people accept the risk with the convenience they bring.

Now let's think about those that create these things: cars, bridges, and so on.

If I make a mistake driving I could kill myself, perhaps a few others.

If I make a mistake designing a car or an airplane the consequences of that mistake can affect far more people - and people, if they don't like the odds my design offers, still have the option to buy a different type of car, travel by train instead of flying, or find another route that does not involve a bridge.

Nuclear power, to me, represents a much different kind of risk - one that people do not normally face.

First of all the consequences of a failure, as we see at Chernobyl, Fukushima, or Three Mile Island, have an enormous radius of impact.  In the case of Chernobyl a radius of hundreds of miles.  Even a bridge failure or airliner crash impacts a very tiny area by comparison.  The only thing with an equivalent radius of impact would be a meteor or comet strike.

Second, companies and governments are accepting the risk on behalf of the people.   Very few people want a nuclear power plant in their backyard but somehow companies and governments always find a way to get them built.  And unlike a highway or bridge that no one wants in their backyard either, the nuclear power plant puts those near it squarely in the radius of impact - whether they like it or not.

Third, the impact is long term.  If an airplane crashes into the suburbs people die, houses burn.  But the next day life goes on - more airplanes don't fall out of the sky each day.  In a nuclear accident life cannot go on for years, centuries or even millennia.  It's as if airplanes kept crashing day after day for centuries.

Fourth, there is a lot of complex technology involved.  Bridges and buildings haven't (and don't) change all that much - the Romans built the Colosseum with concrete not unlike buildings today.  Though humans are new to flight, animals have been flying for millions of years - and I can build a hang-glider from simple parts I can buy at hardware stores or lumber yards.  Nuclear power, on the other hand, was invented only in the last century and cannot function without computers, electronics, complex and dangerous supply chains for fuel, complex metallurgy, and so on.  Each of these involves opportunities for failure and risk on its own as well as contributing to the overall chances of failure.

Fifth, there aren't enough nuclear plants in the world to create a proper experience base given the level of objective danger.  There are a total of about 500 plants (operating or under construction according to this) in the world.  Compare that to 500,000 or so aircraft in the world (a thousand times more) or 50,000,000 cars (a hundred thousand times more).  You know a lot more when you do something a thousand or a hundred thousand times more often than something else.

So to summarize: Nuclear power is dangerous because

1. The radius of impact is enormous.

2. Risk is not taken on by the actual stakeholders (the population living near by).

3. The effects of disaster are long term.

4. It's overly complex and therefore subject to the significant effects of human error.

5. It hasn't been around long enough nor is there enough of it for people to have a true understanding.

Tuesday, March 15, 2011

Clean, Safe Nuclear Energy - Big Government Lies

The Japan Times reports that the nuclear crisis continues at Fukushima, Japan.

As I wrote yesterday I have known about issues and concerns with the GE BWR Mark I (Boiling Water Reactor), the type installed at Fukushima, for decades.  This type of reactor, present in about 1/3 of all US nuclear facilities, was developed in the 1950's by GE and the Idaho National Laboratory.

Here's what I don't like and why I don't like it:

Inside the reactor core are columns of fuel rods - they are about 4 meters long (13 feet or so).  Inside each fuel rod is a set of uranium pellets - stacked up to fill the rod.  The rods are laid out in a grid-type arrangement.  The rods sit inside a metal vessel.  Water is pumped into the vessel - it passes between the rods moving upward and turns to steam.  The steam drives the turbines to make electricity.

Individual pellets by themselves are not dangerous and do not react.  For a nuclear reaction to occur a "critical mass" is required.  So, for example, say two grams of uranium were a critical mass - meaning an uncontrollable nuclear reaction will occur if that much uranium is present in a single pellet.  In the fuel rods the uranium is broken down into 1/4 gram pellets (just for illustration).  These are stacked up in a rod roughly 13 feet long and because there aren't 2 grams of uranium in one place the rod is basically inert.

A nuclear reaction occurs when neutrons from an atom of uranium hit other atoms of uranium causing them to emit more neutrons.  So one atom emits two neutrons - which hit two other atoms which each emit two more neutrons, which hit four atoms which emit eight neutrons, until a critical reaction occurs.  (This is like growing by powers of two: 1, 2, 4, 8, 16, 32, ...  things get very large very fast.)
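
To put a number on "very fast" (my own arithmetic): after n doublings there are 2^n neutrons, and 2^10 is already about a thousand, 2^20 about a million, and 2^80 about 1.2 x 10^24 - so only a few dozen generations separate one stray neutron from a runaway reaction.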

A reaction is controlled by placing material near the fuel that absorbs neutrons, so that the number of neutrons grows much less rapidly - the water in the core and control rods containing elements like cadmium or boron.  These materials offer places where neutrons emitted by the uranium atoms are harmlessly absorbed.

When you have a lot of these rods located near each other without such absorbers present, the reaction will sustain itself until all of the uranium fuel is depleted.

Thus in a nuclear reactor the reaction proceeds when the reactor is fully loaded with fuel unless either 1) control rods have been inserted or 2) water is placed in the system.

So my first problem is this: Unless something is done to moderate (slow down the rate of reaction) the reactor will overheat and the rods holding the fuel will melt.  Melting rods are bad because the pellets inside will fall down into the base of the reactor vessel and react uncontrollably.

In the GE BWR design this requires that control rods be present and/or water be present in the reactor vessel.  The control rods enter this type of reactor from the bottom under hydraulic power (generated by electricity).  If the rods have been withdrawn and there is an incident like an earthquake, it's possible that the rods will not return to the "fully inserted" position which would shut down the reaction.  (Other reactor designs insert the control rods from the top and use gravity in an emergency to pull the rods back into the reactor - to my mind a better idea.)

So the problem is that when there is a catastrophic failure of some sort, like an earthquake, that damages the power available to run the water pumps and/or the control rod assemblies, you have a situation like you do in Japan.

Now the reports also say that they are having trouble keeping the rods "covered".  Normally this would mean water flowing through the system which would remove heat in the form of steam and moderate the reaction.

So if the control rods were all the way "in" then one presumes the reaction would stop completely.  However it's either the case that both water and fully inserted control rods are required for this, or there is a problem with the water and/or control rods.

We know that there isn't enough power to drive the temporary water cooling pumps - which is why there was a problem after the earthquake in the first place...

So I think that the real problem is the basic BWR reactor design itself.  To my mind a good design would be one where the default configuration of the reactor is to not react unless something is done.   And further, in case of any problem, the reactor should automatically return to its default "do nothing" state.

What we are seeing in Japan is the case where the reactor cannot be "shut off" for some reason.  There may be leaks in the cooling system leaving the core uncovered, the control rods may not have returned to the fully off position, or a combination.

This in turn is causing the reaction to continue and boil away any water.  In addition, the overheated fuel cladding reacts with the steam (and radiation breaks down some of the water), producing hydrogen.

The recent explosions at the plants are said to be hydrogen based.

Pumping in sea water would slow the reaction - but since they are doing this and the reaction clearly has not stopped, there must be other issues as well - like broken cooling pipes or other unreported problems.  (And it's unlikely we'll ever hear the full truth.)

BTW - "Officials in President Barack Obama's administration sought to reassure Americans that nuclear power is safe..." according the the San Jose Business Times.

Really?

What part of my scenario is unclear?

What part of a reactor design that melts down in its "default" configuration is "safe"?

If this energy is so clean why is the US naval fleet off the coast of Japan withdrawing to a safe distance as the crisis proceeds?

Monday, March 14, 2011

Boiling Water

I have been reading about the nuclear situation in Japan.

Sadly it would appear that the troubled plants are all based on the old GE "boiling water" reactor model.  As I have written here before back in the 1970's my family was involved in successfully stopping the construction of a nuclear power plant across the road from our house.

During this time, as a high school student, I was very interested in the reactors and the technology they used.  There were two main types: pressurized water and boiling water (the latter made exclusively by GE).  Pressurized water reactors are designed around the concept that a sealed loop of high-pressure water is the only thing that comes into contact with the nuclear fuel.

The water is heated by the reactor core, pumped around to what is called a heat exchanger where it is cooled, and then pumped back into the reactor for reheating.  This water is highly radioactive and is basically sealed in the high-pressure piping.  The heat exchanger is used to heat water to make steam in a secondary loop which drives the turbines that generate electricity.

The boiling water reactors work differently.

In these reactors the nuclear fuel heats water into steam that drives the electricity-generating turbines directly - hence the name "boiling water."

The safety issues, which are no doubt what's causing so much trouble in Japan, are related to the fact that a single break in the piping can cause the coolant to flow out of the "boiling chamber".  When this happens the fuel rods overheat and can melt - just as we see in Japan today.

Today of the 110 or so nuclear reactors in the USA 35 are "boiling water" types (see this).

Sadly, even as a kid in high school I knew that "boiling water" reactors were not as good as the "closed loop" types.  Though I do not recall how my father and others in the area who were against the plant were able to find this out.

The reactors in Japan were probably not designed to withstand the types of earthquake shocks being felt there - certainly not the 8.9/9.1 being reported.  These quakes appear to be damaging the cooling piping and causing the coolant to drain from the "boiling" vessel.  When this occurs the fuel begins to react more intensely, heat up and melt the fuel rods.

Once the fuel rods melt they become distorted and can lead to a number of additional problems - deformation, loss of control, and so on.

One of the issues that was obvious in the early seventies was how the industry could know - at that point without the benefit of experience - that the safety systems in these plants (all nuclear plants, not just "boiling water" plants) would work in an emergency - particularly an unforeseen emergency.  The arrogance of the designers and companies was fairly obvious - even to a 15 year old kid.   It's unfortunate that the worst fears are being realized.

Six USA nuclear plants are configured identically to those with problems in Japan (see this).

Lexigraph has customers in Japan and we have worked extensively with the Japanese for several years on a variety of projects.

We hope and pray that things turn out well.

Friday, March 11, 2011

Designing for iPad, iPhone and iOS

As I have been working with the iPad I have learned a great deal about how this type of platform supports graphics.

The basic idea with the iPad (and iPhone and the other iOS devices) is that it uses special hardware to do a lot of things which would normally be done in software.  For me the model that works the best is Adobe Illustrator.

(As opposed to the "manual" model you see in the associated image.)

Basically you can think of what happens on the iPad display as if it were the pasteboard in Illustrator.  There are various objects which can be presented there - images, lines, boxes, paths, etc. - each with its own transparency, position, color, transformation and so on.

In Illustrator you think of each object (line, box, placed image) as a separate item - controllable with the little blue bounding and control points which appear when you select it.  By dragging these control points or the object itself you can change its shape and position on the pasteboard.  (I don't know that there is a name for these objects in Illustrator other than "items.")

On the iPad it's easiest to match this model with what is called a UIView.  A UIView is an iOS class that's used for manipulating the display.  I like to think of each UIView as a single Illustrator object because, just like Illustrator, each UIView has its own position, scale, rotation, transparency, and so on.

Just like Illustrator you can group these UIView objects together.  In the UIView world you do this by making a UIView a child of another UIView - sort of like Group and Ungroup in Illustrator.  You can also have nested UIView groups.  Moving the parent of the group moves all the children - just like moving a set of grouped objects.  Similarly you can scale and manipulate the UIView in other ways and the nesting and organization of the subviews is preserved.
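
A minimal sketch of that kind of grouping might look like this (assuming it runs inside a view controller; the view names and the "logo.png" file are made up for illustration):

  // A parent "group" view with two children - roughly like grouping
  // two objects in Illustrator.
  UIView *group = [[UIView alloc] initWithFrame:CGRectMake(20, 20, 200, 200)];

  UIView *box = [[UIView alloc] initWithFrame:CGRectMake(0, 0, 100, 50)];
  box.backgroundColor = [UIColor redColor];

  UIImageView *logo = [[UIImageView alloc] initWithImage:[UIImage imageNamed:@"logo.png"]];
  logo.frame = CGRectMake(0, 60, 100, 100);

  [group addSubview:box];    // box is now a child of group
  [group addSubview:logo];   // so is logo
  [self.view addSubview:group];

  // Manipulating the parent moves, rotates and fades the whole "group" -
  // the children keep their relative positions, just like grouped objects.
  group.center = CGPointMake(300, 400);
  group.transform = CGAffineTransformMakeRotation(M_PI / 8);
  group.alpha = 0.8;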

Like Postscript RIPs, UIViews are very fast at doing some things and very slow at others.

The reason for this is that each UIView is associated with a display hardware context.  On the iPad there is both a computing processor and a graphics processor.  The graphics processor is used to move around bit planes (RGB color with transparency).  It can scale, rotate and manipulate the planes very efficiently in real time.

Each UIView is its own bit plane inside the graphics processor.  The graphics processor gives the computing processor a name for each object but the actual bits are stored inside the graphics processor.  So instead of the computational processor moving around the bits it simply tells the graphics processor the name of the object it wants to move (for example) and where to move it to.  The graphics processor does all the work.

This technique is used to create the various types of animations you see on the iPad display.  In the graphic arts world you can think of animations on the iPad display as if they were automated or scripted Illustrator operations, e.g., select object A, move it to position B while rotating it from orientation C to D.  The UIView supports these automated operations directly whereas in Illustrator you would have to use a scripting technology such as AppleScript.
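
In code, one of these hardware-assisted animations might look something like the following sketch (assuming "logo" is an existing UIView; the numbers are arbitrary):

  // "Select object A, move it to B while rotating it from C to D" -
  // the graphics hardware interpolates the in-between frames.
  [UIView animateWithDuration:0.5
                   animations:^{
                       logo.center    = CGPointMake(400, 300);
                       logo.transform = CGAffineTransformMakeRotation(M_PI / 2);
                       logo.alpha     = 0.5;
                   }];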

On the other hand loading a UIView with a given image is very slow.  You can place PNG or JPGs into UIViews, but the cost in terms of performance is very high - so high in fact that you do not want to do it any more than necessary.  The reason for this is that on the iPad the computational processor is needed to load in the image, convert it to a form the graphics processor can use, and then load it into the graphics processor along with any other information needed (display position, transparency, etc.)
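
So in practice I try to load each image once, up front, and then just move the resulting view around - something along these lines (again inside a view controller; the file name is hypothetical):

  // Expensive: the computing processor decodes the PNG and hands the
  // bits over to the graphics processor.
  UIImage *photo = [UIImage imageNamed:@"background.png"];
  UIImageView *photoView = [[UIImageView alloc] initWithImage:photo];
  [self.view addSubview:photoView];

  // Cheap: from here on the graphics processor just repositions the
  // bit plane it already holds.
  photoView.center = CGPointMake(512, 384);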

The Quartz display system on the iPad is basically a Postscript-style (PDF) imaging engine.  So in addition to images you can load what amount to (or use actual) PDF or Postscript-style descriptions of images.  These must be converted from PDF or Postscript representations to images which can then be loaded into the graphics processor.
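
For instance, a sketch of rendering the first page of a PDF with Quartz (the file name is made up, and in real code you would cache the document rather than reopening it on every draw):

  // Inside a UIView subclass - drawRect: runs on the computing
  // processor; only the finished bitmap goes to the graphics processor.
  - (void)drawRect:(CGRect)rect
  {
      NSURL *url = [[NSBundle mainBundle] URLForResource:@"spec" withExtension:@"pdf"];
      CGPDFDocumentRef doc = CGPDFDocumentCreateWithURL((CFURLRef)url);
      CGPDFPageRef page = CGPDFDocumentGetPage(doc, 1);

      CGContextRef ctx = UIGraphicsGetCurrentContext();
      CGContextTranslateCTM(ctx, 0, rect.size.height);   // PDF coordinates are bottom-up
      CGContextScaleCTM(ctx, 1.0, -1.0);
      CGContextDrawPDFPage(ctx, page);

      CGPDFDocumentRelease(doc);
  }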

For me, then, the best way to design iPad-type application interfaces is to use Illustrator and Photoshop.  I create items in Illustrator or Photoshop (which are treated as "Placed" in Illustrator) and organize them to meet the needs of my UI.  Once the design is complete I simply map each element into a UIView.

Thursday, March 10, 2011

Messages to Nowhere

So as my Apple work continues I have had occasion to deal with a number of problematic issues involved with Objective C.  I touched on the nil issues in "Apple Rage - Cocoa Suck."

However, at this point I need to complain a bit more - mostly on philosophical grounds.

First of all my perspective is somewhat different than what you might see on other blogs (here or here).  These discuss the practicality of doing the following:
 
  Foo * fred = nil;
  [ fred doSomething ];

Now what you find out is that sending a message to nil is supposedly defined to do nothing.  However, there appear to be some problems when you use the result of sending a message to nil to do something else, e.g., as in the following:

  [ printer someSelector: [ fred doSomething ] ];

In this case we are sending the "value" of "[ fred doSomething ]" as the parameter for someSelector:.

Now from what I can read (see the official Apple documentation on the matter) the "value" of "[ fred doSomething ]" is also supposed to be nil all the time.
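
In other words, with the hypothetical classes above, all of the following is "legal" - nothing crashes and nothing happens (the someCount selector is made up just to show the scalar case):

  Foo * fred = nil;

  [ fred doSomething ];                // quietly does nothing
  id result = [ fred doSomething ];    // result is nil
  [ printer someSelector: result ];    // printer just receives nil

  // For messages that return plain numbers the runtime (on current
  // OS versions, as I read the docs) hands back 0 instead.
  int count = [ fred someCount ];      // count is 0 - not an error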

Now all of this discussion is related to what you can "do" with the code, i.e., a discussion about what actually happens.  But when you think about writing software you have to think about the number one issue: cost.

Since I am an old geezer I tend to think about this a lot more than many youngsters - particularly since I have customers who have had systems I designed, wrote and installed running for upwards of a decade.  Now you might think that software today would never last a decade - think of iPhone apps, games, and so on - but you would be wrong.

Many times the application might not last that long but the software components that it's built from, which may be substantial, do.  For example, in the world of gaming a lot of work is involved in creating libraries of functions to handle 3D-related issues, e.g., physics (so that when the character lets go of the gun it falls to the ground in a predictable way), color (red is still red), animation, and so on.

So there are several issues with the nil thing.

First and foremost is the basic idea that this behavior is fluid (reading the docs there are changes from OS X 10.4 to 10.5 in this regard).  Now, if I write an app, say a medical app, I don't want someone to break my code by merely upgrading their computers.  The fact that some amount of "discussion" can change behavior so fundamental in a commercial software release is troubling.

The example I like to use is that of the flap controller on a commercial airliner (cause I'm a geezer and fiddled around with realtime software in the heyday of Boeing's push for reliability) or perhaps a "drive by wire" controller in a car.  In either case when I push the controller in the driver's compartment I expect the flap or brakes or steering to do what I intend.  A lot of professional work goes into these sorts of systems to make sure that that's exactly the case (see Toyota's vindication).

A pro writes reliable software.  To do this requires that the underlying system on which it's based is predictable and reliable.  It's also nice if the code you write does not depend on things which are easy to get wrong - particularly if "wrong" is subtle.

A pro understands that testing and maintenance are 90% of the cost of software.

Imagine the engineers at Toyota.  They wrote code to handle the braking system in a "fly-by-wire" mode (meaning that the brake pedal controls an input to a computer and the computer controls the actual braking).

Cars crashed.  People died.  Their software was NOT at fault.

I imagine that there was a lot of anguish (the sphinctometer was probably off the scale) in the engineering center where this system was developed when these problems initially came to light.

Another problem is that "expected behavior" is being masked by the behavior of whatever part of the Objective C runtime system is compensating for sending messages to nil.

Suppose I write code as follows:


  [ [ Image loadFromFile: @"foo.png" ] display ];

Now what this would do is display the "foo.png" image.  But what if the "Image loadFromFile:" returns nil in the case of an error, i.e., the image is missing or bad?  So since telling nil to display is okay I now don't see the image and I don't know why.  In fact, I can't even tell in the debugger what the problem is because I can't step into the code for Image (assuming Image is some Apple system code).
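
The only real defense is to stop trusting the silent nil and check explicitly - extra code whose only job is to make the failure visible (still using the hypothetical Image class from above):

  Image *image = [ Image loadFromFile: @"foo.png" ];

  if (image == nil)
  {
      // Without this check the failure is invisible: "display" happily
      // goes to nil and nothing shows up on screen.
      NSLog(@"foo.png did not load - missing file or bad file type?");
  }

  [ image display ];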

Now in a large development effort I may have other people creating images, naming images, and so on.  Suppose someone saves foo.png as a non-PNG file type not recognized by the Apple software (easy to do in something like Photoshop).

So if I don't see an image on the screen I have a lot of checking and rechecking to do all because things just assume that messaging nil is okay.

The problem, of course, is that all that time is expensive.  Expensive when the software is being developed and even more expensive if the fix has to be dealt with after the software is deployed.

Sadly Microsoft and its .NET framework have gotten this part right.

Wednesday, March 9, 2011

Flash, iOS, and Apple

I have been following along with the moaning and groaning by Apple as it relates to Flash.

I purchased an iPad a few months back and have been using it frequently since then - mostly for software development but also for general web browsing.  The lack of Flash is really an issue as far as I can see.

The problems manifest themselves in some obvious and some not so obvious ways.

Obviously you cannot access any sort of Flash directly - which is a pain but at least you know where you stand.

The more difficult issues arise when there is some Flash element embedded in a page that is required for the page to work.  For example, a login page.  Since there is no way to tell that there is Flash involved you have to compare your actions against a browser with Flash - not very convenient.

Another problem is simply blank portions of web pages.  No explanation, no errors like "This page has Flash and I cannot display it."  Again you are never sure what is really wrong until you do some forensic evaluation against a browser that supports Flash.

For a casual user this is not really a problem.  But if you intend to use the iOS device for something more serious it can become an issue.  For example, I cannot write this blog from the iPad because the Google tools for creating posts require Flash (well, I can write in HTML with the Google tool but that's so much of a pain I consider it the same as not being able to post).

From the software developer perspective it's less clear what the issues with Flash are.  Clearly the iOS software has some form of "limited memory".  What this means exactly is not so clear.  iOS seems to be a unix-style OS under the hood, directly related to Mac OS X in that it supports a lot of the same features, Frameworks, classes, technologies (networking, file system, etc.), and so on.

The Darwin unix on which it's based clearly does not have this limit as part of its nature.  However, Apple has done some things to curtail its ability to support large applications.  (The memory limit on the graphics side is a separate issue.)  There seems to be 128 MB associated with programs, of which about 30 MB is used for running programs.

My guess is that, other than in very specific circumstances, the iOS processors are very slow - particularly with things like swapping memory out to the flash memory used for storing things like music and so forth.  Hence the limits.  But the limits also keep down the costs, and the iPad is very cheap compared to the other tablets out there.

It's not hard to believe that Flash does not (cannot?) respect these limits.

Adobe has created a new tool, called Wallaby, to convert Flash to HTML5-based objects.  However, it's not clear that this is a good long-term solution.  Not everything converts, and some of what does convert ends up as HTML5 objects that don't behave like the originals.

I don't see this as a solution long term.

Apparently Flash scripts are so ugly and convoluted that no technology accessible to Adobe is able to resolve them.  This is hard to believe as well - Mac OS X machines have had virtual machine-style technology (Rosetta) for running old PowerPC (G5-era) executables.

Why can't Flash run in a "black box" or on a "virtual machine" in iOS?

Who knows...

Tuesday, March 8, 2011

Tweeting Toilets

An Arduino Computer
It continues to amaze me what people do with their time and resources.

In the early 1980's there was a Coke machine in the Computer Science department at CMU.  Enterprising engineering students discovered a spare Ethernet controller and were able to build an Ethernet interface into the Coke machine.

I worked a job with a guy who had gone to school there, and the company we worked for had close ties to CMU as well - we often found ourselves over there for work.

"Want something to drink from the machine?" he asked on day.

"Sure..." I said.


"One second," he said as he brought up the piece of software to interrogate the machine and see what was in it.  No need to leave our seats.  No need to get our hopes up if what we wanted was sold out.


That was about thirty years ago.  Little did I realize that I was witnessing the future.


Today I find this: a Toaster that Tweets...


The guy looks like some kind of hardware hacker with too much time on his hands, though on his blog the creator of the Tweeting toaster says it has 600+ followers.  That's probably 600 people with the most interesting lives imaginable...

But not as interesting as the 800+ people following the Tweeting toilet or the 660 people following the Tweeting washing machine.

Now I get that there are technological reasons for wanting to create this kind of thing - who wouldn't want to be able to check from Florida that their furnace was still working in mid-winter?

Whatever you think of these Tweeting wonders the technology behind them is also interesting.

Many of these projects are built with something called an Arduino.  This is a small computer that you can wire up to virtually anything - and, with an add-on Ethernet shield, put on the internet.  It has extra input and output ports so you can connect things to it - like switches to tell whether or not a door is open, etc.

The Arduino is very popular and there is a lot of free software out there for it - including a Twitter software library that lets it post to its own Twitter account.

Really, you have to wonder where this will go.  On the one hand how many people really need a Tweeting lawnmower?   But imagine if your local convenience store could Tweet about its low gas prices...  Or when the fresh supply of donuts just got delivered.

On the corporate level ADT (or whatever they're called now) sells a home security system with many of these ideas.  You see it on TV: Mom, from her smart phone, can view the front door from work and see the kiddies getting home from school.  Dad can turn down the temperature on the thermostat from work.

(Let's just hope mom isn't driving when the naughty kiddies turn up on the video monitor with a crack pipe...)

Think of the possibilities...

Monday, March 7, 2011

Lawsuits, PS/3s and You...

Toshiba's New Self-Encrypting Disk Drive
In January I wrote about George Hotz and his PlayStation 3 hacking (see "Inconsistencies of Law").

Well if you visited his site you are now going to be caught up in the wide net of Sony and its legal team.  It seems that Sony's legal team has been granted the right to collect the IP information (traceable back to you and your computer) of everyone who has visited Hotz's site.

The situation is discussed in the letter linked here.

Basically the idea behind this is for Sony to try and prove that Hotz's "customers" for the hack (though it was free) were in California near where Sony is trying to establish the venue for the case.  Hotz lives in New Jersey and apparently his lawyers believe that NJ should be the venue for the proceedings.

Originally the subpoenas were to include companies like Bluehost (Hotz's ISP), Twitter, Google, YouTube, Softlayer and PayPal, among others, and were to include everyone's information.  Hotz's attorneys and others (such as the EFF) argued that subpoenas for anonymous web visitors to Hotz's site were overly broad (no kidding, huh?).

The EFF letter is here.  Basically it argues that there is constitutional precedent for "reading anonymously", that is, the right to visit a web site to read its contents is constitutionally protected.  Ultimately the judge rejected the EFF letter (see the first link).

Eventually both Hotz's attorneys and Sony agreed that the information can only be used to determine where "users" of his hacks reside - a necessary element in determining in which court the case should be heard.

What's interesting is that virtually all ISPs in most countries are required to track and log this data.

Which means that, in addition to any logging sites like Google do with respect to ads and tracking, your ISP is logging your connections as well - at least in a form that a government agency can use against you.

The EU, among others, has extensive requirements for tracing internet usage: who, what, when and where, with six-month to two-year retention periods.

So even if I connect to Hotz's site in order to write about it I am traced and logged and part of the lawsuit.

With this sort of retention requirement, as well as retention of texting, voicemail and other electronic communications, it's little wonder that disk storage has plummeted in price over the last few decades - all this information has to go somewhere to be recorded.

(You can see the price of disk storage here over the last several decades.)  Basically in 1991 I paid $800 USD for a 200 MB disk drive.  Today I can pay $90 for a 2 TB disk drive.  It's hard to imagine how much disk storage is currently in use at any given moment.
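
As a rough sanity check using just those two data points (and ignoring inflation):

$$\frac{\$800 / 200\ \text{MB}}{\$90 / 2{,}000{,}000\ \text{MB}} = \frac{\$4.00 / \text{MB}}{\$0.000045 / \text{MB}} \approx 90{,}000 \approx 10^{4.9}$$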

Spread over the twenty years between those two purchases, that works out to the disk drive industry adding a factor of 10 to storage quantity, for the same price, roughly every four to five years (and since today's dollars also buy less than 1991 dollars, the real price of storage is falling even faster).

That's a lot of recording (given a rate of 2.5 billion (with a 'B') text messages per day in the USA alone).

(And rest assured that each and every message is recorded along with when, where, who it went to - right along with a link to you.)

Friday, March 4, 2011

Google: Censoring Your Searches...

So Google has completed their changes as I discussed a while back - changing how sites are ranked.

I found this interview, by Steven Levy, at Wired regarding this project.  It's with two engineers (Amit Singhal, Google Fellow, and Matt Cutts, Principal Engineer) who worked on the "problem" and came up with a solution.

They talk about how they updated the Google "indexing" process in 2009 - indexing is where the Google engine goes out and searches through all the websites it can find looking for links.  Google rates sites by how many links point to them.  The more links that point to your site the more "relevant" your site supposedly is.

Google also matches text to the links.

So somewhere inside Google there's a mapping, for example, of the phrase "Lady Gaga Meat Dress" to a number of appropriate sites.  Since Google can't know everything everyone will want to look up they instead keep separate links for phrases like "Lady Gaga", "meat", "dress" and so on and combine them (for example, by seeing which sites are linked to all three of these phrases) and display those results.
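
A toy version of that "combine the phrase lists" step might look like the sketch below (the data is obviously made up, and Google's real pipeline is vastly more complicated):

  // Sites known to be linked with each phrase (hypothetical toy data).
  NSSet *gaga  = [NSSet setWithObjects:@"siteA", @"siteB", @"siteC", nil];
  NSSet *meat  = [NSSet setWithObjects:@"siteB", @"siteC", @"siteD", nil];
  NSSet *dress = [NSSet setWithObjects:@"siteC", @"siteE", nil];

  // "Relevant" results for "Lady Gaga Meat Dress" = sites on all three lists.
  NSMutableSet *results = [NSMutableSet setWithSet:gaga];
  [results intersectSet:meat];
  [results intersectSet:dress];   // results now contains just "siteC"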

So the problem Google faces is that the number of links pointing to sites is not necessarily a good indicator of "relevance". 

So I, as an entrepreneur, might rent out a lot of cheap server space and create a bunch of "faux" content linking to my web site.  This will make Google believe that my site has a lot of links pointing to it, therefore making it relevant in Google's eyes.  So if I sell "fooma widgets" and I have created thousands of faux links to my site, when you type "fooma widgets" into Google my site will come out on top.

If my competitors lack my IT savvy and cannot duplicate or exceed my trickery then they will lose out on ads.

So Google, in their wisdom, now attempts to weed out the "shallow content" sites.  That is, when the Google search engine scours the web for links they want to eliminate links from sites that they believe don't really offer anything relevant (whatever that might mean).  So if their new search engine happens upon my thousands of faux sites linking to my "fooma widget" page they don't want them to be considered or considered with the same weight as other sites linking to my page.

As Singhal says in the interview "That’s a very, very hard problem that we haven’t solved, and it’s an ongoing evolution how to solve that problem."

At issue here is the Google concept of relevance.  The basic idea that Google started with - that links to a site measure its relevance - is a deeply flawed one.

1. Google, since it does not attempt to "understand" the site, cannot really know what, if any, relevance links from that site might have.  If I created links to my own pages why does Google get to judge that this is not relevant?

2. Without "understanding" of the pages containing links Google is working with correlation and not causation.  Correlation merely means that things like "lady gaga", "meat", and "dress" happen to occur and result in links to some sites.  But it cannot tell of a bunch if foolish children were linking to these sights or whether someone with knowledge and insight was.

Google doesn't and cannot care.

Now in the case of Lady Gaga's Meat Dress it really doesn't matter.  But reading the interview further we see some more troubling comments.

After Google adjusted the search engine Cutts got this email from a Google user, according to the article: "Hey, a couple months ago, I was worried that my daughter had pediatric multiple sclerosis, and the content farms were ranking above government sites."  Now, she said, the government sites are ranking higher: "So I just wanted to write and say thank you."

And here is the problem.  Exactly why are the "government" sites more relevant?  Relevant in what way?  Why do Cutts and Singhal get to "decide" this?  What if the girl dies because Google put up a bogus site?  What if the government is plain wrong and someone has found a cure that is ranked much lower?

(Google won't search for porn either - even though it's protected free speech - at least in the US.  You might say "hey -- that's great!  I don't want my kids to see that..." - but what if your child is involved in the industry and goes missing?  Now you simply can't use Google to help you search for them...)

No, sadly this is social engineering and censorship at its absolute worst.  Tinkering with things so the "right" answers come out in the judgment of these two.

What if this tinkering were done in a country like, say Egypt or Yemen, to adjust the search results so they were "right"?

What if they weren't right and people died?

The basic problem is that since Google makes money selling ads on the web they face pressure from their paying advertisers to ensure that no one is out "gaming" the system.  Cutts and Singhal say that money is absolutely not the motivating factor.

One could probably believe that no one told them out and out what to do and why...  But who is paying their salary?  And why are they paying it?

Long ago, when Google first started, people realized that they could just sit at their desk and Google away for their competitors' sites.  Each time a competitor's ad came up in a search and got clicked, the competitor was dinged for whatever the Google ad cost was, say $0.10 USD.  So if I had my staff of ten or twenty do that all day for a few days I could cause my competitor to owe Google a lot of money.

So Google, in their infinite technical wisdom, had to create a means to defeat that process.  Then users figured out more things to do, and Google figured out a way to beat them, and on and on...

Leaving a very flawed system which Google continually has to rationalize as "good".

(This is kind of like what ebay has turned into - a world of its own where pricing is based on the ebay world of who will likely buy it.  Can I buy it cheaper and better outside of ebay - often yes.  But inside ebay there is an artificial world of prices for things which ebayers accept...  Ebay is full of the same scammers and gamers as well.)

Unfortunately most people do not and probably will never understand these issues.  They will simply Google for "cheapest high-heeled shoes" or "most popular watch" or whatever and "trust" the results Google provides.

Sadly Google displaced a lot of good search sites which showed the good, the bad and the ugly.  Like AltaVista.  This was a really good search site - it didn't sell anything and so the results were what the results were.  (Eventually Google hired one of its creators and then fired him right before his stock options kicked in - allegedly because he was too old... but that's another story.)

But now its too late...  Google is an icon of search wonder.

But like the old saying goes: "The bigger they are the harder they fall..."