Search This Blog

Wednesday, August 31, 2011

Goldbach Conjecture and Decking

About a year ago I was working on some imaging related to the Goldbach Conjecture.  The Goldbach Conjecture says that every even integer greater than two can be written as the sum of two primes.

For example

  3 + 5 = 8
  3 + 7 = 10
  5 + 7 = 12
  3 + 11 = 14
 
and so on.  In thinking about this I created an image to show the role each prime plays in generating sums.
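The pairings above are easy to enumerate.  Here is a minimal Python sketch (my own illustration, not code from the original image):

```python
# Sketch: for an even n, list the prime pairs (p, q), p <= q, with p + q = n.
# This is the kind of decomposition the image visualizes.

def is_prime(n):
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    f = 3
    while f * f <= n:
        if n % f == 0:
            return False
        f += 2
    return True

def goldbach_pairs(n):
    """Return all prime pairs (p, q) with p <= q and p + q = n."""
    return [(p, n - p) for p in range(2, n // 2 + 1)
            if is_prime(p) and is_prime(n - p)]

print(goldbach_pairs(10))  # [(3, 7), (5, 5)]
print(goldbach_pairs(28))  # [(5, 23), (11, 17)]
```

Each even number may have several such pairs - the overlapping pairs are what the image (and later the deck) tries to show.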


This last year we decided to extend our porch to include a new, decked area.  I wanted the decking to reflect the notion of the overlapping primes.

The deck was constructed like this:


Tuesday, August 30, 2011

Medical Confusion

First off, in December of last year I wrote "Type 2 is Not Diabetes."

I talked about how various dietary elements like processed plant oils can inhibit and otherwise wreak havoc on your ability to process glucose.  Today I came across an article here talking about giving an Alzheimer brain a glucose "fuel" (actually a "medical food") which contains elements of a good oil (coconut oil in this case).

Among other things the article also indicates that there is medical research linking glucose uptake with Alzheimer's.

In "ADHD & A Spoon Full Of Sugar" I wrote about how I was testing cod liver oil on myself.  I work hard not to ingest the bad oils (soybean, cottonseed, corn, etc.) these days - though they are unavoidable in foods you eat out or do not prepare yourself.

For the last nine months I have been taking a tablespoon of cod liver oil once a day.  I feel a significant and noticeable improvement in mental function.  One of the primary benefits is that I am much less forgetful - particularly about things like keys, cell phones, putting things away, that sort of thing - and I have taken over the role in the house of "finding it," as in "Honey - I can't find my ____" - now I remember where I saw things that are not even my own.

(Cod liver oil naysayers typically complain that it has "too much" vitamin A and D - just make sure you check the label and have a grasp on your overall intake...)

My ability to focus has dramatically improved as well.  For me focus involves staying on task and not having to "break" to gather my thoughts.  A year ago I could not focus for more than an hour or two without need of a break.  Today I can easily focus for six or even eight hours at a stretch.

To anyone over age 40 I would recommend looking into taking cod liver oil (and it's not nearly as unpleasant as it's cracked up to be).

I believe that Alzheimer's (as well as other mental ailments of the elderly) is probably a nutritional problem stemming from a long-term lack of proper oils in the diet.  Clearly low cholesterol causes mental issues (see "Lower Cholesterol = Memory Loss") - I think in part because the brain is not provided necessary nutrients.  Given such a significant impact of diet on the brain, I wonder what a lifetime of improper diet might do.  With these latest thoughts and articles on Alzheimer's and glucose I have come to believe this even more strongly.

Second today is the latest saga of medical incompetence as it relates to my poor mother.  Mom, who is in her 80s, was in a fender-bender and the airbags went off, giving her a fractured sternum.  Apparently this is a common injury with air bags and, though it hurts, it's not really dangerous and heals on its own.  Nor does it require any special behavior or restrictions.

So in the emergency room the doctor told her she could take up to two Advil three times a day - six Advil every 24 hours (they missed the cracked sternum the first time).  At a second visit (a few days later, after excruciating pain) they also gave her a prescription for five days of 5/325 Vicodin, three times a day.

Mom, aware of the bad reputation of these types of pain killers, was concerned so she began to ask around (other doctors, pharmacists, various other health professionals) about what to take and in what dosages.

The range of answers (compiled over a couple of days) was completely astounding:

1) One 5/325 Vicodin 3x a day.

2) Two 325 Advil 3x a day.

3) One 325 Advil every 4 hours.

4) Don't take Advil at all, take Tylenol one every 4 hours.

5) Taking even 3 Advil a day for two weeks will ruin your stomach.

6) Two 325 Tylenol every 8 hours.

7) One 325 Tylenol every 4 hours.

8) Take as little as you can of X (where X was all of the above).

However, none of these professionals was concerned about the problems with Tylenol (see this as an example) or Advil (save one doc - though admittedly we are only talking about taking these for a few weeks total).

No wonder they call it "practicing medicine".

Imagine giving this range of answers to a cop who has just pulled you over and asked "were you drinking, sir?" - with this level of consistency, off to DUI school you would go...

The problem here as well as with Alzheimer's is that modern medicine has a huge vested interest in the status quo for a number of reasons: legal, income, laziness, and so on.  No one wants to "rock the boat" by running out and screaming the "emperor has no clothes" - so they confuse my poor elderly mother with this sort of drivel.

Medicine - a supposed "science" where each of the tens of thousands of practitioners gives his or her own unique, conflicting answers.

It's little wonder people have so little regard for medicine.

Arthur C. Clarke once wrote: "Any sufficiently advanced technology is indistinguishable from magic."

Paraphrasing, the more I see and read about this kind of stuff the more I believe "Any sufficiently (stupid, asinine, idiotic, moronic, foolish, lazy, ...) act is indistinguishable from pure evil."

Monday, August 29, 2011

Antibiotics = Childhood Asthma?

From the "Village of Long Life"
In October of last year I wrote "Antibiotics - A Scourge on Humanity".  A paper in Nature was recently published that confirms some of my comments (see Blaser, M. J. Stop the killing of beneficial bacteria. Nature 476, 393–394 (25 August 2011)).

Surprisingly science today does not seem to fully recognize that a compound that kills bacteria does not distinguish between "good" bacteria and "bad" bacteria.  An article in Wired (here) claims the current scientific "wisdom" in this matter says that "good" bacteria will repopulate.

But the bad bacteria, e.g., strep, don't repopulate.

Why would the rest?

"Good" bacteria are necessary for life - without them you simply could not live.  Period.

So let's consider H. pylori.  H. pylori is known to cause gastric ulcers.

So it must be a bad bacteria, right?

Studies now indicate that antibiotic treatments to kill H. pylori have been largely successful in recent decades.  But consider this: science has uncovered an inverse relationship between H. pylori and childhood diseases like asthma.  If a child is not exposed to H. pylori during his or her very early life, the chance of having asthma is increased.

This trend is apparent not just in the United States but across the entire developed world.

This goes along with something called the "hygiene hypothesis" that states (from Wikipedia) "a lack of early childhood exposure to infectious agents, symbiotic microorganisms (e.g., gut flora or probiotics), and parasites increases susceptibility to allergic diseases by suppressing natural development of the immune system."

There are a number of factors that have changed during the last several decades related to how humans acquire "good" bacteria.

For one, antibiotics are prescribed almost like candy - an average child receiving some twenty or more doses before adulthood.  For another, the dramatic rise in C-sections reduces a child's first exposure to "good" bacteria by eliminating the trip through the birth canal.

Antibiotics and other antibacterial compounds are present in virtually all commercial animal feed (see "Animal Drugs").  These pass from the animal to humans.

And finally there is the ubiquitous "hand sanitizer" that indiscriminately kills all bacteria (see "Death by Hand Sanitizer").


Science is now speculating that a number of other problems besides childhood asthma may be caused by killing off "good" bacteria: type 2 diabetes (though I doubt this), obesity, and depression.

What's interesting is that we spend billions and trillions of dollars on other things but virtually nothing on trying to understand what makes us healthy in the first place.

Western civilization is doing poorly at keeping people healthy, and most forget that staying healthy in the first place is far more important than having the most advanced medical technology to take care of you once you are not.

Western science is also doing a poor job of understanding what helps people to live healthy lives - is it diet, lack of stress, or some other factor (see this as an example).

In any case I'd think twice before taking a dose of antibiotics.

(Which at this point I won't even give to my dog...)


Friday, August 26, 2011

US Patent 7,178,140

From May 31, 2002 we have this patent, issued in 2007: "Speeding up application downloading from a remote server."

So the idea here is pretty simple.  I have a "virtual machine" which has some code on it.  In the code are references to other classes:

// Class method that takes a request and books
// me in a meeting if I am not busy today...
//
Routine Method "If I Am Available I'll Go"
  ADD
  ADD
  CALL "Today's Date"
  SUBTRACT
  SUBTRACT
  CALL "Get Today's Calendar"
  MOVE
  MOVE
  CALL "Save Today's Calendar"

And we have the CALLs referencing various external routines like "Today's Date".  Now let's further suppose that the routines (methods, as the patent calls them) are all part of some larger calendar system - let's call it "Calendar System" - with hundreds or thousands of other routines, none of which we use save for the three mentioned above.

So let's look at the first claim in the patent:

"1. A method for transmitting methods to a client running a virtual machine, the methods contained in one or more classes, the method comprising:

    - recursively scanning through application code beginning at an application entrypoint to determine which methods may be called and the classes to which they correspond wherein, said recursively scanning includes:

    - finding all methods referenced in said application code; and

    - finding all methods referenced in said methods referenced in said application code;

    - generating a method usage map having an identifier corresponding to all said referenced methods organized under its corresponding class;


    - storing said identifiers corresponding to all said methods referenced in said application code and all said methods referenced in said methods referenced in said application code in said method usage map organized by said classes;


    - receiving a request for an application from a client;


    - processing all application and library classes for the application specified by the request, after consulting said method usage map upon execution of the virtual machine, removing unused methods that are not contained in said method usage map from it as per said method usage map; and


    - transmitting any remaining methods to the client running the virtual machine
."

So according to this patent we take the routine method "If I Am Available I'll Go" and we scan it looking for CALLs that it makes to "outside" routines.

In this case "Today's Date," "Get Today's Calendar," and "Save Today's Calendar."

So if the "Calendar System" has other routines, like "Is Today Christmas" or "Number of Days in the Month", it seems reasonable that we don't care about them if the routines we do need do not use them.  And since there may be thousands of similar routines that take up a lot of space it would indeed be a waste to mess about with them if we don't need to.

So this patent says we basically make a list of these - something like this:

"If I Am Available I'll Go"
  - "Calendar System"
    + "Today's Date"
    +"Get Today's Calendar,"
    + "Save Today's Calendar."

So now, on some other computer a client is running that needs to make use of  "If I Am Available I'll Go."

So the patent says that, instead of say transmitting the full bulk of "Calendar System" to the client, we instead take the list above and remove all but the listed parts of "Calendar System" and transmit only those; thereby saving a lot of transmission time presumably.
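The scan-and-strip process the claim describes can be sketched in a few lines of Python.  The class and method names are the made-up ones from above, and the call graph is hand-written rather than extracted from real bytecode:

```python
# Sketch (not the patent's code): build a "method usage map" by recursively
# scanning a call graph from an entry point; only reachable methods of each
# class would then be transmitted to the client.

CALLS = {
    ("App", "If I Am Available I'll Go"): [
        ("Calendar System", "Today's Date"),
        ("Calendar System", "Get Today's Calendar"),
        ("Calendar System", "Save Today's Calendar"),
    ],
    ("Calendar System", "Today's Date"): [],
    ("Calendar System", "Get Today's Calendar"): [],
    ("Calendar System", "Save Today's Calendar"): [],
    ("Calendar System", "Is Today Christmas"): [],   # never reached
}

def usage_map(entry):
    """Map each class to the set of its methods reachable from entry."""
    seen, stack, used = set(), [entry], {}
    while stack:
        cls, meth = stack.pop()
        if (cls, meth) in seen:
            continue
        seen.add((cls, meth))
        used.setdefault(cls, set()).add(meth)
        stack.extend(CALLS.get((cls, meth), []))   # recurse into callees
    return used

used = usage_map(("App", "If I Am Available I'll Go"))
# "Is Today Christmas" is absent, so it would never be transmitted.
print(sorted(used["Calendar System"]))
```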

Being an old geezer, this sounds to me like something called "dynamic linking" - an old computer concept that's been around for some time.  So I fire up Google and enter "dynamic linking."

A few entries down from the top is this link.

A bit down from the top we see "Loading a Dynamically Linked Program."  A dynamically linked program is exactly what is described above - a program that references a bunch of outside routines that are not included in the basic "package" but are instead referred to and added (linked) later.

Then, further down in this section, we see "Finding the Libraries".  Well, a library is a collection of routines - er, kind of like, well, a "class".  One could even imagine that each routine is a method in some sort of logical class.  So we discover that the dynamic linking process searches for the libraries.

A bit further below that is "Lazy Procedure Linkage with the PLT " which says in part: "Programs that use shared libraries generally contain calls to a lot of functions. In a single run of the program many of the functions are never called, in error routines or other parts of the program that aren't used. Furthermore, each shared library also contains calls to functions in other libraries, even fewer of which will be executed in a given program run since many of them are in routines that the program never calls either directly or indirectly. "

The bottom line here is that on-the-fly the procedures are bound into the running program as they are called.
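That on-the-fly binding can be sketched in Python.  The "library" here is just a dict standing in for the dynamic loader's search, and the names are invented:

```python
# Sketch of lazy procedure linkage: a call site starts out unresolved; the
# first call looks up the real routine (the dict stands in for the dynamic
# loader) and patches the binding so later calls dispatch directly.

LIBRARY = {"todays_date": lambda: "2011-08-26"}  # stand-in for a shared lib

class LazyBinding:
    def __init__(self, name):
        self.name, self.target = name, None

    def __call__(self, *args):
        if self.target is None:            # first call: resolve and patch
            self.target = LIBRARY[self.name]
        return self.target(*args)          # later calls: direct dispatch

todays_date = LazyBinding("todays_date")
print(todays_date())   # resolved on first use
```

After the first call the stub is bound, which is exactly the "many functions are never called" economy the linking article describes - unused routines are never resolved at all.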

Of course, this doesn't quite match the patent - we need a client to do this.

However, since there are no limits on where the libraries might be, they could be pulled in over a network link of some sort - a mounted remote drive or system.

Which exactly duplicates the scenario described by this patent.

At the top of the article it states: "Sun Microsystems' SunOS introduced dynamic shared libraries to UNIX in the late 1980s."  This, along with network mounts, exactly duplicates the patent.

So we have Sun Microsystems patenting what they have done for a couple of decades.

I guess they forgot what they themselves were using 15 years ago...

Thursday, August 25, 2011

US Fukushima ... Almost

From Wikipedia - New Madrid Quake in 1895
So we see in this WSJ article that the North Anna nuclear plant in Virginia shut down during the 5.8 earthquake this last Tuesday.

Then we see in this follow up article that the shaking in the quake exceeded the plant's rating for earthquakes.

In March I wrote "US Nukes and US Geological Faults."

Interestingly enough the quake, centered near Richmond Virginia, was not even near a fault listed on the USGS map.

So what does this say?

Well, for one thing that no one really has any clue about what might happen in the US as far as nukes and, say, earthquakes, are concerned.  Events that are "not even on the map" would appear to be just as likely as anything else.

And though during the time of Fukushima we had assurances from the US nuclear regulator that we need not worry, it would seem that in fact they have no good idea what they are talking about.

Consider the "Missouri" New Madrid earthquakes of 1811-1812 (see this in Wikipedia).  These were some twenty-five to over a hundred times stronger in ground motion (at magnitude 7.2 - 8.0, as compared to 5.8 for Virginia on Tuesday) and affected a very large area.  (Ten times stronger than the one pictured at the top of the post.)

The bottom line is that a quake like New Madrid is going to create serious damage - and in particular create Fukushima-like scenarios for us here in the US.

The regulators can whine and moan all they want, but A) predicting a "big one" like New Madrid is unlikely and B) a New Madrid event will cause another Fukushima.

For me Tuesday's earthquake was a first.  I was sitting alone upstairs - Mrs. Wolf having just left to go to an appointment.  Suddenly I felt the floor moving and heard the steps creaking as if someone very heavy (probably 500 pounds worth or so) was coming up the steps (I heard three steps).

The dogs, asleep by my side, did not move or bark.  The couch I was sitting on moved back and forth. At first I assumed Mrs. Wolf had somehow returned early from her appointment and avoided disturbing the dogs. 

But Mrs. Wolf does not weigh 500 pounds.

I live in a very solid house - one Mrs. Wolf and I designed and my good friend solidly built - sixty-mile-per-hour winds do not make it creak, nor do they make the floor move - and moving a Hammond C3 up the same stairs a few years back did not make the floor move as much.

I jumped up and looked down the stairs - the gate was still closed so no large dogs (we have one that does sneak up the stairs from time to time) could have come up the stairs, and no Mrs. Wolf.

Short of ghosts, I reasoned, it must have been an earthquake, and sure enough, Facebook friends from many miles around confirmed they had felt the same thing.

I live about a six hour drive from the epicenter - or about 300 miles.  I would venture that the house and I moved back and forth at least a 1/2 inch during the shaking.

Each whole step in magnitude corresponds to roughly ten times the ground motion, so a magnitude 6.8 quake would have multiplied that by 10 - five inches - probably enough to do a lot of damage.

A New Madrid quake at magnitude 7.8 - ten times that again, or over four feet of shaking.

I probably would not be alive to tell this story or write this blog entry.

Similarly - most nearby nukes would be in a similar state to Fukushima.

Just thank God that the US nuclear regulatory bodies and private industry don't have to worry about any of this and can happily build their plants wherever it's convenient.  Most plants were approved twenty or more years ago, so there will be no one to blame but ourselves should there be a problem.

I know I can sleep easily knowing all this... can you?


Wednesday, August 24, 2011

US Patent 6,349,344

The Pigeon Sisters
Link for this patent here on the Google patent site.

This Java patent is assigned to Microsoft.

The first claim from the patent:

"1. A method of preparing one or more class files prior to run time for use by a virtual machine in a manner so that during run time the prepared one or more class files require less memory and require less time for processing during program execution than would otherwise be required for the one or more class files, and which do not require application programs to incorporate the prepared one or more class files prior to run time to be executable by the virtual machine, the method comprising steps for:

    loading the one or more class files and accounting for any differences in byte order at the virtual machine;
    parsing the one or more class files into individually parsed instructions derived from the one or more class files; and
    creating a native executable run time image comprised of pre-loaded and pre-parsed instructions derived from the one or more class files and that are dynamically linkable in a manner so that the prepared one or more class files may be accessed during run time by any of one or more applications that may be developed separately and independently from the preparation of the one or more class files, without a need to incorporate in such applications the pre-loaded and pre-parsed instructions.
"

(Note: I typically focus on the first claim as it tends to be the most basic "concept" related to the patent.)

Now, taken literally, this patent covers most operations of any computer system.  Since it was filed in 1997, that would mean virtually everything done on computers would somehow fall under Microsoft's aegis.

At its most basic level this patent describes taking a set of classes and combining them into an executable file.

But classes are just computer programs or fragments of computer programs.

So let's look at a description of something called the "binder" on Multics (Section 6.4, paper written in 1987, Multics was invented in the 1960's):

"The binder combines separately compiled programs into a single segment, such that linkage between programs in a bound segment is pre-resolved. The input to the binder is an archive file that contains object segments and an optional ASCII text control file called a bindfile. The binder uses relocation bits output by the compiler to relocate instructions, like a traditional linkage editor. The binder's output is a standard object segment with intra-segment linkage prelinked."

The binder is apropos here because classes are merely code.
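What the binder does - pre-resolving cross-references at prepare time so no lookup happens at run time - can be sketched in Python.  The "units" and opcodes below are invented stand-ins for separately compiled object segments:

```python
# Sketch of a binder/pre-loader: at prepare time, symbolic references
# between separately "compiled" units are replaced by direct addresses in
# one combined image, so run time needs no name resolution at all.

units = {
    "main": [("CALL", "helper"), ("HALT", None)],
    "helper": [("RET", None)],
}

def bind(units, order=("main", "helper")):
    """Combine units into one image with CALL targets pre-resolved."""
    base, image = {}, []
    for name in order:                        # first pass: assign addresses
        base[name] = len(image)
        image.extend(units[name])
    for i, (op, arg) in enumerate(image):     # second pass: resolve names
        if op == "CALL":
            image[i] = (op, base[arg])        # symbolic -> numeric address
    return image

print(bind(units))  # [('CALL', 2), ('HALT', None), ('RET', None)]
```

This is the same shape as the patent's "pre-loaded and pre-parsed instructions ... dynamically linkable" image, just thirty years earlier and without the word "class."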

Sort of like saying I have patented safety razors.

Then saying a red safety razor is unique and new and deserves a patent of its own.

So we have a 1997 patent addressing what was done some thirty years prior.  More than likely no one involved in this had any idea what Multics was or how much impact it has had on computers since.

Multics was old news by the mid-1980's.  I ran across it indirectly when a company I was involved with, Lexeme, was involved with an RFP to convert a large system written in PL/I for Multics to Ada for a defense contractor.  Multics was being phased out and the new system was to run Ada.

The contractor we were a sub to did not take this seriously at first, assigning to the project a pair of women we called the "pigeon sisters."  They were young and had no clue what was going on, and their personal interaction reminded you of the "pigeon sisters" from the movie and/or TV series "The Odd Couple."

These two led the project astray for some time (though they did know how to run the mainframe Unix ed program on 3270 terminals which was something we thought quite unusual).

Eventually the higher-ups assigned someone who knew what they were doing, and the project was a success in that the converted PL/I program actually worked.

So again we have a "Java patent" that's just a rehash of thirty year old technology.


Tuesday, August 23, 2011

The End of the Space Shuttle...



Dark humor about the end of the shuttle program: unemployed astronauts and what happens to them.

Notes from the World of Hacking...

Shionogi, a US arm of a Japanese pharmaceutical company with offices in New Jersey and Georgia, was hacked by Jason Cornish, 37, a former employee.  Cornish deleted some 15 virtual servers containing all sorts of important company data (email servers, etc.) after resigning from his position over a dispute with a senior manager.

The cost of the damages was over 1/2 million USD.

Cornish attempted to disguise his actions by performing the hack from a local McDonald's offering free WiFi.

Unfortunately Cornish forgot one thing - don't pay with your credit card if you're up to no good.

According to this court document (PDF), Cornish had purchased $4.95 worth of food just prior to his attack.  FBI personnel were able to link Cornish to the attack via web logs, a Gmail account, the VISA number on his card, and the IP address of the McDonald's.

But McDonald's and its free WiFi aren't the only nexus of hacking in your neighborhood.

Recently, at the Black Hat security conference in Las Vegas, it was revealed that medical devices like insulin pumps and blood sugar monitors which use wireless controllers are vulnerable to hacking.

Jay Radcliffe, a security researcher who also has diabetes, was able to show how to wirelessly access his insulin pump and sugar monitor - controlling how much or how little insulin is delivered as well as controlling what information is displayed about the state of his insulin.  Radcliffe could even make the pump show that it was providing insulin when it was not, and show that it was not providing insulin when it was.

In the case of Radcliffe the pump and monitor were controlled by a wireless remote which was easily reprogrammed using a USB interface.

At the end of the day computers and computer security are only about as reliable as the people who run them - you can have all the unbreakable encryption in the world but if you leave the key lying around on the desk someone will steal it.

Monday, August 22, 2011

US Patent 6,915,508

I continue my diatribe against Java patents with this...



US Patent 6,915,508 refers to a "Method and apparatus for eliminating C recursion from a Java™ bytecode."

First, some background:  Java is a programming language that was designed around a "bytecode" system.  This means that when you write a Java program the Java compiler does not generate native machine instructions but instead emits a series of generalized pseudo "bytecodes" which are like machine instructions but are instead processed by a program.  That program, the Java interpreter, reads the bytecodes and processes them instead of the actual hardware doing it.
For example, Java is a stack machine so you might have a series of bytecodes like this:

  PUSH 123 
  PUSH 1
  ADD

Where "PUSH" places 123 on the stack, the second "PUSH" places 1 on the stack and "ADD" adds the top to stack elements, removes them from the stack, and pushes the result 124.

PUSH may be implemented as a "byte" in memory, say with the value 56, and ADD as a "byte" with the value 45.  So the series of bytecodes in memory would look like:

   56, 123, 56, 1, 45

Because Java uses a program to process "instructions" like PUSH, it's possible for the Java interpreter to have a significant amount of control over what's going on - it can check for errors, like having too many things on the stack, and so on.  These things are more difficult to implement when hardware is directly executing instructions.  Java was designed to run in a number of different environments, so this also eliminates the need for a special hardware implementation of the Java interpreter.
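That byte sequence is easy to execute with a toy interpreter.  A Python sketch using the opcode values from the text (56 = PUSH, 45 = ADD); this is an illustration, not real JVM bytecode:

```python
# A toy stack-machine interpreter for the byte sequence in the text:
# 56 = PUSH (followed by an operand byte), 45 = ADD.

def run(code):
    stack, pc = [], 0
    while pc < len(code):
        op = code[pc]
        if op == 56:                     # PUSH: next byte is the value
            stack.append(code[pc + 1])
            pc += 2
        elif op == 45:                   # ADD: pop two, push the sum
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
            pc += 1
        else:
            raise ValueError("unknown opcode %d" % op)
    return stack

print(run([56, 123, 56, 1, 45]))  # [124]
```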

Now let's suppose we want our bytecodes to call a method in a class called fred - say something like "fred.add(1, 2)".

In this case the Java interpreter has to do a lot of work - something like:

  PUSH 1
  PUSH 2
  LOAD "fred"
  INVOKE "add"

Here the "LOAD" operation has to go and find the class "fred" - this can involve the class already being in memory (which makes accessing it easy) or it can involve loading "fred" from outside the Java interpreter (e.g., from a file), initializing the class (and any superclass), and so on.  Finally "add" is called.

This patent addresses the issue where the bytecode interpreter is recursively invoked - for example, when "fred" is loaded the interpreter would be re-invoked in the context of the class "fred" to initialize it.

The "problem" is that the bytecode (Java) interpreter can run out of resources if there is too much "recursion" - for example running off the top of the stack in the interpreter.  Since this is hard to detect Sun is attempting to address this issue with this "solution".

The "solution" Sun patents involves replacing a recursive call to the bytecode (Java) interpreter with the invocation of some "replacement Java bytecodes" - basically converting the operations that the interpreter would have done into Java bytecode and thereby avoiding a recursive call to the interpreter.

However, in my view, this is simply a scheme to "flatten bytecode", for example as described in 1992 here (in this case the language is LISP, but the principles are the same) and to replace tail-recursive calls with continuations in the interpreter at an existing level albeit by generating additional bytecodes for the interpreter.

The real question for this patent is just how "unique" and "novel" the concepts espoused are.

The LISP paper talks about replacing bytecodes to "flatten" execution - now in my view "flattening" is independent of whether an interpreter (as in the case of Java) is involved or whether raw machine instructions are involved.  Similarly "flattening" by using what amounts to effectively restarting the interpreter at the same level rather than a recursive call is the same as implementing a "tail recursion" optimization (something I used in Portable Standard Lisp in the 1980's).
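The flattening idea - replacing a recursive re-entry into the interpreter with more work items processed at the same level - can be sketched in Python.  The nested "tasks" below are invented; they stand in for things like class initialization that would otherwise trigger a recursive interpreter call:

```python
# Sketch: a naive interpreter re-enters itself (C recursion) to run a
# sub-task; the flattened version pushes pending work onto its own work
# list and keeps looping at a single level - no recursive call.

def run_recursive(task):
    done = []
    for step in task:
        if isinstance(step, list):
            done.extend(run_recursive(step))   # interpreter calls itself
        else:
            done.append(step)
    return done

def run_flat(task):
    done, work = [], [list(task)]
    while work:
        current = work[-1]
        if not current:                        # sub-task finished
            work.pop()
            continue
        step = current.pop(0)
        if isinstance(step, list):
            work.append(list(step))            # same level: no C recursion
        else:
            done.append(step)
    return done

program = ["a", ["init1", ["init2"]], "b"]
print(run_recursive(program), run_flat(program))
```

Both produce the same result; the flattened version just manages its own work list instead of leaning on the host language's call stack - the same move as a trampoline or tail-call elimination.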

Are we really talking about something novel and new here or is this a simple rehash of older ideas hidden by the words "Java" and "C"?

Certainly C is a programming language with roots in the 1970's; from the perspective of a patent it predates this particular patent by decades.  Why use it?  Certainly a Java interpreter could be written without using C - the fact that it's used in the patent title is deceiving, I think.

Java is also not the first interpretive "bytecode" language - TinyBasic, the first microprocessor Basic, used the technique in the 1970's (see this).  A lot of commercial work was done with Basic at that time, as was a lot of research into optimization and performance.

This all boils down to this: Does this patent really talk about anything new and novel or is it just disguising a rehash of older work under the cloak of Java and C? 

Because computers have all processed instructions in roughly the same way for sixty or more years is it really new to put a different 'wrapper' around something old and claim it as new?

In this case I see nothing more than a repackaging of two very old principles - "flattening" and replacing "tail recursion" with a "jump".  Again - like putting a new paint job on an old car and claiming that the old car is now in some way "unique".

Had this patent invented tail recursion elimination and flattening that would be one thing. 

But it doesn't.

It merely, I think, disguises them with new, and unnecessary, labels like "Java" and "C".


Friday, August 19, 2011

More Patent Misery...

So after writing about Motorola and Google a few posts ago I spent some time scanning through the various other patents and patent citations related to Java and Sun Microsystems.

My my...

The patent system surely is broken.

For example, James Graham of Sun patents "Methods and apparatus for facilitating the sharing of computer graphics" (US Patent 6,760,019).  The first claim says:

"generating a lookup table having a plurality of software algorithm implementation descriptions,
wherein each description comprises at least one attribute; and generating a chain of a plurality of objects,
wherein the objects correspond to at least one of the attributes of the software algorithm implementation description,
wherein at least one of the objects has a supertype that corresponds to another object in the chain, and
wherein at least one of the objects has a subtype that corresponds to another object in the chain,
wherein the subtype is connected to the supertype by the chain,
wherein the supertype is a more general description of a composition algorithm,
wherein the subtype is a more specific description of a composition algorithm, and
wherein the chain is defined by the plurality of objects accessible by the supertype,
wherein the lookup table and the chain of the plurality of objects are generated dynamically,
wherein the lookup table is searched for a rendering operation implementation that best matches a specified criteria.
"

This is not even an algorithm or process - it's just a description of something (though it's not clear what - probably some sort of in-memory class structure of Java objects, I guess).  Like patenting the description of a red car with a pink floozinator, a green radio, and purple hub caps.

Then there is this gem: US Patent 7,197,750, "Java Bytecode instruction for determining hashcode values", which has the first claim of:

"A computer readable medium for storing a Java Bytecode instruction for determining a hashcode value for a Java object,
wherein said hashcode value uniquely identifies said Java object among all other Java objects in said Java computing environment, and said Java Bytecode instruction is suitable for execution by a Java virtual machine in a Java computing environment,
wherein said Java Bytecode instruction when interpreted at runtime by said virtual machine operates to determine and store, a hashcode value for said Java object on an execution stack used by said virtual machine to interpret said Java Bytecode instruction,
by accessing a field of said Java object which stores said hashcode value for said Java object,
thereby allowing said virtual machine to determine said hashcode value for said java object without invoking a Java method;

wherein said Java Bytecode instruction when executed further operates to:

    pop a reference to said Java object from the top of an execution stack when said Bytecode instruction is executed;
    determine a hashcode for said Java object by accessing the field of said Java object after said reference has been popped; and
    push said hashcode on top of said execution stack after said hashcode has been calculated
."


The claim here appears to be that I have a Java object that, among other things, has a slot in it that holds its own unique hash code.  At run time my Java machine can grab that value out of the object without doing more than a lookup.
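Stripped of the bytecode language, the mechanism amounts to this (a sketch of my reading of the claim, in Python rather than JVM bytecode; all names are mine):

```python
import itertools

_next_id = itertools.count(1)   # source of unique values

class Obj:
    def __init__(self):
        # The "field of said object which stores said hashcode value":
        # assigned once, unique among all Obj instances.
        self._hashcode = next(_next_id)

    def hashcode(self):
        # "Determining" the hash code is nothing more than reading
        # the slot - no real computation, no method dispatch needed.
        return self._hashcode

a, b = Obj(), Obj()
print(a.hashcode(), b.hashcode())   # two distinct, stable values
```

Caching an object's identity hash in a header slot is exactly what Smalltalk virtual machines were doing decades earlier.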

More ideas lifted from the old Smalltalk world and rehashed as "Java".

In my view this patent is rubbish - it's totally obvious and it's unclear why this would even warrant a patent in the first place.  These are generic computer science ideas re-branded with "Java" to make "Java bytecode" and "Java this" and "Java that" somehow magically unique in order to claim a patent.

And finally for today, US Patent 6,339,841: "Class Loading Model" which claims:

"A method of processing a class file on a computer system, said computer system having a server and a runtime domain, said method comprising the steps of:

    identifying independent class metadata from a class and independent methods within the class;
    creating separate components for each of the class metadata and methods;
    storing the class metadata component and the method components in the server, so that each component of the class is individually identifiable and accessible;
    sending the class metadata component without the method components from the server to the runtime domain upon receiving a request to load a class from the runtime domain; and
    sending a specific method component without the class metadata component and without any other method component from the server to the runtime domain upon receiving a request to load a specific method component from the runtime domain
."

The idea here seems to be that we have a Java class stored on a "server" and some client that needs to access the class.  So the client downloads the "class metadata" - i.e., a list of the class's methods - when the class is used.  Later, when a method is invoked, the actual method code is transferred to the client.
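In other words, demand loading of code.  A toy sketch of my reading of the claim - the "server" is just a dictionary here, and every name is hypothetical:

```python
# The "server": class metadata and method components stored separately,
# each individually identifiable and accessible.
SERVER = {
    "Greeter": {
        "metadata": ["hello", "goodbye"],     # the class metadata component
        "methods": {                          # the method components
            "hello":   "lambda self, name: 'hello ' + name",
            "goodbye": "lambda self, name: 'bye ' + name",
        },
    },
}

class LazyClass:
    def __init__(self, classname):
        # Step 1: loading the class fetches ONLY the metadata.
        self._name = classname
        self._meta = SERVER[classname]["metadata"]
        self._loaded = {}

    def __getattr__(self, method):
        if method not in self._meta:
            raise AttributeError(method)
        if method not in self._loaded:
            # Step 2: fetch this one method component on first use,
            # without the metadata or any other method component.
            self._loaded[method] = eval(SERVER[self._name]["methods"][method])
        func = self._loaded[method]
        return lambda *args: func(self, *args)

g = LazyClass("Greeter")
print(g.hello("world"))   # -> hello world (body transferred only now)
```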

But this is no more than the old Multics "demand" memory model rehashed for Java.  Classes are just routines - just code - they are not somehow special in the world of computer science.  Code being brought into memory when it's needed.  Wow, what a unique, new and novel concept - from the 1960's.

The problem is that no lawyer has a clue what any of this means.  The only way they have to work with this is to hire expensive "experts" to try and help them (I know this from personal experience).  The battle lines are drawn up by one side (say the side claiming the patent is nonsense) assembling a list of "prior art".  Prior art is some publication or paper or something that shows the patented process being publicly known or commercially sold prior to the earliest filing date on the patent.  The other side (the patent owner) then has to "disprove" the prior art claim - usually by showing how the patent is somehow "not obvious" relative to the claimed "prior art".

These lawyers and experts cost around $500 an hour.  And filing claims and motions in this type of federal case burns through ten or twenty thousand dollars in the blink of an eye.

Do you really think anyone has a clue what, for example, US Patent 6,760,019, is really about?

Do you think a jury in a patent trial would understand it?

Do you think a jury could make a rational decision about what might or might not be infringing or prior art relative to its first claim?

Patents were invented for things like "safety razors" - where a completely new and novel device is invented from scratch (and initially the patent office required an actual device to be submitted with the application).

Today software patents rule the patent roost - with their vague and useless descriptions of what is more than likely obvious prior art - dolled up to make them seem novel and non-obvious.

The patents themselves are merely pawns in the posturing of big companies.  No one understands them (if they even have valid meaning at all) and no one could possibly use them effectively in court.

Thursday, August 18, 2011

Androcles and the Lion... (A thorn in my paw)

So Apple has released OS X Lion (10.7) and the woes of upgrading are many.

First off, in order to get the latest XCode 4 development software you must be running Lion.  I have written here before about XCode 4 and at this point, while still not necessary, it seems like it's going to eventually be a "must have" for app development - particularly in light of iOS 5 being released this fall.

Secondly, many applications which you know and love do not work under Lion.  I suppose this is so we can have a nice, organized iPhone-like desktop under Apple's control on our $3K USD MacBook Pros.

There is a site called www.roaringapps.com that catalogs people's experiences with Lion.  As you will see - surprise, surprise - only Adobe CS5 and later work under Lion (so CS2, 3, and 4 will be headed for the dumpster after this release).

Developers in general have to be careful because as OS X has progressed its use of all things 64-bit has increased.  Unfortunately most older Intel Macs do not support 64-bit apps so it's easy to create things that are not backward compatible without realizing it, given the developer settings in XCode.

Some OS X release-date history:

10.0 - March, 2001 (Cheetah)

10.1 - September, 2001 (Puma)

10.2 - August, 2002 (Jaguar)

10.3 - October, 2003 (Panther)

10.4 - April, 2005 (Tiger)

10.5 - October, 2007 (Leopard, Intel)

10.6 - August, 2009 (Snow Leopard, Intel 64-bit)

10.7 - July, 2011 (Lion)

So all in all it looks like a major release is good for about two years, even considering significant changes like the move to 64-bit.

Hardware-wise Lion requires an Intel Core 2 Duo, Core i3/i5/i7, or Xeon CPU to function - without one of these it's a no-go.  This basically limits you to machines built since the end of 2006 or so.

Personally I keep around older machines with older major releases on them as backups should something come up that does not work in the latest OS X.  Sadly there is no other good way to reliably access the past state of affairs.

For me the "past state of affairs" is the hideous mess of PDF output options multiplied by the various CS versions and PDF drivers.  Generally these do not work consistently from OS X major release to major release. 

(With each release things like font embedding, various color schemes, and the nitty-gritty PDF details of how patterns, images and other PDF goodies are defined all change.  Downstream software that validates or checks files or performs operations on them can then fail, leading to a cascade of upgrade misery.  Most of my customers run very old software that is kept off-line from the internet - that way it continues to work regardless of upgrades.  The worst thing in the world is that some poor operations sap hits an "upgrade" button and brings down an entire manufacturing line because now the software is occasionally popping up a "do you want to download the latest version of X" dialog and hanging a machine.)

Generally my customers like consistency - that is, every day when they come in to work to run their manufacturing lines they want the same file save formats to be available.  This allows manufacturing processes tied to those file formats to work reliably.  My customers often have millions of dollars tied up in their manufacturing and they get unhappy when, for no other reason than a simple upgrade, things stop.  (Now it's far worse on the Windows side, but that's another story...)

Today's Apple upgrade mania is driven by iOS and the desire by Apple to "do away with the file system."

This is so your iOS device acts like a dumb device - say a flashlight.  God forbid little Jr. figures out how to access the underlying file structure and does something stupid.

When I end up upgrading to Lion I will no doubt be stuck turning off the features that make it look like an iOS device.

Nice!

iOS devices don't allow "programming" or "downloading of code" - which of course is the antithesis of what I, a software developer, do.

Fortunately the "jailbreak" folks are around to keep Apple honest by putting some "back pressure" on their schemes.

"Jailbreaking" is an interesting phenomenon in its own right - granted legitimate status in the courts - as a way to say "I bought this, it's mine to do what I like with it."

The one thing that's interesting is that, given some 15 million iPads in the wild, for example, Apple does not remain focused on what's in the "past" - leaving a trove of devices available for various things that run outside the current "Apple dogma".  Jobs and Apple care little about version 1 iPads, or version 3 iPhones, or what people do with them.

I predict that this will eventually come back to haunt them.

Why?

Because as everyone (like HP, Samsung, etc.) and their brother trot out new "tablet devices" the market will become saturated.  This will make the upgrade justification harder - my iPad 1 does the job I need; I'd like an iPad 2, but what is the cost-benefit of upgrading?  This does not figure into the equation when you buy the first iPad because it's new - there was no previous version to worry about.

Similarly, Google and Android are out there today with more new activations than iOS.

Android has its problems but eventually the big boys like HP will get devices out there that have corporate apps and features that will hurt Apple.

I myself have thought about using Android devices but they are still too new (not that Apple's development environment is all that great compared to, say, .NET).  But that's changing.

And the one thing that Android does not do that Apple does is "restrict" what you can do to save "battery life".

iPad 1's will become like old toys - pick one up cheap, jailbreak it, do what you will with it - the cost will be so low it won't matter if it breaks.  And with 15 million spares floating around you can afford to have fun...

Wednesday, August 17, 2011

Ethics and "the Good of the Many..."

Polio vaccination in the 1950's
There are some interesting articles around on "ethics" - in particular, the ethics of what experiments you would perform on humans if there were no ethical constraints.

First of all, what does "experiment without ethics" mean?

Well, for one thing, many experiments - such as testing the toxicity of various chemicals or injecting humans with viruses modified to perform certain functions - are never done on humans because they might harm them.

So they're done on rodents instead.

So I started to think about this.

What about something like a polio vaccine?  What about the discovery of vaccines in general?

Without violating ethics and trying something on humans never done before we would not have these things.

On the other hand, things like new drugs are tested, at least in my opinion, at an almost amateurish level before being released into the wild where they cause serious problems.

Take Vioxx (and Celebrex) and the way it increases risk for heart attack (see this on Celebrex).  These drugs were approved by the FDA in the late 1990's.

After the drugs were released clinical studies began (see this PDF) to determine if there were significant side effects.  A study run immediately after Vioxx's approval - 8,000 patients, half taking Vioxx and half not, in a blinded design - discovered a significant increase in cardiovascular events in those taking Vioxx.

Talk about closing the barn door after the horses escape.

Now how is this ethical - to perform tests on humans with an unknown toxic substance?

For one thing, if you read the history of this you will see that in the case of polio money was not the primary reason for the release of the vaccine.  Prior to its eradication polio was a serious killer - as a child you often saw pictures of kids on crutches or in "iron lungs". 

However, the rise of polio in the 1900's may have been due, ironically, to improvements in sanitation.  Prior to the 1900's polio antibodies were passed from mother to child through breast milk, providing infants immunity - even though they were commonly exposed to the polio virus through poor sanitation.  As sanitation improved and breastfeeding decreased, infants were left exposed to the virus.

Jonas Salk developed the first polio vaccine (there are actually three types of polio virus and several different types of vaccines) at the University of Pittsburgh starting in the late 1940's.  As far as I can tell, unlike Vioxx, for example, it was not developed in order for the University to make a profit (though it undoubtedly did after the fact).

I think few would argue that the introduction of the vaccine for polio was a mistake.

So what was the role of the FDA in the polio vaccine era?

According to its own web site, this: the hiring of 48 investigators to ensure that no black market vaccines were produced - a hiring which required congressional approval.

So it seems that the ethics model has changed quite a bit in the last 60 or so years. 

Polio vaccines were developed at universities and were based on research interests for the good of everyone.  Things like Vioxx are developed by businesses in order to make money.

The FDA worked to ensure that polio vaccines were not created on the black market.  Today the FDA approves things like Vioxx for reasons known only to them and then lets the companies who make it conduct post-release studies to see if the drug is safe (which it obviously is not).

Where did we as a country go wrong?

I think the FDA link says it all - 48 investigators to ensure that fake vaccines for polio were not being made.  That was their job.

Today the FDA's job is not so clear cut, nor is the process, nor is its ability to distinguish between good and bad.

Do we imagine that Salk was concerned over the risk of widespread use of his vaccine?

Of course (see this).  The initial trial involved some six hundred thousand school children.

I think that today there is an unholy alliance between the FDA and "big pharma".  I believe that the clinical trial model used by the FDA is far, far too simple in terms of the testing it performs before drugs are released.

While some argue that the FDA is too restrictive, the problem there is one of responsibility.  If I use an experimental drug and something bad happens, whose fault is it?

The fundamental "ethics violation" here is A) the replacement of "the good of humanity" with "money" by big pharma, and B) the implicit blessing FDA gives that substitution.

Did Vioxx kill some 55,000 people as various sources allege?

Were those deaths worth the "reduced pain" experienced by the remaining millions of Vioxx takers?

This is the real ethics question.

If one child dies a horrible death because of a pool drain cover, outrage ensues, requiring an entire industry to retool for an accident caused by other factors (the fact that no one was watching the child).

Yet in the case of the FDA and big pharma most of this goes by as just another news story...

Tuesday, August 16, 2011

Facebook Police: Blocking Cellphones, Saving Lives...?

A BART train in the station.
In "Is Social Media Free Speech" I discussed the nature of things like Facebook relative to free speech.

Today we have the Bay Area Rapid Transit (BART) system in the news for shutting off cellphone (I've always wondered why Google spell check thinks "cellphone" is one word but not "highschool") coverage inside its tunnels as a planned mechanism to limit an anticipated protest.

Apparently in the recent past a BART police officer (and why do things like BART or the NY Subway have their own police forces? Are the crimes somehow different?) shot a homeless man who was supposedly wielding a knife.  The protest was related in some way to this event.

In anticipation of the event BART turned off cellphone coverage in four of its stations for several hours. One of the reasons was ostensibly that, like London, the protestors would be using cellphones and social media to coordinate the protests (see this).

So now we have an interesting dilemma.

On the one hand, using social media to coordinate a protest would be a crime - though a crime if and only if it was not approved by the government and did not remain peaceful.  (I've often wondered about violent protests against those who conduct violence on innocents.)

On the other hand, what if grandma has a heart attack during her ride and no one can call for help so she subsequently dies?

Neither event happened - no protest, nobody's grandma is dead.

And BART claims that since it owns the cell antennas in its stations it has the right to do what it likes with them.

But what if their actions bring harm to others - even indirectly?  Isn't cell service today like a utility?  Something people rely on for daily on-going life? Can I rely on cell coverage to coordinate the pickup of needed medicine?  A ride to the hospital?  A life and death matter with a doctor?

Certainly there would be problems shutting off power to a few blocks of the city in anticipation of trouble.

As for cellphone "blocking", 47 U.S.C. Sections 301, 302a, and 333 (the Communications Act) say that "The Act prohibits any person from willfully or maliciously interfering with the radio communications of any station licensed or authorized under the Act or operated by the U.S. government. 47 U.S.C. Section 333. The manufacture, importation, sale or offer for sale, including advertising, of devices designed to block or jam wireless transmissions is prohibited. 47 U.S.C. Section 302a(b). Parties in violation of these provisions may be subject to the penalties set out in 47 U.S.C. Sections 501-510. Fines for a first offense can range as high as $11,000 for each violation or imprisonment for up to one year, and the device used may also be seized and forfeited to the U.S. government."

Cell jammers are illegal so one imagines that shutting off a cell antenna is a similar problem.

The problem here is that government - in this case BART, in the guise of the State of California - has taken it upon itself to limit your free speech in anticipation of your doing something illegal or wrong with that free speech.

Mind you there is a difference between texting "meet me in front of Joe's Diner" and "help me destroy Joe's Diner" and "OMG I can't believe what's happening at Joe's Diner."

In the first and last cases I may or may not know why I am to meet at Joe's - leaving other events or evidence to determine if I am involved in the "conspiracy" to trash Joe's Diner.  In the second case merely showing up is proof of my involvement in a conspiracy to damage the diner.

And even if previous public Facebook plans are made to protest in a BART station, doesn't shutting off cell service leave the remaining patrons without a means to call for help?  The protest has already been coordinated - it's not like someone is actively standing on the BART platform publicly calling for assistance in rioting.  (If they were it would be like shouting "fire" in a theater - there are already laws on the books to address that.)

Instead I think that BART is acting more like a building owner who chains the EXIT doors of a theater reasoning that there won't be a fire.

It seems like BART is taking the view that some or all of its patrons are potentially guilty of conspiring to protest illegally and therefore it is okay to deny everyone else their rights to use their phones - for which they have paid, for which the service providers have paid, and for which services have become a public "utility" like power or water.

Sadly this sort of "you must be guilty" action on the part of government has been creeping into society for years.  If, for example, police find drugs on one person in a car, in most states you, who have none, are also considered "guilty" due to an implicit conspiracy.

The long term effect of these types of laws and of actions such as BART's is a societal chill.

For example, I no longer give anyone I do not know a ride.  Why?  Because I don't know what you might have on you and I do not want to pay the price for your stupidity should I be stopped. 

This makes society a less friendly place in general because those that are responsible are treated as if they are not.

Another effect is that, since you treat me as if I were a criminal or outlaw in the first place, then I lose nothing by acting like one.

Actions such as BART's make using the subway a negative choice for patrons - particularly if they need to communicate while en route.

It seems like all of this is the slow and steady drive to "thought crime".  Social media makes it easier for the government to "listen in" on what the citizenry is up to (due to their own ignorance or lack of understanding that posting it on a publicly viewable website might be a crime).

Of course, this will only catch "stupid" criminals who Facebook about their criminal activities.

Which will lead to 'profiling' of the stupid and criminals.

Which will lead to making places like Facebook 'fairer' so that the stupid and criminal are not singled out.

Which will lead to more government agencies to help stupid people use Facebook "properly".

Which will lead to the "virtual Facebook police".

Monday, August 15, 2011

Googling for Motorola

Multics (from Multicians.org)
It looks like Google will purchase Motorola Mobility - Motorola's spun-off phone business.

While this may seem like a nice move to expand Google's Android system the real reason is a bit more interesting.  (Though Larry Page provides the usual platitudes to justify the acquisition.)

Google has been on the short end of the patent train regarding Android.  In a rather foolish move they chose Java for a large part of Android.  Unfortunately Java, developed and licensed by Sun, has fallen into the hands of Oracle and Larry Ellison.  Oracle claims Google is violating its US Patents 6,125,447, 6,192,476, 5,966,702, 7,426,720, 6,910,205 and 6,061,520.

US Patent 6,125,447 covers "Protection Domains to Provide Security in a Computer System."

Patent 6,192,476 addresses "Controlling Access to a Resource."

Patent 5,966,702 covers "Method and Apparatus for Pre-Processing and Packaging Class Files".

Patent 7,426,720 is "System and Method for Dynamic Preloading of Classes Through Memory Space Cloning of a Master Runtime System Process."

Patent 6,910,205 is "Interpreting Functions Using a Hybrid of Virtual and Native Machine Instructions."

And finally US 6,061,520 covers "Method and System for Performing Static Initialization."

What's interesting here is how old - and more than likely invalid - most of these patents seem to be.

Let's take 7,426,720.  This basically describes a system that is used to create a new process in an operating system.  The first claim is as follows:

A system for dynamic preloading of classes through memory space cloning of a master runtime system process, comprising:

    A processor; a memory; a class preloader to obtain a representation of at least one class from a source definition provided as object-oriented program code;

    a master runtime system process to interpret and to instantiate the representation as a class definition in a memory space of the master runtime system process;

    a runtime environment to clone the memory space as a child runtime system process responsive to a process request and to execute the child runtime system process; and

    a copy-on-write process cloning mechanism to instantiate the child runtime system process by copying references to the memory space of the master runtime system process into a separate memory space for the child runtime system process, and to defer copying of the memory space of the master runtime system process until the child runtime system process needs to modify the referenced memory space of the master runtime system process.


Now the basic concept here is one I learned in about 1976.  The idea is you have a process running on a computer and you need another process to start up doing the same thing.

Typically a "process" on a computer is represented as a chunk or block of "read only" machine instructions that comprise the set of what the process can do.  Anything else (for example, a second copy of the same program) would share those instructions.  Much like a new window or tab in a browser - each tab or window represents the browser and works exactly like all the other tabs or windows.

Then there is a hunk of "data" that makes the program "unique" - just like two browser windows point to two different web sites.  The browser works the same way but the data unique to each URL makes what you see different.

So in your computer's memory is a chunk of memory for your browser, let's say.  Part of that chunk is "read only" instructions that never change and part is related to the "data" of whatever page the browser is pointing to.

When a program starts up from scratch, say after you reboot your computer, it has to load the "read only" instructions as well as some of the "data" into memory from disk.  Disk is slow, memory is fast, so it takes a few seconds to do this.  This is why starting a new program always takes longer than doing something within the program.

So in the late 1960's or so programmers figured out that when you went to start a second copy of an already running program it was stupid to copy everything ("read only" and "data") into memory a second time.  The reason was simple: the "read only" instructions were already there as part of some other running copy of the program.

So they invented mechanisms to "share" the "read only" instructions among multiple versions of a program.  This meant that only one copy of those instructions need be in memory across the entire computer system at one time.

On the "data" side they figured out that it was faster to do nothing when the program started up and wait until the program actually tried to reference some portion of "data".  Once the program tried to reference "data" the host operating software (like Windows) noticed what was happening and only then bothered to locate and make a unique copy of the data available.  Again, this saved copying in all the data at startup each time the program was started and required data to only be copied as needed.
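That "defer the copy until a write actually happens" idea survives today as plain fork() with copy-on-write pages on Unix-descended systems.  A minimal demonstration (Unix-only; this is the generic operating system mechanism, not Sun's claimed system):

```python
import os

data = ["parent data"]

# fork() clones the running process.  Nothing is physically copied up
# front; parent and child share the same memory pages "copy-on-write".
pid = os.fork()
if pid == 0:
    # Child: the first write to a shared page triggers a private copy,
    # so this change is invisible to the parent.
    data[0] = "child data"
    os._exit(0)

os.waitpid(pid, 0)   # wait for the child to finish
print(data[0])       # -> parent data (the parent's view is untouched)
```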

A lot of this was pioneered in an operating system called Multics - which was the forerunner of Unix, Linux and OS X.

For example, much of what I describe above is covered in excruciating detail here in "The Multics Virtual Memory: Concept and Design" - which was written in 1972.

Now the '720 patent tries to hide this by talking about "classes."

In computer land classes are ways to organize the writing of code so that common attributes of a program can share common actions.  For example, suppose I write a routine to handle "color".  Lots of things have color: birds, balls, cars, and so on.  Instead of creating software for an onscreen bird and giving it its own notion of color and creating software for an onscreen car and giving it its own notion of color we instead create a color "class" and use that in the definition of both birds and cars.
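The reuse described above looks like this in any class-based language (a trivial sketch, Python for brevity; the names are just illustrations):

```python
class Color:
    """One shared notion of 'color', written exactly once."""
    def __init__(self, name):
        self.name = name

class Bird:
    def __init__(self, color):
        self.color = color   # reuses Color rather than reinventing it

class Car:
    def __init__(self, color):
        self.color = color   # same Color class, different kind of object

red = Color("red")
print(Bird(red).color.name, Car(red).color.name)   # -> red red
```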

Classes are old computer science news as well - popularized in the mid 1970's at Xerox PARC in the form of Smalltalk.  The details of this can be read about here:

Adele Goldberg and D. Robson. Smalltalk-80: The Language and its Implementation. Addison-Wesley, 1983.

Adele Goldberg and D. Robson. Smalltalk-80: The Interactive Programming Environment. Addison-Wesley, 1984.


In 1984 the Goldberg/Robson books were the height of geek fascination.

So what's Sun really done here that's new?

My thought is: nothing, but create a "more specific" version of something that is as old as the computer science hills.  Kind of like putting a bird cage on the roof of a car, painting the car yellow, and trying to patent it as some unique kind of car.

(See the US Patent rules here.)

In summary "an invention cannot be patented if: "(a) the invention was known or used by others in this country, or patented or described in a printed publication in this or a foreign country, before the invention thereof by the applicant for patent," or "(b) the invention was patented or described in a printed publication in this or a foreign country or in public use or on sale in this country more than one year prior to the application for patent in the United States . . ."

and

"The subject matter sought to be patented must be sufficiently different from what has been used or described before that it may be said to be nonobvious to a person having ordinary skill in the area of technology related to the invention. For example, the substitution of one color for another, or changes in size, are ordinarily not patentable."  (Emphasis mine.)

A class is just code as far as a computer is concerned and the old Multics VM model had shown the way for Java a decade or two before.

My vote is that if pressed hard the '720 is invalid.

As to why it was granted... I can only guess.  But no doubt someone at the Patent Office had little knowledge of Multics or Smalltalk.  Multics has been lost to time - primarily because it was never widely used like, say, Windows.  A few diehards have stuck around to document what it was and did - but your average "man on the street" - even one educated in Computer Science at a big U - will have little knowledge of it.

Friday, August 12, 2011

Is Social Media Free Speech?

Aftermath of the London riots
This is a question to which almost everyone I know who uses it assumes the answer is "yes, of course."

I can "say what I want" on my Facebook page.  I can post "what I want" on my blog.

But is it a place for truly free speech?

Surely I can organize a group to go down to the town square and clean up litter?

But can I use racially charged words, talk about ethnic groups, things like that?

Can I suggest going to the town square and rioting after cleaning up the litter?

This article at Wired made me stop and think about it...  At present in the UK, Facebook, texting and so forth are being used to coordinate rioting, planned destruction of property and so on.  Similarly, in Egypt and other places in the Middle East people are using social media to cause changes in government.

The government and the police have figured out what's being used to coordinate these activities and are working to stop them - sometimes in advance.

Social media is not, I believe, the "free press" as recognized by the US Constitution.  And, of course, even the free press is not allowed to print some things without some form of repercussion.

The law recognizes two broad categories of crime - the kind that you do directly such as assault or theft.

It also recognizes conspiracy.  Conspiracy is where two or more parties conspire together about some illegal action.  Even if only one person directly commits the crime, other co-conspirators can be convicted of the same crime even if all they did was convince someone else to commit it.

At its simplest conspiracy might be: "Joe's Diner has a lot of cash on Thursdays, Bob.  You should go down there and rob it."  While Bob might get caught robbing the diner, I may also be a "conspirator" because Bob and I discussed the robbery together in advance.

So now let's think about Facebook and, say, middle school.

I don't like Suzy, so I post that fact on my Facebook page: "I don't like Suzy."

Others engage in similar posting on my "Wall" in response to the comment.  Pretty soon there's a litany of 50 comments.

So far no crime...

Except if someone else reads that long list and decides to harm Suzy.  For example, someone might post "Suzy needs to be taken down to size..." and the next day Suzy gets a beat down in the bathroom.

Or someone might say "let's jump Suzy at the bus stop and show her how we feel" - and the next day Suzy gets a beat down.

Clearly the "jump Suzy" is an attempt to draw others into a conspiracy against Suzy and if a crime is committed those who post "I'm in" or who "Like" that comment could be looking at criminal legal problems.

The "taken down to size" comment, on the other hand, is less clear evidence of a conspiracy.

But in either case the consequences are not as serious as when the assault is replaced with a riot in which people are killed.

Interestingly conspiracy is fairly hard to prove when the conspiring occurs only during a walk down a secluded lane in the middle of the night - after all there is no direct evidence in that case.  But the local Facebook wall provides a simple and clear chain of evidence in today's digital age.

There are writers exploring this topic - for example, Charles Stross (his blog here) and his novel Rule 34.

These are near-future stories where cops wear 3-D glasses wired into "CopSpace" - a kind of working police "Facebook" where criminal information is shared instantaneously amongst investigators, and where Artificial Intelligences (AIs) process input from sources that include something like a future Facebook trolled for leads.  The glasses are location aware and allow the characters to post notices at crime scenes that only other cops using "CopSpace" can see.

All things which are not far at all from where we are today technologically as a society.

After all, police now carry laptops in their cars and most certainly GPS.  How long before those spiffy sunglasses they wear become 3-D glasses linked into databases and sporting cameras for facial recognition?  (This is not science fiction - the London police are currently using facial recognition to capture rioters.)  No more real need for a driver's license...  Everything the state knows about you is now online.

Just ask my friend - a "miscommunication" over a warrant in a non-criminal case got his front door broken down.

The problem is that the law enforcement computer systems used by police are not always accurate or up to date.  And a face, like fingerprints, can be "replaced" (see this).

Today bullies in schools are being rounded up through Facebook posting.

Today rioters in London are being rounded up through Facebook.

My guess is that if you have any unpleasant thoughts about anyone or anything you'd best not write them down in Facebook much less anywhere else on the Internet.

But Charles Stross's Rule 34 goes even further than this: AIs scan future Facebooks looking for postings that imply a future crime - whether bullying or murder.

And this is also not far from where things are today.  My guess is that busybodies are currently working on web crawlers to mine high school Facebook postings looking for potential "violence" - and if they're not actively working on it today, they soon will be.

Bye bye freedom.

These things are bogus - just like the little Google ad thingy that sits up in the corner of this blog.  It scans the contents of the blog and puts an ad there related to the topic.  But, as I have written before, it really can't tell if you are writing sarcastically or seriously about a topic.

So while you rail against the evils of "blah" with blistering sarcasm, Google happily puts an ad from "blah" right alongside your very post.
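The failure mode is easy to see in miniature.  The following is a toy sketch - not Google's actual algorithm, and the ad names are made up - of the kind of keyword-frequency matching the post describes.  It has no notion of the writer's stance, so a post attacking "blah" still matches the "blah" advertiser:

```python
# Toy illustration of keyword-based contextual ad matching.
# It counts topic keywords and picks the best-matching ad; it cannot
# tell praise from sarcasm because stance is never modeled.
def match_ad(post: str, ad_keywords: dict) -> str:
    """Return the ad whose keywords appear most often in the post."""
    text = post.lower()
    counts = {ad: sum(text.count(kw) for kw in kws)
              for ad, kws in ad_keywords.items()}
    return max(counts, key=counts.get)

ads = {"BlahCorp ad": ["blah"], "Garden ad": ["flowers"]}

# A post railing *against* "blah" still selects the BlahCorp ad.
post = "Oh sure, blah is just wonderful. Blah ruined my whole week."
print(match_ad(post, ads))   # BlahCorp ad
```

A "violence crawler" built on the same frequency-counting idea would misfire in exactly the same way on a sarcastic post.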

Google is a big, smart company.  Bigger and smarter than most police organizations.

They can't get it right.

Do you think the police "violence crawler" running over the local high school Facebook posts will be any smarter?

I doubt it very much.

And your witty little Johnny's sarcastic posting about whatever will land him - and you, for conspiring with him - in jail.

Isn't the future fun?

Thursday, August 11, 2011

Unfunded Weapons Liability

Interestingly there is another casualty in the "debt crisis" - nuclear waste disposal (see this).

Turns out that, since about 1982, utilities that produce nuclear power have been contributing about 1/10th of a cent per kilowatt-hour to a "fund" whose purpose is to dispose of nuclear waste, e.g., to fund things like Yucca Mountain.

This fee generates about $750 million USD per year, giving the fund a value of about $25 billion USD.
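Those figures (all taken from the article, not independently verified) hang together on a back-of-the-envelope check - the fee implies roughly 750 billion kWh of nuclear generation per year, and the fund's value corresponds to about three decades of collection:

```python
# Sanity-check the article's nuclear waste fund figures.
FEE_PER_KWH = 0.001    # 1/10th of a cent per kilowatt-hour, in USD
ANNUAL_FEES = 750e6    # ~$750 million USD collected per year
FUND_VALUE = 25e9      # ~$25 billion USD accumulated

# Nuclear generation implied by the annual fee collection:
kwh_per_year = ANNUAL_FEES / FEE_PER_KWH
print(f"{kwh_per_year / 1e9:.0f} billion kWh/year")  # 750 billion kWh/year

# Years of collection implied by the fund's value (ignoring interest):
years = FUND_VALUE / ANNUAL_FEES
print(f"~{years:.0f} years of fees")  # ~33 years
```

About 33 years of flat fees versus roughly 29 years of actual collection since 1982 - close enough once interest on the balance is considered.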

Sadly, government bureaucrats discovered this fund and have bled it dry, diverting the fees into "the general fund" instead - just like Social Security and Medicare, which are also full of empty IOUs.  Quirks in the laws and in the handling of these payments allow government officials to grab the cash without concern for the consequences.

So, instead of the federal government working to clear out the nuclear waste stored at the local nuke, nothing is happening, and the utilities that own the plants are piling up waste (they have to, because the government tells them what to do with it).

The next time a terrorist cooks up an idea to crash an airliner into the local nuclear waste pool you'll be able to thank the glad-handed bureaucrats in Washington for giving them the opportunity.

Today states like Illinois and Pennsylvania lead the nation with 8,000 and 5,000 metric tons of waste respectively - and nowhere to put it.

Which brings us back to Fukushima.

As I wrote in "US Nukes and Geological Faults," there's plenty to be concerned about here in the US.  Many reactors are near faults, and the stored waste compounds the problem: a fault endangers not only the reactor itself but also the waste stored alongside it.

And since the waste is radioactive - often at very high levels - no one is going to clean it all up with just a mop and broom.

Now the mess in Fukushima has had a number of unpleasant consequences - recent discoveries of cesium in beef and things like that (see this NY Times article).

Sadly, in the US the demands of the "general (slush) fund" outweigh safety concerns.

Quite honestly I think it's abhorrent that government officials - who, by the way, must take an oath of office - can effect transfers of billions of dollars from necessary purposes to who knows what.  And sadly, it's unlikely we will ever even know the names of those responsible, much less where the money went.  The utilities involved are suing, but that will no doubt take years; and since the government's interest in the case is merely to remove money from other "funds" to pay for this problem, in the end the only losers will be US citizens.

Had a similar shell game occurred at a private company - say, Enron - there would have been a very large hue and cry: arrests, trials, jail time.

Not so with well-meaning bureaucrats - surely leaving the country with tens of thousands of tons of nuclear waste was not as important as whatever else they were doing at the time.

Wednesday, August 10, 2011

A Short Iodine Story...

Mrs. Wolf told me an interesting iodine story the other day.

Six months ago her friend, Miss C, had confided that she had recently had a mammogram that revealed a small lump in her breast.  The doctor, while concerned, suggested she come back in six months for a follow-up.

They had previously been discussing iodine and its effects, so Miss C decided to purchase some over-the-counter, skin-use-only iodine from the local drug store and apply it to her skin.

After applying this to herself regularly for six months, it came time to visit the doctor again.

The doctor was shocked to discover that the lump in Miss C's breast could not be found.  So shocked, in fact, that she sent Miss C for additional tests - none of which could locate the lump.

Miss C received a clean bill of health with an admonishment to check in with the doctor periodically.

Did this simple at-home iodine treatment alter Miss C's outcome?

You'll have to decide that for yourself.

However, there are many sites around the internet (such as this, this and this) claiming links between breast cancer and iodine deficiency.

A few months ago Mrs. Wolf was talking to a woman who was in graduate school (I forget the exact discipline - microbiology or something like that) studying the genetic aspects of cancer.  Mrs. Wolf asked if the woman thought that cancer was in any way related to diet.

"Oh no," said the woman, "its a genetic disease."

So Mrs. Wolf asked "don't some cancer's come from the environment?"

"Well, yes," replied the woman, "certain forms..."

"Isn't your diet part of your environment?" asked Mrs. Wolf.

A lengthy discussion ensued.

The woman conceded that cancer and malnutrition could indeed be related.

More interesting, though, was that until her discussion with Mrs. Wolf it had never occurred to the woman that diet and/or environmental changes could improve a cancer outcome.  Her education was such that looking at cancer beyond the standard dogma was simply out of the question.

(Kind of like "Of course the world is flat.")

While it's nice that the FDA is so very worried about all the evils of things like cigarettes and child safety, you have to wonder why there is so much less interest in something as simple as proper nutrition.

Then there is lung cancer and smoking.

I have always been fascinated by the claim that 1/3 of all smokers don't get lung cancer.  Why one third - what's so special about them?  The relationship of smoking to cancer is a statistical one: not everyone who smokes gets cancer, and not everyone with lung cancer smokes.

I am starting to wonder if it's in fact the case that things like diet play a significant role in lung cancer.  Could it be that those with certain dietary elements - for example, high iodine - react to smoking differently?  The Japanese, for example, smoke quite a bit and yet their lung cancer rates are much lower (though their rates for other cancers are equivalent to the US, and some types are higher).

Could it be their high-iodine diet?

See this for some interesting results:  "The risk of lung cancer in the United States study population was at least 10 times higher than in Japanese despite the higher percentage of smokers among the Japanese."