Multics (from Multicians.org)
While this may seem like a nice move to expand Google's Android system, the real reason is a bit more interesting. (Though Larry Page provides the usual platitudes to justify the acquisition.)
Google has been on the short end of the patent stick regarding Android. In a rather foolish move, they chose Java for a large part of Android. Unfortunately Java, developed and licensed by Sun, has fallen into the hands of Oracle and Larry Ellison. Oracle claims Google is violating its US Patents 6,125,447, 6,192,476, 5,966,702, 7,426,720, 6,910,205 and 6,061,520.
US Patent 6,125,447 covers "Protection Domains to Provide Security in a Computer System."
Patent 6,192,476 addresses "Controlling Access to a Resource."
Patent 5,966,702 covers "Method and Apparatus for Pre-Processing and Packaging Class Files".
Patent 7,426,720 is "System and Method for Dynamic Preloading of Classes Through Memory Space Cloning of a Master Runtime System Process."
Patent 6,910,205 is "Interpreting Functions Using a Hybrid of Virtual and Native Machine Instructions."
And finally US 6,061,520 covers "Method and System for Performing Static Initialization."
What's interesting here is how old, and more than likely invalid, most of these patents seem to be.
Let's take 7,426,720. This basically describes a system that is used to create a new process in an operating system. The first claim is as follows:
A system for dynamic preloading of classes through memory space cloning of a master runtime system process, comprising:
a processor; a memory; a class preloader to obtain a representation of at least one class from a source definition provided as object-oriented program code;
a master runtime system process to interpret and to instantiate the representation as a class definition in a memory space of the master runtime system process;
a runtime environment to clone the memory space as a child runtime system process responsive to a process request and to execute the child runtime system process; and
a copy-on-write process cloning mechanism to instantiate the child runtime system process by copying references to the memory space of the master runtime system process into a separate memory space for the child runtime system process, and to defer copying of the memory space of the master runtime system process until the child runtime system process needs to modify the referenced memory space of the master runtime system process.
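Stripped of the legalese, the claim describes a pattern familiar from Unix: a long-lived master process loads expensive definitions once, then clones itself on request so each child starts with everything already in memory. A minimal Python sketch of that idea (the `preloaded` table and `handle_request` are my own illustrative names, not from any actual runtime):

```python
import os

# Hypothetical "master runtime": load expensive class definitions once, up front.
preloaded = {"java.lang.Object": object, "java.util.List": list}

def handle_request(class_name):
    """Clone the master with fork() to serve one request.

    The child process starts with the master's memory already populated,
    so it never repeats the slow loading step; pages are shared until
    written to (copy-on-write). Returns True if the child found the class.
    """
    pid = os.fork()                      # POSIX-only copy-on-write clone
    if pid == 0:                         # child: the "child runtime system process"
        found = class_name in preloaded  # already in memory, no disk I/O
        os._exit(0 if found else 1)
    _, status = os.waitpid(pid, 0)       # master waits for the child
    return os.WEXITSTATUS(status) == 0

print(handle_request("java.util.List"))  # True: served from preloaded memory
```

The child never touches disk to "load" anything; the fork hands it the master's memory for free.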
Now the basic concept here is one I learned in about 1976. The idea is you have a process running on a computer and you need another process to start up doing the same thing.
Typically a "process" on a computer is represented as a chunk of "read only" machine instructions that make up everything the process can do. Any second instance (for example, starting another copy of the same program) would share those instructions. Much like a new window or tab in a browser: each tab or window is the same browser and works exactly like all the other tabs or windows.
Then there is a hunk of "data" that makes the program "unique" - just like two browser windows point to two different web sites. The browser works the same way but the data unique to each URL makes what you see different.
So in your computer's memory is a chunk of memory for your browser, let's say. Part of that chunk is "read only" instructions that never change and part is related to the "data" of whatever page the browser is pointing to.
When a program starts up from scratch, say after you reboot your computer, it has to load the "read only" instructions as well as some of the "data" into memory from disk. Disk is slow, memory is fast, so it takes a few seconds to do this. This is why starting a new program always takes longer than doing something within the program.
So in the late 1960's or so, programmers figured out that when you went to start a second copy of an already running program, it was stupid to copy everything ("read only" and "data") into memory a second time. The reason was simple: the "read only" instructions were already there as part of some other running version of the program.
So they invented mechanisms to "share" the "read only" instructions among multiple versions of a program. This meant that only one copy of those instructions need be in memory across the entire computer system at one time.
On the "data" side they figured out that it was faster to do nothing when the program started up and wait until the program actually tried to reference some portion of "data". Once the program tried to reference "data" the host operating software (like Windows) noticed what was happening and only then bothered to locate and make a unique copy of the data available. Again, this saved copying in all the data at startup each time the program was started and required data to only be copied as needed.
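These two tricks, shared read-only instructions and data copied only when written, are exactly what `fork()` gives you on any Unix descendant. A short Python sketch of the behavior (`spawn_worker` is my own illustrative name):

```python
import os

shared_data = [1, 2, 3]  # the "data" part; loaded once by the parent

def spawn_worker():
    """Clone the running process with fork() and return the child's exit code.

    Parent and child share the same physical memory pages at first; the
    kernel copies a page only when one side writes to it (copy-on-write).
    """
    pid = os.fork()              # POSIX only
    if pid == 0:
        shared_data.append(4)    # this write forces a private copy of the page
        os._exit(len(shared_data))
    _, status = os.waitpid(pid, 0)
    return os.WEXITSTATUS(status)

print(spawn_worker())            # 4: the child saw its own modified copy
print(shared_data)               # [1, 2, 3]: the parent's data is untouched
```

The child's write never reaches the parent; the operating system quietly made the copy only at the moment it was needed.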
A lot of this was pioneered in an operating system called Multics - which was the forerunner of Unix, Linux and OS X.
For example, much of what I describe above is covered in excruciating detail here in "The Multics Virtual Memory: Concept and Design" - which was written in 1972.
Now the '720 patent tries to hide this by talking about "classes."
In computer land classes are ways to organize the writing of code so that common attributes of a program can share common actions. For example, suppose I write a routine to handle "color". Lots of things have color: birds, balls, cars, and so on. Instead of creating software for an onscreen bird and giving it its own notion of color and creating software for an onscreen car and giving it its own notion of color we instead create a color "class" and use that in the definition of both birds and cars.
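The bird/car example above looks like this in code (`Color`, `Bird`, and `Car` are my own illustrative names):

```python
class Color:
    """One shared notion of "color"; written once, used everywhere."""
    def __init__(self, name):
        self.name = name

    def describe(self):
        return f"painted {self.name}"

class Bird:
    def __init__(self, color):
        self.color = color   # reuses Color rather than reinventing it

class Car:
    def __init__(self, color):
        self.color = color   # same shared class, no duplicated code

yellow = Color("yellow")
print(Bird(yellow).color.describe())  # painted yellow
print(Car(yellow).color.describe())   # painted yellow
```

Neither `Bird` nor `Car` carries its own private idea of color; both lean on the one shared `Color` definition.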
Classes are old computer science news as well, originating in the mid-1970s at Xerox PARC in the form of Smalltalk. The details can be read about here:
Adele Goldberg and D. Robson. Smalltalk-80: The Language and its Implementation. Addison-Wesley, 1983.
Adele Goldberg and D. Robson. Smalltalk-80: The Interactive Programming Environment. Addison-Wesley, 1984.
In 1984 the Goldberg/Robson books were the height of geek fascination.
So what's Sun really done here that's new?
My thought: nothing, except to create a "more specific" version of something that is as old as the computer science hills. Kind of like putting a bird cage on the roof of a car, painting the car yellow, and trying to patent it as some unique kind of car.
(See the US Patent rules here.)
In summary "an invention cannot be patented if: "(a) the invention was known or used by others in this country, or patented or described in a printed publication in this or a foreign country, before the invention thereof by the applicant for patent," or "(b) the invention was patented or described in a printed publication in this or a foreign country or in public use or on sale in this country more than one year prior to the application for patent in the United States . . ."
and
"The subject matter sought to be patented must be sufficiently different from what has been used or described before that it may be said to be nonobvious to a person having ordinary skill in the area of technology related to the invention. For example, the substitution of one color for another, or changes in size, are ordinarily not patentable. (Underline my own.)"
A class is just code as far as a computer is concerned, and the old Multics VM model had shown the way for Java a decade or two before. My vote is that, if pressed hard, the '720 is invalid.
As to why it was granted... I can only guess. But no doubt someone at the Patent Office had little knowledge of Multics or Smalltalk. Multics has been lost to time, primarily because it was never used widely like, say, Windows. A few diehards have stuck around to document what it was and did, but your average "man on the street," even one educated in Computer Science at a big U, will have little knowledge of it.
Ahem, credit where due:
Atlas Supervisor (1962): shared code between processes, paged virtual memory
Simula 67 (1967): objects and classes