I've always liked robots...
My fascination began as a child watching The Munsters, where Eddie creates a robot for the school science fair.
After that my neighbor and I spent a lot of time with Erector sets and various mechanical junk, scavenged from anywhere we could get our hands on it, trying to create a robot. Of course we could get a small wheeled cart to roll around with a dummy body on top - but that was about it.
This of course led to other thoughts about how the robot would be able to see, or hear, or think.
Having no computer skills, about all we could imagine was simple logic based on switches or relays. But trying to build something interesting that would act on its own was simply out of the question.
Then there was the 1965 "Lost In Space" series. At the time probably the most interesting "sci fi" TV show - complete with the "robot". This, along with Star Trek reruns fueled my interests in robots and space travel.
But that was a lot of years ago.
These days I am most impressed by the Boston Dynamics robots, "BigDog" in particular (also video here; there are other videos around the net but some have been removed by Stanford University for "copyright" reasons):
What I find most fascinating about these is how their legs move. Very animal-like.
I also find it kind of ironic that the "BigDog" is powered by a gasoline engine. That's right - no nasty lithium ion batteries or anything like that. Plain and simple fossil fuel. From the video it sounds like a simple two-cycle engine of some sort.
"BigDog" had been preceded by "LittleDog" (video here, not youtube so I cannot embed it...).
This little guy is quite a bit more tentative than his big brother - though he seems to get the job done albeit more slowly.
I guess the most interesting part of all this is that the practical robots being created today are most like animals - dogs or mules, I suppose. This makes a lot of sense when you think about it, because it's a lot harder to balance on two legs than four.
I suppose that a lot of today's practical interests in technology are fueled by childhood ideas spawned by TV and books. Flat screen TVs, for example, a staple of 1960s sci fi, are a reality today.
I have been involved in high tech, graphic arts, computer software and hardware design for more than 40 years. I've been blogging about vaping since early 2009. I work on advanced robot vision, 3D, SONAR, LIDAR, and software technology. I own my own business. I have set up this blog to talk about who I am, what I do, and to publish my opinions...
Monday, February 28, 2011
Friday, February 25, 2011
Adventures in Printing a Single Page...
OS X 10.6 - See the PPD selector third from top...
So last night I needed to create a design for the physical portion of my iPad project. This involved sketching out some mechanical designs in my note book.
This is standard fare - simple 8.5" x 11" graph paper.
So I sketched out my design.
Next step is to get it into the computer so that I can trace over it in an ancient CS2 version of Illustrator I happen to have lying around.
So I snap a picture with my iPhone.
Once it's on the iPhone, I have found the easiest way to get camera images directly into the computer is by using an app called "Image Capture". This runs on the Mac and is located in the Applications folder.
This bypasses all the useless iTunes nonsense involved in accessing images on your iPhone.
You hook up your iPhone and launch "Image Capture". It perceives the phone as a generic camera device and presents all the pictures for you to select from. You simply select the images you want and have it copy them to the hard drive. So no fiddling about with iTunes.
Once I had my design in Illustrator I completed the tasks at hand.
The final sheet size ended up being 20" x 20".
Since I need this printed at size for manufacturing a prototype I figured I could just tile my way out of trouble. But not so fast...
For the purposes of iPhone development I had switched from OS X 10.5 to 10.6 on one of my development machines - the same machine that I was using to print the sheet out of Illustrator. But, lo and behold, all the printing options have been completely buggered up.
The first problem was that somehow, after years of PPD-less pleasure on the Mac, the old PPD print menu had reappeared. My heart sank... PPDs are, well, the worst possible annoyance associated with printing from a computer you can imagine.
So the PPD and printer driver (yes, the new 10.6 print screens want you to have a printer selected as well) conspire to prevent you from printing anything. There are now thousands of useless menus to help you print (this is a B/W print job so I don't need any of this...).
So I fiddle about some more - probably spending 20 minutes - surely there must be a way to print tiled output.
Alas, no - apparently in their wisdom at Apple (and perhaps Adobe as well) this option appears to be gone. Imagine, someone wanting to print out something with tiling. I suppose I'll have to run out and buy a large format device... er, well, perhaps not in this economy...
So now what?
After more fiddling I decide that the best option is to Save the job as PDF and try my luck elsewhere.
Fortunately that still seems to work.
Next up, an old creaky version of Acrobat.
I open my 20" x 20" PDF and hunt around for tiling options.
At least these appear to exist. So I foolishly figure that I'll set the page size to 11" x 17" (to get the fewest tiles) and print it out...
No luck.
Though in Page Setup I can select printing to 11" x 17" the tiling portion of the print dialog quietly ignores this and prints to 8.5" x 11" - leaving most of what I want to print in the bit bucket on the floor.
More fiddling.
Finally I tell Acrobat that the page size is 8.5" x 11".
I try tiling and finally I get 12 (yes, count 'em, 12) 8.5" x 11" sheets to cover a 20" x 20" print out.
Nice.
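For what it's worth, the tile count is just a ceiling calculation over the printable area of each sheet. Here is a little Python sketch of the arithmetic; the printable-area and overlap figures are my own guesses, not whatever Acrobat actually used:

```python
import math

def tiles_needed(doc_w, doc_h, printable_w, printable_h, overlap=0.0):
    """Sheets needed to tile a doc_w x doc_h document onto sheets with
    the given printable area, overlapping adjacent tiles by `overlap`
    (all dimensions in inches)."""
    cols = math.ceil((doc_w - overlap) / (printable_w - overlap))
    rows = math.ceil((doc_h - overlap) / (printable_h - overlap))
    return cols * rows

# 20" x 20" artwork on letter-size paper:
print(tiles_needed(20, 20, 8.0, 10.5))      # 6 sheets with a generous printable area
print(tiles_needed(20, 20, 6.5, 9.0, 0.5))  # 12 sheets with big margins and overlap
```

Depending on the margins and overlap the application insists on, anywhere from 6 to 12 sheets is plausible - which at least makes the 12 sheets above believable.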
So, with tens of thousands of dollars of computers and software at my disposal this is what I have to go through to print out a large sheet.
For me, though, the reemergence of PPDs is quite simply beyond belief (I touched on PPDs in my "Cloud Printing" post back in September). I made a lot of money over the years helping people to understand that the last thing that they needed when printing was a PPD.
I remember struggling with Acrobat and PPDs back in 2000 or so (perhaps 1999), thinking "what the hell kind of nonsense is this?"
Now, of course, things have come full circle and PPDs are back again...
Thursday, February 24, 2011
Women are Insane, Men are Stupid...
(With limited time these days I couldn't resist this topic on my personal blog - the Lone Wolf will have to wait for another day...)
Wednesday, February 23, 2011
Quantizing our Lives
I came on some interesting statistics the other day.
It seems that teenagers, according to this site, process about 3,400 text messages a month. For a normal 172-hour adult work month (4.3 weeks/month x 5 days a week x 8 hours a day) that means about 20 text messages an hour.
Think about that - every three minutes, on average, you are receiving or sending a text message. And while it's less for older age groups today, it seems obvious that this trend will stick with this age group as they grow.
I know that in the past there have been and still are arguments about TV "shortening our attention spans" but it would seem that this statistic tells the tale - you're going to get interrupted every three minutes.
Then there is the time it takes to process the text message - say at least 30 seconds. This means that you really only have 2.5 minutes of free time between the current text and the next one.
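The arithmetic behind those figures, spelled out in a few lines of Python (the 30-second handling time is, of course, just a guess):

```python
texts_per_month = 3400
hours_per_month = 4.3 * 5 * 8             # 4.3 weeks x 5 days x 8 hours = 172 hours
texts_per_hour = texts_per_month / hours_per_month
minutes_between = 60 / texts_per_hour
handling_minutes = 0.5                    # assume 30 seconds to deal with each text
free_minutes = minutes_between - handling_minutes

print(round(texts_per_hour, 1))   # 19.8 - call it 20 an hour
print(round(minutes_between, 1))  # 3.0 - one text every three minutes
print(round(free_minutes, 1))     # 2.5 - minutes actually free between texts
```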
When you have a process being interrupted on a regular basis like this it's called quantization. Quantization means that things related to the process happen in regular "chunks". While you may not realize it, everything you do with technology is quantized: TV, video and Flash are all displayed at a frame rate of 30 to 60 frames a second - that is, you are not seeing a smoothly changing video image but rather a sequence of still images that each change only a small bit from the one before.
Similarly, any digital sound is "sampled", or quantized, at 44.1 kHz - that is, you don't hear a smooth analog signal like you might on an old AM radio. Instead the sound is measured 44.1 thousand times per second and turned into bits. The bits are sent to your audio player, which reconstructs them back into smooth-sounding audio.
However, quantization of something causes information to be lost.
What I mean by this is that if I am sampling my audio signal very rapidly, on average I am going to get sound out of the process that sounds very nice - like I would expect. However, information about the original audio that falls between the samples is lost.
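A toy demonstration of the point: a one-millisecond "click" that falls entirely between two samples simply never shows up at a low sample rate. (The signal and the rates here are made up for illustration.)

```python
def click(t):
    """Silence everywhere except a 1 ms blip around t = 10.5 ms."""
    return 1.0 if 0.010 < t < 0.011 else 0.0

slow_rate = 100      # samples per second -> samples at 0 ms, 10 ms, 20 ms, ...
fast_rate = 10_000   # samples per second

slow_samples = [click(i / slow_rate) for i in range(int(0.02 * slow_rate) + 1)]
fast_samples = [click(i / fast_rate) for i in range(int(0.02 * fast_rate) + 1)]

print(max(slow_samples))  # 0.0 - the click fell between samples and is gone
print(max(fast_samples))  # 1.0 - sampled fast enough, the click is captured
```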
If you have ever used the "single frame" video button on a DVR you will see that while each successive frame changes very little, there are still places where something between frames can be and is lost.
So this continuous interruption, or quantization, is happening in our lives. And, more importantly, in the lives of our children. What's lost is the time it takes to get back to where you were mentally after an interruption. So if you are interrupted every 3 minutes, and it takes 30 seconds to answer the text and a full minute to gather your thoughts back up to the point you were at before the text, that leaves you about 90 seconds in which to accomplish something.
Now, as an adult, particularly when working, I find it important to be able to concentrate on something for long periods. Concentration, while obviously involving mental effort, also takes time.
So suppose your child is working on a simple math word problem: "Frank was hauling a wide load out of Boston. At 8:00 a.m. he headed west from exit #102 at an average speed of 35 mph. Victor headed west from the same exit at 11 a.m. in his Lexus. By 2 p.m. the same day Victor was 15 miles ahead of Frank. What was Victor's average speed?"
So Frank drives 6 hours at 35 mph or 210 miles by 2:00 pm.
Victor is at 225 miles by 2:00 pm, but since he left at 11:00 AM he has only been driving 3 hours.
So Victor drives 225 miles / 3 hrs or 75 mph.
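The same check in a few lines of Python, just to confirm the arithmetic:

```python
frank_speed = 35                            # mph
frank_hours = 14 - 8                        # 8 a.m. to 2 p.m. is 6 hours
frank_miles = frank_speed * frank_hours     # 210 miles by 2 p.m.

victor_miles = frank_miles + 15             # 15 miles ahead = 225 miles
victor_hours = 14 - 11                      # 11 a.m. to 2 p.m. is 3 hours
victor_speed = victor_miles / victor_hours

print(victor_speed)  # 75.0 mph
```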
Not too hard. But if I am being interrupted frequently enough I doubt even I could do this problem.
It takes me about a minute to do this problem, maybe two if I count the time to write down the answer.
But, if I were 13 or 14 years old, in 8th grade, it would probably take a few more minutes of work - let's be generous and say four minutes.
Now, in my quantized reality that means I will get interrupted at least once during my effort to complete the problem.
A significant part of the time it takes to solve the problem is figuring out what the problem is telling me - and therefore I need to concentrate. And concentration is sequential and linear, i.e., I have to start at the beginning and work through to the end. So I first have to figure out what the "knowns" are in the problem - when did Frank leave, how long did he drive, and so on. I can't solve the problem unless I have that information.
But what if I am interrupted at a rate that interferes with me building up what I need to know... say I get 80% of the way there and the phone pings with a text. I spend 30 seconds or a minute to deal with the text. Can I pick up where I left off in my concentration?
I doubt it...
Maybe I can get back to 20% or 40% of where I was, but probably not 80%.
So this means that what should take me 4 minutes to complete takes two or three times as long. And I probably don't really learn what I need to (notwithstanding that I am texting someone else for the answers).
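You can put rough numbers on this claim. The little model below assumes a text every three minutes, 30 seconds to handle each one, and a full minute to refocus afterward - the same figures used earlier - so only 90 seconds of real work fits between texts:

```python
import math

def wall_clock(task_minutes, interrupt_every=3.0, handle=0.5, regather=1.0):
    """Wall-clock minutes to finish `task_minutes` of focused work when a
    text arrives every `interrupt_every` minutes, takes `handle` minutes
    to answer, and costs `regather` minutes to get your train of thought
    back afterward."""
    focused_per_interval = interrupt_every - handle - regather  # 1.5 min here
    intervals = math.ceil(task_minutes / focused_per_interval)
    # every interval but the last runs its full length; in the last one
    # you stop as soon as the remaining work is done
    leftover = task_minutes - (intervals - 1) * focused_per_interval
    return (intervals - 1) * interrupt_every + handle + regather + leftover

print(wall_clock(4))  # 8.5 - the four-minute problem takes more than twice as long
```

And that is before counting any progress actually lost when concentration resets, so "two or three times as long" is, if anything, optimistic.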
Now imagine that I am doing something important - and the phone is constantly pinging. How is this loss of concentration different than being over tired or drunk? (This is exactly why texting is illegal while driving.)
So given this rate of interruption how can you concentrate, have a life, have a relationship, drive, or concentrate at all?
I think this is an important question.
Further, this sort of quantized existence can be addicting. You literally become so used to the interruptions that you crave them when they are absent.
As adults, managing interruptions is a skill. You cannot respond every time a text message arrives or you will simply thrash - never getting anything done. Sadly, this is a skill most adults do not seem to have mastered. (Perhaps they are too busy answering texts to notice the problem.)
If you look at the performance of the US in science and math compared to the rest of the world, it's lackluster - and this is probably one of the reasons why.
Perhaps this is why much of today's world is not well thought out.
No one literally has the time in the corporate world to sit down, uninterrupted, and reason out a complete chain of events.
And then there are doctors and others performing important tasks related to our well being - do we really want them interrupted all the time?
Tuesday, February 22, 2011
Our Future is not MIL SPEC
At least this British guitar amp is MIL SPEC.
But first a little background. Long ago I worked in the defense industry and before that, at the dawn of time, in the electronics industry that made parts for the defense industry.
In those days militarized equipment was much different than it is now - particularly on the computer and integrated circuit front. There was something called "MIL SPEC". This meant a number of things. First off, the circuitry required to make something "MIL SPEC" (and hence acceptable to the military purchasing department) had to be rated for military temperatures - operational from about -55 degrees centigrade to about +125 degrees centigrade.
Each IC (integrated circuit) did not just get soldered to the circuit board - the leads were also bent in such a way as to ensure physical contact. So lots of little old ladies sat around making sure things were properly attached to the circuit boards.
Software had to meet various rigorous testing requirements and not be merely "industrial" grade. This meant that the military spent a lot of money building things from scratch so that they would not be susceptible to various forms of hackery and failure. Software was designed and built from the ground up to be reliable. After all, you can't have your computerized gun sight crap out just as you are taking aim at a nasty enemy combatant...
But like everything else all of this costs too much today...
No one wants to spend $100 million USD building what amounts to a MIL SPEC PC or laptop when you can go down the street and buy one for $100 USD.
So as the defense budget gets cut further and further our troops end up with run-of-the-mill technology.
And, for that $100 you, as they say, get what you pay for.
So now we see the Pentagon jumping on the "Cloud Computing" bandwagon. Secure, safe world-wide computing and storage capacity - just like Google...
Oh wait, Google, aren't they the folks busy tinkering with the search engines to help turn over dictators?
I bet the Pentagon doesn't have its own cloud services - I bet it ends up buying them because, well, it's not given the money to do the job right in the first place.
The only problem is that those same flip-flop wearing, Red Bull drinking hackers busy with Stuxnet know all about these platforms, this software, how it works, and what its flaws are.
And that's what the real issue is.
The US Military is busy focusing on a foe, particularly in the Middle East, that's technologically ignorant. Sure, they have cellphones, satellite TV, and so forth... but they just buy them ready-made. They are not developing the technology on their own: no satellite launches, nothing.
So we develop advanced model airplanes to fly around with cameras and spy on them - like watching puppies play from the balcony.
But what about a real foe? Someone who can actually think up technological weapons on their own?
Aren't we more focused on the technologically ignorant than we should be?
Someone with a brain might figure out that drone communication and information gathering can be interfered with remotely. Someone who doesn't live in a mud hut...
So what does this mean? It means that we, as a country, are downgrading our defenses to match our enemies. Meanwhile China, for example, is busy launching satellites, building stealth fighters, buying up our debt, and so forth. Leaving us, as they say, holding the "bag".
Our technology companies, like Google, are no longer our friends - just look at the involvement of Google execs in the unrest in Egypt. Is this technology the US Military should be depending on?
No, our future safety and our technological military leadership have been forfeited to promised health care and pensions - funded by debt the Chinese own - that we can never afford to pay...
Monday, February 21, 2011
Rail Guns and Laser Beams
Those pesky Naval weaponry developers have been at it again.
I have always been interested in Naval weapons development. Back in the late 1980's I had a company that was looking at work from various defense agencies. We made a sales trip to the Naval weapons storage facility in Indiana (Naval Surface Warfare Center Crane Division).
After entering the main gate you drove past what seemed like miles of "bunkers" - bunkers containing the charges used to propel large shells from the big on-deck naval guns. Once we arrived at the main building, out front was a mock Polaris missile. Since the charges are affected by age and other factors, the Navy keeps close track of their age.
The bunkers themselves look like Indian mounds with a slot cut through one end. The slot is lined with concrete on both sides. This is so if there is an explosion the blast will be deflected away from other mounds.
Fascinating stuff...
But not as fascinating as some recent Naval weapons developments...
First there is the "Mach-8 Rail Gun" (see the video here). The idea is to replace all those endless miles of charges in bunkers with a big, powerful electromagnetic system that can launch a piece of metal just as far - no gun powder, no danger. You charge up a big bank of capacitors (devices that store electrical energy - sort of like the static charge you build up walking around a dry house in the winter, except with real current involved). You then discharge the capacitors through a pair of parallel conducting rails bridged by a sliding armature that carries the projectile. The enormous current flowing down one rail, through the armature, and back along the other creates a magnetic field, and the force between that field and the current accelerates the projectile along the rails until it flies out the end.
Going at about 6,000 feet per second - roughly twice as fast as a typical rifle bullet.
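Some rough energy bookkeeping shows the scale involved. The projectile mass and capacitor-bank figures below are illustrative guesses on my part, not actual Navy numbers:

```python
def kinetic_energy(mass_kg, velocity_ms):
    """Kinetic energy in joules: 1/2 * m * v^2."""
    return 0.5 * mass_kg * velocity_ms ** 2

def capacitor_energy(capacitance_f, voltage_v):
    """Energy stored in a capacitor in joules: 1/2 * C * V^2."""
    return 0.5 * capacitance_f * voltage_v ** 2

velocity = 6000 * 0.3048                     # 6,000 ft/s is about 1,829 m/s
shot = kinetic_energy(10, velocity)          # assume a 10 kg slug
bank = capacitor_energy(2.0, 11_000)         # assume a 2 F bank charged to 11 kV

print(f"{shot / 1e6:.1f} MJ at the muzzle")  # 16.7 MJ
print(f"{bank / 1e6:.1f} MJ in the bank")    # 121.0 MJ
```

The bank has to store far more energy than the shot delivers, because the conversion from stored charge to projectile motion is far from lossless.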
There is no explosive powder charge to store or handle as with a traditional powder-fired shell, and you can knock incoming missiles, planes, and drones out of the sky with deadly efficiency.
But if you don't like actually firing "shells" - even simple metallic ones - at incoming weapons, there is always the new death ray. This is the FEL, or free-electron laser. The idea here is that to knock incoming planes, shells and other weapons out of the air you need a lot of energy in your laser beam - more than you can get from a conventional laser.
So the FEL uses an accelerator to generate a high-energy beam of electrons instead. A big magnetic "ring" is created to hold the electrons. Electrons are injected into the ring and the magnets are used to accelerate the electrons around the ring at nearly the speed of light. Once accelerated the electrons are used to stimulate the laser activity.
This is done by passing the electron beam through a lasing cavity to create a laser beam. The electron beam is synchronized (made coherent) with the laser activity in the cavity to create the most powerful laser on the planet. The power of the laser depends on how many electrons are used - more electrons means more power.
The FEL can also be tuned across a range of frequencies, allowing the Navy to pick wavelengths that pass easily through the air to reach the target.
So, like the rail gun, the Navy is wisely replacing traditional shells and gun powder with magnets and electrons. Unfortunately it will probably take at least eight to ten more years before these weapons are actually in use on ships.
Friday, February 18, 2011
Artificial Life and Intelligence
The game of "Life"...
She is in her eighties and expressed some fear with regard to Watson; her thoughts on talking computers were shaped forever by the HAL 9000 in 2001: A Space Odyssey.
What's interesting is how people associate intelligence with something like Watson - not the intelligence it took to create Watson, but instead the fact that Watson appears intelligent even though it's a simple machine.
Since 1950 the measure of whether something like a computer is intelligent has been the Turing Test. Basically this is a test where a human judge and a "subject" (a computer) communicate remotely with each other (so the human cannot tell simply by external evidence whether the subject is a machine or not). The human gets to ask questions and the subject responds. If the human judge cannot reliably determine whether the subject is a machine, the machine is said to have passed the test.
Something like Jeopardy! is not a Turing Test for several reasons. One is that the questions are simply "right or wrong" questions. So if I had a book that compiled every question ever asked on Jeopardy! I could always find the correct answer - effectively Jeopardy! itself is the database of all such questions and answers.
This database of questions does not represent intelligence - merely simple look ups. Certainly Watson has to string together information to "find" answers - but the answers have to be in his database in the first place for him to find them.
Then there is the "domain" of Watson's knowledge. This is basically a measurement of how big the "realm of knowledge" is that Watson can work from. IBM said that this was comprised of 200 million pages of documents: dictionaries, encyclopedias, and so forth. This realm is actually very small because it's limited to reasoning about a fixed set of information (fixed in the sense that it does not change for the "life" of Watson's play on Jeopardy!) and it does not address current information (news, weather, and so on), knowledge about social things, information about doing things, and much, much more. Watson also cannot "see" or "hear" or "touch", nor does it have a body with which to relate feelings.
All this means Watson cannot know what it feels like to do anything, cannot be asked whether he likes the weather today or whether (he/she/it) loves (his/her/its) significant other.
Watson has no internet connection - which means Watson cannot know about things that are changing or that are wrong (for example seeing the face of Elvis on Mars). So Watson only deals with correct information - correct in the sense that it does not have to decide which of two conflicting facts is correct (this may happen inside Watson at a small scale but not in the sense that someone or something is actively trying to deceive another, e.g., a Bernie Madoff).
Watson is also "deterministic" - that is, it is going to come up with the same answer today or ten years from now given the same database. So he is not "learning from his mistakes."
Don't get me wrong - Watson is certainly an impressive technical achievement - but he is certainly no HAL 9000.
There are many other impressive technical feats of computer engineering on par with Watson. For example, Google's self-driving cars, the "Deep Blue" chess computer, and many others. But they are just that - engineering.
Humanity certainly has the technology to attempt to create something like a HAL 9000 or a SkyNet today. A few hundreds of millions of available computers all networked together across the face of the earth could be linked with AI-type software to demonstrate intelligence. But if each computer equals a single neuron that network provides only about as much intellectual horse power as an octopus (see this link).
The problem and challenge with all this is simple: Intelligence exists for a purpose. You cannot have software acting intelligently without that software having some reason to be intelligent, i.e., survival.
The first thing I would do is create an artificial environment in the vast sea of computers - a place where something "alive" could be represented, could move, learn, perceive and act, could carry the equivalent of genes, and could have a purpose. This concept already exists to a degree in projects like SETI and in networked games (Second Life, World of Warcraft, etc.)
I would then construct some type of artificial "life" designed to live in that environment - think of it as a computerized player in one of the games I mentioned.
It would not be hard to have some sort of "life" living in this type of world - what it might be or might look like would be hard to say, but I don't see why humanity isn't working on this... Well, they are or rather were, actually (see this), but most of the links no longer exist, so perhaps the artificial life they were working on is dead?
Thursday, February 17, 2011
Apple Rage... (Cocoa Sucks!)
I have spent the last couple of days trying to prove to myself that I had not gone insane...
(Skip this if you are not interested in computers and programming but take away this: Imagine if one day you discovered that your car would not start if you were wearing your red jacket. Of course the red jacket itself was not really the issue; the real issue was the fact that, unbeknownst to you, someone accidentally left a jammed key fob in the pocket that messed up the car's ignition system. You would think you were going insane because how could the car know what jacket you were wearing? The point being that the unknown key fob is making it appear that the jacket is the issue. This is a story about how you eventually figure out the real problem...)
I have been working with Apple software to develop an iPad application. Apple software is based on something called Objective-C, developed by Brad Cox. This is a holdover from the days that Steve Jobs was at NeXT computer. NeXT was a company Jobs started after leaving Apple in the late 1980s.
Objective-C is an unpleasant version of the language C. C is a programming language popular since the 1970's and C++ is its main evolutionary branch for language advancement.
Unfortunately NeXT built its software on Objective-C, and when Apple bought NeXT - bringing Jobs back to Apple - that software became the platform for what is today OS X. Objective-C is an "object oriented" language - which I will not explain here - that is based on the concept of sending "messages" to "objects".
So conceptually you can think of a "door". I can tell the door to "open" or to "close" by sending the "door" a message. This, of course, is all just fine and dandy. The problem with Objective-C and Apple is that in order to make software development work on their platforms they have to tinker with the machinery associated with just how this simple concept works...
So in Objective-C you might send the door a message like this:
[door close];
The "door" is the receiver and "close" is the message. I might create a "door" like this:
door = [[Door alloc] initWithType:wooden];
What this does is allocate a new "door" object and initialize it to being of type "wooden".
In this model the "door" holds a value which represents the door object in the computer's memory.
The first problem comes in when the alloc message, which causes a new door to be created, fails.
Objective C represents this by setting the door's value to something called nil. So, perhaps we are out of doors, and instead of there being some kind of exception or error we just get nil.
The next issue is that in common usage Apple software doesn't bother to check whether things have failed, so instead of saying something like:
if (door == nil) { ... error ... }
they just rely on the fact that in Objective-C you can send any message you want to nil. When nil receives a message it simply does nothing at all.
Unfortunately this seems to be a lifestyle choice for Apple programmers. Over the years Apple has expanded the Objective-C concept into something called Cocoa. Cocoa and, to some extent, the iPhone platform, rely on Objective-C along with a lot of enhancements and additions to make their software function. So the notion of things simply ignoring situations that are "wrong" is now embedded into these platforms.
So in Cocoa you have a much richer and more complex system for dealing with things like windows on the display, disks and networking. This concept includes something called delegates. A delegate, like my example of the "door" above, can receive messages. So, for example, you might have a mouse on your computer.
In the code in the computer that deals with the mouse you might have a mouse object. That mouse object might assign a delegate to receive messages, e.g., the fact that the mouse moved.
You might ask why have a delegate for a mouse when the mouse should be handling things on its own.
The reason for this is that most modern computer systems like Windows, OS X and Linux have something called "event loops" in them to monitor the computer's activity. The event loop monitors whatever the computer is doing; for example, it watches the information that the mouse is sending to it as it moves (perhaps over a wireless connection or through a USB cable). If a mouse object in Objective-C has asked the computer to notify it about things concerning the mouse, then the event loop will take mouse information and send it to the mouse's delegate.
Now at least to me this seems wrong. Why not just tell the mouse what to do? Why bother to create software that requires the mouse to have a delegate to do its work for it?
What's even worse in Objective-C is that you need to declare in your code that, when you are creating a mouse object for example, its delegate officially adopts the protocol for the notifications it will receive. That way the rest of the computer knows what to do when mouse changes occur.
What made my life miserable for the last day and a half was that some code I had been working on did not have the delegate part yet it continued to work. Since I used some code from Apple as a starting point I foolishly assumed that what was in it was right - and it was to a point - but not to the point I was relying on.
Normally when you write good code you put all of the things that are relevant to something, like delegation, together in one place. That way when someone looks at the code they can see the intention.
Since 90% of all software costs have to do with support and maintenance, not with writing new code, making it clear what your intention was when you wrote the code is key. But in the case of Apple I think that the sloppy Objective-C model has done them wrong.
In the code I started with everything but the information about the delegation was in one file. The delegation was in another file off by itself. Now it was my fault for not seeing it. Sadly I assumed that everything I needed was all in one place because that's how I work. I have software that has been in the field in production for literally a decade or more and I cannot rely on my memory to tell me what I was thinking when I wrote it.
So I copied the code (without the necessary delegate) and it worked fine - somehow the Apple software figured out that the delegation was to work and made it so (without any sort of notification to me, of course). So the code morphed over time into a complex system unrelated to the original sample code and, all the while, it continued to work - until yesterday.
Yesterday I made a change that was quite innocuous. My code was running over the wireless network just fine. I wanted to change it to work over a USB cable and I just had to remove some things in one spot and add a bit of code to communicate in the new way. All of this even worked except that some of the delegates that were working stopped. Of course without any error or explanation.
I went over and over the code, checking and rechecking my work. (Of course since the code was long changed from the sample code I had started with I never saw nor would see the missing necessary piece.) Finally, in a bout of total frustration after many hours of messing about unsuccessfully, I took the offending functionality that was not working and I wrote a new, small program to test just it.
I hooked up the code to a button on a simple display so that it would run when I pressed the button. But lo and behold, while I was building the project the compiler (a program that translates the human-readable instructions into "machine language" so it will actually be understood by the computer) pointed out I was missing the declaration that a delegate was needed:
Controller.m:31: warning: class 'Controller' does not implement the 'NSStreamDelegate' protocol
Hmm. My real project doesn't offer this message - it merely partially (unreliably) works - even though I can tell my real project the same thing about delegates.
And this is my big beef with Apple.
Things work right if you know what I call the proper "Mumbo Jumbo" (magic, voodoo, etc.). The tools Apple offers mostly tell you what you need to know, but not always. Just like sending messages to nil. While in general I think the Apple hardware is excellent, I am not really pleased with the development environment from the perspective of being able to know what's correct and what's not. If Xcode and the run-time system can tell you or can notice you are doing something wrong - they should tell you instead of simply remaining silent...
All of this, of course, leads you to believe that you must question your own sanity.
Wednesday, February 16, 2011
Red Bull, Flip Flops and War Games
In 1983 the movie "War Games" detailed how a high school student with an IMSAI computer nearly caused atomic war between the Soviet Union and the United States. The IMSAI, one of the first home computers ever sold, was used to dial into what turned out to be NORAD. Once connected the young protagonist finds games which turn out to be realistic NORAD war simulations.
Books like "Ender's Game" (written by Orson Scott Card) follow similar concepts where children use games to develop strategies to defend the planet from hostile aliens and computer games to run the actual attacks.
But all this was nearly thirty years ago.
What's interesting is how wrong it all was.
True cyber war, like the kind practiced by the authors of STUXNET against the Iranians, is much different than the Hollywood reality. The role of computers and technology in world-changing and war-like events comes down much more on the side of unlikely things like sloppy coding at Microsoft, cellphone networking, and Facebook.
The ousting of Middle Eastern despots has been enabled as much by cellphones and Facebook as by any sort of computerized plan of attack. These tools, instead of directly attacking the regime, provide a means for the humans involved to organize their own plan. They act as a "social lubricant" that allows the underlying feelings in society to be more easily and freely communicated, and with much less risk, than in the pre-computer days.
Posting anonymously on Facebook is much safer than "nailing 95 theses" (an act for which there is no historical evidence) to the local church door. No one can see you, no one knows who you are, no one can prove you did it.
Similarly with cellphones - text your friends to meet in Tahrir Square. Who started the texting? Whose idea was it? How would anyone find out?
No, Facebook and cellphones are simply tools for literally "crowd sourcing" and "flash mobbing" attacks on repressive regimes. While I might personally be afraid to go down to the local square and throw bricks at the authorities, if I text twelve buddies about how I feel I won't have to show up alone and there's a good chance of finding at least one person with a worse attitude than me.
Hardly the "War Games" scenario...
And then there is Microsoft.
Windows 2003 - the technology of choice for the Iranian centrifuge control - is so full of security holes it's nearly impossible to fix.
Why is this?
Mostly because nearly 100% of all Windows programmers and software did little or no "bounds checking".
"Bounds checking" is a very simple concept. Let's say that I have a field to type your name into on my web page. Say I am generous and I leave you 100 characters for a name and I leave a corresponding 100 characters in my program and database for said name. As long as no one types in more than 100 characters everything works. If I type in 101 characters the extra character has no room and goes "off the end" of one field and steps into another - wreaking havoc along the way.
Since Windows was designed at the bleeding edge of technology in its 1980s heyday, there was no reason in much of the code (much of which survives to this day) to check for these boundary conditions - the cost in terms of processor performance, program size, and programmer time was simply too great.
Which leaves us with STUXNET.
I would say the first real cyber weapon.
Which brings us to Red Bull and flip flops...
Deputy Defense Secretary William Lynn recently commented at a San Francisco security gathering: “...it is possible for a terrorist group to develop cyberattack tools on their own or to buy them on the black market...” he said, “As you know better than I, a couple dozen talented programmers wearing flip-flops and drinking Red Bull can do a lot of damage.”
Personally I prefer boxers, sandals and coffee...
But Lynn rightly wonders what al-Qaeda might accomplish with weapons like STUXNET.
The problem is that when companies like Microsoft develop software they cannot do it from the perspective of what nefarious purposes others might use it for. It simply cannot be done. At the time Windows was developed the threat was not something like missing "bounds checking" - it was hackers dialing in from IMSAI home computers.
Tuesday, February 15, 2011
iOS, PPML and AFP...
So after spending a lot of time with the various Apple developer platforms I see the potential to create some AFP-based tools for iPhones and iPads.
For one thing the Quartz system on iOS provides a convenient means to rasterize PDF and PS to bitmaps, so it's likely that creating an iPad or iPhone app to convert these file types to AFP PSEG (Page Segments) would be fairly simple.
We already have the tools and infrastructure in place to do this...
The question is does anyone care?
The second element would be to allow viewing of something like a PSEG on an iPhone or iPad. Again we have the code written and in production to support this if someone is interested.
(AFP PSEG files are basically like image files - just stored in the AFP format. They contain image data like a JPEG or TIFF but the "wrapper" is different.)
I imagine there would be other types of viewers possible as well. For example, a tool that read an AFP file and then displayed all the image content on-screen. Again this would be easy to do as we have all the technology in place today - but would anyone care?
Lexigraph also has a PPML/PDF tool called Argon. This allows you to take a PPML file and composite it to a PDF. Again, because of Quartz, this would be simple to have on the iPhone or iPad.
Since the iPhone and iPad only work over wireless and don't have easy access to traditional means (like FTP or SMB) of sharing files, the only easy way to use such a tool would be via a web page. Thus a potential user would only be able to access content (AFP files or files to convert to or from AFP) via a web page.
A full AFP viewer is probably out of the question at this point. There is some open-source work in this area and there are some partially functioning viewers, so it might eventually be possible to get an iPhone-based app up and running for full AFP...
Just some business thoughts...
Monday, February 14, 2011
Voynich, Manuscripts and STUXNET in the 15th Century
Illustrations from Yale Library and Wikipedia
In the 15th century a document called the Voynich manuscript was created (illustration at the top). The manuscript is named after Wilfrid M. Voynich, a Polish-Lithuanian-American book dealer who acquired it as part of a rare book purchase in Italy in 1912.
Like STUXNET, the origins of the manuscript are unknown, as are its purpose and its author. Its text is thought to be encrypted with an as yet unbroken cipher.
The Voynich manuscript is in and of itself a fascinating document. It consists of several major elements: botanical illustration of some 113 different and currently unknown plant species, astronomical and astrological drawings, a myriad of small, unusual females with swollen bellies interacting with an odd collection of tubes and liquids, an array of very large fold outs with cosmological drawings, pharmacological drawings of 100 plants in jars, many textual sections which appear to be recipes.
(If you have never seen medieval manuscripts up close I would strongly urge you to make an effort to do so. Others I have seen are remarkable in many ways: for their detail, their brilliant colors, for the shear magnitude of effort to create something so beautiful and intricate with only a quill pen, some pigments and ink, and some animal skin.)
Carbon dating, which was only completed in the last few years, indicates the document was authored between about 1404 and 1438. Like virtually all documents written at this time it was written on vellum - animal skin - which makes carbon dating very reliable.
The text itself is not written in any known language and even the glyphs themselves are unknown. There are about 200 thousand glyphs which can, for the most part, be broken down into an alphabet of 20-30 glyphs. However, the words composed of these glyphs are very unusual. For example, there are about 35,000 "words" in the manuscript and the glyphs within the words appear to follow some sort of phonological rules, i.e., some glyphs never appear with others, and so on.
Various studies of the text evaluating the frequencies of letters and words suggest that the text has something to do with medieval Latin or English. Most likely the work represents some sort of as yet unknown cypher.
There are various unusual illustrations through the manuscript.
Many involve strange images of nude, perhaps pregnant, women wading and bathing (as seen above). Many of these women appear with a variety of tubes and pipes that would appear to represent bodily organs.
There are also many images that would appear to relate to astrological or cosmological events.
Many also relate to biology and plants.
To date no one has been able to determine what this book is about nor what purpose, if any, it serves.
Some believe it to be a hoax. However, given the amount of work and cost required in the 15th century to create such a work makes me believe this is unlikely.
More than likely its what it appears to be - a collection of "knowledge" about subjects known to the authors at the time. For example, the fact that the plant illustrations do not appear to match any known species in not unusual for books written in this time period. It wasn't until later that authors actually worked to create accurate representations of natural things like plants. At the time this book was written it was not unusual for illustrations of natural things to have fanciful aspects.
Just like today people in the 15th century would appear to have reasons to create complex technological artifacts and release them without explanation into the real world.
Like STUXNET the Voynich manuscript represents a significant amount of work in creating something of a perhaps dubious purpose. And while we believe that our modern technology allows us a full and complete grasp of the world around us this particular document serves as a reminder that our hubris is not justified.
Friday, February 11, 2011
Cursive Writing - A New Lost Art?
Along with many other things the very nature of how we write is changing.
Today, according to this link, few college students can write (or read?) in what we used to call "cursive". Cursive being a continuous script writing separate from the "printed" form of letters one sees on the computer, cellphone or laptop screen.
I learned to print in first grade. I was taught by nuns and they meant business. We had lined paper with a dashed line in the center of each row where we were to print. Capital letters had to reach from the bottom line to the top. Lower case letters and elements of letters that appeared in the "middle" of the letter had to be on the dashed line. There were no exceptions and we practiced this for an entire school year.
Above the blackboard where pictures of printed letters - organized alphabetically and demonstrating how each letter should be formed - right along with the dashed line. Since there was no kindergarten and the previous year my "school" had been a one-room school house (I got to visit it at the end of the prior year for a day to see what it was like) we also had to learn to read and do arithmetic right along with learning to print.
I was always interested in what the other, higher grades were doing so I could not help but notice that for the third graders there were different types of letters displayed above the blackboard. (First and third grade shared rooms.)
For second grade we moved to a different room but alas still printed.
I had to wait an entire two years before I was able to learn cursive writing.
However, the reality was disappointing. I was not as good at cursive as I was at printing and so I never wrote in cursive unless required to for school.
For me printing came into its own in high school. There we were required to write long biology reports and for me printing was the way to go. I got fairly good at it and was able to print far faster than type or write cursively.
All this is becoming ancient history.
My own children do not put pen to paper as far as I can see - they live in the worlds of MSDN, PowerPoint, web sites, cell phones and note pad computers.
And cursive is not the first victim of progress here in the US. Prior to the 1970's or so shorthand was used extensively in business. This was a form of writing designed to allow the writer to quickly capture the spoken word and was used (perhaps still is used) by secretaries and journalists.
Dictation machines, invented in the 1960's however, changed all this and shorthand has been in decline ever since.
So where does all this lead?
For one thing typing on a screen as I am now takes away from what I can do printing or writing in cursive - while I can change font size, add bolding, and so on no one can tell by looking at what I am typing how I might feel about what I am saying.
(Not that I run around with big, thick black markers SHOUTING on big rolls of white paper...)
So as "I Love You" goes from "I Love You" to "<3" what is happening in our minds?
Do young people think in terms of text symbols the more they use cellphones and texting?
Do these things take away from their ability to add emotional content to their communications? (Though I suppose, at least on a computer or in email, you can still say "<3" or "<3". And then there is "breaking up" via text message.)
And then there is "r u gng 2 b l8?" - as opposed to "Are you going to be late?" I suppose the former works well while driving and they do convey the same message - at least superficially. However the later, to me, indicates that writer might also be literate.
But if you send me the former because you are late for a meeting with me for your job interview what might I think? (I suppose I could reply "2l8"...)
I have noticed that quite often those using the more compressed forms of text speech often can only think in those terms. In fact, for me, on a system like Craigs List seeing that type of posting is clearly a danger sign. Why? Because how one expresses one's self of indicates how one thinks...
On the other hand, there is a lot of, well, shorthand, for texting (see this).
Perhaps shorthand is just making a resurgence...
Today, according to this link, few college students can write (or read?) in what we used to call "cursive". Cursive being a continuous script writing separate from the "printed" form of letters one sees on the computer, cellphone or laptop screen.
I learned to print in first grade. I was taught by nuns and they meant business. We had lined paper with a dashed line in the center of each row where we were to print. Capital letters had to reach from the bottom line to the top. Lower case letters and elements of letters that appeared in the "middle" of the letter had to be on the dashed line. There were no exceptions and we practiced this for an entire school year.
Above the blackboard where pictures of printed letters - organized alphabetically and demonstrating how each letter should be formed - right along with the dashed line. Since there was no kindergarten and the previous year my "school" had been a one-room school house (I got to visit it at the end of the prior year for a day to see what it was like) we also had to learn to read and do arithmetic right along with learning to print.
I was always interested in what the other, higher grades were doing so I could not help but notice that for the third graders there were different types of letters displayed above the blackboard. (First and third grade shared rooms.)
For second grade we moved to a different room but alas still printed.
I had to wait an entire two years before I was able to learn cursive writing.
However, the reality was disappointing. I was not as good at cursive as I was at printing and so I never wrote in cursive unless required to for school.
For me printing came into its own in high school. There we were required to write long biology reports and for me printing was the way to go. I got fairly good at it and was able to print far faster than type or write cursively.
All this is becoming ancient history.
My own children do not put pen to paper as far as I can see - they live in the worlds of MSDN, PowerPoint, web sites, cell phones and note pad computers.
And cursive is not the first victim of progress here in the US. Prior to the 1970's or so shorthand was used extensively in business. This was a form of writing designed to allow the writer to quickly capture the spoken word and was used (perhaps still is used) by secretaries and journalists.
Dictation machines, invented in the 1960's however, changed all this and shorthand has been in decline ever since.
So where does all this lead?
For one thing typing on a screen as I am now takes away from what I can do printing or writing in cursive - while I can change font size, add bolding, and so on no one can tell by looking at what I am typing how I might feel about what I am saying.
(Not that I run around with big, thick black markers SHOUTING on big rolls of white paper...)
So as "I Love You" goes from "I Love You" to "<3" what is happening in our minds?
Do young people think in terms of text symbols the more they use cellphones and texting?
Do these things take away from their ability to add emotional content to their communications? (Though I suppose, at least on a computer or in email, you can still say "<3" or "
And then there is "r u gng 2 b l8?" - as opposed to "Are you going to be late?" I suppose the former works well while driving and they do convey the same message - at least superficially. However the later, to me, indicates that writer might also be literate.
But if you send me the former because you are late for a meeting with me for your job interview what might I think? (I suppose I could reply "2l8"...)
I have noticed that quite often those using the more compressed forms of text speech often can only think in those terms. In fact, for me, on a system like Craigs List seeing that type of posting is clearly a danger sign. Why? Because how one expresses one's self of indicates how one thinks...
On the other hand, there is a lot of, well, shorthand, for texting (see this).
Perhaps shorthand is just making a resurgence...
Thursday, February 10, 2011
Trash Mobs Middle Esatern Style
It is beyond fascinating to read about how Facebook is changing the Middle East. This Facebook page describes how the youth of this tiny Arabian country are attempting to organize a mass protest on February 14th. The goals of this are (from the Facebook page):
1. A new constitution written by the people
2. The establishment of a new body that has authority to investigate and hold to account economic, political and social violations, including the return of stolen public wealth and reversal of political naturalistion, in order to reach national conciliation.
And the youth of Bahrain are not alone.
Then there is Syria and the "The Syrian Revolution 2011" Facebook page.
Until recently Facebook, twitter, and the like were banned in Syria. But the recent Tunisian revolt and Egyptian protests have made way for a relaxing of these policies.
As the popularity of Facebook spreads around the world it is now also influencing Middle Eastern governments.
For example, Omar al-Bashir, the president of Sudan and an indicted war criminal for his involvement in the Darfur genocide, has recently begun promoting his own pro-Sudan Facebook page. The purpose? To use Facebook to overcome opposition to his rule (according to this Sudanese site).
Sudan has 41 million people - with only about 10% having internet access. Yet al-Bashir still grasps the power of this medium because he is working to give more of his countrymen internet access. (Does he realize this may come back to bite him?)
It is also well known that Facebook activities have been heavily involved in the protests in Egypt.
It would appear that this technology and the use of social networking for revolutoin is far stronger than anyone realized.
But how is this different than, say, the invention of the printing press and movable type by Gutenberg in 1440? Before that time if someone wanted to perform a mass communication of some sort it would require that all the notices be literally "written by hand". However, with the printing press operational, it was possible to produce up to 3,600 pages per day of printed content (see Wikipedia).
I think we are literally witnessing a repeat of history. Only this time the cellphone and computer screen are unwittingly taking on the role of paper and press.
In 1440 it would have cost a significant amount of money to produce those 3,600 pages on the Gutenberg press. Today, to create a Facebook page, requires only a cellphone or web browser, which is well within the reach of the average Middle Eastern middle class citizen. Even more interesting is that governments like Syria installed the technology in the first place.
Yet how could they avoid it?
And if you think about it, just installing this technology in the hands of the elite is still asking for trouble. For example, though you're faithful government employee might not use the technology against his employer his college educated child just might. So, like Pandora's box, once the technology is in place things can slip away very quickly.
Clearly the Syrian government foresaw the potential for Facebook mischief because they initially banned it along with twitter and other similar sites.
For sixty years, according to CNN, there has not been a successful Arab revolt. Yet in only 23 days Tunisia's government was toppled. Whether this was as a direct result of the internet or not is hard to say, but the message cannot be lost on Mubarak, Assad, al-Bashir and others.
Extrapolating beyond what we see today its going to become harder and harder to imagine any sort of world where instantaneous "trash mobs" cannot immediately sprout up to torture the status quo. Take the rioting in Britain over the University tuition increases in recent months as an example.
And Britain is not a dictatorship; yet the same principles are being applied - angry youth attempting to change the status quo based on technological rabble rousing.
The problem here is that technology doesn't care what kind of meme its spreading - whether for good, as in Bahrainian youth wishing for a more democratic state, or for bad, when that democratic wish is high-jacked by a wannabe dictator.
1. A new constitution written by the people
2. The establishment of a new body that has authority to investigate and hold to account economic, political and social violations, including the return of stolen public wealth and reversal of political naturalistion, in order to reach national conciliation.
And the youth of Bahrain are not alone.
Then there is Syria and the "The Syrian Revolution 2011" Facebook page.
Until recently Facebook, twitter, and the like were banned in Syria. But the recent Tunisian revolt and Egyptian protests have made way for a relaxing of these policies.
As the popularity of Facebook spreads around the world it is now also influencing Middle Eastern governments.
For example, Omar al-Bashir, the president of Sudan and an indicted war criminal for his involvement in the Darfur genocide, has recently begun promoting his own pro-Sudan Facebook page. The purpose? To use Facebook to overcome opposition to his rule (according to this Sudanese site).
Sudan has 41 million people - with only about 10% having internet access. Yet al-Bashir still grasps the power of this medium because he is working to give more of his countrymen internet access. (Does he realize this may come back to bite him?)
It is also well known that Facebook activities have been heavily involved in the protests in Egypt.
It would appear that this technology and the use of social networking for revolutoin is far stronger than anyone realized.
But how is this different than, say, the invention of the printing press and movable type by Gutenberg in 1440? Before that time if someone wanted to perform a mass communication of some sort it would require that all the notices be literally "written by hand". However, with the printing press operational, it was possible to produce up to 3,600 pages per day of printed content (see Wikipedia).
I think we are literally witnessing a repeat of history. Only this time the cellphone and computer screen are unwittingly taking on the role of paper and press.
In 1440 it would have cost a significant amount of money to produce those 3,600 pages on the Gutenberg press. Today, to create a Facebook page, requires only a cellphone or web browser, which is well within the reach of the average Middle Eastern middle class citizen. Even more interesting is that governments like Syria installed the technology in the first place.
Yet how could they avoid it?
And if you think about it, just installing this technology in the hands of the elite is still asking for trouble. For example, though you're faithful government employee might not use the technology against his employer his college educated child just might. So, like Pandora's box, once the technology is in place things can slip away very quickly.
Clearly the Syrian government foresaw the potential for Facebook mischief because they initially banned it along with twitter and other similar sites.
For sixty years, according to CNN, there has not been a successful Arab revolt. Yet in only 23 days Tunisia's government was toppled. Whether this was as a direct result of the internet or not is hard to say, but the message cannot be lost on Mubarak, Assad, al-Bashir and others.
Extrapolating beyond what we see today its going to become harder and harder to imagine any sort of world where instantaneous "trash mobs" cannot immediately sprout up to torture the status quo. Take the rioting in Britain over the University tuition increases in recent months as an example.
And Britain is not a dictatorship; yet the same principles are being applied - angry youth attempting to change the status quo based on technological rabble rousing.
The problem here is that technology doesn't care what kind of meme its spreading - whether for good, as in Bahrainian youth wishing for a more democratic state, or for bad, when that democratic wish is high-jacked by a wannabe dictator.
Wednesday, February 9, 2011
More Unforeseen Consequences and a Future History
As I do a bit more research on the liquid thorium reactors (LTR) I talked about in "From the Land of Unintended Consequences" I am surprised to see how the rhetoric of the "Green Nuclear Energy" folks (those in support of LTR) matches the rhetoric from the 1950's and 1960's in regard to nuclear energy.
As I wrote in the first article the original nuclear energy model was sold to US consumers as the be-all end-all of electrical power generation. At that time being "green" was not much of an issue but safety was. An the GE's and Westinghouse's (who made these reactors) told of how safe they were, how wonderful their benefits to society would be and so on.
Clean, safe nuclear power.
Experiments were done with nuclear reactors for airplane, spaceships and planes in the 1950's. I recall reading my Time/Life ENERGY coffee table book as child about these experiments and studying the pictures and captions related to nuclear airplanes.
Before nuclear power tried to come to my town no one in my family thought much about it. It seemed, in those years like a government project aimed at making our lives better. No one questioned the rhetoric - it was just accepted.
And that's what I see about the wonderful new LTR reactors. The story today goes something like this:
- Wicked humanity is going to destroy the planet with greenhouse gases.
- We must all have cheap economical energy.
- Therefore, LTR will make us all happy.
In the 50's and 60's the first item was replaced with something like "We need more energy for our future." and "LTR" in the last point was just plain old "nuclear energy." So really its the same old story.
But this time it will all be different.
Right. I scoured the internet looking for information about exactly why this is or might be the case.
I found lots of articles describing the LTR process itself: hot liquid thorium fluoride at low pressure (1 atmosphere), little danger of a melt down (heat slows the reaction process), snappy cleanup and no waste.
And Thorium itself is a common actinide (element at the bottom of the periodic chart). It is found in vast quantities (millions of tons) in nature outside the US (funny how that is). Its "mildly radioactive" on its own.
Unfortunately, as they say, the devil is in the details.
No where is there any discussion of just how the Thorium Fluoride will be created. As I wrote in the original article things like uranium processing were big "military industrial complex" sources of profits. No doubt it will be the same with this.
Then there is our own alphabet soup of regulatory agencies: DOE, FDA, and so on. What sort of regulations will be applied to this? Surely mom and pop will not be able to afford to create LTR fuel or reactors in their backyards - regardless of safety. No, No, this will be only for the "big boys" who can afford the Washington lobbyists who will lobby for the small number of special permits required to become wealthy billionaires with LTR.
Studies will be done at the finest universities. They will whatever results they are paid to find. It will all point to a brilliantly green future.
Then there will be the pilot plant - the "first one" for commercial energy. Where will it go? Our very green friends at places like the Daily Kos who ardently support the technology won't want it in their city - no, no - let someone else take the plunge first they will say. Protests will begin.
No doubt a huge area will be required to ensure safety. This will displace wildlife, water ways, migratory bird flight paths, you name it. The EPA will require studies. Lots of them. And that will be just for the plant. More research will be required on the waste products - where will they go? More years will be taken up studying things.
Then the military industrial complex will make its resurgence by actually building the first plant. The initial bids will be a few billion - but there will be tens of billions in cost overruns for "unforeseen circumstances" and "safety". It will take years to build. Protesters will line up each day with skulls and cross bones. TV and internet news will lose interest after the first few weeks - but it will pick up again as the plant nears completion.
Somewhere out west a "fuel processing" facility will be built. After all, we can't have LTR without fuel. Eventually some economically desperate state or town will be found to host the pending disaster. Billions will be spent to build a facility to employ a few hundred people. They will be sworn to secrecy and work behind barbed wire. Three headed fish will turn up downstream.
Finally the big day will arrive. Time to "load the fuel" for the first time. Time to test the plant. This will take months. Videos of trucks hauling the fuel from "out west" will air. There will be some "set backs" but eventually the plant will wobble to life. More years will pass and more billions will be spent "working out the bugs".
As with the original nuclear power plants it will turn out that the cost to operate the plant will exceed what customers will pay for the power it produces. No problem, the government clean energy backers will say, we will subsidize this because in the end it will all be wonderful. More tax dollars will be pissed away.
The "Thorium Partnership" will form outside the US to sell us their thorium - every country on earth will be a member except US. It will become expensive to mine and prepare because of the US EPA regulations. Each ton will cost a fortune. Mining thorium here will be banned because its radioactive and small children go to school nearby. We will save our thorium for the future.
With the first plant "operational" more will be planned. The cycle will repeat except the next time the designers, the government, the protesters, and the consumers will know more. Lawyers will gear up for the attack. Lawsuits will be filed.
The first plant's "first waste" will be produced. Unlike whatever was promised things will be "different". No one will want it. Good thing the plant was built on a huge plot of land. "Temporary" waste storage facilities will be built to store the spent fuel and used equipment "temporarily" - just like Zion, Illinois.
As the decades pass the promises will all be broken and the US tax payers and utility customers will be left holding the bag.
George Santayana: "Those who cannot remember the past are condemned to repeat it."
As I wrote in the first article the original nuclear energy model was sold to US consumers as the be-all end-all of electrical power generation. At that time being "green" was not much of an issue but safety was. An the GE's and Westinghouse's (who made these reactors) told of how safe they were, how wonderful their benefits to society would be and so on.
Clean, safe nuclear power.
Experiments were done with nuclear reactors for airplane, spaceships and planes in the 1950's. I recall reading my Time/Life ENERGY coffee table book as child about these experiments and studying the pictures and captions related to nuclear airplanes.
Before nuclear power tried to come to my town no one in my family thought much about it. It seemed, in those years like a government project aimed at making our lives better. No one questioned the rhetoric - it was just accepted.
And that's what I see about the wonderful new LTR reactors. The story today goes something like this:
- Wicked humanity is going to destroy the planet with greenhouse gases.
- We must all have cheap economical energy.
- Therefore, LTR will make us all happy.
In the 50's and 60's the first item was replaced with something like "We need more energy for our future." and "LTR" in the last point was just plain old "nuclear energy." So really its the same old story.
But this time it will all be different.
Right. I scoured the internet looking for information about exactly why this is or might be the case.
I found lots of articles describing the LTR process itself: hot liquid thorium fluoride at low pressure (1 atmosphere), little danger of a melt down (heat slows the reaction process), snappy cleanup and no waste.
And Thorium itself is a common actinide (element at the bottom of the periodic chart). It is found in vast quantities (millions of tons) in nature outside the US (funny how that is). Its "mildly radioactive" on its own.
Unfortunately, as they say, the devil is in the details.
No where is there any discussion of just how the Thorium Fluoride will be created. As I wrote in the original article things like uranium processing were big "military industrial complex" sources of profits. No doubt it will be the same with this.
Then there is our own alphabet soup of regulatory agencies: DOE, FDA, and so on. What sort of regulations will be applied to this? Surely mom and pop will not be able to afford to create LTR fuel or reactors in their backyards - regardless of safety. No, No, this will be only for the "big boys" who can afford the Washington lobbyists who will lobby for the small number of special permits required to become wealthy billionaires with LTR.
Studies will be done at the finest universities. They will whatever results they are paid to find. It will all point to a brilliantly green future.
Then there will be the pilot plant - the "first one" for commercial energy. Where will it go? Our very green friends at places like the Daily Kos who ardently support the technology won't want it in their city - no, no - let someone else take the plunge first they will say. Protests will begin.
No doubt a huge area will be required to ensure safety. This will displace wildlife, water ways, migratory bird flight paths, you name it. The EPA will require studies. Lots of them. And that will be just for the plant. More research will be required on the waste products - where will they go? More years will be taken up studying things.
Then the military industrial complex will make its resurgence by actually building the first plant. The initial bids will be a few billion - but there will be tens of billions in cost overruns for "unforeseen circumstances" and "safety". It will take years to build. Protesters will line up each day with skulls and cross bones. TV and internet news will lose interest after the first few weeks - but it will pick up again as the plant nears completion.
Somewhere out west a "fuel processing" facility will be built. After all, we can't have LTR without fuel. Eventually some economically desperate state or town will be found to host the pending disaster. Billions will be spent to build a facility to employ a few hundred people. They will be sworn to secrecy and work behind barbed wire. Three headed fish will turn up downstream.
Finally the big day will arrive. Time to "load the fuel" for the first time. Time to test the plant. This will take months. Videos of trucks hauling the fuel from "out west" will air. There will be some "set backs" but eventually the plant will wobble to life. More years will pass and more billions will be spent "working out the bugs".
As with the original nuclear power plants it will turn out that the cost to operate the plant will exceed what customers will pay for the power it produces. No problem, the government clean energy backers will say, we will subsidize this because in the end it will all be wonderful. More tax dollars will be pissed away.
The "Thorium Partnership" will form outside the US to sell us their thorium - every country on earth will be a member except US. It will become expensive to mine and prepare because of the US EPA regulations. Each ton will cost a fortune. Mining thorium here will be banned because its radioactive and small children go to school nearby. We will save our thorium for the future.
With the first plant "operational" more will be planned. The cycle will repeat except the next time the designers, the government, the protesters, and the consumers will know more. Lawyers will gear up for the attack. Lawsuits will be filed.
The first plant's "first waste" will be produced. Unlike whatever was promised things will be "different". No one will want it. Good thing the plant was built on a huge plot of land. "Temporary" waste storage facilities will be built to store the spent fuel and used equipment "temporarily" - just like Zion, Illinois.
As the decades pass the promises will all be broken and US taxpayers and utility customers will be left holding the bag.
George Santayana: "Those who cannot remember the past are condemned to repeat it."
Tuesday, February 8, 2011
Rocket Cars - Then and Now
When I was a kid I was fascinated by jet cars. One of my childhood idols was Craig Breedlove - the first human to drive past 400, 500, and 600 MPH.
In those days my cousin and I had a lot of model rockets as well - made by Estes and Century. We also had wooden airplanes made from balsa wood powered by Jetex engines.
But real rocket and jet cars were never very far away. Where I grew up we lived about five miles from the Great Lakes Dragway. On Friday and Saturday summer nights you could hear the announcers talking and, best of all, you could hear the dragsters.
Of particular interest to me was the Green Monster - a dragster built and driven by Art Arfons. You could hear it start up, its jet engine roaring, and hear it run down the track. One year my grade school friend's family took me to the drag strip. The Green Monster was there but it didn't run that day.
However, I was able to see a rocket car using a Gemini spacecraft reentry motor. This was a small aluminum-framed car with a rocket nozzle protruding from the back. Like the Estes and Jetex motors of the time it was a simple, solid-fuel engine. It did a nice job of taking the car down the track.
Given all of this excitement what could a geek child do but build his own rocket car.
Of course, building one large enough to ride in was out of the question at that point - so instead I tried the next best thing. I sat down with my X-Acto knife, some Estes rocket parts, sheets of balsa wood, and rubber airplane wheels purchased from the local hobby shop, and I built my own.
It was designed along the lines of the Spirit of America pictured above - or at least as close to that as my ten-year-old fingers could manage. It had four wheels and a tail and was powered by a standard model rocket engine.
The only problem I could foresee was how to make it run in a straight line.
For that I borrowed a concept from CO2 rocket cars. These were small metal cars into which you would put a CO2 pellet gun cartridge. They came with a little spring-loaded "starter" that had a sharp needle on one end. You put this up to the CO2 cartridge's throat, pulled back the trigger, and let it spring forward to pierce the metal cap holding in the gas. I had one of these cars at some point but the cartridges were expensive, at least compared to rocket motors at the time, and were a pain to acquire.
The cars ran on a string, and since we were fortunate enough to have an asphalt driveway the solution was to simply put a nail in at each end and run a string down the middle.
All of this violated the Model Rocket Code of Ethics, of course, but since it was done in the name of science I figured it was okay to bend the rules.
Another rule we bent was using Jetex fuses to light the motors. You were supposed to have a battery-based ignition system with a safety key and so forth - but keeping a charged battery around was a tough job and the fuses were much more reliable in practice.
The maiden run of the car was successful, but since it was top heavy it tipped over near the end.
My cousin, upon hearing this, built his own version and on subsequent visits we were able to run our cars.
Eventually my car was retired to my model car shelf where it sat until I moved away to go to college.
After Breedlove came the "Blue Flame" in about 1970. This was a natural-gas-fueled rocket car driven by Gary Gabelich.
This car pushed the land speed record up to around 622 MPH.
It wasn't until 27 years later in 1997 that Thrust SSC finally pushed the land speed record beyond the speed of sound.
Today Richard Noble is working on a car that he hopes to drive in excess of 1,000 MPH - the Bloodhound SSC. This car is powered by both a jet engine and a rocket - the jet being used to get things rolling and the rocket to boost the speed from there.
For me, retiring from this business at age 10 or so was a good idea. Rocket cars are an expensive hobby regardless of age and, as an adult, there is the added danger of having a wreck.
Cars at higher speeds are really just airfoils and they tend to have too much lift. This means that your front steering wheel wants to lift up off the ground. Since, unlike my cars, these don't run on strings nailed to the road, that can be a problem.
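The "too much lift" problem follows directly from the standard lift equation, where lift grows with the square of speed. Here's a minimal back-of-the-envelope sketch in Python; the planform area and lift coefficient are purely illustrative guesses, not measurements of any real land-speed car:

```python
# Lift grows with the square of speed: L = 0.5 * rho * v^2 * S * C_L.
# Doubling speed quadruples the lift trying to unload the front wheels.

MPH_TO_MS = 0.44704  # miles per hour -> meters per second

def lift_newtons(speed_ms, planform_area_m2=8.0, lift_coeff=0.1,
                 air_density=1.225):
    """Standard lift equation with illustrative (made-up) car values."""
    return 0.5 * air_density * speed_ms**2 * planform_area_m2 * lift_coeff

for mph in (100, 300, 600):
    lift = lift_newtons(mph * MPH_TO_MS)
    print(f"{mph:4d} mph -> roughly {lift / 9.81:7.0f} kgf of lift")
```

Even with these toy numbers the scaling is the point: going from 100 to 600 mph multiplies the lift by 36, which is why a car that sits happily on the road at highway speeds can try to fly near the sound barrier.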
But I guess some people haven't quite figured all this out...
Monday, February 7, 2011
Toto - I don't think we're in 1984 any more...
As a metric of where society and technology are headed it's always interesting to look to advertising. While the transition from paper to video has always been interesting to me from the perspective of computers and printing, I think that one of the most pivotal ads was the Apple "1984" ad (see it here).
This ad is a classic and will probably always remain so. The reasons for this are simple yet interesting.
For one thing there is really no talking during the crucial portion of the ad - from the start to the explosion at the end. The pictures tell the entire story: everyone understands being in "trouble" and being chased by the "authorities". Everyone understands the representation of "conformity". Everyone understands the emotional elements involved in "breaking out" from that conformity.
Then there was "Lemmings". I always liked this ad, though it never was quite as famous as the original "1984" ad. You see a long line of blindfolded office workers marching through desolation, each with a hand on the shoulder of the one in front. "Heigh-Ho, Heigh-Ho, it's off to work we go" whistles eerily in the background. You see that the line ends at a cliff where each worker in turn falls into the abyss. Again the visualization tells the story: Everyone understands mindless conformity at work. Everyone relates to "Heigh-Ho, Heigh-Ho" as it relates to work. Everyone understands the urban myth of lemmings falling off a cliff into the sea.
The image is selling the products: The image of "I hate to be like everyone else" and "I hate to conform".
Who would miss this? Who would not understand what was meant?
In the days of print ads it was necessary to convey the entire concept in a single image - an image someone might only glance at for a second or two. Maybe there was some text as well - which had to be short and sweet. The ad had to have an impact or people would not remember it.
And, after all, remembering it was the entire point.
Fast forward twenty-five or so years to today. In yesterday's Super Bowl was the Motorola ad ("Empower the People"). (Supposedly this ad makes fun of the Apple "1984" ad because the novel 1984 is being read on the tablet at one point.)
This by a company that produces, at least in my opinion, junk phones (I got rid of the last one I had years ago...). The point is again one about conformity. Only in this ad the non-conformist (at least by his clothing) is a 30-something guy wandering through the modern city trappings of the workaday world (subway, office building, elevator). All the workers are wearing white hooded clothes - all with (supposedly Apple) ear-buds, listening to some unknown music - all slogging through their conformist universe. His eye is on a girl to whom he sends a video of flowers he creates on his Motorola tablet.
At the end we are told that it's "The tablet to create a better world" and, below, "Android is a Google trademark."
First of all there is no real emotion here. The ad does not portray where things might go (like "1984") nor does it portray the folly of where things are (like "Lemmings"). Instead it portrays a sad picture of today's reality - 30-somethings (along with wannabes) wandering about in a digital haze, on their way to work, plugged into their music players.
Secondly, why does this guy need his tablet computer to talk to this girl? Can't he just tap her on the shoulder and hand her the flowers instead of taking a picture of them?
Sadly, since this is how things actually are in today's hip young digital world, and since the ad does not poke fun at this in any significant way, we don't have any emotional motivation for liking this character. This lack of empathy is further reinforced because the protagonist is dressed to conforming perfection in 30-something work wear.
(Perhaps this touches youngsters in their 30's in some way that I don't understand - though no one that age watching the ad with me seemed to be touched or to care about this ad...)
Interestingly, without Motorola telling me that the protagonist is using their tablet computer I would have simply thought it was an iPad - again the message of "me too" conformity is reinforced in a bad way: as Motorola conforming to the Apple model.
I suppose that the modern ad agencies (like Anomaly, who created this ad) don't understand the purpose of the original Apple ads: to thumb one's nose at overwhelming conformity. Or it may be that they don't understand that they are a generation of conformists - everyone being unique, just like everyone else.
No, the point Steve Jobs made in 1984 was that he was building tools for non-conformists. That he and his company were not conformists. And that you, as a purchaser of his products, could proudly display that you were also a non-conformist.
Sadly for Motorola all they did was make me think they were all about "conforming" to Apple's view of the world...
Friday, February 4, 2011
Paper in the 21st Century...
Today's paper is probably not paper at all - but an LCD display. My guess is that if you tallied the minutes people worldwide spend focusing their eyes on paper versus the time spent on LCD displays, the LCD displays would win hands down.
But today's paper isn't as simple as the paper of yesteryear. In the olden days paper came in reams of nice, bright white sheets. No one knew or cared how the paper got that way. It was just assumed to have been manufactured that way by the vendor. Whatever patents or intellectual property were involved in the manufacturing of the paper were as opaque as the paper itself. While, if you were in the industry, you might have imagined some legal battles over the techniques and processes used to create the paper, it certainly never touched the "lunch box Joe" reading things printed on it.
Similarly for type. Typesetting, the means for marking the paper, was also largely free of direct intellectual property issues. Certainly the machines that made type, like the one I fondly recall in the Chicago Museum of Science and Industry in the 1970's, were subject to such issues, but not the type itself (perhaps save for fonts) nor the traditional production processes.
But today's paper is a much different story.
Today's paper is not inert because behind it is a computer. I know that I, even in this business, really don't think much about how what I am seeing on the display actually got there. I just think about it as if it were paper for the most part. Of course I can zoom and scroll, but from a work perspective, I am just reading.
(For example, I am involved in a large, complex iOS (iPhone)/Mac OS X software development project. The amount of manuals, diagrams, documents, tutorials, samples, and so on is mind numbing. I spend a lot of time each day staring at my "paper" screen.)
However, very much unlike paper the content that appears on my "paper" each day may not, like paper and type of old, be unencumbered.
What do I mean by unencumbered?
Well, from an intellectual property standpoint, something like an image (for example the one at the top of this post) requires that the computer behind the paper process it. This processing may involve some sort of intellectual property owned by someone besides me. However, since I am using the process to view the image, I could become obligated to the owner of that intellectual property for some sort of payment or fee. So, if my use of such a process, whether known or unknown to me, creates an obligation for me, we can say the use of that process encumbers me.
A real world example of this is the patents involving JPEG image compression. Toward the end of the life of these patents (mid 2000's) a patent troll whipped up claims against anyone they could find and demanded payment for use of their intellectual property. Apparently they collected some hefty payments.
As time rolls on, more and more complex software elements are required to drive what you see on your "paper" screen each day. Static images such as JPEGs are old news. Today's hot intellectual property issues involve video codecs.
A video codec is a piece of software that decodes a compressed video stream from a web server so that it can be displayed on your screen. Without video codecs, on-screen video (whether for computers, satellite or cable) wouldn't be practical (the files would simply be too large). Companies spend millions of dollars developing these codecs to gain competitive advantages in the marketplace.
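The "files would simply be too large" point is easy to quantify with a rough calculation. In this sketch the 5 Mbit/s compressed figure is an illustrative ballpark for a 1080p H.264 stream, not a number from any particular service:

```python
# Rough size comparison: uncompressed 1080p RGB video versus a typical
# compressed stream. The compressed bitrate is an illustrative guess.

WIDTH, HEIGHT = 1920, 1080
BYTES_PER_PIXEL = 3          # 24-bit RGB, no chroma subsampling
FPS = 30

raw_bits_per_sec = WIDTH * HEIGHT * BYTES_PER_PIXEL * 8 * FPS
compressed_bits_per_sec = 5_000_000   # ballpark 1080p H.264 stream

print(f"raw:        {raw_bits_per_sec / 1e6:7.0f} Mbit/s")
print(f"compressed: {compressed_bits_per_sec / 1e6:7.0f} Mbit/s")
print(f"ratio:      {raw_bits_per_sec / compressed_bits_per_sec:.0f}x")
```

Raw 1080p works out to roughly 1.5 gigabits per second, on the order of 300 times more than the compressed stream, which is why no one streams uncompressed video and why the codec doing that compression is such valuable intellectual property.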
The problem is that to make something like YouTube even possible it is necessary for everyone that wishes to view the videos to have the right codec installed on their computer.
Initially codecs were few and far between - mostly being installed in expensive professional video equipment. But as the web expanded codecs were developed to install into web browsers as plug-ins.
The problem today is that in the intervening decades thousands of video codec related patents have been issued. This plethora of patents makes it increasingly difficult to determine if a browser plug-in that can process video is encumbered by a patent owner in some way. (The reason for this is that the patent office does not make a determination as to who or what else might be using a particular patent it grants. That is left up to the marketplace. Since it can take years to be granted a patent, often some other company will be making use of a patented process without knowing it. Then, once the patent is issued, they become violators.)
So today this issue has boiled down to one of who can create the most unencumbered codecs. No one, not even Google or Microsoft, wants to be blindsided by lawsuits from patent holders claiming that they are responsible for violating a patent.
Unfortunately for us, though, companies like Google are not being fully honest about this. At issue is the notion of an "open source" codec. Google likes to be open and to use "open source" technology - that is, technology that does not require Google to pay royalties. However, just because software is "open source" does not mean that it is free of potential patent violations.
So Google, in light of wanting the codecs to be based on community work, i.e., open source, is trying to steer developers and users to the WebM and Theora codecs. Google is also casting out codecs that use H.264 from its Chrome project as tainted or potentially encumbered. However, while doing this, Google is not claiming that its open source codecs carry "no royalties" but merely that there are no "known royalty issues" with them (see this).
Microsoft, sensing that Google is busy trying to push the liability for any potential patent infringement in these codecs onto unsuspecting developers and users, has gallantly asked Google to legally indemnify anyone making use of this technology. While I suspect that Microsoft's interest here is probably not altruistic, I still believe they are correct.
Google is again working the system to foist off on others what it should be taking responsibility for. While it might seem wonderful and altruistic to want the web full of "open standards", a careful reading of the situation exposes the truth. Namely that Google, through sleight of hand, is foisting all future potential liability off on us, the unsuspecting public.
Thursday, February 3, 2011
Single Serving Egyptian Friends (I hope you are well...)
I have been following the protests in Egypt with interest. Six or so years ago Lexigraph, my business, was working with a couple of people from the Egyptian offices of a large, international computer business. This was for a potential project related to some printing for an international gathering.
As part of the project Basem and Asra came over from Egypt to the US to spend a week on the design of the system we were building. Basem, the IT Specialist, was a Coptic Christian and Asra, the Project Leader, a practicing Muslim. Both are a relative rarity here in rural western Pennsylvania where I live. Over the course of the week they were here I had a chance to get to know them and learn a little bit about their culture.
What made me think of them was that much has been said about how many of the protesters are "young", use cellphones and the internet for communication, and so on. Of course, both Basem and Asra were relatively young and no doubt fit the profile of the "tech savvy" types that would be plugged into the protests, at least according to the news accounts.
As the visit progressed we were able to take our guests to lunch and sometimes dinner. Each outing was an interesting cross-cultural affair.
As a Christian, Basem was considerably more westernized in his views - though perhaps more with a flavor of the 1800's than the 21st century. Coptic Christianity, which originated in the first century, is a faith practiced by about 1/6th of all Egyptians (10 million out of 60 million) - a figure surprising to me at the time as I considered Egypt to be a Muslim country.
(Even the name Egypt is a western creation. It was first used by the ancient Greeks, Egyptos, from the ancient Egyptian Hut-Ka-Ptah, one of the names for "Memphis," the first capital of Ancient Egypt. I spent some time studying the Ancient Egyptian and Ancient Greek languages in school.)
Basem was gregarious and cheerful. He was happy to talk about his culture, his life and his family.
For example, he told us dating was allowed only as a group affair - there were no western-style boy-girl dates. Basem, who I estimated to be in his mid-to-late twenties, described how mixed-gender groups of friends would get together and go out to restaurants or parties in order to get to know one another. As two people's interest in each other grew, there was eventually a formal process, dictated by their culture and faith, for the male to "ask for the hand in marriage" of the female.
There was no "living together" or any of the common western-style relationships one finds today. Beyond this he was generally familiar with the west and our views - though he considered our model for male/female courting and dating absolutely bizarre.
Asra, on the other hand, told us that she was initially frightened of us. As a practicing Muslim woman from the Middle East, alone in the USA, her perspective on the west was that we were probably all war-mongering barbarians (sort of along the lines of the Capital One airline-mile credit card barbarians you see on TV). However, as the days passed her views changed, at least a little. By our second or third group trip to lunch she began to believe that we would not attack and kill her, and began to relax a bit.
We found out that she was concerned, for example, that she would not be able to eat anything here because of Islamic dietary laws. That turned out not to be the case, as most places we went to had a large variety of food on the menu - much of which could fit her dietary requirements. Asra did not talk much about herself, her family or her social life - I think out of fear.
At one point my wife and I took the two of them to dinner. Up until this point both Basem and Asra had interacted only with males, since the entire corporate staff of four at that time was all male. Upon meeting my wife, Asra seemed to open up considerably, talking about how afraid she had been initially that we were all barbarians, and about how she had believed that everyone in the USA was out to destroy and kill all Muslims. I think this dinner gave her, to some degree, a different perspective on us western barbarians.
At the end of the week, when they were preparing to leave, they offered us gifts - papyrus paintings of the pyramids and Sphinx they had brought with them.
The project was ultimately canceled (run by a shady Brit, it turned out to be just hot air) and he left Basem and Asra's employer with a very large unpaid bill. Unfortunately, like so many "single serving friends" you meet in the corporate world, we lost touch over the years.
Wednesday, February 2, 2011
From the Land of Unintended Consequences...
The legacy of clean, efficient nuclear power.
One of the important limits of "green" is really understanding the difference between things that might be green and things that are green. At issue is that many people like to talk about green but don't really understand what that means.
For example, is buying a Prius hybrid really more green than using an existing vehicle? My existing truck sits in my driveway. It uses no energy unless I drive it. However, it's big and old and probably not as efficient as a new hybrid. A new one would use less gas, but I'd have to think about the overhead and cost, in terms of energy and "green," it would take to build it. What about battery disposal and/or recycling? As you answer these questions you see that even though something might be new and more "green," the actual cost of it in "green" terms, when you factor in all of the elements, makes it much less attractive.
The same is true of "climate change."
Here there is a lot more rhetoric. And one theme in particular is loud and clear: "Big coal fired power plants are filling the atmosphere with CO2." The implication is, of course, that this CO2 is triggering "climate change."
This mantra has been going on for some years, and what's interesting to me is the effect it's having on other countries.
China in particular.
And China is often demonized for building a lot of coal-fired power plants (see, for example, this article).
Like any self-respecting country China has decided to do something about these problems - both the perception that their country is a big polluter as well as their problem of being one of the most energy-hungry countries on earth.
Their solution?
Clean, efficient nuclear energy.
But not just any nuclear energy.
No, not at all. They plan to use something called a molten salt thorium reactor.
So why write about this? Well, a long time ago, as a child, I lived in southeastern Wisconsin. We lived on rural farmland where the animals far outnumbered the people. In the early 1970s the local power companies decided they needed some nuclear power plants to beef up the local electrical grid. The initial idea was that one of these plants would be built across the street from my parents' home.
My father became something of an activist against nuclear power, and by 1972 was busy with other locals putting up signs, protesting, and so on against having this plant built. (One of my contributions was "No Nukes is Good Nukes.") As a geek child I became interested in the details of how these things worked, what the issues were, and so on.
My interest was also fueled by a high school friend whose father worked at the now defunct Zion nuclear plant in Zion, Illinois. We were able to spend time at the utility's nuclear test site, see the computers, examine and venture inside their small test reactor, and so on. A geek child's dream...
From all of this a few things were crystal clear:
- Nuclear power, while efficient and non-polluting as it operates, was a big mess in terms of creating fuel, managing fuel, maintenance, and handling spent fuel.
- Nuclear power was best done with traditional plant designs that use water for coolant. Exotic nuclear systems were nothing but trouble and an engineering black hole of unknowns.
- The technology was so complex that there would be a variety of unintended consequences.
And this was the early 1970s. Since then there have been a variety of minor safety issues (Karen Silkwood, Three Mile Island, Chernobyl)...
So now the Chinese are going to build molten salt thorium reactors. This is a new and untested technology.
What's interesting to me is that these reactors involve some of the most ugly, nasty and dangerous chemicals on earth - fluorides, uranium hexafluoride gas, thorium - and in much larger proportions than traditional nuclear systems. This particular reactor type uses all sorts of exotic versions of these chemicals in new, untested and interesting ways, and requires a lot of pre- and post-processing of the fuel (think large, complex nuclear processing facilities - the kind Silkwood blew the whistle on - but in a country where you simply commit suicide after they find out you've been poisoning the animal food for the last couple of years with melamine).
So off they go on a twenty-year plan to "develop" this technology. Reading the link above, it's clear that some of their reasons for this are to alleviate their perception as a polluting country, particularly relative to greenhouse gases.
Which takes us back to my original point.
The Zion plant from my childhood sits abandoned today - large pools holding 30 years of radioactive waste sitting right next to the shores of Lake Michigan (which is busy eroding the shore right near the plant). The plant itself is large and ugly and takes up acres of land - land which can never be used again, at least not in my lifetime or the lifetime of my grandchildren.
The 1970s promise of a nuclear power bonanza for the region has been replaced with a traditional coal-fired plant near my childhood home, belching steam into the sky day and night. (This way we have power to surf the internet, text, and use our cellphones while driving our hybrid cars to the organic food store.)
And the Zion plant has waste fuel rods, chemicals, parts and coolant stored on site. (There is no place to move them to - no one wants nuclear waste in their backyard - so it sits in Zion's backyard.)
The Chinese plants will have God-knows-what sort of bizarre and horrific waste chemicals, requiring as yet undeveloped chemical processing technologies to prepare, manage and store. And as we know, the Chinese are somewhat less fastidious about making sure the right chemicals are used for the right things, manufacturing-wise...
No, I think the Chinese are simply leaving the rest of us to suffer the unintended consequences of our actions for centuries.
Don't believe me?
Just ask the people of Chernobyl...