Over Memorial Day the WSJ published an article (see this) describing how the US Pentagon is now taking "cyber attack" seriously.
How seriously you ask?
"Act of War" seriously.
They are kicking around the notion of "equivalence" - say a cyber attack hits the US power grid (which is pretty rickety already) and takes it down. Usually when this happens, at least on a broad scale on or near the east coast, the power stays off for some time. And when it does, people get injured, harmed or killed - for example, a surgery fails at a hospital, or a traffic light goes dark and there is an accident.
This is "equivalent" to some terrorist or foreign national showing up and cutting the wires themselves. So, in the case of the terrorist or foreign national the US could declare this as an act of aggression and retaliate.
Now on the one hand I can see their point - times are changing and it seems unlikely that any country would physically attack the US outright - so in one way it makes sense to treat today's type of cyber "attack" as "equivalent" to a physical one.
It allows us to formally tell our allies what we consider aggression and it allows us to create policy related to such things.
The report goes on to discuss alternate notions of "equivalence" - was the attack actual or was it just an attempt, and, if it was an attempt, how much damage would it have caused?
No doubt "outsiders", e.g., China let's say, will take this declaration as a reason to create their own version of this - replete with rules for counterattack and "damage assessment".
You get the idea.
So let's imagine that this gets all set up and in place.
We have the NATO Cyber Warfare center monitoring various countries and so on. We also have foreign countries, e.g., Iran or China, on the outside. Both sides have extensive cyber attack and defense capabilities.
Now let's imagine those clever folks who hacked the Sony PS/3 sites set about creating a cyber war.
With all this in place it shouldn't be too hard... Pretend to be from China (hack into China's cyber war facilities and launch an attack on the US) or vice versa.
Targeting is easy - defense contractors, the Pentagon, whatever - these have all been hacked before.
Except now it starts a war.
Sure each side will deny any involvement but that won't matter - their IP addresses will be all over the attack.
To prevent this from happening each side will develop a "Dr. Strangelove" type approach to monitoring the "cyber gap" between countries. Country A might get ahead of suspect country B by developing some clever software.
Soon enough there will be issues over the "cyber gap" - just like nuclear weapons in the old days.
Each side plotting the destruction of the other side's servers, switches, etc.
So where does it all end?
Perhaps where Episode #23 of Star Trek, "A Taste of Armageddon", ended (see this). A sort of cyber war is conducted (in this case between two planets). Instead of real bombs the inhabitants simply allow computers to conduct the war as a simulation. There are simulated attacks and defenses - and actual deaths. If you're a casualty in one of the attacks then you have to report to the "disintegration tube" to get zapped into nothing.
Sort of like the old 1964 movie "Fail Safe" where an accidental US attack on Moscow is made "better" with the Russians by the US bombing New York City - both sides lose about the same - one equivalent nuke per equivalent city - and the balance of power is the same.
And what about little Jr. in the basement or at the kitchen table - what if he hacks into some Chinese computer - could he start a war?
As if parents don't already have enough to worry about.
And then there will be cyber "war damages" (real or imagined) followed closely by ambulance-chasing cyber lawyers. It won't matter if there are any actual damages - I am sure the cyber lawyers will find a way to sue anyway. Such treaties will no doubt open the way for international copyright attacks - say, Microsoft declaring that Chinese Windows piracy is an act of war, or some movie studio claiming that excessive BitTorrenting of one of its movies from a foreign country is an act of war.
And what does this mean for the rest of us...
More taxes, more cost, more nonsense, more expense, more trouble.
Soon enough we'll see old Mom, Dad and little Jr. sitting before the Senate Armed Services Committee explaining how little Jr. started a war with China from the kitchen table.
Friday, May 27, 2011
Too Much Information?
(No, I don't mean the kind you don't want to hear about, e.g., your parents' sex life.)
Instead I'd like to talk about situations such as the Air France jet that crashed into the Atlantic Ocean in 2009. Recently the "black box" recorder was recovered from the bottom of the sea.
It paints an interesting picture of what happened.
But before getting into the accident let's talk a little bit about what the cockpit is like on a modern jet.
Like modern cars, jets are full of high-tech devices to prevent "mistakes". For example, setting the flaps to the "wrong" position in many situations is "locked out" by safety devices that tell the pilot that doing so would cause a crash or a problem. There are warnings associated with this as well - audible sounds that indicate various problems.
Most large modern jets fly across the ocean on their own under the control of the "autopilot". Stories abound about crews falling asleep (see this as an example) in the cockpit while the plane happily flies on itself. Autopilots can do a lot including some of the landing tasks these days - however for a variety of reasons pilots are still required.
Most of the safety systems I mentioned above have been developed as a result of reviewing crash data.
The same is true for automobiles. This is why your doors lock automatically when you start to move the car - apparently someone fell out (or more likely their child fell out) long ago and so automakers decided to automatically lock the doors for you. ABS brakes, traction control, lights on with wipers, headlights on all the time, automatic headlights off, etc. all fall into this category.
In reading about the Air France crash (see this WSJ article) we see something interesting.
What apparently happened is that the special "pitot tubes" - which measure the velocity of the air moving past the plane to provide the pilots with airspeed information - became clogged with ice. When this happened the "airspeed indicator" (basically the same as the "speedometer" in your car) began to fluctuate wildly.
This caused the plane's "autopilot" to disengage. (Just like tapping your brakes causes the "cruise control" to disengage in your car.)
What happened next is the interesting part. The pilots, who had extensive procedures for dealing with frozen "pitot tubes" and the resulting problems, got, well, confused. They did a number of things that were inconsistent with what should have been done, including stalling the plane and pulling the engines back to idle.
Now don't misunderstand - these guys were heroes to the very end - desperately trying to fix the situation until the very last second. I believe they did their absolute best.
Instead, I think the real problem was that the "safety systems" (warnings, sounds, lights, etc.) set the pilots into a state of confusion. In their confused state they basically went off into the weeds in terms of procedures, trying to fix what was not wrong and not seeing clearly what actually was.
This harks back to the "Through the Keyhole" post I wrote a while back. But beyond not seeing the bigger picture there is also the notion of "too much help" from the airplane itself.
What do I mean by that?
Let's take "ABS brakes" as an example. I don't like them and I don't like what they do.
I grew up in southern Wisconsin - we had a lot of snow and cold (including at least 30 days below zero - not 32F - the year we got married). In those days you learned how to drive in the snow and ice or you didn't drive. Cars had no ABS brakes or other safety nonsense.
It was up to you to learn how to drive - which meant driving on icy roads (there were too many to salt) and dealing with skids and other problems.
ABS brakes prevent the driver from "locking up the brakes" by sensing when one wheel is turning slower than the others during braking. The ABS then reduces the braking force on the slower-turning wheel so that all the wheels continue to turn at roughly the same speed. The idea is to give you more "control" when slamming on the brakes so that the wheels don't "lock up".
In the olden days locking the brakes (on snow, for example) would stop the wheels from turning completely. An inexperienced driver would then try to steer out of whatever skid they were in by turning the wheel. However, since the wheels were not turning, all this did was point the wheels in a different direction than the skid - and the car continued to skid in its original direction.
The wheels would then suddenly grab again (either because the brakes were released or the car moved onto another surface). However, since they were now pointed in some random direction (because people tended to turn the steering wheel all the way left or right) the car would flip or go out of control.
When I learned to drive you learned how to manually perform the same function as the ABS brakes by "pumping the brakes" in a skid - so instead of slamming them on and holding them there you pumped them on and off for the same effect.
Now you might think that the technology solved all the skid problems in the world - but sadly this is not true.
And this is my point. Modern drivers who have grown up with ABS brakes rely on them to do all their specialty braking functions. They cannot do them themselves because they were not taught to (in fact, pumping brakes usually causes the ABS to get confused, so now I have to account for that as well should I skid). The ABS brakes also make a loud, funny noise - one that's confusing to the driver because the car never makes that noise unless the ABS brakes kick in.
Unfortunately ABS brakes don't always help. A lot of times they do - but not always. And when they don't you can be in a lot of danger if you don't know how to control a skid.
Similarly with the Air France crash. I believe that there is too much being done for the pilots by the airplane these days - so much that they can fall asleep. And when the "autopilot off" alarm sounds, waking them from their slumber or coffee or whatever, they have a very poor picture of how things got into the state they are in (with something out of whack).
And that's critical.
Kind of like waking up to a loud noise, screaming, banging, etc. At first you are confused - what's that sound - is it a burglar? After a few minutes you realize it's the garbage men or a dog stuck in the closet or something like that, and you relax.
But as an airline pilot you have much more (like everyone's life) to be concerned about. You are supposed to know "what to do". But the warning systems and the fact that the "autopilot" and safety systems have removed your perspective creates a problem.
You suddenly become involved in a problem from "scratch" and have to piece together what's going on amidst alarms, faulty input, etc. In a wooden biplane you would never be so far removed from what your plane is doing. However, today you are in a plane supposedly flying itself - until it doesn't.
And that's a serious problem.
Too much help from "autopilots" and safety systems makes the pilots complacent and adds to the time it takes the pilot (or driver, etc.) to figure out what's wrong and do the right thing.
If I control the brakes at all times and I go into a skid there is total continuity between all the events and I can react precisely.
On the other hand, if some safety system starts up on my behalf and confuses me (either with noise or alarms or by taking control from me) I am now at a disadvantage mentally. I am not directly connected to my situation and I have to peer back through the keyhole to figure out what's gone wrong and to figure what to do about it.
From what I see this is what caused the plane to crash in this case. The pilots were unable to get enough information back out through the keyhole to rationally address what was wrong. Instead they went off into the weeds, desperately trying to fix perceived problems (caused by a lack of information) by stalling the plane and putting the engines at idle, among other things.
This is a great danger in our society as lawyers and manufacturers constantly push to make our world safer.
Unfortunately I think that we are crossing the tipping point and that we must stop and examine what the "larger picture" is - whether for flight systems or drugs or ABS brakes.
Are we really fixing a problem? Are we really safer? Are we really helping a pilot make the best decisions by removing him from the loop right up until a problem occurs? Or are we adding technology or false claims of "unsafe products" to situations that would benefit from the users being more directly responsible for their own fates?
Are the safety systems providing too much information in terms of alarms and warnings?
Or are we better off remaining in charge of our own fates...
Thursday, May 26, 2011
Sowing False Memories
(Image: A story about the pain false memories create.)
It turns out that those who saw the fabulous imagery commercial were just as likely to report using the product as those who actually used the product.
This study (reference here and related article here) leads to some very disturbing results at two levels.
At the first level it points out how what we actually remember is different from what actually happened. For most people memories are not like photographs that never change. Instead they are like a box of items. You want to remember something, so you pick up the box, say "Mom's Last Birthday", and dump it out on the desk - and you see the cake, the cards, and so on.
But wait!
There were other birthday things dumped out on the desk as well. So when we move on to thinking about something else and we pick the things back up to put them into the box we forget some and we mix in others that were already on the desk.
Thus the memory of "Mom's Last Birthday" evolves over time to include other parts of our memories.
This study demonstrates concretely that memories are in fact totally fallible and can exist without the actual events ever occurring (not that this is new). More importantly, these "false memories" can occur as a result of a single, well-crafted commercial. Apparently the commercials target "common items" in everyone's "box of memories" - bucolic horse or farm landscapes, football games, and so forth. Into this mix are sprinkled just a few false facts - you drank that drink, you ate that snack food. Later on, when you go back through the "box of memories", you basically don't notice these few false items - I suppose because they are so well mixed into the real ones that you just don't realize it, and because your memory system is always a little "fuzzy" to begin with.
I suppose this also explains deja vu to a degree. You go somewhere that looks like somewhere else - at least mostly - and if your memory of the previous place is faded just enough you feel like the new place is the old place.
The second level of this is even more troubling.
(Consider this too against the big "rage" in the 1990's of false abuse "memories" - particularly stories like the McMartin Preschool Trial - where the initial horrendous allegations all turned out to be made up. In cases like this the "false memories" rose up out of specific situations affecting only a small number of individuals in a specific setting. The study I describe points out that this effect is occurring in each and every person watching ads - whether on TV, video, the internet, or cell phones - every minute of every day.)
First of all, I suspect that today's students as described in the study are far more susceptible to this than old geezers. I watch virtually zero ads these days - with DVRs, site blocking, and other measures few ads get my attention. I can also tell the difference between something I have done and something I have not because I have spent many years in a "low BS" mode - actually profiting from others who are less able to discern reality from nonsense.
Today's young people, I think, have been exposed to these ads (and hence their associated "false reality") for their entire lives, and there is much less of a requirement that they operate in a "concrete reality" than there was for me. For example, political correctness demands that thoughts telling you that you are living in "Animal Farm" must be swept away because, if the truth were properly examined, others might "feel bad" and your reality would not match "the narrative".
And what does this do to a young mind? It creates a "false reality" where consequences and actions are totally unrelated. Hypocrisy (interesting that old Blogger here thinks that "hipocrisy" is always misspelled - it can't be just a coincidence) cannot be detected - they do not see that doing A and simultaneously not doing A is inconsistent.
I myself have noticed this often. Those under thirty or so simply accept simultaneously conflicting realities without question. And we're not talking about "big picture" issues, e.g., science or God. Instead we see all sorts of asymmetric thinking - group #1 is good because it does A; group #2, also doing A, is bad because of who or what is in it. "Hate crimes" versus "crimes" - why are serial killers guilty of "hate crimes"? Making things "equal" by "dumbing down" some aspect of them - are they really still equal?
Without the ability to accurately discern concrete reality from some false reality injected into their minds by outside forces how can young minds be making good decisions?
How does this color their relationships? Personally I see how this type of thinking enables all sorts of excuses for bad and even troubling behavior on the part of one spouse. Can you have a good relationship with someone when your mind automatically sweeps conflicting input about the other person under the rug?
And how about children? Children are notorious for discerning inconsistent behavior on the part of adults. What do they see in this?
What do political campaigns become? Battles to get people to falsely believe as opposed to battles over truth?
No, sadly I think this study is perhaps one of the most damning assessments of modern thinking I have ever seen. If a single ad can create a false reality in a college student's mind what does a lifetime of it do?
For me I see the false reality every day, growing by leaps and bounds, totally unchecked and integrated into society's very fabric. The false realities are not created by a single "big brother" but instead by a collective "big brother" synthesized out of the wants and desires of businessmen, activists, Hollywood types, factions and special interests.
Like a black hole emitting Hawking radiation the "false reality" is spewed into the minds of the weak drawing them further and further away from what is concretely real into the warm, fuzzy, false memory-based reality of the black hole.
One of the things my father taught me was that "advertising was evil". He was born in 1930 at a time when the only reality was one of struggle. I always believed this but I never really knew why. Certainly advertising was designed to appeal to vanity but was it evil?
Now I know why for certain that it is...
(And imagine, every single aspect of modern society is about ads - they are even on this very blog.)
Wednesday, May 25, 2011
Talent and Children
As both a programmer and musician I often get asked about "little Johnny" - does he have talent?
(First of all I make no claim to be an expert in any of this nor am I an expert in talent. This post is the result of personal observation over many years.)
When talking about talent, especially with the young, the most important thing to do is to separate the aspirations of the parents from the ability of the child. Many parents seem to want their child to be some sort of prodigy - most likely to fulfill their own desires as much as anything else. They see a "talented child" as opposed to a child with talent.
Over the last 35 or so years I have had little use for the aspiring parents of seemingly talented children. ("Please help my parental self-esteem - my little Johnny is a programming wiz!")
So what does a truly talented child look like? Is he the youngest guitar wiz ever on YouTube?
Perhaps, but I doubt it.
Instead, I perceive "talent", particularly in a child, as a small burning "ember" of ability that sets them apart from what most other children can do. In the case of music this might be the ability to tease out the notes of a song on some instrument. In the case of programming or computers it might be the ability to piece together some impressive Lego MindStorm robot or hack something on the home computer.
And remember, we are talking about talent here - not genius. If little Suzy is writing full piano concertos at age 11 she's probably a musical genius. If little Johnny can add up one hundred five-digit numbers in his head at age 4 he's probably a mathematical genius. But genius is not the same as talent. Genius is, in my experience, not a burden you really want placed on your child.
(A properly developed "talent" will give someone a lifetime career or a lifetime of fun and enjoyment. Don't destroy that opportunity for a child.)
Parents often confuse "talent" with "genius" in their rush to turn their child into a prodigy. And this is often the worst thing you can do for your child, because the end result is that the child becomes disillusioned with their gift at an early age because of the pressure brought to bear by the parents or other adults.
Another problem a talented child faces is "lessons" - often in music. The poor tot exhibits some inclination for this or that and whammo - they are going to "piano lessons" twice a week and practicing under the whip of mom or dad every day. There is nothing better for discouraging a child than to take something they have an innate interest in and make it an unbearable chore.
Talent does not necessarily include desire or ability. I may be talented with music but my hands may not be coordinated enough to play an instrument - particularly when I am young. I may be incredibly talented as a musician but I may be innately lazy so no amount of prodding gets me to practice.
Parents often believe that talent implies desire and ability - it does not - particularly in small children and teenagers where their physical and mental development are not in sync.
I think the most important thing a parent can do for a child with some small ember of "talent" is to create an environment where the child can grow that ember into a small flame on their own. Does six year old little Johnny like to bang the pots and pans along with the radio? Don't buy him a $1,000 drum set (if you want one then man-up and buy one for yourself, don't use little Johnny as an excuse).
Take him to the second-hand music store and let him wander around. If you buy him anything, spend $15 on something he likes - not something you like. If he loses interest in an hour or a day or a week, take it back to the store.
(In today's society we often tend to want to provide the child everything we think they need, i.e., the drum set. If, on the other hand, the kid has to make do at home without the drum set, he may actually learn much more on his own. The modern drum set is the culmination of 10,000 years of human musical development - plopping it down in front of him takes away from his learning. Not letting the kid develop with what he has at hand would rob him of experiencing the full path to playing the drums. Sure, at some point he will need a real drum set, but not when he's six...)
Let Johnny dictate to you what his interests are. Make sure, if he likes to bang pots and pans, that he gets time to do it every day. Listen to the songs he likes. Pay attention to what he does on the computer.
As an adult you may find the pots and pans banging annoying and hard on your ears. But remember that as little Johnny does it he is developing his motor skills, his hearing, his hand and ear coordination, he is learning about rhythm, he is experimenting with sound. If you take these things away from little Johnny at six the ember will grow cold.
Let little Johnny decide when he's ready for the next step - generally kids will find things that they like to do on their own. They may lose interest in their talents for days, months or years - this is normal. Maybe their body needs to catch up with their mind, or vice versa, and so engaging in their interests becomes difficult. They may discover the opposite sex. Anything. Don't force them or the ember may go out permanently.
Don't disrupt the child's play - as with the pots and pans. They have to learn on their own.
As the child grows into a teenager you can start thinking about creating more opportunities for the child to explore their interests. Find the child a mentor, not a "teacher".
A mentor has successfully grown up to do whatever it is that little Suzy or Johnny likes to do - they know the score, they know the pitfalls, they've done it themselves. They more than likely know what's going on in the kid's head and know how to guide him or her without poisoning their interests and extinguishing the ember.
Find one that can relate to the kid - not to you. A skilled mentor will know how to motivate your child and will know what steps to take next.
Don't spend money on a "teacher" - a teacher makes a living expounding on "what the book says to do".
Should I send my kid to college for music? Maybe - maybe not.
Making a career in music is difficult and putting yourself or your kid in debt for 20 years for music might not be the best thing to do.
Don't expect Mozart or Gauss.
Instead expect to give your child a gift that will last a lifetime.
If your child loves music he or she will find time on their own to invest in their love.
Tuesday, May 24, 2011
iPhone Scripting with Squirrel
So after a bit more work I have completely integrated Squirrel into a functioning iPhone application.
My application is a semi-realtime, game-like system with required response times of less than a millisecond or so. There is a Bluetooth component as well as various networking and other requirements. When events happen a sequence of data is passed to other applications (running on OS X or on another iOS device).
My first iOS application of Squirrel is to pre-process the data before sending it off. This is nothing fancy - basically mapping values, checking ranges, that sort of thing - based on a user-selected "personality".
The first decision was where to put the scripting files. In my case this means putting them into the iOS application resources area (where things like images, audio files, and so on are also stored). At this point I am using Squirrel source (.nut) files, which makes for easy testing and debugging. Closer to the product release I plan to switch over to compiled binary files (.cnut) in order to avoid the wrath of Apple.
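As a concrete illustration, here is a rough sketch (not the shipping code - the resource name is made up and error handling is minimal) of how a bundled .nut file can be located with CoreFoundation from plain C and then compiled and run with sqstd_dofile():

/* Sketch only: locate a Squirrel source file stored as an application
 * resource and compile/run it. Assumes a non-unicode (char) SQChar build.
 * A pre-compiled .cnut dropped in later should load through the same
 * sqstd_dofile() call. */
#include <limits.h>
#include <stdbool.h>
#include <CoreFoundation/CoreFoundation.h>
#include <squirrel.h>
#include <sqstdio.h>

static SQBool run_bundled_script(HSQUIRRELVM vm, CFStringRef name)
{
    CFURLRef url = CFBundleCopyResourceURL(CFBundleGetMainBundle(),
                                           name, CFSTR("nut"), NULL);
    if (url == NULL)
        return SQFalse;

    char path[PATH_MAX];
    Boolean havePath = CFURLGetFileSystemRepresentation(url, true,
                                                        (UInt8 *)path,
                                                        sizeof(path));
    CFRelease(url);
    if (!havePath)
        return SQFalse;

    // compile and execute the script into the VM's root table
    return SQ_SUCCEEDED(sqstd_dofile(vm, path, SQFalse, SQTrue)) ? SQTrue : SQFalse;
}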
At some point in the future I can also see having some sort of app-based purchasing function that integrates with Squirrel as well - enabling features, loading in layers of functionality that allow a user access to purchased components, and so on.
At this point I plan to have the purchasing happen outside the app, say via some web site. The iOS app would be told to synchronize with the site and, when it did, it would discover feature packs to download and install (all based on an encrypted device-based key). The Apple problems with security and scripting are many - for example, someone figures out how to edit the scripts and changes them to do harm in some way. However, for binary compiled scripts I don't see any problem - they are certainly as safe as paying $100 USD for an Apple developer license.
So with all this in mind I adopted the convention of declaring each Squirrel file to have an initialize(self) function (self being, in this case, the invoker of the file). This allows any sort of "installation" functionality (checking permissions, moving extra files, etc.) to be handled.
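For illustration, something like the following sketch (the wrapper name is mine, and passing the invoker as an opaque user pointer is just my convention) is enough to call that initialize(self) function once the file has been run into the root table:

/* Sketch: call a just-loaded script's initialize(self) entry point.
 * Assumes initialize() was declared at the script's global scope. */
#include <squirrel.h>

static SQBool call_script_initialize(HSQUIRRELVM vm, void *invoker)
{
    SQBool ok = SQFalse;
    SQInteger top = sq_gettop(vm);              // remember the stack depth

    sq_pushroottable(vm);
    sq_pushstring(vm, _SC("initialize"), -1);
    if (SQ_SUCCEEDED(sq_get(vm, -2)))           // closure now on the TOS
    {
        sq_pushroottable(vm);                   // 'this' for the call
        sq_pushuserpointer(vm, (SQUserPointer)invoker);  // the self argument
        if (SQ_SUCCEEDED(sq_call(vm, 2, SQFalse, SQTrue)))
            ok = SQTrue;
    }
    sq_settop(vm, top);                         // clean the stack back up
    return ok;
}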
I also adopted the convention of using, at least for menu-based selections of functionality, the notion of a global Squirrel pool of class instances along with the notion of a currently selected one. So, for example, suppose I have a "personality" capability in the iOS app.
I create a base Squirrel file that declares the base class for the personality, a global table of loaded personalities, and a global "current personality" pointer. When the iOS app starts up this file is loaded and initialized and the current pointer is set to the loaded class (typically the "default" class).
As alternate personalities are loaded (as other Squirrel files) they come in as extensions to the base class and get stored as entries in the global table (indexed by some user-readable name).
I set up calls to return the list of entries in the table and to allow the user to set the current pointer to one of them.
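As a sketch of the enumeration side (the global table name "personalities" is only an example of the user-readable index described above; the same pattern, ending with sq_set() instead of a read, can repoint the current-personality slot):

/* Sketch: list the user-readable names stored in a global table of
 * loaded personalities. Assumes a non-unicode (char) SQChar build. */
#include <stdio.h>
#include <squirrel.h>

static void list_personalities(HSQUIRRELVM vm)
{
    SQInteger top = sq_gettop(vm);

    sq_pushroottable(vm);
    sq_pushstring(vm, _SC("personalities"), -1);
    if (SQ_SUCCEEDED(sq_get(vm, -2)))           // the table is now on the TOS
    {
        sq_pushnull(vm);                        // null iterator for sq_next
        while (SQ_SUCCEEDED(sq_next(vm, -2)))
        {
            const SQChar *name;                 // key at -2, value at -1
            if (SQ_SUCCEEDED(sq_getstring(vm, -2, &name)))
                printf("personality: %s\n", name);
            sq_pop(vm, 2);                      // drop key and value
        }
    }
    sq_settop(vm, top);                         // restore the stack
}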
The only tricky part of this was figuring out in a .c file what to do to invoke the class instances. Basically you do the code below to access a Squirrel global called "currentPointer" that holds a class instance that responds to "classFunction":
sq_pushroottable(vm);
// "currentPointer" is to be fetched from the root (global) table
sq_pushstring(vm, _SC("currentPointer"), -1);
if (SQ_SUCCEEDED(sq_get(vm, -2)))
{
    // The class instance held by "currentPointer" is now on
    // the top of the Squirrel stack.
    //
    // Now we want to find a function in the class.
    // We do a "get" relative to the instance for this.
    //
    sq_pushstring(vm, _SC("classFunction"), -1);
    if (SQ_SUCCEEDED(sq_get(vm, -2)))
    {
        // Now the closure for "classFunction" is on the TOS.
        //
        // Next push the class instance again (fetched from the
        // root table at -4) to act as the 'this' for the call.
        //
        sq_pushstring(vm, _SC("currentPointer"), -1);
        if (SQ_SUCCEEDED(sq_get(vm, -4)))
        {
            // The stack just before sq_call will be (top = -1):
            //   -5  "classFunction" closure
            //   -4  class instance ('this' for the call)
            //   -3  param1 (user pointer)
            //   -2  param2
            //   -1  param3
            sq_pushuserpointer(vm, (SQUserPointer)self);  // param1
            sq_pushinteger(vm, a);                        // param2
            sq_pushinteger(vm, b);                        // param3
            // 4 arguments: the 'this' instance plus the three parameters
            if (SQ_SUCCEEDED(sq_call(vm, 4, SQTrue, SQTrue)))
            {
                // the function returns an integer, left on the TOS
                sq_getinteger(vm, sq_gettop(vm), &retVal);
                success = YES;
                *result = (int)retVal;
            }
        }
    }
}
// (Afterwards the caller should restore the stack - e.g. with sq_settop() -
// since the root table, instance, closure and return value are still on it.)
Once you have this in place invoking class instances in Squirrel is easy.
Another problem area I uncovered is reentrancy. This is where you have an application running with multiple threads (either directly or indirectly through the use of something like NSTimer).
In either case you have to make sure that for a given Squirrel VM you only have one call active in the VM at any given time. (At least this was my experience - if I am wrong hopefully someone will let me know...)
For a variety of reasons this is trickier than it might seem at first, because you can run into serious problems as you increase the number of locks involved beyond one. Fortunately for my application the locking happens at a fairly high level above the Squirrel calls (for other, unrelated reasons), which leaves any calls into the Squirrel VM already effectively serialized.
Squirrel is a scripting language and does not appear to support the notion of asynchronous outside events (yes, it has its own threads and so forth, but that is all within the VM). So, for example, if I want some Squirrel code to run on an NSTimer tick I have to ensure that the timer is the exclusive owner of the VM for the duration of the tick.
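For anyone whose application does not already serialize things at a higher level, a sketch of the simplest workable approach (the names here are mine, not anything from Squirrel) is a single mutex wrapped around every host-side entry into the VM, so an NSTimer tick and, say, a Bluetooth callback can never overlap inside it:

/* Sketch: funnel all host calls into one Squirrel VM through one lock. */
#include <pthread.h>
#include <squirrel.h>

typedef SQBool (*script_op_t)(HSQUIRRELVM vm, void *context);

static pthread_mutex_t g_vm_lock = PTHREAD_MUTEX_INITIALIZER;

static SQBool with_vm(HSQUIRRELVM vm, script_op_t op, void *context)
{
    SQBool result;
    pthread_mutex_lock(&g_vm_lock);     // only one caller inside the VM at a time
    result = op(vm, context);
    pthread_mutex_unlock(&g_vm_lock);
    return result;
}

The timer handler would then do its scripting work through with_vm() rather than touching the VM directly.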
I think this could be solved by allowing Squirrel to support multiple local "stacks" and having read/write access to the core VM resources controlled by a lock specific to that VM. This would allow things like timer actions and asynchronous events to work seamlessly within the Squirrel architecture without having to support high-level lock granularity. The locking could be supported by callbacks (which could be compiled in) when referencing global resources, so efficiency would not be impacted unless necessary.
You could also add a language construct like "volatile" that would tell Squirrel that a given object had to be locked before accessing it.
It looks to me like in the long run support for volatility would be a requirement, but for my purposes I can get around all this for now...
Monday, May 23, 2011
My Quantized Life...
(Photo: Mugs, R2, Kylie, and Bully)
But that's not what this is about.
I often work from home and, when doing so, have to deal with quantization of a much different sort.
We have many dogs - four and a half of which live in the house with us (some participate in the daily grind only superficially). They range from the tiny "R2" to Mugs (whom I have written about before in "Mugs and the Vet") to the giant English Mastiff "Kylie" to the professional Russian pissing dog "Bully".
I have found that much of my daily at-home existence is quantized by the complex and intricate interaction of four and a half dogs' worth of bowel and bladder cycles. Each dog, of course, operates on their own cycle of eating, drinking and having to go out. But my life is regulated by the synchronicity of the alignment of these cycles from moment to moment.
"R2", for example, (who is named after "R1" and not the Star Wars character) has a body which operates like fine Swiss clockwork. At precisely 7:30 AM each morning he awakens to be let out. At precisely 8:25 AM he has to go out again. Beyond that he might go out once or twice more until dinner time (which for all of the inside dogs is 11:30 PM). He goes out once after dinner and then to bed.
(11:30 or later being the only time, given our schedule, when we can be sure that we will not leave a house full of recently fed dogs alone for many hours unattended.)
So "R2" is about as maintenance free as they come.
"Mugs", on the other hand, always has to go out. Starting at 11:30 PM after dinner, then at 2:50 AM, then at 8:30 AM, then at 9:30 AM, then at noon, then at 5:00 PM, then at 7:00 PM. Mugs is efficient with his business and does not like to be kept waiting at the door on his return.
"Kylie", like Mugs goes out as often as possible: 9:00 AM, 9:30 AM, 10:00 AM, 10:30 AM, and so on.
Kylie is also camera shy (why, I cannot even guess) so you never get to see the front of her. But then Kylie only comes inside after "going out" if I go out the door, turn left, then right, and "wave her in" to the door - sort of like a ritual dance. She spends a lot of time patrolling for groundhogs, crows, deer and chew toys taken outside before last winter.
(Now, fortunately for us both Kylie and R2 work hard to prevent "horse attacks" from the neighboring horse farm. Apparently the horses have the potential for attack at any time of the day or night - requiring one or more of the dogs to place themselves between the horses in their field and the house. Loud barking and baying also helps to keep away the nasty horses...)
And last, but certainly not least, is "Bully".
Through some miracle of biology Bully has a full quart-sized bladder. However, nature - always the trickster - accomplished this by overlapping the space for Bully's stomach with his prodigious bladder, i.e., only one can be full at any given time. Bully, after three full outside trips in the morning to completely empty his bladder, can last until at least 5:00 PM if not longer before having to go out again.
The only real problem is that Bully hates to go outside unless absolutely necessary. Like Gandhi, Bully practices "passive resistance" to the mere thought of going out unless he needs to.
"Bully, do you have to go out?" I say.
Bully sits down, indicating no. Which is fine. Except that Bully willingly agrees to go out only when his bladder is within three drops of full (I guess another miracle of biology). Then he has to go "now" - literally running to the door which, if I don't arrive ahead of him to open it, may impede his success.
Sadly he doesn't always make it - dribbling all the way to the door on occasion. If he won't go out before eating he is unable to finish his dinner before hitting the limit. I have to watch him carefully - if he stops in mid-bowl and heads for the door I have to run ahead and clear the way to get him out in time.
(The speculation is that "Bully" worked his way out of a Siberian subsistence living by winning "bladder emptying" contests in local bars, turning professional, and coming to the US...)
But back to quantization.
Now, if you overlay all of these cycles you find that during the morning hours I am getting up to let someone in or out every paragraph (or sentence).
Thus my own activities, as husband, blogger or professional software writer, are quantized down to the points where every dog's bladder is simultaneously empty, i.e., no more than a few minutes pass without interruption of some sort. (Of course, a stray UPS, Fedex or well-tender man really sets things off.)
Since all the bladders fill at different rates I am left with a "quantized life" - things, like email, phone calls, software development, blogging, etc. all must be broken down into sub-tasks which can be accomplished in a few minutes - at least before lunch. (And that does not count other demands, such as petting.)
I have found that peanut butter stuffed bones, frozen solid, can buy just enough time to get organized in the morning.
After lunch some of us go "upstairs" to work. During this time there is a general serendipitous synchronicity of bladder cycles (and hence in the quantization) until about 5:30 or 6:00 PM - when things pick up until dinner.
Oops! I have to run...
Friday, May 20, 2011
Scripting in OS X with Squirrel
I have been working on a new product, part of which requires a server application to run on a Macintosh - at least for now. The other part of the application runs on an iPhone and/or iPad.
I needed a way to create programming scripts to control certain aspects of the product, i.e., changing the product's functionality in the field without rebuilding the code. To accomplish this I am using a scripting language called Squirrel.
Over the last decade or so I have used a number of languages for scripting, among them Chicken, which is a Scheme-based Lisp. Chicken worked well for most things but, alas, the Lisp syntax with its bazillions of parens is really too much to take over the long haul - even with editors that help out by doing the balancing for you.
Squirrel comes as two small directories of C++ code. One for the main language and compiler, one for a series of standard "libraries" that support things like file I/O, math and so on. While there are plenty of scripting languages out there (JavaScript, Lua, Chicken, etc. etc.) all of these appear to be much larger than Squirrel - larger in the sense of more files, more complexity, more build issues, and so on.
Squirrel consists of about a dozen .CPP files for the main language, compiler and runtime and another half dozen or so for the standard library. Not bad when you consider what you get (from the web site):
- Open Source MIT license
- dynamic typing
- delegation
- classes & inheritance
- higher order functions
- lexical scoping
- generators
- cooperative threads (co-routines)
- tail recursion
- exception handling
- automatic memory management (CPU bursts free; mixed approach ref counting/GC)
- both compiler and virtual machine fit together in about 7k lines of C++ code.
- optional 16-bit character strings
- lambda functions
Lambda functions, threads and generators in 7K lines of code - a geek's delight for sure.
When checking out the site I did not see a Mac or iOS port. Since these are the platforms I needed, I decided to port it to both.
Now the target systems (Mac and iOS) have some peculiar issues. The Mac is either a 64-bit or 32-bit OS, and iOS requires both an ARM and an i386 version (ARM for the physical phones and i386 (32-bit) for the iPhone/iPad simulator you use for virtual testing). This means four different libraries, eight if you build for both debug and release.
So I set about porting - which took zero time since - despite no Mac/iOS being listed on the squirrel web site - the code just built and ran (a standard Makefile is provided so all you have to do is cd into the top level and type "make").
More work (which I posted about here) was required to get things to link.
(At this point I have to say that XCode 4 is not quite ready for prime time. It does work, but it's so ridiculously confusing to use that it's almost too frustrating to deal with. But more on this another time...)
Once I had the libraries working for the various platforms I set about doing some basic testing - making sure that for both Mac and iOS I could call and run Squirrel .nut files from running Mac and iOS applications. This also worked without any hitches right out of the box.
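For reference, the "no hitches" part really does come down to a handful of calls. Here is roughly what my little test harness does - a sketch only (the startup.nut file name is just an example, and which standard libraries you register is up to you):

#include <stdio.h>
#include <squirrel.h>
#include <sqstdio.h>
#include <sqstdaux.h>
#include <sqstdmath.h>
#include <sqstdstring.h>

int main(void)
{
    HSQUIRRELVM v = sq_open(1024);        // create a VM with an initial stack of 1024 slots

    sq_pushroottable(v);                  // the standard libraries register into the root table
    sqstd_register_iolib(v);
    sqstd_register_mathlib(v);
    sqstd_register_stringlib(v);
    sqstd_seterrorhandlers(v);            // install the default runtime error handlers

    // Compile and run a script file; the root table (on top of the stack) is used as 'this'.
    if (SQ_FAILED(sqstd_dofile(v, _SC("startup.nut"), SQFalse, SQTrue)))
        printf("failed to run startup.nut\n");

    sq_pop(v, 1);                         // pop the root table
    sq_close(v);
    return 0;
}

The same few calls work from an iOS application just as well as from a command-line Mac tool.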
The next tricky thing to do was to integrate the build environment. As I described in the last post the only way I could get that to work in XCode 4 was to put Squirrel into a library (.a) file.
Now on the iPhone/Mac platforms there are multiple languages you can use: C, C++, and Objective-C among the main ones. Both the GCC 4.2 and the LLVM GCC 4.2 support all languages on both platforms. However, I was somewhat concerned about mixing these all together.
It turns out that the only really tricky part is adding "-lstdc++" to the "Linker Flags" section of the build page in either XCode 3.2 or 4 after you create and add a Squirrel .a library. Squirrel is built with C++ and hence requires the C++ standard library.
However, beyond that I had no other operational problems. It was easy enough to create both .c and .m files and to call into the Squirrel code from both. (I suspect that this is more to do with GCC/LLVM than Apple.)
At this point I have put together an extensive set of code surrounding Squirrel for my application and I must say that other than some minor documentation issues and a small change to Squirrel it all went great.
Squirrel is designed for gaming and my application, which has realtime aspects to it, should work well with its architecture.
I did make one change to Squirrel - the compiler, which can compile .nut files into the VM or into .cnut (compiled .nut) files, takes a lexer function as a parameter. You pass the compiler a user pointer when you call it and the lexer function is repeatedly called with this parameter so you can handle all of your own lexing issues. For example, in my case, I wanted a way to include certain symbols and constants from .h and other files without having to duplicate them.
The only issue I found so far was that the compiler error function, which the user can also define, is not called with the user pointer - which means that if you are augmenting the lexer the compiler error function cannot know the state of your lexer. I changed Squirrel to make the user pointer a parameter to the compiler error function - hopefully this change will make its way into the official code base.
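To make the lexer hook concrete, here is roughly the shape of it (the ScriptSource struct and FeedOneChar function are illustrative names of my own). The compiler calls the read function over and over with your user pointer and expects the next character back, or 0 at the end of the input; the stock compiler error callback, by contrast, only receives the VM, a description, the source name and the line/column - which is the gap my change fills:

#include <squirrel.h>

// Holds whatever state your custom lexer front-end needs; here just a string
// that has already had the extra .h symbols and constants spliced into it.
struct ScriptSource
{
    const SQChar *text;
    SQInteger     pos;
};

// Called repeatedly by the compiler with the user pointer; returns the next
// character, or 0 to signal the end of the input.
static SQInteger FeedOneChar(SQUserPointer up)
{
    ScriptSource *src = (ScriptSource *)up;
    SQChar c = src->text[src->pos];
    if (c == 0)
        return 0;
    src->pos++;
    return c;
}

static void CompileAndRun(HSQUIRRELVM v, const SQChar *text)
{
    SQInteger top = sq_gettop(v);
    ScriptSource src = { text, 0 };
    // On success sq_compile leaves the compiled closure on the stack.
    if (SQ_SUCCEEDED(sq_compile(v, FeedOneChar, &src, _SC("inline"), SQTrue)))
    {
        sq_pushroottable(v);              // 'this' for the top-level code
        sq_call(v, 1, SQFalse, SQTrue);
    }
    sq_settop(v, top);                    // pop the closure and anything else we pushed
}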
Documentation-wise I really didn't have too much trouble. The only part so far that I have found unclear is the difference between assigning to local and global variables (using '=' versus '<-'). There was some forum help for this but I think adding some extra documentation around this would be helpful.
(I do have one question at this point, however. The issue of reentrancy. My guess is that for a given Squirrel VM you cannot have more than one active outside OS thread or event loop simultaneously active, i.e., say I have one VM and two events invoking Squirrel functions through the C API. I have to assume that these would need to be synchronized though the documentation and web site are not clear on this.)
In my .m and .c code I am able to call into Squirrel and easily access class instances and invoke members, which makes the functionality I need very simple and straightforward.
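To give a flavor of what that looks like (the "player" instance and "update" member below are made-up names for the example), the same few lines compile as C, Objective-C or C++, since the Squirrel API is plain C:

#include <squirrel.h>

// Look up a global instance in the root table and invoke one of its members.
static void CallPlayerUpdate(HSQUIRRELVM v, SQFloat dt)
{
    SQInteger top = sq_gettop(v);                 // so we can restore the stack afterwards

    sq_pushroottable(v);
    sq_pushstring(v, _SC("player"), -1);
    if (SQ_SUCCEEDED(sq_get(v, -2)))              // the instance is now on top
    {
        sq_pushstring(v, _SC("update"), -1);
        if (SQ_SUCCEEDED(sq_get(v, -2)))          // the member closure is now on top
        {
            sq_push(v, -2);                       // the instance becomes 'this'
            sq_pushfloat(v, dt);                  // one argument
            sq_call(v, 2, SQFalse, SQTrue);
        }
    }

    sq_settop(v, top);                            // clean up everything we pushed
}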
While I have most of this also working for iOS I have not completely tested it - though, since much of my code runs on both platforms and based on what I have seen so far, I do not expect any problems.
Squirrel was written by Alberto Demichelis and I have to say that overall this is an excellent effort. The language is fairly intuitive, powerful (supporting much of what you can do in Lisp without the ugliness), and straightforward to learn. The provided .CPP code is clean and straightforward - I added my extra user pointer to the compiler after about 10 minutes of work. (From his resume it looks like Alberto is just one talented guy - basically completely self taught.)
The license is MIT Open source (linked from the Squirrel web site: "Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.") which means that there isn't much required to make use of it in a commercial product.
My hat is off to Alberto on this one.
Thursday, May 19, 2011
What We Don't Cover...
From the SF Body Art Expo Page - by Cecil Poter
I usually cover multiple sources each day: The Wall Street Journal, Wired, various geek publications, various other news outlets, law, copyright and technology sources, to name but a few.
Today, for example, here are just a few of the possible topics for the Lone Wolf:
Girl Talk by the Tesla Orchestra
(Notice there is no one seated in the audience... I wonder why???)
I have always been a big fan of Tesla Coils. This video was created by the Tesla Orchestra in Cleveland, OH USA. You never know what you are going to get when mixing geeks and music.
Then there is the fascinating circumcision ban on the ballot in San Francisco (see this link). I have been to a few over the years (though I am not Jewish nor am I a doctor). I distinctly remember my first time. It was the late 1970's and I lived and worked in NYC. My boss had a business on 14th street and lived a few blocks over on 12th. He and his wife had just had a baby - little Arnie Jr. - and he was going to be circumcised (actually it's called a bris in Yiddish).
(I learned a lot of Yiddish in those days - a fabulous language, known for lots of fun slang words like the apropos putz and shmuck.)
So I donned a yarmulke and headed over with the family to watch the mohel conduct the affairs. Unlike circumcisions in most of the western world this one was conducted in the dining room where the ceremonious results were proudly held up for all to see.
Unless they plan on banning tattoos, piercing and other fun body art as well (like the Body Art Expo to be held at the San Francisco Cow Palace this August) it seems like the height of hypocrisy.
Then there is the issue of "nuclear safety" in the wake of Fukushima (see this). Nuclear power accidents, as I have chronicled in this blog, have a wonderful history of human involvement - particularly on the "human error" side.
Humans always seem to make the wrong decisions just at the tipping point of the disaster - letting the cooling water out of the core (Three Mile Island), doing the wrong thing right after the earthquake (Fukushima), and so forth.
In the US the Nuclear Regulatory Commission (NRC) is ultimately in charge should there be a nuclear accident. However, I doubt very much that the US (both the government and the utilities) would be any less bureaucratic than the Japanese in the case of an accident.
Then there is Squirrel - a scripting language system - that I have ported over to Mac OS X and the iPhone. (This is highly technical but these posts turn out, at least in the "geek community" to be at least as popular as the other types of posts.) This is not quite complete so it will probably have to wait a bit.
There is also music. Open mics at the "Steel City Steakhouse" on Wednesdays with Ed Jenkins and the Zig Zag open mic at the Wicked Witches on Tuesdays. This is with traditional instruments as opposed to Tesla coils.
Wednesday, May 18, 2011
XCode 4 Static Library Hell
The other day I set about creating some static libraries in XCode4 for a project I was working on.
The project involved a couple of libraries of .CPP (C++) code that I needed to integrate into my iPad project.
Basically the C++ code was a stand-alone library of functionality that I needed to call from my iPad application. I wanted the libraries to be statically linked, i.e., built into the application, as opposed to becoming dynamic libraries linked at run time. I also wanted to use the LLVM GCC 4.2 compiler because the rest of my project was built using this compiler. A final requirement was that the library work on the iPhone, iPad, both iPhone and iPad simulators for debugging, and on Mac OSX.
I started out by building the libraries by hand with some scripts provided with the library. Basically the scripts looked like this initially:
gcc -O2 -fno-rtti -Wall -fno-strict-aliasing -c $(SRCS) $(INCZ) $(DEFS)
ar rc lib.a *.o
This created lib.a which I could link with a second script that compiled a .c file and linked it to the library:
g++ -O2 -fno-rtti -o app $(SRCS) $(INCZ) lib.a
Now using XCode 3.2 makes this very easy. You simply create a new project, select either iOS/Library or Mac OSX/Framework & Library, add your files, check your compiler settings, and build.
In the case of Mac OSX you can set whether you want a build for production or test. I also needed to create both 32-bit and 64-bit versions of my app.
For 32-bit OSX I could use the existing command line and just change it as follows:
llvm-gcc -O2 -arch i386 -fno-rtti -Wall -fno-strict-aliasing -c $(SRCS) $(INCZ) $(DEFS)
...
llvm-g++ -O2 -fno-rtti -o app $(SRCS) $(INCZ) lib32.a
Both llvm-gcc and gcc create the same type of .o output files for the library.
I then replaced the second command line, llvm-g++, with an XCode4 project for a Mac OSX 32-bit application. I included the .a library and the second pass .c files and everything linked and ran just fine.
For a 64-bit OS X app I changed "-arch i386" to "-arch x86_64" to create a "lib64.a".
However, now I was left with two libraries in a project where I only wanted one, so that I could simply toggle the project "scheme" to 32 or 64 bit.
To do this I used lipo. This app allows you to combine libraries together. Somehow XCode then knows which architecture (as defined by -arch) to pull out at link time to build the app. You just say
lipo -create $(LIB64) $(LIB32) -output commonosxlib.a
Now you can include commonosxlib.a in your projects (XCode 3.2 or 4) and the right library will be linked based on the selected application bit size.
I next tried this same scheme for the iPhone. I changed "-arch i386" to "-arch armv6" thinking that this would be easy.
No dice.
I got strange errors:
llvm-gcc-4.2: error trying to exec '/usr/bin/../llvm-gcc-4.2/bin/arm-apple-darwin10-llvm-gcc-4.2': execvp: No such file or directory
Now what?
I went back to XCode 3.2 and created a new project as above (iOS/Library), added the source, targeted the iPhone, and boom - I had an iPhone library. I also had to make a second build targeting the simulator (because that actually is i386-type architecture as opposed to ARM).
I used the same lipo trick to build a common lib for the simulator and the iPhone and I was done - that way I could have one .a in my iPhone/iPad project for both the actual device and the simulator.
The libraries work just fine in XCode4 projects.
Now I figured I could do all of this library work in XCode4 eliminating the need for XCode 3.2.
That's where the fun began.
I found this article which claims to accomplish this.
The only problem was that after hours of thrashing with the XCode4 library stuff I could not get it to produce a .a file no matter how hard I tried. I was able to follow all the steps but no libraries that I could find ever came out.
I am sure I could create workspaces in XCode4 and import the XCode 3.2 projects in order to cheat and get everything all into one place but at this point I am so pissed off about how ugly the library building is in XCode4 that I probably won't bother.
Fortunately the library is a very static one and I can afford some ugliness just to stay away from the XCode4 nonsense.
Apple's XCode4 is nice if you don't do anything but build your simple project from scratch. But as soon as you get into all the "schemes" and other uglies, things quickly fall apart. There is no good way I have found to see what it's actually doing (unlike 3.2, which created logs). I think 4 creates logs but they are in weird places.
Oh well - I created this post so others would not bother with XCode4 and libraries until Apple comes up with something nicer. It took me less than one minute to create the iPhone library (and I am not an XCode wiz by any means). After about four hours with XCode4 I gave up...
Tuesday, May 17, 2011
"It's Just a Flesh Wound..."
Last March I asked where could all the water in the Fukushima nuclear plants have gone (see "Diablo Canyon - America's Fukushima" from March 17th).
Well, according to this WSJ story and others, apparently it's all going out the bottom of the reactors (Nos. 1, 2 and 3)...
And, surprisingly, it looks like three of the reactors had at least partial meltdowns as well.
How nice! Clean safe nuclear energy...
TEPCO, after injecting 10 million liters of water into a reactor vessel designed to hold about 400,000 liters, now thinks there might be a leak problem - how fantastically observant.
TEPCO says that the fuel probably did not melt through the bottom and that the water is instead coming out of broken pipes and so forth connected to the reactor vessel (GE's Immelt says "Thank God for careful parsing of sentences..." - I wonder if his name, "I Melt", is some sort of nuclear reactor pun...). At any rate the GE-built vessel is safe but the fuel inside is melted and highly radioactive liquid is leaking magically out of the vessels through broken pipes...
This reminds me of "Monty Python and the Holy Grail" where the Black Knight, defending the bridge (or perhaps nuclear power), has all his limbs hacked off but remarks "It's only a flesh wound..."
Now in the linked post from March I pointed out that Diablo Canyon was on the coast and by a fault. But what if a reactor inland, over, say, a large freshwater aquifer, had the same problem? Well, if you check out the list of US aquifers you would see a lot of potential problem areas (map of US aquifers). All the highly radioactive waste would leak down through the ground and poison the aquifer.
There's nothing under Japan but rock, faults, and the sea - so leaking reactors near the sea aren't a problem for drinking water and aquifers.
Obama's nuclear energy policy will only suffer a "flesh wound" with the Fukushima meltdowns - at least that will be the spin.
I was thinking the other day about the supposed 10,000 deaths a year due to coal fired electricity plants.
I wonder how many people would die decades sooner without electricity?
I wonder how many premature births would end in death without electricity to run the incubators?
How many of us would die young sitting around wood fires instead of in houses powered by electricity?
Before electricity human life expectancy was some decades (two, three?) shorter than today.
Sadly, no one thinks today about the bigger picture.
In fact, I doubt many people even know there is a picture, much less how big it might be...
Monday, May 16, 2011
"Oh, Byzantium!" Privacy, Sex and the Law
There is a substantial and growing danger in our culture of focusing the law on ridiculously low levels of detail. I am not sure where it comes from, but the result is that we now live in a society where having "Byzantine Law" would be a relief.
The first area of confusion is what is meant by "privacy".
For example, a file exchange site called "Drop Box" offers a service whereby you can exchange files with others. Initially in the myriad of disclaimers and disclosures the site claimed that "All files stored on Dropbox servers are encrypted (AES256) and are inaccessible without your account password." Later on that claim was changed to "All files stored on Dropbox servers are encrypted (AES 256)." (See this PDF.)
The real issue here is whether it's possible for someone other than you, the uploader, to access the files, and whether you and your intended recipient are satisfied with the service. The service should not have to worry about exactly how secure this is; if you care, then you need to take the burden upon yourself to figure it out.
The big additional problem here is that neither of these statements makes any sense to begin with. "Servers" cannot be encrypted - only the information on the servers can be. Clearly the first sentence indicates that the "servers" would be inaccessible without your account password. In no case does it mention files.
The second sentence cannot be true either, because any server on which all files were encrypted would not function - the operating system files must be free of encryption in order for the server to run, and obviously you would need to view unencrypted web pages in order to use it.
At issue is what is the "intended" purpose of Drop Box from the perspective of a user.
Users see the site as protecting their information from casual interception, for sure. But what else is expected? Protection from governments? Aliens? Where does the line get drawn and at what cost...?
If an arms dealer uses this site and the government intercepts his information then I guess he was at fault for using the site in the first place.
Now let's look at another area of confusion: sexual encounters.
The most recent Byzantine case I can think of is the Julian Assange (WikiLeaks) case where part of the allegations include a statement by Miss A that “but that it was too late to stop Assange as she had gone along with it so far” (see this for the full statement). Similarly Miss W claims "during the night, they had both woken up and had sex at least once .... She had awoken to find him having sex with her."
Again we have nonsensical statements being parsed by the law.
Can you willingly be doing something you do not consent to?
Previously "macro" events - for example exchanging a file with someone or have sex with someone - were what was considered before the eyes of the as "atomic" - that is either you did it or you did not - the "it" being, in either case I suppose, the "exchange".
There was little or no consideration of the "low level details".
But today all of that has changed dramatically - particularly before the law.
It used to be that if you were in bed with someone else naked that was sufficient to be considered to be "having sex" - specific details of who did what to whom were of little consequence - the assumption being that if both parties had willingly gone along together to that point then it wasn't important what else might or might not have happened unless a pregnancy resulted. They were considered to be having consensual sex regardless.
It's much less clear today.
And what is my expectation if I buy an iPhone that is supposed to know where I am...? Should I expect the phone to in fact know where I am? Doesn't that expectation fly directly in the face of my expectation of "location privacy"?
And no matter what a company does to try and mitigate all of this someone will always find a "flaw" that could be used to circumvent any protection.
The law today is far worse than the Byzantine Law of the 14th century because it attempts to spell out specific details of each and every possible type of offense, isolates specific activities already covered by other laws, and includes specific penalties for different types of "intention" while committing crimes, e.g., hate crimes as opposed to crimes.
In the past if you killed someone you killed someone - while motive might be an issue determining the length and severity of your punishment, the crime itself was the principal issue - killing someone. Today there are a myriad of other details - details that do not mitigate the killing - yet they consume enormous amounts of legal cost, time and effort and do not change the outcome. These details offer no "value" to society and open even more loopholes for lawyers to maneuver guilty defendants to innocence.
And what is the cost of all of this nonsense to the rest of us?
I believe that it is becoming enormous and will begin to exceed the cost (if it has not already) of imposing basic justice in the first place.
The cause of this, of course, is unscrupulous defense attorneys who manage to claim a defendant did not break the letter of the law when in fact he did. "He was copying files, not stealing, your honor, blah blah blah..."
And because judges, legal systems and juries are not technologically sophisticated things only become worse.
Now we see the Obama Administration asking for mandatory three year prison sentences for "critical infrastructure hacking" - whatever that might be. One imagines that associated stealing, breaking and entering, trespassing, trafficking in stolen goods, and God knows what else would be sufficient for prosecutors to obtain a conviction - but apparently no longer.
There is a very great danger here that the number and specificity of laws will become so great that society will be unable to accomplish anything at all - in terms of generating work for its members and in terms of what its members can do.
And now we have copyright trolls literally suing people based solely on IP addresses (see this). Will you be one of the 23,000 defendants?
Friday, May 13, 2011
More Google...
Our old friend Google seems to have run afoul of the US Government - regardless of their "Don't Be Evil" policies.
A cryptic regulatory filing recently disclosed that Google was setting aside $500 million USD (as in half a billion, with a 'B', dollars) to potentially resolve a case with the Justice Department.
Supposedly, according to the filing, this involves "the use of Google advertising by certain advertisers." (This would be the largest fine ever paid by a company - so much for "Don't Be Evil"...)
Unlike a telephone line or ISP that provides internet services, a search engine company can be liable for what it does if it makes money with it. (Telephone laws subsequently applied to ISPs provide that the transmission services are not responsible for what others transmit over their lines or services.)
At issue here is a policy Google had in place to allow Canadian pharmacies to sell drugs to US customers.
Now in general online drug sales are illegal - particularly if no prescription is involved or if the sale occurs outside the US. In addition it is illegal to import drugs into the US. Now generally many US citizens purchase drugs from online Canadian pharmacies because they are cheaper. Yet Google allowed these types of pharmacies to advertise.
Google continues to run afoul of the law both in the US and in other countries. It seems that they have certain ideas about what is legal and what isn't and they don't seem to understand that another country may have its own ideas about law. (Perhaps this is a result of "globalist" thinking - that there are no countries or borders and we all hold hands and sing "Kumbaya"…)
In the long term I would say that what companies like Google do in terms of tracking and location data is going to become more and more "regulated".
For example, in the EU there is currently talk of making your "location" part of your personal information, i.e., private. Even though Apple, Google and others gain your consent to track your location this may impact what those companies can do with it - and hence limit their potential to profit from that knowledge.
There is also "do not track" which is a set of initiatives that are focused on preventing companies like Google from tracking what you do and where you go ad-wise on the web.
So from my perspective their "markets" for tracking your information are becoming more and more limited as time goes by.
In the meantime Google is busy preparing to sell its new Chrome laptop - kind of a netbook running Google's Chrome Operating System. The concept here is a web-based service that corporate types can use - I guess the reasoning is that corporations will store all their private data on Google's servers.
The last thing the world needs is another computer operating system - yet here they are with Android and now Chrome - pounding the pavement to get their name out there.
I have no doubt that somewhere in all the registration nonsense to turn on the Chrome laptop is an "I Agree" button that gives Google access to all of your movements about the web.
(While I write this offline Google claims it is busy restoring lost posts from blogspot - the service I use to post. Yesterday's post on "Through the Keyhole" is gone as of right now - I hope they get it back as I don't have any backup of that post. The blog part seems to be up but I cannot add posts… Oh well.)
Thursday, May 12, 2011
Through the Keyhole
For many years I have been interested in the general problem of how easy it is to delude yourself with logic. As a logician and computer programmer I am often involved in situations where there are literally millions or billions of things involved in a particular problem, and, of those million things, maybe one or two are slightly wrong and then only sometimes.
Solving these types of problems has taught me one thing of particular interest.
I call it the "keyhole dilemma".
Let's suppose I am in one room looking through an old fashioned keyhole in a locked door into another room. On the wall next to me is a lever. When I pull this lever a bell above (on my side of the door) is supposed to ring. The keyhole only affords me a partial view of what's in the room. Inside the other room is some kind of machinery about which I know only what I can see through the keyhole that causes the bell to ring in response to pulling the lever.
The bell does not always ring and my job is to fix it.
Ideally I would like to fix the problem by only using the keyhole - perhaps by inserting a wire and poking some switch or control that resolves the bell ringing reliability problem - this would use the least of my time. However, I can spend more money and time cutting a larger hole (as well as fixing the mess) so that I can see more of what's going on if I think that will more quickly and cheaply solve the overall problem. On the other hand I could make such a larger opening only to find that it was unnecessary and I only needed to use the keyhole.
Then again, part of the bell ringing machinery may be attached to the door, or worse require that the door be present and closed, in order to work. So I might have to go through the wall instead - which is an even bigger mess and takes more time. Then again, if a monkey is ringing the bell, tearing the door or wall apart might scare him off completely - leaving not only the bell not working but me having to figure out what was on the platform and what it did in order to ring the bell.
The real issue, then, is whether, through the keyhole, I see enough of the problem to formulate a reliable solution.
Now inside the room there can be all manner of bizarre nonsense to ring the bell: a monkey on shelf who is poked by the lever that is trained to pull a string on the bell, a string directly from the lever to the bell, some complex Rube Goldberg machine involving mice, bowling balls, a set of fun-house mirrors that invert my keyhole perspective, etc. - literally anything.
So now, standing before the door, what do I do to fix the problem?
One solution is to use science. Science uses a process called the "scientific method". Basically you collect observations about the problem until you can form a hypothesis about what you see. You then reason about your hypothesis and conduct experiments to prove your hypothesis.
So, for example, I might see part of a rope through the keyhole and observe that, when I pull the lever, the rope becomes taut and then the bell rings. I might also see that sometimes the rope does not become taut and the bell does not ring.
From this I would form the hypothesis that the rope is not solidly attached to the lever and that sometimes when I pull the lever the rope slips and does not ring the bell.
I may then sit down and try pulling the lever twenty times to see if that's indeed what's happening after which I take whatever steps necessary to attach the rope more solidly to the lever (perhaps by cutting a hole in the wall, etc.)
This seems reasonable, doesn't it?
Well, only if twenty tries is actually a useful amount of testing...
The problem, of course, is that through the keyhole I do not see that the lever is actually poking a monkey who pulls the rope to ring the bell, and the monkey, being lazy, does not always do his job when poked. Cutting a hole in the wall frightens the monkey, who runs out the window, leaving me with nothing to fix...
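To put a number on why twenty pulls may not be a useful amount of testing, here is a minimal sketch of my own (the 90% reliability figures are invented; nothing here comes from a real experiment) showing that the slipping-rope hypothesis and the lazy-monkey reality look identical through the keyhole:

```python
import random

# Two hidden mechanisms, observed only through the "keyhole" of lever
# pulls and bell rings.  The 90% figures are made up for illustration.

def slipping_rope(pulls, grip=0.9):
    """My hypothesis: the rope slips about one pull in ten."""
    return sum(random.random() < grip for _ in range(pulls))

def lazy_monkey(pulls, diligence=0.9):
    """The reality: a monkey rings the bell, but only when he feels like it."""
    return sum(random.random() < diligence for _ in range(pulls))

random.seed(1)
trials = 20  # the "reasonable" experiment from the text
print("rope hypothesis: bell rang", slipping_rope(trials), "of", trials)
print("monkey reality:  bell rang", lazy_monkey(trials), "of", trials)
# Both come out at roughly 17-19 rings, so twenty pulls are consistent
# with either explanation - the experiment cannot confirm my hypothesis.
```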
So while science might provide answers it can only do so if the perspective through the keyhole is wide enough to expose what is actually going on. Otherwise science can only provide a perspective that is bound by the limits of what I actually observe - which in fact might be very little of the true problem.
So, as I wrote yesterday in "Failing Our Future", we have things like contraceptive hormones.
In this case the "keyhole" is the action of becoming pregnant. The observation is that without ovulation pregnancy cannot occur so the "fix" is to block ovulation.
What is not "seen" through this keyhole is what else is involved in ovulation - the bigger picture, as it were - involving mate selection, which would appear to fundamentally affect the most basic genetic aspects of humanity. And then there is the impact on society as a whole - what is that impact, is it an improvement, and an improvement compared to what else...?
Similarly for "anti-depressants" - through the keyhole we see "sad people" who we think should be happy. We create drugs to make them happy but do not realize that perhaps there is a sound reason for their unhappiness and that it in fact serves a useful purpose.
Or for pain - we create oxycodone - but why do we have pain in the first place? Through the keyhole we see only someone sad - so we make them feel better. Did we foresee their subsequent addiction problem, its impact on their spouse, family or children? On their life? Did we trade two days of a sore back for twenty years of heartbreak?
The "delusion" here is that we humans always see enough of the problem to create a safe solution.
In fact we don't - we can't even predict the weather reliably.
Human arrogance always tells the inventor or scientist that the product or solution he is creating will be the "be all, end all" of whatever situation is being addressed. In the case of something like hormone contraceptives most of the scientists from the 1960's that created the technology are probably long dead.
Yet only as the generations affected by it pass do the true consequences reveal themselves.
The reason I am writing this today is that modern society is more and more focused only on the view through the keyhole. Solve what we see of the problem right now. "Look, see, isn't the solution wonderful!" - while in fact the solution is slowly and insidiously destroying those who embrace it.
The next iteration of "science" will not be to refine the existing scientific method but instead to step back and study the "keyhole dilemma": how do I make sure that what I am doing is not harmful and is in fact useful? How do I make sure my view reveals "enough" of the real problem? What is "enough" in order to go forward with one idea over another?
Just because you can does not mean that you should...
Wednesday, May 11, 2011
Failing Our Future
Here at the Lone Wolf we cover a lot of material every day to bring you this blog.
Sometimes it's funny and relevant (see Emma Weylin on "SlutWalks"), sometimes serious, sometimes just plain weird.
In any case looking over all of this brings on some different perspectives - ones you may not get from the "standard viewpoint" prescribed by whatever stereotype you choose to inhabit.
So over the last couple of days I have become aware of some interesting studies: The first is "The Tricky Chemistry of Attraction" from the WSJ.
This article discusses how the use of modern hormone-based contraceptives changes how women (as well as men) think about choosing a mate. The interesting part, at least to me, is that without these contraceptive hormones there is a documented tendency for couples to choose partners based on the greatest differences in their immune systems, i.e., the greatest natural genetic differences, resulting in the largest possible genetic diversity in the offspring.
The greatest genetic distance between a pair of parents in terms of immune system yields the most effective immune system for the child.
Not surprisingly, when natural hormones are replaced with human decision making there is a documented decline in successfully selecting a mate relative to this measure.
With these contraceptive hormones in play the decision process changes. Of course, most or all of this is not obvious to the participants and the studies involved use smell and pheromones.
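As a toy illustration of the "greatest immune-system difference" idea (the allele labels and candidates below are entirely made up for illustration - real HLA typing is vastly more complicated):

```python
# Invented example of picking the partner with the largest immune-system
# difference; the "HLA-n" labels and candidates are not real data.

me = {"HLA-1", "HLA-3", "HLA-5"}

candidates = {
    "partner_A": {"HLA-1", "HLA-3", "HLA-5"},   # immune twin - least diversity
    "partner_B": {"HLA-2", "HLA-4", "HLA-6"},   # no overlap - widest diversity
    "partner_C": {"HLA-1", "HLA-2", "HLA-3"},   # partial overlap
}

def immune_distance(a, b):
    """Count alleles not shared - a crude stand-in for genetic distance."""
    return len(a.symmetric_difference(b))

for name, alleles in candidates.items():
    print(name, "distance:", immune_distance(me, alleles))

best = max(candidates, key=lambda name: immune_distance(me, candidates[name]))
print("most dissimilar (broadest immune repertoire for offspring):", best)
```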
The next is "Performance Benefits of Depression: Sequential Decision Making in a Healthy Sample and a Clinically Depressed Sample" (also described here in Wired).
Here we see that the "clinically depressed" tend to spend a lot of time focused on their problems (surprise, surprise). However, in scenarios where making good decisions is important, that endless ruminating tends to produce measurably better results. It is as if it's a mechanism for the mind to study what went wrong so as to do better the next time.
Emma Weylin's article on SlutWalks is also relevant and we'll see why in a minute...
So, from the armchair quarterback's seat, what I see here is that western society has been heavily investing in "solutions" that seem superficially like important advances in medicine (and I choose medicine here as an example; there are other relevant fields as well), such as birth control pills and anti-depressants.
However, the solutions are focused not on an actual "problem" per se, i.e., contraceptives are not focused on stopping all human reproduction and anti-depressants are not focused on stopping all heavy duty, thrashing-in-the-bed-at-3AM sulking. No, these "advances" are focused on removing the superficial aspects of the problems (an unwanted pregnancy and making "Joe" happy at work).
However, as time goes by we see that there are perhaps reasons for things to be the way that they were in the first place:
- Strong sexual attractions and random couplings producing healthier (more able? more intelligent?) offspring.
- Depressed minds deeply focused on solving otherwise unsolvable problems.
And, in the case of Emma Weylin's SlutWalks, the notion of behaving disrespectfully toward yourself in exactly the way feminism decries: rather than being liberated from having to use your body to survive because of your gender, you wallow in doing exactly that.
So what is my point?
In societies without instantaneous communication and advanced technology progress is slow and deliberate because there is no way to "jump ahead" to an apparently obvious solution like creating an "anti-depressant."
Humanity itself is not designed for these kinds of changes, particularly at such high speed, where the benefits of the status quo are not well understood by those creating the supposed "solution".
(Imagine the ad for anti-depressant drugs featuring Vincent van Gogh. The unfinished "Starry Night" sits on a dimly lit easel in the far background. Van Gogh, on anti-depressants, is in the foreground, face lit by a video game, fingers on the controller, smiling, leaving the unfinished painting to the trash heap of history in his lust for killing one more Orc. The caption: "Why spend your day being depressed? Talk to your doctor about X and start enjoying life again...")
Perhaps there are so many depressed people in today's society because there is something wrong!
(Or, put another way, is shooting the town crier as he rides through the streets shouting "Fire!" because he is making too much noise for you to sleep really addressing the problem?)
Perhaps the divorce rates, the rates of unwanted births, and such are based on the fact that our decision making at the crucial moment of selecting a mate was dimmed by hormones.
(See "Sexual Devolution" here.)
Perhaps today's youth are less able because of their genetics!
(Show dogs, whose breeding has been controlled by humans for dozens and in some cases hundreds of years are notoriously unhealthy gene-wise with bad hips, genetic disease, etc... but they really do look cute.)
Perhaps in our rush to make our own world more comfortable and convenient for ourselves we are relegating our children to a future scrap heap of gene-based disease, stupidity and misery.
Consequences are tricky things and humans are notoriously bad at predicting the future.
Perhaps the rush to create the bestest, smallest, fastest widget is not the same as making wise societal decisions for the good of everyone...
Tuesday, May 10, 2011
Microsoft and Skype (Skyprosoft or Microskype?)
For many years I have been a Skype user.
For those that do not know, Skype is a program that allows you to create a free on-line account which can make and receive free calls to and from other Skype users. Skype was started in 2003 by Niklas Zennstrom and Janus Friis - a Swede and a Dane with a penchant for "file sharing". They had previously created Kazaa - a file sharing app which was synonymous with music piracy until LimeWire took over.
Skype runs on a magic peer-to-peer internet protocol that does not require any sort of "master servers" (which makes it different from things like Napster, which kept a big list in one place - easy to do and easy to be sued over). Skype digs around on its own to find "routes" for calls through other Skype users' computers. This means that it's quite robust in terms of call quality and service, and it means that Skype almost always seems able to complete the call (this was not the case in the beginning but it's vastly improved at this point).
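To give a rough feel for the decentralized idea, here is a toy sketch of my own - not Skype's actual protocol, which is proprietary and involves supernodes, NAT traversal, encryption and much more; the peer names and topology below are invented:

```python
from collections import deque

# Toy overlay network: each peer knows only its immediate neighbors.
# There is no central directory - a route is found by asking neighbors
# of neighbors.  (Invented names; not Skype's real algorithm.)
peers = {
    "alice": ["bob", "carol"],
    "bob":   ["alice", "dave"],
    "carol": ["alice", "dave"],
    "dave":  ["bob", "carol", "erin"],
    "erin":  ["dave"],
}

def find_route(src, dst):
    """Breadth-first search across the overlay for a call path."""
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in peers.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # callee offline or unreachable

print(find_route("alice", "erin"))  # ['alice', 'bob', 'dave', 'erin']
```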
Over the years it has steadily expanded to include a myriad of features including free video calls and video conference calls.
Today you can purchase "local numbers" in other countries, route calls to or from those numbers to Skype voicemail, call over WiFi from a smartphone or iPad, and so on. It's very flexible and fairly easy to use (though customer service is in Latvia, Lithuania or someplace like that and they have some odd ideas about how web sites should work).
There are also "Skype Phones" which you can buy at Walmart that connect directly to Skype.
Lexigraph uses a Vonage line for the main business number which forwards directly into Skype. Skype then makes the voicemail and/or calls available on my email, phone, and so on. We also use it when employees travel to foreign countries like Thailand, Japan, China or to Europe. WiFi calls are free and most technologically advanced countries have lots of free WiFi spots. Compared to using phone systems in the past this costs virtually nothing (I pay about $11 USD per month for my current setup).
The only thing Skype needs is a way for it to take over your main phone number. Once it can do that, that will be the end of my Vonage accounts. Right now you can only receive calls on a number they assign you - hence the Vonage number forwarded to Skype. (Vonage also has online voicemail but it's not as flexible.)
eBay purchased Skype from Zennstrom and Friis in 2005 for about $2.6 billion USD. Unfortunately I do not think it was a good fit and it was sold to some private equity people in 2009.
Today the WSJ announced that Microsoft is acquiring Skype for $8.5 billion USD.
This is probably not good news for us Skype users in the long run. If Microsoft doesn't fiddle around with how it works I can see things going well. However, if they start to meddle with the business model things could go downhill quickly.
Microsoft does not have a good reputation on the consumer end (remember the Zune?). While they tend to pour lots of money (perhaps too much) into this type of thing they often do not understand the business model or the customer mindset.
The advantage of Skype today is that it's Skype - free and independent of any corporate nonsense but its own.
If Microsoft leaves things alone it will be fine - but after spending $8.5 billion USD for a company without substantial profits I doubt that will remain the case.
If Microsoft were smart they would spend time and money on making Skype even more ubiquitous - free calls from anywhere to anywhere, 24x7.
But that would undermine their Windows Phone OS sales... (oops - maybe they forgot about their phone business).
Then again they could add GoToMeeting-like features that would allow better sharing of desktops and the like for their corporate users.
All in all I don't see that Microsoft will do much good for Skype. Skype is a quirky little deal that, for a small operation like mine, works very well. It's cheap and easy to use, and since my business relies on the internet anyway there is no concern if the internet is down - nothing will get done anyway.
(Though I do have Comcast business service and run two independent WiFi networks - one on the Comcast consumer side and one on the business side - both over the same wire. What I see is that the consumer side goes down a lot and flakes out while the business side keeps chugging along. Since they are on the same wire it's clearly a routing issue for Comcast. In any case Skype over Comcast business works quite well...)
My predictions for this deal are simple:
A) Nothing at all will happen for a year.
B) Microsoft will start to meddle at the fringes by trying to increase the reach of Skype until they realize that with Skype you don't really need a very smart phone if you live where WiFi is plentiful. Since Microsoft is in the phone business this will become a problem.
C) They will lose interest and it will languish because no one will want to overpay to buy it back.
If I were Microsoft I would make sure that Skype replaced all the existing GoToMeeting type stuff.
I would go to war with the smartphone/cellphone calling model, replacing it with Skype-like services - smartphones that know when WiFi is available and simply route calls away from the cellular network to WiFi.
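Something like this hypothetical sketch - the function, names and threshold are mine, invented for illustration, not any real phone API:

```python
# Hypothetical decision logic for the "route calls over WiFi when you can"
# idea above; the threshold and names are invented for illustration.

def choose_call_path(wifi_available, wifi_quality, cellular_available):
    """Prefer WiFi when present and usable; otherwise fall back to cellular."""
    GOOD_ENOUGH = 0.5  # arbitrary quality cutoff on a 0.0 - 1.0 scale
    if wifi_available and wifi_quality >= GOOD_ENOUGH:
        return "wifi"       # free Skype-style call
    if cellular_available:
        return "cellular"   # paid carrier minutes
    return "no service"

print(choose_call_path(True, 0.8, True))    # wifi
print(choose_call_path(True, 0.2, True))    # cellular (WiFi too weak)
print(choose_call_path(False, 0.0, False))  # no service
```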
We'll see what happens...