So as my Apple work continues I have had occasion to deal with a number of problematic issues involving Objective-C. I touched on the nil issues in "Apple Rage - Cocoa Suck."
However, at this point I need to complain a bit more - mostly on philosophical grounds.
First of all my perspective is somewhat different than what you might see on other blogs (here or here). These discuss the practicality of doing the following:
Foo * fred = nil;
[ fred doSomething ];
Now what you find out is that sending a message to nil is supposedly defined to do nothing. However, there appear to be some problems when you use the result of sending a message to nil to do something else, e.g., as in the following:
[ printer someSelector: [ fred doSomething ] ];
In this case we are sending the "value" of "[ fred doSomething ]" as the parameter for someSelector:.
Now from what I can read (see the official Apple documentation on the matter) the "value" of "[ fred doSomething ]" is also supposed to be nil - at least when the method returns an object; for scalar and struct return types the rules are different.
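To make this concrete, here is a minimal sketch of what messaging nil actually yields (the Foo class and its methods are hypothetical, and I'm assuming a modern runtime):

#import <Foundation/Foundation.h>

@interface Foo : NSObject
- (Foo *) doSomething;   // returns an object
- (int) count;           // returns a scalar
@end

@implementation Foo
- (Foo *) doSomething { return self; }
- (int) count { return 42; }
@end

int main( void )
{
    Foo * fred = nil;
    Foo * result = [ fred doSomething ];   // no error raised; result is nil
    int n = [ fred count ];                // scalar return comes back as 0
    NSLog( @"result=%@ n=%d", result, n ); // prints "result=(null) n=0"
    return 0;
}

Both messages quietly "succeed," which is precisely the behavior under discussion.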
Now all of this discussion is related to what you can "do" with the code, i.e., a discussion about what actually happens. But when you think about writing software you have to think about the number one issue: cost.
Since I am an old geezer I tend to think about this a lot more than many youngsters - particularly since I have customers who have had systems I designed, wrote, and installed running for upwards of a decade. Now you might think that software today would never last a decade - think of iPhone apps, games, and so on - but you would be wrong.
Many times the application might not last that long, but the software components it's built from, which may be substantial, do. For example, in the world of gaming a lot of work goes into creating libraries of functions to handle 3D-related issues, e.g., physics (so that when the character lets go of the gun it falls to the ground in a predictable way), color (red is still red), animation, and so on.
So there are several issues with the nil thing.
First and foremost is the basic idea that this behavior is fluid (reading the docs, the rules changed between OS X 10.4 and 10.5 in this regard). Now, if I write an app, say a medical app, I don't want someone to break my code merely by upgrading their computer. The fact that some amount of "discussion" can change behavior this fundamental in a commercial software release is troubling.
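The struct-return case is exactly where the rules shifted. A hedged illustration (the Sizer class is hypothetical, and the exact guarantees depend on the return type and the ABI):

#import <Foundation/Foundation.h>

@interface Sizer : NSObject
- (NSRect) bounds;   // a struct-valued return
@end

@implementation Sizer
- (NSRect) bounds { return NSMakeRect( 0, 0, 100, 100 ); }
@end

int main( void )
{
    Sizer * s = nil;
    NSRect r = [ s bounds ];   // struct-valued message to nil
    // On 10.4 the contents of r were undefined garbage; on 10.5 and
    // later the common cases are zero-filled. Same source, different
    // behavior depending on which OS the customer happens to run.
    NSLog( @"%f x %f", r.size.width, r.size.height );
    return 0;
}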
The example I like to use is that of the flap controller on a commercial airliner ('cause I'm a geezer and fiddled around with realtime software in the heyday of Boeing's push for reliability) or perhaps a "drive by wire" controller in a car. In either case, when I push the controller in the driver's compartment I expect the flap or brakes or steering to do what I intend. A lot of professional work goes into these sorts of systems to make sure that's exactly the case (see Toyota's vindication).
A pro writes reliable software. Doing this requires that the underlying system on which it's based be predictable and reliable. It's also nice if the code you write does not depend on things which are easy to get wrong - particularly if "wrong" is subtle.
A pro understands that testing and maintenance are 90% of the cost of software.
Imagine the engineers at Toyota. They wrote code to handle the braking system in a "fly-by-wire" mode (meaning that the brake pedal controls an input to a computer and the computer controls the actual braking).
Cars crashed. People died. Their software was NOT at fault.
I imagine that there was a lot of anguish (the sphinctometer was probably off the scale) in the engineering center where this system was developed when these problems initially came to light.
Another problem is that "expected behavior" is being masked by whatever part of the Objective-C runtime system is compensating for messages sent to nil.
Suppose I write code as follows:
[ [ Image loadFromFile: @"foo.png" ] display ];
Now what this would do is display the "foo.png" image. But what if "Image loadFromFile:" returns nil in the case of an error, i.e., the image is missing or bad? Since telling nil to display is okay, I now don't see the image and I don't know why. In fact, I can't even tell in the debugger what the problem is, because I can't step into the code for Image (assuming Image is some Apple system code).
Now in a large development effort I may have other people creating images, naming images, and so on. Suppose someone saves foo.png as a non-PNG file type not recognized by the Apple software (easy to do in something like Photoshop).
So if I don't see an image on the screen I have a lot of checking and rechecking to do all because things just assume that messaging nil is okay.
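What I end up doing is writing the explicit check myself - a sketch, reusing the hypothetical Image API from the example above:

Image * img = [ Image loadFromFile: @"foo.png" ];
if ( img == nil )
{
    // Fail loudly at the point of the error instead of silently
    // displaying nothing and hunting for the cause later.
    NSLog( @"loadFromFile: returned nil for foo.png" );
    return;
}
[ img display ];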
The problem, of course, is that all that time is expensive. Expensive when the software is being developed and even more expensive if the fix has to be dealt with after the software is deployed.
Sadly, Microsoft and its .NET framework have gotten this part right: invoke a method on a null reference there and you get a NullReferenceException immediately, at the point of the error.
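If I want that fail-fast behavior in Objective-C I have to build it myself. A sketch, with a helper of my own invention (not an Apple API):

static id RequireNonNil( id obj, NSString * what )
{
    if ( obj == nil )
        [ NSException raise: NSInvalidArgumentException
                     format: @"%@ is unexpectedly nil", what ];
    return obj;
}

// Usage: the failure surfaces at the point of the error, .NET-style.
[ RequireNonNil( [ Image loadFromFile: @"foo.png" ], @"foo.png" ) display ];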