There are some other thoughts I want to throw out at this point regarding the stretching and shrinking. One is that the nature of the shrinking/stretching is where the real work gets done: do we stretch/shrink things evenly, and so forth. There is also the issue of A's and B's ability to produce given colors or ranges of colors - which is not addressed by a transform per se.
So let's go back to this example for a moment:
So let's think for a moment about what happens in the case where we transform the color from Source #2 to Source #1 (B to A). If we stretch B in the region indicated by #5 then our color response will be similar to A's. But there are two questions you should have been asking in the previous post: 1) what happens to the "left over" part of B (to the right of #5 on Source #2) when we stretch things, and 2) what happens if the stretching required to align A and B is not smooth? There is also a third question: how do we resolve color for a device that produces quantized output?
Let's first address the "left over" part of B (Source #2) above. Imagine that the left edge of #5, where it touches Source #2, moves right, stretching the area where #5 touches Source #2 until the angle of #5 is basically eliminated. (Alternatively, the bottom of #5 is stretched to the right, expanding the area where it touches Source #2 until the length of the bottom of #5 matches the top.) Below I will try to illustrate this:
So in terms of color the shading of Source #2 is now more like Source #1. But something else has happened - the distance between the tick marks has changed as well, and some of Source #2 has extended beyond the 0% point (the end of Source #1).
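To make the stretch concrete, here is a minimal sketch that treats it as a piecewise-linear remapping of percentage values. The control points are made up for illustration - in practice they would come from measuring both devices - and they simply say "the level that produces a given shade on Source #2 should move to the level that produces that shade on Source #1".

```python
import numpy as np

# Hypothetical control points, expressed as percentages of the input scale.
b_points = np.array([0.0, 40.0, 100.0])   # positions on Source #2 (B)
a_points = np.array([0.0, 55.0, 100.0])   # where those positions land on Source #1 (A)

def stretch_b_to_a(level):
    """Map a level on B's scale onto A's scale by piecewise-linear interpolation."""
    return np.interp(level, b_points, a_points)

print(stretch_b_to_a(20.0))   # levels inside the stretched region move accordingly
```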
For a minute let's forget about the fact we've extended #2 beyond the end of #1.
First let's requantize #2 by taking away the stretched tick marks and replacing them with tick marks that match the spacing before stretching, i.e., tick marks in the same positions as Source #1's.
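Here is a sketch of that requantization step, assuming the stretched Source #2 response is available as a simple lookup curve. The tick spacing and the stand-in curve are placeholders, not measured values.

```python
import numpy as np

# Source #1's tick marks - assumed here to sit every 10% of the scale.
ticks = np.linspace(0.0, 100.0, 11)

# A stand-in for the stretched Source #2 response: any callable mapping an
# input level (%) to an output shade would do here.
def stretched_b(level):
    return np.interp(level, [0.0, 55.0, 100.0], [0.0, 40.0, 100.0])

# Requantize: throw away the stretched tick marks and simply read the
# stretched response at Source #1's tick positions instead.
requantized_b = np.array([stretched_b(t) for t in ticks])
print(requantized_b)
```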
So you can now see that Source #2 does not "line up" with Source #1 any more - but that's okay.
So if we now compare each section of Source #1 and Source #2, and re-stretch and re-shrink sections of Source #2, it's easy to see that eventually we can get them to match. The lines represent the points where the color is quantized, so if we quantize the color down to the point where the user cannot easily distinguish between adjacent quantized colors, we have matched the sources as closely as we can.
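One way to picture that re-stretch/re-shrink loop as code is below. It is only a sketch under the assumption that "match" means each section's shade difference falls below a small visual tolerance; the tolerance, the proportional nudge, and the curve are all made up for illustration.

```python
import numpy as np

def match_sections(a_shades, b_curve, b_positions, tol=0.5, max_iter=100):
    """Nudge B's sample positions until the shade in each section is within
    tol of A's shade at the same tick; b_curve maps position (%) -> shade."""
    positions = np.array(b_positions, dtype=float)
    for _ in range(max_iter):
        diffs = a_shades - b_curve(positions)
        if np.all(np.abs(diffs) < tol):              # differences no longer visible
            break
        positions += 0.5 * diffs                      # proportional re-stretch/re-shrink
        positions = np.clip(positions, 0.0, 100.0)    # stay inside the producible range
    return positions

# Example: a linear Source #1 compared against an assumed Source #2 curve.
ticks = np.linspace(0.0, 100.0, 11)
b_curve = lambda p: np.interp(p, [0.0, 55.0, 100.0], [0.0, 40.0, 100.0])
print(match_sections(ticks, b_curve, ticks))
```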
Note - this is not the same as matching color with an ICC profile or with a calibrated color system. That's not the point of this process, and those systems cannot address the full requirements of what we will cover here. This system's success requires that the devices be calibrated to start with. What we are doing here is extending the notion of color matching beyond what that type of system can do.
And what about the left over part of Source #2? A couple of points on this. First, the left over part is not usable: if the last tick mark areas of Source #1 and Source #2 match up, then the left over part represents a region of the color space that doesn't make any sense in terms of Source #1. Second, while there may be color values represented in this area, we would never use them (because they make no sense), so we don't care about them. Bottom line - we can take our scissors and get rid of them.
Now if you think about it there is an analogous situation if we reverse Source #1 and Source #2 and think about shrinking Source #1 to match Source #2. Since the right of Source #1 is at 0% we would basically extend 0% to the left.
This works because the range of what's happening always stays between 100% and 0% for both devices, so we are never in the position of somehow "creating" colors that the devices cannot produce. We may be reducing the range of color for the device - but we can always produce the color.
Similarly - this works at the 100% end in the same way.
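In code terms that guarantee is nothing more than a clamp on whatever the stretch/shrink produces. A tiny sketch (the function name is hypothetical):

```python
def to_device(level_percent):
    """Clamp a transformed level so it never leaves the device's 0-100% range."""
    return min(100.0, max(0.0, level_percent))

print(to_device(112.5))   # anything stretched past 100% is simply cut off
print(to_device(-3.0))    # likewise anything extended past 0%
```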
Now we have to translate what we are doing back into the context of "industrial color management".
We have no choice but to limit the color spectrum of what we produce to the given device, i.e., we cannot "print outside the color gamut of the device". All we are doing is "moving the gamut around" to best match the two devices.
At this point you will probably say "so what - this is just ICC profile management" and, given what we have covered so far, you'd be right - at least up to a point. The difference is that we can parameterize these operations, i.e., make them automatic and driven by data. Secondly, the ultimate expression of this is designed for non-color production people. As I pointed out earlier, the economics of this require that color be handled by regular operational personnel.
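To give a feel for what "parameterized and driven by data" could mean in practice, here is a sketch in which the whole device-to-device transform is reduced to a small table of control points that an operator or an automated job loads and applies. The file format and field names are hypothetical.

```python
import json
import numpy as np

def load_transform(path):
    """Load control points for a Source #2 -> Source #1 transform from a JSON file,
    e.g. {"b": [0, 40, 100], "a": [0, 55, 100]}."""
    with open(path) as f:
        table = json.load(f)
    return np.array(table["b"], dtype=float), np.array(table["a"], dtype=float)

def apply_transform(levels, b_points, a_points):
    """Apply the stretch/shrink and clamp the result to the producible range."""
    return np.clip(np.interp(levels, b_points, a_points), 0.0, 100.0)
```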