There are a number of services in the operating system that can return CGImages to your application. The most obvious source is Core Graphics, which offers a number of routines for creating CGImages from various data sources. However, in addition to Quartz, you can obtain CGImages from other operating system services. For example, QuickTime provides the routine GraphicsImportCreateCGImage, which can create a CGImage from a QuickTime Graphics importer.
The next few sections explore some of the Quartz 2D routines that your application can use to create images directly from the data sources that Core Graphics itself supports. The next chapter examines some handy techniques your application can use to import images from external sources, like image files, and in other data formats, like TIFF images.
Given the breadth of color spaces that Quartz 2D supports, CGImage has to work with quite a variety of pixel types. The different characteristics that describe pixel formats are very important when you want to create an image or find out more about an image you have obtained elsewhere. Before talking about creating CGImage objects, take a look at the characteristics that describe the pixels they contain.
Pixel Format Information
There are two general categories of information that describe the pixels of a CGImage. The first category describes the color characteristics of the pixels. This information is related to the color space and alpha channel. The computer needs to know how many color components the application will use to represent each pixel, and it needs to know which of those components represent which channel. The second category describes the actual bit- and byte-level characteristics of each pixel. This information describes how the pixels are arranged in memory. For example, a 32 bit-per-pixel ARGB image will use one byte for each color channel, while a 16 bit-per-pixel ARGB image might use only five bits for each color channel. The computer needs to know all of this information to understand the pixels of an image completely.
Pixel Color Space
All of the color samples, and therefore the pixels, in a CGImage come from the same color space. Correspondingly, the color space of an image defines the number of components that go into each pixel. For example, if an image lives in an RGB color space, each pixel will have at least three color components. CMYK images, in contrast, require four color components per pixel.
If you are creating an image from your own data, you will have to tell the computer what color space the pixels are coming from. You represent that color space using an instance of the CGColorSpace opaque data type. If your code already has a CGImage and wants to know how many color channels each pixel of the image requires, you can retrieve the CGColorSpace object using CGImageGetColorSpace. You can then ask the color space for the number of components in each color sample using the CGColorSpaceGetNumberOfComponents routine.
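A sketch of how these two routines fit together, assuming you already have a CGImageRef in hand (the function name is illustrative, not part of the Quartz API):

```c
#include <ApplicationServices/ApplicationServices.h>

/* Report how many color components each pixel of an image carries,
   e.g. 3 for an RGB image, 4 for a CMYK image. The color space
   returned by CGImageGetColorSpace is not owned by the caller, so
   no release is needed here. */
size_t ColorComponentsPerPixel(CGImageRef image)
{
    CGColorSpaceRef colorSpace = CGImageGetColorSpace(image);
    return CGColorSpaceGetNumberOfComponents(colorSpace);
}
```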
Pixel Alpha Channels
In addition to the color channel information, Mac OS X also supports alpha channel transparency. One option for your pixel data is that it carries no alpha information at all. Even when the image data does not contain an alpha channel, the pixels may still contain an additional component that the computer needs to ignore.
If an image does contain alpha information, then each pixel requires an additional component that carries the alpha information. Quartz 2D will need to know two other important bits of information about how the alpha channel relates to the color channels. From a byte-ordering standpoint, the library has to know where the alpha channel is found in relation to the color channels. CGImage supports pixels with the alpha channel as either the first or last component of each pixel. Quartz 2D will also need to know whether the components in the color channels have already been multiplied by the alpha value.
Quartz 2D collects these two characteristics of the alpha channel in an enumeration called CGImageAlphaInfo. The names of the values in this enumeration are very descriptive. For example, kCGImageAlphaLast indicates that the pixel format contains an alpha channel and that it is the last component of each pixel. The value kCGImageAlphaNoneSkipFirst indicates that the image does not contain alpha information but that the computer should skip the first component of each pixel anyway.
Pixel Memory Layout
The other important characteristic of images is the way that the pixel channels are organized in memory. We've just seen a bit of this information in the discussion of the different alpha channel options, where the CGImageAlphaInfo enumeration tells us whether the alpha channel is the first component in a pixel or the last. There are additional features of the pixels that help a CGImage extract color information from its pixel data.
The first such attribute is a value that tells how many bits the pixel data uses to represent the value of each color component. For example, a popular way of encoding RGB data in 16 bits is to use five bits of data for each color channel. With five bits per channel and three channels, this RGB pixel format uses only 15 bits for color information. The remaining bit is ignored, so each pixel occupies 16 bits.
To help the library identify situations like this, where bits might be ignored, the second piece of information Quartz 2D likes to know is the number of bits that go into a single pixel.
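The 16-bit, five-bits-per-channel layout just described can be made concrete with a small, platform-independent helper. This is illustrative arithmetic, not part of the Quartz API; an xRGB 1555 layout (one ignored bit, then red, green, and blue) is assumed:

```c
#include <stdint.h>

/* Unpack one 16-bit pixel laid out as xRRRRRGGGGGBBBBB:
   one ignored bit followed by three 5-bit color channels.
   (A hypothetical helper for illustration only.) */
static void unpack_xrgb1555(uint16_t pixel,
                            unsigned *r, unsigned *g, unsigned *b)
{
    *r = (pixel >> 10) & 0x1F;  /* bits 10..14 */
    *g = (pixel >> 5)  & 0x1F;  /* bits 5..9   */
    *b = pixel         & 0x1F;  /* bits 0..4   */
}
```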
A third important aspect of how pixel data is organized is the number of bytes in a single row of the image (commonly called rowBytes). The computer will often organize image data so that the number of bytes in one row of the image is wider than its pixel data alone would suggest. This is often done for performance reasons. For example, the registers of the AltiVec vector unit work best with data that are aligned to 16-byte (128-bit) boundaries, that is, addresses whose value is an even multiple of 16. The AltiVec processor, therefore, will find it easiest to access the pixels in a row if each row starts on a 16-byte boundary. To help ensure that this is the case, the computer may pad each row of an image to a multiple of 16 bytes so that the AltiVec processor can access the rows as efficiently as possible. This is just one example of a reason the image data might be padded. The important point to realize is that an image may have a larger value for its rowBytes than the pixel layout alone would indicate.
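A minimal sketch of the kind of padding calculation an allocator might perform, assuming a 16-byte alignment target (Quartz itself does not require this padding; the helper name is hypothetical):

```c
#include <stddef.h>

/* Compute a rowBytes value rounded up to the next 16-byte boundary,
   as an allocator concerned with AltiVec alignment might do.
   (Illustrative only; other padding schemes are possible.) */
static size_t padded_row_bytes(size_t width, size_t bytesPerPixel)
{
    size_t rowBytes = width * bytesPerPixel;
    return (rowBytes + 15) & ~(size_t)15;  /* round up to multiple of 16 */
}
```

Note that the padded value is what your code should pass as rowBytes when describing the image, even though the rightmost bytes of each row carry no pixel data.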
Valid Pixel Types
Taking into account all the pixel formatting options we've just discussed (alpha information, bits per component, bits per pixel, rowBytes, and the color space), you might imagine that Quartz has a lot of flexibility when working with different image types. However, the system does not support every combination of the different variables. In practice, this is rarely a problem, as CGImage does support creating images from popular image formats like those shown in Table 8.1.
Image masks do not contain any color information, of course, nor do they include alpha information. In spite of this, masks have very similar characteristics to images. The pixels in an image mask have a single component value that represents the mask value for that pixel. Most image masks use 1, 2, 4, or 8 bits per component. Usually, with an image mask, the number of bits per component and the number of bits per pixel will match. Image masks also have a rowBytes value, which enjoys the same oddities as the rowBytes field in images.
Image Data Sources
This chapter has covered the way that the application describes pixel information to the computer, but no time has been spent on where the pixel data comes from. To retrieve its pixel information, a CGImage works with a class known as a data provider. A data provider is little more than an object that encapsulates a bundle of bytes and returns those bytes on demand. Core Graphics represents data providers using the opaque data type CGDataProvider. Your application can create a data provider in a number of ways.
The simplest data provider is one that provides access to a memory buffer that your application owns. When working with images, this memory buffer is, more likely than not, a pixel buffer. However, when working with CGImages, that is not your only choice. Core Graphics also allows you to work with blocks of memory that contain compressed image data in JPEG or PNG format.
Regardless of what the block of memory contains, you create the data provider the same way using the routine CGDataProviderCreateWithData. This routine takes a pointer to the data buffer and an integer expressing the size of the memory block. The routine also takes a pointer to a callback routine. When the CGDataProvider is finished using the memory buffer, it calls this routine, giving your application the opportunity to release that buffer. In addition to the callback pointer, CGDataProviderCreateWithData accepts a pointer value that the computer will pass to the callback. If your application does not want to use the callback, you can simply pass NULL for both of these parameters.
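A sketch of this pattern, assuming the buffer was allocated with malloc and should be freed when the provider is done with it (the function names are illustrative):

```c
#include <stdlib.h>
#include <ApplicationServices/ApplicationServices.h>

/* Release callback: called when the data provider no longer needs
   the buffer. The info pointer is the value passed as the first
   argument to CGDataProviderCreateWithData (unused here). */
static void ReleasePixelBuffer(void *info, const void *data, size_t size)
{
    free((void *)data);
}

/* Wrap a malloc'd buffer of pixel data in a data provider that
   frees the buffer when the provider is finished with it. */
static CGDataProviderRef CreateProviderForBuffer(void *buffer, size_t size)
{
    return CGDataProviderCreateWithData(NULL, buffer, size,
                                        ReleasePixelBuffer);
}
```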
Image files are another popular source of pixel data. If you want to create a data provider that provides CGImage with the contents of a file, you can use the routine CGDataProviderCreateWithURL. In most cases you will pass a local file URL to this routine; however, you can also pass network URLs. If the operating system is able to resolve that URL and extract data from it, then it can create a CGImage from that network data. However, network access can be a complex process, subject to network outages and the like. Because Core Graphics doesn't have any opportunity to return network errors, there are better mechanisms in Mac OS X for retrieving remote data.
A third technique for creating a data provider allows your application to return pixel data through callback routines. Your application can create a structure containing routine pointers, and the computer will call those routines in response to data requests sent to the CGDataProvider object. The Quartz API allows you to create two different types of callback data providers. The simpler kind is one that reads a stream of data sequentially. If your application can skip around in the data set and access the data randomly, then you can instead create a direct access data provider. The direct access and stream-based data providers have slightly different callback structures.
Scaling Pixel Data: The Decode Array
While the pixel data as presented by the data provider should be immutable, you can ask the computer to scale the pixel values as it draws the image by adding a decode array to your image.
The decode array is simply an array of floating point values. Although your application constructs it as a one-dimensional array, the computer interprets those values in pairs. The decode array for the image should contain the same number of value pairs as there are color channels in the image (plus one for the alpha channel if the image data contains it).
Each pair of values in the decode array specifies a valid range for the pixel values of the corresponding color channel. For example, if your decode array contains the values (0.5, 0.8) for the red channel of an RGB image, then as the computer takes each pixel from the data provider, it will scale the red value for that particular pixel so that it lies between 0.5 and 0.8 in the representation of that image on screen.
Core Graphics also allows you to supply a decode array for image masks. Using a decode array with a mask allows you to scale the mask value just as you scale color channel values using the decode array in color images.
Your application can use the decode array to achieve a number of image effects. For example, you can use the decode array to remove the alpha channel of an image by using (0, 0) as the value pair for the alpha channel value of the image. If you want to invert an image, you can pass (1, 0) as the value pair for each channel. When the computer maps the color channels through the decode array, it will invert each color channel.
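The mapping a decode pair performs can be sketched with a bit of plain C. This illustrates the linear scaling described above; the helper is hypothetical and not part of the Quartz API:

```c
/* Apply one decode-array pair (dMin, dMax) to a raw component value:
   a raw value of 0 maps to dMin, and the maximum raw value maps to
   dMax, with a linear ramp in between. (Illustration of the math
   only, not Core Graphics code.) */
static float apply_decode(unsigned raw, unsigned maxValue,
                          float dMin, float dMax)
{
    return dMin + ((float)raw / (float)maxValue) * (dMax - dMin);
}
```

With the inverting pair (1, 0), for instance, a raw 8-bit value of 0 maps to 1.0 and a raw value of 255 maps to 0.0, which is exactly the channel inversion described above.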
The decode array is an intrinsic part of the CGImage object. If you want to use a different decode array with your pixel data, you will have to create another image with the alternate decode array.
Creating an Image from a Pixel Buffer
Creating an image from a pixel buffer in memory is probably one of the most popular techniques. Quartz 2D supplies the routine CGImageCreate for this purpose. This routine takes quite a few parameters, but you should not let this put you off. The sample code snippet in Listing 8.1 illustrates a simple application of this routine. The code in this listing is used to explore the basic technique, and you can modify it to support your own particular pixel formats as needed.
Listing 8.1. Creating an Image from a Pixel Buffer in Memory
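A sketch of such a routine, consistent with the description that follows (the function name is illustrative, and error handling is omitted for brevity):

```c
#include <ApplicationServices/ApplicationServices.h>

/* Create a CGImage from a buffer of 32 bit-per-pixel, premultiplied
   ARGB pixel data. The caller retains ownership of the buffer and
   must keep it alive until the returned image is released. */
CGImageRef CreateImageFromARGBData(void *pixelData,
                                   size_t width, size_t height,
                                   size_t bytesPerRow)
{
    /* No release callback: some external agent manages the buffer. */
    CGDataProviderRef provider = CGDataProviderCreateWithData(
        NULL, pixelData, bytesPerRow * height, NULL);
    CGColorSpaceRef colorSpace =
        CGColorSpaceCreateWithName(kCGColorSpaceGenericRGB);

    CGImageRef image = CGImageCreate(
        width, height,
        8,                               /* bits per component */
        32,                              /* bits per pixel */
        bytesPerRow,
        colorSpace,
        kCGImageAlphaPremultipliedFirst, /* ARGB, premultiplied */
        provider,
        NULL,                            /* no decode array */
        true,                            /* allow interpolation */
        kCGRenderingIntentDefault);

    /* The image retains what it needs; release our references. */
    CGColorSpaceRelease(colorSpace);
    CGDataProviderRelease(provider);
    return image;
}
```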
Our sample code creates an image based on a very specific pixel format. This routine creates an image from 32 bit-per-pixel (8 bits per component), premultiplied, ARGB data. Your own code may have to support other pixel formats, but the basic technique is the same.
Our routine accepts the height and width of the image as parameters. It also accepts the pointer to the image data and the number of bytes in each image row. As noted earlier, in some cases the number of bytes in one row of an image may be more than the width of the image multiplied by the number of bytes in a pixel.
As with all images, we need to create a data provider to serve as the conduit between our data source (the pixel buffer) and the CGImage. The code sample uses CGDataProviderCreateWithData. This routine creates a simple data provider based on a pointer to a block of memory and the size of that block. In this case, we do not give the data provider any mechanism for releasing the data. This code would rely on some external agent to ensure that the pixel data exists until the image is released and that the computer frees the pixel data when it is no longer needed.
After creating the data provider, the code also creates a color space for your image. In this case, you know the image data to be RGB data and can assume that the color samples in the pixel data come from the generic RGB space on the operating system. If your image data comes from some other source, say a scanner, it would be much better to use the color space of that source rather than the generic space.
With the data provider and color space in hand, you can finally create an image based on the pixel data. CGImageCreate ties together all the different pieces and returns an instance of the CGImageRef opaque data type. Note that in this case we indicate our data is using premultiplied alpha values and that the alpha channel is the first component of each pixel. The code passes in NULL as the decode array, indicating the choice to use the color values from the pixel buffer directly without scaling them.
The last two parameters of the CGImageCreate call involve concepts that haven't been covered yet since they have more to do with drawing the image than creating it. The next to last parameter tells the operating system that the image can be drawn with interpolation. This will come into play if Quartz wants to draw the image with a transformation that requires it to resample the graphic and generate additional pixels.
The final parameter relates to the way the computer transforms the colors of the image when drawing the image to a device that uses a color space other than the one the application has created the image in. This parameter is called the rendering intent for the image. Color management documentation can offer more information about the different rendering intents. In this sample code the default value is requested.
After calling CGImageCreate, you have the return value for the routine; however, keeping in line with the ownership rules of CG objects, you need to release the auxiliary objects created within the routine. Release both the color space and the data provider that you created. If the image itself has retained those objects, then these release calls will not free the objects, but you still must release your claim on them.
This routine returns the CGImage that it has created and expects the calling routine to release that image when it has finished with it.
Creating from a QuickDraw PixMap
If you are a QuickDraw programmer and are trying to transition your code to Quartz, you may find that creating a CGImage from a PixMap is a useful transition strategy. Creating an image from a QuickDraw PixMap is no different than creating an image from any other block of pixel memory. The PixMap must have a pixel format that is compatible with Quartz 2D. That means, for example, that you cannot use 8 bit-per-pixel indexed color spaces with Quartz 2D.
One aspect of PixMaps that seems to trip up QuickDraw code from older Macintosh applications is the rowBytes field. Older versions of the Mac OS treated the high order bits of the rowBytes field as flags. Application code would often mask out these high order bits using either 0x7FFF or 0x3FFF as the bit mask.
Later versions of the Mac OS, including Mac OS X, include routines that you should use to extract the base address and rowBytes of a PixMap. These routines are GetPixBaseAddr and GetPixRowBytes respectively. If your code uses the old masking behavior for the PixMap's rowBytes field, you should update the code to use GetPixRowBytes instead.
When you are using a PixMap as the source of an image on Mac OS X, you should probably ensure that the PixMap's pixels are locked for as long as the CGImage exists. You lock a PixMap using the LockPixels routine and unlock it with UnlockPixels. On Mac OS X, there is a very small, but non-zero, chance that a PixMap's pixels will move over the lifetime of the image. LockPixels and UnlockPixels are not expensive routines, and calling them, potentially without cause, shouldn't be an egregious burden.
Listing 8.2 includes a sample routine that converts a 32-bit color QuickDraw PixMap into a Core Graphics image. This routine was taken from a code sample, called CGImageFromPixMap, that is included in the Chapter 8 code samples.
Listing 8.2. Creating a CGImage from a QuickDraw PixMap
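A sketch of what such a conversion might look like, assuming the caller has already locked the PixMap's pixels and keeps them locked for the lifetime of the image (this is a reconstruction consistent with the surrounding text, not the book's exact listing):

```c
#include <ApplicationServices/ApplicationServices.h>

/* Wrap the pixels of a locked, 32-bit QuickDraw PixMap in a CGImage.
   QuickDraw's 32-bit direct color format stores pixels as xRGB, so
   the first byte of each pixel is skipped. GetPixBaseAddr and
   GetPixRowBytes are used instead of reading the PixMap fields
   directly, as recommended in the text. */
CGImageRef CreateImageFromPixMap(PixMapHandle pixMap)
{
    Rect bounds     = (**pixMap).bounds;
    size_t width    = bounds.right - bounds.left;
    size_t height   = bounds.bottom - bounds.top;
    size_t rowBytes = GetPixRowBytes(pixMap);
    void *baseAddr  = GetPixBaseAddr(pixMap);

    CGDataProviderRef provider = CGDataProviderCreateWithData(
        NULL, baseAddr, rowBytes * height, NULL);
    CGColorSpaceRef colorSpace =
        CGColorSpaceCreateWithName(kCGColorSpaceGenericRGB);
    CGImageRef image = CGImageCreate(
        width, height, 8, 32, rowBytes, colorSpace,
        kCGImageAlphaNoneSkipFirst,     /* xRGB: ignore first byte */
        provider, NULL, true, kCGRenderingIntentDefault);

    CGColorSpaceRelease(colorSpace);
    CGDataProviderRelease(provider);
    return image;
}
```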
Creating from PNG and JPEG Data
If your application has access to PNG or JPEG data, then Core Graphics includes the ability to work with that data directly. Creating a CGImage from compressed data in either of these formats is as easy as creating an appropriate data provider and calling Quartz to create an image from that provider.
Because your application can create a data provider from a number of sources, you have a lot of options for the source of the compressed data. For example, if you have JPEG data in a block of memory owned by your application, you can use the same call to CGDataProviderCreateWithData used in the previous example.
The code sample in Listing 8.3 creates a CGImage from a PNG file included in the application's resources and demonstrates one potential implementation for this kind of routine.
Listing 8.3. Creating an Image from a PNG Resource
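A sketch of such a routine, consistent with the description that follows (the function name is illustrative; the resource name is passed without the ".png" extension):

```c
#include <ApplicationServices/ApplicationServices.h>

/* Create a CGImage from a PNG file in the application's Resources
   directory. Returns NULL if the resource cannot be found or the
   data cannot be read as PNG. */
CGImageRef CreateImageFromPNGResource(CFStringRef resourceName)
{
    CGImageRef image = NULL;
    CFURLRef url = CFBundleCopyResourceURL(
        CFBundleGetMainBundle(), resourceName, CFSTR("png"), NULL);
    if (url != NULL) {
        CGDataProviderRef provider = CGDataProviderCreateWithURL(url);
        if (provider != NULL) {
            image = CGImageCreateWithPNGDataProvider(
                provider,
                NULL,   /* no decode array */
                true,   /* allow interpolation */
                kCGRenderingIntentDefault);
            CGDataProviderRelease(provider);
        }
        CFRelease(url);
    }
    return image;
}
```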
Creating an image from PNG or JPEG data frees you from having to work with a lot of the details concerning the pixel format that are important when creating an image from a pixel buffer. These settings are taken from the JPEG or PNG data itself. This simplifies the sample code somewhat.
The code begins by asking the main bundle to return a URL to the PNG file in the Resources directory of the application package whose name matches the one passed in as a parameter to this routine. If the computer returns this URL successfully, you create a data provider from the URL and call CGImageCreateWithPNGDataProvider to create the actual CGImage instance. If this code were working with JPEG data instead, you could use the routine CGImageCreateWithJPEGDataProvider. As in the previous example, NULL is passed in for the decode array, true to allow the computer to use interpolation when drawing the image, and the default rendering intent is used for the image.
As in the previous code sample, the routine ends by releasing the auxiliary objects it has created and returns the CGImage it has created to the caller. The caller is responsible for releasing the CGImage after finishing with it.
Creating Image Masks
Creating image masks is very similar to creating images. The largest difference is that image masks do not require a color space and have a single component per pixel. The primary routine your application can use to create an image mask is CGImageMaskCreate. Aside from the missing color space, alpha information, and rendering intent parameters, this routine accepts arguments that parallel those passed to CGImageCreate.
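A minimal sketch of creating an 8-bit-deep mask from a memory buffer (the function name is illustrative; because no release callback is supplied, the buffer must outlive the mask):

```c
#include <ApplicationServices/ApplicationServices.h>

/* Create an image mask from a buffer of mask samples, one byte per
   pixel. Note there is no color space or alpha info parameter, as
   masks carry neither color nor alpha information. */
CGImageRef CreateMaskFromBuffer(void *maskData,
                                size_t width, size_t height,
                                size_t bytesPerRow)
{
    CGDataProviderRef provider = CGDataProviderCreateWithData(
        NULL, maskData, bytesPerRow * height, NULL);
    CGImageRef mask = CGImageMaskCreate(
        width, height,
        8,            /* bits per component */
        8,            /* bits per pixel */
        bytesPerRow,
        provider,
        NULL,         /* no decode array */
        false);       /* no interpolation */
    CGDataProviderRelease(provider);
    return mask;
}
```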
Creating from Another Image
Another popular task is creating an image based on the contents of another image. This can be helpful if, for example, you have cached a background for a view in an offscreen port and want to copy part of that image to the window as part of redrawing your view.
To perform such a task, you would like to take an existing image, the entire background, and create a new image with the same pixels as a rectangle in that larger image. Mac OS X 10.4, Tiger, provides a routine for just this purpose. On Tiger you can call CGImageCreateWithImageInRect passing in a source image and a rectangle. The computer extracts the portion of the image in the rectangle and creates a new image with those pixels as its contents.
Unfortunately, older versions of Mac OS X don't include this routine. However, a strategy that may be useful is to create an offscreen pixel map, draw the image into that map, and use a sub-rectangle of the pixel map to create a new image.
This technique suffers a bit in that it is not always possible to match the color settings of the original image exactly. For example, a JPEG image can be 24 bit-per-pixel RGB, but the operating system cannot create 24-bit RGB offscreen graphics contexts. However, your code may be able to use a pixel map with similar color characteristics (say 32-bit XRGB in this case) to create an image that very closely matches the color space of the original image.
If you do not need to create an independent CGImage, then you may be better off simply using context clipping or an image mask to select only that portion of the source image you are interested in drawing.
Another useful routine is CGImageCreateCopyWithColorSpace. This routine allows you to create a new image that is identical to the source image except that the color samples will be interpreted as having been taken from another color space. For this to work, the original image's color space and the color space passed to CGImageCreateCopyWithColorSpace must have the same number of components.
It is important to understand that this routine will not correct the image to the color space you pass it. Instead the routine uses the exact same image data and just changes the environment used to interpret those pixel values.
If you need to color correct an image from one color space to another, you have several choices. If Quartz 2D can create a bitmap graphics context for your desired color space, then you can create an offscreen bitmap and use CGContextDrawImage to draw into that context. Quartz will color correct the source image to the destination color space as part of that drawing operation. This technique is demonstrated with sample code shortly when discussing image drawing performance.
If Quartz 2D cannot create an offscreen bitmap that uses the color space you want, then you will have to revert to using ColorSync (or some other color management system) to correct the image data. Unfortunately this kind of operation is beyond the scope of this book.
Combining an Image and a Mask
If you want to combine an image and a mask into a new image, you can do so with the routine CGImageCreateWithMask. This routine accepts two parameters.
The first is a CGImageRef that represents the source image you want to mask. Just about any image will do, but it must not already have a mask or masking color applied to it.
The second parameter, also a CGImageRef, can either be an image mask or an image. If you pass in an image, the computer will reinterpret the source samples of the image as the alpha channel values for the resulting image. For this reason, the image should contain a single channel per pixel.
Ideally the second parameter, the masking image, should match the size of the source image in the first parameter. If the second image does not match the size of the first, then the computer will scale it so that their sizes do match.