If you've read this far into the book, it's probably okay to let you in on the secret of object-oriented computing. The secret is: It's all a sham, a hoax, a cover-up. That's right, your computer does not really perform any processing with objects, no matter what their orientation. The CPU in your computer processes data and logic statements the old-fashioned way: one step at a time, moving through specific areas in memory as directed by the logic, manipulating individual values and bits according to those same logic statements. It doesn't see data as collective objects; it sees only bits and bytes.
One moment, I've just been handed this important news bulletin. It reads, "Don't be such a geek, Tim. It's not the computer doing the object-oriented stuff, it's the programmer." Oh, sorry about that. But what I said before still stands: The final code as executed by your CPU isn't any more object-oriented than old MS-DOS code. But object-oriented language compilers provide the illusion that OOP is built into the computer. You design your code and data in the form of objects, and the compiler takes it from there. It reorganizes your code and data, adds some extra code to do the simulated-OOP magic, and bundles it all up in an EXE file. You could write any OOP program using ordinary procedural languages, or even assembly language. But applications that focus on data can often be written much more efficiently using OOP development practices.
The core of object-oriented programming is, of course, the object. An object is a person, place, or thing. Wait a minute, that's a noun. An object is like a noun. Objects are computer data-and-logic constructs that symbolize real-world entities, such as people, places, or things. You can have objects that represent people, employees, dogs, sea otters, houses, file cabinets, computers, strands of DNA, galaxies, pictures, word-processing documents, calculators, office supplies, books, soap opera characters, space invaders, pizza slices, majestic self-amortizing canals, plantations of ripening tea, a few of my favorite things, and sand.
Objects provide a convenient software means to describe and manage the data associated with one of these real-world objects. For instance, if you had a set of objects representing DVDs in your home video collection, the object could manage features of the DVD, such as its title, the actors performing in the content, the length of the video in minutes, whether the DVD was damaged or scratched, its cost, and so on. If you connected your application to the DVD-ROM player in your system, your object could even include a "play" feature that (assuming the DVD was in the drive) would begin to play the movie, possibly from a timed starting position or DVD "chapter."
Objects work well because of their ability to simulate the features of real-world objects through software development means. They do this through the four key attributes of objects: abstraction, encapsulation, inheritance, and polymorphism.
Throughout this chapter, the term "object" usually refers to an instance of something, a specific in-memory use of the defined element: an instance with its own set of data, not just its definition or design. The term "class" refers to the design and source code of the object, comprising its implementation.
Abstraction refers to an object's limited view of a real-world object. Like an abstract painting, an abstracted object presents just the basic essentials of its real-world equivalent (see Figure 8-1).
Figure 8-1. Actually, the one on the left is kind of abstract, too
Objects can't perfectly represent real-world counterparts. Instead, they implement data storage and processes on just those elements of the real-world counterpart that are important for the application. Software isn't the only thing that requires abstraction. Your medical chart at your doctor's office is an abstraction of your total physical health. When you buy a new house, the house inspector's report is an abstraction of the actual condition of the building. Even the thermometer in your back yard is an abstraction; it cannot accurately communicate all of the minor temperature variations that exist just around the flask of mercury. Instead, it gathers the information it can, and communicates a single numeric result.
All of these abstract tools record, act on, or communicate just the essential information they were designed to manage. A software object, in a similar way, only stores, acts on, or communicates essential information about its real-world counterpart. For instance, if you were designing an object that monitored the condition of a building, you might record just those elements that relate to the building's condition.
Although a building would also have color, a number of doors and windows, and a height, these elements may not be important for the application, and therefore would not be part of the abstraction. Those values that are contained within the object are called properties. Any processing rules or calculations contained within the object that act on the properties (or other supplied internal or external data) are known as methods. Taken together, methods and properties make up the members of the object.
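In Visual Basic terms, the members of such a building object might look something like the following sketch. The class and member names here are invented for illustration:

```vb
Public Class Building
    Private _yearBuilt As Integer

    ' A property: data stored and exposed by the object.
    Public Property YearBuilt() As Integer
        Get
            Return _yearBuilt
        End Get
        Set(ByVal value As Integer)
            _yearBuilt = value
        End Set
    End Property

    ' A method: processing logic that acts on the property data.
    Public Function Age(ByVal currentYear As Integer) As Integer
        Return currentYear - _yearBuilt
    End Function
End Class
```

Together, the YearBuilt property and the Age method make up the (public) members of the Building class.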
The great advantage of software is that a user can perform a lot of complex and time-consuming work quickly and easily. Actually, the software takes care of the speed and the complexity on behalf of the user, and in many cases, the user doesn't even care how the work is being done. "Those computers are just so baffling; I don't know and I don't care how they work as long as they give me the results I need" is a common statement heard in management meetings. And it's a realistic statement too, because the computer has encapsulated the necessary data and processing logic to accomplish the desired tasks.
Encapsulation carries with it the idea of interfaces. Although a computer may contain a lot of useful logic and data, if there were no way to interact with that logic or data, the computer would basically be a useless lump of plastic and silicon. Interfaces provide the means to interact with the internals of an object; they supply highly controlled entry and exit points into the data and processing routines contained within the object. To a consumer of the object, it's really irrelevant how the object does its work internally, as long as it produces the results you expect through its publicly exposed interfaces.
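Picking up the back-yard thermometer from earlier, here is a minimal sketch of encapsulation in Visual Basic. The Thermometer class and its members are invented for illustration; the point is that the internal storage is private, and consumers interact only through the public interface:

```vb
Public Class Thermometer
    ' Internal implementation detail, invisible to consumers.
    Private _celsius As Double

    Public Sub Record(ByVal celsius As Double)
        _celsius = celsius
    End Sub

    ' Consumers neither know nor care that the reading is
    ' stored internally in Celsius.
    Public Function Fahrenheit() As Double
        Return _celsius * 9 / 5 + 32
    End Function
End Class
```

If the class were later rewritten to store its reading in Kelvin, no consuming code would need to change, as long as Record and Fahrenheit kept working as advertised.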
Using the computer as an example, the various interfaces include (among other things) the keyboard, display, mouse, power connector, USB and 1394 ports, speakers, microphone jack, and the power button. Often, the things I connect to these interfaces are also black boxes, encapsulations with well-defined public interfaces. A printer is a mystery to me. How the printer driver can send commands down the USB cable, and eventually squirt ink onto 24-pound paper is just inexplicable; but I don't know and I don't care how it really works, as long as it does work.
Inheritance in .NET isn't like inheritance in real life; no one has to die before it works. But as in real life, inheritance defines a relationship between two different objects. Specifically, it defines how one object is descended from another.
The original class in the object relationship is called the base class. It includes various and sundry interface members, as well as internal implementation details. A derived class is defined using the base class as the starting point. Derived classes inherit the features of the base class. By default, any publicly exposed members of the base class automatically become publicly exposed members of the derived class, including the implementation. A derived class may choose to override one, some, or all of these members, providing its own distinct or supplementary implementation details.
Derived classes often provide additional details specific to a subset of the base class. For instance, a base class that defines animals would include interfaces for the common name, Latin species name, number of legs, and other properties common to all animals. Derived classes would then enhance the features of the base class, but only for a subset of animals. A mammal class might define the gestation time for birthing young, whereas a parallel avian class could define the diameter of an egg. Both mammal and avian would still retain the name, species name, and leg count properties from the base animal class. An instance of avian would be an animal; an instance of mammal would be an animal. However, an instance of avian would not be a mammal. Also, a generic animal instance could be treated as an avian only if it was originally created as an avian.
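In Visual Basic, this animal relationship could be sketched as follows. The class and member names are illustrative only:

```vb
Public Class Animal
    Public CommonName As String
    Public SpeciesName As String
    Public LegCount As Integer
End Class

Public Class Mammal
    Inherits Animal
    ' Mammal adds detail meaningful only for mammals...
    Public GestationDays As Integer
End Class

Public Class Avian
    Inherits Animal
    ' ...while Avian adds detail meaningful only for birds.
    Public EggDiameter As Double
End Class
```

Because every Avian is an Animal, an assignment such as `Dim creature As Animal = New Avian()` is perfectly legal, but assigning an Avian instance to a Mammal variable is not.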
Even though a base and derived class have a relationship, implementation details that are private to the base class are not made available to the derived class. The derived class doesn't even know that those private members exist. A base class may include protected members that, although hidden from users of the class, are visible to the derived class. Any member defined as public in the base class is available to the derived class, and also to all users of the base class. (Visual Basic defines another level named "friend." Members marked as friend are available to all code in the same assembly, but not to code outside of the assembly. Public members can be used by code outside of the defining assembly.)
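The four access levels just described look like this in Visual Basic source code (the class and field names are made up for the example):

```vb
Public Class BaseClass
    Private hiddenDetail As Integer     ' Invisible outside this class,
                                        '   even to derived classes.
    Protected familyDetail As Integer   ' Visible here and in derived
                                        '   classes, but not to consumers.
    Friend assemblyDetail As Integer    ' Visible to all code in this
                                        '   assembly, but nowhere else.
    Public openDetail As Integer        ' Visible to everyone.
End Class
```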
Examples of inheritance do exist in the real world. A clock is a base object from which an alarm clock derives. The alarm clock exposes the public interfaces of a clock, and adds its own implementation-specific properties and methods. Other examples include a knife and its derived Swiss-army knife, a chair and its derived recliner, a table and its derived Periodic Table of the Elements.
The concepts introduced so far could be implemented using standard procedural programming languages. Although you can't do true inheritance in a non-OOP language like C, you can simulate it using flag fields. If a flag field named "type" in a non-OOP class-like structure were set to "mammal," you could enable use of certain mammal-specific fields. There are other ways to simulate these features, and it wouldn't be too difficult.
Polymorphism is a different avian altogether. "Polymorphism" means "many forms." Because a derived class can have its own (overridden) version of a base class member, if you treat a mammal object like a generic animal, there could be some confusion as to which version of a member should be used: the animal version or the mammal version. Polymorphism takes care of figuring all of this out while your program is running. It makes it possible for any code in your program to treat a derived instance as if it were an instance of its base class. This makes for great coding. If you have a routine that deals with animal objects, you can pass it objects of type animal, mammal, or avian, and it will still work. This type of polymorphism is known as subtyping polymorphism, but who cares what its name is.
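Here is a brief Visual Basic sketch of subtyping polymorphism in action, using the Overridable and Overrides keywords (the class names and the Describe method are invented for the example):

```vb
Public Class Animal
    Public Overridable Function Describe() As String
        Return "a generic animal"
    End Function
End Class

Public Class Mammal
    Inherits Animal
    Public Overrides Function Describe() As String
        Return "a mammal"
    End Function
End Class

Module Demo
    ' This routine knows only about Animal, yet the overridden
    ' Mammal version runs when a Mammal is passed in.
    Sub ShowAnimal(ByVal creature As Animal)
        Console.WriteLine(creature.Describe())
    End Sub

    Sub Main()
        ShowAnimal(New Animal())   ' Displays "a generic animal"
        ShowAnimal(New Mammal())   ' Displays "a mammal"
    End Sub
End Module
```

The decision of which Describe to call happens at run time, based on the actual type of the instance, not the type of the variable holding it.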
Another variation of polymorphism is overloading. Overloading allows a single class method (forget about derived classes for now) to have multiple forms, but still be considered as a single method. For instance, if you had a house object with a paint method (that would change the color of the house), you could have one paint method that accepted a single color (paint the house all one color) and another paint method that accepted two colors (main color plus a trim color). When these methods are overloaded in a single class, the compiler determines which version to call based on the data you include in the call to the method.
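The house-painting example might look like this in Visual Basic, using the Overloads keyword (the House class and its Paint methods are hypothetical):

```vb
Public Class House
    ' One color: paint the whole house that color.
    Public Overloads Sub Paint(ByVal mainColor As String)
        ' ...painting logic...
    End Sub

    ' Two colors: a main color plus a trim color.
    Public Overloads Sub Paint(ByVal mainColor As String, _
            ByVal trimColor As String)
        ' ...painting logic...
    End Sub
End Class
```

A call to `someHouse.Paint("white")` uses the first version, while `someHouse.Paint("white", "green")` uses the second; the compiler chooses based on the arguments you supply.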
Interfaces and Implementation
OOP development differentiates between the public definition of a class, the code written to implement that class, and the resulting in-memory use of that class as an object. It's similar to how, at a restaurant, you differentiate between a menu, the cooking of your selection, and the actual food that appears at your table.