4.8. Solaris Doors

Doors provide a facility for processes to issue procedure calls to functions in other processes running on the same system. Using the APIs, a process can become a door server, exporting a function through a door it creates with the door_create(3X) interface. Other processes can then invoke the procedure by issuing door_call(3X), specifying the correct door descriptor. Our goal here is not to provide a programmer's guide to doors but rather to focus on the kernel implementation, data structures, and algorithms. Some discussion of the APIs is, of course, necessary to keep things in context, but we suggest that you refer to the manual pages and to Stevens's book [35] to understand how to develop applications with doors.

The door APIs were first available in Solaris 2.6. Solaris ships with a shared object library, libdoor.so, that must be linked with applications that use the doors APIs. Table 4.10 describes the door APIs available in Solaris. During our coverage of doors, we refer to the interfaces as necessary for clarity.

Table 4.10. Solaris Doors Interfaces

door_create(3X)
    Creates a door. Called from a door server to associate a procedure
    within the program with a door descriptor. The door descriptor,
    returned by door_create(3X), is used by client programs that need
    to invoke the procedure.

door_revoke(3X)
    Revokes client access to the door. Can only be called by the server.

door_call(3X)
    Invokes a function exported as a door. Called from a client process.

door_return(3X)
    Returns from a door function. Typically used as the last function
    call in a routine exported as a door.

door_info(3X)
    Fetches information about a door.

door_server_create(3X)
    Specifies a door thread create function.

door_cred(3X)
    Fetches client credential information.

door_bind(3X)
    Associates the calling thread with a door thread pool.

door_unbind(3X)
    Removes current thread from door pool.


4.8.1. Doors Overview

Figure 4.6 illustrates broadly how doors provide an interprocess communication mechanism. The file abstraction used by doors is the means by which client kernel threads retrieve the proper door handle required to issue a door_call(3X). It is similar to the methodology employed when POSIX IPC facilities are used; a path name in the file system namespace is opened, and the returned file descriptor is passed as an argument in the door_call(3X) to call into the desired door. An argument structure, door_arg_t, is declared by the client code and used for passing arguments to the door server function being called. The address of the door_arg_t structure is passed as the second argument by the client in door_call(3X).

Figure 4.6. Solaris Doors
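
In code, the client-side sequence just described is brief. The following sketch is illustrative only: the path /tmp/mydoor, the request and reply formats, and the buffer sizes are assumptions for this example, not details from the text (on Solaris, door clients and servers link with -ldoor).

/*
 * Minimal door client sketch. /tmp/mydoor and the data formats
 * are assumed for illustration; error handling is abbreviated.
 */
#include <door.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int
main(void)
{
    door_arg_t darg;
    char request[] = "hello";
    char reply[128];
    int dfd;

    /* Open the file the server attached its door to with fattach(3C). */
    if ((dfd = open("/tmp/mydoor", O_RDONLY)) == -1) {
        perror("open");
        return (1);
    }

    /* Describe the arguments and the result buffer (door_arg_t). */
    darg.data_ptr = request;            /* data passed to the door function */
    darg.data_size = sizeof (request);
    darg.desc_ptr = NULL;               /* no door descriptors passed */
    darg.desc_num = 0;
    darg.rbuf = reply;                  /* results are returned here */
    darg.rsize = sizeof (reply);

    /* Invoke the exported function in the server process. */
    if (door_call(dfd, &darg) == -1) {
        perror("door_call");
        return (1);
    }

    /* On return, data_ptr and data_size describe the results. */
    (void) printf("server returned %ld bytes\n", (long)darg.data_size);
    (void) close(dfd);
    return (0);
}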


On the server side, a function defined in the process can be made available to external client processes by creation of a door (door_create(3X)). The server must also bind the door to a file in the file system namespace. This is done with fattach(3C), which binds a STREAMS-based or door file descriptor to a file system path name. Once the binding has been established, a client can issue an open to the path name and use the returned file descriptor in door_call(3X).
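
A matching server-side sketch follows; again, serv_proc and /tmp/mydoor are assumed names, and error handling is minimal.

/*
 * Minimal door server sketch: create a door and bind it into the
 * file system namespace with fattach(3C). Names are illustrative.
 */
#include <sys/types.h>
#include <door.h>
#include <fcntl.h>
#include <stdio.h>
#include <stropts.h>    /* fattach(3C) */
#include <unistd.h>

/* The function exported through the door. */
static void
serv_proc(void *cookie, char *argp, size_t arg_size,
    door_desc_t *dp, uint_t n_desc)
{
    char reply[] = "goodbye";

    /* Return results to the caller; does not return on success. */
    (void) door_return(reply, sizeof (reply), NULL, 0);
}

int
main(void)
{
    int dfd, fd;

    /* Create the door; dfd is the door descriptor. */
    if ((dfd = door_create(serv_proc, NULL, 0)) == -1) {
        perror("door_create");
        return (1);
    }

    /* Create a name in the file system and bind the door to it. */
    if ((fd = open("/tmp/mydoor", O_CREAT | O_RDWR, 0644)) == -1) {
        perror("open");
        return (1);
    }
    (void) close(fd);
    if (fattach(dfd, "/tmp/mydoor") == -1) {
        perror("fattach");
        return (1);
    }

    /* Door server threads service the calls; main just waits. */
    for (;;)
        (void) pause();
}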

4.8.2. Doors Implementation

Doors are implemented in the kernel as a pseudo file system, doorfs, which is loaded from the /kernel/sys directory during boot. Within a process, a door is referenced through its door descriptor, which is similar in form and function to a file descriptor, and, in fact, the allocation of a door descriptor in a process uses an available file descriptor slot.
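
Because a door descriptor occupies a normal file descriptor slot, door-aware interfaces simply take it where a file descriptor would appear. As a quick illustration, this hedged sketch uses door_info(3X) to ask which process serves a door attached at an assumed path:

/*
 * Sketch: inspect a door descriptor with door_info(3X).
 * The path /tmp/mydoor is an assumed example, not from the text.
 */
#include <door.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int
main(void)
{
    door_info_t info;
    int dfd;

    /* The door descriptor comes back from a plain open(2). */
    if ((dfd = open("/tmp/mydoor", O_RDONLY)) == -1) {
        perror("open");
        return (1);
    }
    if (door_info(dfd, &info) == -1) {
        perror("door_info");
        return (1);
    }
    (void) printf("door served by pid %ld, attributes 0x%x\n",
        (long)info.di_target, (unsigned int)info.di_attributes);
    (void) close(dfd);
    return (0);
}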

The major data structures required for doors support are illustrated in Figure 4.7. The two main structures are door_node, linked to the process structure with the p_door_list pointer, and door_data, linked to the door_node with the door_data pointer. A process can be a door server for multiple functions (multiple doors). Each call to door_create(3X) creates another door_node, which links to an existing door_node (if one already exists) through the door_list. door_data is created as part of the setup of a server thread during the create process, which we're about to walk through. door_data includes a door_arg structure that manages the argument list passed in door_call(3X) and a door descriptor structure (door_desc) used to pass door descriptors when a door function is called.

Figure 4.7. Solaris Doors Structures
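
To help keep the pointer chains in Figure 4.7 straight, here is a much-abridged C sketch of the two main structures. The fields shown are limited to those discussed in this section, and the stand-in types exist only so the fragment is self-contained; treat it as a map of the relationships, not as the doorfs source.

/* Stand-in types; the real definitions live in the kernel headers. */
typedef struct vnode { int v_type; /* VDOOR; also v_lock, v_cv, ... */ } vnode_t;
typedef struct proc proc_t;
typedef struct kthread kthread_t;
typedef struct door_arg { char *data_ptr; /* ... */ } door_arg_t;

/* One per door_create(); a process's doors chain through door_list. */
typedef struct door_node {
    vnode_t          door_vnode;   /* embedded vnode, at the top */
    proc_t           *door_target; /* server process for this door */
    struct door_node *door_list;   /* next door created by the process */
    void             (*door_pc)(); /* function exported by the door */
    int              door_flags;   /* attributes from door_create() */
    int              door_active;  /* count of active invocations */
} door_node_t;

/* Linked from a kernel thread's t_door pointer. */
typedef struct door_data {
    door_arg_t       d_args;       /* arguments from door_call() */
    struct door_node *d_active;    /* door being invoked (server side) */
    kthread_t        *d_caller;    /* client thread (server side) */
    int              d_error;      /* e.g., DOOR_WAIT in the client */
    int              d_noresults;  /* no results may be returned */
    /* door_desc fields for passing door descriptors omitted */
} door_data_t;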


To continue: A call to door_create(3X) enters the libdoor.so library door_create() entry point (as is the case with any library call). The kernel door_create() is invoked from the library and performs the following actions.

  1. Allocates kernel memory for door_node and initializes several fields of door_node and the door vnode (part of the door_node structure).

  2. Links the door_target field to the process structure of the calling kernel thread.

  3. Sets door_pc, a function pointer, to the address of the function being served by the door (the code that will execute when a client calls door_call(3X)).

  4. Sets door_flags as directed by the attributes passed by the caller.

  5. Initializes the vnode mutex lock (v_lock) and condition variable (v_cv). Initializes several other fields in the vnode to specify the vnode type (VDOOR) and references to the vnode operations and virtual file system (VFS) switch table entries of the doorfs file system.

  6. Adds the door_node to the process's door list (p_door_list) and allocates a file descriptor for the door descriptor by means of the kernel falloc() function, which allocates a file structure and user file descriptor.

  7. With the kernel door_create() now completed, the code returns to the libdoor.so door_create() code.

  8. The library code makes sure that the calling process has been linked with the Solaris threads library, libthread.so, and returns an error if it has not been.

    A door server requires linking with libthread.so because the door code uses the threads library interfaces to create and manage a pool of door server threads.

  9. The last thing the library-level door_create() code does is call thr_create(3T) to create a server thread for the door server, as an execution resource for calls into the function being exported by the door server.

  10. thr_create(3T) creates a detached, bound thread that executes the library door_create_func() routine, which disables cancellation of the current thread (pthread_setcancelstate(3T)) and enters the kernel door_return() code.

    door_return(3X) is part of the doors API and is typically called at the end of the function being exported by the door_create(3X) call.

  11. door_return(3X) returns processor control to the thread that issued door_call(3X) and causes the server thread to sleep, waiting for another invocation of the door function.

    When entered (remember, we're in the kernel now, not in the doors library), door_return() examines the calling thread's t_door pointer. If t_door is NULL, signifying that a door_data structure has not yet been allocated, door_return() allocates one and links it to the kernel thread's t_door pointer.

The next bit of code in door_return() applies to argument handling, return data, and other conditions that need to be dealt with when a kernel thread issues door_call(3X). We're still in the door create phase, so a bit later we'll revisit what happens in door_return() as a result of door_call(3X).

Continuing with the door create in the door_return() kernel function:

  1. The kernel door_release_server() code is called to place the current thread on the list of threads available to execute on behalf of door calls into the server.

  2. The kernel thread is linked to the process's p_server_thread link, and cv_broadcast() is done on the door condition variable, door_cv, causing any threads blocked in door_call(3X) to wake up.

    At this point, the door create is essentially completed.

  3. The code calls into the shuttle code (shuttle_swtch()) to put the kernel thread to sleep on a shuttle synchronization object; the thread is thus placed in a sleep state and enters the dispatcher through swtch().

We now digress slightly to explain shuttle synchronization objects. Typically, execution control flow is managed by the kernel dispatcher (see Chapter 5), using condition variables and sleep queues. Other synchronization primitives, such as mutex locks and reader/writer locks, are managed by turnstiles, an implementation of sleep queues that provides a priority inheritance mechanism.

Shuttle objects are a relatively new synchronization object (introduced in Solaris 2.5, when doors first shipped) that allows very fast transfer of control of a processor from one kernel thread to another without incurring the overhead of dispatcher queue searching and normal kernel thread processing. In the case of a door_call(), control can be transferred directly from the caller (the client, in this case) to a thread in the door server pool, which executes the door function on behalf of the caller. When the door function has completed, control is transferred directly back to the client (caller), all using the kernel shuttle interfaces to set thread state and to enter the dispatcher at the appropriate places. This direct transfer of processor control contributes significantly to the IPC performance attainable with doors. Shuttle objects are currently used only by the doors subsystem in Solaris.

Kernel threads sleeping on shuttle objects have a 0 value in their wait channel field (t_wchan) and a value of 1 in t_wchan0. The thread's t_sobj_ops (synchronization object operations table) pointer is set to the shuttle object's operations structure (shuttle_sops); the thread's state is, of course, TS_SLEEP, and the thread's T_WAKEABLE flag is set.

Getting back to door creation, we see the following.

  1. By default, one server thread is created, unless there are concurrent invocations, in which case a thread is created for each concurrent door call. The API allows programs to create their own separate, private pool of door threads with characteristics different from the default thread properties (see the sketch following this list).

  2. By default, the doors library creates a bound, detached thread with the default thread stack size and signal disposition.

This completes the creation of a door server. A server thread in the door pool is left sleeping on a shuttle object (the call to shuttle_swtch()), ready to execute the door function.
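
The private pool mechanism is driven through door_server_create(3X) and door_bind(3X). The sketch below assumes a door created with the DOOR_PRIVATE attribute; pool_start, pool_create, and the synchronization around private_door are illustrative. (The creation function can run before door_create(3X) returns, so the new thread must wait until the door descriptor is known.)

/*
 * Sketch of a private door server thread pool; names and
 * synchronization policy are assumptions for illustration.
 */
#include <sys/types.h>
#include <door.h>
#include <synch.h>
#include <thread.h>
#include <unistd.h>

static mutex_t pool_lock = DEFAULTMUTEX;
static cond_t pool_cv = DEFAULTCV;
static int private_door = -1;   /* set once door_create() returns */

/* Exported function; placeholder body for the sketch. */
static void
serv_proc(void *cookie, char *argp, size_t arg_size,
    door_desc_t *dp, uint_t n_desc)
{
    (void) door_return(NULL, 0, NULL, 0);
}

/* Start routine for a private-pool thread: bind, then wait in the kernel. */
static void *
pool_start(void *arg)
{
    /* Wait until door_create() has returned the door descriptor. */
    (void) mutex_lock(&pool_lock);
    while (private_door == -1)
        (void) cond_wait(&pool_cv, &pool_lock);
    (void) mutex_unlock(&pool_lock);

    (void) door_bind(private_door);        /* join this door's pool */
    (void) door_return(NULL, 0, NULL, 0);  /* sleep awaiting a call */
    return (NULL);                         /* not reached */
}

/* Replacement server-thread creation function, installed below. */
static void
pool_create(door_info_t *dip)
{
    /* Create threads only for doors created with DOOR_PRIVATE. */
    if (dip != NULL && (dip->di_attributes & DOOR_PRIVATE))
        (void) thr_create(NULL, 0, pool_start, NULL,
            THR_BOUND | THR_DETACHED, NULL);
}

int
main(void)
{
    (void) door_server_create(pool_create);

    (void) mutex_lock(&pool_lock);
    private_door = door_create(serv_proc, NULL, DOOR_PRIVATE);
    (void) cond_broadcast(&pool_cv);
    (void) mutex_unlock(&pool_lock);

    /* fattach(3C) the door and wait, as in the earlier sketch. */
    for (;;)
        (void) pause();
}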

Application code that creates a door to a function (becomes a door server) typically creates a file in the file system to which the door descriptor can be attached, using the standard open(2) and fattach(3C) APIs, to make the door more easily accessible to other processes.

The fattach(3C) API has traditionally been used for STREAMS code, where it is desirable to associate a STREAM or STREAMS-based pipe with a file in the file system namespace, for precisely the same reason one would associate a door descriptor with a file name: to make the descriptor easily accessible to other processes on the system so that application software can take advantage of the IPC mechanism. The door code thus builds on an existing solution to the problem of binding to a file name an object that does not meet the traditional definition of a file.

fattach(3C) is implemented with a pseudo file system called namefs, the name file system. namefs allows the mounting of file systems on nondirectory mount points, as opposed to the traditional mounting of a file system that requires the selected mount point to be a directory file. Currently, fattach(3C) is the only client application of namefs; it calls the mount(2) system call, passing namefs as the file system name character string and a pointer to a namefs file descriptor. The mount(2) system call enters the VFS switch table through the VFS_MOUNT macro and enters the namefs mount code, nm_mount().

With the door server in place, client processes are free to issue a door_call(3X) to invoke the exported server function.

  1. The kernel door_call() code (nothing happens at the doors library level in door_call()) allocates a door_data structure from kernel memory and links it to the t_door pointer in the calling kernel thread.

  2. If a pointer to an argument structure (door_arg) was passed in the door_call(3X), the arguments are copied from the passed structure in user space to the door_arg structure embedded in door_data.

  3. If no arguments were passed, the door_arg fields are zeroed and the d_noresults flag in door_data is set to specify that no results can be returned.

    The door_call(3X) API defines that a NULL argument pointer means no results can be returned. A lookup is performed on the passed door descriptor and returns a pointer to the door_node. Typically, file descriptor lookups return a vnode pointer. In this case, the vnode pointer and the door_node pointer are one and the same because the vnode is embedded in the door_node, located at the top of the structure.

  4. The kernel door_get_server() function retrieves a server kernel thread from the pool to execute the function.

  5. The thread is removed from the list of available server threads (p_server_threads) and changed from TS_SLEEP to TS_ONPROC state (this kernel thread was sleeping on a shuttle object, not sitting on a sleep queue).

  6. The arguments from the caller are copied to the server thread returned from door_get_server(). The door_active counter in the door_node is incremented, the calling (client) thread's d_error field (in door_data) is set to DOOR_WAIT, the door server thread's d_caller field (door_data structure for the server thread) is set to the client (caller), and a pointer to the door_node is set in the server thread's door_data d_active field.

    With the necessary data fields set up, control can now be transferred to the server thread; this transfer is done with a call to shuttle_resume().

  7. shuttle_resume() is passed a pointer to the server thread removed from the door pool.

Just to get back to the forest for a moment (in case you're lost among the trees), we're in shuttle_resume() as a result of a kernel thread issuing door_call(3X). The door_call() kernel code up to this point essentially allocated or initialized the data structures necessary for the server thread to execute the exported function on behalf of the caller. The shuttle_resume() function is entered from door_call(), so the kernel thread now executing in shuttle_resume() is the door client. So, what needs to happen is really pretty simple (relatively speaking): the server thread, which was passed to shuttle_resume() as an argument, needs to get control of the processor, and the current thread executing the shuttle_resume() code needs to be put to sleep on a shuttle object, since the current thread and the door client thread are one and the same.

  1. shuttle_resume() sets up the current thread to sleep on a shuttle object in the same manner described previously (t_wchan0 set to 1, state set to TS_SLEEP, etc.); the server thread has its T_WAKEABLE flag, t_wchan0 field, and t_sobj_ops field cleared.

  2. The code tests for any interesting events that may require attention, such as a hold condition on the thread, and checks for posted signals. If any signals are posted, setrun() is called with the current (client) thread.

  3. The dispatcher swtch_to() function is called and is passed the server thread address. swtch_to() updates the per-processor context-switch counter in the cpu_sysinfo structure (pswitch) and calls resume() to have the server thread context-switched onto the processor. The general flow is illustrated in Figure 4.8.

    Figure 4.8. door_call() Flow with Shuttle Switching

  4. The server thread executes the function associated with the door_node, as specified by the first argument passed when the server executed door_create(3X).

  5. The last call made by the server function is door_return(3X), which returns results and control to the calling thread (client) and blocks in the server, waiting for another door_call(3X). (A sketch of such a server function follows this list.)

  6. The kernel door_return() code copies the return data back to the caller and places the server thread back in the door server pool. The calling (client) thread, which we left in a sleep state back in door_call(), is set back to a TS_ONPROC state, and the shuttle code (shuttle_resume()) is called to give the processor back to the caller and have it resume execution.
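
Steps 4 and 5 above are where application code actually runs, in the context of the door server thread. As a hedged illustration (the reply format is invented for this example), a door server function might use door_cred(3X) to identify its caller before transferring results and control back with door_return(3X):

/*
 * Sketch of a door server function; runs in a door server thread
 * during a door_call(). The reply format is purely illustrative.
 */
#include <sys/types.h>
#include <door.h>
#include <stdio.h>

static void
serv_proc(void *cookie, char *argp, size_t arg_size,
    door_desc_t *dp, uint_t n_desc)
{
    door_cred_t cred;
    int ok = 0;

    /* Client credentials; valid only during the door invocation. */
    if (door_cred(&cred) == 0) {
        (void) printf("door_call from pid %ld, euid %ld\n",
            (long)cred.dc_pid, (long)cred.dc_euid);
        ok = 1;
    }

    /* Transfer results and control back to the caller (step 5/6). */
    (void) door_return((char *)&ok, sizeof (ok), NULL, 0);
}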

A few final points regarding doors: there's a fair amount of code in the kernel doorfs module designed to deal with error conditions and the premature termination of the calling thread or server thread. In general, if the calling thread is awakened early, that is, before door_call() has completed, the code determines why the wakeup occurred (signal, exit call, etc.) and sends a cancel signal (SIGCANCEL) to the server thread. If a server thread is interrupted because of a signal, exit, error condition, etc., the door_call() code bails out. In the client, an EINTR (interrupted system call) error is set, signifying that door_call() terminated prematurely.
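
On the client side, that premature termination surfaces as a failed door_call(3X) with errno set to EINTR. The retry wrapper below sketches one possible policy; because door_call(3X) is not a restartable system call and the server function may already have run, retrying like this is safe only for idempotent door functions.

/*
 * Sketch: retry an interrupted door_call(). Assumes the door
 * function is idempotent, since a retry may invoke the server
 * function a second time.
 */
#include <door.h>
#include <errno.h>

static int
door_call_retry(int dfd, door_arg_t *dap)
{
    int rv;

    do {
        rv = door_call(dfd, dap);
    } while (rv == -1 && errno == EINTR);
    return (rv);
}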



