8.4 I/O, Events, and Threads

Parrot has comprehensive support for I/O, threads, and events. These three systems are interrelated, so we'll treat them together. The systems we talk about in this section are less mature than other parts of the engine, so they may change by the time we roll out the final design and implementation.

8.4.1 I/O

Parrot's base I/O system is fully asynchronous I/O with callbacks and per-request private data. Since this is massive overkill in many cases, we have a plain vanilla synchronous I/O layer that your programs can use if they don't need the extra power.

Asynchronous I/O is conceptually pretty simple. Your program makes an I/O request. The system takes that request and returns control to your program, which keeps running. Meanwhile, the system works on satisfying the I/O request. When the request is satisfied, the system notifies your program in some way. Since there can be multiple requests outstanding, and you can't be sure exactly what your program will be doing when a request is satisfied, programs that make use of asynchronous I/O can be complex.
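
To make the pattern concrete, here is a minimal C sketch using the standard POSIX AIO interface. This is not Parrot's I/O API, just the same request-then-continue shape in a compilable form; the file name data.txt is a made-up example.

    /* A minimal sketch of the asynchronous pattern using the standard
       POSIX AIO interface (this is not Parrot's I/O API): issue a read,
       keep working, then wait for the request to be satisfied. */
    #include <aio.h>
    #include <errno.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(void) {
        static char buf[4096];
        struct aiocb req;
        const struct aiocb *list[1] = { &req };

        int fd = open("data.txt", O_RDONLY);   /* hypothetical input file */
        if (fd < 0) return 1;

        memset(&req, 0, sizeof req);
        req.aio_fildes = fd;
        req.aio_buf    = buf;
        req.aio_nbytes = sizeof buf;
        req.aio_offset = 0;

        aio_read(&req);        /* request issued; control returns immediately */

        /* ... the program is free to do other work here ... */

        if (aio_error(&req) == EINPROGRESS)    /* not done yet?           */
            aio_suspend(list, 1, NULL);        /* then sleep until it is  */

        printf("read %zd bytes\n", aio_return(&req));
        close(fd);
        return 0;
    }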

Synchronous I/O is even simpler. Your program makes a request to the system and then waits until that request is done. There can be only one request in process at a time, and you always know what you're doing (waiting) while the request is being processed. It makes your program much simpler, since you don't have to do any sort of coordination or synchronization.

The big benefit of asynchronous I/O systems is that they generally have a much higher throughput than a synchronous system. They move data around much faster, in some cases three or four times faster. This is because the system can be busy moving data to or from disk while your program is busy processing data that it got from a previous request.

For disk devices, having multiple outstanding requests, especially on a busy system, allows the system to order read and write requests to take better advantage of the underlying hardware. For example, many disk devices have built-in track buffers. No matter how small a request you make to the drive, it always reads a full track. With synchronous I/O, if your program makes two small requests to the same track, and they're separated by a request for some other data, the disk will have to read the full track twice. With asynchronous I/O, on the other hand, the disk may be able to read the track just once, and satisfy the second request from the track buffer.

Parrot's I/O system revolves around a request. A request has three parts: a buffer for data, a completion routine, and a piece of data private to the request. Your program issues the request, then goes about its business. When the request is completed, Parrot will call the completion routine, passing it the request that just finished. The completion routine extracts the buffer and the private data, and does whatever it needs to do to handle the request. If your request doesn't have a completion routine, then your program will have to explicitly check whether the request was satisfied.
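
The following C sketch just illustrates those three parts; the names are hypothetical, not Parrot's actual request structures.

    /* Hypothetical sketch of the three parts of an I/O request described
       above; illustrative names only, not Parrot's internal layout. */
    #include <stddef.h>

    typedef struct io_request {
        void  *buffer;                              /* data read, or data to write       */
        size_t length;                              /* how much data the buffer holds    */
        void (*completion)(struct io_request *req); /* called when the request finishes  */
        void  *private_data;                        /* per-request data for the callback */
        int    done;                                /* set when the request is satisfied */
    } io_request;

    /* What the I/O system conceptually does when a request finishes. */
    static void request_completed(io_request *req) {
        req->done = 1;
        if (req->completion)
            req->completion(req);  /* handler pulls out buffer and private data */
        /* with no completion routine, the program checks req->done itself */
    }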

Your program can choose to sleep and wait for the request to finish, essentially blocking. Parrot will continue to process events while your program is waiting, so it isn't completely unresponsive. This is how Parrot implements synchronous I/O: it issues the asynchronous request, then immediately waits for that request to complete.

The reason we made Parrot's I/O system asynchronous by default was sheer pragmatism. Network I/O is all asynchronous, as is GUI programming, so we knew we had to deal with asynchrony in some form. It's also far easier to make an asynchronous system pretend to be synchronous than it is the other way around. We could have decided to treat GUI events, network I/O, and file I/O all separately, but there are plenty of systems around that demonstrate what a bad idea that is.

8.4.2 Events

An event is a notification that something has happened: the user has manipulated a GUI element, an I/O request has completed, a signal has been triggered, or a timer has expired. Most systems these days have an event handler,[7] because handling events is so fundamental to modern GUI programming. Unfortunately, the event handling system is often not integrated, or is poorly integrated, with the I/O system, leading to nasty code and unpleasant workarounds to try to make a program responsive to network, file, and GUI events simultaneously. Parrot presents a unified event handling system, integrated with its I/O system, which makes it possible to write cross-platform programs that work well in a complex environment.

[7] Often two or three, which is something of a problem.

Parrot's events are fairly simple. An event has an event type, some event data, an event handler, and a priority. Each thread has an event queue, and when an event happens it's put into the right thread's queue (or the default thread queue in those cases where we can't tell which thread an event was destined for) to wait for something to process it.
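
As a rough illustration of those pieces, here is a C sketch of an event and a per-thread event queue. The type names and fields are hypothetical, not Parrot's actual structures.

    /* Illustrative sketch of an event and a per-thread event queue. */
    #include <pthread.h>

    typedef enum { EV_IO_DONE, EV_SIGNAL, EV_TIMER, EV_USER } event_type;

    typedef struct event {
        event_type    type;
        void         *data;                        /* event-specific payload            */
        void        (*handler)(struct event *ev);  /* may be NULL; see dispatch below   */
        int           priority;                    /* higher priority is handled first  */
        struct event *next;
    } event;

    typedef struct event_queue {
        pthread_mutex_t lock;    /* queues can be filled from other threads */
        pthread_cond_t  ready;   /* signaled when a new event arrives       */
        event          *head;    /* ordered by priority, FIFO within one    */
    } event_queue;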

Any operation that would potentially block drains the event queue while it waits, as do a number of the cleanup opcodes that Parrot uses to tidy up on scope exit. For performance reasons, Parrot doesn't check for an outstanding event on every opcode, as that check gets expensive quickly. Still, Parrot generally ensures timely event handling, and events shouldn't sit in the queue for more than a few milliseconds unless event handling has been explicitly disabled.

When Parrot does extract an event from the event queue, it calls that event's event handler, if it has one. If an event doesn't have a handler, Parrot instead looks for a generic handler for the event type and calls it instead. If for some reason there's no handler for the event type, Parrot falls back to the generic event handler, which throws an exception when it gets an event it doesn't know how to handle. You can override the generic event handler if you want Parrot to do something else with unhandled events, perhaps silently discarding them instead.
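
A sketch of that dispatch order, building on the event structure above; the per-type handler table and the generic fallback are assumed names, not Parrot's real API.

    /* Dispatch order described above: the event's own handler, then a
       generic handler for its type, then the overridable fallback, which
       by default throws an exception for events it can't handle. */
    extern void (*type_handlers[])(event *ev);    /* assumed per-type handler table */
    extern void  generic_event_handler(event *ev);

    void dispatch_event(event *ev) {
        if (ev->handler)
            ev->handler(ev);
        else if (type_handlers[ev->type])
            type_handlers[ev->type](ev);
        else
            generic_event_handler(ev);
    }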

Because events are handled in mainline code, they don't have the restrictions commonly associated with interrupt-level code. It's safe and acceptable for an event handler to throw an exception, allocate memory, or manipulate thread or global state. Event handlers can even acquire locks if they need to, though it's not a good idea for an event handler to block on lock acquisition.

Parrot uses the priority on events for two purposes. First, the priority is used to order the events in the event queue. Events of a particular priority are handled in a FIFO manner, but higher-priority events are always handled before lower-priority events. Parrot also allows a user program or event handler to set a minimum event priority that it will handle. If an event with a priority lower than the current minimum arrives, it won't be handled; instead, it will sit in the queue until the minimum priority level is lowered. This allows an event handler that's dealing with a high-priority event to ignore lower-priority events.
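
A small sketch of those priority rules, building on the event_queue sketch above; again the names are illustrative, not Parrot's own.

    /* The queue head is always the highest priority (FIFO within one),
       and anything below the current minimum is left in the queue. */
    static int minimum_priority = 0;   /* raised by handlers that want quiet */

    event *next_event(event_queue *q) {
        event *ev = NULL;
        pthread_mutex_lock(&q->lock);
        if (q->head && q->head->priority >= minimum_priority) {
            ev = q->head;
            q->head = ev->next;
        }
        pthread_mutex_unlock(&q->lock);
        return ev;   /* NULL means nothing handleable right now */
    }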

User code generally doesn't need to deal with prioritized events, and programmers should adjust event priorities with care. Adjusting the default priority of an event, or adjusting the current minimum priority level, is a rare occurrence. It's almost always a mistake to change them, but the capability is there for those rare occasions where it's the correct thing to do.

8.4.3 Signals

Signals are a special form of event, based on the Unix signal mechanism. Parrot presents them as mildly special, as a remnant of Perl's Unix heritage, but under the hood they're not treated any differently from any other event.

The Unix signaling mechanism is something of a mess, having been extended and worked on over the years by a small legion of undergrad programmers. At this point, signals can be divided into two categories: those that are fatal and those that aren't.

Fatal signals are things like SIGKILL, which unconditionally kills a process, or SIGSEGV, which indicates that your process has tried to access memory outside its address space. There's no good way for Parrot to catch these signals, so they remain fatal and will kill your process. On some systems it's possible to catch some of the fatal signals, but Parrot code itself operates at too high a level for a user program to do anything with them; they must be handled with special-purpose code written in C or some other low-level language. Parrot itself may catch them in special circumstances for its own use, but that's an implementation detail that isn't exposed to a user program.

Nonfatal signals are things such as SIGCHLD, indicating that a child process has died, or SIGINT, indicating that the user has pressed ^C on the keyboard. Parrot turns these signals into events and puts them in the event queue. Your program's event handler for the signal will be called as soon as Parrot gets to the event in the queue, and your code can do what it needs to with it.
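
A rough illustration of the signal-to-event idea using plain POSIX calls, not Parrot internals: the handler does the minimum that is async-signal-safe, and the main loop later turns the recorded signal into an ordinary event.

    #include <signal.h>
    #include <string.h>

    static volatile sig_atomic_t pending_signal = 0;

    static void on_signal(int sig) {
        pending_signal = sig;            /* just record it; nothing fancy here */
    }

    static void install_handlers(void) {
        struct sigaction sa;
        memset(&sa, 0, sizeof sa);
        sa.sa_handler = on_signal;
        sigemptyset(&sa.sa_mask);
        sa.sa_flags = SA_RESTART;
        sigaction(SIGINT,  &sa, NULL);   /* user pressed ^C          */
        sigaction(SIGCHLD, &sa, NULL);   /* a child process has died */
    }

    /* Somewhere in the main loop:
           if (pending_signal) { turn it into an event; pending_signal = 0; } */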

SIGALRM, the timer expiration signal, is treated specially by Parrot. Generated when a timer set by the alarm() system call expires, this signal is normally used to provide timeouts for system calls that would otherwise block forever, which is very useful. The big downside is that on most systems there can be only one outstanding alarm() request, and while you can get around this somewhat with the setitimer() call (which allows up to three pending timers), it's still quite limited.

Since Parrot's I/O system is fully asynchronous and never blocks (even what looks like a blocking request still drains the event queue), the alarm signal isn't needed for this. Parrot instead grabs SIGALRM for its own use, and provides a fully generic timer system that allows any number of timer events, each with its own callback function and private data, to be outstanding.
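
This is not Parrot's implementation, just a sketch of the usual way to multiplex many logical timers onto the single underlying SIGALRM: keep pending timers sorted by expiration and arm the real timer for the soonest one.

    #include <stddef.h>
    #include <sys/time.h>

    typedef struct timer_entry {
        struct timeval      when;                   /* absolute expiration time */
        void              (*callback)(void *data);  /* run when the timer fires */
        void               *data;                   /* private data             */
        struct timer_entry *next;                   /* list kept sorted by when */
    } timer_entry;

    static timer_entry *timers;   /* head is always the soonest expiration */

    static void arm_underlying_timer(void) {
        struct itimerval itv = { {0, 0}, {0, 0} };
        struct timeval now;

        if (!timers)
            return;
        gettimeofday(&now, NULL);
        itv.it_value.tv_sec  = timers->when.tv_sec  - now.tv_sec;
        itv.it_value.tv_usec = timers->when.tv_usec - now.tv_usec;
        if (itv.it_value.tv_usec < 0) {
            itv.it_value.tv_sec  -= 1;
            itv.it_value.tv_usec += 1000000;
        }
        if (itv.it_value.tv_sec < 0) {              /* already due             */
            itv.it_value.tv_sec  = 0;
            itv.it_value.tv_usec = 1;
        }
        setitimer(ITIMER_REAL, &itv, NULL);         /* SIGALRM when it expires */
    }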

8.4.4 Threads

Threads are a means of splitting a process into multiple pieces that execute simultaneously. It's a relatively easy way to get some parallelism without too much work. Threads don't solve all the parallelism problems your program may have. Sometimes multiple processes on a single system, multiple processes on a cluster, or processes on multiple separate systems are better. But threads do present a good solution for many common cases.

All the resources in a threaded process are shared between threads. This is simultaneously the great strength and great weakness of threads. Easy sharing is fast sharing, making it far faster to exchange data between threads or access shared global data than to share data between processes on a single system or on multiple systems. Easy sharing is dangerous, though, since without some sort of coordination between threads it's easy to corrupt that shared data. And, because all the threads are contained within a single process, if any one of them fails for some reason the entire process, with all its threads, dies.

With a low-level language such as C, these issues are manageable. The core data types (integers, floats, and pointers) are all small enough to be handled atomically. Composite data can be protected with mutexes, special structures that a thread can get exclusive access to. The composite data elements that need protecting can each have a mutex associated with them, and when a thread needs to touch the data it just acquires the mutex first. By default there's very little data that must be shared between threads, so it's relatively easy, barring program errors, to write thread-safe code if a little thought is given to the program structure.
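
A minimal sketch of that C-level approach, using POSIX threads: give the composite structure its own mutex and take it around every access.

    #include <pthread.h>
    #include <stddef.h>

    typedef struct shared_list {
        pthread_mutex_t lock;
        int            *items;
        size_t          count;
    } shared_list;

    void list_append(shared_list *l, int value) {
        pthread_mutex_lock(&l->lock);    /* exclusive access while we touch it */
        l->items[l->count++] = value;    /* (capacity checks omitted)          */
        pthread_mutex_unlock(&l->lock);
    }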

Things aren't this easy for Parrot, unfortunately. A PMC, Parrot's native data type, is a complex structure, so we can't count on the hardware to provide us atomic access. That means Parrot has to provide atomicity itself, which is expensive. Getting and releasing a mutex isn't really that expensive in itself; it has been heavily optimized by platform vendors because they want threaded code to run quickly. It's not free, though, and when you consider that, running flat out, Parrot does one PMC operation per 100 CPU cycles, even adding an additional 10 cycles per operation can slow down Parrot by 10%.

For any threading scheme, it's important that your program isn't hindered by the platform and libraries it uses. This is a common problem with writing threaded code in C, for example. Many libraries you might use aren't thread-safe, and if you aren't careful with them your program will crash. Although we can't make low-level libraries any safer, we can make sure that Parrot itself won't be a danger. There is very little data shared between Parrot interpreters and threads, and access to all the shared data is done with coordinating mutexes. This is invisible to your program, and just makes sure that Parrot itself is thread-safe.

When you think about it, there are really three different threading models. In the first one, multiple threads have no interaction among themselves. This essentially does with threads the same thing that's done with processes. This works very well in Parrot, with the isolation between interpreters helping to reduce the overhead of this scheme. There's no possibility of data sharing at the user level, so there's no need to lock anything.

In the second threading model, multiple threads run and pass messages back and forth between each other. Parrot supports this as well, via the event mechanism. The event queues are thread-safe, so one thread can safely inject an event into another thread's event queue. This is similar to a multiple-process model of programming, except that communication between threads is much faster, and it's easier to pass around structured data.
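
As a sketch of that message-passing model, here is how one thread might inject an event into another thread's queue, building on the event_queue sketch earlier; the function name is illustrative.

    /* Post an event into another thread's queue and wake the receiver. */
    void post_event(event_queue *target, event *ev) {
        pthread_mutex_lock(&target->lock);
        ev->next = target->head;              /* priority ordering omitted */
        target->head = ev;
        pthread_mutex_unlock(&target->lock);
        pthread_cond_signal(&target->ready);  /* wake the receiving thread */
    }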

In the third threading model, multiple threads run and share data between themselves. Although Parrot can't guarantee that data at the user level remains consistent, it can make sure that access to shared data is at least safe. We do this with two mechanisms.

First, Parrot presents an advisory lock system to user code. Any piece of user code running in a thread can lock a variable. Any attempt to lock a variable that another thread has locked will block until the lock is released. Locking a variable only blocks other lock attempts. It does not block plain access. This may seem odd, but it's the same scheme used by threading systems that obey the POSIX thread standard, and has been well tested in practice.
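
The following sketch shows the advisory idea in plain POSIX terms, not Parrot's actual lock API: the lock is honored only by code that chooses to take it, and plain reads and writes are not stopped.

    #include <pthread.h>

    typedef struct shared_var {
        pthread_mutex_t advisory;   /* taken by cooperating code only  */
        int             value;      /* direct access is still possible */
    } shared_var;

    void careful_update(shared_var *v, int n) {
        pthread_mutex_lock(&v->advisory);    /* blocks if another thread holds it */
        v->value = n;
        pthread_mutex_unlock(&v->advisory);
    }

    void careless_update(shared_var *v, int n) {
        v->value = n;   /* nothing stops this; the lock is only advisory */
    }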

Second, Parrot forces all shared PMCs to be marked as such, and all access to shared PMCs must first acquire that PMC's private lock. This is done by installing an alternate vtable for shared PMCs, one that acquires locks on all its parameters. These locks are held only for the duration of the vtable function, but ensure that the PMCs affected by the operation aren't altered by another thread while the vtable function is in progress.
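
A hypothetical sketch of the shared-PMC idea: the shared variant's vtable entry takes the PMC's private lock around the real operation. The structure and function names are illustrative, not Parrot's actual vtable layout.

    #include <pthread.h>

    typedef struct pmc {
        pthread_mutex_t  private_lock;   /* only meaningful on shared PMCs */
        struct vtable   *vtable;
        void            *data;
    } pmc;

    struct vtable {
        void (*set_integer)(pmc *self, long value);
    };

    static void plain_set_integer(pmc *self, long value) {
        *(long *)self->data = value;
    }

    static void shared_set_integer(pmc *self, long value) {
        pthread_mutex_lock(&self->private_lock);    /* held for this call only */
        plain_set_integer(self, value);
        pthread_mutex_unlock(&self->private_lock);
    }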


