Java New IO (java.nio)

The java.nio package was introduced with JDK 1.4 to provide more powerful IO and to address some of the shortcomings of the existing java.io package and other IO-related packages. It is important to note that NIO does not replace the standard IO package but serves as a new approach to some IO problems. The NIO package was designed to provide the following:

  • Character sets

  • Scalable network IO

  • High-performance file IO

As explained in the Character Stream section, character streams use charsets, which are comprehensive mechanisms for converting between bytes and characters. Asynchronous or nonblocking network IO has been a long-standing problem for Java. Prior to NIO, an application could run into severe limitations if it needed to establish and maintain thousands of connections. The typical approach had been to create a thread for every connection. This approach is not a problem for a few dozen connections, but resource consumption becomes an issue when dealing with hundreds of threads, which is typical for server-side code. High-performance file IO is also important, and the new approach to file IO has made performance improvements possible in many other areas. java.nio introduced Buffers, which are among the best additions ever made to Java, and games are one of their biggest beneficiaries. A significant benefit of Buffers is that they allow for efficient sharing of data between Java and native code. Because data such as textures, geometry, sound, and files is passed between Java and native code and in turn to the corresponding device, Buffers can result in a noticeable performance gain. Before getting to Buffers, let's look at channels, which are NIO's version of streams.

Channels

Channels are used to represent a connection to an entity such as a file, network socket, or even a program component. They are in many ways comparable to streams and can arguably be described as more platform-dependent versions of streams. Because they have closer ties to the underlying platform, in conjunction with NIO buffers they have the potential to achieve very efficient IO. Different types of channels, such as DatagramChannel, FileChannel, and SocketChannel, exist, and each provides a new way of dealing with IO. A DatagramChannel is a selectable channel for datagram-oriented sockets. A FileChannel is a channel for reading, writing, mapping, and manipulating a file. A SocketChannel is a selectable channel for stream-oriented connecting sockets and supports nonblocking connections.
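As a brief illustration of the nonblocking support just mentioned, the following sketch (our own, not from the original text; the host name and port are placeholder values) initiates a nonblocking connection through a SocketChannel:

// a minimal sketch of a nonblocking connect; the address is hypothetical
SocketChannel sc = SocketChannel.open();
sc.configureBlocking(false);
sc.connect(new java.net.InetSocketAddress("example.com", 80));
while (!sc.finishConnect()){
    // the connection is still pending; the thread is free to do other
    // work here (real code would typically use a Selector instead)
}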

A Channel object can be obtained by calling the getChannel methods of classes such as:

java.io.FileInputStream
java.io.FileOutputStream
java.io.RandomAccessFile
java.net.Socket
java.net.ServerSocket
java.net.DatagramSocket
java.net.MulticastSocket

For example, a FileChannel can simply be obtained by using the following code:

File inFile = new File(inputFilename);
FileInputStream fis = new FileInputStream(inFile);
FileChannel ifc = fis.getChannel();

A FileChannel is tightly bound to the object from which it is obtained. For example, reading a byte from the FileChannel updates the current position of the FileInputStream. A FileChannel can be obtained from a FileInputStream, FileOutputStream, or RandomAccessFile. Note that if a FileChannel is obtained from a FileInputStream, you can only read from it; you cannot write to it. Similarly, if a channel is obtained from a FileOutputStream, it can only be written to and not read from. File channels can be used to duplicate a file in much the same way that file stream objects can. One distinction is that file streams are typically read from and written to using an array of bytes, whereas channels are read from and written to using ByteBuffer objects. A FileChannel can also be used to map a file into memory, which we will look at in an upcoming section.

// duplicating a file using FileChannels and a ByteBuffer
void test(String inputFilename, String outputFilename)
    throws IOException{
    int bufferSize = 4096; // any reasonable size
    File inFile = new File(inputFilename);
    File outFile = new File(outputFilename);
    FileInputStream fis = new FileInputStream(inFile);
    FileOutputStream fos = new FileOutputStream(outFile);
    FileChannel ifc = fis.getChannel();
    FileChannel ofc = fos.getChannel();

    ByteBuffer bytes = ByteBuffer.allocate(bufferSize);
    while (true){
        bytes.clear();              // prepare the buffer for filling
        int count = ifc.read(bytes);
        if (count <= 0)             // -1 signals end of file
            break;
        bytes.flip();               // prepare the buffer for draining
        ofc.write(bytes);
    }
    fis.close();
    fos.close();
}
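The tight coupling between a channel and its originating stream, described above, can also be observed directly. In the following sketch (ours, not from the original text; the filename is hypothetical), bytes consumed through the channel advance the stream's position as well:

// reading through the channel advances the stream's position too
FileInputStream fis = new FileInputStream("data.bin"); // hypothetical file
FileChannel fc = fis.getChannel();
ByteBuffer four = ByteBuffer.allocate(4);
fc.read(four);                     // consumes bytes 0..3 of the file
System.out.println(fc.position()); // typically prints 4
int fifth = fis.read();            // returns the fifth byte of the file
fis.close();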

Buffers

Buffers can result in substantial performance gains for tasks that involve getting a rather large chunk of data to the operating system or the hardware. This gain is not specific to file IO. Figure 5.4 shows the different buffer types in the NIO package.

image from book
Figure 5.4: Buffer classes.

As you can see in Figure 5.4, a different buffer is available for every primitive type. A buffer is in many ways similar to an array. It represents a contiguous chunk of data and has a capacity that is fixed during its life. One of the fundamental differences between an array and a buffer is that a buffer can be direct, whereas arrays are always nondirect. A buffer can be thought of as an object that contains a reference to an array. A nondirect buffer’s array is a typical Java array that resides in the Java heap, but a direct buffer’s array resides in system memory, as shown in Figure 5.5.

image from book
Figure 5.5: Direct versus nondirect buffers.

It is important to understand the difference between the Java heap and system memory. Here we define the Java heap as the memory maintained by the garbage collector, and system memory as all the memory on the host machine minus the Java heap. Typical native APIs, including the OS APIs, take the memory address of (or a pointer to) the data that needs to be manipulated. Because the data in the Java heap is managed by the garbage collector, direct access to it cannot be safely granted to native APIs without serious implications. This means that Java applications and the VM have an extra problem to deal with.

Note that the garbage collector may decide to move objects around in the heap when performing housekeeping. If some data in the Java heap must be passed to, say, the hard drive or the video card, the data is not guaranteed to stay where it is until the transfer is completed. One workaround would be to disable the garbage collector whenever data in the heap must be passed to native APIs, such as the operating system. However, this action can result in serious problems. For example, if the garbage collector is disabled and a data transfer to a device is time consuming, the VM may run out of memory, even if there is a substantial number of dead objects that could have been collected to free up space. The other option would be to pin or lock the object so that the garbage collector knows not to move it. This technique would require less efficient and more complicated collection algorithms and has other side effects. Because of this, the VM does not allow native code direct access to the data in the Java heap.

Before the introduction of direct byte buffers, the solution to dealing with this problem was to copy the data from the Java heap to the system memory before it could be passed to a native function. Direct byte buffers allow the data to reside in the system memory so that when it needs to be passed to a native function, it is already in the system memory. This is the main idea behind why direct byte buffers can improve the performance of an application.

The overhead of copying the data may not be a bottleneck for small chunks of data that are not time critical and are rarely passed to native functions. However, the copy overhead can prove to be a bottleneck in many situations in different applications, especially games. Games rely heavily on the manipulation of data that is passed to the OS or the hardware. Geometry that needs to be manipulated by Java code is one example of where direct buffers can result in considerable performance gains. Refer to Chapter 12, “Java Native Interface,” for additional information about using Java direct buffers from native code and additional details about direct byte buffers.

A ByteBuffer is for storing any binary data. The CharBuffer, DoubleBuffer, FloatBuffer, IntBuffer, LongBuffer, and ShortBuffer classes, however, are for dealing with data of a specific type. There are different ways to allocate the array that stores the content of a buffer. The following code segment creates nondirect buffers:

ByteBuffer byteBuffer = ByteBuffer.allocate(bufferSize);
IntBuffer intBuffer = IntBuffer.allocate(bufferSize);

In addition, a buffer can simply wrap an existing array of the appropriate type using its wrap method. Because Java arrays reside in the Java heap, the following buffer is implicitly a nondirect buffer:

int myArray[] = new int[8];
IntBuffer intBuffer = IntBuffer.wrap(myArray);

The ByteBuffer class provides the static allocateDirect method, which can be used to explicitly ask for the array to reside in system memory as opposed to the Java heap. The following call creates a direct byte buffer:

ByteBuffer byteBuffer = ByteBuffer.allocateDirect(bufferSize);
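As a side note (our own sketch, not from the original text), whether a given buffer is direct can be checked at runtime with the isDirect method, and a direct buffer typically reports no backing Java-heap array:

ByteBuffer direct = ByteBuffer.allocateDirect(1024);
ByteBuffer nondirect = ByteBuffer.allocate(1024);
System.out.println(direct.isDirect());    // true
System.out.println(nondirect.isDirect()); // false
System.out.println(direct.hasArray());    // typically false; no heap array
System.out.println(nondirect.hasArray()); // true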

Direct buffers can also be created by wrapping any region of system memory through the NewDirectByteBuffer function, which is provided by JNI. In addition, a MappedByteBuffer, which is a form of direct byte buffer, can be used to access memory in the OS buffer that was shown in Figure 5.1.

A byte buffer also provides methods that create other buffers to view its content as other primitive types. A view buffer is a buffer whose array is the same as the array of the byte buffer that was used to create it. A view buffer is not indexed in bytes but in units of the corresponding primitive type. The only way to create a direct, say, float buffer is to first create a direct byte buffer and then call its asFloatBuffer method to create a view. Because a byte buffer provides an interface for reading and writing primitive types, the order of the bytes matters when dealing with multibyte data. The byte order can be either big-endian or little-endian. In the big-endian byte order, the most significant byte is stored at the lowest storage address (that is, the first address). In the little-endian byte order, the least significant byte is stored first (see Figure 5.6). Note that if you put a four-byte integer as little-endian and get it back as big-endian, the result will be entirely different. This concern is not specific to Java but has to be dealt with in any language, especially when different applications share data through means such as a file or network packets.
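For example, obtaining a direct float buffer as a view, as just described, looks like the following (a minimal sketch of our own):

// a direct float view over a direct byte buffer; 4 bytes per float
ByteBuffer bb = ByteBuffer.allocateDirect(4 * 100);
bb.order(ByteOrder.nativeOrder()); // match the platform's byte order
FloatBuffer fb = bb.asFloatBuffer();
fb.put(0, 3.14f);                  // indexed in floats, not in bytes
System.out.println(fb.isDirect()); // true; the view shares the direct storage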

image from book
Figure 5.6: Little-endian versus big-endian byte order.

The default byte order is big-endian, but it can be set by calling order(ByteOrder newOrder). Note that different hardware architectures use different byte orders; Intel hardware, for example, uses little-endian byte order. The method ByteOrder.nativeOrder can be used to obtain the byte order of the underlying hardware.
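The effect of the byte order can be seen by writing a multibyte value and then examining the individual bytes (a small sketch of our own):

ByteBuffer bb = ByteBuffer.allocate(4);
bb.order(ByteOrder.BIG_ENDIAN);  // the default order
bb.putInt(0, 0x0A0B0C0D);
System.out.println(bb.get(0));   // prints 10 (0x0A): most significant byte first
bb.order(ByteOrder.LITTLE_ENDIAN);
bb.putInt(0, 0x0A0B0C0D);
System.out.println(bb.get(0));   // prints 13 (0x0D): least significant byte first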

A Buffer is allocated once and is typically reused. A Buffer contains member variables such as capacity, limit, and position. It also contains methods such as clear, rewind, flip, and reset that manipulate these members. Once allocated, the capacity of a buffer cannot be changed. When the clear method of a buffer is called, its position is set to 0 and its limit is set to its capacity. This is shown in Figure 5.7.

image from book
Figure 5.7: A buffer whose clear method has been called.

The position indicates the next index of the array where data can be written to or read from. As data is written to the buffer, the position is updated. The position of a buffer can be retrieved and set using the position() and position(int newPosition) accessors. Subclasses of the Buffer class can choose to use the position to provide relative put and get methods. The limit of a buffer is used to keep track of the segment of the buffer that contains valid data. For example, when a buffer is filled only halfway, it is important to know the index up to which valid data has been written. As with the position, the limit can be retrieved and set using the limit() and limit(int newLimit) accessors. If a relative put operation tries to write past the limit, a BufferOverflowException is thrown, and if a relative get operation tries to read past the limit, a BufferUnderflowException is thrown. The remaining method simply returns the difference between the limit and the position. Figure 5.8 shows a buffer that has some data in it.

image from book
Figure 5.8: A buffer that contains some data.
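The interplay of the position, limit, and remaining accessors can be seen in a few lines (our own sketch):

IntBuffer buf = IntBuffer.allocate(8); // capacity 8, position 0, limit 8
buf.put(11);
buf.put(22);                           // relative puts; position is now 2
System.out.println(buf.position());    // prints 2
System.out.println(buf.limit());       // prints 8
System.out.println(buf.remaining());   // prints 6 (limit minus position)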

The flip method is one of the most important. Once all the data of interest has been written to a buffer, the limit of the buffer must be set to the end of the relevant data so that an object that reads the data knows when it has reached the end of the relevant data. It is important to note that reading until the capacity is reached is not correct, because the buffer might not be full. The flip method sets the limit to the current position and the position to 0. By doing so, a reading object can start reading from the position and stop when the position has reached the limit. Figure 5.9 shows a buffer whose flip method has been called.

image from book
Figure 5.9: A buffer whose flip method has been called.
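A typical fill-then-drain cycle therefore looks like the following sketch (ours):

ByteBuffer buf = ByteBuffer.allocate(64);
buf.put((byte) 1);
buf.put((byte) 2);         // write phase: position 2, limit 64
buf.flip();                // limit becomes 2, position becomes 0
while (buf.hasRemaining())
    System.out.println(buf.get()); // reads exactly the two bytes written
buf.clear();               // position 0, limit 64: ready to be filled again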

As the data in a buffer is read, the position is updated accordingly. If you want to reread the same data from the beginning, you can invoke the rewind method, which simply sets the position to 0. Buffers also allow an index to be marked using the mark method; to set the position back to the marked index, the reset method can be used. In general, the mark is greater than or equal to 0, the position is greater than or equal to the mark, the limit is greater than or equal to the position, and the capacity is greater than or equal to the limit:

         0 <= mark <= position <= limit <= capacity 
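The following sketch (our own) shows rewind, mark, and reset in action:

ByteBuffer buf = ByteBuffer.allocate(16);
buf.put((byte) 7);
buf.put((byte) 8);
buf.flip();       // limit 2, position 0
buf.get();        // position is now 1
buf.mark();       // remember position 1
buf.get();        // position is now 2
buf.reset();      // back to the marked position, 1
buf.rewind();     // back to position 0; the mark is discarded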

MappedByteBuffers

MappedByteBuffers are a form of direct byte buffer that was also introduced with JDK 1.4. They are rather complicated compared to typical buffers, and if you want to use them to improve the performance of your game, you really must understand how they work; they can be rather mysterious if you do not know what happens behind the scenes. Used properly, they can bring efficiency, but used improperly, they can decrease performance, increase memory consumption, and reduce available disk space. It is important to note that MappedByteBuffers are among the most OS-dependent objects that have been added to Java. Some specifics of their behavior are not defined in the JDK documentation, and you are advised to check the OS-specific documentation. When necessary, this section assumes the behavior of MappedByteBuffers under the Windows operating system.

A MappedByteBuffer is a direct byte buffer whose content corresponds to a region of a file and is managed by the operating system. In fact, the content of a MappedByteBuffer resides in the buffer or cache labeled as the OS buffer in Figure 5.1. In other words, a MappedByteBuffer can be used to wrap a region of the OS buffer. This means that an application can manipulate the data in the OS buffer without having to use an intermediate buffer managed by the application. By being able to access the OS buffer directly, an application does not have to endure the cost of copying data from the OS buffer to the application buffer. However, this is not the main advantage. The main advantage is that because the OS is managing the data, it can do certain things that would not be possible if the application managed the data. The operating system can decide to update some of the data in the buffer so that it corresponds with the data on the disk, or to update some of the data on the disk to correspond to the data in the buffer that has been manipulated by the application. Because the OS is aware of the state of the system, it may decide to unload parts of the file data from the OS buffer if it is low on memory. It can also avoid reloading the content of the file until it knows for a fact that an application needs to use a specific portion of the file. In addition, because the data is maintained at the OS level, if multiple applications need to access a certain file, they can simply gain access to the same region of the OS buffer. In fact, this technique is sometimes used to share data between multiple applications; the shared part of the OS buffer can be looked at as a form of shared memory backed by a file. All of this can happen without the application having to know what happens behind the scenes. In fact, once the buffer is retrieved, it is used as a typical direct byte buffer.

Figure 5.10 shows a region of a file that resides in the OS buffer. The region is visible to two different processes. Each process is represented by a block that stands for the memory local to that process. Each view in a process indicates a different MappedByteBuffer object. The diagram shows three different MappedByteBuffers, of which one belongs to Process 1 and two belong to Process 2. The memory addresses wrapped by each byte buffer are actually local to its corresponding process. That is, as far as the process is concerned, it thinks it is accessing local data. The OS, however, knows that the memory region mapped by each view really refers to the memory pages in the OS buffer. Therefore, even though each process thinks that it is using some chunk of memory to represent the file region, the data is strictly resident in the OS buffer.

image from book
Figure 5.10: Memory mapped file region.

Process 1 has a view that can see the entire memory mapped region. Process 2 has two different views that see parts of the region visible to Process 1. As you can see, the contents of the byte buffers overlap. Therefore, if Process 2 modifies the content of the buffer denoted by View 2, both View 1 of Process 2 and View 1 of Process 1 would be able to access the updated data. When the content of Region 1 is modified through any of the byte buffers, the OS does not immediately update the content of the file. It updates the file only when it finds it necessary.

Region 1 is only part of the actual file on the disk. Therefore, to modify a region of the file, the entire file does not have to be loaded. Furthermore, the amount of memory occupied in the OS buffer is not necessarily as much as the size of the region. If the three views are created but none of them is accessed, no memory is committed in the OS buffer. However, when the first byte of the byte buffer of Process 1 is accessed, one page of memory is committed in the OS buffer and updated to reflect the data of the corresponding section of the file. If the second byte of the same byte buffer is accessed, no IO will occur and no additional memory is committed. If the first byte of the byte buffer corresponding to View 1 of Process 2 is accessed, the data is available and already up to date. In fact, 4K of consecutive bytes can be read without any need for the disk to be accessed. If the last byte of the second byte buffer of Process 2 is accessed, another page is committed, which means that only two pages have actually been allocated.

Now that you have seen when the OS actually reads data from the disk, let's see when it decides to unload the data. There is no direct way to force the OS to unload the committed pages, because that would interfere with the management strategy of the OS. Additionally, the memory can be uncommitted only if no other views of the region exist. To decrease the time it takes before the OS uncommits the pages, you can unmap the buffers and close any streams or channels that refer to the file. A mapped byte buffer is unmapped when its finalizer is called, which is some time after the object is no longer reachable.

The OS may decide to release a page if it needs to make room for other purposes. For example, if the game starts to perform some memory-intensive computations, the OS may decide to unload some of the pages. Whether the pages are written back to the disk depends on whether they have been modified. If the pages have not been modified, the memory is simply freed. In fact, when a buffer is mapped, a parameter is used to specify its mode. A MappedByteBuffer can be read-only, read-write, or private. If a buffer is read-only, its pages can never be modified, which slightly helps the unloading process. When a buffer that is in private mode is written to, the OS copies any of the necessary pages and modifies only the copies. The advantage of this approach over making a local copy of the file content and storing it locally to the process is that the OS will still manage the pages. If the OS needs to make more room, a modified copy is saved to the system page file instead of the original file.

It is important to note that the use of a MappedByteBuffer can affect the amount of memory used as well as the amount of space used on the disk. If, for example, many sections of a rather large file are loaded into the OS buffer, the virtual memory manager may consider saving out other memory pages to the system page file, which can in turn increase the amount of space used on the drive.

The data maintained in the OS buffer is not saved to the file immediately, so it is crucial to have a way of forcing an update. This is exactly what the force method of MappedByteBuffer does. Note that forcing the changes to be written to the file is different from trying to make the OS uncommit or free some of the pages that it has allocated to store sections of the file. As mentioned already, the latter is not possible. The MappedByteBuffer object also has a load method that forces the entire mapped region to be loaded into the OS buffer. The load method simply walks the direct buffer at 4K (or page size) intervals and touches each page to achieve the desired result. Keep in mind, however, that loading an entire mapped region causes the application to use a significant amount of memory if the mapped region is large.

A MappedByteBuffer can be retrieved from a file channel. As mentioned earlier, a file channel can be retrieved from a FileInputStream, FileOutputStream, or RandomAccessFile. The following example uses a MappedByteBuffer to copy a file:

File inFile = new File(inputFilename);
File outFile = new File(outputFilename);
FileInputStream fis = new FileInputStream(inFile);
RandomAccessFile fos = new RandomAccessFile(outFile, "rw");
FileChannel ifc = fis.getChannel();
FileChannel ofc = fos.getChannel();

long sourceSize = ifc.size();
MappedByteBuffer bytes1 = ifc.map(
    FileChannel.MapMode.READ_ONLY, 0, sourceSize);
bytes1.load();
MappedByteBuffer bytes2 = ofc.map(
    FileChannel.MapMode.READ_WRITE, 0, sourceSize);
bytes2.load();
// finally, copy the data
bytes2.put(bytes1);

How expensive do you think executing the put method is? Note that because the mapped regions have already been loaded through the load method, the put method results only in copying data from one part of the OS buffer to another. This is a memory copy and the disk is not even accessed, so the put method alone is extremely fast. However, if the buffer's load method is not called before the put method is invoked, the put method ends up being very expensive, because the entire source file must first be loaded into the OS buffer. Also, note that when the put method is done, the destination file has not literally been updated yet. To have the data written to the file, you can call the force method of the buffer. Note that closing the channel does not affect the mapped buffer; once the file is mapped into memory, it has nothing to do with the channel from which it was created. As a side note, the behavior of the map method of FileChannel is unspecified and OS dependent when the requested region is not completely contained within the corresponding channel's file. The example just shown assumes that source and destination files of the given size already exist and that the intention is to overwrite the content of the destination file.
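To complete the example, pushing the copied data out to the destination file would look like this (our own continuation of the sketch above):

bytes2.force(); // write the modified pages back to the destination file
fis.close();    // closing the streams also closes their channels...
fos.close();    // ...but, as noted above, does not unmap the buffers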


