File System Interfaces

The first time a file's data is accessed for a read or write operation, the file system driver is responsible for determining whether some part of the file is mapped in the system cache. If it's not, the file system driver must call the CcInitializeCacheMap function to set up the per-file data structures described in the preceding section.
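As a rough sketch (not code from the book), a file system's cached read or write path typically checks the file object's PrivateCacheMap pointer and, if caching hasn't been set up yet, calls CcInitializeCacheMap. The FCB layout, callback table, and function name below are hypothetical placeholders; the cache manager routines themselves are declared in ntifs.h.

    VOID
    MyFsdEnsureCachingInitialized (
        IN PFILE_OBJECT FileObject,
        IN PMY_FCB Fcb                                // hypothetical per-file control block
        )
    {
        if (FileObject->PrivateCacheMap == NULL) {

            CC_FILE_SIZES FileSizes;

            // The file sizes come from the file system's own metadata.
            FileSizes.AllocationSize  = Fcb->AllocationSize;
            FileSizes.FileSize        = Fcb->FileSize;
            FileSizes.ValidDataLength = Fcb->ValidDataLength;

            // Creates the private cache map, shared cache map, and section
            // object described in the preceding section.
            CcInitializeCacheMap(FileObject,
                                 &FileSizes,
                                 FALSE,                         // PinAccess: FALSE for user data streams
                                 &MyFsdCacheManagerCallbacks,   // hypothetical callback table
                                 Fcb);                          // context passed back to the callbacks
        }
    }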

Once a file is set up for cached access, the file system driver calls one of several functions to access the data in the file. There are three primary methods for accessing cached data, each intended for a specific situation:

  • The copy method copies user data between cache buffers in system space and a process buffer in user space.

  • The mapping and pinning method uses virtual addresses to read and write data directly to cache buffers.

  • The physical memory access method uses physical addresses to read and write data directly to cache buffers.

File system drivers must provide two versions of the file read operation, cached and noncached, to prevent an infinite loop when the memory manager processes a page fault. When the memory manager resolves a page fault by calling the file system to retrieve data from the file (via the device driver, of course), it must specify this noncached read operation by setting the "no cache" flag in the IRP.
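A minimal sketch of this split, assuming hypothetical MyFsdCachedRead and MyFsdNonCachedRead helpers: the read dispatch routine examines the IRP's flags and routes noncached (paging) reads straight to the storage stack so they never reenter the cache.

    NTSTATUS
    MyFsdRead (
        IN PDEVICE_OBJECT DeviceObject,
        IN PIRP Irp
        )
    {
        // The memory manager sets IRP_NOCACHE (along with IRP_PAGING_IO) on
        // the paging reads it issues while resolving cache page faults.
        if (Irp->Flags & IRP_NOCACHE) {

            // Read directly from the volume; must not touch the cache.
            return MyFsdNonCachedRead(DeviceObject, Irp);
        }

        // Normal user read: satisfy it through the cache manager.
        return MyFsdCachedRead(DeviceObject, Irp);
    }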

Figure 11-13 illustrates the typical interactions between the cache manager, memory manager, and file system drivers in response to user read or write file I/O. The cache manager is invoked by a file system through the copy interfaces (the CcCopyRead and CcCopyWrite paths). To process a CcFastCopyRead or CcCopyRead read, for example, the cache manager creates a view in the cache to map a portion of the file being read and reads the file data into the user buffer by copying from the view. The copy operation generates page faults as it accesses each previously invalid page in the view, and in response the memory manager initiates noncached I/O into the file system driver to retrieve the data corresponding to the part of the file mapped to the page that faulted.

Figure 11-13. File system interaction with cache and memory managers


The next three sections explain these cache access mechanisms, their purpose, and how they're used.

Copying to and from the Cache

Because the system cache is in system space, it is mapped into the address space of every process. As with all system space pages, however, cache pages aren't accessible from user mode because that would be a potential security hole. (For example, a process might not have the rights to read a file whose data is currently contained in some part of the system cache.) Thus, user application file reads and writes to cached files must be serviced by kernel-mode routines that copy data between the cache's buffers in system space and the application's buffers residing in the process address space. The functions that file system drivers can use to perform this operation are listed in Table 11-4.

Table 11-4. Kernel-Mode Functions for Copying to and from the Cache

Function            Description
CcCopyRead          Copies a specified byte range from the system cache to a user buffer
CcFastCopyRead      Faster variation of CcCopyRead, but limited to 32-bit file offsets and synchronous reads
CcCopyWrite         Copies a specified byte range from a user buffer to the system cache
CcFastCopyWrite     Faster variation of CcCopyWrite, but limited to 32-bit file offsets and synchronous, non-write-through writes (used by NTFS, not FAT)
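For example, the cached read path of a file system built on the copy interface might look roughly like the fragment below (the variable names and the posting helper are illustrative, not from the book). CcCopyRead copies the requested range from the cache into the caller's buffer and returns FALSE if the data isn't resident and the caller asked not to block.

    // Cached read using the copy interface (sketch).
    if (!CcCopyRead(FileObject,
                    &ByteOffset,        // starting file offset
                    Length,             // bytes to read
                    CanWait,            // FALSE: fail instead of blocking on I/O
                    SystemBuffer,       // destination buffer
                    &Irp->IoStatus)) {

        // Data not resident and the caller can't wait; queue the request to
        // a worker thread and retry there with CanWait == TRUE.
        return MyFsdPostRequest(IrpContext, Irp);    // hypothetical helper
    }

    Status = Irp->IoStatus.Status;      // bytes copied are reported in IoStatus.Information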


You can examine read activity from the cache via the performance counters or system variables listed in Table 11-5.

Table 11-5. System Variables for Examining Read Activity from the Cache

Performance Counter (frequency)

System Variable (count)

Description

Cache: Copy Read Hits %

(CcCopyReadWait + CcCopyReadNoWait) / (CcCopyReadWait + CcCopyReadNoWait + CcCopyReadWaitMiss + CcCopyReadNoWaitMiss)

Percentage of copy reads to parts of files that were in the cache (A copy read can still generate paging I/O. The Memory: Cache Faults/Sec counter reports page fault activity for the system working set, but it includes both hard and soft page faults, so that counter still doesn't indicate actual paging I/O caused by cache faults.)

Cache: Copy Reads/Sec

CcCopyReadWait + CcCopyReadNoWait

Total copy reads from the cache

Cache: Sync Copy Reads/Sec

CcCopyReadWait

Synchronous copy reads from the cache

Cache: Async Copy Reads/Sec

CcCopyReadNoWait

Asynchronous copy reads from the cache


Caching with the Mapping and Pinning Interfaces

Just as user applications read and write data in files on a disk, file system drivers need to read and write the data that describes the files themselves (the metadata, or volume structure data). Because the file system drivers run in kernel mode, however, they could, if the cache manager were properly informed, modify data directly in the system cache. To permit this optimization, the cache manager provides the functions shown in Table 11-6. These functions permit the file system drivers to find where in virtual memory the file system metadata resides, thus allowing direct modification without the use of intermediary buffers.

Table 11-6. Functions for Finding Metadata Locations

Function              Description
CcMapData             Maps the byte range for read access
CcPinRead             Maps the byte range for read/write access and pins it
CcPreparePinWrite     Maps and pins the byte range for write access (reads are not valid)
CcPinMappedData       Pins a previously mapped buffer
CcSetDirtyPinnedData  Notifies the cache manager that the data has been modified
CcUnpinData           Releases the pages so that they can be removed from memory


If a file system driver needs to read file system metadata in the cache, it calls the cache manager's mapping interface to obtain the virtual address of the desired data. The cache manager touches all the requested pages to bring them into memory and then returns control to the file system driver. The file system driver can then access the data directly.
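For instance, a file system might read an on-disk record in one of its metadata streams through the mapping interface as sketched below. The stream file object, record type, and offset are hypothetical names introduced only for illustration.

    PVOID Bcb;
    PVOID Buffer;
    LARGE_INTEGER Offset;

    Offset.QuadPart = RecordOffset;                 // hypothetical metadata offset

    // Map the range for read access; TRUE means block until the data has
    // been brought into the cache.
    if (CcMapData(MetadataStreamFileObject,
                  &Offset,
                  sizeof(MY_ON_DISK_RECORD),        // hypothetical record type
                  TRUE,
                  &Bcb,
                  &Buffer)) {

        PMY_ON_DISK_RECORD Record = (PMY_ON_DISK_RECORD)Buffer;

        // ... read fields of Record directly out of the cache ...

        CcUnpinData(Bcb);                           // release the mapping
    }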

If the file system driver needs to modify cache pages, it calls the cache manager's pinning services, which keep the pages being modified in memory. The pages aren't actually locked into memory (such as when a device driver locks pages for direct memory access transfers). Most of the time, a file system driver will mark its metadata stream "no write", which instructs the memory manager's mapped page writer (explained in Chapter 7) to not write the pages to disk until explicitly told to do so. When the file system driver unpins (releases) them, the cache manager flushes any changes to disk and releases the cache view that the metadata occupied.
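Continuing the sketch above, modifying that same record through the pinning interface might look like this; the field and value names remain hypothetical.

    // Pin the range for read/write access, update it in place, and tell the
    // cache manager that the contents changed.
    if (CcPinRead(MetadataStreamFileObject,
                  &Offset,
                  sizeof(MY_ON_DISK_RECORD),
                  TRUE,                             // block until the range can be pinned
                  &Bcb,
                  &Buffer)) {

        PMY_ON_DISK_RECORD Record = (PMY_ON_DISK_RECORD)Buffer;

        Record->SomeField = NewValue;               // hypothetical in-place update

        CcSetDirtyPinnedData(Bcb, NULL);            // mark the pinned data dirty (no LSN supplied)
        CcUnpinData(Bcb);                           // unpin so the change can be written
    }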

The mapping and pinning interfaces solve one thorny problem of implementing a file system: buffer management. Without directly manipulating cached metadata, a file system must predict the maximum number of buffers it will need when updating a volume's structure. By allowing the file system to access and update its metadata directly in the cache, the cache manager eliminates the need for buffers, simply updating the volume structure in the virtual memory the memory manager provides. The only limitation the file system encounters is the amount of available memory.

You can examine pinning and mapping activity in the cache via the performance counters or system variables listed in Table 11-7.

Table 11-7. System Variables for Examining Pinning and Mapping Activity

Performance Counter (frequency)

System Variable (count)

Description

Cache: Data Map Hits %

(CcMapDataWait + CcMapDataNoWait) / (CcMapDataWait + CcMapDataNoWait + CcMapDataWaitMiss + CcMapDataNoWaitMiss)

Percentage of data maps to parts of files that were in the cache (A data map can still generate paging I/O.)

Cache: Data Maps/Sec

CcMapDataWait +CcMapDataNoWait

Total data maps from the cache

Cache: Sync Data Maps/Sec

CcMapDataWait

Synchronous data maps from the cache

Cache: Async Data Maps/Sec

CcMapDataNoWait

Asynchronous data maps from the cache

Cache: Data Map Pins/Sec

CcPinMappedDataCount

Number of requests to pin mapped data

Cache: Pin Read Hits %

(CcPinReadWait + CcPinReadNoWait) / (CcPinReadWait + CcPinReadNoWait + CcPinReadWaitMiss + CcPinReadNoWaitMiss)

Percentage of pinned reads to parts of files that were in the cache (A pinned read can still generate paging I/O.)

Cache: Pin Reads/Sec

CcPinReadWait +CcPinReadNoWait

Total pinned reads from the cache

Cache: Sync Pin Reads/Sec

CcPinReadWait

Synchronous pinned reads from the cache

Cache: Async Pin Reads/Sec

CcPinReadNoWait

Asynchronous pinned reads from the cache


Caching with the Direct Memory Access Interfaces

In addition to the mapping and pinning interfaces used to access metadata directly in the cache, the cache manager provides a third interface to cached data: direct memory access (DMA). The DMA functions are used to read from or write to cache pages without intervening buffers, such as when a network file system is doing a transfer over the network.

The DMA interface returns to the file system the physical addresses of cached user data (rather than the virtual addresses, which the mapping and pinning interfaces return), which can then be used to transfer data directly from physical memory to a network device. Although small amounts of data (1 KB to 2 KB) can use the usual buffer-based copying interfaces, for larger transfers, the DMA interface can result in significant performance improvements for a network server processing file requests from remote systems.

To describe these references to physical memory, a memory descriptor list (MDL) is used. (MDLs were introduced in Chapter 7.) The four separate functions described in Table 11-8 create the cache manager's DMA interface.

Table 11-8. Functions That Create the DMA Interface

Function             Description
CcMdlRead            Returns an MDL describing the specified byte range
CcMdlReadComplete    Frees the MDL
CcMdlWrite           Returns an MDL describing a specified byte range (possibly containing zeros)
CcMdlWriteComplete   Frees the MDL and marks the range as modified
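A rough sketch of how a server might use the MDL interface for a read follows; the variable names are illustrative. Note that the Cc MDL routines raise an exception rather than returning an error status, so real callers typically wrap these calls in structured exception handling.

    PMDL MdlChain = NULL;
    IO_STATUS_BLOCK IoStatus;

    // Ask the cache manager for an MDL chain describing the cached pages for
    // this byte range; the pages are brought into memory and referenced.
    CcMdlRead(FileObject,
              &ByteOffset,      // starting file offset
              Length,           // bytes to describe
              &MdlChain,
              &IoStatus);

    // The MDL chain gives the physical addresses of the cached data, which
    // can be handed directly to a network adapter or other DMA-capable device.
    // ... perform the transfer described by MdlChain ...

    // Release the MDL chain (and the cache manager's references) when done.
    CcMdlReadComplete(FileObject, MdlChain);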


You can examine MDL activity from the cache via the performance counters or system variables listed in Table 11-9.

Table 11-9. Variables for Examining MDL Activity from the Cache

Performance Counter (frequency)

System Variable (count)

Description

Cache: MDL Read Hits %

(CcMdlReadWait + CcMdlReadNoWait) / (CcMdlReadWait + CcMdlReadNoWait + CcMdlReadWaitMiss + CcMdlReadNoWaitMiss)

Percentage of MDL reads to parts of files that were in the cache (References to pages satisfied by an MDL read can still generate paging I/O.)

Cache: MDL Reads/Sec

CcMdlReadWait +CcMdlReadNoWait

Total MDL reads from the cache

Cache: Sync MDL Reads/Sec

CcMdlReadWait

Synchronous MDL reads from the cache

Cache: Async MDL Reads/Sec

CcMdlReadNoWait

Asynchronous MDL reads from the cache

