
Microsoft Windows Performance

Disk I/O

Disk I/O (input/output) is one of the biggest performance bottlenecks in any computer. Efficient disk access is therefore critical to peak performance.

Microsoft® Windows™ file systems in the FAT family (FAT12/16 and FAT32) are based on file allocation tables: lookup tables that map files to the clusters (smallest allocation units) of disk space holding their data. Each table entry points to the next cluster in the file, so reading a complete file means traversing the table like a linked list, consulting one entry per cluster read to find the next. Deleting a file requires the same traversal, with each cluster's entry marked clear along the way. Both operations must be performed to move a file to a new location on the disk: read every cluster, write it somewhere else, and clear the original allocations. Performance therefore depends on file size - the bigger the file, the slower these operations. (NTFS, and the planned WinFS built on top of it, replace the allocation table with a Master File Table, but remain prone to the same fragmentation problems described below.)
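
To make the cost concrete, here is a minimal sketch of a FAT-style cluster chain in Python. The table contents, directory, and file name are invented for illustration; this is a toy model, not a real FAT driver.

    # Toy model of a FAT-style cluster chain (illustrative only).
    # fat[i] holds the number of the next cluster in the file, or None at the end.
    FREE, END = -1, None

    fat = {2: 9, 9: 3, 3: END}       # a three-cluster file starting at cluster 2
    directory = {"report.doc": 2}    # the directory stores only the first cluster

    def read_file(name):
        """Reading touches every cluster AND a table entry per cluster: O(file size)."""
        cluster = directory[name]
        while cluster is not END:
            print("read cluster", cluster)
            cluster = fat[cluster]   # table lookup to find the next cluster

    def delete_file(name):
        """Deleting must also walk the whole chain to mark each cluster free."""
        cluster = directory.pop(name)
        while cluster is not END:
            next_cluster = fat[cluster]
            fat[cluster] = FREE      # mark this cluster as available
            cluster = next_cluster

    read_file("report.doc")
    delete_file("report.doc")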

One side effect of this implementation is degradation of performance over time as files are deleted and added: the clusters of many files become scattered across different locations on disk. Reading a complete fragmented file still means traversing the file allocation table, but it cannot be done as quickly, because the entries are scattered throughout the table instead of one after another. A table reads fastest when its records are sequential (think of the hard disk's head jumping around, making the familiar clicking sound, instead of reading continuously). The solution is disk defragmentation, a slow process required periodically on Microsoft Windows operating systems to recover performance.
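
A small illustration of why scattering hurts: count the head "seeks" a cluster chain forces. The cluster numbers below are made up; a contiguous run needs no jumps, while a fragmented one jumps for nearly every cluster.

    # Toy illustration: how many head seeks a cluster chain forces.
    # A contiguous file needs none; a fragmented one needs one per gap.
    def seeks_needed(chain):
        """Count transitions where the next cluster is not physically adjacent."""
        return sum(1 for a, b in zip(chain, chain[1:]) if b != a + 1)

    contiguous = [10, 11, 12, 13, 14]
    fragmented = [10, 57, 11, 90, 12]

    print(seeks_needed(contiguous))  # 0 - the head reads straight through
    print(seeks_needed(fragmented))  # 4 - the head jumps for almost every cluster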

There is a wide variety of file systems available for UNIX® and Linux® operating systems. Traditionally, these follow a different design built around inodes (index nodes). A directory entry maps a file's name to an inode, and the inode in turn lists the disk blocks holding the file's data, so a file's blocks can be located without walking a chain through an allocation table. To delete a file, only the single directory entry needs to be removed; the inode's blocks are then returned to the free list in one step. To change a file's directory (moving a file, from a user's perspective), that single entry is removed from one directory and inserted into another - the data blocks themselves never move. The benefit of this design is that the cost of delete and move operations does not grow with file size. These file systems also allocate a file's blocks close together on disk in the first place, so fragmentation accumulates far more slowly and disk defragmentation offers only very small improvements.
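
A matching sketch of the inode-style design, again as a toy Python model. Real UNIX file systems add indirect blocks, permissions, free-block bitmaps, and so on; the inode numbers and paths here are invented.

    # Toy model of an inode-style design (simplified; not a real file system).
    inodes = {7: {"blocks": [10, 57, 11, 90, 12]}}  # inode 7 lists the file's blocks
    directories = {
        "/home/alice": {"report.doc": 7},   # directory entry: name -> inode number
        "/tmp": {},
    }

    def move(src_dir, dst_dir, name):
        """Moving only rewrites directory entries - cost is fixed, whatever the file size."""
        directories[dst_dir][name] = directories[src_dir].pop(name)

    def delete(dir_path, name):
        """Deleting drops one directory entry and releases the inode's block list
        wholesale (a real system would also mark those blocks free in its bitmap);
        no per-cluster chain walk through an allocation table is required."""
        inode = directories[dir_path].pop(name)
        inodes.pop(inode)

    move("/home/alice", "/tmp", "report.doc")
    delete("/tmp", "report.doc")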

Microsoft Windows users are familiar with the delays these file operations cause (see "[User Interface]"). Deleting large files, and operations like emptying the Recycle Bin, are slow and force the user to wait. UNIX operating systems and their variants avoid these disk performance issues through the more efficient design described above. Most UNIX systems also perform as much disk work as possible asynchronously, in a separate thread of execution (i.e. "in the background"). The end result is less waiting for other processing, such as drawing the user interface and handling user input.
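
The "background" idea can be sketched with an ordinary thread. The slow_delete function below is a hypothetical stand-in for a lengthy disk operation, not anything the operating system actually exposes.

    # Toy illustration of running a slow file operation "in the background"
    # so the foreground (e.g. the user interface) never blocks on it.
    import threading, time

    def slow_delete(name):
        """Stand-in for a lengthy disk operation such as emptying a recycle bin."""
        time.sleep(2)                # pretend this is a long chain traversal
        print(name, "deleted")

    # Hand the work to a background thread and return to the user immediately.
    worker = threading.Thread(target=slow_delete, args=("big_file.iso",))
    worker.start()
    print("UI still responsive while the delete runs...")
    worker.join()                    # only for this demo; a real UI would not block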

Comparisons

One elementary comparison found that Oracle 9i performs much better on Red Hat Linux than on Windows 2000, with average throughput 38.4% higher on Linux.

In May 2003, Hewlett-Packard's Superdome computer running Microsoft®'s Windows™ Server 2003 operating system scored roughly 658,000 transactions per minute on the Transaction Processing Performance Council's TPC-C benchmark - the first time a Windows server had reached the top spot. Just two weeks later, IBM's p690 Turbo UNIX server scored roughly 681,000, removing Microsoft from first place. Holding the lead for only two weeks is bad enough for public relations, but the Superdome was toppled by a machine with half as many processors - 32 against its 64. This is a genuinely poor showing for Microsoft Windows' performance.
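
A quick back-of-the-envelope check using the rounded scores above makes the gap plain (throughput per processor is an informal comparison, not an official TPC metric):

    # Rough per-processor throughput from the rounded scores quoted above.
    superdome = 658_000 / 64   # Windows Server 2003 on 64 processors
    p690      = 681_000 / 32   # IBM UNIX server on 32 processors

    print(f"Superdome: {superdome:,.0f} tpmC per processor")  # ~10,281
    print(f"p690:      {p690:,.0f} tpmC per processor")       # ~21,281
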
Copyright © 2004-2007 Matthew Schwartz