How an Operating System’s File System Works

File systems are an integral part of any operating system with the capacity for long-term storage. There are two distinct parts of a file system: the mechanism for storing files and the directory structure into which they are organized. In modern operating systems, where several users can access the same files simultaneously, it has also become necessary to implement features such as access control and different forms of file protection.

How a File System Works

A file is a collection of binary data. A file could represent a program, a document, or, in some cases, part of the file system itself. In modern computing, it is quite common for several different storage devices to be attached to the same computer. A common data structure such as a file system allows the computer to access many different storage devices in the same way; for example, when you look at the contents of a hard drive or a CD, you view it through the same interface even though they are completely different media with data mapped onto them in completely different ways. Files can have very different data structures but can all be accessed by the same methods built into the file system. The arrangement of data within the file is then decided by the program creating it. The file system also stores several attributes for the files within it.
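
As a rough illustration, the sketch below uses Python's standard library to read back a few of the attributes a file system records for a file; the file name is only a placeholder.

    import os

    # Read back some of the attributes the file system keeps for a file.
    info = os.stat("example.txt")               # placeholder file name
    print("size in bytes:", info.st_size)
    print("last modified:", info.st_mtime)      # seconds since the epoch
    print("permission bits:", oct(info.st_mode))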


All files have a name by which the user can access them. In most modern file systems, the name consists of three parts: its unique name, a period, and an extension. The operating system maintains a list of file extension associations. For example, the file ‘bob.jpg’ is uniquely identified by the first word ‘bob’; the extension ‘jpg’ indicates a JPEG image file. The file extension allows the operating system to decide what to do with the file if someone tries to open it. Should a user try to access ‘bob.jpg,’ it would most likely be opened in whatever the system’s default image viewer is.
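
A small sketch of how a name splits into those parts, using Python's standard library; ‘bob.jpg’ comes straight from the example above.

    import os.path

    # Split the file name into its unique name and its extension.
    name, extension = os.path.splitext("bob.jpg")
    print(name)        # 'bob'
    print(extension)   # '.jpg' -- the hint the OS uses to pick a default program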

The system also stores the location of a file. In some file systems, files can only be stored as one contiguous block. This simplifies storage and access to the file, as the system then only needs to know where the file begins on the disk and how large it is. It does, however, lead to complications if the file is to be extended or removed, as there may not be enough space available to fit the larger version of the file. Most modern file systems overcome this problem by using linked file allocation. This allows the file to be stored in any number of segments. The file system then has to store where every block of the file is and how large it is. This greatly simplifies file space allocation but is slower than contiguous allocation, as the file can be spread out all over the disk. Modern operating systems overcome this flaw by providing a disk defragmenter, a utility that rearranges all the files on the disk so that each is stored in contiguous blocks.
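
The toy model below sketches the idea of linked allocation with a FAT-style table; the block numbers are invented purely for illustration.

    # Toy model of linked (FAT-style) allocation: each table entry points to
    # the next block of the file, and None marks the final block.
    allocation_table = {4: 7, 7: 2, 2: None}   # file stored in blocks 4 -> 7 -> 2

    def file_blocks(first_block, table):
        # Walk the chain of blocks that make up one file.
        block = first_block
        while block is not None:
            yield block
            block = table[block]

    print(list(file_blocks(4, allocation_table)))   # [4, 7, 2]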

Information about a file’s protection is also integrated into the file system. Protection can range from the simple systems implemented in the FAT system of early Windows, where files could be marked as read-only or hidden, to the more secure systems implemented in NTFS, where the file system administrator can set up separate read and write access rights for different users or user groups. Although file protection adds a great deal of complexity and potential difficulties, it is essential in an environment where many different computers or users can access the same drives via a network or a time-shared system such as raptor.
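
As a rough, portable approximation of these checks, the sketch below asks whether the current user may read or write a file and then marks it read-only; the path is a placeholder, and real NTFS-style per-user permissions need OS-specific tools.

    import os
    import stat

    path = "report.txt"                            # placeholder path

    # Ask whether the current user may read or write the file.
    print("readable:", os.access(path, os.R_OK))
    print("writable:", os.access(path, os.W_OK))

    # Mark the file read-only, much like FAT's read-only attribute;
    # per-user NTFS access rights are managed through OS-specific tools.
    os.chmod(path, stat.S_IREAD)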

Some file systems also store data about which user created a file and when they created it. Although this is not essential to the running of the file system, it is useful to the system’s users. For a file system to function properly, it needs several defined operations for creating, opening, and editing a file. Almost all file systems provide the same basic set of methods for manipulating files.
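
A short sketch of reading that ownership and timestamp information back with Python's standard library; the path is a placeholder, and the exact meaning of each field depends on the operating system.

    import os
    import time

    info = os.stat("notes.txt")                    # placeholder path
    print("owner user id:", info.st_uid)           # numeric owner, meaningful on Unix
    # st_ctime is the creation time on Windows and the last metadata change on Unix.
    print("created/changed:", time.ctime(info.st_ctime))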

A file system must be able to create a file. To do this, there must be enough space left on the drive to fit the file. There must also be no other file with the same name in the directory where it is to be placed. Once the file is created, the system will make a record of all the attributes noted above.
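
A minimal sketch of that sequence in Python, assuming a placeholder file name and an arbitrary size check; opening with ‘x’ raises an error if a file with the same name already exists in the directory.

    import shutil

    # Check that the drive has room for the new file (the threshold here is
    # arbitrary), then create it exclusively: 'x' mode raises FileExistsError
    # if a file with the same name is already present.
    free_bytes = shutil.disk_usage(".").free
    if free_bytes > 1024:
        with open("newfile.txt", "x") as f:        # placeholder file name
            f.write("initial contents\n")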

Once a file has been created, we may need to edit it. This may be simply appending some data to its end or removing or replacing data already stored within it. When doing this, the system keeps a write pointer marking where the next write operation to the file should take place.
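
The sketch below shows the idea of the write pointer using Python's standard library; the file name and the appended data are placeholders.

    import os

    # Open an existing file for update and move the write pointer to the end
    # before appending new data.
    with open("log.txt", "rb+") as f:              # placeholder file name
        f.seek(0, os.SEEK_END)                     # position the write pointer at the end
        print("write pointer at byte:", f.tell())
        f.write(b"one more line\n")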

For a file to be useful, it must, of course, be readable. To do this, all you need to know is the name and path of the file. From this, the file system can ascertain where on the drive the file is stored. While reading a file, the system keeps a read pointer. This stores which part of the file is to be read next.
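
A small example of the read pointer advancing as data is read, again with a placeholder file name.

    # Read the first few bytes of a file; the read pointer advances as data is read.
    with open("log.txt", "rb") as f:               # placeholder file name
        chunk = f.read(16)
        print("bytes read:", chunk)
        print("read pointer now at byte:", f.tell())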

In some cases, it is not possible to read all of the file into memory. File systems also allow you to reposition the read pointer within a file. To perform this operation, the system needs to know how far into the file you want the read pointer to jump. An example of where this would be useful is a database system. When a query is made on the database, it is obviously inefficient to read the whole file up to the required data. Instead, the application managing the database would determine where the required bit of data is in the file and jump to it. This operation is often known as a file seek.
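
A sketch of a file seek in Python, assuming fixed-length records whose size is invented for illustration.

    RECORD_SIZE = 64                               # assumed fixed-length records

    def read_record(path, record_number):
        # Jump straight to one record instead of reading the whole file up to it.
        with open(path, "rb") as f:
            f.seek(record_number * RECORD_SIZE)    # the file seek operation
            return f.read(RECORD_SIZE)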

File systems also allow you to delete files. To delete a file, the system removes its entry from the directory structure and adds all the space the file previously occupied to the free space list (or whatever other free space management system it uses). To do this, it needs to know the name and path of the file.
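
A minimal example of the delete operation from an application's point of view; the file name is a placeholder, and the free-space bookkeeping itself happens inside the file system.

    import os

    path = "old_report.txt"                        # placeholder file name
    if os.path.exists(path):
        # Removes the directory entry; the blocks it used become free space.
        os.remove(path)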

These are the most basic operations required by a file system to function properly. They are present in all modern computer file systems, but the way they function may vary. For example, performing the delete file operation in a modern file system like NTFS with file protection built into it would be more complicated than the same operation in an older file system like FAT. Both systems would first check to see whether the file was in use before continuing; NTFS would then have to check whether the user currently deleting the file has permission to do so. Some file systems also allow multiple people to open the same file simultaneously and decide whether users have permission to write a file back to the disk if other users currently have it open. If two users have read and write permission to a file, should one be allowed to overwrite it while the other still has it open? Or if one user has read-write permission and another only has read permission on a file, should the user with write permission be allowed to overwrite it if there’s no chance of the other user also trying to do so?
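
As a loose analogue of that extra permission check, the sketch below only attempts the deletion if the current user has write access; the path is a placeholder, and real NTFS access-control lists are far richer than this.

    import os

    path = "shared_report.txt"                     # placeholder file name
    # Only attempt the deletion if the current user has write access to the
    # file and to the directory that holds its entry.
    if os.access(path, os.W_OK) and os.access(os.path.dirname(path) or ".", os.W_OK):
        os.remove(path)
    else:
        print("permission denied")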

Different file systems also support different access methods. The simplest method of accessing information in a file is sequential access, where the information in a file is accessed from the beginning, one record at a time. The position in a file can be rewound or forwarded several records, or reset to the beginning of the file. This access method is based on file storage systems for tape drives, but it works as well on sequential access devices (like modern DAT tape drives) as it does on random-access ones (like hard drives). Although this method is straightforward and ideally suited for certain tasks such as playing media, it is very inefficient for more complex tasks such as database management. A more modern approach that better facilitates reading tasks that aren’t likely to be sequential is direct access. Direct access allows records to be read or written in any order the application requires. This method of allowing any part of the file to be read in any order is better suited to modern hard drives, as they too allow any part of the drive to be read in any order with little reduction in transfer rate. Direct access is better suited to most applications than sequential access, as it is designed around the most common storage medium today rather than one that isn’t used very much anymore except for large offline backups. Given the way direct access works, it is also possible to build other access methods on top of it, such as sequential access or an index of all the file’s records to speed up finding data in a file.
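
The two access methods can be sketched side by side in Python, assuming fixed-length records of an invented size.

    RECORD_SIZE = 32                               # assumed fixed-length records

    # Sequential access: walk the file from the start, one record at a time.
    def read_sequentially(path):
        with open(path, "rb") as f:
            while record := f.read(RECORD_SIZE):
                yield record

    # Direct access: jump straight to any record, in any order.
    def read_directly(path, record_number):
        with open(path, "rb") as f:
            f.seek(record_number * RECORD_SIZE)
            return f.read(RECORD_SIZE)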
