(c) Peter E. C. Dashwood - 2017
The Indexed Sequential Access Method (ISAM) was one of the early implementations of indexed data, initially on mainframe systems, where it was largely superseded by the Virtual Storage Access Method (VSAM) and, in particular, Keyed Sequential Data Sets (KSDS), a subset of VSAM. For economy, I will refer to Indexed Sequential datasets in this document as "ISAM", but this does not mean any specific implementation of the access method. We should also note in passing that indexed data does NOT have to be sequential, although it usually is... Burroughs Corporation (who became Unisys) implemented an Indexed RANDOM Access Method which allowed data to be appended immediately to the DATA portion of the file, with an appropriate index entry appended to the INDEX portion (but, obviously, not in sequence). This meant the operation could be done with blazing speed, but in order to "find" the new data, the sequential part of the index was first searched efficiently with a binary chop; when that failed, a serial search was made of the appended entries. When the file was CLOSED, the index was sorted and updated, ready for the next use.
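The Burroughs scheme just described can be sketched in a few lines. This is a toy reconstruction to show the idea, not Burroughs' actual implementation; the class and member names are invented for illustration:

```python
import bisect

class HybridIndex:
    """Sketch of the hybrid lookup described above: a sorted index
    searched with a binary chop, plus an unsorted "appended" tail
    that is only folded in when the file is closed. (Hypothetical
    names; not Burroughs' actual structures.)"""

    def __init__(self):
        self.sorted_keys = []   # index entries, in key order
        self.sorted_locs = []   # record locations, parallel to sorted_keys
        self.appended = []      # (key, location) pairs, in arrival order

    def insert(self, key, location):
        # Fast path: simply append; no re-sort on every write.
        self.appended.append((key, location))

    def find(self, key):
        # 1) Binary chop over the sorted part of the index.
        i = bisect.bisect_left(self.sorted_keys, key)
        if i < len(self.sorted_keys) and self.sorted_keys[i] == key:
            return self.sorted_locs[i]
        # 2) If that fails, serial search of the appended entries.
        for k, loc in self.appended:
            if k == key:
                return loc
        return None

    def close(self):
        # On CLOSE, sort the appended entries into the index,
        # ready for the next use.
        pairs = sorted(list(zip(self.sorted_keys, self.sorted_locs))
                       + self.appended)
        self.sorted_keys = [k for k, _ in pairs]
        self.sorted_locs = [loc for _, loc in pairs]
        self.appended = []
```

The point of the design is visible in the code: writes never pay for sorting, reads pay a small (and bounded, if CLOSE is frequent) serial-scan penalty, and the sort cost is deferred to a single batch at CLOSE time.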
Old-time programmers became attached to their ISAM files, which generally performed extremely well. There was bewilderment and consternation when the new DBMS started making inroads into the management of corporate data. But there were very sound reasons WHY the new DBMS were going to take over. Everything is NOT about pure efficiency in terms of access time, and as computers became more and more powerful and disks became faster and faster, the "efficiency gap" between ISAM and DBMS became smaller and smaller. Here are some of the "pros" and "cons":
As far as using DBMS goes, the World has mostly voted with its feet and most sites implement databases in place of indexed files. The benefits of DBMS are generally considered to outweigh the time to access (which is undetectable by Humans when using modern equipment and proper DB design). The question this then raises is why RELATIONAL databases have established de facto acceptance over the other DB models described.
The relational model proposed by Codd (and developed with Date) gave clarity and mathematical purity to the idea of data elements relating to each other, with the relationships connected through the use of Primary and Foreign Keys. The whole model immediately lends itself to a physical implementation using keyed tables and, since around 1983, software that implements some or all of the model has been commercially available. Although each "row" is really a number of data elements connected by a common key, the "rules" ensure that data is not duplicated (apart from the keys, which can appear as "Foreign Keys" in tables other than the one they are Primary Keys for...), and that "repeating groups" (COBOL OCCURS...) do not require fixed limits but can be extended dynamically as required.
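To make the Primary/Foreign Key idea concrete, here is a minimal sketch using two invented tables (the table and column names are hypothetical, chosen only for illustration):

```python
# A "customers" table: the Primary Key (customer_id) identifies each row.
customers = {
    101: {"name": "Ada"},
    102: {"name": "Grace"},
}

# An "orders" table: order_id is this table's Primary Key, while
# customer_id appears here as a Foreign Key. Only the KEY is repeated
# across tables; the customer's data itself is never duplicated.
orders = [
    {"order_id": 1, "customer_id": 101, "total": 25.0},
    {"order_id": 2, "customer_id": 101, "total": 40.0},
    {"order_id": 3, "customer_id": 102, "total": 15.0},
]

def orders_for(customer_id):
    # The relationship is resolved through the key: a customer may have
    # any number of orders - no fixed OCCURS-style limit is needed.
    return [o for o in orders if o["customer_id"] == customer_id]
```

Note how this contrasts with a repeating group inside a single record: adding a fourth order for a customer is just another row in the orders table, not a change to a fixed-size structure.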
(For information about the "nitty gritty" of SQL, DDL, Normalization and other DB related matters, visit the Portal)