(c) Peter E. C. Dashwood - 2017
COBOL programmers are used to having file structures defined in their code and accessing them with COBOL I/O verbs. COBOL has very good facilities for doing this.
It is therefore "instinctive" to try to do the same with an RDBMS. The file definitions are replaced by CONNECT statements, "host variables" are defined for each field (column) on the database that will be processed, and Procedure Division verbs are replaced by embedded SQL commands. (The host variables act like a fleet of taxis, ferrying data elements to and from the RDBMS.) It works, but it is very difficult to maintain, because any change to the database structure may require a review of all the host variables and embedded SQL statements, and they are scattered throughout the code...
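As a sketch of what that traditional approach looks like (the table, column, and host-variable names here are invented purely for illustration), the host variables and the SQL sit right inside the application program:

```
      * Traditional embedded-SQL style: host variables mirror each
      * column, and the SQL is inline in the business logic.
       WORKING-STORAGE SECTION.
           EXEC SQL BEGIN DECLARE SECTION END-EXEC.
       01  HV-CUST-ID        PIC X(10).
       01  HV-CUST-NAME      PIC X(30).
       01  HV-CREDIT-LIMIT   PIC S9(7)V99 COMP-3.
           EXEC SQL END DECLARE SECTION END-EXEC.

       PROCEDURE DIVISION.
           EXEC SQL
               SELECT CUST_NAME, CREDIT_LIMIT
                 INTO :HV-CUST-NAME, :HV-CREDIT-LIMIT
                 FROM CUSTOMER
                WHERE CUST_ID = :HV-CUST-ID
           END-EXEC
```

Every program that needs customer data repeats some variation of this, which is exactly why a column change ripples everywhere.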
It can be addressed by two principles that are very well known to Object Oriented Programming (OOP) programmers: ENCAPSULATION and INTERFACE. Instead of accessing the RDBMS directly from your application, SEPARATE (encapsulate) the data required by the application into a normal COBOL record definition (01 level...). This is the data that the application will "see"; populating it is NOT an application problem. To populate it, you need an INTERFACE that will communicate with the code that accesses the database.
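A minimal sketch of the two pieces the application sees (names and field sizes are invented for illustration): an ordinary 01-level record for the data, and a small interface block carrying the key, the requested action, and a result code:

```
      * What the application "sees": a plain COBOL record...
       01  CUSTOMER-RECORD.
           05  CUST-ID          PIC X(10).
           05  CUST-NAME        PIC X(30).
           05  CREDIT-LIMIT     PIC S9(7)V99 COMP-3.

      * ...and an interface block for talking to the access code.
       01  DAL-INTERFACE-BLOCK.
           05  DAL-ACTION       PIC X(12).
           05  DAL-KEY          PIC X(10).
           05  DAL-RETURN-CODE  PIC 9(02).
```

Nothing here mentions SQL, tables, or host variables; that is the whole point of the encapsulation.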
For each "tableset" (a base table and any associated repeating-group tables) write an "access module" that implements all of the possible actions against the RDB (sequential access, random access, insert, update, and delete), using ESQL/COBOL (or LINQ/C#). (You can write a set of "base" code and "clone" it for each tableset.) This collection of modules constitutes a "Data Access Layer" (DAL). The LOGIC of the DAL NEVER CHANGES (the only difference between DAL components is the tableset they manage...), so the only time you need to maintain it is when there are structural changes on the database, and you can immediately locate the DAL module that needs changing. (If the change is extensive you might need to change the COBOL definition in the interface, but that is pretty routine for most COBOL people.)
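A skeleton of one such access module might look like this (program name, action codes, and record layouts are all hypothetical; the ESQL inside each paragraph is omitted):

```
       IDENTIFICATION DIVISION.
       PROGRAM-ID. DALCUST.
      * One DAL module per tableset; the dispatch logic below is the
      * same in every module, only the ESQL inside the paragraphs and
      * the record layout differ.
       LINKAGE SECTION.
       01  DAL-INTERFACE-BLOCK.
           05  DAL-ACTION       PIC X(12).
           05  DAL-KEY          PIC X(10).
           05  DAL-RETURN-CODE  PIC 9(02).
       01  CUSTOMER-RECORD      PIC X(100).
       PROCEDURE DIVISION USING DAL-INTERFACE-BLOCK CUSTOMER-RECORD.
           EVALUATE DAL-ACTION
               WHEN "GetRandom"  PERFORM GET-RANDOM
               WHEN "GetNext"    PERFORM GET-NEXT
               WHEN "Insert"     PERFORM INSERT-ROW
               WHEN "Update"     PERFORM UPDATE-ROW
               WHEN "Delete"     PERFORM DELETE-ROW
               WHEN OTHER        MOVE 99 TO DAL-RETURN-CODE
           END-EVALUATE
           GOBACK.
```

Cloning this skeleton for another tableset means changing the record layout and the ESQL inside the paragraphs; the dispatch logic stays identical.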
There is NO ESQL in your application any more; the "business logic" has been separated from the "mechanics" of managing data. (You could theoretically move to a different DBMS (not even Relational...) and it would have NO IMPACT on your application.) It also means that you can tweak and twiddle the DBMS without any impact or disruption to your application code. Here's a picture...:
There are HUGE advantages in separating the Business Logic from the mechanics of data retrieval. The Business Logic can simply say: "I need this customer data, based on this key, in order to process this sale." It CALLs/INVOKEs the DAL component that handles the Customer tableset, passing it an Interface block which contains the key, and a "GetRandom" action. (If you have written the DAL using OO COBOL or C#, you INVOKE the "GetRandom" method of the DAL object, passing it the Interface block with the required key in it.) The DAL object returns the COBOL record as defined for CUSTOMER. It builds it automagically from host variables it populated with ESQL (or direct data from LINQ). BUT, the process of constructing and deconstructing the COBOL interface is TRANSPARENT to the application, and it doesn't care about HOW the data was retrieved.
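From the Business Logic side, the whole conversation reduces to a few lines (a sketch, assuming the hypothetical DALCUST module and interface block shown above):

```
      * Business logic: ask the DAL for a customer by key.
      * No SQL, no host variables, no knowledge of the DBMS.
           MOVE "GetRandom"        TO DAL-ACTION
           MOVE WS-WANTED-CUST-ID  TO DAL-KEY
           CALL "DALCUST" USING DAL-INTERFACE-BLOCK CUSTOMER-RECORD
           IF DAL-RETURN-CODE = ZERO
               PERFORM PROCESS-SALE
           ELSE
               PERFORM CUSTOMER-NOT-FOUND
           END-IF
```

Swapping the DBMS underneath would change DALCUST, but not one line of this calling code.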
The DAL modules/objects are compiled code targeting specific tablesets on the database. They are typically less than 8KB; the most-used ones remain resident in memory, along with the Interface Block, which is shared storage in the application. Experience has shown that (especially when the DAL is coded as objects using LINQ) performance is every bit as good as it would be with native ISAM.
COBOL has always been "very good" at processing collections of data elements as "records". You can play to this advantage by using the DAL approach. RDBs are intended to allow processing of "collections" of elements from tables by means of joins and grouping, WITHOUT necessarily returning a full "record" every time. If you use the DAL approach, you can add specific data Views into the DAL objects and map each view to a COBOL data record in the Interface. With the old approach there is duplicated ESQL scattered throughout the programs; with the DAL there is NO duplicated ESQL in the entire system.
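Mapping a view works the same way as mapping a base table: one more record layout in the Interface, one more action in the DAL module. For example (view name, columns, and action code are invented for illustration):

```
      * A joined/grouped view, surfaced to the application as just
      * another COBOL record via a "GetSummary" DAL action.
       01  CUST-SALES-SUMMARY.
           05  CSS-CUST-ID      PIC X(10).
           05  CSS-CUST-NAME    PIC X(30).
           05  CSS-TOTAL-SALES  PIC S9(9)V99 COMP-3.

      * Inside the DAL module, GetSummary selects from the view:
       GET-SUMMARY.
           EXEC SQL
               SELECT CUST_ID, CUST_NAME, TOTAL_SALES
                 INTO :HV-CUST-ID, :HV-CUST-NAME, :HV-TOTAL-SALES
                 FROM V_CUST_SALES_SUMMARY
                WHERE CUST_ID = :HV-CUST-ID
           END-EXEC.
```

The join and grouping live in the view definition on the database, so the SQL exists in exactly one place.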
As you might expect, PRIMA has tools that can analyze your ISAM file definitions and create an optimized Relational Database (in 3NF) from the data definitions in your COPY books. But, possibly even more importantly, IT CAN INSTANTLY GENERATE ALL THE DAL OBJECTS YOU NEED TO ACCESS THE NEW DB!
(You can SEE the whole story about DAL objects in video 7. It includes how and where they fit in the system and shows COBOL code for a DAL object, generated by the PRIMA Migration Toolset.)
(For information about the "nitty gritty" of SQL, DDL, Normalization and other DB related matters, visit the Portal)