PRIMA's unique approach to COBOL modernization


PRIMA's unique Modernization approach

Here's the fundamental dilemma:

We want all the good stuff that the New Technology brings, but we need to keep our business running while we get it, and we'd really like our existing investment to run alongside the new development on the same platform (.Net), so that we are supporting and running only ONE platform.

PRIMA's approach to modernization is unique because we recognize this dilemma and accommodate it. We know that you can't "modernize" by compiling your COBOL to Java, or to COBOL for .Net; that just perpetuates the problem.

Today's networked applications are built from objects and layers of software, so the Object Oriented programming paradigm is ideal for them. Languages like Java, C#, and VB.Net are designed for use in the network environment, support Object Orientation natively, are easy to learn, and are free. Procedural (old) COBOL is expensive and kludgy in this environment.

BUT, that doesn't mean you CAN'T use COBOL! Our tools allow you to keep using OO COBOL into the future AND/OR mix your code with modern languages as you prefer. This removes the pressure and lets you migrate the codebase at your own pace.

Here are the points which underpin PRIMA's approach to modernizing COBOL:

* - Our Migration process is DATA DRIVEN.

    Our tools analyze the existing data structures which are used by your COBOL legacy and create a relational database that includes all of these elements in a Normalized, optimized form. Objects are created which manage these new data structures and New Technology can use these objects to get "COBOL views" of data. The existing code is automatically transformed so it uses these objects, but no change is made to the existing logic.

(In effect, all of the indexed file IO in your existing code is refactored into calls to a Data Access Layer.)

The result is that the legacy continues to run exactly as it always has, but now the data it processes is being sourced from the new Relational Database instead of indexed files. And the New Technology shares the same database, so Old and New are working from the same data repository.
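To make the idea of a "COBOL view" concrete, here is a minimal, purely illustrative C# sketch of the kind of Data Access Layer object that sits between the code and the new Relational Database. The names (CustomerRecord, CustomerDal, the Customer table, the ADO.NET provider) are assumptions invented for the example, not PRIMA's actual generated code; the point is that the transformed COBOL and your new .Net code both go through an object like this instead of touching an indexed file directly.

    using Microsoft.Data.SqlClient;   // assumed ADO.NET provider for the example

    // Flat "COBOL view" of a customer, mirroring the original record layout.
    public sealed class CustomerRecord
    {
        public string CustomerId { get; set; } = "";
        public string Name { get; set; } = "";
        public decimal Balance { get; set; }
    }

    // Hypothetical DAL object: the only code that knows about the database.
    public sealed class CustomerDal
    {
        private readonly string _connectionString;
        public CustomerDal(string connectionString) => _connectionString = connectionString;

        // Equivalent of a keyed READ against the old indexed file.
        public CustomerRecord? ReadByKey(string customerId)
        {
            using var conn = new SqlConnection(_connectionString);
            conn.Open();
            using var cmd = new SqlCommand(
                "SELECT CustomerId, Name, Balance FROM Customer WHERE CustomerId = @id", conn);
            cmd.Parameters.AddWithValue("@id", customerId);
            using var reader = cmd.ExecuteReader();
            if (!reader.Read()) return null;   // plays the role of INVALID KEY
            return new CustomerRecord
            {
                CustomerId = reader.GetString(0),
                Name = reader.GetString(1),
                Balance = reader.GetDecimal(2)
            };
        }
    }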

* - The conversion of Legacy data and code is FULLY AUTOMATED.

    You set up a Migration environment (sandbox) in about 10 minutes, then let the Tools do all the work. Your existing data structures are analyzed, your new RDB is created and loaded with the data from your existing indexed files, a Data Access Layer (DAL) comprised of objects that manage that RDB is generated, and your existing codebase is transformed to use the DAL objects. Everything continues to work as it always has, but now you have a repository that can be shared between Old and New Technology. This takes all the pressure off having to Migrate. Your Legacy can continue to run for as long as it needs to, feeding results to your New Technology until you gradually phase everything to the new environment.

* - Legacy runs alongside New Technology development, and shares data with it directly.

    There are no "downstream feeds" or "overnight Batch synchronization" runs. Your Legacy becomes "object aware" and can operate in the same Object Oriented environment as your new developments. That means there is no rush to get rid of it!

What normally happens is that maintenance on the old codebase stops, and changes are applied using New Technology wherever that is possible and economically sensible. Eventually, all of your COBOL legacy is going to be replaced, but you can control the pace at which this happens and prioritize it in line with your maintenance priorities. There is no need to convert your existing COBOL to Java or C#, and there are very good reasons why you should NOT do that (unless you refactor it all into objects at the same time).

Because the Old and New codebases are sharing the same data resource, you can decide which base to amend for a given change requirement, based on your own criteria and priorities. (How urgent is the change? How much work is needed to implement it? etc.)

Because there is separation between the business logic and the data access, any code from either codebase that requires a change to the data structure can be accommodated and a new DAL object generated. The programs which need the new data fields will see and use them; the programs which are not affected will see but not use them. ONLY the programs affected by the change need to be recompiled; everything else, in both codebases, runs as it always has, and both codebases use the new DAL object. (If you are not familiar with the concept of using a Data Access Layer (DAL), please review the discussion of it in the "RDB and SQL" area of this site.)
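As a hedged illustration of that point, reusing the hypothetical CustomerRecord from the earlier sketch: suppose a business change adds a loyalty-tier field (an invented field name, for the example only). The regenerated DAL record carries the new field, the programs that need it use it, and a program that never references it is left completely alone.

    // Regenerated DAL record after the hypothetical schema change: a LoyaltyTier
    // column has been added to the Customer table.
    public sealed class CustomerRecord
    {
        public string CustomerId { get; set; } = "";
        public string Name { get; set; } = "";
        public decimal Balance { get; set; }
        public string LoyaltyTier { get; set; } = "";   // the new field
    }

    // A program unaffected by the change: it "sees" the new field on the record
    // it is handed but never uses it, so its source does not change at all.
    public static class StatementPrinter
    {
        public static string Format(CustomerRecord c) =>
            $"{c.CustomerId}  {c.Name}  {c.Balance:0.00}";
    }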

Furthermore, in particularly "tricky" exceptional cases, you can create a database View and clone it into a new DAL object. Code that needs that view simply invokes that object. The point is that separation is maintained between your Business logic and your Data access for both the Old and New codebases.
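For instance, and again purely as an illustration with an assumed view name (vw_OverdueAccounts), a DAL object generated over a database view is just another object to the code that calls it:

    using System.Collections.Generic;
    using Microsoft.Data.SqlClient;   // assumed provider, as in the earlier sketch

    // Hypothetical DAL object generated over a database view rather than a table.
    public sealed class OverdueAccountsDal
    {
        private readonly string _connectionString;
        public OverdueAccountsDal(string connectionString) => _connectionString = connectionString;

        // Streams the rows of the view; callers neither know nor care that the
        // data comes from a view joining several normalized tables.
        public IEnumerable<(string CustomerId, decimal AmountOverdue)> ReadAll()
        {
            using var conn = new SqlConnection(_connectionString);
            conn.Open();
            using var cmd = new SqlCommand(
                "SELECT CustomerId, AmountOverdue FROM vw_OverdueAccounts", conn);
            using var reader = cmd.ExecuteReader();
            while (reader.Read())
                yield return (reader.GetString(0), reader.GetDecimal(1));
        }
    }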

Note that the New Technology doesn't have to be Java or C# or VB.Net; it COULD be OO COBOL, although there are other reasons why you might want to move on from COBOL altogether.

* - Our solution recognizes and caters for the difference in paradigm between procedural Legacy COBOL and Object Oriented Network objects. We automatically refactor all of your file access into a Data Access Layer comprised of objects. (You can have them written in modern OO COBOL or in C# using LINQ; the source language is irrelevant because this code will NEVER BE MAINTAINED. If it needs to change, you re-generate it with a mouse click...)

    Migration is not about simply converting code. Generating Java or C# from Legacy COBOL does NOT solve the problem; it perpetuates it!

(You need to review and refurbish your existing COBOL code into Classes; this is called "refactoring". It is difficult to do with monolithic, integrated programs that don't separate functions like "Presentation to the User", "Data Access", and "Business Logic", and legacy COBOL programs generally fall into this category. The PRIMA solution AUTOMATICALLY separates out and refactors the Data Access from the Business Logic, and legacy batch programs generally have little or no User Interfacing to worry about. We also enable the legacy code to recognize objects (so it can use the DAL), and there is a "spin off" from this: objects written in other modern languages can now also be recognized by the Legacy code.)
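To show what that separation looks like in practice, here is a small pattern sketch with invented names, reusing the hypothetical CustomerRecord from earlier. An in-memory list stands in for the generated DAL, and the data access behind the abstraction happens to use LINQ, as mentioned above; the business logic knows nothing about files or SQL.

    using System.Collections.Generic;
    using System.Linq;

    // The business logic depends on this abstraction, not on files or SQL.
    public interface ICustomerStore
    {
        IEnumerable<CustomerRecord> CustomersOverLimit(decimal creditLimit);
    }

    // Business Logic layer: pure rules, easy to test, no data access code.
    public sealed class CreditController
    {
        private readonly ICustomerStore _store;
        public CreditController(ICustomerStore store) => _store = store;

        public IReadOnlyList<string> CustomersToFlag(decimal creditLimit) =>
            _store.CustomersOverLimit(creditLimit)
                  .Select(c => c.CustomerId)
                  .ToList();
    }

    // Data Access layer: a LINQ query over an in-memory list stands in here for
    // the generated DAL object that would query the new relational database.
    public sealed class InMemoryCustomerStore : ICustomerStore
    {
        private readonly List<CustomerRecord> _customers;
        public InMemoryCustomerStore(List<CustomerRecord> customers) => _customers = customers;

        public IEnumerable<CustomerRecord> CustomersOverLimit(decimal creditLimit) =>
            _customers.Where(c => c.Balance > creditLimit);
    }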

* - There is NO middleware with our solution. You are NOT locked in.

    We change your existing codebase to support objects, in exactly the same way as if you had done it yourself. We refactor the I/O verbs into object invokes and we add all the necessary "housekeeping". (Code that has been Transformed in this way is called "Werewolf code" to distinguish it from code that has not been Transformed; see our videos for details.)

Everything runs without any additional PRIMA middleware or support software. It is YOUR source and you have control of it. You can retain the Toolset to instantly generate new Data Access Layer objects or you can use our generated source to clone your own solutions... we do not lock you in to our solution. Neither do we constrain the way you use our tools or the code produced by them.
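As a rough picture of what "refactoring the I/O verbs into object invokes" amounts to, here is a hypothetical DAL interface, reusing the CustomerRecord sketch from earlier. It is a sketch of the pattern only, not PRIMA's actual generated signatures; the idea is that each operation lines up with a COBOL file verb, and a Transformed program invokes these operations instead of executing the verbs against an indexed file.

    // Hypothetical interface: each operation corresponds to one COBOL file verb.
    // A Transformed ("Werewolf") program invokes these instead of the verbs.
    public interface ICustomerFileDal
    {
        void Open();                                    // OPEN I-O CUSTOMER-FILE
        CustomerRecord? ReadByKey(string customerId);   // READ ... KEY IS CUST-ID
        CustomerRecord? ReadNext();                     // READ ... NEXT RECORD
        void Write(CustomerRecord record);              // WRITE CUSTOMER-REC
        void Rewrite(CustomerRecord record);            // REWRITE CUSTOMER-REC
        void Delete(string customerId);                 // DELETE CUSTOMER-FILE RECORD
        void Close();                                   // CLOSE CUSTOMER-FILE
    }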

SUMMARY OF THE ADVANTAGES OF THIS APPROACH:

This is NOT a full list:

1. There is no pressure to phase out legacy code. Instead you can let it be decommissioned in a controlled and prioritized way. Legacy functions that prove to be useful are easily wrapped as objects and thus available to Old and New codebases.

2. Your programming resources are not tied up for months writing Conversion software or manually converting data and code. The Tools do all of this and can achieve in days or weeks what would take months or years of manual effort. They don't get bored with tedious repetitive work that requires long periods of concentration and attention to detail, and they generally don't make mistakes.

3. The Tools have expert knowledge and will create the kind of database you would normally need an experienced DBA to design. PRIMA is the ONLY company we know of in this market that actually provides a properly optimized database, usually in Third Normal Form, with repeating groups broken out, COBOL group fields handled, COBOL datatypes properly matched to DB datatypes, date fields detected and typed correctly, REDEFINES recognized, and a Data Access Layer of objects generated to match the database.
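As a purely illustrative example of that kind of mapping (invented field names, not output from the Tools): a COBOL record holding a packed-decimal balance, a date stored as PIC 9(8), and a repeating group might come out as a parent table plus a child table with proper database types, reflected in the DAL something like this:

    using System;
    using System.Collections.Generic;

    //   01 CUSTOMER-REC.
    //      05 CUST-ID       PIC X(8).
    //      05 CUST-NAME     PIC X(30).
    //      05 CUST-BALANCE  PIC S9(7)V99 COMP-3.
    //      05 CUST-ORDERS   OCCURS 10 TIMES.
    //         10 ORDER-NO   PIC 9(6).
    //         10 ORDER-DATE PIC 9(8).     *> held as yyyymmdd
    //
    // The repeating group becomes rows in a child table, and each COBOL field
    // gets a proper database type instead of a character rendering.

    public sealed class Customer
    {
        public string CustomerId { get; set; } = "";               // CUST-ID      -> CHAR(8)
        public string Name { get; set; } = "";                     // CUST-NAME    -> VARCHAR(30)
        public decimal Balance { get; set; }                       // COMP-3       -> DECIMAL(9,2)
        public List<CustomerOrder> Orders { get; set; } = new();   // OCCURS       -> child rows
    }

    public sealed class CustomerOrder
    {
        public string CustomerId { get; set; } = "";               // foreign key to Customer
        public int OrderNumber { get; set; }                       // ORDER-NO     -> INT
        public DateTime OrderDate { get; set; }                    // PIC 9(8) date -> DATE column
    }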

Bottom line: We analyze the data your company is running on, create an optimized Relational Database to accommodate it, load that DB with the information from your current files, then change your existing COBOL codebase so that it can access the new database. Your COBOL is now "object aware" and can share functionality with your New Technology.

There are more details about our approach, including videos showing the tools in action, on the Migration Toolset pages of the site.
