(c) Peter E. C. Dashwood - 2017
Why are "objects" better for Network access?
Only a few factors determine the performance of code running on a processor:
1. Queue wait time: How long is your code sitting in a queue waiting for the OS to load and execute it?
2. Load time: How long does it take the OS to allocate a memory space for it and load it into that space?
Size is critical here; small blocks of code are more easily accommodated and load much faster than large blocks of code.
3. CPU Time: How much processor time does your code need in order to complete?
We can see at once that, in the event-driven paradigm, there is NO queue wait time; hardware interrupts are trapped and vectored immediately to the code that services them...
Load time is still a factor, but the OS tries to keep "often used" routines in memory. This is easier if they are "small"...
CPU time is segmented by the OS according to a number of architecture- and OS-dependent algorithms, but it is generally true that when you need to do file access, you will relinquish the CPU. (The OS saves the state of things and restores it all when the file access completes and it comes round to your turn on the processor again.) It does this because input/output (IO) operations take millions of times longer than normal instruction execution. Instructions typically take nanoseconds; IO typically takes milliseconds. If the CPU sat and waited while you got your data, it would seem (to the CPU) as if Ice Ages came and went and glaciers melted before your data was safely in memory.
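The scale of that gap is easy to check with rough arithmetic. A sketch (the figures below are illustrative round numbers, not measurements of any particular machine):

```python
# Illustrative assumption: a simple instruction takes ~1 nanosecond,
# while one disk IO completes in ~5 milliseconds (5,000,000 ns).
INSTRUCTION_TIME_NS = 1
IO_TIME_NS = 5_000_000

# How many instructions could have run in the time one IO takes?
wasted_instructions = IO_TIME_NS // INSTRUCTION_TIME_NS
print(f"One IO ~= {wasted_instructions:,} instruction times")
```

Millions of instruction opportunities per IO: that is why the OS takes the processor away from a waiting program and gives it to other work.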
BOTTOM LINE observation: Things go better when code is SMALL. It is better still if it doesn't have to do IO...
Of course, only a very limited number of computer applications need no IO at all. But traditional procedural COBOL programs have generally included their own IO code. This makes them larger than if they "outsourced" this service to a (maybe specialized) piece of code that does it for them: a Data Access Layer (DAL). There is a full discussion about Data Access Layers and why you would use them, here.
Objects implemented from OO Classes come ready-equipped with an ideal mechanism for "outsourcing" the services they may need. It is called the INTERFACE, and it is introduced when we look at a general overview of OOP for COBOL people later in these pages, and in detail in the OOP Tutorial (Objects 101). For now, it is enough to know that using Objects helps us get smaller code in an event-driven environment than traditional procedural code does.
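The "outsourcing" idea can be sketched in a few lines. This is a language-neutral illustration in Python (the names `DataAccess`, `CustomerDAL`, and `CustomerService` are hypothetical, invented for this sketch): the business object talks only to an interface and contains no IO code of its own.

```python
from typing import Protocol

class DataAccess(Protocol):
    """The INTERFACE: the only thing the business object knows about IO."""
    def read_customer(self, customer_id: str) -> dict: ...

class CustomerDAL:
    """A (maybe specialized) Data Access Layer. Here it just returns
    canned data; a real one would do the actual file or database IO."""
    def read_customer(self, customer_id: str) -> dict:
        return {"id": customer_id, "name": "A. Customer"}

class CustomerService:
    """The business object stays small: no file handling, no SQL,
    just a reference to whatever implements the interface."""
    def __init__(self, dal: DataAccess) -> None:
        self.dal = dal

    def greeting(self, customer_id: str) -> str:
        customer = self.dal.read_customer(customer_id)
        return f"Hello, {customer['name']}"

service = CustomerService(CustomerDAL())
print(service.greeting("0042"))
```

Because `CustomerService` depends only on the interface, the DAL behind it can be swapped (local file, database, or a service across the network) without the business object growing by a single line.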
To summarize:
1. For event-driven environments, SMALL is good when it comes to code.
2. Objects can be made small more easily than procedural code.
3. Objects can be used to level load across a network.
4. Objects can be created (and destroyed) much more easily than procedural code.
5. Remote object references can be used to reference data and code across the network; procedural code cannot innately do that.
NOTE: ALL of the points above CAN be done with procedural code; it is just much more clumsy and difficult than using Classes and Objects.
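Point 5 is worth a small sketch. The names here (`RemoteProxy`, `RemoteCustomerDAL`) are hypothetical, and the "network hop" is simulated by a direct call, but the shape is the standard proxy idea behind remote object references: the caller holds a local object and invokes methods normally, while the proxy forwards each call to an object that could live on another machine.

```python
class RemoteCustomerDAL:
    """Stand-in for an object that would really live on another machine."""
    def read_customer(self, customer_id):
        return {"id": customer_id, "name": "A. Customer"}

class RemoteProxy:
    """A local object holding a 'remote reference'. A real proxy would
    store a host, port, and object id, marshal the arguments, send them
    over the wire, and unmarshal the reply; here the hop is simulated."""
    def __init__(self, remote_object):
        self._remote = remote_object

    def __getattr__(self, method_name):
        def forward(*args, **kwargs):
            # Forward the call; the call site looks like a local call.
            return getattr(self._remote, method_name)(*args, **kwargs)
        return forward

dal = RemoteProxy(RemoteCustomerDAL())
print(dal.read_customer("0042")["name"])
```

The point is that the calling code is identical whether the object is local or remote, which is what makes it practical to place (and re-place) objects wherever the network load dictates.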
(Coming up: Classes and Objects and how a technical community missed the boat.)