by Jim Connett, on June 06, 2019
What do you get when you combine a 59-year-old programming language (National Museum of American History: COBOL, n.d.) with modern high-speed, high-throughput server architectures hosting mission-critical applications? You get the potential for very large log files – a scenario where looking for problems in diagnostic logs becomes a search for the proverbial "needle in a haystack".
If you have written, managed, or maintained any application – especially a legacy application – you know log files are important. They allow you to peer into the "thinking" and behavior of your application at virtually any point of entry or exit. Log files can span several files, over several days, and across several megabytes (or, in larger applications, gigabytes) of data. Despite the volume of resulting text, they are the first and final source for investigating, diagnosing, and resolving problems.
I started my programming career in two languages: COBOL, in support of WorkStream™, and Java, in support of SYSTEMA’s Equipment Controllers (EQCs). Some would say I had the best of both worlds, the old and the new, a combination of procedural and object-oriented languages (although “modern” COBOL is more object-oriented than its prior versions). To a large extent, I agree with this sentiment! However, the logging capabilities in Java and the logging capabilities built into SYSTEMA’s products are worlds apart; I was left longing for creative ways to log WorkStream™ transactions generated by my COBOL code. If “necessity is the mother of invention”, then I offer you the following “inventions”, acquired partly through mentorship, partly through trial-and-error – and largely through blood, sweat, and tears – along my COBOL journey.
In COBOL, any information one wishes to view in log files requires a DISPLAY statement. The DISPLAY statement functions similarly to the System.err.println() method in Java or the ‘echo’ command in many scripting languages. Sometimes I wanted to search for a specific transaction point in a log file, so I would prefix my DISPLAY output with an odd word like “TACO”. Do not laugh! I would then load the log files into my editor and search for TACO, and voila, I found my output. The key here is to make sure your anchor word does not conflict with something that MAY appear in the log file. That is, I wouldn’t use “DISP” as my anchor word, because a search through the log files would highlight every word containing D-I-S-P, including the thousands of DISPLAY statements common in WorkStream™ and COBOL. I’m pretty sure I’m safe using the anchor word “TACO”. A former Unix administrator colleague used “FROG” as his anchor word. Whatever the anchor word, choose a unique word and reap the benefits!
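As a sketch of the idea (the paragraph and field names here are hypothetical, not from WorkStream™), the anchor word is simply the first thing in each DISPLAY:

```cobol
      *    Debugging output seeded with an anchor word.
      *    A search for "TACO" jumps straight to these lines
      *    and nothing else in the log.
           DISPLAY "TACO: entering C100-PROCESS-LOT".
           DISPLAY "TACO: lot id = " WS-LOT-ID.
```

From the command line, something like `grep TACO logfile.log` (or your editor's search) then pulls out only the seeded lines.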
While a Java class file may contain one or more methods designed to accomplish one task or action, the COBOL language is segmented into “paragraphs” containing executable code. WorkStream™ defines generally accepted paragraph naming conventions. For example, a search through almost any WorkStream™ source file will show that a B000-HOUSEKEEPING paragraph exists. Housekeeping tasks may include preparing a database, initializing record variables, and a host of other preparatory steps before the real work begins in the C-level paragraphs. These paragraph naming conventions are clearly defined in WorkStream™ documentation. Of course, these paragraphs can call other paragraphs within (and outside of) the source file. Sometimes, it is good to know when the point of execution enters and exits various paragraphs. In order to clearly mark these entry and exit points, I have found it useful to add a DISPLAY statement as the first action in a paragraph…like:
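A sketch of the pattern (the paragraph body is hypothetical; the entry and exit DISPLAY lines are the point):

```cobol
       B000-HOUSEKEEPING.
           DISPLAY "TACO: entering B000-HOUSEKEEPING".
      *    ... initialize record variables, prepare the
      *    database, and other preparatory steps ...
           DISPLAY "TACO: exiting B000-HOUSEKEEPING".
```

With an entry and exit line in each paragraph of interest, the log file becomes a readable trace of the execution path through the program.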
Even though COBOL is an old (sorry, “mature”) language, it is stable, easy to understand, and fast. Transaction execution times in WorkStream™ can often be measured in tenths of a second, if not hundredths. This is, most often, the normal case. However, there are times when a transaction may seem to be taking a long time to complete, especially if the transaction involves a database insert, update, or delete. It can be difficult to identify the point of delay. Therefore, it is very helpful to log a date-time stamp at the entry point and exit point of any paragraph of interest.
Recently, I was working with a source file with an UPDATE database transaction. The WorkStream™ transaction was taking a very long time to complete, erroring out due to a timeout condition set in the gateway program. So, to determine the source of the delay, I created a DISPLAY statement at the entry of the controlling paragraph (exactly as I have described in #2 above!). This controlling paragraph then called an “X” paragraph (X paragraphs are reserved in WorkStream™ for database-related transactions) that executed an UPDATE SQL transaction. I added another DISPLAY statement in the “X” paragraph just before the UPDATE transaction and then added a final DISPLAY statement immediately after the UPDATE transaction. These DISPLAY statements in the “X” paragraph invoked FUNCTION CURRENT-DATE[1], which recorded the date and time down to the microsecond. With these debugging DISPLAY statements in place, it was easy to see by the time stamps recorded in the log file that the UPDATE transaction was the source of the delay, not the controlling paragraph.
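A sketch of what that instrumentation looked like (the paragraph name, data items, and SQL here are hypothetical, not the actual WorkStream™ source):

```cobol
       X100-UPDATE-LOT.
      *    Capture and log a time stamp just before the UPDATE.
           MOVE FUNCTION CURRENT-DATE TO WS-TIMESTAMP.
           DISPLAY "TACO: before UPDATE " WS-TIMESTAMP.
           EXEC SQL
               UPDATE LOT_MASTER
                  SET LOT_STATUS = :WS-LOT-STATUS
                WHERE LOT_ID = :WS-LOT-ID
           END-EXEC.
      *    Capture and log a time stamp immediately afterward.
           MOVE FUNCTION CURRENT-DATE TO WS-TIMESTAMP.
           DISPLAY "TACO: after UPDATE  " WS-TIMESTAMP.
```

Here WS-TIMESTAMP would be declared as PIC X(21) to hold the value returned by FUNCTION CURRENT-DATE. Subtracting the two logged time stamps shows exactly how long the UPDATE itself took, separating it from any delay in the controlling paragraph.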
Log files exist to show you the path, intent, and result of the application, and – in most cases – will help you see that the program or module is doing exactly what you have programmed it to do, even if it is logically or semantically incorrect. Seeding the log files with unique keywords or date/time stamps will further increase productivity and reduce downtime and stress. I hope these ideas spur other creative ideas in all of your programming domains. While the implementations of these suggestions are very much language dependent, the outcome and benefit are universal – minimizing the search and maximizing productivity.
National Museum of American History: COBOL. (n.d.). Retrieved from Smithsonian
[1] This function is available with the Micro Focus COBOL compiler. If you do not use a Micro Focus compiler, you can easily create a similar capability by following these instructions.