
Proper Care and Feeding of WorkStream Databases

by Jim Connett, on September 02, 2019

The WorkStream® MES application continues to play a prominent role in manufacturing for a variety of reasons. It’s built on COBOL, a mature, expressive, business-oriented language that remains popular, with over 220 billion lines of code in existence [1]. WorkStream is a stable application: one could purchase it today, install it, configure it, and then, barring any security patches, leave it running for years without any upgrade. WorkStream is also extensible. When additions and new features are required, WorkStream has many avenues through which it can be modified to meet your end goals; here at SYSTEMA, we regularly provide custom functions, screens, and capabilities requested by our worldwide user base. WorkStream can interact with message-oriented middleware and, with some creativity, communicate with any number of systems outside WorkStream (again, we have years of experience publishing custom messages onto TIBCO®, ActiveMQ™, and other message-oriented middleware architectures). These are just a few areas where WorkStream excels.

However, one area that does require the attention of any WorkStream administrator and/or IT manager is the database undergirding WorkStream, which records all context information changes, including (but not limited to) all aspects of material movement and configuration, data collection, and equipment information. While the core application can run without intervention for many years, the database, comprising 165 tables (according to documentation), must be managed intentionally. Most “slowness” reported by our WorkStream install base traces back to database issues caused by the cumulative effect of millions of transactions over the course of a year. Most often, we can point to database fragmentation as the source of that slowness.

So…when was the last time you optimized your WorkStream database?

Databases Come with a Cost

In most modern databases, disk space is allocated to support the expected size of the tablespace needed for the application. If the only SQL transactions from the application were SELECT statements, then the database would run at optimal speeds for a long time. However, a database is designed to INSERT data, UPDATE the data already received, and, if not needed anymore, DELETE the data. In high-volume manufacturing, tables can receive thousands upon thousands of INSERT, UPDATE, and DELETE SQL transactions each day, which invariably leads to the need for increased tablespace. The removal of data, combined with the growth of the tablespace, can be a breeding ground for fragmentation.
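To make the effect of this churn concrete, here is a minimal sketch using SQLite as a lightweight stand-in for the production engine (Oracle® and Informix® manage space differently, and the lot_history table is invented for the demo, not part of WorkStream’s actual schema). A burst of INSERTs grows the allocated space, and subsequent DELETEs empty many of those pages without shrinking the file:

```python
import sqlite3

# Illustration only: SQLite stands in for the production engine, and the
# lot_history table is invented for this demo. The principle -- DELETEs leave
# holes in previously allocated space -- is the same in larger engines.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE lot_history (id INTEGER PRIMARY KEY, payload TEXT)")

# A burst of INSERTs grows the database...
con.executemany(
    "INSERT INTO lot_history (payload) VALUES (?)",
    [("x" * 500,) for _ in range(10_000)],
)
con.commit()

# ...and purging old records empties many pages without releasing the space.
con.execute("DELETE FROM lot_history WHERE id <= 5000")
con.commit()

page_count = con.execute("PRAGMA page_count").fetchone()[0]
free_pages = con.execute("PRAGMA freelist_count").fetchone()[0]
print(f"{free_pages} of {page_count} allocated pages are now empty holes")
```

The freed pages stay allocated to the database; they are the “gaps” that later reads must skip around.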

Fragmentation Generally Defined

Every database is different [2], but, generally speaking, one of the tasks of a database’s schema is to manage and defend data boundaries. INSERT SQL statements may force these boundaries to expand to handle the new data. Then, as a normal course of operation, DELETE SQL transactions may leave gaps in the growing tablespace. We don’t see these gaps when we run a ‘SELECT * FROM TABLENAME’ in our favorite IDE because the IDE returns all records to us in one continuous list. However, it is likely that the database had to jump around the tablespace to find the data requested in the SQL. At first, these ‘jumps’ barely register, but over time they become perceptible as degraded application and database performance, much like the slowdown seen when a desktop HDD is fragmented. In sum, WorkStream optimizes its tables during installation with the proper indexes and schema structure to support a massive amount of data, but managing fragmentation is outside WorkStream’s scope. An unattended database can become the Achilles’ heel [3] of WorkStream, dramatically impacting application performance to the point where WorkStream transactions may begin to time out. When reports of timeouts begin ringing your mobile phone at 1:30 am, it will feel like the beginning of the IT apocalypse. Trust me…I’ve been there. It’s not fun.

Help Is Available

But all is not lost; the worst-case, apocalyptic scenario can be averted! One of the most significant steps you can take to maintain WorkStream performance is to move the WorkStream schema to a new, unfragmented tablespace on a regular (scheduled) basis. For Informix® databases, this would involve unloading and reloading the database. For Oracle®, specific commands exist to reclaim tablespace (such as ALTER TABLESPACE ... COALESCE), and the process often involves complex DDL scripts to ensure space is properly reclaimed and indexes are properly rebuilt.
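The “rebuild into fresh storage” idea can be sketched with the same SQLite stand-in used above: its VACUUM command rewrites the entire database compactly, loosely analogous to an Informix unload/reload or an Oracle segment move and index rebuild (the wip table is invented for the demo). This is a conceptual illustration, not the vendor-specific procedure itself:

```python
import sqlite3

# Sketch of rebuilding into fresh, unfragmented storage. SQLite's VACUUM
# rewrites the whole database compactly -- loosely analogous to an Informix
# unload/reload or an Oracle segment move. Table name is invented.
con = sqlite3.connect(":memory:", isolation_level=None)  # autocommit mode
con.execute("CREATE TABLE wip (id INTEGER PRIMARY KEY, payload TEXT)")
con.executemany(
    "INSERT INTO wip (payload) VALUES (?)",
    [("x" * 500,) for _ in range(10_000)],
)
con.execute("DELETE FROM wip WHERE id <= 8000")  # leave holes behind

before = con.execute("PRAGMA page_count").fetchone()[0]
con.execute("VACUUM")  # rebuild; cannot run inside an open transaction
after = con.execute("PRAGMA page_count").fetchone()[0]
holes = con.execute("PRAGMA freelist_count").fetchone()[0]
print(f"pages before: {before}, after: {after}, free pages remaining: {holes}")
```

After the rebuild, the allocated space matches the live data and no empty pages remain, which is exactly the state a scheduled maintenance window aims to restore.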

Advice Worth Noting

With any enterprise database application, the first time you create a plan for database maintenance will be time-consuming; many pre-maintenance planning meetings will (or should) occur, scripts will need to be written, and database backups will need to be verified (you have database backups, right?). However, the resulting scripts and planning can be reused for future database optimizations, reducing the person-hours required for subsequent maintenance activity. Reclaiming tablespace will likely require your database to be offline for a period of time; therefore, these optimizations are best done over scheduled shutdown periods or during a weekend when your location may be nonproductive. If you are a 24-hour-a-day facility with no scheduled shutdown periods on the horizon, then you will need to schedule the database downtime with management. We understand this request will compete with other, seemingly more pressing and value-added priorities, but our evidence and experience lead us to confidently state that the few hours of downtime for this type of maintenance will be quickly recovered. After maintenance, database admins and IT groups will no longer spend hours chasing down reports of transaction timeouts and slow database response times directly linked to a fragmented database. And a database with low (or no) fragmentation is a happy database!

Conclusion

WorkStream databases are an absolutely necessary component of the WorkStream MES application, but databases need to be properly maintained; they simply cannot be neglected. A few hours of downtime a year to reclaim space in fragmented databases goes a long way toward a happy, snappy, healthy WorkStream application.

 

[1] Bob Reselman, “The Beauty Of The COBOL Programming Language”

[2] We acknowledge that different database vendors support the WorkStream® application, and these vendors use a variety of terminologies and methodologies specific to fragmentation. For the sake of this article, we intentionally speak in high-level generalities about database fragmentation. Your situation may differ. Please feel free to contact SYSTEMA with specific questions about your database engine and how we may be able to help!

[3] HISTORY: Achilles

Written by Jim Connett

Jim came to SYSTEMA in 2016 and has 18 years of manufacturing and software automation experience in advanced automation environments, specializing in Workstream/COBOL customizations, tool integration, and data collection and aggregation. In 2014, he and his family spent two years in Ethiopia teaching English and Computer Science courses to international high school students. He enjoys traveling, reading historical and science fiction books, and all kinds of Japanese food.
