Despite increasing focus on factory floor optimization, machine learning remains underleveraged as a tool for improving cycle times, throughput, and OEE. Whether it lives in spreadsheets, paper travelers, or a manufacturing execution system, manufacturers naturally generate massive amounts of production data on which machine learning models can be trained to predict the future state of a factory.
Some use cases for predictive optimization that have previously been covered on our blog include:
Another use case for predictive optimization relates to production planning and scheduling, specifically the estimation of process run-times.
To enable a schedule that accounts for production dynamics and variability, the current state of the shop floor must be digitized into a holistic data model, also known as a digital factory twin. This model provides visibility into what is happening at any given moment and supports prediction of the future state of processes on the shop floor.
A digital factory twin is the digital representation of all available material, personnel, plant resources, and capacity on the shop floor, and it provides the basis for predictive analytics. It offers full visibility into production processes and can leverage machine learning models trained on historical data to enable predictive optimizations. Here we focus on the prediction of process time: the time it takes to complete one process step, also referred to as equipment cycle time, process run-time, or, in the semiconductor industry, recipe run-time.
When a production order is executed for a particular product, that product is manufactured through a series of process steps, or a "product flow". Each process step within the product flow is assigned to a specific machine and is associated with a recipe: a set of instructions detailing the equipment-specific settings of its processing program for that product. To begin a process step, the product is identified for processing, and the equipment is set up according to the recipe specifications for that equipment and product (temperature, pressure, processing time, and so on). The product is then placed at the equipment, processed, and queued for the next process step. In this context, process time does not include placing the product at the equipment, equipment set-up, or any queue time before or after the process begins.
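To make the definition concrete, the sketch below shows how process time might be derived from step-level event timestamps, excluding queue and set-up time as described above. The field names (`queued_at`, `setup_started_at`, and so on) are hypothetical; actual names depend on what your MES or equipment logs record.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class StepEvents:
    """Timestamps logged for one process step (hypothetical field names)."""
    queued_at: datetime          # product queued for this step
    setup_started_at: datetime   # equipment set-up begins
    process_started_at: datetime # processing begins
    process_ended_at: datetime   # processing ends

def process_time_seconds(ev: StepEvents) -> float:
    """Process time covers only the processing interval itself,
    excluding queue time and equipment set-up."""
    return (ev.process_ended_at - ev.process_started_at).total_seconds()

ev = StepEvents(
    queued_at=datetime(2024, 5, 1, 8, 0),
    setup_started_at=datetime(2024, 5, 1, 8, 30),
    process_started_at=datetime(2024, 5, 1, 8, 45),
    process_ended_at=datetime(2024, 5, 1, 10, 15),
)
print(process_time_seconds(ev))  # 5400.0 (90 minutes of actual processing)
```

Keeping queue and set-up intervals as separate fields, rather than folding them into one duration, is what lets a scheduler later predict each component independently.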
A simplistic linear method to predict process run-time in semiconductor manufacturing is to estimate the time it takes to process a lot and divide that by the lot size, yielding a fixed per-unit rate. This gives planners a usable baseline, but it ignores factors such as recipe, equipment, and product mix. Machine learning models, built from historical shop floor data, can capture this variability and are therefore more adaptive and accurate than such traditional prediction methods. The historical data needed to support predictive process run-time capabilities likely already exists within your factory.
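The contrast can be sketched with synthetic data: a naive global per-unit rate versus a simple per-recipe least-squares fit that learns a fixed overhead plus a per-wafer time from history. The records, recipe names, and the choice of a linear fit are all illustrative stand-ins for a fuller ML model trained on real shop floor data.

```python
import statistics
from collections import defaultdict

# Synthetic historical records: (recipe, lot size, observed process time in minutes).
history = [
    ("etch_A", 25, 62.0), ("etch_A", 25, 60.5), ("etch_A", 12, 33.0),
    ("etch_A", 12, 31.5), ("etch_B", 25, 95.0), ("etch_B", 13, 55.0),
]

# Naive linear method: one global per-unit rate, prediction = rate * lot_size.
rate = statistics.mean(t / n for _, n, t in history)

def naive_predict(lot_size: int) -> float:
    return rate * lot_size

# Data-driven alternative: per-recipe ordinary least squares on lot size,
# learning a fixed overhead plus a per-wafer time for each recipe.
def fit_recipe_models(records):
    grouped = defaultdict(list)
    for recipe, n, t in records:
        grouped[recipe].append((n, t))
    models = {}
    for recipe, pts in grouped.items():
        xs = [n for n, _ in pts]
        ys = [t for _, t in pts]
        mx, my = statistics.mean(xs), statistics.mean(ys)
        sxx = sum((x - mx) ** 2 for x in xs)
        slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx
                 if sxx else 0.0)
        models[recipe] = (my - slope * mx, slope)  # (overhead, per-wafer time)
    return models

models = fit_recipe_models(history)

def predict(recipe: str, lot_size: int) -> float:
    overhead, slope = models[recipe]
    return overhead + slope * lot_size
```

On this data the global rate overestimates the fast recipe (`naive_predict(25)` is about 76 minutes) while the per-recipe fit lands near the observed values (`predict("etch_A", 25)` is 61.25 minutes), illustrating why a model that conditions on recipe and equipment context outperforms a single fixed rate.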
Predictive capabilities for process run-times can be used as follows to optimize a manufacturing process:
To establish a predictive model for process run-times you will need:
Digital twin technology is an invaluable tool for manufacturers as it provides the basis for predictive capabilities and simulations that can be applied to virtually any aspect of your production environment.
SYSTEMA provides data analytics solutions across many industries and, for more than 25 years, has developed and implemented solutions that help manufacturers drive optimizations, account for variability, and understand root causes.