Tommy E White*
Department of Engineering Technology, Wayne State University, USA
*Corresponding author: Tommy E White, Department of Engineering Technology, Wayne State University, Detroit, Michigan, USA
Submission: May 02, 2025; Published: May 14, 2025
ISSN: 2694-4421 Volume 4 Issue 1
Job shop enterprises aim to sustain production without significant capital expenditure. This paper presents a methodology that combines Discrete Event Simulation (DES) with the Lean Six Sigma DMAIC approach to analyze results and optimize throughput. It identifies bottlenecks by examining machine standard deviation in a balanced production system.
Keywords: Theory of constraints; Stay time; Standard deviation; Throughput
Job shop enterprises experience numerous challenges that affect growth and sustainability. Key hurdles include resource constraints, such as limited access to financing, adequate facilities, and capital markets.
One cost-effective way to improve company throughput is to combine the methodologies of discrete event simulation and Lean Six Sigma with the Theory of Constraints (TOC) to identify constraints in the process. TOC focuses improvement efforts on the process constraint to enhance production line throughput: it involves identifying the most significant limiting factor that impedes achieving a goal and systematically improving it until it is no longer the constraint. In manufacturing, this constraint is often referred to as a bottleneck [1]. TOC applies a scientific approach to improvement by hypothesizing that every complex system, including job shop processes, comprises multiple linked activities, one of which acts as a constraint upon the entire system (i.e., the constraint activity is the "weakest link in the chain") [2]. Lean Six Sigma (LSS) is a popular approach for continuous improvement of processes and quality within job shop enterprises; it focuses on eliminating unnecessary, non-value-added steps in the job shop operation.
Standard deviation is the key measurement of the variability inherent in the process. Practitioners also use other important measurements to describe a data set, such as the mean or average, which represents the center of the data. Both measures are important, but reducing the standard deviation is key to improving throughput for a job shop, production line, or manufacturing system: the smaller the standard deviation of a data set, the greater the process capability. At each step it is important to obtain useful and reliable information for decision-making, and the multiple tools and techniques available at each step play a vital role in the success of the implementation process. Discrete event simulation is an applied technology that is especially useful for analyzing and solving such problems. Applying simulation begins with being clear on the problem definition, the reasons for simulating, and the expected outcomes; simulation with no objective is counterproductive [3]. The practitioner using discrete event simulation must balance their understanding of the problem with their knowledge of the details of simulation, the underlying simulation concepts, and the application and analysis methodologies that are employed. A discrete event simulation is the imitation of the operation of a real-world production line or process system over a specified period. The behavior of this production line or system as it evolves over time is studied by developing a simulation model [4].
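To make the distinction between the two measures concrete, the short Python sketch below compares two hypothetical stations whose cycle times share the same mean but differ in spread; the station names and sample values are purely illustrative and are not taken from the study:

```python
import statistics

# Two hypothetical sets of station cycle times in seconds.  Both have
# the same mean (12 s), but only the second station is consistent
# enough to support a predictable, capable process.
station_a = [22, 3, 19, 7, 9, 14, 10, 12]      # erratic station
station_b = [11, 13, 12, 12, 11, 13, 12, 12]   # consistent station

for name, data in (("station_a", station_a), ("station_b", station_b)):
    print(f"{name}: mean = {statistics.mean(data):.1f} s, "
          f"stdev = {statistics.stdev(data):.2f} s")
```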
Discrete event simulation can be used for the following purposes:
a. Simulation enables the study of, and experimentation with, the internal interactions of a complex system.
b. By changing the simulation inputs and studying the outputs, valuable information can be obtained about which variables of the production line or system are most important and how those variables interact.
c. By simulating different capabilities for a production line or system, additional requirements can be determined.
Using discrete event simulation for the job shop enterprise model means that variables change only at a discrete set of points over a given time. In a discrete event simulation, a system's behavior is represented as a sequence of discrete events that occur at specific moments in time, and these events mark changes in the system's state. Unlike continuous simulation, where changes happen continuously based on differential equations, DES focuses on specific event occurrences, as the sketch below illustrates.
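The following minimal Python sketch shows the event-scheduling logic that DES tools implement internally: a single test station with a FIFO queue, driven by an event calendar (a priority queue), so that the state, namely the queue length and whether the station is busy, changes only when an arrival or departure event fires. All names, time values, and distributions here are illustrative assumptions, not the FlexSim model used later in this paper:

```python
import heapq
import random

rng = random.Random(1)
events = []            # event calendar: min-heap of (time, seq, kind)
seq = 0                # tie-breaker so equal-time events stay ordered

def schedule(time, kind):
    """Place a future event on the calendar."""
    global seq
    heapq.heappush(events, (time, seq, kind))
    seq += 1

queue_len, busy, completed = 0, False, 0

def try_start(now):
    """If the station is idle and a part is waiting, begin service."""
    global queue_len, busy
    if not busy and queue_len > 0:
        queue_len -= 1
        busy = True
        # Service time ~ Normal(12, 2): 2 s setup + 10 s processing.
        schedule(now + max(0.1, rng.gauss(12.0, 2.0)), "departure")

schedule(0.0, "arrival")                 # first part arrives at t = 0
while events:
    now, _, kind = heapq.heappop(events)
    if now > 300.0:                      # observe a 300-second window
        break
    if kind == "arrival":
        queue_len += 1
        schedule(now + rng.expovariate(1 / 12.0), "arrival")
    else:                                # "departure"
        busy = False
        completed += 1
    try_start(now)   # state changes happen only at event times

print(f"parts completed in 300 s: {completed}")
```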
Throughput is an important parameter to evaluate in any job shop operation, and the bottleneck machine has the most influence on the throughput of the process. Identifying the bottleneck machine is an important task facing most practitioners, but the actual selection of the bottleneck machine is not easy: the definition of what constitutes a bottleneck machine is not uniform across practitioners. Some definitions are listed below; a short sketch applying two of them follows the list:
Definition 1: A bottleneck constrains the performance of the system.
Definition 2: The machine which has the smallest isolated production rate is the bottleneck, where production rate is defined as the average number of parts produced by a machine per cycle time.
Definition 3: If the work-in-process inventory in a given buffer is the largest of all buffers in the system, then the machine immediately downstream of this buffer is the bottleneck.
Definition 4: A machine is the bottleneck if the sensitivity of the system's production rate to its production rate is the highest of all the machines in the system.
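As a brief illustration of how Definitions 2 and 3 would be applied in practice, the Python sketch below uses made-up isolated production rates and buffer work-in-process counts for a hypothetical three-station line; all numbers and names are assumptions for the example only:

```python
# Hypothetical isolated production rates (parts per second) and the
# work-in-process held in the buffer immediately upstream of each
# station, so the Definition 3 bottleneck is the station fed by the
# fullest buffer.
isolated_rate = {"station_1": 0.078, "station_2": 0.083, "station_3": 0.081}
buffer_wip = {"station_1": 4, "station_2": 11, "station_3": 3}  # parts

# Definition 2: the machine with the smallest isolated production rate.
bottleneck_by_rate = min(isolated_rate, key=isolated_rate.get)

# Definition 3: the machine immediately downstream of the largest buffer.
bottleneck_by_wip = max(buffer_wip, key=buffer_wip.get)

print(f"Definition 2 bottleneck: {bottleneck_by_rate}")  # station_1
print(f"Definition 3 bottleneck: {bottleneck_by_wip}")   # station_2
```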
A typical job shop operation was modeled using the FlexSim simulation software package. The designed model, shown in Figure 1, consists of a part generator, processor test station number 1, processor test station number 2, processor test station number 3, and an exit station. The first-in, first-out queue method is applied. The set-up time (2 seconds) and processing time (10 seconds) of each test station are taken as normally distributed values with a standard deviation of 10. The bottleneck station is identified as the station that has the largest average stay time. The stay time, defined as the total time to process the job through the station, consists of:
Stay Time = Setup Time + Processing Time + Standard Deviation
Figure 1: The processor test stations layout.
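The sketch below applies this stay-time metric with the parameters stated above (2-second setup, 10-second processing, normally distributed with the given standard deviations) and flags the station with the largest average stay time as the bottleneck. This is an illustrative Python re-creation, not the FlexSim model itself, and the sampled averages will vary with the random seed:

```python
import random
import statistics

rng = random.Random(7)
SETUP, PROCESS, N_JOBS = 2.0, 10.0, 500

# Per-station standard deviation of the combined setup + processing time.
station_sd = {"test_station_1": 10.0,
              "test_station_2": 10.0,
              "test_station_3": 10.0}

avg_stay = {}
for name, sd in station_sd.items():
    # Sample N_JOBS stay times; negative draws are clipped to zero.
    samples = [max(0.0, rng.gauss(SETUP + PROCESS, sd))
               for _ in range(N_JOBS)]
    avg_stay[name] = statistics.mean(samples)

bottleneck = max(avg_stay, key=avg_stay.get)  # largest average stay time
for name, t in avg_stay.items():
    print(f"{name}: average stay time = {t:.2f} s")
print(f"bottleneck: {bottleneck}")
```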
Once the bottleneck station has been identified from the stay time data, the production line throughput is analyzed. The production line throughput can be defined as:

Throughput = Total Parts Produced / Shift Duration
The machines have identical repair times. The parameters for the remaining machines, such as mean time between failures and total time to repair, are assumed to follow the exponential distribution.
The baseline discrete event simulation model was run to capture the current state of the job shop system. The statistics captured were the total throughput of the system and the stay time for each machine in the system; the results of this analysis are shown in Figures 2 & 3. After the analyze and improve stages of the Lean Six Sigma methodology, with the focus of reducing the total stay time by reducing the standard deviation at station number 1 from 10 sigma to 2 sigma, the discrete event simulation model was run again and the results are provided in Figures 4 & 5. As shown in Figure 6, processor test station 1, the bottleneck station, had a 13.28 percent decrease in total stay time. This improvement is attributable to the reduction in processor test station 1's standard deviation and validates standard deviation as a new metric for detecting the bottleneck station. As shown in Figure 7, the throughput increased by 10.73 percent. Together, the improvements in standard deviation and stay time validate a new methodology for detecting the bottleneck station and improving throughput.
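A simplified Python re-creation of this experiment is sketched below: three stations in series with unlimited FIFO buffers and an always-available supply of parts, simulated once with the baseline standard deviation of 10 at every station and once with station 1 reduced to 2. This is only an approximation of the FlexSim model under the stated assumptions, so its part counts will not match the figures reported above:

```python
import random

def simulate_line(station_sds, shift_seconds=8 * 3600,
                  setup=2.0, processing=10.0, seed=42):
    """Serial line with FIFO queues and unlimited buffers; returns the
    number of parts that exit the line within one shift."""
    rng = random.Random(seed)
    prev_depart = [0.0] * len(station_sds)  # last departure per station
    completed = 0
    while True:
        ready = 0.0  # raw parts are always available at the line head
        for i, sd in enumerate(station_sds):
            start = max(ready, prev_depart[i])   # FIFO: wait for station
            service = max(0.0, rng.gauss(setup + processing, sd))
            prev_depart[i] = start + service
            ready = prev_depart[i]
        if ready > shift_seconds:
            break
        completed += 1
    return completed

baseline = simulate_line([10.0, 10.0, 10.0])   # current state
improved = simulate_line([2.0, 10.0, 10.0])    # station 1 at 2 sigma
print(f"baseline: {baseline} parts per 8-hour shift")
print(f"improved: {improved} parts per 8-hour shift")
```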
Figure 2: Throughput analysis from the baseline model: 1539 parts per 8-hour shift.
Figure 3: Stay time analysis for each processor test station. Notice that processor test station 1 is the bottleneck.
Figure 4: Throughput analysis after using the LSS methodology: 1724 parts per 8-hour shift.
Figure 5: Stay time analysis for each processor test station after using the LSS methodology. Notice that processor test station 1 is the bottleneck station.
Figure 6: Stay time analysis of each machine.
Figure 7: Throughput analysis.
To increase the throughput of a job shop operation, it is important to first identify the bottleneck station. One method to find the bottleneck is to identify the machine that has the highest standard deviation in a balanced line system. Further research is needed to answer these questions:
A. Does this methodology work for other production systems, such as continuous systems?
B. Does the focus on stay time work for other production systems, such as batch manufacturing?
C. Does focusing on the time component of standard deviation work in the service industry?
While the method uses deterministic cycle time only, research is in progress to extend the proposed method to variable cycle time.
The incorporation of discrete event simulation and Lean Six Sigma methodologies into job shop operations offers a significant opportunity for both researchers and practitioners to enhance key performance indicators within the company. By investigating identified research gaps related to production performance in various manufacturing systems, additional methods can be provided to detect constraints and improve overall company performance. The analysis results indicate that monitoring machine stay time and focusing on the standard deviation of time components show great potential for increasing the throughput of a job shop enterprise without requiring substantial monetary or physical resources.
© 2025 Tommy E White. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and building upon your work non-commercially.