CONCEPTUALIZING PREDICTIVE METHODOLOGY IN  
AUTOMOTIVE INDUSTRY: IMPLICATION ON BUSINESS  
OPERATIONS AND STRATEGIES  
Huyen Trang Le  
Bao Ngoc Le  
Faculty of Marketing, Economic Research Institute of Posts and  
Telecommunications, Posts and Telecommunications Institute of Technology,  
Hanoi, Vietnam  
Abstract  
Due to the ongoing increase in competition in the Vietnamese automotive industry
and Vietnam's ambition to localize automotive production, the quality of
assemblies has become a vital concern. Understanding quality failures can
support business operations, avoid or reduce mistakes, and open the opportunity
to develop high-quality assemblies, a unique selling point for a motor company.
Accurately identifying and modelling the quality issues of assemblies in an
assembly line is challenging. This study therefore develops a conceptualization
for businesses, using data mining and regression analysis, to build a
predictive model for engine assembly failures.
Initially, the paper reviews the literature on different aspects of quality
data modelling; typical techniques and variables are briefly introduced.
Subsequently, data mining with clustering methods and a generalized linear
regression model is conducted to analyse the current pattern of the dataset and
to introduce a potential modelling system for the future behaviour of quality
issues. To validate model quality, the model is tested with both training and
testing data. These datasets were collected from a Ford assembly line over a
six-month period, focusing mostly on failure occurrence.
The predictive power of the proposed model is supported by both datasets.
Results suggest that Station, Model, Feature and Operation are four significant
control parameters, as they possess more noticeable predictive power than the
time factor. To generalize the study area, other potential modelling methods
are also suggested for future research and other scenarios.
Keywords: Automotive industry, data mining, predictive model, quality issues  
1. Introduction  
1.1. Motivation  
The automotive industry is a fast-growing, high-potential sector and one of the
most important contributors to Vietnamese national economic growth. However,
the automotive industry in Vietnam has mainly relied on knock-down kit
production in a self-assembly structure, i.e. models are domestically built
with almost exclusively foreign-imported assemblies. One of the main reasons is
that the quality of domestically manufactured assemblies has always been
questioned. Moreover, due to the 0% tax rate for imported whole cars applied by
the Vietnamese government since 2018 and the Vietnamese consumer mindset that
"foreign products equal great quality", it has become more challenging for
domestic car manufacturers in Vietnam to compete on a quality-cost-delivery
basis. This hinders the ambition to localize the automotive industry, for
Vietnam as a country in general and for Vietnamese organisations in particular.
To overcome these difficulties and obtain better quality, understanding and
modelling quality issues are vitally important to the Vietnamese automotive
industry. This paper aims to study and conceptualize the methodology for
determining an appropriate model for the quality issues of engine assemblies.
The research dataset is collected from Ford Motor Company in the UK, which is
famous for a quality control system that has produced well-known high-quality
models over time, e.g. Ford, Mazda, Jaguar and Land Rover; in particular, the
UK site is Ford's biggest technical centre, covering an end-to-end process of
design, development, engineering and support activities. By understanding how a
leading automotive organisation in a developed country has benefited from the
methodology, as the most optimised example, the method might be generalized and
applied to many aspects of the Vietnamese automotive industry, ranging from
assembly-only production to full car-manufacturing businesses.
1.2. Objectives  
In detail, the objectives of this study are summarized into three questions
as follows:
To mine and understand the behaviour of quality data:
What is the pattern and behaviour of the dataset?
To establish a modelling method for engine assembly quality issues from
the dataset:
What is the proper method to model the pattern of the dataset?
To give recommendations for automotive businesses:
How can the data be modelled in future?
1.3. Framework  
To present the method, the research focuses on the quality issues of one
particular part, i.e. engine assemblies for manufacturing diesel and petrol
engines, from which a fully-assembled engine is built up from various
assemblies in the assembly line. The discussion concentrates on the role of
time, the failure rate, and the impact and interaction of the main variables
(Stations, Models, Operations and Features) on the distribution of failures.
1.4. Literature review  
According to Law (2007), in manufacturing systems, quality issues are the major
cause of variability; they largely influence the overall system and should be
modelled accurately. Literature to date has mostly focused on modelling failure
time, the time between failures, and the duration between failures of the
entire population of an engine over time (Proschan, 1963; Lamoureux, 1991;
George, 1994). In contrast, little academic attention has been given to the
frequency of failures or the tendency of failure occurrence regardless of
engine type. Moreover, the importance of practically modelling failures for
real-life situations in the automotive industry has only been recognised in
recent decades, so there are few publications about practical implementation.
Typical literature on modelling failures in real-life scenarios includes Lu
(2009); Kulkarni and Gohil (2012); Hillmann et al. (2014); and Chwif et al.
(2015). Since this paper models failures using a regression formula and data
mining, its formulas and statistical techniques draw on outstanding references
on failure-modelling methods in manufacturing systems, such as Law (2007);
McCullagh and Nelder (1989); Baesens (2014); and MacQueen (1967).
According to the manufacturing-based approach, quality can be defined as the
conformance of products to a specific design or specification (Crosby, 1979;
Gilmore, 1974). These authors also emphasized the concept of 'right the first
time', i.e. products should be manufactured properly on the first trial. Thus,
quality issues arise when products are associated with scrap or rework,
indicating flaws in the first operation, which lead to increased production
cost. In the automotive industry, an assembly failure is identified when the
testing machine indicates that the engine possesses a problematic feature, for
example, the engine assembly does not follow the original design
specification. Besides features, the probability that a failure occurs within
a certain period of time (i.e. reliability) is used to assess product quality.
The most typical measurements of reliability are the mean time between
failures (MTBF), the mean time to first failure (MTFF), and the failure rate
per time unit (Juran, 1998). This paper considers the role of features and the
failure rate per time unit in modelling the variability of engine assembly
failures.
Generally, the variability of quality issues has been debated over time. Many
researchers have claimed a relationship between engine assembly failure rates
and the time factor, namely the classical bathtub curve shown in Figure 1
(Amstadter, 1977; Proschan, 1963, 2000). In contrast, Venton and Ross (1984)
argued that there are two types of engine failure rates: mechanical (physical
failure, time dependent) versus electronic (fragile system events, time
independent, random). Since engine assemblies may consist of both electronic
and mechanical components, the variability of engine assembly failures may be
either time dependent or time independent. This is examined in the results and
discussion sections.
Figure 1. Bathtub curve of product lifetime
Modelling quality issues can be established differently depending on the
business requirements of suppliers and manufacturers, the two main sides of the
automotive industry. Relevant papers from both stakeholders are reviewed to
provide a better view of how analysts may conduct research and build models
using different methods, i.e. systems thinking, data mining, spreadsheet-based
software and simulation techniques. Hanifin and Liberty (1976) found that a
GPSS-V simulation of transmission case machining could enhance the quality
control of material handling and production in the USA for both stakeholders.
Lu (2009) suggested a new classification method, Arrow, and applied finite
mixture distributions to the existing simulation system at Ford Motor Company
UK to model the breakdown durations of machines in assembly lines. Hillmann et
al. (2014) demonstrated the potential of the Failure Process Matrix (FPM) and
spreadsheet-based software to improve product quality and secure highly
qualified products for manufacturers. On the supplier side, Kulkarni and Gohil
(2012) applied soft systems methodology (SSM) to improve an assembly line in
the Swedish automotive industry.
Understanding data behaviour is vital to the success of a company, but raw
data is difficult to understand in its initial form. Fortunately, data patterns
can be exploited by processing data with Big Data analytics; accordingly, the
behaviour of failures can be identified and their re-occurrence can potentially
be predicted (Baesens, 2014). A normal data mining process tends to start with
descriptive analytics, which enables analysts to dig deeply into a massive
amount of data for condensed and more focused information. Historical events
are then summarized into appropriate and useful patterns for users. The next
step of Big Data analytics is predictive analytics, which combines machine
learning, statistical techniques, modelling and technical software to examine
historical data (Baesens, 2014). The difference between descriptive and
predictive analytics is that the former discovers the trend of the current
data, whereas the latter provides a future forecast for categorical and
continuous variables with a high degree of certainty. Thereupon, necessary
business and decision patterns can be extracted and organisations can make the
right decisions.

There are many techniques for analysing and understanding data, such as neural
networks, decision trees, data visualisation, rule induction and clustering.
For a dataset with categorical variables, data significance is easier to find
with a clustering technique. As a typical descriptive analytics technique,
clustering categorises a large dataset into groups that share similar
characteristics; each member possesses a high degree of similarity within its
group (MacQueen, 1967). Two traditional clustering methods are the hierarchical
clustering algorithm and the k-means algorithm. There is a vast collection of
clustering algorithms, which makes it difficult to choose a proper algorithm
for a given circumstance (Dubes and Jain, 1976). Jain et al. (1999) reviewed
data clustering methods and suggested a taxonomy for selecting a clustering
technique, proposing a set of eligibility criteria that consider the situation,
the data structure and the sensitivity of the technique to variability. The
hierarchical clustering algorithm separates a large amount of data into similar
groups without pre-identifying the number of groups. In contrast, the k-means
algorithm requires analysts to pre-determine the number of clusters before
carrying out the grouping process. In comparison to other data mining methods,
clustering can reduce the negative effect of noisy data by separating it into
an outlier group (Zhang et al., 1997; Chiu et al., 2001). Moreover, the
classification function of clustering can simplify a large dataset when
deriving distributions for variables in a simulation model (Lu, 2009).

Thereafter, predictive analytics can be carried out with a regression model to
identify the failure frequency. As a predictive analytics technique, regression
predicts the future behaviour of the data using linear functions, especially
when the target variable is numerical. Baesens (2014) stated that by
associating specific features with a particular variable and determining their
influence on and interaction with that variable, analysts can foretell the
behaviour of relevant activities in the forthcoming period. Many predictive
algorithms are used for regression models with categorical variables, for
example naïve Bayes, support vector machines, neural networks and generalized
linear algorithms. Of all these methods, generalized linear algorithms are
specialized for modelling binary and count data as response variables
(McCullagh and Nelder, 1989). At the moment, there is no practice of data
mining and predictive modelling in Vietnam, especially for engine assembly
quality issues in the assembly line; there are only studies about the
performance and development of the industry (Tran and Ngo, 2014; Ichida, 2015).
2. Methodology  
2.1. Research Methodology
The process used to obtain the methodology and establish its validity is
described in Figure 2, which proceeds through five stages: research on
theoretical background; establishing the model; data collection and validation;
data and model analysis; and discussion and conclusion.
Figure 2. Research methodology and validity process
Hypothesis 1: The frequency of failures can be predicted by the stations,
models, operations and features to which they belong.
Hypothesis 2: The predictive power of stations, models, operations and
features is more significant than that of the time factor.
Hypothesis 3: The time factor is insignificant in developing a predictive
model for failure frequency.
In other words, engine assemblies that possess certain characteristics, i.e.
belong to a certain type of model and/or involve a certain station, operation
and feature, are expected to fail.
These hypotheses are inspired by four notable papers: Lu (2009); Kulkarni and
Gohil (2012); Hillmann et al. (2014); and Chwif et al. (2015), which are
distinct in terms of methodology, data sources and research timeframe. The
study rationale originates from all of these papers: figure out which
parameters are important for modelling the failures of machines and/or engines
in the assembly line.
With regard to the methodology, the design of the descriptive analytics is a
revised version of three studies: Lu (2009); Hillmann et al. (2014); and Chwif
et al. (2015). However, none of these studies has tried a predictive analytics
regression model; instead, they applied other methods, i.e. simulation, soft
systems methodology and classification models. Therefore, this paper
challenges the conventional approach and proposes a new, practicable technique
for modelling quality issues, in particular the failure of engine assemblies.
This paper uses data mining in both data analysis and model establishment. The
clustering technique is used to identify the significant characteristics of
the quality issue data, which are the main causes of failures in engine
assemblies. Then, a generalized linear regression model (GzLM) is applied to
achieve a predictive model for this behaviour (details in Figure 3).
The modelling work targets three goals in sequence: pattern recognition
(Microsoft Excel Pivot Tables), identification of significant predictors
(clustering methods in SPSS), and construction of a predictive model (GzLM).
Figure 3. Technical modelling techniques
2.2. Data collection and validation  
Secondary data is used in this research. The six-month dataset, collected from
Ford Motor Company through direct visits to the factory site, covers 84
stations and 27 engine models with 23,441 failures recorded in the assembly
line. Each entry of engine assembly data includes an ID series number, type of
model, station, operation, operation count, failure date, failure time and
features. By definition, the station is the location where the product is
manufactured, the operation is the function of the station at a particular
time, the operation count is the number of times the engine has been reworked,
and the feature is a brief description of the design reasons for failures
(each is given a specific letter-numeric code).
To avoid sampling bias, outliers and extreme values (Baesens, 2014), the
dataset is divided into two sets: training data and testing data. The training
data (January to the third week of June) is used for data analysis and pattern
finding, whereas the 1,110 data entries of the final week, i.e. the testing
data, are used to test the model's validity. To establish appropriate
variables for the study, data transformation is applied before data analysis.
The dependent variable, the failure count (operation count), is numerical data
and does not require transformation. Since the four independent variables
(Model type, Station, Operation and Feature) are categorical, they are
transformed for input into the regression model by assigning a sequential
numeric code to each value. For instance, Model has 27 kinds: number 1 for
model 4R8Q-6009-AA, 2 for model 5U3Q-6006-AA, and 27 for the final model in
the list, Tension_Bolt. Similarly, Station data is coded 1 to 84, Operation
data 1 to 216, and Feature data 1 to 310.
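As a minimal sketch, the split and coding described above could be reproduced
as follows, assuming a hypothetical CSV export of the failure log with
illustrative column names (model, station, operation, feature, failure_date,
failure_count); the file name and schema are assumptions, not the actual Ford
dataset:

```python
import pandas as pd

df = pd.read_csv("failure_log.csv")  # hypothetical export of the failure log

# Ordinal coding of the four categorical predictors, e.g. Model
# 4R8Q-6009-AA -> 1, ..., Tension_Bolt -> 27, as described above.
for col in ["model", "station", "operation", "feature"]:
    codes = {v: i + 1 for i, v in enumerate(sorted(df[col].unique()))}
    df["n" + col] = df[col].map(codes)

# Chronological split mirroring the paper: January to the third week of June
# for training, the final week (1,110 entries in the paper) for testing.
df["failure_date"] = pd.to_datetime(df["failure_date"])
cutoff = df["failure_date"].max() - pd.Timedelta(days=7)
train = df[df["failure_date"] <= cutoff]
test = df[df["failure_date"] > cutoff]
```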
2.3. Data analysis and Model development  
To find the current pattern of the dataset and identify the importance of each
independent variable, descriptive analytics in the form of two-step clustering
is conducted. Then, the k-means algorithm is used to examine the accuracy of
the grouping process by verifying the number of groups.
The underlying mathematical algorithms for these clustering methods are:
Two-step clustering: The algorithm is an adaptation of the k-means and
hierarchical algorithms proposed by SPSS. The two steps are: 1) pre-cluster
the cases into many small sub-clusters; 2) cluster these sub-clusters into the
desired number of clusters.
K-means clustering: According to MacQueen (1967), each k-means cluster is
represented by the centre of the cluster. The algorithm aims to minimise the
squared error function

$$J = \sum_{j=1}^{k} \sum_{i=1}^{n} \left\| x_i^{(j)} - c_j \right\|^2$$

where $\left\| x_i^{(j)} - c_j \right\|^2$ is a selected distance measure
between a data point $x_i^{(j)}$ and the cluster centre $c_j$, so that $J$
indicates the respective distances of the $n$ data points to their cluster
centres.
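As a rough illustration of this verification step, the k-means grouping could
be reproduced outside SPSS with scikit-learn (a stand-in assumption, since the
paper runs its clustering in SPSS), continuing the hypothetical column names
from the earlier sketch:

```python
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Cluster the coded failure records on the four predictors, after putting
# the ordinal codes on a common scale.
X = StandardScaler().fit_transform(
    train[["nmodel", "nstation", "noperation", "nfeature"]]
)

# k-means minimises the squared error function J defined above; the choice
# of 4 clusters here is an assumption made to check the two-step result.
km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)
print("J =", km.inertia_)   # achieved within-cluster squared error
print(km.labels_[:10])      # cluster membership of the first ten records
```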
To create a predictive model representing the behaviour of failures and
determine the failure frequency in future series, a Poisson regression model
with a log link function for count data is recommended (Cameron and Trivedi,
2013). This generalized linear regression includes three components: a random
component, a systematic component (possessing a linear predictor) and a link
function.
Random component: The response $Y$ has a Poisson distribution, that is,

$$y_i \sim \mathrm{Poisson}(\mu_i) \quad \text{for } i = 1, \dots, N$$

where the expected count of $y_i$ is $E(Y) = \mu$ and the variance of $y_i$ is
$\mathrm{var}(Y) = \mu$.

Systematic component: The explanatory variables $X = (x_1, x_2, x_3, x_4)$
combine linearly to form the linear predictor:

$$\eta = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \beta_3 x_3 + \beta_4 x_4$$

Log-linear link function $g(\cdot)$:

$$\log(\mu) = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \beta_3 x_3 + \beta_4 x_4$$

The link function describes how the mean $E(Y) = \mu$ depends on the linear
predictor; it transforms the expectation of the response variable,
$\mu = E(Y)$, to the linear predictor. As Poisson regression uses the
log-linear link function, with four explanatory variables:

$$g(\mu) = \log(\mu) = \eta = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \beta_3 x_3 + \beta_4 x_4$$

or, equivalently,

$$\mu = \exp(\beta_0 + \beta_1 x_1 + \beta_2 x_2 + \beta_3 x_3 + \beta_4 x_4)
     = \exp(\beta_0)\exp(\beta_1 x_1)\exp(\beta_2 x_2)\exp(\beta_3 x_3)\exp(\beta_4 x_4)$$
Hence, the generalized linear regression is established as

$$g(\mu) = g(E(Y)) = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \beta_3 x_3 + \beta_4 x_4 + \varepsilon$$

where $\beta_0$ is the intercept, $Y$ is the dependent variable (the count
data, i.e. the failure count: the number of failures of each series over
time), and $\beta_1$, $\beta_2$, $\beta_3$ and $\beta_4$ are the coefficients
of Station, Model, Operation and Feature respectively.
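A minimal sketch of fitting this GzLM, using statsmodels as a stand-in for the
SPSS procedure the paper employs and assuming the hypothetical training frame
from the earlier sketches:

```python
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Treat the four coded predictors as categorical (C(...)) so that each level
# receives its own coefficient, matching the per-level terms (Model 1,
# Station 3, ...) reported in the results.
poisson_glm = smf.glm(
    "failure_count ~ C(nmodel) + C(nstation) + C(noperation) + C(nfeature)",
    data=train,
    family=sm.families.Poisson(),  # log link is the Poisson family default
).fit()
print(poisson_glm.summary())
```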
To test the validity of the model, an error test is run to compare the
correlation between the actual and the predicted data. The significance level
takes into account the independence between variables. A graphical comparison
between the actual and predicted values of the failure count is also conducted
to provide a richer view of the model quality. Excel Pivot Tables and SPSS are
used.
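Under the same assumptions, these checks could be sketched as below
(resid_deviance and predict are standard statsmodels result attributes and
methods; the held-out test frame is the hypothetical one defined earlier, and
unseen category levels in it would need handling in practice):

```python
import numpy as np

# Deviance residuals against fitted means, approximating the standardized
# residual check described above; values should mostly fall within a band
# of roughly -3.3 to 3.3.
resid = poisson_glm.resid_deviance
print("residual range:", resid.min(), "to", resid.max())

# Correlation between actual and predicted failure counts on the held-out
# final week, the error test described above.
pred = poisson_glm.predict(test)
print("actual/predicted correlation:",
      np.corrcoef(test["failure_count"], pred)[0, 1])
```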
3. Results  
Failures tend to appear in specific stations, operations, features and models.
Failures are most likely to occur in the early morning, early afternoon and
late evening, especially within the hours of 00-05, 07-15 and 21-23. There are
few failures on Saturdays and Sundays but many on weekdays, leading to an
Ω-shaped distribution of failures. Every 10 hours, the failure pattern repeats,
with transition periods at 6 o'clock and 18-19 o'clock (Figure 4).
Figure 4. Failure times over 24 hours for each day of the week (1 = Sunday to 7 = Saturday)
Regarding the model development, the descriptive analytics clustering methods
indicate that the failure behaviour is mainly controlled by four factors and
that time is not significant when building the model, as shown in Table 1 with
the significance level of each factor.
Table 1. Clustering  
Consequently, the predictive model is built without the time factor; the
fitted formula is

$$Y = 25.402 + (-24.749) \cdot \text{Model 1} + \dots + 0.016 \cdot \text{Station 3} + \dots + (-0.594) \cdot \text{Operation 98} + \dots + (-0.691) \cdot \text{Feature 283} + \dots + 0 \cdot \text{Feature 310}$$
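Because the model is log-linear, a predicted failure count for a given
combination of levels is obtained by exponentiating the linear predictor. A
toy evaluation with the coefficients quoted above, treating them as log-scale
terms and assuming all omitted terms are zero purely for illustration:

```python
import math

# Linear predictor for a hypothetical engine at Model 1, Station 3,
# Operation 98, Feature 283, using the coefficients from the fitted formula.
eta = 25.402 - 24.749 + 0.016 - 0.594 - 0.691
mu = math.exp(eta)  # expected failure count under the log link
print(round(mu, 3))
```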
Regarding validity, the model proves satisfactory on both the training and
testing datasets (Table 2). All p-values < 0.05 indicate that the Model,
Station, Operation and Feature variables are significant, validating the
predictive power of the model.
Tests of Model Effects (Type III)

Source        Wald Chi-Square    df    Sig.
(Intercept)            23.822     1    .000
nmodel                387.149    25    .000
nstation               71.865     9    .000
noperation            545.427   109    .000
nfeature             2660.068   308    .000

Dependent Variable: Failure Count
Model: (Intercept), nmodel, nstation, noperation, nfeature
Table 2. GzLM - Tests of Model Effects
Figure 5. Standardized deviance residuals vs. predicted value mean of the response
In Figure 5, the standardized deviance residuals are plotted against the
predicted mean of the response to check that the model errors are not
excessive. As most residuals are near 0 (between -3.3 and 3.3), the predicted
results are close to the actual outcomes.
Notably, a similar trend between actual and predicted values is shown in the
testing dataset (Figure 6), lending further support to the predictive power of
the model.
Figure 6. Predicted value vs. actual value of the failure count (testing data)
4. Discussion and conclusion  
The results show that data mining techniques are useful for studying behaviour
and developing a predictive model for assembly failure issues in the
automotive industry, as in the sample case of the Ford assembly line. Notably,
the significance of time is not zero (0.2), as shown in Table 1, indicating
that time may have predictive power in other scenarios. In this study, the
data mining clustering methods indicate that time is not important when it
stands alone. Depending on circumstances, when more specialized data is
available for analysis, it may be possible to include the time factor in model
development (Lu, 2009).
This project is based on a practical case study of a leading motor company
whose structure automotive businesses in Vietnam, e.g. Vinfast, are striving
for. Moreover, there are few applications of Big Data analytics for quality
control in the automotive sector. Hence, this study is worthwhile as a
reference for modelling quality issues in Vietnamese automotive companies,
with a highlight on data mining. By obtaining an adequate modelling system,
companies have a great chance to learn about failures, avoid their adverse
impacts and produce qualified assemblies in the future. Likewise, in Ford's
case, a deep understanding of data behaviour has helped the company avoid
repeating the mistake of auto-rejecting an entire production batch because of
a few defective items; Ford can now single out the products whose scenarios
tend to produce errors. Consequently, labour allocation, goods manufacturing
standards, operations and quality control, production cost and customer demand
can be managed efficiently. For the Vietnamese automotive industry especially,
these aspects contribute greatly to the domestically oriented strategy of
becoming a fully domestic car manufacturing industry.
To conclude, this study has laid a foundation for Big Data analytics and can
act as a first stepping stone for applying data mining techniques and
regression models to quality checking in the automotive industry. Another
potential approach, SSM, may also be used, given its strength in drawing an
overall picture of problematic situations in order to identify solutions.
5. References
1. Amstadter, B. L. (1977). Reliability Mathematics: Fundamentals, Practices,
Procedures. McGraw-Hill.
2. Baesens, B. (2014). Analytics in a big data world: The essential guide to data  
science and its applications. John Wiley & Sons.  
3. Cameron, A. C., & Trivedi, P. K. (2013). Regression Analysis of Count Data
(Vol. 53). Cambridge University Press.
4. Chiu, T., Fang, D., Chen, J., Wang, Y., & Jeris, C. (2001). A robust and scalable  
clustering algorithm for mixed type attributes in large database environment.  
In Proceedings of the seventh ACM SIGKDD international conference on  
knowledge discovery and data mining, 263-268. ACM.  
5. Chwif, L., Banks, J., & Vieira, D. R. (2015). Simulating breakdowns: a  
taxonomy for modelling. Journal of Simulation, 9(1), 43-53.  
6. Crosby, P. B. (1979). Quality is free: The art of making quality certain. Signet.  
7. Dubes, R., & Jain, A. K. (1976). Clustering techniques: the user's dilemma.  
Pattern Recognition, 8(4), 247-260.  
8. Tran, D. H., & Ngo, D. T. (2014). Performance of the Vietnamese Automobile
Industry: A Measurement using DEA. Asian Journal of Business and Management,
2(3).
9. George, L. (1994). The Bathtub Curve Doesn't Always Hold Water. American
Society for Quality.
10. Gilmore, H. L. (1974). Product conformance cost. Quality progress, 7(5), 16-19.  
11. Hanifin, L. E., & Liberty, S. G. (1976). Improved Efficiency of Transmission  
Case Machining-A GPSS-V Simulation of a Transfer Line (No. 760338). SAE  
Technical Paper.  
12. Hillmann, M., Stühler, S., Schloske, A., Geisinger, D., & Westkämper, E. (2014).  
Improving Small-quantity Assembly Lines for Complex Industrial Products by  
Adapting the Failure Process Matrix (FPM): A Case Study. Procedia CIRP, 17, 236-  
241.  
13. Ichida, Y. (2015). Development of the Vietnamese Automotive Industry and
EDI Infrastructure. IFEAMA SPSCP, 4, 80-95.
14. Jain, A. K., Murty, M. N., & Flynn, P. J. (1999). Data clustering: a review.  
ACM computing surveys (CSUR), 31(3), 264-323.  
15. Juran, J. M., & Godfrey, A. B. (1998). Juran's Quality Handbook (5th ed.).
McGraw-Hill.
16. Kulkarni, K., & Gohil, S. (2012). Assembly line improvement within the
automotive industry: Application of soft systems methodology to the
manufacturing field. Available from:
portal.org/smash/get/diva2:557837/FULLTEXT01.pdf (Accessed 16th August 2015).
17. Lamoureux, P. G. (1991). Electronic reliability. Quality, p. 45.
18. Law A. M. (2007). Simulation Modeling and Analysis. 4th ed., McGraw-Hill series  
in industrial engineering and management science. McGraw-Hill, New York  
19. Lu, L. (2009). Modelling breakdown durations in simulation models of
engine assembly lines. Doctoral dissertation, University of Southampton.
20. MacQueen, J. (1967). Some methods for classification and analysis of  
multivariate observations. In Proceedings of the fifth Berkeley symposium on  
mathematical statistics and probability. 1(14), pp. 281-297.  
21. McCullagh, P., & Nelder, J. A. (1989). Generalized linear models (Vol. 37).  
CRC press.  
22. Proschan, F. (1963). Theoretical explanation of observed decreasing
failure rate. Technometrics, 5, 375-383.
23. Proschan F. (2000). Theoretical explanation of observed decreasing failure  
rate. Technometrics, 42(1), 7-11  
24. Venton, A. O. F., & Ross, T. R. (1984). Component based prediction for  
mechanical reliability. Mechanical Reliability in the Process Industries.  
25. Zhang, T., Ramakrishnan, R., & Livny, M. (1997). BIRCH: A new data  
clustering algorithm and its applications. Data Mining and Knowledge  
Discovery, 1(2), 141-182.  