Three years of work distilled into a 'mere' 100 pages.

Machine Learning for the improvement of a blow-moulding process

Context and objective

New technologies such as machine learning (and deep learning), the IoT and cloud computing are opening up new research perspectives in the manufacturing industry. Industry 4.0, the fourth industrial revolution, holds the promise of increased flexibility in manufacturing, along with mass customisation, better quality and improved productivity. This research work focuses on the benefits that data-driven methods can bring to the overall quality control of fuel tanks produced through the extrusion blow-moulding process. Extrusion blow-moulding takes a thin-walled tube called a parison that has been formed by extrusion, traps it between the two halves of a larger-diameter mould, and then expands it by blowing air into the tube, forcing the parison out against the mould.

The fuel tanks produced through this process must respect dimensional and geometric constraints to comply with customer specifications. The thickness of the tank over its whole surface must be sufficient to ensure the robustness of the part, and therefore its safety, while avoiding excessive and unnecessary weight in the finished product. Unfortunately, measuring the thickness of a hollow part is a time-consuming operation that requires several minutes of work and cannot be done online for each part. As a consequence, only a subset of the produced parts can be measured. Decisions based on frequency testing, rather than on the inspection of every part, are more expedient and cost-effective, but they cannot guarantee the conformity of all parts in the population from which the sample was drawn. This research work therefore investigates how best to use the data collected on the machines and the corresponding products in order to propose a new data-driven approach for real-time, non-destructive quality control of the thickness of blow-moulded parts.

Methods and results

Traditional methods for measuring the thickness of hollow parts rely on ultrasonic instruments, which provide satisfactory results while avoiding the destruction of parts. These methods are extremely accurate for sample-based quality control, but they present a major drawback: measuring the large number of points needed to estimate the distribution of material over the entire surface is time-consuming and cannot be done online in production. The same is true for another well-known technique, computed tomography. In recent decades, thermal imaging, a non-contact technology capable of measuring large surfaces in a single shot, has been studied as a possible method to infer the thickness of a solid element. The main idea behind these approaches is to transfer energy to the test piece and to monitor the evolution of its surface temperature over time. We claim that monitoring the surface temperature can also be applied without actively heating the test part, especially for parts that are still hot after manufacturing. In fact, our work is motivated by the empirical observation of the cooling of blow-moulded parts in the first minutes after blowing. Areas of the part exhibit different cooling behaviours depending on their thickness: thinner areas cool down faster than thicker ones. In the thickest zones, the surface temperature even rises briefly before decreasing, because energy is released by the innermost plastic layers, which have not been in direct contact with the mould surfaces.
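
A rough intuition for why the cooling speed encodes thickness comes from a simplified lumped-capacitance model. This is only an illustrative first-order approximation, not the model used in the thesis; in particular, it cannot reproduce the temperature rebound of the thickest zones, which requires modelling conduction through the wall.

```latex
% Lumped-capacitance cooling of a plastic wall of thickness d (illustrative only).
% rho: density, c_p: specific heat, h: convective heat-transfer coefficient,
% T_0: initial surface temperature, T_inf: ambient temperature.
\tau \approx \frac{\rho\, c_p\, d}{h},
\qquad
T_{\mathrm{surface}}(t) \approx T_\infty + \left(T_0 - T_\infty\right)\, e^{-t/\tau}
```

A thicker wall (larger d) yields a larger time constant, so its surface temperature decays more slowly; this thickness-dependent decay is the signal that the three pipelines described below exploit.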

The proposed approach makes use of machine learning to leverage the thermal inertia of the manufactured plastic part, captured through thermal imaging, to infer the thickness over the part surface without any direct measurement. Three different pipelines have been designed to model the relationship between the cooling behaviour of a part area, captured by means of a thermal camera, and the corresponding thickness. The parametric temporal approach uses a parametric function to approximate the pointwise decay of the surface temperature. The function parameters, retrieved through curve fitting, can then be used as input features for a machine learning regressor.
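
A minimal sketch of this pipeline on synthetic data is given below; the exponential decay function, the synthetic cooling curves and the random-forest regressor are illustrative assumptions rather than the exact choices made in the thesis.

```python
# Minimal sketch of the parametric temporal pipeline, on synthetic data.
# The decay form, parameter initialisation and regressor are assumptions.
import numpy as np
from scipy.optimize import curve_fit
from sklearn.ensemble import RandomForestRegressor

def decay(t, T_inf, dT, tau):
    """Assumed surface-temperature decay model: T(t) = T_inf + dT * exp(-t / tau)."""
    return T_inf + dT * np.exp(-t / tau)

def fit_features(t, temps):
    """Fit the decay model to one point's cooling curve; the fitted
    parameters (T_inf, dT, tau) become the features for that point."""
    p0 = [temps[-1], temps[0] - temps[-1], 60.0]
    params, _ = curve_fit(decay, t, temps, p0=p0, maxfev=10000)
    return params

rng = np.random.default_rng(0)
t = np.linspace(1.0, 300.0, 150)                    # 5 minutes of cooling, hypothetical
thickness = rng.uniform(2.0, 8.0, size=200)         # synthetic thicknesses in mm
curves = [decay(t, 40.0, 60.0, 30.0 * d) + rng.normal(0.0, 0.3, t.size)
          for d in thickness]                       # thicker area -> slower decay

X = np.array([fit_features(t, c) for c in curves])  # curve-fit parameters as features
y = thickness                                       # measured thickness (ground truth)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
print(model.predict(X[:3]), y[:3])                  # sanity check on training points
```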

The flexible temporal approach, like the parametric temporal one, takes advantage of the pixel-wise temperature decay. Instead of compressing the information through a parametric function, it leverages the ability of deep learning, and in particular of recurrent neural networks, to extract meaningful features from the raw signals.
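
Below is a minimal sketch of such a recurrent regressor in PyTorch; the LSTM size, the synthetic sequences and the training loop are illustrative assumptions, not the thesis architecture.

```python
# Sketch of the flexible temporal pipeline: an LSTM maps a raw pixel-wise
# temperature sequence to a thickness value (illustrative assumptions only).
import torch
import torch.nn as nn

class ThicknessLSTM(nn.Module):
    def __init__(self, hidden_size=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, x):            # x: (batch, time, 1) raw temperature signal
        _, (h, _) = self.lstm(x)     # last hidden state acts as the learned summary
        return self.head(h[-1]).squeeze(-1)

# Synthetic example: 64 sequences of 150 temperature samples each
x = torch.randn(64, 150, 1)
y = torch.rand(64) * 6 + 2           # hypothetical thickness targets in mm

model = ThicknessLSTM()
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
for _ in range(10):                  # a few illustrative training steps
    optimiser.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimiser.step()
```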

Finally, the spatio-temporal approach (Figure 1) leverages not only the temporal temperature information, but also the spatial one. Instead of extracting a temperature time series for each critical point of the blow-moulded part, an end-to-end deep learning architecture directly handles the input thermal video. In this way, the model can take into account spatial information intrinsic to the tank, which is completely lost with the previous approaches. Training such an architecture can be challenging when the amount of available data is limited. To address this data scarcity, we applied two well-known deep learning techniques: transfer learning and data augmentation. The intuition behind transfer learning for image-related tasks is that a model trained on a sufficiently large and general dataset effectively serves as a generic model of the visual world; its learned feature maps can be reused without having to start training from scratch. Data augmentation is a set of techniques that increases the amount of data by adding slightly modified copies of existing samples or synthetic samples derived from them. Popular image augmentation techniques include geometric transformations such as flipping, rotation or translation, and colour-space transformations.

Figure 1: Overview of the spatio-temporal approach.
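
An illustrative sketch of such a spatio-temporal model is shown below: a pretrained image backbone (transfer learning) applied frame by frame, followed by a recurrent layer over the thermal-video frames. The backbone, the frame-wise augmentation, the channel replication and the output layout (one thickness per critical point) are assumptions, not the exact thesis architecture.

```python
# Sketch of a spatio-temporal model: pretrained per-frame CNN + LSTM over frames.
import torch
import torch.nn as nn
from torchvision import models, transforms

class VideoThicknessNet(nn.Module):
    def __init__(self, n_points=10):
        super().__init__()
        backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
        backbone.fc = nn.Identity()               # reuse the pretrained feature maps
        self.backbone = backbone
        self.temporal = nn.LSTM(512, 128, batch_first=True)
        self.head = nn.Linear(128, n_points)      # one thickness per critical point

    def forward(self, video):
        # video: (batch, time, 3, H, W); single-channel thermal frames are
        # assumed to be replicated to 3 channels to match the RGB backbone
        b, t = video.shape[:2]
        feats = self.backbone(video.flatten(0, 1))       # (batch*time, 512)
        _, (h, _) = self.temporal(feats.view(b, t, -1))
        return self.head(h[-1])                          # (batch, n_points)

augment = transforms.RandomHorizontalFlip(p=0.5)  # simple geometric augmentation
video = torch.randn(2, 16, 3, 224, 224)           # two synthetic 16-frame clips
video[0] = augment(video[0])                      # same flip for all frames of a clip
pred = VideoThicknessNet()(video)                 # (2, n_points) thickness estimates
```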

The procedure has been validated in a real-world manufacturing setting. A dataset was constructed by recording a thermal video for each tank and by measuring the thickness at a set of critical points on the tank surface. The results show that our method accurately predicts the thickness values at these critical points, with an error whose order of magnitude is acceptable in the studied industrial environment. Among the three pipelines presented above, the spatio-temporal approach achieves the best inference performance in reconstructing the thickness values.

Contributions

This research work presents three main scientific contributions.

  1. We propose new industrial applications of machine learning and deep learning for quality improvement. Our work highlights not only the benefits that machine learning approaches can bring to the manufacturing production line, but also the limits of these methods.
  2. We provide a new approach for measuring the thickness of hollow parts in real time and without contact. Thermal imaging has been used before to infer the thickness of an object, but to our knowledge, this is the first time a data-driven method has been used to infer thickness from the decay of the surface temperature. We believe that such an approach could be extended to other manufacturing processes in which the produced parts undergo a cooling phase.
  3. We have shown how deep learning can be exploited in the manufacturing industry, even when the amount of data available is limited. Transfer learning and data augmentation have proven to be effective techniques to address the quality data scarcity issue.