Within the control layer, we consider how to adapt the physical nodes, resources, and transactions as we gather more information from the physical and data layers. We aim to improve and optimize processes while considering the impact and usability of the transaction engine within OCSM. The tools developed here will provide services that guide users in sequencing resources and production processes. For example, instead of optimizing a single transaction that uses one resource from ZJU to perform a simple user-specified task, we may provide methods that split a (perhaps complex) specified task into a multi-step process across nodes (e.g., a UIUC lab and a ZJU lab) and produce a distribution plan that improves performance and reduces the burden on the end user.
Transfer Learning for Flexibility
One overarching goal of modern manufacturing quality management is to minimize process variability so that manufacturing process capability can be maximized. In recent years, a variety of data-driven machine learning (ML) methods have been developed to harness manufacturing big data, and they have substantially improved product quality and production efficiency by providing better decision-making capability. Existing ML-based decision-making methods require large training datasets and work best for mass production. In contrast, unique features of flexible manufacturing, such as small batch sizes, rapid product launches, and short turnaround times, make these decision-making approaches much less effective. In addition, a manufacturing network may contain a fleet of machines that perform the same function; characterizing both the machine-to-machine variability and the machine-to-machine similarity is important.
We will establish a hybrid transfer learning framework to facilitate effective and efficient development of machine-level decision-making strategies when new products are launched and/or new machines are introduced to a manufacturing network. We will develop a generalized mixed-effect model to describe the quality characteristics. The large-scale fixed effect is modeled as a deterministic function of machine, product, and process characteristics. The small-scale random effect is modeled by a residual term that can be learned jointly across multiple tasks through transfer learning. This flexible modeling structure allows the integration of different types of models, including ML, statistical, and physics-based simulation models. Consequently, it can be applied to various decision-making tasks, including process optimization, in-situ monitoring, non-destructive quality prediction, and prognostics. It will also provide physical interpretability, which is uncommon in state-of-the-art ML methods. Our preliminary work has demonstrated the effectiveness of hybrid transfer learning using real-world data collected from high-precision machining of automotive engines, solar panel health monitoring, and ultrasonic welding of electric vehicle batteries.
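To make this structure concrete, consider a quality characteristic $y_{ij}$ for part $i$ produced under task $j$ (a machine-product combination); the notation below is illustrative rather than fixed by the framework:
\[
y_{ij} = f(\mathbf{x}_{ij};\boldsymbol{\beta}) + \delta_j(\mathbf{x}_{ij}) + \varepsilon_{ij}, \qquad \varepsilon_{ij}\sim\mathcal{N}(0,\sigma^2),
\]
where the fixed effect $f(\cdot;\boldsymbol{\beta})$ is a deterministic function of the machine, product, and process characteristics $\mathbf{x}_{ij}$ (realizable as an ML, statistical, or physics-based simulation model), and the random effect $\delta_j(\cdot)$ captures the small-scale, task-specific residual. Placing a shared prior on the residuals, e.g., $\delta_j \sim \mathcal{GP}(0,k)$, is one way to learn them jointly, so that a newly launched product or newly introduced machine borrows strength from related tasks.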
Planning for Dynamic Reconfiguration
We have expertise in developing learning-based methods for planning and decision making under uncertainty and with multi-modal input data. Specifically, we have been developing methods that combine model-based and model-free approaches and optimize over both long and short time horizons. These planning methods consider uncertainty and variability in the agent (or process), but also require models of the non-stationary environment in which the agent will operate. Dynamic reconfiguration of manufacturing networks requires the collection of data from equipment, product components, and customer orders to obtain an overall picture of the status of production. These data are then subject to sophisticated analysis methods combining stochastics and artificial intelligence to detect possible bottlenecks, anticipate failures, and infer means to ensure seamless continuation of the manufacturing processes. These complexities require us to integrate intelligent, real-time planning methods, building on our model abstractions from Thrust 1.
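As one hedged illustration of how these ingredients can be combined, the sketch below implements a receding-horizon planner in which a short-horizon model-based search handles near-term decisions while a model-free learned value estimate serves as the terminal cost that closes the long horizon; the dynamics model, value table, and horizon here are placeholders, not components of our actual systems.

```python
# Minimal sketch (illustrative only): receding-horizon planning that combines
# a short-horizon model-based search with a model-free learned value function
# used as the terminal cost. Model, value table, and horizon are placeholders.
import numpy as np

rng = np.random.default_rng(0)
N_STATES, ACTIONS, H, GAMMA = 20, (-1, 0, +1), 3, 0.95

# Stand-in for the model-free component: a value table learned offline
# (e.g., by TD learning on logged process data). Higher value = better state.
V = rng.normal(size=N_STATES)

def model(s, a):
    """Stand-in one-step process model: next state and immediate cost."""
    s_next = int(np.clip(s + a, 0, N_STATES - 1))
    cost = abs(s_next - N_STATES // 2) + 0.1 * abs(a)   # track a setpoint
    return s_next, cost

def plan(s, h=H):
    """Short-horizon exhaustive rollout; learned V(s) closes the long horizon."""
    if h == 0:
        return -V[s], None                   # terminal cost from learned value
    best = (np.inf, None)
    for a in ACTIONS:
        s_next, c = model(s, a)
        tail, _ = plan(s_next, h - 1)
        best = min(best, (c + GAMMA * tail, a))
    return best

state = 3
for _ in range(5):                           # receding-horizon execution loop
    _, action = plan(state)
    state, _ = model(state, action)
print("final state:", state)
```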
We will develop a toolkit for planning, control, and management of the supply network, designed to provide services within OSCM. This toolkit will facilitate the integration of processing, transportation, service, and distribution of jobs and materials in the network; that is, it will address the management of workflow over a network of distributed sources and capabilities (e.g., manufacturing units) and distributed destinations. This will allow users to determine logistics such as the allocation and routing of raw material from production locations to each manufacturing unit. We will also consider material flow across processing units such that given objectives (e.g., throughput, processing cost, or transaction time) are optimized. The tools will use the real-time dynamic data available to the network to enable dynamic decision-making (e.g., rerouting of jobs), adapt to changing costs, availability, and conditions of resources, and dynamically manage the supply, process, and operational logistics. The main challenges in developing and implementing this suite are (i) model complexity, since these objectives typically pose a host of scheduling, resource allocation, clustering, aggregation, and routing subproblems with capacity, topographical, and other constraints, and (ii) robustness to uncertainties in the workflow and in the dynamic availability of resources.
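To illustrate the allocation-and-routing core of this toolkit, the sketch below poses a toy sourcing problem as a min-cost flow on a directed network; the node names, capacities, and costs are hypothetical, and networkx stands in for whatever optimization engine the toolkit ultimately uses.

```python
# Minimal sketch: raw-material allocation and routing as min-cost flow.
# Node names, supplies, capacities, and costs below are hypothetical.
import networkx as nx

G = nx.DiGraph()
# Negative demand = supply (raw-material sources); positive = destination.
G.add_node("source_A", demand=-40)
G.add_node("source_B", demand=-30)
G.add_node("plant_1", demand=0)      # manufacturing units (transshipment)
G.add_node("plant_2", demand=0)
G.add_node("dest", demand=70)

# weight = per-unit transport/processing cost; capacity = unit throughput.
G.add_edge("source_A", "plant_1", capacity=40, weight=2)
G.add_edge("source_A", "plant_2", capacity=25, weight=4)
G.add_edge("source_B", "plant_1", capacity=20, weight=5)
G.add_edge("source_B", "plant_2", capacity=30, weight=1)
G.add_edge("plant_1", "dest", capacity=50, weight=3)
G.add_edge("plant_2", "dest", capacity=50, weight=2)

flow = nx.min_cost_flow(G)           # {u: {v: units routed on edge (u, v)}}
print(flow, nx.cost_of_flow(G, flow))
```

Dynamic decision-making such as rerouting then corresponds to updating edge capacities, weights, and demands from the real-time network state and re-solving.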
We propose a Maximum Entropy Principle (MEP) based framework that combines AI/ML techniques with tools from statistical mechanics to address various combinatorial optimization problems, including simultaneous resource allocation and routing problems that model material/job flows. This framework accommodates capacity constraints on materials and machine capabilities, uncertainty distributions or constraints on the parameter space, and parameter dynamics (as quantities and capabilities change with time). An interesting feature of our framework is that it allows for sensitivity analysis with respect to various design parameters (such as the number of machines, facility locations, and transportation costs); thus, we can quantify how effective changes in these parameters will be in terms of the resulting performance (cost) function.
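The sketch below illustrates the MEP mechanism on a toy facility-location/allocation instance (the job sites, facility count, and annealing schedule are hypothetical): at temperature T, jobs are softly associated to facilities through Gibbs weights, facilities move to the weighted centroids of their jobs, and T is annealed so the solution passes from the entropy-dominated to the cost-dominated regime.

```python
# Minimal sketch of an MEP/deterministic-annealing allocation subproblem.
# Problem data (job sites, number of facilities, schedule) are hypothetical.
import numpy as np

rng = np.random.default_rng(1)
jobs = rng.uniform(0, 10, size=(200, 2))          # job/source locations
K = 4                                             # facilities to place
sites = jobs.mean(0) + 1e-3 * rng.normal(size=(K, 2))

T, T_min, cool = 50.0, 0.01, 0.9
while T > T_min:
    for _ in range(20):                           # fixed-point iterations at T
        d2 = ((jobs[:, None, :] - sites[None, :, :]) ** 2).sum(-1)
        logits = -d2 / T
        logits -= logits.max(1, keepdims=True)    # numerical stability
        p = np.exp(logits)
        p /= p.sum(1, keepdims=True)              # Gibbs weights p(j|i)
        sites = (p.T @ jobs) / p.sum(0)[:, None]  # weighted centroid update
    T *= cool                                     # anneal: entropy to cost

cost = ((jobs[:, None, :] - sites[None, :, :]) ** 2).sum(-1).min(1).mean()
print("facility sites:\n", sites, "\navg assignment cost:", cost)
```

Design-parameter sensitivity (e.g., to the number of facilities or to the cost weights) can then be probed by perturbing those parameters and re-annealing from the current solution.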
Adaptivity in the Supply Chain
We have extensive experience in developing and analyzing dynamic decision models under uncertainty, with applications in supply chain management and revenue management. Using a tractable robust optimization framework for dynamic settings based on decision-rule approximations, we have successfully solved problems in inventory management and water reservoir operations. We have demonstrated that robust optimization models are (a) scalable to multi-stage settings, with the resulting algorithms implementable via linear programming (LP) or second-order cone programming (SOCP), and (b) practical and easy to implement, requiring only mild assumptions on distributions, such as mean, support, and variability. Our preliminary work in supply chain management has developed and analyzed operations models that consider supply and demand risk in a variety of centralized/decentralized supply chain systems, and has designed coordination and collaboration mechanisms for complex decentralized supply chains [46]. Moving forward, as noted above, dynamic reconfiguration of manufacturing networks requires collecting data from equipment, product components, and customer orders to obtain an overall picture of the status of production, and subjecting these data to analysis methods that combine stochastics and artificial intelligence to detect possible bottlenecks, anticipate failures, and infer means to ensure seamless continuation of the manufacturing processes. This requires integrating the correct-by-construction processes and intelligent planning methods from Thrusts 1 and 2. A key challenge will be to enable data analysis and propose reconfigurations in real time.
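To illustrate point (a), the sketch below (with hypothetical demand, cost, and capacity data) formulates a multi-stage robust inventory problem with linear decision rules over a box uncertainty set, expressed with cvxpy; the robust counterpart collapses to a single LP, and an ellipsoidal uncertainty set would yield an SOCP by the same construction.

```python
# Minimal sketch (hypothetical data): multi-stage robust inventory with
# linear decision rules. The order q_t is affine in the demands observed
# before period t, and every constraint must hold for all demand vectors
# in the box [d_mid - r, d_mid + r]. The robust counterpart is a single LP.
import cvxpy as cp
import numpy as np

T, I0, qmax = 6, 5.0, 30.0                 # horizon, initial stock, order cap
c, h = 1.0, 0.2                            # unit ordering / holding costs
d_mid, r = np.full(T, 10.0), np.full(T, 3.0)   # demand box: center, radius

a = cp.Variable(T)                         # decision rule: q = a + B d
B = cp.Variable((T, T))
L = np.tril(np.ones((T, T)))               # cumulative-sum operator
G = L @ (B - np.eye(T))                    # inventory: I = I0 + L a + G d

cons = [B[t, s] == 0 for t in range(T) for s in range(t, T)]  # non-anticipativity
q_nom = a + B @ d_mid                      # orders at the nominal demand
I_nom = I0 + L @ a + G @ d_mid             # inventory at the nominal demand
cons += [cp.abs(B) @ r <= q_nom,           # q_t >= 0    for all d in the box
         q_nom + cp.abs(B) @ r <= qmax,    # q_t <= qmax for all d in the box
         cp.abs(G) @ r <= I_nom]           # I_t >= 0    for all d in the box

# Total cost is linear in d, so its worst case over the box adds |coef| @ r.
d_coef = c * cp.sum(B, axis=0) + h * cp.sum(G, axis=0)
worst_cost = c * cp.sum(q_nom) + h * cp.sum(I_nom) + cp.abs(d_coef) @ r
prob = cp.Problem(cp.Minimize(worst_cost), cons)
prob.solve()
print("worst-case cost:", prob.value)
```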