
Introduction

Pre-trained large language models (LLMs) have gained significant attention in natural language processing (NLP), especially for tasks such as text summarization, generation, and question answering. Their success can be attributed to the attention mechanism introduced in Transformer models, which have outperformed traditional recurrent neural networks (e.g., LSTMs) in modeling sequential data.


In this paper, we leverage pre-trained causal language models for the downstream task of Failure Analysis Triplet Generation (FATG): generating the sequence of failure analysis decision steps used to identify failure root causes in the semiconductor industry. In particular, we conduct an extensive comparative analysis of transformer models for the FATG task and find that the BERT-GPT-2 Transformer (Big GCVAE), fine-tuned on our proposed Generalized-Controllable Variational AutoEncoder (GCVAE) loss, exhibits superior performance, generating an informative latent space by promoting disentanglement of latent factors.


Specifically, we observe that fine-tuning the Transformer-style BERT-GPT-2 model on the GCVAE loss yields an optimal representation by reducing the trade-off between reconstruction loss and KL-divergence, producing meaningful, diverse, and coherent Failure Analysis Triplets (FATs) that align with expert expectations.


Figure 1: Symbolic representation of FATG decision-making process

Failure Analysis Triplet Generation (FATG)

FATG is a scientific process that aims to generate a sequence of failure analysis texts for a given failure description. We approach FATG as a data-to-text generation task: the input is a Failure Description Report (FDR) represented as structured tabular data, and the output is a long sequence of failure analysis triplets.


Throughout the remainder of this paper, the term pre-triplet refers to failure descriptions, while triplets (Step type; Substep technique; Equipment) denote the collection of analyses, each comprising these three key components. Each triplet contributes to a failure decision, and the main objective of this paper is to generate the set of n triplets that corresponds to a specific failure description, as in the sketch below.
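
To make the input/output contract concrete, here is a minimal sketch of the data involved. The field values are purely illustrative examples, not entries from the paper's proprietary dataset.

```python
from dataclasses import dataclass

@dataclass
class FailureAnalysisTriplet:
    """One failure analysis decision step."""
    step_type: str          # e.g., a failure verification or localization step
    substep_technique: str  # the technique applied within that step
    equipment: str          # the equipment used to carry it out

# A pre-triplet (failure description) maps to an ordered list of n triplets.
pre_triplet = "Device fails continuity test at final test; pin A3 reads open."  # illustrative
fats = [
    FailureAnalysisTriplet("Electrical Failure Verification", "Continuity test", "Curve tracer"),
    FailureAnalysisTriplet("Global Fault Localisation", "X-ray inspection", "X-ray system"),
    FailureAnalysisTriplet("Physical Analysis", "Decapsulation", "Laser decapsulator"),
]
```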


FRACAS Variables

FRACAS stands for "Failure Reporting, Analysis, and Corrective Action System." It is a systematic approach used in industries, particularly in engineering and manufacturing, to manage the identification, analysis, and resolution of failures or defects in products, systems, or processes.


We describe the three important variables (the triplet components) used for decision-making: the Step type, the Substep technique, and the Equipment. Their distributions in the dataset are shown below.


Figures 2-4: Percentage distribution of Step types, Substep techniques, and Equipment used in failure analysis

Big GCVAE Architecture

Leveraging our prior understanding of variational autoencoders for learning high-quality latent representations and optimal reconstruction of objects, including text and images, we propose an improved variational large language model. This model, Big GCVAE, is adapted for the FATG task by tying together two different transformer architectures (an encoder-only and a decoder-only model) and fine-tuning them with the GCVAE loss function.


The model is structured like a classic Transformer but loaded with pre-trained weights: the Encoder is an unmasked BERT model, the Decoder is a GPT-2 model (e.g., GPT-2 small, medium, or large), and both are fine-tuned on a loss function with adaptive hyperparameters.


Figure 5: An illustration of the Big GCVAE architecture. On the left, unmasked BERT weights are loaded into the Encoder; GPT-2 weights are loaded into the Decoder.
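
For orientation, the Hugging Face transformers library can already tie a pre-trained BERT encoder to a pre-trained GPT-2 decoder; the sketch below shows that wiring with illustrative checkpoint names. Note that EncoderDecoderModel couples the two through cross-attention, whereas Big GCVAE instead routes information through a variational latent bottleneck (see the latent-injection sketch in the next section).

```python
from transformers import EncoderDecoderModel

# Load pre-trained weights into an encoder-decoder frame:
# a BERT encoder tied to a GPT-2 decoder (checkpoints are illustrative).
model = EncoderDecoderModel.from_encoder_decoder_pretrained(
    "bert-base-uncased",  # Encoder: BERT weights
    "gpt2-medium",        # Decoder: GPT-2 weights (small/large also possible)
)
print(model.config.is_encoder_decoder)  # True
```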

Key Technical Innovations

Unmasking: Masking, introduced in the BERT model, involves selectively hiding certain tokens within an input sequence during the pre-training phase. However, when incorporating a decoder component such as the GPT-2 model to complete the Big GCVAE Encoder-Decoder model, we hypothesize that the exclusive use of masking limits the model's ability to learn a high-quality bidirectional representation. This restriction hampers the generalization of the latent space and considerably diminishes the mutual information within the bottleneck. We therefore omit masking, allowing constructive summarization of mutual information within the latent space.


Latent Injection: Following an approach similar to BERT, the first token of every sentence in Big GCVAE is a special classification token ([CLS]). The hidden state h[CLS] in the last layer, corresponding to this token, is extracted as the sentence-level representation. To construct the latent representation, we apply a weight matrix WE ∈ ℝP×H, yielding the P-dimensional latent vector z = WE h[CLS].
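
The following is a minimal sketch of latent injection, assuming a diagonal-Gaussian posterior and injecting z into the decoder as a single soft prefix embedding; the exact injection strategy and dimensions in the paper may differ, and the checkpoint names are illustrative.

```python
import torch
import torch.nn as nn
from transformers import BertModel, BertTokenizer, GPT2LMHeadModel

encoder = BertModel.from_pretrained("bert-base-uncased")
decoder = GPT2LMHeadModel.from_pretrained("gpt2")
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

P, H = 64, encoder.config.hidden_size              # latent dim P (assumed), hidden dim H
W_mu, W_logvar = nn.Linear(H, P), nn.Linear(H, P)  # play the role of W_E
z_to_prefix = nn.Linear(P, decoder.config.n_embd)

enc = tokenizer("short between pins A3 and A4", return_tensors="pt")
h_cls = encoder(**enc).last_hidden_state[:, 0]           # h[CLS]: sentence-level summary
mu, logvar = W_mu(h_cls), W_logvar(h_cls)
z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick

# Inject z as a soft prefix token ahead of the decoder's FAT tokens.
prefix = z_to_prefix(z).unsqueeze(1)                     # (batch, 1, n_embd)
dec_ids = torch.tensor([[decoder.config.bos_token_id]])
tok_emb = decoder.transformer.wte(dec_ids)
logits = decoder(inputs_embeds=torch.cat([prefix, tok_emb], dim=1)).logits
```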

GCVAE Loss Function

The proposed optimization framework uses adaptive hyperparameters αt, βt, and γt:

L(θ, φ; αt, βt, γt) = (1 − αt) Ez∼qφ(z|x)[ln pθ(x|z)] − βt ED[DKL(qφ(z|x) ‖ pθ(z))] + γt DKL(qφ(z) ‖ pθ(z))

The adaptive weight αt controls the reconstruction error while βt ensures the posterior latent factor qφ(z|x) does not deviate significantly from its prior pθ(z). Varying both terms gives us better control of the degree of disentanglement and helps us understand the parameters affecting density disentanglement.
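
A minimal PyTorch sketch of this objective is given below, written in minimization form (so the reconstruction term enters as a negative log-likelihood). The closed-form Gaussian KL is standard; the minibatch moment-matching estimate of the marginal term DKL(qφ(z) ‖ pθ(z)) and the sign convention are assumptions for illustration, not the paper's exact estimator.

```python
import torch

def gcvae_loss(recon_nll, mu, logvar, alpha_t, beta_t, gamma_t):
    """Sketch of the GCVAE objective with adaptive weights (minimization form)."""
    # Closed-form D_KL(q(z|x) || p(z)) for a diagonal Gaussian posterior vs. N(0, I).
    kl_cond = 0.5 * (mu.pow(2) + logvar.exp() - 1.0 - logvar).sum(dim=1).mean()

    # Crude minibatch stand-in for the marginal D_KL(q(z) || p(z)):
    # match the aggregate posterior's first two moments against the prior's.
    z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
    agg_mu = z.mean(dim=0)
    agg_var = z.var(dim=0, unbiased=False) + 1e-8
    kl_marg = 0.5 * (agg_mu.pow(2) + agg_var - 1.0 - agg_var.log()).sum()

    # alpha_t scales reconstruction; beta_t and gamma_t weight the two KL terms.
    return (1 - alpha_t) * recon_nll + beta_t * kl_cond + gamma_t * kl_marg
```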


Experimentation

Dataset

In our experiments, we use real failure analysis data from the semiconductor industry, focusing on successful failure analysis cases from the year 2019. To prepare the data for training the transformer model, we concatenate all input features, including the triplet data, along the horizontal axis (x-axis). After preprocessing, the 2019 dataset contains 5,809 observations (failure analyses), of which 70% (4,066) are used for training and 30% (1,743) for evaluation.
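
A minimal pandas sketch of this preparation is shown below; the file path and column names are hypothetical placeholders, since the dataset schema is proprietary.

```python
import pandas as pd
from sklearn.model_selection import train_test_split

df = pd.read_csv("fa_reports_2019.csv")  # hypothetical path

# Concatenate the structured FDR fields and the triplet text along the
# horizontal axis into one training sequence per failure analysis.
fdr_cols = ["reference", "subject", "site", "requested_activity"]  # hypothetical names
df["sequence"] = df[fdr_cols].astype(str).agg(" ".join, axis=1) + " " + df["triplets"]

# 70/30 train/evaluation split (4,066 / 1,743 on the 5,809-row dataset).
train_df, eval_df = train_test_split(df, test_size=0.30, random_state=42)
```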

Experimental Setup

The experimentation is conducted on a High-Performance Computing (HPC) cluster comprising 2 × Intel Xeon E5-2698 v4 2.20 GHz CPUs (80 cores total), 512 GB RAM, and 8 × Nvidia V100 32 GB GPUs.


We fine-tune the medium version of GPT-2, with 335 million parameters, after downloading the pre-trained weights through the Hugging Face API. The batch size for both training and evaluation is 1, the weight decay is 0.05, and the number of training epochs is 100.
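
These stated hyperparameters map directly onto Hugging Face TrainingArguments, as in the sketch below; the output directory and any unlisted arguments are illustrative defaults.

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="big-gcvae-ckpt",    # hypothetical checkpoint directory
    per_device_train_batch_size=1,  # batch size 1 for training...
    per_device_eval_batch_size=1,   # ...and for evaluation
    weight_decay=0.05,
    num_train_epochs=100,
)
```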

Results

Quantitative Evaluation

We compare the performance of Big GCVAE against derivative models of GCVAE, such as ControlVAE and β-VAE with annealed KL-divergence. We adopt two versions of Big GCVAE, distinguished by the correlation measure used:

Model           Evaluation loss ↓   Reconstruction loss ↓   KL divergence ↑
GPT2-M          0.19                —                       —
BigVAE          1.10                128.34                  6.49
BigControlVAE   1.18                1.10                    9.85
BigGCVAE†       1.18                1.09                    8.23
BigGCVAE‡       1.11                1.09                    3.80

Table 1: Performance evaluation of Big GCVAE models and their derivatives (—: not reported). Both Big GCVAE† and Big GCVAE‡ achieve a lower reconstruction loss than BigVAE.

The Big GCVAE‡ model demonstrates superior performance compared to the benchmark Big variational models across various evaluation metrics. When applied to the FATG task, Big GCVAE‡ also outperforms GPT-2 medium, highlighting the efficacy of controllable Lagrangian hyperparameters in achieving an optimal representation and generalizing the latent space.

Model           BLEU-1   BLEU-3   ROUGE-1 F1   ROUGE-L F1   LESE-1 F1   LESE-3 F1   METEOR
GPT2-M          11.64    7.50     16.12        14.79        8.57        0.30        16.0
BigVAE          15.10    8.55     11.79        10.06        4.04        1.28        14.0
BigControlVAE   15.10    9.05     10.39        10.01        4.08        1.28        14.0
BigGCVAE†       14.60    9.07     11.81        10.11        4.04        1.28        14.0
BigGCVAE‡       22.46    17.60    34.77        32.63        24.91       10.73       16.0

Table 3: Performance comparison of models. Observe that Big GCVAE‡ performs best on the ROUGE-1 and LESE-1 metrics. Higher values indicate better performance for all metrics.
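
For reference, the standard metrics in this table can be computed with the Hugging Face evaluate library, as sketched below with made-up prediction/reference strings; LESE is the authors' own Levenshtein-based sequential metric and is not available in standard libraries.

```python
import evaluate  # Hugging Face evaluate library

bleu = evaluate.load("bleu")
rouge = evaluate.load("rouge")
meteor = evaluate.load("meteor")

# Illustrative generated FAT vs. expert ground truth (not real dataset entries).
preds = ["electrical failure verification; continuity test; curve tracer"]
refs = ["electrical failure verification; continuity test; curve tracer"]

print(bleu.compute(predictions=preds, references=[[r] for r in refs], max_order=3))  # BLEU-3
print(rouge.compute(predictions=preds, references=refs))   # ROUGE-1/L F1
print(meteor.compute(predictions=preds, references=refs))  # METEOR
```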

Qualitative Evaluation

We evaluate the generative capabilities of the Big GCVAE models and their variants. The results reveal a notable improvement in the distributions of the BLEU-1, BLEU-3, LESE-1, and LESE-3 scores, showing an increased frequency of accurately generated FATs that closely align with the expert failure analyses.


Compared to the decoder-only transformer model (GPT-2), Big GCVAE exhibits the potential to generate failure analysis sequences that are notably more realistic (following the order Step type; Substep technique; Equipment). This improvement can be attributed to Big GCVAE's ability to generalize effectively within the latent embedding space associated with the task.


Real-World Use Cases

The paper presents real-world use cases demonstrating the generative efficacy of Big GCVAE; we refer the reader to the full paper for the detailed examples.



Limitations

Despite the strong performance of the Big GCVAE (BERT-GPT-2) model for failure analysis triplet generation, it still faces a significant challenge that can be addressed. The model may occasionally generate unrealistic failure analysis triplets due to hallucination, which can stem from both overgeneralization and overfitting. No single metric perfectly captures this phenomenon; it is best assessed through quantitative evaluation combined with domain-expert review.


This limitation highlights the need for further refinement, for example by prompt-engineering the failure descriptions or applying reinforcement learning to mitigate unrealistic outputs.


Conclusion

To overcome the challenges of robust representation and high-quality generation of failure analysis triplets, we propose a new approach: fine-tuning a Transformer-based Variational Autoencoder (VAE) architecture built from an unmasked pre-trained BERT Encoder and a GPT-2 Decoder. By leveraging the Generalized-Controllable Variational AutoEncoder (GCVAE) loss, our model achieves an optimized representation with low reconstruction loss and a highly disentangled latent space.


Our evaluation of the model's performance in generating failure analysis triplets yields the key findings summarized below.

In summary, Big GCVAE is a robust model that can generate failure analysis triplets (sequences of text-encoded steps for analyzing defective components in the semiconductor industry) that are logical, reasonable, and tailored to specific problems. The model is able to do this by learning to represent failure analysis triplets in a latent space that is both disentangled and informative.

Read the Full Paper

For detailed methodology, additional results, and comprehensive analysis, please refer to the full research paper published in the Journal of Intelligent Manufacturing.

7:["id","big-gcvae","d"] 0:["b2TsEGqQOWST_PpOaC7LK",[[["",{"children":["blog",{"children":[["id","big-gcvae","d"],{"children":["__PAGE__?{\"id\":\"big-gcvae\"}",{}]}]}]},"$undefined","$undefined",true],["",{"children":["blog",{"children":[["id","big-gcvae","d"],{"children":["__PAGE__",{},[["$L1",["$","main",null,{"children":[["$","$L2",null,{}],["$","article",null,{"className":"min-h-screen pt-24 pb-16 px-6","children":["$","div",null,{"className":"container mx-auto max-w-4xl","children":[["$","$L3",null,{"href":"/blog","className":"inline-flex items-center gap-2 text-muted-foreground hover:text-primary transition-colors mb-8 font-body","children":[["$","svg",null,{"xmlns":"http://www.w3.org/2000/svg","width":24,"height":24,"viewBox":"0 0 24 24","fill":"none","stroke":"currentColor","strokeWidth":2,"strokeLinecap":"round","strokeLinejoin":"round","className":"lucide lucide-arrow-left w-4 h-4","children":[["$","path","1l729n",{"d":"m12 19-7-7 7-7"}],["$","path","x3x0zl",{"d":"M19 12H5"}],"$undefined"]}],"Back to Blog"]}],["$","div",null,{"className":"mb-8","children":[["$","div",null,{"className":"flex flex-wrap items-center gap-2 mb-4","children":[["$","span",null,{"className":"text-sm text-primary font-body font-medium","children":"Research"}],["$","span",null,{"className":"text-muted-foreground","children":"•"}],["$","span",null,{"className":"text-sm text-muted-foreground font-body","children":"2025-01-15"}],["$","span",null,{"className":"text-muted-foreground","children":"•"}],["$","span",null,{"className":"text-sm text-primary font-body","children":[40," citations"]}],[["$","span",null,{"className":"text-muted-foreground","children":"•"}],["$","span",null,{"className":"text-sm text-muted-foreground font-body","children":"Journal of Intelligent Manufacturing 36 (4), 2423-2438"}]]]}],["$","h1",null,{"className":"font-display text-4xl md:text-5xl lg:text-6xl font-medium mb-4","children":"Big GCVAE: Decision-making with Adaptive Transformer Model for Failure Root Cause Analysis in Semiconductor Industry"}],["$","p",null,{"className":"text-lg text-muted-foreground font-body mb-4","children":["By ","K Ezukwoke, A Hoayek, M Batton-Hubert, X Boucher, P Gounet, J Adrian"]}],["$","a",null,{"href":"https://scholar.google.com/scholar?q=Big+GCVAE+decision-making+with+adaptive+transformer+model+Ezukwoke+2025","target":"_blank","rel":"noopener noreferrer","className":"inline-flex items-center gap-2 text-primary hover:text-primary/80 transition-colors font-body text-sm font-medium","children":["Read Full Paper ",["$","svg",null,{"xmlns":"http://www.w3.org/2000/svg","width":24,"height":24,"viewBox":"0 0 24 24","fill":"none","stroke":"currentColor","strokeWidth":2,"strokeLinecap":"round","strokeLinejoin":"round","className":"lucide lucide-external-link w-4 h-4","children":[["$","path","a6xqqp",{"d":"M18 13v6a2 2 0 0 1-2 2H5a2 2 0 0 1-2-2V8a2 2 0 0 1 2-2h6"}],["$","polyline","mznyad",{"points":"15 3 21 3 21 9"}],["$","line","18c3s4",{"x1":"10","x2":"21","y1":"14","y2":"3"}],"$undefined"]}]]}]]}],["$","div",null,{"className":"prose prose-lg max-w-none font-body prose-headings:font-display prose-headings:font-medium prose-h2:text-3xl prose-h2:mt-12 prose-h2:mb-6 prose-h3:text-2xl prose-h3:mt-8 prose-h3:mb-4 prose-h4:text-xl prose-h4:mt-6 prose-h4:mb-3 prose-p:text-base prose-p:leading-relaxed prose-p:mb-4 prose-ul:list-disc prose-ul:ml-6 prose-ul:mb-4 prose-ol:list-decimal prose-ol:ml-6 prose-ol:mb-4 prose-li:mb-2 prose-strong:font-semibold prose-strong:text-foreground prose-table:w-full 
prose-table:border-collapse prose-th:border prose-th:border-border prose-th:p-4 prose-th:bg-muted/50 prose-th:font-medium prose-td:border prose-td:border-border prose-td:p-4 prose-a:text-primary prose-a:no-underline hover:prose-a:underline","dangerouslySetInnerHTML":{"__html":"$4"}}]]}]}],["$","footer",null,{"id":"footer","className":"border-t border-border/50 bg-card/30","children":["$","div",null,{"className":"container mx-auto px-6 py-16","children":[["$","div",null,{"className":"grid grid-cols-2 md:grid-cols-5 gap-8","children":[["$","div",null,{"className":"col-span-2 md:col-span-1","children":[["$","a",null,{"href":"/","className":"flex items-center gap-3 mb-4","children":[["$","$L5",null,{"src":"/images/logo/quadapt_logo.png","alt":"QuadaptAI Logo","width":32,"height":32,"className":"h-8 w-auto"}],["$","span",null,{"className":"font-display font-semibold","children":"QuadaptAI"}]]}],["$","p",null,{"className":"text-sm text-muted-foreground font-body mb-4","children":"Based in Paris, France"}],["$","div",null,{"className":"flex gap-3 text-muted-foreground","children":[["$","a",null,{"href":"https://twitter.com","target":"_blank","rel":"noopener noreferrer","className":"hover:text-primary transition-colors","children":["$","svg",null,{"className":"w-5 h-5","fill":"currentColor","viewBox":"0 0 24 24","children":["$","path",null,{"d":"M18.244 2.25h3.308l-7.227 8.26 8.502 11.24H16.17l-5.214-6.817L4.99 21.75H1.68l7.73-8.835L1.254 2.25H8.08l4.713 6.231zm-1.161 17.52h1.833L7.084 4.126H5.117z"}]}]}],["$","a",null,{"href":"https://linkedin.com","target":"_blank","rel":"noopener noreferrer","className":"hover:text-primary transition-colors","children":["$","svg",null,{"className":"w-5 h-5","fill":"currentColor","viewBox":"0 0 24 24","children":["$","path",null,{"d":"M20.447 20.452h-3.554v-5.569c0-1.328-.027-3.037-1.852-3.037-1.853 0-2.136 1.445-2.136 2.939v5.667H9.351V9h3.414v1.561h.046c.477-.9 1.637-1.85 3.37-1.85 3.601 0 4.267 2.37 4.267 5.455v6.286zM5.337 7.433c-1.144 0-2.063-.926-2.063-2.065 0-1.138.92-2.063 2.063-2.063 1.14 0 2.064.925 2.064 2.063 0 1.139-.925 2.065-2.064 2.065zm1.782 13.019H3.555V9h3.564v11.452zM22.225 0H1.771C.792 0 0 .774 0 1.729v20.542C0 23.227.792 24 1.771 24h20.451C23.2 24 24 23.227 24 22.271V1.729C24 .774 23.2 0 22.222 0h.003z"}]}]}]]}]]}],[["$","div","Products",{"children":[["$","h4",null,{"className":"font-display font-medium text-foreground mb-4","children":"Products"}],["$","ul",null,{"className":"space-y-2","children":[["$","li","Failure Analysis Agent",{"children":["$","$L3",null,{"href":"/products#failure-analysis-agent","className":"text-sm text-muted-foreground hover:text-primary transition-colors font-body","children":"Failure Analysis Agent"}]}],["$","li","Insight Generation",{"children":["$","$L3",null,{"href":"/products#insight-generation","className":"text-sm text-muted-foreground hover:text-primary transition-colors font-body","children":"Insight Generation"}]}],["$","li","Developer Platform",{"children":["$","$L3",null,{"href":"/products#knowledge-extraction","className":"text-sm text-muted-foreground hover:text-primary transition-colors font-body","children":"Developer Platform"}]}],["$","li","All Products",{"children":["$","$L3",null,{"href":"/products","className":"text-sm text-muted-foreground hover:text-primary transition-colors font-body","children":"All Products"}]}]]}]]}],["$","div","Research",{"children":[["$","h4",null,{"className":"font-display font-medium text-foreground 
mb-4","children":"Research"}],["$","ul",null,{"className":"space-y-2","children":[["$","li","Publications",{"children":["$","$L3",null,{"href":"/research","className":"text-sm text-muted-foreground hover:text-primary transition-colors font-body","children":"Publications"}]}],["$","li","Case Studies",{"children":["$","$L3",null,{"href":"/#news","className":"text-sm text-muted-foreground hover:text-primary transition-colors font-body","children":"Case Studies"}]}],["$","li","Technical Papers",{"children":["$","$L3",null,{"href":"/research","className":"text-sm text-muted-foreground hover:text-primary transition-colors font-body","children":"Technical Papers"}]}],["$","li","Open Source",{"children":["$","$L3",null,{"href":"/research","className":"text-sm text-muted-foreground hover:text-primary transition-colors font-body","children":"Open Source"}]}]]}]]}],["$","div","Company",{"children":[["$","h4",null,{"className":"font-display font-medium text-foreground mb-4","children":"Company"}],["$","ul",null,{"className":"space-y-2","children":[["$","li","About",{"children":["$","$L3",null,{"href":"/company","className":"text-sm text-muted-foreground hover:text-primary transition-colors font-body","children":"About"}]}],["$","li","Careers",{"children":["$","$L3",null,{"href":"/careers","className":"text-sm text-muted-foreground hover:text-primary transition-colors font-body","children":"Careers"}]}],["$","li","News",{"children":["$","$L3",null,{"href":"/#news","className":"text-sm text-muted-foreground hover:text-primary transition-colors font-body","children":"News"}]}],["$","li","Contact",{"children":["$","$L3",null,{"href":"/contact","className":"text-sm text-muted-foreground hover:text-primary transition-colors font-body","children":"Contact"}]}]]}]]}],["$","div","Resources",{"children":[["$","h4",null,{"className":"font-display font-medium text-foreground mb-4","children":"Resources"}],["$","ul",null,{"className":"space-y-2","children":[["$","li","Documentation",{"children":["$","$L3",null,{"href":"/documentation","className":"text-sm text-muted-foreground hover:text-primary transition-colors font-body","children":"Documentation"}]}],["$","li","Academy",{"children":["$","$L3",null,{"href":"/academy","className":"text-sm text-muted-foreground hover:text-primary transition-colors font-body","children":"Academy"}]}],["$","li","Blog",{"children":["$","$L3",null,{"href":"/blog","className":"text-sm text-muted-foreground hover:text-primary transition-colors font-body","children":"Blog"}]}],["$","li","Support",{"children":["$","$L3",null,{"href":"/contact","className":"text-sm text-muted-foreground hover:text-primary transition-colors font-body","children":"Support"}]}]]}]]}]]]}],["$","div",null,{"className":"border-t border-border/50 mt-12 pt-8 flex flex-col md:flex-row justify-between items-center gap-4","children":[["$","p",null,{"className":"text-sm text-muted-foreground font-body","children":["© ",2026," QuadaptAI. 
All rights reserved."]}],["$","div",null,{"className":"flex gap-6 text-sm text-muted-foreground font-body","children":[["$","a",null,{"href":"/company","className":"hover:text-primary transition-colors","children":"Privacy Policy"}],["$","a",null,{"href":"/company","className":"hover:text-primary transition-colors","children":"Terms of Service"}],["$","a",null,{"href":"/research","className":"hover:text-primary transition-colors","children":"Responsible AI"}]]}]]}]]}]}]]}],null],null],null]},[null,["$","$L6",null,{"parallelRouterKey":"children","segmentPath":["children","blog","children","$7","children"],"error":"$undefined","errorStyles":"$undefined","errorScripts":"$undefined","template":["$","$L8",null,{}],"templateStyles":"$undefined","templateScripts":"$undefined","notFound":"$undefined","notFoundStyles":"$undefined"}]],null]},[null,["$","$L6",null,{"parallelRouterKey":"children","segmentPath":["children","blog","children"],"error":"$undefined","errorStyles":"$undefined","errorScripts":"$undefined","template":["$","$L8",null,{}],"templateStyles":"$undefined","templateScripts":"$undefined","notFound":"$undefined","notFoundStyles":"$undefined"}]],null]},[[[["$","link","0",{"rel":"stylesheet","href":"/_next/static/css/f34024b8cab46471.css","precedence":"next","crossOrigin":"$undefined"}]],["$","html",null,{"lang":"en","children":["$","body",null,{"children":[["$","$L9",null,{}],["$","$La",null,{"gaId":"G-9LZTSRBKWK"}],["$","$L6",null,{"parallelRouterKey":"children","segmentPath":["children"],"error":"$undefined","errorStyles":"$undefined","errorScripts":"$undefined","template":["$","$L8",null,{}],"templateStyles":"$undefined","templateScripts":"$undefined","notFound":[["$","title",null,{"children":"404: This page could not be found."}],["$","div",null,{"style":{"fontFamily":"system-ui,\"Segoe UI\",Roboto,Helvetica,Arial,sans-serif,\"Apple Color Emoji\",\"Segoe UI Emoji\"","height":"100vh","textAlign":"center","display":"flex","flexDirection":"column","alignItems":"center","justifyContent":"center"},"children":["$","div",null,{"children":[["$","style",null,{"dangerouslySetInnerHTML":{"__html":"body{color:#000;background:#fff;margin:0}.next-error-h1{border-right:1px solid rgba(0,0,0,.3)}@media (prefers-color-scheme:dark){body{color:#fff;background:#000}.next-error-h1{border-right:1px solid rgba(255,255,255,.3)}}"}}],["$","h1",null,{"className":"next-error-h1","style":{"display":"inline-block","margin":"0 20px 0 0","padding":"0 23px 0 0","fontSize":24,"fontWeight":500,"verticalAlign":"top","lineHeight":"49px"},"children":"404"}],["$","div",null,{"style":{"display":"inline-block"},"children":["$","h2",null,{"style":{"fontSize":14,"fontWeight":400,"lineHeight":"49px","margin":0},"children":"This page could not be found."}]}]]}]}]],"notFoundStyles":[]}],["$","$Lb",null,{}]]}]}]],null],null],["$Lc",null]]]] c:[["$","meta","0",{"name":"viewport","content":"width=device-width, initial-scale=1"}],["$","meta","1",{"charSet":"utf-8"}],["$","title","2",{"children":"QuadaptAI - Autonomous AI for Semiconductor Analysis"}],["$","meta","3",{"name":"description","content":"QuadaptAI delivers autonomous intelligence for semiconductor analysis with unprecedented accuracy. 
Advanced AI-powered failure analysis and root cause detection for semiconductor manufacturing."}],["$","meta","4",{"name":"author","content":"QuadaptAI"}],["$","link","5",{"rel":"manifest","href":"/favicon/site.webmanifest","crossOrigin":"use-credentials"}],["$","meta","6",{"name":"keywords","content":"semiconductor analysis,AI failure analysis,root cause analysis,autonomous AI,semiconductor manufacturing,machine learning,failure detection,intelligent manufacturing"}],["$","meta","7",{"name":"creator","content":"QuadaptAI"}],["$","meta","8",{"name":"publisher","content":"QuadaptAI"}],["$","meta","9",{"name":"robots","content":"index, follow"}],["$","meta","10",{"name":"googlebot","content":"index, follow, max-video-preview:-1, max-image-preview:large, max-snippet:-1"}],["$","link","11",{"rel":"canonical","href":"https://quadaptai.ai"}],["$","meta","12",{"name":"format-detection","content":"telephone=no, address=no, email=no"}],["$","meta","13",{"property":"og:title","content":"QuadaptAI - Autonomous AI for Semiconductor Analysis"}],["$","meta","14",{"property":"og:description","content":"QuadaptAI delivers autonomous intelligence for semiconductor analysis with unprecedented accuracy."}],["$","meta","15",{"property":"og:url","content":"https://quadaptai.ai"}],["$","meta","16",{"property":"og:site_name","content":"QuadaptAI"}],["$","meta","17",{"property":"og:locale","content":"en_US"}],["$","meta","18",{"property":"og:image","content":"https://quadaptai.ai/images/logo/quadapt_logo.png"}],["$","meta","19",{"property":"og:image:width","content":"1200"}],["$","meta","20",{"property":"og:image:height","content":"630"}],["$","meta","21",{"property":"og:image:alt","content":"QuadaptAI Logo"}],["$","meta","22",{"property":"og:type","content":"website"}],["$","meta","23",{"name":"twitter:card","content":"summary_large_image"}],["$","meta","24",{"name":"twitter:creator","content":"@quadaptai"}],["$","meta","25",{"name":"twitter:title","content":"QuadaptAI - Autonomous AI for Semiconductor Analysis"}],["$","meta","26",{"name":"twitter:description","content":"QuadaptAI delivers autonomous intelligence for semiconductor analysis with unprecedented accuracy."}],["$","meta","27",{"name":"twitter:image","content":"https://quadaptai.ai/images/logo/quadapt_logo.png"}],["$","link","28",{"rel":"icon","href":"/favicon.ico","sizes":"any"}],["$","link","29",{"rel":"icon","href":"/favicon/favicon-16x16.png","sizes":"16x16","type":"image/png"}],["$","link","30",{"rel":"icon","href":"/favicon/favicon-32x32.png","sizes":"32x32","type":"image/png"}],["$","link","31",{"rel":"apple-touch-icon","href":"/favicon/apple-touch-icon.png","sizes":"180x180","type":"image/png"}]] 1:null