Leveraging Pre-trained Models for Failure Analysis Triplets Generation
By K Ezukwoke, A Hoayek, M Batton-Hubert, X Boucher, P Gounet, J Adrian
Research · 2022-10-17 · arXiv preprint arXiv:2210.17497
Introduction
Pre-trained language models have recently gained traction in the Natural Language Processing (NLP) domain for text summarization, generation and question-answering tasks. This traction stems from the innovations introduced by Transformer models and their overwhelming performance compared with recurrent neural network models such as Long Short-Term Memory (LSTM) networks.
In this paper, we leverage the attention mechanism of pre-trained causal language models built on the Transformer architecture for the downstream task of generating Failure Analysis Triplets (FATs) - sequences of steps for analyzing defective components in the semiconductor industry. We compare different transformer models on this generative task and observe that Generative Pre-trained Transformer 2 (GPT-2) outperforms the other transformer models on the failure analysis triplet generation (FATG) task.
In particular, we observe that GPT-2 (up to 1.5B parameters) outperforms pre-trained BERT, BART and GPT-3 by a large margin on ROUGE. Furthermore, we introduce the Levenshtein Sequential Evaluation (LESE) metric for better evaluation of the structured FAT data and show that it aligns more closely with human judgement than existing metrics.
Figure 1: Overview of Failure Analysis Triplet Generation process
Background: Failure Analysis in Semiconductor Industry
Root cause analysis (RCA) in the semiconductor industry is the process of discovering the root causes of a failure in order to identify appropriate actions to systematically prevent and solve the underlying issues. Reliability engineers (experts) in the semiconductor industry are usually tasked with carrying out an RCA technique known as Failure Mode and Effects Analysis (FMEA).
FMEA involves reviewing components, assemblies and subsystems to identify potential failure modes in a system and their root causes. The process is carried out to improve product reliability and quality, cut production costs and reduce the number of defective parts susceptible to failure. Inspection, testing, localization, failure reproduction and documentation are among the major steps of RCA.
FRACAS: Failure Reporting, Analysis and Corrective Action System
The Failure Reporting, Analysis and Corrective Action System (FRACAS) is a closed-loop feedback path by which pertinent reliability data is collected, recorded and analyzed, both in-house (in laboratories) and during production/operation, to determine where failures concentrate in the design of the equipment.
The heart of FRACAS is its database management system (DBMS), which classifies failure modes into categories that are essential for identifying the processes in the product (hardware/software) life cycle requiring the most attention for reliability improvement. The report obtained from the FRACAS DBMS contains information describing the type of failure and the origin of detection; a set of analyses (in the form of triplets - Step type, Substep technique and Equipment) proposed to find the failure root cause; and a conclusion on the outcome of the failure analysis.
What are Failure Analysis Triplets?
A Failure Analysis Triplet (FAT) consists of three components:
- Step type: The type of analysis step (e.g., "Sample preparation", "Physical Analysis", "Nondestructive Inspection")
- Substep technique: The specific technique used (e.g., "Package decap", "SEM", "X-ray")
- Equipment: The specific equipment used (e.g., "PHOENIX X-RAY NANOMEX", "CRI7", "LEICA M165C")
Example Failure Analysis Triplet
Step type: Nondestructive Inspection
Substep technique: X-ray
Equipment: PK103-PHOENIX X-RAY NANOMEX
A complete failure analysis may consist of multiple triplets in sequence, with the longest FA having 23 triplets.
Failure Analysis Triplet Generation (FATG) is the process of generating a series of sequential failure analysis texts associated with a failure description. We model FATG as a sequence-to-sequence data-to-text problem where the input is the Failure Description (FDR) - structured tabular data - and the output is a long sequence of failure analysis triplets.
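This data-to-text framing can be sketched in code. The field names, delimiters and helper below are illustrative choices for exposition, not the paper's actual preprocessing scheme:

```python
from dataclasses import dataclass

@dataclass
class FailureAnalysisTriplet:
    step_type: str
    substep_technique: str
    equipment: str

    def serialize(self) -> str:
        # Flatten the triplet into one delimited text span.
        return f"{self.step_type} ; {self.substep_technique} ; {self.equipment}"

def linearize(fdr_fields: dict, triplets: list) -> str:
    """Join the structured failure description (the input) with the
    ordered FAT sequence (the target) into a single training string."""
    prompt = " | ".join(f"{k}: {v}" for k, v in fdr_fields.items())
    target = " -> ".join(t.serialize() for t in triplets)
    return f"{prompt} || {target}"

fdr = {"Subject": "Package crack", "Priority level": "High"}
fats = [FailureAnalysisTriplet("Nondestructive Inspection", "X-ray",
                               "PK103-PHOENIX X-RAY NANOMEX")]
example = linearize(fdr, fats)
```

A causal language model fine-tuned on strings of this shape learns to continue an FDR prompt with the triplet sequence.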
Pre-trained Language Models for FATG
We evaluated several pre-trained transformer models for the FATG task:
GPT-2 (Generative Pre-trained Transformer 2)
GPT-2 is a transformer model that uses a multi-layer Transformer decoder instead of an encoder-decoder architecture. GPT-2 was trained on 40GB of WebText data (web pages from outbound links on Reddit), excluding Wikipedia pages. The texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) with a vocabulary size of 50,257.
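The merge step at the heart of BPE can be illustrated in a few lines of plain Python. This toy sketch works on characters rather than raw bytes and is not GPT-2's actual tokenizer:

```python
from collections import Counter

def most_frequent_pair(tokens):
    """Count adjacent symbol pairs; BPE greedily merges the most frequent."""
    pairs = Counter(zip(tokens, tokens[1:]))
    return pairs.most_common(1)[0][0]

def merge(tokens, pair):
    """Fuse every occurrence of the chosen pair into a single symbol."""
    out, i = [], 0
    while i < len(tokens):
        if i + 1 < len(tokens) and (tokens[i], tokens[i + 1]) == pair:
            out.append(tokens[i] + tokens[i + 1])
            i += 2
        else:
            out.append(tokens[i])
            i += 1
    return out

# Start from individual characters (byte-level BPE starts from bytes).
tokens = list("failure_analysis_failure")
pair = most_frequent_pair(tokens)
tokens = merge(tokens, pair)
```

Repeating this merge step builds a vocabulary of progressively longer subword units; GPT-2's released merge table yields its 50,257-token vocabulary.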
We fine-tuned three versions of GPT-2:
- GPT-2 (base): 117M parameters
- GPT-2 Medium: 335M parameters
- GPT-2 Large: 774M parameters
GPT-3
OPENAI-GPT-3 (175B parameters) was trained on roughly 500B tokens drawn from a filtered version of Common Crawl, WebText2, two internet-based book corpora, and English Wikipedia.
BART (Bidirectional Auto-Regressive Transformer)
BART is a Bidirectional Auto-Regressive Transformer trained by corrupting text with an arbitrary noising function and learning a model to reconstruct the original text. BART uses the standard Transformer-based neural machine translation architecture (an encoder-decoder Transformer).
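BART's denoising objective can be sketched as follows. The span-masking function below is a simplified stand-in for BART's actual noising schemes (token masking, token deletion, text infilling, sentence permutation, document rotation):

```python
import random

def corrupt(tokens, mask_token="<mask>", span_len=2, seed=0):
    """Text-infilling sketch: replace one contiguous span of tokens with a
    single mask token. The model is trained to reconstruct the original
    sequence from this corrupted input."""
    rng = random.Random(seed)
    start = rng.randrange(0, max(1, len(tokens) - span_len))
    return tokens[:start] + [mask_token] + tokens[start + span_len:]

original = ["the", "failure", "was", "traced", "to", "a", "cracked", "die"]
corrupted = corrupt(original)
```

Because one mask covers a multi-token span, the model must also infer how many tokens are missing, which is what distinguishes infilling from plain token masking.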
Experimental Setup
Dataset
We performed experimentation using real failure analysis data from the semiconductor industry, taking into account only successful failure analyses for one year (2019). After preprocessing, the dataset contains 5,809 cases.
The 10-input failure description features (also called Expert features) include: Reference, Subject, Site, Requested activity, Priority level, High confidentiality, Context, Objectives / Work description, Source of failure / request, and Source of failure (Detailed).
FATs have a dimension of ℝ^69, with the longest FA having 23 triplets. We preprocess the FDR according to the scheme presented in previous work and vectorize the joint space {x, λ} using GPT-2's byte-level Byte Pair Encoding (BPE).
Training Configuration
Experimentation was carried out on a high-performance computing (HPC) cluster with 80 cores (2 × Intel Xeon E5-2698 v4 2.20GHz CPUs), 512GB of RAM and 8 × Nvidia V100 32GB GPUs.
For GPT-2 models, batch size for training and evaluation is 1; weight decay is 0.05 and the number of training epochs is 100. We use top-p = 0.95 and top-k = 10 with a normalizing temperature value of 1.9 for decoder sampling.
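The decoding scheme above (top-k filtering, then nucleus/top-p filtering, with temperature scaling) can be sketched in plain Python. Real decoders operate on full-vocabulary logit tensors, so this tiny example is only illustrative:

```python
import math
import random

def sample_next(logits, top_k=10, top_p=0.95, temperature=1.9, seed=0):
    """Temperature-scale the logits, keep the top-k candidates, then keep
    the smallest prefix whose cumulative probability reaches top-p, and
    sample the next token id from that nucleus."""
    scaled = [l / temperature for l in logits]
    z = max(scaled)
    probs = [math.exp(s - z) for s in scaled]   # numerically stable softmax
    total = sum(probs)
    probs = [p / total for p in probs]
    ranked = sorted(range(len(probs)), key=lambda i: -probs[i])[:top_k]
    nucleus, cum = [], 0.0
    for i in ranked:
        nucleus.append(i)
        cum += probs[i]
        if cum >= top_p:
            break
    weights = [probs[i] for i in nucleus]
    return random.Random(seed).choices(nucleus, weights=weights)[0]

token_id = sample_next([2.0, 1.0, 0.5, -1.0], top_k=3, top_p=0.9)
```

A temperature above 1 flattens the distribution, encouraging more diverse triplet sequences; top-k and top-p then prune the long tail of implausible tokens.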
Results
Quantitative Evaluation
Results indicate that the GPT-2 models, pre-trained on the WebText dataset, perform considerably better than the baseline (mini-GPT), GPT-3 and BART models. The best-performing GPT-2 (GPT-2 Large) outperforms the baseline by 49% on the BLEU metric and is 73% better than BART and GPT-3 for FATG.
| Model | BLEU-1 | BLEU-3 | ROUGE-1 F1 | ROUGE-L F1 | LESE-1 F1 | LESE-3 F1 |
|---|---|---|---|---|---|---|
| mini-GPT | 11.54 | 11.22 | 12.63 | 11.52 | 7.11 | 0.30 |
| BART | 6.14 | 11.24 | 10.16 | 9.43 | 4.08 | 1.28 |
| GPT-3 | 6.10 | 7.07 | 8.84 | 7.91 | 5.46 | 0.42 |
| GPT-2 (base) | 22.18 | 30.25 | 29.67 | 27.75 | 20.97 | 10.49 |
| GPT-2 Medium | 22.15 | 29.89 | 29.69 | 27.78 | 21.21 | 10.74 |
| GPT-2 Large | 22.46 | 29.73 | 29.82 | 27.93 | 21.25 | 10.73 |
All scores are percentages. Higher values are preferred for all metrics except the Levenshtein distance (Lev, not shown here). GPT-2 Medium performs best on the LESE-3 triplet score, closely followed by GPT-2 Large.
Figure 2: Performance comparison of different transformer models for FATG task
Key Finding
GPT-2 Medium and Large models perform best on the LESE-1 and LESE-3 triplet scores, which correspond most closely with human judgement. The LESE results suggest that GPT-2's pre-training covers domain knowledge correlating with failure analysis and reliability engineering, enabling adaptive transfer of knowledge for failure analysis triplet generation.
Qualitative Evaluation: Short vs Long Sequence FATG
We classify generation difficulty into two classes: (i) Short-sequence FATG (SS-FATG) and (ii) Long-sequence FATG (LS-FATG).
Short Sequence FATG (SS-FATG)
Given an FDR prompt and a short human-expert failure analysis sequence (with between 3 and 6 triplets), short-sequence FATG is the ability of a fine-tuned PLM to generate FATs with the same order and length as those given by the human expert.
We observe that all models, including mini-GPT (the non-pretrained baseline), BART, GPT-2 (base), GPT-2 Medium and GPT-2 Large, are capable of generating short sequences. However, BART generates incorrect SS-FATs, while GPT-3 generates mostly long sequences with low BLEU and LESE scores.
GPT-2 Medium achieves perfect LESE-1 and LESE-3 scores of 100% for some short-sequence cases, generating FATs that correlate highly with those of the human expert.
Long Sequence FATG (LS-FATG)
Long-sequence FATG is the ability of a fine-tuned PLM to generate long sequences that preserve the sequential order of the original human-expert FATs. Long-sequence FATs are complete failure analyses containing all FA steps in the correct order, usually 7 or more triplets.
The best LESE-1 score obtained for an FDR prompt with 7 FATs is 31%, achieved with fine-tuned GPT-2 (Base and Large). Despite being suited to long text generation, GPT-3 fails to generate triplets consistent with human FATs. In generating LS-FATs, none of the models accurately generated the exact equipment proposed by the human expert; however, the generated equipment could serve a similar purpose to that proposed by the expert.
We also observe that specifying the location of the FA in the failure description improves the accuracy of both SS-FATG and LS-FATG, as the triplets generated for different locations differ in the type of equipment used for failure analysis.
LESE: A New Evaluation Metric
To better understand the sequential generation of FATs and to address the drawbacks of ROUGE, we propose the LEvenshtein Sequential Evaluation (LESE) metric. LESE is an n-gram Levenshtein-distance-based metric that measures the similarity between two sequences by computing the n-gram edits (insertions, deletions or substitutions) required to change one sequence into the other.
LESE is unbiased in its computation of precision and recall because it uses the total number of n-grams, rather than unigrams, as the denominator. When compared to human evaluation, LESE-N tracks human judgement more closely than the other metrics used for quantitative evaluation, especially LESE-3 for FATG.
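A minimal reading of LESE-n can be sketched as follows. The chunking into non-overlapping n-grams and the derivation of matches from the edit distance are our interpretation for illustration, not the paper's reference implementation:

```python
def levenshtein(a, b):
    """Edit distance between two sequences, counting insertions,
    deletions and substitutions (single-row dynamic programming)."""
    m, n = len(a), len(b)
    d = list(range(n + 1))
    for i in range(1, m + 1):
        prev, d[0] = d[0], i
        for j in range(1, n + 1):
            prev, d[j] = d[j], min(d[j] + 1, d[j - 1] + 1,
                                   prev + (a[i - 1] != b[j - 1]))
    return d[n]

def lese(hyp, ref, n=1):
    """LESE-n sketch: chunk both sides into n-grams, take the n-gram-level
    Levenshtein distance, and derive precision/recall over the total
    n-gram counts (the denominators, as described above)."""
    h = [tuple(hyp[i:i + n]) for i in range(0, len(hyp), n)]
    r = [tuple(ref[i:i + n]) for i in range(0, len(ref), n)]
    dist = levenshtein(h, r)
    matches = max(len(h), len(r)) - dist      # n-grams surviving all edits
    p = matches / len(h) if h else 0.0
    rec = matches / len(r) if r else 0.0
    f1 = 2 * p * rec / (p + rec) if p + rec else 0.0
    return p, rec, f1
```

With n = 3 the chunks line up with whole triplets, which is why LESE-3 penalizes a wrong Step type, Substep technique or Equipment as a full triplet edit.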
Why LESE?
Traditional metrics like BLEU and ROUGE do not accurately reflect human-expert evaluation of structured FAT data. For example, if we flip the equipment in the last two triplets, BLEU-1 does not register the difference, since it only measures the intersection between the sets of tokens in the hypothesis and the reference. LESE-1 and LESE-3 accurately capture the difference in equipment and correlate with human evaluation.
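The flip example can be made concrete. The sketch below (triplet values are illustrative) contrasts BLEU-1-style unigram overlap, which is order-blind, with the order-aware Levenshtein distance underlying LESE:

```python
from collections import Counter

def unigram_precision(hyp, ref):
    """BLEU-1-style clipped unigram precision: bag-of-tokens overlap,
    blind to token order."""
    overlap = Counter(hyp) & Counter(ref)
    return sum(overlap.values()) / len(hyp)

def levenshtein(a, b):
    """Order-aware edit distance, the core of LESE."""
    m, n = len(a), len(b)
    d = list(range(n + 1))
    for i in range(1, m + 1):
        prev, d[0] = d[0], i
        for j in range(1, n + 1):
            prev, d[j] = d[j], min(d[j] + 1, d[j - 1] + 1,
                                   prev + (a[i - 1] != b[j - 1]))
    return d[n]

# Reference FAT fields vs. a hypothesis with the equipment of the last
# two triplets flipped.
ref = ["Sample preparation", "Package decap", "CRI7",
       "Physical Analysis", "SEM", "LEICA M165C"]
hyp = ["Sample preparation", "Package decap", "LEICA M165C",
       "Physical Analysis", "SEM", "CRI7"]
```

The unigram overlap is unchanged by the flip (precision stays 1.0), while the edit distance counts the two misplaced equipment entries as substitutions.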
Conclusion
We evaluate the efficiency of pre-trained transformer models for the downstream task of failure analysis triplet generation. We observe that the forward-only auto-regressive modelling used in GPT-2 and GPT-3 gives them excellent capabilities for generating structured data.
When adapted for FATG, GPT-2's ROUGE scores outperform the other benchmarks for generating both short and long FATs. Since ROUGE, BLEU and METEOR scores do not accurately reflect human-expert evaluation, we introduce the Levenshtein Sequential Evaluation (LESE) metric. LESE-N performs on par with human expert judgement across different test cases.
GPT-2 Medium and Large models perform best on the LESE-1 and LESE-3 triplet scores, which correspond most closely with human judgement. Fine-tuned BART generates very short triplets and seeks a contextual representation of triplets, making it unfit for structured long-text sequences, while GPT-3 generates long, story-telling-like output that does not necessarily follow known expert FATs.
This work demonstrates the practical value of pre-trained language models for automating failure analysis workflows in semiconductor manufacturing, enabling faster and more consistent generation of failure analysis procedures.
Read the Full Paper
For detailed methodology, additional experiments, and comprehensive analysis, please refer to the full research paper available on arXiv.
Access Full Paper on arXiv