GCVAE: Generalized-Controllable Variational Autoencoder
By K Ezukwoke, A Hoayek, M Batton-Hubert, X Boucher, P Gounet, J Adrian · 2022-06-09 · arXiv preprint arXiv:2206.04225
Introduction
Variational autoencoders (VAEs) have recently been used for unsupervised disentanglement learning of complex density distributions. Numerous variants exist that encourage disentanglement in the latent space while improving reconstruction. However, none simultaneously manages the trade-off between attaining extremely low reconstruction error and a high disentanglement score.
We present a generalized framework that handles this challenge via constrained optimization and demonstrate that it outperforms state-of-the-art models on disentanglement while balancing reconstruction. We introduce three adaptive Lagrangian hyperparameters that weight the reconstruction loss, the KL-divergence loss and a correlation measure. We prove that maximizing information in the reconstruction network is equivalent to information maximization during amortized inference, under reasonable assumptions and constraint relaxation.
This blog post focuses on the mathematical derivations of the GCVAE framework, particularly the detailed proofs from the appendix that establish the theoretical foundation of this approach.
Background: Variational Autoencoders
VAEs are a class of generative models proposed to model the complex distributions found in images, natural language and functional data. We formally define the VAE by observing a d-dimensional input space {xi}i=1N ∈ X consisting of N independent and identically distributed (i.i.d.) samples, and a k-dimensional latent space {zi}i=1N ∈ Z (where k ≪ d) over which a generative model is defined.
We assume a prior distribution pθ(z) ∼ N(0, I) and infer an approximate posterior qφ(z|x) ∼ N(z|μφ(x), σφ2(x)I), whose mean μφ(x) and variance σφ2(x)I are used for reparameterized sampling of the latent code z. We model the data using the conditional distribution pθ(x|z) ∼ N(x|μθ(z), σθ2(z)I).
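The reparameterized sampling step can be sketched as follows (a minimal NumPy illustration; the function name and array shapes are ours, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterize(mu, log_var):
    """Draw z = mu + sigma * eps with eps ~ N(0, I).

    Randomness is isolated in eps, so in a differentiable framework
    gradients could flow through mu and log_var (the
    reparameterization trick)."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

# Toy encoder outputs: batch of 4 samples, k = 2 latent dimensions.
mu = np.zeros((4, 2))
log_var = np.zeros((4, 2))  # log variance 0, i.e. sigma = 1
z = reparameterize(mu, log_var)
```

Deep-learning frameworks implement the same idea with learned `mu` and `log_var`; the constant arrays here are placeholders.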
The objective function L(θ, φ) to be maximized is given by the Evidence Lower Bound (ELBO):
L(θ, φ) = EpDEz∼qφ(z|x)[ln pθ(x|z)] − DKL(qφ(z|x)||pθ(z))
where φ and θ represent the parameters of the neural network encoder and decoder respectively. The first term is a reconstruction error, and the second term is the KL-divergence between the approximate posterior and the prior.
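For the Gaussian choices above, the KL term has a closed form, and a Monte-Carlo ELBO can be sketched as follows (illustrative NumPy; a unit-variance Gaussian likelihood is assumed, so E[ln pθ(x|z)] reduces to a negative squared error up to additive constants):

```python
import numpy as np

def kl_to_standard_normal(mu, log_var):
    """Closed-form DKL(N(mu, sigma^2 I) || N(0, I)), summed over latent dims."""
    return 0.5 * np.sum(np.exp(log_var) + mu ** 2 - 1.0 - log_var, axis=-1)

def elbo(x, x_recon, mu, log_var):
    """Single-sample Monte-Carlo ELBO; the reconstruction term is a
    negative squared error under a unit-variance Gaussian likelihood
    (additive constants dropped)."""
    recon_ll = -0.5 * np.sum((x - x_recon) ** 2, axis=-1)
    return np.mean(recon_ll - kl_to_standard_normal(mu, log_var))

x = np.ones((4, 8))
mu, log_var = np.zeros((4, 2)), np.zeros((4, 2))
kl_zero = kl_to_standard_normal(mu, log_var)   # zero when q equals the prior
perfect = elbo(x, x, mu, log_var)              # zero KL and zero recon error
```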
The GCVAE Framework
We propose a generalized framework for variational inference modeling, taking into account the idea of information maximization in the reconstruction network. We prioritize disentanglement of the latent space and balance the trade-off between the disentanglement metric and reconstruction loss by maximizing the mutual information between the reconstructed data x′ and the latent space z.
Let Ip(x′, z) be the mutual information between the reconstructed data x′ and z under joint distribution pθ(x′|z)pθ(z), where pθ(z) ∼ N(0, I). Iq(x, z) is the mutual information between x and z under joint distribution qφ(z|x)pD(x).
The constraint optimization formulation can be written as:
maxφ,θ Ip(x′, z)
s.t DKL(qφ(z|x)||pθ(z)) ≤ ξ1
s.t DKL(qφ(z)||pθ(z)) ≤ ξ2
ξ1, ξ2 ≥ 0
This implies that accurately reconstructing the original distribution pD(x) requires us to maximize the mutual information Ip(x′, z) in the reconstructed space while reducing information loss Iq(x, z) during inference.
Figure 1: GCVAE framework. αt, βt and γt respectively provide automatic balancing of the log-likelihood and KL divergences for optimal reconstruction and disentanglement. The feed-ins At, Bt and Ct are expectations of variational loss.
Mathematical Derivations
The following sections present detailed mathematical derivations from the appendix of the paper, establishing the theoretical foundation of GCVAE.
Appendix A.1: Proof that Maximizing Negative KL-Divergence is Equivalent to Maximizing ELBO
We prove that maximizing the negative KL-divergence is equivalent to maximizing the ELBO.
Starting from the objective function:
L(θ, φ) = −EpD[DKL(qφ(z|x)||pθ(z|x))]
Expanding the KL-divergence term:
= −EpDEz∼qφ(z|x)[ln (qφ(z|x) / pθ(z|x))]
Using Bayes' theorem, pθ(z|x) = pθ(x|z)pθ(z) / pθ(x), we can rewrite this as:
= −EpDEz∼qφ(z|x)[ln (qφ(z|x) / pθ(z) · 1 / pθ(x|z) · pθ(x))]
Rearranging terms inside the logarithm and taking expectations, then noting that ln pθ(x) does not depend on z and that the KL-divergence is non-negative, we obtain:
ln pθ(x) ≥ Ez∼qφ(z|x)[ln pθ(x|z)] − DKL(qφ(z|x)||pθ(z))
This establishes that maximizing the negative KL-divergence is equivalent to maximizing the ELBO, which provides a lower bound on the log-likelihood.
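As a sanity check on this bound, consider a tractable linear-Gaussian toy model p(z) = N(0, 1), p(x|z) = N(z, 1), for which ln pθ(x) is available in closed form (this toy model is our illustration, not from the paper). A short NumPy sketch confirms the ELBO equals ln p(x) when q matches the true posterior and falls below it otherwise:

```python
import numpy as np

def elbo_gaussian(x, m, s2):
    """Analytic ELBO for the toy model p(z) = N(0, 1), p(x|z) = N(z, 1)
    with approximate posterior q(z|x) = N(m, s2)."""
    recon = -0.5 * np.log(2 * np.pi) - 0.5 * ((x - m) ** 2 + s2)
    kl = 0.5 * (s2 + m ** 2 - 1.0 - np.log(s2))
    return recon - kl

def log_marginal(x):
    """Exact ln p(x) for the same model: the marginal is p(x) = N(0, 2)."""
    return -0.5 * np.log(4 * np.pi) - x ** 2 / 4.0

x = 1.3
# The exact posterior is N(x/2, 1/2): the bound is tight.
gap = log_marginal(x) - elbo_gaussian(x, m=x / 2, s2=0.5)
# Any other q leaves positive slack equal to KL(q(z|x) || p(z|x)).
slack = log_marginal(x) - elbo_gaussian(x, m=0.0, s2=1.0)
```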
Appendix A.2: Minimizing Mutual Information Iq(x, z)
We demonstrate that minimizing the mutual information between x and latent z equivalently maximizes components of the ELBO and consequently, Ip(x′, z) subject to inference constraints.
The mutual information for the inference network is expressed as:
Iq(x, z) = ∫x∫z qφ(x, z) ln (qφ(x, z) / qφ(z)qφ(x)) dx dz
= DKL(qφ(x, z)||qφ(z)qφ(x))
Expanding step by step, multiplying and dividing by the joint pθ(x, z) inside the logarithm:
Iq(x, z) = Ez∼qφ(z|x)[ln (qφ(z|x) / qφ(z) · pθ(x, z) / pθ(x, z))]
Factorizing one copy of the joint as pθ(x, z) = pθ(x|z)pθ(z) and the other as pθ(x, z) = pθ(z|x)pθ(x), we can rewrite:
= Ez∼qφ(z|x)[ln (qφ(z|x) / pθ(x|z)pθ(z) · pθ(z|x)pθ(x) / qφ(z))]
Rearranging and simplifying:
= Ez∼qφ(z|x)[ln (qφ(z|x) / pθ(z) · pθ(z|x) / qφ(z) · pθ(x) / pθ(x|z))]
Taking expectations term by term, and applying the approximation pθ(z|x) ≈ pθ(z), this expands to:
Iq(x, z) = DKL(qφ(z|x)||pθ(z)) − DKL(qφ(z)||pθ(z)) − Ez∼qφ(z|x)[ln pθ(x|z)] + ln pθ(x)
Since mutual information is non-negative, Iq(x, z) ≥ 0, rearranging gives:
ln pθ(x) ≥ Ez∼qφ(z|x)[ln pθ(x|z)] − DKL(qφ(z|x)||pθ(z)) + DKL(qφ(z)||pθ(z))
Therefore, minimizing Iq(x, z) is equivalent to maximizing the terms on the Right Hand Side (R.H.S) of the above equation, which is a lower bound.
Appendix A.3: GCVAE Constraint Optimization Proof
We recall the constraint optimization loss and prove it accordingly. Given the maximization problem:
maxθ,φ,ξ+,ξ−,ξp∈R Ip(x′, z)
s.t EpDDKL(qφ(z|x)||pθ(z)) + Ip(x′, z) ≤ ξ−
s.t −EpDDKL(qφ(z)||pθ(z)) ≤ ξ+
s.t Ip(x′, z) ≤ ξp
s.t ξ+i, ξ−i, ξip ≥ 0, ∀i = 1, ..., n
The expansion of the above equations using sets of Lagrangian multipliers is as follows:
L(x, z; θ, φ, ξ+, ξ−, ξ, α, β, γ, η, τ, ν)
= Ip(x′, z) − β(EpDDKL(qφ(z|x)||pθ(z)) + Ip(x′, z) − Σi=1n ξ−i)
+ γ(EpDDKL(qφ(z)||pθ(z)) + Σi=1n ξ+i)
− α(Ip(x′, z) − Σi=1n ξip)
− Σi=1n ηiξ+i − Σi=1n τiξ−i − Σi=1n νiξip
Simplifying and collecting terms:
L(x, z; θ, φ, ξ+, ξ−, ξ, α, β, γ, η, τ, ν)
= (1 − α − β)Ip(x′, z) − βEpDDKL(qφ(z|x)||pθ(z))
+ γDKL(qφ(z)||pθ(z))
+ (β − τ)Σi=1n ξ−i + (γ − η)Σi=1n ξ+i + (α − ν)Σi=1n ξip
We take the gradient of the loss, ∇L, with respect to ξ−, ξ+ and ξp, and apply the KKT optimality conditions to obtain:
L(x, z; θ, φ, ξ+, ξ−, ξp, α, β, γ)
= (1 − α − β)Ip(x′, z) − βEpDDKL(qφ(z|x)||pθ(z))
+ γDKL(qφ(z)||pθ(z))
Substituting Ip(x′, z) ≈ Ez∼qφ(z|x)[ln pθ(x|z)] (the intractable marginal log-likelihood term is dropped):
= (1 − α − β)Ez∼qφ(z|x)[ln pθ(x|z)]
− βEpDDKL(qφ(z|x)||pθ(z))
+ γDKL(qφ(z)||pθ(z))
We set the Lagrangian adaptive hyperparameters following ControlVAE as follows:
L(θ, φ, α, β, γ) = (1 − αt − βt)Ez∼qφ(z|x)[ln pθ(x|z)]
− βtEpDDKL(qφ(z|x)||pθ(z))
+ γtDKL(qφ(z)||pθ(z))
The adaptive weight αt controls the reconstruction error while βt ensures the posterior latent factor qφ(z|x) does not deviate significantly from its prior pθ(z). Varying both terms gives us better control of the degree of disentanglement and helps us understand the parameters affecting density disentanglement.
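The resulting objective is a straightforward weighted sum of the three loss terms; a minimal sketch (our function signature, taking the pre-computed batch expectations as scalars):

```python
def gcvae_loss(recon_ll, kl_posterior, kl_aggregate, alpha_t, beta_t, gamma_t):
    """GCVAE objective, negated so it can be minimized:
    (1 - alpha_t - beta_t) E[ln p(x|z)] - beta_t KL(q(z|x)||p(z))
    + gamma_t D(q(z)||p(z))."""
    objective = ((1.0 - alpha_t - beta_t) * recon_ll
                 - beta_t * kl_posterior
                 + gamma_t * kl_aggregate)
    return -objective

# Setting alpha_t = -1, beta_t = 1, gamma_t = 0 recovers the negative ELBO:
# coefficient (1 - (-1) - 1) = 1 on the log-likelihood term.
loss_elbo = gcvae_loss(-10.0, 2.0, 0.0, alpha_t=-1.0, beta_t=1.0, gamma_t=0.0)
```

In practice `recon_ll`, `kl_posterior` and `kl_aggregate` would be batch expectations produced by the encoder and decoder; here they are scalar placeholders.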
Appendix A.4: Expected Squared Mahalanobis Distance
Suppose that data x ∈ X with probability p(x) is projected into a reproducing kernel Hilbert space H via a feature map φ(x). The expectation of the squared Mahalanobis distance satisfies:
E[D2MAH(q(z)||p(z))] ≥ ||Ex∼pφ(x) − Ey∼qφ(y)||2Σ−1,H
Expanding the squared norm:
≥ (Ex∼pφ(x) − Ey∼qφ(y))′Σ−1(Ex∼pφ(x) − Ey∼qφ(y))
Factoring out Σ−1 and expanding the quadratic form:
≥ Σ−1(Ex∼pφ(x) − Ey∼qφ(y))′(Ex∼pφ(x) − Ey∼qφ(y))
Expanding further:
≥ Σ−1[Ex′∼p,x∼p⟨φ(x′), φ(x)⟩H
− 2Ex′∼p,y∼q⟨φ(x′), φ(y)⟩H
+ Ey′∼q,y∼q⟨φ(y′), φ(y)⟩H]
We suppose that the feature map φ(x) takes the canonical form k(x, ·), so that ⟨φ(x′), φ(x)⟩H = k(x′, x), where k(x′, x) is a positive semi-definite kernel (for instance the Gaussian kernel exp(−||x′ − x||2 / 2σ2)). Hence:
E[D2MAH(q(z)||p(z))] ≥ Σ−1[Ez′∼p,z∼p k(z′, z)
− 2Ez′∼p,z∼q k(z′, z)
+ Ez′∼q,z∼q k(z′, z)]
This simplifies to:
E[D2MAH(q(z)||p(z))] = Σ−1 D2MMD(q(z)||p(z))
E[D2MAH(q||p)] measures the average dissimilarity between the distributions p and q in the Hilbert space; because it normalizes similarity by the feature variances, it encourages class discrimination. E[D2MAH(q||p)] reduces to D2MMD(q||p) when Σ−1 is the identity.
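The kernel expression above can be estimated from samples; a small NumPy sketch of the (biased) MMD estimator with a Gaussian kernel (the Mahalanobis variant would additionally scale by an inverse-variance term, omitted here):

```python
import numpy as np

def gaussian_kernel(a, b, sigma=1.0):
    """k(a, b) = exp(-||a - b||^2 / (2 sigma^2)) for all pairs of rows."""
    d2 = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def mmd_squared(z_q, z_p, sigma=1.0):
    """Biased sample estimate of D2MMD(q||p):
    E_pp[k] + E_qq[k] - 2 E_qp[k]."""
    return (gaussian_kernel(z_p, z_p, sigma).mean()
            + gaussian_kernel(z_q, z_q, sigma).mean()
            - 2.0 * gaussian_kernel(z_q, z_p, sigma).mean())

rng = np.random.default_rng(0)
z_prior = rng.standard_normal((64, 2))          # samples from p(z) = N(0, I)
z_shifted = rng.standard_normal((64, 2)) + 3.0  # a mismatched aggregate posterior
same = mmd_squared(z_prior, z_prior)   # identical samples: estimate is 0
far = mmd_squared(z_shifted, z_prior)  # mismatched distributions: positive
```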
Key Insights from the Derivations
The derivations establish several important theoretical results:
- Equivalence of Objectives: Maximizing negative KL-divergence is equivalent to maximizing ELBO, providing a principled lower bound on the log-likelihood.
- Mutual Information Minimization: Minimizing Iq(x, z) equivalently maximizes components of the ELBO, establishing the connection between information theory and variational inference.
- Constraint Optimization: The GCVAE framework can be derived from first principles using Lagrangian multipliers and KKT conditions, providing a rigorous mathematical foundation.
- Mahalanobis Distance: The squared Mahalanobis distance provides a better disentangling metric than Maximum Mean Discrepancy (MMD) by normalizing with feature variances.
GCVAE Loss Function
The final GCVAE loss function, derived from the constraint optimization framework, is:
L(θ, φ, α, β, γ) = (1 − αt − βt)Ez∼qφ(z|x)[ln pθ(x|z)]
− βtEpDDKL(qφ(z|x)||pθ(z))
+ γtDKL(qφ(z)||pθ(z))
where:
- αt: Controls the reconstruction error (adaptive hyperparameter)
- βt: Ensures the posterior qφ(z|x) does not deviate significantly from the prior pθ(z)
- γt: Controls the correlation measure (Mahalanobis distance or MMD)
The adaptive hyperparameters αt, βt, and γt are controlled using PID controllers similar to ControlVAE, allowing automatic balancing of reconstruction and disentanglement objectives.
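A minimal sketch of such a controller (a simplified PI update of our own devising; the published ControlVAE controller uses a nonlinear PID form, so treat this as illustrative only):

```python
class PIController:
    """Simplified PI update for an adaptive weight such as beta_t.

    Illustrative assumption: gains, bounds and the linear update rule
    are ours, not the paper's exact controller."""

    def __init__(self, set_point, kp=0.01, ki=0.001, lo=0.0, hi=1.0):
        self.set_point = set_point  # target value for the monitored loss term
        self.kp, self.ki = kp, ki
        self.lo, self.hi = lo, hi
        self.integral = 0.0

    def step(self, observed):
        """Return the new weight given the currently observed loss value."""
        error = self.set_point - observed
        self.integral += error
        weight = self.kp * error + self.ki * self.integral
        return min(max(weight, self.lo), self.hi)  # clamp to a valid range

ctrl = PIController(set_point=18.0)  # e.g. a target KL value
beta_t = ctrl.step(observed=10.0)    # KL below target: weight rises above zero
```

Calling `step` once per training iteration with the observed KL (or correlation) loss yields the time-varying weights αt, βt, γt.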
Relationship to Other VAE Variants
The GCVAE framework generalizes several existing VAE variants. By setting specific values for the hyperparameters, we can recover:
- ELBO: When αt = α = −1, βt = β = 1, γ = 0
- ControlVAE: When αt = α = 0, βt > 0, γ = 0
- InfoVAE: When αt = α = 0, βt = β = 0, γt > 1
- FactorVAE: When αt = α = −1, βt = β = 1, γ = −1
This demonstrates that GCVAE provides a unified framework that encompasses and extends existing variational autoencoder approaches.
Experimental Results
We evaluate the performance of GCVAE on standard benchmark datasets including DSprites and MNIST, comparing against state-of-the-art VAE variants.
Performance Comparison
Table 1 presents a comprehensive performance comparison of different models on the DSprites dataset after training on 737 samples. The comparison metrics include MIG (Mutual Information Gap), Modularity, JEMMIG (Joint Entropy Minus Mutual Information Gap), reconstruction loss, and KL loss.
| Model | MIG ↑ | Modularity ↑ | JEMMIG ↑ | Reconstruction Loss ↓ | KL Loss ↗ |
|---|---|---|---|---|---|
| VAE | 0.1268 | 0.798 | 0.233 | 3.339 | 3.0025 |
| β-VAE | 0.0778 | 0.881 | 0.238 | 0.012 | 35.0295 |
| ControlVAE | 0.1213 | 0.782 | 0.312 | 0.016 | 24.3809 |
| InfoVAE | 0.1501 | 0.757 | 0.188 | 0.079 | 10.0621 |
| GCVAE-I | 0.1507 | 0.844 | 0.236 | 0.012 | 24.3739 |
| GCVAE-II | 0.2793 | 0.858 | 0.312 | 0.012 | 24.4316 |
| GCVAE-III | 0.1337 | 0.825 | 0.294 | 0.015 | 24.2937 |
Table 1: Performance comparison of different models on DSprites after training on 737 samples. Comparison metrics MIG, Modularity and JEMMIG for 10-D Latent representation. Higher is better for MIG, Modularity and JEMMIG. GCVAE-II performs best on MIG disentanglement metric, robustness and interpretability, plus having the lowest reconstruction error.
GCVAE-II demonstrates superior performance across multiple metrics. It achieves the highest MIG score (0.2793), indicating the best disentanglement quality. The model also maintains low reconstruction error (0.012) while achieving high modularity (0.858) and JEMMIG scores (0.312). This represents an 85% increase in MIG estimation for GCVAE-II compared to GCVAE-I, demonstrating the advantage of using the squared Mahalanobis distance as a correlation measure.
Generative Process Comparison
We evaluate the quality of generation by considering the explicitness and coherency of the encoded latent variables. The generations by GCVAE-I, II and III far outperform those of the benchmark models, especially on the MNIST dataset.
Figure 2: Generative process comparison for the different models on DSprites after training on less than 800 samples. Model GCVAE-I, II and ControlVAE clearly outperformed other models. The reconstruction error of GCVAE-II is the lowest from Table 1.
Figure 3: Generative process comparison for the different models after training the MNIST dataset for 500 epochs. GCVAE-II and ELBO (VAE) have a similar reconstruction quality with better interpretation. GCVAE-II clearly outperformed the benchmark models in generating clear and meaningful representations of the original data.
Model Performance Analysis
Figure 4 illustrates the model performance comparison on 737 samples of DSprites data, showing the relationship between reconstruction error, KL divergence, and disentanglement metrics across different latent dimensions.
Figure 4: Model performance comparison on 737 samples of DSprites data. Top: Comparison of reconstruction error against KL divergence DKL(qφ(z|x)||pθ(z)) and correlation DKL(qφ(z)||pθ(z)). Bottom: Comparing disentanglement metrics with reconstruction loss. The highest disentanglement on the MIG metric is observed for GCVAE-II on Latent-2, however, the best scores are observed for GCVAE-II on Latent-10.
Stopping Criterion Analysis
We compare the performance of GCVAE-I, II & III using a stopping criterion algorithm. Using fixed tolerances ϵ of 10−5 for αt and 10−4 for βt, we obtain reasonably low reconstruction loss with high disentanglement without training for a lengthy period: the average time required to train GCVAE with the stopping criterion is 6 hours, versus more than 3 days for 250K iterations.
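The stopping rule is described only loosely above; one plausible reading, sketched here purely as an assumption, halts training once an adaptive weight has changed by less than ϵ over a window of recent updates:

```python
def should_stop(history, eps=1e-5, window=5):
    """Illustrative stopping rule (our assumption, not the paper's exact
    algorithm): halt once the tracked adaptive weight changes by less
    than eps across each of the last `window` updates."""
    if len(history) < window + 1:
        return False
    recent = history[-(window + 1):]
    return max(abs(recent[i + 1] - recent[i]) for i in range(window)) < eps

# A weight trajectory that has plateaued: successive changes are well below eps.
stalled = should_stop([0.5, 0.50001, 0.500012, 0.500013,
                       0.500013, 0.500013, 0.500013])
```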
Figure 5: Comparison of metrics for DSprites 2D shapes dataset. Top: Comparison of GCVAE-I, II & III losses over increasing latent dimensions. The lowest reconstruction error is observed for GCVAE-I on Latent-10 and is monotone increasing thereafter. Bottom: Disentanglement metric over different dimensions. FactorVAE metric is monotone decreasing for latent space greater than 2. MIG is similar in behavior to FactorVAE metric and best for GCVAE-II on Latent-2.
Figure 6: A visual comparison of the reconstruction, −DKL(qφ(z|x)||pθ(z)) and DKL(qφ(z)||pθ(z)) losses for GCVAE-I, II & III over different latent space. Behavior of reconstruction error per latent is relatively close and indistinguishable. In all cases of latents experimented with except for Latent-2, −DKL(qφ(z|x)||pθ(z)) is comparable.
We observe that GCVAE-I is unstable in DKL(qφ(z)||pθ(z)) during training across all latents. While a lower value of DKL(qφ(z)||pθ(z)) is preferred, Figure 5 shows that it correlates with the MIG disentanglement metric. This highlights the importance of the correlation term in achieving optimal disentanglement.
Conclusion
The Generalized-Controllable Variational AutoEncoder (GCVAE) provides a principled framework for balancing reconstruction quality and latent space disentanglement. Through rigorous mathematical derivations, we establish that:
- Maximizing mutual information in the reconstruction network is equivalent to information maximization during amortized inference under reasonable constraints
- The constraint optimization formulation leads to a loss function with three controllable hyperparameters that automatically balance reconstruction and disentanglement
- The squared Mahalanobis distance provides a superior metric for measuring disentanglement compared to Maximum Mean Discrepancy
- GCVAE generalizes existing VAE variants, providing a unified framework for variational inference
The detailed derivations presented in this blog post establish the theoretical foundation of GCVAE and demonstrate its advantages over existing approaches in simultaneously achieving high disentanglement scores and low reconstruction errors.
Read the Full Paper
For additional experimental results, implementation details, and comprehensive analysis, please refer to the full research paper available on arXiv.
Access the full paper on arXiv: https://arxiv.org/abs/2206.04225