Build Alphabet Dataset

Language Setup

Total Samples: 0
Letters Ready: 0 / 26
Active Letter Samples: 0
Not saved to repo files yet.

Draw Symbol For A

Collect samples to begin.

Letter Progress

Alien Alphabet Reference

Translation key used by this language: alien symbol to English letter mapping.

Train Letter CNN

Status: Not Trained
Validation Accuracy: -
Validation Loss: -

Validation Confusion Matrix

Zoogle Translate

Loaded alphabet reference: Alien language to English letter mapping.

Alien Input (Unnamed Language)

English Output

-
Tip: Hover decoded letters for posterior diagnostics.
Average Confidence: -
Decoded Symbols: 0

Test Single Letter

Draw Test Symbol

Prediction: -
Confidence: -
Entropy: -
Top Margin: -

Visual Diagnostics

Normalized 24×24
Saliency Map

Posterior \(p(Y \mid X)\)

Bayes Evidence \(\frac{p(Y \mid X)}{p(Y)}\)

Conv Filter Contributions

About Zoogle Translate

A CS109-driven walkthrough of the full probabilistic pipeline, from labeled data to posterior inference.

Process Breakdown

1. Labeling: You create a dataset of pairs \((x_i, y_i)\), where \(x_i\) is a normalized symbol image and \(y_i\in\{A,\dots,Z\}\).

2. Modeling: A CNN parameterized by \(\theta\) produces class posteriors \(p_{\theta}(Y\mid X)\) over all letters.

3. Learning: Training minimizes \(\mathcal{L}_{\mathrm{CE}}=-\log p_{\theta}(y\mid x)\), equivalent to maximum-likelihood estimation.

4. Decision Rule: Decoding uses MAP inference \(\hat{y}=\arg\max_y p_{\theta}(y\mid X)\) for each symbol token.

5. Uncertainty + Diagnostics: The app reports entropy \(H(Y\mid X)\), MAP margin \(\Delta p\), and evidence ratio \(\frac{p(Y\mid X)}{p(Y)}\).
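The decoding and diagnostic steps above can be sketched in a few lines of NumPy. This is a minimal illustration, not the app's actual implementation: it assumes the CNN emits a length-26 logit vector per symbol and that the prior \(p(Y)\) defaults to uniform over the alphabet.

```python
import numpy as np

def decode_symbol(logits, prior=None):
    """Turn one symbol's CNN logits into the diagnostics listed above."""
    z = logits - logits.max()                # stabilize the softmax
    posterior = np.exp(z) / np.exp(z).sum()  # p_theta(Y | X)
    if prior is None:
        prior = np.full(26, 1 / 26)          # uniform p(Y) over A..Z

    y_hat = int(np.argmax(posterior))        # MAP decision rule
    sorted_p = np.sort(posterior)[::-1]
    margin = sorted_p[0] - sorted_p[1]       # MAP margin: top-1 minus top-2
    entropy = -np.sum(posterior * np.log2(posterior + 1e-12))  # H(Y|X) in bits
    evidence = posterior / prior             # Bayes evidence ratio p(Y|X)/p(Y)
    return y_hat, posterior, margin, entropy, evidence

# A confident prediction: one logit much larger than the rest
y_hat, post, margin, H, ev = decode_symbol(np.eye(26)[0] * 8.0)
print(chr(ord("A") + y_hat), round(margin, 3), round(H, 3))
```

With a sharply peaked posterior like this, the margin approaches 1 and the entropy approaches 0 bits, matching the intuition in step 5.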

Posterior Update \(p(Y\mid X)\)

Visual comparison of prior \(p(Y)\) and posterior \(p(Y\mid X)\) for top classes. MAP picks the largest posterior bar.
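As a worked instance of the prior-versus-posterior comparison, here is the evidence ratio under a uniform 26-letter prior for a few hypothetical posterior values (the letters and numbers are illustrative, not app output):

```python
# Evidence ratio p(Y|X) / p(Y) under a uniform prior p(Y) = 1/26.
# A ratio above 1 means the image moved probability mass toward that letter.
prior = 1 / 26
posteriors = {"K": 0.52, "R": 0.31, "X": 0.04}  # hypothetical posterior mass
ratios = {letter: post / prior for letter, post in posteriors.items()}
for letter, ratio in ratios.items():
    print(letter, round(ratio, 2))
```

Even the weak "X" hypothesis has ratio 1.04 > 1 here, because 0.04 still exceeds the uniform baseline of 1/26 ≈ 0.038.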

Uncertainty \(H(Y\mid X)\)

Binary entropy curve \(H_2(p)\). Larger posterior concentration typically lowers uncertainty.
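The binary entropy curve plotted here is straightforward to compute; a small sketch (the clipping constant is just a numerical guard, not part of the definition):

```python
import numpy as np

def binary_entropy(p):
    """H2(p) = -p*log2(p) - (1-p)*log2(1-p), in bits."""
    p = np.clip(p, 1e-12, 1 - 1e-12)  # avoid log(0) at the endpoints
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

print(binary_entropy(0.5))   # maximal uncertainty: 1 bit
print(binary_entropy(0.99))  # concentrated posterior: low uncertainty
```

The curve peaks at \(p = 0.5\) (1 bit) and falls toward 0 as the posterior concentrates on either outcome, which is exactly the "larger concentration, lower uncertainty" behavior described above.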

CS109 Concepts Applied

Random Variables + Conditionals: \(X\) is the observed symbol image, \(Y\) is the letter label, and prediction uses \(p_{\theta}(Y\mid X)\).

Bayes' Theorem: Evidence is interpreted through \(\frac{p(Y\mid X)}{p(Y)}\), contrasting posterior mass against a prior baseline.

MAP Inference: Classification uses \(\arg\max_y p_{\theta}(y\mid X)\), the standard posterior decision rule.

Entropy: \(H(Y\mid X)\) quantifies uncertainty in bits; high entropy flags ambiguous symbols.

Likelihood + MLE: Cross-entropy training minimizes \(-\log p_{\theta}(y\mid x)\), i.e., negative log-likelihood over labeled data.
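The likelihood-and-MLE connection can be made concrete: cross-entropy on a labeled batch is just the mean negative log-likelihood of the true labels. A minimal sketch with made-up posteriors over three classes (the app uses 26):

```python
import numpy as np

def cross_entropy_loss(posteriors, labels):
    """Mean negative log-likelihood -log p_theta(y_i | x_i) over a batch."""
    n = len(labels)
    return -np.mean(np.log(posteriors[np.arange(n), labels]))

# Hypothetical posteriors for two labeled symbols (each row sums to 1)
p = np.array([[0.7, 0.2, 0.1],
              [0.1, 0.8, 0.1]])
y = np.array([0, 1])  # true labels index the correct-class probabilities
loss = cross_entropy_loss(p, y)
print(round(loss, 4))  # → 0.2899
```

Minimizing this quantity over \(\theta\) is equivalent to maximizing \(\prod_i p_{\theta}(y_i \mid x_i)\), i.e., maximum-likelihood estimation on the labeled dataset.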