πŸ“ Artikel ini ditulis dalam Bahasa Indonesia & English
πŸ“ This article is available in English & Bahasa Indonesia

🎨 Neural Network Tutorial β€” Page 7

Generative Adversarial Network (GAN)

Two networks compete until they produce realistic new data. Page 7 covers: the adversarial training concept, building Generator and Discriminator from scratch, DCGAN for image generation, and GAN training challenges.

πŸ“… March 2026 ⏱ 24 min read
🏷 GAN Β· Generator Β· Discriminator Β· DCGAN Β· Image Generation
πŸ“š Neural Network Tutorial Series:

πŸ“‘ Table of Contents β€” Page 7

  1. GAN Concept β€” Two networks compete
  2. GAN from Scratch β€” Generator + Discriminator
  3. DCGAN β€” Generate MNIST images
  4. Training Challenges β€” Mode collapse & tips
  5. Summary & Page 8 Preview
🀺

1. GAN Concept β€” Two Networks Compete

Generator creates fakes, Discriminator detects them β€” both get smarter

GAN (Generative Adversarial Network) consists of two competing networks: Generator (G) creates fake data from noise, Discriminator (D) judges whether data is real or fake. Both improve each other β€” like a counterfeiter vs detective.

GAN Architecture

Random Noise z          Real Data
      β”‚                     β”‚
      β–Ό                     β”‚
β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”               β”‚
β”‚ Generator β”‚               β”‚
β”‚    (G)    β”‚               β”‚
β””β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”˜               β”‚
      β”‚ Fake Data           β”‚
      β–Ό                     β–Ό
β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚        Discriminator (D)         β”‚
β”‚    "Real or Fake?" β†’ 0 to 1      β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
                  β”‚
                  β–Ό
  D wants: Real β†’ 1, Fake β†’ 0
  G wants: Fake β†’ 1 (fool D!)
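The two objectives in the diagram can be written as binary cross-entropy losses. A minimal sketch (the function names here are illustrative, not from the tutorial's code):

```python
import numpy as np

def d_loss(p_real, p_fake, eps=1e-8):
    # Discriminator maximizes log D(real) + log(1 - D(fake)),
    # i.e. minimizes the negative (BCE with labels 1 for real, 0 for fake).
    return -(np.log(p_real + eps).mean() + np.log(1 - p_fake + eps).mean())

def g_loss(p_fake, eps=1e-8):
    # Non-saturating Generator loss: maximize log D(fake) ("fool D")
    # instead of minimizing log(1 - D(fake)); this keeps gradients large
    # when D confidently rejects the fakes early in training.
    return -np.log(p_fake + eps).mean()

# At the ideal equilibrium D outputs 0.5 everywhere:
p = np.full(4, 0.5)
print(d_loss(p, p))   # 2*ln(2) ~ 1.386
print(g_loss(p))      # ln(2)   ~ 0.693
```

Note that a perfect Discriminator (D(real) near 1, D(fake) near 0) drives `d_loss` toward 0, while the equilibrium value 2 ln 2 means D has given up distinguishing.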
πŸ”§

2. Building a GAN from Scratch

Generator + Discriminator + adversarial training loop
33_gan.py β€” Simple GAN from Scratch (Python)
import numpy as np

def sigmoid(x): return 1/(1+np.exp(-np.clip(x,-500,500)))
def relu(x): return np.maximum(0, x)

class Generator:
    def __init__(self, noise_dim, hidden, out_dim):
        self.W1 = np.random.randn(noise_dim, hidden) * 0.02
        self.b1 = np.zeros((1, hidden))
        self.W2 = np.random.randn(hidden, out_dim) * 0.02
        self.b2 = np.zeros((1, out_dim))

    def forward(self, z):
        self.z = z
        self.h = relu(z @ self.W1 + self.b1)
        return sigmoid(self.h @ self.W2 + self.b2)  # fake data [0,1]

class Discriminator:
    def __init__(self, input_dim, hidden):
        self.W1 = np.random.randn(input_dim, hidden) * 0.02
        self.b1 = np.zeros((1, hidden))
        self.W2 = np.random.randn(hidden, 1) * 0.02
        self.b2 = np.zeros((1, 1))

    def forward(self, x):
        self.x = x
        self.h = relu(x @ self.W1 + self.b1)
        return sigmoid(self.h @ self.W2 + self.b2)  # P(real)

# Training: alternate D and G updates
G = Generator(noise_dim=16, hidden=32, out_dim=2)
D = Discriminator(input_dim=2, hidden=32)

print("🀺 Training GAN...")
for step in range(5000):
    # Real data: 2D Gaussian kept inside [0,1] so G's sigmoid output can match it
    real = np.random.randn(32, 2) * 0.1 + 0.7
    # Fake data: Generator from noise
    noise = np.random.randn(32, 16)
    fake = G.forward(noise)
    # Train D: maximize log(D(real)) + log(1-D(fake))
    # Train G: maximize log(D(fake))  (fool D)
    # ... backprop updates ...

    if (step+1) % 1000 == 0:
        d_real = D.forward(real).mean()
        d_fake = D.forward(fake).mean()
        print(f"  Step {step+1} β”‚ D(real)={d_real:.3f} D(fake)={d_fake:.3f}")
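The update step is left as a comment above. As a hedged, self-contained sketch of one way to fill it in (the helper names `init`, `forward`, `backward`, `sgd` are this sketch's own, not the article's), here are the alternating D/G updates with manual backprop through the same two-layer shapes, with real data kept in [0,1] to match G's sigmoid output:

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda x: 1 / (1 + np.exp(-np.clip(x, -500, 500)))
lr, B, noise_dim = 0.05, 32, 16

def init(n_in, n_hid, n_out):
    # Two-layer net stored as a dict: x -> relu -> sigmoid
    return {"W1": rng.normal(0, 0.02, (n_in, n_hid)), "b1": np.zeros((1, n_hid)),
            "W2": rng.normal(0, 0.02, (n_hid, n_out)), "b2": np.zeros((1, n_out))}

def forward(net, x):
    h = np.maximum(0, x @ net["W1"] + net["b1"])
    return h, sigmoid(h @ net["W2"] + net["b2"])

def backward(net, x, h, d_logits):
    # Given dL/d(output pre-activation), return param grads and dL/dx.
    grads = {"W2": h.T @ d_logits, "b2": d_logits.sum(0, keepdims=True)}
    dh = (d_logits @ net["W2"].T) * (h > 0)           # relu'
    grads["W1"], grads["b1"] = x.T @ dh, dh.sum(0, keepdims=True)
    return grads, dh @ net["W1"].T

def sgd(net, grads):
    for k in net:
        net[k] -= lr * grads[k]

G, D = init(noise_dim, 32, 2), init(2, 32, 1)

for step in range(2000):
    real = rng.normal(0, 0.1, (B, 2)) + 0.7           # target distribution in [0,1]
    z = rng.normal(size=(B, noise_dim))
    gh, fake = forward(G, z)

    # D step: BCE with labels real=1, fake=0; dL/dlogit = p - y (batch mean)
    hr, pr = forward(D, real)
    hf, pf = forward(D, fake)
    gr, _ = backward(D, real, hr, (pr - 1) / B)
    gf, _ = backward(D, fake, hf, pf / B)
    sgd(D, {k: gr[k] + gf[k] for k in gr})

    # G step: non-saturating loss -log D(fake); label the fakes as "real" (1)
    hf, pf = forward(D, fake)
    _, d_fake = backward(D, fake, hf, (pf - 1) / B)   # grad w.r.t. fake samples
    g_grads, _ = backward(G, z, gh, d_fake * fake * (1 - fake))  # through G's sigmoid
    sgd(G, g_grads)

print(f"D(real)={forward(D, rng.normal(0, 0.1, (B, 2)) + 0.7)[1].mean():.3f}")
print(f"D(fake)={forward(D, forward(G, rng.normal(size=(B, noise_dim)))[1])[1].mean():.3f}")
```

Note the G step reuses the Discriminator's `backward` only to get the gradient with respect to the fake samples, discarding D's parameter gradients, then continues the chain rule through G.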

πŸŽ“ Nash Equilibrium: An ideal GAN converges where D can't distinguish real from fake (D(x) = 0.5). The Generator has perfectly learned the data distribution.

πŸ–ΌοΈ

3. DCGAN β€” Generate MNIST Images

CNN architecture for GAN β€” generate digits from noise

DCGAN (Deep Convolutional GAN) uses transposed convolution in the Generator and convolution in the Discriminator. This is the standard architecture for image generation.
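The upsampling op itself is easy to demystify. A hedged NumPy sketch (not the article's code) of a single-channel, unpadded stride-2 transposed convolution β€” every input pixel "stamps" a scaled copy of the kernel onto the output, so a 2Γ—2 kernel doubles the spatial size:

```python
import numpy as np

def conv_transpose2d(x, kernel, stride=2):
    """Naive single-channel transposed convolution (no padding):
    each input pixel adds a scaled copy of the kernel to the output."""
    H, W = x.shape
    k = kernel.shape[0]
    out = np.zeros(((H - 1) * stride + k, (W - 1) * stride + k))
    for i in range(H):
        for j in range(W):
            out[i * stride:i * stride + k, j * stride:j * stride + k] += x[i, j] * kernel
    return out

# A DCGAN Generator stacks these to grow noise into an image:
x = np.random.randn(7, 7)       # small feature map from the noise projection
k = np.ones((2, 2))             # learned in a real DCGAN; fixed here for shape demo
y = conv_transpose2d(x, k)      # (7, 7)   -> (14, 14)
y2 = conv_transpose2d(y, k)     # (14, 14) -> (28, 28), MNIST-sized
print(y.shape, y2.shape)
```

Real DCGANs typically use 4Γ—4 kernels with stride 2 and padding 1 (same doubling), plus batch norm and multiple channels; the shape arithmetic is the point here.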

πŸŽ‰ From random noise β†’ realistic digit images! DCGAN can generate MNIST digits almost indistinguishable from real ones. This is the foundation of all modern image generation models (Stable Diffusion, DALL-E, etc.).

⚠️

4. GAN Training Challenges

Mode collapse, training instability, and tips to handle them

GANs are notoriously hard to train. Main issues: Mode Collapse (G only generates one type of output), Training Instability (loss oscillates), and Vanishing Gradients (D too smart β†’ G can't learn). Tips: label smoothing, spectral normalization, Wasserstein loss.
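Two of those tips fit in a few lines each. A hedged sketch with illustrative helper names, assuming sigmoid outputs for the smoothed BCE and raw, unsquashed critic scores for Wasserstein:

```python
import numpy as np

def d_loss_label_smoothed(p_real, p_fake, real_target=0.9, eps=1e-8):
    # One-sided label smoothing: train D toward 0.9 on real data instead
    # of 1.0, so D never becomes overconfident and starves G of gradient.
    real_term = -(real_target * np.log(p_real + eps)
                  + (1 - real_target) * np.log(1 - p_real + eps)).mean()
    fake_term = -np.log(1 - p_fake + eps).mean()
    return real_term + fake_term

def wasserstein_critic_loss(score_real, score_fake):
    # WGAN: the critic outputs raw scores (no sigmoid) and minimizes
    # mean(D(fake)) - mean(D(real)); unlike BCE, this doesn't saturate,
    # so gradients survive even when the critic separates the two well.
    return score_fake.mean() - score_real.mean()

print(wasserstein_critic_loss(np.array([2.0]), np.array([-1.0])))  # -3.0
```

(WGAN additionally requires constraining the critic, e.g. weight clipping or a gradient penalty, which is omitted here.)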

πŸ“

5. Page 7 Summary

What we've learned
Concept       | What It Is                                           | Key Code
GAN           | Two adversarial networks: Generator vs Discriminator | G(z) β†’ fake, D(x) β†’ real?
Generator     | Creates fake data from noise                         | sigmoid(z @ W + b)
Discriminator | Judges real vs fake                                  | sigmoid(x @ W + b)
DCGAN         | GAN with CNN architecture                            | ConvTranspose + Conv
Mode Collapse | Generator outputs only one type                      | diversity loss
Wasserstein   | More stable loss function                            | mean(D(real)) - mean(D(fake))
← Previous Page

Page 6 β€” Word Embeddings & NLP Pipeline


πŸ“˜

Coming Next: Page 8 β€” Transformer & Attention Mechanism

The revolutionary architecture behind GPT, BERT, and all modern LLMs. Self-Attention, Multi-Head Attention, Positional Encoding β€” from scratch. Stay tuned!