šŸ“ Artikel ini ditulis dalam Bahasa Indonesia & English
šŸ“ This article is available in English & Bahasa Indonesia

šŸ”¶ Learn TensorFlow — Page 1

Introduction to TensorFlow & Tensor Operations

The most popular deep learning framework from Google. Page 1 covers: what TensorFlow is and its ecosystem, installation, creating and manipulating tensors, eager execution, GradientTape for automatic differentiation, GPU acceleration, and comparison with NumPy/PyTorch.

šŸ“… March 2026 · ā± 25 min read
šŸ· TensorFlowTensorEager ModeGradientTapeGPUBeginner
šŸ“š Learn TensorFlow Series:

šŸ“‘ Table of Contents — Page 1

  1. Why TensorFlow? — Ecosystem, advantages, and who uses it
  2. Installation — Setup environment in 2 minutes
  3. Basic Tensors — Creating, data types, and NumPy conversion
  4. Tensor Operations — Math, reshape, indexing, broadcasting
  5. tf.Variable — Mutable tensors for model parameters
  6. GradientTape — Automatic differentiation: the core of backprop
  7. GPU Acceleration — Moving tensors to GPU, benchmarks
  8. TF vs NumPy vs PyTorch — Comparison: when to use what
  9. Summary & Page 2 Preview
šŸ”¶

1. Why TensorFlow?

The #1 framework for production deep learning — from Google Brain

TensorFlow is an open-source framework from Google for machine learning and deep learning. Used by Google Search, Gmail, YouTube, Google Translate, Waymo, and thousands of other companies. TF has the most complete production ecosystem: from cloud training to phone deployment.

Two main features that make TensorFlow powerful:

🧮 Tensor Computation + Auto-Differentiation

Like NumPy, but with auto-differentiation (GradientTape) and GPU/TPU acceleration. You don't need to write manual backpropagation — TensorFlow computes it automatically for any architecture.

šŸš€ Production Ecosystem

TF isn't just for training — there's TF Serving (deploy to servers), TFLite (mobile/edge), TF.js (browser), TFX (ML pipelines), and TPU support (Google's AI-specific hardware). One framework for the entire lifecycle.

TensorFlow Ecosystem — End to End:
  šŸ“Š Data: tf.data · TFRecord · TF Datasets · Feature columns
  🧠 Training: tf.keras · GradientTape · tf.distribute (GPU) · TPU support · TensorBoard · Keras Tuner
  šŸš€ Deployment: TF Serving (REST/gRPC) · TFLite (Android/iOS) · TF.js (browser) · SavedModel format · TF Extended (TFX) · Vertex AI (Google Cloud)
Data ingestion → Model building → Deploy anywhere!

šŸ’” Analogy: TensorFlow = Car Factory
NumPy = manual workshop (build one at a time).
PyTorch = advanced workshop (research & prototyping).
TensorFlow = automated factory — from design through mass production to worldwide distribution. If you need a model running on 1 billion phones, TensorFlow is the answer.

šŸ“¦

2. Installation — 2-Minute Setup

pip install tensorflow — GPU auto-detected

TensorFlow 2.x unifies CPU and GPU in one package. Install once, TF automatically detects NVIDIA GPU (CUDA) if available.

Terminal — Install TensorFlow (bash)
# Install TensorFlow (CPU + GPU auto-detect)
pip install tensorflow

# Verify installation
python -c "import tensorflow as tf; print(tf.__version__)"
# Output: 2.18.0 (or latest)

# Check GPU availability
python -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"
# [PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')]
# or [] if CPU only — still works fine!

# Optional: specific CUDA version
pip install tensorflow[and-cuda]  # TF 2.16+ auto-installs CUDA

šŸ’” Tip: Use Google Colab for a free T4 GPU — TensorFlow comes pre-installed. Or use Kaggle Notebooks for a free P100 GPU (30 hours/week).

🧮

3. Basic Tensors — Foundation of All Operations

tf.constant: immutable multi-dimensional arrays with GPU support

A tensor in TensorFlow is the same concept as a NumPy ndarray — a multi-dimensional array. The difference: tensors can run on GPU/TPU and support automatic differentiation. All data in TensorFlow must be in tensor form.

01_tf_tensors.py — Creating & Manipulating Tensors (python)
import tensorflow as tf
import numpy as np

# ===========================
# 1. Creating tensors from data
# ===========================
# Scalar (0D)
scalar = tf.constant(42)
print(scalar)            # tf.Tensor(42, shape=(), dtype=int32)

# Vector (1D)
vector = tf.constant([1.0, 2.0, 3.0])
print(vector.shape)      # (3,)
print(vector.dtype)      # float32

# Matrix (2D)
matrix = tf.constant([[1, 2, 3],
                      [4, 5, 6]])
print(matrix.shape)      # (2, 3)

# 3D Tensor (e.g., batch of images)
tensor_3d = tf.constant([[[1,2],[3,4]],[[5,6],[7,8]]])
print(tensor_3d.shape)   # (2, 2, 2)

# ===========================
# 2. Generator Functions
# ===========================
zeros = tf.zeros([3, 4])            # 3Ɨ4 filled with 0
ones = tf.ones([2, 3])              # 2Ɨ3 filled with 1
rand_n = tf.random.normal([2, 3])   # Gaussian (mean=0, std=1)
rand_u = tf.random.uniform([2, 3])  # Uniform [0, 1)
eye = tf.eye(4)                     # 4Ɨ4 identity matrix
rng = tf.range(0, 10, 2)            # [0, 2, 4, 6, 8]
fill = tf.fill([3, 3], 7.0)         # 3Ɨ3 filled with 7

# ===========================
# 3. Data Types
# ===========================
f32 = tf.constant([1.0, 2.0])                # float32 (default float)
i32 = tf.constant([1, 2])                    # int32 (default int)
f16 = tf.constant([1.0], dtype=tf.float16)   # half precision
f64 = tf.constant([1.0], dtype=tf.float64)   # double precision
b = tf.constant([True, False])               # bool
s = tf.constant(["hello", "world"])          # string tensor!

# Cast between types
casted = tf.cast(i32, tf.float32)    # int32 → float32

# ===========================
# 4. NumPy ↔ TensorFlow (seamless!)
# ===========================
np_array = np.array([1, 2, 3])
tf_tensor = tf.constant(np_array)   # NumPy → TF (copy)
tf_tensor2 = tf.convert_to_tensor(np_array)  # same result
back_np = tf_tensor.numpy()         # TF → NumPy

# TF operations accept NumPy arrays directly!
result = tf.reduce_sum(np.array([1, 2, 3]))  # works!

šŸŽ“ Tensor Dimensions

Tensor Dimensions — same as NumPy:
  Scalar (0D): 42 — shape=()
  Vector (1D): [1, 2, 3] — shape=(3,)
  Matrix (2D): [[1,2],[3,4]] — shape=(2,2)
  3D tensor: batch of matrices — shape=(batch, rows, cols)
  4D tensor: batch of images — shape=(batch, H, W, C); TF defaults to channels-LAST, PyTorch to channels-first
Example: 64 RGB images of 224Ɨ224 → shape = (64, 224, 224, 3) (batch, height, width, channels)

šŸŽ“ tf.constant = Immutable! Once created, a constant tensor cannot be modified. For data that needs to change (model weights), use tf.Variable (Section 5). This allows TF to optimize memory and computation graphs.
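A quick demonstration of this immutability (a minimal sketch; `tf.tensor_scatter_nd_update` shown here is one standard way to get a modified copy of a constant tensor):

```python
import tensorflow as tf

c = tf.constant([1, 2, 3])

# Eager tensors have no .assign() and reject item assignment outright
try:
    c[0] = 99
except TypeError as e:
    print("item assignment rejected:", type(e).__name__)

# To "modify" a constant you build a NEW tensor; the original is untouched
updated = tf.tensor_scatter_nd_update(c, indices=[[0]], updates=[99])
print(c.numpy())        # still [1 2 3]
print(updated.numpy())  # [99 2 3]
```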

šŸ”§

4. Tensor Operations — Math, Reshape, Indexing

Everything you need for data manipulation

02_tensor_operations.py — Complete Tensor Operations (python)
import tensorflow as tf

a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
b = tf.constant([[5.0, 6.0], [7.0, 8.0]])

# ===========================
# 1. Operasi Matematika
# ===========================
c = a + b                   # element-wise add (tf.add)
d = a * b                   # element-wise multiply (tf.multiply)
e = a @ b                   # matrix multiply (tf.matmul)
f = tf.matmul(a, b)         # same as the @ operator

print(tf.reduce_sum(a))     # 10.0 — sum of all elements
print(tf.reduce_mean(a))    # 2.5  — mean
print(tf.reduce_max(a))     # 4.0  — maximum value
print(tf.argmax(a, axis=1)) # [1, 1] — index of max per row

# Per-axis reduction
print(tf.reduce_sum(a, axis=0))  # [4., 6.] — sum per column
print(tf.reduce_sum(a, axis=1))  # [3., 7.] — sum per row

# Math functions
print(tf.sqrt(a))           # element-wise sqrt
print(tf.exp(a))            # element-wise e^x
print(tf.nn.relu(a - 2))    # ReLU activation
print(tf.nn.softmax(a))     # softmax per row

# ===========================
# 2. Reshape & Transpose
# ===========================
x = tf.range(12)                         # [0,1,2,...,11]
y = tf.reshape(x, [3, 4])               # 3Ɨ4 matrix
z = tf.reshape(x, [2, 2, 3])            # 3D tensor
flat = tf.reshape(z, [-1])              # flatten (-1 = auto)

t = tf.transpose(y)                     # swap axes: (3,4) → (4,3)
p = tf.transpose(z, perm=[0, 2, 1])     # custom axis order

# Expand/squeeze dimensions
expanded = tf.expand_dims(y, axis=0)    # (3,4) → (1,3,4) — add batch
squeezed = tf.squeeze(expanded, axis=0) # (1,3,4) → (3,4) — remove batch

# ===========================
# 3. Indexing & Slicing (same as NumPy!)
# ===========================
m = tf.constant([[1,2,3,4], [5,6,7,8], [9,10,11,12]])
print(m[0])           # [1, 2, 3, 4]  — first row
print(m[:, 1])        # [2, 6, 10]    — second column
print(m[0:2, 1:3])    # [[2,3],[6,7]] — slice
print(m[-1])          # [9,10,11,12]  — last row

# Advanced: gather & boolean mask
indices = tf.constant([0, 2])
print(tf.gather(m, indices))  # rows 0 and 2
mask = tf.constant([True, False, True])
print(tf.boolean_mask(m, mask))  # rows where mask=True

# ===========================
# 4. Broadcasting (same as NumPy!)
# ===========================
mat = tf.ones([3, 3])
vec = tf.constant([1.0, 2.0, 3.0])
result = mat + vec  # (3,3) + (3,) → broadcasts to (3,3)
print(result)
# [[2., 3., 4.],
#  [2., 3., 4.],
#  [2., 3., 4.]]

# Concatenate & Stack
t1 = tf.constant([[1,2]])
t2 = tf.constant([[3,4]])
print(tf.concat([t1, t2], axis=0))  # [[1,2],[3,4]] — concat rows
print(tf.stack([t1, t2], axis=0))   # [[[1,2]],[[3,4]]] — new dim
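Broadcasting follows the same rule as NumPy: compare shapes right to left, and each dimension pair must be equal or contain a 1. The rule can be sketched in plain Python (`broadcast_shape` is a hypothetical helper for illustration, not a TF API):

```python
from itertools import zip_longest

def broadcast_shape(s1, s2):
    """Hypothetical helper: return the broadcast result of shapes s1 and s2,
    or raise ValueError, mirroring the NumPy/TF broadcasting rule."""
    out = []
    # Compare dimensions right to left; missing leading dims count as 1
    for a, b in zip_longest(reversed(s1), reversed(s2), fillvalue=1):
        if a == b or a == 1 or b == 1:
            out.append(max(a, b))
        else:
            raise ValueError(f"shapes {s1} and {s2} are not broadcastable")
    return tuple(reversed(out))

print(broadcast_shape((3, 3), (3,)))       # (3, 3), the mat + vec case above
print(broadcast_shape((2, 1, 4), (3, 1)))  # (2, 3, 4)
```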
šŸ”„

5. tf.Variable — Mutable Tensors

For storing model parameters (weights & biases) that get updated during training

tf.Variable is a tensor whose value can be changed — this is what stores model weights and biases. During training, the optimizer updates Variables based on gradients. All weights in a Keras model are automatically Variables.

03_tf_variables.py — Variables for Model Weights (python)
import tensorflow as tf

# ===========================
# 1. Creating a Variable
# ===========================
w = tf.Variable(tf.random.normal([3, 2]), name="weights")
b = tf.Variable(tf.zeros([2]), name="bias")
print(f"w shape: {w.shape}, dtype: {w.dtype}")  # (3,2), float32
print(f"w name: {w.name}")                      # weights:0

# ===========================
# 2. Update Variable (in-place)
# ===========================
w.assign(w * 0.9)            # replace entire value
w.assign_add(tf.ones([3,2])) # w = w + 1
w.assign_sub(tf.ones([3,2])) # w = w - 1

# āš ļø Variable TIDAK bisa di-reassign dengan =
# w = w * 2  ← WRONG! Creates new tensor, not update
# w.assign(w * 2)  ← CORRECT! In-place update

# ===========================
# 3. Constant vs Variable
# ===========================
const = tf.constant([1, 2, 3])   # ā„ļø immutable
var = tf.Variable([1, 2, 3])     # šŸ”„ mutable
# const.assign(...)  ← ERROR! Constants can't be changed
var.assign([4, 5, 6])            # āœ… Works!

# ===========================
# 4. Manual Gradient Descent (preview of training)
# ===========================
w = tf.Variable(5.0)
lr = 0.1

for step in range(50):
    with tf.GradientTape() as tape:
        loss = (w - 2.0) ** 2  # minimum at w=2
    grad = tape.gradient(loss, w)
    w.assign_sub(lr * grad)    # w = w - lr * dL/dw

print(f"Optimized w = {w.numpy():.4f}")  # ā‰ˆ 2.0000 āœ“

šŸŽ“ When Constant vs Variable?
tf.constant → input data, labels, hyperparameters — anything that doesn't change during training.
tf.Variable → weights, biases, embeddings — anything updated by the optimizer.
In Keras, you rarely create Variables manually — keras.layers.Dense(64) automatically creates Variables for W and b.

⚔

6. GradientTape — Auto-Differentiation

Record operations, compute gradients automatically — the core of all training

tf.GradientTape is TensorFlow's most important training feature. It "records" all operations performed on Variables, then computes gradients (derivatives) automatically. This replaces the entire manual backpropagation we wrote in the Neural Network series!

04_gradient_tape.py — Complete Automatic Differentiation (python)
import tensorflow as tf

# ===========================
# 1. Simplest example
# ===========================
x = tf.Variable(3.0)

with tf.GradientTape() as tape:
    y = x ** 2  # y = x²

# dy/dx = 2x = 2(3) = 6
grad = tape.gradient(y, x)
print(f"dy/dx at x=3: {grad.numpy()}")  # 6.0 āœ“

# ===========================
# 2. Multiple variables (like W and b)
# ===========================
W = tf.Variable([[1.0, 2.0], [3.0, 4.0]])
b = tf.Variable([0.1, 0.2])
X = tf.constant([[1.0, 0.5]])
y_true = tf.constant([[1.0, 0.0]])

with tf.GradientTape() as tape:
    y_pred = tf.nn.softmax(X @ W + b)
    loss = tf.reduce_mean((y_true - y_pred) ** 2)

# Gradient for ALL trainable variables at once!
grads = tape.gradient(loss, [W, b])
print(f"dL/dW shape: {grads[0].shape}")  # (2, 2)
print(f"dL/db shape: {grads[1].shape}")  # (2,)

# ===========================
# 3. Persistent tape (for multiple gradient calls)
# ===========================
x = tf.Variable(3.0)
with tf.GradientTape(persistent=True) as tape:
    y = x ** 2
    z = x ** 3

print(tape.gradient(y, x))  # 6.0 (dy/dx)
print(tape.gradient(z, x))  # 27.0 (dz/dx = 3x² = 27)
del tape  # free resources!

# ===========================
# 4. Higher-order gradients (gradient of gradient!)
# ===========================
x = tf.Variable(2.0)
with tf.GradientTape() as t2:
    with tf.GradientTape() as t1:
        y = x ** 3      # y = x³
    dy = t1.gradient(y, x)  # dy/dx = 3x² = 12
d2y = t2.gradient(dy, x)    # d²y/dx² = 6x = 12
print(f"First derivative:  {dy.numpy()}")  # 12.0
print(f"Second derivative: {d2y.numpy()}") # 12.0

# ===========================
# 5. Full mini training loop!
# ===========================
# Learn y = 2x + 1 (same as NN series Page 1!)
w = tf.Variable(0.0)
b_var = tf.Variable(0.0)
X_data = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0])
y_data = tf.constant([3.0, 5.0, 7.0, 9.0, 11.0])

for epoch in range(5000):
    with tf.GradientTape() as tape:
        y_pred = w * X_data + b_var
        loss = tf.reduce_mean((y_data - y_pred) ** 2)
    grads = tape.gradient(loss, [w, b_var])
    w.assign_sub(0.01 * grads[0])
    b_var.assign_sub(0.01 * grads[1])

print(f"Learned: y = {w.numpy():.2f}x + {b_var.numpy():.2f}")
# Output: y = 2.00x + 1.00 ← PERFECT! Same as the NN series! šŸŽ‰

šŸŽ‰ Compare with the Neural Network Series!
In NN series Page 1, we wrote 50+ lines of manual backpropagation to learn y = 2x + 1.
With GradientTape: 8 lines — and it works for any architecture, including Transformers with billions of parameters. This is the power of auto-differentiation.

šŸŽ®

7. GPU Acceleration — 10-100Ɨ Faster

TensorFlow automatically uses the GPU if available — no code changes needed

TensorFlow automatically places operations on GPU if available. You don't need to change any code — the same model runs on CPU, GPU, or TPU. To force a specific device, use tf.device().

05_gpu_acceleration.py — GPU in TensorFlow (python)
import tensorflow as tf
import time

# ===========================
# 1. Check available devices
# ===========================
print("GPUs:", tf.config.list_physical_devices('GPU'))
print("CPUs:", tf.config.list_physical_devices('CPU'))

# ===========================
# 2. Benchmark: CPU vs GPU
# ===========================
size = 5000

# CPU
with tf.device('/CPU:0'):
    a_cpu = tf.random.normal([size, size])
    b_cpu = tf.random.normal([size, size])
    start = time.time()
    c_cpu = a_cpu @ b_cpu
    cpu_time = time.time() - start
    print(f"CPU: {cpu_time:.3f}s")

# GPU (if available)
if tf.config.list_physical_devices('GPU'):
    with tf.device('/GPU:0'):
        a_gpu = tf.random.normal([size, size])
        b_gpu = tf.random.normal([size, size])
        _ = (a_gpu @ b_gpu).numpy()  # warmup + sync
        start = time.time()
        c_gpu = a_gpu @ b_gpu
        _ = c_gpu.numpy()  # force sync: eager GPU ops run asynchronously
        gpu_time = time.time() - start
        print(f"GPU: {gpu_time:.3f}s")
        print(f"Speedup: {cpu_time/gpu_time:.1f}Ɨ")
# Typical result: CPU 2.5s, GPU 0.03s → 80Ɨ speedup! šŸš€

# ===========================
# 3. Memory growth (prevent OOM)
# ===========================
gpus = tf.config.list_physical_devices('GPU')
for gpu in gpus:
    tf.config.experimental.set_memory_growth(gpu, True)
# This allocates GPU memory as needed, not all at once
# Put this at the TOP of your script!

# ===========================
# 4. Check where tensor lives
# ===========================
t = tf.constant([1, 2, 3])
print(t.device)  # /job:localhost/replica:0/task:0/device:GPU:0

šŸ’” Pro Tip: Always set memory growth at the start of your script! Without it, TF allocates all GPU memory — even for small models. This can cause crashes if you run other programs that also need the GPU.

āš–ļø

8. TF vs NumPy vs PyTorch — When to Use What?

Three tools, three strengths — understand the differences
Aspect         | NumPy                         | TensorFlow                     | PyTorch
Auto-Grad      | āŒ No                         | āœ… GradientTape                | āœ… autograd
GPU            | āŒ CPU only                   | āœ… Auto-detect                 | āœ… Manual .to(device)
Deployment     | āŒ                            | āœ… TF Serving, TFLite, TF.js   | āš ļø TorchServe, ONNX
Mobile         | āŒ                            | āœ… TFLite (Android/iOS)        | āš ļø PyTorch Mobile
Research       | āŒ                            | āš ļø Good                       | āœ… Dominates academia
Industry       | āš ļø Data preprocessing        | āœ… Google, enterprise          | āœ… Meta, startups
Learning Curve | āœ… Easiest                    | āš ļø Medium                     | āœ… Pythonic
When to Use    | Data wrangling, preprocessing | Production ML, mobile, browser | Research, prototyping
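To make the Auto-Grad row concrete, here is the y = 2x + 1 fit from Section 6 redone in plain NumPy, where both gradient formulas must be derived by hand. This is the work that GradientTape (or PyTorch's autograd) does for you; a comparison sketch, not part of the original series code:

```python
import numpy as np

X = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([3.0, 5.0, 7.0, 9.0, 11.0])   # y = 2x + 1
w, b = 0.0, 0.0

for _ in range(5000):
    err = (w * X + b) - y
    grad_w = 2 * np.mean(err * X)   # dL/dw, derived by hand
    grad_b = 2 * np.mean(err)       # dL/db, derived by hand
    w -= 0.01 * grad_w
    b -= 0.01 * grad_b

print(f"y = {w:.2f}x + {b:.2f}")    # converges to y = 2.00x + 1.00
```

With auto-grad, the two hand-derived lines disappear and the rest stays the same; for a deep network there would be thousands of such lines to derive.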

šŸŽ“ Practical Recommendation:
Learning? Start with either — both teach the same concepts.
Production on Google Cloud / mobile? → TensorFlow.
Research / academic papers? → PyTorch.
Both? → Master one well; the other is just syntax differences. The deep learning concepts are exactly the same.

šŸ“

9. Page 1 Summary

What we've learned
Concept       | What It Is                        | Key Code
tf.constant   | Immutable tensor (inputs, labels) | tf.constant([1,2,3])
tf.Variable   | Mutable tensor (weights, biases)  | tf.Variable(tf.random.normal(shape))
Math Ops      | Add, multiply, matmul, reduce     | a @ b, tf.reduce_sum()
Reshape       | Change tensor shape               | tf.reshape(x, [3,4])
Indexing      | Access elements (same as NumPy)   | x[0:2, 1:3]
GradientTape  | Auto-differentiation              | tape.gradient(loss, w)
GPU Support   | Auto-detect & accelerate          | tf.device('/GPU:0')
NumPy ↔ TF    | Seamless conversion               | .numpy(), tf.constant()
Memory Growth | On-demand GPU memory allocation   | set_memory_growth(gpu, True)
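All of these pieces compose. A closing sketch that touches each row of the table in a few lines (the values are chosen arbitrarily for illustration):

```python
import tensorflow as tf

x = tf.reshape(tf.constant([1.0, 2.0, 3.0, 4.0]), [2, 2])  # constant + reshape
w = tf.Variable(tf.ones([2, 1]))                           # mutable parameter

with tf.GradientTape() as tape:     # record ops for auto-diff
    y = x @ w                       # matmul: (2,2) @ (2,1) → (2,1)
    loss = tf.reduce_sum(y ** 2)    # reduction

grad = tape.gradient(loss, w)       # dL/dw = 2 * x^T @ y = [[48.], [68.]]
w.assign_sub(0.01 * grad)           # one gradient-descent step
print(w.numpy())                    # ā‰ˆ [[0.52], [0.32]]
```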
šŸ“˜

Coming Next: Page 2 — Keras API & Model Building

Building models with Keras: Sequential API (stacking layers), Functional API (complex architectures), compile & fit, callbacks (EarlyStopping, TensorBoard), custom layers, and training an MNIST classifier to 98%+ accuracy in 10 lines of code. Stay tuned!