Table of Contents – Page 1
- Why TensorFlow? – Ecosystem, advantages, and who uses it
- Installation – Environment setup in 2 minutes
- Basic Tensors – Creating, data types, and NumPy conversion
- Tensor Operations – Math, reshape, indexing, broadcasting
- tf.Variable – Mutable tensors for model parameters
- GradientTape – Automatic differentiation: the core of backprop
- GPU Acceleration – Moving tensors to GPU, benchmarks
- TF vs NumPy vs PyTorch – Comparison: when to use what
- Summary & Page 2 Preview
1. Why TensorFlow?
TensorFlow is an open-source framework from Google for machine learning and deep learning. It is used by Google Search, Gmail, YouTube, Google Translate, Waymo, and thousands of other companies. TF has the most complete production ecosystem: from cloud training to phone deployment.
Two main features make TensorFlow powerful:
Tensor Computation + Auto-Differentiation
Like NumPy, but with auto-differentiation (GradientTape) and GPU/TPU acceleration. You don't need to write backpropagation by hand: TensorFlow computes it automatically for any architecture.
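To appreciate what "automatic" buys you: without autodiff, you differentiate by hand or approximate numerically. Here is a minimal central-difference sketch in plain Python (the function `f` is an illustrative example, not from the text); GradientTape instead gives exact derivatives for any computation graph, with no per-function work.

```python
def f(x):
    # Illustrative function: f(x) = x^2 + 3x, so analytically df/dx = 2x + 3
    return x ** 2 + 3 * x

def numerical_grad(fn, x, eps=1e-5):
    # Central-difference approximation: (fn(x+h) - fn(x-h)) / 2h
    return (fn(x + eps) - fn(x - eps)) / (2 * eps)

approx = numerical_grad(f, 2.0)
print(approx)  # ≈ 7.0, matching the analytic 2(2) + 3
```

This works for one scalar input; doing it for millions of parameters is exactly the bookkeeping that automatic differentiation eliminates.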
Production Ecosystem
TF isn't just for training: there's TF Serving (server deployment), TFLite (mobile/edge), TF.js (browser), TFX (ML pipelines), and TPU support (Google's AI-specific hardware). One framework for the entire lifecycle.
Analogy: TensorFlow = Car Factory
NumPy = a manual workshop (build one at a time).
PyTorch = an advanced workshop (research & prototyping).
TensorFlow = an automated factory: from design to mass production to worldwide distribution. If you need a model running on a billion phones, TensorFlow is the answer.
2. Installation – 2-Minute Setup
TensorFlow 2.x ships CPU and GPU support in a single package. Install once, and TF automatically detects an NVIDIA GPU (CUDA) if one is available.
```shell
# Install TensorFlow (CPU + GPU auto-detect)
pip install tensorflow

# Verify installation
python -c "import tensorflow as tf; print(tf.__version__)"
# Output: 2.18.0 (or latest)

# Check GPU availability
python -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"
# [PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')]
# or [] if CPU only (still works fine!)

# Optional: bundled CUDA libraries
pip install "tensorflow[and-cuda]"   # TF 2.16+ auto-installs CUDA
```
Tip: Use Google Colab for a free T4 GPU (TensorFlow comes pre-installed), or Kaggle Notebooks for a free P100 GPU (30 hours/week).
3. Basic Tensors – The Foundation of All Operations
A tensor in TensorFlow is the same concept as a NumPy ndarray: a multi-dimensional array. The difference is that tensors can run on GPU/TPU and support automatic differentiation. All data in TensorFlow must be in tensor form.
```python
import tensorflow as tf
import numpy as np

# ===========================
# 1. Creating tensors from data
# ===========================

# Scalar (0D)
scalar = tf.constant(42)
print(scalar)            # tf.Tensor(42, shape=(), dtype=int32)

# Vector (1D)
vector = tf.constant([1.0, 2.0, 3.0])
print(vector.shape)      # (3,)
print(vector.dtype)      # float32

# Matrix (2D)
matrix = tf.constant([[1, 2, 3], [4, 5, 6]])
print(matrix.shape)      # (2, 3)

# 3D tensor (e.g., batch of images)
tensor_3d = tf.constant([[[1, 2], [3, 4]], [[5, 6], [7, 8]]])
print(tensor_3d.shape)   # (2, 2, 2)

# ===========================
# 2. Generator functions
# ===========================
zeros  = tf.zeros([3, 4])           # 3×4 filled with 0
ones   = tf.ones([2, 3])            # 2×3 filled with 1
rand_n = tf.random.normal([2, 3])   # Gaussian (mean=0, std=1)
rand_u = tf.random.uniform([2, 3])  # Uniform [0, 1)
eye    = tf.eye(4)                  # 4×4 identity matrix
rng    = tf.range(0, 10, 2)         # [0, 2, 4, 6, 8]
fill   = tf.fill([3, 3], 7.0)       # 3×3 filled with 7

# ===========================
# 3. Data types
# ===========================
f32 = tf.constant([1.0, 2.0])                # float32 (default float)
i32 = tf.constant([1, 2])                    # int32 (default int)
f16 = tf.constant([1.0], dtype=tf.float16)   # half precision
f64 = tf.constant([1.0], dtype=tf.float64)   # double precision
b   = tf.constant([True, False])             # bool
s   = tf.constant(["hello", "world"])        # string tensor!

# Cast between types
casted = tf.cast(i32, tf.float32)            # int32 -> float32

# ===========================
# 4. NumPy <-> TensorFlow (seamless!)
# ===========================
np_array   = np.array([1, 2, 3])
tf_tensor  = tf.constant(np_array)           # NumPy -> TF (copy)
tf_tensor2 = tf.convert_to_tensor(np_array)  # same result
back_np    = tf_tensor.numpy()               # TF -> NumPy

# TF operations accept NumPy arrays directly!
result = tf.reduce_sum(np.array([1, 2, 3]))  # works!
```
(Figure: Tensor Dimensions)
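The rank hierarchy the figure illustrates can be checked in NumPy, whose arrays convert seamlessly to TF tensors: `ndim` is the tensor's rank and `shape` is its size per axis. A quick sketch:

```python
import numpy as np

scalar = np.array(42)                      # rank 0: a single number
vector = np.array([1.0, 2.0, 3.0])         # rank 1: a list of numbers
matrix = np.array([[1, 2, 3], [4, 5, 6]])  # rank 2: rows and columns
cube   = np.zeros((2, 2, 2))               # rank 3: e.g., a batch of 2×2 maps

for t in (scalar, vector, matrix, cube):
    print(t.ndim, t.shape)
# 0 ()
# 1 (3,)
# 2 (2, 3)
# 3 (2, 2, 2)
```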
Note: tf.constant is immutable! Once created, a constant tensor cannot be modified. For data that needs to change (model weights), use tf.Variable (Section 5). This immutability lets TF optimize memory and the computation graph.
4. Tensor Operations – Math, Reshape, Indexing
```python
import tensorflow as tf

a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
b = tf.constant([[5.0, 6.0], [7.0, 8.0]])

# ===========================
# 1. Math operations
# ===========================
c = a + b            # element-wise add (tf.add)
d = a * b            # element-wise multiply (tf.multiply)
e = a @ b            # matrix multiply (tf.matmul)
f = tf.matmul(a, b)  # same as the @ operator

print(tf.reduce_sum(a))      # 10.0   -> sum of all elements
print(tf.reduce_mean(a))     # 2.5    -> mean
print(tf.reduce_max(a))      # 4.0    -> maximum value
print(tf.argmax(a, axis=1))  # [1, 1] -> index of max per row

# Per-axis reduction
print(tf.reduce_sum(a, axis=0))  # [4., 6.] -> sum per column
print(tf.reduce_sum(a, axis=1))  # [3., 7.] -> sum per row

# Math functions
print(tf.sqrt(a))         # element-wise sqrt
print(tf.exp(a))          # element-wise e^x
print(tf.nn.relu(a - 2))  # ReLU activation
print(tf.nn.softmax(a))   # softmax per row

# ===========================
# 2. Reshape & transpose
# ===========================
x = tf.range(12)                     # [0, 1, 2, ..., 11]
y = tf.reshape(x, [3, 4])            # 3×4 matrix
z = tf.reshape(x, [2, 2, 3])         # 3D tensor
flat = tf.reshape(z, [-1])           # flatten (-1 = infer size)
t = tf.transpose(y)                  # swap axes: (3,4) -> (4,3)
p = tf.transpose(z, perm=[0, 2, 1])  # custom axis order

# Expand/squeeze dimensions
expanded = tf.expand_dims(y, axis=0)     # (3,4) -> (1,3,4): add batch dim
squeezed = tf.squeeze(expanded, axis=0)  # (1,3,4) -> (3,4): remove batch dim

# ===========================
# 3. Indexing & slicing (same as NumPy!)
# ===========================
m = tf.constant([[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12]])
print(m[0])         # [1, 2, 3, 4]      -> first row
print(m[:, 1])      # [2, 6, 10]        -> second column
print(m[0:2, 1:3])  # [[2, 3], [6, 7]]  -> slice
print(m[-1])        # [9, 10, 11, 12]   -> last row

# Advanced: gather & boolean mask
indices = tf.constant([0, 2])
print(tf.gather(m, indices))     # rows 0 and 2
mask = tf.constant([True, False, True])
print(tf.boolean_mask(m, mask))  # rows where mask=True

# ===========================
# 4. Broadcasting (same as NumPy!)
# ===========================
mat = tf.ones([3, 3])
vec = tf.constant([1.0, 2.0, 3.0])
result = mat + vec  # (3,3) + (3,) -> broadcasts to (3,3)
print(result)       # [[2., 3., 4.],
                    #  [2., 3., 4.],
                    #  [2., 3., 4.]]

# Concatenate & stack
t1 = tf.constant([[1, 2]])
t2 = tf.constant([[3, 4]])
print(tf.concat([t1, t2], axis=0))  # [[1, 2], [3, 4]]     -> concat rows
print(tf.stack([t1, t2], axis=0))   # [[[1, 2]], [[3, 4]]] -> new dimension
```
5. tf.Variable – Mutable Tensors
tf.Variable is a tensor whose value can change; it is what stores a model's weights and biases. During training, the optimizer updates Variables based on gradients. All weights in a Keras model are automatically Variables.
```python
import tensorflow as tf

# ===========================
# 1. Creating Variables
# ===========================
w = tf.Variable(tf.random.normal([3, 2]), name="weights")
b = tf.Variable(tf.zeros([2]), name="bias")
print(f"w shape: {w.shape}, dtype: {w.dtype}")  # (3, 2), float32
print(f"w name: {w.name}")                      # weights:0

# ===========================
# 2. Updating a Variable (in-place)
# ===========================
w.assign(w * 0.9)              # replace the entire value
w.assign_add(tf.ones([3, 2]))  # w = w + 1
w.assign_sub(tf.ones([3, 2]))  # w = w - 1

# Warning: a Variable can NOT be reassigned with =
# w = w * 2        -> WRONG! Creates a new tensor, does not update
# w.assign(w * 2)  -> CORRECT! In-place update

# ===========================
# 3. Constant vs Variable
# ===========================
const = tf.constant([1, 2, 3])  # immutable
var   = tf.Variable([1, 2, 3])  # mutable

# const.assign(...)  -> ERROR! Constants can't be changed
var.assign([4, 5, 6])           # works!

# ===========================
# 4. Manual gradient descent (preview of training)
# ===========================
w = tf.Variable(5.0)
lr = 0.1
for step in range(50):
    with tf.GradientTape() as tape:
        loss = (w - 2.0) ** 2        # minimum at w = 2
    grad = tape.gradient(loss, w)
    w.assign_sub(lr * grad)          # w = w - lr * dL/dw
print(f"Optimized w = {w.numpy():.4f}")  # -> 2.0000
```
When Constant vs Variable?
tf.constant: input data, labels, hyperparameters (anything that doesn't change during training).
tf.Variable: weights, biases, embeddings (anything updated by the optimizer).
In Keras, you rarely create Variables manually: keras.layers.Dense(64) automatically creates Variables for W and b.
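As a back-of-the-envelope check of what Dense(64) creates under the hood, assume an illustrative 128-dimensional input (a size not from the text): W gets shape (128, 64) and b gets shape (64,), so the layer holds 128 × 64 + 64 trainable parameters.

```python
# Parameter count of a hypothetical Dense(64) layer on a 128-dim input
in_dim, units = 128, 64       # illustrative sizes, not from the text
w_params = in_dim * units     # W: shape (128, 64) -> 8192 weights
b_params = units              # b: shape (64,)     -> 64 biases
total = w_params + b_params
print(total)                  # 8256 trainable parameters
```

This matches what `model.summary()` would report for such a layer in Keras.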
6. GradientTape – Auto-Differentiation
tf.GradientTape is TensorFlow's most important training feature. It "records" all operations performed on Variables, then computes gradients (derivatives) automatically. This replaces the entire manual backpropagation we wrote in the Neural Network series!
```python
import tensorflow as tf

# ===========================
# 1. Simplest example
# ===========================
x = tf.Variable(3.0)
with tf.GradientTape() as tape:
    y = x ** 2                 # y = x²
# dy/dx = 2x = 2(3) = 6
grad = tape.gradient(y, x)
print(f"dy/dx at x=3: {grad.numpy()}")  # 6.0

# ===========================
# 2. Multiple variables (like W and b)
# ===========================
W = tf.Variable([[1.0, 2.0], [3.0, 4.0]])
b = tf.Variable([0.1, 0.2])
X = tf.constant([[1.0, 0.5]])
y_true = tf.constant([[1.0, 0.0]])

with tf.GradientTape() as tape:
    y_pred = tf.nn.softmax(X @ W + b)
    loss = tf.reduce_mean((y_true - y_pred) ** 2)

# Gradients for ALL trainable variables at once!
grads = tape.gradient(loss, [W, b])
print(f"dL/dW shape: {grads[0].shape}")  # (2, 2)
print(f"dL/db shape: {grads[1].shape}")  # (2,)

# ===========================
# 3. Persistent tape (for multiple gradient calls)
# ===========================
x = tf.Variable(3.0)
with tf.GradientTape(persistent=True) as tape:
    y = x ** 2
    z = x ** 3
print(tape.gradient(y, x))  # 6.0  (dy/dx = 2x)
print(tape.gradient(z, x))  # 27.0 (dz/dx = 3x² = 27)
del tape  # free resources!

# ===========================
# 4. Higher-order gradients (gradient of a gradient!)
# ===========================
x = tf.Variable(2.0)
with tf.GradientTape() as t2:
    with tf.GradientTape() as t1:
        y = x ** 3            # y = x³
    dy = t1.gradient(y, x)    # dy/dx   = 3x² = 12
d2y = t2.gradient(dy, x)      # d²y/dx² = 6x  = 12
print(f"First derivative: {dy.numpy()}")    # 12.0
print(f"Second derivative: {d2y.numpy()}")  # 12.0

# ===========================
# 5. Full mini training loop!
# ===========================
# Learn y = 2x + 1 (same as NN series Page 1!)
w = tf.Variable(0.0)
b_var = tf.Variable(0.0)
X_data = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0])
y_data = tf.constant([3.0, 5.0, 7.0, 9.0, 11.0])

for epoch in range(1000):
    with tf.GradientTape() as tape:
        y_pred = w * X_data + b_var
        loss = tf.reduce_mean((y_data - y_pred) ** 2)
    grads = tape.gradient(loss, [w, b_var])
    w.assign_sub(0.01 * grads[0])
    b_var.assign_sub(0.01 * grads[1])

print(f"Learned: y = {w.numpy():.2f}x + {b_var.numpy():.2f}")
# Output: y = 2.00x + 0.99 (essentially y = 2x + 1, same as the NN series!)
```
Compare with the Neural Network Series!
In NN series Page 1, we wrote 50+ lines of manual backpropagation to learn y = 2x + 1.
With GradientTape: about 8 lines, and it works for any architecture, including Transformers with billions of parameters. This is the power of auto-differentiation.
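For contrast, here is the manual-gradient version of the same y = 2x + 1 fit in plain NumPy, with the MSE derivatives written out by hand; this hand-derivation step is exactly what GradientTape does for you. A minimal sketch (the epoch count and learning rate are illustrative choices):

```python
import numpy as np

X = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([3.0, 5.0, 7.0, 9.0, 11.0])  # samples of y = 2x + 1

w, b, lr = 0.0, 0.0, 0.01
for _ in range(5000):
    y_pred = w * X + b
    err = y_pred - y
    # Hand-derived gradients of MSE = mean(err²); with autodiff,
    # these two lines are what the tape computes for you:
    dw = 2 * np.mean(err * X)  # dL/dw
    db = 2 * np.mean(err)      # dL/db
    w -= lr * dw
    b -= lr * db

print(f"y = {w:.2f}x + {b:.2f}")  # -> y = 2.00x + 1.00
```

For this two-parameter model the hand derivation is easy; for a deep network every layer needs its own chain-rule step, which is why autodiff matters.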
7. GPU Acceleration – 10-100× Faster
TensorFlow automatically places operations on the GPU if one is available. You don't need to change any code: the same model runs on CPU, GPU, or TPU. To pin a specific device, use tf.device().
```python
import tensorflow as tf
import time

# ===========================
# 1. Check available devices
# ===========================
print("GPUs:", tf.config.list_physical_devices('GPU'))
print("CPUs:", tf.config.list_physical_devices('CPU'))

# ===========================
# 2. Benchmark: CPU vs GPU
# ===========================
size = 5000

# CPU
with tf.device('/CPU:0'):
    a_cpu = tf.random.normal([size, size])
    b_cpu = tf.random.normal([size, size])
    start = time.time()
    c_cpu = a_cpu @ b_cpu
    cpu_time = time.time() - start
print(f"CPU: {cpu_time:.3f}s")

# GPU (if available)
if tf.config.list_physical_devices('GPU'):
    with tf.device('/GPU:0'):
        a_gpu = tf.random.normal([size, size])
        b_gpu = tf.random.normal([size, size])
        _ = (a_gpu @ b_gpu).numpy()  # warmup
        start = time.time()
        # .numpy() forces the async GPU op to finish before we stop the clock
        c_gpu = (a_gpu @ b_gpu).numpy()
        gpu_time = time.time() - start
    print(f"GPU: {gpu_time:.3f}s")
    print(f"Speedup: {cpu_time / gpu_time:.1f}×")
# Typical result: CPU 2.5s, GPU 0.03s -> ~80× speedup!

# ===========================
# 3. Memory growth (prevent OOM)
# ===========================
gpus = tf.config.list_physical_devices('GPU')
for gpu in gpus:
    tf.config.experimental.set_memory_growth(gpu, True)
# This allocates GPU memory as needed, not all at once.
# Put this at the TOP of your script!

# ===========================
# 4. Check where a tensor lives
# ===========================
t = tf.constant([1, 2, 3])
print(t.device)  # /job:localhost/replica:0/task:0/device:GPU:0 (or CPU:0)
```
Pro Tip: Always set memory growth at the start of your script! Without it, TF allocates all GPU memory, even for small models, which can crash other programs that also need the GPU.
8. TF vs NumPy vs PyTorch – When to Use What?
| Aspect | NumPy | TensorFlow | PyTorch |
|---|---|---|---|
| Auto-grad | ❌ No | ✅ GradientTape | ✅ autograd |
| GPU | ❌ CPU only | ✅ Auto-detect | ✅ Manual .to(device) |
| Deployment | ❌ | ✅ TF Serving, TFLite, TF.js | ⚠️ TorchServe, ONNX |
| Mobile | ❌ | ✅ TFLite (Android/iOS) | ⚠️ PyTorch Mobile |
| Research | ❌ | ⚠️ Good | ✅ Dominates academia |
| Industry | ⚠️ Data preprocessing | ✅ Google, enterprise | ✅ Meta, startups |
| Learning curve | ✅ Easiest | ⚠️ Medium | ✅ Pythonic |
| When to use | Data wrangling, preprocessing | Production ML, mobile, browser | Research, prototyping |
Practical Recommendations:
Learning? Start with either; both teach the same concepts.
Production on Google Cloud / mobile? → TensorFlow.
Research / academic papers? → PyTorch.
Both? Master one well; the other is mostly syntax differences. The deep learning concepts are exactly the same.
9. Page 1 Summary
| Concept | What It Is | Key Code |
|---|---|---|
| tf.constant | Immutable tensor (inputs, labels) | tf.constant([1,2,3]) |
| tf.Variable | Mutable tensor (weights, biases) | tf.Variable(tf.random.normal([3,2])) |
| Math ops | Add, multiply, matmul, reduce | a @ b, tf.reduce_sum() |
| Reshape | Change tensor shape | tf.reshape(x, [3,4]) |
| Indexing | Access elements (same as NumPy) | x[0:2, 1:3] |
| GradientTape | Auto-differentiation | tape.gradient(loss, w) |
| GPU support | Auto-detect & accelerate | tf.device('/GPU:0') |
| NumPy ↔ TF | Seamless conversion | .numpy(), tf.constant() |
| Memory growth | On-demand GPU memory allocation | set_memory_growth(gpu, True) |
Coming Next: Page 2 – Keras API & Model Building
Building models with Keras: the Sequential API (stacking layers), the Functional API (complex architectures), compile & fit, callbacks (EarlyStopping, TensorBoard), custom layers, and training an MNIST classifier to 98%+ accuracy in 10 lines of code. Stay tuned!