📑 Table of Contents – Page 10 (Final!)
- Our Journey – Recap of 10 Pages in one diagram
- Capstone: Production Image Classifier – Full pipeline code
- Step 1: Data Pipeline – tf.data + augmentation (Pages 3+4)
- Step 2: Model – Transfer learning + mixed precision (Pages 3+4)
- Step 3: Training – Custom callbacks + TensorBoard (Pages 2+7)
- Step 4: Evaluation – Confusion matrix, per-class accuracy
- Step 5: Export & Deploy – SavedModel + TFLite + Docker (Page 9)
- Roadmap: What's Next? – TFX, Vertex AI, JAX, MLOps
- Career Paths in ML/AI – From junior to senior
- Closing – Congratulations! 🎉
1. Our Journey – 10 Pages at a Glance
2-6. Capstone: Production Image Classifier – Full Pipeline
The following script combines all techniques from the previous nine pages into one end-to-end pipeline. You can use it directly as the starting point for a production image-classification project.
```python
#!/usr/bin/env python3
"""
🎓 CAPSTONE: End-to-End Production ML Pipeline

Combines ALL techniques from Pages 1-9:
- Page 1: TensorFlow basics, tensors
- Page 2: Keras compile/fit/callbacks
- Page 3: CNN, augmentation, transfer learning
- Page 4: tf.data pipeline, mixed precision, cache
- Page 5: (NLP concepts referenced)
- Page 6: (Transformer concepts referenced)
- Page 7: Custom training concepts, gradient clipping
- Page 8: (GAN concepts referenced)
- Page 9: SavedModel, TFLite, Docker deployment
"""
import os
import time

import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

# ───────────────────────────────────────────────────
# CONFIGURATION
# ───────────────────────────────────────────────────
DATA_DIR = "data/train"
IMG_SIZE = (224, 224)
BATCH_SIZE = 32
EPOCHS_PHASE1 = 15
EPOCHS_PHASE2 = 10
LR_PHASE1 = 1e-3
LR_PHASE2 = 1e-5
MODEL_DIR = "saved_model/capstone"
TFLITE_PATH = "capstone.tflite"

# ───────────────────────────────────────────────────
# STEP 1: DATA PIPELINE (Page 4)
# ───────────────────────────────────────────────────
print("📂 Step 1: Loading data...")
train_ds = keras.utils.image_dataset_from_directory(
    DATA_DIR, image_size=IMG_SIZE, batch_size=BATCH_SIZE,
    validation_split=0.2, subset="training", seed=42, label_mode="int")
val_ds = keras.utils.image_dataset_from_directory(
    DATA_DIR, image_size=IMG_SIZE, batch_size=BATCH_SIZE,
    validation_split=0.2, subset="validation", seed=42, label_mode="int")

# Capture class names BEFORE cache()/prefetch(): the transformed
# dataset no longer carries the class_names attribute.
class_names = train_ds.class_names
NUM_CLASSES = len(class_names)
print(f"   Classes: {class_names} ({NUM_CLASSES})")

# Optimize pipeline (Page 4: cache + prefetch)
AUTOTUNE = tf.data.AUTOTUNE
train_ds = train_ds.cache().prefetch(buffer_size=AUTOTUNE)
val_ds = val_ds.cache().prefetch(buffer_size=AUTOTUNE)

# ───────────────────────────────────────────────────
# STEP 2: MODEL (Pages 3 + 4)
# ───────────────────────────────────────────────────
print("🧠 Step 2: Building model...")

# Mixed precision (Page 4)
tf.keras.mixed_precision.set_global_policy('mixed_float16')

# Data augmentation (Page 3)
augmentation = keras.Sequential([
    layers.RandomFlip("horizontal"),
    layers.RandomRotation(0.15),
    layers.RandomZoom(0.1),
    layers.RandomContrast(0.1),
    layers.RandomBrightness(0.1),
], name="augmentation")

# Transfer learning backbone (Page 3).
# Note: EfficientNet includes its own input rescaling, so we feed it
# raw [0, 255] pixels; do NOT add an extra Rescaling(1./255) layer.
base_model = keras.applications.EfficientNetB0(
    input_shape=(*IMG_SIZE, 3),
    include_top=False,
    weights="imagenet"
)
base_model.trainable = False  # Phase 1: freeze backbone

# Full model
model = keras.Sequential([
    augmentation,
    base_model,
    layers.GlobalAveragePooling2D(),
    layers.BatchNormalization(),
    layers.Dropout(0.3),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.2),
    # Keep the final layer in float32 so softmax stays numerically
    # stable under mixed precision.
    layers.Dense(NUM_CLASSES, activation="softmax", dtype="float32")
], name="capstone_classifier")

model.build((None, *IMG_SIZE, 3))  # build so summary() can run
model.summary()
print(f"   Total params: {model.count_params():,}")

# ───────────────────────────────────────────────────
# STEP 3: TRAINING – Phase 1 (Pages 2 + 7)
# ───────────────────────────────────────────────────
print("\n🏋️ Step 3a: Phase 1 – Train head (backbone frozen)...")
model.compile(
    optimizer=keras.optimizers.Adam(LR_PHASE1, clipnorm=1.0),  # gradient clipping (Page 7)
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
    jit_compile=True  # XLA for speed (Page 4)
)
callbacks_p1 = [
    keras.callbacks.EarlyStopping(
        monitor="val_loss", patience=5, restore_best_weights=True),
    keras.callbacks.ReduceLROnPlateau(
        monitor="val_loss", factor=0.5, patience=3),
    keras.callbacks.TensorBoard(log_dir="logs/phase1"),
]
start = time.time()
history_p1 = model.fit(
    train_ds, validation_data=val_ds,
    epochs=EPOCHS_PHASE1, callbacks=callbacks_p1)
p1_time = time.time() - start
print(f"   Phase 1 done in {p1_time:.0f}s")

# ───────────────────────────────────────────────────
# STEP 3b: Phase 2 – Fine-tune (Page 3)
# ───────────────────────────────────────────────────
print("\n🔧 Step 3b: Phase 2 – Fine-tune top backbone layers...")
base_model.trainable = True
for layer in base_model.layers[:-20]:  # keep all but the top 20 frozen
    layer.trainable = False
n_trainable = sum(1 for l in base_model.layers if l.trainable)
print(f"   Unfroze the top {n_trainable} backbone layers for fine-tuning")

model.compile(
    optimizer=keras.optimizers.Adam(LR_PHASE2, clipnorm=1.0),  # much lower LR
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
    jit_compile=True
)
callbacks_p2 = [
    keras.callbacks.EarlyStopping(
        monitor="val_accuracy", patience=3, restore_best_weights=True),
    keras.callbacks.ReduceLROnPlateau(
        monitor="val_loss", factor=0.5, patience=2, min_lr=1e-7),
    keras.callbacks.ModelCheckpoint(
        "best_model.keras", save_best_only=True, monitor="val_accuracy"),
    keras.callbacks.TensorBoard(log_dir="logs/phase2"),
]
start = time.time()
history_p2 = model.fit(
    train_ds, validation_data=val_ds,
    epochs=EPOCHS_PHASE2, callbacks=callbacks_p2)
p2_time = time.time() - start
print(f"   Phase 2 done in {p2_time:.0f}s")

# ───────────────────────────────────────────────────
# STEP 4: EVALUATION
# ───────────────────────────────────────────────────
print("\n📊 Step 4: Evaluation...")
val_loss, val_acc = model.evaluate(val_ds, verbose=0)
print(f"   Val Loss:     {val_loss:.4f}")
print(f"   Val Accuracy: {val_acc:.1%}")

# Per-class accuracy
y_true, y_pred = [], []
for images, labels in val_ds:
    preds = model.predict(images, verbose=0)
    y_true.extend(labels.numpy())
    y_pred.extend(np.argmax(preds, axis=1))
y_true, y_pred = np.array(y_true), np.array(y_pred)

print("\n   Per-class accuracy:")
for i, name in enumerate(class_names):
    mask = y_true == i
    if mask.sum() > 0:
        acc = (y_pred[mask] == i).mean()
        print(f"     {name:15s}: {acc:.1%} ({mask.sum()} samples)")

# ───────────────────────────────────────────────────
# STEP 5: EXPORT & DEPLOY (Page 9)
# ───────────────────────────────────────────────────
print("\n🚀 Step 5: Export & Deploy...")

# 5a. SavedModel (for TF Serving)
os.makedirs(MODEL_DIR, exist_ok=True)
model.save(f"{MODEL_DIR}/1")  # Keras 2 / TF 2.x; on Keras 3 use model.export(...)
sm_size = sum(
    os.path.getsize(os.path.join(dp, f))
    for dp, dn, filenames in os.walk(f"{MODEL_DIR}/1")
    for f in filenames) / (1024 * 1024)
print(f"   SavedModel: {sm_size:.1f} MB → {MODEL_DIR}/1/")

# 5b. TFLite (for mobile – dynamic range quantization)
converter = tf.lite.TFLiteConverter.from_saved_model(f"{MODEL_DIR}/1")
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()
with open(TFLITE_PATH, "wb") as f:
    f.write(tflite_model)
tflite_size = len(tflite_model) / (1024 * 1024)
print(f"   TFLite:      {tflite_size:.1f} MB → {TFLITE_PATH}")
print(f"   Compression: {sm_size / tflite_size:.1f}×")

# 5c. Verify TFLite model input/output
interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()
input_d = interpreter.get_input_details()
output_d = interpreter.get_output_details()
print(f"   TFLite input:  {input_d[0]['shape']} {input_d[0]['dtype']}")
print(f"   TFLite output: {output_d[0]['shape']} {output_d[0]['dtype']}")

# ───────────────────────────────────────────────────
# FINAL REPORT
# ───────────────────────────────────────────────────
print(f"""
{'=' * 60}
🎉 CAPSTONE PROJECT COMPLETE!
{'=' * 60}
📊 Data:       {NUM_CLASSES} classes
🧠 Model:      EfficientNetB0 + custom head
⚡ Training:   Phase 1 ({p1_time:.0f}s) + Phase 2 ({p2_time:.0f}s)
🎯 Accuracy:   {val_acc:.1%}
💾 SavedModel: {sm_size:.1f} MB → TF Serving ready
📱 TFLite:     {tflite_size:.1f} MB → Mobile ready
🐳 Docker:     docker run -p 8501:8501 ...
{'=' * 60}
""")
```
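Step 4 of the capstone script prints per-class accuracy, while the table of contents also promises a confusion matrix. As a sketch of how one could be built from the same `y_true`/`y_pred` arrays, here is a minimal plain-NumPy version (the tiny example arrays below are invented purely for illustration):

```python
import numpy as np

def confusion_matrix(y_true, y_pred, num_classes):
    """Rows = true class, columns = predicted class."""
    cm = np.zeros((num_classes, num_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    return cm

# Tiny illustrative example: 3 classes, 6 samples
y_true = np.array([0, 0, 1, 1, 2, 2])
y_pred = np.array([0, 1, 1, 1, 2, 0])
cm = confusion_matrix(y_true, y_pred, num_classes=3)
print(cm)
# [[1 1 0]
#  [0 2 0]
#  [1 0 1]]

# The diagonal counts correct predictions; per-class accuracy is
# the diagonal divided by each row sum:
per_class_acc = np.diag(cm) / cm.sum(axis=1)
print(per_class_acc)  # [0.5 1.  0.5]
```

Off-diagonal cells show *which* classes get confused with each other, which per-class accuracy alone cannot tell you.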
🚀 This Is Your Production Template!
The script above combines all best practices from the previous 9 pages:
• Page 4: cache + prefetch + mixed precision
• Page 3: augmentation + two-phase transfer learning
• Page 2: EarlyStopping + ReduceLROnPlateau + ModelCheckpoint + TensorBoard
• Page 7: gradient clipping + XLA compilation
• Page 9: SavedModel + TFLite quantization
Change DATA_DIR to your image folder → run → production-ready classifier. 🎉
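Once the script has run and exported the model, classifying a single image in Python takes only a few lines. Here is a sketch of a hypothetical `classify_image` helper (the helper name is ours; the default `model_path` and image size mirror the capstone script's settings, and it assumes TF 2.x, where `keras.models.load_model` can read a SavedModel directory; the imports live inside the function so the sketch stands alone):

```python
def classify_image(image_path, model_path="saved_model/capstone/1",
                   img_size=(224, 224)):
    """Load the exported SavedModel and classify one image.

    Returns (predicted_class_index, confidence). Adjust model_path
    and img_size to match your own export.
    """
    # Imported lazily so the helper can be defined without TensorFlow present.
    import numpy as np
    from tensorflow import keras

    # Load and preprocess the image into a (1, H, W, 3) float batch.
    # EfficientNet expects raw [0, 255] pixels, so no rescaling here.
    img = keras.utils.load_img(image_path, target_size=img_size)
    batch = np.expand_dims(keras.utils.img_to_array(img), axis=0)

    # Reload the trained model and predict.
    # (Under Keras 3, reload the best_model.keras checkpoint instead.)
    model = keras.models.load_model(model_path)
    probs = model.predict(batch, verbose=0)[0]
    return int(np.argmax(probs)), float(np.max(probs))

# Usage (after the capstone script has been run):
# class_id, confidence = classify_image("some_photo.jpg")
```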
🐳 Docker Deployment Script
```bash
#!/bin/bash
# ───────────────────────────────────────
# 🐳 DOCKER DEPLOYMENT – Production Ready
# ───────────────────────────────────────

# 1. Create Dockerfile
cat > Dockerfile <<EOF
FROM tensorflow/serving

# Copy model into container
COPY saved_model/capstone /models/capstone

# Set model name
ENV MODEL_NAME=capstone

# Expose REST and gRPC ports
EXPOSE 8501 8500
EOF

# 2. Build image
docker build -t capstone-ml-service .

# 3. Run container
docker run -d --name capstone \
  -p 8501:8501 \
  -p 8500:8500 \
  capstone-ml-service

# 4. Test REST API
curl -s http://localhost:8501/v1/models/capstone | python3 -m json.tool
# {"model_version_status": [{"version": "1", "state": "AVAILABLE"}]}

# 5. Send prediction request
python3 -c "
import requests, numpy as np
img = np.random.rand(1, 224, 224, 3).tolist()
r = requests.post('http://localhost:8501/v1/models/capstone:predict',
                  json={'instances': img})
print('Prediction:', np.argmax(r.json()['predictions'][0]))
"

# 6. Push to registry (production)
# docker tag capstone-ml-service gcr.io/my-project/capstone:v1
# docker push gcr.io/my-project/capstone:v1
# → Deploy to Google Cloud Run, Kubernetes, or any cloud!

echo "🎉 Docker deployment complete! API running on port 8501"
```
📱 TFLite Android Integration (Preview)
```java
// build.gradle: implementation 'org.tensorflow:tensorflow-lite:2.14.0'

// Load TFLite model
Interpreter interpreter = new Interpreter(loadModelFile("capstone.tflite"));

// Prepare input (224×224 RGB float image)
float[][][][] input = new float[1][224][224][3];
// ... fill with normalized pixel values

// Run inference
float[][] output = new float[1][NUM_CLASSES];
interpreter.run(input, output);

// Get predicted class
int predictedClass = argMax(output[0]);
Log.d("ML", "Predicted: " + classNames[predictedClass]);

// Typical latency on a modern phone: 20-50 ms per inference!
// Works OFFLINE – no internet needed!
```
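For parity with the Java preview above, here is a hedged Python sketch of the same TFLite inference cycle (the `tflite_predict` name is ours; the default file name matches the capstone's `capstone.tflite`, and TensorFlow is imported inside the function so the sketch is self-contained):

```python
def tflite_predict(tflite_path="capstone.tflite", image=None):
    """Run one inference with the TFLite interpreter.

    `image` should be a float32 array matching the model input shape
    (1, 224, 224, 3); if None, a random batch is used as a smoke test.
    Returns the predicted class index.
    """
    # Imported lazily; only needed when the function is actually called.
    import numpy as np
    import tensorflow as tf

    interpreter = tf.lite.Interpreter(model_path=tflite_path)
    interpreter.allocate_tensors()
    inp = interpreter.get_input_details()[0]
    out = interpreter.get_output_details()[0]

    if image is None:
        image = np.random.rand(*inp["shape"]).astype(np.float32)

    # Same set-input -> invoke -> get-output cycle as interpreter.run() in Java.
    interpreter.set_tensor(inp["index"], image)
    interpreter.invoke()
    probs = interpreter.get_tensor(out["index"])[0]
    return int(np.argmax(probs))
```

This is handy for verifying on your desktop that the quantized model still predicts sensibly before shipping it to Android.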
🎓 Relevant Certifications
| Certification | Provider | Level | Coverage from This Series |
|---|---|---|---|
| TensorFlow Developer Certificate | Google | Intermediate | Pages 1-6 (90%+ coverage!) |
| Google Cloud Professional ML Engineer | Google Cloud | Advanced | Pages 4, 9, 10 + cloud infra |
| AWS Machine Learning Specialty | Amazon | Advanced | Same concepts, different tools |
| Deep Learning Specialization | Coursera/DeepLearning.AI | Intermediate | Theory in the NN series + this TF series |
🎓 TensorFlow Developer Certificate: After completing this series, you have 90%+ of the knowledge needed for the TensorFlow Developer Certificate exam from Google! The exam tests your ability to build and deploy models with TensorFlow Keras – exactly what we covered in Pages 1-6. Cost: $100. Duration: 5 hours. Highly recommended for your CV.
7. Roadmap: What's Next? – After These 10 Pages
| Level | Topic | What It Is | Tools |
|---|---|---|---|
| 🟢 Intermediate | TFX Pipeline | End-to-end ML pipeline: data validation → transform → train → evaluate → deploy → monitor. Standard at Google. | TFX, Apache Beam, ML Metadata |
| 🟢 Intermediate | MLOps | DevOps for ML: CI/CD for models, experiment tracking, reproducibility, automatic retraining. | MLflow, Vertex AI, Kubeflow, W&B |
| 🟢 Intermediate | Object Detection | Detect and localize objects in images. YOLO, SSD, EfficientDet. | TF Object Detection API, YOLO |
| 🟡 Advanced | Semantic Segmentation | Per-pixel classification: every pixel gets a class. U-Net, DeepLab. | TF, segmentation_models |
| 🟡 Advanced | Reinforcement Learning | Agents learn from rewards. DQN, PPO, A3C. | TF-Agents, Stable Baselines 3 |
| 🟡 Advanced | JAX & Flax | Google's next-gen framework: composable transformations (grad, jit, vmap, pmap). Faster than TF for research. | JAX, Flax, Optax |
| 🟡 Advanced | Diffusion Models | State-of-the-art image generation. DALL-E, Stable Diffusion, Midjourney. | KerasCV, Diffusers |
| 🔴 Expert | Model Optimization | Pruning (remove small weights), distillation (large → small model), neural architecture search. | TF Model Optimization Toolkit |
| 🔴 Expert | Edge AI & Custom Hardware | Deploy to Coral (Google Edge TPU), NVIDIA Jetson, OpenVINO. | TFLite, Coral, ONNX Runtime |
| 🔴 Expert | Large Language Models | Build and fine-tune LLMs. LoRA, QLoRA, RLHF. | Hugging Face, PEFT, TRL |
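The JAX row above name-drops `grad`, `jit`, and `vmap` without showing them. As a taste, here is a tiny sketch built on a made-up toy function (it requires `pip install jax`, which is why the imports live inside the function):

```python
def jax_demo():
    """Toy tour of JAX's composable transformations."""
    # Imported here so the sketch can be read without JAX installed.
    import jax
    import jax.numpy as jnp

    def f(x):                        # scalar loss: sum of squares
        return jnp.sum(x ** 2)

    grad_f = jax.grad(f)             # gradient: d/dx sum(x^2) = 2x
    fast_grad = jax.jit(grad_f)      # the same function, XLA-compiled
    batched_grad = jax.vmap(grad_f)  # mapped over a leading batch axis

    x = jnp.array([1.0, 2.0, 3.0])
    batch = jnp.stack([x, 2 * x])
    return grad_f(x), fast_grad(x), batched_grad(batch)
```

The point is that the transformations compose: `jit(vmap(grad(f)))` is still just a function, which is what makes JAX attractive for research.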
8. Career Paths in ML/AI
| Role | Focus | Skills from This Series | Additional Skills Needed |
|---|---|---|---|
| ML Engineer | Build & deploy ML systems | P1-P9: everything! Especially P4 (pipeline), P9 (deploy) | MLOps, cloud (GCP/AWS), CI/CD |
| Data Scientist | Data analysis + build models | P1-P6: modeling + NLP + CV | Statistics, SQL, pandas, visualization |
| Research Engineer | Implement & improve algorithms | P7-P8: custom training, GAN, advanced | JAX, paper implementation, math |
| Computer Vision Engineer | Image/video processing | P3: CNN, augmentation, transfer learning | Object detection, segmentation, 3D |
| NLP Engineer | Text processing systems | P5-P6: LSTM, Transformer, BERT | LLM fine-tuning, RAG, embeddings |
| MLOps Engineer | ML infrastructure & pipelines | P4 (pipeline), P9 (deploy) | Kubernetes, TFX, monitoring, CI/CD |
9. Closing – Congratulations! 🎉🎓
🎉 Congratulations! You've completed the entire Learn TensorFlow series – all 10 Pages!
From your first tensor in Page 1 to Docker deployment in Page 9 and this capstone project in Page 10, you now have a comprehensive understanding of deep learning with TensorFlow. You can:
✅ Build any model: CNN, RNN, LSTM, Transformer, GAN, VAE
✅ Train efficiently: tf.data pipelines, mixed precision, multi-GPU
✅ Do NLP from scratch up to BERT fine-tuning
✅ Deploy to servers (TF Serving), mobile (TFLite), and the browser (TF.js)
✅ Run production ML: versioning, monitoring, Docker
This is not the end – it's just the beginning! Use the roadmap above to keep growing. Keep building, keep learning, and create something extraordinary! 🚀
"The best way to predict the future is to create it." – often attributed to Abraham Lincoln