
🤖 Deep Learning

📘 Introduction

Deep Learning is a subfield of Machine Learning that uses artificial neural networks to model complex patterns in data.
It mimics how the human brain processes information through layers of interconnected neurons.

Deep learning powers technologies like image recognition, speech and voice assistants, machine translation, and self-driving cars.

Popular frameworks: TensorFlow, Keras, and PyTorch.

🧠 Neural Networks

Concept

A Neural Network consists of layers of interconnected nodes (neurons).
Each neuron receives inputs, applies weights and an activation function, and passes the output to the next layer.

Basic Architecture

  1. Input Layer – Takes input features
  2. Hidden Layers – Perform computations and transformations
  3. Output Layer – Produces final prediction

Forward propagation: passes input data through the network, layer by layer, to produce a prediction.
Backpropagation: propagates the prediction error backward through the network and adjusts the weights via gradient descent to minimize it.
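
As a minimal sketch of both steps, here is a single sigmoid neuron trained for one gradient-descent step on a squared-error loss (all values are made up for illustration):

import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

x = np.array([0.5, -1.2, 3.0])   # input features
w = np.array([0.4, 0.7, -0.2])   # weights
b, lr, y_true = 0.1, 0.01, 1.0   # bias, learning rate, target

# Forward propagation: weighted sum, then activation
y_pred = sigmoid(np.dot(w, x) + b)

# Backpropagation: chain rule through the squared error and the sigmoid
grad_z = (y_pred - y_true) * y_pred * (1 - y_pred)
w -= lr * grad_z * x   # gradient-descent update for the weights
b -= lr * grad_z       # and for the bias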

Example: Simple Neural Network (TensorFlow)

import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

# Simple feedforward neural network
model = Sequential([
    Dense(16, activation='relu', input_shape=(10,)),
    Dense(8, activation='relu'),
    Dense(1, activation='sigmoid')
])

# Compile model
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

# Summary
model.summary()
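
To actually train this model you would call model.fit; a quick sketch continuing the block above, using random placeholder data whose shapes match input_shape=(10,):

import numpy as np

X = np.random.rand(100, 10)                 # 100 samples, 10 features
y = np.random.randint(0, 2, size=(100, 1))  # binary labels

model.fit(X, y, epochs=5, batch_size=16)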

Example: Simple Neural Network (PyTorch)

import torch
import torch.nn as nn

class SimpleNN(nn.Module):
    def __init__(self):
        super(SimpleNN, self).__init__()
        self.fc1 = nn.Linear(10, 16)
        self.fc2 = nn.Linear(16, 8)
        self.fc3 = nn.Linear(8, 1)
        self.relu = nn.ReLU()
        self.sigmoid = nn.Sigmoid()
        
    def forward(self, x):
        x = self.relu(self.fc1(x))
        x = self.relu(self.fc2(x))
        x = self.sigmoid(self.fc3(x))
        return x

model = SimpleNN()
print(model)
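
PyTorch has no built-in fit method, so training is written as an explicit loop; a minimal sketch with synthetic data, continuing the model above:

X = torch.rand(100, 10)                    # 100 samples, 10 features
y = torch.randint(0, 2, (100, 1)).float()  # binary labels

criterion = nn.BCELoss()                   # pairs with the sigmoid output
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)

for epoch in range(5):
    optimizer.zero_grad()          # clear gradients from the previous step
    loss = criterion(model(X), y)  # forward pass + loss
    loss.backward()                # backpropagation
    optimizer.step()               # gradient-descent update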

🖼️ Convolutional Neural Networks (CNNs)

Concept

CNNs are specialized for processing image data.
They use convolutional layers to automatically detect spatial features like edges, shapes, and textures.
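
To make the convolution operation concrete, here is a hand-rolled 2D convolution in NumPy that applies a vertical-edge filter, a toy version of what a Conv2D layer learns (the image and filter values are illustrative):

import numpy as np

image = np.random.rand(6, 6)     # toy grayscale image
kernel = np.array([[1, 0, -1],
                   [1, 0, -1],
                   [1, 0, -1]])  # vertical-edge detector

h, w = image.shape
kh, kw = kernel.shape
out = np.zeros((h - kh + 1, w - kw + 1))
for i in range(out.shape[0]):
    for j in range(out.shape[1]):
        # multiply each patch element-wise with the kernel and sum
        out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)

print(out.shape)  # (4, 4) feature map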

Key Components

  1. Convolutional Layers – Apply learned filters to detect local patterns
  2. Pooling Layers – Downsample feature maps to reduce computation
  3. Flatten Layer – Converts 2D feature maps into a 1D vector
  4. Dense Layers – Combine the extracted features into a final prediction

Example (TensorFlow CNN)

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense

model = Sequential([
    Conv2D(32, (3,3), activation='relu', input_shape=(64, 64, 3)),
    MaxPooling2D(pool_size=(2,2)),
    Conv2D(64, (3,3), activation='relu'),
    MaxPooling2D(pool_size=(2,2)),
    Flatten(),
    Dense(128, activation='relu'),
    Dense(10, activation='softmax')
])

model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
model.summary()

Use Cases: image classification, object detection, face recognition, and medical image analysis.

🕒 Recurrent Neural Networks (RNNs)

Concept

RNNs are designed for sequential data, where previous inputs influence future predictions.
They maintain a memory of past information through recurrent connections.

However, standard RNNs struggle with long-term dependencies, so variants like LSTM (Long Short-Term Memory) and GRU (Gated Recurrent Unit) are often used.
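
The "memory" is simply a hidden state updated at every time step; a minimal NumPy sketch of the vanilla RNN recurrence h_t = tanh(W_x·x_t + W_h·h_{t-1} + b), with illustrative sizes:

import numpy as np

seq = np.random.rand(5, 3)   # 5 time steps, 3 features each
W_x = np.random.rand(4, 3)   # input-to-hidden weights
W_h = np.random.rand(4, 4)   # hidden-to-hidden (recurrent) weights
b = np.zeros(4)
h = np.zeros(4)              # initial hidden state

for x_t in seq:
    # the new state depends on the current input AND the previous state
    h = np.tanh(W_x @ x_t + W_h @ h + b)

print(h)  # the final state summarizes the whole sequence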

Example: LSTM for Text or Time-Series

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense, Embedding

model = Sequential([
    Embedding(input_dim=5000, output_dim=64),
    LSTM(100),
    Dense(1, activation='sigmoid')
])

model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
model.summary()
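
This model expects batches of equal-length integer token sequences; a sketch with synthetic data (real text would first be tokenized and padded to a fixed length):

import numpy as np

X = np.random.randint(1, 5000, size=(32, 20))  # 32 sequences of 20 token ids
y = np.random.randint(0, 2, size=(32, 1))      # binary labels

model.fit(X, y, epochs=3)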

Use Cases: sentiment analysis and text generation, speech recognition, and time-series forecasting.

🔁 Transfer Learning

Concept

Transfer Learning allows you to use a pre-trained model (trained on a large dataset) and fine-tune it for a new task.
It saves time and computational power while improving performance on limited data.

How It Works

  1. Load a model pre-trained on a large dataset (e.g., ImageNet).
  2. Freeze lower layers (retain learned features).
  3. Replace top layers with new ones for your specific task.
  4. Train only the new layers on your dataset.

Example: Transfer Learning with TensorFlow (Using VGG16)

from tensorflow.keras.applications import VGG16
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Dense, Flatten

# Load pre-trained model without top layers
base_model = VGG16(weights='imagenet', include_top=False, input_shape=(224, 224, 3))

# Freeze base model layers
for layer in base_model.layers:
    layer.trainable = False

# Add new classification layers
x = Flatten()(base_model.output)
x = Dense(128, activation='relu')(x)
output = Dense(5, activation='softmax')(x)

# Final model
model = Model(inputs=base_model.input, outputs=output)
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
model.summary()
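
Once the new layers have converged, a common refinement is to unfreeze the last few base layers and continue training at a much lower learning rate; a sketch (how many layers to unfreeze is a tuning choice, not a fixed rule):

from tensorflow.keras.optimizers import Adam

# Unfreeze the last few convolutional layers of the base model
for layer in base_model.layers[-4:]:
    layer.trainable = True

# Recompile with a low learning rate so the pre-trained features
# are only gently adjusted during fine-tuning
model.compile(optimizer=Adam(learning_rate=1e-5),
              loss='categorical_crossentropy', metrics=['accuracy'])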

Popular Pre-trained Models: VGG16, ResNet50, InceptionV3, and MobileNetV2 (all available through tensorflow.keras.applications).

Use Cases: building accurate models from limited labeled data, domain-specific image classification, and fine-tuning language models for new tasks.

🧠 Summary

| Concept | Description | Use Case |
|---|---|---|
| Neural Networks | Layers of neurons that learn complex patterns | General prediction tasks |
| CNNs | Extract spatial features from images | Image classification, object detection |
| RNNs / LSTMs | Process sequential data with memory | Text, speech, time-series analysis |
| Transfer Learning | Fine-tune pre-trained models for new tasks | Efficient model training with limited data |

Deep Learning has revolutionized the field of AI, enabling breakthroughs in computer vision, natural language processing, and predictive modeling.
By mastering CNNs, RNNs, and transfer learning, you can tackle some of the most advanced problems in modern data science.