Generative AI Specialization course

Instructor: SaratahKumar | Language: English

About the course

The Generative AI Specialization Course is an industry-focused, hands-on training designed to help professionals master Generative AI, LLMs, and cloud AI solutions. Covering text, image, and multimodal generation, it includes prompt engineering, RAG, fine-tuning, AI agents, and responsible AI practices.

Learners will gain practical experience with GPT, Claude, Gemini, LLaMA, Titan, Cohere, and Stable Diffusion, using Amazon Bedrock, Azure AI Foundry, Google AI Studio, and Hugging Face. This 90-hour live program features 50+ hands-on labs, 20+ projects, debugging sessions, and peer networking to ensure job-ready expertise in AI development.

Program features


Key Highlights

  • 90 hours of live sessions from industry experts
  • 50+ live hands-on labs
  • 20+ real-world industry projects
  • One-on-one debugging with industry mentors


Who Can Apply for the Course?

  • AI & ML Engineers building generative AI
  • Data Scientists & Researchers exploring LLMs & RAG
  • Developers integrating AI into apps
  • Cloud & DevOps Engineers using AI services
  • AI Enthusiasts & Applied Scientists in generative AI
  • Tech Professionals working with GPT, Claude, LLaMA, Stable Diffusion
  • Startup Founders leveraging AI
  • Product Managers & Leaders exploring AI strategies
  • Anyone looking to master LLMOps & AI tools

Program Curriculum

Module 1: Introduction to Generative AI
  • Introduction to Generative AI
    • Applications and impact of Generative AI
    • Evolution and Architecture of Generative AI
    • How do LLMs work?
  • Different Types of Generative AI
    • Text generation
    • Image generation
    • Audio and speech recognition
    • Multi-modality
  • Foundation Models vs LLMs
    • Embedding vs Image generation vs Text and Code generation
  • How to Improve LLM Results?
    • Prompt Engineering with Context
    • Retrieval Augmented Generation (RAG)
    • Fine-tuned model
    • Trained model
  • Advanced RAG Concepts
    • How RAG relates to GraphRAG, KAG, and CAG
    • Vector databases for Generative AI
  • Model Selection & Ethics
    • Choose the best foundation model for your needs
    • Explainability and Interpretability
Module 2: Prompt Engineering
  • Introduction to Prompt Engineering
    • What is Prompt Engineering?
    • How tokenization works
    • Necessity of Prompt Engineering
  • Prompt Engineering Techniques
    • Basic Prompt Structure
    • Clear and direct instructions
    • Assigning Roles (Role Prompting)
    • Splitting Data from Instructions
    • Formatting Output using prompts
    • Thinking step by step (precognition)
    • Using Examples in prompt (Zero-shot, One-shot, Few-shot prompting)
    • How to Avoid Hallucinations
  • Advanced Prompting
    • Chain-of-Thought prompting
    • ReAct (Reasoning + Acting) patterns
    • Self-consistency prompting
    • Tree of Thoughts
  • Project: Building Complex Prompts
    • Industry Use Cases implementation
    • A/B testing different prompt strategies
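To give a flavor of what this module covers, here is a minimal sketch of few-shot prompt assembly in plain Python. It is model-agnostic; the sentiment-classification instruction and examples are illustrative, not taken from the course materials.

```python
def build_few_shot_prompt(instruction, examples, query):
    """Compose an instruction, labeled examples, and a new input
    into a single few-shot prompt string."""
    parts = [instruction, ""]
    for text, label in examples:
        parts.append(f"Input: {text}\nOutput: {label}\n")
    # End with the unanswered query so the model continues from "Output:"
    parts.append(f"Input: {query}\nOutput:")
    return "\n".join(parts)

prompt = build_few_shot_prompt(
    "Classify the sentiment of each input as Positive or Negative.",
    [("I love this product!", "Positive"),
     ("Terrible support experience.", "Negative")],
    "The delivery was fast and painless.",
)
print(prompt)
```

The same pattern scales from zero-shot (no examples) to few-shot by varying the examples list.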
Module 3: Deep Dive into Generative AI on Cloud
  • Foundation Models Overview
    • Choose from leading Foundation models
      • AI21: Jamba, Jurassic
      • Amazon: Titan Models
      • Anthropic: Claude Models
      • Cohere: Command Models
      • Meta: Llama
      • Mistral AI
      • Stability AI Models
      • OpenAI Models
      • DeepSeek
      • Google Gemini models
      • Hugging Face models
    • Model Hyperparameter Configurations
  • AWS for Generative AI
    • Getting started with Amazon Bedrock
      • Experiment with Foundation models for different tasks
      • Chat/text playground
      • Image playground
    • Privately customize FMs with your data
    • Amazon Bedrock Converse API
    • Amazon Q – Generative AI Assistant
      • Amazon Q Business
      • Amazon Q Developer
    • Amazon SageMaker for Generative AI
      • SageMaker JumpStart pre-trained models
  • Google Cloud AI
    • Google AI Studio and Gemini API
      • Google AI Studio Introduction
      • Gemini API Overview
      • Google AI Studio playground
      • AI Playground: Chat, Audio, Docs & Images
      • Real-Time Streaming Audio & Video
    • Vertex AI Studio
      • Vertex AI Studio Getting Started
      • Real-time Media Studio & Streaming
      • Prompt Management Gallery & Optimization
      • Model Tuning & Customization
    • Agent Builder
      • Agent Garden
      • Agent Engine
      • RAG Engine
      • Vertex AI Search
      • Vector Search
    • Vertex AI Model Garden Foundation Models
  • Azure for Generative AI
    • Azure AI Foundry
      • Getting started with Azure AI Foundry
      • Understanding RBAC Roles in Azure AI Foundry
      • Understanding Azure AI Foundry resources
        • AI project
        • AI hub
        • AI Services
        • Azure OpenAI Service
      • Azure AI Model Catalog – Discover and Deploy AI Models
      • AI Playground – Experiment, Customize & Build
      • Azure AI Agent – Secure & Scalable Enterprise Automation
      • Fine-Tune AI Models with Your Data
      • Prompt Flow – Build & Refine AI Workflows
      • Tracing & Evaluation – Debug and Optimize AI Performance
      • AI Safety & Security – Build with Confidence
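The model playgrounds above all expose hyperparameters such as temperature and top-p. As a rough, provider-independent illustration (plain Python, no SDK), temperature rescales the model's logits before sampling: low values sharpen the distribution toward the top token, high values flatten it.

```python
import math

def softmax_with_temperature(logits, temperature=1.0):
    """Convert raw logits to sampling probabilities.
    Lower temperature sharpens the distribution; higher flattens it."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]  # illustrative token scores
sharp = softmax_with_temperature(logits, temperature=0.2)
flat = softmax_with_temperature(logits, temperature=2.0)
print(sharp, flat)
```

Real APIs apply this (plus top-p/top-k truncation) inside the sampling loop; the knob you set in a playground maps onto this rescaling.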
Module 4: Text Generation on AWS, Azure & GCP
  • Amazon Bedrock Text Generation
    • Project: Text Generation
      • Leverage Amazon Bedrock to generate high-quality and contextually relevant text
    • Project: Bedrock model for code generation
      • Using Claude models for code generation
    • Project: Text Summarization
      • Utilize Titan and Claude models to distill complex information into concise summaries
    • Project: Question Answering (QnA)
      • Build intelligent QnA systems with the capabilities of the Titan model
    • Project: Entity Extraction
      • Master advanced techniques for extracting critical entities from text
  • Azure AI Foundry Text Generation
    • Azure Authentication & Environment Setup
    • Understanding AIProjectClient
    • Azure AI Foundry Quick Start Guide
    • Project: Chat Completions with AIProjectClient
    • Project: Getting started with Text Embeddings models
    • AI Foundry Prompt Template
    • Phi-4 Model with AIProjectClient
    • Project: Building Advanced Chat Systems with Phi-4
  • Google AI Studio Text Generation
    • Text Generation from Text-Only Inputs
    • Generate Content from Combined Text & Images
    • Real-Time Text Streaming
    • Project: AI-Powered Real-Time Food Recommendation System
    • Handling Long Contexts with Gemini
    • Executing Code with Gemini Basics
    • Producing Structured Responses with the Gemini API
    • Gemini 2.0 Rapid Reasoning & Multi-Turn Dialogues
    • Live Multimodal API Implementation
    • Function Calling with the Gemini API
    • Audio capabilities with the Gemini API
Module 5: Advanced RAG & Document Processing
  • Managed RAG on Cloud Platforms
    • Project: Amazon Bedrock Knowledge Bases and RAG
      • Managed RAG: Retrieve and generate using managed RAG services
      • LangChain RAG: Implement RAG workflows using LangChain
    • Project: Azure AI Foundry RAG
      • Azure AI Search for RAG-Powered Applications
      • Embeddings Model for RAG-Based Architecture
      • Embedding, Storing & Chatting with Docs in Azure AI Search
      • Bing Grounding: Enhance AI with Web Search Context
    • Project: Vertex AI & Google AI Studio for RAG Solutions
      • RAG architecture using Vertex AI
      • Google Search Grounding: Enrich AI with Real-Time Context
      • Enhance AI with Google Search Suggestions
      • Vertex AI RAG Engine
  • LangChain Deep Dive
    • LangChain Core Concepts
      • Architecture Overview
      • Components Deep Dive
    • Document Processing with LangChain
      • Document Loaders
      • Text Splitters
    • Advanced LangChain Patterns
      • Chains
      • Retrieval Chains
      • Callbacks & Streaming
    • Project: Azure OpenAI Chat on Private Data with LangChain
  • LlamaIndex Deep Dive
    • LlamaIndex Fundamentals
      • Core Concepts
    • Data Connectors & Loaders
      • Built-in Loaders
      • Custom data connectors
    • Query Engines
      • Types of Query Engines
      • Advanced Querying
    • Project: Azure OpenAI Q&A with Semantic Search Using LlamaIndex
    • LangChain vs LlamaIndex
  • Vector Databases In-Depth
    • Vector Database Fundamentals
    • Embedding Concepts
      • Different embedding models (OpenAI, Cohere, Sentence Transformers)
      • Embedding dimensions and trade-offs
      • Multimodal embeddings
    • Similarity Search
      • Cosine similarity, Euclidean distance
      • Dot product
      • Maximum inner product search (MIPS)
    • Different Vector Databases
      • FAISS (Facebook AI Similarity Search)
      • Pinecone
      • Milvus
      • Qdrant
      • Chroma
      • Weaviate
    • Advanced Vector DB Operations
      • Metadata Filtering
      • Hybrid Search
      • Performance Optimization
    • Chunking Strategies
    • Agentic RAG
      • Query Planning
      • Self-Reflection
      • Adaptive Retrieval
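At the heart of every vector database in this module is a similarity metric over embeddings. A minimal sketch of cosine-similarity ranking in plain Python (the toy 3-dimensional vectors stand in for real embedding output, which typically has hundreds or thousands of dimensions):

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Rank toy "document" vectors against a query vector.
query = [0.1, 0.9, 0.2]
docs = {"doc_a": [0.1, 0.8, 0.3], "doc_b": [0.9, 0.1, 0.0]}
ranked = sorted(docs, key=lambda d: cosine_similarity(query, docs[d]),
                reverse=True)
print(ranked)
```

Production systems like FAISS or Pinecone replace this brute-force scan with approximate nearest-neighbor indexes, but the metric being approximated is the same.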
Module 6: Image, Video Generation and Multimodal Models
  • Image Generation
    • Bedrock Titan Image Generator
      • Generate high-quality images using Bedrock Titan
    • Google AI Studio
      • Imagen 3 in Gemini API
      • Imagen Model Parameters Overview
    • Azure AI Foundry
      • Azure AI Foundry Image Generation Capabilities
      • Project: Getting Started with OpenAI DALL·E 3 for Image Generation
  • Video Generation
    • Project: Bedrock Amazon Nova
      • Create detailed videos with the power of Amazon Nova Foundation Models
    • Google's Veo 2 and Veo 3
      • Project: Getting started with Google's Veo 2 and Veo 3
  • Multimodal Processing
    • Multimodal Embeddings
      • Project: Bedrock Titan Multimodal Embeddings
      • Embed Images with Azure AI Foundry
    • Vision-Language Models
      • Handling Image & Base64 Inputs with Gemini
      • Video & Text Prompting: Transcription & Visual Descriptions
      • GPT-4V for image understanding
      • LLaVA for visual question answering
  • Advanced Multimodal Applications
    • Document Understanding with Vision
      • Processing PDFs with images and tables
    • Multimodal RAG
      • Combining text and image retrieval
      • Multimodal embeddings in vector databases
Module 7: Model Customization & Fine-Tuning
  • Fine-Tuning Fundamentals
    • When to Fine-tune
      • Prompt engineering vs RAG vs Fine-tuning
      • Cost-benefit analysis
      • Use case evaluation
  • Cloud Platform Fine-Tuning
    • Fine-Tuning Models Using GCP Vertex AI
    • Model Customization Techniques in Amazon Bedrock
      • Data preparation
      • Customizing hyperparameters
      • Fine-Tuning & Retrieve Custom Model
      • Invoke Custom Model
    • Project: Custom Model Fine-Tuning with Microsoft Foundry
      • Data Preparation and Fine-Tuning GPT Models on Azure AI Foundry
      • Deploying and Invoking Models in Production
  • Parameter-Efficient Fine-Tuning (PEFT)
    • LoRA (Low-Rank Adaptation)
      • QLoRA for quantized models
      • Choosing rank and alpha
    • PEFT techniques
      • Adapter layers
      • Prefix tuning
      • Prompt tuning
  • Advanced Fine-Tuning Techniques (NEW)
    • Instruction Tuning
    • RLHF (Reinforcement Learning from Human Feedback)
    • DPO (Direct Preference Optimization)
  • Model Quantization
    • Model optimization techniques
    • Quantization Techniques
      • GPTQ (4-bit quantization)
      • AWQ (Activation-aware Weight Quantization)
      • GGUF format
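The appeal of LoRA covered in this module is easy to see in a back-of-the-envelope parameter count: instead of updating a full d_out x d_in weight matrix, LoRA trains two low-rank factors B (d_out x r) and A (r x d_in). The hidden size below is illustrative.

```python
def lora_param_count(d_in, d_out, rank):
    """Trainable parameters for one LoRA adapter: the factors
    B (d_out x rank) and A (rank x d_in)."""
    return rank * (d_in + d_out)

d = 4096             # hidden size of a typical 7B-class model (illustrative)
full = d * d         # full fine-tuning of one square weight matrix
lora = lora_param_count(d, d, rank=8)
print(full, lora, full // lora)
```

With rank 8 the adapter is a tiny fraction of the full matrix, which is why LoRA (and its quantized variant QLoRA) makes fine-tuning feasible on modest hardware.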
Module 8: Agentic AI & Workflows
  • AI Agents Fundamentals
    • Introduction to AI Agents
      • Types of AI Agents
      • The Importance of AI Agents
      • Applications and Use Cases of AI Agents
      • Understanding the workflow of AI agents
      • What are AI agents made of?
  • Amazon Bedrock AI Agents
    • Project: Agents for Amazon Bedrock (AWS)
      • Components of Bedrock Agents
        • Foundation model
        • Instructions
        • Action groups
        • Knowledge bases for AI Agents
        • Guardrails for Amazon Bedrock
      • Getting started with Amazon Bedrock Agents
  • LangChain Agents
    • Project: Building AI Agent Workflows with LangChain
      • Creating an automated essay-writing pipeline
      • Integrating LLMs, prompts, and web search
      • Using LLMChain, ChatPromptTemplate, and StrOutputParser
      • Implementing LangChain Expression Language (LCEL) for task automation
      • Implementing structured data validation with Pydantic
      • Handling real-time information retrieval using Tavily API
  • Project: Building AI Agent Workflows with LangGraph Deep Dive
    • Core Concepts
      • State graphs vs message graphs
      • Nodes and edges
      • State management
      • Conditional routing
    • Graph Construction
      • StateGraph API
      • Adding nodes
      • Adding edges
      • Conditional edges
    • Compilation and Execution
      • Compiling graphs
      • Running graphs
      • Streaming outputs
  • Advanced LangGraph Patterns
    • Iterative AI Agents
      • Iterative AI Agent with LangGraph
      • Loops and cycles
      • Termination conditions
    • Human-in-the-Loop
      • Interrupt points
      • Collecting human feedback
      • Resuming execution
    • Checkpointing
      • Saving agent state
      • Resume from checkpoint
      • Memory persistence
    • Sub-graphs
      • Nested agents
      • Modular design
      • Reusable components
  • Multi-Agent Systems with LangGraph
    • What are Multi-AI Agents?
    • Real-world Applications of Multi-Agent Systems
    • Overview of LangGraph for Multi-Agent Orchestration
    • Benefits of Multi-Agent Architecture
    • Agent Communication
      • Message passing
      • Shared state
      • Coordination patterns
    • Hierarchical Agents
      • Supervisor-worker pattern
      • Task delegation
      • Result aggregation
  • Azure AI Agents
    • Project: Azure AI Agents
      • Building AI Agents with Azure
      • Understanding AIProjectClient for managing AI workflows
      • Managing AI Conversations
      • Enhancing AI Agents with Tools
        • Code Interpreter Tool: Performing calculations and analyzing datasets
        • File Search Tool: Searching documents and extracting insights
        • Bing Grounding Tool: Fetching real-time search results
        • Azure AI Search Tool: Connecting AI Agents to a structured search index
      • Using AI for Data Search and Retrieval
        • Setting up Azure AI Search for indexing documents
        • Creating and managing Vector Stores for AI-driven search
      • AI Search Integration for Real-time Information
      • Deploying AI Agents in Production
      • Advanced AI Agent Customization
  • GCP AI Agents
    • Project: Building and Deploying an Agent with Agent Engine in Vertex AI
      • Function Calling in Gemini
      • Agent Engine in Vertex AI
      • Test your agent locally before deploying
      • Deploy and test your agent on Vertex AI
      • Customize each layer of your agent (model, tools, orchestration)
      • Google Agent Development Kit
    • Project: Google Agent Development Kit
      • Core Agent Categories
        • LLM Agents
        • Workflow Agents
        • Custom Agents
        • Multi-Agent Systems in ADK
      • ADK Tool
        • Function Tools
        • Built-in Tools
        • Third-Party Tools
      • Integrating Model Context Protocol (MCP) with ADK
      • Deploying Your Agent
  • Agent Communication Protocols
    • Project: Agent2Agent (A2A)
      • Benefits of Using A2A
      • Key Design Principles of A2A
      • The A2A Solution
      • A2A and MCP
      • How A2A and MCP Work Together
      • Agent Discovery in A2A
  • Model Context Protocol (MCP)
    • Project: Model Context Protocol (MCP)
      • The Architecture of MCP
        • MCP Servers
        • MCP Clients
        • MCP Hosts
        • Local Data Sources
        • Remote Services
      • MCP Ecosystem and Adoption
  • Production-Ready Agent Framework
    • Project: End-to-End Production Framework for AI Agent deployment
      • Introduction to AI agents and AgentCore Runtime concepts on AWS
      • Developer Layer – Building AI agent logic with models, tools, and decorators
      • Docker & Containerization – AI applications using Docker
      • AWS DevOps tools for AI deployment
      • CI/CD with AWS CodeBuild – Automate Docker builds
      • Amazon ECR – Store and manage container images
      • Runtime Deployment – Deploy AI agents as runtime services and endpoints
      • Monitor, manage, and update deployed agents
      • Implement a complete AI agent deployment pipeline on AWS
  • OpenAI Assistants API
    • Intro to OpenAI Agent Builder API
  • Other Agent frameworks
    • AutoGen
    • CrewAI
    • Semantic Kernel
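Every framework in this module, from Bedrock Agents to LangGraph to ADK, implements some version of the same reason-act-observe loop. A toy sketch of that loop in plain Python; a hand-written "policy" stands in for the LLM so it runs offline, and the tool names and routing logic are illustrative rather than any specific framework's API.

```python
def calculator(expression):
    # Demo-only arithmetic tool; eval is restricted but still not for production.
    return str(eval(expression, {"__builtins__": {}}))

TOOLS = {"calculator": calculator}

def toy_policy(question):
    """Stand-in for an LLM deciding which tool (if any) to call."""
    if any(op in question for op in "+-*/"):
        return ("calculator", question)
    return ("final_answer", "I can only do arithmetic in this sketch.")

def run_agent(question, max_steps=3):
    for _ in range(max_steps):
        action, payload = toy_policy(question)
        if action == "final_answer":
            return payload
        observation = TOOLS[action](payload)  # act, then observe
        return f"The answer is {observation}"
    return "Step limit reached."

print(run_agent("12 * 7"))
```

Real agent frameworks replace `toy_policy` with a model call that emits structured tool invocations, and add memory, guardrails, and multi-step planning around the same skeleton.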
Module 9: Monitoring, Testing & Performance Evaluation
  • Observability & Monitoring
    • Project: Observability in Azure AI Foundry
    • Project: Cloud-Based Model Evaluation using AIProjectClient
    • Observability & Tracing in Azure AI
    • Azure Monitor & Application Insights
    • Best Practices for AI Model Monitoring
  • LLM Evaluation Frameworks
    • Project: LLM Evaluation & Testing with Evidently AI
      • Evaluate and test your LLM use case
      • Create and evaluate an LLM judge
      • Run regression testing for LLM outputs
    • Project: MLflow for LLM Evaluation
      • Model Evaluation in MLflow
      • Heuristic-Based Evaluation Metrics
      • LLM-as-a-Judge Evaluation Metrics
      • Custom LLM Evaluation Metrics
  • Evaluation Metrics
    • AWS Bedrock for LLM and RAG Evaluation
    • BLEU, GLEU, ROUGE, METEOR, and BERTScore for assessing text quality
  • Agent Evaluation
    • Evaluate AI Agents
      • Preparing for Agent Evaluations
      • How Evaluation works with the ADK
  • RAG Evaluation
    • Project: Create automatic RAG evaluation
    • Retrieval-augmented Generation (RAG) evaluation
      • Evaluating Retrieval Systems on cloud
      • RAG Evaluation with MLflow
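Heuristic metrics like ROUGE compare generated text to a reference by n-gram overlap. A deliberately simplified unigram-overlap F1 (not a full ROUGE-1 implementation: no stemming, exact token match only) shows the core idea:

```python
def unigram_f1(reference, candidate):
    """Unigram-overlap F1 between a reference and a candidate text."""
    ref = reference.lower().split()
    cand = candidate.lower().split()
    overlap, remaining = 0, list(ref)
    for tok in cand:                 # clipped multiset intersection
        if tok in remaining:
            overlap += 1
            remaining.remove(tok)
    if overlap == 0:
        return 0.0
    precision = overlap / len(cand)
    recall = overlap / len(ref)
    return 2 * precision * recall / (precision + recall)

score = unigram_f1("the cat sat on the mat", "the cat lay on the mat")
print(round(score, 3))
```

Metrics like BERTScore replace exact token match with embedding similarity, and LLM-as-a-judge approaches drop n-grams entirely, but all are scored and compared in the same evaluation harnesses covered above.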
Module 10: Responsible AI & Ethics
  • Responsible AI
    • What is responsible AI
    • Challenges of responsible AI
    • Amazon services and tools for responsible AI
    • Building AI responsibly at AWS
    • Core dimensions of responsible AI
    • Project: Implementing safeguards in generative AI
      • Amazon Bedrock Guardrails
    • Azure Responsible AI capabilities
    • Google Cloud's approach to responsible AI
  • Enterprise-Ready Features
    • Enterprise-Ready Features for A2A Agents
      • Transport Level Security (TLS)
      • Authentication
      • Authorization
      • Data Privacy and Confidentiality
      • Tracing, Observability, and Monitoring
      • API Management and Governance
Module 11: Production Deployment & APIs
  • Introduction to CI and CD
    • CI/CD in Generative AI operations
    • Steps involved in the CI/CD implementation in Gen AI and workflow
    • Understanding CI/CD tools like GitHub Actions, Cloud Build, Azure DevOps, CodePipelines
  • Containerization and Docker for GenAI Applications
    • Docker Foundation
    • Installing Docker on Windows, macOS & Linux
    • Managing Containers with Docker Commands
    • How Docker registries work: Docker Hub
    • Building your own Docker images
    • Docker Network Types
    • Image optimization
    • Project: Dockerize any Generative AI application
  • Docker Compose
    • Multi-container applications
    • Networking
    • Volumes and persistence
    • Project: Deploy multi-container, end-to-end Generative AI applications
  • Cloud Deployment
    • AWS Deployment
      • Lambda functions for GenAI
      • ECS/EKS for containerized apps
      • API Gateway integration
      • Load balancing with ALB
    • Azure Deployment
      • Azure Container Apps
      • Azure App Service
      • API Management
    • Google Cloud Deployment
      • Cloud Run
      • GKE (Google Kubernetes Engine)
      • Cloud Load Balancing
      • Cloud Endpoints
      • Project: Deploy Dockerized Generative AI applications on Azure Container Apps
  • Serverless Patterns
    • Function-as-a-Service (FaaS)
      • AWS Lambda for GenAI
      • Azure Functions
      • Google Cloud Functions
    • Project: Deploy Serverless Generative AI applications on AWS
  • Kubernetes for Generative AI
    • Kubernetes Architecture
      • Worker Nodes
      • Control Plane
      • Virtual Network
      • API Server
      • Command line tool - kubectl
    • Kubernetes Resources
      • Pods, Services, Deployments
      • ConfigMaps and Secrets
      • Minikube
      • Project: Deploy Generative AI app in Kubernetes cluster
  • MLOps for LLMs (LLMOps)
    • What is LLMOps?
    • MLOps for LLMs
    • FMOps/LLMOps: Operationalize generative AI
    • LLM System Design
    • High-level view of an LLM-driven application
    • LLMOps Pipeline
Module 12: (Self-paced) AI Operations with MLOps and other Generative AI Tools
  • Understanding the Operations Side of AI with MLOps
    • MLOps Introduction
      • What is MLOps?
      • MLOps Motivation
      • MLOps challenges
      • MLOps challenges similar to DevOps
      • MLOps Components
      • Automated ML pipelines vs CI/CD ML pipelines
      • Different Roles involved in MLOps (ML Engineering + Operations)
      • Machine Learning Life Cycle
      • Different tools for MLOps
      • Benefits Of AWS MLOps
    • MLOps & Stages
      • Versioning
      • Testing
      • Automation (CI/CD)
      • Reproducibility
      • Deployment
      • Monitoring
    • SageMaker for MLOps
      • Introduction to Amazon SageMaker
      • Using Amazon S3 along with SageMaker
      • Amazon SageMaker Notebooks and SDK
      • Notebook instance type, IAM Role & VPC
      • Build, Train & deploy ML Model using XGBoost
      • Endpoint & Endpoint configurations
      • Generate inference from deployed model
    • CI/CD for MLOps
      • AWS CodeCommit
      • AWS CodePipeline
      • AWS CodeBuild
      • Introduction to GitHub Actions
      • GitHub Actions YAML pipeline structure
      • GitHub Action automation & Custom Workflows
      • GitHub Pages
      • Project: Getting started with GitHub Actions DevOps Pipeline
    • MLOps - Build, Train & Deploy ML Model
      • SageMaker Studio & SageMaker domain
      • SageMaker Projects
      • Repositories
      • Pipelines & Graphs
      • Experiments
      • Model groups
      • Endpoints
      • Project: Deploy an end-to-end MLOps pipeline using SageMaker Studio
    • SageMaker Studio Tools
      • Introduction to SageMaker pipelines
      • Containers for Amazon SageMaker
      • AWS MLOps - ML Model Monitoring
      • AWS MLOps - Amazon SageMaker Feature Store
      • AWS MLOps - Post-Deployment Challenges
  • Document Processing & Parsing
    • PDF Processing
      • PyMuPDF (fitz)
      • PDFPlumber
      • PyPDF2
  • OCR & Document Intelligence
    • Azure Document Intelligence
    • AWS Textract
    • Google Document AI
  • LangSmith
    • Tracing and debugging
    • Dataset management
    • Evaluation runs
  • AI-Powered Development Tools
    • Project: Cursor / Google Antigravity
      • AI-first code editor
      • Codebase understanding
      • Multi-file editing
    • Windsurf
      • Flow state programming
      • Cascade for complex tasks
    • Replit
      • AI-powered development
      • Collaborative coding
    • Claude Code / Cowork
      • What Claude Code does for you
      • Why developers love Claude Code
      • Use Claude Code everywhere

View all plans

$699

$799

Syllabus

Testimonials

Nisheeth Jaiswal - Participants - MLOps

Dipali Matkar - MLOps Engineer

Rahul Patil - Participants - MLOps

Sitaram - Participants - MLOps Specialization course

Dhirendra Kumar Singh - Participants - MLOps

Fathima Hafeez - Participants - MLOps

What you’ll learn

Instructor-led Training

Get trained by top industry experts

Projects and Exercises

Get real-world experience through Projects

Peer Networking and Group Learning

Improve your professional network and learn from peers through our innovative Peer WhatsApp & community groups.

24/7 Technical Support

Speak to Subject Matter Experts anytime and clarify your queries instantly.

Live Hands-on

Hands-on exercises, project work, quizzes, and capstone projects


Contact Us