Numerical Computation
Advanced computational techniques and high-performance algorithms for scientific computing and engineering applications
14 Sessions · 42 Materials · 28 hrs Estimated Time
Course Sessions
Session 1: Floating-Point Arithmetic and Error Analysis
Learning Objectives:
- Master IEEE 754 single and double precision floating-point representations
- Analyze round-off error propagation in arithmetic operations
- Understand machine epsilon, overflow, underflow, and special values (NaN, Inf)
- Implement algorithms with awareness of floating-point limitations
- Apply Kahan summation algorithm for improved accuracy
- Evaluate catastrophic cancellation and methods to avoid it
Available Materials:
IEEE 754 Standard Complete Reference (60 pages)
Floating-Point Error Analysis Theory
Kahan Summation and Compensated Arithmetic
Catastrophic Cancellation Case Studies
Precision and Accuracy Measurement Tools
Programming Exercises in Multiple Languages
Performance Analysis of Arithmetic Operations
Industry Standards and Best Practices Guide
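The compensated-arithmetic objective above can be sketched in a few lines of Python (a minimal illustration of Kahan summation; the function name and the 0.1-summation demo are mine, not from the course materials):

```python
def kahan_sum(values):
    """Compensated summation: carry the round-off lost in each addition
    in a separate variable and feed it back into the next addend."""
    total = 0.0
    c = 0.0  # running compensation for lost low-order bits
    for x in values:
        y = x - c            # correct the addend by the stored error
        t = total + y        # big + small: low-order bits of y may be lost
        c = (t - total) - y  # algebraically zero; in floats, the lost part
        total = t
    return total

# Summing 0.1 a million times: the naive running sum drifts away from
# 100000, while the compensated sum stays within a few ulps of it.
vals = [0.1] * 1_000_000
naive = 0.0
for x in vals:
    naive += x
compensated = kahan_sum(vals)
```

The subtraction `(t - total) - y` is exactly the kind of expression an optimizing compiler may be tempted to simplify away, which is why compensated summation must be written with care in lower-level languages.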
Session 2: Norms, Conditioning, and Perturbation Theory
Learning Objectives:
- Compute and analyze vector and matrix norms (1, 2, Frobenius, infinity norms)
- Master condition number computation and interpretation for linear systems
- Understand perturbation theory for linear algebraic equations
- Apply backward error analysis to assess algorithm stability
- Implement efficient condition number estimation algorithms
- Analyze sensitivity of eigenvalue problems to perturbations
Available Materials:
Matrix Analysis and Perturbation Theory (55 pages)
Condition Number Computation Algorithms
Backward Error Analysis Framework
Sensitivity Analysis Case Studies
Efficient Norm and Condition Estimation Code
Linear System Perturbation Examples
Eigenvalue Sensitivity Analysis
Numerical Stability Assessment Tools
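The norm and condition-number objectives can be previewed with NumPy (the Hilbert matrix is my choice of a standard ill-conditioned example; `cond_2` is an illustrative helper name):

```python
import numpy as np

def cond_2(A):
    """2-norm condition number: ratio of extreme singular values."""
    s = np.linalg.svd(A, compute_uv=False)
    return s[0] / s[-1]

# The 6x6 Hilbert matrix, a classic ill-conditioned test case:
n = 6
H = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])

one_norm = np.linalg.norm(H, 1)        # max absolute column sum
inf_norm = np.linalg.norm(H, np.inf)   # max absolute row sum
fro_norm = np.linalg.norm(H, 'fro')    # sqrt of sum of squared entries
kappa = cond_2(H)                      # roughly 1.5e7 for n = 6
```

Rule of thumb from the perturbation theory covered here: solving Hx = b in double precision can lose roughly log10(κ) ≈ 7 of the 16 available digits.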
Session 3: QR Factorization and the Singular Value Decomposition
Learning Objectives:
- Implement QR factorization using Householder reflections and Givens rotations
- Master Gram-Schmidt process and its modified version for better numerical stability
- Understand SVD computation using bidiagonalization and QR iteration
- Apply QR and SVD to least squares problems and pseudoinverse computation
- Use SVD for low-rank approximation and data compression
- Implement numerical rank determination using SVD
Available Materials:
Matrix Factorization Theory and Algorithms (65 pages)
Householder and Givens Rotation Implementations
SVD Algorithm Development and Optimization
Least Squares Applications and Case Studies
Low-rank Approximation and Data Compression Examples
Numerical Rank and Matrix Approximation
High-Performance Matrix Computation Techniques
Comprehensive Factorization Software Package
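A compact Householder QR in the spirit of the first objective (a teaching sketch with my own function name, not a tuned LAPACK replacement):

```python
import numpy as np

def householder_qr(A):
    """QR via Householder reflections: each step zeros one subcolumn."""
    m, n = A.shape
    R = A.astype(float)       # astype copies, so A is left untouched
    Q = np.eye(m)
    for k in range(n):
        x = R[k:, k]
        v = x.copy()
        # Choose the reflection sign that avoids cancellation in v[0]:
        v[0] += np.copysign(np.linalg.norm(x), x[0])
        vn = np.linalg.norm(v)
        if vn == 0.0:
            continue          # column already zero below the diagonal
        v /= vn
        R[k:, k:] -= 2.0 * np.outer(v, v @ R[k:, k:])  # apply H_k to R
        Q[:, k:] -= 2.0 * np.outer(Q[:, k:] @ v, v)    # accumulate Q·H_k
    return Q, R

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3))
Q, R = householder_qr(A)
```

The sign choice on `v[0]` is exactly the catastrophic-cancellation point from Session 1: reflecting toward the nearer axis direction would subtract nearly equal numbers.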
Session 4: Krylov Subspace Methods for Linear Systems
Learning Objectives:
- Understand Krylov subspace theory and its applications to linear systems
- Implement conjugate gradient method with optimal convergence properties
- Master GMRES method for nonsymmetric linear systems
- Apply BiCGSTAB and other Krylov methods for various matrix types
- Design and implement effective preconditioning strategies
- Analyze convergence rates and computational complexity of Krylov methods
Available Materials:
Krylov Subspace Theory and Methods (70 pages)
Conjugate Gradient Implementation and Analysis
GMRES and BiCGSTAB Algorithm Development
Preconditioning Techniques and Strategies
Convergence Analysis and Rate Estimation
Large-Scale Linear System Applications
Performance Optimization and Parallelization
Comprehensive Krylov Solver Library
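The conjugate gradient objective admits a short sketch (plain, unpreconditioned CG; the SPD test matrix and names are illustrative):

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=None):
    """CG for symmetric positive definite A: each step minimizes the
    A-norm of the error over a growing Krylov subspace."""
    n = len(b)
    max_iter = n if max_iter is None else max_iter
    x = np.zeros(n)
    r = b - A @ x          # residual
    p = r.copy()           # search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)      # exact line search along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p  # A-conjugate to previous directions
        rs = rs_new
    return x

rng = np.random.default_rng(1)
M = rng.standard_normal((50, 50))
A = M.T @ M + 50.0 * np.eye(50)   # symmetric positive definite by construction
b = rng.standard_normal(50)
x = conjugate_gradient(A, b)
```

Note that the only access to A is through matrix-vector products, which is what makes Krylov methods attractive for the large sparse systems of Session 7.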
Session 5: Advanced Eigenvalue Algorithms
Learning Objectives:
- Implement QR algorithm with single and double shifts for dense matrices
- Master Lanczos algorithm for symmetric eigenvalue problems
- Understand Arnoldi iteration for nonsymmetric eigenvalue computation
- Apply implicitly restarted Arnoldi method (IRAM) for large sparse problems
- Implement specialized methods for generalized eigenvalue problems
- Analyze convergence and computational efficiency of eigenvalue algorithms
Available Materials:
Advanced Eigenvalue Algorithm Theory (75 pages)
QR Algorithm with Shift Implementation
Lanczos and Arnoldi Method Development
Implicitly Restarted Arnoldi Implementation
Generalized Eigenvalue Problem Solvers
Large Sparse Matrix Eigenvalue Applications
Performance Analysis and Optimization
Complete Eigenvalue Computation Package
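The QR iteration at the heart of the first objective, stripped of shifts for clarity (the shifted and deflated variants covered in the materials converge far faster; the test matrix with eigenvalues 1..4 is my construction):

```python
import numpy as np

def qr_eigenvalues(A, iters=200):
    """Unshifted QR iteration: A_{k+1} = R_k Q_k = Q_k^T A_k Q_k is
    similar to A_k, so eigenvalues are preserved; for symmetric A with
    well-separated |eigenvalues| the iterates approach a diagonal matrix."""
    T = A.astype(float)
    for _ in range(iters):
        Q, R = np.linalg.qr(T)
        T = R @ Q             # similarity transform, eigenvalues unchanged
    return np.sort(np.diag(T))

# Symmetric test matrix with known, well-separated eigenvalues 1..4:
rng = np.random.default_rng(0)
Q0, _ = np.linalg.qr(rng.standard_normal((4, 4)))
A = Q0 @ np.diag([1.0, 2.0, 3.0, 4.0]) @ Q0.T
```

Convergence of each off-diagonal entry is linear with rate |λ_{i+1}/λ_i|, which is precisely what shifting accelerates.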
Session 6: The Fast Fourier Transform and Spectral Methods
Learning Objectives:
- Understand DFT theory and its relationship to continuous Fourier transform
- Implement Cooley-Tukey FFT algorithm with bit-reversal and twiddle factors
- Master radix-2, radix-4, and mixed-radix FFT implementations
- Apply FFT to convolution, correlation, and filtering operations
- Use FFT for solving PDEs with spectral methods
- Implement real-valued FFT and multi-dimensional FFT algorithms
Available Materials:
Fourier Transform Theory and Applications (60 pages)
Complete FFT Algorithm Implementations
Signal Processing Applications and Examples
Spectral Methods for PDE Solution
Multi-dimensional FFT and Real-valued FFT
Performance Optimization and Memory Management
Image and Audio Processing Applications
FFT-based PDE Solver Development
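The Cooley-Tukey objective in its simplest recursive radix-2 form (clarity-first; production codes use iterative in-place variants with bit-reversal, as the materials describe):

```python
import cmath

def fft(x):
    """Radix-2 Cooley-Tukey FFT; len(x) must be a power of two.
    Splits into even/odd halves, recurses, and recombines with
    twiddle factors, turning O(n^2) DFT work into O(n log n)."""
    n = len(x)
    if n == 1:
        return list(x)
    even, odd = fft(x[0::2]), fft(x[1::2])
    out = [0j] * n
    for k in range(n // 2):
        t = cmath.exp(-2j * cmath.pi * k / n) * odd[k]  # twiddle factor
        out[k] = even[k] + t
        out[k + n // 2] = even[k] - t                   # butterfly
    return out

# A constant signal transforms to a single DC spike: [4, 0, 0, 0]
X = fft([1.0, 1.0, 1.0, 1.0])
```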
Session 7: Sparse Matrix Computations
Learning Objectives:
- Master various sparse matrix storage formats (CSR, CSC, COO, etc.)
- Implement efficient sparse matrix-vector multiplication algorithms
- Understand sparse direct solvers and fill-in minimization strategies
- Apply graph algorithms to sparse matrix reordering and partitioning
- Design sparse iterative solvers with effective preconditioning
- Implement parallel sparse matrix computations
Available Materials:
Sparse Matrix Theory and Applications (65 pages)
Storage Format Implementations and Comparisons
Sparse Direct Solver Algorithms
Graph-based Matrix Reordering Methods
Parallel Sparse Computation Techniques
Large-Scale Sparse System Applications
Performance Analysis and Optimization
Complete Sparse Matrix Software Library
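A minimal CSR (compressed sparse row) round trip matching the first two objectives (helper names are mine; real codes would use scipy.sparse or an equivalent library):

```python
def dense_to_csr(A):
    """Convert a dense 2-D list to CSR: nonzero values, their column
    indices, and row pointers delimiting each row's slice."""
    data, indices, indptr = [], [], [0]
    for row in A:
        for j, v in enumerate(row):
            if v != 0:
                data.append(v)
                indices.append(j)
        indptr.append(len(data))
    return data, indices, indptr

def csr_matvec(data, indices, indptr, x):
    """y = A @ x touching only stored nonzeros: O(nnz) work."""
    y = []
    for i in range(len(indptr) - 1):
        s = 0.0
        for k in range(indptr[i], indptr[i + 1]):
            s += data[k] * x[indices[k]]
        y.append(s)
    return y

data, indices, indptr = dense_to_csr([[4, 0, 0],
                                      [0, 0, 2],
                                      [1, 3, 0]])
y = csr_matvec(data, indices, indptr, [1.0, 2.0, 3.0])  # → [4.0, 6.0, 7.0]
```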
Session 8: Automatic Differentiation
Learning Objectives:
- Understand forward mode automatic differentiation with dual numbers
- Master reverse mode automatic differentiation and backpropagation
- Implement computational graph construction and evaluation
- Apply automatic differentiation to optimization problems
- Use AD for gradient computation in machine learning applications
- Compare automatic differentiation with numerical and symbolic differentiation
Available Materials:
Automatic Differentiation Theory and Methods (50 pages)
Forward and Reverse Mode Implementations
Computational Graph Algorithms
Optimization Applications with Gradient Computation
Machine Learning Integration Examples
Performance Comparison Studies
Advanced AD Techniques and Tools
Complete AD Software Development Project
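Forward-mode AD with dual numbers, as in the first objective, fits in a tiny class (operator coverage is limited to + and × for brevity; class and function names are illustrative):

```python
class Dual:
    """Dual number a + b·ε with ε² = 0; the ε coefficient carries the
    derivative through every operation (forward-mode AD in miniature)."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.dot + other.dot)
    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val * other.val,
                    self.val * other.dot + self.dot * other.val)  # product rule
    __rmul__ = __mul__

def derivative(f, x):
    """Evaluate f at x + 1·ε; the ε part of the result is f'(x)."""
    return f(Dual(x, 1.0)).dot

# f(x) = 3x² + 2x, so f'(2) = 6·2 + 2 = 14 — exact, unlike finite differences
fprime = derivative(lambda x: 3 * x * x + 2 * x, 2.0)
```

Unlike the difference quotients of numerical differentiation, the dual-number derivative involves no step size and therefore no truncation/round-off trade-off.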
Session 9: Parallel Numerical Computing
Learning Objectives:
- Master shared memory programming with OpenMP for numerical algorithms
- Understand MPI programming for distributed numerical computations
- Implement vectorized algorithms for SIMD architectures
- Apply parallel algorithms to dense and sparse linear algebra
- Optimize cache performance and memory access patterns
- Design scalable parallel numerical algorithms
Available Materials:
Parallel Computing Theory and Practice (80 pages)
OpenMP Programming for Numerical Methods
MPI Implementation Examples and Patterns
Vectorization and SIMD Optimization
Parallel Linear Algebra Implementations
Cache Optimization and Memory Management
Scalability Analysis and Performance Modeling
High-Performance Numerical Software Development
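OpenMP and MPI examples need their own toolchains, but the data-parallel mindset can be previewed in pure Python: one whole-array operation hands the loop to compiled, vectorized code instead of the interpreter (timings are illustrative and machine-dependent):

```python
import time
import numpy as np

n = 1_000_000
rng = np.random.default_rng(42)
a = rng.standard_normal(n)
b = rng.standard_normal(n)

t0 = time.perf_counter()
loop_dot = 0.0
for i in range(n):           # one interpreted iteration per element
    loop_dot += a[i] * b[i]
t_loop = time.perf_counter() - t0

t0 = time.perf_counter()
vec_dot = float(a @ b)       # single call into vectorized BLAS code
t_vec = time.perf_counter() - t0
```

On a typical machine the vectorized dot product is orders of magnitude faster while agreeing with the loop to accumulated roundoff; the same contiguous-access, one-instruction-many-elements principle underlies the SIMD and cache-blocking material in this session.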
Session 10: GPU-Accelerated Numerical Computing
Learning Objectives:
- Master the CUDA programming model and the underlying GPU architecture
- Implement efficient matrix operations on GPU with shared memory optimization
- Apply GPU acceleration to iterative methods and eigenvalue computations
- Optimize memory coalescing and bandwidth utilization
- Use CUDA libraries (cuBLAS, cuSOLVER, cuFFT) for numerical computing
- Design hybrid CPU-GPU algorithms for large-scale problems
Available Materials:
GPU Architecture and CUDA Programming Guide (70 pages)
Matrix Operation GPU Implementations
Iterative Method GPU Acceleration
Memory Optimization Techniques and Patterns
CUDA Library Integration and Usage
Hybrid CPU-GPU Algorithm Development
Performance Analysis and Profiling Tools
Complete GPU-Accelerated Numerical Package
Session 11: Adaptive Algorithms and Error Control
Learning Objectives:
- Implement adaptive quadrature with automatic error estimation
- Master adaptive step size control for ODE solvers with embedded methods
- Understand a posteriori error estimation techniques
- Apply adaptive mesh refinement strategies for PDE solutions
- Design automatic tolerance control and convergence monitoring
- Implement multi-level and multi-grid adaptive methods
Available Materials:
Adaptive Algorithm Theory and Implementation (55 pages)
Adaptive Quadrature and Integration Methods
Adaptive ODE Solver Development
A Posteriori Error Estimation Techniques
Adaptive Mesh Refinement Algorithms
Multi-level and Multi-grid Methods
Convergence Monitoring and Control Systems
Comprehensive Adaptive Solver Suite
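Adaptive quadrature, the first objective, in its classic adaptive-Simpson form (the factor 15 comes from Richardson extrapolation of Simpson's O(h⁴) error; function names are mine):

```python
import math

def adaptive_simpson(f, a, b, tol=1e-10):
    """Compare one Simpson panel with two half-panels; their difference
    estimates the error, so only badly-resolved intervals get split."""
    def simpson(lo, hi):
        mid = (lo + hi) / 2
        return (hi - lo) / 6 * (f(lo) + 4 * f(mid) + f(hi))

    def recurse(lo, hi, whole, tol):
        mid = (lo + hi) / 2
        left, right = simpson(lo, mid), simpson(mid, hi)
        err = left + right - whole
        if abs(err) < 15 * tol:               # Richardson-based acceptance
            return left + right + err / 15    # ...plus a free extrapolation
        return (recurse(lo, mid, left, tol / 2)
                + recurse(mid, hi, right, tol / 2))

    return recurse(a, b, simpson(a, b), tol)

integral = adaptive_simpson(math.sin, 0.0, math.pi)  # exact answer is 2
```

Splitting the tolerance in half at each subdivision is the simplest of the automatic tolerance-control strategies this session develops.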
Session 12: Complexity of Numerical Algorithms
Learning Objectives:
- Understand information-based complexity theory for numerical problems
- Analyze computational complexity of matrix algorithms and their optimality
- Master complexity analysis of iterative methods and convergence rates
- Apply communication complexity theory to parallel numerical algorithms
- Evaluate optimality of numerical algorithms and existence of lower bounds
- Design algorithms that achieve optimal or near-optimal complexity
Available Materials:
Computational Complexity Theory for Numerical Methods (45 pages)
Information-Based Complexity Analysis
Matrix Algorithm Complexity and Optimality Results
Iterative Method Convergence Rate Analysis
Communication Complexity in Parallel Computing
Lower Bound Techniques and Applications
Optimal Algorithm Design Strategies
Complexity Analysis Software Tools
Session 13: Machine Learning for Numerical Methods
Learning Objectives:
- Apply neural networks to approximate solutions of differential equations
- Use machine learning for automatic parameter tuning in numerical algorithms
- Implement physics-informed neural networks (PINNs) for PDE solutions
- Apply reinforcement learning to numerical optimization problems
- Use data-driven approaches for model reduction and surrogate modeling
- Integrate machine learning with traditional numerical methods
Available Materials:
Machine Learning for Numerical Methods (60 pages)
Neural Network PDE Solver Implementations
Physics-Informed Neural Network Development
Reinforcement Learning Optimization Examples
Data-Driven Model Reduction Techniques
Surrogate Modeling and Approximation Methods
Hybrid ML-Numerical Algorithm Development
Complete ML-Enhanced Numerical Computing Framework
Session 14: Emerging Computational Paradigms
Learning Objectives:
- Understand quantum algorithms for linear algebraic problems
- Apply tensor decomposition methods to high-dimensional problems
- Implement randomized algorithms for matrix computations
- Use sketching and sampling techniques for large-scale numerical problems
- Explore emerging paradigms in computational mathematics
- Analyze the impact of new computing architectures on numerical methods
Available Materials:
Emerging Computational Methods Survey (40 pages)
Quantum Algorithm Theory for Linear Algebra
Tensor Method Applications and Implementations
Randomized Algorithm Development
Sketching and Sampling Techniques
Future Computing Architecture Analysis
Research Paper Collection and Analysis
Innovation Project in Computational Methods
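Of the emerging techniques above, randomized sketching is the easiest to demonstrate: project the matrix onto a random subspace, orthonormalize, and recover a truncated SVD from a much smaller problem (a sketch after the Halko-Martinsson-Tropp recipe; names, sizes, and the oversampling choice are illustrative):

```python
import numpy as np

def randomized_svd(A, k, oversample=10, seed=0):
    """Randomized truncated SVD: a Gaussian sketch captures the dominant
    range of A, after which only a small (k+p)-column problem remains."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    Omega = rng.standard_normal((n, k + oversample))  # Gaussian sketch
    Q, _ = np.linalg.qr(A @ Omega)        # orthonormal basis for range(A·Ω)
    U_small, s, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
    return (Q @ U_small)[:, :k], s[:k], Vt[:k]

# Exactly rank-5 test matrix, so the rank-5 recovery should be near-exact:
rng = np.random.default_rng(1)
A = rng.standard_normal((100, 5)) @ rng.standard_normal((5, 80))
U, s, Vt = randomized_svd(A, k=5)
```

For matrices whose singular values decay slowly, a few power iterations on the sketch (not shown) sharpen the captured subspace at modest extra cost.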