Animated demonstrations of mathematical and machine learning concepts
Created by Xiangxiong Zhang using Claude Code — MATLAB R2023a
Disclaimer: I did not fully verify the mathematical correctness of these codes; each was generated within at most 30 minutes. They do run without errors, so many of them should be good enough to serve as teaching demos.
Caution: Claude Code is powerful, but it can still produce useless code.
Simulates orbital dynamics of 10 debris objects around Earth under gravitational force. Random initial conditions with varying altitudes (300–2000 km), inclinations, and eccentricities.
Method: Numerical integration of Newton's equations in 3D. 2-hour simulation, 10s time step.
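For reference, a minimal sketch of the underlying two-body integration using base MATLAB's ode45; the 500 km altitude, circular initial velocity, and plotting are illustrative choices, not the demo's actual random initial conditions.

    % Two-body dynamics of one debris object around Earth (point-mass gravity).
    mu = 3.986004418e14;              % Earth's gravitational parameter [m^3/s^2]
    Re = 6371e3;                      % Earth radius [m]
    r0 = [Re + 500e3; 0; 0];          % example initial position: 500 km altitude
    v0 = [0; sqrt(mu/norm(r0)); 0];   % circular orbital speed at that radius
    rhs = @(t, y) [y(4:6); -mu*y(1:3)/norm(y(1:3))^3];   % Newton's equations in 3D
    [t, y] = ode45(rhs, 0:10:7200, [r0; v0]);            % 2-hour span, 10 s output step
    plot3(y(:,1), y(:,2), y(:,3)); axis equal; grid on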
Loads real satellite orbital data from TLE (Two-Line Element) format files and propagates orbits using Keplerian elements. 3D visualization of real satellite orbits.
Dataset: iss_tle_data.txt (source: CelesTrak).
Processes and visualizes a large TLE dataset (~14,000+ active satellites). Subsamples 50 for animation and 200 for full trajectory output suitable for operator learning.
Dataset: active_satellites_full.txt (source: CelesTrak active satellites catalog). 2-hour simulation, 15s time step.
Simulates balls dropping through a Galton board (peg board) into bins, demonstrating that the resulting distribution approximates a Gaussian. A theoretical normal curve is overlaid on the histogram.
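A minimal sketch of the idea; the ball count and number of peg rows here are illustrative, not the demo's settings.

    % Each ball makes nRows independent left/right choices, so its bin index is
    % Binomial(nRows, 0.5), which approaches a Gaussian as nRows grows.
    nBalls = 2000; nRows = 12;
    bins = sum(rand(nBalls, nRows) > 0.5, 2);        % number of rightward bounces per ball
    histogram(bins, -0.5:1:nRows+0.5, 'Normalization', 'pdf'); hold on
    mu = nRows/2; sigma = sqrt(nRows)/2;             % binomial mean and standard deviation
    x = linspace(0, nRows, 200);
    plot(x, exp(-(x-mu).^2/(2*sigma^2))/(sigma*sqrt(2*pi)), 'r', 'LineWidth', 2)
    xlabel('bin'); legend('empirical', 'normal approximation'); hold off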
Generates 800 uniform points on a unit sphere (Fibonacci spiral), builds a k=10 nearest-neighbor graph, simulates a 5000-step random walk. Stationary distribution is approximately uniform.
Transition rule: Uniform random choice among k nearest neighbors.
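A minimal sketch of the construction, with the k-nearest-neighbor graph built from brute-force pairwise distances so no toolbox is needed.

    % Fibonacci-spiral points on the unit sphere, k-NN graph, uniform random walk.
    N = 800; k = 10; nSteps = 5000;
    i = (0:N-1)'; phi = pi*(3 - sqrt(5));            % golden-angle increment
    z = 1 - 2*(i + 0.5)/N; r = sqrt(1 - z.^2);
    P = [r.*cos(phi*i), r.*sin(phi*i), z];           % N x 3 quasi-uniform points
    D2 = sum(P.^2,2) + sum(P.^2,2)' - 2*(P*P');      % squared pairwise distances
    [~, idx] = sort(D2, 2);                          % idx(:,1) is the point itself
    nbrs = idx(:, 2:k+1);                            % k nearest neighbors of each node
    visits = zeros(N,1); s = 1;
    for step = 1:nSteps
        s = nbrs(s, randi(k));                       % uniform choice among neighbors
        visits(s) = visits(s) + 1;
    end
    % Empirical visit frequencies should be roughly uniform over the sphere.
    scatter3(P(:,1), P(:,2), P(:,3), 20, visits, 'filled'); axis equal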
Loads Stanford Bunny mesh (8171 vertices), subsamples to 1500 points, builds k=12 nearest-neighbor graph, simulates an 8000-step random walk. Non-uniform density produces a non-uniform stationary distribution.
Dataset: bunny.ply (source: Stanford 3D Scanning Repository).
Simulates wave propagation on a 10×10 m domain with a circular rigid obstacle. A Gaussian pulse is launched from the left side and scatters off the obstacle.
Method: Second-order finite differences in space, leap-frog in time. Dirichlet BCs. 300×300 grid.
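A minimal sketch of the leap-frog update, with an assumed obstacle center/radius, pulse width, and step count; the demo's exact geometry may differ.

    % Leap-frog update for u_tt = c^2 (u_xx + u_yy) with a rigid circular obstacle.
    N = 300; L = 10; h = L/(N-1); c = 1; dt = 0.4*h/c;   % CFL-safe time step
    [x, y] = meshgrid(linspace(0, L, N));
    obst = (x - 6).^2 + (y - 5).^2 < 1;                  % obstacle (assumed center and radius)
    u = exp(-((x - 2).^2 + (y - 5).^2)/0.1);             % Gaussian pulse near the left side
    uold = u;                                            % zero initial velocity (first-order start)
    for n = 1:800
        lap = zeros(N);
        lap(2:end-1,2:end-1) = (u(1:end-2,2:end-1) + u(3:end,2:end-1) + ...
            u(2:end-1,1:end-2) + u(2:end-1,3:end) - 4*u(2:end-1,2:end-1))/h^2;
        unew = 2*u - uold + (c*dt)^2*lap;                % leap-frog in time
        unew(obst) = 0; unew([1 end],:) = 0; unew(:,[1 end]) = 0;   % Dirichlet BCs
        uold = u; u = unew;
        if mod(n, 20) == 0, imagesc(u, [-0.2 0.2]); axis equal tight; drawnow; end
    end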
Extends the acoustic wave demo to spatially varying wave speed c(x,y). The medium has horizontal layers, random heterogeneity, and a high-speed lens that focuses waves.
Method: Variable-coefficient wave equation with gradient terms. First-order Mur absorbing BCs. 400×400 grid.
Solves the 1D compressible Euler equations for the Lax shock tube problem. Produces a rarefaction wave, contact discontinuity, and shock wave from discontinuous initial data.
Method: First-order Lax-Friedrichs scheme, CFL=0.9. 200 points on [0, 1], final time T=0.14.
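A minimal sketch of the Lax-Friedrichs update with the standard Lax left/right states; boundary handling and plotting are simplified relative to the demo.

    % First-order Lax-Friedrichs for the 1D Euler equations (Lax shock tube).
    gam = 1.4; N = 200; x = linspace(0, 1, N); dx = x(2) - x(1);
    % Standard Lax states: (rho, u, p) = (0.445, 0.698, 3.528) | (0.5, 0, 0.571)
    rho = 0.445*(x < 0.5) + 0.5*(x >= 0.5);
    u   = 0.698*(x < 0.5);
    p   = 3.528*(x < 0.5) + 0.571*(x >= 0.5);
    U = [rho; rho.*u; p/(gam-1) + 0.5*rho.*u.^2];        % conserved variables [rho; rho*u; E]
    flux = @(U) [U(2,:); ...
        (gam-1)*(U(3,:) - 0.5*U(2,:).^2./U(1,:)) + U(2,:).^2./U(1,:); ...
        (U(3,:) + (gam-1)*(U(3,:) - 0.5*U(2,:).^2./U(1,:))).*U(2,:)./U(1,:)];
    t = 0; T = 0.14;
    while t < T
        a = max(abs(U(2,:)./U(1,:)) + sqrt(gam*(gam-1)*(U(3,:) - 0.5*U(2,:).^2./U(1,:))./U(1,:)));
        dt = min(0.9*dx/a, T - t);                       % CFL = 0.9
        F = flux(U);
        % Lax-Friedrichs update on interior cells; boundary values held fixed
        U(:,2:end-1) = 0.5*(U(:,1:end-2) + U(:,3:end)) - dt/(2*dx)*(F(:,3:end) - F(:,1:end-2));
        t = t + dt;
    end
    plot(x, U(1,:)); xlabel('x'); ylabel('\rho')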
Solves basis pursuit (minimize ‖x‖₁ subject to Ax = b) using Douglas-Rachford splitting. Recovers a sparse signal (20 nonzero entries in 1000 dimensions) from 100 measurements.
Method: Douglas-Rachford iteration alternating between affine projection and soft thresholding.
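A minimal sketch of the iteration; the step parameter gamma, the iteration count, and the random problem instance are assumed, not the demo's exact settings.

    % Douglas-Rachford splitting for basis pursuit: min ||x||_1 s.t. Ax = b.
    n = 1000; m = 100; s = 20; rng(0);
    x0 = zeros(n,1); x0(randperm(n, s)) = randn(s,1);    % sparse ground truth
    A = randn(m, n); b = A*x0;
    gamma = 0.1;                                         % soft-threshold level (assumed)
    soft  = @(v, t) sign(v).*max(abs(v) - t, 0);         % prox of t*||.||_1
    projA = @(v) v - A'*((A*A')\(A*v - b));              % projection onto {x : Ax = b}
    z = zeros(n,1);
    for k = 1:500
        x = projA(z);                                    % prox of the affine constraint
        z = z + soft(2*x - z, gamma) - x;                % Douglas-Rachford update of z
    end
    x = projA(z);
    fprintf('relative error: %.2e\n', norm(x - x0)/norm(x0));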
Computes and animates optimal transport between two 1D Gaussians using the Benamou-Brenier dynamical formulation. Source: narrow Gaussian at x=−1.5. Target: wider Gaussian at x=1.5.
Method: CDF inversion for the optimal map T(x) = F₁⁻¹(F₀(x)). McCann interpolation for the Wasserstein geodesic.
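A minimal sketch of the CDF-inversion map and the displacement interpolation, using erf/erfinv so no toolbox is needed; the Gaussian widths are assumed values.

    % Optimal map between two 1D Gaussians via CDF inversion, plus McCann interpolation.
    mu0 = -1.5; s0 = 0.3;   mu1 = 1.5; s1 = 0.8;         % source and target (widths assumed)
    F0    = @(x) 0.5*(1 + erf((x - mu0)/(s0*sqrt(2))));  % source CDF
    F1inv = @(u) mu1 + s1*sqrt(2)*erfinv(2*u - 1);       % target inverse CDF
    T     = @(x) F1inv(F0(x));                           % optimal monotone map T(x)
    x = linspace(mu0 - 4*s0, mu0 + 4*s0, 400);           % grid covering the source support
    rho0 = exp(-(x - mu0).^2/(2*s0^2))/(s0*sqrt(2*pi));  % source density
    hold on
    for t = 0:0.25:1
        Xt = (1 - t)*x + t*T(x);                         % McCann interpolation at time t
        plot(Xt, rho0./gradient(Xt, x), 'LineWidth', 1.5);   % pushforward density
    end
    hold off; xlabel('x'); title('Wasserstein geodesic between two Gaussians')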
Transports a bimodal distribution (two Gaussians at x=−2 and x=2) to a single Gaussian at the origin. Demonstrates how mass from two sources optimally merges into one.
Method: Same CDF-based approach. Animation shows two bumps flowing inward and merging.
Demonstrates Langevin dynamics for sampling from a Gibbs distribution using Euler-Maruyama discretization of the Langevin SDE with a double-well potential U(x) = (x²−1)².
Animation: Particle on potential landscape, sample path over time, histogram converging to the Gibbs distribution.
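A minimal sketch of the Euler-Maruyama loop; the inverse temperature, step size, and chain length are assumed values.

    % Euler-Maruyama discretization of the overdamped Langevin SDE
    %   dX = -U'(X) dt + sqrt(2/beta) dW,   with U(x) = (x^2 - 1)^2.
    beta = 3; dt = 0.01; nSteps = 2e5;                   % parameters assumed for illustration
    gradU = @(x) 4*x.*(x.^2 - 1);
    X = zeros(nSteps, 1); X(1) = -1;
    for k = 1:nSteps-1
        X(k+1) = X(k) - gradU(X(k))*dt + sqrt(2*dt/beta)*randn;
    end
    % Histogram should approach the Gibbs density exp(-beta*U(x))/Z.
    histogram(X, 'Normalization', 'pdf'); hold on
    x = linspace(-2, 2, 400); gibbs = exp(-beta*(x.^2 - 1).^2);
    plot(x, gibbs/trapz(x, gibbs), 'r', 'LineWidth', 2); hold off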
Extends Langevin MCMC by adding a Metropolis-Hastings accept/reject step to the Langevin proposal, correcting for discretization bias.
Animation: 4 panels including running acceptance rate plot.
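A minimal 1D sketch of the accept/reject step added on top of the Langevin proposal; parameters are assumed, and β is absorbed into U for brevity.

    % MALA: Langevin proposal plus Metropolis-Hastings correction.
    beta = 3; dt = 0.05; nSteps = 5e4;
    U     = @(x) beta*(x.^2 - 1).^2;                     % beta*U, so the target is exp(-U)
    gradU = @(x) beta*4*x.*(x.^2 - 1);
    logq  = @(y, x) -(y - (x - dt*gradU(x))).^2/(4*dt);  % Gaussian proposal log-density
    X = zeros(nSteps, 1); acc = 0;
    for k = 1:nSteps-1
        prop = X(k) - dt*gradU(X(k)) + sqrt(2*dt)*randn; % Langevin proposal
        logA = -U(prop) + U(X(k)) + logq(X(k), prop) - logq(prop, X(k));
        if log(rand) < logA                              % Metropolis accept/reject
            X(k+1) = prop; acc = acc + 1;
        else
            X(k+1) = X(k);
        end
    end
    fprintf('acceptance rate: %.2f\n', acc/(nSteps-1));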
Runs both Unadjusted Langevin (ULA) and MALA side-by-side with the same noise sequence, comparing convergence to the Gibbs distribution via KL divergence.
Animation: Side-by-side histograms, KL divergence on semilogy scale.
Trains a CNN to fit a smooth function f(x) = sin(2x) + 0.5cos(5x) + 0.3sin(8x). Uses Gabor-like patch encoding to convert 1D input to 2D image patches.
Optimizer: Adam. Animation shows fitted curve evolving during training.
Trains a CNN to classify a 4-class spiral dataset in 2D. Uses 2D patch encoding and shows decision boundary evolution during training.
Concepts: Cross-entropy loss, softmax, decision boundaries, training/validation split.
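For reference, a minimal sketch of the softmax and cross-entropy computations the demo relies on; the logit sizes and labels here are illustrative.

    % Softmax probabilities and cross-entropy loss for a small batch of logits.
    Z = randn(4, 8);                                     % 4 classes x 8 samples (illustrative)
    labels = randi(4, 1, 8);                             % true class of each sample
    P = exp(Z - max(Z, [], 1)); P = P./sum(P, 1);        % numerically stable softmax
    loss = -mean(log(P(sub2ind(size(P), labels, 1:8)))); % average cross-entropy loss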
Compares 7 optimization methods on the same CNN regression problem: Vanilla GD, SGD, Heavy Ball Momentum, Nesterov, AdaGrad, RMSProp, and Adam. All implemented from scratch with custom training loops.
Animation: 7 fitted curves side-by-side with MSE loss comparison.
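A minimal sketch of three of the update rules, applied to a toy 2D quadratic rather than CNN weights; learning rates and momentum values are illustrative.

    % Update rules for vanilla GD, heavy-ball momentum, and Adam on f = 5*x^2 + 0.5*y^2.
    gradf = @(th) [10*th(1); th(2)];
    lr = 0.05; beta1 = 0.9; beta2 = 0.999; epsA = 1e-8;
    thGD = [2; 2]; thHB = [2; 2]; thAdam = [2; 2];
    vHB = [0; 0]; mA = [0; 0]; vA = [0; 0];
    for k = 1:200
        thGD = thGD - lr*gradf(thGD);                    % vanilla gradient descent
        vHB  = 0.9*vHB - lr*gradf(thHB);                 % heavy-ball momentum
        thHB = thHB + vHB;
        g  = gradf(thAdam);                              % Adam with bias correction
        mA = beta1*mA + (1-beta1)*g;  vA = beta2*vA + (1-beta2)*g.^2;
        mh = mA/(1-beta1^k);          vh = vA/(1-beta2^k);
        thAdam = thAdam - lr*mh./(sqrt(vh) + epsA);
    end
    fprintf('distance to optimum: GD %.1e  HeavyBall %.1e  Adam %.1e\n', ...
        norm(thGD), norm(thHB), norm(thAdam));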
Trains an autoencoder (150→128→64→32→2→32→64→128→150) to compress 3D point clouds of 5 shape classes into a 2D latent space. Compares to PCA.
Key insight: Random point ordering makes reconstruction impossible — fixed by deterministic ordered sampling.
Designed to show where autoencoders beat PCA. Data lives on a nonlinear manifold (2D latent mapped to 50D via sinusoidal features). Three classes with concentric circular boundaries.
Result: AE achieves ~84% KNN accuracy vs PCA's ~72%.
Solves an 8×8 grid world (obstacles, traps, goal) using exact policy iteration: alternating policy evaluation (Bellman equation) and greedy policy improvement.
Animation: Value function heatmap and policy arrows evolving, then agent following the optimal path.
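A minimal sketch of exact policy iteration on a stripped-down grid world (no obstacles or traps, one absorbing goal, assumed discount factor), not the demo's 8×8 layout.

    % Exact policy iteration on a small deterministic grid world with -1 reward per step.
    N = 5; nS = N*N; nA = 4; gam = 0.95; goal = nS;      % goal in the bottom-right corner
    dr = [-1 1 0 0]; dc = [0 0 -1 1];                    % up, down, left, right
    nextS = zeros(nS, nA);
    for s = 1:nS
        [r, c] = ind2sub([N N], s);
        for a = 1:nA
            rn = min(max(r + dr(a), 1), N); cn = min(max(c + dc(a), 1), N);
            nextS(s, a) = sub2ind([N N], rn, cn);
        end
    end
    nextS(goal, :) = goal;                               % goal is absorbing
    R = -ones(nS, nA); R(goal, :) = 0;
    pol = ones(nS, 1);
    for iter = 1:50
        % policy evaluation: solve (I - gam*P_pi) V = R_pi exactly
        Ppi = sparse((1:nS)', nextS(sub2ind([nS nA], (1:nS)', pol)), 1, nS, nS);
        V = (speye(nS) - gam*Ppi) \ R(sub2ind([nS nA], (1:nS)', pol));
        % greedy policy improvement
        [~, newPol] = max(R + gam*V(nextS), [], 2);
        if isequal(newPol, pol), break; end
        pol = newPol;
    end
    imagesc(reshape(V, N, N)); colorbar; title('value function')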
Same grid world, but Q(s,a) is approximated by a neural network (7→64→64→4) trained with Adam and experience replay. Compares learned DQN policy to exact tabular solution.
Result: ~60% policy match vs tabular optimal after 800 episodes.
Solves a 1D Mean Field Game for crowd congestion. Agents on [0,1] move toward a target while avoiding congested areas, subject to Brownian noise. The equilibrium balances proximity-to-target against congestion cost.
Method: Fictitious play iterating between HJB backward sweep (Godunov Hamiltonian) and Fokker-Planck forward sweep (upwind advection). Converges in ~27 iterations.
The first code was generated directly by the AI: it is mathematically correct but numerically useless, because it does not solve the linear systems in an efficient way. The other two codes were generated by the AI after several rounds of debugging, but their mathematical correctness was not validated.
Solves −Δu = f on [0,1]×[0,1] with Dirichlet BCs using second-order finite differences. Manufactured solution u(x,y) = sin(2πx)sin(2πy) provides an exact reference.
Method: 2D Laplacian via Kronecker products. Tests N = 16, 32, 64. Expected: 2nd-order convergence.
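A minimal sketch of the Kronecker-product construction and the convergence check against the manufactured solution.

    % Second-order FD for -Laplace(u) = f on the unit square, homogeneous Dirichlet BCs,
    % with manufactured solution u = sin(2*pi*x)*sin(2*pi*y).
    for N = [16 32 64]
        h = 1/(N+1); x = (1:N)'*h; [X, Y] = ndgrid(x, x);    % interior grid points
        T = spdiags(ones(N,1)*[-1 2 -1], -1:1, N, N)/h^2;    % 1D operator for -d^2/dx^2
        A = kron(speye(N), T) + kron(T, speye(N));           % 2D Laplacian via Kronecker sums
        uex = sin(2*pi*X).*sin(2*pi*Y);
        f = 8*pi^2*sin(2*pi*X).*sin(2*pi*Y);                 % f = -Laplace(uex)
        u = A \ f(:);
        fprintf('N = %3d   max error = %.3e\n', N, max(abs(u - uex(:))));
    end
    % The max error should drop by roughly a factor of 4 per refinement (2nd order).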
Solves the 2D compressible Euler equations using a Q2 Discontinuous Galerkin method with the Zhang-Shu positivity-preserving limiter on a circular explosion benchmark.
Method: Q2 DG, 9 basis functions/element, Gauss-Legendre quadrature. 30×30 elements.
Measures accuracy and convergence rate of the Q2 DG Euler solver using the isentropic vortex benchmark. Tests multiple mesh resolutions and computes L2 errors.
Expected: ~3rd order convergence for Q2 elements on periodic domain.