I take simple prompts like "explain cosmology" and automatically generate verbose, LaTeX-rich descriptions of 2,000+ tokens that produce beautiful Manim animations. No training data needed - just recursive reasoning.
ULTRA QED: The Complete Quantum Electrodynamics Journey
A comprehensive 4-minute, one-shot visualization of Quantum Electrodynamics generated entirely from a single text prompt with zero manual editing or intervention. This unedited journey spans 11 interconnected scenes covering the complete theoretical framework of the electromagnetic interaction:
Complete Conceptual Map:
Scene 1 - Cosmic Opening: Establishing the universal scale with a static starfield backdrop (150 celestial objects), setting the stage for fundamental physics
Scene 2 - Spacetime Foundations: Introducing Minkowski spacetime geometry with the relativistic metric
Scene 3 - Quantum Field Emergence: Transitioning from classical concepts to quantum field theory foundations, establishing the field-theoretic framework
Scene 4 - Maxwell's Transformation: Evolving classical electromagnetic waves ($\vec{E}$ and $\vec{B}$ fields) into relativistic four-vector form, $\partial_\mu F^{\mu\nu} = \mu_0 J^\nu$
Scene 5 - QED Lagrangian Heart: Presenting the complete QED Lagrangian density with color-coded term-by-term breakdown:
Components: Dirac spinor $\psi$, covariant derivative $D_\mu$, gamma matrices $\gamma^\mu$, and field strength tensor $F_{\mu\nu}$
Scene 6 - Feynman Diagram Gallery: Visualizing three fundamental QED processes through spacetime diagrams: electron-electron scattering (single photon exchange), electron-positron annihilation (matter to photons), and pair creation (photons to matter-antimatter)
Scene 7 - Fine Structure Constant: Deep dive into nature's dimensionless coupling constant
Scene 8 - Running Coupling: Demonstrating quantum corrections and renormalization through energy-dependent coupling
Scene 9 - Vacuum Polarization: Revealing virtual particle loops and quantum vacuum structure, where "empty" space seethes with ephemeral electron-positron pairs affecting photon propagation
Scene 10 - Grand Synthesis: Integrating all previous concepts into a unified theoretical framework, connecting spacetime geometry, gauge theory, and quantum mechanics
Scene 11 - Cosmic Finale: Returning to the universal scale, demonstrating how QED emerges as the quantum field theory describing all electromagnetic phenomena throughout the cosmos
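For reference, the dimensionless coupling constant that Scene 7 explores is the standard fine structure constant:

$$\alpha = \frac{e^2}{4\pi\varepsilon_0 \hbar c} \approx \frac{1}{137.036}$$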
Note: Some text overlap occurs during transitions between dense mathematical content - this is characteristic of unedited one-shot generation maintaining continuous narrative flow without post-production cleanup. The complete mathematical journey from relativistic spacetime through gauge field theory to renormalization demonstrates the power of single-prompt comprehensive visualization.
Generated by: Claude Sonnet 4.5 | Render time: 22 minutes | Output: 4:21 duration, 854×480 @ 15fps | File size: 48.7 MB
ProLIP: Probabilistic Vision-Language Model
Automatic visualization of contrastive learning, uncertainty quantification, and probabilistic embeddings - generated from a single natural language prompt.
GRPO: Group Relative Policy Optimization
A complete visualization of Group Relative Policy Optimization for reinforcement learning - showing policy updates, reward shaping, and gradient flow through neural networks. Generated from a single prompt with zero manual editing.
You give me: "explain quantum field theory"
I give you back: A complete Manim animation showing Minkowski spacetime, QED Lagrangians, Feynman diagrams, renormalization flow - with 2000+ tokens of LaTeX-rich instructions that actually render correctly.
The secret? I don't use training data. I use a Reverse Knowledge Tree that asks "What must I understand BEFORE X?" recursively until hitting foundation concepts, then builds animations from the ground up.
Most systems try to learn patterns from examples. I do the opposite.
Traditional approach:
Simple prompt -> Pattern matching -> Hope for the best
Problems: pattern matching produces generic, incorrect, or broken code.

My approach:

"Explain cosmology"
v
What must I understand BEFORE cosmology?
-> General Relativity
-> Hubble's Law
-> Redshift
-> CMB radiation
v
What must I understand BEFORE General Relativity?
-> Special Relativity
-> Differential Geometry
-> Gravitational Fields
v
What must I understand BEFORE Special Relativity?
-> Galilean Relativity
-> Speed of light
-> Lorentz Transformations
v
[Continue until hitting high school physics...]
v
Build animation from foundation -> target
Result: Every animation builds conceptual understanding layer by layer, naturally creating the verbose prompts that actually work.
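The recursive descent above can be sketched in a few lines. Note that `PREREQS` here is a toy lookup table standing in for the real agent, which asks Claude "what must I understand BEFORE X?" at each step:

```python
# Toy sketch of the Reverse Knowledge Tree descent. PREREQS is a
# hypothetical lookup table standing in for the real agent, which
# queries Claude for prerequisites at each step.
PREREQS = {
    "cosmology": ["general relativity", "hubble's law", "redshift", "cmb radiation"],
    "general relativity": ["special relativity", "differential geometry"],
    "special relativity": ["galilean relativity", "speed of light"],
}

def build_learning_path(concept, seen=None):
    """Return concepts ordered foundation-first, with the target last."""
    seen = set() if seen is None else seen
    if concept in seen:          # avoid revisiting shared prerequisites
        return []
    seen.add(concept)
    path = []
    # Concepts absent from PREREQS are foundations: recursion stops there.
    for prereq in PREREQS.get(concept, []):
        path.extend(build_learning_path(prereq, seen))
    path.append(concept)
    return path

print(build_learning_path("cosmology"))  # foundations first, "cosmology" last
```

The animation script is then generated by walking this list in order, so every scene builds on concepts already introduced.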
Coming Soon: Integration with Nomic Atlas to create a semantic knowledge graph:
See docs/NOMIC_ATLAS_INTEGRATION.md for the complete vision.
Read the full technical explanation: REVERSE_KNOWLEDGE_TREE.md
I've built a 6-agent system powered by Claude Sonnet 4.5 (with a 7th VideoReview agent underway):
Technology: Claude Agent SDK with automatic context management, built-in tools, and MCP integration.
See full architecture: docs/ARCHITECTURE.md
```bash
# Clone repository
git clone https://github.com/HarleyCoops/Math-To-Manim
cd Math-To-Manim

# Install dependencies
pip install -r requirements.txt

# Set up API key
echo "ANTHROPIC_API_KEY=your_key_here" > .env

# Install FFmpeg
# Windows: choco install ffmpeg
# Linux:   sudo apt-get install ffmpeg
# macOS:   brew install ffmpeg
```
```bash
# Launch Gradio interface
python src/app_claude.py
```
Then enter a simple prompt like:
Watch the agents build the knowledge tree and generate the verbose prompt automatically.
I've organized 55+ working examples by topic:
```bash
# Physics - Quantum mechanics
manim -pql examples/physics/quantum/QED.py QEDJourney

# Mathematics - Geometry
manim -pql examples/mathematics/geometry/pythagorean.py PythagoreanScene

# Computer Science - Neural networks
manim -pql examples/computer_science/machine_learning/AlexNet.py AlexNetScene

# Cosmology
manim -pql examples/cosmology/Claude37Cosmic.py CosmicScene
```
Flags:

- `-p` = Preview when done
- `-q` = Quality (`l` low, `m` medium, `h` high, `k` 4K)

Browse all examples: docs/EXAMPLES.md
```
Math-To-Manim/
├── src/                                     # Core agent system
│   ├── agents/
│   │   ├── prerequisite_explorer_claude.py  # Reverse knowledge tree agent
│   │   └── prerequisite_explorer.py         # Legacy implementation
│   ├── app_claude.py                        # Gradio UI (Claude SDK)
│   └── app.py                               # Legacy UI
│
├── examples/                                # 55+ working animations
│   ├── physics/
│   │   ├── quantum/                         # 13 QED/QFT animations
│   │   ├── gravity/                         # Gravitational waves
│   │   ├── nuclear/                         # Atomic structure
│   │   └── particle_physics/                # Electroweak symmetry
│   ├── mathematics/
│   │   ├── geometry/                        # Proofs, 3D shapes
│   │   ├── analysis/                        # Optimal transport, diffusion
│   │   ├── fractals/                        # Fractal patterns
│   │   ├── statistics/                      # Information geometry
│   │   └── trigonometry/                    # Trig identities
│   ├── computer_science/
│   │   ├── machine_learning/                # Neural nets, attention
│   │   ├── algorithms/                      # Gale-Shapley, sorting
│   │   └── spatial_reasoning/               # 3D tests
│   ├── cosmology/                           # Cosmic evolution
│   ├── finance/                             # Option pricing
│   └── misc/                                # Experimental
│
├── docs/                                    # Documentation
│   ├── EXAMPLES.md                          # Complete catalog
│   ├── ARCHITECTURE.md                      # System design
│   ├── MIGRATION_TO_CLAUDE.md               # Claude SDK migration
│   └── TESTING_ARCHITECTURE.md
│
└── tests/                                   # Testing infrastructure
    ├── unit/
    ├── integration/
    └── e2e/
```
Most people write short plain-English prompts. That's exactly why they fail.
"Create an animation showing quantum field theory"
Result: Generic, incorrect, or broken code.
"Begin by slowly fading in a panoramic star field backdrop. As the camera
orients itself, introduce a title reading 'Quantum Field Theory: A Journey
into the Electromagnetic Interaction' in bold glowing text. The title shrinks
and moves to the upper-left corner, making room for a rotating wireframe
representation of 4D Minkowski spacetime. Display the relativistic metric:
$$ds^2 = -c^2 dt^2 + dx^2 + dy^2 + dz^2$$
with each component highlighted in a different hue to emphasize the negative
time component. Zoom into the origin to introduce undulating plane waves in
red (electric field $\vec{E}$) and blue (magnetic field $\vec{B}$),
oscillating perpendicularly. Display Maxwell's equations morphing from
classical vector calculus notation to relativistic four-vector form:
$$\partial_\mu F^{\mu \nu} = \mu_0 J^\nu$$
Animate each transformation by dissolving and reassembling symbols. Then shift
focus to the QED Lagrangian density:
$$\mathcal{L}_{\text{QED}} = \bar{\psi}(i \gamma^\mu D_\mu - m)\psi - \tfrac{1}{4}F_{\mu\nu}F^{\mu\nu}$$
Project this onto a semi-transparent plane with each symbol color-coded:
Dirac spinor $\psi$ in orange, covariant derivative $D_\mu$ in green,
gamma matrices $\gamma^\mu$ in bright teal, field strength tensor
$F_{\mu\nu}$ in gold. Let terms pulse to indicate dynamic fields..."
[...continues for 2000+ tokens]
Result: Perfect animations with correct LaTeX, camera movements, colors, and timing.
My agents generate these verbose prompts automatically by walking the knowledge tree.
By starting with high school concepts and building up, the animations naturally explain prerequisites before advanced topics. This creates coherent narrative flow.
When you write formulas in LaTeX, you're forced to be mathematically precise. This eliminates the ambiguity that breaks code generation.
Specifying exact camera movements, colors, timings, and transitions gives the LLM unambiguous instructions. "Show quantum fields" is vague. "Display red undulating waves labeled $\vec{E}$ oscillating perpendicular to blue waves labeled $\vec{B}$" is not.
Claude Sonnet 4.5's reasoning capabilities handle the recursive prerequisite discovery. I don't need training datasets - just well-structured prompts.
If the LLM generates broken code, I can pass it back with the error and ask for "verbose explanations." This often fixes LaTeX rendering issues automatically.
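That error-feedback loop can be sketched as follows. Here `call_llm` and `render_scene` are hypothetical stand-ins (not the project's actual API) for the Claude SDK call and the Manim render step:

```python
# Hypothetical repair loop: on render failure, feed the broken code and
# its error back to the LLM and ask for verbose explanations, which
# often fixes LaTeX rendering issues automatically.
def generate_with_repair(prompt, call_llm, render_scene, max_attempts=3):
    """call_llm(prompt) -> code; render_scene(code) -> (ok, error)."""
    code = call_llm(prompt)
    for _ in range(max_attempts):
        ok, error = render_scene(code)
        if ok:
            return code
        code = call_llm(
            f"{prompt}\n\nThis code failed:\n{code}\n"
            f"Error:\n{error}\n"
            "Rewrite it with verbose explanations and corrected LaTeX."
        )
    raise RuntimeError("could not produce a working scene")
```

The key design choice is that the retry prompt includes both the broken code and the exact error text, so the model can target the specific LaTeX construct that failed.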
Short Term (1-2 months):
Medium Term (3-6 months):
4. Nomic Atlas Integration: Semantic knowledge graph for instant prerequisite discovery [*] NEW
Long Term (6-12 months):
5. Community Platform: Public knowledge graph, animation gallery, learning path sharing
6. Fine-Tuning Experiments: RL on successful verbose prompts
See all examples: docs/EXAMPLES.md
I've used multiple AI models to generate examples:
Each model brings unique perspectives, catching edge cases others miss.
I can generate both animations and companion LaTeX study notes:
Just pass any working scene back to the LLM and ask for "verbose explanations fully rendered as LaTeX study notes" - you'll get a complete PDF-ready document.
My system handles concepts at every level, from high school foundations to graduate-level physics.
The knowledge tree approach automatically adjusts depth based on the target concept.
Most one-shot animation attempts fail because of LaTeX syntax errors.
My solution: The verbose prompts explicitly show every LaTeX formula that will be rendered on screen, formatted correctly. The agents verify mathematical notation during enrichment.
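A cheap version of that enrichment-time check can be sketched as a brace-balance and dangling-script test (a toy illustration, not the agents' actual validator, which does deeper verification):

```python
import re

def quick_latex_check(formula: str) -> bool:
    """Toy sanity check for a LaTeX formula: balanced braces and no
    dangling ^ or _ with nothing after them."""
    depth = 0
    for ch in formula:
        if ch == "{":
            depth += 1
        elif ch == "}":
            depth -= 1
            if depth < 0:        # closing brace with no opener
                return False
    if depth != 0:               # unclosed brace
        return False
    # reject a ^ or _ followed by end-of-string or another script marker
    return not re.search(r"[\^_]\s*($|[\^_])", formula)

assert quick_latex_check(r"\partial_\mu F^{\mu \nu} = \mu_0 J^\nu")
assert not quick_latex_check(r"F^{\mu\nu")
```

Running every on-screen formula through even a check this simple catches the unbalanced-brace errors that account for many failed renders.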
"Show a quantum field" is too vague - what colors? What motion? From which angle?
My solution: The VisualDesigner agent specifies exact camera movements, color schemes, timing, and transitions. No ambiguity.
Jumping straight to QED without explaining special relativity first.
My solution: The PrerequisiteExplorer agent automatically discovers what concepts must be explained first, ensuring logical narrative flow.
Using different notation for the same quantity in different scenes.
My solution: The MathematicalEnricher agent maintains consistent notation across the entire knowledge tree.
See full requirements: requirements.txt
- Low quality (`-ql`): 10-30 seconds per scene
- High quality (`-qh`): 1-5 minutes per scene
- 4K quality (`-qk`): 5-20 minutes per scene

Times vary based on animation complexity.
I switched from DeepSeek to Claude Sonnet 4.5 + Claude Agent SDK in October 2025 because:
See migration details: docs/MIGRATION_TO_CLAUDE.md
I welcome contributions! Here's how you can help:
See guidelines: CONTRIBUTING.md
```python
# 1. Create your animation in the appropriate category
#    examples/physics/quantum/my_new_animation.py

# 2. Follow the naming convention
#    Use descriptive names: schrodinger_equation.py, not scene1.py

# 3. Add a docstring explaining the concept
"""
Visualization of the Schrödinger equation in quantum mechanics.
Shows wave function evolution, probability density, and energy eigenstates.
"""

# 4. Test it renders correctly
#    manim -pql examples/physics/quantum/my_new_animation.py MyScene

# 5. Submit a pull request
```
Q: Do I need GPU for rendering?
A: No, Manim runs on CPU. GPU just speeds things up.
Q: Can I use DeepSeek instead of Claude?
A: Yes, the old implementation is in src/agents/prerequisite_explorer.py
Q: How do I fix LaTeX rendering errors?
A: Pass the error back to the LLM with the broken code and ask for corrections.
Q: What if my animation doesn't work?
A: Check the examples/ directory for working references in your topic area.
Q: Can I use this for commercial projects?
A: Yes, MIT license. Attribution appreciated.
Q: How accurate are the animations?
A: Very accurate - I use LaTeX for all mathematical notation and validate formulas during enrichment.
MIT License - See LICENSE
Star this repo if you find it useful! It helps others discover the project.
Built with recursive reasoning, not training data. Powered by Claude Sonnet 4.5.