<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[Rambles of a Forgetful Geek]]></title><description><![CDATA[Just a curious human jotting down what I learn about robots, code, and the chaos in between — mostly for future-me, who will definitely forget how any of this w]]></description><link>https://rambles.saifsidhik.page</link><image><url>https://cdn.hashnode.com/res/hashnode/image/upload/v1761054009846/100f7236-1242-47be-a222-fecd85f52533.png</url><title>Rambles of a Forgetful Geek</title><link>https://rambles.saifsidhik.page</link></image><generator>RSS for Node</generator><lastBuildDate>Sat, 16 May 2026 08:12:39 GMT</lastBuildDate><atom:link href="https://rambles.saifsidhik.page/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[Differentiable Physics: Teaching Simulators to Think (and Feel Derivatives)]]></title><description><![CDATA[Let’s say you’re trying to teach a robot to pour coffee.
You run a simulation — and instead of pouring into the mug, it confidently dumps hot virtual espresso onto the table. Not ideal.
With a normal simulator, you’d shrug, tweak some parameters, hit...]]></description><link>https://rambles.saifsidhik.page/differentiable-physics-teaching-simulators-to-think</link><guid isPermaLink="true">https://rambles.saifsidhik.page/differentiable-physics-teaching-simulators-to-think</guid><category><![CDATA[differentiable simulators]]></category><category><![CDATA[robotics]]></category><category><![CDATA[physics simulation]]></category><dc:creator><![CDATA[Saif Sidhik]]></dc:creator><pubDate>Tue, 21 Oct 2025 13:35:20 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1761054762172/f503a8b8-e5bc-4c15-8734-63def62ab321.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Let’s say you’re trying to teach a robot to pour coffee.</p>
<p>You run a simulation — and instead of pouring <em>into</em> the mug, it confidently dumps hot virtual espresso onto the table. Not ideal.</p>
<p>With a normal simulator, you’d shrug, tweak some parameters, hit run again, and hope for the best.<br />In a <strong>differentiable simulator</strong>, though, something magical happens: the simulator tells you <em>exactly</em> how to tweak those parameters to make the pour better next time.</p>
<p>Welcome to the world of <strong>differentiable physics</strong> — where simulations don’t just predict the future, they <em>learn</em> from it.</p>
<hr />
<h2 id="heading-so-what-even-is-differentiable-physics">So What Even <em>Is</em> Differentiable Physics?</h2>
<p>At its core, a normal physics simulator answers questions like:</p>
<blockquote>
<p>“Given a mug of coffee, some gravity, and a bad pouring action, where will the coffee land?”</p>
</blockquote>
<p>It’s a one-way street: you give it inputs (forces, parameters, states), and it gives you outputs (positions, velocities, energy, etc.).<br />But if you then ask,</p>
<blockquote>
<p>“How should I <em>change</em> the pouring action to make the coffee land <em>in the cup</em> instead?”</p>
</blockquote>
<p>the simulator just stares at you blankly. It doesn’t do “why”.</p>
<p>Differentiable physics fixes that by making the simulator <strong>mathematically transparent</strong> — it not only runs the physics forward but also tells you <em>how sensitive</em> the results are to each input. In other words, it gives you <strong>gradients</strong>.</p>
<p>So instead of just knowing <em>what happened</em>, you know <em>how to make it happen better next time</em>.</p>
<hr />
<h2 id="heading-a-bit-of-math-without-the-trauma">A Bit of Math Without the Trauma</h2>
<p>A normal simulator computes:</p>
<p>$$x_{t+1} = f(x_t, u_t, p)$$</p><p>where \(x_t\) is your system’s state (like position and velocity) at time \(t\), \(u_t\) is the control input (like motor torque), and \(p\) are physical parameters (like mass, gravity, and viscosity).</p>
<p>A <em>differentiable</em> simulator also gives you:</p>
<p>$$\frac{\partial x_{t+1}}{\partial u_t}, \quad \frac{\partial x_{t+1}}{\partial p}$$</p><p>That fancy notation basically means:</p>
<blockquote>
<p>“If I nudge this parameter by a smidge, how much does the outcome change?”</p>
</blockquote>
<p>This means we can see how tiny changes in control or parameters affect the next state — the same idea that makes neural networks trainable via backpropagation. This is <em>gold</em> for optimisation. Now your simulator can be plugged straight into standard gradient-based learning tools, and <strong>the physics becomes <em>trainable</em></strong>.</p>
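<p>To make that concrete, here is a minimal sketch (all names like <code>step</code> and <code>dstep_du</code> are my own, not from any real engine) of a one-step simulator and its sensitivity to the control input:</p>

```python
# Toy 1-D point mass: state x = (position, velocity), control u = applied
# force, parameter p = mass. Purely illustrative, not a real physics engine.

def step(pos, vel, u, mass, g=-9.81, dt=0.01):
    """One explicit Euler step of x_{t+1} = f(x_t, u_t, p)."""
    acc = g + u / mass              # Newton's second law plus gravity
    new_pos = pos + dt * vel        # position advances with the old velocity
    new_vel = vel + dt * acc
    return new_pos, new_vel

def dstep_du(mass, dt=0.01):
    """Analytic sensitivity d(x_{t+1})/du for the step above."""
    return (0.0, dt / mass)         # only velocity feels u within one step

# Cross-check the hand-written derivative with a finite difference:
eps = 1e-6
p0, v0 = step(0.0, 0.0, 1.0, mass=2.0)
p1, v1 = step(0.0, 0.0, 1.0 + eps, mass=2.0)
fd = ((p1 - p0) / eps, (v1 - v0) / eps)
print(dstep_du(2.0), fd)            # the two should agree
```

<p>A real differentiable simulator computes exactly these sensitivities — just for thousands of coupled states, automatically.</p>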
<h3 id="heading-why-gradients-change-everything">Why Gradients Change Everything</h3>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1761046132650/944062af-5f3c-4929-86e3-a1f4dfdb6410.png" alt class="image--center mx-auto" /></p>
<p><em>(Image: A simple visual explaining how differentiable physics works. The robot’s action \(u\) leads to a final state \(x_T\) — like pouring coffee into a cup — which produces a loss \(L\) if the outcome isn’t perfect. Gradients \(\nabla_{u} L\) flow backward, telling the simulator how to adjust the action to improve the next pour, such that the loss reduces.)</em></p>
<p>The beauty of differentiable physics lies in <strong>optimisation</strong>.</p>
<p>Say you have a loss function (a measure of how well the robot poured the coffee):</p>
<p>$$L = \| x_T - x^* \|^2$$</p><p>where \(x_T\) is the final simulated state, and \(x^*\) is your goal (like a mug full of coffee poured by your robot).</p>
<p>With differentiable physics, you can compute:</p>
<p>$$\nabla_{u} L = \frac{\partial L}{\partial x_T} \frac{\partial x_T}{\partial u}$$</p><p>which in simple terms says something like, “If you’d tilted the cup <em>this much more</em>, you’d have been <em>this much closer</em> to perfect”. You can use this to directly adjust your control inputs \(u\) using gradient descent — <em>just like training a neural network</em>.</p>
<p>That’s the magic of differentiable physics — it turns physical cause and effect into <em>gradients</em> that machines can learn from. No black-box reinforcement learning, no random trials. The simulator itself becomes a learning machine.</p>
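<p>As a hedged sketch of that loop (a made-up 1-D “pour”, not a real robot setup — <code>rollout</code> and <code>grad</code> are invented names), plain gradient descent through a simulation looks like this:</p>

```python
# Optimise a scalar control u (launch velocity) so the final state x_T lands
# on a target x*, using the analytic gradient dL/du = 2 (x_T - x*) dx_T/du.

def rollout(u, steps=100, dt=0.01, g=-9.81):
    """Simulate a 1-D particle launched with velocity u; return final position."""
    pos, vel = 0.0, u
    for _ in range(steps):
        pos += dt * vel
        vel += dt * g
    return pos

def grad(u, target, steps=100, dt=0.01):
    # x_T is linear in u with slope steps*dt, so the chain rule collapses to:
    return 2.0 * (rollout(u, steps, dt) - target) * steps * dt

target = 1.0
u = 0.0                        # start with a terrible pour
for _ in range(200):           # plain gradient descent, just like training a net
    u -= 0.1 * grad(u, target)

print(u, rollout(u))           # rollout(u) converges onto the target
```

<p>Two hundred cheap gradient steps, and the “pour” lands where we want — no random trials required.</p>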
<hr />
<h2 id="heading-under-the-hood-differentiating-through-physics">Under the Hood: Differentiating Through Physics</h2>
<p>Okay, so you might be wondering — how do you <em>actually</em> take derivatives of something as messy as a physics simulator? It’s not like you can just open a textbook and find a neat formula for “∂coffee/∂tilt-speed”.</p>
<p>No, it’s a bit trickier. Physical simulations involve:</p>
<ul>
<li><p><strong>Discontinuities</strong> (like collisions or contact)</p>
</li>
<li><p><strong>Integrators</strong> (like Euler or Runge-Kutta)</p>
</li>
<li><p><strong>Constraints</strong> (like joints or limits)</p>
</li>
</ul>
<p>All of which are less than helpful in making things differentiable. The secret lies in making <strong>every step of the simulation play nicely with gradients.</strong></p>
<h3 id="heading-step-1-make-the-simulator-smooth-ish">Step 1: Make the Simulator Smooth(-ish)</h3>
<p>Real-world physics can be <em>messy</em>. Think about pouring coffee — if your mug hits the table, it’s an instant, sharp collision. Mathematically, that’s what we call a <strong>discontinuity</strong> — the motion goes from “moving” to “stopped” in no time. Gradients, which measure <em>how smoothly things change</em>, hate that kind of drama.</p>
<p>So differentiable simulators try to <strong>smooth out</strong> those rough edges a little:</p>
<ul>
<li><p>Instead of saying <em>“the mug hits the table at exactly this instant,”</em> they use a <strong>soft contact model</strong>, as if the mug and table have a tiny cushion of air. The mug gently squishes in, making the transition continuous.</p>
</li>
<li><p>Friction — normally a harsh stick-then-slide behavior — is modeled as a smooth curve.</p>
</li>
</ul>
<p>That way, when you ask <em>“how would a slightly softer pour or different tilt change the splash height?”</em> the math stays calm — no infinities, no meltdowns, just gradients that flow nicely.</p>
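<p>Here is one common smoothing trick, sketched in toy form (the softplus shape is a standard choice but by no means the only one; <code>hard_contact</code> and <code>soft_contact</code> are my names):</p>

```python
import math

# A "hard" penalty contact force vs. a softplus-smoothed version.
# gap > 0 means separated; gap < 0 means penetrating.

def hard_contact(gap, k=1000.0):
    """Force that switches on abruptly at gap == 0 — a kink gradients hate."""
    return k * max(0.0, -gap)

def soft_contact(gap, k=1000.0, eps=0.01):
    """Softplus smoothing: differentiable everywhere, ~hard for |gap| >> eps."""
    return k * eps * math.log1p(math.exp(-gap / eps))

# Far from contact the two agree; near gap = 0 the soft model eases in.
for gap in (0.1, 0.0, -0.1):
    print(f"gap={gap:+.2f}  hard={hard_contact(gap):8.2f}  soft={soft_contact(gap):8.3f}")
```

<p>The cushion parameter <code>eps</code> trades physical sharpness for gradient quality — smaller is more realistic, larger is friendlier to optimisation.</p>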
<h3 id="heading-step-2-use-integrators-that-play-nice-with-gradients">Step 2: Use Integrators That Play Nice with Gradients</h3>
<p>Most physics engines update motion step-by-step with integrators like Euler or Runge–Kutta.<br />A differentiable simulator, though, does better: <strong>it can efficiently compute how each state depends on the previous one</strong>. It’s like being able to know how every small change in wrist angle, mug tilt, or pour rate affects the final coffee level.</p>
<p>So instead of just computing:</p>
<p>$$x_{t+1} = f(x_t, u_t)$$</p><p>they can quietly compute and record <em>how sensitive</em> \(x_{t+1} \) is to every variable.<br />When you hit “backpropagate,” those sensitivities (the “Jacobians”) chain together backwards in time — like unrolling the tape of the simulation. (“If I had poured <em>just a little slower</em> here, I’d have hit the perfect amount.”)</p>
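<p>The tape-and-chain idea can be written out by hand for a scalar system (a toy sketch, with made-up names <code>forward</code> and <code>backward</code>; real engines record the same per-step Jacobians, just bigger):</p>

```python
# Backprop through time by hand. Dynamics: x_{t+1} = x_t + dt*(u - c*x_t).
# Each step's partials a_t = dx_{t+1}/dx_t and b_t = dx_{t+1}/du go on a
# "tape", then chain backwards to give dL/du for L = (x_T - target)^2.

def forward(u, x0=0.0, c=0.5, dt=0.1, steps=50):
    x, tape = x0, []
    for _ in range(steps):
        a = 1.0 - dt * c          # dx_{t+1}/dx_t
        b = dt                    # dx_{t+1}/du
        tape.append((a, b))
        x = x + dt * (u - c * x)
    return x, tape

def backward(xT, tape, target):
    dL_dx = 2.0 * (xT - target)   # seed: dL/dx_T
    dL_du = 0.0
    for a, b in reversed(tape):   # unroll the tape back through time
        dL_du += dL_dx * b        # gradient flowing into u at this step
        dL_dx *= a                # ...and back into the previous state
    return dL_du

xT, tape = forward(u=2.0)
g_analytic = backward(xT, tape, target=1.0)

# Sanity check against a finite difference:
eps = 1e-6
g_fd = ((forward(2.0 + eps)[0] - 1.0) ** 2 - (xT - 1.0) ** 2) / eps
print(g_analytic, g_fd)
```

<p>The backward loop is literally the simulation tape played in reverse — which is why people describe it as “unrolling” the rollout.</p>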
<h3 id="heading-step-3-let-automatic-differentiation-do-the-heavy-lifting">Step 3: Let Automatic Differentiation Do the Heavy Lifting</h3>
<p>Thanks to frameworks like <a target="_blank" href="https://docs.jax.dev/en/latest/automatic-differentiation.html"><strong>JAX</strong></a> or <a target="_blank" href="https://docs.taichi-lang.org/docs/differentiable_programming"><strong>Taichi</strong></a>, you don’t have to do calculus by hand.<br />Every operation — addition, multiplication, even matrix math — automatically keeps track of its derivative.</p>
<p>So when you say “simulate 100 time steps and minimize the spill,” the system quietly builds a <strong>computational graph</strong> of everything that happened.<br />Then, when you call <code>.backward()</code> or <code>.grad()</code>, the gradients flow through that graph like magic.</p>
<p>It’s physics, but with built-in self-reflection.</p>
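<p>To show what “every operation keeps track of its derivative” means, here is autodiff in miniature — a forward-mode dual-number class of my own invention. Real frameworks like JAX and Taichi are vastly more sophisticated, but the principle is the same:</p>

```python
# A number paired with its derivative; arithmetic applies the chain rule.
# Purely illustrative — not how JAX or Taichi are actually implemented.

class Dual:
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val, self.dot * o.val + self.val * o.dot)
    __rmul__ = __mul__

def simulate(u):
    """Three Euler steps of x' = u - x, written once; Dual does the calculus."""
    x, dt = Dual(0.0), 0.1
    for _ in range(3):
        x = x + dt * (u + (-1.0) * x)
    return x

g = simulate(Dual(2.0, 1.0))   # seeding du = 1 tracks dx/du through the physics
print(g.val, g.dot)            # final state AND its sensitivity to u
```

<p>The simulation code never mentions derivatives — the number type carries them along for free. That is the core trick behind differentiable programming.</p>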
<h3 id="heading-putting-it-all-together">Putting It All Together</h3>
<p>At the end of the day, “differentiating through physics” just means letting your simulator understand <em>how its outputs depend on its inputs</em>.</p>
<p>So when your robot pours coffee too fast, the simulator doesn’t just say “oops, that spilled” — it also says “if you’d tilted 5° less, you’d have nailed it.”<br />That’s the power of gradients flowing through time, through equations, and through your robot’s tiny virtual neurons.</p>
<hr />
<h2 id="heading-when-physics-meets-ai">When Physics Meets AI</h2>
<p>For years, physics and AI were like two geniuses at opposite ends of the classroom — one obsessed with equations, the other with data. Both brilliant, both a little smug, and both pretending the other didn’t exist. Now, thanks to <strong>differentiable physics</strong>, they’re finally collaborating — and it turns out they make a <em>really</em> good team.</p>
<p><strong>Applications Beyond Coffee</strong></p>
<p>Differentiable physics isn’t just about robotic baristas. It’s reshaping how we approach <em>any</em> problem involving physics and optimisation.</p>
<ul>
<li><p>Robots that learn faster: Instead of brute-forcing policies through endless reinforcement learning trials (robot pouring coffee all over your simulated coffee table), you can <em>differentiate</em> through the simulation. That gives you a direct signal for how to improve control — no random trial-and-error needed.</p>
</li>
<li><p>Easier system identification: Got a simulation where the robot doesn’t quite behave like in the real world? Just let the gradients flow. The simulator can automatically tweak its parameters (like joint damping or motor lag) until it matches reality. It’s like the simulator saying, “Oh, my bad — I thought we were on the moon. Let’s adjust the gravity parameter.”</p>
</li>
<li><p>Aiding Robot Design: Want to design a soft robot that slithers gracefully like an eel? Or a structure that folds into a flower when heated? You can specify your <em>goal</em> and let the simulator backpropagate through physics to find the perfect design. Welcome to the age of <strong>inverse design</strong>.</p>
</li>
<li><p>Differentiable physics opens the door to accelerate research and improve tools in other fields as well, such as <strong>graphics and animation</strong> (inverse design for realistic physical effects, adjust material motions to match reality, etc.), and <strong>material &amp; fluid sciences</strong> (optimise material structures, fluid flows, molecular configurations etc.).</p>
</li>
</ul>
<p>When physics meets AI, we move from <strong>modeling the world</strong> to <strong>optimising it</strong>. Differentiable physics blurs the line between simulation and learning. Instead of treating simulators as “truth machines” that tell us what happens, we can now <em>teach them what we want to happen</em>.</p>
<p>This opens up an era of <strong>end-to-end trainable physics systems</strong>, where:</p>
<ul>
<li><p>Neural networks predict control inputs.</p>
</li>
<li><p>Simulators propagate both states <em>and</em> gradients.</p>
</li>
<li><p>The whole loop optimises itself.</p>
</li>
</ul>
<p>It’s not just “simulate and observe” anymore — it’s: “Simulate, differentiate, and <em>improve</em>.”</p>
<p>Everywhere there’s a physical process — from how a robot walks, to how a drone flies, to how your robot pours coffee — gradients can make it smarter, faster, and more efficient. We’re basically giving simulations a sixth sense — the sense of how to improve, and this is a game-changer. Instead of fumbling around for millions of tries like a toddler learning to walk, a robot can use gradient feedback to master a task in just a few hundred runs.</p>
<p>Take Google’s <a target="_blank" href="https://arxiv.org/abs/2106.13281"><strong>Brax</strong></a> simulator, for instance: it uses differentiable physics on TPU clusters to train walking, running, and jumping robots thousands of times faster than traditional RL. Or MIT’s <a target="_blank" href="https://arxiv.org/abs/1910.00935">DiffTaichi</a>, which simulates elastic objects “<em>188x faster</em> than TensorFlow implementations” — making it possible for neural controllers to optimise in mere <em>tens</em> of iterations.</p>
<p><strong>What’s next?</strong><br />As this field matures, we will see more simulators that don’t just describe — they <em>help build</em>.</p>
<p>Imagine simulators that jointly optimize a robot’s <strong>shape, materials, and control system</strong>. Or digital twins that continuously fine-tune themselves based on sensor data.</p>
<p>VFX artists and game designers might use self-learning physics engines to automatically match animation physics to real footage. Engineers could design, test, and perfect structures — all before a single bolt is turned in real life.</p>
<p>In short, <strong>differentiable physics</strong> can turn simulation from a descriptive tool into a creative collaborator — a kind of physics buddy that not only explains reality but also helps <em>improve</em> it.</p>
<hr />
<h2 id="heading-tools-of-the-trade">Tools of the Trade</h2>
<p>If you’re curious about differentiable physics but don’t want to wrestle with equations, you’re in luck — there are several open-source frameworks that handle the heavy math for you while still letting you play with real simulations:</p>
<ul>
<li><p><a target="_blank" href="https://github.com/taichi-dev/difftaichi"><strong>DiffTaichi</strong></a> – Super fast, highly expressive, and written by people who clearly love math.</p>
</li>
<li><p><a target="_blank" href="https://github.com/google/brax"><strong>Brax</strong></a> – Google’s JAX-based simulator for reinforcement learning with physics gradients baked in.</p>
</li>
<li><p><a target="_blank" href="https://github.com/erwincoumans/tiny-differentiable-simulator"><strong>Tiny Differentiable Simulator</strong></a> <strong>(TDS)</strong> – The name says it all: small, elegant, and surprisingly powerful.</p>
</li>
<li><p><a target="_blank" href="https://github.com/keenon/nimblephysics"><strong>Nimble Physics</strong></a> – Great for robotics, built to play nicely with neural nets.</p>
</li>
<li><p><a target="_blank" href="https://github.com/YilingQiao/diffsim"><strong>DiffSim</strong></a><strong>,</strong> <a target="_blank" href="https://github.com/omegaiota/DiffCloth"><strong>DiffCloth</strong></a> – For when you want your simulations to flow, bend, or squish — literally.</p>
</li>
<li><p><a target="_blank" href="https://newton-physics.github.io/newton/guide/overview.html"><strong>Newton</strong></a> – A clean, modular differentiable physics framework with a strong focus on usability and integration with deep learning workflows.</p>
</li>
</ul>
<p>Each of these frameworks gives you a taste of differentiable physics without needing a PhD in calculus — just curiosity and a bit of code.</p>
<hr />
<h2 id="heading-further-reading">Further Reading</h2>
<ul>
<li><p><a target="_blank" href="https://arxiv.org/abs/2407.05560"><em>A Review of Differentiable Simulators</em> (Newbury <em>et al.</em>, 2024)</a> – a very accessible and comprehensive survey paper on differentiable simulators.</p>
</li>
<li><p><a target="_blank" href="https://physicsbaseddeeplearning.org/diffphys.html">“Introduction to Differentiable Physics”</a> (Physics-Based Deep Learning) – for a more technical understanding of the topic, with practical examples.</p>
</li>
</ul>
<hr />
]]></content:encoded></item></channel></rss>