**Introductory Note:** This article was originally written for a course that is
part of my physics degree. The original article was handed in as a PDF and
typeset using $\LaTeX$. The text relies heavily on footnotes, which are
slightly^{(but only slightly)} easier to read in the original PDF.

# Welcoming Words

Let me start by asking a question: How often have you invoked conservation laws
during your career as a
late-night-problem-sheet-specialist^{1}? If the answer
is loads, you may be interested in what I have to tell
you^{2}. At the end of this article, you will learn
about something called Noether’s Theorem. But before we get there, a brief
detour to something called Calculus of Variations will be necessary [1].
However, I can assure you, my dear reader, that this is a subject of utmost
interest and that this little endeavor will be well worth it.

# Calculus of Variations (a detour)

Imagine a plane. In this plane you find yourself at a point, which is very far
removed from the point where you would like to be^{3}.
Now a natural question arises: What is the best way to get
there^{4}?

Here, a keen observer might remark: "That's easy, that's a straight line!". And
indeed this observer would not be wrong^{5}. But can
he prove this fact? And more interestingly: What if the plane is not a plane,
but rather a (more) realistic place with mountains and valleys and other such
inconveniences?

This is precisely the kind of question that calculus of variations allows us to
answer. The problem we face is minimizing a functional (a function of
functions^{6}). In our case, this is the length of the
path from where we are to where we want to go, as a function of the function
that describes our path.

To recklessly include an equation: Our problem is to find a path $y(x)$ such that

$$ \int_{a}^{b} dx \sqrt{1 + (y^{\prime})^2} = \int_{a}^{b} dx F(x, y, y^{\prime}), $$

which is the distance we would have to walk, is minimized.
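To get a feel for this functional before doing anything clever, here is a small numerical sketch (the endpoints $(0,0)$ and $(1,1)$ and the particular wiggly detour are my own choices for illustration, not part of the original problem):

```python
import numpy as np

# Discretize paths from (0, 0) to (1, 1) and compare their lengths.
x = np.linspace(0.0, 1.0, 10_001)

def path_length(y):
    # Discretized arc length: sum over segments of sqrt(dx^2 + dy^2),
    # i.e. a Riemann sum for the integral of sqrt(1 + y'^2) dx.
    return float(np.sum(np.sqrt(np.diff(x) ** 2 + np.diff(y) ** 2)))

straight = x                               # the straight line y = x
bumpy = x + 0.1 * np.sin(4 * np.pi * x)    # same endpoints, extra bumps

print(path_length(straight))  # ~1.41421, i.e. sqrt(2)
print(path_length(bumpy))     # strictly larger
```

Any bumps you add make the path longer, which is reassuring, but it is not a proof. For that we need the machinery below.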

This seems hard. Where would we even start^{7}? Well
in this case, we start with some wishful thinking: What if we already knew the
solution $y(x)$? This would indeed be very nice, but how does it help us? Hint:
This is where *variations* come in. What if we now *vary* this solution, by
adding on a new, completely arbitrary function $\epsilon \times
\eta(x)$^{8}?

With this approach, we find that $y(x)$ has to satisfy the Euler-Lagrange
Equation^{9} [3]:

$$ \frac{\partial F}{\partial y} - \frac{d}{dx} \frac{\partial F}{\partial y^{\prime}} = 0. $$

Showing this is left to the reader as an
exercise^{10}, and can be done using integration by
parts^{11}.
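If you would rather let a computer do the integration by parts, here is a little `sympy` sketch (the straight-line ansatz $y = ax + b$ is my own addition, purely for checking the keen observer's claim):

```python
from sympy import Function, symbols, sqrt
from sympy.calculus.euler import euler_equations

x, a, b = symbols('x a b')
y = Function('y')

# Arc-length integrand F(x, y, y') = sqrt(1 + y'^2)
F = sqrt(1 + y(x).diff(x) ** 2)

# euler_equations builds dF/dy - d/dx (dF/dy') = 0 for us
eq = euler_equations(F, y(x), x)[0]

# Substitute the straight line y(x) = a*x + b and evaluate the
# derivatives; the equation reduces to 0 = 0, i.e. sympy's True
check = eq.subs(y(x), a * x + b).doit()
print(check)
```

This prints `True`: straight lines do satisfy the Euler-Lagrange Equation for the arc-length functional, so our keen observer is vindicated.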

Now, solving tricky differential equations like this is not something a lot of people like to do. And while this equation is extremely important, it is not what I want you to take away from reading this article. After all, this is supposed to be a fun read, so I'll take the liberty of leaving things like this to the Mathematical Methods lecturer.

Instead, I want to focus on the idea of what a variation is, as something quite similar will turn up for Noether's Theorem. Visually, the idea is to draw the solution (in our very nice case a straight line), and then to make it more fuzzy, by adding extra bumps along the line, but without exactly thinking about what kind of bumps we are adding.

After doing this, we then proceed to squeeze these bumps, until we get back to $y(x)$. If we had actually known the solution, doing this would be quite pointless and tedious. But since we didn't actually know the solution, we managed to learn it using this sneaky method.

# Hamilton's Principle (the dry bit)

Chances are, you heard of a guy called Newton^{12}.
And since you study physics, you have probably also heard of his three laws.
Now, what if I told you that, instead of fiddling with those, there is a much
nicer^{13} way of doing things?

Enter: The principle of least action [1]. It says that the action of the system is stationary, which a theorist would write as:

$$ \delta \mathcal{A}[y, \dot{y}, t] = \delta \int_{t_0}^{t_1} L(y, \dot{y}, t) \text{ } dt = 0. $$

This is a fancy way of saying that the path which makes the functional
$\mathcal{A}$ stationary has to obey the Euler-Lagrange Equation from
before^{14} (with $F \to L$ and $x \to t$).

Now you may well wonder what this $L$ thing is, and that is indeed a formidable question. For our purposes, we will just define it to be [3]:

$$ L = E_\text{Kin.} - E_\text{Pot.}, $$

the difference between the kinetic and potential energy in our system. $L$ is
called the system's Lagrangian. The full story is, of course, much richer and
much more mathematically involved than this
definition^{15}.

Now I state, without proof^{16}, the following: Any
motion of a system $y(t)$ that obeys the principle of least action obeys
Newton's laws^{17} [1, 3].
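For the skeptics, here is a one-line sketch of why, for a single coordinate with $L = \frac{1}{2}m\dot{y}^2 - E_\text{Pot.}(y)$ (the specific Lagrangian is my choice of example). The Euler-Lagrange Equation from before, with $F \to L$ and $x \to t$, reads:

$$ \frac{\partial L}{\partial y} - \frac{d}{dt} \frac{\partial L}{\partial \dot{y}} = -\frac{\partial E_\text{Pot.}}{\partial y} - \frac{d}{dt}\left(m\dot{y}\right) = F - m\ddot{y} = 0, $$

which is exactly Newton's second law, $F = m\ddot{y}$, with the force given by minus the gradient of the potential.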

# Noether's Theorem (finally, the grand reveal)

We have now covered the groundwork necessary for introducing the key ingredient
of Noether's Theorem: Symmetry variations^{18}.

Symmetry variations are like the variations in the calculus of variations, but with two important differences:

- They are not required to vanish at the boundary^{19}.
- The function we called $\eta$ before now has a special form.

We define, for the continuous coordinate transformation $y(t) \rightarrow y^\prime(t) = f(y, \dot{y})$, the symmetry variation:

$$ \delta_S y(t) = y^\prime(t) - y(t), $$

as this difference tends to $0$ [1].

Armed with this nifty definition^{20}, let me show
you something magical: Consider the case where the action is symmetric under a
continuous transformation of variables (like translation). Then we have:

$$ \delta_S \mathcal{A} = 0. $$

Now we can state Noether's Theorem: If the action is symmetric under a
continuous transformation, we get a constant of
motion^{21}. And on top of knowing that there *is* a
conserved quantity, we can also directly write down *what* this quantity is.

Let me take you through what this means
mathematically^{22}:

$$ \delta_S \mathcal{A} = \delta_S \int_{t_0}^{t_1} L(y, \dot{y}, t) \text{ } dt $$

$$ = \int_{t_0}^{t_1} \frac{\partial L}{\partial y} \delta_S(y) + \frac{\partial L}{\partial \dot{y}} \delta_S(\dot{y}) \text{ } dt $$

$$ = \left[ \frac{\partial L}{\partial \dot{y}} \delta_S(y) \right]_{t_0}^{t_1} + \int_{t_0}^{t_1} \left(\frac{\partial L}{\partial y} - \frac{d}{dt}\frac{\partial L}{\partial \dot{y}}\right) \delta_S(y) \text{ } dt $$

$$ = 0. $$

We now recognize the Euler-Lagrange Equation under the integral
sign^{23}, and
remember^{24} that the start and end times $t_0$
and $t_1$ are arbitrary, to find:

$$ \frac{\partial L}{\partial \dot{y}} \delta_S(y) = \text{Constant}, $$

which is Noether's Equation^{25}!

# Conservation of Momentum (a much-needed example)

This probably looks very abstract, so we will consider the following example: Take a system with constant potential $E_\text{Pot.}(y) = V$. Then, consider continuously translating the system with the transformation $y(t) \rightarrow y^\prime(t) = y(t) + \epsilon$, where $\epsilon \in \mathbb{R}$ is an arbitrary constant, tending to $0$.

For the Lagrangian, we have $L = \frac{1}{2}m\dot{y}^2 - V$, where all the symbols have their usual meaning. Since $L$ does not depend on $y$, the symmetry variation is $\delta_S L = 0$, and thus we find for the action $\delta_S \mathcal{A} = 0$. Finally, invoking Noether's Theorem, we know:

$$ \frac{\partial L}{\partial \dot{y}} \times \delta_S(y) = \text{Constant} $$

$$ \Leftrightarrow \frac{1}{2}m \cdot 2\dot{y} \times \epsilon = \text{Constant} $$

$$ \Leftrightarrow m \dot{y} = p = \text{Constant}, $$

which is simply the statement that linear momentum is conserved.
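If you want to watch the conservation happen symbolically, here is a tiny `sympy` sketch of the free-particle case (with constant potential, the Euler-Lagrange Equation reduces to $m\ddot{y} = 0$; everything else is stock `sympy`):

```python
from sympy import Eq, Function, dsolve, symbols

t = symbols('t')
m = symbols('m', positive=True)
y = Function('y')

# Constant potential: the equation of motion is m * y'' = 0
general = dsolve(Eq(m * y(t).diff(t, 2), 0), y(t))

# The Noether charge for translation symmetry: p = m * ydot
p = m * general.rhs.diff(t)

# dp/dt vanishes identically along the general solution
print(p.diff(t))
```

The last line prints `0`: the momentum $p = m\dot{y}$ is indeed a constant of motion along every solution.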

This simple example demonstrates the power and elegance of Noether's Theorem in
particular and of applied mathematics in general. Using only a few assumptions,
namely the principle of least action, it is possible to capture the rich
structure of the world we inhabit^{26}.

## References

**[1]** V. I. Arnold. *Mathematical Methods of Classical Mechanics (2nd
Edition).* Springer-Verlag, 1989.

**[2]** I. M. Gelfand and S. V. Fomin. *Calculus of Variations.* Prentice-Hall
International, Inc., 1963.

**[3]** Mary L. Boas. *Mathematical Methods in the Physical Sciences (2nd
Edition).* John Wiley & Sons, Inc., 1983.

## Footnotes

**1:** Asking for a friend. **[back]**

**2:** If the answer is not loads, you should probably postpone
reading this until you have gone back to your problem sheets and reconsidered
your life choices. Especially those, that lead to you reading this article. At
least those are the ones I would focus on. **[back]**

**3:** Yes, that happens to me all the time too. **[back]**

**4:** If the first question that comes to your mind is *"How did I
get here?!?!?"*, I recommend less alcohol. **[back]**

**5:** However, he would be a horrible person for stealing my
punchline. **[back]**

**6:** For the notation lovers: $ F:\mathcal{D}^n \rightarrow
\mathbb{R} $, where $ \mathcal{D}^n $ is the set of continuous functions $
f_\mathcal{D} : [a,b] \rightarrow \mathbb{R} $ with norm $ \sum_{i=0}^n
\max_x |f^{(i)}(x)| $ [2]. (Note: If you like equations like this, you
should probably read a proper book [2] instead of this article.)
**[back]**

**7:** Other than the obvious: Did Euler happen to solve this
problem for me? **[back]**

**8:** $\eta(x)$ has to go to $0$ at the ends of our interval,
such that we still end up with a path from A to B. It should also be nice, i.e.
continuous, defined on the interval, etc. So not quite arbitrary, but who has
time for so many details? **[back]**

**9:** Now that we included one equation already, a few more won't
do any harm, will they? **[back]**

**10:** Ah, everyone's favorite! **[back]**

**11:** You will also need to recognize that the solution
$y(x)$ corresponds to $\epsilon = 0$, and that there the first derivative
vanishes: $\left. \frac{d}{d\epsilon} \int_a^b F(x, y + \epsilon
\eta, y^\prime + \epsilon \eta^\prime) \, dx \right|_{\epsilon = 0} = 0$.
**[back]**

**12:** I hear, he's a big deal. Discovered gravity, or something
like that [1]. **[back]**

**13:** Nicer^{TM}, when you don't have to draw funny
force diagrams, but instead get to solve partial differential equations.
**[back]**

**14:** You will now begin to appreciate how nice this formulation
of classical mechanics truly is. Especially after you've had a chance to meet
some partial differential equations in the differential equations course. Even
more so, if they are not separable. Those are the nicest^{13}!
**[back]**

**15:** If interested, read a proper book [1] and stop wasting
your time here. **[back]**

**16:** And without any shame. **[back]**

**17:** Furthermore, anyone who uses the principle of least
action, instead of Newton's laws, has a happier life. Or not. Actual results may
vary based on age, health, and personal preference. **[back]**

**18:** Just when you thought we were done. Ha, got you! More
random things that look like they are pulled from thin air, right until the end.
Although, personally, this is what I enjoy about mathematics: The moment of
revelation, when everything suddenly makes sense and has a precise purpose.
Mathematicians like writing in this way: Do all the setup first, then pull the
pieces together. Why exactly they like this style of writing will forever be a
mystery. (However, my money is on this: It makes them look smarter.)
**[back]**

**19:** This was mentioned before, but will only have been read by
those select few, who suffered through my attempt at humor in the footnotes.
**[back]**

**20:** Rigor not included. **[back]**

**21:** Okay, I admit it. I lied. Instead of dragging you through
all this setup, I could have just said something like: "If there is a symmetry
in the laws of nature *(read: System Lagrangian)*, then we get conserved
quantities *(also called Noether Charges)*". But that would not have made for a
very long article, would it? **[back]**

**22:** The long equation that follows makes it seem like the
solution is fully spelled out. However, there will be a sneaky jump right at the
end, where I brush over the fact that $f(t_a) - f(t_b) = 0 \quad \forall \quad
t_a, t_b \in \mathbb{R}$ implies that $f(t)$ is constant in $t$.
**[back]**

**23:** This is trivial^{TM}. (Proof by intimidation, a
classic.) **[back]**

**24:** Thanks to the many footnotes, I may have successfully
drained your attention enough for you to not remember. If that is the case,
please refer to the Internet for help. **[back]**

**25:** You may now be at a point where you think: "Wow, so
what?". And you would not be entirely wrong to think that. However I feel
strongly that I did not succeed in writing this article if you are not feeling
inspired now. So please take a moment to feel inspired. **[back]**

**26:** To $\pm 7.5\text{mm}$. Never forget that physics is an
experimental science. (Even though I personally prefer to not be the one
conducting them.) **[back]**