My first set of instructional videos is finished. It makes up an introductory course in Python 3. Topics covered include: variables, if statements, loops, functions, lists, dictionaries, installing packages, file I/O, and basic GUI applications with tkinter. In addition there are a lot of example programs that demonstrate most of these concepts in context. You can start watching below!
If you’re reading this blog then chances are you’ve seen at least one video of an astronaut doing something cool on the International Space Station. In all of these videos, it looks like there is no gravity, right? Let’s plug some numbers into Newton’s universal law of gravitation and see if that is actually the case.
Newton famously discovered* that objects with mass attract each other via a force whose strength is inversely proportional to the square of the distance between them. That means that if the distance between two objects is doubled, the force of gravity between them will be one-fourth as strong. The complete equation for calculating the gravitational force between two objects is:

F = G · m₁ · m₂ / r²

Here m₁ and m₂ are the masses of the two objects in consideration (in this case, you and the Earth), r is the distance between the objects measured from their centers, and G is a special constant of the universe called the gravitational constant. So some useful values for us to consider are:
| Quantity | Symbol | Value |
|---|---|---|
| Mass of the Earth | mₑ | 5.972 × 10²⁴ kg |
| Radius of the Earth | rₑ | 6.371 × 10⁶ m |
| Gravitational constant | G | 6.674 × 10⁻¹¹ N·(m/kg)² |
| Distance to the ISS | y_ISS | 400 km |
| Mass of a typical person | mₚ | 80 kg |
So if we plug all of these values into the equation to calculate the force of gravity acting on a person on the ISS, it comes out to about 696 N. By comparison the same person standing on the surface of Earth would experience about 786 N of gravitational force. Given my penchant for making graphs, it may come as no surprise that I plotted the force of gravity versus distance away from the surface of Earth using the numbers from the table above. Here’s the result:
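If you'd like to check these numbers yourself, here's a small sketch of the calculation in Python, using the values straight from the table above:

```python
G = 6.674e-11       # gravitational constant, N·(m/kg)^2
M_EARTH = 5.972e24  # mass of the Earth, kg
R_EARTH = 6.371e6   # radius of the Earth, m
ALT_ISS = 400e3     # altitude of the ISS above the surface, m

def gravitational_force(m, r):
    """Newton's universal law of gravitation: force (in newtons) between
    the Earth and a mass m (kg) whose center is r meters from Earth's center."""
    return G * M_EARTH * m / r**2

person = 80.0  # kg
on_surface = gravitational_force(person, R_EARTH)        # about 786 N
on_iss = gravitational_force(person, R_EARTH + ALT_ISS)  # about 696 N
print(f"surface: {on_surface:.0f} N, ISS altitude: {on_iss:.0f} N")
```

Note that the distance plugged into the formula for the ISS is the Earth's radius *plus* the 400 km altitude, since r is measured from the centers of the objects.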
All of this means that there is only about 10% “less gravity” on the ISS compared to standing on Earth – far from zero gravity. So what gives!?
The answer to the question “why do astronauts on the ISS appear weightless?” is not all that complicated, but it defies the intuition that many people have. Check out this image from the Newton’s cannonball Wikipedia page:
This image does a great job of explaining what it means to be in orbit. When something is in orbit (such as the astronauts on the ISS), it is really just falling toward the Earth. However it is also moving sideways fast enough that instead of hitting the Earth it misses and continues to fall. Look at the different paths in the picture above: paths A and B represent an object that wasn’t moving quite fast enough to get into orbit, it was falling and hit the Earth; paths C and D represent objects that are in orbit, they are constantly in free fall towards the Earth but they are moving sideways so fast that they never actually hit it; path E represents an object moving sideways so fast that it just flies away from Earth and therefore doesn’t go into orbit. If you want to play around with this, there are lots of interactive Newton’s cannon demos available such as this one.
So in fact the astronauts on the ISS are just constantly in free fall. This means that they don’t experience any contact forces of objects pushing on them (like we do when we’re standing on the ground and it’s pushing up on us). If you’ve ever been on an amusement park ride where you’re dropped to the ground, you experience the same sensation of weightlessness that astronauts on the ISS do, but the difference is it only lasts for a few seconds before the ground gets in the way.
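We can even work out how fast "sideways" needs to be with the same numbers. For a circular orbit, gravity supplies exactly the centripetal force, so m·v²/r = G·M·m/r², which gives v = √(GM/r). A quick sketch:

```python
from math import sqrt

G = 6.674e-11       # gravitational constant, N·(m/kg)^2
M_EARTH = 5.972e24  # mass of the Earth, kg
R_EARTH = 6.371e6   # radius of the Earth, m
ALT_ISS = 400e3     # altitude of the ISS, m

def circular_orbit_speed(r):
    """Sideways speed for a circular orbit of radius r (measured from Earth's
    center): set gravity equal to the centripetal force and solve for v."""
    return sqrt(G * M_EARTH / r)

v = circular_orbit_speed(R_EARTH + ALT_ISS)
print(f"{v / 1000:.2f} km/s")  # roughly 7.7 km/s
```

That speed, about 7.7 km/s, corresponds to circling the whole Earth in roughly 90 minutes, which matches how often ISS astronauts see a sunrise.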
*Note that the Newtonian theory of gravity is actually an approximation that has since been supplanted by the general theory of relativity. However for the majority of simple calculations such as the one discussed here, the Newtonian theory is a very good approximation and therefore good enough to get the job done.
A random walk is an interesting type of simulation which is fairly straightforward to understand, and surprisingly pretty useful. I’ll explain the concept by way of a silly analogy — sometimes a random walk is described as a “drunkard’s walk” where you could imagine a drunkard stumbling through a town, and when he reaches an intersection he randomly goes in one of the four possible directions. Repeat the process as many times as you like, and at the end you have yourself a random walk. Now you might think that if you repeated the process enough times the drunkard wouldn’t get anywhere: he’d usually end up back where he started as the moves cancel out. This actually isn’t the case at all (although if you average over many independent walks, the average displacement does come out to zero; any individual walk typically wanders a distance proportional to the square root of the number of steps).
Random walks have a variety of applications in biology, physics, and even your follower recommendations on Twitter. One of Albert Einstein’s many contributions to physics was discovering that this sort of “drunkard’s walk” is very closely related to something called Brownian motion which has to do with the seemingly random motion of microscopic objects.
One small note: what I have described so far is actually a two dimensional random walk, but you could imagine doing this in one dimension where the walker can only go left or right, three dimensions (up/down, left/right, backwards/forwards), or more generally in n dimensions. Indeed there are lots of interesting variations you can make, such as having some directions be slightly more likely than others (this is known as a biased random walk), or not allowing the path to revisit a point it’s already been to (this is known as a self-avoiding walk).
I made some graphs for the simple 2D random walk case. The first graph just has the walker making 1000 moves, and even with this amount you can see some pretty interesting patterns emerging. Note that in all of the graphs, the point (0, 0) is the starting point for the journey.
The remaining three graphs are the same thing as above, but for 100,000 steps. As you can see below, each pattern obtained from this process is unique and interesting.
Here is the Python source code to generate these plots. Even if you don’t know much about programming it might still be interesting to see how very simple the code used to generate these seemingly complicated plots is! As usual, you will need Python 2.7, NumPy, and matplotlib for the script to run.
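The original script isn't reproduced here, but a minimal Python 3 sketch of the same idea (NumPy for the random steps, matplotlib for the plot) might look like this:

```python
import numpy as np

def random_walk_2d(n_steps, rng=None):
    """Simulate a 2D lattice random walk: at each step move one unit in one of
    the four cardinal directions, chosen uniformly at random.
    Returns the x and y coordinates of the path, starting at (0, 0)."""
    rng = np.random.default_rng() if rng is None else rng
    steps = rng.integers(0, 4, size=n_steps)  # 0: east, 1: west, 2: north, 3: south
    dx = np.where(steps == 0, 1, np.where(steps == 1, -1, 0))
    dy = np.where(steps == 2, 1, np.where(steps == 3, -1, 0))
    x = np.concatenate(([0], np.cumsum(dx)))
    y = np.concatenate(([0], np.cumsum(dy)))
    return x, y

if __name__ == "__main__":
    import matplotlib
    matplotlib.use("Agg")  # render straight to a file
    import matplotlib.pyplot as plt
    x, y = random_walk_2d(100_000)
    plt.plot(x, y, linewidth=0.5)
    plt.scatter([0], [0], color="red", zorder=3)  # mark the starting point (0, 0)
    plt.savefig("random_walk.png", dpi=150)
```

The whole simulation is two lines of actual randomness; everything else is bookkeeping and plotting, which is part of what makes the resulting pictures feel so disproportionately intricate.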
The Collatz Conjecture is one tough nut to crack. Also called the 3n + 1 Conjecture, it is a deceptively simple unsolved problem in mathematics. The conjecture claims that for any positive whole number, you will always reach the value 1 if you repeatedly apply the following rule: divide by two if the number is even, and multiply by 3 and add 1 if the number is odd.
Let me explain with an example: suppose the initial number is n = 5. This number is odd, so the next value would be 3*5 + 1 = 16. This number is even, so the next iteration gives 16 / 2 = 8. This number is even, so the next will be 8 / 2 = 4, then 4 / 2 = 2, and finally 2 / 2 = 1 and we’ve reached 1. In table form:
| Step Number | Current Value |
|---|---|
| 0 | 5 |
| 1 | 16 |
| 2 | 8 |
| 3 | 4 |
| 4 | 2 |
| 5 | 1 |
So for an initial value of n = 5, the Collatz Conjecture process takes 5 steps to reach 1. It is known that this will work for even very large numbers, but no one has ever been able to prove that this process will always reach the value 1. Here’s a plot showing how many steps it takes to reach 1 using this process for all of the numbers up to one million:
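The step counts behind a plot like that are easy to compute. Here's a short sketch of the counting function (iterative rather than recursive, since chains like the one starting at 27 get surprisingly long):

```python
def collatz_steps(n):
    """Count how many steps it takes n to reach 1 under the Collatz process:
    halve if even, otherwise multiply by 3 and add 1."""
    if n < 1:
        raise ValueError("n must be a positive whole number")
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps

# The example above: n = 5 takes 5 steps (5 -> 16 -> 8 -> 4 -> 2 -> 1).
print(collatz_steps(5))   # 5
print(collatz_steps(27))  # 111 -- small starting values can take many steps
```

Computing `collatz_steps(n)` for every n up to one million and plotting the results gives the scatter shown above.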
Since there are of course an infinite number of numbers, we can’t just test them all and make sure – we need a mathematical proof to be able to say with certainty this will always work. You might think that since it works all the way up to a big number like 1,000,000 it will probably always work, but mathematics does not admit this type of reasoning and indeed there are famous examples where something seemed to be true for every value people tried, but then someone found a counterexample that was very large. Perhaps the most famous example of this is the Pólya conjecture for which a counterexample was found at n = 906,180,359.
Terence Tao wrote an excellent post on the Collatz Conjecture on his blog. If you’re interested in taking a shot at this problem or just playing around with it a bit, that would be a good place to start learning about it.
I recently wrote a Python script for plotting wave functions for the Quantum Harmonic Oscillator. This is one of the few problems in quantum mechanics that has a “nice” solution which can be obtained exactly. In simple terms the quantum harmonic oscillator refers to a quantum particle that is confined by a potential whose strength is proportional to the square of the distance away from the equilibrium position (i.e. the position where there is no force from the potential acting on the particle). A concrete physical situation where this applies might be an electron that is in a uniform magnetic field. There are many other physical situations that can either be informed by or very well approximated by the harmonic oscillator model.
Below are some plots of the wavefunction for a particle in a harmonic oscillator potential. Here n denotes the “energy level” where n = 0 corresponds to the ground state of the system. The energy level takes on discrete (or “quantized”, hence the name “quantum” mechanics) values. The energy of a particle at the n-th energy level is

E_n = ħω(n + 1/2)

where ħ stands for the reduced Planck constant and ω stands for the natural angular frequency of the oscillator (this is related to the proportionality constant for the potential described in the first paragraph). Interestingly, the lowest possible energy for a particle confined to this potential (set n = 0 in this equation to get E₀ = ħω/2) is not zero! This means it possesses so-called “zero point” energy, which is something pretty weird that differs from potentials in the macroscopic world.
A wavefunction is a special mathematical object that encodes information about the dynamics of a quantum system. In general wavefunctions are not directly measurable, and many view them as a mathematical tool we use to represent quantum systems. The wavefunction can be used to compute the probability that, for example, the position of the particle will be within a particular range of values. So without any further ado, here are the plots:
This is the “ground state” wave function for the quantum harmonic oscillator. The shape of this plot is called a Gaussian. Note that these plots are done in so-called “natural units” (where c = ħ = 1) that simplify the calculations a bit.
This is the first excited state wave function; note that it has two “bumps.”
Just for fun, here is the 134th excited state. It’s quite pretty. Notice the larger bumps on either end. As n gets large, we expect the oscillations in the middle to sort of “wash out.” These two large bumps on the ends correspond to the classical turning points for the harmonic oscillator. From Wikipedia:
As the energy increases, the probability density becomes concentrated at the classical “turning points”, where the state’s energy coincides with the potential energy. This is consistent with the classical harmonic oscillator, in which the particle spends most of its time (and is therefore most likely to be found) at the turning points, where it is the slowest. The correspondence principle is thus satisfied.
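For anyone who wants to reproduce plots like these, here is a minimal sketch (not the exact script I used) that evaluates ψₙ in the same natural units, taking ħ = m = ω = 1 so the eigenfunctions reduce to normalized Hermite polynomials times a Gaussian:

```python
import numpy as np
from math import factorial, pi, sqrt

def qho_wavefunction(n, x):
    """Energy eigenfunction psi_n(x) of the quantum harmonic oscillator in
    natural units (hbar = m = omega = 1):
    psi_n(x) = H_n(x) * exp(-x**2 / 2) / sqrt(2**n * n! * sqrt(pi))."""
    coeffs = np.zeros(n + 1)
    coeffs[n] = 1.0
    hermite_n = np.polynomial.hermite.hermval(x, coeffs)  # physicists' H_n(x)
    norm = 1.0 / sqrt(2.0**n * factorial(n) * sqrt(pi))
    return norm * hermite_n * np.exp(-x**2 / 2.0)

if __name__ == "__main__":
    import matplotlib
    matplotlib.use("Agg")  # render straight to a file
    import matplotlib.pyplot as plt
    x = np.linspace(-5, 5, 1000)
    for n in (0, 1):
        plt.plot(x, qho_wavefunction(n, x), label=f"n = {n}")
    plt.legend()
    plt.savefig("qho_wavefunctions.png", dpi=150)
```

The n = 0 curve is the Gaussian ground state shown above, and n = 1 gives the two-bump first excited state; cranking n up to 134 reproduces the pretty high-n plot, turning points and all.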
Recently a video purporting to explain the P vs. NP problem to the layman was on the front page of Reddit. This problem is really fun to think about, and the fact that it seemingly has something very fundamental to say about the nature of reality is striking. Here we have a problem whose formulation was motivated by the invention of glorified adding machines and which is expressed in the language of highly abstract mathematics, BUT its resolution (in either direction) will indisputably inform our views about the nature of nature. I’m not much of a Platonist but this sort of profound connection between mathematics and reality is always very intriguing.

I’m much more comfortable with the “unreasonable effectiveness of mathematics” when it comes to accurately modeling the physical world. This has never really been a sticking point in my mind: it seems to me that if nature behaves in a predictable way (“predictable” here meaning that identical systems will reliably produce self-consistent, sensible outcomes), then something like mathematics must be able to faithfully represent it. However, what the resolution of P vs. NP will mean for our understanding of the universe seems in some sense deeper. Or maybe not. Maybe mathematics being able to predict the position of a moving object 1 second into the future is just as spooky as P vs. NP telling us about the limits of information processing in the universe. I’m really not sure.
Here’s my “elevator speech” version of the problem taken from some of my old notes:
Solutions for problems in the class P can be found efficiently in polynomial time. Trivially this means that candidate solutions to problems in P can be checked in polynomial time (by just using the efficient algorithm to find the solution and checking if it matches the candidate solution).
Given a candidate solution, you can verify it in polynomial time for problems in NP. This class therefore includes P, and the crux of the P vs. NP problem is whether or not being able to verify a solution in polynomial time implies a solution can be found in polynomial time in the first place.
So-called NP complete problems are in the category NP, and an efficient solution for any NP complete problem could be converted, in polynomial time, into an efficient solution for every other problem in NP. Interestingly, no problem has ever been proven to be in NP but not in P; problems such as integer factorization and graph isomorphism are suspected to sit strictly in between, being neither known to be in P nor known to be NP complete. The existence of a problem which is in NP (and not in P) but not NP complete would actually show that P ≠ NP, and conversely Ladner’s theorem guarantees that if P ≠ NP, such “NP-intermediate” problems must exist.
NP hard problems are those to which every problem in NP can be reduced in polynomial time. They include all NP complete problems, but also harder problems, not necessarily in NP, whose candidate solutions cannot even be verified in polynomial time; the NP complete problems are exactly those which are both NP hard and in NP. If P = NP then essentially every nontrivial decision problem in P would be NP complete; if P ≠ NP then problems in P, and NP problems which are not NP complete, do not fall into the NP hard class.
There are lots of other complexity classes, but these are the most important ones.
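The verify-versus-find asymmetry at the heart of the problem can be made concrete with a small Python sketch using subset sum, a classic NP complete problem: given a list of numbers, is there a subset adding up to a target? Checking a proposed certificate takes polynomial time, while the obvious way to find one tries exponentially many subsets.

```python
from itertools import combinations

def verify_subset_sum(numbers, target, certificate):
    """Verification (polynomial time): check that `certificate`, a list of
    distinct valid indices into `numbers`, picks out a subset summing to target."""
    return (len(set(certificate)) == len(certificate)
            and all(0 <= i < len(numbers) for i in certificate)
            and sum(numbers[i] for i in certificate) == target)

def find_subset_sum(numbers, target):
    """Search (exponential time): brute force over all 2**n subsets,
    returning the indices of a solution, or None if there isn't one."""
    for size in range(len(numbers) + 1):
        for indices in combinations(range(len(numbers)), size):
            if sum(numbers[i] for i in indices) == target:
                return list(indices)
    return None

numbers = [3, 34, 4, 12, 5, 2]
certificate = find_subset_sum(numbers, 9)
print(certificate, verify_subset_sum(numbers, 9, certificate))
```

P vs. NP asks, in effect, whether the gap between these two functions is fundamental: whether every problem with a fast `verify` must also admit a fast `find`.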
For more on this topic, and on the implications that the ability to solve NP complete problems efficiently would have on our understanding of reality, I recommend reading NP-complete Problems and Physical Reality by Scott Aaronson, who was quoted in the video linked at the start of this post.
As a closing thought I’d like to point out the fact that these sorts of videos explaining interesting problems in math are able to gain so much traction with the general public. This speaks to how fascinating the subject can be even for “non-math people.” The stigma against math (and indeed wearing one’s distaste for math as a sort of hip badge of honor) points to a collective failure to teach math the right way. Paul Lockhart discusses this issue in great detail in his excellent essay A Mathematician’s Lament, which I highly recommend reading. I don’t agree with everything he says, but nevertheless I think he raises many valid points. Ever since reading this essay I am reminded of it every time I see lots of people excited about math videos such as this or (less encouragingly) when I see students frustrated with stupid homework problems.
This summer I’ll be teaching an online class called Intro to Programming with Python. I’ll post the complete list of videos on this site once they’re done. In the meantime, here’s the first video for the class.