God: The Programmer of the Universe!

Posted in cosmology, Philosophy, physics, Science on July 14, 2010 by Grad Student

Here’s the Bestest Ever paragraph you’ll read in a physics journal article (Physical Review Letters), with an awesome title to boot: “Computational Capacity of the Universe”:

What is the universe computing? In the current matter-dominated universe most of the known energy is locked up in the mass of baryons. If one chooses to regard the universe as performing a computation, most of the elementary operations in that computation consist of protons, neutrons (and their constituent quarks and gluons), electrons and photons moving from place to place and interacting with each other according to the basic laws of physics. In other words, to the extent that most of the universe is performing a computation, it is ‘computing’ its own dynamical evolution.
This xkcd comic is apropos:

Phallic Astrophysics

Posted in astrophysics with tags , , , on June 28, 2010 by Grad Student

Inspired by this astropixie post, I present to you one of the most recognizably phallic objects you’ll see in astrophysics:

Well, it’s actually a simulation of a

jet [which] delivers its thrust in a narrow solid angle (1).

I should also mention that this type of astronomical object is thought to originate in the vicinity of a black hole.

Reference (pdf):

(1) Meier, David L.; Koide, Shinji; Uchida, Yutaka. Science, Volume 291, Issue 5501, pp. 84–92 (2001).

Overcoming Lying Inverse Trigonometric Functions.

Posted in math with tags on May 2, 2010 by Grad Student

Recently I was reminded of the trickiness of inverse trigonometric functions (e.g. arcsine and arccosine).  In some calculations I was doing* I had prior knowledge of the values of sin(x) and cos(x) without directly knowing what x was, and I had to calculate sin(2x).  To solve this problem, I did what every beginning student of trigonometry would do:

\sin(2x) = \sin(2\arcsin(\sin(x)))

The problem with this simple solution can be summarized by plotting sin(2·arcsin(sin(x))) vs. x:

Obviously that ain’t a sinusoidal graph.  The problem lies in the fact that there aren’t any true inverse trigonometric functions.  Instead, the following is true

\arcsin(\sin(t)) = t

only if t is between -pi/2 and pi/2.

The solution to my problem is extremely simple, trigonometric identities:

\sin(2x) = 2\sin(x)\cos(x)

With the above identity my problem was solved.  I already know what sin(x) and cos(x) are, so I can use the above formula to find sin(2x).

I also needed to calculate

\sin(2x + a),

where again I only know sin(x), cos(x), and this time I know the value of a.  To do this without running into the same problem illustrated in the above figure, I used the following identity:

\sin(2x + a) = \sin(2x)\cos(a) + \cos(2x)\sin(a)

To calculate cos(2x) I use a similar identity as I used for sin(2x):

\cos(2x) = \cos^2(x) - \sin^2(x)

which now enables me to calculate:

\sin(2x + a) = 2\sin(x)\cos(x)\cos(a) + \left[\cos^2(x) - \sin^2(x)\right]\sin(a)

Moral of the story: don’t forget the wonders of trigonometric identities.
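A quick way to see both the failure and the fix is to code them up.  Here’s a minimal Python sketch (the function names are mine) comparing the naive arcsin route with the double-angle identity:

```python
import math

def sin2x_naive(sin_x):
    # arcsin always returns a value in [-pi/2, pi/2], so this "recovered"
    # angle is wrong whenever the true x lies outside that range
    return math.sin(2 * math.asin(sin_x))

def sin2x_identity(sin_x, cos_x):
    # double-angle identity: sin(2x) = 2 sin(x) cos(x) -- no inversion needed
    return 2 * sin_x * cos_x

x = 2.5  # true angle, deliberately outside [-pi/2, pi/2]
s, c = math.sin(x), math.cos(x)

print(sin2x_naive(s))        # disagrees with sin(2x)
print(sin2x_identity(s, c))  # matches sin(2x)
print(math.sin(2 * x))       # the true value
```

For x = 2.5, arcsin(sin(x)) returns π − 2.5 instead of 2.5, so the naive version even gets the sign wrong, while the identity version agrees with sin(2x) to machine precision.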


* The calculations concern the Stokes parameters of polarized radiation.

The Astrophysics of Faraday’s Law of Induction

Posted in astrophysics, physics with tags , , , on February 28, 2010 by Grad Student

Something I love about physics is how the simple intuition you gain from an introductory-level discussion of Lenz’s law and Faraday’s law of induction can enable you to understand one of the outstanding problems in theoretical astrophysics: how are large-scale celestial magnetic fields generated?

In terms of magnetic fields, Faraday’s law of induction says that any change in the magnetic field through a conducting loop will induce currents in that loop that will oppose such a change in the magnetic field.  In other words, changing the magnetic field in an area where currents can flow is hard and therefore takes some time.

Alternatively, Faraday’s law says that changing the current in a circuit is hard and takes time.  This is because the current itself generates a magnetic field.  So, changing the current will change the magnetic field through the loop.  So Faraday tells us that there will be an induced current that opposes the original attempt to increase (or decrease) the current.  Upshot:  changing the current in a loop is hard and therefore takes time.

Approximately how long does it take for the magnetic field to go from zero to the equilibrium value, B?  For a simple circuit with resistance and some inductance (an RL circuit) any student of introductory physics could answer this question:

t \sim \frac{L}{R},

where L is a measure of inductance of the circuit and R is the circuit’s resistance.  The inductance of a circuit is simply a measure of how much the circuit resists any change in the magnetic field, or by another measure, it is the measure of how difficult it is to change the amount of current flowing in the circuit.
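As a concrete illustration, here’s the textbook RL growth solution, I(t) = (V/R)(1 − e^{−t/τ}) with τ = L/R, for made-up circuit values; after one timescale the current has only reached about 63% of its equilibrium value:

```python
import math

# Illustrative circuit values (arbitrary, for demonstration only)
V, R, L = 10.0, 2.0, 0.5   # volts, ohms, henries
tau = L / R                # the L/R timescale from the text

def current(t):
    # standard solution for current growth in an RL circuit
    return (V / R) * (1 - math.exp(-t / tau))

I_eq = V / R
print(current(tau) / I_eq)      # ~0.63 after one timescale
print(current(5 * tau) / I_eq)  # ~0.99: essentially at equilibrium
```

The exact prefactor doesn’t matter for what follows; the point is that t ~ L/R sets the scale on which the current (and hence the magnetic field) can change.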

In astrophysics, there are similarities to a circuit with resistance and inductance*.

The Sun’s magnetic field is about the strength of Earth’s magnetic field, 0.0001 tesla (the tesla is the SI unit of magnetic field strength).  Further, the Sun is composed of ionized gas and is therefore very conductive, meaning electric currents can flow easily.  Thus we can ask the same question about the Sun that we asked about the circuit: how long does it take for the magnetic field to attain an equilibrium value of 0.0001 tesla?  As it turns out, L/R is a good way of estimating this timescale; all we need to do is figure out how to estimate the inductance and the resistance of the Sun.

The inductance, L, depends on an important quantity known as the magnetic flux.  The magnetic flux can be thought of as the number of magnetic field lines that pierce through a surface.  In the case of the Sun, the flux is approximately

\Phi\sim B D^2,

where the Greek letter Phi is the flux, B is the Sun’s mean magnetic field, and D is the diameter of the Sun.  Therefore, the number of magnetic field lines threading the Sun is the average magnetic field multiplied by the cross-sectional area of the Sun.  Now we can define the inductance as

L = \frac{\Phi}{I},

where I is the total current that is maintaining the magnetic field of the Sun.  If we recognize that the magnetic field in the Sun is approximately

B \sim\frac{\mu_0I}{D},

then we can calculate the inductance by using the expression we have for the magnetic flux, Phi, and magnetic field, B:

L=\frac{B D^2}{I}=\mu_0 D.

Estimating the resistance is also possible using introductory physics:

R\sim \rho \frac{D}{A}\sim \frac{\rho}{D},

where the Greek letter rho, the resistivity, is a measure of how much the Sun’s plasma prevents currents from flowing via particle-particle collisions.  Unfortunately, we can’t use introductory physics to calculate the resistivity, so I’ll just note that it’s proportional to the temperature of the plasma to the -3/2 power**.  Plugging everything into the original formula for t, we finally get:

t=\frac{\mu_0 D^2}{\rho}\sim\frac{D^2}{4 \times 10^{16}}T^{3/2} years.

Plugging in the diameter and average temperature of the Sun we get:

t\sim\frac{(1.4 \times 10^9 m)^2}{4 \times 10^{16}}(1,000,000 K)^{3/2}\sim50 billion years.
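The same estimate takes a couple of lines of Python, using the approximate formula above (with μ0 and the resistivity constant folded into the 4 × 10^16 factor, as in the text):

```python
# Order-of-magnitude estimate of the Sun's magnetic induction timescale,
# t ~ mu_0 D^2 / rho, via the approximation t ~ D^2 * T^(3/2) / 4e16 years
D = 1.4e9   # solar diameter in meters
T = 1.0e6   # rough mean interior temperature in kelvin

t_years = D**2 / 4e16 * T**1.5
print(f"{t_years:.1e} years")   # ~5e10, i.e. ~50 billion years
```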

Clearly we’re missing something here: the universe is only 14 billion years old.  If the Sun was born with very little magnetic field 5 billion years ago, how did it build up a field of 0.0001 tesla against induction in that time?  According to Faraday’s law and a little plasma physics, it takes much longer for such a field to reach its equilibrium value.  Another huge problem is that the Sun’s magnetic field reverses every 11 years or so.  That means the real timescale for significant changes in the Sun’s magnetic field is a decade, not 5 billion years.

The answer to this conundrum lies in a topic that is still very much at the forefront of astrophysics research today: dynamo theory.  Dynamo theory is an area of plasma physics that arose because the simple arguments I’ve presented here do not match with observations of the magnetic fields originating from the Sun, Earth, or even the galactic disk.



* Check out pages 4-5 in Plasma Physics for Astrophysics by Kulsrud (2005) for a similar explanation of the magnetic induction timescale in astrophysics (these pages can be viewed for free using Amazon’s “look inside” feature).  The most common explanation of the induction timescale (not presented here) comes from the resistive induction equation (on pg 112 of the previous link) when the fluid velocity is zero.

** The -3/2 power of the temperature dependence of the plasma resistivity can be seen if one recognizes that the resistivity is proportional to the plasma collision frequency.  It’s also a bit surprising that the resistivity doesn’t depend on the density of particles.  The reason is that more particles mean more current, but also more particle-particle collisions, which kill the current.  These two effects approximately cancel each other.  A more detailed analysis reveals that the resistivity depends weakly (logarithmically) on the density.

Teaching Lenz’s Law

Posted in physics with tags , , , , , , on February 23, 2010 by Grad Student

Usually I do a good job of teaching this strange electricity and magnetism topic, but I recently taught Lenz’s law in the most confusing way possible.  Hopefully I can repair a little bit of the damage with this flow chart I made.

Here’s a higher quality pdf of the image.

The Frontiers of Cosmology

Posted in cosmology, physics with tags , , on November 15, 2009 by Grad Student

Here’s a fascinating description of what theoretical cosmologists are thinking about these days:

A Conversation with Sean Carroll

This seems on the one hand a very obvious question. On the other hand, it is an interestingly strange question, because we have no basis for comparison. The universe is not something that belongs to a set of many universes. We haven’t seen different kinds of universes so we can say, oh, this is an unusual universe, or this is a very typical universe. Nevertheless, we do have ideas about what we think the universe should look like if it were “natural”, as we say in physics. Over and over again it doesn’t look natural. We think this is a clue to something going on that we don’t understand…

(via 3qd)

Limb Darkening in AGN Jets

Posted in astrophysics with tags , , , , on November 10, 2009 by Grad Student

While writing my first astronomy paper, I just typed the following sentence:

As expected, the profiles are roughly Gaussian in shape due to line-of-sight effects analogous to solar limb darkening.

It makes life easier for everyone when you can refer to a well known phenomenon like limb darkening to explain some point in your paper. In this case, the analogy only saved a few words, but it’s still fun. What is limb darkening? It’s this (from Wikipedia’s entry on the subject):

Notice that all around the edges, or limb, the Sun appears less bright, or darkened.  That’s the phenomenon, and you might be able to guess at the explanation.  When you look at the middle of the Sun you’re seeing photons that have been emitted within a few hundred kilometers of its surface.  As you look closer to the edge, the photons are traveling through just as much material (a few hundred kilometers), but they are coming from a part of the Sun closer to the surface, further away from the Sun’s center.  This part is further away from the core of the Sun, where all the hot nuclear reactions take place, and thus it’s cooler (and, for other reasons, less dense).  Since you’re looking at cooler, less dense gas, it’s less bright, or more darkened.  Here’s another Wikipedia image to help explain limb darkening:

The distance the photons travel through the Sun is represented by L in this image.  As you can see, the place where the photons are coming from along the edge of the Sun (point B) is cooler than the place where the photons are coming from at the Sun’s center (point A).
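This geometry can be made quantitative with the standard linear limb-darkening law, I(μ)/I(center) = 1 − u(1 − μ), where μ is the cosine of the angle between the line of sight and the local vertical; the coefficient u ≈ 0.6 is a rough visible-light value for the Sun and is my assumption here, not something from this post.  A minimal sketch:

```python
import math

def intensity_ratio(r, u=0.6):
    # r is the projected radial position on the disk (0 = center, 1 = limb);
    # mu = cos(angle between the line of sight and the local vertical)
    mu = math.sqrt(1.0 - r**2)
    # linear limb-darkening law: brightness relative to disk center
    return 1.0 - u * (1.0 - mu)

for r in (0.0, 0.5, 0.9, 0.99):
    print(f"r = {r}: I/I_center = {intensity_ratio(r):.3f}")
```

The brightness falls off slowly near disk center and drops steeply near the limb, which is exactly the darkened-rim appearance in the image above.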

This effect is well known to astronomers, which is why I used it to explain this:

This is an AGN jet originating from the galaxy NGC1052 as it appears at different frequencies (Matthias Kadler took this image). Notice that along the edges of the jet the brightness dims.  By “edge of the jet” I’m referring not to the end of the jet (located on the upper left and lower right side of the image), but to the “sides” of the jet.  This image is what I meant by limb darkening in my paper, except this time I’m referring to an AGN jet, which is much more poorly understood than our Sun! Still, most (including me) would say the limb darkening we see in AGN jets can be understood in a similar way* as the Sun’s limb darkening. However, the most well known and well studied AGN jet (emanating from galaxy M87) shows us there’s more to this story:

Notice that the center of the jet is not as bright as the edges (especially the parts of the jet on the left), thus this jet is limb brightened. So, as usual in science and especially in astronomy, there are exceptions to the rule and we have another astronomical mystery to add to the pile.


* There are also some key differences between limb darkening in the Sun and in AGN jets.  First, solar limb darkening occurs because toward the edges of the Sun you’re seeing photons that have been emitted from cooler gas than when you look at the center of the Sun.  With AGN jets, it’s not clear if the plasma/gas is cooler or hotter in the outer part of the jet.  So really the only analogy AGN limb darkening has with solar limb darkening is that both phenomena result from the curvature of the surface we’re looking at (the Sun being spherical and the jet roughly cylindrical).

Do More Home Runs Happen When It’s Hot Outside?

Posted in recreational physics with tags , on November 2, 2009 by Grad Student

I was watching the world series the other day when an announcer mentioned that the ball carries farther when it’s hot outside. My wife asked me about this and here’s the explanation/equation that immediately popped into my head:

density \propto \frac{P}{T}

I’m guessing that the air pressure change isn’t as important as the temperature change in this context.  So, if we say that P is a constant, then the higher the temperature, the lower the air density.  This is important, as the drag force on a hit baseball is:

F_{drag}\propto density \propto \frac{1}{T}

Thus, the drag force is inversely proportional to the air temperature if we can assume the air pressure is constant.  In conclusion, yes, more home runs are hit when it’s hot outside because the drag force on the hit baseball is lower.
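A back-of-the-envelope check of how much the drag actually drops, assuming fixed pressure so density scales as 1/T (the 10 °C and 35 °C endpoints are my illustrative choices for a cool night versus a hot day):

```python
# Ideal gas at fixed pressure: density proportional to 1/T (T in kelvin),
# and drag is proportional to density, so F_hot / F_cool = T_cool / T_hot.
T_cool = 283.15   # 10 C in kelvin
T_hot  = 308.15   # 35 C in kelvin

drag_ratio = T_cool / T_hot   # drag on the hot day relative to the cool day
print(f"drag on a hot day is {drag_ratio:.3f} of the cool-day value")
```

So a 25 °C swing buys roughly an 8% reduction in drag; not huge per fly ball, but enough to nudge balls at the warning track over the fence.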

Chet Raymo on the Sun

Posted in Science with tags , on October 8, 2009 by Grad Student

What follows is a beautiful Chet Raymo (of “Science Musings”) essay:

Let me say this as simply as I can.

The Sun, like the universe, is mostly hydrogen. An atom of hydrogen is a single proton and a single electron, bound together.

The center of the Sun is so hot — half-a-million miles of overbearing matter — that the electrons and protons can’t hold together (try holding hands in a surging, tumultuous crowd), so what you have is a hot soup — a plasma — of protons and electrons, flailing about independently.

The electrical force between like charges is repulsive, so every time two protons approach each other they swerve away. But protons are also subject to the strong nuclear force, which is attractive, and stronger than the electrical force at very close range. Normally, if two protons approach each other, they are repelled before they get close enough for the strong nuclear force to kick in.

But now heat up the soup. The protons whiz about faster. If they are moving fast enough, they can approach closely enough for the strong nuclear force to bring them crashing together. Fusion. How hot? About 10 million degrees, which is the sort of temperature you’d find at the center of the Sun.

When two protons combine, one flings off its positive charge and becomes a neutron. Then the proton-neutron pair combines with another proton to form a helium-three nucleus — two protons and a neutron — which quickly unites with another of the same and throws off two protons to become a helium-four nucleus — two protons and two neutrons. Got that? Hydrogen is fused into helium.

The ejected positive charges go off as positrons (antielectrons), which meet up with electrons in the soup and annihilate. (There are neutrinos involved too, but let’s ignore them.)

Now for the bookkeeping.

Add up the mass of the original particles in each interaction — six protons and two electrons — and add up the mass of the final particles — a helium-four nucleus and two protons. After the orgy of combination, some mass is missing! For each individual interaction as just described the amount of missing mass is miniscule, but in the seething caldron that is the Sun’s core it amounts to four billion kilograms of vanished mass every second. Hardly missed by our star — like a thimbleful of water dipped from the ocean — but for the Earth it is the difference between day and night. Hydrogen has been turned into helium and the vanished mass appears as energy. A lot of energy. The famous Einstein equation: Energy equals mass times the speed of light squared.

The star shines!

The universe blazes with light and life.

And, knowing this — and just think what a thing it is that we know it — how is it that we whine and carp and glower? How is it that we snipe and cavil and rue our fates and that of the world? Wallace Stevens answers ironically in his poem “Gubbinal,” smothers us in irony actually:

That strange flower, the sun,
Is just what you say.
Have it your way.

The world is ugly,
And the people are sad.

That tuft of jungle feathers,
That animal eye,
Is just what you say.

That savage of fire,
That seed,
Have it your way.

The world is ugly,
And the people are sad.

I love it!  It perfectly encapsulates why I think science is beautiful.  Though I have to say I don’t quite understand how the poem fits with the rest of the essay.

Life, the Universe, and Everything: Entropy

Posted in cosmology with tags , on October 5, 2009 by Grad Student

Here’s the abstract of a curious paper by Egan and Lineweaver I saw on astro-ph a while back:

Using recent measurements of the supermassive black hole mass function we find that supermassive black holes are the largest contributor to the entropy of the observable Universe, contributing at least an order of magnitude more entropy than previously estimated. The total entropy of the observable Universe is correspondingly higher, and is $S_{obs} = 3.1^{+3.0}_{-1.7} \times 10^{104}\, k$. We calculate the entropy of the current cosmic event horizon to be $S_{CEH} = 2.6 \pm 0.3 \times 10^{122}\, k$, dwarfing the entropy of its interior, $S_{CEH\,int} = 1.2^{+1.1}_{-0.7} \times 10^{103}\, k$. We make the first tentative estimate of the entropy of dark matter within the observable Universe, $S_{dm} = 10^{88\pm1}\, k$. We highlight several caveats pertaining to these estimates and make recommendations for future work.

It is cool to think that black holes are the dominant contributors of entropy in the universe, but why is this important for understanding the universe?  Here’s the opening line of the paper:

The entropy budget of the Universe is important because its increase is associated with all irreversible processes, on all scales, across all facets of nature: gravitational clustering, accretion disks, supernovae, stellar fusion, terrestrial weather, chemical, geological and biological processes.

Okay, we know this, any other reasons?

That the increase of entropy has not yet been capped by some limiting value, such as the holographic bound (’t Hooft 1993; Susskind 1995) at $S_{max} \sim 10^{123}\, k$ (Frampton et al. 2008), is the reason dissipative processes are ongoing and that life can exist.

Mmmm, it appears that that’s it.  I can’t quite grasp why this estimate of the universe’s entropy helps us understand the universe better.  Perhaps it is important for understanding why the universe started out in such a low-entropy state (cosmologists like Sean Carroll and Roger Penrose like to think about these things).