Archive for October, 2009

Chet Raymo on the Sun

Posted in Science on October 8, 2009 by Grad Student

What follows is a beautiful Chet Raymo (of “Science Musings”) essay:

Let me say this as simply as I can.

The Sun, like the universe, is mostly hydrogen. An atom of hydrogen is a single proton and a single electron, bound together.

The center of the Sun is so hot — half-a-million miles of overbearing matter — that the electrons and protons can’t hold together (try holding hands in a surging, tumultuous crowd), so what you have is a hot soup — a plasma — of protons and electrons, flailing about independently.

The electrical force between like charges is repulsive, so every time two protons approach each other they swerve away. But protons are also subject to the strong nuclear force, which is attractive, and stronger than the electrical force at very close range. Normally, if two protons approach each other, they are repelled before they get close enough for the strong nuclear force to kick in.

But now heat up the soup. The protons whiz about faster. If they are moving fast enough, they can approach closely enough for the strong nuclear force to bring them crashing together. Fusion. How hot? About 10 million degrees, which is the sort of temperature you’d find at the center of the Sun.

When two protons combine, one flings off its positive charge and becomes a neutron. Then the proton-neutron pair combines with another proton to form a helium-three nucleus — two protons and a neutron — which quickly unites with another of the same and throws off two protons to become a helium-four nucleus — two protons and two neutrons. Got that? Hydrogen is fused into helium.

The ejected positive charges go off as positrons (antielectrons), which meet up with electrons in the soup and annihilate. (There are neutrinos involved too, but let’s ignore them.)

Now for the bookkeeping.

Add up the mass of the original particles in each interaction — six protons and two electrons — and add up the mass of the final particles — a helium-four nucleus and two protons. After the orgy of combination, some mass is missing! For each individual interaction as just described the amount of missing mass is minuscule, but in the seething caldron that is the Sun’s core it amounts to four billion kilograms of vanished mass every second. Hardly missed by our star — like a thimbleful of water dipped from the ocean — but for the Earth it is the difference between day and night. Hydrogen has been turned into helium and the vanished mass appears as energy. A lot of energy. The famous Einstein equation: Energy equals mass times the speed of light squared.

The star shines!

The universe blazes with light and life.

And, knowing this — and just think what a thing it is that we know it — how is it that we whine and carp and glower? How is it that we snipe and cavil and rue our fates and that of the world? Wallace Stevens answers ironically in his poem “Gubbinal,” smothers us in irony actually:

That strange flower, the sun,
Is just what you say.
Have it your way.

The world is ugly,
And the people are sad.

That tuft of jungle feathers,
That animal eye,
Is just what you say.

That savage of fire,
That seed,
Have it your way.

The world is ugly,
And the people are sad.

I love it!  It perfectly encapsulates why I think science is beautiful.  Though I have to say I don’t quite understand how the poem fits with the rest of the essay.
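Raymo’s four-billion-kilograms-per-second figure is easy to check for yourself. Here’s a quick sketch in Python using standard textbook values for the solar luminosity and the particle masses (the constants below are my own inputs, not from the essay):

```python
# Sanity check on the essay's bookkeeping: how much mass must the Sun
# convert to energy each second to sustain its luminosity?

L_SUN = 3.846e26   # solar luminosity in watts (standard value)
C = 2.998e8        # speed of light in m/s

# E = m c^2, so the mass converted per second is L / c^2
mass_loss_per_second = L_SUN / C**2
print(f"mass converted to energy: {mass_loss_per_second:.2e} kg/s")  # ~4e9

# Per-reaction deficit: four protons in (plus two electrons annihilated
# by the ejected positrons), one helium-4 nucleus out
M_PROTON = 1.67262e-27    # kg
M_ELECTRON = 9.10938e-31  # kg
M_HE4 = 6.64466e-27       # kg (helium-4 nucleus)

deficit = 4 * M_PROTON + 2 * M_ELECTRON - M_HE4
fraction = deficit / (4 * M_PROTON + 2 * M_ELECTRON)
print(f"fraction of mass lost per reaction: {fraction:.4f}")  # ~0.007
```

The conversion rate comes out around four billion kilograms per second, with roughly 0.7% of the mass entering each fusion chain vanishing — just as the essay’s bookkeeping describes.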

Life, the Universe, and Everything: Entropy

Posted in cosmology on October 5, 2009 by Grad Student

Here’s the abstract of a curious paper by Egan and Lineweaver I saw on astro-ph a while back:

Using recent measurements of the supermassive black hole mass function we find that supermassive black holes are the largest contributor to the entropy of the observable Universe, contributing at least an order of magnitude more entropy than previously estimated. The total entropy of the observable Universe is correspondingly higher, and is $S_{obs} = 3.1^{+3.0}_{-1.7}\times 10^{104}\, k$. We calculate the entropy of the current cosmic event horizon to be $S_{CEH} = 2.6 \pm 0.3 \times 10^{122}\, k$, dwarfing the entropy of its interior, $S_{CEH\,int} = 1.2^{+1.1}_{-0.7}\times 10^{103}\, k$. We make the first tentative estimate of the entropy of dark matter within the observable Universe, $S_{dm} = 10^{88\pm1}\, k$. We highlight several caveats pertaining to these estimates and make recommendations for future work.

It is cool to think that black holes are the dominant contributors to the entropy of the universe, but why is this important for understanding the universe?  Here’s the opening line of the paper:

The entropy budget of the Universe is important because its increase is associated with all irreversible processes, on all scales, across all facets of nature: gravitational clustering, accretion disks, supernovae, stellar fusion, terrestrial weather, chemical, geological and biological processes.

Okay, we know this, any other reasons?

That the increase of entropy has not yet been capped by some limiting value, such as the holographic bound (’t Hooft 1993; Susskind 1995) at $S_{max} \approx 10^{123}\, k$ (Frampton et al. 2008), is the reason dissipative processes are ongoing and that life can exist.

Mmmm, it appears that that’s it.  I can’t quite grasp why this estimate of the universe’s entropy helps us understand the universe better.  Perhaps it is important for understanding why the universe started out in such a low entropy state (cosmologists like Sean Carroll and Roger Penrose like to think about these things).
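To get a feel for why supermassive black holes dominate the budget, you can plug numbers into the Bekenstein–Hawking formula for a non-rotating black hole, S/k = 4πGM²/(ħc). Here’s a minimal sketch (the constant values and the billion-solar-mass example are my own choices, not from the paper):

```python
import math

# Bekenstein-Hawking entropy of a Schwarzschild black hole, in units of
# Boltzmann's constant k:  S/k = 4*pi*G*M^2 / (hbar*c)
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
HBAR = 1.0546e-34  # reduced Planck constant, J s
C = 2.998e8        # speed of light, m/s
M_SUN = 1.989e30   # solar mass, kg

def bh_entropy_over_k(mass_kg):
    """Entropy S/k of a non-rotating black hole of the given mass."""
    return 4 * math.pi * G * mass_kg**2 / (HBAR * C)

# A billion-solar-mass black hole, typical of bright galactic centers
print(f"{bh_entropy_over_k(1e9 * M_SUN):.1e}")  # ~1e95
```

A single billion-solar-mass black hole carries on the order of 10^95 k — already vastly more than the paper’s ~10^88 k estimate for all the dark matter — so it’s not hard to believe the supermassive black hole population dominates the total.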

Research: Numerical Problems

Posted in Science on October 2, 2009 by Grad Student

The coding I do in my research is relatively simple.  I just solve some integrals that can’t be done analytically.  Simple, right?  You’d think so, but I’m running into a classic problem: dividing by zero.

[graph: the computed quantity]

There’s bizarre stuff going on on the left side of the graph, and there are two divergent features, at x = -0.2 and x = 0.4.  Why that’s happening is very simple: the denominator is going to zero at those points.  Here’s a graph of the denominator:

[graph: the denominator]

The problem is, I don’t know how to get around this.  That’s what the equation does, so… what do I do?  Clearly I need some clever mathematical trick, perhaps expressing the quantity in another form without the singularity, I don’t know.  It’s frustrating though.

UPDATE:

As I look at what the numerator is doing I’m encouraged.  Take a look

[graph: the numerator]

Notice that the numerator is zero at all the places (and more) where the denominator is zero: x = -1 to -0.8, x = -0.4 and x = 0.2.  This suggests that I’m not making some obvious mistake in my physics.  If I had a fully analytic solution to these integrals, I imagine the divergences would go away.  While this observation makes me feel better, I’m still stuck.  My best guess is that since both the numerator and denominator go to zero at those points, the overall function should go to zero there as well, much like the function:

f(x)=\frac{(x-1)^2}{(x-1)}

Both the numerator and denominator go to zero, but if you do your algebra correctly (or just take the limit as x approaches 1) the function goes to zero at the point where the denominator blows up (x = 1).
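The behavior is easy to see numerically.  A toy sketch using the example function above (my own illustration, not the actual research integrals):

```python
# The naive quotient (x-1)^2/(x-1) is 0/0 exactly at x = 1, but the
# algebraically simplified form (x - 1) is perfectly well behaved there.

def naive(x):
    return (x - 1)**2 / (x - 1)   # raises ZeroDivisionError at x = 1

def simplified(x):
    return x - 1                   # same function, singularity removed

for x in [0.999, 1.001]:
    print(x, naive(x), simplified(x))   # the two forms agree away from x = 1

try:
    naive(1.0)
except ZeroDivisionError:
    print("naive form fails at x = 1; simplified gives", simplified(1.0))
```

Away from the bad point the two forms agree, and the simplified form smoothly passes through zero where the naive one breaks — which is exactly the behavior I’m hoping my integrals have.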

UPDATE II:

[Embarrassed clearing of throat] I found the solution: it was a missing minus sign.

The Physics of Scaling Laws and Dimensional Analysis

Posted in Science on October 2, 2009 by Grad Student

I’ve been puzzling over dimensional analysis and scaling laws lately. Believe it or not, professional physicists sometimes use dimensional analysis to do a quick back-of-the-envelope calculation and usually get the correct answer to within an order of magnitude or so. In this post I’m going to give three examples of how the “magic” of dimensional analysis can be used to guess the answer.  I’ll start with a simple example from introductory physics and move to more advanced examples.

1)  If a baseball is dropped from a height ‘h’ above the ground, how long until it hits the ground neglecting air resistance?   We know that since air resistance is neglected the size of the baseball is irrelevant, and Galileo tells us the mass of the baseball is also unimportant.  That leaves the height from which the ball is dropped, h, which is in units of length [L], and the acceleration due to gravity, g, which is in units of length per time squared [L/T^2].  So, arranging these two quantities to give the dimension of time is simple:

T=L^a\left(\frac{L}{T^2}\right)^b

Which gives,

T=L^{a+b} T^{-2b}

We need time to the first power, so the power of length must be zero, giving a = -b, and the power of time must be one, giving -2b = 1.  Thus b = -1/2 and a = 1/2, and voila:

\Delta t = \sqrt{\frac{h}{g}}

And this answer is only off by a dimensionless factor of ~1.4 (root 2)!  The exact kinematics result is \Delta t = \sqrt{2h/g}.
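A quick numerical check of example 1 (the drop height and g values below are arbitrary numbers I picked):

```python
import math

# Dimensional-analysis estimate sqrt(h/g) vs the exact kinematics result
# sqrt(2h/g) for a ball dropped from height h, neglecting air resistance
g = 9.81   # m/s^2, acceleration due to gravity
h = 20.0   # drop height in meters (arbitrary example)

estimate = math.sqrt(h / g)     # dimensional analysis
exact = math.sqrt(2 * h / g)    # full kinematics
print(f"estimate: {estimate:.2f} s, exact: {exact:.2f} s, "
      f"ratio: {exact / estimate:.3f}")
# The ratio is sqrt(2) ~ 1.414 no matter what h you choose
```

The dimensionless factor the method misses is the same for every height, which is exactly why dimensional analysis gets you the right scaling even when it can’t give you the prefactor.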

2) As it turns out, you can do precisely the same thing to figure out how large craters will be when you drop rocks into the sand (this was the topic of my previous post). The critical assumption you have to make is in deciding which variables are important. Namely, you have to decide that density is the only important property of the sand.  (It turns out that for other materials there are other dimensional quantities you must include.)  Then you assume the only other important quantities are the gravitational acceleration g (without it there is no way to cancel the units of time) and the kinetic energy of the ball.  Finally, you do dimensional analysis exactly as I did above and you will find the same scaling law I found in the last post for relating “meteor” kinetic energy (KE) to crater diameter (D):
D \propto KE^{1/4}

Notice how you can get the same answer as I did in the last post without thinking about all the physics and kinematics equations.
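The exponent matching for the crater problem can be written out as a small linear system.  This sketch assumes the relevant quantities are the sand density rho, the gravitational acceleration g, and the impact kinetic energy; the bookkeeping below is mine, not from the post:

```python
from fractions import Fraction

# Match dimensions in D ~ rho^a * g^b * KE^c.
#   rho: M L^-3,  g: L T^-2,  KE: M L^2 T^-2,  and D is a pure length L.
# Equating the exponents of M, L, T gives three linear equations:
#   M:   a      + c  = 0
#   L: -3a + b + 2c  = 1
#   T:     -2b - 2c  = 0
# From M, a = -c; from T, b = -c; substituting into L gives 4c = 1.
c = Fraction(1, 4)
a = -c
b = -c
print(f"D ~ rho^({a}) g^({b}) KE^({c})")
```

So D ~ (KE / (rho*g))^(1/4): at fixed sand density and gravity, that is exactly the D ∝ KE^{1/4} law above.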

3) Now here’s a truly bizarre application of dimensional analysis to the Schrodinger equation that I saw on the first day of a quantum mechanics class. Let’s say that you’re tired of solving partial differential equations in three dimensions and you just want to guess what the ground state energy of an electron is in a hydrogen atom.  You bypass the Schrodinger equation in all its glory and simply write the (classical) energy equation for an electron:
E= \frac{p^2}{2m}-\frac{e^2}{4 \pi \epsilon_0 r}
Now you think: in quantum mechanics you always have to throw in Planck’s constant, so how do you do that?  Well, we have momentum and position… mmm… momentum and position, then BAM!, you remember that Planck’s constant (h-bar) has units of position times momentum, because you remember the Heisenberg uncertainty principle.  So let’s just postulate through dimensional analysis that:
p r = \hbar
and substitute that into the energy equation, eliminating position:
E= \frac{p^2}{2m}-\frac{e^2 p}{4 \pi \epsilon_0 \hbar}
Now we want to find the lowest energy level, so we do the usual calculus to minimize this function, E(p), and find:
p_{min}=\frac{e^2 m}{4 \pi \epsilon_0 \hbar}
Which tells us that:
E_{min}= -\frac{e^4m}{2(4 \pi \epsilon_0)^2 \hbar^2} \sim -13.6eV
And out of sheer dumb luck, that is not just the approximate answer, it’s the exact answer to the full solution of the Schrodinger equation!  Before I celebrate too much, I should admit that I could have used h instead of h-bar (which is just h/2pi).  In that case I would have gotten -0.34 eV, which is definitely in the right ballpark, but not nearly as impressive as getting the exact answer.

Of course, you could just use dimensional analysis on all the relevant constants in the equation.  You know that, this being quantum mechanics, you have to include Planck’s constant.  Obviously the electric force is involved, so you have to include the fundamental unit of charge and the permittivity of free space in the following combination:

\frac{e^2}{4 \pi \epsilon_0}

And finally you include the mass of the electron.  If you have some experience with these types of problems you will know that you don’t have to include the proton mass, because the proton is so much heavier than the electron that it essentially doesn’t move.  Then you have to solve the following equation for a, b and c.

ML^2 T^{-2}=\left[\frac{e^2}{4 \pi \epsilon_0}\right]^a \left[m_e\right]^b \left[\hbar\right]^c

Where the brackets [] mean the units of the constant, and the left hand side of the equation is the units of energy.  When you do this you will find that a = 2, b = 1, and c = -2, which gives:

\frac{m_e e^4}{(4 \pi \epsilon_0)^2\hbar^2} \sim 27.2 eV

This result is only off by a factor of two!  Also, you should know that a negative sign belongs there as we’re talking about bound energies.  So, if the above were all you knew about the hydrogen atom, you would be able to rightly conclude that all the bound electronic energy levels are between 0 eV (which is to say the electron is free) and negative several tens of eV.  And this is totally correct!  Then when you have more time you can show that all the bound levels are between 0 eV and -13.6 eV.
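Working the three exponent equations through explicitly, and then plugging in numbers (the constant values below are standard textbook figures, my own inputs):

```python
import math

# Check the exponents in [energy] = [e^2/(4 pi eps0)]^a [m_e]^b [hbar]^c.
#   e^2/(4 pi eps0): energy * length = M L^3 T^-2
#   m_e:             M
#   hbar:            M L^2 T^-1
# Matching against energy (M L^2 T^-2):
#   M:  a + b + c =  1
#   L:  3a   + 2c =  2
#   T: -2a   -  c = -2
# From T, c = 2 - 2a; substituting into L: 3a + 4 - 4a = 2, so a = 2,
# then c = -2 and b = 1.
a, b, c = 2, 1, -2
assert a + b + c == 1 and 3*a + 2*c == 2 and -2*a - c == -2

# Evaluate m_e * e^4 / ((4 pi eps0)^2 hbar^2) in eV
M_E = 9.10938e-31    # electron mass, kg
E_CH = 1.60218e-19   # elementary charge, C
EPS0 = 8.85419e-12   # vacuum permittivity, F/m
HBAR = 1.05457e-34   # reduced Planck constant, J s

E = M_E * E_CH**4 / ((4 * math.pi * EPS0)**2 * HBAR**2)
print(f"{E / E_CH:.1f} eV")  # about 27.2 eV, twice the 13.6 eV binding energy
```

The combination of constants lands within a factor of two of the true ground-state binding energy, with no differential equations in sight.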

I know it sounds crazy, just cobbling together relevant quantities to get important results, but it works.  If you understand the physics of the problem well enough, and you can guess what the relevant parameters are, then you’re ready to pull out the envelope and start calculating.