Physics




Q. What are Gravity waves?
Q. Is energy always conserved?
Q. What are the effects of Finite Light Speed?
Q. What is the top quark?
Q. What are tachyons?
Q. What's the barn/pole paradox?
Q. What is the Twin Paradox?
Q. What are subatomic particles?
Q. What is Olbers' Paradox?
Q. What is Dark Matter?
Q. Does hot water freeze faster?
Q. Does my bathtub drain differently depending on whether I live in the northern or southern hemisphere?
Q. Why are Golf Balls Dimpled?
Q. Why do Mirrors Reverse Left and Right?
Q. What is the Mass of a Photon?
Q. Who won physics Nobel Prizes?
Q. What is a Plasma?
Q. What are Newton's three laws?


Q. What are Gravity waves?

Ans. Gravitational Radiation is to gravity what light is to electromagnetism. It is produced when massive bodies accelerate. You can accelerate any body so as to produce such radiation, but due to the feeble strength of gravity, it is entirely undetectable except when produced by intense astrophysical sources such as supernovae, collisions of black holes, etc. These are quite far from us, typically, but they are so intense that they dwarf all possible laboratory sources of such radiation.

Gravitational waves have a polarization pattern that causes objects to expand in one direction, while contracting in the perpendicular direction. That is, they have spin two. This is because gravity waves are fluctuations in the tensorial metric of space-time.

All oscillating radiation fields can be quantized, and in the case of gravity, the intermediate boson is called the "graviton" in analogy with the photon. But quantum gravity is hard, for several reasons:

(1) The quantum field theory of gravity is hard, because gauge interactions of spin-two fields are not renormalizable. See Cheng and Li, Gauge Theory of Elementary Particle Physics (search for "power counting").
(2) There are conceptual problems - what does it mean to quantize geometry, or space-time?

It is possible to quantize weak fluctuations in the gravitational field. This gives rise to the spin-2 graviton. But full quantum gravity has so far escaped formulation. It is not likely to look much like the other quantum field theories. In addition, there are models of gravity which include additional bosons with different spins. Some are the consequence of non-Einsteinian models, such as Brans-Dicke which has a spin-0 component. Others are included by hand, to give "fifth force" components to gravity. For example, if you want to add a weak repulsive short range component, you will need a massive spin-1 boson. (Even-spin bosons always attract. Odd-spin bosons can attract or repel.) If antigravity is real, then this has implications for the boson spectrum as well.

The spin-two polarization provides the method of detection. All experiments to date use a "Weber bar." This is a cylindrical, very massive bar suspended by fine wire, free to oscillate in response to a passing gravitational wave. A high-sensitivity, low-noise capacitive transducer can turn the oscillations of the bar into an electric signal for analysis. So far such searches have failed, but this is not surprising: they are expected to be insufficiently sensitive to the radiation intensity expected from known types of sources.

A more sensitive technique uses very long baseline laser interferometry. This is the principle of LIGO (Laser Interferometer Gravitational-Wave Observatory). This is a two-armed detector, with perpendicular laser beams each travelling several km before meeting to produce an interference pattern which fluctuates if a gravity wave distorts the geometry of the detector. To eliminate noise from seismic effects as well as human noise sources, two detectors separated by hundreds to thousands of miles are necessary. A coincidence measurement then provides evidence of gravitational radiation. In order to determine the source of the signal, a third detector, far from either of the first two, would be necessary. Timing differences in the arrival of the signal at the three detectors would allow triangulation of the angular position of the source in the sky.

The first stage of LIGO, a two-detector setup in the U.S., was approved by Congress in 1992. LIGO researchers have started designing a prototype detector, and hope to enlist another nation, probably in Europe, to fund and host the third detector.

The speed of gravitational radiation (C_gw) depends upon the specific model of Gravitation that you use. There are quite a few competing models (all consistent with all experiments to date) including of course Einstein's but also Brans-Dicke and several families of others. All metric models can support gravity waves. But not all predict radiation travelling at C_gw = C_em. (C_em is the speed of electromagnetic waves.)

There is a class of theories with "prior geometry", in which, as I understand it, there is an additional metric which does not depend only on the local matter density. In such theories, C_gw != C_em in general.

However, there is good evidence that C_gw is in fact at least almost C_em. We observe high-energy cosmic rays in the 10 ** 20 - 10 ** 21 eV region. Such particles are travelling at up to ( 1 - 10 ** -18 ) * C_em. If C_gw < C_em, then particles with C_gw < v < C_em will radiate Cerenkov gravitational radiation into the vacuum, and decelerate from the back reaction. So the observation of these very fast cosmic rays is good evidence that C_gw >= ( 1 - 10 ** -18 ) * C_em, very close indeed to C_em.
Bottom line: in a purely Einsteinian universe, C_gw = C_em. However, a class of models not yet ruled out experimentally does make other predictions.

A definitive test would be produced by LIGO in coincidence with optical measurements of some catastrophic event which generates enough gravitational radiation to be detected. Then the "time of flight" of both gravitons and photons from the source to the Earth could be measured, and strict direct limits could be set on C_gw.

For more information, see Gravitational Radiation (NATO ASI - Les Houches 1982), specifically the introductory essay by Kip Thorne.


Q. Is energy always conserved?

Ans. NO
Why? Every conserved quantity is the result of some symmetry of nature. This is known as Noether's theorem. For example, momentum conservation is the result of translation invariance, because position is the variable conjugate to momentum. Energy would be conserved due to time-translation invariance. However, in an expanding or contracting universe, there is no time-translation invariance. Hence energy is not conserved. If you want to learn more about this, read Goldstein's Classical Mechanics, and look up Noether's theorem.

Does Red-Shift lead to Energy Non-Conservation? Sometimes.

There are three basic cosmological sources of red-shifted light:
(1) Very massive objects emitting light
(2) Very fast objects emitting light
(3) Expansion of the universe leading to CBR (Cosmic Background Radiation) red-shift
About each:

(1) Light has to climb out of the gravitational well of a very massive object. It gets red-shifted as a result. As several people have commented, this does not lead to energy non-conservation, because the photon had negative gravitational potential energy when it was deep in the well. No problems here. If you want to learn more about this, read Misner, Thorne, and Wheeler's Gravitation, if you dare.

(2) Fast objects moving away from you emit Doppler-shifted light. No problems here either. Energy is only one part of a four-vector, so it changes from frame to frame. However, when looked at in a Lorentz invariant way, you can convince yourself that everything is OK here too. If you want to learn more about this, read Taylor and Wheeler's Spacetime Physics.

(3) CBR has red-shifted over billions of years. Each photon gets redder and redder. And the energy is lost. This is the only case in which red-shift leads to energy non-conservation. Several people have speculated that radiation pressure "on the universe" causes it to expand more quickly, attempting to identify the missing energy with the speed at which the universe is expanding due to radiation pressure. This argument is completely specious. If you add more radiation to the universe, you add more energy, the universe is now more closed than ever, and the expansion rate slows.
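
To make the CBR case concrete: a photon's wavelength stretches in proportion to the cosmological scale factor a, so its energy falls as 1/a. Here is a minimal Python sketch; the factor of ~1100 between last scattering and today is the standard figure, but the starting wavelength is an illustrative value of mine.

    # Photon energy scales as 1/a, where a is the cosmological scale factor.
    h = 6.626e-34                # Planck's constant, J s
    c = 3.0e8                    # speed of light, m/s

    wavelength_then = 1.0e-6     # m, illustrative photon at last scattering
    a_ratio = 1100.0             # a(now) / a(last scattering), standard figure

    E_then = h * c / wavelength_then               # energy at emission, J
    E_now = h * c / (wavelength_then * a_ratio)    # energy received today, J
    print(E_then, E_now)         # the factor of ~1100 in energy is simply lost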

If you really MUST construct a theory in which something like energy is conserved (which is dubious in a universe without time-translation invariance), it is possible to arbitrarily define things so that energy has an extra term which compensates for the loss. However, although the resultant quantity may be a constant, it is of questionable value, and certainly is not an integral associated with time-invariance, so it is not what everyone calls energy.


Q. What are the effects of Finite Light Speed?

Ans. There are two well known phenomena which are due to the finite speed of electromagnetic radiation, but are essentially classical in nature, requiring no other facts of special relativity for their understanding.

(1) Apparent Superluminal Velocity of Galaxies

A distant object can appear to travel faster than the speed of light relative to us, provided that it has some component of motion towards us as well as perpendicular to our line of sight. Say that on Jan. 1 you make a position measurement of galaxy X. One month later, you measure it again. Assuming you know its distance from us by some independent measurement, you derive its linear speed, and conclude that it is moving faster than the speed of light.

What have you forgotten? Let's say that on Jan. 1, the object is D km from us, and that between Jan. 1 and Feb. 1, the object has moved d km closer to us. You have assumed that the light you measured on Jan. 1 and Feb. 1 was emitted exactly one month apart. Not so. The first light beam had further to travel, and was actually emitted ( 1 + d / c ) months before the second beam, if we measure c in km/month. The object has traveled the given angular distance in more time than you thought. Similarly, if the object is moving away from us, the apparent angular velocity will be too slow, if you do not correct for this effect, which becomes significant when the object is moving along a line close to our line of sight.

Note that most extragalactic objects are moving away from us due to the Hubble expansion. So for most objects, you don't get superluminal apparent velocities. But the effect is still there, and you need to take it into account if you want to measure velocities by this technique.
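
The geometry above boils down to a standard formula: for a true speed v at angle theta to the line of sight (theta = 0 means straight at us), the apparent sky-plane speed is v * sin(theta) / ( 1 - (v/c) * cos(theta) ). A short Python sketch (the function name and sample numbers are mine):

    import math

    def apparent_transverse_speed(v, theta):
        # v in units of c; theta in radians from the line of sight
        return v * math.sin(theta) / (1.0 - v * math.cos(theta))

    # A source moving at 0.95c, 30 degrees from the line of sight:
    print(apparent_transverse_speed(0.95, math.radians(30)))   # ~2.7: "superluminal"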

References:

Considerations about the Apparent 'Superluminal Expansions' in Astrophysics, E. Recami, A. Castellino, G.D. Maccarrone, M. Rodono, Nuovo Cimento 93B, 119 (1986).

Apparent Superluminal Sources, Comparative Cosmology and the Cosmic Distance Scale, Mon. Not. R. Astr. Soc. 242, 423-427 (1990).

(2) Terrell Rotation

Consider a cube moving across your field of view with speed near the speed of light. The trailing face of the cube is edge on to your line of sight as it passes you. However, the light from the back edge of that face (the edge of the square farthest from you) takes longer to get to your eye than the light from the front edge. At any given instant you are seeing light from the front edge at time t and the back edge at time t-(L/c), where L is the length of an edge. This means you see the back edge where it was some time earlier. This has the effect of rotating the image of the cube on your retina.

This does not mean that the cube itself rotates. The image is rotated. And this depends only on the finite speed of light, not on any other postulate of special relativity. You can calculate the rotation angle by noting that the side face of the cube is Lorentz contracted to
L' = L / gamma.

This will correspond to a rotation angle of arccos( 1 / gamma ).
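
As a quick numerical check of these two formulas (a sketch, with c = 1 and an arbitrary choice of speed):

    import math

    v = 0.9                        # cube's speed, in units of c
    gamma = 1.0 / math.sqrt(1.0 - v * v)               # ~2.29
    L = 1.0                        # rest length of an edge
    L_contracted = L / gamma                           # ~0.44
    rotation = math.degrees(math.acos(1.0 / gamma))    # ~64 degrees
    print(gamma, L_contracted, rotation)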

It turns out, if you do the math for a sphere, that the amount of apparent rotation exactly cancels the Lorentz contraction. The object itself is flattened, but then you see behind it as it flies by just enough to restore it to its original size. So the image of a sphere is unaffected by the Lorentz flattening that it experiences.

Another implication of this is that if the object is moving at nearly the speed of light, although it is contracted into an infinitesimally thin pancake, you see it rotated by almost a full 90 degrees, so you see the complete backside of the object, and it doesn't disappear from view. In the case of the sphere, you see the transverse cross-section (which suffers no contraction), so that it still appears to be exactly a sphere.

That it took so long historically to realize this is undoubtedly due to the fact that although we were regularly accelerating particle beams in 1959 to relativistic speeds, we still do not have the technology to accelerate any macroscopic objects to speeds necessary to reveal the effect.

References: J. Terrell, Phys Rev. 116, 1041 (1959). For a textbook discussion, see Marion's Classical Dynamics, Section 10.5.


Q. What is the top quark?

Ans. The top quark is the hypothetical sixth fundamental strongly interacting particle ( quark ). The known quarks are up ( u ), down ( d ), strange ( s ), charm ( c ) and bottom ( b ). The Standard Model requires quarks to come in pairs in order to prevent mathematical inconsistency due to certain "anomalous" Feynman diagrams, which cancel if and only if the quarks are paired. The pairs are ( d , u ) , ( s , c ) and ( b , ? ). The missing partner of the b is called "top".

In addition, there is experimental evidence that the b quark has an "isodoublet" partner, which is so far unseen. The forward-backward asymmetry in the reaction e+ + e- -> b + b-bar and the absence of flavor-changing neutral currents in b decays imply the existence of the isodoublet partner of the b. ("b-bar", pronounced "bee bar", signifies the b antiquark.)

The mass of the top quark is restricted by a variety of measurements. Due to radiative corrections which depend on the top quark circulating as a virtual particle inside the loop in the Feynman diagram, a number of experimentally accessible processes depend on the top quark mass. There are about a dozen such measurements which have been made so far, including the width of the Z, b-b-bar mixing (which historically gave the first hints that the top quark was very massive), and certain aspects of muon decay. These results collectively limit the top mass to roughly 140 +/- 30 GeV. This uncertainty is a "1-sigma" error bar.

Direct searches for the top quark have been performed, looking for the expected decay products in both p-p-bar and e+e- collisions. The best current limits on the top mass are:

(1) From the absence of Z -> t + t-bar, M(t) > M(Z) / 2 = 45 GeV. This is a "model independent" result, depending only on the fact that the top quark should be weakly interacting, coupling to the Z with sufficient strength to have been detected at the current resolution of the LEP experiments, which have cornered the market on Z physics in the last several years.

(2) From the absence of top quark decay products in the reaction p + p-bar -> t + t-bar -> hard leptons + X at Fermilab's Tevatron collider, by the CDF (Collider Detector at Fermilab) experiment. Each top quark is expected to decay into a W boson and a b quark. Each W subsequently decays into either a charged lepton and a neutrino or two quarks. The cleanest signature for the production and decay of the t-t-bar pair is the presence of two high-transverse-momentum (high Pt) leptons (electron or muon) in the final state. Other decay modes have higher branching ratios, but have serious experimental backgrounds from W bosons produced in association with jets. The current lower limit on M(t) from such measurements is 91 GeV (95% confidence), 95 GeV (90% confidence). However, these limits assume that the top quark has the expected decay products in the expected branching ratios, making these limits "model dependent," and consequently not as "hard" as the considerably lower LEP limit of ~45 GeV.

The future is very bright for detecting the top quark. LEP II, the upgrade of CERN's e+e- collider to E >= 2 * M_W = 160 GeV by 1994, will allow a hard lower limit of roughly 90 GeV to be set. Meanwhile, upgrades to CDF, the start of a new experiment, D0, and upgrades to the accelerator complex at Fermilab have recently allowed higher event rates and better detector resolution. These should allow production of standard model top quarks of mass < 150 GeV in the next two years, and of even higher masses further in the future, at high enough event rates to identify the decays and give rough mass measurements.

References:

Phys. Rev. Lett. 68, 447 (1992) and the references therein.



Q. What are tachyons?

Ans.

There was a young lady named Bright,
Whose speed was far faster than light.
She went out one day,
In a relative way,
And returned the previous night!
                -Reginald Buller

It is a well known fact that nothing can travel faster than the speed of light. At best, a massless particle travels at the speed of light. But is this really true? In 1962, Bilaniuk, Deshpande, and Sudarshan, Am. J. Phys. 30, 718 (1962), said "no". A very readable paper is Bilaniuk and Sudarshan, Phys. Today 22, 43 (1969). I give here a brief overview.

Draw a graph, with momentum ( p ) on the x-axis, and energy ( E ) on the y-axis. Then draw the "light cone", two lines with the equations E = +/- p. This divides our 1 + 1 dimensional space-time into two regions. Above and below are the "timelike" quadrants, and to the left and right are the "spacelike" quadrants.

Note: ** = to the power of

Now the fundamental fact of relativity is that E ** 2 - p ** 2 = m ** 2. (Let's take c = 1 for the rest of the discussion.) For any non-zero value of m (mass), this is a hyperbola with branches in the timelike regions. It passes through the point ( p , E ) = ( 0 , m ), where the particle is at rest. Any particle with mass m is constrained to move on the upper branch of this hyperbola. (Otherwise, it is "off-shell", a term you hear in association with virtual particles - but that's another topic.) For massless particles, E ** 2 = p ** 2, and the particle moves on the light-cone. These two cases are given the names tardyon (or bradyon in more modern usage) and luxon, for "slow particle" and "light particle". Tachyon is the name given to the supposed "fast particle" which would move with v > c. Now another familiar relativistic equation is E = m * [ 1 - ( v / c ) ** 2 ] ** ( -.5 ).
Tachyons (if they exist) have v > c. This means that E is imaginary! Well, what if we take the rest mass m, and take it to be imaginary? Then E is real, and E ** 2 - p ** 2 = m ** 2 < 0. Or, p ** 2 - E ** 2 = M ** 2, where M is real. This is a hyperbola with branches in the spacelike region of spacetime. Tachyons are constrained to move on this hyperbola.

You can now deduce many interesting properties of tachyons. For example, they accelerate (v goes up) if they lose energy (E goes down). Furthermore, a zero-energy tachyon is "transcendent," or infinitely fast. This has profound consequences. For example, let's say that there are electrically charged tachyons. Since they move faster than the speed of light in the vacuum, they produce Cerenkov radiation. This lowers their energy, and they accelerate. So any charged tachyon in the region of spacetime where you might choose to put a "charged tachyon detector" will quickly accelerate off to the edge of the universe, to be lost forever. You will never find a charged tachyon, whether they exist or not.
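
A small sketch of the kinematics (c = 1): on the spacelike hyperbola, E = sqrt( p ** 2 - M ** 2 ), so the speed v = p / E is always greater than 1 and grows without bound as E shrinks, which is the "losing energy means accelerating" behavior just described.

    import math

    M = 1.0   # the real "meta-mass" M, with c = 1

    def tachyon_energy(p):
        # spacelike branch: E**2 = p**2 - M**2, valid for p > M
        return math.sqrt(p * p - M * M)

    for p in (10.0, 3.0, 1.5, 1.01):
        E = tachyon_energy(p)
        v = p / E                  # always > 1; blows up as E -> 0
        print(p, E, v)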

However, tachyons are not entirely invisible. You can imagine that you might produce them in some exotic nuclear reaction. If they are charged, you could "see" them by detecting the Cerenkov light they produce as they speed away faster and faster. Such experiments have been done. So far, no tachyons have been found. Even neutral tachyons can scatter off normal matter with experimentally observable consequences. Again, no such tachyons have been found.

Once you move away from relativistic kinematics and start talking about the quantum field theory or particle physics of tachyons, things get much more complicated. It is not easy to summarize results here. However, one reasonably modern reference is _Tachyons, Monopoles, and Related Topics_, E. Recami, ed. (North-Holland, Amsterdam, 1978).

One little-publicized fact is that in the framework of field theory, one CANNOT transmit information faster than the speed of light with tachyons. Since this may be controversial let us be more precise. It's easiest to begin by looking at the wave equation for a free scalar particle, the so-called Klein-Gordon equation:

( BOX + m ** 2 ) phi = 0

where BOX is the D'Alembertian, which in 1+1 dimensions is just
BOX = ( d / dt ) ** 2 - ( d / dx ) ** 2.

(For four-dimensional space-time just throw in - ( d / dy ) ** 2 - ( d / dz ) ** 2.) In field theory, noninteracting massive particles (tardyons) are described by this equation with the mass m being real. Non-interacting tachyons would be described by this equation with m imaginary. Regardless of m, any solution is a linear combination, or superposition, of solutions of the form

exp(-iEt + ipx)

where E ** 2 - p ** 2 = m ** 2. By actually solving the equation this way, one notices a strange thing. If the solution phi and its time derivative are zero outside the interval [ -L , L ] when t = 0, they will be zero outside the interval [ -L-|t| , L+|t| ] at any time t. In other words, disturbances do not spread with speed faster than 1 (the speed of light).
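
This support property can be checked numerically. Below is a minimal finite-difference sketch of the 1+1 dimensional Klein-Gordon equation with m ** 2 = -1 (tachyonic); the grid spacing, initial bump, and threshold are arbitrary choices of mine. The amplitude grows (the vacuum instability discussed below), but the support still spreads no faster than 1:

    import math

    # Leapfrog scheme for phi_tt = phi_xx - m2 * phi on x in [-10, 10].
    m2 = -1.0
    dx, dt = 0.02, 0.01            # dt < dx keeps the scheme stable (CFL)
    N = 1000
    x = [(i - N // 2) * dx for i in range(N + 1)]

    # Initial data: a smooth bump supported in [-1, 1], zero time derivative.
    phi = [math.exp(-1.0 / (1.0 - xi * xi)) if abs(xi) < 1.0 else 0.0 for xi in x]
    prev = phi[:]                  # phi at t = -dt (time-symmetric start)

    t = 0.0
    while t < 3.0:
        nxt = [0.0] * (N + 1)
        for i in range(1, N):
            lap = (phi[i + 1] - 2.0 * phi[i] + phi[i - 1]) / dx ** 2
            nxt[i] = 2.0 * phi[i] - prev[i] + dt ** 2 * (lap - m2 * phi[i])
        prev, phi = phi, nxt
        t += dt

    # The bump has grown enormously, but its support stays close to
    # [ -(1 + t), 1 + t ]: no faster-than-light spreading.
    peak = max(abs(p) for p in phi)
    edges = [xi for xi, p in zip(x, phi) if abs(p) > 1e-9 * peak]
    print(t, min(edges), max(edges))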

However, there are lots of problems with tachyons in quantum field theory. A lot of mathematically rigorous work on quantum field theory uses the Garding-Wightman axioms for quantum fields. These rule out tachyons for other reasons because they require that all states satisfy E ** 2 - p ** 2 >= 0. This allows one to define the vacuum as the state minimizing E ** 2 - p ** 2 (required by these axioms to be unique). As described above, theories with tachyons violate this axiom. In fact, if one has a bunch of tachyons around, one can make E ** 2 - p ** 2 as negative as you like. Heuristically, this is bad because it means that the vacuum is unstable: spontaneous creation of tachyon-antitachyon pairs will tend to occur, reducing the total energy of the system.



Q. What's the barn/pole paradox?

Ans. These are the props. You own a barn, 40m long, with automatic doors at either end, that can be opened and closed simultaneously by a switch. You also have a pole, 80m long, which of course won't fit in the barn.

Now someone takes the pole and tries to run (at nearly the speed of light) through the barn with the pole horizontal. Special Relativity (SR) says that a moving object is contracted in the direction of motion: this is called the Lorentz Contraction. So, if the pole is set in motion lengthwise, then it will contract in the reference frame of a stationary observer.

You are that observer, sitting on the barn roof. You see the pole coming towards you, and it has contracted to a bit less than 40m. So, as the pole passes through the barn, there is an instant when it is completely within the barn. At that instant, you close both doors. Of course, you open them again pretty quickly, but at least momentarily you had the contracted pole shut up in your barn. The runner emerges from the far door unscathed.

But consider the problem from the point of view of the runner. She will regard the pole as stationary, and the barn as approaching at high speed. In this reference frame, the pole is still 80m long, and the barn is less than 20 meters long. Surely the runner is in trouble if the doors close while she is inside. The pole is sure to get caught.

Well, does the pole get caught in the door or doesn't it? You can't have it both ways. This is the "barn-pole paradox." The answer is buried in the misuse of the word "simultaneously" back in the first sentence of the story. In SR, events separated in space that appear simultaneous in one frame of reference need not appear simultaneous in another frame of reference. The closing doors are two such separate events.

SR explains that the two doors are never closed at the same time in the runner's frame of reference. So there is always room for the pole. In fact, the Lorentz transformation for time is
t' = ( t - v * x / c ** 2 ) / sqrt( 1 - v ** 2 / c ** 2 ).

It's the v * x term in the numerator that causes the mischief here. In the runner's frame the further event (larger x) happens earlier. The far door is closed first. It opens before she gets there, and the near door closes behind her. Safe again - either way you look at it, provided you remember that simultaneity is not a constant of physics.
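
A short numerical sketch of that resolution, with numbers chosen to match the story (an 80 m pole contracted to 40 m requires gamma = 2, i.e. v = sqrt(3)/2 * c):

    import math

    c = 299792458.0                    # m/s
    v = math.sqrt(3.0) / 2.0 * c       # gamma = 2: the 80 m pole measures 40 m
    gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)

    def t_prime(t, x):
        # Lorentz transformation of the time coordinate into the runner's frame
        return gamma * (t - v * x / c ** 2)

    # In the barn frame both doors close at t = 0, at x = 0 m and x = 40 m:
    print(t_prime(0.0, 0.0))    # near door: t' = 0
    print(t_prime(0.0, 40.0))   # far door: t' < 0, so it closes (and opens) first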

References:

Taylor and Wheeler's Spacetime Physics is the classic. Feynman's Lectures are interesting as well.



Q. What is the Twin Paradox?

Ans. The Twin Paradox

A Short Story about Space Travel:

Two twins, conveniently named A and B, both know the rules of Special Relativity. One of them, B, decides to travel out into space with a velocity near the speed of light for a time T, after which she returns to Earth. Meanwhile, her boring sister A sits at home posting to Usenet all day. When B finally comes home, what do the two sisters find? Special Relativity (SR) tells A that time was slowed down for the relativistic sister, B, so that upon her return to Earth, she knows that B will be younger than she is, which she suspects was the ulterior motive of the trip from the start.

But B sees things differently. She took the trip just to get away from the conspiracy theorists on Usenet, knowing full well that from her point of view, sitting in the spaceship, it would be her sister, A, who was travelling ultrarelativistically for the whole time, so that she would arrive home to find that A was much younger than she was. Unfortunate, but worth it just to get away for a while.

What are we to conclude? Which twin is really younger? How can SR give two answers to the same question? How do we avoid this apparent paradox? Maybe twinning is not allowed in SR? Read on.

Paradox Resolved:

Much of the confusion surrounding the so-called Twin Paradox originates from the attempts to put the two twins into different frames --- without the useful concept of the proper time of a moving body.

SR offers a conceptually very clear treatment of this problem. First, choose _one_ specific inertial frame of reference; let's call it S. Second, define the paths that A and B take, their so-called world lines. As an example, take ( ct , 0 , 0 , 0 ) as representing the world line of A, and ( ct , f(t) , 0 , 0 ) as representing the world line of B (assuming that the rest frame of the Earth is inertial). The meaning of the above notation is that at time t, A is at the spatial location ( x1 , x2 , x3 ) = ( 0 , 0 , 0 ) and B is at ( x1 , x2 , x3 ) = ( f(t) , 0 , 0 ), always with respect to S.

Let us now assume that A and B are at the same place at the time t1 and again at a later time t2, and that they both carry high-quality clocks which indicate zero at time t1. High quality in this context means that the precision of the clock is independent of acceleration. [In principle, a bunch of muons provides such a device (unit of time: half-life of their decay).]

The correct expression for the time T such a clock will indicate at time t2 is the following [the second form is slightly less general than the first, but it's the good one for actual calculations]:

T = integral from t1 to t2 of d tau ........ (1)

where d tau is the so-called proper-time interval, defined by

d tau = sqrt( 1 - ( v / c ) ** 2 ) * dt .

Furthermore,

v = v(t) = dx / dt
is the velocity vector of the moving object. The physical interpretation of the proper-time interval, namely that it is the amount the clock time will advance if the clock moves by dx during dt, arises from considering the inertial frame in which the clock is at rest at time t, its so-called momentary rest frame (see the literature cited below). [Notice that this argument is only of a heuristic value, since one has to assume that the absolute value of the acceleration has no effect. The ultimate justification of this interpretation must come from experiment.]

The integral in (1) can be difficult to evaluate, but certain important facts are immediately obvious. If the object is at rest with respect to S, one trivially obtains T = t2-t1. In all other cases, T must be strictly smaller than t2-t1, since the integrand is always less than or equal to unity. Conclusion: the traveling twin is younger. Furthermore, if she moves with constant velocity v most of the time (periods of acceleration short compared to the duration of the whole trip), T will approximately be given by
T = ( t2 - t1 ) * sqrt( 1 - ( v / c ) ** 2 ) ............ (2)

The last expression is exact for a round trip (e.g. a circle) with constant velocity v. [At the times t1 and t2, twin B flies past twin A and they compare their clocks.]
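
A minimal numerical sketch of formulas (1) and (2), assuming a simple trip profile of my own choosing (constant speed, with negligible turnaround time) and c = 1:

    import math

    def proper_time(speed_of_t, t1, t2, steps=100000):
        # Formula (1): T = integral from t1 to t2 of sqrt(1 - (v/c)**2) dt, c = 1
        dt = (t2 - t1) / steps
        return sum(math.sqrt(1.0 - speed_of_t(t1 + (i + 0.5) * dt) ** 2) * dt
                   for i in range(steps))

    v = 0.8                            # B's cruising speed
    t1, t2 = 0.0, 10.0                 # 10 years of coordinate time in frame S

    T_B = proper_time(lambda t: v, t1, t2)   # formula (2) predicts 10 * 0.6 = 6
    T_A = t2 - t1                            # A is at rest in S: T = t2 - t1
    print(T_A, T_B)                    # 10.0 vs ~6.0: the traveling twin is younger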

Now the big deal with SR, in the present context, is that T (or d tau, respectively) is a so-called Lorentz scalar. In other words, its value does not depend on the choice of S. If we Lorentz transform the coordinates of the world lines of the twins to another inertial frame S', we will get the same result for T in S' as in S. This is a mathematical fact. It shows that the situation of the traveling twins cannot possibly lead to a paradox within the framework of SR. It could at most be in conflict with experimental results, which is also not the case.

Of course the situation of the two twins is not symmetric, although one might be tempted by expression (2) to think the opposite. Twin A is at rest in one and the same inertial frame for all times, whereas twin B is not. [Formula (1) does not hold in an accelerated frame.] This breaks the apparent symmetry of the two situations, and provides the clearest nonmathematical hint that one twin will in fact be younger than the other at the end of the trip. To figure out which twin is the younger one, use the formulae above in a frame in which they are valid, and you will find that B is in fact younger, despite her expectations.

It is sometimes claimed that one has to resort to General Relativity in order to "resolve" the Twin "Paradox". This is not true. In flat, or nearly flat, space-time (no strong gravity), SR is completely sufficient, and it also has no problem with world lines corresponding to accelerated motion.

References:

Taylor and Wheeler, Spacetime Physics (an excellent discussion)
Goldstein, Classical Mechanics, 2nd edition, Chap. 7


Q. What are subatomic particles?

Ans. If you look in the Particle Data Book, you will find more than 150 particles listed there. It isn't quite as bad as that, though...

The particles are in three categories: leptons, mesons, and baryons. Leptons are particles that are like the electron: they are spin-1/2, and they do not undergo the strong interaction. There are three charged leptons, the electron, muon, and tau, and three neutral leptons, or neutrinos. (The muon and the tau are both short-lived.)

Mesons and baryons both undergo strong interactions. The difference is that mesons have integral spin (0, 1,...), while baryons have half-integral spin (1/2, 3/2,...). The most familiar baryons are the proton and the neutron; all others are short-lived. The most familiar meson is the pion; its lifetime is 26 nanoseconds, and all other mesons decay even faster.

Most of those 150+ particles are mesons and baryons, or, collectively, hadrons. The situation was enormously simplified in the 1960s by the "quark model," which says that hadrons are made out of spin-1/2 particles called quarks. A meson, in this model, is made out of a quark and an anti-quark, and a baryon is made out of three quarks. We don't see free quarks (they are bound together too tightly), but only hadrons; nevertheless, the evidence for quarks is compelling.

In the quark model, there are only 12 elementary particles, which appear in three "generations." The first generation consists of the up quark, the down quark, the electron, and the electron neutrino. (Each of these also has an associated antiparticle.) These particles make up all of the ordinary matter we see around us. There are two other generations, which are essentially the same, but with heavier particles. The second consists of the charm quark, the strange quark, the muon, and the mu neutrino; and the third consists of the top quark, the bottom quark, the tau, and the tau neutrino. (The top has not been directly observed; see the "Top Quark" FAQ entry for details.) These three generations are sometimes called the "electron family", the "muon family", and the "tau family."

Finally, according to quantum field theory, particles interact by exchanging "gauge bosons," which are also particles. The most familiar one is the photon, which is responsible for electromagnetic interactions. There are also eight gluons, which are responsible for strong interactions, and the W+, W-, and Z, which are responsible for weak interactions.

The picture, then, is this:

    Generation   Leptons                    Quarks
    First        electron, e-neutrino       up, down
    Second       muon, mu-neutrino          charm, strange
    Third        tau, tau-neutrino          top, bottom

plus the gauge bosons: the photon (electromagnetic), eight gluons (strong), and the W+, W-, and Z (weak).

The Standard Model of particle physics also predicts the existence of a "Higgs boson," which has to do with breaking a symmetry involving these forces, and which is responsible for the masses of all the other particles. It has not yet been found. More complicated theories predict additional particles, including, for example, gauginos and sleptons and squarks (from supersymmetry), W' and Z' (additional weak bosons), X and Y bosons (from GUT theories), Majorons, familons, axions, paraleptons, ortholeptons, technipions (from technicolor models), B' (hadrons with fourth generation quarks), magnetic monopoles, e* (excited leptons), etc. None of these "exotica" have yet been seen. The search is on!

References:

The best reference for information on which particles exist, their masses, etc., is the Particle Data Book. It is published every two years; the most recent edition is Physical Review D Vol.45 No.11 (1992).
There are several good books that discuss particle physics on a level accessible to anyone who knows a bit of quantum mechanics. One is Introduction to High Energy Physics, by Perkins. Another, which takes a more historical approach and includes many original papers, is Experimental Foundations of Particle Physics, by Cahn and Goldhaber.
For a book that is accessible to non-physicists, you could try The Particle Explosion, by Close, Sutton, and Marten. This book has fantastic photography.



Q. What is Olbers' Paradox?

Ans. Why isn't the night sky as uniformly bright as the surface of the Sun? If the Universe has infinitely many stars, then it should be. After all, if you move the Sun twice as far away from us, we will intercept one-fourth as many photons, but the Sun will subtend one-fourth of the angular area. So the areal intensity remains constant. With infinitely many stars, every angular element of the sky should have a star, and the entire heavens should be as bright as the sun. We should have the impression that we live in the center of a hollow black body whose temperature is about 6000 degrees Centigrade. This is Olbers' paradox. It can be traced as far back as Kepler in 1610. It was rediscussed by Halley and Cheseaux in the eighteenth century, but was not popularized as a paradox until Olbers took up the issue in the nineteenth century.
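
The divergence is easy to see shell by shell. With a uniform density of stars and no absorption or expansion, every spherical shell of thickness dr contributes the same flux, so the total grows without bound as more shells are included. A sketch in arbitrary units:

    import math

    # A shell at radius r (thickness dr) holds n * 4*pi*r**2 * dr stars, and each
    # star delivers flux L / (4*pi*r**2): the r**2 factors cancel exactly.
    n, L, dr = 1.0, 1.0, 1.0       # star density, luminosity, shell thickness
    total = 0.0
    for shell in range(1, 1001):
        r = shell * dr
        stars = n * 4.0 * math.pi * r ** 2 * dr
        flux_per_star = L / (4.0 * math.pi * r ** 2)
        total += stars * flux_per_star      # adds n * L * dr, independent of r
    print(total)   # grows linearly with the outer radius: diverges with infinite stars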

There are many possible explanations which have been considered. Here are a few:

  • There's too much dust to see the distant stars.
  • The Universe has only a finite number of stars.
  • The distribution of stars is not uniform. So, for example, there could be an infinite number of stars, but they hide behind one another so that only a finite angular area is subtended by them.
  • The Universe is expanding, so distant stars are red-shifted into obscurity.
  • The Universe is young. Distant light hasn't even reached us yet.

    The first explanation is just plain wrong. In a black body, the dust will heat up too. It does act like a radiation shield, exponentially damping the distant starlight. But you can't put enough dust into the universe to get rid of enough starlight without also obscuring our own Sun. So this idea is bad.

    The second might have been correct, but estimates of the total matter in the universe are too large to allow this escape. The number of stars is close enough to infinite for the purpose of lighting up the sky. The third explanation might be partially correct. We just don't know. If the stars are distributed fractally, then there could be large patches of empty space, and the sky could appear dark except in small areas.

    But the final two possibilities are surely each correct and partly responsible. There are numerical arguments that suggest that the effect of the finite age of the Universe is the larger effect. We live inside a spherical shell of "Observable Universe" which has radius equal to the lifetime of the Universe. Objects more than about 15 billion years old are too far away for their light ever to reach us.

    Historically, after Hubble discovered that the Universe was expanding, but before the Big Bang was firmly established by the discovery of the cosmic background radiation, Olbers' paradox was presented as proof of special relativity. You needed the red-shift (an SR effect) to get rid of the starlight. This effect certainly contributes. But the finite age of the Universe is the most important effect.

    References:

    Ap. J. 367, 399 (1991). The author, Paul Wesson, is said to be on a personal crusade to end the confusion surrounding Olbers' paradox.
    Darkness at Night: A Riddle of the Universe, Edward Harrison, Harvard University Press, 1987



    Q. What is Dark Matter?

    Ans. The story of dark matter is best divided into two parts. First we have the reasons that we know that it exists. Second is the collection of possible explanations as to what it is.

    Why the Universe Needs Dark Matter

    We believe that the Universe is critically balanced between being open and closed. We derive this fact from the observation of the large scale structure of the Universe. It requires a certain amount of matter to accomplish this result. Call it M.

    You can estimate the total BARYONIC matter of the universe by studying big bang nucleosynthesis. The more matter in the universe, the more slowly the universe should have expanded shortly after the big bang. The longer the "cooking time" allowed, the higher the production of helium from primordial hydrogen. We know the He/H ratio of the universe, so we can estimate how much baryonic matter exists in the universe. It turns out that you need about 0.05 M total baryonic matter to account for the known ratio of light isotopes. So only 1/20 of the total mass of the Universe is baryonic matter.

    Unfortunately, the best estimates of the total mass of everything that we can see with our telescopes are roughly 0.01 M. Where is the other 99% of the stuff of the Universe? Dark Matter!

    So there are two conclusions. We only see 0.01 M out of 0.05 M baryonic matter in the Universe. The rest must be in baryonic dark matter halos surrounding galaxies. And there must be some non-baryonic dark matter to account for the remaining 95% of the matter required to give omega, the mass of the universe in units of the critical mass, equal to unity.

    For those who distrust the conventional Big Bang models, and don't want to rely upon fancy cosmology to derive the presence of dark matter, there are other more direct means. The motion of galaxies within clusters of galaxies suggests that they are bound by a total gravitational force due to about 5-10 times as much matter as can be accounted for from luminous matter in those galaxies. And within an individual galaxy, you can measure the rate of rotation of the stars about the galactic center of rotation. The resultant "rotation curve" is simply related to the distribution of matter in the galaxy. The outer stars in galaxies seem to rotate too fast for the amount of matter that we see in the galaxy. Again, we need about 5 times more matter than we can see via electromagnetic radiation. These results can be explained by assuming that there is a "dark matter halo" surrounding every galaxy.
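
    To see why flat rotation curves imply unseen mass: a star in a circular orbit obeys v ** 2 / r = G * M(r) / r ** 2, so the enclosed mass is M(r) = v ** 2 * r / G. A small sketch with an illustrative flat curve (round numbers of mine, not a fit to any particular galaxy):

        G = 6.674e-11                  # m**3 / (kg s**2)
        kpc = 3.086e19                 # meters per kiloparsec
        M_sun = 1.989e30               # kg

        v = 220e3                      # m/s: an illustrative flat rotation speed
        for r_kpc in (5, 10, 20, 40):
            r = r_kpc * kpc
            M_enclosed = v ** 2 * r / G        # mass required inside radius r
            print(r_kpc, M_enclosed / M_sun)
        # M(r) grows linearly with r: if v stays flat far beyond the visible
        # stars, the additional enclosed mass must be dark.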

    What is Dark Matter

    This is the open question. There are many possibilities, and nobody really knows much about this yet. Here are a few of the many published suggestions, which are being currently hunted for by experimentalists all over the world:

    1. Normal matter which has so far eluded our gaze, such as
       • dark galaxies
       • brown dwarfs
       • planetary material (rock, dust, etc.)

    2. Massive Standard Model neutrinos. If any of the neutrinos are massive, then this could be the missing mass. Note that the possible 17 keV tau neutrino would give far too much mass, creating almost as many problems as it solves in this regard.

    3. Exotica

    Massive exotica would provide the missing mass. For our purposes, these fall into two classes: those which have been proposed for other reasons but happen to solve the dark matter problem, and those which have been proposed specifically to provide the missing dark matter.

    Examples of objects in the first class are axions, additional neutrinos, supersymmetric particles, and a host of others. Their properties are constrained by the theory which predicts them, but by virtue of their mass, they solve the dark matter problem if they exist in the correct abundance.

    Particles in the second class are generally classed in loose groups. Their properties are not specified, but they are merely required to be massive and have other properties such that they would so far have eluded discovery in the many experiments which have looked for new particles. These include WIMPS (Weakly Interacting Massive Particles), CHAMPS, and a host of others.

    References:

    Dark Matter in the Universe (Jerusalem Winter School for Theoretical Physics, 1986-87), J.N. Bahcall, T. Piran, and S. Weinberg, editors.
    Dark Matter (Proceedings of the XXIIIrd Rencontre de Moriond), J. Audouze and J. Tran Thanh Van, editors.


    Q. Does hot water freeze faster?

    Ans. You put two pails of water outside on a freezing day. One has hot water (95 degrees C) and the other has an equal amount of colder water (50 degrees C). Which freezes first? The hot water freezes first! Why?

    It is commonly argued that the hot water will take some time to reach the initial temperature of the cold water, and then follow the same cooling curve. So it seems at first glance difficult to believe that the hot water freezes first. The answer lies mostly in evaporation. The effect is definitely real and can be duplicated in your own kitchen.

    Every "proof" that hot water can't freeze faster assumes that the state of the water can be described by a single number. Remember that temperature is a function of position. There are also other factors besides temperature, such as motion of the water, gas content, etc. With these multiple parameters, any argument based on the hot water having to pass through the initial state of the cold water before reaching the freezing point will fall apart. The most important factor is evaporation.

    The cooling of pails without lids is partly Newtonian and partly by evaporation of the contents. The proportions depend on the walls and on temperature. At sufficiently high temperatures evaporation is more important. If equal masses of water are taken at two starting temperatures, more rapid evaporation from the hotter one may diminish its mass enough to compensate for the greater temperature range it must cover to reach freezing. The mass lost when cooling is by evaporation is not negligible. In one experiment, water cooling from 100C lost 16% of its mass by 0C, and lost a further 12% on freezing, for a total loss of 26%.
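
    As a rough check on that 16% figure: if every joule left the water as latent heat of evaporation, then m * c_p * dT = -L_v * dm, so the mass remaining at 0 C is m0 * exp( -c_p * 100 / L_v ). A sketch with round values (and the crude assumption that all cooling is evaporative):

        import math

        c_p = 4.2       # J / (g K), specific heat of water, roughly
        L_v = 2257.0    # J / g, latent heat of vaporization near 100 C, roughly

        fraction_left = math.exp(-c_p * 100.0 / L_v)
        print(1.0 - fraction_left)     # ~0.17, close to the 16% measured loss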

    The cooling effect of evaporation is twofold. First, mass is carried off so that less needs to be cooled from then on. Also, evaporation carries off the hottest molecules, lowering considerably the average kinetic energy of the molecules remaining. This is why "blowing on your soup" cools it. It encourages evaporation by removing the water vapor above the soup.

    Thus experiment and theory agree that hot water freezes faster than cold for sufficiently high starting temperatures, if the cooling is by evaporation. Cooling in a wooden pail or barrel is mostly by evaporation. In fact, a wooden bucket of water starting at 100C would finish freezing in 90% of the time taken by an equal volume starting at room temperature. The folklore on this matter may well have started a century or more ago when wooden pails were usual. Considerable heat is transferred through the sides of metal pails, and evaporation no longer dominates the cooling, so the belief is unlikely to have started from correct observations after metal pails became common.

    References:

    "Hot water freezes faster than cold water. Why does it do so?", Jearl Walker in The Amateur Scientist, Scientific American, Vol. 237, No. 3, pp 246-257; September, 1977.
    "The Freezing of Hot and Cold Water", G.S. Kell in American Journal of Physics, Vol. 37, No. 5, pp 564-565; May, 1969.



    Q. Does my bathtub drain differently depending on whether I live in the northern or southern hemisphere?

    Ans. No. There is a real effect, but it is far too small to be relevant when you pull the plug in your bathtub.

    Because the earth rotates, a fluid that flows along the earth's surface feels a "Coriolis" acceleration perpendicular to its velocity. In the northern hemisphere, high pressure systems spin clockwise; in the southern hemisphere they spin counterclockwise, because the direction of the Coriolis acceleration is reversed. This effect leads to the speculation that the bathtub vortex that you see when you pull the plug from the drain spins one way in the north and the other way in the south.

    But this acceleration is VERY weak for bathtub-scale fluid motions. The order of magnitude of the Coriolis acceleration can be estimated from the size of the "Rossby number". Coriolis accelerations are significant when the Rossby number is SMALL.

    So, suppose we want a Rossby number of 0.1 and a bathtub-vortex length scale of 0.1 meter. Since the earth's rotation rate is about 10 ** ( -4 ) / second, the fluid velocity should be less than or equal to 2 * 10 ** ( -6 ) meters/second. This is a very small velocity. How small is it? Well, we can take the analysis a step further and calculate another, more famous dimensionless parameter, the Reynolds number.

    The Reynolds number is Re = L * U * density / viscosity.

    Assuming that physicists bathe in hot water, the viscosity will be about 0.005 poise and the density about 1.0 g/cm**3, so the Reynolds number is about 0.4.
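
    Spelling out the arithmetic (taking the Rossby number in its standard form, Ro = U / ( 2 * Omega * L ), and the values quoted above, in cgs units):

        Omega = 1e-4        # 1/s, order of the earth's rotation rate
        L = 10.0            # cm, bathtub-vortex length scale (0.1 m)
        Ro = 0.1            # target Rossby number

        U = Ro * 2.0 * Omega * L       # 2e-4 cm/s, i.e. 2e-6 m/s as above

        rho = 1.0           # g/cm**3
        mu = 0.005          # poise = g/(cm s), hot bathwater
        Re = L * U * rho / mu          # ~0.4: far below the turbulent regime
        print(U, Re)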

    Now, life at low Reynolds numbers is different from life at high Reynolds numbers. In particular, at low Reynolds numbers, fluid physics is dominated by friction and diffusion, rather than by inertia: the time it would take for a particle of fluid to move a significant distance due to an acceleration is greater than the time it takes for the particle to break up due to diffusion.

    Therefore the effect of the Coriolis acceleration on your bathtub vortex is SMALL. To detect its effect on your bathtub, you would have to get out and wait until the motion in the water is far less than one rotation per day. This would require removing thermal currents, vibration, and any other sources of noise. Under such conditions, never occurring in the typical home, you WOULD see an effect. To see what trouble it takes to actually see the effect, see the reference below. Experiments have been done in both the northern and southern hemispheres to verify that under carefully controlled conditions, bathtubs drain in opposite directions due to the Coriolis acceleration from the Earth's rotation.

    The same effect has been accused of responsibility for the direction water circulates when you flush a toilet. This is surely nonsense. In this case, the water rotates in the direction imparted by the pipe that carries the water from the tank to the bowl.

    Reference:
    Trefethen, L.M. et al, Nature 207 1084-5 (1965).



    Q. Why are Golf Balls Dimpled?

    Ans. The dimples, paradoxically, do increase drag slightly. But they also increase `Magnus lift', that peculiar lifting force experienced by rotating bodies travelling through a medium. Contrary to Freshman physics, golf balls do not travel in inverted parabolas. They follow an 'impetus trajectory'.

    This is because of the combination of drag (which reduces horizontal speed late in the trajectory) and Magnus lift, which supports the ball during the initial part of the trajectory, making it relatively straight. The trajectory can even curve upwards at first, depending on conditions!
    The Magnus force can be thought of as due to the relative drag on the air on the top and bottom portions of the golf ball: the top portion is moving more slowly relative to the air around it, so there is less drag on the air that goes over the ball. The boundary layer is relatively thin, and air in the not-too-near region moves rapidly relative to the ball. The bottom portion moves fast relative to the air around it; there is more drag on the air passing by the bottom, and the boundary (turbulent) layer is relatively thick; air in the not-too-near region moves more slowly relative to the ball. The Bernoulli force produces lift. (Alternatively, one could say that the flow lines past the ball are displaced down, so the ball is pushed up.)

    The difficulty comes near the transition region between laminar flow and turbulent flow. At low speeds, the flow around the ball is laminar. As speed is increased, the bottom part tends to go turbulent first. But turbulent flow can follow a surface much more easily than laminar flow.

    As a result, the (laminar) flow lines around the top break away from the surface sooner than otherwise, and there is a net displacement up of the flow lines. The Magnus lift goes negative.

    The dimples aid the rapid formation of a turbulent boundary layer around the golf ball in flight, giving more lift. Without 'em, the ball would travel in more of a parabolic trajectory, hitting the ground sooner. (and not coming straight down.)
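
    A crude sketch of the "impetus trajectory" idea: integrate a point ball subject to gravity, quadratic drag, and a Magnus force perpendicular to the velocity. The two coefficients are invented placeholders chosen to show the shape of the model, not measured golf-ball values:

        import math

        g, k_drag, k_magnus = 9.8, 0.005, 0.002    # k's are illustrative only
        x, y = 0.0, 0.0
        vx, vy = 60.0, 20.0        # initial velocity components, m/s
        dt, t = 0.01, 0.0

        while y >= 0.0 and t < 60.0:               # safety cap on flight time
            s = math.hypot(vx, vy)                 # speed
            # (-vy, vx) is perpendicular to v; back-spin lift points "up"
            ax = -k_drag * s * vx + k_magnus * s * (-vy)
            ay = -g - k_drag * s * vy + k_magnus * s * vx
            vx += ax * dt
            vy += ay * dt
            x += vx * dt
            y += vy * dt
            t += dt

        print(x, t)   # rerun with k_magnus = 0.0: the ball lands sooner and shorter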

    References: Perhaps the best (and easy-to-read) reference on this effect is a paper in American Journal of Physics by one Lyman Briggs, c. 1947. Briggs was trying to explain the mechanism behind the `curve ball' in baseball, using specialized apparatus in a wind tunnel at the NBS. He stumbled on the reverse effect by accident, because his model `baseball' had no stitches on it. The stitches on a baseball create turbulence in flight in much the same way that the dimples on a golf ball do.



    Q. Why do Mirrors Reverse Left and Right?

    Ans. The simple answer is that they don't. Look in a mirror and wave your right hand. On which side of the mirror is the hand that waved? The right side, of course.

    Mirrors DO reverse In/Out. The further behind you an object is, the further in front of you it appears in the mirror. Imagine holding an arrow in your hand. If you point it up, it will point up in the mirror. If you point it to the left, it will point to the left in the mirror. But if you point it toward the mirror, it will point right back at you. In and Out are reversed.

    If you take a three-dimensional, rectangular, coordinate system, ( X , Y , Z ), and point the Z axis such that the vector equation X x Y = Z is satisfied, then the coordinate system is said to be right-handed. Imagine Z pointing toward the mirror. X and Y are unchanged (remember the arrows?) but Z will point back at you. In the mirror, X x Y = - Z. The image contains a left-handed coordinate system.
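
    The handedness flip can be verified directly: reflection in the plane of the mirror sends Z to -Z, and the determinant of the reflected triad changes sign, which is exactly the statement that a right-handed coordinate system becomes left-handed. A small sketch using numpy:

        import numpy as np

        X = np.array([1.0, 0.0, 0.0])
        Y = np.array([0.0, 1.0, 0.0])
        Z = np.cross(X, Y)                    # right-handed: Z = (0, 0, 1)

        mirror = np.diag([1.0, 1.0, -1.0])    # reflection in the mirror's plane
        Xi, Yi, Zi = mirror @ X, mirror @ Y, mirror @ Z

        print(np.linalg.det(np.column_stack([X, Y, Z])))     # +1: right-handed
        print(np.linalg.det(np.column_stack([Xi, Yi, Zi])))  # -1: left-handed image
        print(np.cross(Xi, Yi), -Zi)          # X x Y = -Z in the mirror, as above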

    This has an important effect, familiar mostly to chemists and physicists. It changes the chirality, or handedness of objects viewed in the mirror. Your left hand looks like a right hand, while your right hand looks like a left hand. Molecules often come in pairs called stereoisomers, which differ not in the sequence or number of atoms, but only in that one is the mirror image of the other, so that no rotation or stretching can turn one into the other. Your hands make a good laboratory for this effect. They are distinct, even though they both have the same components connected in the same way. They are a stereo pair, identical except for "handedness".

    People sometimes think that mirrors do reverse left/right, and that the effect is due to the fact that our eyes are aligned horizontally on our faces. This can be easily shown to be untrue by looking in any mirror with one eye closed!

    Reference:

    The Left Hand of the Neutrino, by Isaac Asimov, contains a very readable discussion of handedness and mirrors in physics.



    Q. What is the Mass of a Photon? Or: does the mass of an object depend on its velocity?

    Ans. This question usually comes up in the context of wondering whether photons are really "massless," since, after all, they have nonzero energy. The problem is simply that people are using two different definitions of mass. The overwhelming consensus among physicists today is to say that photons are massless. However, it is possible to assign a "relativistic mass" to a photon which depends upon its wavelength. This is based upon an old usage of the word "mass" which, though not strictly wrong, is not used much today.

    The old definition of mass, called "relativistic mass," assigns a mass to a particle proportional to its total energy E, and involves the speed of light, c, in the proportionality constant:

    m = E / c ** 2. (** = to the power of) (1)

    This definition gives every object a velocity-dependent mass.

    The modern definition assigns every object just one mass, an invariant quantity that does not depend on velocity. This is given by m = E_0 / c ** 2, where E_0 is the total energy of that object at rest. (2)

    The first definition is often used in popularizations, and in some elementary textbooks. It was once used by practicing physicists, but for the last few decades, the vast majority of physicists have instead used the second definition. Sometimes people will use the phrase "rest mass," or "invariant mass," but this is just for emphasis: mass is mass. The "relativistic mass" is never used at all. (If you see "relativistic mass" in your first-year physics textbook, complain! There is no reason for books to teach obsolete terminology.)

    Note, by the way, that using the standard definition of mass, the one given by Eq. (2), the equation " E = m c ** 2 " is not correct. Using the standard definition, the relation between the mass and energy of an object can be written as

    E = m c ** 2 / sqrt( 1 - v ** 2 / c ** 2 ), (3)

    or as E ** 2 = m ** 2 c ** 4 + p ** 2 c ** 2, (4)
    where v is the object's velocity, and p is its momentum.
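
    A small sketch checking that Eqs. (3) and (4) agree for a massive particle, and that Eq. (4) alone handles the photon (m = 0 gives E = p * c):

        import math

        c = 299792458.0    # m/s

        def energy_eq3(m, v):
            # Eq. (3): defined for m > 0 and v < c
            return m * c ** 2 / math.sqrt(1.0 - (v / c) ** 2)

        def energy_eq4(m, p):
            # Eq. (4): works for everything, including massless particles
            return math.sqrt((m * c ** 2) ** 2 + (p * c) ** 2)

        m, v = 9.109e-31, 0.6 * c                  # an electron at 0.6c
        gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
        p = gamma * m * v                          # relativistic momentum
        print(energy_eq3(m, v), energy_eq4(m, p))  # the same energy both ways

        print(energy_eq4(0.0, 1.0e-27), 1.0e-27 * c)   # photon: E = p c exactly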

    In one sense, any definition is just a matter of convention. In practice, though, physicists now use this definition because it is much more convenient. The "relativistic mass" of an object is really just the same as its energy, and there isn't any reason to have another word for energy: "energy" is a perfectly good word. The mass of an object, though, is a fundamental and invariant property, and one for which we do need a word.

    The "relativistic mass" is also sometimes confusing because it mistakenly leads people to think that they can just use it in the Newtonian relations

    F = m a (5)
    and
    F = G m1 m2 / r ** 2. (6)

    In fact, though, there is no definition of mass for which these equations are true relativistically: they must be generalized. The generalizations are more straightforward using the standard definition of mass than using "relativistic mass."

    Oh, and back to photons: people sometimes wonder whether it makes sense to talk about the "rest mass" of a particle that can never be at rest. The answer, again, is that "rest mass" is really a misnomer, and it is not necessary for a particle to be at rest for the concept of mass to make sense. Technically, it is the invariant length of the particle's four-momentum. (You can see this from Eq. (4).) For all photons this is zero. On the other hand, the "relativistic mass" of photons is frequency dependent. UV photons are more energetic than visible photons, and so are more "massive" in this sense, a statement which obscures more than it elucidates.

    Reference:

    Lev Okun wrote a nice article on this subject, "The Concept of Mass," in the June 1989 issue of Physics Today; it includes a historical discussion of the concept of mass in relativistic physics.



    Q. Who won physics Nobel Prizes?

    Ans. The following is a complete listing of the physics Nobel Prize awards, from the first award in 1901 to 1932. (No physics prize was awarded in 1916 or 1931.)

    1901 Wilhelm Conrad Rontgen X-rays
    1902 Hendrik Antoon Lorentz, Pieter Zeeman Magnetism in radiation phenomena
    1903 Antoine Henri Becquerel, Pierre Curie, Marie Sklodowska-Curie Spontaneous radioactivity
    1904 Lord Rayleigh (a.k.a. John William Strutt) Density of gases and discovery of argon
    1905 Philipp Eduard Anton von Lenard Cathode rays
    1906 Joseph John Thomson Conduction of electricity by gases
    1907 Albert Abraham Michelson Precision metrological investigations
    1908 Gabriel Lippmann Reproducing colors photographically based on the phenomenon of interference
    1909 Guglielmo Marconi, Carl Ferdinand Braun Wireless telegraphy
    1910 Johannes Diderik van der Waals Equation of state of fluids
    1911 Wilhelm Wien Laws of radiation of heat
    1912 Nils Gustaf Dalen Automatic gas flow regulators
    1913 Heike Kamerlingh Onnes Matter at low temperature
    1914 Max von Laue Crystal diffraction of X-rays
    1915 William Henry Bragg, William Lawrence Bragg X-ray analysis of crystal structure
    1917 Charles Glover Barkla Characteristic X-ray spectra of elements
    1918 Max Planck Energy quanta
    1919 Johannes Stark Splitting of spectral lines in E fields
    1920 Charles-Edouard Guillaume Anomalies in nickel steel alloys
    1921 Albert Einstein Photoelectric Effect
    1922 Niels Bohr Structure of atoms
    1923 Robert Andrews Millikan Elementary charge of electricity
    1924 Karl Manne Georg Siegbahn X-ray spectroscopy
    1925 James Franck, Gustav Hertz Impact of an electron upon an atom
    1926 Jean Baptiste Perrin Sedimentation equilibrium
    1927 Arthur Holly Compton, Charles Thomson Rees Wilson Compton effect; cloud chamber
    1928 Owen Willans Richardson Thermionic phenomena, Richardson's Law
    1929 Prince Louis-Victor de Broglie Wave nature of electrons
    1930 Sir Chandrasekhara Venkata Raman Scattering of light, Raman effect
    1932 Werner Heisenberg Quantum Mechanics


    Q. What is a Plasma?

    Ans. Plasma and Nonlinear Physics

    Plasma physics utilizes classical mechanics, electromagnetic theory and statistical physics. A plasma is a gas of charged particles (generally electrons and various ions) which interact with both externally applied electromagnetic fields and with fields they themselves generate. This interaction differentiates the behavior of plasmas from that of the more familiar gases.

    Plasmas are of great physical interest for a number of reasons. Most of the matter in the universe is in the plasma state (e.g. stars, the interplanetary and interstellar media are all plasmas), and they are a potential source of energy. Deep in the core of stars hydrogen is fused into helium, releasing huge amounts of energy. One long-term goal of plasma research is to replicate this process in the laboratory. There are two basic approaches to this problem: magnetic confinement and inertial confinement. In magnetic confinement schemes, high temperature (T ~ 10 ** 8 K) and high density (n ~ 10 ** 14 particles / cm ** 3) plasmas must be confined for a sufficient length of time (~1 sec) to initiate controlled thermonuclear fusion. Inertial confinement schemes attempt to initiate fusion burns by imploding fuel pellets using high power lasers or particle beams.
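
    Those three numbers can be combined into the product n * tau that appears in the Lawson criterion. Here is a back-of-envelope check in Python; the Lawson threshold of roughly 10 ** 14 s / cm ** 3 for deuterium-tritium fuel is a standard textbook figure supplied here, not taken from the text above:

    # Back-of-envelope Lawson check using the confinement figures above.
    # The D-T threshold value is a standard textbook number (assumption).
    n = 1e14                  # density, particles per cm^3
    tau = 1.0                 # confinement time, s
    lawson_threshold = 1e14   # s per cm^3, approximate D-T criterion

    print(n * tau >= lawson_threshold)   # True: right at the threshold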

    Since charged particles are influenced by fields they themselves generate plasmas are inherently nonlinear. The understanding of nonlinear phenomena has undergone rapid advance in the last ten to twenty years. New discoveries, such as solitons and chaos, the development of new mathematical techniques for the study of nonlinear systems, and the introduction of supercomputers have all contributed to this revolutionary progress. Many of these developments have been significantly advanced by the study of plasma physics.

    A broad program of basic research in theoretical plasma physics is carried out at William and Mary. Topics currently of interest include:

    A problem of great interest in plasma physics and fluid mechanics is how to compute the enhancement of various transport effects (e.g. viscosity) due to turbulence on small spatial scales. Several approaches are currently being investigated. These include Renormalization Group techniques (similar to those used in the study of critical phenomena) and Constrained Decimation Schemes, which replace the full system of degrees of freedom describing a turbulent system by a much smaller system driven by stochastic forces.

    We are investigating the feasibility of using millimeter and far infrared lasers to detect alpha particles in a tokamak plasma. Knowledge of the alpha particle distribution is essential, since it is the alphas that will keep the fusion reactions going.

    Another area of active research concerns the application of Hamiltonian (WKB-type) analysis techniques to the study of plasma heating by RF waves. The frequency of the incoming wave is chosen to be resonant with the gyration frequency of some particular ion species deep inside the plasma. At resonance, energy is transferred from the wave to the plasma particles, resulting in local heating. In this resonance region the WKB approximation breaks down, and a local approximation must be used to describe the details of the resonant interaction. By using new tools recently developed in collaboration with the plasma theory group at Lawrence Berkeley Laboratory, we have shown how to analyse this process in detail for the first time.
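
    For a rough sense of the frequencies involved, the gyration frequency of an ion of charge q and mass m in a magnetic field B is f = q B / (2 pi m). A minimal Python sketch; the 5 T field and the choice of a deuterium ion are illustrative assumptions:

    # Ion gyrofrequency f = q B / (2 pi m), the resonance condition above.
    # The field strength and ion species are illustrative assumptions.
    import math

    q = 1.602e-19    # elementary charge, C
    m_D = 3.344e-27  # deuteron mass, kg
    B = 5.0          # magnetic field, T (tokamak scale)

    f = q * B / (2.0 * math.pi * m_D)
    print(f / 1e6, "MHz")   # a few tens of MHz, in the RF heating range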

    We are also pursuing development of new signal processing techniques which can be used to detect and analyse chaotic behavior in experimental systems. Chaotic systems are physical systems which evolve according to simple deterministic rules, yet whose time dynamics appears to be random. Chaos is believed to be common in physical systems which are nonlinear, yet its presence can be hard to distinguish from noise. Our recent research has focused on two separate approaches which we hope to merge in the coming year or two.

    In the first approach, we take a time series from the system of interest and construct a numerical model which captures the dynamics. This model can then be trained to synchronize to the physical system, which results in a 'chaos detector': if we are presented with a mystery signal and asked whether it comes from the system of interest, we can detect its presence by measuring the degree of synchronization. This synchronization phenomenon is robust in the sense that the chaotic signal can be detected even when high levels of noise are present.
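
    A minimal sketch of this idea in Python, with the logistic map standing in for the physical system (the map, its parameter, the noise level, and the one-step prediction error used as the synchronization measure are all illustrative choices):

    # 'Chaos detector' sketch: a model driven by an observed signal
    # synchronizes (small one-step prediction error) when the signal
    # really comes from the modeled system. All values are illustrative.
    import random

    def logistic(x, r=3.9):
        return r * x * (1.0 - x)

    def detector_error(signal, r=3.9):
        errors = [abs(logistic(signal[t], r) - signal[t + 1])
                  for t in range(len(signal) - 1)]
        return sum(errors) / len(errors)

    x, chaotic = 0.4, []
    for _ in range(2000):                    # signal from the true system
        x = logistic(x)
        chaotic.append(x + random.gauss(0.0, 0.01))  # plus a little noise

    noise = [random.random() for _ in range(2000)]   # a pure-noise signal

    print(detector_error(chaotic))   # small: the model synchronizes
    print(detector_error(noise))     # large: no synchronization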

    Another line of effort concerns the use of 'symbolic dynamics' in signal processing. In this approach, the analog signal is discretized in a very coarse fashion (e.g. if the signal lies above some threshold value, assign it a '1', if it lies below, assign it a '0'). This converts the analog signal stream into a binary string. What is surprising is that, for chaotic systems, this coarse grained information still contains essentially all the information needed to reconstruct the dynamics. For example, if the signal is generated by iterating a simple map, such as the Henon map:

    x(t+1) = a + b y(t) - x(t) ** 2
    y(t+1) = x(t)

    we can generate the 'signal' x(1), x(2), x(3), ... which can then be converted into a binary string. We now ask how the symbol statistics change as we change the parameters 'a' and 'b'. The change in the symbol statistics can be measured relative to some choice of reference values of the parameters (we chose a0 = 1.79 and b0 = -0.0385). [Figure: a two-dimensional plot of this 'error landscape', with the values of parameter 'a' on the horizontal axis and 'b' on the vertical axis.] Yellow indicates that the symbol statistics lie close to the reference, while red indicates very different statistics. Notice the yellow patch centered on the reference values: even if the only information one is given about the system is the symbolic data and the form of the model which generated it, one can measure the parameter values. This can be done even in the presence of large amounts of noise, because the information used is coarse grained.
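
    A minimal Python sketch of this comparison, using the map and reference parameters above (the word length, trajectory length, the L1 distance between word-frequency tables, and the escape guard are illustrative choices):

    # Symbolic-dynamics comparison for the Henon-type map above.
    # Word length, trajectory length, and the L1 distance are
    # illustrative choices; a0, b0 are the reference values from the text.
    from collections import Counter

    def symbol_string(a, b, n=5000, x=0.1, y=0.1):
        symbols = []
        for _ in range(n):
            x, y = a + b * y - x**2, x
            if abs(x) > 5.0:       # guard: restart if the orbit escapes
                x, y = 0.1, 0.1
            symbols.append('1' if x > 0 else '0')   # coarse graining
        return ''.join(symbols)

    def word_stats(s, k=3):
        counts = Counter(s[i:i + k] for i in range(len(s) - k + 1))
        total = sum(counts.values())
        return {w: c / total for w, c in counts.items()}

    a0, b0 = 1.79, -0.0385                  # reference parameters
    ref = word_stats(symbol_string(a0, b0))

    for a in (1.79, 1.80, 1.81):            # scan 'a' with b fixed at b0
        test = word_stats(symbol_string(a, b0))
        dist = sum(abs(ref.get(w, 0.0) - test.get(w, 0.0))
                   for w in set(ref) | set(test))
        print(a, round(dist, 4))            # grows as 'a' moves off a0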

    Close ties exist between the Plasma group at William and Mary and scientists at national laboratories (such as Lawrence Berkeley, Oak Ridge, and Livermore National Laboratories) as well as universities such as MIT and U.C. San Diego. Computations are done both on local facilities and on the CRAY-2 at the National Energy Research Scientific Computing Center (NERSC) at Livermore.



    Q. What are Newton's three laws?

    Ans. Newton's first law is the Law of Inertia: an object moving in a straight line at constant speed will continue to do so unless acted upon by an outside force. In other words, inertia is the property that makes an object stay in motion if it is moving, or remain motionless if it is not moving.

    Newton's second law describes how an object's motion changes when a force is applied to it. The general equation is F = m a: the applied force equals mass multiplied by acceleration. So, for the same applied force, a heavier object changes its motion more slowly than a lighter one. A Mack truck is going to take longer to stop than a Dodge Neon, as long as they are both travelling at the same speed and experiencing the same force.
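
    A quick numerical illustration of F = m a in Python; the masses, braking force, and speed are rough illustrative figures, not real vehicle data:

    # Same braking force applied to a heavy truck and a light car.
    # Masses, force, and speed are rough illustrative numbers.
    F = 20000.0   # braking force, N
    v = 25.0      # initial speed of both vehicles, m/s (~55 mph)

    for name, m in [("Mack truck", 25000.0), ("Dodge Neon", 1200.0)]:
        a = F / m                 # deceleration from F = m a
        print(name, round(a, 2), "m/s^2,", round(v / a, 1), "s to stop")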

    Newton's third law is the simple statement that for every action there is an equal and opposite reaction. For example, when you push on a door, the door pushes right back on you. It is the downward motion of gases that allows the space shuttle to travel into space: the rocket expels gases downward, and the reaction pushes the rocket upward.
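
    In rocket terms, the reaction force (thrust) equals the rate at which momentum is carried away by the exhaust: thrust = mdot * v_exhaust. A one-line check in Python; the flow rate and exhaust speed are illustrative numbers, not actual shuttle data:

    # Thrust as the reaction to expelled exhaust: thrust = mdot * v_exhaust.
    # The numbers are illustrative, not real shuttle figures.
    mdot = 500.0        # exhaust mass flow rate, kg/s
    v_exhaust = 3000.0  # exhaust speed, m/s

    print(mdot * v_exhaust, "N")   # 1.5e6 N of upward force on the rocket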



    Return to Question and Answers