Thursday, December 30, 2010

Physics First?

The way it ought to be. Thanks, Tennessee.

Inverted curriculum makes physics first science in high school

CHATTANOOGA (AP) - Some educators are starting to turn the way they teach high school science upside down.

Rather than starting off in ninth and 10th grades with biology and chemistry, they are going to begin teaching physics first. The idea is to teach physics -- normally a course for later grades -- to freshmen in an effort to get them familiar with scientific concepts.

Teachers then will help students apply those concepts as they teach chemistry and biology.

“The physics is really the underlying science for biology and chemistry,” said Robert Marlowe, a professor of physics at the University of Tennessee at Chattanooga, who is helping secure grant money from the National Science Foundation to certify more local teachers in physics and chemistry.

“The benefit for students is they will see how strongly physics is tied into biology and chemistry,” he said. “They will get a sense for how it is not the case that physics lies down one path and chemistry is behind a different door and biology is behind a different door. That’s nuts! It’s never been that way.”

The grant involves eight universities and 30 school districts as well as the Tennessee Department of Education. Regardless of whether Hamilton County receives $875,000 of the $10 million total grant, officials say local teachers will move toward offering an inverted curriculum.

The money partly would go toward summer institutes to get more science teachers certified in chemistry and physics, one of which is required to teach the new freshman-level physics class.

The physical world concepts class, which a handful of schools already have begun teaching to ninth-graders, is a lower-level conceptual class, Marlowe said, which still leaves room for a senior-level physics class in 12th grade.

Since Tennessee’s academic standards have become more rigorous in the last year, ninth-graders are starting high school with more experience in math, which, in turn, makes conceptual physics easier to understand, he said.

“It’s always been that physics is more abstract than chemistry and biology and has laid a heavier emphasis on math, so the students gear up with chemistry and biology and take physics their senior year, when they can better handle the math,” Marlowe said. “But if you concentrate on the concepts, you can get away with just algebra and geometry.”

Jamie Parris, Hamilton County Schools’ director of secondary math and science, said the new freshman class will be very hands-on.

“They will be doing more experimentation instead of being told what to do,” he said.
For instance, rather than studying a two-day lesson on the motion of a pendulum, students may spend an entire week investigating the concept, Marlowe said.

“They would ask ... ’What influences the motion of a pendulum? What kind of data do we need to gather to learn about pendulums?’ They’ll analyze and graph their results,” he said. “This is going to take a little time. It could be taught faster, but what would students walk away with? Typically not much. They’re developing a method to study all types of scientific phenomena.”
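The small-angle result a class would converge on in that pendulum investigation can be checked numerically. Here is a minimal sketch (the function name and parameters are my own illustration, not from the article): a simple pendulum's period depends on its length but not on its mass, and quadrupling the length doubles the period.

```python
import math

def pendulum_period(length_m, g=9.81):
    """Small-angle period of a simple pendulum: T = 2*pi*sqrt(L/g)."""
    return 2 * math.pi * math.sqrt(length_m / g)

# Mass never enters the formula; quadrupling the length doubles the period.
t1 = pendulum_period(1.0)   # ~2.006 s
t4 = pendulum_period(4.0)   # ~4.013 s
print(round(t4 / t1, 3))    # -> 2.0
```

A week of data-gathering, graphed as period versus length, would trace out exactly this square-root curve.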

Jack Pickett teaches physical world concepts at Chattanooga Center for the Creative Arts, one of the few Hamilton County schools that already has made the switch to an inverted science curriculum. The ninth-graders are not ready for some of the more complex physics concepts, he said, so he picks and chooses the ones he thinks are important.

The class includes lessons on electricity, Newton’s law of gravity and the properties of light and sound.
Kelley Kuhn, head of the science department at Chattanooga Center for the Creative Arts, said she is particularly excited to teach biology next year to upperclassmen for the first time.

“Biology is tough, but we’ve tended to water it down so we could teach it to freshmen,” she said.
In addition to offering students what many teachers consider a more natural progression through the sciences, officials said they’re also hoping to get more students to take physics in high school and possibly later in college.

“There haven’t been many students (majoring) in physics or physics education, especially,” Marlowe said.

“We have been almost wholly focused on research-based physics and not getting them into the work force as teachers. And if you don’t have good high school preparation in physics, then you are hurting.”

Whether or not she decides to pursue a career in physics, Chattanooga Center for the Creative Arts freshman Caitlyn Clear, 14, said she’s learned more in physical world concepts than in traditional science classes.

“We use models and things. It’s more visual and more hands-on,” she said. “You can only learn so much with notes.”

as reported at timesnews.net

Published December 26th, 2010 | Added December 26th, 2010 7:44 pm

Tuesday, December 28, 2010

Has anyone seen

Schrödinger's Cat?

Were you a weenie like me on December 31, 1999, screaming from the rafters that the new century/millennium was NOT going to begin the next day but rather a year later, on January 1, 2001? After all, there was no such thing as a Year Zero.* December 31st, 1 B.C. (B.C.E. for you Atheists and Chinese, same difference) turned into January 1st, 1 A.D. (C.E. = Common Era), by our reckoning.

Well, I was 43 on that day so I only screamed that in my head, not to others.

Were I 23 though .... I would have been QUITE vocal, and probably lost friends in the process. Young people! Youth is a temporary affliction, but fortunately, Father Time has the cure. :-)

But what does it matter, anyway? Every day is but one day later than the day before. A man who turns 30 should look on the bright side, for example, that he's tied with few others as the youngest man in his 30's on the planet, rather than dwell on the depressing thought that the single most exciting decade of his life is behind him. You are, after all, just one day older. Attitude is Everything.

On Jan. 1, 2001 I was 44 with a svelte 43 year-old-wife and 4 kids ages 11, 10, 7, and 5. Good times. Now add 10 to those numbers, and a few pounds all around. Still good times, just a bit crazier, as Science has proven that the raising/expense of teenagers is the source of gray hair. :-)

So, what has happened since Jan. 1, 2001, the dawn of a new day/month/year/decade/century/millennium?

Two new American Presidents, and 2 new wars, still ongoing. Terrorism of the worst sort. A once balanced budget that isn't anymore (thanks to Dumbya and his fellow "Legal Thieves" of the U.S. Treasury, World's largest piggy bank), and probably won't be for the foreseeable future, and the rise of The People's Republic of China, which slowly but surely recognized that Communism is a dead end. Its rise is ongoing, and I don't see the economic inertia changing direction anytime soon.

What happened in  Mathematical Physics?

Again, mostly War (what the Hell is WRONG with our Species?!), especially between SuperString Theory and Loop Quantum Gravity. Lee Smolin and Peter Woit published books that had people questioning the direction of Theoretical Physics, to the point that not only was funding for ST reduced, but funding for very badly needed research in the Quantum Field Theories of Quantum Electrodynamics and Quantum Chromodynamics was reduced as well.

ALSO, String Theorists engaged in an ongoing Civil War amongst themselves over "The Anthropic Landscape" and the number 10 raised to the power of 500, large but finite, with Leonard Susskind of Stanford and Joseph Polchinski of Kavli taking the pro-Anthropic view, Nobel laureate and Kavli Director David Gross championing the anti-Anthropic view, and Edward Witten of IAS-Princeton taking the diplomatic moderate stance. And Lubos Motl got his PhD and unleashed his weblog upon the world, for good or for ... ill. But whatever he was and is, you can't say he's not entertaining ... in a Howard Stern kind of way.

Nature abhors a vacuum, so into the fray stepped the highly speculative field of Cosmology, to the point that Dark Matter Phenomenology has replaced Strings as the primary choice of specialization amongst the grad students and post-docs at top physics research institutions in the USA, at least.

But The Standard Model of Particle Physics still rules, and the first decade of the 21st century will likely best be known for the start-up of the LHC at CERN in Switzerland/France. Great expectations, wonderful results and thus good times are around the corner, with Nobels awarded on the one hand and careers crushed on the other, as results both expected and unexpected are forthcoming soon from the greatest machine built by Humanity to date.

Of course, NOTHING advanced in the last decade as much as Biology, Astronomy, and the too often forgotten fields of Social Anthropology (sometimes called Cultural Anthropology) and Psychology.

Well, Astronomy's advancement was almost pre-ordained, given the great results born of the great astronomical observatories, both in space and on Earth, planned in decades past and now up and doing their jobs. More yet to come.

Biology has taken off like a bat out of Hell, so much so that 60% of ALL Science blogs are Biology/Medicine-based.

But remember: we don't have Biology without Chemistry, and we don't have Modern Chemistry without Physics, thanks mostly to Wolfgang Pauli and Erwin Schrödinger.

Cultural Anthropology and Psychology are very broad and open fields of study, but they are also very young, so sure, there is much work to be done, and the good news is: it's being done.

Overseeing ALL of this, and most important of all, is the great tool that is the computer.

Computer Science is ... everywhere, in every field. I can't believe that it was only 1995 when half of American households became internet-wired, and back then through AOL, the ONLY significant portal of the time. That would make this past decade the FIRST full decade in which we were more wired than ... not.

And now, one last look at the LHC, specifically the CERN/LHC scientists celebrating the startup of same:

 Geez, are there ANY non-White people working at CERN and the LHC ??

If you enjoyed those pics, there are many more available from the source material that you can find by clicking here.

* - ADDENDUM: "Year zero" does not exist in the widely used Gregorian calendar or in its predecessor, the Julian calendar. Under those systems, the year 1 BC is followed by AD 1. However, there is a year zero in astronomical year numbering (where it coincides with the Julian year 1 BC) and in ISO 8601:2004 (where it coincides with the Gregorian year 1 BC) as well as in all Buddhist and Hindu calendars. (from Wikipedia) Yes, more useless yet mildly interesting information to help you impress others with "the size of your intellectual penis" at your local Mensa gathering or at your Math Department's Pizza Friday seminar.

Saturday, December 25, 2010

Happy Christmas from The Spirit of John Lennon, Steve Martin, John Malkovich, and Me

The Timeless Message from the only person who could say then and forever that he was the leader of the most successful band of all time:

Steve Martin's 5 Christmas Wishes:

John Malkovich reads 'Twas the Night Before Christmas to children, in which he explains The Physics of Sleighs, and why The Santa of Portugal is the Most Feared:

Merry Christmas from Multiplication by Infinity to you and yours.

 John Lennon performing Earth Science in the 1970's

Friday, December 24, 2010

IceCube Neutrino Detector Now Finished

WELLINGTON (AFP) – An extraordinary underground observatory for subatomic particles has been completed in a huge cube of ice one kilometre on each side deep under the South Pole, researchers said.

Building the IceCube, the world's largest neutrino observatory, has taken a gruelling decade of work in the Antarctic tundra and will help scientists study space particles in the search for dark matter, invisible material that makes up most of the Universe's mass.

The observatory, located 1,400 metres underground near the US Amundsen-Scott South Pole Station, cost more than 270 million dollars, according to the US National Science Foundation (NSF).

The cube is a network of 5,160 optical sensors, each about the size of a basketball, which have been suspended on cables in 86 holes bored into the ice with a specially-designed hot-water drill.

NSF said the final sensor was installed in the cube, which is one kilometre (0.62 miles) long in each direction, on December 18. Once in place they will be forever embedded in the permafrost as the drill holes fill with ice.

The point of the exercise is to study neutrinos, subatomic particles that travel at close to the speed of light but are so small they can pass through solid matter without colliding with any molecules.

Scientists believe neutrinos were first created during the Big Bang and are still generated by nuclear reactions in suns and when a dying star explodes, creating a supernova.

Trillions of them pass through the entire planet all the time without leaving a trace, but the IceCube seeks to detect the blue light emitted when an occasional neutrino crashes into an atom in the ice.

"Antarctic polar ice has turned out to be an ideal medium for detecting neutrinos," the NSF said in a statement announcing the project's completion.

"It is exceptionally pure, transparent and free of radioactivity."

Scientists have hailed the IceCube as a milestone for international research and say studying neutrinos will help them understand the origins of the Universe.

"From its vantage point at the end of the world, IceCube provides an innovative means to investigate the properties of fundamental particles that originate in some of the most spectacular phenomena in the Universe," NSF said.

Most of the IceCube's funding came from the NSF, with contributions from Germany, Belgium and Sweden.

Researchers from Canada, Japan, New Zealand, Switzerland, Britain and Barbados also worked on the project.

It is operated by the University of Wisconsin-Madison.

From here

7 Laws to Bring Them All and In the Brightness Bind Them

1. Newton's First Law of Motion

Every body remains in a state of rest or uniform motion (constant velocity) unless it is acted upon by an external unbalanced force. This means that in the absence of a non-zero net force, the center of mass of a body either remains at rest, or moves at a constant speed in a straight line.

2. Newton's Second Law of Motion

A body of mass m subject to a force F undergoes an acceleration a that has the same direction as the force and a magnitude that is directly proportional to the force and inversely proportional to the mass, i.e., F = ma. Alternatively, the total force applied on a body is equal to the time derivative of linear momentum of the body.
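The second law can be verified numerically with a few lines of code. This sketch (my own illustration, not part of the list above) integrates a constant force on a body step by step and recovers the textbook kinematics result x = ½at²:

```python
def simulate(mass, force, t_end, dt=1e-4):
    """Euler-integrate x'' = F/m from rest; returns final position."""
    x, v = 0.0, 0.0
    a = force / mass          # Newton's second law: a = F/m
    steps = int(t_end / dt)
    for _ in range(steps):
        v += a * dt           # velocity accumulates acceleration
        x += v * dt           # position accumulates velocity
    return x

# F = 10 N on m = 2 kg gives a = 5 m/s^2; after 2 s, x = 0.5*a*t^2 = 10 m.
print(round(simulate(2.0, 10.0, 2.0), 2))  # close to 10.0
```

Shrinking the time step dt drives the numerical answer ever closer to the exact ½at², which is the calculus hiding inside F = ma.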

3. Newton's Third Law of Motion

The mutual forces of action and reaction between two bodies are equal, opposite and collinear. This means that whenever a first body exerts a force F on a second body, the second body exerts a force −F on the first body. F and −F are equal in magnitude and opposite in direction. This law is sometimes referred to as the action-reaction law, with F called the "action" and −F the "reaction". The action and the reaction are simultaneous.

4. The First Law of Thermodynamics

Energy can be transformed, i.e. changed from one form to another, but cannot be created nor destroyed. It is usually formulated by stating that the change in the internal energy of a system is equal to the amount of heat supplied to the system, minus the amount of work performed by the system on its surroundings.
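That bookkeeping is one subtraction. A worked example (numbers invented for illustration): if 500 J of heat flows into a gas and the gas does 200 J of work expanding, its internal energy rises by 300 J.

```python
def delta_internal_energy(heat_in, work_by_system):
    """First law of thermodynamics: dU = Q - W (energy is conserved)."""
    return heat_in - work_by_system

print(delta_internal_energy(500.0, 200.0))  # -> 300.0
```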

5. The Second Law of Thermodynamics

An expression of the tendency that over time, differences in temperature, pressure, and chemical potential equilibrate in an isolated physical system. From the state of thermodynamic equilibrium, the law yields the principle of the increase of entropy and explains the phenomenon of irreversibility in nature. The second law declares the impossibility of machines that generate usable energy from the abundant internal energy of nature by processes called perpetual motion of the second kind.

The second law may be expressed in many specific ways, but the first formulation is credited to the German scientist Rudolf Clausius. The law is usually stated in physical terms of impossible processes. In classical thermodynamics, the second law is a basic postulate applicable to any system involving measurable heat transfer, while in statistical thermodynamics, the second law is a consequence of unitarity in quantum theory. In classical thermodynamics, the second law defines the concept of thermodynamic entropy, while in statistical mechanics entropy is defined from information theory, known as the Shannon entropy.
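Clausius's "impossible process" statement can be made quantitative with a toy calculation (my own sketch, not from the text above): when heat Q flows spontaneously from a hot reservoir to a cold one, the total entropy change ΔS = Q/T_cold − Q/T_hot is positive; the reverse flow would make it negative, which the law forbids.

```python
def total_entropy_change(q, t_hot, t_cold):
    """Entropy change when heat q (joules) flows from t_hot to t_cold (kelvin)."""
    return q / t_cold - q / t_hot

# 100 J flowing from 400 K down to 300 K: entropy of the universe increases.
ds = total_entropy_change(100.0, 400.0, 300.0)
print(round(ds, 4))  # -> 0.0833 (J/K)
```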

6. The Third Law of Thermodynamics

A statistical law of nature regarding entropy and the impossibility of reaching absolute zero, the null point of the temperature scale. The most common enunciation of the third law of thermodynamics is:
As a system approaches absolute zero, all processes cease and the entropy of the system approaches a minimum value.
This minimum value, the residual entropy, is not necessarily zero, although it is always zero for a perfect crystal in which there is only one possible ground state.

7. The Wheeler-DeWitt Equation

A functional differential equation. It is ill defined in the general case, but very important in theoretical physics, especially in quantum gravity. It is a functional differential equation on the space of three-dimensional spatial metrics. The Wheeler–DeWitt equation has the form of an operator acting on a wave functional; in cosmology, the functional reduces to an ordinary function. Contrary to the general case, the Wheeler–DeWitt equation is well defined in mini-superspaces like the configuration space of cosmological theories. An example of such a wave function is the Hartle–Hawking state.

Bryce DeWitt first published this equation in 1967 under the name “Einstein–Schrödinger equation”; it was later renamed the “Wheeler–DeWitt equation”.[2]

Simply speaking, the Wheeler–DeWitt equation says
$\hat{H}(x) |\psi\rangle = 0$
where $\hat{H}(x)$ is the Hamiltonian constraint in quantized general relativity. Unlike ordinary quantum field theory or quantum mechanics, the Hamiltonian is a first class constraint on physical states. We also have an independent constraint for each point in space.

Although the symbols $\hat{H}(x)$ and $|\psi\rangle$ may appear familiar, their interpretation in the Wheeler–DeWitt equation is substantially different from non-relativistic quantum mechanics. $|\psi\rangle$ is no longer a spatial wave function in the traditional sense of a complex-valued function that is defined on a 3-dimensional space-like surface and normalized to unity. Instead it is a functional of field configurations on all of spacetime. This wave function contains all of the information about the geometry and matter content of the universe. $\hat{H}(x)$ is still an operator that acts on the Hilbert space of wave functions, but it is not the same Hilbert space as in the nonrelativistic case, and the Hamiltonian no longer determines evolution of the system, so the Schrödinger equation $\hat{H} |\psi\rangle = i \hbar \partial / \partial t |\psi\rangle$ no longer applies. This property is known as timelessness. The reemergence of time requires the tools of decoherence and clock operators.
We also need to augment the Hamiltonian constraint with momentum constraints
$\vec{\mathcal{P}}(x) \left| \psi \right\rangle = 0$
associated with spatial diffeomorphism invariance.

In minisuperspace approximations, we only have one Hamiltonian constraint (instead of infinitely many of them).

In fact, the principle of general covariance in general relativity implies that global evolution per se does not exist; t is just a label we assign to one of the coordinate axes. Thus, what we think about as time evolution of any physical system is just a gauge transformation, similar to that of QED induced by U(1) local gauge transformation $\psi \rightarrow e^{i\theta(\vec{r} )} \psi$ where $\theta(\vec{r})$ plays the role of local time. The role of a Hamiltonian is simply to restrict the space of the "kinematic" states of the Universe to that of "physical" states - the ones that follow gauge orbits. For this reason we call it a "Hamiltonian constraint." Upon quantization, physical states become wave functions that lie in the kernel of the Hamiltonian operator.

In general, the Hamiltonian vanishes for a theory with general covariance or time-scaling invariance.

Thursday, December 23, 2010

Evil Santa Claus - With Love From Finland

Each year, Sweden gives us the Nobel Prizes, except the one for Peace.

Norway awards the Nobel Peace Prize, sometimes pre-emptively, sometimes to terrorists like Yasser Arafat, and sometimes just to piss off The People's Republic of China, like this year.

But what of Finland, the forgotten Scandinavian country? Can they play too?

Well fret no more, boys and girls! The Finns are back, especially this year, with "Rare Exports", a new film from Finland, in which Santa Claus is revealed to be an old demon (he eats children, so in a way you can say sure, he "likes" them), imprisoned in a mountain long ago, in Russia just across the Finnish border. And guess who breaks him out? Yup, the Americans. Why not? Who ya gonna call?  :-)

Here's the trailer:

You better watch out.
You better not cry.
If you meet up with Santa,
You surely will die!

Well, I put that up as a background on our computer, but my family thought Santa to be TOO evil looking, so I was forced to replace it, with this:

Tuesday, December 21, 2010

Announcing SIAL: The Somerset Institute for Advanced Logic in Rocky Hill, NJ

A group of wealthy charitable persons and deep intellectuals in Somerset County, NJ, who wish to remain anonymous for the time being, are pleased to announce the formation of yet another "Advanced Institute" to be called The Somerset Institute for Advanced Logic in the borough of Rocky Hill, NJ, in Somerset County, NJ, four miles north of Princeton.

The purpose of SIAL is as follows:

To Advance Humanity.

More specifically:

To Advance Humanity by assisting Physics and Physicists.

More specifically:

To Advance Humanity by assisting Physics and Physicists, by assembling in one place, the finest Applied Mathematicians and the finest Applied Computer Scientists on our planet.

And also, any future full-time employees will be well compensated, unlike most intellectuals. Minimum $150,000 for first-year employees (a doctorate in Mathematics or Computer Science required), and future salary to be determined. We'll see how this goes.

Ironically, NO PhDs in Physics will be invited, as the purpose of SIAL will be to assist Physics, not to employ physicists. In both Academia and other Advanced Institutes, such as IAS in Princeton Township, there are plenty of jobs for them. However, we are most interested in the opinions of the world's greatest Physicists as to how to proceed.

SIAL is currently in the planning stages, and will not start up until the year 2015, at the soonest.

Toward that end, there are nine individuals on Earth whose opinions we seek on how to go forward. All nine individuals are unaware of this announcement, today, on December 21st, 2010, yet we seek their opinion greatly. If not them, then we are open to the next best choices.

Those individuals are:

IN COMPUTER SCIENCE:

- Paul Allen

- Steve Wozniak

- Jaron Lanier

IN MATHEMATICS:

- Andrew Wiles

- Shing-Tung Yau

- Edward Witten

IN PHYSICS:

- Steven Weinberg

- Gerardus 't Hooft

- Garrett Lisi

If any of you reading this can think of better choices, do tell. For example, we would like to involve John Baez and Greg Egan somehow; indeed, they are on our shortlist as initial co-directors (Roman Republic consul style).

The current state of affairs at SIAL is that we are researching a farm to buy in either The Borough of Rocky Hill or The Township of Montgomery, which surrounds it. In order to have SIAL, a non-profit institution, sustain itself into the future and beyond the initial investment, we intend the farm to be large enough to host an amusement park across the street from the Institute (so maybe we'll have to buy two farms ... what the heck, it's only money).

Classical Newtonian Mechanics is cool, and roller coasters are the greatest "hook" in our opinion to attract people to that currently low-paying yet ultimately important field (if we are to advance our species) that is Science, specifically the "gold standard" of Science, that is Physics.

Our "patron saint", if you will, will be Aristotle, and we hope one day to erect a twice-life-size statue of that man at the entrance of our Institute.

Onward and upward, Humanity!

 Marble bust of Aristotle. Roman copy after a Greek bronze original by Lysippus c. 330 BC.

Sunday, December 19, 2010

Mathematical Physics Basics

From here.
The language of physics is mathematics. In order to study physics seriously, one needs to learn mathematics that took generations of brilliant people centuries to work out. Algebra, for example, was cutting-edge mathematics when it was being developed in Baghdad in the 9th century. But today it's just the first step along the journey.

Algebra

Algebra provides the first exposure to the use of variables and constants, and experience manipulating and solving linear equations of the form y = ax + b and quadratic equations of the form y = ax^2 + bx + c.

Geometry

Geometry at this level is two-dimensional Euclidean geometry. Courses focus on learning to reason geometrically, to use concepts like symmetry, similarity and congruence, and to understand the properties of geometric shapes in a flat, two-dimensional space.

Trigonometry

Trigonometry begins with the study of right triangles and the Pythagorean theorem. The trigonometric functions sin, cos, tan and their inverses are introduced, and clever identities between them are explored.

Calculus (single variable)

Calculus begins with the definition of an abstract function of a single variable, and introduces the ordinary derivative of that function as the tangent to the curve at a given point along the curve. Integration is derived from looking at the area under a curve, which is then shown to be the inverse of differentiation.

Calculus (multivariable)

Multivariable calculus introduces functions of several variables f(x, y, z, ...), and students learn to take partial and total derivatives. The ideas of directional derivative, integration along a path and integration over a surface are developed in two- and three-dimensional Euclidean space.

Analytic Geometry

Analytic geometry is the marriage of algebra with geometry. Geometric objects such as conic sections, planes and spheres are studied by means of algebraic equations. Vectors in Cartesian, polar and spherical coordinates are introduced.
Linear Algebra

In linear algebra, students learn to solve systems of linear equations of the form ai1 x1 + ai2 x2 + ... + ain xn = ci and express them in terms of matrices and vectors. The properties of abstract matrices, such as the inverse, determinant and characteristic equation, and of certain types of matrices, such as symmetric, antisymmetric, unitary or Hermitian matrices, are explored.

Ordinary Differential Equations

This is where the physics begins! Much of physics is about deriving and solving differential equations. The most important differential equation to learn, and the one most studied in undergraduate physics, is the harmonic oscillator equation, ax'' + bx' + cx = f(t), where x' means the time derivative of x(t).

Partial Differential Equations

For doing physics in more than one dimension, it becomes necessary to use partial derivatives and hence partial differential equations. The first partial differential equations students learn are the linear, separable ones that were derived and solved in the 18th and 19th centuries by people like Laplace, Green, Fourier, Legendre and Bessel.

Methods of Approximation

Most problems in physics can't be solved exactly in closed form. Therefore we have to learn techniques for making clever approximations, such as power series expansions, saddle point integration, and small (or large) perturbations.

Probability and Statistics

Probability became of major importance in physics when quantum mechanics entered the scene. A course on probability begins by studying coin flips and the counting of distinguishable vs. indistinguishable objects. The concepts of mean and variance are developed and applied in the cases of Poisson and Gaussian statistics.
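Since the harmonic oscillator equation anchors so much of undergraduate physics, here is a minimal numerical sketch (pure Python, my own illustration): solving the undamped special case x'' + ω²x = 0 with a fourth-order Runge-Kutta step and checking the answer against the exact solution x(t) = cos(ωt).

```python
import math

def rk4_oscillator(omega, t_end, dt=0.001):
    """Integrate x'' = -omega^2 * x with x(0)=1, x'(0)=0 via classic RK4."""
    def deriv(x, v):
        # State is (position, velocity); derivatives are (velocity, acceleration).
        return v, -omega**2 * x
    x, v, t = 1.0, 0.0, 0.0
    while t < t_end - 1e-12:
        k1x, k1v = deriv(x, v)
        k2x, k2v = deriv(x + 0.5*dt*k1x, v + 0.5*dt*k1v)
        k3x, k3v = deriv(x + 0.5*dt*k2x, v + 0.5*dt*k2v)
        k4x, k4v = deriv(x + dt*k3x, v + dt*k3v)
        x += dt * (k1x + 2*k2x + 2*k3x + k4x) / 6
        v += dt * (k1v + 2*k2v + 2*k3v + k4v) / 6
        t += dt
    return x

omega = 2.0
numeric = rk4_oscillator(omega, t_end=1.0)
exact = math.cos(omega * 1.0)
print(abs(numeric - exact) < 1e-6)  # -> True
```

The damped, driven case ax'' + bx' + cx = f(t) needs only a different `deriv` function; the integration loop is unchanged, which is exactly why numerical methods are taught alongside the analytic ones.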

K-theory

Cohomology is a powerful mathematical technology for classifying differential forms. In the 1960s, work by Sir Michael Atiyah, Isadore Singer, Alexander Grothendieck, and Friedrich Hirzebruch generalized cohomology from differential forms to vector bundles, a subject that is now known as K-theory.

Witten has argued that K-theory is relevant to string theory for classifying D-brane charges. D-branes in string theory carry a type of charge called Ramond-Ramond charge. Ramond-Ramond fields are differential forms, and their charges should be classified by ordinary cohomology. But gauge fields propagate on D-branes, and gauge fields give rise to vector bundles. This suggests that D-brane charge classification requires a generalization of cohomology to vector bundles, hence K-theory.

Overview of K-theory Applied to Strings by Edward Witten
D-branes and K-theory by Edward Witten

Noncommutative Geometry (NCG for short)

Geometry was originally developed to describe physical space that we can see and measure. After modern mathematics was freed from Euclid's Fifth Axiom by Gauss and Bolyai, Riemann added to modern geometry the abstract notion of a manifold M with points that are labeled by local coordinates that are real numbers, with some metric tensor that determines an extremal length between two points on the manifold.

Much of the progress in 20th-century physics came from applying this modern notion of geometry to spacetime, or to quantum gauge field theory.

In the quest to develop a notion of quantum geometry, as far back as 1947, people were trying to quantize spacetime so that the coordinates would not be ordinary real numbers, but somehow elevated to quantum operators obeying some nontrivial quantum commutation relations. Hence the term "noncommutative geometry," or NCG for short.

The current interest in NCG among physicists of the 21st century has been stimulated by the work of French mathematician Alain Connes.

Two Lectures on D-Geometry and Noncommutative Geometry by Michael R. Douglas
Noncommutative Geometry and Matrix Theory: Compactification on Tori by Alain Connes, Michael R. Douglas, Albert Schwarz
String Theory and Noncommutative Geometry by Edward Witten and Nathan Seiberg
Non-commutative spaces in physics and mathematics by Daniela Bigatti
Noncommutative Geometry for Pedestrians by J. Madore
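The "noncommutative" in NCG can be made concrete with a toy example (my own sketch, not drawn from the papers above; the matrices X and Y are invented for illustration): coordinates promoted to operators generally fail to commute, just as small matrices do.

```python
def matmul(a, b):
    """2x2 matrix product, plain lists to keep the sketch dependency-free."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# Two "quantized coordinates" represented as 2x2 matrices:
X = [[0.0, 1.0], [1.0, 0.0]]
Y = [[0.0, -1.0], [1.0, 0.0]]

XY = matmul(X, Y)
YX = matmul(Y, X)
# The commutator [X, Y] = XY - YX would vanish for ordinary numbers:
commutator = [[XY[i][j] - YX[i][j] for j in range(2)] for i in range(2)]
print(commutator)  # nonzero -> X and Y do not commute
```

Ordinary real-number coordinates always satisfy xy = yx; the whole point of NCG is to study geometries where, as here, the commutator is nonzero.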

Friday, December 17, 2010

Some Effects of Human Overpopulation

 The Catholic Church's former attitude re Birth Control, now changed for 2010 and years thereafter.
Some problems associated with or exacerbated by human overpopulation:
• Inadequate fresh water[144] for drinking water use as well as sewage treatment and effluent discharge. Some countries, like Saudi Arabia, use energy-expensive desalination to solve the problem of water shortages.[168][169]
• Depletion of natural resources, especially fossil fuels[170]
• Increased levels of air pollution, water pollution, soil contamination and noise pollution. Once a country has industrialized and become wealthy, a combination of government regulation and technological innovation causes pollution to decline substantially, even as the population continues to grow.[171]
• Deforestation and loss of ecosystems[172] that sustain global atmospheric oxygen and carbon dioxide balance; about eight million hectares of forest are lost each year.[173]
• Changes in atmospheric composition and consequent global warming[174][175]
• Irreversible loss of arable land and increases in desertification[176] Deforestation and desertification can be reversed by adopting property rights, and this policy is successful even while the human population continues to grow.[177]
• Mass species extinctions[178] from reduced habitat in tropical forests due to slash-and-burn techniques sometimes practiced by shifting cultivators, especially in countries with rapidly expanding rural populations; present extinction rates may be as high as 140,000 species lost per year.[179] As of 2008, the IUCN Red List counts a total of 717 animal species as having gone extinct during recorded human history.[180]
• High infant and child mortality.[181] High rates of infant mortality are caused by poverty. Rich countries with high population densities have low rates of infant mortality.[182]
• Intensive factory farming to support large populations, which creates threats to humans including the evolution and spread of antibiotic-resistant bacteria, excessive air and water pollution, and new viruses that infect humans.
• Increased chance of the emergence of new epidemics and pandemics[183] For many environmental and social reasons, including overcrowded living conditions, malnutrition and inadequate, inaccessible, or non-existent health care, the poor are more likely to be exposed to infectious diseases.[184]
• Starvation, malnutrition[143] or poor diet with ill health and diet-deficiency diseases (e.g. rickets). However, rich countries with high population densities do not have famine.[185]
• Poverty coupled with inflation in some regions and a resulting low level of capital formation. Poverty and inflation are aggravated by bad government and bad economic policies. Many countries with high population densities have eliminated absolute poverty and keep their inflation rates very low.[186]
• Low life expectancy in countries with fastest growing populations[187]
• Unhygienic living conditions for many based upon water resource depletion, discharge of raw sewage[188] and solid waste disposal. However, this problem can be reduced with the adoption of sewers. For example, after Karachi, Pakistan installed sewers, its infant mortality rate fell substantially.[189]
• Elevated crime rate due to drug cartels and increased theft by people stealing resources to survive[190]
• Conflict over scarce resources and crowding, leading to increased levels of warfare[191]
• Less Personal Freedom / More Restrictive Laws. Laws regulate interactions between humans. Law "serves as a primary social mediator of relations between people." The higher the population density, the more frequent such interactions become, and thus there develops a need for more laws and/or more restrictive laws to regulate these interactions. It has even been speculated that overpopulation threatens democracy and could give rise to totalitarian-style governments.
Some economists, such as Thomas Sowell[192] and Walter E. Williams[193] argue that third world poverty and famine are caused in part by bad government and bad economic policies. Most biologists and sociologists see overpopulation as a serious threat to the quality of human life.[10][194]

From the Wikipedia article on Overpopulation.

Shut Up About "Climate Change." What Are We Doing To The Oceans?

As you can see from the following photographs, the crap that Industry (which we cannot exclusively blame as long as we use their products, which we all do) puts into the oceans far exceeds the crap put into the atmosphere. Since the dawn of the Industrial Revolution, the acidic content of the seas has increased significantly. What will be the result?

We know more about the surface of the Moon than we do about our own oceans.

The residents of Japanese fishing villages are well aware of what happens - the existence of giant-sized Nomura jellyfish which have destroyed their local economies, below:

Blessed be the jellyfish and blessed be the sea cucumbers, for they shall inherit the earth.

Finis.
Originally posted on Apr. 4, 2010. Back by popular demand. Mine.

Thursday, December 16, 2010

Space Colonization and Transhumanism - Inevitable?

I didn't write the following (source given at end):

Space colonies will become necessary to house the many billions of individuals that will be born in the future as our population continues to expand at a lazy exponential. In his book, The Millennial Project, Marshall T. Savage estimates that the Asteroid Belt could hold 7,500 trillion people, if thoroughly reshaped into O'Neill colonies. At a typical population growth rate for developed countries at 1% per annum (doubling every 72 years), it would take us 1,440 years to fill that space. Siphoning light gases off Jupiter and Saturn and fusing them into heavier elements for construction of further colonies seems plausible in the longer term as well.

Why expand into space? For many, the answers are blatantly obvious, but the easiest is that the alternatives are limiting the human freedom to reproduce, or mass murder, both of which are morally unacceptable. Population growth is not inherently antithetical to a love of the environment — in fact, by expanding outwards into the cosmos in all directions, we'll be able to seed every star system with every species of plant and animal imaginable. The genetic diversity of the embryonic home planet will seem tiny by comparison.

Space colonization is closely related to transhumanism through the mutual association of futurist philosophy, but also more directly because the embrace of transhumanism will be necessary to colonize space. Human beings aren't designed to live in space. Our physiological issues with it are manifold, from deteriorating muscle mass to uncontrollable flatulence. On the surface of Venus, we would melt, on the surface of Mars, we'd freeze. The only reasonable solution is to upgrade our bodies. Not terraform the cosmos, but cosmosform ourselves.

From The Top Ten Transhumanist Technologies at The Lifeboat Foundation

Steve here. I just found out about this website, so I haven't explored it yet and thus have no comment at this time about the subject, except this. I must say going in that I guess I am a creature of my times, because I find "transhumanism" as spooky in an uncomfortable way as I find it inevitable, assuming we don't extinct ourselves in the meantime.

Division By Zero

In mathematics, division by zero is a term used if the divisor (denominator) is zero. Such a division can be formally expressed as a / 0 where a is the dividend (numerator). Whether this expression can be assigned a well-defined value depends upon the mathematical setting. In ordinary (real number) arithmetic, the expression has no meaning, as there is no number which, multiplied by 0, gives a (a≠0).
In computer programming, an attempt to divide by zero may, depending on the programming language and the type of number being divided by zero, generate an exception, generate an error message, crash the program being executed, generate either positive or negative infinity, or could result in a special not-a-number value (see below).
Historically, one of the earliest recorded references to the mathematical impossibility of assigning a value to a / 0 is contained in George Berkeley's criticism of infinitesimal calculus in The Analyst; see Ghosts of departed quantities.
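To make the programming side of this concrete, here is a minimal Python sketch (Python is just one illustrative choice; other languages behave differently, as noted above). Python takes the "raise an exception" route for both integer and floating-point division, and the helper `try_divide` below is a hypothetical name introduced only for this demonstration:

```python
# Python raises ZeroDivisionError for both integer and float division
# by zero, rather than returning infinity or NaN.
def try_divide(a, b):
    """Return a / b, or a short description of the error raised."""
    try:
        return a / b
    except ZeroDivisionError as e:
        return f"ZeroDivisionError: {e}"

print(try_divide(10, 2))      # 5.0
print(try_divide(10, 0))      # ZeroDivisionError: division by zero
print(try_divide(10.0, 0.0))  # ZeroDivisionError: float division by zero
```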

In elementary arithmetic

When division is explained at the elementary arithmetic level, it is often considered as a description of dividing a set of objects into equal parts. As an example, consider having ten apples, and these apples are to be distributed equally to five people at a table. Each person would receive $\textstyle\frac{10}{5}$ = 2 apples. Similarly, if there are 10 apples, and only one person at the table, that person would receive $\textstyle\frac{10}{1}$ = 10 apples.
So for dividing by zero – what is the number of apples that each person receives when 10 apples are evenly distributed amongst 0 people? Certain words can be pinpointed in the question to highlight the problem. The problem with this question is the "when". There is no way to distribute 10 apples amongst 0 people. In mathematical jargon, a set of 10 items cannot be partitioned into 0 subsets. So $\textstyle\frac{10}{0}$, at least in elementary arithmetic, is said to be meaningless, or undefined.
Similar problems occur if one has 0 apples and 0 people, but this time the problem is in the phrase "the number". A partition is possible (of a set with 0 elements into 0 parts), but since the partition has 0 parts, vacuously every set in our partition has a given number of elements, be it 0, 2, 5, or 1000. If there are, say, 5 apples and 2 people, the problem is in "evenly distribute". In any integer partition of a 5-set into 2 parts, one of the parts of the partition will have more elements than the other.
In all of the above three cases, $\textstyle\frac{10}{0}$, $\textstyle\frac{0}{0}$ and $\textstyle\frac{5}{2}$, one is asked to consider an impossible situation before deciding what the answer will be, and that is why the operations are undefined in these cases.
To understand division by zero, one must check it with multiplication: multiplying the quotient by the divisor should recover the original number. However, no number multiplied by zero produces a product other than zero, so no quotient can pass this check. The intuition that such a quotient would have to be bigger than all other numbers, i.e., infinity, takes us beyond elementary arithmetic (see below).
A recurring theme even at this elementary stage is that for every undefined arithmetic operation, there is a corresponding question that is not well-defined. "How many apples will each person receive under a fair distribution of ten apples amongst three people?" is a question that is not well-defined because there can be no fair distribution of ten apples amongst three people.
There is another way, however, to look at the division: to find out how many people, each satisfied with half an apple, can be satisfied by dividing up one apple, one divides 1 by 0.5. The answer is 2. Similarly, to ask how many people who are satisfied with nothing can be satisfied with 1 apple, one divides 1 by 0. The answer is infinite: one can satisfy infinitely many people who are satisfied with nothing with a single apple.
Clearly, one cannot extend the operation of division based on the elementary combinatorial considerations that first define division, but must construct new number systems.

Early attempts

The Brahmasphutasiddhanta of Brahmagupta (598–668) is the earliest known text to treat zero as a number in its own right and to define operations involving zero.[1] The author failed, however, in his attempt to explain division by zero: his definition can be easily proven to lead to algebraic absurdities. According to Brahmagupta,
A positive or negative number when divided by zero is a fraction with the zero as denominator. Zero divided by a negative or positive number is either zero or is expressed as a fraction with zero as numerator and the finite quantity as denominator. Zero divided by zero is zero.
In 830, Mahavira tried unsuccessfully to correct Brahmagupta's mistake in his book Ganita Sara Samgraha: "A number remains unchanged when divided by zero."[1]
Bhaskara II tried to solve the problem by defining (in modern notation) $\textstyle\frac{n}{0}=\infty$.[1] This definition makes some sense, as discussed below, but can lead to paradoxes if not treated carefully. These paradoxes were not treated until modern times.

In algebra

It is generally regarded among mathematicians that a natural way to interpret division by zero is to first define division in terms of other arithmetic operations. Under the standard rules for arithmetic on integers, rational numbers, real numbers, and complex numbers, division by zero is undefined. Division by zero must be left undefined in any mathematical system that obeys the axioms of a field. The reason is that division is defined to be the inverse operation of multiplication. This means that the value of a/b is the solution x of the equation bx = a whenever such a value exists and is unique. Otherwise the value is left undefined.
For b = 0, the equation bx = a can be rewritten as 0x = a or simply 0 = a. Thus, in this case, the equation bx = a has no solution if a is not equal to 0, and has any x as a solution if a equals 0. In either case, there is no unique value, so $\textstyle\frac{a}{b}$ is undefined. Conversely, in a field, the expression $\textstyle\frac{a}{b}$ is always defined if b is not equal to zero.

Division as the inverse of multiplication

The concept that explains division in algebra is that it is the inverse of multiplication. For example,
$\frac{6}{3}=2$
since 2 is the value for which the unknown quantity in
$?\times 3=6$
is true. But the expression
$\frac{6}{0}=\,?$
requires a value to be found for the unknown quantity in
$?\times 0=6.$
But any number multiplied by 0 is 0 and so there is no number that solves the equation.
The expression
$\frac{0}{0}=\,?$
requires a value to be found for the unknown quantity in
$?\times 0=0.$
Again, any number multiplied by 0 is 0 and so this time every number solves the equation instead of there being a single number that can be taken as the value of 0/0.
In general, a single value can't be assigned to a fraction where the denominator is 0 so the value remains undefined (see below for other applications).
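The "solve bx = a" definition of division can be made tangible with a brute-force search. The sketch below (the function name `solutions` is my own, not from the text) checks a small range of integers and shows why 6/3 works, 6/0 has no candidate, and 0/0 has too many:

```python
def solutions(b, a, candidates=range(-10, 11)):
    """All x among the candidates satisfying b * x == a."""
    return [x for x in candidates if b * x == a]

print(solutions(3, 6))  # [2]  -> 6/3 = 2, a unique solution
print(solutions(0, 6))  # []   -> 0x = 6 has no solution at all
print(solutions(0, 0))  # every candidate -> 0x = 0 has no unique solution
```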

Fallacies based on division by zero

It is possible to disguise a special case of division by zero in an algebraic argument,[1] leading to spurious proofs that 1 = 2 such as the following:
With the following assumptions:
\begin{align} 0\times 1 &= 0 \\ 0\times 2 &= 0. \end{align}
The following must be true:
$0\times 1 = 0\times 2.\,$
Dividing by zero gives:
$\textstyle \frac{0}{0}\times 1 = \frac{0}{0}\times 2.$
Simplified, yields:
$1 = 2.\,$
The fallacy is the implicit assumption that dividing by 0 is a legitimate operation.
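One way to see exactly where the "proof" breaks is to walk through it numerically. In this Python sketch the legitimate steps all check out, and the illegal one is the division by zero (the function `broken_step` is a name invented here for illustration):

```python
# Each legitimate assumption in the "proof" checks out numerically:
assert 0 * 1 == 0
assert 0 * 2 == 0
assert 0 * 1 == 0 * 2  # so far, so good

# The fallacy enters only at the step that divides both sides by zero:
def broken_step():
    return (0 / 0) * 1 == (0 / 0) * 2

try:
    broken_step()
    conclusion = "1 = 2"
except ZeroDivisionError:
    conclusion = "the division step is illegal"

print(conclusion)  # the division step is illegal
```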

In calculus

Extended real line

At first glance it seems possible to define a/0 by considering the limit of a/b as b approaches 0.
For any positive a, the limit from the right is
$\lim_{b \to 0^+} {a \over b} = +\infty$
however, the limit from the left is
$\lim_{b \to 0^-} {a \over b} = -\infty$
and so the $\lim_{b \to 0} {a \over b}$ is undefined (the limit is also undefined for negative a).
Furthermore, there is no obvious definition of 0/0 that can be derived from considering the limit of a ratio. The limit
$\lim_{(a,b) \to (0,0)} {a \over b}$
does not exist. Limits of the form
$\lim_{x \to 0} {f(x) \over g(x)}$
in which both ƒ(x) and g(x) approach 0 as x approaches 0, may equal any real or infinite value, or may not exist at all, depending on the particular functions ƒ and g (see l'Hôpital's rule for discussion and examples of limits of ratios). These and other similar facts show that the expression 0/0 cannot be well-defined as a limit.
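The claim that 0/0 limits "may equal any real or infinite value" can be checked numerically. In this sketch (the helper `ratio_near_zero` is my own), three different pairs f, g that all vanish at 0 give three different limiting ratios:

```python
import math

def ratio_near_zero(f, g, x=1e-8):
    """Evaluate f(x)/g(x) at a point close to zero, as a crude limit probe."""
    return f(x) / g(x)

print(ratio_near_zero(lambda x: 5 * x, lambda x: x))        # close to 5
print(ratio_near_zero(lambda x: math.sin(x), lambda x: x))  # close to 1
print(ratio_near_zero(lambda x: x, lambda x: x * x))        # huge (diverges)
```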

Formal operations

A formal calculation is one carried out using rules of arithmetic, without consideration of whether the result of the calculation is well-defined. Thus, it is sometimes useful to think of a/0, where a ≠ 0, as being $\infty$. This infinity can be either positive, negative, or unsigned, depending on context. For example, formally:
$\lim\limits_{x \to 0} {\frac{1}{x} =\frac{\lim\limits_{x \to 0} {1}}{\lim\limits_{x \to 0} {x}}} = \frac{1}{0} = \infty.$
As with any formal calculation, invalid results may be obtained. A logically rigorous as opposed to formal computation would say only that
$\lim\limits_{x \to 0^+} \frac{1}{x} = \frac{1}{0^+} = +\infty\text{ and }\lim\limits_{x \to 0^-} \frac{1}{x} = \frac{1}{0^-} = -\infty.$
(Since the one-sided limits are different, the two-sided limit does not exist in the standard framework of the real numbers. Also, the fraction 1/0 is left undefined in the extended real line, therefore it and
$\frac{\lim\limits_{x \to 0} 1 }{\lim\limits_{x \to 0} x}$
are meaningless expressions.)
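The difference between the two one-sided limits is easy to observe numerically. Evaluating 1/x at points ever closer to zero from each side shows the values blowing up with opposite signs, which is why the two-sided limit does not exist:

```python
# 1/x near zero: from the right the values grow toward +infinity,
# from the left toward -infinity, so no two-sided limit exists.
for x in (1e-2, 1e-4, 1e-6):
    print(f"1/{x:g} = {1/x:g}    1/{-x:g} = {1/-x:g}")
```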

Real projective line

The set $\mathbb{R}\cup\{\infty\}$ is the real projective line, which is a one-point compactification of the real line. Here $\infty$ means an unsigned infinity, an infinite quantity that is neither positive nor negative. This quantity satisfies $-\infty = \infty$, which is necessary in this context. In this structure, $\scriptstyle a/0 = \infty$ can be defined for nonzero a, and $\scriptstyle a/\infty = 0$. It is the natural way to view the range of the tangent and cotangent functions of trigonometry: tan(x) approaches the single point at infinity as x approaches either $\scriptstyle+\pi/2$ or $\scriptstyle-\pi/2$ from either direction.
This definition leads to many interesting results. However, the resulting algebraic structure is not a field, and should not be expected to behave like one. For example, $\infty + \infty$ is undefined in the projective line.
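A toy model of projective-line division can be sketched in a few lines of Python. The sentinel `INF` below is a modeling shortcut standing in for the single unsigned point at infinity (Python's float infinity is signed, so a plain sentinel is used instead), and the function name `proj_div` is hypothetical:

```python
INF = "INF"  # sentinel for the single unsigned infinity of the projective line

def proj_div(a, b):
    """Division on a toy real projective line: a/0 = INF for a != 0,
    a/INF = 0, and 0/0 stays undefined."""
    if b == INF:
        return 0
    if b == 0:
        if a == 0:
            raise ValueError("0/0 is undefined even on the projective line")
        return INF
    return a / b

print(proj_div(3, 0))    # INF
print(proj_div(3, INF))  # 0
```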

Riemann sphere

The set $\mathbb{C}\cup\{\infty\}$ is the Riemann sphere, which is of major importance in complex analysis. Here too $\infty$ is an unsigned infinity – or, as it is often called in this context, the point at infinity. This set is analogous to the real projective line, except that it is based on the field of complex numbers. In the Riemann sphere, $1/0=\infty$, but 0/0 is undefined, as is $0\times\infty$.

Extended non-negative real number line

The negative real numbers can be discarded, and infinity introduced, leading to the set [0, ∞], where division by zero can be naturally defined as a/0 = ∞ for positive a. While this makes division defined in more cases than usual, subtraction is instead left undefined in many cases, because there are no negative numbers.

In higher mathematics

Although division by zero cannot be sensibly defined with real numbers and integers, it is possible to consistently define it, or similar operations, in other mathematical structures.

Non-standard analysis

In the hyperreal numbers and the surreal numbers, division by zero is still impossible, but division by non-zero infinitesimals is possible.

Distribution theory
In distribution theory one can extend the function $\textstyle\frac{1}{x}$ to a distribution on the whole space of real numbers (in effect by using Cauchy principal values). It does not, however, make sense to ask for a 'value' of this distribution at x = 0; a sophisticated answer refers to the singular support of the distribution.

Linear algebra

In matrix algebra (or linear algebra in general), one can define a pseudo-division, by setting a/b = ab+, in which b+ represents the pseudoinverse of b. It can be proven that if b−1 exists, then b+ = b−1. If b equals 0, then 0+ = 0; see Generalized inverse.
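For scalars (1×1 matrices) the pseudoinverse reduces to something very simple, which makes the pseudo-division rule easy to demonstrate. This Python sketch (function names invented for illustration) applies the scalar case of the definition above:

```python
def pinv(b):
    """Scalar Moore-Penrose pseudoinverse: 1/b for b != 0, and 0+ = 0."""
    return 0 if b == 0 else 1 / b

def pseudo_div(a, b):
    """Pseudo-division a/b defined as a * b+."""
    return a * pinv(b)

print(pseudo_div(6, 3))  # 2.0
print(pseudo_div(6, 0))  # 0 -- pseudo-division by zero yields 0, not an error
```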

Abstract algebra

Any number system that forms a commutative ring — for instance, the integers, the real numbers, and the complex numbers — can be extended to a wheel in which division by zero is always possible; however, in such a case, "division" has a slightly different meaning.
The concepts applied to standard arithmetic are similar to those in more general algebraic structures, such as rings and fields. In a field, every nonzero element is invertible under multiplication; as above, division poses problems only when attempting to divide by zero. This is likewise true in a skew field (which for this reason is called a division ring). However, in other rings, division by nonzero elements may also pose problems. Consider, for example, the ring Z/6Z of integers mod 6. The meaning of the expression $\textstyle\frac{2}{2}$ should be the solution x of the equation 2x = 2. But in the ring Z/6Z, 2 is not invertible under multiplication. This equation has two distinct solutions, x = 1 and x = 4, so the expression $\textstyle\frac{2}{2}$ is undefined.
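The Z/6Z example can be verified by exhaustive search, since the ring is finite. This sketch (the function name `mod_solutions` is my own) lists every x with b·x ≡ a (mod n):

```python
def mod_solutions(b, a, n):
    """All x in Z/nZ satisfying b*x = a (mod n)."""
    return [x for x in range(n) if (b * x) % n == a % n]

print(mod_solutions(2, 2, 6))  # [1, 4] -> "2/2" is ambiguous in Z/6Z
print(mod_solutions(5, 2, 6))  # [4]    -> 5 is invertible mod 6, answer unique
```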
In field theory, the expression $\textstyle\frac{a}{b}$ is only shorthand for the formal expression ab−1, where b−1 is the multiplicative inverse of b. Since the field axioms only guarantee the existence of such inverses for nonzero elements, this expression has no meaning when b is zero. Modern texts include the axiom 0 ≠ 1 to avoid having to consider the trivial ring or a "field with one element", where the multiplicative identity coincides with the additive identity.

In computer arithmetic

In the SpeedCrunch calculator application, when a number is divided by zero the answer box displays “Error: Divide by zero”.

Most calculators, such as the Texas Instruments TI-86, will halt execution and display an error message when the user or a running program attempts to divide by zero.
The IEEE floating-point standard, supported by almost all modern floating-point units, specifies that every floating point arithmetic operation, including division by zero, has a well-defined result. The standard supports signed zero, as well as infinity and NaN (not a number). There are two zeroes, +0 (positive zero) and −0 (negative zero) and this removes any ambiguity when dividing. In IEEE 754 arithmetic, a ÷ +0 is positive infinity when a is positive, negative infinity when a is negative, and NaN when a = ±0. The infinity signs change when dividing by −0 instead.
Integer division by zero is usually handled differently from floating point since there is no integer representation for the result. Some processors generate an exception when an attempt is made to divide an integer by zero, although others will simply continue and generate an incorrect result for the division. The result depends on how division is implemented, and can either be zero, or sometimes the largest possible integer.
Because of the improper algebraic results of assigning any value to division by zero, many computer programming languages (including those used by calculators) explicitly forbid the execution of the operation and may prematurely halt a program that attempts it, sometimes reporting a "Divide by zero" error. In these cases, if some special behavior is desired for division by zero, the condition must be explicitly tested (for example, using an if statement). Some programs (especially those that use fixed-point arithmetic where no dedicated floating-point hardware is available) will use behavior similar to the IEEE standard, using large positive and negative numbers to approximate infinities. In some programming languages, an attempt to divide by zero results in undefined behavior.
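As a sketch of the "explicit test" approach, here is a Python function that guards the division and hands back IEEE-style special values instead of raising. This is a simplified model (it ignores the sign of a zero divisor, which real IEEE 754 hardware does distinguish), and the function name is hypothetical:

```python
import math

def ieee_like_div(a, b):
    """Mimic IEEE 754 results in a language that traps division by zero:
    a/0 -> signed infinity for nonzero a, and 0/0 -> NaN.
    Simplification: the sign of a zero divisor is ignored here."""
    if b != 0:
        return a / b
    if a == 0:
        return math.nan
    return math.inf if a > 0 else -math.inf

print(ieee_like_div(1.0, 0.0))               # inf
print(ieee_like_div(-1.0, 0.0))              # -inf
print(math.isnan(ieee_like_div(0.0, 0.0)))   # True
```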
In two's complement arithmetic, attempts to divide the smallest signed integer by − 1 are attended by similar problems, and are handled with the same range of solutions, from explicit error conditions to undefined behavior.
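The smallest-integer case can be simulated even in Python, whose integers never overflow, by masking results to 32 bits. This sketch models the wraparound behavior of hardware that continues past the overflow (on x86, the same division instead raises a hardware divide-error exception); the helper name is invented for illustration:

```python
def int32_div(a, b):
    """32-bit two's-complement division, wrapping on overflow --
    a model of hardware that continues, not of what Python itself does."""
    q = int(a / b)          # truncate toward zero, as C integer division does
    q &= 0xFFFFFFFF         # keep only the low 32 bits
    return q - 0x100000000 if q >= 0x80000000 else q

INT_MIN = -2**31
print(int32_div(INT_MIN, -1))  # wraps back around to -2147483648
```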
Most calculators will either return an error or state that 1/0 is undefined, however some TI and HP graphing calculators will evaluate (1/0)2 to ∞.
More advanced computer algebra systems will return an infinity as the result of division by zero; for instance, Microsoft Math and Mathematica return a ComplexInfinity result.

Historical accidents

• On September 21, 1997, a divide-by-zero error in the Remote Data Base Manager aboard the USS Yorktown (CG-48) brought down all the machines on the network, causing the ship's propulsion system to fail.[2]