Nuggets

Tagged with Physics


Nima Arkani-Hamed on the important skill of turning big ideas into sharp questions

Physicist Nima Arkani-Hamed delivered a series of lectures on “Research Skills” in 2009 as part of the Perimeter Scholars International (PSI) program at the Perimeter Institute for Theoretical Physics.

Here is Nima talking about the most important research skill of all:

It is a remarkable thing that some of the questions that people started thinking about 2000 years ago— The intervening 2000 years have brought us to a place where we can actually work on them. And it is a meaningful thing to work on them.

They have been sharpened to the point where you can work on them. This is one of the—

If I had to say: What is the real, overarching skill of research? [What is] the thing that you cannot be taught, but that has to be experienced. And that has to be gone through a number of times. [It is] this process of taking very big ideas and turning them into sharp questions that you can actually work on. That is the greatest skill of all.

And that’s something that I will try to get you some flavor of towards the latter part of the lectures.

The original videos, C09028 - 09/10 PSI - Research Skills, are hosted at PIRSA. Stitched parts have been uploaded to YouTube as well.

The full curriculum from 2009-2010 PSI is here. In fact, all lectures from every single year have been recorded and made available online for free. Isn’t it awesome?

V. I. Arnold on mathematics as an experimental science

The mathematician V. I. Arnold delivered an interesting (and quite opinionated) speech in 1997 (archive).

Here are some passages about math and physics that caught my attention.

On mathematics being an experimental science:

Mathematics is a part of physics. Physics is an experimental science, a part of natural science. Mathematics is the part of physics where experiments are cheap.

The Jacobi identity (which forces the heights of a triangle to cross at one point) is an experimental fact in the same way as that the Earth is round (that is, homeomorphic to a ball). But it can be discovered with less expense.

According to his viewpoint, math develops first with observations, then with the effort to find the limits of those observations, then with an attempt to generalize what holds up into a conjecture, and finally with “modeling” using formal logic:

The scheme of construction of a mathematical theory is exactly the same as that in any other natural science. First we consider some objects and make some observations in special cases. Then we try and find the limits of application of our observations, look for counter-examples which would prevent unjustified extension of our observations onto a too wide range of events (example: the number of partitions of the consecutive odd numbers 1, 3, 5, 7, 9 into an odd number of natural summands gives the sequence 1, 2, 4, 8, 16, but then comes 29).
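Arnold’s counter-example is easy to verify by brute force. Here is a minimal sketch (mine, not from the speech) that counts the partitions of n into an odd number of natural summands:

```python
def count_odd_partitions(n):
    """Count the partitions of n into an odd number of natural summands."""
    def parts(remaining, max_part):
        # Generate partitions with parts in non-increasing order.
        if remaining == 0:
            yield []
            return
        for k in range(min(remaining, max_part), 0, -1):
            for rest in parts(remaining - k, k):
                yield [k] + rest
    return sum(1 for p in parts(n, n) if len(p) % 2 == 1)

print([count_odd_partitions(n) for n in (1, 3, 5, 7, 9, 11)])
# → [1, 2, 4, 8, 16, 29]: the pattern of powers of two breaks at n = 11
```

For n = 5, say, the qualifying partitions are 5, 3+1+1, 2+2+1 and 1+1+1+1+1, giving the 4 in the sequence.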

As a result we formulate the empirical discovery that we made (for example, the Fermat conjecture or Poincaré conjecture) as clearly as possible. After this there comes the difficult period of checking as to how reliable are the conclusions.

At this point a special technique has been developed in mathematics. This technique, when applied to the real world, is sometimes useful, but can sometimes also lead to self-deception. This technique is called modelling. When constructing a model, the following idealisation is made: certain facts which are only known with a certain degree of probability or with a certain degree of accuracy, are considered to be “absolutely” correct and are accepted as “axioms”. The sense of this “absoluteness” lies precisely in the fact that we allow ourselves to use these “facts” according to the rules of formal logic, in the process declaring as “theorems” all that we can derive from them.

On the perils of getting too far away from the “reality”:

It is obvious that in any real-life activity it is impossible to wholly rely on such deductions. The reason is at least that the parameters of the studied phenomena are never known absolutely exactly and a small change in parameters (for example, the initial conditions of a process) can totally change the result. Say, for this reason a reliable long-term weather forecast is impossible and will remain impossible, no matter how much we develop computers and devices which record initial conditions.
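Arnold’s point about initial conditions can be demonstrated in a few lines. A minimal illustration (mine, not his), using the chaotic logistic map x → 4x(1 − x) as a stand-in for a weather model:

```python
def orbit(x, steps):
    """Iterate the logistic map x -> 4x(1 - x), recording each state."""
    history = [x]
    for _ in range(steps):
        x = 4 * x * (1 - x)
        history.append(x)
    return history

a = orbit(0.2, 50)
b = orbit(0.2 + 1e-10, 50)  # the same "process", measured 1e-10 off
gap = max(abs(p - q) for p, q in zip(a, b))
print(gap)  # the perturbation has grown by many orders of magnitude
```

Within a few dozen iterations the two trajectories bear no resemblance to each other, which is exactly why recording the initial conditions more precisely buys only a little more forecast time.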

In exactly the same way a small change in axioms (of which we cannot be completely sure) is capable, generally speaking, of leading to completely different conclusions than those that are obtained from theorems which have been deduced from the accepted axioms. The longer and fancier is the chain of deductions (“proofs”), the less reliable is the final result.

Complex models are rarely useful (unless for those writing their dissertations).

The mathematical technique of modelling consists of ignoring this trouble and speaking about your deductive model in such a way as if it coincided with reality. The fact that this path, which is obviously incorrect from the point of view of natural science, often leads to useful results in physics is called “the inconceivable effectiveness of mathematics in natural sciences” — or “the Wigner principle”.

Here we can add a remark by I. M. Gel’fand: there exists yet another phenomenon which is comparable in its inconceivability with the inconceivable effectiveness of mathematics in physics noted by Wigner — this is the equally inconceivable ineffectiveness of mathematics in biology.

The subtle poison of mathematical education (in Felix Klein’s words) for a physicist consists precisely in that the absolutised model separates from the reality and is no longer compared with it. Here is a simple example: mathematics teaches us that the solution of the Malthus equation, dx/dt = x, is uniquely defined by the initial conditions (that is, the corresponding integral curves in the (t, x)-plane do not intersect each other). This conclusion of the mathematical model bears little relevance to the reality. A computer experiment shows that all these integral curves have common points on the negative t-semi-axis. Indeed, say, curves with the initial conditions x(0) = 0 and x(0) = 1 practically intersect at t = −10, and at t = −100 you cannot fit an atom between them. Properties of the space at such small distances are not described at all by Euclidean geometry. Application of the uniqueness theorem in this situation obviously exceeds the accuracy of the model. This has to be respected in practical application of the model, otherwise one might find oneself faced with serious troubles.
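The numbers are easy to reproduce: the integral curves through x(0) = 0 and x(0) = 1 are x(t) = 0 and x(t) = e^t, so their separation at time t is simply e^t. A quick check (my sketch, not Arnold’s):

```python
import math

# Separation e^t between the solutions x(t) = 0 and x(t) = e^t of dx/dt = x.
for t in (-10, -100):
    print(f"t = {t}: separation = {math.exp(t):.3e}")
# t = -10 gives ~4.540e-05; t = -100 gives ~3.720e-44, far below
# the diameter of an atom (~1e-10 m) in any reasonable choice of units.
```

At that scale the model’s idealised non-intersecting curves say nothing physical, which is Arnold’s point.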

I would like to note, however, that the same uniqueness theorem explains why the closing stage of mooring of a ship to the quay is carried out manually: if, on steering, the velocity of approach were defined as a smooth (linear) function of the distance, the process of mooring would require an infinitely long period of time. An alternative is an impact with the quay (which is damped by suitable non-ideally elastic bodies). By the way, this problem had to be seriously confronted on landing the first descending apparata on the Moon and Mars, and also on docking with space stations — here the uniqueness theorem is working against us.
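The infinite mooring time is one line of calculus: under the linear law dx/dt = −kx the distance is x(t) = x₀e^(−kt), so closing from x₀ to a tolerance ε takes t = ln(x₀/ε)/k, which diverges as ε → 0. A small sketch (my illustration, with made-up numbers):

```python
import math

def time_to_close(x0, eps, k=1.0):
    """Time for dx/dt = -k*x to bring the distance from x0 down to eps."""
    return math.log(x0 / eps) / k

# The closer we demand the ship get to the quay, the longer it takes,
# without bound: halving eps always adds the same ln(2)/k to the time.
for eps in (1e-1, 1e-6, 1e-12):
    print(f"eps = {eps:g}: t = {time_to_close(10.0, eps):.1f}")
```

Since x(t) > 0 for every finite t, contact with the quay never actually occurs under this law; hence the manual final stage, or the damped impact.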

V. I. Arnold’s book recommendations

The mathematician V. I. Arnold delivered an interesting (and quite opinionated) speech in 1997 (archive).

He made several book recommendations throughout the speech. I am collecting them all here.

Books for amateurs that reveal the beauty of math:

The return of mathematical teaching at all levels from the scholastic chatter to presenting the important domain of natural science is an especially hot problem for France. I was astonished that all the best and methodically most important mathematical books are almost unknown to students here (and, it seems to me, have not been translated into French).

Among these are Essays on Numbers and Figures by V. V. Prasolov, The Enjoyment of Math by Rademacher and Töplitz, Geometry and the Imagination by Hilbert and Cohn-Vossen, What is Mathematics? by Courant and Robbins, How to Solve It and Mathematics and Plausible Reasoning by Polya, Development of Mathematics in the 19th Century by Felix Klein.

Note: in the same spirit, there is a good thread called “Best Maths Books for Non-Mathematicians” at Math Stack Exchange.

Goursat’s, Picard’s and Hermite’s calculus textbooks:

I remember well what a strong impression the calculus course by Hermite — Cours D’analyse De L’école Polytechnique — made on me in my school years. Riemann surfaces appeared in it, I think, in one of the first lectures (all the analysis was, of course, complex, as it should be). Asymptotics of integrals were investigated by means of path deformations on Riemann surfaces under the motion of branching points. Nowadays, we would have called this the Picard-Lefschetz theory.

Picard, by the way, was Hermite’s son-in-law — mathematical abilities are often transferred by sons-in-law: the dynasty Hadamard-P. Levy-L. Schwarz-U. Frisch is yet another famous example in the Paris Academy of Sciences.

The “obsolete” course by Hermite of one hundred years ago (probably, now thrown away from student libraries of French universities) was much more modern than those most boring calculus textbooks with which students are nowadays tormented.

Beginning with L’Hôpital’s first textbook on calculus — Analyse des infiniments petits — and roughly until Goursat’s textbook — A Course in Mathematical Analysis — the ability to solve such problems was considered to be (along with the knowledge of the times table) a necessary part of the craft of every mathematician.

Mentally challenged zealots of “abstract mathematics” threw all the geometry (through which connection with physics and reality most often takes place in mathematics) out of teaching. Calculus textbooks by Goursat, Hermite, Picard — Traité d’analyse — were recently dumped by the student library of the Universities Paris 6 and 7 (Jussieu) as obsolete and, therefore, harmful (they were only rescued by my intervention).

When I was a first-year student at the Faculty of Mechanics and Mathematics of the Moscow State University, the lectures on calculus were read by the set-theoretic topologist L. A. Tumarkin, who conscientiously retold the old classical calculus course of French type in the Goursat version. He told us that integrals of rational functions along an algebraic curve can be taken if the corresponding Riemann surface is a sphere and, generally speaking, cannot be taken if its genus is higher, and that for the sphericity it is enough to have a sufficiently large number of double points on the curve of a given degree (which forces the curve to be unicursal: it is possible to draw its real points on the projective plane with one stroke of a pen).

The classic 10-volume series on Physics by Landau:

A teacher of mathematics, who has not got to grips with at least some of the ten volumes of the course by Landau and Lifshitz, will then become a relict like the one nowadays who does not know the difference between an open and a closed set.

His own series of lectures about group theory:

By the way, in the 1960s I taught group theory to Moscow schoolchildren. Avoiding all the axiomatics and staying as close as possible to physics, in half a year I got to the Abel theorem on the unsolvability of a general equation of degree five in radicals (having on the way taught the pupils complex numbers, Riemann surfaces, fundamental groups and monodromy groups of algebraic functions). This course was later published by one of the audience, V. B. Alekseev, as the book Abel’s Theorem in Problems and Solutions.