Here are some passages about math and physics that caught my attention.
On mathematics being an experimental science:
Mathematics is a part of physics. Physics is an experimental science, a part of natural science. Mathematics is the part of physics where experiments are cheap.
The Jacobi identity (which forces the heights of a triangle to cross at one point) is an experimental fact in the same way as that the Earth is round (that is, homeomorphic to a ball). But it can be discovered with less expense.
On his view, mathematics develops first through observations, then through the effort to find the limits of those observations, then through an attempt to generalize whatever survives into a conjecture, and finally through “modelling” with formal logic:
The scheme of construction of a mathematical theory is exactly the same as that in any other natural science. First we consider some objects and make some observations in special cases. Then we try and find the limits of application of our observations, look for counter-examples which would prevent unjustified extension of our observations onto a too wide range of events (example: the number of partitions of consecutive odd numbers into an odd number of natural summands gives the sequence 1, 2, 4, 8, 16, but then comes 29).
As a result we formulate the empirical discovery that we made (for example, the Fermat conjecture or Poincaré conjecture) as clearly as possible. After this there comes the difficult period of checking as to how reliable are the conclusions.
At this point a special technique has been developed in mathematics. This technique, when applied to the real world, is sometimes useful, but can sometimes also lead to self-deception. This technique is called modelling. When constructing a model, the following idealisation is made: certain facts which are only known with a certain degree of probability or with a certain degree of accuracy, are considered to be “absolutely” correct and are accepted as “axioms”. The sense of this “absoluteness” lies precisely in the fact that we allow ourselves to use these “facts” according to the rules of formal logic, in the process declaring as “theorems” all that we can derive from them.
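The partition experiment in the quote is indeed a cheap one to run. A minimal sketch in Python (the helper names are my own): count the partitions of the odd numbers 1, 3, 5, 7, 9, 11 into an odd number of natural summands.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def partitions(n, max_part, parts_left):
    """Count partitions of n into exactly parts_left summands, each <= max_part."""
    if parts_left == 0:
        return 1 if n == 0 else 0
    if n < parts_left:  # each summand is at least 1
        return 0
    # Fix the largest part p and recurse; keeping parts non-increasing
    # ensures each partition is counted exactly once.
    return sum(partitions(n - p, p, parts_left - 1)
               for p in range(1, min(max_part, n) + 1))

def odd_partitions(n):
    """Partitions of n into an odd number of natural summands."""
    return sum(partitions(n, n, k) for k in range(1, n + 1, 2))

print([odd_partitions(n) for n in (1, 3, 5, 7, 9, 11)])  # [1, 2, 4, 8, 16, 29]
```

The pattern 1, 2, 4, 8, 16 breaks exactly at 29, which is the point of the counter-example.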
On the perils of getting too far away from the “reality”:
It is obvious that in any real-life activity it is impossible to wholly rely on such deductions. The reason is at least that the parameters of the studied phenomena are never known absolutely exactly and a small change in parameters (for example, the initial conditions of a process) can totally change the result. Say, for this reason a reliable long-term weather forecast is impossible and will remain impossible, no matter how much we develop computers and devices which record initial conditions.
In exactly the same way a small change in axioms (of which we cannot be completely sure) is capable, generally speaking, of leading to completely different conclusions than those that are obtained from theorems which have been deduced from the accepted axioms. The longer and fancier is the chain of deductions (“proofs”), the less reliable is the final result.
Complex models are rarely useful (unless for those writing their dissertations).
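The point about sensitivity to initial conditions is easy to demonstrate numerically. A minimal sketch (the logistic map is my choice of illustration, not Arnold's): two trajectories that start a billionth apart end up completely different.

```python
# Iterate the logistic map x -> r*x*(1 - x) from two starting points 1e-9 apart.
r = 4.0                 # fully chaotic regime of the map
a, b = 0.3, 0.3 + 1e-9  # two nearly identical initial conditions
max_gap = 0.0
for step in range(60):
    a = r * a * (1 - a)
    b = r * b * (1 - b)
    if step >= 40:      # by now the tiny initial difference has blown up
        max_gap = max(max_gap, abs(a - b))
print(max_gap)          # no longer small, despite the 1e-9 initial difference
```

This is exactly why recording initial conditions more precisely does not buy a proportionally longer forecast: the error grows exponentially with time.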
The mathematical technique of modelling consists of ignoring this trouble and speaking about your deductive model in such a way as if it coincided with reality. The fact that this path, which is obviously incorrect from the point of view of natural science, often leads to useful results in physics is called “the inconceivable effectiveness of mathematics in natural sciences” — or “the Wigner principle”.
Here we can add a remark by I. M. Gel’fand: there exists yet another phenomenon which is comparable in its inconceivability with the inconceivable effectiveness of mathematics in physics noted by Wigner — this is the equally inconceivable ineffectiveness of mathematics in biology.
The subtle poison of mathematical education (in Felix Klein’s words) for a physicist consists precisely in that the absolutised model separates from the reality and is no longer compared with it. Here is a simple example: mathematics teaches us that the solution of the Malthus equation dx/dt = x is uniquely defined by the initial conditions (that is, the corresponding integral curves in the (t, x)-plane do not intersect each other). This conclusion of the mathematical model bears little relevance to the reality. A computer experiment shows that all these integral curves have common points on the negative t-semi-axis. Indeed, say, curves with the initial conditions x(0) = 0 and x(0) = 1 practically intersect at t = −10, and at t = −100 you cannot fit an atom between them. Properties of the space at such small distances are not described at all by Euclidean geometry. Application of the uniqueness theorem in this situation obviously exceeds the accuracy of the model. This has to be respected in practical application of the model, otherwise one might find oneself faced with serious troubles.
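Arnold's numbers are easy to check: the solutions of the Malthus equation dx/dt = x are x(t) = x(0)·e^t, so at negative t the gap between the curve with x(0) = 1 and the zero solution is just e^t. A quick sketch (the atom diameter is a rough order-of-magnitude figure I am assuming):

```python
import math

# Solutions of the Malthus equation dx/dt = x are x(t) = x(0) * e**t.
# Compare the curve with x(0) = 1 against the zero solution x(0) = 0.
gap_at_minus_10 = math.exp(-10)     # ~4.5e-5
gap_at_minus_100 = math.exp(-100)   # ~3.7e-44
atom_diameter_m = 1e-10             # rough order of magnitude (an assumption)

print(gap_at_minus_10, gap_at_minus_100)
print(gap_at_minus_100 < atom_diameter_m)  # True: no atom fits between the curves
```

At t = −100 the gap is some thirty orders of magnitude below atomic scale, which is the sense in which the uniqueness theorem "exceeds the accuracy of the model" there.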
I would like to note, however, that the same uniqueness theorem explains why the closing stage of mooring of a ship to the quay is carried out manually: if the velocity of approach were defined as a smooth (linear) function of the distance, the process of mooring would require an infinitely long period of time. An alternative is an impact with the quay (which is damped by suitable non-ideally elastic bodies). By the way, this problem had to be seriously confronted on landing the first descending apparata on the Moon and Mars and also on docking with space stations — here the uniqueness theorem is working against us.
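The infinite mooring time follows directly: if the distance obeys dx/dt = −k·x, then the time to shrink it from x0 to eps is (1/k)·ln(x0/eps), which grows without bound as eps → 0. A small sketch (the function name is mine):

```python
import math

def time_to_close(x0, eps, k=1.0):
    """Time for a distance obeying dx/dt = -k*x to shrink from x0 to eps."""
    return math.log(x0 / eps) / k

# Every extra decimal digit of closeness costs the same fixed amount of time,
# so actually reaching the quay (eps -> 0) would take forever.
for eps in (1e-1, 1e-3, 1e-6, 1e-12):
    print(eps, time_to_close(10.0, eps))
```

Hence the final contact must be made by a non-smooth manoeuvre, i.e. a (damped) impact.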
Here he tries to provide simpler explanations for a few mathematical concepts: determinants, groups, and smooth manifolds.
I shall open a few more such secrets (in the interest of poor students). The determinant of a matrix is an (oriented) volume of the parallelepiped whose edges are its columns. If the students are told this secret (which is carefully hidden in the purified algebraic education), then the whole theory of determinants becomes a clear chapter of the theory of poly-linear forms. If determinants are defined otherwise, then any sensible person will forever hate all the determinants, Jacobians and the implicit function theorem.
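The “secret” is easy to verify by hand for 3×3 matrices: the determinant of the matrix whose columns are a, b, c equals the scalar triple product a·(b×c), the oriented volume of the parallelepiped they span. A self-contained sketch:

```python
def det3(a, b, c):
    """Determinant of the 3x3 matrix whose columns are the edge vectors a, b, c."""
    return (a[0] * (b[1] * c[2] - b[2] * c[1])
            - b[0] * (a[1] * c[2] - a[2] * c[1])
            + c[0] * (a[1] * b[2] - a[2] * b[1]))

def cross(u, v):
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

a, b, c = (1, 0, 0), (1, 2, 0), (0, 1, 3)
print(det3(a, b, c))          # 6: volume of the parallelepiped with edges a, b, c
print(dot(a, cross(b, c)))    # 6: the same number as the triple product a . (b x c)
print(det3(b, a, c))          # -6: swapping two edges reverses the orientation
```

The sign change under swapping columns is exactly the "oriented" part of the oriented volume.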
What is a group? Algebraists teach that this is supposedly a set with two operations that satisfy a load of easily-forgettable axioms. This definition provokes a natural protest: why would any sensible person need such pairs of operations? “Oh, curse this maths” — concludes the student (who, possibly, becomes the Minister for Science in the future).
We get a totally different situation if we start off not with the group but with the concept of a transformation (a one-to-one mapping of a set onto itself) as it was historically. A collection of transformations of a set is called a group if along with any two transformations it contains the result of their consecutive application and an inverse transformation along with every transformation.
This is all the definition there is. The so-called “axioms” are in fact just (obvious) properties of groups of transformations. What axiomatisators call “abstract groups” are just groups of transformations of various sets considered up to isomorphisms (which are one-to-one mappings preserving the operations). As Cayley proved, there are no “more abstract” groups in the world. So why do the algebraists keep on tormenting students with the abstract definition?
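Arnold's definition can be checked directly on a small example. A sketch with the six permutations of a three-element set (the representation and helper names are my own): closure and invertibility show up as observed properties of the collection, not as imposed axioms.

```python
from itertools import permutations

# All transformations (one-to-one mappings onto itself) of the set {0, 1, 2},
# each represented as a dict from element to image.
transformations = [dict(enumerate(p)) for p in permutations(range(3))]

def compose(f, g):
    """The transformation 'apply g first, then f'."""
    return {x: f[g[x]] for x in g}

def inverse(f):
    return {y: x for x, y in f.items()}

# The "axioms" are just properties we can observe in this collection:
closed = all(compose(f, g) in transformations
             for f in transformations for g in transformations)
has_inverses = all(inverse(f) in transformations for f in transformations)
print(closed, has_inverses)  # True True
```

Cayley's theorem says every abstract group arises this way, so nothing is lost by starting from transformations.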
What is a smooth manifold? In a recent American book I read that Poincaré was not acquainted with this (introduced by himself) notion and that the “modern” definition was only given by Veblen in the late 1920s: a manifold is a topological space which satisfies a long series of axioms.
For what sins must students try and find their way through all these twists and turns? Actually, in Poincaré’s Analysis Situs there is an absolutely clear definition of a smooth manifold which is much more useful than the “abstract” one.
A smooth k-dimensional submanifold of the Euclidean space R^N is a subset which in a neighbourhood of its every point is a graph of a smooth mapping of R^k into R^(N−k) (where R^k and R^(N−k) are coordinate subspaces). This is a straightforward generalization of the most common smooth curves on the plane (say, of the circle x^2 + y^2 = 1) or of curves and surfaces in the three-dimensional space.
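For instance, near the point (0, 1) the circle x^2 + y^2 = 1 is the graph of the smooth map x ↦ √(1 − x^2). A quick numerical check (near (1, 0) one would instead express x as a function of y, using the other coordinate subspace):

```python
import math

# Near (0, 1) the circle x^2 + y^2 = 1 is the graph of the smooth map
# x -> sqrt(1 - x^2) from a neighbourhood on the x-axis into the y-axis.
def graph(x):
    return math.sqrt(1 - x * x)

for x in (-0.5, 0.0, 0.6):
    y = graph(x)
    print(x, y, abs(x * x + y * y - 1) < 1e-12)  # each point lies on the circle
```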
Between smooth manifolds smooth mappings are naturally defined. Diffeomorphisms are mappings which are smooth, together with their inverses.
An “abstract” smooth manifold is a smooth submanifold of a Euclidean space considered up to a diffeomorphism. There are no “more abstract” finite-dimensional smooth manifolds in the world (Whitney’s theorem). Why do we keep on tormenting students with the abstract definition? Would it not be better to prove for them the theorem on the explicit classification of closed two-dimensional manifolds (surfaces)?
It is this wonderful theorem (which states, for example, that any compact connected oriented surface is a sphere with a number of handles) that gives a correct impression of what modern mathematics is and not the super-abstract generalizations of naive submanifolds of a Euclidean space which in fact do not give anything new and are presented as achievements by the axiomatisators.
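The genus in this classification can even be read off combinatorially: for any polyhedral decomposition of a closed oriented surface, the Euler characteristic V − E + F equals 2 − 2g, where g is the number of handles. A small sketch (the function names are mine):

```python
def euler_characteristic(V, E, F):
    """V - E + F for a polyhedral decomposition of a closed surface."""
    return V - E + F

def genus(V, E, F):
    """Number of handles g, from V - E + F = 2 - 2g (closed oriented surface)."""
    return (2 - euler_characteristic(V, E, F)) // 2

# Cube: a polyhedral sphere (g = 0): 8 vertices, 12 edges, 6 faces.
print(genus(8, 12, 6))   # 0

# Torus, decomposed as a 3x3 grid of squares with opposite sides glued:
# 9 vertices, 18 edges, 9 faces (g = 1).
print(genus(9, 18, 9))   # 1
```

Any decomposition of the same surface gives the same answer, which is what makes the invariant usable in practice.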
The theorem of classification of surfaces is a top-class mathematical achievement, comparable with the discovery of America or X-rays. This is a genuine discovery of mathematical natural science and it is even difficult to say whether the fact itself is more attributable to physics or to mathematics. In its significance for both the applications and the development of correct Weltanschauung it by far surpasses such “achievements” of mathematics as the proof of Fermat’s last theorem or the proof of the fact that any sufficiently large whole number can be represented as a sum of three prime numbers.