A: It has two parts: `From the Earth to the Moon' and `Around the Moon' are the titles of the English translation, published under one cover. The original French title is more complete: `De la Terre à la Lune, trajet direct en 97 heures 20 minutes.' It is very entertaining, one of Jules Verne's best. The Purdue library seems to have 3 copies of the English version.
A: They CAN be defined, but they live in 3-space, and thus it is extremely hard to visualize them. Mathematicians work with them, but they are not mentioned in our course. Similarly, 3 by 3 and larger systems will rarely be mentioned in our course, because for them, even when they are autonomous (that is, time does not enter explicitly), the phase space is 3-space, and the phase portraits are hard to visualize.
There is another reason, more important than this difficulty of visualization, why systems with 3 or more functions are much more complicated than those with 2 functions. The deeper reason is roughly the following: a line in the plane SEPARATES the plane into two parts, but a line in space DOES NOT separate the space. We are interested in lines (straight or curved) because they are trajectories of our systems. It is very important that closed trajectories separate the plane. This is the fact on which much of the mathematical theory of 2 by 2 autonomous systems is based.
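If you want to see what such a picture looks like, here is a small Python sketch (my own illustration; the system x' = y, y' = -x is chosen only because its trajectories are closed circles, each of which separates the plane as described above):

import numpy as np
import matplotlib.pyplot as plt

# Phase portrait of the 2 by 2 autonomous system x' = y, y' = -x.
# Its trajectories are concentric circles around the origin:
# closed curves, each of which separates the plane into two parts.
x, y = np.meshgrid(np.linspace(-2, 2, 25), np.linspace(-2, 2, 25))
dx = y      # x' = y
dy = -x     # y' = -x

plt.streamplot(x, y, dx, dy, density=1.2)
plt.gca().set_aspect("equal")
plt.xlabel("x")
plt.ylabel("y")
plt.title("Phase portrait of x' = y, y' = -x")
plt.show()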
A: The graph of a function f is, by definition, the set of points (x, f(x)). When x and f(x) are real, we need a space of 1+1=2 dimensions to plot these points. So the graph of a real function lives in the plane. If x and f(x) are complex, we need 2 dimensions for each, so the graph will live in a space of dimension 2+2=4. From your experience in multivariate calculus, you know that even graphing something in 3 dimensions is quite hard. In 4 dimensions it is even harder. Mathematicians do study such graphs, and some of them even imagine them (one can imagine almost everything, with sufficient training). But they are usually not included in undergraduate courses.
However, there are other ways to visualize complex functions of a complex variable, by drawing certain 2-dimensional pictures other than graphs. In fact, complex function theory is probably the most visually appealing part of mathematics. For example, the Julia and Mandelbrot sets, which are the mathematical objects most popular with the broad public today, belong to this theory.
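For the curious, here is a small Python sketch (mine, not part of the course) of how such a 2-dimensional picture is produced: a point c belongs to the Mandelbrot set when the iteration z -> z^2 + c, started at z = 0, stays bounded.

import numpy as np
import matplotlib.pyplot as plt

# A point c is in the Mandelbrot set when the iteration z -> z^2 + c,
# started at z = 0, never escapes (once |z| > 2 the orbit escapes to infinity).
def escape_time(c, max_iter=50):
    z = 0
    for n in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return n         # escaped after n steps: c is outside the set
    return max_iter          # still bounded: treat c as inside

xs = np.linspace(-2.0, 0.6, 300)
ys = np.linspace(-1.2, 1.2, 300)
picture = np.array([[escape_time(complex(a, b)) for a in xs] for b in ys])

plt.imshow(picture, extent=(xs[0], xs[-1], ys[0], ys[-1]), cmap="magma")
plt.xlabel("Re c")
plt.ylabel("Im c")
plt.title("Escape times of the iteration z -> z^2 + c")
plt.show()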
x' = -hy,    y' = -gxy,
where x is the regular force and y is the guerilla force, and not vice versa?
A: It seems the picture Lanchester had in mind when he gave these names to the variables was the following: guerillas "see" the regular force, and attack it. So the loss of the regulars is proportional to the guerilla force and its combat efficiency h (first equation). On the other hand, the regulars don't "see" the guerillas, so they have to "comb" a territory, or its population. For example, they may bomb a forest indiscriminately, or mine a territory, or make random ID checks on the roads. The guerillas' loss will be proportional not only to the force of the regulars (gx) but also to the guerilla numbers y (the more guerillas are around, the more of them will be caught in a random ID check). This gives the second equation.
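For those who want to experiment, here is a small Python sketch that integrates this guerilla model numerically (the values of h, g and of the initial forces are invented only for illustration):

import numpy as np
from scipy.integrate import solve_ivp
import matplotlib.pyplot as plt

# Lanchester guerilla model: x' = -h*y, y' = -g*x*y,
# where x is the regular force and y is the guerilla force.
h, g = 0.1, 0.002            # illustrative combat-efficiency constants

def rhs(t, state):
    x, y = state
    return [-h * y, -g * x * y]

sol = solve_ivp(rhs, (0, 60), [100.0, 50.0], dense_output=True)
t = np.linspace(0, 60, 300)
x, y = sol.sol(t)

plt.plot(t, x, label="regular force x(t)")
plt.plot(t, y, label="guerilla force y(t)")
plt.xlabel("t")
plt.legend()
plt.show()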
One does not always have to use this guerilla-regular interpretation literally. Good mathematical models usually have many different interpretations.
For example, in a typical night air bombing raid on a city during WWII (and in the Vietnam war too), the attacking side y did know the target location, and the damage it inflicted on x was roughly proportional to the attacking force (the number of airplanes). This gives the first equation. On the other hand, the defending side, which could not see the airplanes or could not aim its guns, typically used so-called `barrage fire', which means that a large number of guns fired up without aiming. Then the loss of the attacking force was roughly proportional to the number of these firing guns, as well as to the number of airplanes, so the loss of the airplanes is described by the second equation. The air defense barrage balloons widely used in WWII had a similar effect. In this example the attacking side should be described mathematically as a `guerilla force' and the defending one as `regular'.
A: Not really. See, for example, a paradox with exponentials which puzzled even Euler. The main exceptions are the rules concerning logarithms and powers whose exponents are not integers. There are special courses on `Complex Calculus', or `Complex Analysis', offered by our department: 425, 525 and 530. Complex Analysis is one of the main tools mathematics offers to science and engineering.
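Here is a tiny Python illustration of one such exception (this is my own example, not the Euler paradox mentioned above): for complex numbers, the familiar rule log(z1*z2) = log(z1) + log(z2) can fail.

import cmath

# For the principal branch of the complex logarithm the rule
# log(z1*z2) = log(z1) + log(z2) can be off by a multiple of 2*pi*i.
z1 = complex(-1, 0)
z2 = complex(-1, 0)

print(cmath.log(z1 * z2))               # log(1) = 0
print(cmath.log(z1) + cmath.log(z2))    # i*pi + i*pi, approximately 6.2832i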
A: I am sorry for using k instead of lambda, but I still have not overcome the difficulties of typing math in HTML.
Suppose our equation has a multiple eigenvalue k. Consider a slight perturbation of this equation whose eigenvalues are k' and k'', which are very close to k. This perturbed equation has two linearly independent solutions: exp(k't) and exp(k''t). When k' and k'' tend to k, these two solutions tend to the same solution exp(kt) of the original equation. To find another one we use linearity, that is, the fact that a sum or difference of solutions of a linear equation is again a solution of the same equation. Taking the sum exp(k't)+exp(k''t) does not help: this sum tends to 2exp(kt), which is proportional to the solution we already found. The difference exp(k't)-exp(k''t) tends to zero when k' and k'' tend to k, so it also gives nothing useful.
But here comes the idea to rescale this difference. Let us consider this difference DIVIDED by k'-k''. This is also a solution of the perturbed equation, because a solution multiplied (or divided) by a number is a solution again. Now what is the limit of
[exp(k't)-exp(k''t)]/(k'-k''), when k' and k'' both tend to k?
If you think a little bit, you will recognize this ratio as a difference quotient of the function k -> exp(kt); so it tends to the DERIVATIVE of exp(kt) WITH RESPECT TO k (t is constant in our considerations here; it is the equation which changes!)
Thus the limit is t exp(kt), which gives the second solution we were looking for.
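If you would like to check this limit with a computer, here is a small Python sketch using sympy (the symbol names are my own; I write the perturbed eigenvalues as k + epsilon and k - epsilon):

import sympy as sp

t, k, e = sp.symbols('t k epsilon', positive=True)

# Perturbed eigenvalues k' = k + epsilon and k'' = k - epsilon.
ratio = (sp.exp((k + e) * t) - sp.exp((k - e) * t)) / ((k + e) - (k - e))

# As epsilon -> 0 the ratio tends to the derivative of exp(k*t) with respect to k.
print(sp.limit(ratio, e, 0))        # t*exp(k*t)
print(sp.diff(sp.exp(k * t), k))    # t*exp(k*t)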
A: This is a very important question! Of course, I cannot be sure of how Laplace came up with these ideas, but let me explain how he COULD.
Consider polynomials (of any degree). We write a polynomial as
f(x)=a(0)+a(1)x+a(2)x^2+...
instead of the more common notation with subscripts. There are two points of view on polynomials. One is the formal, algebraic point of view: polynomials are just expressions of the above form, which can be added and multiplied according to certain familiar rules. With this approach, the letter x means nothing; it just serves the purpose of labeling, especially when we multiply. If
g(x)=b(0)+b(1)x+b(2)x^2+...
is another polynomial, then the product f(x)g(x) is a new polynomial
(fg)(x)=c(0)+c(1)x+c(2)x^2+...,
where c(k) are defined by the following rule:
c(0)=a(0)b(0),
c(1)=a(0)b(1)+a(1)b(0),
c(2)=a(0)b(2)+a(1)b(1)+a(2)b(0),
and so on. In general,
c(n)=a(0)b(n)+a(1)b(n-1)+a(2)b(n-2)+...+a(n)b(0).
I repeat that in this algebraic approach to polynomials, the letter x serves only accounting purposes: polynomials are nothing else than sequences, with certain rules for addition and multiplication.
Notice that the multiplication rule is formally analogous to the definition of convolution: just replace the index k by a continuous variable, and the sum by the integral.
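Here is the multiplication rule in a few lines of Python (a sketch of mine; the function name and the sample coefficients are invented for illustration):

# Multiply two polynomials given by their coefficient sequences:
# c(n) = a(0)b(n) + a(1)b(n-1) + ... + a(n)b(0), a discrete convolution.
def multiply_coefficients(a, b):
    c = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            c[i + j] += ai * bj
    return c

a = [1, 2]       # f(x) = 1 + 2x
b = [3, 0, 4]    # g(x) = 3 + 4x^2
print(multiply_coefficients(a, b))    # [3, 6, 4, 8], that is 3 + 6x + 4x^2 + 8x^3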
There is of course another, more familiar point of view on polynomials: they are certain functions of a VARIABLE x. This means that we can plug any number x into the expression of a polynomial, and get a new number. The operation on sequences described above then corresponds to the product of functions.
Thus we have a correspondence between sequences and functions: to each sequence a(0), a(1),... corresponds a function f(x), namely the polynomial whose coefficients are a(0), a(1),... . This correspondence transforms the "product" of sequences, defined above, into the usual product of functions.
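One can test this correspondence numerically; here is a small Python sketch (again my own; numpy's convolve implements exactly the "product" of sequences defined above):

import numpy as np

# Evaluate a coefficient sequence a(0), a(1), ... as a polynomial at the point x.
def evaluate(coeffs, x):
    return sum(c * x**n for n, c in enumerate(coeffs))

a = [1, 2]                # f(x) = 1 + 2x
b = [3, 0, 4]             # g(x) = 3 + 4x^2
c = np.convolve(a, b)     # the "product" of the two sequences

x = 1.7
print(evaluate(a, x) * evaluate(b, x))   # product of the function values
print(evaluate(c, x))                    # the same number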
In fact this correspondence (coefficients to functions) is a special case of the Laplace transform! To see this, put x=exp(-s), so that x^n=exp(-ns), replace the integer n (in the powers of x) by a continuous variable t, and replace summation by integration: the sum a(0)+a(1)x+a(2)x^2+... becomes the integral of a(t)exp(-st) dt, which is exactly the Laplace transform of the function a.
We see that the Laplace transform is a generalization of the correspondence between coefficients and polynomials, where the powers of the variable are allowed to be any positive numbers. Now it should not be surprising that it takes convolution into product.
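For those who know a little Python, here is a sketch with sympy checking the "convolution into product" property on a particular pair of functions (exp(-t) and t, chosen only as an example):

import sympy as sp

t, s, u = sp.symbols('t s u', positive=True)

f = sp.exp(-t)
g = t

# Convolution (f*g)(t) = integral from 0 to t of f(u) g(t-u) du.
conv = sp.integrate(f.subs(t, u) * g.subs(t, t - u), (u, 0, t))

F = sp.laplace_transform(f, t, s, noconds=True)
G = sp.laplace_transform(g, t, s, noconds=True)
C = sp.laplace_transform(conv, t, s, noconds=True)

# The transform of the convolution equals the product of the transforms.
print(sp.simplify(C - F * G))    # 0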