Poisson’s equation and Boltzmann distribution (part 2.1)

Boltzmann distribution (part 1)



Before we get to the derivation of the Boltzmann distribution and its physical meaning, we need some preliminary information from elementary probability theory. The point is that the macroscopic systems we observe consist, as is well known, of a huge number of smaller particles: any substance consists of atoms, the atoms in turn are divided into nuclei and electrons, the nucleus of an atom is divided into protons and neutrons, and so on. In a material system containing a huge number of particles (a so-called macrosystem) it is pointless to consider each particle separately: first, because nobody (not even modern supercomputers) will ever be able to describe every particle, and second, it would not give us anything, because the behavior of a macrosystem is described by averaged parameters, as we will see later. With such a huge number of particles it makes sense instead to ask about the probability that a given parameter lies in one or another range of values.



So, let us go through some definitions from probability theory, and then, having worked through the Maxwell distribution, we will come to the analysis of the Boltzmann distribution.



In probability theory there is the notion of a random event: a phenomenon that, in a given trial, either occurs or does not. For example, consider a closed box containing a molecule A, and single out some allocated volume inside this box (see Fig. 1).







[Fig. 1. Molecule A and an allocated volume inside a closed box.]



So a random event here is either molecule A being found in the allocated volume, or its absence from that volume (after all, the molecule moves, and at any given moment of time it is either inside the given volume or not).



The probability of a random event is the ratio of the number of trials m in which the event took place to the total number of trials M, where the total number of trials must be large. One cannot speak of the probability of an event on the basis of a single trial; the more trials, the more accurate the estimate of the probability.



In our case, the probability that molecule A will be found in the allocated volume equals:

$$W = \frac{m}{M}$$
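To make the frequency definition tangible, here is a minimal Python sketch (not from the original article; the unit cube, the sub-volume and the number of trials are arbitrary choices): the fraction m/M of random placements of the molecule that land inside the allocated volume approaches the ratio of the volumes.

```python
import random

# Minimal sketch (hypothetical setup): molecule A is modelled as a point placed
# uniformly at random in a unit cube, and the allocated volume is taken to be
# the corner box [0, 0.2)^3.  The fraction m/M estimates the probability W.
M = 100_000                      # total number of trials
m = 0                            # trials in which the molecule was inside the sub-volume

for _ in range(M):
    x, y, z = random.random(), random.random(), random.random()
    if x < 0.2 and y < 0.2 and z < 0.2:
        m += 1

W = m / M
print(W)                         # approaches the volume ratio 0.2**3 = 0.008
```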
Consider now two allocated volumes $\Delta V_1$ and $\Delta V_2$ in the same box (see Fig. 2).



[Fig. 2. (a) Two non-intersecting volumes $\Delta V_1$ and $\Delta V_2$ in the box; (b) the same volumes intersecting.]



If these two volumes do not intersect (see Fig. 2a), then at a given moment of time t molecule A can be located either in volume $\Delta V_1$ or in volume $\Delta V_2$, but one molecule cannot be in two different places at once. Thus we arrive at the concept of incompatible events, when the realization of one event excludes the realization of the other. If the volumes $\Delta V_1$ and $\Delta V_2$ do intersect (see Fig. 2b), there is a nonzero probability that the molecule lands in their intersection, and then the two events are compatible.



The probability that molecule A will fall into the volume $\Delta V_1$ equals:

$$W_1 = \frac{m_1}{M},$$

where $m_1$ is the number of trials in which the molecule was found in volume $\Delta V_1$. Similarly, the probability that molecule A will fall into the volume $\Delta V_2$ equals:

$$W_2 = \frac{m_2}{M},$$

where $m_2$ is the number of trials in which the molecule was found in volume $\Delta V_2$.

Further, the event consisting in the molecule entering at least one of the two (non-intersecting) volumes is realized $m_1 + m_2$ times. Hence the probability of this event is equal to:

$$W = \frac{m_1 + m_2}{M} = \frac{m_1}{M} + \frac{m_2}{M} = W_1 + W_2$$
Thus, we conclude that the probability of realizing one of several incompatible events is equal to the sum of the probabilities of each of them.
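A similar sketch (again with arbitrarily chosen, non-intersecting regions, not taken from the article) illustrates the addition rule: the empirical frequency of "the molecule is in $\Delta V_1$ or $\Delta V_2$" matches $W_1 + W_2$.

```python
import random

# Sketch of the addition rule: two non-intersecting intervals of the
# x-coordinate play the role of the volumes dV1 and dV2.
M = 200_000
m1 = m2 = m_either = 0

for _ in range(M):
    x = random.random()
    in1 = 0.0 <= x < 0.2         # event: molecule is in dV1
    in2 = 0.5 <= x < 0.6         # event: molecule is in dV2 (disjoint from dV1)
    m1 += in1
    m2 += in2
    m_either += (in1 or in2)

print(m_either / M, m1 / M + m2 / M)   # W_{1 or 2} should match W1 + W2
```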



A complete group of incompatible events is a set of events such that the realization of one of them is certain, i.e. the probability that one of the events of the group occurs equals 1.



Events are called equally probable if the probabilities of their realization all have the same value.



Let us return to the last example and introduce the concept of independent events. Let the first event be that molecule A at the moment of time t is in the volume $\Delta V_1$, and the second event be that another molecule B falls into the volume $\Delta V_2$. If the probability that molecule B falls into $\Delta V_2$ does not depend on whether molecule A got into $\Delta V_1$ or not, these events are called independent.



Suppose we have performed all M trials and found that molecule A was in volume $\Delta V_1$ in $m_A$ of them, and molecule B was in volume $\Delta V_2$ in $m_B$ of them. The probabilities of these events are equal to:

$$W_A = \frac{m_A}{M}, \qquad W_B = \frac{m_B}{M}$$

Now select, from the $m_A$ trials in which A got into $\Delta V_1$, those trials in which B also got into $\Delta V_2$. Since the events are independent, B falls into $\Delta V_2$ in the same fraction of these trials as in all trials, so the number of selected trials is $m_A \frac{m_B}{M}$. Hence the probability of the joint realization of events A and B is equal to:

$$W_{AB} = \frac{m_A \frac{m_B}{M}}{M} = \frac{m_A}{M}\cdot\frac{m_B}{M} = W_A W_B$$
That is, the probability of the joint realization of independent events is equal to the product of the probabilities of each event taken separately.
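The product rule can be checked in the same Monte Carlo spirit. In this hypothetical sketch (the volumes are arbitrary and not from the article) two molecules A and B are placed independently in a unit cube, and the frequency of the joint event is compared with the product of the individual frequencies.

```python
import random

# Sketch of the product rule: molecules A and B are placed independently and
# uniformly in a unit cube; the frequency of "A in V1 and B in V2" should match
# the product of the separate frequencies.
def inside(point, lo, hi):
    """True if every coordinate of the point lies in [lo, hi)."""
    return all(lo <= c < hi for c in point)

M = 200_000
mA = mB = mAB = 0

for _ in range(M):
    A = [random.random() for _ in range(3)]
    B = [random.random() for _ in range(3)]
    a = inside(A, 0.0, 0.3)      # event: A is in V1 = [0.0, 0.3)^3
    b = inside(B, 0.4, 0.9)      # event: B is in V2 = [0.4, 0.9)^3
    mA += a
    mB += b
    mAB += (a and b)

print(mAB / M, (mA / M) * (mB / M))    # W_AB should be close to W_A * W_B
```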



If we measure some quantity, for example the velocity of a molecule or the energy of a single molecule, it can take any real value on the number axis (including negative values), i.e. it is continuous, in contrast to what we considered above (so-called discrete values). Such quantities are called random variables. For a continuous random variable it makes no sense to ask for the probability of one exact value; the correct formulation of the question is to ask for the probability that the value lies in the range from, say, x to x + dx. This probability is mathematically equal to:

$$dW = w(x)\,dx$$
Here w(x) is a function called the probability density. Its dimension is the inverse of the dimension of the random variable x.
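As a quick illustration of how the density is used (the choice of a standard normal density here is purely for the example and is not part of the article's argument), one can compare $w(x)\,dx$ with the measured fraction of samples falling in a narrow interval:

```python
import math
import random

# Sketch of dW = w(x) dx with a hypothetical density: the standard normal.
def w(x):
    """Probability density of the standard normal distribution."""
    return math.exp(-x * x / 2.0) / math.sqrt(2.0 * math.pi)

x0, dx = 0.5, 0.01
dW = w(x0) * dx                        # probability predicted by the density

M = 500_000
m = sum(1 for _ in range(M) if x0 <= random.gauss(0.0, 1.0) < x0 + dx)
print(dW, m / M)                       # the two values should be close
```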



Finally, we should state a rather obvious fact: the probability of a certain event, i.e. the sum of the probabilities of all events of a complete group of incompatible events, is equal to one.

In principle, these definitions are enough for us to show the derivation of the Maxwell distribution, and then the Boltzmann distribution.



So, consider an ideal gas (it can be, for example, an electron gas rarefied enough that the interaction between electrons can be neglected). Each particle of this gas has a velocity $\vec v$ or a momentum $\vec p = m\vec v$, and these velocities and momenta can be anything. This means that these parameters are random variables, and we will be interested in the probability density $w(\vec p)$.



Next, it is convenient to introduce the idea of momentum space. Let us plot the components of a particle's momentum along the axes of a coordinate system (see Fig. 3).



[Fig. 3. Momentum space: the components $p_x$, $p_y$, $p_z$ plotted along the coordinate axes.]



We need to find the probability that each component of the momentum lies in the corresponding range:

$$p_x \ldots p_x + dp_x, \qquad p_y \ldots p_y + dp_y, \qquad p_z \ldots p_z + dp_z,$$
or, which is the same thing, that the end of the vector $\vec p$ lies inside the rectangular volume $d\Omega$:

$$d\Omega = dp_x\, dp_y\, dp_z$$
Maxwell based his derivation of the momentum distribution on two postulates. He assumed that:



A) All directions in space are equivalent, and this property is called isotropy; in particular, this means isotropy of the probability density $w(\vec p)$.



B) Motion along the three mutually perpendicular axes is independent, i.e. the value of the momentum component $p_x$ does not depend on the values of the other components $p_y$ and $p_z$.



The particles move in different directions, both positive and negative; for example, along the x axis the momentum component can take the value $p_x$ as well as $-p_x$. But by isotropy the probability density must be an even function (it takes the same value for an argument of either sign), so it can only depend on the square of the component:

$$w(p_x) = \varphi(p_x^2)$$
From the property of isotropy (see above) it follows that the probability densities of the two remaining components are expressed through the same function:

$$w(p_y) = \varphi(p_y^2), \qquad w(p_z) = \varphi(p_z^2)$$
By definition, the probability that the end of the momentum vector $\vec p$ falls into the volume $d\Omega$ is equal to:

$$dW = w(\vec p)\, d\Omega = \Psi(p_x^2 + p_y^2 + p_z^2)\, dp_x\, dp_y\, dp_z,$$

where, by the isotropy postulate, the density $w(\vec p)$ can depend only on the combination $p_x^2 + p_y^2 + p_z^2$; we denote this dependence by the function $\Psi$.
Recall that we found above that for independent events this probability can be expressed as the product of the probabilities of the events for each component:

$$dW = \varphi(p_x^2)\, dp_x \cdot \varphi(p_y^2)\, dp_y \cdot \varphi(p_z^2)\, dp_z$$
Consequently:

$$\Psi(p_x^2 + p_y^2 + p_z^2) = \varphi(p_x^2)\,\varphi(p_y^2)\,\varphi(p_z^2)$$
Taking the logarithm of this expression, we get:

$$\ln \Psi(p_x^2 + p_y^2 + p_z^2) = \ln \varphi(p_x^2) + \ln \varphi(p_y^2) + \ln \varphi(p_z^2)$$
Then we differentiate this identity with respect to $p_x$:

$$\frac{2 p_x\, \Psi'(p_x^2 + p_y^2 + p_z^2)}{\Psi(p_x^2 + p_y^2 + p_z^2)} = \frac{2 p_x\, \varphi'(p_x^2)}{\varphi(p_x^2)},$$
where the prime denotes the derivative of the corresponding function with respect to its (composite) argument.



After cancelling $2 p_x$ in this expression we get:

$$\frac{\Psi'(p_x^2 + p_y^2 + p_z^2)}{\Psi(p_x^2 + p_y^2 + p_z^2)} = \frac{\varphi'(p_x^2)}{\varphi(p_x^2)}$$
Doing the same with the other momentum components, we obtain:

$$\frac{\Psi'(p_x^2 + p_y^2 + p_z^2)}{\Psi(p_x^2 + p_y^2 + p_z^2)} = \frac{\varphi'(p_y^2)}{\varphi(p_y^2)}, \qquad \frac{\Psi'(p_x^2 + p_y^2 + p_z^2)}{\Psi(p_x^2 + p_y^2 + p_z^2)} = \frac{\varphi'(p_z^2)}{\varphi(p_z^2)}$$
Hence the important relationships:

$$\frac{\varphi'(p_x^2)}{\varphi(p_x^2)} = \frac{\varphi'(p_y^2)}{\varphi(p_y^2)} = \frac{\varphi'(p_z^2)}{\varphi(p_z^2)}$$
From these expressions it can be seen that the ratio of the derivative of the function $\varphi$ to the function itself has the same value no matter which momentum component it is evaluated at; since the components are independent variables, this ratio can only be a constant. Denoting this constant by $-\beta$ (the sign is chosen so that the density falls off at large momenta), we can write:

$$\frac{\varphi'(p_x^2)}{\varphi(p_x^2)} = \frac{\varphi'(p_y^2)}{\varphi(p_y^2)} = \frac{\varphi'(p_z^2)}{\varphi(p_z^2)} = -\beta$$
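As an aside (not part of the original derivation), the same separable equation can be handed to sympy, writing $u$ for $p_x^2$; the general solution it returns has exactly the exponential form obtained below.

```python
import sympy as sp

# Sketch: solve phi'(u) = -beta * phi(u), where u stands for p_x**2.
u = sp.symbols('u')
beta = sp.symbols('beta', positive=True)
phi = sp.Function('phi')

ode = sp.Eq(phi(u).diff(u), -beta * phi(u))
print(sp.dsolve(ode, phi(u)))          # Eq(phi(u), C1*exp(-beta*u))
```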
Solving this differential equation, we get (how to solve such equations can be found in any textbook on ordinary differential equations):

$$\varphi(p_x^2) = C e^{-\beta p_x^2},$$
where C and β are constants that we still have to determine (we will do this in the next article). Thus, from the condition of isotropy and the independence of motion along the coordinate axes it follows that the probability for the momentum component $p_x$ to lie in the interval from $p_x$ to $p_x + dp_x$ is given by:

$$dW_{p_x} = C e^{-\beta p_x^2}\, dp_x,$$
and the probability dW of the momentum lying in the volume $d\Omega$ is equal to (recall the product of probabilities of independent events):

$$dW = C^3 e^{-\beta (p_x^2 + p_y^2 + p_z^2)}\, dp_x\, dp_y\, dp_z$$
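As a numerical illustration of this result (the value of β and the sample size are arbitrary; numpy's Gaussian sampler is used because $e^{-\beta p^2}$ is, up to normalization, a Gaussian density), the following sketch draws each momentum component independently and checks that the components are statistically identical and uncorrelated, as isotropy and independence require:

```python
import numpy as np

# Sketch: draw each momentum component independently from a density
# proportional to exp(-beta * p_i**2), i.e. a Gaussian with variance 1/(2*beta).
beta = 2.0
sigma = np.sqrt(1.0 / (2.0 * beta))
p = np.random.normal(0.0, sigma, size=(100_000, 3))   # rows are (p_x, p_y, p_z)

print(p.var(axis=0))                    # all three variances ~ 1/(2*beta) = 0.25
print(np.corrcoef(p, rowvar=False))     # off-diagonal entries ~ 0 (independence)
```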
In the next article we will complete the derivation of the Maxwell distribution, find out the physical meaning of this distribution, and proceed directly to the derivation of the Boltzmann distribution.


