A number of years ago I became aware of the large number of physics enthusiasts out there who have no venue to learn modern physics and cosmology. The Theoretical Minimum is a book for anyone who has ever regretted not taking physics in college, or who simply wants to know how to think like a physicist. In this unconventional introduction, physicist Leonard Susskind and citizen-scientist George Hrabovsky offer a first course in physics and associated math for the ardent amateur.
Think of the apparatus as a black box with a window that displays the result of a measurement. We begin by pointing it along the z axis (Fig. 1). Before the apparatus interacts with the spin, the window is blank (labeled with a question mark in our diagrams). But rest assured, it does not contain a cat.

Figure 1: A spin and a cat-free apparatus, before any measurement is made.

If the spin is not disturbed and the apparatus keeps the same orientation, all subsequent measurements will give the same result.
Coordinate axes show our convention for labeling the directions of space. Assuming the simple law of Eq. holds, repeating the measurement gives the same result. The same will be true for any number of repetitions. We can also say this in the following way: A simple explanation is that the apparatus measures the component of the vector along an axis embedded in the apparatus.
If we are convinced that the spin is a vector, we would naturally describe it by three components: σ_x, σ_y, and σ_z. The apparatus begins in the upright position, with the up-arrow along the z axis. Next, rotate A so that the up-arrow points along the x axis (see the figure). If the spin really is a vector, it is a very peculiar one indeed.
Suppose we repeat the operation many times, each time following the same procedure. The repeated experiment spits out a random series of plus-ones and minus-ones. Determinism has broken down, but in a particular way. The apparatus has been rotated by an arbitrary angle within the x-z plane. The situation is of course more general. We did not have to start with A oriented along z. We may summarize the results of our experimental investigation as follows: What we are learning is that quantum mechanical systems are not deterministic—the results of experiments can be statistically random—but if we repeat an experiment many times, average quantities can follow the expectations of classical physics, at least up to a point.
In that sense, every experiment is invasive. This is true in both classical and quantum physics, but only quantum physics makes a big deal out of it. Why is that so? Classical experiments can be arbitrarily gentle and still accurately and reproducibly record the results of the experiment.
While it is true that the light must have a small enough wavelength to form an image, there is nothing in classical physics that prevents the image from being made with arbitrarily weak light.
In other words, the light can have an arbitrarily small energy content. Any interaction that is strong enough to measure some aspect of a system is necessarily strong enough to disrupt some other aspect of the same system. Thus, you can learn nothing about a quantum system without changing something else.
We can do this over and over without changing the result. But consider this possibility: having measured one component of the spin, can we then measure a second component without disturbing our knowledge of the first? The answer is no. One might say that measuring one component of the spin destroys the information about another component.
If the system is a coin, the space of states is a set of two elements, H and T. The logic of set theory is called Boolean logic. Boolean logic is just a formalized version of the familiar classical logic of propositions. A fundamental idea in Boolean logic is the notion of a truth-value. The truth-value of a proposition is either true or false. Nothing in between is allowed. The related set theory concept is a subset. Roughly speaking, a proposition is true for all the elements in its corresponding subset and false for all the elements not in this subset.
There are rules for combining propositions into more complex propositions, the most important being or, and, and not. We just saw an example of not, which gets applied to a single subset or proposition. And is straightforward, and applies to a pair of propositions. Applied to two subsets, and gives the elements common to both, that is, the intersection of the two subsets. In the die example, the intersection of subsets A and B is the subset of elements that are both odd and less than 4.
The same goes for or. The or rule is similar to and, but has one additional subtlety. In everyday speech, the word or is generally used in the exclusive sense—the exclusive version is true if one or the other of two propositions is true, but not both. However, Boolean logic uses the inclusive version of or, which is true if either or both of the propositions are true.
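The die example can be made completely concrete with ordinary sets. Here is a minimal sketch in Python (the variable names are ours, not the book's): propositions are subsets of the state space, and is intersection, and the inclusive or is union.

```python
# Classical (Boolean) logic for a six-sided die, modeled with Python sets.
# The state space is the set of faces; each proposition is a subset of it.
states = {1, 2, 3, 4, 5, 6}
A = {s for s in states if s % 2 == 1}   # "the die shows an odd number"
B = {s for s in states if s < 4}        # "the die shows a number less than 4"

A_and_B = A & B   # intersection: elements for which both propositions hold
A_or_B = A | B    # union: the inclusive or -- either proposition, or both

print(sorted(A_and_B))   # → [1, 3]
print(sorted(A_or_B))    # → [1, 2, 3, 5]
```

Note that A | B equals B | A here: the union of classical subsets does not care about the order of the propositions, which is precisely the symmetry that fails for quantum measurements.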
Thus, according to the inclusive or, the proposition A or B is true for any outcome that makes either proposition true, or both. The inclusive or has a set theoretic interpretation as the union of two sets.

Figure: An example of the classical model of state space. White numbers are elements of the union of A with B, representing the proposition A or B.

Consider the following two propositions: A: The z component of the spin is +1. B: The x component of the spin is +1. Each of these is meaningful and can be tested by orienting A along the appropriate axis. The negation of each is also meaningful.
Consider how we would test the proposition A or B. One procedure is to measure along z first and then along x. There is an alternative procedure, which is to interchange the order of the two measurements. In that case, if either measurement gives +1, the proposition B or A is true.
In classical physics, the two orders of operation give the same answer. Therefore, the proposition A or B has the same meaning as the proposition B or A. Our job is to use the apparatus A to determine whether the proposition A or B is true or false. We will try using the procedures outlined above. It is unnecessary to go on: A or B is true. The answer is unpredictable. B or A is true. Please take a moment to let this idea sink in. We cannot overstate its importance.
Evidently, in this example, the inclusive or is not symmetric. What about A and B? This is of course a possible outcome. We would be inclined to say that A and B is true. If you know a bit about quantum mechanics, you probably recognize that we are talking about the uncertainty principle. In the case of position and momentum, the two propositions we might consider are these: a certain particle has position x, and that same particle has momentum p.
Awkward as they are, both of these propositions have meaning in the English language, and in classical physics as well. The need for complex quantities will become clear later on, when we study the mathematical representation of spin states. Complex Numbers. Everyone who has gotten this far in the Theoretical Minimum series knows about complex numbers.
Nevertheless, I will spend a few lines reminding you of the essentials. A complex number z is the sum of a real number and an imaginary number. We can write it as z = x + iy, where x and y are real and i² = −1.
In the Cartesian representation, x and y are the horizontal (real) and vertical (imaginary) components. In each case, it takes two real numbers to represent a single complex number. Complex numbers can be added, multiplied, and divided by the standard rules of arithmetic. They can be visualized as points on the complex plane with coordinates (x, y). They can also be represented in polar coordinates: z = re^{iθ} = r(cos θ + i sin θ). Adding complex numbers is easy in component form: simply add the components. Multiplying them is easy in their polar form: simply multiply the radii and add the angles, (r₁e^{iθ₁})(r₂e^{iθ₂}) = r₁r₂e^{i(θ₁+θ₂)}.
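Here is a quick numeric check of the two representations, using Python's built-in complex type and the standard-library cmath module (the particular numbers are just illustrative):

```python
import cmath

# Cartesian form: z = x + iy. Addition adds components.
z1 = 3 + 4j
z2 = 1 - 2j
s = z1 + z2                      # (3 + 1) + (4 - 2)i = 4 + 2i

# Polar form: z = r e^{i theta}. Multiplication multiplies the radii
# and adds the angles.
r1, th1 = cmath.polar(z1)
r2, th2 = cmath.polar(z2)
p = cmath.rect(r1 * r2, th1 + th2)

# A complex number times its conjugate is the positive real number r^2.
r_squared = (z1 * z1.conjugate()).real   # 25.0 for z1 = 3 + 4i
```

Multiplying in polar form gives the same product as ordinary Cartesian multiplication: p agrees with z1 * z2 up to rounding.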
Multiplying a complex number and its conjugate always gives a positive real result: z z* = r². If z is a phase-factor (a complex number whose r is 1), then the following hold: z*z = 1, the real part of z is cos θ, and the imaginary part is sin θ. Vector Spaces. The space of states of a quantum system is not a mathematical set; it is a vector space. Before I tell you about vector spaces, I need to clarify the term vector.
As you know, we use this term to indicate an object with a magnitude and a direction in ordinary space. (To be a little more precise, we will not focus on the set-theoretic properties of state spaces, even though they may of course be regarded as sets.) Such vectors have three components, corresponding to the three dimensions of space. I want you to completely forget about that concept of a vector. From now on, whenever I want to talk about a thing with magnitude and direction in ordinary space, I will explicitly call it a 3-vector.
A mathematical vector space is an abstract construction that may or may not have anything to do with ordinary space. When you come across the term Hilbert space in quantum mechanics, it refers to the space of states. There is a unique vector 0 such that when you add it to any ket, it gives the same ket back: |A⟩ + 0 = |A⟩.
Also, multiplication by a scalar is linear: z(|A⟩ + |B⟩) = z|A⟩ + z|B⟩. Ordinary 3-vectors would satisfy these axioms except for one thing: Axiom 6 allows a vector to be multiplied by any complex number. One can think of 3-vectors as forming a real vector space, and kets as forming a complex vector space. As we will see, there are various concrete ways to represent ket-vectors as well. First of all, consider the set of continuous complex-valued functions of a variable x.
Call the functions A(x). You can add any two such functions and multiply them by complex numbers.
You can check that they satisfy all seven axioms. This example should make it obvious that we are talking about something much more general than three-dimensional arrows. Two-dimensional column vectors provide another concrete example. You can add two column vectors by adding their components, and multiply them by a complex number by multiplying each component. Column vector spaces of any number of dimensions can be constructed.
In the same way, a complex vector space has a dual version that is essentially the complex conjugate vector space. Why the strange terms bra and ket? Bra vectors satisfy the same axioms as the ket-vectors, but there are two things to keep in mind about the correspondence between kets and bras.
Suppose we multiply a ket |A⟩ by a complex number z. Then the bra corresponding to z|A⟩ is not ⟨A|z, but ⟨A|z*. You have to remember to complex-conjugate. Thus, the bra corresponding to a ket is built from the complex conjugates of the ket's components. The analogous operation for bras and kets is the inner product. The inner product is always the product of a bra and a ket, and it is written this way: ⟨B|A⟩.
The result of this operation is a complex number. The axioms for inner products are not too hard to guess: linearity in the ket, and ⟨B|A⟩ = ⟨A|B⟩*. The rule for inner products is essentially the same as for dot products. A vector is said to be normalized if its inner product with itself is 1. Normalized vectors satisfy ⟨A|A⟩ = 1. For ordinary 3-vectors, the term normalized vector is usually replaced by unit vector, that is, a vector of unit length.
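In components, the inner product conjugates the bra's entries, multiplies component by component, and sums. A minimal sketch in Python (the helper name inner and the sample ket are our own):

```python
# Inner product <B|A>: complex-conjugate the bra's components,
# multiply component by component, and sum.
def inner(bra, ket):
    return sum(b.conjugate() * k for b, k in zip(bra, ket))

s = 2 ** -0.5
A = [s, 1j * s]          # a sample two-component ket
B = [1, 0]

norm = inner(A, A)       # <A|A> = 1: this ket is normalized
```

Interchanging bra and ket complex-conjugates the result: inner(A, B) equals inner(B, A).conjugate(), which is exactly the axiom ⟨B|A⟩ = ⟨A|B⟩*.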
Two vectors are said to be orthogonal if their inner product is zero. This is the analog of saying that two 3-vectors are orthogonal if their dot product is zero. Each is of unit length and orthogonal to the others. However, if there were more dimensions of space, there would be more basis vectors. Obviously, there is nothing special about the particular axes x, y, and z. As long as the basis vectors are of unit length and are mutually orthogonal, they comprise an orthonormal basis.
The same principle is true for complex vector spaces. Then look for a third, fourth, and so on. Eventually, you may run out of new directions and there will not be any more orthogonal candidates. The maximum number of mutually orthogonal vectors is the dimension of the space. For column vectors, the dimension is simply the number of entries in the column. Bases do not have to be orthonormal in general; however, in quantum mechanics they generally are. In this book, whenever we say basis, we mean an orthonormal basis.
Next, we use the fact that the basis vectors are orthonormal. This makes the sum in the expansion collapse to a single term. Thus, we see that the components of a vector are just its inner products with the basis vectors. We can rewrite the expansion accordingly. Very roughly, knowing a quantum state means knowing as much as can be known about how the system was prepared. In the last chapter, we talked about using an apparatus to prepare the state of a spin. In fact, we implicitly assumed that the state-vector captures everything there is to know about the preparation. The obvious question to ask is whether the unpredictability is due to an incompleteness in what we call a quantum state.
There are various opinions about this matter. Here is a sampling. There are two versions of this view. In version A, the hidden variables are hard to measure, but in principle they are experimentally available to us. In version B, because we are made of quantum mechanical matter and therefore subject to the restrictions of quantum mechanics, the hidden variables are, in principle, not detectable.
Quantum mechanics is unavoidably unpredictable. Quantum mechanics is as complete a calculus of probabilities as is possible.
The job of a physicist is to learn and use this calculus. For practical reasons, we will adopt the second view. Our goal is to build a representation that captures everything we know about the behavior of spins. At this point, the process will be more intuitive than formal. Please read this section carefully. You get the idea. The idea that there are no hidden variables has a very simple mathematical representation: the space of states for a single spin has only two dimensions. This point deserves emphasis: all possible spin states can be represented in a two-dimensional vector space.
We can write this as an equation, |A⟩ = α_u|u⟩ + α_d|d⟩, where α_u and α_d are the components of the state-vector. I am going to tell you right now what they mean. The basis vectors must be orthogonal to each other. The component α_u, through its squared magnitude, gives the probability of the spin being up if measured along the z axis. The components themselves are not probabilities. To compute a probability, their magnitudes must be squared. In other words, the probabilities for measurements of up and down are given by P_u = α_u*α_u and P_d = α_d*α_d.
Two other points are important. First, |u⟩ and |d⟩ are orthogonal; in other words, ⟨u|d⟩ = 0. The physical meaning of this is that, if the spin is prepared up, then the probability to detect it down is zero, and vice versa. Two orthogonal states are physically distinct and mutually exclusive. If the spin is in one of these states, it cannot be (has zero probability to be) in the other one. This idea applies to all quantum systems, not just spin. In fact, the directions up and down are not orthogonal directions in space, even though their associated state-vectors are orthogonal in state space.
The second important point is that for the total probability to come out equal to unity, we must have α_u*α_u + α_d*α_d = 1.
This is a very general principle of quantum mechanics that extends to all quantum systems: the state of a system is represented by a normalized vector in its space of states. Moreover, the squared magnitudes of the components of the state-vector, along particular basis vectors, represent probabilities for various experimental outcomes.
Here is what we know: if the spin is prepared right and the apparatus is oriented along z, the two outcomes are equally likely. But there is nothing special about up and down that is not also true of right and left. In particular, if the spin is right, it has zero probability of being left. Thus, by analogy with the up-down case, |r⟩ = (1/√2)|u⟩ + (1/√2)|d⟩ and |l⟩ = (1/√2)|u⟩ − (1/√2)|d⟩, up to a choice of overall phase. This is called the phase ambiguity. This condition states that in and out are represented by orthogonal vectors in the same way that up and down are. Using the relationships expressed in the equations above, these conditions state that if the spin is oriented along y, and is then measured along z, it is equally likely to be up or down.
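These statements can be checked numerically. In the sketch below (helper functions our own, kets written as two-component lists in the {|u⟩, |d⟩} basis), the measurement probabilities come from the squared magnitudes of the components:

```python
# Spin states as column vectors in the {|u>, |d>} basis.
s = 2 ** -0.5
u, d = [1, 0], [0, 1]
r = [s, s]      # |r> = (|u> + |d>) / sqrt(2)
l = [s, -s]     # |l> = (|u> - |d>) / sqrt(2)

def inner(bra, ket):
    return sum(b.conjugate() * k for b, k in zip(bra, ket))

def prob(outcome, state):
    amp = inner(outcome, state)          # the component alpha
    return (amp.conjugate() * amp).real  # probability = alpha* alpha

p_up = prob(u, r)     # 0.5: a right-prepared spin measures up half the time
p_down = prob(d, r)   # 0.5: and down the other half
p_left = prob(l, r)   # 0.0: right and left are orthogonal, mutually exclusive
```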
We should also expect that if the spin were measured along the x axis, it would be equally likely to be right or left. This leads to additional conditions on the components of |i⟩ and |o⟩. Here is the result: |i⟩ = (1/√2)|u⟩ + (i/√2)|d⟩ and |o⟩ = (1/√2)|u⟩ − (i/√2)|d⟩. Are they unique in that respect? Are the complex numbers in these equations necessary? Given our framework for spin states, there is no way around them. The following exercise gives you a road map. For the moment, forget how these equations were derived.
For example, the generalized coordinates we used in Volume I (referred to as qi) each represented an independent degree of freedom. Along similar lines, our next task is to count the number of physically distinct states there are for a spin.
I will do it in two ways, to show that you get the same answer either way. How many parameters does it take to specify such an orientation?
The answer is of course two. That seems to add up to four real parameters, with each complex parameter counting as two real ones. But recall that the vector has to be normalized. The normalization condition gives us one equation involving real variables, and cuts the number of parameters down to three. As I said earlier, we will eventually see that the physical properties of a state-vector do not depend on the overall phase-factor.
This means that one of the three remaining parameters is redundant, leaving only two—the same as the number of parameters we need to specify a direction in three-dimensional space. Thus, there is enough freedom in the expression α_u|u⟩ + α_d|d⟩ to describe all the possible orientations of a spin. Latitude and longitude provide another example. These abstractions help us focus on mathematical relationships without worrying about unnecessary details. We need them to have unit length, and to be mutually orthogonal.
Our goal was to synthesize what we know about spins and vector spaces. Here is a brief outline of what we did.
How did we get away with this? We were clever enough to notice that these four numbers are not all independent. This representation is not unique. While achieving these concrete results, we got a chance to see some state-vector mathematics in action and learn something about how these mathematical objects correspond to physical spins. Although we will focus on spin, the same concepts and techniques apply to other quantum systems as well.
Lecture 3. No, we were not built to sense quantum phenomena; not the same way we were built to sense classical things like force and temperature. And eventually we do develop new kinds of intuition. This lecture introduces the principles of quantum mechanics. Linear Operators.
Physical observables—the things that you can measure—are described by linear operators. Observables are the things you measure. Observables are also associated with a vector space, but they are not state-vectors. John Wheeler liked to call such mathematical objects machines. He imagined a machine with two ports: an input port and an output port. You insert a vector into the input port; the gears turn and the machine delivers a result in the output port. Not every machine is a linear operator. Linearity implies a few simple properties. To begin with, a linear operator must give a unique output for every vector in the space.
We can imagine a machine that gives an output for some vectors, but just grinds up others and gives nothing. This machine would not be a linear operator. Something must come out for anything you put in. The next property states that when a linear operator M acts on a multiple of an input vector, it gives the same multiple of the output vector: M(z|A⟩) = zM|A⟩.
The only other rule is that, when M acts on a sum of vectors, the results are simply added together: M(|A⟩ + |B⟩) = M|A⟩ + M|B⟩. The row-column notation depends on our choice of basis vectors. If the vector space is N-dimensional, we choose a set of N orthonormal (orthogonal and normalized) ket-vectors. As before, we expand in the basis; notice that each m_kj is just a complex number. This equation involves a slight abuse of notation that would give a purist indigestion. The left side is an abstract linear operator and the right side is a concrete representation of it in a particular basis.
Equating them is sloppy, but it should not cause confusion. And so on. If you are not familiar with matrix multiplication, run to your computer and look it up right away. There are both advantages and disadvantages to representing vectors and linear operators concretely with columns, rows, and matrices (known collectively as components). The advantages are obvious. Components provide a completely explicit set of arithmetic rules for working the machine.
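To see the machine in action, here is a minimal sketch (our own helper, not from the book) of a matrix acting on the column of components of a ket:

```python
# Acting with a linear operator in component form: multiply the matrix of
# elements into the column vector of components, one row at a time.
def apply(M, v):
    return [sum(M[j][k] * v[k] for k in range(len(v))) for j in range(len(M))]

M = [[0, 1],
     [1, 0]]        # a sample 2x2 matrix
v = [2, 5]

out = apply(M, v)   # → [5, 2]
```

The two defining properties of linearity hold automatically: applying M to z times v gives z times the output, and applying it to a sum of vectors gives the sum of the outputs.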
The underlying relationships between vectors and operators are independent of the particular basis we choose, and the concrete representation obscures that fact. But for a particular linear operator, there will be certain vectors whose directions are the same when they come out as they were when they went in.
These special vectors are called eigenvectors. Furthermore, it is a ket with a very special relationship to M.
Try it out. M also happens to have another eigenvector, with a different eigenvalue. For a generic input vector, by contrast, M alters the direction of the vector as well as its magnitude. Just as the vectors that get multiplied by numbers when M acts on them are called eigenvectors of M, the constants that multiply them are called eigenvalues. In general, the eigenvalues are complex numbers. Here is an example that you can work out for yourself. Linear operators can also act on bra-vectors.
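Here is a direct check of the eigenvector idea, with a sample matrix and vectors of our own choosing:

```python
# An eigenvector comes out of the machine pointing the same way it went in,
# only rescaled by its eigenvalue: M |v> = lambda |v>.
def apply(M, v):
    return [sum(M[j][k] * v[k] for k in range(len(v))) for j in range(len(M))]

M = [[0, 1],
     [1, 0]]
v_plus = [1, 1]      # eigenvector with eigenvalue +1
v_minus = [1, -1]    # eigenvector with eigenvalue -1
w = [1, 0]           # not an eigenvector: M changes its direction

print(apply(M, v_plus))    # → [1, 1]
print(apply(M, v_minus))   # → [-1, 1], i.e. -1 times [1, -1]
print(apply(M, w))         # → [0, 1], a genuinely different direction
```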
I will keep the discussion short by telling you the rule for this type of multiplication. It is simplest in component form. Remember that bra-vectors are represented in component form as row vectors. The problem is complex conjugation. You have to complex-conjugate Z when going from kets to bras. What we need is a concept of complex conjugation for operators. We would like to write this equation in matrix form, using bras instead of kets.
In doing this, we have to remember that bra-vectors are represented by rows, not columns. For the result to work out correctly, we also need to rearrange the complex conjugate elements of the matrix M.
Our new equation is the bra counterpart of the ket equation: the row vector stands to the left of the matrix, and the matrix elements are complex-conjugated with rows and columns interchanged. For example, where you see m23 in the ket equation, the bra equation has the complex conjugate of m32. In other words, the rows and columns have been interchanged. When we change an equation from the ket form to the bra form, we must modify the matrix in two steps: interchange the rows and the columns, and complex-conjugate each matrix element. In matrix notation, interchanging rows and columns is called transposing and is indicated by a superscript T.
The complex conjugate of a transposed matrix is called its Hermitian conjugate, denoted by a dagger. You could think of the dagger as a hybrid of the star-notation used in complex conjugation and the T used in transposition. In symbols, M† = [M^T]*. To summarize: if M acts on the ket |A⟩ to give |B⟩, then M† acts on the bra ⟨A| to give ⟨B|. In symbols: ⟨A|M† = ⟨B|. The results of any measurements are real numbers. If we want to be pedantic, we might say that observable quantities are equal to their own complex conjugates.
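The two steps, transpose then conjugate, are easy to write out explicitly. A sketch (the helper names are ours):

```python
# Hermitian conjugate (dagger): interchange rows and columns (transpose),
# then complex-conjugate every element.
def dagger(M):
    rows, cols = len(M), len(M[0])
    return [[M[j][i].conjugate() for j in range(rows)] for i in range(cols)]

M = [[1, 2 + 1j],
     [3j, 4]]
Md = dagger(M)             # [[1, -3j], [2 - 1j, 4]]

H = [[2, 1 - 1j],
     [1 + 1j, 5]]          # a Hermitian matrix: equal to its own dagger
```

Applying dagger twice returns the original matrix, and dagger(H) == H is exactly the Hermiticity condition discussed in the text.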
What kind of linear operators? The kind that are the closest thing to a real operator. Observables in quantum mechanics are represented by linear operators that are equal to their own Hermitian conjugates. They are called Hermitian operators after the French mathematician Charles Hermite. Hermitian operators satisfy the property M = M†. Hermitian operators and matrices have some special properties. Thus, we can rewrite the two equations as a single condition on the matrix elements: each element equals the complex conjugate of its transposed partner. The basic idea is that observable quantities in quantum mechanics are represented by Hermitian operators.
We can state it more precisely as follows: the eigenvectors of a Hermitian operator are a complete set. This means that any vector the operator can generate can be expanded as a sum of its eigenvectors. Degeneracy comes into play when two operators have simultaneous eigenvectors, as discussed later on in Section 5. One can summarize the fundamental theorem as follows: the eigenvectors of a Hermitian operator form an orthonormal basis.
By now, the trick should be obvious, but I will spell it out. The result is (λ1 − λ2)⟨λ1|λ2⟩ = 0. If the two eigenvalues are different, the inner product must vanish; in other words, the two eigenvectors must be orthogonal. Suppose instead that the two eigenvalues are equal. In other words, there are two distinct eigenvectors with the same eigenvalue.
It should be clear that any linear combination of the two eigenvectors is also an eigenvector with the same eigenvalue.
Consider an arbitrary linear combination of these two eigenvectors. By assumption, these two vectors are linearly independent—otherwise, they would not represent distinct states. We outline the Gram-Schmidt procedure below, in Section 3. In other words, if the space is N-dimensional, there will be N orthonormal eigenvectors. The proof is easy and I will leave it to you.
Exercise 3. Prove the following: If a vector space is N-dimensional, an orthonormal basis of N vectors can be constructed from the eigenvectors of a Hermitian operator.
This typically happens when a system has degenerate states—distinct states that have the same eigenvalue.
In that situation, we can always use the linearly independent vectors we have, to create an orthonormal set that spans the same space. The method is the Gram-Schmidt procedure I alluded to earlier. Figure 3. The Gram-Schmidt Procedure.
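The procedure itself fits in a few lines. This is a generic sketch (our own implementation, for real or complex components): subtract from each new vector its projections onto the basis vectors found so far, then normalize whatever is left.

```python
def inner(a, b):
    # Inner product <a|b>: conjugate the first vector's components.
    return sum(x.conjugate() * y for x, y in zip(a, b))

def gram_schmidt(vectors):
    # Turn linearly independent vectors into an orthonormal set
    # spanning the same space.
    basis = []
    for v in vectors:
        # Remove the part of v lying along each basis vector so far.
        for e in basis:
            c = inner(e, v)
            v = [vi - c * ei for vi, ei in zip(v, e)]
        norm = abs(inner(v, v)) ** 0.5
        basis.append([vi / norm for vi in v])   # normalize the remainder
    return basis

e1, e2 = gram_schmidt([[1.0, 1.0], [1.0, 0.0]])
# e1 and e2 are orthonormal and span the same plane as the inputs.
```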
We can extend this procedure to larger sets of linearly independent vectors in more dimensions. You can see in Fig. 3 how the construction works for two vectors. The principles all involve the idea of an observable, and they presuppose the existence of an underlying complex vector space whose vectors represent system states.
In this lecture, we present the four principles that do not involve the evolution of state-vectors with time. An observable could also be called a measurable. These are examples of observables.
The observable or measurable quantities of quantum mechanics are represented by linear operators L. (The ability to use nonorthogonal vectors as a starting point is the main feature of the Gram-Schmidt procedure.) Some authors regard this as a postulate, or basic principle. We have chosen instead to derive it from the other principles.
The end result is the same either way: The possible results of a measurement are the eigenvalues of the operator that represents the observable. Unambiguously distinguishable states are represented by orthogonal vectors. We can already begin to see that an operator is a way of packaging up states along with their eigenvalues, which are the possible results of measuring those states. These ideas should become clear as we move forward. First of all, the result of a measurement is generally statistically uncertain.
However, for any given observable, there are particular states for which the result is absolutely certain. Principle 1 gives us a new way to look at these facts. When an observable is measured, the result is always a real number drawn from a set of possible results. For example, if the energy of an atom is measured, the result will be one of the established energy levels of the atom. The apparatus never gives any other result.
Namely, the result of a measurement is always one of the eigenvalues of the corresponding operator. Principle 3 is the most interesting. It speaks of unambiguously distinct states, a key idea that we have already encountered. Two states are physically distinct if there is a measurement that can tell them apart without ambiguity.
There is no possibility of a mistake. One might say that the inner product of two states is a measure of the inability to distinguish them with certainty. Sometimes this inner product is called the overlap. Principle 3 requires physically distinct states to be represented by orthogonal state-vectors, that is, vectors with no overlap.
We will do so shortly. But, in general, there is no way to tell for certain which of these values will be observed. More precisely, the probability is the square of the magnitude of the overlap: P(λ) = |⟨λ|A⟩|². You might be wondering why the probability is not the overlap itself. Why the square of the overlap? Keep in mind that the inner product of two vectors is not always positive, or even real. Probabilities, on the other hand, are both positive and real. The operators that represent observables are Hermitian.
The reason for this is twofold. First, since the result of an experiment must be a real number, the eigenvalues of an operator L must also be real. As you know, physicists recognize various types of physical quantities, such as scalars and vectors.
It should come as no surprise, then, that an operator associated with the measurement of a vector (such as spin) has a vector character of its own. In our travels so far, we have seen more than one kind of vector. The 3-vector is the most straightforward and serves as a prototype. For that, we need bras and kets, which have complex-valued components. But what does that actually mean? In physical terms, it means this: The bottom line is that there is a spin operator for each direction in which the apparatus can be oriented.
We can express this with abstract equations: each spin component acts on its own eigenstates with eigenvalues +1 and −1; for example, σ_z|u⟩ = |u⟩ and σ_z|d⟩ = −|d⟩, and similarly for σ_x with |r⟩, |l⟩ and σ_y with |i⟩, |o⟩. These experimental data fix the required eigenvalues and eigenvectors. Can we find matrices that represent these operators? Yes, we can. (We are not trying to slip in a political slogan. Just say no to slogans.) Here is the solution:

σ_z = |1   0|     σ_x = |0  1|     σ_y = |0  -i|
      |0  -1|           |1  0|           |i   0|

These three matrices are very famous and carry the name of their discoverer. They are the Pauli matrices. The correspondence between operators and measurements is fundamental in quantum mechanics. It is also very easy to misunderstand.
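We can verify directly that these matrices act on the spin states in the required way. A numeric sketch (kets as two-component lists, helper names our own):

```python
# The Pauli matrices, with i written as Python's 1j.
sigma_z = [[1, 0], [0, -1]]
sigma_x = [[0, 1], [1, 0]]
sigma_y = [[0, -1j], [1j, 0]]

def apply(M, v):
    return [sum(M[j][k] * v[k] for k in range(2)) for j in range(2)]

s = 2 ** -0.5
u, d = [1, 0], [0, 1]        # up and down
r = [s, s]                   # right
i_state = [s, 1j * s]        # in

# Each component operator leaves its own eigenstate alone (eigenvalue +1)
# or flips its sign (eigenvalue -1).
print(apply(sigma_z, u))     # → [1, 0], i.e. +|u>
print(apply(sigma_z, d))     # → [0, -1], i.e. -|d>
```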
Operators are the things we use to calculate eigenvalues and eigenvectors. Operators act on state-vectors (which are abstract mathematical objects), not on actual physical systems. Having said what is true about operators, I want to warn you about a common misconception. It is often thought that measuring an observable is the same as operating with the corresponding operator on the state. For example, suppose we are interested in measuring an observable L. The measurement is some kind of operation that the apparatus does to the system, but that operation is in no way the same as acting on the state with the operator L.
Fortunately, the spin example of the previous subsection is just what we need. Recall the equations for the spin operators and states. No problems here. OK, here is our trap. Despite what you might think, the state-vector on the right-hand side is not the state of the system after the measurement. Neither of the two possible measurement results would leave the system state-vector in the superposition written above. But surely that state-vector must have something to do with the measurement result? In fact, it does. However, the result of a measurement cannot be properly described without taking the apparatus into account as part of the system.
What actually does happen during a measurement is the subject of Section 7. This is a good time to return to the two notions of vectors that come up all the time in physics. The other completely distinct meaning of the term vector is the state-vector of a system. Are they vectors, and if so, what kind? Clearly, they are not state-vectors; they are operators written as matrices that correspond to the three measurable components of spin.
In fact, these 3-vector operators represent a new type of vector. We measure spin components by orienting the apparatus A along any one of the three axes and then activating it. There must be an operator that corresponds to this measurable quantity. They themselves are not operators. To be more concrete, we can write the spin component along a direction as a sum over the three axes. Perhaps there is some comfort in the fact that the resulting matrix operator corresponds to a vector component, which is a scalar.
It all works out in the end. Or even more explicitly, we can combine these three terms into a single matrix:

σ_n = n_x σ_x + n_y σ_y + n_z σ_z = |n_z          n_x - i n_y|
                                    |n_x + i n_y     -n_z    |

What is this good for? And we will also be able to calculate probabilities for those outcomes. In other words, we will have a complete picture of spin measurements in three-dimensional space.
That is pretty darn cool, if I say so myself. Plugging these values into the matrix above, notice that our suggested column vector must have unit length.
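Here is a numeric check of the combined matrix (the direction components are chosen arbitrarily for illustration): because σ_n squares to the identity for any unit vector n, its eigenvalues can only be +1 or −1.

```python
import math

# sigma_n: the spin operator along the unit direction (nx, ny, nz),
# written as the single 2x2 matrix given in the text.
def sigma_n(nx, ny, nz):
    return [[nz, nx - 1j * ny],
            [nx + 1j * ny, -nz]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# Pick an arbitrary direction and normalize it to unit length.
nx, ny, nz = 1.0, 2.0, 2.0
length = math.sqrt(nx**2 + ny**2 + nz**2)
M = sigma_n(nx / length, ny / length, nz / length)

M2 = matmul(M, M)   # the identity matrix, up to rounding
```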
Notice some important facts. This should come as no surprise; the apparatus A can only give one of these two answers no matter which way it points. The second fact is that the two eigenvectors are orthogonal. We are now ready to make an experimental prediction. Does our mathematical framework give the same result? It had better! Unfortunately, we need to cheat a little by using an equation that we will not fully explain until the next lecture.
This is the equation that tells us how to calculate the average value (also called the expectation value) of a measurement. Here it is: ⟨L⟩ = ⟨A|L|A⟩. Equivalently, to calculate the expectation value of a measurement corresponding to the operator L, we multiply each eigenvalue by its probability, and then sum the results. Using the earlier equations, the framework indeed reproduces the experimental result. Having come this far, you might want to try your hand on a slightly more general problem. As before, we start with the apparatus A pointing in the z direction. But now, once the spin has been prepared in the up state, we can rotate A to an arbitrary direction in space for the second set of measurements.
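A sketch of that calculation, done both ways (the tilt angle and helper functions are our own choices): tilt the apparatus by an angle theta away from z in the x-z plane, with the spin prepared up, and the two prescriptions agree, giving cos theta. The probability P(+1) = cos²(theta/2) used below is the standard result for this setup.

```python
import math

def inner(bra, ket):
    return sum(b.conjugate() * k for b, k in zip(bra, ket))

def apply(M, v):
    return [sum(M[j][k] * v[k] for k in range(2)) for j in range(2)]

theta = math.pi / 3   # tilt the apparatus 60 degrees from z, in the x-z plane
sigma = [[math.cos(theta), math.sin(theta)],
         [math.sin(theta), -math.cos(theta)]]   # sigma_n for that direction

up = [1, 0]   # the spin was prepared in the up state

# Way 1: <L> = <A| L |A>.
expect_braket = inner(up, apply(sigma, up)).real

# Way 2: sum of eigenvalue times probability, with P(+1) = cos^2(theta/2).
p_plus = math.cos(theta / 2) ** 2
expect_sum = (+1) * p_plus + (-1) * (1 - p_plus)

print(expect_braket)   # → 0.5 (up to rounding): cos(theta), the classical average
```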
Go ahead and try it. Compute the eigenvalues and eigenvectors for the matrix of Eq. [A figure here shows spherical coordinates and illustrates the conversion to Cartesian coordinates.] Can you show it? I will call it the Spin-Polarization Principle: Any state of a single spin is an eigenvector of some component of the spin.
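In the spirit of the exercise, here is a numerical check (my own sketch) that the spin component along an arbitrary direction, written in spherical coordinates, still has the same two possible measurement results, +1 and -1:

```python
import numpy as np

sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)
sigma_y = np.array([[0, -1j], [1j, 0]], dtype=complex)
sigma_z = np.array([[1, 0], [0, -1]], dtype=complex)

# A unit vector n in spherical coordinates: theta measured from the
# z axis, phi measured around it, converted to Cartesian components.
theta, phi = 0.7, 1.9          # an arbitrary direction
n = np.array([np.sin(theta) * np.cos(phi),
              np.sin(theta) * np.sin(phi),
              np.cos(theta)])

sigma_n = n[0] * sigma_x + n[1] * sigma_y + n[2] * sigma_z

# No matter which way the apparatus points, the eigenvalues are +1 and -1.
vals = np.linalg.eigvalsh(sigma_n)
assert np.allclose(sorted(vals), [-1.0, 1.0])
```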
An interesting consequence of this theorem is that there is no state for which the expectation values of all three components of spin are zero. There is a quantitative way to express this: the squares of the three expectation values sum to 1. Moreover, this is true for any state. There is a massive, quiet, intimidating man sitting alone at the end of the bar. The quantum version has taken three lectures, three mathematical interludes, and according to my rough count, about 17,000 words to get to the same place.
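The quantitative statement can be tested numerically: for any normalized spin state, the squared expectation values of the three components sum to 1, so they cannot all vanish. This is a sketch of mine, not the book's code:

```python
import numpy as np

sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]

# An arbitrary normalized spin state (random complex components).
rng = np.random.default_rng(0)
psi = rng.normal(size=2) + 1j * rng.normal(size=2)
psi /= np.linalg.norm(psi)

# <sigma_x>^2 + <sigma_y>^2 + <sigma_z>^2 = 1 for every state, so the
# three expectation values can never all be zero at once.
total = sum(np.real(psi.conj() @ s @ psi) ** 2 for s in sigma)
assert np.isclose(total, 1.0)
```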
But I think the worst is over. We now know what a state is, but that is only half the story; the other half involves a rule about how states change with time. Let me just give you a quick reminder about the nature of change in classical physics. In classical physics, the space of states is a mathematical set.
The logic is Boolean, and the evolution of states over time is deterministic and reversible. In the simplest examples we considered, the state-space consisted of a few points: the states were pictured as a set of points on the page, and the time evolution was just a rule telling you where to go next. A law of motion consisted of a graph with arrows connecting the states. But there was also another rule, called reversibility. Reversibility is the requirement that a properly formulated law must also tell you where you were last.
A good law corresponds to a graph with exactly one arrow in and one arrow out at each state. There is another way to describe these requirements. It says that information is never lost. If two identical isolated systems are in different states now, then in the past they were also in different states. On the other hand, if two identical systems are in the same state at some point in time, then their histories and their future evolutions must also be identical.
Distinctions are conserved. The basic dynamical assumption of quantum mechanics is that if you know the state at one time, then the quantum equations of motion tell you what it will be later. Without loss of generality, we can take the initial time to be zero and the later time to be t.
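A deterministic, reversible classical law of this kind can be sketched as a toy program (the six-state system is my own hypothetical example): each state has exactly one arrow out, and reversibility means each state also has exactly one arrow in, so the law can be run backward and distinctions are conserved.

```python
# A law of motion on six states: exactly one arrow out of each state.
law = {1: 2, 2: 3, 3: 4, 4: 5, 5: 6, 6: 1}

# Reversibility: exactly one arrow *into* each state as well, i.e. the
# rule is a permutation, so no two histories ever merge.
assert sorted(law.values()) == sorted(law.keys())

# Running the law backward tells you where you were last.
backward = {new: old for old, new in law.items()}
assert backward[law[3]] == 3   # different states now were different before
```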
The state at time t is given by some operation that we call U(t), acting on the state at time zero: |Psi(t)> = U(t)|Psi(0)>. The operation U(t) is called the time-development operator for the system. We are setting up U(t) in such a way that the state-vector will evolve in a deterministic manner.
Yes, you heard me correctly—the time evolution of the state-vector is deterministic. This is nice because it provides us with something we can try to predict.
But how does that square with the statistical character of our measurement results? For this reason, Eq. Classical determinism allows us to predict the results of experiments. The quantum evolution of states allows us to compute the probabilities of the outcomes of later experiments.
It goes back to the relationship between states and measurements we mentioned at the very beginning of this book. First, it requires U(t) to be a linear operator. That is not very surprising. The relationships between states in quantum mechanics are always linear. It goes along with the idea that the state-space is a vector space. But linearity is not the only thing that quantum mechanics requires of U(t).
Recall from the last lecture that two states are distinguishable if they are orthogonal. The conservation of distinctions implies that if two states are orthogonal at time zero, they will continue to be orthogonal for all time. We can express this as <Psi(t)|Phi(t)> = 0. This principle has consequences for the time-development operator U(t).
Notice the dagger that indicates Hermitian conjugation. Any basis will do. The orthonormality is expressed in equation form as <i|j> = delta_ij. Substituting into Eq. In that case, the inner product between them should be 1. Therefore, the general relation takes the form U†(t) U(t) = I. In physics lingo, time evolution is unitary.
Unitary operators play an enormous role in quantum mechanics, representing all sorts of transformations on the state-space. Time evolution is just one example. Exercise 4. One could call this the conservation of overlaps. It expresses the fact that the logical relation between states is preserved with time.
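The conservation of overlaps can be checked directly in a small numerical sketch of my own. I take U(t) = exp(-i t sigma_x), written in closed form since sigma_x squares to the identity; this particular U(t) is an assumed example, not one from the text:

```python
import numpy as np

sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)

def U(t):
    """A sample time-development operator, exp(-i t sigma_x).
    Because sigma_x squared is the identity, the exponential reduces
    to cos(t) I - i sin(t) sigma_x."""
    return np.cos(t) * np.eye(2) - 1j * np.sin(t) * sigma_x

t = 0.83
Ut = U(t)

# Unitarity: U(t)-dagger times U(t) is the identity.
assert np.allclose(Ut.conj().T @ Ut, np.eye(2))

# Conservation of overlaps: the inner product between any two states
# is unchanged by time evolution.
psi = np.array([1, 0], dtype=complex)
phi = np.array([1, 1], dtype=complex) / np.sqrt(2)
assert np.isclose(psi.conj() @ phi, (Ut @ psi).conj() @ (Ut @ phi))
```

In particular, states that start orthogonal stay orthogonal: distinctions are conserved.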
There are two principles that go into the study of incremental changes.
The second principle is continuity. This means that the state-vector changes smoothly. It should be obvious that at t = 0 the time-evolution operator is merely the unit operator I.