subject | topic | question | answer
---|---|---|---|
Physics
|
|general-relativity|spacetime|
|
Is the spacetime referred to in Einstein's field equations, spacetime constructed with inertial coordinates?
|
<p>The EFE holds for any coordinate system. It does not have to be realizable by any set of rods and clocks. It holds in non-inertial coordinates. It holds in coordinates with any arbitrary synchronization strategy. It even holds in coordinate systems that do not have time as a coordinate.</p> <p>The coordinate system has only two requirements:</p> <ol> <li>it must be smooth</li> <li>it must be invertible</li> </ol> <p>Other than that there is no restriction.</p>
|
Physics
|
|quantum-field-theory|feynman-diagrams|path-integral|perturbation-theory|
|
Expanding the generating functional $W[J]$ for connected diagrams as a power series in $\hbar$
|
<p>OP is essentially asking about the <span class="math-container">$\hbar$</span>/loop-expansion for the generating functional <span class="math-container">$W_c[J]$</span> of connected diagrams, i.e. that the power of <span class="math-container">$\hbar$</span> in a diagram is given by the number of loops. This is e.g. proven in my Phys.SE answer <a href="https://physics.stackexchange.com/a/270456/2451">here</a>.</p>
|
Physics
|
|thermodynamics|phase-transition|ice|
|
What would happen if you add ice to an isolated system of water that is already at 0°C?
|
<p>It depends on the amounts involved. If the "cold content" of the ice (the heat it must absorb to warm from &minus;2&nbsp;&deg;C to 0&nbsp;&deg;C) is enough to freeze all of the water, then all of the water freezes. Otherwise only part of the water freezes, at the surface, until the ice reaches 0&nbsp;&deg;C and the heat flow stops; the system then sits in the equilibrium coexistence of water and ice at 0&nbsp;&deg;C.</p>
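The energy balance above is easy to make quantitative. A minimal sketch, with assumed masses (1 kg of ice at &minus;2 &deg;C, 0.5 kg of water) and standard textbook values for the specific heat of ice and the latent heat of fusion:

```python
# Energy balance for ice at -2 °C added to water at 0 °C (isolated system).
# Masses are assumed for illustration; c_ice and L_f are textbook values.
c_ice = 2100.0      # J/(kg K), specific heat of ice
L_f   = 334_000.0   # J/kg, latent heat of fusion of water
dT    = 2.0         # K, the ice must warm from -2 °C to 0 °C

m_ice   = 1.0       # kg (assumed)
m_water = 0.5       # kg (assumed)

# Heat the ice can absorb while warming to 0 °C:
q_available = m_ice * c_ice * dT          # 4200 J

# Mass of water that must freeze to supply that heat (capped at what exists):
m_frozen = min(q_available / L_f, m_water)
print(f"{m_frozen*1000:.1f} g of water freezes")
```

With these numbers only about 13 g of the water freezes, so the system ends as ice and water coexisting at 0 &deg;C, as described above.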
|
Physics
|
|lagrangian-formalism|symmetry|gauge-theory|gauge-invariance|noethers-theorem|
|
Noether's first vs. Noether's second theorem
|
<ol> <li>Unfortunately I don't understand this part of the question.</li> <li>An infinitesimal transformation is a symmetry of the action if the action changes by a boundary term under it. To give an example, consider the scalar field Lagrangian <span class="math-container">$\mathcal L=-\partial_\mu\phi\partial^\mu\phi^\ast,$</span> where <span class="math-container">$\phi$</span> is complex valued. The transformation <span class="math-container">$$ \delta\phi=i\alpha\phi,\quad\delta\phi^\ast=-i\alpha\phi^\ast $$</span> is an infinitesimal symmetry if <span class="math-container">$\alpha$</span> is a constant, since <span class="math-container">$$ \delta\mathcal L=-\partial_\mu(i\alpha\phi)\partial^\mu\phi^\ast+\partial_\mu\phi\partial^\mu(i\alpha\phi^\ast) =-i\alpha\partial_\mu\phi\partial^\mu\phi^\ast+i\alpha\partial_\mu\phi\partial^\mu\phi^\ast=0. $$</span> But if <span class="math-container">$\alpha$</span> is a function on spacetime, we have <span class="math-container">$$ \delta\mathcal L=i\partial_\mu\phi\phi^\ast\partial^\mu\alpha-i\phi\partial^\mu\phi^\ast\partial_\mu\alpha\neq 0, $$</span> and this isn't even a total derivative term. So the transformation with <span class="math-container">$\alpha$</span> a function is <em>not</em> in general a symmetry.</li> <li>The text does not appear to be very systematic regarding this, but if <span class="math-container">$J^\mu_\lambda$</span> is a current that depends on a set of functions <span class="math-container">$\lambda^a$</span> linearly and differentially, e.g. 
<span class="math-container">$$ J^\mu_\lambda=J^\mu_a\lambda^a+J^{\mu,\nu}_a\partial_\nu\lambda^a+\dots+J^{\mu,\nu_1...\nu_r}_a\partial_{\nu_1...\nu_r}\lambda^a, $$</span> and it satisfies <span class="math-container">$ \partial_\mu J^\mu_\lambda=0$</span> for all <span class="math-container">$\lambda$</span> as an <em>off-shell</em> relation, then we can always write <em>also off-shell</em> that <span class="math-container">$$ J^\mu_\lambda=\partial_\nu K^{\mu\nu}_\lambda, $$</span> where <span class="math-container">$K^{\mu\nu}_\lambda$</span> is antisymmetric and also depends on <span class="math-container">$\lambda$</span> linearly and differentially. Moreover, if one gives a proper global formulation for these types of objects, it turns out that this exactness result is <em>also true globally</em>, i.e. there is no nontrivial notion of "de Rham cohomology" for these types of objects. <span class="math-container">$$ \ $$</span>When <span class="math-container">$J^\mu_\lambda=J^\mu_a\lambda^a+J^{\mu,\nu}_a\partial_\nu\lambda^a$</span>, this can be worked out easily: the conservation law gives <span class="math-container">$$ 0=\partial_\mu J^\mu_\lambda=J^{\mu,\nu}_a\partial_{\mu\nu}\lambda^a+(J^\nu_a+\partial_\mu J^{\mu,\nu}_a)\partial_\nu\lambda^a+\partial_\mu J^\mu_a\lambda^a, $$</span> and since <span class="math-container">$\lambda^a$</span> is arbitrary, this must vanish separately order-by-order in the derivatives of <span class="math-container">$\lambda^a$</span>. The second order part gives <span class="math-container">$$ J^{(\mu,\nu)}_a=0\Longleftrightarrow J^{\mu,\nu}_a=K^{\mu\nu}_a, $$</span> where <span class="math-container">$K^{\mu\nu}_a$</span> is antisymmetric. The first order part gives <span class="math-container">$$ J^\mu_a=\partial_\nu K^{\mu\nu}_a, $$</span> and the zeroth order part is then trivially satisfied. 
Then if we define <span class="math-container">$$ K^{\mu\nu}_\lambda=K^{\mu\nu}_a\lambda^a, $$</span> we get <span class="math-container">$$ \partial_\nu K^{\mu\nu}_\lambda=\partial_\nu K^{\mu\nu}_a\lambda^a+K^{\mu\nu}_a\partial_\nu\lambda^a=J^\mu_a\lambda^a+J^{\mu,\nu}_a\partial_\nu\lambda^a=J^\mu_\lambda. $$</span> <em>Exercise:</em> Work out this equation for the electromagnetic example in OP <em>without</em> using the field equations (hint: the second order part of the conservation law is the field equations themselves).</li> </ol>
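The invariance claimed in item 2 for constant <span class="math-container">$\alpha$</span> can be checked symbolically. A minimal sketch with sympy, restricted to 1+1 dimensions with signature (&minus;,+) (so <span class="math-container">$-\partial_\mu\phi\,\partial^\mu\phi^\ast = \partial_t\phi\,\partial_t\phi^\ast - \partial_x\phi\,\partial_x\phi^\ast$</span>), treating <span class="math-container">$\phi$</span> and <span class="math-container">$\phi^\ast$</span> as independent fields:

```python
import sympy as sp

# Symbolic check: delta(phi) = i*alpha*phi, delta(phi*) = -i*alpha*phi*
# leaves L = -d_mu phi d^mu phi* invariant when alpha is constant.
t, x, alpha, eps = sp.symbols('t x alpha eps')
phi  = sp.Function('phi')(t, x)
phis = sp.Function('phistar')(t, x)   # phi* treated as an independent field

def lagrangian(p, ps):
    # -d_mu p d^mu ps with signature (-,+): dt(p) dt(ps) - dx(p) dx(ps)
    return sp.diff(p, t)*sp.diff(ps, t) - sp.diff(p, x)*sp.diff(ps, x)

# First-order variation in the transformation parameter eps:
L_varied = lagrangian(phi + eps*sp.I*alpha*phi, phis - eps*sp.I*alpha*phis)
deltaL = sp.diff(L_varied, eps).subs(eps, 0)
print(sp.simplify(deltaL))   # 0: the constant-alpha transformation is a symmetry
```

Promoting `alpha` to `sp.Function('alpha')(t, x)` and repeating the calculation produces the nonzero <span class="math-container">$\partial_\mu\alpha$</span> terms displayed above.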
|
Physics
|
|experimental-physics|microscopy|image-processing|
|
What is the relation between image resolution and Cutoff from 2D FFT functions in frequency space?
|
<h4>Frequencies in an image</h4> <p>The image of an object consists of many frequencies. High frequencies correspond to quickly varying intensity, and low frequencies to slowly varying features. So high frequencies capture the details of an image, while low frequencies capture the overall shape. Due to the diffraction limit, a microscope can only capture frequencies up to a certain point. Roughly speaking, the diffraction limit depends on the wavelength of the light and on the strength (numerical aperture) of your strongest lens.</p> <p>If you look at an image whose resolution is lower than your camera resolution, you will see that in Fourier space your image lives inside a certain circle. This circle is not sharply defined, but rather smoothed out. To find out what this radius is you could do two things. You could find the circle in Fourier space that contains most of your image. Or, you could transform your image to Fourier space, cut out everything outside a particular circle, transform back and see if your image has degraded. If your image has not degraded (by much), you know that your circle contains all of your image's frequencies.</p> <h4>Explanation of image</h4> <p>Below, I did this procedure on an artificially created image. The width of the stripes is given by <code>1 1 2 2 4 4 8 8</code> etc. In Fourier space I cut out everything outside of the red circle, which corresponds to <span class="math-container">$\lambda=8$</span>. In the final image you can see that the stripes of width <code>4 4</code>, which have a wavelength of 8, can be resolved just fine. But stripes narrower than that are not resolvable.</p> <h4>Scale in Fourier space</h4> <p>Note that an important part of this procedure relies on knowing which parts in Fourier space correspond to which wavelengths. To figure this out you can make use of the following facts.</p> <ol> <li>I define the window of the image in real space to be of width <span class="math-container">$[0,x_{max})$</span>. This means the size of a pixel is given by <span class="math-container">$dx=x_{max}/n$</span>, where <span class="math-container">$n$</span> is the number of pixels. Note that <span class="math-container">$x_{max}$</span> is excluded because of periodicity.</li> <li>The Fast Fourier Transform (FFT) sends this image to <span class="math-container">$[0,k_{max})$</span>. Likewise, we have <span class="math-container">$dk=k_{max}/n$</span>. To relate Fourier space to real space we need a linking equation: <span class="math-container">\begin{align} \begin{cases} n\,dx\,dk=1&\text{$k$ in cycles/length}\\ n\,dx\,dk=2\pi&\text{$k$ in radians/length} \end{cases} \end{align}</span> So <span class="math-container">$k$</span> is either defined as <span class="math-container">$k=2\pi/\lambda$</span> for radians/length or <span class="math-container">$k=1/\lambda$</span> for cycles/length. NumPy, for example, uses cycles/length, but in physics we often use the radians/length convention.</li> <li>The FFT defines the (0,0) point of an image as the origin. If you want to display images with the origin in the center, you may need to "roll" the image a few times to place the top-left corner in the center.</li> </ol> <p>As a final note, this is just one hackish way to get the resolution of the image. 
There are probably better ways.</p> <p>I hope you enjoy your thesis!</p> <p><a href="https://i.stack.imgur.com/XCzW9.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/XCzW9.png" alt="enter image description here" /></a></p> <pre><code>import numpy as np
import matplotlib.pyplot as plt

n = 150
img = np.zeros((n, n))

# Making a multiple resolution image
cur_width = 1
pointer = 0
while pointer < n:
    img[pointer:pointer+cur_width, pointer:] = 1
    img[pointer:, pointer:pointer+cur_width] = 1
    pointer += 2*cur_width
    cur_width *= 2

# Adding some noise
img += np.random.rand(*img.shape)*.1

fig, axes = plt.subplots(ncols=3, dpi=200)
plt.rcParams.update({'font.size': 8})
axes[0].imshow(img)
axes[0].set_xlabel('x (pixels)')

# Calculating FFT and centering
fft = np.fft.fftshift(img)
fft = np.fft.fft2(fft)
fft = np.fft.fftshift(fft)

# cutting out hole in frequency space
cutoff_wavelength = 8
cutoff_k = 1/cutoff_wavelength
dx = 1.0
k = np.fft.fftfreq(n, d=dx)
k = np.fft.fftshift(k)
Kx, Ky = np.meshgrid(k, k)
fft_cutoff = fft*(Kx**2 + Ky**2 < cutoff_k**2)

# plotting result with right scale
dk = 1/(n*dx)
krange = n*dk
kmin = -krange/2
kmax = krange/2 - dk
axes[1].imshow(np.log(1 + np.abs(fft)), extent=[kmin, kmax, kmin, kmax])
circ = plt.Circle((0, 0), cutoff_k, color='r', fill=False)
axes[1].add_patch(circ)
axes[1].set_xlabel('k (1/pixels)')

# transforming back to real space, making sure the zero frequency
# is in the (0, 0) corner
img_low_resolution = np.fft.fftshift(fft_cutoff)
img_low_resolution = np.fft.ifft2(img_low_resolution)
img_low_resolution = np.fft.fftshift(img_low_resolution)
img_low_resolution[n-8:n, :] = img[n-8:n, :]*np.max(img_low_resolution)
axes[2].imshow(np.real(img_low_resolution))
axes[2].set_xlabel('x (pixels)')

plt.tight_layout()
plt.show()
</code></pre>
|
Physics
|
|quantum-mechanics|hilbert-space|wavefunction|normalization|spherical-harmonics|
|
Square integrability of a spherically symmetric wave
|
<p>I think that it is most likely that your professor was not thinking of multiplying the series by itself and integrating term by term. Instead he was just thinking about the following:</p> <p>For the given function <span class="math-container">$u(r,t)$</span> a necessary, but not sufficient, condition for its square integrability is that <span class="math-container">$\lim\limits_{r\rightarrow\infty} u(r,t)=0$</span>. If the series is uniformly convergent, then we can take the limit term by term, so we must have: <span class="math-container">$$\lim\limits_{r\rightarrow\infty} u_0(t)=0\implies u_0(t)=0;$$</span> however, <span class="math-container">$$\lim\limits_{r\rightarrow\infty}{u_1(t)\over r}=0 $$</span> holds automatically and does not require that <span class="math-container">$u_1(t)$</span> be equal to zero in general.</p>
|
Physics
|
|homework-and-exercises|kinematics|
|
Why is the 2nd approach incorrect?
|
<p>In the second approach you have implicitly assumed that after the object has fallen for a time <span class="math-container">$\sqrt \beta$</span> it has speed <span class="math-container">$u$</span> i.e. you are assuming that its speed profile as it descends is the reverse of its speed profile as it ascends. This is incorrect because its deceleration as it ascends is <span class="math-container">$12 \space m/s^2$</span> whereas its acceleration as it descends is only <span class="math-container">$8 \space m/s^2$</span>.</p>
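This asymmetry is easy to see with numbers. A sketch with an assumed launch speed (the problem's actual <span class="math-container">$u$</span> and <span class="math-container">$\beta$</span> are not given here) and the two accelerations quoted above:

```python
import math

# Assumed launch speed for illustration; the 12 and 8 m/s^2 are the
# decelerations/accelerations quoted in the answer.
u, a_up, a_down = 12.0, 12.0, 8.0   # m/s, m/s^2, m/s^2

t_up   = u / a_up                   # time to reach the top
h      = u**2 / (2 * a_up)          # height reached
v_down = math.sqrt(2 * a_down * h)  # speed on returning to the start

print(t_up, h, v_down)              # v_down < u: the profile is NOT reversed
```

With these numbers the object returns at about 9.8 m/s, not 12 m/s, so assuming the descent mirrors the ascent gives the wrong speed at every instant.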
|
Physics
|
|particle-physics|nuclear-physics|standard-model|hydrogen|protons|
|
Can the protium nucleus be in an excited state?
|
<p>As other answers have noted, a proton can be excited into many other states such as the <span class="math-container">$\Delta(1232)$</span> or N<span class="math-container">$(1520)$</span>, but for clarity it is perhaps worth noting that a protium atom cannot have its nucleus in an excited state.</p> <p>All proton excited states have lifetimes <span class="math-container">$\sim 10^{-23}$</span> seconds, far shorter than the timescale (<span class="math-container">$\sim\hbar/m_ec^2\alpha \sim 10^{-19}$</span> seconds) for stable electron orbitals to form. Long before a stable protium atom with an excited proton nucleus can form, that excited nucleus will have decayed.</p>
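The timescale comparison above can be checked directly from SI constants; the <span class="math-container">$10^{-23}$</span> s excited-state lifetime is taken as quoted:

```python
# Rough comparison of the two timescales quoted above.
hbar  = 1.054571817e-34   # J s
m_e   = 9.1093837015e-31  # kg
c     = 2.99792458e8      # m/s
alpha = 7.2973525693e-3   # fine-structure constant

t_orbital = hbar / (m_e * c**2 * alpha)  # atomic timescale, ~1.8e-19 s
t_excited = 1e-23                        # s, typical Delta / N* lifetime

# The excited proton decays roughly 10,000 times faster than an
# electron orbital could even form:
print(t_orbital / t_excited)
```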
|
Physics
|
|electromagnetism|electromagnetic-radiation|potential|
|
Solving the Inhomogeneous Wave Equation
|
<blockquote> <p>... <em>the</em> solution of the inhomogeneous wave equation: <span class="math-container">$$\frac{1}{c^2} \frac{\partial^2{A^{\nu}}}{\partial t^2}- \nabla^2{A^{\nu}} = \frac{-4\pi}{c}J^{\nu}.\tag{1}$$</span></p> </blockquote> <p>To any particular solution, you can always add solutions to the homogeneous equation <span class="math-container">$$\frac{1}{c^2} \frac{\partial^2{A^{\nu}}}{\partial t^2}- \nabla^2{A^{\nu}} = 0.$$</span> and arrive at another solution. This is one thing to watch out for.</p> <p>One general method you could try is to Fourier transform. It is probably instructive to see how this method sort of succeeds and sort of fails...</p> <p>Anyways, Fourier transforming Eq. (1) gives <span class="math-container">$$ \tilde A^\nu(\vec k, \omega) (-\omega^2/c^2 + |\vec k|^2) = -\frac{4\pi}{c}\tilde J^\nu(\vec k, \omega)\;, $$</span> which can be rearranged and Fourier transformed back to find: <span class="math-container">$$ A^\nu(\vec x, t)= {4\pi c}\int \frac{d^3k d\omega}{(2\pi)^4} e^{i\vec k \cdot \vec x-i\omega t} \frac{\tilde J^\nu(\vec k,\omega)}{\omega^2 - c^2|\vec k|^2} $$</span> <span class="math-container">$$ = \int d^3 x' dt' J^{\nu}(\vec x', t')\underbrace{{4\pi c}\int \frac{d^3k d\omega}{(2\pi)^4} e^{i\vec k \cdot \vec x-i\omega t}e^{-i\vec k\cdot \vec x'+i\omega t'} \frac{1}{\omega^2 - c^2|\vec k|^2}}_{this}\;. $$</span></p> <p>Now you just have to show that the thing marked "this" above is the same as the thing you are calling: <span class="math-container">$$ \frac{1}{\pi c}\frac{1}{R^2} $$</span></p>
|
Physics
|
|quantum-mechanics|operators|hilbert-space|notation|observables|
|
Showing the Variance of an observable in a determinate state is always zero
|
<p>In chapter one (eq. 1.11), Griffiths defines the variance as: <span class="math-container">$$\sigma^2=\langle(\Delta j)^2\rangle,$$</span> where <span class="math-container">$\Delta j$</span> is the amount of "spread" in the distribution: <span class="math-container">$$\Delta j=j-\langle j\rangle.$$</span> At this point in the text Griffiths is just talking about the meaning of variance for any statistical distribution; he has not yet brought in quantum mechanical observables. Here <span class="math-container">$j$</span> can be some ordinary observable like a person's age, or something like the classical position or momentum of a particle. Even though we must replace classical observables (numbers) with operators in quantum mechanics, the theory is still statistical, and the rules of probability and statistics are independent of quantum mechanics and still hold when we transfer to the quantum realm. So in your case, Griffiths wants to find the variance for a quantum mechanical observable, and from the definition of variance we have: <span class="math-container">$$\sigma^2=\langle (\hat Q-\langle \hat Q\rangle)^2\rangle.$$</span> This just stems from the basic formulation of the concept of variance in probability and statistics. Now we need to know what is meant by <span class="math-container">$\langle \hat Q\rangle$</span>.</p> <p>This quantity is known as the <em>expectation value</em> of the observable <span class="math-container">$\hat Q$</span>. The expectation value is the average of the observable over an ensemble of identically prepared quantum systems; it is calculated according to: <span class="math-container">$$\langle\hat Q\rangle=\langle\Psi |\hat Q|\Psi\rangle=\int_{-\infty}^{+\infty}\Psi^*\hat Q\Psi\; dx.$$</span> So the inner products come in through the definition of the expectation value of an observable.</p> <p>So what is the value of <span class="math-container">$\langle\hat Q\rangle$</span>? 
Now Griffiths is talking in this section of the book about <em>determinate</em> states, which are the <em>stationary</em> states or <em>eigenstates</em> of the observable <span class="math-container">$\hat Q$</span>. Since the states are eigenstates we know that <span class="math-container">$\hat Q\Psi=q\Psi$</span>, where <span class="math-container">$q$</span> is the eigenvalue. So we have that <span class="math-container">$$\langle\hat Q\rangle=\langle\Psi|\hat Q|\Psi\rangle=q\langle\Psi|\Psi\rangle=q,$$</span> where we have used the above eigenvalue equation and the normalization condition <span class="math-container">$\langle\Psi|\Psi\rangle=1$</span>. So we can replace <span class="math-container">$\langle\hat Q\rangle$</span> with simply <span class="math-container">$q$</span> in our variance calculation, to which we are now ready to return.</p> <p>So we must find: <span class="math-container">$$\begin{align} \sigma^2&= \langle(\hat Q-q)^2\rangle \\ &=\langle\Psi|(\hat Q-q)^2|\Psi\rangle\\ &=\langle\Psi|(\hat Q-q)(\hat Q-q)|\Psi\rangle\\ &=\langle(\hat Q-q)\Psi|(\hat Q-q)\Psi\rangle \end{align}$$</span> In the last step we used the fact that the operator <span class="math-container">$(\hat Q-q)$</span> is <em>Hermitian</em>, i.e. <span class="math-container">$(\hat Q-q)^{\dagger}=(\hat Q-q)$</span>, and that for any operator and ket: <span class="math-container">$(\hat O|\alpha\rangle)^{\dagger}=\langle\alpha|\hat O^{\dagger}=\langle\hat O^{\dagger}\alpha|$</span>.</p> <p>So all we need now is to understand <span class="math-container">$|(\hat Q-q)\Psi\rangle$</span>: <span class="math-container">$$(\hat Q-q)\Psi=\hat Q\Psi-q\Psi=q\Psi-q\Psi=0,$$</span> where once again the eigenvalue equation <span class="math-container">$\hat Q\Psi=q\Psi$</span> was used, since the <span class="math-container">$\Psi$</span> are stationary or determinate states. 
Thus we have that: <span class="math-container">$$\sigma^2=\langle 0|0\rangle=0.$$</span> This is basically a statement of the fact that determinate states are such that measurements on the system always return the eigenvalue of the corresponding operator.</p>
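The same statement holds in any finite-dimensional toy model, which makes for a quick numerical sanity check. A sketch with a randomly generated Hermitian matrix standing in for <span class="math-container">$\hat Q$</span>:

```python
import numpy as np

# Numerical illustration: in an eigenstate of a Hermitian operator Q,
# the variance <Q^2> - <Q>^2 vanishes. Random 4x4 Hermitian Q as a stand-in.
rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
Q = (A + A.conj().T) / 2                  # Hermitian by construction

vals, vecs = np.linalg.eigh(Q)
psi = vecs[:, 0]                          # normalized eigenstate, eigenvalue vals[0]

expQ  = np.real(psi.conj() @ Q @ psi)     # <Q> = q
expQ2 = np.real(psi.conj() @ Q @ Q @ psi) # <Q^2> = q^2
print(expQ2 - expQ**2)                    # ~0 up to floating-point rounding
```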
|
Physics
|
|rotational-dynamics|reference-frames|angular-momentum|momentum|
|
Can a force acting on a single point of a resting, freely movable body cause it to spin without causing translational movement?
|
<p>Consider a system consisting of an incoming projectile (a point with mass m) and a rigid rod (a one-dimensional segment with mass M). For the sake of simplicity let the problem be two dimensional, so that the motion is entirely in the <span class="math-container">$x,y$</span> plane; otherwise the rod is unconstrained, i.e. it is not fastened at any point and is free to move in any fashion.</p> <p>We now ask: what happens when the projectile strikes the rod, imparting an impulse at some point along the rod's extension, with particular emphasis on the conservation of momentum?</p> <p>The situation is that the incoming projectile impinges on the rod, imparting to it an impulse while merging with it, so that the two bodies are now one. Since the system consists of the projectile and rod, we have two masses, each with its own linear and angular momenta (depending upon the coordinates chosen).</p> <p>Before the collision the projectile has some angular momentum about the origin of the coordinates so chosen. The rod, on the other hand, is stationary and has no angular momentum. After the collision, the combined rod/projectile system has two kinds of angular momentum that sum to a net total, viz. the angular momentum due to rotation about its center of mass and the angular momentum associated with the translation of the center of mass itself. The principle of conservation of angular momentum demands that the angular momentum of the projectile before the collision equal the angular momentum of the combined rod/projectile system after the collision.</p> <p>This analysis assumes coordinates such that each mass can be characterized entirely in terms of its angular momentum. We could have chosen other coordinates where one would have had to introduce both linear and angular momentum; however, each would still be separately conserved and the resultant motion identical, i.e. 
the rod would move away from its initial location with both rotational and translational motion.</p> <p>Do there ever arise cases where a rigid body experiences a net external force and exhibits rotation without translation? The answer is <em>definitely not</em>. The rigidity of the body does not allow for pure rotation without translation in the absence of some force of constraint. This can be seen clearly by counting coordinates: whereas the configuration manifold of the unconstrained planar body is three dimensional, requiring three numbers to specify its motion, viz. <span class="math-container">$x,y,\theta$</span>, a purely rotating body requires only one coordinate to specify its motion entirely, i.e. an angle <span class="math-container">$\theta$</span> will suffice. Thus, there must be equations of constraint to eliminate the extra coordinates.</p> <p>I must gratefully acknowledge user Zaph, as their stubborn persistence, in what they knew to be right, was a guiding light for the resolution of the problem.</p>
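The conservation argument above can be worked out with numbers. A minimal sketch of the perfectly inelastic projectile-rod collision, with all masses, lengths, and speeds assumed for illustration:

```python
# Projectile strikes and sticks to a free rod: momentum and angular
# momentum about the combined center of mass. All numbers assumed.
m, M, L = 0.1, 1.0, 1.0   # projectile mass (kg), rod mass (kg), rod length (m)
v0 = 5.0                  # projectile speed, perpendicular to the rod (m/s)
d  = 0.3                  # impact distance from the rod's center (m)

m_tot = m + M
d_cm = m * d / m_tot      # combined center of mass shifts toward the impact point

# Moment of inertia about the combined center of mass (rod + embedded point mass):
I = M * L**2 / 12 + M * d_cm**2 + m * (d - d_cm)**2

v_cm  = m * v0 / m_tot            # linear momentum conservation
omega = m * v0 * (d - d_cm) / I   # angular momentum conservation about the new CM

print(v_cm, omega)  # both nonzero: the rod translates AND rotates afterwards
```

Whenever the impact is off-center (`d != 0`) both `v_cm` and `omega` come out nonzero, which is the point of the answer: the struck free rod cannot spin without also translating.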
|
Physics
|
|quantum-mechanics|special-relativity|quantum-entanglement|measurements|
|
Entangled particles and the Andromeda paradox experiment
|
<p>Without disagreeing with anything in Eric's answer and comment, I will add a bit in the hope of clarifying the situation.</p> <ol> <li>The issue of simultaneity of measurement does not enter into the equation at all. Strictly speaking, most would say that it is not even possible to determine that two entangled particles were measured at the same time. That is simply because the detections cannot be resolved well enough using today's technology - even with the incredible advances that have been made in recent years. But even if you could measure simultaneously, theory says that nothing special happens at that point. Simultaneous or not, the results appear the same for entangled systems.</li> </ol> <p>If you want to read about entanglement experiments where the measurements are as near-simultaneous as possible, I would recommend studying what is called the <a href="https://en.wikipedia.org/wiki/Hong%E2%80%93Ou%E2%80%93Mandel_effect" rel="nofollow noreferrer">Hong-Ou-Mandel effect</a>. In experiments, it is often called the HOM dip and is characterized by a graph with a dip in the middle. Here is the <a href="https://picture.iczhiku.com/resource/paper/sYiDyJukartfwMMc.pdf" rel="nofollow noreferrer">original HOM paper (see Figure 2)</a>. But please understand that this subject - measuring time at the level of femtoseconds or even attoseconds - is extremely complex for even the most well-studied (which I am not lol).</p> <ol start="2"> <li><p>Likewise, the issue of moving reference frames does not enter into the equation at all. Nor does the direction of movement (closer or farther) have any observable impact. Many experiments have been performed with entangled photon pairs. Obviously, their velocity is c. But they are measured going in all kinds of directions relative to each other. There is no change to the quantum predictions required for this scenario. 
The same applies to entangled systems of other particle types.</p> </li> <li><p>Your question actually asks whether this experiment can be performed and confirmed. The answer is: sure, it can be done and has been done in many ways. But there is nothing specific to confirm; you did not actually provide a prediction of something you expect to witness one way or the other. Again, there is nothing special to see regarding reference frames or simultaneity for entangled systems.</p> </li> </ol> <p>If you want to read about entanglement experiments from Earth to space, this has actually been done with satellites in orbit: <a href="https://arxiv.org/abs/1707.00934" rel="nofollow noreferrer">Ground-to-satellite quantum teleportation</a></p> <ol start="4"> <li>Not only does special relativity <em>not</em> enter into the quantum predictions in any way, neither does the order of measurement - which might otherwise lead you to believe the earlier measurement is the cause and the later one is the effect. In the so-called "delayed-choice" experiments on entangled pairs, cause and effect appear to run in reverse. Keep in mind that most interpretations of QM do not consider this to be evidence of the future changing the past; but certainly there is no apparent difference when the order is reversed.</li> </ol> <p>If you want to know more about delayed choice: <a href="https://arxiv.org/abs/1407.2930" rel="nofollow noreferrer">Delayed-choice gedanken experiments and their realizations</a></p> <p>While it may not be entirely clear how these references apply to your question, they are the experimental pieces that, when put together, provide the answers you seek. And the experimental results match the quantum mechanical theory extremely well.</p>
|
Physics
|
|electromagnetism|simulations|
|
Determining particle position in a magnetic field given initial conditions
|
<p>Newton's second law of motion is one way to derive a set of second order differential equations known as the equations of motion. We can write Newton's second law as: <span class="math-container">$$\vec F=m\vec a(t)=m{d^2\vec r(t)\over dt^2},$$</span> where <span class="math-container">$\vec a(t)$</span> and <span class="math-container">$\vec r(t)=x(t)\mathbf{\hat i}+y(t)\mathbf{\hat j}+z(t)\mathbf{\hat k}$</span> are the acceleration and position respectively. The velocity is given by: <span class="math-container">$$\vec v(t)=x'(t)\mathbf{\hat i}+y'(t)\mathbf{\hat j}+z'(t)\mathbf{\hat k}.$$</span> In the case of a charged particle in a magnetic field: <span class="math-container">$$\vec F=q\vec v\times\vec B\implies m{d^2\vec r(t)\over dt^2}=q\vec v\times\vec B.$$</span> The vector product of <span class="math-container">$\vec v$</span> and <span class="math-container">$\vec B$</span> is given by: <span class="math-container">$$\vec v\times \vec B=(y'(t)B_z-z'(t)B_y)\mathbf{\hat i}-(x'(t)B_z-z'(t)B_x)\mathbf{\hat j}+(x'(t)B_y-y'(t)B_x)\mathbf{\hat k},$$</span> and the second derivative of the position <span class="math-container">$\vec r(t)$</span> is: <span class="math-container">$${d^2\vec r(t)\over dt^2}=x''(t)\mathbf{\hat i}+y''(t)\mathbf{\hat j}+z''(t)\mathbf{\hat k}.$$</span> Putting this all together gives a set of <em>coupled</em> differential equations for the respective vector components: <span class="math-container">$$\begin{align} x''(t)&={q\over m}(y'(t)B_z-z'(t)B_y)\\ y''(t)&={q\over m}(z'(t)B_x-x'(t)B_z)\\ z''(t)&={q\over m}(x'(t)B_y-y'(t)B_x) \end{align}$$</span> Now armed with the initial conditions for the problem, viz. 
the initial positions <span class="math-container">$x(0)=x_0$</span>, <span class="math-container">$y(0)=y_0$</span>, <span class="math-container">$z(0)=z_0$</span>, and velocities <span class="math-container">$x'(0)=v_{x0}$</span>, <span class="math-container">$y'(0)=v_{y0}$</span>, <span class="math-container">$z'(0)=v_{z0}$</span>, of the charged particle you can solve for the position as a function of time.</p>
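For a numerical simulation, the coupled system above can be integrated with any standard ODE stepper. A minimal sketch using a hand-rolled RK4 step, with an assumed uniform field <span class="math-container">$\vec B=(0,0,1)$</span> and <span class="math-container">$q/m=1$</span> in illustrative units:

```python
import numpy as np

# Integrate r'' = (q/m) v x B with a simple RK4 stepper.
# Field, charge-to-mass ratio, and initial conditions are assumed.
q_over_m = 1.0
B = np.array([0.0, 0.0, 1.0])

def deriv(s):
    v = s[3:]
    a = q_over_m * np.cross(v, B)   # acceleration from the Lorentz force
    return np.concatenate([v, a])

s = np.array([0.0, 0.0, 0.0, 1.0, 0.0, 0.2])  # x, y, z, vx, vy, vz
dt = 1e-3
for _ in range(20000):              # integrate to t = 20
    k1 = deriv(s)
    k2 = deriv(s + dt/2*k1)
    k3 = deriv(s + dt/2*k2)
    k4 = deriv(s + dt*k3)
    s = s + dt/6*(k1 + 2*k2 + 2*k3 + k4)

# The magnetic force does no work, so the speed must stay |v0| = sqrt(1.04);
# the trajectory is a helix: a circle in the x-y plane plus a drift along z.
print(np.linalg.norm(s[3:]))
```

Conservation of speed is a convenient correctness check on the integrator, since <span class="math-container">$\vec F\cdot\vec v=q(\vec v\times\vec B)\cdot\vec v=0$</span>.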
|
Physics
|
|electromagnetism|gauss-law|conductors|
|
How does the electric field behave at the surface of a realistic conductor?
|
<p>What you ask is essentially how the transition occurs from a field outside a conductor to the field-free region inside the conductor (at least in the drawings it seems that the vertical jumps are at the inner and outer surfaces of a conductive spherical wall).</p> <p>First of all: the field inside a real-world conductor is not zero <em>at all</em>, because there you are between the positive cores of atoms, with negative electron wave mixtures all around you, and the resulting fields can fluctuate strongly. But on average they are zero, that much is true.</p> <p>If you then come close to the surface, interestingly, there are two things that can happen. Let's assume that we have negative free-flowing charge, like the electrons in most metals (for holes as the free charges the modifications are straightforward). We then have either:</p> <ol> <li>A <strong>depletion layer</strong>, if the external <span class="math-container">$E$</span>-field points <em>outward</em> from the surface. In this layer the electrons are pushed away to the inner parts of the conductor by the field, which penetrates into that part of the conductor. Only the atom cores are left, creating a positively charged layer with finite thickness. You can calculate how the field now transitions in a nice continuous way, and you can calculate how much thickness you actually need (if you look up the atom density for the metal involved).</li> <li>An <strong>accumulation layer</strong>, if the external <span class="math-container">$E$</span>-field points inward to the surface. In that layer the electrons crowd up, so they create a negatively charged layer with finite thickness. Again the <span class="math-container">$E$</span>-field will gradually decrease, since here again we have no infinitely thin charge layers. But here it is less trivial to compute how thick the accumulation layer will become to create the required amount of charge! 
Unlike the atom cores, which are fixed, the electrons are governed by the field equations of quantum mechanics (at least that is the distinction in the semi-classical description that is usually applied in solid state physics). This often results in accumulation layers being <em>thinner</em> than depletion layers, for situations where the external fields are equal but opposite.</li> </ol> <p>I leave it to you to look up the details, and especially the interesting implications for semiconductor devices (like diodes, bipolar transistors, and MOSFETs).</p>
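To see why these layers can usually be idealized as infinitely thin sheets, here is a crude classical estimate of the required layer thickness for a metal, using an assumed external field and copper's conduction-electron density (real screening is quantum mechanical and set by the Thomas-Fermi length, on the order of 0.05 nm, but the classical estimate makes the point):

```python
# Back-of-envelope: thickness of the charged surface layer of a metal.
# The induced surface charge density is sigma = eps0 * E; dividing by the
# free-carrier charge density gives a length scale. Field value is assumed.
eps0 = 8.854e-12   # F/m, vacuum permittivity
e    = 1.602e-19   # C, elementary charge
n    = 8.5e28      # 1/m^3, conduction-electron density of copper
E    = 1e6         # V/m, assumed strong external field

sigma = eps0 * E       # induced charge per unit area (Gauss's law)
t = sigma / (e * n)    # layer thickness if every carrier in it responds
print(t)               # ~6.5e-16 m: far below an atomic spacing
```

Even for a very strong field the classical thickness comes out far below an atomic spacing, which is why textbook electrostatics treats the surface charge as a sheet and draws the field with a sharp jump.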
|
Physics
|
|homework-and-exercises|newtonian-mechanics|forces|work|power|
|
Power-time graphs for constant speed
|
<p>By definition <span class="math-container">$P = \frac{dW}{dt}$</span>, where <span class="math-container">$W$</span> is the work. Then <span class="math-container">$P = F \cdot \frac{ds}{dt} = F \cdot v$</span>, where <span class="math-container">$v$</span> is the velocity.</p> <p>So, if the force acting on the body, the speed, and the angle between the force and velocity vectors do not change, then the power is constant. Here we are considering the power due to the engine, so the force in question is the one provided by the engine.</p> <p>In the first case, if there is friction the engine must provide a force to counteract the frictional force, so the power is constant (assuming pure rolling of the tires). But if there are no frictional forces, the engine need not provide any force, as there is nothing to counteract; hence in case (i) the power is <span class="math-container">$0$</span> (as <span class="math-container">$F = 0$</span>, <span class="math-container">$F \cdot v = 0$</span>).</p> <p>In the second case, the component of gravity parallel to the road must be countered to go up at constant speed, therefore the force due to the engine equals the parallel component of gravity; hence, assuming the slope does not vary, the power is constant (<span class="math-container">$F$</span> constant, <span class="math-container">$v$</span> constant, therefore <span class="math-container">$F \cdot v$</span> constant).</p>
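The second case is easy to put in numbers. A sketch with assumed values for the car's mass, the slope, and the constant speed:

```python
import math

# Numeric sketch of case (ii): engine power to climb a constant slope
# at constant speed. All values assumed for illustration.
m, g  = 1000.0, 9.8           # car mass (kg), gravity (m/s^2)
theta = math.radians(5.0)     # slope angle
v     = 20.0                  # constant speed along the road (m/s)

F_engine = m * g * math.sin(theta)  # balances gravity's parallel component
P        = F_engine * v             # P = F * v, constant in time
print(P / 1000)                     # power in kW
```

Since none of `F_engine`, `v`, or the angle between them changes, the power-time graph is a horizontal line at this value (about 17 kW here).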
|
Physics
|
|general-relativity|black-holes|event-horizon|singularities|
|
Range that the Schwarzschild metric is valid
|
<blockquote> <p>The Schwarzschild metric is the metric calculated from the field equation outside of the black hole. This condition of region (outside of the matter) was the reason why we could use <span class="math-container">$T_{μν}=0$</span>.</p> </blockquote> <p>You mean the Schwarzschild exterior or outer solution. I translate from <a href="https://de.wikipedia.org/wiki/Schwarzschild-Metrik" rel="nofollow noreferrer">Schwarzschild Metrik</a> (the German version is different from the English one):</p> <p>“The <strong>full Schwarzschild model</strong> consists of the <strong>outer</strong> Schwarzschild solution for the space outside the mass distribution and the <strong>inner</strong> Schwarzschild solution, which solves the field equations inside the mass distribution with the additional assumption that the mass is a homogeneous fluid.”</p> <blockquote> <p>But we can tell some properties of the singularity of the black hole, which is at r=0, from the Schwarzschild metric?</p> </blockquote> <p>No, we cannot. However, it has become established to assume that after gravitational collapse all matter disappears into a so-called “singularity” (which is not a part of the spacetime manifold!), leaving an empty (<span class="math-container">$T_{\mu\nu}=0$</span>) spherically symmetric universe.</p> <blockquote> <p>But the singularity is obviously inside the range of matter.</p> </blockquote> <p>You are right. To study what the singularity is, one should better use the full Schwarzschild solution at the Buchdahl limit (<span class="math-container">$r_{S}/R=8/9$</span>). One can easily calculate that at the center the energy density is finite, but the pressure diverges as <span class="math-container">$4/\kappa~r^{-2}$</span> (<span class="math-container">$\kappa \equiv 8\pi G/c^4$</span>). It looks like it is the pressure and not the geometry that behaves singularly there. By the way, <span class="math-container">$r=0$</span> is not a point but a two-sphere in the limit of zero surface area.</p>
|
Physics
|
|kinematics|velocity|differentiation|calculus|speed|
|
Average velocity showing different results
|
<p>If you are trying to show that the temporally averaged function is equal to the spatially averaged function, <span class="math-container">$$\langle v\rangle_t=\langle v\rangle_s,$$</span> then several of the comments and the other answer apply: integrating functions of different arguments (<span class="math-container">$v(t)$</span>, <span class="math-container">$v(s)$</span>) over their independent variables (<span class="math-container">$t$</span>, <span class="math-container">$s$</span>) generally yields different results and should not be surprising.</p> <p>From your comments, however, it appears that you are trying to apply the <span class="math-container">$t\to s$</span> mapping on the time-averaged quantity and expecting that it also will be the spatially averaged quantity: <span class="math-container">$$\varphi\left(\langle v(t)\rangle_t\right)_s=\varphi\left(\frac{1}{2}at\right)_s=\sqrt{\frac{sa}{2}}\neq\langle v(s)\rangle_s$$</span> which is off by a factor of <span class="math-container">$4/3$</span>.</p> <p>The reason this doesn't work is essentially the same as in the first paragraph of my answer: you are averaging two different functions over two different regions. Pictorially, the time-dependent velocity curve is in green and the spatial-dependent velocity in purple (using <span class="math-container">$a=1/2$</span> in both so that <span class="math-container">$v(t)=t/2$</span> and <span class="math-container">$v(s)=\sqrt{s}$</span>).
Since the shapes of the curves are different, with one always larger than the other, the averages will necessarily be different also.<br /> <a href="https://i.stack.imgur.com/lzvKo.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/lzvKo.png" alt="enter image description here" /></a><br /> (<a href="https://www.symbolab.com/graphing-calculator" rel="nofollow noreferrer">via Symbolab</a>)<br /> In order to transform the temporal average to make it equal to the spatial average, you either have to transform <span class="math-container">$v(t)$</span> into <span class="math-container">$v(s)$</span> before taking the average <em>or</em> adjust the final result by accounting for the difference between the two curves.</p>
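For the skeptical reader, the 4/3 factor can be confirmed numerically with a midpoint-rule average of each curve (the acceleration and duration are arbitrary choices):

```python
# Compare the time average of v(t) = a*t with the spatial average of
# v(s) = sqrt(2*a*s) over the same stretch of uniformly accelerated motion.
a, T = 0.5, 10.0                       # arbitrary acceleration and duration
S = 0.5 * a * T**2                     # distance covered in time T
n = 100000                             # midpoint-rule subdivisions

# <v>_t = (1/T) * integral of a*t dt  ->  a*T/2
v_avg_t = sum(a * (i + 0.5) * T / n for i in range(n)) / n

# <v>_s = (1/S) * integral of sqrt(2*a*s) ds  ->  (2/3)*sqrt(2*a*S)
v_avg_s = sum((2 * a * (i + 0.5) * S / n) ** 0.5 for i in range(n)) / n

ratio = v_avg_s / v_avg_t              # converges to 4/3
```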
|
Physics
|
|newtonian-gravity|rotational-dynamics|planets|centrifugal-force|
|
What is the hydrostatic shape of an ideal rotating planet?
|
<p>Check out the nice illustrations of Jos Leys <a href="https://www.josleys.com/show_gallery.php?galid=313" rel="nofollow noreferrer">The shape of Planet Earth</a> for Ghys’ presentation.</p> <p>It’s actually an illustration of symmetry breaking. Yes, the problem is symmetric, but this only means that the set of solutions is stable by the symmetry. You can only conclude that an individual solution is symmetric if it is unique. Symmetry breaking therefore arises when you have a multiplicity of solutions.</p> <p>In your case, the error in your reasoning is to assume that there is only one possible shape for the rotating fluid. The idea is that when it spins sufficiently fast, you get a bifurcation and many different shapes are possible. The set of all the possible shapes is invariant by rotation. Furthermore, paradoxically, the Maclaurin solution which is axisymmetric and which is the one you have in mind that flattens out, ceases to be stable at a finite value of angular momentum.</p> <p>Note that at even higher rotation rates you can have even less symmetric stable configurations (pear shaped, etc.).</p> <p>Hope this helps.</p>
|
Physics
|
|quantum-mechanics|quantum-entanglement|bells-inequality|non-locality|
|
Does local realism imply entangled photons are equal (or opposite)?
|
<p>Here are the issues to keep in mind.</p> <ol> <li><p>You've done a pretty good job of addressing the 33% (1/3) vs 25% (1/4) arithmetic, which is one of the hardest to understand. That being for the case that the photons' hidden variables are exactly the same (or opposite, but let's ignore that for simplicity). So good job.</p> </li> <li><p>You are absolutely correct that in a hidden variables theory, the left and right photons potentially are <em>not</em> identical. In your example, that coincides with your (000,000), (000,001), (000,010) cases (and variations). In that case, you <em>might</em> be able to wiggle some changes to bring the hidden variables minimum down from 33% to closer to the actual of 25%. But...</p> </li> <li><p>There is also the requirement of "perfect" correlations with this scenario. Measuring both photons at the same angle settings - any same settings - always produces a matching (100%) result! That of course negates point 2. In your example, such cases as (000,001) or (000,010) can't/don't physically occur. You only see (000,000), (010,010), etc.</p> </li> </ol> <p>Answering your question then: Yes, entangled photon pairs are always equal (or opposite).</p> <p>And in fact, it was the perfect correlations that were discovered first - in 1935, by Einstein and 2 others in the paper known as EPR. Bell discovered point 1 around 1964. They both must be met. So local realism cannot model the actual results of experiments.</p> <p>A few more points. Around 1988, an important new paper was written introducing the <a href="https://www.drchinese.com/David/Bell-MultiPhotonGHZ.pdf" rel="nofollow noreferrer">GHZ Theorem</a>. It also refutes local realism, and one of its authors won a Nobel for this and other work on entanglement. In GHZ, statistical probabilities are not involved. In every single GHZ run, Quantum Mechanics predicts the exact opposite of the local realistic assumption.
Needless to say, experiment supports QM.</p> <p>If you find this useful for anything, I also have a web page that loosely mirrors the video example you provided. It is called <a href="https://drchinese.com/David/Bell_Theorem_Easy_Math.htm" rel="nofollow noreferrer">Bell's Theorem with Easy Math</a>.</p>
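For completeness, point 1 above can be checked by brute-force enumeration in a few lines. This is a sketch of the counting argument only (the three settings correspond to the usual 0°/120°/240° example), not a simulation of any experiment:

```python
from itertools import product

# A hidden variable is a triple of predetermined outcomes, one per setting.
# Both photons carry the same triple (the "perfect correlations" requirement).
def match_probability(outcomes):
    """P(same result) when the two detector settings are chosen
    uniformly among the 3 distinct unordered pairs of settings."""
    pairs = [(0, 1), (0, 2), (1, 2)]
    hits = sum(outcomes[i] == outcomes[j] for i, j in pairs)
    return hits / len(pairs)

probs = [match_probability(o) for o in product([0, 1], repeat=3)]
min_prob = min(probs)   # 1/3: no instruction set can do better
```

Every one of the eight instruction sets yields a match probability of at least 1/3, while quantum mechanics predicts cos²(120°) = 1/4 for these settings.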
|
Physics
|
|quantum-mechanics|energy|frequency|wave-particle-duality|
|
Why Does Planck's Relation $E=hf$ Imply a Linear Relationship Only for Sinusoidal Frequency Bases?
|
<p>For convenience I will be using the reduced Planck constant <span class="math-container">$\hbar=\frac{h}{2\pi}$</span> and angular frequency <span class="math-container">$\omega=2\pi f$</span> in this answer. Secondly, if you'll indulge me, it's easier to talk about complex exponentials (i.e. functions of the form <span class="math-container">$c(t)=e^{i\omega t}$</span> or <span class="math-container">$c(t)=e^{-i\omega t}$</span> ) instead of sines and cosines. One can show that these have a period <span class="math-container">$T=\frac{2\pi}{\omega}$</span>. Of course, sines and cosines can be written in terms of these two functions via Euler's formula.</p> <p>Thus the question becomes: Why does the Planck relation <span class="math-container">$E=\hbar \omega$</span> hold for particles (note: this relation holds for all particles, not just photons) with a <span class="math-container">$e^{-i\omega t}$</span> time dependence? Why doesn't a triangle or square wave with angular frequency <span class="math-container">$\omega$</span> have an energy <span class="math-container">$E=\hbar \omega$</span> as well?</p> <p>In the Schrödinger picture, a quantum mechanical system with a Hamiltonian <span class="math-container">$\hat{H}$</span> and a state <span class="math-container">$|\psi(t)\rangle$</span> evolves over time according to the Schrödinger equation: <span class="math-container">$$\hat{H}|\psi(t)\rangle=i\hbar\frac{d}{dt}|\psi(t)\rangle $$</span> Furthermore, if a system has a well-defined energy <span class="math-container">$E$</span>, then the state is an eigenstate of the Hamiltonian such that: <span class="math-container">$$\hat{H}|\psi (t)\rangle=E|\psi(t)\rangle = i\hbar\frac{d}{dt}|\psi(t)\rangle $$</span> This differential equation is solved by a state <span class="math-container">$|\psi(t)\rangle =e^{-i\frac{E}{\hbar}t}|\psi(0)\rangle$</span> (check for yourself!), where <span class="math-container">$|\psi(0)\rangle$</span> is the state of the system at 
time <span class="math-container">$t=0$</span>. Thus, we can see that the state oscillates with an angular frequency <span class="math-container">$\omega=\frac{E}{\hbar}$</span>, giving us Planck's famous relation. What happens when <span class="math-container">$|\psi(t)\rangle$</span> is <em>not</em> of this form? In that case <span class="math-container">$|\psi(t)\rangle$</span> is not an eigenstate: <span class="math-container">$$i\hbar\frac{d}{dt}|\psi(t)\rangle \neq E|\psi(t)\rangle$$</span> In other words, the system does not have a well defined energy. As you correctly point out in the comments, we can decompose any function into sines and cosines of different frequencies. In the same vein, we can decompose any state into eigenstates of the Hamiltonian: <span class="math-container">$$|\psi(t)\rangle=A_1e^{-i\frac{E_1}{\hbar}t}|\psi_1(0)\rangle + A_2e^{-i\frac{E_2}{\hbar}t}|\psi_2(0)\rangle + A_3e^{-i\frac{E_3}{\hbar}t}|\psi_3(0)\rangle + \dots$$</span> In this sense, even though the state doesn't have a single definite energy, the relation between the energy and the frequency is still 'linear' as you describe it. If you double the frequency of every eigenstate, then the energy of every eigenstate will also be doubled. But to reiterate: these states do <em>not</em> have a definite energy. I hope this answers your question.</p> <p>Some final notes:</p> <ul> <li>It is conceivable to construct a universe in which the energy eigenstates <em>don't</em> depend on time according to <span class="math-container">$e^{-i\omega t}$</span>. In that case, the Schrödinger equation would be of a different form, perhaps with triangle or square waves as solutions like you propose. However, this is not the universe we live in.</li> <li>This of course does just kick the can further down the road: why is the Schrödinger equation the way it is? It is possible to motivate the form from certain symmetries (i.e. 
the Hamiltonian, and thus the energy, is related to time translation symmetry. Thus, it shouldn't be too surprising that there is a relation between energy and frequency), but that goes beyond the scope of this answer.</li> <li>It is often convenient, especially when talking about photons specifically, to use the Heisenberg picture, in which the <em>operators</em> (the fields, the Hamiltonian, the momenta) change over time and the <em>state</em> remains constant. I used the Schrödinger picture mainly for clarity, but suffice it to say the physics are the same.</li> <li>It should go without saying, but historically the relation <span class="math-container">$E=\hbar \omega$</span> was found <em>before</em> the Schrödinger equation. In this case, what we <em>mean</em> by <span class="math-container">$\omega$</span> is 'the angular frequency of a sine/cosine/complex exponential' and not 'the angular frequency of a triangle/square/other wave'. As it happens, this is consistent with reality (and thus subsequent developments in quantum mechanics).</li> </ul>
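As a "check for yourself" aid: the claimed solution of the eigenstate equation can be verified numerically with a finite-difference derivative (units with ħ = 1, and an arbitrary energy and initial amplitude):

```python
import cmath

hbar = 1.0
E = 2.7                                # arbitrary energy eigenvalue
psi0 = 0.6 + 0.8j                      # arbitrary initial amplitude

def psi(t):
    # proposed solution psi(t) = exp(-i*E*t/hbar) * psi(0)
    return cmath.exp(-1j * E * t / hbar) * psi0

t, dt = 1.3, 1e-6
# central-difference approximation of i*hbar * d(psi)/dt
lhs = 1j * hbar * (psi(t + dt) - psi(t - dt)) / (2 * dt)
rhs = E * psi(t)
residual = abs(lhs - rhs)              # vanishes up to O(dt^2)
```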
|
Physics
|
|classical-mechanics|coordinate-systems|hamiltonian-formalism|phase-space|
|
Question about canonical transformation and generating functions
|
<ol> <li><p>Even if we assume that the canonical transformation (CT) (9.4) exists, it doesn't mean that we explicitly know its form.</p> </li> <li><p>Concerning OP's 1st yellow quote: Ref. 1 is not claiming that <em>all</em> <a href="https://en.wikipedia.org/wiki/Canonical_transformation#Generating_function_approach" rel="nofollow noreferrer">types 1-4 of generating functions</a> exist; only that <em>some</em> (or perhaps a hybrid thereof) exist locally, cf. e.g. <a href="https://physics.stackexchange.com/q/329427/2451">this</a> and <a href="https://physics.stackexchange.com/q/699266/2451">this</a> Phys.SE posts.</p> </li> <li><p><em>Example:</em> The eq. (9.11) with the assumption that the generating function <span class="math-container">$$F(q,Q,p(q,Q,t),P(q,Q,t),t)~=~F_1(q,Q,t) \tag{9.12}$$</span> is of <a href="https://en.wikipedia.org/wiki/Canonical_transformation#Type_1_generating_function" rel="nofollow noreferrer">type 1</a> leads to conditions (9.14), which can help us solve for the explicit form of the CT (9.4), cf. OP's 2nd yellow quote. See also e.g. <a href="https://physics.stackexchange.com/q/228909/2451">this</a> related Phys.SE post.</p> </li> </ol> <p>References:</p> <ol> <li>H. Goldstein, <em>Classical Mechanics;</em> section 9.1.</li> </ol>
|
Physics
|
|newtonian-mechanics|rotational-kinematics|angular-velocity|
|
Angular Velocity and Angular acceleration when the position vector is changing
|
<p>What you are looking for is simply the general formula for acceleration in polar coordinates <span class="math-container">$(r,\theta)$</span>: <span class="math-container">$$\mathbf{r} = r \hat{\mathbf{r}} \\ \dot{\mathbf{r}} = \dot{r}\hat{\mathbf{r}} + r \dot{\theta}\hat{\boldsymbol{\theta}} \\ \ddot{\mathbf{r}} = \left(\ddot{r} - r \dot{\theta}^2\right)\hat{\mathbf{r}} + \left(r \ddot{\theta} + 2\dot{r}\dot{\theta}\right)\hat{\boldsymbol{\theta}}$$</span></p> <p>To show this, all you have to do is to start from the first line and take successive time derivatives of the position vector. Note that <span class="math-container">$\mathrm{d}\hat{\mathbf{r}}/\mathrm{d}\theta = \hat{\boldsymbol{\theta}}$</span> and <span class="math-container">$\mathrm{d}\hat{\boldsymbol{\theta}}/\mathrm{d}\theta = -\hat{\mathbf{r}}$</span>.</p>
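If you would rather not grind through the algebra, the result is easy to spot-check numerically: pick arbitrary smooth r(t) and θ(t), second-difference the Cartesian position, and project onto the polar unit vectors (the test functions below are arbitrary):

```python
import math

def r(t):  return 1.0 + 0.3 * t * t            # arbitrary radial motion
def th(t): return 0.5 * t + 0.2 * math.sin(t)  # arbitrary angular motion

def deriv(f, t, h=1e-4):
    return (f(t + h) - f(t - h)) / (2 * h)

def deriv2(f, t, h=1e-4):
    return (f(t + h) - 2 * f(t) + f(t - h)) / (h * h)

t = 0.8
x = lambda s: r(s) * math.cos(th(s))   # Cartesian components of r*r-hat
y = lambda s: r(s) * math.sin(th(s))

# Cartesian acceleration projected onto the polar unit vectors
ax, ay = deriv2(x, t), deriv2(y, t)
c, s = math.cos(th(t)), math.sin(th(t))
a_r  = ax * c + ay * s                 # component along r-hat
a_th = -ax * s + ay * c                # component along theta-hat

# Closed-form expressions from the formula above
a_r_formula  = deriv2(r, t) - r(t) * deriv(th, t) ** 2
a_th_formula = r(t) * deriv2(th, t) + 2 * deriv(r, t) * deriv(th, t)
```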
|
Physics
|
|cosmology|definition|anthropic-principle|
|
Cosmology - Anthropic Principle
|
<p>From my friend Claude 3: "The anthropic principle refers to the idea that certain fundamental aspects of the universe must be compatible with the existence of intelligent life, since we exist to observe those aspects. There are different formulations of the anthropic principle:</p> <p>Weak Anthropic Principle (WAP): This states that the observed values of physical and cosmological quantities are not equally probable, but rather they are restricted by the requirement that there exist sites where carbon-based life can form and evolve over billions of years. The WAP is essentially a tautological statement - if the universe were not compatible with our existence, we could not exist to observe it.</p> <p>Strong Anthropic Principle (SAP): This makes the much bolder claim that the universe must have those properties which allow life to develop within it at some stage in its evolution. The SAP suggests that the universe's laws and parameters are in some way fine-tuned to allow the eventual emergence of intelligent observers.</p> <p>There are some key differences:</p> <p>The WAP is widely accepted as logically sound, while the SAP is more controversial and metaphysical in nature. The WAP is just a selection effect, while the SAP implies a deeper connection between the universe's laws and the existence of observers. The WAP makes no statement about whether other regions of the universe allow life, only that we exist in a region compatible with life. The SAP suggests the entire universe must be predisposed for life. The SAP can be interpreted either as a constraint on the initial conditions of the universe or as requiring an as-yet unknown universe-creating principle favoring life. So in essence, the WAP is uncontroversial from a scientific perspective, while the SAP ventures into issues of cosmic purpose which many scientists view as outside the realm of science. The anthropic principles try to explain why the universe appears so finely-tuned for life's existence."</p>
|
Physics
|
|electric-circuits|electric-fields|conductors|
|
Does there exist an electric field in a wire not connected to a battery?
|
<blockquote> <p>Does there exists any electric field in the wire ?</p> </blockquote> <p>Yes, viewed on a microscopic scale there exist tiny, and randomly oriented electric fields produced by the electrons. And relatively regular electric fields produced by the fixed protons.</p> <p>(But be careful of taking this too literally as at quantum level the free electrons in a metallic conductor aren't normally found in states with well-defined positions)</p> <blockquote> <p>if there exists electric fields, then why there isn't any electric current ?</p> </blockquote> <p>The electrons (again, speaking in terms of a particle model, not consistent with QM) do move around randomly, in a manner resembling Brownian motion. So at microscopic scale, there are tiny currents. These are responsible for important noise effects in high-sensitivity electronics.</p> <p>But at macroscopic scale there is no net current, because the direction of the electrons' motion is random, averaging out to zero motion when you sum up the effect from all the electrons together, and average over any appreciable period of time.</p> <blockquote> <p>Then the question is why should the loop exist to have the current ?</p> </blockquote> <p>A loop is required even when there is a battery or other voltage source, because it gives the electrons a place to go without piling up and producing their own electric field that would balance out the applied field.</p>
|
Physics
|
|optics|electromagnetic-radiation|visible-light|
|
Physics behind Lambertian reflectors
|
<p>It seems you are worried about situation A in the drawing, where a collection of independent isotropic point sources does indeed create the same amount of outgoing radiation at a grazing angle as in the normal direction.</p> <p><a href="https://i.stack.imgur.com/8vLhN.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/8vLhN.png" alt="radiators" /></a></p> <p>Case A would actually give <em>the same</em> luminous intensity in all directions but <em>higher</em> radiance in the grazing direction. That somewhat confusing distinction is explained in <a href="https://en.wikipedia.org/wiki/Lambert%27s_cosine_law#Details_of_equal_brightness_effect" rel="nofollow noreferrer">Wikipedia</a>. But in any case it differs from Lambertian behavior.</p> <p>The resolution is that the sources are blocking each other, like in situation B in the drawing. There you have essentially the same situation for observers in all directions: from their point of view, any ray will at some point end at one of the radiating elements. This is a bit reminiscent of Olbers' paradox, where at every point in the sky we would see the surface of some distant star <a href="https://en.wikipedia.org/wiki/Olbers%27s_paradox" rel="nofollow noreferrer">[Olbers]</a>.</p> <p>So in case A, the observer from the normal direction would see the sources more spread out, with empty space between them, while in case B both observers see everything filled with sources; the normal observer just looks a bit deeper into the material.</p> <p>Case C shows that surface roughness does the same as the randomly distributed immersed particles in case B. Only case A with its point sources will make the observer see an angle-dependent density of sources, because point sources are always sparse, while the other types fill up the viewing field regardless of the angle from which they are viewed.
(And of course a mirror-smooth surface is excluded because we need incoherent addition of the light, otherwise a pronounced beam direction will result.)</p>
|
Physics
|
|optics|visible-light|reflection|geometric-optics|
|
Object and Image Distance from Image height, Object height, and focal length
|
<p>To begin with,</p> <p><span class="math-container">\begin{align*} & f \text{ represents the Focal Length of the mirror.} \\ & v \text{ represents the Image Distance, which is the distance from the mirror to the image.} \\ & u \text{ represents the Object Distance, which is the distance from the mirror to the object.} \\ & h_i \text{ represents the Image Height.} \\ & h_o \text{ represents the Object Height.} \end{align*}</span></p> <p>As per your question, we are provided with the values of the Object Height, Image Height, and the Focal Length of the mirror.</p> <p>For simplicity's sake, suppose, <span class="math-container">\begin{align} \text{Image Height } (h_{i}) = 10cm \\ \text{Object Height } (h_{o}) = 5cm \\ \text{Focal Length } (f) = 10cm \end{align}</span></p> <p>Then by the magnification formula,</p> <p><span class="math-container">\begin{align} &\text{Magnification } (M) = \frac{h_{i}}{h_{o}} = -\frac{v}{u} \\ &\implies \frac{h_{i}}{h_{o}} = -\frac{v}{u} \\ &\implies \frac{10cm}{5cm} = -\frac{v}{u} \\ &\implies 2 = -\frac{v}{u} \\ &\implies 2u = -v \\ &\implies v = -2u \quad \text{... (i)} \end{align}</span></p> <p>Now by the mirror formula, <span class="math-container">\begin{align} &\frac{1}{f} = \frac{1}{v} + \frac{1}{u} \\ &\implies \frac{1}{10} = \frac{1}{-2u} + \frac{1}{u} \quad \text{(v = -2u, from (i))} \\ &\implies \frac{1}{10} = \frac{1}{u} - \frac{1}{2u} \\ &\implies \frac{1}{10} = \frac{2 - 1}{2u} \\ &\implies \frac{1}{10} = \frac{1}{2u} \\ &\implies u = 5 \\ &∴ u = 5cm \\ &\implies v = -2u = -2 \times (5 cm) = -10 cm \end{align}</span></p> <p>I hope you now understand how this works in a more general sense.</p>
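The same two-step solve can be packaged as a small function (following the sign convention used above, M = h_i/h_o = -v/u; the function name is just illustrative):

```python
def object_image_distances(h_i, h_o, f):
    """Solve the magnification relation M = h_i/h_o = -v/u together
    with the mirror formula 1/f = 1/v + 1/u for u and v."""
    M = h_i / h_o                  # magnification
    # 1/f = 1/(-M*u) + 1/u = (1 - 1/M)/u  ->  u = f*(M - 1)/M
    u = f * (M - 1) / M
    v = -M * u
    return u, v

# the worked example from above: h_i = 10 cm, h_o = 5 cm, f = 10 cm
u, v = object_image_distances(h_i=10, h_o=5, f=10)
```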
|
Physics
|
|fluid-dynamics|flow|drag|aerodynamics|turbulence|
|
Why are cylinders more aerodynamic than spheres?
|
<p>Your question seems to presume the sphere and cylinder have the same diameter. I am not sure it is necessarily true that a cylinder has less drag than a sphere of the same diameter. I would think a cylinder has more drag than a sphere of the same diameter, but there are performance metrics other than drag to consider when talking about bullets, missiles, and rockets.</p> <p>What should be intuitive is that a cylinder of the same diameter will be able to have much more volume/mass for a relatively minor increase in drag. That is because you pay for the frontal drag due to cross section only once, and the skin drag of the parallel sides of the cylinder is fairly minor. So you can keep adding more volume/mass by extending the length of the cylinder for a relatively minor increase in drag compared to if you were to increase the volume/mass of a sphere. That lets you make bigger rockets and heavier bullets without significant drag penalties.</p> <p>Similarly, for the same volume, a cylinder will have a frontal cross section that is smaller than that of the sphere. I am not sure where the exact drag balance lies between the smaller, but flat, frontal cross section of the cylinder versus the rounded, but larger, diameter of the sphere, but it does reduce the drag disadvantage of the cylinder's flat face, while providing advantages of the cylinder (such as the ability to add fins or rifling for stabilization and more usable payload volume).</p> <p><strong>EDIT:</strong></p> <p>So according to Wiki, <a href="https://en.wikipedia.org/wiki/Drag_coefficient#/media/File:14ilf1l.svg" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Drag_coefficient#/media/File:14ilf1l.svg</a> which is drawing its data from Clancy, L. J. (1975). "5.18". Aerodynamics. Wiley.
ISBN 978-0-470-15837-1.</p> <p>The drag coefficients as a result of frontal area are as follows:</p> <p><a href="https://i.stack.imgur.com/64HSq.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/64HSq.png" alt="enter image description here" /></a></p> <p>So that would mean that for the same diameter, the drag on a cylinder is significantly more than that of a sphere. This is even when mostly neglecting the skin drag of the sides of the cylinder.</p> <p>However, the volumes are:</p> <p><span class="math-container">$V_{sphere}=\frac{4}{3}\pi r^3$</span></p> <p><span class="math-container">$V_{cylinder}=\pi r^2 L$</span></p> <p>When these are used with the coefficients of drag provided, that means that once a cylinder has a radius/diameter that is smaller than 75.7% of the radius of a sphere, the cylinder will begin to have less drag, assuming the conditions of what constitutes a "long cylinder" in the table are met. And of course, you can just keep increasing the length of the cylinder for relatively insignificant increases in drag.</p> <p>Doubling the volume of the sphere requires a 26% increase in radius/diameter, which increases the TOTAL drag by 58.7%. But doubling the volume of the cylinder by doubling its length doubles ONLY the skin drag of the sides. I don't have the actual equations or numbers for this type of drag, but it is less than the frontal drag of the cylinder.</p>
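The 75.7% and 58.7% figures follow from short arithmetic; here is a sketch using drag coefficients read off the chart (sphere ~0.47, long axial cylinder ~0.82; treat these as assumed round numbers from the cited table):

```python
# Drag ~ Cd * frontal area; equate sphere and cylinder drag to find the
# radius ratio at which the cylinder breaks even.
Cd_sphere, Cd_cylinder = 0.47, 0.82    # assumed values from the drag table

# Cd_cyl * r_cyl^2 = Cd_sph * r_sph^2  ->  r_cyl/r_sph = sqrt(Cd_sph/Cd_cyl)
breakeven_ratio = (Cd_sphere / Cd_cylinder) ** 0.5   # ~0.757

# Doubling a sphere's volume: radius grows by 2^(1/3) - 1 (~26%),
# frontal area (and hence frontal drag) by 2^(2/3) - 1 (~58.7%).
radius_increase = 2 ** (1 / 3) - 1
drag_increase = 2 ** (2 / 3) - 1
```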
|
Physics
|
|quantum-mechanics|phase-velocity|
|
The velocity of a quantum mechanical plane wave of a photon
|
<p>First of all, your energy-momentum dispersion for the photon is wrong, as <span class="math-container">$E = \frac{p^2}{m}$</span> is a classical relation. What would one even use as the photon mass in that formula? In the case of a relativistic particle with no mass, <span class="math-container">$E = c |p|$</span>, as mentioned by Hyperon, should be used instead. In consequence both phase and group velocity become c.</p> <p>As for your first question: The phase velocity tells you the speed at which your single plane wave is propagating through space; the group velocity tells you at what speed the envelope is propagating through space. Since the envelope of a single plane wave is constant in space, propagating it through space does not do anything, and therefore the concept does indeed not feel particularly meaningful in that case.</p> <p>Regarding your second question: No, phase and group velocity do not necessarily become identical for a plane wave; in fact they will not if the dispersion relation is not a proportional one, as you show in your question. It’s just that, as stated above, the group velocity becomes a physically irrelevant mathematical concept when looking at that special case.</p> <p>Concerning your last question: As far as I am aware, the wavelength of the wave function being the same as for the EM fields, in cases where they can be measured macroscopically, is less something asserted by theory (until you start dealing with relativistic quantum electrodynamics, I would guess) and more a basic observation, seen when doing slit experiments with single photons vs slit experiments with larger-scale EM waves, for example. Usually assuming a plane wave solution in free space, the de Broglie and Planck relations, and an energy-momentum relation, all of which can be motivated from experiments, is a good way to come up with a guess for the QM equation of motion, by just assuming a wave equation and plugging in your frequency–wavelength relation.
That’s what we did when I took the introductory QM course. If you use a classical energy-momentum relation there, you will end up with Schrödinger's equation for free particles. If you use the relativistic dispersion relation, I think you end up with the Klein-Gordon equation.</p> <p>So the main problem in your example is that it is talking about photons, but using the classical dispersion relation. If either talking about electrons or using the right relativistic relations, I can see it as a decent exercise to get some contact with the calculations done in the motivation of the QM equation of motion discussed above. I am not exactly sure what the didactic value is of calculating group velocities for a single plane wave here, but then again calculating such properties can always be seen as a value in itself, I guess. Comparing group and phase velocities could also be used to show some vivid differences in the consequences of using the relativistic or nonrelativistic energy-momentum relations.</p> <p>Edit: As “The Feadow” pointed out in his comment, the phase velocity is only half the classical movement speed. That is in my opinion indeed an interesting observation. My answer regarding that irritation would be that a single plane wave is delocalized over the entirety of your position space. That means that moving it doesn’t really do anything. Moving the electron only gets interesting if you have some kind of semi-localized wave packet or are looking at the classical approximation of a hard ball (or the classical limit of a <span class="math-container">$\delta$</span> function in position space). But then the group velocity is what gives you the movement of that wave packet. So contrary to what I said before, one might claim that phase velocity is physically irrelevant and only group velocity matters. But group velocity also is irrelevant as long as you only look at a single plane wave.</p>
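The contrast between the two dispersion relations is easy to tabulate numerically (ħ = m = c = 1 placeholder units and an arbitrary wavenumber k):

```python
hbar, m, c, k = 1.0, 1.0, 1.0, 3.0     # natural-unit placeholders

def omega_nonrel(k):                   # E = p^2/(2m) -> w = hbar*k^2/(2m)
    return hbar * k * k / (2 * m)

def omega_photon(k):                   # E = c|p|     -> w = c*k
    return c * abs(k)

def group_velocity(omega, k, h=1e-6):
    # v_g = dw/dk via central difference
    return (omega(k + h) - omega(k - h)) / (2 * h)

vp_nonrel = omega_nonrel(k) / k        # hbar*k/(2m): half the particle speed
vg_nonrel = group_velocity(omega_nonrel, k)   # hbar*k/m: the particle speed

vp_photon = omega_photon(k) / k        # c
vg_photon = group_velocity(omega_photon, k)   # c
```

For E = p²/(2m) the group velocity comes out at twice the phase velocity (i.e. the classical particle speed), while for E = c|p| both equal c.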
|
Physics
|
|electrostatics|vectors|differentiation|vector-fields|mathematics|
|
Where to apply $\nabla$ operator when taking curl of a cross product?
|
<p>In Jackson's "Classical Electrodynamics" you find how to derive lots of results like this. On the inside cover pages there's a lookup table <a href="https://www.scribd.com/doc/260715311/Classical-Electrodynamics-J-D-Jackson" rel="nofollow noreferrer">[Jackson]</a> where they are summarized, this one in fact is present, when you absorb the <span class="math-container">$1/r^3$</span> into one of the vectors in your expression, e.g. <span class="math-container">${\bf d} \rightarrow {\bf d}/r^3$</span> or <span class="math-container">${\bf r} \rightarrow {\bf r}/r^3$</span>.</p> <p>Of course you can try to derive the result yourself by starting with the definition of the cross product and the curl using the Levi-Civita symbol <span class="math-container">$\epsilon_{ijk}$</span> and the Einstein notation convention of summing over repeated indices: <span class="math-container">$$ ({\bf a} \times {\bf b})\,_i = \epsilon_{ijk} \, a_j \, b_k $$</span> <span class="math-container">$$ (\nabla \times {\bf v})\,_i = \epsilon_{ijk} \, \partial_j \, v_k $$</span> But it might be better to first look up how these things are done for a few examples.</p>
|
Physics
|
|thermodynamics|
|
Convection in standing fluid
|
<p>As a first pass, we could look at whether <a href="https://en.wikipedia.org/wiki/Rayleigh%E2%80%93B%C3%A9nard_convection" rel="nofollow noreferrer">convection cells</a> are predicted to form in the puddle.</p> <p>Evaporation cools surfaces, where latent heat is being removed. This results in a temperature gradient from the cooler surface to warmer depths. There is then a corresponding driving force for the warmer (i.e., less dense) lower liquid to rise. At some point, this tendency overcomes the viscosity, and liquid exchange essentially switches on, shifting cooler (warmer) water downward (upward):</p> <img src="https://i.stack.imgur.com/BaKSw.png" width="400"> <p>Examples at different scales:</p> <img src="https://i.stack.imgur.com/KqKKb.png" width="400"> <img src="https://i.stack.imgur.com/Yk96o.png" width="500"> <img src="https://i.stack.imgur.com/5NoVx.png" width="500"> <img src="https://i.stack.imgur.com/DpY2m.png" width="300"> <p>The threshold for convection cell generation of this type is a <a href="https://en.wikipedia.org/wiki/Rayleigh_number" rel="nofollow noreferrer">Rayleigh number</a> of greater than approximately 1000. (The Rayleigh number is a dimensionless measure of buoyancy-driven transport relative to viscous and thermal-diffusive damping.) Plugging in the material constants of water around room temperature, I find that this threshold lies between 1 mm and 1 cm puddle height for a temperature difference of 1°C, for example. This could help you estimate whether currents exist in the puddle as it dries through evaporation.</p> <p>It's difficult to give a more precise answer or to address your other questions because you don't specify the humidity, wind speed, evaporation rate, puddle dimensions, or surrounding material. Once you know the humidity and wind speed, you can estimate the evaporation rate. Once you have that, you can estimate the heat flux from the latent heat and mass transfer.
Once you have that, you can estimate the relative contributions of convection and conduction within the puddle.</p>
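<p>The threshold estimate above can be reproduced in a few lines; the material constants for water near 20&#160;°C are approximate textbook values:</p>

```python
# Approximate material constants for water near 20 °C
g     = 9.81       # m/s^2
beta  = 2.07e-4    # 1/K, thermal expansion coefficient
nu    = 1.0e-6     # m^2/s, kinematic viscosity
alpha = 1.43e-7    # m^2/s, thermal diffusivity

def rayleigh(dT, L):
    """Rayleigh number for a layer of depth L (m) with temperature difference dT (K)."""
    return g * beta * dT * L**3 / (nu * alpha)

# Depth at which Ra crosses the ~1000 convection threshold for dT = 1 K
L_crit = (1000 * nu * alpha / (g * beta * 1.0)) ** (1 / 3)
print(f"Ra(1 K, 1 mm)  = {rayleigh(1.0, 1e-3):.0f}")
print(f"Ra(1 K, 1 cm)  = {rayleigh(1.0, 1e-2):.0f}")
print(f"critical depth = {L_crit * 1e3:.1f} mm")   # lands between 1 mm and 1 cm
```

With these numbers the critical depth comes out at roughly 4&#160;mm, consistent with the 1&#160;mm–1&#160;cm range quoted above.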
|
Physics
|
|homework-and-exercises|newtonian-mechanics|energy|rotational-dynamics|work|
|
Work-energy theorem in rotational motion
|
<p>Considering a cylinder of radius <span class="math-container">$R$</span>, the condition for rolling without slipping is met when <span class="math-container">$v=\omega R$</span>. What we'll do is basically find how <span class="math-container">$\omega$</span> and <span class="math-container">$v$</span> vary in time and equate them:</p> <p>To find <span class="math-container">$\omega$</span> we'll use the rotational analog of Newton's <span class="math-container">$2^{\text{nd}}$</span> law: <span class="math-container">$$\begin{aligned} \tau=||\mathbf r\wedge\boldsymbol f||=||(0,-R)\wedge(-f,0)||=fR=\mathcal I\alpha=\mathcal I\frac{\mathrm d\omega}{\mathrm dt}&\implies\int_0^\omega\mathrm d\omega'=\frac{fR}{\mathcal I}\int_0^t\mathrm dt'\\ &\implies \omega(t)=\frac{fR}{\mathcal I}t=\frac{2\mu g}{R}t, \end{aligned}$$</span> where in the last step we used the fact that <span class="math-container">$\mathcal I_{\text{cylinder}}=\dfrac{1}{2}mR^2$</span> and <span class="math-container">$f=\mu N=\mu mg$</span>.</p> <p>As for <span class="math-container">$v$</span> we'll use Newton's <span class="math-container">$2^{\text{nd}}$</span> law <span class="math-container">$$m\frac{\mathrm dv}{\mathrm dt}=-f=-\mu mg\implies\int_{v_0}^v\mathrm dv'=-\mu g\int_0^t\mathrm dt'\implies v(t)=v_0-\mu gt$$</span></p> <p>Imposing <span class="math-container">$v(t_{\mathrm r})=\omega(t_{\mathrm r})R$</span>, we get that <span class="math-container">$$v_0-\mu gt_{\mathrm r}=2\mu gt_{\mathrm r}\implies t_{\mathrm r}=\frac{v_0}{3\mu g}$$</span></p> <p>Finally, to find the distance travelled by the cylinder before it starts rolling without slipping, we shall integrate <span class="math-container">$v$</span> once again and evaluate it at <span class="math-container">$t_{\mathrm r}$</span>: <span class="math-container">$$d=\int_0^{t_{\mathrm r}}v\mathrm dt=\left[v_0t-\frac{1}{2}\mu gt^2\right]_0^{t_{\mathrm r}}=v_0\left(\frac{v_0}{3\mu g}\right)-\frac{1}{2}\mu g\left(\frac{v_0}{3\mu
g}\right)^2=\left(\frac{1}{3}-\frac{1}{18}\right)\frac{v_0^2}{\mu g}=\frac{5v_0^2}{18\mu g}$$</span></p> <hr /> <p>It might be doable by energies like you suggested, but it'd be a bit more tedious alternative to get to the same conclusion since what you're asking can't be simpler than imposing <span class="math-container">$v=\omega R$</span>.</p>
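<p>A quick forward-Euler integration confirms both <span class="math-container">$t_{\mathrm r}$</span> and <span class="math-container">$d$</span> (the values of <span class="math-container">$\mu$</span>, <span class="math-container">$R$</span> and <span class="math-container">$v_0$</span> below are arbitrary examples):</p>

```python
mu, g, R, v0 = 0.3, 9.81, 0.1, 2.0     # arbitrary example values

t_r = v0 / (3 * mu * g)                # predicted time when rolling sets in
d   = 5 * v0**2 / (18 * mu * g)        # predicted distance travelled until then

# integrate v' = -mu g and omega' = 2 mu g / R until v = omega R
dt, t, v, w, x = 1e-6, 0.0, v0, 0.0, 0.0
while v > w * R:
    x += v * dt
    v -= mu * g * dt
    w += 2 * mu * g / R * dt
    t += dt

assert abs(t - t_r) < 1e-3
assert abs(x - d) < 1e-3
```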
|
Physics
|
|quantum-mechanics|operators|quantum-information|time-evolution|open-quantum-systems|
|
On properties of open quantum dynamics and Lindbladians
|
<p>It is not a channel, no, and it is not completely positive in general. Nor should you expect it to be: the point of complete positivity is to ensure you get meaningful quantum states after the application of the channel. On the other hand, <span class="math-container">$\mathbb L$</span> gives you the derivative of the state, which won't be a quantum state in general. In fact, it's essentially a difference of quantum states, so I'd guess it can never even be positive (except in cases where it gives zero), let alone completely positive.</p> <p>Consider for example the simplest case of unitary time-independent dynamics, for which you have <span class="math-container">$\mathbb L(\rho(t))=-i[H,\rho(t)]$</span>. The superoperator/quantum map <span class="math-container">$\mathbb L$</span> is thus <span class="math-container">$\mathbb L(X)\equiv -i[H,X]$</span>. To check whether it's completely positive you can study its Choi matrix, which is the operator <span class="math-container">$$J(\mathbb L)\equiv \sum_{ij} \mathbb L(E_{ij})\otimes E_{ij} = -i \sum_{ij} [H,E_{ij}]\otimes E_{ij},$$</span> where <span class="math-container">$E_{ij}\equiv |i\rangle\!\langle j|$</span>. For example, if <span class="math-container">$H=Z$</span> is a single-qubit Hamiltonian, then <span class="math-container">$[Z,E_{00}]=[Z,E_{11}]=0$</span>, and <span class="math-container">$[Z,E_{01}]=2E_{01}$</span>, <span class="math-container">$[Z,E_{10}]=-2E_{10}$</span>. Thus <span class="math-container">$$J(\mathbb L)= -2i (E_{01}\otimes E_{01}-E_{10}\otimes E_{10}) =-2i \begin{pmatrix}0&0&0&1\\0&0&0&0 \\0&0&0&0 \\ -1 &0&0&0\end{pmatrix}.$$</span> This matrix is clearly not positive semidefinite, which is equivalent to <span class="math-container">$\mathbb L$</span> not being completely positive. It's also not positive semidefinite on separable states, which means <span class="math-container">$\mathbb L$</span> is not even positive.</p>
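<p>The single-qubit example is easy to reproduce numerically; a short NumPy check of the Choi matrix and its spectrum:</p>

```python
import numpy as np

Z = np.diag([1.0, -1.0])

def L_map(X, H=Z):
    # the generator of unitary dynamics, L(X) = -i [H, X]
    return -1j * (H @ X - X @ H)

# Choi operator J(L) = sum_ij L(E_ij) (x) E_ij
d = 2
J = np.zeros((d * d, d * d), dtype=complex)
for i in range(d):
    for j in range(d):
        E = np.zeros((d, d))
        E[i, j] = 1.0
        J += np.kron(L_map(E), E)

eig = np.linalg.eigvalsh(J)            # J is Hermitian
print(eig)                             # contains a negative eigenvalue: not CP
```

The spectrum here is <span class="math-container">$\{-2,0,0,2\}$</span>, confirming that the Choi matrix of the commutator map is indefinite.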
|
Physics
|
|nuclear-physics|atomic-physics|estimation|history|
|
Size of an atom
|
<p>In the Rutherford experiment, from which he derived the statement you mentioned, he was shooting <span class="math-container">$\alpha$</span>-particles (helium nuclei) at gold foil.</p> <p>However, you should note that neither atoms nor nuclei are solid spheres; they are instead described by a quantum mechanical wave function. Even if one does neglect the QM nature, which is probably not too unreasonable for particles heavier than protons at usual conditions on Earth, the scattering should still at least be modeled as a Coulomb interaction, and its <a href="https://en.wikipedia.org/wiki/Cross_section_(physics)" rel="nofollow noreferrer">cross section</a> therefore depends on more than just the involved particle species; e.g., the kinetic energy of the particles will also play a role.</p>
|
Physics
|
|quantum-mechanics|quantum-information|computational-physics|linear-algebra|
|
Can we project into Pauli size sectors efficiently?
|
<ol> <li>Vectorize <span class="math-container">$\mathcal O$</span>. (This comes at no cost.)</li> <li>Transform <span class="math-container">$\mathcal O$</span> into the Pauli basis. This is a linear map on each qubit subspace (i.e. 4 levels in the vectorized form) independently, and can be done with <span class="math-container">$O(L4^L)$</span> operations.</li> <li>Construct a vector which contains a 1 for all Pauli strings of the desired weight, and zero otherwise. (I won't count this for the cost, as this can be reused. There's certainly smart ways to do that, as it just amounts to enumerating bit strings with a certain Hamming weight.)</li> <li>Perform a component-wise multiplication of the two vectors. (Cost: <span class="math-container">$4^L$</span>) This gives the vectorized <span class="math-container">$\mathcal P_s\mathcal O$</span>.</li> <li>Transform the result back into the original basis. (Cost: <span class="math-container">$O(L4^L)$</span>.)</li> </ol> <p>Voila! Overall cost: <span class="math-container">$L4^L$</span>, which is close to optimal. (You have to touch every entry at least once, so <span class="math-container">$4^L$</span> is a lower bound.)</p>
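<p>A small NumPy sketch of these five steps (here the step-3 weight mask is built on the fly with a Python loop rather than precomputed, which is the unoptimized part; the per-qubit basis rotations carry the <span class="math-container">$O(L4^L)$</span> cost):</p>

```python
import numpy as np
from itertools import product

I2 = np.eye(2, dtype=complex)
X  = np.array([[0, 1], [1, 0]], dtype=complex)
Y  = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z  = np.array([[1, 0], [0, -1]], dtype=complex)
paulis = [I2, X, Y, Z]

# Per-qubit change of basis between |i><j| components and Pauli coefficients:
# M[p, 2i+j] = Tr(P_p |i><j|)/2 = P_p[j, i]/2, and N = M^{-1}, N[2i+j, p] = P_p[i, j]
M = np.array([[P[j, i] / 2 for i in range(2) for j in range(2)] for P in paulis])
N = np.array([[P[i, j] for P in paulis] for i in range(2) for j in range(2)])

def apply_per_axis(T, A):
    # apply the 4x4 matrix A independently to every axis of T (L applications)
    for axis in range(T.ndim):
        T = np.moveaxis(np.tensordot(A, T, axes=([1], [axis])), 0, axis)
    return T

def project_weight(O, L, s):
    """P_s O: keep only Pauli strings with exactly s non-identity factors."""
    # steps 1-2: vectorize and rotate every qubit into the Pauli basis
    perm = [k for pair in zip(range(L), range(L, 2 * L)) for k in pair]
    T = O.reshape([2] * (2 * L)).transpose(perm).reshape([4] * L)
    c = apply_per_axis(T, M)                  # Pauli coefficients c[p1..pL]
    # steps 3-4: zero out coefficients of the wrong weight
    for idx in product(range(4), repeat=L):
        if sum(p != 0 for p in idx) != s:
            c[idx] = 0
    # step 5: rotate back and un-vectorize
    back = apply_per_axis(c, N).reshape([2, 2] * L)
    return back.transpose(np.argsort(perm)).reshape(2 ** L, 2 ** L)
```

For example, with <span class="math-container">$\mathcal O = X\otimes Z + 2\,\mathbb 1\otimes Y + Z\otimes\mathbb 1$</span>, the weight-2 projection returns <span class="math-container">$X\otimes Z$</span> and the weight-1 projection returns the remaining two terms.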
|
Physics
|
|quantum-mechanics|path-integral|symmetry-breaking|instantons|functional-determinants|
|
Instantons and Spontaneous Symmetry Breaking
|
<p>For precision's sake let's define the time-translation operator <span class="math-container">$\hat{T}(\Delta \tau)$</span> as the operator acting on functions <span class="math-container">$f$</span> of <span class="math-container">$\tau$</span> as <span class="math-container">\begin{equation} (\hat{T}(\Delta \tau)f)(\tau) = f(\tau + \Delta \tau)\,. \end{equation}</span></p> <p>Clearly we have <span class="math-container">$(\hat{T}(\Delta\tau)x_I) \neq x_I$</span> for generic values of <span class="math-container">$\Delta \tau$</span>, thus the instanton solution breaks the time-translation symmetry of the system. On the other hand, what is called <span class="math-container">$\tau_0$</span>, as you will find when you continue your study of instantons, is what is usually called a <em>modulus</em> (you will see the term <em>moduli space</em> a bunch). You can think of the term modulus as a synonym for parameter most of the time. I will change your notation to make the dependence on the modulus explicit, so that <span class="math-container">\begin{equation} x_I(\tau, \tau_0) := x_0 \tanh(\alpha(\tau - \tau_0))\,. \end{equation}</span></p> <p>These are solutions of the classical Euclidean equations of motion for all values of the modulus <span class="math-container">$\tau_0$</span>.
On the other hand <span class="math-container">$\hat{F}[x_\text{cl}]$</span> (I use square brackets because this is really an operator-valued <em>functional</em> of the classical trajectory <span class="math-container">$x_\text{cl}(\tau)$</span>) is related to the saddle point expansion around the classical trajectory <span class="math-container">$x_\text{cl}$</span>: <span class="math-container">\begin{equation} S_E[x_\text{cl} + \delta x] = S_E[x_\text{cl}] + \frac{1}{2}\int_{-T/2}^{T/2}d\tau \delta x(\tau)\hat{F}[x_\text{cl}]\delta x(\tau) + O(\delta x^3) \end{equation}</span></p> <p>But then suppose I move in the moduli space (which means I am changing the modulus <span class="math-container">$\tau_0$</span>): wherever I move, I always find solutions to the classical equations of motion, which thus extremize <span class="math-container">$S_E$</span>. Suppose I move by an infinitesimal <span class="math-container">$\delta \tau_0$</span>; what I get is <span class="math-container">\begin{equation} x_I(\tau, \tau_0 + \delta\tau_0) = x_I(\tau, \tau_0) + \partial_{\tau_0}x_I(\tau, \tau_0) \delta \tau_0 + O(\delta \tau_0^2)\,. \end{equation}</span> Since these are all minima of the action, we must have <span class="math-container">$S_E[x_I(\tau, \tau_0 + \delta\tau_0)] = S_E[x_I(\tau, \tau_0)]$</span>, thus <span class="math-container">\begin{equation} \hat{F}[x_I]\partial_{\tau_0}x_I(\tau, \tau_0) = 0 \end{equation}</span> which is your zero-mode.
You will thus sometimes find the description of moduli as flat directions (a continuous line of minima) in some space; in this case they are flat directions of the action functional.</p> <p>By this explanation we also found the actual zero-mode expression as <span class="math-container">$\partial_{\tau_0}x_I$</span>, which you see in your notes as (2.60), up to normalization of course.</p> <p>The short version of this is: realize that for all values of <span class="math-container">$\tau_0$</span> the instanton is a solution to the classical equations of motion, thus there has to be a continuum of points in trajectory space (the space you integrate over in the path integral) which minimize the action. Since <span class="math-container">$\hat{F}$</span> is basically the Hessian and there is a flat direction, it has to have a zero eigenvalue.</p>
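<p>The excerpt doesn't fix the potential, but the <span class="math-container">$\tanh$</span> profile is the kink of the standard quartic double well <span class="math-container">$V(x)=\frac{\lambda}{4}(x^2-x_0^2)^2$</span> with <span class="math-container">$\alpha=x_0\sqrt{\lambda/2}$</span> (an assumption on my part, together with the standard Euclidean convention <span class="math-container">$\hat F=-\partial_\tau^2+V''(x_I)$</span>). Under those assumptions both claims can be verified symbolically:</p>

```python
import sympy as sp

tau, tau0 = sp.symbols('tau tau_0', real=True)
x0, lam, y = sp.symbols('x_0 lambda y', positive=True)

V = lam / 4 * (y**2 - x0**2)**2        # assumed double-well potential
alpha = x0 * sp.sqrt(lam / 2)          # width that makes tanh a solution
xI = x0 * sp.tanh(alpha * (tau - tau0))

# (1) x_I solves the Euclidean EOM x'' = V'(x) for every value of tau_0
eom = sp.diff(xI, tau, 2) - sp.diff(V, y).subs(y, xI)
assert sp.simplify(eom) == 0

# (2) psi = d(x_I)/d(tau_0) is annihilated by F = -d^2/dtau^2 + V''(x_I)
psi = sp.diff(xI, tau0)
F_psi = -sp.diff(psi, tau, 2) + sp.diff(V, y, 2).subs(y, xI) * psi
assert sp.simplify(F_psi) == 0
```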
|
Physics
|
|classical-mechanics|biophysics|
|
Is it possible to calculate the weight distribution on each foot from the centre of mass of a human body?
|
<p>Yes. We can find the weight distribution on the feet by equating net torque and net force to 0. However, we need the <strong>horizontal distance between the COM and each foot</strong> to find the weight distribution. We cannot find it if we only have the distance between the feet and the COM, unless the distance between the feet is also given (which is ultimately used to find the same horizontal distances between the COM and the feet). Assuming that the feet are just point sized, we can do the following <a href="https://i.stack.imgur.com/sfpsR.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/sfpsR.png" alt="enter image description here" /></a></p> <p>One equation is pretty obvious, which is <strong>N1+N2=mg</strong> (net force is 0)</p> <p>Another equation is that torque=0; torque is a turning force, and here torque is 0 as the human being is not rotating (is in rotational equilibrium)</p> <p><strong>Torque = force x perpendicular distance</strong></p> <p>The perpendicular distance is the horizontal distance between the COM and the foot. Therefore, if the horizontal distances and the total weight are given, the forces exerted by the floor on the feet are</p> <p><strong>N1 = mgx2/(x1+x2)</strong></p> <p><strong>N2 = mgx1/(x1+x2)</strong></p> <p>If the feet are not point sized, then the floor will exert more force on one side of the same foot. You feel this when you lean forward: the force exerted by the ground will be more at the toes, less at the heels. It will create an additional torque, which will reduce the demand for extra torque required to stay in equilibrium.</p> <p>Hope the answer helps.</p>
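<p>The two balance equations can be sketched in a few lines (the mass and distances below are hypothetical numbers):</p>

```python
def foot_forces(m, g, x1, x2):
    """Normal forces on two point feet from force and torque balance.

    x1, x2: horizontal distances from the COM to foot 1 and foot 2.
    """
    N1 = m * g * x2 / (x1 + x2)   # foot 1 carries the lever arm of foot 2
    N2 = m * g * x1 / (x1 + x2)
    return N1, N2

# 70 kg person, COM 0.1 m from one foot and 0.2 m from the other
N1, N2 = foot_forces(70, 9.81, 0.1, 0.2)
print(N1, N2)   # the foot closer to the COM carries more weight
```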
|
Physics
|
|quantum-mechanics|general-relativity|reference-frames|observers|quantum-gravity|
|
Does quantum mechanics respect the principle of relativity?
|
<p>If by "the principle of relativity" you mean "physics is the same for all inertial observers," then the answer is that quantum mechanics is fully compatible with the principle of relativity. The evidence for this statement is that there exist relativistic quantum field theories that are fully compatible with the postulates of quantum mechanics, and with Lorentz symmetry (meaning that the laws of physics are invariant under boosts of velocity, as well as rotations). The most famous and empirically tested relativistic quantum field theory is the Standard Model of particle physics. Another way to say this, is that quantum mechanics is fully compatible with <strong>special</strong> relativity. It's worth pointing out that <em>not all</em> quantum mechanical theories respect special relativity (such as the Hydrogen atom potential you solve in a first quantum mechanics course). Quantum mechanics is a large framework you can apply to different theories. The statement is that quantum mechanics <em>can</em> be applied to theories which obey special relativity.</p> <p>If your question is, "why is quantum mechanics incompatible with General Relativity", then the answer is much more technical than "the principle of relativity." There are different ways of formulating the problem (perhaps also different problems). One of the most common issues you will see discussed, is that if you treat General Relativity as a quantum field theory like the Standard Model, then it is not <em>renormalizable</em>. There are different ways of explaining what this means in non-technical language. The way I would say it, is that if you try to treat General Relativity as a quantum field theory, then you will find that it behaves like the first term in a Taylor expansion in the energy of a process, divided by the <a href="https://en.wikipedia.org/wiki/Planck_units" rel="noreferrer">Planck scale</a>. 
This means that for energies below the Planck scale, we can trust our methods to give reliable predictions. But if we try to push GR to energies near or above the Planck scale, then we find we need access to the rest of the Taylor series, and we simply do not know what that is. The previous few sentences are meant to be a very high-level summary of the <a href="https://arxiv.org/abs/gr-qc/9405057" rel="noreferrer">effective field theory of gravity</a>. Of course, there are many proposed solutions to this problem and we have no idea which if any are correct. The idea of "asymptotically safe gravity" is, basically, that it is possible to construct all the missing terms of the Taylor series, using symmetry considerations. The idea of string theory is that the full theory actually involves many, many more degrees of freedom beyond those represented in the spacetime metric of general relativity, and all of these degrees of freedom have to be included to get a consistent theory.</p>
|
Physics
|
|statistical-mechanics|hilbert-space|string-theory|harmonic-oscillator|approximations|
|
How to get the factor of $n^{-27/4}$ in number of open string states from the calculation in GSW's book?
|
<p>TL;DR: The extra factor <span class="math-container">$n^{-3/4}$</span> comes from the Hessian/second-order derivative (i.e. the Gaussian integral) in the WKB/saddle point approximation.</p> <p>In more detail,</p> <p><span class="math-container">$$\begin{align} d_n~\stackrel{(2.3.118)}{=}&\oint \frac{dw}{2 \pi i} \frac{f(w)^{-24}}{w^{n+1}} ~\stackrel{(2.3.117)}{=}~\oint \frac{dw}{2 \pi i} e^{-S(w)}\cr~\stackrel{\rm WKB}{\sim}~&\frac{e^{-S(w_0)}}{i\sqrt{2\pi S^{\prime\prime}(w_0)}}~=~\frac{n^{-6}e^{4\pi\sqrt{n}}}{\sqrt{2n^{3/2}}}~=~\frac{1}{\sqrt{2}}n^{-27/4}e^{4\pi\sqrt{n}} \quad{\rm for}\quad n~\to~\infty ,\end{align}$$</span> where <span class="math-container">$$\begin{align} S(w)~\stackrel{(2.3.117)}{=}& -12\ln(1-w) -\frac{4\pi^2}{1-w} +(n+1)\ln(\underbrace{w}_{=1-(1-w)}),\cr 0~\approx~S^{\prime}(w)~=~~& \frac{12}{1-w} -\frac{4\pi^2}{(1-w)^2} +\frac{n+1}{w}\cr &\quad\Rightarrow\quad 1-w_0~\sim~2\pi n^{-1/2} \quad{\rm for}\quad n~\to~\infty,\cr S^{\prime\prime}(w)~=~~& \frac{12}{(1-w)^2} -\frac{8\pi^2}{(1-w)^3} -\frac{n+1}{w^2},\end{align}$$</span> so that <span class="math-container">$$\begin{align} S(w_0)~\sim~&6\ln n -2\pi \sqrt{n} - n 2\pi n^{-1/2}~=~ 6\ln n -4\pi \sqrt{n},\cr S^{\prime\prime}(w_0)~\sim~&-\frac{n^{3/2}}{\pi} \quad{\rm for}\quad n~\to~\infty. \end{align}$$</span></p> <p>References:</p> <ol> <li><p>M.B. Green, J.H. Schwarz and E. Witten, <em>Superstring theory,</em> Vol. 1, 1986; subsection 2.3.5.</p> </li> <li><p>J. Bedford, <a href="https://arxiv.org/abs/1107.3967" rel="nofollow noreferrer">arXiv:1107.3967</a>; p. 65 eq. (A.22). (Hat tip: <a href="https://physics.stackexchange.com/users/372582/sanjana">Sanjana</a>.)</p> </li> </ol>
|
Physics
|
|semiconductor-physics|
|
Why does the number of minority charge carriers not increase when we apply a reverse-bias potential to a diode?
|
<p>You have to look at the complete system, or at least include the point where the metal wire coming from the negative battery side is attached to the <span class="math-container">$p$</span>-side of the diode, in this case.</p> <p>At that point the electrons (which are plentiful in the metal wire) can flow into the diode's <span class="math-container">$p$</span>-side, but then they recombine with holes in there, so they only reduce the number of holes, giving that side a negative charge and a negative potential. It will quickly become too negative to accept more electrons from the battery.</p> <p>Of course you can still ask: aren't those first electrons enough to annihilate all those holes so that we still end up with excess electrons in that region? That could happen in a very weakly doped <span class="math-container">$p$</span>-material, but the designers of the diode will make the doping strong enough to prevent that in the intended reverse voltage range of the diode. And even if the voltage becomes too high, then there are other things that will go wrong first (like avalanche breakdown in the diode junction).</p>
|
Physics
|
|thermodynamics|conductors|
|
If something has a high specific heat capacity does that mean that it is a poor conductor of heat?
|
<blockquote> <p>Basically, substances with high specific heat capacity have a greater ability to store heat. Does this go hand in hand with thermal conductivity?</p> </blockquote> <p>As a rule, there is an inverse relationship between thermal conductivity and specific heat: metals (electrical conductors) have higher thermal conductivity and lower specific heats than non-metals (e.g., plastics).</p> <blockquote> <p>Poor conductors do not transfer heat well, trap heat and therefore Make things feel warm to touch</p> </blockquote> <p>No. In general, thermal conductivity, not specific heat, determines how warm or cool something feels, because it determines the rate of heat transfer to or from the skin. Heat capacity can only limit the amount of heat available when the mass is low.</p> <p>For surface temperatures greater than that of the skin, a good conductor at the same pre-touch temperature as a poor conductor will feel warmer to the skin. That's because it transfers heat <em>to the skin</em> at a faster rate.</p> <p>At temperatures lower than that of the skin, the good conductor will feel cooler than the poor conductor because it will transfer heat <em>away from the skin</em> at a faster rate.</p> <p>Try touching some plastic items and metal items at room temperature. Unless its mass is very low, the metal item will feel cooler than the plastic item, even though they have the same pre-touch temperature.</p> <blockquote> <p>So is it safe and correct to assume that poor conductors always have higher specific heat capacity than good conductors like metals?</p> </blockquote> <p>Again, as a rule, yes. But you also need to keep in mind that, in general, good conductors (like metal) have a higher density than poor conductors (like plastics). So when you take the density into account, the volumetric heat capacities (heat capacities of equal volumes of a material) of a good conductor and a poor conductor do not differ that greatly.</p> <p>Hope this helps.</p>
|
Physics
|
|newtonian-mechanics|reference-frames|projectile|
|
How does moving the launch area forward impact the area of impact?
|
<p>As the comments make clear, the dynamics of a launched projectile do not depend on location. One way to see this, ignoring air friction, is to take a look at the basic kinematical equations for an object moving under the influence of constant acceleration.</p> <p>For example, a launched projectile experiences only the gravitational force, which under suitable circumstances is constant, i.e. the acceleration due to gravity is <span class="math-container">$g=9.81\ \mathrm{m/s^2}$</span>, directed vertically. For motion under a constant acceleration <span class="math-container">$a$</span> (which is simply <span class="math-container">$a=0$</span> for the horizontal component, since there is no horizontal force) we have: <span class="math-container">$$x(t)={1\over 2}at^2+v_0t+x_0.$$</span> Note that the solution depends on the initial conditions of the problem, i.e. the initial velocity <span class="math-container">$v_0$</span> and initial position <span class="math-container">$x_0$</span>. The <span class="math-container">$x_0$</span> term expresses exactly your concern: if you were initially at some point, call it "ground zero", where we take <span class="math-container">$x_0=0$</span>, then the object moves a distance <span class="math-container">$x(t)$</span> during the interval from <span class="math-container">$t_0=0$</span> to <span class="math-container">$t$</span>: <span class="math-container">$$\text{distance}_1=x(t)={1\over 2}at^2+v_0t.$$</span> Now consider the same problem except your new launching point is <span class="math-container">$20\text{ m}$</span> forward of your original starting point, so that <span class="math-container">$x_0=20$</span>: <span class="math-container">$$\text{distance}_2=x(t)={1\over 2}at^2+v_0t+20=\text{distance}_1+20.$$</span> Thus, we see that shifting the launch point shifts the landing point by the corresponding amount.</p>
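<p>The translation argument can be checked in a couple of lines (launch speeds below are arbitrary examples, no air resistance assumed):</p>

```python
g, v0x, v0y = 9.81, 12.0, 15.0      # gravity and example launch velocity components

def landing_x(x0):
    """Horizontal landing point for a launch from (x0, 0) on flat ground."""
    t_flight = 2 * v0y / g          # time to return to launch height
    return x0 + v0x * t_flight      # horizontal motion has zero acceleration

# moving the launch point 20 m forward moves the landing point 20 m forward
assert abs(landing_x(20.0) - landing_x(0.0) - 20.0) < 1e-12
```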
|
Physics
|
|homework-and-exercises|lagrangian-formalism|variational-calculus|functional-derivatives|
|
Evaluating functional derivatives
|
<blockquote> <p>I am having difficulty evaluating the following derivative:</p> <p><span class="math-container">$$I = \frac{\delta}{\delta x(t)}\frac{\delta}{\delta x(t')}\int_{u_i}^{u_f}\frac{du}{2}\left(\frac{dx}{du}\right)^2~.$$</span></p> <p>... How can this derivative be evaluated?</p> </blockquote> <p>I will define a functional called F <span class="math-container">$$ F[x] = \int_{u_i}^{u_f}\frac{du}{2}\left(\frac{dx}{du}\right)^2 \equiv \int_{u_i}^{u_f}\frac{du}{2}\left(\dot x(u)\right)^2\;. $$</span></p> <p>Consider this functional <span class="math-container">$F$</span> evaluated at <span class="math-container">$x(u)+\delta x(u)$</span>, where we are planning to expand in a power series in <span class="math-container">$\delta x$</span>. We have: <span class="math-container">$$ F[x+\delta x] = F[x] + \int du \dot x(u)\dot {\delta x(u)} + \frac{1}{2}\int du \dot {\delta x(u)}\dot {\delta x(u)}\tag{A}\;. $$</span> Here, the power series expansion is exact after three terms since the functional is just quadratic.</p> <p>Also, by definition of the first and second functional derivatives we have: <span class="math-container">$$ F[x+\delta x] = F[x] + \int du {\delta x(u)}\frac{\delta F}{\delta x(u)} +\frac{1}{2!} \int du du' {\delta x(u)}\delta x(u') \frac{\delta^2 F}{\delta x(u)\delta x(u')}+\ldots\;.\tag{B} $$</span></p> <p>For example, in order to identify the first functional derivative by comparing Eq. (A) to Eq. (B), we need to put the linear term in Eq. (A) into the correct form by integrating by parts.</p> <p>For example, in order to identify the second functional derivative by comparing Eq. (A) to Eq.
(B), we need to insert a delta function and integrate by parts twice <span class="math-container">$$ \frac{1}{2}\int du \dot {\delta x(u)}\dot{\delta x(u)} = \frac{1}{2}\int du du' \delta(u-u')\dot {\delta x(u)}\dot{\delta x(u')} = -\frac{1}{2}\int du du' \ddot{\delta}(u-u')\delta x(u)\delta x(u') $$</span> to see that <span class="math-container">$$ \frac{\delta^2 F}{\delta x(u)\delta x(u')} = -\ddot{\delta}(u-u') $$</span></p> <hr /> <p>This can also be arrived at by first taking the first functional derivative of <span class="math-container">$F$</span> and then taking another functional derivative of the result: <span class="math-container">$$ \frac{\delta F}{\delta x(u)}[x] = -\ddot x(u) = -\int du' \ddot x(u')\delta(u-u') $$</span> <span class="math-container">$$ \frac{\delta F}{\delta x(u)}[x+\delta x] = \frac{\delta F}{\delta x(u)}[x]-\int du' \delta(u-u')\ddot {\delta x(u')} $$</span> <span class="math-container">$$ =\frac{\delta F}{\delta x(u)}[x] - \int du \ddot{\delta}(u-u')\delta x(u') $$</span> thus <span class="math-container">$$ \frac{\delta }{\delta x(u')}\frac{\delta F}{\delta x(u)} = -\ddot{\delta}(u-u') $$</span></p>
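<p>The result <span class="math-container">$-\ddot\delta(u-u')$</span> can also be seen concretely on a grid: discretizing <span class="math-container">$F$</span> with spacing <span class="math-container">$h$</span>, the ordinary Hessian matrix <span class="math-container">$\partial^2F/\partial x_k\partial x_l$</span> is <span class="math-container">$h^2$</span> times the discretized kernel, i.e. <span class="math-container">$(1/h)$</span> times the second-difference matrix:</p>

```python
import numpy as np

n, h, eps = 40, 0.1, 1e-3

def F(x):
    # discretized F[x] = (1/2) * integral of (dx/du)^2, with x = 0 at both endpoints
    d = np.diff(np.concatenate(([0.0], x, [0.0])))
    return d @ d / (2 * h)

# numerical Hessian of F; exact up to roundoff since F is quadratic in x
x = np.linspace(0, 1, n) ** 2        # any base point works
e = np.eye(n) * eps
H = np.array([[(F(x + e[k] + e[l]) - F(x + e[k]) - F(x + e[l]) + F(x)) / eps**2
               for l in range(n)] for k in range(n)])

# grid version of -d^2/du^2 delta(u - u'), multiplied by h^2
expected = (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h
assert np.allclose(H, expected, atol=1e-4)
```

The tridiagonal structure (diagonal <span class="math-container">$2/h$</span>, off-diagonals <span class="math-container">$-1/h$</span>) is exactly the finite-difference stencil of <span class="math-container">$-\ddot\delta(u-u')$</span>.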
|
Physics
|
|newtonian-mechanics|momentum|conservation-laws|free-body-diagram|rocket-science|
|
How is Newton's 3rd law applied in rocket propulsion?
|
<p>There are two ways to think about it, depending on how you want to think about gases. You can think of gases as having a pressure. When a rocket is burning, there is a very high pressure inside the rocket. That pressure pushes gas downward, but also pushes up on the rocket.</p> <p>The other approach is to think of individual molecules of gas. They aren't just going straight down. They bounce around with thermal energy (they're <em>very</em> hot). Those gas molecules sometimes collide with the rocket, imparting momentum to the rocket.</p> <p>Both are valid ways of thinking about a rocket motor; it just depends on how you want to treat them. Sometimes it's best to think of the gases as a fluid. Other times it's best to think of them as a bunch of particles. But both have a rationale for why the rocket goes upward.</p>
|
Physics
|
|temperature|electromagnetic-induction|metals|
|
Did my pot really reach >400°C internally while heating water (soup broth)
|
<p>Are you sure the discoloration wasn't there before your experiment? Because what you are looking at are layers of transparent chromium oxide on the pot bottom that are just thick enough to create colored interference fringes. To form them on chrome or stainless steel normally requires extended exposure to temperatures typical of the exhaust stacks on a Kawasaki Vulcan 1500 motorcycle engine at extended cruise conditions (hot enough to inflict 3rd degree burns on human skin pressed against the pipes). I know this from first-hand experience.</p> <p>If the pot was full of boiling water while the oxide layer was formed, this would require that the heat flux through the pot bottom be sufficient to blanket the entire bottom of the pot with a continuous layer of steam so the bottom would <em>boil dry</em>; only then could the temperature of the pot bottom get high enough to grow that oxide.</p> <p>On the other hand, dry-firing the pot (no water in it while adding heat) would build an oxide just like that in minutes.</p> <p>The other possibility was that the soup was sufficiently acidic to either 1) etch away the existing uniform oxide on the pot bottom at the temperature of boiling water, yielding a temperature map on the pot bottom, or 2) to thicken the native (very thin) oxide layer enough to yield fringes.</p>
|
Physics
|
|quantum-mechanics|photons|quantum-electrodynamics|scattering|feynman-diagrams|
|
Is Feynman's Compton scattering diagram the same as the one in most books?
|
<p>The diagrams are different in that the one that occurs in "most books" is the one that occurs mostly in elementary treatments and is more in line with how the Compton effect was originally conceived historically, which was antecedent to Feynman's work in Quantum Field Theory. Both diagrams do allude to things like momentum conservation; however, the Feynman diagram is drawn with a particular purpose in mind, viz. to aid in the calculation of perturbations in the QFT treatment of scattering problems, particularly the Wick expansion of the <span class="math-container">$S$</span> matrix. Because the Compton effect was derived before the advent of QFT, it managed to arrive at the correct result via incomplete theoretical foundations; nevertheless, the discovery was crucial in driving quantum theory forward despite its theoretical shortcomings.</p>
|
Physics
|
|newtonian-mechanics|newtonian-gravity|orbital-motion|solar-system|escape-velocity|
|
Escape Velocity from Moon to escape Earth-Moon System
|
<p>The easiest example is if we neglect the sun and all the other planets, assume the earth and moon stationary and the rocket to escape in the direction radially away from both:</p> <p><a href="https://i.ibb.co/FDmY0pW/yukterez.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/86BOT.png" alt="earth moon rocket escape velocity potential" /></a></p> <p>The combined potential is</p> <p><span class="math-container">$$\rm U=U_1+U_2=-\frac{G \ M_1}{r_1}-\frac{G \ M_2}{r_2}$$</span></p> <p>For the escape velocity we set <span class="math-container">$\rm E_{kin}=-E_{pot}$</span>, so <span class="math-container">$\rm v^2/2=-U$</span>, which gives</p> <p><span class="math-container">$$\rm v= \sqrt{2 \ G \ (M_1 / r_1+M_2 / r_2)}$$</span></p> <p>With <span class="math-container">$\rm G=6.67384{\scriptsize{E}}-11 \ m^3 kg^{-1} s^{-2}$</span>, the moon's mass and radius <span class="math-container">$\rm M_1=7.342{\scriptsize{E}}22 \ kg$</span> and <span class="math-container">$\rm r_1=1.7374{\scriptsize{E}}6 \ m$</span>, the earth's mass <span class="math-container">$\rm M_2=5.972{\scriptsize{E}}24 \ kg$</span> and the distance to the earth's center when standing on the far side of the moon <span class="math-container">$\rm r_2=3.844{\scriptsize{E}}8 \ m + r_1=3.861374{\scriptsize{E}}8 \ m$</span> we get</p> <p><span class="math-container">$$\rm v=2776 \ m/s \ \ || \ \ v=2374 \ m/s$$</span></p> <p>in order to escape from the surface of the moon: on the left, launching with the earth behind you, and for comparison on the right when we neglect the earth's mass (setting <span class="math-container">$\rm M_2=0$</span>).</p> <p>If we consider the moon's orbital motion around the earth, have the rocket launch in an arbitrary direction, or even perform swing-by maneuvers, there might not be an analytical solution and you have to fall back to <a href="https://notizblock.yukterez.net/viewtopic.php?t=67" rel="nofollow noreferrer">numerical</a> computations.</p>
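<p>Plugging the same numbers into the formula above:</p>

```python
import math

G = 6.67384e-11                 # m^3 kg^-1 s^-2
M1, r1 = 7.342e22, 1.7374e6     # moon mass and radius
M2 = 5.972e24                   # earth mass
r2 = 3.844e8 + r1               # distance to the earth's centre from the far side

v_both = math.sqrt(2 * G * (M1 / r1 + M2 / r2))   # escape from both bodies
v_moon = math.sqrt(2 * G * M1 / r1)               # moon only (M2 = 0)
print(f"{v_both:.0f} m/s  vs  {v_moon:.0f} m/s")  # matches ~2776 and ~2374 above
```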
|
Physics
|
|hilbert-space|quantum-spin|spinors|quantum-states|bloch-sphere|
|
Physical meaning behind the double rotation of spin 1/2 particles
|
<p>States are unit vectors <strong>up to phases</strong>. The representation of a full rotation is just a phase <span class="math-container">$-1$</span> in the Hilbert space of spin-1/2 particles. So this transformation is nothing but the identity on the states, as physics requires. It is physically wrong to say that "it changes the sign of a state". It just changes the sign of a state <strong>vector</strong>, leaving the state invariant.</p> <p>This property of the unitary representation of rotations for half-integer spin particles has, however, physical consequences. In particular, a <strong>superselection rule</strong> which states that no coherent superposition of states with integer and half-integer spin is physically possible.</p>
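<p>Concretely, for a <span class="math-container">$2\pi$</span> rotation about <span class="math-container">$z$</span> the representing unitary is <span class="math-container">$-\mathbb 1$</span>, yet the state (ray, or equivalently the density matrix) is untouched:</p>

```python
import numpy as np

sz = np.diag([0.5, -0.5])                 # spin-1/2 S_z (hbar = 1)

def rot_z(theta):
    # rotation about z: exp(-i theta S_z), diagonal so exponentiate entrywise
    return np.diag(np.exp(-1j * theta * np.diag(sz)))

psi = np.array([1, 1j]) / np.sqrt(2)      # some state vector
U = rot_z(2 * np.pi)

assert np.allclose(U, -np.eye(2))         # a full turn is the overall phase -1 ...
rho  = np.outer(psi, psi.conj())
rho2 = np.outer(U @ psi, (U @ psi).conj())
assert np.allclose(rho, rho2)             # ... and the physical state is unchanged
```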
|
Physics
|
|thermodynamics|temperature|thermal-radiation|
|
How the range at which the radiation is emitted get affected with changing temperature?
|
<p>A grey body emits a blackbody spectrum with a lower emissivity (<span class="math-container">$<1$</span>), where the emissivity is independent of wavelength/frequency. NB. This is the <a href="https://en.wikipedia.org/wiki/Black_body#cite_note-emissivity-3" rel="nofollow noreferrer">definition of a grey body</a>. (See also <a href="https://www.oxfordreference.com/display/10.1093/oi/authority.20110803095907858" rel="nofollow noreferrer">here</a>).</p> <p>So if a blackbody radiates at all wavelengths, then by definition, so does a grey body (but less).</p>
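Since a grey body is just the Planck spectrum scaled by a constant emissivity, its total exitance obeys the Stefan–Boltzmann law scaled by the same factor. A quick numerical check of this (a sketch; the emissivity and temperature values are arbitrary):

```python
import math

h, c, kB = 6.62607015e-34, 2.99792458e8, 1.380649e-23
sigma = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def planck(lam, T):
    """Blackbody spectral radiance B(lambda, T), W sr^-1 m^-3."""
    return (2.0 * h * c**2 / lam**5) / math.expm1(h * c / (lam * kB * T))

def grey_exitance(eps, T, n=4000):
    """pi * integral of eps*B(lambda) dlambda, trapezoid rule on a log-spaced grid."""
    lo, hi = math.log(1e-7), math.log(1e-2)   # 0.1 micrometre to 1 cm
    xs = [lo + (hi - lo) * i / n for i in range(n + 1)]
    ys = [eps * planck(math.exp(x), T) * math.exp(x) for x in xs]  # B * lambda d(ln lambda)
    step = (hi - lo) / n
    return math.pi * step * (sum(ys) - 0.5 * (ys[0] + ys[-1]))

T, eps = 300.0, 0.7                # arbitrary illustrative values
M_grey = grey_exitance(eps, T)     # should be close to eps * sigma * T^4
```

The integrand is nonzero at every wavelength, which is the statement in the answer: a grey body radiates wherever a black body does, just scaled down by its emissivity.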
|
Physics
|
|quantum-mechanics|operators|hilbert-space|schroedinger-equation|time-evolution|
|
Time Evolution of Eigenkets in the Heisenberg picture
|
<p>I think there is a semantic problem between you and the author regarding:</p> <blockquote> <p>"In the Schrödinger picture, A does not change, so the base kets, obtained as the solutions to this eigenvalue equation at t=0, for instance, must remain unchanged".</p> </blockquote> <p>As I understand it, what he is referring to is that, if an operator <span class="math-container">$\hat{A}$</span> does not change with time, then calculating the eigenstates of <span class="math-container">$\hat{A}$</span> will give you the same eigenstates independently of what value you assign to your time argument, because the time argument simply does not occur in the calculation.</p> <p>This is not to be confused with the statement that a system prepared in one of the eigenstates of <span class="math-container">$\hat{A}$</span> would not change with time. In fact it would change, at least by a time-dependent phase factor if <span class="math-container">$\hat{A}$</span> commutes with <span class="math-container">$\hat{H}$</span>, or in more complicated ways if it does not commute. As a consequence, a system in an eigenstate of <span class="math-container">$\hat{A}$</span> will generally not remain in an eigenstate of <span class="math-container">$\hat{A}$</span> after time evolution. In the Schrödinger picture the eigenstate problem of <span class="math-container">$\hat{A}$</span> is time-independent and independent of your system in general, but the eigenstates of <span class="math-container">$\hat{A}$</span> will still evolve in time according to the system's time-evolution operator; they just won't remain eigenstates after time evolution. I do agree that the formulation "The eigenstates of A do not change with time." is ambiguous, and I hope I could clarify how it is to be understood.</p> <p>Regarding your second question, in the Heisenberg picture the kets show no time development, which means that <span class="math-container">$ |a'\rangle_H = |a',t=0\rangle$</span>. 
The Heisenberg state has no time development, as the time evolution operator is included in the Heisenberg operators instead.</p> <p>Edit: The states describing a system show no time evolution in the Heisenberg picture, as stated in the paragraph above. Now, when looking at the eigenvalue problem of an operator <span class="math-container">$\hat{A}$</span> in the Heisenberg picture, as we did for the Schrödinger picture before, the eigenstates will indeed show a time development, because the operator <span class="math-container">$\hat{A}_H(t)$</span> changes with time in such a way that <span class="math-container">$\langle a|_H | \psi \rangle_H = \langle a|_S | \psi \rangle_S = \langle a(t=0)| \hat{U} |\psi(t=0) \rangle $</span>. Taking the hermitian adjoint one gets <span class="math-container">$| a \rangle_H = \langle a|_H^\dagger = (\langle a| \hat{U}) ^\dagger = \hat{U}^\dagger | a \rangle$</span>. So, as mentioned in the comment of Albertus Maguns, the ket-states obtained from the eigenvalue problem of an operator in the Heisenberg picture do indeed show a time development, inverse to the one that the corresponding eigenstate of the same operator in the Schrödinger picture would show when evolved according to the Schrödinger equation of the system. This would suggest that your line <span class="math-container">$$U^\dagger|a',t\rangle_S=|a',t=0\rangle=U|a',t\rangle_H$$</span> might be true. However, from my understanding, using such a time-dependent state to describe the physical state of your system means you are no longer operating in the Heisenberg picture, which shouldn't be a problem physically, so who cares. But the nomenclature really does get quite convoluted there.</p>
|
Physics
|
|quantum-field-theory|particle-physics|experimental-physics|renormalization|quantum-gravity|
|
Are Weinberg's soft theorems relevant when making predictions about collider physics?
|
<p><em>Gravitons</em> have never been observed, much less <em>graviton scattering</em>, so the graviton soft theorem is not relevant for particle physics experiments.</p> <p>The graviton soft theorem has been shown to be related to the gravitational-wave memory effect, which is a non-linear effect that applies to classical gravitational waves. Gravitational wave detectors like LIGO and Virgo search for the memory effect, although as far as I know it has not been detected to date. (Because it is hard to detect, not because there are serious reasons to think it isn't there).</p> <p>In terms of experimental tests of the soft <em>photon</em> theorem, it looks like there is at least a proposal to use a detector at the LHC to use the soft photon theorem to study QCD phenomena: <a href="https://indico.gsi.de/event/11946/contributions/50402/attachments/34543/45390/2021_RRTF_Low.pdf" rel="noreferrer">https://indico.gsi.de/event/11946/contributions/50402/attachments/34543/45390/2021_RRTF_Low.pdf</a>. However, I've always understood the main point of the theorem to be more theoretical, by constraining the types of singularities that have to occur in S-matrix elements, and understanding the soft limit is necessary to see how to deal with infrared divergences.</p>
|
Physics
|
|classical-mechanics|newtonian-gravity|multipole-expansion|
|
Is quadrupole contribution to gravitational potential the sum of the contribution of all $m$ values?
|
<p>The quadrupole moment is a <em>tensor</em>, i.e., it's not the sum of <span class="math-container">$q_{2,m}$</span> over all <span class="math-container">$m$</span>, it's just <span class="math-container">$q_{2,m}$</span>, thought of as a spherical tensor of rank-2 with components <span class="math-container">$q_{2,m}$</span>. Equivalently, in Cartesian coordinates, it takes the form of a symmetric, traceless matrix, which has five independent entries <span class="math-container">$Q_{xx}$</span>, <span class="math-container">$Q_{yy}$</span>, <span class="math-container">$Q_{xy}$</span>, <span class="math-container">$Q_{xz}$</span>, and <span class="math-container">$Q_{yz}$</span>. The <span class="math-container">$q_{2,m}$</span>'s can be written as linear combinations of the <span class="math-container">$Q_{ij}$</span>'s and vice-versa.</p> <p>This is analogous to how the dipole moment is a <em>vector</em> with three components <span class="math-container">$D_x$</span>, <span class="math-container">$D_y$</span>, and <span class="math-container">$D_z$</span>, corresponding to three orthogonal directions <span class="math-container">$x$</span>, <span class="math-container">$y$</span>, <span class="math-container">$z$</span>, in space. Equivalently, it's a spherical tensor of rank 1 with components <span class="math-container">$d_{1,m}$</span>, where <span class="math-container">$m=-1,0,1$</span>.</p> <p>Only the monopole moment is a scalar.</p>
|
Physics
|
|quantum-mechanics|hilbert-space|schroedinger-equation|quantum-optics|schroedingers-cat|
|
Creating Schrödinger cat states with trapped ions
|
<p>The general form of the displacement operator is <span class="math-container">$$ \hat{D}(\alpha) = e^{\alpha\hat{a}^{\dagger} - \alpha^*\hat{a}}\,, $$</span> which can also be written as <span class="math-container">$$ \hat{D}(\alpha) = e^{-|\alpha|^2/2}e^{\alpha\hat{a}^{\dagger}}e^{- \alpha^*\hat{a}} = e^{|\alpha|^2/2}e^{- \alpha^*\hat{a}}e^{\alpha\hat{a}^{\dagger}}\,. $$</span> You can use either of these last two forms to compute the action of <span class="math-container">$e^{-ig(\hat{a}^{\dagger}+\hat{a})t} = e^{(-igt)\hat{a}^{\dagger}-(-igt)^*\hat{a}}$</span>. But, we already know that <span class="math-container">$\hat{D}(\alpha)\lvert 0\rangle = \lvert \alpha\rangle$</span>, so really you just need to use that.</p>
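Since the upshot is that <span class="math-container">$\hat{D}(\alpha)\lvert 0\rangle = \lvert \alpha\rangle$</span>, one can sanity-check numerically that the resulting coherent state is an eigenstate of <span class="math-container">$\hat{a}$</span> with eigenvalue <span class="math-container">$\alpha$</span> (a sketch in a truncated Fock basis; the value of <span class="math-container">$\alpha$</span> is arbitrary):

```python
import math

alpha = 0.5 - 0.3j   # arbitrary coherent amplitude (illustrative value)
N = 40               # Fock-space truncation

# coherent state |alpha> = e^{-|alpha|^2/2} sum_n alpha^n / sqrt(n!) |n>
norm = math.exp(-abs(alpha)**2 / 2)
psi = [norm * alpha**n / math.sqrt(math.factorial(n)) for n in range(N)]

# annihilation operator acts as a|n> = sqrt(n) |n-1>
a_psi = [math.sqrt(n + 1) * psi[n + 1] for n in range(N - 1)] + [0.0]

# a|alpha> = alpha|alpha>, up to truncation error (tiny here since |alpha| < 1)
err = max(abs(a_psi[n] - alpha * psi[n]) for n in range(N - 1))
```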
|
Physics
|
|particle-physics|dark-matter|
|
What's the argument(s) against dark matter being "normal" baryonic dust?
|
<p>Although this is not how it arose historically, the most precise evidence for nonbaryonic dark matter now comes from the early universe. There is no place for "baryonic dark matter" to hide in the early universe: the temperature and density are high enough that all baryons are strongly coupled, and so all baryonic matter is the same.</p> <p>Inferences based on early-universe physics are consistent with baryons comprising about 5% of the energy density within the present-day universe and nonbaryonic dark matter comprising about 26%. If the abundance of baryonic matter were much higher than that, then:</p> <ol> <li>The relative abundances of light elements/isotopes that emerged from primordial nucleosynthesis would be very different. For example, the fractional abundance of deuterium would be a lot lower, because with more baryons, there is more time for the deuterium to fuse into helium before the density of baryons drops too low.</li> <li>Sound waves in the primordial plasma would have much higher amplitudes, since sound can transmit through baryons but not through nonbaryonic dark matter. That would leave a clear imprint in the pattern of temperature fluctuations in the cosmic microwave background.</li> </ol>
|
Physics
|
|quantum-mechanics|homework-and-exercises|operators|commutator|
|
Conmutators and Jacobi's Identity
|
<p>What you have "found" is not even wrong. What you <em>should</em> have found is your top line reducing to <span class="math-container">$$ A[B,[C,D]]+ [A,[C,D]]B - B[A,[C,D]]-[B,[C,D]]A, $$</span> where, now, it should be evident to you that each double commutator automatically vanishes by the Jacobi identity <em><strong>for each</strong></em>, e.g., <span class="math-container">$$ [B,[C,D]]= -[C,[D,B]]-[D,[B,C]]= -[C,0]- [D,1]=0, $$</span> etc.</p> <p>If you understand faithful realizations, you might short-cut through the above comment.</p>
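The Jacobi identity holds automatically in any associative algebra, so it is easy to check mechanically with matrices (a sketch; the matrix entries are arbitrary):

```python
def matmul(X, Y):
    """2x2 matrix product."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def sub(X, Y):
    return [[X[i][j] - Y[i][j] for j in range(2)] for i in range(2)]

def add(X, Y):
    return [[X[i][j] + Y[i][j] for j in range(2)] for i in range(2)]

def comm(X, Y):
    """Commutator [X, Y] = XY - YX."""
    return sub(matmul(X, Y), matmul(Y, X))

# three arbitrary matrices
A = [[1.0, 2.0], [3.0, 4.0]]
B = [[0.0, 1.0], [-1.0, 2.0]]
C = [[2.0, -1.0], [0.5, 0.0]]

# Jacobi identity: [A,[B,C]] + [B,[C,A]] + [C,[A,B]] = 0
J = add(comm(A, comm(B, C)), add(comm(B, comm(C, A)), comm(C, comm(A, B))))
```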
|
Physics
|
|quantum-mechanics|hilbert-space|schroedinger-equation|definition|scattering|
|
Difference between stationary states, collision states, scattering states, and bound states
|
<p>I always refer to them as "scattering states". It is difficult for me to tell what your potential is doing at infinity, so assuming it is finite as it looks in the example, then this example of potential is one for which the WKB method was invented, because even for case I, the wave function will exhibit tunneling into the non-classical regions (places where <span class="math-container">$E\lt V$</span>). Thus in all of the examples, the solutions are scattering states.</p> <p>Bound states are characterized by eigenvalues, which are generated by subjecting the Schrodinger equation to boundary conditions. In this case the energies take on discrete values with some spacing between them, and the spectra may or may not be degenerate. Scattering states, on the other hand, are evocative of the free particle: the energy may take on any allowable value.</p> <p>So how do you tell which are scattering states? Griffiths' <em>Introduction to Quantum Mechanics</em> <span class="math-container">$2^{nd}$</span>ed., elucidates the problem nicely; the conditions you want are: <span class="math-container">$$E\lt [V(-\infty)\;\text{and}\; V(+\infty)]\implies\; \text{bound state},$$</span> <span class="math-container">$$E\gt [V(-\infty)\;\text{or}\; V(+\infty)]\implies\;\text{scattering state}$$</span> In cases where the potential goes to zero at infinity, which is the majority case in everyday life, then we have that: <span class="math-container">$$E\lt 0\;\implies\;\text{bound state},$$</span> <span class="math-container">$$E\gt 0\implies\;\text{scattering state}.$$</span> Even if a particle looks like it is trapped in a well as in example I, if the potential doesn't grow to infinity, then the particle <em>will</em> leak by tunneling and the problem becomes a scattering problem in the technical sense. If you can get a hold of Griffiths' book then take a look at section 2.5; it should clear up all of your doubts.</p>
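Griffiths' criterion can be phrased as a tiny classifier (a sketch; the function and labels are mine):

```python
def classify(E, V_minus_inf, V_plus_inf):
    """Bound vs scattering state per Griffiths' criterion (Introduction to QM, Sec. 2.5)."""
    if E < V_minus_inf and E < V_plus_inf:
        return "bound"
    if E > V_minus_inf or E > V_plus_inf:
        return "scattering"
    return "marginal"   # E exactly at an asymptotic value of the potential

# for a potential vanishing at infinity: E < 0 -> bound, E > 0 -> scattering
print(classify(-1.0, 0.0, 0.0))   # bound
print(classify(+1.0, 0.0, 0.0))   # scattering
```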
|
Physics
|
|electrostatics|electric-fields|integration|
|
Trying to evaluate integral using cylindrical basis vectors
|
<p>It is not really possible because the polar basis vectors change with position. Therefore, you cannot integrate the <span class="math-container">$r$</span> or <span class="math-container">$\theta$</span> components directly. <span class="math-container">$$\int_{C}\left(E_r\hat{\mathbf{r}} + E_{\theta}\hat{\boldsymbol{\theta}} + E_{\phi}\hat{\boldsymbol{\phi}}\right) \mathrm{d}l \neq \hat{\mathbf{r}}\int_{C}E_r \mathrm{d}l + \hat{\boldsymbol{\theta}}\int_{C} E_{\theta} \mathrm{d}l + \hat{\boldsymbol{\phi}}\int_{C} E_{\phi} \mathrm{d}l$$</span> You have to consider the components with respect to a fixed basis.</p> <p>This is also mentioned in section 1.4 of Griffiths' <em>Introduction to Electrodynamics</em>.</p>
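A minimal numerical illustration of why the componentwise integration fails (a sketch): two vectors that are each purely "radial" at their own angular position do not add up to a vector whose magnitude is the sum of their radial components, because <span class="math-container">$\hat{\mathbf{r}}$</span> points in different directions at the two locations.

```python
import math

def rhat(theta):
    """Radial unit vector at angular position theta, expressed in the fixed Cartesian basis."""
    return (math.cos(theta), math.sin(theta))

v1 = rhat(0.0)            # (1, 0): purely radial at theta = 0
v2 = rhat(math.pi / 2)    # (0, 1): purely radial at theta = pi/2

# naive componentwise sum pretends rhat is the same everywhere: E_r = 1 + 1 = 2
naive = 1.0 + 1.0

# the correct sum, done in the fixed Cartesian basis: magnitude sqrt(2), not 2
sx, sy = v1[0] + v2[0], v1[1] + v2[1]
correct = math.hypot(sx, sy)
```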
|
Physics
|
|newtonian-mechanics|rotational-dynamics|energy-conservation|work|friction|
|
A wheel is rolling up the (horizontal) street. Will it roll forever with the same speed?
|
<p>If all bodies are rigid, then the wheel will roll with constant velocity forever. There is no friction at all because the point of contact is stationary relative to the ground and there is nothing else that would otherwise cause slip. Therefore, the net force is zero. Friction only acts when the wheel attempts to slip. In other words, if, without friction, <span class="math-container">$v$</span> would otherwise be different from <span class="math-container">$r\omega$</span>, then friction will act in the opposite direction to attempt to close the gap. This is how a motorized wheel works. Starting from rest, we have <span class="math-container">$v=0$</span>, but if there were no friction, the driving torque would result in <span class="math-container">$r\omega \gt 0$</span>. So friction acts to increase <span class="math-container">$v$</span> and the wheel is driven forward. This is consistent with the behavior of friction in any other case. Note that the converse is not true: no-slip does not imply no friction.</p> <p>If the bodies are not rigid, then the weight will cause some deformation of the ground and a net force backwards, reducing the speed. This force is not friction but rolling resistance.</p>
|
Physics
|
|newtonian-mechanics|rotational-dynamics|reference-frames|coriolis-effect|
|
Coriolis acceleration for wind at some latitude and longitude on the surface
|
<p>You ask specifically about the rotation-of-Earth-effect on wind.</p> <p>To discuss the physics of air mass in motion:<br /> For this answer I will leave out pressure gradient, because that is a transient factor. I begin with considering the forces that are always there.</p> <p><a href="https://i.stack.imgur.com/Bnk1B.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Bnk1B.gif" alt="Forces on an oblate spheroid" /></a></p> <p><em>Diagram 1.</em><br /> <em>Forces on a buoyant object (animated GIF)</em></p> <p>We have that due to its rotation the Earth is an oblate spheroid. (The Earth started out as a protoplanetary disk, and over time contracted to its current oblate shape.)</p> <p>In the diagram the oblateness is exaggerated, of course; the actual oblateness is about 1/300 (the equatorial radius is about 20 kilometers larger than the polar radius.)</p> <p>The blue arrow represents newtonian gravity, the red arrow represents buoyancy force.</p> <p>On an oblate spheroid the direction of newtonian gravity is not perpendicular to the local level surface. For example, at 45 degrees latitude the deviation from perpendicular is about 0.1 degree.</p> <p>Since newtonian gravity and the buoyancy force are not exactly opposite in direction there is a resultant force, in the diagram indicated in green.</p> <p>(Note that in the case of ballistic motion Diagram 1 is not applicable. In the case of ballistic motion there is no buoyancy force.)</p> <p>In the interpretation of measurement results we must allow for the equivalence of inertial and gravitational mass. Let me expand on what it means to take into account that inertial mass and gravitational mass are equivalent.</p> <p>When you are located at the Equator you are circumnavigating the Earth's axis; a centripetal force is required to sustain that circular motion. 
That centripetal acceleration goes at the expense of the amount of gravity that you are subject to.</p> <p>At the Equator: to remain co-rotating with the Earth requires a centripetal acceleration of about 0.0339 <span class="math-container">$m/s^2$</span>. That goes at the expense of gravitational acceleration.</p> <p>Because of the equivalence of inertial and gravitational mass: a gravimetric instrument cannot measure that effect directly. A gravimetric instrument measures a single gravitational acceleration.</p> <p>The rotation of Earth does become apparent when a measuring instrument has a <em>velocity</em> relative to the Earth. This effect is called the <a href="https://en.wikipedia.org/wiki/E%C3%B6tv%C3%B6s_effect" rel="nofollow noreferrer">Eötvös effect</a>, after the Hungarian scientist Eötvös, who designed and operated gravimetric instruments of the highest sensitivity.</p> <p>To illustrate the Eötvös effect I will use the example of an airship. Let an airship be flying parallel to the Equator, in west-to-east direction. The airship is trimmed to neutral buoyancy. Next the airship makes a U-turn. After the U-turn the buoyancy needs to be re-trimmed.</p> <p>Before the U-turn the airship was circumnavigating the Earth's axis a bit faster than the Earth itself is rotating. So the airship was experiencing a bit less gravitational acceleration than when stationary wrt the Earth. After the U-turn the airship is circumnavigating the Earth's axis a bit slower than the Earth, so then it is experiencing a bit more gravitational acceleration.</p> <p><strong>Air mass and the rotation of the Earth</strong></p> <p>If you scroll back to Diagram 1:<br /> The green arrow represents a resultant force that is acting in centripetal direction. From here on I will refer to that as 'the poleward force'.</p> <p>The poleward force provides the amount of force that is required for buoyant mass to be co-rotating with the Earth. 
At any latitude: if air mass is flowing west-to-east the air mass is in a sense "speeding", the provided centripetal force is then not enough, and the air mass will swing wide (deviate towards the Equator). If air mass is flowing east-to-west the air mass is experiencing a surplus of centripetal force, and subsequently the air mass will move to the <em>inside</em> of the latitude line that it is moving along.</p> <p>Next:<br /> Quantitative description of the rotation-of-Earth-effect.</p> <p>First a simpler situation: motion over a flat surface, subject to a centripetal force such that at any distance to the axis of rotation a buoyant object remains co-rotating with the system.</p> <p><a href="https://i.stack.imgur.com/WhKTE.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/WhKTE.gif" alt="Motion subject to a centripetal force" /></a></p> <p><em>Diagram 2.</em><br /> <em>Motion subject to a centripetal force (animated GIF)</em></p> <p>Diagram 2 represents side by side a stationary point of view, and a co-rotating point of view. The circle represents a rotating disk. Along the rim quadrants are added to give the viewer a reference of orientation.</p> <p>The arrow in the animation represents the centripetal force. From center to rim there is a linear increase of the centripetal force.</p> <p>The motion of the black dot is according to the following parametric expression:</p> <p><span class="math-container">$x = a \cos(\Omega t)$</span><br /> <span class="math-container">$y = b \sin(\Omega t)$</span></p> <p><span class="math-container">$a$</span> half the length of the major axis<br /> <span class="math-container">$b$</span> half the length of the minor axis<br /> <span class="math-container">$\Omega$</span> 360 degrees divided by the duration of one revolution</p> <p>For emphasis: the above is valid only in the case that a force is present such that at any distance to the axis of rotation a buoyant object remains co-rotating with the system. 
Note that rotating systems tend to settle down to such a state spontaneously. Example: set up a dish with fluid in it to rotate at a uniform angular velocity. After sloshing has subsided the cross section of the surface will be in the shape of a parabola. That is: the slope of the fluid provides the required centripetal force to remain co-rotating.</p> <p>To set up for transformation to a rotating coordinate system: rearrange as follows:</p> <p><span class="math-container">$$ x = \frac{a+b}{2} \cos(\Omega t) + \frac{a-b}{2} \cos(\Omega t) $$</span> <span class="math-container">$$ y = \frac{a+b}{2} \sin(\Omega t) - \frac{a-b}{2} \sin(\Omega t) $$</span></p> <p>The first terms describe a co-rotating circle, the second terms a counter-rotating circle. After transformation of the motion to the rotating coordinate system the motion relative to the co-rotating coordinate system is as follows:</p> <p><span class="math-container">$$ x = \frac{a+b}{2} + \frac{a-b}{2} \cos(2 \Omega t) $$</span> <span class="math-container">$$ y = - \frac{a-b}{2} \sin(2 \Omega t) $$</span></p> <p>That is, the motion relative to the rotating coordinate system is perfectly circular about the fixed point <span class="math-container">$(\frac{a+b}{2}, 0)$</span>, and proceeds at a frequency of <span class="math-container">$2\Omega$</span>, twice the frequency of the rotating system.</p> <p>I will refer to the circle of the motion relative to the rotating coordinate system as the epi-circle.</p> <p>I will use the lowercase <span class="math-container">$\omega$</span> for the angular velocity relative to the rotating coordinate system.</p> <p>with:<br /> <span class="math-container">$a_c$</span> centripetal acceleration wrt the center of epi-circle<br /> <span class="math-container">$r$</span> radial distance to center of epi-circle</p> <p><span class="math-container">$$ a_c = \omega^2r \tag{1} $$</span></p> <p>Let <span class="math-container">$v_r$</span> be the velocity vector relative to the rotating coordinate system. 
the following substitution results in (2): <span class="math-container">$\omega r=v_r$</span></p> <p><span class="math-container">$$ a_c = \omega v_r \tag{2} $$</span></p> <p>We make the expression independent of the center of the epi-circle with the following substitution: <span class="math-container">$\omega = 2\Omega$</span>:</p> <p><span class="math-container">$$ a_c = 2 \Omega v_r \tag{3} $$</span></p> <p>The form of (3) is independent of the <em>direction</em> of the velocity vector relative to the rotating system.</p> <p><strong>General remarks</strong></p> <p>The motion pattern of Diagram 2 consists of two coupled oscillations. Oscillation of distance to the axis of rotation, and oscillation of the angular velocity. Note especially that both of those oscillations are independent of the choice of coordinate system.<br /> -In both coordinate systems, inertial and rotating, the distance to the center of rotation is the same.<br /> -In both coordinate systems, inertial and rotating, the <em>rate of change</em> in angular velocity is the same.</p> <p>The relevant aspects here are those aspects of the motion that are in both coordinate systems <em>the same</em>.</p> <p>The interplay of those two oscillations (radial and angular velocity) is described by the relation <span class="math-container">$a_c = 2 \Omega v_r$</span></p> <p>Of course, we have that in the equation of motion for motion with respect to a rotating coordinate system there is a term of the form <span class="math-container">$2\Omega v_r$</span>, which is the same form as in equation (3).</p> <p>There is of course a reason that the same form makes another appearance - discussion of that is interesting, but outside of the scope of this answer.</p> <p><strong>Expanding to motion relative to the Earth's surface</strong></p> <p>In the case of rotation-of-Earth-effect on wind:<br /> The magnitude of the rotation-of-Earth-effect is proportional to the projection onto the latitudinal plane.</p> <p>Hence 
close to the Equator the rotation-of-Earth-effect on wind is very small, whereas close to the poles the motion of the air mass (which is parallel to the local surface) is close to parallel to the latitudinal plane. The closer to the poles, the closer the ratio approaches 1:1.</p>
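The epi-circle result derived above — motion in the co-rotating frame is a circle of radius <span class="math-container">$(a-b)/2$</span> about a fixed center, traversed at frequency <span class="math-container">$2\Omega$</span> — can be checked numerically (a sketch; the values of <span class="math-container">$\Omega$</span>, <span class="math-container">$a$</span> and <span class="math-container">$b$</span> are arbitrary):

```python
import math

Omega = 0.7          # rotation rate of the system (arbitrary)
a, b = 3.0, 1.0      # semi-axes of the inertial-frame ellipse (arbitrary)

def rotating_frame(t):
    """Inertial-frame ellipse position, rotated into the frame co-rotating at Omega."""
    x, y = a * math.cos(Omega * t), b * math.sin(Omega * t)
    co, si = math.cos(Omega * t), math.sin(Omega * t)
    return (x * co + y * si, -x * si + y * co)

ts = [i * 0.01 for i in range(1000)]
pts = [rotating_frame(t) for t in ts]

# every rotating-frame point sits at distance (a-b)/2 from the fixed point ((a+b)/2, 0)
cx, cy = (a + b) / 2.0, 0.0
radii = [math.hypot(px - cx, py - cy) for (px, py) in pts]

# the angular position on the epi-circle advances at -2*Omega
ang0 = math.atan2(pts[0][1] - cy, pts[0][0] - cx)
ang1 = math.atan2(pts[1][1] - cy, pts[1][0] - cx)
rate = (ang1 - ang0) / 0.01
```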
|
Physics
|
|optics|maxwell-equations|geometric-optics|
|
Mathematical definition of ray
|
<p>If you apply separation of variables to the wave equation via the assumption: <span class="math-container">$$\vec E(\vec r,t)=\vec E(\vec r)e^{-i\omega t},\;\;\text{(monochromatic radiation)}$$</span> then substitution into the wave equation gives: <span class="math-container">$$e^{-i\omega t}\{\nabla^2\vec E(\vec r)+\omega^2\epsilon\mu\vec E(\vec r)\}=0.$$</span> From which, the Helmholtz equation follows by cancellation of the oscillatory factor: <span class="math-container">$$\nabla^2\vec E(\vec r)+\omega^2\epsilon\mu\vec E(\vec r)=0.$$</span> You do not have to assume that the radiation is monochromatic (all the waves at one frequency), it is enough for the equation to be merely separable, <span class="math-container">$$\vec E(\vec r,t)=\vec E(\vec r)\Phi(t).$$</span> However, whenever the monochromatic case is applicable (as is often the case), the above derivation furnishes you with the necessary factors of <span class="math-container">$\omega$</span>.</p>
|
Physics
|
|electrostatics|charge|multipole-expansion|point-particles|
|
Can a point charge be asymmetric?
|
<p>It's possible to write down the charge distribution for an infinitesimally small dipole. This was done here: <a href="https://physics.stackexchange.com/a/384501">https://physics.stackexchange.com/a/384501</a></p> <p>For example, a point electric dipole located at the origin, whose dipole moment points along the positive z-axis, would have the charge distribution</p> <p><span class="math-container">$$ \rho(x, y, z) = -p\delta(x)\delta(y)\delta'(z) $$</span></p> <p>where <span class="math-container">$p$</span> is the magnitude of the dipole moment, and <span class="math-container">$\delta'$</span> is the <strong>distributional derivative</strong> of the Dirac delta distribution. As a quick sanity check for this, we can try to compute the scalar potential at a point along the positive z-axis using Coulomb's law:</p> <p><span class="math-container">\begin{align} \varphi(z) &= -\frac{p}{4\pi\epsilon_0} \int_{-\infty}^\infty \frac{\delta'(z')}{z - z'} \, \mathrm{d}z' \\ &= \frac{p}{4\pi\epsilon_0} \int_{-\infty}^\infty \delta(z')\frac{1}{(z-z')^2} \, \mathrm{d}z' \\ &= \frac{p}{4\pi\epsilon_0} \left[ \frac{1}{(z-z')^2} \right]_{z'=0} \\ &= \frac{p}{4\pi\epsilon_0 z^2} \end{align}</span></p> <p>Note that in order to derive the second line from the first, we used integration by parts to "transfer" the derivative from the delta distribution to the <span class="math-container">$1/(z-z')$</span> part. This resulted in a sign change and a boundary term that vanishes. The final result is exactly what we expect to find.</p> <p>Similarly, we can dream of infinitesimal quadrupoles (which would have a second derivative of the Dirac delta) and higher-order multipoles.</p> <p><strong>However, the term "point charge" always means a pure monopole</strong> (i.e., it's spherically symmetric). Infinitesimal dipoles and so on are simply not <em>called</em> point charges, but they can exist. 
In fact, the electron is predicted to have a <a href="https://en.wikipedia.org/wiki/Electron_electric_dipole_moment" rel="nofollow noreferrer">nonzero dipole moment</a> (in addition to its monopole moment, of course) by the Standard Model, though it's below current detection thresholds.</p>
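One more sanity check: a physical dipole of two point charges <span class="math-container">$\pm q$</span> a distance <span class="math-container">$d$</span> apart, with <span class="math-container">$p = qd$</span> held fixed, reproduces the point-dipole potential on the axis as <span class="math-container">$d \to 0$</span> (a sketch; the numerical values are arbitrary):

```python
import math

eps0 = 8.8541878128e-12   # vacuum permittivity, F/m
p = 1e-30                 # fixed dipole moment (arbitrary, C m)
z = 1e-9                  # field point on the positive z-axis (m)

def phi_finite(d):
    """Potential of +q at z=d/2 and -q at z=-d/2, with q = p/d, evaluated at height z."""
    q = p / d
    return q / (4 * math.pi * eps0) * (1 / (z - d / 2) - 1 / (z + d / 2))

phi_point = p / (4 * math.pi * eps0 * z**2)   # ideal point-dipole result

# relative error shrinks quadratically in d/z as the dipole becomes point-like
errors = [abs(phi_finite(z * s) - phi_point) / phi_point for s in (0.1, 0.01, 0.001)]
```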
|
Physics
|
|quantum-mechanics|homework-and-exercises|isospin-symmetry|
|
Allowed Isospin states of two nucleons
|
<p>Suppose, <span class="math-container">$$ |N_1 \rangle = | l_1 m_{l_1} \rangle \otimes | S_1 m_{S_1} \rangle \otimes | I_1 m_{I_1} \rangle$$</span> describes the state of the first nucleon and <span class="math-container">$$ |N_2 \rangle = | l_2 m_{l_2} \rangle \otimes | S_2 m_{S_2} \rangle \otimes | I_2 m_{I_2} \rangle$$</span> describes the state of the second nucleon.</p> <p>Then the two-nucleon state can be written as <span class="math-container">$$ |N_1N_2 \rangle = | l m_{l} \rangle \otimes | S m_{S} \rangle \otimes | I m_{I} \rangle, $$</span> where <span class="math-container">$\vec{l} = \vec{l}_1 + \vec{l}_2 $</span>, <span class="math-container">$\vec{S} = \vec{S}_1 + \vec{S}_2 $</span> and <span class="math-container">$\vec{I} = \vec{I}_1 + \vec{I}_2$</span>. This means it is a coupled state. If you are not familiar with this, read up on how angular momentum coupling is done and on CG coefficients. For the 1st question <span class="math-container">$l$</span> is arbitrary, so don't worry about its coupling. Couple the spin and isospin states. Write the coupled states in terms of uncoupled states by calculating the CG coefficients.</p> <p>Depending upon the values of <span class="math-container">$l$</span>, <span class="math-container">$ S $</span> and <span class="math-container">$I$</span> you can now determine the symmetry of the states. Then, due to the <strong>Pauli exclusion principle</strong>, you have to identify the states which make the total two-particle state antisymmetric when we exchange the particles.</p> <p>For the <span class="math-container">$ l = 0 $</span> state the space part is always symmetric. So the product of the spin and isospin parts has to be antisymmetric.</p> <p>This will be a bit complicated for a beginner; it is better to follow a nuclear physics book where the <strong>deuteron problem (two nucleon problem)</strong> is treated in detail. Check how the deuteron states are formed.</p> <p>Book recommendation: Nuclear Physics by Roy and Nigam</p>
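To make the coupling concrete in the simplest case — two spin-1/2's, which is exactly the structure of the spin and isospin parts here — the exchange symmetry of the coupled states can be verified directly (a sketch; the product-basis ordering <span class="math-container">$|\!\uparrow\uparrow\rangle, |\!\uparrow\downarrow\rangle, |\!\downarrow\uparrow\rangle, |\!\downarrow\downarrow\rangle$</span> is mine):

```python
import math

s = 1 / math.sqrt(2)

# coupled states of two spin-1/2's, written in the product basis (uu, ud, du, dd)
triplet = {  # S = 1: symmetric under exchange of the two particles
    +1: [1, 0, 0, 0],
    0:  [0, s, s, 0],
    -1: [0, 0, 0, 1],
}
singlet = [0, s, -s, 0]   # S = 0: antisymmetric under exchange

def exchange(v):
    """Swap particle 1 and particle 2: (uu, ud, du, dd) -> (uu, du, ud, dd)."""
    return [v[0], v[2], v[1], v[3]]

sym = all(exchange(v) == v for v in triplet.values())         # triplet is symmetric
antisym = exchange(singlet) == [-c for c in singlet]          # singlet is antisymmetric
```

The same bookkeeping, done for both the spin and the isospin factors, is what lets you pick out the combinations that make the total two-nucleon state antisymmetric.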
|
Physics
|
|general-relativity|differential-geometry|tensor-calculus|vector-fields|mathematics|
|
Is $dJ(V,V)=0$? where $J$ is a 1-form?
|
<blockquote> <p>If <span class="math-container">$dx^j(\partial_k)=\delta^j_k$</span> then it is equal to 0 but this relation maintain for all basis?</p> </blockquote> <p>It is always true, regardless of whether or not you use the canonical basis for the 1-forms. Observe that</p> <p><span class="math-container">$$v^kdx^j(\partial_k)v^ldx^i(\partial_l)-v^ldx^j(\partial_l)v^kdx^i(\partial_k)$$</span> <span class="math-container">$$=v^k v^l \big[\mathrm dx^j(\partial_k) \mathrm dx^i(\partial_l) - \mathrm dx^j(\partial_l) \mathrm dx^i(\partial_k)\big]$$</span></p> <p>Obviously <span class="math-container">$v^k v^l$</span> is symmetric in <span class="math-container">$(k,l)$</span> while the object in square brackets is antisymmetric in <span class="math-container">$(k,l)$</span>, so this vanishes regardless of what the <span class="math-container">$\mathrm dx^i(\partial_k)$</span> 's are.</p>
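The symmetric-contracted-with-antisymmetric argument is easy to verify numerically for arbitrary components (a sketch with random numbers):

```python
import random

n = 4
v = [random.uniform(-1, 1) for _ in range(n)]

# an arbitrary matrix and its antisymmetrized part A_kl = M_kl - M_lk
M = [[random.uniform(-1, 1) for _ in range(n)] for _ in range(n)]
A = [[M[k][l] - M[l][k] for l in range(n)] for k in range(n)]

# contracting the symmetric tensor v^k v^l with the antisymmetric A_kl gives zero
total = sum(v[k] * v[l] * A[k][l] for k in range(n) for l in range(n))
```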
|
Physics
|
|newtonian-mechanics|
|
Direction of $\hat\theta $ and $\hat r$ in pendulum
|
<p>Firstly, your mistake is that you do not write the correct expression for the net force on the pendulum bob. Since the tension balances one of the gravitational components, the correct net force is: <span class="math-container">$$\vec F=-mg\sin\alpha\;\hat\theta.$$</span> Why is there a minus sign? The minus sign comes from the fact that I am using the usual right-handed polar coordinates, i.e. <span class="math-container">$\hat r\times\hat \theta=\hat \phi$</span>, where <span class="math-container">$\hat\phi$</span> is out of the page. So <span class="math-container">$\hat\theta$</span> is actually opposite the direction you have written it in your diagram. You don't have to use right-handed coordinates; however, your specification of <span class="math-container">$\vec F$</span> must change accordingly. At any rate, if you work in either set of coordinates <em>correctly</em>, there will be a minus sign. <span class="math-container">$$mg(-\sin\alpha)\hat\theta=m(2\dot r\dot\alpha +r\ddot\alpha)\hat\theta.$$</span> The angular component gives the DE: <span class="math-container">$$-g\sin(\alpha(t))=2\dot r(t)\dot\alpha(t)+r\ddot\alpha(t).$$</span> Since <span class="math-container">$r$</span> is constant, <span class="math-container">$\dot r=0$</span> and this simplifies to: <span class="math-container">$$-g\sin(\alpha(t))=r\ddot\alpha(t).$$</span> Solving this should definitely give you something besides the erroneous exponentials.</p>
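A quick numerical integration of <span class="math-container">$r\ddot\alpha=-g\sin\alpha$</span> confirms that the motion is oscillatory with the familiar small-angle period <span class="math-container">$2\pi\sqrt{r/g}$</span>, not exponential (a sketch; the parameter values are arbitrary):

```python
import math

g, r = 9.81, 1.0          # gravity (m/s^2) and pendulum length (m)

def step(alpha, omega, dt):
    """One RK4 step for alpha'' = -(g/r) sin(alpha)."""
    def f(a, w):
        return w, -(g / r) * math.sin(a)
    k1 = f(alpha, omega)
    k2 = f(alpha + 0.5 * dt * k1[0], omega + 0.5 * dt * k1[1])
    k3 = f(alpha + 0.5 * dt * k2[0], omega + 0.5 * dt * k2[1])
    k4 = f(alpha + dt * k3[0], omega + dt * k3[1])
    return (alpha + dt * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6,
            omega + dt * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6)

# release from rest at a small angle and record zero crossings
alpha, omega, dt = 0.05, 0.0, 1e-3
t, crossings = 0.0, []
while t < 5.0 and len(crossings) < 2:
    prev = alpha
    alpha, omega = step(alpha, omega, dt)
    t += dt
    if prev > 0 >= alpha or prev < 0 <= alpha:
        crossings.append(t)

# successive zero crossings are half a period apart
period = 2 * (crossings[1] - crossings[0])   # close to 2*pi*sqrt(r/g)
```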
|
Physics
|
|quantum-mechanics|atomic-physics|orbitals|quantum-chemistry|
|
How to calculate total angular momentum $L$ of partially filled $p$-orbitals?
|
<blockquote> <p>Why has <span class="math-container">$^1D$</span> configuration lower energy than <span class="math-container">$^1S$</span> ? Hund's second rule says that for two configurations with the same multiplicity, the configuration with the highest total orbital angular momentum <span class="math-container">$L$</span> has the lowest energy. But how do I calculate total angular momentum <span class="math-container">$L$</span>?</p> </blockquote> <p><a href="https://en.wikipedia.org/wiki/Hund%27s_rules" rel="nofollow noreferrer">Hund's rules</a> are a prescription for determining the <em>ground state</em>. Hund's rules here tell you that the ground state is the <span class="math-container">${}^3P$</span> state. They don't tell you anything else.</p> <p>If you want to understand how Hund's rules work as well as how to determine the energy order of higher states (like the <span class="math-container">${}^1 D$</span> and <span class="math-container">${}^1 S$</span> states) you have to look at the many-body atomic physics. (Which often can be reduced to a matrix diagonalization problem, where the matrix elements are radial integrals, also known as "Slater-Condon parameters.")</p> <p>I suggest consulting a textbook like Griffith's <a href="https://rads.stackoverflow.com/amzn/click/com/052111599X" rel="nofollow noreferrer" rel="nofollow noreferrer">Theory of Transition Metal Ions</a>. Especially Section 4.5 titled <em>"<span class="math-container">$p^n$</span> configurations."</em></p> <p>In general, the ordering of the different states will depend on the Slater Condon parameters. 
For example, for the <span class="math-container">$p^2$</span> or <span class="math-container">$p^4$</span> configurations, Griffith provides the energies in terms of the Slater Condon parameters <span class="math-container">$F_0$</span> and <span class="math-container">$F_2$</span> as: <span class="math-container">$$ E({}^3 P) = F_0 - 5 F_2 $$</span> <span class="math-container">$$ E({}^1 S) = F_0 + 10 F_2 $$</span> <span class="math-container">$$ E({}^1 D) = F_0 + F_2 $$</span></p> <p>Thus, given that <span class="math-container">$F_0$</span> and <span class="math-container">$F_2$</span> are positive we have: <span class="math-container">$$ E({}^3 P) < E({}^1 D) < E({}^1 S)\;. $$</span></p> <hr /> <p><strong>UPDATE:</strong></p> <p>First of all, I should say that this is difficult stuff to understand from first principles. It is well-established--but still difficult--many-body quantum mechanics. (That is one reason we have these <em>ad hoc</em> rules like Hund's rules.) So, don't feel bad for not getting it right away.</p> <p>Second, I suggest you switch to using a textbook from a reputable publisher and reputable author rather than some random thing you found on the Internet. The online text you cite seems pretty bad.</p> <p>Third, to establish a base-line understanding I suggest you <em>explicitly</em> work out the case of the <span class="math-container">$p^2$</span> configuration, at least with respect to <em>counting</em> the states. The <span class="math-container">$p^2$</span> configuration is actually the same as the <span class="math-container">$p^4$</span> configuration. This is the case since (if you include the spin quantum number) there are 6 orbitals that can be occupied and six-choose-two is the same as six-choose-four. 
Another way to put this is that you can think of <span class="math-container">$p^2$</span> as two electrons and <span class="math-container">$p^4$</span> as two holes.</p> <p>Anyways, I'll help you with the counting for <span class="math-container">$p^2$</span>. For the case of <span class="math-container">$p^2$</span> there are <span class="math-container">$\frac{6!}{2!4!} = 15$</span> direct-product states: <span class="math-container">$$ |m=-1, m_s = \uparrow\rangle|m=-1, m_s = \downarrow\rangle \tag{1} $$</span> <span class="math-container">$$ |m=-1, m_s = \uparrow\rangle|m=0, m_s = \downarrow\rangle \tag{2} $$</span> <span class="math-container">$$ |m=-1, m_s = \uparrow\rangle|m=0, m_s = \uparrow\rangle \tag{3} $$</span> <span class="math-container">$$ \ldots \tag{...} $$</span> <span class="math-container">$$ |m=+1, m_s = \uparrow\rangle|m=+1, m_s=\downarrow\rangle \tag{15}\;. $$</span></p> <p>In the non-interacting theory of the atom, all 15 of these states are exactly degenerate in energy.</p> <p>When you include the electron-electron interaction terms the degeneracy is partially split. To understand the splitting consider first the total spatial angular momenta that could possibly be achieved, written symbolically as: <span class="math-container">$$ p \otimes p = D\oplus P \oplus S\;.\tag{A} $$</span> Or, sometimes you will see it written as <span class="math-container">$$ 1\otimes 1 = 2\oplus 1 \oplus 0\;. $$</span> What Eq. (A) is supposed to mean is that you can combine two <span class="math-container">$\ell=1$</span> particles to get at most a total momentum of 2 (the D state) <span class="math-container">$L=|\ell_1 + \ell_2|$</span> and at least a momentum of 0 (the S state) <span class="math-container">$L=|\ell_1 - \ell_2|$</span>.</p> <p>Next consider the addition of the spin angular momentum, written symbolically as: <span class="math-container">$$ \frac{1}{2}\otimes \frac{1}{2} = 1\oplus 0\;.\tag{B} $$</span> What Eq.
(B) means is that you can combine two spin 1/2 particles to get an overall total spin state of either 1 or 0.</p> <p>Now consider the symmetries of all these states: <span class="math-container">$$ p \otimes p = \underbrace{D}_{sym}\oplus \underbrace{P}_{anti} \oplus \underbrace{S}_{sym}\;.\tag{A} $$</span> <span class="math-container">$$ \frac{1}{2}\otimes \frac{1}{2} = \underbrace{1}_{sym}\oplus \underbrace{0}_{anti}\;.\tag{B} $$</span></p> <p>The overall state <em>must</em> be antisymmetric, which leaves us with the combinations: <span class="math-container">$$ {}^1 D\;, $$</span> where, confusingly, the superscript <span class="math-container">$1$</span> means it is the <span class="math-container">$S=0$</span> (spin 0) state. (In general the term symbol denotes the total spin <span class="math-container">$S$</span> and total spatial <span class="math-container">$L$</span> angular momentum values as <span class="math-container">${}^{2S+1}L$</span>.) <span class="math-container">$$ {}^3 P\;, $$</span> where the superscript <span class="math-container">$3$</span> means it is a spin-1 state, and <span class="math-container">$$ {}^1 S\;. $$</span></p> <p>Now let's count states again. There are <span class="math-container">$1\times 5 = 5$</span> states that make up the <span class="math-container">${}^1 D$</span> states (all degenerate in energy). There are <span class="math-container">$3\times 3 = 9$</span> states that make up the <span class="math-container">${}^3 P$</span> states (all degenerate in energy). And there are <span class="math-container">$1\times 1 = 1$</span> state that makes up the <span class="math-container">${}^1 S$</span> state.
And, happily, <span class="math-container">$5+9+1 = 15$</span>, just like it should.</p> <p>In order to figure out the arrangement of the energy levels relative to each other for <span class="math-container">${}^1D$</span>, <span class="math-container">${}^1S$</span>, and <span class="math-container">${}^3P$</span> you have to construct the <span class="math-container">$15\times 15$</span> matrix of the interaction energy and diagonalize it.</p>
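<p>The state counting above can be automated with the standard "highest M_L, then highest M_S" bookkeeping (a generic sketch in Python, not taken from Griffith; M_S is tracked in units of 1/2, so a single spin is +1 or −1):</p>

```python
from itertools import combinations
from collections import Counter

# Enumerate the 15 antisymmetric two-electron microstates of p^2 and peel
# off terms by repeatedly taking the largest remaining M_L, then the
# largest M_S at that M_L.
orbitals = [(ml, ms) for ml in (-1, 0, 1) for ms in (+1, -1)]

micro = Counter()
for a, b in combinations(orbitals, 2):    # Pauli: distinct spin-orbitals
    micro[(a[0] + b[0], a[1] + b[1])] += 1

assert sum(micro.values()) == 15          # six-choose-two

terms = []
while sum(micro.values()) > 0:
    remaining = [key for key, n in micro.items() if n > 0]
    L = max(ML for ML, MS in remaining)
    S2 = max(MS for ML, MS in remaining if ML == L)   # S2 = 2S
    terms.append((L, S2))
    for ML in range(-L, L + 1):           # remove one (2L+1)(2S+1) block
        for MS in range(-S2, S2 + 1, 2):
            micro[(ML, MS)] -= 1

# the terms come out as 1D (L=2, S=0), 3P (L=1, S=1), 1S (L=0, S=0)
assert sorted(terms) == [(0, 0), (1, 2), (2, 0)]
assert sum((2*L + 1) * (S2 + 1) for L, S2 in terms) == 15   # 5 + 9 + 1
```
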
|
Physics
|
|symmetry|standard-model|quantum-chromodynamics|quarks|isospin-symmetry|
|
Isospin doublet and quark content from contraction of quarks
|
<p>Apart from the strong/weak isospin mix-up mentioned in the comments, let's also be careful not to mix up color SU(3) and flavor SU(3). We can leave the color out of it entirely. Also full flavor SU(3) is not needed, only the (strong) isospin subgroup SU(2). And there you just add three isospin <span class="math-container">$\frac12$</span> states to a total isospin <span class="math-container">$\frac12$</span>.</p> <p>This is not the only thing you can do to combine three <span class="math-container">$q$</span>'s; you could also have combined them into a total isospin <span class="math-container">$\frac32$</span> state. That would give you the <span class="math-container">$\Delta$</span>'s. It's a choice you make; in the two cases you contract the indices in a different way. All of this is described, of course, by Clebsch-Gordan coefficients, which technically speaking should answer your question.</p> <p>But you can also say that to combine them to <span class="math-container">$p$</span> and <span class="math-container">$n$</span>, you'll need terms in the summation that always have the isospin <span class="math-container">$z$</span>-component add up to <span class="math-container">$\pm\frac12$</span>. If, on the other hand, you combine them to <span class="math-container">$\Delta$</span>'s, you'll usually start creating the combinations with isospin <span class="math-container">$z$</span>-component <span class="math-container">$\pm\frac32$</span> (which is easy to do) and then raising and lowering operators can do the rest. That will then help you to find the <span class="math-container">$n$</span> and <span class="math-container">$p$</span> states, which should be orthogonal to the two "middle states" of the <span class="math-container">$\Delta$</span>'s, the ones that have isospin <span class="math-container">$z$</span>-component <span class="math-container">$\pm\frac12$</span>.
(To pin down the exact states you'll also have to use total symmetry for spin+isospin, since the color part of the state is already totally antisymmetric).</p> <p>NB: to define everything with the right phase factor it might be wise to check the Condon & Shortley phase convention.</p>
|
Physics
|
|temperature|electric-current|conductors|
|
Rewriting fraction as a derivative
|
<p>If <span class="math-container">$t=\Delta t=t-0$</span>, then one might write: <span class="math-container">$${\Delta T\over\Delta t}=j^2{\rho\over c}.$$</span> Now if you take the limit, then you may write: <span class="math-container">$$\lim\limits_{\Delta t\rightarrow 0}{\Delta T\over\Delta t}={d T\over dt}=j^2{\rho\over c}.$$</span> However, this requires that <span class="math-container">$t$</span> is the only variable in your equation; all the rest must be constant. Now if <span class="math-container">$\Delta t$</span> is not small, then you have an expression for the <em>average</em> time rate of change instead of the instantaneous rate of change given by the derivative, i.e. <span class="math-container">$${\Delta T\over \Delta t}=\text{average time rate of change for T}.$$</span></p>
|
Physics
|
|definition|conventions|fermions|spinors|grassmann-numbers|
|
Fierz idendity (supersymmetry)
|
<p>We define <span class="math-container">\begin{align} \psi \chi &\equiv \psi_a \chi^a = - \epsilon_{ab} \psi^a \chi^b , \qquad {\bar \psi} {\bar \chi} \equiv {\bar \psi}^{\dot a} {\bar \chi}_{\dot a} = \epsilon_{{\dot a}{\dot b}} {\bar \psi}^{\dot a} {\bar \chi}^{\dot b} . \end{align}</span> Expanding the sum out explicitly, we find <span class="math-container">$$ \psi \psi = - 2 \psi^1 \psi^2 , \qquad {\bar \psi} {\bar \psi} = 2 {\bar \psi}^{\dot 1} {\bar \psi}^{\dot 2} . \tag{1} $$</span> We have <span class="math-container">$$ \psi^a \psi^b = c_1 \epsilon^{ab} \psi \psi , \qquad {\bar \psi}^{\dot a} {\bar \psi}^{\dot b} = c_2 \epsilon^{{\dot a}{\dot b}} {\bar \psi} {\bar \psi} $$</span> for some constants <span class="math-container">$c_1$</span> and <span class="math-container">$c_2$</span>.</p> <p>We can now set <span class="math-container">$ab={\dot a}{\dot b}=12$</span> in the equation above; using <span class="math-container">$\epsilon^{12} = \epsilon^{{\dot 1}{\dot 2}} = 1$</span> and matching to (1), we find <span class="math-container">$$ c_1 = - \frac{1}{2} , \qquad c_2 = \frac{1}{2}. $$</span></p>
|
Physics
|
|quantum-mechanics|operators|hilbert-space|heisenberg-uncertainty-principle|
|
Some questions about derivation of uncertainty principle
|
<p>It is true that by taking step (2) we are 'unnecessarily' making the uncertainty principle weaker. Furthermore, it's possible to use <span class="math-container">$|z|^2=Re(z)^2+Im(z)^2$</span> to derive the stronger <a href="https://en.wikipedia.org/wiki/Uncertainty_principle#Mathematical_formalism" rel="noreferrer">Robertson-Schrödinger uncertainty principle</a>: <span class="math-container">$$\sigma_A^2\sigma_B^2\geq \left|\frac{1}{2}\langle \{ \hat{A},\hat{B}\}\rangle-\langle \hat{A} \rangle \langle \hat{B} \rangle \right|^2 + \left|\frac{1}{2i}\langle[\hat{A},\hat{B}]\rangle \right|^2 $$</span> And of course, leaving out the first term, we get the well-known uncertainty principle (3): <span class="math-container">$$\sigma_A^2\sigma_B^2\geq \left|\frac{1}{2i}\langle[\hat{A},\hat{B}]\rangle \right|^2 $$</span> So why do we generally see the latter over the former? It's largely a matter of practicality. Note that the expected value of any operator <span class="math-container">$\hat{O}$</span> is <span class="math-container">$\langle \hat{O} \rangle=\langle\psi|\hat{O}|\psi\rangle$</span>, which is dependent on the state <span class="math-container">$|\psi \rangle$</span>. Thus, <span class="math-container">$\langle \{ \hat{A},\hat{B}\}\rangle$</span>, <span class="math-container">$\langle \hat{A} \rangle$</span> and <span class="math-container">$\langle \hat{B} \rangle$</span> are generally not known. 
On the other hand, there are many cases where <span class="math-container">$\langle[\hat{A},\hat{B}]\rangle$</span> is either the same regardless of <span class="math-container">$|\psi\rangle$</span>, such as: <span class="math-container">$$\langle[\hat{x},\hat{p}]\rangle=\langle i\hbar \hat{I} \rangle = i\hbar \Longrightarrow \sigma_x\sigma_p\geq \frac{\hbar}{2}$$</span> or otherwise noteworthy, such as: <span class="math-container">$$\langle[\hat{L_x},\hat{L_y}]\rangle=\langle i\hbar \hat{L_z} \rangle\Longrightarrow \sigma_{L_x}\sigma_{L_y}\geq \left|\frac{\hbar}{2}\langle L_z \rangle\right|$$</span> and: <span class="math-container">$$\langle[\hat{O},\hat{H}]\rangle=i\hbar \frac{d\langle\hat{O}\rangle}{dt} \Longrightarrow \frac{\sigma_{O}}{\left|\frac{d\langle\hat{O}\rangle}{dt}\right|}\sigma_{H}\geq \frac{\hbar}{2}$$</span></p> <p>At the end of the day, the uncertainty principle has little utility for doing exact calculations (if you have <span class="math-container">$|\psi\rangle$</span> you can just explicitly calculate <span class="math-container">$\sigma_A$</span> and <span class="math-container">$\sigma_B$</span>), but it does provide some insight into the relationship between the observables of a system. In that regard, the term <span class="math-container">$\left|\frac{1}{2}\langle \{ \hat{A},\hat{B}\}\rangle-\langle \hat{A} \rangle \langle \hat{B} \rangle \right|^2$</span> simply isn't very useful and is therefore left out.</p>
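<p>For a concrete check, here is a minimal pure-Python verification of both bounds for spin components of a spin-1/2 system with ħ = 1 (the state chosen below is arbitrary and purely illustrative):</p>

```python
import math

# Check the Robertson bound and the stronger Robertson-Schrodinger bound
# for A = Sx, B = Sy on an arbitrary pure spin-1/2 state (hbar = 1),
# using plain 2x2 complex matrices.
Sx = [[0, 0.5], [0.5, 0]]
Sy = [[0, -0.5j], [0.5j, 0]]

def apply(M, v):
    return [M[0][0]*v[0] + M[0][1]*v[1],
            M[1][0]*v[0] + M[1][1]*v[1]]

def inner(u, v):
    return u[0].conjugate()*v[0] + u[1].conjugate()*v[1]

def ev(M, psi):                      # <psi|M|psi>, real for Hermitian M
    return inner(psi, apply(M, psi)).real

def var(M, psi):                     # <M^2> - <M>^2
    Mpsi = apply(M, psi)
    return inner(Mpsi, Mpsi).real - ev(M, psi)**2

n = math.sqrt(1.25)
psi = [1/n, (0.3 + 0.4j)/n]          # an arbitrary normalized state

comm = inner(psi, apply(Sx, apply(Sy, psi))) - inner(psi, apply(Sy, apply(Sx, psi)))
anti = inner(psi, apply(Sx, apply(Sy, psi))) + inner(psi, apply(Sy, apply(Sx, psi)))

robertson = abs(comm / 2j)**2
schrodinger = abs(anti / 2 - ev(Sx, psi)*ev(Sy, psi))**2 + robertson

product = var(Sx, psi) * var(Sy, psi)
assert product >= robertson - 1e-12      # the familiar bound (3)
assert product >= schrodinger - 1e-12    # the stronger bound
```

In fact, for a pure two-level state the stronger bound is saturated, because both (A−⟨A⟩)ψ and (B−⟨B⟩)ψ lie in the one-dimensional orthogonal complement of ψ; this is exactly the situation where dropping the anticommutator term visibly weakens the inequality.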
|
Physics
|
|general-relativity|black-holes|differential-geometry|
|
Are worldlines towards the origin with the Schwarzschild metric finite in time and length?
|
<p>Null geodesics are of interest because their collection can give you the set of null-cones on the spacetime, which, in turn show the constraints on massive particle worldlines. I think what John Rennie was saying in the comments is that once you see the alignment of null cones in Kruskal-Szekeres coordinates, it is clear that a time-like worldline, after crossing the event horizon, will end at the singularity. And, because there is no compactification for KS coordinates (like on a Penrose diagram), "it is obvious" that this end must occur in finite proper time.</p> <p>To calculate the infall time you need to integrate the geodesic equation. You could, e.g., follow these <a href="https://www.reed.edu/physics/courses/Physics411/html/411/page2/files/Lecture.31.pdf" rel="nofollow noreferrer">lecture notes by Joel Franklin (Reed College)</a> on radial freefall. They show that the proper time in going from some initial coordinate <span class="math-container">$r_0$</span> to some final value <span class="math-container">$r$</span> is: <span class="math-container">$$ \tau(r) =\frac{ \pm 2}{3 \sqrt{2 M}} \left( r^{3/2} - r_0^{3/2} \right) $$</span> where the sign is chosen to be "<span class="math-container">$-$</span>" for infall (and the initial velocity, <span class="math-container">$dr/d\tau$</span>, has been chosen specially to get this simple form; see comments of Cham below). Thus the infall time to <span class="math-container">$r=0$</span> is <span class="math-container">$$ \tau_\text{infall} = \frac{2 \, r_0^{3/2}}{3 \sqrt{2M}} \quad \rightarrow \quad c \, \tau_\text{infall} = \frac{2 r_0^{3/2}}{3 \sqrt{ 2 M \left( \frac{G}{c^2}\right)}} \quad \rightarrow \quad \tau_\text{infall} = \frac{2 r_0^{3/2}}{3 \sqrt{2 G M}} $$</span> where I've followed <a href="https://physics.stackexchange.com/a/806033/307551">Wald's prescription</a> for returning to physical units. 
Alternatively, you could write this as: <span class="math-container">$$ c \, \tau_\text{infall} = \frac{2}{3} \sqrt{\frac{r_0^3}{r_s}} $$</span> where the Schwarzschild radius is <span class="math-container">$r_s = \frac{2 GM}{c^2}$</span>. Taking the initial point to be the Schwarzschild radius, <span class="math-container">$r_0 = r_s$</span>, the time inside the black hole is: <span class="math-container">$$ \tau_\text{inside BH} = \frac{2}{3} \frac{r_s}{c} = \left\{\begin{array}{ll} \frac{2}{3} \frac{2950\,\text{m}}{c} \approx 6.6\, \mu\text{s} & M = 1\, M_\odot\\ \frac{2}{3} \frac{1.2\times10^{10}\,\text{m}}{c} \approx 27\, \text{s} & M = 4\times10^6\, M_\odot\\ \end{array} \right. $$</span></p> <p>Figure 31.1 in those lecture notes nicely shows the finite (proper) time of infall for the object, as compared to the infinite infall time as seen by an observer at infinity.</p> <p>And there is no need for the infalling worldline to traverse any "distance". Once you cross the event horizon, the future singularity will arrive to you in that finite proper time. You don't need to go anywhere to find it.</p>
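<p>Evaluating the relation τ = (2/3) r_s/c numerically (a sketch; standard SI values of the constants are assumed):</p>

```python
import math

# Proper time from r0 = r_s to the singularity, tau = (2/3) * r_s / c,
# for a stellar-mass and a supermassive black hole. SI constants assumed.
G = 6.674e-11        # m^3 kg^-1 s^-2
c = 2.998e8          # m/s
M_sun = 1.989e30     # kg

def tau_inside(M):
    r_s = 2 * G * M / c**2
    return (2.0 / 3.0) * r_s / c

tau_solar = tau_inside(M_sun)          # a few microseconds
tau_sgrA = tau_inside(4e6 * M_sun)     # a few tens of seconds

assert 6e-6 < tau_solar < 7e-6
assert 25 < tau_sgrA < 28
```

So the proper time remaining after crossing the horizon is microseconds for a stellar-mass hole and well under a minute even for a supermassive one.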
|
Physics
|
|homework-and-exercises|orbital-motion|celestial-mechanics|
|
Is there a way to use the distances of the two opposite apsides to determine the eccentricity of an orbit?
|
<p>Yes. The derivation is pretty straightforward. The ellipse's semi-major axis is the arithmetic mean of the perihelion and aphelion distances:</p> <p><span class="math-container">$$ \tag 1 a = \frac {r_{max}+r_{min}}{2}$$</span></p> <p>while the semi-minor axis is the geometric mean of the maximum and minimum distances from the foci: <span class="math-container">$$ \tag 2 b = \sqrt {r_{max} \cdot r_{min}} $$</span></p> <p>And since the eccentricity of an ellipse is defined as:</p> <p><span class="math-container">$$ \tag 3 e = \sqrt {1-\frac{b^2}{a^2}} $$</span></p> <p>substituting (1) and (2) into (3) gives:</p> <p><span class="math-container">$$\begin{align} \tag 4 e &= \sqrt {1-\frac{4 \cdot r_{max} \cdot r_{min}}{(r_{max}+r_{min})^2}} \\ &=\sqrt {\frac{(r_{max}+r_{min})^2}{(r_{max}+r_{min})^2}-\frac{4 \cdot r_{max} \cdot r_{min}}{(r_{max}+r_{min})^2}}\\ &=\sqrt {\frac{r_{max}^2+r_{min}^2+2 \cdot r_{max} \cdot r_{min} - 4 \cdot r_{max} \cdot r_{min}}{(r_{max}+r_{min})^2}}\\ &=\sqrt {\frac{r_{max}^2+r_{min}^2 - 2 \cdot r_{max} \cdot r_{min}}{(r_{max}+r_{min})^2}}\\ &=\sqrt {\frac{(r_{max}-r_{min})^2}{(r_{max}+r_{min})^2}}\\ &=\boxed {\frac{r_{max}-r_{min}}{r_{max}+r_{min}}} \end{align} $$</span></p>
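<p>As a numerical cross-check of the boxed result against the defining relation (3) (a sketch; Earth's perihelion/aphelion values below are approximate and purely illustrative):</p>

```python
import math

# Cross-check e = (r_max - r_min)/(r_max + r_min) against
# e = sqrt(1 - b^2/a^2), using approximate Earth distances in 10^6 km.
r_min, r_max = 147.1, 152.1

a = (r_max + r_min) / 2            # semi-major axis: arithmetic mean
b = math.sqrt(r_max * r_min)       # semi-minor axis: geometric mean

e_from_axes = math.sqrt(1 - (b / a)**2)
e_boxed = (r_max - r_min) / (r_max + r_min)

assert abs(e_from_axes - e_boxed) < 1e-12
assert abs(e_boxed - 0.0167) < 1e-3    # Earth's orbital eccentricity
```
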
|
Physics
|
|newtonian-mechanics|reference-frames|collision|
|
Is the net force on centre of mass always equal to the external force applied on one block of a two block system connected by a spring?
|
<p>The force of the spring does not act on the center of mass. That is because it pulls one of your masses in one direction, but the other one in the opposite direction, so the center of mass is not affected by it.</p> <p>Looking only at the force caused by the spring (<span class="math-container">$F_S$</span>) you can see <span class="math-container">$$F_{S, m_1} = -F_{S, m_2} \Leftrightarrow F_{S, m_1} + F_{S, m_2}= 0 \Leftrightarrow A_{S, m_1} m_1 + A_{S, m_2} m_2 = \frac{A_{S, m_1} m_1 + A_{S, m_2} m_2}{m_1 + m_2} \cdot (m_1 + m_2) =A_{S, com} m_{S, com} = F_{S, com} = 0 $$</span>.</p> <p>Looking now at the external force, which acts on <span class="math-container">$m_2$</span> only, one finds for the center of mass: <span class="math-container">$$F_{com} = \frac{A_{m_1} m_1 + A_{m_2} m_2}{m_1 + m_2} \cdot (m_1 + m_2) = A_{m_1} m_1 + A_{m_2} m_2 = F_1 + F_2 = F_2 = F_{ext} $$</span></p> <p>Adding the 2 contributions together (superposition principle) you do indeed get <span class="math-container">$F_{com} = F_{ext} $</span>.</p> <p>Now consider your proposed expression for the acceleration due to the resistive force of the spring. This does look like the spring force I would expect to act on <span class="math-container">$m_2$</span>, except that you wrote double the mass there: <span class="math-container">$ F_{S, m_2}(x(t),k) = k \cdot x(t) = m A_{S, m_2}$</span>.
However, the resistive force of the spring decelerates mass 2 while acting with an equally sized accelerating force on <span class="math-container">$m_1$</span>, which is why the contributions to the center of mass cancel out.</p> <p>For a less mathematical and more intuitive understanding, try to imagine the edge cases of <span class="math-container">$k \rightarrow 0$</span> (no spring) and <span class="math-container">$k \rightarrow \infty$</span> (rigid connection between the masses); in both cases it might be seen intuitively (maybe?) that the external force is the same force acting on the center of mass. Then why would one expect anything else in the intermediate cases?</p>
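<p>The cancellation can also be seen in a minimal simulation (a sketch; all parameter values below are illustrative). At every time step the center-of-mass acceleration equals F_ext/(m_1 + m_2), no matter what the spring is doing:</p>

```python
# Two masses joined by a spring, with a constant external force on m2
# only. The spring force enters m1 and m2 with opposite signs, so the
# center-of-mass acceleration is F_ext / (m1 + m2) at every instant.
m1, m2, k, L0, F_ext = 1.0, 2.0, 50.0, 1.0, 6.0
x1, x2, v1, v2 = 0.0, 1.0, 0.0, 0.0
dt = 1e-4

for _ in range(10000):
    spring = k * (x2 - x1 - L0)       # positive if spring is stretched
    a1 = spring / m1                  # spring pulls m1 toward m2
    a2 = (F_ext - spring) / m2        # external force minus spring pull
    a_com = (m1 * a1 + m2 * a2) / (m1 + m2)
    assert abs(a_com - F_ext / (m1 + m2)) < 1e-12
    v1 += a1 * dt; v2 += a2 * dt      # semi-implicit Euler update
    x1 += v1 * dt; x2 += v2 * dt

# total momentum grows as F_ext * t, another signature of the same fact
p = m1 * v1 + m2 * v2
assert abs(p - F_ext * 10000 * dt) < 1e-9
```
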
|
Physics
|
|quantum-mechanics|operators|hamiltonian|mathematics|eigenvalue|
|
Property of the Hamiltonian's discrete spectrum
|
<p>This is the so-called Weinstein criterion.<span class="math-container">$^1$</span> To prove it, we proceed as OP and expand the (normalized) vector <span class="math-container">$\psi$</span> into the (assumed to be) complete orthonormal eigenbasis of <span class="math-container">$H$</span> and denote by <span class="math-container">$E_c$</span> the eigenvalue of <span class="math-container">$H$</span> which is closest to <span class="math-container">$E_\psi$</span>. Then</p> <p><span class="math-container">$$(\Delta_\psi H)^2= \sum\limits_n \left(E_n-E_\psi\right)^2 |c_n|^2 \geq \left(E_c-E_\psi\right)^2 \tag 1 $$</span></p> <p>and thus</p> <p><span class="math-container">$$(\Delta_\psi H)\geq |E_c-E_\psi| \tag 2\quad .$$</span></p> <p>From the definition of the absolute value, this is equivalent with the fact that</p> <p><span class="math-container">$$-(\Delta_\psi H)\leq E_c-E_\psi \leq (\Delta_\psi H) \tag 3$$</span></p> <p>or</p> <p><span class="math-container">$$ E_\psi-(\Delta_\psi H) \leq E_c \leq (\Delta_\psi H) +E_\psi\tag 4\quad ,$$</span></p> <p>concluding the proof.</p> <hr /> <p><span class="math-container">$^1$</span> <em>Weinstein, D. H. "Modified ritz method." Proceedings of the National Academy of Sciences 20.9 (1934): 529-532.</em></p>
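<p>A minimal numerical illustration of the criterion (a sketch; the Hamiltonian and trial vector below are arbitrary choices, the Hamiltonian diagonal so its exact eigenvalues are known by inspection):</p>

```python
import math

# Weinstein criterion: for any normalized trial state, at least one exact
# eigenvalue lies within Delta_psi(H) of the energy expectation E_psi.
eigs = [1.0, 2.0, 4.0]               # eigenvalues of a diagonal H
c = [1 / math.sqrt(3)] * 3           # trial state: equal superposition

E_psi = sum(E * abs(a)**2 for E, a in zip(eigs, c))
E2 = sum(E**2 * abs(a)**2 for E, a in zip(eigs, c))
Delta = math.sqrt(E2 - E_psi**2)

# at least one exact eigenvalue lies in [E_psi - Delta, E_psi + Delta]
assert any(E_psi - Delta <= E <= E_psi + Delta for E in eigs)
```

Here E_ψ = 7/3 and Δ = √14/3 ≈ 1.25, so the window contains the eigenvalue 2 but neither 1 nor 4, showing the bound locates the nearest eigenvalue only.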
|
Physics
|
|quantum-mechanics|hilbert-space|metric-tensor|schroedinger-equation|
|
Global phase of wave function in quantum mechanics and Fubini-Study metric
|
<p>Let's first get some terminology straight:</p> <p>Yes, by the postulates of quantum mechanics, the vectors <span class="math-container">$\psi$</span> and <span class="math-container">$\mathrm{e}^{\mathrm{i}\lambda}\psi$</span> for any <span class="math-container">$\lambda\in\mathbb{R}$</span> in Hilbert space <span class="math-container">$H$</span> represent the same state. This leads us to consider the <a href="https://en.wikipedia.org/wiki/Complex_projective_space" rel="nofollow noreferrer">projective Hilbert space</a> <span class="math-container">$P(H) := H/\sim$</span> where <span class="math-container">$$ \psi \sim \psi ' \iff \exists c\in\mathbb{C}\setminus\{0\}: \psi' = c\psi.$$</span> This is not "the projective representation", it's just a projective space. If we were now to consider how groups (the physical groups of symmetries or transformations of this space of states) act on this, we would be led to the idea of projective representations (see also <a href="https://physics.stackexchange.com/q/203944/50583">this Q&A of mine</a>), but there are no representations in the question as written.</p> <p>There is, however, the <a href="https://en.wikipedia.org/wiki/Fubini%E2%80%93Study_metric" rel="nofollow noreferrer">Fubini-Study metric</a> <span class="math-container">$d_\text{FS}$</span>. This is a metric on <span class="math-container">$P(H)$</span> in the proper sense, i.e. <span class="math-container">$$d_\text{FS}(\psi,\psi') = 0 \iff \psi = \psi'.$$</span> Now, the question seems concerned about the "trajectory" <span class="math-container">$\psi(t) = \mathrm{e}^{\mathrm{i}f(t)}\psi(0)$</span> being the same state at all times even when <span class="math-container">$f(t) \neq -Et$</span>, i.e.
<span class="math-container">$\psi(t)$</span> is not a solution to the Schrödinger equation.</p> <p>But - regardless of whether we phrase it in terms of projective spaces and the FS metric or not - this is just what you should expect from the postulate: <span class="math-container">$\psi(0)$</span> and <span class="math-container">$\psi(t)$</span> are, by definition, the same state when they are related by multiplication by a complex number, i.e. when they lie in the same "ray" in Hilbert space, and <span class="math-container">$\mathrm{e}^{\mathrm{i}f(t)}$</span> acts as multiplication by a complex number at all times.</p> <p>So this "trajectory" is, in fact, the constant trajectory that remains the same physical state at all times, regardless of whether it is a solution to the equations of motion or not.</p>
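<p>This can be made concrete with the standard finite-dimensional expression for the Fubini-Study distance between unit vectors, d(ψ, ψ') = arccos |⟨ψ|ψ'⟩| (a sketch; the state and the phase function f(t) below are arbitrary illustrative choices):</p>

```python
import math

# The Fubini-Study distance between normalized states vanishes whenever
# the two vectors differ only by a phase, no matter how the phase
# depends on t.
def inner(u, v):
    return sum(a.conjugate() * b for a, b in zip(u, v))

def d_fs(u, v):
    overlap = min(abs(inner(u, v)), 1.0)    # clip rounding noise
    return math.acos(overlap)

n = math.sqrt(2)
psi = [1/n, 1j/n]                           # some normalized state

for t in (0.0, 0.5, 1.0, 2.0):
    f = t**2 + 0.3 * t                      # arbitrary phase function f(t)
    phase = complex(math.cos(f), math.sin(f))
    psi_t = [phase * a for a in psi]
    assert d_fs(psi, psi_t) < 1e-7          # same point of P(H) at all t
```
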
|
Physics
|
|thermodynamics|energy-conservation|differentiation|calculus|
|
First law of thermodynamics: Can we always speak in terms of infinitesimal changes?
|
<blockquote> <p>e.g. to speak of pressure-volume work, the process must be <em>quasistatic</em> as then we can equate thermodynamic pressure and mechanical pressure (example with a sliding piston) <span class="math-container">$\delta W=P\text{d}V$</span></p> </blockquote> <p>A process need not be quasi-static to determine work. <span class="math-container">$PdV$</span> work for any process is determined using the external pressure. If the process happens to be quasistatic, then the <span class="math-container">$P$</span> is both the system and external pressure in equilibrium with one another.</p> <blockquote> <p>So, the <strong>first law of thermodynamics</strong> in the most general case is formulated as: <span class="math-container">$$ \Delta U=\Delta Q-\Delta W$$</span></p> </blockquote> <p>Drop the deltas from heat and work. It makes no sense to talk about a "change" in heat or work. There are amounts of energy transferred in the form of heat and work. Your next differential equation of the first law correctly shows heat and work as inexact differentials, whereas internal energy (a property) is an exact differential.</p> <blockquote> <p>My question is, can we always speak of the total differential <span class="math-container">$\text{d}U$</span> even if the other states variables are not defined?</p> </blockquote> <p>Yes, because internal energy is a system property independent of the process. A change in internal energy between two equilibrium states is the same regardless of the process. For example, it doesn't matter if the process is quasistatic or not.</p> <blockquote> <p>To further underscore what I mean; the isovolumetric heat capacity is defined as: <span class="math-container">$$C_V=\lim\limits_{\Delta T\rightarrow0}\left(\frac{\Delta Q}{\Delta T}\right)_V$$</span></p> </blockquote> <p>That is not how the heat capacity at constant volume is defined. 
It is defined in terms of internal energy and temperature according to:</p> <p><span class="math-container">$$c_{v}=\biggl (\frac{\partial u}{\partial T}\biggr )_V$$</span></p> <blockquote> <p>Now, what must be this "constraint" that allows us to equate this limit with the partial derivative of the internal energy?</p> </blockquote> <p>The only constraints are constant volume and that the system consists of a single-phase pure component.</p> <p>Hope this helps.</p>
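<p>To illustrate the definition, here is a finite-difference check of c_v = (∂u/∂T)_V for a model internal energy (the Einstein-solid u(T) below is only an illustrative choice, not something taken from the discussion above):</p>

```python
import math

# For an illustrative u(T) - the Einstein model of a solid - a central
# finite difference (Delta u / Delta T)_V converges to the analytic
# derivative c_v = (du/dT)_V.
R, theta = 8.314, 200.0              # gas constant, Einstein temperature

def u(T):
    return 3 * R * theta / (math.exp(theta / T) - 1)

def cv_exact(T):
    x = theta / T
    return 3 * R * x**2 * math.exp(x) / (math.exp(x) - 1)**2

T, h = 300.0, 0.01
cv_fd = (u(T + h) - u(T - h)) / (2 * h)

assert abs(cv_fd - cv_exact(T)) / cv_exact(T) < 1e-6
```
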
|
Physics
|
|quantum-mechanics|harmonic-oscillator|quantum-entanglement|coupled-oscillators|
|
Does it make sense to talk about individual energies of interacting quantum particles?
|
<p>If we call the uncoupled Hamiltonian <span class="math-container">\begin{align*} \hat{H}_0= & \frac{\hat{P}_{1}^{2}}{2m}+\frac{\hat{P}_{2}^{2}}{2m}+\frac{m\omega^{2}\hat{X}_{1}^{2}}{2}+\frac{m\omega^{2}\hat{X}_{2}^{2}}{2}\,, \end{align*}</span> then it's not too difficult to show that <span class="math-container">$[k\hat{X_{1}}\hat{X_{2}}, \hat{H}_0]\neq0$</span> and therefore <span class="math-container">$[\hat{H}_0,\hat{H}]\neq0$</span>. This means that the two Hamiltonians do not share an eigenbasis, and hence there is <em>at least one</em> eigenvector of <span class="math-container">$\hat{H}$</span> (call it <span class="math-container">$\lvert E\rangle$</span>) that is not an eigenvector of <span class="math-container">$\hat{H}_0$</span>. This means that this particular state of definite energy <span class="math-container">$E$</span> is <em>not</em> a state of definite energy of the uncoupled system, which means that the two particles do not individually have definite energies.</p> <hr /> <p>To see more details, let's consider the following. The ground state for the full Hamiltonian (in units where <span class="math-container">$m=\omega=1$</span>) <span class="math-container">\begin{align*} \hat{H}= & \frac{\hat{P}_{1}^{2}}{2}+\frac{\hat{P}_{2}^{2}}{2}+\frac{\hat{X}_{1}^{2}}{2}+\frac{\hat{X}_{2}^{2}}{2} + k\hat{X}_1\hat{X}_2\,, \end{align*}</span> is given by <span class="math-container">$$ \psi^{(k)}_0(x_1,x_2)=\frac{\sqrt[8]{1-k^2}}{\sqrt{\pi }} \exp \left( -\frac{\sqrt{1-k}}{4} \left(x_1-x_2\right)^2-\frac{\sqrt{k+1}}{4} \left(x_1+x_2\right)^2 \right)\,. $$</span> One can <em>immediately</em> see that this state is necessarily entangled between the <span class="math-container">$x_1$</span> and <span class="math-container">$x_2$</span> degrees of freedom <em>unless</em> <span class="math-container">$k=0$</span>, where this expression reduces to <span class="math-container">$$ \frac{1}{\sqrt{\pi}}e^{-x_1^2/2}e^{-x_2^2/2}\,.
$$</span> That is, it cannot be written as a product.</p> <p>Then, to measure the "energy of one of the particles" requires one to have an operator in mind that corresponds to the energy of that particle. The Hamiltonian cannot be written as the sum of two terms, one involving <span class="math-container">$X_1$</span> and the other involving <span class="math-container">$X_2$</span>, and so this doesn't really work. However, we <em>could</em> salvage this by saying that the energy of particle <span class="math-container">$j$</span> is just <span class="math-container">$$ \hat{H}_j = \frac{1}{2}\hat{P}_j^2 + \frac{1}{2}\hat{X}_j^2\,, $$</span> and ask the question of what would be the result of the measurement of <span class="math-container">$\hat{H}_j$</span> if the two-particle system was in the ground state of <span class="math-container">$\hat{H}$</span>. It's not too hard to show that <span class="math-container">$\psi^{(k)}_0(x_1,x_2)$</span> above is necessarily a linear combination of products <span class="math-container">$\psi_{n_1}(x_1)\psi_{n_2}(x_2)$</span> of the eigenstates of the decoupled system (<span class="math-container">$\hat{H}_0$</span>). Thus, there is necessarily a <em>spread</em> in the measurements of <span class="math-container">$\hat{H}_1$</span> and <span class="math-container">$\hat{H}_2$</span> when the state is the ground state <span class="math-container">$\psi^{(k)}_0(x_1,x_2)$</span> of the coupled system.</p>
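<p>A finite-difference check (a sketch in plain Python; the value of k and the sample points are arbitrary) confirms that the Gaussian above is an exact eigenstate of the coupled Hamiltonian, with the normal-mode ground energy E_0 = (√(1+k) + √(1−k))/2 in units where m = ω = ħ = 1:</p>

```python
import math

# Verify numerically that psi0 is an eigenstate of
# H = p1^2/2 + p2^2/2 + x1^2/2 + x2^2/2 + k*x1*x2
# by computing the local energy (H psi)/psi at a few sample points.
k = 0.5

def psi0(x1, x2):
    pre = (1 - k**2)**0.125 / math.sqrt(math.pi)
    return pre * math.exp(-math.sqrt(1 - k)/4 * (x1 - x2)**2
                          - math.sqrt(1 + k)/4 * (x1 + x2)**2)

def local_energy(x1, x2, h=1e-3):
    p = psi0(x1, x2)
    lap = (psi0(x1 + h, x2) + psi0(x1 - h, x2)
           + psi0(x1, x2 + h) + psi0(x1, x2 - h) - 4*p) / h**2
    return (-0.5 * lap + (0.5*x1**2 + 0.5*x2**2 + k*x1*x2) * p) / p

E0 = (math.sqrt(1 + k) + math.sqrt(1 - k)) / 2   # normal-mode result

for (x1, x2) in [(0.0, 0.0), (0.3, -0.2), (-0.5, 0.7)]:
    assert abs(local_energy(x1, x2) - E0) < 1e-4
```

The constancy of the local energy across points is exactly the eigenstate property; for k = 0 it reduces to E_0 = 1, the uncoupled ground energy.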
|
Physics
|
|orbital-motion|astronomy|moon|exoplanets|eclipse|
|
Is it possible, by monitoring the brightness of stars, to find a “copy of the Earth + Moon” near them?
|
<p>Hunting for <a href="https://en.wikipedia.org/wiki/Exomoon" rel="noreferrer">exomoons</a> is an active area of research. There is a list of exomoon candidates <a href="https://en.wikipedia.org/wiki/List_of_exomoon_candidates" rel="noreferrer">here</a>, although none have been positively confirmed so far. Only one candidate exoplanet on that list, <a href="https://en.wikipedia.org/wiki/Kepler-409b" rel="noreferrer">Kepler-409b</a>, is close to the Earth in size, and its exomoon candidate is now deemed to be unlikely.</p> <p>So it looks like the answer is that an Earth/Moon size exoplanet and exomoon could be detected in principle, but it would be very difficult and/or improbable with current technology.</p>
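To see roughly why it is so hard, here is a back-of-the-envelope transit-depth estimate (a sketch with rounded radii; the numbers are illustrative only):

```python
# Fractional dimming during a transit is (R_body / R_star)^2.
# Round-number radii in km:
R_STAR = 696_000   # Sun-like star
R_EARTH = 6_371
R_MOON = 1_737

depth_earth = (R_EARTH / R_STAR) ** 2   # ~8e-5: detectable by Kepler-class photometry
depth_moon = (R_MOON / R_STAR) ** 2     # ~6e-6: buried in typical photometric noise
print(depth_earth, depth_moon)
```

A Moon-sized body blocks only a few parts per million of the starlight, more than an order of magnitude below the Earth-analog signal, which is itself near the limit of current photometry.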
|
Physics
|
|projectile|rocket-science|space-travel|nasa|
|
Why are NASA etc so good at hitting tiny targets in tiny time windows?
|
<p>So the launch team hands the S/C off to deep-space navigation. For Mars orbit insertion or landing, there are 3 trajectory correction maneuvers (TCMs) at the beginning, with 3 correcting 2 correcting 1.</p> <p>Then on the long trip, delta-DOR navigation (VLBI timing against quasar reference sources) builds very high precision (since the tracked object has a transmitter on it). During approach, there are 3 (or was it 4?) TCMs, maybe starting a week out? And then the last opportunity to change the S/C is four hours before entry.</p> <p>Nav tells EDL: The S/C will go through "this" plane (the B plane... 1km x 1km) within a one-second window. At that point, control (of which there is none outside the light-cone) is handed to EDL--the entry, descent, and landing team--or the orbit-insertion team, whichever is appropriate.</p> <p>Regarding defective parts: getting a part to space is very hard. To get a hammer into space, you need to know which tree the wood came from, and you need to track all other hammers made from that tree, or even the whole forest. One failure, and the handle is no longer space qualified. Same for the iron: you know the mine, you know how iron ore from that mine performed elsewhere.</p> <p>In attaching the head to the handle, the entire procedure is written down and reviewed before the book goes to the clean room. Procedures may be rehearsed. The tech who does the work has two QA engineers watching the attachment process and calling out any mistakes.</p> <p>The finished hammer goes into bonded storage, where someone always has custody or control of it--all documented up the yin-yang. Humidity and temperature are recorded for the life of the hammer.</p> <p>The hammer may be vibe, shock, hot/cold, and vacuum tested. In the space simulator (see JPL's--a national historic landmark), the sun's heat load is simulated.</p> <p>Similar hammers are vibe, shock, and hot/cold tested to failure to compute the margin.</p> <p>If you have 1000 hammers, a huge batch is used and they are "burned in"--see the Weibull distribution and failure analysis--failures are most likely early or late, so the minimum failure rate is calculated to fall during the mission.</p> <p>The hammer also undergoes EMC and EMI compatibility testing.</p> <p>Also: you don't fly Apple's newest hammer--you get one that is 10 years old because it has been space qualified (class S).</p> <p>Did I mention radiation hardness requirements, and SEU sensitivity?</p> <p>Above all that are planetary protection requirements.</p> <p>Regarding radiation forces and whatnot, they are included, but over 40 years things can go south. See "Pioneer Anomaly".</p> <p>For gravity... we have a lot of information about planets we've orbited many times (see "Radio Science")--the harder part is dealing with non-Newtonian gravity and the warped coordinates of our solar system. See "Parameterized Post-Newtonian" and "JPL Barycentric Coordinates".</p>
|
Physics
|
|electromagnetism|lagrangian-formalism|energy-conservation|stress-energy-momentum-tensor|poynting-vector|
|
Showing that Poynting’s theorem is preserved with the Proca Lagrangian
|
<p>First let us calculate <span class="math-container">$\partial_\nu\theta^{\mu\nu}$</span>: <span class="math-container">$$\begin{aligned} \partial_\nu\theta^{\mu\nu}& =\partial_\nu\theta^{\mu\nu}_0 + \frac{1}{\mu_0}(\frac{m_\gamma c}{\hbar})^2\partial_\nu(A^\mu A^\nu - \frac{1}{2}g^{\mu \nu}A_\alpha A^\alpha) &{(1)} \end{aligned}$$</span></p> <p>Putting <span class="math-container">$\theta^{\mu\nu}_0$</span> into equation (1) we have</p> <p><span class="math-container">$$\begin{aligned} \partial_\nu\theta^{\mu\nu} &=-\frac{1}{\mu_0}\partial_\nu(g_{\alpha\lambda}F^{\mu\alpha}F^{\lambda\nu}+ \frac{1}{4}g^{\mu\nu}F_{\alpha\beta}F^{\alpha\beta}) + \frac{1}{\mu_0}(\frac{m_\gamma c}{\hbar})^2 \partial_\nu(A^\mu A^\nu - \frac{1}{2}g^{\mu \nu}A_\alpha A^\alpha) &{(2)} \\&= -\frac{1}{\mu_0}g_{\alpha\lambda} (\partial_\nu F^{\mu\alpha})F^{\lambda\nu} -\frac{1}{\mu_0}g_{\alpha\lambda}F^{\mu\alpha} (\partial_\nu F^{\lambda\nu}) - \frac{1}{4 \mu_0}\partial^\mu(F_{\alpha\beta}F^{\alpha\beta}) \\& + \frac{1}{\mu_0}(\frac{m_\gamma c}{\hbar})^2 (\partial_\nu A^\mu) A^\nu + \frac{1}{\mu_0}(\frac{m_\gamma c}{\hbar})^2 A^\mu (\partial_\nu A^\nu) - \frac{1}{2\mu_0}(\frac{m_\gamma c}{\hbar})^2 g^{\mu \nu} \partial_\nu (A_\alpha A^\alpha) &{(3)} \end{aligned}$$</span></p> <p>For the terms of equation (3) it can be shown that:</p> <ul> <li><p>The first term equals <span class="math-container">$-\frac{1}{\mu_0}g_{\alpha\lambda}(g_{\beta\nu}\partial^\beta F^{\mu\alpha})F^{\lambda\nu}=-\frac{1}{\mu_0} F_{\alpha\beta}\partial^{\beta}F^{\mu\alpha}$</span></p> </li> <li><p>The third term equals <span class="math-container">$-\frac{1}{2\mu_0}F_{\alpha\beta}\partial^\mu F^{\alpha\beta}$</span></p> </li> <li><p>The sum of the first and third terms equals zero, by the Bianchi identity <span class="math-container">$\partial^\mu F^{\alpha\beta}+\partial^\alpha F^{\beta\mu}+\partial^\beta F^{\mu\alpha}=0$</span> together with the antisymmetry of <span class="math-container">$F_{\alpha\beta}$</span>, i.e.</p> </li> </ul> <p><span class="math-container">$$\begin{aligned} -\frac{1}{2\mu_0} F_{\alpha\beta}\{2 \partial^{\beta}F^{\mu\alpha} +\partial^\mu F^{\alpha\beta} \} &= 0 \ \ &{(4)} \end{aligned}$$</span></p> <ul> <li>The second term equals <span class="math-container">$- F^{\mu\alpha} J_\alpha + \frac{1}{\mu_0}(\frac{m_\gamma c}{\hbar})^2 F^{\mu\alpha} A_\alpha$</span> because of the Proca field equation, i.e. <span class="math-container">$$\begin{aligned} \partial_\nu F^{\lambda\nu} &= \mu_0 J^\lambda- (\frac{m_\gamma c}{\hbar})^2 A^\lambda \ \ &{(5)} \end{aligned}$$</span></li> <li>If we choose the Lorenz condition, i.e. <span class="math-container">$\partial_\nu A^\nu=0$</span>, the fifth term equals zero;</li> <li>The sum of the fourth and sixth terms equals <span class="math-container">$$\begin{aligned} -\frac{1}{\mu_0}(\frac{m_\gamma c}{\hbar})^2 (-\partial_\nu A^\mu + \partial^\mu A_\nu)A^\nu = -\frac{1}{\mu_0}(\frac{m_\gamma c}{\hbar})^2 F^{\mu\nu} A_\nu \ &{ \ } \end{aligned}$$</span></li> </ul> <p>Thus equation (3) becomes <span class="math-container">$$\begin{aligned} \partial_\nu \theta^{\mu \nu} = - \frac{1}{\mu_0} F^{\mu\alpha}(\mu_0 J_\alpha) &= - F^{\mu\alpha}J_\alpha \equiv -f^{\mu} \ &{ \ (6)} \end{aligned}$$</span></p> <p>Now it can easily be shown that <span class="math-container">$$\begin{aligned} \theta^{00} &=\frac{\epsilon_0}{2}(|E|^2 +(\frac{m_\gamma c}{\hbar})^2 \phi^2) + \frac{1}{2 \mu_0}(|B|^2 +(\frac{m_\gamma c}{\hbar})^2 A^2) \equiv W &{(7)} \end{aligned}$$</span></p> <p><span class="math-container">$$\begin{aligned} \theta^{0i} &=\frac{1}{\mu_0}(\boldsymbol{E}×\boldsymbol{B} +(\frac{m_\gamma c}{\hbar})^2 \phi \boldsymbol{A}) \equiv \boldsymbol{S} &{(8)} \end{aligned}$$</span></p> <p><span class="math-container">$$\begin{aligned} f^\mu &= J_\alpha F^{\mu\alpha} = (\frac{(\boldsymbol{J} \cdot \boldsymbol{E})}{c} , \rho \boldsymbol{E} + \boldsymbol{J} × \boldsymbol{B}) = (f^0, f^i) &{(9)} \end{aligned}$$</span></p> <p>Where <span class="math-container">$W$</span> is the energy density and <span class="math-container">$\boldsymbol{S}$</span> is the Poynting vector in the Proca theory.</p> <p>Finally let us calculate <span class="math-container">$\partial_\nu \theta^{\mu \nu}$</span> for <span class="math-container">$\mu=0$</span>:</p> <p><span class="math-container">$$\begin{aligned} \partial_\nu \theta^{0\nu} &= \partial_0 \theta^{00} + \partial_i \theta^{0i} = \frac{1}{c} (\frac{\partial W}{\partial t} + \boldsymbol{\nabla} \cdot \boldsymbol{S}) \\& = -f^0 = - \frac{\boldsymbol{J} \cdot \boldsymbol{E}}{c} \ &{(10)} \end{aligned}$$</span></p> <p>Therefore we have shown that Poynting's theorem is preserved in the Proca theory, i.e. <span class="math-container">$$\begin{aligned} \frac{\partial W}{\partial t} + \boldsymbol{\nabla} \cdot \boldsymbol{S} &= - \boldsymbol{J} \cdot \boldsymbol{E} \ &{(11)} \end{aligned}$$</span></p>
|
Physics
|
|visible-light|
|
How do I calculate lumens of a specific RGB light source from watts per square meter per steradian?
|
<p>The conversion factor you need is the <a href="https://en.wikipedia.org/wiki/Luminous_efficacy" rel="nofollow noreferrer">luminous efficacy</a>. You need to know the full spectrum of the light to compute it. Tristimulus values (such as Rec.709 RGB) aren't enough, unless you make additional assumptions. For example, if you happen to know that the light has a blackbody spectrum, then you can work out the temperature from the RGB color and the luminous efficacy from the temperature.</p> <p>If you assume that the illuminant is a mixture of 610nm, 555nm and 465nm monochromatic lights then you can calculate the luminous efficacy as well, but that assumption is extremely unlikely to be true. The Rec.709 primaries aren't monochromatic lights of those or any other wavelengths, and even if they were, RGB values elsewhere in the cube wouldn't necessarily come from spectra that are linear mixture of those lights.</p> <p>All of this is irrelevant, though, since Blender shouldn't be using watts to begin with, it should be using lumens. If the rigid-body simulation demands masses in pounds, you shouldn't use the local gravity where the scene is set to do the conversion. You should use whatever fixed conversion factor they're using internally.</p> <p>Unfortunately for my theory, <a href="https://docs.blender.org/manual/en/latest/render/lights/light_object.html" rel="nofollow noreferrer">this page on blender.org</a> has a table of suggested lumen-watt conversions in which the ratio is nowhere close to any fixed value. Either they don't know what they're talking about or I don't.</p> <p>My advice, for what it's worth, is to use a simple fixed factor like 300 lm/W. As long as you're consistent, the value shouldn't matter since it's equivalent to a change in the overall exposure.</p>
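For the monochromatic-mixture assumption above, the arithmetic itself is simple; here is a sketch (the $V(\lambda)$ samples are rough approximations to the CIE photopic curve, not official values, and the 683 lm/W figure applies at the 555 nm peak by definition):

```python
# Lumens from a spectrum assumed to be a mixture of monochromatic lines
LM_PER_W = 683.002  # lm/W at 555 nm, by definition of the candela

# (wavelength nm -> approximate photopic sensitivity V(lambda), illustrative values)
V = {465: 0.075, 555: 1.000, 610: 0.503}

def lumens(spectrum_w):
    """spectrum_w: dict mapping wavelength (nm) -> radiant power (W)."""
    return LM_PER_W * sum(p * V[wl] for wl, p in spectrum_w.items())

print(lumens({555: 1.0}))            # 683 lm, by definition
print(lumens({465: 1.0, 610: 1.0}))  # far fewer lumens per watt off-peak
```

The point of the example is how strongly the lm/W ratio depends on where the power sits in the spectrum, which is why no single watt-to-lumen factor is physically "correct".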
|
Physics
|
|condensed-matter|spin-chains|
|
Relative Sign In XXZ Chain
|
<p>The two expressions can be shown to be equivalent using a canonical transformation that implements a spin rotation by <span class="math-container">$\pi$</span> about the spin <span class="math-container">$z$</span> axis on every other site. Explicitly, this transformation may be chosen to take <span class="math-container">$$ S_{2n}^x \rightarrow S_{2n}^x, \quad S_{2n+1}^x\rightarrow -S_{2n+1}^x,\\ S_{2n}^y \rightarrow S_{2n}^y, \quad S_{2n+1}^y\rightarrow -S_{2n+1}^y,\\ S_{n}^z \rightarrow S_{n}^z $$</span> This flips the sign of the <span class="math-container">$S_n^x S_{n+1}^x$</span> and <span class="math-container">$S_n^y S_{n+1}^y$</span> terms.</p>
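On a single site this is just conjugation by the Pauli $z$ matrix (the $\pi$ rotation $e^{-i\pi\sigma_z/2}=-i\sigma_z$ acts the same way up to phase); a minimal numpy check, as a sketch:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# Conjugation by sigma_z implements the pi rotation about z (overall phase drops out)
U = sz
assert np.allclose(U @ sx @ U.conj().T, -sx)  # S^x -> -S^x
assert np.allclose(U @ sy @ U.conj().T, -sy)  # S^y -> -S^y
assert np.allclose(U @ sz @ U.conj().T, sz)   # S^z -> S^z
```

Applying this on every odd site flips the sign of the transverse exchange terms while leaving the $S^z_n S^z_{n+1}$ term untouched, as claimed.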
|
Physics
|
|electromagnetism|
|
Why aren't big electromagnets made of iron?
|
<p>Iron is not a very good conductor of electricity, so such a design would increase the power wasted in generating the magnetic field. Not to mention that in solenoid-type designs, the magnetic flux does not pass through the wires but through the coil's center, which is where it makes more sense to put the iron.</p>
|
Physics
|
|special-relativity|observers|faster-than-light|tachyon|
|
One-way Tachyonic anti-telephone
|
<p>The problem is that it leads to events happening before their causes. The example that is often quoted considers two people who communicate with information that travels faster than light, with the result that the second person's reply arrives before the first person's question was sent.</p>
|
Physics
|
|general-relativity|metric-tensor|coordinate-systems|tensor-calculus|differentiation|
|
Tensor equation
|
<p>A valid tensor equation is simply an equation relating tensors. The partial derivatives of tensor components do not form the components of a tensor simply because they do not obey the tensor transformation rules. If an object is a tensor, its components must obey the tensor transformation rules. However, <span class="math-container">$$\frac{\partial v'^\rho}{\partial x'^\sigma} = \frac{\partial}{\partial x'^\sigma}\left(\frac{\partial x'^\rho}{\partial x^\mu}v^\mu\right) \\ = \frac{\partial v^\mu}{\partial x'^\sigma}\frac{\partial x'^\rho}{\partial x^\mu} + v^\mu \frac{\partial}{\partial x'^\sigma}\frac{\partial x'^\rho}{\partial x^\mu} \\ = \frac{\partial x^\gamma}{\partial x'^\sigma}\frac{\partial x'^\rho}{\partial x^\mu}\frac{\partial v^\mu}{\partial x^\gamma} + v^\mu \frac{\partial x^\gamma}{\partial x'^\sigma}\frac{\partial^2 x'^\rho}{\partial x^\gamma\partial x^\mu}.$$</span> Due to the presence of the second term, these cannot be the components of a tensor.</p>
|
Physics
|
|special-relativity|
|
Simple question on orthochronous proper Lorentz transformation
|
<p>The full Lorentz group O(3,1) contains two basic discrete reflections, each with determinant <span class="math-container">$-1$</span>: <span class="math-container">$T$</span>, the diagonal matrix of the time reflection <span class="math-container">$t\to- t$</span>, and <span class="math-container">$R$</span>, the diagonal matrix of the spatial reflection <span class="math-container">$x_3 \to -x_3$</span>.</p> <p>All other spatial reflections can be accomplished as products of rotations and the single spatial reflection <span class="math-container">$R$</span> acting on a single spatial dimension.</p> <p>It follows that O(3,1) consists of four distinct copies of the proper orthochronous Lorentz group <span class="math-container">$L_{t+,\det=1}$</span>, the component with positive time direction and standard orientation of the spatial sub-matrix.</p> <p><span class="math-container">$L_{t+,\det=1}$</span> is the subgroup continuously connected to the unit matrix. The four distinct constituents are <span class="math-container">$$O(3,1) = ( \text{Id} \vee T \vee R \vee R\cdot T ) \cdot L_{t+,\det=1} $$</span></p> <p>To come to the question: the orthochronous, <span class="math-container">$\det=1$</span>, subgroup is generated by the 1-d <span class="math-container">$(t,x)$</span> Lorentz boosts and the subgroup of proper rotations <span class="math-container">$SO(3)$</span>.</p> <p>A rotation may be parametrized by its plane of rotation and an angle of rotation. The classical parametrization uses the axis in <span class="math-container">$\mathbb R^3 $</span> and the angle as radius.</p> <p>Now, since reversing the direction of the axis identifies a rotation by <span class="math-container">$\pi$</span> with the rotation by <span class="math-container">$-\pi$</span> about the opposite axis, the parameter space is a ball of radius <span class="math-container">$\pi$</span>: the direction gives the axis, the radius gives the angle of rotation, and antipodal points on the boundary sphere are identified.</p> <p>With this model of the group manifold, it is evident that a radial path from <span class="math-container">$r=0$</span> to <span class="math-container">$r=\pi$</span> re-enters the ball at the antipodal point on the same axis and closes when it returns to the origin.</p> <p>Such closed paths passing over the two identified antipodal points cannot be deformed continuously into a point. By rotational invariance, one has exactly one homotopy class of non-contractible closed single loops.</p> <p>SU(2,C), as a more elementary representation of the rotation group, extends the parameter ball up to radius <span class="math-container">$4\pi$</span>, with the outer 2-spheres beyond radius <span class="math-container">$2 \pi$</span> shrinking again to a point at <span class="math-container">$4\pi$</span>, which is identified with the identity.</p> <p>SU(2,C) is simply connected, with the consequence that it cannot represent coordinate reflections, because <span class="math-container">$-1$</span> is already the value taken on the <span class="math-container">$2\pi$</span>-sphere.</p>
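The double-cover statement can be checked numerically with matrix exponentials; a sketch (assumes scipy is available; the generator conventions below are mine):

```python
import numpy as np
from scipy.linalg import expm

# Generators of rotations about the z axis
Jz = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 0]], dtype=float)   # so(3)
sz = np.array([[1, 0], [0, -1]], dtype=complex)                  # Pauli z

def so3(theta):  # vector (spin-1) rotation matrix
    return expm(theta * Jz)

def su2(theta):  # spin-1/2 rotation
    return expm(-0.5j * theta * sz)

# A 2*pi rotation closes in SO(3), but gives -1 in SU(2);
# only at 4*pi does the SU(2) path close, reflecting the double cover.
assert np.allclose(so3(2 * np.pi), np.eye(3))
assert np.allclose(su2(2 * np.pi), -np.eye(2))
assert np.allclose(su2(4 * np.pi), np.eye(2))
```

The <span class="math-container">$-1$</span> reached at <span class="math-container">$2\pi$</span> is exactly the value on the <span class="math-container">$2\pi$</span>-sphere mentioned above.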
|
Physics
|
|general-relativity|differential-geometry|vectors|curvature|geodesics|
|
Orthogonal self-intersection of geodesics
|
<p>No, I don't see how the implication follows. It is very possible for geodesics to intersect themselves orthogonally, even in 2-dimensional manifolds. One example that is easy to visualize is a <a href="https://en.wikipedia.org/wiki/Geodesics_on_an_ellipsoid" rel="nofollow noreferrer">geodesic on an ellipsoid</a>: <img src="https://upload.wikimedia.org/wikipedia/commons/thumb/8/87/Really_long_geodesic_on_an_oblate_ellipsoid.svg/1920px-Really_long_geodesic_on_an_oblate_ellipsoid.svg.png" alt="1" /></p> <p>In fact, this is the trajectory of a satellite in low Earth orbit due to the oblateness of the Earth. However, if you consider 4-dimensional spacetime, your statement is true for massive particles as they follow timelike trajectories and two timelike vectors can never be orthogonal to each other. But again, this does not follow from your first statement that "geodesics parallel transport their velocity vectors".</p>
|
Physics
|
|special-relativity|inertial-frames|faster-than-light|tachyon|
|
Two-way tachyonic anti-telephone
|
<blockquote> <p>Now, the argument is that one can choose <span class="math-container">$a$</span> such that <span class="math-container">$T < 0$</span> and, hence, <span class="math-container">$A$</span> will receive a message from <span class="math-container">$B$</span> before he even sends one to <span class="math-container">$B$</span>. However, if you choose <span class="math-container">$a$</span> such that <span class="math-container">$T < 0$</span> we see that it will <span class="math-container">$a' < 0$</span> which means that the reply from <span class="math-container">$B$</span> is actually going away from <span class="math-container">$A$</span>, so <span class="math-container">$A$</span> will never receive a message from <span class="math-container">$B$</span>. So I do not see where the paradox actually is?</p> </blockquote> <p>This doesn't work. The easiest way to see this is to simply use the Lorentz transform on some of the intermediate events on the worldline of the signal. You can parameterize these intermediate events with some frame independent affine parameter, <span class="math-container">$\lambda$</span>.</p> <p>If <span class="math-container">$\lambda$</span> increases along the signal's worldline from <span class="math-container">$B$</span> to <span class="math-container">$A$</span>, that fact will hold in all frames. As <span class="math-container">$\lambda$</span> increases, the signal gets closer to <span class="math-container">$A$</span>. 
In <span class="math-container">$B$</span>'s frame <span class="math-container">$\lambda$</span> will increase with time but in <span class="math-container">$A$</span>'s frame <span class="math-container">$\lambda$</span> will decrease with time, but in both frames the signal will get closer to <span class="math-container">$A$</span> with increasing <span class="math-container">$\lambda$</span>.</p> <p>For example, consider <span class="math-container">$B$</span> returning a tachyon signal to <span class="math-container">$A$</span>. For concreteness let <span class="math-container">$A$</span> and <span class="math-container">$B$</span> pass each other at <span class="math-container">$t=t'=0$</span> with <span class="math-container">$A$</span> moving at <span class="math-container">$v=0.6$</span> in <span class="math-container">$B$</span>'s frame (the unprimed frame) in units where <span class="math-container">$c=1$</span>. In <span class="math-container">$B$</span>'s frame the worldline for <span class="math-container">$A$</span> is <span class="math-container">$(t_A(t),x_A(t))=(t,0.6\ t)$</span>. At time <span class="math-container">$t=5$</span> in <span class="math-container">$B$</span>'s frame, <span class="math-container">$B$</span> will send a tachyon signal which travels at <span class="math-container">$u=6$</span>. In terms of an affine parameter <span class="math-container">$\lambda$</span> the worldline for the signal is <span class="math-container">$(t_s(\lambda),x_s(\lambda))=(\lambda+5,6\lambda)$</span>. Solving for <span class="math-container">$(t_A,x_A)=(t_s,x_s)$</span> we get <span class="math-container">$\lambda = 5/9$</span> which means that in all frames the affine parameter runs from <span class="math-container">$0$</span> to <span class="math-container">$5/9$</span>. 
Now, Lorentz transforming to <span class="math-container">$A$</span>'s frame (the primed frame), we get the worldline for the tachyon signal is <span class="math-container">$(t'_s(\lambda),x'_s(\lambda))=(-3.25 \lambda+6.25,6.75 \lambda-3.75)$</span>. We can simply evaluate this worldline at say <span class="math-container">$\lambda = 2/9$</span> and <span class="math-container">$\lambda = 4/9$</span> to get <span class="math-container">$(t'_s(2/9),x'_s(2/9))=(5.52,-2.25)$</span> and <span class="math-container">$(t'_s(4/9),x'_s(4/9))=(4.81,-0.75)$</span>. So in <span class="math-container">$A$</span>'s frame the negative signal velocity means that the signal is going backwards in time as it approaches <span class="math-container">$A$</span>.</p>
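The numbers above are easy to reproduce; here is a small script using the same frames and parameters (a sketch, with <span class="math-container">$c=1$</span>):

```python
v = 0.6
gamma = 1 / (1 - v**2) ** 0.5   # = 1.25

def boost(t, x):
    """Event in B's (unprimed) frame -> A's (primed) frame."""
    return gamma * (t - v * x), gamma * (x - v * t)

def signal(lam):
    """Tachyon signal in B's frame: emitted at t = 5, speed u = 6."""
    return lam + 5, 6 * lam

# t' decreases as lambda increases (backwards in A's time),
# while x' increases toward 0, i.e. toward A's position.
events = [boost(*signal(lam)) for lam in (0, 2/9, 4/9, 5/9)]
for lam, (tp, xp) in zip((0, 2/9, 4/9, 5/9), events):
    print(f"lambda = {lam:.3f}: t' = {tp:.3f}, x' = {xp:.3f}")
```

The last event has <span class="math-container">$x'=0$</span>: the signal arrives at <span class="math-container">$A$</span>, having moved monotonically toward <span class="math-container">$A$</span> in both frames even though its coordinate time runs backwards in the primed frame.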
|
Physics
|
|newtonian-mechanics|rotational-dynamics|work|
|
Work-Energy Theorem but Work is $F.S_{COM}$ (Extended)
|
<p>The work-energy theorem is a difficult concept, and it is frequently misunderstood. Note that before equation 5.15 we are given equation 5.14, which is: <span class="math-container">$$\mathbf{ F} = M \ddot{\mathbf{R}}$$</span> This is Newton's 2nd law. The <span class="math-container">$\mathbf{F}$</span> in Newton's 2nd law is the net force <span class="math-container">$$\mathbf{F_{net}}=\sum_{i=1}^n \mathbf{F_i}$$</span> To me, this is a little confusing already. The authors should always (in my opinion) write the net force as <span class="math-container">$\mathbf{F_{net}}$</span> or with some other similar indication that it is a net force and not an individual force.</p> <p>Similarly, the <span class="math-container">$\mathbf{R}$</span> in Newton's 2nd law is the position of the center of mass (COM). While it is not as critical as the net-force distinction, it would also help in clarity if it were written <span class="math-container">$\mathbf{R_{com}}$</span>. So if we write 5.15 with full clarity then we get <span class="math-container">$$\int_{\mathbf{R_{com,a}}}^{\mathbf{R_{com,b}}}\mathbf{F_{net}}\cdot d\mathbf{R_{com}}=\frac{1}{2}M \mathbf{V_{com,b}}^2-\frac{1}{2}M \mathbf{V_{com,a}}^2$$</span></p> <p>This is clearer, but less concise and more effort to write.</p> <blockquote> <p>Why did KK decide to take the centre of mass displacement? Is it correct? If so, when can we take like that?</p> </blockquote> <p>They took the center of mass displacement because that is the <span class="math-container">$\mathbf{R}$</span> in Newton's 2nd law. As such, it is correct whenever you are using Newton's 2nd law. Indeed, that is the meaning of the 2nd law.</p> <p>However, although it is always correct in Newtonian mechanics, the point that causes confusion is the following. The quantity on the left of 5.15 is called the "net work" where the word "net" refers to the "net force". 
In turn, the word "net" in "net force" is used because the "net force" is the sum of all of the individual forces acting on the system. This inevitably leads to the confusion that students believe that the "net work" is also the sum of all of the works done by each individual force acting on the system. This is false.</p> <p>Let's use the term "total work" to refer to the sum of all works done by each individual force acting on the system. The "net work" is not necessarily equal to the "total work". They are two separate concepts, and they can even differ when there is only a single force acting on the system!</p> <p>The total work gives the total change in energy. The net work gives only the portion of the change of total energy corresponding to a change in the kinetic energy of the center of mass, often called the translational kinetic energy.</p> <p>For example, consider a spring being compressed at a constant rate, <span class="math-container">$v$</span>, by a force <span class="math-container">$F$</span> from my hand while the other side is attached to a fixed wall. By Newton's 2nd law, the force from the wall is <span class="math-container">$-F$</span>. From these we can calculate both the "net work" and the "total work". The "net work" is the net force times the displacement of the COM, and since the net force is <span class="math-container">$F-F=0$</span> the "net work" is also zero regardless of the displacement. The work energy theorem says that the spring is not gaining KE. The "total work" is the sum of the work from each force. The wall has a displacement of zero, so the wall's work is <span class="math-container">$0$</span>. The hand has a work of <span class="math-container">$\int F \ ds$</span>, which from Hooke's law is <span class="math-container">$1/2 \ k x^2$</span>, assuming that the spring started at equilibrium. So the total work is <span class="math-container">$0+1/2 \ kx^2$</span>. Both the "net work" and the "total work" are correct. 
The "net work" says that the spring is not gaining KE. The "total work" says that the spring is gaining total energy. So, in this case, the difference is the increase in internal energy, the elastic potential energy.</p> <blockquote> <p>If it is valid, in what cases? Only for rigid bodies? Also, how would you justify the definition of work here?</p> </blockquote> <p>In the question about the drum, they are also using the rotational version of the work energy theorem. This would be the "net rotational work" about the COM. It is also valid when the rotational version of Newton's 2nd law is valid. So, in this case <span class="math-container">$fb\theta$</span> is the "net torque" not the torque due to the single force <span class="math-container">$f$</span>. Again, this "net rotational work" is a separate concept from the "total work" ("total work" is not divided into rotational and translational parts). Those two quantities differ, even in this case where there is a single force providing torque. For the "net rotational work" it doesn't matter that the point of application of the force providing the torque is not moving, that is relevant for the separate concept of "total work".</p>
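The spring example can be made concrete in a few lines; a sketch (the spring constant and compression are made-up numbers for illustration):

```python
import numpy as np

k_spring = 100.0   # N/m (assumed)
X = 0.1            # final compression in m (assumed)
x = np.linspace(0.0, X, 10_001)

F_hand = k_spring * x   # quasi-static: hand force tracks the spring force
# Trapezoidal integration of the hand's work, W = integral of F dx
W_hand = float(np.sum(0.5 * (F_hand[1:] + F_hand[:-1]) * np.diff(x)))
W_wall = 0.0            # the wall's contact point never moves
W_net = 0.0             # net force ~ 0 throughout, so net work is zero

W_total = W_hand + W_wall   # equals the stored elastic PE, k X^2 / 2
print(W_net, W_total)
```

The net work is zero (no change in COM kinetic energy) while the total work equals the stored elastic potential energy, exactly the distinction drawn above.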
|
Physics
|
|gravity|black-holes|projectile|
|
Would a black hole be slowed down as it passed through a large solid object?
|
<p>The black hole would be slowed by <a href="https://en.wikipedia.org/wiki/Dynamical_friction" rel="nofollow noreferrer">dynamical friction</a>. The black hole pulls material toward itself, but as it continues to move, that material ends up behind it. This makes an excess of material behind the black hole, and the gravitational pull of that excess mass slows the black hole.</p> <p>A secondary effect would be that as the black hole accretes material, it would pick up momentum from that material, also slowing it.</p>
|
Physics
|
|homework-and-exercises|dirac-equation|klein-gordon-equation|dirac-matrices|
|
Understanding derivation of Klein-Gordon equation from Dirac equation
|
<p><span class="math-container">$$ \gamma^\mu \gamma^\nu \partial^2_{\mu\nu}= \frac 12( \gamma^\mu \gamma^\nu \partial^2_{\mu\nu}+ \gamma^\nu \gamma^\mu \partial^2_{\nu\mu}) $$</span> where in the second term we have renamed the dummy indices <span class="math-container">$\mu\leftrightarrow \nu$</span>. As <span class="math-container">$\partial^2_{\mu\nu}= \partial^2_{\nu\mu}$</span> we have <span class="math-container">$$ \frac 12( \gamma^\mu \gamma^\nu \partial^2_{\mu\nu}+ \gamma^\nu \gamma^\mu \partial^2_{\mu\nu})= \frac 12( \gamma^\mu \gamma^\nu + \gamma^\nu \gamma^\mu) \partial^2_{\mu\nu}\\ =\eta^{\mu\nu}\partial^2_{\mu\nu}. $$</span></p>
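The anticommutation relation <span class="math-container">$\frac12\{\gamma^\mu,\gamma^\nu\}=\eta^{\mu\nu}$</span> used in the last line can be verified in any concrete representation; a numpy sketch using the Dirac representation (conventions <span class="math-container">$\eta=\mathrm{diag}(1,-1,-1,-1)$</span>, variable names mine):

```python
import numpy as np

I2 = np.eye(2)
Z2 = np.zeros((2, 2))
sig = [np.array([[0, 1], [1, 0]], dtype=complex),
       np.array([[0, -1j], [1j, 0]]),
       np.array([[1, 0], [0, -1]], dtype=complex)]

# Dirac representation: gamma^0 = diag(I, -I), gamma^i = [[0, sigma_i], [-sigma_i, 0]]
gamma0 = np.block([[I2, Z2], [Z2, -I2]]).astype(complex)
gammas = [gamma0] + [np.block([[Z2, s], [-s, Z2]]) for s in sig]

eta = np.diag([1.0, -1.0, -1.0, -1.0])

# Verify {gamma^mu, gamma^nu} = 2 eta^{mu nu} * identity
for mu in range(4):
    for nu in range(4):
        anti = gammas[mu] @ gammas[nu] + gammas[nu] @ gammas[mu]
        assert np.allclose(anti, 2 * eta[mu, nu] * np.eye(4))
```

With this relation, the symmetrized product contracted with <span class="math-container">$\partial^2_{\mu\nu}$</span> collapses to <span class="math-container">$\eta^{\mu\nu}\partial^2_{\mu\nu}=\Box$</span>, giving the Klein-Gordon operator.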
|
Physics
|
|quantum-field-theory|path-integral|correlation-functions|propagator|klein-gordon-equation|
|
Time ordered correlator from path integral: equation of motion?
|
<p>Let <span class="math-container">$P(\Phi)$</span> be a polynomial in a set of field variables <span class="math-container">$\Phi= \{\phi_1(x), \phi_2(x),\ldots\}$</span>, and consider a correlation function <span class="math-container">$$ \langle{P(\Phi)}\rangle= \frac 1 Z \int d[\Phi] P(\Phi)e^{-S[\Phi]}. $$</span> The functional integral over <span class="math-container">$\Phi$</span> should be unaffected by a linear shift in the integration variables <span class="math-container">$\phi_i(x) \to \phi_i(x) +\delta \phi_i(x)$</span>. We therefore have<br /> <span class="math-container">$$ 0= \frac 1 Z \int d[\Phi] \left\{\int \delta \phi_i(x) \left(\frac{\delta P}{\delta \phi_i(x)} - P(\Phi) \frac{\delta S}{\delta \phi_i(x)}\right)d^d x\right\} e^{-S[\Phi]}, $$</span> and hence <span class="math-container">$$ 0=\left\langle \int \delta \phi_i(x) \left(\frac{\delta P}{\delta \phi_i(x)} - P(\Phi) \frac{\delta S}{\delta \phi_i(x)}\right)d^d x\right\rangle $$</span> for any <span class="math-container">$\delta\phi_i(x)$</span>. So, for each point <span class="math-container">$x$</span> and field <span class="math-container">$\phi_i$</span>, we have <span class="math-container">$$ \left\langle \frac{\delta P}{\delta \phi_i(x)}\right\rangle= \left\langle P(\Phi) \frac{\delta S}{\delta \phi_i(x)}\right\rangle, $$</span> which is the quantum counterpart of the classical equation of motion<span class="math-container">$$ \frac{\delta S}{\delta \phi_i(x)}=0. 
$$</span></p> <p>When the fields in <span class="math-container">$P= \phi(x_1)\phi(x_2) \cdots \phi(x_n)$</span> are bosonic <span class="math-container">$$ \frac{\delta P}{\delta \phi(x)}= \sum_{k=1}^n \delta(x-x_k)\phi(x_1) \phi(x_2)\cdots \widehat{\phi(x_k)} \cdots \phi(x_n), $$</span> where the hat denotes the omission of that factor.</p> <p>In your case the functional derivative <span class="math-container">$\frac{\delta S}{\delta \phi_i(x)}$</span> is the KG equation and you get your equation by taking <span class="math-container">$P= \phi(x)$</span> to be a one point function.</p>
|
Physics
|
|atomic-physics|spectroscopy|
|
Doubt in a method to determine the spectral terms for equivalent electrons
|
<blockquote> <p>why <span class="math-container">$L = 0$</span> or <span class="math-container">$S $</span> state is not considered after the term <span class="math-container">${}^{4}P$</span> in the example in table I for <span class="math-container">$ nd^3 $</span> case ?</p> </blockquote> <p>The answer is that the two <span class="math-container">$M_L=0$</span> entries after the <span class="math-container">$M_L=1$</span> entries belong to the last two spectral terms. We could have an <span class="math-container">$ L = 0 $</span> state if we had three <span class="math-container">$M_L =0$</span> entries instead of two.</p> <p>The same holds for the other terms. Check the picture below. <a href="https://i.stack.imgur.com/w5AJ6.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/w5AJ6.jpg" alt="enter image description here" /></a></p>
|
Physics
|
|quantum-mechanics|mass|atomic-physics|physical-constants|
|
Does Rydberg constant vary with mass?
|
<p>Yes, the Rydberg constant depends on the mass. This dependence can be seen from the perspective of Bohr's atomic theory, for example. However, the Rydberg constant was established prior to any modern atomic theory, including Bohr's. The Balmer lines of the hydrogen spectrum could be related using the Rydberg constant. There was certainly no reason then to doubt that it had any variation, as there was no common knowledge of isotopic hydrogen. Provided that you are treating a specific isotope of hydrogen, the Rydberg constant is truly constant.</p>
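The mass dependence is a simple reduced-mass factor, <span class="math-container">$R_M = R_\infty/(1+m_e/M)$</span>; a quick sketch with rounded CODATA-style values (quoted from memory, so treat them as approximate):

```python
R_INF = 10_973_731.568  # m^-1, Rydberg constant for infinite nuclear mass
M_E = 5.485799e-4       # electron mass in atomic mass units
M_H = 1.007276          # proton mass, u
M_D = 2.013553          # deuteron mass, u

R_H = R_INF / (1 + M_E / M_H)   # hydrogen
R_D = R_INF / (1 + M_E / M_D)   # deuterium
print(R_H, R_D, R_D - R_H)      # ~0.03% total shift; H and D differ measurably
```

The hydrogen/deuterium difference in this estimate is what shows up experimentally as the isotope shift of the Balmer lines, the effect used in the discovery of deuterium.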
|
Physics
|
|quantum-mechanics|hilbert-space|quantum-entanglement|
|
What does doubly-entangled $W$-like state do with three-particle setup?
|
<p>Are you talking about the "usual" 3 particle W states? Your question seems to be about this, not doubly entangled W states (which are extremely complicated) per the title. Assuming I have not misunderstood you:</p> <ol> <li>The order of measurement of the 3 particles is not relevant.</li> <li>After the first is measured, the remaining 2 will retain a degree of entanglement. (That is a major difference as compared to the 3 particle GHZ state.)</li> <li>I don't believe there is a W state quite like the formula you presented. Usually there are 3 terms (as @Ghoster says); and it is the square root of three; and there is a plus sign; like this:</li> </ol> <p>|W> = (1/√3)(|001> + |010> + |100>)</p> <p>If so: You cannot say that the remaining spins/polarizations will or will not be the same if you measure them on the same basis as the first. Even though they are entangled, that entanglement will be one of two possible types. You may find this reference useful, which compares W states and GHZ states:</p> <p><a href="https://arxiv.org/abs/quant-ph/0107146" rel="nofollow noreferrer">Bell's theorem with and without inequalities for the three-qubit Greenberger-Horne-Zeilinger and W states</a></p> <p>W states are usually created probabilistically. However, I saw this piece proposing deterministic creation of such states. I don't know if it has been tested.</p> <p><a href="https://arxiv.org/pdf/1602.04166.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1602.04166.pdf</a></p>
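<p>A quick numerical check of point 2 (my own sketch, using the Peres–Horodecki/PPT criterion rather than anything from the cited papers): after tracing out one qubit of the W state, the remaining two-qubit state is still entangled, as witnessed by a negative eigenvalue of the partial transpose.</p>

```python
import numpy as np

# |W> = (|001> + |010> + |100>)/sqrt(3) as an 8-component state vector
W = np.zeros(8)
W[[1, 2, 4]] = 1/np.sqrt(3)

rho = np.outer(W, W)                  # pure-state density matrix
rho3 = rho.reshape(2, 4, 2, 4)
rho_23 = np.einsum('iaib->ab', rho3)  # trace out the first qubit

# PPT test: partial transpose on the second of the two remaining qubits
rho_pt = rho_23.reshape(2, 2, 2, 2).transpose(0, 3, 2, 1).reshape(4, 4)
min_eig = np.linalg.eigvalsh(rho_pt).min()
print(min_eig)  # negative -> the two remaining qubits are still entangled
```

<p>(Repeating the same test on a GHZ state gives no negative eigenvalue after the trace, which is one way to see the "major difference" mentioned in point 2.)</p>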
|
Physics
|
|material-science|metals|
|
Cross Section Area
|
<p>Assuming negligible change in density, conservation of mass dictates the cross sectional area must decrease if the length increases so that the volume, and thus mass, remains the same.</p> <p>Hope this helps.</p>
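<p>A minimal worked example of the volume-conservation argument (illustrative numbers only):</p>

```python
# Volume conservation for a stretched rod: A1 * L1 = A2 * L2
A1, L1 = 4.0e-6, 1.0  # initial cross-sectional area (m^2) and length (m)
L2 = 1.25             # stretched length (m)

A2 = A1 * L1 / L2
print(A2)  # ~3.2e-6 m^2: the area shrinks as the rod lengthens
```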
|
Physics
|
|string-theory|conformal-field-theory|
|
Large central charge limit for Virasoro blocks
|
<p>Roughly speaking, the contribution of the state <span class="math-container">$L_{-n}|\Delta_s\rangle$</span> to a conformal block is <span class="math-container">$$ \mathcal{F}_{\Delta_s}^{(s)} = \cdots + \frac{\langle V_1V_2 L_{-n}|\Delta_s\rangle \langle V_3V_4L_{-n}|\Delta_s\rangle}{||L_{-n}|\Delta_s\rangle ||^2} + \cdots $$</span> In this contribution the 3pt functions that appear in the numerator are <span class="math-container">$c$</span>-independent. So if the denominator becomes infinite, the contribution vanishes.</p>
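<p>To make the vanishing explicit, here is a small sympy sketch (my own illustration) using the standard Virasoro commutator, which gives <span class="math-container">$||L_{-n}|\Delta_s\rangle||^2 = 2n\Delta_s + \frac{c}{12}n(n^2-1)$</span>; for <span class="math-container">$n\geq 2$</span> the norm grows linearly in <span class="math-container">$c$</span>, so the contribution dies off (note the central term vanishes for <span class="math-container">$n=1$</span>, so the global-block contribution survives):</p>

```python
import sympy as sp

c, Delta = sp.symbols('c Delta', positive=True)
n = 2  # any n >= 2; the central term vanishes for n = 1

# From [L_m, L_n] = (m-n) L_{m+n} + (c/12) m (m^2-1) delta_{m+n,0}:
# ||L_{-n}|Delta>||^2 = <Delta| [L_n, L_{-n}] |Delta> = 2n*Delta + c*n*(n^2-1)/12
norm_sq = 2*n*Delta + c*n*(n**2 - 1)/sp.Integer(12)

# The level-n contribution to the block scales as 1/norm_sq -> 0 as c -> oo
print(sp.limit(1/norm_sq, c, sp.oo))  # 0
```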
|
Physics
|
|general-relativity|differential-geometry|coordinate-systems|definition|kerr-metric|
|
What actually is Boyer-Lindquist coordinates?
|
<p>They are the same for <span class="math-container">$a=0$</span>.<BR> For <span class="math-container">$a>0$</span> you get elliptical coordinate lines for the distance parameter<BR> (see, in German, <a href="https://de.wikipedia.org/wiki/Boyer-Lindquist-Koordinaten" rel="nofollow noreferrer">https://de.wikipedia.org/wiki/Boyer-Lindquist-Koordinaten</a>). <BR>Your link is an adaptation for the Kerr–Newman metric ...</p> <p><a href="https://i.stack.imgur.com/tJZsp.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/tJZsp.png" alt="enter image description here" /></a></p>
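<p>A quick numerical check (a sketch using the standard Boyer–Lindquist-to-Cartesian relations <span class="math-container">$x=\sqrt{r^2+a^2}\sin\theta\cos\phi$</span>, <span class="math-container">$y=\sqrt{r^2+a^2}\sin\theta\sin\phi$</span>, <span class="math-container">$z=r\cos\theta$</span>): surfaces of constant <span class="math-container">$r$</span> are oblate ellipsoids rather than spheres whenever <span class="math-container">$a>0$</span>.</p>

```python
import numpy as np

a, r = 0.9, 2.0  # illustrative spin parameter and BL radial coordinate
th = np.linspace(0.01, np.pi - 0.01, 50)

# Take phi = 0, so the curve lies in the x-z plane
x = np.sqrt(r**2 + a**2)*np.sin(th)
z = r*np.cos(th)

# Constant-r surfaces satisfy the ellipsoid equation (a sphere only if a = 0):
check = x**2/(r**2 + a**2) + z**2/r**2
print(np.allclose(check, 1.0))  # True
```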
|
Physics
|
|homework-and-exercises|electromagnetism|differential-geometry|tensor-calculus|
|
Maxwell's equations with differential form formalism
|
<p>The first thing you need to note is that your <span class="math-container">$\star F$</span> is only defined on a specific subset, <span class="math-container">$\Omega_0$</span>, of <span class="math-container">$\Bbb{R}\times\Bbb{R}^3$</span>, where <span class="math-container">$\Omega_0$</span> is the Cartesian product of <span class="math-container">$\Bbb{R}$</span> with the domain of the spherical coordinate chart <span class="math-container">$(r,\theta,\phi)$</span>. This domain excludes quite a bit of stuff (e.g. half a meridian from each sphere of radius <span class="math-container">$r>0$</span>). In particular, the origin of <span class="math-container">$\Bbb{R}^3$</span> is not a part of this domain, so when you calculated <span class="math-container">$d(\star F)$</span>, you should be mindful that you have only calculated it a-priori on <span class="math-container">$\Omega_0$</span>.</p> <p>Ok, this still isn’t the main issue, because the form <span class="math-container">$\star F$</span> as you have written can be written more globally as <span class="math-container">$q\cdot \frac{x\,dy\wedge dz+y\,dz\wedge dx+z\,dx\wedge dy}{r^3}$</span>, where <span class="math-container">$r^2:=x^2+y^2+z^2$</span>. Now, with this formula we see that <span class="math-container">$\star F$</span> is defined and smooth on <span class="math-container">$\Omega_1:=\Bbb{R}\times (\Bbb{R}^3\setminus\{0\})$</span>, and since <span class="math-container">$\Omega_0$</span> is dense in <span class="math-container">$\Omega_1$</span>, it follows that <span class="math-container">$d(\star F)$</span> vanishing on <span class="math-container">$\Omega_0$</span> implies it vanishes on <span class="math-container">$\Omega_1$</span>.
So, indeed, <span class="math-container">$\star J$</span>, and thus <span class="math-container">$J$</span>, vanishes on <span class="math-container">$\Omega_1$</span>.</p> <p>Next, you have indeed correctly computed the Hodge dual <span class="math-container">$F=-\star\star F=-\frac{q}{r^2}\,dt\wedge dr$</span>; again in deriving this formula you a-priori assumed yourself to be working on the domain <span class="math-container">$\Omega_0$</span>, but again you can convince yourself (by smoothness of everything) this holds on the larger domain <span class="math-container">$\Omega_1$</span>.</p> <p>But now is where you need to be extremely mindful. Note carefully that <span class="math-container">$\Omega_1\neq\Bbb{R}\times\Bbb{R}^3$</span>, and that all the forms <span class="math-container">$F,\star F, J,\star J$</span> have singularities at the set of points with <span class="math-container">$r=0$</span>. So, none of the above calculations can be extended beyond the domain <span class="math-container">$\Omega_1$</span>. Therefore, you <strong>will NOT</strong> be able to recover a Dirac delta expression simply by fiddling around with <span class="math-container">$d(\star F)$</span> in this manner. If you want to properly treat this, then you need to treat it distributionally right from the beginning, then you need to compute exterior derivatives in the distributional sense (which is essentially defined by ensuring that Stokes’ theorem holds). Once you calculate things (i.e introduce your test-forms, and define a functional by integrating against them etc), you’ll recover your beloved Dirac delta.</p> <hr /> <p><strong>Further Remarks.</strong></p> <p>At this point I should point out that Carroll is really being quite sloppy (I guess intentionally so) with his prompt in (d), because he’s applying Stokes’ theorem in the classical sense on a domain where it is not applicable. 
This is one of the biggest reasons why confusion arises (for students especially) in Physics. What he really wants to do is treat the exterior derivative distributionally, but he’s not being explicit about it (because obviously he doesn’t introduce distributions formally).</p>
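<p>As a sanity check (my own sketch) of the claim that <span class="math-container">$d(\star F)$</span> vanishes on <span class="math-container">$\Omega_1$</span>: writing <span class="math-container">$\star F = q(x\,dy\wedge dz+y\,dz\wedge dx+z\,dx\wedge dy)/r^3$</span> (the Cartesian form of <span class="math-container">$q\sin\theta\,d\theta\wedge d\phi$</span>), its exterior derivative reduces to the ordinary divergence of the component field <span class="math-container">$q\,(x,y,z)/r^3$</span>, which sympy confirms is zero away from the origin:</p>

```python
import sympy as sp

x, y, z, q = sp.symbols('x y z q', real=True)
r = sp.sqrt(x**2 + y**2 + z**2)

# Components of *F = q (x dy^dz + y dz^dx + z dx^dy)/r^3
w = [q*x/r**3, q*y/r**3, q*z/r**3]

# d(*F) = (div w) dx^dy^dz; compute the divergence symbolically
d_starF = sum(sp.diff(w[i], v) for i, v in enumerate((x, y, z)))
print(sp.simplify(d_starF))  # 0  (valid only away from r = 0)
```

<p>Of course, this says nothing about the origin itself, which is exactly where the distributional treatment described above becomes necessary.</p>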
|
Physics
|
|cosmology|differential-geometry|spacetime|universe|topology|
|
Could the universe have a form of a $T^3$-torus?
|
<p>A universe with toroidal spatial surfaces is not in obvious conflict with general relativity, which is a local theory. Nor is it in conflict with the FLRW solution, which is separately applicable to different patches of the universe and says nothing about the universe as a whole.</p> <p>It would likely be at odds with observations, however, if the length scale is smaller than the size of the observable universe. It would produce repeating patterns in the cosmic microwave background and in the large-scale structure of the universe, which would show up very clearly in the frequency-space analyses that we normally use to characterize these systems.</p>
|
Physics
|
|homework-and-exercises|forces|pressure|fluid-statics|density|
|
Water pressure at the bottom of a box changes drastically depending on whether a water column above is connected or not?
|
<p>A major issue with your thought experiment is that, in your two scenarios, you seem to be making <strong>two different and contradictory assumptions</strong> about the rigidity of the box (and/or the compressibility of the fluid).</p> <hr /> <p>In your first scenario, you seem to be implicitly assuming that the walls of the box (and the pipe) are <strong>perfectly rigid</strong>. If the top and sides of the box can deform even a little bit under the enormous pressure of the 1 km column of water, they'll flex outwards and expand the volume of the box until most of the water in the pipe (of which there is only one liter) has flowed into the box, drastically reducing the pressure.</p> <p>In your second scenario, however, your reasoning only works if the box is <em>not</em> perfectly rigid. If it was, the weight of the pipe on top would be supported fully by the box, and none of it would be transferred into the water inside the box. Instead, you seem to be assuming that the top of the box is effectively <strong>floating on the water inside</strong>, so that the full weight of the pipe is transferred to the water.</p> <hr /> <p>Furthermore, if the box <em>was</em> perfectly rigid, fully closed and filled with incompressible fluid, the pressure inside in scenario 2 would actually be indeterminate! You can see this by observing that, since the box is rigid and the fluid incompressible, we can fill the box with fluid to <em>any</em> pressure before sealing it, and the volume of fluid inside the box will be the same! 
Thus, under these assumptions, just knowing the shape of the box and the amount of fluid inside is not enough to determine the pressure.</p> <p>Obviously that's not a physically meaningful scenario, but we can regard it as an approximation of a situation where the box is <em>almost</em> rigid and/or the fluid <em>almost</em> incompressible.</p> <p>In particular, let's assume that there's a valve at the top of the box, where the pipe will connect, which we can open and close at will.</p> <p>We will first remove the pipe and pour water in through the top valve until the box is full of water and the pressure at the top of the box equals ambient pressure. As you've calculated, the pressure at the bottom of the box will then be (ambient pressure plus) 9.81 kPa, i.e. the pressure under one meter of water.</p> <p>If we now close the valve, the pressure inside the box will not change. Now we plug in the 1 km × 1 mm² pipe into the (closed) valve and fill that with water too.</p> <p>The pressure at the bottom of the pipe (above the closed valve) will now be 9.81 MPa. (We assume the pipe and the valve somehow withstand this pressure.) The only thing that has changed <em>below</em> the valve, however, is that there's now an extra 1 kg weight of water (plus the weight of the empty pipe itself, which realistically would of course be way more than 1 kg) resting on top of the box.</p> <p>Since we assumed the box to be <em>almost</em> perfectly rigid, we can assume that the box will support most of this weight by itself, and thus the pressure of the water inside should not appreciably change. However, even if the entire weight of the water column was somehow transferred through the top of the box to the water below — maybe the "top" is actually a piston supported by the water below, but otherwise free to slide up and down? — that would still only increase the pressure inside the box by 9.81 Pa, i.e. from 9.81 to 9.81981 ≈ 9.82 kPa.</p> <p>Now let's open the valve. 
What happens?</p> <p>If the top of the box <em>was</em> actually a floating piston (of negligible mass, just like the pipe), what would happen is that the liter of water in the pipe would simply drain into the box, while the piston would rise by 1 mm to accommodate it. The pressure at the bottom of the box would still be 9.82 kPa just like before opening the valve.</p> <p>However, let's go back to our initial assumption of a nearly rigid box. When the valve is opened, the 9.81 MPa pressure at the bottom of the pipe is now transferred to the water in the box, and through it to the sides of the box. That's a <em>lot</em> of pressure pushing the sides outwards, and since they're only <em>almost</em> rigid, they'll still deform a little. And they only need to move a fraction of a millimeter for the box to expand enough to fit the extra liter of water from the pipe.</p> <p>Even if the walls of the box were <em>really</em> rigid, and could withstand a pressure of nearly 10 MPa without moving even a fraction of a millimeter, the water in the box is only <em>nearly</em> incompressible. The <a href="https://en.wikipedia.org/wiki/Properties_of_water#Compressibility" rel="nofollow noreferrer">bulk modulus of water</a> is around 2.2 GPa, so at a pressure of 10 MPa the volume of water decreases by about 0.45%. Since the volume of the pipe is only 0.1% of the volume of the box, however, that's more than enough for all the water in the pipe to fit into the box even without the walls flexing at all.</p> <p>Of course, as water drains out of the pipe and into the box, the pressure in the box will drop until the system attains an equilibrium, with a pressure at the bottom of the box somewhere strictly between 9.82 kPa and 9.82 MPa.</p> <p>The exact equilibrium pressure depends on the rigidity of the box and the compressibility of the fluid in it. 
Assuming a perfectly rigid box and a bulk modulus of 2.2 GPa for water, we can in fact calculate the equilibrium height of the water column as about 180 m, with a pressure at the bottom of about 1.8 MPa, which is enough to compress the volume of the water in the box by about 0.08%, or just enough to accommodate the extra 820 ml of water drained from the pipe.</p> <hr /> <p>Ps. What if we now close the valve again?</p> <p>The pressure on both sides of the valve is now the same, so nothing really changes. The box still contains approximately a cubic meter of water under high pressure (up to 1.8 MPa, which is a lot, but nowhere near the 9.8 MPa we'd get for the open-valve equilibrium pressure if we assumed the water to be perfectly incompressible and the box perfectly rigid), while the pipe still has enough water in it to maintain the same pressure on the other side of the valve (i.e. a column about 180 m high, which is also a lot, but nowhere near 1 km).</p>
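<p>For the record, the equilibrium calculation in my simplified model (perfectly rigid box, bulk modulus <span class="math-container">$K$</span>, and the compressing pressure approximated by the pressure at the bottom of the box) can be sketched as follows — the water drained from the pipe must equal the volume freed up by compressing the water in the box:</p>

```python
# Equilibrium of the water column, per the simplified model in this answer:
# (L_pipe - h) * A_pipe  =  V_box * (rho * g * h) / K,  solved for h
rho, g = 1000.0, 9.81  # water density (kg/m^3), gravity (m/s^2)
K = 2.2e9              # bulk modulus of water (Pa)
V_box = 1.0            # box volume (m^3)
A_pipe = 1.0e-6        # pipe cross-section: 1 mm^2, in m^2
L_pipe = 1000.0        # pipe length (m)

h = L_pipe * A_pipe / (A_pipe + V_box*rho*g/K)  # equilibrium column height
p_bottom = rho * g * h                          # pressure at the bottom
print(h, p_bottom)  # roughly 183 m and 1.8 MPa
```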
|
Physics
|
|quantum-mechanics|operators|hilbert-space|angular-momentum|mathematical-physics|
|
Azimuthal coordinate operator: Hermitian or not? Self-adjoint or not?
|
<p>This is an attempt to rephrase @ACuriousMind's <a href="https://physics.stackexchange.com/questions/233266/how-can-i-solve-this-quantum-mechanical-paradox/233311#233311">answer</a> linked in the comments using a less mathematically sophisticated language as requested by the OP -- this is just an attempt at translation, not an original explanation (any errors are mine). As anticipated in the comments, as a physicist-not-a-mathematician, I'm not going to be careful about distinguishing Hermiticity from self-adjointness or essential self-adjointness, possibly to my peril.</p> <h2>Finite dimensional intuition</h2> <p>Let's start with finite dimensional spaces, and remind ourselves what Hermitian means in this context. Of course, we know an operator <span class="math-container">$A$</span> is Hermitian if <span class="math-container">$A=A^\dagger$</span>. For finite dimensional operators, we can check this condition by choosing some basis and writing out all the matrix elements <span class="math-container">$A_{mn}$</span> in that basis, and then checking explicitly that <span class="math-container">$A_{mn}=\bar{A}_{nm}$</span>, where overbar denotes the complex conjugate.</p> <p>A more sophisticated way to say what we're doing is to recall what it means to evaluate the matrix elements. It means we are evaluating the inner products <span class="math-container">$\langle n | A | m \rangle$</span> for some complete and orthonormal basis <span class="math-container">$|m\rangle$</span>. In order to be a complete basis, the states <span class="math-container">$|m\rangle$</span> must <em>span</em> the space of all states. In other words, we must be able to construct <em>any possible</em> state as a linear superposition of the basis states <span class="math-container">$|m\rangle$</span>. 
This is why, when answering a question about whether an operator is Hermitian or self-adjoint (or other questions about an operator), it is important to understand the space of states over which the operator is acting. These subtleties only become more important when dealing with infinite dimensional operators.</p> <p>Now, often, we start off working with some "canonical" space, where we get used to various operations being well-defined, but when we move to a variant of the original space, certain operations may become ill-defined.</p> <p>Again, let's just start with a finite-dimensional example. We can start with <span class="math-container">$R^3$</span>, or 3-dimensional vectors. Then an operation that makes total sense is to rescale a vector by a constant <span class="math-container">$a$</span>. This operation is Hermitian. However, now let's restrict ourselves to the subspace vectors in <span class="math-container">$R^3$</span> <strong>that have unit norm</strong>. Then it doesn't make sense to rescale vectors by any constant other than <span class="math-container">$1$</span> -- to do so, would take us out of the space we started in.</p> <h2>Operators and wavefunctions on a circle</h2> <p>OK now let's go to the example you're interested in.</p> <p>In the "canonical" example of an infinite straight line, we can define Hermitian operators <span class="math-container">$x$</span> and <span class="math-container">$p$</span>. Not only that, but their products <span class="math-container">$xp$</span> and <span class="math-container">$px$</span>, and their commutator <span class="math-container">$[x,p]=xp-px$</span>, are well-defined and Hermitian as well. 
Of the various properties we have to check, one of them is that all of these operators (<span class="math-container">$x$</span>, <span class="math-container">$p$</span>, <span class="math-container">$xp$</span>, <span class="math-container">$px$</span>, <span class="math-container">$[x,p]$</span>) take <em>wavefunctions</em> (normalizable complex-valued functions of the real line) and map them to wavefunctions.</p> <p>Now, the case you want to study is a particle moving on a circle. We will model this by saying that the infinite straight line is periodic, where we identify <span class="math-container">$x$</span> with <span class="math-container">$x+L$</span>. This means that the wavefunction <span class="math-container">$\psi$</span> must obey <span class="math-container">$\psi(x)=\psi(x+L)$</span>. (This is mathematically equivalent to your example, if you replace <span class="math-container">$x$</span> with <span class="math-container">$\phi$</span>, <span class="math-container">$p$</span> with <span class="math-container">$L_\phi$</span>, and <span class="math-container">$L$</span> with <span class="math-container">$2\pi$</span>.)</p> <h3>Operator <span class="math-container">$p$</span></h3> <p>Now, consider the operator <span class="math-container">$p$</span>. Does <span class="math-container">$p$</span> map wavefunctions from the space of periodic functions, to the space of periodic functions? Sure, because <span class="math-container">$\psi'(0) = \psi'(L)$</span>, and as naive physicists we'll just assume that the wavefunction is as smooth as we need it to be to take as many derivatives as we want.</p> <h3>Operator <span class="math-container">$x$</span></h3> <p>What about the operator <span class="math-container">$x$</span>? Well, it's perfectly possible to multiply <span class="math-container">$\psi$</span> by <span class="math-container">$x$</span> at any point. Furthermore, we can still normalize <span class="math-container">$x\psi$</span>. 
So, we can call it a Hermitian operator. However, there is a subtlety, because <span class="math-container">$x$</span> will not map periodic functions to periodic functions; <span class="math-container">$x \psi$</span> will not be periodic in general. This is because the periodicity condition, applied to <span class="math-container">$x\psi$</span>, becomes <span class="math-container">$0 = L \psi(L)$</span>, which is only true if <span class="math-container">$\psi(L)=0$</span> (and, of course, periodicity also implies that <span class="math-container">$\psi(0)=0$</span>). This only works for a special subspace of wavefunctions. However, it doesn't affect the ability to compute a value for <span class="math-container">$x\psi$</span>, or to evaluate the integral of <span class="math-container">$|x\psi|^2$</span>, so at this level we are ok.</p> <h3>Operator <span class="math-container">$px$</span></h3> <p>Now imagine we want to compute the action of the operator <span class="math-container">$px$</span> on our circle space. There was no problem with computing the action of this operator on the infinite line. However, now we run into a problem on the circle. The problem is that the input to <span class="math-container">$p$</span> should be a function that is periodic, so that we can differentiate it. However, we've just seen that <span class="math-container">$x\psi$</span> does not generally produce a periodic function. Therefore, <span class="math-container">$px$</span> is not defined for all wavefunctions on the circle. It is only defined for the special subspace of wavefunctions that also obey the stronger condition <span class="math-container">$\psi(0)=\psi(L)=0$</span>.
Note that this subspace does not include the functions <span class="math-container">$e^{i k x}$</span>, or in azimuthal coordinates <span class="math-container">$e^{i m\phi}$</span>.</p> <h3>Uncertainty principle</h3> <p>In @ACuriousMind's answer, they point out that there is a trick for defining <span class="math-container">$\langle \psi| [p, x] | \psi \rangle$</span> for wavefunctions on the circle, to make sense of the uncertainty principle. Instead of evaluating <span class="math-container">$[p, x] | \psi\rangle$</span>, which would involve evaluating <span class="math-container">$px \psi$</span>, which we have just seen is not always defined, we can instead evaluate <span class="math-container">$p \psi$</span> and <span class="math-container">$x \psi$</span>, then compute the integral <span class="math-container">$\int( \bar{(p \psi)}(x \psi) - \bar{(x \psi)} (p \psi))$</span>. In this example, we are able to sidestep the issue of evaluating <span class="math-container">$px\psi$</span>, so we don't run into the problem of differentiating a discontinuous function. Then, everything is well defined, and the apparent issue with the uncertainty principle brought up in that question is resolved.</p> <h3>Summary</h3> <p>Anyway, to summarize, there is a subtlety with the <span class="math-container">$x$</span> operator when working on a circle, because <span class="math-container">$x\psi$</span> is not periodic. This doesn't necessarily cause a major problem on its own, but you can quickly run into issues if you start mixing <span class="math-container">$x$</span> and <span class="math-container">$p$</span>, as @ACuriousMind stated in the comments.</p>
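<p>A small sympy sketch of that "trick" integral (my own check, with <span class="math-container">$\hbar=1$</span> and <span class="math-container">$p=-i\,d/d\phi$</span>): for a momentum eigenstate on the circle, the boundary-safe expectation of the commutator comes out <span class="math-container">$0$</span> rather than the naive <span class="math-container">$-i\hbar$</span>, which is consistent with <span class="math-container">$\Delta p = 0$</span> for such states and dissolves the apparent uncertainty-principle paradox.</p>

```python
import sympy as sp

phi = sp.symbols('phi', real=True)
m = sp.symbols('m', integer=True)

psi = sp.exp(sp.I*m*phi)/sp.sqrt(2*sp.pi)  # normalized eigenstate of p on the circle
p_psi = -sp.I*sp.diff(psi, phi)            # p acting on psi (hbar = 1)
x_psi = phi*psi                            # x acting on psi

# Boundary-safe expression for <[p, x]>: int( conj(p psi) x psi - conj(x psi) p psi )
integrand = sp.conjugate(p_psi)*x_psi - sp.conjugate(x_psi)*p_psi
result = sp.integrate(sp.simplify(integrand), (phi, 0, 2*sp.pi))
print(result)  # 0 -- not the naive -i
```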
|