subject | topic | question | answer
---|---|---|---|
Physics
|
|homework-and-exercises|rotational-dynamics|angular-momentum|momentum|energy-conservation|
|
Very interesting case, where energy is not conserved?
|
<p>Your analysis is correct. The initial energy in the system of particle plus disc is the kinetic energy of the particle, <span class="math-container">$(1/2) m v^2$</span>. The final energy in the system of particle plus disc is the kinetic energy of the disc, which has a contribution <span class="math-container">$(1/2) m v^2$</span> from linear motion and also a contribution from rotation, making the total greater than <span class="math-container">$(1/2) m v^2$</span>. Here I am taking the wording of the question to mean that the particle hits the disc and the disc moves away, while the particle then stays still at the location of the collision (it does not stick to the disc).</p> <p>The conclusion is that something unusual must happen in order to make the particle end up not moving. If the collision were elastic or totally inelastic the particle would end up moving (either bounced off the disc or stuck to it). We have to conclude that no interaction involving only the disc and particle can give the observed result. There must be a third party involved, such as a nail that pops up and catches the particle, or a lump of glue or something. But if there is a nail or a lump of glue then our momentum analysis is incomplete: we have to account for the momentum of the body to which the nail is attached.</p> <p>If we insist that there is just the particle and the disc then the conclusion is one of:</p> <p>either</p> <ol> <li>the final state of motion given in the question never occurs, being a physical impossibility for the given system and initial conditions</li> </ol> <p>or</p> <ol start="2"> <li>the disc or particle is not simply a disc or particle, but has some stored internal energy, which is released when the two collide, having the effect of bringing the particle to rest and providing the requisite energy (imparted to the disc) to make this possible while conserving momentum and angular momentum overall</li> </ol>
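A quick numerical sketch of the energy bookkeeping above, with illustrative values (equal masses for particle and disc, and a hypothetical impact parameter, since the question does not fix these):

```python
import numpy as np

# Hypothetical numbers: a particle of mass m hits a free uniform disc of
# mass M (radius R) off-centre at impact parameter b and ends up at rest.
m, M, R, b, v = 1.0, 1.0, 0.5, 0.3, 2.0

E_initial = 0.5 * m * v**2            # kinetic energy of the particle

# Linear momentum conservation fixes the disc's centre-of-mass speed:
V = m * v / M
# Angular momentum about the disc centre fixes its spin:
I = 0.5 * M * R**2                    # moment of inertia of a uniform disc
omega = m * v * b / I

E_final = 0.5 * M * V**2 + 0.5 * I * omega**2
print(E_initial, E_final)             # the final energy exceeds the initial
assert E_final > E_initial
```

With equal masses the linear contribution alone already equals the initial kinetic energy, so any off-centre hit makes the final energy strictly larger, which is exactly why the stated final state needs a third party or stored internal energy.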
|
Physics
|
|electric-current|capacitance|
|
Current and infinite coils/resistors/capacitors
|
<p>In order to simplify these types of arrangements of components you need more advanced methods than simply combining series and parallel components. One such method is called the "Pi to Tee" conversion, also known as the "Delta to Wye" conversion. See the figure below.</p> <p>After making appropriate conversions the circuit can often be simplified. You can obtain the applicable equations for the conversions on the web. Just google Pi to T or Delta to Wye conversion.</p> <p>Hope this helps.</p> <p><a href="https://i.stack.imgur.com/62GOh.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/62GOh.jpg" alt="enter image description here" /></a></p>
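As a concrete sketch of the conversion formulas mentioned above (the standard Delta-to-Wye relations; the resistor naming convention here, with $R_a$ opposite node 1, is an assumption of this example):

```python
# Delta (Ra, Rb, Rc) to Wye (R1, R2, R3) conversion and its inverse.
# Convention assumed here: Ra sits opposite node 1, Rb opposite node 2, etc.
def delta_to_wye(Ra, Rb, Rc):
    s = Ra + Rb + Rc
    return Rb * Rc / s, Ra * Rc / s, Ra * Rb / s

def wye_to_delta(R1, R2, R3):
    p = R1 * R2 + R2 * R3 + R3 * R1
    return p / R1, p / R2, p / R3

# Round-tripping recovers the original network:
Ra, Rb, Rc = 10.0, 20.0, 30.0
print(delta_to_wye(Ra, Rb, Rc))
print(wye_to_delta(*delta_to_wye(Ra, Rb, Rc)))
```

Both conversions leave the resistance seen between any pair of external terminals unchanged, which is what lets you substitute one network for the other inside a larger circuit.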
|
Physics
|
|electromagnetism|electrostatics|charge|gauss-law|
|
What exactly does charge density mean in Gauss's law?
|
<p>The charge density in Gauss' law is simply a scalar function that specifies the distribution of charge in a region of space. The region may be 1, 2, or 3-dimensional as the case demands. Essentially, <span class="math-container">$\rho$</span> specifies the nature of the source of the electrostatic field, in terms of both its magnitude and geometry. When you integrate it over all space you get the net charge contained in the region: <span class="math-container">$$Q_{\text{net}}=\int_{\text{all space}}\rho(\vec r)\;dV.$$</span> In essence, Gauss' law tells one that the nature of the electric field depends on the distribution of charge in space. It is important to realize that the charge density can be a set of discrete point charges, an oddly shaped continuous yet non-uniform space charge, or any combination of the two. The charge can also be distributed over the surface of a conductor. It is important to remember that the integral must exist for well-posed problems. For example, a constant charge density at all points in space leads to infinite net charge and a consequently infinite electric field, so physical charge densities should be mindful of the convergence of <span class="math-container">$\int\rho\; dV$</span>. Any distribution of charge is fair (even infinite densities) if it is finite in its spatial extent; however, if the charge is to extend to spatial infinity, then one must take care to ensure that the net charge is still finite.</p>
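The convergence point can be made concrete with a symbolic check: a rapidly decaying density (a hypothetical exponential profile chosen for this example) integrates to a finite net charge, while a constant density does not.

```python
import sympy as sp

r, a, rho0 = sp.symbols('r a rho0', positive=True)
rho = rho0 * sp.exp(-r / a)          # a hypothetical, rapidly decaying density

# Net charge: integrate over all space in spherical coordinates.
Q = sp.integrate(rho * 4 * sp.pi * r**2, (r, 0, sp.oo))
print(Q)          # finite: 8*pi*a**3*rho0

# A constant density, by contrast, gives a divergent net charge:
Q_const = sp.integrate(rho0 * 4 * sp.pi * r**2, (r, 0, sp.oo))
print(Q_const)    # oo
```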
|
Physics
|
|classical-mechanics|oscillators|
|
Double Pendulum Cannot Flip
|
<p><strong>Preliminaries.</strong></p> <p>First, let's clarify: the "solid black line" is not very clearly visible in this image, but the article provides a bigger image from which it's clear that it refers to the boundary of the central white region (excluding the two mushroom-shaped protrusions).</p> <p>Now, getting the equation for this boundary is pretty straightforward:</p> <p><strong>1. Total energy.</strong></p> <p>Assume, as the article does, that the rods are identical and have mass <span class="math-container">$m$</span> and length <span class="math-container">$l$</span>. <span class="math-container">$\theta_1$</span> is the deviation from the vertical of the top rod, and <span class="math-container">$\theta_2$</span> is likewise for the bottom rod. Define, for simplicity, <span class="math-container">$a=l/2$</span>, the distance from each rod's pivot to its centre of mass. The centre of mass of the top rod rises by <span class="math-container">$a(1-\cos\theta_1)$</span>; the centre of mass of the bottom rod rises by <span class="math-container">$2a(1-\cos\theta_1)+a(1-\cos\theta_2)$</span>, since its pivot hangs a full rod length <span class="math-container">$l=2a$</span> below the top pivot. Then, when the system is released from rest, its total energy is:</p> <p><span class="math-container">$$U = U_1 + U_2 = mga(1-\cos\theta_1) + [2mga(1-\cos\theta_1) + mga(1-\cos\theta_2)]$$</span> <span class="math-container">$$U = mga(4-3\cos\theta_1-\cos\theta_2)$$</span></p> <p><strong>2. Minimum energy for at least one pendulum to flip.</strong></p> <p>For it to be energetically possible for the bottom pendulum to flip, the total energy must be enough for <span class="math-container">$\theta_2$</span> to reach <span class="math-container">$\pi$</span>. The lowest-energy such configuration is <span class="math-container">$\theta_1 = 0, \theta_2=\pi$</span>, with zero kinetic energy. So the minimum required energy is:</p> <p><span class="math-container">$$U_{barely} = mga(4 - 3\cos 0-\cos\pi)=2mga$$</span></p> <p>For the top pendulum, the minimum energy is higher (<span class="math-container">$6mga$</span>). So the equation for when the total energy is barely enough for at least one pendulum to flip is:</p> <p><span class="math-container">$$mga(4-3\cos\theta_1-\cos\theta_2)=U_{barely}=2mga$$</span></p> <p>Simplifying:</p> <p><span class="math-container">$$3\cos\theta_1+\cos\theta_2=2,$$</span></p> <p>which gives the shape of the "solid black line".</p>
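A small numerical check of this bookkeeping, computing the potential energy directly from the centre-of-mass heights of the two uniform rods (the parameter values are illustrative):

```python
import numpy as np

m, g, l = 1.0, 9.81, 1.0   # illustrative values
a = l / 2                  # pivot-to-centre-of-mass distance of each rod

def U(theta1, theta2):
    """Potential energy above the hanging-rest configuration."""
    h1 = a * (1 - np.cos(theta1))                              # top rod CoM
    h2 = l * (1 - np.cos(theta1)) + a * (1 - np.cos(theta2))   # bottom rod CoM
    return m * g * (h1 + h2)

# Minimum-energy flipped configuration: theta1 = 0, theta2 = pi.
U_barely = U(0.0, np.pi)
assert np.isclose(U_barely, 2 * m * g * a)

# Any point on the boundary 3*cos(t1) + cos(t2) = 2 carries exactly
# this threshold energy:
t1 = 0.5
t2 = np.arccos(2 - 3 * np.cos(t1))
assert np.isclose(U(t1, t2), U_barely)
```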
|
Physics
|
|quantum-mechanics|resource-recommendations|quantum-interpretations|wavefunction-collapse|
|
Relationship of the do-operator in do-calculus and the notion of the collapse of a wave function
|
<p>You make an interesting point, although not a popular one (at least not yet). Far more common is to interpret the wavefunction collapse as Bayesian updating (upon learning new information). See for instance <a href="https://arxiv.org/abs/1107.5849" rel="nofollow noreferrer">Leifer and Spekkens</a>, and arguably all of <a href="https://plato.stanford.edu/entries/quantum-bayesian/" rel="nofollow noreferrer">Quantum Bayesianism</a>. But the do-calculus is not about mere “learning” of information. Indeed the whole point of causal modelling is that there is a crucial causal distinction between <em>seeing</em> a value and <em>setting</em> a value. The former is not a cause, and the latter is a cause.</p> <p>The original influential paper concerning the relationship of causal modeling to quantum systems (particularly entangled systems) is from <a href="https://arxiv.org/abs/1208.4119" rel="nofollow noreferrer">Wood and Spekkens</a>. They deviate from the essentials of the do-calculus in a few places (notably in Figure 26 of the arXiv version, where they curiously allow causal arrows to point to the settings S and T). The whole point of the do-calculus is that the externally-controlled points of intervention are the exogenous inputs to a causal model, and certainly what one “does” in these experiments is to choose the settings from outside the system. Choosing the basis in which one measures the system must by definition be a “cause” in such models. But apart from this lapse, the paper is excellent.</p> <p>With one notable exception, there are no categories of quantum interpretations which modify the description of the system when one “does” something -- in this context, choosing the measurement settings. Certainly there is no proposal in which choosing the settings collapses the wavefunction. 
True, when one learns the output result of the measurement, this is associated with the mathematical wavefunction collapse, thus the motivation to think of collapse as Bayesian updating. But the outcome is not the setting, and the do-calculus takes the setting to be a causal input, not an output at all.</p> <p>Of course, many people have noted that when one infers the past behavior of a quantum system, analyzing it in classical terms, it seems “as if” the input choice of the setting has influenced what has already happened. When one looks to see which slit a particle had passed through, it is always seen to pass through only one slit, “as if” the particle was responding to the future measurement basis. When one looks to see the interference pattern after a pair of slits, the result makes it look like something has passed through both slits, again “as if” the future setting was important in determining the past behavior. Following through on your observation, you notice that this happens in the do-calculus; choosing an input correlates other events with that input. That’s just cause and effect. And here we have an interpretation of events (at the slits) seemingly correlated with an external input (the setting choice). Could the setting be the cause of this behavior? If so, it would have to look something like figure 27 in Wood and Spekkens. The causal arrow would have to point from the future settings to the past.</p> <p>As mentioned, there is a category of quantum interpretations that takes this observation seriously, attempting to develop causal models of this sort, with some backwards-pointing causal arrows. Such models are typically called either “retrocausal” or “future-input-dependent”. (See this <a href="https://arxiv.org/abs/1906.04313" rel="nofollow noreferrer">Rev. Mod. Phys.</a> piece for a relatively recent review.)
In general, this would imply that “the wavefunction” was actually a large set of probability distributions, one for each possible future setting. (As the number of particles in the system grows, the number of possible future settings grows exponentially, which would then explain why “the wavefunction” gets exponentially complicated with particle number.) As described in <a href="https://arxiv.org/abs/1403.2374" rel="nofollow noreferrer">these</a> simple Ising-model-like examples, when one chooses the setting one would then be selecting which particular probability distribution would describe reality. Once chosen, that distribution need not be exponentially complicated, allowing the reality to exist in space and time, rather than in some enormous Hilbert space.</p> <p>But all of that would only work if the causal model was allowed to occasionally point backwards in time to past hidden variables, an idea that’s usually rejected out of hand. Without such retrocausation, it’s hard to see how the do-calculus could matter. On the other hand, if one entertained the possibility of hidden retrocausal effects, this viewpoint on causal modelling would seem to be absolutely essential to making further progress in developing any spacetime-based reformulation of quantum theory.</p>
|
Physics
|
|quantum-mechanics|energy|magnetic-fields|schroedinger-equation|linear-algebra|
|
Landau levels in symmetric gauge, what is the constraint on the quantum numbers?
|
<p>I have looked at this problem in the context of the Weyl equation. Below is a word-for-word extract from my notes, so it is not <em>exactly</em> what you want, but the square of the Weyl operator is the Schroedinger equation and my last paragraph relates to the physical meaning of the ranges of various integers. This interpretation came from numerically plotting the wavefunctions rather than by analytic reasoning.</p> <hr /> <p>Consider the motion of a massless, right-handed, Weyl fermion with positive charge <span class="math-container">$e$</span> in a magnetic field <span class="math-container">${\bf B}= -B \hat{\bf z}$</span>. The downward direction of the field has been chosen so that the particle orbits in an anticlockwise direction about the <span class="math-container">$z$</span> axis. The vector potential is therefore <span class="math-container">${\bf A}= (By/2,-Bx/2,0)$</span>.</p> <p>The Weyl Hamiltonian is <span class="math-container">$$ H= -i{\boldsymbol \sigma}\cdot (\partial -ie{\bf A}), $$</span> and <span class="math-container">$$ H^2 = {\mathbb I}\left(- \nabla^2 + \frac{e^2B^2}{4} r^2 +eBL_3 +k_3^2\right) + eB\sigma_3. $$</span> Here <span class="math-container">$L_3= -i(x\partial_y-y\partial_x)=-i\partial_\theta$</span> is the canonical (but not kinetic) angular momentum. The energy eigenvalues of the expression in brackets are <span class="math-container">$$ E^2_{n,l} = eB\left\{2n+|l|+l+ 1\right\}+k^2_3 $$</span> with eigenfunctions <span class="math-container">$$ \varphi_{n,l}(r,\theta) =\left(\frac{eB}{2}\right)^{(|l|+1)/2} \sqrt{\frac{n!}{(n+|l|)!}} r^{|l|}\exp\left(-\frac{eBr^2}{4}\right) L^{|l|}_n\left(\frac{eBr^2}{2}\right) e^{il\theta}.
$$</span> Here <span class="math-container">$L^{|l|}_n(x)$</span> is the degree-<span class="math-container">$n$</span> associated Laguerre polynomial <span class="math-container">$$ L^{|l|}_n(x) = \frac{x^{-|l|}}{n!} e^x \frac{d^n}{dx^n} (e^{-x} x^{n+|l|}) $$</span> with the property that <span class="math-container">$$ \int_0^\infty x^{\alpha} e^{-x} L^\alpha_n(x)L^\alpha_m(x) dx = \frac{(n+\alpha)!}{n!} \delta_{nm}. $$</span> There is a generating function <span class="math-container">$$ \sum_{n=0}^\infty L_n^k(x) t^n =(1-t)^{-k-1} \exp\left\{-\frac{ xt}{(1-t)}\right\} $$</span> and an addition formula <span class="math-container">$$ \exp{\left(x y e^{i \phi}\right)} L_n\left(x^2 + y^2 - 2 x y \cos{\phi}\right) = \sum_{k = 0}^\infty \left(x y e^{i\phi}\right)^{(k - n)} \frac{n!}{k!} L_n^{(k - n)}\left(x^2\right) L_n^{(k - n)}\left(y^2\right). $$</span></p> <p>For our eigenfunctions both <span class="math-container">$n$</span> and <span class="math-container">$l$</span> are integers. When <span class="math-container">$n=0$</span> and <span class="math-container">$l>0$</span>, the wavefunction corresponds to a classical particle describing a circle of radius <span class="math-container">$$ R_l=\sqrt{\frac{2l}{eB}} $$</span> with the origin as its centre. If we decrease <span class="math-container">$l$</span> while staying in the same Landau level (<em>i.e.</em> by increasing <span class="math-container">$n$</span> so as to keep <span class="math-container">$E^2_{n,l}$</span> fixed) the classical circular orbit keeps the same radius but its centre moves away from the origin and is smeared out in <span class="math-container">$\theta$</span> over the full <span class="math-container">$2\pi$</span>. When <span class="math-container">$l=0$</span> the circle passes through the origin.
For <span class="math-container">$l$</span> negative, the energy no longer depends on <span class="math-container">$l$</span> and the Landau level keeps <span class="math-container">$n$</span> fixed while <span class="math-container">$l$</span> continues to decrease. The classical orbit still has the original radius, but no longer encloses the origin.</p> <p>The familiar lowest-Landau-level states, whose circular wavefunctions share this same radius, correspond to <span class="math-container">$n=0$</span> and negative <span class="math-container">$l=-|l|$</span>.</p>
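The orthogonality relation quoted above is easy to confirm numerically (the particular $n$, $m$, $\alpha$ below are illustrative):

```python
import numpy as np
from scipy.special import genlaguerre, factorial
from scipy.integrate import quad

# Verify  int_0^oo x^a e^-x L_n^a(x) L_m^a(x) dx = (n+a)!/n! * delta_nm
def overlap(n, m, alpha):
    f = lambda x: (x**alpha * np.exp(-x)
                   * genlaguerre(n, alpha)(x) * genlaguerre(m, alpha)(x))
    val, _ = quad(f, 0, np.inf)
    return val

n, alpha = 3, 2
assert np.isclose(overlap(n, n, alpha), factorial(n + alpha) / factorial(n))
assert np.isclose(overlap(3, 5, 2), 0.0, atol=1e-10)   # orthogonal for n != m
```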
|
Physics
|
|fluid-dynamics|aircraft|propulsion|
|
Why is accelerating more air slower more efficient than less air faster?
|
<p>One of the assumptions in the answer that you linked to is that the purpose of accelerating the air is to use the reaction force to lift an object off the ground. Force is change in momentum per unit time. So to generate a given force <span class="math-container">$F$</span> we have to accelerate a mass <span class="math-container">$m$</span> of air each second from rest to a speed <span class="math-container">$v$</span> where</p> <p><span class="math-container">$F = mv$</span></p> <p>The energy that we use to do this is (at least) the kinetic energy added to the air, which is</p> <p><span class="math-container">$\displaystyle E = \frac 1 2 mv^2$</span></p> <p>But we could choose to accelerate a larger mass of air (say a mass <span class="math-container">$m' = 2m$</span>) to a slower speed (say <span class="math-container">$v' = \frac v 2$</span>). We still have</p> <p><span class="math-container">$F = m'v'$</span></p> <p>so we are still producing the required force. But the energy required to do this is now</p> <p><span class="math-container">$\displaystyle E' = \frac 1 2 (m')(v')^2 = \frac 1 2 (2m) \left( \frac v 2 \right)^2 = \frac 1 4 mv^2 = \frac 1 2 E$</span></p> <p>So by accelerating a larger mass of air to a lower speed we produce the same reaction force but use only half the energy - which is more efficient.</p>
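The same scaling in a few lines of code, with $m$ read as the mass of air accelerated per second (the numbers are illustrative):

```python
# Same thrust from a larger, slower airstream costs less power per second.
# F = mdot * v  (mdot: air mass accelerated per second), E = 0.5 * mdot * v**2.
def energy_per_second(F, mdot):
    v = F / mdot          # exhaust speed needed for the requested thrust
    return 0.5 * mdot * v**2

F = 100.0                                  # required force, illustrative units
P1 = energy_per_second(F, mdot=10.0)
P2 = energy_per_second(F, mdot=20.0)       # twice the air, half the speed
print(P1, P2)
assert abs(P2 - P1 / 2) < 1e-12            # half the energy for the same force
```

In general the energy cost scales as $F^2/\dot m$, so for fixed thrust it keeps dropping as the processed mass flow grows.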
|
Physics
|
|newtonian-mechanics|experimental-physics|
|
Breaking the rock using eggs
|
<p>Easy, if you allow for relativistic speeds. At a few tens of kilometers per second (about Earth's orbital speed), minor forces like chemical bonds or electrostatic attraction between electrons and nuclei already cease to play a role. Egg and stone would simply turn into a mixed plasma for an instant, and the plasma would break the stone into pieces. Once you get to a significant fraction of the speed of light, you'd also be fusing nuclei. In any case, a single egg is an ample amount of ammo. Please don't overdo it; the majority of people on Earth do not wish to die a sudden, violent death...</p>
|
Physics
|
|quantum-field-theory|lagrangian-formalism|dirac-equation|charge-conjugation|cpt-symmetry|
|
Charge conjugation is a symmetry for the quantized free Dirac action?
|
<p>Charge conjugation leaves spacetime coordinates and Dirac matrices alone, and acts only on the spinor structure of <span class="math-container">$\Psi$</span>, meaning that <span class="math-container">$$ {\cal{C}}\left(i\gamma^\mu\partial_\mu\right)\Psi{\cal{C}}^\dagger=\left(i\gamma^\mu\partial_\mu\right){\cal{C}}\Psi{\cal{C}}^\dagger \ . $$</span> Using <span class="math-container">$$ {\cal{C}}\Psi{\cal{C}}^\dagger=i\gamma^2\Psi^*=i\gamma^2\gamma^0\overline{\Psi}^T \ , $$</span> <span class="math-container">$$ {\cal{C}}\Psi^\dagger{\cal{C}}^\dagger=i\Psi^T\gamma^2 \ \ , \ \ {\cal{C}}\overline{\Psi}{\cal{C}}^\dagger=i\Psi^T\gamma^2\gamma^0 \ , $$</span> and <span class="math-container">$$ \gamma^0\gamma^\mu\gamma^0=\gamma^{\mu \, \dagger} \ \ , \ \ \gamma^2\gamma^\mu\gamma^2=\gamma^{\mu \, *} \ , $$</span> we find <span class="math-container">$$ {\cal{C}}\overline{\Psi}\Psi{\cal{C}}^\dagger={\cal{C}}\overline{\Psi}{\cal{C}}^\dagger{\cal{C}}\Psi{\cal{C}}^\dagger=-\Psi^T\overline{\Psi}^T=-\Psi_\alpha\overline{\Psi}_\alpha \ , $$</span> <span class="math-container">$$ {\cal{C}}\overline{\Psi}\left(i\gamma^\mu\partial_\mu\right)\Psi{\cal{C}}^\dagger={\cal{C}}\overline{\Psi}{\cal{C}}^\dagger\left(i\gamma^\mu\partial_\mu\right){\cal{C}}\Psi{\cal{C}}^\dagger=\Psi^T\left(i\gamma^{\mu \, T}\partial_\mu\right)\overline{\Psi}^T= \Psi_\alpha\left(i\left(\gamma^{\mu}\right)_{\beta\alpha}\partial_\mu\overline{\Psi}_\beta\right) \ . $$</span> Now we have to "pull" <span class="math-container">$\overline{\Psi}$</span> to the left. The anticommutator of two fermion field operators (or their derivatives) is problematic when evaluated at the very same spacetime point, but note that in its second quantized definition, the Lagrangian is normal ordered (it is defined like that exactly to cancel these ill-defined constants). 
Within normal ordering (denoted as <span class="math-container">$:(...):$</span>), all fermion field operators anticommute without additional terms: <span class="math-container">$$ :\Psi_\alpha\overline{\Psi}_\beta: \, =- \, :\overline{\Psi}_\beta\Psi_\alpha: \ \ , \ \ :\Psi_\alpha\left(\partial_\mu\overline{\Psi}_\beta\right): \, =- \, :\left(\partial_\mu\overline{\Psi}_\beta\right)\Psi_\alpha:=- \, :\overline{\Psi}_\beta\overset{\leftarrow}{\partial_\mu}\Psi_\alpha: \ , $$</span> leading to <span class="math-container">$$ {\cal{C}}:\overline{\Psi}\Psi:{\cal{C}}^\dagger= \, :\overline{\Psi}\Psi: \ \ , \ \ {\cal{C}}:\overline{\Psi}\left(i\gamma^\mu\partial_\mu\right)\Psi:{\cal{C}}^\dagger=-:\overline{\Psi}\left(i\gamma^\mu\overset{\leftarrow}{\partial_\mu}\right)\Psi: \ . $$</span> If we define the Lagrangian as <span class="math-container">$$ {\cal{L}}'= \, :\overline{\Psi}\left(i\gamma^\mu\overset{\rightarrow}{\partial_\mu}-m\right)\Psi: \ , $$</span> then charge conjugation would transform this into <span class="math-container">$$ {\cal{C}}{\cal{L}}'{\cal{C}}^\dagger= \, :\overline{\Psi}\left(-i\gamma^\mu\overset{\leftarrow}{\partial_\mu}-m\right)\Psi: \ . $$</span> The difference between this and the original Lagrangian is just an uninteresting total derivative: <span class="math-container">$$ \overline{\Psi}i\gamma^\mu\overset{\rightarrow}{\partial_\mu}\Psi=-\overline{\Psi}i\gamma^\mu\overset{\leftarrow}{\partial_\mu}\Psi + \, \text{total derivative} \ , $$</span> meaning that <span class="math-container">${\cal{L}}'$</span> and <span class="math-container">${\cal{C}}{\cal{L}}'{\cal{C}}^\dagger$</span> are essentially equal.
We can make this symmetry more manifest by defining the Lagrangian alternatively (but equivalently) as <span class="math-container">$$ {\cal{L}}=\, :\overline{\Psi}\left(\frac{i}{2}\gamma^\mu\overset{\leftrightarrow}{\partial_\mu}-m\right)\Psi: \ , $$</span> where <span class="math-container">$\overset{\leftrightarrow}{\partial_\mu}=\overset{\rightarrow}{\partial_\mu}-\overset{\leftarrow}{\partial_\mu}$</span>. This choice shows <span class="math-container">$$ {\cal{C}}{\cal{L}}{\cal{C}}^\dagger={\cal{L}} $$</span> explicitly.</p>
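The gamma-matrix identities used in the derivation can be checked numerically. This assumes the Dirac representation (the identities take exactly this form only in a representation where $\gamma^2$ is the single imaginary matrix):

```python
import numpy as np

# Dirac-representation gamma matrices (an assumption of this check).
I2, Z2 = np.eye(2), np.zeros((2, 2))
sig = [np.array([[0, 1], [1, 0]]),
       np.array([[0, -1j], [1j, 0]]),
       np.array([[1, 0], [0, -1]])]
g0 = np.block([[I2, Z2], [Z2, -I2]])
gammas = [g0] + [np.block([[Z2, s], [-s, Z2]]) for s in sig]
g2 = gammas[2]

for g in gammas:
    assert np.allclose(g0 @ g @ g0, g.conj().T)   # gamma^0 g gamma^0 = g^dagger
    assert np.allclose(g2 @ g @ g2, g.conj())     # gamma^2 g gamma^2 = g^*
```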
|
Physics
|
|speed-of-light|time|speed|
|
Northern lights / aurora borealis "pre-warning" - how does this work w.r.t timing and different particle / wave speeds?
|
<p>You are roughly correct, although the first thing isn't really a material thing that is "ejected" by the Sun. The prediction is usually first done by satellites directly observing the Sun (indeed "light travelling at light speed"), most notably the <a href="https://en.wikipedia.org/wiki/Solar_and_Heliospheric_Observatory" rel="noreferrer">SOHO</a> which uses a coronagraph. From this, information such as the strength of the solar flare and whether there's an Earth-directed CME can be determined. The CME, which consists of charged particles, is the second thing you are talking about and takes a couple of days to reach Earth, causing aurora. The charged particles travel anywhere between <span class="math-container">$100$</span> and <span class="math-container">$3000 \;\text{km/s}$</span>, although speeds are most typically a few hundred <span class="math-container">$\text{km/s}$</span>.</p> <p>Instruments closer to Earth detect when the CME actually arrives. For example, the DSCOVR and ACE satellites, which measure the interplanetary magnetic field and solar wind, are located at the Sun-Earth L1 point which gives about 30 minutes of advance warning. Finally, magnetometers on Earth detect when it actually hits Earth.</p> <p>More information can be found <a href="https://www.swpc.noaa.gov/news/coronal-mass-ejections-cme-space-weather-phenomena" rel="noreferrer">here</a>.</p> <p>There are dedicated websites that track such data, such as <a href="https://solarham.net" rel="noreferrer">https://solarham.net</a> and <a href="https://www.spaceweatherlive.com" rel="noreferrer">https://www.spaceweatherlive.com</a>.</p>
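The lead times follow directly from the speeds quoted above; a rough calculation (speeds chosen from the answer's quoted range):

```python
# Rough Sun-to-Earth travel times for a CME at typical speeds.
AU_KM = 1.496e8          # mean Sun-Earth distance in km

for speed in (300.0, 500.0, 1000.0, 3000.0):   # km/s, illustrative values
    days = AU_KM / speed / 86400
    print(f"{speed:6.0f} km/s -> {days:.1f} days")

# Light (and hence the flare observation itself) makes the same trip
# in about eight minutes:
print(AU_KM / 3.0e5 / 60, "minutes")
```

This is why the optical observation gives days of warning, while the L1 monitors, only about 1.5 million km upstream, give tens of minutes.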
|
Physics
|
|waves|electromagnetic-radiation|
|
What exactly qualifies something to be a transverse wave?
|
<p>Statement D is not true because there are transverse waves that require a medium to be vibrated and so do not travel in vacuum. Transverse waves on a rope are an example.</p> <p>By a process of elimination I think C is the expected answer. You can form a "stationary" wave (I think this must mean a standing wave) from a transverse wave by fixing two nodes an appropriate distance apart - this is how guitar strings work, for example. However, this is not a defining characteristic of transverse waves, since it is also true of longitudinal waves e.g. in organ pipes.</p>
|
Physics
|
|quantum-field-theory|quarks|protons|gluons|
|
Why so much kinetic energy inside a proton?
|
<p>The simplistic answer is that a proton is very small. The quarks are not free, but are confined to a small region. By the uncertainty principle a small uncertainty in the position of the quarks implies a large uncertainty in their momentum. The expectation of the momentum is zero, but the uncertainty is large. To calculate the energy you square the momentum, so the energy has a non-zero expectation that is relatively large.</p>
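The order of magnitude is easy to sketch: confining a quark to a proton-sized region forces a momentum scale of a few hundred MeV (the 1 fm confinement scale is the usual rough figure, used here as an illustrative input):

```python
# Uncertainty-principle estimate of quark momentum inside a proton.
HBAR_C = 197.327          # hbar*c in MeV*fm

dx = 1.0                  # confinement scale in fm (illustrative)
p_c = HBAR_C / dx         # Delta(p)*c ~ hbar*c / Delta(x), in MeV
print(p_c)                # ~200 MeV: comparable to the proton mass scale,
                          # and far above the few-MeV light-quark rest masses
```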
|
Physics
|
|thermodynamics|water|fluid-statics|phase-transition|density|
|
Does having a liquid (less dense than ice) above a floating (in water) ice cube, change the fact that the water level remains constant when ice melts?
|
<p>Without the second liquid above, the ice displaces a volume of water exactly equal to its own weight. After it melts, the ice becomes the same weight and volume of water, which is why the water level remains constant.</p> <p>However, the upper liquid layer provides some buoyancy, so less of the ice cube is in the water than if there were no liquid above (i.e. it "sits higher" in the water). Therefore, the ice displaces less water than before but still contains the same amount of water as before, resulting in a rise in the water level after it melts.</p>
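The argument can be made quantitative with Archimedes' principle; the densities below are illustrative (ice, water, and a hypothetical lighter oil):

```python
# Submerged-in-water fraction of ice, with and without a lighter liquid on top.
rho_ice, rho_water, rho_oil = 0.917, 1.000, 0.800   # g/cm^3, illustrative

# Floating in water alone:  rho_ice = f * rho_water
f_plain = rho_ice / rho_water                        # ~0.917 of the volume

# Spanning the oil/water interface:
#   rho_ice = f * rho_water + (1 - f) * rho_oil
f_oil = (rho_ice - rho_oil) / (rho_water - rho_oil)  # ~0.585 of the volume

print(f_plain, f_oil)
assert f_oil < f_plain   # less water displaced when the oil helps support it
```

The melt still adds a water volume of $\rho_{ice}/\rho_{water}$ times the cube's volume to the water layer, which now exceeds the (smaller) volume the ice had displaced, so the water level rises.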
|
Physics
|
|newtonian-mechanics|forces|rotational-dynamics|friction|rigid-body-dynamics|
|
How does rolling without slipping happen physically?
|
<blockquote> <p>"<em>What is the mechanism that assures the equality v=Rω, and under what conditions does it work?</em>"</p> </blockquote> <p>The condition is that the friction force at the interface between the wheel's edge and the ground never exceeds the (maximum) static friction, because if it does, that's when you get slip.</p> <p>When you get no slip, that means the point of contact between the wheel and the ground has no relative motion. When that happens, if the ground is your frame of reference, the entire wheel rotates about that point. It behaves similarly to an object that is continuously falling forward (on flat horizontal ground, anyway). That's where this diagram, which you have probably seen before, comes from:</p> <p><a href="https://i.stack.imgur.com/vGXyw.png" rel="noreferrer"><img src="https://i.stack.imgur.com/vGXyw.png" alt="enter image description here" /></a></p> <p><a href="https://phys.libretexts.org/Bookshelves/University_Physics/Book%3A_Introductory_Physics_-_Building_Models_to_Describe_Our_World_%28Martin_Neary_Rinaldo_and_Woodman%29/12%3A_Rotational_Energy_and_Momentum/12.02%3A_Rolling_motion" rel="noreferrer">https://phys.libretexts.org/Bookshelves/University_Physics/Book%3A_Introductory_Physics_-_Building_Models_to_Describe_Our_World_%28Martin_Neary_Rinaldo_and_Woodman%29/12%3A_Rotational_Energy_and_Momentum/12.02%3A_Rolling_motion</a></p> <p>If the point of contact between the wheel and the ground cannot slide, it means that every point on the circumference of the wheel can be uniquely mapped to a point on the ground over a single rotation.</p> <p>When deriving what a radian is from a unit circle (or any arbitrary circle, really) you come by the arc length:</p> <p><span class="math-container">$d=R\theta$</span></p> <p>Since you can map the circumference of the wheel over a single rotation to the ground you can use that to determine the distance traveled.
If you differentiate with respect to time, you get the speed:</p> <p><span class="math-container">$v=R\omega$</span></p> <p>The reason passive rolling tends towards no slip is because the system is tending towards a lower energy state (or, more generally, least action, as things in the universe seem to inexplicably do), with the lowest energy state being the ultimate no-slip condition: zero speed. Slipping involves dynamic friction, which causes energy loss, while static friction does not. That means slipping will continue to dump energy until the system falls into an energy state low enough to have no slip. From this, we can retroactively conclude that slipping is a higher energy state than no slipping, which may have been intuitively obvious, but is rather difficult to explain beyond "something passively spinning really fast has more energy while also being much more likely to slip than something passively spinning slowly".</p> <p>For active rolling there is no rule that says the system needs to tend towards no slip, since we have continuous energy input that can offset the tendency to fall into an energy state low enough to have no slip. However, if something is actively rolling, it probably means a human is doing it, and we strive to operate in no-slip conditions since slippage is inefficient if your goal is to move and therefore not desirable.</p>
|
Physics
|
|newtonian-mechanics|vectors|velocity|collision|speed|
|
In this conservation of momentum problem, where is the sign error coming from?
|
<p>You have a choice: either you keep track of the vertical direction yourself, or you let the math keep track of it.</p> <p>Since you know one particle is going "up" and one particle is going "down", you can set the magnitudes of their vertical momenta equal to each other. That's what you've done with the final equation. That works because you calculate the magnitude with the angle from horizontal.</p> <p>To have the math keep track, you need to use the absolute angle. In that case piece 2 does not move at an angle of 60 degrees, but at an angle of 300 degrees. The sine of that angle is negative.</p>
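The two conventions in a couple of lines (the speed is illustrative):

```python
import math

v = 5.0   # illustrative speed of the downward-going piece

# Keeping track by hand: magnitude of the vertical momentum per unit mass,
# using the angle measured from horizontal.
p_y_magnitude = v * math.sin(math.radians(60))

# Letting the math keep track: use the absolute angle (300 degrees, i.e.
# 60 degrees below the +x axis); the sign comes out automatically.
p_y_signed = v * math.sin(math.radians(300))

print(p_y_magnitude, p_y_signed)
assert math.isclose(p_y_signed, -p_y_magnitude)
```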
|
Physics
|
|electromagnetism|special-relativity|causality|
|
How do you make self-consistent initial values?
|
<p>Knowing the initial position and momenta of all charges, plus the analogous quantities for the electric and magnetic fields, at one initial time, is sufficient to determine the motion of the charges and fields at all later times (using the <a href="https://en.wikipedia.org/wiki/Green%27s_function#Advanced_and_retarded_Green%27s_functions" rel="nofollow noreferrer">retarded Green's function</a>), <strong>and at all earlier times</strong> (using the advanced Green's function).</p> <p>So you don't need to know in advance the complete history of the charges and fields up to some initial time. You just need to know the initial conditions at one time.</p> <p>There is a subtlety with specifying initial data in electromagnetism; namely, there are <a href="https://accelconf.web.cern.ch/e96/PAPERS/TUPG/TUP117G.PDF" rel="nofollow noreferrer">constraints</a> on the initial data, which means you aren't completely free to specify all the components of the electric and magnetic fields, and their first time derivatives, as well as the positions and momenta of the charges, at the initial time. The physical meaning of this is that you can freely specify the positions and momenta of the charges, and two components of the fields and two time derivatives corresponding to the initial configuration of the two independent propagating polarizations of the electromagnetic field, and then the rest of the components of the fields and their time derivatives at the initial time are determined by a subset of Maxwell's equations.</p>
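One face of that constraint structure can be seen symbolically: writing ${\bf B}=\nabla\times{\bf A}$ makes $\nabla\cdot{\bf B}=0$ automatic, so those field components are not freely specifiable initial data. A sketch using sympy's vector module:

```python
import sympy as sp
from sympy.vector import CoordSys3D, curl, divergence

N = CoordSys3D('N')
# An arbitrary vector potential with unspecified component functions:
Ax, Ay, Az = [sp.Function(f)(N.x, N.y, N.z) for f in ('Ax', 'Ay', 'Az')]
A = Ax * N.i + Ay * N.j + Az * N.k

B = curl(A)
# div(curl A) vanishes identically, for any choice of Ax, Ay, Az:
print(sp.simplify(divergence(B)))
```

The remaining constraint, Gauss's law $\nabla\cdot{\bf E}=\rho/\epsilon_0$, ties another field component to the charge positions in the same way.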
|
Physics
|
|oscillators|dissipation|linear-systems|coupled-oscillators|
|
How to demonstrate in a simple way that this system of differential equations form a damped harmonic oscillator?
|
<p>The standard technique to solve this system of linear differential equations <span class="math-container">$$\begin{align} \dot x &= -\alpha_x x - \omega y \\ \dot y &= -\alpha_y y + \omega x \end{align}$$</span> is to write it in matrix form: <span class="math-container">$$\frac{d}{dt}\begin{pmatrix}x \\ y \end{pmatrix}= \begin{pmatrix}-\alpha_x & -\omega \\ +\omega & -\alpha_y\end{pmatrix} \begin{pmatrix}x \\ y \end{pmatrix} \tag{1}$$</span></p> <p>To solve it make the ansatz <span class="math-container">$$\begin{pmatrix}x(t) \\ y(t) \end{pmatrix}= \begin{pmatrix}x_0 \\ y_0 \end{pmatrix} e^{\lambda t} \tag{2}$$</span> with a still unknown constant <span class="math-container">$\lambda$</span>. Inserting (2) into (1) immediately leads to this matrix eigenvalue equation: <span class="math-container">$$\lambda\begin{pmatrix}x_0 \\ y_0 \end{pmatrix}= \begin{pmatrix}-\alpha_x & -\omega \\ +\omega & -\alpha_y\end{pmatrix} \begin{pmatrix}x_0 \\ y_0 \end{pmatrix} \tag{3}$$</span></p> <p><a href="https://en.wikipedia.org/wiki/Eigenvalues_and_eigenvectors#Eigenvalues_and_eigenvectors_of_matrices" rel="nofollow noreferrer">Finding the eigenvalues and eigenvectors</a> is straightforward (and I skip the math details here). The characteristic polynomial is <span class="math-container">$\lambda^2+(\alpha_x+\alpha_y)\lambda+\alpha_x\alpha_y+\omega^2=0$</span>, so you get two eigenvalues: <span class="math-container">$$\lambda=-\frac{1}{2}(\alpha_x+\alpha_y) \pm\sqrt{\frac{1}{4}(\alpha_x-\alpha_y)^2-\omega^2}$$</span> These are complex values with a negative real part and an imaginary part <span class="math-container">$\approx \pm i\omega$</span> (at least if <span class="math-container">$|\alpha_x-\alpha_y|\ll\omega$</span>). So you have found damped harmonic oscillations for <span class="math-container">$x(t)$</span> and <span class="math-container">$y(t)$</span>.</p>
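<p>As a quick numerical sanity check (with arbitrary, hypothetical parameter values), one can compare the roots of the characteristic polynomial <span class="math-container">$\lambda^2+(\alpha_x+\alpha_y)\lambda+\alpha_x\alpha_y+\omega^2=0$</span> against a direct numerical diagonalization of the matrix in (1):</p>

```python
import numpy as np

# Hypothetical parameter values
ax, ay, w = 0.1, 0.3, 2.0

A = np.array([[-ax, -w],
              [ w, -ay]])

# Closed-form roots of lambda^2 + (ax+ay)*lambda + ax*ay + w^2 = 0
disc = (ax - ay)**2 / 4 - w**2          # negative here -> damped oscillation
root = np.sqrt(complex(disc))
lam = np.array([-(ax + ay) / 2 + root, -(ax + ay) / 2 - root])

num = np.linalg.eigvals(A)              # numerical eigenvalues of the matrix

print(np.sort_complex(lam))
print(np.sort_complex(num))             # the two lists should agree
```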
|
Physics
|
|thermodynamics|energy|thermal-radiation|thought-experiment|
|
Fundamental principles for simple radiative heat transfer problems
|
<p>The first thing to note is that the properties of the gray plate are such that it is for all intents and purposes a blackbody with respect to its absorption from the brown plate and its emission to space, but it is completely transparent to the radiation from the hot source. For example for a temperature of 300K a blackbody emits <span class="math-container">$\sigma (300^4) = 459.3 W/m^2$</span>, while the gray plate emits <span class="math-container">$\sigma (300^4) - 1.2 \times 10^{-14} W/m^2$</span> when it is at 300K, where the small number being subtracted is the amount of radiation a blackbody emits in the wavelengths from 0 to 1 <span class="math-container">$\mu m$</span>.</p> <p>Next, in order to track all of the heat inflows and outflows to the plates, let’s make the lumped thermal capacity approximation for the plates and assume that they are each at a uniform temperature. Let’s also write everything as per unit area of the plates. The rate form for the first law of thermodynamics (under the assumption of local thermodynamic equilibrium) is,</p> <p><span class="math-container">$$\frac{dU}{dt}=\dot Q + \dot W$$</span></p> <p>where <span class="math-container">$U$</span> is the internal energy, <span class="math-container">$\dot Q$</span> is the net rate of heat inflow to the system <span class="math-container">$\dot Q=\dot Q_{in}-\dot Q_{out}$</span>, and <span class="math-container">$\dot W$</span> is the rate that work is done by the surroundings on the system. 
For this problem it is clear that <span class="math-container">$\dot W=0$</span>.</p> <p>The lumped thermal capacitance approximation then uses,</p> <p><span class="math-container">$$\frac{dU}{dt} = C \frac{dT}{dt}$$</span></p> <p>where <span class="math-container">$C$</span> is the total thermal capacity of the plate, which is just the mass per unit area of the plate times its specific heat.</p> <p>The heat influx to the brown plate per unit area is simply the heat influx from the hot distant source <span class="math-container">$\dot Q_{in} = Q_0=240 W/m^2$</span>, and the heat outflux per unit area is given by the Stefan-Boltzmann equation for the radiative heat flux between two closely spaced parallel plates, <span class="math-container">$\dot{Q}_{out} = \sigma (T_{b}^{4}-T_{g}^{4})$</span> where <span class="math-container">$b$</span> is for the brown plate and <span class="math-container">$g$</span> is for the gray plate. This allows us to write a governing equation for the temperature evolution of the brown plate as,</p> <p><span class="math-container">$$C_b \frac{dT_b}{dt} = 240 - \sigma (T_{b}^{4}-T_{g}^{4})$$</span></p> <p>For the gray plate the heat influx is simply the heat outflux from the brown plate, <span class="math-container">$\dot{Q}_{in} = \sigma (T_{b}^{4}-T_{g}^{4})$</span>, and the heat outflux is the radiation emitted to space, <span class="math-container">$\dot{Q}_{out} = \sigma T_{g}^{4}$</span>. 
This allows us to write a governing equation for the temperature evolution of the gray plate as,</p> <p><span class="math-container">$$C_g \frac{dT_g}{dt} = \sigma (T_{b}^{4}-T_{g}^{4})-\sigma T_{g}^{4} =\sigma (T_{b}^{4}-2 T_{g}^{4}) $$</span></p> <p>These equations can be cast in non-dimensional form with the following definitions,</p> <p><span class="math-container">$$T_0 =(Q_0/\sigma)^{1/4}, \bar{T} = \frac{T}{T_0},t_0 = \frac{C_b}{\sigma T_0^{3}}\text{, and } \bar{t} = \frac{t}{t_0}$$</span></p> <p>The equations are then,</p> <p><span class="math-container">$$\frac{d\bar{T_b}}{d\bar{t}} = 1 - \bar{T}_{b}^{4}+\bar{T}_{g}^{4}$$</span></p> <p>and</p> <p><span class="math-container">$$\frac{C_g}{C_b} \frac{d\bar{T_g}}{d\bar{t}} = \bar{T}_{b}^{4}-2 \bar{T}_{g}^{4} $$</span></p> <p>This set of two ordinary differential equations for <span class="math-container">$\bar{T_b}$</span> and <span class="math-container">$\bar{T_g}$</span> have a single parameter <span class="math-container">$C_g/C_b$</span> that must be specified along with initial temperatures for each plate. Let’s consider the case where the gray plate is gone and the brown plate has reached its steady state temperature of <span class="math-container">$\bar{T_b}=1$</span>, and at time <span class="math-container">$\bar{t}=0$</span> the gray plate is introduced at a temperature of <span class="math-container">$\bar{T_g}=0$</span>. Let’s also take <span class="math-container">$C_g/C_b = 1$</span>. For this case the time evolution of the temperatures of the brown and gray plates is shown below. 
Note that the steady state temperatures for the plates are <span class="math-container">$T_b=2^{1/4} T_0$</span> and <span class="math-container">$T_g = T_0$</span>.</p> <p><a href="https://i.stack.imgur.com/1pYdh.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/1pYdh.png" alt="enter image description here" /></a></p> <p>So how is it that the brown plate increases in temperature if the gray plate is always colder and there is never any heat flux from the gray to the brown plate?</p> <p>The answer lies in the fact that the <em>net</em> heat outflux from the brown plate changes upon the introduction of the gray plate. Prior to the introduction of the gray plate the brown plate was emitting radiation to 0K space, which is the maximum amount of heat that it can lose per second via radiation. After the gray plate is introduced it warms due to the heat it receives from the brown plate, and thus the brown plate is losing less heat per second than when it was emitting to space. Since it is still receiving 240 <span class="math-container">$W/m^2$</span> from the source, it must warm up until it is transferring 240 <span class="math-container">$W/m^2$</span> to the gray plate. In turn, the gray plate must emit 240 <span class="math-container">$W/m^2$</span> to space at steady state and will be at 255K to do so.</p> <p>The graph below illustrates this behavior, showing the heat influx to the brown plate from the source and the heat outflux to the gray plate. The <em>net</em> is of course positive during the transient, which causes the temperature of the brown plate to increase until the new steady state is achieved. This is in spite of the fact that no heat is ever transferred from the colder gray plate to the hotter brown plate.</p> <p><a href="https://i.stack.imgur.com/yS5n7.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/yS5n7.png" alt="enter image description here" /></a></p>
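<p>For readers who want to reproduce the curves, here is a minimal sketch of the non-dimensional system with <span class="math-container">$C_g/C_b=1$</span> (a simple forward-Euler integration; the figures were presumably produced with a proper ODE solver, but Euler suffices to find the steady state):</p>

```python
# Forward-Euler integration of the non-dimensional plate equations:
#   dTb/dt = 1 - Tb^4 + Tg^4
#   (Cg/Cb) dTg/dt = Tb^4 - 2 Tg^4
ratio = 1.0            # C_g / C_b
dt = 1e-3
Tb, Tg = 1.0, 0.0      # initial conditions from the text

for _ in range(200_000):           # integrate to t-bar = 200
    dTb = 1.0 - Tb**4 + Tg**4
    dTg = (Tb**4 - 2.0 * Tg**4) / ratio
    Tb += dt * dTb
    Tg += dt * dTg

print(Tb, Tg)   # should approach 2**0.25 (about 1.189) and 1.0
```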
|
Physics
|
|newtonian-mechanics|newtonian-gravity|orbital-motion|celestial-mechanics|
|
Orbiting body around a star
|
<p>That equation relating the mass, orbital speed, radial distance, and semi-major axis is known as the <a href="https://en.wikipedia.org/wiki/Vis-viva_equation" rel="nofollow noreferrer">vis-viva</a> equation. Its standard form is</p> <p><span class="math-container">$$v^2 = \mu\left(\frac2r - \frac1a\right)$$</span></p> <p>where <span class="math-container">$\mu$</span> is the <a href="https://en.wikipedia.org/wiki/Standard_gravitational_parameter" rel="nofollow noreferrer">standard gravitational parameter</a> <span class="math-container">$GM$</span>, where <span class="math-container">$M$</span> is the sum of the masses of the two bodies. It's common to neglect the mass of the smaller body when it's much smaller than the mass of the larger body.</p> <p>I have a derivation of the vis-viva equation <a href="https://physics.stackexchange.com/a/676872/123208">here</a></p> <p>In your scenario, the initial velocity vector is perpendicular to the radial vector. Now in a Kepler ellipse, that can <em>only</em> happen at periapsis or apoapsis. That is, your initial <span class="math-container">$r$</span> is either <span class="math-container">$r_a$</span> or <span class="math-container">$r_p$</span>. So we can rearrange vis-viva to calculate <span class="math-container">$a$</span>, and then use <span class="math-container">$2a = r_p + r_a$</span> to calculate the other <span class="math-container">$r$</span>. Then we just compare the two <span class="math-container">$r$</span> values to figure out which one's which.</p> <p>Rearranging, we get</p> <p><span class="math-container">$$\frac1a = \frac2r - \frac{v^2}\mu$$</span> <span class="math-container">$$a = \frac{\mu r}{2\mu - rv^2}$$</span></p> <p>With a little more algebra, we get the other radius,</p> <p><span class="math-container">$$r_2 = \frac{r^2v^2}{2\mu - rv^2}$$</span></p> <hr /> <p>Note that <span class="math-container">$a$</span> becomes infinite when <span class="math-container">$2\mu - rv^2 = 0$</span>. 
That's the escape velocity, which gives a parabolic trajectory. And when <span class="math-container">$2\mu - rv^2 < 0$</span>, <span class="math-container">$a$</span> becomes negative, and we have a hyperbolic trajectory.</p>
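<p>A short sketch of the procedure above, using the Sun's gravitational parameter and a hypothetical starting state (1 au from the Sun, moving at 35 km/s perpendicular to the radial vector):</p>

```python
import math

mu = 1.32712440018e20       # m^3/s^2, standard gravitational parameter of the Sun
r  = 1.495978707e11         # m, 1 au
v  = 35_000.0               # m/s, perpendicular to the radial vector (hypothetical)

denom = 2 * mu - r * v**2
if denom <= 0:
    print("escape: parabolic or hyperbolic trajectory")
else:
    a  = mu * r / denom            # semi-major axis
    r2 = r**2 * v**2 / denom       # the other apsis, from 2a = r + r2
    rp, ra = min(r, r2), max(r, r2)
    print(a, rp, ra)               # here v exceeds circular speed, so r is periapsis
```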
|
Physics
|
|quantum-mechanics|quantum-field-theory|operators|harmonic-oscillator|second-quantization|
|
Ladder operators and creation & annihilation operators - different between $a$, $b$ and $c$
|
<p>When we talk about the quantum harmonic oscillator, <span class="math-container">$a$</span> and <span class="math-container">$a^{\dagger}$</span> are usually used. (<span class="math-container">$a$</span> may stand for "annihilation".) Note that we have not discussed second quantization here, so <span class="math-container">$a$</span> and <span class="math-container">$a^{\dagger}$</span> are not field operators.</p> <p>In the area of many-body physics, we usually use field operators to write the Hamiltonian. For a bosonic system, we tend to write <span class="math-container">$b$</span> and <span class="math-container">$b^{\dagger}$</span>, where I believe <span class="math-container">$b$</span> stands for "boson". For a fermionic system, both <span class="math-container">$c,c^{\dagger}$</span> and <span class="math-container">$f, f^{\dagger}$</span> are widely used. (<span class="math-container">$f$</span> may stand for fermion, while I don't know what <span class="math-container">$c$</span> stands for.)</p> <p>Note that the letter itself is not important; you can use whatever letter you like to represent the annihilation operator: <span class="math-container">$a, b, c, f, ...$</span>. All you have to focus on is their (anti-)commutation relations.</p>
|
Physics
|
|electromagnetism|electric-circuits|electricity|electric-current|potential|
|
Why doesn't charge accumulate in a loop?
|
<p>It depends on how this potential difference is applied to the conductor. The first case seems to correspond to a conductor placed in an external electric field. Then the charges would move to the ends of the conductor and build up a counteracting electric field, so that the field, the potential difference, and thus the current in the conductor become zero. This is a consequence of the charge flow being blocked at the ends. The same would happen to a conducting loop placed in an external field: opposite charges would build up at the ends of the loop in the direction of the applied external field. The situation is different when the potential difference is due to an applied battery, which maintains a potential difference at the ends of a conductor. Then a constant current flows, because the charge entering the conductor from the battery at one electrode flows through the conductor and re-enters the battery at the other electrode, where it is driven back to the first electrode. Thus the charge flow forms a circuit and there is nowhere an accumulation of charge due to this current. Similarly, a closed conductor loop in a changing magnetic field, which produces an electric field by induction along the loop, carries a current without an accumulation of charge. This is a consequence of the law of charge conservation, i.e. the current continuity equation in the stationary case.</p>
|
Physics
|
|forces|fluid-dynamics|drag|
|
Difference between Drag force and virtual force
|
<p>F.W. Bessel introduced the concept of 'added mass' or 'virtual mass' when he observed that the period of a pendulum moving in a fluid differs from that in a vacuum.</p> <p>Consider an object of mass <span class="math-container">$m$</span> connected to a spring of spring constant <span class="math-container">$k$</span> moving inside a fluid of viscosity <span class="math-container">$\mu$</span>. The equation of motion is given by:</p> <p><span class="math-container">$$m\dfrac{d^2x}{dt^2}+\mu\dfrac{dx}{dt}+kx=0$$</span></p> <p>The natural frequency is given by: <span class="math-container">$$\omega=\sqrt{\dfrac{k}{m}}$$</span></p> <p>Note that the drag does not change the undamped natural frequency. So experimentally, if the natural frequency of the pendulum is different in a fluid compared to a vacuum, the effective mass has to be different.</p> <p><span class="math-container">$$\omega=\sqrt{\dfrac{k}{m+m_a}}$$</span></p> <p>Here <span class="math-container">$m_a$</span> is the added mass. Intuitively, one can think of this as some fluid that gets carried along with your pendulum bob as it oscillates. The corresponding extra force you need to supply to account for this extra mass is called the virtual force.</p> <p><span class="math-container">$$(m+m_a)\dfrac{d^2x}{dt^2}+\mu\dfrac{dx}{dt}+kx=0$$</span></p> <p>This is a reasonable picture, since if no fluid moved with the body there would be infinite shear at the boundary.</p> <p><a href="https://i.stack.imgur.com/NtK3M.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/NtK3M.jpg" alt="enter image description here" /></a></p> <p><strong>Difference between drag and added mass</strong></p> <p>Suppose you want to slow down the rotation of a merry-go-round. Drag is like standing outside and rubbing your hands against the wheel to slow it down. Virtual mass is like jumping onto the platform, increasing the effective moment of inertia to slow it down.
Though both are slowing it down, one affects the motion externally while the other changes the intrinsic properties.</p> <p><a href="https://ocw.mit.edu/courses/2-016-hydrodynamics-13-012-fall-2005/6124a1fd14e811d9ea753f85e0dfc8d0_2005reading6.pdf" rel="nofollow noreferrer">Reference 1</a></p> <p><a href="http://brennen.caltech.edu/fluidbook/basicfluiddynamics/unsteadyflows/addedmass/introduction.pdf" rel="nofollow noreferrer">Reference 2</a></p>
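<p>A minimal numerical sketch of the inference described above (all numbers hypothetical): since drag leaves the undamped natural frequency <span class="math-container">$\sqrt{k/m}$</span> unchanged, a measured frequency shift in the fluid can be attributed to an added mass:</p>

```python
import math

# Hypothetical oscillator parameters
k, m = 10.0, 1.0

omega_vacuum = math.sqrt(k / m)        # natural frequency in vacuum

omega_fluid = 2.9                      # measured frequency in fluid (hypothetical)
m_total = k / omega_fluid**2           # effective mass from omega = sqrt(k / m_total)
m_added = m_total - m                  # the inferred added ("virtual") mass

print(omega_vacuum, m_added)
```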
|
Physics
|
|quantum-field-theory|lagrangian-formalism|renormalization|feynman-diagrams|dimensional-analysis|
|
Difference between renormalizable and super-renormalizable theories
|
<ul> <li><p>It should be stressed that Peskin & Schroeder are here using the old Dyson definitions of renormalizability. For a more general derivation of eq. (10.13), see e.g. <a href="https://physics.stackexchange.com/q/481249/2451">this</a> Phys.SE post, which also answers several of OP's questions.</p> </li> <li><p>OP's main question seems to be the following. Assume that the spacetime dimension <span class="math-container">$d>2$</span> and that the theory only has one (scalar) field <span class="math-container">$\phi$</span> with a single coupling constant <span class="math-container">$\lambda$</span>.</p> <p>An <span class="math-container">$N$</span>-point (connected) amplitude/correlation function is a sum of <em>infinitely</em> many (connected) Feynman diagrams with a fixed number <span class="math-container">$N$</span> of external legs.</p> <p>If <span class="math-container">$[\lambda]\geq 0$</span> [corresponding to the (super)renormalizable case], then only for <em>finitely</em> many natural numbers <span class="math-container">$N \leq \frac{d}{[\phi]}$</span>, where <span class="math-container">$[\phi]=\frac{d-2}{2}$</span>, can the <span class="math-container">$N$</span>-point amplitude contain Feynman diagrams<span class="math-container">$^1$</span> with <span class="math-container">$D\geq 0$</span>.</p> <p>(The above argument can be generalized to a theory with <em>finitely</em> many types of fields and coupling constants.)</p> </li> </ul> <p>--</p> <p><span class="math-container">$^1$</span> NB: Be aware that due to possible (UV) divergent subdiagrams, the actual (UV) divergencies may be worse than indicated by <span class="math-container">$D$</span>.</p>
|
Physics
|
|quantum-field-theory|lagrangian-formalism|path-integral|quantum-chromodynamics|ghosts|
|
Ghosts in QCD Lagrangian
|
<p>It's conventional to specify the classical Lagrangian, which does not include ghost terms. (Ghosts only contribute at loop level).</p> <p>One reason not to write ghost terms, when one is speaking generically about the classical Lagrangian of QCD, is that the ghost terms depend on your gauge fixing procedure, so it depends on how you choose to do the calculation, while the classical gauge invariant QCD Lagrangian does not.</p> <p>Another reason is simplicity; if you are writing the classical QCD Lagrangian down, it is probably just to make some very general, high-level points about the degrees of freedom or couplings, or to fix conventions. In other words, equations are tools to express ideas, and you don't want to write an equation down that has superfluous technical content you don't need for your discussion. As an extreme example, <a href="https://visit.cern/content/standard-model-mug" rel="noreferrer">the Standard Model can fit on a coffee cup</a>, but only if you're willing to use a very abstract notation that <a href="https://www.symmetrymagazine.org/article/the-deconstructed-standard-model-equation?language_content_entity=und" rel="noreferrer">leaves very many technical details implicit</a>. However, the abstract notation is fine, for "coffee cup level physics", in the sense that it expresses there is a single equation that can be used to derive all particle physics results, and the various conceptual pieces of the Standard Model are represented.</p> <p>On the other hand, if you are writing a paper about loop level calculations in QCD, you should specify how you are doing the gauge fixing and show the ghost Lagrangian explicitly, but then the classical QCD Lagrangian will only be one of many equations in your paper.</p>
|
Physics
|
|cosmology|space-expansion|redshift|
|
Why are physicists not more concerned that there are too many explanations for redshift in the universe?
|
<p>One of the difficult concepts for people starting in general relativity is that we can and often do use radically different coordinates to describe the same physical system. It often appears that we are using different physical theories when really it is the same theory (GR) just in different coordinates.</p> <p>In this case, the equations that every student learns for the expanding universe are written in a coordinate system called <a href="https://en.wikipedia.org/wiki/Comoving_and_proper_distances#Comoving_coordinates" rel="noreferrer">comoving coordinates</a>. In these coordinates everything in the universe is (approximately) stationary. Since everything is stationary the increasing distances between objects, and the red shift, is interpreted as the universe expanding.</p> <p>However it is also possible to use coordinates in which the universe is not expanding. This isn't often done since the maths gets messy, but it can be done and in this case we get a universe that is not expanding but in which the velocities of objects increase with distance from the observer. In that case the red shift is indeed described by the Doppler equation.</p> <p>So your question is slightly off target. There are not different mainstream explanations for red shift. Instead there are different coordinate systems we can use to describe the same physics. Physicists are not <em>concerned</em> that different coordinate systems exist. Rather the reverse because different coordinate systems can be very useful for doing calculations in different ways.</p>
|
Physics
|
|quantum-mechanics|wavefunction|potential|schroedinger-equation|quantum-tunneling|
|
Why is probability outside the infinite square well zero?
|
<p>If the well is sufficiently deep to bind the particle, the wavefunction decays exponentially outside the well. The rate of exponential decay depends on the depth of the well. For an infinitely deep well, the decay is infinitely fast, so the amplitude is zero outside the well.</p>
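<p>This can be made quantitative for a finite well: outside the well a bound state decays as <span class="math-container">$e^{-\kappa x}$</span> with <span class="math-container">$\kappa=\sqrt{2m(V_0-E)}/\hbar$</span>, so the penetration depth <span class="math-container">$1/\kappa$</span> shrinks to zero as <span class="math-container">$V_0\to\infty$</span>. A small sketch in units <span class="math-container">$m=\hbar=1$</span> with a fixed, hypothetical energy <span class="math-container">$E$</span>:</p>

```python
import math

m, hbar, E = 1.0, 1.0, 1.0     # illustrative units and a hypothetical bound-state energy

depths = []
for V0 in (10.0, 1e3, 1e6):
    kappa = math.sqrt(2 * m * (V0 - E)) / hbar   # decay constant outside the well
    depths.append(1 / kappa)                     # penetration depth

print(depths)   # shrinks toward zero as the well deepens
```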
|
Physics
|
|special-relativity|coordinate-systems|acceleration|speed-of-light|
|
Rindler coordinates and objects possibly exceeding the speed of light
|
<p>I will use the more standard Rindler coordinates (<span class="math-container">$a=1$</span>): <span class="math-container">$$ \begin{align} t &= X\sinh T & x &= X\cosh T \end{align} $$</span> with <span class="math-container">$t,x$</span> the inertial coordinates, <span class="math-container">$T,X$</span> your accelerated frame's coordinates, and <span class="math-container">$a$</span> the proper acceleration. It's essentially the same as your coordinate system, only <span class="math-container">$X$</span> is translated so that the observer performing hyperbolic motion is at position <span class="math-container">$X = 1$</span> (instead of <span class="math-container">$X=0$</span>). It has the advantage of making analogies with polar coordinates easier and of setting the Rindler horizon at <span class="math-container">$X=0$</span> (instead of <span class="math-container">$X=-1$</span>), which is where the coordinate system is singular.</p> <p>Your mistake is to assume that the condition for not crossing the speed of light is: <span class="math-container">$$ \left|\frac{dX}{dT}\right|<1 $$</span> This cannot be true, as the speed-of-light limit would then depend on the coordinate system. It's easier to arrive at a coordinate-independent definition by thinking geometrically. Not crossing the speed of light means that the 4-velocity along the worldline is always time-like, i.e. inside the future region bounded by the future lightcone.
Quantitatively, to check whether the 4-velocity is time-like, you therefore need to apply the metric <span class="math-container">$ds^2$</span> to it and check its sign.</p> <p>In an inertial frame (using the <span class="math-container">$(+,-,-,-)$</span> convention): <span class="math-container">$$ ds^2 = dt^2-dx^2 $$</span> Being time-like means <span class="math-container">$ds^2>0$</span>, so you do recover: <span class="math-container">$$ \left|\frac{dx}{dt}\right|<1 $$</span></p> <p>However, this only applies in inertial frames, where the metric always has the same expression. In a general coordinate system, the expression of the metric is more complicated. Geometrically, the light cone is typically distorted, and physically, you can interpret it as the speed of light acquiring a space-time dependence as well as a directional dependence in general.</p> <p>This is what happens for Rindler coordinates. The metric in the new coordinate system is: <span class="math-container">$$ ds^2 = dt^2-dx^2 = X^2dT^2-dX^2 $$</span> At the event <span class="math-container">$T,X$</span>, the coordinate speed of light is: <span class="math-container">$$ c=|X| $$</span> It therefore acquires a spatial dependence. In your example, the speed of <span class="math-container">$1.5$</span> is reached at a position <span class="math-container">$X>1.5$</span>, so the particle did not cross the speed of light. Remember that even if on your graph <span class="math-container">$X_0\sim 1.1$</span>, my <span class="math-container">$X$</span> is shifted by <span class="math-container">$1$</span>, so <span class="math-container">$X\sim 2.1>1.5$</span>, so you are far from the speed of light.
Note that in your case, at the position of the accelerated observer <span class="math-container">$X = 1$</span>, there is no distortion, as expected, since the local frame is just boosted with respect to an inertial frame.</p> <p>In general, to conserve the value and isotropy of the speed of light, you'll need to use conformal maps, i.e. maps that preserve angles so that the light cone is preserved. For the accelerated frame, you can change the Rindler coordinates to Radar (or Lass) coordinates: <span class="math-container">$$ \begin{align} t &= e^{\tilde X}\sinh T & x &= e^{\tilde X}\cosh T \\ X &= e^{\tilde X} & \tilde X &= \ln X \end{align} $$</span> which are conformal: <span class="math-container">$$ ds^2 = e^{2\tilde X}(dT^2-d\tilde X{}^2) $$</span> This time, the speed of light is always <span class="math-container">$c=1$</span>, as in the inertial case. You can map the inertial motion in this new coordinate system and check that the speed of light is still never crossed.</p> <p>Hope this helps.</p>
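<p>As a small numerical cross-check (at a hypothetical sample event), one can verify by finite differences that the inertial interval <span class="math-container">$dt^2-dx^2$</span> equals <span class="math-container">$X^2dT^2-dX^2$</span> under the map <span class="math-container">$t=X\sinh T$</span>, <span class="math-container">$x=X\cosh T$</span>:</p>

```python
import math

# Map from Rindler coordinates (T, X) to inertial coordinates (t, x)
def to_inertial(T, X):
    return X * math.sinh(T), X * math.cosh(T)

T0, X0 = 0.7, 2.1              # hypothetical event in Rindler coordinates
dT, dX = 1e-6, 2e-6            # small coordinate displacement

t0, x0 = to_inertial(T0, X0)
t1, x1 = to_inertial(T0 + dT, X0 + dX)

ds2_inertial = (t1 - t0)**2 - (x1 - x0)**2   # interval computed in inertial coordinates
ds2_rindler  = X0**2 * dT**2 - dX**2         # interval from the Rindler metric

print(ds2_inertial, ds2_rindler)   # should agree to leading order in (dT, dX)
```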
|
Physics
|
|thermodynamics|
|
How do we know that we only need three second derivatives to represent all the second derivatives of a thermodynamic system?
|
<p>If your thermodynamic system has 3 independent variables, this is true, because any other derivative along some direction in parameter space can be expressed as a linear combination of these. But in general, e.g. if you have 200 particle species, each with a different chemical potential, I'm not sure this statement is correct. (Or imagine other "thermodynamical forces".)</p> <p>EDIT: Addressing the comment: Suppose <span class="math-container">$f(x,y)$</span> depends on two independent variables <span class="math-container">$x$</span>, <span class="math-container">$y$</span>. Now you have a third one, <span class="math-container">$z(x,y)$</span>, which can locally be inverted (this can be done if the assumptions of the implicit function theorem are satisfied). Then the derivative w.r.t. <span class="math-container">$z$</span> looks like:</p> <p><span class="math-container">$\frac{\partial f(z(x,y),x,y)}{\partial z}= \frac{\partial f(z(x,y),x,y)}{\partial x}\frac{\partial x(z,y)}{\partial z}+\frac{\partial f(z(x,y),x,y)}{\partial y}\frac{\partial y(z,x)}{\partial z}$</span></p> <p>By assumption, these quantities are all well defined.</p>
|
Physics
|
|radiation|
|
Beta decay in Au-198 element
|
<p>Here is the decay chart from the <a href="https://pripyat.mit.edu/KAERI/" rel="nofollow noreferrer">KAERI website</a> which shows that your data was missing one of the possible <span class="math-container">$\beta^-$</span> decay modes.</p> <p><a href="https://i.stack.imgur.com/f23Zt.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/f23Zt.jpg" alt="enter image description here" /></a></p>
|
Physics
|
|general-relativity|energy-conservation|gravitational-waves|stress-energy-momentum-tensor|linearized-theory|
|
How do gravitational waves carry energy when gravitational energy cannot be localised?
|
<p>Suppose some matter A emits gravitational waves, and some matter B absorbs those waves, and the spacetime does not have further dynamics apart from this. In such a process the <span class="math-container">$0,0$</span> component of the energy momentum tensor <span class="math-container">$T_{\mu\nu}$</span> integrated over matter A goes down (in a LIF with A at rest) and the <span class="math-container">$0,0$</span> component of energy momentum tensor <span class="math-container">$T_{\mu\nu}$</span> integrated over matter B (in a LIF with B at rest) goes up. That is the sense in which gravitational waves carry energy. It is also the sense in which energy is conserved in general relativity.</p> <p>There does not need to be a conservation overall if the spacetime has further dynamics, but in the case where it does not, the energy accounting is done via the non-linearity of the equations. It is something of a marvel that it all works out correctly.</p> <p>In the weak gravity limit one can extract from the metric a quantity which acts in most respects like a stress-energy tensor for gravitational waves, as long as there is a clear separation of distance scales between the wavelength of the waves and the radius of curvature of the background spacetime on which the waves are propagating. This quantity involves averaging over a wavelength-sized region so it is not well-defined at a point. The suggestion is that the energy to be associated with the wave is owing to the curvature introduced by the wave.</p>
|
Physics
|
|quantum-field-theory|renormalization|interactions|effective-field-theory|non-perturbative|
|
Triviality of $\phi^4$ theory, is it settled now (2024)?
|
<p>It sort of depends on what exactly you mean by "settled". Up to physics standards, the question is definitely settled: it has been understood, for many decades now, that <span class="math-container">$\phi^4$</span> in <span class="math-container">$d=4$</span> is trivial. The only open question is to prove this in a way that mathematicians will find convincing. This is very similar to the <a href="https://en.wikipedia.org/wiki/Yang%E2%80%93Mills_existence_and_mass_gap" rel="nofollow noreferrer">Yang-Mills mass gap problem</a>: physicists have known that YM is gapped for many decades now, it only remains to prove it in a mathematically rigorous way. But the answer is definitely "known".</p> <p>We actually know the answer in any dimension. In <span class="math-container">$4d$</span>, the only non-trivial theory is non-abelian Yang-Mills with a handful of bosons and fermions. If you add too many matter fields, or make the gauge field abelian, you make the theory trivial. In lower dimensions, any gauge theory is non-trivial, but e.g. <span class="math-container">$3d$</span> <span class="math-container">$\phi^6$</span> is again trivial. In <span class="math-container">$2d$</span>, any <span class="math-container">$V(\phi)$</span> is non-trivial.</p> <p>This is the correct, known answer. It is not "settled" only in the sense that we don't know how to prove this rigorously, but only because we don't have a rigorous definition of QFT. As soon as we manage to define QFT in a rigorous way, one should be able to prove these claims rigorously; absolutely nobody expects a surprise here: if your rigorous definition of QFT does not imply triviality of <span class="math-container">$\phi^4$</span>, or a YM gap, then your definition is wrong.</p> <p>It is important to stress that "triviality" in this context does not mean that the theory is boring or useless. 
For example, <span class="math-container">$4d$</span> QED is trivial, but it is obviously a tremendously useful theory, it makes the <a href="https://en.wikipedia.org/wiki/Anomalous_magnetic_dipole_moment" rel="nofollow noreferrer">most accurate physical prediction ever</a>. In this context, "triviality" simply means "infrared triviality", i.e., the coupling constant becomes smaller and smaller the lower the energy of your experiment is. QED and <span class="math-container">$\phi^4$</span>, and many other theories, are infrared trivial, but they are still non-trivial, interacting, useful low-energy effective theories.</p>
|
Physics
|
|rotation|
|
Rotation of disc on smooth surface
|
<p>It is possible for an object to be on a frictionless surface and have both translational and rotational motion, i.e. rolling with slipping. (In fact, if you launch a rotating disc onto a frictionless surface with the right initial conditions, you <em>can</em> have rolling without slipping.) In the case of pure rotation the instantaneous center of rotation is at the center of the disc and not at the point of contact; if the disc rolls, however, then at any instant the center of rotation is at the point of contact.</p> <p>Essentially, in order to be an instantaneous center of rotation, the velocity vectors of the points in the rigid body must form a <em>circular</em> field around the axis in question. In the case where the disc is rolling with slipping, there is at least one point on the line passing through the disc's center and perpendicular to the surface that is an ICR, i.e. the point where the magnitude of the velocity of translation equals the magnitude of the velocity of rotation. This point was brought to my attention by user @Zaph.</p>
|
Physics
|
|statistical-mechanics|temperature|
|
Can energy be any energy in canonical ensembles?
|
<p>When we consider the canonical ensemble (so constant temperature <span class="math-container">$T$</span>), we have that, given that the system has a set of possible energies <span class="math-container">$\{E_i\}$</span>, the probability of finding the system with energy <span class="math-container">$E_j$</span> is <span class="math-container">$P \propto e^{-E_j/k_B T}$</span>. This can be any kind of energy <span class="math-container">$E$</span>, as long as the system under consideration is in thermal equilibrium with an environment (also called a heat bath).</p>
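<p>A minimal numerical sketch of these Boltzmann weights (the function name is mine, not from the answer; note the minus sign in the exponent):</p>

```python
import math

def boltzmann_probs(energies, kT):
    """Canonical-ensemble probabilities P_j proportional to exp(-E_j / kT)."""
    # Subtract the minimum energy first, for numerical stability;
    # this cancels out in the normalized ratio.
    e0 = min(energies)
    weights = [math.exp(-(E - e0) / kT) for E in energies]
    Z = sum(weights)  # partition function (up to the common factor)
    return [w / Z for w in weights]

# Three-level toy system, energies in units of kT
p = boltzmann_probs([0.0, 1.0, 2.0], kT=1.0)
print(p)  # lowest level is most probable
```

The probabilities sum to one, and higher energies are exponentially suppressed, as the Boltzmann factor requires.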
|
Physics
|
|homework-and-exercises|newtonian-mechanics|forces|orbital-motion|propulsion|
|
Applying 1 gram-force for 1000 seconds to 1kg mass
|
<p>According to Newton’s laws, <span class="math-container">$F=ma$</span>. If a net force is applied to a body it <em>will</em> accelerate. With no net force the body remains in a state of constant motion, either at rest or with constant velocity in a straight line.</p> <p>When a body is accelerated, its velocity is <span class="math-container">$at$</span> plus any initial velocity <span class="math-container">$v_0$</span>. Inserting this into the initial equation for force we find <span class="math-container">$v=Ft/m +v_0$</span>.</p> <p>If we let 1 gram-force = 0.0098 N, then the velocity of a 1 kg object after 1000 s starting from rest is easy to calculate.</p> <p>This is true as long as the velocity of the body is small compared to <span class="math-container">$c$</span>. For relativistic velocities you need a different equation. However, that regime is nowhere near reached after this quarter of an hour of acceleration with such a small force.</p> <p>As InfiniteZero pointed out in his comment, for a more accurate answer you will have to take into account the decreasing mass of your rocket as fuel is expended. The simple equation becomes <span class="math-container">$F=m(t)a$</span>, where mass is a function of time, and you will have to use integration to get the final answer.</p>
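<p>Carrying out that easy calculation (a sketch of mine; constant mass assumed throughout):</p>

```python
def final_velocity(force_N, time_s, mass_kg, v0=0.0):
    """v = F*t/m + v0, valid while v << c and the mass is constant."""
    return force_N * time_s / mass_kg + v0

# 1 gram-force ~ 0.0098 N applied to 1 kg for 1000 s, starting from rest
v = final_velocity(0.0098, 1000.0, 1.0)
print(v)  # 9.8 m/s, far below any relativistic regime
```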
|
Physics
|
|tensor-calculus|group-theory|representation-theory|lie-algebra|
|
Multiplying two $SO(3)$ representations
|
<p><strong>Question 1</strong>. I don't have that text, but Tony is "cheating" by reminding himself that, in adding spin 2 to spin 1 (your 5⊗3), he gets a spin 3, a spin 2, and a spin 1: your more tasteful 7⊕5⊕3. His (2⋅3+1)=7 reminds you spin 3 is a septuplet, is all.</p> <p>Response to <strong>Question 2</strong> and comments on it. Your basic building block <span class="math-container">$P^{ijk} = S^{ij} T^k$</span> is symmetric in (ij) but has no specific symmetry with respect to interchange of the latter two indices jk, so it resolves to two pieces, (jk) and [jk], <span class="math-container">$$P^{ijk} = (S^{ij} T^k + S^{ik} T^j)/2 +(S^{ij} T^k - S^{ik} T^j)/2 \equiv R^{i(jk)} + Q^{i[jk]}, $$</span> and you may observe that any vestige of <span class="math-container">$ Q$</span> has disappeared in your U and <span class="math-container">$\tilde U$</span>.</p> <p>The antisymmetric pieces in <em>Q</em> are then converted by ε to the mixed-symmetry traceless <span class="math-container">$$ V^{il}= W^{(il)} + X^{[il]}, $$</span> and you got the <em>W</em> part.</p>
|
Physics
|
|electromagnetism|magnetic-fields|isotopes|cold-atoms|ion-traps|
|
Ultracold magnetic traps for uranium enrichment
|
<p>In principle yes: one can enrich elements with magnetic traps. Other traps are in principle also feasible for enriching elements (magneto-optical traps).</p> <p>HOWEVER (this is a big however!), typically magnetic traps can trap only very few atoms (e.g. ~10^8 for rubidium).</p> <p>Trapping 10^8 uranium atoms (which is very hard) corresponds to only ~40 femtograms! Let's say you can trap these atoms in 1 s; you would then need 792.7 million years to enrich 1 kg of uranium.</p>
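<p>The arithmetic behind these numbers can be sketched as follows (constants rounded; the trap capacity and the one-second cycle time are the optimistic assumptions from above):</p>

```python
ATOMS_PER_LOAD = 1e8                # optimistic trap capacity
U238_MASS_KG = 238 * 1.66054e-27    # mass of one uranium-238 atom
SECONDS_PER_YEAR = 3.1536e7

mass_per_load = ATOMS_PER_LOAD * U238_MASS_KG  # ~4e-17 kg, i.e. ~40 femtograms
loads_needed = 1.0 / mass_per_load             # loads per kilogram
years = loads_needed * 1.0 / SECONDS_PER_YEAR  # at one load per second
print(mass_per_load, years)  # ~4e-17 kg, on the order of 8e8 years
```

Depending on rounding one lands near the ~790 million years quoted above; the order of magnitude is the point.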
|
Physics
|
|classical-mechanics|lagrangian-formalism|conventions|notation|
|
Why do we multiply the Euler-Lagrange equations by negative one?
|
<p>If you were to prefer one over the other, perhaps you could argue for</p> <p><span class="math-container">$$\frac{\partial L}{\partial q_k} - \frac{d}{dt} \left( \frac{\partial L}{\partial \dot{q}_k} \right) = 0.$$</span></p> <p>since that's what directly follows from the condition that the integrand must be 0 in the <a href="https://web.mit.edu/jgross/Public/least_action/Principle%20of%20Least%20Action.pdf" rel="nofollow noreferrer">principle of least action</a>.</p> <p>But in physics there's generally no need for a "deep reason" to multiply an equation by -1 to make it look "nicer" or more convenient.</p>
|
Physics
|
|electromagnetism|magnetic-fields|electric-current|
|
Does a single moving charge produce magnetic field in an empty universe?
|
<p>According to our current understanding of electromagnetism, a single moving charge in an empty universe would not produce a static magnetic field in the reference frame where the charge is at rest.</p> <p>Here's why:</p> <ul> <li><strong>Magnetic field arises from relative motion:</strong> Magnetism is a consequence of the theory of relativity. It's not an absolute property of a single moving charge. The magnetic field is created by the relative motion between the observer and the charge.</li> <li><strong>Reference frame dependence:</strong> If you are in the reference frame where the charge is stationary, you wouldn't observe a magnetic field, because there's no relative motion. However, an observer in a different reference frame where the charge is moving would detect a magnetic field.</li> </ul> <p>However, there's a twist:</p> <p><strong>Time-dependent magnetic field:</strong> Even in the reference frame where the charge is at rest, there can be a time-dependent magnetic field. As the charge accelerates or decelerates, a changing electric field is produced, and according to Maxwell's equations, a changing electric field creates a magnetic field. So, a single moving charge wouldn't produce a static magnetic field in its rest frame, but it could produce a time-dependent magnetic field during acceleration/deceleration.</p>
|
Physics
|
|waves|
|
Wavefront of concentric Water Waves
|
<p>In a water wave, molecules travel in a circle as the wave passes. The crest of the wave is where the molecules are at the top of their circle.</p> <p>The surface where all molecules are at the top is a wavefront. You could also pick any other phase: at the bottom, or at <span class="math-container">$37^\circ$</span>.</p> <p>For a linear wave with a single frequency in deep water, the wave fronts are vertical planes. The amplitude of the circles gets smaller with depth, until they are too small to measure.</p> <p>In shallow water, waves slow down. The back part catches up with the front. The wave gets taller and steeper until it breaks. Wave fronts are still linear along simple shorelines.</p> <p>If you have multiple wavelengths, it gets more complex. Different frequencies travel at different speeds. Fourier analysis will resolve a complex wave into different components. You can assign wave fronts to each component. But I am not sure how you would define it for the sum.</p> <p>Curved wave fronts also get messy. Looking at the picture, you can see general shapes that correspond to wave fronts. But there are places where crests end, or it isn't clear if there is a crest or not. It is hard to say what wave fronts would be there.</p>
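<p>For the single-frequency deep-water case, linear wave theory (an addition of mine, not spelled out in the answer) makes the two statements above quantitative: longer waves travel faster, and the circular orbits shrink exponentially with depth:</p>

```python
import math

g = 9.81  # m/s^2

def deep_water_phase_speed(wavelength_m):
    """c = sqrt(g * wavelength / (2*pi)) for linear deep-water waves."""
    return math.sqrt(g * wavelength_m / (2 * math.pi))

def orbit_radius(surface_amplitude, depth_m, wavelength_m):
    """Radius of the circular orbits decays as exp(-k z) with depth z."""
    k = 2 * math.pi / wavelength_m
    return surface_amplitude * math.exp(-k * depth_m)

print(deep_water_phase_speed(100.0))   # long swell moves faster...
print(deep_water_phase_speed(1.0))     # ...than short ripples
print(orbit_radius(1.0, 50.0, 100.0))  # orbits shrink quickly with depth
```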
|
Physics
|
|electromagnetic-radiation|wavelength|
|
Emission spectrum of a fluorescent lamp
|
<p>The emission lines you detect are only those that make it all the way through the phosphor coating on the inside of the CFL tube. Those phosphors are designed to absorb as much of the spectral output of the ionized mercury as possible and convert it into a mixture of red, green and blue light that your eye interprets as "white". Based on this, it isn't surprising that you don't detect all the lines.</p> <p>It may also be that, in your case, the current flowing through the tube is not sufficient to excite all the lines that are possible.</p>
|
Physics
|
|general-relativity|black-holes|kerr-metric|
|
Perimeter of Kerr's event horizon
|
<p>The horizon is a <em>null hypersurface</em>; the metric induced on it is degenerate, so it does not matter how you define the hypersurface <span class="math-container">$t=\mathrm {const}$</span>: the circumference of the equator would not change.</p>
|
Physics
|
|optics|experimental-physics|laser|interferometry|fiber-optics|
|
Why fiber optic Sagnac interferometers don't produce multiple interference fringes
|
<p>The fibre is a single mode fibre and the phase is therefore associated with the mode as a whole varying along the longitudinal direction. The interference thus only determines which output port gets how much of the wave. There is no transverse phase variation.</p>
|
Physics
|
|quantum-field-theory|operators|quantum-interpretations|observables|
|
Dictionary between interpretations of field operators
|
<p>Assuming that you are interested in the physical interpretation of the hermitean field operator of a free <em>relativistic</em> scalar theory in the Heisenberg picture, its Fourier decomposition (omitting the hat indicating the operator) is given by <span class="math-container">$$\begin{align} \phi(t, \vec{x})&=\int\limits_{\mathbb{R}^3} \!\frac{d^3 p}{(2\pi)^3 2 \omega(\vec{p})} \left(a(\vec{p})e^{-i \omega(\vec{p})t}e^{i\vec{p} \cdot \vec{x}}+ a^\dagger(\vec{p}) e^{i \omega(\vec{p}) t} e^{-i\vec{p} \cdot \vec{x}} \right), \\[5pt] \omega(\vec{p})&=\sqrt{\vec{p}^2+m^2}, \end{align} \tag{1} \label{1}$$</span> where the annihilation and creation operators <span class="math-container">$a(\vec{p})$</span> and <span class="math-container">$a^\dagger(\vec{p})$</span> satisfy the commutation relations <span class="math-container">$$\left[a(\vec{p}), a^\dagger (\vec{p}^\prime) \right]=(2\pi)^3 2 \omega(\vec{p}) \delta^{(3)}(\vec{p}-\vec{p}^\prime), \quad \left[a(\vec{p}), a(\vec{p}^\prime) \right]=0. \tag{2} \label{2}$$</span> Usually \eqref{1} is written in the more compact form <span class="math-container">$$\phi(x)= \int \! d\mu(p) \left (a(p) e^{-ip\cdot x}+ {\rm h.c.} \right) \tag{3} \label{3}$$</span> with <span class="math-container">$$x=(t, \vec{x}), \quad p^0=\omega(\vec{p}), \quad p=(p^0, \vec{p}), \quad d\mu(p)=\frac{d^3 p}{(2\pi)^3 2 p^0}. \tag{4} \label{4}$$</span></p> <ol> <li><p>Your first assertion, interpreting \eqref{3} as an (hermitean) operator associated with an observable field, is essentially correct, apart from the technical complication that <span class="math-container">$\phi(x)$</span> is a highly singular object kicking a normalized state vector <span class="math-container">$|\psi\rangle \in \mathcal{H}$</span> (<span class="math-container">$\mathcal{H}$</span> denotes the Fock space of the theory) out of your Hilbert space, i.e. <span class="math-container">$\phi(x) | \psi \rangle\notin \mathcal{H}$</span>. 
This disease can be cured by considering smeared-out operators <span class="math-container">$$\phi_f= \int \! d^4x \, f(x) \phi(x), \tag{5} \label{5}$$</span> where <span class="math-container">$f$</span> is a suitable function with compact support on some finite domain of space-time such that <span class="math-container">$\phi_f|\psi\rangle \in \mathcal{H}$</span>. For this reason, <span class="math-container">$\phi(x)$</span> is referred to as an "operator-valued distribution".</p> </li> <li><p>Your second assertion is <em>wrong</em>. As can be seen from \eqref{3}, <span class="math-container">$\phi(x)$</span> is a linear combination of creation <em>and</em> annihilation operators. On top of that, interpreting <span class="math-container">$$\phi(t, \vec{x}) |0\rangle= \int \! d\mu(p) e^{ip\cdot x} a^\dagger(p) |0\rangle \tag{6} $$</span> as a (non-normalized) state of a particle located at the point <span class="math-container">$\vec{x}$</span> at the time <span class="math-container">$t$</span> is <em>misleading</em> as <span class="math-container">$$ \langle0 |\phi(t, \vec{x}^\prime) \phi(t, \vec{x})|0\rangle \ne \delta^{(3)}(\vec{x}-\vec{x}^\prime), \tag{7} \label{7}$$</span> in contrast to the field operator <span class="math-container">$\Psi(t, \vec{x})$</span> of the second-quantized <em>nonrelativistic</em> Schrödinger theory, where <span class="math-container">$\Psi^\dagger(t=0, \vec{x}) |0\rangle$</span> indeed corresponds to the "state" <span class="math-container">$|\vec{x}\rangle$</span>, where a single particle is located at the point <span class="math-container">$\vec{x}$</span>. Note that \eqref{7} reflects the fact that a single particle in a <em>relativistic</em> quantum field theory cannot be localized at a point <span class="math-container">$\vec{x}$</span>, demonstrating that the notion of "state-vectors" <span class="math-container">$|\vec{x}\rangle$</span> only makes sense in the <em>nonrelativistic approximation</em>. 
However, any element <span class="math-container">$|\psi\rangle$</span> of the Hilbert space <span class="math-container">$\mathcal H$</span> of the relativistic theory can be written as a linear combination of products of <span class="math-container">$\phi_f$</span>'s acting on the vacuum, <span class="math-container">$|\psi \rangle= \sum_n c_{i_1 \ldots i_n} \phi_{f_{i_1}} \ldots \phi_{f_{i_n}}|0\rangle$</span>. According to the Reeh-Schlieder theorem, the supports of the functions <span class="math-container">$f_{i_k}$</span> in the previous sum may even be restricted to a <em>common</em> finite (open) region <span class="math-container">$\Omega$</span> of space-time, i.e. <span class="math-container">${\rm supp} \, f_{i_k} \subset \Omega$</span> for all <span class="math-container">$i_k$</span>.</p> </li> <li><p>It is not clear to me what you mean by "cynical" and which "treatments" you have in mind. Of course, <span class="math-container">$\phi(x)$</span> plays an important role as the basic building block in constructing all other operators corresponding to observables of the theory. Because of the aforementioned singular behaviour of the field operator, one encounters additional technical complications defining products of <span class="math-container">$\phi(x)$</span> at the same space-time point (requiring the necessity of "renormalization" already in the free quantum field theory). A simple example, found in all text-books, is the energy-momentum operator <span class="math-container">$P^\mu$</span> written as a linear combination of normally ordered products of <span class="math-container">$\phi(x)$</span>, <span class="math-container">$$ P^\mu = \int \! d^3x \,: \phi(x) i \partial^\mu \phi(x): =\int \! d\mu(p) p^\mu a^\dagger(p) a(p).\tag{8}$$</span></p> </li> </ol> <p>Edit: In view of some of the comments, let me add a few remarks on "observables" in quantum field theories. 
Apart from the obvious requirements of hermiticity (more precisely, self-adjointness), being of "bosonic" type (reducing the candidates for observables in fermionic field theories to bilinears of fermion field operators and products thereof) and gauge invariance in local gauge theories, the class of operators qualifying as observables is - to some extent - a matter of taste. In my opinion, excluding the field operator of a hermitean scalar field theory (or its smeared-out cousins) from the list of observable quantities (as suggested by @TobiasFünke in a comment) seems to be too strict.</p> <p>Let me give you a well known example where measuring <span class="math-container">$\phi(x)$</span> (or <span class="math-container">$\phi_f$</span>) makes perfect sense. Consider the coherent state <span class="math-container">$$\begin{align} | c \rangle &= e^{ \int \!d\mu(k) \left(c(k) a^\dagger(k)-c(k)^\ast a(k)\right)} |0\rangle \\[5pt] &= e^{-\int \!d\mu(k) |c(k)|^2/2}\, e^{\int \! d\mu(k) c(k) a^\dagger(k)} |0\rangle, \quad \langle c|c\rangle =1,\end{align} \tag{9} \label{9}$$</span> describing a "classical" state of the scalar field. The expectation value of the field operator in the state <span class="math-container">$|c\rangle$</span> is given by <span class="math-container">$$\langle c |\phi(x) |c\rangle = \int \! d\mu(k) \left(e^{-ik \cdot x} c(k) +e^{i k \cdot x} c^\ast(k) \right), \tag{11} \label{11}$$</span> being a classical solution of the Klein-Gordon equation. At the same time, \eqref{11} implies <span class="math-container">$$\begin{align}\langle c |\phi_f |c\rangle &= \int \! 
d\mu(k) \left(\tilde{f}(k) c(k)+ \tilde{f}^\ast(k) c^\ast(k) \right), \\[5pt] \tilde{f}(k) &= \int \!d^4x \, e^{-ik\cdot x}f(x),\end{align} \tag{12} \label{12}$$</span> for the expectation value of <span class="math-container">$\phi_f$</span>.</p> <p>It is (in principle) straightforward to adapt these ideas to the case of the electromagnetic field, replacing the scalar field by the photon field. The analogue of the state \eqref{9} of the electromagnetic field corresponds to a coherent superposition of energy-momentum eigenstates with all possible numbers of photons, describing a classical electromagnetic wave (e.g. the electromagnetic field produced by a radio transmitter). The "quantum-field-o-meter" is a receiver measuring the electric field <span class="math-container">$\vec{E}$</span> averaged over a certain space-time volume (corresponding to its space-time resolution). This observable can be modelled by introducing a suitable normalized weight function <span class="math-container">$f(x)$</span> (in the spirit of \eqref{5}) leading to a space-time averaged operator <span class="math-container">$\vec{E}_f(t, \vec{x})$</span> of the electric field, taming at the same time (potentially divergent) quantum fluctuations in expressions like <span class="math-container">$\langle c | \vec{E}_f(t, \vec{x})^2 |c\rangle-\langle c |\vec{E}_f(t,\vec{x})|c\rangle^2$</span>.</p>
|
Physics
|
|thermodynamics|energy|temperature|entropy|
|
Does the notion of temperature depend on the zeroth law of thermodynamics?
|
<p>Yes, there are different notions of temperature and they are not the same.</p> <p>The notion of temperature arising from the zeroth law is that you can assign some number to objects describing the direction of heat flows. This is not sufficiently precise to lead to a unique quantity temperature – even beyond the problem of deciding a unit or zero point (Celsius vs. Fahrenheit vs. Réaumur vs. …). This is because the zeroth law cannot tell you anything about how this temperature scales. For example the square of everyday temperature (measured in Kelvin) or the logarithm would be a temperature complying with the zeroth law. Only when we impose further empirical constraints such as quantifying the heat flow or thermal expansion scaling nicely with temperature differences, we get a unique physical quantity (and can then fight about units). Let’s call this <em>empirical temperature.</em></p> <p>The entropy-based definition coincides with empirical temperature in many applications, but not all. For example, it allows for negative temperatures, which do not comply with the zeroth law when it comes to the direction of heat flows: Heat would be flowing from any negative-entropy-temperature object to any positive-entropy-temperature object. Thus negative-entropy-temperature objects have a higher empirical temperature than positive-entropy-temperature objects.</p> <p>Finally, a note regarding the last paragraph of your second text: The formulation of the zeroth law used in that text wouldn’t induce a notion of temperature as it is not about the direction of heat flows. Therefore, it is not in conflict with the entropy definition of temperature.</p> <p>Also see <a href="https://physics.stackexchange.com/a/727798/36360">this answer of mine on another question</a>.</p>
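<p>The negative-temperature point can be made concrete with a toy model of my own choosing (not from the answer): <span class="math-container">$N$</span> two-level systems with <span class="math-container">$n$</span> excited units have entropy <span class="math-container">$S/k_B = \ln \binom{N}{n}$</span>, and the entropy-based <span class="math-container">$1/T \propto dS/dE$</span> turns negative once more than half the units are excited:</p>

```python
import math

def entropy(N, n):
    """S / k_B = ln C(N, n) for N two-level systems with n excited."""
    return math.lgamma(N + 1) - math.lgamma(n + 1) - math.lgamma(N - n + 1)

N = 1000

def beta(n):
    # 1/T is proportional to dS/dE; one excitation raises E by one unit,
    # so a finite difference in n approximates the derivative.
    return entropy(N, n + 1) - entropy(N, n)

print(beta(200))  # positive: ordinary positive temperature
print(beta(800))  # negative: population inversion, negative temperature
```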
|
Physics
|
|quantum-mechanics|hilbert-space|wavefunction|mathematical-physics|quantum-states|
|
When do two state functions represent the same quantum state?
|
<p>The set of wavefunctions that correspond to the same state as <span class="math-container">$\psi$</span> is just the set of multiples of <span class="math-container">$\psi$</span> by a non-zero complex number (respectively, a complex number of absolute value <span class="math-container">$1$</span>, if you consider only normalized wavefunctions).</p> <p>The Aharonov-Bohm effect does not change this. What it changes is that the Hilbert space is no longer the space of <span class="math-container">$L^2$</span> functions on <span class="math-container">$M$</span>, but rather the space of <span class="math-container">$L^2$</span> sections of a hermitian line bundle, whose connection is the magnetic field.</p> <p>Concretely, this means that you can only identify a wavefunction with a complex valued function after choosing a gauge on a simply connected patch of <span class="math-container">$M$</span>. If you do that on two different patches <span class="math-container">$U $</span> and <span class="math-container">$V$</span>, one wavefunction <span class="math-container">$\psi$</span> will yield two complex valued functions which differ by a phase on the overlap <span class="math-container">$U\cap V$</span> (even though they are still restrictions of the same element of the Hilbert space). This phase is a transition function which is part of defining the quantum system (it is part of the data defining the Hilbert space).</p> <p><strong>Edit</strong>: Wave-functions as sections of a fiber bundle</p> <p>I don't have a specific reference at hand right now, but I would be searching for key-words like "geometric phases" or "geometry of the Aharonov-Bohm effect". 
In geometric quantization, wave-functions appear naturally as sections of a line bundle (see the discussion <a href="https://mathoverflow.net/questions/312095/physical-intuition-behind-prequantization-spaces">here</a> for example), but this is a mathematics thing so it doesn't really address the physical reasoning behind this.</p> <p>Let me try to give some explanation: in the presence of a magnetic field, the Hamiltonian and the Schrödinger equation are written in terms of the vector potential <span class="math-container">$\vec A$</span>: <span class="math-container">$$H = \frac{1}{2m}(p-qA)^2$$</span> To preserve gauge invariance, when we change the vector potential <span class="math-container">$\vec A\to \vec A +\nabla \lambda$</span>, we need to also change the wavefunction by a phase: <span class="math-container">$\psi \to e^{-iq\lambda }\psi$</span>. Hence: <em>we can only hope for the wave-function to be a complex valued function once a gauge is specified</em>. To see the right framework to describe the wavefunction, we need to look into the geometry of gauge theory.</p> <p>In the case of the Aharonov-Bohm effect, we can consider the whole of space, with non-vanishing magnetic field inside the flux tube. We can choose a vector potential defined over the whole of <span class="math-container">$\mathbb R^3$</span> and solve the Schrödinger equation as usual, with wave-functions being just complex-valued functions.</p> <p>We may also consider the flux tube to be infinitely thin and remove it from our space, which becomes <span class="math-container">$M = \mathbb R \times (\mathbb R^2 \backslash \{ 0\})$</span>. Here, the magnetic field vanishes uniformly. Again, we can find a gauge defined over the whole of <span class="math-container">$M$</span> (for example by taking the previous realistic solution and taking the width of the tube to <span class="math-container">$0$</span>). 
We cannot choose <span class="math-container">$A = 0$</span> however, because the integral of <span class="math-container">$A$</span> along a path encircling the flux tube is the magnetic flux <span class="math-container">$\Phi\neq 0$</span>.</p> <p>We can also divide <span class="math-container">$M$</span> into two simply connected (open) subsets <span class="math-container">$U$</span> and <span class="math-container">$V$</span> (the left and right side of the tube). On both <span class="math-container">$U$</span> and <span class="math-container">$V$</span>, we can take <span class="math-container">$A = 0$</span> (which makes solving the Schrödinger equation easier!). Formally, we do this by starting with the previous vector potential and performing a gauge transformation with <span class="math-container">$\nabla \lambda_{U,V} = -A$</span> on <span class="math-container">$U$</span> (resp. on <span class="math-container">$V$</span>). The transformed wavefunctions are <span class="math-container">$\psi_{U,V} = e^{-iq\lambda_{U,V}}\psi$</span>. On the overlap <span class="math-container">$U\cap V$</span>, those two wavefunctions satisfy <span class="math-container">$\psi_V = e^{iq(\lambda_U-\lambda_V)} \psi_U$</span>.</p> <p>Mathematically, this is the formula relating two different trivializations of a hermitian line bundle: at each point of <span class="math-container">$M$</span>, the wave-function takes its values in a different copy of <span class="math-container">$\mathbb C$</span>; the vector potential is the connection which allows us to differentiate the wave function; on small enough open subsets, you can identify the wavefunctions with complex valued functions, but you have to change gauge and multiply by a phase when switching from one of those small patches to another.</p>
|
Physics
|
|condensed-matter|solid-state-physics|dispersion|ferromagnetism|domain-walls|
|
Derivation in a Landau-Lifshitz ferromagnetism paper
|
<p>As you wrote, the expression is for the energy <em>density</em>, so its dimensions are [Energy / volume]. This means that <span class="math-container">$\alpha$</span> has dimensions of <span class="math-container">$[E a^2 s^{-2} a^{-3}] = [E s^{-2} a^{-1}]$</span>, as in their formula.</p>
|
Physics
|
|classical-mechanics|rotational-dynamics|angular-momentum|definition|
|
Why is angular momentum defined so?
|
<p>The concept of angular momentum has a precursor: Kepler's law of areas. In retrospect, Kepler's law of areas is an instance of conservation of angular momentum.</p> <p>Isaac Newton showed in his work 'Principia' that Kepler's law of areas generalizes to <em>any</em> central force.</p> <p>There is a 2022 answer (by me) that presents the logic of <a href="https://physics.stackexchange.com/a/702994/17198">Newton's derivation of the law of areas</a></p> <p>The key concept in Newton's derivation is that if the force that is involved is a central force then there is a rotational symmetry; the reasoning is independent of the orientation.</p> <p>The area law shows that there is a conserved quantity that is proportional to <span class="math-container">$\omega r^2$</span></p> <p>Angular momentum is an entity that exists in a plane; in order to have an angular momentum at all there must be some form of circumnavigating motion, so the minimum space that is needed is a plane.</p> <p>One way to motivate the concept of moment of inertia is to require consistency between linear kinetic energy and angular kinetic energy.</p> <p>Let's say a car is coasting along a straight section of road, at some velocity <span class="math-container">$v$</span>, so we attribute a kinetic energy of <span class="math-container">$\tfrac{1}{2}mv^2$</span><br /> Now let the car be on a banked circuit, entering a corner. 
The car is moving along a circular path now, but obviously it still has the same kinetic energy.</p> <p>We can express the kinetic energy in terms of the instantaneous tangent velocity vector, or we can express the kinetic energy in terms of <em>angular velocity</em></p> <p>We have: <span class="math-container">$v = \omega r$</span></p> <p>If we want to express kinetic energy in terms of angular velocity, then in order to have overall consistency:</p> <p><span class="math-container">$$ \tfrac{1}{2}mv^2 = \tfrac{1}{2}m\omega^2 r^2 $$</span></p> <p>Rearranging the second expression:</p> <p><span class="math-container">$$ \tfrac{1}{2}mr^2 \omega^2 $$</span></p> <p>That suggests making the quantity <span class="math-container">$mr^2$</span> a standardized item: <span class="math-container">$I=mr^2$</span>.</p> <p>Then angular momentum is <span class="math-container">$I\omega$</span> and angular kinetic energy is <span class="math-container">$ \tfrac{1}{2}I \omega^2 $</span></p> <p>So we have two lines of reasoning that slot in with each other: the law of areas, and self-consistent attribution of kinetic energy</p> <p>It seems to me that to the scholars who worked in the century or so after Newton the concept of moment of inertia defined as <span class="math-container">$mr^2$</span> was pretty much inescapable.</p>
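<p>The consistency argument above can be checked numerically (arbitrary values; a point mass at radius <span class="math-container">$r$</span> is assumed):</p>

```python
m, r, omega = 2.0, 1.5, 3.0  # arbitrary mass, radius, angular velocity
v = omega * r                # tangential speed

ke_linear = 0.5 * m * v**2
I = m * r**2                 # moment of inertia of a point mass
ke_angular = 0.5 * I * omega**2

print(ke_linear, ke_angular)  # identical, by construction of I = m r^2
```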
|
Physics
|
|general-relativity|metric-tensor|perturbation-theory|klein-gordon-equation|qft-in-curved-spacetime|
|
Dervation of the first-order Klein-Gordon equation
|
<p>In your case, one has <span class="math-container">$$ \sqrt{-g}=\sqrt{a^8(1-2\Phi)(1-2\Psi)^3}\approx a^4\left(1-\Phi-3\Psi\right). $$</span></p>
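<p>A quick numerical sanity check of this first-order expansion (pure Python; the values of <span class="math-container">$a$</span>, <span class="math-container">$\Phi$</span>, <span class="math-container">$\Psi$</span> are arbitrary choices of mine):</p>

```python
# Check sqrt(a^8 (1 - 2*Phi) (1 - 2*Psi)^3) ~ a^4 (1 - Phi - 3*Psi)
# to first order in the potentials, as in the answer.
a = 1.3
diffs = []
for Phi, Psi in [(1e-4, 2e-4), (-3e-4, 1e-4)]:
    exact = (a**8 * (1 - 2*Phi) * (1 - 2*Psi)**3) ** 0.5
    linear = a**4 * (1 - Phi - 3*Psi)
    diffs.append(abs(exact - linear))  # should be second order, ~1e-8
print(diffs)
```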
|
Physics
|
|newtonian-mechanics|rotation|
|
Understanding the relationship between the angular velocity and angular acceleration vectors
|
<p>So my understanding is that you are always finding the direction of the angular velocity and calling that the <span class="math-container">$z$</span>-direction, such that <span class="math-container">$\boldsymbol{\omega} = \omega \boldsymbol{k}$</span> where <span class="math-container">$\boldsymbol{k}$</span> is the unit vector in the <span class="math-container">$z$</span>-direction. The important thing to note is that this <span class="math-container">$z$</span>-axis rotates in space for a general motion and also rotates with respect to an observer that is rotating with the rigid body. I point this out because in some descriptions of rigid body motion, axes that are attached to and rotating with the rigid body are used. This is not that case. In this case the angular acceleration has to include both the time derivative of the magnitude of the angular velocity, and the time derivative of <span class="math-container">$\boldsymbol{k}$</span>.</p> <p><span class="math-container">$$\boldsymbol{\alpha}=\frac{d\boldsymbol{\omega}}{dt}=\frac{d\omega}{dt}\boldsymbol{k}+\omega\frac{d\boldsymbol{k}}{dt}$$</span></p> <p>Note that if <span class="math-container">$\boldsymbol{k}$</span> were attached to the rigid body then we would have <span class="math-container">$\frac{d\boldsymbol{k}}{dt}=\boldsymbol{\omega}\times\boldsymbol{k}$</span>. But again, <span class="math-container">$\boldsymbol{k}$</span> is not attached to the rigid body in your description except for simple cases where there is some reason that axis is fixed, e.g. 2-D problems.</p>
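<p>The decomposition above can be verified numerically; here is a sketch with an illustrative angular velocity (my own choice, not from the answer) whose magnitude and axis direction both change with time:</p>

```python
import math

def omega_vec(t):
    """Angular velocity vector: growing magnitude about a tilting unit axis."""
    w = 1.0 + 0.5 * t                                # |omega|(t)
    k = (math.sin(0.2 * t), 0.0, math.cos(0.2 * t))  # unit vector k(t)
    return tuple(w * c for c in k)

def num_deriv(f, t, h=1e-6):
    """Central finite difference of a vector-valued function."""
    lo, hi = f(t - h), f(t + h)
    return tuple((b - a) / (2 * h) for a, b in zip(lo, hi))

t = 1.0
alpha = num_deriv(omega_vec, t)  # alpha = d(omega)/dt, computed numerically

# Compare with the decomposition (dw/dt) k + w (dk/dt)
w, dw = 1.0 + 0.5 * t, 0.5
k = (math.sin(0.2 * t), 0.0, math.cos(0.2 * t))
dk = (0.2 * math.cos(0.2 * t), 0.0, -0.2 * math.sin(0.2 * t))
decomp = tuple(dw * ki + w * dki for ki, dki in zip(k, dk))
print(alpha, decomp)  # the two agree to numerical precision
```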
|
Physics
|
|thermodynamics|entropy|reversibility|
|
Change in entropy in reversible and irreversible process
|
<blockquote> <p>But to prove it we used the fact that in reversible process <span class="math-container">$dS=\frac{\delta Q_{rev}}{T}$</span>. How is that logically possible?</p> </blockquote> <p>I think you misunderstand. The entropy change is <em><strong>defined</strong></em> for a reversible transfer of heat, but applies to any process, reversible or not, and whether or not any heat transfer is involved. Since entropy is a state function independent of the process, one can replace any actual irreversible process between two states with any convenient reversible path between the same two states and calculate the entropy change using the definition of entropy. Note the reversible path need not bear any resemblance to the actual process.</p> <p>Although the change in entropy of the <em>system</em> is the same for a reversible and irreversible process, the total change in entropy of the <em>system plus its surroundings</em> is not. For a reversible process it is zero. For an irreversible process it is greater than zero.</p> <p>For the reversible isobaric expansion process the system must be brought into contact with an infinite series of thermal reservoirs, ranging between the initial and final temperature, with the temperature of each reservoir being infinitesimally greater than the preceding and always infinitesimally greater than the temperature of the system. 
Thus the change in entropy of the system is</p> <p><span class="math-container">$$\Delta S_{sys}=\int_1^2\frac{C_{p}dT}{T}=C_{p}\ln\frac {T_{2}}{T_1}\tag {1}$$</span></p> <p>Since the surroundings are always in thermal equilibrium with the system throughout the process, and since the only difference is that the heat leaves the surroundings (it is negative), the heat transfer is reversible and the change in entropy of the surroundings is</p> <p><span class="math-container">$$\Delta S_{surr}=-C_{p}\ln\frac {T_{2}}{T_1}\tag{2}$$</span></p> <p>from which it follows</p> <p><span class="math-container">$$\Delta S_{sys}+\Delta S_{surr}=0\tag{3}$$</span></p> <p>Now consider instead the system being placed in contact with a single thermal reservoir in the surroundings whose temperature equals the temperature of the final equilibrium state. Due to the finite temperature difference between the system and surroundings, heat transfer is irreversible. The entropy change of the system is the same, but for the surroundings it is</p> <p><span class="math-container">$$\Delta S_{surr}=-\frac{Q}{T_2}= -\frac{C_{p}(T_{2}-T_{1})}{T_2}\tag{4}$$</span></p> <p>from which it can be shown that for any <span class="math-container">$T_2$</span> and <span class="math-container">$T_1$</span></p> <p><span class="math-container">$$\Delta S_{sys}+\Delta S_{surr}=C_{p}\ln\frac {T_{2}}{T_1}-\frac{C_{p}(T_{2}-T_{1})}{T_2}\gt 0\tag{5}$$</span></p> <blockquote> <p>So isn't it correct to write <span class="math-container">$dS=\frac{\delta Q_{rev}}{T}+dS_{generated}$</span> in irreversible? instead of just <span class="math-container">$dS=\frac{\delta Q_{rev}}{T}$</span></p> </blockquote> <p>It's not correct. 
Instead, it should be written without the <span class="math-container">$rev$</span> subscript for <span class="math-container">$Q$</span> as:</p> <p><span class="math-container">$$dS=\frac{\delta Q}{T_{B}}+dS_{generated}\tag{6}$$</span></p> <p>where <span class="math-container">$\delta Q$</span> is the differential amount of heat flowing into the system from its surroundings (through the boundary with its surroundings), <span class="math-container">$T_B$</span> is the temperature at the boundary through which the heat is flowing, which is not necessarily the same as the temperature within the system, and <span class="math-container">$dS_{generated}$</span> is the differential amount of entropy generated within the system as a result of irreversible component of the heat transfer and any irreversible work that is done.</p> <p>In equation (6) it is helpful to think of <span class="math-container">$\frac{\delta Q}{T_B}$</span> as the differential entropy <em><strong>change</strong></em> as a result of entropy <em><strong>transfer</strong></em> and <span class="math-container">$dS_{generated}$</span> as the differential amount of entropy <em><strong>generated</strong></em> within the system during the process due to irreversible heat transfer and/or irreversible work.</p> <p>For a reversible process path, the amount of entropy generated within the system is zero, so the entropy change for such a path is simply</p> <p><span class="math-container">$$dS=\frac{\delta Q_{rev}}{T}\tag{7}$$</span></p> <p>where <span class="math-container">$T$</span> is the temperature throughout the system and not just at the boundary, as the system is always in thermal equilibrium with the surroundings in order for the heat transfer to be reversible.</p> <p>Combining equations (6) and (7) we can determine the differential amount of entropy generated for the irreversible process as</p> <p><span class="math-container">$$dS_{generated}=\frac{\delta Q_{rev}}{T}-\frac{\delta 
Q}{T_{B}}\tag{8}$$</span></p> <p>EXAMPLE:</p> <p>As an example of the application of equations (6) through (8) consider the reversible and irreversible isobaric processes previously introduced. Let <span class="math-container">$T_{1}=300 K$</span> and <span class="math-container">$T_{2}=600 K$</span>.</p> <p>From Eq (1) the change in entropy of the system for the reversible process, where the temperature within the system is always the same as at the boundary with the surroundings, is</p> <p><span class="math-container">$$\Delta S_{sys}=C_{p}\ln\frac{600}{300}=+0.69 C_{p}\tag{9}$$</span></p> <p>Since this is the same as the entropy change for the irreversible process, we can equate the result of eq (9) with the integral form of the right-hand side of eq (6)</p> <p><span class="math-container">$$0.69 C_{p}=\int_1^2\frac{C_{p}dT}{T_{B}}+S_{generated}\tag{10}$$</span></p> <p>For the irreversible process the temperature at the boundary is constant and equals that of the single 600 K thermal reservoir. Therefore <span class="math-container">$T_{B}=600 K$</span> and <span class="math-container">$\int_1^2 C_{p}dT=C_{p}(T_{2}-T_{1})$</span>. 
Eq (10) becomes</p> <p><span class="math-container">$$0.69 C_{p}=\frac{C_{p}(600-300)}{600}+S_{generated}\tag{11}$$</span></p> <p><span class="math-container">$$0.69 C_{p}=0.5 C_{p}+S_{generated}\tag {12}$$</span></p> <p>from which it follows</p> <p><span class="math-container">$$S_{generated}=+0.19 C_{p}\tag{13}$$</span></p> <p>To summarize, for the irreversible isobaric process</p> <p><span class="math-container">$$\Delta S_{sys}=0.5 C_{p} + 0.19 C_{p}=0.69 C_{p}\tag{14}$$</span></p> <p>Where</p> <p><span class="math-container">$0.5C_{p}$</span> = the entropy transferred to the system from the surroundings.</p> <p><span class="math-container">$0.19 C_{p}$</span> = the entropy generated within the system due to irreversible heat transfer.</p> <p><span class="math-container">$0.69 C_{p}$</span> = the total entropy change of the system</p> <p>Hope this helps.</p>
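The numbers in the example can be reproduced directly; a quick check taking $C_p = 1$ so the results come out in units of $C_p$:

```python
import math

T1, T2 = 300.0, 600.0   # K
Cp = 1.0                # results in units of Cp

dS_sys = Cp * math.log(T2 / T1)       # eq (1): entropy change of the system
dS_transfer = Cp * (T2 - T1) / T2     # entropy transferred across the 600 K boundary
dS_generated = dS_sys - dS_transfer   # entropy generated within the system

print(round(dS_sys, 2), round(dS_transfer, 2), round(dS_generated, 2))
# 0.69 0.5 0.19
```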
|
Physics
|
|homework-and-exercises|rotational-dynamics|reference-frames|work|rigid-body-dynamics|
|
Doubt in rotational work-energy theorem
|
<p>There is no such thing as "rotational work" or "translational work". Work is work and energy is energy. In fact, we can go further - "work" and "energy" are just two names for the same thing.</p> <p>The horizontal force <span class="math-container">$F$</span> moves through a horizontal distance <span class="math-container">$l \sin \theta$</span> so the work done by the force <span class="math-container">$F$</span> is <span class="math-container">$Fl \sin \theta$</span>. This is not "rotational work" or "translational work" - it is just work.</p> <p>The centre of mass of the rod is raised by a distance <span class="math-container">$\frac l 2 (1-\sin \theta)$</span> so the work done <em>against</em> gravity is <span class="math-container">$ \frac {mgl} 2 (1-\sin \theta)$</span>. Once again, this is not "rotational work" or "translational work" - it is just work. In fact, you could put this term on the other side of the equation and call it "potential energy gained by the rod" instead of "work done against gravity" - it comes out to the same thing.</p> <p>The kinetic energy gained by the rod is <span class="math-container">$\frac {ml^2 \omega^2} 6$</span>. You can think of this as being <span class="math-container">$\frac 1 2 I_1 \omega^2$</span> where <span class="math-container">$I_1$</span> is the rod's moment of inertia about the stationary pivot. Or you can think of it as being <span class="math-container">$\frac 1 2 mv^2 + \frac 1 2 I_2 \omega^2$</span> where <span class="math-container">$v$</span> is the velocity of the rod's centre of mass and <span class="math-container">$I_2$</span> is the moment of inertia about its centre of mass. You get the same answer either way, because energy is energy.</p>
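The equivalence of the two kinetic-energy decompositions is easy to verify symbolically; a sketch with sympy:

```python
import sympy as sp

m, l, w = sp.symbols('m l omega', positive=True)

# KE about the stationary pivot: I1 = m l^2 / 3
KE_pivot = sp.Rational(1, 2) * (m*l**2/3) * w**2

# KE split into CoM translation plus rotation about the CoM:
# v = omega l / 2, I2 = m l^2 / 12
KE_split = sp.Rational(1, 2)*m*(w*l/2)**2 + sp.Rational(1, 2)*(m*l**2/12)*w**2

# both equal m l^2 omega^2 / 6 -- energy is energy
assert sp.simplify(KE_pivot - KE_split) == 0
assert sp.simplify(KE_pivot - m*l**2*w**2/6) == 0
```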
|
Physics
|
|magnetic-fields|magnetic-moment|
|
Confusion regarding magnetic moment
|
<p>Imagine if you have two dipoles that are right next to each other pointing in opposite directions. Such a configuration has no dipole moment, and at long range the field is a "quadrupole" field. That's the next term in the multipole expansion.</p>
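A quick numerical illustration (a sketch in units where $\mu_0/4\pi = 1$; the positions and separation are arbitrary choices): two opposed point dipoles produce a field that falls off as $1/r^4$ rather than the single dipole's $1/r^3$.

```python
import numpy as np

def dipole_B(m, pos, r):
    """Point-dipole field at r for a dipole moment m located at pos (mu0/4pi = 1)."""
    d = r - pos
    dn = np.linalg.norm(d)
    rhat = d / dn
    return (3*rhat*np.dot(m, rhat) - m) / dn**3

m = np.array([0.0, 0.0, 1.0])
off = np.array([0.0, 0.0, 0.005])   # small separation between the two dipoles

def B_pair(r):
    """Two dipoles right next to each other, pointing in opposite directions."""
    return dipole_B(m, off, r) + dipole_B(-m, -off, r)

# doubling the distance cuts a quadrupole field by ~2^4 = 16
B1 = np.linalg.norm(B_pair(np.array([10.0, 0.0, 0.0])))
B2 = np.linalg.norm(B_pair(np.array([20.0, 0.0, 0.0])))
print(round(float(np.log2(B1 / B2))))   # 4
```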
|
Physics
|
|general-relativity|field-theory|resource-recommendations|vortex|cosmic-string|
|
Metric of a rotating Cosmic String
|
<p>The "Catalogue of Spacetimes" review of exact solutions to Einstein's equation, <a href="https://arxiv.org/abs/0904.4184" rel="nofollow noreferrer">Mueller & Grave (2009)</a>, is the first place I would look. And sure enough section 2.22 might be what you are looking for. They pulled that metric from <a href="https://link.springer.com/article/10.12942/lrr-2004-9#Sec5.10" rel="nofollow noreferrer">Perlick (2004)</a>, an LRR on gravitational lensing, which can point you further down the reference rabbit hole.</p>
|
Physics
|
|newtonian-mechanics|rotation|
|
A drum is rolling down a hill - is the force of friction with the surface a constant?
|
<p>Yes, <strong>friction must be static</strong> because the point of the drum in contact with the ground at a particular instant is at rest (with respect to the ground).</p> <p>Yes, the <strong>frictional force is constant</strong>:</p> <p><span class="math-container">$$mg\sin\theta - f = ma$$</span></p> <p><span class="math-container">$$fr = \frac{mr^2}{2}\,\alpha$$</span></p> <hr /> <p>m = mass of drum, r = radius of drum, θ = angle of hill, α = angular acceleration of drum about its centre of mass, a = linear acceleration of drum, f = frictional force.</p> <p>Solving these (with the rolling condition a = rα) gives a constant α, and therefore the value of f is also constant.</p> <p>PS: In pure rolling, friction does no work because the point of contact between the drum and the ground is instantaneously at rest in the ground frame.</p>
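The two equations, together with the rolling condition a = rα, can be solved symbolically; a sketch with sympy:

```python
import sympy as sp

m, g, r, theta = sp.symbols('m g r theta', positive=True)
a, f = sp.symbols('a f')

# Newton's second law along the incline, and torque about the centre,
# with the rolling condition alpha = a / r
eqs = [sp.Eq(m*g*sp.sin(theta) - f, m*a),
       sp.Eq(f*r, (m*r**2/2) * (a/r))]
sol = sp.solve(eqs, [a, f])

assert sp.simplify(sol[f] - m*g*sp.sin(theta)/3) == 0   # constant friction
assert sp.simplify(sol[a] - 2*g*sp.sin(theta)/3) == 0   # constant acceleration
```

Both results are independent of time, confirming that the frictional force is constant during the roll.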
|
Physics
|
|forces|rotational-dynamics|work|rigid-body-dynamics|
|
Rotational work by a force
|
<blockquote> <p>If we have an object which is acted upon by a force which produces both translational and rotational motion of the body, then would the total work done by the force be the sum of its translational and rotational work?</p> </blockquote> <p>There may be semantic issues here and it would perhaps be clearer to talk in terms of energy, which is equivalent to work. If we assume the body initially has zero linear and angular energy, then the final total energy <span class="math-container">$E_T$</span> of the body after the application of the force is <span class="math-container">$$E_T = E_L + E_R = \frac12 m v^2 + \frac12 I \omega^2$$</span></p> <p>where <span class="math-container">$E_L$</span> is the linear energy, <span class="math-container">$E_R$</span> is the rotational energy, <span class="math-container">$I$</span> is the moment of inertia, m is the mass, <span class="math-container">$\omega$</span> is the angular velocity and v is the linear velocity of the body.</p> <p>If we assume conditions where no energy is lost as heat etc., then the final total energy must be equal to the energy input by the original force acting over a distance d, where for simplicity we consider the force to be constant.</p> <p>The input energy is equal to F d and we could call this the linear work done on the system. Since torque is <span class="math-container">$FR$</span>, we can alternatively express this input energy as <span class="math-container">$F R \theta$</span>, where <span class="math-container">$\theta$</span> is the angle the object rotates through during the application of the force, and call this the input rotational work. But they are the same thing expressed in different ways, and we should be careful not to double count and assume the total input work is <span class="math-container">$F d + F R \theta$</span>, as that would cause an error.</p> <p>P.S. 
I am working on a more detailed answer to the question you posed in a comment to my answer in a <a href="https://physics.stackexchange.com/questions/807920/rotation-of-disc-on-smooth-surface/807956?noredirect=1#comment1814956_807956">related question</a>, so stand by :-)</p>
|
Physics
|
|general-relativity|black-holes|astrophysics|polarization|frame-dragging|
|
Is this an actual photo of frame dragging?
|
<p>It is not frame dragging, it is a visualisation of the linear polarisation present in 230 GHz light emerging from the vicinity of Sgr A* at the centre of the Milky Way. But getting an answer took some detective work...</p> <p>The press release links to a paper by the <a href="https://iopscience.iop.org/article/10.3847/2041-8213/acff70" rel="noreferrer">Event Horizon Telescope Collaboration</a>. This paper presents images of the region around the (very likely) black hole at the centre of <strong>M87</strong> in circularly polarised (CP) light. The CP signature is "both weak and sensitive to calibration errors". The CP signature is a couple of vague blobs with only very crude spatial structure at a resolution roughly similar to the Schwarzschild radius, with positive and negative CP either side of the centre. This paper does not contain the image in your press release and is nothing to do with Sgr A*, the black hole at the centre of the Milky Way.</p> <p>So, after that wild-goose chase, I tracked down the <a href="https://iopscience.iop.org/article/10.3847/2041-8213/ad2df0" rel="noreferrer">correct paper</a>, which is also from the EHT collaboration and is a study of the linear polarisation (LP) and CP present in the emission from the region immediately surrounding Sgr A*. In addition to the problems inherent to analysing the M87 polarisation data, the Sgr A* dataset is also afflicted by short timescale (20s) variability and interstellar scattering. However, they do detect a very strong LP signal and it is from this that the visualisation has been created.</p> <p>It makes more sense to see the "raw" data (though note that the image has been reconstructed from interferometric data) and the visualisation together (as they are in the paper). 
Here it is in the top plot: the grey scale shows the total intensity of the 230 GHz radiation, the tick marks show the direction of the electric field vector polarisation angle (for the linearly polarised component) and the colour scale indicates what fraction of the light is linearly polarised. The white dotted contours indicate the strength of the linearly polarised component.</p> <p>The bottom plot is the image in the press release and has been created by overlaying dark "streamlines" of linear polarisation over the top of an orange-tinted grey-scale image of the total intensity, in a loosely defined way. The electric polarisation vector ticks in the upper plot have been given a length and opacity that is proportional to the square of the polarised intensity.</p> <p><a href="https://i.stack.imgur.com/7jDF5.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/7jDF5.jpg" alt="EHT reconstructed image" /></a></p> <p>Thus the answer to your question is that this has nothing to do with frame-dragging. It is a visualisation of what the linear polarisation field looks like (the dark lines aren't "real").</p> <p>Another point to mention is that the image is <em>not</em> a direct image of the accretion flow around the black hole. The ring-like structure is a consequence of light rays passing from material near the black hole, approaching the photon sphere at <span class="math-container">$1.5 r_s$</span> and being bent around into our line of sight. This ring would be seen from whichever direction we view the black hole and for any orientation of the accretion flow.</p>
|
Physics
|
|quantum-mechanics|fermions|dirac-delta-distributions|grassmann-numbers|coherent-states|
|
Grassmann variables and orthogonality of coherent fermionic states
|
<ol> <li><p>Bosonic (Grassmann-even) and fermionic (Grassmann-odd) <a href="https://en.wikipedia.org/wiki/Coherent_state#Mathematical_features_of_the_canonical_coherent_states" rel="nofollow noreferrer">coherent states</a> are <em>overcomplete bases,</em> that are <em>not</em> orthogonal.</p> </li> <li><p>One may show that the fermionic definition of coherent states<span class="math-container">$$ |\eta \rangle~:=~e^{\hat{c}^{\dagger}\eta}|0 \rangle~=~|0 \rangle+|1 \rangle\eta, \qquad |1 \rangle~:=~\hat{c}^{\dagger}|0 \rangle, \tag{0}$$</span> does satisfy the completeness relation <span class="math-container">$$\int_{\mathbb{C}^{0|1}} d\bar{\eta}~d\eta ~e^{-\bar{\eta}\eta} |\eta \rangle\langle \bar{\eta} |~=~\mathbb{1}.$$</span></p> </li> <li><p>If we introduce a <a href="https://ncatlab.org/nlab/show/picture+changing+operator" rel="nofollow noreferrer">picture-changing operator</a> <span class="math-container">$$\hat{P}~:=~|1 \rangle\langle 0|-|0 \rangle\langle 1| ,$$</span> we can write <span class="math-container">$$ \langle \bar{\eta} |\hat{P}|\eta \rangle~=~\bar{\eta}-\eta~=~\delta(\bar{\eta}\!-\!\eta), $$</span> OP's eq. (3).</p> </li> </ol> <p>References:</p> <ol> <li><p>A. Altland & B. Simons, <em>Condensed matter field theory,</em> 2nd ed., 2010; p. 160-164.</p> </li> <li><p>T. Lancaster & S.J. Blundell, <em>QFT for the Gifted Amateur,</em> 2014; section 28.2.</p> </li> </ol>
|
Physics
|
|homework-and-exercises|special-relativity|relative-motion|
|
Calculating relative velocity: What am I doing wrong?
|
<p>The mistake is in taking differences between positions as if they were scalars, when they are vectors.</p> <p>Consider the vectors <span class="math-container">$\vec{r}_1$</span>, <span class="math-container">$\vec{r}_2$</span>, and <span class="math-container">$\vec{r}_3$</span> corresponding to the positions of <span class="math-container">$P_1$</span>, <span class="math-container">$P_2$</span>, and <span class="math-container">$P_3$</span> relative to <span class="math-container">$A$</span>, respectively. The magnitudes of these vectors are the distances of <span class="math-container">$P_1$</span>, <span class="math-container">$P_2$</span>, and <span class="math-container">$P_3$</span> from <span class="math-container">$A$</span>. Now, if you just compute the differences between these magnitudes, you get the results in your question, which are not correct. In fact, you should compute the differences between the vectors:</p> <ul> <li><span class="math-container">$\vec{r}_2 - \vec{r}_1$</span> is the position vector of <span class="math-container">$P_2$</span> with respect to <span class="math-container">$P_1$</span>.</li> <li><span class="math-container">$\vec{r}_3 - \vec{r}_2$</span> is the position vector of <span class="math-container">$P_3$</span> with respect to <span class="math-container">$P_2$</span>.</li> </ul> <p>Computing the magnitudes of these differences gives you the right result.</p> <p>Another way to understand the same mistake is the following: take a triangle with side lengths <span class="math-container">$a$</span>, <span class="math-container">$b$</span>, and <span class="math-container">$c$</span>. For a generic triangle, <span class="math-container">$c \neq a + b$</span>. However, in your post you are saying that <span class="math-container">$c = a + b$</span>, which is indeed incorrect.</p>
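The point is easy to see numerically. A small sketch with hypothetical positions (arbitrary units, chosen only for illustration):

```python
import math

def mag(v):
    """Magnitude of a 2-D vector."""
    return math.hypot(v[0], v[1])

# hypothetical positions of P1 and P2 relative to A
r1 = (3.0, 4.0)   # |r1| = 5
r2 = (6.0, 0.0)   # |r2| = 6

# wrong: difference of the magnitudes
print(mag(r2) - mag(r1))   # 1.0

# right: magnitude of the vector difference
diff = (r2[0] - r1[0], r2[1] - r1[1])
print(mag(diff))           # 5.0
```

The two results only coincide when the vectors are collinear, which is exactly the triangle-inequality point made above.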
|
Physics
|
|electromagnetism|magnetic-fields|electric-fields|electromagnetic-induction|
|
How do Electric and magnetic fields generate each other (mathematically)?
|
<p>Linked differential equations <em>may</em> reinforce one another leading to unbounded exponential growth - but they do not have to.</p> <p>For example, the linked equations</p> <p><span class="math-container">$\displaystyle y = \frac {dx}{dt} \\ \displaystyle x = \frac {dy}{dt}$</span></p> <p>with initial conditions <span class="math-container">$x(0)=y(0)=1$</span> have solution</p> <p><span class="math-container">$x = e^t \\ y = e^t$</span></p> <p>which is unbounded as <span class="math-container">$t \rightarrow \infty$</span>. But if we introduce a negative sign as follows:</p> <p><span class="math-container">$\displaystyle y = -\frac {dx}{dt} \\ \displaystyle x = \frac {dy}{dt}$</span></p> <p>with the same initial conditions <span class="math-container">$x(0)=y(0)=1$</span> then the solution is</p> <p><span class="math-container">$x = \cos(t) - \sin(t) \\y = \cos(t) + \sin(t)$</span></p> <p>which is bounded for all <span class="math-container">$t$</span>.</p> <p>The interaction between electric and magnetic fields is like the latter case (although more complicated since we have spatial dimensions to consider as well).</p>
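The two behaviours can be seen with a simple numerical integration; a sketch using forward Euler with the initial conditions from the text:

```python
def integrate(sign, t_end=6.0, dt=1e-4):
    """Forward-Euler integration of dx/dt = sign*y, dy/dt = x, x(0)=y(0)=1."""
    x, y = 1.0, 1.0
    for _ in range(int(t_end / dt)):
        x, y = x + sign*y*dt, y + x*dt
    return x, y

xg, yg = integrate(+1)   # y =  dx/dt case: exponential growth, x ~ e^t
xb, yb = integrate(-1)   # y = -dx/dt case: bounded oscillation

print(xg > 100)                      # True: e^6 is about 403
print(abs(xb) < 2 and abs(yb) < 2)   # True: amplitude stays near sqrt(2)
```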
|
Physics
|
|quantum-mechanics|wavefunction|schroedinger-equation|boundary-conditions|
|
Question about Griffiths' proof that $\Psi$ stays normalized
|
<p>The proof by Griffiths relies in fact on stronger (implicit) hypotheses about the behaviour of <span class="math-container">$\psi$</span> at infinity. These hypotheses could be justified by requiring (as in Giorgio's answer of the other question) that the used <span class="math-container">$\psi$</span> also belongs to the domains of some observables (selfadjoint operators). Nevertheless, the main claim is valid even if <span class="math-container">$\psi \in L^2(\mathbb{R})$</span> without further restrictions.</p> <p>The general validity is due to the fact that time evolution, due to basic axioms of QM, is given by a one-parameter group of unitary operators.</p> <p>The Schroedinger equation <span class="math-container">$$i\hbar \frac{d}{dt} \psi_t = H\psi_t$$</span> makes sense only if <span class="math-container">$\psi \in D(H)$</span>, where <span class="math-container">$H$</span> is the Hamiltonian observable. Its domain <span class="math-container">$D(H)$</span> is a dense subspace of the Hilbert space. However the map which associates the solution at time <span class="math-container">$t$</span> (in <span class="math-container">$D(H)$</span>) with the initial condition in <span class="math-container">$D(H)$</span>: <span class="math-container">$$U_t : D(H) \ni \psi_0 \mapsto \psi_t \in D(H)$$</span> is <span class="math-container">$L^2(\mathbb{R})$</span>-continuous: that is because it can be written in terms of a unitary map as I explain below. Therefore, as <span class="math-container">$D(H)$</span> is dense, it extends continuously to the whole Hilbert space <span class="math-container">$$U_t : L^2(\mathbb{R}) \to L^2(\mathbb{R}).$$</span> More explicitly, as <span class="math-container">$H$</span> is selfadjoint, we can take advantage of the functional calculus and the operator <span class="math-container">$U_t$</span> can be written as <span class="math-container">$U_t= e^{-i t H/ \hbar}$</span>. 
Here selfadjointness of <span class="math-container">$H$</span> implies unitarity of <span class="math-container">$U_t$</span>: <span class="math-container">$$U_t^\dagger = (e^{-i t H/ \hbar})^\dagger = e^{+i t H^\dagger \! / \hbar}= e^{i t H/ \hbar}= U_t^{-1}\:.$$</span> In particular, <span class="math-container">$$U_t^\dagger U_t = e^{i t H/ \hbar} e^{-i t H/ \hbar} = e^{i (t-t) H/ \hbar}= e^{i 0 H/ \hbar}= I\:.$$</span> As a consequence, if <span class="math-container">$\psi_0 \in L^2({\mathbb{R}})$</span> <span class="math-container">$$\int_{\mathbb{R}} |\psi(x,t)|^2 dx = ||U_t \psi_0||^2=\langle U_t \psi_0| U_t \psi_0\rangle = \langle \psi_0| U^\dagger_t U_t \psi_0\rangle= \langle \psi_0|\psi_0\rangle = \int_{\mathbb{R}} |\psi(x,0)|^2 dx\:.$$</span>
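The norm-preservation argument can be illustrated in a finite-dimensional toy model (a sketch: a random Hermitian matrix stands in for $H$, and $U_t = e^{-itH/\hbar}$ is built via the spectral decomposition, i.e. the functional calculus, with $\hbar = 1$):

```python
import numpy as np

rng = np.random.default_rng(0)

# a random Hermitian "Hamiltonian" on a 5-dimensional Hilbert space
A = rng.standard_normal((5, 5)) + 1j * rng.standard_normal((5, 5))
H = (A + A.conj().T) / 2

# U_t = exp(-i H t) via the spectral decomposition (hbar = 1)
t = 1.7
E, V = np.linalg.eigh(H)
U = V @ np.diag(np.exp(-1j * E * t)) @ V.conj().T

# unitarity: U^dagger U = I
assert np.allclose(U.conj().T @ U, np.eye(5))

# the norm of any state is preserved under the evolution
psi0 = rng.standard_normal(5) + 1j * rng.standard_normal(5)
psi_t = U @ psi0
assert np.isclose(np.vdot(psi_t, psi_t).real, np.vdot(psi0, psi0).real)
```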
|
Physics
|
|electromagnetism|electrostatics|electric-circuits|
|
Why Don't Electrons "Try" to Flow in an Open Circuit?
|
<p>An open switch can be thought of as a capacitor with a very small capacitance with air as the dielectric. As such opposite sign charges can reside on the two sides of a switch and there will be a potential difference across the switch. Thus there will be a time when current flows along the wires until the capacitor (switch) is fully charged, i.e., the potential difference across the switch is equal to that across the terminals of the voltage source. <br /> As air is not a perfect insulator, a minute current can flow across the switch contacts and if the potential difference across the switch contacts is large enough so that the electric field between the contacts exceeds 30 kV/cm, the air will break down (become a conductor) and a significant current will flow through the air gap. <br /> Even with low voltages, arcing can occur as switches are opened and closed as the air breaks down, and this can cause radio interference. At high voltages the effect can be quite spectacular particularly at night as the video <em><a href="https://www.youtube.com/watch?v=rnhob9zqY3w" rel="noreferrer">High Voltage 345,000 volt switch closing at night!</a></em> shows. In the video the switch is being closed and when the air gap is small enough the electric field between the switch contacts is large enough for the air to break down and a significant current to pass through it.</p>
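Treating the open switch as a tiny capacitor, the transient that charges it is extremely fast. A rough sketch (the resistance and capacitance values below are illustrative guesses, not measured ones):

```python
import math

V0 = 12.0    # source voltage (V)
R = 0.1      # wire resistance (ohm) -- hypothetical
C = 1e-12    # capacitance of the open switch gap (F) -- hypothetical

tau = R * C  # time constant of the charging transient

def v_switch(t):
    """Voltage across the switch gap while it charges."""
    return V0 * (1.0 - math.exp(-t / tau))

print(f"{tau:.1e}")                   # 1.0e-13 s: effectively instantaneous
print(v_switch(5 * tau) > 0.99 * V0)  # True: gap reaches nearly the full source voltage
```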
|
Physics
|
|quantum-field-theory|feynman-diagrams|correlation-functions|self-energy|1pi-effective-action|
|
Understanding $W^{(n)}$, $\Gamma^{(n)}$, and $\Sigma$ in Feynman diagrams
|
<ul> <li><p>The connected <span class="math-container">$n$</span>-point function <span class="math-container">$$\langle \phi^{k_1}\ldots \phi^{k_n}\rangle^c_{J=0}~=~\left(\frac{\hbar}{i}\right)^{n-1} W_{c,n}^{k_1\ldots k_n}$$</span> is the sum of connected Feynman diagrams with <span class="math-container">$n$</span> external legs.</p> </li> <li><p>For <span class="math-container">$n\geq 3$</span> the <span class="math-container">$n$</span>-point function<span class="math-container">$^1$</span> <span class="math-container">$\Gamma_{n,k_1,\ldots k_n}$</span> of the <a href="https://en.wikipedia.org/wiki/Effective_action" rel="nofollow noreferrer">effective/proper action</a> <span class="math-container">$\Gamma[\phi_{\rm cl}]$</span> is the sum of 1PI Feynman diagrams with <span class="math-container">$n$</span> external amputated legs, cf. e.g. <a href="https://physics.stackexchange.com/q/146734/2451">this</a> Phys.SE post.</p> </li> <li><p>The <span class="math-container">$2$</span>-point functions <span class="math-container">$i^{-1}(W_{c,2})^{k\ell}$</span> and <span class="math-container">$i^{-1}(\Gamma_2)_{k\ell}$</span> are inverses of each other, cf. e.g. eq. (8) in my Phys.SE answer <a href="https://physics.stackexchange.com/a/348518/2451">here</a>.</p> </li> <li><p>For <span class="math-container">$n\geq 3$</span> an <span class="math-container">$n$</span>-point function<span class="math-container">$^1$</span> <span class="math-container">$W_{c,n}^{k_1\ldots k_n}$</span> of connected Feynman diagrams is a sum of possible trees consisting of 1PI <span class="math-container">$m$</span>-vertices <span class="math-container">$\Gamma_{m,\ell_1,\ldots\ell_m}$</span> with <span class="math-container">$m\geq 3$</span> and lines made of connected propagators <span class="math-container">$(\Gamma_2^{-1})^{k\ell}=-(W_{c,2})^{k\ell}$</span>, cf. e.g. <a href="https://physics.stackexchange.com/q/146734/2451">this</a> Phys.SE post. 
Note that the 2-point function <span class="math-container">$(\Gamma_2)_{k\ell}$</span> here plays a very different role than the higher point functions.</p> </li> <li><p>The difference between on one hand the <span class="math-container">$2$</span>-point function/inverse connected propagator <span class="math-container">$(\Gamma_2)_{k\ell}=-(W_{c,2}^{-1})_{k\ell}$</span>, and on the other hand the <a href="https://en.wikipedia.org/wiki/Self-energy" rel="nofollow noreferrer">self-energy</a> <span class="math-container">$\Sigma$</span>, is that the former contains an inverse free propagator, cf. e.g. <a href="https://physics.stackexchange.com/q/440789/2451">this</a> Phys.SE post. (Note that a free propagator is <em>not</em> 1PI.)</p> </li> <li><p>For further description of the Legendre transformation between <span class="math-container">$W_c[J]$</span> and <span class="math-container">$\Gamma[\phi_{\rm cl}]$</span>, see e.g. <a href="https://physics.stackexchange.com/q/107936/2451">this</a> Phys.SE post.</p> </li> </ul> <p>--</p> <p><span class="math-container">$^1$</span> We assume for simplicity in this answer that there are no tadpoles.</p>
|
Physics
|
|quantum-mechanics|electromagnetism|entropy|physical-chemistry|electrochemistry|
|
Can a Transformer Last until the Heath Death of the Universe if Made with Inorganic Insulation?
|
<p>Your transformer will last nowhere near the heat death of the universe. The heat death of the universe is a really really really really really really really really really really really really really really really really really really really really really really really really really really long time away. And to be perfectly fair, I should have had far more "really"s there, but you'd have to scroll past them and that wouldn't be nice. As a <a href="https://en.wikipedia.org/wiki/Heat_death_of_the_universe#Current_status" rel="nofollow noreferrer">lower bound</a>, the heat death of the universe is <em>at least</em> <span class="math-container">$10^{106}$</span> years away, because it would take that long for some of the largest black holes to evaporate. Let's give that number credit and spell it out. That's 10,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000 years. This is long enough that your intuition about what matters starts to fall short. You mention that the coolant doesn't come in contact with an oxidizing agent, thinking that will matter on this timescale. In reality, we worry about things like <a href="https://en.wikipedia.org/wiki/Proton_decay" rel="nofollow noreferrer">proton decay</a>. Modern science literally doesn't know whether protons themselves decay. The best we know is that the half life is at least <span class="math-container">$1.67\cdot10^{34}$</span> years, but that's an astonishingly short time compared to the heat death of the universe. It's within the error bounds of the best of modern science that the half life of a proton is <span class="math-container">$10^{35}$</span> years, in which case nearly all of the protons in your material will have decayed before the heat death of the universe. 
And by "nearly all," I mean you have <span class="math-container">$10^71$</span> half-lives of your proton, so each proton has only a <span class="math-container">$\frac{1}{2}^{10^{71}}$</span> chance of surviving. I'd write out how many zeroes appear at the start of that percentage, but I'd run out of space in the stack exchange answer size limit.</p> <p>So let's walk back from the heat death of the universe. That's so far out that talk of "insulators" is ... quaint.</p> <p>We also have to consider externalizes. On the order of 7 billion years from now, <a href="https://spaceandbeyondbox.com/life-cycle-of-the-sun/" rel="nofollow noreferrer">the sun will expand</a> as it burns up its primary fuel. It may even engulf the Earth, in which case all talk of "not contacting oxidizers" is rendered moot by fusion-hot plasma melting everything.</p> <p>So what can we do? Well, the best measure I can think of for what you're interested in is the <a href="https://en.wikipedia.org/wiki/Gibbs_free_energy" rel="nofollow noreferrer">Gibbs free energy</a> of a compound. This is a very useful measure for how stable a compound is. The Gibbs free energy of <a href="https://en.wikipedia.org/wiki/Standard_Gibbs_free_energy_of_formation" rel="nofollow noreferrer">Aluminum trioxide</a> (<span class="math-container">$Al_2O_3$</span>) at standard temperature and pressure (which your question vaguely suggests is the right temperature and pressure to use) is −1582.3 kJ/mol. So it is the compound least likely to decompose over time. It is also an insulator and can be produced with modern manufacturing means.</p> <p>How long does it last? That gets into a funny question. Like all compounds, it exists in a balance with its decomposed elements. Having a very low (very negative) Gibbs free energy, it is a very favored form. I've seen aluminum oxide's lifespan listed as "indefinite." But in reality, it's not.</p> <p>The hard question is what happens after it decomposes. 
If the oxygen hangs around, it is very likely to re-combine with the aluminum, and thus not really decompose meaningfully. But if the oxygen escapes, then the insulator slowly turns into aluminum. How fast this happens depends on your seals, which are not well specified. In the short term, the insulation is perfect. But on the scale of millions of years, we have to consider such things.</p>
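For anyone who wants to check the arithmetic above, the number of proton half-lives and the (logarithm of the) survival probability per proton follow in a couple of lines, using the $10^{106}$-year bound and the optimistic $10^{35}$-year half-life quoted above:

```python
from math import log10

heat_death_years = 1e106      # lower bound quoted above
half_life_years = 1e35        # optimistic proton half-life within current error bounds

n_half_lives = heat_death_years / half_life_years      # 1e71 half-lives
# Survival probability per proton is (1/2)^n; take log10 to keep it printable
log10_survival = -n_half_lives * log10(2)

print(f"half-lives elapsed: {n_half_lives:.1e}")
print(f"log10(P_survive):   {log10_survival:.2e}")
```

So the survival probability has roughly $3\times10^{70}$ zeroes after the decimal point, which is indeed far beyond any answer size limit.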
|
Physics
|
|classical-mechanics|lagrangian-formalism|harmonic-oscillator|action|
|
Directly integrating the Lagrangian for a simple harmonic oscillator
|
<ol> <li><p>Well, if we know the classical solution <span class="math-container">$q_{\rm cl}:[t_i,t_f] \to \mathbb{R}$</span> (which we do for the harmonic oscillator), we can plug it into the action functional <span class="math-container">$S[q]$</span> and obtain the on-shell action function <span class="math-container">$$\begin{align}S(q_f,t_f;q_i,t_i)~:=~&S[q_{\rm cl}]~=~\ldots\cr ~=~&\frac{m\omega}{2}\left((q_f^2+q_i^2)\cot(\omega\Delta t_{fi})-\frac{2q_fq_i}{\sin(\omega\Delta t_{fi})}\right),\cr \Delta t_{fi}~:=~&t_f-t_i,\qquad \omega~:=~\sqrt{\frac{k}{m}},\end{align}$$</span> cf. e.g. <a href="https://physics.stackexchange.com/q/15325/2451">this</a> Phys.SE post.</p> </li> <li><p>Generically, we can only explicitly perform the integration in the action functional <span class="math-container">$S[q]$</span> if we know the explicit form of the (possibly virtual) path <span class="math-container">$q:[t_i,t_f] \to \mathbb{R}$</span>, if that's what OP is asking.</p> </li> </ol>
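As a sanity check, the on-shell formula can be verified numerically: plug the classical path with given endpoints into the action integral and compare with the closed form. A minimal sketch (the mass, spring constant, endpoints and times are arbitrary illustrative values):

```python
import numpy as np

m, k = 1.3, 2.0
omega = np.sqrt(k / m)
qi, qf, ti, tf = 0.5, -0.8, 0.0, 1.0      # arbitrary boundary data
dt = tf - ti

# Classical path satisfying q(ti) = qi, q(tf) = qf
t = np.linspace(ti, tf, 200001)
s = np.sin(omega * dt)
q = (qi * np.sin(omega * (tf - t)) + qf * np.sin(omega * (t - ti))) / s
qdot = omega * (qf * np.cos(omega * (t - ti)) - qi * np.cos(omega * (tf - t))) / s

# Action functional S[q] = ∫ (m/2)(qdot^2 - omega^2 q^2) dt via the trapezoid rule
L = 0.5 * m * (qdot**2 - omega**2 * q**2)
S_numeric = np.sum((L[1:] + L[:-1]) / 2 * np.diff(t))

# Closed-form on-shell action quoted above
S_onshell = 0.5 * m * omega * ((qf**2 + qi**2) / np.tan(omega * dt) - 2 * qf * qi / s)

print(S_numeric, S_onshell)   # the two agree
```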
|
Physics
|
|electromagnetism|magnetic-fields|electric-fields|electromagnetic-induction|
|
Induced Electric field due to magnetic field in Faraday experiment
|
<p>To start with question 3), the case where there is no coil but just a magnet that you move through empty space, as <a href="https://physics.stackexchange.com/users/161019/user1245">@user1245</a> asked in <a href="https://physics.stackexchange.com/questions/343447">this question</a>: is there a force opposing the motion? Yes there is, if the movement is not at constant velocity! Then there will be what is called "radiative reaction", or "radiation damping". Because you radiate energy creating EM waves (although very weak ones) there must be an opposing force.</p> <p>This also happens for an electric dipole instead of a magnet, and an even simpler case is a single particle with electric charge. See <a href="https://en.wikipedia.org/wiki/Radiation_damping" rel="nofollow noreferrer">Radiation_damping</a>, or Jackson's "Classical Electrodynamics", <a href="https://en.wikipedia.org/wiki/Classical_Electrodynamics_(book)#Table_of_contents_(3rd_edition)" rel="nofollow noreferrer">Chapt. 16</a>. But even the simplest case is not trivial to compute and can lead to <a href="https://en.wikipedia.org/wiki/Abraham%E2%80%93Lorentz_force#Runaway_solutions" rel="nofollow noreferrer">runaway solutions</a> if you are not careful.</p> <p>Your reasoning is basically correct that moving the magnet results in <span class="math-container">$d{\bf B}/dt$</span> and to have the matching <span class="math-container">$\nabla \times {\bf E}$</span> there must be a nonzero <span class="math-container">${\bf E}$</span>, which by itself will also be time-dependent so you have <span class="math-container">$d{\bf E}/dt$</span> and there should be a matching <span class="math-container">$\nabla \times {\bf B}$</span>.
Since the magnetostatic field of a stationary magnet has <span class="math-container">$\nabla \times {\bf B}=0$</span>, there must indeed be some change in the shape of the <span class="math-container">${\bf B}$</span>-field.</p> <p>As for your question 1), this radiative reaction is also present in the case with the coil, but its opposing force will usually be much weaker than that from the coil. For question 2): the change in field from the presence of the coil is usually much bigger than the departure from the magnetostatic field that you get by movement in empty space (unless we are talking about very strong accelerations, or a coil with extremely high resistance).</p> <p>NB: we should be cautious in saying that the fields are "produced" or "caused" by each other. They must be present in a combination consistent with Maxwell's equations, but that doesn't prove whether the curl of one field causes the time derivative of the other, or the time derivative of one field causes the curl of the other.</p>
|
Physics
|
|fluid-dynamics|pressure|fluid-statics|flow|bernoulli-equation|
|
Bottle with a hole, with straw through the lid
|
<p>It is clear that the water tap works by regulating the air supply through bubbles in the red straw, but how it does that is interesting.</p> <p>The bubble formation only happens at a threshold pressure. Since water pressure has a linear vertical gradient, he can control when this threshold is crossed by moving the straw up and down. Let me explain.</p> <p><em>"For any pipe immersed in a fluid, the fluid will flow out only if the pressure inside the pipe exceeds the pressure outside (plus surface tension, which we neglect here)"</em> Let's call this argument <span class="math-container">$(1)$</span>.</p> <p><a href="https://i.stack.imgur.com/0ftWQ.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/0ftWQ.jpg" alt="enter image description here" /></a></p> <p>Let <span class="math-container">$a$</span> be the distance between the water surface and the blue pipe, <span class="math-container">$h$</span> be the total water depth, and <span class="math-container">$\rho$</span>, <span class="math-container">$g$</span>, etc. have the usual meanings.</p> <p><a href="https://i.stack.imgur.com/mmCAC.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/mmCAC.jpg" alt="enter image description here" /></a></p> <p>Water pressure at any depth is given by <span class="math-container">$\rho g d + P_0$</span>, where <span class="math-container">$P_0$</span> is the pressure exerted by air pocket on the water surface (<span class="math-container">$P_0<P_{air}<P_0+\rho g h$</span>).</p> <p><a href="https://i.stack.imgur.com/kfwVw.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/kfwVw.jpg" alt="enter image description here" /></a></p> <p>So when water flows, the air pocket expands, and <span class="math-container">$P_0$</span> reduces, finally reaching an equilibrium <span class="math-container">$P_0$</span> where water flow through the blue pipe stops.</p> <p>This happens when the water pressure at depth <span class="math-container">$a$</span> equals air pressure (apply <span
class="math-container">$(1)$</span> to blue pipe).</p> <p><span class="math-container">$$P(a)=P_0+\rho g a=P_{air}\tag 2$$</span></p> <p>Coincidentally, this is also the pressure needed to make air bubbles in the red pipe (apply <span class="math-container">$(1)$</span> to the red pipe).</p> <p><span class="math-container">$$P(d)<P_{air}\tag 3$$</span></p> <p>Combining <span class="math-container">$(2)$</span> and <span class="math-container">$(3)$</span>, we get</p> <p><span class="math-container">\begin{align*} P(d)&<P(a)\\ P_0+\rho g d&<P_0+\rho g a\\ d&<a \end{align*}</span></p> <p>In other words, bubbles can be made on the red pipe only when the tip of the red pipe is above the level of the blue pipe, which is exactly what we observe. These bubbles reduce the pressure deficit in the air pocket, allowing water to flow.</p>
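A short numerical sketch of this logic (the depths and pressures are illustrative values, not taken from the video):

```python
rho, g = 1000.0, 9.81        # water density (kg/m^3) and gravity (m/s^2)
P_air = 101325.0             # outside air pressure (Pa)
a = 0.20                     # depth of the blue pipe below the surface (m), assumed

# Equilibrium air-pocket pressure from (2): P0 + rho*g*a = P_air
P0 = P_air - rho * g * a

def water_pressure(d):
    """Water pressure at depth d below the surface."""
    return P0 + rho * g * d

def bubbles_form(d):
    """Air enters through the red straw only if P(d) < P_air, cf. (3)."""
    return water_pressure(d) < P_air

print(bubbles_form(0.10))   # red tip above the blue pipe level (d < a): True
print(bubbles_form(0.30))   # red tip below the blue pipe level (d > a): False
```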
|
Physics
|
|general-relativity|astrophysics|
|
What is the energy density on the surface of a polytropic gas sphere?
|
<blockquote> <p>Where is my fallacy?</p> </blockquote> <p>The “fallacy” is in the choice of matter model. If you want a solution of Einstein equation with a distinct boundary between matter and vacuum then you have to choose matter that allows nonzero density with zero pressure. But <em>polytropic gas</em> has equation of state: <span class="math-container">$$p=K \rho^{1+\frac1n},$$</span> where <span class="math-container">$n\ge 1$</span> is a constant. So as pressure approaches zero density also approaches zero.</p> <p>An <a href="https://en.wikipedia.org/wiki/Interior_Schwarzschild_metric" rel="nofollow noreferrer">interior Schwarzschild solution</a> is one such example of solution with nonzero energy density at the boundary of the star. The matter here is incompressible fluid.</p>
|
Physics
|
|newtonian-mechanics|angular-momentum|rotational-kinematics|angular-velocity|
|
Understanding the meaning of the directions of $\vec\omega$ and $\vec{L}$
|
<p>It sounds like you got it figured out. The direction of <span class="math-container">$\vec \omega$</span> is always parallel to <span class="math-container">$\vec r\times \vec v$</span>, i.e. perpendicular to the plane defined by <span class="math-container">$\vec r$</span> and <span class="math-container">$\vec v$</span>. Further, <span class="math-container">$\vec \omega$</span> is always parallel to the axis of rotation whenever <span class="math-container">$\vec r$</span> and <span class="math-container">$\vec v$</span> lie in the plane of rotation. This all stems from the use of the cross product in the definitions, as you note in your post.</p> <p>Now, as far as intuition goes: defining the angular quantities in terms of cross products gives a sense of <em>orientation</em> to the motion, i.e. using the "right hand rule" one can define a sense of positive rotation or negative rotation respectively.</p>
|
Physics
|
|newtonian-mechanics|fluid-dynamics|kinematics|vectors|
|
What is the locus of the velocity vectors of a boat navigating in the sea under the presence of some force?
|
<p>We know from Newton's second law that</p> <p><span class="math-container">$\displaystyle \vec F = \frac {d(m \vec v)}{dt}$</span></p> <p>and as long as the mass <span class="math-container">$m$</span> of the boat is constant we can conclude that</p> <p><span class="math-container">$\displaystyle \vec F = m \frac {d \vec v}{dt}$</span></p> <p>which we can re-arrange to get</p> <p><span class="math-container">$\displaystyle \frac {d \vec v}{dt} = \frac {\vec F} {m}$</span></p> <p>If the force vector <span class="math-container">$\vec F$</span> is constant (in both magnitude and direction) then we have</p> <p><span class="math-container">$\displaystyle \vec v(t) = \vec v(0) + \frac t m \vec F$</span></p> <p>So if the locus of <span class="math-container">$\vec v(0)$</span> is a circle then the locus of <span class="math-container">$\vec v(t)$</span> will be the same circle displaced by <span class="math-container">$\frac t m \vec F$</span>.</p> <p>If <span class="math-container">$\vec F$</span> depends on <span class="math-container">$t$</span> but not on <span class="math-container">$\vec v$</span> then we have to find the integral <span class="math-container">$\vec G(t) = \int_0^t \vec F(t) dt$</span>, and then</p> <p><span class="math-container">$\displaystyle \vec v(t) = \vec v(0) + \frac 1 m \vec G(t)$</span></p> <p>once again, if the locus of <span class="math-container">$\vec v(0)$</span> is a circle then the locus of <span class="math-container">$\vec v(t)$</span> will also be a circle. If we want the locus of <span class="math-container">$\vec v(t)$</span> to be a different shape from the locus of <span class="math-container">$\vec v(0)$</span> then we need a force <span class="math-container">$\vec F$</span> that depends on <span class="math-container">$\vec v$</span> - such as a drag force.</p>
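One can see this displacement numerically: sample a circle of initial velocities, evolve each under the same constant force, and check that the locus is the same circle shifted by $\frac{t}{m}\vec F$. The mass, force and time below are arbitrary illustrative values:

```python
import numpy as np

m = 50.0                     # boat mass (kg), illustrative
F = np.array([3.0, -1.0])    # constant force (N), illustrative
t = 7.0                      # elapsed time (s)

# Locus of initial velocities: a circle of radius r about c0
r, c0 = 2.0, np.array([1.0, 4.0])
theta = np.linspace(0, 2 * np.pi, 360, endpoint=False)
v0 = c0 + r * np.stack([np.cos(theta), np.sin(theta)], axis=1)

# v(t) = v(0) + (t/m) F  --  every point shifts by the same vector
vt = v0 + (t / m) * F

center = vt.mean(axis=0)                       # new center = c0 + (t/m) F
radii = np.linalg.norm(vt - center, axis=1)    # distances from the new center

print(center, radii.min(), radii.max())        # radius is unchanged
```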
|
Physics
|
|electromagnetism|poynting-vector|
|
Confusion in Poynting's theorem
|
<p>Whenever you have a conflict between some established principle and the concept of a classical point charge, the issue is the classical point charge. They lead to all sorts of oddities like infinite energy, weird self-forces, and other such things.</p> <p>Poynting's theorem follows directly from Maxwell's equations. So it can be used any time that Maxwell's equations apply. To resolve the issue you mention, simply, use continuous charge and current distributions, <span class="math-container">$\rho$</span> and <span class="math-container">$\vec J$</span>. These are the variables that appear in Maxwell's equations, so applying them makes direct sense. In terms of those variables the Lorentz force density is <span class="math-container">$\vec f = \rho \vec E + \vec J \times \vec B$</span> and it can be applied directly.</p>
|
Physics
|
|newtonian-mechanics|forces|free-body-diagram|
|
Tension while hanging from a bar
|
<p>You are correct. <span class="math-container">$T = \frac {mg} {2 \cos(x)}$</span> where <span class="math-container">$x$</span> is <em>half</em> the angle between the arms. When <span class="math-container">$x=0$</span> (arms vertical below the bar) we have <span class="math-container">$T = \frac {mg} 2$</span>. As <span class="math-container">$x$</span> approaches <span class="math-container">$90$</span> degrees (arms horizontal) then <span class="math-container">$T$</span> approaches infinity. When <span class="math-container">$x$</span> is greater than <span class="math-container">$90$</span> degrees your shoulders are above the bar and <span class="math-container">$T$</span> becomes negative, meaning that the arms are now in compression and not in tension. Finally when <span class="math-container">$x$</span> reaches <span class="math-container">$180$</span> degrees (arms vertical but above the bar) then <span class="math-container">$T = - \frac {mg} 2$</span>.</p>
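A quick numerical illustration of this formula, with an assumed body mass of 70 kg:

```python
import math

def arm_tension(m, x_deg, g=9.81):
    """Tension in each arm, T = m g / (2 cos x), with x half the angle between the arms."""
    return m * g / (2 * math.cos(math.radians(x_deg)))

m = 70.0   # assumed body mass (kg)
for x in (0, 30, 60, 85, 135, 180):
    print(f"x = {x:3d} deg  T = {arm_tension(m, x):9.1f} N")
```

As expected, $T$ blows up as $x$ approaches $90$ degrees and turns negative (compression) beyond it.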
|
Physics
|
|quantum-mechanics|fourier-transform|heisenberg-uncertainty-principle|dimensional-analysis|physical-constants|
|
How does the Planck constant enter into the uncertainty principle?
|
<p>Notice that the kernel of the transform from position to momentum representations is <span class="math-container">$$ e^{- i x p / \hbar}. $$</span> Thus, comparing with your definition of the Fourier transform, you need to have <span class="math-container">$$ \xi = \frac{p}{h}, $$</span> while you used <span class="math-container">$\hbar$</span> instead. Indeed, if you substitute <span class="math-container">$h$</span> for <span class="math-container">$\hbar$</span> anywhere in the last few equations, and in particular in the last one, you get the correct result, i.e., <span class="math-container">$$ \sigma_x \sigma_p \geq \frac{h}{4 \pi} = \frac{\hbar}{2}. $$</span></p>
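One can check the corrected bound numerically for a Gaussian wave packet, which saturates it: $\sigma_x\sigma_p = \hbar/2$ exactly. A sketch using $\langle p^2\rangle = \hbar^2\int|\psi'|^2\,dx \big/ \int|\psi|^2\,dx$ (the packet width is an arbitrary choice):

```python
import numpy as np

hbar = 1.054571817e-34
sigma = 1e-10                    # packet width (m), arbitrary

x = np.linspace(-12 * sigma, 12 * sigma, 200001)
psi = np.exp(-x**2 / (4 * sigma**2))      # Gaussian with <x> = <p> = 0

def integrate(f):
    """Trapezoid rule on the grid x."""
    return np.sum((f[1:] + f[:-1]) / 2 * np.diff(x))

norm = integrate(psi**2)
sigma_x = np.sqrt(integrate(x**2 * psi**2) / norm)

dpsi = np.gradient(psi, x)
sigma_p = hbar * np.sqrt(integrate(dpsi**2) / norm)

print(sigma_x * sigma_p / (hbar / 2))    # -> 1.000... (the bound is saturated)
```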
|
Physics
|
|thermodynamics|fluid-dynamics|temperature|everyday-life|
|
What is the difference between heating and cooling, fundamentally?
|
<p>Heating and cooling were originally thought to be different things. The equations for them were derived differently. Eventually it was discovered that they are the same mechanism, but in opposite directions. (James Maxwell)</p> <p>What you describe, however, is more than <em>just</em> heating or cooling. When you talk of warm air rising, you are also talking about mass transport. In such a situation, opposite directions can behave differently, just as how if you are facing a wall, taking a step backwards or a step forward is different. Even though both of them are just a displacement, in opposite directions, one can cause you to stub your toe while the other cannot. That being said, it is easy to construct examples where they do indeed behave symmetrically. The room you describe is <em>almost</em> one such example (there's a second-order effect of viscosity being different in cooler fluids, but I believe that is utterly negligible in the case of gases. It might be more of an issue for liquid examples).</p> <p>While we cannot give a definitive reason for your particular intuition, because we are not you, one experiment that might be worth looking into is how the smoke from incense moves. If the air in a room is still, incense smoke can travel upwards in a surprisingly laminar flow for a long time before the transition to turbulence occurs. You can also look into <a href="https://www.walmart.com/c/kp/waterfall-incense" rel="nofollow noreferrer">incense waterfalls</a>.<sup>*</sup> In these cases, the smoke particles are permitted to cool first, and they instead flow downward. Observing these examples may be helpful for adjusting your intuition to better align with what the empirical evidence of science provides.</p> <p><sup>*</sup> I am not affiliated with this commercial link. It is extremely difficult to find a link describing this sort of product which is not selling one.
The link here was chosen because it was high up on Google's search list and because it provided examples of several products.</p>
|
Physics
|
|general-relativity|reference-frames|acceleration|time-dilation|equivalence-principle|
|
On the equivalence principle
|
<p>These considerations about the principle of equivalence fall in the category of cases where there is a tension between local assessment and global assessment.</p> <p>Example:<br /> Locally any part of the Earth's surface is - as far as the naked eye can tell - flat, but globally the shape of the Earth is (to a very good approximation) a sphere.</p> <p>What you are proposing is to compare the amount of proper time that elapses for you to the amount of proper time that elapses for a population of clocks all located at a significant <em>distance</em> to you.</p> <p>Of course that brings out the difference that you are pointing out.</p> <p>The thing is, the principle of equivalence does not assert 1-on-1 correspondence on <em>global</em> scale.</p> <p>The thrust of the principle of equivalence is similar to the thrust of the argument that Galilei offered to argue for what we now call the Principle of Relativity of Inertial Motion.</p> <p>Imagine you are in a cabin inside of a ship that is in motion over perfectly smooth water. Create some mechanical setup in the cabin, such as a swinging pendulum. Can the inside-the-cabin mechanical setup provide you with a way to assess the velocity of the ship? It cannot.</p> <p>The thrust of the thought demonstration associated with the principle of equivalence is as follows: a local setup, of any kind (mechanical, electromagnetic, nuclear), will not detect a difference between uniform acceleration with respect to geometrically flat spacetime, or being stationary relative to a source of curvature of spacetime.</p> <p>The question of how local is local enough is not particularly relevant; that is not the thrust of the principle of equivalence. The thrust of the principle of equivalence is that if it would not hold good then <em>a local experiment</em> would be available that would tell you the difference.</p>
|
Physics
|
|electromagnetic-radiation|wavelength|doppler-effect|gravitational-redshift|
|
How is wavelength defined when it's changing continuously?
|
<p>Those are good ways, though you can't make the time interval infinitesimal, because an infinitesimally short segment can be matched by a sine wave of any frequency.</p> <p>If the function is slowly varying, the frequency doesn't change fast. You get a good estimate of the frequency at a given time from the wave in the near future and near past. So average over an interval.</p> <p>To get an estimate a short time later, slide the interval forward. There will be a lot of overlap, so you won't see much change. But that is what you expect for a slowly varying frequency.</p> <p>Another approach is to take the Fourier transform over each interval. You often get interesting information from how the Fourier components vary with time.</p>
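To make the sliding-interval idea concrete, here is a sketch that estimates the instantaneous frequency of a slow chirp by taking an FFT over a short moving window (all signal parameters are made up for illustration):

```python
import numpy as np

fs = 2000.0                       # sample rate (Hz)
t = np.arange(0, 2.0, 1 / fs)
f0, k = 50.0, 20.0                # slowly varying frequency: f(t) = f0 + k*t
s = np.sin(2 * np.pi * (f0 * t + 0.5 * k * t**2))

win = 512                         # window holds many cycles, yet f barely changes over it
for start in range(0, len(s) - win, 1000):
    seg = s[start:start + win] * np.hanning(win)        # taper to reduce leakage
    spec = np.abs(np.fft.rfft(seg))
    f_est = np.fft.rfftfreq(win, 1 / fs)[spec.argmax()]  # peak frequency of the window
    t_mid = (start + win / 2) / fs
    print(f"t = {t_mid:.2f} s   f_est = {f_est:.1f} Hz   true = {f0 + k * t_mid:.1f} Hz")
```

The estimate tracks the true frequency to within the FFT bin width, and sliding the window forward changes it only slowly, as described above.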
|
Physics
|
|hamiltonian-formalism|phase-space|constrained-dynamics|poisson-brackets|
|
Why do we need a Poisson bracket structure?
|
<p>The simplest way to motivate the Dirac bracket is to consider a situation where two second-class constraints <span class="math-container">$f_1,f_2$</span> have constant non-zero Poisson bracket <span class="math-container">$\{f_1,f_2\} = c \neq 0$</span>, e.g. when we have a system where one pair of position and momentum is constrained as <span class="math-container">$p_1 = 0, q^1 = 0$</span>.</p> <p>Really, what we want in our theory is to be able to look at our equations "modulo constraints", i.e. impose <span class="math-container">$f_1 \approx 0,f_2 \approx 0$</span> and have everything be consistent. We cannot do this with the equation <span class="math-container">$\{f_1, f_2\} = c$</span>, since plugging in the vanishing of the constraints yields <span class="math-container">$0 \approx c$</span>, a contradiction. Hence the original Poisson bracket is not the right structure for this theory of constraints. If you are very straightforwardly only interested in solving equations of motion, you might not worry about this, but then again - if you are only interested in that, why are you doing the whole Hamiltonian formalism in the first place?</p> <p>At the latest when we try to <em>quantize</em> this theory, we need a bracket that <em>does</em> play nice with the imposition of the constraints, since canonical quantization sends the bracket to the quantum commutator, and we cannot have states on which the constraint operators vanish but that also have <span class="math-container">$[f_1,f_2]\psi = c\psi$</span>.</p> <p>The Dirac bracket fulfills precisely this role since <span class="math-container">$[f_1,f_2]_\text{D} = 0$</span> by construction.</p> <p>A purely classical way to motivate the Dirac bracket is geometrical: The constraints <span class="math-container">$f_i = 0$</span> pick out a submanifold <span class="math-container">$\Sigma = \{(q,p)\in\mathbb{R}^{2n}\mid f_i(q,p) = 0\}$</span> in phase space and we would like to consider this submanifold a 
phase space in its own right, "forgetting" that there was a larger space in which it is embedded. In order to do that, we cannot use the naive Poisson bracket of the embedding space, since as we saw above it can produce terms that you cannot possibly get from "inside" the submanifold where <span class="math-container">$f_i = 0$</span> always holds.</p> <p>The <a href="https://en.wikipedia.org/wiki/Symplectic_manifold" rel="nofollow noreferrer">symplectic form</a> <span class="math-container">$\omega = \mathrm{d}q^i\wedge \mathrm{d}p^i$</span> underlying the Poisson bracket via <span class="math-container">$$\{g_1,g_2\} = \frac{\partial g_1}{\partial x^i}\omega_{ij}\frac{\partial g_2}{\partial x^j} $$</span> induces a corresponding form on the submanifold in the standard sense: If <span class="math-container">$y^\lambda$</span> are coordinates of the submanifold and <span class="math-container">$x^i$</span> are the coordinates of ordinary phase space, then for the components <span class="math-container">$\omega_{ij}$</span> of the original form, the induced form has <span class="math-container">$$ \omega_{\lambda\rho} = \omega_{ij}\frac{\partial x^i}{\partial y^\lambda}\frac{\partial x^j}{\partial y^\rho},$$</span> exactly like an induced metric would. When transformed back into the "bracket language", the induced bracket is <span class="math-container">$$ [g_1,g_2]_\Sigma = \frac{\partial g_1}{\partial y^\lambda}\omega_{\lambda\sigma}\frac{\partial g_2}{\partial y^\sigma},$$</span> and it turns out that <span class="math-container">$[-,-]_\Sigma = [-,-]_\mathrm{D}$</span>, i.e. the induced bracket is the Dirac bracket, when all the constraints <span class="math-container">$f_i$</span> are second-class.</p>
|
Physics
|
|cosmology|perturbation-theory|
|
Expression for $k_\mathrm{eq}$, the wavenumber entering the horizon at matter-radiation equality
|
<p><span class="math-container">$$H(a_\mathrm{eq})=H_0 \sqrt{\Omega_\mathrm{m}a_\mathrm{eq}^{-3}+\Omega_\mathrm{r}a_\mathrm{eq}^{-4}+\Omega_\Lambda}.$$</span> But <span class="math-container">$\Omega_\mathrm{m}a_\mathrm{eq}^{-3}=\Omega_\mathrm{r}a_\mathrm{eq}^{-4}$</span> at matter-radiation equality, by definition: the matter density and radiation density are equal. Also <span class="math-container">$\Omega_\Lambda\simeq 0.69$</span>, while <span class="math-container">$\Omega_\mathrm{m}a_\mathrm{eq}^{-3}=\Omega_\mathrm{r}a_\mathrm{eq}^{-4}\sim 10^{10}$</span>. That is, matter and radiation totally dominate over dark energy at the time of matter-radiation equality. So we can neglect the dark energy. This leads to <span class="math-container">$$H(a_\mathrm{eq})=H_0 \sqrt{2\Omega_\mathrm{m}a_\mathrm{eq}^{-3}}$$</span> and hence the expression in the book.</p>
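Numerically, with illustrative Planck-like density parameters, the dark-energy term really is negligible at equality:

```python
import numpy as np

H0 = 67.7                 # km/s/Mpc, illustrative Planck-like values
Om, Or, OL = 0.31, 9.2e-5, 0.69

a_eq = Or / Om            # matter-radiation equality: Om a^-3 = Or a^-4

H_full = H0 * np.sqrt(Om * a_eq**-3 + Or * a_eq**-4 + OL)
H_approx = H0 * np.sqrt(2 * Om * a_eq**-3)

print(a_eq)                       # ~3e-4
print(H_full / H_approx - 1)      # fractional error from dropping OL: utterly tiny
```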
|
Physics
|
|quantum-mechanics|experimental-physics|interference|diffraction|wavelength|
|
What is the upper limit of size for diffracting an object?
|
<p>As far as I know the largest object that has been successfully diffracted is an oligoporphyrin molecule with a molecular weight of about <span class="math-container">$25000$</span> daltons. This was done in 2019 and is reported in <a href="https://www.nature.com/articles/s41567-019-0663-9" rel="nofollow noreferrer">Quantum superposition of molecules beyond 25 kDa</a> by Fein <em>et al.</em> in Nature Physics volume 15, pages 1242–1245.</p> <p>Quantum behaviour has been <a href="https://www.scientificamerican.com/article/physicists-create-biggest-ever-schroedingers-cat/" rel="nofollow noreferrer">demonstrated in larger objects</a> but the behaviour being observed was superposition, not diffraction.</p> <p>To observe diffraction we need the system to remain coherent, but the decoherence time decreases rapidly with increasing size. In principle there is no limit to how large an object can be and still show quantum behaviour, but in practice the coherence lasts for too short a time to be observed for macroscopic objects.</p>
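For a sense of scale, the de Broglie wavelength of such a molecule is tiny compared with the molecule itself. A rough estimate (the beam velocity is an assumed round number, not the experimental value):

```python
h = 6.62607015e-34          # Planck constant (J s)
u = 1.66053907e-27          # atomic mass unit (kg)

m = 25000 * u               # ~25 kDa molecule, as in the Fein et al. experiment
v = 200.0                   # beam velocity (m/s), an assumed illustrative value

lam = h / (m * v)           # de Broglie wavelength
print(f"{lam:.2e} m")       # of order 1e-13 m, far smaller than the molecule
```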
|
Physics
|
|homework-and-exercises|differentiation|error-analysis|statistics|
|
Question regarding error analysis of focal length of a lens
|
<p>There are two separate issues here. Firstly you're asking how differentiation is being used here, and secondly you correctly point out that errors should be added in quadrature.</p> <p>Let's take the second issue first: you are correct that if the errors are normally distributed and if our equation is:</p> <p><span class="math-container">$$ a = b + c $$</span></p> <p>then the standard deviation in <span class="math-container">$a$</span> is given by adding the standard deviations in <span class="math-container">$b$</span> and <span class="math-container">$c$</span> in quadrature:</p> <p><span class="math-container">$$ \sigma_a{}^2 = \sigma_b{}^2 + \sigma_c{}^2 $$</span></p> <p>whereas the solution just adds the errors rather than adding them in quadrature. However, in introductory discussions of error analysis it's common to just add (the moduli of) the errors, i.e.</p> <p><span class="math-container">$$ \Delta a = \Delta b + \Delta c $$</span></p> <p>and that is what the solution is doing. This overestimates the error, but in most cases it still gives a reasonable estimate, and in any case we don't know whether the errors are normally distributed or not.</p> <p>Anyhow, now on to how differentiation is being used. The solution <em>is</em> using differentiation in the usual way.
We have the equation:</p> <p><span class="math-container">$$ g = \frac{1}{u} + \frac{1}{v} $$</span></p> <p>where <span class="math-container">$g = 1/f$</span> and differentiation gives (adding the moduli of the errors):</p> <p><span class="math-container">$$\begin{align} dg &= \frac{\partial}{\partial u}\left(\frac{1}{u}\right) du + \frac{\partial}{\partial v}\left(\frac{1}{v}\right) dv \\ &= \frac{du}{u^2} + \frac{dv}{v^2} \end{align}$$</span></p> <p>Then since:</p> <p><span class="math-container">$$ g = \frac{1}{f} $$</span></p> <p>we get:</p> <p><span class="math-container">$$ dg = \frac{d}{df}\left(\frac{1}{f}\right) df = \frac{df}{f^2} $$</span></p> <p>Giving the result in the solution:</p> <p><span class="math-container">$$ \frac{df}{f^2} = \frac{du}{u^2} + \frac{dv}{v^2} $$</span></p>
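Putting numbers in shows how the two prescriptions compare; the distances and uncertainties below are made-up illustrative values:

```python
import math

u, v = 30.0, 20.0            # object and image distances (cm), illustrative
du, dv = 0.5, 0.5            # measurement uncertainties (cm)

f = 1.0 / (1.0 / u + 1.0 / v)            # 1/f = 1/u + 1/v  ->  f = 12 cm

# Linear (worst-case) propagation:  df/f^2 = du/u^2 + dv/v^2
df_linear = f**2 * (du / u**2 + dv / v**2)

# Quadrature (independent Gaussian errors)
df_quad = f**2 * math.hypot(du / u**2, dv / v**2)

print(f, df_linear, df_quad)   # quadrature is always <= the linear estimate
```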
|
Physics
|
|thermodynamics|energy|differential-geometry|entropy|
|
How to understand the relationship between Weinhold geometry and Ruppeiner geometry in thermodynamic geometry?
|
<p>You have to provide more background information, with more-accessible references.</p> <p>From a Google search, these seem relevant and more-accessible</p> <ul> <li><a href="https://en.wikipedia.org/wiki/Ruppeiner_geometry" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Ruppeiner_geometry</a></li> <li>(presumably Weinhold's site) <a href="https://www.researchgate.net/publication/258879552_Thermodynamics_and_geometry" rel="nofollow noreferrer">https://www.researchgate.net/publication/258879552_Thermodynamics_and_geometry</a></li> </ul> <hr /> <p><em>[Update: The answer to two of your questions can be answered without consulting the above references.]</em></p> <p><span class="math-container">$\gamma$</span> is likely the ratio <span class="math-container">$C_p/C_v$</span> <em>(familiar from thermodynamics)</em>.</p> <p>Then the matrix element you obtained [from your reference] simplifies to <span class="math-container">$$\frac{p \gamma }{V}-\frac{p^2}{C_v T}=\frac{p}{V},$$</span> using <span class="math-container">$C_p-C_v=R_{idealGasConstant}$</span> (the ideal gas constant)<BR> and the Ideal Gas Law (<span class="math-container">$PV=nR_{idealGasConstant}T$</span>) <em>(both familiar from thermodynamics)</em>.</p>
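The simplification claimed above can be checked numerically: pick random values, impose $C_p - C_v = R$ and $pV = RT$ (one mole), and the matrix element collapses to $p/V$ to machine precision. This is only a sketch of the algebra, not of the geometry:

```python
import random

R = 8.314  # ideal gas constant (J/(mol K)), one mole assumed

random.seed(0)
for _ in range(5):
    Cv = random.uniform(10.0, 30.0)
    p = random.uniform(1e4, 1e6)
    V = random.uniform(1e-3, 1e-1)
    T = p * V / R                  # ideal gas law
    gamma = (Cv + R) / Cv          # gamma = Cp/Cv with Cp - Cv = R
    lhs = p * gamma / V - p**2 / (Cv * T)
    print(abs(lhs - p / V) / (p / V))   # ~ machine epsilon
```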
|
Physics
|
|kinematics|velocity|differentiation|
|
How to calculate the final position of a particle under variable accelaration and its instantenous velocity?
|
<p>The time shown on the screen (4 mins) is also instantaneous, based on the current speed of the train. That's why the ETA decreases as you get closer to the destination. So <span class="math-container">$d = v_{\text{inst}} \times t = 11.73\ \text{km}$</span>.</p> <p>No calculus seems to be needed here. As for your second part, a relation between <span class="math-container">$x$</span> and <span class="math-container">$t$</span> generally appears when there is a well-defined external force in play: for example when the train crests a hill and rolls down a defined curvature with the engine off (gravitational force), or an object thrown at an angle with some initial velocity, or a charged particle travelling in a constant or varying magnetic field (defined by the nature of its source), etc.</p>
|
Physics
|
|general-relativity|reference-frames|acceleration|centrifugal-force|equivalence-principle|
|
Would objects really be at rest relative to each other in orbit?
|
<p>From a Newtonian perspective the objects free-floating inside the spaceship are subject to (almost) exactly the same gravitational acceleration as the spaceship and travel on similar orbits. The walls of the spaceship do not "shield" them from gravitational attraction. Hence there is no apparent centrifugal or coriolis force on those objects in the accelerated frame of reference of the spaceship. There <em>will</em> be tidal forces, since the gravitational acceleration will not be identical in all parts of the spaceship.</p>
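For a sense of how small those tidal forces are, the leading-order radial tidal acceleration between two free-floating objects separated by $d$ along the radial direction is $2GMd/r^3$. An illustrative estimate for low Earth orbit (the numbers are assumed, not from the question):

```python
mu_earth = 3.986004418e14     # GM of Earth (m^3/s^2)
r = 6.771e6                   # orbital radius at ~400 km altitude (m), illustrative
d = 10.0                      # radial separation inside the spacecraft (m)

# Leading-order radial tidal acceleration between the two free-floating objects
a_tidal = 2 * mu_earth * d / r**3

print(f"{a_tidal:.2e} m/s^2")   # a few times 1e-5 m/s^2: tiny but nonzero
```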
|
Physics
|
|quantum-field-theory|
|
Orthonormality condition of polarization four-vectors
|
<p>The choice that you propose is not possible. You can change the sign of all <span class="math-container">$\zeta$</span>, which is a matter of taste.</p>
|
Physics
|
|electromagnetism|electrostatics|energy|acceleration|propulsion|
|
Are there very high voltage very low amperage electromagnetic/electrostatic launchers?
|
<p>In a launcher, force comes from the magnetic field. The magnetic field comes from the current. So they are designed for high current and low voltage. High current comes with high resistive heating.</p> <p>There is no simple way around it. Use the strongest, lowest resistance material you can.</p> <p>Superconductors might help, but they have their own problems. Keeping them cool is as hard a problem to solve as handling excess heat.</p>
|
Physics
|
|special-relativity|tensor-calculus|linear-algebra|
|
Determinant of Rank-2 Tensor using Levi-Civita notation
|
<p>The Levi-Civita symbol, which is defined as <span class="math-container">$$ \varepsilon_{\mu\nu\rho\sigma}= \begin{cases} +1&\text{if}\,{\mu\nu\rho\sigma} \rm \, is\,an\,even\,permutation\,of\,0123 \\ -1&\text{if}\, {\mu\nu\rho\sigma}\rm \,is\,an\,odd\,permutation\,of\,0123\\ 0&\text{otherwise} \end{cases} $$</span> is a tensor density of weight <span class="math-container">$w=-1$</span>. Based on the definition of the determinant we can write the determinant of a matrix as <span class="math-container">$$\text{Det}(T) = \varepsilon_{\mu\nu\rho\sigma} T^{\mu}_{0} T^{\nu}_{1} T^{\rho}_{2} T^{\sigma}_{3} $$</span> or <span class="math-container">$$\text{Det}(T) = \varepsilon^{\mu\nu\rho\sigma} T_{\mu}^{ 0} T_{\nu}^{ 1} T_{\rho}^{ 2} T_{\sigma}^{ 3} $$</span> We can also construct an absolute tensor from each tensor density; for the Levi-Civita density this is <span class="math-container">$$\epsilon_{\mu\nu\rho\sigma}=\sqrt{\left|g\right|}\varepsilon_{\mu\nu\rho\sigma}$$</span><br /> <span class="math-container">$$\epsilon^{\mu\nu\rho\sigma}=\frac{\text{sgn}(g)}{\sqrt{\left|g\right|}}\varepsilon_{\mu\nu\rho\sigma}$$</span><br /> Then for Minkowski spacetime we have: <span class="math-container">$$\epsilon_{\mu\nu\rho\sigma}= \varepsilon_{\mu\nu\rho\sigma}$$</span> <span class="math-container">$$\epsilon^{\mu\nu\rho\sigma}= - \varepsilon^{\mu\nu\rho\sigma}$$</span> Now if we try to write the determinant of <span class="math-container">$T$</span> as a contraction of two tensors we have</p> <p><span class="math-container">$$\text{Det}(T) = \varepsilon^{\mu\nu\rho\sigma} T_{\mu}^{ 0} T_{\nu}^{ 1} T_{\rho}^{ 2} T_{\sigma}^{ 3} = -\epsilon^{\mu\nu\rho\sigma} T_{\mu}^{ 0} T_{\nu}^{ 1} T_{\rho}^{ 2} T_{\sigma}^{ 3} = -\epsilon_{\mu\nu\rho\sigma} T^{\mu 0} T^{\nu 1} T^{\rho 2} T^{\sigma 3 }$$</span> Therefore your professor was right.</p>
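The first determinant formula is easy to verify directly, summing the signed permutations against a random matrix:

```python
import itertools
import numpy as np

def eps(perm):
    """Levi-Civita symbol: sign of a permutation of (0, 1, 2, 3), via inversion count."""
    sign = 1
    for i in range(4):
        for j in range(i + 1, 4):
            if perm[i] > perm[j]:
                sign = -sign
    return sign

rng = np.random.default_rng(0)
T = rng.random((4, 4))

# Det(T) = eps_{mu nu rho sigma} T^mu_0 T^nu_1 T^rho_2 T^sigma_3
det_lc = sum(eps(p) * T[p[0], 0] * T[p[1], 1] * T[p[2], 2] * T[p[3], 3]
             for p in itertools.permutations(range(4)))

print(det_lc, np.linalg.det(T))   # the two agree
```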
|
Physics
|
|homework-and-exercises|quantum-field-theory|integration|causality|propagator|
|
Exponential decay of propagator outside lightcone
|
<p>Such asymptotic behaviour is typically calculated using the Laplace method (which generalises to the saddle point method). It's worth looking into in depth: you'll use it again and again in QFT. You want to estimate (setting <span class="math-container">$m=1$</span> to fix the energy scale): <span class="math-container">$$ D(\vec x) = \int \frac{d^n\vec p}{(2\pi)^n2(\vec p^2+1)^s}e^{-i\vec p\cdot \vec x} $$</span> with <span class="math-container">$n=3,s=1/2$</span> in your case. Before applying Laplace, you'll first need to convert it to a 1D integral, making the isotropy evident by writing it as a function of <span class="math-container">$x=|\vec x|$</span> alone. In 3D, you could just do the angular integral, which happens to be easy, but instead I'll show you a method applicable in arbitrary dimensions and for arbitrary powers. A general trick is to notice that (Mellin transform): <span class="math-container">$$ \frac1{x^s} = \frac1{\Gamma(s)}\int_0^{+\infty} e^{-xt}t^{s-1}dt $$</span> so this reduces the <span class="math-container">$n$</span>-dimensional integral to a gaussian integral: <span class="math-container">$$ \begin{align} D(\vec x) &= \int_0^{+\infty} dt\frac{t^{s-1}e^{-t}}{\Gamma(s)}\int \frac{d^n\vec p}{(2\pi)^n2}e^{-i\vec p\cdot \vec x-t\vec p^2} \\ &= \frac{1}{2(4\pi)^{n/2}\Gamma(s)}\int_0^{+\infty} t^{s-1-n/2}e^{-t-x^2/4t}dt \\ &= \frac{1}{2(4\pi)^{n/2}\Gamma(s)}\left(\frac x2\right)^{s-n/2}\int_0^{+\infty} t^{s-1-n/2}e^{-\frac x2(t+1/t)}dt \end{align} $$</span> At this stage, you could recognise the integral representation of a Bessel function, look up its asymptotic behaviour, and call it a day. The rest of the calculation therefore amounts to computing the asymptotic behaviour of this Bessel function. The stage is set for applying Laplace's method, sending <span class="math-container">$x\to\infty$</span>. The exponential is dominated by the point where <span class="math-container">$t+1/t$</span> is minimised, i.e. 
<span class="math-container">$t=1$</span> and <span class="math-container">$t+1/t=2$</span>. Therefore: <span class="math-container">$$ D(\vec x) \asymp e^{-x} $$</span> as claimed. You can even do better by Taylor expanding and doing the gaussian integral, which will give you an asymptotic equivalent: <span class="math-container">$$ D(\vec x) \sim \frac{1}{2(4\pi)^{n/2}\Gamma(s)}\left(\frac x2\right)^{s-n/2}e^{-x}\sqrt{\frac{2\pi}x} $$</span></p> <p>Hope this helps.</p>
|
Physics
|
|thermodynamics|energy|ideal-gas|
|
Internal energy of a gas under motion
|
<p>If you also consider the motion of the system, then you have to use a more sophisticated formalization of the thermodynamic system, which belongs to the <em>theory of continuum thermomechanics</em>.</p> <p>There, the work done on the system accounts for both thermodynamic effects (together with the heat entering the system) and variations of the macroscopic kinetic energy (together with the work done by the internal stresses).</p> <p>Macroscopic kinetic energy is <strong>not</strong> part of the (thermodynamic) internal energy.</p>
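As a toy illustration of that bookkeeping (all numbers are my own assumptions, not from the answer), for an ideal monatomic gas the internal energy $U = \frac{3}{2}nRT$ depends only on the thermodynamic state, while the bulk kinetic energy $\frac{1}{2}Mv^2$ is tracked separately and depends on the reference frame:

```python
# Separate bookkeeping of internal energy and bulk kinetic energy for
# an ideal monatomic gas. Numbers are illustrative assumptions.
R = 8.314    # gas constant, J/(mol K)
n = 1.0      # amount of gas, mol
M = 0.004    # mass, kg (about 1 mol of helium)
T = 300.0    # temperature, K
v = 50.0     # bulk speed of the container, m/s

U = 1.5 * n * R * T       # internal energy: frame-independent
K_bulk = 0.5 * M * v**2   # macroscopic kinetic energy: frame-dependent
E_total = U + K_bulk      # total energy in the lab frame

print(U, K_bulk, E_total)
```

Boosting to the container's rest frame changes `K_bulk` but leaves `U` untouched, which is the content of the answer's last sentence.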
|
Physics
|
|electromagnetism|charge|potential|
|
Is there any finite charge distribution the potential of which doesn't decay with $\frac{1}{r}$ at infinity?
|
<p>If the charge distribution is finite in its spatial extent and simple in the sense that it carries a nonzero net charge (for instance, all positive or all negative), then the potential will always look like <span class="math-container">$1/r$</span> at far distances. However, if you have more complicated distributions, for example a positive lobe and a negative lobe with zero net charge, then the potential will look like a dipole potential, falling off as <span class="math-container">$1/r^2$</span>, at large distances. Similarly, one can have higher multipole distributions, such as quadrupoles.</p>
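The monopole vs dipole falloff can be checked numerically. This is a minimal sketch (units with $1/4\pi\epsilon_0 = 1$; the helper names and charge geometry are my own) that estimates the exponent $p$ in $V \sim 1/r^p$ by sampling the potential at two far radii:

```python
import numpy as np

def potential(charges, positions, r_point):
    """Coulomb potential (units with 1/(4 pi eps0) = 1) at r_point."""
    v = 0.0
    for q, pos in zip(charges, positions):
        v += q / np.linalg.norm(r_point - pos)
    return v

def falloff_exponent(charges, positions, r1=100.0, r2=200.0):
    """Estimate p in V ~ 1/r^p from the potential at two far radii."""
    v1 = potential(charges, positions, np.array([r1, 0.0, 0.0]))
    v2 = potential(charges, positions, np.array([r2, 0.0, 0.0]))
    return np.log(abs(v1) / abs(v2)) / np.log(r2 / r1)

d = np.array([0.5, 0.0, 0.0])  # half-separation of the dipole charges
monopole = falloff_exponent([1.0], [np.zeros(3)])
dipole   = falloff_exponent([1.0, -1.0], [d, -d])
print(monopole, dipole)
```

The single charge gives an exponent of 1, while the equal-and-opposite pair (zero net charge) gives an exponent of 2, as the multipole expansion predicts.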
|
Physics
|
|mathematics|dirac-delta-distributions|
|
Manipulation of functions inside a Dirac Delta function
|
<p>This is pretty grotesque.</p> <p>You need to first know a property of the Dirac delta distribution. It is stated as <span class="math-container">$$\tag1\delta[g(x)]=\frac{\delta(x-x_0)}{|g^\prime(x_0)|}$$</span> on Wikipedia, say, for any function <span class="math-container">$g$</span> that has a simple isolated root at <span class="math-container">$x_0$</span>, and we will be using this property in reverse several times.</p> <hr /> <p>There are many skipped steps, even at the front. <span class="math-container">$$ \begin{align} \tag21&=\int_0^1\mathrm d\rho\\ \tag3 &=\int_0^1\mathrm d\rho\int\mathrm dt\ \delta(t-c) \end{align} $$</span> where the integration range of <span class="math-container">$t$</span> just needs to include <span class="math-container">$c$</span>; we are then free to choose any <span class="math-container">$c$</span> we like, in particular <span class="math-container">$c=P^{-1}[P(u)+\ln\rho]$</span> as the text does.</p> <p>We can now apply the property above in reverse twice, once on the CDF <span class="math-container">$P$</span> and once on the exponential, leading to the 2nd-to-last line. By the FTC, <span class="math-container">$P(t)-P(u)=-\int_t^u p(x)\mathrm dx$</span>.</p> <p>The last line is then a swap of the integrals: we are free to use the Dirac delta distribution to eliminate <span class="math-container">$\rho$</span> rather than <span class="math-container">$t$</span>, and to do that properly, you also have to consider the integration region.</p>
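Property (1) can be verified numerically by replacing the delta with a narrow Gaussian. This is a minimal sketch with a toy example of my own choosing, $g(x)=x^2-4$, which has a simple root at $x_0=2$ on $x>0$ with $|g'(2)|=4$; the property predicts $\int f(x)\,\delta[g(x)]\,\mathrm dx = f(x_0)/|g'(x_0)|$:

```python
import numpy as np

def nascent_delta(y, eps):
    """Narrow Gaussian approximation of the Dirac delta."""
    return np.exp(-y**2 / (2 * eps**2)) / (eps * np.sqrt(2 * np.pi))

# g(x) = x^2 - 4 has a simple isolated root at x0 = 2 on x > 0,
# with |g'(2)| = 4, so int f(x) delta(g(x)) dx should be f(2)/4.
x = np.linspace(0.5, 4.0, 400_000)   # restrict to x > 0: one root only
dx = x[1] - x[0]
f = np.cos(x)                        # any smooth test function
g = x**2 - 4.0
lhs = np.sum(f * nascent_delta(g, 1e-3)) * dx
rhs = np.cos(2.0) / 4.0
print(lhs, rhs)
```

Shrinking `eps` further tightens the agreement, exactly as the distributional identity requires.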
|
Physics
|
|quantum-mechanics|wavefunction|schroedinger-equation|scattering|s-matrix-theory|
|
Scattering Matrix and the Lippmann-Schwinger equation in QM
|
<blockquote> <p><strong>However, is there a formula that relates the S-matrix elements and these states <span class="math-container">$\psi^{(+)}$</span> and <span class="math-container">$\psi^{(-)}$</span>?</strong></p> </blockquote> <p>You have not provided a sufficiently detailed description of what exactly the <span class="math-container">$\psi^{(+)}$</span> and <span class="math-container">$\psi^{(-)}$</span> functions are, and I do not have a copy of Sakurai to look at to supplement the minimal description provided here.</p> <p>So, I will proceed by making some assumptions. I will assume that <span class="math-container">$$ \psi^{(\pm)}_\alpha = \Omega^{(\pm)}\phi_\alpha\;,\tag{1} $$</span> where <span class="math-container">$\Omega^{(\pm)}$</span> are Møller wave operators, as discussed further, for example, in <a href="https://rads.stackoverflow.com/amzn/click/com/0444867732" rel="nofollow noreferrer">Joachain's "Quantum Collision Theory"</a> in chapter 14.</p> <p>The function <span class="math-container">$$ \psi^{(\pm)}_\alpha $$</span> is an eigenfunction of the full Hamiltonian <span class="math-container">$H$</span>, including the scattering potential <span class="math-container">$V$</span>.</p> <p>The function <span class="math-container">$$ \phi_\alpha $$</span> is an eigenfunction of the (hopefully simpler) "unperturbed" Hamiltonian <span class="math-container">$H_0 = H-V$</span>.</p> <p>The index <span class="math-container">$\alpha$</span> on each of the two above functions represents a collection of numbers defining the state of the system and we have <span class="math-container">$$ H_0 \phi_\alpha = E_\alpha \phi_\alpha\;. $$</span></p> <p>We can also write Eq. (1) a little more concretely as: <span class="math-container">$$ \psi^{(\pm)}_\alpha = \lim_{\epsilon\to 0}\frac{\pm i\epsilon}{E_\alpha - H \pm i\epsilon}\phi_\alpha \tag{2}\;. 
$$</span> <span class="math-container">$$ =\phi_\alpha + \lim_{\epsilon\to 0}\frac{1}{E_\alpha - H \pm i\epsilon}V\phi_\alpha $$</span></p> <p>The <span class="math-container">$S$</span> matrix is given by: <span class="math-container">$$ S_{\beta,\alpha} = \langle \phi_\beta|S|\phi_\alpha\rangle $$</span> <span class="math-container">$$ =\langle \phi_\beta|{\Omega^{(-)}}^{\dagger}\Omega^{(+)}|\phi_\alpha\rangle $$</span> <span class="math-container">$$ =\langle \psi^{-}_\beta|\psi^{+}_\alpha\rangle \;, $$</span> which provides a simple expression for the <span class="math-container">$S$</span> matrix in terms of the <span class="math-container">$\psi$</span> states.</p>
|
Physics
|
|electromagnetism|antennas|waveguide|
|
Phasors and propagating modes in a waveguide
|
<p>In a homogeneous waveguide, propagating TE and TM waves (modes) can both be derived from the scalar Helmholtz equation. Let <span class="math-container">$\partial \mathcal B$</span> denote the metal boundary of the cross section. Denoting the wavenumber in the guide by <span class="math-container">$\beta$</span>, with <span class="math-container">$\kappa_c^2 +\beta ^2 = k_0^2$</span>, the harmonic wave propagating as <span class="math-container">$e^{-\mathfrak j \beta z}$</span> along the <span class="math-container">$z$</span> axis satisfies <span class="math-container">$\nabla_{x,y}^2 f(x,y) + \kappa_c^2 f(x,y) = 0 $</span></p> <p>For a TE wave <span class="math-container">$$\nabla^2 h_z(x,y) + \kappa_c^2 h_z(x,y) = 0 \space \mathrm{and} \space \frac{\partial h_z}{\partial n}|_{\partial \mathcal B} =0 \\ \mathbf e_z=0 \\ \mathbf h_t = -\frac{\mathfrak j \beta}{\kappa_c^2}\nabla h_z \space e^{-\mathfrak j \beta z}\\ \mathbf e_t=-\frac{k_0}{\beta}\sqrt{\frac{\mu_0}{\epsilon_0}} \hat z \times \mathbf h_t \tag{1}$$</span></p> <p>For a TM wave <span class="math-container">$$\nabla^2 e_z(x,y) + \kappa_c^2 e_z(x,y) = 0 \space \mathrm{and} \space e_z|_{\partial \mathcal B} =0\\ \mathbf h_z=0 \\ \mathbf e_t= -\frac{\mathfrak j \beta}{\kappa_c^2}\nabla e_z \space e^{-\mathfrak j \beta z}\\ \mathbf h_t = \frac{k_0}{\beta}\sqrt{\frac{\epsilon_0}{\mu_0}} \hat z \times \mathbf e_t \tag{2}$$</span></p> <p>Now notice that both <span class="math-container">$h_z$</span> for a TE and <span class="math-container">$e_z$</span> for a TM mode are <em>real</em> functions, and therefore so are their respective gradients. Since the propagation factor <span class="math-container">$e^{-\mathfrak j \beta z}$</span> is common to all components in both TE and TM, the respective transversal vector fields, <span class="math-container">$\mathbf e_t$</span> and <span class="math-container">$\mathbf h_t$</span>, are in quadrature with the longitudinal component. 
Since <span class="math-container">$\mathbf e_t \perp \mathbf h_t \perp \hat z$</span>, you also have <span class="math-container">$dS_z =(\mathbf e \times \mathbf h)\cdot d\mathbf a =(\mathbf e_t \times \mathbf h_t) \cdot d\mathbf a $</span>, where <span class="math-container">$d\mathbf a = \hat z da$</span> is the cross sectional normal pointing along the axis, thus the transverse "pieces" of each mode are mutually <em>in phase</em>.</p>
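These relations can be checked numerically for a concrete mode. This is a minimal sketch for the TE10 mode of a rectangular guide, assuming WR-90 dimensions and a 10 GHz drive (my own choices), with the standard fields $h_z \propto \cos(\pi x/a)$ and $h_t \propto -(\mathfrak j\beta/\kappa_c^2)\,\partial_x h_z$; it verifies the dispersion relation $\kappa_c^2+\beta^2=k_0^2$ and the quadrature between $h_z$ and $h_t$:

```python
import numpy as np

c = 299_792_458.0   # speed of light, m/s
a = 22.86e-3        # assumed WR-90 broad-wall width, m (illustrative)
f = 10e9            # assumed operating frequency, Hz

k0 = 2 * np.pi * f / c   # free-space wavenumber
kappa_c = np.pi / a      # TE10 cutoff wavenumber of a rectangular guide
beta = np.sqrt(k0**2 - kappa_c**2)  # real above cutoff: propagating mode

# TE10 at z = 0: h_z ~ cos(pi x / a) is real,
# h_t ~ -(j beta / kappa_c^2) d(h_z)/dx is purely imaginary.
x0 = a / 4
h_z = np.cos(np.pi * x0 / a)
h_t = -1j * beta / kappa_c**2 * (-np.pi / a) * np.sin(np.pi * x0 / a)

print(k0**2 - (kappa_c**2 + beta**2), h_z, h_t)
```

The real part of `h_t` is exactly zero while `h_z` is real, which is the quadrature the answer describes; above cutoff `beta` comes out real, so the mode propagates.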
|
Physics
|
|homework-and-exercises|kinematics|
|
Doubt regarding Velocity-Time graph with Constant Accleration
|
<p>The initial velocity is not 0 in conditions C and E.</p>
|