subject | topic | question | answer
---|---|---|---|
Physics
|
|fluid-dynamics|pressure|fluid-statics|gas|bernoulli-equation|
|
Different types of pressure and their sources
|
<p>We sense (measure) pressure as a force on a unit area. Consider standing at a measurement point on a given area. The two levels to consider the force we would feel are the molecular and the bulk.</p> <p>At the molecular level in a fluid (gas or liquid), force arises due to a change in the momentum of a particle as it collides with the measurement point and changes direction. This is your #3. We derive the ideal gas equation of state for pressure from kinetic theory, where particles with no size and no attractive or repulsive interactions traveling with a given average kinetic energy collide elastically with a wall of a given area. We can derive the van der Waals equation of state for pressure when we include attractive interactions between particles and give the particles a defined size. Other equations of state for pressure apply other assumptions or take an entirely empirical approach.</p> <p>At the bulk level, force arises from two considerations. First, in a gravitational field, the bulk mass experiences an acceleration in the direction of the gravitational field. This is the gravitational pressure in your #1 and is modeled using barometric equations such as <span class="math-container">$\Delta p = \rho\ g\ \Delta h$</span>. Second, when a bulk unit or packet of fluid is traveling cooperatively and impacts a unit area, the packet imparts a force as it changes direction at impact. This is the bulk equivalent of the molecular level force and is your #2. It is modeled by such equations as Bernoulli's equation (which has its own set of assumptions about how pressure is transmitted through the bulk fluid). The packet is not the entire fluid. It is simply a micro-region within the entire flow. The transition between laminar and turbulent flow can be considered an example of when we can or cannot break the cooperative behavior of packets within a fluid and revert to the molecular behavior of particles within the packets. Viscosity measures the ability of particles to transmit shear forces between themselves. Viscosity does not exist in ideal gases, and it otherwise plays a role in defining the magnitude of cooperative behavior, hence the transition between laminar and turbulent flow.</p> <p>In summary, the fact that we can model any of the given types of pressure by a certain equation (ideal gas law, barometric equation, or Bernoulli's equation) is not to be inverted and used as a way to explain the source of the given type of pressure. The fundamental source of what we define as pressure, whether at the molecular or bulk level, is a force exerted on a defined unit of area.</p>
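<p>As a minimal numeric sketch of the two model equations mentioned above (the barometric relation and the ideal gas law); the density, depth, temperature and volume are illustrative placeholder values:</p> <pre><code class="lang-python"># Illustrative plug-ins for the barometric relation and the ideal gas law.
rho, g, dh = 1000.0, 9.81, 10.0      # water density (kg/m^3), gravity, depth (m)
dp_hydrostatic = rho * g * dh        # Delta p = rho * g * Delta h
print(dp_hydrostatic)                # ~9.8e4 Pa for 10 m of water

n, R, T, V = 1.0, 8.314, 300.0, 0.0248   # mol, J/(mol K), K, m^3
p_ideal = n * R * T / V                  # kinetic-theory (ideal gas) pressure
print(p_ideal)                           # ~1.0e5 Pa
</code></pre>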
|
Physics
|
|electromagnetism|speed-of-light|refraction|vacuum|physical-constants|
|
On the (variable?) nature of $\epsilon_0$ and $\mu_0$
|
<p><span class="math-container">$\epsilon_0$</span> and <span class="math-container">$\mu_0$</span> were exactly defined conversion constants until 2019. At that time, the SI redefinition (which fixed the numerical values of <span class="math-container">$e$</span> and <span class="math-container">$h$</span> instead) turned them into measured quantities whose values depend on the fine-structure constant <span class="math-container">$\alpha$</span>, which is determined by very difficult experiments. Now they could change if <span class="math-container">$\alpha$</span> varied with time or as you moved through the universe.</p>
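<p>Concretely, with <span class="math-container">$h$</span>, <span class="math-container">$e$</span> and <span class="math-container">$c$</span> now exact, the standard relations are <span class="math-container">$\mu_0 = 2\alpha h/(e^2 c)$</span> and <span class="math-container">$\epsilon_0 = 1/(\mu_0 c^2)$</span>; here is a short numerical check using the CODATA 2018 value of <span class="math-container">$\alpha$</span>:</p> <pre><code class="lang-python">h = 6.62607015e-34        # J s, exact by definition
e = 1.602176634e-19       # C, exact by definition
c = 299792458.0           # m/s, exact by definition
alpha = 7.2973525693e-3   # measured (CODATA 2018)

mu_0  = 2*alpha*h/(e**2*c)
eps_0 = 1/(mu_0*c**2)
print(mu_0)    # ~1.2566e-6 N/A^2, no longer exactly 4*pi*1e-7
print(eps_0)   # ~8.854e-12 F/m
</code></pre>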
|
Physics
|
|electric-fields|maxwell-equations|
|
How do I show that the electric field can be written as a composition of an irrotational and a solenoidal one?
|
<p>To obtain these equations in their general form, start from the definition</p> <p><span class="math-container">$$\vec{B} = \nabla \times \vec{A}$$</span></p> <p>Substitution into Faraday's law gives us</p> <p><span class="math-container">$$\nabla \times \vec{E} = -\frac{\partial (\nabla \times \vec{A})}{\partial t}$$</span></p> <p><span class="math-container">$$\nabla \times\left[ \vec{E} + \frac{\partial \vec{A}}{\partial t}\right] = 0$$</span></p> <p>Thus, since the curl is zero, we can assign a scalar potential <span class="math-container">$ V$</span>: <span class="math-container">$$\vec{E} + \frac{\partial \vec{A}}{\partial t} = -\nabla V$$</span></p> <p><span class="math-container">$$\vec{E} = -\nabla V -\frac{\partial \vec{A}}{\partial t}$$</span></p> <p>However, this need not be a Helmholtz decomposition.</p> <p>Now use the Coulomb gauge, <span class="math-container">$\nabla \cdot \vec{A} =0$</span>.</p> <p>Taking the divergence, the second term on the RHS vanishes by definition of the Coulomb gauge, while the first term remains.</p> <p>Taking the curl, the curl of a gradient is zero, while the second term is in general nonzero.</p> <p>This demonstrates that the first term is purely irrotational and the second term is purely solenoidal.</p> <p>This is only true when working in the Coulomb gauge, which I assume you misread.</p>
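<p>As a quick cross-check, here is a minimal sympy sketch; the fields below are made-up examples standing in for <span class="math-container">$-\nabla V$</span> and <span class="math-container">$-\partial \vec{A}/\partial t$</span> (only their spatial dependence matters for the curl and divergence):</p> <pre><code class="lang-python"># Check that -grad(V) is irrotational and a Coulomb-gauge A-term is solenoidal.
from sympy.vector import CoordSys3D, gradient, divergence, curl
from sympy import sin, cos

R = CoordSys3D('R')
V = R.x**2 * R.y                      # example scalar potential
A = cos(R.y)*R.i + sin(R.x)*R.k       # example vector potential with div A = 0

print(curl(-gradient(V)))             # zero vector: the -grad V piece is irrotational
print(divergence(A))                  # 0: this A satisfies the Coulomb gauge
print(divergence(-gradient(V)))       # generally nonzero
print(curl(A))                        # generally nonzero: the A-term is solenoidal
</code></pre>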
|
Physics
|
|energy|sun|hydrogen|fusion|
|
Fusion in the Sun of 4 hydrogen into helium-4: how is the energy produced?
|
<p>(0) This process is called "the proton-proton chain", and should be referred to as such in question titles and what-not.</p> <p>Two protons do not fuse into <span class="math-container">${}^2_2{\rm He}$</span>, which has a half-life in the <span class="math-container">$10^{-22}$</span> second region...compare that with the mean collision time for protons in the core of the Sun to see that it cannot contribute.</p> <p>Rather, two protons undergo a weak interaction:</p> <p><span class="math-container">$$ p + p \rightarrow {}^2_1{\rm D} + e^+ + \nu_e + 0.42\,{\rm MeV}$$</span></p> <p>(that it is weak means it is unlikely, hence a 10-billion-year-lived Sun).</p> <p>The initial (final) mass-sum on the LHS (RHS) is:</p> <p><span class="math-container">$$ M_0 = 2m_p = 1876.544163\,{\rm MeV} $$</span></p> <p><span class="math-container">$$ M_1 = m_d + m_e +m_{\nu_e}= 1876.1239416\,{\rm MeV} $$</span></p> <p>(where I've ignored the neutrino mass). Note that</p> <p><span class="math-container">$$M_1 - M_0 = -0.420221\,{\rm MeV} < 0$$</span></p> <p>So the total mass is reduced. This is generally referred to as "binding energy", which is negative. The deuteron binding energy (the only binding energy every nuclear physicist has memorized) is 2.2 MeV, which gives the deuteron a lower mass than that of a free proton plus a free neutron. (Here, some of the 2.2 MeV is required to turn a proton into a neutron and create the final state leptons).</p> <p>That mass difference is generally liberated as kinetic energy according to:</p> <p><span class="math-container">$$ E = mc^2 $$</span></p> <p>The positron goes on to annihilate an unrelated plasma electron, releasing:</p> <p><span class="math-container">$$2m_e = 1.022\,{\rm MeV}$$</span></p> <p>as 2, 3, 4, ... gamma rays. Note that this is not fusion, and does not require temperature or pressure to occur, though number density is obviously a factor.</p> <p>The deuteron is stable, and finds a proton:</p> <p><span class="math-container">$$ d + p \rightarrow {}^3_2{\rm He} + \gamma + 5.493\,{\rm MeV}$$</span></p> <p>which is a fairly hard gamma.</p> <p><a href="https://en.wikipedia.org/wiki/Proton%E2%80%93proton_chain" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Proton–proton_chain</a> says the mean residence time of the helium-3 is 400 years.</p> <p>From here, there are 4 branches to helium-4. The main one is:</p> <p><span class="math-container">$$ {}^3_2{\rm He} + {}^3_2{\rm He} \rightarrow {}^4_2{\rm He} + 2p + 12.859\,{\rm MeV}$$</span></p> <p>(So I retract my hard gamma quip, and now apply it here).</p> <p>You can see the above reference for details on other branches. They are:</p> <p>Lithium Burning (<a href="https://en.wikipedia.org/wiki/Lithium_burning" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Lithium_burning</a>)</p> <p>The pp-III branch, involving beryllium-7, beryllium-8 and boron-8, which is dominant above 25 MK.</p> <p>pp-IV (Hep), which is a theoretical <em>weak interaction</em> branch: <span class="math-container">${}^3_2{\rm He} + p \rightarrow \alpha + {\rm appropriate\ leptons}$</span>.</p> <p>It's important to appreciate the difference between strong and weak interaction fusion processes, as the time-scales differ by orders of magnitude in the exponent of "orders of magnitude".</p> <p>Finally, to address your question, "How to work out the energy released":
Find the difference between initial mass and final mass:</p> <p><span class="math-container">$$ E_{pp} = \Big[4(m_p + m_e)\Big] - \Big[m_{\alpha} + 2(m_e + m_{\nu_e})\Big]$$</span></p> <p>Of course the neutrino mass isn't known (and the electron neutrino isn't even a mass eigenstate), but it is tiny (<span class="math-container">$ m_{\nu_e} \approx 0.07\,{\rm eV}$</span>)...200 times smaller than the binding energy of hydrogen.</p> <p>Since the neutrinos carry away their mass and kinetic energy, a full analysis of available energy for heating would require detailed analysis of the neutrino spectrum. See <a href="https://jila.colorado.edu/%7Epja/astr3730/lecture21.pdf" rel="nofollow noreferrer">https://jila.colorado.edu/~pja/astr3730/lecture21.pdf</a>. IIRC, the neutrino luminosity is around 1% of the solar luminosity.</p> <p>Note: IMHO, an interesting, but often overlooked role the neutrinos play is radiating lepton number. In the Standard Model, lepton number is conserved.</p> <p>The core contains <span class="math-container">$0.34 \times (M_{\odot}/{\rm gram}) \times N_A \approx 4\times 10^{56}$</span> protons and electrons, each. Over the lifetime of the Sun, that becomes <span class="math-container">$2\times 10^{56}$</span> protons, neutrons and electrons, each (assuming 100% burning, idk if that is correct), so baryon number is conserved, but <span class="math-container">$L=2\times 10^{56}$</span> units of lepton number have "gone missing" and are radiated via neutrinos.</p>
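<p>To make that last step concrete, here is a small numeric sketch (rounded masses in <span class="math-container">$\rm MeV/c^2$</span>; neutrino masses neglected):</p> <pre><code class="lang-python"># Energy released per helium-4 nucleus produced by the pp chain,
# from the mass difference quoted above (approximate masses in MeV/c^2).
m_p     = 938.272      # proton
m_e     = 0.511        # electron
m_alpha = 3727.379     # helium-4 nucleus

E_pp = 4*(m_p + m_e) - (m_alpha + 2*m_e)   # neutrino masses neglected
print(round(E_pp, 2))                      # ~26.73 MeV per helium-4
</code></pre>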
|
Physics
|
|coordinate-systems|astronomy|
|
$30°$ shift in a star position wrt earth over a month
|
<p>The Earth orbits the Sun once per year. That means the sky appears to revolve, relative to the Sun, at a rate of 360 degrees per year, or 30 degrees per month.</p> <p>If you were on the equator, where stars at the eastern horizon appear to rise straight upwards and sunset occurs at <em>roughly</em> the same time of day from month to month, then one month later the same star appears 30 degrees higher in the sky at sunset.</p>
|
Physics
|
|electrostatics|electric-fields|potential|voltage|conventions|
|
A question regarding the concept of potential difference between two points in an electric field, as stated in my 12th grade book
|
<p>Consider a system where the electric force is due to a negative charge, not a positive charge as is usually assumed.</p> <p><a href="https://i.stack.imgur.com/cHX7I.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/cHX7I.png" alt="enter image description here" /></a></p> <p>Now, when it is said that:</p> <blockquote> <p>... work done in displacing a unit positive charge... against the electric forces</p> </blockquote> <p>what is meant is that an external force that is <strong>equal in magnitude but opposite in direction</strong> to the electric force is applied.</p> <p>Now, suppose, as the figure shows, you move the charge <em>q</em> (which is positive) from A to B. So, the direction of your force is AB. The big charge <em>Q</em> will try instead to make it move towards A. So, its direction is BA. Now, the force you apply and the charge's displacement are in the same direction. Therefore, in this case the work done by the external force (you) is positive.</p> <p>Let us now assume the charge moves from B to A. In this case, the electric force is still in the direction BA, and you are applying a force in the direction AB. In this case, the direction of the displacement and the external force are opposite to each other, so the work done is negative.</p>
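<p>For completeness, a compact way to summarize this sign bookkeeping (assuming the positive charge <em>q</em> is moved quasi-statically, so the external force just balances the electric force) is</p> <p><span class="math-container">$$W_{\text{ext}} = -W_{\text{elec}} = q\,(V_B - V_A),$$</span></p> <p>so the external work is positive when the charge is moved to higher potential and negative when it is moved to lower potential.</p>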
|
Physics
|
|friction|collision|
|
Car vs Tricycle getting rear ended
|
<p>They'll go the same distance. The number of points of contact doesn't matter, since as you reduce the number of contact points, they each support more weight and can therefore apply more friction.</p> <p>The force applied by kinetic friction is <span class="math-container">$\mu N$</span>, which on flat, level ground is just <span class="math-container">$\mu mg$</span>. With a tricycle, you'll have <span class="math-container">$3$</span> wheels each applying a force of <span class="math-container">$1/3 \mu mg$</span>, and with a regular car you'll have <span class="math-container">$4$</span> wheels each applying a force of <span class="math-container">$1/4 \mu mg$</span> - the total either way is <span class="math-container">$\mu mg$</span>.</p> <p>Note that you can put two wheels of the car right next to each other and effectively have a "three-wheeled" vehicle, but there is no reason why simply thinking about the wheels differently should change the stopping distance.</p>
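<p>As a quick sanity check with numbers (the friction coefficient and speed below are arbitrary assumed values):</p> <pre><code class="lang-python"># Stopping distance under kinetic friction: 0.5*m*v**2 = mu*m*g*d, so the mass
# (and the number of wheels sharing the load) cancels out.
mu, g, v = 0.7, 9.81, 20.0     # assumed friction coefficient, gravity, speed (m/s)
d = v**2 / (2*mu*g)
print(round(d, 1))             # ~29.1 m, the same for 3 or 4 wheels
</code></pre>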
|
Physics
|
|general-relativity|observers|geodesics|
|
Inertial and a gravitational component of two observers
|
<p>Well, I am generally familiar with the works of Michel Janssen and the works of Jürgen Renn, and I endorse the interpretation of GR that they present in their works.</p> <p>What Janssen and Renn are describing is about attribution.</p> <p>In physics the current standard approach is to think of interactions as mediated by a <em>field</em>.</p> <p>It is for example assumed that the Coulomb force and magnetism are manifestations of the electromagnetic field.</p> <p>That is, rather than assuming that the dynamic interaction of two charged particles is a direct particle-to-particle interaction it is assumed that a field is acting as <em>mediator</em> of that interaction.</p> <p>Janssen and Renn are representative of a school of thought in which the following proposition is made:<br /> Allow for the possibility that the phenomenon of Inertia is mediated by a field.</p> <p>This inertia field is then assumed to have the property that it opposes <em>change of velocity</em>.</p> <p>The measure for coupling to the inertia field is the familiar term for inertial mass: '<span class="math-container">$m$</span>'.</p> <p>The next step is to grant the following set of suppositions:</p> <p>In the absence of a source of gravitational interaction the inertia field is uniform. Another way of stating that is: by default inertia is isotropic (the same in all directions).</p> <p>A source of gravitational interaction induces a <em>bias</em> in the inertia field. The standard name for this bias is: 'Curvature of Spacetime'.</p> <p>This biased state of the inertia field is then acting as mediator of gravitational interaction.</p> <p>So in terms of this interpretation there is a fundamental assumption that there is no separate gravitational field.</p> <p>Rather, according to this interpretation the field that gives rise to the phenomenon of inertia and the field that is acting as the mediator of gravitational interaction are <em>one and the same field</em>.</p> <p>This assumption of a <em>single field</em> then accounts for the equivalence of inertial and gravitational mass.</p> <p>So we take the standard thought demonstration of a space station that is spinning. Let the space station be ring-shaped. In that rotating ring, the centripetal acceleration has the effect of pulling G's.</p> <p>Do you attribute the G-load to inertia? Or do you attribute the G-load to the presence of gravity?</p> <p>The point is: a local experiment cannot tell the difference.</p> <p>About the word 'split' that Janssen and Renn are using: they are referring to an analogy with electromagnetism.</p> <p>In terms of electromagnetism: take the case of a wire with a current running through it. Now consider the following two cases: a charged particle that is stationary with respect to the wire, and a charged particle that has a velocity relative to the wire.</p> <p>The stationary-wrt-the-wire particle does not experience a Lorentz force, while the velocity-wrt-the-wire particle does. Generally, depending on the velocity of an observer relative to some system the composition of Coulomb force and Lorentz force comes out differently. There is a single field, the electromagnetic field, but depending on <em>velocity</em> of the observer relative to the system the decomposition comes out differently. Janssen and Renn refer to that decomposition as 'split'.</p> <p>Janssen and Renn point out that there is an analogous split if one grants that GR is a theory of the inertio-gravitational field.
For the inertio-gravitational field: how that split comes out depends on the state of <em>acceleration</em> of the observer relative to the system.</p>
|
Physics
|
|newtonian-mechanics|thermodynamics|pressure|work|volume|
|
Why is the work done by an expanding ideal gas $\textbf{P}_{ext}\Delta V$?
|
<p>Assuming it is even possible to deform a material at constant pressure (which I doubt, as a material generally undergoes elastic deformation prior to plastic deformation), the work done by the gas on the container is <span class="math-container">$P_{int}\Delta V$</span> and the work done by the constant external pressure on the container is <span class="math-container">$-P_{ext}\Delta V$</span>.</p> <p>Thus the net work done <em>on the container</em> is the sum of the two, or <span class="math-container">$(P_{int}-P_{ext})\Delta V$</span>.</p> <p>Hope this helps.</p>
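<p>A quick numeric plug-in of that result (the pressures and volume change below are made-up values):</p> <pre><code class="lang-python">P_int, P_ext = 2.0e5, 1.0e5        # Pa, assumed
dV = 1.0e-3                        # m^3 of expansion, assumed

W_by_gas      =  P_int*dV          # work done by the gas on the container
W_by_exterior = -P_ext*dV          # work done by the constant external pressure
print(W_by_gas + W_by_exterior)    # (P_int - P_ext)*dV = 100.0 J
</code></pre>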
|
Physics
|
|quantum-mechanics|wavefunction|schroedinger-equation|terminology|parity|
|
What does "$f(x)$ satisfies the given equation" mean?
|
<blockquote> <p>Now the solution just requires one to prove that <span class="math-container">$\partial^2/\partial x^2=\partial^2/\partial (-x)^2$</span>.</p> </blockquote> <p>Yes, if you can "just" prove the above-quoted equality you are basically "done." But have you understood what Griffiths hopes that you understand?</p> <hr /> <p>It may be helpful to first think of an explicit example to see what is going on. Consider, for example, the (completely made up) function: <span class="math-container">$$ \psi_0(x) = x + (1+x)^2\;. $$</span> The function <span class="math-container">$\chi_0(x) = \psi_0(-x)$</span> can also be written down explicitly: <span class="math-container">$$ \chi_0(x) = -x + (1-x)^2\;. $$</span></p> <p>Now look at the derivatives with respect to <span class="math-container">$x$</span>: <span class="math-container">$$ \psi_0'(x) = 1 + 2(1+x) $$</span> and <span class="math-container">$$ \chi_0'(x) = -1 - 2(1-x) $$</span> and note that we have, explicitly in this specific case: <span class="math-container">$$ \chi_0'(x) = -\psi_0'(-x)\;. $$</span></p> <hr /> <p>It is also true by the "chain rule" that, in general, whenever we have <span class="math-container">$$ \chi(x) = \psi(-x) $$</span> then <span class="math-container">$$ \chi'(x) = -\psi'(-x)\;. $$</span></p> <p>By taking another derivative with respect to <span class="math-container">$x$</span> and using the chain rule we also see that, in general, whenever <span class="math-container">$$ \chi(x) = \psi(-x)\tag{1} $$</span> we have <span class="math-container">$$ \chi''(x) = \psi''(-x)\;. \tag{A} $$</span></p> <hr /> <blockquote> <p>Thereby the form of the equation does not change. All that changes is that the variable goes from <span class="math-container">$x\rightarrow-x$</span>. So does the fact that the form of the equation does not change imply that <span class="math-container">$\psi(-x)$</span> is a solution?</p> </blockquote> <p>Thereby and forsooth, milord! However, I'm guessing that Griffiths <em>probably</em> wants you to be a little more specific in your proof... For example, first write down the starting point, which is that <span class="math-container">$\psi$</span> satisfies the TISE (I set <span class="math-container">$\hbar=m=1$</span> for my own sanity): <span class="math-container">$$ -\frac{1}{2}\psi''(x) + V(x)\psi(x) = E\psi(x)\tag{B} $$</span></p> <p>Then use Eq. (A) above to see that: <span class="math-container">$$ -\frac{1}{2}\psi''(x) = -\frac{1}{2}\chi''(-x) $$</span> and, by Eq. (B), we see that this also equals <span class="math-container">$$ =E\psi(x) - V(x)\psi(x)\;. $$</span> and by Eq. (1) <span class="math-container">$$ =E\chi(-x) - V(x)\chi(-x) $$</span> and by the evenness of the potential <span class="math-container">$V(x)=V(-x)$</span> <span class="math-container">$$ =E\chi(-x) - V(-x)\chi(-x)\;. $$</span></p> <p>Thusly and thereby and whence, we see that: <span class="math-container">$$ -\frac{1}{2}\chi''(-x) + V(-x)\chi(-x) = E\chi(-x)\tag{C}\;. $$</span></p> <p>But the variable <span class="math-container">$-x$</span> in Eq. (C) can just be renamed to <span class="math-container">$x$</span> if you would like: <span class="math-container">$$ -\frac{1}{2}\chi''(x) + V(x)\chi(x) = E\chi(x)\tag{D}\;. 
$$</span></p> <hr /> <blockquote> <p>Also, I belive that one can easily generalize this to time-dependent Schrödinger equation, but the fact that authors don't mention it makes me question the correctness of my claim.</p> </blockquote> <p>Sure, yes, one can "easily" generalize this to the time-dependent Schrodinger equation. But, if it is so easy, then why are you questioning the correctness of the generalization? Anyways, the details of the generalization will be left as an exercise to the interested reader, since it is so easy. (Yes, it really is easy--or maybe better to say it is "straightforward".)</p>
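<p>For readers who want to double-check the chain-rule step (A) mechanically, here is a tiny sympy sketch (the function <span class="math-container">$\psi$</span> is left abstract):</p> <pre><code class="lang-python">import sympy as sp

x = sp.symbols('x')
psi = sp.Function('psi')
chi = psi(-x)                  # chi(x) = psi(-x)

print(sp.diff(chi, x))         # sympy's Subs form of minus psi'(-x)
print(sp.diff(chi, x, 2))      # sympy's Subs form of plus psi''(-x), i.e. Eq. (A)
</code></pre>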
|
Physics
|
|quantum-mechanics|momentum|time-reversal-symmetry|
|
Time reversal in momentum space
|
<p>According to a theorem by Wigner (see E.P. Wigner, Gruppentheorie, Vieweg, Braunschweig, 1931, pp. 251-254), a symmetry transformation in quantum mechanics is either represented by a <em>unitary</em> or an <em>antiunitary</em> transformation.</p> <p>In particular, the so-called time reversal transformation operator <span class="math-container">$T$</span> is <em>antiunitary</em>, i.e. <span class="math-container">$T$</span> is <em>antilinear</em>,<span class="math-container">$$ T(a \phi + b \psi)= a^\ast T \phi+b^\ast T\psi \quad \forall \;\phi, \;\psi \in \mathcal{H} \quad \text{and} \quad \forall \; a,b \in \mathbb{C}, \tag{1} \label{1}$$</span> (<span class="math-container">$\mathcal H$</span> denotes the Hilbert space), <em>surjective</em>, <span class="math-container">$${\rm im }\, T = \mathcal{H} \tag{2} \label{2}$$</span> and <em>isometric</em>, <span class="math-container">$$|| T \psi ||=||\psi || \quad \forall \; \psi \in \mathcal{H}, \tag{3} \label{3}$$</span> implying <span class="math-container">$$\langle T \phi | T \psi \rangle = \langle \psi | \phi \rangle \quad \forall \; \phi, \psi \in \mathcal{H}\tag{4} \label{4} $$</span> (note the <em>order</em> of the arguments!). The associated <em>adjoint</em> operator <span class="math-container">$T^\dagger$</span> is defined by <span class="math-container">$$\langle \phi |T \psi \rangle =\langle T^\dagger \phi |\psi \rangle^\ast \quad \forall \; \phi,\psi \in \mathcal{H}. \tag{5} \label{5}$$</span> For fixed <span class="math-container">$\phi$</span>, the mapping <span class="math-container">$\psi \to \langle \phi | T\psi \rangle$</span> is an <em>antilinear</em> functional, implying that <span class="math-container">$T^\dagger$</span> is also antilinear. Finally, the antiunitarity of <span class="math-container">$T$</span> implies that also <span class="math-container">$T^\dagger$</span> is antiunitary and we can write <span class="math-container">$$ T^\dagger T = T T^\dagger = \mathbf{1}. \tag{6} \label{6} $$</span> Note in particular that <span class="math-container">$T^\dagger (a \mathbf{1}) T =a^\ast \mathbf{1}$</span> holds.</p> <p>Using Dirac's bra-ket notation, the time-reversal operator <span class="math-container">$T$</span> can indeed be defined by its action on the eigen-distributions <span class="math-container">$|p\rangle$</span> of the momentum operator, <span class="math-container">$$ T | p\rangle = |-p\rangle, \tag{7} \label{7}$$</span> satisfying \eqref{2} and \eqref{3}. Taking into account \eqref{1}, the action of <span class="math-container">$T$</span> on an arbitrary vector <span class="math-container">$$|\psi \rangle = \int\limits_{-\infty}^{+\infty} \!\!\! dp\; \underbrace{\langle p |\psi \rangle}_{\tilde{\psi}(p)} \;|p\rangle\tag{8} \label{8} $$</span> is given by <span class="math-container">$$T |\psi \rangle = \int\limits_{-\infty}^{+\infty} \! \!\! dp \; \tilde{\psi}(p)^\ast\; T |p\rangle= \int\limits_{-\infty}^{+\infty} \! \! \!dp \; \tilde{\psi}(p)^\ast \; |-p\rangle = \int\limits_{-\infty}^{+\infty} \! \! \! dp \; \tilde{\psi}(-p)^\ast \; |p\rangle, \tag{9} \label{9}$$</span> where the change of integration variables <span class="math-container">$p \to -p$</span> was performed in the last step. Equivalently, one obtains <span class="math-container">$$ \langle p |T |\psi\rangle =\langle -p | \psi \rangle^\ast = \tilde{\psi}(-p)^\ast \tag{10}$$</span> by employing \eqref{5}, obviously in agreement with \eqref{9}.</p>
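<p>A finite-dimensional numerical sketch (an assumed discretization of the momentum basis, not part of the argument above) makes the antiunitarity property (4) tangible: represent <span class="math-container">$T$</span> as "reverse the momentum ordering, then complex-conjugate the components".</p> <pre><code class="lang-python"># Check that the inner product of T(phi) with T(psi) equals that of psi with phi.
import numpy as np

rng = np.random.default_rng(0)
n = 8
phi = rng.normal(size=n) + 1j*rng.normal(size=n)   # components in a discrete momentum basis
psi = rng.normal(size=n) + 1j*rng.normal(size=n)

def T(v):
    return np.conj(v[::-1])        # flip p to -p, then complex-conjugate

lhs = np.vdot(T(phi), T(psi))      # inner product of T(phi) and T(psi)
rhs = np.vdot(psi, phi)            # inner product of psi and phi
print(np.allclose(lhs, rhs))       # True
</code></pre>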
|
Physics
|
|electromagnetism|reflection|
|
Reflection of electromagnetic waves from dielectric
|
<p>So, most dielectric materials are modeled with a variable dielectric constant (or tensor) that accounts for polarization of the underlying medium (induced dipoles, etc.) Because energy is required to disturb molecules in a dielectric medium from equilibrium, the dielectric will tend to absorb a certain amount of energy from the field, and this energy will include both elastic (reversible) forms, akin to the energy stored in a harmonic oscillator, as well as inelastic (irreversible/dissipative) forms, akin to the power absorbed by a conductor, except caused by (i) radiation from perturbed electronic orbitals and (at higher intensities) (ii) charge cascades on a molecular or microscopic scale. Explicitly, assuming an isotropic and homogeneous dielectric medium, the reflection and transmission coefficients <span class="math-container">$A_r$</span> and <span class="math-container">$A_t$</span> are given (in terms of the incident coefficient <span class="math-container">$A_{inc}$</span>) by <span class="math-container">\begin{align*} A_{r} = \frac{1 - \alpha}{1+\alpha} A_{inc}\\ A_t = \frac{2}{1 + \alpha} A_{inc} \end{align*}</span> where <span class="math-container">$\alpha$</span> is the factor by which the wave vector is multiplied under transmission (i.e. the index of refraction: <span class="math-container">$\alpha = \frac{c}{c'}$</span>, where <span class="math-container">$c'$</span> is the reduced speed of light in the dielectric). Letting <span class="math-container">$\alpha = 1+\eta$</span>, the total reflected and transmitted power is then proportional to <span class="math-container">\begin{align*} A_r^2+A_t^2 = \frac{4 + \eta^2}{4+2\eta + \eta^2}A_{inc}^2 \end{align*}</span> which approaches one when <span class="math-container">$\eta \rightarrow 0$</span> (i.e. perfect transmission) and when <span class="math-container">$\eta \rightarrow \infty$</span> (i.e. perfect reflection), with a minimum at <span class="math-container">$\eta = 2$</span>, which is roughly consistent with what you might expect from the microscopic picture (i.e. that most energy would be lost/absorbed/thermally-reemitted/radiated when the dielectric is responsive, but not so responsive that it screens itself completely, and that energy loss from dipole radiation would be greater at higher frequencies.)</p> <p>EDIT: After briefly reviewing Griffiths, I (re)learned that it is a happy mathematical accident that the internal energy of the perturbed molecular dipoles in the dielectric medium can be accounted for simply by using the material permittivity and permeability instead of their vacuum counterparts in the normal expression for the energy of an electromagnetic field. Accounting for this energy, the reflected and transmitted power combine to match the incident power. The slightly lower power in the bare field (i.e. not accounting for the internal energy of the dielectric) can be attributed to the process of the wave penetrating the dielectric medium (this can be verified by checking that the reflected and transmitted power also matches the incident power in steady state conditions when the dielectric has finite thickness.)</p> <p>References:</p> <p>Introduction to Electrodynamics (3rd. Edition), David J. Griffiths, section 9.3.2</p>
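<p>A quick numerical cross-check of the energy bookkeeping described in the EDIT, using the standard normal-incidence result for non-magnetic media that the transmitted <em>power</em> carries an extra factor of <span class="math-container">$\alpha$</span> relative to the square of the transmitted amplitude:</p> <pre><code class="lang-python"># With r = (1-alpha)/(1+alpha) and t = 2/(1+alpha), the reflected and transmitted
# power fractions are R = r**2 and T = alpha*t**2, and they sum to 1 for any alpha.
import numpy as np

alpha = np.linspace(1.0, 10.0, 10)     # index ratio c/c'
r = (1 - alpha) / (1 + alpha)
t = 2 / (1 + alpha)

R = r**2                               # reflected power fraction
T = alpha * t**2                       # transmitted power fraction
print(np.allclose(R + T, 1.0))         # True
</code></pre>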
|
Physics
|
|electromagnetism|solid-state-physics|electric-current|electrical-resistance|dissipation|
|
When the energy in a conductor is not carried by the electrons, how do resistances warm up?
|
<p>You need more than just the Poynting vector, you need all of <a href="https://en.m.wikipedia.org/wiki/Poynting%27s_theorem" rel="noreferrer">Poynting’s theorem</a><span class="math-container">$$\frac{du}{dt}+\nabla\cdot \vec S + \vec J\cdot \vec E =0$$</span></p> <p>The Poynting vector describes the flow of energy from one location to another. But energy that simply flows through does not produce any heat. In order for the energy to heat the resistor, it must flow in and then dissipate. The term <span class="math-container">$$\vec J\cdot \vec E$$</span> describes that dissipation.</p> <p>Taken together, Poynting’s theorem says that the fields which are produced by the surface charges result in a flux of energy through space, and then the current dissipates that energy from the fields into the matter.</p>
|
Physics
|
|newtonian-mechanics|newtonian-gravity|work|biology|
|
Is holding a falling object the same effort as lifting it?
|
<p>One of the most frustrating parts about teaching physics is that it's so hard to make examples involving the human body. The human body is so chock full of complex higher-order effects that often our intuition about the human body leads us astray from the physics principles.</p> <p>From a pure physics perspective, you need to put the same energy into lifting an object as you do into arresting its motion, and holding it in place transfers no energy at all. We can write the simple physics equation <span class="math-container">$W=fd$</span>, work is force times distance, smile, and say we're done.</p> <p>But with the human body, "effort" is not always a good metric for "energy." It turns out that our muscles are far more complicated than that. Our muscles operate on fascinating chemical reactions. These chemical reactions do not work the same for "concentric" motion, where the force is in the direction of motion, as they do for "eccentric" motion, where the force is in the opposite direction. So while catching a falling object and lifting an object require the same energy in the nice easy physics world, in the real world with real muscles, the story can be more complicated.</p> <p>Even more complex, it turns out that "isometric" force, where one applies force with no motion, is something that our muscles do even better than either eccentric or concentric motion (in fact, it's on the order of 3x stronger than concentric contractions). In many cases, our brains are astonishingly good at this, and will naturally lead us to use the weak arm muscles isometrically, and instead manage motion using concentric/eccentric motion of the abdominal muscles, which are far stronger. As a result, your instinct about what is "easier" turns out to be frustratingly far from the "energy" we teach in physics.</p> <p>If you want a better intuitive connection to "energy," consider simple machines rather than the complex human body. If you consider a play-ground see-saw catching an object versus lifting it, it's easier to see that it involves the same energy transfer. The human body is just... complicated.</p>
|
Physics
|
|electromagnetism|
|
How do Faraday's & Ampere's laws behave for quantized charge distributions?
|
<p>In one-dimensional space, EM theory with <span class="math-container">$E$</span> and <span class="math-container">$B$</span> fields does not work. You can only have a static E-field, which is not very interesting (but if you wish, see <a href="https://physics.stackexchange.com/questions/32685/can-light-exist-in-21-or-11-spacetime-dimensions">ref1</a> <a href="https://physics.stackexchange.com/questions/91444/does-electromagnetic-radiation-make-sense-in-one-dimension">ref2</a>).</p> <p>You could, however, keep your defined <span class="math-container">$f(x)$</span> exactly as it is and use it in 3 space dimensions, where it will describe an infinite slab of uniform charge parallel to the <span class="math-container">$yz$</span>-plane and moving in the <span class="math-container">$x$</span> direction.</p> <p>To solve it we can start with <span class="math-container">$v=0$</span>, which gives just a static E-field, finite everywhere. Then transform it to a frame of reference where <span class="math-container">$v$</span> is nonzero, see <a href="https://en.wikipedia.org/wiki/Lorentz_transformation#Transformation_of_the_electromagnetic_field" rel="nofollow noreferrer">ref3</a>. Nothing infinite will come out of that.</p> <p>You can then also try to solve it directly. It seems your reasoning would still apply and give an infinite <span class="math-container">$E$</span>-field if you use Ampere's and Faraday's laws. So you basically prove that Maxwell was right when he added the <span class="math-container">$\partial{\bf E}/\partial t$</span> term to Ampere's law to complete it. You will see that here it cancels the <span class="math-container">${\bf J}$</span> term! (Note that <span class="math-container">${\bf E}$</span> inside the charged slab is not zero; it points from the "middle plane" outwards with increasing strength if you move from this middle plane to the slab's surfaces.)</p> <p>From <a href="https://en.wikipedia.org/wiki/Lorentz_transformation#Transformation_of_the_electromagnetic_field" rel="nofollow noreferrer">ref3</a> we see that even for relativistic speed, the moving slab still has the same <span class="math-container">${\bf E}$</span> as in the static case and that <span class="math-container">${\bf B}$</span> remains zero (from <span class="math-container">${\bf E}'_{\|}= {\bf E}_{\|}$</span> and the three equations following it). So this is actually just as uninteresting as EM in one space dimension where you would have this result by definition.</p>
|
Physics
|
|quantum-mechanics|operators|inertial-frames|harmonic-oscillator|galilean-relativity|
|
Energy levels of a translating quantum harmonic oscillator
|
<p>OP seemingly wants to understand whether or not there is a contribution to the energy from the overall translation of the system with velocity <span class="math-container">$v$</span>. Of course there should be, and we should expect something like we usually find when doing separation of variables to separate out the center of mass motion.</p> <p>In order to see this very explicitly, consider first the ground state of the potential in its rest frame: <span class="math-container">$$ \psi_0(x) = Ae^{-\frac{1}{2}x^2}\;, $$</span> where <span class="math-container">$A$</span> is the usual normalization constant (<span class="math-container">$1/\pi^{1/4}$</span>, or whatever), and where I am setting <span class="math-container">$m=\hbar=\omega=1$</span> to help keep my typing brief. (But it is straightforward to fill those constants back in, if desired.)</p> <p>It is also helpful to have a reference to look at for a similar problem. <a href="https://rads.stackoverflow.com/amzn/click/com/1107189632" rel="nofollow noreferrer" rel="nofollow noreferrer">Griffiths' Quantum Mechanics book</a> has a section in Chapter 2 on the "delta function potential." Griffiths also provides a related practice problem in Chapter 2 regarding how the ground state of the <em>boosted</em> delta function potential (i.e., the delta function potential with the replacement <span class="math-container">$x\to x-vt$</span>) compares to the ground state of the stationary delta function potential.</p> <p>By analogy with Griffiths, it is straightforward to see that <em>our</em> boosted solution should be: <span class="math-container">$$ \chi_0(x,t) = Ae^{-\frac{1}{2}(x-vt)^2}e^{-i(\frac{1}{2}\omega + \frac{1}{2}mv^2)t}e^{imvx}\;,\tag{C} $$</span> where I've now put back in <em>some</em> of the <span class="math-container">$m,$</span> <span class="math-container">$\hbar$</span>, and <span class="math-container">$\omega$</span> variables to help orient the reader, but there are still some missing (the reader can fill them in on their own).</p> <p>The reader can show that the function <span class="math-container">$\chi_0(x,t)$</span> satisfies the <em>time-dependent</em> Schrodinger equation: <span class="math-container">$$ i\frac{\partial \chi_0(x,t)}{\partial t} = -\frac{1}{2}\chi_0''(x,t)+\frac{1}{2}(x-vt)^2\chi_0(x,t)\;. $$</span></p> <p>Now, a few words about the pieces in Eq. (C): <span class="math-container">$$ \chi_0(x,t) = \underbrace{Ae^{-\frac{1}{2}(x-vt)^2}}_{1} \underbrace{e^{-i(\frac{1}{2}\omega + \frac{1}{2}mv^2)t}}_{2}\underbrace{e^{imvx}}_{3}\;,\tag{C}\;. $$</span></p> <ul> <li>Piece 1 is just the ground state of the stationary harmonic oscillator potential, but now evaluated at the moving position <span class="math-container">$(x-vt)$</span> instead of evaluated at <span class="math-container">$x$</span>.</li> <li>Piece 2 has the usual energy factor <span class="math-container">$e^{-iEt}$</span>, but the energy <span class="math-container">$E$</span> is the energy of the ground state of the stationary potential <em>plus</em> the center of mass kinetic energy <span class="math-container">$\frac{1}{2}mv^2$</span>. 
(I think this is the piece OP is mainly interested in.)</li> <li>Piece 3 is a plane wave factor for a plane wave of momentum <span class="math-container">$mv$</span>, which is just the momentum of the center of mass due to the overall translation of the system at velocity <span class="math-container">$v$</span>.</li> </ul> <p>The reader can calculate the expectation value of <span class="math-container">$\hat H_v$</span> on <span class="math-container">$\chi_0(x,t)$</span> and will find a result that they are probably expecting. (Namely, <span class="math-container">$\langle \chi_0|\hat H_v|\chi_0\rangle = \frac{1}{2}mv^2 + \frac{1}{2}\hbar\omega$</span>.)</p>
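<p>For readers who want to verify the claim symbolically, here is a short sympy check (with <span class="math-container">$m=\hbar=\omega=1$</span>, as in the text; the normalization constant <span class="math-container">$A$</span> cancels and is omitted) that <span class="math-container">$\chi_0(x,t)$</span> in Eq. (C) satisfies the time-dependent Schrodinger equation displayed above:</p> <pre><code class="lang-python"># Verify i d(chi)/dt = -1/2 chi'' + 1/2 (x - v t)^2 chi for the boosted ground state.
import sympy as sp

x, t, v = sp.symbols('x t v', real=True)
half = sp.Rational(1, 2)
chi = sp.exp(-half*(x - v*t)**2) * sp.exp(-sp.I*(half + half*v**2)*t) * sp.exp(sp.I*v*x)

lhs = sp.I*sp.diff(chi, t)
rhs = -half*sp.diff(chi, x, 2) + half*(x - v*t)**2*chi
print(sp.simplify(sp.expand(lhs - rhs)))   # 0
</code></pre>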
|
Physics
|
|optics|waves|interference|superposition|huygens-principle|
|
Is the Huygens' principle consistent for intersecting wavefronts?
|
<p>The wavefronts don't intersect anywhere. The wavefronts are perpendicular to the rays everywhere (pretty much by definition). Rays and wavefronts look like this:</p> <p><a href="https://i.stack.imgur.com/CCx1m.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/CCx1m.jpg" alt="enter image description here" /></a><br /> <sub>(image from question "<a href="https://physics.stackexchange.com/questions/737757/what-does-the-wave-look-like-during-refraction">What does the wave look like during refraction?</a>")</sub></p> <p>In medium 2 the waves travel with slower speed than in medium 1. Hence in medium 2 the wavefronts are denser spaced than medium 1. It follows from Huygens' principle that the wavefronts (and hence also the rays) bend at the border between the media.</p>
|
Physics
|
|astronomy|
|
$1°$ shift in sun's sidereal position over a day
|
<p>Imagine the Earth making a complete revolution around the Sun. As it does so, the stars "behind" the Sun from the point of view of the Earth appear to change. It takes a full year (365 days) for this process to complete. Not entirely by coincidence, there are 360 degrees in a circle, so the Sun's apparent position relative to the distant stars shifts by about 1 degree per day.</p> <p>By contrast the rotation of the Earth causes both the Sun and stars to appear to move across the sky once per day.</p>
|
Physics
|
|special-relativity|approximations|
|
Taylor Approximation for Time Dilation and Lorentz Contraction
|
<p>Expression <span class="math-container">$(a)$</span> is obtained with a zeroth order expansion. I assume we work in units where <span class="math-container">$c = 1$</span>. On one hand you have <span class="math-container">$$1 - (1-\epsilon)^2 = \epsilon(2-\epsilon)$$</span> and on the other you have <span class="math-container">$$\frac{1}{\sqrt{2-\epsilon}} = \frac{1}{\sqrt{2}} + \mathcal{O}(\epsilon),$$</span> so putting both things together you obtain the following approximation for the Lorentz factor: <span class="math-container">$$\gamma := \frac{1}{\sqrt{1-(1-\epsilon)^2}} = \frac{1}{\sqrt{\epsilon}}\cdot \frac{1}{\sqrt{2-\epsilon}} = \frac{1}{\sqrt{\epsilon}}\cdot\left(\frac{1}{\sqrt{2}} + \mathcal{O}(\epsilon)\right) = \frac{1}{\sqrt{2\epsilon}} + \mathcal{O}(\sqrt{\epsilon})\stackrel{\epsilon\downarrow0}{\sim}\frac{1}{\sqrt{2\epsilon}}$$</span> where <span class="math-container">$\stackrel{\epsilon\downarrow0}{\sim}$</span> means that the expressions to its sides are asymptotically equivalent as <span class="math-container">$\epsilon$</span> approaches <span class="math-container">$0$</span> from the positive side. This explains your expression <span class="math-container">$(a)$</span>.</p> <p>Concerning <span class="math-container">$(b)$</span>, if <span class="math-container">$\Delta\bar{x}$</span> is indeed the proper length then I'd say the expression is wrong and you are right.</p>
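<p>A quick sympy cross-check of this leading behaviour (with <span class="math-container">$c=1$</span> and <span class="math-container">$v = 1-\epsilon$</span>, as above):</p> <pre><code class="lang-python"># Series of the Lorentz factor in epsilon: the leading term is 1/sqrt(2*epsilon).
import sympy as sp

eps = sp.symbols('epsilon', positive=True)
gamma = 1/sp.sqrt(1 - (1 - eps)**2)
print(sp.series(gamma, eps, 0, 1))   # sqrt(2)/(2*sqrt(epsilon)) + ...
</code></pre>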
|
Physics
|
|quantum-field-theory|mathematical-physics|regularization|analyticity|casimir-effect|
|
Analytical continuation as regularization in Quantum Field Theory, the remaining questions
|
<p>For your first question, analytical continuation is one of many ways to regulate a sum. As you pointed out, there is always an ambiguity in attributing a sum to a diverging series. Analytical continuation is one way to do this. What is especially relevant in physics is that in some cases, it is consistent with regularisation approaches in the following sense.</p> <p>A regularisation allows you to turn the original divergent series into a convergent one. However the resulting sum now depends on the cutoff and the regulator. In some cases, you can check that in the asymptotic expansion in the cutoff scale, the constant term is independent of the regulator, and corresponds to the analytically continued value.</p> <p>For your second question, it is essentially answered in equation (24) and the following paragraph. In the case of the Riemann series, you have the general asymptotic expansion for <span class="math-container">$\Re s <1$</span>: <span class="math-container">$$ \sum_{n=1}^\infty n^{-s}\eta(n/N) = C_{\eta,-s}N^{1-s}+\zeta(s)+O(1/N) $$</span> with <span class="math-container">$\eta$</span> the regulator and <span class="math-container">$N$</span> the cutoff scale. As expected the regularised sum depends both on <span class="math-container">$N$</span> and <span class="math-container">$\eta$</span>. This is the case for the first term, which is leading order. However, the constant term by definition does not depend on <span class="math-container">$N$</span>, but surprisingly does not depend on <span class="math-container">$\eta$</span> either. This is why it is a natural candidate for the sum of the divergent series. It turns out to match the analytical continuation approach. Tao proposes two approaches to see this. One is to define <span class="math-container">$\zeta$</span> by the previous equation and check that it is analytic. A second approach is to derive the asymptotic expansion from the analytically continued <span class="math-container">$\zeta$</span>.</p> <p>For your third question I will treat your examples in more detail. Using the entire series (of the polylogarithm), you can also recover the zeta function. You just need to recognise that the Abel summation amounts to using the regulator: <span class="math-container">$$ \eta(x) = e^{-ax} $$</span> with <span class="math-container">$\Re a>0$</span>. In other words, you can relate <span class="math-container">$z$</span> to the cutoff <span class="math-container">$z = e^{-a/N}$</span>, and you need to do an asymptotic expansion in terms of <span class="math-container">$N = -\frac a{\ln(z)}$</span>. In this case, you get: <span class="math-container">$$ \begin{align} \text{Li}_{-1}(z) &= \frac z{(1-z)^2} \\ &= \frac{e^{-a/N}}{(e^{-a/N}-1)^2} \\ &= \left(\frac Na\right)^2-\frac1{12}+O(1/N) \\ &= \frac1{(\ln z)^2}-\frac1{12}+O(\ln(z)) \end{align} $$</span> so you recover the cutoff- and regulator-independent value <span class="math-container">$-\frac1{12}$</span>.</p> <p>For the second method, you do see that energy is infinite since the original sum diverges at <span class="math-container">$s=0$</span>. To define the finite energy at <span class="math-container">$s=0$</span>, you need to invoke analytic continuation, which directly gives you the cutoff- and regulator-independent value.</p> <p>I think that part of your uneasiness is that you don't see a leading infinite term in the analytical continuation method. It turns out that the Casimir force is a bit of an exception in this regard.
In your typical QFT calculations, the value you're interested in is often a pole of the analytic continuation. You therefore still have an infinite result that you'll need to absorb into the renormalised parameters.</p> <p>For example, you can see this kind of behaviour in simple theories like vertex corrections of <span class="math-container">$\phi^4$</span> in <span class="math-container">$D=3+1$</span> dimensions. Whether you use zeta or dimensional regularisation, you'll get a pole that needs to be compensated by renormalising the coupling constant <span class="math-container">$\lambda$</span>. The "tameness" of the Casimir force comes from the fact that the parameters of the theory don't need to be renormalised, which is rather exceptional in QFT.</p> <p>Hope this helps.</p>
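<p>A small numerical illustration of the regulator discussion above, using mpmath and the exponential (Abel) regulator with <span class="math-container">$a=1$</span>: subtracting the divergent <span class="math-container">$N^2$</span> piece from the smoothed sum of <span class="math-container">$1+2+3+\dots$</span> leaves a constant that approaches <span class="math-container">$\zeta(-1)=-1/12$</span>.</p> <pre><code class="lang-python">from mpmath import mp, exp, zeta

mp.dps = 30
print(zeta(-1))                        # -0.0833... = -1/12

def smoothed_sum(N):
    # closed form of the sum over n of n*exp(-n/N), i.e. Li_{-1}(z) with z = exp(-1/N)
    q = exp(-1/mp.mpf(N))
    return q/(1 - q)**2

for N in (10, 100, 1000):
    print(smoothed_sum(N) - N**2)      # approaches -1/12 as N grows
</code></pre>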
|
Physics
|
|general-relativity|black-holes|causality|singularities|kerr-metric|
|
What happens if $ a^2 > M^2 $ in Kerr metric?
|
<p>If the mass is in the form of a material body like the Earth, where the spin parameter is <span class="math-container">$\rm a=J c/(G M)=890 M$</span>, it would have to lose some angular momentum until <span class="math-container">$\rm a<M$</span> before it could collapse into a black hole; otherwise it can't, since the centrifugal repulsion is larger than the centripetal attraction.</p> <p>If the mass is in the form of a singularity, that would be a <a href="https://notizblock.yukterez.net/viewtopic.php?p=595#p595" rel="nofollow noreferrer">naked singularity</a>, and if it is an <a href="https://arxiv.org/abs/hep-th/0507109" rel="nofollow noreferrer">elementary particle</a> like the electron, where <span class="math-container">$\rm a$</span> is many orders of magnitude larger than <span class="math-container">$\rm M$</span> and <span class="math-container">$\rm r=0$</span> as well, the close field should be <a href="https://physics.stackexchange.com/search?q=user%3A24093+reissner+nordstr%C3%B6m+repulsion">gravitationally repulsive</a>.</p>
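<p>For a rough feel for the numbers quoted here (SI input values are approximate), one can compare the spin parameter in mass units, <span class="math-container">$\rm a = Jc/(GM)$</span>, with <span class="math-container">$\rm M$</span> itself:</p> <pre><code class="lang-python">G, c = 6.674e-11, 2.998e8

# Earth: J ~ 7.07e33 kg m^2/s, M ~ 5.97e24 kg
J_E, M_E = 7.07e33, 5.97e24
print(J_E*c/(G*M_E) / M_E)        # ~9e2, i.e. a is roughly 890 M, as quoted

# Electron: J = hbar/2, M ~ 9.11e-31 kg
hbar = 1.055e-34
J_el, M_el = hbar/2, 9.11e-31
print(J_el*c/(G*M_el) / M_el)     # ~3e44, "many orders of magnitude" indeed
</code></pre>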
|
Physics
|
|everyday-life|material-science|glass|
|
Does glass slowly (invisibly) degrade until it breaks?
|
<p>As @Cleonis states in his answer, glass has an amorphous molecular structure. It is often described as a supercooled liquid that has undergone a glass transition, though whether it should be called a liquid or an amorphous solid is debated. Some references say that it can deform over time due to creep, usually over centuries: <a href="https://www.iq.usp.br/mralcant/About_Rheo.html" rel="nofollow noreferrer">Rheology - flow of matter</a> (although others disagree: <a href="https://math.ucr.edu/home/baez/physics/General/Glass/glass.html" rel="nofollow noreferrer">link</a>). It may, however, very slowly crystallize. Unless you are extremely old, I doubt that either of these happened to your tumbler.</p> <p>The surface of glass is very important for it to maintain its integrity. To cut a sheet of plate glass, first a line is scored across the surface. Slight pressure and a small tap are enough to cause a crack to propagate along the score line. If you have ever had a crack in your windshield you will no doubt have noticed how the crack grows under thermal and other stresses.</p> <p>When glass is fabricated there are always internal stresses. One of the most interesting examples is Prince Rupert’s drops (<a href="https://en.wikipedia.org/wiki/Prince_Rupert%27s_drop" rel="nofollow noreferrer">wikipedia Prince Rupert's drops</a>). These are made by dropping molten glass into water and are teardrop-shaped. The bulb of the drop is extremely strong - you can hit it with a hammer and it won’t shatter, but even a small deformation of the tail will cause explosive disintegration into powder! <a href="https://www.youtube.com/watch?v=xe-f4gokRBs" rel="nofollow noreferrer">youtube video</a></p> <p>Most probably what happened to your tumbler is a combination of growing internal fractures, surface scoring and chance (i.e. bad luck).</p>
|
Physics
|
|pressure|material-science|elasticity|continuum-mechanics|stress-strain|
|
What is a general definition of bulk modulus?
|
<p>If <span class="math-container">$V_f \approx V_i$</span>, then <span class="math-container">$$\ln\left(\frac{V_f}{V_i}\right) = \ln \left( 1 + \frac{V_f - V_i}{V_i} \right) \approx \frac{V_f - V_i}{V_i}.$$</span>So the two formulas are functionally equivalent so long as the volume does not change significantly during the process.</p> <p>I suspect that your equation (1) implicitly assumes that <span class="math-container">$\Delta V = V_f - V_i$</span> is small compared to <span class="math-container">$V_i$</span> (or <span class="math-container">$V_f$</span>.) The equation (2) is the more general relation.</p> <p>I should also note that the bulk modulus is not necessarily constant with respect to volume (or pressure), so the integration you have written following (2) is not necessarily correct. For example, for an ideal gas <span class="math-container">$B = P = NkT/V$</span>, which is very obviously dependent on <span class="math-container">$V$</span> or on <span class="math-container">$P$</span>.</p>
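<p>A one-line sympy check of that last remark (isothermal ideal gas, so <span class="math-container">$P = NkT/V$</span> with <span class="math-container">$T$</span> held fixed):</p> <pre><code class="lang-python">import sympy as sp

V, N, k, T = sp.symbols('V N k T', positive=True)
P = N*k*T/V
B = -V*sp.diff(P, V)          # bulk modulus B = -V dP/dV at constant T
print(sp.simplify(B - P))     # 0, i.e. B = P = NkT/V, clearly not a constant
</code></pre>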
|
Physics
|
|newtonian-mechanics|inertial-frames|galilean-relativity|
|
What is the exact meaning of Galileo's principle of relativity?
|
<p>Galileo's <a href="https://simple.wikipedia.org/wiki/Principle_of_relativity" rel="noreferrer">principle of relativity</a> states: "It is impossible by mechanical means to say whether we are moving or staying at rest". This predates but underpins Newton's laws of motion. It states that the basic principles governing the motion of objects (which would later be formalized in Newton's laws) apply equally in all inertial frames (frames that are either at rest or move with a constant velocity).</p> <p>Galileo did not specifically mention Newton's laws, conservation laws, or the formulations of mechanics by Lagrange and Hamilton as he preceded all of these developments, however, his principle implies that these fundamental laws of mechanics are invariant across different inertial frames.</p> <p>This principle is significant because it introduces the idea that the laws governing physical phenomena are consistent and universal, regardless of the observer's state of motion, laying the groundwork for classical mechanics and influencing future developments in physics, including Einstein's theories of relativity.</p>
|
Physics
|
|quantum-mechanics|photons|electrons|antimatter|
|
Can it be disproven that an electron is a wave packet of photons?
|
<p>As mentioned in the comments, photons are neutral, so no collection of photons can have charge <span class="math-container">$-1$</span>; moreover, they are spin 1, so no collection of photons can have spin <span class="math-container">$\frac 1 2$</span>. Meanwhile, equal numbers of electrons and positrons have zero charge and integer spin.</p> <p>If electrons were collections of photons, the antiparticle of the electron would be an electron, since photons are their own antiparticles; but as we know, it is the positron. Moreover, if an electron were made of photons and the positron were made from a charge-conjugate state of that, then it seems that</p> <p><span class="math-container">$$ e^+e^-\rightarrow 2\gamma $$</span> <span class="math-container">$$ e^+e^-\rightarrow 3\gamma $$</span> <span class="math-container">$$ e^+e^-\rightarrow 4\gamma $$</span></p> <p>...and so on...</p> <p>would be hard to explain, especially when QED explains the ratios perfectly.</p> <p>Finally, the complaint that this reaction exists seems to imply photons can't be created... but that is how they couple to charge.</p>
|
Physics
|
|photons|electrons|atoms|
|
How do electrons absorb photons?
|
<p>Imagine you have two clouds of charge, like two spheres with uniform charge distributions where the shape of each cloud is fixed. Also suppose the clouds can pass through each other. The ground state is for the two charge clouds to be perfectly overlapped. In this case there will be no <span class="math-container">$E$</span> field and certainly no radiation. Now manually displace one of the charges. This will introduce electrostatic energy into the system. When you release the charge cloud it will move towards the other charge cloud. When it overlaps it will still have velocity and it will move past the equilibrium point. The charge clouds will oscillate in position with respect to each other. If this were the whole story the charges would oscillate forever like a mass on a frictionless spring.</p> <p>However, the oscillation of the charges causes fluctuations in the electromagnetic field. These fluctuations carry away energy in the form of radiation. It costs kinetic energy for this radiation to be created, so the charge oscillation slows down. Eventually all the energy is lost and the charge clouds overlap again. The system has decayed to the ground state by emitting electromagnetic radiation/energy.</p> <p>Now play this picture in reverse and you have the story of how the charge cloud can absorb electromagnetic radiation. Basically the radiation illuminating the charge clouds imparts kinetic and electrostatic energy to the two clouds of charge; this comes at the cost of reducing the total energy of the electromagnetic field. Hence radiation energy is converted into kinetic and electrostatic energy (the energy of the bound state).</p>
|
Physics
|
|homework-and-exercises|newtonian-mechanics|energy|work|free-body-diagram|
|
Final velocity of block pulled along a surface using work and energy
|
<p>You need to know the net horizontal force acting on the block, i.e. the difference between the horizontal component of the applied <span class="math-container">$100\,\rm N$</span> force and the opposing frictional force. So let's find the magnitude of the normal force. Since the applied force has an upward component, the normal force is <span class="math-container">$\vec N=(mg-100\sin(30^\circ))\mathbf{\hat j}$</span>, and the frictional force is given by: <span class="math-container">$\vec f_k=-\mu_k|\vec N|\mathbf{\hat i}$</span>. So applying Newton's second law we have: <span class="math-container">$$\sum\limits_{i}\vec F_i=100\cos(30^\circ)\mathbf{\hat i}-\mu_k(mg-100\sin(30^\circ))\mathbf{\hat i}=m\vec a.$$</span></p> <p>So by the work-energy theorem we have: <span class="math-container">$$W=\Delta K\implies W={1\over 2}m(v_f^2-v_i^2)={1\over 2}m(v_f^2-64\,{\rm m^2/s^2}).$$</span> From the definition of work (net force times displacement along the motion): <span class="math-container">$$W=F_{\rm net}\,d=\big(100\cos(30^\circ)-\mu_k(mg-100\sin(30^\circ))\big)(25\,{\rm m}-15\,{\rm m}).$$</span> Now, combine these two equations and solve for <span class="math-container">$v_f$</span>.</p>
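<p>If it helps, here is the final combine-and-solve step as a short numeric sketch; the mass and friction coefficient below are made-up placeholders, so substitute the values given in your problem:</p> <pre><code class="lang-python">import numpy as np

m, mu_k, g = 10.0, 0.20, 9.81          # assumed placeholder values
F, theta = 100.0, np.radians(30.0)
v_i, d = 8.0, 25.0 - 15.0              # v_i^2 = 64 m^2/s^2, displacement 10 m

N = m*g - F*np.sin(theta)              # normal force, reduced by the upward pull
W = (F*np.cos(theta) - mu_k*N)*d       # net work over the displacement
v_f = np.sqrt(v_i**2 + 2*W/m)
print(round(v_f, 2))                   # final speed in m/s
</code></pre>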
|
Physics
|
|electromagnetism|magnetic-fields|electric-fields|electromagnetic-induction|
|
Inducing electric field vs Inducing magnetic field
|
<p>An electric motor converts electrical energy to magnetic fields and from that produces mechanical energy. An electric generator uses mechanical energy to move a coil through a magnetic field, which produces electrical energy.</p> <p>Which one is "easier" to do? Neither. At least in a crude design, they're both the same machinery, just used in the opposite way. Any differences of efficiency or "easiness" in more sophisticated designs is a matter of engineering, not physics.</p> <p>Similarly you can use a loop antenna as a receiver (converting magnetic energy to electrical) or as a transmitter (converting electrical energy to magnetic). Neither way is easier, they're just different ways to drive the same physical structure.</p>
|
Physics
|
|acoustics|biology|noise|perception|
|
Why and how does white noise cancel other background noises?
|
<h2>Introduction</h2> <p>First of all, let me state that the clip you provided does not contain <em>white noise</em>, but some kind of filtered noise. I haven’t analysed it to conclude on the “colour” of the noise but it is definitely <strong>not</strong> white.</p> <h2>Clarifying the problem</h2> <p>Now, let me clarify a bit what the “problem” is here. White noise rarely (if ever) cancels out other noise. White noise is more often than not (<em>always</em> would be a very good approximation to how often this statement is true) incoherent with other noise. The summation of two such (incoherent) sources results in the summation of their energies. Since the energy of travelling waves cannot be negative, when two sources such as the noise clip you provide and any other background noise add up, the total energy will only rise. This is definitely not <em>cancellation</em> of the background noise. There are ways to cancel noise with “noise”; this is used in active noise control in headphones for example (or cars and planes nowadays), but it has its limitations and is not a trivial task.</p> <p>So, if not cancellation, what? If the playback of the noise clip seems to be suppressing the already present background noise, then what is actually happening?</p> <h2>Masking effect</h2> <p>As has already been commented, this has to do with the <em>perception</em> of sound, which is <strong>subjective</strong>. The field of acoustics dealing with the perception of sound is <em>psychoacoustics</em> and the relevant phenomenon/effect here is <strong>frequency masking</strong>. A sound can <em>mask</em> another sound if their frequencies and amplitudes have specific relations. I will not go into detail on the masking effect but you can find more information on the <a href="https://en.wikipedia.org/wiki/Auditory_masking" rel="nofollow noreferrer">Wikipedia auditory masking page</a> and the <a href="https://en.wikipedia.org/wiki/Sound_masking" rel="nofollow noreferrer">Wikipedia sound masking page</a>.</p> <p>So, what really happens is that you add some noise which masks the offending background noise, not cancelling it but making it imperceptible. Low frequencies can mask higher frequencies more efficiently than high frequencies can mask the low ones. This is most probably one of the reasons that the clip you provided has much energy in the low frequencies, which makes it resemble <em>pink noise</em> more than <em>white noise</em>.</p>
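<p>A tiny numerical illustration of the "incoherent sources add in energy" point made above:</p> <pre><code class="lang-python"># The power (variance) of the sum of two independent noise signals is the sum of
# their powers; nothing cancels.
import numpy as np

rng = np.random.default_rng(1)
n1 = rng.normal(0, 1.0, 100_000)   # one noise source
n2 = rng.normal(0, 0.5, 100_000)   # an independent background noise

print(np.var(n1) + np.var(n2))     # ~1.25
print(np.var(n1 + n2))             # also ~1.25: the total energy only rises
</code></pre>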
|
Physics
|
|newtonian-mechanics|rotational-kinematics|
|
Is the angular acceleration and the resultant acceleration the same thing?
|
<p>If we have a centre of rotation <span class="math-container">$O$</span> then we can divide the velocity <span class="math-container">$\vec v$</span> of a particle into a tangential component <span class="math-container">$v_{\theta}$</span> and a radial component <span class="math-container">$v_{r}$</span> so that</p> <p><span class="math-container">$\vec v = v_r \hat r + v_{\theta} \hat \theta$</span></p> <p>where <span class="math-container">$\hat r$</span> and <span class="math-container">$\hat \theta$</span> are unit radial and tangent vectors (with respect to <span class="math-container">$O$</span>). If the path of the particle is a circle about <span class="math-container">$O$</span> then <span class="math-container">$v_r=0$</span> and we have</p> <p><span class="math-container">$\vec v = v_{\theta} \hat \theta$</span></p> <p>where <span class="math-container">$v_{\theta}$</span> may vary with time. The acceleration of the particle is then</p> <p><span class="math-container">$\displaystyle \frac {d \vec v} {dt} = \frac {d v_{\theta}}{dt} \hat \theta + v_{\theta} \frac {d \hat \theta} {dt} = \frac {d v_{\theta}}{dt} \hat \theta - \frac {v_{\theta}^2}{r} \hat r $</span></p> <p>The first term <span class="math-container">$\frac {d v_{\theta}}{dt} \hat \theta$</span> is the tangential acceleration and the second term <span class="math-container">$- \frac {v_{\theta}^2}{r} \hat r$</span> is the radial or centripetal acceleration. The angular acceleration is the tangential acceleration divided by <span class="math-container">$r$</span>, so its magnitude is <span class="math-container">$\frac 1 r \frac {d v_{\theta}}{dt}$</span>; as a vector it points along the rotation axis, not along <span class="math-container">$\hat \theta$</span>.</p>
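<p>If you want to verify the decomposition symbolically, here is a short sympy check for an arbitrary angular history <span class="math-container">$\theta(t)$</span> on a circle of radius <span class="math-container">$R$</span> (nothing beyond the parametrization above is assumed):</p> <pre><code>import sympy as sp

t, R = sp.symbols('t R', positive=True)
theta = sp.Function('theta')(t)

# Position on a circle of radius R, with an arbitrary angle theta(t)
x = R * sp.cos(theta)
y = R * sp.sin(theta)

r_hat = sp.Matrix([sp.cos(theta), sp.sin(theta)])    # unit radial vector
th_hat = sp.Matrix([-sp.sin(theta), sp.cos(theta)])  # unit tangential vector

a = sp.Matrix([sp.diff(x, t, 2), sp.diff(y, t, 2)])  # acceleration vector
v_theta = R * sp.diff(theta, t)                      # tangential speed

expected = sp.diff(v_theta, t) * th_hat - v_theta**2 / R * r_hat
print(sp.simplify(a - expected))                     # Matrix([[0], [0]])
</code></pre>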
|
Physics
|
|electrostatics|electric-fields|conductors|
|
Time taken by field inside a Conductor to become zero
|
<p>Since eventually it will reach an equilibrium where the field inside is zero, we can just look at the deviation from this equilibrium: how fast will this become zero? This means you split off the solution for the steady state and use linearity.</p> <p>That leaves you with a situation without an external field (which is already in the steady state) and you have some charge distribution on the sphere which has to go to zero. Describe it using spherical harmonics, <span class="math-container">$Y_{lm}$</span>'s, and you will see that the lowest possible mode, with <span class="math-container">$l=1$</span>, is the slowest. That is essentially the mode with positive charge on the upper half and negative charge on the bottom half of the sphere. The finer ripples will decay faster.</p> <p>I leave the mathematics to you, but for the slowest mode a first estimate will of course be found by just using the RC description: the capacitance between top and bottom of the sphere and the resistance between top and bottom. Those C and R must be in the order of <span class="math-container">$\varepsilon_0 r$</span> and <span class="math-container">$\rho$</span> respectively, where <span class="math-container">$r$</span> is the sphere's radius and <span class="math-container">$\rho$</span> the resistance per square of its surface. So the time constant will be in the order of <span class="math-container">$\rho \ \varepsilon_0 r$</span>.</p>
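<p>As a rough numerical sketch of that order-of-magnitude estimate (the copper-shell geometry here is an assumption chosen purely for illustration, not something implied by the question):</p> <pre><code># Order-of-magnitude estimate of tau ~ rho_square * epsilon_0 * r.
# The copper-shell geometry below is an assumption chosen for illustration.
eps0 = 8.854e-12          # F/m
rho_bulk = 1.7e-8         # Ohm*m, approximate bulk resistivity of copper
thickness = 1e-3          # m, assumed shell wall thickness
r = 0.1                   # m, sphere radius

rho_square = rho_bulk / thickness   # sheet resistance in Ohms per square
tau = rho_square * eps0 * r
print(f"tau ~ {tau:.1e} s")         # ~1.5e-17 s for these numbers
</code></pre>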
|
Physics
|
|reflection|radiation|diffraction|solar-cells|renewable-energy|
|
Total flux is bigger than radiation flux, error?
|
<p>Your first equation seems to be assuming that the sun is shining on something unobstructed, but that there's also e.g. a window reflecting to it as well, plus the diffracted term. As you can imagine, in that case you <em>could</em> have more total flux hitting that object than just the direct flux, because the reflected light adds to the direct light. If you're talking about solar panels, assuming that <span class="math-container">$I_\text{direct}$</span> is from unobstructed sunlight would make sense, since you're (hopefully!) putting them where they have an unobstructed view of the sun.</p> <p>That equation wouldn't work for sunlight shining on something on the other side of the window, since it doesn't take into account that the direct radiation is reduced some by the window. You would instead use something like</p> <p><span class="math-container">$$I_\text{direct} = T ⋅ I_\text{radiated},$$</span></p> <p>along with</p> <p><span class="math-container">$$I_\text{diffracted} = d ⋅ I_\text{radiated}$$</span></p> <p>and</p> <p><span class="math-container">$$I_\text{reflected} = f(d,r,I_\text{radiated})$$</span></p> <p>where <span class="math-container">$T$</span> is the transmission coefficient of the window. <span class="math-container">$T$</span> will be less than 1.</p>
|
Physics
|
|quantum-mechanics|operators|harmonic-oscillator|representation-theory|lie-algebra|
|
Are the SHO's ladder operators induced from a Lie group action?
|
<p>Ch 10 of the late <a href="https://www.google.com/books/edition/Symmetry_Groups_and_Their_Applications/rW7-eLDdVm0C?hl=en&gbpv=0" rel="nofollow noreferrer">W Miller's legendary book <em>Symmetry Groups and their Applications</em></a> tells you more than you'd ever want to know about this <em><strong>oscillator group</strong></em>, quite different from SO(3), with which it shares a similar ladder structure.</p> <p>The oscillators provide the "defining" representation of it. For its four generators, <span class="math-container">$N=a^\dagger a$</span>, <span class="math-container">$J^+=a^\dagger$</span>, <span class="math-container">$J^-= a$</span>, <span class="math-container">$E=\mathbb I$</span> (central) <span class="math-container">$[E,\bullet]=0$</span>, so <span class="math-container">$$ [N, J^+ ]= J^+,\\ [N,J^-]=-J^-,\\ [J^+,J^-]=-E,$$</span> where the last commutation relation makes all the difference! It's really the Heisenberg algebra generating the eponymous <a href="https://en.wikipedia.org/wiki/Heisenberg_group#The_three-dimensional_case" rel="nofollow noreferrer">group</a>, and from its trace you see that <em><strong>all faithful irreducible representations of it are infinite-dimensional</strong></em> like the one you saw.</p> <p>The Casimir is also <em>radically different</em>, <span class="math-container">$$ C= J^+ J^- - EN, $$</span> and vanishes for the oscillator representation above.</p> <p>The corresponding (solvable, but not nilpotent) oscillator group is <span class="math-container">${\cal G}(0,1)$</span>; it can be written in the basis for its generic element <span class="math-container">$$ {\mathbb g}=e^{aE}e^{bJ^+}e^{cJ^-}e^{\tau N }, $$</span> which has simple, elegant, group element composition properties, just like its <a href="https://en.wikipedia.org/wiki/Heisenberg_group#The_three-dimensional_case" rel="nofollow noreferrer">Heisenberg subgroup</a>, <em>τ=0</em>.</p> <p>Some like to realize it in 4×4 matrix notation, a <a href="https://math.stackexchange.com/questions/1952573/why-no-faithful-finite-dimensional-irreducible-representation-of-the-heisenberg">reducible</a> 4d representation with <em>C=0</em>, <span class="math-container">$$\begin{pmatrix} 1 &ce^\tau &a &\tau \\ 0&e^\tau& b& 0\\ 0&0&1&0\\ 0&0&0&1\\ \end{pmatrix}; $$</span> while some do the same for the algebra with the realization <span class="math-container">$$ N= \lambda+ z\frac{d}{dz}, ~~~~~~J^+=\mu z , ~~~ J^-={\xi\over z} +{d\over dz} , ~~~~ E=\mu~~\leadsto \\ C= \mu(\xi-\lambda). $$</span></p> <hr /> <p><em><strong>Geeky Details</strong></em></p> <p>The above 4d representation follows directly from <span class="math-container">$$ \exp(a\begin{pmatrix} 0 & 0 &1 &0 \\ 0&0& 0& 0\\ 0&0&0&0\\ 0&0&0&0\\ \end{pmatrix})~ \exp(b\begin{pmatrix} 0 & 0 &0&0\\ 0&0& 1& 0\\ 0&0&0&0\\ 0&0&0&0\\ \end{pmatrix}) \times \\ \exp(c\begin{pmatrix} 0 &1 &0 &0 \\ 0&0& 0& 0\\ 0&0&0&0\\ 0&0&0&0\\ \end{pmatrix}) ~\exp(\tau \begin{pmatrix} 0 &0 &0 &1 \\ 0&1& 0& 0\\ 0&0&0&0\\ 0&0&0&0\\ \end{pmatrix}) ~. $$</span> This oscillator group is <span class="math-container">${\cal G}(0,1)$</span>.</p> <p>Its partner, <span class="math-container">${\cal G}(1,0)$</span>, has the Lie algebra <em>gl(2)</em> which inspired you, and contains <em>so(3)</em>. Contrast its realization, <span class="math-container">$$ J^3= \lambda+ z\frac{d}{dz}, ~~~~~~J^+=(2\lambda+\xi) z+ z^2 \frac{d}{dz} , ~~~ J^-={\xi\over z} -{d\over dz} , \\ E=\mu~ $$</span> to the above!</p>
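<p>Here is a quick sympy check of the 4×4 realization above, confirming the commutation relations and the vanishing of the Casimir:</p> <pre><code>import sympy as sp

def e(i, j):
    """4x4 matrix unit with a 1 in row i, column j (1-indexed)."""
    m = sp.zeros(4, 4)
    m[i - 1, j - 1] = 1
    return m

E  = e(1, 3)            # central element
Jp = e(2, 3)            # raising generator
Jm = e(1, 2)            # lowering generator
N  = e(2, 2) + e(1, 4)  # number-like generator

comm = lambda A, B: A * B - B * A
print(comm(N, Jp) == Jp)                  # True
print(comm(N, Jm) == -Jm)                 # True
print(comm(Jp, Jm) == -E)                 # True
print(all(comm(E, X) == sp.zeros(4, 4) for X in (N, Jp, Jm)))  # True: E is central
print(Jp * Jm - E * N == sp.zeros(4, 4))  # True: the Casimir C = J+J- - EN vanishes
</code></pre>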
|
Physics
|
|optics|statistical-mechanics|scattering|correlation-functions|imaging|
|
Correlation time of speckle pattern
|
<p>The correlation time of the speckle pattern from a laser beam scattered by a rotating ground glass disk is derived in [<a href="https://doi.org/10.1364/JOSA.61.001301" rel="nofollow noreferrer">1</a>]. If the laser is focused on the ground glass, the first-order correlation simplifies to <span class="math-container">$$\left| g^{(1)}(\tau) \right| = \exp{\left( -\frac{1}{2} {v_\text{rot}}^2 {\sigma_k}^2 \tau^2 \right)},$$</span> where <span class="math-container">$v_\text{rot}$</span> is the velocity of the disk at the position of the laser spot and <span class="math-container">$\sigma_k$</span> is the standard deviation of <span class="math-container">$k$</span>-vectors contained in the (<a href="https://en.wikipedia.org/wiki/Gaussian_beam" rel="nofollow noreferrer">Gaussian</a>) laser beam. It is related to the beam divergence <span class="math-container">$\theta = 2 \sigma_k \left/ \left| \vec{k} \right| \right.$</span>. Using the <a href="https://doi.org/10.1140/epjd/s10053-022-00558-5" rel="nofollow noreferrer">Siegert relation</a>, one obtains the second-order correlation function <span class="math-container">$$g^{(2)}(\tau) = 1 + \left| g^{(1)}(\tau) \right|^2 = 1 + \exp{\left( -{v_\text{rot}}^2 {\sigma_k}^2 \tau^2 \right)} = 1 + \exp{\left( - \left( \frac{\tau}{\tau_c} \right)^2 \right)}$$</span> with a correlation time <span class="math-container">$\tau_c = \frac{1}{v_\text{rot} \sigma_k} = \frac{w_0}{v_\text{rot}}$</span>, where <span class="math-container">$w_0$</span> is the beam waist of the laser. <img src="https://i.stack.imgur.com/kI9vx.png" width="652"></p> <p>So, if you need a longer coherence time in your experiment, you can rotate the disk slower or use a less strongly focused laser beam (increasing <span class="math-container">$w_0$</span>). If you need even longer coherence times, you can also hold the disk still, acquire one data point, move the disk a bit, acquire the next data point, and so on. This is what was done in <a href="https://nbn-resolving.org/urn:nbn:de:bvb:29-opus4-103280" rel="nofollow noreferrer">this PhD thesis</a> (Ch. 3.1.1).</p>
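<p>As a small numerical illustration of these formulas (the beam waist and disk speed below are assumed values, not taken from any particular experiment):</p> <pre><code>import numpy as np

# Illustrative numbers (assumptions, not from any particular setup):
w0 = 50e-6        # beam waist at the ground glass, 50 micrometres
v_rot = 2.0       # disk surface speed at the laser spot, m/s

tau_c = w0 / v_rot                    # correlation time, here ~25 microseconds
tau = np.linspace(0, 4 * tau_c, 200)
g2 = 1 + np.exp(-(tau / tau_c) ** 2)  # Siegert relation with the Gaussian |g1|

print(f"tau_c = {tau_c:.2e} s, g2(0) = {g2[0]:.2f}, g2(4 tau_c) = {g2[-1]:.2f}")
</code></pre>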
|
Physics
|
|electricity|charge|electrons|electric-current|conventions|
|
What is actually electric current?
|
<blockquote> <p>The electron flows in the wire and then the current flows in the opposite direction of it.</p> </blockquote> <p>More precisely: the electrons flow in the wire and the current density points in the opposite direction.</p> <p>While it is common to say that current flows, it is somewhat inaccurate. Since you are struggling with this, it may help to say things precisely.</p> <p>A current consists of some moving charged particles. Those particles are called charge carriers. The charge carriers move, so that is a flow. The current density is a vector defined by <span class="math-container">$\vec j=\rho \vec v$</span> where <span class="math-container">$\rho$</span> is the charge density of the charge carriers and <span class="math-container">$\vec v$</span> is their velocity.</p> <blockquote> <p>What is that thing that is flowing in the opposite direction of the electrons?</p> </blockquote> <p>Only the electrons are flowing in a metal. The current density points in the opposite direction of the flow of electrons.</p> <blockquote> <p>What is that thing that we are feeling as shock and making electric things run?</p> </blockquote> <p>That is current density. What exactly is flowing is irrelevant. In a wire the charge carriers (the thing that flows) are negative. In your tissues there are both positive and negative charge carriers. So the charge carriers can be negative, positive, or both. It doesn’t matter.</p>
|
Physics
|
|thermodynamics|physical-chemistry|hydrogen|
|
Hydrogen under pressure and high temperatere released energy: what is this process?
|
<p>If hydrogen lost mass under high pressure and high temperature, the process is most likely fusion, which is very hard to achieve in significant amounts on Earth but happens routinely in stars. Energy is released because the two separate hydrogen nuclei (protons) have a higher total mass than the deuterium nucleus they form, in which one of the protons has been converted to a neutron; the bound state is more stable, i.e. it sits at a lower energy.</p> <p>Systems tend toward the lowest available energy state. The high temperature and pressure give the nuclei large velocities, which makes quantum tunnelling through the Coulomb barrier far more probable, and in some cases lets the nuclei overcome the barrier outright. These conditions are what make the transition to the lower-energy (fused) state possible, which is why energy is released under high temperature and pressure but not under ordinary conditions.</p> <p>To calculate the energy released, subtract the final mass from the initial mass to find the mass that was lost, convert it to kilograms, and multiply by the speed of light squared to get the energy in joules.</p>
|
Physics
|
|electrostatics|electric-fields|potential|conductors|method-of-images|
|
Potential inside metal sphere in field of external charge
|
<p>At any point <span class="math-container">$P$</span> inside the sphere, let the superposed electric field be <span class="math-container">$$\vec{E_P} = \vec{E}_{PQ} + \vec{E}_{PI}$$</span> where<br /> <span class="math-container">$\vec{E}_{PQ}$</span> means electric field at <span class="math-container">$P$</span> due to <span class="math-container">$Q$</span> and<br /> <span class="math-container">$\vec{E}_{PI}$</span> means electric field at <span class="math-container">$P$</span> due to all of the induced charges on the sphere.</p> <p>Then, <span class="math-container">\begin{align*} V_P = - \int_\infty^P \vec{E_P} \cdot d\vec{l} &= - \int_\infty^P \vec{E}_{PQ} \cdot d\vec{l} - \int_\infty^P \vec{E}_{PI} \cdot d\vec{l} \\ &= V_{PQ} + V_{PI} \end{align*}</span> where<br /> <span class="math-container">$V_{PQ}$</span> means potential at <span class="math-container">$P$</span> due to <span class="math-container">$Q$</span> and<br /> <span class="math-container">$V_{PI}$</span> means <strong>sum of potential</strong> at <span class="math-container">$P$</span> due to all of the induced charges on the sphere.</p> <p>For point <span class="math-container">$C$</span> (the center of sphere) <span class="math-container">\begin{align*} V_{CQ} &= \frac{kq}{2r} \\ V_{CI} &= \sum_i^{\text{all induced charges on the sphere}} \frac{kq_i}{r} = 0 \\ V_C &= V_{CQ} + V_{CI}\\ &= \frac{kq}{2r} \end{align*}</span> Notice above that <span class="math-container">$V_{CI}$</span> is zero due to the assumption that the sphere is neutral to start with, so that the sum of all induced <span class="math-container">$q_i$</span> should be zero (conservation of charge).</p> <p>Now consider point P as located in OP's diagram.<br /> Since potentials of any two points inside the sphere are the same, and since <span class="math-container">$P$</span> and <span class="math-container">$C$</span> are both in the sphere, <span class="math-container">\begin{align*} V_P &= V_C \\ V_{PQ} + V_{PI} &= \frac{kq}{2r} \\ \frac{kq}{3r} + V_{PI} &= \frac{kq}{2r} \\ V_{PI} &= \frac{kq}{6r} \end{align*}</span></p>
|
Physics
|
|special-relativity|reference-frames|coordinate-systems|inertial-frames|
|
Scalar multiple of inertial frame
|
<p>You are mixing frames and coordinate vectors, and that is leading to some confusion.</p> <p>In a frame, vector quantities like position and velocity have some vector value. Changing frames may change those vectors (such as how acceleration is changed by transforming into a rotating frame).</p> <p>When you speak of the coordinates changing from <span class="math-container">$(t, x_1, x_2, x_3)$</span> to <span class="math-container">$(t, 2x_1, 2x_2, 2x_3)$</span>, you are using coordinate vectors. You construct a coordinate vector by starting with a frame and then defining a basis -- 3 vectors with which you are "measuring" all vectors. Any vector is thus converted to a coordinate vector by taking the dot products with all 3 vectors. The result is a triple (a quad, if you count time) of real numbers.</p> <p>At the deepest level, your two "frames" are actually the same frame, with different trios of basis vectors. This means you may measure things differently, but fundamentally they are the same vector.</p> <p>This is no more poignant than the fact that I can measure something as 1 inch long <em>or</em> 2.54 cm long. Fundamentally, the vector from one end of the object to the other didn't change. All I did was change my basis vectors, scaling by "units."</p>
|
Physics
|
|general-relativity|gravity|spacetime|quantum-gravity|carrier-particles|
|
What is the current most widely-accepted explanation of gravity?
|
<p>The current best theory of gravity is Einstein's general theory of relativity, which explains gravity as an effect of the curvature of spacetime by energy, momentum, stress, and pressure. Over the past century many alternative theories have been proposed, and many experiments and observations conducted to test these alternative theories and general relativity, and so far Einstein's theory has passed all challenges and remains the simplest theory which is in accord with all observations.</p> <p>Having said that, there are reasons to believe that eventually another theory will supplant general relativity (GR). That is because GR is not a quantum theory, and all other aspects of the universe (including the existence of particles and the non-gravitational forces between them) are explained by quantum theories, specifically quantum field theories. Developing a quantum theory of gravity is an active area of research, but testing such theories is very difficult because gravity is a very weak force and quantum gravitational effects are expected to be observed only in extreme situations or at very tiny levels.</p> <p>Most (but not all) quantum theories of gravity predict the existence of a spin-2 particle called the "graviton", which would be an excitation of the quantum gravitational field in much the same way as the photon is an excitation of the electromagnetic field. Gravitons, like photons, are predicted to be massless and hence able to travel arbitrary distances at the speed of light. Just as the light from a distant star is able to travel vast distances to eventually interact with the rods and cones in your eye, so gravitons would be able to traverse interstellar space to interact with matter. (Technically gravitons would make up gravitational waves, and the force of gravity would be modelled by exchange of virtual gravitons, but in either case there would be no range limit.)</p>
|
Physics
|
|homework-and-exercises|newtonian-mechanics|energy-conservation|work|batteries|
|
Electric car battery weight: understand question
|
<blockquote> <ol> <li>How data about car's ability to reduce speed from 100 to 80 can help with battery weight? Is it relevant at all to the question?</li> </ol> </blockquote> <p>An electric car has to use the energy in its batteries to counteract the effect of air friction on the car. It has to do work according to <span class="math-container">$W=Fd$</span>. The little experiment where we let off the gas and observe how fast the car slows down can be used to calculate what the force of air friction is. (Well, we can get close. I'm guessing you're expected to assume a constant force at all speeds, which is not realistic. <a href="https://en.wikipedia.org/wiki/Drag_(physics)" rel="nofollow noreferrer">Drag</a> is better modeled as proportional to the square of velocity. However, given that the problem doesn't state the speed you intend to go 100km at, I'm guessing they're just hand-waving it as a constant force at all speeds.)</p> <blockquote> <ol start="2"> <li>I believe we should calculate energy consumed during the 100km drive to the destination (and it would be maybe 80% of battery capacity?). Thou, I actually do not get how electric car works if to one direction it uses 80% of battery, but back it uses only 20%, how this is even possible?</li> </ol> </blockquote> <p>You didn't quote the actual wording of the question, so there's no way we can be sure. However, if I venture a guess, it is likely they want to do the entire round trip with 20% battery remaining. Thus they need to travel 200km using 80% of the energy in the batteries.</p> <blockquote> <ol start="3"> <li>I understand that I have to calculate how many Li-ion cells I need, using given data about 1 cell. 3000 * 3.7/1000 = 11.1 wH</li> </ol> </blockquote> <p>This seems reasonable on the surface. Without actually seeing the question, I don't think we can suggest otherwise.</p>
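<p>To make the chain of reasoning concrete, here is a rough Python sketch. Since the original problem statement is not quoted, every value marked as an assumption below is a placeholder, not the actual data of the exercise:</p> <pre><code># Rough sketch of the calculation chain described above. All numbers marked
# "assumption" are placeholders -- the original problem data is not quoted here.
m_car = 1500.0        # kg, car mass -- assumption
dv = (100 - 80) / 3.6 # m/s, speed drop from 100 to 80 km/h
dt = 10.0             # s, time for that drop with no power -- assumption
F_drag = m_car * dv / dt          # treat drag as a constant force (as the problem seems to)

d_round_trip = 200e3              # m, 100 km out and 100 km back
W = F_drag * d_round_trip         # energy needed at the wheels, in J

usable_fraction = 0.8             # round trip must use only 80% of the battery
E_cell = 3.0 * 3.7 * 3600         # J per cell (3.0 Ah * 3.7 V = 11.1 Wh)
n_cells = W / (usable_fraction * E_cell)

m_cell = 0.045                    # kg per cell -- assumption
print(f"drag force ~ {F_drag:.0f} N, cells ~ {n_cells:.0f}, "
      f"battery mass ~ {n_cells * m_cell:.0f} kg")
</code></pre>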
|
Physics
|
|quantum-mechanics|hilbert-space|wavefunction|notation|quantum-states|
|
What is the distinction between a ket and a state in quantum mechanics?
|
<p>I think all other answers so far do not address the problem directly. The definition of the OP regarding kets is a quite common, though not the most used one, see e.g. references 1 and 2. So I don't think one should argue now which definition of "ket" is the right one (the question is ill-posed: it is just notation).</p> <p>The OP further uses the word "state" to refer to an element of the Hilbert space, which a priori is fine, but it does not correspond to the <strong>physical</strong> notion of the word (but to be fair, I think many intro texts use this notion, although one might add a normalization constraint). Others have commented on that, although it is not the primary question and actually not relevant here.</p> <p>The question asks about the difference between <span class="math-container">$\psi\in H$</span> and <span class="math-container">$|\psi\rangle\in L(\mathbb C,H)$</span>, based on a confusion and mixing of different notations and concepts.</p> <hr /> <p>To start, let me recap the basics which lead to the notation introduced in the question: For any vector <span class="math-container">$v\in H$</span> of a complex Hilbert space <span class="math-container">$H$</span>, we can define a map <span class="math-container">$|v\rangle: \mathbb C \to H$</span> by <span class="math-container">$|v\rangle: \lambda \mapsto \lambda v$</span>. In fact, the thus defined map: <span class="math-container">$\mathcal I: H\to L(\mathbb C, H)$</span>, with <span class="math-container">$\mathcal I: v\mapsto |v\rangle$</span> is a canonical isomorphism. This means, roughly speaking, that we can relate both objects <span class="math-container">$v$</span> and <span class="math-container">$|v\rangle$</span> in a natural way, without "loosing information". Further, with this definition, we can meaningfully speak of the adjoint of <span class="math-container">$|v\rangle$</span>, which turns out to be the well-known "bra", denoted by <span class="math-container">$\langle v|$</span>. As a last point, let me remark that it is exactly <strong>this</strong> notion of a "ket", which people refer to when saying things like "the bra is the adjoint of the ket" or so.</p> <hr /> <p>Let me now clear up the confusion in the question:</p> <p>With the definition of "ket" given in the question, we have to make sure to understand all other symbols accordingly. Most importantly, if we write <span class="math-container">$|\psi\rangle\langle\psi|$</span>, then we denote by this the composition of the "ket" and the "bra" map, i.e. <span class="math-container">$|\psi\rangle\langle\psi|:=|\psi\rangle \circ \langle \psi|: H \to H$</span>, with <span class="math-container">$$|\psi\rangle\langle\psi| (v)= |\psi\rangle\left(\langle\psi,v\rangle_H\right)= \langle\psi,v\rangle_H \,\psi \quad . \tag 1$$</span></p> <p>Hence, in this notation we have that <span class="math-container">$|\psi\rangle\langle\psi|$</span> acts as a hermitian one-dimensional projection on <span class="math-container">$\psi \in H$</span>, which often is also written as <span class="math-container">$P_\psi$</span> (which makes no reference to any ket-notation). 
The spectral theorem now allows to decompose every hermitian operator <span class="math-container">$A$</span> as</p> <p><span class="math-container">$$ A = \sum\limits_{j=1}^{\dim H} a_j\, P_{\psi_j}\overset{(1)}{=}\sum\limits_{j=1}^{\dim H} a_j\, |\psi_j\rangle\langle\psi_j| \quad ,\tag 2$$</span></p> <p>where the <span class="math-container">$\psi_j$</span> are (orthonormal) eigenvectors of <span class="math-container">$A$</span> with eigenvalues <span class="math-container">$a_j$</span>. The first equality is the spectral theorem, the second a consequence of our notation and definitions.</p> <p>We therefore obtain</p> <p><span class="math-container">$$A v\overset{(2)}{=}\sum\limits_{j=1}^{\dim H} a_j\,|\psi_j\rangle\langle\psi_j| (v)\overset{(1)}{=}\sum\limits_{j=1}^{\dim H} a_j\, \langle \psi_j,v\rangle_H\, \psi_j \tag 3 \quad ,$$</span></p> <p>and everything makes sense, i.e. the linear operator <span class="math-container">$A$</span> maps vectors of the Hilbert space <span class="math-container">$H$</span> to vectors again.</p> <hr /> <p>Now, the reason many people seem to be confused here is, I suppose, that in most physics books a "ket" is defined to be a vector in the Hilbert space. Put differently, then it is assumed that <span class="math-container">$|\psi\rangle \in H$</span> and symbols like <span class="math-container">$\psi$</span> have no meaning; it is the whole object, the ket, which is the vector. Then the operator <span class="math-container">$P_\psi$</span> can also be written with this notation as <span class="math-container">$P_\psi=|\psi\rangle\langle \psi|$</span>, where the symbol on the RHS is defined as the operator <span class="math-container">$|\psi\rangle\langle \psi|: H\to H $</span> with</p> <p><span class="math-container">$$ |\psi\rangle\langle \psi|: |v\rangle \mapsto \langle \psi|v\rangle |\psi\rangle \quad , \tag 4$$</span></p> <p>just as before (if you identify both ways to write the inner product); but to emphasize: The symbols here have a <strong>different</strong> meaning than before. It is thus important to not mix both notations and concepts, and stick to one convention from the beginning. Else one might end up in contradictions and ill-defined expression as in the question.</p> <p>You do not get or lose anything by using one or the other notation, a priori. For some purposes one might be more suitable than the other. On the other hand, many people (including myself) do not use any of these "bra-ket" notations at all.</p> <hr /> <p><strong>References:</strong></p> <ol> <li><em>Quantum Information Theory. J. M. Renes. De Gruyter. Appendix B2, p. 276.</em></li> <li><em>Categories for Quantum Theory: An Introduction. C. Heunen and J. Vicary. Section 0.2.4, p. 18.</em></li> </ol>
|
Physics
|
|general-relativity|spacetime|curvature|
|
How would objects move in a linear gravitational field?
|
<p>The uniform field in one direction is basically what we experience on the surface of Earth.</p> <p>The illustrations you see like the rubber sheet bent by a heavy object are just trying to give you a rough conceptual idea. They are not a correct description of General Relativity when you analyze it in detail.</p> <p>The thing to remember is, space is not just curved – <em>spacetime</em> is curved. And in weak gravity fields, nearly all the curvature is in time, not in space.</p> <p>Here is a good explanation that is scientifically accurate, with good visualizations as well. The channel itself is a great guide for introduction to moderately advanced physics concepts</p> <p><a href="https://youtu.be/F5PfjsPdBzg" rel="nofollow noreferrer">https://youtu.be/F5PfjsPdBzg</a></p> <p>First let's take a familiar case of space curvature: let there be two ships at the Equator, each facing due North. The captains of each ship use a laser ranging device to measure that the ships are 1 mile apart from each other, in the East West direction. Then each one sails due North from his starting location until they reach the Arctic Circle. They take their measurement again, and find that they are significantly closer than 1 mile, as if the ships had some "attraction" to one another. Neither one ever drifted East or West, and there were no forces acting on them. Their paths simply converged, because Earth's surface is curved, not flat.</p> <p>Now, an example of time curvature. Say I stood on a high tower, say the apex of the <a href="https://en.m.wikipedia.org/wiki/KRDK-TV_mast" rel="nofollow noreferrer">KRDK mast</a>, and aimed a laser directly at the ground below me, where you stand with a receiver. I fire a pulse, and then another pulse exactly 1 second later by my clock. You would receive those pulses, and the time between them by your clock would be <em>less than</em> 1 second. The paths of the light rays converged, because the spacetime around the Earth is curved. The effect would be about on the order of <span class="math-container">$10^{-13}$</span> seconds at that height, but a very real effect. This has been experimentally confirmed many times.</p>
|
Physics
|
|lagrangian-formalism|gauge-theory|quantum-electrodynamics|dimensional-analysis|effective-field-theory|
|
Why are these terms not present in the QED Lagrangian?
|
<p>It is plausible for a term like <span class="math-container">$|g|\bar\psi\bar\psi\psi\psi$</span> to appear; this will be an interaction that eats two particles and spits back out two particles. But the question's <span class="math-container">$-g(\bar\psi\psi)^2$</span> will be that and an effective mass term. Now, renormalisation means that the allowed 2-particle interaction term will modify the mass spectrum, but we can impose that the renormalisation not alter the rest mass, whereas this term definitely will alter it.</p> <p>Depending upon your professor's choice of signs, if the potential from <span class="math-container">$-g(\bar\psi\psi)^2$</span> is considered to be negative semi-definite, then there is also no stable vacuum.</p> <p>You should add that the reason photons cannot have mass is that the mass term singles out the Lorenz gauge as special, whereas the original Lagrangian is gauge invariant.</p> <p>The <span class="math-container">$F_{\mu\nu}\square F^{\mu\nu}$</span> term means a theory with 4th-order derivatives. The classical electrodynamics theory that incorporates radiation reaction by having a 3rd-order derivative already gave Feynman enough of a headache, so imagine how bad 4th order will turn out to be.</p>
|
Physics
|
|quantum-mechanics|schroedinger-equation|
|
Deducing the ground state from a known first excited state
|
<p>For <em>b</em>=1, your problem reduces to the well-known <a href="https://en.wikipedia.org/wiki/P%C3%B6schl%E2%80%93Teller_potential" rel="nofollow noreferrer">Pöschl–Teller potential</a>, shifted by a unit, so your excited state has <span class="math-container">$$ \psi''= \left (1-{2(2+1)\over \cosh^2(x)}\right )\psi, $$</span> alright.</p> <p>And then the ground state, the only other bound state of it, <em>must</em> be proportional to <span class="math-container">$P^2_2(\tanh (x))$</span>, proportional to <span class="math-container">$$ \psi_0= 1/\cosh^2(x), ~~~\leadsto \\ \psi_0''= \left(1+3 -6/\cosh^2(x) \right )\psi_0. $$</span></p> <p>You might consider doing perturbation theory in (<em>b-1</em>)...</p>
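<p>Both of these statements are easy to verify symbolically; here is a short sympy check (assuming nothing beyond the wavefunctions written above):</p> <pre><code>import sympy as sp

x = sp.symbols('x')

psi1 = sp.tanh(x) / sp.cosh(x)   # first excited state, proportional to P_2^1(tanh x)
psi0 = 1 / sp.cosh(x)**2         # candidate ground state, proportional to P_2^2(tanh x)

check1 = sp.diff(psi1, x, 2) - (1 - 6 / sp.cosh(x)**2) * psi1
check0 = sp.diff(psi0, x, 2) - (4 - 6 / sp.cosh(x)**2) * psi0

# Both differences simplify to zero
print(sp.simplify(check1.rewrite(sp.exp)))   # 0
print(sp.simplify(check0.rewrite(sp.exp)))   # 0
</code></pre>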
|
Physics
|
|solid-state-physics|electronic-band-theory|
|
Electron bands in solids and delocalized electrons
|
<p>Bands and bonds are models.</p> <pre><code>"All models are wrong but some are useful." -- George Box </code></pre> <p>The simple localized bond model fails even for small molecular rings when there is <a href="https://en.wikipedia.org/wiki/Aromaticity" rel="nofollow noreferrer">aromaticity</a>. You can, if you like, think of metallic bonding as as similar phenomenon, but on a lattice, not a ring.</p> <p>If you model the electrons as waves, you may conceive of the ring as tightly confined box for electron waves. In a tightly confined box, you get isolated resonant frequencies. But in a big box, a crystal, you get many closely spaced resonances, a band.</p>
|
Physics
|
|nuclear-physics|radioactivity|half-life|electron-capture|
|
Possible electron capture decay of $^{148}\mathrm{Gd}$?
|
<p>The ground state of <span class="math-container">$^{148}\mathrm{Gd}$</span> is a <span class="math-container">$0^{+}$</span> state while the ground state of <span class="math-container">$^{148}\mathrm{Eu}$</span> is a <span class="math-container">$5^{-}$</span> state.</p> <p>The transition from <span class="math-container">$^{148}\mathrm{Gd}$</span> to <span class="math-container">$^{148}\mathrm{Eu}$</span> involves both a change in parity <span class="math-container">$\Delta\pi = 1$</span> and a change in angular momentum of <span class="math-container">$\Delta J = 5$</span>. Therefore, this decay is a fifth-order forbidden decay with a parity change. The matrix elements for such transitions are significantly suppressed, making the probability of observing this decay practically zero.</p> <p>Considering decays to higher excited states of <span class="math-container">$^{148}\mathrm{Eu}$</span>, such as the second excited state at <span class="math-container">$6^{-}$</span>, the third at <span class="math-container">$7^{-}$</span>, and so on, does not alleviate the situation either, due to the increasing order of forbiddenness and the associated suppression of the decay probability.</p> <p>A list of levels for <span class="math-container">$^{148}\mathrm{Gd}$</span> can be found <a href="https://www.nndc.bnl.gov/nudat3/getdataset.jsp?nucleus=148Gd&unc=NDS" rel="nofollow noreferrer">here</a> and a list of levels for <span class="math-container">$^{148}\mathrm{Eu}$</span> can be found <a href="https://www.nndc.bnl.gov/nudat3/getdataset.jsp?nucleus=148Eu&unc=NDS" rel="nofollow noreferrer">here</a>.</p>
|
Physics
|
|operators|conformal-field-theory|stress-energy-momentum-tensor|
|
Inconsistency in Virasoro expansion of stress energy
|
<p><span class="math-container">$L_0$</span> is an operator, and can be replaced with a number only when acting on some field -- but the number depends on the field.</p> <p>In your case, if <span class="math-container">$L_0 V = \Delta V$</span>, then <span class="math-container">$L_0 \partial V = (\Delta+1)\partial V$</span>. This agrees with the relation that you find problematic, i.e. <span class="math-container">$[\partial, L_0] = -\partial$</span>.</p>
|
Physics
|
|thermodynamics|mass-energy|
|
How mass change can be converted to energy?
|
<p>To find the energy released, look at the mass difference. According to the question you were given, there is both a starting mass and a final mass, so evidently not all of the mass was converted.</p> <p>First compute the mass lost, <span class="math-container">$\Delta m=m_1-m_2$</span>, then multiply by <span class="math-container">$c^2$</span> to get the total energy released. In other words, the formula is <span class="math-container">$E=(m_1-m_2)c^2$</span>, precisely because only the lost mass, not all of it, is converted to energy.</p> <p>The most likely process behind such a question is fission or fusion: for combustion problems one does not normally bother with <span class="math-container">$E=mc^2$</span>, and in an antimatter-annihilation problem there would usually be no final mass at all, since such questions are normally set up so that all of the matter and antimatter annihilate.</p>
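<p>A small worked example in Python, with made-up masses purely for illustration:</p> <pre><code># Worked example with made-up numbers: 1.0000 g of reactants ends as 0.9993 g of products.
c = 2.998e8                     # speed of light in m/s
m1 = 1.0000e-3                  # initial mass in kg (assumed value)
m2 = 0.9993e-3                  # final mass in kg (assumed value)

delta_m = m1 - m2               # mass lost
E = delta_m * c**2              # energy released in joules
print(f"mass lost = {delta_m:.1e} kg, energy released = {E:.2e} J")
</code></pre>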
|
Physics
|
|electromagnetism|field-theory|gauge-theory|quantum-electrodynamics|dirac-equation|
|
How particles interact with the electromagnetic potential $A^\mu$?
|
<p>In quantum field theories the interaction between particle and field is introduced through interaction terms in the Lagrangian density that characterizes the specific theory. For the ordinary electromagnetic field, we can express it as a classical field theory with the Lagrangian density: <span class="math-container">$$\mathcal L_{Maxwell}=-{1\over 4}F_{\mu\nu}F^{\mu\nu}.$$</span> From the Euler-Lagrange equations one can obtain the Maxwell equations. Of course, this is strictly a classical theory. Important examples of quantum field theories include the Dirac field, which is the field appropriate for describing fermions. In QED the particles (fermions) interact with the electromagnetic field; this is captured with the QED Lagrangian: <span class="math-container">$$\mathcal L_{QED}=\mathcal L_{Dirac}+\mathcal L_{Maxwell}+\mathcal L_{int},$$</span> where the last term <span class="math-container">$\mathcal L_{int}$</span> describes the interaction of the fermion and the electromagnetic field, i.e. the Dirac field and the Maxwell field are <em>coupled</em> through this interaction term. An explicit form of this interaction, for an electron, is given by: <span class="math-container">$$\mathcal L_{int}=-e\bar\psi\gamma^{\mu}\psi A_\mu .$$</span> Note, however, that the Maxwell field is found in its ordinary form in the QED Lagrangian. The Maxwell field is unaltered; however, it is coupled in a local interaction to the Dirac field.</p> <p>So, the QED Lagrangian is given by: <span class="math-container">$$\mathcal L_{QED}=\bar\psi(i\gamma^\mu D_\mu-m)\psi-{1\over 4}F_{\mu\nu}F^{\mu\nu},$$</span> where <span class="math-container">$D_\mu$</span> is the "gauge covariant derivative": <span class="math-container">$$D_\mu=\partial_\mu+ieA_\mu(x).$$</span> Expanding the covariant derivative term, <span class="math-container">$\bar\psi i\gamma^\mu D_\mu\psi=\bar\psi i\gamma^\mu\partial_\mu\psi-e\bar\psi\gamma^\mu\psi A_\mu$</span>, shows that it already contains the interaction term <span class="math-container">$\mathcal L_{int}$</span> above, so that term is not added a second time. Plugging this Lagrangian into the Euler-Lagrange equations you get the following two field equations: <span class="math-container">$$(i\gamma^\mu D_\mu-m)\psi(x)=0 \quad\text{and}\quad \partial_\mu F^{\mu\nu}=e\bar\psi\gamma^\nu\psi.$$</span> The second of these is the set of Maxwell equations sourced by the local fermion current, and the first is the Dirac equation, with the modified derivative operator. You can understand that the operator <span class="math-container">$D$</span> is a sort of generalization of the ordinary momentum operator of quantum mechanics for the case when there are electromagnetic fields present, at least that is the way that Dirac motivated it in his development of the relativistic treatment of the electron. Perhaps, more precisely, it provides the QED Lagrangian and resulting field equations with gauge symmetry. When the fermionic field is coupled to the electromagnetic field in this way, i.e. with the gauge symmetry: <span class="math-container">$$\psi(x)\rightarrow e^{i\alpha(x)}\psi(x),\quad A_{\mu}\rightarrow A_\mu-{1\over e}\partial_\mu\alpha(x),$$</span> it is said to be <em>minimally coupled</em>.</p>
|
Physics
|
|forces|energy|rotational-dynamics|energy-conservation|
|
Can a "Floppy Hammer" apply more force/energy than a regular hammer?
|
<p>Most of the kinetic energy of a sledge hammer is in the head. It will have more kinetic energy if the head is more massive and/or moves faster.</p> <p>This handle looks longer than the usual sledge hammer. This might make it possible to swing it faster.</p> <p>Being floppy might too. When the workers wind up, the head is farther back than for a straight handle. This means it travels a longer path. If the worker applies a force over a longer path, the head will gain more kinetic energy.</p> <p>Drawbacks? A longer, floppy handle is harder to aim. Especially if it flops sideways.</p> <p>A longer stroke means each stroke takes longer. The worker can't make as many strikes per hour.</p> <p>In general, tools that take muscle need to be tuned to what people can produce. In the 1800's shoveling coal employed many people. One employer decided to see if he could make shoveling work better. He had a series of shovels built in various sizes. Workers could choose what fit them best. It turned out that workers were more comfortable with a bigger shovel that lifted more coal with each stroke. Productivity went up.</p> <p>I expect sledge hammers have been created with the length and mass that fits most people best. If the floppy hammer gets away from optimum, it won't catch on.</p>
|
Physics
|
|newtonian-mechanics|forces|free-body-diagram|rocket-science|lift|
|
Newton's 3rd law, force on a rocket
|
<p>Newton's 3rd law forces, what your link refers to as "partner forces", are equal and opposite forces that act on different objects. The motion of an object is due to the net force acting on that object, per Newton's 2nd law. It is sometimes difficult to distinguish these forces without the aid of free body diagrams.</p> <p>FIG 1 below shows the forces acting on the box, table and Earth for the "stationary" system. The Newton 3rd law pairs are circled in blue. The forces responsible for the motion of the box and table are circled in red.</p> <p>Note that the normal reaction force of the table acting up on the box is simply equal in magnitude to the force of gravity acting down on the box. The net force acting up on the box is the normal reaction force of the table minus the downward force of gravity of the box, for a net force of zero. The same applies to the table except that the force acting down on the table is the sum of the gravitational forces acting downward on the box and table.</p> <p>FIG 2 shows the rocket with the table and box as its only contents. I didn't actually view the link, but when it says "the force of gravity is not a partner force to the force moving upward" what it probably means is the upward force on the box is no longer simply the force of gravity acting down on the box as in FIG 1. As shown in FIG 2 the upward reaction force of the table acting on the box minus the force of gravity acting downward on the box must equal the net upward force on the box responsible for its acceleration.</p> <p>Hope this helps.</p> <p><a href="https://i.stack.imgur.com/U8as0.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/U8as0.jpg" alt="enter image description here" /></a></p> <p><a href="https://i.stack.imgur.com/SUdro.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/SUdro.jpg" alt="enter image description here" /></a></p>
|
Physics
|
|gravity|astrophysics|supernova|
|
Stellar Core Collapse Speeds
|
<p>Core collapse <em>does</em> occur on roughly a free fall timescale.</p> <p>The free fall timescale is of order <span class="math-container">$(G\rho)^{-1/2}$</span>, where <span class="math-container">$\rho$</span> is the density. Since the density is of order <span class="math-container">$10^{11-12}$</span> kg/m<span class="math-container">$^3$</span> in the electron-degenerate core at the point of collapse, this timescale is a second or less.</p> <p>Since density increases inwards, the collapse timescale is shorter near the centre. Thus the collapse occurs inside-out.</p> <p>The <em>speed</em> of collapse is of order the size of the core (a few thousand km) divided by the freefall timescale, so of order <span class="math-container">$10^4$</span> km/s.</p> <p>The core has a mass of <span class="math-container">$\sim 1$</span> solar mass, so the gravitational acceleration is of order <span class="math-container">$GM/r^2\sim 10^4$</span> km/s<span class="math-container">$^2$</span>. i.e. the whole core accelerates to speeds of <span class="math-container">$10^4$</span> km/s in a second. This is the acceleration at the "edge" of the core at the beginning of the collapse; it will be higher towards the centre where the densities are larger and will increase as the collapse progresses because the same mass will be enclosed within a decreasing volume of smaller radius.</p>
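<p>Plugging in the round numbers quoted above (these are order-of-magnitude inputs, not precise stellar-model values):</p> <pre><code>import math

G = 6.674e-11          # m^3 kg^-1 s^-2
rho = 1e12             # kg/m^3, typical core density at the onset of collapse
r_core = 3e6           # m, a few thousand km
M_core = 2e30          # kg, roughly one solar mass

t_ff = 1 / math.sqrt(G * rho)    # free-fall timescale ~ (G rho)^(-1/2)
v = r_core / t_ff                # characteristic collapse speed
g_edge = G * M_core / r_core**2  # gravitational acceleration at the core edge

print(f"t_ff ~ {t_ff:.2f} s, v ~ {v/1e3:.0f} km/s, g ~ {g_edge/1e3:.0f} km/s^2")
</code></pre>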
|
Physics
|
|quantum-mechanics|operators|hilbert-space|angular-momentum|representation-theory|
|
Why are the angular momentum raising and lowering operator coefficients real?
|
<p>Yes, OP is right: Normalized eigenstates <span class="math-container">$|j, m\rangle$</span> are in principle defined modulo an arbitrary phase factor. However, it is customary to choose the phase factors of neighboring eigenstates <span class="math-container">$|j, m\rangle$</span> and <span class="math-container">$|j, m\pm 1\rangle$</span> such that the matrix elements of <span class="math-container">$J_{\pm}$</span> are non-negative. (To be clear: the underlying <span class="math-container">$so(3)$</span> Lie algebra is not affected by these choices; only the representation matrices thereof.)</p>
|
Physics
|
|kinematics|everyday-life|geometry|
|
When a bus goes around a corner, does the person sitting at the back travel further distance than the person sitting at the front?
|
<p>The center of the front axle must travel along a larger arc than the center of the rear axle or any point between those points in order for the vehicle to avoid whatever obstacle the vehicle is navigating around with the turn (i.e. the curb, the edge of the lane, a lamp post...). This is because the fixed wheels of the rear axle constrain the rear axle to move towards the front axle (as long as they are rolling without slipping), while the front axle is free to move in any direction the steerable front wheels are pointed. Being in front, for whatever arc the front axle is following, the front axle is that much farther along that arc than the rear axle, so the rear axle, always moving towards the front axle, will take the "inside track".</p> <p>If you're a driver, you've experienced this even in a small personal vehicle when easing forwards out of very tight parking spaces or parallel parking: you have to wait until the rear of the wheel base has passed the obstacle before initiating your turn, otherwise you'll run the side of your vehicle into the obstacle.</p> <p>Points forward of the front axle travel farther than the front axle, and points rearward of the rear axle travel farther than the rear axle. There's no limit to how <em>much</em> farther: if you had a mile long weightless extension on the back of your bus with a weightless passenger sitting on the end of it, she would traverse an arc of about <span class="math-container">$\pi/2$</span> miles when the bus took a 90 degree turn. For real vehicles, the farthest-turning point is almost always the front bumper.</p> <p>Adding articulation reduces the difference in the size of the arcs of the front-most and rear-most axles by adding intermediate axles which all chase the axle directly in front, instead of the rearmost axle chasing the frontmost axle.</p> <p>The rear axle(s) <em>can</em> travel wider arcs than the front axle(s) if the rear wheels are not constrained to move towards the front axle. This is "spinning out" or "fishtailing".</p>
|
Physics
|
|thermodynamics|collision|kinetic-theory|mean-free-path|
|
How is number of collisions per unit distance related to mean free path?
|
<p>The number of collisions <span class="math-container">$n$</span> per distance <span class="math-container">$d$</span>, so the calculation <span class="math-container">$n/d$</span>, is a measure with a unit like this: <span class="math-container">$\frac{1}{\mathrm m}$</span>. This is a measure of how many times a particle will collide with structural atoms if it moves over e.g. one metre (or nanometre or other preferred unit of distance). In intuitive terms, maybe a particle collides with 5 atoms per metre.</p> <p>Now turn this around.</p> <p>How many metres can the particle move before colliding with an atom? If it collides with 5 for every metre it has moved, then on average it will collide after every <span class="math-container">$1/5$</span> metre. In other words, its path is only free for it to travel along undisturbed for on average <span class="math-container">$1/5$</span> metres. This value is found as simply the reciprocal, <span class="math-container">$\lambda=d/n$</span>, and the unit will be <span class="math-container">$\mathrm m$</span>.</p> <p>Instead of "collisions per metre" we are here looking at "metres per collision". Remember that this is an average, a mean. We refer to this measure as the <em>mean free path</em> <span class="math-container">$\lambda$</span>.</p>
|
Physics
|
|gauge-theory|representation-theory|wilson-loop|
|
What is a non-linear space of connections
|
<p>Let <span class="math-container">$\pi:P\rightarrow M$</span> be a principal fibre bundle and <span class="math-container">$\mathrm{Con}(\pi)$</span> the bundle of smooth principal connections over <span class="math-container">$\pi$</span>. This can actually be realized as an <em>affine</em> bundle over <span class="math-container">$M$</span> whose total space is <span class="math-container">$J^1(\pi)/G$</span>, where <span class="math-container">$G$</span> is the structure group. The corresponding vector bundle is <span class="math-container">$\mathrm{Ad}(\pi)\otimes\Lambda^1(M)$</span>, where <span class="math-container">$\mathrm{Ad}(\pi)$</span> is the vector bundle associated to <span class="math-container">$\pi$</span> through the adjoint representation of <span class="math-container">$G$</span> on <span class="math-container">$\mathfrak{g}$</span>.</p> <p>More generally if <span class="math-container">$\pi:N\rightarrow M$</span> is a fibered manifold, then the space of all smooth connections on <span class="math-container">$\pi$</span> can be identified with <span class="math-container">$\Gamma(\pi^1_0)$</span>, where <span class="math-container">$\pi^1_0:J^1(\pi)\rightarrow N$</span> is the first affine jet bundle of <span class="math-container">$\pi$</span>. This is again an affine bundle (although over <span class="math-container">$N$</span> rather than <span class="math-container">$M$</span>). The corresponding vector bundle is <span class="math-container">$V(\pi)\otimes_N\Lambda^1(M)$</span>, where the tensor product is taken over <span class="math-container">$N$</span>, and <span class="math-container">$V(\pi)$</span> is the vertical tangent bundle of <span class="math-container">$\pi:N\rightarrow M$</span>.</p> <p>In somewhat more familiar terms, connections of a given type (i.e. smooth sections of the corresponding bundles) always form an affine space. Affine spaces are "almost linear", but are nonetheless not vector spaces. Arbitrary linear combinations of connections are not themselves connections.</p>
|
Physics
|
|homework-and-exercises|potential|vector-fields|calculus|
|
Vector potential of position field
|
<p>It is not possible to find a vector potential <span class="math-container">$\mathbf{A}$</span> such that <span class="math-container">$$\nabla\times\mathbf{A}=\mathbf{r}. \tag{1}$$</span></p> <p>You can prove this by contradiction.<br /> Assume (1) is possible, and apply the divergence operator (<span class="math-container">$\nabla\cdot$</span>) to it. Then you get <span class="math-container">$$\underbrace {\nabla\cdot\nabla\times\mathbf{A}}_{=0} =\underbrace {\nabla\cdot\mathbf{r}}_{=3}.$$</span></p> <p>This is obviously a contradiction, and hence (1) is not possible.</p>
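<p>For readers who like to check such identities symbolically, here is a short sympy verification of the divergence used in the contradiction (the coordinate-system name <code>C</code> is arbitrary):</p> <pre><code>from sympy.vector import CoordSys3D, divergence

C = CoordSys3D('C')
r = C.x * C.i + C.y * C.j + C.z * C.k   # the position field

# div(curl A) vanishes identically for any smooth A, but div(r) does not:
print(divergence(r))                     # prints 3
</code></pre>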
|
Physics
|
|electromagnetism|electric-fields|electric-current|voltage|
|
Why can define the electric potential (voltage) in alternating current?
|
<p>Yes, when textbooks treat AC circuits without using the full apparatus of Maxwell's equations, which take into account all of the electrodynamical aspects of circuits, they are assuming that the fields are such that they <em>vary slowly</em>. For a field that is varying slowly, <span class="math-container">$${\partial\vec B\over\partial t}\approx 0,$$</span> therefore Faraday's law implies that electrostatic conditions apply, i.e. <span class="math-container">$\nabla\times\vec E=0$</span>. It is reasonable to ask oneself: is 60 Hz really slowly varying? Well, given that the electric potentials satisfy the wave equation, then yes. Electromagnetic waves travel at the speed of light, <span class="math-container">$1/\sqrt{\epsilon_0\mu_0}$</span>, thus changes in the circuit are equalized so rapidly that violations of the slowly varying assumption only become important at much higher frequencies.</p>
|
Physics
|
|astrophysics|galaxies|quasars|
|
Quasars to a galaxy
|
<p>A quasar is a name given to a particular state of activity of an active galactic nucleus - characterised by high levels of accretion onto a supermassive black hole.</p> <p>i.e. Quasars are not "objects" in their own right and if the accretion stops then so does the quasar-like activity and you end up with a more dormant galactic nucleus, like the one in our own Milky Way galaxy.</p> <p>Thus quasars do not "make a galaxy", although there are interesting but poorly understood correlations between the properties of supermassive black holes, their history of mass accretion and the properties of the galaxies they reside in. They thus probably play an important role in galaxy formation and evolution, principally through the injection of momentum and energy from their winds, jets and radiation.</p> <p>There is also no sense in which they can collapse. A supermassive black hole is as compact as it can be. It can of course continue to accrete matter and get bigger, but most supermassive black holes have a mass that is only a small fraction of the mass of the galaxy they are at the center of.</p>
|
Physics
|
|cosmology|spacetime|universe|structure-formation|
|
Cosmic web shape
|
<p>It looks like a neural net:</p> <p><a href="https://i.stack.imgur.com/Smzr2.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Smzr2.jpg" alt="enter image description here" /></a></p> <p>It should be random on the largest scale, but there is clearly structure from dynamic interactions on large scales.</p> <p>You really can't say what it would look like if it were a giant galaxy other than, wait for it, a giant galaxy.</p> <p>You'd have to get quantitative, include dark matter (ofc), and work out mass distributions for proper galactic rotation curves. It might not scale when the would-be galaxy size/time-scales require the Hubble constant to be included in the dynamics.</p>
|
Physics
|
|curvature|stars|gravitational-lensing|
|
Can stars bend light?
|
<p>Yes, stars can bend light with the immensity of their gravitational fields. A good example of this phenomenon being put to use is the first major verification of Einstein's theory of general relativity, which of course predicts the phenomenon in the first place.</p> <p>In 1919, the famous astronomer Arthur Eddington verified that the bending of starlight occurred to the degree predicted by Einstein's theory by observing the light from <em>occulted</em> stars during an ordinary solar eclipse; thus Eddington witnessed the gravitational field of our own sun bending starlight!</p> <p>There is nothing really special or odd about the gravity of black holes save for the fact that the fields are unusually strong. Ignoring rotation, charge, and the cosmological constant, the solutions of Einstein's equation that describe the gravity around a spherical black hole are entirely similar to the description of the fields around stars, i.e. the curved space-time around spherically symmetric massive bodies is that described by the Schwarzschild metric. Ordinary stars, however, are not massive enough to have an external "event horizon" or point of no return from which nothing can escape, hence their light is able to shine out into surrounding space.</p>
|
Physics
|
|electrostatics|potential|electrochemistry|
|
Potential difference between electrolyte and electrodes in terms of electrostatics concept
|
<blockquote> <p>I understand that the electric potential at a point closer to a charged particle, is nearly infinite</p> </blockquote> <p>This is not true in general, but is specific to a classical point charge. Many other configurations of charge density besides a classical point particle do not have this behavior.</p> <blockquote> <p>why there exist a finite potential difference between two charged objects that are in contact with each other?</p> </blockquote> <p>Because the charge distribution is not approximately a classical point particle. On large scales the distribution is a surface polarization density rather than a point charge. On microscopic scales the charges are quantum mechanical rather than classical, and quantum mechanics keeps them separated enough for the forces and potentials to be finite.</p>
|
Physics
|
|gravity|spacetime|photons|
|
How does light get affected by gravity?
|
<p>Any massless particle, like a photon or gluon, can only travel at c. It can never have zero speed or exist at zero speed. So there is no acceleration and it is not appropriate to think about acceleration in the normal sense.</p>
|
Physics
|
|general-relativity|metric-tensor|gauge-theory|gravitational-waves|linearized-theory|
|
How do physicists mathematically define gravitational waves?
|
<p>The most straightforward way is to simply take the transverse-traceless (TT) part of <span class="math-container">$h_{ij}$</span>. The TT part of the metric, denoted <span class="math-container">$h^{\mathrm{TT}}_{ij}$</span>, contains precisely the two propagating degrees of freedom, which correspond to the two polarizations of gravitational waves. This enables coordinate effects to be removed and exposes the true propagating gravitational waves. It is possible to find a gauge transformation in which the only nonzero part of <span class="math-container">$h_{\mu\nu}$</span> is <span class="math-container">$h^{\mathrm{TT}}_{ij}$</span>. This is known as the TT gauge.</p> <p>The wavevectors can then be found by taking the Fourier transform of <span class="math-container">$h^{\mathrm{TT}}_{ij}$</span>. If there is just one single gravitational wave with propagation direction <span class="math-container">$n^i$</span>, it is possible to find <span class="math-container">$h^{\mathrm{TT}}_{ij}$</span> by defining <span class="math-container">$P_{ij} = \delta_{ij} - n_i n_j$</span>. Then, given <span class="math-container">$h_{kl}$</span> in the Lorenz gauge, <span class="math-container">$$h^{\mathrm{TT}}_{ij} = \left(P_{ik}P_{jl} - \frac{1}{2}P_{ij}P_{kl}\right)h_{kl}.$$</span></p> <p>In your example, since there is only one spatial term in your <span class="math-container">$h_{\mu\nu}$</span> and it is on the diagonal, it is immediately obvious that its TT part is zero.</p>
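<p>For what it is worth, here is a minimal numerical sketch of the projection formula (my own addition, not part of the answer above); the propagation direction <code>n</code> and the sample perturbation <code>h</code> are arbitrary toy values:</p> <pre><code>import numpy as np

# Propagation direction (unit vector) -- arbitrary example choice
n = np.array([0.0, 0.0, 1.0])

# Some symmetric 3x3 spatial perturbation h_ij (toy numbers, Lorenz gauge assumed)
h = np.array([[0.3, 0.1, 0.2],
              [0.1, -0.2, 0.4],
              [0.2, 0.4, 0.5]])

# Transverse projector P_ij = delta_ij - n_i n_j
P = np.eye(3) - np.outer(n, n)

# h^TT_ij = (P_ik P_jl - 1/2 P_ij P_kl) h_kl
h_TT = P @ h @ P - 0.5 * P * np.trace(P @ h)

print(h_TT)
print("transverse:", np.allclose(h_TT @ n, 0))    # n^i h^TT_ij = 0
print("traceless :", np.isclose(np.trace(h_TT), 0))
</code></pre>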
|
Physics
|
|lagrangian-formalism|mass|antimatter|point-particles|
|
Lagrangian for a free antimatter particle
|
<p>The Lagrangian of a classical charged particle <span class="math-container">$\mathrm Q$</span> in an EM field is given by</p> <p><span class="math-container">$$L_{\mathrm Q}(V,\mathbf A)=-mc^2\sqrt{1-\frac{v^2}{c^2}}+q\mathbf v \cdot \mathbf A -q V,$$</span> where <span class="math-container">$q$</span> is the charge, <span class="math-container">$\mathbf A$</span> is the vector potential and <span class="math-container">$V$</span> the electric potential.</p> <p>For its anti-particle <span class="math-container">$\overline{\mathrm Q}$</span>, we have</p> <p><span class="math-container">$$L_{\overline{\mathrm Q}}(V,\mathbf A)=-mc^2\sqrt{1-\frac{v^2}{c^2}}-q\mathbf v \cdot \mathbf A +q V.$$</span></p> <p>It's the charge signs that change, not the mass. When we set <span class="math-container">$V=0$</span> and <span class="math-container">$\mathbf A=0$</span>, we get <span class="math-container">$L_{\rm Q}(0,0)=L_{\overline{\mathrm Q}}(0,0)$</span> thus the same free particle Lagrangians.</p>
|
Physics
|
|homework-and-exercises|electromagnetism|lagrangian-formalism|maxwell-equations|variational-calculus|
|
Derivation of Maxwell's equations using Lagrangian formalism
|
<p>The principle of least action is extremely robust and has been employed in many interesting ways.</p> <p>The Lagrangian for the electromagnetic field is given by: <span class="math-container">$$\mathcal L=-{1\over 4}F_{\mu\nu}F^{\mu\nu}.$$</span> There are different conventions that use differing values for the constant out front, for example the one you see in J.D. Jackson's text; however, they all contain the "quadratic" form in <span class="math-container">$F_{\mu\nu}$</span>.</p> <p>Here, <span class="math-container">$F_{\mu\nu}$</span> and <span class="math-container">$F^{\mu\nu}$</span> are the covariant and contravariant components of a rank-two tensor known as the Faraday or field strength tensor. This is not as daunting as it may sound in this context, because you can identify these quantities with matrices, albeit matrices that transform in the proper way under Lorentz transformations. See Goldstein's <em>Classical Mechanics</em> chapter 13 for a good introduction to the Lagrangian formulation for continuous systems and fields, as that is precisely what the electromagnetic field is. At any rate, the Euler-Lagrange equations for such systems have the form:<span class="math-container">$$\partial_\mu \bigg({\partial\mathcal L\over\partial (\partial_\mu \phi_\rho)}\bigg)-{\partial\mathcal L\over\partial\phi_\rho}=0,$$</span> <span class="math-container">$$\vdots$$</span> where we have as many equations as we have fields. Notice that we have avoided differentiating with respect to the usual generalized coordinates and have instead differentiated with respect to some functions <span class="math-container">$\phi_\rho$</span>. The functions <span class="math-container">$\phi_\rho$</span> are any set of functions which act as the "coordinates" of the Lagrangian, which in the continuous system is now a field or <em>density</em> that is defined everywhere in space, i.e. for a continuous system the Lagrangian is such that: <span class="math-container">$$\mathcal L=\mathcal L(\phi_{\rho}, \partial_\mu\phi_{\rho}, x^\mu).$$</span> The Lagrangian may be a function of any number of fields, their derivatives and possibly the raw coordinates themselves! You are right when you say that the subject is interesting! Now back to Maxwell's theory. The field strength tensor is defined as: <span class="math-container">$$F_{\mu\nu}=\partial_\mu A_\nu-\partial_\nu A_\mu,$$</span> and <span class="math-container">$$F^{\mu\nu}=\partial^\mu A^\nu-\partial^\nu A^\mu.$$</span> So for the electromagnetic system, the field involved is the 4-vector potential <span class="math-container">$A^\mu$</span>, and inserting these into the given Lagrangian expression we get: <span class="math-container">$$\mathcal L=-{1\over 4}(\partial_\mu A_\nu-\partial_\nu A_\mu)(\partial^\mu A^\nu-\partial^\nu A^\mu).$$</span> So Maxwell's equations can be derived via the Euler-Lagrange equations: <span class="math-container">$$\partial_\nu\bigg({\partial\mathcal L\over\partial (\partial_\nu A^\mu)}\bigg)-{\partial\mathcal L\over\partial A^\mu}=0.$$</span> Now, this calculation is straightforward; <em>however</em>, it does require that you get comfortable with manipulating indices, so I will leave off the derivation for now and offer you some good pieces of advice that I hope you will pursue as your time admits.</p> <p>Firstly, read 7.4-7.6 of Goldstein's <em>Classical Mechanics</em>, then read chapter 13 or at least 13.1-13.2; then you will be ready to calculate the above derivatives and find Maxwell's equations from the Lagrangian formulation. This is really not a lot of material to cover, and it serves as excellent preparation for more advanced physics.</p>
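<p>If you want to see the machinery in action without doing the index gymnastics by hand, here is a small SymPy sketch (my own illustration, not taken from Goldstein or any other text). It builds <span class="math-container">$F_{\mu\nu}$</span> from a generic <span class="math-container">$A_\mu(t,x,y,z)$</span>, forms the Lagrangian density above, and uses SymPy's <code>euler_equations</code> helper to produce the four source-free Maxwell equations <span class="math-container">$\partial_\mu F^{\mu\nu}=0$</span>:</p> <pre><code>import sympy as sp
from sympy.calculus.euler import euler_equations

t, x, y, z = sp.symbols('t x y z')
coords = (t, x, y, z)                 # units with c = 1
eta = sp.diag(1, -1, -1, -1)          # Minkowski metric, signature (+,-,-,-)

# Four generic potential components A_mu(t, x, y, z)
A = [sp.Function(f'A{mu}')(*coords) for mu in range(4)]

# Field strength F_{mu nu} = d_mu A_nu - d_nu A_mu
F = sp.Matrix(4, 4, lambda m, n: sp.diff(A[n], coords[m]) - sp.diff(A[m], coords[n]))

# Raise both indices with eta (eta is its own inverse here)
F_up = eta * F * eta

# Lagrangian density  L = -1/4 F_{mu nu} F^{mu nu}
L = -sp.Rational(1, 4) * sum(F[m, n] * F_up[m, n]
                             for m in range(4) for n in range(4))

# One Euler-Lagrange equation per field component: together they are d_mu F^{mu nu} = 0
for eq in euler_equations(L, A, coords):
    print(sp.simplify(eq))
</code></pre> <p>Adding an interaction term such as <span class="math-container">$-J^\mu A_\mu$</span> to the Lagrangian should, with this sign convention, put the current source on the right-hand side of the printed equations.</p>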
|
Physics
|
|quantum-mechanics|wavefunction|schroedinger-equation|hydrogen|
|
Expectation Value Involving $s$-Wave Solutions to Central Potential
|
<p>It cannot be true for the harmonic oscillator: if you plug in the <span class="math-container">$n$</span>-th solution there, your <span class="math-container">$\phi(0)$</span> becomes lower (<span class="math-container">$\sim 1/\sqrt{n}$</span> if it is in one dimension, with a steeper decrease in more dimensions). At the same time the <span class="math-container">$dV/dr$</span> expectation value actually becomes <em>larger</em>, because the radial wave function will see more of the larger <span class="math-container">$r$</span> region, where <span class="math-container">$dV/dr$</span> is larger.</p> <p>Of course the harmonic oscillator is in this respect the opposite of the Coulomb potential, which has its <span class="math-container">$dV/dr$</span> decreasing with larger <span class="math-container">$r$</span>. For that situation what you describe can be true, but certainly not in general.</p>
|
Physics
|
|gravity|visible-light|molecules|
|
Can molecules bend light?
|
<p>Technically, there is no reason to believe that any amount of matter, no matter how small, fails to warp space-time according to the amount of its gravitational influence as indicated by the Einstein field equations. However, would that amount of space-time warping be measurable? No, absolutely not, at least not with today's experimental capabilities.</p>
|
Physics
|
|quantum-mechanics|atomic-physics|harmonic-oscillator|thermal-radiation|
|
Planck's quantum explanation of black body radiators
|
<p>The oscillators considered to make up a black body are not electrons, or atoms with electrons. They are an abstract description of matter, a (very) simplifying replacement for the complicated system of an immense number of nuclei and electrons, with its dense and complicated energy-level structure.</p> <p>When atoms are as close to each other as in a solid body, they no longer have the simple energy spectrum we use to describe a rarefied emitting gas, like <span class="math-container">$E_n = - K/n^2$</span> for hydrogen; instead the whole system (of many atoms) has a very dense, sometimes called "quasi-continuous", spectrum of energy levels. This is because of the mutual interaction between the atoms: it lifts degeneracies and splits the few discrete levels, valid when the atoms are far apart, into very many closely spaced levels, valid when the atoms are densely packed.</p> <p>Matter made of (quantum) harmonic oscillators is just the simplest mathematical model of a solid interacting with equilibrium radiation that can be analyzed in quantum theory, and it leads to the correct formula for the spectrum of the radiation, so people use it. This has been so since the time of Max Planck (who, however, preferred to think of the matter system as made of many classical harmonic oscillators with continuously changing energy, which he called "resonators").</p>
|
Physics
|
|general-relativity|visible-light|singularities|geodesics|absorption|
|
Aren't places where geodesics end singularities?
|
<p>First let me clarify an important point about black holes. <strong>In the case of realistic, rotating black holes, it is not at all true that all incident bodies and light rays necessarily end up in the singularity.</strong> No, that's a common, widespread misrepresentation. In other words, most geodesic orbits do not meet the singularity, neither timelike geodesics nor lightlike geodesics. <em>Translated into concrete terms, this means that if a test body or spacecraft enters the event horizon, it is not at all necessarily destined to reach the singularity and be destroyed.</em> It depends on the properties of Kerr spacetime (and Kerr-Newman spacetime) and on what kinds of geodesics they admit. In fact, careful analyses show that very specific conditions must be met for geodesics to end up in the singularity; ergo, <em>the vast majority of possible geodesic and other orbits avoid the ring singularity.</em></p> <p>Why is it so commonly believed that all incident bodies are necessarily destroyed in the singularity? Probably because of the Schwarzschild spacetime, the non-rotating, static black hole, where that really is the case. But the Kerr spacetime, the rotating black hole, is radically different, and what I wrote above holds. Scientists have known these things since the late 1960s, yet the misinterpretation persists today. Here is one of the key articles, from 1968, that revealed how geodesics really approach and avoid the singularity (chapter 3 of the article deals with these issues):</p> <p><a href="https://luth.obspm.fr/%7Eluthier/carter/trav/Carter68.pdf" rel="nofollow noreferrer">Carter paper about Kerr spacetime</a></p> <p>Now let's get to answering the question. Geodesics (for both material bodies and photons) are the orbits of freely moving objects in a given spacetime. So, if they are only affected by gravity, i.e. the curvature of spacetime, they move on geodesics. <em>If they are subject to any other force, interacting with other material bodies, they will be deviated from the geodesic.</em> In other words, if a ray of light is reflected by a surface or absorbed by a body, its motion will be diverted from the geodesic path it was on and it will be on a different world line. <strong>Geodesic paths are properties of spacetime itself, not the real-life paths of bodies or photons.</strong> So the beam of light in question only moves geodesically until it reaches the point where it interacts with bodies. The theoretically calculated geodesic path continues and does not end at the reflecting or absorbing body. But these bodies prevent the beam of light from continuing along that geodesic.</p> <p>The geodesics do not describe the specific events that befall real bodies and photons. They describe the trajectories that spacetime theoretically imposes on things that move freely, obeying only the curvature of spacetime, without obstacles.</p>
|
Physics
|
|newtonian-mechanics|forces|rotational-dynamics|work|friction|
|
Is work done by hinge force always 0?
|
<p>In an ideal hinge, the hinge does no work when rotating about its axis. Obviously real hinges can do work. Wolphram Jonny points out that a rusty hinge can indeed slow down a door, showing that a rusty hinge does do work. A non-rusty hinge does less work. A well-lubricated hinge does even less. A hinge on an air bearing can have nearly no friction. Taking it to the extreme, an ideal hinge does no work.</p> <p>An ideal hinge <em>can</em> do work if it moves in other directions besides its rotation axis. As a trivial example, if I have a hinge whose axis is vertical (like a door hinge), and I use it to lift the door upwards, it must do work. It is only for rotation about its axis that it cannot do any work.</p>
|
Physics
|
|quantum-field-theory|path-integral|approximations|partition-function|semiclassical|
|
Making sense of stationary phase method for the path integral
|
<p>You want to compute the generating functional <span class="math-container">$$Z[J]=\int [d\phi] e^{\frac{i}{\hbar} (S[\phi]+\phi \cdot J)}, \quad Z[0]=1, \quad \phi\cdot J:=\int\! dx \, \phi(x) J(x)$$</span> in the quasiclassical approximation. To this end you perform the transformation of variables <span class="math-container">$\phi= \phi_c + \sqrt{\hbar} \, \chi$</span> with the new integration variable <span class="math-container">$\chi$</span>. The function <span class="math-container">$\phi_c=\phi_c[J]$</span> is determined in such a way that the term linear in <span class="math-container">$\chi$</span> in the expansion <span class="math-container">$$\begin{align}S[\phi_c+\sqrt{\hbar} \,\chi]+(\phi_c+\sqrt{\hbar} \, \chi)\cdot J&=S[\phi_c]+ \phi_c \cdot J \\[5pt] &+\sqrt{\hbar}\int \! dx \,\left(\frac{\delta S[\phi]}{\delta \phi(x)} {\huge|}_{\phi=\phi_c}\!\!+J(x)\right) \chi(x) \\[5pt]& {}+ \frac{\hbar}{2} \int \! dx_1 \, dx_2 \, \frac{\delta^2 S[\phi]}{\delta\phi(x_1) \delta \phi(x_2)}{\huge|}_{\phi=\phi_c}\chi(x_1) \chi(x_2) \\[5pt] &+ \mathcal{O}(\hbar^{3/2}), \end{align}$$</span> <em>vanishes</em>, such that <span class="math-container">$$\frac{\delta S[\phi]}{\delta \phi(x)} {\huge|}_{\phi=\phi_c}+J(x) =0.$$</span> Using the translation invariance of the measure in the functional integral, one finds <span class="math-container">$$Z[J]= e^{\frac{i}{\hbar} (S[\phi_c]+ \phi_c \cdot J)} \int [d\chi] e^{\frac{i}{2} \delta^2 S/\delta \phi^2|_{\phi=\phi_c} \chi^2 } + \ldots$$</span></p>
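<p>For completeness (this last step is not spelled out above, but it is the standard Gaussian functional integral): performing the remaining integral over <span class="math-container">$\chi$</span> formally yields a functional determinant, <span class="math-container">$$Z[J]\simeq e^{\frac{i}{\hbar} (S[\phi_c]+ \phi_c \cdot J)} \left[\det\left(\frac{\delta^2 S[\phi]}{\delta\phi\,\delta\phi}\Big|_{\phi=\phi_c}\right)\right]^{-1/2},$$</span> up to a <span class="math-container">$J$</span>-independent normalization constant fixed by the condition <span class="math-container">$Z[0]=1$</span>. The terms of order <span class="math-container">$\hbar^{3/2}$</span> and higher that were dropped generate the corrections beyond this quasiclassical (one-loop) approximation.</p>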
|
Physics
|
|electrostatics|electric-fields|voltage|capacitance|
|
Why is the electric field strength ($V/d$) constant in a charging capacitor?
|
<p>Others have said this, but the integral in the OP,</p> <p><span class="math-container">$$f_{\rm Wrong}(x)=\int_0^{2\pi}\int_0^{100}{1\over r^2 +x^2}+{1\over r^2+(d-x)^2}{\rm d}r{\rm d}\theta \ , $$</span></p> <p>is not quite correct. Firstly, the area element in 2D is <span class="math-container">$r\,{\rm d}r\,{\rm d}\theta$</span>; secondly, you are (I suppose) adding the <span class="math-container">$x$</span>-components of the field along the line joining the midpoints of the two circular plates, but the integrand contains only the magnitudes. Fixing these issues gives us:</p> <p><span class="math-container">$$E_x(x)=2\pi\int_0^R r\left[\frac{x}{(r^2 +x^2)^{3/2}}+\frac{d-x}{(r^2 +(d-x)^2)^{3/2}}\right]{\rm d}r \ . $$</span> Note that I can define <span class="math-container">$\xi=x/d$</span>, which goes from <span class="math-container">$0$</span> to <span class="math-container">$1$</span>, and <span class="math-container">$y=r/d$</span>, going from <span class="math-container">$0$</span> to <span class="math-container">$\rho=R/d$</span>. With this notation change we have</p> <p><span class="math-container">$$E_x(\xi,\rho) = 2\pi\int_0^\rho y\left[\frac{\xi}{(y^2 +\xi^2)^{3/2}}+\frac{1-\xi}{(y^2 +(1-\xi)^2)^{3/2}}\right] { \rm d} y \ , $$</span></p> <p>which shows that the field has the nice property that it only depends on the aspect ratio <span class="math-container">$\rho$</span> of the plate radius to the separation, and on the fraction of the separation one looks at. The OP's original expression <span class="math-container">$f_{\rm Wrong}(x)$</span> lacks this property.</p> <p>In fact, this function is pretty flat even for relatively small <span class="math-container">$\rho$</span>! <a href="https://i.stack.imgur.com/jnGEC.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/jnGEC.png" alt="enter image description here" /></a></p>
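<p>As a check (my own addition, keeping the same arbitrary units as above, i.e. dropping the constant prefactor involving the charge density and <span class="math-container">$\epsilon_0$</span>), the <span class="math-container">$y$</span>-integral can actually be done in closed form, <span class="math-container">$$E_x(\xi,\rho)=2\pi\left[2-\frac{\xi}{\sqrt{\rho^2+\xi^2}}-\frac{1-\xi}{\sqrt{\rho^2+(1-\xi)^2}}\right],$$</span> and a few lines of Python show how flat it is across the gap:</p> <pre><code>import numpy as np

def E_x(xi, rho):
    """On-axis field between the plates, arbitrary units, xi strictly between 0 and 1."""
    return 2 * np.pi * (2 - xi / np.hypot(rho, xi)
                          - (1 - xi) / np.hypot(rho, 1 - xi))

rho = 2.0                      # plate radius = 2 x plate separation
for xi in (0.1, 0.25, 0.5, 0.75, 0.9):
    print(f"xi = {xi:.2f}   E_x = {E_x(xi, rho):.4f}")
</code></pre> <p>For <span class="math-container">$\rho$</span> of a few, the variation across the gap is only at the percent level, which is why the uniform-field result <span class="math-container">$E=V/d$</span> works so well.</p>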
|
Physics
|
|quantum-field-theory|lagrangian-formalism|quantum-electrodynamics|renormalization|dimensional-analysis|
|
Charge renormalization choice in QED
|
<p>This is common practice whenever dimensional regularization is used. Simply consider the (mass) dimensions of the various terms in <span class="math-container">$d$</span> space-time dimensions. As the action <span class="math-container">$S= \int d^d x \, \mathcal{L}$</span> is dimensionless, you can read off <span class="math-container">$$[\mathcal{L}] =d, \quad [A_0]=(d-2)/2, \quad [\psi_0]=(d-1)/2, \quad [e_0]=2-d/2, \quad \text{etc.} $$</span> With <span class="math-container">$d=4-2 \epsilon$</span> you obtain <span class="math-container">$[e_0]=\epsilon$</span>. As <span class="math-container">$e$</span> is supposed to become the <em>dimensionless</em> coupling constant in <span class="math-container">$d=4$</span> dimensions and because of <span class="math-container">$[Z_e]=0$</span>, you introduce the factor <span class="math-container">$\mu^\epsilon$</span> (the energy scale <span class="math-container">$\mu$</span> has dimension <span class="math-container">$1$</span>, so <span class="math-container">$[\mu^\epsilon]=\epsilon$</span>) in <span class="math-container">$e_0 = Z_e e \mu^\epsilon$</span>.</p>
|
Physics
|
|lagrangian-formalism|field-theory|fermions|dirac-equation|charge-conjugation|
|
Dirac Lagrangian under charge conjugation
|
<p>Note that charge conjugation interchanges the anticommuting <span class="math-container">$\bar \psi$</span> and <span class="math-container">$\psi$</span>, so after conjugation by <span class="math-container">$\hat C$</span> the <span class="math-container">$\partial_\mu$</span> acts on the <span class="math-container">$\bar\psi$</span>, and needs a sign-changing integration by parts.</p> <p>In detail: <span class="math-container">$$ \hat C \psi \hat C^{-1}= {\mathcal C}^{-1}\bar\psi^T\\ \hat C \bar \psi \hat C^{-1} = - \psi^T {\mathcal C} $$</span> where the <span class="math-container">${\mathcal C}$</span> matrix obeys <span class="math-container">$$ {\mathcal C}\gamma^\mu {\mathcal C}^{-1}= -(\gamma^\mu)^T. $$</span></p>
|
Physics
|
|electromagnetism|magnetic-fields|electric-fields|
|
Why are magnetic fields represented as revolving around the direction of motion?
|
<p>the "shape" isn't directly transforming. Instead, a changing electric field creates a magnetic field with a specific orientation and vice versa. Imagine a spinning charged sphere. The electric field from the stationary charge itself might be radial (outward from the sphere). But due to the spinning motion (which is a form of changing electric field), a magnetic field would also be generated, following a circular pattern around the sphere's axis.</p>
|
Physics
|
|spacetime|speed-of-light|space-expansion|faster-than-light|
|
Can we use the fabric of spacetime to go faster than the speed of light?
|
<blockquote> <p>If the fabric of spacetime isn't bound by the limit of the speed of light (the universe is expanding faster than the speed of light),</p> </blockquote> <p>If you consider cosmic expansion to be faster than light, then there is certainly no theoretical obstacle to a spaceship traveling faster than light in the same way.</p> <p>Simply send the spaceship away from Earth at, say, 96% the speed of light. Now transform into "synchronous" coordinates, similar to what we use to describe the universe. The idea is that we want clocks on the Earth and the spaceship to advance at the same rate. To do this, we boost into an intermediate reference frame, in which the Earth and the spaceship are receding in opposite directions at the same speed. Due to relativistic velocity addition, this mutual speed turns out to be 75% the speed of light, since <span class="math-container">$2\times 0.75/(1+0.75^2)=0.96$</span>. But now the trick is that we measure time with respect to the synchronized clocks on the Earth and spaceship, which are slowed by a factor of about <span class="math-container">$(1-0.75^2)^{1/2}\simeq 0.66$</span> due to time dilation. By comparing distance traveled to the time elapsed on these clocks, we would conclude that the Earth and the spaceship are traveling in opposite directions each at <span class="math-container">$0.75/0.66 = 1.13$</span> times the speed of light.</p> <p>If you don't think that this is faster-than-light travel, then you shouldn't think the universe is expanding faster than light either!</p>
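<p>For anyone who wants to reproduce the numbers in the previous paragraph, here is the same arithmetic as a few lines of Python (my own addition; units with <span class="math-container">$c=1$</span>):</p> <pre><code>import numpy as np

w = 0.96                                  # Earth-ship relative speed
v = (1 - np.sqrt(1 - w**2)) / w           # solves 2v/(1 + v^2) = w  ->  v = 0.75
clock_rate = np.sqrt(1 - v**2)            # time-dilation factor ~ 0.66
coordinate_speed = v / clock_rate         # distance per unit of synchronized clock time

print(v, clock_rate, coordinate_speed)    # 0.75  0.661...  1.133...
</code></pre>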
|
Physics
|
|atomic-physics|schroedinger-equation|computational-physics|simulations|
|
Simulating the helium atom using the Schrödinger equation numerically?
|
<p>The Hamilton operator of the helium atom is given by <span class="math-container">$$ H=-\frac{\hbar^2}{2m}\Delta_1 - \frac{\hbar^2}{2m} \Delta_2-\frac{2e^2}{r_1}-\frac{2e^2}{r_2}+\frac{e^2}{r_{12}},\qquad r_{1}= |\vec x_{1} |, \; r_2= |\vec{x}_2|,\; r_{12}= |\vec{x}_1-\vec{x}_2|,$$</span> acting on wave functions <span class="math-container">$\psi(\vec{x}_1,\sigma_1; \vec{x}_2,\sigma_2)$</span>. According to the Pauli exclusion principle for fermions, the wave function has to be <em>antisymmetric</em> under the interchange <span class="math-container">$(\vec{x}_1,\sigma_1) \leftrightarrow (\vec{x}_2, \sigma_2) $</span> of the position and spin coordinates of the two particles, i.e. <span class="math-container">$$\psi(\vec{x}_1,\sigma_1; \vec{x}_2,\sigma_2) =- \psi(\vec{x}_2,\sigma_2; \vec{x}_1, \sigma_1).$$</span> As the Hamilton operator does not depend on the spins <span class="math-container">$\vec{S}_{1,2}$</span>, the operators <span class="math-container">$H$</span>, <span class="math-container">$\vec{S}^2$</span> and <span class="math-container">$S_z$</span> (with <span class="math-container">$\vec{S}=\vec{S}_1+\vec{S}_2$</span>) can be diagonalized <em>simultaneously</em>. The addition of two spin <span class="math-container">$1/2$</span> angular momenta can either result in a total spin <span class="math-container">$s=0$</span> (spin singlet) or a total spin <span class="math-container">$s=1$</span> (spin triplet). In the first case (parahelium), the wave function assumes the form <span class="math-container">$$\begin{pmatrix} \psi(\vec{x}_1, \uparrow; \vec{x}_2, \uparrow)\\ \psi(\vec{x}_1, \uparrow; \vec{x}_2, \downarrow) \\\psi(\vec{x}_1, \downarrow; \vec{x}_2, \uparrow) \\ \psi(\vec{x}_1, \downarrow; \vec{x}_2, \downarrow)\end{pmatrix}=\frac{1}{\sqrt{2}}\begin{pmatrix} 0 \\1 \\ -1\\0 \end{pmatrix} \phi_s(\vec{x}_1, \vec{x}_2),$$</span> with a <em>symmetric</em> spatial wave function <span class="math-container">$\phi_s(\vec{x}_1, \vec{x}_2)=\phi_s(\vec{x}_2,\vec{x}_1).$</span> The triplet states (orthohelium) with total spin <span class="math-container">$s=1$</span> and <span class="math-container">$s_z= +1,0,-1$</span> are described by <span class="math-container">$$\begin{pmatrix} 1\\ 0 \\0\\0 \end{pmatrix} \phi_a(\vec{x}_1, \vec{x}_2), \quad \frac{1}{\sqrt{2}}\begin{pmatrix} 0\\ 1 \\ 1 \\ 0 \end{pmatrix}\phi_a(\vec{x}_1,\vec{x}_2), \quad \begin{pmatrix}0\\0\\0\\1 \end{pmatrix}\phi_a(\vec{x}_1,\vec{x}_2)$$</span> with an <em>antisymmetric</em> spatial wave function <span class="math-container">$\phi_a(\vec{x}_1,\vec{x}_2) =-\phi_a(\vec{x}_2, \vec{x}_1)$</span>.</p> <p>Thus, <em>in principle</em>, finding the energy eigenvalues of the He atom boils down to solving the eigenvalue problem <span class="math-container">$H \phi = E\phi$</span> for functions <span class="math-container">$\phi(\vec{x}_1, \vec{x}_2) \in L^2(\mathbb R^6)$</span> being either symmetric (parahelium) or antisymmetric (orthohelium) under <span class="math-container">$\vec{x}_1 \leftrightarrow \vec{x}_2$</span>. However, in contrast to the simple case of the hydrogen atom, the energy eigenfunctions of the He atom cannot be found in closed form and one has to appeal to suitable approximation methods. Note that an excellent upper bound for the He ground state energy can be found by a variational ansatz for the ground state wave function, whereas a lower bound can be derived from a certain operator inequality (see e.g. 
Walter Thirring, Quantum Mathematical Physics - Atoms, Molecules and Large Systems, Springer, chapter 4.3).</p>
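<p>To give a flavour of the variational route mentioned at the end (this sketch is my own addition, not part of the answer above): with the standard textbook trial state built from two hydrogenic <span class="math-container">$1s$</span> orbitals with an effective nuclear charge <span class="math-container">$Z_{\mathrm{eff}}$</span>, the energy expectation value in Hartree units is <span class="math-container">$E(Z_{\mathrm{eff}}) = Z_{\mathrm{eff}}^2 - 2ZZ_{\mathrm{eff}} + \tfrac{5}{8}Z_{\mathrm{eff}}$</span>, and minimizing it gives an upper bound on the ground-state energy:</p> <pre><code>from scipy.optimize import minimize_scalar

Z = 2                        # helium nuclear charge
HARTREE_EV = 27.2114

def E(zeff):
    """Variational energy (Hartree) for two 1s orbitals with effective charge zeff."""
    return zeff**2 - 2 * Z * zeff + 5.0 / 8.0 * zeff

res = minimize_scalar(E, bounds=(1.0, 2.0), method="bounded")
print("Z_eff =", res.x)                          # ~ 27/16 = 1.6875
print("E_min =", res.fun * HARTREE_EV, "eV")     # ~ -77.5 eV
</code></pre> <p>This simple upper bound already comes within about two percent of the measured ground-state energy of roughly <span class="math-container">$-79\,\mathrm{eV}$</span>; the lower-bound techniques referred to above are considerably more involved.</p>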
|
Physics
|
|homework-and-exercises|newtonian-mechanics|energy-conservation|
|
Getting different values using energy principle and momentum principle
|
<p>The second derivation using conservation of energy is wrong, because the total potential energy of the system when the mass is at its old equilibrium (unstretched spring) is <em>different</em> from the potential energy when the mass is at its new equilibrium (where the forces cancel). Thus, one <em>cannot say</em> that <span class="math-container">$\Delta U = 0$</span>.</p> <p>In fact, if the mass starts at the equilibrium position of the <em>unstretched</em> spring, the mass will fall under the influence of gravity. This is because at the beginning the force exerted by the spring is zero, whereas the gravitational force is non-zero. As the mass falls, the spring stretches, and hence the spring force increases. At first, while <span class="math-container">$F_{\mathrm{spring}}$</span> is smaller than <span class="math-container">$F_{\mathrm{grav}}$</span>, the mass accelerates. Once the mass moves past the new equilibrium position, <span class="math-container">$F_{\mathrm{spring}} > F_{\mathrm{grav}}$</span>, so the mass decelerates and eventually comes to a stop, at which point it will start to move upward. In other words, the object undergoes oscillatory motion about the new equilibrium position. It's true that energy is conserved, but this includes the kinetic energy, i.e., <span class="math-container">$$ 0=\Delta E = \Delta K + \Delta U_{\mathrm{spring}}+ \Delta U_{\mathrm{grav}}\,. $$</span></p> <p>In order to get the object to start at the unstretched-spring equilibrium position and move to the actual equilibrium position, while having the initial and final speeds be zero, there must be some <em>other</em> force doing work on the system, hence changing its energy.</p>
|
Physics
|
|newtonian-mechanics|velocity|everyday-life|collision|speed|
|
How does a seatbelt help in a car crash?
|
<p>A seatbelt locks in place once it detects a deceleration (g-force) that indicates the car may be in a collision (you'll notice this when you brake hard or try to put on a seatbelt while you're turning sharply).</p> <p>You're right that you're going to need to slow down to 0 mph no matter what. The force on your body during that process can be estimated using <span class="math-container">$\Delta p = F \cdot \Delta t $</span> if you use a simple model of constant deceleration.</p> <p>You can slow down with the seatbelt, which will stretch even when it locks, or you can slow down by hitting the rigid dashboard, which has very little give.</p> <p>The seatbelt greatly reduces the maximum force by spreading the momentum change out over a longer time, and it also prevents that force from being applied to your head by the dashboard.</p>
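<p>To put rough numbers on this (my own illustrative estimate; the mass, speed and stopping times are assumed, not taken from the question): compare a belt that stretches the stopping time to about 0.2 s with a rigid dashboard impact lasting about 0.02 s.</p> <pre><code>m = 70.0      # kg, occupant mass (assumed)
dv = 15.0     # m/s, change in speed (about 54 km/h)

for label, dt in [("seatbelt, ~0.2 s stop", 0.2),
                  ("dashboard, ~0.02 s stop", 0.02)]:
    F = m * dv / dt           # average force from  delta p = F * delta t
    print(f"{label}: average force ~ {F / 1000:.1f} kN")
</code></pre> <p>The tenfold longer stopping time means a tenfold smaller average force, which is the whole point of the belt (and of crumple zones).</p>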
|
Physics
|
|optics|reflection|
|
Trapping light by total internal reflection
|
<p>Heading over to <a href="https://phet.colorado.edu/sims/html/bending-light/latest/bending-light_all.html" rel="nofollow noreferrer">PhET's Bending Light Simulator</a>, you can try out this experiment yourself. As it turns out, you cannot get the light to be infinitely contained in the glass if you shine light from the outside. But it is possible if the light source is within the glass, as is shown below.</p> <p><a href="https://i.stack.imgur.com/ef0cy.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ef0cy.jpg" alt="light not trapped if source outside circular glass" /></a> <a href="https://i.stack.imgur.com/b5xER.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/b5xER.jpg" alt="light trapped if source within circular glass" /></a></p> <p>But why is this true?</p> <p>To answer this question, let us first try to find out the necessary conditions for the light to have an angle of incidence equal to the critical angle at its second incidence (when it goes from glass to air).</p> <p>[I’ll be using “the first incidence of light” to refer to when the light goes from the air to the glass, and “the second incidence of light” to refer to when it goes from the glass to the air.]</p> <p>Have a look at the following image.</p> <p><a href="https://i.stack.imgur.com/s9F2P.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/s9F2P.jpg" alt="enter image description here" /></a></p> <p>First you must understand that the normals drawn at points B and C have the radii of the circle as their part, i.e., if extended, they intersect at the centre always. Why is this true? Because the normals are by definition perpendicular to the tangents to the circular glass drawn at those points — B and C. But it is known that the radius of a circle is the line segment that is perpendicular to a tangent drawn at any point on the circumference. Therefore, the normals must be extensions of the radii of the circle.</p> <p>If you didn’t get the above part, you may ignore it; just know that OB and OC are radii of the circle, hence they are equal. This would imply that triangle BOC is isosceles, and so ∠OBC = ∠OCB.</p> <p>Now we know by Snell’s law that <span class="math-container">$\sin{i} \propto \sin{r}$</span>. In other words, more the angle of incidence, more the angle of refraction, and vice versa, because the sine function increases from <span class="math-container">$0°$</span> to <span class="math-container">$90°$</span>, and those are the only angles we are considering here. Thus, if we look at the first incidence, to maximise <span class="math-container">$r_{1}$</span> (∠OBC), we need to maximise <span class="math-container">$i_{1}$</span> (∠EBA).</p> <p>But <span class="math-container">$i_{1}$</span> cannot be any bigger than <span class="math-container">$90°$</span>, because if it were greater, the light would basically have its source within the glass, and we know that the light does get infinitely contained if its source is in the glass.</p> <p>Now what is the angle of refraction <span class="math-container">$r_{1}$</span> corresponding to a <span class="math-container">$90°$</span> <span class="math-container">$i_{1}$</span>? Let’s find that out using Snell’s law! 
Assuming <span class="math-container">$1.5$</span> to be the refractive index (<span class="math-container">$\mu$</span>) of glass,</p> <p><span class="math-container">\begin{align} & \frac{sin(i)}{sin(r)} = \mu \\ & \implies \frac{sin(90°)}{sin(r_{1})} = 1.5 \\ & \implies \frac{1}{sin(r_{1})} = 1.5 \\ & \implies r_{1} = sin^{-1}(1/1.5) \\ & \implies r_{1} \approx 41.8103149° \end{align}</span></p> <p>[Which is exactly equal to the critical angle for glass and air, in accordance with the principle of reversibility of light.]</p> <p><span class="math-container">$\therefore ∠OBC = ∠OCB \approx 41.81°$</span></p> <p>But this wasn’t our true objective was it? We wanted the light to get totally internally reflected, but for that, <span class="math-container">$r_{2}$</span> must be greater than <span class="math-container">$90°$</span>. Hence <span class="math-container">$i_{2}$</span> must be greater than <span class="math-container">$41.81°$</span> (because <span class="math-container">$i \propto r$</span>, by Snell's law), which implies that <span class="math-container">$r_{1}$</span> must be greater than <span class="math-container">$41.81°$</span> (because <span class="math-container">$r_{1}$</span> = <span class="math-container">$r_{2}$</span>), which means that <span class="math-container">$i_{1}$</span> must be greater than <span class="math-container">$90°$</span> (again, because <span class="math-container">$i \propto r$</span>). That is impossible, unless the source be in the glass itself, thus proving our observations.</p>
|
Physics
|
|operators|conformal-field-theory|
|
Reading operators from Kac tables and performing operator product expansions with them
|
<p>In 2d the conformal algebra factorizes into two Virasoro algebras, left and right. So primary fields have two dimensions <span class="math-container">$\Delta,\bar\Delta$</span>. However, single-valuedness of correlation functions (and other assumptions) imply <span class="math-container">$\Delta-\bar\Delta \in \mathbb{Z}$</span>. A simple way to satisfy this is to consider diagonal fields i.e. <span class="math-container">$\Delta=\bar\Delta$</span>.</p> <p>A-series minimal models such as the Ising minimal model are diagonal, so all their fields are diagonal and characterized by their left = right conformal dimension. Or by their Kac indices <span class="math-container">$(m,n)$</span>, which are the same on the left and on the right. This does not mean that the fields factorize as products of objects that depend on <span class="math-container">$z$</span> and <span class="math-container">$\bar z$</span>.</p> <p>A relation such as <span class="math-container">$\sigma\times\sigma = I + \epsilon$</span> can have two meanings. It can be a statement about fusion products of representations of the Virasoro algebra. Or it can be a statement about OPEs of diagonal fields in the Ising model. In this context, the cross terms that you write are not present, as they involve fields that violate <span class="math-container">$\Delta-\bar\Delta\in\mathbb{Z}$</span>. If you run OPEs separately on the left and on the right you get all the terms that are allowed by conformal symmetry, but then some of them get eliminated by other constraints such as single-valuedness.</p>
|
Physics
|
|homework-and-exercises|classical-mechanics|scattering|scattering-cross-section|
|
Numerically Stable form of Scattering Angle
|
<p><span class="math-container">$$ \begin{align} \tag1\text{Let }r&=\frac{r_m}{1-\rho^2} \quad\text{so that}\quad \frac{\mathrm dr}{\mathrm d\rho}=\frac{2r_m\rho}{(1-\rho^2)^2}=\frac{2r^2\rho}{r_m}\\ \tag{3.96}\Theta(s)&=\pi-2\int_{r_m}^\infty \frac{s\,\mathrm dr}{r\sqrt{r^2\left[1-\frac{V(r)}E\right]-s^2}}\\ \tag2\therefore\qquad\Theta(s)&=\pi-4s\int_0^1\frac{\rho\,\mathrm d\rho}{\sqrt{r_m^2\left[1-\frac{V(r)}E\right]-s^2(1-\rho^2)^2}} \end {align} $$</span> That is, the question had a typo at this point by missing the square on the <span class="math-container">$(1-\rho^2)^2$</span> in the square root. If it had it correct, then it would connect to the next form of the integral because <span class="math-container">$(1-\rho^2)^2=1-2\rho^2+\rho^4$</span> is needed, with the fact that <span class="math-container">$\frac{s^2E}{r_m^2}=\frac{\ell^2}{2mr_m^2}=E-V(r_m)$</span></p>
|
Physics
|
|thermodynamics|energy|reversibility|
|
Clarification on the Use of $\frac{dS}{dE} = \frac{1}{T}$ vs. $\frac{dS}{dQ} = \frac{1}{T}$ in Thermodynamics
|
<p>Formula 1 should be written as the partial derivative of entropy with respect to internal energy at constant number of particles <span class="math-container">$N$</span> and volume <span class="math-container">$V$</span>, i.e.</p> <p><span class="math-container">$$\left(\frac{\partial S}{\partial E}\right)_{N,V}=\frac{1}{T}$$</span></p> <p>This is the thermodynamic definition of temperature.</p> <p>Formula 2 should be written as</p> <p><span class="math-container">$$dS=\frac{\delta Q_{rev}}{T}$$</span></p> <p>This is the definition of entropy change. Although the heat must be transferred reversibly in this formula, the entropy change it gives between two equilibrium states applies to any process connecting those states, reversible or irreversible.</p> <blockquote> <p>However, I can not understand this. For an adiabatic process <span class="math-container">$Q=0$</span>, <span class="math-container">$\Delta E=Q-W$</span>. Which implies <span class="math-container">$\Delta E=-W$</span>. Now if we use the first formula becomes <span class="math-container">$\frac{dS}{dW} = -\frac{1}{T}$</span> instead of <span class="math-container">$\frac{dS}{dQ} = \frac{1}{T}$</span>. I think it's a result of my understanding of the notation but I just couldn't figure out which part is wrong.</p> </blockquote> <p>There's nothing wrong, because the first formula is for a closed system at constant volume. Therefore, <span class="math-container">$W=0$</span>.</p> <blockquote> <p>Also, I saw that for a reversible work source, we need to have the conditions impermeable, adiabatic, and slow relaxation times. The first 2 are straightforward but I couldn't quite understand the slow relation times part.</p> </blockquote> <p>Callen doesn't say "slow" relaxation times. He says "short" relaxation times. "Short" means the system comes back into equilibrium quickly after being perturbed. That allows a process to be carried out more quickly and yet still be quasi-static, making the process more practical (not taking forever to carry out).</p> <p>Hope this helps.</p>
|
Physics
|
|general-relativity|coordinate-systems|tensor-calculus|gauge-invariance|diffeomorphism-invariance|
|
In general relativity, is gauge invariance the same as coordinate invariance?
|
<p>In the context of perturbation theory in general relativity, gauge transformations are something distinct from coordinate transformations. To make the distinction clear, let's introduce the notion of gauge dependence in a coordinate-independent way.</p> <p>Suppose we have two manifolds <span class="math-container">$M_1$</span> and <span class="math-container">$M_2$</span> and two tensor fields <span class="math-container">$T_1$</span> and <span class="math-container">$T_2$</span> (of the same rank and type) living on <span class="math-container">$M_1$</span> and <span class="math-container">$M_2$</span> respectively. Now, we want to know if <span class="math-container">$T_1$</span> and <span class="math-container">$T_2$</span> are similar. We cannot just directly compare them because they live in different mathematical spaces. The first thing we will need is a diffeomorphism <span class="math-container">$\phi: M_1\to M_2$</span>, that associates the points of <span class="math-container">$M_1$</span> to the points of <span class="math-container">$M_2$</span>. This map then induces a map between the tensor bundles on <span class="math-container">$M_1$</span> and <span class="math-container">$M_2$</span>, allowing us to construct the pulled-back <span class="math-container">$\phi_{*}T_2$</span> as a tensor field living on <span class="math-container">$M_1$</span>. Consequently, we can now study <span class="math-container">$\delta{T}= \phi_{*}T_2-T_1$</span> to say things about how similar the two are (or aren't).</p> <p>Now the construction of <span class="math-container">$\delta{T}$</span> depends on the choice of <span class="math-container">$\phi$</span>, and we could have chosen a different diffeomorphism <span class="math-container">$\phi'$</span>. In general, <span class="math-container">$\phi$</span> and <span class="math-container">$\phi'$</span> will differ by an automorphism <span class="math-container">$\psi: M_1\to M_1$</span> such that <span class="math-container">$\phi' = \phi\circ\psi$</span>. The value of <span class="math-container">$\delta{T}$</span> will consequently differ by <span class="math-container">$\psi_{*}T_1-T_1$</span>. This is the gauge freedom in determining the difference <span class="math-container">$\delta{T}$</span>.</p> <p>For infinitesimal automorphisms, <span class="math-container">$\psi$</span> is generated by a vector field <span class="math-container">$\xi$</span> and the freedom in <span class="math-container">$\delta{T}$</span> is given simply by the Lie derivative <span class="math-container">$\mathcal{L}_\xi T_1$</span>.</p> <p>In perturbation theory, you compare a perturbed spacetime (plus the tensors on it) <span class="math-container">$(M,g)$</span> to some background spacetime <span class="math-container">$(M^0,g^0)$</span>. The perturbed metric, e.g., is given by <span class="math-container">$h = \phi_{*}g - g^0$</span>, and is ambiguous up to gauge transformations <span class="math-container">$\mathcal{L}_\xi g^0$</span>.</p> <p>Now to get back to the statement in the paper mentioned in the OP. In that paper the authors consider perturbations around Minkowski space <span class="math-container">$(\mathbb{R}^4,\eta)$</span>. Minkowski space has a Weyl tensor <span class="math-container">$C_0$</span> that is identically zero. 
Consequently, <span class="math-container">$\mathcal{L}_\xi C_0 =0$</span>, and the Weyl tensor <span class="math-container">$C$</span> of the perturbed spacetime is therefore not ambiguous under (infinitesimal) gauge transformations. This is in contrast to <span class="math-container">$h$</span>, which is only determined up to gauge transformations <span class="math-container">$(\mathcal{L}_\xi \eta)_{\mu\nu} = \partial_{(\mu}\xi_{\nu)}$</span>.</p>
|
Physics
|
|quantum-field-theory|
|
In which systems does Planck's constant apply? Is everything thus quantized?
|
<p>Of course, at the classical or macroscopic level, not all things are quantized, e.g. the energy of a ball rolling downhill or the amount of soup a person can eat in the morning. However, at the fundamental level there are indeed many examples of quantization in nature: charge comes in integer multiples of <span class="math-container">$e$</span>, and the electromagnetic, weak and strong interactions have been quantized. However, the gravitational interaction, one of the four fundamental interactions found in nature, has not yet been successfully quantized, i.e. there is no complete quantum theory of gravity. The modern Einsteinian theory of gravity is a field-theoretic description of the geometry of space-time in relation to the matter-energy density, and no one yet understands it at the Planck scale of distance, or at correspondingly high energies.</p> <p>Good reading? There is much literature on the subject, and it would be easier to recommend something if the community knew your level of competence with mathematics and physics. Assuming you are a beginner, try reading George Gamow's <em>Thirty Years that Shook Physics</em> and Albert Einstein's <em>Relativity</em> for aged but good introductions to quantum and relativistic physics. For a more modern read that talks more about quantum gravity and related problems specifically, see Michio Kaku's <em>Beyond Einstein</em>.</p>
|
Physics
|
|thermodynamics|fluid-dynamics|rocket-science|home-experiment|applied-physics|
|
Okay, I know the risks. ( Amateur Rocket.)
|
<p><span class="math-container">$V_e$</span> is a property of the chosen propellant. In particular its <a href="https://en.wikipedia.org/wiki/Specific_impulse" rel="nofollow noreferrer">specific impulse</a>. For example the liquid-hydrogen liquid oxygen combination has a specific impulse of 450 seconds. This means that a thrust of 450 pounds weight is produced when the fuel is being burned at a rate of one pound mass per second.</p> <p>Incidently I think your thrust equation should be be <span class="math-container">$$ T= \dot m v_{e} \approx A(p_{chamber}- p_{outside}) $$</span> i.e there is an approximately-equal sign and not a sum of the two terms.</p>
|
Physics
|
|newtonian-mechanics|energy-conservation|drag|rocket-science|dissipation|
|
Rocket attached to a pendulum. How is energy conserved?
|
<p>If the pendulum is in equilibrium then the rocket motor does no work on the pendulum. It exerts a force on the pendulum, but because the pendulum is not moving, this force does no work on the pendulum. It is exactly as if the pendulum was held by a length of rope - the rope exerts force on the pendulum but does no work on it.</p> <p>The rocket motor, of course, does work by expelling its exhaust, but the energy that goes into the exhaust is initially seen as kinetic energy of the exhaust, and is eventually dissipated into the environment as sound and heat.</p> <p>Note that during the initial phase of the motion - as the rocket motor moves the pendulum from vertical to its new equilibrium position - the velocity of the exhaust is lower than in the equilibrium position. This is because the exhaust has a fixed velocity <em>relative to the rocket motor</em>, which is now moving. So in this initial phase the rocket motor does less work on the exhaust and does some work on the pendulum instead - and this energy is stored as the potential energy of the pendulum in its new equilibrium position.</p>
|
Physics
|
|harmonic-oscillator|eigenvalue|normal-modes|
|
Eigenvalues and Normal Modes in SHM
|
<p>This is equivalent to asking what happens when the eigenspace corresponding to <span class="math-container">$b_n$</span> is not one-dimensional. Say it has dimension <span class="math-container">$k$</span>; then we can form an orthonormal basis of this eigenspace to obtain <span class="math-container">$k$</span> orthonormal eigenvectors <span class="math-container">$A_1^n,\dots,A_k^n$</span>. So these vectors are mutually orthogonal, but also the eigenspace corresponding to <span class="math-container">$b_m$</span> is orthogonal to the eigenspace corresponding to <span class="math-container">$b_n$</span> if <span class="math-container">$b_n\neq b_m$</span> via the standard proof, regardless of the dimension of the eigenspaces.</p> <p>Edit: To see this last fact, let <span class="math-container">$A_1$</span> and <span class="math-container">$A_2$</span> be normalized eigenvectors of a symmetric matrix <span class="math-container">$S$</span> with eigenvalues <span class="math-container">$b_1\neq b_2$</span>. Then <span class="math-container">$$ b_1\langle A_2, A_1\rangle = \langle A_2, b_1A_1\rangle = \langle A_2, SA_1\rangle = \langle SA_2, A_1\rangle = \langle b_2A_2, A_1\rangle = \overline{b}_2\langle A_2, A_1\rangle. $$</span> Since <span class="math-container">$S$</span> is symmetric, <span class="math-container">$b_1$</span> and <span class="math-container">$b_2$</span> are real, so we have <span class="math-container">$b_1\langle A_2, A_1\rangle = b_2\langle A_2, A_1\rangle$</span>. However, <span class="math-container">$b_1\neq b_2$</span>, which implies that <span class="math-container">$\langle A_1,A_2\rangle=0$</span>. <span class="math-container">$\blacksquare$</span></p>
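<p>As a quick numerical illustration (my own addition): <code>numpy.linalg.eigh</code> returns an orthonormal set of eigenvectors for a symmetric matrix even when an eigenvalue is repeated, which is exactly the orthonormal basis of the degenerate eigenspace described above.</p> <pre><code>import numpy as np

# Symmetric matrix with a doubly degenerate eigenvalue: spectrum {2, 2, 4}
S = np.array([[3.0, 0.0, 1.0],
              [0.0, 2.0, 0.0],
              [1.0, 0.0, 3.0]])

vals, vecs = np.linalg.eigh(S)
print(vals)                                    # [2. 2. 4.]
print(np.allclose(vecs.T @ vecs, np.eye(3)))   # True: columns are orthonormal
</code></pre>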
|
Physics
|
|quantum-field-theory|black-holes|hawking-radiation|qft-in-curved-spacetime|
|
Does matter in the outside universe affect Hawking radiation?
|
<p>Hawking radiation emits all sorts of particles, including charged particles. Adding a big charge to the outside of the black hole would not alter, in principle, the particles emitted by the black hole, although it could alter their distribution: if the charge is positive, then electrons, for example, would be attracted to it while positrons would be repelled.</p> <p>Nevertheless, there is a separate effect that may come into play if the electromagnetic field is strong enough: the Schwinger effect. The <a href="https://en.wikipedia.org/wiki/Schwinger_effect" rel="nofollow noreferrer">Schwinger effect</a> is a different particle creation effect similar to the Hawking effect in which the presence of a strong enough electromagnetic field creates particle-antiparticle pairs out of the vacuum. I suppose a big enough charge could induce such a strong electromagnetic field so that the Schwinger effect becomes relevant and other particles are "pulled out" of the vacuum.</p>
|
Physics
|
|thermodynamics|statistical-mechanics|probability|physical-chemistry|
|
Understanding $\mathrm dP_x$ in the derivation of Maxwell-Boltzmann distribution
|
<p><span class="math-container">$P_x$</span> is a probability distribution of speed in the <span class="math-container">$x$</span> direction, If you integrate between <span class="math-container">$u=0$</span> and <span class="math-container">$u=\infty$</span> the result of the integral is 1. Where <span class="math-container">$P_x$</span> is a probability density with units of inverse velocity. <span class="math-container">$P_x$</span> is a function of <span class="math-container">$u$</span>, that is <span class="math-container">$P_x=f(u)$</span>.</p> <p>In this context, it is not true that <span class="math-container">$dP_x=(dP_x/du)du$</span>, but rather, <span class="math-container">$dP_x=P_x du$</span>, which is a very different thing.</p>
|
Physics
|
|electric-circuits|electric-fields|electric-current|
|
Electric field in a circuit when a battery is connected to it during transient phase and steady phase
|
<p>The force <span class="math-container">$F$</span> produced on an electric charge <span class="math-container">$Q$</span> in a circuit due to the presence of an electric field <span class="math-container">$E$</span> given by</p> <p><span class="math-container">$$F=EQ$$</span>.</p> <p>The relationship the electric field <span class="math-container">$E$</span> in a conductor and the voltage change <span class="math-container">$dV$</span> along its length <span class="math-container">$ds$</span> is given by the negative of the gradient of the voltage, or</p> <p><span class="math-container">$$E=-\frac{dV}{ds}$$</span></p> <p>The wire they refer to is one that theoretically has zero resistance. For such a wire <span class="math-container">$dV$</span> anywhere along its length is zero. Thus the electric field is zero and a charge experiences no electrical force.</p> <p>On the other hand, for a uniform resistor, the voltage across the resistor is, by Ohm's law, <span class="math-container">$V=IR$</span>. The electric field in the resistor is then the voltage difference between the terminals of the resistor divided by its length. Then a charge moving through the resistor will experience a force due to the electric field.</p> <p>It must be kept in mind however that, with the exception of superconductors, all wires have some resistance, though it is typically considered negligible compared to the resistance of circuit components. Thus there will be a voltage difference between any two points of the wire per Ohm's law, or <span class="math-container">$V=IR$</span>. So for the wire having resistance there will be a voltage gradient and thus an electric field exerting force on the charges moving through the resistor.</p> <p>Hope this helps.</p>
|
Physics
|
|electromagnetism|magnetic-fields|electricity|conductors|
|
Would this simple design function as an electromagnet?
|
<p>Yes, it does seem like this would be a simple way to create an electromagnet, since the usual concept of an electromagnet requires a magnetic field generated by a current carrying wire and an iron core of some sort. It would probably not be the most efficient design, but it would work and it would be quite simple.</p>
|
Physics
|
|quantum-mechanics|atoms|hydrogen|positronium|
|
Positronium radius more than expected
|
<p>Let me firstly point out that according to your assumptions <span class="math-container">$R$</span> is still only half the distance between the particles, so the result is not wrong by a factor 2 but by a factor 4.</p> <p>Secondly (see the edit below before reading this paragraph): This is mathematically exactly what you would expect when dealing with your semi-classical model: increase of particle distance at the same distance to the center of rotation by a factor 2 → decrease of Coulomb attraction (equal to the centripetal force) by a factor 4 → decrease of particle speed at any stable classical radius by 2 → increase of de Broglie wavelength by 2 → increase of distance to the rotation center according to Bohr's model by 2 → increase of particle distance by 4.</p> <p>Thirdly: while mathematically correct, this calculation is physically pointless if the goal is, as stated in your question, to calculate properties of a quantum mechanical system like positronium. The correct formula to start from when calculating properties of positronium would be the formula for hydrogen-like atoms in relative coordinates:</p> <p><span class="math-container">$$\left( -\frac{\hbar^2}{2 \mu} \nabla^2 - \frac{e^2}{4 \pi \varepsilon_0 r} \right) \psi (r, \theta, \varphi) = E \psi (r, \theta, \varphi)$$</span> (<a href="https://en.wikipedia.org/wiki/Hydrogen_atom" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Hydrogen_atom</a>)</p> <p>with the inter-particle distance <span class="math-container">$r$</span> and the reduced mass <span class="math-container">$\mu$</span>. Since you will use the identical formula for hydrogen and positronium, you will also get the identical expression for your Bohr radius, defined as the radius with maximum probability of the inter-particle distance in the ground state (note that this is not the expectation value). So only the value of your reduced mass will be different, which means that your first formula is correct.</p> <p>Finally: The problem here is that Bohr's model was falsified long ago, and while it does give surprisingly accurate results in some special cases, you should never try to extend it for the sake of calculating properties of other quantum mechanical systems, because those results won't tell you anything: you would still have to confirm that they agree with standard quantum theory or even more elaborate treatments. Today it's really just a toy model. In general you should never use "semi-classical" theories unless you are sure that you are operating in one of the limits in which they are correct.</p> <p>Edit: After looking at this again I realized that the result of my second paragraph is not the same as the result of your calculation, as it predicts a factor 4 compared to hydrogen while your calculation predicts a factor 4 compared to the true positronium result. This is due to a thinking error on my side from neglecting the dependence of <span class="math-container">$v(R)$</span> when deriving the change of wavelength. I probably should stop trying to visualize stuff like this with these fuzzy arrow diagrams and use equations instead, as that is what they are meant for. Anyway, I still believe that your result is the mathematically correct prediction of your physically wrong model.</p> <p>Secondly, I just realized that you are using a quantum number in your definition of the Bohr radius, which is somewhat incompatible with the definition I use above. The maximum of the probability density in the ground-state hydrogen atom is the only real physical meaning of the Bohr radius I am aware of. I therefore would advise against using higher-order radii calculated with Bohr's model for pretty much anything. Feel free to comment if anyone knows any use cases for those.</p>
|
Physics
|
|quantum-mechanics|quantum-information|open-quantum-systems|
|
Lindblad evolution as continuum limit of a discrete process coupling system and environment
|
<p>You are correct: if you reset the state of system and environment to a product state after each infinitesimal time step, you obtain a unitary time evolution. Instead, the derivation (due to <a href="https://www.sciencedirect.com/science/article/abs/pii/S0301010401003305" rel="nofollow noreferrer">Lidar et al.</a>, see also <a href="https://journals.aps.org/pra/abstract/10.1103/PhysRevA.88.012103" rel="nofollow noreferrer">Majenz et al.</a>) goes as follows.</p> <ul> <li>System and environment start in a product state at time <span class="math-container">$t=0$</span>. They evolve together for a finite time <span class="math-container">$\tau$</span>. Without any approximation, one can derive that <span class="math-container">$\rho(\tau) - \rho(0) = \tau\, \mathcal L_\tau[ \rho(0) ]$</span> for <em>any</em> <span class="math-container">$\tau$</span>. Here, <span class="math-container">$\rho$</span> is the reduced state of the system and <span class="math-container">$\mathcal L_\tau$</span> a superoperator in Lindblad form.</li> <li>We now fix a <span class="math-container">$\tau$</span> such that it is much larger than the bath's relaxation time-scale and much smaller than the characteristic system time scale. The first approximation that we make is to reset the state to a product state at the (discrete!) time points <span class="math-container">$t_n = n\tau$</span>. Therefore, <span class="math-container">$\rho(n\tau + \tau) - \rho(n\tau) = \tau\, \mathcal L_\tau[ \rho(n\tau) ]$</span>.</li> <li>The second approximation is to consider the coarse-grained state <span class="math-container">$\tilde\rho(t) = \frac 1 \tau \int_t^{t+\tau} \rho(t') dt'$</span>. It does not change much on the time-scale <span class="math-container">$\tau$</span> and therefore approximately satisfies <span class="math-container">$\partial_t \tilde\rho(t) = \mathcal L_\tau[ \tilde\rho(t) ]$</span> for all times.</li> </ul> <p>So this is explicitly not a continuum limit of discrete coupling / reset processes, but a coarse-graining.</p>
|
Physics
|
|optics|visible-light|reflection|
|
Height of Mirror Required
|
<p>You will be able to understand it better if you imagine that the mirror forms a reflection of every object present in front of its reflecting surface.</p> <p>That reflection is only visible to you when your field of view allows light from it to reach your eye.</p> <p><a href="https://i.stack.imgur.com/5mGyY.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/5mGyY.png" alt="https://commons.wikimedia.org/wiki/File:Plane_mirror.png#/media/File:Plane_mirror.png" /></a></p> <p>In this diagram, object A forms a reflection A'. The reflection is visible only when the person's field of view covers it.</p> <p>Note that even if the mirror were smaller and did not extend beneath A, a reflection would still be formed; what you could see of it would be limited by which light rays can reach your eye after reflecting off the mirror at the appropriate angle.</p>
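<p>For the common special case in the question title, seeing your own full reflection in a vertical plane mirror, the same ray construction gives a definite minimum size. Taking, for illustration, a person of height <span class="math-container">$h$</span> with eyes at height <span class="math-container">$e$</span>: a ray from the top of the head reaches the eye after reflecting at height <span class="math-container">$\tfrac{h+e}{2}$</span>, and a ray from the feet after reflecting at height <span class="math-container">$\tfrac{e}{2}$</span>, so the mirror only needs to span <span class="math-container">$$\frac{h+e}{2}-\frac{e}{2}=\frac{h}{2},$$</span> i.e. half the person's height, independently of the distance to the mirror.</p>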
|
Physics
|
|quantum-field-theory|renormalization|feynman-diagrams|perturbation-theory|effective-field-theory|
|
Why are the corrections to the effective Lagrangian (Wilsonian renormalization) given by connected diagrams only?
|
<ol> <li><p>The <a href="https://www.google.com/search?as_q=wilsonian+effective+action" rel="nofollow noreferrer">Wilsonian effective action</a> is defined as <span class="math-container">$$\begin{align} \exp&\left\{-\frac{1}{\hbar}W_c[J^H,\phi_L] \right\} \cr ~:=~& \int_{\Lambda_L\leq |k|\leq \Lambda_H} \! {\cal D}\frac{\phi_H}{\sqrt{\hbar}}~\exp\left\{ \frac{1}{\hbar} \left(-S[\phi_L+\phi_H]+J^H\phi_H\right)\right\},\end{align}\tag{W}$$</span> cf. eqs. (12.5) & (12.6) in Ref. 1.</p> </li> <li><p>The right-hand side of eq. (W) has an interpretation as the sum of <em>all</em> Feynman diagrams of heavy/high modes <span class="math-container">$\phi_H$</span> using arguments similar to my Phys.SE answer <a href="https://physics.stackexchange.com/a/804375/2451">here</a>.</p> </li> <li><p>The Wilsonian effective action <span class="math-container">$W_c[J^H,\phi_L]$</span> consists of <em>connected</em> Feynman diagrams of <span class="math-container">$\phi_H$</span>, due to the <a href="https://en.wikipedia.org/wiki/Feynman_diagram#Connected_diagrams:_linked-cluster_theorem" rel="nofollow noreferrer">linked cluster theorem</a>, cf. e.g. <a href="https://physics.stackexchange.com/q/324252/2451">this</a> Phys.SE post.</p> </li> </ol> <p>References:</p> <ol> <li>M.E. Peskin & D.V. Schroeder, <em>An Intro to QFT,</em> 1995; section 12.1.</li> </ol>
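<p>Schematically (suppressing normalizations and sign conventions), the linked cluster theorem used in point 3 states that the sum of <em>all</em> diagrams exponentiates the sum of the connected ones, <span class="math-container">$$\sum(\text{all diagrams})~=~\exp\Big[\sum(\text{connected diagrams})\Big],$$</span> so taking the logarithm of the right-hand side of eq. (W) leaves precisely the connected diagrams of <span class="math-container">$\phi_H$</span> in <span class="math-container">$W_c[J^H,\phi_L]$</span>.</p>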
|
Physics
|
|thermodynamics|solid-state-physics|
|
Coefficient of thermal expansion on cooling and heating
|
<p>The problem is that the defining equation you started from is inaccurate. The exact definition is <span class="math-container">$$\alpha=\frac{1}{L}\frac{dL}{dT}.$$</span> Assuming <span class="math-container">$\alpha$</span> is constant over the temperature range, integrating from <span class="math-container">$T_1$</span> to <span class="math-container">$T_2$</span> gives <span class="math-container">$\ln(L_2/L_1)=\alpha(T_2-T_1)$</span>, so <span class="math-container">$$\frac{L_2}{L_1}=e^{\alpha(T_2-T_1)}$$</span> and <span class="math-container">$$\frac{L_1}{L_2}=e^{\alpha(T_1-T_2)}.$$</span> Heating and then cooling through the same temperature interval therefore returns exactly the original length.</p>
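<p>A minimal numerical check of this point, with an illustrative value of <span class="math-container">$\alpha$</span> (roughly that of aluminium) chosen only for the example:</p>
<pre><code>import numpy as np

alpha = 2.3e-5        # 1/K, roughly aluminium (illustrative value)
L1, dT = 1.0, 100.0   # initial length in metres, temperature change in kelvin

# exact (exponential) form: heat by dT, then cool by dT
L2_exact = L1 * np.exp(alpha * dT)
back_exact = L2_exact * np.exp(-alpha * dT)

# linearised form: L2 = L1*(1 + alpha*dT), then back = L2*(1 - alpha*dT)
L2_lin = L1 * (1 + alpha * dT)
back_lin = L2_lin * (1 - alpha * dT)

print("exponential form, after heating and cooling:", back_exact)   # exactly L1
print("linearised form,  after heating and cooling:", back_lin)     # off by ~ L1*(alpha*dT)**2
</code></pre>
<p>With the exponential form, heating by <span class="math-container">$\Delta T$</span> and then cooling by the same <span class="math-container">$\Delta T$</span> returns exactly the original length; the familiar linearised formula misses this by a relative error of order <span class="math-container">$(\alpha\,\Delta T)^2$</span>.</p>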
|
Physics
|
|electromagnetic-radiation|speed-of-light|refraction|maxwell-equations|dielectric|
|
Why is $c = \frac{1}{\sqrt{\mu_0 \epsilon_0}}$?
|
<p>A wave travelling with speed <span class="math-container">$v$</span> satisfies the equation <span class="math-container">$$ \partial_t^2 \phi (t,\vec{x}) = v^2 \nabla^2 \phi (t,\vec{x}) . $$</span> Comparing this to the wave equation that follows from Maxwell's equations in vacuum (sketched below), we see that the speed at which light travels is given by <span class="math-container">$$ c = \frac{1}{\sqrt{\mu_0 \epsilon_0}}. $$</span> That this same speed is also the maximum allowed speed of any signal was Einstein's insight; it does not follow automatically from Maxwell's equations.</p>
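<p>For completeness, here is a sketch of that comparison in vacuum (no charges or currents). From <span class="math-container">$\nabla\times\vec E=-\partial_t\vec B$</span> and <span class="math-container">$\nabla\times\vec B=\mu_0\epsilon_0\,\partial_t\vec E$</span>, together with <span class="math-container">$\nabla\cdot\vec E=0$</span>, one finds <span class="math-container">$$\nabla\times(\nabla\times\vec E)=\nabla(\nabla\cdot\vec E)-\nabla^2\vec E=-\nabla^2\vec E=-\partial_t(\nabla\times\vec B)=-\mu_0\epsilon_0\,\partial_t^2\vec E,$$</span> so <span class="math-container">$\partial_t^2\vec E=\frac{1}{\mu_0\epsilon_0}\nabla^2\vec E$</span>, which has exactly the form above with <span class="math-container">$v^2=1/(\mu_0\epsilon_0)$</span>.</p>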
|
Physics
|
|classical-mechanics|work|
|
Should the work done on a rigid body be calculated with respect to the point of application, or the center of mass?
|
<blockquote> <p>For example: <a href="https://physics.stackexchange.com/questions/557754/work-done-on-rigid-body-vs-particle">Work done on rigid body vs particle</a>. Here the latter is mentioned, and elsewhere the former approach is mentioned.</p> </blockquote> <p>In user258881's answer to the linked question there are two terms:</p> <blockquote> <p><span class="math-container">$$W=\underbrace{\int \mathbf F\cdot \mathrm d \mathbf r}_{\text{work done by the force}}+\underbrace{\int \boldsymbol{\tau}\cdot \mathrm d \boldsymbol{\theta}}_{\text{work done by the torque}}$$</span></p> </blockquote> <p>The "work done by the force" isn't well-named. This term accounts for the work that goes into increasing the translational kinetic energy of the body, but it doesn't really account for all the work done by the applied force: it is the force integrated along the displacement of the center of mass, not along the path of the point of application.</p> <p>The "work done by the torque" accounts for the work that goes into increasing the rotational kinetic energy of the body. This is the contribution of the component of the force acting perpendicular to the vector from the center of mass to the point of application.</p> <p>If you sum these up, you get the total work done by the force. This is the same as your first method of integrating over the curve travelled by the point of application of the force (see the numerical sketch below).</p> <p>The issue is that "the force" and "the torque", as named by user258881, aren't two separate things. In your conceptualization of the problem, they are both aspects of the same applied force. The main difference is that you are imagining a body floating in space with only one force acting on it, so you can identify "the point of application of the force" as a single point. User258881 was considering a body acted on by multiple forces, so there is no single "point of application", and it is more convenient to first separate each force into a translational contribution and a torque contribution, and sum these up separately before integrating to find the total work done.</p>
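<p>A minimal numerical sketch of this bookkeeping (the mass, moment of inertia, force and offset below are arbitrary illustrative values): a free rigid body in 2D with a single constant force applied at a body-fixed point away from the center of mass. The work evaluated along the path of the point of application equals the sum of the "translational" and "torque" terms, and both match the change in kinetic energy up to the small error of the explicit integrator:</p>
<pre><code>import numpy as np

m, I = 2.0, 0.5                   # mass and moment of inertia (illustrative)
F = np.array([1.0, 0.0])          # constant force in the lab frame
r_body = np.array([0.0, 0.3])     # application point, fixed in the body frame

dt, steps = 1e-4, 20000
v = np.zeros(2)                   # centre-of-mass velocity
theta, omega = 0.0, 0.0           # orientation and angular velocity

W_point = W_cm = W_torque = 0.0
for _ in range(steps):
    c, s = np.cos(theta), np.sin(theta)
    r = np.array([c * r_body[0] - s * r_body[1],
                  s * r_body[0] + c * r_body[1]])   # offset from CM to application point, lab frame
    tau = r[0] * F[1] - r[1] * F[0]                 # z-component of r x F

    v_point = v + omega * np.array([-r[1], r[0]])   # velocity of the application point
    W_point  += F @ v_point * dt                    # integral of F . dr at the application point
    W_cm     += F @ v * dt                          # integral of F . dr_cm ("translational" term)
    W_torque += tau * omega * dt                    # integral of tau dtheta ("torque" term)

    v += (F / m) * dt                               # Newton's second law for the CM
    omega += (tau / I) * dt                         # rotational equation of motion about the CM
    theta += omega * dt

KE = 0.5 * m * (v @ v) + 0.5 * I * omega**2
print("work along path of application point:", W_point)
print("work through CM + work by torque    :", W_cm + W_torque)
print("change in kinetic energy            :", KE)
</code></pre>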
|