
Zeeman Splitting

Zeeman Splitting is a phenomenon observed in atomic physics where spectral lines are split into multiple components in the presence of a magnetic field. This effect occurs due to the interaction between the magnetic field and the magnetic dipole moment associated with the angular momentum of electrons in an atom. When an external magnetic field is applied, the energy levels of the atomic states are shifted, leading to the splitting of the spectral lines.

The energy shift can be described by the equation:

$\Delta E = \mu_B \, B \, m_j$

where $\Delta E$ is the energy shift, $\mu_B$ is the Bohr magneton, $B$ is the magnetic field strength, and $m_j$ is the magnetic quantum number. The resulting pattern falls into two main types: the normal Zeeman effect, in which each line splits into a triplet, and the anomalous Zeeman effect, which can involve more complex splitting patterns. This phenomenon is crucial for various applications, including magnetic resonance imaging (MRI) and the study of stellar atmospheres.
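
As a quick numerical illustration of the formula above, the sketch below evaluates the shift for a few values of $m_j$; the 1 T field strength is an arbitrary example value, not taken from the text.

```python
# Zeeman energy shift: Delta E = mu_B * B * m_j
MU_B = 9.274e-24  # Bohr magneton in J/T

def zeeman_shift(B_tesla: float, m_j: float) -> float:
    """Energy shift in joules for a level with magnetic quantum number m_j."""
    return MU_B * B_tesla * m_j

B = 1.0  # magnetic field in tesla (illustrative value)
for m_j in (-1, 0, 1):
    print(f"m_j = {m_j:+d}: Delta E = {zeeman_shift(B, m_j):.3e} J")
```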


Bayes' Theorem

Bayes' Theorem is a fundamental concept in probability theory that describes how to update the probability of a hypothesis based on new evidence. It mathematically expresses the idea of conditional probability, showing how the probability $P(H \mid E)$ of a hypothesis $H$ given an event $E$ can be calculated using the formula:

$P(H \mid E) = \frac{P(E \mid H) \cdot P(H)}{P(E)}$

In this equation:

  • $P(H \mid E)$ is the posterior probability, the updated probability of the hypothesis after considering the evidence.
  • $P(E \mid H)$ is the likelihood, the probability of observing the evidence given that the hypothesis is true.
  • $P(H)$ is the prior probability, the initial probability of the hypothesis before considering the evidence.
  • $P(E)$ is the marginal likelihood, the total probability of the evidence under all possible hypotheses.

Bayes' Theorem is widely used in various fields such as statistics, machine learning, and medical diagnosis, allowing for a rigorous method to refine predictions as new data becomes available.
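
A minimal sketch of the formula in code, using a hypothetical diagnostic-test scenario; the prevalence, sensitivity, and false-positive rate below are illustrative numbers, not taken from the text.

```python
def posterior(prior: float, likelihood: float, evidence: float) -> float:
    """Bayes' theorem: P(H|E) = P(E|H) * P(H) / P(E)."""
    return likelihood * prior / evidence

# Hypothetical diagnostic test (all numbers illustrative).
p_disease = 0.01              # prior P(H): disease prevalence
p_pos_given_disease = 0.95    # likelihood P(E|H): test sensitivity
p_pos_given_healthy = 0.05    # false-positive rate P(E|not H)

# Marginal likelihood P(E) via the law of total probability.
p_pos = (p_pos_given_disease * p_disease
         + p_pos_given_healthy * (1 - p_disease))

print(f"P(disease | positive) = {posterior(p_disease, p_pos_given_disease, p_pos):.3f}")
# ~0.161: even a fairly accurate test gives a modest posterior when the prior is low.
```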

Hamming Bound

The Hamming Bound is a fundamental concept in coding theory that establishes a limit on the number of codewords in a block code, given its parameters. It states that for a binary code of length $n$ that can correct up to $t$ errors, the total number of distinct codewords must satisfy the inequality:

$M \cdot \sum_{i=0}^{t} \binom{n}{i} \leq 2^n$

where $M$ is the number of codewords in the code and $\binom{n}{i}$ is the binomial coefficient, counting the ways to choose $i$ positions out of $n$. The bound ensures that the Hamming spheres of radius $t$ around the codewords do not overlap, which preserves unique decodability. A code that meets the bound with equality is called a perfect code, meaning it is optimal in terms of error-correction capability for the given parameters.
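
A short check of the inequality in code; the binary (7, 4) Hamming code used below is a standard example that meets the bound with equality.

```python
from math import comb

def satisfies_hamming_bound(M: int, n: int, t: int) -> bool:
    """Check M * sum_{i=0}^{t} C(n, i) <= 2**n for a binary code."""
    sphere_size = sum(comb(n, i) for i in range(t + 1))
    return M * sphere_size <= 2 ** n

# Binary (7, 4) Hamming code: n = 7, M = 2**4 = 16 codewords, corrects t = 1 error.
# It meets the bound with equality (16 * 8 = 128 = 2**7), i.e. it is a perfect code.
print(satisfies_hamming_bound(16, 7, 1))  # True
print(satisfies_hamming_bound(32, 7, 1))  # False: too many codewords for t = 1
```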

Solow Growth Model Assumptions

The Solow Growth Model is based on several key assumptions that help to explain long-term economic growth. First, it assumes a production function with constant returns to scale, typically written as $Y = F(K, L)$, where $Y$ is output, $K$ is capital, and $L$ is labor. The model also assumes diminishing marginal returns to each factor: as more capital is added to a fixed amount of labor, each additional unit of capital yields less additional output.

Another important assumption is the exogenous nature of technological progress, which is regarded as a key driver of sustained economic growth. This implies that advancements in technology occur independently of the economic system. Additionally, the model operates under the premise of a closed economy without government intervention, ensuring that savings are equal to investment. Lastly, it assumes that the population grows at a constant rate, influencing both labor supply and the dynamics of capital accumulation.
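
The production-function assumptions can be made concrete with a Cobb-Douglas example; note that the Solow model itself does not prescribe this functional form, and it is used below only as a common example that satisfies constant returns to scale and diminishing marginal returns.

```python
ALPHA = 0.3  # illustrative capital share

def output(K: float, L: float) -> float:
    """Cobb-Douglas production function Y = K**alpha * L**(1 - alpha)."""
    return K ** ALPHA * L ** (1 - ALPHA)

K, L = 100.0, 50.0

# Constant returns to scale: doubling both inputs doubles output.
print(output(2 * K, 2 * L) / output(K, L))        # 2.0

# Diminishing returns to capital: the marginal product falls as K grows.
print(output(K + 1, L) - output(K, L))            # marginal product near K = 100
print(output(2 * K + 1, L) - output(2 * K, L))    # smaller near K = 200
```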

Bragg Grating Reflectivity

Bragg Grating Reflectivity refers to the ability of a Bragg grating to reflect specific wavelengths of light based on its periodic structure. A Bragg grating is formed by periodically varying the refractive index of a medium, such as optical fibers or semiconductor waveguides. The condition for constructive interference, which results in maximum reflectivity, is given by the Bragg condition:

$\lambda_B = 2 n \Lambda$

where $\lambda_B$ is the Bragg wavelength (the wavelength that is strongly reflected), $n$ is the effective refractive index of the medium, and $\Lambda$ is the grating period. When light at this wavelength encounters the grating, it is reflected back, while other wavelengths are transmitted or diffracted. The reflectivity of the grating can be enhanced by increasing the modulation depth of the refractive-index change or by optimizing the grating length, making Bragg gratings essential in applications such as optical filters, sensors, and lasers.
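
A minimal numerical sketch of the Bragg condition; the effective index and grating period below are typical illustrative values for a fiber Bragg grating, not taken from the text.

```python
def bragg_wavelength(n_eff: float, period_nm: float) -> float:
    """Bragg condition: lambda_B = 2 * n_eff * Lambda (result in nm)."""
    return 2.0 * n_eff * period_nm

n_eff = 1.45       # effective refractive index (illustrative)
period_nm = 534.5  # grating period in nm (illustrative)
print(f"lambda_B = {bragg_wavelength(n_eff, period_nm):.1f} nm")  # ~1550 nm
```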

Theta Function

The Theta Function is a special mathematical function that plays a significant role in various fields such as complex analysis, number theory, and mathematical physics. It is commonly defined in terms of its series expansion and can be denoted as $\theta(z, \tau)$, where $z$ is a complex variable and $\tau$ is a complex parameter. The function is typically expressed using the series:

$\theta(z, \tau) = \sum_{n=-\infty}^{\infty} e^{\pi i n^2 \tau} \, e^{2 \pi i n z}$

This series converges for $\tau$ in the upper half-plane, making the Theta Function useful in the study of elliptic functions and modular forms. Key properties of the Theta Function include its transformation behavior under modular transformations and its connection to the solutions of certain differential equations. Additionally, its series expansions are closely related to generating functions for partitions, making it a valuable tool in combinatorial mathematics.
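
The series can be evaluated numerically by truncating the sum; the sketch below does so, with the truncation limit and the sample point chosen arbitrarily for illustration.

```python
import cmath

def theta(z: complex, tau: complex, n_max: int = 50) -> complex:
    """Truncated theta series: sum over n = -n_max..n_max of
    exp(pi*i*n**2*tau) * exp(2*pi*i*n*z). Requires Im(tau) > 0."""
    if tau.imag <= 0:
        raise ValueError("tau must lie in the upper half-plane")
    return sum(
        cmath.exp(cmath.pi * 1j * n * n * tau + 2 * cmath.pi * 1j * n * z)
        for n in range(-n_max, n_max + 1)
    )

# Example evaluation at an arbitrary point with Im(tau) > 0; terms decay like
# exp(-pi * n**2 * Im(tau)), so the truncated sum converges quickly.
print(theta(0.25 + 0.1j, 0.3 + 1.0j))
```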

Krylov Subspace

The Krylov subspace is a fundamental concept in numerical linear algebra, particularly useful for solving large systems of linear equations and eigenvalue problems. Given a square matrix $A$ and a vector $b$, the $k$-th Krylov subspace is defined as:

$K_k(A, b) = \text{span}\{ b, Ab, A^2 b, \ldots, A^{k-1} b \}$

This subspace encapsulates the behavior of the matrix $A$ as it acts on the vector $b$ through repeated application. Krylov subspaces are crucial in iterative methods such as the Conjugate Gradient and GMRES (Generalized Minimal Residual) methods, as they allow for the approximation of solutions in a lower-dimensional space, which significantly reduces computational costs. By focusing on these subspaces, one can achieve effective convergence properties while maintaining numerical stability, making them a powerful tool in scientific computing and engineering applications.
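
As an illustration, the sketch below builds an orthonormal basis of the Krylov subspace by repeated multiplication with $A$ followed by Gram-Schmidt orthogonalization (essentially the Arnoldi process without keeping the Hessenberg matrix); the matrix and vector are arbitrary random examples.

```python
import numpy as np

def krylov_basis(A: np.ndarray, b: np.ndarray, k: int) -> np.ndarray:
    """Orthonormal basis whose columns span K_k(A, b) = span{b, Ab, ..., A^(k-1) b}."""
    n = b.shape[0]
    Q = np.zeros((n, k))
    Q[:, 0] = b / np.linalg.norm(b)
    for j in range(1, k):
        w = A @ Q[:, j - 1]
        for i in range(j):                 # modified Gram-Schmidt step
            w -= (Q[:, i] @ w) * Q[:, i]
        norm = np.linalg.norm(w)
        if norm < 1e-12:                   # subspace became A-invariant; stop early
            return Q[:, :j]
        Q[:, j] = w / norm
    return Q

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6))
b = rng.standard_normal(6)
Q = krylov_basis(A, b, 4)
print(Q.shape)                                    # (6, 4)
print(np.allclose(Q.T @ Q, np.eye(Q.shape[1])))   # columns are orthonormal
```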