I want to get a personal project done over the course of my gap year, and few ideas appeal to me more than toying around with computational quantum mechanics. I recently coded a Schrodinger-Poisson self-consistent field solver as part of a class project, and I am eager to use similar methods to simulate more complex structures. The solver I wrote was for a collection of electrons in a spherical potential well, which is a very simple quantum-mechanical problem compared with electrons in a Coulomb potential.
An Over-Simplified History of Quantum Mechanics
Before I can get into the details of Hartree-Fock, I need to first explain what problem it is trying to solve. So, a quick history lesson. By the late 19th century, everyone thought we basically knew how the world worked. We had Newtonian mechanics to explain forces and momentum, Newtonian gravity to explain the motion of the planets, a detailed theory of electricity and magnetism brought to us by Coulomb, Gauss, Ampere, Faraday, Maxwell, and Lorentz, and thermodynamic theories about energy and heat brought to us by Leibniz, du Chatelet, Watt, Carnot, Clausius, Kelvin, and Boltzmann. To round all of this out, we had well-developed and mathematically rigorous reformulations of the mechanics Newton founded, brought to us by Euler, Lagrange, and Hamilton (not the founding father). Physicists at the time thought, “Well, we have a detailed theory of how electromagnetism works, we have a good explanation of light, and we have a good explanation of the nature of heat. So, we should be able to combine the two and explain why hot things glow the way they do. Right?”
Yeah, no. When they tried to do this, they ended up predicting that everything in the universe should radiate unbounded amounts of energy at ultraviolet and shorter wavelengths (the so-called ultraviolet catastrophe), which is simply not the case.
Then comes Max Planck. He comes along and says, “Well, let’s assume that light can only be emitted in discrete quantities of energy which get bigger as the wavelength of the light decreases.” And it works. But now everyone’s wondering why it works, and physicists spend the next thirty years figuring that out. Einstein says, “Well, that explains the photoelectric effect: these packets of light have to be at a certain energy in order to eject electrons from their atoms.” Niels Bohr comes along and says that the electrons’ orbits around the nucleus must be quantized, which explains why atoms only absorb and emit certain wavelengths when excited (thus explaining the mystery of why spectral lines exist). He further conjectures the same thing regarding angular momentum (because quantized electron orbits result in discrete levels of angular momentum for the electrons), leading to the Stern-Gerlach experiment, which tested the discretization of atomic magnetic moments that should result from this. Louis de Broglie comes along and says, “Well, if light is a wave that comes in discrete packets, maybe all these other things that come in discrete packets (like, for instance, electrons) are waves too.” And then Heisenberg and Schrodinger take all this and craft their mathematical formulations of quantum mechanics, Paul Dirac makes their formulation compatible with special relativity and then builds a quantum formulation of electromagnetic theory, and that finally caps the initial burst of discoveries set off by asking how light can be both a particle and a wave in the first place.
From the Schrodinger Equation to Hartree-Fock
When Schrodinger published his formulation of quantum mechanics in 1926, this is the equation he came up with:
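$$ i\hbar\,\frac{\partial \Psi}{\partial t} = \hat{H}\Psi = \left[-\frac{\hbar^2}{2m}\nabla^2 + V(\mathbf{r})\right]\Psi $$

Here $\hat{H}$ is the Hamiltonian operator, $m$ is the particle’s mass, and $V(\mathbf{r})$ is the potential energy it sits in.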
The point of this equation is to solve for Ψ, which gives us a wave function. Various operators, such as the momentum and position operators, can give us information about various observables. Of particular interest is the energy operator, defined as:
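$$ \hat{E} = i\hbar\,\frac{\partial}{\partial t} $$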
If our wave function Ψ describes a quantum state with a perfectly well-defined energy, then it should be an eigenfunction of the energy operator:
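$$ \hat{E}\,\Psi = E\,\Psi $$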
where E (no hat) is the energy of the system described by the wavefunction.
One thing that this tells us about the wavefunction is that its time dependence can be separated out into a complex exponential term, like so:
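$$ \Psi(\mathbf{r}, t) = \psi(\mathbf{r})\,e^{-iEt/\hbar} $$

where $\psi(\mathbf{r})$ depends only on position.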
This gets us to the time-independent Schrodinger equation:
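$$ \left[-\frac{\hbar^2}{2m}\nabla^2 + V(\mathbf{r})\right]\psi(\mathbf{r}) = E\,\psi(\mathbf{r}) $$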
He then solved the time-independent equation for a hydrogen atom, with the following potential energy:
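$$ V(r) = -\frac{e^2}{4\pi\varepsilon_0\,r} $$

(written here in SI units, with $e$ the elementary charge and $r$ the electron’s distance from the nucleus),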
and he got energy eigenvalues that were perfectly in line with the observed energy levels of the hydrogen atom.
Things get more complicated when dealing with multi-electron systems. One reason is that the wavefunction is now a function of multiple position vectors:
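$$ \Psi = \Psi(\mathbf{r}_1, \mathbf{r}_2, \ldots, \mathbf{r}_N) $$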
Furthermore, we now have to account for the mutual electrostatic repulsion of the electrons, as well as the kinetic energies of all the electrons. The result is the following monster of a Schrodinger equation:
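$$ \left[\sum_{i=1}^{N}\left(-\frac{\hbar^2}{2m}\nabla_i^2 + V(\mathbf{r}_i)\right) + \frac{1}{2}\sum_{i=1}^{N}\sum_{j\neq i}\frac{e^2}{4\pi\varepsilon_0\,|\mathbf{r}_i - \mathbf{r}_j|}\right]\Psi = E\,\Psi $$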
where each index i represents a particular electron. Unlike the hydrogen atom (in which there is only one electron), this is not usually analytically solvable. Because a single wavefunction describes multiple electrons, it is often not even numerically solvable, even with the most powerful computers available to us (let alone my puny-in-comparison gaming PC). So, we have to make a number of simplifying assumptions. One such assumption is that each electron has an individual wavefunction ψi (with each ψi being a function of only one position vector), and that the positional probability density of each ψi can be used to construct a charge density from which one can obtain an electric potential. This allows us to solve a larger number of somewhat simpler Schrodinger equations:
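$$ \left[-\frac{\hbar^2}{2m}\nabla^2 + V(\mathbf{r}) + \sum_{j\neq i}\int \frac{e^2\,|\psi_j(\mathbf{r}')|^2}{4\pi\varepsilon_0\,|\mathbf{r} - \mathbf{r}'|}\,d^3r'\right]\psi_i(\mathbf{r}) = \epsilon_i\,\psi_i(\mathbf{r}) $$

where $\epsilon_i$ is the energy of the individual orbital $\psi_i$.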
This form of the Schrodinger equation is known as the Hartree equation (after Douglas Hartree, the first guy to use this approximation method). It is still probably impossible to solve analytically in all but the most trivial of cases, but it is now possible to work with numerically on a computer. The electron-electron interaction term is thus approximated as seen in the above equation, and this term is known as the Hartree energy.
However, there is one part of the picture that is missing, and that is the “exchange energy” caused by the fermionic behavior of electrons. Wavefunctions describing multiple identical fermions have to exhibit the following property (known as antisymmetry):
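$$ \Psi(\mathbf{r}_1, \ldots, \mathbf{r}_i, \ldots, \mathbf{r}_j, \ldots, \mathbf{r}_N) = -\,\Psi(\mathbf{r}_1, \ldots, \mathbf{r}_j, \ldots, \mathbf{r}_i, \ldots, \mathbf{r}_N) $$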
or, in English, if two of the position variables are exchanged, the wavefunction becomes the negative of what it was before the exchange. The easiest way to capture this effect is by assuming the wavefunction takes the following form:
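$$ \Psi(\mathbf{r}_1, \ldots, \mathbf{r}_N) = \frac{1}{\sqrt{N!}} \begin{vmatrix} \psi_1(\mathbf{r}_1) & \psi_2(\mathbf{r}_1) & \cdots & \psi_N(\mathbf{r}_1) \\ \psi_1(\mathbf{r}_2) & \psi_2(\mathbf{r}_2) & \cdots & \psi_N(\mathbf{r}_2) \\ \vdots & \vdots & \ddots & \vdots \\ \psi_1(\mathbf{r}_N) & \psi_2(\mathbf{r}_N) & \cdots & \psi_N(\mathbf{r}_N) \end{vmatrix} $$

Swapping two position vectors swaps two rows of the determinant, which flips its sign, so antisymmetry comes for free.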
This form of wavefunction is known as the Slater determinant, after John C. Slater, who was the first person to use it. The Slater determinant method, by way of the Slater-Condon rules (which I will go into in a later post), results in one term being added to the single-electron equation:
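$$ \left[-\frac{\hbar^2}{2m}\nabla^2 + V(\mathbf{r}) + \sum_{j\neq i}\int \frac{e^2\,|\psi_j(\mathbf{r}')|^2}{4\pi\varepsilon_0\,|\mathbf{r}-\mathbf{r}'|}\,d^3r'\right]\psi_i(\mathbf{r}) \;-\; \sum_{j\neq i}\left[\int \frac{e^2\,\psi_j^*(\mathbf{r}')\,\psi_i(\mathbf{r}')}{4\pi\varepsilon_0\,|\mathbf{r}-\mathbf{r}'|}\,d^3r'\right]\psi_j(\mathbf{r}) = \epsilon_i\,\psi_i(\mathbf{r}) $$

The new (second) term is the exchange term; its sum runs only over orbitals of the same spin as $\psi_i$ (I am suppressing the spin labels here for readability).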
This, then, is the full Hartree-Fock equation. Now, you might be wondering, “What the heck is all this good for? It has nothing to do with all of the light spectrum stuff you were talking about in the over-simplified history lesson.” Well, the quantum mechanics that was developed to solve that problem turns out to be what’s necessary for understanding in detail the behavior of electrons in pretty much any atomic or molecular situation. Thus, the Hartree-Fock equation is used to calculate the behavior of electrons in atoms, molecules, crystal lattices, and nanomaterials. It is necessary for understanding the ways in which chemical behavior emerges from the quantum mechanical behavior of electrons.
So now, we have an equation that could be solved, if we have a big enough computer. In a project planned for this summer, I will implement this in a Python script. I will go into how computers are used to solve this (and how I plan to script a solver) in my next post. Until then, feel free to let all of the math marinate in your head.
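As a small preview of the numerical side, here is a minimal one-dimensional sketch of the self-consistent-field idea behind the Hartree equation. Everything here is my own toy choice rather than anything from the derivation above: a harmonic trap instead of a Coulomb nucleus, a softened 1-D Coulomb kernel, atomic units, spinless electrons, and (for simplicity) a mean field built from all orbitals, including the self-interaction that the true sum over j ≠ i excludes.

```python
# Toy 1-D self-consistent-field loop: solve the single-particle Schrodinger
# equation in the mean field of the electron density, rebuild the density
# from the occupied orbitals, and repeat until it stops changing.
import numpy as np

# Grid and units (atomic units: hbar = m = e = 1) -- all toy choices
n, L = 200, 10.0
x = np.linspace(-L / 2, L / 2, n)
dx = x[1] - x[0]

# Kinetic energy: second-order finite-difference Laplacian, -(1/2) d^2/dx^2
T = (-np.diag(np.ones(n - 1), -1) + 2 * np.eye(n)
     - np.diag(np.ones(n - 1), 1)) / (2 * dx**2)

v_ext = 0.5 * x**2            # external potential: a harmonic trap
n_electrons = 2
density = np.zeros(n)         # initial guess: no charge at all

for iteration in range(200):
    # Hartree (mean-field) potential from the current density, using a
    # softened 1-D Coulomb kernel 1/sqrt((x-x')^2 + 1) to avoid the singularity
    v_hartree = np.array([np.sum(density / np.sqrt((x - xi)**2 + 1.0)) * dx
                          for xi in x])
    H = T + np.diag(v_ext + v_hartree)
    energies, orbitals = np.linalg.eigh(H)     # eigenvalues in ascending order
    # Occupy the lowest orbitals, one electron each (spin ignored here);
    # divide by sqrt(dx) so that sum(|psi|^2) * dx = 1 on the grid
    psi = orbitals[:, :n_electrons] / np.sqrt(dx)
    new_density = np.sum(psi**2, axis=1)
    if np.max(np.abs(new_density - density)) < 1e-6:
        break
    density = 0.5 * density + 0.5 * new_density  # damped mixing for stability

print("converged after", iteration, "iterations")
print("occupied orbital energies:", energies[:n_electrons])
```

The damped mixing on the last line is the standard trick for making the fixed-point iteration converge: jumping straight to the new density tends to make the loop oscillate.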