Blog

  • ObscuroWeb — Encrypting Web Assets at Rest

    Introduction

    Encryption has become a vital tool for protecting data online. For protecting data in transit we have the now-ubiquitous HTTPS. For protecting data at rest we have a number of schemes for data encryption. However, for protecting web assets at rest there are very few options.

    Anyone who wants to serve data to a large number of clients generally has to rely on third-party services such as a CDN or a DDoS protection service, and those cloud services are able to see the data they store and serve.

    This poses a problem for anyone handling sensitive or confidential information. That is why we have developed a solution.

    ObscuroWeb — Functional Overview

    The person setting up the website encrypts the HTML files, as well as any other assets, and publishes them somewhere accessible on the web. Then, they create another page which contains a loader and decryption key. The loader will download, decrypt, and display the page. While the page and assets may be on a number of CDNs or cloud servers, the actual key is only accessible in a particular place. The key might also be fetched from an API or from the user’s own browser storage.

    Potential Use Cases

    An organization may host an internal newsletter with sensitive information on a server that resides on premises. With ObscuroWeb, they could move the newsletter to a cloud-based CDN and host only the loader and decryption key on the server. This would reduce server load and possibly speed up the page loads.

    Someone dealing with sensitive records could store them in an S3 bucket, but restrict access using the loader and a system to lock the key behind some form of authentication.

    Drawbacks and Limitations

    • Decryption can be slow.
    • Decryption might use more battery.
    • There is no effort made to verify assets that are downloaded, except that they were encrypted with the key used.
    • A lot of memory is used as multiple copies of the data must exist in full simultaneously.
    • If the key is publicly available, the privacy benefits are negligible.
    • Keys cannot be easily changed, as all assets have to be re-encrypted and re-uploaded. The loader must also be re-configured with the new key. This will almost inevitably cause a disruption in service.
    • Any attempt to apply compression to the files as they are transferred (e.g. Brotli, gzip) will be either of no benefit or actively counterproductive.
    • Webpage loading is somewhat ugly, as there is a visible delay before content appears.
    • Websites using JavaScript will need to be redesigned somewhat, as updating an entire page at once will not run the scripts present on the new page.

    Demo

    A demonstration is available on JimmyNet: https://jimmyhoke.net/obscurowebdemo

    Conclusion

    ObscuroWeb shows how you can, with just a few lines of JavaScript and libsodium, dynamically load and decrypt web resources.

  • Single Atom Hartree-Fock

    I want to get a personal project done over the course of my gap year, and few ideas appeal to me more than toying around with computational quantum mechanics calculations. I recently coded a Schrodinger-Poisson self-consistent field solver as part of a class project, and I am eager to use similar methods to simulate more complex structures. The solver I wrote handled a collection of electrons in a spherical potential well, which is a very simple quantum-mechanical problem compared with electrons in a Coulomb potential.

    An Over-Simplified History of Quantum Mechanics

    Before I can get into the details of Hartree-Fock, I need to first explain what problem it is trying to solve. So, a quick history lesson. By the late 19th century, everyone thought we basically knew how the world worked. We had Newtonian mechanics to explain forces and motion, Newtonian gravity to explain the motion of the planets, a detailed theory of electricity and magnetism brought to us by Coulomb, Gauss, Ampere, Faraday, Maxwell, and Lorentz, and thermodynamic theories about energy and heat brought to us by Leibniz, du Chatelet, Watt, Carnot, Clausius, Kelvin, and Boltzmann. To round all of this out, we had well-developed and mathematically rigorous reformulations of the mechanics Newton began, brought to us by Euler, Lagrange, and Hamilton (not the founding father). Physicists at the time thought, “Well, we have a detailed theory of how electromagnetism works, we have a good explanation of light, and we have a good explanation of the nature of heat. So, we should be able to combine the two and explain why hot things glow the way they do. Right?”

    Yeah, no. When they tried to do this, they ended up concluding that everything in the universe should emit unbounded amounts of energy at ever-shorter wavelengths (the “ultraviolet catastrophe”), which is simply not the case.

    Then comes Max Planck. He comes along and says, “Well, let’s assume that light can only be emitted in these discrete quantities of energy, which get bigger as the wavelength of the light decreases.” And it works. But now everyone’s wondering why it works, and physicists spend the next thirty years figuring that out.

    Einstein says, “Well, that explains the photoelectric effect: these packets of light have to be at a certain energy in order to eject electrons from their atoms.” Niels Bohr comes along and says that the electrons’ orbits around the nucleus must be quantized, and that explains why they only absorb and emit certain wavelengths when excited (thus solving the mystery of why spectral lines exist). He further conjectures the same thing regarding angular momentum (because quantized electron orbits result in discrete levels of angular momentum for the electrons), leading to the Stern-Gerlach experiment, which tested the discretization of atomic magnetic moments that should result from this.

    Louis de Broglie comes along and says, “Well, if light is a wave that comes in discrete packets, maybe all these other things that come in discrete packets (like, for instance, electrons) must be waves too.” And then Heisenberg and Schrodinger take all this and craft their mathematical formulations of quantum mechanics, and Paul Dirac makes their formulation compatible with special relativity and then develops a quantum formulation of electromagnetic theory, finally completing the initial burst of discoveries set off by wondering how light can be both a particle and a wave in the first place.

    From the Schrodinger Equation to Hartree-Fock

    When Schrodinger published his formulation of quantum mechanics in 1926, this is the equation he came up with:

    $$i\hbar\frac{\partial \Psi}{\partial t} = -\frac{\hbar^2}{2m}\nabla^2\Psi + V(\mathbf{r})\,\Psi$$

    The point of this equation is to solve for Ψ, which gives us a wave function. Various operators, such as the momentum and position operators, can give us information about various observables. Of particular interest is the energy operator, defined as:

    $$\hat{E} = i\hbar\frac{\partial}{\partial t}$$

    If our wave function Ψ describes a quantum state with a perfectly well-defined energy, then it should be an eigenfunction of the energy operator:

    $$\hat{E}\,\Psi = E\,\Psi$$

    where E (no hat) is the energy of the system described by the wavefunction.

    One thing that this tells us about the wavefunction is that its time dependence can be separated out into a complex exponential term, like so:

    $$\Psi(\mathbf{r}, t) = e^{-iEt/\hbar}\,\psi(\mathbf{r})$$
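
    To see why, plug this separated form into the time-dependent equation. The left-hand side becomes

    $$i\hbar\frac{\partial}{\partial t}\left(e^{-iEt/\hbar}\,\psi(\mathbf{r})\right) = i\hbar\left(-\frac{iE}{\hbar}\right)e^{-iEt/\hbar}\,\psi(\mathbf{r}) = E\,e^{-iEt/\hbar}\,\psi(\mathbf{r})$$

    and, since neither $\nabla^2$ nor $V(\mathbf{r})$ acts on $t$, the exponential factor cancels from both sides, leaving an equation for $\psi(\mathbf{r})$ alone.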

    This gets us to the time-independent Schrodinger equation:

    $$E\,\psi = -\frac{\hbar^2}{2m}\nabla^2\psi + V(\mathbf{r})\,\psi$$

    He then solved the time-independent equation for a hydrogen atom, with the following potential energy:

    $$V(r) = -\frac{1}{4\pi\epsilon_0}\frac{e^2}{r}$$

    and he got energy eigenvalues that were perfectly in line with the observed energy levels of the hydrogen atom.
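
    Concretely, the bound-state energies that come out of this calculation are

    $$E_n = -\frac{m_e e^4}{2(4\pi\epsilon_0)^2\hbar^2}\,\frac{1}{n^2} \approx -\frac{13.6\ \mathrm{eV}}{n^2}, \qquad n = 1, 2, 3, \ldots$$

    which reproduce the hydrogen spectral series (Lyman, Balmer, and so on) that Bohr's model had explained.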

    Things get more complicated when dealing with multi-electron systems. One reason is that the wavefunction is now a function of multiple position vectors:

    $$\psi(\mathbf{r}_1, \mathbf{r}_2, \mathbf{r}_3, \ldots, \mathbf{r}_n)$$

    Furthermore, we now have to account for the mutual electrostatic repulsion of the electrons, as well as the kinetic energies of all the electrons. The result is the following monster of a Schrodinger equation:

    $$E\,\psi = -\sum_i \frac{\hbar^2}{2m}\nabla_i^2\psi + \sum_i V_{\mathrm{ext}}(\mathbf{r}_i)\,\psi + \sum_{i < j}\frac{1}{4\pi\epsilon_0}\frac{e^2}{|\mathbf{r}_i - \mathbf{r}_j|}\,\psi$$

    where each index i represents a particular electron. This is not usually analytically solvable, unlike the hydrogen atom (in which there is only one electron). Given that we have a single wavefunction to describe multiple electrons, it is often not even numerically solvable, even with the most powerful computers available to us (let alone my puny-in-comparison gaming PC). So, we have to make a number of simplifying assumptions. One such assumption is that each electron has an individual wavefunction ψi (with each ψi being a function of only one position vector), and that the positional probability density of each ψi can be used to construct a charge density from which one can obtain an electric potential. This allows us to solve a larger number of somewhat simpler Schrodinger equations:

    $$E_i\,\psi_i = -\frac{\hbar^2}{2m}\nabla^2\psi_i + V_{\mathrm{ext}}(\mathbf{r})\,\psi_i + \sum_{j \neq i}\frac{e^2}{4\pi\epsilon_0}\left[\int \frac{|\psi_j(\mathbf{r}')|^2}{|\mathbf{r}-\mathbf{r}'|}\,d\mathbf{r}'\right]\psi_i$$

    This form of the Schrodinger equation is known as the Hartree equation (after Douglas Hartree, the first person to use this approximation method). It is still probably impossible to solve analytically in all but the most trivial of cases, but it is now possible to work with it numerically on a computer. The electron-electron interaction term is thus approximated as seen in the above equation, and this term is known as the Hartree energy.

    However, there is one part of the picture that is missing, and that is the “exchange energy” caused by the fermionic behavior of electrons. Wavefunctions describing multiple identical fermions have to exhibit the following property (known as antisymmetry):

    $$\psi(\ldots, \mathbf{r}_i, \ldots, \mathbf{r}_j, \ldots) = -\,\psi(\ldots, \mathbf{r}_j, \ldots, \mathbf{r}_i, \ldots)$$

    or, in English, if two of the position variables are exchanged, the wavefunction becomes the negative of what it was before the exchange. The easiest way to capture this effect is by assuming the wavefunction to be in the form of the following:

    $$\psi(\mathbf{r}_1, \mathbf{r}_2, \ldots, \mathbf{r}_n) = \frac{1}{\sqrt{N!}}\det\begin{bmatrix} \psi_1(\mathbf{r}_1) & \psi_2(\mathbf{r}_1) & \cdots & \psi_n(\mathbf{r}_1) \\ \psi_1(\mathbf{r}_2) & \psi_2(\mathbf{r}_2) & \cdots & \psi_n(\mathbf{r}_2) \\ \vdots & \vdots & \ddots & \vdots \\ \psi_1(\mathbf{r}_n) & \psi_2(\mathbf{r}_n) & \cdots & \psi_n(\mathbf{r}_n) \end{bmatrix}$$
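
    For just two electrons, the determinant reduces to a form where the antisymmetry is easy to check by hand:

    $$\psi(\mathbf{r}_1, \mathbf{r}_2) = \frac{1}{\sqrt{2}}\left[\psi_1(\mathbf{r}_1)\,\psi_2(\mathbf{r}_2) - \psi_1(\mathbf{r}_2)\,\psi_2(\mathbf{r}_1)\right]$$

    Swapping $\mathbf{r}_1$ and $\mathbf{r}_2$ flips the sign, and setting $\psi_1 = \psi_2$ makes the whole wavefunction vanish, which is exactly the Pauli exclusion principle.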

    This form of wavefunction is known as the Slater determinant, after John C. Slater, the first person to use it. The Slater determinant method, by way of the Slater-Condon rules (which I will go into in a later post), results in one extra term being added to the single-electron equation:

    $$E_i\,\psi_i = -\frac{\hbar^2}{2m}\nabla^2\psi_i + V_{\mathrm{ext}}(\mathbf{r})\,\psi_i + \sum_{j \neq i}\frac{e^2}{4\pi\epsilon_0}\left[\int \frac{|\psi_j(\mathbf{r}')|^2}{|\mathbf{r}-\mathbf{r}'|}\,d\mathbf{r}'\right]\psi_i - \sum_{j \neq i}\frac{e^2}{4\pi\epsilon_0}\left[\int \frac{\psi_j^*(\mathbf{r}')\,\psi_i(\mathbf{r}')}{|\mathbf{r}-\mathbf{r}'|}\,d\mathbf{r}'\right]\psi_j(\mathbf{r})$$

    This, then, is the full Hartree-Fock equation. Now, you might be wondering, “What the heck is all this good for? It has nothing to do with all of the light spectrum stuff you were talking about in the over-simplified history lesson.” Well, the quantum mechanics that was developed to solve that problem turns out to be what’s necessary for understanding in detail the behavior of electrons in pretty much any atomic or molecular situation. Thus, the Hartree-Fock equation is used to calculate the behavior of electrons in atoms, molecules, crystal lattices, and nanomaterials. It is necessary for understanding the ways in which chemical behavior emerges from the quantum mechanical behavior of electrons.

    So now, we have an equation that could be solved, given a big enough computer. As a project for the summer, I am planning to implement a Hartree-Fock solver in a Python script. I will go into how computers are used to solve this equation (and how I plan to script a solver) in my next post. Until then, feel free to let all of the math marinate in your head.