Optical data storage breakthrough increases capacity of diamonds by circumventing the diffraction limit

https://phys.org/news/2023-12-optical-storage-breakthrough-capacity-diamonds.html

"store many different images at same place in diamond using slightly different color laser to store different information into different atoms in same microscopic spots... control color centers' electrical charge precisely using narrow-band laser and cryogenic conditions, enabling writing and reading data at finer level down to single atom... circumvents diffraction limit exploiting slight color (wavelength) changes existing between color centers... writing data with sub-diffraction resolution... can infinitely write, erase, and rewrite "

Related:

Record-breaking diamond storage can save data for millions of years

Atomic arrays enable negative refraction, bypassing metamaterial limitations

Silicon nitride-based electromagnetic metamaterial with industrial potential developed

Simon Derricutt commented by email:

For the negative refractive index, there's some bending of the meanings of words. An array of atoms would, in my book, be seen as a metamaterial, so this doesn't really eliminate the need for metamaterials; it just gives another way to achieve the metamaterial required. It all seems to be theoretical at the moment, too, so it may just be a bid for 15 minutes of fame.

Not really sure about the Silicon Nitride with Tungsten added. It seems to have more military uses (stealth coatings) than industrial ones. Given that the Tungsten is randomly arranged, the microscopic properties will not be the same as the bulk properties, which puts a size limit on how small you can make items in this material and still get the defined properties. Note also that the odd properties here hold only over limited frequency ranges.

The Boron Nitride (BN) "hyperlenses" use interesting tricks to get around the normal limitations: the surface polaritons have much shorter wavelengths and thus enable better resolution, but only at the wavelengths that the BN can pass, which are in the IR band and thus longer wavelengths anyway.

However, an image from a lens is composed of overlapping Gaussian dots from each part of the source image, where each dot's size depends on the wavelength, the lens dimensions, and the distance from the lens. In image terms it's a convolution, and if we put that image onto a sensor array, each pixel in the final image will sum the total incident photons to give a 32-bit or larger value. If the image stays the same and we shift the sensor array across by half a pixel, we get a different value for each pixel than in the first picture, and we could put those two arrays through a mathematical process (deconvolution) that would deliver a final image with twice the resolution of either one. This process can be extended pretty well as far as you want, by taking more images with a slightly displaced sensor and then processing all those images to produce a final image with much better resolution than the Gaussian limit we started with.

The only real limitation is that the subject must not move between successive images. Though standard theory says that the information (resolution) is degraded by that Gaussian limit, the information is in fact still there, just harder to isolate; if you set up your sampling correctly, you can get the resolution back. It just needs more photos that sample different image locations, plus some processing. You can choose whether to shift the photoreceptor array or the lens, maybe using piezo-actuators.
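The shift-and-combine idea above can be sketched numerically. This is my own minimal illustration in Python/NumPy, not code from the comment or the article: a detector whose pixels each average several fine samples (the blur) takes multiple exposures, each shifted by a sub-pixel amount, and interleaving those exposures recovers a sampling grid finer than any single exposure.

```python
import numpy as np

# "True" high-resolution signal (1-D stand-in for the scene).
fine = np.sin(np.linspace(0, 8 * np.pi, 400))

pixel = 4        # one detector pixel averages 4 fine samples (the blur)
shift_steps = 4  # number of sub-pixel positions sampled

# Each exposure: shift the scene by one fine sample, then let the coarse
# detector average each group of `pixel` fine samples into one reading.
exposures = []
for s in range(shift_steps):
    shifted = np.roll(fine, -s)
    coarse = shifted.reshape(-1, pixel).mean(axis=1)
    exposures.append(coarse)

# Interleave the shifted exposures onto a grid `shift_steps` times finer
# than any single exposure. Each value is still blurred by the known
# pixel-averaging kernel; a deconvolution step would then sharpen it.
combined = np.empty(len(exposures[0]) * shift_steps)
for s, e in enumerate(exposures):
    combined[s::shift_steps] = e

print(len(exposures[0]), len(combined))  # 100 coarse samples -> 400 combined
```

The interleaved array has four times the sample density of one exposure; the remaining blur is a known, fixed kernel, which is exactly the situation deconvolution handles well.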

It should be possible to get from a 500nm resolution (around the optical limit) to 50nm resolution using 100 shots and an image processor, and thus get high-resolution photos using visible light and standard glass lenses, maybe even with just a different imaging array (with the piezo-actuators shifting the array in a defined sequence) in a standard microscope. The basic technology for this exists now; it just needs building.
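The shot count above follows from simple arithmetic (my reconstruction, not spelled out in the comment): a 10x linear resolution gain needs 10 sub-pixel offsets along each axis, so a 2-D image needs 10 x 10 displaced exposures.

```python
# Back-of-envelope check of the 100-shot figure for 500nm -> 50nm.
base_res_nm = 500     # starting resolution, around the optical limit
target_res_nm = 50    # desired resolution
linear_gain = base_res_nm // target_res_nm  # 10x finer per axis
shots_2d = linear_gain ** 2                 # offsets needed in x times y
print(linear_gain, shots_2d)  # 10 100
```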
