In addition to viewing its images, Webb's data can also be heard
NASA has announced a “new and immersive” way to explore some of the first images and data taken by the James Webb Space Telescope: through sound.
The agency has now released several soundscapes based on captures the telescope made just a few months ago, with the goal, according to Matt Russo, a musician and professor of physics at the University of Toronto, of making the telescope's images and data “more understandable through sound, helping listeners to create their own mental images”.
This material includes the Cosmic Cliffs of the Carina Nebula, the contrasting hues of two images of the Southern Ring Nebula, and even individual data points from the transmission spectrum of the hot gas giant exoplanet WASP-96b.
The Webb data was adapted by a team of scientists, musicians, and a member of the blind and visually impaired community, with support from the telescope mission and NASA's Universe of Learning program.
The soundscape of the Carina Nebula
In the case of the Cosmic Cliffs of the Carina Nebula, NASA explains that the image has been mapped “into a symphony of sounds”: the musicians assigned unique notes to the semi-transparent, diaphanous regions (areas that let almost all of the light through) as well as to the dense regions of gas and dust in the nebula, resulting in a “buzzing” soundscape.
The image is scanned from left to right. The gas and dust in the upper half of the image, shown in blue tones, is rendered as drone-like sounds, while the reddish tones in the lower half get a clearer, more melodic composition.
The vertical position of the light also determines the frequency of the sound: bright light near the top of the image sounds loud and high-pitched, bright light near the middle sounds loud but in a lower key, and in the dimmer areas the frequencies are lower and the notes quieter and clearer, without distortion.
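The scan-line mapping described above can be sketched in a few lines of Python. This is an illustrative toy, not NASA's actual sonification pipeline: it assumes a grayscale image as a 2D array, maps each pixel's row to a pitch (higher rows give higher frequencies) and its brightness to a volume, and scans the columns left to right. The frequency band and brightness cutoff are arbitrary choices for the example.

```python
import numpy as np

def sonify_image(image, f_min=100.0, f_max=1000.0):
    """Toy scan-line sonification (illustrative, not NASA's code):
    each column is one time step; a pixel's row sets its pitch
    (top of the image -> highest frequency) and its brightness
    (0..1) sets its loudness."""
    n_rows, n_cols = image.shape
    # Row 0 is the top of the image, so it gets the highest frequency.
    freqs = np.linspace(f_max, f_min, n_rows)
    events = []
    for col in range(n_cols):          # scan left to right
        for row in range(n_rows):
            brightness = image[row, col]
            if brightness > 0.1:       # skip near-dark pixels
                events.append((col, float(freqs[row]), float(brightness)))
    return events  # (time step, frequency in Hz, volume 0..1)

# Tiny 3x2 example: a bright pixel at the top left, a dim one at the bottom right.
img = np.array([[0.9, 0.0],
                [0.0, 0.0],
                [0.0, 0.3]])
print(sonify_image(img))
```

The bright top-left pixel becomes a loud, high tone at the first time step, and the dim bottom-right pixel a quiet, low tone at the second.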
The dual soundscape of the Southern Ring Nebula
For the Southern Ring Nebula there are two sound variants, matching the two images Webb captured: one in near-infrared light (left) and one in mid-infrared (right).
Here the tones were assigned based on the colors of the images; that is, the light frequencies were converted into corresponding audio frequencies. Near-infrared light is represented by a range of higher frequencies at the beginning of the track; midway through, these shift lower to reflect the longer light waves picked up in the mid-infrared.
The variants let listeners pick out different sounds in each image, for example at the 15- and 44-second marks: at the first, only one star is “heard”, while at the second there are two, reflecting the pair of stars detected in the mid-infrared.
The water signal on an exoplanet
Lastly, for the hot gas giant exoplanet WASP-96b, the team translated into sound the individual data points of the transmission spectrum that revealed the presence of water.
Each tone corresponds to the frequency of light represented by a data point: longer wavelengths of light have lower frequencies and are heard as lower pitches. The volume indicates the amount of light detected at each data point.
The four water signatures are represented by the sound of falling water drops, although each is actually a signature made up of multiple data points.
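The spectrum mapping described above can also be sketched. Again this is a hedged illustration, not NASA's actual method: it assumes a list of (wavelength, light amount) data points, converts each wavelength to a pitch via its inverse (so longer wavelengths land at lower audio frequencies), and normalizes the light amounts into volumes. The audio band and the sample data points are made up for the example.

```python
import numpy as np

def spectrum_to_tones(wavelengths_um, light_amounts, f_min=200.0, f_max=2000.0):
    """Toy spectrum sonification (illustrative, not NASA's pipeline):
    longer wavelengths map to lower pitches, and the amount of light
    at each data point sets that tone's volume."""
    w = np.asarray(wavelengths_um, dtype=float)
    a = np.asarray(light_amounts, dtype=float)
    # Light frequency scales as 1/wavelength; rescale it into an audible band.
    inv = 1.0 / w
    pitches = f_min + (f_max - f_min) * (inv - inv.min()) / (inv.max() - inv.min())
    volumes = a / a.max()              # normalize loudness to 0..1
    return list(zip(pitches.tolist(), volumes.tolist()))

# Hypothetical data points (wavelength in microns, relative light level).
tones = spectrum_to_tones([0.6, 1.4, 2.8], [1.0, 0.8, 0.5])
print(tones)
```

The shortest wavelength ends up at the top of the audio band and the longest at the bottom, with each tone's loudness tracking the measured light level.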
Learning even with sounds
NASA notes that these audio tracks are designed first for listeners who are blind or have low vision, but are also meant to “captivate” anyone who tunes in, translating the visual information of the images into sound.
This makes the telescope's images more accessible, and according to Kimberly Arcand, a visualization scientist at the Chandra X-ray Center, preliminary survey results showed that both people who are blind or have low vision and sighted people learned something about astronomical images through listening alone.
Since these tracks are not real sounds recorded in space, Matt Russo teamed up with fellow musician Andrew Santaguida to compose music from the telescope data, taking care to highlight the details they wanted listeners to focus on.