Deep Learning for Space Exploration
Author(s): Argo Saakyan
Originally published on Towards AI.
I have always been obsessed with space and neural nets. As a computer vision researcher, I see a lot of opportunities for deep learning in space exploration. By that, I mean both research (processing data for astrophysics) and practical applications such as landing a rover and automating as many tasks as possible, which is critical for future missions.
This is an overview of some exciting cases where deep learning is indispensable for our exploration of space. Some of these examples are already deployed; others are still just ideas.
Dangerous asteroid detection
Our telescopes generate a huge number of images; with the Vera C. Rubin Observatory, for example, we will need to process about 200,000 images per day, so it's not possible to do it manually. We already have systems like ATLAS that monitor the sky and automatically detect asteroids, which is handy if we don't want to end up like the dinosaurs. ATLAS consists of several telescopes on different continents, and every night each telescope scans the sky several times.
Based on that, we get several images of exactly the same part of the sky. Stars do not appear to move because they are too far away, but asteroids do. So we can subtract one image from another to "delete" the background stars and find the moving objects: asteroids.
This is a classical computer vision solution that needs no deep learning, but there are deep learning solutions for this task too, which help especially when the two images don't align perfectly.
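Here is a minimal sketch of that difference-imaging idea, assuming two already-aligned, same-size exposures of the same field (the file names are hypothetical):

```python
import cv2

def find_moving_objects(path_a: str, path_b: str, threshold: int = 40):
    """Flag objects that moved between two aligned exposures of one sky field."""
    img_a = cv2.imread(path_a, cv2.IMREAD_GRAYSCALE)
    img_b = cv2.imread(path_b, cv2.IMREAD_GRAYSCALE)

    # Static stars cancel out; anything that moved leaves a residual blob.
    diff = cv2.absdiff(img_a, img_b)
    _, mask = cv2.threshold(diff, threshold, 255, cv2.THRESH_BINARY)

    # Each remaining contour is a candidate moving object (asteroid).
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours]

# candidates = find_moving_objects("sky_t0.png", "sky_t1.png")
```

A deep learning variant replaces the subtraction and thresholding with a network that learns to ignore small misalignments and imaging artifacts.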
Landing system
Let's assume we need to land a rover on the surface of Mars, and we are interested in a specific area. Unfortunately for our rover, the chosen terrain is pretty hard to land on, and that's why we need deep learning.
With a computer vision system (a deep learning model built from convolutional layers, i.e., a CNN), the rover can automatically find a safe place to land, and do it in real time while approaching the ground. Remember, Mars is far from Earth, so even light (a signal) needs time to get there: about 3 minutes of one-way light travel at closest approach. That's why the rover needs to be autonomous.
Remarkably, such a system has already been used by NASA, at least for landing Perseverance (2021).
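To make the idea concrete, here is a toy sketch of a fully convolutional network that scores every pixel of a descent-camera frame as safe or hazardous. It only illustrates the shape of the approach, not NASA's actual landing system:

```python
import torch
import torch.nn as nn

class SafeLandingNet(nn.Module):
    """Toy per-pixel terrain-safety scorer (illustrative, untrained)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 1),  # per-pixel safety logit
        )

    def forward(self, terrain):
        return torch.sigmoid(self.net(terrain))  # ~1.0 = safe, ~0.0 = hazardous

model = SafeLandingNet()
frame = torch.rand(1, 1, 256, 256)   # one grayscale descent-camera frame
safety_map = model(frame)            # (1, 1, 256, 256) map of safety scores
```

In practice the lander would run such a map at several altitudes during descent and steer toward the safest reachable region.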
Searching for minerals and water
Perseverance carries a drone on board: Ingenuity. It can fly on Mars! That's the first time we have ever had a controlled flight of such a device outside Earth. It was a proof of concept, and in the future, we can do a lot more with this technology.
We can use a camera and a computer vision model on the drone to detect minerals and water near the rover. We need water for literally everything, including producing oxygen or rocket fuel (if liquid hydrogen is used). Minerals might be used for building houses or for scientific research.
So our system would help create a map of points of interest, and this idea scales pretty well: we could run several drones to explore the surface. The PyTorch team seems pretty confident that next-gen rovers will carry GPUs powerful enough to run deep learning algorithms on board. This might become the most efficient way to survey the surface for several critical tasks.
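As a hedged sketch of what such a detector might look like, here is an untrained torchvision detection model with a hypothetical set of target classes; a real mission would need a model trained on labeled planetary imagery:

```python
import torch
import torchvision

CLASSES = ["background", "water_ice", "hematite", "silica"]  # assumed labels

# Untrained model: this shows the structure only, not mission-ready weights.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(
    weights=None, num_classes=len(CLASSES))
model.eval()

frame = [torch.rand(3, 480, 640)]    # one RGB frame from the drone camera
with torch.no_grad():
    detections = model(frame)[0]

for box, label, score in zip(detections["boxes"],
                             detections["labels"],
                             detections["scores"]):
    if score > 0.8:
        print(f"{CLASSES[label]} at {box.tolist()} (confidence {score:.2f})")
```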
One more thing in this category: as Chris Mattmann from NASA pointed out, today we need to send pictures to Earth to make decisions, but with deep learning, we can generate text descriptions from images on board and analyze a million images per day as text, instead of about 200 images as raw data (for now, the bottleneck is the amount of data we can send from Mars to Earth).
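The "send text, not pixels" idea can be sketched with an off-the-shelf image-captioning model. BLIP here is just an Earth-side stand-in for a mission-specific captioner, and the image path is hypothetical:

```python
from transformers import BlipProcessor, BlipForConditionalGeneration
from PIL import Image

# Downloads a public captioning checkpoint (a placeholder for a mission model).
processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
model = BlipForConditionalGeneration.from_pretrained(
    "Salesforce/blip-image-captioning-base")

image = Image.open("rover_frame.jpg").convert("RGB")  # hypothetical rover image
inputs = processor(images=image, return_tensors="pt")
caption_ids = model.generate(**inputs, max_new_tokens=30)

# A few dozen bytes of text replace megabytes of pixels on the downlink.
print(processor.decode(caption_ids[0], skip_special_tokens=True))
```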
3D printed buildings
3D printing is a great technology, as it attempts to automate house building. For now, we can probably print walls, but not quite the whole house; we lack stability and reproducibility in that kind of build. For example, the concrete used in the printing process needs to be fluid enough to pour through the nozzle, but it has to harden fast enough that the next layer can be poured successfully. The consistency of the concrete depends on temperature, humidity, and air pressure. Another problem is that building in different places means different ground to build on, which brings inconsistency into the process. Finally, there might be anomalies during the build. And that's exactly why we need deep learning here.
Firstly, we can combine temperature, humidity, and air pressure sensors with deep learning to adapt the compound so that we get the needed consistency of the concrete. Secondly, to work in different locations with anomalies and inconsistencies, we need a computer vision model so our robot can analyze what it's building.
And finally, if we want to use this technology on other celestial bodies (like this example), we need the maximum amount of automation and robustness, so we should use deep learning solutions.
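As a minimal sketch of the first point, a small network could map ambient sensor readings to a mix adjustment. The inputs, units, and target below are assumptions; a real controller would be calibrated on lab data for the specific compound:

```python
import torch
import torch.nn as nn

# Inputs: temperature (deg C), relative humidity (0-1), air pressure (hPa).
# Output: a hypothetical water/additive ratio adjustment for the mix.
mix_model = nn.Sequential(
    nn.Linear(3, 32), nn.ReLU(),
    nn.Linear(32, 32), nn.ReLU(),
    nn.Linear(32, 1),
)

sensors = torch.tensor([[21.5, 0.43, 1013.0]])  # one reading from the printer
suggested_adjustment = mix_model(sensors)       # fed back into the mixing unit
```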
Asteroid mining
Right now, asteroid mining is too expensive to be realistic, but when spaceflight gets more affordable, we might seriously consider it. The point is to find a celestial body with rare elements like palladium, gold, platinum, osmium, tungsten, and others, send an automated system there to mine those elements, and bring them back (or use them for some kind of space station). Someday, we might even use water (hydrogen) or methane from asteroids as fuel for long spaceflights.
But how is deep learning going to help us here? There are a ton of celestial bodies, and we need to find the most cost-effective flight. The more rare elements there are, the better, and with deep learning, we can automate the search. The first step is to use computer vision to find a suitable asteroid; then, using spectroscopy data, a neural net can try to predict which metals are present.
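A hedged sketch of that second step: a tiny classifier that maps a sampled reflectance spectrum to a simplified (and here hypothetical) asteroid taxonomy, where M-types are the metal-rich ones:

```python
import torch
import torch.nn as nn

N_WAVELENGTHS = 128                        # sampled reflectance values (assumed)
TAXONOMY = ["C-type", "S-type", "M-type"]  # M-types are the metal-rich class

spectrum_net = nn.Sequential(
    nn.Linear(N_WAVELENGTHS, 64), nn.ReLU(),
    nn.Linear(64, len(TAXONOMY)),
)

spectrum = torch.rand(1, N_WAVELENGTHS)    # one normalized spectrum
probs = torch.softmax(spectrum_net(spectrum), dim=1)
print(dict(zip(TAXONOMY, probs[0].tolist())))
```

Trained on survey spectra with known classifications, such a model would rank candidate asteroids by their likely metal content.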
Moreover, remember the earlier points: with deep learning, we can not only find the best asteroid to mine but also:
- Find a landing spot;
- Drive autonomously, navigating around objects;
- Look for rare elements with a computer vision system (camera or X-ray data).
Manufacturing
Maybe someday we will want to move heavy manufacturing to the Moon so that Earth's environment stays safe. In that case, we need as much smart automation as possible, as we don't want to station a lot of people there. It's a broad field, and computer vision would have a ton of applications. Here are some use cases for computer vision in factories (a sketch of the quality-control case follows the list):
- Quality control: detecting anomalies and defects;
- Autonomous robots that move materials and do other things humans do;
- Counting items;
- Safety (for example, fire detection).
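A common recipe for the quality-control case is an autoencoder trained only on images of defect-free parts: it learns to reconstruct normal parts well, so a high reconstruction error flags a potential defect. A minimal, untrained sketch:

```python
import torch
import torch.nn as nn

class DefectDetector(nn.Module):
    """Convolutional autoencoder for anomaly scoring (illustrative, untrained)."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 4, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(4, 16, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 2, stride=2), nn.Sigmoid(),
        )

    def anomaly_score(self, image):
        # High reconstruction error = the part looks unlike the training set.
        return ((self.decoder(self.encoder(image)) - image) ** 2).mean()

detector = DefectDetector()
part_image = torch.rand(1, 1, 64, 64)   # grayscale photo of a part
print(detector.anomaly_score(part_image).item())
```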
Predictive maintenance
As we have already discussed, we need as much automation as possible. That means we are going to have a lot of things that could potentially break. It's a good idea to monitor all of it and, even better, to predict failures so we can prevent them.
Let's say we have a motor with several sensors monitoring it. Using a deep learning algorithm, we can predict when the motor is going to fail based on the sensor data. That's a pretty common task even here on Earth, and it will be just as handy elsewhere. This is just one example; we can monitor a lot of different things, even with microphones or cameras.
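For example, a recurrent model could read a window of motor telemetry and output the probability of failure within some horizon. The sensor channels and the horizon below are assumptions:

```python
import torch
import torch.nn as nn

class FailurePredictor(nn.Module):
    """LSTM over a telemetry window -> P(failure within some horizon)."""
    def __init__(self, n_sensors=4, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(n_sensors, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, window):
        _, (h_n, _) = self.lstm(window)            # summary of the whole window
        return torch.sigmoid(self.head(h_n[-1]))   # probability of failure

model = FailurePredictor()
# 1 motor, 100 time steps, 4 assumed channels: vibration, temp, current, rpm
telemetry = torch.rand(1, 100, 4)
print(model(telemetry).item())
```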
Healthcare
When we are far away from Earth on a mission, maybe on the Moon or Mars, we can't take every specialist on board. That's why automation is also important in healthcare. With some equipment and AI assistance, we can diagnose a wide variety of diseases.
We can already do a lot in healthcare with computer vision today. Here are just a couple of examples (a training sketch follows the list):
- Broken bones, bone loss, and other bone pathologies;
- Kidney stones, different types of cancer, and other soft tissue pathologies.
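Medical-imaging models are typically built by transfer learning: take a pretrained backbone and swap its head for the findings of interest. The classes below are illustrative, and in practice the backbone would load pretrained weights and be fine-tuned on labeled scans:

```python
import torch
import torch.nn as nn
import torchvision

FINDINGS = ["normal", "fracture", "bone_loss"]  # assumed X-ray labels

# weights=None keeps this sketch offline; a real setup would use
# pretrained weights and fine-tune on labeled medical images.
backbone = torchvision.models.resnet18(weights=None)
backbone.fc = nn.Linear(backbone.fc.in_features, len(FINDINGS))

xray = torch.rand(1, 3, 224, 224)   # one preprocessed scan
logits = backbone(xray)
print(FINDINGS[logits.argmax(dim=1).item()])
```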
Moreover, we can use symptoms and other data to try to diagnose other (non-visual) diseases and to suggest a treatment. Even more interesting, we can potentially mimic a real doctor's visit: we can analyze video data, audio data, and symptoms to build a better understanding of the problem. Finally, we can work with mental health, too, by analyzing speech and/or behavior.
Astrophysicist's assistant
There are a lot of tasks where deep learning can help astrophysicists, and it is really relevant now because old algorithms are still in use despite their modest quality. Sometimes volunteers are even recruited for projects like Galaxy Zoo. It's important to keep in mind that we generate a lot of data with telescopes, and we need to analyze it somehow.
Here are just some examples where deep learning is really useful:
- Galaxy detection/classification;
- Jupiter vortex analysis;
- Exoplanet search;
- Supernova detection;
- Extended X-ray emission detection;
- Black hole detection, and more.
Although computer vision will be used for some of these examples, in the majority of cases we might not have a useful image from the telescope but rather signal data, such as measured brightness. In that case, we will use other deep learning algorithms to analyze the signal.
A good example is exoplanet detection. With the Kepler Space Telescope, we got brightness measurements over time, so we can use recurrent neural networks (for example, LSTM layers). That might change with the James Webb Space Telescope, as it provides higher-resolution images. Either way, it doesn't really matter whether we analyze image data or other signal data: deep learning algorithms will be used here.
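Here is a minimal sketch of that light-curve setup: an LSTM reads a brightness time series and outputs the probability that it contains a transit-like dip. The window length and labeling scheme are assumptions:

```python
import torch
import torch.nn as nn

class TransitClassifier(nn.Module):
    """LSTM over a light curve -> P(transit-like dip present)."""
    def __init__(self, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, light_curve):
        _, (h_n, _) = self.lstm(light_curve)       # summary of the series
        return torch.sigmoid(self.head(h_n[-1]))   # transit probability

model = TransitClassifier()
flux = torch.rand(1, 2000, 1)   # 2000 brightness measurements, 1 channel
print(model(flux).item())
```

Trained on labeled Kepler light curves, such a model would flag candidate planets for human follow-up.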
Conclusion
There are a ton of other applications of deep learning, like space weather forecasting, space debris detection, robots, assistants like ChatGPT, farming automation, and a lot more. I just tried to mention remarkable projects that, in my opinion, are immensely inspiring.