AryanPrakhar/astro-autoencoder


Astronomical Image Autoencoder

This repository contains a deep learning model for compressing and reconstructing astronomical images using a convolutional autoencoder with a U-Net architecture.

*(figure: test_image_16_latent_analysis)*

Table of Contents

  • Project Overview
  • Features
  • Installation
  • Dataset
  • Model Architecture
  • Training
  • Results
  • Latent Space Exploration

Project Overview

This project implements a neural network autoencoder specifically designed for astronomical image data. The autoencoder compresses FITS (Flexible Image Transport System) astronomical images into a compact 8-dimensional latent space representation and then reconstructs them with high fidelity.

Project Overview Diagram

*(figure: project_overview_flowchart)*

The project has applications in:

  • Data compression for large astronomical datasets
  • Anomaly detection for identifying unusual celestial objects
  • Feature extraction for downstream machine learning tasks
  • Denoising astronomical images
  • Exploring latent space representations of astronomical data

Features

  • Custom data loader for FITS astronomical images
  • U-Net architecture optimized for astronomical image reconstruction
  • Comprehensive evaluation metrics (MSE, SSIM, MS-SSIM)
  • Latent space visualization and analysis tools
  • Customizable inference pipeline for testing

Installation

To run this project, install the required dependencies:

 pip install numpy matplotlib torch torchvision astropy scikit-image tqdm pytorch-msssim

Dataset

The model is trained on FITS astronomical image data. FITS is a standard format in astronomy for storing images and multi-dimensional data.

Sample Image

*(figure: sample image)*

The dataset preprocessing includes:

  • Loading FITS files with variable dimensionality
  • Resizing to a consistent 187×187 resolution
  • Normalizing pixel values to [0,1] range
  • Applying data augmentation (random flips and rotations) during training
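
The preprocessing steps above can be sketched as a single loader function. This is a minimal illustration assuming `astropy` and `scikit-image` (both listed in the installation section); the function name `load_fits_image` is hypothetical, not the repository's actual API:

```python
import numpy as np
from astropy.io import fits
from skimage.transform import resize

def load_fits_image(path, target_size=(187, 187)):
    """Load a FITS image, resize it to a fixed resolution, and
    normalize pixel values to [0, 1]. A sketch of the preprocessing
    described above, not the repository's exact loader."""
    with fits.open(path) as hdul:
        data = hdul[0].data.astype(np.float32)
    # Collapse any extra axes (some FITS files store multi-plane data)
    while data.ndim > 2:
        data = data[0]
    # Resize to the consistent 187x187 resolution
    data = resize(data, target_size, anti_aliasing=True)
    # Min-max normalization to the [0, 1] range
    lo, hi = data.min(), data.max()
    if hi > lo:
        data = (data - lo) / (hi - lo)
    return data
```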

Model Architecture

The autoencoder is based on a U-Net architecture with customizations for astronomical data:

U-Net Architecture

*(figure: unet_architecture_diagram)*

Encoder

  • Four convolutional blocks with progressively increasing filters (16→32→64→128)
  • Each block contains two convolutional layers with batch normalization and ReLU activations
  • Max pooling for downsampling
  • Final fully connected layer to create the latent space representation (8 dimensions)

Decoder

  • Fully connected layer to reshape from latent space
  • Series of upsampling blocks with skip connections from the encoder
  • Each block contains transposed convolutions and regular convolutions
  • Final sigmoid activation for output normalization

Skip Connections

  • Preserve spatial information from encoder to decoder
  • Help maintain fine details in the reconstructed images
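
The pieces above can be combined into a minimal PyTorch sketch. The class name `UNetAutoencoder` is illustrative, and this version uses bilinear upsampling in place of transposed convolutions so that the odd 187-pixel resolution lines up cleanly at the skip connections; treat it as an approximation of the architecture, not a copy of the repository's model:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def conv_block(in_ch, out_ch):
    # Two conv layers, each with batch normalization and ReLU
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
    )

class UNetAutoencoder(nn.Module):
    def __init__(self, latent_dim=8, size=187):
        super().__init__()
        self.enc1 = conv_block(1, 16)
        self.enc2 = conv_block(16, 32)
        self.enc3 = conv_block(32, 64)
        self.enc4 = conv_block(64, 128)
        self.pool = nn.MaxPool2d(2)
        self.bottom = size // 16  # spatial size after four 2x poolings (11 for 187)
        self.to_latent = nn.Linear(128 * self.bottom ** 2, latent_dim)
        self.from_latent = nn.Linear(latent_dim, 128 * self.bottom ** 2)
        self.dec3 = conv_block(128 + 64, 64)
        self.dec2 = conv_block(64 + 32, 32)
        self.dec1 = conv_block(32 + 16, 16)
        self.out = nn.Conv2d(16, 1, 1)

    def forward(self, x):
        s1 = self.enc1(x)                  # 16 ch, full resolution
        s2 = self.enc2(self.pool(s1))      # 32 ch
        s3 = self.enc3(self.pool(s2))      # 64 ch
        b = self.pool(self.enc4(self.pool(s3)))   # 128 ch at bottleneck
        z = self.to_latent(b.flatten(1))   # 8-D latent code
        h = self.from_latent(z).view(-1, 128, self.bottom, self.bottom)
        # Upsample and concatenate the matching encoder map (skip connection)
        h = F.interpolate(h, size=s3.shape[-2:], mode="bilinear", align_corners=False)
        h = self.dec3(torch.cat([h, s3], dim=1))
        h = F.interpolate(h, size=s2.shape[-2:], mode="bilinear", align_corners=False)
        h = self.dec2(torch.cat([h, s2], dim=1))
        h = F.interpolate(h, size=s1.shape[-2:], mode="bilinear", align_corners=False)
        h = self.dec1(torch.cat([h, s1], dim=1))
        # Sigmoid keeps the reconstruction in [0, 1]
        return torch.sigmoid(self.out(h)), z
```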

Training

The model was trained using:

  • Loss function: Binary Cross-Entropy
  • Optimizer: Adam with learning rate 0.001
  • Batch size: 16
  • Epochs: 150 (checkpoints every 10 epochs)
  • Train/Validation split: 80/20
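
A training loop matching these hyperparameters might look like the following. This is a sketch, not the repository's script; it assumes a model whose forward pass returns the reconstruction and the latent code:

```python
import torch
from torch.utils.data import DataLoader, random_split

def train(model, dataset, epochs=150, batch_size=16, lr=1e-3, device="cpu"):
    """BCE loss, Adam at lr=0.001, batch size 16, 80/20 split,
    checkpoints every 10 epochs -- the settings listed above."""
    n_train = int(0.8 * len(dataset))
    train_set, val_set = random_split(dataset, [n_train, len(dataset) - n_train])
    train_loader = DataLoader(train_set, batch_size=batch_size, shuffle=True)
    val_loader = DataLoader(val_set, batch_size=batch_size)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = torch.nn.BCELoss()  # inputs and targets both lie in [0, 1]
    model.to(device)
    for epoch in range(1, epochs + 1):
        model.train()
        for (x,) in train_loader:
            x = x.to(device)
            recon, _ = model(x)
            loss = criterion(recon, x)  # reconstruct the input itself
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
        model.eval()
        with torch.no_grad():
            val_loss = sum(
                criterion(model(x.to(device))[0], x.to(device)).item()
                for (x,) in val_loader
            ) / max(len(val_loader), 1)
        if epoch % 10 == 0:  # checkpoint every 10 epochs
            torch.save(model.state_dict(), f"checkpoint_epoch_{epoch}.pt")
    return model
```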

Training Progress

*(figure: training progress)*

Results

The trained model achieves high reconstruction quality on test images:

Reconstruction Examples

*(figure: reconstruction examples)*

Quantitative Metrics

  • Mean Squared Error (MSE): 0.000054 (lower is better)
  • Structural Similarity Index (SSIM): 0.9868 (higher is better)
  • Multi-Scale SSIM (MS-SSIM): 0.9982 (higher is better)

Metrics Visualization

*(figure: metrics visualization)*

Latent Space Exploration

The 8-dimensional latent space captures key features of astronomical images:
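
One simple way to explore the latent space is to encode a dataset and summarize each of the eight dimensions. This is a sketch of such an analysis tool; `summarize_latent_space` is a hypothetical name, and the model is assumed to return `(reconstruction, latent_code)`:

```python
import torch

def summarize_latent_space(model, loader, device="cpu"):
    """Encode every image in `loader` and report per-dimension
    statistics of the latent codes -- a starting point for the
    kind of latent analysis shown below."""
    model.eval()
    codes = []
    with torch.no_grad():
        for (x,) in loader:
            _, z = model(x.to(device))
            codes.append(z.cpu())
    codes = torch.cat(codes)  # shape (N, latent_dim)
    return {"mean": codes.mean(0), "std": codes.std(0), "codes": codes}
```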

Latent Space Visualization

*(figure: latent space visualization)*

Sample Latent Space Analysis

*(figure: test_image_0_latent_analysis)*

About

U-Net-based autoencoder for compressing and reconstructing astronomical images.
