
DeepGIS-XR

Demo clip:

DeepGIS-XR Demo Video

Advanced Geospatial Visualization Platform with AI-Powered Analysis

DeepGIS-XR is a comprehensive geospatial visualization and analysis platform that combines advanced 3D mapping, AI-powered image analysis, and adaptive sampling systems for Earth and lunar exploration.


🌟 Key Features

🤖 AI-Powered Viewport Analysis

  • Segment Anything Model (SAM): Universal image segmentation with no training required

    • Three model sizes: Base (375MB), Large (1.2GB), Huge (2.4GB)
    • Automatic region detection and boundary identification
    • GeoJSON export with polygon simplification
  • YOLOv8 Detection: Ultra-fast real-time object detection

    • Multiple model sizes: Nano, Small, Medium, Large, XLarge
    • 80 COCO object categories
    • Class filtering support
  • Grounding DINO: Open-vocabulary text-based object detection

    • Detect ANY object by describing it in natural language
    • Text prompts like: "rock . boulder . crater . debris"
    • Supports remote API deployment for GPU acceleration
    • Ideal for domain-specific detection (geology, archaeology, agriculture)
  • Zero-Shot Object Detection: Pre-trained COCO model for 80 object categories

    • Detects: person, car, bicycle, truck, bus, animals, and more
    • Confidence-based filtering
    • Class-labeled visualizations
  • Mask2Former: State-of-the-art instance segmentation

    • More accurate than Zero-Shot for complex scenes
    • Pre-trained on COCO dataset

๐ŸŒ World Sampler - Adaptive Geospatial Sampling

  • Intelligent Spatial Sampling: Probabilistic framework for location sampling
  • Adaptive Learning: Updates distribution based on feedback and rewards
  • Survey Mode: Cycle through sampled points with automatic navigation
  • Spatial Queries: Efficient region-based queries and statistics
  • Multiple Initialization Strategies: Uniform, Gaussian, Gaussian mixture, custom

🌙 Moon Viewer

  • Lunar Visualization: Full Moon globe with LROC QuickMap imagery
  • Apollo Landing Sites: Historical mission locations
  • Aviation-Style Navigation: Heading dial, attitude indicator, sun/moon info
  • LOLA Terrain: High-resolution lunar elevation data
  • Lunar Digital Twin: Navigational decision support system

🌤️ Weather Stations Integration

  • NWS Weather Stations: Real-time weather data from National Weather Service API
  • Multi-State Support: Quickly load stations from California, Arizona, Colorado, and Nevada
  • Interactive Display: Temperature labels, weather icons, and detailed popups
  • Auto-Update: Automatic refresh every 15 minutes for current conditions
  • HUD Integration: Weather stations accessible via bottom toolbar layer button
  • 21 Default Stations: Pre-configured stations across four western US states
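
The widget's data source is the public NWS API (api.weather.gov). Below is a standalone Python sketch of the kind of request it makes; the station ID and User-Agent are placeholders, and the exact fields the widget reads are not documented here:

import requests

# api.weather.gov asks clients to identify themselves via a User-Agent header
headers = {"User-Agent": "deepgis-xr-demo (you@example.com)"}

station = "KPHX"  # placeholder station ID (Phoenix Sky Harbor)
url = f"https://api.weather.gov/stations/{station}/observations/latest"

# The latest observation comes back as GeoJSON; temperature lives in properties
props = requests.get(url, headers=headers).json()["properties"]
print(props["temperature"]["value"], props["temperature"]["unitCode"])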

🗺️ Advanced Geospatial Features

  • 3D Globe Visualization: CesiumJS-powered Earth and Moon globes
  • 3D Buildings Layer: OpenStreetMap buildings with worldwide coverage
    • Toggle via View panel checkbox or press B key
    • Free and open data source (ODbL license)
    • Smart loading: loads once, toggles visibility thereafter
  • Multi-Layer Support: Raster and vector layer management
  • Tile Server Integration: Custom tile server for large datasets
  • 3D Model Support: GLB/GLTF model loading and visualization
  • Coordinate Systems: Support for multiple projections and ellipsoids
  • Drone Navigation: Fly mode and orbit mode for automated camera movement
  • Measurement Tools: Distance, area, and height measurement capabilities

🔗 Experience URL Sharing

  • Shareable URLs: Generate URLs that capture complete camera state
  • Camera Parameters: Position (lon, lat, alt), orientation (heading, pitch, roll)
  • View Mode Preservation: Remembers 2D, 3D, or Columbus view mode
  • Drone State Capture: Includes fly distance, speeds, orbit settings
  • Active Mode Restoration: Restores takeoff, landing, fly, and orbit modes
  • QR Code Generation: Share experiences via QR codes
  • Keyboard Shortcuts: Press S to share current view

🚀 Quick Start

Prerequisites

  • Docker and Docker Compose
  • NVIDIA GPU (optional, for AI features)
  • Python 3.8+ (for local development)

Installation

  1. Clone the repository

    git clone https://github.com/Earth-Innovation-Hub/deepgis-xr.git
    cd deepgis-xr
  2. Start with Docker Compose

    docker-compose up -d
  3. Access the application in your browser (host and port are defined in docker-compose.yml)

Local Development Setup

  1. Install dependencies

    pip install -r requirements.txt
  2. Run migrations

    python manage.py migrate
  3. Start development server

    python manage.py runserver

๐Ÿ—๏ธ Architecture

Technology Stack

Backend:

  • Django 4.0+ (Python web framework)
  • Django REST Framework (API endpoints)
  • PostgreSQL/SQLite (database)
  • Celery (async tasks, optional)

Frontend:

  • CesiumJS (3D globe visualization)
  • JavaScript ES6+ (modern frontend)
  • Bootstrap (UI components)

AI/ML:

  • Segment Anything Model (SAM) - Meta AI
  • Grounding DINO - Open-vocabulary detection (IDEA Research)
  • YOLOv8 - Real-time object detection (Ultralytics)
  • Zero-Shot Detection (Mask R-CNN) - COCO pre-trained
  • Mask2Former - Instance segmentation
  • PyTorch (deep learning framework)

Infrastructure:

  • Docker & Docker Compose (containerization)
  • Nginx (reverse proxy, optional)
  • TileServer GL (tile serving)

Project Structure

deepgis-xr/
├── deepgis_xr/              # Django project
│   ├── apps/
│   │   ├── web/             # Web application
│   │   │   ├── world_sampler_api.py  # Sampling & AI APIs
│   │   │   └── views.py     # View handlers
│   │   ├── core/            # Core models
│   │   └── ml/              # ML models
│   └── settings.py          # Django settings
├── staticfiles/             # Static assets
│   └── web/
│       └── js/
│           ├── main.js              # Main application entry
│           ├── world-sampler-ui.js  # World sampler UI logic
│           ├── widgets/
│           │   └── weather-stations.js  # Weather stations widget
│           └── utils/
│               └── nws-weather-stations.js  # NWS API integration
├── docker-compose.yml       # Docker configuration
├── requirements.txt         # Python dependencies
└── README.md                # This file

🔌 API Endpoints

World Sampler API

  • POST /webclient/sampler/initialize - Initialize new sampler
  • POST /webclient/sampler/sample - Get sample locations
  • POST /webclient/sampler/update - Update distribution
  • GET /webclient/sampler/query - Query spatial region
  • GET /webclient/sampler/statistics - Get distribution stats
  • POST /webclient/sampler/reset - Reset sampler
  • GET /webclient/sampler/history - View sample history
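
The endpoints can be scripted directly. A minimal Python sketch of the sampling loop, assuming a local dev server on port 8000 and illustrative JSON field names (strategy, count, reward) that may differ from the actual request schema:

import requests

BASE = "http://localhost:8000/webclient/sampler"  # assumed dev-server address

# Initialize a sampler (field names here are illustrative, not a documented schema)
requests.post(f"{BASE}/initialize", json={"strategy": "uniform"})

# Draw candidate locations from the current distribution
samples = requests.post(f"{BASE}/sample", json={"count": 5}).json()

# Feed a reward back so the distribution adapts toward useful regions
requests.post(f"{BASE}/update", json={"samples": samples, "reward": 1.0})

# Inspect the updated distribution
print(requests.get(f"{BASE}/statistics").json())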

AI Analysis API

  • POST /webclient/sampler/analyze-viewport - Analyze viewport with AI
    • Parameters:
      • model_type: 'sam', 'yolov8', 'grounding_dino', 'zero_shot', or 'mask2former'
      • image: Base64-encoded viewport image
      • location: Camera position metadata
      • SAM options:
        • sam_model: 'vit_b', 'vit_l', or 'vit_h'
        • min_area: Minimum segment area in pixels
      • YOLOv8 options:
        • yolo_model: 'yolov8n', 'yolov8s', 'yolov8m', 'yolov8l', 'yolov8x'
        • confidence_threshold: 0.0-1.0
        • class_filter: Comma-separated class names (e.g., "person,car,truck")
      • Grounding DINO options:
        • text_prompt: Dot-separated object descriptions (e.g., "rock . boulder . crater")
        • box_threshold: Detection confidence threshold (default: 0.3)
        • text_threshold: Text matching threshold (default: 0.25)
      • Zero-Shot/Mask2Former options:
        • confidence_threshold: 0.0-1.0
    • Returns: GeoJSON with segments/detections, metadata, saved file paths
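
As a rough sketch, the endpoint can also be exercised outside the UI with a base64-encoded screenshot. Parameter names follow the list above; the server address and the location fields are assumptions:

import base64
import requests

with open("viewport.png", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode()

payload = {
    "model_type": "grounding_dino",
    "image": image_b64,
    "location": {"lon": -111.94, "lat": 33.42, "alt": 1500.0},  # illustrative camera metadata
    "text_prompt": "rock . boulder . crater",
    "box_threshold": 0.3,
    "text_threshold": 0.25,
}
resp = requests.post(
    "http://localhost:8000/webclient/sampler/analyze-viewport",  # assumed dev-server address
    json=payload,
)
result = resp.json()  # GeoJSON with detections plus metadata, per the docs above
print(len(result.get("features", [])), "features")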

Labeling API

  • POST /label/semi-supervised/api/generate-labels/ - Generate assisted labels
  • POST /label/semi-supervised/api/save-labels/ - Save labels
  • GET /label/semi-supervised/api/get-images/ - Get label images

🧠 AI Viewport Analysis Architecture

The AI Viewport Analysis system supports multiple detection models, including remote API deployment for GPU-intensive models like Grounding DINO.

System Flow

┌────────────────────────────────────────────────────────────────┐
│                      DeepGIS-XR Frontend                       │
│  AI Viewport Analysis Panel                                    │
│    Analysis Type:  [Grounding DINO (Open Vocab) ▼]             │
│    Text Prompt:    [rock . boulder . crater . debris]          │
│    Box Threshold:  0.30                                        │
│    Text Threshold: 0.25                                        │
│    [ Analyze Viewport ]                                        │
└───────────────────────────────┬────────────────────────────────┘
                                │ POST /webclient/sampler/analyze-viewport
                                │ {image, location, model_type, text_prompt, ...}
                                ▼
┌────────────────────────────────────────────────────────────────┐
│                  DeepGIS-XR Django Backend                     │
│  world_sampler_api.py::analyze_viewport()                      │
│    ├── model_type == 'sam'            → Local SAM inference    │
│    ├── model_type == 'yolov8'         → Local YOLOv8 inference │
│    ├── model_type == 'grounding_dino' → Remote API call ────┐  │
│    ├── model_type == 'zero_shot'      → Local Mask R-CNN    │  │
│    └── model_type == 'mask2former'    → Local Mask2Former   │  │
└───────────────────────────────┬─────────────────────────────│──┘
                                │                             │
                                ▼                             ▼
┌──────────────────────────────────────┐  ┌───────────────────────────┐
│     Local GPU/CPU Processing         │  │  Remote Grounding DINO    │
│  • SAM (vit_b, vit_l, vit_h)         │  │  API Server               │
│  • YOLOv8 (n, s, m, l, x)            │  │    POST /predict          │
│  • Mask R-CNN (COCO)                 │  │    POST /predict_batch    │
│  • Mask2Former (COCO)                │  │    GET  /health           │
└──────────────────────────────────────┘  └───────────────────────────┘

Remote AI APIs

GPU-accelerated AI services run on a dedicated server to provide open-vocabulary detection and segmentation.

Grounding DINO (port 5000): Text-based detection

Grounded-SAM-2 (port 5001): Detection + high-quality segmentation

# Grounding DINO - Detection only
curl -X POST http://192.168.0.232:5000/api/predict \
    -F "file=@image.jpg" -F "text_prompt=rock . boulder . crater"

# Grounded-SAM-2 - Detection + Segmentation
curl -X POST http://192.168.0.232:5001/detect \
    -F "image=@image.jpg" -F "text_prompt=rock . boulder . crater"

# Python client
./grounding_dino_api_client.py --image viewport.jpg --prompt "rock . boulder"

Example Prompts: Geology: "rock . boulder . crater" | Urban: "building . car . tree" | Wildlife: "animal . bird . nest"
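
The same call from Python, mirroring the first curl example above (the response schema is not documented here, so this sketch only inspects the returned keys):

import requests

# Grounding DINO - detection only, same endpoint and form fields as the curl call
with open("image.jpg", "rb") as f:
    resp = requests.post(
        "http://192.168.0.232:5000/api/predict",
        files={"file": f},
        data={"text_prompt": "rock . boulder . crater"},
    )
print(resp.json().keys())  # inspect whichever fields the service returns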


🎯 Usage Examples

AI Viewport Analysis

  1. Navigate to DeepGIS Search (/label/3d/search/)
  2. Open AI Viewport Analysis panel (brain icon in HUD)
  3. Select analysis type:
    • SAM: Universal segmentation (all regions)
    • YOLOv8: Fast real-time detection (80 COCO categories)
    • Grounding DINO: Open-vocabulary detection (describe any object)
    • Zero-Shot: Pre-trained COCO detection
    • Mask2Former: High-accuracy instance segmentation
  4. Configure parameters:
    • SAM: Model size (Base/Large/Huge), minimum segment area
    • YOLOv8: Model size (Nano to XLarge), confidence, class filter
    • Grounding DINO: Text prompt (e.g., "rock . crater . boulder"), thresholds
    • Zero-Shot/Mask2Former: Confidence threshold
  5. Click "Analyze Viewport"
  6. View results on map with color-coded polygons and labels

World Sampler

  1. Initialize sampler with desired strategy
  2. Sample locations based on adaptive distribution
  3. Navigate to samples using survey mode
  4. Update distribution based on feedback
  5. Query regions for spatial analysis

Moon Viewer

  1. Navigate to Moon Viewer (/label/3d/moon/)
  2. Explore lunar surface with LROC imagery
  3. View Apollo landing sites and historical locations
  4. Use navigation widgets for precise control
  5. Adjust camera with aviation-style controls

3D Buildings Layer

  1. Navigate to DeepGIS Search (/label/3d/search/)
  2. Enable buildings:
    • Click "View" button in HUD toolbar
    • Check "3D Buildings (OSM)" checkbox
    • Or press B key to toggle instantly
  3. Best viewed in 3D mode (press V to switch to 3D)
  4. Zoom to urban areas to see detailed building models
  5. Coverage: Worldwide, based on OpenStreetMap data quality

Weather Stations

  1. Navigate to DeepGIS Search (/label/3d/search/)
  2. Click "Weather" button in the bottom HUD toolbar
  3. Toggle "Show Weather Stations" to enable
  4. Load stations:
    • Click "All States" to load all 21 stations (CA, AZ, CO, NV)
    • Or click individual state buttons (CA, AZ, CO, NV) for specific regions
  5. View weather data: Click on station markers for detailed information
  6. Auto-update: Stations refresh every 15 minutes automatically

Experience URL Sharing

  1. Navigate to any view in DeepGIS Search
  2. Configure your experience:
    • Set camera position and orientation
    • Choose view mode (2D/3D/Columbus) - press V to toggle
    • Enable drone modes (fly, orbit, takeoff, landing)
  3. Share your view:
    • Press S or click the Share button
    • URL is automatically copied to clipboard
  4. Generate QR Code: Click QR button to display scannable code
  5. URL includes:
    Parameter                             Description
    lon, lat, alt                         Camera position
    heading, pitch, roll                  Camera orientation
    viewMode                              2D, 3D, or Columbus
    flyDist, hSpeed, vSpeed               Drone fly settings
    orbRadius, orbPitch, orbYaw           Orbit settings
    orbiting, flying, takeoff, landing    Active mode flags
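
A hypothetical shared URL, assuming standard query-string encoding of the parameters above (host and values are illustrative):

https://<your-host>/label/3d/search/?lon=-111.94&lat=33.42&alt=1500&heading=90&pitch=-30&roll=0&viewMode=3D&flyDist=100&orbiting=true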

Keyboard Shortcuts

Press H to view all shortcuts in-app. Key shortcuts include:

Key        Action
B          Toggle 3D Buildings
V          Toggle View Mode (2D/3D/Columbus)
F          Toggle Full Screen
H          Show Keyboard Shortcuts Help
S          Share Current View
Q          Toggle QR Code
T          Hide/Show Toolbars
W          Toggle Wireframe
D          Drone Fly Forward
U          Takeoff (Up)
L          Land
O          Start Orbit
P          Pause/Stop Orbit
J          Toggle Virtual Joysticks
↑ ↓ ← →    Camera Perspectives (N/S/W/E)
ESC        Stop Orbit / Close Panels

🔧 Configuration

Environment Variables

DEBUG=True
DJANGO_SETTINGS_MODULE=deepgis_xr.settings
NVIDIA_VISIBLE_DEVICES=all  # For GPU support

# Remote AI Services (optional - defaults in docker-compose.yml)
# GROUNDING_DINO_API_URL=http://192.168.0.232:5000

Docker Configuration

The docker-compose.yml includes:

  • Web service: Django application with GPU support
  • Tile server: MapTiler TileServer GL for tile serving
  • Volume mounts:
    • dreams_laboratory/scripts - ML model scripts
    • deepgis_results - AI analysis results (shared with host)

GPU Support

To enable GPU for AI features:

  1. Install NVIDIA Docker runtime
  2. Uncomment GPU configuration in docker-compose.yml
  3. Ensure NVIDIA_VISIBLE_DEVICES=all is set
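
For reference, the usual Compose pattern for exposing NVIDIA GPUs looks roughly like the following. This is a sketch of standard Docker Compose syntax, not necessarily the exact block shipped in this repo's docker-compose.yml:

services:
  web:
    environment:
      - NVIDIA_VISIBLE_DEVICES=all
    deploy:
      resources:
        reservations:
          devices:
            # Reserve all available NVIDIA GPUs for the web service
            - driver: nvidia
              count: all
              capabilities: [gpu]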

📊 Recent Updates (December 2025)

  • ✅ 3D Buildings Layer: OpenStreetMap buildings worldwide; toggleable via UI or B key; free and open data (ODbL)
  • ✅ Experience URL Sharing: Complete camera state sharing via URL; supports takeoff/landing/fly/orbit modes; QR code generation
  • ✅ Grounding DINO: Open-vocabulary detection with text prompts; remote API architecture for GPU servers
  • ✅ Weather Stations: NWS integration with 21 stations across CA, AZ, CO, NV; HUD toolbar integration; auto-update every 15 min
  • ✅ UI/UX: HUD toolbar with floating panels; aviation-style navigation widgets; drone fly/orbit modes
  • ✅ AI/ML: YOLOv8 and Mask2Former integration; SAM optimization; clean viewport capture
  • ✅ Performance: Memory optimization; improved error handling; duplicate entity prevention
  • ✅ View Mode Switching: 2D/3D/Columbus view toggle with keyboard shortcut (V key); auto-restore from URL

๐Ÿค Contributing

Contributions are welcome! Please follow these guidelines:

  1. Fork the repository
  2. Create a feature branch (git checkout -b feature/amazing-feature)
  3. Commit your changes (git commit -m 'Add amazing feature')
  4. Push to the branch (git push origin feature/amazing-feature)
  5. Open a Pull Request

Code Style

  • Follow PEP 8 for Python code
  • Use ESLint for JavaScript
  • Add docstrings to functions and classes
  • Include tests for new features

๐Ÿ“ License

This project is licensed under the MIT License - see the LICENSE file for details.


๐Ÿ™ Acknowledgments

DeepGIS-XR builds on concepts and systems originally developed for the Oceanographic Decision Support System (ODSS, MBARI), the Agricultural Decision Support System (AgDSS, University of Pennsylvania), the OpenUAV Project (University of Pennsylvania, Arizona State University), and DeepGIS (Arizona State University). The DeepGIS project acknowledges support from the National Science Foundation, the United States Department of Agriculture, and the National Aeronautics and Space Administration.

Technology Acknowledgments

  • CesiumJS: 3D globe visualization
  • Meta AI: Segment Anything Model
  • IDEA Research: Grounding DINO open-vocabulary detection
  • Ultralytics: YOLOv8 real-time detection
  • NASA/GSFC/ASU: LROC QuickMap lunar imagery
  • COCO Dataset: Object detection categories

📧 Contact & Support


🗺️ Roadmap

✅ Completed

  • Zero-Shot Detection integration
  • SAM viewport analysis
  • YOLOv8 real-time detection
  • Grounding DINO open-vocabulary detection
  • Remote AI API integration architecture
  • World Sampler adaptive sampling
  • Moon viewer with navigation widgets
  • Weather stations integration
  • HUD toolbar and panel system
  • Multi-state weather station support
  • Experience URL sharing with full state capture
  • QR code generation for mobile sharing
  • 2D/3D/Columbus view mode switching
  • 3D Buildings layer with OpenStreetMap data

🔄 Q1 2026 - Near Term

  • Mars Terrain Viewer: Extend lunar capabilities to Mars with HiRISE/CTX imagery
  • Mission Export Formats: MAVLink waypoint export for drone autopilots
  • Enhanced Annotation Tools: Polygon editing, snapping, and undo/redo
  • Time-Series Layers: Temporal slider for historical imagery comparison
  • Geofence Alerts: Real-time boundary violation notifications
  • WebXR/VR Support: Immersive 3D globe exploration with VR headsets
  • Real-Time Telemetry: Live drone/vehicle position tracking via MAVLink/ROS
  • Collaborative Sessions: Multi-user annotation with real-time sync
  • Custom Model Training: Upload datasets and train custom detection models
  • Advanced Export: Shapefile, KML, GeoPackage, and Cloud Optimized GeoTIFF
  • Performance Dashboard: GPU/memory monitoring and optimization hints

🔮 Q1-Q4 2026 - Long Term

  • CLIP/VLM Semantic Search: Natural language queries for geospatial features
  • Autonomous Survey Planning: AI-optimized flight path generation
  • Digital Twin Integration: Real-time sensor fusion and 3D reconstruction
  • Model Marketplace: Community-shared detection models and configs
  • Edge Deployment: Lightweight inference for embedded/field devices
  • AR Field Overlay: Mobile AR for on-site navigation and annotation
  • Integration with Google Earth Engine
  • Integration with OpenTopography and support for point cloud visualization (LAS/LAZ)
  • 3D building/structure modeling from imagery
  • Automated change detection between time periods

⚠️ Disclaimer

This software is provided "as is" without warranty. Use at your own risk. Intended for research and educational purposes. AI analysis results should be validated independently for critical applications.


Powered by Earth Innovation Hub (an Arizona STEAM non-profit corporation)
