ComfyUI docker images for use in GPU cloud and local environments. Includes AI-Dock base for authentication and improved user experience.
RunPod serverless worker for Fooocus-API. Runs standalone or with a network volume.
The Big List of Protests - An AI-assisted Protest Flyer parser and event aggregator
Production-ready RunPod serverless endpoint and pod for Qwen-Image (20B) - Text-to-image generation with exceptional English and Chinese text rendering
Streamlit web app for scheduling RunPod serverless models with automatic cronjobs to prevent cold starts. Includes Slack notifications and real-time monitoring.
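A keep-warm scheduler like this one typically boils down to submitting a periodic lightweight job so at least one worker stays hot. Below is a minimal sketch of that idea, not the repo's actual implementation: the `/run` route is RunPod's standard async-job endpoint, but the endpoint ID, the `RUNPOD_API_KEY` environment variable, and the no-op `{"ping": True}` payload are all illustrative assumptions.

```python
# Keep-warm sketch: periodically queue a trivial job so a worker stays warm.
# Assumptions: placeholder endpoint ID, RUNPOD_API_KEY env var, and a worker
# handler that tolerates a no-op "ping" input.
import os
import time

import requests

ENDPOINT_ID = "your-endpoint-id"  # hypothetical placeholder
API_KEY = os.environ["RUNPOD_API_KEY"]

def ping_endpoint() -> None:
    """Submit a lightweight job via RunPod's standard /run route."""
    resp = requests.post(
        f"https://api.runpod.ai/v2/{ENDPOINT_ID}/run",
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"input": {"ping": True}},  # assumed no-op payload
        timeout=30,
    )
    resp.raise_for_status()
    print("queued keep-warm job:", resp.json().get("id"))

if __name__ == "__main__":
    # A real deployment would use cron; this loop just illustrates the cadence.
    while True:
        ping_endpoint()
        time.sleep(300)  # every 5 minutes
```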
RunPod Serverless Worker for the Stable Diffusion WebUI Forge API
Runpod-LLM provides ready-to-use container scripts for easily running large language models (LLMs) on RunPod.
Production-ready RunPod serverless endpoint for Kokoro TTS. Features high-quality text-to-speech, voice mixing, word-level timestamps, and phoneme generation. Optimized for fast cold starts and auto-scaling.
Headless threejs using Puppeteer
A Rust SDK implementation of the Runpod API that enables seamless integration of GPU infrastructure into your applications, workflows, and automation systems.
RunPod serverless worker for vLLM text-generation inference. Simple, optimized, and customizable.
Python client script for sending prompts to A1111 serverless worker endpoints and saving the results.
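A client script like this generally amounts to one POST against RunPod's synchronous `/runsync` route followed by base64-decoding the returned images. Here is a minimal sketch under those assumptions; the endpoint ID, the txt2img-style input schema, and the `output["images"]` field are placeholders rather than this repo's actual contract, so check the worker's handler for the exact payload.

```python
# Hedged A1111-on-RunPod client sketch.
# Assumptions: placeholder endpoint ID, RUNPOD_API_KEY env var, an A1111
# txt2img-style input, and base64 images passed through under output["images"].
import base64
import os

import requests

ENDPOINT_ID = "your-endpoint-id"  # hypothetical placeholder
API_KEY = os.environ["RUNPOD_API_KEY"]

payload = {
    "input": {  # assumed txt2img-style input schema
        "prompt": "a lighthouse at dusk, oil painting",
        "steps": 20,
    }
}

resp = requests.post(
    f"https://api.runpod.ai/v2/{ENDPOINT_ID}/runsync",  # RunPod's sync route
    headers={"Authorization": f"Bearer {API_KEY}"},
    json=payload,
    timeout=600,
)
resp.raise_for_status()

# A1111's txt2img API returns base64-encoded images; the worker is assumed
# to pass them through unchanged.
images = resp.json()["output"]["images"]
for i, img_b64 in enumerate(images):
    with open(f"result_{i}.png", "wb") as f:
        f.write(base64.b64decode(img_b64))
```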
Deploy FinGPT-MT-Llama-3-8B-LoRA on RunPod Serverless with llama.cpp + CUDA. Auto-scaling, OpenAI-compatible API, Q4_K_M quantization. Pay-per-use serverless inference.
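Because the endpoint advertises an OpenAI-compatible API, a standard OpenAI client pointed at the RunPod URL should be all a caller needs. A hedged sketch follows, assuming the `/openai/v1` route that RunPod serverless workers commonly expose and a guessed model identifier; consult the repo for the exact base URL and model name.

```python
# Sketch of calling an OpenAI-compatible RunPod endpoint.
# Assumptions: the /openai/v1 route, a placeholder endpoint ID, and the
# model name below, which is a guess, not the repo's confirmed identifier.
import os

from openai import OpenAI

client = OpenAI(
    base_url="https://api.runpod.ai/v2/your-endpoint-id/openai/v1",
    api_key=os.environ["RUNPOD_API_KEY"],
)

completion = client.chat.completions.create(
    model="FinGPT-MT-Llama-3-8B-LoRA",  # assumed model identifier
    messages=[
        {"role": "user", "content": "Summarize today's bond market outlook."}
    ],
    max_tokens=256,
)
print(completion.choices[0].message.content)
```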
Adds speaker diarization to the faster-whisper RunPod worker.
Build and deploy the PGCView pipeline endpoint in a RunPod serverless GPU environment.
This repository contains the RunPod serverless component of the SDGP project "quizzifyme".
A Chrome extension that helps improve reading comprehension by generating an interactive, multiple choice quiz for any website
An AI/ML project deployment template that uses LitServe and RunPod as the backend service and Gradio for the UI.
MLOps library for LLM deployment with the vLLM engine on RunPod's infrastructure.
This project hosts the LLaMA 3.1 model via llama.cpp on RunPod's serverless platform using Docker. It features a Python 3.11 environment with CUDA 12.2, enabling scalable AI request processing through configurable payload options and GPU support.