Astra is a modern, real-time chat application that leverages Google's Gemini AI for intelligent conversations. Built with React, TypeScript, and Vite, it offers a seamless chat experience with features like conversation management, real-time typing indicators, and AI-powered responses.
- Real-time AI chat with streaming via Google Gemini, providing token-by-token responses for instant feedback
- Session-aware conversations saved in Supabase (`chat_sessions`, `chat_messages`)
- Automatic session creation on the server, propagated to the client via the `X-Conversation-Id` header
- Secure authentication with Supabase and Bearer tokens
- Sidebar conversation list with TanStack Query caching and virtualization for large histories
- Smart auto-scroll that follows streaming output only while the user is at the bottom (see the sketch after this list)
- Keyboard-friendly UX (Enter to send, Shift+Enter for newline)
- Auto-resizing input for comfortable typing backed by a custom hook
- TypeScript-first codebase for safety and maintainability
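The auto-scroll behavior boils down to one rule: follow the stream only if the reader was already at the bottom. Below is a minimal sketch of such a hook; the hook name, the 80px threshold, and the ref wiring are illustrative assumptions, not the project's actual implementation.

```tsx
import { useEffect, useRef } from "react";

// Hypothetical hook: follow streamed content only while the user sits at the bottom.
export function useSmartAutoScroll<T extends HTMLElement>(content: string) {
  const containerRef = useRef<T | null>(null);
  const stickToBottomRef = useRef(true);

  // Track whether the user is currently near the bottom of the container.
  useEffect(() => {
    const el = containerRef.current;
    if (!el) return;
    const onScroll = () => {
      const distance = el.scrollHeight - el.scrollTop - el.clientHeight;
      stickToBottomRef.current = distance < 80;
    };
    el.addEventListener("scroll", onScroll);
    return () => el.removeEventListener("scroll", onScroll);
  }, []);

  // When new streamed text arrives, scroll only if the user was already at the
  // bottom; otherwise leave their scroll position untouched.
  useEffect(() => {
    const el = containerRef.current;
    if (el && stickToBottomRef.current) {
      el.scrollTop = el.scrollHeight;
    }
  }, [content]);

  return containerRef;
}
```

A component would attach the returned ref to its scrollable message list and pass the streaming message text as `content`.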
- React 19 with TypeScript
- Vite for fast development and building
- TailwindCSS for styling
- shadcn/ui for UI components
- React Hook Form with Zod for form validation
- TanStack Query for server state management
- Lucide React for icons
- Supabase
  - Authentication
  - Database
  - Real-time subscriptions
- Google Gemini AI for chat responses
  - Streaming API implementation
- ElevenLabs for voice chat
Path: `src/components/chat/Conversation.tsx` in `handleSendMessage`

```tsx
const response = await fetch(`${API_URL}/api/chat`, {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    ...(accessToken ? { Authorization: `Bearer ${accessToken}` } : {}),
  },
  body: JSON.stringify({
    messages: updatedMessages.map(({ sender, content }) => ({
      role: sender === "user" ? "user" : "assistant",
      content,
    })),
    // Prefer the globally selected conversation; fall back to local state when missing
    conversationId: selectedId ?? conversationId,
    message: currentInput,
    wantTitle: false,
  }),
});

// Adopt the server-issued conversation id so subsequent sends stay consistent
const serverConversationId = response.headers.get("X-Conversation-Id");
if (serverConversationId) {
  if (conversationId !== serverConversationId)
    setConversationId(serverConversationId);
  if (selectedId !== serverConversationId) setSelectedId(serverConversationId);
}

const reader = response.body!.getReader();
const decoder = new TextDecoder();
let aiResponse = "";
let lastUpdateTime = 0;
const throttleMs = 16; // ~60fps UI updates

while (true) {
  const { done, value } = await reader.read();
  if (done) break;
  aiResponse += decoder.decode(value, { stream: true });
  const now = Date.now();
  if (now - lastUpdateTime >= throttleMs) {
    lastUpdateTime = now;
    setMessages((prev) =>
      prev.map((m) =>
        m.id === aiMessageId ? { ...m, content: aiResponse } : m
      )
    );
  }
}

// Flush any bytes still buffered in the decoder and render the final text,
// since the throttle above may have skipped the last chunk
aiResponse += decoder.decode();
setMessages((prev) =>
  prev.map((m) => (m.id === aiMessageId ? { ...m, content: aiResponse } : m))
);
```

Path: `server/index.js` in `app.post("/api/chat", ...)`
```js
// Create a session if conversationId is missing
let sessionId = conversationId;
if (!sessionId) {
  const { data: session } = await supabase
    .from("chat_sessions")
    .insert({ user_id: user.id })
    .select()
    .single();
  sessionId = session.id;
}

// Expose the custom headers so the browser can read them
res.setHeader("Content-Type", "text/plain");
res.setHeader(
  "Access-Control-Expose-Headers",
  "X-Conversation-Id, X-Generated-Title"
);
res.setHeader("Transfer-Encoding", "chunked");
res.setHeader("X-Conversation-Id", sessionId);
if (generatedTitle) res.setHeader("X-Generated-Title", generatedTitle);

// Stream the AI response
const result = await chat.sendMessageStream(userMessage.content);
let fullResponse = "";
for await (const chunk of result.stream) {
  const chunkText = chunk.text();
  fullResponse += chunkText;
  res.write(chunkText);
}
res.end();
```

Path: `src/components/AppSidebar.tsx`
```tsx
const { data: conversations = [] } = useQuery({
  queryKey: ["chat_sessions"],
  queryFn: async () => {
    const { data } = await supabase
      .from("chat_sessions")
      .select("id, title")
      .order("created_at", { ascending: false });
    return data ?? [];
  },
  staleTime: 1000 * 60 * 5,
});

const rowVirtualizer = useVirtualizer({
  count: conversations.length,
  getScrollElement: () => parentRef.current,
  estimateSize: () => 60,
  overscan: 5,
});

function handleConversationClick(id: string) {
  setSelectedId(id);
  onConversationSelect();
}
```

Path: `src/components/Dashboard.tsx`
```tsx
useEffect(() => {
  if (!selectedId) return;
  (async () => {
    const fetched = await getChatMessages(selectedId);
    const formatted = (fetched ?? []).map((msg: any) => ({
      id: String(msg.id),
      content: msg.content,
      sender: (msg.role === "assistant" ? "ai" : "user") as "user" | "ai",
      timestamp: msg.created_at,
    }));
    setMessages(formatted);
    setHasActiveChat(true);
  })();
}, [selectedId]);
```

- Context API for global state (e.g. `SelectedConversationContext`; see the sketch after this list)
- Custom hooks (e.g. `use-auto-resize-textarea`) for reusable UX logic
- Component composition: `Dashboard` orchestrates `NewChat` vs `Conversation` vs `VoiceChat`
- TanStack Query for server state and caching (`chat_sessions`)
- TypeScript everywhere for safety and IDE support
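As an illustration of the Context API piece, a `SelectedConversationContext` along the lines of the sketch below would let the sidebar, dashboard, and chat components agree on the active conversation; the exact value shape and provider wiring are assumptions, not the project's actual code.

```tsx
import { createContext, useContext, useState, type ReactNode } from "react";

// Hypothetical context shape; the real implementation may expose more state.
interface SelectedConversationValue {
  selectedId: string | null;
  setSelectedId: (id: string | null) => void;
}

const SelectedConversationContext =
  createContext<SelectedConversationValue | undefined>(undefined);

export function SelectedConversationProvider({ children }: { children: ReactNode }) {
  const [selectedId, setSelectedId] = useState<string | null>(null);
  return (
    <SelectedConversationContext.Provider value={{ selectedId, setSelectedId }}>
      {children}
    </SelectedConversationContext.Provider>
  );
}

export function useSelectedConversation() {
  const ctx = useContext(SelectedConversationContext);
  if (!ctx) {
    throw new Error("useSelectedConversation must be used within its provider");
  }
  return ctx;
}
```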
- Input sanitization
- Form validation
- Auth token management
- Rate limiting
- Error handling
- Clone the repository:

  ```bash
  git clone https://github.com/ShyneADL/astra.git
  cd astra
  ```

- Install dependencies:

  ```bash
  npm install
  ```

- Set up environment variables:

  ```bash
  cp .env.example .env
  ```

- Run the development server:

  ```bash
  npm run dev
  ```

- Run the API server:

  ```bash
  cd server
  npm install
  npm run server
  ```

  Ensure `.env` in `server/` contains `SUPABASE_URL`, `SUPABASE_SERVICE_ROLE_KEY`, and `GEMINI_API_KEY`.
- Fork the repository
- Create your feature branch:

  ```bash
  git checkout -b feature/AmazingFeature
  ```

- Commit your changes:

  ```bash
  git commit -m 'Add some AmazingFeature'
  ```

- Push to the branch:

  ```bash
  git push origin feature/AmazingFeature
  ```

- Open a Pull Request
- Use conventional commits
- Keep commits atomic and focused
- Include relevant tests
- Update documentation
- Input Validation
  - Zod schemas for form validation (see the sketch after this list)
  - Sanitization of user inputs
  - Type checking with TypeScript
- Authentication
  - JWT token management
  - Secure session handling
- API Security
  - Rate limiting
  - CORS configuration
  - Error handling
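As a rough illustration of the validation layer, the sketch below shows how a chat-input form might be validated with Zod and React Hook Form (both are in the stack); the schema fields and limits are assumptions, not the project's actual rules.

```tsx
import { z } from "zod";
import { useForm } from "react-hook-form";
import { zodResolver } from "@hookform/resolvers/zod";

// Hypothetical schema: field names and limits are illustrative only.
const chatInputSchema = z.object({
  message: z
    .string()
    .trim()
    .min(1, "Message cannot be empty")
    .max(4000, "Message is too long"),
});

type ChatInput = z.infer<typeof chatInputSchema>;

export function useChatInputForm() {
  // React Hook Form validates submissions against the Zod schema.
  return useForm<ChatInput>({
    resolver: zodResolver(chatInputSchema),
    defaultValues: { message: "" },
  });
}
```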
This project is licensed under the MIT License - see the LICENSE file for details.
- Google Gemini AI team
- Supabase team
- shadcn/ui
Built with ❤️ by ShyneADL
- Node.js (v18 or higher)
- Supabase account
- Google AI API key (for Gemini and embeddings)
Create a `.env` file in the `server` directory with the following variables:

```bash
# Supabase Configuration
SUPABASE_URL=your_supabase_url
SUPABASE_SERVICE_ROLE_KEY=your_supabase_service_role_key

# Google AI Configuration
GEMINI_API_KEY=your_gemini_api_key

# Server Configuration
PORT=3001
```
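For reference, a server entry point would typically load and sanity-check these variables before starting. The snippet below is a hedged sketch assuming `dotenv`; the project's `server/index.js` may wire this up differently.

```ts
import "dotenv/config";

// Hypothetical startup check; names match the variables listed above.
const required = ["SUPABASE_URL", "SUPABASE_SERVICE_ROLE_KEY", "GEMINI_API_KEY"] as const;

for (const name of required) {
  if (!process.env[name]) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
}

const port = Number(process.env.PORT ?? 3001);
console.log(`Configuration loaded; server will listen on port ${port}`);
```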
- Run the SQL migration in your Supabase dashboard:

  ```sql
  -- Execute the contents of server/migrations/create-therapy-knowledge-table.sql
  ```

- The therapy knowledge base will be automatically populated on first server startup.

- Install client dependencies:

  ```bash
  npm install
  ```

- Install server dependencies:

  ```bash
  cd server
  npm install
  ```

- Start the server:

  ```bash
  cd server
  npm run dev:server
  ```

- Start the client (in a new terminal):

  ```bash
  npm run dev
  ```
The RAG system consists of three main components:
- Generates vector embeddings using Google's text-embedding-004 model
- Calculates cosine similarity for document retrieval (see the sketch after this list)
- Stores therapy knowledge with embeddings in Supabase
- Performs semantic search for relevant therapeutic content
- Manages conversation history context
- Main RAG orchestration function
- Detects topic deviation using keyword analysis
- Builds therapeutic prompts with relevant context
- Maintains focus on mental health conversations
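To make the retrieval step concrete, the sketch below shows one way to generate an embedding with `text-embedding-004` via the `@google/generative-ai` SDK and rank stored documents by cosine similarity. The function names, document shape, and top-k default are assumptions rather than the project's actual code.

```ts
import { GoogleGenerativeAI } from "@google/generative-ai";

const genAI = new GoogleGenerativeAI(process.env.GEMINI_API_KEY!);
const embedder = genAI.getGenerativeModel({ model: "text-embedding-004" });

// Embed a piece of text into a numeric vector.
async function embed(text: string): Promise<number[]> {
  const result = await embedder.embedContent(text);
  return result.embedding.values;
}

// Cosine similarity: dot(a, b) / (|a| * |b|), in the range [-1, 1].
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Rank stored documents by similarity to the user's query.
async function retrieveTopK(
  query: string,
  docs: { content: string; embedding: number[] }[],
  k = 3
) {
  const queryEmbedding = await embed(query);
  return docs
    .map((doc) => ({ ...doc, score: cosineSimilarity(queryEmbedding, doc.embedding) }))
    .sort((a, b) => b.score - a.score)
    .slice(0, k);
}
```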
The system automatically detects when users try to discuss non-mental health topics and gently redirects them back to therapeutic conversations.
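A simplified version of that keyword-based check might look like the following; the keyword list and the all-or-nothing threshold are illustrative assumptions, not the project's actual heuristics.

```ts
// Hypothetical keyword list; the real service likely maintains a richer one.
const MENTAL_HEALTH_KEYWORDS = [
  "anxiety", "anxious", "depression", "depressed", "stress", "stressed",
  "therapy", "therapist", "feeling", "feelings", "emotion", "cope", "coping",
  "panic", "grief", "trauma", "sleep", "lonely", "overwhelmed",
];

// Returns true when a message contains none of the expected keywords,
// signalling that the conversation may have drifted off-topic.
function isTopicDeviation(message: string): boolean {
  const normalized = message.toLowerCase();
  return !MENTAL_HEALTH_KEYWORDS.some((keyword) => normalized.includes(keyword));
}

// Example: an off-topic question triggers a gentle redirect.
if (isTopicDeviation("Can you help me fix my car engine?")) {
  console.log("Redirecting the conversation back to therapeutic support.");
}
```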
- `therapeutic_approach`: CBT, DBT, and other therapy methods
- `therapeutic_technique`: Active listening, validation, etc.
- `mental_health_condition`: Anxiety and depression support strategies
- `crisis_management`: Safety protocols and resource guidance
- `professional_ethics`: Boundary setting and scope limitations
The system includes built-in crisis detection and appropriate resource guidance while maintaining professional boundaries.
- `POST /api/chat` - Main chat endpoint with RAG integration
- `GET /api/health` - Health check endpoint
- Follow the established code patterns
- Ensure all therapeutic content is evidence-based
- Test topic deviation scenarios thoroughly
- Maintain professional therapeutic boundaries in all responses
This project is licensed under the ISC License.
