refactor vector search to chat widget #49
📝 Walkthrough

Adds a full RAG chat feature: a server endpoint, use-case logic, an OpenAI chat client with retries, a React hook with sessionStorage persistence and optimistic updates, and UI chat components, and replaces the admin vector-search page with a chat-driven Course Assistant.
Sequence Diagram

```mermaid
sequenceDiagram
    participant User
    participant ChatUI as Chat UI
    participant Hook as useRagChat (client)
    participant ServerFn as ragChatFn (server)
    participant RAGUseCase as ragChatUseCase
    participant VectorDB as Vector Search
    participant OpenAI as OpenAI API
    User->>ChatUI: enters message
    ChatUI->>Hook: sendMessage(content)
    Hook->>Hook: append optimistic user message / save to sessionStorage
    Hook->>ServerFn: POST { userMessage, conversationHistory }
    ServerFn->>RAGUseCase: invoke ragChatUseCase
    RAGUseCase->>RAGUseCase: create embedding for userMessage
    RAGUseCase->>VectorDB: search top chunks (limit 10)
    VectorDB-->>RAGUseCase: search results
    RAGUseCase->>RAGUseCase: format context + extract sources
    RAGUseCase->>OpenAI: createChatCompletion(messages + context)
    OpenAI-->>RAGUseCase: assistant response
    RAGUseCase-->>ServerFn: { response, sources }
    ServerFn-->>Hook: mutation success
    Hook->>Hook: append assistant message + sources / save to sessionStorage
    Hook->>ChatUI: updated messages & sources
    ChatUI-->>User: display assistant reply
```
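The diagram above can be condensed into a small, runnable TypeScript sketch. The function names mirror those in the diagram, but the bodies are stubs invented here for illustration — the real implementations live in `src/use-cases/rag-chat.ts` and `src/lib/openai-chat.ts`:

```typescript
// Stubbed sketch of the embed -> search -> format -> complete flow above.
type SearchResult = { segmentId: number; chunkText: string; similarity: number };
type ChatMessage = { role: "system" | "user"; content: string };

// Stubs standing in for the real embedding, vector-search, and chat clients.
const generateEmbedding = (text: string): number[] => [text.length];
const searchByEmbedding = (_embedding: number[], limit: number): SearchResult[] =>
  [{ segmentId: 1, chunkText: "course intro", similarity: 0.91 }].slice(0, limit);
const createChatCompletion = (messages: ChatMessage[]): string =>
  `answer built from ${messages.length} messages`;

function ragChatSketch(userMessage: string) {
  const embedding = generateEmbedding(userMessage);           // 1. embed the question
  const results = searchByEmbedding(embedding, 10);           // 2. top-10 chunk search
  const context = results.map((r) => r.chunkText).join("\n"); // 3. format context
  const response = createChatCompletion([                     // 4. chat completion
    { role: "system", content: `Use this context:\n${context}` },
    { role: "user", content: userMessage },
  ]);
  return { response, sources: results };                      // 5. answer + sources
}
```

The hook-side persistence and optimistic-update steps are omitted here; this only traces the server path.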
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~50 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
Actionable comments posted: 2
🤖 Fix all issues with AI agents
In @src/hooks/use-rag-chat.ts:
- Around line 68-109: The mutationFn is using a stale closure over messages so
conversationHistory can miss recent optimistic messages; fix by introducing a
messagesRef that you keep in sync whenever you call setMessages (use functional
updates inside onMutate and any other setters to update state, then set
messagesRef.current to the new array), and change mutationFn to read
conversationHistory from messagesRef.current when calling ragChatFn; ensure
onMutate still creates the optimistic user message (using setMessages(prev =>
{...}) and then sets messagesRef.current to that new array) so the server always
receives the latest messages.
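The stale-closure problem this prompt describes can be reproduced outside React with plain closures and a mutable ref object. The names (`setMessages`, `messagesRef`) are analogies for the hook, not its actual code:

```typescript
// Framework-free illustration of the stale-closure bug and the ref fix.
type Message = { role: "user" | "assistant"; content: string };

let state: Message[] = [];              // stands in for React state
const messagesRef = { current: state }; // stands in for useRef

function setMessages(updater: (prev: Message[]) => Message[]) {
  state = updater(state);
  messagesRef.current = state;          // keep the ref in sync on every update
}

// Closures created "on first render": one captures the array value directly,
// the other always reads through the ref.
const captured = state;
const staleHistory = () => captured;            // stale: frozen at creation time
const freshHistory = () => messagesRef.current; // fresh: follows every update

setMessages((prev) => [...prev, { role: "user", content: "hi" }]); // optimistic append
// staleHistory() still returns the pre-update (empty) array;
// freshHistory() sees the appended message.
```

This is why the review asks `mutationFn` to read `conversationHistory` from `messagesRef.current` rather than from the `messages` binding it closed over.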
In @src/use-cases/rag-chat.ts:
- Around line 88-122: The conversation history messages lack a content length
cap allowing very large inputs to be injected into the prompt; update the
validation schema used by ragChatInputSchema by adding a maximum length (for
example .max(2000)) to the content field in conversationMessageSchema in
src/fn/rag-chat.ts so each ConversationMessage.content is bounded before
buildMessages inserts recentHistory entries into the prompt; ensure the updated
conversationMessageSchema is exported/used by ragChatInputSchema so the
server-side validation rejects overly long message contents.
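The fix itself is roughly a one-line schema change (`content: z.string().min(1).max(2000)`). Its effect can be sketched without Zod as a plain guard — the constant and helper names here are hypothetical, not from the repo:

```typescript
// Hypothetical guard mirroring what .max(2000) on conversationMessageSchema enforces.
const MAX_MESSAGE_CONTENT_LENGTH = 2000; // cap suggested by the review

type ConversationMessage = { role: "user" | "assistant"; content: string };

// Every history entry must be non-empty and within the cap.
function isHistoryWithinBounds(history: ConversationMessage[]): boolean {
  return history.every(
    (m) => m.content.length > 0 && m.content.length <= MAX_MESSAGE_CONTENT_LENGTH
  );
}
```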
🧹 Nitpick comments (6)
src/lib/openai-chat.ts (1)
32-42: Consider exporting `ChatCompletionError` for better error handling by consumers.

The custom error class provides useful context (`code`, `status`, `context`) but isn't exported. Consumers may want to catch this specific error type to access these properties.

Suggested change:

```diff
-class ChatCompletionError extends Error {
+export class ChatCompletionError extends Error {
```

src/routes/admin/vector-search/-components/source-videos-panel.tsx (1)
67-78: Consider extracting similarity thresholds as constants.

The magic numbers `0.8` and `0.6` for badge variant selection could be extracted to named constants at the top of the file for clarity.

Suggested change:

```diff
+const HIGH_SIMILARITY_THRESHOLD = 0.8;
+const MEDIUM_SIMILARITY_THRESHOLD = 0.6;
+
 export function SourceVideosPanel({ sources }: SourceVideosPanelProps) {
```

Then in the Badge:

```diff
 <Badge variant={
-  source.similarity > 0.8
+  source.similarity > HIGH_SIMILARITY_THRESHOLD
     ? "default"
-    : source.similarity > 0.6
+    : source.similarity > MEDIUM_SIMILARITY_THRESHOLD
       ? "secondary"
       : "outline"
 }
```

src/hooks/use-rag-chat.ts (2)
99-108: Error rollback may remove the wrong message in edge cases.

The error handler assumes the last message is the user message that failed. If timing issues occur or state updates are batched unexpectedly, this could remove the wrong message. Consider using the mutation context from `onMutate` to track which message to remove.

Proposed improvement using mutation context:

```diff
 onMutate: (userMessage) => {
+  const userMsgId = crypto.randomUUID();
   const userMsg: ConversationMessage = {
-    id: crypto.randomUUID(),
+    id: userMsgId,
     role: "user",
     content: userMessage,
     timestamp: new Date().toISOString(),
   };
   setMessages((prev) => [...prev, userMsg]);
   setCurrentSources([]);
+  return { userMsgId };
 },
 onSuccess: (result) => {
   // ... unchanged
 },
-onError: (error) => {
+onError: (error, _variables, context) => {
   setMessages((prev) => {
-    const lastMessage = prev[prev.length - 1];
-    if (lastMessage?.role === "user") {
-      return prev.slice(0, -1);
-    }
-    return prev;
+    if (context?.userMsgId) {
+      return prev.filter((msg) => msg.id !== context.userMsgId);
+    }
+    return prev;
   });
   console.error("[RAG Chat] Error:", error);
 },
```
111-117: Silent return on validation failure may confuse callers.

When `content.trim()` is empty or `mutation.isPending` is true, the function returns silently. Consider returning a boolean or throwing to inform callers of the outcome.

src/use-cases/rag-chat.ts (2)
67-86: First chunk only: higher-similarity chunks from the same segment are discarded.

When deduplicating by `segmentId`, only the first occurrence is kept. If the search results aren't pre-sorted by similarity within each segment, a lower-similarity chunk may be retained while a higher-similarity one is discarded. Consider explicitly keeping the highest-similarity chunk per segment.

Proposed fix to keep the highest similarity per segment:

```diff
 function searchResultsToSources(searchResults: SearchResult[]): VideoSource[] {
   const uniqueSources = new Map<number, VideoSource>();
   for (const result of searchResults) {
-    if (!uniqueSources.has(result.segmentId)) {
+    const existing = uniqueSources.get(result.segmentId);
+    if (!existing || result.similarity > existing.similarity) {
       uniqueSources.set(result.segmentId, {
         segmentId: result.segmentId,
         segmentTitle: result.segmentTitle,
         segmentSlug: result.segmentSlug,
         moduleTitle: result.moduleTitle,
         chunkText: result.chunkText,
         similarity: result.similarity,
       });
     }
   }
```
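The proposed dedup can be exercised in isolation with a simplified result shape (field names assumed from the review snippet):

```typescript
type Chunk = { segmentId: number; similarity: number; chunkText: string };

// Keep only the highest-similarity chunk per segment, regardless of input order.
function dedupeBySegment(results: Chunk[]): Chunk[] {
  const best = new Map<number, Chunk>();
  for (const result of results) {
    const existing = best.get(result.segmentId);
    if (!existing || result.similarity > existing.similarity) {
      best.set(result.segmentId, result);
    }
  }
  return [...best.values()];
}
```

Unlike the `has()` check, this version is order-independent: a later, better chunk replaces an earlier, worse one.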
124-147: Verbose logging may impact performance and log volume in production.

The detailed console.log statements (lines 127-147) include search results with titles and similarities for every request. While useful for debugging, this can create significant log volume in production. Consider using a configurable log level or structured logging.
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (9)
- src/fn/rag-chat.ts
- src/hooks/use-rag-chat.ts
- src/lib/openai-chat.ts
- src/routes/admin/vector-search.tsx
- src/routes/admin/vector-search/-components/chat-container.tsx
- src/routes/admin/vector-search/-components/chat-input.tsx
- src/routes/admin/vector-search/-components/chat-message.tsx
- src/routes/admin/vector-search/-components/source-videos-panel.tsx
- src/use-cases/rag-chat.ts
🧰 Additional context used
📓 Path-based instructions (10)
**/*.{ts,tsx}

📄 CodeRabbit inference engine (.cursor/rules/clean-code.mdc)

**/*.{ts,tsx}:
- Never hard code magic numbers into code; consolidate them to the top of the file or in `/src/config/index.ts` to keep the code clean
- Never allow a file to exceed 1,000 lines of code; split overly large files into smaller modular components to maintain code maintainability

**/*.{ts,tsx}:
- Use `createServerFn` from TanStack Start for server-side operations, with required middleware, input validator, and use case calls (never import Drizzle objects directly)
- Pass data to server functions via the `data` property (e.g., `serverFn({ data: { key: value } })`)
- Never hard code magic numbers; consolidate them at the top of the file or in `/src/config/index.ts`
- Never let a file exceed 1,000 lines; split into smaller modular components
- Use React Query with server-side prefetching via `routerWithQueryClient` for data fetching
- Use React Hook Form with Zod validation for form handling
- All cards should use the shadcn Card component with CardTitle, CardDescription, etc.
- Component styling follows shadcn/ui patterns with Tailwind CSS v4

Files:
- src/routes/admin/vector-search/-components/chat-input.tsx
- src/routes/admin/vector-search.tsx
- src/routes/admin/vector-search/-components/source-videos-panel.tsx
- src/routes/admin/vector-search/-components/chat-message.tsx
- src/lib/openai-chat.ts
- src/hooks/use-rag-chat.ts
- src/fn/rag-chat.ts
- src/routes/admin/vector-search/-components/chat-container.tsx
- src/use-cases/rag-chat.ts
**/routes/admin/**/*.{ts,tsx}

📄 CodeRabbit inference engine (CLAUDE.md)

- Protect admin routes with `beforeLoad: () => assertIsAdminFn()` in TanStack Router file-based routes

Files:
- src/routes/admin/vector-search/-components/chat-input.tsx
- src/routes/admin/vector-search.tsx
- src/routes/admin/vector-search/-components/source-videos-panel.tsx
- src/routes/admin/vector-search/-components/chat-message.tsx
- src/routes/admin/vector-search/-components/chat-container.tsx

**/routes/**/*.{ts,tsx}

📄 CodeRabbit inference engine (CLAUDE.md)

**/routes/**/*.{ts,tsx}:
- Implement error boundaries at the route level using `DefaultCatchBoundary`
- Pages should use the Page component and PageHeader when possible

Files:
- src/routes/admin/vector-search/-components/chat-input.tsx
- src/routes/admin/vector-search.tsx
- src/routes/admin/vector-search/-components/source-videos-panel.tsx
- src/routes/admin/vector-search/-components/chat-message.tsx
- src/routes/admin/vector-search/-components/chat-container.tsx

src/routes/**/*.{ts,tsx}

📄 CodeRabbit inference engine (AGENTS.md)

src/routes/**/*.{ts,tsx}:
- TanStack Router file-based routes and page entry points should be placed in `src/routes/`
- Route files should be named by their path (e.g., `src/routes/index.tsx`)

Files:
- src/routes/admin/vector-search/-components/chat-input.tsx
- src/routes/admin/vector-search.tsx
- src/routes/admin/vector-search/-components/source-videos-panel.tsx
- src/routes/admin/vector-search/-components/chat-message.tsx
- src/routes/admin/vector-search/-components/chat-container.tsx
src/**/*.{ts,tsx}

📄 CodeRabbit inference engine (AGENTS.md)

- Use TypeScript with React and ES modules; follow existing code patterns in `src/`

Files:
- src/routes/admin/vector-search/-components/chat-input.tsx
- src/routes/admin/vector-search.tsx
- src/routes/admin/vector-search/-components/source-videos-panel.tsx
- src/routes/admin/vector-search/-components/chat-message.tsx
- src/lib/openai-chat.ts
- src/hooks/use-rag-chat.ts
- src/fn/rag-chat.ts
- src/routes/admin/vector-search/-components/chat-container.tsx
- src/use-cases/rag-chat.ts

src/**/*.{ts,tsx,js,jsx}

📄 CodeRabbit inference engine (AGENTS.md)

- Use 2-space indentation, double quotes, and semicolons in TypeScript/React files

Files:
- src/routes/admin/vector-search/-components/chat-input.tsx
- src/routes/admin/vector-search.tsx
- src/routes/admin/vector-search/-components/source-videos-panel.tsx
- src/routes/admin/vector-search/-components/chat-message.tsx
- src/lib/openai-chat.ts
- src/hooks/use-rag-chat.ts
- src/fn/rag-chat.ts
- src/routes/admin/vector-search/-components/chat-container.tsx
- src/use-cases/rag-chat.ts

src/fn/**/*.ts

📄 CodeRabbit inference engine (.cursor/rules/tanstack-server-functions.mdc)

src/fn/**/*.ts:
- When creating a TanStack Start server function, always include a middleware function call. Existing middleware functions are available in `src/lib/auth.ts`
- When using a server function, always attach a validator to it
- When using server functions, never import drizzle related objects inside the function handler. Server functions should always invoke use cases based on the layered architecture pattern. Reference `docs/technical/layered-architecture.md` for layered architecture guidelines

Files:
- src/fn/rag-chat.ts
**/fn/**/*.{ts,tsx}

📄 CodeRabbit inference engine (CLAUDE.md)

**/fn/**/*.{ts,tsx}:
- Server functions should follow the naming convention `verbNounFn` (e.g., `createUserFn`)
- Never import Drizzle objects directly in server functions; always call use cases instead
- Server functions must include middleware (authenticatedMiddleware, adminMiddleware, or unauthenticatedMiddleware) from `~/lib/auth`
- Implement session-based authentication accessed via the `getUserFromSession()` function

Files:
- src/fn/rag-chat.ts

src/{data-access,use-cases,fn}/**/*.{ts,tsx}

📄 CodeRabbit inference engine (AGENTS.md)

- Persistence logic should be placed in `src/data-access/`, business logic in `src/use-cases/`, and server functions in `src/fn/`

Files:
- src/fn/rag-chat.ts
- src/use-cases/rag-chat.ts

**/use-cases/**/*.{ts,tsx}

📄 CodeRabbit inference engine (CLAUDE.md)

- Use cases should follow the naming convention `verbNounUseCase` (e.g., `createUserUseCase`)

Files:
- src/use-cases/rag-chat.ts
🧠 Learnings (5)
📚 Learning: 2026-01-01T20:17:48.881Z
Learnt from: CR
Repo: webdevcody/agentic-jumpstart PR: 0
File: CLAUDE.md:0-0
Timestamp: 2026-01-01T20:17:48.881Z
Learning: Applies to **/routes/**/*.{ts,tsx} : Pages should use the Page component and PageHeader when possible
Applied to files:
src/routes/admin/vector-search.tsx
📚 Learning: 2026-01-01T20:17:48.881Z
Learnt from: CR
Repo: webdevcody/agentic-jumpstart PR: 0
File: CLAUDE.md:0-0
Timestamp: 2026-01-01T20:17:48.881Z
Learning: Applies to **/*.{ts,tsx} : All cards should use the shadcn Card component with CardTitle, CardDescription, etc.
Applied to files:
src/routes/admin/vector-search/-components/source-videos-panel.tsx
📚 Learning: 2025-12-22T03:59:58.018Z
Learnt from: CR
Repo: webdevcody/agentic-jumpstart PR: 0
File: .cursor/rules/tanstack-server-functions.mdc:0-0
Timestamp: 2025-12-22T03:59:58.018Z
Learning: Applies to src/fn/**/*.ts : When creating a tanstack start server function, always include a middleware function call. Existing middleware functions are available in `src/lib/auth.ts`
Applied to files:
src/fn/rag-chat.ts
📚 Learning: 2026-01-01T20:17:48.881Z
Learnt from: CR
Repo: webdevcody/agentic-jumpstart PR: 0
File: CLAUDE.md:0-0
Timestamp: 2026-01-01T20:17:48.881Z
Learning: Applies to **/*.{ts,tsx} : Use `createServerFn` from TanStack Start for server-side operations, with required middleware, input validator, and use case calls (never import Drizzle objects directly)
Applied to files:
src/fn/rag-chat.ts
📚 Learning: 2026-01-01T20:17:48.881Z
Learnt from: CR
Repo: webdevcody/agentic-jumpstart PR: 0
File: CLAUDE.md:0-0
Timestamp: 2026-01-01T20:17:48.881Z
Learning: Use TanStack Start `createServerFn` for all server-side operations instead of traditional API routes
Applied to files:
src/fn/rag-chat.ts
🧬 Code graph analysis (9)
src/routes/admin/vector-search/-components/chat-input.tsx (1)
- src/components/ui/button.tsx (1)
  - Button (59-59)

src/routes/admin/vector-search.tsx (9)
- src/routes/admin/vectorization.tsx (1)
  - Route (35-38)
- src/fn/auth.ts (1)
  - assertIsAdminFn (12-24)
- src/hooks/use-rag-chat.ts (1)
  - useRagChat (52-134)
- src/routes/admin/-components/page.tsx (1)
  - Page (7-21)
- src/components/ui/button.tsx (1)
  - Button (59-59)
- src/components/ui/card.tsx (1)
  - Card (85-85)
- src/routes/admin/vector-search/-components/chat-container.tsx (1)
  - ChatContainer (19-75)
- src/routes/admin/vector-search/-components/chat-input.tsx (1)
  - ChatInput (13-63)
- src/routes/admin/vector-search/-components/source-videos-panel.tsx (1)
  - SourceVideosPanel (17-96)

src/routes/admin/vector-search/-components/source-videos-panel.tsx (3)
- src/use-cases/rag-chat.ts (1)
  - VideoSource (8-15)
- src/components/ui/card.tsx (5)
  - Card (85-85), CardHeader (86-86), CardTitle (88-88), CardDescription (90-90), CardContent (91-91)
- src/components/ui/badge.tsx (1)
  - Badge (38-38)

src/routes/admin/vector-search/-components/chat-message.tsx (4)
- src/hooks/use-rag-chat.ts (1)
  - ConversationMessage (6-6)
- src/use-cases/rag-chat.ts (1)
  - ConversationMessage (17-23)
- src/lib/openai-chat.ts (1)
  - ChatMessage (12-15)
- src/components/ui/badge.tsx (1)
  - Badge (38-38)

src/lib/openai-chat.ts (1)
- src/utils/env.ts (1)
  - env (27-47)

src/hooks/use-rag-chat.ts (2)
- src/use-cases/rag-chat.ts (2)
  - ConversationMessage (17-23), VideoSource (8-15)
- src/fn/rag-chat.ts (1)
  - ragChatFn (28-36)

src/fn/rag-chat.ts (1)
- src/use-cases/rag-chat.ts (1)
  - ragChatUseCase (124-178)

src/routes/admin/vector-search/-components/chat-container.tsx (2)
- src/hooks/use-rag-chat.ts (1)
  - ConversationMessage (6-6)
- src/routes/admin/vector-search/-components/chat-message.tsx (1)
  - ChatMessage (10-57)

src/use-cases/rag-chat.ts (3)
- src/data-access/transcript-chunks.ts (2)
  - SearchResult (72-81), searchByEmbedding (83-110)
- src/lib/openai-chat.ts (2)
  - ChatMessage (12-15), createChatCompletion (97-146)
- src/lib/openai.ts (1)
  - generateEmbedding (78-108)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Run Playwright Tests
🔇 Additional comments (22)
src/lib/openai-chat.ts (3)

1-11: LGTM - Well-structured module initialization.

Constants are correctly consolidated at the top of the file per coding guidelines, and the OpenAI client is properly initialized using the centralized `env` utility.

48-95: LGTM - Robust retry logic with exponential backoff.

The `withRetry` utility correctly handles transient API errors (429, 500, 502, 503) with exponential backoff. The error wrapping provides good context for debugging.

97-146: LGTM - Clean implementation with proper validation and response handling.

The function validates input, applies sensible defaults, and correctly maps the API response structure. The usage information is properly transformed to camelCase.

src/fn/rag-chat.ts (1)

1-36: LGTM - Server function follows all required patterns.

The implementation correctly:
- Uses `createServerFn` with POST method
- Includes `adminMiddleware` for authentication
- Attaches an input validator with a Zod schema
- Delegates to `ragChatUseCase` instead of importing Drizzle objects

Based on learnings, this aligns with the layered architecture guidelines.
src/routes/admin/vector-search/-components/chat-container.tsx (3)

12-17: LGTM - Example prompts consolidated at top.

Constants are correctly placed at the top of the file per coding guidelines.

19-51: LGTM - Clean empty state implementation.

The welcome panel with example prompts provides good UX guidance for users. The `onSelectPrompt` correctly pre-fills the input without auto-submitting, giving users control.

53-74: LGTM - Messages rendering with auto-scroll.

The auto-scroll effect correctly triggers on both `messages` and `isLoading` changes. The loading indicator provides clear feedback during API calls.

src/routes/admin/vector-search/-components/chat-input.tsx (2)

13-34: LGTM - Well-implemented controlled input with auto-resize.

The auto-resize logic, keyboard handling (Enter to submit, Shift+Enter for newline), and loading state management are all correctly implemented.

36-62: LGTM - Clean UI with proper accessibility.

The component correctly disables interaction during loading and provides visual feedback with the spinner. Using shadcn's `Button` and `Textarea` components follows the project patterns.

src/routes/admin/vector-search/-components/source-videos-panel.tsx (2)

1-37: LGTM - Proper use of shadcn Card components.

The empty state correctly uses `Card`, `CardHeader`, `CardTitle`, `CardDescription`, and `CardContent` as per coding guidelines.

39-95: LGTM - Clean populated state rendering.

The source cards are well-structured with proper truncation, responsive layout, and navigation links.
src/routes/admin/vector-search/-components/chat-message.tsx (1)
1-57: LGTM - Well-structured message component with conditional rendering.

The component correctly differentiates between user and assistant messages with appropriate styling. The Markdown rendering for assistant responses and the source badges with overflow handling are well implemented.
src/routes/admin/vector-search.tsx (2)

1-17: LGTM - Proper route setup with admin protection.

The route correctly uses `beforeLoad: () => assertIsAdminFn()` for admin authentication as required by coding guidelines.

19-53: LGTM - Clean page composition using Page and PageHeader.

The component correctly uses `Page` and `PageHeader` per coding guidelines. The conditional "Clear Chat" button and responsive grid layout provide good UX. Based on learnings, pages should use these components when possible.

src/hooks/use-rag-chat.ts (3)

1-9: LGTM!

Imports are clean and constants are properly consolidated at the top of the file as per coding guidelines. Re-exporting types from the use-case layer is a good pattern for client-side consumers.

11-41: LGTM!

Storage helpers are well-implemented with proper SSR guards (`typeof window === "undefined"`) and defensive error handling. The generic typing provides good type safety.

119-134: LGTM!

The `clearChat` function properly resets all state and storage, and the return object exposes a clean API surface.

src/use-cases/rag-chat.ts (5)

1-6: LGTM!

Imports are appropriate and constants are properly consolidated at the top of the file per coding guidelines.

8-33: LGTM!

Interface definitions are clean and well-structured. The separation of `VideoSource`, `ConversationMessage`, `RagChatInput`, and `RagChatResult` provides good type safety and clarity.

35-49: LGTM!

The system prompt is well-crafted with clear guidelines for the AI assistant. Placing it as a module-level constant keeps it maintainable.

51-65: LGTM!

The context formatting function handles the empty results case gracefully and produces a clear, structured output for the LLM.

148-178: LGTM!

The main use case flow is well-structured: embed → search → format → complete → return. The timing metrics and usage logging are helpful for observability.
- Introduced a maximum message content length of 2000 characters for user messages and conversation history to prevent overflow.
- Updated the `ragChat` input schema to enforce the new content length restriction.
- Implemented a ref to manage conversation messages more effectively, ensuring the latest messages are used during API calls.
- Improved state management by synchronizing the messages reference with the current state, enhancing performance and reliability.
Actionable comments posted: 4
🤖 Fix all issues with AI agents
In @src/hooks/use-rag-chat.ts:
- Around line 11-22: The loadFromStorage function unsafely casts parsed JSON to
T; add runtime validation by wiring a validator into loadFromStorage (e.g.,
accept a Zod schema or type guard function) and use it after JSON.parse to
verify shape before returning; if validation fails, log the error and fall back
to the provided fallback value instead of returning the unchecked cast.
Specifically, update the loadFromStorage signature to accept a validator (or map
keys to schemas), call JSON.parse(stored), validate with the schema/typeGuard,
and only return the parsed value when validation passes—otherwise return
fallback and log the validation error.
- Around line 90-94: Remove the direct mutation of messagesRef.current inside
the setMessages callback: in the setMessages((prev) => { ... }) block that
appends userMsg, delete the line messagesRef.current = newMessages and let the
existing sync effect (the useEffect that updates messagesRef on render) handle
the ref update; if you truly need the ref updated immediately, set
messagesRef.current = newMessages after calling setMessages(userMsg append)
(outside the state updater), but prefer relying on the sync effect to avoid
duplication and desynchronization.
- Around line 108-117: The onError handler currently rolls back the optimistic
user message by calling setMessages but doesn't immediately update
messagesRef.current, causing a brief inconsistency; modify the onError logic in
the onError callback so that after computing the new messages array (the same
slice logic used to remove the last user message) you assign it to
messagesRef.current as well (i.e., compute newMessages inside the setMessages
updater or right after and set messagesRef.current = newMessages) before logging
the error to ensure both state and the ref stay in sync.
- Around line 74-82: The mutation currently sends userMessage and
messagesRef.current to ragChatFn without client-side size checks; add validation
and trimming before calling ragChatFn by ensuring userMessage.length <= 2000
(reject or surface an error immediately) and by trimming messagesRef.current (or
conversationHistory passed from sendMessage) so the combined serialized
character length of history does not exceed 2000 – e.g., drop oldest messages
until the length constraint is met or truncate individual messages to 2000
chars; perform these checks/trims in the mutationFn (and/or sendMessage wrapper)
before invoking ragChatFn to avoid round-trips and rejected requests.
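The trimming this prompt describes — drop the oldest history entries until the combined content fits the budget — can be sketched as a pure function. The constant and function names are hypothetical, and the 2000-character budget is the figure assumed by the review:

```typescript
const HISTORY_CHAR_BUDGET = 2000; // assumed budget from the review

type Message = { role: "user" | "assistant"; content: string };

// Drop the oldest messages until the combined content length fits the budget.
function trimHistory(history: Message[], budget = HISTORY_CHAR_BUDGET): Message[] {
  const trimmed = [...history];
  const totalLength = (msgs: Message[]) =>
    msgs.reduce((sum, m) => sum + m.content.length, 0);
  while (trimmed.length > 0 && totalLength(trimmed) > budget) {
    trimmed.shift(); // oldest entry goes first
  }
  return trimmed;
}
```

Running this in `mutationFn` (or a `sendMessage` wrapper) before calling `ragChatFn` avoids a round-trip that the server-side schema would reject anyway.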
🧹 Nitpick comments (2)
src/hooks/use-rag-chat.ts (2)
65-71: Consider debouncing sessionStorage writes for performance.

The persistence effects run on every state change, which could be frequent during rapid message exchanges. While sessionStorage operations are generally fast, debouncing these writes could improve performance and reduce unnecessary I/O, especially on slower devices.

♻️ Optional debouncing implementation — you could use a custom debounce hook or library:

```diff
+import { useDebounce } from "use-debounce";
+
 export function useRagChat(): UseRagChatReturn {
   const [messages, setMessages] = useState<ConversationMessage[]>(() =>
     loadFromStorage<ConversationMessage[]>(STORAGE_KEY, [])
   );
   const [currentSources, setCurrentSources] = useState<VideoSource[]>(() =>
     loadFromStorage<VideoSource[]>(SOURCES_STORAGE_KEY, [])
   );
+  const [debouncedMessages] = useDebounce(messages, 500);
+  const [debouncedSources] = useDebounce(currentSources, 500);

   const messagesRef = useRef<ConversationMessage[]>(messages);

   useEffect(() => {
     messagesRef.current = messages;
   }, [messages]);

   useEffect(() => {
-    saveToStorage(STORAGE_KEY, messages);
-  }, [messages]);
+    saveToStorage(STORAGE_KEY, debouncedMessages);
+  }, [debouncedMessages]);

   useEffect(() => {
-    saveToStorage(SOURCES_STORAGE_KEY, currentSources);
-  }, [currentSources]);
+    saveToStorage(SOURCES_STORAGE_KEY, debouncedSources);
+  }, [debouncedSources]);
```
120-126: Consider using explicit mutation dependencies for clarity.

The callback depends on the entire `mutation` object (line 125). While this works because React Query provides a stable reference, it's more idiomatic and explicit to depend on the specific properties (`mutation.mutateAsync` and `mutation.isPending`) that are actually used in the callback.

♻️ Suggested refactor:

```diff
 const sendMessage = useCallback(
   async (content: string) => {
-    if (!content.trim() || mutation.isPending) return;
-    await mutation.mutateAsync(content.trim());
+    if (!content.trim() || isPending) return;
+    await mutateAsync(content.trim());
   },
-  [mutation]
+  [mutateAsync, isPending]
 );
```

You'd need to destructure these at the top:

```diff
 const mutation = useMutation({
   // ... config
 });
+const { mutateAsync, isPending } = mutation;
```

This makes the dependencies more explicit and easier to understand at a glance.
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- src/fn/rag-chat.ts
- src/hooks/use-rag-chat.ts
🚧 Files skipped from review as they are similar to previous changes (1)
- src/fn/rag-chat.ts
🧰 Additional context used
📓 Path-based instructions (3)
**/*.{ts,tsx}

📄 CodeRabbit inference engine (.cursor/rules/clean-code.mdc)

**/*.{ts,tsx}:
- Never hard code magic numbers into code; consolidate them to the top of the file or in `/src/config/index.ts` to keep the code clean
- Never allow a file to exceed 1,000 lines of code; split overly large files into smaller modular components to maintain code maintainability

**/*.{ts,tsx}:
- Use `createServerFn` from TanStack Start for server-side operations, with required middleware, input validator, and use case calls (never import Drizzle objects directly)
- Pass data to server functions via the `data` property (e.g., `serverFn({ data: { key: value } })`)
- Never hard code magic numbers; consolidate them at the top of the file or in `/src/config/index.ts`
- Never let a file exceed 1,000 lines; split into smaller modular components
- Use React Query with server-side prefetching via `routerWithQueryClient` for data fetching
- Use React Hook Form with Zod validation for form handling
- All cards should use the shadcn Card component with CardTitle, CardDescription, etc.
- Component styling follows shadcn/ui patterns with Tailwind CSS v4

Files:
- src/hooks/use-rag-chat.ts
src/**/*.{ts,tsx}

📄 CodeRabbit inference engine (AGENTS.md)

- Use TypeScript with React and ES modules; follow existing code patterns in `src/`

Files:
- src/hooks/use-rag-chat.ts

src/**/*.{ts,tsx,js,jsx}

📄 CodeRabbit inference engine (AGENTS.md)

- Use 2-space indentation, double quotes, and semicolons in TypeScript/React files

Files:
- src/hooks/use-rag-chat.ts
🧬 Code graph analysis (1)
src/hooks/use-rag-chat.ts (2)
- src/use-cases/rag-chat.ts (2)
  - ConversationMessage (17-23), VideoSource (8-15)
- src/fn/rag-chat.ts (1)
  - ragChatFn (37-45)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Run Playwright Tests
🔇 Additional comments (9)
src/hooks/use-rag-chat.ts (9)

1-6: LGTM!

The imports are well-organized and the type re-exports provide a clean public API for consumers of this hook.

8-9: LGTM!

Storage keys are properly defined as constants at the top of the file, following best practices.

24-41: LGTM!

The storage helper functions have appropriate SSR guards, error handling, and logging.

43-50: LGTM!

The interface clearly defines the hook's return type with appropriate methods and properties.

52-63: LGTM - Good use of ref for accessing latest state in callbacks.

The `messagesRef` pattern correctly ensures that the mutation callbacks access the most recent messages without encountering stale closure issues. The sync effect on lines 61-63 keeps the ref updated.

97-107: LGTM!

The success handler correctly appends the assistant's response and updates the sources. The ref synchronization is properly handled by the effect.

128-133: LGTM!

The `clearChat` function comprehensively resets all state, storage, and mutation status. The implementation is correct and thorough.

135-142: LGTM!

The return object provides a clean, well-typed API with all necessary state and functions for consuming components.

85-85: The `crypto.randomUUID()` API is fully supported in modern browsers and aligns with this project's explicit ES2022 target. No compatibility concerns for this codebase.

The project's TypeScript configuration targets ES2022 with no `.browserslistrc` or legacy browser support, indicating modern-only browser targets. `crypto.randomUUID()` is supported in all major browsers since 2021-2022 (Chrome 92+, Firefox 95+, Safari 15.4+, Edge 92+), which precedes ES2022 adoption timelines.

The only valid consideration is the secure context (HTTPS) requirement, which is a deployment/environment concern, not a code quality issue.

Likely an incorrect or invalid review comment.
src/hooks/use-rag-chat.ts (Outdated)
```ts
function loadFromStorage<T>(key: string, fallback: T): T {
  if (typeof window === "undefined") return fallback;
  try {
    const stored = sessionStorage.getItem(key);
    if (stored) {
      return JSON.parse(stored) as T;
    }
  } catch (error) {
    console.error(`[RAG Chat] Failed to load from sessionStorage:`, error);
  }
  return fallback;
}
```
Add runtime validation for loaded sessionStorage data.
The type assertion `as T` on line 16 is unsafe without validating that the parsed data actually matches the expected structure. If sessionStorage contains corrupted or tampered data, this could cause runtime errors when the hook or components try to use the loaded data.
Consider adding runtime validation using Zod schemas or type guards to ensure the loaded data matches the expected structure before returning it.
🛡️ Suggested validation approach
You could create Zod schemas for the stored data types and validate before returning:
```diff
+import { z } from "zod";
+import type { VideoSource, ConversationMessage } from "~/use-cases/rag-chat";
+
+const conversationMessageSchema = z.object({
+  id: z.string(),
+  role: z.enum(["user", "assistant"]),
+  content: z.string(),
+  timestamp: z.string(),
+  sources: z.array(z.any()).optional(),
+});
+
+const conversationHistorySchema = z.array(conversationMessageSchema);
+const videoSourcesSchema = z.array(z.any());
+
 function loadFromStorage<T>(key: string, fallback: T): T {
   if (typeof window === "undefined") return fallback;
   try {
     const stored = sessionStorage.getItem(key);
     if (stored) {
-      return JSON.parse(stored) as T;
+      const parsed = JSON.parse(stored);
+      // Add validation based on key
+      if (key === STORAGE_KEY) {
+        const validated = conversationHistorySchema.safeParse(parsed);
+        if (validated.success) return validated.data as T;
+      } else if (key === SOURCES_STORAGE_KEY) {
+        const validated = videoSourcesSchema.safeParse(parsed);
+        if (validated.success) return validated.data as T;
+      }
+      console.warn(`[RAG Chat] Invalid data structure in sessionStorage for key: ${key}`);
     }
   } catch (error) {
     console.error(`[RAG Chat] Failed to load from sessionStorage:`, error);
   }
   return fallback;
 }
```

Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In @src/hooks/use-rag-chat.ts around lines 11-22: the loadFromStorage function unsafely casts parsed JSON to T. Add runtime validation by wiring a validator into loadFromStorage (e.g., accept a Zod schema or type-guard function) and use it after JSON.parse to verify the shape before returning; if validation fails, log the error and fall back to the provided fallback value instead of returning the unchecked cast. Specifically, update the loadFromStorage signature to accept a validator (or map keys to schemas), call JSON.parse(stored), validate with the schema/type guard, and only return the parsed value when validation passes; otherwise return fallback and log the validation error.
```ts
onError: (error) => {
  setMessages((prev) => {
    const lastMessage = prev[prev.length - 1];
    if (lastMessage?.role === "user") {
      return prev.slice(0, -1);
    }
    return prev;
  });
  console.error("[RAG Chat] Error:", error);
},
```
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Sync `messagesRef` when rolling back optimistic updates.
When the error handler rolls back the optimistic user message (lines 109-115), it doesn't update `messagesRef.current`. While the sync effect will eventually update the ref, there's a brief period of inconsistency. For better correctness and to avoid potential edge cases, update the ref immediately after rolling back.
🔧 Suggested fix
```diff
 onError: (error) => {
   setMessages((prev) => {
     const lastMessage = prev[prev.length - 1];
     if (lastMessage?.role === "user") {
-      return prev.slice(0, -1);
+      const rolled = prev.slice(0, -1);
+      messagesRef.current = rolled;
+      return rolled;
     }
+    messagesRef.current = prev;
     return prev;
   });
   console.error("[RAG Chat] Error:", error);
 },
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```ts
onError: (error) => {
  setMessages((prev) => {
    const lastMessage = prev[prev.length - 1];
    if (lastMessage?.role === "user") {
      const rolled = prev.slice(0, -1);
      messagesRef.current = rolled;
      return rolled;
    }
    messagesRef.current = prev;
    return prev;
  });
  console.error("[RAG Chat] Error:", error);
},
```
🤖 Prompt for AI Agents
In @src/hooks/use-rag-chat.ts around lines 108-117: the onError handler currently rolls back the optimistic user message via setMessages but doesn't immediately update messagesRef.current, causing a brief inconsistency. Modify the onError callback so that after computing the new messages array (the same slice logic used to remove the last user message) you assign it to messagesRef.current as well (i.e., compute newMessages inside the setMessages updater or right after and set messagesRef.current = newMessages) before logging the error, ensuring both state and the ref stay in sync.
```ts
setCurrentSources([]);
clearStorage();
mutation.reset();
}, [mutation]);
```
Clear chat does not prevent orphaned response from in-flight request
Low Severity
When a user clicks "Clear Chat" while an API request is in flight, clearChat calls mutation.reset() which resets mutation state but does not cancel the pending request. When the request eventually completes, onSuccess still executes and adds the assistant response to the now-empty messages array via setMessages((prev) => [...prev, assistantMsg]). This results in an orphaned assistant message with no corresponding user question, creating a confusing UI state. The "Clear Chat" button is not disabled during loading, making this race condition easy to trigger.
🔬 Verification Test
Why verification test was not possible: This race condition requires mocking React Query's mutation lifecycle timing and simulating user interaction mid-request. The bug manifests when: (1) a mutation is pending, (2) clearChat is called, (3) the mutation completes. Testing would require a full React rendering environment with React Query and precise timing control, which cannot be done through simple unit tests without the complete application context.
Additional Locations (1)
…gChat

- Introduced Zod schemas for validating video sources and conversation messages, ensuring data integrity when loading from session storage.
- Implemented a trimming function to manage conversation history length, preventing excessive data storage and improving performance.
- Updated loadFromStorage function to include schema validation, enhancing error handling and debugging capabilities.
- Added maximum length checks for user messages to enforce content restrictions and improve user experience.
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In @src/hooks/use-rag-chat.ts:
- Around line 98-105: The UseRagChatReturn.error type doesn't match the React
Query mutation error (mutation.error is unknown); either change the interface's
error to unknown or type the mutation's error by providing the TError generic to
useMutation. Update the UseRagChatReturn interface (symbol: UseRagChatReturn) to
use error: unknown OR update the mutation declaration (symbol: mutation and
useMutation) to include the error type (e.g., useMutation<ReturnType, Error,
string>) so mutation.error aligns with the interface.
🧹 Nitpick comments (2)
src/hooks/use-rag-chat.ts (2)
9-12: Consider moving constants to config file. Per coding guidelines, magic numbers should be consolidated in `/src/config/index.ts`. While these constants are correctly placed at the top of the file, moving `MAX_MESSAGE_LENGTH` and `MAX_HISTORY_LENGTH` to the central config would improve maintainability and make it easier to adjust limits across the application.

♻️ Suggested refactor

In `/src/config/index.ts`, add:

```ts
export const RAG_CHAT_CONFIG = {
  MAX_MESSAGE_LENGTH: 2000,
  MAX_HISTORY_LENGTH: 20000,
} as const;
```

Then import and use in this file:

```diff
+import { RAG_CHAT_CONFIG } from "~/config";
+
 const STORAGE_KEY = "rag-chat-history";
 const SOURCES_STORAGE_KEY = "rag-chat-sources";
-const MAX_MESSAGE_LENGTH = 2000;
-const MAX_HISTORY_LENGTH = 20000;
```

And update references throughout the file to use `RAG_CHAT_CONFIG.MAX_MESSAGE_LENGTH` and `RAG_CHAT_CONFIG.MAX_HISTORY_LENGTH`.
175-187: Consider handling validation errors consistently. The message length validation on lines 179-183 throws an error synchronously, which means it won't be captured by the mutation's `onError` handler and won't populate `mutation.error`. Callers must use try-catch when calling `sendMessage`, which may be unexpected. Consider either:

- Returning early with a user-friendly notification instead of throwing
- Wrapping the validation in the mutation itself so all errors flow through `mutation.error`
- Documenting that callers must handle these validation errors separately

♻️ Proposed fix to make error handling consistent

Option 1 - Return early without throwing:

```diff
 const sendMessage = useCallback(
   async (content: string) => {
     const trimmed = content.trim();
     if (!trimmed || mutation.isPending) return;
     if (trimmed.length > MAX_MESSAGE_LENGTH) {
-      throw new Error(
-        `Message exceeds maximum length of ${MAX_MESSAGE_LENGTH} characters`
-      );
+      console.error(`Message exceeds maximum length of ${MAX_MESSAGE_LENGTH} characters`);
+      return;
     }
     await mutation.mutateAsync(trimmed);
   },
   [mutation]
 );
```

Option 2 - Move validation into mutation so errors are handled uniformly:

```diff
 const mutation = useMutation({
   mutationFn: async (userMessage: string) => {
+    if (userMessage.length > MAX_MESSAGE_LENGTH) {
+      throw new Error(
+        `Message exceeds maximum length of ${MAX_MESSAGE_LENGTH} characters`
+      );
+    }
     const trimmedHistory = trimConversationHistory(
       messagesRef.current,
       MAX_HISTORY_LENGTH
     );
     // ... rest of mutation
   },
   // ... callbacks
 });

 const sendMessage = useCallback(
   async (content: string) => {
     const trimmed = content.trim();
     if (!trimmed || mutation.isPending) return;
-    if (trimmed.length > MAX_MESSAGE_LENGTH) {
-      throw new Error(
-        `Message exceeds maximum length of ${MAX_MESSAGE_LENGTH} characters`
-      );
-    }
     await mutation.mutateAsync(trimmed);
   },
   [mutation]
 );
```
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
src/hooks/use-rag-chat.ts
🧰 Additional context used
📓 Path-based instructions (3)
**/*.{ts,tsx}
📄 CodeRabbit inference engine (.cursor/rules/clean-code.mdc)
- `**/*.{ts,tsx}`: Never hard code magic numbers into code; consolidate them to the top of the file or in `/src/config/index.ts` to keep the code clean
- Never allow a file to exceed 1,000 lines of code; split overly large files into smaller modular components to maintain code maintainability
- `**/*.{ts,tsx}`: Use `createServerFn` from TanStack Start for server-side operations, with required middleware, input validator, and use case calls (never import Drizzle objects directly)
- Pass data to server functions via the `data` property (e.g., `serverFn({ data: { key: value } })`)
- Never hard code magic numbers; consolidate them at the top of the file or in `/src/config/index.ts`
- Never let a file exceed 1,000 lines; split into smaller modular components
- Use React Query with server-side prefetching via `routerWithQueryClient` for data fetching
- Use React Hook Form with Zod validation for form handling
- All cards should use the shadcn Card component with CardTitle, CardDescription, etc.
- Component styling follows shadcn/ui patterns with Tailwind CSS v4
Files:
src/hooks/use-rag-chat.ts
src/**/*.{ts,tsx}
📄 CodeRabbit inference engine (AGENTS.md)
Use TypeScript with React and ES modules; follow existing code patterns in `src/`
Files:
src/hooks/use-rag-chat.ts
src/**/*.{ts,tsx,js,jsx}
📄 CodeRabbit inference engine (AGENTS.md)
Use 2-space indentation, double quotes, and semicolons in TypeScript/React files
Files:
src/hooks/use-rag-chat.ts
🧬 Code graph analysis (1)
src/hooks/use-rag-chat.ts (2)
src/use-cases/rag-chat.ts (2)
- `ConversationMessage` (17-23)
- `VideoSource` (8-15)

src/fn/rag-chat.ts (1)
- `ragChatFn` (37-45)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Run Playwright Tests
```ts
export interface UseRagChatReturn {
  messages: ConversationMessage[];
  isLoading: boolean;
  error: Error | null;
  sendMessage: (content: string) => Promise<void>;
  clearChat: () => void;
  currentSources: VideoSource[];
}
```
🧩 Analysis chain
🏁 Script executed:

```bash
#!/bin/bash
# Find the ragChatFn definition and its error type specification
rg -nP --type=ts -C5 'ragChatFn.*=.*createServerFn' src/
```

Repository: webdevcody/agentic-jumpstart
Length of output: 656

🏁 Script executed:

```bash
#!/bin/bash
# Check the useMutation configuration and error handling in the hook
rg -nP --type=ts -C10 'useMutation|error.*Error|Error.*null' src/hooks/use-rag-chat.ts
```

Repository: webdevcody/agentic-jumpstart
Length of output: 2626

🏁 Script executed:

```bash
#!/bin/bash
# Get a fuller picture of the hook implementation to see how error is used
head -150 src/hooks/use-rag-chat.ts | tail -80
```

Repository: webdevcody/agentic-jumpstart
Length of output: 2337

🏁 Script executed:

```bash
#!/bin/bash
# Get the rest of the useRagChat hook implementation to see how error is returned
tail -60 src/hooks/use-rag-chat.ts
```

Repository: webdevcody/agentic-jumpstart
Length of output: 1609
Fix error type mismatch in UseRagChatReturn interface.
The `error` property is typed as `Error | null`, but `mutation.error` from React Query's `useMutation` is actually `unknown` since no `TError` type parameter is specified. Either declare `error: unknown` in the interface or specify the error type in the mutation configuration:
```ts
const mutation = useMutation<ReturnType, Error, string>({
  mutationFn: async (userMessage: string) => { ... }
})
```

🤖 Prompt for AI Agents
In @src/hooks/use-rag-chat.ts around lines 98 - 105, The UseRagChatReturn.error
type doesn't match the React Query mutation error (mutation.error is unknown);
either change the interface's error to unknown or type the mutation's error by
providing the TError generic to useMutation. Update the UseRagChatReturn
interface (symbol: UseRagChatReturn) to use error: unknown OR update the
mutation declaration (symbol: mutation and useMutation) to include the error
type (e.g., useMutation<ReturnType, Error, string>) so mutation.error aligns
with the interface.
```ts
if (!value.trim() || isLoading) return;
onSend(value.trim());
onChange("");
};
```
Input cleared on async error losing user message
Medium Severity
The `handleSubmit` function calls `onSend(value.trim())` without awaiting it, then immediately clears the input with `onChange("")`. Since `sendMessage` is async and throws an error when the message exceeds `MAX_MESSAGE_LENGTH` (2000 chars), the input gets cleared regardless of whether sending succeeded. When a user types a message that's too long, their input is silently lost with no error feedback because the throw happens before `mutateAsync` is called, so `mutation.error` is never set.
🔬 Verification Test
Why verification test was not possible: This is a React component interaction bug that requires a browser environment with user interaction. The bug can be traced through static analysis: handleSubmit doesn't await the async onSend, so onChange("") executes before sendMessage completes or throws. When sendMessage throws on line 180 (before calling mutateAsync), the rejected promise becomes unhandled while the input has already been cleared on line 26.
Additional Locations (1)
```tsx
function CourseAssistantPage() {
  const [inputValue, setInputValue] = useState("");
  const { messages, isLoading, sendMessage, clearChat, currentSources } =
    useRagChat();
```
API errors cause silent message disappearance without feedback
Medium Severity
The `useRagChat` hook returns an `error` property (from `mutation.error`) but `CourseAssistantPage` doesn't destructure or use it. When the API call fails, `onError` removes the user's message from the chat and sets `mutation.error`, but since `error` is never displayed, the user experiences their message silently disappearing with no explanation. The user has no way to know what went wrong or whether they should retry.
🔬 Verification Test
Why verification test was not possible: This requires simulating an API failure in a running React application with the actual mutation hook. The bug can be verified by code inspection: the hook interface at line 101 declares error: Error | null, line 199 returns error: mutation.error, but line 21-22 in the page component explicitly omits error from destructuring, and no error UI exists in the component's render output.
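One way to address this, sketched here with a hypothetical helper (`formatChatError` is not part of the PR), is to normalize the unknown mutation error into a user-facing string and render it in a banner whenever it is non-null:

```typescript
// Sketch: map an unknown mutation error to a user-facing message.
// `formatChatError` is a hypothetical helper name.
function formatChatError(error: unknown): string | null {
  if (error == null) return null; // nothing to show
  if (error instanceof Error && error.message.trim().length > 0) {
    return `Something went wrong: ${error.message}. Please try again.`;
  }
  // Non-Error throwables (strings, objects) get a generic message.
  return "Something went wrong. Please try again.";
}
```

The page would then destructure `error` from `useRagChat()` and render `formatChatError(error)` near the input, giving the user feedback instead of a silently vanishing message.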
Note
Replaces the vector search page with a RAG-powered chat assistant that answers questions using course transcript chunks and surfaces relevant video sources.
- `ragChatFn` (POST, admin-protected) with zod validation calling `ragChatUseCase`
- `ragChatUseCase`: generates embeddings, searches transcript chunks, builds system/context messages, calls OpenAI chat, returns response + deduped sources
- `lib/openai-chat`: OpenAI chat wrapper with retries, error typing, and token usage reporting
- `useRagChat` hook: manages messages, optimistic updates, error handling, sessionStorage persistence, and history trimming
- `/admin/vector-search`: chat container/input, markdown rendering, example prompts, loading state, "Clear Chat" action, and a source videos panel with similarity badges and links

Written by Cursor Bugbot for commit 3d70b6d. This will update automatically on new commits. Configure here.
Summary by CodeRabbit
✏️ Tip: You can customize this high-level summary in your review settings.