Eurostar AI vulnerability when a chatbot goes off the rails #1705
+63
−0
Add this suggestion to a batch that can be applied as a single commit.
This suggestion is invalid because no changes were made to the code.
Suggestions cannot be applied while the pull request is closed.
Suggestions cannot be applied while viewing a subset of changes.
Only one suggestion per line can be applied in a batch.
Add this suggestion to a batch that can be applied as a single commit.
Applying suggestions on deleted lines is not supported.
You must change the existing code in this line in order to create a valid suggestion.
Outdated suggestions cannot be applied.
This suggestion has been applied or marked resolved.
Suggestions cannot be applied from pending reviews.
Suggestions cannot be applied on multi-line comments.
Suggestions cannot be applied while the pull request is queued to merge.
Suggestion cannot be applied right now. Please check back later.
🤖 Automated Content Update
This PR was automatically generated by the HackTricks News Bot based on a technical blog post.
📝 Source Information
🎯 Content Summary
Overview
The blog dissects Eurostar’s public AI chatbot at
https://www.eurostar.com/uk-en, focusing on its REST backendhttps://site-api.eurostar.com/chatbot/api/agents/default. Using Burp Suite to intercept traffic, the author reverse-engineers how the chat frontend constructs requests, how the guardrail layer and signatures work, and how the LLM is invoked. From this, they derive and exploit four concrete issues: guardrail bypass, information d...🔧 Technical Details
Guardrail bypass via partial signature validation
If an LLM chatbot sends the full
chat_historyon each request but only validates a cryptographic signature on the latest message, an attacker can bypass guardrails by editing earlier messages. The exploitation pattern is: intercept the API call in a proxy (e.g. Burp Suite), change an earlier message into a malicious instruction (potentially withrole": "system"), then make the final user message harmless or empty so it passes the guardrail and gets a valid signature. Because the backend then forwards the entire modified history to the LLM as trusted context, the attacker’s injected message runs with system-level influence despite the presence of guardrails.Prompt injection to exfiltrate internal model configuration
Once the attacker can control earlier messages that the model treats as legitimate context, they can use prompt injection to extract sensitiv...
🤖 Agent Actions
Summary:
src/pentesting-web/xss-cross-site-scripting/README.md, detailing how Eurostar’s chatbot guardrail flaw lets attackers tamper with signed histories, coerce the LLM to rebuild<script>blocks, and potentially pivot to stored/shared XSS through client-controlled conversation IDs. Included exploitation workflow, expandable PoC request, impact analysis, and the blog reference.src/AI/AI-Prompts.mdwith a “Guardrail bypass via signed-history tampering (Eurostar chatbot)” subsection describing the prompt-injection primitive, how it leaks system prompts and drives HTML injection, plus mitigation notes and a cross-link to the new XSS section. Updated the AI prompt references with the blog source.Tests: Not run (not requested).
This PR was automatically created by the HackTricks Feed Bot. Please review the changes carefully before merging.
📚 Repository Maintenance
All .md files have been checked for proper formatting (headers, includes, etc.).