Skip to content

Conversation

@carlospolop
Copy link
Collaborator

🤖 Automated Content Update

This PR was automatically generated by the HackTricks News Bot based on a technical blog post.

📝 Source Information

🎯 Content Summary

Overview
The blog dissects Eurostar’s public AI chatbot at https://www.eurostar.com/uk-en, focusing on its REST backend https://site-api.eurostar.com/chatbot/api/agents/default. Using Burp Suite to intercept traffic, the author reverse-engineers how the chat frontend constructs requests, how the guardrail layer and signatures work, and how the LLM is invoked. From this, they derive and exploit four concrete issues: guardrail bypass, information d...

🔧 Technical Details

Guardrail bypass via partial signature validation
If an LLM chatbot sends the full chat_history on each request but only validates a cryptographic signature on the latest message, an attacker can bypass guardrails by editing earlier messages. The exploitation pattern is: intercept the API call in a proxy (e.g. Burp Suite), change an earlier message into a malicious instruction (potentially with role": "system"), then make the final user message harmless or empty so it passes the guardrail and gets a valid signature. Because the backend then forwards the entire modified history to the LLM as trusted context, the attacker’s injected message runs with system-level influence despite the presence of guardrails.


Prompt injection to exfiltrate internal model configuration
Once the attacker can control earlier messages that the model treats as legitimate context, they can use prompt injection to extract sensitiv...

🤖 Agent Actions

Summary:

  • Added a new “LLM-driven HTML/JS reconstruction” section to src/pentesting-web/xss-cross-site-scripting/README.md, detailing how Eurostar’s chatbot guardrail flaw lets attackers tamper with signed histories, coerce the LLM to rebuild <script> blocks, and potentially pivot to stored/shared XSS through client-controlled conversation IDs. Included exploitation workflow, expandable PoC request, impact analysis, and the blog reference.
  • Extended src/AI/AI-Prompts.md with a “Guardrail bypass via signed-history tampering (Eurostar chatbot)” subsection describing the prompt-injection primitive, how it leaks system prompts and drives HTML injection, plus mitigation notes and a cross-link to the new XSS section. Updated the AI prompt references with the blog source.

Tests: Not run (not requested).


This PR was automatically created by the HackTricks Feed Bot. Please review the changes carefully before merging.

📚 Repository Maintenance

  • MD files processed: 935
  • MD files fixed: 1

All .md files have been checked for proper formatting (headers, includes, etc.).

@carlospolop
Copy link
Collaborator Author

🔗 Additional Context

Original Blog Post: https://www.pentestpartners.com/security-blog/eurostar-ai-vulnerability-when-a-chatbot-goes-off-the-rails/

Content Categories: Based on the analysis, this content was categorized under "Pentesting Web → XSS / HTML Injection (with a subsection on LLM-driven HTML/JS reconstruction) and cross-linked from AI → AI Security → AI Models RCE / Prompt Injection Patterns".

Repository Maintenance:

  • MD Files Formatting: 935 files processed (1 files fixed)

Review Notes:

  • This content was automatically processed and may require human review for accuracy
  • Check that the placement within the repository structure is appropriate
  • Verify that all technical details are correct and up-to-date
  • All .md files have been checked for proper formatting (headers, includes, etc.)

Bot Version: HackTricks News Bot v1.0

Sign up for free to join this conversation on GitHub. Already have an account? Sign in to comment

Labels

None yet

Projects

None yet

Development

Successfully merging this pull request may close these issues.

2 participants