Accepted for Poster, Presentation & Proceedings at: 3rd International Workshop on AI for Quantum and Quantum for AI (AIQxQIA 2025), ECAI 2025, Bologna, Italy, 25–30 October 2025
🚀 Key Discovery: Vision Transformer (ViT) embeddings unlock a quantum machine learning advantage. This is the first systematic evidence that the choice of embedding determines quantum kernel success, pointing to a fundamental synergy between transformer attention and quantum feature spaces.
- Project Page: Embedding-Aware Quantum
- GitHub Repository: QuantumVE
- Research Paper: Embedding-Aware Quantum-Classical SVMs
- Dataset on HuggingFace: QuantumEmbeddings
- Interactive Demo: Colab Notebook
- Fashion-MNIST: +8.02% accuracy vs classical SVM
- MNIST: +4.42% accuracy boost
- Embedding Insights: ViT embeddings enable quantum advantage; CNN features degrade performance
- Scalability: 16-qubit tensor network simulation via cuTensorNet
- Efficiency: Class-balanced k-means distillation for quantum data preprocessing (sketched below)
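The distillation step is easy to reproduce. Below is a minimal sketch of class-balanced k-means distillation, assuming scikit-learn and per-class centroids; the repository's actual procedure lives in data_processing/:

# Class-balanced k-means distillation (sketch): run k-means independently on
# each class and keep the centroids, so every class contributes the same
# number of representative points to the distilled quantum training set.
# Note: per_class must not exceed the smallest class size.
import numpy as np
from sklearn.cluster import KMeans

def distill(X, y, per_class=32, seed=0):
    X_out, y_out = [], []
    for c in np.unique(y):
        X_c = X[y == c]
        km = KMeans(n_clusters=per_class, n_init=10, random_state=seed).fit(X_c)
        X_out.append(km.cluster_centers_)
        y_out.append(np.full(per_class, c))
    return np.vstack(X_out), np.concatenate(y_out)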
QuantumVE/
├── data_processing/                  # Class-balanced k-means distillation procedures
├── embeddings/                       # Vision Transformer & CNN embedding extraction
├── qve/                              # Core quantum-classical modules and utilities
└── scripts/                          # Experimental pipelines with cross-validation
    ├── classical_baseline.py         # Traditional SVM benchmarks
    ├── cross_validation_baseline.py  # Cross-validation framework
    └── qsvm_cuda_embeddings.py       # Our embedding-aware quantum method
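The embeddings/ directory produces the ViT and CNN features used throughout. For reference, here is a minimal ViT embedding sketch using Hugging Face transformers; the checkpoint name and CLS-token pooling are illustrative assumptions, not necessarily the exact setup used in the paper:

# Illustrative ViT embedding extraction (checkpoint and CLS pooling are
# assumptions; see embeddings/ for the extraction used in the experiments).
import torch
from transformers import ViTImageProcessor, ViTModel

name = "google/vit-base-patch16-224-in21k"
processor = ViTImageProcessor.from_pretrained(name)
model = ViTModel.from_pretrained(name).eval()

def embed(images):  # images: list of RGB PIL.Image objects
    inputs = processor(images=images, return_tensors="pt")
    with torch.no_grad():
        out = model(**inputs)
    return out.last_hidden_state[:, 0]  # CLS token, shape (batch, 768)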
# Create conda environment
conda create -n QuantumVE python=3.11 -y
conda activate QuantumVE
# Clone and install
git clone https://github.com/sebasmos/QuantumVE.git
cd QuantumVE
pip install -e .
# For Ryzen devices - Install MPI
conda install -c conda-forge mpi4py openmpi

MNIST Embeddings:
mkdir -p data && \
wget https://huggingface.co/datasets/sebasmos/QuantumEmbeddings/resolve/main/mnist_embeddings.zip && \
unzip mnist_embeddings.zip -d data && \
rm mnist_embeddings.zip

Fashion-MNIST Embeddings:
mkdir -p data && \
wget https://huggingface.co/datasets/sebasmos/QuantumEmbeddings/resolve/main/fashionmnist_embeddings.zip && \
unzip fashionmnist_embeddings.zip -d data && \
rm fashionmnist_embeddings.zip

Single Node:
# Classical baseline with cross-validation
python scripts/classical_baseline.py
# Cross-validation framework
python scripts/cross_validation_baseline.py
# Our embedding-aware quantum method
python scripts/qsvm_cuda_embeddings.py

Multi-Node with MPI:
# Run with 2 processes
mpirun -np 2 python scripts/qsvm_cuda_embeddings.py
mpirun -np 2 python scripts/cross_validation_baseline.py

Our key insight: embedding choice is decisive for quantum advantage. While CNN features degrade performance in quantum kernels, Vision Transformer embeddings create a unique synergy with quantum feature spaces. Three factors drive the measurable gains (a minimal kernel sketch follows the list):
- Class-balanced distillation reduces quantum overhead while preserving critical patterns
- ViT attention mechanisms align naturally with quantum superposition states
- Tensor network simulation scales to practical problem sizes (16+ qubits)
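For intuition, the sketch below computes a fidelity-style quantum kernel for a product-state angle encoding (one qubit per embedding dimension) and feeds it to a classical SVM. The encoding is an illustrative assumption and uses a closed-form overlap instead of cuTensorNet; see scripts/qsvm_cuda_embeddings.py for the actual method:

# Fidelity kernel K(x, z) = |<phi(x)|phi(z)>|^2 for a product-state angle
# encoding; the per-qubit overlaps factorize, so the kernel has a closed form.
import numpy as np
from sklearn.svm import SVC

def fidelity_kernel(A, B):
    # A: (n, d), B: (m, d) -> (n, m) kernel matrix
    diff = A[:, None, :] - B[None, :, :]
    return np.prod(np.cos(diff / 2.0) ** 2, axis=-1)

# Usage with distilled embeddings (X_train, y_train) and test points X_test:
# clf = SVC(kernel="precomputed").fit(fidelity_kernel(X_train, X_train), y_train)
# preds = clf.predict(fidelity_kernel(X_test, X_train))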
We welcome contributions! Help us advance quantum machine learning:
- Fork the QuantumVE repository
- Create your feature branch (git checkout -b feature/amazing-feature)
- Submit a pull request with a detailed description
Areas for contribution:
- New embedding architectures (BERT, CLIP, etc.)
- Additional quantum backends
- Performance optimizations
- Documentation improvements
This work was supported by the Google Cloud Research Credits program under award number GCP19980904.
@inproceedings{ordonez2025embedding,
title={Embedding-Aware Quantum-Classical SVMs for Scalable Quantum Machine Learning},
author={Ord{\'o}{\~n}ez, Sebasti{\'a}n Andr{\'e}s Cajas and Torres, Luis Fernando Torres and Bifulco, Mario and Duran, Carlos Andres and Bosch, Cristian and Carbajo, Ricardo Simon},
booktitle={3rd International Workshop on AI for Quantum and Quantum for AI (AIQxQIA 2025), ECAI 2025},
year={2025},
month={October},
address={Bologna, Italy},
note={Accepted for Poster, Presentation \& Proceedings},
url={https://arxiv.org/abs/2508.00024}
}

🌟 Star us on GitHub if this helps your research! 🌟