# Data Re-uploading Classifier

## Overview
Data re-uploading is a quantum classification technique that achieves universal approximation with just a single qubit. Instead of encoding data once and then applying variational layers (as in a standard VQC), it re-encodes the same input data multiple times, interspersed with trainable rotations.
This remarkable result, proved by Perez-Salinas et al. (2020), means that a single qubit with enough layers can learn any classification boundary -- no entanglement, no multi-qubit operations required.
## The Universality Theorem
Theorem (Perez-Salinas et al., 2020): A single-qubit data re-uploading classifier can approximate any classification function from R^n to {0, 1} to arbitrary precision, given sufficiently many re-uploading layers L.
The mathematical foundation: each re-uploading layer contributes a term to a Fourier series on the Bloch sphere. With L layers, the circuit computes a Fourier series of degree L, and increasing L increases the complexity of learnable decision boundaries.
| Layers | Parameters | Fourier Degree | Expressiveness |
|---|---|---|---|
| 1 | 2 | 1 | Linear boundaries |
| 2 | 4 | 2 | Simple curves |
| 3 | 6 | 3 | Complex boundaries |
| L | 2L | L | Arbitrary (L -> infinity) |
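The degree bound in the table above can be checked numerically: simulate the single-qubit circuit with the second feature held at zero (so its RZ encoding is the identity), sample the output over one period, and confirm the FFT has no support above frequency L. A minimal NumPy sketch, independent of the `circuit` module used later (the helper names `rx`, `ry`, `rz`, and `p1` are our own):

```python
import numpy as np

def rx(t):
    return np.array([[np.cos(t / 2), -1j * np.sin(t / 2)],
                     [-1j * np.sin(t / 2), np.cos(t / 2)]])

def ry(t):
    return np.array([[np.cos(t / 2), -np.sin(t / 2)],
                     [np.sin(t / 2), np.cos(t / 2)]])

def rz(t):
    return np.array([[np.exp(-1j * t / 2), 0], [0, np.exp(1j * t / 2)]])

def p1(x0, theta, n_layers):
    """P(|1>) of the re-uploading circuit for input x = [x0, 0]."""
    psi = np.array([1.0 + 0j, 0.0])
    for l in range(n_layers):
        # RZ(x1*pi) is omitted: x1 = 0 makes it the identity
        for gate in (rx(np.pi * x0), ry(theta[2 * l]), rz(theta[2 * l + 1])):
            psi = gate @ psi
    return abs(psi[1]) ** 2

L = 3
rng = np.random.default_rng(0)
theta = rng.uniform(-np.pi, np.pi, 2 * L)
xs = np.linspace(0, 2, 64, endpoint=False)   # one period of the series
spectrum = np.abs(np.fft.fft([p1(x, theta, L) for x in xs])) / 64
# Only frequencies -L..L survive: bins L+1 .. 63-L are numerically zero
print(spectrum[L + 1:64 - L].max())
```

The printed maximum is at machine-precision level, confirming that three layers produce a Fourier series of degree exactly three, as the table predicts.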
## The Circuit

```
     +----------+ +----------+ +---------+ +---------+   +----------+ +----------+ +---------+ +---------+ +-+
q_0: | RX(x0pi) |-| RZ(x1pi) |-| RY(th0) |-| RZ(th1) |---| RX(x0pi) |-| RZ(x1pi) |-| RY(th2) |-| RZ(th3) |-|M|
     +----------+ +----------+ +---------+ +---------+   +----------+ +----------+ +---------+ +---------+ +-+
        Data         Data       Weights     Weights         Data         Data       Weights     Weights
     <------------------- Layer 1 ------------------->   <------------------- Layer 2 ------------------->
```
Each layer has the same structure:
- Data encoding: RX(x0 * pi), RZ(x1 * pi) -- the same data, every layer
- Variational rotations: RY(theta_j), RZ(theta_{j+1}) -- trainable parameters (two per layer)
The key is that data is uploaded L times, not once. This repeated encoding creates an increasingly rich data-dependent transformation on the Bloch sphere.
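In matrix form, layer l applies U_l = RZ(theta_{2l+1}) RY(theta_{2l}) RZ(x1*pi) RX(x0*pi), and the full circuit is the product U_{L-1} ... U_0 acting on |0>. A minimal NumPy sketch of one layer (the `layer` helper and the parameter values are illustrative, not from the `circuit` module):

```python
import numpy as np

def rx(t):
    return np.array([[np.cos(t / 2), -1j * np.sin(t / 2)],
                     [-1j * np.sin(t / 2), np.cos(t / 2)]])

def ry(t):
    return np.array([[np.cos(t / 2), -np.sin(t / 2)],
                     [np.sin(t / 2), np.cos(t / 2)]])

def rz(t):
    return np.array([[np.exp(-1j * t / 2), 0], [0, np.exp(1j * t / 2)]])

def layer(x, theta_pair):
    """One re-uploading layer: data encoding, then trainable rotations.

    Matrices compose right to left, so the rightmost factor (RX) acts first.
    """
    return rz(theta_pair[1]) @ ry(theta_pair[0]) @ rz(x[1] * np.pi) @ rx(x[0] * np.pi)

U = layer([0.5, -0.3], [0.1, 0.2])             # 2x2 unitary for one layer
print(np.allclose(U.conj().T @ U, np.eye(2)))  # unitarity check -> True
```

Because each layer is itself a single-qubit unitary, stacking L of them is still just one qubit's worth of hardware; only the classical parameter count grows.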
## Why It Works
Consider the effect of L layers on the Bloch sphere:
```
Layer 1: Rotate by data -> Rotate by weights -> Point p1 on sphere
Layer 2: Rotate by data -> Rotate by weights -> Point p2 on sphere
...
Layer L: Rotate by data -> Rotate by weights -> Final point pL
```
The composition of these rotations creates a complex, data-dependent trajectory on the Bloch sphere. The measurement probability P(|1>) at the end depends on where the final point lands -- above or below the equator.
With enough layers, this trajectory can carve out any decision boundary in the input space.
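The trajectory can be traced directly: apply the layers one at a time and record the z-coordinate of the Bloch vector, <Z> = |psi_0|^2 - |psi_1|^2, after each. A small NumPy sketch with random weights, purely illustrative (all helper names are our own):

```python
import numpy as np

def rx(t):
    return np.array([[np.cos(t / 2), -1j * np.sin(t / 2)],
                     [-1j * np.sin(t / 2), np.cos(t / 2)]])

def ry(t):
    return np.array([[np.cos(t / 2), -np.sin(t / 2)],
                     [np.sin(t / 2), np.cos(t / 2)]])

def rz(t):
    return np.array([[np.exp(-1j * t / 2), 0], [0, np.exp(1j * t / 2)]])

def layer(x, th0, th1):
    # Data encoding first (rightmost factors), then trainable rotations
    return rz(th1) @ ry(th0) @ rz(x[1] * np.pi) @ rx(x[0] * np.pi)

rng = np.random.default_rng(7)
x, L = [0.5, -0.3], 3
theta = rng.uniform(-np.pi, np.pi, 2 * L)

psi = np.array([1.0 + 0j, 0.0])                  # start at |0>, the north pole
for l in range(L):
    psi = layer(x, theta[2 * l], theta[2 * l + 1]) @ psi
    z = abs(psi[0]) ** 2 - abs(psi[1]) ** 2      # Bloch z after layer l
    print(f"after layer {l + 1}: <Z> = {z:+.3f}")

prob1 = abs(psi[1]) ** 2
print(f"P(|1>) = {prob1:.3f} -> class {int(prob1 > 0.5)}")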
## Running the Circuit

```python
from circuit import (
    run_circuit,
    predict,
    train_classifier,
    verify_classifier,
)

# Single prediction (random parameters)
result = run_circuit(x=[0.5, -0.3], n_layers=3)
print(f"Qubits: {result['n_qubits']}")  # Just 1!
print(f"Prediction: class {result['prediction']}")

# Train on XOR-like dataset
trained = train_classifier(max_iterations=120, seed=42)
print(f"Accuracy: {trained['accuracy']:.1%}")

# Predict with trained model
theta_opt = trained["optimal_theta"]
pred = predict([0.5, 0.5], theta_opt, n_layers=3)
print(f"Class: {pred['prediction']}, Confidence: {pred['confidence']:.1%}")

# Verify
v = verify_classifier()
print(f"All checks passed: {v['passed']}")
```
## Default Dataset: XOR Pattern
The default dataset tests non-linear classification (XOR is not linearly separable):
```
 x1
  1 |  1  |  0
    |     |
 ---|-----|-----
    |     |
 -1 |  0  |  1
      -1  0  1   x0
```
| Input | Label | Pattern |
|---|---|---|
| [0.5, 0.5] | 0 | Same sign |
| [-0.5, -0.5] | 0 | Same sign |
| [0.5, -0.5] | 1 | Opposite sign |
| [-0.5, 0.5] | 1 | Opposite sign |
A single layer (linear boundary) cannot solve XOR. Multiple re-uploading layers can.
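The four XOR points make a compact end-to-end test. Below is a standalone sketch that trains a 3-layer single-qubit classifier on them with SciPy's COBYLA, minimizing a squared-error loss on P(|1>); this is our own simulation, not the repository's `train_classifier`, which we assume wraps something similar:

```python
import numpy as np
from scipy.optimize import minimize

def rx(t):
    return np.array([[np.cos(t / 2), -1j * np.sin(t / 2)],
                     [-1j * np.sin(t / 2), np.cos(t / 2)]])

def ry(t):
    return np.array([[np.cos(t / 2), -np.sin(t / 2)],
                     [np.sin(t / 2), np.cos(t / 2)]])

def rz(t):
    return np.array([[np.exp(-1j * t / 2), 0], [0, np.exp(1j * t / 2)]])

def p1(x, theta, n_layers=3):
    """P(|1>) after n_layers re-uploading layers on input x."""
    psi = np.array([1.0 + 0j, 0.0])
    for l in range(n_layers):
        psi = (rz(theta[2 * l + 1]) @ ry(theta[2 * l])
               @ rz(x[1] * np.pi) @ rx(x[0] * np.pi)) @ psi
    return abs(psi[1]) ** 2

# XOR dataset from the table above
X = [[0.5, 0.5], [-0.5, -0.5], [0.5, -0.5], [-0.5, 0.5]]
y = [0, 0, 1, 1]

def loss(theta):
    return sum((p1(x, theta) - t) ** 2 for x, t in zip(X, y))

rng = np.random.default_rng(42)
theta0 = rng.uniform(-np.pi, np.pi, 6)   # 2 parameters per layer
res = minimize(loss, theta0, method="COBYLA", options={"maxiter": 500})

preds = [int(p1(x, res.x) > 0.5) for x in X]
acc = np.mean([p == t for p, t in zip(preds, y)])
print(f"loss: {loss(theta0):.3f} -> {res.fun:.3f}, accuracy: {acc:.0%}")
```

With a single layer the same optimization plateaus, since a degree-1 boundary cannot separate the XOR quadrants; with three layers the loss drops substantially.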
## Comparison with Other Classifiers
| Classifier | Qubits | Entanglement | Universality | Min Layers |
|---|---|---|---|---|
| Data Re-uploading | 1 | None | Yes | Depends on task |
| Standard VQC | N (features) | Required | Depends on ansatz | 1-3 |
| Quantum Kernel SVM | N | Required | Depends on kernel | N/A |
| Classical Perceptron | N/A | N/A | No (linear only) | 1 |
| Classical Neural Net | N/A | N/A | Yes | Depends on task |
## Advantages
- Minimal hardware: Just 1 qubit, no entanglement gates
- Noise-resilient: Single qubit = fewest error sources possible
- Universal: Can learn any decision boundary with enough layers
- Interpretable: Bloch sphere visualization of decision boundary
- NISQ-friendly: Runs on any quantum hardware, even the noisiest
## Limitations
- Training cost: COBYLA optimization can require many iterations
- Shot noise: Single-qubit measurement has high variance
- Scaling: For high-dimensional data, need more rotations per layer
- Circuit depth: Deep circuits (many layers) still suffer from noise
## Applications
- Binary classification: Any 2-class problem on 2D input
- NISQ benchmarking: Minimal hardware requirements for testing
- Educational: Clear demonstration of quantum ML universality
- Hybrid models: Single-qubit quantum layer in classical networks
## Learn More
- Data Re-uploading for a Universal Quantum Classifier (Perez-Salinas et al., 2020)
- One Qubit as a Universal Approximant (Perez-Salinas et al., 2021)
- Effect of Data Encoding on Expressive Power (Schuld et al., 2021)