Data Re-uploading Quantum Classifier
Reproduction of Perez-Salinas et al. (2020) — demonstrating that a single qubit with data re-uploading is a universal quantum classifier.
| Property | Value |
|---|---|
| Category | Research Reproduction |
| Difficulty | Advanced |
| Framework | PennyLane |
| Qubits | 1 |
| Gates | RZ, RY |
| Paper | Quantum 4, 226 (2020) |
| DOI | 10.22331/q-2020-02-06-226 |
| arXiv | 1907.02085 |
1. Paper Summary
Perez-Salinas, Cervera-Lierta, Gil-Fuster, and Latorre (2020) introduced the data re-uploading strategy for quantum machine learning and proved a remarkable universality result: a single qubit is sufficient for universal classification, provided classical data is re-encoded at every layer of the circuit.
The core theorem (Theorem 1) states:
A single-qubit classifier with L layers of data re-uploading can approximate any Boolean function f: {0, 1}^{2L} -> {0, 1}.
This is the quantum analogue of the classical universal approximation theorem. While classical networks achieve universality through depth (stacked non-linear layers), the quantum version achieves it through repeated data encoding into the same qubit. The non-linearity arises from the composition of SU(2) rotations with data-dependent angles, which creates complex "folds" on the Bloch sphere.
Key contributions of the paper:
- Proof that data re-uploading yields universal classification with a single qubit
- Demonstration on 2D classification tasks (circle, annulus, wavy boundary)
- Analysis of how classification accuracy scales with the number of layers
- A comparison drawing an equivalence between data re-uploading circuits and classical neural networks
2. Prerequisites
Before studying this research reproduction, you should be comfortable with:
- Single-qubit rotations: RX, RY, RZ gates and the Bloch sphere
- Parameterized circuits: Variational quantum circuits with trainable angles
- Data re-uploading basics: The introductory tutorial at ../../../intermediate/qml/data_reuploading/
- Gradient-based optimization: Parameter-shift rule, cost functions
- PennyLane fundamentals: QNodes, devices, automatic differentiation
3. The Data Re-uploading Idea
Traditional approach (single encoding)
Classical data is encoded once, then a parameterized unitary is applied:
|0> --> S(x) --> U(theta) --> Measure
This architecture is limited: the set of functions it can represent is constrained by the expressivity of a single unitary rotation after fixed data encoding.
Data re-uploading (interleaved encoding)
Data is re-encoded at every layer, interleaved with trainable parameters:
|0> --> U(theta_1, x) --> U(theta_2, x) --> ... --> U(theta_L, x) --> Measure
Each layer mixes data and parameters differently, and the composition of rotations creates a non-linear mapping from input space to measurement probabilities. This is what gives the architecture its universal approximation power.
4. Circuit Architecture
Each of the L layers applies a general SU(2) rotation with data-dependent angles:
U_l(theta, x) = RZ(theta_{l,1} + x_1 * w_{l,1}) . RY(theta_{l,2} + x_2 * w_{l,2}) . RZ(theta_{l,3})
where:
- theta_{l,k}: Learnable bias angles (3 per layer)
- w_{l,k}: Learnable input scaling weights (3 per layer, though w_3 is unused)
- x_k: Input features (scaled to [0, 2*pi])
The full circuit diagram for L layers on a single qubit:
```text
      +-----------------+   +-----------------+       +-----------------+
|0> --| U_1(theta_1, x) |---| U_2(theta_2, x) |--...--| U_L(theta_L, x) |-- <Z>
      +-----------------+   +-----------------+       +-----------------+

Each U_l expands to:

--[RZ(theta_1 + x_1*w_1)]--[RY(theta_2 + x_2*w_2)]--[RZ(theta_3)]--
```
Parameters per layer: 6 (3 biases + 3 weights)
Total parameters: 6L
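The layer structure above can be checked with a stand-alone NumPy sketch (independent of the repository's circuit.py; the rotation convention R(theta) = exp(-i*theta*sigma/2) follows PennyLane, and only w_1 and w_2 enter, since w_3 is unused):

```python
import numpy as np

# PennyLane-convention single-qubit rotations: R(theta) = exp(-i * theta * sigma / 2)
def rz(a):
    return np.diag([np.exp(-1j * a / 2), np.exp(1j * a / 2)])

def ry(a):
    return np.array([[np.cos(a / 2), -np.sin(a / 2)],
                     [np.sin(a / 2),  np.cos(a / 2)]])

def reupload_expval(x, thetas, weights):
    """<Z> on |0> after L layers of RZ(t1 + x1*w1) -> RY(t2 + x2*w2) -> RZ(t3).

    thetas:  list of L bias triples (t1, t2, t3)
    weights: list of L scaling pairs (w1, w2) (the stored w3 is unused)
    """
    state = np.array([1.0 + 0.0j, 0.0 + 0.0j])
    for (t1, t2, t3), (w1, w2) in zip(thetas, weights):
        state = rz(t3) @ ry(t2 + x[1] * w2) @ rz(t1 + x[0] * w1) @ state
    return float(np.abs(state[0]) ** 2 - np.abs(state[1]) ** 2)

# Sanity checks: each layer is a general SU(2) element, and trivial
# parameter choices give the expected <Z> = +/-1.
U = rz(-0.7) @ ry(1.1) @ rz(0.4)
assert np.allclose(U.conj().T @ U, np.eye(2))        # unitary
assert np.isclose(np.linalg.det(U), 1.0)             # det = 1 -> SU(2)
assert np.isclose(reupload_expval([0.3, 0.7], [(0, 0, 0)], [(0, 0)]), 1.0)
assert np.isclose(reupload_expval([0.3, 0.7], [(0, np.pi, 0)], [(0, 0)]), -1.0)
```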
5. Running the Circuit
Quick start
```python
from circuit import run_circuit

result = run_circuit(
    n_layers_list=[1, 2, 3, 4],
    n_train=50,
    n_test=20,
    n_epochs=30,
)

for n_layers, data in result["layer_analysis"].items():
    print(f"L={n_layers}: train={data['train_accuracy']:.1%}, test={data['test_accuracy']:.1%}")
```
Individual components
```python
from circuit import (
    create_single_qubit_classifier,
    generate_circle_dataset,
    train_classifier,
    evaluate_classifier,
    verify_reproduction,
)

# Create classifier with 3 re-uploading layers
classifier = create_single_qubit_classifier(n_layers=3)

# Generate circle dataset (points inside vs. outside radius 0.5)
X_train, y_train = generate_circle_dataset(n_samples=100, noise=0.1)
X_test, y_test = generate_circle_dataset(n_samples=50, noise=0.1, seed=99)

# Train with gradient descent (parameter-shift rule)
weights, losses = train_classifier(classifier, X_train, y_train, n_layers=3, n_epochs=50)

# Evaluate
accuracy = evaluate_classifier(classifier, weights, X_test, y_test)
print(f"Test accuracy: {accuracy:.1%}")
```
Verify reproduction
```python
from circuit import run_circuit

result = run_circuit(n_layers_list=[1, 2, 3], n_train=50, n_test=20, n_epochs=50)

verification = result["verification"]
print(f"Checks passed: {verification['passed']}/{verification['total']}")
for check in verification["checks"]:
    print(f"  [{'PASS' if check['status'] == 'PASS' else 'FAIL'}] {check['name']}")
```
6. Expected Results
Circle dataset (2D classification)
| Layers | Parameters | Train Accuracy | Test Accuracy | Notes |
|---|---|---|---|---|
| 1 | 6 | ~60% | ~55% | Linear-like boundary, cannot capture circle |
| 2 | 12 | ~75% | ~70% | Begins to curve the decision boundary |
| 3 | 18 | ~85% | ~80% | Good approximation of circular boundary |
| 4 | 24 | ~90% | ~85% | Near-paper-quality classification |
Note: Results with small training sets (n < 50) may show higher variance. The paper uses larger datasets and reports >95% for L >= 4 with sufficient training.
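For reference, here is a minimal NumPy generator in the spirit of generate_circle_dataset. This is an assumption about its behavior (uniform points in [-1, 1]^2, labeled by distance from the origin, with Gaussian noise on the radial coordinate); the repository version may differ in detail:

```python
import numpy as np

def make_circle_dataset(n_samples, noise=0.1, radius=0.5, seed=0):
    """Points uniform in [-1, 1]^2; label 1 outside the radius, 0 inside.

    Gaussian noise perturbs the radial coordinate before thresholding,
    so points near the boundary may be mislabeled (harder task).
    """
    rng = np.random.default_rng(seed)
    X = rng.uniform(-1, 1, size=(n_samples, 2))
    r = np.linalg.norm(X, axis=1) + rng.normal(0.0, noise, n_samples)
    y = (r > radius).astype(int)
    return X, y

X, y = make_circle_dataset(200, noise=0.0)
# With no noise, labels are exactly the indicator of ||x|| > 0.5:
assert np.array_equal(y, (np.linalg.norm(X, axis=1) > 0.5).astype(int))
```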
What the verification checks confirm
- Accuracy improves with depth — more layers yield better classification
- Multi-layer beats single layer — L > 1 outperforms L = 1 on non-linear tasks
- Training converges — loss decreases below 1.0 (random baseline)
- Parameter count matches paper — exactly 6 parameters per layer
7. Universality Theorem
The central theoretical result from the paper:
Theorem 1 (Perez-Salinas et al., 2020): A single-qubit quantum classifier with L layers of data re-uploading can approximate any Boolean function f: {0, 1}^{2L} -> {0, 1}.
Why this matters
- Minimal resources: Universal classification with just 1 qubit challenges the assumption that more qubits = more power.
- Quantum advantage pathway: The number of parameters scales linearly with L (6L), while the input space {0, 1}^{2L} contains 2^{2L} points and therefore supports 2^(2^{2L}) distinct Boolean functions.
- Classical equivalence: The data re-uploading circuit with L layers is at least as expressive as a classical neural network with L hidden layers, but uses exponentially fewer parameters for certain function classes.
Intuition
Each re-uploading layer creates a "fold" on the Bloch sphere. With one layer, the qubit state traces a simple curve as the input varies. With L layers, the trajectory can fold back on itself L times, creating arbitrarily complex decision regions when projected back to a measurement probability.
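This folding can be made quantitative in a stripped-down case. With pure repeated encoding RY(w*x) and no trainable biases, the RY angles add, so <Z> = cos(L*w*x): depth L multiplies the output frequency, and the number of decision-boundary crossings grows linearly with L. A small NumPy check (illustrative only; the trained circuit interleaves biases as well):

```python
import numpy as np

# Degenerate re-uploading circuit: L repeats of RY(w*x) acting on |0>.
# RY rotations about the same axis compose additively, so <Z> = cos(L * w * x).
def expval_after_L_layers(x, L, w=1.0):
    return np.cos(L * w * x)

def sign_changes(vals):
    """Count sign flips of <Z>, i.e. crossings of the decision threshold 0."""
    return int(np.sum(np.diff(np.sign(vals)) != 0))

xs = np.linspace(0, 2 * np.pi, 1000)
assert sign_changes(expval_after_L_layers(xs, L=1)) == 2  # cos(x): 2 zero crossings
assert sign_changes(expval_after_L_layers(xs, L=3)) == 6  # cos(3x): 6 zero crossings
```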
8. Implementation Details
Feature encoding
```python
angle = theta + x * w
```
where theta is a learnable bias, x is an input feature (scaled to [0, 2*pi]), and w is a learnable weight. This affine encoding is applied to the RZ and RY rotation angles.
Loss function
MSE loss between the circuit output <Z> (in [-1, +1]) and targets in {-1, +1}:
```python
loss = mean((predictions - targets) ** 2)
```
where targets = 2 * labels - 1 maps {0, 1} to {-1, +1}.
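Under these definitions, the loss can be written as a self-contained NumPy function (a sketch; the repository's training code may differ in detail). Note that constant output <Z> = 0 gives loss exactly 1.0, which is the "random baseline" referenced by the verification checks:

```python
import numpy as np

def mse_loss(expvals, labels):
    """MSE between circuit outputs <Z> in [-1, 1] and labels {0, 1} mapped to {-1, +1}."""
    targets = 2 * np.asarray(labels) - 1        # {0, 1} -> {-1, +1}
    return float(np.mean((np.asarray(expvals) - targets) ** 2))

assert mse_loss([1.0, -1.0], [1, 0]) == 0.0    # perfect predictions
assert mse_loss([0.0, 0.0], [1, 0]) == 1.0     # uninformative <Z> = 0 baseline
```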
Optimizer
PennyLane's GradientDescentOptimizer with the parameter-shift rule for exact analytic gradients. Default learning rate: 0.1.
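The parameter-shift rule is easy to verify on the smallest instance: for RY(theta) acting on |0>, the output is <Z> = cos(theta), and two circuit evaluations shifted by +/- pi/2 recover the exact analytic derivative. A NumPy sketch (not the repository code, which delegates this to PennyLane):

```python
import numpy as np

def expval_z(theta):
    # <Z> after RY(theta)|0> is cos(theta)
    return np.cos(theta)

def param_shift_grad(f, theta, shift=np.pi / 2):
    """Exact gradient for gates of the form exp(-i*theta*P/2) with P^2 = I."""
    return (f(theta + shift) - f(theta - shift)) / 2

theta = 0.37
# Parameter-shift result matches the analytic derivative d/dtheta cos(theta) = -sin(theta):
assert np.isclose(param_shift_grad(expval_z, theta), -np.sin(theta))
```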
Classification threshold
The expectation value <Z> is thresholded at 0:
- <Z> > 0 -> class 1 (outside circle)
- <Z> <= 0 -> class 0 (inside circle)
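As a one-line sketch of this step (predict is an illustrative name, not necessarily the repository's API):

```python
import numpy as np

def predict(expvals):
    """Threshold <Z> at 0: > 0 -> class 1 (outside circle), <= 0 -> class 0 (inside)."""
    return (np.asarray(expvals) > 0).astype(int)

assert predict([0.8, -0.2, 0.0]).tolist() == [1, 0, 0]
```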
9. Comparison with Classical Models
| Model | Parameters | Qubits | Expressivity | Decision Boundary |
|---|---|---|---|---|
| Single perceptron | 3 | 0 | Linear only | Hyperplane |
| Data re-upload (L=1) | 6 | 1 | Limited non-linear | Simple curve |
| Data re-upload (L=3) | 18 | 1 | Universal | Arbitrary |
| Data re-upload (L=4) | 24 | 1 | Universal | Arbitrary (finer) |
| Classical NN (3 layers, 10 neurons) | 100+ | 0 | Universal | Arbitrary |
The data re-uploading classifier achieves universality with significantly fewer parameters than a classical neural network of comparable depth, though at the cost of requiring quantum circuit execution for each forward pass.
10. Limitations and Caveats
- Simulator only: This reproduction runs on PennyLane's default.qubit simulator. Hardware noise would degrade accuracy.
- Small datasets: The default parameters use small training sets for fast execution. Paper results use larger datasets (hundreds of samples) and more epochs.
- 2D inputs only: This implementation handles 2-feature inputs. The paper also explores higher-dimensional classification.
- No barren plateaus analysis: The paper does not deeply analyze trainability; single-qubit circuits are generally free of barren plateaus, but multi-qubit extensions may not be.
- Gradient descent only: The paper also discusses other optimizers (Adam, Nelder-Mead); this reproduction uses vanilla gradient descent.
11. Further Reading
Primary reference
- Perez-Salinas, A., Cervera-Lierta, A., Gil-Fuster, E., & Latorre, J. I. "Data re-uploading for a universal quantum classifier." Quantum 4, 226 (2020). DOI: 10.22331/q-2020-02-06-226. arXiv: 1907.02085.
Related work
- Schuld, M., Sweke, R., & Meyer, J. K. "Effect of data encoding on the expressive power of variational quantum machine learning models." Physical Review A 103, 032430 (2021). DOI: 10.1103/PhysRevA.103.032430.
- Mitarai, K., Negoro, M., Kitagawa, M., & Fujii, K. "Quantum circuit learning." Physical Review A 98, 032309 (2018). DOI: 10.1103/PhysRevA.98.032309.
- Havlicek, V. et al. "Supervised learning with quantum-enhanced feature spaces." Nature 567, 209-212 (2019). DOI: 10.1038/s41586-019-0980-2.
Tutorials
- Introductory data re-uploading tutorial: ../../../intermediate/qml/data_reuploading/
12. Citation
```bibtex
@article{perez-salinas2020data,
  title     = {Data re-uploading for a universal quantum classifier},
  author    = {P{\'e}rez-Salinas, Adri{\'a}n and Cervera-Lierta, Alba and Gil-Fuster, Elies and Latorre, Jos{\'e} Ignacio},
  journal   = {Quantum},
  volume    = {4},
  pages     = {226},
  year      = {2020},
  publisher = {Verein zur F{\"o}rderung des Open Access Publizierens in den Quantenwissenschaften},
  doi       = {10.22331/q-2020-02-06-226},
  url       = {https://doi.org/10.22331/q-2020-02-06-226}
}
```