Approximation and Generalization Capacities of Parametrized Quantum Circuits for Functions in Sobolev Spaces
Parametrized quantum circuits (PQCs) are quantum circuits which consist of both fixed and parametrized gates. In recent approaches to quantum machine learning (QML), PQCs are essentially ubiquitous and play a role analogous to classical neural networks. They are used to learn various types of data, with an underlying expectation that if the PQC is made sufficiently deep, and the data plentiful, the generalization error will vanish, and the model will capture the essential features of the distribution. While there exist results proving the approximability of square-integrable functions by PQCs under the $L^2$ distance, the approximation for other function spaces and under other distances has been less explored. In this work we show that PQCs can approximate the space of continuous functions, $p$-integrable functions and the $H^k$ Sobolev spaces under specific distances. Moreover, we develop generalization bounds that connect different function spaces and distances. These results provide a theoretical basis for different applications of PQCs, for example for solving differential equations. Furthermore, they provide new insight into the role of data normalization in PQCs and into loss functions which better suit the specific needs of the users.
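To make the setting concrete, the following pure-Python sketch simulates a single-qubit data re-uploading PQC and verifies the standard observation that its output $\langle Z\rangle$ is a truncated Fourier series in the input, which is the structure underlying Fourier-type approximation arguments for PQCs. The circuit layout and parameter values here are illustrative assumptions, not the paper's construction.

```python
import cmath
import math

def ry(theta, state):
    """Apply a Y-rotation to a single-qubit state (a, b)."""
    a, b = state
    c, s = math.cos(theta / 2), math.sin(theta / 2)
    return (c * a - s * b, s * a + c * b)

def rz(x, state):
    """Apply the data-encoding Z-rotation RZ(x)."""
    a, b = state
    return (cmath.exp(-1j * x / 2) * a, cmath.exp(1j * x / 2) * b)

def pqc(x, thetas):
    """Data re-uploading circuit RY(t0) RZ(x) RY(t1) ... ; returns <Z>."""
    state = (1.0 + 0j, 0.0 + 0j)
    state = ry(thetas[0], state)
    for t in thetas[1:]:
        state = rz(x, state)
        state = ry(t, state)
    a, b = state
    return abs(a) ** 2 - abs(b) ** 2

# With L encoding layers, <Z> is a degree-L trigonometric polynomial:
# f(x) = sum_{k=-L}^{L} c_k e^{ikx}. Hypothetical parameters, L = 2:
thetas = [0.3, 1.1, -0.7]
L = len(thetas) - 1
N = 2 * L + 1
xs = [2 * math.pi * n / N for n in range(N)]
vals = [pqc(x, thetas) for x in xs]

# Recover the Fourier coefficients by a DFT on 2L+1 samples; this is exact
# for a degree-L trigonometric polynomial (no aliasing since 2L < N).
coeffs = {k: sum(v * cmath.exp(-1j * k * x) for v, x in zip(vals, xs)) / N
          for k in range(-L, L + 1)}

def fourier(x):
    return sum(c * cmath.exp(1j * k * x) for k, c in coeffs.items()).real

# The reconstructed Fourier series reproduces the circuit output everywhere.
assert all(abs(pqc(x, thetas) - fourier(x)) < 1e-9 for x in [0.1, 1.7, -2.3])
```

The key point is that the trainable gates only shape the Fourier coefficients, while the encoding gates fix the accessible frequencies; this is why classical Fourier-analytic tools (and obstructions) transfer to PQCs.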

Featured image: This figure illustrates how the input re-scaling strategy affects the ability of the PQC to approximate a linear function. Even in this simple case, the approximation deteriorates under the re-scalings applied in the middle and right panels. This is particularly visible near the boundaries of the domain. In contrast, with the re-scaling applied in the left panel, the PQC accurately approximates both the function and its derivatives, as predicted by our theoretical analysis.
Popular summary
The ability of parametrized quantum circuits (PQCs) to learn functions from data has recently been a central focus of research in quantum machine learning. In this work, we study the use of PQCs to learn both functions and their derivatives. Our results have applications for studying systems where both function values and rates of change matter, such as modelling how financial options change under certain market parameters, or for solving differential equations with physics-informed approaches.
A key challenge is the capacity of PQCs to approximate both functions and their derivatives sufficiently well. We prove that naive strategies relying on simple fitting must in general fail, for fundamental reasons similar to the Gibbs phenomenon from harmonic analysis: they not only lead to poor approximations of the function derivatives but also cause large errors in the function approximation at the boundary of the domain. We propose a solution involving a simple but specific input re-scaling strategy, which enables significant improvements in the simultaneous approximation of function values and derivatives.
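The Gibbs-type obstruction, and why re-scaling the input helps, can be illustrated with a purely classical Fourier toy example (the function names and the specific re-scalings below are ours, not the paper's construction). Fitting the non-periodic target $f(t)=t$ over the full period forces a discontinuous periodic extension and a persistent error at the domain boundary, while re-scaling the data into half the period admits a continuous periodic extension and uniform convergence.

```python
import math

def sawtooth_sum(x, L):
    """Partial Fourier sum of the 2*pi-periodic extension of f(x) = x on (-pi, pi)."""
    return 2 * sum((-1) ** (k + 1) * math.sin(k * x) / k for k in range(1, L + 1))

def triangle_sum(x, L):
    """Partial Fourier sum of the continuous triangle wave equal to x on [-pi/2, pi/2]."""
    return (4 / math.pi) * sum(
        (-1) ** ((m - 1) // 2) * math.sin(m * x) / m ** 2
        for m in range(1, 2 * L, 2)
    )

L = 25
ts = [i / 200 for i in range(-200, 201)]  # the data domain [-1, 1]

# Strategy A: map the domain onto the full period, x = pi * t. The periodized
# target jumps at the boundary, so the error there does not vanish with L.
err_full = max(abs(sawtooth_sum(math.pi * t, L) - math.pi * t) for t in ts)

# Strategy B: map the domain into half the period, x = (pi/2) * t. The target
# now extends continuously, and the partial sums converge uniformly.
err_half = max(abs(triangle_sum(math.pi / 2 * t, L) - math.pi / 2 * t) for t in ts)

assert err_half < 0.02 < 0.5 < err_full
```

Because the coefficients of the continuous extension decay like $1/m^2$ rather than $1/k$, Strategy B also yields convergent derivative approximations, which is the regime the re-scaling strategy targets.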
Beyond function approximation, ensuring good generalization is another key challenge in quantum machine learning. In particular, training a PQC with the commonly used $L^2$-loss function, which computes the average of the squared differences between approximated and true function values over all training points, does not allow for arbitrary precision at every function value across the entire domain. However, we prove that including both function values and their derivatives in the training allows PQCs to achieve this in principle, which is especially valuable in practical settings where derivative data is often available at little or no extra cost, such as in physical measurements or financial modelling.
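As a hedged sketch of this idea, the toy model below (a classical trigonometric surrogate standing in for a PQC output; all names, data, and parameter values are hypothetical) is trained by gradient descent on an $H^1$-type loss that penalizes residuals of both function values and first derivatives.

```python
import math

def model(x, theta):
    """Toy trigonometric model standing in for a PQC output."""
    a, b = theta
    return a * math.sin(x) + b * math.cos(x)

def model_deriv(x, theta):
    a, b = theta
    return a * math.cos(x) - b * math.sin(x)

def sobolev_loss(theta, data):
    """Empirical H^1-type loss: squared error on values plus on derivatives."""
    return sum((model(x, theta) - y) ** 2 + (model_deriv(x, theta) - dy) ** 2
               for x, y, dy in data) / len(data)

# Hypothetical training set for the target f(x) = sin(x), so dy = cos(x);
# derivative labels like these are often cheap to obtain in practice.
data = [(x, math.sin(x), math.cos(x)) for x in [0.0, 0.5, 1.0, 1.5, 2.0]]

theta = [0.0, 0.0]
lr = 0.1
for _ in range(500):
    g0 = g1 = 0.0
    for x, y, dy in data:
        rv = model(x, theta) - y           # value residual
        rd = model_deriv(x, theta) - dy    # derivative residual
        # chain rule: d(model)/da = sin x, d(model')/da = cos x, etc.
        g0 += 2 * (rv * math.sin(x) + rd * math.cos(x)) / len(data)
        g1 += 2 * (rv * math.cos(x) - rd * math.sin(x)) / len(data)
    theta = [theta[0] - lr * g0, theta[1] - lr * g1]

# The combined loss drives both value and derivative errors to zero.
assert abs(theta[0] - 1.0) < 1e-3 and abs(theta[1]) < 1e-3
assert sobolev_loss(theta, data) < 1e-6
```

Dropping the derivative residual recovers the plain $L^2$-loss; keeping it is the toy analogue of the Sobolev-type training the paper analyzes.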
Our results broaden the potential application of PQCs in fields where understanding both behavior and trends is crucial, from solving differential equations in physics to analyzing financial risks.
Cited by
[1] Adrián Pérez-Salinas, Mahtab Yaghubi Rad, Alice Barthe, and Vedran Dunjko, "Universal approximation of continuous functions with minimal quantum circuits", Physical Review Research 7 4, 043282 (2025).
[2] Said Lantigua, Gilson Giraldi, and Renato Portugal, "Classical-quantum hybrid architecture for physics-informed neural networks", Physical Review A 113 4, 042446 (2026).
[3] Shreyan Basu Ray and Soujanya Ray, Information Systems Engineering and Management 65, 205 (2025) ISBN:978-3-031-99785-3.
The above citations are from Crossref's cited-by service (last updated successfully 2026-04-23 06:57:10). The list may be incomplete as not all publishers provide suitable and complete citation data.
This paper is published in Quantum under the Creative Commons Attribution 4.0 International (CC BY 4.0) license. Copyright remains with the original copyright holders, such as the authors or their institutions.