Quantum error correction Research Papers
2025, 2021 IEEE International Conference on Quantum Computing and Engineering (QCE)
We consider the problem of estimating quantum observables on a collection of qubits, given as a linear combination of Pauli operators, with shallow quantum circuits consisting of singlequbit rotations. We introduce estimators based on randomised measurements, which use decision diagrams to sample from probability distributions on measurement bases. This approach generalises previously known uniform and locally-biased randomised estimators. The decision diagrams are constructed given target quantum operators and can be optimised considering different strategies. We show numerically that the estimators introduced here can produce more precise estimates on some quantum chemistry Hamiltonians, compared to previously known randomised protocols and Pauli grouping methods.
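The decision-diagram machinery is beyond a short sketch, but the underlying randomised-estimator idea can be illustrated: importance-sample the Pauli terms of an observable with probability proportional to their weights and average the (sign-corrected) contributions. This is a minimal sketch with a hypothetical two-qubit observable; for simplicity, exact expectation values stand in for finite-shot measurements.

```python
import numpy as np

# Pauli matrices
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0, -1.0]).astype(complex)

def kron(a, b):
    return np.kron(a, b)

# Hypothetical observable O = sum_i c_i P_i on two qubits
terms = [(0.5, kron(Z, Z)), (0.3, kron(X, I2)), (0.2, kron(Y, Y))]

rng = np.random.default_rng(7)
psi = rng.normal(size=4) + 1j * rng.normal(size=4)
psi /= np.linalg.norm(psi)          # random normalised state

def expval(P):
    return float(np.real(psi.conj() @ P @ psi))

exact = sum(c * expval(P) for c, P in terms)

# Importance-sample terms with probability |c_i| / S; each sample contributes
# S * sign(c_i) * <P_i>, which makes the estimator unbiased.
S = sum(abs(c) for c, _ in terms)
probs = [abs(c) / S for c, _ in terms]
idx = rng.choice(len(terms), size=20000, p=probs)
samples = [S * np.sign(terms[i][0]) * expval(terms[i][1]) for i in idx]
estimate = float(np.mean(samples))
```

With 20,000 samples the Monte Carlo estimate agrees with the exact expectation to within a few hundredths; locally-biased schemes refine exactly this sampling distribution.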
2025, Modern Physics Letters A
We present the scalar moduli stabilization from the perspective of the real intrinsic geometry. In this paper, we describe the physical nature of the vacuum moduli fluctuations of an arbitrary Fayet configuration. For finitely many abelian scalar fields, we show that the framework of the real intrinsic geometry investigates the mixing between the marginal and threshold vacua. Interestingly, we find that the phenomena of wall crossing and the search of the stable vacuum configurations, pertaining to D-term and F-term scalar moduli, can be accomplished for the abelian charges. For given vacuum expectation values of the moduli scalars, we provide phenomenological aspects of the vacuum fluctuations and phase transitions in the supersymmetry breaking configurations.
2025, Bulletin of the American Physical Society
We present low temperature measurements of back gated FET structures and donor implanted SETs fabricated from strained silicon on insulator substrates with a low doped handle. This strained silicon system is useful for studying the effects of strain on single donor physics and may provide insight into the behavior of strained silicon channels for quantum dots. We use FET thresholds to characterize the oxide/Si defect density. Back gating influences the transient time response, mobility, and FET threshold. These parameters are also modified by above band gap light illumination. Two transport channels are observed, which also strongly depend on back gate voltage and illumination.
2025, Part 16
This paper explores the implications of assuming the existence of a fundamental quantum of energy, referred to here as the joulino, on the limits of errorless exponential quantum computation. Building on the theoretical framework of füberphysics and its extension into Information Mechanics, we examine how the granularity of energy in quantum systems imposes upper bounds on the number of qubits that can engage in coherent, unitary evolution suitable for exponential computational tasks. Us...
2025, Physical Review A
High-precision, robust quantum gates are essential components in quantum computation and information processing. In this study, we present an alternative perspective, exploring the potential applicability of quantum gates that exhibit heightened sensitivity to errors. We investigate such sensitive quantum gates, which, beyond their established use in in vivo nuclear magnetic resonance spectroscopy and polarization optics, may offer significant utility in other areas where selectivity, filtering, sensing, localization, or addressing properties are of interest. Utilizing the composite pulses technique, we derive three fundamental quantum gates with narrowband and passband characteristics: the X (NOT) gate, the Hadamard gate, and gates enabling arbitrary rotations. To systematically design these composite pulse sequences, we introduce the SU(2), modified-SU(2), and regularization random search methodologies. These approaches demonstrate superior performance compared to established sequences in the literature, including NB1, SK1, and PB1.
2025, Authoera
The Khayyam-Pascal Triangle (also known as the Binomial Triangle), a fundamental structure in mathematical combinatorics and number theory, powerfully visualizes the regular and predictable distribution of binomial coefficients. This study introduces an innovative approach termed the "Keçeci Binomial Square" (KBS, Keçeci's Arithmetical Square (first defined: March 2025)), which defines numerical series within Pascal's Triangle characterized by specific geometric and structural properties. Rather than directly manipulating the standard binomial expansion, KBS focuses on a specialized selection and analysis of Pascal's Triangle elements that constitute the coefficients of these expansions. The core definition of KBS relies on selecting an N x N square region from Pascal's Triangle. This selection is dynamically determined by a user-specified start_row_index and an alignment_type ("left", "right", "center") that dictates how coefficients are positioned within each row. The resulting numerical series comprises N segments of N elements each, drawn from consecutive rows of Pascal's Triangle. The sum of these elements constitutes one of the primary outputs of the KBS. This structure offers an opportunity to examine not only the individual values of binomial coefficients but also their behavior within a particular regional integration. The academic value of the KBS concept lies in its ability to facilitate the discovery of local patterns and relationships within Pascal's Triangle. This approach not only aids in visualizing binomial coefficients and combinatorial principles in mathematical education but also provides a framework for investigating specific additive properties in number theory or particular cases in algorithmic analysis.
For instance, the impact of different alignment types on the selected series sums, or potential connections between KBS series for specific N values and other known number sequences (e.g., Fibonacci, Catalan), present fertile grounds for future research. In conclusion, the Keçeci Binomial Square offers a systematic method for re-contextualizing a well-established mathematical structure, thereby revealing hidden relationships and additive properties among binomial coefficients. This framework holds the potential to stimulate new research in theoretical mathematics and offer novel perspectives in applied fields (e.g., combinatorial optimization, data analysis). Future work can delve into a deeper mathematical analysis of different KBS configurations and test their practical implications across various disciplines.
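The selection described above (start row, alignment, N x N segment sums) can be sketched directly; the exact indexing conventions below (e.g. which elements "center" picks from an even-length row) are my assumption, not the paper's definition, and rows are assumed long enough to supply N elements.

```python
from math import comb

def kbs_sum(start_row_index, n, alignment="left"):
    """Sum an N x N selection from Pascal's Triangle (illustrative indexing).

    Assumes start_row_index >= n - 1 so every selected row has at least n entries.
    """
    total = 0
    for r in range(start_row_index, start_row_index + n):
        row = [comb(r, k) for k in range(r + 1)]   # row r of Pascal's Triangle
        if alignment == "left":
            seg = row[:n]
        elif alignment == "right":
            seg = row[-n:]
        else:  # "center": middle n entries, biased left for even remainders
            off = (len(row) - n) // 2
            seg = row[off:off + n]
        total += sum(seg)
    return total
```

For example, a 3 x 3 left-aligned selection starting at row 4 sums [1,4,6] + [1,5,10] + [1,6,15] = 49, and by the triangle's symmetry the right-aligned sum is identical.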
2025, From füberphysics to Information Mechanics Part 11: Discrete Energy Exchange and the Physical Limits of Quantum Computation
We examine the implications of a discrete energy quantization framework, inspired by (a, b) variables in relativistic models, for the physical realization of quantum computation. This framework postulates that all energy exchanges occur in integer multiples of a fundamental quantum, especially when measured from a preferred symmetrical frame corresponding to the isotropic velocity of the cosmic microwave background. We explore whether such a discretization introduces limits to idealized exponential quantum computing using entangled qubits. Our analysis supports 't Hooft's recent claim that unbounded exponential computation may be unrealistic due to the fundamentally discrete and deterministic nature of physical interactions.
2025, Advances in Theoretical and Mathematical Physics
In this paper, we shall describe some correlation function computations in perturbative heterotic strings that generalize B model computations. On the (2,2) locus, correlation functions in the B model receive no quantum corrections, but off the (2,2) locus, that can change. Classically, the (0,2) analogue of the B model is equivalent to the previously discussed (0,2) analogue of the A model, but with the gauge bundle dualized; our generalization of the A model also simultaneously generalizes the B model. The A and B analogues sometimes have different regularizations, however, which distinguish them quantum-mechanically. We discuss how properties of the (2,2) B model, such as the lack of quantum corrections, are realized in (0,2) A model language. In an appendix, we also extensively discuss how the Calabi-Yau condition for the closed string B model (uncoupled to topological gravity) can be weakened slightly, a detail which does not seem to have been covered in the literature previously. That weakening also manifests in the description of the (2,2) B model as a (0,2) A model.
2025, arXiv (Cornell University)
A practical quantum computer must be capable of performing high fidelity quantum gates on a set of quantum bits (qubits). In the presence of noise, the realization of such gates poses daunting challenges. Geometric phases, which possess intrinsic noise-tolerant features, hold the promise for performing robust quantum computation. In particular, quantum holonomies, i.e., non-Abelian geometric phases, naturally lead to universal quantum computation due to their non-commutativity. Although quantum gates based on adiabatic holonomies have already been proposed, the slow evolution eventually compromises qubit coherence and computational power. Here, we propose a general approach to speed up an implementation of adiabatic holonomic gates by using transitionless driving techniques and show how such a universal set of fast geometric quantum gates in a superconducting circuit architecture can be obtained in an all-geometric approach. Compared with standard non-adiabatic holonomic quantum computation, the holonomies obtained in our approach tend asymptotically to those of the adiabatic approach in the long run-time limit and thus might open up a new horizon for realizing a practical quantum computer. Fast and robust quantum gates play a central role in realizing a practical quantum computer. While the robustness offers resilience to certain errors such as parameter fluctuations, the fast implementation of designated quantum gates increases computational speed, which in turn decreases environment-induced errors. A possible approach towards robust quantum computation is to implement quantum gates by means of different types of geometric phases; an approach known as holonomic quantum computation (HQC). Such geometric gates depend solely on the path of a system evolution, rather than its dynamical details. Universal quantum computation based purely on geometric means has been proposed in the adiabatic regime, resulting in a precise control of a quantum-mechanical system [7].
Despite the appealing features, the adiabatic evolution is associated with long run time, which increases the exposure to detrimental decoherence and noise. However, this drawback can be eliminated by using non-adiabatic HQC schemes based on Abelian [8,9] or non-Abelian geometric phases [10]. The latter has been developed further in refs [11-14], experimentally demonstrated in refs [15-18], and its robustness to a variety of errors has been studied in refs [19,20]. Adiabatic processes can also be carried out swiftly by employing the transitionless quantum driving algorithm (TQDA) if the quantum system consists of non-degenerate subspaces [21]. This is also known as an adiabatic shortcut in the literature. A key notion of TQDA is to seek a transitionless Hamiltonian so that the system evolves exactly along the same adiabatic passage of a given target Hamiltonian, but at any desired rate. This is achieved with the aid of an additional Hamiltonian that suppresses the energy level fluctuations caused by the changes in the system parameters. In this report, we generalize TQDA to degenerate subspaces, where non-Abelian geometric phases are acquired after a cyclic evolution. With the help of the generalized TQDA, we propose a universal set of non-adiabatic holonomic single- and two-qubit gates. Specifically, non-Abelian geometric phases or quantum holonomies are acquired by a degenerate subspace after a cyclic evolution. TQDA-based geometric phases are realized via non-adiabatic evolution, dictated by an additional transition-suppressing Hamiltonian. We further simplify the transitionless Hamiltonian by selectively choosing geodesic path segments forming a loop in the system parameter space.
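The transitionless-driving idea in the non-degenerate case is easy to demonstrate on a two-level Landau-Zener sweep: for H0(t) = (ε(t)σz + Δσx)/2, the counterdiabatic term is H_CD = (θ̇/2)σy with θ = arctan2(Δ, ε), and adding it keeps the state on the instantaneous ground state at any sweep rate. A minimal numerical sketch (parameters hypothetical, not from the paper):

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.diag([1.0, -1.0]).astype(complex)

def step(hx, hy, hz, dt):
    # Exact propagator exp(-i dt (hx*sx + hy*sy + hz*sz)) for a traceless 2x2 H
    w = np.sqrt(hx**2 + hy**2 + hz**2)
    n = (hx * sx + hy * sy + hz * sz) / w
    return np.cos(w * dt) * np.eye(2) - 1j * np.sin(w * dt) * n

def ground(eps, delta):
    # Instantaneous ground state of H0 = (eps*sz + delta*sx)/2
    _, vecs = np.linalg.eigh((eps * sz + delta * sx) / 2)
    return vecs[:, 0].astype(complex)

delta, eps0, T, N = 1.0, 10.0, 1.0, 4000
dt = T / N

psi_bare = ground(eps0, delta)
psi_cd = psi_bare.copy()
for k in range(N):
    t = (k + 0.5) * dt
    eps = eps0 * (1 - 2 * t / T)                  # fast linear sweep eps0 -> -eps0
    deps = -2 * eps0 / T
    th_dot = -delta * deps / (delta**2 + eps**2)  # d/dt of arctan2(delta, eps)
    psi_bare = step(delta / 2, 0.0, eps / 2, dt) @ psi_bare
    psi_cd = step(delta / 2, th_dot / 2, eps / 2, dt) @ psi_cd  # with H_CD

gs = ground(-eps0, delta)
fid_bare = abs(gs.conj() @ psi_bare) ** 2
fid_cd = abs(gs.conj() @ psi_cd) ** 2
```

The bare sweep is strongly diabatic and ends far from the final ground state, while the counterdiabatic evolution tracks it essentially perfectly; the paper's contribution is generalizing this construction to degenerate subspaces, where the acquired phases become non-Abelian.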
2025
Quantum computing (QC) harnesses superposition and entanglement to tackle classically intractable tasks. In the measurement-based quantum computing (MBQC) paradigm, computation is driven by preparing a highly entangled cluster state and performing adaptive single-qubit measurements. Here, we revisit You et al.'s one-step superconducting-circuit scheme for cluster-state generation, providing full analytical derivations and numerical validation for a 4-qubit instantiation. Under ideal (noise-free) Hamiltonian evolution, fidelity revivals at odd multiples of π reach 100%, confirming accurate state synthesis. Incorporating energy relaxation (T1) yields first-revival fidelity >90% and a drop to ∼80% by the fourth peak, while pure dephasing (T2) causes faster contrast loss (first >90%, fourth ∼70%). When both channels act simultaneously, the first revival falls to ∼85% and later peaks drop below 70%. Post-projection coherence under combined noise decays to 50% within 15 time units, versus >70% under T1-only. These results quantify how T2 degrades cluster-state preparation more rapidly than T1, and they highlight the critical need for targeted error-mitigation strategies in near-term MBQC implementations.
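The qualitative point that dephasing attacks coherence faster than relaxation alone can be checked on a single qubit: under a Lindblad equation with relaxation rate γ1 and pure-dephasing rate γφ, the off-diagonal element decays at γ1/2 + γφ. A minimal Euler-integration sketch with hypothetical rates (not the paper's 4-qubit simulation):

```python
import numpy as np

sm = np.array([[0, 1], [0, 0]], dtype=complex)   # sigma_minus: |1> -> |0>
sz = np.diag([1.0, -1.0]).astype(complex)

g1, gphi = 0.2, 0.5                              # hypothetical rates

def dissipator(L, rho):
    # Lindblad dissipator D[L](rho)
    return L @ rho @ L.conj().T - 0.5 * (L.conj().T @ L @ rho + rho @ L.conj().T @ L)

rho = 0.5 * np.ones((2, 2), dtype=complex)       # |+><+|: maximal coherence
dt, T = 1e-3, 1.0
for _ in range(int(T / dt)):
    rho = rho + dt * (g1 * dissipator(sm, rho) + 0.5 * gphi * dissipator(sz, rho))

coh = abs(rho[0, 1])
# Analytic prediction: |rho_01(t)| = 0.5 * exp(-(g1/2 + gphi) * t)
coh_exact = 0.5 * np.exp(-(g1 / 2 + gphi) * T)
```

Because γφ enters the coherence decay at full weight while γ1 enters at half weight, pure dephasing erodes the entangled-state contrast faster than relaxation, consistent with the T2-versus-T1 trend reported above.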
2025, New Journal of Physics
Noise is the greatest obstacle in quantum metrology: it limits the achievable precision and sensitivity. There are many techniques to mitigate the effect of noise, but this can never be done completely. One commonly proposed technique is to repeatedly apply quantum error correction. Unfortunately, the repetition frequency required to recover the Heisenberg limit is unachievable with existing quantum technologies. In this article we explore the discrete application of quantum error correction with current technological limitations in mind. We establish that quantum error correction can be beneficial and highlight the factors which need to be improved so one can reliably reach Heisenberg-limit precision.
2025
Magic, a key quantum resource beyond entanglement, remains poorly understood in terms of its structure and classification. In this paper, we demonstrate a striking connection between high-dimensional symmetric lattices and quantum magic states. By mapping vectors from the E 8 , BW 16 , and E 6 lattices into Hilbert space, we construct and classify stabiliser and maximal magic states for two-qubit, three-qubit and one-qutrit systems. In particular, this geometric approach allows us to construct, for the first time, closed-form expressions for the maximal magic states in the three-qubit and one-qutrit systems, and to conjecture their total counts. In the three-qubit case, we further classify the extremal magic states according to their entanglement structure. We also examine the distinctive behaviour of one-qutrit maximal magic states with respect to Clifford orbits. Our findings suggest that deep algebraic and geometric symmetries underlie the structure of extremal magic states.
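For readers unfamiliar with quantifying magic: a standard (and easily computable) monotone is the stabiliser Rényi-2 entropy of Leone, Oliviero, and Hamma, which vanishes exactly on stabiliser states. This is a generic illustration, not the paper's lattice construction:

```python
import numpy as np

# Single-qubit Pauli operators (phases omitted)
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0, -1.0]).astype(complex)

def stab_renyi_2(psi):
    """Stabiliser Renyi-2 entropy M2 for a pure single-qubit state:
    M2 = -log2( sum_P <psi|P|psi>^4 / 2 ); zero iff psi is a stabiliser state."""
    s = sum(np.real(psi.conj() @ P @ psi) ** 4 for P in (I2, X, Y, Z))
    return -np.log2(s / 2)

zero = np.array([1.0, 0.0], dtype=complex)                       # stabiliser state
t_state = np.array([1.0, np.exp(1j * np.pi / 4)]) / np.sqrt(2)   # magic T-state
```

Here M2(|0>) = 0 while the T-state attains M2 = log2(4/3) ≈ 0.415; the paper's "maximal magic" states are the multiqubit and qutrit analogues of such extremal points.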
2025, Physical Review Letters
We report on the application of a dynamic decoherence control pulse sequence on a nuclear quadrupole transition in Pr3+:Y2SiO5. Process tomography is used to analyse the effect of the pulse sequence. The pulse sequence was found to increase the decoherence time of the transition to over 30 seconds. Although the decoherence time was significantly increased, the population terms were found to rapidly decay on the application of the pulse sequence. The increase of this decay rate is attributed to inhomogeneity in the ensemble. Methods to circumvent this limit are discussed.
2025, Nature
To build a universal quantum computer from fragile physical qubits, effective implementation of quantum error correction (QEC) [1] is an essential requirement and a central challenge. Existing demonstrations of QEC are based on an active schedule of error syndrome measurements and adaptive recovery operations [2-7] that are hardware intensive and prone to introducing and propagating errors. In principle, QEC can be realized autonomously and continuously by tailoring dissipation within the quantum system [1,8-14], but so far it has remained challenging to achieve the specific form of dissipation to counter the most prominent errors in a physical platform. Here we encode a logical qubit in Schrödinger cat-like multiphoton states [15] of a superconducting cavity, and demonstrate a corrective dissipation process that stabilizes an error syndrome operator: the photon number parity. Implemented with continuous-wave control fields only, this passive protocol realizes autonomous correction against single-photon loss and boosts the coherence time of the multiphoton qubit by over a factor of two. Notably, QEC is realized in a modest hardware setup with neither high-fidelity readout nor fast digital feedback, in contrast to the technological sophistication required for prior QEC demonstrations. Compatible with additional phase-stabilization and fault-tolerant techniques [16-18], our experiment suggests reservoir engineering as a resource-efficient alternative or supplement to active QEC in future quantum computing architectures.
2025
Tasks involving black boxes appear frequently in quantum computer science. An example that has been deeply studied is quantum channel discrimination. In this work, we study the discrimination between two quantum unitary channels in the multiple-shot scenario. We challenge the theoretical results concerning the probability of correct discrimination with the results collected from experiments performed on the IBM Quantum processor Brisbane. Our analysis shows that neither too deep quantum circuits nor circuits that create too much entanglement are suitable for the discrimination task. We conclude that circuit architectures which minimize entanglement overhead while preserving discrimination power are significantly more resilient to hardware noise, provided their depth does not exceed a threshold value.
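The single-shot baseline behind such experiments is the Helstrom bound: two pure output states with overlap |⟨ψ|φ⟩| can be distinguished with probability at most (1 + √(1−|⟨ψ|φ⟩|²))/2. A minimal sketch with a hypothetical pair of unitaries (identity versus a Z-rotation on |+⟩), not the paper's multiple-shot protocol:

```python
import numpy as np

theta = np.pi / 2
plus = np.array([1.0, 1.0], dtype=complex) / np.sqrt(2)
U = np.eye(2, dtype=complex)                  # channel 1: identity
V = np.diag([1.0, np.exp(1j * theta)])        # channel 2: Rz(theta) up to phase

psi, phi = U @ plus, V @ plus                 # outputs on the probe state |+>
overlap = abs(psi.conj() @ phi)               # = cos(theta/2)
p_opt = 0.5 * (1 + np.sqrt(1 - overlap**2))   # Helstrom optimal success probability
```

Multiple uses of the channels (in parallel or with entangled probes) reduce the effective overlap further, which is exactly the trade-off against hardware noise the experiment probes.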
2025, arXiv preprint arXiv:0708.0250
We show that holography arises naturally in the context of spherically symmetric loop quantum gravity. The result is not dependent on detailed assumptions about the dynamics of the theory being considered. It ties strongly the amount of information contained in a region of space to the tight mathematical underpinnings of loop quantum geometry, at least in this particular context.
2025, 2011 IEEE International Symposium on Information Theory Proceedings
We face the following dilemma in designing low-density parity-check (LDPC) codes for quantum error correction. 1) The row weights of parity-check matrices should be large: the minimum distances are bounded above by the minimum row weights of parity-check matrices of constituent classical codes, and small minimum distance tends to result in poor decoding performance in the error-floor region. 2) The row weights of parity-check matrices should not be large: the sum-product decoding performance in the water-fall region degrades as the row weight increases. Recently, Kudekar et al. showed that spatially-coupled (SC) LDPC codes exhibit capacity-achieving performance for classical channels. SC LDPC codes have both large row weight and capacity-achieving error-floor and water-fall performance. In this paper, we design SC LDPC-CSS (Calderbank, Shor and Steane) codes for quantum error correction over depolarizing channels.
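Any CSS construction, spatially coupled or not, must satisfy the orthogonality condition Hx Hz^T = 0 (mod 2) so that X- and Z-type stabilisers commute. A minimal sanity check using the [7,4] Hamming parity-check matrix (which is self-orthogonal and yields the Steane code, a standard example rather than the paper's SC construction):

```python
import numpy as np

# Parity-check matrix of the [7,4] Hamming code
H = np.array([
    [0, 0, 0, 1, 1, 1, 1],
    [0, 1, 1, 0, 0, 1, 1],
    [1, 0, 1, 0, 1, 0, 1],
])

Hx, Hz = H, H                      # CSS code from one self-orthogonal classical code
commute = (Hx @ Hz.T) % 2          # must vanish mod 2 for stabilisers to commute
```

The row weight of H is 4 here; the dilemma in the abstract is precisely that raising this weight improves the distance bound while degrading sum-product decoding, which spatial coupling is designed to reconcile.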
2025, Physical Review A
Composite pulse sequences, which produce arbitrary predefined rotations of a qubit on the Bloch sphere, are presented. The composite sequences contain up to 17 pulses and can compensate up to 8 orders of experimental errors in the pulse amplitude and the pulse duration. Composite sequences for three basic quantum gates, X (NOT), Hadamard and arbitrary rotation, are derived. Three classes of composite sequences are presented: one symmetric and two asymmetric. They contain as their lowest members two well-known composite sequences: the three-pulse symmetric SCROFULOUS pulse and the four-pulse asymmetric BB1 pulse, which compensate first- and second-order errors, respectively. The shorter sequences are derived analytically, and the longer ones numerically (instead of by nesting and concatenation, as usually done hitherto). Consequently, the composite sequences derived here match or outperform the existing ones in terms of either speed or accuracy, or both. For example, we derive a second-order composite sequence, which is faster (by about 13%) than the famous BB1 sequence. For higher-order sequences, the speedup becomes much more pronounced. This is important for quantum information processing as the sequences derived here provide more options for finding the sweet spot between ultrahigh fidelity and high speed.
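The benchmark sequence named here, BB1, is easy to verify numerically: for a target rotation θ it is (θ)_0 (π)_φ (2π)_{3φ} (π)_φ with φ = arccos(−θ/4π), and it suppresses pulse-amplitude errors to second order. A minimal check for the X gate (θ = π), a textbook illustration rather than the paper's new sequences:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])

def pulse(area, phase, err=0.0):
    """Rotation by `area` about an xy-plane axis at angle `phase`,
    with fractional amplitude error `err`."""
    a = area * (1 + err) / 2
    axis = np.cos(phase) * sx + np.sin(phase) * sy
    return np.cos(a) * np.eye(2) - 1j * np.sin(a) * axis

def bb1_x(err):
    """Wimperis BB1 composite X gate: (pi)_0 (pi)_phi (2pi)_{3phi} (pi)_phi."""
    phi = np.arccos(-0.25)        # arccos(-theta / 4pi) with theta = pi
    U = np.eye(2, dtype=complex)
    for area, phase in [(np.pi, 0), (np.pi, phi), (2 * np.pi, 3 * phi), (np.pi, phi)]:
        U = pulse(area, phase, err) @ U
    return U

def fidelity(U, V):
    return abs(np.trace(U.conj().T @ V) / 2) ** 2

err = 0.05                         # 5% amplitude error
f_naive = fidelity(sx, pulse(np.pi, 0, err))
f_bb1 = fidelity(sx, bb1_x(err))
```

At 5% amplitude error the bare pulse loses roughly 0.6% fidelity while BB1 stays above 0.9999; the sequences in the paper aim to beat this benchmark in speed and/or error order.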
2025, Garrett Nicholas
G-Theory's primary contributions to science... (You may wish to copy and paste the content of the paper into your preferred AI and request an evaluation. Cheers!) This paper proposes a radical new Unified Field Theory...
2025
We investigate the effects of prime-indexed pulse timing on quantum coherence in nitrogen-vacancy (NV) centers within diamond lattices. Using a Lagrangian formulation motivated by the Universal Model Framework (UMF), we simulate decoherence dynamics under prime pulse sequences and compare them to standard exponential decay models. Our analysis encompasses coherence behavior across varying dephasing rates, pulse counts, initial quantum states, and temporal scales. Results indicate that prime-gated pulse sequences preserve structured coherence patterns and induce fractal modulation characteristics that exhibit resilience to environmental noise and temporal rescaling effects.
2025, Physical review
We investigate the most general mechanisms that lead to perfect synchronization of the quantum states of all subsystems of an open quantum system starting from an arbitrary initial state. We provide a necessary and sufficient condition for such "quantum-state synchronization", prove tight lower bounds on the dimension of the ancilla's Hilbert space in two main classes of quantum-state synchronizers, and give an analytical solution for their construction. The functioning of the found quantum-state synchronizer of two qubits is demonstrated experimentally on an IBM quantum computer and we show that the remaining asynchronicity is a sensitive measure of the quantum computer's imperfection.
2025, arXiv (Cornell University)
We investigate the most general mechanisms that lead to perfect synchronization of the quantum states of all subsystems of an open quantum system starting from an arbitrary initial state. We provide a necessary and sufficient condition for such "quantum-state synchronization", prove tight lower bounds on the dimension of the ancilla's Hilbert space in two main classes of quantum-state synchronizers, and give an analytical solution for their construction. The functioning of the found quantum-state synchronizer of two qubits is demonstrated experimentally on an IBM quantum computer and we show that the remaining asynchronicity is a sensitive measure of the quantum computer's imperfection.
2025
This paper covers the basic aspects of the mathematical formalism of quantum mechanics in general and quantum computing in particular, underscoring the differences between quantum computing and classical computing. This paper culminates in a discussion of Shor's algorithm, a quantum computational algorithm for factoring composite numbers that runs in polynomial time, making it faster than any known classical algorithm for factorization. This paper serves as a survey of "Polynomial-Time Algorithms for Prime Factorization and Discrete Logarithms on a Quantum Computer" by Peter Shor [3].
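The classical skeleton of Shor's algorithm fits in a few lines: find the order r of a modulo N (the only step the quantum computer accelerates, via phase estimation), then read off factors as gcd(a^{r/2} ± 1, N). A small sketch with brute-force order finding in place of the quantum subroutine:

```python
from math import gcd

def find_period(a, N):
    """Brute-force order of a modulo N; this is the step phase estimation
    performs efficiently on a quantum computer."""
    r, x = 1, a % N
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

def shor_factor(N, a):
    g = gcd(a, N)
    if g != 1:
        return g, N // g               # lucky guess: a shares a factor with N
    r = find_period(a, N)
    if r % 2 == 1 or pow(a, r // 2, N) == N - 1:
        return None                    # unlucky base: retry with another a
    y = pow(a, r // 2, N)
    return gcd(y - 1, N), gcd(y + 1, N)
```

For example, shor_factor(15, 7) finds the period r = 4, so y = 7² mod 15 = 4 and gcd(3, 15), gcd(5, 15) recover the factors 3 and 5.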
2025
In certain media, light has been observed with group velocities faster than the speed of light. The recent OPERA report of superluminal 17 GeV neutrinos may describe a similar phenomenon.
2025
The Ward identities for amplitudes at the tree level are derived from symmetries of the corresponding classical dynamical systems. The results are applied to some 2 → n amplitudes.
2025
Grover’s algorithm provides a quadratic speedup for unstructured search problems, yet its standard implementation becomes resource-intensive when mapped onto real quantum hardware. In this work, we demonstrate that evolved quantum circuits, discovered via grammatical evolution (GE), can outperform the canonical Grover design on noisy intermediate-scale quantum (NISQ) devices.
Using a symbolic BNF grammar and hardware-aware evaluation, we evolved Grover-like circuits for all 8 basis states of a 3-qubit system. On IBM’s ibm_brisbane backend, the best evolved circuit achieved 97.9% fidelity (|000⟩) and the lowest still reached 89.1% fidelity (|011⟩), while standard Grover circuits ranged from only 44.2% to 47.6% under the same conditions.
Furthermore, evolved circuits achieved up to 93.3% reduction in depth and 92.7% reduction in gate count compared to standard Grover implementations. These results highlight the potential of symbolic AI techniques to generate hardware-efficient quantum programs that outperform hand-crafted designs in practical NISQ settings.
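For reference, the ideal (noiseless) baseline the evolved circuits are compared against is easy to reproduce with a statevector simulation: on 3 qubits (N = 8), two Grover iterations bring the success probability to about 94.5%, so the hardware numbers above reflect noise, not the algorithm. A minimal sketch marking |011⟩:

```python
import numpy as np

N, marked = 8, 0b011                        # 3-qubit search space, target |011>

oracle = np.eye(N)
oracle[marked, marked] = -1                 # phase-flip the marked state

s = np.full(N, 1 / np.sqrt(N))              # uniform superposition
diffusion = 2 * np.outer(s, s) - np.eye(N)  # inversion about the mean

psi = s.copy()
for _ in range(2):                          # optimal count: round(pi/4 * sqrt(8)) = 2
    psi = diffusion @ (oracle @ psi)

p_success = psi[marked] ** 2                # ideal success probability, ~0.945
```

The gap between this ideal ~94.5% and the 44-48% observed for standard Grover circuits on hardware is exactly the noise overhead the evolved, shallower circuits reduce.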
2025, Physical Review B
Static dielectric screening in undoped semiconductors at zero temperature is formulated within the framework of the Thomas-Fermi-Dirac (TFD) model of a homogeneous and isotropic solid. At each point in the solid the valence electrons are treated as a degenerate gas in statistical equilibrium in the space-varying self-consistent potential of a point-charge impurity. The theory involves the electrostatic, kinetic, and exchange energies of the electrons in the development of a nonlinear TFD equation for the screened potential. The Thomas-Fermi (TF) theory of dielectric screening is recovered when exchange effects are neglected. Closed analytical expressions for the wave-vector-dependent dielectric function and the spatial dielectric function are obtained by linearization of the TFD equation, and the range of validity of the approximation is investigated. Numerical solutions of the nonlinear TFD equation for point-charge screening show an increasing departure from linear behavior with impurity charge. These properties of the nonlinear TFD theory are already manifest in the TF scheme. A comparison between TFD- and TF-model dielectric functions shows important differences due to exchange. In the linear screening regime, it is found that impurity potentials are more effectively reduced when exchange effects are included. As a result, the TF theory compares more favorably with accurate band-structure calculations of the dielectric functions for silicon and germanium. It is expected that improvement in the TFD dielectric functions depends on extending the treatment to include correlation and/or the quantum correction. In the nonlinear regime, attractive potentials are more effectively screened in the TFD theory, while the opposite is not generally true for repulsive potentials. Finally, it is seen that donor-acceptor asymmetry is stronger in the presence of exchange effects.
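For orientation, the linearized TF limit that the abstract says is recovered without exchange takes the familiar Yukawa form (a generic textbook result, Gaussian units, with $q_{\mathrm{TF}}$ the Thomas-Fermi screening wave vector; the TFD corrections modify this through the exchange term):

```latex
% Linearised Thomas-Fermi screening of a point charge Ze:
\nabla^{2} V(\mathbf{r}) = q_{\mathrm{TF}}^{2}\, V(\mathbf{r})
\quad\Longrightarrow\quad
V(r) = \frac{Ze^{2}}{r}\, e^{-q_{\mathrm{TF}} r},
\qquad
\varepsilon(q) = 1 + \frac{q_{\mathrm{TF}}^{2}}{q^{2}} .
```

The nonlinear TFD equation studied in the paper replaces this linear response by a self-consistent equation for $V$, which is why the departure from Yukawa behavior grows with impurity charge.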
2025, Physical Review B
The quantum corrections to the partition function of nonlinear systems with a nonlocal kinetic energy are calculated by extending a variational approach based on the path integral and developed in a previous paper, which allows one to... more
The quantum corrections to the partition function of nonlinear systems with a nonlocal kinetic energy are calculated by extending a variational approach based on the path integral and developed in a previous paper, which allows one to take into account, within the quantum scheme, the quadratic part of the Hamiltonian. This extension can be useful to study the role of the out-of-plane fluctuations which cause deviations from the sine-Gordon model for some magnetic chains.
2025, Physical Review Letters
The path-integral method is used for determination of the quantum corrections to the free energy of nonlinear systems. All quantum effects of the harmonic part of the potential are considered and a variational principle is used to account... more
The path-integral method is used for the determination of the quantum corrections to the free energy of nonlinear systems. All quantum effects of the harmonic part of the potential are considered, and a variational principle is used to account for the quantum corrections due to the anharmonic part. Correct renormalized frequencies are obtained at any temperature and an effective potential to be inserted in the configurational integral is found. A new general expression for the partition function at any temperature in the low-coupling limit is obtained.
2025, Open Science Articles (OSAs)
This study comprehensively examines the profound technological and methodological synergies existing between observations made via interferometric detectors such as LIGO and Virgo, which have revolutionized the field of gravitational wave... more
This study comprehensively examines the profound technological and methodological synergies between gravitational wave (GW) observations made with interferometric detectors such as LIGO and Virgo, which have revolutionized GW astrophysics, and the rapidly advancing quantum computing (QC) technologies. As both disciplines aim to perform measurements pushing the limits of precision, the effective control and mitigation of environmental and quantum-originated noise pose a critical challenge. In this context, the quantum squeezing technique stands out as a fundamental tool in both domains, employed to reduce measurement uncertainty below the quantum limit in GW detectors and to enhance the sensitivity of quantum bits (qubits) in QCs. Carlton Caves' pioneering 1980 paper [1] first theoretically established the inevitable presence of quantum mechanical radiation pressure fluctuations in laser interferometers and their impact on measurement sensitivity, thereby laying the groundwork for integrating quantum optics principles into high-precision metrology. This theoretical framework has also provided the intellectual basis for quantum noise reduction strategies developed to enhance the sensitivity of GW detectors. Similarly, Peter Shor's development of quantum error correction (QEC) codes in 1996 [3] represented a landmark, offering a solution to decoherence and operational errors, one of the biggest obstacles for QCs, and paving the way for scalable and fault-tolerant quantum computation. The present work meticulously compares the parallel technological advancements and conceptual intersections in these two pioneering fields, highlighting a rich interdisciplinary potential that can yield mutual benefits and inspire innovative solutions.
In this vein, the study analyses the historical evolution and current technological challenges of both GW observations and QCs, while also envisioning potential future areas of interaction and collaboration (such as advanced sensors, novel signal processing algorithms, and the application of quantum information theory to physical systems), thereby aiming to establish a solid foundation for a deeper and more fruitful integration of quantum technologies in these two distinct yet complementary domains.
2025, Open Science Articles (OSAs)
Quantum computers promise to revolutionize science and technology by offering the potential to solve complex problems intractable with classical approaches. However, realizing this potential hinges on effectively managing the noise and... more
Quantum computers promise to revolutionize science and technology by offering the potential to solve complex problems intractable with classical approaches. However, realizing this potential hinges on effectively managing the noise and errors inherent in quantum systems, which threaten computational accuracy. This work has explored a broad spectrum, from the fundamentals of quantum computation to strategies for enhancing the performance of devices in the Noisy Intermediate-Scale Quantum (NISQ) era, with a particular focus on the critical role of quantum error correction (QEC) codes and the decoder algorithms developed for them. While various methods exist for characterizing and manipulating quantum states, the scalability of these methods becomes a significant issue as the number of qubits increases. The measurement process itself also requires careful planning as it perturbs the quantum state. QEC codes, especially topological codes like surface codes, developed to overcome these challenges, form the foundation of fault-tolerant quantum computation. The success of a QEC code largely depends on the performance of its decoder algorithm, which analyses error syndromes to detect and correct the most probable errors. Alongside classical approaches like Minimum-Weight Perfect Matching (MWPM) and Union-Find, newer and potentially more powerful methods such as Maximum-Likelihood Decoders (MLD) and Neural Network-based Decoders (NNbD) are active areas of research. A prominent aspect of this study is the demonstration that, even with limited classical computing resources, the theoretical scalability of quantum error correction mechanisms can be pushed to remarkable limits using sophisticated simulation techniques and algorithmic ingenuity. Notably, striking results such as the simulation and verification of surface code error correction algorithms for systems of 25 million theoretical qubits have been achieved on a personal computer.
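As a toy illustration of what a decoder does (this example is not from the work summarized above; it uses the 3-qubit bit-flip repetition code, the simplest relative of the surface code):

```python
# Toy illustration: syndrome decoding for the 3-qubit bit-flip
# repetition code. Stabilizers Z1Z2 and Z2Z3 give a 2-bit syndrome;
# each single bit-flip produces a unique syndrome, so a lookup table
# is already a maximum-likelihood decoder for this tiny code.
SYNDROME_TO_CORRECTION = {
    (0, 0): None,  # no error detected
    (1, 0): 0,     # flip on qubit 0
    (1, 1): 1,     # flip on qubit 1
    (0, 1): 2,     # flip on qubit 2
}

def measure_syndrome(bits):
    """Parity checks Z1Z2 and Z2Z3 on a classical bit string."""
    return (bits[0] ^ bits[1], bits[1] ^ bits[2])

def decode(bits):
    """Correct at most one bit flip and return the repaired codeword."""
    correction = SYNDROME_TO_CORRECTION[measure_syndrome(bits)]
    repaired = list(bits)
    if correction is not None:
        repaired[correction] ^= 1
    return tuple(repaired)

# Any single bit-flip on the logical codeword (1, 1, 1) is repaired.
print(decode((1, 0, 1)))  # -> (1, 1, 1)
```

Surface-code decoders such as MWPM and Union-Find generalize this idea to syndromes whose error assignment is no longer unique and must be inferred as a most-likely matching.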
Furthermore, the graphical visualization of error correction solutions for systems exceeding 100,000 theoretical qubits underscores the analysability of such complex systems. These findings indicate that error correction principles are theoretically applicable to very large systems and that classical simulations continue to be a valuable tool in this exploratory journey. In the future, key objectives will include the development of more efficient and scalable decoders, the discovery of new QEC codes, the creation of realistic noise models, advancements in hardware-software co-design, and the execution of complex algorithms on logical qubits. Quantum error correction will continue to play a central role on the path to fault-tolerant quantum computation, and theoretical and simulational work in this area will offer significant contributions to the realization of practical quantum computers. Large-scale simulation achievements driven by the creativity of individual researchers, as highlighted here, bolster hopes for the future of the field.
2025, Open Science Articles (OSAs)
Quantum computers hold the potential to solve complex problems intractable for classical supercomputers. However, the inherent susceptibility of quantum systems to decoherence and environmental noise poses the most significant barrier to... more
Quantum computers hold the potential to solve complex problems intractable for classical supercomputers. However, the inherent susceptibility of quantum systems to decoherence and environmental noise poses the most significant barrier to realizing this potential. Quantum Error Correction (QEC) codes aim to preserve the integrity of quantum information by actively detecting and correcting the effects of this noise, thereby enabling fault-tolerant, scalable quantum computation. This paper begins with the fundamental concepts in QEC, discussing early seminal approaches such as stabilizer codes, notably the Shor and Steane codes. It then focuses on topological error correction codes, which are currently an intensive area of research, particularly surface codes and color codes. The advantages of these codes, such as high error thresholds and local interaction requirements, are discussed alongside their drawbacks, including physical qubit overhead and challenges in implementing logical gates. The paper also examines alternative approaches like Low-Density Parity-Check (LDPC) codes and their potential benefits. Fundamental challenges in implementing QEC, the practical implications of the threshold theorem, the importance of noise modeling (including non-Markovian and correlated errors), and the role of characterization techniques like quantum process tomography and randomized benchmarking are highlighted. Finally, current research trends such as dynamic encoding, error suppression, and hardware-software co-design are evaluated, along with potential future directions and open problems for QEC strategies. This work aims to underscore the central role of QEC in the future of quantum computing and the importance of continuous progress in this field.
2025, Open Science Articles (OSAs)
Nanoscale quantum computers (nQCs) represent a revolutionary research frontier aiming to transcend current macroscopic quantum computing approaches by integrating the fundamental principles of quantum mechanics at the most elemental level... more
Nanoscale quantum computers (nQCs) represent a revolutionary research frontier aiming to transcend current macroscopic quantum computing approaches by integrating the fundamental principles of quantum mechanics at the most elemental level of hardware. This vision seeks to create systems not only at nanometre dimensions (1-100 nm) but also where quantum effects are dominant, operating on 100% quantum principles. Potential advantages of nQCs include enhanced coherence times, significantly reduced energy consumption, higher qubit density, and improved noise resilience. Superconductivity plays a central role in achieving these goals; its various forms, from conventional BCS theory to high-temperature superconductors and the quest for room-temperature superconductivity, underpin qubits (e.g., transmons, fluxoniums) and, crucially, p-wave symmetric superconductors capable of hosting exotic, topologically protected quasiparticles like Majorana fermions. Nanostructures such as carbon nanotubes, graphene, and other two-dimensional materials are promising building blocks for qubits, interconnects, and alternatives to Josephson junctions, like quantum dot junctions (QDJs). Manufacturing technologies necessitate a transition from microelectromechanical systems (MEMS) to nano-electromechanical systems (NEMS) and the development of nanofabrication techniques with atomic precision. Topological insulators and superconductors are novel classes of materials characterized by topological properties, such as the Z2 invariant, offering inherent protection against decoherence. However, creating stable and scalable quantum systems at the nanoscale presents significant challenges. These include quasiparticle poisoning caused by environmental radiation (cosmic rays, natural radioactivity), decoherence mechanisms limiting coherence times, and the integration of complex error correction codes (e.g., the surface code) for fault-tolerant quantum computation.
Alternative computational paradigms like quantum annealing and adiabatic quantum computation also enrich research in this domain. In the future, nanoscale quantum computers are expected to spearhead groundbreaking advancements in numerous fields, from materials science and drug discovery to optimization problems and fundamental physics research. This signifies not merely a miniaturization of existing technologies but potentially the dawn of a new computational era where quantum phenomena are harnessed in their purest form.
2025, arXiv: Mathematical Physics
In this paper we show the relation between $sp(4,\mathbb{R})$, the Lie algebra of the symplectic group, and the elements of the symplectic group $Sp(4,\mathbb{R})$. We use this relation to provide a classical analog of the squeeze... more
In this paper we show the relation between $sp(4,\mathbb{R})$, the Lie algebra of the symplectic group, and the elements of the symplectic group $Sp(4,\mathbb{R})$. We use this relation to provide a classical analog of the squeeze operator $\widehat{S}(\zeta)$. This classical squeeze matrix shares some similarities with the correlation matrix ${\bf V}^{(2)}$ and its amount of squeezing is half of that in the correlation matrix.
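As a minimal illustration of the construction the abstract describes (reduced here to one mode, i.e. $Sp(2,\mathbb{R})$ rather than $Sp(4,\mathbb{R})$, and to a real squeezing parameter $r$), exponentiating a Lie-algebra element yields the classical squeeze matrix:

```latex
S(r) \;=\; \exp\!\left[\, r \begin{pmatrix} -1 & 0 \\ 0 & 1 \end{pmatrix} \right]
     \;=\; \begin{pmatrix} e^{-r} & 0 \\ 0 & e^{r} \end{pmatrix},
\qquad
S(r)^{\mathsf{T}} J\, S(r) = J,
\quad
J = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}
```

The defining symplectic condition $S^{\mathsf{T}} J S = J$ is what makes $S(r)$ a classical (phase-space) analog of the unitary squeeze operator.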
2025
Quantum error correction (QEC) is fundamental to the advancement of practical quantum computing. The paper by Google Quantum AI (2023) gives a significant experimental breakthrough, demonstrating that increasing the code distance in a... more
Quantum error correction (QEC) is fundamental to the advancement of practical quantum computing. The paper by Google Quantum AI (2023) reports a significant experimental breakthrough, demonstrating that increasing the code distance in a surface code leads to improved logical qubit performance. This response critically evaluates the methodology, results, and implications of the research, highlighting its importance in progressing toward scalable, fault-tolerant quantum computation.

Introduction

Quantum computing holds transformative potential across fields such as cryptography, chemistry, and optimization (Feynman, 1982; Shor, 1999; Lloyd, 1996). However, the high error rates of quantum gates and decoherence pose formidable challenges. Quantum error correction (QEC), particularly surface code implementations, offers a promising path forward (Gottesman, 1997; Fowler et al., 2012). The study by Google Quantum AI (2023) addresses a crucial milestone: whether increasing the number of qubits in a surface code leads to a net decrease in logical error rates, a necessary condition for fault-tolerant quantum computing.

Summary of Key Findings

Google's 72-qubit superconducting platform implemented both distance-3 and distance-5 surface codes. The distance-5 code demonstrated a modest yet statistically significant reduction in logical error per cycle (2.914% ± 0.016%) compared to the average of four distance-3 codes (3.028% ± 0.023%) (Google Quantum AI, 2023). This experiment marks the first instance where increasing code distance correlates with improved logical qubit performance on a real device, confirming theoretical predictions under certain noise thresholds (Dennis et al., 2002).

Critical Evaluation

Experimental Design

The experimental layout is robust, featuring a well-calibrated Sycamore chip and comprehensive benchmarking. The comparison between multiple distance-3 layouts and one distance-5 layout mitigates the influence of spatial inhomogeneity in hardware performance.
Decoder and Error Modeling

The use of both belief-matching and tensor network decoding enriches the study. In particular, the tensor network decoder approximates maximum-likelihood decoding, yielding better error suppression insights. The adoption of error hypergraphs and refined noise modeling reflects a sophisticated approach to simulating real-device behavior (Chubb & Flammia, 2021).

Limitations and Challenges

Despite improvements, the logical error rate reduction from increasing code distance was marginal (about 4%). Notably, one of the distance-3 codes individually outperformed the distance-5 code, suggesting lingering sensitivity to spatial variability or leakage accumulation (McEwen et al., 2021). Moreover, the experimental regime appears to reside near the error threshold; performance gains might not generalize without further error suppression.

Implications and Future Work
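As a quick sanity check on the figures quoted above, the relative reduction in logical error per cycle implied by the reported numbers can be computed directly:

```python
# Back-of-the-envelope check using the logical error rates quoted above:
# relative reduction going from the averaged distance-3 codes to the
# distance-5 code (central values only; uncertainties ignored).
d3_error = 3.028e-2   # average logical error per cycle, distance-3
d5_error = 2.914e-2   # logical error per cycle, distance-5

relative_reduction = (d3_error - d5_error) / d3_error
print(f"{relative_reduction:.1%}")  # -> 3.8%
```

This matches the "about 4%" marginal improvement noted in the limitations.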
2025, arXiv (Cornell University)
We discuss the performance of the Search and Fourier Transform algorithms on a hybrid computer consisting of classical and quantum processors working together. We show that this semi-quantum computer would be an improvement over a pure... more
We discuss the performance of the Search and Fourier Transform algorithms on a hybrid computer consisting of classical and quantum processors working together. We show that this semi-quantum computer would be an improvement over a purely classical architecture, no matter how few qubits are available, and therefore suggests a more readily implementable technology than a pure quantum computer with an arbitrary number of qubits.
2025, Academia Quantum
A gate sequence of single-qubit transformations may be condensed into a single microwave pulse that maps a qubit from an initialized state directly into the desired state of the composite transformation. Here, machine learning is used to... more
A gate sequence of single-qubit transformations may be condensed into a single microwave pulse that maps a qubit from an initialized state directly into the desired state of the composite transformation. Here, machine learning is used to learn the parameterized values for a single driving pulse associated with a transformation of three sequential gate operations on a qubit. This implies that future quantum circuits may contain roughly a third of the number of single-qubit operations performed, greatly reducing the problems of noise and decoherence. There is a potential for even greater condensation and efficiency using the methods of quantum machine learning.
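A minimal sketch of the underlying observation (an assumed illustration, not the paper's machine-learning pipeline): any sequence of single-qubit gates multiplies out to a single SU(2) rotation, which is why three gates can in principle be replaced by one composite pulse:

```python
import numpy as np

# Standard single-qubit rotation gates about the x and z axes.
def rx(theta):
    return np.array([[np.cos(theta / 2), -1j * np.sin(theta / 2)],
                     [-1j * np.sin(theta / 2), np.cos(theta / 2)]])

def rz(phi):
    return np.array([[np.exp(-1j * phi / 2), 0],
                     [0, np.exp(1j * phi / 2)]])

# Three sequential gates (angles chosen arbitrarily for illustration)...
U = rz(0.7) @ rx(1.1) @ rz(-0.4)

# ...equal a single rotation by 2*arccos(|tr(U)|/2) about some axis
# (up to global phase); a learned pulse would drive the qubit through
# this composite rotation directly, instead of three separate gates.
net_angle = 2 * np.arccos(np.clip(abs(np.trace(U)) / 2, 0.0, 1.0))
print(round(float(net_angle), 3))
```

The learning task described above amounts to finding pulse parameters that realize this composite unitary on hardware.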
2025, European Physical Journal D
Two possible applications of random decoupling are discussed. Whereas so far decoupling methods have been considered merely for quantum memories, here it is demonstrated that random decoupling is also a convenient tool for stabilizing... more
Two possible applications of random decoupling are discussed. Whereas so far decoupling methods have been considered merely for quantum memories, here it is demonstrated that random decoupling is also a convenient tool for stabilizing quantum algorithms. Furthermore, a decoupling scheme is presented which involves a random decoupling method compatible with detected-jump error correcting quantum codes. With this combined error correcting strategy it is possible to stabilize quantum information against both spontaneous decay and static imperfections of a qubit-based quantum information processor in an efficient way.
2025, Physical Review A
The approach to equilibrium of quantum mechanical systems is a topic as old as quantum mechanics itself, but has recently seen a surge of interest due to applications in quantum technologies, including, but not limited to, quantum... more
The approach to equilibrium of quantum mechanical systems is a topic as old as quantum mechanics itself, but has recently seen a surge of interest due to applications in quantum technologies, including, but not limited to, quantum computation and sensing. The mechanisms by which a quantum system approaches its long-time, limiting stationary state are fascinating and, sometimes, quite different from their classical counterparts. In this respect, quantum networks represent mesoscopic quantum systems of interest. In such a case, the graph encodes the elementary quantum systems (say qubits) at its vertices, while the links define the interactions between them. We study here the relaxation to equilibrium for a fully connected quantum network with CNOT gates representing the interaction between the constituent qubits. We give a number of results for the equilibration in these systems, including analytic estimates. The results are checked using numerical methods for systems with up to 15-16 qubits. We emphasize how the size of the network controls the convergence.
2025, Nuclear Physics B
Compactification of M theory in the presence of G-fluxes yields N = 2 five-dimensional gauged supergravity with a potential that lifts all supersymmetric vacua. We derive the effective superpotential directly from the Kaluza-Klein... more
Compactification of M theory in the presence of G-fluxes yields N = 2 five-dimensional gauged supergravity with a potential that lifts all supersymmetric vacua. We derive the effective superpotential directly from the Kaluza-Klein reduction of the eleven-dimensional action on a Calabi-Yau three-fold and compare it with the superpotential obtained by means of calibrations. We discuss an explicit domain wall solution, which represents fivebranes wrapped over holomorphic cycles. This solution has a "running volume" and we comment on the possibility that quantum corrections provide a lower bound allowing for an AdS 5 vacuum of the 5-dimensional supergravity.
2025, Journal of High Energy Physics
Typical de Sitter (dS) vacua of gauged supergravity correspond to saddle points of the potential and often the unstable mode runs into a singularity. We explore the possibility to obtain dS points where the unstable mode goes on both... more
Typical de Sitter (dS) vacua of gauged supergravity correspond to saddle points of the potential and often the unstable mode runs into a singularity. We explore the possibility to obtain dS points where the unstable mode goes on both sides into a supersymmetric smooth vacuum. Within N = 2 gauged supergravity coupled to the universal hypermultiplet, we have found a potential which has two supersymmetric minima (one of them can be flat) and these are connected by a de Sitter saddle point. In order to obtain this potential by an Abelian gauging, it was important to include the recently proposed quantum corrections to the universal hypermultiplet sector. Our results apply to four as well as five dimensional gauged supergravity theories.
2025, Nuclear Physics B - Proceedings Supplements
In this talk I address some aspects of the recent developments for N = 2 black holes in 4 dimensions. I restrict myself to axion-free solutions that can classically be related to intersections of isotropic D- or M-branes. After reviewing... more
In this talk I address some aspects of the recent developments for N = 2 black holes in 4 dimensions. I restrict myself to axion-free solutions that can classically be related to intersections of isotropic D- or M-branes. After reviewing some classical properties I include corrections coming from a general cubic prepotential. On the heterotic side these are quantum corrections for these black hole solutions. Finally, I discuss a microscopic interpretation of the entropy for the extremal as well as near-extremal black hole.
2025, Nuclear Physics B
We consider axion-free quantum corrected black hole solutions in the context of the heterotic S-T model with half the N = 2, D = 4 supersymmetries unbroken. We express the perturbatively corrected entropy in terms of the electric and... more
We consider axion-free quantum corrected black hole solutions in the context of the heterotic S-T model with half the N = 2, D = 4 supersymmetries unbroken. We express the perturbatively corrected entropy in terms of the electric and magnetic charges in such a way that target-space duality invariance is manifest. We also discuss the microscopic origin of particular quantum black hole configurations. We propose a microscopic interpretation in terms of a gas of closed membranes for the instanton corrections to the entropy.
2025, Nuclear Physics B
We consider the gauge dyonic string solution of the K3 compactified heterotic string theory in a four dimensional cosmological context. Since for this solution Green-Schwarz as well as Chern-Simons corrections have been taken into account... more
We consider the gauge dyonic string solution of the K3-compactified heterotic string theory in a four-dimensional cosmological context. Since Green-Schwarz as well as Chern-Simons corrections have been taken into account for this solution, it contains both world-sheet and string-loop corrections. The cosmological picture is obtained by rotating the world volume of the gauge dyonic string into two spacelike dimensions and compactifying those dimensions on a two-torus. We compare the result with gauge-neutral extreme and non-extreme cosmologies and find that the non-trivial Yang-Mills background leads to a solution without any singularities, whereas for trivial Yang-Mills backgrounds some of the fields always become singular at the big bang.
2025, Physical Review A
The implementation of quantum gates with fidelities that exceed the threshold for reliable quantum computing requires robust gates whose performance is not limited by the precision of the available control fields. The performance of these... more
The implementation of quantum gates with fidelities that exceed the threshold for reliable quantum computing requires robust gates whose performance is not limited by the precision of the available control fields. The performance of these gates also should not be affected by the noisy environment of the quantum register. Here we use randomized benchmarking of quantum gate operations to compare the performance of different families of gates that compensate errors in the control field amplitudes and decouple the system from the environmental noise. We obtain average fidelities of up to 99.8%, which exceeds the threshold value for some quantum error correction schemes as well as the expected limit from the dephasing induced by the environment.
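For context, the standard randomized-benchmarking relation linking the fitted decay parameter to average gate fidelity can be evaluated in a few lines (the decay value below is hypothetical, chosen only to reproduce the 99.8% figure quoted above):

```python
# Standard randomized-benchmarking relation (not this paper's data):
# for a d-dimensional system, the average error per gate is
#   r = (1 - p) * (d - 1) / d,
# where p is the depolarizing decay parameter fitted from the
# sequence-fidelity decay F(m) = A * p**m + B.
d = 2          # single qubit
p = 0.996      # hypothetical fitted decay parameter

r = (1 - p) * (d - 1) / d
fidelity = 1 - r
print(round(fidelity, 4))  # -> 0.998
```

With these (assumed) numbers, a decay parameter of 0.996 corresponds to the 99.8% average fidelity reported in the abstract.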
2025
The science fiction writer Arthur C Clarke famously said that “Any sufficiently advanced technology is indistinguishable from magic.” Happily for Clarke fans, the magic described in this essay, whilst so far short on stirring-up the... more
2025, Natural Computing
The gate-based model is one of the leading quantum computing paradigms for representing quantum circuits. Within this paradigm, a quantum algorithm is expressed in terms of a set of quantum gates that are executed on the quantum hardware... more
The gate-based model is one of the leading quantum computing paradigms for representing quantum circuits. Within this paradigm, a quantum algorithm is expressed in terms of a set of quantum gates that are executed on the quantum hardware over time, subject to a number of constraints whose satisfaction must be guaranteed before running the circuit, to allow for feasible execution. The need to guarantee this feasibility condition gives rise to the Quantum Circuit Compilation Problem (QCCP). The QCCP has been demonstrated to be NP-complete and can be considered a Planning and Scheduling problem. In this paper, we consider quantum compilation instances deriving from the general Quantum Approximate Optimization Algorithm (QAOA), applied to the MaxCut problem, devised to be executed on Noisy Intermediate-Scale Quantum (NISQ) hardware architectures. More specifically, in addition to the basic QCCP version, we also tackle other variants of the same problem such as the QCCP-X (...
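As a minimal sketch of the optimization target behind these instances (using a toy graph of our own choosing, not one of the paper's benchmark instances), the classical MaxCut cost that QAOA maximizes is:

```python
# Toy MaxCut instance: a 4-cycle with one chord. A cut assigns each
# vertex to side 0 or 1; its value is the number of crossing edges.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]

def cut_value(assignment):
    """Number of edges whose endpoints land on opposite sides."""
    return sum(1 for u, v in edges if assignment[u] != assignment[v])

# Brute-force the optimum over all 2**4 assignments; QAOA circuits
# (whose compilation the paper studies) search this landscape instead.
best = max(range(2 ** 4),
           key=lambda m: cut_value([(m >> i) & 1 for i in range(4)]))
assignment = [(best >> i) & 1 for i in range(4)]
print(cut_value(assignment))  # -> 4
```

Each such instance induces a circuit of parameterized two-qubit interactions, one per edge, whose placement on hardware is exactly the QCCP scheduling problem described above.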