Tiago Novello | Instituto Nacional de Matemática Pura e Aplicada (IMPA)
Papers by Tiago Novello
arXiv (Cornell University), Feb 3, 2024
Ensaios Matemáticos, 2021
In this expository paper, we present a survey about the history of the geometrization conjecture and the background material on the classification of Thurston's eight geometries. We also discuss recent techniques for immersive visualization of relevant three-dimensional manifolds in the context of the Geometrization Conjecture.
Synthesis lectures on visual computing, 2022
Ray Tracing Gems II, 2021
This chapter describes how to use intersection and closest-hit shaders to implement real-time visualizations of complex fractals using distance functions. The Mandelbulb and Julia Sets are used as examples.
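Such distance-function fractals are typically rendered by sphere tracing: march along the ray, stepping each time by the value of a distance estimator, until that value falls below a threshold. The sketch below is a minimal CPU illustration, not the chapter's shader code; it uses a common distance estimator for the quaternion Julia set, 0.5 * r * log(r) / |dz|, where dz is a running derivative, and all names and the constant c are illustrative.

```python
import math

def q_mul(a, b):
    # Hamilton product of quaternions stored as (w, x, y, z).
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def julia_de(p, c=(-0.2, 0.6, 0.2, 0.2), iters=64, bailout=16.0):
    # Distance estimator for the quaternion Julia set of z <- z^2 + c.
    z = (p[0], p[1], p[2], 0.0)
    dz = (1.0, 0.0, 0.0, 0.0)
    for _ in range(iters):
        dz = tuple(2.0 * t for t in q_mul(z, dz))   # dz <- 2 z dz
        z2 = q_mul(z, z)
        z = tuple(z2[i] + c[i] for i in range(4))   # z <- z^2 + c
        if sum(t*t for t in z) > bailout:
            break
    r = math.sqrt(sum(t*t for t in z))
    dr = math.sqrt(sum(t*t for t in dz))
    return 0.5 * r * math.log(max(r, 1e-12)) / max(dr, 1e-12)

def sphere_trace(origin, direction, de, t_max=10.0, eps=1e-4, max_steps=256):
    # Step along the ray by the distance bound until a hit or escape.
    t = 0.0
    for _ in range(max_steps):
        p = tuple(o + t * d for o, d in zip(origin, direction))
        d = de(p)
        if d < eps:
            return t      # hit
        t += d
        if t > t_max:
            break
    return None           # miss
```

In a GPU implementation, the marching loop lives in an intersection shader and the shading in a closest-hit shader; the CPU loop above only shows the control flow.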
arXiv (Cornell University), Dec 4, 2022
In this work, we investigate the representation capacity of multilayer perceptron networks that use the sine as activation function (sinusoidal neural networks). We show that the layer composition in such networks compacts information. For this, we prove that the composition of sinusoidal layers expands as a sum of sines containing a large number of new frequencies given by linear combinations of the weights of the network's first layer. We provide the expression of the corresponding amplitudes in terms of Bessel functions and give an upper bound for them that can be used to control the resulting approximation.
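The simplest case of this expansion can be checked numerically with the Jacobi-Anger identity, which writes a composition of two sine layers, sin(a sin(theta)), as a sum of odd harmonics whose amplitudes are Bessel functions of the first kind. The sketch below (illustrative names; Bessel J computed from its power series) compares the direct composition against the truncated expansion:

```python
import math

def bessel_j(n, z, terms=30):
    # Bessel function of the first kind, J_n(z), via its power series.
    return sum((-1)**m / (math.factorial(m) * math.factorial(m + n))
               * (z / 2.0)**(2*m + n) for m in range(terms))

def composed(theta, a=1.5):
    # Two stacked sinusoidal layers: sin(a * sin(theta)).
    return math.sin(a * math.sin(theta))

def jacobi_anger(theta, a=1.5, k_max=10):
    # Jacobi-Anger identity:
    #   sin(a sin(theta)) = 2 * sum_k J_{2k+1}(a) sin((2k+1) theta),
    # i.e. one layer of frequencies expands into many harmonics,
    # each with a Bessel-function amplitude.
    return 2.0 * sum(bessel_j(2*k + 1, a) * math.sin((2*k + 1) * theta)
                     for k in range(k_max + 1))
```

Because J_n(a) decays rapidly in n for fixed a, a short truncation already matches the composition to high precision, which is the mechanism behind the amplitude bounds mentioned above.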
2022 35th SIBGRAPI Conference on Graphics, Patterns and Images (SIBGRAPI)
Synthesis lectures on visual computing, 2022
This work investigates the use of neural networks admitting high-order derivatives for modeling dynamic variations of smooth implicit surfaces. For this purpose, it extends the representation of differentiable neural implicit surfaces to higher dimensions, which opens up mechanisms for exploiting geometric transformations in many settings, from animation and surface evolution to shape morphing and design galleries. The problem is modeled by a $k$-parameter family of surfaces $S_c$, specified as a neural network function $f : \mathbb{R}^3 \times \mathbb{R}^k \rightarrow \mathbb{R}$, where $S_c$ is the zero-level set of the implicit function $f(\cdot, c) : \mathbb{R}^3 \rightarrow \mathbb{R}$, with variations induced by the control variable $c \in \mathbb{R}^k$. In that context, restricted to each coordinate of $\mathbb{R}^k$, the underlying representation is a neural homotopy that solves a general partial differential equation.
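A toy instance of such a family, standing in for the learned neural function, is a linear homotopy between two analytic signed distance functions, with a single control coordinate c in [0, 1]; all names below are illustrative:

```python
import math

def sdf_sphere(p, r=1.0):
    # Signed distance to a sphere of radius r centered at the origin.
    return math.sqrt(sum(x*x for x in p)) - r

def sdf_box(p, half=(0.8, 0.8, 0.8)):
    # Exact signed distance to an axis-aligned box with given half-extents.
    q = [abs(x) - h for x, h in zip(p, half)]
    outside = math.sqrt(sum(max(v, 0.0)**2 for v in q))
    inside = min(max(q[0], max(q[1], q[2])), 0.0)
    return outside + inside

def family(p, c):
    # One-parameter family f : R^3 x R -> R; the surface S_c is the
    # zero-level set of f(., c). Here a linear homotopy morphs a
    # sphere (c = 0) into a box (c = 1).
    return (1.0 - c) * sdf_sphere(p) + c * sdf_box(p)
```

The paper's representation replaces this hand-written blend with a network whose homotopy solves a prescribed PDE, but the interface is the same: fix c, query the implicit function over R^3, extract the zero-level set.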
GPU Ray Tracing in Non-Euclidean Spaces, 2022
GPU Ray Tracing in Non-Euclidean Spaces, 2022
GPU Ray Tracing in Non-Euclidean Spaces, 2022
GPU Ray Tracing in Non-Euclidean Spaces, 2022
Computers & Graphics
We introduce a neural implicit framework that exploits the differentiable properties of neural networks and the discrete geometry of point-sampled surfaces to approximate them as the level sets of neural implicit functions. To train a neural implicit function, we propose a loss functional that approximates a signed distance function and admits terms with high-order derivatives, such as the alignment between the principal directions of curvature, to learn more geometric details. During training, we use a non-uniform sampling strategy based on the curvatures of the point-sampled surface to prioritize points with more geometric detail. This sampling yields faster learning while preserving geometric accuracy compared with previous approaches. We also use the analytical derivatives of the neural implicit function to estimate the differential measures of the underlying point-sampled surface.
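A schematic version of such a loss functional combines three terms: a data term forcing the function to vanish on surface samples, a normal-alignment term on its gradient, and an eikonal term enforcing unit gradient norm off the surface. The sketch below is illustrative, not the paper's implementation; it evaluates the gradient by finite differences, whereas a trained network would use its analytic derivatives (the curvature-direction terms mentioned above would add second-order derivatives in the same pattern):

```python
import math

def num_grad(f, p, h=1e-5):
    # Central finite differences (a stand-in for analytic derivatives).
    g = []
    for i in range(3):
        pp = [x + (h if i == j else 0.0) for j, x in enumerate(p)]
        pm = [x - (h if i == j else 0.0) for j, x in enumerate(p)]
        g.append((f(pp) - f(pm)) / (2.0 * h))
    return g

def sdf_loss(f, surface_pts, normals, offsurf_pts):
    # data: f should vanish on surface samples.
    data = sum(f(list(p))**2 for p in surface_pts) / len(surface_pts)
    # normal: grad f should align with the sampled normals.
    normal = 0.0
    for p, n in zip(surface_pts, normals):
        g = num_grad(f, p)
        normal += (1.0 - sum(a*b for a, b in zip(g, n)))**2
    normal /= len(surface_pts)
    # eikonal: |grad f| = 1 away from the surface.
    eik = 0.0
    for p in offsurf_pts:
        g = num_grad(f, p)
        eik += (math.sqrt(sum(x*x for x in g)) - 1.0)**2
    eik /= len(offsurf_pts)
    return data + normal + eik
```

The curvature-based sampling then simply feeds more surface_pts drawn from high-curvature regions into this functional.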
2021 34th SIBGRAPI Conference on Graphics, Patterns and Images (SIBGRAPI)
This survey presents methods that use neural networks for implicit representations of 3D geometry: neural implicit functions. We explore the different aspects of neural implicit functions for shape modeling and synthesis. We aim to provide a theoretical analysis of 3D shape reconstruction using deep neural networks and to foster a discussion among researchers interested in this field.
GPU Ray Tracing in Non-Euclidean Spaces
We introduce MIP-plicits, a novel approach for rendering 3D and 4D Neural Implicits that divides the problem into macro and meso components. We rely on the iterative nature of the sphere tracing algorithm, the spatial continuity of the Neural Implicit representation, and the association between the network architecture's complexity and the details it can represent. This approach does not rely on spatial data structures and can be used to mix Neural Implicits trained previously and separately as levels of detail. We also introduce Neural Implicit Normal Mapping, a core component of the problem factorization, analogous to the classic normal mapping on meshes broadly used in Computer Graphics. Finally, we derive an analytic equation and an algorithm to simplify the normal calculation of Neural Implicits, adapted to be evaluated by the General Matrix Multiply algorithm (GEMM). Current approaches rely on finite differences, which impose additional inference...
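For intuition on GEMM-friendly analytic normals, consider a one-hidden-layer sinusoidal MLP: its gradient is itself a pair of matrix products, so it can be evaluated with the same matrix-multiply kernels as inference, with no extra finite-difference passes. A minimal sketch with hypothetical weights (not the paper's derivation, which covers general architectures):

```python
import math

# Tiny sinusoidal MLP f(x) = w2 . sin(W1 x + b1), weights chosen arbitrarily.
W1 = [[ 1.0, 0.5, -0.3],
      [-0.7, 1.2,  0.4]]
b1 = [0.1, -0.2]
w2 = [0.8, -0.5]

def f(x):
    # Forward pass: one hidden sine layer, linear output.
    return sum(w2[i] * math.sin(sum(W1[i][j]*x[j] for j in range(3)) + b1[i])
               for i in range(2))

def grad_f(x):
    # Analytic gradient: grad f = W1^T (w2 * cos(W1 x + b1)).
    # Both the pre-activation W1 x + b1 and the final contraction with
    # W1^T are matrix products, hence evaluable by GEMM.
    pre = [sum(W1[i][j]*x[j] for j in range(3)) + b1[i] for i in range(2)]
    s = [w2[i] * math.cos(pre[i]) for i in range(2)]
    return [sum(W1[i][j] * s[i] for i in range(2)) for j in range(3)]
```

A finite-difference normal would instead need six extra forward passes per point; the analytic form needs none.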
This work investigates the use of neural networks admitting high-order derivatives for modeling d... more This work investigates the use of neural networks admitting high-order derivatives for modeling dynamic variations of smooth implicit surfaces. For this purpose, it extends the representation of differentiable neural implicit surfaces to higher dimensions, which opens up mechanisms that allow to exploit geometric transformations in many settings, from animation and surface evolution to shape morphing and design galleries. The problem is modeled by a kkk-parameter family of surfaces ScS_cSc, specified as a neural network function f:mathbbR3timesmathbbRkrightarrowmathbbRf : \mathbb{R}^3 \times \mathbb{R}^k \rightarrow \mathbb{R}f:mathbbR3timesmathbbRkrightarrowmathbbR, where ScS_cSc is the zero-level set of the implicit function f(cdot,c):mathbbR3rightarrowmathbbRf(\cdot, c) : \mathbb{R}^3 \rightarrow \mathbb{R} f(cdot,c):mathbbR3rightarrowmathbbR, with cinmathbbRkc \in \mathbb{R}^kcinmathbbRk, with variations induced by the control variable ccc. In that context, restricted to each coordinate of mathbbRk\mathbb{R}^kmathbbRk, the underlying representation is a neural homotopy which is the solution of a general partial differential equation.
Synthesis Lectures on Visual Computing, 2022
In this work, we propose a novel ray tracing model for immersive visualization of Riemannian manifolds. To do this, we introduce Riemannian ray tracing, a generalization of the classic Computer Graphics concept. Specifically, our model is capable of interactive, real-time VR visualizations of Nil, Sol, and SL2, Thurston's most nontrivial geometries. These experiences have the potential to yield insights with impact in physics/cosmology research, education, special effects, and games, among other areas. Riemannian ray tracing is implemented using the ray-tracing capabilities of the NVIDIA RTX platform. We discuss the general algorithm on the CPU and show how to map the computations to the RTX pipeline.
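In Riemannian ray tracing, rays are geodesics obtained by numerically integrating the geodesic equation of the metric. As a minimal stand-in (not the paper's Nil/Sol/SL2 code), the sketch below integrates the geodesic equation of the Poincaré half-plane, whose Christoffel symbols are simple, with classical RK4; a tracer for another geometry would only replace geodesic_accel. Function names are illustrative.

```python
def geodesic_accel(x, y, vx, vy):
    # Geodesic equation of the Poincare half-plane (metric (dx^2+dy^2)/y^2):
    #   x'' =  2 x' y' / y
    #   y'' = (y'^2 - x'^2) / y
    # These come from the Christoffel symbols; Nil/Sol/SL2 substitute theirs.
    return 2.0 * vx * vy / y, (vy*vy - vx*vx) / y

def _deriv(s):
    # State s = (x, y, vx, vy); returns its time derivative.
    ax, ay = geodesic_accel(*s)
    return (s[2], s[3], ax, ay)

def trace_geodesic(x, y, vx, vy, dt=1e-3, steps=1000):
    # RK4 integration of the "bent ray" through the curved space.
    s = (x, y, vx, vy)
    for _ in range(steps):
        k1 = _deriv(s)
        k2 = _deriv(tuple(si + 0.5*dt*ki for si, ki in zip(s, k1)))
        k3 = _deriv(tuple(si + 0.5*dt*ki for si, ki in zip(s, k2)))
        k4 = _deriv(tuple(si + dt*ki for si, ki in zip(s, k3)))
        s = tuple(si + dt/6.0*(a + 2*b + 2*c + d)
                  for si, a, b, c, d in zip(s, k1, k2, k3, k4))
    return s
```

On the RTX pipeline, each integration step advances a short straight segment, and hardware ray-primitive tests run against those segments; the CPU loop above shows only the marching itself.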