Scaling Law Research Papers - Academia.edu
Nonmodal transient growth studies and estimation of optimal perturbations have been made for the compressible plane Couette flow with three-dimensional disturbances. The steady mean flow is characterized by a non-uniform shear rate and a varying temperature across the wall-normal direction for an appropriate perfect gas model. The maximum amplification of perturbation energy over time, G_max, is found to increase with increasing Reynolds number Re, but decreases with increasing Mach number M. More specifically, the optimal energy amplification G_opt (the supremum of G_max over both the streamwise and spanwise wavenumbers) is maximum in the incompressible limit and decreases monotonically as M increases. The corresponding optimal streamwise wavenumber, α_opt, is non-zero at M = 0, increases with increasing M, reaches a maximum for some value of M and then decreases, eventually becoming zero at high Mach numbers. While the pure streamwise vortices are the optimal …
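For orientation, the quantities above are conventionally defined as follows in nonmodal stability theory (a standard formulation, not quoted from the paper itself):

```latex
G(t) = \max_{q_0 \neq 0} \frac{\|q(t)\|_E^2}{\|q_0\|_E^2}, \qquad
G_{\max}(\alpha,\beta;\,Re,M) = \sup_{t \ge 0} G(t), \qquad
G_{\mathrm{opt}} = \sup_{\alpha,\beta} G_{\max},
```

where q is the perturbation state vector, ‖·‖_E an energy norm, and α, β the streamwise and spanwise wavenumbers.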
The Sweet-Parker layer in a system that exceeds a critical value of the Lundquist number (S) is unstable to the plasmoid instability. In this paper, a numerical scaling study has been done with an island coalescing system driven by a low level of random noise. In the early stage, a primary Sweet-Parker layer forms between the two coalescing islands. The primary Sweet-Parker layer breaks into multiple plasmoids and even thinner current sheets through multiple levels of cascading if the Lundquist number is greater than a critical value S_c ≈ 4×10^4. As a result of the plasmoid instability, the system realizes a fast nonlinear reconnection rate that is nearly independent of S and only weakly dependent on the level of noise. The number of plasmoids in the linear regime is found to scale as S^{3/8}, as predicted by an earlier asymptotic analysis [N. F. Loureiro et al., Phys. Plasmas 14, 100703 (2007)]. In the nonlinear regime, the number of plasmoids follows a steeper scaling and is proportional to S. The thickness and length of current sheets are found to scale as S^{-1}, and the local current densities of current sheets scale as S^{-1}. Heuristic arguments are given in support of these scaling relations.
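For context, the Lundquist number and the Sweet-Parker sheet width are (standard definitions, not specific to this paper):

```latex
S = \frac{L\,V_A}{\eta}, \qquad \frac{\delta_{SP}}{L} \sim S^{-1/2},
```

with L the sheet half-length, V_A the Alfvén speed and η the magnetic diffusivity; the plasmoid instability sets in once S exceeds the critical value S_c ≈ 4×10^4 quoted above.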
An electrostatic atomization technique has been developed to generate an ultra-fine spray of ZrO2 and SiC ceramic suspensions in the range of 4–5 μm with a narrow size distribution (1–9 μm). The aim of this work is to generate a fine spray of ceramic suspensions for the preparation of uniform thin films of these ceramic materials on substrates. Thin-film formation using the electrostatic atomization process allows one to tightly control the process while remaining economical in comparison with competing process technologies such as chemical vapour deposition, physical vapour deposition and plasma spray. Preliminary results have shown that for low-throughput atomization, the cone-jet is the most suitable mode to produce a fine charged aerosol with a narrow size distribution. It was found that the droplet size of the spray is in the range of a few micrometres with a narrow size distribution and that droplet size and spray current obey the theoretical predictions of the scaling laws. As-prepared ZrO2 and SiC thin films were observed to be homogeneous with a particle size of less than 10 μm.
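The scaling laws being tested are presumably the classical cone-jet relations (Fernández de la Mora and Loscertales, 1994), quoted here as background and up to dimensionless prefactors:

```latex
d \sim \left(\frac{Q\,\varepsilon\,\varepsilon_0}{K}\right)^{1/3}, \qquad
I \sim \left(\gamma\,K\,Q\right)^{1/2},
```

with Q the liquid flow rate, K the liquid conductivity, γ the surface tension and ε the relative permittivity; a droplet size falling with conductivity and a current growing with flow rate are exactly the behaviours probed in such experiments.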
This paper reports results from centrifuge tests designed to investigate capillary rise in soils subjected to different gravitational fields. The experimental programme is part of the EU-funded NECER project (Network of European Centrifuges for Environmental Geotechnic Research), whose objective is to investigate the appropriateness of geotechnical centrifuge modelling for the investigation of geoenvironmental problems, particularly with reference to partially saturated soils.
- by Aissa Rezzoug and +2
- Engineering, Civil Engineering, Kinetics, Engineering Geology
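As background on the capillary-rise scaling that motivates such centrifuge tests (standard reasoning, not quoted from the paper): Jurin's law gives the equilibrium rise height in a pore of radius r as

```latex
h_c = \frac{2\sigma\cos\theta}{\rho\,g\,r},
```

so at an elevated g-level of Ng in the centrifuge, the rise height in a given soil is reduced by the factor N, which is what makes the g-level a clean experimental control parameter.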
Local assortativity has recently been proposed as a measure to analyse complex networks. It has been noted that Internet Autonomous System (AS) level networks show a markedly different local assortativity profile to most biological and social networks. In this paper we show that, even though several Internet growth models exist, none of them produces the local assortativity profile that is observed in real AS networks. We introduce a new generic growth model which can produce a linear local assortativity profile similar to that of the Internet. We verify that this model accurately reproduces the local assortativity profile of the Internet, while also satisfactorily modelling other attributes of AS networks already explained by existing models.
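One published node-level definition (due to Piraveenan et al.; reproduced here from memory, so conventions and normalisation may differ from the paper's) decomposes the global assortativity r = Σ_v ρ_v with

```latex
\rho_v = \frac{j_v\,(j_v + 1)\,\left(\bar{k}_v - \mu_q\right)}{2\,M\,\sigma_q^2},
```

where j_v is the excess degree of node v, k̄_v the mean excess degree of its neighbours, M the number of edges, and μ_q, σ_q the mean and standard deviation of the excess-degree distribution.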
A new scaling law for the planetary magnetic field strengths is obtained assuming the magnetostrophic balance. The velocity of the convection current v_c in a planetary core is estimated by the geometric mean of the possible maximum and minimum values to be v_c ≈ c(Ω/4πσμ)^{1/2}, where Ω is the angular velocity of the core rotation, σ the electric conductivity, c the speed of light, and μ the magnetic permeability. Overall agreement of the prediction by the present new scaling law with the data on magnetic field strengths of various planets is superior to those by previously proposed scaling laws. The present scaling law is also found to predict well Neptune's magnetic field, recently determined by the Voyager 2 observation.
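The geometric-mean estimate can be unpacked as follows (a reconstruction consistent with the abstract, in Gaussian units where the magnetic diffusivity is η = c²/4πσμ): the slowest relevant speed is diffusive, v_min ~ η/L, the fastest is rotational, v_max ~ ΩL, and

```latex
v_c \sim \sqrt{v_{\min}\, v_{\max}} = \sqrt{\Omega\,\eta} = c\left(\frac{\Omega}{4\pi\sigma\mu}\right)^{1/2},
```

independent of the core size L.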
Within the framework of the RIVIERA project (French acronym for Risks in the city: amenities, networks, archaeology), a research study aims at building a 3D geological model at the city scale. Data are also analyzed on specific geographical areas for professional applications in geotechnical engineering, urban archaeology and asset management of buried networks. This paper concerns geotechnical modelling at the town scale and focuses on the case of Pessac, in the southern part of the Bordeaux conurbation. The purpose is, from the global geological model built for the near-surface layers, to provide information useful for urban planning in relation to underground properties. This information will condition site prospection and urban planning choices. As a first step, a large amount of data has been gathered from various geotechnical prospection reports. Thus, we have …
A key parameter that controls the crystallization of primordial oceans in large icy moons is the presence of antifreeze compounds, which may have maintained primordial oceans over the age of the solar system. Here we investigate the influence of methanol, a possible antifreeze candidate, on the crystallization of Titan's primordial ocean. Using a thermodynamic model of the solar nebula and assuming a plausible composition of its initial gas phase, we first calculate the condensation sequence of ices in Saturn's feeding zone, and show that in Titan's building blocks methanol can have a mass fraction of ∼4 wt% relative to water, i.e., methanol can be up to four times more abundant than ammonia. We then combine available data on the phase diagram of the water-methanol system and scaling laws derived from thermal convection to estimate the influence of methanol on the dynamics of the outer ice I shell and on the heat transfer through this layer. For a fraction of methanol consistent with the building-block composition we determined, the vigor of convection in the ice I shell is strongly reduced. The effect of 5 wt% methanol is equivalent to that of 3 wt% ammonia. Thus, if methanol is present in the primordial ocean of Titan, the crystallization may stop, and a sub-surface ocean may be maintained between the ice I and high-pressure ice layers. A preliminary estimate indicates that the presence of 4 wt% methanol and 1 wt% ammonia may result in an ocean of thickness of at least 90 km.
In the study of protein adsorption at fluid interfaces, the relation between the surface concentration and the surface pressure is of general interest and is usually quantified by equations of state. In the dilute range, the ideal gas law is a good approximation, which allows the calculation of the molecular mass of the molecule. It may be extended towards higher concentrations by the two-dimensional solution (2D solution) approximation, which allows the calculation of both the molecular mass and the molecular area of the protein at the interface. When the surface concentration increases beyond a critical value, the interfacial layer enters a semi-dilute regime, where a scaling-law approach gives a good approximation of the structure and properties of the layer but only allows a rough calculation of either the molecular area or the molecular mass from the other parameter. Moreover, in the scaling-law approach, the semi-dilute regime is connected to the gas-like regime by a kinked (angular) transition which is not observed in the experimental isotherms. In this communication, a mixed approach is proposed in which the equation of state is the 2D solution law below the critical surface concentration and the scaling law above it, with the surface pressure and its derivative with respect to the surface concentration required to be continuous across the matching point. Simple relations are found between the critical surface concentrations calculated from the 2D solution, the scaling-law and the mixed approaches. Within that framework, it is possible to define precisely the range of data belonging to the 2D solution regime and to know whether the molecular mass and molecular area calculations are performed by fitting the right equation to the right data.
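The matching construction can be written compactly (a sketch; the power-law form of the semi-dilute branch is the standard 2D scaling result, with exponent y related to the excluded-volume exponent ν by y = 2ν/(2ν−1)):

```latex
\Pi(\Gamma) =
\begin{cases}
\Pi_{\mathrm{2D}}(\Gamma), & \Gamma \le \Gamma^*,\\[2pt]
C\,\Gamma^{\,y}, & \Gamma > \Gamma^*,
\end{cases}
\qquad
\Pi_{\mathrm{2D}}(\Gamma^*) = C\,\Gamma^{*\,y},
\qquad
\Pi_{\mathrm{2D}}'(\Gamma^*) = C\,y\,\Gamma^{*\,y-1},
```

i.e. the isotherm and its first derivative are both continuous at the critical surface concentration Γ*.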
Measurements have been made of the critical current of an Nb3Sn superconducting strand destined for the ITER (International Thermonuclear Experimental Reactor) prototype cable-in-conduit conductors. Characterization of the strand was performed on a recently developed spring device, named Pacman, allowing measurements of the voltage-current characteristic of an Nb3Sn strand over a wide range of applied axial strain, magnetic field, temperature and currents up to at least 700 A. The strand was measured in magnetic fields between 4 and 11 T, at temperatures of 4.2-10 K, and at applied axial strains ranging from −0.9% (compressive) to +0.3% (tensile). The critical currents were then used to derive the superconducting and deformation-related parameters for the scaling of the measured results, based on the so-called 'improved' deviatoric strain model. We also demonstrate that the same values can be derived from a partial critical-current data set without spoiling the overall scaling accuracy. This indicates that the proposed scaling relation can be used not only as a fitting tool, but is promising for reliable extrapolation as well, providing substantial savings in cost and time for the experimental routine.
Scaling world city size distributions are analyzed in terms of q-exponentials, the distributions which naturally emerge within nonextensive statistical mechanics. These distributions allow us to estimate hierarchy and scale parameters for 29 historical periods, thus conveniently replacing the Zipfian scaling law by a historically sensitive and nuanced model of how long-term oscillations in urban hierarchies co-evolve with urban industrial, commercial and financial innovations. Our present model emerges from attempts to understand our scaling results through validation tests using multiple political and economic variables. We investigate the model so as to explore the extent and interplay of endogenous dynamics and exogenous shocks as causes of synchronization. One such step consists of identifying a network macro-structural variable, the average urban hub adjacency ratio, to help explain the interactive network dynamics of endogenous synchronization. We find, further, in examining our validation-check variables, that innovation clusters in space and time (the leading sectors of urban economies) are differentially embedded in periods of alternating hierarchical and heterarchical urban scaling regimes. Urban infrastructure and demography, as expected, change slowly compared to the pace of Schumpeterian K-cyclic tendencies in bursts of competitive economic innovation, but the synchrony of these changes in time and space, while a surprising discovery, is consistent with our network-economic theory. Sandwiched in an intermediate layer of change, in terms of its spatiotemporal scalings, we find the shifts in the leading cities and states, as studied by Modelski and Thompson, forming a synchronously embedded three-layer model of economic, political, and urban structural oscillations. These findings have the potential to unify a new perspective on glacially slow and then quickly triggered structural changes in city hierarchies and infrastructure with perspectives on long cycles as studied from world political and economic perspectives.
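For reference, the q-exponential family used in such fits is (standard in nonextensive statistical mechanics):

```latex
e_q(x) = \left[1 + (1-q)\,x\right]_+^{\frac{1}{1-q}}, \qquad e_1(x) = e^x,
```

so a ranked city-size distribution fitted by P(>s) ∝ e_q(−s/κ) interpolates between exponential decay (q → 1) and Zipf-like power-law tails (q > 1).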
Planet formation models suggest the primordial main belt experienced a short but intense period of collisional evolution shortly after the formation of planetary embryos. This period is believed to have lasted until Jupiter reached its full size, when dynamical processes (e.g., sweeping resonances, excitation via planetary embryos) ejected most planetesimals from the main belt zone. The few planetesimals left behind continued to undergo comminution at a reduced rate until the present day. We investigated how this scenario affects the main belt size distribution over Solar System history using a collisional evolution model (CoEM) that accounts for these events. CoEM does not explicitly include results from dynamical models, but instead treats the unknown size of the primordial main belt and the nature/timing of its dynamical depletion using innovative but approximate methods. Model constraints were provided by the observed size frequency distribution of the asteroid belt, the observed population of asteroid families, the cratered surface of differentiated Asteroid (4) Vesta, and the relatively constant crater production rate of the Earth and Moon over the last 3 Gyr. Using CoEM, we solved for both the shape of the initial main belt size distribution after accretion and the asteroid disruption scaling law Q*_D. In contrast to previous efforts, we find our derived Q*_D function is very similar to results produced by numerical hydrocode simulations of asteroid impacts. Our best-fit results suggest the asteroid belt experienced as much comminution over its early history as it has since it reached its low-mass state approximately 3.9-4.5 Ga. These results suggest the main belt's wavy-shaped size-frequency distribution is a "fossil" from this violent early epoch. We find that most asteroids with diameter D ≳ 120 km are primordial, with their physical properties likely determined during the accretion epoch. Conversely, most smaller asteroids are byproducts of fragmentation events. The observed changes in the asteroid spin rate and lightcurve distributions near D ∼ 100-120 km are likely to be a byproduct of this difference. Estimates based on our results imply the primordial main belt population (in the form of D < 1000 km bodies) was 150-250 times larger than it is today, in agreement with recent dynamical simulations.
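For readers outside the field: Q*_D is conventionally the catastrophic disruption threshold, i.e. the specific impact energy at which the largest surviving fragment retains half the target's mass (a standard definition, not specific to this paper):

```latex
M_{\mathrm{lr}}\left(Q = Q^*_D\right) = \tfrac{1}{2}\,M_{\mathrm{target}}.
```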
Seismologists, and more recently computational and condensed-matter physicists, have made extensive use of computer modeling to investigate the physics of earthquakes. Here, the authors describe a simple cellular automaton model that explores the possible relationship between Gutenberg-Richter scaling and critical phenomena.
Numerous observations substantiate a pronounced contrast in lightning activity between continents and oceans. The traditional explanation for continental dominance is based on a contrast in thermal properties of land and sea. A more recent idea is based on the contrast in boundary layer aerosol concentration between land and sea. This study makes use of islands as miniature continents of varying area to distinguish between these two hypotheses. Scaling law analysis is used to predict transitional island areas for the two hypotheses. NASA Tropical Rainfall Measuring Mission satellite observations provide a uniform data set on island activity. The island-area dependences of lightning activity are more consistent with the thermal hypothesis than with the aerosol hypothesis, but this conclusion must be tempered by the extreme simplification of the theoretical predictions.
- by Earle Williams and +1
- Prediction, Multidisciplinary, Concentration, Lightning
In this article we present, in a succinct and interdisciplinary way, the relationship between scaling laws in physics and the dynamics of growth in biological structures. First, we discuss the preliminary concepts on which the scaling laws applied to biology are based. We then use West's similarity hypothesis to formulate, in a didactic and deductive way, a generalized differential equation for studying the growth of organisms in general. Keywords: scaling laws, growth, similarity.
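The generalized growth equation alluded to is presumably of the West-Brown-Enquist ontogenetic form (West et al., Nature 2001), quoted here from the literature as context:

```latex
\frac{dm}{dt} = a\,m^{3/4}\left[1 - \left(\frac{m}{M}\right)^{1/4}\right],
```

with m the organism's mass, M its asymptotic mass and a a taxon-dependent constant; the 3/4 exponent encodes the similarity (scaling) hypothesis.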
An experimental investigation is presented into the geometrically similar scaling laws for circular plates impacted by cylindrical strikers with blunt ends travelling at velocities up to 5 m s⁻¹, which produce large permanent transverse displacements and perforation in some cases. The plate dimensions have a scale range of four for the mild steel (strain-rate-sensitive) specimens and approximately five for the aluminium alloy (strain-rate-insensitive) specimens. The experimental results obey the geometrically similar scaling laws within the accuracy expected for such tests. It is observed that the impossibility of geometrically similar scaling of the material strain-rate-sensitive effects appears not to influence the plate perforation energies, at least within the range of experimental parameters studied in the present investigation.
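The obstruction in the final sentence can be made explicit by a standard dimensional argument (not taken from the paper): if all lengths are scaled by a factor λ at fixed impact velocity, then

```latex
\delta \propto \lambda, \qquad t \propto \lambda, \qquad E_k \propto \lambda^3, \qquad \dot{\varepsilon} \propto \lambda^{-1},
```

so a half-scale model experiences twice the strain rate of the prototype, which is why strain-rate-sensitive materials cannot, in principle, scale exactly.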
A Lorentz force flowmeter is a device for the contactless measurement of flow rates in electrically conducting fluids. It is based on the measurement of the force on a magnet system that acts upon the flow. We formulate the theory of the Lorentz force flowmeter, which connects the measured force to the unknown flow rate. We first apply the theory to three specific cases, namely (i) pipe flow exposed to a longitudinal magnetic field, (ii) pipe flow under the influence of a transverse magnetic field and (iii) the interaction of a localized distribution of magnetic material with a uniformly moving sheet of metal. These examples provide the key scaling laws of the method and illustrate how the force depends on the shape of the velocity profile and the presence of turbulent fluctuations in the flow. Moreover, we formulate the general kinematic theory, which holds for arbitrary distributions of magnetic material or electric currents and for any velocity distribution, and which provides a rational framework for the prediction of the sensitivity of Lorentz force flowmeters in laboratory experiments and in industrial practice.
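The central scaling can be stated compactly (standard Lorentz force velocimetry reasoning, given as background):

```latex
F \sim \sigma\, u\, B^2\, V,
```

with σ the fluid's electrical conductivity, u a characteristic velocity, B the applied field strength and V the fluid volume threaded by the field; for a fixed magnet system the force is therefore proportional to the flow rate.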
A flow visualisation study was performed to investigate a periodic flow instability in a bifurcating duct within the tip of the flares at the Shell refinery in Clyde, NSW, to verify the trigger of a combustion-driven oscillation proposed in Part A of this study, and to identify its features. The model study assessed only the flow instability, uncoupled from the …
The well-known scaling laws relating critical exponents in a second-order phase transition have been generalized to the case of an arbitrarily higher-order phase transition. In a higher-order transition, such as the one suggested for the superconducting transition in Ba0.6K0.4BiO3 and in Bi2Sr2CaCu2O8, there are singularities in higher-order derivatives of the free energy. A relation between exponents of different observables has been found, regardless of whether the exponents are classical (mean-field theory, no fluctuations, integer order of a transition) or not (fluctuation effects included). We also comment on the phase transition in a thin film.
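For orientation, the second-order exponent relations being generalized include (standard results, quoted for context):

```latex
\alpha + 2\beta + \gamma = 2, \qquad \gamma = \beta(\delta - 1), \qquad \gamma = \nu(2 - \eta), \qquad \nu d = 2 - \alpha,
```

i.e. the Rushbrooke, Widom, Fisher and Josephson (hyperscaling) relations, respectively.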
Trait-based approaches to community structure are increasingly used in terrestrial ecology. We show that such an approach, augmented by a mechanistic analysis of trade-offs among functional traits, can be successfully used to explain the community composition of marine phytoplankton along environmental gradients. Our analysis of the literature on major functional traits in phytoplankton, such as the parameters of nutrient-dependent growth and uptake, reveals physiological trade-offs in species' abilities to acquire and utilize resources. These trade-offs, arising from fundamental relations such as cellular scaling laws and enzyme kinetics, define contrasting ecological strategies of nutrient acquisition. Major groups of marine eukaryotic phytoplankton have adopted distinct strategies with associated traits. These diverse strategies of nutrient utilization can explain the distribution patterns of major functional groups and size classes along nutrient availability gradients.
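The growth and uptake parameters mentioned are conventionally those of Monod and Michaelis-Menten kinetics (standard forms, quoted for context):

```latex
\mu(S) = \frac{\mu_{\max}\, S}{K_s + S}, \qquad V(S) = \frac{V_{\max}\, S}{K_m + S},
```

where S is the ambient nutrient concentration; trade-offs arise because cell size constrains μ_max, V_max and the half-saturation constants through cellular scaling laws.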
This paper reviews historical methods for estimating surge hazards and concludes that the class of solutions produced with Joint Probability Method (JPM) solutions provides a much more stable estimate of hazard levels than alternative methods. We proceed to describe changes in our understanding of the winds in hurricanes approaching a coast and of the physics of surge generation that have required recent modifications to procedures utilized in earlier JPM studies. Of critical importance to the accuracy of hazard estimates is the ability to maintain a high level of fidelity in the numerical simulations while allowing a sufficient number of simulations to populate the joint probability matrices for the surges. To accomplish this, it is important to maximize the information content in the sample storm set to be simulated. This paper introduces the fundamentals of a method based on the functional specification of the surge response for this purpose, along with an example of its application in the New Orleans area. A companion paper in this special issue provides details of the portion of this new method related to interpolating/extrapolating along spatial dimensions.
The total fracture energy of edge-cracked beams under bending load is strongly dependent on specimen size, so the Charpy energy can only be measured on standard specimens. By means of a simple mechanical model, a mathematical relation between the total fracture energy of an edge-cracked beam under bending and the fracture toughness is derived, from which a mathematical relation between the fracture energy and specimen size is obtained. It can be used to scale up the fracture energy of sub-sized tests and then apply the evaluation procedure for standard Charpy specimens mentioned above. Unlike the commonly used empirical correlation formulas, the presented scaling law is applicable to any elastic-plastic material. It holds for the upper-shelf regime and, as a lower bound, also in the brittle-to-ductile transition regime. The results are compared with experimental data obtained from different specimen sizes.
Oil-in-water (O/W) emulsions produced by static mixers in the laminar flow regime are characterized for their oil drop size spectra. The emulsions are used in the first process step for the production of microspheres for pharmaceutical applications by the emulsion extraction method. However, emulsion generation by static mixers in the laminar flow regime is rarely discussed in the scientific literature.
The success of new scientific areas can be assessed by their potential for contributing to new theoretical approaches and in applications to real-world problems. Complex networks have fared extremely well in both of these aspects, with their sound theoretical basis developed over the years and with a variety of applications. In this survey, we analyze the applications of complex networks to real-world problems and data, with emphasis on representation, analysis and modeling, after an introduction to the main concepts and models. A diversity of phenomena are surveyed, which may be classified into no fewer than 22 areas, providing a clear indication of the impact of the field of complex networks.
We provide a brief historical background of the development of hydraulic fracturing models for use in the petroleum and other industries. We discuss scaling laws and the propagation regimes that control the growth of hydraulic fractures from the laboratory to the field scale. We introduce the mathematical equations and boundary conditions that govern the hydraulic fracturing process, and discuss numerical implementation issues including: tracking of the fracture footprint, the control of the growth of the hydraulic fracture as a function of time, coupling of the equations, and time-stepping schemes. We demonstrate the complexity of hydraulic fracturing by means of an application example based on real data. Finally, we highlight some key areas of research that need to be addressed in order to improve current models.
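Two of the governing relations referred to are, in their simplest form (a standard statement; details vary between models): fluid mass balance inside the fracture with Poiseuille (lubrication) flow,

```latex
\frac{\partial w}{\partial t} + \nabla \cdot \mathbf{q} = Q_0\,\delta(\mathbf{x}), \qquad \mathbf{q} = -\frac{w^3}{12\mu}\,\nabla p,
```

coupled to a nonlocal elasticity relation between the net pressure p and the fracture opening w; here μ is the fluid viscosity and Q_0 the injection rate.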
This review deals with several microscopic ("agent-based") models of financial markets which have been studied by economists and physicists over the last decade: Kim-Markowitz, Levy-Levy-Solomon, Cont-Bouchaud, Solomon-Weisbuch, Lux-Marchesi, Donangelo-Sneppen and Solomon-Levy-Huang. After an overview of simulation approaches in financial economics, we first give a summary of the Donangelo-Sneppen model of monetary exchange and compare it with related models in the economics literature. Our selective review then outlines the main ingredients of some influential early models of multi-agent dynamics in financial markets (Kim-Markowitz, Levy-Levy-Solomon). As will be seen, these contributions draw their inspiration from the complex appearance of investors' interactions in real-life markets. Their main aim is to reproduce (and thereby provide possible explanations for) the spectacular bubbles and crashes seen in certain historical episodes, but they lack (like almost all the work before 1998 or so) a perspective in terms of the universal statistical features of financial time series. In fact, awareness of a set of such regularities (power-law tails of the distribution of returns, temporal scaling of volatility) only gradually appeared over the nineties. With the more precise description of the formerly relatively vague characteristics (e.g. moving from the notion of fat tails to the more concrete one of a power law with index around three), it became clear that financial market dynamics give rise to some kind of universal scaling laws. Showing similarities with scaling laws for other systems with many interacting sub-units, an exploration of financial markets as multi-agent systems appeared to be a natural consequence. This topic was pursued by quite a number of contributions appearing in both the physics and economics literature since the late nineties. From the wealth of different flavors of multi-agent models that have appeared by now, we discuss the Cont-Bouchaud, Solomon-Levy-Huang and Lux-Marchesi models. Open research questions are discussed in our concluding section.
An alternate inner wall variable, for flow over a transitional rough pipe surface, is defined as the ratio of the normal coordinate measured above the mean roughness level to the wall roughness scale. The Reynolds equations for mean turbulent flow in a transitional rough pipe are considered in two layers (inner and outer). The predictions of the mean velocity and friction factor in fully developed turbulent flow in a rough pipe, presented here, cover all types of roughness. The data for a particular case, the machine-honed Princeton superpipe roughness, analogous to the inflectional-type roughness of Nikuradse, are presented as two expressions using our roughness scale. The velocity profile and friction factor on a transitional rough wall are shown to be governed by new log laws, which are explicitly independent of the transitional wall roughness. Further, the inflectional roughness has also been connected with geometric roughness parameters, such as arithmetic mean roughness, mean peak-to-valley height roughness, and root mean square (rms) roughness based on a texture measure; implicit and approximate explicit friction factor formulas have also been proposed. In the entire transition region between the fully smooth and fully rough walls, the monotonic roughness of Colebrook (Moody chart) overestimates the friction factor when compared with the present inflectional roughness.
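For context, the classical log laws that the new ones generalize are the smooth-wall and fully-rough-wall forms (standard results, not the paper's own):

```latex
u^+ = \frac{1}{\kappa}\ln y^+ + B, \qquad u^+ = \frac{1}{\kappa}\ln\frac{y}{k_s} + 8.5,
```

with κ ≈ 0.41, B ≈ 5.0 and k_s the equivalent sand-grain roughness; the paper's formulation replaces y and k_s by its alternate inner variable so that a single law spans the transitional regime.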
We study both experimentally and theoretically the classical problem of the circular hydraulic jump. By means of elementary hydrodynamics we investigate the scaling laws governing the position of the hydraulic jump and compare our predictions with experimental data. The results of our simple model are in good agreement with the experiments and with more elaborate approaches. The problem can be effectively used for educational purposes, being appropriate both for experimental investigations and for theoretical application of many fluid mechanics concepts.
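The scaling usually tested in such experiments (Bohr, Dimon and Putkaradze, 1993; quoted as background) predicts the jump radius

```latex
R_j \sim Q^{5/8}\,\nu^{-3/8}\,g^{-1/8},
```

with Q the volume flow rate, ν the kinematic viscosity and g gravity.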
Considerable effort is being directed toward updating safety codes and standards in preparation for production, distribution, and retail of hydrogen as a consumer energy source. In the present study, measurements were performed in large-scale, vertical flames to characterize the dimensional and radiative properties of an ignited hydrogen jet. These data are relevant to the safety scenario of a sudden leak in a high-pressure hydrogen containment vessel. Specifically, the data will provide a technological basis for determining hazardous length scales associated with unintended releases at hydrogen storage and distribution centers. Visible and infrared video and ultraviolet flame luminescence imaging were used to evaluate flame length, diameter and structure. Radiometer measurements allowed determination of the radiant heat flux from the flame. The results show that flame length increases with total jet mass flow rate and jet nozzle diameter. When plotted as a function of Froude number, which measures the relative importance of jet momentum and buoyancy, the measured flame lengths for a range of operating conditions collapse onto the same curve. Good comparison with hydrocarbon jet flame lengths is found, demonstrating that the non-dimensional correlations are valid for a variety of fuel types. The radiative heat flux measurements for hydrogen flames show good agreement with non-dimensional correlations and scaling laws developed for a range of fuels and flame conditions. This result verifies that such correlations can be used to predict radiative heat flux from a wide variety of hydrogen flames and establishes a basis for predicting a priori the characteristics of flames resulting from accidental releases.
A definition of K41 scaling law for suitable families of measures is given and investigated. First, a number of necessary conditions are proved. They imply the absence of scaling laws for 2D stochastic Navier-Stokes equations and for the stochastic Stokes (linear) problem in any dimension, while they imply a lower bound on the mean vortex stretching in 3D. Second, for …
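For context, the K41 (Kolmogorov 1941) law in its usual form states that inertial-range velocity increments satisfy (standard statement):

```latex
\left\langle |\delta v(r)|^p \right\rangle \sim C_p\,(\varepsilon r)^{p/3},
```

with ε the mean energy dissipation rate; the paper's definition adapts this statement to suitable families of measures.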
We describe our work in the collection and analysis of massive data describing the connections between participants in online social networks. Alternative approaches to social network data collection are defined and evaluated in practice against the popular Facebook Web site. Thanks to our ad hoc, privacy-compliant crawlers, two large samples, comprising millions of connections, have been collected; the data are anonymous and organized as an undirected graph. We describe a set of tools that we developed to analyze specific properties of such social-network graphs, including degree distribution, centrality measures, scaling laws and the distribution of friendship.
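As an illustration of the kind of analysis described, here is a minimal sketch using networkx (the file name is hypothetical; the authors' own tools are not reproduced here):

```python
# Minimal sketch: load an anonymized, undirected edge list like the samples
# described above and compute a few of the listed graph properties.
import networkx as nx
from collections import Counter

G = nx.read_edgelist("facebook_sample.txt", nodetype=int)  # undirected by default

# Degree distribution: how many nodes have each degree.
degree_counts = Counter(d for _, d in G.degree())

# Global degree assortativity (Pearson correlation of degrees at edge endpoints).
r = nx.degree_assortativity_coefficient(G)

# A centrality measure that stays cheap on graphs with millions of edges.
centrality = nx.degree_centrality(G)

print(f"nodes={G.number_of_nodes()} edges={G.number_of_edges()} assortativity={r:.3f}")
```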
Econophysics has already made a number of important empirical contributions to our understanding of the social and economic world. These fall mainly into the areas of finance and industrial economics, where in each case there is a large amount of reasonably well-defined data.
Size exclusion chromatography (SEC) with dual detection, i.e. employing a refractive index (RI), concentration-sensitive detector together with a multiangle light scattering (MALS) detector, which is sensitive to molecular size, has been applied to study the solution properties of poly(diallyldimethylammonium chloride) (PDDA) in water containing different electrolytes, namely NaCl, NaBr and LiCl, at 25 °C. The analysis of a single highly polydisperse sample is enough for obtaining calibration curves for molecular weight and radius of gyration and the scaling law coefficients. The effect of the ionic strength on the conformational properties of the polymer can also be analyzed, and unperturbed dimensions can be obtained by extrapolation of the values measured in a good solvent. The values of the characteristic ratio of the unperturbed dimensions thus obtained were 17, 11 and 17, respectively, for NaCl, NaBr and LiCl solutions. Viscosity and conductivity measurements support the results obtained by SEC. Moreover, the experimental results are in good agreement with theoretical calculations performed by combining molecular dynamics and Monte Carlo sampling procedures.
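The "scaling law coefficients" in SEC-MALS practice normally refer to the conformation plot (standard usage, stated as background):

```latex
R_g = Q\,M^{\nu},
```

where the exponent ν diagnoses chain conformation: ν ≈ 0.33 for compact spheres, roughly 0.5-0.6 for flexible coils in good solvents, approaching 1 for rods.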
Geology-based methods for Probabilistic Seismic Hazard Assessment (PSHA) have been developing in Italy. These methods require information on the geometric, kinematic and energetic parameters of the major seismogenic faults. In this paper, we define a model of 3D seismogenic sources in the central Apennines of Italy. Our approach is mainly structural-seismotectonic: we integrate surface geology data (traces of active faults, i.e. 2D features) with seismicity and subsurface geological-geophysical data (3D approach). A fundamental step is to fix constraints on the thickness of the seismogenic layer and the deep geometry of faults: we use constraints from the depth distribution of aftershock zones and background seismicity; we also use information on the structural style of the extensional deformation at crustal scale (mainly from seismic reflection data), as well as on the strength and behaviour (brittle versus plastic) of the crust from rheological profiling. Geological observations allow us to define a segmentation model consisting of major fault structures separated by first-order (kilometric-scale) structural-geometric complexities considered as likely barriers to the propagation of major earthquake ruptures. Once the 3D fault features and the segmentation model are defined, the next step is the computation of the maximum magnitude of the expected earthquake (M_max). We compare three different estimates of M_max: (1) from the association of past earthquakes to faults; (2) from the 3D fault geometry and (3) from the geometrical estimate 'corrected' by earthquake scaling laws. By integrating all the data, we define a model of seismogenic sources (seismogenic boxes) which can be directly used for regional-scale PSHA. Preliminary applications of PSHA indicate that the 3D approach may allow hazard scenarios more realistic than those previously proposed.
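Estimate (3) typically rests on empirical magnitude-area regressions; one widely used form for all slip types (Wells and Coppersmith, 1994; coefficients quoted from the literature, not from this paper) is

```latex
M = 4.07 + 0.98\,\log_{10} A,
```

with A the rupture area in km².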
'Big' history is the time between the Big Bang and contemporary technological life on Earth. The stretch of big history can be considered as a series of developments in systems that manage ever-greater levels of energy flow, or thermodynamic disequilibrium. Recent theory suggests that step-wise changes in the work accomplished by a system can be explained using steady-state non-equilibrium thermodynamics. Major transitions in big history can therefore be rigorously defined as transitions between non-equilibrium thermodynamic steady-states (or NESSTs). The time between NESSTs represents a historical period, while larger categories of time can be identified by empirically discovering breaks in the rate of change in processes underlying macrohistorical trends among qualities of NESSTs. Two levels of periodization can be identified through this procedure. First, there are two major eons: cosmological and terrestrial, which exhibit qualitatively different kinds of historical scaling laws with respect to NESST duration and the gaps between NESSTs: the first eon decelerating, the second accelerating. Accelerating rates of historical change are achieved during the Terrestrial Eon by the invention of information inheritance processes. Second, eras can also be defined within Earth history by differences in the scaling of energy flow improvement per NESST. This is because each era is based on a different kind of energy source: the material era depends on nuclear fusion, the biological era on metabolism, the cultural era on tools, and the technological era on machines. Periodizing big history allows historians to uncover the mechanisms which trigger the innovations and novel organisations that spur thermodynamic transitions, as well as the mechanisms which keep historical processes under control.
The disagreement between the weak dependence of the energy confinement time on normalised pressure, β, observed in dedicated scans, and the strongly negative dependence in the confinement scaling laws used for the design of next-step tokamaks and future reactors, remains an outstanding problem. As such, scans of β have been undertaken in single-null, low-triangularity (δ ≈ 0.2) ELMy H-mode plasmas in JET with the MarkIIGB-SRP divertor. The scans varied β by a factor of 2.8 (normalised β from 0.72 to 2.04) and covered a range of magnetic fields (1.5-2.3 T) and plasma currents …
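The tension can be stated in dimensionless form (background: the IPB98(y,2)-type scaling, expressed in dimensionless variables, is usually quoted as):

```latex
B\,\tau_E \;\propto\; \rho_*^{-2.7}\,\beta^{-0.9}\,\nu_*^{-0.01},
```

whereas dedicated β scans at fixed ρ* and ν*, of the kind reported here, tend to find a β exponent close to zero.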
Inductive Voltage Adder (IVA) accelerators were developed to provide high-current (100s of kA) power pulses at high voltage (up to 20 MV) using robust modular components. This architecture simultaneously resolves problems found in conventional pulsed and linear induction accelerators. A variety of high-brightness pulsed x-ray radiographic sources are needed, from sub-megavolt to 16-MeV endpoints, with greater source brightness (dose per unit spot size) than presently available. We are applying IVA systems to produce very intense (up to 75 TW/cm²) electron beams for these flash radiographic applications.
- by D. Droemer and +1
- Magnetic field, High Voltage, X Rays, Experimental Study
A better understanding of vacuum arcs is desirable in many of today's 'big science' projects including linear colliders, fusion devices, and satellite systems. For the Compact Linear Collider (CLIC) design, radio-frequency (RF) breakdowns occurring in accelerating cavities influence efficiency optimisation and cost reduction issues. Studying vacuum arcs both theoretically as well as experimentally under well-defined and reproducible direct-current (DC) conditions …
A heuristic model is given for anisotropic magnetohydrodynamic (MHD) turbulence in the presence of a uniform external magnetic field B_0 ê. The model is valid for both moderate and strong B_0 and is able to describe both the strong and weak wave turbulence regimes as well as the transition between them. The main ingredient of the model is the assumption of a constant ratio at all scales between the linear wave period and the nonlinear turnover timescale. Contrary to the model of critical balance introduced by Goldreich and Sridhar [P. Goldreich and S. Sridhar, ApJ 438, 763 (1995)], it is not assumed in addition that this ratio be equal to unity at all scales, which allows us to use the Iroshnikov-Kraichnan phenomenology. It is then possible to recover the widely observed anisotropic scaling law k_∥ ∝ k_⊥^{2/3} between parallel and perpendicular wavenumbers (with reference to B_0 ê) and to obtain the universal prediction 3α + 2β = 7 for the total energy spectrum …
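The anisotropy law can be recovered in two lines in the Goldreich-Sridhar version of the argument (quoted as background; the present model relaxes the assumption χ = 1): holding the time-scale ratio χ = τ_A/τ_NL = k_⊥u_⊥/(k_∥V_A) fixed at all scales and taking a Kolmogorov-type amplitude u_⊥ ∝ k_⊥^{-1/3} gives

```latex
k_\| V_A \;\propto\; k_\perp u_\perp \;\propto\; k_\perp^{2/3} \quad\Longrightarrow\quad k_\| \propto k_\perp^{2/3}.
```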
Ion-driven fast ignition (IFI) may have significant advantages over electron-driven FI due to the potentially large reduction in the amount of energy required for the ignition beam and the laser driver. Recent experiments at the Los Alamos National Laboratory's Trident facility employing novel Au flat-top cone targets have produced a fourfold increase in laser-energy to ion-energy efficiency, a 13-fold increase in the number of ions above 10 MeV, and an increase of a few times in the maximum ion energy compared to Au flat-foil targets. Compared to recently published scaling laws, these gains are even greater. If the efficiency scales with intensity in accordance with flat-foil scaling, then, with little modification, these targets can be used to generate the pulse of ions needed to ignite thermonuclear fusion in the fast ignitor scheme. A proton energy of at least 30 MeV was measured from the flat-top cone targets, and particle-in-cell (PIC) simulations show that the maximum cutoff energy may be as high as 40-45 MeV at a modest intensity of 1×10^19 W/cm² with 20 J in 600 fs. Simulations indicate that the observed energy and efficiency increase can be attributed to the cone target's ability to guide laser light into the neck to produce hot electrons and transport these electrons to the flat top of the cone, where they can be heated to much higher temperatures, creating a hotter, denser sheath. The PIC simulations also elucidate the critical parameters for obtaining superior proton acceleration, such as the dependence on laser contrast/plasma prefill, longitudinal and transverse laser pointing, and cone geometry. These novel cones have the potential to revolutionize inertial confinement fusion target design and fabrication via their ability to be mass produced. In addition, they could have an impact on the general physics community studying basic electron and radiation transport phenomena, or serve as better sources of particle beams to study equations of state and warm dense matter, for hadron therapy, as new radioisotope generators, or for compact proton radiography sources.
Allometric relations for tree growth modelling have been subject to research for decades, partly as empirical models and partly as process models such as the pipe model, hydraulic architecture, mechanical approaches or the fractal-like nature of plant architecture. Unlike empirical studies, process models aim at explaining the scaling within tree architecture as a function of biological, physical or mechanical factors and at modelling their effect on the functionality and growth of different parts of an individual tree. The goal of the underlying study is to link theoretical explanation to empirical approaches of tree biomass estimation by the example of Norway spruce (Picea abies [L.] Karst.). Crucially, this article tries to take allometry beyond the purely curve-fitting exercise common in the literature and derives implications for the use of allometric biomass functions.
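The empirical biomass functions referred to are most often fitted in log-log form (standard forestry practice, given as context):

```latex
W = a\,D^{\,b} \quad\Longleftrightarrow\quad \ln W = \ln a + b\,\ln D,
```

with W a biomass component, D the stem diameter at breast height and b the allometric exponent; process models such as the pipe model or WBE predict particular values of b that can be confronted with the fitted ones.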
The rate and extent of deforestation determine the timing and magnitude of disturbance to both terrestrial and aquatic ecosystems. Rapid change can lead to transient impacts on hydrology and biogeochemistry, while complete and permanent conversion to other land uses can lead to chronic changes. A large population of watershed boundaries (N = 4788) and a time series of Landsat TM imagery in the southwestern Amazon Basin showed that even small watersheds (2.5-15 km²) were deforested relatively slowly over 7-21 years. Less than 1% of all small watersheds were more than 50% cleared in a single year, and clearing rates averaged 5.6%/yr during active clearing. A large proportion (26%) of the small watersheds had a cumulative deforestation extent of more than 75%. The cumulative deforestation extent was highly spatially autocorrelated up to a 100-150 km lag due to the geometry of the agricultural zone and road network, so watersheds as large as ∼40 000 km² were more than 50% deforested by 1999. The rate of deforestation had minimal spatial autocorrelation beyond a lag of ∼30 km, and the mean rate decreased rapidly with increasing area. Approximately 85% of the cleared area remained in pasture, so deforestation in watersheds of Rondônia was a relatively slow, permanent, and complete transition to pasture, rather than a rapid, transient, and partial cutting with regrowth. Given the observed land-cover transitions, the regional stream biogeochemical response is likely to resemble the chronic changes observed in streams draining established pastures, rather than a temporary pulse from slash-and-burn.