L-F Pau | Copenhagen Business School, CBS
Papers by L-F Pau
Journal of Economic Dynamics and Control, 1980
SSRN Electronic Journal, 2010
The paper describes an approximate model, built from real sub-system performance data, of a public wireless network (3G/LTE), with a view to minimum net energy consumption or minimum emissions per time unit and per user. This approach is justified by the need for the integrated view required for “green” optimizations, while taking service demand and operations into account. While subsystems with lower native energy footprints are being migrated into public networks, the many adaptation mechanisms at sub-system, protocol and management levels make system complexity too high to design major comprehensive “green” trade-offs. However, by focusing on the incremental effects of a new network user, the approximate model allows marginal effects to be estimated with good accuracy. This capability allows personalized energy- and emissions-reducing tariffs to be offered to end users, with inherent advantages to operators, energy suppliers and users alike. One key advantage is the possibility to reduce waste capacity, and thus energy consumption in the network, by allowing users to specify just the service capacity and demands they have. Computational implementations exist at different levels: static non-linear, non-linear with average traffic intensities, randomized over users by Monte-Carlo simulation, and determination of energy/emissions levels by value at risk. From an engineering point of view, the incremental model allows sub-system characteristics to be tuned jointly, especially transceivers, transmission and storage. From a configuration point of view, the model allows determining which nodes in the network benefit most from back-up and renewable power sources. From a business perspective, the model allows determining trade-offs between personalized bundle characteristics and the energy cost share in the marginal operating expense. Detailed sub-system model and design improvements are carried out on a continuous basis in collaboration with industry.
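A minimal sketch of the incremental, Monte-Carlo flavor of such a model: it estimates the marginal energy of adding one user to a cell by randomizing per-user traffic demands and reporting both the mean and a value-at-risk quantile. The power curve, traffic distribution, capacity figure and all numbers are assumptions for illustration, not the paper's sub-system data.

```python
import numpy as np

def site_power_w(load):
    """Hypothetical base-station power curve (W) vs. normalized load in [0, 1]:
    a fixed idle term plus a mildly non-linear traffic-dependent term."""
    p_idle, p_traffic = 800.0, 400.0           # assumed figures, not from the paper
    return p_idle + p_traffic * load**1.2

def marginal_energy_per_user(n_users, rng, n_draws=10_000,
                             mean_demand_mbps=2.0, cell_capacity_mbps=150.0):
    """Monte-Carlo estimate of the incremental energy (Wh over one hour) caused
    by adding one user, with per-user traffic demand drawn at random."""
    demands = rng.exponential(mean_demand_mbps, size=(n_draws, n_users + 1))
    load_without = np.clip(demands[:, :-1].sum(axis=1) / cell_capacity_mbps, 0, 1)
    load_with = np.clip(demands.sum(axis=1) / cell_capacity_mbps, 0, 1)
    # Energy over one hour equals average power in W times 1 h.
    return site_power_w(load_with) - site_power_w(load_without)

rng = np.random.default_rng(0)
delta_e = marginal_energy_per_user(n_users=40, rng=rng)
print(f"mean marginal energy: {delta_e.mean():.1f} Wh/h")
print(f"95% value-at-risk:    {np.quantile(delta_e, 0.95):.1f} Wh/h")
```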
ArXiv, 2016
While several paths have emerged in microelectronics and computing as follow-ons to Turing architectures, and have been implemented using essentially silicon circuits, very little beyond-Moore research has considered (1) using biological processes instead of sequential instructions, and (2) implementing these processes by exploiting particle-physics interactions. This combination enables native spatial-temporal integration and correlation, but also powerful interference filtering, gating, splitting and more. These biological functions, and their realization by quantum and charge-carrier interactions, allow a novel computing architecture to be proposed, with interfaces, information storage, and programmability. The paper presents the underlying biological processes, the particle-physics phenomena which are exploited, and the proposed architecture, as well as an algebraic design formalism.
Pattern Recognition, 2019
Internet: www.erim.eur.nl. Bibliographic data and classifications of all ERIM reports are also available on the ERIM website: www.erim.eur.nl
Sensor fusion [42–44] consists of combining several physical or measurement principles in order to achieve lower false-alarm and non-detection rates in testing. The tremendous importance of this new approach in electronics stems from the fact that, so far, visual inspection and electrical testing have been kept separate, hence the term “integrated testing” (Figure 46). Two cases will be discussed: precap silicon IC and GaAs IC testing.
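A small numerical sketch of why fusing two test modalities can trade off false alarms against non-detections. It assumes the visual and electrical tests err independently, which the source does not state, and the single-test rates are placeholders rather than measured values.

```python
def fuse_or(pf1, pm1, pf2, pm2):
    """Declare a defect if either test flags it (independence assumed):
    non-detections drop, false alarms grow."""
    return 1 - (1 - pf1) * (1 - pf2), pm1 * pm2

def fuse_and(pf1, pm1, pf2, pm2):
    """Declare a defect only if both tests flag it:
    false alarms drop, non-detections grow."""
    return pf1 * pf2, 1 - (1 - pm1) * (1 - pm2)

# Illustrative single-test (false alarm, non-detection) rates, not measured data:
visual = (0.05, 0.10)
electrical = (0.02, 0.15)

for name, rule in (("OR", fuse_or), ("AND", fuse_and)):
    pf, pm = rule(*visual, *electrical)
    print(f"{name:>3} fusion: false alarm {pf:.4f}, non-detection {pm:.4f}")
```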
Pattern Recognition and Signal Processing, 1978
Finite learning sample size problems in pattern recognition. L.F. Pau, 1978. Major problems and solutions relating finite learning sample constraints to the design of a statistical pattern classifier are surveyed. These problems are ...
IFAC Proceedings Volumes, 1986
Geoexploration, 1984
This paper studies in theory, and gives a solution to, the following concerns: (1) obtaining alternative classification decisions, ranked in decreasing order of class-membership probabilities; (2) an imperfect teacher at the learning stage, or the effect of labelling errors due to unsupervised learning; (3) a non-cooperative teacher manipulating the a-priori class probabilities; (4) unknown a-priori class probabilities. This is carried out by considering the interaction between the recognition system and the teacher within a game-theoretical framework. Both players ultimately select “mixed strategies”, which are probability distributions over the set of N pattern classes. Within the context of signal classification, these N classes are N alternative signal classes (e.g., sources) modelled by autoregressive processes. This approach, and concerns (1)–(4), are especially relevant for the performance enhancement of a number of acoustic signal classification systems (e.g., seismic exploration, intrusion detection, sonar).
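A sketch of how a recognizer's minimax mixed strategy over N classes could be computed when the teacher is treated as an adversary in a zero-sum game. This is the standard linear-programming formulation of matrix games, not necessarily the paper's exact construction, and the payoff matrix below is a made-up 3-class example.

```python
import numpy as np
from scipy.optimize import linprog

def minimax_mixed_strategy(payoff):
    """Row player's optimal mixed strategy for a zero-sum game with payoff
    matrix `payoff` (row player maximizes). Solved as a linear program:
    maximize v subject to payoff^T x >= v, sum(x) = 1, x >= 0."""
    n_rows, n_cols = payoff.shape
    # Decision variables: x_1..x_n (strategy weights) and v (game value).
    c = np.zeros(n_rows + 1)
    c[-1] = -1.0                                          # minimize -v
    a_ub = np.hstack([-payoff.T, np.ones((n_cols, 1))])   # v - (payoff^T x)_j <= 0
    b_ub = np.zeros(n_cols)
    a_eq = np.hstack([np.ones((1, n_rows)), np.zeros((1, 1))])
    b_eq = np.ones(1)
    bounds = [(0, 1)] * n_rows + [(None, None)]
    res = linprog(c, A_ub=a_ub, b_ub=b_ub, A_eq=a_eq, b_eq=b_eq, bounds=bounds)
    return res.x[:n_rows], res.x[-1]

# Illustrative 3-class payoff (rows: recognizer decisions, columns: teacher's
# prior manipulations); the entries are hypothetical recognition utilities.
payoff = np.array([[0.9, 0.2, 0.4],
                   [0.3, 0.8, 0.5],
                   [0.4, 0.3, 0.7]])
strategy, value = minimax_mixed_strategy(payoff)
print("mixed strategy:", np.round(strategy, 3), " game value:", round(value, 3))
```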
IFAC Proceedings Volumes, 1981
IFAC Proceedings Volumes, 1976
Real-Time Object Measurement and Classification, 1988
This paper gives the implementation architecture for a multilevel knowledge representation scheme aimed at sensor fusion of 3-dimensional scenes. PROLOG procedures are given for the extraction of edge, vertex, and region attributes of the corresponding software objects from each sensor. Sensor fusion is carried out by a truth-maintenance procedure that classifies all objects into non-contradicting scene contexts. Context filtering gives the attributes of the sensor-fusion region objects, which are themselves used in scripts for later scene evaluation. Implementation considerations are discussed in relation to an object-oriented PROLOG environment. This architecture is being used in target classification, vision, mapping, threat assessment and change of activity [12,13].
2006 IEEE/IFIP Network Operations and Management Symposium NOMS 2006, 2006
In this paper we consider an M/G/1 queue-based analytical model. The end-to-end performance of a tandem wireless router network with batch arrivals is optimized. The mean transmission delay (or 'response time') is minimized subject to an upper limit on the rate of losses and to finite-capacity queueing and recovery buffers. The optimal ratio of arrival-buffer size to recovery-buffer size is determined; this ratio is a critical quantity affecting both loss rate and transmission time. The impact of the retransmission probability is also investigated: too high a value leads to congestion and hence higher response times; too low a value means packets are lost forever, incurring a different penalty.
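As a baseline for this kind of model, the classical single-node M/G/1 mean response time is given by the Pollaczek-Khinchine formula; the paper's model additionally handles batch arrivals, finite arrival/recovery buffers and retransmissions. The sketch below uses illustrative traffic figures, not values from the paper.

```python
def mg1_mean_response_time(arrival_rate, mean_service, second_moment_service):
    """Pollaczek-Khinchine mean response time for an M/G/1 queue:
    T = E[S] + lambda * E[S^2] / (2 * (1 - rho)), with rho = lambda * E[S].
    Infinite-buffer, single-node baseline only."""
    rho = arrival_rate * mean_service
    if rho >= 1:
        raise ValueError("queue is unstable: utilization rho >= 1")
    waiting = arrival_rate * second_moment_service / (2 * (1 - rho))
    return mean_service + waiting

# Illustrative numbers: 500 packets/s, exponential service with mean 1.5 ms,
# for which E[S^2] = 2 * E[S]^2; this yields a 6 ms mean response time.
mean_s = 1.5e-3
print(f"{mg1_mean_response_time(500.0, mean_s, 2 * mean_s**2) * 1e3:.3f} ms")
```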
Computer Vision for Electronics Manufacturing, 1990
Efficient image processing relies to a great extent upon the choice of data structures and knowledge representations that minimize data transfer overheads and pointer or attribute search lengths.
Handbook of Statistics, 1988
This chapter discusses the applications of pattern recognition in failure diagnosis and quality control, and defines technical diagnostics as the field dealing with all methods, processes, devices and systems whereby one can detect, localize, analyse and monitor the failure modes of a system, that is, its defects and degradations. Failure diagnosis has evolved from the use of stand-alone tools (for example, calipers) to heuristic procedures later codified into maintenance manuals. At a later stage, automatic test systems and non-destructive testing instruments, based on specific test sequences and sensors, have assisted the diagnosis. Examples discussed are: rotating-machine vibration monitoring, signature analysis, optical flaw detection, ultrasonics, ferrography, wear sensors, process parameters, and thermography. There have been implementations of, and research on, evolved diagnostic processes, with heavier emphasis on sensor integration, signal/image processing, software and communications. Research is carried out on automated failure diagnosis, and on expert systems to accumulate and structure failure symptoms and diagnostic strategies. The chapter discusses the basic concepts in technical diagnostics, some of the measurement problems, and the basic diagnostic strategies.
While you refuel for gas, why not refuel for information or upload vehicle data, using a cheap wireless technology such as WiFi? This paper analyzes in extensive detail the user segmentation by vehicle usage, the service offering, and full business models for WiFi hot-spot services delivered to and from vehicles (private, professional, public) around gas stations. Also analyzed are the parties that play a role in such services: authorization, provisioning and delivery, with all dependencies modelled by attributed digraphs. Account is taken of WiFi base station technical capabilities and costs. Five-year financial models (CAPEX, OPEX) and data pertain to two possible service suppliers: multi-service oil companies, and mobile service operators (or MVNOs). Model optimization on the return on investment (R.O.I.) is carried out for different deployment scenarios, geographical coverage assumptions, and tariff structures. Comparison is also made with public GPRS and 3G data services ...
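A minimal sketch of the kind of five-year CAPEX/OPEX and return-on-investment calculation such deployment scenarios call for. All cash-flow figures and the discount rate are hypothetical placeholders, not the paper's data or results.

```python
def five_year_roi(capex, opex_per_year, revenue_per_year, discount_rate=0.10):
    """Net present value and simple ROI of a single hot-spot site over a
    five-year horizon; all inputs are caller-supplied assumptions."""
    npv = -capex
    for year in range(1, 6):
        npv += (revenue_per_year - opex_per_year) / (1 + discount_rate) ** year
    return npv, npv / capex

# Hypothetical single-site scenario: 12 k EUR CAPEX (access point, install),
# 3 k EUR/year OPEX (backhaul, maintenance), 7 k EUR/year service revenue.
npv, roi = five_year_roi(capex=12_000, opex_per_year=3_000, revenue_per_year=7_000)
print(f"NPV over 5 years: {npv:,.0f} EUR, ROI: {roi:.0%}")
```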
Pattern Recognition, 1977
International Journal of Business Data Communications and Networking, 2013
As 3G, HSDPA and now LTE wireless networks become ever more pervasive, especially for high-data-rate wireless and Internet traffic (>100 Mbps), increasing focus is placed on ways to offload access by re-utilizing WiFi access points available indoors (offices, homes), or by installing such access points outdoors in or alongside high-demand-density public areas (hot spots, public venues, road traffic lanes, etc.). In view of the relatively much higher WiFi access node power consumption and the much smaller coverage compatible with interference reduction, WiFi off-load access may have a significant negative impact on energy consumption and emissions per user. The paper builds on earlier extensive work on modeling 3G or LTE wireless infrastructure energy consumption on an incremental basis per new user. It addresses the question of the best mix between LTE cellular base stations and WiFi off-load access nodes from the energy/emissions perspective. Detailed sub-system model...
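A toy per-user comparison illustrating why off-load can worsen energy per user when a WiFi node's power is shared among few users, whereas a macro cell's power is shared among many. The node powers and user counts are assumptions for illustration only, not the paper's sub-system measurements.

```python
def energy_per_user_wh(node_power_w, users_served):
    """Average energy drawn per served user over one hour (Wh), assuming the
    node power is roughly load-independent and shared by its active users."""
    return node_power_w / users_served

# Assumed figures for illustration only:
lte_macro = energy_per_user_wh(node_power_w=1200.0, users_served=200)
wifi_ap = energy_per_user_wh(node_power_w=12.0, users_served=1.5)
print(f"LTE macro cell:   {lte_macro:.1f} Wh per user-hour")
print(f"WiFi off-load AP: {wifi_ap:.1f} Wh per user-hour")
```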
ERIM Report Series reference number: ERS-2008-011-LIS. Publication: March 2008. Number of pages: 8. Persistent paper URL: http://hdl.handle.net/1765/11762. Email address corresponding author: lpau@rsm.nl. Address: Erasmus Research Institute of Management (ERIM), RSM ...