Waste minimisation Research Papers - Academia.edu

We study the problem of computing a planar curve, restricted to lie between two given polygonal chains, such that the integral of the square of the arc-length derivative of curvature along the curve is minimized. We introduce the minimum variation B-spline problem, a linearly constrained optimization problem over curves defined by B-spline functions only. An empirical investigation indicates that this problem has a unique solution among all uniform quartic B-spline functions. Furthermore, we prove that, for any B-spline function, the convexity properties of the problem are preserved under scaling and translation of the knot sequence defining the B-spline.

This paper deals with the problem of noise cancellation of speech signals in an acoustic environment. Different adaptive filter algorithms are generally employed for this task, but many of them lack flexibility in controlling the convergence rate, the range of variation of the filter coefficients, and the consistency of the error within a tolerance limit. In order to achieve these desirable attributes as well as to cancel noise effectively, unlike conventional approaches, we formulate the task of noise cancellation as a coefficient optimization problem, for which we introduce and exploit the particle swarm optimization (PSO) algorithm. Here, the PSO is designed to perform the error minimization in the frequency domain. The outcomes of extensive experimentation show that the proposed PSO-based acoustic noise cancellation method provides high performance in terms of SNR improvement, with a satisfactory convergence rate, in comparison to some state-of-the-art methods.
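The coefficient-optimization idea can be illustrated with a minimal PSO sketch. The swarm parameters, bounds, and the simple quadratic test objective below are illustrative assumptions, not the paper's exact setup (which evaluates the error in the frequency domain):

```python
import random

def pso_minimize(cost, dim, n_particles=20, iters=100,
                 w=0.7, c1=1.5, c2=1.5, lo=-1.0, hi=1.0, seed=0):
    """Minimal particle swarm optimizer (a generic sketch, not the
    paper's exact formulation)."""
    rng = random.Random(seed)
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                      # personal bests
    pbest_cost = [cost(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_cost[i])
    gbest, gbest_cost = pbest[g][:], pbest_cost[g]   # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            c = cost(pos[i])
            if c < pbest_cost[i]:
                pbest[i], pbest_cost[i] = pos[i][:], c
                if c < gbest_cost:
                    gbest, gbest_cost = pos[i][:], c
    return gbest, gbest_cost
```

For the noise-cancellation use case, `cost` would measure the residual error of the adaptive filter's coefficient vector; a quadratic objective such as `lambda c: sum(x * x for x in c)` is enough to verify the optimizer itself.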

Camless internal combustion engines offer improvements over traditional engines in terms of torque performance, reduction of emissions, reduction of pumping losses, and fuel economy. Theoretically, electromagnetic valve actuators offer the highest potential for improving efficiency due to their control flexibility. For real applications, however, the valve actuators developed so far suffer from high power consumption and other control problems. One key point is the design of the reference trajectory to be tracked by the closed-loop controller. In this brief, a design technique aimed at minimizing power consumption is proposed. A constrained optimization problem is formulated and its solution is approximated by exploiting local flatness and physical properties of the system. The performance of the designed trajectory is validated via an industrial simulator of the valve actuator.

Diffusion of mass in a solid cylinder with concentration-dependent diffusivity (or temperature-dependent thermal conductivity in the case of heat diffusion) does not admit of an analytical solution except in special cases. The 'shrinking core model' has been used to develop an approximate analytical solution in certain circumstances. In the present work, this model, generally used to describe heterogeneous solid-fluid reactions, is applied to theoretically analyze the adsorption-diffusion phenomena of methylene blue dye in a glass fiber. Theoretical equations have been derived for the case of diffusivity as an exponential function of concentration. The diffusivity parameters are evaluated by global minimization of the error between the experimental and the theoretical concentration history. Other forms of diffusivity, namely constant diffusivity and diffusivity varying linearly with concentration, are found to involve larger errors. A parametric sensitivity analysis of the error has been done. The shrinking core model could satisfactorily interpret the experimental dye concentration profile in the substrate.

In planning a combined heat and power (CHP)-based micro-grid, its distributed energy resources (DER) capacity is to be selected and deployed in such a way that the micro-grid becomes economically self-sufficient, catering to all the loads of the system without the utility's participation. Economic deployment of DERs means selecting optimal locations, optimal sizes, and optimal technologies. Optimal locations and sizes, which are independent of the CHP-based DER types, are selected here by a loss sensitivity index (LSI) and by loss minimization using the particle swarm optimization (PSO) method, respectively. In a micro-grid, both fuel costs and NO x emissions depend mainly on the types of DERs used. The main focus of the present paper is therefore to evaluate how different optimal output sets of the DER-mix, operating within their respective capacity limits, could share an electrical tracking demand economically among micro-turbines and diesel generators of various sizes, satisfying different heat demands, on the basis of multi-objective optimization compromising between fuel cost and emission in a 4-DER 14-bus radial micro-grid. Optimization is done using the differential evolution (DE) technique under a real power demand equality constraint, a heat balance inequality constraint, and DER capacity limit constraints. DE results are compared with PSO.
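The DE-based dispatch trade-off can be sketched with a minimal DE/rand/1/bin loop. The two-unit quadratic cost, the penalty weight enforcing the demand-equality constraint, and the DE parameters below are illustrative assumptions, not the paper's 4-DER data:

```python
import random

def differential_evolution(cost, bounds, pop_size=20, f=0.6, cr=0.9,
                           iters=150, seed=1):
    """Minimal DE/rand/1/bin sketch. Constraints (e.g. demand equality)
    are assumed to enter `cost` as penalty terms."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    fit = [cost(x) for x in pop]
    for _ in range(iters):
        for i in range(pop_size):
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            jr = rng.randrange(dim)               # forced crossover index
            trial = [
                min(bounds[d][1], max(bounds[d][0],
                    pop[a][d] + f * (pop[b][d] - pop[c][d])))
                if (rng.random() < cr or d == jr) else pop[i][d]
                for d in range(dim)
            ]
            ft = cost(trial)
            if ft <= fit[i]:                      # greedy selection
                pop[i], fit[i] = trial, ft
    k = min(range(pop_size), key=lambda i: fit[i])
    return pop[k], fit[k]

def dispatch_cost(x):
    """Illustrative fuel-plus-emission cost for two DERs serving a
    demand of 10, with the equality constraint as a penalty."""
    return 0.1 * x[0] ** 2 + 0.12 * x[1] ** 2 + 100.0 * abs(x[0] + x[1] - 10.0)
```

A multi-objective treatment would replace the single `dispatch_cost` with a weighted or Pareto-based comparison of the fuel and emission terms.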

The ability to improve a disk file's access-time performance is severely limited by an inverse fourth-power increase in actuator-power dissipation. Two areas are germane to the actuator-power problem: the design of the actuator coil and the design of the control trajectory. Design considerations for optimal solutions in both of these areas are presented.

This work is aimed at the flatness control of a crane, detailing the mechanisms and approaches adopted in order to control this system and to solve problems encountered during its functioning. The control objective is the sway-free transportation of the crane's load, taking the commands of the crane operator into account. Based on the mathematical model, linearizing and stabilizing control laws for the slewing and luffing motions are derived using the input/output linearization approach. The method allows for transportation of the payload to a selected point and ensures minimisation of its swings when the motion is finished. To achieve this goal, a mathematical model of the control system for the displacement of the payload has been constructed. A theory of control which ensures a swing-free stop of the payload is presented. Selected results of numerical simulations are shown. At the end of this work, a comparative study between the real motion and the desired one is presented.

We advocate the use of point sets to represent shapes. We provide a definition of a smooth manifold surface from a set of points close to the original surface. The definition is based on local maps from differential geometry, which are approximated by the method of moving least squares (MLS). The computation of points on the surface is local, which results in an out-of-core technique that can handle any point set. We show that the approximation error is bounded and present tools to increase or decrease the density of the points, thus allowing an adjustment of the spacing among the points to control the error. To display the point set surface, we introduce a novel point rendering technique. The idea is to evaluate the local maps according to the image resolution. This results in high quality shading effects and smooth silhouettes at interactive frame rates.

The problem of specifying the two free parameters that arise in spatial Pythagorean-hodograph (PH) quintic interpolants to given first-order Hermite data is addressed. Conditions on the data that identify when the "ordinary" cubic interpolant becomes a PH curve are formulated, since it is desired that the selection procedure should reproduce such curves whenever possible. Moreover, it is shown that the arc length of the interpolants depends on only one of the parameters, and that four (general) helical PH quintic interpolants always exist, corresponding to extrema of the arc length. Motivated by the desire to improve the fairness of interpolants to general data at reasonable computational cost, three selection criteria are proposed. The first criterion is based on minimizing a bivariate function that measures how "close" the PH quintic interpolants are to a PH cubic. For the second criterion, one of the parameters is fixed by first selecting interpolants of extremal arc length, and the other parameter is then determined by minimizing the distance measure of the first method, considered as a univariate function. The third method employs a heuristic but efficient procedure to select one parameter, suggested by the circumstances in which the "ordinary" cubic interpolant is a PH curve, and the other parameter is then determined as in the second method. After presenting the theory underlying these three methods, a comparison of empirical results from their implementation is described, and recommendations for their use in practical design applications are made.

Abstract: Backtracking search is frequently applied to solve a constraint-based search problem, but it often suffers from exponential growth of computing time. We present an alternative to backtracking search: local search with conflict minimization. We have applied this ...

For a given graph G over n vertices, let OPT_G denote the size of an optimal solution in G of a particular minimization problem (e.g., the size of a minimum vertex cover). A randomized algorithm will be called an α-approximation algorithm with an additive error for this minimization problem if, for any given additive error parameter ε > 0, it computes a value OPT such that, with probability at least 2/3, it holds that OPT_G ≤ OPT ≤ α · OPT_G + εn.
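As a concrete instance of this definition, the classical maximal-matching heuristic for vertex cover satisfies the guarantee with α = 2 and ε = 0 (it is deterministic, so the 2/3 probability holds trivially). The graph and names below are illustrative:

```python
def matching_cover(edges):
    """2-approximate vertex cover: take both endpoints of a greedily
    built maximal matching. Every edge is covered, and any cover must
    pick at least one endpoint per matched edge, so
    OPT_G <= |cover| <= 2 * OPT_G."""
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.update((u, v))
    return cover

def meets_guarantee(opt_hat, opt_g, alpha, eps, n):
    """The approximation guarantee from the text:
    OPT_G <= OPT_hat <= alpha * OPT_G + eps * n."""
    return opt_g <= opt_hat <= alpha * opt_g + eps * n
```

On the path 0-1-2-3 the minimum cover is {1, 2} (size 2), while the matching heuristic returns all four vertices, exactly meeting the α = 2 bound.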

There have been many debates about the feasibility of providing guaranteed Quality of Service (QoS) when network traffic travels beyond the enterprise domain and into the vast unknown of the Internet. Many mechanisms have been proposed to bring QoS to TCP/IP and the Internet (RSVP, DiffServ, 802.1p). However, until these techniques and the equipment to support them become ubiquitous, most enterprises will rely on local prioritization of the traffic to obtain the best performance for mission-critical and time-sensitive applications. This work explores prioritizing critical TCP/IP traffic using a multi-queue buffer management strategy that becomes biased against random low-priority flows and remains biased while congestion exists in the network. This biasing implies a degree of unfairness but proves to be more advantageous to the overall throughput of the network than strategies that attempt to be fair. Only two classes of service are considered; TCP connections are assigned to these classes and mapped to two underlying queues with round-robin scheduling and shared memory. In addition to improving the throughput, cell losses are minimized for the class of service (queue) with the higher priority.
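The biased admission into two queues over a shared buffer can be sketched as follows. The capacity, the packet representation, and the drop-from-tail push-out choice are illustrative assumptions, not the paper's exact policy:

```python
from collections import deque

def enqueue(pkt, high_prio, high, low, capacity):
    """Admit a packet into a shared buffer backing two queues.
    When the buffer is full, a high-priority arrival pushes out a
    queued low-priority packet; a low-priority arrival is dropped.
    Returns True if the packet was admitted."""
    if len(high) + len(low) < capacity:
        (high if high_prio else low).append(pkt)
        return True
    if high_prio and low:
        low.pop()              # push out a low-priority packet (tail)
        high.append(pkt)
        return True
    return False               # full and nothing to displace: drop
```

A round-robin scheduler would then dequeue alternately from `high` and `low`; the bias here lives entirely in the admission path.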

Geometric hashing is a model-based recognition technique based on matching transformation-invariant object representations stored in a hash table. In the last decade, a number of enhancements have been suggested to the basic method, improving its performance and reliability. One of the important enhancements is rehashing, which improves computational performance by dealing with the problem of non-uniform occupancy of hash bins. However, the proposed rehashing schemes aim to redistribute the hash entries uniformly, which is not appropriate for the Bayesian approach, another enhancement that optimizes the recognition rate in the presence of noise. In this paper, we derive the rehashing for the Bayesian voting scheme, thus improving computational performance by minimizing the hash table size and the number of bins accessed, while maintaining an optimal recognition rate.

Camera calibration has been studied extensively in computer vision and photogrammetry, and the techniques proposed in the literature include those using 3D apparatus (two or three planes orthogonal to each other, or a plane undergoing a pure translation, etc.), 2D objects (planar patterns undergoing unknown motions), and 0D features (self-calibration using unknown scene points). This paper proposes a new calibration technique using 1D objects (points aligned on a line), thus filling the missing dimension in calibration. In particular, we show that camera calibration is not possible with free-moving 1D objects, but can be solved if one point is fixed. A closed-form solution is developed if six or more observations of such a 1D object are made. For higher accuracy, a nonlinear technique based on the maximum likelihood criterion is then used to refine the estimate. Singularities have also been studied. Besides the theoretical aspect, the proposed technique is also important in practice, especially when calibrating multiple cameras mounted apart from each other, where the calibration objects are required to be visible simultaneously.

Software conflicts arising because of conflicting changes are a regular occurrence and delay projects. The main precept of workspace awareness tools has been to identify potential conflicts early, while changes are still small and easier to resolve. However, in this approach conflicts still occur and require developer time and effort to resolve. We present a novel conflict minimization technique that proactively identifies potential conflicts, encodes them as constraints, and solves the constraint space to recommend a set of conflict-minimal development paths for the team. Here we present a study of four open source projects to characterize the distribution of conflicts and their resolution efforts. We then explain our conflict minimization technique and the design and implementation of this technique in our prototype, Cassandra. We show that Cassandra would have successfully avoided a majority of conflicts in the four open source test subjects. We demonstrate the efficiency of our approach by applying the technique to a simulated set of scenarios with higher than normal incidence of conflicts.

This paper presents a methodology and algorithm for generating diffeomorphisms of the sphere onto itself, given the displacements of a finite set of template landmarks. Deformation maps are constructed by integration of velocity fields that minimize a quadratic smoothness energy under the specified landmark constraints. We present additional formulations of this problem which incorporate a given error variance in the positions of the landmarks. Finally, some experimental results are presented. This work has application in brain mapping, where surface data is typically mapped to the sphere as a common coordinate system.

There is growing interest in managing water demand in the UK. A series of waste minimization clubs have been set up within the country, and this paper identifies the effectiveness of these clubs in reducing the demand for water within industry. Membership of these clubs is voluntary, and the only incentive for industry to reduce water consumption, and consequently the production of effluent, is the almost immediate financial saving made by the company, often achieved by accounting for the water consumption and loss within the site from the point of input from the water supplier to output in the form of effluent. On average, companies are able to reduce water consumption by up to 30 percent. If the entire industrial sector within the UK were to achieve this degree of savings, it is possible that approximately 1300 Ml/d could be saved.

A method for computing the Disjoint-Sum-Of-Products (DSOP) form of Boolean functions is described. The algorithm exploits the property of the most binate variable in a set of cubes to compute a DSOP form. The technique uses a minimized Sum-Of-Products (SOP) cube list as input. Experimental results comparing the size of the DSOP cube list produced by this algorithm and those produced by other methods demonstrate the efficiency of this technique and show that superior results occur in many cases for a set of benchmark functions.

This paper presents particle swarm optimization (PSO) as a tool for loss reduction studies. This issue can be formulated as a nonlinear optimization problem. The proposed application consists of a developed optimal power flow based on a loss minimization function, obtained by expanding the original PSO. The study is carried out in two steps. First, using the tangent vector technique, the critical area of the power system is identified from the point of view of voltage instability. Second, once this area is identified, the PSO technique calculates the amount of shunt reactive power compensation to be placed at each bus. The proposed approach has been examined and tested with promising numerical results using the IEEE 118-bus system.

In this paper, a new block adaptive decision feedback equalizer (DFE) implemented in the frequency domain is derived. The new algorithm is suitable for applications requiring long adaptive equalizers, as is the case in several high-speed wireless communication systems. The inherent "causality" problem appearing in the block adaptive formulation of the DFE equations is overcome by using tentative decisions in place of the unknown ones within each block. These tentative decisions are subsequently improved by using an efficient iterative procedure, which finally converges to the optimum decisions in a few iterations. This procedure is properly initialized by applying a minimization criterion that utilizes all the available information. The whole algorithm, including the iterative procedure, is implemented in the frequency domain and exhibits a considerable reduction in computational complexity, as compared with the conventional DFE, offering, at the same time, a noticeable increase in convergence speed. Additionally, the level of the steady-state MSE, which is achieved by the new algorithm, is practically insensitive to the block length.

This paper evaluates the four leading techniques proposed in the literature for the construction of prediction intervals (PIs) for neural network point forecasts. The delta, Bayesian, bootstrap, and mean-variance estimation (MVE) methods are reviewed and their performance for generating high-quality PIs is compared. PI-based measures are proposed and applied for the objective and quantitative assessment of each method's performance. A selection of 12 synthetic and real-world case studies is used to examine each method's performance for PI construction. The comparison is performed on the basis of the quality of the generated PIs, the repeatability of the results, the computational requirements, and the variability of the PIs with regard to the data uncertainty. The results obtained in this paper indicate that: 1) the delta and Bayesian methods are the best in terms of quality and repeatability, and 2) the MVE and bootstrap methods are the best in terms of low computational load and low width variability of the PIs. This paper also introduces the concept of combinations of PIs, and proposes a new method for generating combined PIs from the traditional PIs. A genetic algorithm is applied for adjusting the combiner parameters through minimization of a PI-based cost function subject to two sets of restrictions. It is shown that the quality of the PIs produced by the combiners is dramatically better than the quality of the PIs obtained from each individual method.
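A PI-based cost of the kind minimized when tuning a combiner can be sketched as a coverage-width trade-off. The exact functional form, the nominal level `mu`, and the penalty constant `eta` below are illustrative assumptions, not the paper's definition:

```python
import math

def pi_cost(lowers, uppers, targets, mu=0.9, eta=10.0):
    """Coverage-width cost: normalized mean PI width, inflated
    exponentially when empirical coverage (PICP) falls below the
    nominal level mu (a CWC-style sketch)."""
    n = len(targets)
    picp = sum(l <= t <= u for l, u, t in zip(lowers, uppers, targets)) / n
    rng = (max(uppers) - min(lowers)) or 1.0      # normalizing range
    nmpiw = sum(u - l for l, u in zip(lowers, uppers)) / (n * rng)
    penalty = math.exp(-eta * (picp - mu)) if picp < mu else 0.0
    return nmpiw * (1.0 + penalty)
```

A genetic algorithm adjusting combiner weights would simply use a function like this as its fitness: intervals that miss the targets are heavily penalized, and among well-covering intervals the narrower ones win.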

Abstract: Herein, we present a variational model devoted to image classification coupled with an edge-preserving regularization process. The discrete nature of classification (i.e., attributing a label to each pixel) has led to the development of many probabilistic image classification models, but rarely to variational ones. In the last decade, the variational approach has proven its efficiency in the field of edge-preserving restoration. In this paper, we add a classification capability which helps produce images composed of homogeneous regions with regularized boundaries, a region being defined as a set of pixels belonging to the same class. The soundness of our model is based on work on phase transition theory in mechanics. The proposed algorithm is fast, easy to implement, and efficient. We compare our results on both synthetic and satellite images with those obtained by a stochastic model using a Potts regularization.

In this paper we study a 1.5-dimensional cutting stock and assortment problem which includes determination of the number of different widths of roll stocks to be maintained as inventory and determination of how these roll stocks should be cut by choosing the optimal cutting pattern combinations. We propose a new multi-objective mixed integer linear programming (MILP) model in the form

In this paper, we address parallel machine scheduling problems with the objective of minimizing the maximum weighted absolute lateness. Memetic algorithms are applied to solve this problem. The proposed method is compared with genetic algorithms and heuristics on randomly generated test problems. The results show that the memetic algorithm outperforms the others.
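The local-search refinement that distinguishes a memetic algorithm from a plain genetic algorithm can be sketched for this objective. The job data, the earliest-due-date sequencing within a machine, and the reassignment neighbourhood are illustrative simplifications:

```python
def max_weighted_abs_lateness(assign, jobs, n_machines):
    """Objective: sequence each machine's jobs in earliest-due-date
    order and take the worst w_j * |C_j - d_j|.
    jobs: id -> (processing_time, due_date, weight)."""
    worst = 0.0
    for m in range(n_machines):
        t = 0.0
        for j in sorted((j for j in assign if assign[j] == m),
                        key=lambda j: jobs[j][1]):
            p, d, w = jobs[j]
            t += p
            worst = max(worst, w * abs(t - d))
    return worst

def local_search(assign, jobs, n_machines):
    """Reassignment hill climbing: the local refinement a memetic
    algorithm applies to each offspring produced by crossover."""
    improved = True
    while improved:
        improved = False
        for j in list(assign):
            current = max_weighted_abs_lateness(assign, jobs, n_machines)
            best_m, best_v = assign[j], current
            for m in range(n_machines):
                assign[j] = m
                v = max_weighted_abs_lateness(assign, jobs, n_machines)
                if v < best_v:
                    best_m, best_v = m, v
            assign[j] = best_m
            if best_v < current:
                improved = True
    return assign
```

In a full memetic algorithm this refinement would run inside the generational loop, after selection and crossover over machine-assignment chromosomes.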

Using bi-criteria decision making analysis, a new model for test suite minimization has been developed that pursues two objectives: minimizing a test suite with regard to a particular level of coverage while simultaneously maximizing error detection rates. This new representation makes it possible to achieve significant reductions in test suite size without experiencing a decrease in error detection rates. Using the all-uses interprocedural data flow testing criterion, two binary integer linear programming models were evaluated, one a single-objective model, the other a weighted-sums bi-criteria model. The applicability of the bi-criteria model to regression test suite maintenance was also evaluated. The data show that minimization based solely on definition-use association coverage may have a negative impact on the error detection rate as compared to minimization performed with a bi-criteria model that also takes into account the ability of test cases to reveal errors. Results obtained with the bi-criteria model also indicate that test suites minimized with respect to a collection of program faults are effective at revealing subsequent program faults.
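The weighted-sum trade-off can be illustrated with a greedy stand-in for the binary integer program. The test names, the requirement/fault sets, and the weight `lam` are illustrative; the paper solves the bi-criteria model exactly as an ILP, whereas this greedy pass only sketches the same trade-off:

```python
def minimize_suite(tests, requirements, lam=0.5):
    """Greedy weighted-sum selection: cover every requirement while
    preferring tests that also reveal more faults.
    tests: name -> (set of covered requirements, set of revealed faults).
    Assumes every requirement is coverable by some test."""
    chosen, covered = [], set()
    while covered != requirements:
        def gain(name):
            reqs, faults = tests[name]
            return len(reqs - covered) + lam * len(faults)
        candidates = [t for t in tests
                      if t not in chosen and tests[t][0] - covered]
        best = max(candidates, key=gain)
        chosen.append(best)
        covered |= tests[best][0]
    return chosen
```

With `lam = 0` this degenerates to pure coverage minimization, which is exactly the single-objective variant the paper warns can hurt the error detection rate.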

Cloud computing is an emerging technology that allows users to utilize on-demand computation, storage, data and services from around the world. However, Cloud service providers charge users for these services. Specifically, to access data from their globally distributed storage edge servers, providers charge users depending on the user's location and the amount of data transferred. When deploying data-intensive applications in a Cloud computing environment, optimizing the cost of transferring data to and from these edge servers is a priority, as data play the dominant role in the application's execution. In this paper, we formulate a non-linear programming model to minimize the data retrieval and execution cost of data-intensive workflows in Clouds. Our model retrieves data from Cloud storage resources such that the amount of data transferred is inversely proportional to the communication cost. We take an example of an 'intrusion detection' application workflow, where the data logs are made available from globally distributed Cloud storage servers. We construct the application as a workflow and experiment with Cloud based storage and compute resources. We compare the cost of multiple executions of the workflow given by a solution of our non-linear program against that given by Amazon CloudFront's 'nearest' single data source selection. Our results show a saving of three-quarters of the total cost using our model.
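The retrieval rule stated in the abstract, fetching from each server an amount inversely proportional to its communication cost, can be sketched directly. The server costs and the total below are illustrative:

```python
def split_retrieval(total_units, costs):
    """Split the data to retrieve across storage servers so the amount
    fetched from each is inversely proportional to its per-unit
    communication cost."""
    inv = [1.0 / c for c in costs]
    s = sum(inv)
    return [total_units * w / s for w in inv]
```

For per-unit costs 1, 2, and 4, a 700-unit retrieval splits as 400, 200, and 100 units, so the cheapest server carries the largest share. The paper's full non-linear program additionally accounts for execution cost across the whole workflow.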

This paper describes a new algorithm, called MDQL, for the solution of multiple objective optimization problems. MDQL is based on a new distributed Q-learning algorithm, called DQL, which is also introduced in this paper. In DQL, a family of independent agents, exploring different options, finds a common policy in a common environment. Information about action goodness is transmitted using traces over state-action pairs. MDQL extends this idea to multiple objectives, assigning a family of agents to each objective involved. A non-dominance criterion is used to construct Pareto fronts, and by delaying adjustments to the rewards, MDQL achieves better distributions of solutions. Furthermore, an extension for applying reinforcement learning to continuous functions is also given. Successful results of MDQL on several test-bed problems suggested in the literature are described.
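
The tabular Q-learning update underlying DQL can be sketched on a toy chain problem (the environment and parameters below are invented; DQL's distinguishing features — a family of parallel agents sharing action goodness through traces over state-action pairs — are only noted in the comments):

```python
import random

# Toy 5-state chain: actions move left/right, reward 1 on reaching the goal
# state. A minimal single-agent tabular Q-learning sketch; DQL runs a family
# of such agents in parallel and shares action goodness via traces, which is
# beyond this illustration.
N, ACTIONS = 5, (-1, +1)
alpha, gamma, eps = 0.5, 0.9, 0.1
Q = {(s, a): 0.0 for s in range(N) for a in ACTIONS}
random.seed(0)

for _ in range(500):                              # training episodes
    s = 0
    while s != N - 1:
        if random.random() < eps:                 # epsilon-greedy exploration
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), N - 1)
        r = 1.0 if s2 == N - 1 else 0.0
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
        s = s2

# The learned greedy policy should walk right toward the goal.
policy = [max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N - 1)]
```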

The school bus routing problem discussed in this paper, is similar to the standard vehicle routing problem, but has several interesting additional features. In the standard VRP all stops to visit are given. In our school bus routing... more

The school bus routing problem discussed in this paper is similar to the standard vehicle routing problem, but has several interesting additional features. In the standard VRP, all stops to visit are given. In our school bus routing problem, we assume that a set of potential stops is given, as well as a set of students, each of whom can walk to one or more of these potential stops. The school buses used to pick up the students and transport them to their schools have finite capacity. The goal of this routing problem is to select the subset of stops that will actually be visited by the buses, determine which stop each student should walk to, and develop a set of tours that minimizes the total distance travelled by all buses. We develop an integer programming formulation for this problem, as well as a problem instance generator. We then show how the problem can be solved using a commercial integer programming solver and discuss some of our results on small instances.
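
The core trade-off — selecting stops that cover all students while keeping the tour short — can be sketched by brute force on a hypothetical three-stop instance (the paper's integer program also handles capacities and multiple buses; here a single bus of sufficient capacity is assumed):

```python
from itertools import combinations, permutations
from math import dist, inf

SCHOOL = (0.0, 0.0)
STOPS = {"A": (1, 0), "B": (2, 2), "C": (0, 3)}               # potential stops
WALKABLE = {"s1": {"A"}, "s2": {"A", "B"}, "s3": {"B", "C"}}  # student -> stops in reach

def tour_length(chosen):
    """Shortest round trip from the school through the chosen stops."""
    best = inf
    for order in permutations(chosen):
        pts = [SCHOOL] + [STOPS[s] for s in order] + [SCHOOL]
        best = min(best, sum(dist(p, q) for p, q in zip(pts, pts[1:])))
    return best

def solve():
    best_len, best_set = inf, None
    names = list(STOPS)
    for k in range(1, len(names) + 1):
        for chosen in combinations(names, k):
            # every student must be able to walk to some selected stop
            if all(WALKABLE[s] & set(chosen) for s in WALKABLE):
                length = tour_length(chosen)
                if length < best_len:
                    best_len, best_set = length, set(chosen)
    return best_len, best_set

length, chosen = solve()          # stop C is skipped: s3 can walk to B instead
```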

The unprecedented scale of food waste in global food supply chains is attracting increasing attention due to its environmental, social and economic impacts. Drawing on interviews with food waste specialists, this study construes the... more

The unprecedented scale of food waste in global food supply chains is attracting increasing attention due to its environmental, social and economic impacts. Drawing on interviews with food waste specialists, this study construes the boundaries between food surplus and food waste, avoidable and unavoidable food waste, and between waste prevention and waste management. This study suggests that the first step towards a more sustainable resolution of the food waste issue is to adopt a sustainable production and consumption approach and tackle food surplus and waste throughout the global food supply chain. The authors examine the factors that give rise to food waste throughout the food supply chain, and propose a framework to identify and prioritize the most appropriate options for prevention and management of food waste. The proposed framework interprets and applies the waste hierarchy in the context of food waste. It considers the three dimensions of sustainability (environmental, economic, and social), offering a more holistic approach in addressing food waste. Additionally, it considers the materiality and temporality of food. The food waste hierarchy posits that prevention, through minimization of food surplus and avoidable food waste, is the most attractive option. The second most attractive option involves the distribution of food surplus to groups affected by food poverty, followed by the option of converting food waste to animal feed. Although the proposed food waste hierarchy requires a fundamental re-think of the current practices and systems in place, it has the potential to deliver substantial environmental, social and economic benefits.

This study highlights the perceptions of waste administrators regarding their main roles and responsibilities, efforts in promoting recycling or waste minimisation and awareness to the problems or constraints they face. Public waste... more

This study highlights the perceptions of waste administrators regarding their main roles and responsibilities, their efforts in promoting recycling and waste minimisation, and their awareness of the problems and constraints they face. Public waste administrators are key actors in a waste management system, often involved in initiating community activities, making decisions and implementing policies that should benefit communities and the environment. They help to facilitate recycling campaigns in the hope that this will raise awareness and prompt the public to practise sustainable waste management behaviour. However, studies conducted in Malaysia reveal persistently low public participation in recycling, public indifference towards waste minimisation efforts, and an absence of clear guidelines on how administrators can conduct effective community-based approaches. The lack of enforcement of recycling is also perceived to contribute to the lack of public participation. Nevertheless, this study finds that administrators are more enthusiastic about school communities' participation in recycling programmes than about recycling activities run by other volunteers in the community. Administrators perceive that recycling should be the responsibility of each individual, but a lack of public commitment to participate, misuse of recycling infrastructure, financial constraints and the absence of proper guidelines hamper the sustainability of many programmes. Generally, their main concern is to ensure that waste is collected and the work monitored, while communities should champion these activities with minimal intervention from the authorities.

A Graph Theoretic Approach to Minimize Total Wire Length in Channel Routing, by Pralay Mitra, Nabin Ghoshal and Rajit K. Pal (Dept. of Computer Sc. & Tech., Howrah-711 103, West Bengal, India)... more

A Graph Theoretic Approach to Minimize Total Wire Length in Channel Routing. Pralay Mitra, Nabin Ghoshal, Rajit K. Pal. Dept. of Computer Sc. & Tech., Howrah-711 103, West Bengal, India; Dept. of Computer Sc. & Engg.

This report proposes a waste disposal system which includes integrated informal recycling, small scale biomethanation, MBT and RDF/WTE. Informal recycling can be integrated into the formal system by training and employing waste pickers... more

This report proposes a waste disposal system which includes integrated informal recycling, small scale biomethanation, MBT and RDF/WTE.
Informal recycling can be integrated into the formal system by training and employing waste pickers to conduct door-to-door collection of wastes, and by allowing them to sell the recyclables they collected. Waste pickers should also be employed at material recovery facilities (or MRFs) to increase the percentage of recycling. Single households, restaurants, food courts and other sources of separated organic waste should be encouraged to employ small scale biomethanation and use the biogas for cooking purposes. Use of compost product from mixed wastes for agriculture should be regulated. It should be used for gardening purposes only or as landfill cover. Rejects from the composting facility should be combusted in a waste-to-energy facility to recover energy. Ash from WTE facilities should be used to make bricks or should be contained in a sanitary landfill facility.
Such a system will divert 93.5% of MSW from landfilling, and increase the life span of a landfill from 20 years to 300 years. It will also decrease disease, improve the quality of life of urban Indians, and avoid environmental pollution.

The unprecedented scale of food waste in global food supply chains is attracting increasing attention due to its environmental, social and economic impacts. From a climate change perspective, the food sector is thought to be the cause of... more

The unprecedented scale of food waste in global food supply chains is attracting increasing attention due to its environmental, social and economic impacts. From a climate change perspective, the food sector is thought to account for 22 per cent of global warming potential in the EU. Drawing on interviews with food waste specialists, this study construes the boundaries between food surplus and food waste, avoidable and unavoidable food waste, and between waste prevention and waste management. This study suggests that the first step towards a more sustainable resolution of the growing food waste issue is to adopt a sustainable production and consumption approach and tackle food surplus and waste throughout the global food supply chain. The authors examine the factors that give rise to food waste throughout the global food supply chain, and propose a framework to identify and prioritize the most appropriate options for the prevention and management of food waste.

Finite automata are probably best known for being equivalent to right-linear context-free grammars and, thus, for capturing the lowest level of the Chomsky-hierarchy, the family of regular languages. Over the last half century, a vast... more

Finite automata are probably best known for being equivalent to right-linear context-free grammars and, thus, for capturing the lowest level of the Chomsky-hierarchy, the family of regular languages. Over the last half century, a vast literature documenting the importance of deterministic, nondeterministic, and alternating finite automata as an enormously valuable concept has been developed. In the present paper, we tour a fragment of this literature. Mostly, we discuss developments relevant to finite automata related problems like, for example, (i) simulation of and by several types of finite automata, (ii) standard automata problems such as fixed and general membership, emptiness, universality, equivalence, and related problems, and (iii) minimization and approximation. We thus come across descriptional and computational complexity issues of finite automata. We do not prove these results but we merely draw attention to the big picture and some of the main ideas involved.
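
As a concrete instance of the minimization problems surveyed, DFA state minimization can be sketched with Moore-style partition refinement (the example automaton below is invented):

```python
def minimize_dfa(states, alphabet, delta, accepting):
    """Moore-style partition refinement: repeatedly split blocks whose states
    disagree on which block a symbol leads to. Returns the coarsest partition
    of equivalent states (each block is one state of the minimal DFA)."""
    partition = {frozenset(accepting), frozenset(states - accepting)}
    partition.discard(frozenset())
    changed = True
    while changed:
        changed = False
        for block in list(partition):
            for sym in alphabet:
                # group states in the block by the block their successor lands in
                groups = {}
                for s in block:
                    dest = next(b for b in partition if delta[(s, sym)] in b)
                    groups.setdefault(dest, set()).add(s)
                if len(groups) > 1:           # states here are distinguishable
                    partition.remove(block)
                    partition.update(frozenset(g) for g in groups.values())
                    changed = True
                    break
            if changed:
                break
    return partition

# Example: accepting states 1 and 2 loop between each other, so they merge.
delta = {(0, "a"): 1, (1, "a"): 2, (2, "a"): 1}
minimal = minimize_dfa({0, 1, 2}, {"a"}, delta, {1, 2})
```

This quadratic refinement is the textbook approach; Hopcroft's algorithm achieves the same result in O(n log n).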

This study describes the preparation, characterization and evaluation of performance and antifouling properties of mixed matrix nanofiltration membranes. The membranes were prepared by acid oxidized multiwalled carbon nanotubes (MWCNTs)... more

This study describes the preparation, characterization and evaluation of the performance and antifouling properties of mixed-matrix nanofiltration membranes. The membranes were prepared with acid-oxidized multiwalled carbon nanotubes (MWCNTs) embedded in polyethersulfone (PES) as the matrix polymer. The hydrophilicity of the membrane was enhanced by blending in MWCNTs, owing to migration of functionalized MWCNTs to the membrane surface during the phase inversion process. Morphology studies of the prepared NF membranes by scanning electron microscopy (SEM) showed that very large macrovoids appeared in the sub-layer upon addition of a low amount of functionalized MWCNTs, leading to an increase in pure water flux. By using the proper amount of modified MWCNTs, it was possible to increase both the flux and the salt rejection of the membranes. In this work, the effect of the CNT/polymer membrane on fouling minimization is presented. The antifouling performance of membranes fouled by bovine serum albumin (BSA) was characterized by measuring the pure-water flux recovery. The results indicate that the surface roughness of the membranes plays an important role in the antibiofouling resistance of MWCNT membranes. The membrane with lower roughness (0.04 wt% MWCNT/PES) exhibited the superior antifouling property. The salt retention by the negatively charged MWCNT-embedded membrane indicated a Donnan exclusion mechanism. The salt retention sequence for 0.04 wt% MWCNT was Na2SO4 (75%) > MgSO4 (42%) > NaCl (17%) after 60 min of filtration.

In permanent magnet (PM) synchronous machines, iron losses form a larger portion of the total losses than in induction machines. This is partly due to the elimination of significant rotor loss in PM machines and partly due to the... more

In permanent magnet (PM) synchronous machines, iron losses form a larger portion of the total losses than in induction machines. This is partly due to the elimination of significant rotor loss in PM machines and partly due to the nonsinusoidal flux density waveforms in the stator core of PM machines. Therefore, minimization of iron losses is of particular importance in PM motor design. This paper considers the minimization of iron losses of PM synchronous machines through the proper design of magnets and slots, and through the choice of the number of poles. Both the time-stepped finite element method (FEM) and the iron loss model from a previous study are used to draw the conclusions.

ABSTRACT Solving the inverse kinematics problem is at the core of the kinematics control of any articulated mechanism. It refers to determine the joint configuration that places the end-effector in an arbitrary position and orientation in... more

Solving the inverse kinematics problem is at the core of the kinematic control of any articulated mechanism. It refers to determining the joint configuration that places the end-effector at an arbitrary position and orientation in the workspace. Kinematic inversion algorithms are generally based on the (pseudo)inverse of the Jacobian matrix; however, these methods are local and unstable in the vicinity of singular joint configurations. Alternatively, the inverse kinematics can be formulated as a constrained minimization problem in the robot configuration space. In a previous work, Differential Evolution (DE) was used to solve this optimization problem for a non-redundant robot manipulator. Although the algorithm was successful in finding accurate solutions, it showed a low convergence rate. In this paper, a memetic approach is proposed to increase the convergence speed of DE by introducing a local search mechanism, called discarding. The proposed approach is tested in a simulation environment to solve the kinematic inversion problem of a non-redundant 3-DOF robot manipulator. Experimental results show that the proposed algorithm is able to find solutions with high accuracy in fewer generations than the original DE approach.
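
The minimization formulation of inverse kinematics can be sketched with a plain Differential Evolution loop on a planar 2-link arm (link lengths, target and DE parameters are hypothetical, and the paper's memetic extension with the discarding local search is not reproduced):

```python
import math
import random

L1, L2 = 1.0, 1.0                    # hypothetical link lengths
TARGET = (1.2, 0.8)                  # desired end-effector position (reachable)

def fk(q):
    """Forward kinematics of a planar 2R arm."""
    x = L1 * math.cos(q[0]) + L2 * math.cos(q[0] + q[1])
    y = L1 * math.sin(q[0]) + L2 * math.sin(q[0] + q[1])
    return x, y

def ik_error(q):
    """Squared position error: the objective the optimizer minimizes."""
    x, y = fk(q)
    return (x - TARGET[0]) ** 2 + (y - TARGET[1]) ** 2

def de_ik(pop_size=20, gens=200, F=0.7, CR=0.9):
    """Basic DE/rand/1 with greedy selection over joint-angle vectors."""
    random.seed(1)
    pop = [[random.uniform(-math.pi, math.pi) for _ in range(2)]
           for _ in range(pop_size)]
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = random.sample([p for j, p in enumerate(pop) if j != i], 3)
            trial = [a[d] + F * (b[d] - c[d]) if random.random() < CR else pop[i][d]
                     for d in range(2)]
            if ik_error(trial) < ik_error(pop[i]):   # greedy selection
                pop[i] = trial
    return min(pop, key=ik_error)

q_best = de_ik()
```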

In this paper, we consider a supply chain network design problem with popup stores which can be opened for a few weeks or months before closing seasonally in a marketplace. The proposed model is multi-period and multi-stage with... more

In this paper, we consider a supply chain network design problem with popup stores, which can be opened for a few weeks or months before closing seasonally in a marketplace. The proposed model is multi-period and multi-stage with multi-choice goals under inventory management constraints, and is formulated as a 0-1 mixed integer linear program. The design tasks of the problem involve the choice of the popup stores to be opened and the design of the distribution network to satisfy demand, with three multi-choice goals. The first goal is minimization of the sum of transportation costs in all stages; the second is minimization of the set-up costs of popup stores; and the third is minimization of inventory holding and backordering costs. A revised multi-choice goal programming approach is applied to solve this mixed integer linear programming model. We also provide a real-world industrial case to demonstrate how the proposed model works.
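
The goal-programming flavour of the model can be sketched by enumerating open/close decisions for two hypothetical popup stores and penalizing deviations of each cost from its goal target (all numbers are invented; revised multi-choice goal programming additionally allows several aspiration levels per goal, which this sketch collapses to one):

```python
from itertools import product

# Hypothetical cost model: opening store i incurs a setup cost and changes
# transport and inventory costs; goal programming minimizes weighted
# deviations of each cost from its target level.
SETUP = {"P1": 40.0, "P2": 60.0}

def transport(open_set):          # invented cost model
    return 120.0 - 25.0 * len(open_set)

def inventory(open_set):
    return 30.0 + 10.0 * len(open_set)

TARGETS = {"transport": 80.0, "setup": 50.0, "inventory": 40.0}
WEIGHTS = {"transport": 1.0, "setup": 1.0, "inventory": 1.0}

def objective(open_set):
    costs = {"transport": transport(open_set),
             "setup": sum(SETUP[p] for p in open_set),
             "inventory": inventory(open_set)}
    # penalize only overshoot above each goal's target
    return sum(WEIGHTS[g] * max(0.0, costs[g] - TARGETS[g]) for g in TARGETS)

best = min((frozenset(c for c, o in zip(SETUP, bits) if o)
            for bits in product([0, 1], repeat=len(SETUP))), key=objective)
```

Opening only P1 balances the three goals: opening both stores blows the setup target, while opening none blows the transport target.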

This paper introduces a constrained version of the recently proposed set-membership affine projection algorithm based on the set-membership criteria for coefficient update. The algorithm is suitable for linearly-constrained... more

This paper introduces a constrained version of the recently proposed set-membership affine projection algorithm, based on the set-membership criterion for coefficient update. The algorithm is suitable for linearly-constrained minimum-variance filtering applications. The data-selective property of the proposed algorithm greatly reduces the computational burden compared with a nonselective approach. Simulation results show good performance in terms of convergence, final misadjustment, and reduced computational complexity.
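
The data-selective update at the heart of set-membership filtering can be sketched with an unconstrained set-membership NLMS variant (the plant coefficients and error bound below are hypothetical; the paper's algorithm additionally enforces linear constraints and uses affine projections rather than a single-regressor update):

```python
import random

def sm_nlms(x, d, taps=4, gamma=0.01):
    """Set-membership NLMS sketch: the filter updates only when the a-priori
    error exceeds the bound gamma -- the data-selective property that cuts
    the computational burden of a nonselective adaptive filter."""
    w = [0.0] * taps
    updates = 0
    for n in range(taps - 1, len(x)):
        u = x[n - taps + 1:n + 1][::-1]          # regressor, newest sample first
        e = d[n] - sum(wi * ui for wi, ui in zip(w, u))
        if abs(e) > gamma:                        # only then is an update needed
            mu = 1.0 - gamma / abs(e)             # step just large enough
            norm = sum(ui * ui for ui in u) + 1e-12
            w = [wi + mu * e * ui / norm for wi, ui in zip(w, u)]
            updates += 1
    return w, updates

random.seed(2)
x = [random.gauss(0, 1) for _ in range(2000)]
h = [0.5, -0.3, 0.2, 0.1]                         # hypothetical unknown system
d = [sum(h[k] * x[n - k] for k in range(4)) if n >= 3 else 0.0
     for n in range(len(x))]
w, updates = sm_nlms(x, d)                        # w approaches h, few updates
```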

walking, we present a control strategy for biologically realistic walking based on the principle of spin angular momentum regulation. Using a morphologically realistic human model and kinematic gait data, we compute the total spin angular... more

Motivated by observations of human walking, we present a control strategy for biologically realistic walking based on the principle of spin angular momentum regulation. Using a morphologically realistic human model and kinematic gait data, we compute the total spin angular momentum at a self-selected walking speed for one human test subject. We find that the spin angular momentum, made dimensionless by the product of body mass, walking velocity and centre-of-mass (CM) height, remains small throughout the gait cycle, and that a zero-spin assumption yields a direct relationship between the centre of pressure (CP) and the CM trajectory. We employ this relationship to rapidly generate biologically realistic CP and CM reference trajectories. Using an open-loop optimization strategy, we show that biologically realistic leg-joint kinematics emerge through the minimization of spin angular momentum and the total sum of squared joint torques, suggesting that both angular momentum and energetic factors are important considerations for biomimetic controllers.
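
The computation of spin angular momentum about the centre of mass, and its normalization, can be sketched for point-mass segments in the sagittal plane (the segment data in the example are invented):

```python
def spin_angular_momentum(masses, positions, velocities):
    """Total spin angular momentum about the body's centre of mass for a set
    of point-mass segments in the 2-D sagittal plane (z-component only)."""
    M = sum(masses)
    cm = [sum(m * p[i] for m, p in zip(masses, positions)) / M for i in (0, 1)]
    v_cm = [sum(m * v[i] for m, v in zip(masses, velocities)) / M for i in (0, 1)]
    S = 0.0
    for m, p, v in zip(masses, positions, velocities):
        rx, ry = p[0] - cm[0], p[1] - cm[1]       # position relative to CM
        vx, vy = v[0] - v_cm[0], v[1] - v_cm[1]   # velocity relative to CM
        S += m * (rx * vy - ry * vx)              # 2-D cross product r x v
    return S

def dimensionless_spin(S, M, v_walk, h_cm):
    """Normalization used in the text: S / (mass * velocity * CM height)."""
    return S / (M * v_walk * h_cm)

# Two symmetric segments swinging in opposite directions about a static CM:
S = spin_angular_momentum([1.0, 1.0], [(1, 0), (-1, 0)], [(0, 1), (0, -1)])
```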

This paper presents several aspects of the application of regularization theory in image restoration. This is accomplished by extending the applicability of the stabilizing functional approach to 2-D ill-posed inverse problems. Image... more

This paper presents several aspects of the application of regularization theory in image restoration. This is accomplished by extending the applicability of the stabilizing functional approach to 2-D ill-posed inverse problems. Image restoration is formulated as the constrained minimization of a stabilizing functional. The choice of a particular quadratic functional to be minimized is related to the a priori knowledge regarding the original object through a formulation of image restoration as a maximum a posteriori estimation problem. This formulation is based on image representation by certain stochastic partial differential equation image models. The analytical study and computational treatment of the resulting optimization problem are subsequently presented. As a result, a variety of regularizing filters and iterative regularizing algorithms are proposed. A relationship between the regularized solutions proposed and optimal Wiener estimation is also identified. The filters and algorithms proposed are evaluated through several experimental results.
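
In its simplest finite-dimensional form, minimizing a quadratic stabilizing functional reduces to Tikhonov-regularized least squares; a hand-rolled 2x2 sketch (the matrices in the example are invented):

```python
def tikhonov_2x2(A, b, lam):
    """Solve min ||Ax - b||^2 + lam*||x||^2 for a 2x2 system via the normal
    equations x = (A^T A + lam*I)^{-1} A^T b, using Cramer's rule only."""
    m00 = A[0][0] * A[0][0] + A[1][0] * A[1][0] + lam
    m01 = A[0][0] * A[0][1] + A[1][0] * A[1][1]
    m11 = A[0][1] * A[0][1] + A[1][1] * A[1][1] + lam
    r0 = A[0][0] * b[0] + A[1][0] * b[1]
    r1 = A[0][1] * b[0] + A[1][1] * b[1]
    det = m00 * m11 - m01 * m01            # A^T A + lam*I is symmetric
    return [(m11 * r0 - m01 * r1) / det, (m00 * r1 - m01 * r0) / det]

x_exact = tikhonov_2x2([[2.0, 0.0], [0.0, 1.0]], [2.0, 1.0], 0.0)  # -> [1, 1]
x_reg = tikhonov_2x2([[2.0, 0.0], [0.0, 1.0]], [2.0, 1.0], 1.0)    # shrunk toward 0
```

Increasing `lam` biases the solution toward zero in exchange for stability, which is exactly the trade-off the stabilizing-functional approach negotiates for ill-posed restoration problems.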

In this paper, a new nonlinear control strategy is proposed for a permanent-magnet salient-pole synchronous motor. This control strategy simultaneously achieves accurate torque control and copper losses minimization without recurring to... more

In this paper, a new nonlinear control strategy is proposed for a permanent-magnet salient-pole synchronous motor. This control strategy simultaneously achieves accurate torque control and copper-loss minimization without resorting to an internal current loop or to any feedforward compensation. It takes advantage of the rotor saliency by allowing the direct-axis current (i_d) to have nonzero values. This, in turn, allows us to increase the power factor of the machine and to raise the maximum admissible torque. We apply input-output linearization techniques where the inputs are the stator voltages and the outputs are the torque and a judiciously chosen new output. This new output ensures a well-defined relative degree and is linked to the copper losses in such a way that, when forced to zero, it leads to maximum machine efficiency. The performance of our nonlinear controller is demonstrated by a real-time implementation using a digital signal processor (DSP) chip on a permanent-magnet salient-pole synchronous motor with sinusoidal flux distribution. The results are compared to those obtained with a scheme that forces the i_d current to zero.

Effective flocculation and dewatering of mineral processing streams containing colloidal clays has become increasingly urgent. Release of water from slurries in tailings streams and dam beds for recycle water consumption, is usually slow... more

Effective flocculation and dewatering of mineral processing streams containing colloidal clays has become increasingly urgent. Release of water from slurries in tailings streams and dam beds for recycle-water consumption is usually slow and incomplete. To achieve fast settling and minimization of retained water, individual particles need to be bound, in the initial stages of thickening, into large, high-density aggregates, which may sediment more rapidly with lower intra-aggregate water content. Quantitative cryo-SEM image analysis shows that the structure of aggregates formed before flocculant addition has a determinative effect on these outcomes. Without flocculant addition, three stages occur in the mechanism of primary dewatering of kaolinite at pH 8: initially, the dispersed structures already show edge-edge (EE) and edge-face (EF) inter-particle associations, but these are open, loose and easily disrupted; in the hindered settling region, aggregates form adherent, chain-like structures of EE and stair-step face-face (FF) associations; this network structure then slowly and partially rearranges from EE chains to more compact FF contacts, densifying the aggregates and increasing settling rates. During settling, the sponge-like network structure with EE and FF string-like aggregates limits dewatering, because steric effects in the resulting partially-gelled aggregate structures are dominant. With flocculant addition, the internal structure and networking of the pre-aggregates is largely preserved, but they are rapidly and effectively bound together by the aggregate-bridging action of the flocculant. The effects of initial pH and Ca-ion addition on these structures are also analyzed.
Statistical analysis from cryo-SEM imaging shows that there is an inverse correlation of intra-aggregate porosity with Darcian inter-aggregate permeability whereas there is a strong positive correlation of Darcian permeability with settling and primary dewatering rate as a function of pH in suspension. Graphs of partial void contributions also suggest that it is not total porosity that dominates permeability in these systems but the abundance of larger intra-aggregate voids.