Applications of Neural Networks in Modeling and Design of Structural Systems

Artificial Neural Networks in Structural Engineering: Concept and Applications

Journal of King Abdulaziz University-Engineering Sciences, 1999

Artificial neural networks are algorithms for cognitive tasks, such as learning and optimization. They have the ability to learn and generalize from examples without knowledge of explicit rules. Research into artificial neural networks and their application to structural engineering problems is attracting interest and growing rapidly. The use of artificial neural networks in structural engineering has evolved as a new computing paradigm, although it remains very limited.

Structural Design Optimisation Using Genetic Algorithms and Neural Networks

This paper relates to the optimisation of structural design using Genetic Algorithms (GAs) and presents an improved method for determining the fitness of genetic codes that represent possible design solutions. Two significant problems that often hinder design optimisation using genetic algorithms are expensive fitness evaluation and high epistasis. Expensive fitness evaluation results in slow evolution and occurs when it is computationally expensive to test the effectiveness of possible design solutions using an objective function. High epistasis occurs when certain genes lose their significance or value when other genes change. Consequently, when a fit genetic code has an important gene changed, this can have a dramatic effect on its fitness. Often the reduction in fitness means the genetic code fails to be selected for reproduction and inclusion in the next generation. This loss of evolved genetic information can result in the solution taking consider...
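
The core idea above, replacing some expensive objective-function calls with a cheap learned approximation, can be sketched compactly. The toy below is not the paper's method: it uses a simple radial-basis-function surrogate (standing in for a trained neural network), a placeholder `expensive_fitness` function instead of a real structural analysis, and arbitrarily chosen population sizes and thresholds, purely to illustrate surrogate pre-screening inside an evolutionary loop.

```python
import numpy as np

# Placeholder objective standing in for an expensive structural analysis;
# a real run would call a finite element solver here.
def expensive_fitness(x):
    return np.sum((x - 0.5) ** 2)

# Simple radial-basis-function surrogate fitted to all exactly evaluated designs.
def fit_rbf(X, y, sigma=0.3):
    K = np.exp(-((X[:, None, :] - X[None, :, :]) ** 2).sum(-1) / (2 * sigma ** 2))
    w = np.linalg.solve(K + 1e-6 * np.eye(len(X)), y)
    return lambda Q: np.exp(
        -((Q[:, None, :] - X[None, :, :]) ** 2).sum(-1) / (2 * sigma ** 2)) @ w

rng = np.random.default_rng(0)
dim, pop_size, n_gen = 4, 20, 30
pop = rng.random((pop_size, dim))
archive_X, archive_y = [], []      # all exactly evaluated designs and their fitness

for gen in range(n_gen):
    # Offspring by Gaussian mutation of the current population.
    offspring = np.clip(pop + rng.normal(0.0, 0.1, pop.shape), 0.0, 1.0)
    if len(archive_X) >= 3 * pop_size:
        # Surrogate pre-screening: only the most promising half of the
        # offspring receive the expensive (exact) evaluation.
        surrogate = fit_rbf(np.array(archive_X), np.array(archive_y))
        offspring = offspring[np.argsort(surrogate(offspring))[: pop_size // 2]]
    exact = [expensive_fitness(x) for x in offspring]
    archive_X.extend(offspring)
    archive_y.extend(exact)
    # Plus-selection: keep the best pop_size designs evaluated so far.
    all_X, all_y = np.array(archive_X), np.array(archive_y)
    pop = all_X[np.argsort(all_y)[:pop_size]]

print("best fitness found:", min(archive_y))
```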

An adaptive neural network strategy for improving the computational performance of evolutionary structural optimization

Computer Methods in Applied Mechanics and Engineering, 2005

This work is focused on improving the computational efficiency of evolutionary algorithms implemented in large-scale structural optimization problems. Locating optimal structural designs using evolutionary algorithms is a task associated with high computational cost, since a complete finite element (FE) analysis needs to be carried out for each parent and offspring design vector of the populations considered. Each of these FE solutions facilitates decision making regarding the feasibility or infeasibility of the corresponding structural design by evaluating the displacement and stress constraints specified for the structural problem at hand. This paper presents a neural network (NN) strategy to reliably predict, in the framework of an evolution strategies (ES) procedure for structural optimization, the feasibility or infeasibility of structural designs, avoiding computationally expensive FE analyses. The proposed NN implementation is adaptive in the sense that the NN configuration is appropriately updated as the ES process evolves, by performing NN retrainings using information gradually accumulated during the ES execution. The prediction capabilities and the computational advantages offered by this adaptive NN scheme, coupled with domain decomposition solution techniques, are investigated in the context of design optimization of skeletal structures in both sequential and parallel computing environments.
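
As a rough illustration of the adaptive scheme described above, the sketch below trains a small feasibility classifier on exactly evaluated designs and retrains it periodically as new evaluations accumulate, skipping the expensive check for designs the classifier flags as infeasible. Everything concrete here, including the `fe_feasible` toy constraint, the network size, the retraining interval and the 0.5 decision threshold, is an assumption for demonstration, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in feasibility check: in the paper this is a full FE analysis of the
# displacement and stress constraints; here it is a toy inequality.
def fe_feasible(x):
    return float(np.sum(x ** 2) <= 1.0)

# Tiny one-hidden-layer network used as a feasibility classifier.
class FeasibilityNet:
    def __init__(self, dim, hidden=16):
        self.W1 = rng.normal(0, 0.5, (dim, hidden)); self.b1 = np.zeros(hidden)
        self.W2 = rng.normal(0, 0.5, hidden);        self.b2 = 0.0

    def predict_proba(self, X):
        h = np.tanh(X @ self.W1 + self.b1)
        return 1.0 / (1.0 + np.exp(-(h @ self.W2 + self.b2)))

    def train(self, X, y, lr=0.1, epochs=300):
        for _ in range(epochs):
            h = np.tanh(X @ self.W1 + self.b1)
            p = 1.0 / (1.0 + np.exp(-(h @ self.W2 + self.b2)))
            err = p - y                              # cross-entropy gradient at the output
            gW2 = h.T @ err / len(X); gb2 = err.mean()
            gz = np.outer(err, self.W2) * (1.0 - h ** 2)   # back through tanh
            gW1 = X.T @ gz / len(X); gb1 = gz.mean(axis=0)
            self.W1 -= lr * gW1; self.b1 -= lr * gb1
            self.W2 -= lr * gW2; self.b2 -= lr * gb2

dim, retrain_every = 3, 50
net, X_seen, y_seen, last_trained = FeasibilityNet(dim), [], [], 0

for step in range(300):
    x = rng.uniform(-1.5, 1.5, dim)                  # candidate design vector from the ES
    if len(X_seen) - last_trained >= retrain_every:
        # Adaptive retraining on all information accumulated so far.
        net.train(np.array(X_seen), np.array(y_seen))
        last_trained = len(X_seen)
    if last_trained > 0 and net.predict_proba(x[None, :])[0] < 0.5:
        continue                                     # predicted infeasible: skip the FE analysis
    X_seen.append(x)
    y_seen.append(fe_feasible(x))                    # expensive exact constraint check

print("exact FE evaluations performed:", len(X_seen), "of", 300)
```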

Optimum design of structures by an improved genetic algorithm using neural networks

Advances in Engineering Software, 2005

Optimum design of large-scale structures by the standard genetic algorithm (GA) imposes a very high computational burden. To reduce the computational cost of the standard GA, two different strategies are used. The first strategy modifies the standard GA and is called the virtual sub-population (VSP) method. The second strategy uses artificial neural networks to approximate the structural analysis. In this study, radial basis function (RBF), counter propagation (CP) and generalized regression (GR) neural networks are used. Using neural networks within the framework of VSP creates a robust tool for the optimum design of structures.
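
Of the three approximators mentioned, the generalized regression (GR) network is the simplest to write down: it is essentially a Gaussian-kernel weighted average of stored training responses. The sketch below is a minimal GRNN on synthetic data; the meaning of the inputs, the target function and the smoothing parameter `sigma` are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Minimal generalized regression neural network (GRNN) used to emulate a
# structural response; the training data below are synthetic.
class GRNN:
    def __init__(self, sigma=0.1):
        self.sigma = sigma

    def fit(self, X, y):
        self.X, self.y = np.asarray(X), np.asarray(y)
        return self

    def predict(self, Q):
        # Gaussian kernel weights between queries and stored training patterns.
        d2 = ((np.asarray(Q)[:, None, :] - self.X[None, :, :]) ** 2).sum(-1)
        w = np.exp(-d2 / (2.0 * self.sigma ** 2))
        return (w @ self.y) / np.clip(w.sum(axis=1), 1e-12, None)

rng = np.random.default_rng(2)
# Pretend the inputs are normalized member cross-sections and the output is a
# peak displacement that a structural analysis would return.
X_train = rng.random((200, 5))
y_train = np.sin(2 * np.pi * X_train[:, 0]) + X_train[:, 1:].sum(axis=1)

model = GRNN(sigma=0.15).fit(X_train, y_train)
X_test = rng.random((5, 5))
print(model.predict(X_test))        # approximate "analysis" of new designs
```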

Learning improvement of neural networks used in structural optimization

Advances in Engineering Software, 2004

The performance of feed-forward neural networks can be substantially impaired by ill-conditioning of the corresponding Jacobian matrix. Ill-conditioning appearing in the feed-forward learning process is related to the properties of the activation function used. It is shown that the performance of network training can be improved using an adaptive activation function with a properly updated gain parameter during the learning process. The efficiency of the proposed adaptive procedure is examined in structural optimization problems where a trained neural network is used to replace the structural analysis phase and capture the necessary data for the optimizer. The optimizer used in this study is an algorithm based on evolution strategies.
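
A minimal way to picture an adaptive activation function is a sigmoid with a trainable gain (slope) that is updated together with the weights. The sketch below does exactly that with a plain gradient step on the gain; the network size, learning rate, synthetic target and the specific gain-update rule are assumptions, not the paper's adaptive procedure.

```python
import numpy as np

rng = np.random.default_rng(3)

# Sigmoid with an explicit gain (slope) parameter; the gain is treated as a
# trainable quantity and adapted during learning.
def sigmoid(z, gain):
    return 1.0 / (1.0 + np.exp(-gain * z))

# One-hidden-layer regressor: y = W2 . sigmoid(gain * (W1 x + b1)) + b2
dim, hidden, lr = 2, 8, 0.05
W1 = rng.normal(0, 0.5, (dim, hidden)); b1 = np.zeros(hidden)
W2 = rng.normal(0, 0.5, hidden);        b2 = 0.0
gain = 1.0

X = rng.uniform(-1, 1, (200, dim))
y = np.sin(np.pi * X[:, 0]) * X[:, 1]       # synthetic target response

for epoch in range(2000):
    z = X @ W1 + b1
    h = sigmoid(z, gain)
    pred = h @ W2 + b2
    err = pred - y                            # dL/dpred for squared loss
    du = np.outer(err, W2) * h * (1 - h)      # gradient at the activation input
    gW2 = h.T @ err / len(X); gb2 = err.mean()
    gW1 = X.T @ (du * gain) / len(X)
    gb1 = (du * gain).mean(axis=0)
    ggain = (du * z).sum() / len(X)           # gradient w.r.t. the gain itself
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2
    gain -= lr * ggain                        # adapt the activation slope

print("final gain:", round(gain, 3), " MSE:", round(float((err ** 2).mean()), 4))
```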

Structural Analysis with Neural Networks

This document describes Madaline linear neural networks (linear adaptive networks) and how they can be applied to the analysis of linear structures, as well as to estimating the responses of non-linear structures by linear approximation. Their advantages and limitations are discussed in comparison with the usual digital algorithms for structural analysis. A single-degree-of-freedom mathematical model of the system was used to train its neural model, and a methodology is presented for solving systems with multiple degrees of freedom. It is further noted that the calculation procedure is accelerated with acceptable results. A back-propagation (BPN) type ANN simulator is also analysed, together with its application to an example of the demolition of a building with explosives.
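
A Madaline-style linear adaptive layer can be illustrated with a small example: for a linear structure, the displacement response is a linear map of the applied loads, so a single linear layer trained with the Widrow-Hoff (LMS) rule can learn that map from load-displacement pairs. The 3-DOF spring chain, stiffness values and training setup below are assumptions made only to keep the sketch self-contained.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy 3-DOF spring chain: K u = F.  A linear (Madaline-style) layer is trained
# with the Widrow-Hoff (LMS) rule to reproduce the mapping F -> u, i.e. to
# "learn" the inverse of the stiffness matrix K.
k = np.array([200.0, 150.0, 100.0])           # spring stiffnesses (illustrative)
K = np.array([[k[0] + k[1], -k[1],        0.0],
              [-k[1],        k[1] + k[2], -k[2]],
              [0.0,         -k[2],         k[2]]])

W = np.zeros((3, 3))                           # adaptive linear weights (no bias)
lr = 1e-4

for step in range(20000):
    F = rng.uniform(-10.0, 10.0, 3)            # random load pattern
    u_exact = np.linalg.solve(K, F)            # "teacher" from the linear model
    u_net = W @ F                              # linear network response
    W += lr * np.outer(u_exact - u_net, F)     # Widrow-Hoff (LMS) update

F_test = np.array([5.0, -2.0, 8.0])
print("network:", W @ F_test)
print("exact:  ", np.linalg.solve(K, F_test))
```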

Soft computing methodologies for structural optimization

Applied Soft Computing, 2003

The paper examines the efficiency of soft computing techniques in structural optimization, in particular algorithms based on evolution strategies combined with neural networks, for solving large-scale, continuous or discrete structural optimization problems. The proposed combined algorithms are implemented both in deterministic and reliability based structural optimization problems, in an effort to increase the computational efficiency as well as the robustness of the optimization procedure. The use of neural networks was motivated by the time-consuming repeated finite element analyses required during the optimization process. A trained neural network is used to perform either the deterministic constraints check or, in the case of reliability based optimization, both the deterministic and the probabilistic constraints checks. The suitability of the neural network predictions is investigated in a number of structural optimization problems in order to demonstrate the computational advantages of the proposed methodologies.
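
For the reliability-based case mentioned above, the practical payoff is that a trained surrogate makes Monte Carlo estimation of the failure probability affordable. The sketch below assumes a closed-form `surrogate_margin` standing in for the trained network, a toy capacity/demand limit state and an allowable failure probability of 1e-2; it only illustrates how a probabilistic constraint check would call the surrogate instead of repeated FE analyses.

```python
import numpy as np

rng = np.random.default_rng(5)

# Stand-in for a neural network already trained on FE results: it maps
# (design variables x, random parameters theta) to a limit-state margin g,
# with failure defined as g < 0.  The closed form is an assumption used only
# so the sketch runs; in practice this would be the trained network.
def surrogate_margin(x, theta):
    capacity = 1.5 * x.sum() + theta[:, 0]
    demand = 2.0 + 0.5 * theta[:, 1]
    return capacity - demand

def probabilistic_constraint_ok(x, p_allow=1e-2, n_mc=100_000):
    # Monte Carlo over the random parameters, evaluated on the cheap surrogate
    # instead of repeated finite element analyses.
    theta = rng.normal(0.0, 0.2, (n_mc, 2))
    p_fail = np.mean(surrogate_margin(x, theta) < 0.0)
    return p_fail <= p_allow, p_fail

for design in (np.array([0.6, 0.6]), np.array([0.9, 0.9])):
    ok, pf = probabilistic_constraint_ok(design)
    print(design, "feasible" if ok else "infeasible", f"P_f ~ {pf:.4f}")
```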

Design Space Exploration of Initial Structural Design Alternatives via Artificial Neural Networks

Architecture in the Age of the 4th Industrial Revolution - Proceedings of the 37th eCAADe and 23rd SIGraDi Conference, 2019

Increasing implementation of digital tools within the design process generates exponentially growing data in each phase, and decision making within a design space of increasing complexity will inevitably be a great challenge for designers in the future. Hence, this research sought to exploit the data captured within the design and solution spaces of a truss design problem, proposing an initial approach that augments the capabilities of digital tools with artificial intelligence and allows designers to make an informed guess within the initial design space via performance feedback from the objective space. The initial structural design and modelling phase of a truss section was selected as the subject of this study, since decisions at this stage affect the whole process and the performance of the end product. As a method, a generic framework is proposed that helps designers understand the trade-offs between initial structural design alternatives and make informed decisions and optimizations during the initial stage. Finally, the proposed framework is presented in a case study, and future potentials of the research are discussed.

Feasibility Study of Using Artificial Neural Networks for Approximation of n-dimensional Objective Functions in Memetic Algorithms for Structural Optimization

Procedia Engineering, 2017

Evaluation of the objective function in structural optimization problems is generally considered computationally expensive and can take from a few seconds to hours or even days. After a certain number of solutions has been evaluated during optimization, artificial neural networks (ANNs) can be trained and used to approximate the objective function. The number of training points depends on the character and topology of the objective function, but the most important factor is its dimensionality. Like the performance of optimization algorithms, the training-data requirements of ANNs are affected by the so-called "curse of dimensionality": to achieve the same precision of ANN approximation over an n-dimensional space, the number of training points grows exponentially with the number of dimensions. This paper presents a feasibility study of using ANNs to approximate the objective function in structural optimization problems with respect to the number of optimization variables. The goal of this study was to find the maximum number of dimensions for which it is feasible to use ANNs to approximate the objective function. A test problem with a varying number of optimization variables was used to assess the feasibility of using ANNs.
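
The exponential growth the abstract refers to is easy to make concrete: holding the sampling density fixed at, say, 10 grid points per design variable, an n-variable problem needs 10^n training samples. The short script below just tabulates that count (the density of 10 points per axis is an arbitrary illustrative choice).

```python
# Holding the sampling density fixed (here 10 grid points per axis), the size
# of a full-factorial ANN training set grows as 10**n with n design variables.
points_per_axis = 10
for n in (1, 2, 3, 5, 8, 10, 15):
    print(f"{n:>2} design variables -> {points_per_axis ** n:>20,} grid training points")
```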

Recurrent Neural Networks for Structural Optimization

Computer-Aided Civil and Infrastructure Engineering, 1999

This paper presents an improvement to an artificial neural network paradigm that has shown significant potential for successful application to a class of optimization problems in structural engineering. The artificial neural network paradigm includes algorithms that belong to the class of single-layer, relaxation-type recurrent neural networks. The suggested improvement enhances the convergence performance and involves a technique for setting the values of the weight parameters of the recurrent neural network algorithm.
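
A relaxation-type recurrent network of this kind can be pictured as a state vector that is iteratively updated so that an energy function decreases until the network settles at the optimum. The sketch below runs such a relaxation on a small quadratic energy; the matrix, step size and tolerance are illustrative assumptions, and it does not reproduce the paper's weight-setting technique.

```python
import numpy as np

# Minimal relaxation-type recurrent iteration: the network state x is updated
# repeatedly so that the quadratic "energy" E(x) = 0.5 x^T A x - b^T x
# decreases, converging to the minimizer of E (where A x = b).
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])            # symmetric positive definite (assumed)
b = np.array([1.0, -2.0, 3.0])

x = np.zeros(3)                             # initial network state
eta = 0.1                                   # relaxation step; must be < 2 / lambda_max(A)
for it in range(500):
    grad = A @ x - b                        # "net input" driving the state update
    x = x - eta * grad                      # each relaxation step lowers the energy
    if np.linalg.norm(grad) < 1e-8:
        break

print("relaxed state:", x)
print("direct solve: ", np.linalg.solve(A, b))
```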