Impact of Shrinking Technologies on the Activation Function of Neurons

Abstract

Artificial neural networks can solve a wide variety of tasks, e.g. classification or approximation problems. To exploit their advantages in technical systems, various hardware realizations exist. In this work, the impact of shrinking device sizes on the activation function of neurons is investigated with respect to area demand, power consumption, and the maximum resolution of their information processing. Furthermore, analog and digital implementations are compared in emerging silicon technologies beyond the 100 nm feature size.
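The abstract contrasts analog and digital realizations of a neuron's activation function by the resolution they achieve. As an illustrative sketch only (not the paper's method; the function names are hypothetical), the snippet below models a sigmoid activation whose output is rounded to a fixed number of bits, the way a digital implementation's finite word length limits how finely nearby inputs can be distinguished:

```python
import math

def sigmoid(x):
    """Standard logistic activation: 1 / (1 + e^(-x))."""
    return 1.0 / (1.0 + math.exp(-x))

def quantized_sigmoid(x, bits):
    """Sigmoid output rounded to a uniform grid of 2**bits levels on [0, 1],
    mimicking the finite output resolution of a digital activation circuit."""
    levels = (1 << bits) - 1          # number of quantization steps
    return round(sigmoid(x) * levels) / levels

# With a small bit width, nearby inputs collapse onto the same output level,
# so information is lost in the neuron's transfer characteristic:
print(quantized_sigmoid(0.10, 4))
print(quantized_sigmoid(0.12, 4))
```

At 4 bits the two inputs above map to the same quantized output; increasing `bits` separates them again, which is the resolution trade-off the paper studies against area and power.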

This work was supported by the Graduate College 776 - Automatic Configuration in Open Systems - funded by the German Research Foundation (DFG).



Author information

Authors and Affiliations

  1. Chair of Circuit Design and Network Theory, Dresden University of Technology, Germany
    Ralf Eickhoff
  2. Heinz Nixdorf Institute, System and Circuit Technology, University of Paderborn, Germany
    Tim Kaulmann & Ulrich Rückert

Editor information

Joaquim Marques de Sá, Luís A. Alexandre, Włodzisław Duch, Danilo Mandic

Rights and permissions

© 2007 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Eickhoff, R., Kaulmann, T., Rückert, U. (2007). Impact of Shrinking Technologies on the Activation Function of Neurons. In: de Sá, J.M., Alexandre, L.A., Duch, W., Mandic, D. (eds) Artificial Neural Networks – ICANN 2007. ICANN 2007. Lecture Notes in Computer Science, vol 4668. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-74690-4_51
