Evolutionary Neural Architecture Search Supporting Approximate Multipliers

  • Conference paper
  • In: Genetic Programming (EuroGP 2021)

Abstract

There is growing interest in automated neural architecture search (NAS) methods. They are employed to routinely deliver high-quality neural network architectures for various challenging data sets while reducing the designer's effort. NAS methods utilizing multi-objective evolutionary algorithms are especially useful when the objective is not only to minimize the network error but also to minimize the number of parameters (weights) or the power consumption of the inference phase. We propose a multi-objective NAS method based on Cartesian genetic programming for evolving convolutional neural networks (CNNs). The method allows approximate operations to be used in CNNs to reduce the power consumption of a target hardware implementation. During the NAS process, a suitable CNN architecture is evolved together with approximate multipliers to deliver the best trade-offs among accuracy, network size, and power consumption. The most suitable approximate multipliers are automatically selected from a library of approximate multipliers. Evolved CNNs are compared with common human-created CNNs of similar complexity on the CIFAR-10 benchmark problem.
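
To make the search loop concrete, the following is a minimal Python sketch of such a multi-objective search. Everything in it is an illustrative assumption, not the authors' implementation: the flat genotype stands in for the paper's Cartesian genetic programming encoding, the multiplier names and power figures are invented, evaluate returns a random placeholder instead of training on CIFAR-10, and a plain Pareto-dominance filter stands in for a full multi-objective selection scheme such as NSGA-II.

    import random

    # Hypothetical multiplier library: identifier -> relative power of one multiply.
    MULTIPLIER_LIBRARY = {
        "mul8_exact": 1.00,
        "mul8_approx_a": 0.71,
        "mul8_approx_b": 0.55,
    }

    def random_candidate():
        # Toy genotype: a list of layer widths plus one multiplier for the whole net.
        return {
            "layers": [random.choice([16, 32, 64]) for _ in range(random.randint(2, 5))],
            "multiplier": random.choice(list(MULTIPLIER_LIBRARY)),
        }

    def mutate(cand):
        # Point mutation: change one layer width or swap the multiplier.
        child = {"layers": list(cand["layers"]), "multiplier": cand["multiplier"]}
        if random.random() < 0.5:
            i = random.randrange(len(child["layers"]))
            child["layers"][i] = random.choice([16, 32, 64])
        else:
            child["multiplier"] = random.choice(list(MULTIPLIER_LIBRARY))
        return child

    def evaluate(cand):
        # Placeholder objectives; a real run would train the CNN on CIFAR-10 with
        # the chosen approximate multiplier emulated inside its conv layers.
        params = sum(cand["layers"]) * 1000                  # crude size proxy
        power = params * MULTIPLIER_LIBRARY[cand["multiplier"]]
        error = random.uniform(0.05, 0.40)                   # stand-in for test error
        return (error, params, power)

    def dominates(f, g):
        # f dominates g if it is no worse in every objective and better in at least one.
        return all(a <= b for a, b in zip(f, g)) and any(a < b for a, b in zip(f, g))

    def pareto_front(scored):
        return [c for c, f in scored
                if not any(dominates(g, f) for _, g in scored if g is not f)]

    population = [random_candidate() for _ in range(20)]
    for generation in range(10):
        scored = [(c, evaluate(c)) for c in population]
        parents = pareto_front(scored)
        offspring = [mutate(random.choice(parents)) for _ in range(len(population) - len(parents))]
        population = parents + offspring

Keeping the multiplier choice inside the genotype is the point of the co-search: it lets a single evolutionary run trade accuracy against network size and power rather than fixing the arithmetic up front.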

Notes

  1. http://www.fit.vutbr.cz/research/groups/ehw/approxlib/.
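
The page linked above hosts the EvoApprox library of approximate arithmetic circuits. In software experiments, an 8-bit approximate multiplier is commonly emulated by precomputing all 256 × 256 products into a lookup table and indexing it during inference. The sketch below illustrates this; the truncation-based approx_mul8 is a hypothetical stand-in, since a real experiment would fill the table from the behavioral model of a concrete library circuit.

    import numpy as np

    def approx_mul8(a, b, dropped_bits=4):
        # Illustrative stand-in: zero the lowest operand bits before multiplying.
        mask = 0xFF & ~((1 << dropped_bits) - 1)
        return (a & mask) * (b & mask)

    # Tabulate all 256 x 256 products once; inference then indexes the table
    # instead of multiplying.
    ops = np.arange(256, dtype=np.uint32)
    LUT = approx_mul8(ops[:, None], ops[None, :])        # shape (256, 256)

    x = np.random.randint(0, 256, size=1000)
    w = np.random.randint(0, 256, size=1000)
    approx = LUT[x, w].astype(np.int64)
    exact = x * w
    print("mean absolute error vs exact multiply:", np.abs(approx - exact).mean())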


Acknowledgements

This work was supported by the Czech Science Foundation project 21-13001S. The computational experiments were supported by the Ministry of Education, Youth and Sports from the Large Infrastructures for Research, Experimental Development and Innovations project “e-Infrastructure CZ – LM2018140”.

Author information

Correspondence to Michal Pinos.

Copyright information

© 2021 Springer Nature Switzerland AG

About this paper


Cite this paper

Pinos, M., Mrazek, V., Sekanina, L. (2021). Evolutionary Neural Architecture Search Supporting Approximate Multipliers. In: Hu, T., Lourenço, N., Medvet, E. (eds) Genetic Programming. EuroGP 2021. Lecture Notes in Computer Science, vol. 12691. Springer, Cham. https://doi.org/10.1007/978-3-030-72812-0_6


  • DOI: https://doi.org/10.1007/978-3-030-72812-0_6

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-72811-3

  • Online ISBN: 978-3-030-72812-0

  • eBook Packages: Computer Science (R0)
