
Accelerating the neural network controller embedded implementation on FPGA with novel dropout techniques for a solar inverter

  • Jordan Sturtz
  • Kushal Kalyan Devalampeta Surendranath
  • Maxwell Sam
  • Xingang Fu
  • Chanakya Dinesh Hingu
  • Rajab Challoo
  • Letu Qingge

Affiliations:

  • Industrial and Systems Engineering, North Carolina A&T State University
  • University of Nevada
  • Texas A&M University-Kingsville

Research output: Contribution to journal › Article › peer-review

5 Scopus citations

Abstract

Accelerating neural network (NN) controllers is important for improving the performance, efficiency, scalability, and reliability of real-time systems, particularly in resource-constrained embedded systems. This paper introduces a novel weight-dropout method for training neural network controllers in real-time closed-loop systems, aimed at accelerating the embedded implementation for solar inverters. The core idea is to eliminate small-magnitude weights during training, thereby reducing the number of necessary connections while ensuring the network's convergence. To maintain convergence, only non-diagonal elements of the weight matrices were dropped. This dropout technique was integrated into the Levenberg–Marquardt and Forward Accumulation Through Time algorithms, resulting in more efficient training for trajectory tracking. We executed the proposed training algorithm with dropout on the AWS cloud, observing a performance increase of approximately four times compared to local execution. Furthermore, implementing the neural network controller on the Intel Cyclone V Field Programmable Gate Array (FPGA) demonstrates significant improvements in computational and resource efficiency, thanks to the sparse weight matrices produced by the proposed dropout technique. This optimization enhances the suitability of the neural network controller for embedded environments. In comparison to Sturtz et al. (2023), which dropped 11 weights, our approach eliminated 18 weights, significantly boosting resource efficiency. This resulted in a 16.40% reduction in Adaptive Logic Modules (ALMs), decreasing the count to 47,426.5. Combinational Look-Up Tables (LUTs) and dedicated logic registers saw reductions of 17.80% and 15.55%, respectively. However, the impact on block memory bits is minimal, showing only a 1% improvement, indicating that memory resources are less affected by weight dropout. In contrast, the usage of M10K memory blocks (M10Ks) dropped from 97 to 87, marking a 10% improvement.
We also propose an adaptive dropout technique to further improve the previous results.
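The core pruning idea described above, dropping small-magnitude off-diagonal weights while keeping the diagonal intact to preserve convergence, can be sketched as a simple masking step. This is an illustrative sketch only: the function name, the fixed-threshold policy, and the mask-reuse convention are assumptions, not the paper's actual Levenberg–Marquardt-integrated implementation.

```python
import numpy as np

def drop_small_offdiagonal(W, threshold):
    """Zero out off-diagonal weights with magnitude below `threshold`.

    Returns the pruned matrix and a boolean keep-mask that later
    training updates could reuse to hold dropped connections at zero.
    Diagonal elements are always retained, mirroring the paper's
    convergence-preserving constraint. (Hypothetical sketch.)
    """
    W = np.asarray(W, dtype=float)
    off_diag = ~np.eye(W.shape[0], W.shape[1], dtype=bool)
    drop = off_diag & (np.abs(W) < threshold)
    W_pruned = np.where(drop, 0.0, W)
    return W_pruned, ~drop

# Example: prune a 3x3 weight matrix with a threshold of 0.05.
W = np.array([[0.9, 0.01, 0.5],
              [0.02, 0.8, 0.03],
              [0.6, 0.04, 0.7]])
W_pruned, keep = drop_small_offdiagonal(W, threshold=0.05)
# Off-diagonal entries 0.01, 0.02, 0.03, 0.04 are zeroed; the
# diagonal and the larger off-diagonal weights survive.
```

In a sparse FPGA mapping, the zeroed connections translate directly into multiply-accumulate units that never need to be instantiated, which is the source of the ALM and LUT savings the abstract reports.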
Original language: English
Article number: 101975
Journal: Pervasive and Mobile Computing
Volume: 104
DOIs
State: Published - Nov 1 2024

UN SDGs

This output contributes to the following UN Sustainable Development Goals (SDGs)

  1. SDG 8 - Decent Work and Economic Growth
  2. SDG 12 - Responsible Consumption and Production

Keywords

  • Cloudbank
  • Field Programmable Gate Array (FPGA)
  • Forward Accumulation Through Time
  • Levenberg–Marquardt algorithm
  • Neural network controller
  • Weight dropout technique
