TY - JOUR
T1 - A randomized neural network based Petrov–Galerkin method for approximating the solution of fractional order boundary value problems
AU - Roop, John
PY - 2024/8/1
Y1 - 2024/8/1
N2 - This article presents the implementation of a randomized neural network (RNN) approach for approximating the solution of fractional order boundary value problems using a Petrov–Galerkin framework with Lagrange basis test functions. Traditional methods, such as Physics Informed Neural Networks (PINNs), rely on standard deep learning techniques, which suffer from a computational bottleneck. In contrast, RNNs offer an alternative by employing a random structure with random coefficients and solving only for the output layer. Using RNNs as trial functions and piecewise Lagrange polynomials as test functions allows the application of numerical analysis principles. The article covers the construction and properties of the RNN basis, the definition and solution of fractional boundary value problems, and the implementation of the RNN Petrov–Galerkin method. We derive the stiffness matrix and solve the resulting system by least squares. Error analysis shows that the method satisfies the hypotheses of the Lax–Milgram lemma together with a Céa inequality, ensuring optimal error estimates depending on the regularity of the exact solution. Computational experiments demonstrate the method's efficacy, including multiple cases with both regular and irregular solutions. The results highlight the utility of RNN-based Petrov–Galerkin methods in solving fractional differential equations with experimental convergence.
AB - This article presents the implementation of a randomized neural network (RNN) approach for approximating the solution of fractional order boundary value problems using a Petrov–Galerkin framework with Lagrange basis test functions. Traditional methods, such as Physics Informed Neural Networks (PINNs), rely on standard deep learning techniques, which suffer from a computational bottleneck. In contrast, RNNs offer an alternative by employing a random structure with random coefficients and solving only for the output layer. Using RNNs as trial functions and piecewise Lagrange polynomials as test functions allows the application of numerical analysis principles. The article covers the construction and properties of the RNN basis, the definition and solution of fractional boundary value problems, and the implementation of the RNN Petrov–Galerkin method. We derive the stiffness matrix and solve the resulting system by least squares. Error analysis shows that the method satisfies the hypotheses of the Lax–Milgram lemma together with a Céa inequality, ensuring optimal error estimates depending on the regularity of the exact solution. Computational experiments demonstrate the method's efficacy, including multiple cases with both regular and irregular solutions. The results highlight the utility of RNN-based Petrov–Galerkin methods in solving fractional differential equations with experimental convergence.
KW - Fractional derivative
KW - Petrov–Galerkin method
KW - Randomized neural network
UR - https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=85202927494&origin=inward
UR - https://www.scopus.com/inward/citedby.uri?partnerID=HzOxMe3b&scp=85202927494&origin=inward
U2 - 10.1016/j.rinam.2024.100493
DO - 10.1016/j.rinam.2024.100493
M3 - Article
SN - 2590-0374
VL - 23
JO - Results in Applied Mathematics
JF - Results in Applied Mathematics
M1 - 100493
ER -