Compact, decoder-only Transformer PINN using Fourier features to mitigate spectral bias and self-attention to capture spatiotemporal correlations

Physics-Informed Neural Networks with Learnable Fourier Features and Attention-Driven Decoding

Abstract

Physics-Informed Neural Networks (PINNs) are a framework for approximating solutions of partial differential equations with deep learning. Standard PINNs typically employ multilayer perceptrons (MLPs), but these architectures often struggle to capture complex solution dynamics due to limited expressiveness, poor scalability, and inefficient gradient propagation. In this paper, we build on PINNsFormer, a Transformer-based architecture that enhances PINN performance through self-attention. Unlike the full encoder-decoder Transformer of prior work, we propose a Decoder-Only PINNsFormer, which simplifies the architecture while preserving the ability to model long-range dependencies via attention. We further enhance the input representation with learnable Fourier features, allowing the model to adaptively encode spatial and temporal coordinates in the spectral domain without a full-size encoder. Our model achieves competitive results on benchmark PDEs, outperforming baseline PINNs and matching or exceeding the original PINNsFormer with a smaller model.
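
The repository's exact implementation is not reproduced here; the following is a minimal PyTorch sketch of the two ideas the abstract combines: a learnable Fourier feature embedding of the (x, t) coordinates and a small stack of self-attention-only (decoder-only) Transformer blocks. Names such as FourierFeatures, DecoderOnlyPINN, num_frequencies, and d_model are hypothetical and chosen for illustration.

```python
import torch
import torch.nn as nn

class FourierFeatures(nn.Module):
    """Map coordinates (x, t) to [sin(2*pi*vB), cos(2*pi*vB)] with a trainable B."""
    def __init__(self, in_dim=2, num_frequencies=32, scale=1.0):
        super().__init__()
        # Trainable frequency matrix; freezing B at a Gaussian draw would
        # recover classic random Fourier features.
        self.B = nn.Parameter(scale * torch.randn(in_dim, num_frequencies))

    def forward(self, coords):  # coords: (batch, seq, in_dim)
        proj = 2 * torch.pi * coords @ self.B
        return torch.cat([torch.sin(proj), torch.cos(proj)], dim=-1)

class DecoderOnlyPINN(nn.Module):
    """Fourier embedding -> self-attention blocks -> predicted u(x, t)."""
    def __init__(self, d_model=64, nhead=4, num_layers=2, num_frequencies=32):
        super().__init__()
        self.embed = FourierFeatures(num_frequencies=num_frequencies)
        self.proj = nn.Linear(2 * num_frequencies, d_model)
        # A TransformerEncoderLayer has no cross-attention, so stacking them
        # serves as a decoder-only (self-attention-only) backbone.
        layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=nhead, dim_feedforward=4 * d_model,
            batch_first=True, activation="gelu")
        self.blocks = nn.TransformerEncoder(layer, num_layers=num_layers)
        self.head = nn.Linear(d_model, 1)

    def forward(self, coords):  # coords: (batch, seq, 2) holding (x, t)
        h = self.proj(self.embed(coords))
        return self.head(self.blocks(h))  # predicted u: (batch, seq, 1)
```

Training would proceed as in any PINN: differentiate the predicted u with torch.autograd.grad to form the PDE residual and minimize it together with the initial- and boundary-condition losses. The design point the sketch illustrates is that dropping the encoder stack and cross-attention leaves only self-attention blocks, which is what makes the model smaller than the encoder-decoder PINNsFormer while retaining attention over spatiotemporal correlations.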

Accepted to "The Reach and Limits of AI for Scientific Discovery" at NeurIPS 2025.
