diff --git a/Airfoil-Design-AirfRANS/LICENSE b/Airfoil-Design-AirfRANS/LICENSE
deleted file mode 100644
index c734a1d..0000000
--- a/Airfoil-Design-AirfRANS/LICENSE
+++ /dev/null
@@ -1,540 +0,0 @@
-## ODC Open Database License (ODbL)
-
-### Preamble
-
-The Open Database License (ODbL) is a license agreement intended to
-allow users to freely share, modify, and use this Database while
-maintaining this same freedom for others. Many databases are covered by
-copyright, and therefore this document licenses these rights. Some
-jurisdictions, mainly in the European Union, have specific rights that
-cover databases, and so the ODbL addresses these rights, too. Finally,
-the ODbL is also an agreement in contract for users of this Database to
-act in certain ways in return for accessing this Database.
-
-Databases can contain a wide variety of types of content (images,
-audiovisual material, and sounds all in the same database, for example),
-and so the ODbL only governs the rights over the Database, and not the
-contents of the Database individually. Licensors should use the ODbL
-together with another license for the contents, if the contents have a
-single set of rights that uniformly covers all of the contents. If the
-contents have multiple sets of different rights, Licensors should
-describe what rights govern what contents together in the individual
-record or in some other way that clarifies what rights apply.
-
-Sometimes the contents of a database, or the database itself, can be
-covered by other rights not addressed here (such as private contracts,
-trade mark over the name, or privacy rights / data protection rights
-over information in the contents), and so you are advised that you may
-have to consult other documents or clear other rights before doing
-activities not covered by this License.
-
-------
-
-The Licensor (as defined below)
-
-and
-
-You (as defined below)
-
-agree as follows:
-
-### 1.0 Definitions of Capitalised Words
-
-"Collective Database" – Means this Database in unmodified form as part
-of a collection of independent databases in themselves that together are
-assembled into a collective whole. A work that constitutes a Collective
-Database will not be considered a Derivative Database.
-
-"Convey" – As a verb, means Using the Database, a Derivative Database,
-or the Database as part of a Collective Database in any way that enables
-a Person to make or receive copies of the Database or a Derivative
-Database. Conveying does not include interaction with a user through a
-computer network, or creating and Using a Produced Work, where no
-transfer of a copy of the Database or a Derivative Database occurs.
-"Contents" – The contents of this Database, which includes the
-information, independent works, or other material collected into the
-Database. For example, the contents of the Database could be factual
-data or works such as images, audiovisual material, text, or sounds.
-
-"Database" – A collection of material (the Contents) arranged in a
-systematic or methodical way and individually accessible by electronic
-or other means offered under the terms of this License.
-
-"Database Directive" – Means Directive 96/9/EC of the European
-Parliament and of the Council of 11 March 1996 on the legal protection
-of databases, as amended or succeeded.
-
-"Database Right" – Means rights resulting from the Chapter III ("sui
-generis") rights in the Database Directive (as amended and as transposed
-by member states), which includes the Extraction and Re-utilisation of
-the whole or a Substantial part of the Contents, as well as any similar
-rights available in the relevant jurisdiction under Section 10.4.
-
-"Derivative Database" – Means a database based upon the Database, and
-includes any translation, adaptation, arrangement, modification, or any
-other alteration of the Database or of a Substantial part of the
-Contents. This includes, but is not limited to, Extracting or
-Re-utilising the whole or a Substantial part of the Contents in a new
-Database.
-
-"Extraction" – Means the permanent or temporary transfer of all or a
-Substantial part of the Contents to another medium by any means or in
-any form.
-
-"License" – Means this license agreement and is both a license of rights
-such as copyright and Database Rights and an agreement in contract.
-
-"Licensor" – Means the Person that offers the Database under the terms
-of this License.
-
-"Person" – Means a natural or legal person or a body of persons
-corporate or incorporate.
-
-"Produced Work" – a work (such as an image, audiovisual material, text,
-or sounds) resulting from using the whole or a Substantial part of the
-Contents (via a search or other query) from this Database, a Derivative
-Database, or this Database as part of a Collective Database.
-
-"Publicly" – means to Persons other than You or under Your control by
-either more than 50% ownership or by the power to direct their
-activities (such as contracting with an independent consultant).
-
-"Re-utilisation" – means any form of making available to the public all
-or a Substantial part of the Contents by the distribution of copies, by
-renting, by online or other forms of transmission.
-
-"Substantial" – Means substantial in terms of quantity or quality or a
-combination of both. The repeated and systematic Extraction or
-Re-utilisation of insubstantial parts of the Contents may amount to the
-Extraction or Re-utilisation of a Substantial part of the Contents.
-
-"Use" – As a verb, means doing any act that is restricted by copyright
-or Database Rights whether in the original medium or any other; and
-includes without limitation distributing, copying, publicly performing,
-publicly displaying, and preparing derivative works of the Database, as
-well as modifying the Database as may be technically necessary to use it
-in a different mode or format.
-
-"You" – Means a Person exercising rights under this License who has not
-previously violated the terms of this License with respect to the
-Database, or who has received express permission from the Licensor to
-exercise rights under this License despite a previous violation.
-
-Words in the singular include the plural and vice versa.
-
-### 2.0 What this License covers
-
-2.1. Legal effect of this document. This License is:
-
- a. A license of applicable copyright and neighbouring rights;
-
- b. A license of the Database Right; and
-
- c. An agreement in contract between You and the Licensor.
-
-2.2 Legal rights covered. This License covers the legal rights in the
-Database, including:
-
- a. Copyright. Any copyright or neighbouring rights in the Database.
- The copyright licensed includes any individual elements of the
- Database, but does not cover the copyright over the Contents
- independent of this Database. See Section 2.4 for details. Copyright
- law varies between jurisdictions, but is likely to cover: the Database
- model or schema, which is the structure, arrangement, and organisation
- of the Database, and can also include the Database tables and table
- indexes; the data entry and output sheets; and the Field names of
- Contents stored in the Database;
-
- b. Database Rights. Database Rights only extend to the Extraction and
- Re-utilisation of the whole or a Substantial part of the Contents.
- Database Rights can apply even when there is no copyright over the
- Database. Database Rights can also apply when the Contents are removed
- from the Database and are selected and arranged in a way that would
- not infringe any applicable copyright; and
-
- c. Contract. This is an agreement between You and the Licensor for
- access to the Database. In return you agree to certain conditions of
- use on this access as outlined in this License.
-
-2.3 Rights not covered.
-
- a. This License does not apply to computer programs used in the making
- or operation of the Database;
-
- b. This License does not cover any patents over the Contents or the
- Database; and
-
- c. This License does not cover any trademarks associated with the
- Database.
-
-2.4 Relationship to Contents in the Database. The individual items of
-the Contents contained in this Database may be covered by other rights,
-including copyright, patent, data protection, privacy, or personality
-rights, and this License does not cover any rights (other than Database
-Rights or in contract) in individual Contents contained in the Database.
-For example, if used on a Database of images (the Contents), this
-License would not apply to copyright over individual images, which could
-have their own separate licenses, or one single license covering all of
-the rights over the images.
-
-### 3.0 Rights granted
-
-3.1 Subject to the terms and conditions of this License, the Licensor
-grants to You a worldwide, royalty-free, non-exclusive, terminable (but
-only under Section 9) license to Use the Database for the duration of
-any applicable copyright and Database Rights. These rights explicitly
-include commercial use, and do not exclude any field of endeavour. To
-the extent possible in the relevant jurisdiction, these rights may be
-exercised in all media and formats whether now known or created in the
-future.
-
-The rights granted cover, for example:
-
- a. Extraction and Re-utilisation of the whole or a Substantial part of
- the Contents;
-
- b. Creation of Derivative Databases;
-
- c. Creation of Collective Databases;
-
- d. Creation of temporary or permanent reproductions by any means and
- in any form, in whole or in part, including of any Derivative
- Databases or as a part of Collective Databases; and
-
- e. Distribution, communication, display, lending, making available, or
- performance to the public by any means and in any form, in whole or in
- part, including of any Derivative Database or as a part of Collective
- Databases.
-
-3.2 Compulsory license schemes. For the avoidance of doubt:
-
- a. Non-waivable compulsory license schemes. In those jurisdictions in
- which the right to collect royalties through any statutory or
- compulsory licensing scheme cannot be waived, the Licensor reserves
- the exclusive right to collect such royalties for any exercise by You
- of the rights granted under this License;
-
- b. Waivable compulsory license schemes. In those jurisdictions in
- which the right to collect royalties through any statutory or
- compulsory licensing scheme can be waived, the Licensor waives the
- exclusive right to collect such royalties for any exercise by You of
- the rights granted under this License; and,
-
- c. Voluntary license schemes. The Licensor waives the right to collect
- royalties, whether individually or, in the event that the Licensor is
- a member of a collecting society that administers voluntary licensing
- schemes, via that society, from any exercise by You of the rights
- granted under this License.
-
-3.3 The right to release the Database under different terms, or to stop
-distributing or making available the Database, is reserved. Note that
-this Database may be multiple-licensed, and so You may have the choice
-of using alternative licenses for this Database. Subject to Section
-10.4, all other rights not expressly granted by Licensor are reserved.
-
-### 4.0 Conditions of Use
-
-4.1 The rights granted in Section 3 above are expressly made subject to
-Your complying with the following conditions of use. These are important
-conditions of this License, and if You fail to follow them, You will be
-in material breach of its terms.
-
-4.2 Notices. If You Publicly Convey this Database, any Derivative
-Database, or the Database as part of a Collective Database, then You
-must:
-
- a. Do so only under the terms of this License or another license
- permitted under Section 4.4;
-
- b. Include a copy of this License (or, as applicable, a license
- permitted under Section 4.4) or its Uniform Resource Identifier (URI)
- with the Database or Derivative Database, including both in the
- Database or Derivative Database and in any relevant documentation; and
-
- c. Keep intact any copyright or Database Right notices and notices
- that refer to this License.
-
- d. If it is not possible to put the required notices in a particular
- file due to its structure, then You must include the notices in a
- location (such as a relevant directory) where users would be likely to
- look for it.
-
-4.3 Notice for using output (Contents). Creating and Using a Produced
-Work does not require the notice in Section 4.2. However, if you
-Publicly Use a Produced Work, You must include a notice associated with
-the Produced Work reasonably calculated to make any Person that uses,
-views, accesses, interacts with, or is otherwise exposed to the Produced
-Work aware that Content was obtained from the Database, Derivative
-Database, or the Database as part of a Collective Database, and that it
-is available under this License.
-
- a. Example notice. The following text will satisfy notice under
- Section 4.3:
-
- Contains information from DATABASE NAME, which is made available
- here under the Open Database License (ODbL).
-
-DATABASE NAME should be replaced with the name of the Database and a
-hyperlink to the URI of the Database. "Open Database License" should
-contain a hyperlink to the URI of the text of this License. If
-hyperlinks are not possible, You should include the plain text of the
-required URI's with the above notice.
-
-4.4 Share alike.
-
- a. Any Derivative Database that You Publicly Use must be only under
- the terms of:
-
- i. This License;
-
- ii. A later version of this License similar in spirit to this
- License; or
-
- iii. A compatible license.
-
- If You license the Derivative Database under one of the licenses
- mentioned in (iii), You must comply with the terms of that license.
-
- b. For the avoidance of doubt, Extraction or Re-utilisation of the
- whole or a Substantial part of the Contents into a new database is a
- Derivative Database and must comply with Section 4.4.
-
- c. Derivative Databases and Produced Works. A Derivative Database is
- Publicly Used and so must comply with Section 4.4. if a Produced Work
- created from the Derivative Database is Publicly Used.
-
- d. Share Alike and additional Contents. For the avoidance of doubt,
- You must not add Contents to Derivative Databases under Section 4.4 a
- that are incompatible with the rights granted under this License.
-
- e. Compatible licenses. Licensors may authorise a proxy to determine
- compatible licenses under Section 4.4 a iii. If they do so, the
- authorised proxy's public statement of acceptance of a compatible
- license grants You permission to use the compatible license.
-
-
-4.5 Limits of Share Alike. The requirements of Section 4.4 do not apply
-in the following:
-
- a. For the avoidance of doubt, You are not required to license
- Collective Databases under this License if You incorporate this
- Database or a Derivative Database in the collection, but this License
- still applies to this Database or a Derivative Database as a part of
- the Collective Database;
-
- b. Using this Database, a Derivative Database, or this Database as
- part of a Collective Database to create a Produced Work does not
- create a Derivative Database for purposes of Section 4.4; and
-
- c. Use of a Derivative Database internally within an organisation is
- not to the public and therefore does not fall under the requirements
- of Section 4.4.
-
-4.6 Access to Derivative Databases. If You Publicly Use a Derivative
-Database or a Produced Work from a Derivative Database, You must also
-offer to recipients of the Derivative Database or Produced Work a copy
-in a machine readable form of:
-
- a. The entire Derivative Database; or
-
- b. A file containing all of the alterations made to the Database or
- the method of making the alterations to the Database (such as an
- algorithm), including any additional Contents, that make up all the
- differences between the Database and the Derivative Database.
-
-The Derivative Database (under a.) or alteration file (under b.) must be
-available at no more than a reasonable production cost for physical
-distributions and free of charge if distributed over the internet.
-
-4.7 Technological measures and additional terms
-
- a. This License does not allow You to impose (except subject to
- Section 4.7 b.) any terms or any technological measures on the
- Database, a Derivative Database, or the whole or a Substantial part of
- the Contents that alter or restrict the terms of this License, or any
- rights granted under it, or have the effect or intent of restricting
- the ability of any person to exercise those rights.
-
- b. Parallel distribution. You may impose terms or technological
- measures on the Database, a Derivative Database, or the whole or a
- Substantial part of the Contents (a "Restricted Database") in
- contravention of Section 4.74 a. only if You also make a copy of the
- Database or a Derivative Database available to the recipient of the
- Restricted Database:
-
- i. That is available without additional fee;
-
- ii. That is available in a medium that does not alter or restrict
- the terms of this License, or any rights granted under it, or have
- the effect or intent of restricting the ability of any person to
- exercise those rights (an "Unrestricted Database"); and
-
- iii. The Unrestricted Database is at least as accessible to the
- recipient as a practical matter as the Restricted Database.
-
- c. For the avoidance of doubt, You may place this Database or a
- Derivative Database in an authenticated environment, behind a
- password, or within a similar access control scheme provided that You
- do not alter or restrict the terms of this License or any rights
- granted under it or have the effect or intent of restricting the
- ability of any person to exercise those rights.
-
-4.8 Licensing of others. You may not sublicense the Database. Each time
-You communicate the Database, the whole or Substantial part of the
-Contents, or any Derivative Database to anyone else in any way, the
-Licensor offers to the recipient a license to the Database on the same
-terms and conditions as this License. You are not responsible for
-enforcing compliance by third parties with this License, but You may
-enforce any rights that You have over a Derivative Database. You are
-solely responsible for any modifications of a Derivative Database made
-by You or another Person at Your direction. You may not impose any
-further restrictions on the exercise of the rights granted or affirmed
-under this License.
-
-### 5.0 Moral rights
-
-5.1 Moral rights. This section covers moral rights, including any rights
-to be identified as the author of the Database or to object to treatment
-that would otherwise prejudice the author's honour and reputation, or
-any other derogatory treatment:
-
- a. For jurisdictions allowing waiver of moral rights, Licensor waives
- all moral rights that Licensor may have in the Database to the fullest
- extent possible by the law of the relevant jurisdiction under Section
- 10.4;
-
- b. If waiver of moral rights under Section 5.1 a in the relevant
- jurisdiction is not possible, Licensor agrees not to assert any moral
- rights over the Database and waives all claims in moral rights to the
- fullest extent possible by the law of the relevant jurisdiction under
- Section 10.4; and
-
- c. For jurisdictions not allowing waiver or an agreement not to assert
- moral rights under Section 5.1 a and b, the author may retain their
- moral rights over certain aspects of the Database.
-
-Please note that some jurisdictions do not allow for the waiver of moral
-rights, and so moral rights may still subsist over the Database in some
-jurisdictions.
-
-### 6.0 Fair dealing, Database exceptions, and other rights not affected
-
-6.1 This License does not affect any rights that You or anyone else may
-independently have under any applicable law to make any use of this
-Database, including without limitation:
-
- a. Exceptions to the Database Right including: Extraction of Contents
- from non-electronic Databases for private purposes, Extraction for
- purposes of illustration for teaching or scientific research, and
- Extraction or Re-utilisation for public security or an administrative
- or judicial procedure.
-
- b. Fair dealing, fair use, or any other legally recognised limitation
- or exception to infringement of copyright or other applicable laws.
-
-6.2 This License does not affect any rights of lawful users to Extract
-and Re-utilise insubstantial parts of the Contents, evaluated
-quantitatively or qualitatively, for any purposes whatsoever, including
-creating a Derivative Database (subject to other rights over the
-Contents, see Section 2.4). The repeated and systematic Extraction or
-Re-utilisation of insubstantial parts of the Contents may however amount
-to the Extraction or Re-utilisation of a Substantial part of the
-Contents.
-
-### 7.0 Warranties and Disclaimer
-
-7.1 The Database is licensed by the Licensor "as is" and without any
-warranty of any kind, either express, implied, or arising by statute,
-custom, course of dealing, or trade usage. Licensor specifically
-disclaims any and all implied warranties or conditions of title,
-non-infringement, accuracy or completeness, the presence or absence of
-errors, fitness for a particular purpose, merchantability, or otherwise.
-Some jurisdictions do not allow the exclusion of implied warranties, so
-this exclusion may not apply to You.
-
-### 8.0 Limitation of liability
-
-8.1 Subject to any liability that may not be excluded or limited by law,
-the Licensor is not liable for, and expressly excludes, all liability
-for loss or damage however and whenever caused to anyone by any use
-under this License, whether by You or by anyone else, and whether caused
-by any fault on the part of the Licensor or not. This exclusion of
-liability includes, but is not limited to, any special, incidental,
-consequential, punitive, or exemplary damages such as loss of revenue,
-data, anticipated profits, and lost business. This exclusion applies
-even if the Licensor has been advised of the possibility of such
-damages.
-
-8.2 If liability may not be excluded by law, it is limited to actual and
-direct financial loss to the extent it is caused by proved negligence on
-the part of the Licensor.
-
-### 9.0 Termination of Your rights under this License
-
-9.1 Any breach by You of the terms and conditions of this License
-automatically terminates this License with immediate effect and without
-notice to You. For the avoidance of doubt, Persons who have received the
-Database, the whole or a Substantial part of the Contents, Derivative
-Databases, or the Database as part of a Collective Database from You
-under this License will not have their licenses terminated provided
-their use is in full compliance with this License or a license granted
-under Section 4.8 of this License. Sections 1, 2, 7, 8, 9 and 10 will
-survive any termination of this License.
-
-9.2 If You are not in breach of the terms of this License, the Licensor
-will not terminate Your rights under it.
-
-9.3 Unless terminated under Section 9.1, this License is granted to You
-for the duration of applicable rights in the Database.
-
-9.4 Reinstatement of rights. If you cease any breach of the terms and
-conditions of this License, then your full rights under this License
-will be reinstated:
-
- a. Provisionally and subject to permanent termination until the 60th
- day after cessation of breach;
-
- b. Permanently on the 60th day after cessation of breach unless
- otherwise reasonably notified by the Licensor; or
-
- c. Permanently if reasonably notified by the Licensor of the
- violation, this is the first time You have received notice of
- violation of this License from the Licensor, and You cure the
- violation prior to 30 days after your receipt of the notice.
-
-Persons subject to permanent termination of rights are not eligible to
-be a recipient and receive a license under Section 4.8.
-
-9.5 Notwithstanding the above, Licensor reserves the right to release
-the Database under different license terms or to stop distributing or
-making available the Database. Releasing the Database under different
-license terms or stopping the distribution of the Database will not
-withdraw this License (or any other license that has been, or is
-required to be, granted under the terms of this License), and this
-License will continue in full force and effect unless terminated as
-stated above.
-
-### 10.0 General
-
-10.1 If any provision of this License is held to be invalid or
-unenforceable, that must not affect the validity or enforceability of
-the remainder of the terms and conditions of this License and each
-remaining provision of this License shall be valid and enforced to the
-fullest extent permitted by law.
-
-10.2 This License is the entire agreement between the parties with
-respect to the rights granted here over the Database. It replaces any
-earlier understandings, agreements or representations with respect to
-the Database.
-
-10.3 If You are in breach of the terms of this License, You will not be
-entitled to rely on the terms of this License or to complain of any
-breach by the Licensor.
-
-10.4 Choice of law. This License takes effect in and will be governed by
-the laws of the relevant jurisdiction in which the License terms are
-sought to be enforced. If the standard suite of rights granted under
-applicable copyright law and Database Rights in the relevant
-jurisdiction includes additional rights not granted under this License,
-these additional rights are granted in this License in order to meet the
-terms of this License.
diff --git a/Airfoil-Design-AirfRANS/README.md b/Airfoil-Design-AirfRANS/README.md
deleted file mode 100644
index 9a1ee7e..0000000
--- a/Airfoil-Design-AirfRANS/README.md
+++ /dev/null
@@ -1,89 +0,0 @@
-# Transolver for Airfoil Design
-
-**Paper Correction:** There is a typo in our description of the evaluation metrics for the volume and surface physics fields. The paper's reported results are MSE (not Relative L2), exactly the same metric as in the AirfRANS paper. We sincerely apologize for this mistake.
-
-We test [Transolver](https://arxiv.org/abs/2402.02366) on practical design tasks. The airfoil design task requires the model to estimate the physical quantities on the surface of and surrounding a 2D airfoil under different Reynolds numbers and angles of attack.
-
-
-
-
-Figure 1. Airfoil design task. Left: surrounding pressure; Right: x-direction wind speed.
-
-
-## Get Started
-
-This part of code is developed based on the [[AirfRANS]](https://github.com/Extrality/AirfRANS).
-
-1. Install Python 3.8. For convenience, execute the following command to install the dependencies.
-
-```bash
-pip install -r requirements.txt
-```
-
-Note: You need to install [pytorch_geometric](https://github.com/pyg-team/pytorch_geometric).
-
-2. Prepare Data.
-
-The experiment data is provided by [[AirfRANS]](https://github.com/Extrality/AirfRANS). You can directly download it with this [link](https://data.isir.upmc.fr/extrality/NeurIPS_2022/Dataset.zip) (9.3GB).
-
-3. Train and evaluate the model. We provide experiment scripts under the folder `./scripts/`. You can reproduce the experiment results with the following examples:
-
-```bash
-bash scripts/Transolver.sh # for Training Transolver (will take 20-24 hours on a single A100)
-bash scripts/Evaluation.sh # for Evaluation
-bash scripts/GraphSAGE.sh # for Training GraphSAGE (will take 30-36 hours on a single A100)
-```
-
-Note: You need to change the argument `--my_path` to your dataset path.
-
-4. Test model with different settings. This benchmark supports four types of settings.
-
-| Settings | Argument |
-| -------------------------------------------- | ------------- |
-| Use full data | `-t full` |
-| Use scarce data | `-t scarce` |
-| Test on out-of-distribution Reynolds | `-t reynolds` |
-| Test on out-of-distribution Angle of Attacks | `-t aoa` |
-
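-For example, `python main.py --model Transolver -t aoa --my_path /path/to/Dataset -s 1` trains Transolver on the out-of-distribution angle-of-attack split and then scores it on the corresponding test set (argument names as defined in `./main.py`; the dataset path is a placeholder).
-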
-5. Develop your own model (a minimal interface sketch follows this list). Here are the instructions:
-
- - Add the model file under folder `./models/`.
-
-  - Add the training details in `./params.yaml`. If you do not want to change the settings, just copy another model's configuration.
-
- - Add the model configuration into `./main.py`.
-
- - Add a script file under folder `./scripts/` and change the argument `--model`.
-
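-As a reference, below is a minimal sketch of the interface that `./main.py` expects from a non-Transolver model. The file name `MyModel.py` and the hidden layer are hypothetical; the `(hparams, encoder, decoder)` constructor and the `data` argument mirror `./models/GraphSAGE.py`.
-
-```python
-# ./models/MyModel.py (hypothetical sketch)
-import torch.nn as nn
-
-class MyModel(nn.Module):
-    def __init__(self, hparams, encoder, decoder):
-        super(MyModel, self).__init__()
-        self.encoder = encoder  # MLP sized by hparams['encoder'] in params.yaml
-        self.decoder = decoder  # MLP sized by hparams['decoder'] in params.yaml
-        # Hypothetical hidden layer bridging the encoder and decoder widths
-        self.hidden = nn.Linear(hparams['encoder'][-1], hparams['decoder'][0])
-        self.activation = nn.ReLU()
-
-    def forward(self, data):
-        # data is a torch_geometric.data.Data object with fields pos, x, y, surf
-        z = self.encoder(data.x)
-        z = self.activation(self.hidden(z))
-        return self.decoder(z)
-```
-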
-## Main Results
-
-Transolver consistently achieves the best performance on these practical design tasks.
-
-
-
-
-Table 1. Model comparisons on the practical design tasks.
-
-
-## Citation
-
-If you find this repo useful, please cite our paper.
-
-```
-@inproceedings{wu2024Transolver,
- title={Transolver: A Fast Transformer Solver for PDEs on General Geometries},
- author={Haixu Wu and Huakun Luo and Haowen Wang and Jianmin Wang and Mingsheng Long},
- booktitle={International Conference on Machine Learning},
- year={2024}
-}
-```
-
-## Contact
-
-If you have any questions or want to use the code, please contact [wuhx23@mails.tsinghua.edu.cn](mailto:wuhx23@mails.tsinghua.edu.cn).
-
-## Acknowledgement
-
-We appreciate the following GitHub repo a lot for its valuable code base and dataset:
-
-https://github.com/Extrality/AirfRANS
diff --git a/Airfoil-Design-AirfRANS/dataset/dataset.py b/Airfoil-Design-AirfRANS/dataset/dataset.py
deleted file mode 100644
index c2db892..0000000
--- a/Airfoil-Design-AirfRANS/dataset/dataset.py
+++ /dev/null
@@ -1,245 +0,0 @@
-import numpy as np
-import pyvista as pv
-from utils.reorganize import reorganize
-import os.path as osp
-
-import torch
-from torch_geometric.data import Data
-
-from tqdm import tqdm
-
-
-def cell_sampling_2d(cell_points, cell_attr=None):
- '''
-    Sample points in a two dimensional cell via parallelogram sampling and triangle interpolation via barycentric coordinates.
-    The vertices have to be ordered consecutively around the quadrilateral, so that (0, 1, 3) and (2, 3, 1) form its two triangles.
-
-    Args:
-        cell_points (array): Vertices of the 2 dimensional cells. Shape (N, 4, d) for N cells with 4 vertices in d >= 2 dimensions;
-            only the first two coordinates are used.
-        cell_attr (array, optional): Features at the vertices of the 2 dimensional cells. Shape (N, 4, k) for N cells with 4 vertices
-            and k features. If given shape (N, 4), it will automatically be reshaped to an (N, 4, 1) array. Default: ``None``
- '''
- # Sampling via triangulation of the cell and parallelogram sampling
- v0, v1 = cell_points[:, 1] - cell_points[:, 0], cell_points[:, 3] - cell_points[:, 0]
- v2, v3 = cell_points[:, 3] - cell_points[:, 2], cell_points[:, 1] - cell_points[:, 2]
- a0, a1 = np.abs(np.linalg.det(np.hstack([v0[:, :2], v1[:, :2]]).reshape(-1, 2, 2))), np.abs(
- np.linalg.det(np.hstack([v2[:, :2], v3[:, :2]]).reshape(-1, 2, 2)))
- p = a0 / (a0 + a1)
- index_triangle = np.random.binomial(1, p)[:, None]
- u = np.random.uniform(size=(len(p), 2))
- sampled_point = index_triangle * (u[:, 0:1] * v0 + u[:, 1:2] * v1) + (1 - index_triangle) * (
- u[:, 0:1] * v2 + u[:, 1:2] * v3)
- sampled_point_mirror = index_triangle * ((1 - u[:, 0:1]) * v0 + (1 - u[:, 1:2]) * v1) + (1 - index_triangle) * (
- (1 - u[:, 0:1]) * v2 + (1 - u[:, 1:2]) * v3)
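-    # Parallelogram trick: samples with u0 + u1 > 1 fall outside the triangle and are
-    # replaced by their mirror image (1 - u0, 1 - u1), which lands inside it.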
- reflex = (u.sum(axis=1) > 1)
- sampled_point[reflex] = sampled_point_mirror[reflex]
-
- # Interpolation on a triangle via barycentric coordinates
- if cell_attr is not None:
- t0, t1, t2 = np.zeros_like(v0), index_triangle * v0 + (1 - index_triangle) * v2, index_triangle * v1 + (
- 1 - index_triangle) * v3
- w = (t1[:, 1] - t2[:, 1]) * (t0[:, 0] - t2[:, 0]) + (t2[:, 0] - t1[:, 0]) * (t0[:, 1] - t2[:, 1])
- w0 = (t1[:, 1] - t2[:, 1]) * (sampled_point[:, 0] - t2[:, 0]) + (t2[:, 0] - t1[:, 0]) * (
- sampled_point[:, 1] - t2[:, 1])
- w1 = (t2[:, 1] - t0[:, 1]) * (sampled_point[:, 0] - t2[:, 0]) + (t0[:, 0] - t2[:, 0]) * (
- sampled_point[:, 1] - t2[:, 1])
- w0, w1 = w0 / w, w1 / w
- w2 = 1 - w0 - w1
-
- if len(cell_attr.shape) == 2:
- cell_attr = cell_attr[:, :, None]
-        attr0 = index_triangle * cell_attr[:, 0] + (1 - index_triangle) * cell_attr[:, 2]
-        # Triangle (0, 1, 3) is used when index_triangle == 1 and triangle (2, 3, 1) otherwise,
-        # so the vertices matching t1 and t2 are swapped between the two cases.
-        attr1 = index_triangle * cell_attr[:, 1] + (1 - index_triangle) * cell_attr[:, 3]
-        attr2 = index_triangle * cell_attr[:, 3] + (1 - index_triangle) * cell_attr[:, 1]
- sampled_attr = w0[:, None] * attr0 + w1[:, None] * attr1 + w2[:, None] * attr2
-
- sampled_point += index_triangle * cell_points[:, 0] + (1 - index_triangle) * cell_points[:, 2]
-
- return np.hstack([sampled_point[:, :2], sampled_attr]) if cell_attr is not None else sampled_point[:, :2]
-
-
-def cell_sampling_1d(line_points, line_attr=None):
- '''
- Sample points in a one dimensional cell via linear sampling and interpolation.
-
- Args:
-        line_points (array): Endpoints of the 1 dimensional cells. Shape (N, 2, d) for N cells with 2 endpoints in d >= 2 dimensions;
-            only the first two coordinates are used.
-        line_attr (array, optional): Features at the endpoints of the 1 dimensional cells. Shape (N, 2, k) for N cells with 2 endpoints
-            and k features. If given shape (N, 2), it will automatically be reshaped to an (N, 2, 1) array. Default: ``None``
- '''
- # Linear sampling
- u = np.random.uniform(size=(len(line_points), 1))
- sampled_point = u * line_points[:, 0] + (1 - u) * line_points[:, 1]
-
- # Linear interpolation
- if line_attr is not None:
- if len(line_attr.shape) == 2:
- line_attr = line_attr[:, :, None]
- sampled_attr = u * line_attr[:, 0] + (1 - u) * line_attr[:, 1]
-
- return np.hstack([sampled_point[:, :2], sampled_attr]) if line_attr is not None else sampled_point[:, :2]
-
-
-def Dataset(set, norm=False, coef_norm=None, crop=None, sample=None, n_boot=int(5e5), surf_ratio=.1,
- my_path='/data/path'):
- '''
-    Create a list of simulations to feed into a PyTorch Geometric DataLoader. Simulations are transformed either by keeping the vertices of the CFD mesh or
-    by sampling points (uniformly or via the mesh density) in the simulation cells.
-
-    Args:
-        set (list): List of geometry names to include in the dataset.
-        norm (bool, optional): If norm is set to ``True``, the mean and the standard deviation of the dataset will be computed and returned.
-            Moreover, the dataset will be normalized by these quantities. Ignored when ``coef_norm`` is not None. Default: ``False``
-        coef_norm (tuple, optional): If not None, this has to be a tuple of the form (mean input, std input, mean output, std output).
-            The dataset generated will be normalized by those quantities. Default: ``None``
-        crop (list, optional): List of the vertices of the rectangular [xmin, xmax, ymin, ymax] box used to crop simulations. Default: ``None``
-        sample (string, optional): Type of sampling. If ``None``, no sampling strategy is applied and the nodes of the CFD mesh are returned.
-            If ``uniform`` or ``mesh`` is chosen, uniform or mesh density sampling is applied on the domain. Default: ``None``
-        n_boot (int, optional): Used only if sample is not None, gives the size of the sampling for each simulation. Default: ``int(5e5)``
-        surf_ratio (float, optional): Used only if sample is not None, gives the ratio of points sampled on the airfoil surface with respect to
-            points sampled in the volume. Default: ``0.1``
-        my_path (string, optional): Root directory containing the simulation folders. Default: ``'/data/path'``
- '''
- if norm and coef_norm is not None:
-        raise ValueError('norm=True cannot be combined with coef_norm: pass either norm=True or a coef_norm tuple, not both')
-
- dataset = []
-
- for k, s in enumerate(tqdm(set)):
- # Get the 3D mesh, add the signed distance function and slice it to return in 2D
- internal = pv.read(osp.join(my_path, s, s + '_internal.vtu'))
- aerofoil = pv.read(osp.join(my_path, s, s + '_aerofoil.vtp'))
- internal = internal.compute_cell_sizes(length=False, volume=False)
-
- # Cropping if needed, crinkle is True.
- if crop is not None:
- bounds = (crop[0], crop[1], crop[2], crop[3], 0, 1)
- internal = internal.clip_box(bounds=bounds, invert=False, crinkle=True)
-
- # If sampling strategy is chosen, it will sample points in the cells of the simulation instead of directly taking the nodes of the mesh.
- if sample is not None:
- # Sample on a new point cloud
- if sample == 'uniform': # Uniform sampling strategy
- p = internal.cell_data['Area'] / internal.cell_data['Area'].sum()
- sampled_cell_indices = np.random.choice(internal.n_cells, size=n_boot, p=p)
- surf_p = aerofoil.cell_data['Length'] / aerofoil.cell_data['Length'].sum()
- sampled_line_indices = np.random.choice(aerofoil.n_cells, size=int(n_boot * surf_ratio), p=surf_p)
- elif sample == 'mesh': # Sample via mesh density
- sampled_cell_indices = np.random.choice(internal.n_cells, size=n_boot)
- sampled_line_indices = np.random.choice(aerofoil.n_cells, size=int(n_boot * surf_ratio))
-
- cell_dict = internal.cells.reshape(-1, 5)[sampled_cell_indices, 1:]
- cell_points = internal.points[cell_dict]
- line_dict = aerofoil.lines.reshape(-1, 3)[sampled_line_indices, 1:]
- line_points = aerofoil.points[line_dict]
-
- # Geometry information
- geom = -internal.point_data['implicit_distance'][cell_dict, None] # Signed distance function
- Uinf, alpha = float(s.split('_')[2]), float(s.split('_')[3]) * np.pi / 180
- # u = (np.array([np.cos(alpha), np.sin(alpha)])*Uinf).reshape(1, 2)*(internal.point_data['U'][cell_dict, :1] != 0)
- u = (np.array([np.cos(alpha), np.sin(alpha)]) * Uinf).reshape(1, 2) * np.ones_like(
- internal.point_data['U'][cell_dict, :1])
- normal = np.zeros_like(u)
-
- surf_geom = np.zeros_like(aerofoil.point_data['U'][line_dict, :1])
- # surf_u = np.zeros_like(aerofoil.point_data['U'][line_dict, :2])
- surf_u = (np.array([np.cos(alpha), np.sin(alpha)]) * Uinf).reshape(1, 2) * np.ones_like(
- aerofoil.point_data['U'][line_dict, :1])
- surf_normal = -aerofoil.point_data['Normals'][line_dict, :2]
-
- attr = np.concatenate([u, geom, normal, internal.point_data['U'][cell_dict, :2],
- internal.point_data['p'][cell_dict, None],
- internal.point_data['nut'][cell_dict, None]], axis=-1)
- surf_attr = np.concatenate([surf_u, surf_geom, surf_normal, aerofoil.point_data['U'][line_dict, :2],
- aerofoil.point_data['p'][line_dict, None],
- aerofoil.point_data['nut'][line_dict, None]], axis=-1)
- sampled_points = cell_sampling_2d(cell_points, attr)
- surf_sampled_points = cell_sampling_1d(line_points, surf_attr)
-
- # Define the inputs and the targets
- pos = sampled_points[:, :2]
- init = sampled_points[:, :7]
- target = sampled_points[:, 7:]
- surf_pos = surf_sampled_points[:, :2]
- surf_init = surf_sampled_points[:, :7]
- surf_target = surf_sampled_points[:, 7:]
-
- # Put everything in tensor
- surf = torch.cat([torch.zeros(len(pos)), torch.ones(len(surf_pos))], dim=0)
- pos = torch.cat([torch.tensor(pos, dtype=torch.float), torch.tensor(surf_pos, dtype=torch.float)], dim=0)
- x = torch.cat([torch.tensor(init, dtype=torch.float), torch.tensor(surf_init, dtype=torch.float)], dim=0)
- y = torch.cat([torch.tensor(target, dtype=torch.float), torch.tensor(surf_target, dtype=torch.float)],
- dim=0)
-
- else: # Keep the mesh nodes
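-            # Surface nodes are identified by zero x-velocity (no-slip condition on the airfoil).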
- surf_bool = (internal.point_data['U'][:, 0] == 0)
- geom = -internal.point_data['implicit_distance'][:, None] # Signed distance function
- Uinf, alpha = float(s.split('_')[2]), float(s.split('_')[3]) * np.pi / 180
- u = (np.array([np.cos(alpha), np.sin(alpha)]) * Uinf).reshape(1, 2) * np.ones_like(
- internal.point_data['U'][:, :1])
- normal = np.zeros_like(u)
- normal[surf_bool] = reorganize(aerofoil.points[:, :2], internal.points[surf_bool, :2],
- -aerofoil.point_data['Normals'][:, :2])
-
- attr = np.concatenate([u, geom, normal,
- internal.point_data['U'][:, :2], internal.point_data['p'][:, None],
- internal.point_data['nut'][:, None]], axis=-1)
-
- pos = internal.points[:, :2]
- init = np.concatenate([pos, attr[:, :5]], axis=1)
- target = attr[:, 5:]
-
- # Put everything in tensor
- surf = torch.tensor(surf_bool)
- pos = torch.tensor(pos, dtype=torch.float)
- x = torch.tensor(init, dtype=torch.float)
- y = torch.tensor(target, dtype=torch.float)
-
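-        # Chunked running-mean update over simulations; the standard deviation is
-        # computed in a second pass below, once the full means are known.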
- if norm and coef_norm is None:
- if k == 0:
- old_length = init.shape[0]
- mean_in = init.mean(axis=0, dtype=np.double)
- mean_out = target.mean(axis=0, dtype=np.double)
- else:
- new_length = old_length + init.shape[0]
- mean_in += (init.sum(axis=0, dtype=np.double) - init.shape[0] * mean_in) / new_length
- mean_out += (target.sum(axis=0, dtype=np.double) - init.shape[0] * mean_out) / new_length
- old_length = new_length
-
- data = Data(pos=pos, x=x, y=y, surf=surf.bool())
- dataset.append(data)
-
- if norm and coef_norm is None:
- # Compute normalization
- mean_in = mean_in.astype(np.single)
- mean_out = mean_out.astype(np.single)
- for k, data in enumerate(dataset):
-
- if k == 0:
- old_length = data.x.numpy().shape[0]
- std_in = ((data.x.numpy() - mean_in) ** 2).sum(axis=0, dtype=np.double) / old_length
- std_out = ((data.y.numpy() - mean_out) ** 2).sum(axis=0, dtype=np.double) / old_length
- else:
- new_length = old_length + data.x.numpy().shape[0]
- std_in += (((data.x.numpy() - mean_in) ** 2).sum(axis=0, dtype=np.double) - data.x.numpy().shape[
- 0] * std_in) / new_length
- std_out += (((data.y.numpy() - mean_out) ** 2).sum(axis=0, dtype=np.double) - data.x.numpy().shape[
- 0] * std_out) / new_length
- old_length = new_length
-
- std_in = np.sqrt(std_in).astype(np.single)
- std_out = np.sqrt(std_out).astype(np.single)
-
- # Normalize
- for data in dataset:
- data.x = (data.x - mean_in) / (std_in + 1e-8)
- data.y = (data.y - mean_out) / (std_out + 1e-8)
-
- coef_norm = (mean_in, std_in, mean_out, std_out)
- dataset = (dataset, coef_norm)
-
- elif coef_norm is not None:
- # Normalize
- for data in dataset:
- data.x = (data.x - coef_norm[0]) / (coef_norm[1] + 1e-8)
- data.y = (data.y - coef_norm[2]) / (coef_norm[3] + 1e-8)
-
- return dataset
diff --git a/Airfoil-Design-AirfRANS/dataset/dataset_stats.ipynb b/Airfoil-Design-AirfRANS/dataset/dataset_stats.ipynb
deleted file mode 100644
index 3b3e452..0000000
--- a/Airfoil-Design-AirfRANS/dataset/dataset_stats.ipynb
+++ /dev/null
@@ -1,103 +0,0 @@
-{
- "cells": [
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "import numpy as np\n",
- "import json\n",
- "import matplotlib.pyplot as plt\n",
- "import seaborn as sns\n",
- "import os.path as osp\n",
- "\n",
- "NU = 1.56e-5\n",
- "\n",
- "sets = ['full_train', 'scarce_train', 'reynolds_train', 'aoa_train', 'full_test', 'reynolds_test', 'aoa_test']\n",
- "colors = ['cornflowerblue']*4 + ['burlywood']*3\n",
- "data_dir = 'MY_ROOT_DIRECTORY'\n",
- "\n",
- "for c, s in zip(colors, sets):\n",
- " with open(osp.join(data_dir, 'manifest.json'), 'r') as f:\n",
- " manifest = json.load(f)[s]\n",
- "\n",
- " us = []\n",
- " angles = []\n",
- " digits4 = []\n",
- " digits5 = []\n",
- " for sim in manifest:\n",
- " params = sim.split('_')\n",
- " us.append(float(params[2])/NU)\n",
- " angles.append(float(params[3]))\n",
- "\n",
- " if len(params) == 7:\n",
- " digits4.append(list(map(float, params[-3:])))\n",
- " else:\n",
- " digits5.append(list(map(float, params[-4:])))\n",
- "\n",
- " digits4 = np.array(digits4)\n",
- " digits5 = np.array(digits5)\n",
- "\n",
- " sns.set()\n",
- "\n",
- " fig, ax = plt.subplots(3, 3, figsize = (12, 12))\n",
- " ax[2, 1].hist(us, bins = 20, color = c)\n",
- " ax[2, 1].set_title('Reynolds number')\n",
- "\n",
- " ax[2, 2].hist(angles, bins = 20, color = c)\n",
- " ax[2, 2].set_xlabel('Degrees')\n",
- " ax[2, 2].set_title('Angle of attack')\n",
- "\n",
- " ax[0, 0].hist(digits4[:, 0], bins = 20, color = c)\n",
- " ax[0, 0].set_title(r'$1^{st}$ digit')\n",
- "\n",
- " ax[0, 1].hist(digits4[:, 1], bins = 20, color = c)\n",
- " ax[0, 1].set_title(r'$2^{nd}$ digit')\n",
- "\n",
- " ax[0, 2].hist(digits4[:, 2], bins = 20, color = c)\n",
- " ax[0, 2].set_title(r'$3^{rd}$ and $4^{th}$ digits')\n",
- "\n",
- " ax[1, 0].hist(digits5[:, 0], bins = 20, color = c)\n",
- " ax[1, 0].set_title(r'$1^{st}$ digit')\n",
- "\n",
- " ax[1, 1].hist(digits5[:, 1], bins = 20, color = c)\n",
- " ax[1, 1].set_title(r'$2^{nd}$ digit')\n",
- "\n",
- " ax[2, 0].hist(digits5[:, 2], bins = 2, color = c)\n",
- " ax[2, 0].set_title(r'$3^{rd}$ digit')\n",
- "\n",
- " ax[1, 2].hist(digits5[:, 3], bins = 20, color = c)\n",
- " ax[1, 2].set_title(r'$4^{th}$ and $5^{th}$ digits');\n",
- " fig.savefig(s, bbox_inches = 'tight', dpi = 150)"
- ]
- }
- ],
- "metadata": {
- "kernelspec": {
- "display_name": "Python 3.9.12 ('isir')",
- "language": "python",
- "name": "python3"
- },
- "language_info": {
- "codemirror_mode": {
- "name": "ipython",
- "version": 3
- },
- "file_extension": ".py",
- "mimetype": "text/x-python",
- "name": "python",
- "nbconvert_exporter": "python",
- "pygments_lexer": "ipython3",
- "version": "3.9.12"
- },
- "orig_nbformat": 4,
- "vscode": {
- "interpreter": {
- "hash": "d00e44851a3a4d5201bc229183e4c0de3fea7314717b82800f8d82d2168b4a23"
- }
- }
- },
- "nbformat": 4,
- "nbformat_minor": 2
-}
diff --git a/Airfoil-Design-AirfRANS/fig/results.png b/Airfoil-Design-AirfRANS/fig/results.png
deleted file mode 100644
index 1549623..0000000
Binary files a/Airfoil-Design-AirfRANS/fig/results.png and /dev/null differ
diff --git a/Airfoil-Design-AirfRANS/fig/task.png b/Airfoil-Design-AirfRANS/fig/task.png
deleted file mode 100644
index d86f5b6..0000000
Binary files a/Airfoil-Design-AirfRANS/fig/task.png and /dev/null differ
diff --git a/Airfoil-Design-AirfRANS/main.py b/Airfoil-Design-AirfRANS/main.py
deleted file mode 100644
index a84f775..0000000
--- a/Airfoil-Design-AirfRANS/main.py
+++ /dev/null
@@ -1,114 +0,0 @@
-import argparse, yaml, json
-import torch
-import train
-import utils.metrics as metrics
-from dataset.dataset import Dataset
-import os.path as osp
-import numpy as np
-
-parser = argparse.ArgumentParser()
-parser.add_argument('--model', help='The model you want to train, choose between Transolver, MLP, GraphSAGE, PointNet, GUNet',
-                    type=str)
-parser.add_argument('-n', '--nmodel', help='Number of trained models for standard deviation estimation (default: 1)',
- default=1, type=int)
-parser.add_argument('-w', '--weight', help='Weight in front of the surface loss (default: 1)', default=1, type=float)
-parser.add_argument('-t', '--task',
- help='Task to train on. Choose between "full", "scarce", "reynolds" and "aoa" (default: full)',
- default='full', type=str)
-parser.add_argument('-s', '--score',
- help='If you want to compute the score of the models on the associated test set. (default: 0)',
- default=0, type=int)
-parser.add_argument('--my_path',
- default='/data/path', type=str)
-parser.add_argument('--save_path',
- default='metrics', type=str)
-args = parser.parse_args()
-
-with open(args.my_path + '/manifest.json', 'r') as f:
- manifest = json.load(f)
-
-manifest_train = manifest[args.task + '_train']
-test_dataset = manifest[args.task + '_test'] if args.task != 'scarce' else manifest['full_test']
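-# Hold out the last 10% of the training manifest for validation.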
-n = int(.1 * len(manifest_train))
-train_dataset = manifest_train[:-n]
-val_dataset = manifest_train[-n:]
-print("start load data")
-train_dataset, coef_norm = Dataset(train_dataset, norm=True, sample=None, my_path=args.my_path)
-val_dataset = Dataset(val_dataset, sample=None, coef_norm=coef_norm, my_path=args.my_path)
-print("load data finish")
-# Cuda
-use_cuda = torch.cuda.is_available()
-device = 'cuda:0' if use_cuda else 'cpu'
-if use_cuda:
- print('Using GPU')
-else:
- print('Using CPU')
-
-with open('params.yaml', 'r') as f: # hyperparameters of the model
- hparams = yaml.safe_load(f)[args.model]
-
-from models.MLP import MLP
-
-models = []
-for i in range(args.nmodel):
-
- if args.model == 'Transolver':
- from models.Transolver import Transolver
-
-        model = Transolver(n_hidden=256,
-                           n_layers=8,
-                           space_dim=7,
-                           fun_dim=0,
-                           n_head=8,
-                           mlp_ratio=2,
-                           out_dim=4,
-                           slice_num=32,
-                           unified_pos=1).to(device)  # respect the CPU fallback instead of hard-coding .cuda()
- else:
- encoder = MLP(hparams['encoder'], batch_norm=False)
- decoder = MLP(hparams['decoder'], batch_norm=False)
- if args.model == 'GraphSAGE':
- from models.GraphSAGE import GraphSAGE
-
- model = GraphSAGE(hparams, encoder, decoder)
-
- elif args.model == 'PointNet':
- from models.PointNet import PointNet
-
- model = PointNet(hparams, encoder, decoder)
-
- elif args.model == 'MLP':
- from models.NN import NN
-
- model = NN(hparams, encoder, decoder)
-
- elif args.model == 'GUNet':
- from models.GUNet import GUNet
-
- model = GUNet(hparams, encoder, decoder)
-
- log_path = osp.join(args.save_path, args.task, args.model) # path where you want to save log and figures
- print('start training')
- model = train.main(device, train_dataset, val_dataset, model, hparams, log_path,
- criterion='MSE_weighted', val_iter=10, reg=args.weight, name_mod=args.model, val_sample=True)
- print('end training')
- models.append(model)
-torch.save(models, osp.join(args.save_path, args.task, args.model, args.model))
-
-if bool(args.score):
- print('start score')
- s = args.task + '_test' if args.task != 'scarce' else 'full_test'
- coefs = metrics.Results_test(device, [models], [hparams], coef_norm, args.my_path, path_out='scores', n_test=3,
- criterion='MSE', s=s)
-    # `models` can be a stack of the same model (e.g. MLP) trained on task s. If you also have a stack of
-    # another model (e.g. GraphSAGE), pass [models_MLP, models_GraphSAGE] and the results (mean and std)
-    # for both models will be output in an ordered array.
- np.save(osp.join('scores', args.task, 'true_coefs'), coefs[0])
- np.save(osp.join('scores', args.task, 'pred_coefs_mean'), coefs[1])
- np.save(osp.join('scores', args.task, 'pred_coefs_std'), coefs[2])
- for n, file in enumerate(coefs[3]):
- np.save(osp.join('scores', args.task, 'true_surf_coefs_' + str(n)), file)
- for n, file in enumerate(coefs[4]):
- np.save(osp.join('scores', args.task, 'surf_coefs_' + str(n)), file)
- np.save(osp.join('scores', args.task, 'true_bls'), coefs[5])
- np.save(osp.join('scores', args.task, 'bls'), coefs[6])
- print('end score')
diff --git a/Airfoil-Design-AirfRANS/main_evaluation.py b/Airfoil-Design-AirfRANS/main_evaluation.py
deleted file mode 100644
index 7823c89..0000000
--- a/Airfoil-Design-AirfRANS/main_evaluation.py
+++ /dev/null
@@ -1,75 +0,0 @@
-import yaml, json
-import torch
-import utils.metrics as metrics
-from dataset.dataset import Dataset
-import os.path as osp
-import argparse
-import numpy as np
-
-parser = argparse.ArgumentParser()
-parser.add_argument('--my_path', default='/data/path', type=str) # data save path
-parser.add_argument('--save_path', default='./', type=str) # model save path
-args = parser.parse_args()
-
-# Compute the normalization used for the training
-
-use_cuda = torch.cuda.is_available()
-device = 'cuda:0' if use_cuda else 'cpu'
-if use_cuda:
- print('Using GPU')
-else:
- print('Using CPU')
-
-data_root_dir = args.my_path
-ckpt_root_dir = args.save_path
-
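-# Add 'scarce', 'reynolds' and 'aoa' here to also evaluate the other splits.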
-tasks = ['full']
-
-for task in tasks:
- print('Generating results for task ' + task + '...')
- # task = 'full' # Choose between 'full', 'scarce', 'reynolds', and 'aoa'
- s = task + '_test' if task != 'scarce' else 'full_test'
- s_train = task + '_train'
-
- data_dir = osp.join(data_root_dir, 'Dataset')
- with open(osp.join(data_dir, 'manifest.json'), 'r') as f:
- manifest = json.load(f)
-
- manifest_train = manifest[s_train]
- n = int(.1 * len(manifest_train))
- train_dataset = manifest_train[:-n]
-
- _, coef_norm = Dataset(train_dataset, norm=True, sample=None, my_path=data_dir)
-
- # Compute the scores on the test set
-
- model_names = ['Transolver']
- models = []
- hparams = []
-
- for model in model_names:
- model_path = osp.join(ckpt_root_dir, 'metrics', task, model, model)
-        mod = torch.load(model_path, map_location=device)
- print(mod)
- mod = [m.to(device) for m in mod]
- models.append(mod)
-
- with open('params.yaml', 'r') as f:
- hparam = yaml.safe_load(f)[model]
- hparams.append(hparam)
-
- results_dir = osp.join(ckpt_root_dir, 'scores', task)
- coefs = metrics.Results_test(device, models, hparams, coef_norm, data_dir, results_dir, n_test=3, criterion='MSE',
- s=s)
-    # `models` can be a stack of the same model (e.g. MLP) trained on task s. If you also have a stack of
-    # another model (e.g. GraphSAGE), pass [models_MLP, models_GraphSAGE] and the results (mean and std)
-    # for both models will be output in an ordered array.
-
- np.save(osp.join(results_dir, 'true_coefs'), coefs[0])
- np.save(osp.join(results_dir, 'pred_coefs_mean'), coefs[1])
- np.save(osp.join(results_dir, 'pred_coefs_std'), coefs[2])
- for n, file in enumerate(coefs[3]):
- np.save(osp.join(results_dir, 'true_surf_coefs_' + str(n)), file)
- for n, file in enumerate(coefs[4]):
- np.save(osp.join(results_dir, 'surf_coefs_' + str(n)), file)
- np.save(osp.join(results_dir, 'true_bls'), coefs[5])
- np.save(osp.join(results_dir, 'bls'), coefs[6])
diff --git a/Airfoil-Design-AirfRANS/models/GUNet.py b/Airfoil-Design-AirfRANS/models/GUNet.py
deleted file mode 100644
index cf61ad2..0000000
--- a/Airfoil-Design-AirfRANS/models/GUNet.py
+++ /dev/null
@@ -1,221 +0,0 @@
-import torch
-import torch.nn as nn
-import torch_geometric.nn as nng
-import random
-
-def DownSample(id, x, edge_index, pos_x, pool, pool_ratio, r, max_neighbors):
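-    # Coarsen the graph, either with a learned pooling operator or by random node
-    # sampling, then rebuild connectivity with a radius graph on the kept points.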
- y = x.clone()
- n = int(x.size(0))
-
- if pool is not None:
- y, _, _, _, id_sampled, _ = pool(y, edge_index)
- else:
- k = int((pool_ratio*torch.tensor(n, dtype = torch.float)).ceil())
- id_sampled = random.sample(range(n), k)
- id_sampled = torch.tensor(id_sampled, dtype = torch.long)
- y = y[id_sampled]
-
- pos_x = pos_x[id_sampled]
- id.append(id_sampled)
-
-    # max_neighbors replaces the previously hard-coded neighbor limits (64 during training, 512 at evaluation).
- edge_index_sampled = nng.radius_graph(x = pos_x.detach(), r = r, loop = True, max_num_neighbors = max_neighbors)
-
- return y, edge_index_sampled
-
-def UpSample(x, pos_x_up, pos_x_down):
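-    # Nearest-neighbour upsampling: each fine-level point copies the feature of its
-    # nearest coarse-level point.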
- cluster = nng.nearest(pos_x_up, pos_x_down)
- x_up = x[cluster]
-
- return x_up
-
-class GUNet(nn.Module):
- def __init__(self, hparams, encoder, decoder):
- super(GUNet, self).__init__()
-
- self.L = hparams['nb_scale']
- self.layer = hparams['layer']
- self.pool_type = hparams['pool']
- self.pool_ratio = hparams['pool_ratio']
- self.list_r = hparams['list_r']
- self.size_hidden_layers = hparams['size_hidden_layers']
- self.size_hidden_layers_init = hparams['size_hidden_layers']
- self.max_neighbors = hparams['max_neighbors']
- self.dim_enc = hparams['encoder'][-1]
- self.bn_bool = hparams['batchnorm']
- self.res = hparams['res']
- self.head = 2
- self.activation = nn.ReLU()
-
- self.encoder = encoder
- self.decoder = decoder
-
- self.down_layers = nn.ModuleList()
-
- if self.pool_type != 'random':
- self.pool = nn.ModuleList()
- else:
- self.pool = None
-
- if self.layer == 'SAGE':
- self.down_layers.append(nng.SAGEConv(
- in_channels = self.dim_enc,
- out_channels = self.size_hidden_layers
- ))
- bn_in = self.size_hidden_layers
-
- elif self.layer == 'GAT':
- self.down_layers.append(nng.GATConv(
- in_channels = self.dim_enc,
- out_channels = self.size_hidden_layers,
- heads = self.head,
- add_self_loops = False,
- concat = True
- ))
- bn_in = self.head*self.size_hidden_layers
-
- if self.bn_bool == True:
- self.bn = nn.ModuleList()
- self.bn.append(nng.BatchNorm(
- in_channels = bn_in,
- track_running_stats = False
- ))
- else:
- self.bn = None
-
-
- for n in range(1, self.L):
- if self.pool_type != 'random':
- self.pool.append(nng.TopKPooling(
- in_channels = self.size_hidden_layers,
- ratio = self.pool_ratio[n - 1],
- nonlinearity = torch.sigmoid
- ))
-
- if self.layer == 'SAGE':
- self.down_layers.append(nng.SAGEConv(
- in_channels = self.size_hidden_layers,
- out_channels = 2*self.size_hidden_layers,
- ))
- self.size_hidden_layers = 2*self.size_hidden_layers
- bn_in = self.size_hidden_layers
-
- elif self.layer == 'GAT':
- self.down_layers.append(nng.GATConv(
- in_channels = self.head*self.size_hidden_layers,
- out_channels = self.size_hidden_layers,
- heads = 2,
- add_self_loops = False,
- concat = True
- ))
-
- if self.bn_bool == True:
- self.bn.append(nng.BatchNorm(
- in_channels = bn_in,
- track_running_stats = False
- ))
-
- self.up_layers = nn.ModuleList()
-
- if self.layer == 'SAGE':
- self.up_layers.append(nng.SAGEConv(
- in_channels = 3*self.size_hidden_layers_init,
- out_channels = self.dim_enc
- ))
- self.size_hidden_layers_init = 2*self.size_hidden_layers_init
-
- elif self.layer == 'GAT':
- self.up_layers.append(nng.GATConv(
- in_channels = 2*self.head*self.size_hidden_layers,
- out_channels = self.dim_enc,
- heads = 2,
- add_self_loops = False,
- concat = False
- ))
-
- if self.bn_bool == True:
- self.bn.append(nng.BatchNorm(
- in_channels = self.dim_enc,
- track_running_stats = False
- ))
-
- for n in range(1, self.L - 1):
- if self.layer == 'SAGE':
- self.up_layers.append(nng.SAGEConv(
- in_channels = 3*self.size_hidden_layers_init,
- out_channels = self.size_hidden_layers_init,
- ))
- bn_in = self.size_hidden_layers_init
- self.size_hidden_layers_init = 2*self.size_hidden_layers_init
-
- elif self.layer == 'GAT':
- self.up_layers.append(nng.GATConv(
- in_channels = 2*self.head*self.size_hidden_layers,
- out_channels = self.size_hidden_layers,
- heads = 2,
- add_self_loops = False,
- concat = True
- ))
-
- if self.bn_bool == True:
- self.bn.append(nng.BatchNorm(
- in_channels = bn_in,
- track_running_stats = False
- ))
-
- def forward(self, data):
- x, edge_index = data.x, data.edge_index
- id = []
- edge_index_list = [edge_index.clone()]
- pos_x_list = []
- z = self.encoder(x)
- if self.res:
- z_res = z.clone()
-
- z = self.down_layers[0](z, edge_index)
-
- if self.bn_bool == True:
- z = self.bn[0](z)
-
- z = self.activation(z)
- z_list = [z.clone()]
- for n in range(self.L - 1):
- pos_x = x[:, :2] if n == 0 else pos_x[id[n - 1]]
- pos_x_list.append(pos_x.clone())
-
- if self.pool_type != 'random':
- z, edge_index = DownSample(id, z, edge_index, pos_x, self.pool[n], self.pool_ratio[n], self.list_r[n], self.max_neighbors)
- else:
- z, edge_index = DownSample(id, z, edge_index, pos_x, None, self.pool_ratio[n], self.list_r[n], self.max_neighbors)
- edge_index_list.append(edge_index.clone())
-
- z = self.down_layers[n + 1](z, edge_index)
-
- if self.bn_bool == True:
- z = self.bn[n + 1](z)
-
- z = self.activation(z)
- z_list.append(z.clone())
- pos_x_list.append(pos_x[id[-1]].clone())
-
- for n in range(self.L - 1, 0, -1):
- z = UpSample(z, pos_x_list[n - 1], pos_x_list[n])
- z = torch.cat([z, z_list[n - 1]], dim = 1)
- z = self.up_layers[n - 1](z, edge_index_list[n - 1])
-
- if self.bn_bool == True:
- z = self.bn[self.L + n - 1](z)
-
- z = self.activation(z) if n != 1 else z
-
- del(z_list, pos_x_list, edge_index_list)
-
- if self.res:
- z = z + z_res
-
- z = self.decoder(z)
-
- return z
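-
-# Illustrative usage (added commentary, not in the original file), assuming the
-# 'GUNet' entry of params.yaml and encoder/decoder MLPs built from its channel lists:
-#
-#   encoder = MLP([7, 64, 64, 8], batch_norm=False)
-#   decoder = MLP([8, 64, 64, 4], batch_norm=False)
-#   model = GUNet(hparams['GUNet'], encoder, decoder)
-#   out = model(data)   # data.x: [N, 7] with positions in x[:, :2] -> out: [N, 4]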
diff --git a/Airfoil-Design-AirfRANS/models/GraphSAGE.py b/Airfoil-Design-AirfRANS/models/GraphSAGE.py
deleted file mode 100644
index 765c1d6..0000000
--- a/Airfoil-Design-AirfRANS/models/GraphSAGE.py
+++ /dev/null
@@ -1,58 +0,0 @@
-import torch.nn as nn
-import torch_geometric.nn as nng
-
-class GraphSAGE(nn.Module):
- def __init__(self, hparams, encoder, decoder):
- super(GraphSAGE, self).__init__()
-
- self.nb_hidden_layers = hparams['nb_hidden_layers']
- self.size_hidden_layers = hparams['size_hidden_layers']
- self.bn_bool = hparams['bn_bool']
- self.activation = nn.ReLU()
-
- self.encoder = encoder
- self.decoder = decoder
-
- self.in_layer = nng.SAGEConv(
- in_channels = hparams['encoder'][-1],
- out_channels = self.size_hidden_layers
- )
-
- self.hidden_layers = nn.ModuleList()
- for n in range(self.nb_hidden_layers - 1):
- self.hidden_layers.append(nng.SAGEConv(
- in_channels = self.size_hidden_layers,
- out_channels = self.size_hidden_layers
- ))
-
-
- self.out_layer = nng.SAGEConv(
- in_channels = self.size_hidden_layers,
- out_channels = hparams['decoder'][0]
- )
-
- if self.bn_bool:
- self.bn = nn.ModuleList()
- for n in range(self.nb_hidden_layers):
- self.bn.append(nn.BatchNorm1d(self.size_hidden_layers, track_running_stats = False))
-
- def forward(self, data):
- z, edge_index = data.x, data.edge_index
- z = self.encoder(z)
-
- z = self.in_layer(z, edge_index)
- if self.bn_bool:
- z = self.bn[0](z)
- z = self.activation(z)
-
- for n in range(self.nb_hidden_layers - 1):
- z = self.hidden_layers[n](z, edge_index)
- if self.bn_bool:
- z = self.bn[n + 1](z)
- z = self.activation(z)
-
- z = self.out_layer(z, edge_index)
-
- z = self.decoder(z)
-
- return z
\ No newline at end of file
diff --git a/Airfoil-Design-AirfRANS/models/MLP.py b/Airfoil-Design-AirfRANS/models/MLP.py
deleted file mode 100644
index 9592692..0000000
--- a/Airfoil-Design-AirfRANS/models/MLP.py
+++ /dev/null
@@ -1,62 +0,0 @@
-import torch
-import torch.nn.functional as F
-from torch import Tensor
-from torch.nn import BatchNorm1d, Identity
-from torch_geometric.nn import Linear
-
-class MLP(torch.nn.Module):
-    r"""A multi-layer perceptron (MLP) model.
-
- Args:
- channel_list (List[int]): List of input, intermediate and output
- channels. :obj:`len(channel_list) - 1` denotes the number of layers
- of the MLP.
- dropout (float, optional): Dropout probability of each hidden
- embedding. (default: :obj:`0.`)
- batch_norm (bool, optional): If set to :obj:`False`, will not make use
- of batch normalization. (default: :obj:`True`)
- relu_first (bool, optional): If set to :obj:`True`, ReLU activation is
- applied before batch normalization. (default: :obj:`False`)
- """
- def __init__(self, channel_list, dropout = 0.,
- batch_norm = True, relu_first = False):
- super().__init__()
- assert len(channel_list) >= 2
- self.channel_list = channel_list
- self.dropout = dropout
- self.relu_first = relu_first
-
- self.lins = torch.nn.ModuleList()
- for dims in zip(self.channel_list[:-1], self.channel_list[1:]):
- self.lins.append(Linear(*dims))
-
- self.norms = torch.nn.ModuleList()
-        for dim in self.channel_list[1:-1]:
- self.norms.append(BatchNorm1d(dim, track_running_stats = False) if batch_norm else Identity())
-
- self.reset_parameters()
-
- def reset_parameters(self):
- for lin in self.lins:
- lin.reset_parameters()
- for norm in self.norms:
- if hasattr(norm, 'reset_parameters'):
- norm.reset_parameters()
-
-
- def forward(self, x: Tensor) -> Tensor:
- """"""
- x = self.lins[0](x)
- for lin, norm in zip(self.lins[1:], self.norms):
- if self.relu_first:
- x = x.relu_()
- x = norm(x)
- if not self.relu_first:
- x = x.relu_()
- x = F.dropout(x, p = self.dropout, training = self.training)
- x = lin.forward(x)
- return x
-
-
- def __repr__(self) -> str:
- return f'{self.__class__.__name__}({str(self.channel_list)[1:-1]})'
\ No newline at end of file
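-
-# Illustrative usage (added commentary, not in the original file): channel_list
-# [7, 64, 64, 8] yields three Linear layers 7->64, 64->64 and 64->8, with BatchNorm1d
-# and ReLU applied between them (dropout optional). A minimal sketch:
-#
-#   mlp = MLP([7, 64, 64, 8], batch_norm=True)
-#   y = mlp(torch.randn(32, 7))   # y.shape == (32, 8)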
diff --git a/Airfoil-Design-AirfRANS/models/NN.py b/Airfoil-Design-AirfRANS/models/NN.py
deleted file mode 100644
index d9b8609..0000000
--- a/Airfoil-Design-AirfRANS/models/NN.py
+++ /dev/null
@@ -1,25 +0,0 @@
-import torch.nn as nn
-from models.MLP import MLP
-
-class NN(nn.Module):
- def __init__(self, hparams, encoder, decoder):
- super(NN, self).__init__()
-
- self.nb_hidden_layers = hparams['nb_hidden_layers']
- self.size_hidden_layers = hparams['size_hidden_layers']
- self.bn_bool = hparams['bn_bool']
- self.activation = nn.ReLU()
-
- self.encoder = encoder
- self.decoder = decoder
-
- self.dim_enc = hparams['encoder'][-1]
-
- self.nn = MLP([self.dim_enc] + [self.size_hidden_layers]*self.nb_hidden_layers + [self.dim_enc], batch_norm = self.bn_bool)
-
- def forward(self, data):
- z = self.encoder(data.x)
- z = self.nn(z)
- z = self.decoder(z)
-
- return z
\ No newline at end of file
diff --git a/Airfoil-Design-AirfRANS/models/PointNet.py b/Airfoil-Design-AirfRANS/models/PointNet.py
deleted file mode 100644
index b27d023..0000000
--- a/Airfoil-Design-AirfRANS/models/PointNet.py
+++ /dev/null
@@ -1,45 +0,0 @@
-import torch
-import torch.nn as nn
-import torch_geometric.nn as nng
-from models.MLP import MLP
-
-
-class PointNet(nn.Module):
- def __init__(self, hparams, encoder, decoder):
- super(PointNet, self).__init__()
-
- self.base_nb = hparams['base_nb']
-
- self.in_block = MLP([hparams['encoder'][-1], self.base_nb, self.base_nb * 2], batch_norm=False)
- self.max_block = MLP([self.base_nb * 2, self.base_nb * 4, self.base_nb * 8, self.base_nb * 32],
- batch_norm=False)
-
- self.out_block = MLP([self.base_nb * (32 + 2), self.base_nb * 16, self.base_nb * 8, self.base_nb * 4],
- batch_norm=False)
-
- self.encoder = encoder
- self.decoder = decoder
-
- self.fcfinal = nn.Linear(self.base_nb * 4, hparams['encoder'][-1])
-
- def forward(self, data):
- z, batch = data.x.float(), data.batch.long()
-
- z = self.encoder(z)
- z = self.in_block(z)
-
- global_coef = self.max_block(z)
- global_coef = nng.global_max_pool(global_coef, batch=batch)
- nb_points = torch.zeros(global_coef.shape[0], device=z.device)
-
- for i in range(batch.max() + 1):
- nb_points[i] = (batch == i).sum()
- nb_points = nb_points.long()
- global_coef = torch.repeat_interleave(global_coef, nb_points, dim=0)
-
- z = torch.cat([z, global_coef], dim=1)
- z = self.out_block(z)
- z = self.fcfinal(z)
- z = self.decoder(z)
-
- return z
\ No newline at end of file
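-
-# Note (added commentary, not in the original file): the loop filling nb_points is
-# equivalent to torch.bincount(batch), which counts the nodes of each graph in one
-# call; repeat_interleave then broadcasts each graph's pooled global feature back to
-# all of its nodes before concatenation, the usual PointNet local+global scheme:
-#
-#   nb_points = torch.bincount(batch)   # same result as the loop above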
diff --git a/Airfoil-Design-AirfRANS/models/Transolver.py b/Airfoil-Design-AirfRANS/models/Transolver.py
deleted file mode 100644
index 9f0ac40..0000000
--- a/Airfoil-Design-AirfRANS/models/Transolver.py
+++ /dev/null
@@ -1,210 +0,0 @@
-import torch
-import numpy as np
-import torch.nn as nn
-from timm.models.layers import trunc_normal_
-from einops import rearrange, repeat
-
-ACTIVATION = {'gelu': nn.GELU, 'tanh': nn.Tanh, 'sigmoid': nn.Sigmoid, 'relu': nn.ReLU,
-              'leaky_relu': lambda: nn.LeakyReLU(0.1),  # wrapped in a lambda so act() instantiates it like the class entries
-              'softplus': nn.Softplus, 'ELU': nn.ELU, 'silu': nn.SiLU}
-
-
-class Physics_Attention_Irregular_Mesh(nn.Module):
- def __init__(self, dim, heads=8, dim_head=64, dropout=0., slice_num=64):
- super().__init__()
- inner_dim = dim_head * heads
- self.dim_head = dim_head
- self.heads = heads
- self.scale = dim_head ** -0.5
- self.softmax = nn.Softmax(dim=-1)
- self.dropout = nn.Dropout(dropout)
- self.temperature = nn.Parameter(torch.ones([1, heads, 1, 1]) * 0.5)
-
- self.in_project_x = nn.Linear(dim, inner_dim)
- self.in_project_fx = nn.Linear(dim, inner_dim)
- self.in_project_slice = nn.Linear(dim_head, slice_num)
- for l in [self.in_project_slice]:
- torch.nn.init.orthogonal_(l.weight) # use a principled initialization
- self.to_q = nn.Linear(dim_head, dim_head, bias=False)
- self.to_k = nn.Linear(dim_head, dim_head, bias=False)
- self.to_v = nn.Linear(dim_head, dim_head, bias=False)
- self.to_out = nn.Sequential(
- nn.Linear(inner_dim, dim),
- nn.Dropout(dropout)
- )
-
- def forward(self, x):
- # B N C
- B, N, C = x.shape
-
- ### (1) Slice
- fx_mid = self.in_project_fx(x).reshape(B, N, self.heads, self.dim_head) \
- .permute(0, 2, 1, 3).contiguous() # B H N C
- x_mid = self.in_project_x(x).reshape(B, N, self.heads, self.dim_head) \
- .permute(0, 2, 1, 3).contiguous() # B H N C
- slice_weights = self.softmax(self.in_project_slice(x_mid) / self.temperature) # B H N G
- slice_norm = slice_weights.sum(2) # B H G
- slice_token = torch.einsum("bhnc,bhng->bhgc", fx_mid, slice_weights)
- slice_token = slice_token / ((slice_norm + 1e-5)[:, :, :, None].repeat(1, 1, 1, self.dim_head))
-
- ### (2) Attention among slice tokens
- q_slice_token = self.to_q(slice_token)
- k_slice_token = self.to_k(slice_token)
- v_slice_token = self.to_v(slice_token)
- dots = torch.matmul(q_slice_token, k_slice_token.transpose(-1, -2)) * self.scale
- attn = self.softmax(dots)
- attn = self.dropout(attn)
- out_slice_token = torch.matmul(attn, v_slice_token) # B H G D
-
- ### (3) Deslice
- out_x = torch.einsum("bhgc,bhng->bhnc", out_slice_token, slice_weights)
- out_x = rearrange(out_x, 'b h n d -> b n (h d)')
- return self.to_out(out_x)
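-
-# Note (added commentary, not in the original file): attention is computed among the
-# G = slice_num learned slice tokens rather than the N mesh points, so the cost is
-# O(N*G) for slicing/deslicing plus O(G^2) for attention instead of O(N^2) for plain
-# self-attention: with N = 32000 points and G = 64 slices, the score matrix is
-# 64 x 64 per head rather than 32000 x 32000.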
-
-
-class MLP(nn.Module):
- def __init__(self, n_input, n_hidden, n_output, n_layers=1, act='gelu', res=True):
- super(MLP, self).__init__()
-
- if act in ACTIVATION.keys():
- act = ACTIVATION[act]
- else:
- raise NotImplementedError
- self.n_input = n_input
- self.n_hidden = n_hidden
- self.n_output = n_output
- self.n_layers = n_layers
- self.res = res
- self.linear_pre = nn.Sequential(nn.Linear(n_input, n_hidden), act())
- self.linear_post = nn.Linear(n_hidden, n_output)
- self.linears = nn.ModuleList([nn.Sequential(nn.Linear(n_hidden, n_hidden), act()) for _ in range(n_layers)])
-
- def forward(self, x):
- x = self.linear_pre(x)
- for i in range(self.n_layers):
- if self.res:
- x = self.linears[i](x) + x
- else:
- x = self.linears[i](x)
- x = self.linear_post(x)
- return x
-
-
-class Transolver_block(nn.Module):
- """Transformer encoder block."""
-
- def __init__(
- self,
- num_heads: int,
- hidden_dim: int,
- dropout: float,
- act='gelu',
- mlp_ratio=4,
- last_layer=False,
- out_dim=1,
- slice_num=32,
- ):
- super().__init__()
- self.last_layer = last_layer
- self.ln_1 = nn.LayerNorm(hidden_dim)
- self.Attn = Physics_Attention_Irregular_Mesh(hidden_dim, heads=num_heads, dim_head=hidden_dim // num_heads,
- dropout=dropout, slice_num=slice_num)
- self.ln_2 = nn.LayerNorm(hidden_dim)
- self.mlp = MLP(hidden_dim, hidden_dim * mlp_ratio, hidden_dim, n_layers=0, res=False, act=act)
- self.mlp_new = MLP(hidden_dim, hidden_dim * mlp_ratio, hidden_dim, n_layers=0, res=False, act=act)
- if self.last_layer:
- self.ln_3 = nn.LayerNorm(hidden_dim)
- self.mlp2 = nn.Linear(hidden_dim, out_dim)
-
- def forward(self, fx):
- fx = self.Attn(self.ln_1(fx)) + fx
- fx = self.mlp(self.ln_2(fx)) + fx
- if self.last_layer:
- return self.mlp2(self.ln_3(fx))
- else:
- return fx
-
-
-class Transolver(nn.Module):
- def __init__(self,
- space_dim=1,
- n_layers=5,
- n_hidden=256,
- dropout=0,
- n_head=8,
- act='gelu',
- mlp_ratio=1,
- fun_dim=1,
- out_dim=1,
- slice_num=32,
- ref=8,
- unified_pos=False
- ):
- super(Transolver, self).__init__()
- self.__name__ = 'Transolver'
- self.ref = ref
- self.unified_pos = unified_pos
- if self.unified_pos:
- self.preprocess = MLP(fun_dim + space_dim + self.ref * self.ref, n_hidden * 2, n_hidden,
- n_layers=0,
- res=False, act=act)
- else:
- self.preprocess = MLP(fun_dim + space_dim, n_hidden * 2, n_hidden, n_layers=0, res=False, act=act)
-
- self.n_hidden = n_hidden
- self.space_dim = space_dim
-
- self.blocks = nn.ModuleList([Transolver_block(num_heads=n_head, hidden_dim=n_hidden,
- dropout=dropout,
- act=act,
- mlp_ratio=mlp_ratio, out_dim=out_dim,
- slice_num=slice_num,
- last_layer=(_ == n_layers - 1))
- for _ in range(n_layers)])
- self.initialize_weights()
- self.placeholder = nn.Parameter((1 / (n_hidden)) * torch.rand(n_hidden, dtype=torch.float))
-
- def initialize_weights(self):
- self.apply(self._init_weights)
-
- def _init_weights(self, m):
- if isinstance(m, nn.Linear):
- trunc_normal_(m.weight, std=0.02)
- if isinstance(m, nn.Linear) and m.bias is not None:
- nn.init.constant_(m.bias, 0)
- elif isinstance(m, (nn.LayerNorm, nn.BatchNorm1d)):
- nn.init.constant_(m.bias, 0)
- nn.init.constant_(m.weight, 1.0)
-
- def get_grid(self, my_pos):
- # my_pos 1 N 3
- batchsize = my_pos.shape[0]
-
- gridx = torch.tensor(np.linspace(-2, 4, self.ref), dtype=torch.float)
- gridx = gridx.reshape(1, self.ref, 1, 1).repeat([batchsize, 1, self.ref, 1])
- gridy = torch.tensor(np.linspace(-1.5, 1.5, self.ref), dtype=torch.float)
- gridy = gridy.reshape(1, 1, self.ref, 1).repeat([batchsize, self.ref, 1, 1])
-        grid_ref = torch.cat((gridx, gridy), dim=-1).cuda().reshape(batchsize, self.ref ** 2, 2)  # B ref*ref 2
-
- pos = torch.sqrt(
- torch.sum((my_pos[:, :, None, :] - grid_ref[:, None, :, :]) ** 2,
- dim=-1)). \
- reshape(batchsize, my_pos.shape[1], self.ref * self.ref).contiguous()
- return pos
-
- def forward(self, data):
- x, fx, T = data.x, None, None
- x = x[None, :, :]
- if self.unified_pos:
- new_pos = self.get_grid(data.pos[None, :, :])
- x = torch.cat((x, new_pos), dim=-1)
- if fx is not None:
- fx = torch.cat((x, fx), -1)
- fx = self.preprocess(fx)
- else:
- fx = self.preprocess(x)
- fx = fx + self.placeholder[None, None, :]
-
- for block in self.blocks:
- fx = block(fx)
-
- return fx[0]
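-
-# Illustrative usage (added commentary, not in the original file); the hyperparameter
-# values below are assumptions, chosen to match the 7 input features and 4 output
-# fields used elsewhere in this repository:
-#
-#   model = Transolver(space_dim=7, fun_dim=0, out_dim=4, n_hidden=256,
-#                      n_layers=8, n_head=8, slice_num=32)
-#   out = model(data)   # data.x: [N, 7] -> out: [N, 4]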
diff --git a/Airfoil-Design-AirfRANS/params.yaml b/Airfoil-Design-AirfRANS/params.yaml
deleted file mode 100644
index 0954592..0000000
--- a/Airfoil-Design-AirfRANS/params.yaml
+++ /dev/null
@@ -1,63 +0,0 @@
-GraphSAGE:
- encoder: [ 7, 64, 64, 8 ]
- decoder: [ 8, 64, 64, 4 ]
-
- nb_hidden_layers: 3
- size_hidden_layers: 64
- batch_size: 1
- nb_epochs: 398
- lr: 0.001
- max_neighbors: 64
- bn_bool: True
- subsampling: 32000
- r: 0.05
-
-Transolver:
- batch_size: 1
- nb_epochs: 398
- lr: 0.001
- max_neighbors: 64
- subsampling: 32000
- r: 0.05
-
-PointNet:
- encoder: [ 7, 64, 64, 8 ]
- decoder: [ 8, 64, 64, 4 ]
-
- base_nb: 8
- batch_size: 1
- nb_epochs: 398
- lr: 0.001
- subsampling: 32000
-
-MLP:
- encoder: [ 7, 64, 64, 8 ]
- decoder: [ 8, 64, 64, 4 ]
-
- nb_hidden_layers: 3
- size_hidden_layers: 64
- batch_size: 1
- nb_epochs: 398
- lr: 0.001
- bn_bool: True
- subsampling: 32000
-
-GUNet:
- encoder: [ 7, 64, 64, 8 ]
- decoder: [ 8, 64, 64, 4 ]
-
- layer: 'SAGE'
- pool: 'random'
- nb_scale: 5
- pool_ratio: [ .5, .5, .5, .5 ]
- list_r: [ .05, .2, .5, 1, 10 ]
- size_hidden_layers: 8
- batchnorm: True
- res: False
-
- batch_size: 1
- nb_epochs: 398
- lr: 0.001
- max_neighbors: 64
- subsampling: 32000
- r: 0.05
\ No newline at end of file
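-
-# Example (added commentary, not in the original file): a minimal sketch of how this
-# file could be consumed, assuming a loading step along these lines in main.py:
-#
-#   import yaml
-#   with open('params.yaml') as f:
-#       hparams = yaml.safe_load(f)['GUNet']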
diff --git a/Airfoil-Design-AirfRANS/requirements.txt b/Airfoil-Design-AirfRANS/requirements.txt
deleted file mode 100644
index 81767c5..0000000
--- a/Airfoil-Design-AirfRANS/requirements.txt
+++ /dev/null
@@ -1,8 +0,0 @@
-torch
-torch_geometric
-torch-cluster
-vtk
-timm
-einops
-seaborn
-pyvista
diff --git a/Airfoil-Design-AirfRANS/scripts/Evaluation.sh b/Airfoil-Design-AirfRANS/scripts/Evaluation.sh
deleted file mode 100644
index 59ba093..0000000
--- a/Airfoil-Design-AirfRANS/scripts/Evaluation.sh
+++ /dev/null
@@ -1,3 +0,0 @@
-export CUDA_VISIBLE_DEVICES=4
-
-python main_evaluation.py --my_path /data/naca/
\ No newline at end of file
diff --git a/Airfoil-Design-AirfRANS/scripts/GraphSAGE.sh b/Airfoil-Design-AirfRANS/scripts/GraphSAGE.sh
deleted file mode 100644
index 97ea9bd..0000000
--- a/Airfoil-Design-AirfRANS/scripts/GraphSAGE.sh
+++ /dev/null
@@ -1,3 +0,0 @@
-export CUDA_VISIBLE_DEVICES=4
-
-python main.py --model GraphSAGE -t full --my_path /data/naca/Dataset --score 1
\ No newline at end of file
diff --git a/Airfoil-Design-AirfRANS/scripts/Transolver.sh b/Airfoil-Design-AirfRANS/scripts/Transolver.sh
deleted file mode 100644
index 2525d39..0000000
--- a/Airfoil-Design-AirfRANS/scripts/Transolver.sh
+++ /dev/null
@@ -1,3 +0,0 @@
-export CUDA_VISIBLE_DEVICES=6
-
-python main.py --model Transolver -t full --my_path /data/naca/Dataset --score 1
\ No newline at end of file
diff --git a/Airfoil-Design-AirfRANS/train.py b/Airfoil-Design-AirfRANS/train.py
deleted file mode 100644
index a1d7648..0000000
--- a/Airfoil-Design-AirfRANS/train.py
+++ /dev/null
@@ -1,343 +0,0 @@
-import random
-import numpy as np
-import matplotlib.pyplot as plt
-import seaborn as sns
-
-import time, json
-
-import torch
-import torch.nn as nn
-import torch_geometric.nn as nng
-from torch_geometric.loader import DataLoader
-
-from tqdm import tqdm
-
-from pathlib import Path
-import os.path as osp
-
-
-def get_nb_trainable_params(model):
- '''
- Return the number of trainable parameters
- '''
- model_parameters = filter(lambda p: p.requires_grad, model.parameters())
- return sum([np.prod(p.size()) for p in model_parameters])
-
-
-def train(device, model, train_loader, optimizer, scheduler, criterion='MSE', reg=1):
- model.train()
- avg_loss_per_var = torch.zeros(4, device=device)
- avg_loss = 0
- avg_loss_surf_var = torch.zeros(4, device=device)
- avg_loss_vol_var = torch.zeros(4, device=device)
- avg_loss_surf = 0
- avg_loss_vol = 0
- iter = 0
-
- for data in train_loader:
- data_clone = data.clone()
- data_clone = data_clone.to(device)
- optimizer.zero_grad()
- out = model(data_clone)
- targets = data_clone.y
-
- if criterion == 'MSE' or criterion == 'MSE_weighted':
- loss_criterion = nn.MSELoss(reduction='none')
- elif criterion == 'MAE':
- loss_criterion = nn.L1Loss(reduction='none')
- loss_per_var = loss_criterion(out, targets).mean(dim=0)
- total_loss = loss_per_var.mean()
- loss_surf_var = loss_criterion(out[data_clone.surf, :], targets[data_clone.surf, :]).mean(dim=0)
- loss_vol_var = loss_criterion(out[~data_clone.surf, :], targets[~data_clone.surf, :]).mean(dim=0)
- loss_surf = loss_surf_var.mean()
- loss_vol = loss_vol_var.mean()
-
- if criterion == 'MSE_weighted':
- (loss_vol + reg * loss_surf).backward()
- else:
- total_loss.backward()
-
- optimizer.step()
- scheduler.step()
- avg_loss_per_var += loss_per_var
- avg_loss += total_loss
- avg_loss_surf_var += loss_surf_var
- avg_loss_vol_var += loss_vol_var
- avg_loss_surf += loss_surf
- avg_loss_vol += loss_vol
- iter += 1
-
- return avg_loss.cpu().data.numpy() / iter, avg_loss_per_var.cpu().data.numpy() / iter, avg_loss_surf_var.cpu().data.numpy() / iter, avg_loss_vol_var.cpu().data.numpy() / iter, \
- avg_loss_surf.cpu().data.numpy() / iter, avg_loss_vol.cpu().data.numpy() / iter
-
-
-@torch.no_grad()
-def test(device, model, test_loader, criterion='MSE'):
- model.eval()
- avg_loss_per_var = np.zeros(4)
- avg_loss = 0
- avg_loss_surf_var = np.zeros(4)
- avg_loss_vol_var = np.zeros(4)
- avg_loss_surf = 0
- avg_loss_vol = 0
- iter = 0
-
- for data in test_loader:
- data_clone = data.clone()
- data_clone = data_clone.to(device)
- out = model(data_clone)
-
- targets = data_clone.y
-        if criterion == 'MSE' or criterion == 'MSE_weighted':
- loss_criterion = nn.MSELoss(reduction='none')
- elif criterion == 'MAE':
- loss_criterion = nn.L1Loss(reduction='none')
-
- loss_per_var = loss_criterion(out, targets).mean(dim=0)
- loss = loss_per_var.mean()
- loss_surf_var = loss_criterion(out[data_clone.surf, :], targets[data_clone.surf, :]).mean(dim=0)
- loss_vol_var = loss_criterion(out[~data_clone.surf, :], targets[~data_clone.surf, :]).mean(dim=0)
- loss_surf = loss_surf_var.mean()
- loss_vol = loss_vol_var.mean()
-
- avg_loss_per_var += loss_per_var.cpu().numpy()
- avg_loss += loss.cpu().numpy()
- avg_loss_surf_var += loss_surf_var.cpu().numpy()
- avg_loss_vol_var += loss_vol_var.cpu().numpy()
- avg_loss_surf += loss_surf.cpu().numpy()
- avg_loss_vol += loss_vol.cpu().numpy()
- iter += 1
-
- return avg_loss / iter, avg_loss_per_var / iter, avg_loss_surf_var / iter, avg_loss_vol_var / iter, avg_loss_surf / iter, avg_loss_vol / iter
-
-
-class NumpyEncoder(json.JSONEncoder):
- def default(self, obj):
- if isinstance(obj, np.ndarray):
- return obj.tolist()
- return json.JSONEncoder.default(self, obj)
-
-
-def main(device, train_dataset, val_dataset, Net, hparams, path, criterion='MSE', reg=1, val_iter=10,
- name_mod='GraphSAGE', val_sample=True):
- '''
- Args:
- device (str): device on which you want to do the computation.
- train_dataset (list): list of the data in the training set.
- val_dataset (list): list of the data in the validation set.
- Net (class): network to train.
- hparams (dict): hyper parameters of the network.
- path (str): where to save the trained model and the figures.
-        criterion (str, optional): choose between 'MSE', 'MAE', and 'MSE_weighted'. The latter is the volumetric MSE plus the surface MSE computed independently. Default: 'MSE'.
-        reg (float, optional): weight for the surface loss when criterion is 'MSE_weighted'. Default: 1.
- val_iter (int, optional): number of epochs between each validation step. Default: 10.
-        name_mod (str, optional): type of model. Default: 'GraphSAGE'.
-        val_sample (bool, optional): if True, average the validation metrics over 20 random sub-samplings of the validation set. Default: True.
- '''
- Path(path).mkdir(parents=True, exist_ok=True)
-
- model = Net.to(device)
- optimizer = torch.optim.Adam(model.parameters(), lr=hparams['lr'])
- lr_scheduler = torch.optim.lr_scheduler.OneCycleLR(
- optimizer,
- max_lr=hparams['lr'],
- total_steps=(len(train_dataset) // hparams['batch_size'] + 1) * hparams['nb_epochs'],
- )
- val_loader = DataLoader(val_dataset, batch_size=1)
- start = time.time()
-
- train_loss_surf_list = []
- train_loss_vol_list = []
- loss_surf_var_list = []
- loss_vol_var_list = []
- val_surf_list = []
- val_vol_list = []
- val_surf_var_list = []
- val_vol_var_list = []
-
- pbar_train = tqdm(range(hparams['nb_epochs']), position=0)
- for epoch in pbar_train:
- train_dataset_sampled = []
- for data in train_dataset:
- data_sampled = data.clone()
- idx = random.sample(range(data_sampled.x.size(0)), hparams['subsampling'])
- idx = torch.tensor(idx)
-
- data_sampled.pos = data_sampled.pos[idx]
- data_sampled.x = data_sampled.x[idx]
- data_sampled.y = data_sampled.y[idx]
- data_sampled.surf = data_sampled.surf[idx]
-
- if name_mod != 'PointNet' and name_mod != 'MLP':
- data_sampled.edge_index = nng.radius_graph(x=data_sampled.pos.to(device), r=hparams['r'], loop=True,
- max_num_neighbors=int(hparams['max_neighbors'])).cpu()
-
- train_dataset_sampled.append(data_sampled)
- train_loader = DataLoader(train_dataset_sampled, batch_size=hparams['batch_size'], shuffle=True)
- del (train_dataset_sampled)
-
- train_loss, _, loss_surf_var, loss_vol_var, loss_surf, loss_vol = train(device, model, train_loader, optimizer,
- lr_scheduler, criterion, reg=reg)
- print('epoch: ' + str(epoch))
- print('train_loss: ' + str(train_loss))
- print('loss_vol: ' + str(loss_vol))
- print('loss_surf: ' + str(loss_surf))
-
- if criterion == 'MSE_weighted':
- train_loss = reg * loss_surf + loss_vol
- del (train_loader)
-
- train_loss_surf_list.append(loss_surf)
- train_loss_vol_list.append(loss_vol)
- loss_surf_var_list.append(loss_surf_var)
- loss_vol_var_list.append(loss_vol_var)
-
- if val_iter is not None:
- if epoch % val_iter == val_iter - 1 or epoch == 0:
- if val_sample:
- val_surf_vars, val_vol_vars, val_surfs, val_vols = [], [], [], []
- for i in range(20):
- val_dataset_sampled = []
- for data in val_dataset:
- data_sampled = data.clone()
- idx = random.sample(range(data_sampled.x.size(0)), hparams['subsampling'])
- idx = torch.tensor(idx)
-
- data_sampled.pos = data_sampled.pos[idx]
- data_sampled.x = data_sampled.x[idx]
- data_sampled.y = data_sampled.y[idx]
- data_sampled.surf = data_sampled.surf[idx]
-
- if name_mod != 'PointNet' and name_mod != 'MLP':
- data_sampled.edge_index = nng.radius_graph(x=data_sampled.pos.to(device),
- r=hparams['r'], loop=True,
- max_num_neighbors=int(
- hparams['max_neighbors'])).cpu()
-
- # if name_mod == 'GNO' or name_mod == 'MGNO':
- # x, edge_index = data_sampled.x, data_sampled.edge_index
- # x_i, x_j = x[edge_index[0], 0:2], x[edge_index[1], 0:2]
- # v_i, v_j = x[edge_index[0], 2:4], x[edge_index[1], 2:4]
- # p_i, p_j = x[edge_index[0], 4:5], x[edge_index[1], 4:5]
- # v_inf = torch.linalg.norm(v_i, dim = 1, keepdim = True)
- # sdf_i, sdf_j = x[edge_index[0], 5:6], x[edge_index[1], 5:6]
- # normal_i, normal_j = x[edge_index[0], 6:8], x[edge_index[1], 6:8]
-
- # data_sampled.edge_attr = torch.cat([x_i - x_j, v_i - v_j, p_i - p_j, sdf_i, sdf_j, v_inf, normal_i, normal_j], dim = 1)
-
- val_dataset_sampled.append(data_sampled)
- val_loader = DataLoader(val_dataset_sampled, batch_size=1, shuffle=True)
- del (val_dataset_sampled)
-
- val_loss, _, val_surf_var, val_vol_var, val_surf, val_vol = test(device, model, val_loader,
- criterion)
- del (val_loader)
- val_surf_vars.append(val_surf_var)
- val_vol_vars.append(val_vol_var)
- val_surfs.append(val_surf)
- val_vols.append(val_vol)
- val_surf_var = np.array(val_surf_vars).mean(axis=0)
- val_vol_var = np.array(val_vol_vars).mean(axis=0)
- val_surf = np.array(val_surfs).mean(axis=0)
- val_vol = np.array(val_vols).mean(axis=0)
- else:
- val_loss, _, val_surf_var, val_vol_var, val_surf, val_vol = test(device, model, val_loader,
- criterion)
- print("=====validation=====")
- print('epoch: ' + str(epoch))
- print('val_vol: ' + str(val_vol))
- print('val_surf: ' + str(val_surf))
-                if criterion == 'MSE_weighted':
- val_loss = reg * val_surf + val_vol
- val_surf_list.append(val_surf)
- val_vol_list.append(val_vol)
- val_surf_var_list.append(val_surf_var)
- val_vol_var_list.append(val_vol_var)
- pbar_train.set_postfix(train_loss=train_loss, loss_surf=loss_surf, val_loss=val_loss, val_surf=val_surf)
- else:
- pbar_train.set_postfix(train_loss=train_loss, loss_surf=loss_surf, val_loss=val_loss, val_surf=val_surf)
- else:
- pbar_train.set_postfix(train_loss=train_loss, loss_surf=loss_surf)
-
- loss_surf_var_list = np.array(loss_surf_var_list)
- loss_vol_var_list = np.array(loss_vol_var_list)
- val_surf_var_list = np.array(val_surf_var_list)
- val_vol_var_list = np.array(val_vol_var_list)
-
- end = time.time()
- time_elapsed = end - start
- params_model = get_nb_trainable_params(model).astype('float')
- print('Number of parameters:', params_model)
- print('Time elapsed: {0:.2f} seconds'.format(time_elapsed))
- torch.save(model, osp.join(path, 'model'))
-
- sns.set()
- fig_train_surf, ax_train_surf = plt.subplots(figsize=(20, 5))
- ax_train_surf.plot(train_loss_surf_list, label='Mean loss')
- ax_train_surf.plot(loss_surf_var_list[:, 0], label=r'$v_x$ loss')
- ax_train_surf.plot(loss_surf_var_list[:, 1], label=r'$v_y$ loss')
- ax_train_surf.plot(loss_surf_var_list[:, 2], label=r'$p$ loss')
- ax_train_surf.plot(loss_surf_var_list[:, 3], label=r'$\nu_t$ loss')
- ax_train_surf.set_xlabel('epochs')
- ax_train_surf.set_yscale('log')
- ax_train_surf.set_title('Train losses over the surface')
- ax_train_surf.legend(loc='best')
- fig_train_surf.savefig(osp.join(path, 'train_loss_surf.png'), dpi=150, bbox_inches='tight')
-
- fig_train_vol, ax_train_vol = plt.subplots(figsize=(20, 5))
- ax_train_vol.plot(train_loss_vol_list, label='Mean loss')
- ax_train_vol.plot(loss_vol_var_list[:, 0], label=r'$v_x$ loss')
- ax_train_vol.plot(loss_vol_var_list[:, 1], label=r'$v_y$ loss')
- ax_train_vol.plot(loss_vol_var_list[:, 2], label=r'$p$ loss')
- ax_train_vol.plot(loss_vol_var_list[:, 3], label=r'$\nu_t$ loss')
- ax_train_vol.set_xlabel('epochs')
- ax_train_vol.set_yscale('log')
- ax_train_vol.set_title('Train losses over the volume')
- ax_train_vol.legend(loc='best')
- fig_train_vol.savefig(osp.join(path, 'train_loss_vol.png'), dpi=150, bbox_inches='tight')
-
- if val_iter is not None:
- fig_val_surf, ax_val_surf = plt.subplots(figsize=(20, 5))
- ax_val_surf.plot(val_surf_list, label='Mean loss')
- ax_val_surf.plot(val_surf_var_list[:, 0], label=r'$v_x$ loss')
- ax_val_surf.plot(val_surf_var_list[:, 1], label=r'$v_y$ loss')
- ax_val_surf.plot(val_surf_var_list[:, 2], label=r'$p$ loss')
- ax_val_surf.plot(val_surf_var_list[:, 3], label=r'$\nu_t$ loss')
- ax_val_surf.set_xlabel('epochs')
- ax_val_surf.set_yscale('log')
- ax_val_surf.set_title('Validation losses over the surface')
- ax_val_surf.legend(loc='best')
- fig_val_surf.savefig(osp.join(path, 'val_loss_surf.png'), dpi=150, bbox_inches='tight')
-
- fig_val_vol, ax_val_vol = plt.subplots(figsize=(20, 5))
- ax_val_vol.plot(val_vol_list, label='Mean loss')
- ax_val_vol.plot(val_vol_var_list[:, 0], label=r'$v_x$ loss')
- ax_val_vol.plot(val_vol_var_list[:, 1], label=r'$v_y$ loss')
- ax_val_vol.plot(val_vol_var_list[:, 2], label=r'$p$ loss')
- ax_val_vol.plot(val_vol_var_list[:, 3], label=r'$\nu_t$ loss')
- ax_val_vol.set_xlabel('epochs')
- ax_val_vol.set_yscale('log')
- ax_val_vol.set_title('Validation losses over the volume')
- ax_val_vol.legend(loc='best')
- fig_val_vol.savefig(osp.join(path, 'val_loss_vol.png'), dpi=150, bbox_inches='tight')
-
- if val_iter is not None:
- with open(osp.join(path, name_mod + '_log.json'), 'a') as f:
- json.dump(
- {
- 'regression': 'Total',
- 'loss': 'MSE',
- 'nb_parameters': params_model,
- 'time_elapsed': time_elapsed,
- 'hparams': hparams,
- 'train_loss_surf': train_loss_surf_list[-1],
- 'train_loss_surf_var': loss_surf_var_list[-1],
- 'train_loss_vol': train_loss_vol_list[-1],
- 'train_loss_vol_var': loss_vol_var_list[-1],
- 'val_loss_surf': val_surf_list[-1],
- 'val_loss_surf_var': val_surf_var_list[-1],
- 'val_loss_vol': val_vol_list[-1],
- 'val_loss_vol_var': val_vol_var_list[-1],
- }, f, indent=12, cls=NumpyEncoder
- )
-
- return model
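-
-# Illustrative usage (added commentary, not in the original file); the path is a
-# hypothetical value and the datasets/hparams are assumed to be built as in the rest
-# of this repository:
-#
-#   model = main('cuda', train_dataset, val_dataset, Net, hparams,
-#                path='metrics/GraphSAGE', criterion='MSE_weighted', reg=1,
-#                val_iter=10, name_mod='GraphSAGE')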
diff --git a/Airfoil-Design-AirfRANS/utils/metrics.py b/Airfoil-Design-AirfRANS/utils/metrics.py
deleted file mode 100644
index 062b7df..0000000
--- a/Airfoil-Design-AirfRANS/utils/metrics.py
+++ /dev/null
@@ -1,447 +0,0 @@
-import os.path as osp
-import pathlib
-
-import numpy as np
-import scipy as sc
-import torch
-import torch.nn as nn
-import torch_geometric.nn as nng
-from torch_geometric.loader import DataLoader
-
-import pyvista as pv
-import json
-import seaborn as sns
-import random
-import time
-
-import utils.metrics_NACA as metrics_NACA
-from utils.reorganize import reorganize
-from dataset.dataset import Dataset
-
-from tqdm import tqdm
-
-NU = np.array(1.56e-5)
-
-
-class NumpyEncoder(json.JSONEncoder):
- def default(self, obj):
- if isinstance(obj, np.ndarray):
- return obj.tolist()
- return json.JSONEncoder.default(self, obj)
-
-
-def rsquared(predict, true):
- '''
- Args:
- predict (tensor): Predicted values, shape (N, *)
- true (tensor): True values, shape (N, *)
-
- Out:
- rsquared (tensor): Coefficient of determination of the prediction, shape (*,)
- '''
- mean = true.mean(dim=0)
- return 1 - ((true - predict) ** 2).sum(dim=0) / ((true - mean) ** 2).sum(dim=0)
-
-
-def rel_err(a, b):
- return np.abs((a - b) / a)
-
-
-def WallShearStress(Jacob_U, normals):
- S = .5 * (Jacob_U + Jacob_U.transpose(0, 2, 1))
- S = S - S.trace(axis1=1, axis2=2).reshape(-1, 1, 1) * np.eye(2)[None] / 3
- ShearStress = 2 * NU.reshape(-1, 1, 1) * S
- ShearStress = (ShearStress * normals[:, :2].reshape(-1, 1, 2)).sum(axis=2)
-
- return ShearStress
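-
-# Note (added commentary, not in the original file): this implements the traceless
-# Newtonian viscous stress tau = 2 * nu * (S - tr(S) * I / 3), with the strain-rate
-# tensor S = (J + J^T) / 2 built from the velocity Jacobian J, projected onto the
-# surface normal to obtain the wall shear stress vector at each surface point.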
-
-
-@torch.no_grad()
-def Infer_test(device, models, hparams, data, coef_norm=None):
- # Inference procedure on new simulation
- outs = [torch.zeros_like(data.y)] * len(models)
- n_out = torch.zeros_like(data.y[:, :1])
- idx_points = set(map(tuple, data.pos[:, :2].numpy()))
- cond = True
- i = 0
- while cond:
- i += 1
- data_sampled = data.clone()
- idx = random.sample(range(data_sampled.x.size(0)), hparams[0]['subsampling'])
- idx = torch.tensor(idx)
- idx_points = idx_points - set(map(tuple, data_sampled.pos[idx, :2].numpy()))
- data_sampled.pos = data_sampled.pos[idx]
- data_sampled.x = data_sampled.x[idx]
- data_sampled.y = data_sampled.y[idx]
- data_sampled.surf = data_sampled.surf[idx]
- data_sampled.batch = data_sampled.batch[idx]
-
- # try:
- # data_sampled.edge_index = nng.radius_graph(x = data_sampled.pos.to(device), r = hparams['r'], loop = True, max_num_neighbors = int(hparams['max_neighbors'])).cpu()
- # except KeyError:
- # None
-
- out = [torch.zeros_like(data.y)] * len(models)
- tim = np.zeros(len(models))
- for n, model in enumerate(models):
- try:
- data_sampled.edge_index = nng.radius_graph(x=data_sampled.pos.to(device), r=hparams[n]['r'], loop=True,
- max_num_neighbors=int(hparams[n]['max_neighbors'])).cpu()
- except KeyError:
- data_sampled.edge_index = None
-
- model.eval()
- data_sampled = data_sampled.to(device)
- start = time.time()
- o = model(data_sampled)
- tim[n] += time.time() - start
- out[n][idx] = o.cpu()
-
- outs[n] = outs[n] + out[n]
- n_out[idx] = n_out[idx] + torch.ones_like(n_out[idx])
-
- cond = (len(idx_points) > 0)
-
- for n, out in enumerate(outs):
- outs[n] = out / n_out
- if coef_norm is not None:
- outs[n][data.surf, :2] = -torch.tensor(coef_norm[2][None, :2]) * torch.ones_like(out[data.surf, :2]) / (
- torch.tensor(coef_norm[3][None, :2]) + 1e-8)
- outs[n][data.surf, 3] = -torch.tensor(coef_norm[2][3]) * torch.ones_like(out[data.surf, 3]) / (
- torch.tensor(coef_norm[3][3]) + 1e-8)
- else:
- outs[n][data.surf, :2] = torch.zeros_like(out[data.surf, :2])
- outs[n][data.surf, 3] = torch.zeros_like(out[data.surf, 3])
-
- return outs, tim / i
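-
-# Note (added commentary, not in the original file): the while-loop above keeps
-# drawing random 'subsampling'-sized subsets and re-running the models until every
-# point of the simulation has been predicted at least once (idx_points tracks the
-# positions not yet covered); n_out counts how often each point was hit, so the final
-# division yields a per-point average over all draws.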
-
-
-def Airfoil_test(internal, airfoil, outs, coef_norm, bool_surf):
- # Produce multiple copies of a simulation for different predictions.
-    # Store the internals and airfoils, compute the wall shear stress, the drag and the lift, plot the pressure and
-    # skin friction coefficients, plot predicted vs true drag and lift, compute the Spearman correlation coefficients and the boundary layers.
- internals = []
- airfoils = []
- for out in outs:
- intern = internal.copy()
- aerofoil = airfoil.copy()
-
- point_mesh = intern.points[bool_surf, :2]
- point_surf = aerofoil.points[:, :2]
- out = (out * (coef_norm[3] + 1e-8) + coef_norm[2]).numpy()
- out[bool_surf.numpy(), :2] = np.zeros_like(out[bool_surf.numpy(), :2])
- out[bool_surf.numpy(), 3] = np.zeros_like(out[bool_surf.numpy(), 3])
- intern.point_data['U'][:, :2] = out[:, :2]
- intern.point_data['p'] = out[:, 2]
- intern.point_data['nut'] = out[:, 3]
-
- surf_p = intern.point_data['p'][bool_surf]
- surf_p = reorganize(point_mesh, point_surf, surf_p)
- aerofoil.point_data['p'] = surf_p
-
- intern = intern.ptc(pass_point_data=True)
- aerofoil = aerofoil.ptc(pass_point_data=True)
-
- internals.append(intern)
- airfoils.append(aerofoil)
-
- return internals, airfoils
-
-
-def Airfoil_mean(internals, airfoils):
- # Average multiple prediction over one simulation
-
- oi_point = np.zeros((internals[0].points.shape[0], 4))
- oi_cell = np.zeros((internals[0].cell_data['U'].shape[0], 4))
- oa_point = np.zeros((airfoils[0].points.shape[0], 4))
- oa_cell = np.zeros((airfoils[0].cell_data['U'].shape[0], 4))
-
- for k in range(len(internals)):
- oi_point[:, :2] += internals[k].point_data['U'][:, :2]
- oi_point[:, 2] += internals[k].point_data['p']
- oi_point[:, 3] += internals[k].point_data['nut']
- oi_cell[:, :2] += internals[k].cell_data['U'][:, :2]
- oi_cell[:, 2] += internals[k].cell_data['p']
- oi_cell[:, 3] += internals[k].cell_data['nut']
-
- oa_point[:, :2] += airfoils[k].point_data['U'][:, :2]
- oa_point[:, 2] += airfoils[k].point_data['p']
- oa_point[:, 3] += airfoils[k].point_data['nut']
- oa_cell[:, :2] += airfoils[k].cell_data['U'][:, :2]
- oa_cell[:, 2] += airfoils[k].cell_data['p']
- oa_cell[:, 3] += airfoils[k].cell_data['nut']
- oi_point = oi_point / len(internals)
- oi_cell = oi_cell / len(internals)
- oa_point = oa_point / len(airfoils)
- oa_cell = oa_cell / len(airfoils)
- internal_mean = internals[0].copy()
- internal_mean.point_data['U'][:, :2] = oi_point[:, :2]
- internal_mean.point_data['p'] = oi_point[:, 2]
- internal_mean.point_data['nut'] = oi_point[:, 3]
- internal_mean.cell_data['U'][:, :2] = oi_cell[:, :2]
- internal_mean.cell_data['p'] = oi_cell[:, 2]
- internal_mean.cell_data['nut'] = oi_cell[:, 3]
-
- airfoil_mean = airfoils[0].copy()
- airfoil_mean.point_data['U'][:, :2] = oa_point[:, :2]
- airfoil_mean.point_data['p'] = oa_point[:, 2]
- airfoil_mean.point_data['nut'] = oa_point[:, 3]
- airfoil_mean.cell_data['U'][:, :2] = oa_cell[:, :2]
- airfoil_mean.cell_data['p'] = oa_cell[:, 2]
- airfoil_mean.cell_data['nut'] = oa_cell[:, 3]
-
- return internal_mean, airfoil_mean
-
-
-def Compute_coefficients(internals, airfoils, bool_surf, Uinf, angle, keep_vtk=False):
-    # Compute force coefficients; if keep_vtk is True, also return the .vtu/.vtp with wall shear stress added over the airfoil and velocity gradient over the volume.
-
- coefs = []
- if keep_vtk:
- new_internals = []
- new_airfoils = []
-
- for internal, airfoil in zip(internals, airfoils):
- intern = internal.copy()
- aerofoil = airfoil.copy()
-
- point_mesh = intern.points[bool_surf, :2]
- point_surf = aerofoil.points[:, :2]
-
- intern = intern.compute_derivative(scalars='U', gradient='pred_grad')
-
- surf_grad = intern.point_data['pred_grad'].reshape(-1, 3, 3)[bool_surf, :2, :2]
- surf_p = intern.point_data['p'][bool_surf]
-
- surf_grad = reorganize(point_mesh, point_surf, surf_grad)
- surf_p = reorganize(point_mesh, point_surf, surf_p)
-
- Wss_pred = WallShearStress(surf_grad, -aerofoil.point_data['Normals'])
- aerofoil.point_data['wallShearStress'] = Wss_pred
- aerofoil.point_data['p'] = surf_p
-
- intern = intern.ptc(pass_point_data=True)
- aerofoil = aerofoil.ptc(pass_point_data=True)
-
- WP_int = -aerofoil.cell_data['p'][:, None] * aerofoil.cell_data['Normals'][:, :2]
-
- Wss_int = (aerofoil.cell_data['wallShearStress'] * aerofoil.cell_data['Length'].reshape(-1, 1)).sum(axis=0)
- WP_int = (WP_int * aerofoil.cell_data['Length'].reshape(-1, 1)).sum(axis=0)
- force = Wss_int - WP_int
-
- alpha = angle * np.pi / 180
- basis = np.array([[np.cos(alpha), np.sin(alpha)], [-np.sin(alpha), np.cos(alpha)]])
- force_rot = basis @ force
- coef = 2 * force_rot / Uinf ** 2
- coefs.append(coef)
- if keep_vtk:
- new_internals.append(intern)
- new_airfoils.append(aerofoil)
-
- if keep_vtk:
- return coefs, new_internals, new_airfoils
- else:
- return coefs
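-
-# Note (added commentary, not in the original file): force is the line integral of
-# the wall shear stress and the pressure contribution over the airfoil cells;
-# rotating it by the angle of attack alpha maps it to the (drag, lift) frame, and
-# C = 2 * force_rot / Uinf**2 is the usual coefficient normalisation with the chord
-# and the (kinematic) density taken as 1.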
-
-
-def Results_test(device, models, hparams, coef_norm, path_in, path_out, n_test=3, criterion='MSE',
- x_bl=[.2, .4, .6, .8], s='full_test'):
- '''
- Compute criterion scores for the fields over the volume and the surface, and for the force coefficients. Also compute Spearman's correlation scores
- for the force coefficients and the relative error for the wall shear stress and the pressure over the airfoil. Outputs the true, the mean predicted
-    and the std predicted integrated force coefficients, the true and mean predicted local force coefficients (at the surface of the airfoils), and the true
-    and mean predicted boundary layers at the chord positions x_bl.
-
- Args:
- device (str): Device on which you do the prediction.
- models (torch_geometric.nn.Module): List of models to predict with. It is a list of a list of different training of the same model.
- For example, it can be [model_MLP, model_GraphSAGE] where model_MLP is itself a list of the form [MLP_1, MLP_2].
-        hparams (list): List of dictionaries of hyperparameters of the models.
- coef_norm (tuple): Tuple of the form (mean_in, mean_out, std_in, std_out) for the denormalization of the data.
- path_in (str): Path to find the manifest.json file and the dataset.
- path_out (str): Path to write the scores.
- n_test (int, optional): Number of airfoils on which you want to infer (they will be drawn randomly in the given set). Default: ``3``
-        criterion (str, optional): Criterion for the fields scores. Choose between MSE and MAE. Default: ``"MSE"``
-        x_bl (list, optional): List of chordwise positions at which the predicted boundary layer will be extracted. Default: ``[.2, .4, .6, .8]``
- s (str, optional): Dataset in which the simulation names are going to be sampled. Default: ``"full_test"``
- '''
-    # Compute scores and all metrics for a set of test simulations
- sns.set()
- pathlib.Path(path_out).mkdir(parents=True, exist_ok=True)
-
- with open(osp.join(path_in, 'manifest.json'), 'r') as f:
- manifest = json.load(f)
-
- test_dataset = manifest[s]
- idx = random.sample(range(len(test_dataset)), k=n_test)
- idx.sort()
-
- test_dataset_vtk = Dataset(test_dataset, sample=None, coef_norm=coef_norm, my_path=path_in)
- test_loader = DataLoader(test_dataset_vtk, shuffle=False)
-
- if criterion == 'MSE':
- criterion = nn.MSELoss(reduction='none')
- elif criterion == 'MAE':
- criterion = nn.L1Loss(reduction='none')
-
- scores_vol = []
- scores_surf = []
- scores_force = []
- scores_p = []
- scores_wss = []
- internals = []
- airfoils = []
- true_internals = []
- true_airfoils = []
- times = []
- true_coefs = []
- pred_coefs = []
- for i in range(len(models[0])):
- model = [models[n][i] for n in range(len(models))]
- avg_loss_per_var = np.zeros((len(model), 4))
- avg_loss = np.zeros(len(model))
- avg_loss_surf_var = np.zeros((len(model), 4))
- avg_loss_vol_var = np.zeros((len(model), 4))
- avg_loss_surf = np.zeros(len(model))
- avg_loss_vol = np.zeros(len(model))
- avg_rel_err_force = np.zeros((len(model), 2))
- avg_loss_p = np.zeros((len(model)))
- avg_loss_wss = np.zeros((len(model), 2))
- internal = []
- airfoil = []
- pred_coef = []
-
- for j, data in enumerate(tqdm(test_loader)):
- Uinf, angle = float(test_dataset[j].split('_')[2]), float(test_dataset[j].split('_')[3])
- outs, tim = Infer_test(device, model, hparams, data, coef_norm=coef_norm)
- times.append(tim)
- intern = pv.read(osp.join(path_in, test_dataset[j], test_dataset[j] + '_internal.vtu'))
- aerofoil = pv.read(osp.join(path_in, test_dataset[j], test_dataset[j] + '_aerofoil.vtp'))
- tc, true_intern, true_airfoil = Compute_coefficients([intern], [aerofoil], data.surf, Uinf, angle,
- keep_vtk=True)
- tc, true_intern, true_airfoil = tc[0], true_intern[0], true_airfoil[0]
- intern, aerofoil = Airfoil_test(intern, aerofoil, outs, coef_norm, data.surf)
- pc, intern, aerofoil = Compute_coefficients(intern, aerofoil, data.surf, Uinf, angle, keep_vtk=True)
- if i == 0:
- true_coefs.append(tc)
- pred_coef.append(pc)
-
- if j in idx:
- internal.append(intern)
- airfoil.append(aerofoil)
- if i == 0:
- true_internals.append(true_intern)
- true_airfoils.append(true_airfoil)
-
- for n, out in enumerate(outs):
- loss_per_var = criterion(out, data.y).mean(dim=0)
- loss = loss_per_var.mean()
- loss_surf_var = criterion(out[data.surf, :], data.y[data.surf, :]).mean(dim=0)
- loss_vol_var = criterion(out[~data.surf, :], data.y[~data.surf, :]).mean(dim=0)
- loss_surf = loss_surf_var.mean()
- loss_vol = loss_vol_var.mean()
-
- avg_loss_per_var[n] += loss_per_var.cpu().numpy()
- avg_loss[n] += loss.cpu().numpy()
- avg_loss_surf_var[n] += loss_surf_var.cpu().numpy()
- avg_loss_vol_var[n] += loss_vol_var.cpu().numpy()
- avg_loss_surf[n] += loss_surf.cpu().numpy()
- avg_loss_vol[n] += loss_vol.cpu().numpy()
- avg_rel_err_force[n] += rel_err(tc, pc[n])
- avg_loss_wss[n] += rel_err(true_airfoil.point_data['wallShearStress'],
- aerofoil[n].point_data['wallShearStress']).mean(axis=0)
- avg_loss_p[n] += rel_err(true_airfoil.point_data['p'], aerofoil[n].point_data['p']).mean(axis=0)
-
- internals.append(internal)
- airfoils.append(airfoil)
- pred_coefs.append(pred_coef)
-
- score_var = np.array(avg_loss_per_var) / len(test_loader)
- score = np.array(avg_loss) / len(test_loader)
- score_surf_var = np.array(avg_loss_surf_var) / len(test_loader)
- score_vol_var = np.array(avg_loss_vol_var) / len(test_loader)
- score_surf = np.array(avg_loss_surf) / len(test_loader)
- score_vol = np.array(avg_loss_vol) / len(test_loader)
- score_force = np.array(avg_rel_err_force) / len(test_loader)
- score_p = np.array(avg_loss_p) / len(test_loader)
- score_wss = np.array(avg_loss_wss) / len(test_loader)
-
- score = score_surf + score_vol
- scores_vol.append(score_vol_var)
- scores_surf.append(score_surf_var)
- scores_force.append(score_force)
- scores_p.append(score_p)
- scores_wss.append(score_wss)
-
- scores_vol = np.array(scores_vol)
- scores_surf = np.array(scores_surf)
- scores_force = np.array(scores_force)
- scores_p = np.array(scores_p)
- scores_wss = np.array(scores_wss)
- times = np.array(times)
- true_coefs = np.array(true_coefs)
- pred_coefs = np.array(pred_coefs)
- pred_coefs_mean = pred_coefs.mean(axis=0)
- pred_coefs_std = pred_coefs.std(axis=0)
-
- spear_coefs = []
- for j in range(pred_coefs.shape[0]):
- spear_coef = []
- for k in range(pred_coefs.shape[2]):
- spear_drag = sc.stats.spearmanr(true_coefs[:, 0], pred_coefs[j, :, k, 0])[0]
- spear_lift = sc.stats.spearmanr(true_coefs[:, 1], pred_coefs[j, :, k, 1])[0]
- spear_coef.append([spear_drag, spear_lift])
- spear_coefs.append(spear_coef)
- spear_coefs = np.array(spear_coefs)
-
- with open(osp.join(path_out, 'score.json'), 'w') as f:
- json.dump(
- {
- 'mean_time': times.mean(axis=0),
- 'std_time': times.std(axis=0),
- 'mean_score_vol': scores_vol.mean(axis=0),
- 'std_score_vol': scores_vol.std(axis=0),
- 'mean_score_surf': scores_surf.mean(axis=0),
- 'std_score_surf': scores_surf.std(axis=0),
- 'mean_rel_p': scores_p.mean(axis=0),
- 'std_rel_p': scores_p.std(axis=0),
- 'mean_rel_wss': scores_wss.mean(axis=0),
- 'std_rel_wss': scores_wss.std(axis=0),
- 'mean_score_force': scores_force.mean(axis=0),
- 'std_score_force': scores_force.std(axis=0),
- 'spearman_coef_mean': spear_coefs.mean(axis=0),
- 'spearman_coef_std': spear_coefs.std(axis=0)
- }, f, indent=4, cls=NumpyEncoder
- )
-
- surf_coefs = []
- true_surf_coefs = []
- bls = []
- true_bls = []
- for i in range(len(internals[0])):
- aero_name = test_dataset[idx[i]]
- true_internal = true_internals[i]
- true_airfoil = true_airfoils[i]
- surf_coef = []
- bl = []
- for j in range(len(internals[0][0])):
- internal_mean, airfoil_mean = Airfoil_mean([internals[k][i][j] for k in range(len(internals))],
- [airfoils[k][i][j] for k in range(len(airfoils))])
- internal_mean.save(osp.join(path_out, test_dataset[idx[i]] + '_' + str(j) + '.vtu'))
- surf_coef.append(np.array(metrics_NACA.surface_coefficients(airfoil_mean, aero_name)))
- b = []
- for x in x_bl:
- b.append(np.array(metrics_NACA.boundary_layer(airfoil_mean, internal_mean, aero_name, x)))
- bl.append(np.array(b))
- true_surf_coefs.append(np.array(metrics_NACA.surface_coefficients(true_airfoil, aero_name)))
- true_bl = []
- for x in x_bl:
- true_bl.append(np.array(metrics_NACA.boundary_layer(true_airfoil, true_internal, aero_name, x)))
- true_bls.append(np.array(true_bl))
- surf_coefs.append(np.array(surf_coef))
- bls.append(np.array(bl))
-
- true_bls = np.array(true_bls)
- bls = np.array(bls)
-
- return true_coefs, pred_coefs_mean, pred_coefs_std, true_surf_coefs, surf_coefs, true_bls, bls
diff --git a/Airfoil-Design-AirfRANS/utils/metrics_NACA.py b/Airfoil-Design-AirfRANS/utils/metrics_NACA.py
deleted file mode 100644
index 3507d74..0000000
--- a/Airfoil-Design-AirfRANS/utils/metrics_NACA.py
+++ /dev/null
@@ -1,221 +0,0 @@
-import numpy as np
-import matplotlib.pyplot as plt
-import seaborn as sns
-from utils.naca_generator import camber_line
-
-sns.set()
-
-# Properties of air at sea level and 293.15K
-RHO = 1.184
-NU = 1.56e-5
-C = 346.1
-P_ref = 1.013e5
-
-
-def surface_coefficients(airfoil, aero_name, compressible=False, extrado=False):
- u_inf = float(aero_name.split('_')[2])
- digits = list(map(float, aero_name.split('_')[4:-1]))
- if compressible:
- qInf = 0.5 * u_inf ** 2 * RHO
- else:
- qInf = 0.5 * u_inf ** 2
-
- if extrado:
- camber = camber_line(digits, airfoil.points[:, 0])[0]
- idx_extrado = (airfoil.points[:, 1] > camber)
- points = airfoil.points[:, 0]
- pressure = airfoil.point_data['p']
- wss = np.linalg.norm(airfoil.point_data['wallShearStress'][:, :2], axis=1)
-
- c_p = np.concatenate([points[:, None], pressure[:, None] / qInf], axis=1)
- c_l = np.concatenate([points[:, None], wss[:, None] / qInf], axis=1)
-
- if extrado:
- return c_p, c_l, idx_extrado
- else:
- return c_p, c_l
-
-
-def compare_surface_coefs(coefs1, coefs2, extrado=True, path=None):
- ycp1, ycp2, c_p1, c_p2 = coefs1[0][:, 0], coefs2[0][:, 0], coefs1[0][:, 1], coefs2[0][:, 1]
- ycl1, ycl2, c_f1, c_f2 = coefs1[1][:, 0], coefs2[1][:, 0], coefs1[1][:, 1], coefs2[1][:, 1]
-
- fig, ax = plt.subplots(2, figsize=(20, 10))
- if extrado:
- n_extrado1, n_extrado2 = coefs1[2], coefs2[2]
- ax[0].scatter(ycp1[:n_extrado1], c_p1[:n_extrado1], label='Extrado 1')
- ax[0].scatter(ycp1[n_extrado1:], c_p1[n_extrado1:], color='r', marker='x', label='Intrado 1')
- ax[0].scatter(ycp2[:n_extrado2], c_p2[:n_extrado2], color='y', label='Extrado Target')
- ax[0].scatter(ycp2[n_extrado2:], c_p2[n_extrado2:], color='g', marker='x', label='Intrado Target')
-
- ax[1].scatter(ycl1[:n_extrado1], c_f1[:n_extrado1], label='Extrado 1')
- ax[1].scatter(ycl1[n_extrado1:], c_f1[n_extrado1:], color='r', marker='x', label='Intrado 1')
- ax[1].scatter(ycl2[:n_extrado2], c_f2[:n_extrado2], color='y', label='Extrado Target')
- ax[1].scatter(ycl2[n_extrado2:], c_f2[n_extrado2:], color='g', marker='x', label='Intrado Target')
-
- else:
- ax[0].scatter(ycp1, c_p1, label='Experiment 1')
- ax[0].scatter(ycp2, c_p2, color='y', label='Experiment Target')
-
- ax[1].scatter(ycl1, c_f1, label='Experiment 1')
-        ax[1].scatter(ycl2, c_f2, color='y', label='Experiment Target')
-
- ax[0].invert_yaxis()
- ax[0].set_xlabel('x/c')
- ax[1].set_xlabel('x/c')
- ax[0].set_ylabel(r'$C_p$')
- ax[1].set_ylabel(r'$C_f$')
- ax[0].set_title('Pressure coefficient')
- ax[1].set_title('Skin friction coefficient')
- ax[0].legend(loc='best')
- ax[1].legend(loc='best')
-
- if path != None:
- fig.savefig(path + 'surface_coefs.png', bbox_inches='tight', dpi=150)
-
-
-def boundary_layer(airfoil, internal, aero_name, x, y=1e-3, resolution=int(1e3), direction='normals', rotation=False,
- extrado=True):
- u_inf = float(aero_name.split('_')[2])
- digits = list(map(float, aero_name.split('_')[4:-1]))
- camber = camber_line(digits, airfoil.points[:, 0])[0]
- idx_extrado = (airfoil.points[:, 1] > camber)
-
- if extrado:
- arg = np.argmin(np.abs(airfoil.points[idx_extrado, 0] - x)) + 1
- arg = np.argwhere(idx_extrado.cumsum() == arg).min()
- else:
- arg = np.argmin(np.abs(airfoil.points[~idx_extrado, 0] - x)) + 1
- arg = np.argwhere((~idx_extrado).cumsum() == arg).min()
-
- if direction == 'normals':
- normals = -airfoil.point_data['Normals'][arg]
-
- elif direction == 'y':
- normals = np.array([0, 2 * int(extrado) - 1, 0])
-
- a, b = airfoil.points[arg], airfoil.points[arg] + y * normals
- bl = internal.sample_over_line(a, b, resolution=resolution)
-
- if rotation:
- rot = np.array([[0, 1, 0], [-1, 0, 0], [0, 0, 1]])
- u = (bl.point_data['U'] * (rot @ normals)).sum(axis=1)
- v = (bl.point_data['U'] * normals).sum(axis=1)
- else:
- u = bl.point_data['U'][:, 0]
- v = bl.point_data['U'][:, 1]
-
- nut = bl.point_data['nut']
- yc = bl.points[:, 1] - a[1]
-
- return yc, u / u_inf, v / u_inf, nut / NU
-
-
-def compare_boundary_layer(coefs1, coefs2, ylim=.1, path=None, ylog=False):
- yc1, u1, v1, nut1 = coefs1
- yc2, u2, v2, nut2 = coefs2
-
- fig, ax = plt.subplots(1, 3, figsize=(30, 10))
- ax[0].scatter(u1, yc1, label='Experiment 1')
- ax[0].scatter(u2, yc2, label='Experiment 2', color='r', marker='x')
- ax[0].set_xlabel(r'$u/U_\infty$')
- ax[0].set_ylabel(r'$(y-y_0)/c$')
- # ax[0].set_xlim([-0.2, 1.4])
- # ax[0].set_ylim([0, ylim])
- ax[0].legend(loc='best')
-
- ax[1].scatter(v1, yc1, label='Experiment 1')
- ax[1].scatter(v2, yc2, label='Experiment 2', color='r', marker='x')
- ax[1].set_xlabel(r'$v/U_\infty$')
- ax[1].set_ylabel(r'$(y-y_0)/c$')
- # ax[1].set_xlim([-0.2, 0.2])
- # ax[1].set_ylim([0, ylim])
- ax[1].legend(loc='best')
-
-    ax[2].scatter(nut1, yc1, label='Experiment 1')
-    ax[2].scatter(nut2, yc2, label='Experiment 2', color='r', marker='x')
- # ax[2].set_ylim([0, ylim])
- ax[2].set_xlabel(r'$\nu_t/\nu$')
- ax[2].set_ylabel(r'$(y-y_0)/c$')
- ax[2].legend(loc='best')
-
- if ylog:
- ax[0].set_yscale('log')
- ax[1].set_yscale('log')
- ax[2].set_yscale('log')
-
- if path != None:
- fig.savefig(path + 'boundary_layer.png', bbox_inches='tight', dpi=150)
-
-
-def plot_residuals(path, params):
- datas = dict()
- if params['turbulence'] == 'SA':
- fields = ['Ux', 'Uy', 'p', 'nuTilda']
- elif params['turbulence'] == 'SST':
- fields = ['Ux', 'Uy', 'p', 'k', 'omega']
- for field in fields:
- data = np.loadtxt(path + 'logs/' + field + '_0')[:, 1]
- datas[field] = data
-
- if params['turbulence'] == 'SA':
- fig, ax = plt.subplots(2, 2, figsize=(20, 20))
- ax[1, 1].plot(datas['nuTilda'])
- ax[1, 1].set_yscale('log')
- ax[1, 1].set_title('nuTilda residual')
- ax[1, 1].set_xlabel('Number of iterations')
-
- elif params['turbulence'] == 'SST':
- fig, ax = plt.subplots(3, 2, figsize=(30, 20))
- ax[1, 1].plot(datas['k'])
- ax[1, 1].set_yscale('log')
- ax[1, 1].set_title('k residual')
- ax[1, 1].set_xlabel('Number of iterations')
-
- ax[2, 0].plot(datas['omega'])
- ax[2, 0].set_yscale('log')
- ax[2, 0].set_title('omega residual')
-        ax[2, 0].set_xlabel('Number of iterations')
-
- ax[0, 0].plot(datas['Ux'])
- ax[0, 0].set_yscale('log')
- ax[0, 0].set_title('Ux residual')
-
- ax[0, 1].plot(datas['Uy'])
- ax[0, 1].set_yscale('log')
- ax[0, 1].set_title('Uy residual')
-
- ax[1, 0].plot(datas['p'])
- ax[1, 0].set_yscale('log')
- ax[1, 0].set_title('p residual')
-    ax[1, 0].set_xlabel('Number of iterations')
-
- fig.savefig(path + 'residuals.png', bbox_inches='tight', dpi=150)
-
- return datas
-
-
-def plot_coef_convergence(path, params):
- datas = dict()
- datas['c_d'] = np.loadtxt(path + 'postProcessing/forceCoeffs1/0/coefficient.dat')[:, 1]
- datas['c_l'] = np.loadtxt(path + 'postProcessing/forceCoeffs1/0/coefficient.dat')[:, 3]
- c_d, c_l = datas['c_d'][-1], datas['c_l'][-1]
-
- fig, ax = plt.subplots(2, figsize=(30, 15))
- ax[0].plot(datas['c_d'])
- ax[0].set_ylim([.5 * c_d, 1.5 * c_d])
- ax[0].set_title('Drag coefficient')
- ax[0].set_xlabel('Number of iterations')
- ax[0].set_ylabel(r'$C_D$')
-
- ax[1].plot(datas['c_l'])
- ax[1].set_title('Lift coefficient')
- ax[1].set_ylim([.5 * c_l, 1.5 * c_l])
- ax[1].set_ylabel(r'$C_L$')
-    ax[1].set_xlabel('Number of iterations')
-
- print('Drag coefficient: {0:.5}, lift coefficient: {1:.5}'.format(c_d, c_l))
-
- fig.savefig(path + 'coef_convergence.png', bbox_inches='tight', dpi=150)
-
- return datas, c_d, c_l
diff --git a/Airfoil-Design-AirfRANS/utils/naca_generator.py b/Airfoil-Design-AirfRANS/utils/naca_generator.py
deleted file mode 100644
index 0f98c1f..0000000
--- a/Airfoil-Design-AirfRANS/utils/naca_generator.py
+++ /dev/null
@@ -1,112 +0,0 @@
-import numpy as np
-
-
-    # CTE: closed trailing edge
- # CTE for close trailing edge
- if CTE:
- a = -0.1036
- else:
- a = -0.1015
- return 5 * t * (0.2969 * np.sqrt(x) - 0.1260 * x - 0.3516 * x ** 2 + 0.2843 * x ** 3 + a * x ** 4)
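-
-# Note (added commentary, not in the original file): this is the standard NACA
-# 4/5-digit half-thickness distribution
-#   y_t(x) = 5 * t * (0.2969 * sqrt(x) - 0.1260 * x - 0.3516 * x**2 + 0.2843 * x**3 + a * x**4)
-# with t the thickness-to-chord ratio; a = -0.1036 closes the trailing edge
-# (y_t(1) = 0), while the tabulated a = -0.1015 leaves it slightly open.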
-
-
-def camber_line(params, x):
- # assert np.all(np.logical_and(x >= -1e-1, x <= 1)), 'Found x > 1 or x < 0'
- y_c = np.zeros_like(x)
- dy_c = np.zeros_like(x)
-
- if len(params) == 2:
- m = params[0] / 100
- p = params[1] / 10
-
- if p == 0:
- dy_c = -2 * m * x
- return y_c, dy_c
- elif p == 1:
- dy_c = 2 * m * (1 - x)
- return y_c, dy_c
-
- mask1 = (x < p)
- mask2 = (x >= p)
- y_c[mask1] = (m / p ** 2) * (2 * p * x[mask1] - x[mask1] ** 2)
- dy_c[mask1] = (2 * m / p ** 2) * (p - x[mask1])
- y_c[mask2] = (m / (1 - p) ** 2) * ((1 - 2 * p) + 2 * p * x[mask2] - x[mask2] ** 2)
- dy_c[mask2] = (2 * m / (1 - p) ** 2) * (p - x[mask2])
-
- elif len(params) == 3:
- l, p, q = params
- c_l, x_f = 3 / 20 * l, p / 20
-
- f = lambda x: x * (1 - np.sqrt(x / 3)) - x_f
- df = lambda x: 1 - 3 * np.sqrt(x / 3) / 2
- old_m = 0.5
- cond = True
- while cond:
- new_m = np.max([old_m - f(old_m) / df(old_m), 0])
- cond = (np.abs(old_m - new_m) > 1e-15)
- old_m = new_m
- m = old_m
- r = (3 * m - 7 * m ** 2 + 8 * m ** 3 - 4 * m ** 4) / np.sqrt(m * (1 - m)) - 3 / 2 * (1 - 2 * m) * (
- np.pi / 2 - np.arcsin(1 - 2 * m))
- k_1 = c_l / r
-
- mask1 = (x <= m)
- mask2 = (x > m)
- if q == 0:
- y_c[mask1] = k_1 * ((x[mask1] ** 3 - 3 * m * x[mask1] ** 2 + m ** 2 * (3 - m) * x[mask1]))
- dy_c[mask1] = k_1 * (3 * x[mask1] ** 2 - 6 * m * x[mask1] + m ** 2 * (3 - m))
- y_c[mask2] = k_1 * m ** 3 * (1 - x[mask2])
- dy_c[mask2] = -k_1 * m ** 3 * np.ones_like(dy_c[mask2])
-
- elif q == 1:
- k = (3 * (m - x_f) ** 2 - m ** 3) / (1 - m) ** 3
- y_c[mask1] = k_1 * ((x[mask1] - m) ** 3 - k * (1 - m) ** 3 * x[mask1] - m ** 3 * x[mask1] + m ** 3)
- dy_c[mask1] = k_1 * (3 * (x[mask1] - m) ** 2 - k * (1 - m) ** 3 - m ** 3)
- y_c[mask2] = k_1 * (k * (x[mask2] - m) ** 3 - k * (1 - m) ** 3 * x[mask2] - m ** 3 * x[mask2] + m ** 3)
- dy_c[mask2] = k_1 * (3 * k * (x[mask2] - m) ** 2 - k * (1 - m) ** 3 - m ** 3)
-
- else:
- raise ValueError('Q must be 0 for normal camber or 1 for reflex camber.')
-
- else:
- raise ValueError('The first input must be a tuple of the 2 or 3 digits that represent the camber line.')
-
- return y_c, dy_c
-
-
-def naca_generator(params, nb_samples=400, scale=1, origin=(0, 0), cosine_spacing=True, verbose=True, CTE=True):
- if len(params) == 3:
- params_c = params[:2]
- t = params[2] / 100
- if verbose:
-            print(f'Generating NACA M = {params_c[0]}, P = {params_c[1]}, XX = {t * 100}')
- elif len(params) == 4:
- params_c = params[:3]
- t = params[3] / 100
- if verbose:
-            print(f'Generating NACA L = {params_c[0]}, P = {params_c[1]}, Q = {params_c[2]}, XX = {t * 100}')
- else:
-        raise ValueError('The first argument must be a tuple encoding the 4 or 5 digits of the airfoil (length 3 or 4).')
-
- if cosine_spacing:
- beta = np.pi * np.linspace(1, 0, nb_samples + 1, endpoint=True)
- x = (1 - np.cos(beta)) / 2
- else:
- x = np.linspace(1, 0, nb_samples + 1, endpoint=True)
-
- y_c, dy_c = camber_line(params_c, x)
- y_t = thickness_dist(t, x, CTE)
- theta = np.arctan(dy_c)
- x_u = x - y_t * np.sin(theta)
- x_l = x + y_t * np.sin(theta)
- y_u = y_c + y_t * np.cos(theta)
- y_l = y_c - y_t * np.cos(theta)
- x = np.concatenate([x_u, x_l[:-1][::-1]], axis=0)
- y = np.concatenate([y_u, y_l[:-1][::-1]], axis=0)
- pos = np.stack([
- x * scale + origin[0],
- y * scale + origin[1]
- ], axis=-1
- )
-    pos[0], pos[-1] = np.array([1, 0]), np.array([1, 0])  # pin both trailing-edge points to (1, 0) to close the contour
- return pos
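-
-
-if __name__ == '__main__':
-    # Minimal usage sketch (illustrative): a NACA 2412 is encoded as the tuple
-    # (2, 4, 12) -> M = 2, P = 4, XX = 12, i.e. 12% thickness.
-    coords = naca_generator((2, 4, 12), nb_samples=200, verbose=True)
-    print(coords.shape)  # (2 * nb_samples + 1, 2): upper surface then lower surface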
diff --git a/Airfoil-Design-AirfRANS/utils/reorganize.py b/Airfoil-Design-AirfRANS/utils/reorganize.py
deleted file mode 100644
index d099028..0000000
--- a/Airfoil-Design-AirfRANS/utils/reorganize.py
+++ /dev/null
@@ -1,14 +0,0 @@
-import numpy as np
-
-def reorganize(in_order_points, out_order_points, quantity_to_reorder):
-    # For every point in out_order_points, find the index of the identical point
-    # in in_order_points, then reorder the associated quantity with those indices.
-    n = out_order_points.shape[0]
-    idx = np.zeros(n, dtype=int)
-    for i in range(n):
-        cond = (out_order_points[i] == in_order_points)
-        cond = cond[:, 0] & cond[:, 1]
-        idx[i] = np.argwhere(cond)[0][0]
-
-    assert (in_order_points[idx] == out_order_points).all()
-
-    return quantity_to_reorder[idx]
\ No newline at end of file
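-
-
-def reorganize_vectorized(in_order_points, out_order_points, quantity_to_reorder):
-    # Untested sketch of an equivalent vectorized version: build the full
-    # (n_out, n_in) match matrix in one shot (O(n^2) memory) instead of looping.
-    matches = (out_order_points[:, None, :] == in_order_points[None, :, :]).all(-1)
-    idx = matches.argmax(axis=1)  # index of the first matching input point
-    assert (in_order_points[idx] == out_order_points).all()
-    return quantity_to_reorder[idx]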
diff --git a/PDE-Solving-StandardBenchmark/README.md b/PDE-Solving-StandardBenchmark/README.md
deleted file mode 100644
index 544abb5..0000000
--- a/PDE-Solving-StandardBenchmark/README.md
+++ /dev/null
@@ -1,98 +0,0 @@
-# Transolver for PDE Solving
-
-We evaluate [Transolver](https://arxiv.org/abs/2402.02366) on six widely used PDE-solving benchmarks, which are provided by [FNO and GeoFNO](https://github.com/neuraloperator/neuraloperator).
-
-**Transolver achieves a 22% averaged relative improvement over the previous second-best model, with favorable efficiency and scalability.**
-
-
-
-
-Table 1. Comparison in six standard benchmarks. Relative L2 is recorded.
-
-
-
-## Get Started
-
-1. Install Python 3.8, then install the dependencies with the following command.
-
-```bash
-pip install -r requirements.txt
-```
-
-2. Prepare data. You can obtain the experimental datasets from the following links.
-
-
-| Dataset | Task | Geometry | Link |
-| ------------- | --------------------------------------- | --------------- | ------------------------------------------------------------ |
-| Elasticity | Estimate material inner stress | Point Cloud | [[Google Cloud]](https://drive.google.com/drive/folders/1YBuaoTdOSr_qzaow-G-iwvbUI7fiUzu8) |
-| Plasticity | Estimate material deformation over time | Structured Mesh | [[Google Cloud]](https://drive.google.com/drive/folders/1YBuaoTdOSr_qzaow-G-iwvbUI7fiUzu8) |
-| Navier-Stokes | Predict future fluid velocity | Regular Grid | [[Google Cloud]](https://drive.google.com/drive/folders/1UnbQh2WWc6knEHbLn-ZaXrKUZhp7pjt-) |
-| Darcy | Estimate fluid pressure through medium | Regular Grid | [[Google Cloud]](https://drive.google.com/drive/folders/1UnbQh2WWc6knEHbLn-ZaXrKUZhp7pjt-) |
-| Airfoil       | Estimate airflow velocity around airfoil | Structured Mesh | [[Google Cloud]](https://drive.google.com/drive/folders/1YBuaoTdOSr_qzaow-G-iwvbUI7fiUzu8) |
-| Pipe | Estimate fluid velocity in a pipe | Structured Mesh | [[Google Cloud]](https://drive.google.com/drive/folders/1YBuaoTdOSr_qzaow-G-iwvbUI7fiUzu8) |
-
-3. Train and evaluate the model. We provide the experiment scripts of all benchmarks under the folder `./scripts/`. You can reproduce the experiment results with the following commands:
-
-```bash
-bash scripts/Transolver_Elas.sh # for Elasticity
-bash scripts/Transolver_Plas.sh # for Plasticity
-bash scripts/Transolver_NS.sh # for Navier-Stokes
-bash scripts/Transolver_Darcy.sh # for Darcy
-bash scripts/Transolver_Airfoil.sh # for Airfoil
-bash scripts/Transolver_Pipe.sh # for Pipe
-```
-
- Note: You need to change the argument `--data_path` to your dataset path.
-
-4. Develop your own model. Here are the instructions (a minimal template is sketched after this list):
-
- - Add the model file under folder `./models/`.
- - Add the model name into `./model_dict.py`.
- - Add a script file under folder `./scripts/` and change the argument `--model`.
-
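-A minimal sketch of a new model file (all names here are hypothetical; the real constructor must accept the arguments the `exp_*.py` scripts pass, such as `space_dim`, `n_layers`, `n_hidden`, `fun_dim`, and `out_dim`):
-
-```python
-# ./models/MyModel.py -- hypothetical skeleton, not part of the repo
-import torch
-import torch.nn as nn
-
-
-class Model(nn.Module):
-    def __init__(self, space_dim=2, n_layers=3, n_hidden=64, fun_dim=0, out_dim=1, **kwargs):
-        super().__init__()
-        layers = [nn.Linear(space_dim + fun_dim, n_hidden), nn.GELU()]
-        for _ in range(n_layers):
-            layers += [nn.Linear(n_hidden, n_hidden), nn.GELU()]
-        layers.append(nn.Linear(n_hidden, out_dim))
-        self.net = nn.Sequential(*layers)
-
-    def forward(self, x, fx=None, **kwargs):
-        # x: (B, N, space_dim) coordinates; fx: optional (B, N, fun_dim) input function
-        inp = x if fx is None else torch.cat([x, fx], dim=-1)
-        return self.net(inp)
-```
-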
-## Visualization
-
-Transolver handles PDEs on various geometries well, such as predicting future fluid states and estimating the [[shock wave]](https://en.wikipedia.org/wiki/Shock_wave) around an airfoil.
-
-
-
-
-Figure 1. Case study of different models.
-
-
-## PDE Solving at Scale
-
-To align with previous models, we only experiment with the 8-layer Transolver in the main text. You can easily obtain better performance by **scaling up Transolver**: the relative L2 generally decreases as more layers are added.
-
-
-
-
-Figure 2. Scaling up Transolver: relative L2 curve w.r.t. model layers.
-
-
-## Citation
-
-If you find this repo useful, please cite our paper.
-
-```bibtex
-@inproceedings{wu2024Transolver,
- title={Transolver: A Fast Transformer Solver for PDEs on General Geometries},
- author={Haixu Wu and Huakun Luo and Haowen Wang and Jianmin Wang and Mingsheng Long},
- booktitle={International Conference on Machine Learning},
- year={2024}
-}
-```
-
-## Contact
-
-If you have any questions or want to use the code, please contact [wuhx23@mails.tsinghua.edu.cn](mailto:wuhx23@mails.tsinghua.edu.cn).
-
-## Acknowledgement
-
-We appreciate the following GitHub repositories for their valuable code bases and datasets:
-
-https://github.com/neuraloperator/neuraloperator
-
-https://github.com/neuraloperator/Geo-FNO
-
-https://github.com/thuml/Latent-Spectral-Models
diff --git a/PDE-Solving-StandardBenchmark/exp_airfoil.py b/PDE-Solving-StandardBenchmark/exp_airfoil.py
deleted file mode 100644
index f3128ee..0000000
--- a/PDE-Solving-StandardBenchmark/exp_airfoil.py
+++ /dev/null
@@ -1,230 +0,0 @@
-import os
-import argparse
-import matplotlib.pyplot as plt
-import numpy as np
-import torch
-from tqdm import *
-from utils.testloss import TestLoss
-from model_dict import get_model
-
-parser = argparse.ArgumentParser('Training Transformer')
-
-parser.add_argument('--lr', type=float, default=1e-3)
-parser.add_argument('--epochs', type=int, default=500)
-parser.add_argument('--weight_decay', type=float, default=1e-5)
-parser.add_argument('--model', type=str, default='Transolver_2D')
-parser.add_argument('--n-hidden', type=int, default=64, help='hidden dim')
-parser.add_argument('--n-layers', type=int, default=3, help='layers')
-parser.add_argument('--n-heads', type=int, default=4)
-parser.add_argument('--batch-size', type=int, default=8)
-parser.add_argument("--gpu", type=str, default='0', help="GPU index to use")
-parser.add_argument('--max_grad_norm', type=float, default=None)
-parser.add_argument('--downsamplex', type=int, default=1)
-parser.add_argument('--downsampley', type=int, default=1)
-parser.add_argument('--mlp_ratio', type=int, default=1)
-parser.add_argument('--dropout', type=float, default=0.0)
-parser.add_argument('--unified_pos', type=int, default=0)
-parser.add_argument('--ref', type=int, default=8)
-parser.add_argument('--slice_num', type=int, default=32)
-parser.add_argument('--eval', type=int, default=0)
-parser.add_argument('--save_name', type=str, default='airfoil_Transolver')
-parser.add_argument('--data_path', type=str, default='/data/fno/airfoil/naca')
-args = parser.parse_args()
-eval = args.eval
-save_name = args.save_name
-
-os.environ["CUDA_VISIBLE_DEVICES"] = args.gpu
-
-
-def count_parameters(model):
- total_params = 0
- for name, parameter in model.named_parameters():
- if not parameter.requires_grad: continue
- params = parameter.numel()
- total_params += params
- print(f"Total Trainable Params: {total_params}")
- return total_params
-
-
-def main():
- INPUT_X = args.data_path + '/NACA_Cylinder_X.npy'
- INPUT_Y = args.data_path + '/NACA_Cylinder_Y.npy'
- OUTPUT_Sigma = args.data_path + '/NACA_Cylinder_Q.npy'
-
- ntrain = 1000
- ntest = 200
-
- r1 = args.downsamplex
- r2 = args.downsampley
- s1 = int(((221 - 1) / r1) + 1)
- s2 = int(((51 - 1) / r2) + 1)
-
- inputX = np.load(INPUT_X)
- inputX = torch.tensor(inputX, dtype=torch.float)
- inputY = np.load(INPUT_Y)
- inputY = torch.tensor(inputY, dtype=torch.float)
- input = torch.stack([inputX, inputY], dim=-1)
-
- output = np.load(OUTPUT_Sigma)[:, 4]
- output = torch.tensor(output, dtype=torch.float)
- print(input.shape, output.shape)
-
- x_train = input[:ntrain, ::r1, ::r2][:, :s1, :s2]
- y_train = output[:ntrain, ::r1, ::r2][:, :s1, :s2]
- x_test = input[ntrain:ntrain + ntest, ::r1, ::r2][:, :s1, :s2]
- y_test = output[ntrain:ntrain + ntest, ::r1, ::r2][:, :s1, :s2]
- x_train = x_train.reshape(ntrain, -1, 2)
- x_test = x_test.reshape(ntest, -1, 2)
- y_train = y_train.reshape(ntrain, -1)
- y_test = y_test.reshape(ntest, -1)
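-    # Shape bookkeeping: the raw arrays are (N, 221, 51) structured meshes around
-    # the airfoil; after downsampling, x_* holds (n, s1 * s2, 2) flattened mesh
-    # coordinates and y_* holds (n, s1 * s2) values of the selected flow channel.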
-
- train_loader = torch.utils.data.DataLoader(torch.utils.data.TensorDataset(x_train, x_train, y_train),
- batch_size=args.batch_size,
- shuffle=True)
- test_loader = torch.utils.data.DataLoader(torch.utils.data.TensorDataset(x_test, x_test, y_test),
- batch_size=args.batch_size,
- shuffle=False)
-
- print("Dataloading is over.")
-
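-    # fun_dim=0: the model conditions only on the mesh coordinates; there is no
-    # separate input-function channel for this task, hence model(x, None) below.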
- model = get_model(args).Model(space_dim=2,
- n_layers=args.n_layers,
- n_hidden=args.n_hidden,
- dropout=args.dropout,
- n_head=args.n_heads,
- Time_Input=False,
- mlp_ratio=args.mlp_ratio,
- fun_dim=0,
- out_dim=1,
- slice_num=args.slice_num,
- ref=args.ref,
- unified_pos=args.unified_pos,
- H=s1, W=s2).cuda()
-
- optimizer = torch.optim.AdamW(model.parameters(), lr=args.lr, weight_decay=args.weight_decay)
- print(args)
- print(model)
- count_parameters(model)
-
- scheduler = torch.optim.lr_scheduler.OneCycleLR(optimizer, max_lr=args.lr, epochs=args.epochs,
- steps_per_epoch=len(train_loader))
- myloss = TestLoss(size_average=False)
-
- if eval:
- model.load_state_dict(torch.load("./checkpoints/" + save_name + ".pt"))
- model.eval()
- if not os.path.exists('./results/' + save_name + '/'):
- os.makedirs('./results/' + save_name + '/')
-
- rel_err = 0.0
- showcase = 10
- id = 0
-
- with torch.no_grad():
- for pos, fx, y in test_loader:
- id += 1
- x, fx, y = pos.cuda(), fx.cuda(), y.cuda()
- out = model(x, None).squeeze(-1)
-
- tl = myloss(out, y).item()
- rel_err += tl
- if id < showcase:
- print(id)
- plt.axis('off')
- plt.pcolormesh(x[0, :, 0].reshape(221, 51)[40:180, :35].detach().cpu().numpy(),
- x[0, :, 1].reshape(221, 51)[40:180, :35].detach().cpu().numpy(),
- np.zeros([140, 35]),
- shading='auto',
- edgecolors='black', linewidths=0.1)
- plt.colorbar()
- plt.savefig(
- os.path.join('./results/' + save_name + '/',
- "input_" + str(id) + ".pdf"), bbox_inches='tight', pad_inches=0)
- plt.close()
- plt.axis('off')
- plt.pcolormesh(x[0, :, 0].reshape(221, 51)[40:180, :35].detach().cpu().numpy(),
- x[0, :, 1].reshape(221, 51)[40:180, :35].detach().cpu().numpy(),
- out[0, :].reshape(221, 51)[40:180, :35].detach().cpu().numpy(),
- shading='auto', cmap='coolwarm')
- plt.colorbar()
- plt.clim(0, 1.2)
- plt.savefig(
- os.path.join('./results/' + save_name + '/',
- "pred_" + str(id) + ".pdf"), bbox_inches='tight', pad_inches=0)
- plt.close()
- plt.axis('off')
- plt.pcolormesh(x[0, :, 0].reshape(221, 51)[40:180, :35].detach().cpu().numpy(),
- x[0, :, 1].reshape(221, 51)[40:180, :35].detach().cpu().numpy(),
- y[0, :].reshape(221, 51)[40:180, :35].detach().cpu().numpy(),
- shading='auto', cmap='coolwarm')
- plt.colorbar()
- plt.clim(0, 1.2)
- plt.savefig(
- os.path.join('./results/' + save_name + '/',
- "gt_" + str(id) + ".pdf"), bbox_inches='tight', pad_inches=0)
- plt.close()
- plt.axis('off')
- plt.pcolormesh(x[0, :, 0].reshape(221, 51)[40:180, :35].detach().cpu().numpy(),
- x[0, :, 1].reshape(221, 51)[40:180, :35].detach().cpu().numpy(),
- out[0, :].reshape(221, 51)[40:180, :35].detach().cpu().numpy() - \
- y[0, :].reshape(221, 51)[40:180, :35].detach().cpu().numpy(),
- shading='auto', cmap='coolwarm')
- plt.colorbar()
- plt.clim(-0.2, 0.2)
- plt.savefig(
- os.path.join('./results/' + save_name + '/',
- "error_" + str(id) + ".pdf"), bbox_inches='tight', pad_inches=0)
- plt.close()
-
- rel_err /= ntest
- print("rel_err:{}".format(rel_err))
- else:
- for ep in range(args.epochs):
-
- model.train()
- train_loss = 0
-
- for pos, fx, y in train_loader:
-
- x, fx, y = pos.cuda(), fx.cuda(), y.cuda() # x:B,N,2 fx:B,N,2 y:B,N
- optimizer.zero_grad()
- out = model(x, None).squeeze(-1)
- loss = myloss(out, y)
- loss.backward()
-
- if args.max_grad_norm is not None:
- torch.nn.utils.clip_grad_norm_(model.parameters(), args.max_grad_norm)
- optimizer.step()
- train_loss += loss.item()
- scheduler.step()
-
- train_loss = train_loss / ntrain
- print("Epoch {} Train loss : {:.5f}".format(ep, train_loss))
-
- model.eval()
- rel_err = 0.0
- with torch.no_grad():
- for pos, fx, y in test_loader:
- x, fx, y = pos.cuda(), fx.cuda(), y.cuda()
- out = model(x, None).squeeze(-1)
-
- tl = myloss(out, y).item()
- rel_err += tl
-
- rel_err /= ntest
- print("rel_err:{}".format(rel_err))
-
- if ep % 100 == 0:
- if not os.path.exists('./checkpoints'):
- os.makedirs('./checkpoints')
- print('save model')
- torch.save(model.state_dict(), os.path.join('./checkpoints', save_name + '.pt'))
-
- if not os.path.exists('./checkpoints'):
- os.makedirs('./checkpoints')
- print('save model')
- torch.save(model.state_dict(), os.path.join('./checkpoints', save_name + '.pt'))
-
-
-if __name__ == "__main__":
- main()
diff --git a/PDE-Solving-StandardBenchmark/exp_darcy.py b/PDE-Solving-StandardBenchmark/exp_darcy.py
deleted file mode 100644
index d58730a..0000000
--- a/PDE-Solving-StandardBenchmark/exp_darcy.py
+++ /dev/null
@@ -1,272 +0,0 @@
-import os
-import argparse
-import numpy as np
-import scipy.io as scio
-import torch
-import torch.nn.functional as F
-from tqdm import *
-from utils.testloss import TestLoss
-from einops import rearrange
-from model_dict import get_model
-from utils.normalizer import UnitTransformer
-import matplotlib.pyplot as plt
-
-parser = argparse.ArgumentParser('Training Transolver')
-
-parser.add_argument('--lr', type=float, default=1e-3)
-parser.add_argument('--epochs', type=int, default=500)
-parser.add_argument('--weight_decay', type=float, default=1e-5)
-parser.add_argument('--model', type=str, default='Transolver_2D')
-parser.add_argument('--n-hidden', type=int, default=64, help='hidden dim')
-parser.add_argument('--n-layers', type=int, default=3, help='layers')
-parser.add_argument('--n-heads', type=int, default=4)
-parser.add_argument('--batch-size', type=int, default=8)
-parser.add_argument("--gpu", type=str, default='1', help="GPU index to use")
-parser.add_argument('--max_grad_norm', type=float, default=None)
-parser.add_argument('--downsample', type=int, default=5)
-parser.add_argument('--mlp_ratio', type=int, default=1)
-parser.add_argument('--dropout', type=float, default=0.0)
-parser.add_argument('--ntrain', type=int, default=1000)
-parser.add_argument('--unified_pos', type=int, default=0)
-parser.add_argument('--ref', type=int, default=8)
-parser.add_argument('--slice_num', type=int, default=32)
-parser.add_argument('--eval', type=int, default=0)
-parser.add_argument('--save_name', type=str, default='darcy_Transolver')
-parser.add_argument('--data_path', type=str, default='/data/fno')
-args = parser.parse_args()
-
-os.environ["CUDA_VISIBLE_DEVICES"] = args.gpu
-
-train_path = args.data_path + '/piececonst_r421_N1024_smooth1.mat'
-test_path = args.data_path + '/piececonst_r421_N1024_smooth2.mat'
-ntrain = args.ntrain
-ntest = 200
-epochs = args.epochs
-eval = args.eval
-save_name = args.save_name
-
-
-def count_parameters(model):
- total_params = 0
- for name, parameter in model.named_parameters():
- if not parameter.requires_grad: continue
- params = parameter.numel()
- total_params += params
- print(f"Total Trainable Params: {total_params}")
- return total_params
-
-
-def central_diff(x: torch.Tensor, h, resolution):
-    # x: (batch, n, feats); h is the grid spacing, with n = resolution ** 2.
-    # Note: the padding below is constant (zero), not periodic, so derivatives
-    # on the boundary are only approximate.
-    x = rearrange(x, 'b (h w) c -> b h w c', h=resolution, w=resolution)
-    x = F.pad(x,
-              (0, 0, 1, 1, 1, 1), mode='constant', value=0.)  # -> [b, h+2, w+2, c]
- grad_x = (x[:, 1:-1, 2:, :] - x[:, 1:-1, :-2, :]) / (2 * h) # f(x+h) - f(x-h) / 2h
- grad_y = (x[:, 2:, 1:-1, :] - x[:, :-2, 1:-1, :]) / (2 * h) # f(x+h) - f(x-h) / 2h
-
- return grad_x, grad_y
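-
-
-def _central_diff_sanity_check():
-    # Illustrative, untested sketch: on the linear field f(x, y) = x sampled on a
-    # uniform s-by-s grid, central_diff should return df/dx close to 1 away from
-    # the zero-padded border.
-    s = 16
-    xs = torch.linspace(0, 1, s)
-    f = xs.repeat(s, 1).reshape(1, s * s, 1)  # (batch=1, n=s*s, feats=1)
-    gx, _ = central_diff(f, h=1.0 / (s - 1), resolution=s)
-    print(gx[0, 1:-1, 1:-1, 0].mean())  # expected: ~1.0 in the interior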
-
-
-def main():
- r = args.downsample
- h = int(((421 - 1) / r) + 1)
- s = h
- dx = 1.0 / s
-
- train_data = scio.loadmat(train_path)
- x_train = train_data['coeff'][:ntrain, ::r, ::r][:, :s, :s]
- x_train = x_train.reshape(ntrain, -1)
- x_train = torch.from_numpy(x_train).float()
- y_train = train_data['sol'][:ntrain, ::r, ::r][:, :s, :s]
- y_train = y_train.reshape(ntrain, -1)
- y_train = torch.from_numpy(y_train)
-
- test_data = scio.loadmat(test_path)
- x_test = test_data['coeff'][:ntest, ::r, ::r][:, :s, :s]
- x_test = x_test.reshape(ntest, -1)
- x_test = torch.from_numpy(x_test).float()
- y_test = test_data['sol'][:ntest, ::r, ::r][:, :s, :s]
- y_test = y_test.reshape(ntest, -1)
- y_test = torch.from_numpy(y_test)
-
- x_normalizer = UnitTransformer(x_train)
- y_normalizer = UnitTransformer(y_train)
-
- x_train = x_normalizer.encode(x_train)
- x_test = x_normalizer.encode(x_test)
- y_train = y_normalizer.encode(y_train)
-
- x_normalizer.cuda()
- y_normalizer.cuda()
-
- x = np.linspace(0, 1, s)
- y = np.linspace(0, 1, s)
- x, y = np.meshgrid(x, y)
- pos = np.c_[x.ravel(), y.ravel()]
- pos = torch.tensor(pos, dtype=torch.float).unsqueeze(0)
-
- pos_train = pos.repeat(ntrain, 1, 1)
- pos_test = pos.repeat(ntest, 1, 1)
- print("Dataloading is over.")
-
- train_loader = torch.utils.data.DataLoader(torch.utils.data.TensorDataset(pos_train, x_train, y_train),
- batch_size=args.batch_size, shuffle=True)
- test_loader = torch.utils.data.DataLoader(torch.utils.data.TensorDataset(pos_test, x_test, y_test),
- batch_size=args.batch_size, shuffle=False)
-
- model = get_model(args).Model(space_dim=2,
- n_layers=args.n_layers,
- n_hidden=args.n_hidden,
- dropout=args.dropout,
- n_head=args.n_heads,
- Time_Input=False,
- mlp_ratio=args.mlp_ratio,
- fun_dim=1,
- out_dim=1,
- slice_num=args.slice_num,
- ref=args.ref,
- unified_pos=args.unified_pos,
- H=s, W=s).cuda()
-
- optimizer = torch.optim.AdamW(model.parameters(), lr=args.lr, weight_decay=args.weight_decay)
-
- print(args)
- print(model)
- count_parameters(model)
-
- scheduler = torch.optim.lr_scheduler.OneCycleLR(optimizer, max_lr=args.lr, epochs=epochs,
- steps_per_epoch=len(train_loader))
- myloss = TestLoss(size_average=False)
- de_x = TestLoss(size_average=False)
- de_y = TestLoss(size_average=False)
-
- if eval:
- print("model evaluation")
- print(s, s)
- model.load_state_dict(torch.load("./checkpoints/" + save_name + ".pt"), strict=False)
- model.eval()
- showcase = 10
- id = 0
- if not os.path.exists('./results/' + save_name + '/'):
- os.makedirs('./results/' + save_name + '/')
-
- with torch.no_grad():
- rel_err = 0.0
- with torch.no_grad():
- for x, fx, y in test_loader:
- id += 1
- x, fx, y = x.cuda(), fx.cuda(), y.cuda()
- out = model(x, fx=fx.unsqueeze(-1)).squeeze(-1)
- out = y_normalizer.decode(out)
- tl = myloss(out, y).item()
-
- rel_err += tl
-
- if id < showcase:
- print(id)
- plt.figure()
- plt.axis('off')
- plt.imshow(out[0, :].reshape(85, 85).detach().cpu().numpy(), cmap='coolwarm')
- plt.colorbar()
- plt.savefig(
- os.path.join('./results/' + save_name + '/',
- "case_" + str(id) + "_pred.pdf"))
- plt.close()
- # ============ #
- plt.figure()
- plt.axis('off')
- plt.imshow(y[0, :].reshape(85, 85).detach().cpu().numpy(), cmap='coolwarm')
- plt.colorbar()
- plt.savefig(
- os.path.join('./results/' + save_name + '/', "case_" + str(id) + "_gt.pdf"))
- plt.close()
- # ============ #
- plt.figure()
- plt.axis('off')
- plt.imshow((y[0, :] - out[0, :]).reshape(85, 85).detach().cpu().numpy(), cmap='coolwarm')
- plt.colorbar()
- plt.clim(-0.0005, 0.0005)
- plt.savefig(
- os.path.join('./results/' + save_name + '/', "case_" + str(id) + "_error.pdf"))
- plt.close()
- # ============ #
- plt.figure()
- plt.axis('off')
- plt.imshow((fx[0, :].unsqueeze(-1)).reshape(85, 85).detach().cpu().numpy(), cmap='coolwarm')
- plt.colorbar()
- plt.savefig(
- os.path.join('./results/' + save_name + '/', "case_" + str(id) + "_input.pdf"))
- plt.close()
-
- rel_err /= ntest
- print("rel_err:{}".format(rel_err))
- else:
- for ep in range(args.epochs):
- model.train()
- train_loss = 0
- reg = 0
- for x, fx, y in train_loader:
- x, fx, y = x.cuda(), fx.cuda(), y.cuda()
- optimizer.zero_grad()
-
-                out = model(x, fx=fx.unsqueeze(-1)).squeeze(-1)  # x: B, N, 2  fx: B, N  out/y: B, N
- out = y_normalizer.decode(out)
- y = y_normalizer.decode(y)
-
- l2loss = myloss(out, y)
-
- out = rearrange(out.unsqueeze(-1), 'b (h w) c -> b c h w', h=s)
- out = out[..., 1:-1, 1:-1].contiguous()
- out = F.pad(out, (1, 1, 1, 1), "constant", 0)
- out = rearrange(out, 'b c h w -> b (h w) c')
- gt_grad_x, gt_grad_y = central_diff(y.unsqueeze(-1), dx, s)
- pred_grad_x, pred_grad_y = central_diff(out, dx, s)
- deriv_loss = de_x(pred_grad_x, gt_grad_x) + de_y(pred_grad_y, gt_grad_y)
- loss = 0.1 * deriv_loss + l2loss
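-                # The derivative term above acts as a Sobolev-type regularizer: it
-                # matches the predicted solution's first derivatives to the ground
-                # truth's, with a fixed 0.1 weight against the relative-L2 loss.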
- loss.backward()
-
- if args.max_grad_norm is not None:
- torch.nn.utils.clip_grad_norm_(model.parameters(), args.max_grad_norm)
- optimizer.step()
- train_loss += l2loss.item()
- reg += deriv_loss.item()
- scheduler.step()
-
- train_loss /= ntrain
- reg /= ntrain
- print("Epoch {} Reg : {:.5f} Train loss : {:.5f}".format(ep, reg, train_loss))
-
- model.eval()
- rel_err = 0.0
-            with torch.no_grad():
-                for x, fx, y in test_loader:
-                    x, fx, y = x.cuda(), fx.cuda(), y.cuda()
- out = model(x, fx=fx.unsqueeze(-1)).squeeze(-1)
- out = y_normalizer.decode(out)
- tl = myloss(out, y).item()
- rel_err += tl
-
- rel_err /= ntest
- print("rel_err:{}".format(rel_err))
-
- if ep % 100 == 0:
- if not os.path.exists('./checkpoints'):
- os.makedirs('./checkpoints')
- print('save model')
- torch.save(model.state_dict(), os.path.join('./checkpoints', save_name + '.pt'))
-
- if not os.path.exists('./checkpoints'):
- os.makedirs('./checkpoints')
- print('save model')
- torch.save(model.state_dict(), os.path.join('./checkpoints', save_name + '.pt'))
-
-
-if __name__ == "__main__":
- main()
diff --git a/PDE-Solving-StandardBenchmark/exp_elas.py b/PDE-Solving-StandardBenchmark/exp_elas.py
deleted file mode 100644
index 3c15957..0000000
--- a/PDE-Solving-StandardBenchmark/exp_elas.py
+++ /dev/null
@@ -1,208 +0,0 @@
-import os
-import argparse
-import matplotlib.pyplot as plt
-import numpy as np
-import torch
-from tqdm import *
-from utils.testloss import TestLoss
-from model_dict import get_model
-from utils.normalizer import UnitTransformer
-
-parser = argparse.ArgumentParser('Training Transformer')
-
-parser.add_argument('--lr', type=float, default=1e-3)
-parser.add_argument('--epochs', type=int, default=500)
-parser.add_argument('--weight_decay', type=float, default=1e-5)
-parser.add_argument('--model', type=str, default='Transolver_1D')
-parser.add_argument('--n-hidden', type=int, default=64, help='hidden dim')
-parser.add_argument('--n-layers', type=int, default=3, help='layers')
-parser.add_argument('--n-heads', type=int, default=4)
-parser.add_argument('--batch-size', type=int, default=8)
-parser.add_argument("--gpu", type=str, default='1', help="GPU index to use")
-parser.add_argument('--max_grad_norm', type=float, default=None)
-parser.add_argument('--downsample', type=int, default=5)
-parser.add_argument('--mlp_ratio', type=int, default=1)
-parser.add_argument('--dropout', type=float, default=0.0)
-parser.add_argument('--ntrain', type=int, default=1000)
-parser.add_argument('--unified_pos', type=int, default=0)
-parser.add_argument('--ref', type=int, default=8)
-parser.add_argument('--slice_num', type=int, default=32)
-parser.add_argument('--eval', type=int, default=0)
-parser.add_argument('--save_name', type=str, default='elas_Transolver')
-parser.add_argument('--data_path', type=str, default='/data/fno')
-args = parser.parse_args()
-eval = args.eval
-save_name = args.save_name
-
-os.environ["CUDA_VISIBLE_DEVICES"] = args.gpu
-
-
-def count_parameters(model):
- total_params = 0
- for name, parameter in model.named_parameters():
- if not parameter.requires_grad: continue
- params = parameter.numel()
- total_params += params
- print(f"Total Trainable Params: {total_params}")
- return total_params
-
-
-def main():
- ntrain = args.ntrain
- ntest = 200
-
- PATH_Sigma = args.data_path + '/elasticity/Meshes/Random_UnitCell_sigma_10.npy'
- PATH_XY = args.data_path + '/elasticity/Meshes/Random_UnitCell_XY_10.npy'
-
- input_s = np.load(PATH_Sigma)
- input_s = torch.tensor(input_s, dtype=torch.float).permute(1, 0)
- input_xy = np.load(PATH_XY)
- input_xy = torch.tensor(input_xy, dtype=torch.float).permute(2, 0, 1)
-
- train_s = input_s[:ntrain]
- test_s = input_s[-ntest:]
- train_xy = input_xy[:ntrain]
- test_xy = input_xy[-ntest:]
-
- print(input_s.shape, input_xy.shape)
-
- y_normalizer = UnitTransformer(train_s)
-
- train_s = y_normalizer.encode(train_s)
- y_normalizer.cuda()
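-    # Targets are encoded into normalized space for training; predictions are
-    # decoded with y_normalizer before losses and errors are computed below.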
-
- train_loader = torch.utils.data.DataLoader(torch.utils.data.TensorDataset(train_xy, train_xy, train_s),
- batch_size=args.batch_size,
- shuffle=True)
- test_loader = torch.utils.data.DataLoader(torch.utils.data.TensorDataset(test_xy, test_xy, test_s),
- batch_size=args.batch_size,
- shuffle=False)
-
- print("Dataloading is over.")
-
- model = get_model(args).Model(space_dim=2,
- n_layers=args.n_layers,
- n_hidden=args.n_hidden,
- dropout=args.dropout,
- n_head=args.n_heads,
- Time_Input=False,
- mlp_ratio=args.mlp_ratio,
- fun_dim=0,
- out_dim=1,
- slice_num=args.slice_num,
- ref=args.ref,
- unified_pos=args.unified_pos).cuda()
-
- optimizer = torch.optim.AdamW(model.parameters(), lr=args.lr, weight_decay=args.weight_decay)
-
- print(args)
- print(model)
- count_parameters(model)
-
- scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=args.epochs)
-
- myloss = TestLoss(size_average=False)
-
- if eval:
- model.load_state_dict(torch.load("./checkpoints/" + save_name + ".pt"))
- model.eval()
- if not os.path.exists('./results/' + save_name + '/'):
- os.makedirs('./results/' + save_name + '/')
- rel_err = 0.0
- showcase = 10
- id = 0
-
- with torch.no_grad():
- for pos, fx, y in test_loader:
- id += 1
- x, fx, y = pos.cuda(), fx.cuda(), y.cuda()
- out = model(x, None).squeeze(-1)
- out = y_normalizer.decode(out)
- tl = myloss(out, y).item()
- rel_err += tl
- if id < showcase:
- print(id)
- plt.axis('off')
- plt.scatter(x=fx[0, :, 0].detach().cpu().numpy(), y=fx[0, :, 1].detach().cpu().numpy(),
- c=y[0, :].detach().cpu().numpy(), cmap='coolwarm')
- plt.colorbar()
- plt.clim(0, 1000)
- plt.savefig(
- os.path.join('./results/' + save_name + '/',
- "gt_" + str(id) + ".pdf"), bbox_inches='tight', pad_inches=0)
- plt.close()
-
- plt.axis('off')
- plt.scatter(x=fx[0, :, 0].detach().cpu().numpy(), y=fx[0, :, 1].detach().cpu().numpy(),
- c=out[0, :].detach().cpu().numpy(), cmap='coolwarm')
- plt.colorbar()
- plt.clim(0, 1000)
- plt.savefig(
- os.path.join('./results/' + save_name + '/',
- "pred_" + str(id) + ".pdf"), bbox_inches='tight', pad_inches=0)
- plt.close()
-
- plt.axis('off')
- plt.scatter(x=fx[0, :, 0].detach().cpu().numpy(), y=fx[0, :, 1].detach().cpu().numpy(),
- c=((y[0, :] - out[0, :])).detach().cpu().numpy(), cmap='coolwarm')
- plt.clim(-8, 8)
- plt.colorbar()
- plt.savefig(
- os.path.join('./results/' + save_name + '/',
- "error_" + str(id) + ".pdf"), bbox_inches='tight', pad_inches=0)
- plt.close()
-
- rel_err /= ntest
- print("rel_err : {}".format(rel_err))
- else:
- for ep in range(args.epochs):
-
- model.train()
- train_loss = 0
-
- for pos, fx, y in train_loader:
-
- x, fx, y = pos.cuda(), fx.cuda(), y.cuda() # x:B,N,2 fx:B,N,2 y:B,N,
- optimizer.zero_grad()
- out = model(x, None).squeeze(-1)
- out = y_normalizer.decode(out)
- y = y_normalizer.decode(y)
- loss = myloss(out, y)
- loss.backward()
-
- if args.max_grad_norm is not None:
- torch.nn.utils.clip_grad_norm_(model.parameters(), args.max_grad_norm)
- optimizer.step()
- train_loss += loss.item()
- scheduler.step()
-
- train_loss = train_loss / ntrain
- print("Epoch {} Train loss : {:.5f}".format(ep, train_loss))
-
- model.eval()
- rel_err = 0.0
- with torch.no_grad():
- for pos, fx, y in test_loader:
- x, fx, y = pos.cuda(), fx.cuda(), y.cuda()
- out = model(x, None).squeeze(-1)
- out = y_normalizer.decode(out)
- tl = myloss(out, y).item()
- rel_err += tl
-
- rel_err /= ntest
- print("rel_err : {}".format(rel_err))
-
- if ep % 100 == 0:
- if not os.path.exists('./checkpoints'):
- os.makedirs('./checkpoints')
- print('save model')
- torch.save(model.state_dict(), os.path.join('./checkpoints', save_name + '.pt'))
-
- if not os.path.exists('./checkpoints'):
- os.makedirs('./checkpoints')
- print('save model')
- torch.save(model.state_dict(), os.path.join('./checkpoints', save_name + '.pt'))
-
-
-if __name__ == "__main__":
- main()
diff --git a/PDE-Solving-StandardBenchmark/exp_ns.py b/PDE-Solving-StandardBenchmark/exp_ns.py
deleted file mode 100644
index 79d5d3f..0000000
--- a/PDE-Solving-StandardBenchmark/exp_ns.py
+++ /dev/null
@@ -1,244 +0,0 @@
-import os
-import matplotlib.pyplot as plt
-import argparse
-import scipy.io as scio
-import numpy as np
-import torch
-from tqdm import *
-from utils.testloss import TestLoss
-from model_dict import get_model
-
-parser = argparse.ArgumentParser('Training Transformer')
-
-parser.add_argument('--lr', type=float, default=1e-3)
-parser.add_argument('--epochs', type=int, default=500)
-parser.add_argument('--weight_decay', type=float, default=1e-5)
-parser.add_argument('--model', type=str, default='Transolver_2D')
-parser.add_argument('--n-hidden', type=int, default=64, help='hidden dim')
-parser.add_argument('--n-layers', type=int, default=3, help='layers')
-parser.add_argument('--n-heads', type=int, default=4)
-parser.add_argument('--batch-size', type=int, default=8)
-parser.add_argument("--gpu", type=str, default='0', help="GPU index to use")
-parser.add_argument('--max_grad_norm', type=float, default=None)
-parser.add_argument('--downsample', type=int, default=1)
-parser.add_argument('--mlp_ratio', type=int, default=1)
-parser.add_argument('--dropout', type=float, default=0.0)
-parser.add_argument('--unified_pos', type=int, default=0)
-parser.add_argument('--ref', type=int, default=8)
-parser.add_argument('--slice_num', type=int, default=32)
-parser.add_argument('--eval', type=int, default=0)
-parser.add_argument('--save_name', type=str, default='ns_2d_UniPDE')
-parser.add_argument('--data_path', type=str, default='/data/fno')
-args = parser.parse_args()
-
-os.environ["CUDA_VISIBLE_DEVICES"] = args.gpu
-
-data_path = args.data_path + '/NavierStokes_V1e-5_N1200_T20/NavierStokes_V1e-5_N1200_T20.mat'
-# data_path = args.data_path + '/NavierStokes_V1e-5_N1200_T20.mat'
-ntrain = 1000
-ntest = 200
-T_in = 10
-T = 10
-step = 1
-eval = args.eval
-save_name = args.save_name
-
-
-def count_parameters(model):
- total_params = 0
- for name, parameter in model.named_parameters():
- if not parameter.requires_grad: continue
- params = parameter.numel()
- total_params += params
- print(f"Total Trainable Params: {total_params}")
- return total_params
-
-
-def main():
- r = args.downsample
- h = int(((64 - 1) / r) + 1)
-
- data = scio.loadmat(data_path)
- print(data['u'].shape)
- train_a = data['u'][:ntrain, ::r, ::r, :T_in][:, :h, :h, :]
- train_a = train_a.reshape(train_a.shape[0], -1, train_a.shape[-1])
- train_a = torch.from_numpy(train_a)
- train_u = data['u'][:ntrain, ::r, ::r, T_in:T + T_in][:, :h, :h, :]
- train_u = train_u.reshape(train_u.shape[0], -1, train_u.shape[-1])
- train_u = torch.from_numpy(train_u)
-
- test_a = data['u'][-ntest:, ::r, ::r, :T_in][:, :h, :h, :]
- test_a = test_a.reshape(test_a.shape[0], -1, test_a.shape[-1])
- test_a = torch.from_numpy(test_a)
- test_u = data['u'][-ntest:, ::r, ::r, T_in:T + T_in][:, :h, :h, :]
- test_u = test_u.reshape(test_u.shape[0], -1, test_u.shape[-1])
- test_u = torch.from_numpy(test_u)
-
- x = np.linspace(0, 1, h)
- y = np.linspace(0, 1, h)
- x, y = np.meshgrid(x, y)
- pos = np.c_[x.ravel(), y.ravel()]
- pos = torch.tensor(pos, dtype=torch.float).unsqueeze(0)
- pos_train = pos.repeat(ntrain, 1, 1)
- pos_test = pos.repeat(ntest, 1, 1)
-
- train_loader = torch.utils.data.DataLoader(torch.utils.data.TensorDataset(pos_train, train_a, train_u),
- batch_size=args.batch_size, shuffle=True)
- test_loader = torch.utils.data.DataLoader(torch.utils.data.TensorDataset(pos_test, test_a, test_u),
- batch_size=args.batch_size, shuffle=False)
-
- print("Dataloading is over.")
-
- model = get_model(args).Model(space_dim=2,
- n_layers=args.n_layers,
- n_hidden=args.n_hidden,
- dropout=args.dropout,
- n_head=args.n_heads,
- Time_Input=False,
- mlp_ratio=args.mlp_ratio,
- fun_dim=T_in,
- out_dim=1,
- slice_num=args.slice_num,
- ref=args.ref,
- unified_pos=args.unified_pos,
- H=h, W=h).cuda()
-
- optimizer = torch.optim.AdamW(model.parameters(), lr=args.lr, weight_decay=args.weight_decay)
-
- print(args)
- print(model)
- count_parameters(model)
-
- scheduler = torch.optim.lr_scheduler.OneCycleLR(optimizer, max_lr=args.lr, epochs=args.epochs,
- steps_per_epoch=len(train_loader))
- myloss = TestLoss(size_average=False)
-
- if eval:
- model.load_state_dict(torch.load("./checkpoints/" + save_name + ".pt"), strict=False)
- model.eval()
- showcase = 10
- id = 0
-
- if not os.path.exists('./results/' + save_name + '/'):
- os.makedirs('./results/' + save_name + '/')
-
- test_l2_full = 0
- with torch.no_grad():
- for x, fx, yy in test_loader:
- id += 1
-                x, fx, yy = x.cuda(), fx.cuda(), yy.cuda()  # x: B, 4096, 2  fx: B, 4096, T_in  yy: B, 4096, T
- bsz = x.shape[0]
- for t in range(0, T, step):
- im = model(x, fx=fx)
-
- fx = torch.cat((fx[..., step:], im), dim=-1)
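-                    # Autoregressive rollout: the model's own prediction replaces the
-                    # oldest input frame, so errors can accumulate over the T steps.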
- if t == 0:
- pred = im
- else:
- pred = torch.cat((pred, im), -1)
-
- if id < showcase:
- print(id)
- plt.figure()
- plt.axis('off')
- plt.imshow(im[0, :, 0].reshape(64, 64).detach().cpu().numpy(), cmap='coolwarm')
- plt.colorbar()
- plt.clim(-3, 3)
- plt.savefig(
- os.path.join('./results/' + save_name + '/',
- "case_" + str(id) + "_pred_" + str(20) + ".pdf"))
- plt.close()
- # ============ #
- plt.figure()
- plt.axis('off')
- plt.imshow(yy[0, :, t].reshape(64, 64).detach().cpu().numpy(), cmap='coolwarm')
- plt.colorbar()
- plt.clim(-3, 3)
- plt.savefig(
- os.path.join('./results/' + save_name + '/', "case_" + str(id) + "_gt_" + str(20) + ".pdf"))
- plt.close()
- # ============ #
- plt.figure()
- plt.axis('off')
- plt.imshow((im[0, :, 0].reshape(64, 64) - yy[0, :, t].reshape(64, 64)).detach().cpu().numpy(),
- cmap='coolwarm')
- plt.colorbar()
- plt.clim(-2, 2)
- plt.savefig(
- os.path.join('./results/' + save_name + '/', "case_" + str(id) + "_error_" + str(20) + ".pdf"))
- plt.close()
- test_l2_full += myloss(pred.reshape(bsz, -1), yy.reshape(bsz, -1)).item()
- print(test_l2_full / ntest)
- else:
- for ep in range(args.epochs):
-
- model.train()
- train_l2_step = 0
- train_l2_full = 0
-
- for x, fx, yy in train_loader:
- loss = 0
-                x, fx, yy = x.cuda(), fx.cuda(), yy.cuda()  # x: B, 4096, 2  fx: B, 4096, T_in  yy: B, 4096, T
- bsz = x.shape[0]
-
- for t in range(0, T, step):
- y = yy[..., t:t + step]
- im = model(x, fx=fx) # B , 4096 , 1
- loss += myloss(im.reshape(bsz, -1), y.reshape(bsz, -1))
- if t == 0:
- pred = im
- else:
- pred = torch.cat((pred, im), -1)
-                    fx = torch.cat((fx[..., step:], y), dim=-1)  # teacher forcing: feed the ground-truth frame, not the prediction
-
- train_l2_step += loss.item()
- train_l2_full += myloss(pred.reshape(bsz, -1), yy.reshape(bsz, -1)).item()
- optimizer.zero_grad()
- loss.backward()
- if args.max_grad_norm is not None:
- torch.nn.utils.clip_grad_norm_(model.parameters(), args.max_grad_norm)
- optimizer.step()
- scheduler.step()
-
- test_l2_step = 0
- test_l2_full = 0
-
- model.eval()
-
- with torch.no_grad():
- for x, fx, yy in test_loader:
- loss = 0
-                    x, fx, yy = x.cuda(), fx.cuda(), yy.cuda()  # x: B, 4096, 2  fx: B, 4096, T_in  yy: B, 4096, T
- bsz = x.shape[0]
- for t in range(0, T, step):
- y = yy[..., t:t + step]
- im = model(x, fx=fx)
- loss += myloss(im.reshape(bsz, -1), y.reshape(bsz, -1))
- if t == 0:
- pred = im
- else:
- pred = torch.cat((pred, im), -1)
- fx = torch.cat((fx[..., step:], im), dim=-1)
-
- test_l2_step += loss.item()
- test_l2_full += myloss(pred.reshape(bsz, -1), yy.reshape(bsz, -1)).item()
-
- print(
- "Epoch {} , train_step_loss:{:.5f} , train_full_loss:{:.5f} , test_step_loss:{:.5f} , test_full_loss:{:.5f}".format(
- ep, train_l2_step / ntrain / (T / step), train_l2_full / ntrain, test_l2_step / ntest / (T / step),
- test_l2_full / ntest))
-
- if ep % 100 == 0:
- if not os.path.exists('./checkpoints'):
- os.makedirs('./checkpoints')
- print('save model')
- torch.save(model.state_dict(), os.path.join('./checkpoints', save_name + '.pt'))
-
- if not os.path.exists('./checkpoints'):
- os.makedirs('./checkpoints')
- print('save model')
- torch.save(model.state_dict(), os.path.join('./checkpoints', save_name + '.pt'))
-
-
-if __name__ == "__main__":
- main()
diff --git a/PDE-Solving-StandardBenchmark/exp_pipe.py b/PDE-Solving-StandardBenchmark/exp_pipe.py
deleted file mode 100644
index d8fb81a..0000000
--- a/PDE-Solving-StandardBenchmark/exp_pipe.py
+++ /dev/null
@@ -1,253 +0,0 @@
-import os
-import argparse
-import matplotlib.pyplot as plt
-
-parser = argparse.ArgumentParser('Training Transformer')
-
-parser.add_argument('--lr', type=float, default=1e-3)
-parser.add_argument('--epochs', type=int, default=500)
-parser.add_argument('--weight_decay', type=float, default=1e-5)
-parser.add_argument('--model', type=str, default='Transolver_2D')
-parser.add_argument('--n-hidden', type=int, default=64, help='hidden dim')
-parser.add_argument('--n-layers', type=int, default=3, help='layers')
-parser.add_argument('--n-heads', type=int, default=4)
-parser.add_argument('--batch-size', type=int, default=8)
-parser.add_argument("--gpu", type=str, default='0', help="GPU index to use")
-parser.add_argument('--max_grad_norm', type=float, default=None)
-parser.add_argument('--downsamplex', type=int, default=1)
-parser.add_argument('--downsampley', type=int, default=1)
-parser.add_argument('--mlp_ratio', type=int, default=1)
-parser.add_argument('--dropout', type=float, default=0.0)
-parser.add_argument('--unified_pos', type=int, default=0)
-parser.add_argument('--ref', type=int, default=8)
-parser.add_argument('--slice_num', type=int, default=32)
-parser.add_argument('--eval', type=int, default=0)
-parser.add_argument('--save_name', type=str, default='pipe_UniPDE')
-parser.add_argument('--data_path', type=str, default='/data/fno/pipe')
-args = parser.parse_args()
-eval = args.eval
-save_name = args.save_name
-
-os.environ["CUDA_VISIBLE_DEVICES"] = args.gpu
-
-import numpy as np
-import torch
-from tqdm import *
-from utils.testloss import TestLoss
-from model_dict import get_model
-from utils.normalizer import UnitTransformer
-
-
-def count_parameters(model):
- total_params = 0
- for name, parameter in model.named_parameters():
- if not parameter.requires_grad: continue
- params = parameter.numel()
- total_params += params
- print(f"Total Trainable Params: {total_params}")
- return total_params
-
-
-def main():
- INPUT_X = args.data_path + '/Pipe_X.npy'
- INPUT_Y = args.data_path + '/Pipe_Y.npy'
- OUTPUT_Sigma = args.data_path + '/Pipe_Q.npy'
-
- ntrain = 1000
- ntest = 200
- N = 1200
-
- r1 = args.downsamplex
- r2 = args.downsampley
- s1 = int(((129 - 1) / r1) + 1)
- s2 = int(((129 - 1) / r2) + 1)
-
- inputX = np.load(INPUT_X)
- inputX = torch.tensor(inputX, dtype=torch.float)
- inputY = np.load(INPUT_Y)
- inputY = torch.tensor(inputY, dtype=torch.float)
- input = torch.stack([inputX, inputY], dim=-1)
-
- output = np.load(OUTPUT_Sigma)[:, 0]
- output = torch.tensor(output, dtype=torch.float)
- print(input.shape, output.shape)
- x_train = input[:N][:ntrain, ::r1, ::r2][:, :s1, :s2]
- y_train = output[:N][:ntrain, ::r1, ::r2][:, :s1, :s2]
- x_test = input[:N][-ntest:, ::r1, ::r2][:, :s1, :s2]
- y_test = output[:N][-ntest:, ::r1, ::r2][:, :s1, :s2]
- x_train = x_train.reshape(ntrain, -1, 2)
- x_test = x_test.reshape(ntest, -1, 2)
- y_train = y_train.reshape(ntrain, -1)
- y_test = y_test.reshape(ntest, -1)
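-    # The split is disjoint: with N = 1200 = ntrain + ntest, the first 1000
-    # geometries train the model and the last 200 evaluate it.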
-
- x_normalizer = UnitTransformer(x_train)
- y_normalizer = UnitTransformer(y_train)
-
- x_train = x_normalizer.encode(x_train)
- x_test = x_normalizer.encode(x_test)
- y_train = y_normalizer.encode(y_train)
-
- x_normalizer.cuda()
- y_normalizer.cuda()
-
- train_loader = torch.utils.data.DataLoader(torch.utils.data.TensorDataset(x_train, x_train, y_train),
- batch_size=args.batch_size,
- shuffle=True)
- test_loader = torch.utils.data.DataLoader(torch.utils.data.TensorDataset(x_test, x_test, y_test),
- batch_size=args.batch_size,
- shuffle=False)
-
- print("Dataloading is over.")
-
- model = get_model(args).Model(space_dim=2,
- n_layers=args.n_layers,
- n_hidden=args.n_hidden,
- dropout=args.dropout,
- n_head=args.n_heads,
- Time_Input=False,
- mlp_ratio=args.mlp_ratio,
- fun_dim=0,
- out_dim=1,
- slice_num=args.slice_num,
- ref=args.ref,
- unified_pos=args.unified_pos,
- H=s1, W=s2).cuda()
-
- optimizer = torch.optim.AdamW(model.parameters(), lr=args.lr, weight_decay=args.weight_decay)
-
- print(args)
- print(model)
- count_parameters(model)
-
- # scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=epochs)
- scheduler = torch.optim.lr_scheduler.OneCycleLR(optimizer, max_lr=args.lr, epochs=args.epochs,
- steps_per_epoch=len(train_loader))
- myloss = TestLoss(size_average=False)
-
- if eval:
- model.load_state_dict(torch.load("./checkpoints/" + save_name + ".pt"), strict=False)
- torch.save(model.state_dict(), os.path.join('./checkpoints', save_name + '_resave' + '.pt'))
- model.eval()
- if not os.path.exists('./results/' + save_name + '/'):
- os.makedirs('./results/' + save_name + '/')
-
- rel_err = 0.0
- showcase = 10
- id = 0
-
- with torch.no_grad():
- for pos, fx, y in test_loader:
- id += 1
- x, fx, y = pos.cuda(), fx.cuda(), y.cuda()
- out = model(x, None).squeeze(-1)
- out = y_normalizer.decode(out)
-
- tl = myloss(out, y).item()
- rel_err += tl
-
- if id < showcase:
- print(id)
- plt.axis('off')
- plt.pcolormesh(x[0, :, 0].reshape(129, 129).detach().cpu().numpy(),
- x[0, :, 1].reshape(129, 129).detach().cpu().numpy(),
- np.zeros([129, 129]),
- shading='auto',
- edgecolors='black', linewidths=0.1)
- plt.colorbar()
- plt.savefig(
- os.path.join('./results/' + save_name + '/',
- "input_" + str(id) + ".pdf"), bbox_inches='tight', pad_inches=0)
- plt.close()
- plt.axis('off')
- plt.pcolormesh(x[0, :, 0].reshape(129, 129).detach().cpu().numpy(),
- x[0, :, 1].reshape(129, 129).detach().cpu().numpy(),
- out[0, :].reshape(129, 129).detach().cpu().numpy(),
- shading='auto', cmap='coolwarm')
- plt.colorbar()
- plt.clim(0, 0.3)
- plt.savefig(
- os.path.join('./results/' + save_name + '/',
- "pred_" + str(id) + ".pdf"), bbox_inches='tight', pad_inches=0)
- plt.close()
- plt.axis('off')
- plt.pcolormesh(x[0, :, 0].reshape(129, 129).detach().cpu().numpy(),
- x[0, :, 1].reshape(129, 129).detach().cpu().numpy(),
- y[0, :].reshape(129, 129).detach().cpu().numpy(),
- shading='auto', cmap='coolwarm')
- plt.colorbar()
- plt.clim(0, 0.3)
- plt.savefig(
- os.path.join('./results/' + save_name + '/',
- "gt_" + str(id) + ".pdf"), bbox_inches='tight', pad_inches=0)
- plt.close()
- plt.axis('off')
- plt.pcolormesh(x[0, :, 0].reshape(129, 129).detach().cpu().numpy(),
- x[0, :, 1].reshape(129, 129).detach().cpu().numpy(),
- out[0, :].reshape(129, 129).detach().cpu().numpy() - \
- y[0, :].reshape(129, 129).detach().cpu().numpy(),
- shading='auto', cmap='coolwarm')
- plt.colorbar()
- plt.clim(-0.02, 0.02)
- plt.savefig(
- os.path.join('./results/' + save_name + '/',
- "error_" + str(id) + ".pdf"), bbox_inches='tight', pad_inches=0)
- plt.close()
-
- rel_err /= ntest
- print("rel_err:{}".format(rel_err))
- else:
- for ep in range(args.epochs):
-
- model.train()
- train_loss = 0
-
- for pos, fx, y in train_loader:
-
- x, fx, y = pos.cuda(), fx.cuda(), y.cuda() # x:B,N,2 fx:B,N,2 y:B,N
- optimizer.zero_grad()
- out = model(x, None).squeeze(-1)
-
- out = y_normalizer.decode(out)
- y = y_normalizer.decode(y)
-
- loss = myloss(out, y)
- loss.backward()
-
- # print("loss:{}".format(loss.item()/batch_size))
- if args.max_grad_norm is not None:
- torch.nn.utils.clip_grad_norm_(model.parameters(), args.max_grad_norm)
- optimizer.step()
- train_loss += loss.item()
- scheduler.step()
-
- train_loss = train_loss / ntrain
- print("Epoch {} Train loss : {:.5f}".format(ep, train_loss))
-
- model.eval()
- rel_err = 0.0
- with torch.no_grad():
- for pos, fx, y in test_loader:
- x, fx, y = pos.cuda(), fx.cuda(), y.cuda()
- out = model(x, None).squeeze(-1)
- out = y_normalizer.decode(out)
-
- tl = myloss(out, y).item()
- rel_err += tl
-
- rel_err /= ntest
- print("rel_err:{}".format(rel_err))
-
- if ep % 100 == 0:
- if not os.path.exists('./checkpoints'):
- os.makedirs('./checkpoints')
- print('save model')
- torch.save(model.state_dict(), os.path.join('./checkpoints', save_name + '.pt'))
-
- if not os.path.exists('./checkpoints'):
- os.makedirs('./checkpoints')
- print('save model')
- torch.save(model.state_dict(), os.path.join('./checkpoints', save_name + '.pt'))
-
-
-if __name__ == "__main__":
- main()
diff --git a/PDE-Solving-StandardBenchmark/exp_plas.py b/PDE-Solving-StandardBenchmark/exp_plas.py
deleted file mode 100644
index c5147a5..0000000
--- a/PDE-Solving-StandardBenchmark/exp_plas.py
+++ /dev/null
@@ -1,296 +0,0 @@
-import os
-import argparse
-import matplotlib.pyplot as plt
-
-parser = argparse.ArgumentParser('Training Transformer')
-
-parser.add_argument('--lr', type=float, default=1e-3)
-parser.add_argument('--epochs', type=int, default=500)
-parser.add_argument('--weight_decay', type=float, default=1e-5)
-parser.add_argument('--model', type=str, default='Transolver_2D')
-parser.add_argument('--n-hidden', type=int, default=64, help='hidden dim')
-parser.add_argument('--n-layers', type=int, default=3, help='layers')
-parser.add_argument('--n-heads', type=int, default=4)
-parser.add_argument('--batch-size', type=int, default=8)
-parser.add_argument("--gpu", type=str, default='0', help="GPU index to use")
-parser.add_argument('--max_grad_norm', type=float, default=None)
-parser.add_argument('--downsamplex', type=int, default=1)
-parser.add_argument('--downsampley', type=int, default=1)
-parser.add_argument('--mlp_ratio', type=int, default=1)
-parser.add_argument('--dropout', type=float, default=0.0)
-parser.add_argument('--unified_pos', type=int, default=0)
-parser.add_argument('--ref', type=int, default=8)
-parser.add_argument('--slice_num', type=int, default=32)
-parser.add_argument('--eval', type=int, default=0)
-parser.add_argument('--save_name', type=str, default='plas_Transolver')
-parser.add_argument('--data_path', type=str, default='/data/fno/plas_N987_T20.mat')
-args = parser.parse_args()
-eval = args.eval
-save_name = args.save_name
-os.environ["CUDA_VISIBLE_DEVICES"] = args.gpu
-
-import numpy as np
-import scipy.io as scio
-import torch
-from tqdm import *
-from utils.testloss import TestLoss
-from model_dict import get_model
-from utils.normalizer import UnitTransformer
-
-
-def count_parameters(model):
- total_params = 0
- for name, parameter in model.named_parameters():
- if not parameter.requires_grad: continue
- params = parameter.numel()
- total_params += params
- print(f"Total Trainable Params: {total_params}")
- return total_params
-
-
-def random_collate_fn(batch):
- shuffled_batch = []
- shuffled_u = None
- shuffled_t = None
- shuffled_a = None
- shuffled_pos = None
- for item in batch:
- pos = item[0]
- t = item[1]
- a = item[2]
- u = item[3]
-
- num_timesteps = t.size(0)
- permuted_indices = torch.randperm(num_timesteps)
-
- t = t[permuted_indices]
- u = u[..., permuted_indices]
-
- if shuffled_t is None:
- shuffled_pos = pos.unsqueeze(0)
- shuffled_t = t.unsqueeze(0)
- shuffled_u = u.unsqueeze(0)
- shuffled_a = a.unsqueeze(0)
- else:
- shuffled_pos = torch.cat((shuffled_pos, pos.unsqueeze(0)), 0)
- shuffled_t = torch.cat((shuffled_t, t.unsqueeze(0)), 0)
- shuffled_u = torch.cat((shuffled_u, u.unsqueeze(0)), 0)
- shuffled_a = torch.cat((shuffled_a, a.unsqueeze(0)), 0)
-
- shuffled_batch.append(shuffled_pos)
- shuffled_batch.append(shuffled_t)
- shuffled_batch.append(shuffled_a)
- shuffled_batch.append(shuffled_u)
-
- return shuffled_batch
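-
-
-def random_collate_fn_v2(batch):
-    # Untested, functionally equivalent sketch of the collate function above:
-    # permute each sample's timesteps, then stack once instead of growing
-    # tensors with repeated torch.cat calls.
-    pos, t, a, u = [], [], [], []
-    for item_pos, item_t, item_a, item_u in batch:
-        perm = torch.randperm(item_t.size(0))
-        pos.append(item_pos)
-        t.append(item_t[perm])
-        a.append(item_a)
-        u.append(item_u[..., perm])
-    return [torch.stack(pos), torch.stack(t), torch.stack(a), torch.stack(u)]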
-
-
-def main():
- DATA_PATH = args.data_path
-
- N = 987
- ntrain = 900
- ntest = 80
-
- s1 = 101
- s2 = 31
- T = 20
- Deformation = 4
-
- r1 = 1
- r2 = 1
- s1 = int(((s1 - 1) / r1) + 1)
- s2 = int(((s2 - 1) / r2) + 1)
-
- data = scio.loadmat(DATA_PATH)
- input = torch.tensor(data['input'], dtype=torch.float)
- output = torch.tensor(data['output'], dtype=torch.float).transpose(-2, -1)
- print(input.shape, output.shape)
- x_train = input[:ntrain, ::r1][:, :s1].reshape(ntrain, s1, 1).repeat(1, 1, s2)
- x_train = x_train.reshape(ntrain, -1, 1)
- y_train = output[:ntrain, ::r1, ::r2][:, :s1, :s2]
- y_train = y_train.reshape(ntrain, -1, Deformation, T)
- x_test = input[-ntest:, ::r1][:, :s1].reshape(ntest, s1, 1).repeat(1, 1, s2)
- x_test = x_test.reshape(ntest, -1, 1)
- y_test = output[-ntest:, ::r1, ::r2][:, :s1, :s2]
- y_test = y_test.reshape(ntest, -1, Deformation, T)
- print(x_train.shape, y_train.shape)
-
- x_normalizer = UnitTransformer(x_train)
- x_train = x_normalizer.encode(x_train)
- x_test = x_normalizer.encode(x_test)
- x_normalizer.cuda()
-
- x = np.linspace(0, 1, s1)
- y = np.linspace(0, 1, s2)
- x, y = np.meshgrid(x, y)
- pos = np.c_[x.ravel(), y.ravel()]
- pos = torch.tensor(pos, dtype=torch.float).unsqueeze(0)
-
- pos_train = pos.repeat(ntrain, 1, 1)
- pos_test = pos.repeat(ntest, 1, 1)
-
- t = np.linspace(0, 1, T)
- t = torch.tensor(t, dtype=torch.float).unsqueeze(0)
- t_train = t.repeat(ntrain, 1)
- t_test = t.repeat(ntest, 1)
-
- train_loader = torch.utils.data.DataLoader(torch.utils.data.TensorDataset(pos_train, t_train, x_train, y_train),
- batch_size=args.batch_size, shuffle=True, collate_fn=random_collate_fn)
- test_loader = torch.utils.data.DataLoader(torch.utils.data.TensorDataset(pos_test, t_test, x_test, y_test),
- batch_size=args.batch_size, shuffle=False)
-
- print("Dataloading is over.")
- model = get_model(args).Model(space_dim=2,
- n_hidden=args.n_hidden,
- n_layers=args.n_layers,
- Time_Input=True,
- n_head=args.n_heads,
- fun_dim=1,
- out_dim=Deformation,
- mlp_ratio=args.mlp_ratio,
- slice_num=args.slice_num,
- unified_pos=args.unified_pos,
- H=s1,
- W=s2).cuda()
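-    # Time_Input=True: the model is additionally conditioned on a scalar time and
-    # is queried once per timestep as model(x, fx, T=input_T) in the loops below.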
-
- optimizer = torch.optim.AdamW(model.parameters(), lr=args.lr, weight_decay=args.weight_decay)
-
- print(args)
- print(model)
- count_parameters(model)
-
- scheduler = torch.optim.lr_scheduler.OneCycleLR(optimizer, max_lr=args.lr, epochs=args.epochs,
- steps_per_epoch=len(train_loader))
- myloss = TestLoss(size_average=False)
-
- if eval:
- model.load_state_dict(torch.load("./checkpoints/" + save_name + ".pt"), strict=False)
- model.eval()
- if not os.path.exists('./results/' + save_name + '/'):
- os.makedirs('./results/' + save_name + '/')
- test_l2_step = 0
- test_l2_full = 0
- showcase = 10
- id = 0
- with torch.no_grad():
- for x, tim, fx, yy in test_loader:
- id += 1
- loss = 0
- x, fx, tim, yy = x.cuda(), fx.cuda(), tim.cuda(), yy.cuda()
- bsz = x.shape[0]
-
- for t in range(T):
- y = yy[..., t:t + 1]
- input_T = tim[:, t:t + 1].reshape(bsz, 1)
- im = model(x, fx, T=input_T)
- loss += myloss(im.reshape(bsz, -1), y.reshape(bsz, -1))
- if t == 0:
- pred = im.unsqueeze(-1)
- else:
- pred = torch.cat((pred, im.unsqueeze(-1)), -1)
-
- if id < showcase:
- print(id)
- truth = y[0].reshape(101, 31, 4).squeeze().detach().cpu().numpy()
- pred_vis = im[0].reshape(101, 31, 4).squeeze().detach().cpu().numpy()
- truth_du = np.linalg.norm(truth[:, :, 2:], axis=-1)
- pred_du = np.linalg.norm(pred_vis[:, :, 2:], axis=-1)
-
- plt.axis('off')
- plt.scatter(truth[:, :, 0], truth[:, :, 1], 10, truth_du[:, :], cmap='coolwarm')
- plt.colorbar()
- plt.clim(0, 6)
- plt.savefig(
- os.path.join('./results/' + save_name + '/',
- "gt_" + str(id) + ".pdf"), bbox_inches='tight', pad_inches=0)
- plt.close()
-
- plt.axis('off')
- plt.scatter(pred_vis[:, :, 0], pred_vis[:, :, 1], 10, pred_du[:, :], cmap='coolwarm')
- plt.colorbar()
- plt.clim(0, 6)
- plt.savefig(
- os.path.join('./results/' + save_name + '/',
- "pred_" + str(id) + ".pdf"), bbox_inches='tight', pad_inches=0)
- plt.close()
-
- plt.axis('off')
- plt.scatter(truth[:, :, 0], truth[:, :, 1], 10, pred_du[:, :] - truth_du[:, :], cmap='coolwarm')
- plt.colorbar()
- plt.clim(-0.2, 0.2)
- plt.savefig(
- os.path.join('./results/' + save_name + '/',
- "error_" + str(id) + ".pdf"), bbox_inches='tight', pad_inches=0)
- plt.close()
-
- test_l2_step += loss.item()
- test_l2_full += myloss(pred.reshape(bsz, -1), yy.reshape(bsz, -1)).item()
-
- print("test_step_loss:{:.5f} , test_full_loss:{:.5f}".format(test_l2_step / ntest / T, test_l2_full / ntest))
- else:
- for ep in range(args.epochs):
-
- model.train()
- train_l2_step = 0
-
- for x, tim, fx, yy in train_loader:
- x, fx, tim, yy = x.cuda(), fx.cuda(), tim.cuda(), yy.cuda()
- bsz = x.shape[0]
-
- for t in range(T):
- y = yy[..., t:t + 1]
- input_T = tim[:, t:t + 1].reshape(bsz, 1) # B,step
- im = model(x, fx, T=input_T)
-
- loss = myloss(im.reshape(bsz, -1), y.reshape(bsz, -1))
- train_l2_step += loss.item()
- optimizer.zero_grad()
- loss.backward()
- if args.max_grad_norm is not None:
- torch.nn.utils.clip_grad_norm_(model.parameters(), args.max_grad_norm)
- optimizer.step()
-
- scheduler.step()
-
- model.eval()
- test_l2_step = 0
- test_l2_full = 0
- with torch.no_grad():
- for x, tim, fx, yy in test_loader:
- loss = 0
- x, fx, tim, yy = x.cuda(), fx.cuda(), tim.cuda(), yy.cuda()
- bsz = x.shape[0]
-
- for t in range(T):
- y = yy[..., t:t + 1]
- input_T = tim[:, t:t + 1].reshape(bsz, 1)
- im = model(x, fx, T=input_T)
- loss += myloss(im.reshape(bsz, -1), y.reshape(bsz, -1))
- if t == 0:
- pred = im.unsqueeze(-1)
- else:
- pred = torch.cat((pred, im.unsqueeze(-1)), -1)
-
- test_l2_step += loss.item()
- test_l2_full += myloss(pred.reshape(bsz, -1), yy.reshape(bsz, -1)).item()
-
- print("Epoch {} , train_step_loss:{:.5f} , test_step_loss:{:.5f} , test_full_loss:{:.5f}".format(ep,
- train_l2_step / ntrain / T,
- test_l2_step / ntest / T,
- test_l2_full / ntest))
- if ep % 100 == 0:
- if not os.path.exists('./checkpoints'):
- os.makedirs('./checkpoints')
- print('save model')
- torch.save(model.state_dict(), os.path.join('./checkpoints', save_name + '.pt'))
-
- if not os.path.exists('./checkpoints'):
- os.makedirs('./checkpoints')
- print('save model')
- torch.save(model.state_dict(), os.path.join('./checkpoints', save_name + '.pt'))
-
-
-if __name__ == "__main__":
- main()
diff --git a/PDE-Solving-StandardBenchmark/fig/scalibility.png b/PDE-Solving-StandardBenchmark/fig/scalibility.png
deleted file mode 100644
index eaac142..0000000
Binary files a/PDE-Solving-StandardBenchmark/fig/scalibility.png and /dev/null differ
diff --git a/PDE-Solving-StandardBenchmark/fig/showcase.png b/PDE-Solving-StandardBenchmark/fig/showcase.png
deleted file mode 100644
index 574d101..0000000
Binary files a/PDE-Solving-StandardBenchmark/fig/showcase.png and /dev/null differ
diff --git a/PDE-Solving-StandardBenchmark/fig/standard_benchmark.png b/PDE-Solving-StandardBenchmark/fig/standard_benchmark.png
deleted file mode 100644
index 66e826c..0000000
Binary files a/PDE-Solving-StandardBenchmark/fig/standard_benchmark.png and /dev/null differ
diff --git a/PDE-Solving-StandardBenchmark/model/Embedding.py b/PDE-Solving-StandardBenchmark/model/Embedding.py
deleted file mode 100644
index 06d4c33..0000000
--- a/PDE-Solving-StandardBenchmark/model/Embedding.py
+++ /dev/null
@@ -1,85 +0,0 @@
-import math
-import torch
-import torch.nn as nn
-from einops import rearrange
-
-
-class RotaryEmbedding(nn.Module):
- def __init__(self, dim, min_freq=1 / 2, scale=1.):
- super().__init__()
- inv_freq = 1. / (10000 ** (torch.arange(0, dim, 2).float() / dim))
- self.min_freq = min_freq
- self.scale = scale
- self.register_buffer('inv_freq', inv_freq)
-
- def forward(self, coordinates, device):
- # coordinates [b, n]
- t = coordinates.to(device).type_as(self.inv_freq)
- t = t * (self.scale / self.min_freq)
- freqs = torch.einsum('... i , j -> ... i j', t, self.inv_freq) # [b, n, d//2]
- return torch.cat((freqs, freqs), dim=-1) # [b, n, d]
-
-
-def rotate_half(x):
- x = rearrange(x, '... (j d) -> ... j d', j=2)
- x1, x2 = x.unbind(dim=-2)
- return torch.cat((-x2, x1), dim=-1)
-
-
-def apply_rotary_pos_emb(t, freqs):
- return (t * freqs.cos()) + (rotate_half(t) * freqs.sin())
-
-
-def apply_2d_rotary_pos_emb(t, freqs_x, freqs_y):
- # split t into first half and second half
- # t: [b, h, n, d]
- # freq_x/y: [b, n, d]
- d = t.shape[-1]
- t_x, t_y = t[..., :d // 2], t[..., d // 2:]
-
- return torch.cat((apply_rotary_pos_emb(t_x, freqs_x),
- apply_rotary_pos_emb(t_y, freqs_y)), dim=-1)
-
-
-class PositionalEncoding(nn.Module):
- "Implement the PE function."
-
- def __init__(self, d_model, dropout, max_len=421 * 421):
- super(PositionalEncoding, self).__init__()
- self.dropout = nn.Dropout(p=dropout)
-
- # Compute the positional encodings once in log space.
- pe = torch.zeros(max_len, d_model)
- position = torch.arange(0, max_len).unsqueeze(1)
- div_term = torch.exp(
- torch.arange(0, d_model, 2) * -(math.log(10000.0) / d_model)
- )
- pe[:, 0::2] = torch.sin(position * div_term)
- pe[:, 1::2] = torch.cos(position * div_term)
- pe = pe.unsqueeze(0)
- self.register_buffer("pe", pe)
-
- def forward(self, x):
- x = x + self.pe[:, : x.size(1)].requires_grad_(False)
- return self.dropout(x)
-
-
-def timestep_embedding(timesteps, dim, max_period=10000, repeat_only=False):
- """
- Create sinusoidal timestep embeddings.
- :param timesteps: a 1-D Tensor of N indices, one per batch element.
- These may be fractional.
- :param dim: the dimension of the output.
- :param max_period: controls the minimum frequency of the embeddings.
- :return: an [N x dim] Tensor of positional embeddings.
- """
-
- half = dim // 2
- freqs = torch.exp(
- -math.log(max_period) * torch.arange(start=0, end=half, dtype=torch.float32) / half
- ).to(device=timesteps.device)
- args = timesteps[:, None].float() * freqs[None]
- embedding = torch.cat([torch.cos(args), torch.sin(args)], dim=-1)
- if dim % 2:
- embedding = torch.cat([embedding, torch.zeros_like(embedding[:, :1])], dim=-1)
- return embedding
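A short usage sketch for the helpers above, assuming the definitions in this file are in scope; the batch, head, and point counts are illustrative. Note that in the 2D rotary case each spatial axis receives half of the head dimension:

```python
import torch

b, h, n, d = 2, 8, 100, 64                     # batch, heads, mesh points, head dim
rot = RotaryEmbedding(d // 2)                  # half the head dim per spatial axis
freqs_x = rot(torch.rand(b, n), device='cpu')  # [b, n, d//2] from x coordinates
freqs_y = rot(torch.rand(b, n), device='cpu')  # [b, n, d//2] from y coordinates

q = torch.rand(b, h, n, d)
# add a head axis to the frequencies so they broadcast over h
q_rot = apply_2d_rotary_pos_emb(q, freqs_x[:, None], freqs_y[:, None])
print(q_rot.shape)                             # torch.Size([2, 8, 100, 64])

t_emb = timestep_embedding(torch.tensor([0.0, 0.5, 1.0]), dim=64)
print(t_emb.shape)                             # torch.Size([3, 64]); fractional steps allowed
```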
diff --git a/PDE-Solving-StandardBenchmark/model/Physics_Attention.py b/PDE-Solving-StandardBenchmark/model/Physics_Attention.py
deleted file mode 100644
index eb16ed4..0000000
--- a/PDE-Solving-StandardBenchmark/model/Physics_Attention.py
+++ /dev/null
@@ -1,175 +0,0 @@
-import torch.nn as nn
-import torch
-from einops import rearrange, repeat
-
-
-class Physics_Attention_Irregular_Mesh(nn.Module):
- ## for irregular meshes in 1D, 2D or 3D space
- def __init__(self, dim, heads=8, dim_head=64, dropout=0., slice_num=64):
- super().__init__()
- inner_dim = dim_head * heads
- self.dim_head = dim_head
- self.heads = heads
- self.scale = dim_head ** -0.5
- self.softmax = nn.Softmax(dim=-1)
- self.dropout = nn.Dropout(dropout)
- self.temperature = nn.Parameter(torch.ones([1, heads, 1, 1]) * 0.5)
-
- self.in_project_x = nn.Linear(dim, inner_dim)
- self.in_project_fx = nn.Linear(dim, inner_dim)
- self.in_project_slice = nn.Linear(dim_head, slice_num)
- for l in [self.in_project_slice]:
- torch.nn.init.orthogonal_(l.weight) # use a principled initialization
- self.to_q = nn.Linear(dim_head, dim_head, bias=False)
- self.to_k = nn.Linear(dim_head, dim_head, bias=False)
- self.to_v = nn.Linear(dim_head, dim_head, bias=False)
- self.to_out = nn.Sequential(
- nn.Linear(inner_dim, dim),
- nn.Dropout(dropout)
- )
-
- def forward(self, x):
- # B N C
- B, N, C = x.shape
-
- ### (1) Slice
- fx_mid = self.in_project_fx(x).reshape(B, N, self.heads, self.dim_head) \
- .permute(0, 2, 1, 3).contiguous() # B H N C
- x_mid = self.in_project_x(x).reshape(B, N, self.heads, self.dim_head) \
- .permute(0, 2, 1, 3).contiguous() # B H N C
- slice_weights = self.softmax(self.in_project_slice(x_mid) / self.temperature) # B H N G
- slice_norm = slice_weights.sum(2) # B H G
- slice_token = torch.einsum("bhnc,bhng->bhgc", fx_mid, slice_weights)
- slice_token = slice_token / ((slice_norm + 1e-5)[:, :, :, None].repeat(1, 1, 1, self.dim_head))
-
- ### (2) Attention among slice tokens
- q_slice_token = self.to_q(slice_token)
- k_slice_token = self.to_k(slice_token)
- v_slice_token = self.to_v(slice_token)
- dots = torch.matmul(q_slice_token, k_slice_token.transpose(-1, -2)) * self.scale
- attn = self.softmax(dots)
- attn = self.dropout(attn)
- out_slice_token = torch.matmul(attn, v_slice_token) # B H G D
-
- ### (3) Deslice
- out_x = torch.einsum("bhgc,bhng->bhnc", out_slice_token, slice_weights)
- out_x = rearrange(out_x, 'b h n d -> b n (h d)')
- return self.to_out(out_x)
-
-
-class Physics_Attention_Structured_Mesh_2D(nn.Module):
- ## for structured mesh in 2D space
- def __init__(self, dim, heads=8, dim_head=64, dropout=0., slice_num=64, H=101, W=31, kernel=3):
- super().__init__()
- inner_dim = dim_head * heads
- self.dim_head = dim_head
- self.heads = heads
- self.scale = dim_head ** -0.5
- self.softmax = nn.Softmax(dim=-1)
- self.dropout = nn.Dropout(dropout)
- self.temperature = nn.Parameter(torch.ones([1, heads, 1, 1]) * 0.5)
- self.H = H
- self.W = W
-
- self.in_project_x = nn.Conv2d(dim, inner_dim, kernel, 1, kernel // 2)
- self.in_project_fx = nn.Conv2d(dim, inner_dim, kernel, 1, kernel // 2)
- self.in_project_slice = nn.Linear(dim_head, slice_num)
- for l in [self.in_project_slice]:
- torch.nn.init.orthogonal_(l.weight) # use a principled initialization
- self.to_q = nn.Linear(dim_head, dim_head, bias=False)
- self.to_k = nn.Linear(dim_head, dim_head, bias=False)
- self.to_v = nn.Linear(dim_head, dim_head, bias=False)
-
- self.to_out = nn.Sequential(
- nn.Linear(inner_dim, dim),
- nn.Dropout(dropout)
- )
-
- def forward(self, x):
- # B N C
- B, N, C = x.shape
- x = x.reshape(B, self.H, self.W, C).contiguous().permute(0, 3, 1, 2).contiguous() # B C H W
-
- ### (1) Slice
- fx_mid = self.in_project_fx(x).permute(0, 2, 3, 1).contiguous().reshape(B, N, self.heads, self.dim_head) \
- .permute(0, 2, 1, 3).contiguous() # B H N C
- x_mid = self.in_project_x(x).permute(0, 2, 3, 1).contiguous().reshape(B, N, self.heads, self.dim_head) \
- .permute(0, 2, 1, 3).contiguous() # B H N C
- slice_weights = self.softmax(
- self.in_project_slice(x_mid) / torch.clamp(self.temperature, min=0.1, max=5)) # B H N G
- slice_norm = slice_weights.sum(2) # B H G
- slice_token = torch.einsum("bhnc,bhng->bhgc", fx_mid, slice_weights)
- slice_token = slice_token / ((slice_norm + 1e-5)[:, :, :, None].repeat(1, 1, 1, self.dim_head))
-
- ### (2) Attention among slice tokens
- q_slice_token = self.to_q(slice_token)
- k_slice_token = self.to_k(slice_token)
- v_slice_token = self.to_v(slice_token)
- dots = torch.matmul(q_slice_token, k_slice_token.transpose(-1, -2)) * self.scale
- attn = self.softmax(dots)
- attn = self.dropout(attn)
- out_slice_token = torch.matmul(attn, v_slice_token) # B H G D
-
- ### (3) Deslice
- out_x = torch.einsum("bhgc,bhng->bhnc", out_slice_token, slice_weights)
- out_x = rearrange(out_x, 'b h n d -> b n (h d)')
- return self.to_out(out_x)
-
-
-class Physics_Attention_Structured_Mesh_3D(nn.Module):
- ## for structured mesh in 3D space
- def __init__(self, dim, heads=8, dim_head=64, dropout=0., slice_num=32, H=32, W=32, D=32, kernel=3):
- super().__init__()
- inner_dim = dim_head * heads
- self.dim_head = dim_head
- self.heads = heads
- self.scale = dim_head ** -0.5
- self.softmax = nn.Softmax(dim=-1)
- self.dropout = nn.Dropout(dropout)
- self.temperature = nn.Parameter(torch.ones([1, heads, 1, 1]) * 0.5)
- self.H = H
- self.W = W
- self.D = D
-
- self.in_project_x = nn.Conv3d(dim, inner_dim, kernel, 1, kernel // 2)
- self.in_project_fx = nn.Conv3d(dim, inner_dim, kernel, 1, kernel // 2)
- self.in_project_slice = nn.Linear(dim_head, slice_num)
- for l in [self.in_project_slice]:
- torch.nn.init.orthogonal_(l.weight) # use a principled initialization
- self.to_q = nn.Linear(dim_head, dim_head, bias=False)
- self.to_k = nn.Linear(dim_head, dim_head, bias=False)
- self.to_v = nn.Linear(dim_head, dim_head, bias=False)
- self.to_out = nn.Sequential(
- nn.Linear(inner_dim, dim),
- nn.Dropout(dropout)
- )
-
- def forward(self, x):
- # B N C
- B, N, C = x.shape
- x = x.reshape(B, self.H, self.W, self.D, C).contiguous().permute(0, 4, 1, 2, 3).contiguous() # B C H W D
-
- ### (1) Slice
- fx_mid = self.in_project_fx(x).permute(0, 2, 3, 4, 1).contiguous().reshape(B, N, self.heads, self.dim_head) \
- .permute(0, 2, 1, 3).contiguous() # B H N C
- x_mid = self.in_project_x(x).permute(0, 2, 3, 4, 1).contiguous().reshape(B, N, self.heads, self.dim_head) \
- .permute(0, 2, 1, 3).contiguous() # B H N C
- slice_weights = self.softmax(
- self.in_project_slice(x_mid) / torch.clamp(self.temperature, min=0.1, max=5)) # B H N G
- slice_norm = slice_weights.sum(2) # B H G
- slice_token = torch.einsum("bhnc,bhng->bhgc", fx_mid, slice_weights)
- slice_token = slice_token / ((slice_norm + 1e-5)[:, :, :, None].repeat(1, 1, 1, self.dim_head))
-
- ### (2) Attention among slice tokens
- q_slice_token = self.to_q(slice_token)
- k_slice_token = self.to_k(slice_token)
- v_slice_token = self.to_v(slice_token)
- dots = torch.matmul(q_slice_token, k_slice_token.transpose(-1, -2)) * self.scale
- attn = self.softmax(dots)
- attn = self.dropout(attn)
- out_slice_token = torch.matmul(attn, v_slice_token) # B H G D
-
- ### (3) Deslice
- out_x = torch.einsum("bhgc,bhng->bhnc", out_slice_token, slice_weights)
- out_x = rearrange(out_x, 'b h n d -> b n (h d)')
- return self.to_out(out_x)
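To make the design concrete, a usage sketch for the irregular-mesh variant (class definitions above assumed in scope): the N mesh points are softly assigned to G = `slice_num` tokens, attention runs only among those G tokens, and the result is scattered back, so the cost grows linearly in N rather than quadratically:

```python
import torch

attn = Physics_Attention_Irregular_Mesh(dim=128, heads=8, dim_head=16, slice_num=64)
x = torch.rand(2, 3000, 128)   # B=2 samples, N=3000 unstructured mesh points, C=128
out = attn(x)                  # attention over 64 slice tokens, not 3000 points
print(out.shape)               # torch.Size([2, 3000, 128])
```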
diff --git a/PDE-Solving-StandardBenchmark/model/Transolver_Irregular_Mesh.py b/PDE-Solving-StandardBenchmark/model/Transolver_Irregular_Mesh.py
deleted file mode 100644
index e9484c9..0000000
--- a/PDE-Solving-StandardBenchmark/model/Transolver_Irregular_Mesh.py
+++ /dev/null
@@ -1,158 +0,0 @@
-import torch
-import torch.nn as nn
-from timm.models.layers import trunc_normal_
-from model.Embedding import timestep_embedding
-import numpy as np
-from model.Physics_Attention import Physics_Attention_Irregular_Mesh
-
-# every entry is a zero-argument factory (MLP calls act() below), so LeakyReLU is wrapped in a lambda
-ACTIVATION = {'gelu': nn.GELU, 'tanh': nn.Tanh, 'sigmoid': nn.Sigmoid, 'relu': nn.ReLU,
- 'leaky_relu': lambda: nn.LeakyReLU(0.1),
- 'softplus': nn.Softplus, 'ELU': nn.ELU, 'silu': nn.SiLU}
-
-
-class MLP(nn.Module):
- def __init__(self, n_input, n_hidden, n_output, n_layers=1, act='gelu', res=True):
- super(MLP, self).__init__()
-
- if act in ACTIVATION.keys():
- act = ACTIVATION[act]
- else:
- raise NotImplementedError
- self.n_input = n_input
- self.n_hidden = n_hidden
- self.n_output = n_output
- self.n_layers = n_layers
- self.res = res
- self.linear_pre = nn.Sequential(nn.Linear(n_input, n_hidden), act())
- self.linear_post = nn.Linear(n_hidden, n_output)
- self.linears = nn.ModuleList([nn.Sequential(nn.Linear(n_hidden, n_hidden), act()) for _ in range(n_layers)])
-
- def forward(self, x):
- x = self.linear_pre(x)
- for i in range(self.n_layers):
- if self.res:
- x = self.linears[i](x) + x
- else:
- x = self.linears[i](x)
- x = self.linear_post(x)
- return x
-
-
-class Transolver_block(nn.Module):
- """Transformer encoder block."""
-
- def __init__(
- self,
- num_heads: int,
- hidden_dim: int,
- dropout: float,
- act='gelu',
- mlp_ratio=4,
- last_layer=False,
- out_dim=1,
- slice_num=32,
- ):
- super().__init__()
- self.last_layer = last_layer
- self.ln_1 = nn.LayerNorm(hidden_dim)
- self.Attn = Physics_Attention_Irregular_Mesh(hidden_dim, heads=num_heads, dim_head=hidden_dim // num_heads,
- dropout=dropout, slice_num=slice_num)
- self.ln_2 = nn.LayerNorm(hidden_dim)
- self.mlp = MLP(hidden_dim, hidden_dim * mlp_ratio, hidden_dim, n_layers=0, res=False, act=act)
- if self.last_layer:
- self.ln_3 = nn.LayerNorm(hidden_dim)
- self.mlp2 = nn.Linear(hidden_dim, out_dim)
-
- def forward(self, fx):
- fx = self.Attn(self.ln_1(fx)) + fx
- fx = self.mlp(self.ln_2(fx)) + fx
- if self.last_layer:
- return self.mlp2(self.ln_3(fx))
- else:
- return fx
-
-
-class Model(nn.Module):
- def __init__(self,
- space_dim=1,
- n_layers=5,
- n_hidden=256,
- dropout=0.0,
- n_head=8,
- Time_Input=False,
- act='gelu',
- mlp_ratio=1,
- fun_dim=1,
- out_dim=1,
- slice_num=32,
- ref=8,
- unified_pos=False
- ):
- super(Model, self).__init__()
- self.__name__ = 'Transolver_1D'
- self.ref = ref
- self.unified_pos = unified_pos
- self.Time_Input = Time_Input
- self.n_hidden = n_hidden
- self.space_dim = space_dim
- if self.unified_pos:
- self.preprocess = MLP(fun_dim + self.ref * self.ref, n_hidden * 2, n_hidden, n_layers=0, res=False, act=act)
- else:
- self.preprocess = MLP(fun_dim + space_dim, n_hidden * 2, n_hidden, n_layers=0, res=False, act=act)
- if Time_Input:
- self.time_fc = nn.Sequential(nn.Linear(n_hidden, n_hidden), nn.SiLU(), nn.Linear(n_hidden, n_hidden))
-
- self.blocks = nn.ModuleList([Transolver_block(num_heads=n_head, hidden_dim=n_hidden,
- dropout=dropout,
- act=act,
- mlp_ratio=mlp_ratio,
- out_dim=out_dim,
- slice_num=slice_num,
- last_layer=(_ == n_layers - 1))
- for _ in range(n_layers)])
- self.initialize_weights()
- self.placeholder = nn.Parameter((1 / (n_hidden)) * torch.rand(n_hidden, dtype=torch.float))
-
- def initialize_weights(self):
- self.apply(self._init_weights)
-
- def _init_weights(self, m):
- if isinstance(m, nn.Linear):
- trunc_normal_(m.weight, std=0.02)
- if isinstance(m, nn.Linear) and m.bias is not None:
- nn.init.constant_(m.bias, 0)
- elif isinstance(m, (nn.LayerNorm, nn.BatchNorm1d)):
- nn.init.constant_(m.bias, 0)
- nn.init.constant_(m.weight, 1.0)
-
- def get_grid(self, x, batchsize=1):
- # x: B N 2
- # grid_ref
- gridx = torch.tensor(np.linspace(0, 1, self.ref), dtype=torch.float)
- gridx = gridx.reshape(1, self.ref, 1, 1).repeat([batchsize, 1, self.ref, 1])
- gridy = torch.tensor(np.linspace(0, 1, self.ref), dtype=torch.float)
- gridy = gridy.reshape(1, 1, self.ref, 1).repeat([batchsize, self.ref, 1, 1])
- grid_ref = torch.cat((gridx, gridy), dim=-1).cuda().reshape(batchsize, self.ref * self.ref, 2) # B (ref*ref) 2
-
- pos = torch.sqrt(torch.sum((x[:, :, None, :] - grid_ref[:, None, :, :]) ** 2, dim=-1)). \
- reshape(batchsize, x.shape[1], self.ref * self.ref).contiguous()
- return pos
-
- def forward(self, x, fx, T=None):
- if self.unified_pos:
- x = self.get_grid(x, x.shape[0])
- if fx is not None:
- fx = torch.cat((x, fx), -1)
- fx = self.preprocess(fx)
- else:
- fx = self.preprocess(x)
- fx = fx + self.placeholder[None, None, :]
-
- if T is not None:
- Time_emb = timestep_embedding(T, self.n_hidden).repeat(1, x.shape[1], 1)
- Time_emb = self.time_fc(Time_emb)
- fx = fx + Time_emb
-
- for block in self.blocks:
- fx = block(fx)
-
- return fx
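A forward-pass sketch for the model above (definitions assumed in scope; sizes illustrative). With `unified_pos=False` the example runs on CPU, since `get_grid` is the only method that calls `.cuda()`:

```python
import torch

model = Model(space_dim=2, fun_dim=1, n_hidden=64, n_layers=3, n_head=4,
              out_dim=1, slice_num=32, unified_pos=False)
x = torch.rand(1, 972, 2)    # coordinates of 972 irregular mesh points
fx = torch.rand(1, 972, 1)   # input function sampled on those points
out = model(x, fx)           # predicted field
print(out.shape)             # torch.Size([1, 972, 1])
```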
diff --git a/PDE-Solving-StandardBenchmark/model/Transolver_Structured_Mesh_2D.py b/PDE-Solving-StandardBenchmark/model/Transolver_Structured_Mesh_2D.py
deleted file mode 100644
index 0f1769d..0000000
--- a/PDE-Solving-StandardBenchmark/model/Transolver_Structured_Mesh_2D.py
+++ /dev/null
@@ -1,174 +0,0 @@
-import torch
-import numpy as np
-import torch.nn as nn
-from timm.models.layers import trunc_normal_
-from model.Embedding import timestep_embedding
-from model.Physics_Attention import Physics_Attention_Structured_Mesh_2D
-
-# every entry is a zero-argument factory (MLP calls act() below), so LeakyReLU is wrapped in a lambda
-ACTIVATION = {'gelu': nn.GELU, 'tanh': nn.Tanh, 'sigmoid': nn.Sigmoid, 'relu': nn.ReLU,
- 'leaky_relu': lambda: nn.LeakyReLU(0.1),
- 'softplus': nn.Softplus, 'ELU': nn.ELU, 'silu': nn.SiLU}
-
-
-class MLP(nn.Module):
- def __init__(self, n_input, n_hidden, n_output, n_layers=1, act='gelu', res=True):
- super(MLP, self).__init__()
-
- if act in ACTIVATION.keys():
- act = ACTIVATION[act]
- else:
- raise NotImplementedError
- self.n_input = n_input
- self.n_hidden = n_hidden
- self.n_output = n_output
- self.n_layers = n_layers
- self.res = res
- self.linear_pre = nn.Sequential(nn.Linear(n_input, n_hidden), act())
- self.linear_post = nn.Linear(n_hidden, n_output)
- self.linears = nn.ModuleList([nn.Sequential(nn.Linear(n_hidden, n_hidden), act()) for _ in range(n_layers)])
-
- def forward(self, x):
- x = self.linear_pre(x)
- for i in range(self.n_layers):
- if self.res:
- x = self.linears[i](x) + x
- else:
- x = self.linears[i](x)
- x = self.linear_post(x)
- return x
-
-
-class Transolver_block(nn.Module):
- """Transformer encoder block."""
-
- def __init__(
- self,
- num_heads: int,
- hidden_dim: int,
- dropout: float,
- act='gelu',
- mlp_ratio=4,
- last_layer=False,
- out_dim=1,
- slice_num=32,
- H=85,
- W=85
- ):
- super().__init__()
- self.last_layer = last_layer
- self.ln_1 = nn.LayerNorm(hidden_dim)
- self.Attn = Physics_Attention_Structured_Mesh_2D(hidden_dim, heads=num_heads, dim_head=hidden_dim // num_heads,
- dropout=dropout, slice_num=slice_num, H=H, W=W)
-
- self.ln_2 = nn.LayerNorm(hidden_dim)
- self.mlp = MLP(hidden_dim, hidden_dim * mlp_ratio, hidden_dim, n_layers=0, res=False, act=act)
- if self.last_layer:
- self.ln_3 = nn.LayerNorm(hidden_dim)
- self.mlp2 = nn.Linear(hidden_dim, out_dim)
-
- def forward(self, fx):
- fx = self.Attn(self.ln_1(fx)) + fx
- fx = self.mlp(self.ln_2(fx)) + fx
- if self.last_layer:
- return self.mlp2(self.ln_3(fx))
- else:
- return fx
-
-
-class Model(nn.Module):
- def __init__(self,
- space_dim=1,
- n_layers=5,
- n_hidden=256,
- dropout=0.0,
- n_head=8,
- Time_Input=False,
- act='gelu',
- mlp_ratio=1,
- fun_dim=1,
- out_dim=1,
- slice_num=32,
- ref=8,
- unified_pos=False,
- H=85,
- W=85,
- ):
- super(Model, self).__init__()
- self.__name__ = 'Transolver_2D'
- self.H = H
- self.W = W
- self.ref = ref
- self.unified_pos = unified_pos
- if self.unified_pos:
- self.pos = self.get_grid()
- self.preprocess = MLP(fun_dim + self.ref * self.ref, n_hidden * 2, n_hidden, n_layers=0, res=False, act=act)
- else:
- self.preprocess = MLP(fun_dim + space_dim, n_hidden * 2, n_hidden, n_layers=0, res=False, act=act)
-
- self.Time_Input = Time_Input
- self.n_hidden = n_hidden
- self.space_dim = space_dim
- if Time_Input:
- self.time_fc = nn.Sequential(nn.Linear(n_hidden, n_hidden), nn.SiLU(), nn.Linear(n_hidden, n_hidden))
-
- self.blocks = nn.ModuleList([Transolver_block(num_heads=n_head, hidden_dim=n_hidden,
- dropout=dropout,
- act=act,
- mlp_ratio=mlp_ratio,
- out_dim=out_dim,
- slice_num=slice_num,
- H=H,
- W=W,
- last_layer=(_ == n_layers - 1))
- for _ in range(n_layers)])
- self.initialize_weights()
- self.placeholder = nn.Parameter((1 / (n_hidden)) * torch.rand(n_hidden, dtype=torch.float))
-
- def initialize_weights(self):
- self.apply(self._init_weights)
-
- def _init_weights(self, m):
- if isinstance(m, nn.Linear):
- trunc_normal_(m.weight, std=0.02)
- if isinstance(m, nn.Linear) and m.bias is not None:
- nn.init.constant_(m.bias, 0)
- elif isinstance(m, (nn.LayerNorm, nn.BatchNorm1d)):
- nn.init.constant_(m.bias, 0)
- nn.init.constant_(m.weight, 1.0)
-
- def get_grid(self, batchsize=1):
- size_x, size_y = self.H, self.W
- gridx = torch.tensor(np.linspace(0, 1, size_x), dtype=torch.float)
- gridx = gridx.reshape(1, size_x, 1, 1).repeat([batchsize, 1, size_y, 1])
- gridy = torch.tensor(np.linspace(0, 1, size_y), dtype=torch.float)
- gridy = gridy.reshape(1, 1, size_y, 1).repeat([batchsize, size_x, 1, 1])
- grid = torch.cat((gridx, gridy), dim=-1).cuda() # B H W 2
-
- gridx = torch.tensor(np.linspace(0, 1, self.ref), dtype=torch.float)
- gridx = gridx.reshape(1, self.ref, 1, 1).repeat([batchsize, 1, self.ref, 1])
- gridy = torch.tensor(np.linspace(0, 1, self.ref), dtype=torch.float)
- gridy = gridy.reshape(1, 1, self.ref, 1).repeat([batchsize, self.ref, 1, 1])
- grid_ref = torch.cat((gridx, gridy), dim=-1).cuda() # B ref ref 2
-
- pos = torch.sqrt(torch.sum((grid[:, :, :, None, None, :] - grid_ref[:, None, None, :, :, :]) ** 2, dim=-1)). \
- reshape(batchsize, size_x, size_y, self.ref * self.ref).contiguous()
- return pos
-
- def forward(self, x, fx, T=None):
- if self.unified_pos:
- x = self.pos.repeat(x.shape[0], 1, 1, 1).reshape(x.shape[0], self.H * self.W, self.ref * self.ref)
- if fx is not None:
- fx = torch.cat((x, fx), -1)
- fx = self.preprocess(fx)
- else:
- fx = self.preprocess(x)
- fx = fx + self.placeholder[None, None, :]
-
- if T is not None:
- Time_emb = timestep_embedding(T, self.n_hidden).repeat(1, x.shape[1], 1)
- Time_emb = self.time_fc(Time_emb)
- fx = fx + Time_emb
-
- for block in self.blocks:
- fx = block(fx)
-
- return fx
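The `unified_pos` option above replaces raw (x, y) coordinates with each mesh point's distances to a coarse `ref` x `ref` anchor grid. A standalone sketch of that encoding (requires torch >= 1.10 for the `indexing` argument):

```python
import torch

H, W, ref = 85, 85, 8
gy, gx = torch.meshgrid(torch.linspace(0, 1, H), torch.linspace(0, 1, W), indexing='ij')
grid = torch.stack([gy, gx], dim=-1).reshape(H * W, 2)          # structured mesh points
ry, rx = torch.meshgrid(torch.linspace(0, 1, ref), torch.linspace(0, 1, ref), indexing='ij')
anchors = torch.stack([ry, rx], dim=-1).reshape(ref * ref, 2)   # 8x8 reference grid
pos = torch.cdist(grid, anchors)                                # [H*W, ref*ref] distances
print(pos.shape)                                                # torch.Size([7225, 64])
```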
diff --git a/PDE-Solving-StandardBenchmark/model/Transolver_Structured_Mesh_3D.py b/PDE-Solving-StandardBenchmark/model/Transolver_Structured_Mesh_3D.py
deleted file mode 100644
index f5ecf17..0000000
--- a/PDE-Solving-StandardBenchmark/model/Transolver_Structured_Mesh_3D.py
+++ /dev/null
@@ -1,191 +0,0 @@
-import torch
-import numpy as np
-import torch.nn as nn
-from timm.models.layers import trunc_normal_
-from model.Embedding import timestep_embedding
-import torch.utils.checkpoint as checkpoint
-from model.Physics_Attention import Physics_Attention_Structured_Mesh_3D
-
-# every entry is a zero-argument factory (MLP calls act() below), so LeakyReLU is wrapped in a lambda
-ACTIVATION = {'gelu': nn.GELU, 'tanh': nn.Tanh, 'sigmoid': nn.Sigmoid, 'relu': nn.ReLU,
- 'leaky_relu': lambda: nn.LeakyReLU(0.1),
- 'softplus': nn.Softplus, 'ELU': nn.ELU, 'silu': nn.SiLU}
-
-
-class MLP(nn.Module):
- def __init__(self, n_input, n_hidden, n_output, n_layers=1, act='gelu', res=True):
- super(MLP, self).__init__()
-
- if act in ACTIVATION.keys():
- act = ACTIVATION[act]
- else:
- raise NotImplementedError
- self.n_input = n_input
- self.n_hidden = n_hidden
- self.n_output = n_output
- self.n_layers = n_layers
- self.res = res
- self.linear_pre = nn.Sequential(nn.Linear(n_input, n_hidden), act())
- self.linear_post = nn.Linear(n_hidden, n_output)
- self.linears = nn.ModuleList([nn.Sequential(nn.Linear(n_hidden, n_hidden), act()) for _ in range(n_layers)])
-
- def forward(self, x):
- x = self.linear_pre(x)
- for i in range(self.n_layers):
- if self.res:
- x = self.linears[i](x) + x
- else:
- x = self.linears[i](x)
- x = self.linear_post(x)
- return x
-
-
-class Transolver_block(nn.Module):
- """Transformer encoder block."""
-
- def __init__(
- self,
- num_heads: int,
- hidden_dim: int,
- dropout: float,
- act='gelu',
- mlp_ratio=4,
- last_layer=False,
- out_dim=1,
- slice_num=32,
- H=32,
- W=32,
- D=32
- ):
- super().__init__()
- self.last_layer = last_layer
- self.ln_1 = nn.LayerNorm(hidden_dim)
- self.Attn = Physics_Attention_Structured_Mesh_3D(hidden_dim, heads=num_heads, dim_head=hidden_dim // num_heads,
- dropout=dropout, slice_num=slice_num, H=H, W=W, D=D)
-
- self.ln_2 = nn.LayerNorm(hidden_dim)
- self.mlp = MLP(hidden_dim, hidden_dim * mlp_ratio, hidden_dim, n_layers=0, res=False, act=act)
- if self.last_layer:
- self.ln_3 = nn.LayerNorm(hidden_dim)
- self.mlp2 = nn.Linear(hidden_dim, out_dim)
-
- def forward(self, fx):
- fx = self.Attn(self.ln_1(fx)) + fx
- fx = self.mlp(self.ln_2(fx)) + fx
- if self.last_layer:
- return self.mlp2(self.ln_3(fx))
- else:
- return fx
-
-
-class Model(nn.Module):
- def __init__(self,
- space_dim=1,
- n_layers=5,
- n_hidden=256,
- dropout=0.0,
- n_head=8,
- Time_Input=False,
- act='gelu',
- mlp_ratio=1,
- fun_dim=1,
- out_dim=1,
- slice_num=32,
- ref=8,
- unified_pos=False,
- H=32,
- W=32,
- D=32,
- ):
- super(Model, self).__init__()
- self.__name__ = 'Transolver_3D'
- self.use_checkpoint = False
- self.H = H
- self.W = W
- self.D = D
- self.ref = ref
- self.unified_pos = unified_pos
- if self.unified_pos:
- self.pos = self.get_grid()
- self.preprocess = MLP(fun_dim + self.ref * self.ref * self.ref, n_hidden * 2, n_hidden, n_layers=0,
- res=False, act=act)
- else:
- self.preprocess = MLP(fun_dim + space_dim, n_hidden * 2, n_hidden, n_layers=0, res=False, act=act)
-
- self.Time_Input = Time_Input
- self.n_hidden = n_hidden
- self.space_dim = space_dim
- if Time_Input:
- self.time_fc = nn.Sequential(nn.Linear(n_hidden, n_hidden), nn.SiLU(), nn.Linear(n_hidden, n_hidden))
-
- self.blocks = nn.ModuleList([Transolver_block(num_heads=n_head, hidden_dim=n_hidden,
- dropout=dropout,
- act=act,
- mlp_ratio=mlp_ratio,
- out_dim=out_dim,
- slice_num=slice_num,
- H=H,
- W=W,
- D=D,
- last_layer=(_ == n_layers - 1))
- for _ in range(n_layers)])
- self.initialize_weights()
- self.placeholder = nn.Parameter((1 / (n_hidden)) * torch.rand(n_hidden, dtype=torch.float))
-
- def initialize_weights(self):
- self.apply(self._init_weights)
-
- def _init_weights(self, m):
- if isinstance(m, nn.Linear):
- trunc_normal_(m.weight, std=0.02)
- if isinstance(m, nn.Linear) and m.bias is not None:
- nn.init.constant_(m.bias, 0)
- elif isinstance(m, (nn.LayerNorm, nn.BatchNorm1d)):
- nn.init.constant_(m.bias, 0)
- nn.init.constant_(m.weight, 1.0)
-
- def get_grid(self, batchsize=1):
- size_x, size_y, size_z = self.H, self.W, self.D
- gridx = torch.tensor(np.linspace(0, 1, size_x), dtype=torch.float)
- gridx = gridx.reshape(1, size_x, 1, 1, 1).repeat([batchsize, 1, size_y, size_z, 1])
- gridy = torch.tensor(np.linspace(0, 1, size_y), dtype=torch.float)
- gridy = gridy.reshape(1, 1, size_y, 1, 1).repeat([batchsize, size_x, 1, size_z, 1])
- gridz = torch.tensor(np.linspace(0, 1, size_z), dtype=torch.float)
- gridz = gridz.reshape(1, 1, 1, size_z, 1).repeat([batchsize, size_x, size_y, 1, 1])
- grid = torch.cat((gridx, gridy, gridz), dim=-1).cuda() # B H W D 3
-
- gridx = torch.tensor(np.linspace(0, 1, self.ref), dtype=torch.float)
- gridx = gridx.reshape(1, self.ref, 1, 1, 1).repeat([batchsize, 1, self.ref, self.ref, 1])
- gridy = torch.tensor(np.linspace(0, 1, self.ref), dtype=torch.float)
- gridy = gridy.reshape(1, 1, self.ref, 1, 1).repeat([batchsize, self.ref, 1, self.ref, 1])
- gridz = torch.tensor(np.linspace(0, 1, self.ref), dtype=torch.float)
- gridz = gridz.reshape(1, 1, 1, self.ref, 1).repeat([batchsize, self.ref, self.ref, 1, 1])
- grid_ref = torch.cat((gridx, gridy, gridz), dim=-1).cuda() # B ref ref ref 3
-
- pos = torch.sqrt(
- torch.sum((grid[:, :, :, :, None, None, None, :] - grid_ref[:, None, None, None, :, :, :, :]) ** 2,
- dim=-1)). \
- reshape(batchsize, size_x, size_y, size_z, self.ref * self.ref * self.ref).contiguous()
- return pos
-
- def forward(self, x, fx, T=None):
- if self.unified_pos:
- x = self.pos.repeat(x.shape[0], 1, 1, 1, 1).reshape(x.shape[0], self.H * self.W * self.D,
- self.ref * self.ref * self.ref)
- if fx is not None:
- fx = torch.cat((x, fx), -1)
- fx = self.preprocess(fx)
- else:
- fx = self.preprocess(x)
- fx = fx + self.placeholder[None, None, :]
-
- if T is not None:
- Time_emb = timestep_embedding(T, self.n_hidden).repeat(1, x.shape[1], 1)
- Time_emb = self.time_fc(Time_emb)
- fx = fx + Time_emb
-
- for block in self.blocks:
- if self.use_checkpoint:
- fx = checkpoint.checkpoint(block, fx)
- else:
- fx = block(fx)
-
- return fx
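For large 3D volumes the model above exposes `use_checkpoint`, which recomputes each block during the backward pass instead of caching its activations. A small-scale sketch (definitions assumed in scope; `unified_pos=False` keeps it on CPU):

```python
import torch

model = Model(space_dim=3, fun_dim=1, n_hidden=32, n_layers=2, n_head=4,
              slice_num=8, H=8, W=8, D=8, unified_pos=False)
model.use_checkpoint = True        # trade extra forward compute for activation memory
x = torch.rand(1, 8 * 8 * 8, 3)    # flattened 8x8x8 grid coordinates
fx = torch.rand(1, 8 * 8 * 8, 1)
out = model(x, fx)                 # [1, 512, 1]
out.sum().backward()               # gradients flow through the checkpointed blocks
```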
diff --git a/PDE-Solving-StandardBenchmark/model_dict.py b/PDE-Solving-StandardBenchmark/model_dict.py
deleted file mode 100644
index b001ca3..0000000
--- a/PDE-Solving-StandardBenchmark/model_dict.py
+++ /dev/null
@@ -1,10 +0,0 @@
-from model import Transolver_Irregular_Mesh, Transolver_Structured_Mesh_2D, Transolver_Structured_Mesh_3D
-
-
-def get_model(args):
- model_dict = {
- 'Transolver_Irregular_Mesh': Transolver_Irregular_Mesh, # for PDEs in 1D space or on unstructured meshes
- 'Transolver_Structured_Mesh_2D': Transolver_Structured_Mesh_2D,
- 'Transolver_Structured_Mesh_3D': Transolver_Structured_Mesh_3D,
- }
- return model_dict[args.model]
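A usage sketch for `get_model`: it returns the selected module, whose `Model` class the experiment scripts presumably instantiate; the `Namespace` here stands in for the real CLI arguments:

```python
from argparse import Namespace

args = Namespace(model='Transolver_Structured_Mesh_2D')
module = get_model(args)                          # the module, not an instance
model = module.Model(space_dim=2, fun_dim=1, H=85, W=85)
```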
diff --git a/PDE-Solving-StandardBenchmark/requirements.txt b/PDE-Solving-StandardBenchmark/requirements.txt
deleted file mode 100644
index 2631a66..0000000
--- a/PDE-Solving-StandardBenchmark/requirements.txt
+++ /dev/null
@@ -1,6 +0,0 @@
-torch==1.10.1
-h5py==3.8.0
-dgl==1.1.0
-einops==0.6.1
-scipy==1.7.3
-timm==0.9.2
\ No newline at end of file
diff --git a/PDE-Solving-StandardBenchmark/scripts/Transolver_Airfoil.sh b/PDE-Solving-StandardBenchmark/scripts/Transolver_Airfoil.sh
deleted file mode 100644
index 59a648b..0000000
--- a/PDE-Solving-StandardBenchmark/scripts/Transolver_Airfoil.sh
+++ /dev/null
@@ -1,14 +0,0 @@
-python exp_airfoil.py \
---gpu 5 \
---model Transolver_Structured_Mesh_2D \
---n-hidden 128 \
---n-heads 8 \
---n-layers 8 \
---lr 0.001 \
---max_grad_norm 0.1 \
---batch-size 4 \
---slice_num 64 \
---unified_pos 0 \
---ref 8 \
---eval 0 \
---save_name airfoil_Transolver
\ No newline at end of file
diff --git a/PDE-Solving-StandardBenchmark/scripts/Transolver_Darcy.sh b/PDE-Solving-StandardBenchmark/scripts/Transolver_Darcy.sh
deleted file mode 100644
index 8b5da67..0000000
--- a/PDE-Solving-StandardBenchmark/scripts/Transolver_Darcy.sh
+++ /dev/null
@@ -1,16 +0,0 @@
-python exp_darcy.py \
---gpu 4 \
---model Transolver_Structured_Mesh_2D \
---n-hidden 128 \
---n-heads 8 \
---n-layers 8 \
---lr 0.001 \
---max_grad_norm 0.1 \
---batch-size 4 \
---slice_num 64 \
---unified_pos 1 \
---ref 8 \
---eval 0 \
---downsample 5 \
---save_name darcy_UniPDE
-
diff --git a/PDE-Solving-StandardBenchmark/scripts/Transolver_Elas.sh b/PDE-Solving-StandardBenchmark/scripts/Transolver_Elas.sh
deleted file mode 100644
index 62e1db8..0000000
--- a/PDE-Solving-StandardBenchmark/scripts/Transolver_Elas.sh
+++ /dev/null
@@ -1,14 +0,0 @@
-python exp_elas.py \
---gpu 6 \
---model Transolver_Irregular_Mesh \
---n-hidden 128 \
---n-heads 8 \
---n-layers 8 \
---lr 0.001 \
---max_grad_norm 0.1 \
---batch-size 1 \
---slice_num 64 \
---unified_pos 0 \
---ref 8 \
---eval 0 \
---save_name elas_Transolver
\ No newline at end of file
diff --git a/PDE-Solving-StandardBenchmark/scripts/Transolver_NS.sh b/PDE-Solving-StandardBenchmark/scripts/Transolver_NS.sh
deleted file mode 100644
index 2c8bf37..0000000
--- a/PDE-Solving-StandardBenchmark/scripts/Transolver_NS.sh
+++ /dev/null
@@ -1,14 +0,0 @@
-python exp_ns.py \
---gpu 2 \
---model Transolver_Structured_Mesh_2D \
---n-hidden 256 \
---n-heads 8 \
---n-layers 8 \
---lr 0.001 \
---batch-size 2 \
---slice_num 32 \
---unified_pos 1 \
---ref 8 \
---eval 0 \
---save_name ns_Transolver
-
diff --git a/PDE-Solving-StandardBenchmark/scripts/Transolver_Pipe.sh b/PDE-Solving-StandardBenchmark/scripts/Transolver_Pipe.sh
deleted file mode 100644
index d4629d1..0000000
--- a/PDE-Solving-StandardBenchmark/scripts/Transolver_Pipe.sh
+++ /dev/null
@@ -1,16 +0,0 @@
-python exp_pipe.py \
---gpu 7 \
---model Transolver_Structured_Mesh_2D \
---n-hidden 128 \
---n-heads 8 \
---n-layers 8 \
---mlp_ratio 2 \
---lr 0.001 \
---max_grad_norm 0.1 \
---batch-size 8 \
---slice_num 64 \
---unified_pos 0 \
---ref 8 \
---eval 0 \
---save_name pipe_Transolver
-
diff --git a/PDE-Solving-StandardBenchmark/scripts/Transolver_Plas.sh b/PDE-Solving-StandardBenchmark/scripts/Transolver_Plas.sh
deleted file mode 100644
index 65a85d3..0000000
--- a/PDE-Solving-StandardBenchmark/scripts/Transolver_Plas.sh
+++ /dev/null
@@ -1,15 +0,0 @@
-python exp_plas.py \
---gpu 3 \
---model Transolver_Structured_Mesh_2D \
---n-hidden 128 \
---n-heads 8 \
---n-layers 8 \
---lr 0.001 \
---max_grad_norm 0.1 \
---batch-size 8 \
---slice_num 64 \
---unified_pos 0 \
---ref 8 \
---eval 0 \
---save_name plas_Transolver
-
diff --git a/PDE-Solving-StandardBenchmark/utils/normalizer.py b/PDE-Solving-StandardBenchmark/utils/normalizer.py
deleted file mode 100644
index a42170c..0000000
--- a/PDE-Solving-StandardBenchmark/utils/normalizer.py
+++ /dev/null
@@ -1,114 +0,0 @@
-import torch
-
-
-class IdentityTransformer():
- def __init__(self, X):
- self.mean = X.mean(dim=0, keepdim=True)
- self.std = X.std(dim=0, keepdim=True) + 1e-8
-
- def to(self, device):
- self.mean = self.mean.to(device)
- self.std = self.std.to(device)
- return self
-
- def cuda(self):
- self.mean = self.mean.cuda()
- self.std = self.std.cuda()
-
- def cpu(self):
- self.mean = self.mean.cpu()
- self.std = self.std.cpu()
-
- def encode(self, x):
- return x
-
- def decode(self, x):
- return x
-
-
-class UnitTransformer():
- def __init__(self, X):
- self.mean = X.mean(dim=(0, 1), keepdim=True)
- self.std = X.std(dim=(0, 1), keepdim=True) + 1e-8
-
- def to(self, device):
- self.mean = self.mean.to(device)
- self.std = self.std.to(device)
- return self
-
- def cuda(self):
- self.mean = self.mean.cuda()
- self.std = self.std.cuda()
-
- def cpu(self):
- self.mean = self.mean.cpu()
- self.std = self.std.cpu()
-
- def encode(self, x):
- x = (x - self.mean) / (self.std)
- return x
-
- def decode(self, x):
- return x * self.std + self.mean
-
- def transform(self, X, inverse=True, component='all'):
- if component in ('all', 'all-reduce'):
- if inverse:
- orig_shape = X.shape
- return (X * (self.std - 1e-8) + self.mean).view(orig_shape)
- else:
- return (X - self.mean) / self.std
- else:
- if inverse:
- orig_shape = X.shape
- return (X * (self.std[:, component] - 1e-8) + self.mean[:, component]).view(orig_shape)
- else:
- return (X - self.mean[:, component]) / self.std[:, component]
-
-
-class UnitGaussianNormalizer(object):
- def __init__(self, x, eps=0.00001, time_last=True):
- super(UnitGaussianNormalizer, self).__init__()
-
- self.mean = torch.mean(x, 0)
- self.std = torch.std(x, 0)
- self.eps = eps
- self.time_last = time_last # if the time dimension is the last dim
-
- def encode(self, x):
- x = (x - self.mean) / (self.std + self.eps)
- return x
-
- def decode(self, x, sample_idx=None):
- # sample_idx is the spatial sampling mask
- if sample_idx is None:
- std = self.std + self.eps # n
- mean = self.mean
- else:
- if self.mean.ndim == sample_idx.ndim or self.time_last:
- std = self.std[sample_idx] + self.eps # batch*n
- mean = self.mean[sample_idx]
- if self.mean.ndim > sample_idx.ndim and not self.time_last:
- std = self.std[..., sample_idx] + self.eps # T*batch*n
- mean = self.mean[..., sample_idx]
- # x is in shape of batch*(spatial discretization size) or T*batch*(spatial discretization size)
- x = (x * std) + mean
- return x
-
- def to(self, device):
- if torch.is_tensor(self.mean):
- self.mean = self.mean.to(device)
- self.std = self.std.to(device)
- else:
- self.mean = torch.from_numpy(self.mean).to(device)
- self.std = torch.from_numpy(self.std).to(device)
- return self
-
- def cuda(self):
- self.mean = self.mean.cuda()
- self.std = self.std.cuda()
-
- def cpu(self):
- self.mean = self.mean.cpu()
- self.std = self.std.cpu()
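A round-trip sketch for `UnitGaussianNormalizer` above (definition assumed in scope): `encode` standardizes each grid location over the sample axis, and `decode` inverts it:

```python
import torch

x = torch.rand(100, 85 * 85)                 # 100 samples over a flattened 85x85 grid
norm = UnitGaussianNormalizer(x)
z = norm.encode(x)                           # roughly zero mean / unit std per location
x_rec = norm.decode(z)
print(torch.allclose(x, x_rec, atol=1e-5))   # True (up to the eps regularizer)
```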
diff --git a/PDE-Solving-StandardBenchmark/utils/testloss.py b/PDE-Solving-StandardBenchmark/utils/testloss.py
deleted file mode 100644
index c621557..0000000
--- a/PDE-Solving-StandardBenchmark/utils/testloss.py
+++ /dev/null
@@ -1,45 +0,0 @@
-import torch
-
-
-class TestLoss(object):
- def __init__(self, d=2, p=2, size_average=True, reduction=True):
- super(TestLoss, self).__init__()
-
- assert d > 0 and p > 0
-
- self.d = d
- self.p = p
- self.reduction = reduction
- self.size_average = size_average
-
- def abs(self, x, y):
- num_examples = x.size()[0]
-
- h = 1.0 / (x.size()[1] - 1.0)
-
- all_norms = (h ** (self.d / self.p)) * torch.norm(x.view(num_examples, -1) - y.view(num_examples, -1), self.p,
- 1)
-
- if self.reduction:
- if self.size_average:
- return torch.mean(all_norms)
- else:
- return torch.sum(all_norms)
-
- return all_norms
-
- def rel(self, x, y):
- num_examples = x.size()[0]
-
- diff_norms = torch.norm(x.reshape(num_examples, -1) - y.reshape(num_examples, -1), self.p, 1)
- y_norms = torch.norm(y.reshape(num_examples, -1), self.p, 1)
- if self.reduction:
- if self.size_average:
- return torch.mean(diff_norms / y_norms)
- else:
- return torch.sum(diff_norms / y_norms)
-
- return diff_norms / y_norms
-
- def __call__(self, x, y):
- return self.rel(x, y)
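`TestLoss.__call__` computes the batch-averaged relative L2 error ||x - y||_2 / ||y||_2. A quick sanity check (definition assumed in scope; the 1% noise level is illustrative):

```python
import torch

myloss = TestLoss(size_average=True)
y = torch.rand(4, 1000)
x = y + 0.01 * torch.randn_like(y)   # 1% absolute noise around the targets
print(myloss(x, y))                  # roughly 0.017 for uniform targets on [0, 1]
```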
diff --git a/README.md b/README.md
index 2010ece..565cafd 100644
--- a/README.md
+++ b/README.md
@@ -1,78 +1,80 @@
-# Transolver (ICML 2024 Spotlight)
+# Transolver for Car Design
-:triangular_flag_on_post:**News** (2025.04) We have released [Neural-Solver-Library](https://github.com/thuml/Neural-Solver-Library) as a simple and neat code base for PDE solving. It contains 17 well-reproduced neural solvers. Welcome to try this library and join the research in solving PDEs.
+We test [Transolver](https://arxiv.org/abs/2402.02366) on practical design tasks. The car design task requires the model to estimate the surrounding wind speed and surface pressure of a driving car.
-:triangular_flag_on_post:**News** (2025.02) We present an upgraded version of Transolver, named [Transolver++](https://arxiv.org/abs/2502.02414v1), which can handle million-scale geometries in one GPU with more accurate results.
-
-:triangular_flag_on_post:**News** (2024.10) Transolver has been integrated into [NVIDIA modulus](https://github.com/NVIDIA/modulus/tree/main/examples/cfd/darcy_transolver).
-
-Transolver: A Fast Transformer Solver for PDEs on General Geometries [[Paper]](https://arxiv.org/abs/2402.02366) [[Slides]](https://wuhaixu2016.github.io/pdf/ICML2024_Transolver.pdf) [[Poster]](https://wuhaixu2016.github.io/pdf/poster_ICML2024_Transolver.pdf)
-
-In real-world applications, PDEs are typically discretized into large-scale meshes with complex geometries. To capture intricate physical correlations hidden under multifarious meshes, we propose the Transolver with the following features:
+
+
+
+Figure 1. Car design task.
+
-- Going beyond previous work, Transolver **calculates attention among learned physical states** instead of mesh points, which empowers the model with **endogenetic geometry-general capability**.
-- Transolver achieves **22% error reduction over previous SOTA in six standard benchmarks** and excels in **large-scale industrial simulations**, including car and airfoil designs.
-- Transolver presents favorable **efficiency, scalability and out-of-distribution generalizability**.
+Relative errors of the surrounding wind speed, surface pressure and [drag coefficient](https://en.wikipedia.org/wiki/Drag_coefficient) are recorded, along with [Spearman's rank correlations](https://en.wikipedia.org/wiki/Spearman%27s_rank_correlation_coefficient), which quantify the model's capability to rank different designs.
-
+
-Figure 1. Overview of Transolver.
+Table 1. Model comparisons of the car design task.
-## Transolver v.s. Previous Transformer Operators
+## Get Started
-**All of the previous Transformer-based neural operators directly apply attention to mesh points.** However, the massive mesh points in practical applications will cause challenges in both computation cost and capturing physical correlations.
+1. Install Python 3.8. For convenience, execute the following command to install the dependencies.
-Transolver is based on a more foundational idea, that is **learning intrinsic physical states under complex geometrics**. This design frees our model from superficial and unwieldy meshes and focuses more on physics modeling.
+```bash
+pip install -r requirements.txt
+```
-As shown below, **Transolver can precisely capture miscellaneous physical states of PDEs**, such as (a) various fluid-structure interactions in a Darcy flow, (b) different extrusion regions of elastic materials, (c) shock wave and wake flow around the airfoil, (d) front-back surfaces and up-bottom spaces of driving cars.
+Note: You need to install [pytorch_geometric](https://github.com/pyg-team/pytorch_geometric).
-
-
-
-Figure 2. Visualization of learned physical states.
-
+2. Prepare Data.
-## Get Started
+The raw data, provided by [Nobuyuki Umetani](https://dl.acm.org/doi/abs/10.1145/3197517.3201325), can be found [[here]](http://www.nobuyuki-umetani.com/publication/mlcfd_data.zip).
-1. Please refer to different folders for detailed experiment instructions.
+3. Train and evaluate the model. We provide the experiment scripts under the folder `./scripts/`. You can reproduce the experiment results with the following commands:
-2. List of experiments:
+```bash
+bash scripts/Transolver.sh # for training (takes 8-10 hours on a single A100)
+bash scripts/Evaluation.sh # for Evaluation
+```
-- Core code: see [./Physics_Attention.py](https://github.com/thuml/Transolver/blob/main/Physics_Attention.py)
-- Standard benchmarks: see [./PDE-Solving-StandardBenchmark](https://github.com/thuml/Transolver/tree/main/PDE-Solving-StandardBenchmark)
-- Car design task: see [./Car-Design-ShapeNetCar](https://github.com/thuml/Transolver/tree/main/Car-Design-ShapeNetCar)
-- Airfoil design task: see [./Airfoil-Design-AirfRANS](https://github.com/thuml/Transolver/tree/main/Airfoil-Design-AirfRANS)
+Note: You need to set the arguments `--data_dir` and `--save_dir` to your dataset paths. Here `--data_dir` points to the raw data and `--save_dir` is where the preprocessed data will be saved.
-## Results
+If you have already downloaded or generated the preprocessed data, you can set `--preprocessed` to True to speed things up.
-Transolver achieves consistent state-of-the-art in **six standard benchmarks and two practical design tasks**. **More than 20 baselines are compared.**
+4. Develop your own model. Here are the instructions:
-
-
-
-Table 1. Results on six standard benchmarks.
-
+ - Add the model file under folder `./models/`.
+ - Add the model configuration into `./main.py`.
+ - Add a script file under folder `./scripts/` and change the argument `--model`.
+
+## Slice Visualization
+
+Transolver proposes to **learn physical states** hidden under unwieldy meshes.
+
+The following visualization demonstrates that Transolver successfully learns to assign points sharing a similar physical state to the same slice, such as the windshield, license plate and headlights.
-
+
-Table 2. Results on two design tasks: Car and Airfoil design.
+Figure 2. Visualization for Transolver learned physical states.
+
## Showcases
+Transolver achieves the best performance on cases with complex geometries and hybrid physics.
+
-
+
-Figure 3. Comparison of Transolver and other models.
+Figure 3. Case study of Transolver and other models.
+
## Citation
-If you find this repo useful, please cite our paper.
+If you find this repo useful, please cite our paper.
```
@inproceedings{wu2024Transolver,
@@ -89,12 +91,8 @@ If you have any questions or want to use the code, please contact [wuhx23@mails.
## Acknowledgement
-We appreciate the following github repos a lot for their valuable code base or datasets:
-
-https://github.com/neuraloperator/neuraloperator
-
-https://github.com/neuraloperator/Geo-FNO
+We are grateful to the following papers for their valuable code bases and datasets:
-https://github.com/thuml/Latent-Spectral-Models
+https://dl.acm.org/doi/abs/10.1145/3197517.3201325
-https://github.com/Extrality/AirfRANS
+https://openreview.net/forum?id=EyQO9RPhwN
diff --git a/pic/Transolver.png b/pic/Transolver.png
deleted file mode 100644
index cbfed89..0000000
Binary files a/pic/Transolver.png and /dev/null differ
diff --git a/pic/physical_states.png b/pic/physical_states.png
deleted file mode 100644
index 18299cb..0000000
Binary files a/pic/physical_states.png and /dev/null differ
diff --git a/pic/showcases.png b/pic/showcases.png
deleted file mode 100644
index 0a64990..0000000
Binary files a/pic/showcases.png and /dev/null differ