
Conversation

@sebwolf-de
Contributor

This PR includes Carsten's script for running convergence tests and some scripts for plotting the results. A follow-up PR in SeisSol/SeisSol will add documentation.

Comment on lines 12 to 24
### Settings for SuperMUC-NG
# on_a_cluster = True
# host_arch = 'skx'
# orders = range(2,8)
# resolutions = range(2,7)
# compilers = ['mpiicc', 'mpiicpc', 'mpiifort']

### Settings for Sebastians workstation
on_a_cluster = False
host_arch = "hsw"
orders = range(3, 7)
resolutions = range(2, 5)
compilers = ["mpicc", "mpicxx", "mpif90"]

Wouldn't it make sense to add these to the argument parser (with suitable defaults)?
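Something along these lines, perhaps (just a sketch; the flag names and defaults are only suggestions based on the workstation settings above):

```python
import argparse

# Possible way to expose the hardcoded settings; flag names are only a suggestion.
parser = argparse.ArgumentParser(description="Run SeisSol convergence tests.")
parser.add_argument("--cluster", action="store_true",
                    help="submit jobs on a cluster instead of running locally")
parser.add_argument("--host-arch", default="hsw",
                    help="target architecture, e.g. hsw or skx")
parser.add_argument("--orders", type=int, nargs=2, default=[3, 7],
                    metavar=("MIN", "MAX"), help="half-open range of convergence orders")
parser.add_argument("--resolutions", type=int, nargs=2, default=[2, 5],
                    metavar=("MIN", "MAX"), help="half-open range of refinement levels")
parser.add_argument("--compilers", nargs=3, default=["mpicc", "mpicxx", "mpif90"],
                    metavar=("CC", "CXX", "FC"), help="MPI compiler wrappers")
args = parser.parse_args()

orders = range(*args.orders)
resolutions = range(*args.resolutions)
```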

Comment on lines 201 to 207
if not on_a_cluster:
    run_cmd = "OMP_NUM_THREADS=8 ./{} {}".format(
        seissol_name(arch, equations, o), par_name(equations, n)
    )
    run_cmd += " > " + log_file
    print(run_cmd)
    os.system(run_cmd)

Maybe a bit confusing that there are two different modes of execution and also two modes of settings (here OMP_NUM_THREADS=8 is hardcoded for on_a_cluster = False, whereas on a cluster one would edit job.template).

Given that job scripts can also be run as normal bash scripts, you could also create a "job script" for local execution. I imagine that you could select a job template via the argument parser and then have one job template for local execution and X job templates for X clusters. Then the workflow would be somewhat unified for local and cluster execution.
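A rough sketch of what that unified flow could look like (the file names, template placeholders, and flags below are hypothetical, not taken from this PR):

```python
import argparse
import subprocess

parser = argparse.ArgumentParser()
parser.add_argument("--job-template", default="job_local.template",
                    help="e.g. job_local.template or job_supermuc.template")
parser.add_argument("--submit-cmd", default="bash",
                    help="'bash' for local execution, 'sbatch' on a SLURM cluster")
args = parser.parse_args()

with open(args.job_template) as f:
    # The template would contain placeholders such as {binary} and {parfile};
    # these names are assumptions for illustration only.
    job_script = f.read().format(binary="./SeisSol_hsw", parfile="parameters.par")

with open("job.sh", "w") as f:
    f.write(job_script)

# Same code path for both modes; only the template and the submit command differ.
subprocess.run([args.submit_cmd, "job.sh"], check=True)
```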

Contributor Author

Agreed; there's now a section in the code with some cluster-specific settings and a job template.

@krenzland
Contributor

krenzland commented Jan 4, 2021

Hi,
I don't think this repository is the best place for that.
Imo we should keep scripting/examples/main code base as separate as possible.
I definitely won't offer support for that script (as everyone else, I have my own, far better scripts!) - do you want to be responsible for this?
Right now we have a lot of scripts which are in a similar situation - they are (maybe) used by some people, definitely not by all. Imo we should group them together.

Scripts are definitely more opinionated than examples.

Comment on lines 6 to 27
def calculate_error_rates(errors_df):
    rate_dicts = []
    for n in pd.unique(errors_df["norm"]):
        for v in pd.unique(errors_df["var"]):
            conv_df = errors_df[(errors_df["norm"] == n) & (errors_df["var"] == v)]
            d = {"norm": n, "var": v}
            resolutions = pd.unique(conv_df["h"])

            for r in range(len(resolutions) - 1):
                error_decay = (
                    conv_df.loc[conv_df["h"] == resolutions[r], "error"].values[0]
                    / conv_df.loc[conv_df["h"] == resolutions[r + 1], "error"].values[0]
                )
                rate = np.log(error_decay) / np.log(resolutions[r] / resolutions[r + 1])
                resolution_decay = "{}->{}".format(
                    np.round(resolutions[r], 3), np.round(resolutions[r + 1], 3)
                )
                d.update({resolution_decay: rate})
            rate_dicts.append(d)

    rate_df = pd.DataFrame(rate_dicts)
    return rate_df
Contributor

You can compute the convergence rates this way; a more reliable way is to compute the convergence via linear regression (log(h) vs. log(error)).
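For reference, a minimal sketch of what such a regression-based estimate could look like (plain numpy, not the code from this PR):

```python
import numpy as np

def regression_rate(h, error):
    """Estimate a single convergence order by fitting log(error) = p*log(h) + c."""
    p, c = np.polyfit(np.log(h), np.log(error), 1)
    return p

# Example: errors that decay with second order in h
h = np.array([0.5, 0.25, 0.125, 0.0625])
err = 3.0 * h**2
print(regression_rate(h, err))  # ~2.0
```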

Contributor Author

What do you mean by more reliable? Take, for example, a convergence test in single precision, which reaches a plateau pretty soon. With the pairwise rates you can see the correct rates at the beginning and the plateau later; with the regression you only see the overall result.
Nonetheless, I've added the regression feature ;-)

@sebwolf-de
Contributor Author

Hi,
I got the impression that people are indeed interested in such scripts, so I wanted to share it, in the sense of "document it once instead of explaining it twice".
I would also maintain this, since I work with this script.
"As everyone else I have my own, far better scripts!" True, but how much time do we lose every year writing scripts that somebody else has already implemented, just because we want to try out a fancy new (e.g. plotting) feature, don't like the code structure, the naming conventions, ...
"Imo we should group them together." Which place do you suggest for this? I thought this belongs here, since there is already a convergence example in this repository.

@krenzland
Contributor

"As everyone else I have my own, far better scripts!" True, but how much time do we lose every year writing scripts that somebody else has already implemented, just because we want to try out a fancy new (e.g. plotting) feature, don't like the code structure, the naming conventions, ...

Just to make it clear - that was sarcasm. My script is also terrible ;)

"Imo we should group them together." Which place do you suggest for this? I thought this belongs here, since there is already a convergence example in this repository.

Good question. Right now we have a plethora of scripts scattered in every repository (examples, meshing, SeisSol, Visualization, Geodata) and no one has any idea where to find stuff.
I personally favour keeping each repository to one purpose if possible, especially because that way we can have varying levels of support per repository. E.g. the SeisSol main repo is maintained, yateto is, examples is, random_visualization_scripts is not, etc.

Does anyone have any good ideas?

@uphoffc

uphoffc commented Jan 4, 2021

Haha, the scripts in SeisSol main are not maintained either :D; there are some which haven't been touched in a decade. But even though they are unmaintained and horrible, they might still prove useful.

@krenzland
Contributor

Haha, the scripts in SeisSol main are not maintained either :D; there are some which haven't been touched in a decade. But even though they are unmaintained and horrible, they might still prove useful.

Yes, exactly my point. We should move them to some place where they don't clutter our otherwise beautifully maintained code ;)

Moving them might actually be a chance to organize them. Just delete all of them and restore every tool for which we get at least one angry email.

@sebwolf-de
Contributor Author

But should a cluttered repository be a reason not to push new code?
I mean, where should we put these scripts otherwise? Open a new repository? As this Examples repo gathers all our setups for validation, I guess this is the right spot.

@krenzland
Contributor

Pushing this to a separate repository would be different.
If we have a script called do_all_convergence_tests in the official examples repository, people expect it to, well, do (?) all convergence tests.

When I push new convergence tests I won't update a script that I've never used and that I'm never going to use (because I have my own workflow which works better for me), and then we have a repository which is either 1) inconsistent or 2) where no one is going to push anymore, because it's too much effort.

A convergence script is far more opinionated than adding another convergence test.

@Thomas-Ulrich
Contributor

Thomas-Ulrich commented Jan 5, 2021 via email

@krenzland
Contributor

krenzland commented Jan 5, 2021

@Thomas-Ulrich I do agree that it is a good idea to provide this script; I just think this is the wrong repository.

And future developers will be able to check that the convergence is not broken after refactoring.

This should be the status quo. I'm at least doing that...

A script with dynamic rupture would also be useful. E.g. does the on-fault convergence rate get better with the projected state variable?
That is a whole other problem. There is no convergence test for dynamic rupture, and high-order convergence isn't actually shown in any of the papers. The only things that have been done are a) comparing to other solutions and b) comparing against our own, more highly resolved solution. b) only shows that we converge to our own high-resolution solution (which might in fact be incorrect), and a) is also not great because it isn't really rigorous and because the reference solutions differ a lot from each other.

To test that rigorously you need an analytical solution for the DR problems. As this is very hard (impossible?), you need to create a manufactured solution: you pick some q*, insert it into the PDE (+ BC/DR) and derive the source terms it would require. Your q* is then an analytical solution of the new problem (PDE + modifications).
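In symbols, as a generic sketch (not specific to SeisSol's equations): write the original problem as L(q) = 0, pick any smooth q*, and define the source term s := L(q*). By construction, q* then solves the modified problem L(q) = s exactly, so the discrete error against q* can be measured directly at any order.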

It would be a good idea to automate the b) part mentioned above - that's quite easy. We just need to agree on the details. But this is definitely another issue.
