Sebastian/convergence #15
Conversation
### Settings for SuperMUC-NG
# on_a_cluster = True
# host_arch = 'skx'
# orders = range(2,8)
# resolutions = range(2,7)
# compilers = ['mpiicc', 'mpiicpc', 'mpiifort']

### Settings for Sebastians workstation
on_a_cluster = False
host_arch = "hsw"
orders = range(3, 7)
resolutions = range(2, 5)
compilers = ["mpicc", "mpicxx", "mpif90"]
Wouldn't it make sense to add these to the argument parser (with suitable defaults)?
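A minimal sketch of what that could look like, assuming the settings keep the names used above; the flag names and defaults are illustrative, not the PR's actual interface:

```python
import argparse

# Hypothetical CLI for the settings that are currently hardcoded at the top of the script.
parser = argparse.ArgumentParser(description="Run SeisSol convergence tests")
parser.add_argument("--cluster", action="store_true",
                    help="submit jobs via a job template instead of running locally")
parser.add_argument("--host-arch", default="hsw",
                    help="target architecture, e.g. hsw or skx")
parser.add_argument("--orders", type=int, nargs=2, default=[3, 7], metavar=("MIN", "MAX"),
                    help="half-open range of convergence orders")
parser.add_argument("--resolutions", type=int, nargs=2, default=[2, 5], metavar=("MIN", "MAX"),
                    help="half-open range of mesh refinement levels")
parser.add_argument("--compilers", nargs=3, default=["mpicc", "mpicxx", "mpif90"],
                    metavar=("CC", "CXX", "FC"))
args = parser.parse_args()

on_a_cluster = args.cluster
host_arch = args.host_arch
orders = range(*args.orders)
resolutions = range(*args.resolutions)
compilers = args.compilers
```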
if not on_a_cluster:
    run_cmd = "OMP_NUM_THREADS=8 ./{} {}".format(
        seissol_name(arch, equations, o), par_name(equations, n)
    )
    run_cmd += " > " + log_file
    print(run_cmd)
    os.system(run_cmd)
It may be a bit confusing that there are two different modes of execution and also two modes of settings (here OMP_NUM_THREADS=8 is hardcoded for on_a_cluster=False, whereas on a cluster one would edit job.template).
Given that job scripts can also be run as normal bash scripts, you could also create a "job script" for local execution. I imagine that you could select a job template via the argument parser and then have one job template for local execution and X job templates for X clusters. The workflow would then be somewhat unified for local and cluster execution.
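A rough sketch of that idea; the template file names, the {run_cmd} placeholder, and the --job-template/--submit-cmd flags are assumptions for illustration, not the PR's actual interface:

```python
import argparse
import os

parser = argparse.ArgumentParser()
parser.add_argument("--job-template", default="job_local.template",
                    help="e.g. job_local.template or job_supermuc.template")
parser.add_argument("--submit-cmd", default="bash",
                    help="'bash' for local runs, 'sbatch' on a SLURM cluster")
args = parser.parse_args()

# Fill the template; a {run_cmd} placeholder is assumed. The local template would just
# export OMP_NUM_THREADS and call the binary, while a cluster template would additionally
# carry the scheduler header.
with open(args.job_template) as f:
    job_script = f.read().format(run_cmd="./SeisSol_release parameters.par")

with open("job.sh", "w") as f:
    f.write(job_script)

# Local execution and cluster submission then share the same code path.
os.system("{} job.sh".format(args.submit_cmd))
```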
Agreed; there is now a section in the code with some cluster-specific settings and a job template.
Hi, scripts are definitely more opinionated than examples.
def calculate_error_rates(errors_df):
    rate_dicts = []
    for n in pd.unique(errors_df["norm"]):
        for v in pd.unique(errors_df["var"]):
            conv_df = errors_df[(errors_df["norm"] == n) & (errors_df["var"] == v)]
            d = {"norm": n, "var": v}
            resolutions = pd.unique(conv_df["h"])

            # Rate between consecutive resolutions: log of the error ratio
            # divided by log of the mesh-size ratio.
            for r in range(len(resolutions) - 1):
                error_decay = (
                    conv_df.loc[conv_df["h"] == resolutions[r], "error"].values[0]
                    / conv_df.loc[conv_df["h"] == resolutions[r + 1], "error"].values[0]
                )
                rate = np.log(error_decay) / np.log(resolutions[r] / resolutions[r + 1])
                resolution_decay = "{}->{}".format(
                    np.round(resolutions[r], 3), np.round(resolutions[r + 1], 3)
                )
                d.update({resolution_decay: rate})
            rate_dicts.append(d)

    rate_df = pd.DataFrame(rate_dicts)
    return rate_df
You can compute the convergence rates this way, but a more reliable way is to compute the convergence order via linear regression (log(h) vs. log(error)).
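For reference, a minimal sketch of the regression variant; the function name is made up, and it assumes the same DataFrame layout ("h" and "error" columns) as the code above:

```python
import numpy as np

def regression_rate(conv_df):
    # Observed convergence order: slope of the least-squares fit of log(error) vs. log(h).
    h = conv_df["h"].to_numpy(dtype=float)
    err = conv_df["error"].to_numpy(dtype=float)
    slope, _intercept = np.polyfit(np.log(h), np.log(err), deg=1)
    return slope
```

Called once per (norm, var) group, this yields a single fitted rate instead of one rate per pair of consecutive resolutions.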
What do you mean by more reliable? Take, for example, a convergence test in single precision, which will reach a plateau pretty soon. Then you can see the correct rates in the beginning and the plateau later. With the regression, you only see the overall result.
Nonetheless, I've added the regression feature ;-)
Just to make it clear - that was sarcasm. My script is also terrible ;)
Good question. Right now we have a plethora of scripts scattered across every repository (examples, meshing, SeisSol, Visualization, Geodata) and no one has any idea where to find stuff. Does anyone have any good ideas?
|
Haha, the scripts in SeisSol main are not maintained either :D; there are some which haven't been touched in a decade. But even though they are unmaintained and horrible, they might still prove useful.
Yes, exactly my point. We should move them to some place where they don't clutter our otherwise beautifully maintained code ;) Moving them might actually be a chance to organize them. Just delete all of them and restore every tool for which we get at least one angry email.
But should a cluttered repository be a reason not to push new code?
Pushing this to a separate repository would be different. If we have a script called do_all_convergence_tests in the official examples repository, people expect it to, well, do (?) all convergence tests. When I push new convergence tests I won't update a script that I've never used and that I'm never going to use (because I have my own workflow, which works better for me), and then we have a repository which is either 1) inconsistent or 2) one where no one is going to push any more, because it's too much effort. A convergence script is far more opinionated than adding another convergence test.
I think it's a good idea to push the scripts. Finally we can prove that SeisSol in single precision works fine. A script with dynamic rupture would also be useful, e.g. does the on-fault convergence rate get better with the projected state variable?
If the release paper is ever written (which will eventually happen, imo), such scripts will be very useful.
And future developers will be able to check that the convergence is not broken after refactoring.
@Thomas-Ulrich I do agree that it is a good idea to provide this script; I just think this is the wrong repository.
This should be the status quo. I'm at least doing that...
To rigorously test that, you need an analytical solution for the DR problems. As this is very hard (impossible?), you need to create a manufactured solution. You basically assume that some q* solves the PDE (+BC/DR), insert it into the PDE, and derive some source terms. Your q* is then an analytical solution of the new problem (PDE + modifications). It would be a good idea to automate the b) part mentioned above - that's quite easy. We just need to agree on the details. But this is definitely another issue.
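A toy sketch of that recipe for a scalar 1D advection equation q_t + a·q_x = s, using sympy; the equation and the chosen q* are purely illustrative and have nothing to do with SeisSol's actual system:

```python
import sympy as sp

x, t = sp.symbols("x t")
a = 2  # constant advection speed (illustrative)

# 1) Pick an arbitrary smooth manufactured solution q*.
q_star = sp.sin(x - t) * sp.exp(-t)

# 2) Insert it into the PDE q_t + a*q_x = s and solve for the source term s.
source = sp.simplify(sp.diff(q_star, t) + a * sp.diff(q_star, x))

# q* is now an exact solution of the modified problem with this source term,
# so it can serve as the reference solution in a convergence test.
print(source)
```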
This PR includes Carsten's script for running convergence tests and some scripts for plotting the results. A follow-up PR in SeisSol/SeisSol will add documentation.