I am trying to reproduce the results reported for the WPuQ dataset in your paper. My current configuration is as follows:
model_class=gpt-2
batch_size=256
num_sample=4000
num_sampling_step=100
decoder_layer=12
dim_base=512
dim_feedforward=2048
learning_rate=1e-4
train_season=whole_year
num_train_steps=100000
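For easier comparison, here are the same settings collected in one place; the key names simply mirror my list above and are only my guess at how they map onto the training script's arguments, so please point out any that are wrong or missing:

```python
# Hypothetical restatement of my settings as a plain Python dict; the key names
# mirror the list above and may not match the actual argument names expected by
# the training script.
config = {
    "model_class": "gpt-2",
    "batch_size": 256,
    "num_sample": 4000,
    "num_sampling_step": 100,
    "decoder_layer": 12,
    "dim_base": 512,
    "dim_feedforward": 2048,
    "learning_rate": 1e-4,
    "train_season": "whole_year",
    "num_train_steps": 100_000,
}
```

If there is a reference configuration for the Table II runs, a field-by-field comparison against this would already narrow things down a lot.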
Data preprocessing is performed using preprocess_hp.py, and training is conducted with train_ddpm_pl.py.
The data settings are:
record_year=[2018, 2019, 2020]
resolution=1min
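To rule out a misunderstanding on my side, this is roughly the behaviour I expect from the preprocessing under these settings. It is only a sketch: the file name and column name below are placeholders, and preprocess_hp.py is of course the authoritative pipeline.

```python
import pandas as pd

# Placeholder input file and column name; the real data layout is whatever
# preprocess_hp.py reads from the WPuQ release.
df = pd.read_csv(
    "wpuq_household_power.csv",
    parse_dates=["timestamp"],
    index_col="timestamp",
)

# Keep only the record years used for training/evaluation.
df = df[df.index.year.isin([2018, 2019, 2020])]

# Downsample to 1-minute resolution (mean aggregation assumed).
series_1min = df["P_TOT"].resample("1min").mean()
```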
However, the evaluation metrics I obtained differ significantly from those reported in Table II of the paper.
Could you please advise whether this discrepancy is more likely due to incorrect hyperparameter settings or to an issue in the data preprocessing pipeline?
