Description
Dear authors,
When I check `def calc_reconstr_error(self)` in all three of the proposed methods, I see:
```python
return -tf.reduce_sum(tf.log(tf.maximum(p_x_i_scores0 * weight_scores0, 1e-10)))
```
This multiplies `weight_scores0` inside the logarithm, which I think differs from the log-likelihood defined in the paper, e.g. Eq. (2). In particular, if I neglect the `max(·, 1e-10)`, which appears to be there only for numerical stability, the logarithm above decomposes into

sum(log(p_x_i_scores0)) + sum(log(weight_scores0)).

Since the second term does not depend on the model parameters, it is a constant during learning.
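To illustrate the decomposition numerically, here is a minimal NumPy stand-in for the TensorFlow expression (the values for `p` and `w` are made up for the example; they play the roles of `p_x_i_scores0` and `weight_scores0`):

```python
import numpy as np

# Hypothetical stand-ins for p_x_i_scores0 and weight_scores0.
p = np.array([0.2, 0.5, 0.9])
w = np.array([0.3, 0.3, 0.4])

# The expression from calc_reconstr_error, with NumPy in place of the TF ops:
combined = -np.sum(np.log(np.maximum(p * w, 1e-10)))

# Ignoring the clipping (all products here exceed 1e-10), the log factors apart:
decomposed = -(np.sum(np.log(p)) + np.sum(np.log(w)))

# The two agree, so the weight term only shifts the loss by a constant
# whenever the weights do not depend on the learned parameters.
assert np.isclose(combined, decomposed)
```

This identity holds whenever every product stays above the 1e-10 clipping threshold.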
I am not very familiar with TensorFlow, so maybe I am missing something. Could you please explain what exactly this computes, even if it differs from the paper?
Thanks,
Alexander Shekhovtsov
Czech Technical University