Hi!
Really nice and interesting work!
I was having a look at the code to better understand the bilinear classification step for SRL (the srl_bilinear method in output_fns, line 169).
Why do you use a single MLP layer to project both the predicate vectors and the word (role) vectors (line 195), taking two slices of the output afterward (line 196), instead of using two separate MLPs for predicates and roles? Is it because this way the projection of the roles also affects that of the predicates, and vice versa? At least, that is what I would expect, since it should be a fully connected layer.
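To make the question concrete, here is a minimal NumPy sketch of the single-projection-then-slice pattern (the sizes are made up, not taken from the repository, and a plain affine layer stands in for the MLP):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes, not taken from the repository's config.
n_tokens, d_in, d_out = 5, 8, 4

x = rng.standard_normal((n_tokens, d_in))  # word vectors

# Single projection to 2*d_out, then two slices.
W = rng.standard_normal((d_in, 2 * d_out))
b = rng.standard_normal(2 * d_out)
proj = x @ W + b
pred_repr = proj[:, :d_out]   # predicate slice
role_repr = proj[:, d_out:]   # role slice

# For a purely affine layer, each slice only ever touches its own
# columns of W, so it matches two separate projections exactly:
pred_sep = x @ W[:, :d_out] + b[:d_out]
role_sep = x @ W[:, d_out:] + b[d_out:]

assert np.allclose(pred_repr, pred_sep)
assert np.allclose(role_repr, role_sep)
```

If I read this right, the coupling I suspect would come only from shared earlier layers (e.g. a shared nonlinear hidden layer inside the MLP), since the final affine slices themselves are independent.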
Many thanks!