Conversation
python/paddle/v2/framework/layers.py (outdated)
```python
def sums(input, program=None, init_program=None):
    helper = LayerHelper('sum', **locals())
    if not isinstance(input, list) and not isinstance(input, tuple):
        input = [input]
```
There is no need to do the conversion here; the Operator constructor already does it.
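Taken together with the `input_dtype` suggestion below, a minimal sketch of the simplified layer might look like this (the `'X'`/`'Out'` op keys and the behavior of `input_dtype` on a single Variable are assumptions, not taken from the source):

```python
def sums(input, program=None, init_program=None):
    helper = LayerHelper('sum', **locals())
    # No manual list conversion: the Operator constructor is said to
    # normalize a single Variable into a list of inputs.
    out = helper.create_tmp_variable(dtype=helper.input_dtype())
    helper.append_op(type='sum', inputs={'X': input}, outputs={'Out': out})
    return out
```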
python/paddle/v2/framework/layers.py (outdated)
```python
    helper = LayerHelper('sum', **locals())
    if not isinstance(input, list) and not isinstance(input, tuple):
        input = [input]
    out = helper.create_tmp_variable(dtype=input[0].data_type)
```
LayerHelper has a member function, `input_dtype`. It checks whether all the inputs share the same type and then returns that type. I think using it here is the better choice:

```python
def input_dtype(self, input_param_name='input'):
    inputs = self.multiple_input(input_param_name)
    dtype = None
    for each in inputs:
        if dtype is None:
            dtype = each.data_type
        elif dtype != each.data_type:
            raise ValueError("Data Type mismatch")
    return dtype
```
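Under that suggestion, the temporary output variable in `sums` could then be created via the helper, e.g.:

```python
out = helper.create_tmp_variable(dtype=helper.input_dtype())
```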
python/paddle/v2/framework/layers.py (outdated)
```python
def cos_sim(X, Y, program=None, init_program=None):
    helper = LayerHelper('cos_sim', **locals())
    out = helper.create_tmp_variable(dtype=X.data_type)
```
Use LayerHelper.input_dtype(). See my comments above.
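A hedged sketch of the suggested change; note that `input_dtype` defaults to the parameter name `'input'`, so passing `'X'` here is an assumption about how `multiple_input` resolves parameter names:

```python
def cos_sim(X, Y, program=None, init_program=None):
    helper = LayerHelper('cos_sim', **locals())
    # Checks that the inputs share one dtype instead of trusting X alone.
    # input_param_name='X' is an assumption, not taken from the source.
    out = helper.create_tmp_variable(dtype=helper.input_dtype(input_param_name='X'))
```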
```python
    })
    pre_act = helper.append_bias_op(pre_bias)
    return helper.append_activation(pre_act)
```
I'm not sure sequence conv needs a non-linear activation, so I removed it.
I think we should retain it. Users can set act to None to disable it.
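A sketch of the retained pattern (the signature is hypothetical), assuming `append_activation` returns its input unchanged when `act` is None, as the comment implies:

```python
def sequence_conv(input, num_filters, filter_size, act=None,
                  program=None, init_program=None):
    helper = LayerHelper('sequence_conv', **locals())
    # ... the conv op itself is elided; it produces pre_bias ...
    pre_act = helper.append_bias_op(pre_bias)
    # With act=None, append_activation is assumed to be a pass-through,
    # so the non-linearity is opt-in rather than removed from the layer.
    return helper.append_activation(pre_act)
```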
| outputs={"Out": pool_out}, | ||
| attrs={"strategy": pool_type}) | ||
| outputs={"Out": [pool_out]}, | ||
| attrs={"strategy": ENUM_POOL_TYPE[pool_type.upper()]}) |
What's this change for?
"max", "sqrt" are keywords of Python, I cannot bear our keyword overloading anymore.....
I am wondering why ENUM_POOL_TYPE became a map.
Because the "strategy" attribute needs a fixed enum index, not a string:

```cpp
AddAttr<int>(
    "strategy",
    "(int, default AVERAGE) the pooling strategy of SequencePoolOp.")
    .SetDefault(AVERAGE)
    .InEnum({AVERAGE, SUM, SQRT, MAX, LAST, FIRST});
```

This attribute will definitely be replaced in another PR, along with the sequence_conv attributes, etc. All of that code will be rewritten after I finish chapter 5 of the book.
```diff
-    return helper.append_activation(batch_norm_out)
```
Why remove the batch_norm layer?
That was an accidental removal caused by a merge conflict. It has been fixed.
python/paddle/v2/framework/nets.py (outdated)
```diff
@@ -103,22 +103,18 @@ def sequence_conv_pool(input,
                        filter_size,
                        pool_size,
                        pool_stride,
```
pool_size and pool_stride are also unused. Please remove them.
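A sketch of the trimmed signature; `num_filters` is an assumption (only input, filter_size, pool_size, and pool_stride are visible in the hunk), and the `pool_type='max'` default follows the discussion below:

```python
def sequence_conv_pool(input,
                       num_filters,   # assumed; not visible in the hunk
                       filter_size,
                       pool_type='max',
                       program=None,
                       init_program=None):
    ...
```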
python/paddle/v2/framework/nets.py (outdated)
```diff
-        pool_size=pool_size,
-        pool_type='max',
-        pool_stride=pool_stride,
+        pool_type='sum',
```
I think max is a little more common than sum, so I think it's fine to keep the default as max.
Fixed.

One frequent usage I found: an embedding followed by sum pooling is equivalent to an FC layer.
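A sketch of that usage; the layer names and signatures here are assumptions based on the op names in this thread, not the actual file:

```python
# Sum-pooling word embeddings is equivalent to an FC layer applied to a
# bag-of-words (sum of one-hot vectors) representation of the sequence.
emb = embedding(input=words, size=[dict_size, emb_dim])
pooled = sequence_pool(input=emb, pool_type='sum')
```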
* "modify layers.py" * "fix pool interface" * "add export type to layers" * "fix based on comment"
fix python layer interface