The synthesis network can be expressed as:
\[
G_{syn} = \ldots\,\sigma(W_{2}((A_{2}X) \odot \sigma(W_{1}((A_{1}X) \odot \sigma(W_{0}(A_{0}X))))))
\]
where \(X \in \mathbb{R}^{3 \times HW}\) is the coordinate grid of size \(H \times W\), and \(A_{i} \in \mathbb{R}^{n \times 3}\) is the affine transformation matrix at level \(i\).
Performing the element-wise multiplication level by level raises the polynomial degree by one per level, so the model progressively learns the complexity of the representation.

Roughly like this:
def forward(self, z, class_embedding, x):
    # x: coordinate grid X in R^{3 x HW}; it must stay fixed across levels,
    # so features are accumulated in a separate variable
    w = self.mapping_network(z, class_embedding)
    features = None
    for level in range(len(self.affine_layers)):
        # A_i X: (n x 3) @ (3 x HW) -> (n x HW), affine params generated from w
        transformed_coord = self.affine_layers[level](w, x)
        # (A_i X) ⊙ previous features; level 0 has no previous features yet
        if features is not None:
            transformed_coord = transformed_coord * features
        # W_i(...): n -> n linear layer
        linear_output = self.linears[level](transformed_coord)
        # sigma: Leaky-ReLU on the linear output, not on the coordinates
        features = self.lrelu(linear_output)
    # project features back to RGB: n -> 3
    rgb = self.output_layer(features)
    return rgb
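The degree-growth claim can be sanity-checked numerically. The sketch below is a linearized toy version of the synthesis loop (identity stands in for the Leaky-ReLU so the polynomial degree is exact; the shapes, `synthesis` helper, and random matrices are all assumptions for illustration): after three levels the output is a homogeneous degree-3 polynomial in \(X\), so scaling \(X\) by 2 scales the output by \(2^3 = 8\).

```python
import numpy as np

def synthesis(X, As, Ws):
    """Linearized synthesis loop: each level multiplies in one more
    affine-transformed copy of X, raising the degree in X by one."""
    feat = None
    for A, W in zip(As, Ws):
        t = A @ X                  # A_i X: (n x 3) @ (3 x HW) -> (n x HW)
        if feat is not None:
            t = t * feat           # (A_i X) ⊙ previous features
        feat = W @ t               # W_i(...); activation omitted on purpose
    return feat

rng = np.random.default_rng(0)
X = rng.normal(size=(3, 5))                          # toy 3 x HW coordinate grid
As = [rng.normal(size=(4, 3)) for _ in range(3)]     # A_i ∈ R^{n x 3}, n = 4
Ws = [rng.normal(size=(4, 4)) for _ in range(3)]     # W_i ∈ R^{n x n}

# Homogeneity check: 3 levels -> degree 3 -> scaling X by 2 scales output by 8.
print(np.allclose(synthesis(2 * X, As, Ws), 8 * synthesis(X, As, Ws)))  # True
```

With the real Leaky-ReLU the output is only piecewise-polynomial, but the maximum degree still grows by one per level for the same reason.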