PG code discussion
This commit is contained in:
parent 9a3f1cc46a
commit fc6547f78f
File diff suppressed because one or more lines are too long
@@ -1,4 +1,4 @@
-Physical Gradients
+Physical Gradients and NNs
 =======================
 
 Re-cap?
@@ -340,10 +340,12 @@ Even when the Jacobian is singular (because the function is not injective, chaot
 The update obtained with a regular gradient descent method has surprising shortcomings.
 The physical gradient instead allows us to more accurately backpropagate through nonlinear functions, provided that we have access to good inverse functions.
 
-Before moving on to including PGs in NN training processes, the next example will illustrate ...
+Before moving on to including PGs in NN training processes, the next example will illustrate the differences between these approaches with a practical example.
 
-**todo, integrate comments below?**
+
+**TODO, sometime, integrate comments below?**
+
 
 Old Note:
 The inverse function to a simulator is typically the time-reversed physical process.
 
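The hunk above contrasts regular gradient descent with the physical gradient (PG), which maps an update through an inverse function. As a rough illustration of that contrast (not part of the commit), here is a minimal sketch with a scalar toy function `P(x) = x²` whose inverse is known analytically; the names `P`, `P_inv`, the target, and the step size are all assumptions for this example:

```python
import numpy as np

# Toy "simulator" and its analytic inverse (assumed for this sketch).
def P(x):
    return x ** 2            # forward physics function

def P_inv(y):
    return np.sqrt(y)        # inverse, valid for y >= 0

x, y_target = 3.0, 4.0       # start at x=3, want P(x)=4 (true solution x=2)
eta = 0.1                    # step size / learning rate

# Regular gradient descent on L = 0.5*(P(x) - y_target)^2:
# dL/dx = (P(x) - y_target) * P'(x), with P'(x) = 2x.
x_gd = x - eta * (P(x) - y_target) * (2.0 * x)

# PG-style update: take a small step towards the target in y-space,
# then map that step back to x through the inverse function.
y = P(x)
y_step = y + eta * (y_target - y)
x_pg = x + (P_inv(y_step) - x)

print(x_gd, x_pg)            # -> 0.0 vs. ~2.915
```

With these numbers the plain gradient step overshoots all the way to `x = 0`, because its magnitude is scaled by `P'(x)`, while the inverse-based update moves steadily along `P⁻¹` towards the solution; this is the kind of shortcoming the text refers to.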
@@ -269,7 +269,7 @@
 "    if not transposed:\n",
 "        block.add_module('%s_conv' % name, nn.Conv2d(in_c, out_c, kernel_size=size, stride=2, padding=pad, bias=True))\n",
 "    else:\n",
-"        block.add_module('%s_upsam' % name, nn.Upsample(scale_factor=2))\n",
+"        block.add_module('%s_upsam' % name, nn.Upsample(scale_factor=2, mode='bilinear'))\n",
 "        # reduce kernel size by one for the upsampling (ie decoder part)\n",
 "        block.add_module('%s_tconv' % name, nn.Conv2d(in_c, out_c, kernel_size=(size-1), stride=1, padding=pad, bias=True))\n",
 "\n",
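For readability, here is a self-contained sketch of the block that the patched notebook cell builds; the function name `conv_block`, its signature, and the default sizes are assumptions for illustration, while the module names and layer parameters come from the diff:

```python
import torch.nn as nn

def conv_block(name, in_c, out_c, size=4, pad=1, transposed=False):
    # Assembles one encoder/decoder block, mirroring the notebook snippet.
    block = nn.Sequential()
    if not transposed:
        # Encoder: strided convolution halves the spatial resolution.
        block.add_module('%s_conv' % name,
            nn.Conv2d(in_c, out_c, kernel_size=size, stride=2, padding=pad, bias=True))
    else:
        # Decoder: bilinear upsampling (the change in this commit) followed by
        # a stride-1 convolution; kernel size reduced by one for upsampling.
        block.add_module('%s_upsam' % name,
            nn.Upsample(scale_factor=2, mode='bilinear'))
        block.add_module('%s_tconv' % name,
            nn.Conv2d(in_c, out_c, kernel_size=(size - 1), stride=1, padding=pad, bias=True))
    return block
```

The only functional change in the commit is `mode='bilinear'`: `nn.Upsample` defaults to nearest-neighbor interpolation, so the decoder now produces smoother upsampled features before the following convolution.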