misc updates

NT 2021-05-17 20:15:46 +08:00
parent 2374ecac67
commit 54740f1126
4 changed files with 826 additions and 821 deletions

@@ -15,7 +15,7 @@ advantages such as improved learning feedback and generalization, as we'll outli
In contrast to physics-informed loss functions, it also enables handling more complex
solution manifolds instead of single inverse problems. Thus instead of using deep learning
-to solve single inverse problems, we'll show how to train ANNs that solve
+to solve single inverse problems, we'll show how to train NNs that solve
larger classes of inverse problems very quickly.
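
As a rough illustration of the distinction drawn in the changed paragraph, here is a minimal sketch (not code from the book): instead of running an optimization for one inverse problem, a small network is trained across a sampled family of problems, so a new instance is handled by a single forward pass. The toy forward process, `make_batch`, and all sizes below are invented for illustration.

```python
import torch

# Hypothetical linear "forward process": observation = parameters @ A^T.
A = torch.randn(8, 2)

def forward_process(p):            # p: (batch, 2) -> observations: (batch, 8)
    return p @ A.T

def make_batch(n=64):              # sample a whole family of inverse problems
    p = torch.rand(n, 2) * 2 - 1   # hidden parameters to be recovered
    return forward_process(p), p

# Network mapping an observation directly to the unknown parameters.
net = torch.nn.Sequential(
    torch.nn.Linear(8, 32), torch.nn.ReLU(), torch.nn.Linear(32, 2))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(2000):           # train over the class of problems, not one instance
    obs, target = make_batch()
    loss = ((net(obs) - target) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# A new inverse problem is now solved by a single forward pass.
obs_new, target_new = make_batch(1)
print(net(obs_new), target_new)
```
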
```{figure} resources/diffphys-shortened.jpg

@@ -11,8 +11,13 @@ fte = re.compile(r"👋")
# TODO , replace phi symbol w text in phiflow
-# TODO , filter tensorflow warnings?
-# also torch "UserWarning:"
+# TODO , filter tensorflow warnings? "WARNING:tensorflow:" eg in physloss-code
+# also torch "UserWarning:" eg in supervised-airfoils
# from PINN burgers:
# u = np.asarray( [0.008612174447657694, 0.02584669669548606, 0.043136357266407785, 0.060491074685516746, 0.07793926183951633, 0.0954779141740818, 0.11311894389663882, 0.1308497114054023, 0.14867023658641343, 0.1665634396808965, 0.18452263429574314, 0.20253084411376132, 0.22057828799835133, 0.23865132431365316, 0.25673879161339097, 0.27483167307082423, 0.2929182325574904, 0.3109944766354339, 0.3290477753208284, 0.34707880794585116, 0.36507311960102307, 0.38303584302507954, 0.40094962955534186, 0.4188235294008765, 0.4366357052408043, 0.45439856841363885, 0.4720845505219581, 0.4897081943759776, 0.5072391070000235, 0.5247011051514834, 0.542067187709797, 0.5593576751669057, 0.5765465453632126, 0.5936507311857876, 0.6106452944663003, 0.6275435911624945, 0.6443221318186165, 0.6609900633731869, 0.67752574922899, 0.6939334022562877, 0.7101938106059631, 0.7263049537163667, 0.7422506131457406, 0.7580207366534812, 0.7736033721649875, 0.7889776974379873, 0.8041371279965555, 0.8190465276590387, 0.8337064887158392, 0.8480617965162781, 0.8621229412131242, 0.8758057344502199, 0.8891341984763013, 0.9019806505391214, 0.9143881632159129, 0.9261597966464793, 0.9373647624856912, 0.9476871303793314, 0.9572273019669029, 0.9654367940878237, 0.9724097482283165, 0.9767381835635638, 0.9669484658390122, 0.659083299684951, -0.659083180712816, -0.9669485121167052, -0.9767382069792288, -0.9724097635533602, -0.9654367970450167, -0.9572273263645859, -0.9476871280825523, -0.9373647681120841, -0.9261598056102645, -0.9143881718456056, -0.9019807055316369, -0.8891341634240081, -0.8758057205293912, -0.8621229450911845, -0.8480618138204272, -0.833706571569058, -0.8190466131476127, -0.8041372124868691, -0.7889777195422356, -0.7736033858767385, -0.758020740007683, -0.7422507481169578, -0.7263049162371344, -0.7101938950789042, -0.6939334061553678, -0.677525822052029, -0.6609901538934517, -0.6443222327338847, -0.6275436932970322, -0.6106454472814152, -0.5936507836778451, -0.5765466491708988, -0.5593578078967361, -0.5420672759411125, -0.5247011730988912, -0.5072391580614087, -0.4897082914472909, -0.47208460952428394, -0.4543985995006753, -0.4366355580500639, -0.41882350871539187, -0.40094955631843376, -0.38303594105786365, -0.36507302109186685, -0.3470786936847069, -0.3290476440540586, -0.31099441589505206, -0.2929180880304103, -0.27483158663081614, -0.2567388003912687, -0.2386513127155433, -0.22057831776499126, -0.20253089403524566, -0.18452269630486776, -0.1665634500729787, -0.14867027528284874, -0.13084990929476334, -0.1131191325854089, -0.09547794429803691, -0.07793928430794522, -0.06049114408297565, -0.0431364527809777, -0.025846763281087953, -0.00861212501518312] );
# ->
# u = np.asarray( [0.008612174447657694, 0.02584669669548606, ... ] )
path = "tmp2.txt" # simple
path = "tmp.txt" # utf8

@@ -10,7 +10,7 @@ As much as possible, the algorithms will come with hands-on code examples to qui
Beyond standard _supervised_ learning from data, we'll look at _physical loss_ constraints,
more tightly coupled learning algorithms with _differentiable simulations_, as well as extensions such
as reinforcement learning and uncertainty modeling.
-These methods have a huge potential to fundamentally change what we can achieve
+We live in exciting times: these methods have a huge potential to fundamentally change what we can achieve
with simulations.
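
To make the "physical loss" item in that list slightly more concrete, here is a hedged, minimal sketch (not the book's code): a supervised data term is combined with a residual term for a toy equation dy/dx = -y, evaluated with autograd. The architecture, sampling, and loss weighting are arbitrary choices made for this example.

```python
import torch

net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

x_data = torch.linspace(0., 1., 16).unsqueeze(1)
y_data = torch.exp(-x_data)                       # reference data for the supervised part

for step in range(1000):
    # supervised data loss
    data_loss = ((net(x_data) - y_data) ** 2).mean()

    # physical loss: penalize the residual dy/dx + y at sampled points via autograd
    x = torch.rand(64, 1, requires_grad=True)
    y_p = net(x)
    dy_dx, = torch.autograd.grad(y_p.sum(), x, create_graph=True)
    phys_loss = ((dy_dx + y_p) ** 2).mean()

    loss = data_loss + 0.1 * phys_loss            # the weighting here is an arbitrary choice
    opt.zero_grad()
    loss.backward()
    opt.step()
```
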

File diff suppressed because one or more lines are too long