update physloss chapter
This commit is contained in:
parent 0981a281fe
commit 4f1763f696

2  _toc.yml
@@ -10,7 +10,7 @@ parts:
   - file: overview-burgers-forw.ipynb
   - file: overview-ns-forw.ipynb
   - file: overview-optconv.md
-- caption: Supervised Training
+- caption: Neural Surrogates and Operators
   chapters:
   - file: supervised.md
   - file: supervised-arch.md
@@ -52,8 +52,6 @@
 "source": [
 "!pip install --upgrade --quiet phiflow==3.2\n",
 "from phi.flow import *\n",
 "\n",
 "from phi import __version__\n",
 "print(\"Using phiflow version: {}\".format(phi.__version__))"
 ]
 },
File diff suppressed because one or more lines are too long
@@ -3,12 +3,13 @@ Discussion of Physical Losses

 The good news so far is - we have a DL method that can include
 physical laws in the form of soft constraints by minimizing residuals.
-However, as the very simple previous example illustrates, this is just a conceptual
-starting point.
+However, as the very simple previous example illustrates, this causes
+new difficulties, and is just a conceptual starting point.

 On the positive side, we can leverage DL frameworks with backpropagation to compute
-the derivatives of the model. At the same time, this puts us at the mercy of the learned
-representation regarding the reliability of these derivatives. Also, each derivative
+the derivatives of the model. At the same time, this makes the loss landscape more
+complicated, and ties the reliability of the derivatives to the learned
+representation. Also, each derivative
 requires backpropagation through the full network. This can be very expensive, especially
 for higher-order derivatives.
@@ -16,16 +17,12 @@ And while the setup is relatively simple, it is generally difficult to control.
 has flexibility to refine the solution by itself, but at the same time, tricks are necessary
 when it doesn't focus on the right regions of the solution.

-## Is it "Machine Learning"?
+## Generalization?

-One question that might also come to mind at this point is: _can we really call it machine learning_?
-Of course, such denomination questions are superficial - if an algorithm is useful, it doesn't matter
-what name it has. However, here the question helps to highlight some important properties
-that are typically associated with algorithms from fields like machine learning or optimization.
-
-One main reason _not_ to call the optimization of the previous notebook machine learning (ML), is that the
+One aspect to note with the previous PINN optimization is that the
 positions where we test and constrain the solution are the final positions we are interested in.
-As such, there is no real distinction between training, validation and test sets.
+As such, from a classic ML standpoint, there is no real distinction between training, validation and test sets.
 Computing the solution for a known and given set of samples is much more akin to classical optimization,
 where inverse problems like the previous Burgers example stem from.
@@ -33,7 +30,8 @@ For machine learning, we typically work under the assumption that the final perf
 model will be evaluated on a different, potentially unknown set of inputs. The _test data_
 should usually capture such _out of distribution_ (OOD) behavior, so that we can make estimates
 about how well our model will generalize to "real-world" cases that we will encounter when
-we deploy it in an application.
+we deploy it in an application. The v1 version, using a prescribed discretization, actually
+had this property, and could generalize to new inputs.

 In contrast, for the PINN training as described here, we reconstruct a single solution in a known
 and given space-time region. As such, any samples from this domain follow the same distribution
@@ -47,26 +45,27 @@ have to start training the NN from scratch.
 ## Summary

 Thus, the physical soft constraints allow us to encode solutions to
-PDEs with the tools of NNs.
-An inherent drawback of this variant 2 is that it yields single solutions,
-and that it does not combine with traditional numerical techniques well.
+PDEs with the tools of NNs. As they're more widely used, we'll focus on PINNs (v2) here:
+An inherent drawback is that they yield single solutions,
+and that they do not combine with traditional numerical techniques well.
+In comparison to the Neural surrogates/operators from {doc}`supervised`, we've made a step backwards in some way.

 E.g., the learned representation is not suitable to be refined with
 a classical iterative solver such as the conjugate gradient method.

 This means many
 powerful techniques that were developed in the past decades cannot be used in this context.
 Bringing these numerical methods back into the picture will be one of the central
 goals of the next sections.

 ✅ Pro:
-- Uses physical model.
-- Derivatives can be conveniently computed via backpropagation.
+- Uses physical model
+- Derivatives can be conveniently computed via backpropagation

 ❌ Con:
-- Quite slow ...
-- Physical constraints are enforced only as soft constraints.
-- Largely incompatible with _classical_ numerical methods.
-- Accuracy of derivatives relies on learned representation.
+- Problematic convergence
+- Physical constraints are enforced only as soft constraints
+- Largely incompatible with _classical_ numerical methods
+- Usefulness of derivatives relies on learned representation

 To address these issues,
 we'll next look at how we can leverage existing numerical methods to improve the DL process
@@ -8,7 +8,7 @@
 "source": [
 "# Learning the Helmholtz-Hodge Decomposition\n",
 "\n",
-"In the following notebook we'll following the aforementioned paper by Tompson et al. {cite}`tompson2017` and train a neural network that can essentially perform a Helmholtz-Hodge decomposition. This is a very classic and time consuming part of many numerical solvers, and enables splitting an arbitrary vector field into a solenoidal (divergence-free) and irrotational part (the pressure gradient). Because this is traditionally very time consuming, it's an interesting goal for a learned approach. As a stepping stone towards integrating full solvers, we'll formulate a physics-based loss via a discretized PDE-constraint.\n"
+"In the following notebook we'll follow the aforementioned paper by Tompson et al. {cite}`tompson2017` and train a neural network to perform a Helmholtz-Hodge decomposition. This is a very classic and time consuming part of many numerical solvers, and enables splitting an arbitrary vector field into a solenoidal (divergence-free) and irrotational part (the pressure gradient). Because this is traditionally very time consuming, it's an interesting goal for a learned approach. As a stepping stone towards integrating full solvers, we'll formulate a physics-based loss via a discretized PDE-constraint (the approach denoted as physical loss training _v1_ in the previous section).\n"
 ]
 },
 {
@@ -64,7 +64,7 @@
 },
 "outputs": [],
 "source": [
-"!pip install --quiet phiflow==3.1 tqdm\n",
+"!pip install --quiet phiflow==3.0 tqdm\n",
 "from tqdm import tqdm\n",
 "from phiml import nn\n",
 "\n",
@@ -157,7 +157,7 @@
 "id": "p2zXNACNCXwo"
 },
 "source": [
-"Here you can see the flowlines together with velocity magnitudes and the divergence per cell. The latter is exactly what we're aiming for removing. This visualization shows that the divergence is smaller than the actual magnitude of the velocities, with an average of around 0.4, as indicated by the L2 output right above the images.\n",
+"Here you can see the flowlines together with velocity magnitudes and the divergence per cell. The latter is exactly what we want to remove. This visualization shows that the divergence is smaller than the actual magnitude of the velocities, with an average of around 0.4, as indicated by the L2 output right above the images.\n",
 "\n",
 "Next, we will define a Navier-Stokes simulation step. Given our reduced setup without external forces or obstacles, it's very simple: a call to an advection function in PhiFlow, followed by `fluid.make_incompressible()` to invoke the Poisson solver. We'll directly annotate this function and the following ones for JIT compilation with `@jit_compile`. This is important for good performance on GPUs, but it makes debugging much harder. So when changing the code, it's highly recommended to remove them. The code will work just as well without, just slower. Once everything's running as it should, re-activate JIT compilation for the _real_ training runs. \n"
 ]
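For reference, the simulation step described in this cell roughly corresponds to the following sketch. It assumes a phiflow velocity grid `v`; the function name and time step value are illustrative, and the actual notebook code may differ in details:

```python
from phi.flow import *  # provides advect, fluid, jit_compile, grid types

@jit_compile  # remove while debugging, re-enable for the real training runs
def simulation_step(v, dt=1.0):
    v = advect.semi_lagrangian(v, v, dt)   # advect the velocity by itself (no forces/obstacles)
    v, p = fluid.make_incompressible(v)    # Poisson solve: returns divergence-free v and pressure
    return v, p
```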
@@ -209,7 +209,7 @@
 "source": [
 "## Neural Network Training\n",
 "\n",
-"As NN architecture for an elliptic problem, an architecture with global communication is suitable. Below, we initialize a U-Net, but feel free to try the ResNet variant, which works less well due to its lack of a wide receptive field. (Given that property, it's doing surprisingly well.) As we're dealing with a periodic domain for simplicity, the NN likewise needs to be configured for periodic processing via `periodic=True`. It's input is a single channel (the divergence), and the output a very different content, the pressure. However, for the network this is likewise simply a single, scalar channel. The `filters=24` determine the total number of parameters. Feel free to increase this to improve accuracy (and reduce computational performance of the NN inference). This is the classic accuracy vs performance trade-off that NNs share with all classic numerical methods.\n"
+"We're facing an elliptic PDE problem here, and hence a NN architecture with global communication is important, cf. {doc}`supervised-arch`. Below, we initialize a U-Net, but feel free to try the ResNet variant, which works less well due to its local receptive field. (Given that property, it's doing surprisingly well.) As we're dealing with a periodic domain for simplicity, the NN likewise needs to be configured for periodic processing via `periodic=True`. Its input is a single channel (the divergence), and the output is a very different quantity, the pressure. However, for the network this is likewise simply a single, scalar channel. The `filters=24` setting determines the total number of parameters. Feel free to increase it to improve accuracy (at the cost of slower NN inference). This is the classic accuracy vs performance trade-off that NNs share with all classic numerical methods.\n"
 ]
 },
 {
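For orientation, the network setup described here could look roughly as follows. Apart from `filters=24` and `periodic=True`, which the text mentions, the keyword names are assumptions about phiml's `nn.u_net` helper rather than a verbatim copy of the notebook:

```python
from phiml import nn

# One scalar input channel (the divergence), one scalar output channel (the pressure).
# `filters` controls the total parameter count; `periodic` matches the periodic domain.
net = nn.u_net(in_channels=1, out_channels=1, filters=24, periodic=True)
optimizer = nn.adam(net, learning_rate=1e-3)
```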
@@ -2,9 +2,9 @@ Physical Loss Terms
 =======================

 The supervised setting of the previous sections can quickly
-yield approximate solutions with a fairly simple training process. However, what's
-quite sad to see here is that we only use physical models and numerical methods
-as an "external" tool to produce a big pile of data 😢.
+yield approximate solutions with a simple and stable training process. However, it's
+unfortunate that we only use physical models and numerical methods
+as an "external" tool to produce lots of data 😢.

 We as humans have a lot of knowledge about how to describe physical processes
 mathematically. As the following chapters will show, we can improve the
@@ -23,20 +23,21 @@ $$
 \mathbf{u}_t = \mathcal F ( \mathbf{u}_{x}, \mathbf{u}_{xx}, ... \mathbf{u}_{xx...x} ) ,
 $$

-where the $_{\mathbf{x}}$ subscripts denote spatial derivatives with respect to one of the spatial dimensions
-of higher and higher order (this can of course also include mixed derivatives with respect to different axes). $\mathbf{u}_t$ denotes the changes over time.
-
-In this context, we can approximate the unknown $\mathbf{u}$ itself with a neural network. If the approximation, which we call $\tilde{\mathbf{u}}$, is accurate, the PDE should be satisfied naturally. In other words, the residual R should be equal to zero:
+where the $_{\mathbf{x}}$ subscripts denote spatial derivatives with respect to the spatial dimensions
+(this could of course also include mixed derivatives with respect to different axes). $\mathbf{u}_t$ denotes the changes over time.
+Given a solution $\mathbf{u}$, we can compute the residual R, which naturally should be equal to zero for a correct solution:

 $$
 R = \mathbf{u}_t - \mathcal F ( \mathbf{u}_{x}, \mathbf{u}_{xx}, ... \mathbf{u}_{xx...x} ) = 0 .
 $$

+In this context, we can approximate the unknown $\mathbf{u}$ itself with a neural network.
+If the approximation is accurate, the PDE residual should likewise be zero.
+
 This nicely integrates with the objective for training a neural network: we can train for
 minimizing this residual in combination with direct loss terms.
-Similar to before, we can use pre-computed solutions
-$[x_0,y_0], ...[x_n,y_n]$ for $\mathbf{u}$ with $\mathbf{u}(\mathbf{x})=y$ as constraints
-in addition to the residual terms.
+In addition to relying on the residual, we can use pre-computed solutions
+$[x_0,y_0], ...[x_n,y_n]$ for $\mathbf{u}$ with $\mathbf{u}(\mathbf{x})=y$ as targets.
 This is typically important, as most practical PDEs do not have unique solutions
 unless initial and boundary conditions are specified. Hence, if we only consider $R$ we might
 get solutions with random offset or other undesirable components. The supervised sample points
@@ -51,19 +52,22 @@ where $\alpha_{0,1}$ denote hyperparameters that scale the contribution of the s
 the residual term, respectively. We could of course add additional residual terms with suitable scaling factors here.

 It is instructive to note what the two different terms in equation {eq}`physloss-training` mean: The first term is a conventional, supervised L2-loss. If we were to optimize only this loss, our network would learn to approximate the training samples well, but might average multiple modes in the solutions, and do poorly in regions in between the sample points.
-If we, instead, were to optimize only the second term (the physical residual), our neural network might be able to locally satisfy the PDE, but still could produce solutions that are still far away from our training data. This can happen due to "null spaces" in the solutions, i.e., different solutions that all satisfy the residuals.
+If we, instead, were to optimize only the second term (the physical residual), our neural network might be able to locally satisfy the PDE, but
+could have great difficulty finding a solution that fits globally.
+This can happen due to "null spaces" in the solutions, i.e., different solutions that all satisfy the residuals. Then local points can converge
+to different solutions, in combination yielding a very suboptimal one.
 Therefore, we optimize both objectives simultaneously such that, in the best case, the network learns to approximate the specific solutions of the training data while still capturing knowledge about the underlying PDE.

 Note that, similar to the data samples used for supervised training, we have no guarantees that the
 residual terms $R$ will actually reach zero during training. The non-linear optimization of the training process
 will minimize the supervised and residual terms as much as possible, but there is no guarantee. Large, non-zero residual
 contributions can remain. We'll look at this in more detail in the upcoming code example, for now it's important
-to remember that physical constraints in this way only represent _soft constraints_, without guarantees
+to keep in mind that the physical constraints formulated this way only represent _soft constraints_, without guarantees
 of minimizing these constraints.

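To make the interplay of the two terms concrete, here is a minimal sketch of such a combined objective (illustrative names, PyTorch tensors assumed; not the book's exact implementation):

```python
import torch

def combined_loss(u_pred, y_target, residual, alpha0=1.0, alpha1=1.0):
    """Weighted sum of the supervised term and the PDE residual term."""
    loss_data = torch.mean((u_pred - y_target) ** 2)  # supervised L2 loss at sample points
    loss_phys = torch.mean(residual ** 2)             # physical residual R, pushed towards zero
    return alpha0 * loss_data + alpha1 * loss_phys
```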

 The previous overview did not really make clear how an NN produces $\mathbf{u}$.
 We can distinguish two different approaches here:
-via a chosen explicit representation of the target function (v1 in the following), or via using fully-connected neural networks to represent the solution (v2).
+via a chosen explicit representation of the target function (v1 in the following), or with a _Neural field_ based on fully-connected neural networks to represent the solution (v2).
 E.g., for v1 we could set up a _spatial_ grid (or graph, or a set of sample points), while in the second case no explicit representation exists, and the NN instead receives the _spatial coordinate_ to produce the solution at a query position.
 We'll outline these two variants in more detail the following.
@@ -96,29 +100,28 @@ To learn this decomposition, we can approximate $p$ with a CNN on our computati
 $\nabla \cdot \big( \mathbf{u}(0) - \nabla f(\mathbf{u}(0);\theta) \big)$.
 To implement this residual, all we need to do is provide the divergence operator $(\nabla \cdot)$ of $\mathbf u$ on our computational mesh. This is typically easy to do via
 a convolutional layer in the DL framework that contains the finite difference weights for the divergence.
-Nicely enough, in this case we don't even need additional supervised samples, and can typically purely train with this residual formulation. Also, in contrast to variant 2 below, we can directly handle fairly large spaces of solutions here (we're not restricted to learning single solutions)
+Nicely enough, in this case we don't even need additional supervised samples, and can typically purely train with this residual formulation. Also, in contrast to variant 2 below, we can directly handle fairly large spaces of solutions here (we're not restricted to learning single solutions).
 An example implementation can be found in this [code repository](https://github.com/tum-pbs/CG-Solver-in-the-Loop).
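As an illustration of such a finite-difference divergence via a convolution, here is a small sketch (PyTorch, central differences, zero padding at the boundary for simplicity; all names are illustrative and not taken from the book's code):

```python
import torch
import torch.nn.functional as F

def divergence(vel, dx=1.0):
    """Central-difference divergence of a 2D velocity field vel with shape (N, 2, H, W)."""
    # Fixed finite-difference stencils for d/dx and d/dy (not trained).
    kx = torch.tensor([[[[-0.5, 0.0, 0.5]]]])               # shape (1, 1, 1, 3)
    ky = kx.transpose(2, 3)                                  # shape (1, 1, 3, 1)
    du_dx = F.conv2d(vel[:, 0:1], kx, padding=(0, 1)) / dx   # x-derivative of u component
    dv_dy = F.conv2d(vel[:, 1:2], ky, padding=(1, 0)) / dx   # y-derivative of v component
    return du_dx + dv_dy                                      # shape (N, 1, H, W)
```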

-Overall, this variant 1 has a lot in common with _differentiable physics_ training (it's basically a subset). As we'll discuss differentiable physics in a lot more detail
-in {doc}`diffphys` and after, we'll focus on direct NN representations (variant 2) from now on.
+Overall, this variant 1 has a lot in common with _differentiable physics_ training (it's basically a subset) that will be covered in a lot more detail in {doc}`diffphys`. Hence, we'll focus a bit more on direct NN representations (variant 2) in this chapter.

 ---

 ## Variant 2: Derivatives from a neural network representation

 The second variant of employing physical residuals as soft constraints
-instead uses fully connected NNs to represent $\mathbf{u}$. This _physics-informed_ approach was popularized by Raissi et al. {cite}`raissi2019pinn`, and has some interesting pros and cons that we'll outline in the following. We will target this physics-informed version (variant 2) in the following code examples and discussions.
+instead uses fully connected NNs to represent $\mathbf{u}$. This _physics-informed_ (PINN) approach was popularized by Raissi et al. {cite}`raissi2019pinn`, and has some interesting pros and cons that we'll outline in the following. By now, this approach can be seen as part of the _Neural field_ representations that e.g. also include NeRFs and learned signed distance functions.

-The central idea here is that the aforementioned general function $f$ that we're after in our learning problems
+The central idea with Neural fields is that the aforementioned general function $f$ that we're after
 can also be used to obtain a representation of a physical field, e.g., a field $\mathbf{u}$ that satisfies $R=0$. This means $\mathbf{u}(\mathbf{x})$ will
 be turned into $\mathbf{u}(\mathbf{x}, \theta)$ where we choose the NN parameters $\theta$ such that a desired $\mathbf{u}$ is
-represented as precisely as possible.
+represented as precisely as possible, and $\mathbf{u}$ simply returns the right value at spatial location $\mathbf{x}$.

-One nice side effect of this viewpoint is that NN representations inherently support the calculation of derivatives.
+One nice side effect of this viewpoint is that NN representations inherently support the calculation of derivatives w.r.t. inputs.
 The derivative $\partial f / \partial \theta$ was a key building block for learning via gradient descent, as explained
-in {doc}`overview`. Now, we can use the same tools to compute spatial derivatives such as $\partial \mathbf{u} / \partial x$,
+in {doc}`overview`. Now, we can use the same tools to compute spatial derivatives such as $\partial \mathbf{u} / \partial x = \partial f / \partial x$,
 Note that above for $R$ we've written this derivative in the shortened notation as $\mathbf{u}_{x}$.
-For functions over time this of course also works for $\partial \mathbf{u} / \partial t$, i.e. $\mathbf{u}_{t}$ in the notation above.
+For functions over time this of course also works by adding $t$ as input to compute $\partial \mathbf{u} / \partial t$, i.e. $\mathbf{u}_{t}$ in the notation above.

 ```{figure} resources/physloss-overview-v2.jpg
 ---
@@ -138,18 +141,22 @@ To pick a simple example, Burgers equation in 1D,
 $\frac{\partial u}{\partial{t}} + u \nabla u = \nu \nabla \cdot \nabla u $ , we can directly
 formulate a loss term $R = \frac{\partial u}{\partial t} + u \frac{\partial u}{\partial x} - \nu \frac{\partial^2 u}{\partial x^2}$ that should be minimized as much as possible at training time. For each of the terms, e.g. $\frac{\partial u}{\partial x}$,
 we can simply query the DL framework that realizes $u$ to obtain the corresponding derivative.
-For higher order derivatives, such as $\frac{\partial^2 u}{\partial x^2}$, we can simply query the derivative function of the framework multiple times. In the following section, we'll give a specific example of how that works in tensorflow.
+For higher order derivatives, such as $\frac{\partial^2 u}{\partial x^2}$, we can query the derivative function of the framework multiple times.
+In the following section, we'll give a specific example of how that works in tensorflow.

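As a preview of how such repeated derivative queries look, here is a minimal sketch with nested `tf.GradientTape`s for the Burgers residual (illustrative names; the following notebook may structure this differently):

```python
import tensorflow as tf

def burgers_residual(u_net, x, t, nu=0.01):
    """PDE residual R = u_t + u u_x - nu u_xx for a network u_net evaluated at (x, t)."""
    with tf.GradientTape(persistent=True) as tape2:
        tape2.watch([x, t])
        with tf.GradientTape(persistent=True) as tape1:
            tape1.watch([x, t])
            u = u_net(tf.concat([x, t], axis=-1))
        u_x = tape1.gradient(u, x)       # first spatial derivative
        u_t = tape1.gradient(u, t)       # time derivative
    u_xx = tape2.gradient(u_x, x)        # second derivative: differentiate u_x again
    return u_t + u * u_x - nu * u_xx
```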

 ## Summary so far

-The v2 approach above gives us a method to include physical equations into DL learning as a soft constraint: the residual loss.
-Typically, this setup is suitable for _inverse problems_, where we have certain measurements or observations
-for which we want to find a PDE solution. Because of the ill-posedness of the optimization and learning problem,
+The approach above gives us a method to include physical equations into DL learning as a soft constraint: the residual loss.
+While v1 relies on an inductive bias in the form of a discretization, v2 relies on derivatives computed via autodiff.
+Typically, v2 is especially suitable for _inverse problems_, where we have certain measurements or observations
+for which we want to find a PDE solution.
+Because of the ill-posedness of the optimization and learning problem,
 and the high cost of the reconstruction (to be
 demonstrated in the following), the solution manifold shouldn't be overly complex for these PINN approaches.
-E.g., it is typically very involved to capture a wide range of solutions, such as with the previous supervised airfoil example.
+E.g., it is typically very difficult to capture time dependence or a wide range of solutions,
+such as with the previous supervised airfoil example.

 Next, we'll demonstrate these concepts with code: first, we'll show how learning the Helmholtz decomposition works out in
-practice with a v1-approach. Afterwards, we'll illustrate the PINN-approaches with a practical example.
+practice with a **v1**-approach. Afterwards, we'll illustrate the **v2** PINN-approaches with a practical example.
@@ -1,7 +1,15 @@
 Supervised Training
 =======================

+_Supervised training_ is the central starting point for all projects in the context of deep learning.
+We will first target a "purely" data-driven approach, in line with classic machine learning. We'll refer
+to this as a _supervised_ approach in the following, to indicate that the network is fully supervised
+by data, and to distinguish it from using physics-based losses.
+One of the central advantages of the supervised approach is that
+we obtain a _surrogate model_ (also called "emulator", or "Neural operator"),
+i.e., a new function that mimics the behavior of the original $\mathcal{P}$.
+
-The purely data-driven, _supervised training_ is the central starting point for
-all projects in the context of deep learning.
 While it can yield suboptimal results compared to approaches that more tightly
 couple with physics, it can be the only choice in certain application scenarios
 where no good model equations exist.
@@ -56,13 +64,10 @@ A visual overview of supervised training. It's simple, and a good starting point
 in comparison to the more complex variants we'll encounter later on.
 ```

-## Surrogate models
+## Looking ahead

-One of the central advantages of the supervised approach above is that
-we obtain a _surrogate model_ (or "emulator", or "Neural operator"),
-i.e., a new function that mimics the behavior of the original $\mathcal{P}$.
-The numerical approximations of PDE models for real world phenomena are often very expensive to compute. A trained
-NN on the other hand incurs a constant cost per evaluation, and is typically trivial
+The numerical approximations of PDE models for real world phenomena are often very expensive to compute.
+A trained NN on the other hand incurs a constant cost per evaluation, and is typically trivial
 to evaluate on specialized hardware such as GPUs or NN compute units.

 Despite this, it's important to be careful:
@@ -73,5 +78,5 @@ All these values at least need to be momentarily stored in memory, and processed
 Nonetheless, replacing complex and expensive solvers with fast, learned approximations
 is a very attractive and interesting direction.

-An important decision to make at this stage is what neural network architecture to choose.
+An important decision to make at this stage is which neural network architecture to choose.