🧠PINNS in MATLAB: ua-cam.com/video/RTR_RklvAUQ/v-deo.html
+1 for an Oxford PhD saying "timesing" instead of "multiplying"... respect! :D
Thanks for sharing this recording from the workshop. Thanks, Ben!
I love all of the questions!! 🤓 Ben is a great teacher!
At 14:30, it seems like the external force will not act on u_NN; the external force will just be a constant term in the physics loss function.
But it is multiplied by the u_NN term, so the loss can still be differentiated with respect to theta.
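To make that concrete, here is a minimal sketch of a physics loss with a forcing term, for a hypothetical forced oscillator d2u/dt2 + mu*du/dt + k*u = f(t) (the network, constants and force below are illustrative stand-ins, not from the video). The point is that f(t) itself carries no gradient, but the u_NN terms next to it do, so the loss remains differentiable with respect to theta:

import torch

# a tiny stand-in network mapping t -> u (illustrative only)
pinn = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1))

mu, k = 4.0, 400.0
f = lambda t: torch.cos(10*t)  # hypothetical external force

t = torch.linspace(0, 1, 50).view(-1, 1).requires_grad_(True)
u = pinn(t)
dudt = torch.autograd.grad(u, t, torch.ones_like(u), create_graph=True)[0]
d2udt2 = torch.autograd.grad(dudt, t, torch.ones_like(dudt), create_graph=True)[0]
# f(t) is constant w.r.t. theta, but u, dudt and d2udt2 all depend on theta,
# so loss.backward() still produces nonzero gradients for the network weights
loss = torch.mean((d2udt2 + mu*dudt + k*u - f(t))**2)
loss.backward()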
Thank you for such an informative lecture on PINN.
Thanks for watching! :)
Very nice and clear presentation.
Fantastic introduction, much appreciated!
Great video on this fascinating field. Thanks for sharing.
Sure :)
Nice lesson and clear presentation. Thank you!
Thanks for this!
Great work!
A great introduction and massive thanks for sharing the knowledge!
Thanks for the PINN lecture. Is the code available?
I think MIT developed something related to this; I'm not sure whether it is open source.
We are talking about a relatively simple oscillator problem. What about complex geometries, for which FEM methods are best suited today? I have been reading about physics-informed graph nets for complex geometries. Do you have any references for complex domains? Let's say I have a complex-shaped mechanical component subjected to pressure, for which I normally use FEM.
I have seen videos about PINNs that also discuss fractional PINNs, which might be interesting for you.
I wonder if this gives better results with PDEs for option pricing.
Nice tutorial. Thank you.
Could you please provide the example code for the PINN? The link in the comments is not working.
Thank you for sharing!! But how do you deal with the high-frequency situation? Looking forward to your reply.
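One option from the wider PINN literature (not covered in this video, so treat it as a pointer rather than the author's method) is to feed the network Fourier features of the input, which is commonly reported to help networks fit high-frequency solutions. A minimal sketch for a 1D input t, with hand-picked frequencies as an assumption:

import torch

class FourierFeatures(torch.nn.Module):
    # map t -> [sin(w*t), cos(w*t)] for a set of hand-picked frequencies w
    def __init__(self, freqs):
        super().__init__()
        self.register_buffer("w", torch.tensor(freqs).view(1, -1))
    def forward(self, t):
        phases = t * self.w                      # (N,1) * (1,F) -> (N,F)
        return torch.cat([torch.sin(phases), torch.cos(phases)], dim=-1)

freqs = [1.0, 10.0, 40.0, 80.0]                  # assumed frequency band
net = torch.nn.Sequential(
    FourierFeatures(freqs),                      # 1 input -> 2*len(freqs) features
    torch.nn.Linear(2*len(freqs), 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)
u = net(torch.linspace(0, 1, 5).view(-1, 1))     # shape (5, 1)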
Great work!
Where can I get the code link?
OMG, very cool video!!! The training performance is highly dependent on the "lambda" value. Do you have ideas about how to choose its value? Many thanks.
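Not the author, but one heuristic from the literature (gradient-norm balancing, in the spirit of Wang, Teng and Perdikaris, 2021) is to pick lambda so that each loss term pushes on the weights with comparable gradient magnitude. A rough sketch, with an illustrative toy network and loss terms standing in for the real ones:

import torch

def grad_norm(loss, params):
    # mean absolute gradient of a scalar loss over the given parameters
    grads = torch.autograd.grad(loss, params, retain_graph=True, allow_unused=True)
    return torch.cat([g.reshape(-1) for g in grads if g is not None]).abs().mean()

# toy stand-ins for the real network and loss terms (illustrative only)
pinn = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1))
t = torch.linspace(0, 1, 30).view(-1, 1)
loss_boundary = (pinn(torch.zeros(1, 1)) - 1.0).pow(2).mean()
loss_physics = pinn(t).pow(2).mean()

# scale the physics term so both terms produce gradients of similar size;
# in practice this is recomputed every so often rather than at every step
params = list(pinn.parameters())
lam = (grad_norm(loss_boundary, params) / (grad_norm(loss_physics, params) + 1e-12)).item()
loss = loss_boundary + lam * loss_physics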
Greattttt
A possibly useful method would be to have the neural network identify the invariants or a Lie group for a differential equation. Another approach: compute all scalar quantities and have the neural network find the right combination of scalar quantities to obtain a Lagrangian for a physical system.
Well done. The trend information is also very important, and it can be captured by a partial differential equation. I think the parameters of the partial differential equation could also be parameters of the PINN.
Similar question to some others. When we are solving even standard physics problems (electrostatics, heat transfer, etc.), setting aside the time domain, so only elliptic equations on complex CAD geometry, I wonder what applications PINNs can be used for, as opposed to FEM. Maybe shape-optimisation-type problems? Or inverse problems?
Great 👍
Sure :)
Where can we download the Python script file?
10/10
Hello, are x_i/x_j vectors of the x,y coordinates, denoted x_1 and x_2? From the first slide I understood that x_i and x_j together form an individual input data point to the network, with x_i being e.g. the initial conditions or any measured quantity, and x_j being a value that can be obtained by direct differentiation, but the second slide makes me doubt this assumption.
I've been a beginner in PyTorch and OpenFOAM for the last few years, but today I learned that my "dream" is called a "PINN" 🙂
Very nice lesson! I'm stuck on Task 3 though; I can't get the network to converge for w0=80. Here's the code, if anyone can spot what I'm missing:
import torch
import matplotlib.pyplot as plt
# FCN and exact_solution are assumed from the workshop notebook

torch.manual_seed(123)

# define a neural network to train
pinn = FCN(1,1,32,3)

# define additional a,b learnable parameters in the ansatz
# TODO: write code here
a = torch.nn.Parameter(torch.zeros(1, requires_grad=True))
b = torch.nn.Parameter(torch.zeros(1, requires_grad=True))

# define boundary points, for the boundary loss
t_boundary = torch.tensor(0.).view(-1,1).requires_grad_(True)

# define training points over the entire domain, for the physics loss
t_physics = torch.linspace(0,1,60).view(-1,1).requires_grad_(True)

# train the PINN
d, w0 = 2, 80  # note w0 is higher!
mu, k = 2*d, w0**2
t_test = torch.linspace(0,1,300).view(-1,1)
u_exact = exact_solution(d, w0, t_test)

# add a,b to the optimiser
# TODO: write code here
optimiser = torch.optim.Adam(list(pinn.parameters())+[a]+[b], lr=1e-3)

for i in range(15001):
    optimiser.zero_grad()

    # compute each term of the PINN loss function above
    # using the following hyperparameters:
    lambda1, lambda2 = 1e-1, 1e-4

    # compute boundary loss
    # TODO: write code here (change to ansatz formulation)
    u = pinn(t_boundary)*torch.sin(a*t_boundary+b)
    loss1 = (torch.squeeze(u) - 1)**2
    dudt = torch.autograd.grad(u, t_boundary, torch.ones_like(u), create_graph=True)[0]
    loss2 = (torch.squeeze(dudt) - 0)**2

    # compute physics loss
    # TODO: write code here (change to ansatz formulation)
    u = pinn(t_physics)*torch.sin(a*t_physics+b)
    dudt = torch.autograd.grad(u, t_physics, torch.ones_like(u), create_graph=True)[0]
    d2udt2 = torch.autograd.grad(dudt, t_physics, torch.ones_like(dudt), create_graph=True)[0]
    loss3 = torch.mean((d2udt2 + mu*dudt + k*u)**2)

    # backpropagate joint loss, take optimiser step
    # TODO: write code here
    loss = loss1 + lambda1*loss2 + lambda2*loss3
    loss.backward()
    optimiser.step()

    # plot the result as training progresses
    if i % 5000 == 0:
        #print(u.abs().mean().item(), dudt.abs().mean().item(), d2udt2.abs().mean().item())
        u = (pinn(t_test)*torch.sin(a*t_test+b)).detach()
        plt.figure(figsize=(6,2.5))
        plt.scatter(t_physics.detach()[:,0], torch.zeros_like(t_physics)[:,0],
                    s=20, lw=0, color="tab:green", alpha=0.6)
        plt.scatter(t_boundary.detach()[:,0], torch.zeros_like(t_boundary)[:,0],
                    s=20, lw=0, color="tab:red", alpha=0.6)
        plt.plot(t_test[:,0], u_exact[:,0], label="Exact solution", color="tab:grey", alpha=0.6)
        plt.plot(t_test[:,0], u[:,0], label="PINN solution", color="tab:green")
        plt.title(f"Training step {i}")
        plt.legend()
        plt.show()
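Not Ben, but two guesses (assumptions, not checked against the workshop solutions). First, with a and b both initialised to zero, sin(a*t + b) is identically zero, so the ansatz output and every gradient reaching the network weights start at exactly zero and training can stall; seeding a near the known frequency lets the sine factor do its job from the start. Second, 60 physics points over [0,1] is quite sparse for w0=80 (roughly 13 oscillations, under 5 points per period), so denser collocation may also help. For example:

# possible fixes (assumptions, not the official solution):
a = torch.nn.Parameter(80.0*torch.ones(1))   # seed the frequency near the known w0
b = torch.nn.Parameter(0.5*torch.ones(1))    # start the phase away from zero
t_physics = torch.linspace(0,1,300).view(-1,1).requires_grad_(True)  # denser collocation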
Hi Ben, my question is: I'm having an issue with audio and data strings maliciously bombarding and engaging my synapse. Do you think fitting PINNs, or overfitting PINNs, to stabilise the nuclei would be the answer? I've tried neural clips and they come out; I've tried Apache CNN and Hadoop to stabilise the nucleus. It's been 4 years now and it's very aggravating, infuriating, and frustrating. Any help would be greatly appreciated.