Hello.
I have tried to solve an inverse problem for the same equation as in #2073: the 2.4 in the IC is replaced with `LAMBDA`, and some data points ("measurements") are added to the BC list. I haven't used the `anchors` argument, because even without anchors the whole `PointSetBC` data point set is added to the training and testing sets by DeepXDE:
```python
class PointSetBC:
    ...
    def collocation_points(self, X):
        if self.batch_size is not None:
            self.batch_indices = self.batch_sampler.get_next(self.batch_size)
            return self.points[self.batch_indices]
        return self.points
```
I found the following error while working with a modified DeepXDE (this commit, which fixes the points mismatch during test loss calculation), but I think the same applies to stock DeepXDE (main branch as of 30.03.2026).
I found that the IC loss value on the test sample was very high (for example, O(1) or O(0.1)) even when the training IC loss value was low enough and the metrics against the ground-truth solution were also low. I decided to compute the IC loss manually on the `data.train_x` and `data.test_x` points, and the result of the manual calculation was even lower than the training IC loss value reported in the log.
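For reference, the manual check looked roughly like this (a sketch, not my exact script; `model`, `ic_true`, and `num_ic` are placeholders, and the assumption that the IC points occupy the first `num_ic` rows of `train_x`/`test_x` depends on how the data object stacks its BC/IC points):

```python
# Sketch of the manual IC loss check, assuming `model` is a trained dde.Model
# and `ic_true(x)` returns the reference IC values. The slicing below assumes
# the IC points sit in the first `num_ic` rows of the point arrays; the real
# offsets depend on how the data object orders BC/IC/domain points.
import numpy as np

def manual_ic_loss(model, x_points, ic_true, num_ic):
    x_ic = x_points[:num_ic]
    y_pred = model.predict(x_ic)  # network prediction at the IC points
    return float(np.mean((y_pred - ic_true(x_ic)) ** 2))

train_ic_loss = manual_ic_loss(model, model.data.train_x, ic_true, num_ic)
test_ic_loss = manual_ic_loss(model, model.data.test_x, ic_true, num_ic)
print(train_ic_loss, test_ic_loss)
```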
In ICs and BCs the function `func` is cached using `npfunc_range_autocache()`, so when the trainable variables involved in the calculation of this `func` change, the returned value does not change, and moreover, the gradient with respect to the variable becomes incorrect. If I disable the cache logic in `wrapper_cache()` and `wrapper_cache_auxiliary()`, my inverse problem converges easily, whereas with caching enabled the variable drifted to >10.
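Here is a minimal sketch (not DeepXDE code) of the effect I mean, with the TensorFlow backend; the cache is keyed on the input points, similar to what `wrapper_cache()` does:

```python
# Minimal sketch of the stale-cache behavior (TensorFlow backend assumed).
# LAMBDA stands in for the trainable inverse-problem variable; the cache is
# keyed on the input array, like the cache inside npfunc_range_autocache().
import numpy as np
import tensorflow as tf

LAMBDA = tf.Variable(1.0)

def ic_func(x):
    # IC that depends on the trainable variable, as in my problem
    return LAMBDA * tf.ones((len(x), 1))

cache = {}

def cached_ic(X, beg, end):
    key = (id(X), beg, end)
    if key not in cache:
        cache[key] = ic_func(X[beg:end])
    return cache[key]

X = np.zeros((4, 1))
print(float(cached_ic(X, 0, 4)[0, 0]))  # 1.0, computed and cached
LAMBDA.assign(2.4)
print(float(cached_ic(X, 0, 4)[0, 0]))  # still 1.0: the cached tensor is stale

# The gradient is lost as well: the cached tensor was created outside the tape.
with tf.GradientTape() as tape:
    loss = tf.reduce_mean(cached_ic(X, 0, 4) ** 2)
print(tape.gradient(loss, LAMBDA))  # None
```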
Is this a known behavior? Is there a way to fix it? Maybe by passing a list of the trainable variables involved in the function calculation, or just disabling the caching? I can try to prepare a pull request if I get some guidance about DeepXDE's caching and coding style policies.
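For example, a variable-aware cache key could look something like this (just a sketch of the idea; the `trainable_variables` argument is hypothetical and nothing like it exists in DeepXDE today):

```python
# Sketch of one possible fix: include the current values of the trainable
# variables in the cache key, so a changed variable forces recomputation.
# The `trainable_variables` argument is hypothetical, not a DeepXDE API.
def npfunc_range_autocache_vars(func, trainable_variables=()):
    cache = {}

    def wrapper(X, beg, end):
        var_state = tuple(float(v.numpy()) for v in trainable_variables)
        key = (id(X), beg, end, var_state)
        if key not in cache:
            cache[key] = func(X[beg:end])
        return cache[key]

    return wrapper

# Caveat: if a variable happens to return to a previously seen value, the old
# tensor is reused and the gradient tape is broken again, so simply skipping
# the cache for variable-dependent funcs may be the safer fix.
```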
I also don't know how TensorFlow handles such situations: as far as I can see, DeepXDE doesn't force it to cache the function values.