Make basic BC/IC classes correctly trainable #2077
```diff
@@ -33,14 +33,16 @@ class BC(ABC):
         geom: A ``deepxde.geometry.Geometry`` instance.
         on_boundary: A function: (x, Geometry.on_boundary(x)) -> True/False.
         component: The output component satisfying this BC.
+        depends_on_trainable_variables: Whether this BC depends on any trainable variable or not.
     """

-    def __init__(self, geom, on_boundary, component):
+    def __init__(self, geom, on_boundary, component, depends_on_trainable_variables=False):
         self.geom = geom
         self.on_boundary = lambda x, on: np.array(
             [on_boundary(x[i], on[i]) for i in range(len(x))]
         )
         self.component = component
+        self.depends_on_trainable_variables = depends_on_trainable_variables

         self.boundary_normal = npfunc_range_autocache(
```
|
Contributor
Aren't you forgetting to add the flag here?
Contributor
Author
If DeepXDE supports non-constant geometry, then it is indeed safer, or even necessary, to set the caching flag for the boundary normal calculation too. Correct me if I am wrong, but as far as I know, DeepXDE does not support such geometry: even parametrized geometry is reduced to some normalized constant geometry, with the parameters transferred into the functions. If there is any concern that something may change in the future, it is better to apply the flag here as well. Just like I have changed some lines with
Contributor
DeepXDE only supports constant geometry, it seems, and to my knowledge there is no plan to make it non-constant in the near future; the current focus is fixing existing geometry bugs.
Contributor
Author
Then it may be OK to leave the boundary normal calculation as it is now. I can add a comment for each creation of
Or actually, because any changes for learnable geometry support might be made in other places of the library and this comment might be overlooked, I will test how no caching for
Contributor
Author
Actually, this is a more difficult question. If learnable geometry is introduced, it is likely to be done via specifying settings of geometry objects, so one cannot pass the
Contributor
Sounds good
```diff
             utils.return_tensor(self.geom.boundary_normal)
```

```diff
@@ -67,9 +69,9 @@ def error(self, X, inputs, outputs, beg, end, aux_var=None):
 class DirichletBC(BC):
     """Dirichlet boundary conditions: y(x) = func(x)."""

-    def __init__(self, geom, func, on_boundary, component=0):
-        super().__init__(geom, on_boundary, component)
-        self.func = npfunc_range_autocache(utils.return_tensor(func))
+    def __init__(self, geom, func, on_boundary, component=0, depends_on_trainable_variables=False):
+        super().__init__(geom, on_boundary, component, depends_on_trainable_variables)
+        self.func = npfunc_range_autocache(utils.return_tensor(func), self.depends_on_trainable_variables)

     def error(self, X, inputs, outputs, beg, end, aux_var=None):
         values = self.func(X, beg, end, aux_var)
```
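To make the new argument concrete: a hedged usage sketch for an inverse problem where the Dirichlet value itself is learned. It assumes this PR's `depends_on_trainable_variables` keyword plus the existing `dde.Variable`, `dde.geometry.Interval`, and `dde.icbc.DirichletBC` APIs; the setup is illustrative, not taken from the PR.

```python
import deepxde as dde

# Unknown boundary value, to be identified during training.
c = dde.Variable(1.0)

geom = dde.geometry.Interval(0, 1)

# `func` closes over the trainable variable `c`, so its value changes every
# optimizer step; the new flag keeps npfunc_range_autocache from caching it.
bc = dde.icbc.DirichletBC(
    geom,
    lambda x: c,  # the BC value is the trainable variable itself
    lambda x, on_boundary: on_boundary,
    depends_on_trainable_variables=True,
)
```

As in the library's other inverse problems, `c` would still be passed to `Model.compile(..., external_trainable_variables=c)` so that the optimizer actually updates it.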
```diff
@@ -84,9 +86,9 @@ def error(self, X, inputs, outputs, beg, end, aux_var=None):
 class NeumannBC(BC):
     """Neumann boundary conditions: dy/dn(x) = func(x)."""

-    def __init__(self, geom, func, on_boundary, component=0):
-        super().__init__(geom, on_boundary, component)
-        self.func = npfunc_range_autocache(utils.return_tensor(func))
+    def __init__(self, geom, func, on_boundary, component=0, depends_on_trainable_variables=False):
+        super().__init__(geom, on_boundary, component, depends_on_trainable_variables)
+        self.func = npfunc_range_autocache(utils.return_tensor(func), self.depends_on_trainable_variables)

     def error(self, X, inputs, outputs, beg, end, aux_var=None):
         values = self.func(X, beg, end, aux_var)
```
```diff
@@ -96,8 +98,11 @@ def error(self, X, inputs, outputs, beg, end, aux_var=None):
 class RobinBC(BC):
     """Robin boundary conditions: dy/dn(x) = func(x, y)."""

-    def __init__(self, geom, func, on_boundary, component=0):
-        super().__init__(geom, on_boundary, component)
+    def __init__(self, geom, func, on_boundary, component=0, depends_on_trainable_variables=False):
+        # `depends_on_trainable_variables` is here in order to be consistent
+        # with other BC/IC functions with `func`
+        # and in case in future caching is added here, too.
+        super().__init__(geom, on_boundary, component, depends_on_trainable_variables)
         self.func = func

     def error(self, X, inputs, outputs, beg, end, aux_var=None):
```
```diff
@@ -110,7 +115,7 @@ class PeriodicBC(BC):
     """Periodic boundary conditions on component_x."""

     def __init__(self, geom, component_x, on_boundary, derivative_order=0, component=0):
-        super().__init__(geom, on_boundary, component)
+        super().__init__(geom, on_boundary, component, False)
         self.component_x = component_x
         self.derivative_order = derivative_order
         if derivative_order > 1:
```
```diff
@@ -145,6 +150,7 @@ class OperatorBC(BC):
             `inputs` and `outputs` are the network input and output tensors,
             respectively; `X` are the NumPy array of the `inputs`.
         on_boundary: (x, Geometry.on_boundary(x)) -> True/False.
+        depends_on_trainable_variables: Whether this BC depends on any trainable variable or not.

     Warning:
         If you use `X` in `func`, then do not set ``num_test`` when you define
```
```diff
@@ -154,8 +160,11 @@ class OperatorBC(BC):
         which cannot be fixed in an easy way for all backends.
     """

-    def __init__(self, geom, func, on_boundary):
-        super().__init__(geom, on_boundary, 0)
+    def __init__(self, geom, func, on_boundary, depends_on_trainable_variables=False):
+        # `depends_on_trainable_variables` is here in order to be consistent
```
Contributor
I agree with not putting a warning for those that don't use autocache.
Contributor
Author
Should I change the default
```diff
+        # with other BC/IC functions with `func`
+        # and in case in future caching is added here, too.
+        super().__init__(geom, on_boundary, 0, depends_on_trainable_variables)
         self.func = func

     def error(self, X, inputs, outputs, beg, end, aux_var=None):
```
```diff
@@ -253,13 +262,20 @@ class PointSetOperatorBC:
         Note, If you want to use batch size here, you should also set callback
         'dde.callbacks.PDEPointResampler(bc_points=True)' in training.
         shuffle: Randomize the order on each pass through the data when batching.
+        depends_on_trainable_variables: Whether this BC depends on any trainable variable or not.
     """

-    def __init__(self, points, values, func, batch_size=None, shuffle=True):
+    def __init__(self, points, values, func, batch_size=None, shuffle=True, depends_on_trainable_variables=False):
         self.points = np.array(points, dtype=config.real(np))
         if not isinstance(values, numbers.Number) and values.shape[1] != 1:
             raise RuntimeError("PointSetOperatorBC should output 1D values")
         self.values = bkd.as_tensor(values, dtype=config.real(bkd.lib))

+        # `depends_on_trainable_variables` is here in order to be consistent
+        # with other BC/IC functions with `func`
+        # and in case in future caching is added here, too.
+        self.depends_on_trainable_variables = depends_on_trainable_variables
```
|
Contributor
The rationale behind this is just consistency, right? Because this is not used.
Contributor
Author
The consistency and similar "interfaces" in the API docs are beautiful, but I have added it here mainly to make the class bug-proof against future changes, in case some type of caching is applied here, too. Apart from that, it is not necessary here at all; it is not even a style preference.
Contributor
Sorry, when I said "style choice" I meant "design/robustness choice".
Contributor
Author
Now there is indeed no need to add these lines here (and in 1 or 2 other places, where the BC does not use caching for
Contributor
I think adding a comment for the areas where the flag is added but isn't used is sufficient.
```diff
         self.func = func
         self.batch_size = batch_size
```

```diff
@@ -311,11 +327,14 @@ class Interface2DBC:
         on_boundary1: First edge func. (x, Geometry.on_boundary(x)) -> True/False.
         on_boundary2: Second edge func. (x, Geometry.on_boundary(x)) -> True/False.
         direction (string): "normal" or "tangent".
+        depends_on_trainable_variables: Whether this BC depends on any trainable variable or not.
     """

-    def __init__(self, geom, func, on_boundary1, on_boundary2, direction="normal"):
+    def __init__(self, geom, func, on_boundary1, on_boundary2, direction="normal", depends_on_trainable_variables=False):
         self.geom = geom
-        self.func = npfunc_range_autocache(utils.return_tensor(func))
+        self.depends_on_trainable_variables = depends_on_trainable_variables
+
+        self.func = npfunc_range_autocache(utils.return_tensor(func), self.depends_on_trainable_variables)
         self.on_boundary1 = lambda x, on: np.array(
             [on_boundary1(x[i], on[i]) for i in range(len(x))]
         )
```

```diff
@@ -371,10 +390,11 @@ def error(self, X, inputs, outputs, beg, end, aux_var=None):
         return left_values + right_values - values


-def npfunc_range_autocache(func):
+def npfunc_range_autocache(func, disable_caching=False):
     """Call a NumPy function on a range of the input ndarray.

     If the backend is pytorch, the results are cached based on the id of X.
+    For BC/IC objects that depend on trainable variables caching must be disabled.
     """
     # For some BCs, we need to call self.func(X[beg:end]) in BC.error(). For backend
     # tensorflow.compat.v1/tensorflow, self.func() is only called once in graph mode,
```
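To see why the cached wrappers and trainable variables conflict, here is a minimal, self-contained sketch of the failure mode. It mimics the id-of-X keyed cache described in the docstring above in a simplified form (PyTorch assumed; `make_cached` is an illustrative stand-in, not the library's actual wrapper):

```python
import torch

def make_cached(func):
    # Simplified analogue of wrapper_cache: results keyed by (id(X), beg, end).
    cache = {}

    def wrapper(X, beg, end):
        key = (id(X), beg, end)
        if key not in cache:
            cache[key] = func(X[beg:end])
        return cache[key]

    return wrapper

c = torch.tensor(1.0, requires_grad=True)  # stands in for a trainable variable
f = make_cached(lambda x: c * torch.ones(len(x)))

X = torch.zeros(4)
y1 = f(X, 0, 4)      # computed with c == 1.0 and stored in the cache
with torch.no_grad():
    c += 1.0         # an optimizer step updates c in place
y2 = f(X, 0, 4)      # stale: same key, so the old tensor is returned
assert y1 is y2      # the BC never "sees" the updated variable
```

Because the BC points do not change between iterations, the cache key never changes, and the BC value stays frozen at its first evaluation; that is exactly what `disable_caching` switches off.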
```diff
@@ -421,13 +441,13 @@ def wrapper_cache_auxiliary(X, beg, end, aux_var):
             cache[key] = func(X[beg:end], aux_var[beg:end])
         return cache[key]

-    if backend_name in ["tensorflow.compat.v1", "tensorflow", "jax"]:
+    if backend_name in ["tensorflow.compat.v1", "tensorflow", "jax"] or disable_caching:
         if utils.get_num_args(func) == 1:
             return wrapper_nocache
-        if utils.get_num_args(func) == 2:
+        elif utils.get_num_args(func) == 2:
             return wrapper_nocache_auxiliary
-    if backend_name in ["pytorch", "paddle"]:
+    elif backend_name in ["pytorch", "paddle"]:
         if utils.get_num_args(func) == 1:
             return wrapper_cache
-        if utils.get_num_args(func) == 2:
+        elif utils.get_num_args(func) == 2:
```
Contributor
What's the point of turning this into elif? Isn't this a common early-return design pattern, where you don't need elif?
Contributor
Author
I thought that because of
Contributor
The outer elif seems necessary by this reasoning, but what about the inner elif?
Contributor
Author
For consistency and explicitness.
Contributor
Ok, I guess it's just a style choice.
```diff
             return wrapper_cache_auxiliary
```
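To make the branching question above explicit, here is a toy model of the dispatch; strings stand in for the real wrapper closures, and `dispatch` is an illustrative name, not the library's:

```python
def dispatch(backend_name, num_args, disable_caching=False):
    # With `disable_caching` in the first condition, the outer `elif` is
    # load-bearing: a pytorch call that enters the first block but matches
    # no inner case must not fall through into the caching branch.
    if backend_name in ["tensorflow.compat.v1", "tensorflow", "jax"] or disable_caching:
        if num_args == 1:
            return "wrapper_nocache"
        elif num_args == 2:
            return "wrapper_nocache_auxiliary"
    elif backend_name in ["pytorch", "paddle"]:
        if num_args == 1:
            return "wrapper_cache"
        elif num_args == 2:
            return "wrapper_cache_auxiliary"

assert dispatch("pytorch", 1, disable_caching=True) == "wrapper_nocache"
assert dispatch("pytorch", 1) == "wrapper_cache"
```

The inner `elif`s stay equivalent to plain `if`s because every matched case returns; keeping them is the consistency/explicitness choice discussed in the thread.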
Perhaps it might be a good idea to warn the user of the issue if `self.depends_on_trainable` is not explicitly given.
I can see 2 ways to check whether `depends_on_trainable` is set explicitly or not and print the warning:

1. In `__init__()` - this will bombard with the warnings even the users who do not solve inverse problems, but this does not create any interference between the BC/IC classes and, for example, the `Model` class, which can also check the BC properties;
2. In `Model.compile()` (i.e. examine the elements of `self.data.bcs`) if and only if `external_trainable_variables` is not empty - this will not work for the `tensorflow.compat.v1` backend (this list is ignored by `Model.compile()` and, subsequently, by the users), and this may require a stricter control over future additions of classes connectible to `Model.data` (if they are not children of the `Data` class).

I have made a commit with the 1st option. Following the DeepXDE style, the warning is done with a usual print, so that it will not distract the users unaffected by the bug too much. Besides, I will make an example of an inverse problem with a trainable IC and explicitly state the necessity of setting `depends_on_trainable` to True there. And maybe even add a new entry to the FAQ section.
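For reference, the usual way to tell "argument omitted" apart from "False given" is a sentinel default. The sketch below only illustrates that pattern under the 1st option; `_UNSET` and the message text are made up, not the committed code:

```python
_UNSET = object()  # sentinel: distinguishes "argument omitted" from "False given"

class BC:
    def __init__(self, geom, on_boundary, component,
                 depends_on_trainable_variables=_UNSET):
        if depends_on_trainable_variables is _UNSET:
            # Plain print, following the DeepXDE style mentioned above.
            print(
                "Warning: depends_on_trainable_variables was not set; assuming"
                " False. Set it to True if this BC/IC depends on trainable"
                " variables, e.g., in an inverse problem."
            )
            depends_on_trainable_variables = False
        self.depends_on_trainable_variables = depends_on_trainable_variables
        self.geom = geom
        self.on_boundary = on_boundary
        self.component = component
```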