2 changes: 2 additions & 0 deletions README.md
@@ -60,6 +60,7 @@ DeepXDE has implemented many algorithms as shown above and supports many feature
- 4 **function spaces**: power series, Chebyshev polynomial, Gaussian random field (1D/2D).
- **data-parallel training** on multiple GPUs.
- different **optimizers**: Adam, L-BFGS, etc.
- **hyperparameter optimization** examples using [HyperNOs](https://pypi.org/project/hypernos/) and [Ray Tune](https://docs.ray.io/en/latest/tune/index.html) (optional extras; install `hypernos` and `ray[tune]` separately).
- conveniently **save** the model during training, and **load** a trained model.
- **callbacks** to monitor the internal states and statistics of the model during training: early stopping, etc.
- **uncertainty quantification** using dropout.
@@ -105,6 +106,7 @@ $ git clone https://github.com/lululxvi/deepxde.git
- [Demos of forward problems](https://deepxde.readthedocs.io/en/latest/demos/pinn_forward.html)
- [Demos of inverse problems](https://deepxde.readthedocs.io/en/latest/demos/pinn_inverse.html)
- [Demos of operator learning](https://deepxde.readthedocs.io/en/latest/demos/operator.html)
- [Demos of hyperparameter optimization](https://deepxde.readthedocs.io/en/latest/demos/operator/advection_2d_hpo.html)
Contributor
This may not be a very extensible approach: anyone adding more hyperparameter optimization examples would have to link each of them individually.

Hyperparameter optimization doesn't fit neatly under operator, inverse, or forward. Maybe you could create a hyperparameter folder, which would also be extensible for people who want to add examples of learning rate annealing, Optuna, etc.

- [FAQ](https://deepxde.readthedocs.io/en/latest/user/faq.html)
- [Research papers used DeepXDE](https://deepxde.readthedocs.io/en/latest/user/research.html)
- [API](https://deepxde.readthedocs.io/en/latest/modules/deepxde.html)
14 changes: 14 additions & 0 deletions docs/demos/hyperparameter.rst
@@ -0,0 +1,14 @@
Demos of Hyper-parameter Optimization
=====================================

Here are some demos of hyper-parameter optimization for scientific machine learning problems.

Operator learning
-----------------

.. toctree::
:maxdepth: 1

hyperparameter/advection_2d_hpo

Contributor
Remove this space; it looks weird.

[image attachment]

- `Poisson equation with HPO <https://github.com/lululxvi/deepxde/blob/master/examples/hyperparameter/poisson_1d_hpo.py>`_
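
As a point of reference for the new README bullet, here is a minimal, illustrative sketch of a Ray Tune integration with DeepXDE. It is not the code from the linked examples; the toy 1D Poisson problem, the search space, and the reported metric are assumptions made purely for illustration.

```python
# Minimal sketch (illustrative only): tune the learning rate and network size
# of a DeepXDE PINN for a toy 1D Poisson problem with Ray Tune.
# Requires `pip install deepxde "ray[tune]"`.
import deepxde as dde
import numpy as np
from ray import tune


def train_pinn(config):
    # Toy problem: -u''(x) = 2 on (-1, 1) with u(-1) = u(1) = 0.
    def pde(x, y):
        dy_xx = dde.grad.hessian(y, x)
        return -dy_xx - 2

    geom = dde.geometry.Interval(-1, 1)
    bc = dde.icbc.DirichletBC(geom, lambda x: 0, lambda x, on_boundary: on_boundary)
    data = dde.data.PDE(geom, pde, bc, num_domain=32, num_boundary=2)

    net = dde.nn.FNN(
        [1] + [config["num_neurons"]] * config["num_layers"] + [1],
        "tanh",
        "Glorot uniform",
    )
    model = dde.Model(data, net)
    model.compile("adam", lr=config["lr"])
    losshistory, _ = model.train(iterations=3000, display_every=1000)

    # Report the final total training loss; the dict returned by a function
    # trainable is picked up by Ray Tune as the trial result.
    return {"loss": float(np.sum(losshistory.loss_train[-1]))}


tuner = tune.Tuner(
    train_pinn,
    param_space={
        "lr": tune.loguniform(1e-4, 1e-2),
        "num_neurons": tune.choice([20, 50, 100]),
        "num_layers": tune.choice([2, 3, 4]),
    },
    tune_config=tune.TuneConfig(metric="loss", mode="min", num_samples=8),
)
results = tuner.fit()
print(results.get_best_result().config)
```

Each trial builds and trains its own `dde.Model` from the sampled `config`, which keeps the trainable self-contained and lets Ray Tune schedule trials independently.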