Conversation
This enables tmol to work with the latest torch (2.2) and python (3.12). It also (hopefully) simplifies the packaging of tmol so that it can be used more easily in a greater number of environments.

- To support newer pytorch versions, the compiler standard is bumped to C++17.
- Explicit device placements are added for some tensors, because torch has gotten stricter about automatically moving tensors in indexing operations.
- The deprecated `torch.testing.assert_allclose` is replaced by `torch.testing.assert_close` (see RFC: retire `torch.testing.assert_allclose` in favor of `torch.testing.assert_close`, pytorch/pytorch#61844).
- To support python >= 3.11, `typish` is removed as a dependency and replaced by a vendored version that fixes a bug in equality comparison of subscripted types.
- Package configuration is moved into `pyproject.toml` and now includes the dependency specification, so dependencies are automatically fetched on install. This also enables installing directly from the git repo (`pip install git+https://github.com/kleinhenz/tmol.git@bump_versions`).
- Git version information is embedded into the library and is available at `tmol.__version__` using setuptools_scm.
- The Dockerfile is simplified by using micromamba base images.
- In `env.yml` the cudatoolkit is changed to the version provided by nvidia at `nvidia/label/cuda-12.1.1::cuda`. All other dependencies are moved to pip, since that makes propagating dependencies to other projects easier.
- A frozen set of known-good dependency versions, and a script to update them, are provided in `environments/linux-cuda`.

Authored by Joseph Kleinhenz <kleinhej@gene.com>
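Two of the migrations above can be sketched briefly (a minimal illustration with made-up tensor values, not tmol code):

```python
import torch

# Migration 1: the deprecated torch.testing.assert_allclose is replaced
# by torch.testing.assert_close, which also checks dtype and device.
a = torch.tensor([1.0, 2.0, 3.0])
b = torch.tensor([1.0, 2.0, 3.0])
torch.testing.assert_close(a, b)  # was: torch.testing.assert_allclose(a, b)

# Migration 2: build index tensors on the same device as the tensor they
# index, since (as the PR notes) newer torch is stricter about implicitly
# moving tensors across devices in indexing operations.
coords = torch.zeros(5, 3)
idx = torch.tensor([0, 2, 4], device=coords.device)
subset = coords[idx]  # shape (3, 3)
```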
… back from tmol->RF2
```python
from tmol.optimization.sfxn_modules import CartesianSfxnNetwork as cart_sfxn_network
from tmol.optimization.lbfgs_armijo import LBFGS_Armijo as lbfgs_armijo
```
Better way of doing these?
```python
from tmol.io.pose_stack_from_rosettafold2 import (  # noqa: F401
    pose_stack_from_rosettafold2,
    pose_stack_to_rosettafold2,
)
```
Naming convention for 'to' function?
```python
def pose_stack_to_rosettafold2(seq, xyz, chainlens, pose_stack):
```
This name no longer really fits the file. How should we rename the function/file?
```python
from tmol.system.kinematics import KinematicDescription
from tmol.system.score_support import kincoords_to_coords

# from tmol.system.score_support import kincoords_to_coords # causes circular import when importing tmol from RF2
```
I was getting circular import errors when importing tmol from RF2 without commenting this out.
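A common alternative to commenting the import out entirely is to defer it to call time (a sketch; the wrapper name is hypothetical):

```python
def kincoords_to_coords_deferred(*args, **kwargs):
    # Hypothetical wrapper: with the import at function scope instead of
    # module scope, tmol.system.score_support is only resolved when the
    # function is first called, after both packages have finished
    # initializing, which breaks the tmol <-> RF2 import cycle.
    from tmol.system.score_support import kincoords_to_coords

    return kincoords_to_coords(*args, **kwargs)
```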
```python
)
"""
```
Commented this out with the above, not sure if technically necessary since I don't think it would have been parsed unless used.
```python
rf2_at_is_real = rf2_at_is_real_map[seq]

rf2_coords = torch.full(  # allocate xyz for RF2 instead, with correct size - length (sum(L_s)), 27, 3
    xyz.shape,
```
Right now I just steal the shape from the original xyz that we pass in. I guess technically all it should need is the L_s and seq?
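If the buffer really only depends on `L_s` and the fixed RF2 atom layout, the allocation could be derived from the chain lengths directly rather than reusing `xyz.shape` (a sketch; the helper name and NaN fill value are assumptions):

```python
import torch


def allocate_rf2_coords(L_s, n_atoms=27, device="cpu"):
    # Hypothetical helper: RF2 coordinates are (total length, 27, 3), so
    # sum(L_s) is enough to size the buffer without consulting xyz at all.
    total_len = int(sum(L_s))
    return torch.full((total_len, n_atoms, 3), float("nan"), device=device)
```

e.g. `allocate_rf2_coords([10, 5])` would give a `(15, 27, 3)` NaN-filled tensor.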
Closing in favor of #298
This is a quick working example of getting RF2 and tmol talking.
I've tested an example in RF2 with minimization happening at each step, and putting the result back into the RF2 xyz.
Small detail: the N-terminal hydrogens and the C-terminal OXT are not preserved if I fetch the minimization result from tmol, send it back, and rescore it. I assume this is expected?