RF2 Integration #291

Closed

jflat06 wants to merge 3 commits into master from jflat06/RF2-integration

Conversation

@jflat06 (Collaborator) commented Mar 19, 2024

This is a quick working example of getting RF2 and tmol talking.

I've tested an example in RF2 with minimization happening at each step, and putting the result back into the RF2 xyz.

Small detail: the N-terminus hydrogens and the C-terminus OXT are not preserved if I fetch the minimization result from tmol and then send it back and rescore it. I assume this is expected?

kleinhenz and others added 3 commits February 27, 2024 09:49
This enables tmol to work with the latest torch (2.2) and python (3.12). It also (hopefully) simplifies the packaging of tmol so that it can be used more easily in a greater number of environments.

- To support newer pytorch versions, the compiler std version is bumped to C++17. There are also some additions of explicit device placements for tensors, because torch has gotten more strict about automatically moving tensors in indexing operations. The deprecated torch.testing.assert_allclose is replaced by torch.testing.assert_close (see "RFC: retire torch.testing.assert_allclose in favor of torch.testing.assert_close", pytorch/pytorch#61844).
- To support python >= 3.11, typish is removed as a dependency and replaced by a vendored version which fixes a bug in equality comparison of subscripted types.
- Package configuration is moved into pyproject.toml and now includes the specification of dependencies, so that they are automatically fetched on install. This also enables installing directly from the git repo (pip install git+https://github.com/kleinhenz/tmol.git@bump_versions).
- Git version information is embedded into the library and is available at tmol.__version__ via setuptools_scm.
- The Dockerfile is simplified by using micromamba base images.
- In env.yml the cudatoolkit is changed to the version provided by nvidia at nvidia/label/cuda-12.1.1::cuda. All other dependencies are moved to pip, since that makes propagating dependencies to other projects easier.
- A frozen set of known-good dependency versions, and a script to update them, are provided in environments/linux-cuda.

Authored by Joseph Kleinhenz <kleinhej@gene.com>
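One of the migrations listed above is the switch from the deprecated torch.testing.assert_allclose to torch.testing.assert_close. A minimal sketch of the new API (the tensor values here are illustrative, not from tmol):

```python
import torch

a = torch.tensor([1.0, 2.0, 3.0])
b = a + 1e-8  # tiny perturbation, well below the default tolerances

# Old, deprecated (removed in newer torch releases):
# torch.testing.assert_allclose(a, b)

# New API: raises AssertionError if tensors differ beyond rtol/atol,
# and by default also checks that dtype and device match.
torch.testing.assert_close(a, b)
```

Unlike the old function, assert_close also compares dtype and device by default, which is consistent with the stricter device handling mentioned in the commit message.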
Comment thread tmol/__init__.py
Comment on lines +31 to +33
from tmol.optimization.sfxn_modules import CartesianSfxnNetwork as cart_sfxn_network

from tmol.optimization.lbfgs_armijo import LBFGS_Armijo as lbfgs_armijo
Better way of doing these?
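One possible alternative to a stack of eager "import X as y" lines in an __init__.py is a lazy alias table via module-level __getattr__ (PEP 562, Python 3.7+). This is only a sketch of the pattern; the module and alias names below are stand-ins, not tmol's actual layout:

```python
# Sketch: lazy aliases via PEP 562 module __getattr__, demonstrated
# on a synthetic module so the example is self-contained.
import importlib
import sys
import types

mod = types.ModuleType("fakepkg")

# public alias -> (module, attribute); stdlib target used as a stand-in
_ALIASES = {
    "lbfgs_armijo": ("collections", "OrderedDict"),
}

def _module_getattr(name):
    # Called only when normal attribute lookup on the module fails,
    # so the target module is imported on first access, not at import time.
    if name in _ALIASES:
        module, attr = _ALIASES[name]
        return getattr(importlib.import_module(module), attr)
    raise AttributeError(name)

mod.__getattr__ = _module_getattr  # PEP 562 hook
sys.modules["fakepkg"] = mod

import fakepkg
print(fakepkg.lbfgs_armijo)  # resolved lazily on first access
```

In a real package this keeps the public aliases in one table and also avoids paying the import cost of heavy submodules until they are actually used.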

Comment thread tmol/__init__.py
)
from tmol.io.pose_stack_from_rosettafold2 import ( # noqa: F401
pose_stack_from_rosettafold2,
pose_stack_to_rosettafold2,

Naming convention for 'to' function?

)


def pose_stack_to_rosettafold2(seq, xyz, chainlens, pose_stack):

This name no longer really fits the file. How should we rename the function/file?

from tmol.system.kinematics import KinematicDescription
from tmol.system.score_support import kincoords_to_coords

# from tmol.system.score_support import kincoords_to_coords # causes circular import when importing tmol from RF2

I was getting circular import errors when importing tmol from RF2 without commenting this out.
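A common way to break this kind of cycle, instead of commenting the import out, is to defer it into the function body that actually needs it. The sketch below demonstrates the pattern on two throwaway modules written to a temp directory (the names loosely mirror tmol's, but this is a standalone illustration, not tmol's code):

```python
# Sketch: breaking a circular import by deferring one side of it
# into a function body. Module names are made up for illustration.
import importlib
import sys
import tempfile
import textwrap
from pathlib import Path

pkg = Path(tempfile.mkdtemp())

(pkg / "score_support.py").write_text(textwrap.dedent("""
    import kinematics  # top-level import in one direction is fine

    def kincoords_to_coords(k):
        return kinematics.describe(k) + "-coords"
"""))

(pkg / "kinematics.py").write_text(textwrap.dedent("""
    def describe(k):
        return "kin(" + k + ")"

    def roundtrip(k):
        # Deferred import: score_support imports this module at its top
        # level, so importing score_support at the top of this file
        # would create a cycle. Inside the function, both modules are
        # fully initialized by the time the import runs.
        import score_support
        return score_support.kincoords_to_coords(k)
"""))

sys.path.insert(0, str(pkg))
kin = importlib.import_module("kinematics")
print(kin.roundtrip("x"))  # kin(x)-coords
```

The deferred import runs at call time, when both modules are fully initialized, so the cycle never bites.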

)


"""

Commented this out along with the above. Not sure it's technically necessary, since I don't think it would have been parsed unless used.

rf2_at_is_real = rf2_at_is_real_map[seq]

rf2_coords = torch.full( # allocate xyz for RF2 instead, with correct size - length (sum(L_s)), 27, 3
xyz.shape,

Right now I just steal the shape from the original xyz that we pass in. I guess technically all it should need is the L_s and seq?
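Building the shape from L_s directly, rather than reusing xyz.shape, might look like the sketch below. The inputs are hypothetical; the (sum(L_s), 27, 3) layout comes from the comment in the diff above:

```python
import torch

# Hypothetical inputs mirroring the comment: per-chain lengths L_s and
# a per-residue sequence tensor. 27 is RF2's per-residue atom slots,
# per the comment in the diff.
L_s = [10, 15]  # two chains
seq = torch.zeros(sum(L_s), dtype=torch.long)

N_ATOMS_RF2 = 27

# Allocate the RF2 coordinate tensor from L_s instead of xyz.shape,
# filled with NaN so unset atoms are easy to spot.
rf2_coords = torch.full((sum(L_s), N_ATOMS_RF2, 3), float("nan"))

print(rf2_coords.shape)  # torch.Size([25, 27, 3])
```

This removes the dependence on the caller's xyz tensor entirely; seq would still be needed to decide which of the 27 atom slots are real for each residue.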

@jflat06 jflat06 requested review from aleaverfay and fdimaio March 19, 2024 19:01
Base automatically changed from kleinhenz/bump_versions to master March 26, 2024 13:25
@jflat06 commented Apr 17, 2024

Closing in favor of #298

@jflat06 jflat06 closed this Apr 17, 2024