Conversation
Did you ever make the pickled atomworks file for testing? Or are we going to settle for the code-generated one?
fdimaio
left a comment
LGTM (pending flake/black errors)
- black reformatted 39 files
- pre-commit: changed black hook from `language: python`/`python3.11` to `language: system` (uses the active environment's black)
- .flake8: added per-file-ignores for pose_stack_from_atomworks.py (E201, E231, E241: intentional whitespace alignment in atom name tables)

Made-with: Cursor
```diff
   entry: black
-  language: python
-  language_version: python3.11
+  language: system
```
Why is this setting changing?
I found the commit where you made this modification. If I understand correctly, this change means pre-commit will no longer use the version it installs itself, but rather whatever version pip grabs; is that right? The danger is that with an unpinned version of black we'll end up with constant small cosmetic changes at every commit, which will make it hard to separate the important changes from the unimportant.
In fact, it looks like we ought to be pinning a particular version of black in pre-commit using the rev tag?
rev: 24.1.1
@aleaverfay thanks for noting that! Fixed the version in pre-commit now, so it should be consistent.
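For reference, pinning black in pre-commit looks roughly like the sketch below. The repo URL and `rev` value follow the usual upstream convention; the project's actual `.pre-commit-config.yaml` may differ in details:

```yaml
repos:
  - repo: https://github.com/psf/black
    # Pinning rev means every contributor runs the same black release,
    # avoiding cosmetic churn from whatever version pip happens to install.
    rev: 24.1.1
    hooks:
      - id: black
```

With a pinned `rev`, `pre-commit autoupdate` is the deliberate mechanism for bumping the formatter version, rather than it drifting per-environment.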
Codecov Report
✅ All modified and coverable lines are covered by tests.

@@            Coverage Diff             @@
## jflat06/sparse-dispatch   #335   +/-   ##
===========================================
+ Coverage    95.03%   95.08%   +0.04%
===========================================
  Files          300      302       +2
  Lines        23401    23626     +225
===========================================
+ Hits         22239    22464     +225
  Misses        1162     1162
Adds tmol.io.pose_stack_from_atomworks(), enabling tmol to construct a PoseStack directly from AtomWorks unified atom encoding tensors.
What's included
Motivation
AtomWorks uses a unified atom encoding that differs from tmol's residue-based representation. This bridge allows ML models built on AtomWorks to use tmol's Rosetta energy function for loss computation and structure refinement without manual coordinate conversion.
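To illustrate the kind of regrouping such a bridge has to perform, here is a minimal conceptual sketch: taking a flat (unified) per-atom encoding and splitting it into per-residue blocks. This uses only NumPy with made-up names and shapes; it is not tmol's or AtomWorks' actual API, just the core reshaping idea behind pose_stack_from_atomworks().

```python
import numpy as np

def group_atoms_by_residue(res_index, coords):
    """Split flat atom coordinates into a list of per-residue arrays.

    res_index: (n_atoms,) integer residue id per atom (hypothetical layout)
    coords:    (n_atoms, 3) atom coordinates
    """
    # Stable sort keeps atom order within each residue intact.
    order = np.argsort(res_index, kind="stable")
    sorted_res = res_index[order]
    sorted_xyz = coords[order]
    # Boundaries where the residue index changes mark block starts.
    splits = np.nonzero(np.diff(sorted_res))[0] + 1
    return np.split(sorted_xyz, splits)

res_index = np.array([0, 0, 1, 1, 1, 2])
coords = np.random.rand(6, 3)
blocks = group_atoms_by_residue(res_index, coords)
print([b.shape for b in blocks])  # [(2, 3), (3, 3), (1, 3)]
```

The real bridge additionally has to map atom names and residue types into tmol's block-type vocabulary, but the flat-to-blocked regrouping above is the structural heart of the conversion.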