[FEATURE] Add history_length to Sensors #2655
Milotrince wants to merge 13 commits into Genesis-Embodied-AI:main
Conversation
Force-pushed 3672bb2 to b3f37bd
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 3672bb240e
ℹ️ About Codex in GitHub
Codex has been enabled to automatically review pull requests in this repo. Reviews are triggered when you:
- Open a pull request for review
- Mark a draft as ready
- Comment "@codex review".
If Codex has suggestions, it will comment; otherwise it will react with 👍.
When you sign up for Codex through ChatGPT, Codex can also answer questions or update the PR, like "@codex address that feedback".
🔴 Benchmark Regression Detected ➡️ Report
@claude review
Force-pushed e042af4 to 307ed76
```python
def get_cloned_from_cache(self, sensor: "Sensor", is_ground_truth: bool = False) -> torch.Tensor:
    dtype = sensor._get_cache_dtype()
    if sensor._history_length > 0:
        ring = (
            self._gt_timeline_ring.get(dtype)
            if is_ground_truth
            else (self._measured_history_ring.get(dtype) or self._gt_timeline_ring.get(dtype))
        )
        hist_cache_key = (is_ground_truth, dtype, sensor._cache_idx)
        if (cached := self._history_clone_cache.get(hist_cache_key)) is not None:
            return cached

        hist_idx = torch.arange(sensor._history_length, device=ring.buffer.device, dtype=torch.int32)
        blocks = []
        for rel_slice in sensor._cache_slices:
            abs_slice = slice(sensor._cache_idx + rel_slice.start, sensor._cache_idx + rel_slice.stop)
            hist = ring.at(hist_idx, slice(None), abs_slice)
            blocks.append(hist.transpose(0, 1).flatten(1, 2))
        stacked = torch.cat(blocks, dim=1)
        self._history_clone_cache[hist_cache_key] = stacked
        return stacked
```
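As a rough illustration of the read path above, here is a toy ring buffer in NumPy. All names and shapes here are simplified stand-ins for the PR's actual `ring.at` / `_cache_slices` machinery, not the real implementation: it gathers the last H frames for one sensor's cache slice, newest first, then reorders to `(B, history * slice_width)` the way the diff's transpose-and-flatten step does.

```python
import numpy as np

class HistoryRing:
    """Toy ring buffer over a flat per-step cache (shapes are illustrative).

    Frames are (B, cache_size); the ring stores the last `capacity` of them
    as (capacity, B, cache_size).
    """

    def __init__(self, capacity: int, batch: int, cache_size: int):
        self.buffer = np.zeros((capacity, batch, cache_size))
        self._head = 0  # slot holding the most recent frame

    def push(self, frame: np.ndarray) -> None:
        self._head = (self._head + 1) % self.buffer.shape[0]
        self.buffer[self._head] = frame

    def at(self, steps_back: np.ndarray, env_slice: slice, cache_slice: slice) -> np.ndarray:
        # steps_back=0 is the newest frame, 1 the one before it, etc.
        idx = (self._head - steps_back) % self.buffer.shape[0]
        return self.buffer[idx][:, env_slice, cache_slice]

ring = HistoryRing(capacity=4, batch=2, cache_size=3)
for t in range(5):  # frame 0 gets overwritten; frames 1..4 remain
    ring.push(np.full((2, 3), float(t)))

# History of length 3 for cache slice [1:3), newest first, then reordered
# to (B, history * slice_width) like the diff's transpose(0, 1).flatten(1, 2).
hist_idx = np.arange(3)
hist = ring.at(hist_idx, slice(None), slice(1, 3))    # (3, 2, 2)
per_env = hist.transpose(1, 0, 2).reshape(2, -1)      # (2, 6)
print(per_env[0])  # -> [4. 4. 3. 3. 2. 2.]
```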
The current implementation of history makes minimal changes to the existing system, so per-sensor history is collected on read like this. Should we restructure somehow to make this more efficient, or is this OK?
Should we restructure somehow to make this more efficient or is this ok?
Yes. We should.
What would you suggest? Each sensor may have a different history length.
Currently: the manager has `_ground_truth_cache` of shape `(dtype_size, B)` and `_cache` of shape `(dtype_size, B)`, and calls `_update_shared_ground_truth_cache()` / `_update_shared_cache()` per sensor class based on `update_ground_truth_only`.

What I'm thinking: give the manager `_ground_truth_cache` and `_cache` of shape `(sensor_size, max_history_len_per_sensor_class, B)`, and call `_update_shared_cache()` per sensor class, which updates both caches (whether to skip some computation for the measured cache based on options can be decided per sensor class).

I want to avoid flattening the history dim into the cache because it will complicate indexing, and having a max history length per dtype instead of per sensor class might allocate too much unused space if, say, `history=10` is used for `ContactForce` sensors in the scene and no history is used for a `Proximity` sensor.
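To make the allocation trade-off concrete, here is a hedged sketch (the sensor metadata, sizes, and helper names are made up for illustration, not taken from the codebase): per-sensor-class buffers sized by that class's own max history allocate fewer elements than one buffer sized by the global max history across all sensors.

```python
import numpy as np

# Illustrative sensor metadata (names and sizes hypothetical): each entry
# records the sensor class, its per-sensor cache size, and the history
# length requested for that instance.
sensors = [
    {"cls": "ContactForce", "cache_size": 6, "history_length": 10},
    {"cls": "ContactForce", "cache_size": 6, "history_length": 4},
    {"cls": "Proximity",    "cache_size": 1, "history_length": 0},
]
B = 8  # number of environments

def alloc_per_class(sensors, batch):
    """One (total_size, max_history_for_that_class, B) buffer per sensor class."""
    caches = {}
    for cls in {s["cls"] for s in sensors}:
        group = [s for s in sensors if s["cls"] == cls]
        size = sum(s["cache_size"] for s in group)
        hist = max(max(s["history_length"] for s in group), 1)
        caches[cls] = np.zeros((size, hist, batch))
    return caches

def alloc_global_max(sensors, batch):
    """Single buffer where every sensor pays for the global max history."""
    size = sum(s["cache_size"] for s in sensors)
    hist = max(max(s["history_length"] for s in sensors), 1)
    return np.zeros((size, hist, batch))

per_class = alloc_per_class(sensors, B)
global_max = alloc_global_max(sensors, B)
class_elems = sum(c.size for c in per_class.values())
print(class_elems, global_max.size)  # -> 968 1040 (Proximity no longer pays for history=10)
```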
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: df53cf6e2f
Force-pushed 60b50a3 to e9e471d
@claude review
bump!
```python
def _options_require_measured_cache(self):
    return np.any(np.abs(self._options.delay) > gs.EPS)
```
Why add this helper? I hate one-liner functions.
Sensor classes extend this via `super()`:

```python
def _options_require_measured_cache(self) -> bool:
    return super()._options_require_measured_cache() or (
        self._options.jitter > gs.EPS
        or (self._options.interpolate and (self._delay_ts > 0 or self._options.jitter > gs.EPS))
        or np.any(np.abs(self._options.bias) > gs.EPS)
        or np.any(np.abs(self._options.noise) > gs.EPS)
        or np.any(np.abs(self._options.random_walk) > gs.EPS)
        or np.any(np.abs(self._options.resolution) > gs.EPS)
        or np.any(np.array(self._options.jitter) > gs.EPS)
    )
```
I'm not a huge fan of this branching. I would rather always have the measured cache enabled.
Let's check how much it affects performance.
This reverts commit 597a6fc.
Force-pushed 1d021be to 436859a
Description
- `history_length` for all sensors
- `update_ground_truth_only` instead of the user having to set the option
- `filter_link_idx` for `Contact` sensor

Related Issue
Resolves #2646
Motivation and Context
See issue
How Has This Been / Can This Be Tested?
Added `tests/test_sensors.py::test_sensor_history_length_contact_and_imu`

Screenshots (if appropriate):
Checklist:
- Submitting Code Changes section of CONTRIBUTING document.