[FEATURE] Support visual mesh deformation of KinematicEntity and rendering of KinematicEntity in raycast sensor #2721
Draft
Kashu7100 wants to merge 11 commits into Genesis-Embodied-AI:main from
Conversation
- Move use_visual_raycasting from an entity property to gs.materials.Kinematic so it is configured declaratively at entity creation time.
- Support multi-solver raycasting: RaycasterSharedMetadata now holds an extra_visual_bvhs list, so a single depth camera / lidar can see visual geometry from both RigidSolver and KinematicSolver simultaneously. Extra BVHs are built per participating solver; per-ray results are merged via a new kernel_merge_ray_hits kernel that keeps the closest hit.
- Add examples/sensors/depth_camera_custom_vverts.py demonstrating a depth camera that sees a rigid plane + box together with a deforming kinematic mesh (updated each frame via set_vverts). Writes depth frames to PNGs.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
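The closest-hit merge can be sketched in NumPy (the real kernel_merge_ray_hits is a GPU kernel; the function name and array layout here are illustrative, not the actual Genesis code):

```python
import numpy as np

def merge_ray_hits(dist_a, geom_a, dist_b, geom_b):
    """Per-ray merge of two BVH traversal results, keeping the closer hit.

    Misses are encoded as a large no_hit_value sentinel (>= max_range), so a
    miss in one BVH can never shadow a real hit from the other.
    """
    take_b = dist_b < dist_a                 # per-ray: is B's hit closer?
    dist = np.where(take_b, dist_b, dist_a)  # merged hit distance
    geom = np.where(take_b, geom_b, geom_a)  # payload follows the winner
    return dist, geom

no_hit = 100.0  # miss sentinel, >= max_range
dist_rigid     = np.array([2.0, no_hit, 5.0])
geom_rigid     = np.array([0,   -1,     1])
dist_kinematic = np.array([3.0, 4.0,    no_hit])
geom_kinematic = np.array([7,   8,      -1])

dist, geom = merge_ray_hits(dist_rigid, geom_rigid, dist_kinematic, geom_kinematic)
# ray 0: rigid hit at 2.0 wins; ray 1: kinematic hit at 4.0; ray 2: rigid at 5.0
```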
…on, post-build setter

- Raise if a sensor is attached to an entity whose solver differs from the shared metadata solver (prevents silent link_idx corruption across solvers).
- Run kernel_forward_kinematics in _update_visual_bvh_for_solver when FK has not yet executed, ensuring non-root kinematic links have valid poses before the first BVH build.
- Validate that no_hit_value >= max_range when the multi-solver merge is active (the merge kernel compares raw distances, so a small no_hit_value would shadow real hits from the other BVH).
- Change the use_visual_raycasting setter to raise after scene.build() instead of silently storing a dead value.
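A minimal sketch of why the no_hit_value >= max_range validation matters (validate_merge_config is a hypothetical stand-in for the PR's guard; the point is that the merge compares raw distances, so a small miss sentinel would win the comparison):

```python
def validate_merge_config(no_hit_value, max_range):
    # Illustrative guard: raise at configuration time rather than silently
    # return wrong depths once the multi-solver merge is active.
    if no_hit_value < max_range:
        raise ValueError(
            f"no_hit_value ({no_hit_value}) must be >= max_range ({max_range}) "
            "when multi-solver merge is active"
        )

# The pitfall: a miss encoded as 0.0 "wins" against a real 3.5 m hit,
# because the merge is a plain min over raw distances.
shadowed = min(0.0, 3.5)  # 0.0, i.e. the real hit is lost

try:
    validate_merge_config(no_hit_value=1.0, max_range=20.0)
    config_rejected = False
except ValueError:
    config_rejected = True  # the guard catches the bad configuration
```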
Replace the single-solver links_idx lookup with per-sensor (solver, link_idx) pairs stored in _sensor_link_solvers / _sensor_link_indices on RigidSensorMetadataMixin. At ray-cast time, _gather_sensor_link_poses() groups sensors by solver, does one bulk get_links_pos/quat call per solver, and scatters results into a (B, n_sensors, 3/4) tensor. Static sensors (entity_idx=-1) get identity transforms.

This removes the restriction that all raycaster sensors must be on the same solver — a depth camera on a rigid entity can now coexist with one on a kinematic entity. The primary solver (for BVH geometry) defaults to rigid_solver when active, falling back to kinematic_solver.
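The group-by-solver gather/scatter can be illustrated in pure Python (FakeSolver is a mock standing in for a real solver; only get_links_pos is modeled, and the (B, n_sensors, 3) layout follows the description above):

```python
import numpy as np
from collections import defaultdict

B = 2  # batch size

class FakeSolver:
    """Mock solver exposing one bulk link-pose query, as described above."""
    def __init__(self, poses):
        self.poses = poses  # (B, n_links, 3)
    def get_links_pos(self, links_idx):
        return self.poses[:, links_idx]  # one bulk call per solver

rigid = FakeSolver(np.arange(2 * 4 * 3, dtype=float).reshape(2, 4, 3))
kin = FakeSolver(-np.ones((2, 2, 3)))

# Per-sensor (solver, link_idx) pairs; solver None => static sensor.
sensor_solvers = [rigid, kin, None, rigid]
sensor_links = [1, 0, -1, 3]

pos = np.zeros((B, len(sensor_solvers), 3))  # static sensors keep identity (zeros)

# Group sensor indices by solver, then gather once per solver and scatter.
groups = defaultdict(list)
for i, solver in enumerate(sensor_solvers):
    if solver is not None:
        groups[id(solver)].append(i)
for idxs in groups.values():
    solver = sensor_solvers[idxs[0]]
    links = [sensor_links[i] for i in idxs]
    pos[:, idxs] = solver.get_links_pos(links)  # scatter the bulk result
```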
After computing visual vertex positions from FK, entities that did NOT opt in via use_visual_raycasting have their vverts moved to (1e10, 1e10, 1e10) by kernel_invalidate_vverts_range. Their AABBs end up far outside any ray's max_range, so the BVH traversal naturally skips them without needing kernel signature changes. This means only explicitly opted-in entities are visible to the raycaster, even if other entities share the same solver.
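The filtering trick can be sketched as follows (INVALID_POS mirrors the PR's 1e10 sentinel; the distance-based AABB reject shown here is a simplification of real BVH traversal):

```python
import numpy as np

INVALID_POS = 1e10  # mirrors the PR's _VVERT_INVALIDATION_POS constant

# A non-opted-in entity's visual vertices are all moved to the sentinel.
verts = np.full((8, 3), INVALID_POS)

# Its axis-aligned bounding box therefore collapses around the sentinel.
aabb_min, aabb_max = verts.min(axis=0), verts.max(axis=0)

ray_origin, max_range = np.zeros(3), 50.0
# Cheapest possible reject: the AABB is farther than any ray can reach,
# so traversal skips the entity with no kernel-signature change.
reachable = np.linalg.norm(aabb_min - ray_origin) <= max_range
```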
Scene now exercises the full feature set:

- RigidSolver: ground plane + Go2 quadruped (articulated, 13 links)
- KinematicSolver: deforming sphere (use_visual_raycasting=True) + static box (use_visual_raycasting=False, invisible to rays)
- Two depth cameras on different rigid entities (Go2 base + plane), verifying per-sensor link resolution across the shared BVH
- Phase 3 filtering verified: the kinematic box is visible in the 3D viewer but absent from both depth images
- Extract _seg_key_for_geom() in RasterizerContext to deduplicate the 8-line if/elif block shared by add_rigid_node and add_skinned_node.
- Extract the _cast_visual_rays() classmethod in RaycasterSensor to wrap the 18-argument kernel_cast_rays_visual call (it was copy-pasted 3 times).
- Move the defaultdict import to module level in raycaster.py.
- Name the magic 1e10 as the _VVERT_INVALIDATION_POS constant in raycast_qd.py.
- Update the custom_visual_mesh.py SMPL path to use the material-based use_visual_raycasting API instead of the old property setter.
- Deduplicate the trimesh.creation.box() call in the depth camera example.
- Static sensors (entity_idx < 0) now append dummy entries to links_idx, offsets_pos, and offsets_quat so all per-sensor arrays stay aligned. Previously the early return skipped these, causing index drift when static and entity-attached sensors were mixed.
- Set solver._is_forward_pos_updated = True after running FK in _update_visual_bvh_for_solver so it doesn't re-run every frame. The flag is reset by the solver's own step cycle, so subsequent pose changes are still picked up.
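The alignment fix can be illustrated with plain lists (the field names are simplified stand-ins for the mixin's arrays): the bug was an early return that skipped appending for static sensors, so later entity-attached sensors read a shifted link_idx.

```python
# Keep every per-sensor array index-aligned with the sensor list, even for
# static sensors (entity_idx < 0), which previously hit an early return.
links_idx, offsets_pos, offsets_quat = [], [], []
sensors = [
    {"entity_idx": 0, "link": 2},
    {"entity_idx": -1},            # static sensor: no entity, no link
    {"entity_idx": 1, "link": 5},
]

for s in sensors:
    if s["entity_idx"] < 0:
        links_idx.append(-1)                     # dummy entry, never dereferenced
        offsets_pos.append((0.0, 0.0, 0.0))      # identity transform
        offsets_quat.append((1.0, 0.0, 0.0, 0.0))
        continue                                 # no early return before appending
    links_idx.append(s["link"])
    offsets_pos.append((0.0, 0.0, 0.0))
    offsets_quat.append((1.0, 0.0, 0.0, 0.0))

# All arrays stay the same length as `sensors`, so sensor i always reads
# its own entry instead of a neighbor's.
```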
Description
- Multi-solver raycasting: RaycasterSharedMetadata holds an extra_visual_bvhs list. Per-solver BVHs are built and traversed independently; per-ray results are merged via a new kernel_merge_ray_hits that keeps the closest hit. A single depth camera can now see rigid + kinematic geometry simultaneously.
- Per-sensor link resolution: the single-solver links_idx lookup is replaced with per-sensor (solver, link_idx) pairs. _gather_sensor_link_poses() groups sensors by solver, does one bulk get_links_pos/quat per solver, and scatters into a (B, n_sensors, 3/4) tensor. Static sensors (entity_idx=-1) get identity transforms with properly padded per-sensor arrays.
- Declarative opt-in: use_visual_raycasting moves to gs.materials.Kinematic, so visibility to sensors is configured at entity creation. Non-opt-in entities have their vverts moved to (1e10, 1e10, 1e10) after FK so their AABBs fall outside any ray's max_range — no kernel-signature changes needed.
- Robustness: run forward kinematics before the first BVH build so non-root kinematic links have valid poses; validate no_hit_value >= max_range when the multi-solver merge is active; the setter for use_visual_raycasting raises post-build instead of silently no-oping; set _is_forward_pos_updated=True after internal FK to avoid redundant per-frame recomputation.
- Example: the demo scene combines a Go2 quadruped (rigid, articulated) + ground plane + a deforming kinematic sphere (opted in) + a static kinematic box (opted out, visible in the viewer but absent from depth). examples/rendering/custom_visual_mesh.py is updated to the new material API.
Related Issue
Resolves Genesis-Embodied-AI/Genesis#
Motivation and Context
How Has This Been / Can This Be Tested?
Screenshots (if appropriate):
Checklist:
I have read the Submitting Code Changes section of the CONTRIBUTING document.