
[FEATURE] Support visual mesh deformation of KinematicEntity and rendering of KinematicEntity in raycast sensor.#2721

Draft
Kashu7100 wants to merge 11 commits into Genesis-Embodied-AI:main from Kashu7100:feat-smpl-vis2

Conversation

Collaborator

@Kashu7100 Kashu7100 commented Apr 20, 2026

Description

  • Multi-solver BVH raycasting. RaycasterSharedMetadata now carries an extra_visual_bvhs
    list. Per-solver BVHs are built and traversed independently; per-ray results are merged
    via a new kernel_merge_ray_hits that keeps the closest hit. A single depth camera can
    now see rigid + kinematic geometry simultaneously.
  • Cross-solver sensor attachment. Replaced the single-solver links_idx lookup with
    per-sensor (solver, link_idx) pairs. _gather_sensor_link_poses() groups sensors by
    solver, does one bulk get_links_pos/quat per solver, and scatters into a (B, n_sensors,
    3/4) tensor. Static sensors (entity_idx=-1) get identity transforms with properly padded
    per-sensor arrays.
  • Material-level opt-in. Moved use_visual_raycasting from an entity property to
    gs.materials.Kinematic, so visibility to sensors is configured at entity creation.
    Non-opt-in entities have their vverts moved to (1e10, 1e10, 1e10) after FK so their
    AABBs fall outside any ray's max_range — no kernel-signature changes needed.
  • Robustness fixes. Raise on cross-solver metadata mismatch; force-run FK before the
    first BVH build so non-root kinematic links have valid poses; validate no_hit_value >=
    max_range when multi-solver merge is active; setter for use_visual_raycasting raises
    post-build instead of silently no-oping; set _is_forward_pos_updated=True after internal
    FK to avoid redundant per-frame recomputation.
  • Examples. examples/sensors/depth_camera_custom_vverts.py exercises the full pipeline:
    Go2 quadruped (rigid, articulated) + ground plane + a deforming kinematic sphere (opted
    in) + a static kinematic box (opted out, visible in viewer but absent from depth).
    examples/rendering/custom_visual_mesh.py updated to the new material API.
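The closest-hit merge can be illustrated with a small numpy sketch (the real kernel_merge_ray_hits is a Taichi kernel; the function and array names below are illustrative, not the actual implementation):

```python
import numpy as np

def merge_ray_hits(dist_a, dist_b, hit_a, hit_b):
    """Merge per-ray results from two BVHs, keeping the closest hit."""
    take_b = dist_b < dist_a                 # ray-wise: is the extra BVH closer?
    dist = np.where(take_b, dist_b, dist_a)
    hit = np.where(take_b, hit_b, hit_a)
    return dist, hit

# A miss is encoded as no_hit_value (here 100.0), which must be >= max_range
# so a miss in one BVH can never shadow a real hit from the other solver.
dist_rigid     = np.array([2.5, 100.0,   7.0])
dist_kinematic = np.array([3.0,   4.0, 100.0])
geom_rigid     = np.array([11, -1, 13])
geom_kinematic = np.array([21, 22, -1])

dist, geom = merge_ray_hits(dist_rigid, dist_kinematic, geom_rigid, geom_kinematic)
# dist -> [2.5, 4.0, 7.0]; geom -> [11, 22, 13]
```

Because the merge is a pure per-ray min over raw distances, it composes over any number of extra BVHs by folding them in one at a time.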

Related Issue

Resolves Genesis-Embodied-AI/Genesis#

Motivation and Context

How Has This Been / Can This Be Tested?

Screenshots (if appropriate):

Checklist:

  • I read the CONTRIBUTING document.
  • I followed the Submitting Code Changes section of CONTRIBUTING document.
  • I tagged the title correctly (including BUG FIX/FEATURE/MISC/BREAKING)
  • I updated the documentation accordingly or no change is needed.
  • I tested my changes and added instructions on how to test it for reviewers.
  • I have added tests to cover my changes.
  • All new and existing tests passed.

Kashu7100 and others added 11 commits April 14, 2026 22:12
- Move use_visual_raycasting from entity property to gs.materials.Kinematic
  so it is configured declaratively at entity creation time.
- Support multi-solver raycasting: the RaycasterSharedMetadata now holds an
  extra_visual_bvhs list so a single depth camera / lidar can see visual
  geometry from both RigidSolver and KinematicSolver simultaneously.  Extra
  BVHs are built per participating solver; per-ray results are merged via
  a new kernel_merge_ray_hits kernel that keeps the closest hit.
- Add examples/sensors/depth_camera_custom_vverts.py demonstrating a depth
  camera that sees a rigid plane + box together with a deforming kinematic
  mesh (updated each frame via set_vverts).  Writes depth frames to PNGs.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
…on, post-build setter

- Raise if a sensor is attached to an entity whose solver differs from the
  shared metadata solver (prevents silent link_idx corruption across solvers).
- Run kernel_forward_kinematics in _update_visual_bvh_for_solver when FK has
  not yet executed, ensuring non-root kinematic links have valid poses before
  the first BVH build.
- Validate that no_hit_value >= max_range when multi-solver merge is active
  (the merge kernel compares raw distances, so a small no_hit_value would
  shadow real hits from the other BVH).
- Change use_visual_raycasting setter to raise after scene.build() instead of
  silently storing a dead value.
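The no_hit_value validation can be sketched as a plain precondition check (the function name is hypothetical; the real check lives in the raycaster setup path):

```python
def validate_merge_config(no_hit_value: float, max_range: float) -> None:
    """When multi-solver merge is active, a miss sentinel smaller than
    max_range would compare as 'closer' than a genuine hit from the other
    BVH and silently shadow it, so reject that configuration up front."""
    if no_hit_value < max_range:
        raise ValueError(
            f"no_hit_value ({no_hit_value}) must be >= max_range ({max_range}) "
            "when multi-solver merge is active: the merge kernel compares raw "
            "distances, so a smaller no_hit_value would shadow real hits."
        )

validate_merge_config(no_hit_value=100.0, max_range=20.0)   # fine
# validate_merge_config(no_hit_value=10.0, max_range=20.0)  # would raise
```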

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Replace the single-solver links_idx lookup with per-sensor
(solver, link_idx) pairs stored in _sensor_link_solvers /
_sensor_link_indices on RigidSensorMetadataMixin.

At ray-cast time, _gather_sensor_link_poses() groups sensors by solver,
does one bulk get_links_pos/quat call per solver, and scatters results
into a (B, n_sensors, 3/4) tensor.  Static sensors (entity_idx=-1) get
identity transforms.

This removes the restriction that all raycaster sensors must be on the
same solver — a depth camera on a rigid entity can now coexist with one
on a kinematic entity.  The primary solver (for BVH geometry) defaults
to rigid_solver when active, falling back to kinematic_solver.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
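The group-by-solver gather/scatter pattern described above can be sketched in numpy (a hypothetical stand-in: `solver_poses` plays the role of a solver's bulk get_links_pos call, and `None` marks a static sensor):

```python
import numpy as np
from collections import defaultdict

def gather_sensor_link_poses(sensor_solvers, sensor_links, solver_poses, B):
    """One bulk lookup per solver, scattered into a dense (B, n_sensors, 3)."""
    n_sensors = len(sensor_solvers)
    out = np.zeros((B, n_sensors, 3))
    groups = defaultdict(list)
    for sensor_i, solver in enumerate(sensor_solvers):
        groups[solver].append(sensor_i)
    for solver, sensor_ids in groups.items():
        if solver is None:          # static sensor: identity transform
            continue                # (out is already zero-initialised)
        links = [sensor_links[i] for i in sensor_ids]
        poses = solver_poses[solver][:, links, :]   # one bulk lookup, (B, k, 3)
        out[:, sensor_ids, :] = poses               # scatter back by sensor index
    return out

B = 2
solver_poses = {
    "rigid":     np.arange(B * 4 * 3, dtype=float).reshape(B, 4, 3),
    "kinematic": -np.arange(B * 2 * 3, dtype=float).reshape(B, 2, 3),
}
pos = gather_sensor_link_poses(
    sensor_solvers=["rigid", "kinematic", None, "rigid"],
    sensor_links=[1, 0, -1, 3],
    solver_poses=solver_poses,
    B=B,
)
```

The point of the grouping is that the number of solver round-trips is bounded by the number of solvers, not the number of sensors.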
After computing visual vertex positions from FK, entities that did NOT
opt in via use_visual_raycasting have their vverts moved to (1e10, 1e10,
1e10) by kernel_invalidate_vverts_range.  Their AABBs end up far outside
any ray's max_range, so the BVH traversal naturally skips them without
needing kernel signature changes.

This means only explicitly opted-in entities are visible to the raycaster,
even if other entities share the same solver.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
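The invalidation trick can be shown in a few lines of numpy (the kernel itself is Taichi; the constant mirrors the PR's _VVERT_INVALIDATION_POS, everything else here is illustrative):

```python
import numpy as np

INVALIDATION_POS = 1e10   # mirrors _VVERT_INVALIDATION_POS in raycast_qd.py
max_range = 50.0

# Visual vertices of an entity that did NOT opt in via use_visual_raycasting.
vverts = np.array([[0.0, 0.0, 1.0],
                   [1.0, 0.0, 1.0],
                   [0.5, 1.0, 1.0]])
opted_in = False
if not opted_in:
    vverts[:] = INVALIDATION_POS   # what kernel_invalidate_vverts_range does

# Cheapest possible reject: the entity's AABB now lies entirely beyond
# max_range from any plausible ray origin, so BVH traversal skips it
# without any change to the kernel signature.
aabb_min = vverts.min(axis=0)
ray_origin = np.zeros(3)
skipped = np.linalg.norm(aabb_min - ray_origin) > max_range
# skipped -> True
```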
Scene now exercises the full feature set:
- RigidSolver: ground plane + Go2 quadruped (articulated, 13 links)
- KinematicSolver: deforming sphere (use_visual_raycasting=True) +
  static box (use_visual_raycasting=False, invisible to rays)
- Two depth cameras on different rigid entities (Go2 base + plane)
  verifying per-sensor link resolution across the shared BVH
- Phase 3 filtering verified: the kinematic box is visible in the 3D
  viewer but absent from both depth images

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- Extract _seg_key_for_geom() in RasterizerContext to deduplicate the
  8-line if/elif block shared by add_rigid_node and add_skinned_node.
- Extract _cast_visual_rays() classmethod in RaycasterSensor to wrap the
  18-argument kernel_cast_rays_visual call (was copy-pasted 3 times).
- Move defaultdict import to module level in raycaster.py.
- Name the magic 1e10 as _VVERT_INVALIDATION_POS constant in raycast_qd.py.
- Update custom_visual_mesh.py SMPL path to use the material-based
  use_visual_raycasting API instead of the old property setter.
- Deduplicate trimesh.creation.box() call in the depth camera example.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- Static sensors (entity_idx < 0) now append dummy entries to links_idx,
  offsets_pos, and offsets_quat so all per-sensor arrays stay aligned.
  Previously the early return skipped these, causing index drift when
  static and entity-attached sensors were mixed.
- Set solver._is_forward_pos_updated = True after running FK in
  _update_visual_bvh_for_solver so it doesn't re-run every frame.
  The flag is reset by the solver's own step cycle, so subsequent
  pose changes are still picked up.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
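The index-drift bug and its fix can be sketched with plain lists (names like `register_sensors` are illustrative, not the actual mixin API):

```python
def register_sensors(sensors):
    """Append one entry per sensor, even for static ones, so all
    per-sensor arrays stay aligned with the sensor index."""
    links_idx, offsets_pos = [], []
    for s in sensors:
        if s["entity_idx"] < 0:                  # static (world-frame) sensor
            links_idx.append(-1)                 # dummy entry keeps alignment
            offsets_pos.append((0.0, 0.0, 0.0))  # identity offset
            continue                             # old code returned early here
        links_idx.append(s["link_idx"])
        offsets_pos.append(s["offset_pos"])
    return links_idx, offsets_pos

sensors = [
    {"entity_idx": 0, "link_idx": 4, "offset_pos": (0.0, 0.0, 0.1)},
    {"entity_idx": -1},                          # static sensor in the middle
    {"entity_idx": 1, "link_idx": 7, "offset_pos": (0.1, 0.0, 0.0)},
]
links_idx, offsets_pos = register_sensors(sensors)
# links_idx -> [4, -1, 7]: sensor 2 still maps to link 7, no index drift
```

Without the dummy entries, the static sensor in slot 1 would shift every later sensor's link and offset down by one.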
