
Robust Incremental Smoothing and Mapping (riSAM)#2409

Open
DanMcGann wants to merge 21 commits intoborglab:developfrom
DanMcGann:feature/risam

Conversation

@DanMcGann
Contributor

@DanMcGann DanMcGann commented Feb 14, 2026

This is the second of 3 PRs that support the integration of riSAM into GTSAM. This PR adds the actual riSAM algorithm along with unit tests to validate its functionality. In addition to the unit tests, I ran an independent test that confirmed that the implementation here exactly matches our internal riSAM implementation!

Overview

This PR adds and tests the RISAM class and its helpers, RISAMGraduatedKernel and RISAMGraduatedFactor. RISAM is a drop-in replacement for ISAM2 with the same update interface. However, users can wrap any potential outlier in a RISAMGraduatedFactor, and riSAM will solve updates that involve these factors using an incrementalized version of Graduated Non-Convexity.

TODO - Python Interface

This PR needs to be updated to include its Python interface. Unfortunately, I was unable to find an effective way to implement such an interface. The template for RISAMGraduatedFactor allows it to wrap ANY gtsam factor type. To define the Python interface for this, it appears that we would need to explicitly enumerate every factor type with all of their possible template combinations, which seemed unmaintainable. Thus, before adding this interface, I wanted to check in to see if the maintainers had any suggestions for the best way to approach the implementation.

Final Notes

This is PR 2/3 for adding RISAM. Once the Python interface above is fixed, the final PR will contain a Jupyter notebook with an example / tutorial on using riSAM!

@dellaert
Member

Awesome!

I'll wait to review thoroughly until the other PR is merged. But please check the naming conventions especially: we're Google style, except we camelCase variables and non-static methods. Also, please format everything with clang-format, Google style, if that was not already done. Finally, warnings are treated as errors, so please try to compile with that flag locally.

@dellaert
Member

dellaert commented Mar 2, 2026

Could you merge in develop so the PR diff is up to date?

Copilot AI left a comment

Pull request overview

This PR introduces the core Robust Incremental Smoothing and Mapping (riSAM) implementation into GTSAM by adding the RISAM solver and its graduated robust-kernel factor wrappers, plus supporting incremental tooling in iSAM2/BayesTree and accompanying unit tests.

Changes:

  • Add RISAM (drop-in ISAM2-like update interface) and the graduated-kernel infrastructure (RISAMGraduatedKernel, RISAMGraduatedFactor).
  • Add incremental “look-ahead” and traversal helpers (ISAM2::predictUpdateInfo, BayesTree::traverseTop) used by riSAM.
  • Add/extend tests for Dogleg Line Search, traversal/look-ahead behavior, and riSAM integration.

Reviewed changes

Copilot reviewed 17 out of 17 changed files in this pull request and generated 5 comments.

| File | Description |
| --- | --- |
| gtsam/sam/RISAM.h / gtsam/sam/RISAM.cpp | Adds the riSAM algorithm wrapper around iSAM2, housekeeping, and GNC-style iterations. |
| gtsam/sam/RISAMGraduatedKernel.h / .cpp | Adds graduated robust kernel interface + SIGKernel implementation. |
| gtsam/sam/RISAMGraduatedFactor.h / .cpp | Adds factor wrapper enabling graduated robust weighting during linearization. |
| gtsam/sam/tests/testRISAM.cpp | New unit/integration tests validating kernel math, factor behavior, and end-to-end riSAM behavior. |
| gtsam/nonlinear/ISAM2.h / gtsam/nonlinear/ISAM2.cpp | Adds predictUpdateInfo and Dogleg Line Search support inside updateDelta. |
| gtsam/nonlinear/ISAM2Params.h | Introduces ISAM2DoglegLineSearchParams and adds it to the optimization params variant. |
| gtsam/nonlinear/DoglegOptimizerImpl.h | Implements DoglegLineSearchImpl::Iterate (line search over the dogleg arc). |
| gtsam/inference/BayesTree.h / gtsam/inference/BayesTree-inst.h | Adds traverseTop (and helpers) for “contaminated” traversal. |
| tests/testGaussianISAM2.cpp | Adds slamlike regression tests using Dogleg Line Search; adds predict_update_info test. |
| tests/testDoglegOptimizer.cpp | Adds a test for DoglegLineSearchImpl::Iterate and a regression test for ComputeBlend mismatch. |
| gtsam/symbolic/tests/testSymbolicBayesTree.cpp | Adds tests validating traverseTop behavior. |
| gtsam/nonlinear/nonlinear.i | Exposes Dogleg Line Search params and predictUpdateInfo to the wrapper layer. |
Comments suppressed due to low confidence (10)

gtsam/sam/RISAMGraduatedFactor.h:130

  • vblock is created but never used, which will trigger unused-variable warnings (and can fail builds when warnings are treated as errors). Remove it, or use it if it was intended for something else.
      size_t d = current_estimate.at(key).dim();
      gtsam::Matrix vblock = gtsam::Matrix::Zero(output_dim, d);
      Ablocks.push_back(A.block(0, idx_start, output_dim, d));
      idx_start += d;

gtsam/sam/RISAMGraduatedFactor.h:31

  • GraduatedFactor has out-of-line methods (see RISAMGraduatedFactor.cpp) but is not marked GTSAM_EXPORT. Consider exporting it to avoid missing symbols on Windows shared-library builds, especially since this type is part of the public riSAM API.
/// @brief Graduated Factor for riSAM base class
class GraduatedFactor {
  /** TYPES **/
 public:
  typedef std::shared_ptr<GraduatedFactor> shared_ptr;

  /** FIELDS **/

gtsam/sam/RISAMGraduatedKernel.h:24

  • GraduatedKernel is a non-template class with out-of-line virtual methods; consider adding GTSAM_EXPORT to the class declaration to ensure it is exported correctly on Windows shared-library builds.
/** @brief Base class for graduated kernels for riSAM
 * Advanced users can write their own kernels by inheriting from this class
 */
class GraduatedKernel {
  /** TYPES **/

gtsam/nonlinear/DoglegOptimizerImpl.h:334

  • This header uses std::numeric_limits<double>::epsilon() but does not include <limits>. Add the missing include to keep the header self-contained and avoid relying on transitive includes.
  // Search Increase delta
  double eps = std::numeric_limits<double>::epsilon();
  while (step < max_step - eps) {

gtsam/nonlinear/ISAM2Params.h:149

  • The constructor argument order is (..., wildfire_threshold, sufficient_decrease_coeff, ...), but the member order/docs and call sites in this PR pass ( ..., 1e-3, 1e-4, ...) which reads like (sufficient_decrease_coeff, wildfire_threshold). This is very easy to misuse and likely results in swapped parameter values; consider reordering the constructor parameters to match the field order (or use named setters in call sites).
  ISAM2DoglegLineSearchParams(double min_delta = 0.02, double max_delta = 0.5,
                              double step_size = 1.5,
                              double wildfire_threshold = 1e-4,
                              double sufficient_decrease_coeff = 1e-3,
                              bool verbose = false)

tests/testGaussianISAM2.cpp:328

  • These arguments appear to be passed as (min_delta, max_delta, step_size, sufficient_decrease_coeff, wildfire_threshold, verbose), but the ISAM2DoglegLineSearchParams constructor is declared as (min_delta, max_delta, step_size, wildfire_threshold, sufficient_decrease_coeff, verbose). This likely swaps wildfire_threshold and sufficient_decrease_coeff in the test configuration.
  ISAM2 isam = createSlamlikeISAM2(
      &fullinit, &fullgraph,
      ISAM2Params(ISAM2DoglegLineSearchParams(0.1, 1.0, 3, 1e-3, 1e-4, false),
                  0.0, 0, false));

gtsam/sam/tests/testRISAM.cpp:178

  • This call likely swaps wildfire_threshold and sufficient_decrease_coeff: ISAM2DoglegLineSearchParams takes ( ..., wildfire_threshold, sufficient_decrease_coeff, ...) but the passed literals read like ( ..., sufficient_decrease_coeff, wildfire_threshold, ...). Please confirm and reorder arguments (or switch to setters) so the intended values are applied.
  RISAM::Parameters params;
  params.isam2_params = ISAM2Params(
      ISAM2DoglegLineSearchParams(0.02, 1.0, 1.5, 1e-3, 1e-4, false));
  RISAM risam(params);

gtsam/sam/RISAMGraduatedKernel.h:110

  • SIGKernel has out-of-line method definitions in RISAMGraduatedKernel.cpp but the class is not marked GTSAM_EXPORT. For Windows shared-library builds, export the class (or its methods) so it is usable from downstream code.
class SIGKernel : public GraduatedKernel {
  /** TYPES **/
 public:
  /// @brief Shortcut for shared pointer
  typedef std::shared_ptr<SIGKernel> shared_ptr;
  /// @brief Function type for mu update sequence
  typedef std::function<double(double, double, size_t)> MuUpdateStrategy;

gtsam/nonlinear/ISAM2Params.h:203

  • ISAM2Params::print() currently only handles Gauss-Newton and Dogleg; after adding ISAM2DoglegLineSearchParams to the OptimizationParams variant, printing will fall into the "{unknown type}" branch. Update the print logic to handle the new optimization params type.
  typedef std::variant<ISAM2GaussNewtonParams, ISAM2DoglegParams,
                       ISAM2DoglegLineSearchParams>
      OptimizationParams;  ///< Either ISAM2GaussNewtonParams or
                           ///< ISAM2DoglegParams or
                           ///< ISAM2DoglegLineSearchParams

gtsam/sam/RISAM.h:27

  • RISAM is a non-template public class implemented in a .cpp, but it is not marked with GTSAM_EXPORT. On Windows shared-library builds this can prevent the class from being visible to external users; consider declaring it as class GTSAM_EXPORT RISAM.
class RISAM {
  /** TYPES **/

@DanMcGann DanMcGann marked this pull request as ready for review March 3, 2026 04:24
@dellaert
Member

dellaert commented Mar 3, 2026

@DanMcGann, this is awesome.

First, some architectural questions: the "kernels" look very similar to the loss functions in linear. It would be good to discuss whether they are in fact identical in scope, and even line up with some of the robust loss functions we do have already. If this is indeed the case, we really ought to refactor it here so you can use any loss function and move the kernels you did define to the loss function library.

Incidentally, a new robust loss, TLS, was added just days ago in the context of the GNC optimizer.

The connection with GNC also warrants some language, perhaps in the documentation. Speaking of which, you probably want to add a notebook in nonlinear/doc to this PR that explains riSAM, based on and similar to GNC's user guide entry.

@dellaert
Member

dellaert commented Mar 4, 2026

At least two CI failures, related to

```
In file included from /Applications/Xcode_16.4.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/usr/include/c++/v1/string:608:
Error: /Applications/Xcode_16.4.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/usr/include/c++/v1/__memory/allocator.h:168:81: error: destructor called on non-final 'gtsam::SIGKernel' that has virtual functions but non-virtual destructor [-Werror,-Wdelete-non-abstract-non-virtual-dtor]
  168 |   _LIBCPP_DEPRECATED_IN_CXX17 _LIBCPP_HIDE_FROM_ABI void destroy(pointer __p) { __p->~_Tp(); }
```

```
FAILED: gtsam/CMakeFiles/gtsam.dir/sam/RISAMGraduatedKernel.cpp.o
/usr/bin/clang++-14 -DNDEBUG -Dgtsam_EXPORTS -I/__w/gtsam/gtsam -I/__w/gtsam/gtsam/build -I/__w/gtsam/gtsam/CppUnitLite -I/__w/gtsam/gtsam/gtsam/3rdparty/metis/include -I/__w/gtsam/gtsam/gtsam/3rdparty/metis/libmetis -I/__w/gtsam/gtsam/gtsam/3rdparty/metis/GKlib -I/__w/gtsam/gtsam/gtsam/3rdparty/cephes -isystem /__w/gtsam/gtsam/gtsam/3rdparty/SuiteSparse_config -isystem /__w/gtsam/gtsam/gtsam/3rdparty/Spectra -isystem /__w/gtsam/gtsam/gtsam/3rdparty/CCOLAMD/Include -isystem /__w/gtsam/gtsam/gtsam/3rdparty/Eigen -w -O3 -DNDEBUG -fPIC -fdiagnostics-color=always -ftemplate-depth=1024 -Werror -Wall -Wpedantic -Wextra -Wno-unused-parameter -Wreturn-stack-address -Wno-weak-template-vtables -Wno-weak-vtables -Wreturn-type -Wformat -Werror=format-security -Wsuggest-override -O3 -Wno-unused-local-typedefs -std=c++17 -MD -MT gtsam/CMakeFiles/gtsam.dir/sam/RISAMGraduatedKernel.cpp.o -MF gtsam/CMakeFiles/gtsam.dir/sam/RISAMGraduatedKernel.cpp.o.d -o gtsam/CMakeFiles/gtsam.dir/sam/RISAMGraduatedKernel.cpp.o -c /__w/gtsam/gtsam/gtsam/sam/RISAMGraduatedKernel.cpp
In file included from /__w/gtsam/gtsam/gtsam/sam/RISAMGraduatedKernel.cpp:2:
/__w/gtsam/gtsam/gtsam/sam/RISAMGraduatedKernel.h:26:16: error: no template named 'shared_ptr' in namespace 'std'
  typedef std::shared_ptr<GraduatedKernel> shared_ptr;
          ~~~~~^
/__w/gtsam/gtsam/gtsam/sam/RISAMGraduatedKernel.h:108:16: error: no template named 'shared_ptr' in namespace 'std'
  typedef std::shared_ptr<SIGKernel> shared_ptr;
          ~~~~~^
```

Please check all CI failures if there are others...

@DanMcGann
Contributor Author

Great point about the kernels @dellaert!

After looking into it, unfortunately refactoring is a little tricky, and I would love your input.

GNC is able to re-use the existing robustLoss functions because its formulation permits factoring out the convexification control parameter $\mu$, allowing GNCOptimizer to re-use the existing implementations in robustLoss. The graduated kernel proposed as part of RISAM does not support this, resulting in an interface difference.

Interface difference:

| GNC / RobustLoss | RISAM |
| --- | --- |
| $\rho(e)$ | $\rho(e, \mu)$ |

Unifying the interface is possible but causes problems. We could support an optional $\mu$ parameter for any robust loss $\rho(e, \mu=\mathrm{default})$. However, the effect would be ill-defined due to multiple possible implementations (see below), and many kernels do not have proposed graduated variants.

Example: GNC and RISAM each define a different graduated loss based on Geman-McClure (GM):

| GM-GNC | GM-RISAM |
| --- | --- |
| $\frac{\mu c^2 e^2}{\mu c^2 + e^2}$ | $\frac{c^2 e^2}{c^2 + (e^2)^\mu}$ |

Example: No graduation scheme for Cauchy loss has been proposed.

Additionally, beyond linearizing factors, GNC implements additional logic specific to the GM and TLS losses, and extending GNCOptimizer to arbitrary graduated losses appears to be non-trivial.

Thus, should we leave the RISAM kernel separate? Or should we fight the problems above and unify the interface?

@dellaert
Member

Thanks for the notebook, that’s invaluable. Will look at kernel issue.

@@ -0,0 +1,38 @@
/* ----------------------------------------------------------------------------
Member

Thinking all these methods could be inlined in header.

Comment thread gtsam/sam/RISAMGraduatedFactor.h Outdated
public:
typedef std::shared_ptr<GraduatedFactor> shared_ptr;

/** FIELDS **/
Member

Here and elsewhere: please use Doxygen regions as is common in GTSAM.

@dellaert
Member

dellaert commented Mar 17, 2026

Re: kernels, I think we should fight the fight :-) I'd prefer adding a new virtual method in the base class with an extra $\mu$ parameter rather than carrying a default parameter everywhere.

The issue with $\mu$ having a different interpretation in GNC and riSAM requires more thought. Is there a way riSAM can switch to the GNC version? Is there a re-parameterization that turns one $\mu$ into the other? Do they behave oppositely? Since GNC was included first - and several papers use that approach (including a recent one from Rene Vidal's group) - I'd prefer if we can use that $\mu$, and if needed perhaps rename your $\mu$ to a different name, especially if it somehow has an "opposite" meaning. That being said, I've not carefully looked into riSAM again to try and understand it deeply myself. I'm just still brainstorming with you :-)

@dellaert
Member

@DanMcGann happy to chat about this to move it along - send me an email and we can schedule.

@DanMcGann
Contributor Author

DanMcGann commented Mar 30, 2026

Hey @dellaert, thanks!
Before we fill up a meeting slot, take a look at the most recent changes that I have been working on!

These are incomplete (see TODOs below) but should give a sense of the general approach, summarized as:

  • Add graduatedLoss and graduatedWeight to robust loss interface
  • Move Weight + Loss compute into static methods to support code re-use
  • Move graduation control parameter (mu) logic into a "scheduler" that is used to construct GraduatedFactors.

It should allow riSAM to support any robust loss that has a graduated form. However, GNC will still be limited to only GM and TLS due to the additional logic implemented based on those losses.

If you want to brainstorm improvements ping me on email and we can schedule time to chat!

Remaining TODOs

  • Add the graduation schemes introduced in the GNC implementation (linear and super-linear) to the TLS loss
  • Update the GNC code for the new loss methods
  • Add tests for any missing graduated* loss/weight methods for complete coverage

@DanMcGann
Contributor Author

Okay, following up to mark the requested changes complete!

  • The former RISAMGraduatedKernel's functionality has now been split to re-use mEstimator::RobustLoss and a new class, RISAMGraduationScheduler.
  • All compatible robust loss functions are updated to support graduated variants of loss and weight.
  • Tests are updated to cover the new changes.

Let me know if you are happy with the refactor @dellaert!

@DanMcGann
Contributor Author

Hey @dellaert! Just wanted to follow up to see if there were any changes you wanted to see on the implemented design!

@dellaert
Member

@DanMcGann I’ll try to take a look today - I’ve been procrastinating on this.

@dellaert left a comment (Member)

Dan, sorry to be so late in reviewing this. I have a number of comments below, mostly about the use of the export directive, which needs to be done correctly or CI fails.

But my main concern is still the graduated convexity convention. We now seem to have a plethora of schemes: the meaning of $\mu$ changes from noise model to noise model, and in fact, within a particular noise model there are different conventions based on the scheme chosen.

I know some of these predate you, in particular the very recent linear and super-linear schemes in TLS. Because it's very recent, however, I think we can still change this before we release a 4.3 version. I should have caught that inconsistency when Harneet added it.

My preferred solution would be to re-parameterize all the graduation schemes to follow a particular scheme. Because the very first mechanism implemented in GTSAM - and several people/teams already use GNC - is the STANDARD GM scheme from infinity (convex) to 1 (robust), that to me seems an obvious candidate.

Comment thread gtsam/sam/RISAM.h

namespace gtsam {

class RISAM {
Member

Please add a doxygen comment. Also, it is a non-template public class with out-of-line implementation, but it is not marked GTSAM_EXPORT - so that probably explains some of the CI failures.


namespace gtsam {

/// @brief Graduated Factor for riSAM base class
Member

Same: GraduatedFactor has out-of-line methods in .cpp and is public API-facing, but lacks GTSAM_EXPORT.

/** @brief Base class for graduation scheduling for riSAM
* Advanced users can write their own schedulers by inheriting from this class
*/
class GraduationScheduler {
Member

Same: GraduationScheduler/derived scheduler classes are non-template public types with out-of-line virtual methods. Please review Using-GTSAM-EXPORT.md guidance.

Comment thread gtsam/sam/RISAM.cpp
}

/* ************************************************************************* */
RISAM::UpdateResult RISAM::updateRobust(
Member

This method is too long and too complex. Consider splitting up so it's easier to read and review.

Comment thread gtsam/sam/RISAM.cpp
convex_factors.end());
// Internal params force update convex factors + ensure better ordering
gtsam::ISAM2UpdateParams update_params_internal;
while (remaining_convex_factors.size() > 0) {
Member

For example, this loop body could probably be a helper function?

Comment thread gtsam/sam/RISAM.cpp
}

/* ************************************************************************* */
std::set<gtsam::FactorIndex> RISAM::convexifyInvolvedFactors(
Member

Similar comment: rather complex; suggest at least key-selection and factor-selection helpers to keep logic concise and testable.

My rule: "no method longer than 4 lines" :-) excluding comments, and only approximate, but the spirit is there.

Comment thread gtsam/sam/RISAM.cpp
return convex_factors;
}

} // namespace gtsam No newline at end of file
Member

Add newline at end of all files

DOUBLES_EQUAL(0.5000, huber->loss(error4), 1e-8);
}

TEST(NoiseModel, robustFunctionHuberGraduated) {
Member

We should find a way to cut down on the copy/pasta among the new (and existing) tests. Codex 5.4 is pretty good at refactoring along those lines.

@@ -0,0 +1,127 @@
{
Member

The notebook preamble order does not match our documented standard (title/introduction, then copyright remove-cell, then Colab badge, then Colab install remove-cell, then imports/setup code). Current order has badge before copyright and no dedicated imports/setup cell.

See copilot-instructions.md

@dellaert
Member

A counter-argument could be that we should define the parameter $\mu$ for each of these models as it appears in the source papers. We might then want to be more circumspect about the differences, in the header files and "user guide" notebooks, and make sure the formulas and the references to the papers are included.
