Text Counterfactual Explanations #413
Draft
drobiu wants to merge 9 commits into JuliaTrustworthyAI:main from
Conversation
pat-alt (Member): @ceferisbarov hope you're well! This is the work in progress on CE for LLMs that I mentioned last time we spoke. Might you be interested in supporting @drobiu a bit here?
ceferisbarov (Contributor): @pat-alt Hi! I would love to. Let me review the paper and the original code and I will ping you guys.
Explained in #414:
I'm working on introducing Text Counterfactual Explanations for Language Model (LM) classifiers to CounterfactualExplanations.jl. The method I'm focusing on is Relevance-based Infilling for Textual Counterfactuals (RELITC; Python code, paper). In short, the method generates an explanation for a string (text) classified by an LM classifier by computing a feature attribution per token (a score of how much each token contributed to classifying the text into its class), masking the tokens with the highest attribution scores, and filling in the masks using a fine-tuned Conditional Masked Language Model (CMLM).
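To make the three steps concrete, here is a minimal, self-contained sketch of the attribution → mask → infill pipeline. All helpers are toy stand-ins: the real method uses an LM classifier with gradient-based token attributions and a fine-tuned CMLM, neither of which is reproduced here.

```python
def token_attributions(tokens, target):
    # Toy stand-in for per-token feature attribution: in RELITC these
    # scores come from the LM classifier, not from a lookup table.
    scores = {"terrible": 0.9, "boring": 0.7}
    return [scores.get(t, 0.1) for t in tokens]

def mask_top_tokens(tokens, scores, fraction=0.4):
    # Mask the tokens with the highest attribution scores.
    k = max(1, int(len(tokens) * fraction))
    top = sorted(range(len(tokens)), key=lambda i: -scores[i])[:k]
    return ["[MASK]" if i in top else t for i, t in enumerate(tokens)]

def infill(masked_tokens, target):
    # Toy stand-in for the CMLM: fill each [MASK] conditioned on `target`.
    fills = iter(["great", "gripping"])
    return [next(fills) if t == "[MASK]" else t for t in masked_tokens]

tokens = "the movie was terrible and boring".split()
scores = token_attributions(tokens, "positive")
masked = mask_top_tokens(tokens, scores)
counterfactual = " ".join(infill(masked, "positive"))
# -> "the movie was great and gripping"
```

The mask fraction (how many high-attribution tokens get masked) is one of the knobs the RELITC paper tunes; it is fixed at 0.4 here purely for illustration.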
To have a fully implemented version of RELITC, I think we need the following (also somewhat tracked in this project):
I'm working on those features in a separate branch, tracked in this PR: #413. For now the work lives in a Jupyter notebook, but I'm planning to port the features into the CE.jl architecture.
The `generate_counterfactual(x, target, data, M, generator)` function can be used in the following way:

- `x` being the text(s) to explain
- `target` being the target class for the CE
- `data` being optional data for fine-tuning the LM classifier or CMLM
- `M` being the LM classifier to explain
- `generator` being the RELITC method

so the function signature should be usable in this case as well.
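A hypothetical sketch of how a RELITC generator could slot into that generic call pattern. The names mirror the signature above, but none of this is the actual CounterfactualExplanations.jl API; the dispatch-on-generator-type idea is the point, and `relitc_explain` is a stub for the full pipeline.

```python
class RELITCGenerator:
    # Illustrative generator object; a real one would carry the CMLM,
    # attribution settings, and mask fraction.
    def __init__(self, mask_fraction=0.4):
        self.mask_fraction = mask_fraction

def relitc_explain(x, target, M, mask_fraction):
    # Stub: the real implementation would run attribution -> mask -> infill.
    return f"counterfactual of {x!r} toward {target}"

def generate_counterfactual(x, target, data, M, generator):
    # Dispatch on the generator type: text inputs with a RELITC generator
    # go through the RELITC pipeline; other generators would keep their
    # existing behavior (not sketched here).
    if isinstance(generator, RELITCGenerator):
        return relitc_explain(x, target, M, generator.mask_fraction)
    raise NotImplementedError("non-text generators not sketched here")

result = generate_counterfactual("bad film", "positive", None, None, RELITCGenerator())
```

In Julia this dispatch would fall out naturally from multiple dispatch on the generator's type rather than an `isinstance` check.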
Following the CounterfactualExplanations.jl spirit, we can think about interoperability with other CE methods, such as MiCE, which is a predecessor of RELITC, so it should be possible to reuse some of the code.
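One hypothetical way to share code between RELITC and MiCE: both follow a mask-then-infill scheme, so a common base type could own the infilling step while each method decides which tokens to mask. The class names and toy masking rules below are purely illustrative, not part of any existing package.

```python
class MaskThenInfillGenerator:
    # Shared skeleton: subclasses choose the masks, the base class infills.
    def select_masks(self, tokens, target):
        raise NotImplementedError

    def explain(self, tokens, target):
        masked = self.select_masks(tokens, target)
        # Stubbed shared infilling step: fill masks conditioned on `target`.
        return [t if t != "[MASK]" else f"<{target}>" for t in masked]

class RELITC(MaskThenInfillGenerator):
    def select_masks(self, tokens, target):
        # Toy rule standing in for attribution-based masking.
        return tokens[:-1] + ["[MASK]"]

class MiCE(MaskThenInfillGenerator):
    def select_masks(self, tokens, target):
        # Toy rule standing in for MiCE's gradient-based masking.
        return ["[MASK]"] + tokens[1:]
```

In CE.jl terms this would likely be an abstract generator type with method-specific subtypes, mirroring how the package already organizes its tabular generators.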