Migrating MPNN tutorial to Keras3 #2289
LakshmiKalaKadali wants to merge 5 commits into keras-team:master
Conversation
Summary of Changes

Hello @LakshmiKalaKadali, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request updates the Message Passing Neural Network (MPNN) molecular property prediction tutorial to be fully compatible with Keras 3. The core changes involve adapting the graph representation to be backend-agnostic using fixed-size buffers and masks, and overhauling the data loading process.
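The summary above mentions making the graph representation backend-agnostic by padding each molecule into fixed-size buffers with masks. A minimal NumPy sketch of that idea (the helper name and shapes here are hypothetical, not the tutorial's actual featurization code):

```python
import numpy as np

def pad_graph(atom_features, max_atoms):
    """Pad a (num_atoms, num_features) array to (max_atoms, num_features)
    and return a boolean mask marking which rows are real atoms.

    Hypothetical helper illustrating the fixed-size-buffer idea; the PR's
    actual featurization code may differ.
    """
    num_atoms, num_features = atom_features.shape
    padded = np.zeros((max_atoms, num_features), dtype=atom_features.dtype)
    padded[:num_atoms] = atom_features
    mask = np.zeros(max_atoms, dtype=bool)
    mask[:num_atoms] = True
    return padded, mask

# Two molecules with 3 and 5 atoms, 4 features each, padded to a common size
# so every graph in a batch has the same shape on any Keras backend.
small = np.ones((3, 4), dtype=np.float32)
large = np.ones((5, 4), dtype=np.float32)
padded_small, mask_small = pad_graph(small, max_atoms=8)
padded_large, mask_large = pad_graph(large, max_atoms=8)
```

Because every padded graph has identical shape, batches stack into plain dense tensors, and the mask lets downstream layers ignore the padding rows.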
Code Review
This pull request does an excellent job of migrating the Message Passing Neural Network (MPNN) tutorial from Keras 2 to Keras 3, making it backend-agnostic. The changes significantly improve the code quality by introducing a more stable GatedReadout layer, optimizing the data pipeline with pre-featurization, and adopting modern Keras 3 practices like keras.ops and PyDataset. The model definition and training configuration have also been enhanced. I've identified a couple of areas for improvement: some redundant code has been introduced during the refactoring, and there's a minor formatting issue in a docstring. Overall, this is a high-quality contribution.
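The review notes the move to PyDataset for the data pipeline. The batching contract that keras.utils.PyDataset subclasses implement is just __len__ (batches per epoch) and __getitem__ (one batch by index); a dependency-free sketch of that contract, with hypothetical names and shapes rather than the PR's real class:

```python
import math
import numpy as np

class GraphBatches:
    """Stand-in showing the __len__/__getitem__ contract that
    keras.utils.PyDataset subclasses implement. Names and shapes here are
    illustrative assumptions, not the tutorial's actual loader."""

    def __init__(self, features, labels, batch_size):
        self.features = features  # (num_graphs, max_atoms, num_features)
        self.labels = labels      # (num_graphs,)
        self.batch_size = batch_size

    def __len__(self):
        # Number of batches per epoch (last batch may be smaller).
        return math.ceil(len(self.labels) / self.batch_size)

    def __getitem__(self, idx):
        # Return one (inputs, targets) batch by index.
        lo = idx * self.batch_size
        hi = lo + self.batch_size
        return self.features[lo:hi], self.labels[lo:hi]

features = np.zeros((10, 8, 4), dtype=np.float32)
labels = np.zeros(10, dtype=np.float32)
ds = GraphBatches(features, labels, batch_size=4)
```

A real implementation would subclass keras.utils.PyDataset and pass the instance straight to model.fit, which iterates it exactly via these two methods.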
sachinprasadhs
left a comment
Thanks! Left the review comments.
Below are some of the comments from the existing version.
- Remove unnecessary underscores from "_ message passing neural network_ (MPNN) to predict graph properties. Specifically, we will
implement an MPNN to predict a molecular property known as
blood-brain barrier permeability (BBBP)." section.
sachinprasadhs
left a comment
Wait, none of the previous comments look addressed! Everything is marked as resolved, but none are addressed!
sachinprasadhs
left a comment
Just one small comment: make the change and regenerate the other files from the changed code.
Everything else looks good.
Old:
In this tutorial, we will implement a type of graph neural network (GNN) known as
_ message passing neural network_ (MPNN) to predict graph properties. Specifically, we will

New:
In this tutorial, we will implement a type of graph neural network (GNN)
known as _ message passing neural network_ (MPNN) to predict graph
Change it to _message passing neural network_ by removing the space after the first _.
This PR migrates the Message Passing Neural Network (MPNN) molecular property prediction tutorial from Keras 2 to Keras 3. It replaces the legacy PartitionPadding and Transformer readout with a GatedReadout layer, which provides a more stable attention mechanism for small datasets like BBBP. gist
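A gated readout of the kind described here can be sketched as a per-node sigmoid gate applied to a linear projection of the node state, followed by a mask-weighted sum into one graph-level vector. The NumPy version below is an illustrative stand-in, not the PR's actual GatedReadout layer (weight names and shapes are assumptions; the real layer would be a keras.layers.Layer built on keras.ops):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_readout(node_states, mask, w_gate, w_feat):
    """Gated readout sketch: each node's projected features are scaled by a
    sigmoid gate computed from the same node state, padded nodes are zeroed
    by the mask, and the result is summed into a single graph vector.
    Hypothetical stand-in for the PR's GatedReadout layer."""
    gate = sigmoid(node_states @ w_gate)   # (num_nodes, out_dim) in (0, 1)
    feat = node_states @ w_feat            # (num_nodes, out_dim)
    gated = gate * feat * mask[:, None]    # zero out padded nodes
    return gated.sum(axis=0)               # (out_dim,) graph-level vector

rng = np.random.default_rng(0)
states = rng.normal(size=(8, 4)).astype(np.float32)   # 8 slots, 4 features
mask = np.array([1, 1, 1, 0, 0, 0, 0, 0], dtype=np.float32)  # 3 real atoms
w_gate = rng.normal(size=(4, 6)).astype(np.float32)
w_feat = rng.normal(size=(4, 6)).astype(np.float32)
graph_vector = gated_readout(states, mask, w_gate, w_feat)
```

Compared with a Transformer readout, this gating has no softmax over nodes and far fewer parameters, which is plausibly why it behaves more stably on a small dataset like BBBP.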