
Switched langchain llm callbacks to use genai utils handler#3889

Merged
aabmass merged 32 commits into open-telemetry:main from
wrisa:genai-instrumentation-langchain-inference-using-geni-utils
Feb 6, 2026

Conversation

@wrisa
Contributor

@wrisa wrisa commented Oct 20, 2025

Description

PR #3768 (merged) added LLM span support to genai utils.
This PR removes telemetry creation from LangChain's LLM callbacks and delegates to the genai utils handler instead.
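To illustrate the pattern this PR adopts, here is a minimal sketch of a LangChain-style callback that delegates span lifecycle to a shared handler instead of creating telemetry itself. All names below (TelemetryHandler, LLMInvocation, ChatCallbackHandler) are illustrative stand-ins, not the actual opentelemetry-util-genai or instrumentation API.

```python
from dataclasses import dataclass, field

@dataclass
class LLMInvocation:
    # Stand-in for a genai-utils invocation record.
    request_model: str
    provider: str
    attributes: dict = field(default_factory=dict)

class TelemetryHandler:
    """Stand-in for a genai-utils handler that owns span start/stop."""
    def __init__(self):
        self.events = []

    def start_llm(self, invocation):
        self.events.append(("start", invocation.request_model))

    def stop_llm(self, invocation):
        self.events.append(("stop", invocation.request_model))

class ChatCallbackHandler:
    """LangChain-style callback that no longer builds spans itself;
    it only maps run ids to invocations and delegates to the handler."""
    def __init__(self, handler):
        self._handler = handler
        self._invocations = {}

    def on_chat_model_start(self, run_id, model, provider):
        inv = LLMInvocation(request_model=model, provider=provider)
        self._invocations[run_id] = inv
        self._handler.start_llm(inv)

    def on_llm_end(self, run_id):
        inv = self._invocations.pop(run_id)
        self._handler.stop_llm(inv)

handler = TelemetryHandler()
callbacks = ChatCallbackHandler(handler)
callbacks.on_chat_model_start("run-1", "gpt-3.5-turbo", "openai")
callbacks.on_llm_end("run-1")
```

The key design point is that the callback keeps only the run-id-to-invocation bookkeeping; everything span-related lives in the handler, so other instrumentations can share the same telemetry code path.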

New LLM span attributes using utils:

  • gen_ai.operation.name: Str(chat)
  • gen_ai.request.model: Str(gpt-3.5-turbo)
  • gen_ai.provider.name: Str(openai)
  • gen_ai.input.messages: Str([{"role":"system","parts":[{"content":"You are a helpful assistant!","type":"text"}]},{"role":"human","parts":[{"content":"What is the capital of France?","type":"text"}]}])
  • gen_ai.output.messages: Str([{"role":"ai","parts":[{"content":"The capital of France is Paris.","type":"text"}],"finish_reason":"stop"}])
  • gen_ai.request.temperature: Double(0.1)
  • gen_ai.request.top_p: Double(0.9)
  • gen_ai.request.frequency_penalty: Double(0.5)
  • gen_ai.request.presence_penalty: Double(0.5)
  • gen_ai.request.max_tokens: Int(100)
  • gen_ai.request.stop_sequences: Slice(["\n","Human:","AI:"])
  • gen_ai.request.seed: Int(100)
  • gen_ai.response.finish_reasons: Slice(["stop"])
  • gen_ai.response.model: Str(gpt-3.5-turbo-0125)
  • gen_ai.response.id: Str(chatcmpl-Cvqdva0q8IamQKHpMRTtcjK9ixvm0)
  • gen_ai.usage.input_tokens: Int(24)
  • gen_ai.usage.output_tokens: Int(7)
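The structured gen_ai.input.messages value shown above is compact JSON over role/parts message dicts. A minimal sketch of producing it (the helper name is illustrative, not the actual genai-utils serializer):

```python
import json

def to_input_messages_attr(messages):
    # Serialize role/parts message dicts into the compact JSON string
    # recorded under gen_ai.input.messages. separators=(",", ":") drops
    # the whitespace, matching the attribute value shown above.
    return json.dumps(messages, separators=(",", ":"))

attr = to_input_messages_attr([
    {"role": "system",
     "parts": [{"content": "You are a helpful assistant!", "type": "text"}]},
    {"role": "human",
     "parts": [{"content": "What is the capital of France?", "type": "text"}]},
])
```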

Old LLM span attributes before using utils:

  • gen_ai.operation.name: Str(chat)
  • gen_ai.request.model: Str(gpt-3.5-turbo)
  • gen_ai.request.top_p: Double(0.9)
  • gen_ai.request.frequency_penalty: Double(0.5)
  • gen_ai.request.presence_penalty: Double(0.5)
  • gen_ai.request.stop_sequences: Slice(["\n","Human:","AI:"])
  • gen_ai.request.seed: Int(100)
  • gen_ai.provider.name: Str(openai)
  • gen_ai.request.temperature: Double(0.1)
  • gen_ai.request.max_tokens: Int(100)
  • gen_ai.usage.input_tokens: Int(24)
  • gen_ai.usage.output_tokens: Int(7)
  • gen_ai.response.finish_reasons: Slice(["stop"])
  • gen_ai.response.model: Str(gpt-3.5-turbo-0125)
  • gen_ai.response.id: Str(chatcmpl-CvqieNzCqFnyKWXXmVGrbiORoNhh6)

Type of change

Please delete options that are not relevant.

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to not work as expected)
  • This change requires a documentation update

How Has This Been Tested?

Please describe the tests you ran to verify your changes, provide instructions so we can reproduce them, and list any relevant details of your test configuration.

  • Test A

Does This PR Require a Core Repo Change?

  • Yes. - Link to PR:
  • No.

Checklist:

See contributing.md for styleguide, changelog guidelines, and more.

  • Followed the style guidelines of this project
  • Changelogs have been updated
  • Unit tests have been added
  • Documentation has been updated

@wrisa wrisa marked this pull request as ready for review January 12, 2026 21:38
@wrisa wrisa requested a review from a team as a code owner January 12, 2026 21:38
@shuwpan

shuwpan commented Jan 14, 2026

LGTM overall. I have one question: is adding SUPPRESS_LANGUAGE_MODEL_INSTRUMENTATION_KEY planned for a future PR?

I also noticed that Bedrock instrumentation exists in botocore, so we might see a similar duplicate span issue there. That can be addressed in a follow-up PR.
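The duplicate-span concern above is typically solved with a context-scoped suppression flag: the outer instrumentation marks the invocation as claimed, and the inner one checks the flag and skips its own span. A hedged sketch of that shape, using stdlib contextvars for self-containment (the key name comes from the review comment; a real implementation would use opentelemetry.context rather than a bare ContextVar):

```python
from contextvars import ContextVar

# Illustrative stand-in for a context key like
# SUPPRESS_LANGUAGE_MODEL_INSTRUMENTATION_KEY.
_SUPPRESS_LLM_INSTRUMENTATION = ContextVar(
    "suppress_language_model_instrumentation", default=False
)

def inner_llm_span():
    # Inner instrumentation (e.g. a provider SDK hook such as Bedrock in
    # botocore) skips its span when an outer layer already claimed the call.
    if _SUPPRESS_LLM_INSTRUMENTATION.get():
        return "suppressed"
    return "recorded"

def outer_llm_span():
    # Outer instrumentation (e.g. the LangChain callback) sets the flag
    # for the duration of the call so only one span is produced.
    token = _SUPPRESS_LLM_INSTRUMENTATION.set(True)
    try:
        return inner_llm_span()
    finally:
        _SUPPRESS_LLM_INSTRUMENTATION.reset(token)
```

Because the flag is context-scoped and reset in a finally block, suppression never leaks into unrelated invocations on the same thread or task.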

Contributor

@keith-decker keith-decker left a comment


Just a question on the invocation manager. Otherwise LGTM.

I like that you've added the start of the separation of genai utils types. We're going to have to do this as more types come in.

@aabmass
Member

aabmass commented Jan 29, 2026

I pushed 14f22d5 which fixes the unknown types of all the langchain imports. This revealed some actual issues which I've left open. Can you please fix them? Otherwise it looks good to merge.

@aabmass aabmass force-pushed the genai-instrumentation-langchain-inference-using-geni-utils branch from 14f22d5 to 64fb042 Compare January 29, 2026 20:46
Member

@pmcollins pmcollins left a comment


LGTM once other suggestions have been addressed. Added some suggestions around testing.

@aabmass aabmass merged commit e381a36 into open-telemetry:main Feb 6, 2026
670 of 671 checks passed
sightseeker added a commit to sightseeker/opentelemetry-python-contrib that referenced this pull request Mar 11, 2026
…emetry#3889)

* Removed telemetry from inference callbacks and added calls to genai utils apis instead.

* Fixed errors

* Fixed typecheck errors

* Fixed typecheck errors

* Fixed precommit errors

* added util dependency

* updated invocation manager

* removed unnecessary dependencies

* fixed precommit

* fixed typecheck

* fixed precommit

* fixed test

* removed unnecessary line

* addressed comments

* fixed errors

* fixed precommit

* removed get_property_value method

* fixed format

* Make tox -e typecheck install langchain, and fixed some of the type
errors. The remaining ones seem like real issues that need to be fixed.

* Please fix reportPossiblyUnboundVariable

* fixed typecheck

* added and updated tests

* Fixed conflicts and removed unnecessary changes

* Fixed precommit

---------

Co-authored-by: aaronabbott <aaronabbott@google.com>
