Switched langchain llm callbacks to use genai utils handler (#3889)
Conversation
LGTM overall. I have one question: is adding SUPPRESS_LANGUAGE_MODEL_INSTRUMENTATION_KEY planned for a future PR? I also noticed that Bedrock instrumentation exists in botocore, so we might see a similar duplicate span issue there. That can be addressed in a follow-up PR.
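For context on the suppression key mentioned above: the usual pattern is a context-scoped flag that an outer instrumentation sets so inner instrumentations (e.g. botocore/Bedrock) skip emitting their own LLM spans. The sketch below illustrates the pattern with stdlib `contextvars`; a real implementation would use `opentelemetry.context` and a key shared via genai utils, and the key name here is taken from the review comment, not from any existing genai utils symbol.

```python
# Sketch of duplicate-span suppression using stdlib contextvars.
# The real implementation would use opentelemetry.context; the key name
# below comes from the review comment and is hypothetical.
from contextvars import ContextVar

SUPPRESS_LANGUAGE_MODEL_INSTRUMENTATION_KEY: ContextVar[bool] = ContextVar(
    "suppress_language_model_instrumentation", default=False
)

def with_llm_instrumentation_suppressed(do_call):
    """Outer layer: run do_call with nested LLM instrumentation suppressed."""
    token = SUPPRESS_LANGUAGE_MODEL_INSTRUMENTATION_KEY.set(True)
    try:
        return do_call()
    finally:
        # Restore the previous value so suppression stays scoped to this call.
        SUPPRESS_LANGUAGE_MODEL_INSTRUMENTATION_KEY.reset(token)

def should_skip_llm_span() -> bool:
    """Inner layer (e.g. botocore/Bedrock) checks this before emitting a span."""
    return SUPPRESS_LANGUAGE_MODEL_INSTRUMENTATION_KEY.get()
```

Because the flag is context-scoped, it automatically unwinds after the outer call, so unrelated Bedrock calls outside a langchain invocation are still instrumented.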
keith-decker left a comment:
Just a question on the invocation manager. Otherwise LGTM.
I like that you've added the start of the separation of genai utils types. We're going to have to do this as more types come in.
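On the invocation manager question above: the usual shape for such a component is a mapping from langchain's `run_id` to the in-flight invocation state handed to the genai utils handler. The sketch below is illustrative only; the class and method names (`InvocationManager`, `start`/`get`/`finish`) are hypothetical and not the PR's actual API.

```python
# Hypothetical sketch of an invocation manager: maps a langchain run_id
# to the in-flight invocation state until the callback completes.
from dataclasses import dataclass, field
from typing import Any, Dict, Optional
from uuid import UUID, uuid4

@dataclass
class Invocation:
    run_id: UUID
    request_model: str
    attributes: Dict[str, Any] = field(default_factory=dict)

class InvocationManager:
    def __init__(self) -> None:
        self._invocations: Dict[UUID, Invocation] = {}

    def start(self, run_id: UUID, request_model: str) -> Invocation:
        """Record a new in-flight invocation keyed by run_id."""
        inv = Invocation(run_id=run_id, request_model=request_model)
        self._invocations[run_id] = inv
        return inv

    def get(self, run_id: UUID) -> Optional[Invocation]:
        """Look up an in-flight invocation, or None if unknown."""
        return self._invocations.get(run_id)

    def finish(self, run_id: UUID) -> Optional[Invocation]:
        """Remove the entry on end/error so abandoned runs don't leak."""
        return self._invocations.pop(run_id, None)
```

Popping entries in `finish` (rather than only reading them) is what keeps the map from growing unboundedly when callbacks error out.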
I pushed 14f22d5 which fixes the unknown types of all the langchain imports. This revealed some actual issues which I've left open. Can you please fix them? Otherwise it looks good to merge.
Force-pushed from 14f22d5 to 64fb042.
pmcollins left a comment:
LGTM once other suggestions have been addressed. Added some suggestions around testing.
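As a sketch of the kind of test the comment suggests: the `gen_ai.input.messages` attribute shown in the description is a JSON-encoded list of `{"role": ..., "parts": [...]}` objects, so a unit test can serialize a few chat messages and assert on the parsed structure. The helper `to_genai_messages` below is hypothetical, written only to mirror the attribute format from this PR's description.

```python
# Illustrative test of the gen_ai.input.messages shape from the PR
# description. to_genai_messages is a hypothetical helper, not part of
# the instrumentation's API.
import json
from typing import Dict, List

def to_genai_messages(messages: List[Dict[str, str]]) -> str:
    """Serialize (role, content) pairs into the gen_ai.input.messages shape."""
    return json.dumps(
        [
            {"role": m["role"], "parts": [{"content": m["content"], "type": "text"}]}
            for m in messages
        ],
        separators=(",", ":"),
    )

def test_input_messages_shape() -> None:
    value = to_genai_messages(
        [
            {"role": "system", "content": "You are a helpful assistant!"},
            {"role": "human", "content": "What is the capital of France?"},
        ]
    )
    parsed = json.loads(value)
    # Roles and part types should survive the round trip intact.
    assert [m["role"] for m in parsed] == ["system", "human"]
    assert parsed[0]["parts"][0]["type"] == "text"
```

Asserting on the parsed JSON (rather than raw string equality) keeps such tests robust to key-ordering differences in the serializer.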
…emetry#3889)

* Removed telemetry from inference callbacks and added calls to genai utils apis instead.
* Fixed errors
* Fixed typecheck errors
* Fixed typecheck errors
* Fixed precommit errors
* Added util dependency
* Updated invocation manager
* Removed unnecessary dependencies
* Fixed precommit
* Fixed typecheck
* Fixed precommit
* Fixed test
* Removed unnecessary line
* Addressed comments
* Fixed errors
* Fixed precommit
* Removed get_property_value method
* Fixed format
* Make tox -e typecheck install langchain, and fixed some of the type errors. The remaining ones seem like real issues that need to be fixed.
* Please fix reportPossiblyUnboundVariable
* Fixed typecheck
* Added and updated tests
* Fixed conflicts and removed unnecessary changes
* Fixed precommit

Co-authored-by: aaronabbott <aaronabbott@google.com>
Description
PR #3768 (merged) added LLM span support in genai utils.
This PR removes telemetry creation from langchain's LLM callbacks and delegates to the genai utils handler instead.
New LLM span attributes using utils:

Attributes:
-> gen_ai.operation.name: Str(chat)
-> gen_ai.request.model: Str(gpt-3.5-turbo)
-> gen_ai.provider.name: Str(openai)
-> gen_ai.input.messages: Str([{"role":"system","parts":[{"content":"You are a helpful assistant!","type":"text"}]},{"role":"human","parts":[{"content":"What is the capital of France?","type":"text"}]}])
-> gen_ai.output.messages: Str([{"role":"ai","parts":[{"content":"The capital of France is Paris.","type":"text"}],"finish_reason":"stop"}])
-> gen_ai.request.temperature: Double(0.1)
-> gen_ai.request.top_p: Double(0.9)
-> gen_ai.request.frequency_penalty: Double(0.5)
-> gen_ai.request.presence_penalty: Double(0.5)
-> gen_ai.request.max_tokens: Int(100)
-> gen_ai.request.stop_sequences: Slice(["\n","Human:","AI:"])
-> gen_ai.request.seed: Int(100)
-> gen_ai.response.finish_reasons: Slice(["stop"])
-> gen_ai.response.model: Str(gpt-3.5-turbo-0125)
-> gen_ai.response.id: Str(chatcmpl-Cvqdva0q8IamQKHpMRTtcjK9ixvm0)
-> gen_ai.usage.input_tokens: Int(24)
-> gen_ai.usage.output_tokens: Int(7)

Old LLM span attributes before using utils:

Attributes:
-> gen_ai.operation.name: Str(chat)
-> gen_ai.request.model: Str(gpt-3.5-turbo)
-> gen_ai.request.top_p: Double(0.9)
-> gen_ai.request.frequency_penalty: Double(0.5)
-> gen_ai.request.presence_penalty: Double(0.5)
-> gen_ai.request.stop_sequences: Slice(["\n","Human:","AI:"])
-> gen_ai.request.seed: Int(100)
-> gen_ai.provider.name: Str(openai)
-> gen_ai.request.temperature: Double(0.1)
-> gen_ai.request.max_tokens: Int(100)
-> gen_ai.usage.input_tokens: Int(24)
-> gen_ai.usage.output_tokens: Int(7)
-> gen_ai.response.finish_reasons: Slice(["stop"])
-> gen_ai.response.model: Str(gpt-3.5-turbo-0125)
-> gen_ai.response.id: Str(chatcmpl-CvqieNzCqFnyKWXXmVGrbiORoNhh6)

Type of change
How Has This Been Tested?
Does This PR Require a Core Repo Change?
Checklist:
See contributing.md for the style guide, changelog guidelines, and more.