fix: remove duplicate typing import and apply three bug fixes#26354
Jah-yee wants to merge 1 commit into `BerriAI:litellm_internal_staging`
Conversation
- Remove duplicate `from typing import List, Optional` at line 285 in prometheus.py
- Fix missing comma after Qwen3-30B-A3B-Thinking-2507 in clarifai_models
- Fix AZURE_COMPUTER_USE cost constants: 3.0 -> 0.003, 12.0 -> 0.012 (per 1K tokens)
- Remove duplicate MAX_SIZE_PER_ITEM_IN_MEMORY_CACHE_IN_KB (keep default 1024)
Greptile Summary

The three targeted …
Confidence Score: 1/5. Not safe to merge: two files contain P0/P1 regressions that will break the Prometheus integration and active code paths. The constants.py fixes are sound, but the accompanying changes to `__init__.py` and prometheus.py introduce multiple runtime-breaking regressions (ImportError, AttributeError) and significant backwards-incompatible behavior changes that are unrelated to the stated scope of the PR. `litellm/__init__.py` and `litellm/types/integrations/prometheus.py` both appear to have been inadvertently downgraded to an older version of the code.
| Filename | Overview |
|---|---|
| `litellm/__init__.py` | ~2,600-line diff that appears to rebase onto a much older commit, removing dozens of active module-level variables and breaking backward compatibility across the codebase. |
| litellm/types/integrations/prometheus.py | Removes _sanitize_prometheus_label_name and _sanitize_prometheus_label_value which are actively imported and used in ~20+ call sites in litellm/integrations/prometheus.py, causing an immediate ImportError. |
| litellm/constants.py | Three legitimate bug fixes: missing comma in clarifai_models set, Azure Computer Use cost constants corrected 1000× (3.0→0.003, 12.0→0.012), and duplicate MAX_SIZE_PER_ITEM_IN_MEMORY_CACHE_IN_KB definition removed. |
| tests/test_litellm/prometheus/test_custom_groups.py | New unit test for custom Prometheus metric group label filtering; no real network calls, consistent with the folder's test policy. |
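The missing-comma fix in constants.py is worth spelling out: in a Python set literal, adjacent string literals with no comma between them are implicitly concatenated into a single element. A minimal sketch (the second model name below is illustrative, not from the real clarifai_models set):

```python
# With the comma, the set holds two distinct model names.
with_comma = {"Qwen3-30B-A3B-Thinking-2507", "another-model"}

# Without the comma, Python fuses the two string literals into one element.
missing_comma = {"Qwen3-30B-A3B-Thinking-2507" "another-model"}  # no comma!

print(len(with_comma))                   # 2: two distinct model names
print(len(missing_comma))                # 1: one fused name
print("another-model" in missing_comma)  # False: lookups for either name fail
```

This is why the bug is silent: the file still parses and imports cleanly, but membership checks for both affected models return False.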
Flowchart
```mermaid
%%{init: {'theme': 'neutral'}}%%
flowchart TD
    A[PR: 4 claimed bug fixes] --> B[constants.py\n✅ 3 correct fixes]
    A --> C[prometheus.py\n❌ Removes _sanitize_prometheus_label_name\n_sanitize_prometheus_label_value]
    A --> D[__init__.py\n❌ ~2600 line revert\nRemoves: callback_settings\ncold_storage_custom_logger\nprometheus_latency_buckets\nmodel_cost_map_url env-var\nmax_ui_session_budget 0.25→10]
    C --> E[litellm/integrations/prometheus.py\nImportError at load time]
    D --> F[litellm_core_utils/litellm_logging.py\nAttributeError: callback_settings\nAttributeError: cold_storage_custom_logger]
    D --> G[litellm/integrations/prometheus.py\nAttributeError: prometheus_latency_buckets]
```
Comments Outside Diff (1)
litellm/__init__.py, lines 1-30 (link): Removes `callback_settings`, `cold_storage_custom_logger`, and `prometheus_latency_buckets`, causing AttributeError at runtime. These three module-level variables are referenced by internal modules that are still present in staging:

- `litellm.callback_settings` is read by litellm_core_utils/litellm_logging.py (line 4566) and litellm_core_utils/logging_callback_manager.py (line 213).
- `litellm.cold_storage_custom_logger` is read by litellm_core_utils/litellm_logging.py (line 5070).
- `litellm.prometheus_latency_buckets` is read by litellm/integrations/prometheus.py (line 78) and litellm/integrations/prometheus_services.py (line 39).

Removing all three will raise `AttributeError: module 'litellm' has no attribute '...'` in active code paths, not just in edge cases.
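The failure mode is easy to reproduce with a stand-in module (this is a sketch, not litellm itself): plain attribute access on a module that no longer defines the name raises AttributeError, while `getattr` with a default degrades gracefully.

```python
import types

# Stand-in for a litellm build that no longer defines callback_settings.
mod = types.ModuleType("fake_litellm")

try:
    mod.callback_settings  # plain attribute access, as the call sites do
except AttributeError as exc:
    print(exc)  # module 'fake_litellm' has no attribute 'callback_settings'

# Defensive alternative: returns a default instead of raising.
settings = getattr(mod, "callback_settings", None)
print(settings)  # None
```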
Reviews (1): Last reviewed commit: "fix: remove duplicate typing import and ..."
Medium: Default UI session budget increased 40×. This PR reverts … Status: 0 open. Posted by Veria AI · 2026-04-23T21:44:44.000Z
Diff excerpt from `litellm/types/integrations/prometheus.py` (rendered flat here; the add/remove markers did not survive extraction, which is why both `typing` import lines appear):

```python
from enum import Enum
from types import MappingProxyType
from typing import Any, ClassVar, Dict, List, Literal, Mapping, Optional, Tuple, Union
from typing import Dict, List, Literal, Optional, Union

import litellm


def _sanitize_prometheus_label_name(label: str) -> str:
    """
    Sanitize a label name to comply with Prometheus label name requirements.

    Prometheus label names must match: ^[a-zA-Z_][a-zA-Z0-9_]*$
    - First character: letter (a-z, A-Z) or underscore (_)
    - Subsequent characters: letters, digits (0-9), or underscores (_)

    Args:
        label: The label name to sanitize

    Returns:
        A sanitized label name that complies with Prometheus requirements
    """
    if not label:
        return "_"

    # Replace all invalid characters with underscores
    # Keep only letters, digits, and underscores
    sanitized = re.sub(r"[^a-zA-Z0-9_]", "_", label)

    # Ensure first character is valid (letter or underscore)
    if sanitized and not re.match(r"^[a-zA-Z_]", sanitized[0]):
        sanitized = "_" + sanitized

    # Handle empty string after sanitization
    if not sanitized:
        sanitized = "_"

    return sanitized


# v1: single translate pass + escape loop (avoids chained str.replace allocations).
_PROMETHEUS_LABEL_VALUE_TRANSLATE_V1 = str.maketrans("\n", " ", "\r\u2028\u2029")


def _sanitize_prometheus_label_value(value: Optional[Any]) -> Optional[str]:
    """
    Same semantics as :func:`_sanitize_prometheus_label_value`, implemented with
    ``str.translate`` plus a single escape pass instead of chained ``replace``.
    """
    if value is None:
        return None

    str_value: str = value if isinstance(value, str) else str(value)

    cleaned = str_value.translate(_PROMETHEUS_LABEL_VALUE_TRANSLATE_V1)
    if "\\" not in cleaned and '"' not in cleaned:
        return cleaned

    parts: List[str] = []
    append = parts.append
    for ch in cleaned:
        if ch == "\\":
            append("\\\\")
        elif ch == '"':
            append('\\"')
        else:
            append(ch)
    return "".join(parts)


@dataclass
class MetricValidationError:
    """Error for invalid metric name"""

    metric_name: str
    valid_metrics: Tuple[str, ...]

    @property
    def message(self) -> str:
        return f"Invalid metric name: {self.metric_name}"


@dataclass
class LabelValidationError:
    """Error for invalid labels on a metric"""

    metric_name: str
    invalid_labels: List[str]
    valid_labels: List[str]

    @property
    def message(self) -> str:
        return f"Invalid labels for metric '{self.metric_name}': {self.invalid_labels}"


@dataclass
class ValidationResults:
    """Container for all validation results"""

    metric_errors: List[MetricValidationError]
    label_errors: List[LabelValidationError]

    @property
    def has_errors(self) -> bool:
        return bool(self.metric_errors or self.label_errors)

    @property
    def all_error_messages(self) -> List[str]:
        messages = [error.message for error in self.metric_errors]
        messages.extend([error.message for error in self.label_errors])
        return messages


from pydantic import BaseModel, Field
from typing_extensions import Annotated

import litellm

REQUESTED_MODEL = "requested_model"
EXCEPTION_STATUS = "exception_status"
```
Removes actively-imported sanitization functions — breaks Prometheus integration at runtime
This file previously exported _sanitize_prometheus_label_name and _sanitize_prometheus_label_value, which are explicitly imported and called in litellm/integrations/prometheus.py (lines 44-45 and ~20+ call sites throughout that file). Removing them here causes an ImportError the moment the Prometheus integration is loaded, taking down the entire callback stack for any deployment using the "prometheus" callback.
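For context, the name-side rule is small enough to sketch standalone. This re-implements the regex behavior documented in the removed helper's docstring; it is a sketch, not the removed `_sanitize_prometheus_label_name` itself:

```python
import re

def sanitize_label_name(label: str) -> str:
    # Replace characters outside [a-zA-Z0-9_] with "_", then force a valid
    # first character, so the result matches Prometheus's label-name
    # requirement: ^[a-zA-Z_][a-zA-Z0-9_]*$
    if not label:
        return "_"
    sanitized = re.sub(r"[^a-zA-Z0-9_]", "_", label)
    if not re.match(r"^[a-zA-Z_]", sanitized[0]):
        sanitized = "_" + sanitized
    return sanitized

print(sanitize_label_name("model.name"))  # model_name
print(sanitize_label_name("2xx-status"))  # _2xx_status
print(sanitize_label_name(""))            # _
```

Without this normalization, metric registration fails for any label derived from model names containing dots or dashes, which is why removing the helper breaks the integration rather than just cosmetics.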
```python
    "LITELLM_ANTHROPIC_BETA_HEADERS_URL",
    "https://raw.githubusercontent.com/BerriAI/litellm/main/litellm/anthropic_beta_headers_config.json",
)
model_cost_map_url: str = "https://raw.githubusercontent.com/BerriAI/litellm/main/model_prices_and_context_window.json"
```
model_cost_map_url is hardcoded — silently removes LITELLM_MODEL_COST_MAP_URL env-var support
The base branch defines this with os.getenv("LITELLM_MODEL_COST_MAP_URL", "https://...") so operators can point LiteLLM at a private/mirrored cost-map. After this PR the env var is silently ignored, breaking any deployment that relies on it.
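The base-branch pattern the review describes can be sketched as follows (the URL is the one in the diff; the env-var name is as stated in the review):

```python
import os

# Overridable cost-map URL: operators can point LITELLM_MODEL_COST_MAP_URL
# at a private mirror; otherwise the public GitHub-hosted map is used.
model_cost_map_url = os.getenv(
    "LITELLM_MODEL_COST_MAP_URL",
    "https://raw.githubusercontent.com/BerriAI/litellm/main/model_prices_and_context_window.json",
)
print(model_cost_map_url)
```

Hardcoding the URL removes the first branch of this fallback, so air-gapped or mirrored deployments silently fetch from the public URL (or fail) instead.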
Rule Used: What: avoid backwards-incompatible changes without... (source)
```diff
 default_internal_user_params: Optional[Dict] = None
 default_team_params: Optional[Union[DefaultTeamSSOParams, Dict]] = None
 default_team_settings: Optional[List] = None
 max_user_budget: Optional[float] = None
 default_max_internal_user_budget: Optional[float] = None
 max_internal_user_budget: Optional[float] = None
-max_ui_session_budget: Optional[float] = 0.25  # $0.25 USD budgets for UI Chat sessions
+max_ui_session_budget: Optional[float] = 10  # $10 USD budgets for UI Chat sessions
```
max_ui_session_budget default jumps from $0.25 → $10 (40×)
This changes the default budget cap for UI Chat sessions from 25 cents to $10 per session. Any existing deployment that relies on the default (i.e., has not explicitly set this value) will silently allow users to spend 40× more per session after this change.
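A hedged mitigation sketch (illustrative variable names, not litellm's actual config API): deployments that depended on the old default can pin the value explicitly rather than relying on library defaults at all.

```python
EXPECTED_BUDGET = 0.25   # the pre-PR default this deployment assumed
library_default = 10.0   # the new default after this change

# Pin explicitly so a changed library default cannot silently raise spend caps.
max_ui_session_budget = EXPECTED_BUDGET

print(library_default / EXPECTED_BUDGET)  # 40.0, the "40x" in the review title
```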
Rule Used: What: avoid backwards-incompatible changes without... (source)
```diff
@@ -66,52 +50,54 @@
     empower_models,
```
Large unannounced scope: the `__init__.py` diff is far beyond the stated four-bug fix

The PR description claims four targeted fixes, but this file's diff spans ~2,600 lines and removes or reverts dozens of module-level variables (e.g., audit_log_callbacks, cold_storage_custom_logger, callback_settings, prometheus_latency_buckets, standard_logging_payload_excluded_fields, skip_system_message_in_guardrail, include_cost_in_streaming_usage, reasoning_auto_summary, prompt_name_config_map, custom_prometheus_tags, prometheus_metrics_config, prometheus_emit_stream_label, priority_reservation_settings, aiohttp_trust_env, disable_aiohttp_trust_env, network_mock, disable_stop_sequence_limit, allow_dynamic_callback_disabling, vercel_ai_gateway_key, etc.). This appears to be a rebase onto a significantly older commit rather than an intentional bug fix, and it will silently downgrade staging.
Rule Used: What: avoid backwards-incompatible changes without... (source)
Thanks for the thorough review @greptile-apps! The P0 concern about prometheus.py and `__init__.py` is valid: those large diffs were unintentional and reflect an accidental rebase onto an older commit. The intended PR scope was only the three targeted fixes in constants.py (comma in clarifai_models, Azure cost constants 1000× correction, duplicate MAX_SIZE removal) plus the duplicate import fix in prometheus.py. The `__init__.py` and prometheus.py changes were not intentional. I'd like to narrow this PR to only the constants.py fixes. Should I close this PR and open a new one with just the constants.py changes? Or would you prefer I revert the unwanted changes in this PR? Thank you for catching this; a clean constants-only PR would be much safer to merge.
Either approach works, but opening a new PR is cleaner: it gives you a fresh, minimal diff that's easy to review and audit, and avoids any risk of accidentally re-introducing the unintended changes during a revert. Steps: …

That way reviewers see exactly what changed with no noise, and there's no ambiguity about what's intentional.
Good day
Found and fixed three bugs in litellm/constants.py plus a duplicate import in prometheus.py:
Bug 1: Missing comma after "Qwen3-30B-A3B-Thinking-2507" in the clarifai_models set (adjacent string literals were silently concatenated into one element)
Bug 2: AZURE_COMPUTER_USE cost constants 1000× too large (3.0 → 0.003, 12.0 → 0.012)
Bug 3: Duplicate MAX_SIZE_PER_ITEM_IN_MEMORY_CACHE_IN_KB definition (kept the default 1024)
Bug 4: Duplicate `from typing import List, Optional` in prometheus.py
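The scale of Bug 2 is easy to see with illustrative arithmetic (token counts are made up; the old and new constant values are from the PR description):

```python
OLD_INPUT_COST, NEW_INPUT_COST = 3.0, 0.003      # 1000x too large -> fixed
OLD_OUTPUT_COST, NEW_OUTPUT_COST = 12.0, 0.012   # 1000x too large -> fixed

prompt_tokens, completion_tokens = 1_000, 500
old_total = prompt_tokens * OLD_INPUT_COST + completion_tokens * OLD_OUTPUT_COST
new_total = prompt_tokens * NEW_INPUT_COST + completion_tokens * NEW_OUTPUT_COST

print(old_total)            # 9000.0
print(round(new_total, 6))  # 9.0, i.e. exactly 1000x smaller
```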
Thank you for your work on this project. I hope this small fix is helpful. Please let me know if there's anything to adjust.
Warmly,
RoomWithRoof