feat: add Telnyx as OpenAI-compatible provider #26371

gbattistel wants to merge 11 commits into BerriAI:litellm_internal_staging
Low: No security issues found. This PR adds Telnyx as a new OpenAI-compatible provider. The changes are limited to documentation, a JSON configuration entry (base URL and env var name for the API key), an enum addition, a sidebar entry, endpoint support metadata, and unit tests. No new code paths handle user input, authentication, or data processing; the provider is wired through the existing JSON-based provider system. Status: 0 open. Posted by Veria AI · 2026-04-24T00:21:32.220Z
Greptile Summary: This PR registers Telnyx as an OpenAI-compatible provider. Confidence Score: 5/5. Safe to merge: all changes are additive and follow established patterns with no backwards-incompatible modifications. All findings are P2 style suggestions. The core implementation (providers.json, enum, endpoint support matrix) is correct and consistent with existing JSON-registered providers. Previously reported issues are addressed. No network calls in tests, no security concerns, no breaking changes. No files require special attention.
| Filename | Overview |
|---|---|
| litellm/llms/openai_like/providers.json | Adds telnyx entry with correct base_url and api_key_env; consistent with existing JSON-provider pattern |
| litellm/types/utils.py | Adds TELNYX = "telnyx" to LlmProviders enum; clean placement and consistent with other entries |
| provider_endpoints_support.json | Adds Telnyx endpoint support matrix with chat_completions and embeddings enabled; all other endpoints correctly set to false |
| tests/test_litellm/llms/openai_like/test_telnyx_provider.py | Unit tests cover provider registration, config values, resolution, and enum; no real network calls; uses sys.path.insert workspace hack and an if __name__ == "__main__" runner block |
| docs/my-website/docs/providers/telnyx.md | Provider documentation with accurate base URL, model table, and usage examples; the OpenAI SDK example correctly uses the /openai-suffixed endpoint per Telnyx's quickstart docs |
| docs/my-website/sidebars.js | Adds a single providers/telnyx entry in alphabetical order; previously reported triple-duplicate is resolved |
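Based on the two fields named in the table above, the new providers.json entry presumably looks like the following; the base_url and api_key_env values are taken from the PR, while the exact surrounding JSON structure is an assumption:

```json
{
  "telnyx": {
    "base_url": "https://api.telnyx.com/v2/ai",
    "api_key_env": "TELNYX_API_KEY"
  }
}
```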
Sequence Diagram

```mermaid
sequenceDiagram
    participant User
    participant LiteLLM
    participant JSONProviderRegistry
    participant TelnyxAPI
    User->>LiteLLM: completion(model="telnyx/moonshotai/Kimi-K2.6")
    LiteLLM->>JSONProviderRegistry: exists("telnyx")?
    JSONProviderRegistry-->>LiteLLM: true (providers.json entry)
    LiteLLM->>JSONProviderRegistry: get("telnyx")
    JSONProviderRegistry-->>LiteLLM: {base_url, api_key_env}
    LiteLLM->>LiteLLM: resolve api_base = "https://api.telnyx.com/v2/ai"
    LiteLLM->>LiteLLM: resolve api_key from TELNYX_API_KEY env
    LiteLLM->>TelnyxAPI: POST /v2/ai/chat/completions
    TelnyxAPI-->>LiteLLM: OpenAI-compatible response
    LiteLLM-->>User: ModelResponse
```
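The resolution flow in the diagram can be sketched in plain Python. This is an illustration of the lookup, not LiteLLM's actual internals: the `PROVIDERS` dict stands in for providers.json, and `resolve_provider` is a hypothetical helper name.

```python
import os

# Illustrative stand-in for litellm/llms/openai_like/providers.json
PROVIDERS = {
    "telnyx": {
        "base_url": "https://api.telnyx.com/v2/ai",
        "api_key_env": "TELNYX_API_KEY",
    }
}

def resolve_provider(model: str) -> dict:
    """Split a 'provider/model' string, then look up base URL and API key.

    Mirrors the sequence diagram: registry lookup, api_base resolution,
    and api_key resolution from the provider's env var. Error handling
    is simplified for illustration.
    """
    provider, _, model_name = model.partition("/")
    if provider not in PROVIDERS:
        raise ValueError(f"unknown provider: {provider}")
    entry = PROVIDERS[provider]
    return {
        "provider": provider,
        "model": model_name,
        "api_base": entry["base_url"],
        "api_key": os.environ.get(entry["api_key_env"]),
    }

resolved = resolve_provider("telnyx/moonshotai/Kimi-K2.6")
print(resolved["provider"])  # telnyx
print(resolved["model"])     # moonshotai/Kimi-K2.6
print(resolved["api_base"])  # https://api.telnyx.com/v2/ai
```

Note that only the first `/` separates provider from model, so multi-segment model names like `moonshotai/Kimi-K2.6` pass through intact.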
Reviews (3): Last reviewed commit: "fix: remove fragile pytest import guard ..."
Codecov Report: ✅ All modified and coverable lines are covered by tests.
Summary

Adds Telnyx as an OpenAI-compatible provider in LiteLLM via the providers.json configuration system. Telnyx provides an OpenAI-compatible Inference API for LLM access to hosted models (Kimi-K2.6, GLM-5.1, MiniMax-M2.7, and more).

Changes

- litellm/llms/openai_like/providers.json: add Telnyx provider entry with base_url: https://api.telnyx.com/v2/ai and api_key_env: TELNYX_API_KEY
- litellm/types/utils.py: add TELNYX = "telnyx" to the LlmProviders enum
- provider_endpoints_support.json: add Telnyx endpoint support matrix (chat completions + embeddings)
- tests/test_litellm/llms/openai_like/test_telnyx_provider.py: unit tests for provider config, resolution, and enum
- docs/my-website/docs/providers/telnyx.md: provider documentation
- docs/my-website/sidebars.js: add Telnyx to the providers sidebar

Usage
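The original usage snippet did not survive extraction. As a hedged sketch, calling a JSON-registered provider through LiteLLM typically goes through `litellm.completion` with a `provider/model` string; the model name below is taken from the sequence diagram above, and the snippet is guarded so it is safe to run without litellm installed or credentials set.

```python
import os

# Illustrative usage sketch (not the PR's original snippet): call a Telnyx
# model through LiteLLM. Requires `pip install litellm` and a valid
# TELNYX_API_KEY; the guard below keeps this safe to run without either.
model = "telnyx/moonshotai/Kimi-K2.6"
messages = [{"role": "user", "content": "Hello from LiteLLM!"}]

if os.environ.get("TELNYX_API_KEY"):
    from litellm import completion  # provider resolved via providers.json
    response = completion(model=model, messages=messages)
    print(response.choices[0].message.content)
else:
    print("Set TELNYX_API_KEY to run this example.")
```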
Testing
All 7 unit tests pass:
- Provider prefix resolution (telnyx/model-name → provider=telnyx)

References