Python: Add Kelly Intelligence chat completion concept sample (related to #13694)#13784
Conversation
Adds a new concept sample showing how to use Semantic Kernel with Kelly Intelligence (https://api.thedailylesson.com), a hosted OpenAI-compatible API with a built-in 162,000-word vocabulary RAG layer and an AI tutor persona, built on top of Claude. Operated by Lesson of the Day, PBC, a public benefit corporation. The sample mirrors the existing `lm_studio_chat_completion.py` and `foundry_local_chatbot.py` patterns exactly: an `AsyncOpenAI` client pointed at a custom `base_url`, wrapped in `OpenAIChatCompletion`, with a `ChatHistory` and chat loop. No new connector, no new abstraction — this is a pure example of the existing OpenAI-compatible pattern applied to a third-party hosted endpoint. The sample uses the free `kelly-haiku` model id and reads `KELLY_API_KEY` from the environment. A free key (no credit card) is available at https://api.thedailylesson.com. Reviewers can also smoke-test the API without any signup using the public `/v1/demo` endpoint, which is rate-limited at 5 requests per hour per IP and uses the same wire format. Related to microsoft#13694.
@nicoletterankin please read the following Contributor License Agreement(CLA). If you agree with the CLA, please reply with the following information.
Contributor License Agreement

This Contribution License Agreement (“Agreement”) is agreed to by the party signing below (“You”),
Automated Code Review
Reviewers: 4 | Confidence: 92%
✓ Correctness
This new sample file is a straightforward adaptation of the existing `lm_studio_chat_completion.py` sample, pointing at a different hosted API (Kelly Intelligence). The code structure, chat-history management pattern (invoke first, then append user/assistant messages), prompt template, and async loop all exactly mirror the established patterns used across other samples in the `local_models/` directory. No correctness bugs, race conditions, or incorrect API usage found.
✗ Security & Reliability
This sample adds a chat completion integration with a third-party hosted commercial service (api.thedailylesson.com) placed under the `local_models/` directory. The primary security concern is that `os.environ.get("KELLY_API_KEY", "fake-key")` silently falls back to sending requests to an external third-party endpoint when no API key is configured—unlike the existing `fake-key` patterns in this directory, which all target `localhost`. The other samples (lm_studio, ollama) use `fake-key` because auth is irrelevant on a local server, but here it means user input is silently sent over the network to a non-Microsoft third party without the user explicitly opting in. Additionally, the file contains significant promotional content for a specific commercial service, which is atypical for official SDK samples.
✗ Test Coverage
The new kelly_intelligence_chat_completion.py sample follows the established pattern (matches lm_studio_chat_completion.py almost exactly), but is not registered in python/tests/samples/test_concepts.py. Every other sample in the local_models directory has both a top-level import and a parametrized test entry (with pytest.mark.skip for services requiring external setup). This omission breaks the project convention and means the sample won't even be validated for import correctness.
✗ Design Approach
This sample adds no new technical concept beyond what `lm_studio_chat_completion.py` already demonstrates. The code is structurally identical — it is the same `AsyncOpenAI(base_url=...)` + `OpenAIChatCompletion` pattern, just pointed at a different URL. The file is placed in `local_models/` despite connecting to a hosted third-party commercial service, and the inline comment even acknowledges the mismatch ('Although this file lives under local_models'). The sample reads less like a technical concept demonstration and more like promotional content for a specific commercial API (Lesson of the Day, PBC): it names the company, links their signup page, describes their free tier, and uses their proprietary vocabulary-tutor persona throughout. None of the other samples in this directory advertise a specific third-party commercial service.
Flagged Issues
- The `fake-key` default for `KELLY_API_KEY` silently sends user input to a remote third-party service (api.thedailylesson.com) when the env var is unset. Unlike the existing `local_models` samples that use `fake-key` with localhost endpoints, this targets an external host. The sample should fail fast if the key is missing, consistent with how the DeepSeek integration handles its API key.
- The sample is placed in `local_models/` despite connecting to a remote hosted commercial API—not a local model. Every other entry in this directory (LM Studio, Ollama, ONNX, Foundry Local) runs locally. The inline comment even acknowledges the mismatch. It should be moved to an appropriate directory (e.g., `third_party/` or `openai_compatible/`), or not added as a standalone file.
- The sample introduces no new Semantic Kernel concept. Every line of SK-related code is already present in `lm_studio_chat_completion.py`; the only differences are `base_url`, persona, and model name. A README note or comment in the existing sample stating the pattern works for any OpenAI-compatible endpoint would convey the same information without a redundant file.
- Missing test entry in `python/tests/samples/test_concepts.py`: every other sample under `python/samples/concepts/` is imported and registered as a parametrized test case. This sample needs a top-level import and a `param(...)` entry with `pytest.mark.skip`, following the pattern used for lm_studio and ollama.
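For illustration, the missing registration the last bullet describes might look like the following sketch. The skip reason string and `id` are assumptions; only the general `pytest.param(..., marks=pytest.mark.skip(...))` shape is taken from the convention the review cites:

```python
import pytest

# Hypothetical entry for python/tests/samples/test_concepts.py, mirroring
# how the lm_studio and ollama samples are registered: the sample is still
# import-validated, but the live call is skipped in CI.
kelly_param = pytest.param(
    "kelly_intelligence_chat_completion",
    marks=pytest.mark.skip(
        reason="Requires a Kelly Intelligence API key and network access"
    ),
    id="kelly_intelligence_chat_completion",
)
```

This entry would then be appended to the existing parametrized list of concept samples alongside the lm_studio and ollama cases.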
Suggestions
- Reduce the promotional content (public benefit corporation description, pricing tiers, demo endpoint, etc.) to a brief one-liner with a documentation link, consistent with other samples in this directory.
- If a dedicated sample for hosted third-party endpoints is desired, make it generic: use environment variables for `base_url` and `model_id` as well, so the file demonstrates the configurable-endpoint pattern rather than hard-coding a single vendor.
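A minimal sketch of that configurable-endpoint idea, using only the standard library. The environment variable names (`OPENAI_COMPATIBLE_*`) and the localhost heuristic are illustrative assumptions, not names from this PR:

```python
import os

def load_endpoint_config() -> dict:
    """Read endpoint settings from the environment instead of hard-coding a vendor."""
    config = {
        "api_key": os.environ.get("OPENAI_COMPATIBLE_API_KEY"),
        "base_url": os.environ.get(
            "OPENAI_COMPATIBLE_BASE_URL", "http://localhost:1234/v1"
        ),
        "model_id": os.environ.get("OPENAI_COMPATIBLE_MODEL_ID", "local-model"),
    }
    # Fail fast for remote hosts so user input is never silently sent
    # off-box without a deliberately configured key (the review's concern).
    if "localhost" not in config["base_url"] and not config["api_key"]:
        raise ValueError("An API key is required for non-local endpoints")
    return config
```

With this shape, the same file demonstrates the pattern against LM Studio, Ollama, or any hosted OpenAI-compatible endpoint purely via environment configuration.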
Automated review by nicoletterankin's agents
    base_url="https://api.thedailylesson.com/v1",
)
Security: Unlike the other fake-key usages in this directory (lm_studio, ollama) which target localhost, this default silently sends requests—including user input—to a remote third-party host when KELLY_API_KEY is unset. The sample should fail fast if the key is missing to prevent unintentional data exfiltration to an external service.
Suggested change:

api_key = os.environ.get("KELLY_API_KEY")
if not api_key:
    raise ValueError(
        "KELLY_API_KEY environment variable is required. "
        "Get a free key at https://api.thedailylesson.com"
    )
openAIClient: AsyncOpenAI = AsyncOpenAI(
    api_key=api_key,
    base_url="https://api.thedailylesson.com/v1",
)
@@ -0,0 +1,121 @@
# Copyright (c) Microsoft. All rights reserved.
This sample is missing a corresponding entry in python/tests/samples/test_concepts.py. All other samples in this directory have a top-level import and a parametrized test case with pytest.mark.skip. Please add both to maintain consistency and ensure the sample is at least import-validated.
Motivation and Context
This PR adds a small concept sample showing how to use Semantic Kernel's existing OpenAI connector with Kelly Intelligence, a hosted OpenAI-compatible API with a built-in 162,000-word vocabulary RAG layer and an AI tutor persona, built on top of Claude. Kelly Intelligence is operated by Lesson of the Day, PBC, a public benefit corporation. The free tier requires no credit card.
It addresses the same project as issue #13694 (Lesson of the Day, PBC's vocabulary education API), reframed to fit Semantic Kernel's contribution model: rather than asking the SK repo to host a third-party plugin (which `CONTRIBUTING.md` explicitly says is out of scope), this contributes a sample that demonstrates the existing OpenAI-compatible `AsyncOpenAI` + `base_url` pattern applied to a real hosted endpoint that anyone can try without a credit card.

Scenarios this sample contributes to:
Related to #13694.
Description
The sample mirrors the existing `lm_studio_chat_completion.py` and `foundry_local_chatbot.py` files exactly:

- `AsyncOpenAI` client with a custom `base_url` (https://api.thedailylesson.com/v1) and an API key from `KELLY_API_KEY`
- `OpenAIChatCompletion(async_client=...)` with `ai_model_id="kelly-haiku"`
- `Kernel` with a `service_id`
- `ChatHistory` + `kernel.invoke` chat loop

No new connector, no new abstraction, no dependency changes. It's a pure example of the existing OpenAI-compatible pattern applied to a hosted third-party endpoint.
Files added (1):

- `python/samples/concepts/local_models/kelly_intelligence_chat_completion.py` (121 lines)

Note on directory placement: the file lives under `local_models/` because that is where the closest precedents (`lm_studio_chat_completion.py`, `foundry_local_chatbot.py`, `ollama_chat_completion.py`) live. The docstring acknowledges that Kelly Intelligence is hosted, not local, and explains that the same pattern works for any OpenAI-compatible endpoint regardless of where it runs. Happy to move it to a different concept folder (e.g. a new `openai_compatible/` or `chat_completion/`) on reviewer request.

Reviewer smoke test (no signup required)
Kelly Intelligence has a public `/v1/demo` endpoint, IP-rate-limited at 5 requests per hour, that uses the same wire format as `/v1/chat/completions`.

A free, no-credit-card API key is available at https://api.thedailylesson.com for running the sample as written.
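The original smoke-test snippet did not survive extraction, so here is a stdlib-only sketch of what such a call could look like. It assumes the demo route accepts the standard OpenAI chat-completions request body and returns the standard `choices[0].message.content` shape; the exact demo response fields are not confirmed by this PR text:

```python
import json
import urllib.request

DEMO_URL = "https://api.thedailylesson.com/v1/demo"  # rate-limited: 5 req/hour/IP

def build_payload(prompt: str) -> dict:
    # Standard OpenAI chat-completions wire format.
    return {
        "model": "kelly-haiku",
        "messages": [{"role": "user", "content": prompt}],
    }

def smoke_test(prompt: str) -> str:
    req = urllib.request.Request(
        DEMO_URL,
        data=json.dumps(build_payload(prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # Assumes the OpenAI-style response envelope.
    return body["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(smoke_test("Define 'ephemeral' in one sentence."))
```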
Contribution Checklist
Note on positioning: Kelly Intelligence is complementary to direct provider access — not a replacement for the official OpenAI or Anthropic SDKs. It is the right fit when an application needs the OpenAI wire format and vocabulary / language-learning features without building that data layer in-house. For raw Claude or OpenAI access, the existing Semantic Kernel `OpenAIChatCompletion` + native provider classes are still the right tool.