ai/agents/task_agent/README.md (+14 −14)
@@ -13,7 +13,7 @@ A flexible AI agent powered by LiteLLM that supports runtime hot-swapping of mod
 ## Architecture
 
 ```
-agent_with_adk_format/
+task_agent/
 ├── __init__.py       # Exposes root_agent for ADK
 ├── a2a_hot_swap.py   # JSON-RPC helper for hot-swapping
 ├── README.md         # This guide
@@ -38,7 +38,7 @@ agent_with_adk_format/
 Copying the example file is optional—the repository already ships with a root-level `.env` seeded with defaults. Adjust the values at the package root:

 Provide environment configuration at runtime (either pass variables individually or mount a file):

 ```bash
 docker run \
   -p 8000:8000 \
-  --env-file agent_with_adk_format/.env \
+  --env-file task_agent/.env \
   litellm-hot-swap:latest
 ```
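The file handed to `--env-file` is the plain `KEY=VALUE` format Docker reads, with blank lines and `#` comments ignored. As a rough sketch of that format (not Docker's actual parser, which has a few more rules):

```python
def parse_env_file(text: str) -> dict[str, str]:
    """Parse the simple KEY=VALUE format accepted by `docker run --env-file`."""
    env: dict[str, str] = {}
    for raw in text.splitlines():
        line = raw.strip()
        if not line or line.startswith("#"):
            continue  # skip blank lines and comments
        key, _, value = line.partition("=")
        env[key] = value
    return env
```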
@@ -86,7 +86,7 @@ The container starts Uvicorn with the ADK app (`main.py`) listening on port 8000
 Start the web interface:
 
 ```bash
-adk web agent_with_adk_format
+adk web task_agent
 ```
 
 > **Tip:** before launching `adk web`/`adk run`/`adk api_server`, ensure the root-level `.env` contains valid API keys for any provider you plan to hot-swap to (e.g. set `OPENAI_API_KEY` before switching to `openai/gpt-4o`).
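The tip above can be turned into a small pre-flight check. In the sketch below the provider-to-variable mapping is an assumption; only the `OPENAI_API_KEY` / `openai/gpt-4o` pairing comes from this guide:

```python
import os
from typing import Mapping, Optional

# Assumed mapping from provider prefix to required env var;
# only the openai entry is confirmed by this guide.
REQUIRED_KEYS = {
    "openai": "OPENAI_API_KEY",
    "anthropic": "ANTHROPIC_API_KEY",
}

def missing_key_for(model: str, environ: Mapping[str, str] = os.environ) -> Optional[str]:
    """Return the env var a `provider/model` target still needs, or None if satisfied."""
    provider = model.split("/", 1)[0]
    key = REQUIRED_KEYS.get(provider)
    if key and not environ.get(key):
        return key
    return None
```

Running this against each model you plan to hot-swap to before launching `adk web` avoids failing mid-conversation on a missing key.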
@@ -97,14 +97,14 @@ Open http://localhost:8000 in your browser and interact with the agent.
 python task_agent/a2a_hot_swap.py --prompt "" --context shared-session --config  # Clear the prompt and show current state
 ```
 
 `--model` accepts either a combined `provider/model` string or a separate provider/model pair. Add `--context` if you want to reuse the same conversation across invocations. Use `--config` to dump the agent's configuration after the changes are applied.
@@ -305,7 +305,7 @@ asyncio.run(chat())
 - Verify LiteLLM supports the model (https://docs.litellm.ai/docs/providers)
 
 ### Connection Refused
-- Ensure the agent is running (`adk api_server --a2a agent_with_adk_format`)
+- Ensure the agent is running (`adk api_server --a2a task_agent`)
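A quick way to confirm whether the agent process is up at all is a bare TCP probe of its port, before debugging anything at the protocol level. A small diagnostic sketch, assuming the default port 8000:

```python
import socket

def agent_reachable(host: str = "localhost", port: int = 8000, timeout: float = 1.0) -> bool:
    """Return True if something is accepting TCP connections on host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # covers ConnectionRefusedError and timeouts
        return False
```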