
Commit 018ec40

Update task_agent README to use task_agent instead of agent_with_adk_format
1 parent 4b24566 commit 018ec40

File tree

1 file changed: +14 −14 lines changed

ai/agents/task_agent/README.md

Lines changed: 14 additions & 14 deletions
@@ -13,7 +13,7 @@ A flexible AI agent powered by LiteLLM that supports runtime hot-swapping of mod
 ## Architecture
 
 ```
-agent_with_adk_format/
+task_agent/
 ├── __init__.py       # Exposes root_agent for ADK
 ├── a2a_hot_swap.py   # JSON-RPC helper for hot-swapping
 ├── README.md         # This guide
@@ -38,7 +38,7 @@ agent_with_adk_format/
 
 Copying the example file is optional—the repository already ships with a root-level `.env` seeded with defaults. Adjust the values at the package root:
 ```bash
-cd agent_with_adk_format
+cd task_agent
 # Optionally refresh from the template
 # cp .env.example .env
 ```
@@ -66,15 +66,15 @@ pip install "google-adk" "a2a-sdk[all]" "python-dotenv" "litellm"
 Build the container (this image can be pushed to any registry or run locally):
 
 ```bash
-docker build -t litellm-hot-swap:latest agent_with_adk_format
+docker build -t litellm-hot-swap:latest task_agent
 ```
 
 Provide environment configuration at runtime (either pass variables individually or mount a file):
 
 ```bash
 docker run \
   -p 8000:8000 \
-  --env-file agent_with_adk_format/.env \
+  --env-file task_agent/.env \
   litellm-hot-swap:latest
 ```
 
@@ -86,7 +86,7 @@ The container starts Uvicorn with the ADK app (`main.py`) listening on port 8000
 
 Start the web interface:
 ```bash
-adk web agent_with_adk_format
+adk web task_agent
 ```
 
 > **Tip:** before launching `adk web`/`adk run`/`adk api_server`, ensure the root-level `.env` contains valid API keys for any provider you plan to hot-swap to (e.g. set `OPENAI_API_KEY` before switching to `openai/gpt-4o`).
@@ -97,14 +97,14 @@ Open http://localhost:8000 in your browser and interact with the agent.
 
 Run in terminal mode:
 ```bash
-adk run agent_with_adk_format
+adk run task_agent
 ```
 
 ### Option 3: A2A API Server
 
 Start as an A2A-compatible API server:
 ```bash
-adk api_server --a2a --port 8000 agent_with_adk_format
+adk api_server --a2a --port 8000 task_agent
 ```
 
 The agent will be available at: `http://localhost:8000/a2a/litellm_agent`
@@ -114,7 +114,7 @@ The agent will be available at: `http://localhost:8000/a2a/litellm_agent`
 Use the bundled script to drive hot-swaps and user messages over A2A:
 
 ```bash
-python agent_with_adk_format/a2a_hot_swap.py \
+python task_agent/a2a_hot_swap.py \
   --url http://127.0.0.1:8000/a2a/litellm_agent \
   --model openai gpt-4o \
   --prompt "You are concise." \
@@ -125,7 +125,7 @@ python agent_with_adk_format/a2a_hot_swap.py \
 To send a follow-up prompt in the same session (with a larger timeout for long answers):
 
 ```bash
-python agent_with_adk_format/a2a_hot_swap.py \
+python task_agent/a2a_hot_swap.py \
   --url http://127.0.0.1:8000/a2a/litellm_agent \
   --model openai gpt-4o \
   --prompt "You are concise." \
@@ -214,12 +214,12 @@ You can trigger model and prompt changes directly against the A2A endpoint witho
 
 ```bash
 # Start the agent first (in another terminal):
-adk api_server --a2a --port 8000 agent_with_adk_format
+adk api_server --a2a --port 8000 task_agent
 
 # Apply swaps via pure A2A calls
-python agent/a2a_hot_swap.py --model openai gpt-4o --prompt "You are concise." --config
-python agent/a2a_hot_swap.py --model anthropic claude-3-sonnet-20240229 --context shared-session --config
-python agent/a2a_hot_swap.py --prompt "" --context shared-session --config  # Clear the prompt and show current state
+python task_agent/a2a_hot_swap.py --model openai gpt-4o --prompt "You are concise." --config
+python task_agent/a2a_hot_swap.py --model anthropic claude-3-sonnet-20240229 --context shared-session --config
+python task_agent/a2a_hot_swap.py --prompt "" --context shared-session --config  # Clear the prompt and show current state
 ```
 
 `--model` accepts either `provider/model` or a provider/model pair. Add `--context` if you want to reuse the same conversation across invocations. Use `--config` to dump the agent's configuration after the changes are applied.
@@ -305,7 +305,7 @@ asyncio.run(chat())
 - Verify LiteLLM supports the model (https://docs.litellm.ai/docs/providers)
 
 ### Connection Refused
-- Ensure the agent is running (`adk api_server --a2a agent_with_adk_format`)
+- Ensure the agent is running (`adk api_server --a2a task_agent`)
 - Check the port matches (default: 8000)
 - Verify no firewall blocking localhost
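After a directory rename like this commit's, a quick grep can confirm that no references to the old package name survive in the renamed tree. A minimal sketch, assuming a POSIX shell; the real path from the diff is `ai/agents/task_agent/README.md`, and a stand-in file is created here purely for illustration:

```shell
# Stand-in for the renamed README (illustrative content only):
mkdir -p ai/agents/task_agent
printf 'adk web task_agent\nadk run task_agent\n' > ai/agents/task_agent/README.md

# The rename is complete only if the old name no longer appears anywhere:
if grep -rq "agent_with_adk_format" ai/agents/task_agent/; then
  echo "stale references found"
else
  echo "rename complete"   # prints "rename complete"
fi
```

Running the same `grep -rq` over the whole repository, rather than one directory, would also catch stragglers like the `agent/a2a_hot_swap.py` paths fixed in the last hunk above.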
