Recently I came across a quite illustrative visualization of different agent patterns:
It would be interesting to have/implement some of those as part of an ecosystem based on sympozium (treat each LLM block above as an AgentRun), since different types of tasks may call for different agent patterns. A few examples:
- A model's response needs to be automatically audited and authorized before being returned to the user. While this could theoretically be handled in the model's system prompt itself ("ensure that ..."), that is sub-optimal because the same model/context both prepares the response and decides whether it can be returned. In this scenario a "response-gate" (Prompt Chaining Pattern) sidecar container (which should be treated as a black box in this context) could either pass the model's response through, or block it and reply with something like "given information is restricted for the public".
- For some types of tasks, there are studies suggesting that running the same request with the same context n times and then summarizing the results can lead to better outcomes (Parallelization Pattern). This is also useful when a task can be chunked into buckets, so a huge list can be processed in parallel.
- For some tasks there is a natural requirement to "evaluate" the results of another run before continuing (without blocking it), e.g. one agent writes code while another tests it and gives feedback (Evaluator Optimizer Pattern)
While I find those 3 patterns (Prompt Chaining Pattern, Parallelization Pattern, Evaluator Optimizer Pattern) quite useful going forward, I'm not sure what the best way to implement them would be: should they be part of the controller (SympoziumInstance / AgentRun spec.pattern(s)), or should they be implemented outside the sympozium stack, where some pod/controller simply creates AgentRun kinds and handles the pattern logic outside of sympozium?
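For the "outside the stack" option, the orchestrator could be as thin as something that builds AgentRun manifests and submits them, keeping the pattern logic entirely in the orchestrator. A minimal sketch, where every field name (apiVersion, spec.prompt, spec.dependsOn) is an assumption for illustration and not the real CRD schema:

```python
# Hypothetical external orchestrator: it encodes the pattern itself by
# creating AgentRun objects and wiring them together, so the sympozium
# controller never needs a spec.pattern field.
def agent_run_manifest(name, prompt, depends_on=None):
    return {
        "apiVersion": "sympozium.example/v1alpha1",  # assumed group/version
        "kind": "AgentRun",
        "metadata": {"name": name},
        "spec": {
            "prompt": prompt,
            # Prompt Chaining expressed as an ordering dependency that the
            # orchestrator (not the controller) resolves.
            "dependsOn": depends_on or [],
        },
    }


# A two-step chain: the response-gate run only starts after the answer run.
chain = [
    agent_run_manifest("answer", "Reply to the user question"),
    agent_run_manifest("response-gate", "Audit the answer before release",
                       depends_on=["answer"]),
]
```

The trade-off is roughly the usual one: putting patterns in the controller (spec.pattern) makes them declarative and reusable, while an external orchestrator keeps the sympozium CRDs small at the cost of every consumer reimplementing the wiring.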
Any comments or suggestions are always welcome.
And yeah... thanks for a great product, @AlexsJones. It may look like a draft as of now and may not be ready here and there, but IMHO it's already a solid base for any LLM agents in Kubernetes.