Local LLM as backend for DemoGPT agent #41

@paluigi

Description

Is your feature request related to a problem? Please describe.
Using local LLMs instead of the OpenAI API as the backend.

Describe the solution you'd like
Create a DemoGPT agent from a locally available model (ideally, a quantized Llama 2 model via llama-cpp-python).
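For reference, a minimal sketch of what loading such a model with llama-cpp-python could look like. This is not an existing DemoGPT integration; the helper name, model path, and parameters below are placeholders:

```python
# Hypothetical sketch: loading a local quantized Llama 2 model with
# llama-cpp-python. Model path and context size are placeholders.

def build_local_llm(model_path: str, n_ctx: int = 2048):
    """Load a quantized Llama 2 model from disk via llama-cpp-python."""
    # Deferred import so the sketch reads without the package installed;
    # requires `pip install llama-cpp-python`.
    from llama_cpp import Llama
    return Llama(model_path=model_path, n_ctx=n_ctx)

# Example usage (needs a downloaded quantized model file):
# llm = build_local_llm("models/llama-2-7b-chat.Q4_K_M.gguf")
# out = llm("Q: What is an agent? A:", max_tokens=64)
# print(out["choices"][0]["text"])
```

The open question is whether DemoGPT can accept an LLM object like this in place of its OpenAI client.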

Describe alternatives you've considered
If that's already possible, a guide or some instructions on how to do it would be greatly appreciated!

Additional context
NA

Metadata


Labels: enhancement (New feature or request)
