A Python application that bridges locally hosted AI models (via Ollama) with GhidraMCP for AI-assisted reverse engineering tasks within Ghidra.
(Work in progress on the agentic loop. Currently using cogito:32b; it is concise rather than wordy, which makes it good at running tools.)
This bridge connects the following components:
- Ollama Server: Hosts local AI models (e.g., LLaMA 3, Mistral) accessible via REST API
- Bridge Application: This Python application, which serves as the intermediary between the two
- GhidraMCP Server: Exposes Ghidra's functionalities via MCP
Key features:
- Natural Language Queries: Translates user queries into GhidraMCP commands
- Context Management: Maintains conversation context for multi-step analyses
- Interactive Mode: Command-line interface for interactive sessions
- Health Checks: Verifies connectivity to the Ollama and GhidraMCP services
- Specialized Summarization Model: Uses a separate model optimized for generating comprehensive reports
- Model Switching: Uses different models for different phases of the agentic loop
- Agentic Capabilities: Multi-step reasoning with planning, execution, review, and learning phases
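As a rough sketch of how the agentic phases fit together (the function names and phase protocol here are illustrative stand-ins, not the bridge's actual API):

```python
# Illustrative sketch of the agentic loop described above. Phase names
# follow the README; run_agentic_loop and call_model are hypothetical.

def run_agentic_loop(query, call_model, max_steps=3):
    """Drive plan -> execute -> review phases, then a learning summary."""
    context = [f"user query: {query}"]
    plan = call_model("planning", "\n".join(context))
    context.append(f"plan: {plan}")
    for _ in range(max_steps):
        result = call_model("execution", "\n".join(context))
        context.append(f"result: {result}")
        verdict = call_model("review", "\n".join(context))
        context.append(f"review: {verdict}")
        if "done" in verdict.lower():
            break
    # Learning phase: distill what was discovered for future queries.
    summary = call_model("learning", "\n".join(context))
    return summary, context
```

Each `call_model` invocation can be routed to a different Ollama model per phase, which is what the model-switching options below configure.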
Prerequisites:
- Python 3.8+
- Ollama server running locally or remotely
- GhidraMCP server running within Ghidra
- Follow the installation steps from Laurie's project (https://github.com/LaurieWired/GhidraMCP)
- Install the Ghidra plugin and enable developer mode
1. Clone this repository:

   ```
   git clone https://github.com/ezrealenoch/ollama-ghidra-bridge.git
   cd ollama-ghidra-bridge
   ```

2. Install the required dependencies:

   ```
   pip install -r requirements.txt
   ```

3. Create a `.env` file by copying the example:

   ```
   cp .env.example .env
   ```

4. Edit the `.env` file to configure your Ollama and GhidraMCP settings.
Run the bridge in interactive mode:
```
python main.py --interactive
```

Special commands:
- Type `exit` or `quit` to exit
- Type `health` to check connectivity to Ollama and GhidraMCP
- Type `models` to list available Ollama models
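Dispatching these special commands before a line reaches the model could look like the following (a hypothetical sketch; the real bridge's internals may differ):

```python
# Hypothetical dispatcher for the special commands listed above.
# check_health and list_models are injected callbacks, not real bridge APIs.

def handle_command(line, check_health=None, list_models=None):
    """Map a special command to an action; return None for normal queries."""
    cmd = line.strip().lower()
    if cmd in ("exit", "quit"):
        return "exit"
    if cmd == "health" and check_health:
        return check_health()
    if cmd == "models" and list_models:
        return ", ".join(list_models())
    return None  # not a special command: pass the line to the model
```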
You can now use different models for different phases of the agentic reasoning loop. This allows you to optimize the use of models based on their strengths:
```
python main.py --interactive --model llama3 --planning-model llama3 --execution-model codellama:7b
```

Available phase-specific models:
- `--planning-model`: Model for the planning phase (creating analysis plans)
- `--execution-model`: Model for the execution phase (running tools)
- `--review-model`: Model for the review phase (evaluating results)
- `--verification-model`: Model for the verification phase
- `--learning-model`: Model for the learning phase
- `--summarization-model`: Model for summarization tasks
You can also configure these via environment variables:
```
OLLAMA_MODEL_PLANNING=llama3
OLLAMA_MODEL_EXECUTION=codellama:7b
```
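A per-phase lookup with fallback to the default model might be implemented like this (the variable names match the README; the fallback logic itself is an assumption):

```python
import os

# Resolve a phase-specific model, falling back to OLLAMA_MODEL.
# The "llama3" final default is an assumption for illustration.

def model_for_phase(phase, env=os.environ):
    """Return the model for a phase, preferring OLLAMA_MODEL_<PHASE>."""
    return env.get(f"OLLAMA_MODEL_{phase.upper()}") or env.get("OLLAMA_MODEL", "llama3")
```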
For more detailed information about model switching, see README-MODEL-SWITCHING.md.
You can configure a separate model specifically for summarization and report generation tasks:
```
python main.py --interactive --model llama3 --summarization-model mixtral:8x7b
```

The summarization model will be used for:
- Generating final reports when analysis is complete
- Summarizing long conversation contexts
- Processing queries that specifically ask for summaries or reports
You can also set this in your `.env` file:

```
OLLAMA_MODEL=llama3
OLLAMA_SUMMARIZATION_MODEL=mixtral:8x7b
```
This is particularly useful when you want to use:
- A lightweight model for interactive analysis and tool execution
- A more powerful model for creating comprehensive, well-structured reports
The system automatically detects summarization/report requests by looking for keywords like "summarize", "report", "analyze the results", etc.
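A detection check of that kind could be as simple as the sketch below (the keyword list and matching rules here are assumptions, not the bridge's exact implementation):

```python
# Heuristic check for summary/report requests, in the spirit of the
# keyword detection described above; the real keyword list may differ.

SUMMARY_KEYWORDS = ("summarize", "summary", "report", "analyze the results")

def wants_summary(query):
    """Return True if the query appears to ask for a summary or report."""
    q = query.lower()
    return any(keyword in q for keyword in SUMMARY_KEYWORDS)
```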
If you don't have a GhidraMCP server running or want to test the bridge functionality, you can use mock mode:
```
python main.py --interactive --mock
```

In mock mode, the bridge simulates GhidraMCP responses without contacting the actual server.
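Conceptually, a mock backend is just canned responses keyed by endpoint, along these lines (endpoint names and payload shapes here are assumptions for illustration):

```python
# Illustrative mock responder standing in for a live GhidraMCP server.

MOCK_RESPONSES = {
    "list_functions": ["main", "process_data", "init"],
    "get_imports": ["printf", "malloc"],
}

def mock_call(endpoint, **params):
    """Return a canned response instead of contacting GhidraMCP."""
    return MOCK_RESPONSES.get(endpoint, {"status": "mocked", "endpoint": endpoint})
```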
Process a single query:
```
echo "What functions are in this binary?" | python main.py
```

You can configure the bridge through:
- Environment variables (see `.env.example`)
- Command line arguments:

```
python main.py --ollama-url http://localhost:11434 --ghidra-url http://localhost:8080 --model llama3 --interactive
```
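The flags shown above could be wired up with a minimal argparse setup like this (a sketch covering only a subset of the real CLI; the defaults are assumptions):

```python
import argparse

# Minimal parser mirroring the flags shown above; not the bridge's
# actual argument definitions.

def build_parser():
    parser = argparse.ArgumentParser(description="Ollama-GhidraMCP bridge")
    parser.add_argument("--ollama-url", default="http://localhost:11434")
    parser.add_argument("--ghidra-url", default="http://localhost:8080")
    parser.add_argument("--model", default="llama3")
    parser.add_argument("--interactive", action="store_true")
    parser.add_argument("--mock", action="store_true")
    return parser
```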
If you encounter 404 errors or empty responses from the GhidraMCP server:
1. Verify the GhidraMCP server is running: Make sure the GhidraMCP server is running and accessible. You can test with `curl http://localhost:8080/methods`
2. Check the endpoint structure: This bridge directly implements the same endpoint structure as the GhidraMCP repository.
3. Try mock mode: Use the `--mock` flag to verify the bridge functionality without connecting to a real server.
4. Check the server URL: Ensure the server URL in your configuration is correct, including the port.
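The same connectivity check the `health` command performs can be approximated with a few lines of stdlib Python (the probe below is a sketch; the bridge's own health check may be more thorough):

```python
import urllib.request

# Simple reachability probe: does the server answer an HTTP request at all?
# The timeout value is an assumption.

def is_reachable(url, timeout=2.0):
    """Return True if any HTTP response comes back, False otherwise."""
    try:
        urllib.request.urlopen(url, timeout=timeout)
        return True
    except Exception:
        return False
```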
If you encounter issues with the Ollama API:
- Ensure Ollama is running: `curl http://localhost:11434`
- Verify the specified model exists: `ollama list`
- Check the model's compatibility with the prompt format
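Programmatically, available models can be read from Ollama's `GET /api/tags` endpoint, which returns JSON shaped like `{"models": [{"name": "llama3:latest"}, ...]}`; extracting the names from a parsed response is straightforward:

```python
# Extract model names from a parsed Ollama GET /api/tags payload.
# (The HTTP call itself is omitted; this only handles the parsed JSON.)

def model_names(tags_response):
    """Return the model names listed in a /api/tags response dict."""
    return [entry.get("name", "") for entry in tags_response.get("models", [])]
```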
If you see "Expecting value" or other JSON parsing errors:
- The API might be returning empty or non-JSON responses
- Try running with `LOG_LEVEL=DEBUG` for more detailed logs
- Check the API documentation to ensure proper request format
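A defensive parse along these lines avoids the "Expecting value" crash on empty or non-JSON bodies (a sketch; the bridge may handle this differently):

```python
import json

# Tolerant JSON parsing: return None for empty or malformed bodies
# instead of raising json.JSONDecodeError ("Expecting value").

def parse_json_or_none(body):
    """Parse a response body, returning None if it is not valid JSON."""
    if not body or not body.strip():
        return None
    try:
        return json.loads(body)
    except json.JSONDecodeError:
        return None
```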
The bridge supports the following commands:
- `decompile_function(address)`: Decompile a function at a given address
- `rename_function(address, name)`: Rename a function to a specified name
- `list_functions()`: Retrieve a list of all functions in the binary
- `get_imports()`: List all imported functions
- `get_exports()`: List all exported functions
- `get_memory_map()`: Retrieve the memory layout of the binary
- `comment_function(address, comment)`: Add comments to a function
- `rename_variable(function_address, variable_name, new_name)`: Rename a local variable
- `search_strings(pattern)`: Search for strings in memory
- `get_references(address)`: Get references from/to a specific address
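A thin client can map these commands onto HTTP requests by building one URL per command. The path-per-command scheme below mirrors GhidraMCP's simple endpoint style, but the exact paths and query parameters are assumptions:

```python
# Hypothetical thin client: builds request URLs for the commands above.
# Only URL construction is shown; the HTTP transport is omitted.

class GhidraClient:
    def __init__(self, base_url="http://localhost:8080"):
        self.base_url = base_url.rstrip("/")

    def endpoint(self, command, **params):
        """Return the request URL for a command and its parameters."""
        query = "&".join(f"{k}={v}" for k, v in sorted(params.items()))
        return f"{self.base_url}/{command}" + (f"?{query}" if query else "")
```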
Example queries:
- "List all functions in this binary"
- "Decompile the function at address 0x1000"
- "What's the memory layout of this binary?"
- "Find all strings containing 'password'"
- "Rename the function at 0x2000 to 'process_data'"
- LaurieWired/GhidraMCP - GhidraMCP server
- Ollama - Local large language model hosting