From Hybrid Pentesting Theory to Practical Implementation
In the previous article, we established a critical reality: fully automated pentesting is not enough. While AI and automation enable continuous visibility at scale, they lack the contextual reasoning required to uncover logic-driven and business-critical vulnerabilities. The only viable path forward is a hybrid pentesting model—one that combines machine speed with human verification.
But theory alone does not secure systems.
The real challenge for security teams is translating this hybrid model into a practical, repeatable workflow that supports continuous testing without sacrificing control, privacy, or accountability.
This tutorial addresses that gap.
Using Ollama, Villager AI, HexStrike, and Cursor for Recon & Triage
Bug bounty recon has always been a game of time, signal, and discipline. The tooling ecosystem is rich, but stitching tools together, interpreting output, and deciding what to test next still consumes most of the effort.
With recent advances in open-source large language models (LLMs), it’s now possible to build a fully local AI-assisted recon and triage stack—without cloud APIs, usage limits, or per-request costs.
This post walks through how to combine:
- Ollama (local LLM runtime)
- Villager AI (multi-step AI orchestration)
- HexStrike (fast recon tooling execution)
- Cursor (AI-augmented development & chat interface)
The result is a local AI recon assistant that helps you:
- Generate recon hypotheses
- Orchestrate safe tooling
- Summarize noisy outputs
- Suggest next investigative steps
All on your own machine.
Why Build a Local AI Recon Assistant?
Cloud LLMs are powerful—but for offensive security workflows, they introduce real friction:
- API costs grow fast during exploratory recon
- Privacy concerns around sensitive targets
- Network dependency and rate limits
- Limited control over execution logic
Running LLMs locally changes the equation.
By pairing a local model (e.g., DeepSeek R1) with orchestration and tooling layers, you get:
- Zero API cost for iterative recon
- Full control over what the AI can and cannot execute
- A persistent local “brain” that understands your workflow
- Faster iteration when forming and validating attack hypotheses
This isn’t about replacing your skills.
It’s about reducing cognitive overhead and recon sprawl.
What This Stack Is (and Is Not)
What it is
- A productivity multiplier for recon and triage
- A hypothesis-driven assistant, not an auto-hacker
- A local SOC-style analyst for bug bounty workflows
What it is not
- Fully autonomous exploitation
- A replacement for manual validation
- A magic vulnerability generator
Think of it as a junior analyst who never gets tired, but still needs supervision.
Component Roles
1. Ollama – Local LLM Runtime
Ollama runs open-source models such as DeepSeek R1 entirely on your machine and exposes an OpenAI-compatible API that the rest of the stack talks to.
2. Villager AI – Workflow Orchestrator
- Break down a goal (e.g., “analyze this domain”)
- Decide which actions are needed
- Coordinate tool execution and interpretation
3. HexStrike – Tool Execution Engine
HexStrike provides access to 150+ pentesting and bug bounty tools, including:
- Nmap
- Subdomain enumeration tools
- HTTP probing
- Content discovery
- Vulnerability scanners
Villager tells HexStrike what to run; HexStrike handles execution.
4. Cursor – Interactive AI Client
Cursor is your front-end interface. You interact with your AI assistant using natural language:
- Ask questions
- Request recon
- Review structured results
5. MCP (Model Context Protocol)
MCP acts as the glue between Cursor and backend services, enabling safe command execution and job routing.
Example Workflow: Bug Bounty Recon
Here’s a realistic flow using this setup:
- You ask in Cursor
- “Enumerate subdomains for example.com and identify potential attack surfaces.”
- Villager AI interprets the task
- Chooses subdomain enumeration tools
- Plans follow-up HTTP probing
- HexStrike executes tools
- Runs subfinder, amass, httpx, etc.
- Collects structured outputs
- Local LLM analyzes results
- Flags admin panels, APIs, staging hosts
- Highlights unusual ports or technologies
- You get a summarized response
- Clean list of interesting targets
- Suggested next steps (e.g., content discovery, API testing)
All without touching a cloud API.
Step 1: Install Docker (Quick Setup)
Run the following commands to install Docker and enable non-root usage:
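A minimal sketch using Docker's official convenience script (assumes curl is already installed; on production hosts, prefer the repository-based install from Docker's documentation):

```shell
# Install Docker via the official convenience script
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh

# Allow the current user to run Docker without sudo
sudo usermod -aG docker "$USER"

# Verify the daemon is reachable
docker run --rm hello-world
```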
Note: If Docker commands still require sudo, log out and log back in for the group change to take effect.
Step 2: Install Villager + HexStrike Integration
This repository connects Villager AI with HexStrike for tool orchestration.
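The repository URL below is a placeholder — substitute the actual source you're using. The directory name matches the one referenced later in this guide:

```shell
# Clone the integration repository (URL is illustrative)
git clone https://github.com/<your-source>/villager-ai-hexstrike-integration.git
cd villager-ai-hexstrike-integration

# Create and activate an isolated Python environment
python3 -m venv venv
source venv/bin/activate

# Install the Python dependencies
pip install -r requirements.txt
```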
Tip: Keep the virtual environment activated while running Villager and HexStrike services.
Step 3: Verify Installation
If the setup completes successfully, you should see output similar to:
✅ All tests passed!
Along with service URLs such as:
- Villager Server: http://localhost:37695
- MCP Client: http://localhost:25989
- Browser Service: http://localhost:8080
These indicate that Villager, MCP, and the browser control service are running correctly.
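To double-check, you can probe each advertised URL with curl; any HTTP status code (rather than a connection error) confirms the service is listening:

```shell
# Each service should answer on its advertised port
curl -s -o /dev/null -w "Villager: %{http_code}\n" http://localhost:37695
curl -s -o /dev/null -w "MCP:      %{http_code}\n" http://localhost:25989
curl -s -o /dev/null -w "Browser:  %{http_code}\n" http://localhost:8080
```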
Common Setup Issues
If the setup fails, check the following:
- Confirm the Python version: python3 -V
- Ensure all pip dependencies installed without errors
- Verify Docker is running: docker ps
- Make sure the virtual environment is activated
Most issues are caused by an inactive venv or Docker not running.
Step 4: Install HexStrike AI
HexStrike handles fast execution of recon and pentesting tools used by the AI assistant.
Start HexStrike Server
If the server starts without errors, HexStrike is ready to accept jobs from Villager AI.
By default, the server listens on http://127.0.0.1:8888.
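A sketch of the install-and-start sequence, assuming the public hexstrike-ai repository; adjust the server script name if your checkout differs:

```shell
# Clone HexStrike and install its dependencies
git clone https://github.com/0x4m4/hexstrike-ai.git
cd hexstrike-ai
python3 -m venv venv && source venv/bin/activate
pip install -r requirements.txt

# Start the HexStrike server (listens on 127.0.0.1:8888 by default)
python3 hexstrike_server.py
```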
Step 5: Install Ollama and Pull DeepSeek R1
Ollama runs large language models locally and exposes an OpenAI-compatible API for Villager AI.
Verify Ollama Service
The service should be active (running).
Pull and Warm the Model
If the model is not already present, Ollama will download it automatically on first run.
Ensure you have sufficient disk space before proceeding.
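The commands for this step (the plain deepseek-r1 tag pulls the default size; append a tag such as deepseek-r1:7b if you want a specific variant):

```shell
# Install Ollama via the official install script
curl -fsSL https://ollama.com/install.sh | sh

# Confirm the service is active (running)
systemctl status ollama

# Pull DeepSeek R1 and warm it up with a throwaway prompt
ollama pull deepseek-r1
ollama run deepseek-r1 "Say OK if you are ready."
```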
Step 6: Install Cursor (Desktop Client)
Cursor provides the interactive chat interface and MCP integration to control Villager and HexStrike.
- Download Cursor for Debian/Ubuntu from the official site
- Install using your package manager or the provided .deb package
- Launch Cursor after installation
Cursor will be used to:
- Chat with your local LLM
- Trigger Villager workflows
- Review structured recon results
Step 7: Install and Initialize Cursor
Install Cursor using the downloaded package:
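A sketch, assuming the downloaded file is named cursor.deb (adjust to the actual filename):

```shell
# Install the .deb, then pull in any missing dependencies
sudo dpkg -i cursor.deb
sudo apt-get install -f
```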
After installation:
- Open Cursor
- Complete the initial setup and create credentials
- Keep Cursor running — you’ll configure MCP servers next
Step 8: Connect Villager and HexStrike to Cursor (MCP)
Cursor uses MCP (Model Context Protocol) to communicate with local AI services.
In this step, you will create an MCP configuration so Cursor can:
- Connect to Villager AI
- Trigger HexStrike tool execution
- Receive structured results from your local stack
You’ll do this by defining an mcp.json configuration file that points to your locally running services.
Once configured, Cursor becomes the single interface for interacting with your local AI recon assistant.
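A minimal mcp.json sketch pointing at the ports shown in Step 3. The server names are arbitrary labels, and the exact transport fields are assumptions — URL-based entries presume each service exposes an HTTP/SSE MCP endpoint, so match them to what your services actually provide:

```json
{
  "mcpServers": {
    "villager": { "url": "http://localhost:25989" },
    "hexstrike": { "url": "http://localhost:8888" }
  }
}
```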
In Cursor, open Settings → Tools & MCP and enable both MCP servers. If they don't appear, restart Cursor.
Configure Villager .env to use Ollama + HexStrike
Edit the .env inside your villager-ai-hexstrike-integration directory:
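The variable names below are illustrative — match them against the keys defined in the repository's .env.example. Ollama exposes its OpenAI-compatible API on port 11434:

```shell
# Point Villager at the local Ollama endpoint (OpenAI-compatible)
OPENAI_BASE_URL=http://localhost:11434/v1
OPENAI_MODEL=deepseek-r1
OPENAI_API_KEY=ollama        # placeholder; Ollama ignores the key

# Where HexStrike is listening (see Step 4)
HEXSTRIKE_URL=http://127.0.0.1:8888
```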
Start Villager
Now go to Cursor and provide your prompt — for example, the subdomain-enumeration request from the workflow above.