project-lyra/cortex/autonomy/tools/adapters/base.py
Commit 64429b19e6 (serversdwn):

feat: Implement Trillium notes executor for searching and creating notes via ETAPI

- Added `trillium.py` for searching and creating notes with Trillium's ETAPI.
- Implemented `search_notes` and `create_note` functions with appropriate error handling and validation.
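
  As a rough illustration of what such an executor can look like (the route names, payload fields, and env vars below are assumptions about Trilium's ETAPI, not a copy of `trillium.py`), each operation reduces to an authorized HTTP call:

    # Hypothetical sketch; endpoint paths and payload fields are assumptions
    # about Trilium's ETAPI, not the actual trillium.py implementation.
    import os
    import httpx

    ETAPI_URL = os.environ.get("TRILIUM_ETAPI_URL", "http://localhost:8080/etapi")
    ETAPI_TOKEN = os.environ.get("TRILIUM_ETAPI_TOKEN", "")
    HEADERS = {"Authorization": ETAPI_TOKEN}

    async def search_notes(query: str) -> dict:
        """Search notes by full-text query and return the raw ETAPI result."""
        if not query.strip():
            raise ValueError("query must be a non-empty string")
        async with httpx.AsyncClient(headers=HEADERS) as client:
            resp = await client.get(f"{ETAPI_URL}/notes", params={"search": query})
            resp.raise_for_status()
            return resp.json()

    async def create_note(title: str, content: str, parent_id: str = "root") -> dict:
        """Create a plain text note under the given parent note."""
        payload = {"parentNoteId": parent_id, "title": title,
                   "type": "text", "content": content}
        async with httpx.AsyncClient(headers=HEADERS) as client:
            resp = await client.post(f"{ETAPI_URL}/create-note", json=payload)
            resp.raise_for_status()
            return resp.json()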

feat: Add web search functionality using DuckDuckGo

- Introduced `web_search.py` for performing web searches without API keys.
- Implemented `search_web` function with result handling and validation.
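
  A minimal version of key-less DuckDuckGo search looks roughly like the sketch below; the `duckduckgo_search` package and its `DDGS.text()` interface are the assumption here, and the real `web_search.py` may differ:

    # Sketch of a key-less web search helper; assumes the duckduckgo_search
    # package, not the literal contents of web_search.py.
    from duckduckgo_search import DDGS

    def search_web(query: str, max_results: int = 5) -> list[dict]:
        """Return a list of {title, href, body} result dicts for the query."""
        if not query or not query.strip():
            raise ValueError("query must be a non-empty string")
        with DDGS() as ddgs:
            return list(ddgs.text(query, max_results=max_results))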

feat: Create provider-agnostic function caller for iterative tool calling

- Developed `function_caller.py` to manage LLM interactions with tools.
- Implemented iterative calling logic with error handling and tool execution.
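
  The core idea of iterative calling is a bounded loop: ask the LLM, execute any requested tools, feed the results back, and repeat until the model stops requesting tools. A stripped-down sketch (the `llm`, adapter, and registry interfaces are assumptions drawn from the surrounding modules):

    # Stripped-down sketch of an iterative tool-calling loop; the real
    # function_caller.py is likely more involved (streaming, retries, errors).
    async def run_with_tools(llm, adapter, registry, messages, tools, max_rounds=5):
        for _ in range(max_rounds):
            request = await adapter.prepare_request(messages, tools)
            raw = await llm.complete(request)      # provider call (assumed API)
            parsed = await adapter.parse_response(raw)
            if not parsed["tool_calls"]:
                return parsed["content"]           # model produced a final answer
            # For most providers the assistant turn containing the tool calls
            # also has to be appended; an adapter would handle that detail.
            for call in parsed["tool_calls"]:
                result = await registry.execute(call["name"], call["arguments"])
                messages.append(
                    adapter.format_tool_result(call["id"], call["name"], result)
                )
        return "Tool-call limit reached without a final answer."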

feat: Establish a tool registry for managing available tools

- Created `registry.py` to define and manage tool availability and execution.
- Integrated feature flags for enabling/disabling tools based on environment variables.
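
  A registry of this kind typically maps tool names to a definition plus an executor, gated by env-var flags. A minimal sketch (flag handling and method names are illustrative, not the actual `registry.py` contents):

    # Illustrative env-flag-gated tool registry; not the actual registry.py.
    import os
    from typing import Any, Awaitable, Callable, Dict, Optional

    ToolExecutor = Callable[..., Awaitable[Dict[str, Any]]]

    class ToolRegistry:
        def __init__(self):
            self._tools: Dict[str, Dict[str, Any]] = {}

        def register(self, name: str, definition: Dict, executor: ToolExecutor,
                     flag: Optional[str] = None) -> None:
            # Skip registration when the feature flag is explicitly disabled.
            if flag and os.environ.get(flag, "true").lower() in ("0", "false", "no"):
                return
            self._tools[name] = {"definition": definition, "executor": executor}

        def definitions(self) -> list:
            return [t["definition"] for t in self._tools.values()]

        async def execute(self, name: str, arguments: Dict) -> Dict[str, Any]:
            if name not in self._tools:
                return {"error": f"unknown tool: {name}"}
            return await self._tools[name]["executor"](**arguments)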

feat: Implement event streaming for tool calling processes

- Added `stream_events.py` to manage Server-Sent Events (SSE) for tool calling.
- Enabled real-time updates during tool execution for enhanced user experience.
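
  Server-Sent Events are a plain text protocol over a held-open HTTP response: each event is an optional `event:` line plus a `data:` line, terminated by a blank line. A sketch of the event formatting (the event names are made up for illustration, not necessarily what `stream_events.py` emits):

    # Sketch of SSE frame formatting; event names are illustrative only.
    import json

    def sse_event(event: str, payload: dict) -> str:
        """Serialize one Server-Sent Event frame."""
        return f"event: {event}\ndata: {json.dumps(payload)}\n\n"

    async def stream_tool_run(tool_calls, registry):
        """Async generator suitable as an SSE response body."""
        for call in tool_calls:
            yield sse_event("tool_start", {"name": call["name"]})
            result = await registry.execute(call["name"], call["arguments"])
            yield sse_event("tool_result", {"name": call["name"], "result": result})
        yield sse_event("done", {})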

test: Add tests for tool calling system components

- Created `test_tools.py` to validate functionality of code execution, web search, and tool registry.
- Implemented asynchronous tests to ensure proper execution and result handling.
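
  Asynchronous tests of this sort usually lean on pytest-asyncio; a representative, purely hypothetical case (the tool under test here is a stand-in, not an actual Lyra tool):

    # Hypothetical test shape; assumes pytest-asyncio is installed.
    import pytest

    @pytest.mark.asyncio
    async def test_tool_result_shape():
        async def fake_code_exec(code: str) -> dict:
            # Stand-in for the real code-execution tool.
            return {"success": True, "stdout": "4\n", "stderr": ""}

        result = await fake_code_exec("print(2 + 2)")
        assert result["success"] is True
        assert result["stdout"].strip() == "4"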

chore: Add Dockerfile for sandbox environment setup

- Created `Dockerfile` to set up a Python environment with necessary dependencies for code execution.

chore: Add debug regex script for testing XML parsing

- Introduced `debug_regex.py` to validate regex patterns against XML tool calls.
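
  The pattern being exercised is typically along the lines of the sketch below; the `<tool_call>` tag name and JSON payload convention are assumptions, not the pattern from `debug_regex.py`:

    # Sketch of regex-based extraction of XML-style tool calls.
    import json
    import re

    TOOL_CALL_RE = re.compile(r"<tool_call>\s*(\{.*?\})\s*</tool_call>", re.DOTALL)

    sample = 'Sure.\n<tool_call>{"name": "web_search", "arguments": {"query": "lyra"}}</tool_call>'

    for match in TOOL_CALL_RE.finditer(sample):
        try:
            call = json.loads(match.group(1))
            print(call["name"], call["arguments"])
        except json.JSONDecodeError:
            print("malformed tool call payload:", match.group(1))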

chore: Add HTML template for displaying thinking stream events

- Created `test_thinking_stream.html` for visualizing tool calling events in a user-friendly format.

test: Add tests for OllamaAdapter XML parsing

- Developed `test_ollama_parser.py` to validate XML parsing with various test cases, including malformed XML.
Committed 2025-12-26 03:49:20 -05:00

base.py (80 lines, 2.4 KiB, Python):

"""
Base adapter interface for provider-agnostic tool calling.
This module defines the abstract base class that all LLM provider adapters
must implement to support tool calling in Lyra.
"""
from abc import ABC, abstractmethod
from typing import Dict, List, Optional
class ToolAdapter(ABC):
"""Base class for provider-specific tool adapters.
Each LLM provider (OpenAI, Ollama, llama.cpp, etc.) has its own
way of handling tool calls. This adapter pattern allows Lyra to
support tools across all providers with a unified interface.
"""
@abstractmethod
async def prepare_request(
self,
messages: List[Dict],
tools: List[Dict],
tool_choice: Optional[str] = None
) -> Dict:
"""Convert Lyra tool definitions to provider-specific format.
Args:
messages: Conversation history in OpenAI format
tools: List of Lyra tool definitions (provider-agnostic)
tool_choice: Optional tool forcing ("auto", "required", "none")
Returns:
dict: Provider-specific request payload ready to send to LLM
"""
pass
@abstractmethod
async def parse_response(self, response) -> Dict:
"""Extract tool calls from provider response.
Args:
response: Raw provider response (format varies by provider)
Returns:
dict: Standardized response in Lyra format:
{
"content": str, # Assistant's text response
"tool_calls": [ # List of tool calls or None
{
"id": str, # Unique call ID
"name": str, # Tool name
"arguments": dict # Tool arguments
}
] or None
}
"""
pass
@abstractmethod
def format_tool_result(
self,
tool_call_id: str,
tool_name: str,
result: Dict
) -> Dict:
"""Format tool execution result for next LLM call.
Args:
tool_call_id: ID from the original tool call
tool_name: Name of the executed tool
result: Tool execution result dictionary
Returns:
dict: Message object to append to conversation
(format varies by provider)
"""
pass
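
For reference, a hypothetical concrete adapter targeting an OpenAI-compatible chat completions API could satisfy this interface roughly as follows. This sketch is not part of base.py or the repo; the payload field names follow the OpenAI convention rather than anything confirmed in Lyra, and the import path is an assumption.

    # illustrative_openai_adapter.py -- hypothetical sketch, not a repo file.
    import json
    from typing import Dict, List, Optional

    from base import ToolAdapter  # adjust to the actual package path


    class OpenAIStyleAdapter(ToolAdapter):
        async def prepare_request(
            self,
            messages: List[Dict],
            tools: List[Dict],
            tool_choice: Optional[str] = None
        ) -> Dict:
            # OpenAI-compatible APIs accept tools/tool_choice beside messages.
            payload = {"messages": messages, "tools": tools}
            if tool_choice:
                payload["tool_choice"] = tool_choice
            return payload

        async def parse_response(self, response) -> Dict:
            # Normalize the provider message into Lyra's standard shape.
            message = response["choices"][0]["message"]
            tool_calls = None
            if message.get("tool_calls"):
                tool_calls = [
                    {
                        "id": call["id"],
                        "name": call["function"]["name"],
                        "arguments": json.loads(call["function"]["arguments"] or "{}"),
                    }
                    for call in message["tool_calls"]
                ]
            return {"content": message.get("content") or "", "tool_calls": tool_calls}

        def format_tool_result(
            self,
            tool_call_id: str,
            tool_name: str,
            result: Dict
        ) -> Dict:
            # OpenAI-style APIs expect tool output as a role="tool" message.
            return {
                "role": "tool",
                "tool_call_id": tool_call_id,
                "name": tool_name,
                "content": json.dumps(result),
            }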