§ Interfaces
MCP Client (Python). Production-ready template.
A Python 3.10+ client for OctaMem over MCP (Streamable HTTP). Use it in production workflows: backend jobs, internal tools, agents, or any long-running process that should read and write memory through the same remote endpoint as the rest of your stack.
Runtime
Python 3.10+
Server
mcp.octamem.com
Source
github.com/alphatradeai/octamem-mcp-client
What you get
connection.py is the path for deterministic code: connect once, list tools, call them with explicit arguments. chat_client.py is an optional pattern that wires OctaMem tools into OpenAI chat completions (interactive or as a starting point for your own orchestration). For the server URL, query-string key, and bearer-token layout, see MCP Server.
connection.py
Production-friendly template: connect via streamablehttp_client (from mcp.client.streamable_http), list tools, and call them with call_tool; no LLM required. Embed or extend this for workers, CLIs, and services that should call memory tools directly.
chat_client.py
OpenAI tool-calling loop: maps OctaMem tools into chat completions and injects the last three Q&A turns into history where supported. Use as shipped for an interactive session, or lift the loop into your own app; follows the tool contract below.
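That mapping can be sketched as below. It is a sketch of the standard chat-completions tool-calling pattern, not chat_client.py's exact code: the helper names are ours, the OpenAI client is passed in rather than constructed, and the MCP tool objects are assumed to carry name, description, and inputSchema as in the mcp SDK.

```python
import json


def to_openai_tool(tool) -> dict:
    # Map an MCP tool (name, description, inputSchema) onto the
    # chat-completions "function" tool format.
    return {
        "type": "function",
        "function": {
            "name": tool.name,
            "description": tool.description or "",
            "parameters": tool.inputSchema,
        },
    }


async def run_turn(openai_client, session, model: str, messages: list, tools: list):
    """One chat turn: let the model call OctaMem tools until it answers in text."""
    oa_tools = [to_openai_tool(t) for t in tools]
    while True:
        resp = openai_client.chat.completions.create(
            model=model, messages=messages, tools=oa_tools
        )
        msg = resp.choices[0].message
        if not msg.tool_calls:
            return msg.content
        messages.append(msg)  # keep the assistant's tool-call turn in history
        for call in msg.tool_calls:
            result = await session.call_tool(
                call.function.name, json.loads(call.function.arguments)
            )
            messages.append(
                {"role": "tool", "tool_call_id": call.id, "content": str(result.content)}
            )
```

Lifting this loop into your own app amounts to supplying the client, session, and message list yourself.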
Install
Clone the repo and install dependencies (includes mcp, openai, python-dotenv per requirements.txt).
git clone https://github.com/alphatradeai/octamem-mcp-client.git
cd octamem-mcp-client
pip install -r requirements.txt
Configure .env
Copy .env.example to .env. The client builds the MCP URL as https://mcp.octamem.com?api_key=… from OCTAMEM_API_KEY (same server as in MCP Server).
OCTAMEM_API_KEY=sk-om-live-...
For chat_client.py, add OPENAI_API_KEY and set MODEL to a chat model your account can use. Always set MODEL in .env so production runs do not depend on a hard-coded default in the repo. The repo’s .env.example also lists other provider keys for your own forks; the bundled chat loop only calls OpenAI.
OCTAMEM_API_KEY=sk-om-live-...
OPENAI_API_KEY=sk-...
MODEL=gpt-5.4-mini
Run
Scripted tool calls (no LLM)
Run the template (customize calls in code).
python connection.py
The template uses an async context manager connect_to_mcp() that yields (session, tools). Replace the example call with your own tool name and JSON arguments:
async with connect_to_mcp() as (session, tools):
    list_tools(tools)
    output = await call_tool(session, "search_context", {"query": "your query"})
    print(output)
OpenAI orchestration
Interactive driver (same tool loop you can reuse in your app).
python chat_client.py
At the prompt, type messages; quit or exit stops the process. The first exchange configures the model to follow OctaMem’s tool contract; reuse that pattern in your own services if you do not need stdin/stdout.
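The shape of that driver loop can be sketched as follows; the quit/exit handling matches what this page describes, but the function names are ours and chat_client.py's actual prompt handling may differ.

```python
def should_quit(line: str) -> bool:
    # "quit" or "exit" (any casing, surrounding whitespace ignored) ends the session.
    return line.strip().lower() in {"quit", "exit"}


async def repl(handle_message) -> None:
    # Read stdin until quit/exit; handle_message(text) is an async callable
    # that returns the reply text for one user turn.
    while True:
        line = input("> ")
        if should_quit(line):
            break
        print(await handle_message(line))
```

Dropping the stdin/stdout wrapper and calling your turn handler directly gives the non-interactive variant.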
OctaMem tools (contract)
These are the memory tools the server exposes; names and behaviour match what this Python client implements. Tools that accept history expect a JSON string (the client passes json.dumps(...)), not a nested object. Up to three recent user/assistant pairs:
[
  {"user": "previous question", "assistant": "previous answer"}
]
For the first turn, use an empty list serialized as "[]" when the tool requires the field.
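A small helper makes that serialization concrete. The three-pair cap and the "[]" first-turn case come from the contract above; the helper name itself is ours.

```python
import json


def history_json(turns: list[dict]) -> str:
    # Keep at most the last three {"user": ..., "assistant": ...} pairs,
    # serialized as the JSON *string* the tools expect ("[]" on the first turn).
    return json.dumps(turns[-3:])
```

Pass the result as the history argument; the bundled client calls json.dumps(...) the same way.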
search_context
Step 1 for answering: retrieve context from memory (not the final user-facing answer).
query required; history optional.
finalize_response
Step 2 for answering: persist the Q&A and return the exact text to show the user.
content required; user_query, history optional.
store_memory
Use when the user explicitly asked to save or remember something. Store only; do not pair with finalize_response in the same turn.
content required; user_query, history optional.
octamem_list_tools
Introspection: lists tools, lifecycle rules, and formats (useful in build-time checks or admin utilities).
Per server schema.
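Putting the contract together, the answering path chains the first two tools. This is a hedged sketch: session is assumed to be an initialized MCP ClientSession, and draft_fn is a hypothetical stand-in for whatever model call turns retrieved context into a draft answer.

```python
import json


async def answer(session, query: str, history: list[dict], draft_fn):
    hist = json.dumps(history[-3:])  # history travels as a JSON string (max 3 pairs)
    # Step 1: search_context retrieves memory; its output is not user-facing.
    ctx = await session.call_tool(
        "search_context", {"query": query, "history": hist}
    )
    # draft_fn(query, ctx) is a hypothetical stand-in for your own model call.
    draft = draft_fn(query, ctx)
    # Step 2: finalize_response persists the Q&A; show its return value verbatim.
    return await session.call_tool(
        "finalize_response",
        {"content": draft, "user_query": query, "history": hist},
    )
```

store_memory stays outside this path: call it alone when the user asks to save something, never alongside finalize_response in the same turn.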
Source and updates: github.com/alphatradeai/octamem-mcp-client.