Conversation
Implement tmll_cli.py with commands for experiment management and ML analysis:
- Experiment operations: create, list, delete, list-outputs, fetch-data
- Anomaly detection: anomaly, memory-leak
- Performance analysis: changepoint, correlation
- Resource optimization: idle-resources, capacity planning
- Clustering analysis

Note: this code was created with the assistance of claude sonnet 4.5

Signed-off-by: Matthew Khouzam <matthew.khouzam@ericsson.com>
Add the following MCP configuration:
mcp.json
```json
{
  "mcpServers": {
    "tmll": {
      "command": "/usr/bin/python3",
      "args": ["path-to/tmll/mcp_server_cli.py"],
      "env": {
        "PYTHONPATH": "path-to/tmll"
      }
    }
  }
}
```
This code creation was assisted by claude-sonnet-4.5
Signed-off-by: Matthew Khouzam <matthew.khouzam@ericsson.com>
kavehshahedi
left a comment
Thanks a lot Matthew for the MCP! I tested it with Claude Code (with some minor changes I put comments for), and it was working very well!
Vibe coders are now ready to become tracing masters, right?
tmll_cli.py
Outdated
```python
client = TMLLClient(args.host, args.port, verbose=args.verbose)
traces = [{"path": os.path.expanduser(path)} for path in args.traces]
experiment = client.create_experiment(traces=traces, experiment_name=args.name)
print(f"Created experiment: {experiment.name} (UUID: {experiment.UUID})")
```
The `experiment.UUID` should be `experiment.uuid`.
tmll_cli.py
Outdated
```python
outputs = experiment.find_outputs(keyword=args.keywords, type=['xy'])

if not outputs:
    print("No outputs found")
    return

mld = MemoryLeakDetection(client, experiment, outputs)
```
The memory leak detection module doesn't accept outputs. So, you should just use:
`mld = MemoryLeakDetection(client, experiment)`
tmll_cli.py
Outdated
```python
return

mld = MemoryLeakDetection(client, experiment, outputs)
result = mld.detect_memory_leak()
```
It should be `mld.analyze_memory_leaks()`.
tmll_cli.py
Outdated
```python
return

ca = CorrelationAnalysis(client, experiment, outputs)
correlations = ca.analyze_correlation(method=args.method)
```
This should be: `ca.analyze_correlations(method=args.method)`
tmll_cli.py
Outdated
```python
if args.output:
    for key, df in data.items():
        df.to_csv(f"{args.output}_{key}.csv", index=False)
```
This might fail for cases where the data is not a dictionary of `<key, df>`. There are some cases where the output may be `dict<key, dict<key, df>>`, so you should probably handle those cases as well.
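One way to handle both shapes is a small recursive-free flattening step before the CSV export. This is only a sketch under the assumptions above; the `export_data` helper name and the `{prefix}_{key}_{subkey}.csv` naming scheme are illustrative, not the actual implementation:

```python
import pandas as pd

def export_data(data: dict, output_prefix: str) -> list[str]:
    """Write DataFrames to CSV, handling both dict<key, df>
    and nested dict<key, dict<subkey, df>> result shapes."""
    written = []
    for key, value in data.items():
        if isinstance(value, dict):
            # Nested case: one CSV per (key, subkey) pair.
            for subkey, df in value.items():
                path = f"{output_prefix}_{key}_{subkey}.csv"
                df.to_csv(path, index=False)
                written.append(path)
        else:
            # Flat case: one CSV per key.
            path = f"{output_prefix}_{key}.csv"
            value.to_csv(path, index=False)
            written.append(path)
    return written
```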
tmll_cli.py
Outdated
```python
# anomaly command
anomaly_parser = subparsers.add_parser("anomaly", help="Detect anomalies")
anomaly_parser.add_argument("experiment", help="Experiment UUID or name")
```
I guess here we should only pass the UUID, not the name. The same comment applies to the other commands below.
mcp_server_cli.py
Outdated
```python
@server.call_tool()
async def call_tool(name: str, arguments: dict) -> list[TextContent]:
```
As I checked, some commands may send None for arguments. So, it might be better to make it `Optional[dict] = None`, and then check for it:
`arguments = arguments if isinstance(arguments, dict) else {}`
so we don't get an exception when trying to `.get()` from it.
mcp_server_cli.py
Outdated
```python
server = Server("tmll-cli-mcp-server")

CLI_PATH = sys.argv[1] if len(sys.argv) > 1 else "tmll_cli.py"
```
If the path is not given, "tmll_cli.py" is resolved relative to the cwd. So, it's probably better to handle it:
```python
CLI_PATH = sys.argv[1] if len(sys.argv) > 1 else Path(__file__).resolve().parent / "tmll_cli.py"
```
mcp_server_cli.py
Outdated
```python
    Tool(
        name="cluster_data",
        description="Perform clustering analysis on trace data (kmeans, dbscan, hierarchical)",
        inputSchema={
            "type": "object",
            "properties": {
                "experiment_id": {"type": "string"},
                "keywords": {"type": "array", "items": {"type": "string"}, "default": ["cpu usage"]},
                "n_clusters": {"type": "integer", "default": 3},
                "method": {"type": "string", "default": "kmeans", "enum": ["kmeans", "dbscan", "hierarchical"]},
            },
            "required": ["experiment_id"],
        },
    ),
]
```
Please check out my comment for the clustering module.
Also, I think you should add the mcp package to "requirements.txt" in order to run the MCP script. And, could you please place the scripts in a proper package within the tmll src files?
- Fix experiment.UUID -> experiment.uuid
- Add None check for experiment in create_experiment
- Fix MemoryLeakDetection: remove outputs param, use analyze_memory_leaks()
- Fix ChangePointAnalysis: method -> methods (list of analysis modes)
- Fix CorrelationAnalysis: analyze_correlation -> analyze_correlations, plot_correlation -> plot_correlation_matrix
- Fix IdleResourceDetection: single threshold -> per-resource thresholds
- Fix CapacityPlanning: plan_capacity -> forecast_capacity(forecast_steps=)
- Remove clustering command (module not meaningful)
- Fix help text: 'UUID or name' -> 'UUID'
- Handle nested dict data in fetch_data_cmd
- Move scripts to tmll/mcp package
- Fix MCP server CLI_PATH to use Path(__file__) for reliable resolution
- Make MCP call_tool arguments Optional[dict] with None guard
- Add mcp==1.27.0 to requirements.txt
When the trace server returns a non-200 status for both datatree and timegraph tree endpoints, response.model is None. Accessing .model on None caused an AttributeError, failing CI tests.
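The failure mode described above can be sketched with a guard that checks both the status and the model before dereferencing. The `Response` shape and the `extract_model` helper are assumptions for illustration, not the library's actual types:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Response:
    """Illustrative stand-in for a trace-server response object."""
    status_code: int
    model: Optional[object] = None

def extract_model(response: Response):
    # Dereferencing response.model.<attr> raises AttributeError when
    # model is None (e.g. a non-200 status from both tree endpoints),
    # so check before using it and fall back to None.
    if response.status_code != 200 or response.model is None:
        return None
    return response.model
```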
kavehshahedi
left a comment
LGTM! Thanks for the fixes! Feel free to merge whenever you want!
What it does
This PR adds Model Context Protocol (MCP) server integration to TMLL, enabling the trace analysis library to be used as an MCP tool in AI assistants and other MCP-compatible applications.
Key additions:
How to test
```bash
# Start a trace server instance
./tracecompass-server -data /path/to/workspace -vmargs -Dtraceserver.port=8080

# Create an experiment
python3 tmll_cli.py create /path/to/trace -n test_experiment

# List experiments
python3 tmll_cli.py list

# Run anomaly detection
python3 tmll_cli.py anomaly <experiment_uuid> -k "cpu usage" -m iforest
```
Test the MCP server:
```bash
# Run the MCP server
python3 mcp_server_cli.py tmll_cli.py
```
Configure in an MCP-compatible client (e.g., Kiro CLI, Claude Desktop)
Test tool invocations through the client
Verify all ML modules work through CLI:
Follow-ups
Review checklist