diff --git a/tools/tide-chart/README.md b/tools/tide-chart/README.md
index b6a8d50..f4453a3 100644
--- a/tools/tide-chart/README.md
+++ b/tools/tide-chart/README.md
@@ -1,33 +1,36 @@
# Tide Chart
-> Interactive dashboard comparing 24-hour probability cones for 5 equities using Synth forecasting data.
+> Interactive Flask dashboard comparing probability cones for equities and crypto using Synth forecasting data.
## Overview
-Tide Chart overlays probabilistic price forecasts for SPY, NVDA, TSLA, AAPL, and GOOGL into a single comparison view. It normalizes all forecasts to percentage change, enabling direct comparison across different price levels, and generates a ranked summary table with key metrics.
+Tide Chart overlays probabilistic price forecasts into a single comparison view with an interactive web interface. It supports both equities (SPY, NVDA, TSLA, AAPL, GOOGL) on the 24h horizon and crypto/commodities (BTC, ETH, SOL, XAU) on both 1h and 24h horizons. All forecasts are normalized to percentage change for direct comparison across different price levels.
-The tool addresses three questions from the forecast data:
-- **Directional alignment** - Are all equities moving the same way?
-- **Relative magnitude** - Which equity has the widest expected range?
-- **Asymmetric skew** - Is the upside or downside tail larger, individually and relative to SPY?
+The tool provides:
+- **Probability cones** - Interactive Plotly chart with 5th-95th percentile bands
+- **Probability calculator** - Enter a target price to see the exact probability of an asset reaching it
+- **Variable time horizons** - Toggle between Intraday (1H) and Next Day (24H) views
+- **Live auto-refresh** - Manual refresh button and configurable 5-minute auto-refresh
+- **Ranked metrics table** - Sortable table with directional alignment, skew, and relative benchmarks
## How It Works
-1. Fetches `get_prediction_percentiles` and `get_volatility` for each of the 5 equities (24h horizon)
-2. Normalizes all 289 time steps from raw price to `% change = (percentile - current_price) / current_price * 100`
-3. Computes metrics from the final time step (end of 24h window):
+1. Starts a Flask server serving the interactive dashboard at `http://localhost:5000` (port configurable via `TIDE_CHART_PORT`)
+2. Fetches `get_prediction_percentiles` and `get_volatility` for assets in the selected horizon
+3. Normalizes time steps from raw price to `% change = (percentile - current_price) / current_price * 100`
+4. Computes metrics from the final time step (end of forecast window):
- **Median Move** - 50th percentile % change
- **Upside/Downside** - 95th and 5th percentile distances
- **Directional Skew** - upside minus downside (positive = bullish asymmetry)
- **Range** - total 5th-to-95th percentile width
- - **Relative to SPY** - each metric minus SPY's value
-4. Ranks equities by median expected move (table columns are sortable by click)
-5. Generates an interactive Plotly HTML dashboard and opens it in the browser
+ - **Relative to Benchmark** - each metric minus benchmark (SPY for equities, BTC for crypto)
+5. Ranks assets by median expected move (table columns are sortable by click)
+6. Probability calculator uses linear interpolation across 9 percentile levels to estimate P(price <= target)
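The interpolation in step 6 can be sketched as follows. The nine levels match the percentile keys described below; the prices are illustrative, not real Synth output:

```python
# Sketch of step 6: estimate P(price <= target) by linear interpolation
# across the 9 forecast percentile levels, clamping outside the fan.
LEVELS = [0.005, 0.05, 0.2, 0.35, 0.5, 0.65, 0.8, 0.95, 0.995]

def prob_at_or_below(prices: list[float], target: float) -> float:
    """Return P(price <= target) as a percentage, given final-step percentile prices."""
    if target <= prices[0]:
        return LEVELS[0] * 100  # below the lowest percentile
    if target >= prices[-1]:
        return LEVELS[-1] * 100  # above the highest percentile
    for i in range(len(prices) - 1):
        if prices[i] <= target <= prices[i + 1]:
            frac = (target - prices[i]) / (prices[i + 1] - prices[i])
            return (LEVELS[i] + frac * (LEVELS[i + 1] - LEVELS[i])) * 100
    return 50.0  # fallback for non-monotonic input

# Illustrative fan of final-step percentile prices around $600
prices = [580, 585, 592, 596, 600, 604, 608, 615, 620]
print(round(prob_at_or_below(prices, 600), 1))  # 50.0
```

A target equal to the median price lands on the 0.5 level, so the result is exactly 50%.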
## Synth Endpoints Used
-- `get_prediction_percentiles(asset, horizon="24h")` - Provides 289 time-step probabilistic forecast with 9 percentile levels (0.5% to 99.5%). Used for the probability cone overlay and all derived metrics.
-- `get_volatility(asset, horizon="24h")` - Provides forecasted average volatility. Displayed in the ranking table as an independent risk measure.
+- `get_prediction_percentiles(asset, horizon)` - Provides time-step probabilistic forecast with 9 percentile levels (0.5% to 99.5%). Used for probability cones, metrics, and the probability calculator.
+- `get_volatility(asset, horizon)` - Provides forecasted average volatility. Displayed in the ranking table as an independent risk measure.
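The percentage-change normalization from "How It Works" step 3 reduces to a one-liner over each percentile step; the keys and prices here are illustrative:

```python
# Normalize one forecast time step from raw prices to % change from the
# current price, keyed by the same percentile strings the forecast returns.
def normalize_step(step: dict[str, float], current_price: float) -> dict[str, float]:
    return {k: (p - current_price) / current_price * 100 for k, p in step.items()}

step = {"0.05": 580.0, "0.5": 600.0, "0.95": 620.0}  # illustrative prices
print(normalize_step(step, 600.0))
```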
## Usage
@@ -35,25 +38,27 @@ The tool addresses three questions from the forecast data:
# Install dependencies
pip install -r requirements.txt
-# Run the tool (opens dashboard in browser)
+# Run the dashboard server (opens browser automatically)
python main.py
+# Custom port
+TIDE_CHART_PORT=8080 python main.py
+
# Run tests
python -m pytest tests/ -v
```
-## Example Output
-
-The dashboard contains two sections:
-
-**Probability Cone Comparison** - Interactive Plotly chart with semi-transparent bands (5th-95th percentile) and median lines for each equity. Hover to see exact values at any time step.
+## API Endpoints
-**Equity Rankings** - Sortable table showing price, median move (% and $), forecasted volatility, directional skew (% and $), probability range (% and $), median vs SPY, and skew vs SPY. Click any column header to re-sort. Values are color-coded green (positive) or red (negative), with nominal dollar amounts shown alongside percentages for immediate context.
+- `GET /` - Serves the interactive dashboard HTML
+- `GET /api/data?horizon=24h` - Returns chart traces, table rows, and insights as JSON
+- `POST /api/probability` - Calculates target price probability (body: `{"asset": "SPY", "target_price": 600, "horizon": "24h"}`)
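A sketch of the `/api/probability` exchange (field names taken from the handler in `main.py`; the numeric value is made up). The endpoint returns both tails, which always sum to 100 up to rounding:

```python
# Shape of a /api/probability round-trip. The server derives probability_below
# via percentile interpolation; probability_above is its complement.
request_body = {"asset": "SPY", "target_price": 600, "horizon": "24h"}

prob_below = 57.1234  # illustrative value from calculate_target_probability
response = {
    "asset": request_body["asset"],
    "target_price": request_body["target_price"],
    "horizon": request_body["horizon"],
    "probability_below": round(prob_below, 4),
    "probability_above": round(100.0 - prob_below, 4),
}
print(response["probability_below"] + response["probability_above"])  # 100.0
```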
## Technical Details
- **Language:** Python 3.10+
-- **Dependencies:** plotly (for chart generation)
-- **Synth Assets Used:** SPY, NVDA, TSLA, AAPL, GOOGL
-- **Output:** Single HTML file (requires internet for Plotly CDN and fonts; no server needed)
+- **Dependencies:** plotly, flask
+- **Equities (24h only):** SPY, NVDA, TSLA, AAPL, GOOGL
+- **Crypto + Commodities (1h & 24h):** BTC, ETH, SOL, XAU
+- **Output:** Flask web server with Plotly CDN (requires internet for fonts/plotly)
- **Mock Mode:** Works without API key using bundled mock data
diff --git a/tools/tide-chart/chart.py b/tools/tide-chart/chart.py
index 7c36fac..3f03362 100644
--- a/tools/tide-chart/chart.py
+++ b/tools/tide-chart/chart.py
@@ -1,9 +1,9 @@
"""
Data processing module for the Tide Chart dashboard.
-Fetches prediction percentiles and volatility for 5 equities,
+Fetches prediction percentiles and volatility for supported assets,
normalizes to percentage change, calculates comparison metrics,
-and ranks equities by forecast outlook.
+ranks assets by forecast outlook, and computes target price probabilities.
"""
import sys
@@ -11,22 +11,42 @@
sys.path.insert(0, os.path.join(os.path.dirname(__file__), "../.."))
-from synth_client import SynthClient
-
EQUITIES = ["SPY", "NVDA", "TSLA", "AAPL", "GOOGL"]
+CRYPTO_ASSETS = ["BTC", "ETH", "SOL", "XAU"]
PERCENTILE_KEYS = ["0.005", "0.05", "0.2", "0.35", "0.5", "0.65", "0.8", "0.95", "0.995"]
+PERCENTILE_LEVELS = [0.005, 0.05, 0.2, 0.35, 0.5, 0.65, 0.8, 0.95, 0.995]
+
+
+ALL_ASSETS = EQUITIES + CRYPTO_ASSETS
+
+
+def get_assets_for_horizon(horizon: str) -> list[str]:
+ """Return the list of supported assets for a given time horizon.
+
+ Equities (SPY, NVDA, TSLA, AAPL, GOOGL) only support 24h.
+ Crypto + XAU (BTC, ETH, SOL, XAU) support both 1h and 24h.
+ The 24h horizon includes all assets.
+ """
+ if horizon == "1h":
+ return list(CRYPTO_ASSETS)
+ return list(ALL_ASSETS)
+
+def fetch_all_data(client, horizon: str = "24h") -> dict:
+ """Fetch prediction percentiles and volatility for all assets in a horizon.
-def fetch_all_data(client):
- """Fetch prediction percentiles and volatility for all 5 equities.
+ Args:
+ client: SynthClient instance.
+ horizon: "1h" or "24h".
Returns:
dict: {asset: {"percentiles": ..., "volatility": ..., "current_price": float}}
"""
+ assets = get_assets_for_horizon(horizon)
data = {}
- for asset in EQUITIES:
- forecast = client.get_prediction_percentiles(asset, horizon="24h")
- vol = client.get_volatility(asset, horizon="24h")
+ for asset in assets:
+ forecast = client.get_prediction_percentiles(asset, horizon=horizon)
+ vol = client.get_volatility(asset, horizon=horizon)
data[asset] = {
"current_price": forecast["current_price"],
"percentiles": forecast["forecast_future"]["percentiles"],
@@ -56,9 +76,9 @@ def normalize_percentiles(percentiles, current_price):
def calculate_metrics(data):
- """Calculate comparison metrics for each equity.
+ """Calculate comparison metrics for each asset.
- Uses the final time step (end of 24h window) for metric computation.
+ Uses the final time step (end of forecast window) for metric computation.
Args:
data: Dict from fetch_all_data().
@@ -102,20 +122,30 @@ def calculate_metrics(data):
return metrics
-def add_relative_to_spy(metrics):
- """Add relative-to-SPY fields for each equity.
+def add_relative_to_benchmark(metrics) -> dict:
+ """Add relative-to-benchmark fields for each asset.
+
+ Uses SPY as benchmark for equities, BTC for crypto assets.
Args:
metrics: Dict from calculate_metrics().
Returns:
- Same dict with added relative_median and relative_skew fields.
+        Tuple (metrics, benchmark): the metrics dict with added
+        relative_median and relative_skew fields, and the benchmark symbol.
"""
- spy = metrics["SPY"]
+ assets = list(metrics.keys())
+ benchmark = "SPY" if "SPY" in metrics else assets[0]
+ bench_m = metrics[benchmark]
for asset, m in metrics.items():
- m["relative_median"] = m["median_move"] - spy["median_move"]
- m["relative_skew"] = m["skew"] - spy["skew"]
- return metrics
+ m["relative_median"] = m["median_move"] - bench_m["median_move"]
+ m["relative_skew"] = m["skew"] - bench_m["skew"]
+ return metrics, benchmark
+
+
+def add_relative_to_spy(metrics):
+ """Add relative-to-SPY fields for each equity (legacy wrapper)."""
+ result, _ = add_relative_to_benchmark(metrics)
+ return result
def rank_equities(metrics, sort_by="median_move", ascending=False):
@@ -135,7 +165,7 @@ def rank_equities(metrics, sort_by="median_move", ascending=False):
def get_normalized_series(data):
- """Get full normalized time series for all equities (for charting).
+ """Get full normalized time series for all assets (for charting).
Args:
data: Dict from fetch_all_data().
@@ -149,3 +179,41 @@ def get_normalized_series(data):
info["percentiles"], info["current_price"]
)
return series
+
+
+def calculate_target_probability(percentiles: list[dict], target_price: float) -> float:
+ """Calculate the probability of an asset reaching a target price.
+
+ Uses the final time step's percentile distribution and linear interpolation
+ to estimate P(price <= target). Returns the probability as a percentage (0-100).
+
+ Args:
+ percentiles: List of percentile dicts (time steps). Uses the final step.
+ target_price: The target price to evaluate.
+
+ Returns:
+ float: Probability (0-100) that the price will be at or below the target.
+ """
+ final_step = percentiles[-1]
+ prices = [final_step[k] for k in PERCENTILE_KEYS]
+ levels = PERCENTILE_LEVELS
+
+ # Target below the lowest percentile
+ if target_price <= prices[0]:
+ return levels[0] * 100
+
+ # Target above the highest percentile
+ if target_price >= prices[-1]:
+ return levels[-1] * 100
+
+ # Linear interpolation between bracketing percentiles
+ for i in range(len(prices) - 1):
+ if prices[i] <= target_price <= prices[i + 1]:
+ price_range = prices[i + 1] - prices[i]
+ if price_range == 0:
+ return levels[i] * 100
+ fraction = (target_price - prices[i]) / price_range
+ prob = levels[i] + fraction * (levels[i + 1] - levels[i])
+ return prob * 100
+
+    # Fallback: reached only if the percentile prices are not monotonically
+    # increasing, so no bracketing pair matched.
+    return 50.0
diff --git a/tools/tide-chart/main.py b/tools/tide-chart/main.py
index a351f68..bc905df 100644
--- a/tools/tide-chart/main.py
+++ b/tools/tide-chart/main.py
@@ -1,84 +1,76 @@
"""
-Tide Chart - Equity Forecast Comparison Dashboard.
+Tide Chart - Interactive Equity & Crypto Forecast Dashboard.

-Generates an interactive HTML dashboard comparing 24h probability cones
-for 5 equities (SPY, NVDA, TSLA, AAPL, GOOGL) using Synth API data.
-Opens the dashboard in the default browser.
+Flask-based dashboard with probability cones, target price calculator,
+variable time horizons (1h/24h), and live auto-refresh.
"""

import sys
import os
import json
import webbrowser
-import tempfile
+import threading
from datetime import datetime, timedelta, timezone
from zoneinfo import ZoneInfo

sys.path.insert(0, os.path.join(os.path.dirname(__file__), "../.."))

+from flask import Flask, jsonify, request, Response
from synth_client import SynthClient
from chart import (
fetch_all_data,
calculate_metrics,
- add_relative_to_spy,
+ add_relative_to_benchmark,
rank_equities,
get_normalized_series,
+ calculate_target_probability,
+ get_assets_for_horizon,
)
-EQUITY_COLORS = {
+ASSET_COLORS = {
"SPY": {"primary": "#e8d44d", "rgb": "232,212,77"},
"NVDA": {"primary": "#3db8e8", "rgb": "61,184,232"},
"TSLA": {"primary": "#e85a6e", "rgb": "232,90,110"},
"AAPL": {"primary": "#9b6de8", "rgb": "155,109,232"},
"GOOGL": {"primary": "#4dc87a", "rgb": "77,200,122"},
+ "BTC": {"primary": "#f7931a", "rgb": "247,147,26"},
+ "ETH": {"primary": "#627eea", "rgb": "98,126,234"},
+ "SOL": {"primary": "#00ffa3", "rgb": "0,255,163"},
+ "XAU": {"primary": "#ffd700", "rgb": "255,215,0"},
}
-EQUITY_LABELS = {
+ASSET_LABELS = {
"SPY": "S&P 500",
"NVDA": "NVIDIA",
"TSLA": "Tesla",
"AAPL": "Apple",
"GOOGL": "Alphabet",
+ "BTC": "Bitcoin",
+ "ETH": "Ethereum",
+ "SOL": "Solana",
+ "XAU": "Gold",
}
+# Backwards compat aliases
+EQUITY_COLORS = ASSET_COLORS
+EQUITY_LABELS = ASSET_LABELS
-def generate_dashboard_html(normalized_series, metrics, ranked):
- """Generate a self-contained HTML dashboard.
- Args:
- normalized_series: {asset: list of normalized percentile dicts} (289 steps)
- metrics: {asset: {median_move, upside, downside, skew, range_pct,
- volatility, current_price, relative_median, relative_skew}}
- ranked: List of (asset, metrics_dict) sorted by median_move.
-
- Returns:
- str: Complete HTML document string.
- """
- timestamp = datetime.now(timezone.utc).strftime("%Y-%m-%d %H:%M UTC")
-
- # Generate ET time axis (289 steps x 5 min = 24h)
- et = ZoneInfo("America/New_York")
- now_et = datetime.now(et)
- time_points = [
- (now_et + timedelta(minutes=i * 5)).strftime("%Y-%m-%dT%H:%M")
- for i in range(289)
- ]
-
- # Build Plotly traces for probability cones
+def build_traces(normalized_series: dict, metrics: dict, time_points: list[str]) -> list[dict]:
+ """Build Plotly trace dicts for probability cones."""
traces = []
- for asset in ["SPY", "NVDA", "TSLA", "AAPL", "GOOGL"]:
+ for asset in normalized_series:
series = normalized_series[asset]
- steps = time_points
- color = EQUITY_COLORS[asset]
- label = EQUITY_LABELS[asset]
+ color = ASSET_COLORS[asset]
+ label = ASSET_LABELS[asset]
upper = [s.get("0.95", 0) for s in series]
lower = [s.get("0.05", 0) for s in series]
median = [s.get("0.5", 0) for s in series]
- # Upper bound (invisible line for fill)
traces.append({
- "x": steps,
+ "x": time_points,
"y": upper,
"type": "scatter",
"mode": "lines",
@@ -89,9 +81,8 @@ def generate_dashboard_html(normalized_series, metrics, ranked):
"hoverinfo": "skip",
})
- # Lower bound with fill to upper
traces.append({
- "x": steps,
+ "x": time_points,
"y": lower,
"type": "scatter",
"mode": "lines",
@@ -104,7 +95,6 @@ def generate_dashboard_html(normalized_series, metrics, ranked):
"hoverinfo": "skip",
})
- # Median line - pre-format hover text (d3-format unreliable in unified hover)
current_price = metrics[asset]["current_price"]
hover_text = []
for v in median:
@@ -113,7 +103,7 @@ def generate_dashboard_html(normalized_series, metrics, ranked):
sign_nom = "+" if nom >= 0 else "-"
hover_text.append(f"{sign_pct}{v:.2f}% ({sign_nom}${abs(nom):,.2f})")
traces.append({
- "x": steps,
+ "x": time_points,
"y": median,
"customdata": hover_text,
"type": "scatter",
@@ -123,19 +113,20 @@ def generate_dashboard_html(normalized_series, metrics, ranked):
"name": f"{label} ({asset})",
"hovertemplate": (
f"{label}
"
- "%{{x|%I:%M %p}}
"
- "Median: %{{customdata}}"
+ "%{x|%I:%M %p}
"
+ "Median: %{customdata}"
""
),
})
+ return traces
- traces_json = json.dumps(traces)
- # Build rank table rows
- table_rows = ""
+def build_table_rows(ranked: list, benchmark: str) -> str:
+ """Build HTML table rows for ranked assets."""
+ rows = ""
for rank_idx, (asset, m) in enumerate(ranked, 1):
- color = EQUITY_COLORS[asset]["primary"]
- label = EQUITY_LABELS[asset]
+ color = ASSET_COLORS[asset]["primary"]
+ label = ASSET_LABELS[asset]
def fmt_val(val, nominal=None, suffix="%"):
sign = "+" if val > 0 else ""
@@ -147,10 +138,10 @@ def fmt_val(val, nominal=None, suffix="%"):
return f'{pct_str} ({nom_str})'
return f'{pct_str}'
- rel_median = "-" if asset == "SPY" else fmt_val(m["relative_median"])
- rel_skew = "-" if asset == "SPY" else fmt_val(m["relative_skew"])
+ rel_median = "-" if asset == benchmark else fmt_val(m["relative_median"])
+ rel_skew = "-" if asset == benchmark else fmt_val(m["relative_skew"])
- table_rows += f"""
+ rows += f"""
| {rank_idx} |
@@ -167,669 +158,607 @@ def fmt_val(val, nominal=None, suffix="%"):
| {rel_median} |
{rel_skew} |
"""
+ return rows
+
- # Build directional alignment indicator
+def build_insights(metrics: dict) -> dict:
+ """Compute insight card data from metrics."""
directions = [m["median_move"] for m in metrics.values()]
- alignment_text = "All Bullish" if all(d > 0 for d in directions) else \
- "All Bearish" if all(d < 0 for d in directions) else "Mixed"
- alignment_class = "bullish" if all(d > 0 for d in directions) else \
- "bearish" if all(d < 0 for d in directions) else "mixed"
+ if all(d > 0 for d in directions):
+ alignment_text, alignment_class = "All Bullish", "bullish"
+ elif all(d < 0 for d in directions):
+ alignment_text, alignment_class = "All Bearish", "bearish"
+ else:
+ alignment_text, alignment_class = "Mixed", "mixed"
- # Widest range equity
widest = max(metrics.items(), key=lambda x: x[1]["range_pct"])
- widest_name = f"{EQUITY_LABELS[widest[0]]} ({widest[1]['range_pct']:.2f}%)"
+ widest_name = f"{ASSET_LABELS[widest[0]]} ({widest[1]['range_pct']:.2f}%)"
- # Most skewed equity
most_skewed = max(metrics.items(), key=lambda x: abs(x[1]["skew"]))
skew_dir = "upside" if most_skewed[1]["skew"] > 0 else "downside"
- skew_name = f"{EQUITY_LABELS[most_skewed[0]]} ({skew_dir})"
-
-    html = f"""<!DOCTYPE html>
-    [static template elided for brevity: <head> with the Plotly CDN script and
-    dashboard CSS; insight cards showing {alignment_text} (styled by
-    {alignment_class}), {widest_name}, and {skew_name}; the cone-chart div with
-    its interaction hint (click legend to toggle assets · scroll to zoom ·
-    drag to pan · double-click to reset); the sortable rankings table with
-    \u25B4\u25BE headers (#, Asset, Price, Median Move, Volatility, Skew,
-    Range, 24h Bounds, vs SPY, Skew vs SPY) and body {table_rows}; and the
-    <script> block calling Plotly.newPlot with {traces_json}]
-"""
+ skew_name = f"{ASSET_LABELS[most_skewed[0]]} ({skew_dir})"
- return html
+ return {
+ "alignment_text": alignment_text,
+ "alignment_class": alignment_class,
+ "widest_name": widest_name,
+ "skew_name": skew_name,
+ }
-def main():
- """Fetch data, build dashboard, open in browser."""
- import warnings
- with warnings.catch_warnings():
- warnings.simplefilter("ignore")
- client = SynthClient()
+def make_time_points(horizon: str) -> list[str]:
+ """Generate ET timezone time axis for the given horizon."""
+ et = ZoneInfo("America/New_York")
+ now_et = datetime.now(et)
+ if horizon == "1h":
+ steps = 61
+ interval_min = 1
+ else:
+ steps = 289
+ interval_min = 5
+ return [
+ (now_et + timedelta(minutes=i * interval_min)).strftime("%Y-%m-%dT%H:%M")
+ for i in range(steps)
+ ]
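The step counts above can be sanity-checked: each axis includes the t=0 point, so the covered span is `(steps - 1) * interval` minutes — exactly one hour and one day:

```python
# Span covered by an axis of `steps` points at `interval_min`-minute spacing;
# the first point is t=0, so the span is (steps - 1) * interval_min.
def axis_minutes(steps: int, interval_min: int) -> int:
    return (steps - 1) * interval_min

print(axis_minutes(61, 1))   # 60 minutes  -> 1h horizon
print(axis_minutes(289, 5))  # 1440 minutes -> 24h horizon
```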
- print("Fetching equity data...")
- data = fetch_all_data(client)
- print("Calculating metrics...")
+def fetch_and_process(client, horizon: str = "24h") -> dict:
+ """Fetch data, compute metrics, and build all dashboard components."""
+ data = fetch_all_data(client, horizon=horizon)
metrics = calculate_metrics(data)
- metrics = add_relative_to_spy(metrics)
+ metrics, benchmark = add_relative_to_benchmark(metrics)
ranked = rank_equities(metrics, sort_by="median_move")
normalized = get_normalized_series(data)
+ time_points = make_time_points(horizon)
+ traces = build_traces(normalized, metrics, time_points)
+ table_rows = build_table_rows(ranked, benchmark)
+ insights = build_insights(metrics)
+
+ assets_with_prices = {
+ asset: {"current_price": info["current_price"]}
+ for asset, info in data.items()
+ }
+
+ return {
+ "traces": traces,
+ "table_rows": table_rows,
+ "insights": insights,
+ "metrics": {
+ asset: {k: v for k, v in m.items()}
+ for asset, m in metrics.items()
+ },
+ "assets": assets_with_prices,
+ "benchmark": benchmark,
+ "horizon": horizon,
+ "timestamp": datetime.now(timezone.utc).strftime("%Y-%m-%d %H:%M UTC"),
+ }
+
+
+def generate_dashboard_html(client) -> str:
+ """Generate the full interactive HTML dashboard."""
+ result = fetch_and_process(client, "24h")
+ traces_json = json.dumps(result["traces"])
+ assets_json = json.dumps(result["assets"])
+ horizon_label = "24h Forecast"
+ benchmark = result["benchmark"]
+ ins = result["insights"]
+ timestamp = result["timestamp"]
+ table_rows = result["table_rows"]
+
+ # The HTML uses raw braces for JS/CSS, so we use explicit concatenation
+ # where Python formatting is needed, and raw strings for JS blocks.
+    html = (
+        # Template elided for brevity. It contains: a <head> with the Plotly
+        # CDN script and dashboard CSS; a header with the horizon-toggle
+        # ("Intraday (1H)" / "Next Day (24H)"), a manual refresh button, the
+        # auto-refresh checkbox, and the timestamp; insight cards fed by
+        # ins["alignment_text"], ins["widest_name"], and ins["skew_name"];
+        # the Probability Calculator form (calc-asset select, calc-price
+        # input); the cone-chart div with its interaction hint (click legend
+        # to toggle assets · scroll to zoom · drag to pan · double-click to
+        # reset); the sortable rankings table whose headers include
+        # f'vs {benchmark}\u25B4\u25BE' and f'Skew vs {benchmark}\u25B4\u25BE'
+        # and whose body is table_rows; and the <script> block that calls
+        # Plotly.newPlot with traces_json and wires column sorting, horizon
+        # switching and auto-refresh fetches to /api/data, and calculator
+        # POSTs to /api/probability.
+        "..."
+    )
+ return html
+
- print("Generating dashboard...")
- html = generate_dashboard_html(normalized, metrics, ranked)
+def create_app(client=None) -> Flask:
+ """Create the Flask application with all routes."""
+ if client is None:
+ import warnings
+ with warnings.catch_warnings():
+ warnings.simplefilter("ignore")
+ client = SynthClient()
+
+ app = Flask(__name__)
+
+ @app.route("/")
+ def index():
+ html = generate_dashboard_html(client)
+ return Response(html, mimetype="text/html")
+
+ @app.route("/api/data")
+ def api_data():
+ horizon = request.args.get("horizon", "24h")
+ if horizon not in ("1h", "24h"):
+ return jsonify({"error": "Invalid horizon. Use '1h' or '24h'."}), 400
+ result = fetch_and_process(client, horizon)
+ return jsonify({
+ "traces": result["traces"],
+ "table_rows": result["table_rows"],
+ "insights": result["insights"],
+ "assets": result["assets"],
+ "benchmark": result["benchmark"],
+ "horizon": horizon,
+ "timestamp": result["timestamp"],
+ })
+
+ @app.route("/api/probability", methods=["POST"])
+ def api_probability():
+ body = request.get_json(silent=True) or {}
+ asset = body.get("asset", "")
+ target_price = body.get("target_price")
+ horizon = body.get("horizon", "24h")
+
+ if horizon not in ("1h", "24h"):
+ return jsonify({"error": "Invalid horizon."}), 400
+
+ valid_assets = get_assets_for_horizon(horizon)
+ if asset not in valid_assets:
+ return jsonify({"error": f"{asset} not available for {horizon} horizon."}), 400
+
+ if target_price is None or not isinstance(target_price, (int, float)) or target_price <= 0:
+ return jsonify({"error": "Invalid target_price. Must be a positive number."}), 400
+
+ try:
+ forecast = client.get_prediction_percentiles(asset, horizon=horizon)
+ percentiles = forecast["forecast_future"]["percentiles"]
+ current_price = forecast["current_price"]
+ prob_below = calculate_target_probability(percentiles, target_price)
+ return jsonify({
+ "asset": asset,
+ "target_price": target_price,
+ "current_price": current_price,
+ "horizon": horizon,
+ "probability_below": round(prob_below, 4),
+ "probability_above": round(100.0 - prob_below, 4),
+ })
+ except Exception as e:
+ return jsonify({"error": str(e)}), 500
+
+ return app
+
+
+def main():
+ """Start the Tide Chart dashboard server."""
+ import warnings
+ with warnings.catch_warnings():
+ warnings.simplefilter("ignore")
+ client = SynthClient()
- out_path = os.path.join(tempfile.gettempdir(), "tide_chart.html")
- with open(out_path, "w", encoding="utf-8") as f:
- f.write(html)
+ app = create_app(client)
+ port = int(os.environ.get("TIDE_CHART_PORT", 5000))
- print(f"Dashboard saved to {out_path}")
- webbrowser.open(f"file://{out_path}")
+ print(f"Tide Chart running at http://localhost:{port}")
+ threading.Timer(1.0, lambda: webbrowser.open(f"http://localhost:{port}")).start()
+ app.run(host="0.0.0.0", port=port, debug=False)
if __name__ == "__main__":
diff --git a/tools/tide-chart/requirements.txt b/tools/tide-chart/requirements.txt
index 73e33e5..0f8f77b 100644
--- a/tools/tide-chart/requirements.txt
+++ b/tools/tide-chart/requirements.txt
@@ -1,2 +1,3 @@
plotly>=5.0.0
requests>=2.28.0
+flask>=3.0.0
diff --git a/tools/tide-chart/tests/test_tool.py b/tools/tide-chart/tests/test_tool.py
index 095d465..2a480d4 100644
--- a/tools/tide-chart/tests/test_tool.py
+++ b/tools/tide-chart/tests/test_tool.py
@@ -1,32 +1,40 @@
"""
Tests for the Tide Chart tool.

All tests run against mock data (no API key needed).
They verify data fetching, normalization, metric calculation,
-ranking, and dashboard generation.
+ranking, dashboard generation, horizon toggling, probability
+calculation, and Flask API endpoints.
"""

import sys
import os
+import json
import warnings

# Add project root to path
sys.path.insert(0, os.path.join(os.path.dirname(__file__), "../../.."))
# Add tool directory to path
sys.path.insert(0, os.path.join(os.path.dirname(__file__), ".."))

from synth_client import SynthClient
from chart import (
- EQUITIES,
+ CRYPTO_ASSETS,
+ ALL_ASSETS,
PERCENTILE_KEYS,
+ PERCENTILE_LEVELS,
fetch_all_data,
normalize_percentiles,
calculate_metrics,
add_relative_to_spy,
+ add_relative_to_benchmark,
rank_equities,
get_normalized_series,
+ get_assets_for_horizon,
+ calculate_target_probability,
)
-from main import generate_dashboard_html
+from main import generate_dashboard_html, create_app, build_insights, make_time_points
def _make_client():
@@ -42,12 +50,12 @@ def test_client_loads_in_mock_mode():
def test_fetch_all_equities_data():
- """Verify fetch_all_data returns data for all 5 equities."""
+ """Verify fetch_all_data returns data for all 9 assets (24h default)."""
client = _make_client()
data = fetch_all_data(client)
- assert len(data) == 5
- for asset in EQUITIES:
+ assert len(data) == 9
+ for asset in ALL_ASSETS:
assert asset in data
assert "current_price" in data[asset]
assert "percentiles" in data[asset]
@@ -83,7 +91,7 @@ def test_calculate_metrics_median_move():
data = fetch_all_data(client)
metrics = calculate_metrics(data)
- for asset in EQUITIES:
+ for asset in ALL_ASSETS:
m = metrics[asset]
final = data[asset]["percentiles"][-1]
cp = data[asset]["current_price"]
@@ -97,7 +105,7 @@ def test_calculate_metrics_skew():
data = fetch_all_data(client)
metrics = calculate_metrics(data)
- for asset in EQUITIES:
+ for asset in ALL_ASSETS:
m = metrics[asset]
assert abs(m["skew"] - (m["upside"] - m["downside"])) < 1e-10
@@ -108,7 +116,7 @@ def test_calculate_metrics_range():
data = fetch_all_data(client)
metrics = calculate_metrics(data)
- for asset in EQUITIES:
+ for asset in ALL_ASSETS:
m = metrics[asset]
assert abs(m["range_pct"] - (m["upside"] + m["downside"])) < 1e-10
@@ -127,7 +135,9 @@ def test_relative_to_spy():
assert metrics["SPY"]["relative_median"] == 0.0
assert metrics["SPY"]["relative_skew"] == 0.0
- for asset in ["NVDA", "TSLA", "AAPL", "GOOGL"]:
+ for asset in ALL_ASSETS:
+ if asset == "SPY":
+ continue
m = metrics[asset]
expected_rel_median = m["median_move"] - spy_median
expected_rel_skew = m["skew"] - spy_skew
@@ -143,7 +153,7 @@ def test_rank_equities_sorting():
metrics = add_relative_to_spy(metrics)
ranked = rank_equities(metrics, sort_by="median_move")
- assert len(ranked) == 5
+ assert len(ranked) == 9
for i in range(len(ranked) - 1):
assert ranked[i][1]["median_move"] >= ranked[i + 1][1]["median_move"]
@@ -166,8 +176,8 @@ def test_get_normalized_series():
data = fetch_all_data(client)
series = get_normalized_series(data)
- assert len(series) == 5
- for asset in EQUITIES:
+ assert len(series) == 9
+ for asset in ALL_ASSETS:
assert asset in series
assert len(series[asset]) == 289
# First step should be near 0 (current price normalized)
@@ -178,26 +188,18 @@ def test_get_normalized_series():
def test_generate_dashboard_html():
"""Verify dashboard HTML generation produces valid output."""
client = _make_client()
- data = fetch_all_data(client)
- metrics = calculate_metrics(data)
- metrics = add_relative_to_spy(metrics)
- ranked = rank_equities(metrics, sort_by="median_move")
- normalized = get_normalized_series(data)
-
- html = generate_dashboard_html(normalized, metrics, ranked)
+ html = generate_dashboard_html(client)
assert isinstance(html, str)
assert "" in html
assert "Tide Chart" in html
assert "plotly" in html.lower()
- # Check all equity tickers appear
- for asset in EQUITIES:
+ # Check all asset tickers appear (default 24h = all assets)
+ for asset in ALL_ASSETS:
assert asset in html
# Check table has rows
assert "" in html
assert "cone-chart" in html
- # Check relative_skew column exists (Skew vs SPY header)
- assert "Skew vs SPY" in html
# Check sortable table headers
assert "sortable" in html
assert "data-sort" in html
@@ -206,8 +208,7 @@ def test_generate_dashboard_html():
assert "$" in html
# Check legendgroup is set for trace grouping
assert "legendgroup" in html
- # Check 24h Bounds column
- assert "24h Bounds" in html
+ # Check Bounds column
assert "data-sort=\"bounds\"" in html
# Check column header tooltips
assert "data-tip=" in html
@@ -221,6 +222,16 @@ def test_generate_dashboard_html():
assert "yaxis.autorange" in html
# Check tooltip focus support
assert "data-tip]:focus-visible::after" in html
+ # Check new interactive elements
+ assert "horizon-toggle" in html
+ assert "Intraday (1H)" in html
+ assert "Next Day (24H)" in html
+ assert "Probability Calculator" in html
+ assert "calc-asset" in html
+ assert "calc-price" in html
+ assert "auto-refresh" in html.lower()
+ assert "/api/data" in html
+ assert "/api/probability" in html
def test_calculate_metrics_nominal_values():
@@ -229,7 +240,7 @@ def test_calculate_metrics_nominal_values():
data = fetch_all_data(client)
metrics = calculate_metrics(data)
- for asset in EQUITIES:
+ for asset in ALL_ASSETS:
m = metrics[asset]
final = data[asset]["percentiles"][-1]
cp = data[asset]["current_price"]
@@ -253,7 +264,7 @@ def test_calculate_metrics_projection_bounds():
data = fetch_all_data(client)
metrics = calculate_metrics(data)
- for asset in EQUITIES:
+ for asset in ALL_ASSETS:
m = metrics[asset]
final = data[asset]["percentiles"][-1]
@@ -270,11 +281,240 @@ def test_volatility_values():
data = fetch_all_data(client)
metrics = calculate_metrics(data)
- for asset in EQUITIES:
+ for asset in ALL_ASSETS:
assert metrics[asset]["volatility"] > 0
assert isinstance(metrics[asset]["volatility"], float)
+# --- New tests for issue #12 interactive features ---
+
+
+def test_get_assets_for_horizon_24h():
+ """Verify 24h horizon returns all assets (equities + crypto)."""
+ assets = get_assets_for_horizon("24h")
+ assert assets == ALL_ASSETS
+
+
+def test_get_assets_for_horizon_1h():
+ """Verify 1h horizon returns crypto assets."""
+ assets = get_assets_for_horizon("1h")
+ assert assets == CRYPTO_ASSETS
+
+
+def test_fetch_all_data_1h_horizon():
+ """Verify fetch_all_data returns crypto data for 1h horizon."""
+ client = _make_client()
+ data = fetch_all_data(client, horizon="1h")
+
+ assert len(data) == len(CRYPTO_ASSETS)
+ for asset in CRYPTO_ASSETS:
+ assert asset in data
+ assert "current_price" in data[asset]
+ assert "percentiles" in data[asset]
+ assert "average_volatility" in data[asset]
+ assert isinstance(data[asset]["percentiles"], list)
+ assert len(data[asset]["percentiles"]) > 0
+
+
+def test_add_relative_to_benchmark_equities():
+ """Verify benchmark is SPY for 24h (all assets)."""
+ client = _make_client()
+ data = fetch_all_data(client, horizon="24h")
+ metrics = calculate_metrics(data)
+ metrics, benchmark = add_relative_to_benchmark(metrics)
+
+ assert benchmark == "SPY"
+ assert metrics["SPY"]["relative_median"] == 0.0
+ assert metrics["SPY"]["relative_skew"] == 0.0
+
+
+def test_add_relative_to_benchmark_crypto():
+ """Verify benchmark is BTC for crypto assets."""
+ client = _make_client()
+ data = fetch_all_data(client, horizon="1h")
+ metrics = calculate_metrics(data)
+ metrics, benchmark = add_relative_to_benchmark(metrics)
+
+ assert benchmark == "BTC"
+ assert metrics["BTC"]["relative_median"] == 0.0
+ assert metrics["BTC"]["relative_skew"] == 0.0
+
+
+def test_calculate_target_probability_within_range():
+ """Verify probability calculation returns value between bounds."""
+ client = _make_client()
+ data = fetch_all_data(client, horizon="24h")
+ percentiles = data["SPY"]["percentiles"]
+ current_price = data["SPY"]["current_price"]
+
+ prob = calculate_target_probability(percentiles, current_price)
+ assert 0 < prob < 100
+
+
+def test_calculate_target_probability_extreme_low():
+ """Verify probability for very low target clamps to lowest level."""
+ client = _make_client()
+ data = fetch_all_data(client, horizon="24h")
+ percentiles = data["SPY"]["percentiles"]
+
+ prob = calculate_target_probability(percentiles, 0.01)
+ assert prob == PERCENTILE_LEVELS[0] * 100
+
+
+def test_calculate_target_probability_extreme_high():
+ """Verify probability for very high target clamps to highest level."""
+ client = _make_client()
+ data = fetch_all_data(client, horizon="24h")
+ percentiles = data["SPY"]["percentiles"]
+
+ prob = calculate_target_probability(percentiles, 999999.0)
+ assert prob == PERCENTILE_LEVELS[-1] * 100
+
+
+def test_calculate_target_probability_interpolation():
+ """Verify linear interpolation with synthetic data."""
+ # Construct a minimal percentile step
+ step = {k: float(i + 1) * 10 for i, k in enumerate(PERCENTILE_KEYS)}
+ # step: {"0.005": 10, "0.05": 20, "0.2": 30, ...}
+ percentiles = [step]
+
+ # Target exactly at a percentile boundary
+ prob = calculate_target_probability(percentiles, 20.0)
+ assert abs(prob - PERCENTILE_LEVELS[1] * 100) < 1e-6 # 5.0
+
+ # Target midway between 2nd and 3rd percentile (20.0 and 30.0)
+ midpoint = 25.0
+ prob = calculate_target_probability(percentiles, midpoint)
+ expected = (PERCENTILE_LEVELS[1] + 0.5 * (PERCENTILE_LEVELS[2] - PERCENTILE_LEVELS[1])) * 100
+ assert abs(prob - expected) < 1e-6
+
+
+def test_make_time_points_24h():
+ """Verify 24h generates 289 time points."""
+ points = make_time_points("24h")
+ assert len(points) == 289
+
+
+def test_make_time_points_1h():
+ """Verify 1h generates 61 time points."""
+ points = make_time_points("1h")
+ assert len(points) == 61
+
+
+def test_build_insights():
+ """Verify insight card data structure."""
+ client = _make_client()
+ data = fetch_all_data(client)
+ metrics = calculate_metrics(data)
+ ins = build_insights(metrics)
+
+ assert "alignment_text" in ins
+ assert "alignment_class" in ins
+ assert "widest_name" in ins
+ assert "skew_name" in ins
+ assert ins["alignment_class"] in ("bullish", "bearish", "mixed")
+
+
+def test_flask_index_route():
+ """Verify Flask index route returns HTML."""
+ client = _make_client()
+ app = create_app(client)
+ with app.test_client() as tc:
+ resp = tc.get("/")
+ assert resp.status_code == 200
+ assert b"Tide Chart" in resp.data
+ assert b"<html" in resp.data
+
+
+def test_flask_api_data_24h():
+ """Verify /api/data returns valid JSON for 24h."""
+ client = _make_client()
+ app = create_app(client)
+ with app.test_client() as tc:
+ resp = tc.get("/api/data?horizon=24h")
+ assert resp.status_code == 200
+ data = json.loads(resp.data)
+ assert "traces" in data
+ assert "table_rows" in data
+ assert "insights" in data
+ assert "assets" in data
+ assert data["horizon"] == "24h"
+ assert data["benchmark"] == "SPY"
+
+
+def test_flask_api_data_1h():
+ """Verify /api/data returns valid JSON for 1h."""
+ client = _make_client()
+ app = create_app(client)
+ with app.test_client() as tc:
+ resp = tc.get("/api/data?horizon=1h")
+ assert resp.status_code == 200
+ data = json.loads(resp.data)
+ assert data["horizon"] == "1h"
+ assert data["benchmark"] == "BTC"
+ assert "BTC" in data["assets"]
+
+
+def test_flask_api_data_invalid_horizon():
+ """Verify /api/data rejects invalid horizon."""
+ client = _make_client()
+ app = create_app(client)
+ with app.test_client() as tc:
+ resp = tc.get("/api/data?horizon=7d")
+ assert resp.status_code == 400
+ data = json.loads(resp.data)
+ assert "error" in data
+
+
+def test_flask_api_probability_valid():
+ """Verify /api/probability returns correct structure."""
+ client = _make_client()
+ app = create_app(client)
+ with app.test_client() as tc:
+ resp = tc.post("/api/probability",
+ data=json.dumps({"asset": "SPY", "target_price": 600.0, "horizon": "24h"}),
+ content_type="application/json")
+ assert resp.status_code == 200
+ data = json.loads(resp.data)
+ assert "probability_below" in data
+ assert "probability_above" in data
+ assert "current_price" in data
+ assert abs(data["probability_below"] + data["probability_above"] - 100.0) < 0.01
+
+
+def test_flask_api_probability_invalid_asset():
+ """Verify /api/probability rejects asset not in horizon."""
+ client = _make_client()
+ app = create_app(client)
+ with app.test_client() as tc:
+ resp = tc.post("/api/probability",
+ data=json.dumps({"asset": "SPY", "target_price": 600.0, "horizon": "1h"}),
+ content_type="application/json")
+ assert resp.status_code == 400
+ data = json.loads(resp.data)
+ assert "not available" in data["error"]
+
+
+def test_flask_api_probability_invalid_price():
+ """Verify /api/probability rejects non-positive price."""
+ client = _make_client()
+ app = create_app(client)
+ with app.test_client() as tc:
+ resp = tc.post("/api/probability",
+ data=json.dumps({"asset": "SPY", "target_price": -10, "horizon": "24h"}),
+ content_type="application/json")
+ assert resp.status_code == 400
+
+
+def test_flask_api_probability_missing_body():
+ """Verify /api/probability handles missing JSON body."""
+ client = _make_client()
+ app = create_app(client)
+ with app.test_client() as tc:
+ resp = tc.post("/api/probability", content_type="application/json")
+ assert resp.status_code == 400
+
+
if __name__ == "__main__":
test_client_loads_in_mock_mode()
test_fetch_all_equities_data()
@@ -290,4 +530,24 @@ def test_volatility_values():
test_calculate_metrics_projection_bounds()
test_generate_dashboard_html()
test_volatility_values()
+ test_get_assets_for_horizon_24h()
+ test_get_assets_for_horizon_1h()
+ test_fetch_all_data_1h_horizon()
+ test_add_relative_to_benchmark_equities()
+ test_add_relative_to_benchmark_crypto()
+ test_calculate_target_probability_within_range()
+ test_calculate_target_probability_extreme_low()
+ test_calculate_target_probability_extreme_high()
+ test_calculate_target_probability_interpolation()
+ test_make_time_points_24h()
+ test_make_time_points_1h()
+ test_build_insights()
+ test_flask_index_route()
+ test_flask_api_data_24h()
+ test_flask_api_data_1h()
+ test_flask_api_data_invalid_horizon()
+ test_flask_api_probability_valid()
+ test_flask_api_probability_invalid_asset()
+ test_flask_api_probability_invalid_price()
+ test_flask_api_probability_missing_body()
print("All tests passed!")