A unified Python toolkit for Multi-Frequency Quantile Econometrics — implementing three families of quantile methods from the academic literature in a single, interoperable library.
| Paper | Method | Module |
|---|---|---|
| Li, Li & Tsai (2015, JASA) | QCOR, QPCOR, QAR, QPACF, QACF, Box-Jenkins | quantile_correlation, qar |
| Adebayo (2026, Applied Economics) | MFQR, MFQGC | mfqr, mfqgc |
| Shi & Sheng (2025, Comm. Stat.) | Uncertain QAR | uncertain_qar |
Author: Dr Merwan Roudane · merwanroudane920@gmail.com
GitHub: github.com/merwanroudane/frequencyquantile
```bash
pip install frequencyquantile
```

From source:

```bash
git clone https://github.com/merwanroudane/frequencyquantile.git
cd frequencyquantile
pip install -e .
```

Dependencies: `numpy`, `scipy`, `pandas`, `statsmodels`, `matplotlib`, `seaborn`, `PyEMD`, `tabulate`
```python
import numpy as np
import frequencyquantile as fq

# Generate sample data
np.random.seed(42)
y = np.cumsum(np.random.randn(500)) + 100
x = np.cumsum(np.random.randn(500)) + 50

# === Paper 1: QAR with modified Box-Jenkins ===
model = fq.three_stage_procedure(y, tau=0.05, K=10)

# === Paper 2: Multi-Frequency Quantile Regression ===
mfqr = fq.MFQR(n_imfs=5).fit(y, x, y_name="GDP", x_name="Energy")

# === Paper 2: Multi-Frequency Granger Causality ===
mfqgc = fq.MFQGC(n_imfs=5).fit(y, x, y_name="CO2", x_name="REG")

# === Paper 3: Uncertain QAR ===
data = [(v - 0.05, v + 0.05) for v in y[:30]]
uqar = fq.UncertainQAR(order=3).fit(data)

# === Visualisation ===
plotter = fq.FrequencyQuantilePlotter()
plotter.plot_mfqr_heatmap(mfqr)
plotter.plot_mfqgc_heatmap(mfqgc)
```

```python
# Eq. (2.1) numerator: qcov_τ{Y, X} = E[ψ_τ(Y − Q_τ,Y)(X − EX)]
val = fq.qcov(Y, X, tau=0.25)
```

| Parameter | Type | Description |
|---|---|---|
| `Y` | array (n,) | Response variable |
| `X` | array (n,) | Predictor variable |
| `tau` | float | Quantile level ∈ (0, 1) |
| **Returns** | float | Quantile covariance value |
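For intuition, the sample analogue of Eq. (2.1) fits in a few lines of NumPy. This is an illustrative sketch, not the package's implementation; `psi`, `qcov_np`, and `qcor_np` are hypothetical names:

```python
import numpy as np

def psi(u, tau):
    """Check-function derivative: psi_tau(u) = tau - 1{u < 0}."""
    return tau - (u < 0).astype(float)

def qcov_np(y, x, tau):
    """Sample analogue of qcov_tau{Y, X} = E[psi_tau(Y - Q_tau,Y)(X - EX)]."""
    q_y = np.quantile(y, tau)                  # sample quantile Q_tau,Y
    return np.mean(psi(y - q_y, tau) * (x - x.mean()))

def qcor_np(y, x, tau):
    """Normalise by sqrt[(tau - tau^2) * var(X)] so the result lies in [-1, 1]."""
    return qcov_np(y, x, tau) / np.sqrt((tau - tau**2) * np.var(x))

rng = np.random.default_rng(0)
x = rng.standard_normal(1000)
y = 0.8 * x + 0.5 * rng.standard_normal(1000)
r = qcor_np(y, x, tau=0.25)    # positive dependence, so r > 0
```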
```python
# Eq. (2.1): normalised by √[(τ−τ²)·σ²_X]
r = fq.qcor(Y, X, tau=0.5)  # scalar in [-1, 1]

result = fq.sample_qcor(Y, X, tau=0.25)
# result = {'qcor': 0.312, 'se': 0.045, 'z_stat': 6.93, 'p_value': 0.0000}
```

```python
# Eq. (2.4): controls for Z when measuring the Y-X association at quantile τ
r_partial = fq.qpcor(Y, X, Z, tau=0.75)
```

| Parameter | Type | Description |
|---|---|---|
| `Z` | array (n, q) | Covariates to partial out |
```python
result = fq.sample_qpcor(Y, X, Z, tau=0.75)
# result = {'qpcor': -0.187, 'se': 0.063, 'z_stat': -2.97, 'p_value': 0.003}
```

```python
pacf = fq.qpacf(y, tau=0.05, max_lag=15, bandwidth_scale=0.6)
# pacf['lag']      → array([1, 2, ..., 15])
# pacf['qpacf']    → array of φ̂_{kk,τ} values
# pacf['se']       → asymptotic standard errors
# pacf['ci_lower'] → lower 95% confidence band
# pacf['ci_upper'] → upper 95% confidence band
```

| Parameter | Type | Default | Description |
|---|---|---|---|
| `y` | array | — | Time series |
| `tau` | float | — | Quantile level |
| `max_lag` | int | 15 | Maximum lag to compute |
| `bandwidth_scale` | float | 0.6 | Bofinger bandwidth multiplier |
Key property (Lemma 2): QPACF has a cut-off at lag p — φ_{kk,τ} = 0 for k > p.
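At its core, fitting a QAR(p) at level τ is a quantile regression of y_t on an intercept and its own lags. A minimal sketch using statsmodels' `QuantReg`; the helper `fit_qar` is illustrative, not the internals of the package's `QARModel`:

```python
import numpy as np
from statsmodels.regression.quantile_regression import QuantReg

def fit_qar(y, lags, tau):
    """Quantile regression of y_t on an intercept and the given lags."""
    p = max(lags)
    T = len(y)
    X = np.column_stack([np.ones(T - p)] + [y[p - l : T - l] for l in lags])
    res = QuantReg(y[p:], X).fit(q=tau)
    return res.params             # [intercept, phi_lag1, phi_lag2, ...]

rng = np.random.default_rng(42)
e = rng.standard_normal(600)
y = np.zeros(600)
for t in range(1, 600):           # simulate an AR(1) with phi = 0.6
    y[t] = 0.6 * y[t - 1] + e[t]

params = fit_qar(y, lags=[1], tau=0.5)   # median regression recovers phi ≈ 0.6
```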
```python
model = fq.QARModel(tau=0.05)
model.fit(y, lags=[1, 2, 8, 11])  # explicit lags
# OR
model.fit(y, max_lag=5)           # use lags 1..5

# Attributes after fitting:
model.intercept     # φ_0(τ)
model.coefficients  # {lag: φ_j(τ)} dict
model.residuals     # quantile residuals ê_{t,τ}
model.se            # {lag: std_error} dict
model.n_obs         # effective sample size

# Summary table
model.summary()
# Variable    Coef     Std.Err   z-stat   P>|z|    Sig
# intercept   -0.747   0.029     -25.76   0.0000   ***
# y(t-2)      0.118    0.051     2.31     0.0209   **
# ...

# Forecast
forecasts = model.predict(y, steps=5)
```

```python
ac = fq.qacf(model.residuals, tau=0.05, max_lag=15)
```
The result has the same structure as the `qpacf` output.

```python
bp = fq.qbp_test(model.residuals, tau=0.05, K=15, p=4)
# bp = {'Q_BP': 12.34, 'df': 11, 'p_value': 0.338}
```
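The portmanteau construction can be sketched generically: transform the quantile residuals by ψ_τ, take their sample autocorrelations, and compare n·Σr² to a χ² with K − p degrees of freedom. This conveys the flavour of the statistic only; `qbp_sketch` is not the library's implementation:

```python
import numpy as np
from scipy import stats

def qbp_sketch(resid, tau, K, p):
    """Box-Pierce-style portmanteau on psi_tau-transformed residuals."""
    psi = tau - (resid < 0).astype(float)   # psi_tau of quantile residuals
    psi = psi - psi.mean()
    n = len(psi)
    denom = np.sum(psi**2)
    r = np.array([np.sum(psi[k:] * psi[:-k]) / denom for k in range(1, K + 1)])
    Q = n * np.sum(r**2)                    # Q = n * sum_{k=1..K} r_k^2
    df = K - p
    return {"Q_BP": Q, "df": df, "p_value": 1 - stats.chi2.cdf(Q, df)}

rng = np.random.default_rng(1)
out = qbp_sketch(rng.standard_normal(400), tau=0.05, K=15, p=4)
```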
A Q_BP p-value above 0.05 indicates the model is adequate. ✓

```python
model = fq.three_stage_procedure(y, tau=0.05, K=15, alpha=0.05, verbose=True)
# [Stage 1] QPACF suggests QAR(11) at τ=0.05.
#           Significant lags: [2, 8, 11]
# [Stage 2] Estimated QAR(11) with lags [1, 2, ..., 11].
# [Stage 3] Removing lag 1 (p=0.7234).
# [Stage 3] Removing lag 9 (p=0.5891).
# ...
# [Stage 3] Final lags: [2, 4, 10, 11]
#           Q_BP(15) = 14.21, p-value = 0.320
```
✓ The fitted model passes the Q_BP diagnostic.

```python
dec = fq.eemd_decompose(y, n_imfs=5, noise_width=0.05, n_trials=100)
# dec['imfs']     → (5, n) array of IMFs (high → low frequency)
# dec['residual'] → (n,) residual trend component
# dec['n_imfs']   → 5
# dec['labels']   → ['IMF1', 'IMF2', ..., 'IMF5', 'Residual']
```
```python
mfqr = fq.MFQR(
    n_imfs=5,                                  # EEMD decomposition levels
    quantiles=[0.05, 0.25, 0.50, 0.75, 0.95],  # quantile grid
    noise_width=0.05,                          # EEMD noise
    n_trials=100,                              # EEMD ensemble size
)

# Fit: decomposes both series, then runs QR at each (frequency, quantile)
mfqr.fit(y, x, y_name="CO2_Residential", x_name="CPU")

# Full summary table
mfqr.summary()
# Frequency                       Quantile   β(CPU)    Std.Error   t-stat   p-value   Sig
# IMF1 (High freq / Short-term)   0.05       0.0312    0.0451      0.692    0.4890
# IMF1 (High freq / Short-term)   0.25       0.0189    0.0234      0.808    0.4191
# ...
# IMF5 (Low freq / Long-term)     0.95       -0.2341   0.0782      -2.994   0.0028    ***

# Coefficient matrix (pivot: quantiles × frequencies)
mfqr.coefficient_matrix()
# Level         1       2       3       4       5
# Quantile
# 0.05      0.031   0.018  -0.045  -0.121  -0.234
# 0.25      0.019   0.023  -0.038  -0.098  -0.189
# ...

# Significance stars matrix
mfqr.significance_matrix()
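The MFQR idea, one quantile regression per (frequency component, quantile) cell, can be sketched with statsmodels' `QuantReg`. In this sketch two crude moving-average bands stand in for EEMD IMFs purely to show the per-cell regression; `two_bands` and `coefs` are illustrative names, not part of the package:

```python
import numpy as np
from statsmodels.regression.quantile_regression import QuantReg

rng = np.random.default_rng(7)
x = np.cumsum(rng.standard_normal(400))
y = 0.5 * x + rng.standard_normal(400)

def two_bands(s, w=20):
    """Split a series into a smooth (low-frequency) and a residual (high-frequency) band."""
    trend = np.convolve(s, np.ones(w) / w, mode="same")
    return {"high": s - trend, "low": trend}

yb, xb = two_bands(y), two_bands(x)
coefs = {}
for band in ("high", "low"):
    for tau in (0.25, 0.50, 0.75):
        X = np.column_stack([np.ones_like(xb[band]), xb[band]])
        res = QuantReg(yb[band], X).fit(q=tau)
        coefs[(band, tau)] = res.params[1]   # slope at this (frequency, quantile) cell
```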
```python
mfqgc = fq.MFQGC(
    n_imfs=5,
    quantiles=[0.05, 0.10, 0.25, 0.50, 0.75, 0.90, 0.95],
    max_lag=8,  # max lag for VAR
)

# Fit: EEMD → quantile series → Granger test at each combination
mfqgc.fit(y, x, y_name="RECEM", x_name="EPU")

# Summary
mfqgc.summary()
# Frequency            Quantile   Lag   F-stat   p-value   Sig   Causal
# IMF1 (High freq)     0.05       2     1.234    0.2934          No
# IMF3 (Medium freq)   0.50       3     4.567    0.0034    ***   Yes
# IMF5 (Low freq)      0.95       1     8.901    0.0001    ***   Yes

# p-value matrix (pivot)
mfqgc.pvalue_matrix()

# Causality significance matrix
mfqgc.causality_matrix()
```
Input: imprecise observations as (lower, upper) interval pairs.

```python
# Data as interval-valued uncertain variables L(a, b)
data = [
    (2.26, 2.36), (3.18, 3.28), (4.00, 4.10), (4.40, 4.50),
    (4.37, 4.47), (4.07, 4.17), (3.91, 4.01), (3.83, 3.93),
    # ... more observations
]

uqar = fq.UncertainQAR(
    order=None,      # None = auto-select via cross-validation
    max_order=5,     # max order for CV search
    quantiles=[0.1, 0.3, 0.5, 0.7, 0.9],
    sig_level=0.05,  # hypothesis test significance
)
uqar.fit(data, verbose=True)
# Selecting order via cross-validation (ATE)...
#   k=1: ATE = 0.019600
#   k=2: ATE = 0.012300
#   k=3: ATE = 0.009900  ← smallest
# → Selected order k = 3
#
# Quantile s = 0.5:
#   Coefficients: [0.5687, 0.9160, -0.0463, -0.0115]
#   ê_s = 0.0194, σ̂_s = 0.1386
#   Hypothesis test: ✓ PASSED
#   STE = 0.5231
#   Forecast = 4.2105, 95% CI = [3.9306, 4.4904]

# Summary table
uqar.summary()
# Quantile   Order   ê_s     σ̂_s     H₀ Test   STE     Forecast   CI Lower   CI Upper   ĉ_s0     ĉ_s1     ĉ_s2      ĉ_s3
# 0.1        3       0.077   0.282   Pass      2.080   4.229      3.659      4.798      0.0933   0.3642   0.3290    0.2891
# 0.3        3       0.081   0.291   Pass      2.198   4.235      3.648      4.823      0.0862   0.3444   0.3303    0.3101
# 0.5        3       0.019   0.139   Pass      0.523   4.211      3.931      4.490      0.5687   0.9160   -0.0463   -0.0115
# ...

# Get the optimal quantile (minimum STE among passing tests)
best_s = uqar.optimal_quantile()  # → 0.5
```

Also works with precise data (auto-wrapped as zero-width intervals):
```python
uqar = fq.UncertainQAR(order=2).fit(np.array([3.1, 3.5, 3.8, 4.2, ...]))
```

```python
plotter = fq.FrequencyQuantilePlotter(style="dark", figsize=(12, 6))
```

| Method | Description | Paper |
|---|---|---|
| `plot_qpacf(pacf, tau)` | QPACF lollipop with 95% CI bands | 1 |
| `plot_qacf(acf, tau)` | Residual QACF lollipop | 1 |
| `plot_multi_qpacf(y, taus)` | Side-by-side QPACF for multiple τ | 1 |
| `plot_qar_summary(models)` | Coefficient functions φ(τ) across quantiles | 1 |
| `plot_mfqr_heatmap(mfqr)` | Coefficient heatmap (quantile × frequency) | 2 |
| `plot_mfqgc_heatmap(mfqgc)` | p-value heatmap with significance stars | 2 |
| `plot_eemd(decomp)` | EEMD decomposition panel plot | 2 |
| `plot_uncertain_forecast(uqar, data)` | Fan chart across quantiles | 3 |
```python
# All methods support saving:
plotter.plot_mfqr_heatmap(mfqr, title="CPU → RECEM", save="mfqr_cpu_recem.png")
```

```python
tbl = fq.FrequencyQuantileTable(fmt="fancy_grid")  # or "latex", "html", "pipe"
tbl.qar_summary(model)
tbl.mfqr_summary(mfqr)
tbl.mfqgc_summary(mfqgc)
tbl.uncertain_qar_summary(uqar)

# LaTeX export
latex_str = tbl.to_latex(mfqr.summary(), caption="MFQR Results", label="tab:mfqr")
```
```python
import numpy as np
import pandas as pd
import frequencyquantile as fq

# ── Load data ──
df = pd.read_csv("data.csv", parse_dates=["date"])
y = df["CO2_residential"].values
x_cpu = df["CPU"].values
x_epu = df["EPU"].values
x_reg = df["REG"].values

# ── Step 1: EEMD decomposition ──
plotter = fq.FrequencyQuantilePlotter()
dec = fq.eemd_decompose(y, n_imfs=5)
plotter.plot_eemd(dec, series_name="RECEM", save="eemd_recem.png")

# ── Step 2: QAR analysis at key quantiles ──
taus = [0.05, 0.25, 0.50, 0.75, 0.95]
models = {}
for tau in taus:
    models[tau] = fq.three_stage_procedure(y, tau=tau, K=10, verbose=False)
plotter.plot_qar_summary(models, save="qar_coefficients.png")

# ── Step 3: MFQR for each predictor ──
tbl = fq.FrequencyQuantileTable()
for x, name in [(x_cpu, "CPU"), (x_epu, "EPU"), (x_reg, "REG")]:
    mfqr = fq.MFQR(n_imfs=5).fit(y, x, y_name="RECEM", x_name=name)
    tbl.mfqr_summary(mfqr)
    plotter.plot_mfqr_heatmap(mfqr, save=f"mfqr_{name}.png")

# ── Step 4: MFQGC causality ──
for x, name in [(x_cpu, "CPU"), (x_epu, "EPU"), (x_reg, "REG")]:
    gc = fq.MFQGC(n_imfs=5).fit(y, x, y_name="RECEM", x_name=name)
    tbl.mfqgc_summary(gc)
    plotter.plot_mfqgc_heatmap(gc, save=f"mfqgc_{name}.png")

# ── Step 5: Uncertain QAR (for interval data) ──
intervals = [(v - 0.05, v + 0.05) for v in y[:30]]
uqar = fq.UncertainQAR(quantiles=[0.1, 0.3, 0.5, 0.7, 0.9]).fit(intervals)
tbl.uncertain_qar_summary(uqar)
plotter.plot_uncertain_forecast(uqar, data=intervals, save="uqar_forecast.png")
```
```
frequencyquantile/
├── __init__.py              # Public API
├── utils.py                 # Check function, QR solver, density estimation
├── quantile_correlation.py  # QCOR, QPCOR (Paper 1, Sec. 2)
├── qar.py                   # QARModel, QPACF, QACF, Q_BP (Paper 1, Sec. 3)
├── decomposition.py         # EEMD wrapper (Wu & Huang 2009)
├── mfqr.py                  # MFQR (Paper 2)
├── mfqgc.py                 # MFQGC (Paper 2)
├── uncertain_qar.py         # Uncertain QAR (Paper 3)
├── visualization.py         # Dark-mode publication plots
└── tables.py                # Formatted summary tables
```
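The check (pinball) loss that underlies all of the quantile machinery above can be sketched generically: minimising its sample mean over a constant recovers the τ-quantile. `check_loss` is an illustrative name, not the function in `utils.py`:

```python
import numpy as np

def check_loss(u, tau):
    """Koenker-Bassett check loss: rho_tau(u) = u * (tau - 1{u < 0})."""
    return u * (tau - (u < 0).astype(float))

# The tau-quantile minimises the expected check loss:
rng = np.random.default_rng(0)
z = rng.standard_normal(100_000)
grid = np.linspace(-3, 3, 601)
losses = [check_loss(z - c, 0.9).mean() for c in grid]
c_hat = grid[int(np.argmin(losses))]   # close to the 0.9 quantile of N(0, 1)
```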
- Li G., Li Y., Tsai C.-L. (2015). "Quantile Correlations and Quantile Autoregressive Modeling". *Journal of the American Statistical Association*, 110(509): 246–261.
- Adebayo T.S. (2026). "Response of sectoral CO₂ emissions to climate and economic policy uncertainties: a multi-frequency quantile analysis". *Applied Economics*, 58(20): 3922–3941.
- Shi Y., Sheng Y. (2025). "Uncertain quantile autoregressive model". *Communications in Statistics – Simulation and Computation*, 54(6): 1869–1889.
- Koenker R., Xiao Z. (2006). "Quantile Autoregression". *Journal of the American Statistical Association*, 101: 980–990.
- Wu Z., Huang N.E. (2009). "Ensemble Empirical Mode Decomposition". *Advances in Adaptive Data Analysis*, 1(1): 1–41.
- Liu B. (2007). *Uncertainty Theory*, 2nd edn. Springer.
MIT License — Copyright (c) 2026 Dr Merwan Roudane
```bibtex
@software{roudane2026frequencyquantile,
  author = {Roudane, Merwan},
  title  = {frequencyquantile: Multi-Frequency Quantile Econometrics Toolkit},
  year   = {2026},
  url    = {https://github.com/merwanroudane/frequencyquantile},
}
```