๐Ÿ“ Methodology & Scoring

How GMIIE computes scores, detects patterns, and validates its own reliability

Analytical Philosophy

GMIIE is designed around a principle of layered skepticism. Every data point passes through multiple validation stages before influencing any composite score. The system assumes all data sources may be biased, incomplete, or stale, and adjusts its confidence accordingly.

The 5-ring architecture deliberately separates quantitative data (Ring 1) from qualitative signals (Ring 2), deployment reality (Ring 3) from structural analysis (Ring 4), and jurisdiction-level events from geopolitical dynamics (Ring 5). This separation prevents narrative contamination: a policymaker's optimistic speech cannot inflate the system's assessment unless deployment data (Ring 3) corroborates it.

🔧 Scoring Pipeline

Step 1

Ring Score Computation

Each ring computes a raw score from 0.0 to 1.0 based on its domain-specific inputs. Ring 1 aggregates quantitative financial signals. Ring 2 measures language drift velocity via NLP. Ring 3 tracks deployment stage progression. Ring 4 uses graph-theoretic fragility metrics. Ring 5 tallies geopolitical fracture events weighted by severity.

Step 2

Confidence Dampening

Raw ring scores are adjusted by a confidence weight (0.0–1.0) before being used in downstream analysis. This prevents low-confidence data from inflating system-wide assessments. The dampened score = raw score × confidence weight. If Ring 2 has a raw score of 0.64 but a confidence weight of only 0.68, the dampened score is 0.44.

Step 3

Confidence Normalization

The confidence weight itself is computed from five factors using a weighted geometric mean: data density (25%), source diversity (20%), historical validation (20%), recency decay (20%), and analyst confirmation (15%). This ensures that confidence is grounded in data quality, not arbitrary calibration.
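Steps 2 and 3 can be sketched in a few lines of Python. The factor weights (25/20/20/20/15%) come from the text; the factor values themselves are illustrative assumptions:

```python
import math

# Illustrative factor values (0.0-1.0); the weights are the ones stated
# above and sum to 1.0.
FACTORS = {
    "data_density":          (0.80, 0.25),
    "source_diversity":      (0.70, 0.20),
    "historical_validation": (0.90, 0.20),
    "recency_decay":         (0.60, 0.20),
    "analyst_confirmation":  (0.85, 0.15),
}

def confidence_weight(factors):
    """Weighted geometric mean: prod(f_i ** w_i), weights summing to 1."""
    return math.prod(f ** w for f, w in factors.values())

def dampened_score(raw, confidence):
    """Step 2: dampened score = raw score x confidence weight."""
    return raw * confidence

print(round(dampened_score(0.64, 0.68), 2))  # 0.44, the Ring 2 example above
```

The geometric mean (rather than an arithmetic one) means any single weak factor drags the overall confidence down sharply, which matches the layered-skepticism stance.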

Step 4

NIG Score

NIG = Narrative Acceleration Index − Infrastructure Deployment Index. Positive values indicate rhetoric is outpacing deployment ('policy theater'). Negative values indicate deployment is outpacing rhetoric ('silent rollout'). The rhetoric-reality gap metric captures this on a per-initiative basis.
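A minimal sketch of the gap computation; the ±0.15 labeling band is an illustrative assumption, not a documented threshold:

```python
def nig_label(narrative_index, deployment_index, band=0.15):
    """NIG = narrative acceleration - infrastructure deployment.

    Positive gaps suggest policy theater, negative gaps a silent rollout.
    The `band` threshold is an illustrative assumption.
    """
    gap = narrative_index - deployment_index
    if gap > band:
        return gap, "policy_theater"
    if gap < -band:
        return gap, "silent_rollout"
    return gap, "aligned"
```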

Step 5

Cross-Ring Conflict Detection

The system scans all 49 pairwise ring-jurisdiction combinations for anomalous patterns. When multiple rings simultaneously elevate in a single jurisdiction (e.g., Ring 1 stress + Ring 4 fragility + Ring 5 fracture in the EU), it may indicate a crisis precursor. These are flagged with pattern labels like 'crisis_precursor', 'policy_theater', 'silent_rollout'.
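The crisis-precursor pattern reduces to a co-elevation check across rings in one jurisdiction; in this sketch the 0.7 threshold is an assumption, and the ring triple is taken from the EU example above:

```python
def detect_crisis_precursor(ring_scores, rings=(1, 4, 5), threshold=0.7):
    """Flag a jurisdiction when the given rings are simultaneously elevated.

    `ring_scores` maps ring number -> dampened score for one jurisdiction;
    the 0.7 elevation threshold is an illustrative assumption.
    """
    return all(ring_scores.get(r, 0.0) >= threshold for r in rings)

# Hypothetical EU snapshot: stress (1), fragility (4), fracture (5) elevated
eu = {1: 0.81, 2: 0.40, 3: 0.55, 4: 0.74, 5: 0.77}
print(detect_crisis_precursor(eu))  # True
```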

📊 12 Impact Formulas

These formulas quantify how digital monetary infrastructure changes affect society, sovereignty, privacy, and systemic stability. Each is computed per jurisdiction.

FII

Financial Inclusion Index

↑ Higher = better
FII = Σ(wᵢ · fᵢ)

Weighted sum of account access, mobile reach, and digital literacy across the population. Measures whether digital currency deployment is widening or narrowing financial access.

SCS

Surveillance Capacity Score

↓ Lower = better
SCS = (D_scope × M_cap × L_auth)^(1/3)

Geometric mean of data scope, monitoring capability, and legal authority. Higher scores indicate greater state surveillance capacity enabled by the digital infrastructure.

MSI

Monetary Sovereignty Index

↑ Higher = better
MSI = 1 − (E_dep / M_base) × (1 − T_ctrl)

Measures a jurisdiction's control over its own monetary system. Declines when external dependencies (like reliance on foreign settlement rails) increase.

CBFC

Cross-Border Friction Coefficient

↓ Lower = better
CBFC = (R_bar + T_bar + C_bar) / 3

Average of regulatory, technical, and cost barriers to cross-border value transfer. Lower friction means money moves more easily between jurisdictions.

DDRS

Digital Divide Risk Score

↓ Lower = better
DDRS = (1 − A_dig) × V_pop × (1 − R_mit)

Captures the risk that digital currency deployment leaves behind populations without digital access. Accounts for vulnerable population size and mitigation measures.

SFI

Systemic Fragility Index

↓ Lower = better
SFI = C_risk × K_conc × (1 − R_red) × H_homo

Product of cascade risk, concentration in key nodes, lack of redundancy, and system homogeneity. Identifies how vulnerable the infrastructure is to cascading failures.

PEI

Privacy Erosion Index

↓ Lower = better
PEI = (S_scope × R_ret × P_share) / L_prot

Ratio of surveillance scope, data retention, and sharing practices against legal protections. Higher values indicate greater erosion of financial privacy.

ITS

Institutional Trust Score

↑ Higher = better
ITS = (T_trans × A_acc × H_hist × S_sent)^(1/4)

Fourth-root geometric mean of transparency, accountability, historical track record, and public sentiment. Measures whether citizens trust the institutions deploying digital infrastructure.

IAR

Innovation Acceleration Rate

↑ Higher = better
IAR = (D_new / Δt) × R_adopt × I_interop

Rate of new deployments over time, scaled by adoption velocity and interoperability. Captures how quickly the jurisdiction is advancing its digital infrastructure.

NSIC

Net Societal Impact Composite

Composite (−1 to +1)
NSIC = Σ(wᵢ · Sᵢ · dᵢ)

Master composite that weights all impact scores by their direction-adjusted significance. Positive means net benefit to society; negative means net harm. Near zero means impacts are balanced.

CP

Cascade Probability

↓ Lower = better
P(Cₙ) = Π P(Cᵢ | Cᵢ₋₁)

Conditional probability chain for cascading failures. If node A fails, what is the probability it cascades to B, then to C? Used in scenario modeling.

GIP

Global Impact Propagation

Composite
GIP = M_impact × W_trade × (1 − D_dist) × C_conn

How far and how fast a local event propagates through the global monetary network. Based on trade weight, geographic distance, and connectivity.
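Several of the formulas above translate directly into code. A sketch of four of them, with hypothetical input values on a 0-1 scale:

```python
import math

def scs(d_scope, m_cap, l_auth):
    """Surveillance Capacity Score: geometric mean of the three factors."""
    return (d_scope * m_cap * l_auth) ** (1 / 3)

def msi(e_dep, m_base, t_ctrl):
    """Monetary Sovereignty Index: 1 - (E_dep / M_base) * (1 - T_ctrl)."""
    return 1 - (e_dep / m_base) * (1 - t_ctrl)

def sfi(c_risk, k_conc, r_red, h_homo):
    """Systemic Fragility Index: product of the four fragility factors."""
    return c_risk * k_conc * (1 - r_red) * h_homo

def cascade_probability(conditional_ps):
    """P(C_n): product of conditional probabilities along the failure chain."""
    return math.prod(conditional_ps)

# Hypothetical jurisdiction inputs
print(round(scs(0.8, 0.9, 0.7), 3))
print(round(cascade_probability([0.5, 0.4, 0.3]), 3))  # 0.06
```

Note the design split: the multiplicative forms (SCS, SFI, CP) collapse toward zero when any single factor is low, while the additive forms (FII, CBFC) average their inputs.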

🔮 Oracle – Prediction Factory

The Oracle extends GMIIE from structural analysis into probabilistic macro-asset forecasting. It ingests market data, engineers features that blend traditional technicals with GMIIE's own ring scores, detects volatility regimes via GARCH, and generates directional forecasts through an XGBoost ensemble, all published with cryptographic integrity guarantees.

L1

Market Data Ingestion

Pulls OHLCV price data, order-book snapshots, and macro indicators for BTC, ETH, DXY, GOLD, and UST10Y via ccxt (exchange APIs) and yfinance (Yahoo Finance). Data is deduplicated, gap-filled, and stored as AssetPrice rows with a 1-hour resolution floor.
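The dedup/gap-fill step can be sketched with pandas. The helper below is illustrative only; the actual AssetPrice storage layer is not shown:

```python
import pandas as pd

def clean_ohlcv(df: pd.DataFrame) -> pd.DataFrame:
    """Deduplicate, enforce the 1-hour resolution floor, and fill gaps.

    Expects a DataFrame indexed by timestamp. Gaps are forward-filled from
    the last observed bar; this is an illustrative sketch, not the engine's
    actual ingestion code.
    """
    df = df[~df.index.duplicated(keep="first")].sort_index()
    return df.resample("1h").asfreq().ffill()
```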

L2

Feature Engineering

Computes ~30 technical and structural features per asset: multi-window returns (1 h–30 d), rolling volatility, Bollinger width, RSI, MACD, VWAP deviation, volume z-scores, cross-asset correlations, and GMIIE ring scores injected as exogenous regressors. Features are z-normalized before model input.
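The final z-normalization step can be sketched with NumPy (hypothetical helper, operating on a samples × features matrix):

```python
import numpy as np

def znorm(features: np.ndarray) -> np.ndarray:
    """Z-normalize each feature column, guarding against zero variance."""
    mu = features.mean(axis=0)
    sd = features.std(axis=0)
    # Constant columns get sd replaced by 1.0 so they map to zeros
    return (features - mu) / np.where(sd == 0.0, 1.0, sd)
```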

L3

Regime Detection (GARCH)

An ARCH/GARCH(1,1) model estimates conditional volatility for each asset. The fitted volatility series is segmented into three regimes: Risk-Off (σ > 1.5 × median), Neutral, and Risk-On (σ < 0.7 × median). Regime state feeds the forecast as a categorical feature and gates position-sizing guidance.
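Once a GARCH(1,1) fit (e.g. via the `arch` package) has produced a conditional-volatility series, the regime segmentation itself is a simple thresholding step; a sketch:

```python
import numpy as np

def label_regimes(cond_vol: np.ndarray) -> np.ndarray:
    """Segment a fitted conditional-volatility series into three regimes.

    Thresholds follow the text: sigma > 1.5 x median -> risk_off,
    sigma < 0.7 x median -> risk_on, otherwise neutral.
    """
    med = np.median(cond_vol)
    return np.where(cond_vol > 1.5 * med, "risk_off",
           np.where(cond_vol < 0.7 * med, "risk_on", "neutral"))
```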

L4

Forecast Generation (XGBoost)

A gradient-boosted tree ensemble (XGBoost) produces directional and magnitude forecasts at 24-hour and 7-day horizons. Training uses a rolling 180-day window with walk-forward validation. Output includes point prediction, direction probability, and a confidence interval derived from quantile regression (10th/90th percentile).
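The walk-forward split can be sketched independently of XGBoost itself; a hypothetical helper yielding index ranges over the sample history:

```python
def walk_forward_windows(n_samples, train_len, test_len):
    """Yield (train, test) index ranges for walk-forward validation.

    Each step advances by `test_len`, so the model is always evaluated on
    data strictly after its training window and each sample is tested once.
    """
    start = 0
    while start + train_len + test_len <= n_samples:
        yield (range(start, start + train_len),
               range(start + train_len, start + train_len + test_len))
        start += test_len
```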

L5

Publication & Audit

Forecasts are bundled into an OraclePublication with a SHA-256 integrity hash, version ID, and wall-clock timestamp. Every publication is immutable once emitted. A parallel accuracy evaluator compares past forecasts against realized prices, computing direction accuracy, MAE, and RMSE, and feeds the results back into model retraining decisions.
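The integrity-hash step can be sketched with the standard library; the bundle fields below are assumptions based on the description above, not the actual OraclePublication schema:

```python
import hashlib
import json
from datetime import datetime, timezone

def build_publication(forecasts: dict, version: str) -> dict:
    """Bundle forecasts with a SHA-256 hash over their canonical JSON form."""
    # sort_keys gives a canonical byte representation, so the same
    # forecasts always hash to the same digest
    body = json.dumps(forecasts, sort_keys=True).encode("utf-8")
    return {
        "version": version,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "integrity_hash": hashlib.sha256(body).hexdigest(),
        "forecasts": forecasts,
    }

pub = build_publication({"BTC": {"direction_prob": 0.61}}, "v1")
print(len(pub["integrity_hash"]))  # 64
```

Any consumer can recompute the hash from the forecast body and compare it against the published digest to detect tampering.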

Assets tracked: 5 · Forecast horizons: 24 h, 7 d · Training window: 180-day rolling · Publication cadence: 2× daily (AM / PM)

🧪 Historical Backtesting

GMIIE validates its detection capabilities by replaying historical financial crises through the 5-ring engine and comparing the system's outputs against known outcomes. This process measures accuracy (did the system detect the right signals?) and lead time (how early did it detect them?).

Crisis                             Year   Dominant Ring     Accuracy   Early Warning
Global Financial Crisis            2008   Ring 1 + Ring 4   88.2%      4 weeks
Euro Sovereign Debt Crisis         2011   Ring 5 + Ring 1   87.3%      6 weeks
COVID-19 Market Shock              2020   Ring 1 + Ring 2   78.4%      2 weeks
Rate Shock & SWIFT Weaponization   2022   Ring 1 + Ring 5   91.2%      5 weeks

Average accuracy: 85.6% · Best early warning: 6 weeks · False positive rate: 12.4%

๐Ÿ›ก๏ธ Ethical Boundaries & Limitations

No Trading Recommendations

GMIIE's core 5-ring engine analyzes infrastructure, not markets. The Oracle layer generates directional macro-asset forecasts but does not produce buy/sell signals, position sizes, or trading strategies. All forecasts carry explicit confidence intervals and should be interpreted as probabilistic estimates, not actionable trade instructions.

Probabilistic, Not Deterministic

All predictions carry explicit probability scores. A deployment probability of 0.72 implies a 28% chance that the deployment does not materialize. Users should interpret outputs as informed estimates, not certainties.

Source Transparency

Every data point traces back to identified public sources: central bank publications, BIS working papers, regulatory filings, SWIFT statistics. No anonymous or unverifiable sources are used.

Analyst-in-the-Loop

Automated outputs pass through human analyst review before influencing high-confidence assessments. The Analyst Review Layer and confidence normalization system ensure that the engine is supervised, not autonomous.