Methodology & Scoring
How GMIIE computes scores, detects patterns, and validates its own reliability
Analytical Philosophy
GMIIE is designed around a principle of layered skepticism. Every data point passes through multiple validation stages before influencing any composite score. The system assumes all data sources may be biased, incomplete, or stale, and adjusts its confidence accordingly.
The 5-ring architecture deliberately separates quantitative data (Ring 1) from qualitative signals (Ring 2), deployment reality (Ring 3) from structural analysis (Ring 4), and jurisdiction-level events from geopolitical dynamics (Ring 5). This separation prevents narrative contamination: a policymaker's optimistic speech cannot inflate the system's assessment unless deployment data (Ring 3) corroborates it.
Scoring Pipeline
Ring Score Computation
Each ring computes a raw score from 0.0 to 1.0 based on its domain-specific inputs. Ring 1 aggregates quantitative financial signals. Ring 2 measures language drift velocity via NLP. Ring 3 tracks deployment stage progression. Ring 4 uses graph-theoretic fragility metrics. Ring 5 tallies geopolitical fracture events weighted by severity.
Confidence Dampening
Raw ring scores are adjusted by a confidence weight (0.0 to 1.0) before being used in downstream analysis. This prevents low-confidence data from inflating system-wide assessments. The dampened score = raw score × confidence weight. If Ring 2 has a raw score of 0.64 but a confidence weight of only 0.68, the dampened score is 0.44.
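Under the stated rule, dampening is a single multiplication; a minimal sketch reproducing the Ring 2 example above (function name is illustrative):

```python
def dampen(raw_score: float, confidence: float) -> float:
    """Scale a raw ring score (0.0-1.0) by its confidence weight (0.0-1.0)."""
    return round(raw_score * confidence, 2)

# Ring 2 example from the text: raw 0.64 at confidence 0.68
print(dampen(0.64, 0.68))  # 0.44
```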
Confidence Normalization
The confidence weight itself is computed from five factors using a weighted geometric mean: data density (25%), source diversity (20%), historical validation (20%), recency decay (20%), and analyst confirmation (15%). This ensures that confidence is grounded in data quality, not arbitrary calibration.
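The weighted geometric mean above can be sketched as follows; the factor keys are paraphrased from the text, and the exact field names in the system are assumptions:

```python
import math

# Factor weights from the text (sum to 1.0)
WEIGHTS = {
    "data_density": 0.25,
    "source_diversity": 0.20,
    "historical_validation": 0.20,
    "recency_decay": 0.20,
    "analyst_confirmation": 0.15,
}

def confidence_weight(factors: dict[str, float]) -> float:
    """Weighted geometric mean: prod(x_i ** w_i). Unlike an arithmetic
    mean, a single near-zero factor drags the whole weight toward zero."""
    return math.prod(factors[k] ** w for k, w in WEIGHTS.items())
```

Because the weights sum to 1.0, uniform inputs pass through unchanged: if every factor is 0.8, the confidence weight is 0.8.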
NIG Score
NIG = Narrative Acceleration Index โ Infrastructure Deployment Index. Positive values indicate rhetoric is outpacing deployment ('policy theater'). Negative values indicate deployment is outpacing rhetoric ('silent rollout'). The rhetoric-reality gap metric captures this on a per-initiative basis.
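The gap itself is a subtraction; a sketch with illustrative classification thresholds (the ±0.15 cutoffs are assumptions, not documented values):

```python
def nig(narrative_acceleration: float, infrastructure_deployment: float) -> tuple[float, str]:
    """NIG = Narrative Acceleration Index - Infrastructure Deployment Index."""
    gap = narrative_acceleration - infrastructure_deployment
    if gap > 0.15:          # rhetoric well ahead of deployment
        label = "policy_theater"
    elif gap < -0.15:       # deployment well ahead of rhetoric
        label = "silent_rollout"
    else:
        label = "aligned"
    return gap, label
```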
Cross-Ring Conflict Detection
The system scans all 49 pairwise ring-jurisdiction combinations for anomalous patterns. When multiple rings simultaneously elevate in a single jurisdiction (e.g., Ring 1 stress + Ring 4 fragility + Ring 5 fracture in the EU), it may indicate a crisis precursor. These are flagged with pattern labels like 'crisis_precursor', 'policy_theater', 'silent_rollout'.
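A minimal sketch of the crisis-precursor scan, assuming dampened scores are keyed by (jurisdiction, ring) and using an illustrative 0.7 elevation threshold; the production keying and thresholds are not specified here:

```python
def crisis_precursors(scores: dict[tuple[str, int], float], threshold: float = 0.7):
    """Flag jurisdictions where Rings 1, 4, and 5 are simultaneously elevated,
    matching the EU example in the text (stress + fragility + fracture)."""
    flags = []
    jurisdictions = {j for j, _ in scores}
    for j in sorted(jurisdictions):
        if all(scores.get((j, r), 0.0) >= threshold for r in (1, 4, 5)):
            flags.append((j, "crisis_precursor"))
    return flags
```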
12 Impact Formulas
These formulas quantify how digital monetary infrastructure changes affect society, sovereignty, privacy, and systemic stability. Each is computed per jurisdiction.
Financial Inclusion Index
Weighted sum of account access, mobile reach, and digital literacy across the population. Measures whether digital currency deployment is widening or narrowing financial access.
Surveillance Capacity Score
Geometric mean of data scope, monitoring capability, and legal authority. Higher scores indicate greater state surveillance capacity enabled by the digital infrastructure.
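As an example of the formula style used throughout this list, the geometric mean form can be sketched as follows (component names follow the text; all inputs are assumed to lie in [0, 1]):

```python
def surveillance_capacity(data_scope: float, monitoring: float, legal_authority: float) -> float:
    """Geometric mean of the three components: the cube root of their product.
    Any single weak component pulls the composite down sharply."""
    return (data_scope * monitoring * legal_authority) ** (1 / 3)
```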
Monetary Sovereignty Index
Measures a jurisdiction's control over its own monetary system. Declines when external dependencies (like reliance on foreign settlement rails) increase.
Cross-Border Friction Coefficient
Average of regulatory, technical, and cost barriers to cross-border value transfer. Lower friction means money moves more easily between jurisdictions.
Digital Divide Risk Score
Captures the risk that digital currency deployment leaves behind populations without digital access. Accounts for vulnerable population size and mitigation measures.
Systemic Fragility Index
Product of cascade risk, concentration in key nodes, lack of redundancy, and system homogeneity. Identifies how vulnerable the infrastructure is to cascading failures.
Privacy Erosion Index
Ratio of surveillance scope, data retention, and sharing practices against legal protections. Higher values indicate greater erosion of financial privacy.
Institutional Trust Score
Fourth-root geometric mean of transparency, accountability, historical track record, and public sentiment. Measures whether citizens trust the institutions deploying digital infrastructure.
Innovation Acceleration Rate
Rate of new deployments over time, scaled by adoption velocity and interoperability. Captures how quickly the jurisdiction is advancing its digital infrastructure.
Net Societal Impact Composite
Master composite that weights all impact scores by their direction-adjusted significance. Positive means net benefit to society; negative means net harm. Near zero means impacts are balanced.
Cascade Probability
Conditional probability chain for cascading failures. If node A fails, what is the probability it cascades to B, then to C? Used in scenario modeling.
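The chain reduces to a product of conditional probabilities; a minimal sketch:

```python
def cascade_probability(chain: list[float]) -> float:
    """Probability that every link in a conditional failure chain fires:
    P(A fails) * P(B | A) * P(C | B) * ..."""
    p = 1.0
    for link in chain:
        p *= link
    return p

# e.g. A fails with p=0.3, cascades to B with p=0.5, then to C with p=0.4
print(round(cascade_probability([0.3, 0.5, 0.4]), 4))  # 0.06
```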
Global Impact Propagation
How far and how fast a local event propagates through the global monetary network. Based on trade weight, geographic distance, and connectivity.
Oracle: Prediction Factory
The Oracle extends GMIIE from structural analysis into probabilistic macro-asset forecasting. It ingests market data, engineers features that blend traditional technicals with GMIIE's own ring scores, detects volatility regimes via GARCH, and generates directional forecasts through an XGBoost ensemble, all published with cryptographic integrity guarantees.
Market Data Ingestion
Pulls OHLCV price data, order-book snapshots, and macro indicators for BTC, ETH, DXY, GOLD, and UST10Y via ccxt (exchange APIs) and yfinance (Yahoo Finance). Data is deduplicated, gap-filled, and stored as AssetPrice rows with a 1-hour resolution floor.
Feature Engineering
Computes ~30 technical and structural features per asset: multi-window returns (1 h to 30 d), rolling volatility, Bollinger width, RSI, MACD, VWAP deviation, volume z-scores, cross-asset correlations, and GMIIE ring scores injected as exogenous regressors. Features are z-normalized before model input.
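The final z-normalization step can be sketched per feature column as follows (using the sample standard deviation, which is an assumption; the document does not specify sample vs. population):

```python
import statistics

def z_normalize(values: list[float]) -> list[float]:
    """Standard z-score per feature column: (x - mean) / stdev."""
    mu = statistics.fmean(values)
    sigma = statistics.stdev(values)  # sample stdev; requires >= 2 values
    return [(v - mu) / sigma for v in values]
```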
Regime Detection (GARCH)
An ARCH/GARCH(1,1) model estimates conditional volatility for each asset. The fitted volatility series is segmented into three regimes: Risk-Off (σ > 1.5×median), Neutral, and Risk-On (σ < 0.7×median). Regime state feeds the forecast as a categorical feature and gates position-sizing guidance.
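Assuming a GARCH(1,1) fit (e.g. via a package such as `arch`) has already produced the conditional volatility series, the regime segmentation reduces to median thresholding; a sketch:

```python
import statistics

def classify_regimes(cond_vol: list[float]) -> list[str]:
    """Label each step by comparing conditional volatility to its median:
    Risk-Off above 1.5x the median, Risk-On below 0.7x, Neutral between."""
    med = statistics.median(cond_vol)
    labels = []
    for sigma in cond_vol:
        if sigma > 1.5 * med:
            labels.append("risk_off")
        elif sigma < 0.7 * med:
            labels.append("risk_on")
        else:
            labels.append("neutral")
    return labels
```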
Forecast Generation (XGBoost)
A gradient-boosted tree ensemble (XGBoost) produces directional and magnitude forecasts at 24-hour and 7-day horizons. Training uses a rolling 180-day window with walk-forward validation. Output includes point prediction, direction probability, and a confidence interval derived from quantile regression (10th/90th percentile).
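The rolling walk-forward validation above can be sketched as an index generator; the step rule (advance by the test size) and example window sizes are illustrative assumptions, not production configuration:

```python
def walk_forward_windows(n_samples: int, train_size: int, test_size: int):
    """Yield (train_indices, test_indices) for a rolling walk-forward split:
    fit on [start, start+train_size), validate on the next test_size points,
    then slide the whole window forward."""
    start = 0
    while start + train_size + test_size <= n_samples:
        train = range(start, start + train_size)
        test = range(start + train_size, start + train_size + test_size)
        yield train, test
        start += test_size
```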
Publication & Audit
Forecasts are bundled into an OraclePublication with a SHA-256 integrity hash, version ID, and wall-clock timestamp. Every publication is immutable once emitted. A parallel accuracy evaluator compares past forecasts against realized prices, computing direction accuracy, MAE, and RMSE, feeding back into model retraining decisions.
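One way to produce such an integrity hash, assuming the publication payload is JSON-serializable; canonical serialization with sorted keys is an assumption here, not a documented detail of OraclePublication:

```python
import hashlib
import json

def publication_hash(payload: dict) -> str:
    """Hash a canonical JSON form (sorted keys, no whitespace) with SHA-256.
    Any change to the payload changes the hash, making tampering detectable."""
    canonical = json.dumps(payload, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()
```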
Historical Backtesting
GMIIE validates its detection capabilities by replaying historical financial crises through the 5-ring engine and comparing the system's outputs against known outcomes. This process measures accuracy (did the system detect the right signals?) and lead time (how early did it detect them?).
| Crisis | Year | Dominant Ring | Accuracy | Early Warning |
|---|---|---|---|---|
| Global Financial Crisis | 2008 | Ring 1 + Ring 4 | 88.2% | 4 weeks |
| Euro Sovereign Debt Crisis | 2011 | Ring 5 + Ring 1 | 87.3% | 6 weeks |
| COVID-19 Market Shock | 2020 | Ring 1 + Ring 2 | 78.4% | 2 weeks |
| Rate Shock & SWIFT Weaponization | 2022 | Ring 1 + Ring 5 | 91.2% | 5 weeks |
Ethical Boundaries & Limitations
No Trading Recommendations
GMIIE's core 5-ring engine analyzes infrastructure, not markets. The Oracle layer generates directional macro-asset forecasts but does not produce buy/sell signals, position sizes, or trading strategies. All forecasts carry explicit confidence intervals and should be interpreted as probabilistic estimates, not actionable trade instructions.
Probabilistic, Not Deterministic
All predictions carry explicit probability scores. A 0.72 deployment probability means there is a 28% chance the system is wrong. Users should interpret outputs as informed estimates, not certainties.
Source Transparency
Every data point traces back to identified public sources: central bank publications, BIS working papers, regulatory filings, SWIFT statistics. No anonymous or unverifiable sources are used.
Analyst-in-the-Loop
Automated outputs pass through human analyst review before influencing high-confidence assessments. The Analyst Review Layer and confidence normalization system ensure that the engine is supervised, not autonomous.