RM System Calibration & Tuning
Network Planning & Scheduling › Revenue Management · 16 L4 steps · 5 phases · 5 decision gates · Updated 2026-03-18 19:08
📊 Process Flow Diagram (BPMN)

📋 L4 Process Steps
| Step | Step Name | Role / Swim Lane | System | Input | Output | KPI | Dec? | Exc? |
|---|---|---|---|---|---|---|---|---|
| **Phase 1** | | | | | | | | |
| 1.1 | Extract RM forecast accuracy metrics | Revenue Management Analyst | Amadeus Revenue Management (NRM) | Scheduled monthly calibration cycle or ad-hoc trigger from revenue variance alert | Forecast accuracy report: MAPE by market, booking class, and horizon | Mean Absolute Percentage Error (MAPE) ≤15% across all O&D markets (see the MAPE sketch below the table) | N | N |
| 1.2 | Identify underperforming O&D markets | Senior Revenue Analyst | Tableau | Forecast accuracy report from step 1.1; actuals from AWS Redshift booking data warehouse | Ranked list of O&D markets with MAPE >15% or revenue variance >3% vs. plan | ≥90% of O&D markets within MAPE threshold within 30 days of calibration cycle | N | N |
| 1.3 | Assess booking curve deviations from expected pickup | Revenue Optimization Analyst | Amadeus Revenue Management (NRM) | Ranked O&D market list; historical booking curve profiles from NRM | Deviation analysis report flagging systematic early/late booking shifts and class-mix anomalies | Booking curve R² ≥0.92 across flagged markets (see the R² sketch below the table); decision to proceed with recalibration if ≥3 markets fail threshold | Y | N |
| **Phase 2** | | | | | | | | |
| 2.1 | Extract historical booking and ticketing data | Data Engineer | AWS Redshift | Calibration scope (markets, date range, booking classes) from deviation analysis report | Structured training dataset: O&D bookings, fares paid, cancellation rates, and no-show rates over a 24-month rolling window | Dataset completeness ≥98% of scheduled departures in scope; extract runtime <2 hours | N | N |
| 2.2 | Validate input data quality and completeness | Data Quality Analyst | AWS S3 | Raw extract from AWS Redshift; data quality ruleset stored in S3 | Data quality scorecard with pass/fail per validation rule; cleansed dataset or rejection notice | Data quality score ≥95% (null rate <2%, duplicate bookings <0.5%, fare outliers <1%); see the scorecard sketch below the table | Y | Y |
| 2.3 | Load competitive fare and capacity benchmarks | Revenue Management Analyst | ATPCO | Competitor fare filing data from ATPCO; OAG Schedule Analyser capacity data for target O&D markets | Competitive context dataset: competitor fare ladder, capacity by carrier, schedule frequency | Competitive data coverage ≥85% of target O&D markets; data freshness within 48 hours of filing date | N | N |
| **Phase 3** | | | | | | | | |
| 3.1 | Recalibrate demand forecast model coefficients | RM Systems Analyst | Amadeus Revenue Management (NRM) | Cleansed 24-month historical dataset; competitive context dataset; current NRM model parameter set | Updated demand model coefficients (seasonality indices, trend factors, day-of-week multipliers) staged in NRM test partition | Post-recalibration MAPE improvement ≥10% vs. baseline on hold-out validation set (see the coefficient sketch below the table) | N | N |
| 3.2 | Update price elasticity and WTP model curves | Revenue Optimization Analyst | Amadeus Revenue Management (NRM) | Updated demand model coefficients; ATPCO competitive fare ladder; passenger segment willingness-to-pay data | Revised price elasticity curves and class-specific bid price adjustments per O&D market cluster | Bid price accuracy: simulated revenue within ±2% of optimal on hold-out markets; elasticity curve R² ≥0.88 (see the elasticity sketch below the table) | N | N |
| 3.3 | Tune overbooking safety factors by route type | Revenue Management Analyst | Amadeus Revenue Management (NRM) | Updated elasticity curves; historical no-show and cancellation rates by route category (short-haul leisure, long-haul corporate, thin route) | Revised overbooking multipliers per route cluster; updated denied boarding risk scores | Denied boarding rate ≤0.05 per 1,000 passengers; seat spoilage rate ≤2% of available seats (see the overbooking sketch below the table) | Y | Y |
| **Phase 4** | | | | | | | | |
| 4.1 | Execute shadow-mode simulation on hold-out period | RM Systems Analyst | Amadeus Revenue Management (NRM) | Updated NRM parameter set (demand model + elasticity curves + overbooking factors); hold-out historical period (most recent 90 days not used in training) | Shadow revenue report: simulated revenue and load factor for each O&D flight in hold-out period | Shadow simulation runtime ≤8 hours; simulated revenue variance vs. actuals ≤2% | N | N |
| 4.2 | Evaluate shadow revenue vs. actual revenue | Revenue Optimization Manager | Tableau | Shadow revenue report; actual revenue from AWS Redshift for the same hold-out period | Calibration validation dashboard: revenue lift by market cluster, load factor delta, booking class mix comparison | Simulated revenue uplift ≥1.0% vs. actuals across all market clusters; no individual cluster showing revenue degradation >0.5% | N | N |
| 4.3 | Run live A/B test on selected pilot markets | Revenue Management Analyst | Amadeus Revenue Management (NRM) | Validated calibration parameter set; pilot market selection (10–15 O&D pairs representative of route clusters) | A/B test results: revenue per available seat kilometre (RASK) and load factor for treatment vs. control group over 4-week period | A/B revenue uplift ≥1.5% in pilot markets over 4-week test window at 95% statistical confidence (see the A/B test sketch below the table) | Y | Y |
| **Phase 5** | | | | | | | | |
| 5.1 | Stage calibrated parameters in UAT environment | RM Systems Engineer | Amadeus Revenue Management (NRM) | Approved calibration parameter set from A/B test; UAT deployment checklist | UAT environment loaded with new parameters; regression test results confirming no unintended impacts to adjacent processes (pricing, inventory, interline) | UAT regression test pass rate 100% on critical path; deployment staging completed within 1 business day | N | Y |
| 5.2 | Present calibration results for RM leadership sign-off | Head of Revenue Management | Microsoft Azure Synapse | Calibration validation dashboard; A/B test results; UAT regression summary; estimated annual revenue impact | Signed approval record or change request with required amendments; go/no-go decision for production deployment | Sign-off decision turnaround ≤2 business days; zero production deployments without documented approval | Y | N |
| 5.3 | Deploy calibrated parameters to production NRM | RM Systems Engineer | Amadeus Revenue Management (NRM) | Signed approval record; staged parameter set from UAT; deployment runbook | Production NRM updated with new demand model coefficients, elasticity curves, and overbooking factors; deployment confirmation log | Production deployment completed within 4-hour maintenance window; zero unplanned system downtime during deployment | N | Y |
| 5.4 | Monitor live KPIs post-deployment | Revenue Management Analyst | Tableau | Live booking data from Amadeus NRM; post-deployment KPI dashboard; baseline metrics from pre-calibration period | 30-day post-deployment performance report; rollback recommendation if KPIs deteriorate beyond threshold | RASK improvement ≥1.0% vs. pre-calibration baseline within 30 days; no market showing load factor decline >2 percentage points (see the monitoring sketch below the table) | N | Y |
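The KPI thresholds above lend themselves to short worked examples. The Python sketches that follow are illustrative only: column names, sample values, and helper functions are assumptions made for the sketch, not the NRM export schema or the airline's actual methodology. First, the forecast-accuracy KPIs in steps 1.1 and 1.2 (MAPE by O&D market, booking class, and horizon, with a 15% flagging threshold):

```python
import pandas as pd

def mape_by_group(df: pd.DataFrame, group_cols: list[str]) -> pd.Series:
    """Mean Absolute Percentage Error (%) per group, skipping zero-actual rows."""
    valid = df[df["actual_bookings"] != 0].copy()
    valid["ape"] = (valid["forecast_bookings"] - valid["actual_bookings"]).abs() / valid["actual_bookings"]
    return valid.groupby(group_cols)["ape"].mean() * 100.0

# Toy rows; in practice this is the NRM forecast-accuracy extract (step 1.1 input).
extract = pd.DataFrame({
    "od_market":         ["LHR-JFK", "LHR-JFK", "CDG-SIN", "CDG-SIN"],
    "booking_class":     ["Y", "J", "Y", "J"],
    "horizon_days":      [28, 28, 7, 7],
    "forecast_bookings": [130, 14, 80, 9],
    "actual_bookings":   [100, 15, 95, 10],
})

# Step 1.1: MAPE by market, booking class, and horizon.
accuracy = mape_by_group(extract, ["od_market", "booking_class", "horizon_days"])

# Step 1.2: rank the O&D markets breaching the 15% threshold.
market_mape = mape_by_group(extract, ["od_market"])
flagged = market_mape[market_mape > 15.0].sort_values(ascending=False)
print(flagged)
```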
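Step 1.3 compares observed cumulative pickup against the expected booking-curve profile and flags markets with R² below 0.92. The exact goodness-of-fit definition used in NRM is not specified here; a plain coefficient-of-determination sketch, assuming both curves are sampled at the same days-to-departure points:

```python
import numpy as np

def booking_curve_r2(observed: np.ndarray, expected: np.ndarray) -> float:
    """R² of the observed cumulative pickup against the expected profile."""
    ss_res = np.sum((observed - expected) ** 2)
    ss_tot = np.sum((observed - observed.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# Cumulative bookings at 90/60/30/14/7/0 days to departure (illustrative values).
expected = np.array([20.0, 45.0, 80.0, 110.0, 135.0, 150.0])
observed = np.array([15.0, 38.0, 70.0, 105.0, 138.0, 152.0])

r2 = booking_curve_r2(observed, expected)
print(f"R² = {r2:.3f}; fails 0.92 threshold: {r2 < 0.92}")
```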
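Step 2.2 turns the S3 ruleset into a pass/fail scorecard. A sketch of the three thresholds named in the KPI column; the `booking_ref` and `fare_paid` column names and the 3×IQR fence for fare outliers are assumptions, since the real ruleset lives in S3:

```python
import pandas as pd

def quality_scorecard(df: pd.DataFrame) -> dict:
    """Pass/fail per validation rule from step 2.2's KPI thresholds."""
    null_rate = df.isna().mean().mean()                       # share of null cells
    dup_rate = df.duplicated(subset=["booking_ref"]).mean()   # repeated booking references
    fare = df["fare_paid"].dropna()
    q1, q3 = fare.quantile(0.25), fare.quantile(0.75)
    iqr = q3 - q1
    outlier_rate = ((fare < q1 - 3 * iqr) | (fare > q3 + 3 * iqr)).mean()
    checks = {
        "null_rate_below_2pct":     null_rate < 0.02,
        "duplicates_below_0_5pct":  dup_rate < 0.005,
        "fare_outliers_below_1pct": outlier_rate < 0.01,
    }
    checks["dataset_accepted"] = all(checks.values())
    return checks
```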
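Step 3.1 runs inside NRM and its estimation method is proprietary; the sketch below only illustrates what "seasonality index" and "day-of-week multiplier" mean, as each bucket's mean bookings relative to the overall mean of the cleansed history. Treat it as a conceptual stand-in, not the NRM algorithm:

```python
import pandas as pd

# Illustrative history; the real input is the cleansed 24-month extract from steps 2.1 and 2.2.
history = pd.DataFrame({
    "departure_date": pd.to_datetime(
        ["2025-01-06", "2025-01-10", "2025-07-07", "2025-07-11", "2025-12-22"]),
    "bookings": [110, 150, 180, 210, 240],
})

overall = history["bookings"].mean()
seasonality_index = history.groupby(history["departure_date"].dt.month)["bookings"].mean() / overall
day_of_week_multiplier = history.groupby(history["departure_date"].dt.dayofweek)["bookings"].mean() / overall

print(seasonality_index.round(2))
print(day_of_week_multiplier.round(2))
```

The ≥10% MAPE-improvement KPI can then be checked with the same `mape_by_group` helper from the step 1.1 sketch, comparing baseline and recalibrated forecasts on the hold-out set.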
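Step 3.2's elasticity curves can be illustrated with a constant-elasticity (log-log) fit, where the slope is the price elasticity and the KPI asks for curve R² ≥0.88. The fare and demand points are invented, and a single log-log regression is a simplification of the WTP modelling done in NRM:

```python
import numpy as np

# Invented fare ladder and observed demand for one O&D market cluster.
fares  = np.array([120.0, 150.0, 180.0, 220.0, 260.0, 320.0])
demand = np.array([310.0, 245.0, 205.0, 160.0, 132.0, 101.0])

# Constant-elasticity model: ln(demand) = a + b * ln(fare); b is the price elasticity.
b, a = np.polyfit(np.log(fares), np.log(demand), deg=1)

fitted = a + b * np.log(fares)
ss_res = np.sum((np.log(demand) - fitted) ** 2)
ss_tot = np.sum((np.log(demand) - np.log(demand).mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot

print(f"elasticity = {b:.2f}, curve R² = {r2:.3f} (KPI: ≥ 0.88)")
```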
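Step 3.3's overbooking multipliers can be sketched with a simple binomial no-show model: raise the authorization level until the expected denied boardings per 1,000 booked passengers would exceed the 0.05 KPI. Real tuning also weighs spoilage cost, group bookings, and re-accommodation options, so this is only an illustration:

```python
import numpy as np
from scipy.stats import binom

def overbooking_multiplier(capacity: int, no_show_rate: float,
                           db_per_1000_target: float = 0.05) -> float:
    """Largest AU / capacity ratio keeping expected denied boardings per
    1,000 booked passengers within the step 3.3 KPI (booked ≈ boarded here)."""
    show_rate = 1.0 - no_show_rate
    au = capacity
    while True:
        n = au + 1
        shows = np.arange(capacity + 1, n + 1)          # show counts exceeding physical seats
        expected_db = np.sum((shows - capacity) * binom.pmf(shows, n, show_rate))
        if expected_db / n * 1000.0 > db_per_1000_target:
            return au / capacity
        au = n

# Example: 180-seat short-haul leisure aircraft with an 8% historical no-show rate.
print(f"multiplier ≈ {overbooking_multiplier(180, 0.08):.3f}")
```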
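Step 4.3 requires a ≥1.5% RASK uplift at 95% confidence over the 4-week pilot. One plausible (but assumed) evaluation is a one-sided Welch t-test on daily RASK for treatment versus control markets; the synthetic numbers below stand in for the pilot-market extract:

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(7)

# Synthetic daily RASK (cents per ASK) for the 4-week window, 28 days per group.
control   = rng.normal(loc=6.20, scale=0.25, size=28)
treatment = rng.normal(loc=6.32, scale=0.25, size=28)

uplift_pct = (treatment.mean() / control.mean() - 1.0) * 100.0
t_stat, p_two_sided = ttest_ind(treatment, control, equal_var=False)  # Welch's t-test
p_one_sided = p_two_sided / 2.0 if t_stat > 0 else 1.0 - p_two_sided / 2.0

passes_gate = uplift_pct >= 1.5 and p_one_sided < 0.05
print(f"uplift = {uplift_pct:.2f}%, one-sided p = {p_one_sided:.4f}, gate passed: {passes_gate}")
```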
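Finally, step 5.4's rollback guardrails reduce to two comparisons against the pre-calibration baseline: network RASK improvement below +1.0%, or any market losing more than 2 percentage points of load factor. A sketch with assumed column names (`od_market`, `rask`, and `load_factor` as a fraction):

```python
import pandas as pd

def rollback_recommended(post: pd.DataFrame, baseline: pd.DataFrame) -> bool:
    """True when either step 5.4 guardrail is breached in the 30-day window."""
    merged = post.merge(baseline, on="od_market", suffixes=("_post", "_base"))
    rask_improvement = merged["rask_post"].mean() / merged["rask_base"].mean() - 1.0
    lf_decline_pp = (merged["load_factor_base"] - merged["load_factor_post"]) * 100.0
    return rask_improvement < 0.01 or bool((lf_decline_pp > 2.0).any())
```

The same check can feed the 30-day performance report and, when it returns True, the rollback recommendation named in the Output column.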