  • Anatomy of a Delay

    Performance · OTP

    Day of week, month, and carrier choice explain most of the variation in on-time performance. Five years of DOT data reveals exactly where the risk concentrates — and it's reproducible with a single 60-month API call.

    performance/breakdown · 60-month window · dayofweek · carrier · distancegroup

    The Question

    Flight delay is often treated as random noise, but the data says otherwise. Certain days of week are structurally worse. Certain months spike reliably. And carrier selection matters more than most travelers realize.

    This analysis uses the performance/breakdown endpoint’s full 60-month window — the maximum allowed — to build a stable baseline free from single-event distortion.

    Five Years of Performance Data in One Query

    The performance family supports up to 60 months per request. A single call returns a full five-year baseline — no stitching, no normalization — making it the most direct path to statistically stable delay benchmarks.

    Time range:
    start_year=2021&start_month=1&end_year=2025&end_month=12 (60 months)
    Breakdown:
    by=dayofweek (1=Mon … 7=Sun)
    Metrics:
    on_time_pct, avg_arr_delay, avg_dep_delay, flights

    The dayofweek dimension uses DOT encoding: 1=Monday, 2=Tuesday … 7=Sunday. No limit is needed since there are only 7 values. Results are per month × day-of-week, which we then aggregate to a single per-DOW average weighted by flight counts.
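    That aggregation step can be sketched in a few lines. The field names below mirror the query parameters used in this article, but the row shape is an illustrative assumption, not the API's documented response schema:

    ```python
    from collections import defaultdict

    def weighted_on_time_by_dow(rows):
        """Collapse per-month x day-of-week rows into one flight-weighted
        on-time rate per day of week (DOT encoding: 1=Mon ... 7=Sun)."""
        flights = defaultdict(int)
        weighted = defaultdict(float)
        for row in rows:
            dow = row["dayofweek"]
            flights[dow] += row["flights"]
            weighted[dow] += row["on_time_pct"] * row["flights"]
        return {dow: weighted[dow] / flights[dow] for dow in sorted(flights)}

    # Illustrative rows only, not real DOT figures: two months of Tuesday data.
    sample = [
        {"dayofweek": 2, "on_time_pct": 84.0, "flights": 100_000},
        {"dayofweek": 2, "on_time_pct": 80.0, "flights": 300_000},
    ]
    print(weighted_on_time_by_dow(sample))  # {2: 81.0}
    ```

    Weighting by flight counts matters: a simple mean of monthly percentages would let a thin holiday month pull the average as hard as a peak summer month.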

    Day of Week: Where Risk Lives

    The chart below shows the flight-weighted average on-time rate per day of week, aggregated across all 60 months. Hover each bar to see arrival and departure delay minutes.

    On-Time Rate by Day of Week

    Flight-weighted average across all U.S. carriers · Jan 2021 – Dec 2025 (60 months)

    Data: AirStream API → /performance/breakdown?by=dayofweek&metrics=on_time_pct,avg_arr_delay,flights

    The pattern is consistent: Tuesday and Wednesday are statistically the most reliable days to fly, while Friday, Sunday, and Monday carry the worst on-time rates. The Friday effect is well-understood (high load factor + cascading upstream delays). The Sunday effect is less intuitive — it likely reflects post-weekend schedule compression and reduced maintenance windows.

    The Monday effect is particularly actionable: Monday mornings inherit whatever disruptions the weekend left unresolved, on top of peak business-travel demand.

    Tuesday and Wednesday average 82–83% on-time over 60 months. Friday, Sunday, and Monday average 78–80%. That 3–5 percentage point gap represents millions of disrupted passengers per year.

    The 60-month window reveals a clear seasonal structure. Summer (June–August) and winter holidays (December–January) are the two high-delay windows, with spring as the most reliable period.

    Breakdown:
    by=distancegroup (11 distance bands from <250 mi to ≥3,000 mi)
    Aggregation:
    Sum across all distance bands per month to get a monthly system-wide figure

    The distancegroup dimension lets you compare short-haul vs long-haul reliability. This chart aggregates across all bands, but the API data also supports breaking out each distance tier independently.

    Monthly Delay Pressure — 60-Month Trend

    Flight-weighted on-time rate and average arrival delay · all U.S. carriers · Jan 2021 – Dec 2025

    Data: AirStream API → /performance/breakdown?by=distancegroup&metrics=on_time_pct,avg_arr_delay,flights · Aggregated across all distance bands

    The shaded band shows ±1 standard deviation of monthly on-time rate. Summer months consistently fall in the lower half of the distribution. Summer 2022 was particularly bad: post-pandemic staffing shortages collided with widespread traffic-management delays driven by convective weather, and the same year closed with Southwest's December operational meltdown.
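    The band itself is a simple computation. A minimal sketch, assuming the monthly flight-weighted on-time rates have already been aggregated:

    ```python
    import statistics

    def otp_band(monthly_rates):
        """Mean +/- 1 population standard deviation of monthly on-time
        rates: the shaded band drawn around the trend line."""
        mu = statistics.fmean(monthly_rates)
        sigma = statistics.pstdev(monthly_rates)
        return mu - sigma, mu + sigma

    # Illustrative monthly rates, not real data.
    lo, hi = otp_band([80.0, 82.0, 78.0, 84.0])
    ```

    Months falling below `lo` are the ones worth flagging as structurally risky rather than one-off bad luck.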

    March–May and October are the system's most reliable windows. June–August and December–January are predictably worse. This seasonality repeats every year in the data with very little variation in the rank ordering.

    Carrier Reliability: Five-Year Scorecard

    Carrier selection is the lever travelers control most directly. The scatter plot below positions each major carrier on the reliability landscape: on-time rate (y-axis) vs cancellation rate (x-axis), with bubble size proportional to total flights operated over 60 months.

    Breakdown:
    by=carrier (DOT carrier code)
    limit:
    15 per period (top 15 carriers by flights each month)

    Each month returns up to 15 carriers. Aggregating across 60 months builds a reliable multi-year profile. Carriers that only appear seasonally may have fewer data points — minimum flight thresholds are applied in the derived dataset to avoid noisy outliers.
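    A sketch of that carrier-level aggregation, with a hypothetical threshold value and illustrative field names (the real derived dataset may choose its floor differently):

    ```python
    from collections import defaultdict

    MIN_FLIGHTS = 60_000  # hypothetical floor: ~1,000 flights/month over 60 months

    def carrier_scorecard(rows, min_flights=MIN_FLIGHTS):
        """Flight-weighted multi-year on-time and cancellation rates per
        carrier, dropping thin seasonal operators below the threshold."""
        totals = defaultdict(lambda: {"flights": 0, "ot": 0.0, "cx": 0.0})
        for r in rows:
            t = totals[r["carrier"]]
            t["flights"] += r["flights"]
            t["ot"] += r["on_time_pct"] * r["flights"]
            t["cx"] += r["cancel_pct"] * r["flights"]
        return {
            carrier: {
                "flights": t["flights"],
                "on_time_pct": t["ot"] / t["flights"],
                "cancel_pct": t["cx"] / t["flights"],
            }
            for carrier, t in totals.items()
            if t["flights"] >= min_flights
        }

    # Illustrative rows only: "ZZ" falls below the threshold and is dropped.
    sample = [
        {"carrier": "AA", "on_time_pct": 80.0, "cancel_pct": 2.0, "flights": 100_000},
        {"carrier": "AA", "on_time_pct": 90.0, "cancel_pct": 1.0, "flights": 100_000},
        {"carrier": "ZZ", "on_time_pct": 95.0, "cancel_pct": 0.5, "flights": 10_000},
    ]
    ```

    The threshold is what keeps a charter operator with one spectacular month from landing in the scorecard's top-left corner.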

    Carrier Reliability Scorecard — On-Time Rate vs Cancellation Rate

    5-year flight-weighted averages · Jan 2021 – Dec 2025 · bubble size = total flights operated

    Data: AirStream API → /performance/breakdown?by=carrier&metrics=on_time_pct,cancel_pct,avg_arr_delay,flights

    The chart rewards careful reading. Delta and Alaska have historically occupied the top-left of the reliability spectrum — high on-time rates with moderate cancellations. Southwest’s 2022 operational meltdown shifted its 5-year average noticeably; its post-recovery trajectory is a separate story. Ultra-low-cost carriers (Spirit, Frontier, Allegiant) cluster at the lower end — not always because they’re inherently less reliable, but because they operate with thinner maintenance buffers and tighter schedules, including shorter ground turns (less time between an aircraft’s arrival and its next departure).

    Carrier selection explains more of your actual delay risk than time of day or season. On the same route and day, Delta's 5-year average on-time rate is 8–12 percentage points higher than the lowest performers — a difference of roughly 1 extra delay per 10 flights.
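    That closing figure is easy to verify, taking the midpoint of the cited spread as an illustrative value:

    ```python
    # A gap of N percentage points in on-time rate means N extra delayed
    # flights per 100 flown. gap_pp = 10 is the hypothetical mid-range
    # of the 8-12 pp spread cited above.
    gap_pp = 10
    extra_delays_per_10_flights = gap_pp / 100 * 10
    print(extra_delays_per_10_flights)  # 1.0
    ```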