
The Sequential Learning Group

A Diffusion Model for Regular Time Series Generation from Irregular Data with Completion and Masking

Research Paper | Gal Fadlon, Idan Arbiv, Nimrod Berman, Omri Azencot

A novel two-step framework for generating realistic time series from irregular data using Time Series Transformers and vision-based diffusion models, achieving 70% improvement in discriminative score and 85% reduction in computational cost.


Authors

Gal Fadlon*, Ben Gurion University of the Negev
Idan Arbiv*, Ben Gurion University of the Negev
Nimrod Berman, Ben Gurion University of the Negev
Omri Azencot, Ben Gurion University of the Negev

*equal contribution

Abstract

Generating realistic time series data is critical for applications in healthcare, finance, and science. However, irregular sampling and missing values present significant challenges. While prior methods address these irregularities, they often yield suboptimal results and incur high computational costs. Recent advances in regular time series generation, such as the diffusion-based ImagenTime model, demonstrate strong, fast, and scalable generative capabilities by transforming time series into image representations, making them a promising solution. However, extending ImagenTime to irregular sequences using simple masking introduces 'unnatural' neighborhoods, where missing values replaced by zeros disrupt the learning process. To overcome this, we propose a novel two-step framework: first, a Time Series Transformer completes irregular sequences, creating natural neighborhoods; second, a vision-based diffusion model with masking minimizes dependence on the completed values. This approach leverages the strengths of both completion and masking, enabling robust and efficient generation of realistic time series. Our method achieves state-of-the-art performance, achieving a relative improvement in discriminative score by 70% and in computational cost by 85%.

Key Contributions

  1. We introduce a novel generative model for irregularly-sampled time series, leveraging vision-based diffusion approaches to efficiently and effectively handle sequences ranging from short to long lengths.
  2. In contrast to existing methods that assume completed information is drawn from the data distribution, we treat it as a weak conditioning signal and directly optimize on the observed signal using a masking strategy.
  3. Our approach achieves state-of-the-art performance across multiple generative tasks, delivering an average improvement of 70% in discriminative benchmarks while reducing computational requirements by 85% relative to competing methods.

The Challenge of Irregular Time Series

Time series data is essential in fields such as healthcare, finance, and science, supporting critical tasks like forecasting trends, detecting anomalies, and analyzing patterns. Beyond direct analysis, generating synthetic time series has become increasingly valuable for creating realistic proxies of private data, testing systems under new scenarios, exploring 'what-if' questions, and balancing datasets for training machine learning models. The ability to generate realistic sequences enables deeper insights and robust applications across diverse domains. In practice, however, time series data is often irregular, with missing values and unevenly spaced measurements. These irregularities arise from limitations in data collection processes, such as sensor failures, inconsistent sampling, or interruptions in monitoring systems.

Limitations of Existing Approaches

The synthesis of regular time series from irregular ones is a fundamental challenge, yet existing approaches remain scarce, with notable examples being GT-GAN and KoVAE. Unfortunately, these methods suffer from several limitations. First, they rely on generative adversarial networks (GANs) and variational autoencoders (VAEs), which have recently been surpassed in performance by diffusion-based tools. Second, both GT-GAN and KoVAE use a computationally demanding preprocessing step based on neural controlled differential equations (NCDEs), rendering them impractical for long time series; KoVAE, for instance, requires approximately 6.5 times more training time than our approach. Third, these methods inherently assume that the data completed by the NCDE accurately reflects the true underlying distribution, which can introduce catastrophic errors when this assumption fails.

Our Two-Step Framework

To address these shortcomings, we base our approach on a recent diffusion model for time series, ImagenTime. This method maps time series data to images, enabling the use of powerful vision-based diffusion architectures. Leveraging a vision-based diffusion generator offers a significant advantage: regular time series can be generated from irregular ones using a straightforward masking mechanism. While this masking approach is simple and achieves strong results, it has a significant limitation: missing values in the time series are mapped to zeros in the image, producing 'unnatural' neighborhoods that mix valid and invalid information. To overcome this issue, we propose a two-step generation process. In the first step, we complete the irregular series using our adaptation of an efficient Time Series Transformer (TST), significantly reducing computational overhead and enabling the generation of long time series. In the second step, we apply the masking approach described above.
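To make the image mapping concrete, the sketch below illustrates one common series-to-image transform, a delay embedding, in which each column of the image holds a sliding window of the series. It is an illustration of the general idea only; the function name and the `window` and `hop` values are our own assumptions and need not match the exact transform used by ImagenTime.

```python
import numpy as np

def series_to_image(x, window=16, hop=1):
    """Map a univariate series (length T) to a 2D array via delay embedding.

    Each column holds a window of consecutive samples; sliding the window by
    `hop` produces the next column. `window` and `hop` are illustrative
    hyperparameters, not the paper's settings.
    """
    T = len(x)
    n_cols = 1 + (T - window) // hop
    img = np.stack([x[i * hop : i * hop + window] for i in range(n_cols)], axis=1)
    return img  # shape: (window, n_cols)

# Example: a 64-step sine wave becomes a 16x49 "image".
t = np.linspace(0, 4 * np.pi, 64)
img = series_to_image(np.sin(t), window=16, hop=1)
print(img.shape)  # (16, 49)
```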

Method Architecture

[Figure: Two-step framework architecture showing TST completion and vision diffusion process]

Figure: In the first step (top), we train a TST-based autoencoder, which we use during the second step (middle), where a vision diffusion model is trained with masking over non-active pixels. Inference (bottom) is done similarly to ImagenTime.

The Problem of Unnatural Image Neighborhoods

Unfortunately, the straightforward approach has a fundamental limitation: although non-active pixels are ignored during loss computation, they are still processed by the network. In practice, missing values are replaced with zeros, resulting in 'unnatural' pixel neighborhoods. Specifically, while zeros may occasionally occur in non-zero segments of a time series, their repeated presence is highly unlikely, leading to inconsistencies. In other words, masking is not applied at the architecture level, potentially hindering the effective learning of neural components. This can pose challenges for diffusion backbones, such as U-Nets with convolutional blocks, where the convolution kernels are not inherently masked and may inadvertently propagate errors from these artificial neighborhoods.
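The sketch below illustrates how such a straightforward masked objective might look in practice, assuming a denoiser that is called as `model(noisy, sigma)` (a hypothetical signature, not the paper's code). The zero-filled pixels never enter the loss, yet they still pass through the network and shape its convolutional features.

```python
import torch

def naive_masked_denoising_loss(model, x_img, obs_mask, sigma):
    """Straightforward masking baseline (illustrative sketch, not the paper's code).

    x_img:    (B, C, H, W) image-transformed series with NaNs at missing pixels
    obs_mask: (B, C, H, W) 1.0 where a pixel comes from an observed value, else 0.0
    sigma:    noise level for this denoising step
    """
    # Missing values are replaced with zeros, so the network still "sees"
    # artificial zero neighborhoods even though they are ignored in the loss.
    x_filled = torch.nan_to_num(x_img, nan=0.0)
    noise = torch.randn_like(x_filled) * sigma
    denoised = model(x_filled + noise, sigma)  # hypothetical denoiser API
    per_pixel = (denoised - x_filled) ** 2
    # Loss only over observed pixels: zeros never contribute to the objective,
    # but they do shape the convolutional features upstream.
    return (per_pixel * obs_mask).sum() / obs_mask.sum().clamp(min=1)
```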

Combining Completion and Masking

To create more natural pixel neighborhoods while remaining agnostic to the underlying architecture, we draw inspiration from the two-step process utilized in GT-GAN and KoVAE. Our approach adopts a two-step training scheme. First, we complete the missing values in the irregularly-sampled time series using TST, producing a regularly-sampled sequence. Next, we transform the completed time series into an image and apply denoising as in ImagenTime, with a key distinction: we apply the mask to the completed pixels during the loss computation. This novel combination of completion and masking addresses the two primary challenges of processing irregular sequences. On one hand, it creates natural neighborhoods, enabling convolutional kernels to learn effectively from values that closely align with the true data distribution. On the other hand, it ensures that the completed values are not fully relied upon by excluding them from the loss computation via the mask, striking a balance between utilizing and mitigating incomplete information.
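As a rough sketch of how the two steps might fit together in a single training step, assuming hypothetical `tst`, `denoiser`, and `to_image` callables (not the authors' actual interfaces):

```python
import torch

def two_step_loss(tst, denoiser, x_series, obs_mask_series, to_image, sigma):
    """One training step of the two-step scheme (illustrative sketch only).

    tst:             pretrained completion network that fills in missing steps
    denoiser:        vision diffusion backbone
    x_series:        (B, T, D) irregular series, zeros at missing entries
    obs_mask_series: (B, T, D) 1.0 at observed entries, 0.0 elsewhere
    to_image:        series -> image transform (e.g., delay embedding)
    """
    # Step 1: complete the irregular series so pixel neighborhoods look natural.
    with torch.no_grad():
        completed = tst(x_series, obs_mask_series)
    # Keep observed values; use TST output only where data is missing
    # (this merging is an illustrative choice).
    merged = obs_mask_series * x_series + (1 - obs_mask_series) * completed

    # Step 2: denoise in image space, but score the loss only on observed pixels,
    # so the completed values act as weak conditioning rather than targets.
    x_img, obs_mask_img = to_image(merged), to_image(obs_mask_series)
    noise = torch.randn_like(x_img) * sigma
    err = (denoiser(x_img + noise, sigma) - x_img) ** 2
    return (err * obs_mask_img).sum() / obs_mask_img.sum().clamp(min=1)
```

The key difference from the straightforward baseline above is that the network now sees TST-completed pixels instead of zeros, while the loss still weights only the originally observed entries.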

State-of-the-Art Performance

We conduct a comprehensive evaluation of our approach on standard irregular time series benchmarks, comparing it against state-of-the-art methods. Our model consistently demonstrates superior generative performance, effectively bridging the gap between regular and irregular settings. Furthermore, we extend the evaluation to medium-, long-, and ultra-long-length sequence generation, assessing performance across 12 datasets and 12 tasks. The results highlight the robustness and efficiency of our method, with consistent improvements over existing approaches: an average improvement of 70% in discriminative benchmarks and an 85% reduction in computational requirements relative to competing methods.

Computational Efficiency

Our approach demonstrates significant computational advantages over existing methods. Unlike GT-GAN and KoVAE, which rely on computationally demanding NCDE preprocessing, our TST-based completion is far more efficient: KoVAE requires approximately 6.5 times more training time than our approach, as shown in our training time analysis. The two-step framework enables effective modeling of long time series while making minimal assumptions about the pre-completed data, resulting in significantly improved generation performance with reduced computational overhead. This efficiency is particularly important for practical applications where computational resources are limited.

Results & Comparison

Method Overview

Our two-step framework addresses the challenge of irregular time series generation by combining Time Series Transformer completion with vision-based diffusion models.

Step 1: TST Completion
- Complete irregular sequences
- Create natural neighborhoods
- Efficient preprocessing

Step 2: Vision Diffusion
- Mask-based denoising
- Minimize dependence on completed values
- Robust generation

Training Time Comparison

Length | Model  | ETTh1 | ETTh2 | ETTm1 | ETTm2 | Weather | Electricity | Energy | Sine  | Mujoco
24     | GT-GAN | 7.44  | 7.44  | 7.44  | 7.44  | 7.44    | 7.44        | 7.44   | 7.44  | 2.17
24     | KoVAE  | 6.49  | 6.49  | 6.49  | 6.49  | 6.49    | 6.49        | 6.49   | 6.49  | 1.15
24     | Ours   | 1.28  | 1.28  | 1.28  | 1.28  | 1.28    | 1.28        | 1.28   | 1.28  | 0.60
96     | KoVAE  | 19.70 | 19.70 | 19.70 | 19.70 | 19.70   | 19.70       | 19.70  | 19.70 | -
96     | Ours   | 1.52  | 1.52  | 1.52  | 1.52  | 1.52    | 1.52        | 1.52   | 1.52  | -
768    | KoVAE  | 31.53 | 31.53 | 31.53 | 31.53 | 31.53   | 31.53       | 31.53  | 31.53 | -
768    | Ours   | 5.38  | 5.38  | 5.38  | 5.38  | 5.38    | 5.38        | 5.38   | 5.38  | -

Table 1: Training time (in hours) for sequence lengths (24, 96, and 768), averaged over 30%, 50%, and 70% missing rates. Our method demonstrates significantly faster training times compared to existing approaches.

Discriminative Time Analysis

[Figure: Discriminative time analysis showing performance over time]

Figure 4: Discriminative time analysis showing how our method maintains consistent performance across different time periods compared to baseline methods.

Quantitative Results


Discriminative Score (lower is better)

Model      | ETTh1 | ETTh2 | ETTm1 | ETTm2 | Weather | Electricity | Energy | Sine  | Stock
TimeGAN-Δt | 0.499 | 0.499 | 0.499 | 0.499 | 0.497   | 0.499       | 0.474  | 0.497 | 0.479
GT-GAN     | 0.471 | 0.369 | 0.412 | 0.366 | 0.481   | 0.427       | 0.325  | 0.338 | 0.249
KoVAE      | 0.197 | 0.081 | 0.05  | 0.067 | 0.332   | 0.498       | 0.323  | 0.043 | 0.118
Ours       | 0.037 | 0.009 | 0.012 | 0.011 | 0.057   | 0.384       | 0.08   | 0.01  | 0.008

Table 2: Averaged results over 30%, 50%, 70% missing rates for sequence length 24. Lower values are better. Our method consistently achieves state-of-the-art performance across all evaluation metrics and datasets.
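For reference, the discriminative score follows the post-hoc classifier two-sample test popularized by TimeGAN: a classifier is trained to separate real from synthetic sequences, and the score is the gap between its held-out accuracy and chance level (0.5), so lower is better. Below is a simplified sketch; the standard protocol trains a recurrent classifier on the raw sequences, whereas this version uses a logistic regression on flattened windows for brevity.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def discriminative_score(real, synth):
    """|test accuracy - 0.5| of a real-vs-synthetic classifier (lower is better).

    real, synth: arrays of shape (N, T, D).
    """
    # Flatten each window to a vector and label real = 1, synthetic = 0.
    X = np.concatenate([real, synth]).reshape(len(real) + len(synth), -1)
    y = np.concatenate([np.ones(len(real)), np.zeros(len(synth))])
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y)
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    return abs(clf.score(X_te, y_te) - 0.5)
```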

Qualitative Evaluation


Figure 1: 2D t-SNE embeddings and probability density functions comparing real data vs synthetic data from our method and KoVAE. Our approach generates more realistic data distributions that closely match the original data patterns.
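A generic recipe for producing such a comparison (not the authors' plotting code) is to embed real and synthetic windows jointly with t-SNE and color the two groups:

```python
import numpy as np
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

def plot_tsne(real, synth):
    """Joint 2D t-SNE of real and synthetic windows, flattened to vectors."""
    X = np.concatenate([real, synth]).reshape(len(real) + len(synth), -1)
    emb = TSNE(n_components=2, perplexity=30).fit_transform(X)
    plt.scatter(*emb[: len(real)].T, s=5, label="real")
    plt.scatter(*emb[len(real):].T, s=5, label="synthetic")
    plt.legend()
    plt.show()
```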

Ablation Studies

Completion Strategy Ablation

Imputation Methods Explained

- GN → NaN: Gaussian noise completion; fills missing values with Gaussian noise.
- 0 → NaN: Zero-filling; replaces missing values with zeros.
- LI: Linear interpolation; estimates missing values using linear interpolation.
- PI: Polynomial interpolation; uses polynomial fitting for missing-value estimation.
- SI: Stochastic imputation; samples from a Gaussian distribution fitted to the non-missing values.
- NCDE: Neural Controlled Differential Equations; advanced learning-based imputation.
- CSDI: Conditional Score-based Diffusion Imputation; diffusion-based imputation.
- GRU-D: GRU with Decay; recurrent neural network with a decay mechanism for missing values.
- Ours (TST): Time Series Transformer; our proposed lightweight and efficient completion method.

This ablation study compares different imputation strategies for handling missing values. Simple methods (GN, zero-filling) create unnatural neighborhoods, while advanced methods (NCDE, CSDI) are computationally expensive. Our TST approach achieves the best balance of performance and efficiency.

Model      | Energy (Disc.) | Stock (Disc.) | Energy (Pred.) | Stock (Pred.)
GN → NaN   | 0.457          | 0.102         | 0.058          | 0.014
0 → NaN    | 0.269          | 0.158         | 0.051          | 0.014
LI         | 0.251          | 0.013         | 0.049          | 0.019
PI         | 0.201          | 0.012         | 0.053          | 0.016
NCDE       | 0.102          | 0.013         | 0.058          | 0.013
CSDI       | 0.088          | 0.012         | 0.048          | 0.013
SI         | 0.069          | 0.010         | 0.047          | 0.013
GRU-D      | 0.158          | 0.014         | 0.055          | 0.015
Ours (TST) | 0.065          | 0.007         | 0.047          | 0.012

Completion Strategy Ablation: Comparison of different imputation methods with 50% drop-rate on Energy and Stock datasets. Our TST-based completion strategy achieves the best performance across both discriminative and predictive metrics.
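The non-learned baselines in this table are simple enough to sketch directly. The snippet below gives illustrative implementations of the zero-filling, Gaussian-noise, linear-interpolation, and stochastic-imputation strategies; the learned baselines (NCDE, CSDI, GRU-D, and our TST) require trained models and are omitted. Function and argument names are our own.

```python
import numpy as np

def impute(x, mask, method="zero"):
    """Simple imputation baselines from the ablation (illustrative sketches).

    x:    (T,) series with arbitrary values at missing positions
    mask: (T,) boolean array, True where the value is observed
    """
    out, t = x.astype(float).copy(), np.arange(len(x))
    if method == "zero":    # 0 -> NaN: replace missing values with zeros
        out[~mask] = 0.0
    elif method == "gn":    # GN -> NaN: fill missing values with Gaussian noise
        out[~mask] = np.random.randn((~mask).sum())
    elif method == "li":    # LI: linear interpolation between observed values
        out[~mask] = np.interp(t[~mask], t[mask], x[mask])
    elif method == "si":    # SI: sample from a Gaussian fit to observed values
        out[~mask] = np.random.normal(x[mask].mean(), x[mask].std(), (~mask).sum())
    return out
```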

Method Ablation

Method Variants Explained

- KoVAE + TST: KoVAE baseline with Time Series Transformer completion preprocessing.
- TimeAutoDiff + TST: TimeAutoDiff baseline with Time Series Transformer completion preprocessing.
- TransFusion + TST: TransFusion baseline with Time Series Transformer completion preprocessing.
- Ours (Mask Only): Our method using only the masking strategy, without TST completion.
- Ours (Without Mask): Our method using TST completion but without the masking strategy.
- Ours (Full): Our complete method with both TST completion and the masking strategy.

This ablation study demonstrates the contribution of each component in our framework. The results show that both TST completion and the masking strategy are essential for optimal performance.


Model               | Energy (30%) | Stock (30%) | Energy (50%) | Stock (50%) | Energy (70%) | Stock (70%)
KoVAE + TST         | 0.399        | 0.109       | 0.407        | 0.064       | 0.408        | 0.037
TimeAutoDiff + TST  | 0.293        | 0.100       | 0.329        | 0.101       | 0.468        | 0.375
TransFusion + TST   | 0.201        | 0.050       | 0.279        | 0.058       | 0.423        | 0.065
Ours (Mask Only)    | 0.157        | 0.087       | 0.269        | 0.168       | 0.372        | 0.237
Ours (Without Mask) | 0.158        | 0.025       | 0.307        | 0.045       | 0.444        | 0.013
Ours                | 0.048        | 0.007       | 0.065        | 0.007       | 0.128        | 0.007

Method Ablation: Discriminative scores comparing different method components for sequence length 24 with 30%, 50%, and 70% drop-rates on Energy and Stock datasets. Our full method consistently outperforms all ablation variants.

Noise Robustness

Noise Level | Model | Weather (Disc.) | Weather (Pred.) | ETTh1 (Disc.) | ETTh1 (Pred.) | Stock (Disc.) | Stock (Pred.) | Energy (Disc.) | Energy (Pred.)
0.1         | KoVAE | 0.426           | 0.056           | 0.225         | 0.073         | 0.235         | 0.016         | 0.434          | 0.067
0.1         | Ours  | 0.061           | 0.052           | 0.024         | 0.034         | 0.007         | 0.012         | 0.065          | 0.047
0.15        | KoVAE | 0.488           | 0.092           | 0.377         | 0.077         | 0.341         | 0.092         | 0.493          | 0.093
0.15        | Ours  | 0.416           | 0.029           | 0.407         | 0.059         | 0.282         | 0.023         | 0.467          | 0.053
0.2         | KoVAE | 0.491           | 0.096           | 0.440         | 0.084         | 0.352         | 0.121         | 0.496          | 0.123
0.2         | Ours  | 0.485           | 0.035           | 0.456         | 0.062         | 0.340         | 0.027         | 0.457          | 0.057

Noise Robustness: Discriminative and predictive scores for a 50% missing rate on the Weather, ETTh1, Stock, and Energy datasets with injected noise levels of 0.1, 0.15, and 0.2. Our method demonstrates superior robustness across different noise levels.

Cite Us

BibTeX Citation

@inproceedings{fadlon2025diffusionmodelregulartimeseries,
      title={A Diffusion Model for Regular Time Series Generation from Irregular Data with Completion and Masking}, 
      author={Gal Fadlon and Idan Arbiv and Nimrod Berman and Omri Azencot},
      booktitle={Advances in Neural Information Processing Systems (NeurIPS)},
      year={2025},
      url={https://neurips.cc/virtual/2025/poster/118491}
}