SEAformer: frequency domain decomposition transformer with signal enhanced for long-term wind power forecasting

References

  1. Wang X, Guo P, Huang X (2011) A review of wind power forecasting models. Energy Procedia 12:770–778
  2. Vaswani A, Shazeer N, Parmar N, Uszkoreit J, Jones L, Gomez AN, … Polosukhin I (2017) Attention is all you need. In: Guyon I, Luxburg UV, Bengio S, Wallach H, Fergus R, Vishwanathan S, Garnett R (eds) Advances in neural information processing systems, vol 30. Curran Associates, Inc.
  3. Kitaev N, Kaiser L, Levskaya A (2020) Reformer: the efficient transformer. In: 8th International conference on learning representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020
  4. Wu H, Xu J, Wang J, Long M (2021) Autoformer: decomposition transformers with auto-correlation for long-term series forecasting. Adv Neural Inf Process Syst 34:22419–22430
  5. Zhou H, Zhang S, Peng J, Zhang S, Li J, Xiong H, Zhang W (2021) Informer: beyond efficient transformer for long sequence time-series forecasting. In: Proceedings of the AAAI conference on artificial intelligence, vol 35. pp 11106–11115
  6. Zhou T, Ma Z, Wen Q, Wang X, Sun L, Jin R (2022) Fedformer: frequency enhanced decomposed transformer for long-term series forecasting. In: International conference on machine learning, pp 27268–27286. PMLR
  7. Kuznetsov V, Mohri M (2015) Learning theory and algorithms for forecasting non-stationary time series. In: Proceedings of the 28th international conference on neural information processing systems, vol 1. MIT Press, Cambridge, MA, pp 541–549
  8. Dragomiretskiy K, Zosso D (2014) Variational mode decomposition. IEEE Trans Signal Process 62(3):531–544
  9. Al-Yahyai S, Charabi Y, Gastli A (2010) Review of the use of numerical weather prediction (NWP) models for wind energy assessment. Renew Sustain Energy Rev 14(9):3192–3198
  10. Hochreiter S, Schmidhuber J (1997) Long short-term memory. Neural Comput 9(8):1735–1780
  11. Box GE, Jenkins GM, MacGregor JF (1974) Some recent advances in forecasting and control. J Roy Stat Soc: Ser C (Appl Stat) 23(2):158–179
  12. Rangapuram SS, Seeger MW, Gasthaus J, Stella L, Wang Y, Januschowski T (2018) Deep state space models for time series forecasting. In: Bengio S, Wallach H, Larochelle H, Grauman K, Cesa-Bianchi N, Garnett R (eds) Advances in neural information processing systems, vol 31. Curran Associates, Inc.
  13. De Giorgi MG, Campilongo S, Ficarella A, Congedo PM (2014) Comparison between wind power prediction models based on wavelet decomposition with least-squares support vector machine (LS-SVM) and artificial neural network (ANN). Energies 7(8):5251–5272
  14. Assimakopoulos V, Nikolopoulos K (2000) The theta model: a decomposition approach to forecasting. Int J Forecast 16(4):521–530
  15. Holt CC (2004) Forecasting seasonals and trends by exponentially weighted moving averages. Int J Forecast 20(1):5–10
  16. Cleveland RB, Cleveland WS, McRae JE, Terpenning I (1990) STL: a seasonal-trend decomposition. J Off Stat 6(1):3–73
  17. Cleveland WP, Tiao GC (1976) Decomposition of seasonal time series: a model for the census x–11 program. J Am Stat Assoc 71(355):581–587
  18. Woo G, Liu C, Sahoo D, Kumar A, Hoi SCH (2022) Cost: contrastive learning of disentangled seasonal-trend representations for time series forecasting. In: The tenth international conference on learning representations, ICLR 2022, Virtual Event, April 25-29, 2022
  19. McKenzie E, Gardner ES Jr (2010) Damped trend exponential smoothing: a modelling viewpoint. Int J Forecast 26(4):661–665
  20. Wen Q, Gao J, Song X, Sun L, Xu H, Zhu S (2019) Robuststl: a robust seasonal-trend decomposition algorithm for long time series. In: Proceedings of the AAAI conference on artificial intelligence, vol 33. pp 5409–5416
  21. Huang S, Yan C, Qu Y (2023) Deep learning model-transformer based wind power forecasting approach. Front Energy Res
  22. Fu X, Gao F, Wu J, Wei X, Duan F (2019) Spatiotemporal attention networks for wind power forecasting. In: 2019 International conference on data mining workshops (ICDMW), pp 149–154
  23. Nascimento EGS, de Melo TAC, Moreira DM (2023) A transformer-based deep neural network with wavelet transform for forecasting wind speed and wind energy. Energy 278:127678
  24. Mo S, Wang H, Li B, Xue Z, Fan S, Liu X (2024) Powerformer: a temporal-based transformer model for wind power forecasting. Energy Rep 11:736–744
  25. Wang S, Shi J, Yang W, Yin Q (2024) High and low frequency wind power prediction based on transformer and BiGRU-attention. Energy 288:129753
  26. Deng B, Wu Y, Liu S, Xu Z (2022) Wind speed forecasting for wind power production based on frequency-enhanced transformer. In: 2022 4th International conference on machine learning, big data and business intelligence (MLBDBI), pp 151–155
  27. Huang X, Jiang A (2022) Wind power generation forecast based on multi-step informer network. Energies 15(18):6642
  28. Li S, Jin X, Xuan Y, Zhou X, Chen W, Wang Y-X, Yan X (2019) Enhancing the locality and breaking the memory bottleneck of transformer on time series forecasting. Adv Neural Inf Process Syst 32
  29. Beltagy I, Peters ME, Cohan A (2020) Longformer: the long-document transformer. arXiv preprint arXiv:2004.05150
  30. Choromanski KM, Likhosherstov V, Dohan D, Song X, Gane A, Sarlós T, Hawkins P, Davis JQ, Mohiuddin A, Kaiser L, Belanger DB, Colwell LJ, Weller A (2021) Rethinking attention with performers. In: 9th International conference on learning representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021
  31. Wang S, Li BZ, Khabsa M, Fang H, Ma H (2020) Linformer: self-attention with linear complexity. arXiv preprint arXiv:2006.04768
  32. Gulati A, Qin J, Chiu C, Parmar N, Zhang Y, Yu J, Han W, Wang S, Zhang Z, Wu Y, Pang R (2020) Conformer: convolution-augmented transformer for speech recognition. In: Interspeech 2020, 21st Annual conference of the international speech communication association, virtual event, Shanghai, China, 25-29 October 2020, pp 5036–5040
  33. Box GE, Jenkins GM, Reinsel GC, Ljung GM (2015) Time series analysis: forecasting and control. John Wiley & Sons, Hoboken
  34. Wen R, Torkkola K, Narayanaswamy B, Madeka D (2017) A multi-horizon quantile recurrent forecaster. arXiv preprint arXiv:1711.11053
  35. Yu R, Zheng S, Anandkumar A, Yue Y (2017) Long-term forecasting using higher order tensor rnns. arXiv preprint arXiv:1711.00073
  36. Cho K, van Merriënboer B, Gulcehre C, Bahdanau D, Bougares F, Schwenk H, Bengio Y (2014) Learning phrase representations using RNN encoder–decoder for statistical machine translation. In: Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP), pp 1724–1734. Association for Computational Linguistics, Doha, Qatar
  37. Salinas D, Flunkert V, Gasthaus J, Januschowski T (2020) Deepar: probabilistic forecasting with autoregressive recurrent networks. Int J Forecast 36(3):1181–1191
  38. Qin Y, Song D, Chen H, Cheng W, Jiang G, Cottrell GW (2017) A dual-stage attention-based recurrent neural network for time series prediction. In: Proceedings of the twenty-sixth international joint conference on artificial intelligence, IJCAI 2017, Melbourne, Australia, August 19-25, pp 2627–2633
  39. Lai G, Chang W-C, Yang Y, Liu H (2018) Modeling long-and short-term temporal patterns with deep neural networks. In: The 41st International ACM SIGIR conference on research & development in information retrieval, pp 95–104
  40. Shih S-Y, Sun F-K, Lee H-Y (2019) Temporal pattern attention for multivariate time series forecasting. Mach Learn 108:1421–1441
  41. Bai S, Kolter JZ, Koltun V (2018) An empirical evaluation of generic convolutional and recurrent networks for sequence modeling. arXiv preprint arXiv:1803.01271
  42. Devlin J, Chang M-W, Lee K, Toutanova K (2019) BERT: pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 conference of the north american chapter of the association for computational linguistics: human language technologies, vol 1 (Long and Short Papers), Minneapolis, Minnesota, pp 4171–4186
  43. Liu Z, Lin Y, Cao Y, Hu H, Wei Y, Zhang Z, Lin S, Guo B (2021) Swin transformer: hierarchical vision transformer using shifted windows. In: Proceedings of the IEEE/CVF international conference on computer vision, pp 10012–10022
  44. Rao Y, Zhao W, Zhu Z, Lu J, Zhou J (2021) Global filter networks for image classification. Adv Neural Inf Process Syst 34:980–993
  45. Zhu Z, Soricut R (2021) H-transformer-1D: Fast one-dimensional hierarchical attention for sequences. In: Proceedings of the 59th annual meeting of the association for computational linguistics and the 11th international joint conference on natural language processing (Volume 1: Long Papers), Online, pp 3801–3815
  46. Bertsimas D, Tsitsiklis J (1993) Simulated annealing. Stat Sci 8(1):10–15. https://doi.org/10.1214/ss/1177011077
  47. Yang X-S (2021) Chapter 6 - genetic algorithms. In: Yang X-S (ed) Nature-inspired optimization algorithms, second edition. Academic Press, Amsterdam, pp 91–100. https://doi.org/10.1016/B978-0-12-821986-7.00013-5
  48. Kennedy J, Eberhart R (1995) Particle swarm optimization. In: Proceedings of ICNN'95 - international conference on neural networks, vol 4. pp 1942–1948. https://doi.org/10.1109/ICNN.1995.488968
  49. Dorigo M, Birattari M, Stutzle T (2006) Ant colony optimization. IEEE Comput Intell Mag 1(4):28–39. https://doi.org/10.1109/MCI.2006.329691
  50. Zefan C, Xiaodong Y (2017) Cuckoo search algorithm with deep search. In: 2017 3rd IEEE international conference on computer and communications (ICCC), pp 2241–2246. https://doi.org/10.1109/CompComm.2017.8322934
  51. Zhou J, Lu X, Xiao Y, Su J, Lyu J, Ma Y, Dou D (2022) SDWPF: a dataset for spatial dynamic wind power forecasting challenge at KDD cup 2022. arXiv preprint arXiv:2208.04360
  52. Paparrizos J, Gravano L (2015) k-shape: efficient and accurate clustering of time series. In: Proceedings of the 2015 ACM SIGMOD international conference on management of data
  53. Kingma DP, Ba J (2015) Adam: A method for stochastic optimization. In: 3rd International conference on learning representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings
  54. Paszke A, Gross S, Massa F, Lerer A, Bradbury J, Chanan G, Killeen T, Lin Z, Gimelshein N, Antiga L, et al (2019) Pytorch: an imperative style, high-performance deep learning library. Adv Neural Inf Process Syst 32
  55. Zeng A, Chen M-H, Zhang L, Xu Q (2022) Are transformers effective for time series forecasting? In: AAAI conference on artificial intelligence. https://api.semanticscholar.org/CorpusID:249097444
  56. Hochreiter S, Schmidhuber J (1997) Long short-term memory. Neural Comput 9(8):1735–1780
  57. Li C, Tang G, Xue X, Saeed A, Hu X (2020) Short-term wind speed interval prediction based on ensemble GRU model. IEEE Trans Sustain Energy 11:1370–1380
  58. Zhang GP (2003) Time series forecasting using a hybrid ARIMA and neural network model. Neurocomputing 50:159–175