A contrastive self-supervised learning method for source-free EEG emotion recognition
References
Baldi, P.: Autoencoders, unsupervised learning, and deep architectures. In: ICML Unsupervised and Transfer Learning (2012)
Chen, T., Kornblith, S., Norouzi, M., Hinton, G.: A simple framework for contrastive learning of visual representations (SimCLR). In: International Conference on Machine Learning (2020)
Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: NAACL (2019)
Ganin, Y., Ustinova, E., Ajakan, H., Germain, P., Larochelle, H., Laviolette, F., Marchand, M., Lempitsky, V.: Domain-adversarial training of neural networks. J. Mach. Learn. Res. 17(59), 1–35 (2016)
He, K., Fan, H., Wu, Y., Xie, S., Girshick, R.: Momentum contrast for unsupervised visual representation learning. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9729–9738 (2020)
Hjelm, R.D., Fedorov, A., Lavoie-Marchildon, S., Grewal, K., Bachman, P., Trischler, A., Bengio, Y.: Learning deep representations by mutual information estimation and maximization. arXiv:1808.06670 (2019)
Hjorth, B.: EEG analysis based on time domain properties. Electroencephalogr. Clin. Neurophysiol. 29(3), 306–310 (1970)
Ju, C., Gao, D., Mane, R., Tan, B., Liu, Y., Guan, C.: Federated transfer learning for EEG signal classification. In: 42nd Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC) (2020)
Koelstra, S., Mühl, C., Soleymani, M., Lee, J.-S., Yazdani, A., Ebrahimi, T., Pun, T., Nijholt, A., Patras, I.: DEAP: a database for emotion analysis using physiological signals. IEEE Trans. Affect. Comput. 3(1), 18–31 (2012)
Lee, J., Jung, D., Yim, J., Yoon, S.-H.: Confidence score for source-free unsupervised domain adaptation. In: International Conference on Machine Learning (2022)
Li, J., Qiu, S., Du, C., Wang, Y., He, H.: Domain adaptation for EEG emotion recognition based on latent representation similarity. IEEE Trans. Cogn. Dev. Syst. 12(2), 344–353 (2019)
Liang, J., Hu, D., Feng, J.: Do we really need to access the source data? source hypothesis transfer for unsupervised domain adaptation. In: ICML (2020)
Liang, J., Hu, D., Wang, Y., He, R., Feng, J.: Source data-absent unsupervised domain adaptation through hypothesis transfer and labeling transfer. IEEE Trans. Pattern Anal. Mach. Intell. 44(11), 8602–8617 (2022). https://doi.org/10.1109/TPAMI.2021.3103390
Liu, X., Zhang, F., Hou, Z., Mian, L., Wang, Z., Zhang, J., Tang, J.: Self-supervised learning: generative or contrastive. IEEE Trans. Knowl. Data Eng. 35(1), 857–876 (2021)
Qiu, J., Tang, J., Ma, H., Dong, Y., Wang, K., Tang, J.: DeepInf: Social influence prediction with deep learning. In: Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 2110–2119 (2018)
Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., Krueger, G., Sutskever, I.: Learning transferable visual models from natural language supervision. In: ICML (2021)
Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I., et al.: Language models are unsupervised multitask learners. OpenAI blog 1(8), 9 (2019)
Shen, X., Liu, X., Hu, X., Zhang, D., Song, S.: Contrastive learning of subject-invariant EEG representations for cross-subject emotion recognition. arXiv:2109.09559 (2022)
Song, T., Zheng, W., Song, P., Cui, Z.: EEG emotion recognition using dynamical graph convolutional neural networks. IEEE Trans. Affect. Comput. (2018)
Song, T., Liu, S., Zheng, W., Zong, Y., Cui, Z.: Instance-adaptive graph for EEG emotion recognition. Proc. AAAI Conf. Artif. Intell. 34(3), 2701–2708 (2020)
Van den Oord, A., Kalchbrenner, N., Kavukcuoglu, K.: Pixel recurrent neural networks. In: International Conference on Machine Learning, pp. 1747–1756. PMLR (2016a)
Van den Oord, A., Kalchbrenner, N., Espeholt, L., Vinyals, O., Graves, A., et al.: Conditional image generation with PixelCNN decoders. Adv. Neural Inf. Process. Syst. 29 (2016b)
Wang, Y., Wu, Q., Wang, C., Ruan, Q.: DE-CNN: an improved identity recognition algorithm based on the emotional electroencephalography. Comput. Math. Methods Med. 2020, 7574531 (2020)