Part 1: Document Description

Citation
Title: TimeCMA: Towards LLM-Empowered Multivariate Time Series Forecasting via Cross-Modality Alignment
Identification Number: doi:10.21979/N9/V1XDVB
Distributor: DR-NTU (Data)
Date of Distribution: 2025-01-15
Version: 1
Bibliographic Citation: Liu, Chenxi; Xu, Qianxiong; Miao, Hao; Yang, Sun; Zhang, Lingzheng; Long, Cheng; Li, Ziyue; Zhao, Rui, 2025, "TimeCMA: Towards LLM-Empowered Multivariate Time Series Forecasting via Cross-Modality Alignment", https://doi.org/10.21979/N9/V1XDVB, DR-NTU (Data), V1
Citation

Title: TimeCMA: Towards LLM-Empowered Multivariate Time Series Forecasting via Cross-Modality Alignment
Identification Number: doi:10.21979/N9/V1XDVB
Authoring Entity:
Liu, Chenxi (S-Lab, Nanyang Technological University)
Xu, Qianxiong (S-Lab, Nanyang Technological University)
Miao, Hao (Aalborg University)
Yang, Sun (Peking University)
Zhang, Lingzheng (Hong Kong University of Science and Technology (Guangzhou))
Long, Cheng (S-Lab, Nanyang Technological University)
Li, Ziyue (University of Cologne)
Zhao, Rui (SenseTime Research)
Software used in Production: LaTeX
Software used in Production: PyTorch
Distributor: DR-NTU (Data)
Access Authority: Liu, Chenxi
Depositor: Liu, Chenxi
Date of Deposit: 2025-01-13
Holdings Information: https://doi.org/10.21979/N9/V1XDVB
Study Scope

Keywords: Computer and Information Science, Large Language Models, Time Series, Cross-Modality Alignment
Abstract: Multivariate time series forecasting (MTSF) aims to learn temporal dynamics among variables to forecast future time series. Existing statistical and deep learning-based methods suffer from limited learnable parameters and small-scale training data. Recently, large language models (LLMs) that combine time series with textual prompts have achieved promising performance in MTSF. However, we discovered that current LLM-based solutions fall short in learning disentangled embeddings. We introduce TimeCMA, an intuitive yet effective framework for MTSF via cross-modality alignment. Specifically, we present a dual-modality encoding with two branches: the time series encoding branch extracts disentangled yet weak time series embeddings, and the LLM-empowered encoding branch wraps the same time series with text as prompts to obtain entangled yet robust prompt embeddings. As a result, the cross-modality alignment retrieves time series embeddings that are both disentangled and robust, "the best of two worlds", from the prompt embeddings based on the similarities between the time series and prompt modalities. As another key design, to reduce the computational cost incurred by time series with their lengthy textual prompts, we design an effective prompt that encourages the most essential temporal information to be encapsulated in the last token: only the last token is passed to downstream prediction. We further store the last token embeddings to accelerate inference. Extensive experiments on eight real datasets demonstrate that TimeCMA outperforms state-of-the-art methods.
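The alignment step described in the abstract can be sketched as a similarity-weighted retrieval: weak-but-disentangled time series embeddings act as queries against robust-but-entangled prompt embeddings, and only the last prompt token is kept for downstream prediction. The following is a minimal NumPy illustration of that idea, not the authors' implementation; all function names, shapes, and the use of scaled dot-product attention as the similarity mechanism are assumptions for the sketch.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_modality_align(ts_emb, prompt_emb):
    """Retrieve robust embeddings from the prompt modality.

    ts_emb:     (N, d) disentangled-yet-weak time series embeddings (queries)
    prompt_emb: (M, d) entangled-yet-robust prompt embeddings (keys/values)
    Returns (N, d) aligned embeddings: each query is a similarity-weighted
    combination of prompt embeddings.
    """
    d = ts_emb.shape[-1]
    scores = ts_emb @ prompt_emb.T / np.sqrt(d)   # modality similarities
    attn = softmax(scores, axis=-1)               # rows sum to 1
    return attn @ prompt_emb

def last_token_embedding(prompt_token_embs):
    """Keep only the last token per prompt, shape (B, L, d) -> (B, d).

    Mirrors the prompt design in which the essential temporal information
    is encapsulated in the final token; these embeddings can be cached
    to speed up inference.
    """
    return prompt_token_embs[:, -1, :]
```

In the paper's pipeline the retrieved embeddings would feed a downstream forecaster; caching the last-token embeddings avoids re-running the frozen LLM at inference time.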
Kind of Data: Code and Data
Methodology and Processing

Sources Statement

Data Access

Notes:
S-Lab License 1.0
Copyright 2025 S-Lab

Redistribution and use for non-commercial purpose in source and binary forms, with or without modification, are permitted provided that the following conditions are met:
1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
3. Neither the name of the copyright holder nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

In the event that redistribution and/or use for commercial purpose in source or binary forms, with or without modification, is required, please contact the contributor(s) of the work.
Other Study Description Materials

Related Studies

GitHub page: https://github.com/your-repo-link

Related Publications
Citation

Identification Number: arXiv:2406.01638
Bibliographic Citation: Liu, C., Xu, Q., Miao, H., Yang, S., Zhang, L., Long, C., ... & Zhao, R. (2024). TimeCMA: Towards LLM-Empowered Time Series Forecasting via Cross-Modality Alignment.