1 to 10 of 35 Results
Apr 7, 2026 - Luo Xiangzhong
Luo, Xiangzhong; Liu, Weichen, 2026, "Replication Data for: Exploring Deep-to-Shallow Transformable Neural Networks for Intelligent Embedded Systems", https://doi.org/10.21979/N9/VVAKSR, DR-NTU (Data), V1
The dataset includes the source code and readme file for implementing the design presented in the paper "Exploring Deep-to-Shallow Transformable Neural Networks for Intelligent Embedded Systems", accepted by IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems.
Apr 7, 2026 - Luo Xiangzhong
Luo, Xiangzhong; Liu, Di; Kong, Hao; Huai, Shuo; Liu, Weichen, 2026, "Replication Data for: Double-Win NAS: Towards Deep-to-Shallow Transformable Neural Architecture Search for Intelligent Embedded Systems", https://doi.org/10.21979/N9/BIOGHZ, DR-NTU (Data), V1
The dataset includes the source code and readme file for implementing the design presented in the paper "Double-Win NAS: Towards Deep-to-Shallow Transformable Neural Architecture Search for Intelligent Embedded Systems".
Mar 13, 2026 - Other Projects of the Liu Team
Wu, Ting; Long, Linbo; Ma, Zhulin; Yu, Qiushuang; Tian, Hang; Liu, Weichen, 2026, "Replication Data for: Simulating CXL Shared Coherent Memory Using Shared Memory Among Virtual Machines", https://doi.org/10.21979/N9/BUO9DY, DR-NTU (Data), V1
The dataset includes the source code and readme file for implementing the design presented in the paper "Simulating CXL Shared Coherent Memory Using Shared Memory Among Virtual Machines".
Dec 18, 2023 - Luo Xiangzhong
Luo, Xiangzhong; Liu, Weichen, 2023, "Related Data for: Pearls Hide Behind Linearity: Simplifying Deep Convolutional Networks for Embedded Hardware Systems via Linearity Grafting", https://doi.org/10.21979/N9/SJX3HR, DR-NTU (Data), V1
The related data for "Pearls Hide Behind Linearity: Simplifying Deep Convolutional Networks for Embedded Hardware Systems via Linearity Grafting".
Dec 18, 2023 - Zhu Shien
Zhu, Shien; Huai, Shuo; Xiong, Guochu; Liu, Weichen, 2023, "Replication Data for: iMAT: Energy-Efficient In-Memory Acceleration of Ternary Neural Networks With Sparse Dot Product", https://doi.org/10.21979/N9/B4SIIN, DR-NTU (Data), V1, UNF:6:KboaxJ9ZX4FgcAqcXn8SqQ== [fileUNF]
This dataset contains the simulation environment, simulation results, and post-processing data for the iMAT paper published in ISLPED 2023.
Dec 14, 2023 - Zhu Shien
Zhu, Shien; Duong, H. K. Luan; Chen, Hui; Liu, Di; Liu, Weichen, 2023, "Replication Data for: FAT: An In-Memory Accelerator with Fast Addition for Ternary Weight Neural Networks", https://doi.org/10.21979/N9/DYKUPV, DR-NTU (Data), V1, UNF:6:UCCl9ySs+IN5trVkl2AGQQ== [fileUNF]
This dataset contains the code, figures, and tables for the TCAD 2022 paper: FAT: An In-Memory Accelerator with Fast Addition for Ternary Weight Neural Networks.
Dec 14, 2023 - KONG Hao
Kong, Hao; Liu, Weichen, 2023, "Replication Data for: EdgeCompress: Coupling Multi-Dimensional Model Compression and Dynamic Inference for EdgeAI", https://doi.org/10.21979/N9/GCAMZH, DR-NTU (Data), V1
This is the source code repository for the publication "EdgeCompress: Coupling Multi-Dimensional Model Compression and Dynamic Inference for EdgeAI".
Dec 14, 2023 - KONG Hao
Kong, Hao; Liu, Weichen, 2023, "Replication Data for: HACScale: Hardware-Aware Compound Scaling for Resource-Efficient DNNs", https://doi.org/10.21979/N9/KSXK4T, DR-NTU (Data), V1
This is the source code repository for the publication "HACScale: Hardware-Aware Compound Scaling for Resource-Efficient DNNs".
Dec 14, 2023 - KONG Hao
Kong, Hao; Liu, Weichen, 2023, "Replication Data for: EDLAB: A Benchmark for Edge Deep Learning Accelerators", https://doi.org/10.21979/N9/NT2HXS, DR-NTU (Data), V1, UNF:6:vfraJpp92Ek2OdfPAVmxeg== [fileUNF]
This is the program source code for the publication "EDLAB: A Benchmark for Edge Deep Learning Accelerators".
