Efficient and Intelligent Computing Lab
United States
Joined Jan 10, 2021
The EIC Lab in the School of Computer Science at Georgia Tech focuses on developing efficient machine learning systems via cross-layer innovations spanning algorithms, architectures, and chip design, aiming to promote green AI and enable ubiquitous machine-learning-powered intelligence.
2024 3DV MixRT
[2024 3DV] MixRT: Mixed Neural Representations For Real-Time NeRF Rendering
licj15.github.io/MixRT/
Views: 15
Videos
2024 NeurIPS Fragment Pruning
16 views · 21 days ago
[2024 NeurIPS] 3D Gaussian Rendering Can Be Sparser: Efficient Rendering via Learned Fragment Pruning
2024 MICRO Fusion-3D
36 views · 21 days ago
[2024 MICRO] Fusion-3D: Integrated Acceleration for Instant 3D Reconstruction and Real-Time Rendering
2024 NeurIPS ShiftAddLLM
184 views · 21 days ago
[2024 NeurIPS] ShiftAddLLM: Accelerating Pretrained LLMs via Post-Training Multiplication-Less Reparameterization
2024 NeurIPS AmoebaLLM
71 views · 1 month ago
[NeurIPS 2024] "AmoebaLLM: Constructing Any-Shape Large Language Models for Efficient and Instant Deployment" by Yonggan Fu, Zhongzhi Yu, Junwei Li, Jiayi Qian, Yongan Zhang, Xiangchi Yuan, Dachuan Shi, Roman Yakunin, and Yingyan (Celine) Lin. Paper: arxiv.org/pdf/2411.10606 Github: github.com/GATECH-EIC/AmoebaLLM
2024 ECCV Oral Paper: Omni-Recon
97 views · 1 month ago
[ECCV 2024 Oral] "Omni-Recon: Harnessing Image-based Rendering for General-Purpose Neural Radiance Fields" by Yonggan Fu, Huaizhi Qu, Zhifan Ye, Chaojian Li, Kevin Zhao, and Yingyan (Celine) Lin. Paper: arxiv.org/pdf/2403.11131 Github: github.com/GATECH-EIC/Omni-Recon
2024 ICML Enhancing Large Language Models without Training through Attention Calibration
97 views · 3 months ago
[2024 ICML] Unveiling and Harnessing Hidden Attention Sinks: Enhancing Large Language Models without Training through Attention Calibration
2024 DAC EDGE-LLM
140 views · 3 months ago
[2024 DAC] EDGE-LLM: Enabling Efficient Large Language Model Adaptation on Edge Devices via Layerwise Unified Compression and Adaptive Layer Tuning and Voting
2024 ICML Linearized-LLM
107 views · 3 months ago
[ICML 2024] "When Linear Attention Meets Autoregressive Decoding: Towards More Effective and Efficient Linearized Large Language Models" by Haoran You, Yichao Fu, Zheng Wang, Amir Yazdanbakhsh, and Yingyan (Celine) Lin.
MG-Verilog: Multi-grained Dataset Towards Enhanced LLM-assisted Verilog Generation
127 views · 6 months ago
Demonstration of generating multi-grained descriptions for the customized Verilog code repo using MG-Verilog
2023 ICML Master-ASR
101 views · 1 year ago
[ICML 2023] "Master-ASR: Achieving Multilingual Scalability and Low-Resource Adaptation in ASR with Modular Learning" by Zhongzhi Yu, Yang Zhang, Kaizhi Qian, Cheng Wan, Yonggan Fu, Yongan Zhang, Yingyan (Celine) Lin.
2023 ICML NeRFool
196 views · 1 year ago
[ICML 2023] "NeRFool: Uncovering the Vulnerability of Generalizable Neural Radiance Fields against Adversarial Perturbations" by Yonggan Fu, Ye Yuan, Souvik Kundu, Shang Wu, Shunyao Zhang, Yingyan (Celine) Lin.
2023 ISCA Instant-3D (Lightning Talk)
142 views · 1 year ago
[ISCA 2023] "Instant-3D: Instant Neural Radiance Field Training Towards On-Device AR/VR 3D Reconstruction" by Sixu Li*, Chaojian Li*, Wenbo Zhu, Boyang (Tony) Yu, Yang (Katie) Zhao, Cheng Wan, Haoran You, Huihong Shi, Yingyan (Celine) Lin.
2023 ISCA Gen-NeRF (Lightning Talk)
186 views · 1 year ago
[ISCA 2023] "Gen-NeRF: Efficient and Generalizable Neural Radiance Fields via Algorithm-Hardware Co-Design" by Yonggan Fu, Zhifan Ye, Jiayi Yuan, Shunyao Zhang, Sixu Li, Haoran You, Yingyan (Celine) Lin.
[2023 CVPR] Hint-Aug
115 views · 1 year ago
[2023 CVPR] "Hint-Aug: Drawing Hints From FViTs Towards Boosted Few-Shot Parameter-Efficient Tuning", by Zhongzhi Yu, Shang Wu, Yonggan Fu, Shunyao Zhang, and Yingyan (Celine) Lin.