Jie Zhang's research focuses on the design of storage systems and specialized processors. Approaching the problem from a computer-architecture perspective, his work addresses the demands of high-performance storage in the era of big data and artificial intelligence, aiming to overcome the data-movement bottleneck and the memory-wall limitation of the von Neumann architecture.
Representative Achievements:
1. Ohm-GPU: Integrating New Optical Network and Heterogeneous Memory into GPU Multi-Processors (MICRO 2021)
2. Revamping Storage Class Memory With Hardware Automated Memory-Over-Storage Solution (ISCA 2021)
3. ZnG: Architecting GPU Multi-Processors with New Flash for Scalable Data Analysis (ISCA 2020)
4. Scalable Parallel Flash Firmware for Many-core Architectures (FAST 2020)
5. DRAM-less: Hardware Acceleration of Data Processing with New Memory (HPCA 2020)
6. FlashGPU: Placing New Flash Next to GPU Cores (DAC 2019)
7. FUSE: Fusing STT-MRAM into GPUs to Alleviate Off-Chip Memory Access Overheads (HPCA 2019)
8. FlashShare: Punching Through Server Storage Stack from Kernel to Firmware for Ultra-Low Latency SSDs (OSDI 2018)
9. Amber: Enabling Precise Full-System Simulation with Detailed Modeling of All SSD Resources (MICRO 2018)
10. FlashAbacus: A Self-governing Flash-based Accelerator for Low-power Systems (EuroSys 2018)
11. DUANG: Fast and Lightweight Page Migration in Asymmetric Memory Systems (HPCA 2016)
12. NVMMU: A Non-Volatile Memory Management Unit for Heterogeneous GPU-SSD Architectures (PACT 2015)