Research

July 1st, 2020

At the Next-Generation Computing and Codesign Lab (NCCL) at UNIST, we are mainly concerned with the problems of optimizing system performance and power at the intersection of software and hardware. At the core of information technology is computation, which is realized through the combination of hardware and software. But today both hardware and software are seeing fundamental changes, foreshadowing a paradigm shift in how we understand and realize computation. This shift is driven first by changes and advances in hardware technology, in particular semiconductor device technology, forcing us to seek new boundaries between hardware and software, such as application-specific processors, hardware accelerators, and even reconfigurable processors. The second driver of the shift is the emergence and pervasive use of deep learning systems. Deep learning hardware is not programmed in the conventional way, such as sequential or parallel programming, but through what is called "training", the iterative application of relevant data through the learning pathway of the system. Thus deep learning systems open up a real possibility of new computer architectures that are not based on the von Neumann abstraction but work more like the human brain. Since our research at NCCL involves both the hardware and software components of a system, we are in a unique position to perform original research that can pave the way for this paradigm shift in the core definition of computing.

Our lab studies a variety of techniques for optimizing system performance, power, and other metrics at the boundary between software and hardware. Today's computers, in particular, are facing two major paradigm shifts. On one hand, as hardware technology steadily advances, there is rapidly growing demand and interest in research that blurs the boundary between hardware and software or moves beyond the conventional von Neumann computing paradigm. On the other hand, centered on the recent rise of deep learning, there is a need for research on next-generation computing systems that present a new programming paradigm in which data itself becomes the program, rather than conventional sequential or parallel programming. We mainly study hardware-software co-design techniques spanning devices, circuits, architectures, compilers, and algorithms that can meet these new computing needs.
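
To make the "data becomes the program" idea above concrete, here is a minimal, purely illustrative Python sketch (ours, not lab code): a toy linear model recovers the rule y = 3x from example pairs alone, so the learned weight, rather than hand-written logic, ends up defining the computation.

    # Toy illustration of "programming by training": no explicit rule is coded;
    # the weight is learned by iteratively applying (input, target) examples.
    import random

    def train(data, lr=0.05, epochs=200):
        """Fit y = w * x by gradient descent on squared error."""
        w = random.uniform(-1.0, 1.0)
        for _ in range(epochs):
            for x, y in data:
                grad = 2.0 * (w * x - y) * x   # d/dw of (w*x - y)^2
                w -= lr * grad
        return w

    # The rule y = 3x is never written down; it is recovered from the data.
    examples = [(x, 3.0 * x) for x in (0.1, 0.5, 1.0, 1.5, 2.0)]
    print(train(examples))   # prints a value close to 3.0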

Research Areas

  • Deep learning and neuromorphic processors
  • New and emerging device-based electronic design paradigms (e.g., stochastic computing, NVM)
  • Compilation for emerging architectures (e.g., multicore processors, GPUs, FPGAs, reconfigurable processors)
  • Electronic design automation (EDA)

Recent Projects

Check out the front page, featuring interesting research outcomes from our lab.

1. What do we do about Deep Learning / AI?

May 5th, 2020

Professor Jongeun Lee is a participating faculty member of the UNIST AI Graduate School. If you have questions about the AI Graduate School's curriculum, career paths, participating faculty, and more, please see the UNIST AI Graduate School homepage: http://aigs.unist.ac.kr/.

As an expert in design automation, Professor Jongeun Lee actively conducts joint research with industrial research labs, such as Samsung Electronics, as well as with domestic and international universities.

Our main research theme is applying design automation technology to AI hardware (see the figure below). Unlike most AI research, we have been focusing on algorithm-hardware co-design that considers not only the performance of AI algorithms but also their hardware implementation characteristics.

We have also recently developed a stochastic computing based deep neural network processor, demonstrating world-class efficiency and accuracy.
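
For readers unfamiliar with stochastic computing, the short Python sketch below shows only its core idea and is our simplified illustration, not the lab's processor design: a value in [0, 1] is encoded as a random bitstream whose fraction of 1s equals the value, so multiplying two independent values reduces to one AND gate per bit, at the cost of accuracy that improves only with stream length.

    # Unipolar stochastic computing in miniature: encode, multiply with AND, decode.
    import random

    def encode(value, length, rng):
        """Encode value in [0, 1] as a bitstream with P(bit = 1) = value."""
        return [1 if rng.random() < value else 0 for _ in range(length)]

    def decode(stream):
        return sum(stream) / len(stream)

    def sc_multiply(a, b, length=4096, seed=0):
        rng = random.Random(seed)
        sa, sb = encode(a, length, rng), encode(b, length, rng)
        product = [x & y for x, y in zip(sa, sb)]   # one AND gate per bit pair
        return decode(product)

    print(sc_multiply(0.6, 0.5))   # close to 0.3; error shrinks as length grows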


Most recently, we have also been expanding into research that applies AI to design automation itself.

If you have any questions or are interested in joining our research, please leave a note here.

2. Heterogeneous Parallel Computing

February 10th, 2014

Heterogeneous Parallel Computing (HPC) is about utilizing a set of very different processors, such as CPUs and GPUs, efficiently and with ease. The HPC research group in the NCCL Lab is actively addressing the following research questions to help realize better heterogeneous parallel computing platforms and applications.

Research Agenda

  • communication problem (see the sketch after this list)
    • between the CPU and accelerators (GPU, VLIW, loop accelerators, ASIC, etc.)
  • memory organization and management problem
    • shared memory? cache? scratch-pad memory?
  • domain specific architecture
    • for example, computer vision or machine learning
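
As a rough illustration of the communication problem listed above, the Python sketch below uses a back-of-envelope cost model; the link bandwidth, data size, and kernel times are assumed numbers, not measurements from any platform: a kernel that runs ten times faster on an accelerator can still lose end to end once host-accelerator transfers are counted.

    # Naive offload cost model: transfer in + accelerator compute + transfer out.
    def offload_time(bytes_moved, accel_compute_s, link_gb_per_s=8.0):
        transfer_s = 2 * bytes_moved / (link_gb_per_s * 1e9)   # both directions
        return transfer_s + accel_compute_s

    def worth_offloading(cpu_compute_s, bytes_moved, accel_compute_s, link_gb_per_s=8.0):
        return offload_time(bytes_moved, accel_compute_s, link_gb_per_s) < cpu_compute_s

    # Assumed numbers: 256 MB working set, 50 ms on the CPU, 5 ms on the accelerator.
    cpu_s, accel_s, nbytes = 0.050, 0.005, 256 * 2**20
    print(worth_offloading(cpu_s, nbytes, accel_s))   # False: the link dominates

Reducing or hiding exactly this kind of transfer cost, for example through shared on-chip memory as in the publications below, is one focus of this research agenda.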

Publications

  • Fast Shared On-Chip Memory Architecture for Efficient Hybrid Computing with CGRAs, Jongeun Lee, Yeonghun Jeong, and Sungsok Seo, Proc. of Design, Automation and Test in Europe (DATE ’13), March, 2013.
  • Software-Managed Automatic Data Sharing for Coarse-Grained Reconfigurable Coprocessors, Toan X. Mai and Jongeun Lee*, Proc. of International Conference on Field-Programmable Technology (FPT ’12), pp. 277-284, December, 2012.
  • CRM: Configurable Range Memory for Fast Reconfigurable Computing, Jongkyung Paek, Jongeun Lee*, and Kiyoung Choi, Proc. of Reconfigurable Architecture Workshop (RAW ’11), pp. 158-165, May, 2011.


3. Reconfigurable Computing

December 9th, 2013

Multi-core and even many-core processors have been successfully used in many domains. Reconfigurable array processors, for instance, have been actively researched and used as on-chip accelerators for stream processing applications and embedded processors, due to their extremely low-power and high-performance execution compared to general-purpose processors or even DSPs (digital signal processors).

However, the main challenge with such accelerator-type reconfigurable processors is compilation, that is, how to map applications onto the architecture. At the heart of this problem lies 2D placement and routing, traditionally recognized as a CAD problem, which is why it is often discussed in the design automation community. Still, the problem needs more research and development effort (such as mature tool chains) before the architecture can see more widespread adoption.
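
To give a flavor of what "mapping applications onto the architecture" involves, here is a deliberately tiny Python sketch of the placement side of the problem; it is a brute-force toy of ours, not the lab's mapping tool: the operations of a small dataflow graph are assigned to PEs of a 2x2 array so that every producer-consumer pair ends up on PEs joined by a nearest-neighbor link, the condition that makes routing possible on such a constrained fabric.

    # Brute-force placement of a dataflow graph onto a 2x2 PE array with
    # nearest-neighbor links only; real CGRA mappers must also handle routing
    # over multiple hops, scheduling in time, and much larger graphs and arrays.
    from itertools import permutations

    PES = [(0, 0), (0, 1), (1, 0), (1, 1)]        # coordinates of the 2x2 array

    def adjacent(p, q):
        return abs(p[0] - q[0]) + abs(p[1] - q[1]) == 1   # one mesh link apart

    def place(ops, edges):
        """Try assignments of ops to distinct PEs; return the first legal one."""
        for perm in permutations(PES, len(ops)):
            placement = dict(zip(ops, perm))
            if all(adjacent(placement[u], placement[v]) for u, v in edges):
                return placement
        return None                                # no legal placement exists

    # Dataflow graph of (a + b) * (a - b): the add and the sub both feed the mul.
    ops = ["add", "sub", "mul"]
    edges = [("add", "mul"), ("sub", "mul")]
    print(place(ops, edges))   # e.g. {'add': (0, 0), 'sub': (1, 1), 'mul': (0, 1)}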

The NCCL Lab is actively pursuing research on this topic, with a few specific goals in mind. We have two funded projects on this topic, partially in collaboration with other labs.


Figure 1: An accelerator-type reconfigurable processor architecture (Bougard et al. ’08).

Research Questions

  • how can we compile ordinary ("legacy") C programs onto coarse-grained reconfigurable architectures?
  • can there be good architectural solutions (such as architecture extensions) that make it much easier to map programs to these architectures ("compiler-friendly architectures")?
  • what are the real bottlenecks to enhancing performance with these processors, and how can we address them?
    • the application-level mapping problem

Publications

  • Compiling Control-Intensive Loops for CGRAs with State-Based Full Predication, Kyuseung Han, Kiyoung Choi, and Jongeun Lee, Proc. of Design, Automation and Test in Europe (DATE ’13), March, 2013.
  • Architecture Customization of On-Chip Reconfigurable Accelerators, Jonghee W. Yoon, Jongeun Lee*, Sanghyun Park, Yongjoo Kim, Jinyong Lee, Yunheung Paek, and Doosan Cho, ACM Transactions on Design Automation of Electronic Systems (TODAES), 18(4), pp. 52:1-52:22, ACM, October, 2013.
  • Improving Performance of Nested Loops on Reconfigurable Array Processors, Yongjoo Kim, Jongeun Lee*, Toan X. Mai, and Yunheung Paek, ACM Transactions on Architecture and Code Optimization (TACO), 8(4), pp. 32:1-32:23, ACM, January, 2012.
  • Exploiting Both Pipelining and Data Parallelism with SIMD Reconfigurable Architecture, Yongjoo Kim, Jongeun Lee*, Jinyong Lee, Toan X. Mai, Ingoo Heo, and Yunheung Paek, Proc. of International Symposium on Applied Reconfigurable Computing (ARC ’12), Lecture Notes in Computer Science, vol. 7199, pp. 40-52, March, 2012.
  • High Throughput Data Mapping for Coarse-Grained Reconfigurable Architectures, Yongjoo Kim, Jongeun Lee*, Aviral Shrivastava, Jonghee W. Yoon, Doosan Cho, and Yunheung Paek, IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems (TCAD), 30(11), pp. 1599-1609, IEEE, November, 2011.
  • Memory Access Optimization in Compilation for Coarse Grained Reconfigurable Architectures, Yongjoo Kim, Jongeun Lee*, Aviral Shrivastava, and Yunheung Paek, ACM Transactions on Design Automation of Electronic Systems (TODAES), 16(4), pp. 42:1-42:27, ACM, October, 2011.



See Research Archive for more.