Neuromorphic Research Seminar in Fall 2015


a.k.a. Design Automation Group seminar.
  • When: Thursday evenings (7pm by default)
  • Where: #401-11 or #511


Day 1, Tue 10/20, Seokhyeong, Jongeun, #401-11


Day 2, Thu 10/29, Atul, Hyeonuk, #401-11

  1. (Zhang 2015 fpga) Optimizing FPGA-based Accelerator Design for Deep Convolutional Neural Networks.
  2. (Chippa 2014 islped) StoRM: A Stochastic Recognition and Mining Processor.

Day 3, Tue 11/10, Yesung Kang, Sunmin Kim, #401-11

  1. Approximate multiplier (Yesung Kang)
    • C. Liu, J. Han, and F. Lombardi, "A low-power, high-performance approximate multiplier with configurable partial error recovery," in Proc. Design, Automation and Test in Europe (DATE), Mar. 2014, pp. 1-4.
    • Y.-H. Chen, "An Accuracy-Adjustment Fixed-Width Booth Multiplier Based on Multilevel Conditional Probability," IEEE Trans. Very Large Scale Integration (VLSI) Systems, vol. 23, no. 1, pp. 203-207, Jan. 2015.
    • G. Zervakis, S. Xydis, K. Tsoumanis, D. Soudris, and K. Pekmestzi, "Hybrid approximate multiplier architectures for improved power-accuracy trade-offs," in Proc. Int'l Symposium on Low Power Electronics and Design (ISLPED), Jul. 2015, pp. 79-84.
  2. Neuromorphic design (Sunmin Kim)
    • S. Sinha, J. Suh, B. Bakkaloglu, and Y. Cao, "Workload-aware neuromorphic design of low-power supply voltage controller," in Proc. Int'l Symposium on Low-Power Electronics and Design (ISLPED), Aug. 2010, pp. 241-246.
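A common thread in the fixed-width multiplier papers above is truncation: a width x width multiply produces 2 x width result bits, and a fixed-width design keeps only the upper half, dropping the least-significant partial-product columns to save roughly half the adder array. A minimal Python model of plain truncation (illustrative only; the cited papers add error-compensation schemes on top of this baseline):

```python
def fixed_width_mul(a, b, width=8):
    """Fixed-width (truncated) multiplier sketch: an exact width x width
    multiply produces 2*width bits; a fixed-width design keeps only the
    upper `width` bits. Here we model the truncation by zeroing every
    partial-product bit that falls below column `width`."""
    low_mask = (1 << width) - 1
    acc = 0
    for i in range(width):
        if (b >> i) & 1:
            pp = a << i              # partial-product row i
            acc += pp & ~low_mask    # keep only columns >= width
    return acc >> width              # upper half of the product
```

The dropped columns make the result an underestimate of the exact upper half; the papers above differ mainly in how they estimate and add back that truncation error.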

Day 4, Thu 11/26, Hyeonuk, Sangyun, #411

  1. FPGA implementation of deep belief network (Hyeonuk Sim)
  2. Hardware implementation of Radial Basis Function (Sangyun Oh)

Day 5, Wed 12/09, Jaemin Lee, Jae-woo Kim, #511

  1. CMOS implementation of Spiking Neuron Network (Jaemin Lee)
    • A 45nm CMOS Neuromorphic Chip with a Scalable Architecture for Learning in Networks of Spiking Neurons, CICC 2011.
    • Digital CMOS Neuromorphic Processor Design Featuring Unsupervised Online Learning.
    • First digital STDP (Spike-Timing-Dependent Plasticity) chip
  2. Error resilience (Jae-woo Kim)
    • Energy Efficient Approximate Arithmetic for Error Resilient Neuromorphic Computing, TVLSI, 2015.
    • An energy efficient approximate adder with carry skip for error resilient neuromorphic VLSI systems, ICCAD, 2013.
    • 2.4× faster and 43% more energy-efficient (in EDP)
    • Structurally similar to a carry-lookahead adder, but with approximation applied to the carry prediction
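The last two notes above can be sketched generically: split the operands into blocks, and predict each block's carry-in from only a few preceding bit positions instead of waiting for the full carry chain. A minimal Python model of this speculative-carry idea (parameter names and structure are illustrative, not the ICCAD 2013 paper's exact design):

```python
def approx_add(a, b, width=16, block=4, lookahead=4):
    """Block-based approximate adder: the carry into each block is
    predicted by rippling through only `lookahead` preceding bit
    positions (assumed carry-free below that), which shortens the
    critical path at the cost of occasional errors."""
    result = 0
    for lo in range(0, width, block):
        # Predict carry-in from the `lookahead` bits just below this block.
        carry = 0
        if lo > 0:
            for i in range(max(0, lo - lookahead), lo):
                ai = (a >> i) & 1
                bi = (b >> i) & 1
                carry = (ai & bi) | (ai & carry) | (bi & carry)  # majority
        # Exact addition within the block, using the predicted carry-in.
        mask = (1 << block) - 1
        s = ((a >> lo) & mask) + ((b >> lo) & mask) + carry
        result |= (s & mask) << lo
    return result
```

With `lookahead` equal to the full width the adder is exact; shrinking it shortens the carry path and introduces errors only when a long carry chain crosses the lookahead window.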

Day 6, Wed 12/30 (4pm), Atul, Dong, #511

  1. Efficient FPGA Acceleration of Convolutional Neural Networks Using Logical-3D Compute Array (Atul Rahman, DATE 2016 dry-run)
  2. Deep Learning Accelerator Architecture: SIMD on FPGA (Dong Nguyen, Current Work-in-progress)

Day 7, Thu 01/07, Yesung Kang, Jaemin Lee, #511

  1. Approximate Synthesis Algorithm for Energy-Efficient FIR Filters, ongoing work (Yesung Kang)
  2. Design Methodology for Error-Resilient Circuits, ongoing work (Jaemin Lee)

Day 8, Thu 01/14, Hyeonuk Sim, Sangyun Oh, #511

  1. Design and implementation of CNN accelerator (Sangyun Oh)
  2. VLSI implementation of Stochastic DNN (Hyeonuk Sim)

Day 9, Thu 01/21, Sunmin Kim, #511

  1. Retraining-Based Timing Error Mitigation for Hardware Neural Networks by Deng, DATE 2015
  2. Optimizing Stochastic Circuits for Accuracy-Energy Tradeoffs by Alaghi, ICCAD 2015 (Sunmin Kim)

Day 10, Thu 02/04, Atul Rahman, Seunghwan Lee, #511

  1. "Fast training of convolutional networks through FFTs", in ICLR 2014 by M. Mathieu et al. (Atul Rahman)
  2. Internship summary (Seunghwan Lee)
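The FFT paper above (item 1) builds on the convolution theorem: convolution in the spatial domain is pointwise multiplication in the frequency domain, so the transforms can be computed once and reused across many kernels. A one-dimensional NumPy sketch of the trick:

```python
import numpy as np

def conv_via_fft(x, k):
    """Linear convolution via the convolution theorem:
    zero-pad, FFT both signals, multiply pointwise, inverse FFT.
    Padding to len(x)+len(k)-1 avoids circular wrap-around."""
    n = len(x) + len(k) - 1
    X = np.fft.rfft(x, n)
    K = np.fft.rfft(k, n)
    return np.fft.irfft(X * K, n)
```

The paper applies the same identity to 2-D feature maps, amortizing each map's transform over every kernel it meets, which is where the training speedup comes from.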

Day 11, Thu 02/25, Jaewoo Kim, Yeongkwang Jeong, #511

  1. "AxNN: Energy-efficient neuromorphic systems using approximate computing" (Jaewoo Kim)
  2. Internship presentation - ISPD 2016 contest: FPGA placement (Yeongkwang Jeong)

Day 11, Thu 03/10, Dong Nguyen, Sangyun Oh, #511

  1. "Communication-aware Mapping of Stream Graphs for Multi-GPU Platforms" (Dong Nguyen)
    • CGO (Int'l Symp. on Code Generation and Optimization) 2016 dry-run
    • The paper PDF will be made available in the Dropbox
  2. "Throughput-Optimized OpenCL-based FPGA Accelerator for Large-Scale Convolutional Neural Networks" (Sangyun Oh)

Day 12, Thu 03/24, #511, 5:30pm

  1. "RENO: A High-efficient Reconfigurable Neuromorphic Computing Accelerator Design" (DAC 2015)
  2. "Conditional Deep Learning for Energy-Efficient and Enhanced Pattern Recognition" (arXiv 2015)

Day 13, Thu 03/31, Atul Rahman, Dong Nguyen, #511

  1. "The Neuro Vector Engine: Flexibility to Improve Convolutional Net Efficiency for Wearable Vision" by M. Peemen et al. (DATE 2016)

Day 14, Thu 04/07, Sunmin Kim, #511

  1. "Exploration of self-healing circuits for timing resilient design using emerging memristor devices"

Day 15, Thu 04/28, Hyeon Uk Sim, Sangyun Oh, #511

  1. "A New Stochastic Computing Methodology for Efficient Neural Network Implementation," by V. Canals et al., IEEE Trans. NN, 2015
  2. "A 240 G-ops/s Mobile Coprocessor for Deep Neural Networks," by V. Gokhale et al., CVPR Workshop, 2014
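Item 1 above builds on stochastic computing, where a value in [0, 1] is encoded as the probability of seeing a 1 in a random bitstream; with that unipolar coding, multiplication reduces to a single AND gate per bit position. A small software model (stream length and seed are illustrative):

```python
import random

def to_stream(p, n, rng):
    """Encode probability p in [0, 1] as an n-bit unipolar stochastic stream."""
    return [1 if rng.random() < p else 0 for _ in range(n)]

def sc_mul(x, y, n=4096, seed=0):
    """Multiply two unipolar-coded values: AND two independent streams
    and read the result back as the fraction of 1s."""
    rng = random.Random(seed)
    sx = to_stream(x, n, rng)
    sy = to_stream(y, n, rng)
    return sum(a & b for a, b in zip(sx, sy)) / n
```

Accuracy improves only with stream length (error shrinks roughly as 1/sqrt(n)), which is the central area-versus-latency trade-off these stochastic designs navigate.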

Day 16, Thu 05/12, Jaemin Lee, Jae-woo Kim, #511

  1. "Multiplier-less Artificial Neurons Exploiting Error Resiliency for Energy-Efficient Neural Computing", DATE, 2016.
  2. "Significance Driven Hybrid 8T-6T SRAM for Energy Efficient Synaptic Storage in Artificial Neural Networks", DATE, 2016.

Day 17, Thu 05/25, Dong Nguyen, Atul Rahman, #511

  1. "Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding," ICLR, 2016.
  2. Overview of Vivado HLS -- what you need to know to do real designs with the Xilinx HLS flow.
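Deep Compression (item 1 above) chains three stages: magnitude pruning, weight sharing via k-means quantization, and Huffman coding of the resulting indices. A rough NumPy sketch of the first two stages (simplified; the real pipeline interleaves retraining between stages, which is omitted here):

```python
import numpy as np

def prune_and_quantize(w, sparsity=0.5, bits=2):
    """Sketch of two Deep Compression stages: magnitude pruning
    (zero out the smallest-magnitude fraction of weights), then
    weight sharing (cluster survivors into 2**bits shared values
    with a simple 1-D k-means)."""
    w = np.asarray(w, dtype=float).copy()
    # --- pruning: zero the smallest-magnitude `sparsity` fraction
    thresh = np.quantile(np.abs(w), sparsity)
    w[np.abs(w) < thresh] = 0.0
    # --- quantization: k-means over the surviving weights
    nz = w[w != 0]
    if nz.size == 0:
        return w
    k = min(2 ** bits, nz.size)
    centers = np.linspace(nz.min(), nz.max(), k)
    for _ in range(20):
        idx = np.argmin(np.abs(nz[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(idx == j):
                centers[j] = nz[idx == j].mean()
    w[w != 0] = centers[idx]   # snap each survivor to its shared value
    return w
```

After this, each surviving weight is one of at most 2**bits shared values, so it can be stored as a short index into a small codebook -- the input Huffman coding then exploits.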

Day 18, Thu 06/22, Sangyun Oh, Hyeon Uk Sim, #511

  1. "Survey of Accelerating Convolutional Neural Networks on FPGA," compiled and prepared by Sangyun.
  2. "Survey on Stochastic Computing Applied to Deep Neural Networks," compiled and prepared by Hyeon Uk.

Day 19, Thu 06/29, #511

  1. "Optimal Design of JPEG Hardware Under the Approximate Computing Paradigm," DAC 2016.
  2. "Serial T0: Approximate Bus Encoding for Energy-Efficient Transmission of Sensor Signals," DAC 2016.

Day 20, Mon 07/04, Minsik Cho from IBM T. J. Watson Research Center, #311

  1. Accelerating Machine-learning and Big Data Workload on Heterogeneous Computing Platform

Day 21, Wed 07/20, Dong Nguyen, Hyeonuk Sim, #511

  1. "A New Learning Method for Inference Accuracy, Core Occupation, and Performance Co-optimization on TrueNorth Chip," DAC 2016.
  2. "Statistical Fault Injection for Impact-Evaluation of Timing Errors on Application Performance," DAC 2016.

Day 22, Thu 08/04 (1pm), Jaewoo Kim, #511

  1. "SALSA: Systematic Logic Synthesis of Approximate Circuits," DAC 2012.
  2. "Simplifying Deep Neural Networks for Neuromorphic Architectures," DAC 2016.

Day 23, Thu 08/11, Dong Nguyen, Sangyun Oh, #511

  1. "C-Brain: A Deep Learning Accelerator that Tames the Diversity of CNNs through Adaptive Data-level Parallelization," DAC 2016.
  2. "Data Cache Prefetching via Context Directed Pattern Matching for Coarse-Grained Reconfigurable Arrays," DAC 2016.

Day 24, Thu 08/18, Sunmin Kim, #511

  1. "Approximation through Logic Isolation for the Design of Quality Configurable Circuits," DATE 2016.

Day 25, Thu 08/25, Sangyun Oh, Daewoo Kim, #511

  1. "nZDC: A Compiler Technique for Near Zero Silent Data Corruption," DAC 2016.
Topic revision: r38 - 24 Aug 2016, SangyunOh