About

Artificial Intelligence (AI) and Machine Learning (ML) techniques have become the de facto solution to drive human progress and, more specifically, automation. In recent years, the world's economy has been gravitating towards this industry, and the expectation of growth shows no sign of withering away. These trends have been further accelerated by the COVID-19 pandemic, which paralyzed the world's economy and made evident the need for even more automation to provide safer and more reliable services, as well as to aid in the agile discovery of life-saving drugs and vaccines. To support those kinds of applications, cognitive architectures are designed and deployed to materialize advances in the aforementioned fields. However, although the newest cognitive designs are improving by the day, the number of challenges ahead for cognitive systems is still overwhelming. With existing solutions reaching functional maturity, design considerations are now pivoting to new aspects such as energy scaling, reliable operation, safety guarantees, and security properties. This has led to a wide variety of hardware and software systems, either tailored to specific applications or designed for general-purpose use, which calls for a broader understanding of how the design of cognitive architectures should be tackled.

In that context, the CogArch workshop aims to bring together the know-how necessary to design cognitive architectures from a holistic point of view, addressing design considerations from algorithms to platforms across all the fields that cognitive architectures will soon occupy, from autonomous cars to critical tasks in avionics and space to personalized medicine.

The CogArch workshop has already had four successful editions, bringing together experts and knowledge on the most novel design ideas for cognitive systems. The workshop capitalizes on the synergy between industrial and academic efforts to provide a better understanding of cognitive systems and the key concepts behind their design.

Call for Papers

Hardware and software design considerations are gravitating towards AI applications, which have proven extremely useful in a wide variety of fields, from edge computing in autonomous cars to cloud-based computing for personalized medicine. Recent years have brought a boom in start-ups and novel platforms that constantly offer improvements in performance and accuracy for these applications. As such cognitive architectures evolve, system designers have to incorporate many different considerations, which has led to a very diverse field in terms of available solutions.

The CogArch workshop solicits formative ideas and new product offerings in this general space that cover all the design aspects of cognitive systems, from algorithms improving cognition to design methodologies that improve non-functional considerations like energy efficiency, security, agile design, programmability, or reliability.

  • Algorithms in support of cognitive reasoning: recognition, intelligent search, diagnosis, inference and informed decision-making.
  • Swarm intelligence and distributed architectural support; brain-inspired and neural computing architectures.
  • Agile design for cognitive systems.
  • Prototype demonstrations of state-of-the-art cognitive computing systems.
  • Accelerators and micro-architectural support for artificial intelligence.
  • Approaches to reduce training time and enable faster model delivery.
  • Cloud-backed autonomics and mobile cognition: architectural and OS support thereof.
  • Resilient design of distributed (swarm) mobile AI architectures.
  • Reliability and safety considerations, and security against adversarial attacks in mobile AI architectures.
  • Techniques for improving energy efficiency, battery life extension and endurance in mobile AI architectures.
  • Case studies and real-life demonstrations/prototypes in specific application domains: e.g. smart homes, connected cars and UAV-driven commercial services, architectures in support of AI for healthcare applications, such as medical imaging, drug discovery and smart diagnostics, as well as applications of interest to defense and homeland security.

The workshop shall consist of regular presentations and/or prototype demonstrations by authors of selected submissions. In addition, it will include invited keynotes by eminent researchers from industry and academia as well as interactive panel discussions to kindle further interest in these research topics. Submissions will be reviewed by a workshop Program Committee, in addition to the organizers.

Submitted manuscripts must be in English, up to 2 pages in length (following the same formatting guidelines as the main conference), and must indicate the type of submission: regular presentation or prototype demonstration. Submissions should be made through the following link by January 22nd, 2021 (deadline extended from January 8th).
If you have questions regarding submission, please contact us: info@cogarchworkshop.org

Call for Prototype Demonstrations

CogArch will feature a session where researchers can showcase innovative prototype demonstrations or proof-of-concept designs in the cognitive architecture space. Examples of such demonstrations may include (but are not limited to):

  • Custom ASIC or FPGA-based demonstrations of machine learning, cognitive or neuromorphic architectures.
  • Innovative implementations of state-of-the-art cognitive algorithms/applications, and the underlying software-hardware co-design techniques.
  • Demonstration of end-to-end cognitive systems comprising edge devices backed by a cloud computing infrastructure.
  • Novel designs showcasing the adoption of emerging technologies for the design of cognitive systems.
  • Tools or frameworks to aid analysis, simulation and design of cognitive systems.
Submissions for the demonstration session may be made in the form of a 2-page manuscript highlighting key features and innovations of the prototype demonstration. Proposals accepted for demonstration during the workshop can be accompanied by a poster/short presentation. Authors should explicitly indicate that the submission is for prototype demonstration at submission time.

Important Dates

  • Paper submission deadline: January 22nd, 2021 (deadline extended from January 8th)
  • Notification of acceptance: February 5th, 2021
  • Workshop date: February 28th, 2021

Program Committee

  • Roberto Gioiosa, Pacific Northwest National Laboratory
  • David Trilla, IBM Research
  • Augusto Vega, IBM Research
  • Karthik Swaminathan, IBM Research
  • Alper Buyuktosunoglu, IBM Research
  • Pradip Bose, IBM Research


Keynote Speakers

Federated Learning: Training Healthcare Models in a Privacy-Preserving Manner

G Anthony Reina, M.D. (Chief AI Architect for Health & Life Sciences at Intel Corporation)

Federated learning is a distributed machine learning approach that enables organizations to collaborate on projects without sharing sensitive data, such as patient records, financial data, or classified information. The basic premise behind federated learning is that the model moves to meet the data rather than the data moving to meet the model. Dr. Reina will describe potential security gaps in federated learning and explain how trusted execution environments can help mitigate these threats.
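
For readers unfamiliar with the mechanics, the sketch below illustrates the "model moves to the data" premise with a plain federated-averaging loop over simulated clients. It is only a minimal illustration under assumed settings (hypothetical client datasets, a simple linear model, arbitrary hyperparameters), not Dr. Reina's or Intel's implementation.

```python
# Minimal federated-averaging sketch (illustrative only): each simulated
# "site" trains a local linear model on its own private data, and only the
# model weights are aggregated centrally. All data and settings are made up.
import numpy as np

def local_update(weights, X, y, lr=0.01, epochs=5):
    """One client's training round on private data (simple linear regression)."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)   # mean-squared-error gradient
        w -= lr * grad
    return w

def federated_round(global_weights, clients):
    """The model travels to each client; only weight updates come back."""
    local_weights = [local_update(global_weights, X, y) for X, y in clients]
    sizes = np.array([len(y) for _, y in clients], dtype=float)
    # Weighted average of client models, proportional to local dataset size.
    return np.average(local_weights, axis=0, weights=sizes)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    true_w = np.array([2.0, -1.0])
    # Three clients whose raw data never leaves the "site".
    clients = []
    for n in (40, 60, 80):
        X = rng.normal(size=(n, 2))
        y = X @ true_w + 0.1 * rng.normal(size=n)
        clients.append((X, y))
    w = np.zeros(2)
    for _ in range(50):
        w = federated_round(w, clients)
    print("recovered weights:", w)   # should approach [2.0, -1.0]
```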


Extreme-Scale Deep Neural Networks: The Need for Sparsity and Flexible Hardware

Natalia Vassilieva, Ph.D. (Technical Product Manager at Cerebras Systems)

Advances in Deep Learning over the past several years have demonstrated two paths to better models: scale and algorithmic innovation. Brute-force scaling of model parameter count increases model capacity and, when presented with enough training data, has shown better performance in many domains. However, the advantages of large-scale models come at a price: they require far more compute to be trained than a single traditional processor can deliver. Today, clusters of tens to even thousands of processors are commonly used to train large neural networks. This approach to scaling is not sustainable. We need algorithmic innovations to find more efficient neural network architectures and training methods, which in turn requires more flexible hardware to develop and test novel approaches. In this talk, we will look at the trends of extreme-scale deep learning models, discuss the implications for hardware, and share how the Cerebras CS-1 addresses these requirements for both scale and flexibility of compute.
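
As a concrete (if toy) illustration of the kind of weight sparsity referred to above, the sketch below applies unstructured magnitude pruning to a randomly initialized weight matrix. The matrix size and sparsity level are arbitrary assumptions, and this is one generic technique rather than the specific methods discussed in the talk.

```python
# Toy illustration of unstructured magnitude pruning, one common way to
# induce weight sparsity in a neural network layer (not Cerebras-specific).
import numpy as np

def magnitude_prune(weights, sparsity=0.9):
    """Zero out the smallest-magnitude weights until `sparsity` fraction is zero."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy()
    threshold = np.partition(flat, k - 1)[k - 1]   # k-th smallest magnitude
    return np.where(np.abs(weights) <= threshold, 0.0, weights)

rng = np.random.default_rng(1)
W = rng.normal(size=(1024, 1024))            # a dense layer's weight matrix
W_sparse = magnitude_prune(W, sparsity=0.9)  # keep only the largest ~10%
kept = np.count_nonzero(W_sparse) / W.size
print(f"fraction of weights kept: {kept:.2%}")
```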

Program

Sunday February 28th, 2021
(all times are in Eastern Standard Time)
9:00 - 9:15 Introduction and Welcoming Remarks
9:15 - 9:45 "Optimizing Markov Random Field Inference via Event-Driven Gibbs Sampling on GPUs"
Ramin Bashizade, Xiangyu Zhang, Sayan Mukherjee and Alvin Lebeck (Duke University)
9:45 - 10:15 "Moving Robot Accountability to the Cloud: Effects in Knowledge Representation Processes"
Miguel Á. González-Santamarta, Francisco J. Rodríguez Lera, Francisco Martín Rico, Camino Fernández-Llamas and Vicente Matellán (Universidad de León and Universidad Rey Juan Carlos)
10:15 - 10:45 Coffee Break
10:45 - 11:30 Keynote Talk: "Federated Learning: Training Healthcare Models in a Privacy-Preserving Manner"
G Anthony Reina, M.D. (Chief AI Architect for Health & Life Sciences at Intel Corporation)
11:30 - 12:00 "Achieving Real-Time Object Detection on Mobile Devices with Neural Pruning Search"
Pu Zhao, Wei Niu, Geng Yuan, Yuxuan Cai, Bin Ren, Yanzhi Wang and Xue Lin (Northeastern University and William & Mary)
12:00 - 13:15 Lunch Break
13:15 - 14:00 Keynote Talk: "Extreme-Scale Deep Neural Networks: The Need for Sparsity and Flexible Hardware"
Natalia Vassilieva, Ph.D. (Technical Product Manager at Cerebras Systems)
14:00 - 14:30 "Hyperdimensional Computing and Spectral Learning"
Namiko Matsumoto, Anthony Thomas, Tara Javidi and Tajana Rosing (University of California)
14:30 - 15:00 "Enhancing Hardware Malware Detectors’ Security through Voltage Over-scaling"
Md Shohidul Islam, Ihsen Alouani and Khaled N. Khasawneh (George Mason University, Dhaka University of Engineering & Technology and Université Polytechnique Hauts-de-France)
15:00 - 15:30 Coffee Break
15:30 - 16:00 "ILMPQ : An Intra-Layer Multi-Precision Deep Neural Network Quantization framework for FPGA"
Sung-En Chang, Yanyu Li, Mengshu Sun, Xue Lin and Yanzhi Wang (Northeastern University)
16:00 - 16:30 Panel Discussion: Cloud vs Edge - and the Quest for Secure Federated Learning / Workshop Feedback
Panelists:
G Anthony Reina, Intel
Pradip Bose, IBM Research
Alper Buyuktosunoglu, IBM Research
Karthik Swaminathan, IBM Research
Augusto Vega, IBM Research
Questions from the Audience
16:30 - 16:45 Concluding Remarks

Organizers

Roberto Gioiosa Dr. Gioiosa is a senior researcher in the HPC group and lead of the Scalable and Emerging Technologies team at Pacific Northwest National Laboratory. His current research focuses on hardware/software co-design methodologies, custom AI/ML accelerator designs, and distributed software for heterogeneous systems. Dr. Gioiosa currently leads the DOE co-design center for AI and graph analytics (ARIAA) as well as several other co-design efforts at PNNL. In the past, he worked at LANL, BSC, IBM Watson, and ORNL. Dr. Gioiosa holds a Ph.D. from the University of Rome “Tor Vergata”.

David Trilla is a Post-doctoral Researcher in the Efficient and Resilient Systems group at IBM T.J. Watson Research Center. His current work focuses on agile hardware architecture design. He previously worked at the Barcelona Supercomputing Center on critical embedded real-time systems and time-randomized processors. He holds a Ph.D. degree from the Polytechnic University of Catalonia (UPC), Spain.

Augusto Vega is a Research Staff Member at IBM T. J. Watson Research Center involved in research and development work in the areas of highly-reliable power-efficient embedded designs, cognitive systems and mobile computing. He holds a Ph.D. degree from Polytechnic University of Catalonia (UPC), Spain.

Karthik Swaminathan is a Research Staff Member at IBM T. J. Watson Research Center. His research interests include power-aware architectures, domain-specific accelerators and emerging device technologies in processor design. He is also interested in architectures for approximate and cognitive computing, particularly in aspects related to their reliability and energy efficiency. He holds a Ph.D. degree from Penn State University.

Alper Buyuktosunoglu is a Research Staff Member at IBM T. J. Watson Research Center. He has been involved in research and development work in support of IBM Power Systems and IBM z Systems in the areas of high-performance, reliable, and power-aware computer architectures. He holds a Ph.D. degree from the University of Rochester.

Pradip Bose is a Distinguished Research Staff Member and manager of Efficient and Resilient Systems at IBM T. J. Watson Research Center. He has over thirty-three years of experience at IBM and was a member of the pioneering RISC superscalar project at IBM (a precursor to the first RS/6000 system product). He holds a Ph.D. degree from the University of Illinois at Urbana-Champaign.

Registration

CogArch will be held in conjunction with the 27th International Symposium on High-Performance Computer Architecture (HPCA 2021). Refer to the main venue to continue with the registration process.

Event Location

Held Virtually