April 3rd 2022, Seoul, South Korea (virtual)
In conjunction with the 28th International Symposium on
High-Performance Computer Architecture (HPCA 2022)
Artificial Intelligence (AI) and Machine Learning (ML) techniques have become the de facto solution for driving human progress and, more specifically, automation. In recent years, the world's economy has been gravitating towards the AI/ML domain (from both industrial and scientific perspectives), and this growth shows no signs of withering away. These trends have been further accelerated by the ongoing global COVID-19 pandemic, which paralyzed the world's economy and made evident the need for even more automation, both to provide safer and more reliable services and to aid in the agile discovery of life-saving drugs and vaccines, all with appropriate security and data-privacy safeguards in place. To support such applications, "cognitive" (AI/ML) architectures are designed and deployed to materialize advances in the aforementioned fields. However, although the newest cognitive designs are improving by the day, the number of challenges ahead for these systems is still overwhelming. With existing solutions reaching functional maturity, design considerations are now pivoting to new aspects such as energy scaling, reliable operation, safety guarantees, and security and data-privacy properties. When it comes to security and data privacy in the AI/ML context in particular, Homomorphic Encryption (HE) has emerged as a highly promising approach. HE is arguably the holy grail of data-secure computing, as it provides security and privacy guarantees by allowing computation on encrypted private data without the need for decryption. This is particularly enticing for applications in the medical sciences, natural language processing, and autonomous and connected vehicles, as well as in traditional domains such as banking, where HE could drastically reduce the frequency of data breaches and thus guarantee the privacy of highly sensitive user data.
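The core idea of "computation on encrypted data without decryption" can be illustrated with a toy additively homomorphic scheme in the Paillier style. This is only a didactic sketch with tiny parameters, not a fully homomorphic scheme or anything deployable; real systems use much larger primes and schemes such as CKKS or Ring-GSW discussed below.

```python
import math
import random

# Toy Paillier-style additively homomorphic encryption.
# WARNING: tiny parameters for illustration only; utterly insecure.
p, q = 17, 19                     # real deployments use 1024-bit+ primes
n = p * q
n2 = n * n
g = n + 1                         # standard choice of generator
lam = math.lcm(p - 1, q - 1)
mu = pow(lam, -1, n)              # works because L(g^lam mod n^2) = lam when g = n + 1

def L(x):
    return (x - 1) // n

def encrypt(m):
    # Pick randomness r coprime to n; ciphertext = g^m * r^n mod n^2.
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

# Homomorphic property: multiplying ciphertexts adds the plaintexts.
c1, c2 = encrypt(5), encrypt(7)
c_sum = (c1 * c2) % n2            # the server never sees 5 or 7
assert decrypt(c_sum) == 12
```

The server computing `c_sum` learns nothing about the operands; only the key holder can decrypt the result. Fully homomorphic schemes extend this to multiplication as well, at the performance cost that the talks below address.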
In this context, this edition of the CogArch workshop aims to bring together the necessary know-how to design cognitive architectures from a holistic point of view, tackling all of their design considerations, from algorithms to platforms, across the many fields that cognitive architectures will soon occupy: from autonomous cars to critical tasks in avionics, finance, space travel, and even personalized medicine. In addition, this year's edition solicits contributions on the security and data-privacy-preserving aspects of AI/ML and related application domains.
The CogArch workshop has already had five successful editions, bringing together experts and knowledge on the most novel design ideas for cognitive systems. This workshop capitalizes on the synergy between industrial and academic efforts to provide a better understanding of cognitive systems and the key concepts of their design.
Hardware and software design considerations are gravitating towards AI applications, as these have proven extremely useful in a wide variety of fields, from edge computing in autonomous cars to cloud-based computing for personalized medicine. Recent years have brought about a boom in start-ups and novel platforms that constantly offer improvements in performance and accuracy for such applications. As these cognitive architectures evolve, system designers must incorporate many different considerations, with security and data privacy being key ones today. The emergence of different security and data-privacy approaches for AI/ML applications, including but not limited to Homomorphic Encryption (HE) techniques, is also leading to a very diverse set of (hardware and software) design decisions and solutions.
The CogArch workshop solicits formative ideas and new product offerings in this general space, covering all design aspects of cognitive systems, with particular focus this year on the security and data-privacy considerations of AI/ML. Topics of interest include (but are not limited to):
The workshop shall consist of regular presentations and/or prototype demonstrations by authors of selected submissions. In addition, it will include invited keynotes by eminent researchers from industry and academia as well as interactive panel discussions to kindle further interest in these research topics. Submissions will be reviewed by a workshop Program Committee, in addition to the organizers.
Submitted manuscripts must be written in English and be no longer than 2 pages (following the same formatting guidelines as the main conference), indicating the type of submission: regular presentation or prototype demonstration. Submissions should be uploaded to the following link by February 4th, 2022 (extended from the original January 28th deadline).
If you have questions regarding submission, please contact us: firstname.lastname@example.org
CogArch will feature a session where researchers can showcase innovative prototype demonstrations or proof-of-concept designs in the cognitive architecture space. Examples of such demonstrations may include (but are not limited to):
We introduce the approximate homomorphic encryption scheme HEaaN, which supports fixed-point operations via addition, multiplication, and rescaling of plaintext vectors. This feature significantly accelerates approximate arithmetic over real numbers (compared with bitwise or finite-field arithmetic) and enables machine learning on encrypted data. We then present recent developments in the fast implementation of HEaaN and its application to logistic regression, neural networks, and statistics on encrypted data. We conclude with several real-world applications, ongoing standardization activities, and some possible future directions.
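The fixed-point arithmetic and rescaling that the abstract describes can be illustrated outside of any cryptography: plaintext reals are encoded as scaled integers, multiplication doubles the scale, and rescaling divides the scale back down (analogous to how HEaaN/CKKS rescales ciphertexts after each multiplication to keep noise and magnitude manageable). A minimal non-cryptographic sketch, with all names illustrative:

```python
# Toy fixed-point encoding mimicking CKKS-style rescaling (no encryption here).
SCALE_BITS = 20
S = 1 << SCALE_BITS               # scaling factor Delta = 2^20

def encode(x):
    # Real number -> integer at scale S (what a CKKS plaintext slot holds).
    return round(x * S)

def decode(v, scale=S):
    return v / scale

a, b = encode(3.25), encode(1.5)

# Addition keeps the scale; multiplication doubles it (scale becomes S*S).
total = a + b
prod = a * b

# Rescaling: divide out one factor of S so the result is back at scale S.
# In CKKS this step also truncates noise and drops a modulus level.
prod_rescaled = prod >> SCALE_BITS

assert decode(total) == 3.25 + 1.5
assert abs(decode(prod_rescaled) - 3.25 * 1.5) < 1e-4
```

Without the rescaling step the scale would grow as S^2, S^4, ... with each multiplication level, which is exactly the blow-up that HEaaN's rescaling operation is designed to prevent.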
The increasing amount of data and the growing complexity of problems have resulted in an ever-growing reliance on cloud computing. However, many applications, most notably in healthcare, finance, and defense, demand security and privacy guarantees that today's solutions cannot fully address. Fully homomorphic encryption (FHE) raises the bar of today's solutions by adding confidentiality of data during processing: it allows computation on fully encrypted data without the need for decryption, thus fully preserving privacy. To enable processing encrypted data at usable levels of classic security, e.g., 128-bit, the encryption procedure introduces noticeable data-size expansion: the ciphertext is much bigger than the native data it encrypts. In this talk, we present MemFHE, the first accelerator of both the client and the server for the latest Ring-GSW (Gentry, Sahai, and Waters) based homomorphic encryption schemes using Processing In Memory (PIM). PIM alleviates the data-movement issues associated with large FHE-encrypted data, while providing the in-situ execution and extensive parallelism needed for FHE's polynomial operations. While the client-PIM can homomorphically encrypt and decrypt data, the server-PIM can process homomorphically encrypted data without decryption. Our server-PIM is pipelined and designed to provide flexible bootstrapping, allowing two encryption techniques and various FHE security levels based on application requirements. We evaluate our design at various security levels and compare it with state-of-the-art CPU implementations of Ring-GSW based FHE. Our system is up to 20k× (265×) faster than a CPU (GPU) for FHE arithmetic operations and provides on average 2007× higher throughput than the state of the art when implementing learning algorithms with FHE.
Enabling the processing of encrypted data raises the bar of confidentiality in existing security solutions and opens the door to new applications. It preserves individuals' privacy and market competitiveness while sustaining societal and economic growth through data sharing, collaboration, and artificial intelligence. Homomorphic Encryption (HE) is a unique family of cryptographic methods for processing encrypted data. HE applications can reduce the risk of third-party data leakage at the processing node while preserving both data ownership and lifecycle. However, the performance gap of even the most efficient HE schemes hinders meaningful HE applications and the adoption of the technology: HE applications can be a million times slower than the corresponding unencrypted applications on existing hardware architectures. Other barriers to adoption include the lack of development tools to reduce the non-recurring engineering costs of building HE applications, and the lack of international standards. Innovation in hardware architecture is the first step in bridging the performance gap and setting directions for meaningful technology adoption. In this talk, I will share Intel's innovation from theory to algorithms down to hardware architecture to enable the adoption of processing encrypted data with HE. Jointly with Microsoft, our Standards & Industry Organization partners, and academic partners, we are developing novel HE platforms comprising revolutionary hardware, software libraries, development tools, and applications to make HE technologies accessible, performant, and cost-effective. We are building an ecosystem that can sustain the exponential growth of HE-based technologies.
CircLayer is a circuit abstraction layer that is part of the HELayers end-to-end framework for writing high-level fully homomorphic encryption code. CircLayer lets researchers apply optimizations directly at the circuit level. In addition, it gives control over the scheduling decisions made when executing the circuit. This makes CircLayer an ideal choice for researchers and developers who are developing new optimization and scheduling algorithms.
Convergence is driving the emergence of tightly integrated scientific workflows that combine physical simulation, machine learning, and analytics. However, such workflows are not well-served by existing computing capabilities that separate HPC and AI/ML computing paradigms into distinct ecosystems. While co-processor accelerators such as GPUs have been successfully applied to both HPC and AI/ML workloads, software stacks, programming languages, and programming models are still largely incompatible. Additionally, many scientific machine learning workloads exhibit characteristics that hamper their performance on GPU throughput-oriented architectures. In this talk, we discuss our vision of the codesign process, as well as ongoing efforts at Pacific Northwest National Laboratory in codesigning hardware and software stacks to support this notion of convergence and the next generation of scientific computing.
|Sunday April 3rd, 2022|
(all times are in Eastern Daylight Time)
|1:30 - 1:45 PM||Introduction and Welcoming Remarks|
|1:45 - 2:30 PM||Invited Talk: "CircLayer — Circuit Optimization & Scheduling Made Easy"
Hayim Shaul (IBM Research)
|2:30 - 3:00 PM||"A Programmable Accelerator for Streaming Automatic Speech Recognition on Edge Devices"
Dennis Pinto, Jose María Arnau Montañés and Antonio González Colás (Universitat Politècnica de Catalunya)
|3:00 - 3:15 PM||Break|
|3:15 - 4:00 PM||Invited Talk: "End-to-End Learning with Fully Homomorphic Encryption in Memory"
Tajana Šimunić Rosing (University of California, San Diego)
|4:00 - 4:30 PM||"Low-Latency Convolutional Layer Computation Under Homomorphic Encryption"
Zhi Ming Chua, Christos-Savvas Bouganis and Peter Y. K. Cheung (Imperial College London)
|4:30 - 4:45 PM||Break|
|4:45 - 5:30 PM||Invited Talk: "Homomorphic Computing, with a Focus on Hardware-Accelerated Homomorphic Encryption"
Rosario Cammarota (Intel)
|5:30 - 6:15 PM||Invited Talk: "Codesigning the Next Generation of Intelligent Computing Systems"
Kevin Barker (Pacific Northwest National Laboratory)
|6:15 - 6:30 PM||Break|
|6:30 - 7:00 PM||"EPIC: Efficient Packing for Inference using Cheetah"
Sarabjeet Singh, Shreyas Singh and Rajeev Balasubramonian (University of Utah)
|7:00 - 7:45 PM||Invited Talk: "Approximate Homomorphic Encryption and Privacy Preserving Machine Learning"
Jung Hee Cheon (Seoul National University / CryptoLab Inc.)
|7:45 PM||Concluding Remarks|
Roberto Gioiosa is a senior researcher in the HPC group and lead of the Scalable and Emerging Technologies team at Pacific Northwest National Laboratory. His current research focuses on hardware/software co-design methodologies, custom AI/ML accelerator designs, and distributed software for heterogeneous systems. Dr. Gioiosa currently leads the DOE co-design center for AI and graph analytics (ARIAA) as well as several other co-design efforts at PNNL. In the past, Dr. Gioiosa worked at LANL, BSC, IBM Watson, and ORNL. Dr. Gioiosa holds a Ph.D. from the University of Rome "Tor Vergata".
David Trilla is a post-doctoral researcher at IBM T. J. Watson Research Center. He has worked on critical embedded real-time systems, and his current research interests include security and agile hardware development. He obtained his Ph.D. at the Barcelona Supercomputing Center (BSC), awarded by the Polytechnic University of Catalonia (UPC), Spain.
Subhankar Pal is a Postdoctoral Researcher in the Efficient and Resilient Systems group at IBM T. J. Watson Research Center. His current research is focused on hardware acceleration for domain-specific and energy efficient systems. He holds a Ph.D. and M.Sc. from the University of Michigan. His Ph.D. thesis looked at designing a reconfigurable, software-defined hardware solution that balances programmability with energy efficiency. Prior to that, Subhankar was with NVIDIA, where he worked on pre-silicon verification and bring-up of multiple generations of GPUs.
Saransh Gupta is a Research Staff Member in the Storage Systems Research group at IBM Research Almaden. He is involved in research and development in the areas of non-volatile and storage-class memories, emerging interconnect technologies, and compute-enabled memory hierarchies. He holds a Ph.D. degree from the University of California San Diego.
Karthik Swaminathan is a Research Staff Member at IBM T. J. Watson Research Center. His research interests include power-aware architectures, domain-specific accelerators and emerging device technologies in processor design. He is also interested in architectures for approximate and cognitive computing, particularly in aspects related to their reliability and energy efficiency. He holds a Ph.D. degree from Penn State University.
Alper Buyuktosunoglu is a Research Staff Member at IBM T. J. Watson Research Center. He has been involved in research and development work in support of IBM Power Systems and IBM z Systems in the area of high performance, reliability and power-aware computer architectures. He holds a Ph.D. degree from University of Rochester.
Pradip Bose is a Distinguished Research Staff Member and manager of Efficient and Resilient Systems at IBM T. J. Watson Research Center. He has over thirty-three years of experience at IBM, and was a member of the pioneering RISC superscalar project at IBM (a precursor to the first RS/6000 system product). He holds a Ph.D. degree from the University of Illinois at Urbana-Champaign.
Nir Drucker is a Research Staff Member at IBM Research (Haifa, Israel) in the AI security team. He holds a Ph.D. in applied mathematics (cryptography) from the University of Haifa. He worked for 3.5 years as a Senior Applied Scientist at AWS (cryptography) and for 8 years as a Software Developer at Intel. His research interests include applied cryptography and applied security.
Augusto Vega is a Research Staff Member at IBM T. J. Watson Research Center involved in research and development work in the areas of highly-reliable power-efficient embedded designs, cognitive systems and mobile computing. He holds a Ph.D. degree from Polytechnic University of Catalonia (UPC), Spain.
The AI and Robotics Timeline from 1939 to date.
The TED Talks with a variety of talks on the topic of AI.
IBM Watson in action in this on-line demo using deep learning algorithms.
The AI and Cognitive Computing portal with the latest research activities conducted by IBM on AI.
CogArch will be held in conjunction with the 28th International Symposium on High-Performance Computer Architecture (HPCA 2022). Refer to the main venue to continue with the registration process.