Memory continues to be a critical bottleneck for AI/ML systems, and keeping the processing pipeline in balance requires continued advances in high-performance memories like HBM and GDDR, as well as mainstream memories like DDR. Emerging memories and new technologies like CXL offer additional possibilities for improving the memory hierarchy. In this panel, we’ll discuss important enabling technologies and key challenges the industry needs to address for memory systems going forward.
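
To make the "balance" point concrete, here is a back-of-envelope sketch in Python (all device figures are illustrative assumptions, not the specs of any particular product) comparing a system's machine balance with the arithmetic intensity of a typical LLM inference kernel:

```python
# Back-of-envelope roofline check: is a kernel compute-bound or memory-bound?
# All numbers below are illustrative assumptions, not real device specs.

PEAK_FLOPS = 200e12   # assumed accelerator peak: 200 TFLOP/s
PEAK_BW = 3.2e12      # assumed HBM bandwidth: 3.2 TB/s

def arithmetic_intensity(flops, bytes_moved):
    """FLOPs performed per byte of DRAM traffic."""
    return flops / bytes_moved

# Machine balance point: kernels below this intensity are memory-bound.
balance = PEAK_FLOPS / PEAK_BW  # ~62 FLOPs per byte

# Example kernel: matrix-vector product (GEMV) during LLM token generation,
# with an N x N weight matrix stored in 16-bit precision.
N = 8192
flops = 2 * N * N        # one multiply-add per weight
bytes_moved = 2 * N * N  # each 2-byte weight read once from DRAM
ai = arithmetic_intensity(flops, bytes_moved)  # ~1 FLOP per byte

attainable = min(PEAK_FLOPS, ai * PEAK_BW)
print(f"machine balance: {balance:.0f} FLOP/B, kernel intensity: {ai:.1f} FLOP/B")
print(f"attainable: {attainable / 1e12:.1f} TFLOP/s of {PEAK_FLOPS / 1e12:.0f} TFLOP/s peak")
# At ~1 FLOP/byte against a ~62 FLOP/byte balance point, the kernel reaches
# only a few percent of peak compute: bandwidth, not FLOPs, sets the ceiling.
```

Under these assumed numbers the kernel sits far below the balance point, which is exactly why higher-bandwidth HBM/GDDR, smarter memory hierarchies, and technologies such as CXL-attached memory are on the panel's agenda.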

Moderator

Steven Woo

Fellow and Distinguished Inventor
Rambus

I was drawn to Rambus to focus on cutting-edge computing technologies. Throughout my 15+ year career, I’ve helped invent, create and develop means of driving and extending performance in both hardware and software solutions. At Rambus, we are solving challenges that are completely new to the industry, arising from deployments that are highly sophisticated and advanced.

As an inventor, I find myself approaching a challenge like a room filled with 100,000 pieces of a puzzle where it is my job to figure out how they all go together – without knowing what it is supposed to look like in the end. For me, the job of finishing the puzzle is as enjoyable as the actual process of coming up with a new, innovative solution.

For example, RDRAM®, our first mainstream memory architecture, was implemented in hundreds of millions of consumer, computing and networking products from leading electronics companies including Cisco, Dell, Hitachi, HP, and Intel. We did a lot of novel things that required inventiveness – we pushed the envelope and delivered state-of-the-art performance without making actual changes to the infrastructure.

I’m excited about the new opportunities as computing is becoming more and more pervasive in our everyday lives. With a world full of data, my job and my fellow inventors’ job will be to stay curious, maintain an inquisitive approach and create solutions that are technologically superior and that seamlessly intertwine with our daily lives.

After an inspiring work day at Rambus, I enjoy spending time with my family, being outdoors, swimming, and reading.

Education

  • Ph.D., Electrical Engineering, Stanford University
  • M.S., Electrical Engineering, Stanford University
  • Master of Engineering, Harvey Mudd College
  • B.S. Engineering, Harvey Mudd College

Panellists

David Kanter

Founder & Executive Director
MLCommons

David co-founded MLCommons, the world leader in building benchmarks for AI, and is its Head of MLPerf. MLCommons is an open engineering consortium with a mission to make AI better for everyone through benchmarks and data. The foundation for MLCommons began with the MLPerf benchmarks in 2018, which rapidly scaled as a set of industry metrics to measure machine learning performance and promote transparency of machine learning techniques. In collaboration with its 125+ members – global technology providers, academics, and researchers – MLCommons is focused on collaborative engineering work that builds tools for the entire AI industry through benchmarks and metrics, public datasets, and measurements for AI Safety. MLCommons’ software projects are generally available under the Apache 2.0 license, and its datasets generally use CC BY 4.0.

Brett Dodds

Senior Director, Azure Memory Devices
Microsoft

Nuwan Jayasena

Fellow
AMD

Nuwan Jayasena is a Fellow at AMD Research, and leads a team exploring hardware support, software enablement, and application adaptation for processing in memory. His broader interests include memory system architecture, accelerator-based computing, and machine learning. Nuwan holds an M.S. and a Ph.D. in Electrical Engineering from Stanford University and a B.S. from the University of Southern California. He is an inventor of over 70 US patents, an author of over 30 peer-reviewed publications, and a Senior Member of the IEEE. Prior to AMD, Nuwan was a processor architect at Nvidia Corp. and at Stream Processors, Inc.

Pre-training Foundation Models is prohibitively expensive and therefore out of reach for many companies, especially when the models are Large Language Models (LLMs). Yet the hope is that Foundation Models will live up to the promise of learning more generally than classical Artificial Intelligence (AI) models. The dream is that, given just a few examples, a Foundation Model can extrapolate a high-level, abstract representation of the problem and learn to accomplish tasks it was never trained to execute. So the question is: how can you lower the cost of fine-tuning pre-trained Foundation Models for your needs? That is what we will discuss in this panel. We will share our personal experience, synthesized into a set of principles, so that you can learn how we found ways to lower the cost of fine-tuning pre-trained Foundation Models across multiple domains.
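
As one concrete illustration of the kind of cost lever the panel will explore, the sketch below uses parameter-efficient fine-tuning (LoRA) with the Hugging Face transformers and peft libraries; the base model, rank, and other hyperparameters are placeholders chosen for illustration, not the panellists' recommended principles.

```python
# Illustrative sketch: cut fine-tuning cost by training small low-rank adapters
# (LoRA) instead of all model weights. Model choice and hyperparameters are
# assumptions for demonstration purposes only.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, TaskType, get_peft_model

base = AutoModelForCausalLM.from_pretrained("gpt2")  # small stand-in model

lora_cfg = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,              # adapter rank: the main knob for trainable-parameter count
    lora_alpha=16,
    lora_dropout=0.05,
)
model = get_peft_model(base, lora_cfg)

# Typically well under 1% of parameters remain trainable, so gradients and
# optimizer state shrink accordingly, lowering memory and compute cost.
model.print_trainable_parameters()
```

The same goal can be pursued with other levers, such as quantized fine-tuning, freezing most layers, prompt or prefix tuning, or distillation into smaller task-specific models; which of these the panellists favour is part of the discussion.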

Moderator

Fausto Artico

Head of Innovation and Data Science
GSK

Fausto holds two PhDs (in Information and Computer Science, respectively), earning his second master’s and PhD at the University of California, Irvine. Fausto also holds multiple certifications from MIT, Columbia University, the London School of Economics and Political Science, the Kellogg School of Management, the University of Cambridge, and soon also from the University of California, Berkeley. He has worked in multi-disciplinary teams and has over 20 years of experience in academia and industry.

As a Physicist, Mathematician, Engineer, Computer Scientist, and High-Performance Computing (HPC) and Data Science expert, Fausto has worked on key projects at European and American government institutions and with key individuals, like Nobel Prize winner Michael J. Prather. After his time at NVIDIA Corporation in Silicon Valley, Fausto worked at the IBM T. J. Watson Research Center in New York on exascale supercomputing systems for the US government (e.g., Lawrence Livermore and Oak Ridge National Laboratories).

Panellists

Lisa Cohen

Director of Data Science for Gemini, Google Assistant, and Search Platforms
Google

Lisa Cohen is Director of Data Science for Gemini (formerly "Bard"), Google Assistant, and Search Platforms. She leads an organization of data scientists at Google responsible for using data to create excellent user experiences across these products, partnering closely with Product, Engineering, and User Experience Research. Formerly, Lisa was Head of Data Science and Engineering for Twitter, helping drive the strategy and direction of the Twitter product through machine learning, metric development, experimentation, and causal analyses. Before Twitter, Lisa led the Azure Customer Growth Analytics organization as part of Microsoft Cloud Data Sciences. Her team was responsible for analyzing OKRs, informing data-driven decisions, and developing data science models to help customers be successful on Azure. Lisa worked at Microsoft for 17 years and also helped develop multiple versions of Visual Studio. She holds bachelor’s and master’s degrees in Applied Mathematics from Harvard. You can follow Lisa on LinkedIn and Medium.

Jeff Boudier

Product Director
Hugging Face

Jeff Boudier is a product director at Hugging Face, creator of Transformers, the leading open-source NLP library. Previously, Jeff was a co-founder of Stupeflix, acquired by GoPro, where he served as Director of Product Management, Product Marketing, Business Development, and Corporate Development.

Helen Byrne

VP, Solution Architect
Graphcore

Helen leads the Solution Architects team at Graphcore, helping innovators build their AI solutions using Graphcore’s Intelligence Processing Units (IPUs). She has been at Graphcore for more than five years, previously leading AI Field Engineering and working in AI Research on problems in distributed machine learning. Before landing in the technology industry, she worked in investment banking. Her background is in Mathematics, and she has an MSc in Artificial Intelligence.

 

(Moderator) Varun Mehta

Executive Director, Head of ESG Data and Technology Product Management
Morgan Stanley

Abstract coming soon...

Wayne Wang

Founder & CEO
Moffett AI

Wayne Wang is the Founder & CEO of Moffett AI and a Silicon Valley expert in digital-analog hybrid circuits with 15 years of experience, principally as a CPU high-speed link architect.

He has several years of experience in semiconductor entrepreneurship in Silicon Valley. He was previously a core architect at Intel and Qualcomm and participated in the development of five generations of Intel CPU processors, with cumulative mass production of over 5 billion units.

Abstract coming soon...

Jia Li

Co-Founder, Chief AI Officer & President
LiveX AI

Jia is Co-founder, Chief AI Officer, and President of LiveX AI. She was elected an IEEE Fellow for Leadership in Large Scale AI. She is co-teaching the inaugural course on Generative AI and Medicine at Stanford University, where she has previously served in multiple roles, including Advisory Board Committee for Nourish, Chief AI Fellow, RWE for Sleep Health, and Adjunct Professor at the School of Medicine. She was the Founding Head of R&D at Google Cloud AI, where she oversaw the development of the full stack of AI products on Google Cloud to power solutions for diverse industries. With a passion for making a greater impact on everyday life, she later became an entrepreneur, building and advising companies with award-winning platforms that address today's greatest challenges. She has served as Mentor and Professor-in-Residence at StartX, advising founders and companies from the Stanford community. She is the Co-founder and Chairperson of HealthUnity Corporation, a 501(c)(3) nonprofit organization, and served briefly at Accenture as a part-time Chief AI Fellow for Generative AI strategy. She also serves as an advisor to the United Nations Children's Fund (UNICEF) and is a board member of the Children's Discovery Museum of San Jose. In 2018 she was selected as a World Economic Forum Young Global Leader, a recognition bestowed on 100 of the world’s most promising business leaders, artists, public servants, technologists, and social entrepreneurs. Before joining Google, she was the Head of Research at Snap, leading the AI/AR innovation effort. She received her Ph.D. from the Computer Science Department at Stanford University.

Krishna Rangasayee

Founder & CEO
SiMa.ai

Krishna is founder and CEO of SiMa.ai™, a machine learning company enabling effortless ML for the Embedded Edge.

Previously, he was the COO of Groq, a machine learning startup. Before that, he spent 18 years at Xilinx, where he was Senior Vice President and GM of Xilinx’s overall business prior to his most recent role as Executive Vice President, Global Sales. Prior to Xilinx, he held various engineering and business roles at Altera Corporation and Cypress Semiconductor. He holds 25+ international patents and has served on the boards of directors of public and private companies.

Abstract coming soon...

Soojung Ryu

CEO
SAPEON

A well-known expert in AI processors, Soojung Ryu leads SAPEON with a mandate to accelerate the company’s growth in the global AI market. She brings more than 25 years of experience leading projects related to NPUs and GPUs.

Before joining SK Telecom as head of the AI accelerator office, Ryu was a University-Industry Collaboration Professor at Seoul National University, where she conducted R&D on NPUs and processing-in-memory (PIM). Earlier, as a Vice President at Samsung Group's R&D hub, she led diverse GPU-related projects. Ryu received her Ph.D. in Electrical & Computer Engineering from the Georgia Institute of Technology.

 

Dr. Michael Capps

CEO/Co-Founder
Diveplane
