Deep Learning Compiler Engineer for Ryzen AI NPU

Sep 06, 2024
San Jose, United States
Intermediate
Full time
Office work


WHAT YOU DO AT AMD CHANGES EVERYTHING

We care deeply about transforming lives with AMD technology to enrich our industry, our communities, and the world. Our mission is to build great products that accelerate next-generation computing experiences – the building blocks for the data center, artificial intelligence, PCs, gaming and embedded. Underpinning our mission is the AMD culture. We push the limits of innovation to solve the world’s most important challenges. We strive for execution excellence while being direct, humble, collaborative, and inclusive of diverse perspectives. 

AMD together we advance_




THE ROLE:

We are looking for a talented Machine Learning (ML) Compiler Software Engineer to join our growing team in the AI group and play a crucial role in developing the software toolset that deploys cutting-edge ML models on AMD’s XDNA Neural Processing Units (NPUs). You will be responsible for designing, implementing, and optimizing compilers that translate generative-AI inference models such as SDXL-Turbo, Llama2, and Mistral into low-level code for specialized hardware architectures. Your work will directly impact the efficiency, scalability, and reliability of our ML applications.

 

THE PERSON:

If you thrive in a fast-paced environment and love working on cutting-edge machine learning inference, this role is for you.

 

RESPONSIBILITIES: 

  • Design and develop novel algorithms for tiling and mapping quantized ML workloads on application-specific hardware platforms (a minimal illustrative sketch follows this list). 

  • Analyze and transform intermediate representations of ML models (computational graphs) for efficient execution. 

  • Collaborate with Architecture and runtime software teams to understand optimization requirements and translate them into effective compiler strategies. 

  • Collaborate with kernel developers to understand tiling requirements and define dataflow and buffer-allocation schemes. 

  • Develop back-end optimization passes that convert high-level representations into driver calls. 

  • Implement compiler optimizations for performance, resource usage, and compute efficiency. 

  • Develop and maintain unit tests and integration tests for the compiler to support different generations of HW architectures. 

  • Enable detailed profiling and debugging tools for analyzing performance bottlenecks and deadlocks in the dataflow schemes. 

  • Stay up-to-date on the latest advancements in ML compiler technology and hardware architectures. 
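To make the tiling and mapping work above concrete, here is a minimal, self-contained Python sketch of the kind of problem involved: choosing tile sizes for a quantized matmul so that the operand and accumulator tiles fit a fixed on-chip buffer budget, then enumerating the resulting tile loop. The buffer size, data types, and greedy search below are illustrative assumptions, not actual XDNA NPU parameters or AMD compiler algorithms.

```python
from itertools import product

# Hypothetical on-chip buffer budget per compute tile, in bytes.
# Illustrative assumption only, not a real XDNA parameter.
LOCAL_BUFFER_BYTES = 64 * 1024

def tile_footprint(tm: int, tn: int, tk: int) -> int:
    """Bytes for one int8 matmul tile: A (tm x tk), B (tk x tn),
    plus an int32 accumulator tile C (tm x tn)."""
    return tm * tk + tk * tn + 4 * tm * tn

def choose_tiling(M: int, N: int, K: int):
    """Greedy search over power-of-two tile sizes: keep the largest
    (tm, tn, tk) whose working set still fits the local buffer."""
    best = None
    for tm, tn, tk in product([16, 32, 64, 128], repeat=3):
        if tile_footprint(tm, tn, tk) > LOCAL_BUFFER_BYTES:
            continue
        work = min(tm, M) * min(tn, N) * min(tk, K)
        if best is None or work > best[0]:
            best = (work, (tm, tn, tk))
    return best[1]

def tile_loop(M: int, N: int, K: int):
    """Yield the tile coordinates a back end would lower to DMA transfers
    plus kernel invocations (here simply enumerated)."""
    tm, tn, tk = choose_tiling(M, N, K)
    for i in range(0, M, tm):
        for j in range(0, N, tn):
            for k in range(0, K, tk):
                yield (i, min(tm, M - i)), (j, min(tn, N - j)), (k, min(tk, K - k))

if __name__ == "__main__":
    # Example: a 512 x 512 x 2048 quantized matmul from a transformer layer.
    tiles = list(tile_loop(512, 512, 2048))
    print(f"tile shape = {choose_tiling(512, 512, 2048)}, {len(tiles)} tiles")
```

In a production compiler this search would also account for DMA bandwidth, multi-core partitioning, and double-buffering, which is where the dataflow and buffer-allocation collaboration described above comes in.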

PREFERRED EXPERIENCES: 

  • Strong understanding of compiler design principles (front-end, middle-end, back-end). 

  • Experience with machine learning frameworks (e.g., TensorFlow, PyTorch); a short graph-tracing sketch follows this list. 

  • Experience working with ML compilers (e.g., MLIR, TVM). 

  • Experience with ML models such as CNNs, LSTMs, LLMs, and diffusion models is a must. 

  • Excellent programming skills in Python, C++, or similar languages. 

  • Experience with machine learning hardware architectures (e.g., GPUs, TPUs, VLIW) is a plus. 

  • A passion for innovation and a strong desire to push the boundaries of machine learning performance. 
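As a small illustration of the computational-graph experience listed above, the sketch below uses PyTorch's torch.fx to trace a tiny model and walk its graph nodes, the kind of intermediate representation a compiler front end typically starts from. The model and the printed analysis are made up for illustration and are unrelated to AMD's actual NPU toolchain.

```python
import torch
import torch.nn as nn
from torch import fx

class TinyBlock(nn.Module):
    """A made-up two-layer block standing in for a real inference model."""
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(16, 32)
        self.fc2 = nn.Linear(32, 8)

    def forward(self, x):
        return self.fc2(torch.relu(self.fc1(x)))

# symbolic_trace records the forward pass as a computational graph
# (an fx.GraphModule) that passes can inspect and rewrite.
gm = fx.symbolic_trace(TinyBlock())

# Walk the graph the way an analysis pass might: each node carries an
# opcode (placeholder, call_module, call_function, output) and a target.
for node in gm.graph.nodes:
    print(f"{node.op:<14} {node.target}")

# The regenerated Python for the traced graph:
print(gm.code)
```

The same fx.Graph can be rewritten (operator fusion, quantization rewrites, backend partitioning) before being lowered further, which is where MLIR- or TVM-based flows typically take over.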

ACADEMIC CREDENTIALS:

  • Master's degree or PhD in Computer Science, Engineering, or a related field (or a Bachelor's degree with significant experience).

 

 

LOCATION:

San Jose, CA

 

 

#LI-RF1

 

#LI-HYBRID




At AMD, your base pay is one part of your total rewards package.  Your base pay will depend on where your skills, qualifications, experience, and location fit into the hiring range for the position. You may be eligible for incentives based upon your role such as either an annual bonus or sales incentive. Many AMD employees have the opportunity to own shares of AMD stock, as well as a discount when purchasing AMD stock if voluntarily participating in AMD’s Employee Stock Purchase Plan. You’ll also be eligible for competitive benefits described in more detail here.

 

AMD does not accept unsolicited resumes from headhunters, recruitment agencies, or fee-based recruitment services. AMD and its subsidiaries are equal opportunity, inclusive employers and will consider all applicants without regard to age, ancestry, color, marital status, medical condition, mental or physical disability, national origin, race, religion, political and/or third-party affiliation, sex, pregnancy, sexual orientation, gender identity, military or veteran status, or any other characteristic protected by law.   We encourage applications from all qualified candidates and will accommodate applicants’ needs under the respective laws throughout all stages of the recruitment and selection process.
