SMTS Software Systems Design Engineer

Jul 03, 2024
Austin, United States
Not specified
Intermediate
Full time
Office work


WHAT YOU DO AT AMD CHANGES EVERYTHING

We care deeply about transforming lives with AMD technology to enrich our industry, our communities, and the world. Our mission is to build great products that accelerate next-generation computing experiences – the building blocks for the data center, artificial intelligence, PCs, gaming and embedded. Underpinning our mission is the AMD culture. We push the limits of innovation to solve the world’s most important challenges. We strive for execution excellence while being direct, humble, collaborative, and inclusive of diverse perspectives. 

AMD together we advance_




SMTS SOFTWARE SYSTEMS DESIGN ENGINEER 

 

THE ROLE: 

We are looking for a dynamic, energetic Lead Compiler Engineer to join our growing team in the AI group. In this role, you will design, develop, and optimize the frontend compiler for the latest neural networks on AMD’s XDNA Neural Processing Units, which power cutting-edge generative AI models such as Stable Diffusion, SDXL-Turbo, and Llama 2. Your work will directly impact the efficiency, scalability, and reliability of our ML applications. If you thrive in a fast-paced environment and love working on cutting-edge machine learning inference applications, this role is for you.

 

THE PERSON: 

This AMD (Advanced Micro Devices) team is looking for a senior-level engineer who can help guide the team, mentor upcoming developers, provide long-range strategy, and is willing to jump in to resolve issues quickly. You will be involved in all areas that impact the team, including performance, automation, and development. The right candidate will stay informed on the latest trends and be prepared to give consultative direction to senior management.

 

KEY RESPONSIBILITIES: 

  • Design and implement the NPU compiler framework for neural networks.
  • Develop hardware-aware graph optimizations for high-level ML frameworks such as ONNX.
  • Research new algorithms for operator scheduling to enable efficient inference of the latest NN models.
  • Interface with the ONNX/PyTorch runtimes and the lower-level HW implementation.
  • Contribute to high-performance inference for GenAI workloads such as Llama2-7B, Stable Diffusion, and SDXL-Turbo.
  • Work closely with kernel developers, performance architects, and AI researchers.
  • Manage CPU and memory resources effectively during model execution.
  • Handle resource allocation for ML deployments across different tenants.
  • Research heterogeneous mapping of ML operators for maximum efficiency.
  • Build tools to track resource utilization, bottlenecks, and anomalies.
  • Enable detailed profiling and debugging tools for analyzing ML workload latency.
  • Implement rigorous code review practices to ensure superior code quality.
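To give a flavor of the graph-optimization work described above, the sketch below fuses a Conv followed by a Relu into a single fused operator on a toy graph representation. This is an illustrative example only: the tuple-based node format and the `fuse_conv_relu` helper are hypothetical and do not reflect the actual ONNX IR or AMD's compiler internals.

```python
# Toy illustration of an operator-fusion graph pass.
# Each node is (op_type, inputs, outputs); this is NOT the real ONNX protobuf IR.

def fuse_conv_relu(nodes):
    """Fuse a Conv followed by a Relu into one ConvRelu node,
    when the Conv's output feeds only that Relu."""
    # Map each tensor name to the nodes that consume it.
    consumers = {}
    for op, ins, outs in nodes:
        for name in ins:
            consumers.setdefault(name, []).append((op, ins, outs))

    fused, skip = [], set()
    for idx, (op, ins, outs) in enumerate(nodes):
        if idx in skip:
            continue
        if op == "Conv" and len(outs) == 1:
            users = consumers.get(outs[0], [])
            # Fuse only if the sole consumer is a Relu.
            if len(users) == 1 and users[0][0] == "Relu":
                relu = users[0]
                fused.append(("ConvRelu", ins, relu[2]))
                skip.add(nodes.index(relu))  # drop the absorbed Relu
                continue
        fused.append((op, ins, outs))
    return fused

graph = [
    ("Conv", ["x", "w"], ["c0"]),
    ("Relu", ["c0"], ["y"]),
    ("MatMul", ["y", "w2"], ["z"]),
]
print(fuse_conv_relu(graph))
# → [('ConvRelu', ['x', 'w'], ['y']), ('MatMul', ['y', 'w2'], ['z'])]
```

A production pass would additionally check hardware support for the fused kernel and preserve graph metadata, but the pattern-match-and-rewrite structure is the same.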

 

PREFERRED EXPERIENCE: 

  • Strong programming skills in C++ and Python.
  • Experience with proprietary or open-source compiler stacks such as TVM or MLIR.
  • Experience with ML frameworks (e.g., ONNX, PyTorch) is required.
  • Experience with ML models such as CNNs, LSTMs, LLMs, and diffusion models is a must.
  • Experience with ONNX/PyTorch runtime integration is a bonus.
  • Excellent problem-solving abilities and a passion for performance optimization. 

 

ACADEMIC CREDENTIALS: 

  • Master’s or PhD degree in Computer Science, Electrical Engineering, or a related field.

 

Location:

Austin, TX

 

 

#LI-RF1 

#LI-HYBRID

 




At AMD, your base pay is one part of your total rewards package.  Your base pay will depend on where your skills, qualifications, experience, and location fit into the hiring range for the position. You may be eligible for incentives based upon your role such as either an annual bonus or sales incentive. Many AMD employees have the opportunity to own shares of AMD stock, as well as a discount when purchasing AMD stock if voluntarily participating in AMD’s Employee Stock Purchase Plan. You’ll also be eligible for competitive benefits described in more detail here.

 

AMD does not accept unsolicited resumes from headhunters, recruitment agencies, or fee-based recruitment services. AMD and its subsidiaries are equal opportunity, inclusive employers and will consider all applicants without regard to age, ancestry, color, marital status, medical condition, mental or physical disability, national origin, race, religion, political and/or third-party affiliation, sex, pregnancy, sexual orientation, gender identity, military or veteran status, or any other characteristic protected by law.   We encourage applications from all qualified candidates and will accommodate applicants’ needs under the respective laws throughout all stages of the recruitment and selection process.



 
