Lead Engineer - Client ML and Runtime

May 11, 2024
Boston, United States
Salary: Not specified
Level: Intermediate
Full time
Office work


WHAT YOU DO AT AMD CHANGES EVERYTHING

We care deeply about transforming lives with AMD technology to enrich our industry, our communities, and the world. Our mission is to build great products that accelerate next-generation computing experiences – the building blocks for the data center, artificial intelligence, PCs, gaming and embedded. Underpinning our mission is the AMD culture. We push the limits of innovation to solve the world’s most important challenges. We strive for execution excellence while being direct, humble, collaborative, and inclusive of diverse perspectives. 

AMD together we advance_




  • The Role

     

    We are building IREE as an open-source compiler and runtime solution to productionize ML across a variety of usage scenarios and hardware targets. The runtime needs to scale from datacenter deployments down to resource-constrained environments like embedded and mobile devices. This requires us to write highly efficient code that interacts with the OS and device drivers, with minimal dependencies and a small binary size. There will be no shortage of intriguing technical challenges to tackle, and abundant chances to collaborate with industry experts working at different layers of the stack. If this sounds interesting to you, please don’t hesitate to reach out to us!

     

    The Person

    An ideal candidate is familiar with operating systems, device drivers, accelerator APIs/runtimes, and artifact release/deployment. They should be comfortable analyzing system issues with a variety of tools and driving improvements at the appropriate layers. Most importantly, the candidate is willing to learn and to work across boundaries.

     

    Key Responsibilities:

    · Integrate low-level client AI device code generation with device drivers and firmware.

    · Cross boundaries between driver/firmware/compiler/userland on Windows and Linux to drive overall product integration.

    · Collaborate with teams building ML code generation libraries.

     

     

    Preferred Experience in the following tools/flows:

    · Familiarity with operating system internals and resource management

    · Experience with AI accelerator (e.g., GPU) driver APIs/runtimes

    · Experience with various system debugging/benchmarking/profiling tools

    · Strong C/C++ understanding and skills

    · Familiarity with IREE, MLIR, LLVM, SPIR-V, or other compiler technologies

    · Open-source development ethos

     

    Academic Credentials

    · BS/MS (Computer Science, Computer Engineering, Electrical Engineering, or related equivalent)

     

    Location:

    Santa Clara, CA, USA OR Bellevue, WA, USA 

     

     

    #LI-EM1




At AMD, your base pay is one part of your total rewards package.  Your base pay will depend on where your skills, qualifications, experience, and location fit into the hiring range for the position. You may be eligible for incentives based upon your role such as either an annual bonus or sales incentive. Many AMD employees have the opportunity to own shares of AMD stock, as well as a discount when purchasing AMD stock if voluntarily participating in AMD’s Employee Stock Purchase Plan. You’ll also be eligible for competitive benefits described in more detail here.

 

AMD does not accept unsolicited resumes from headhunters, recruitment agencies, or fee-based recruitment services. AMD and its subsidiaries are equal opportunity, inclusive employers and will consider all applicants without regard to age, ancestry, color, marital status, medical condition, mental or physical disability, national origin, race, religion, political and/or third-party affiliation, sex, pregnancy, sexual orientation, gender identity, military or veteran status, or any other characteristic protected by law.   We encourage applications from all qualified candidates and will accommodate applicants’ needs under the respective laws throughout all stages of the recruitment and selection process.


