Principal Research Scientist

Mar 28, 2024
San Jose, United States
Not specified
Intermediate
Full time
Office work


WHAT YOU DO AT AMD CHANGES EVERYTHING

We care deeply about transforming lives with AMD technology to enrich our industry, our communities, and the world. Our mission is to build great products that accelerate next-generation computing experiences – the building blocks for the data center, artificial intelligence, PCs, gaming and embedded. Underpinning our mission is the AMD culture. We push the limits of innovation to solve the world’s most important challenges. We strive for execution excellence while being direct, humble, collaborative, and inclusive of diverse perspectives. 

AMD together we advance_




THE ROLE:

We are looking for a Principal Research Scientist experienced in training large language models (LLMs) and/or large multimodal models (LMMs). In this role, you will explore novel LLM/LMM architectures and large-scale training techniques to advance the state of the art. You will be part of a world-class research team working on pre-training, fine-tuning, and aligning large language and multimodal models, while keeping up to date with the latest progress and trends in LLMs/LMMs and foundation models.

 

THE PERSON:

Do you like to design and implement novel research ideas, improve the quality of large language and multimodal models, accelerate the training and inference speed of LLMs/LMMs, and influence future hardware and software direction? If so, this role is for you. The ideal candidate will have expertise and hands-on experience in training LLMs/LMMs and will be familiar with hyper-parameter tuning techniques, data preprocessing, tokenization methods, and the latest training approaches for LLMs/LMMs. A successful candidate also needs to be knowledgeable about the latest transformer architectures.
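For context, the tokenization methods referenced above commonly build on byte-pair encoding (BPE). The sketch below is purely illustrative (not an AMD implementation; function names are hypothetical): it learns BPE merges by repeatedly fusing the most frequent adjacent symbol pair, which is the core idea production tokenizers extend with byte-level handling, normalization, and efficient data structures.

```python
from collections import Counter

def get_pair_counts(tokens):
    # Count adjacent symbol pairs across all token sequences.
    counts = Counter()
    for seq in tokens:
        for pair in zip(seq, seq[1:]):
            counts[pair] += 1
    return counts

def merge_pair(tokens, pair):
    # Replace every occurrence of `pair` with the concatenated symbol.
    merged = []
    for seq in tokens:
        out, i = [], 0
        while i < len(seq):
            if i + 1 < len(seq) and (seq[i], seq[i + 1]) == pair:
                out.append(seq[i] + seq[i + 1])
                i += 2
            else:
                out.append(seq[i])
                i += 1
        merged.append(out)
    return merged

def learn_bpe(words, num_merges):
    # words: list of strings; start from character-level sequences.
    tokens = [list(w) for w in words]
    merges = []
    for _ in range(num_merges):
        counts = get_pair_counts(tokens)
        if not counts:
            break
        best = max(counts, key=counts.get)
        merges.append(best)
        tokens = merge_pair(tokens, best)
    return merges, tokens
```

For example, two merges over ["low", "lower", "lowest"] first fuse "l"+"o", then "lo"+"w", yielding the shared subword "low".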

 

KEY RESPONSIBILITIES:

  • Train and finetune LLMs/LMMs.
  • Improve on state-of-the-art LLMs/LMMs.
  • Accelerate the training and inference speed of LLMs/LMMs.
  • Research novel ML techniques and model architectures.
  • Influence the direction of the AMD AI platform.
  • Publish your work at top-tier venues.

 

PREFERRED EXPERIENCE:

  • Experience in developing and debugging in Python.
  • Experience in ML frameworks such as PyTorch, JAX or TensorFlow.
  • Experience with distributed training.
  • Expertise in LLM/LMM pretraining, finetuning, and/or RLHF.
  • Expertise in transformer architectures.
  • Strong publication record in top-tier conferences and journals.
  • Strong communication and problem-solving skills.

ACADEMIC CREDENTIALS:

  • A PhD degree or equivalent in machine learning, computer science, artificial intelligence, or a related field.



At AMD, your base pay is one part of your total rewards package.  Your base pay will depend on where your skills, qualifications, experience, and location fit into the hiring range for the position. You may be eligible for incentives based upon your role such as either an annual bonus or sales incentive. Many AMD employees have the opportunity to own shares of AMD stock, as well as a discount when purchasing AMD stock if voluntarily participating in AMD’s Employee Stock Purchase Plan. You’ll also be eligible for competitive benefits described in more detail here.

 

AMD does not accept unsolicited resumes from headhunters, recruitment agencies, or fee-based recruitment services. AMD and its subsidiaries are equal opportunity, inclusive employers and will consider all applicants without regard to age, ancestry, color, marital status, medical condition, mental or physical disability, national origin, race, religion, political and/or third-party affiliation, sex, pregnancy, sexual orientation, gender identity, military or veteran status, or any other characteristic protected by law.   We encourage applications from all qualified candidates and will accommodate applicants’ needs under the respective laws throughout all stages of the recruitment and selection process.

