Research Engineer - NLP & Large Language Models
EPFL, the Swiss Federal Institute of Technology in Lausanne, is one of the most dynamic university campuses in Europe and ranks among the top 20 universities worldwide. EPFL employs more than 6,500 people supporting the three main missions of the institution: education, research and innovation. The EPFL campus offers an exceptional working environment at the heart of a community of more than 17,000 people, including over 12,500 students and 4,000 researchers from more than 120 different countries.
About the Role

We are seeking a Research Engineer in Natural Language Processing (NLP) and Large Language Models (LLMs) to contribute to the design, training, and evaluation of next-generation foundation models. The role sits at the intersection of research and production-grade engineering, with a strong emphasis on post-training, multimodality, and advanced generative modeling techniques, including diffusion-based approaches. You will work closely with researchers and applied scientists to translate novel ideas into scalable, reproducible systems, and to push the state of the art in open, responsible, and multilingual AI.

Key Responsibilities
- Design, implement, and maintain training and post-training pipelines for large language and multimodal models (e.g., instruction tuning, alignment, preference optimization)
- Conduct research and engineering on post-training methods
- Contribute to multimodal modeling, integrating text with modalities such as vision, speech, or audio
- Explore and apply diffusion-based models and hybrid generative approaches for language and multimodal representation learning
- Optimize large-scale training and inference
- Develop evaluation pipelines and benchmarks for language understanding, reasoning, alignment, and multimodal performance
- Collaborate with researchers to prototype new ideas, reproduce results from the literature, and contribute to publications or technical reports
- Ensure code quality, reproducibility, and documentation suitable for long-term research and open-source release
Required Qualifications

- MSc or PhD in Computer Science, Machine Learning, AI, or a related field (or equivalent practical experience)
- Strong background in NLP and deep learning, with hands-on experience working with large language models
- Solid programming skills in Python, with experience using modern ML frameworks (e.g., PyTorch)
- Experience working with open-weight or open-data models, including releasing models, datasets, or benchmarks
- Familiarity with post-training techniques for LLMs (e.g., instruction tuning, preference optimization, alignment)
- Strong experimental rigor: ability to design controlled experiments, analyze results, and iterate efficiently
Desired / Bonus Qualifications

- Professional experience with diffusion models (e.g., text diffusion, latent diffusion, or multimodal diffusion)
- Hands-on work on multimodal models (e.g., text-image, text-audio, speech-language systems)
- Exposure to LLM alignment, safety, or evaluation beyond standard language modeling metrics
- Background with distributed training and large-scale model experimentation
- Familiarity with multilingual or low-resource language settings
- Contributions to open-source ML or published research in NLP, multimodality, or generative modeling
What We Offer

- A research-driven environment with access to large-scale compute and modern ML infrastructure
- Close collaboration with leading researchers in NLP, multimodality, and generative modeling
- The opportunity to work on foundational, open, and socially responsible AI systems
- Support for publishing research, contributing to open-source projects, and engaging with the broader research community
- Competitive compensation and benefits, commensurate with experience
Information

- Contract Start Date: to be agreed upon
- Activity Rate: 100%
- Duration: 1 year, renewable
- Contract Type: Fixed-term contract (CDD)