
Applied Research Scientist, LLM Evaluation & Post-Training
- Remote
- Ridgefield Park, New Jersey, United States
- Innodata Services LLC
Job description
Who we are:
Innodata (NASDAQ: INOD) is a leading data engineering company. With more than 2,000 customers and operations in 13 cities around the world, we are the AI technology solutions provider of choice for 4 of the 5 biggest technology companies in the world, as well as for leading companies across financial services, insurance, technology, law, and medicine.
By combining advanced machine learning and artificial intelligence (ML/AI) technologies, a global workforce of subject matter experts, and a high-security infrastructure, we’re helping usher in the promise of clean and optimized digital data to all industries. Innodata offers a powerful combination of both digital data solutions and easy-to-use, high-quality platforms.
Our global workforce includes over 3,000 employees in the United States, Canada, the United Kingdom, the Philippines, India, Sri Lanka, Israel, and Germany. We’re poised for a period of explosive growth over the next few years.
Position Summary:
Innodata is expanding its GenAI research capability to advance state-of-the-art evaluation and post-training methods for LLM and multimodal systems. As an Applied Research Scientist, LLM Evaluation & Post-Training, you will lead research and experimentation on how evaluation design, measurement strategies, and feedback signals influence model improvement.
This role is ideal for a technically rigorous researcher who is deeply fluent in modern LLM evaluation and post-training, and who can turn research insight into practical methods for customer solutions and internal platform innovation. You will work across human-in-the-loop and AI-augmented workflows, partnering with Language Data Scientists and AI/ML Research Engineers to design and validate evaluation frameworks that drive measurable model gains.
The ideal candidate combines strong experimental and statistical judgment with hands-on technical ability and can engage as a peer with research and engineering stakeholders at leading AI companies.
Who We’re Looking For:
You have 5+ years of relevant experience (including graduate research) in applied ML research, research science, or advanced ML experimentation, with significant experience in LLM evaluation, benchmarking, alignment, or post-training. You have a track record of designing high-quality experiments, interpreting results rigorously, and translating findings into practical improvements.
You are comfortable working across research and product/customer contexts. You can identify important methodological questions, build a research agenda, and collaborate with engineers and data experts to execute. You understand that evaluation is not only about metrics, but about measurement validity, robustness, stress testing, and alignment to real-world usage.
You are excited by frontier challenges including long-context, cross-modal, and dynamic multi-turn evaluations, and by the opportunity to build new benchmark datasets and evaluation frameworks that become strategic assets for Innodata and its customers.
You bring an implementation-minded approach to experimentation and are comfortable collaborating closely with engineers to productionize methods and research outputs when appropriate.
Tell Me More:
As an Applied Research Scientist, LLM Evaluation & Post-Training, you will help define the next generation of evaluation-driven model improvement workflows. You will study how different evaluation approaches (human, automated, hybrid) shape model selection and post-training outcomes, and you will design experiments that produce credible, actionable conclusions.
Your work may include designing benchmark datasets, developing evaluation taxonomies and protocols, defining metrics and scoring methodologies, analyzing failure modes, and testing how changes in evaluation setup affect downstream fine-tuning results. You will also support customer engagements by bringing scientific rigor to evaluation strategy, methodology review, and technical recommendations.
This is a highly collaborative role that sits at the intersection of research, engineering, and language/data operations.
Responsibilities:
Define and execute a research agenda focused on LLM evaluation and post-training, especially evaluation-driven model improvement
Design rigorous experiments to study how evaluation methodologies impact fine-tuning and post-training outcomes
Develop and validate evaluation frameworks for LLM and multimodal systems, including:
- benchmark/task design
- scoring methods
- judge/model-assisted evaluation (a minimal scoring sketch follows this responsibilities list)
- human evaluation protocols
- robustness/stress testing
Lead research on advanced evaluation domains, including long-context, cross-modal, and dynamic multi-turn evaluations
Study the effectiveness and limitations of existing evaluation techniques, and propose improved methodologies with clear validity and scalability tradeoffs
Analyze model behavior and failure patterns; generate actionable recommendations for model improvement and evaluation redesign
Collaborate with AI/ML Research Engineers to translate research methods into scalable evaluation and post-training pipelines
Collaborate with Language Data Scientists to integrate human-in-the-loop and synthetic data/evaluation strategies into research programs
Engage with customer technical stakeholders to understand evaluation goals, review methodologies, and provide expert recommendations
Contribute to internal benchmark datasets, evaluation frameworks, and reusable research assets
Produce high-quality technical documentation, internal research reports, and client-facing materials explaining methods, results, assumptions, and limitations
Contribute to thought leadership and best practices in LLM evaluation, post-training, and GenAI quality measurement
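For concreteness on the judge/model-assisted evaluation work referenced in the list above, the sketch below is a minimal, illustrative Python harness for rubric-based judge scoring. It is a sketch under stated assumptions, not an Innodata method: `call_judge` is a hypothetical stub standing in for a real judge-model API, and repeated sampling with averaging is one simple way to dampen judge noise.

```python
# Minimal sketch of a judge/model-assisted scoring harness (illustrative).
# `call_judge` is a hypothetical stub; swap in a real judge-model request.
import statistics
from dataclasses import dataclass

@dataclass
class EvalItem:
    prompt: str
    response: str

def call_judge(item: EvalItem, rubric: str) -> int:
    """Hypothetical judge call returning a 1-5 rubric score."""
    return 3  # placeholder

def score_run(items: list[EvalItem], rubric: str, n_samples: int = 3) -> dict:
    """Score each item n_samples times and average, so judge noise
    (a known reliability concern) is measured rather than ignored."""
    per_item = [
        statistics.mean(call_judge(item, rubric) for _ in range(n_samples))
        for item in items
    ]
    return {
        "mean_score": statistics.mean(per_item),
        "stdev": statistics.stdev(per_item) if len(per_item) > 1 else 0.0,
        "n_items": len(per_item),
    }
```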
Job requirements
MS/PhD in Computer Science, Machine Learning, Statistics, Applied Mathematics, AI, or a related quantitative scientific field (PhD strongly preferred)
5+ years of relevant experience in applied research / research science in ML/AI, with substantial work in LLMs or foundation models
Demonstrated experience with LLM evaluation, benchmarking, alignment, post-training, or model quality research
Strong foundation in experimental design, statistical analysis, and scientific reasoning for ML systems
Strong coding skills in Python for research experimentation and analysis (e.g., data processing, evaluation pipelines, statistical analysis, visualization)
Experience working with modern ML tooling/frameworks (e.g., PyTorch, Hugging Face, JAX/TensorFlow as applicable) sufficient to design and execute model/evaluation experiments
Ability to evaluate and compare human and automated evaluation methods, including tradeoffs in cost, reliability, validity, and scalability
Experience designing evaluation studies and protocols that are reproducible across datasets, model versions, and evaluation runs
Ability to collaborate directly with technical stakeholders including research scientists, ML engineers, data scientists, and customer technical counterparts
Strong communication skills and ability to present nuanced technical conclusions, assumptions, and limitations clearly
Technical Skills
Evaluation Science & Benchmarking
Experience designing benchmark datasets, test suites, or evaluation frameworks for language or multimodal models
Deep understanding of metric design, scoring reliability, and measurement validity
Experience with human evaluation methods and quality assurance considerations (e.g., rubric design, inter-rater reliability, adjudication frameworks)
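As a concrete illustration of the inter-rater reliability point above, the self-contained sketch below computes Cohen's kappa, a standard chance-corrected agreement statistic, for two annotators on a categorical rubric. The ratings are made-up illustration data, not a prescribed workflow.

```python
# Minimal sketch: Cohen's kappa for two raters on the same items.
from collections import Counter

def cohens_kappa(rater_a: list[str], rater_b: list[str]) -> float:
    assert rater_a and len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: fraction of items where the two raters match.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected chance agreement from each rater's label marginals.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum((freq_a[l] / n) * (freq_b[l] / n)
              for l in set(freq_a) | set(freq_b))
    return 1.0 if p_e == 1 else (p_o - p_e) / (1 - p_e)

# Made-up ratings for six items; prints kappa = 0.667.
a = ["pass", "pass", "fail", "pass", "fail", "pass"]
b = ["pass", "fail", "fail", "pass", "fail", "pass"]
print(f"kappa = {cohens_kappa(a, b):.3f}")
```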
LLM / Post-Training
Understanding of post-training methods and how training objectives interact with evaluation outcomes
Ability to reason about model behavior, failure modes, and tradeoffs across tasks/domains
Familiarity with alignment and robustness considerations in model evaluation
Quantitative Analysis
Strong statistical analysis skills (sampling, uncertainty, significance testing where appropriate, error analysis, metric interpretation)
Ability to synthesize complex experimental findings into actionable recommendations
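As a small illustration of the uncertainty analysis this implies, the sketch below runs a paired percentile bootstrap on the mean score difference between two model versions. The per-item pass/fail scores are made-up, and the method shown is one common choice, not a prescribed one.

```python
# Minimal sketch: paired percentile-bootstrap CI for a model comparison.
import random

def bootstrap_diff_ci(scores_a, scores_b, n_boot=10_000, alpha=0.05, seed=0):
    """CI for mean(scores_a) - mean(scores_b), resampling items in pairs
    so per-item correlation between the two systems is respected."""
    assert len(scores_a) == len(scores_b)
    rng, n, diffs = random.Random(seed), len(scores_a), []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]
        diffs.append(sum(scores_a[i] - scores_b[i] for i in idx) / n)
    diffs.sort()
    return diffs[int(alpha / 2 * n_boot)], diffs[int((1 - alpha / 2) * n_boot) - 1]

model_a = [1, 0, 1, 1, 0, 1, 1, 0, 1, 1]  # made-up per-item pass/fail
model_b = [1, 0, 0, 1, 0, 1, 0, 0, 1, 1]
print(bootstrap_diff_ci(model_a, model_b))  # prints a (low, high) tuple
```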
Preferred Skills
Hands-on experience running or supporting fine-tuning/post-training experiments (SFT, preference optimization, RLHF/RLAIF-style workflows; a minimal DPO sketch follows this list)
Experience with multimodal evaluation (e.g., text-image, audio, video)
Experience with long-context benchmarking/evaluation and real-world context management challenges
Experience designing multi-turn, interactive, or agentic evaluation protocols
Published research and/or open-source benchmark contributions in LLM evaluation, post-training, alignment, or related areas
Experience in customer-facing applied research, technical consulting, or cross-functional product/research collaborations
Familiarity with safety, trustworthiness, and governance considerations in GenAI evaluation
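As one worked example of the preference-optimization workflows named in the list above (and only one of several), here is a minimal PyTorch sketch of the DPO (Direct Preference Optimization) loss. The tensors are toy stand-ins for summed per-sequence log-probs from the policy being trained and a frozen reference model.

```python
# Minimal sketch of the DPO objective over a batch of preference pairs.
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen, policy_rejected, ref_chosen, ref_rejected, beta=0.1):
    """-log sigmoid(beta * [(pi_c - ref_c) - (pi_r - ref_r)]), averaged."""
    logits = beta * ((policy_chosen - ref_chosen)
                     - (policy_rejected - ref_rejected))
    return -F.logsigmoid(logits).mean()

# Toy summed log-probs for four preference pairs (made-up values).
pc = torch.tensor([-5.0, -6.0, -4.0, -7.0])   # policy, chosen
pr = torch.tensor([-8.0, -6.5, -9.0, -7.2])   # policy, rejected
rc = torch.tensor([-5.5, -6.0, -5.0, -7.0])   # reference, chosen
rr = torch.tensor([-7.5, -6.0, -8.0, -7.0])   # reference, rejected
print(dpo_loss(pc, pr, rc, rr).item())
```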
How this role partners with the team
This role works closely with:
Language Data Scientists, who bring deep expertise in language data, human evaluation workflows, multilingual/multimodal process design, and data quality operations
AI/ML Research Engineers, who implement scalable training/evaluation systems and connect research methods to production-grade pipelines
Business and Customer Teams, who rely on Innodata for expert consultation and credible, technically rigorous GenAI solutions
Internal R&D and Platform Teams, to transform research outputs into reusable frameworks, benchmarks, and differentiated offerings
Please be aware of recruitment scams involving individuals or organizations falsely claiming to represent employers. Innodata will never ask for payment, banking details, or sensitive personal information during the application process. To learn more on how to recognize job scams, please visit the Federal Trade Commission’s guide at https://consumer.ftc.gov/articles/job-scams.
If you believe you’ve been targeted by a recruitment scam, please report it to Innodata at verifyjoboffer@innodata.com and consider reporting it to the FTC at ReportFraud.ftc.gov.