Code Reviewer for LLM Data Training (Multi-Language)

SuperAnnotate

India
Posted on Apr 21, 2025

About the Role

We are hiring Code Reviewers with strong programming expertise to support the evaluation of AI-generated responses across multiple programming languages. You will audit evaluations performed by annotators reviewing AI-generated code and verify that those evaluations are accurate, complete, and aligned with technical and instructional guidelines.

Supported languages for this role include:

  1. Java
  2. C++
  3. HTML/CSS
  4. C#
  5. TypeScript
  6. Bash
  7. PHP
  8. R
  9. Dart
  10. Excel
  11. Kotlin
  12. C
  13. MATLAB
  14. Python
  15. JavaScript
  16. SQL
  17. Go

Responsibilities

  • Audit annotator evaluations of AI-generated responses across supported languages
  • Validate whether the code meets prompt-specific requirements for structure, syntax, and logic
  • Assess code quality for correctness, security, performance, and readability
  • Run and validate proof-of-work code and verify annotators’ testing outcomes
  • Review annotator-submitted Loom videos to confirm proper evaluation and testing processes
  • Ensure responses align with formatting, tone, and instruction-following expectations
  • Document any misalignments, overlooked errors, or guideline violations
  • Provide clear, constructive QA feedback based on structured rubrics and review protocols
  • Collaborate with internal teams on edge cases and content-specific challenges

Required Qualifications

  • 5–7+ years of hands-on experience in one or more of the listed languages
  • Strong technical skills in software development, debugging, and code review
  • Ability to interpret and apply strict evaluation rubrics and instructions
  • Experience using code execution environments or dev tools to validate output
  • Excellent written communication for producing QA documentation and feedback
  • English proficiency at B2 level or higher (C1, C2, or native)

Preferred Qualifications

  • Experience with AI-generated code review, LLM evaluation, or human-in-the-loop workflows
  • Familiarity with version control (Git), structured QA systems, or annotation platforms (e.g., SuperAnnotate)
  • Ability to handle multi-language QA with contextual awareness and attention to detail

Why Join Us

Join a growing global team focused on raising the quality of AI-generated programming content. Your evaluations will directly contribute to building safer, more accurate, and instructionally sound AI systems. This fully remote role offers flexible hours, milestone-based delivery, and competitive compensation.