Code Reviewer for LLM Data Training (Multi-Language)
SuperAnnotate
About the Role
We are hiring Code Reviewers with strong programming expertise to support response evaluations across multiple programming languages. You will audit evaluations performed by annotators reviewing AI-generated code and verify that those evaluations are accurate, complete, and aligned with technical and instructional guidelines.
Supported languages for this role include:
- Java
- C++
- HTML/CSS
- C#
- TypeScript
- Bash
- PHP
- R
- Dart
- Excel
- Kotlin
- C
- MATLAB
- Python
- JavaScript
- SQL
- Go
Responsibilities
- Audit annotator evaluations of AI-generated responses across supported languages
- Validate whether the code meets prompt-specific requirements for structure, syntax, and logic
- Assess code quality for correctness, security, performance, and readability (see the illustrative sketch after this list)
- Run and validate proof-of-work code and verify annotators’ testing outcomes
- Review annotator-submitted Loom videos to confirm proper evaluation and testing processes
- Ensure responses align with formatting, tone, and instruction-following expectations
- Document any misalignments, overlooked errors, or guideline violations
- Provide clear, constructive QA feedback based on structured rubrics and review protocols
- Collaborate with internal teams on edge cases and content-specific challenges
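For illustration only, not a formal requirement: the kind of oversight a reviewer is expected to catch often looks like the hypothetical Python sketch below, where an annotator approved AI-generated code containing a SQL-injection risk. All function and table names here are invented for the example.

```python
import sqlite3

# AI-generated code as approved by the annotator (flawed):
# the query is built with string interpolation, so a crafted
# username such as "x' OR '1'='1" bypasses the name filter.
def find_user_unsafe(conn: sqlite3.Connection, username: str):
    query = f"SELECT id, email FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()

# What the reviewer should flag, with the corrected form:
# a parameterized query lets the driver escape the input safely.
def find_user(conn: sqlite3.Connection, username: str):
    query = "SELECT id, email FROM users WHERE name = ?"
    return conn.execute(query, (username,)).fetchall()
```

A strong QA note would identify the vulnerability, explain why the annotator's approval was incorrect, and reference the relevant rubric criterion.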
Required Qualifications
- 5–7+ years of hands-on experience in one or more of the listed languages
- Strong technical skills in software development, debugging, and code review
- Ability to interpret and apply strict evaluation rubrics and instructions
- Experience using code execution environments or dev tools to validate output
- Excellent written communication for producing QA documentation and feedback
- English proficiency at B2 level or higher (B2, C1, C2, or native)
Preferred Qualifications
- Experience with AI-generated code review, LLM evaluation, or human-in-the-loop workflows
- Familiarity with version control (Git), structured QA systems, or annotation platforms (e.g., SuperAnnotate)
- Ability to handle multi-language QA with contextual awareness and attention to detail
Why Join Us
Join a growing global team focused on raising the quality of AI-generated programming content. Your evaluations will directly contribute to building safer, more accurate, and instructionally sound AI systems. This fully remote role offers flexible hours, milestone-based delivery, and competitive compensation.