Posted: 15 January
Sr. SWE for Code Reviewing LLM Data Training (Go)
Company
G2i
G2i is a U.S.-based B2B SaaS platform specializing in connecting companies with top engineering talent, including software engineers and AI specialists.
Remote Hiring Policy:
G2i is a fully remote company hiring engineers for contract roles worldwide, with team members located in various regions such as LATAM, Europe, and Canada.
Job Type
Full-time
Allowed Applicant Locations
Worldwide
Salary
$50 to $100 per hour
Job Description
10-minute AI interview; project starts January 29; expertise in rarer languages means higher placement rates.
About the Company
G2i connects subject-matter experts, students, and professionals with flexible, remote AI training opportunities, including annotation, evaluation, fact-checking, and content review. We partner with leading AI teams, and all contributions are paid weekly once approved, ensuring consistent and reliable compensation.
About the Role
We're hiring a Code Reviewer with deep Go expertise to review evaluations completed by data annotators assessing AI-generated Go code responses. Your role is to ensure that annotators follow strict quality guidelines related to instruction-following, factual correctness, and code functionality.
Responsibilities
Review and audit annotator evaluations of AI-generated Go code.
Assess whether the Go code follows the prompt instructions and is functionally correct and secure.
Validate code snippets using proof-of-work methodology.
Identify inaccuracies in annotator ratings or explanations.
Provide constructive feedback to maintain high annotation standards.
Work within Project Atlas guidelines for evaluation integrity and consistency.
Required Qualifications
5–7+ years of experience in Go development, QA, or code review.
Strong knowledge of Go syntax, concurrency patterns, debugging, edge cases, and testing.
Experience with Go modules, testing frameworks, and standard tooling.
Comfortable using code execution environments and debugging tools.
Excellent written communication and documentation skills.
Experience working with structured QA or annotation workflows.
English proficiency at B2 level or above (B2, C1, C2, or native).
Preferred Qualifications
Experience in AI training, LLM evaluation, or model alignment.
Familiarity with annotation platforms.
Exposure to RLHF (Reinforcement Learning from Human Feedback) pipelines.
Background in microservices architecture or cloud-native development.
Compensation
Hourly rates are personalized based on your experience level, educational background, location, and industry expertise. You'll see your specific rate in your contract offer before signing. Rates for technical roles can vary significantly based on these factors and can be re-evaluated for different projects based on your performance and experience.