Posted: 26 November
Code Reviewer for LLM Data Training (SQL)
Company
G2i
G2i is a U.S.-based B2B SaaS platform that connects companies with top engineering talent, including software engineers and AI specialists.
Remote Hiring Policy:
G2i is a fully remote company hiring engineers for contract roles worldwide, with team members located in regions including LATAM, Europe, and Canada.
Job Type
Full-time
Allowed Applicant Locations
Worldwide
Job Description
About the Company
G2i connects subject-matter experts, students, and professionals with flexible, remote AI training work such as annotation, evaluation, fact-checking, and content review. We partner with leading AI teams, and all contributions are paid weekly once approved, ensuring consistent and reliable compensation.
About the Role
We are looking for a Code Reviewer with expertise in SQL to join our AI training QA team. You will audit annotator evaluations of AI-generated SQL responses, ensuring technical correctness, instruction adherence, proof-of-work validation, and consistent application of evaluation rubrics.
Responsibilities
Audit annotator evaluations of AI-generated SQL responses
Verify that responses follow prompt requirements regarding query logic, joins, and correct SQL syntax
Evaluate SQL code for correctness, performance, security, and readability
Run and validate proof-of-work code submitted by annotators to confirm functionality
Ensure responses align with instruction-following expectations, including style, tone, and clarity
Identify and document errors, omissions, or rating inconsistencies
Provide clear, constructive QA feedback based on rubric and guideline criteria
Collaborate with internal teams for clarifications on complex or ambiguous items
Required Qualifications
5–7+ years of experience in SQL development, QA, or code review
Strong understanding of core SQL concepts and development best practices
Experience with debugging, testing tools, and code execution environments
Strong analytical and critical thinking skills
Excellent written communication for documenting evaluations and feedback
English proficiency at B2, C1, C2, or Native level
Preferred Qualifications
Experience with AI/LLM workflows, human-in-the-loop QA, or model evaluation
Familiarity with structured code evaluation processes and rubrics