Posted at: 19 March

Data Scientist

Company

Razer

Razer Inc. is a dual-headquartered gaming hardware and consumer electronics company specializing in high-performance gaming peripherals and laptops, targeting gamers globally.

Job Type

Full-time

Allowed Applicant Locations

Singapore

Job Description

Joining Razer will place you on a global mission to revolutionize the way the world games. Razer is a place to do great work, offering you the opportunity to make a global impact while working with a team located across five continents. Razer is also a great place to work, providing you the unique, gamer-centric #LifeAtRazer experience that will put you on an accelerated growth path, both personally and professionally.

Job Responsibilities

This Data Scientist role sits within the Agentic AI Pod, which is focused on designing, building, and scaling agentic AI systems within Razer’s internal AI platform. You will play a critical role in developing autonomous and semi-autonomous AI agents that combine large language models (LLMs), retrieval systems, fine-tuned models, and tool-based orchestration to enable intelligent, real-time capabilities across Razer’s gaming and platform experiences.

The ideal candidate is a technically strong AI systems engineer with hands-on experience in agentic architectures, RAG pipelines, LLM fine-tuning, and production deployment.
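Purely as an illustrative sketch of the retrieval step such a system automates (all function names and the toy corpus below are invented for this example; a production RAG pipeline would use embeddings and a vector database rather than keyword overlap):

```python
# Toy sketch of retrieval-augmented prompting. Hypothetical names throughout;
# keyword overlap stands in for embedding similarity purely for illustration.

def score(query, doc):
    """Crude relevance score: fraction of query terms appearing in the doc."""
    q_terms = set(query.lower().split())
    d_terms = set(doc.lower().split())
    return len(q_terms & d_terms) / (len(q_terms) or 1)

def retrieve(query, corpus, k=2):
    """Return the k documents ranked most relevant to the query."""
    ranked = sorted(corpus, key=lambda d: score(query, d), reverse=True)
    return ranked[:k]

def build_prompt(query, context_docs):
    """Assemble the grounded prompt an agent would send to an LLM."""
    context = "\n".join(f"- {d}" for d in context_docs)
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

corpus = [
    "Razer Synapse configures device lighting profiles.",
    "The support portal handles warranty claims.",
    "Chroma lighting supports per-key RGB effects.",
]
docs = retrieve("How do I configure lighting profiles?", corpus)
prompt = build_prompt("How do I configure lighting profiles?", docs)
```

In a real pipeline the scoring and ranking would be delegated to an embedding model and a vector store, but the overall shape (retrieve, then ground the prompt) is the same.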
You will work across the full lifecycle, from data preparation and model adaptation to system integration, deployment, and continuous optimization, while collaborating closely with AI Software Engineers, Platform Engineers, and DevOps teams.

Key Responsibilities

- Design, implement, and maintain agentic AI architectures, including planning, tool use, memory, and multi-step reasoning
- Build, operate, and optimize Retrieval-Augmented Generation (RAG) pipelines using embeddings, vector databases, and internal knowledge sources
- Perform LLM fine-tuning and adaptation (e.g., supervised fine-tuning, instruction tuning, parameter-efficient methods such as LoRA) to improve task performance and domain alignment
- Develop internal frameworks, tooling, and orchestration layers for LLM-driven agents and workflows
- Integrate and adapt third-party AI services (LLMs, speech, vision, agent platforms) into agent-based systems
- Evaluate, prototype, and productionize agent frameworks, models, and AI platforms, focusing on system performance, cost, and architectural fit
- Deploy and operate production-grade AI systems, addressing scalability, latency, reliability, observability, and cost controls
- Conduct benchmarking, evaluation, and trade-off analysis across models, fine-tuning strategies, agent behaviors, and retrieval approaches
- Collaborate with platform and infrastructure teams to ensure secure, compliant, and maintainable AI systems
- Stay current with advances in agentic AI, LLM fine-tuning techniques, RAG methods, and deployment patterns

Pre-Requisites

Technical Skills

- Minimum of 2 years of experience in AI systems engineering, agentic AI development, or applied ML in production
- Strong proficiency in Python and solid software engineering fundamentals (API design, testing, modular architecture)
- Strong proficiency in prompt design and prompt engineering for agentic AI systems (instruction design, role prompting, tool-use prompting, iterative refinement, and evaluation)
- Hands-on experience with LLM APIs (e.g., OpenAI, Claude, Gemini) and open-source LLMs
- Practical experience with LLM fine-tuning workflows, including data preparation, training, evaluation, and deployment
- Experience with agent and RAG frameworks such as LangChain, LlamaIndex, AutoGen, or similar
- Experience deploying and operating AI systems with attention to latency, throughput, and reliability
- Familiarity with cloud platforms (AWS, GCP, Azure) and AI deployment/MLOps workflows (CI/CD, monitoring, versioning)

Preferred Qualifications

- Experience with parameter-efficient fine-tuning (PEFT) techniques such as LoRA, QLoRA, or adapters
- Hands-on experience with vector databases (e.g., Pinecone, Weaviate, Milvus, FAISS)
- Strong understanding of prompt engineering, retrieval strategies, and RAG evaluation
- Experience operating and debugging agent-based systems in production
- Ability to clearly communicate architectural decisions and trade-offs
- Passion for gaming and interest in intelligent, interactive AI experiences
- Comfortable working in a fast-paced, high-pressure, agile environment

Education & Experience

- Master’s degree or PhD in Computer Science, Artificial Intelligence, Machine Learning, or a closely related technical discipline

Razer is proud to be an Equal Opportunity Employer. We believe that diverse teams drive better ideas, better products, and a stronger culture. We are committed to providing an inclusive, respectful, and fair workplace for every employee across all the countries we operate in. We do not discriminate on the basis of race, ethnicity, colour, nationality, ancestry, religion, age, sex, sexual orientation, gender identity or expression, disability, marital status, or any other characteristic protected under local laws. Where needed, we provide reasonable accommodations, including for disability or religious practices, to ensure every team member can perform and contribute at their best.

Are you game?
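As a back-of-the-envelope illustration of why the parameter-efficient fine-tuning methods listed among the preferred qualifications (e.g., LoRA) matter: instead of updating a full weight matrix, LoRA trains two small low-rank factors. The dimensions below are hypothetical but representative of a transformer projection layer.

```python
# Illustrative arithmetic only. LoRA replaces a full-rank weight update
# (d_out x d_in parameters) with two low-rank factors B (d_out x r) and
# A (r x d_in), shrinking trainable parameters from d_out*d_in
# to r*(d_out + d_in).

def full_update_params(d_out, d_in):
    """Trainable parameters for a full fine-tune of one weight matrix."""
    return d_out * d_in

def lora_params(d_out, d_in, r):
    """Trainable parameters for a rank-r LoRA update of the same matrix."""
    return r * (d_out + d_in)

d_out = d_in = 4096   # hypothetical projection size
r = 8                 # a commonly used LoRA rank

full = full_update_params(d_out, d_in)
lora = lora_params(d_out, d_in, r)
reduction = full / lora   # how many times fewer parameters are trained
```

This is the core reason PEFT methods make domain adaptation of large models tractable on modest hardware: only the low-rank factors (and their optimizer state) need to fit in memory and be checkpointed.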