Posted on: 14 October

AI Gateway Engineer – Platform Integrations (AVM - AI - 20251014)

Company

Celara

Celara Labs is a Buenos Aires-based B2B SaaS company specializing in AI-enabled software development services for enterprise clients.

Remote Hiring Policy:

Celara Labs operates with a near-shore remote work model, hiring top engineering talent exclusively from Latin America to support clients primarily in the U.S. and other regions.

Job Type

Full-time

Allowed Applicant Locations

South America

Job Description

About the Role
We are looking for an experienced AI Engineer to architect, implement, and optimize AI-powered solutions driving our next-generation products and platforms.

This hands-on role will focus on integrating, deploying, and scaling machine learning models, collaborating closely with product and engineering teams, and ensuring robust, reliable, and secure AI services across our organization.

Key Responsibilities

    • Model Development: Design, train, evaluate, and deploy machine learning models and deep learning architectures for a range of business problems.
    • Prompt Management: Build systems to manage, enrich, and filter prompts, including handling sensitive content and ensuring context is maintained for optimal model performance.
    • Content Moderation: Implement processes to screen and filter model outputs, ensuring responses are free from harmful or inappropriate content.
    • Integration: Develop APIs and supporting systems to serve AI models to client applications, integrating with both internal systems and third-party platforms.
    • Data Protection/Masking: Implement rigorous data protection, masking, and encryption strategies to safeguard sensitive information and support compliance requirements.
    • MCP: Integrate and support workflows leveraging the Model Context Protocol for contextual management of AI model requests and outputs.
    • Performance Optimization: Optimize AI service delivery for scalability, reliability, and low latency, using modern deployment and caching strategies.
    • Security and Compliance: Collaborate with security teams to enforce robust access controls, data privacy, and compliance across all AI solutions.
    • Monitoring and Observability: Apply logging, monitoring, and tracing for model performance, operational analytics, and cost management.
    • Collaboration: Work with ML engineers, software developers, product managers, and DevOps teams to integrate AI capabilities into the larger application landscape.
    • Documentation: Create comprehensive documentation, guides, and internal tooling to support the adoption and maintenance of AI systems.

About You

    • Experience:
        • 3–6+ years of designing, implementing, and maintaining AI/ML models in production (Python, TensorFlow, PyTorch, etc.).
        • Hands-on experience with Model Context Protocol (MCP) or similar context management workflows.
        • Experience implementing prompt management, data protection, and content moderation systems for AI-powered applications.
        • Track record building scalable data and ML pipelines.
        • Familiarity with cloud-native architecture (AWS, GCP, or Azure).

    • Technical Skills:
        • Proficiency in model development, training, and deployment using leading ML/DL frameworks.
        • Strong capability in API integration and RESTful service design for serving AI models.
        • Experience with data engineering, caching (Redis), and monitoring tools.
        • Comfort integrating external APIs (e.g., third-party LLM/AI services) and designing abstraction layers.
        • Experience with AI Gateway platforms (Kong, Mosaic, TrueFoundry, Portkey) and integrating with LLM APIs.

    • Mindset:
        • You value clean code, clear documentation, and thoughtful testing.
        • You’re comfortable working in early-stage environments and iterating quickly.
        • You have strong communication skills and a collaborative approach.

Nice to Have

    • Familiarity with observability stacks (OpenTelemetry, Datadog, Prometheus).
    • Background building multi-tenant platforms or scaling AI applications for large user bases.
    • Experience with Go (Golang).

We may use artificial intelligence (AI) tools to support parts of the hiring process, such as reviewing applications, analyzing resumes, or assessing responses. These tools assist our recruitment team but do not replace human judgment; final hiring decisions are made by humans. If you would like more information about how your data is processed, please contact us.