Posted: 30 June

Senior Data Engineer

Company

Cloudbeds

Cloudbeds is a B2B hospitality management system headquartered in the United States, providing integrated solutions for property management and channel management to hoteliers globally.

Remote Hiring Policy:

Cloudbeds embraces a flexible approach to remote work, hiring from various regions including the United States, Brazil, LATAM, and Europe, allowing team members to collaborate across different time zones.

Job Type

Full-time

Allowed Applicant Locations

Peru, South America, Europe

Job Description

How You'll Make an Impact:

As a Senior Data Engineer, you'll design and implement large-scale distributed data processing systems using technologies like Apache Hadoop, Spark, and Flink. You'll build robust data pipelines and infrastructure that transform complex data into actionable insights, ensuring scalability and fault-tolerance across our platform that processes billions in bookings annually.

You'll architect data lakes, warehouses, and real-time streaming platforms while implementing security measures and optimizing performance. With your expertise in distributed computing, containerization (Docker, Kubernetes), and streaming technologies (Kafka, Confluent), you'll drive innovation and evaluate new technologies to continuously improve our data ecosystem.

Our Data team:

We're energized by turning complex, messy data into meaningful insights that drive real impact across the organization. As a team, we thrive on solving data infrastructure challenges at scale, building resilient pipelines, optimizing performance, and architecting systems that enable everyone from analysts to executives to make smarter, faster decisions.

What sets us apart is our culture of deep trust and collaboration. No one gets left stuck. When someone hits a blocker, the team jumps in to help without hesitation. We're problem solvers with a shared mission, where collaboration isn't just encouraged, it's expected, and collective wins matter more than individual credit.

What You Bring to the Team:

  • Technical Expertise & Scalability Mindset: Deep knowledge of data architecture, ETL/ELT pipelines, and distributed systems, with the ability to design scalable, high-performance solutions.

  • Problem-Solving & Ownership: A proactive approach to diagnosing issues, improving infrastructure, and taking full ownership from concept to production.

  • Code Quality & Best Practices: Strong skills in writing clean, maintainable code (e.g., in Python, SQL, or Scala) and championing best practices like version control, testing, and CI/CD for data.

  • Collaboration & Mentorship: The ability to work cross-functionally with analysts, data scientists, and engineers, while mentoring junior team members and raising the team’s technical bar.

  • Data Governance & Reliability Focus: A strong sense of responsibility for data accuracy, lineage, and security—building systems that others can trust and scale on.

What Sets You Up for Success:

  • Clear Business Context: Understanding the “why” behind data needs—when goals and use cases are clear, it’s easier to design impactful and relevant solutions.

  • Autonomy with Trust: The freedom to make technical decisions, explore solutions, and iterate—paired with the trust that you’ll deliver with quality and accountability.

  • Strong Team Collaboration: Being part of a team that shares knowledge openly, helps each other unblock challenges, and values collective wins over individual credit.

  • Access to Quality Tooling & Infrastructure: Having the right modern tools (e.g., Airflow, dbt, Spark, Flink, cloud platforms) and the ability to improve them when needed sets the foundation for effective work.

  • Supportive Leadership & Growth Mindset: Managers who advocate for your growth, give regular feedback, and create space for learning and experimentation help sustain long-term success.

Bonus Skills to Stand Out (Optional):

  • Leadership Skills: Guiding and mentoring junior team members, coordinating projects, and collaborating with other teams.

  • Domain Knowledge: An understanding of the hospitality industry can greatly enhance your ability to design and build effective data pipelines.

  • Data Architecture Expertise: Experience designing data architectures and systems, backed by a deep understanding of data storage and the ability to build robust, scalable, and maintainable solutions.

  • Performance Optimization: The ability to optimize data pipeline performance through techniques such as query optimization, indexing, caching, and data partitioning.

  • Programming Knowledge: Knowledge of Python and SQL is essential; familiarity with Java and other languages is a plus.

  • Experience working with a remote-first and globally distributed team.

  • Experience with CI/CD tooling, including GitHub Actions and build workflows.

  • Knowledge of Confluent and AWS.