Posted on: 28 October
Data Engineer (Infra/DevOps Focus)
Company
Rackspace
Rackspace Technology is a San Antonio-based B2B cloud computing company specializing in multicloud solutions, data management, and security services for global enterprises.
Remote Hiring Policy
Rackspace Technology supports remote work and hires globally, with team members located in various regions including North America, Europe, and Asia. Specific roles may have location requirements, such as being based in certain areas of Mexico.
Job Type
Full-time
Allowed Applicant Locations
Worldwide
Job Description
We are looking for a highly skilled Azure Data Engineer with expert knowledge of cloud infrastructure and DevOps automation. In this critical hybrid role, you will design, build, optimize, and automate our entire end-to-end data platform within the Microsoft Azure ecosystem. The ideal candidate will ensure our data solutions are scalable, reliable, and deployed using modern Infrastructure as Code (IaC) and CI/CD practices.
Key Responsibilities
Data Platform Development & Engineering
Design & Implement ETL/ELT: Develop, optimize, and maintain scalable data pipelines using Python, SQL, and core Azure data services.
Azure Data Services Management: Architect and manage key Azure data components, including:
Data Lakes: Provisioning and structuring data within Azure Data Lake Storage (ADLS Gen2).
Data Processing: Implementing data transformation and analysis logic using Azure Data Factory (ADF), Azure Synapse Pipelines, and Azure Databricks (using Spark/PySpark).
Data Warehousing: Designing and optimizing the enterprise Data Warehouse in Azure Synapse Analytics (SQL Pool).
Data Modeling and Quality: Define and enforce data modeling standards and implement data quality checks within the pipelines.
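As an illustrative sketch only (not part of the posting's requirements), the kind of in-pipeline data quality check described above might look like the following minimal pure-Python example; the rule names, columns, and sample rows are hypothetical:

```python
# Minimal sketch of an in-pipeline data quality check (hypothetical rules and data).
from dataclasses import dataclass
from typing import Any, Callable


@dataclass
class Rule:
    name: str        # identifier for reporting
    column: str      # column the rule applies to
    check: Callable[[Any], bool]  # returns True when the value passes


def run_quality_checks(rows: list[dict], rules: list[Rule]) -> dict[str, int]:
    """Return a mapping of rule name -> number of failing rows."""
    failures = {rule.name: 0 for rule in rules}
    for row in rows:
        for rule in rules:
            if not rule.check(row.get(rule.column)):
                failures[rule.name] += 1
    return failures


rules = [
    Rule("customer_id_not_null", "customer_id", lambda v: v is not None),
    Rule("amount_non_negative", "amount",
         lambda v: isinstance(v, (int, float)) and v >= 0),
]
rows = [
    {"customer_id": 1, "amount": 10.0},
    {"customer_id": None, "amount": -5.0},
]
print(run_quality_checks(rows, rules))
# → {'customer_id_not_null': 1, 'amount_non_negative': 1}
```

In a real pipeline these checks would typically run as a validation step inside an ADF, Synapse, or Databricks job, failing or quarantining batches that breach a threshold.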
Cloud Infrastructure & DevOps Automation
Infrastructure as Code (IaC): Design, manage, and provision all Azure data resources (ADLS, Synapse, ADF, Databricks Clusters) using Terraform or Azure Resource Manager (ARM) Templates/Bicep.
CI/CD Implementation: Build and maintain automated Continuous Integration/Continuous Deployment (CI/CD) pipelines for all code (data, infrastructure, and application) using Azure DevOps or GitHub Actions.
Containerization & Compute: Use Docker for containerization and, where data applications require it, manage deployment environments with Azure Kubernetes Service (AKS) or Azure Container Instances (ACI).
Monitoring, Logging, & Security: Configure comprehensive monitoring and alerting using Azure Monitor and Log Analytics. Implement network security and access controls (RBAC) across the data platform.
Required Skills & Qualifications
Azure Cloud: Strong hands-on experience designing and deploying end-to-end data solutions specifically within the Azure ecosystem.
Programming: High proficiency in Python (including PySpark) and expert knowledge of SQL.
DevOps & IaC: Proven, production-level experience with Terraform (preferred) or ARM/Bicep for automating Azure infrastructure deployment.
CI/CD: Experience setting up CI/CD workflows using Azure DevOps Pipelines or GitHub Actions.
Data Tools: Deep working knowledge of Azure Data Factory, Azure Databricks, and Azure Synapse Analytics.
Orchestration: Experience with workflow orchestration tools like Azure Data Factory or Apache Airflow.
Preferred Qualifications
Azure certifications such as Azure Data Engineer Associate (DP-203) or Azure DevOps Engineer Expert (AZ-400).
Familiarity with Data Governance tools such as Azure Purview.
Experience with real-time data ingestion using Azure Event Hubs or Azure Stream Analytics.
We may use artificial intelligence (AI) tools to support parts of the hiring process, such as reviewing applications, analyzing resumes, or assessing responses. These tools assist our recruitment team but do not replace human judgment. Final hiring decisions are ultimately made by humans. If you would like more information about how your data is processed, please contact us.