<p>**** For a faster response regarding this position, please send a message to Jimmy Escobar on LinkedIn or send an email to Jimmy.Escobar@roberthalf(.com) with your resume. You can also call my office number at 424-270-9193. ****</p><p><br></p><p>We are looking for a skilled Data Engineer to join our team on a long-term contract basis in Los Angeles, California. In this role, you will design, build, and maintain robust data infrastructure to support business operations and analytics. This position offers an opportunity to work with cutting-edge technologies and contribute to impactful projects. This is a hybrid position: three days a week on-site and two days remote.</p><p><br></p><p>Responsibilities:</p><p>• Develop and implement scalable data pipelines using Apache Spark, Hadoop, and other big data technologies.</p><p>• Collaborate with cross-functional teams to understand business requirements and translate them into technical solutions.</p><p>• Create and maintain ETL processes to ensure data integrity and accessibility.</p><p>• Manage and monitor large-scale data processing systems, ensuring seamless operation.</p><p>• Design and deploy solutions for real-time data streaming using Apache Kafka (see the sketch below).</p><p>• Perform advanced data analytics to support business decision-making.</p><p>• Troubleshoot and resolve issues related to data infrastructure and applications.</p><p><br></p>
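<p>A minimal, hedged sketch of the Kafka-based streaming pipeline this posting describes, using Spark Structured Streaming; the broker address, topic name, and output paths are hypothetical placeholders, and running it also requires the spark-sql-kafka connector package.</p><pre><code># Minimal sketch: ingest a Kafka topic with Spark Structured Streaming.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("kafka-ingest").getOrCreate()

# Read a stream of raw records from Kafka; key/value arrive as binary.
raw = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # hypothetical broker
    .option("subscribe", "events")                     # hypothetical topic
    .load()
)

# Decode the payload and keep the Kafka timestamp for downstream analytics.
events = raw.select(
    col("key").cast("string"),
    col("value").cast("string").alias("payload"),
    col("timestamp"),
)

# Write micro-batches to Parquet with checkpointing for fault tolerance.
query = (
    events.writeStream.format("parquet")
    .option("path", "/data/bronze/events")             # hypothetical path
    .option("checkpointLocation", "/data/_chk/events") # hypothetical path
    .start()
)
query.awaitTermination()
</code></pre>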
<p>We are seeking a highly skilled Data Engineer to design, build, and manage our data infrastructure. The ideal candidate is an expert in writing complex SQL queries, designing efficient database schemas, and developing ETL/ELT pipelines. This role ensures data accuracy, accessibility, and performance optimization to support business intelligence, analytics, and reporting initiatives.</p><p><br></p><p><strong><em><u>Key Responsibilities</u></em></strong></p><p><br></p><p><strong>Database Design & Management</strong></p><ul><li>Design, develop, and maintain relational databases, including SQL Server, PostgreSQL, and Oracle, as well as cloud-based data warehouses.</li></ul><p><strong>Strategic SQL & Data Engineering</strong></p><ul><li>Develop advanced, optimized SQL queries, stored procedures, and functions to process and analyze large, complex datasets and deliver actionable business insights.</li></ul><p><strong>Data Pipeline Automation & Orchestration</strong></p><ul><li>Build, automate, and orchestrate ETL/ELT workflows using SQL, Python, and cloud-native tools to integrate and transform data from diverse, distributed sources.</li></ul><p><strong>Performance Optimization</strong></p><ul><li>Tune SQL queries and optimize database schemas through indexing, partitioning, and normalization to improve data retrieval and processing performance.</li></ul><p><strong>Data Integrity & Security</strong></p><ul><li>Ensure data quality, consistency, and integrity across systems.</li><li>Implement data masking, encryption, and role-based access control (RBAC).</li></ul><p><strong>Documentation</strong></p><ul><li>Maintain comprehensive technical documentation, including database schemas, data dictionaries, and ETL workflows.</li></ul>
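<p>A runnable illustration of the indexing and query-tuning work described above, using SQLite via Python so it works anywhere; the table and column names are hypothetical, and the same principle carries over to SQL Server, PostgreSQL, or Oracle.</p><pre><code># Demonstrates how a covering index changes the query plan for a lookup.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)")
cur.executemany(
    "INSERT INTO orders (customer_id, total) VALUES (?, ?)",
    [(i % 1000, i * 1.5) for i in range(100_000)],
)

# Without an index, this predicate forces a full table scan.
print(cur.execute(
    "EXPLAIN QUERY PLAN SELECT SUM(total) FROM orders WHERE customer_id = ?", (42,)
).fetchall())  # plan typically reports: SCAN orders

# A covering index on (customer_id, total) lets the engine seek directly.
cur.execute("CREATE INDEX idx_orders_customer ON orders (customer_id, total)")
print(cur.execute(
    "EXPLAIN QUERY PLAN SELECT SUM(total) FROM orders WHERE customer_id = ?", (42,)
).fetchall())  # plan typically reports: SEARCH ... USING COVERING INDEX
</code></pre>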
We are looking for an experienced Lead Data Engineer to oversee the design, implementation, and management of advanced data infrastructure in Houston, Texas. This role requires expertise in architecting scalable solutions, optimizing data pipelines, and ensuring data quality to support analytics, machine learning, and real-time processing. The ideal candidate will have a deep understanding of Lakehouse architecture and Medallion design principles to deliver robust and governed data solutions.<br><br>Responsibilities:<br>• Develop and implement scalable data pipelines to ingest, process, and store large datasets using tools such as Apache Spark, Hadoop, and Kafka.<br>• Utilize cloud platforms like AWS or Azure to manage data storage and processing, leveraging services such as S3, Lambda, and Azure Data Lake.<br>• Design and operationalize data architecture following Medallion patterns to ensure data usability and quality across Bronze, Silver, and Gold layers.<br>• Build and optimize data models and storage solutions, including Databricks Lakehouses, to support analytical and operational needs.<br>• Automate data workflows using tools like Apache Airflow and Fivetran to streamline integration and improve efficiency.<br>• Lead initiatives to establish best practices in data management, facilitating knowledge sharing and collaboration across technical and business teams.<br>• Collaborate with data scientists to provide infrastructure and tools for complex analytical models, using programming languages like Python or R.<br>• Implement and enforce data governance policies, including encryption, masking, and access controls, within cloud environments.<br>• Monitor and troubleshoot data pipelines for performance issues, applying tuning techniques to enhance throughput and reliability.<br>• Stay updated with emerging technologies in data engineering and advocate for improvements to the organization's data systems.
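<p>For the Medallion responsibilities above, here is a minimal, hedged PySpark sketch of promoting records from the Bronze layer to Silver in a Lakehouse; the storage paths and column names are hypothetical assumptions, not details from the role.</p><pre><code># Bronze -> Silver promotion: raw data stays intact for replayability,
# while Silver holds deduplicated, typed, validated records.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, to_timestamp

spark = SparkSession.builder.appName("medallion").getOrCreate()

# Bronze: raw ingested records, preserved as-is.
bronze = spark.read.format("delta").load("/mnt/lake/bronze/orders")  # hypothetical path

# Silver: cleaned and conformed data, ready for analytics.
silver = (
    bronze
    .dropDuplicates(["order_id"])                          # hypothetical key column
    .withColumn("order_ts", to_timestamp(col("order_ts")))
    .filter(col("order_id").isNotNull())
)

silver.write.format("delta").mode("overwrite").save("/mnt/lake/silver/orders")
</code></pre>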
<p>The Database Engineer will design, develop, and maintain database solutions that meet the needs of our business and clients. You will be responsible for ensuring the performance, availability, and security of our database systems while collaborating with software engineers, data analysts, and IT teams.</p><p> </p><p><strong>Key Responsibilities:</strong></p><ul><li>Design, implement, and maintain highly available and scalable database systems (e.g., SQL, NoSQL).</li><li>Optimize database performance through indexing, query optimization, and capacity planning.</li><li>Create and manage database schemas, tables, stored procedures, and triggers.</li><li>Develop and maintain ETL (Extract, Transform, Load) processes for data integration.</li><li>Ensure data integrity and consistency across distributed systems.</li><li>Monitor database performance and troubleshoot issues to ensure minimal downtime.</li><li>Collaborate with software development teams to design database architectures that align with application requirements.</li><li>Implement data security best practices, including encryption, backups, and access controls.</li><li>Stay updated on emerging database technologies and recommend solutions to enhance efficiency.</li><li>Document database configurations, processes, and best practices for internal knowledge sharing.</li></ul><p><br></p>
<p>Robert Half is seeking an experienced Data Architect to design and lead scalable, secure, and high-performing enterprise data solutions. This role will focus on building next-generation cloud data platforms, driving adoption of modern analytics technologies, and ensuring alignment with governance and security standards.</p><p><br></p><p>You’ll serve as a hands-on technical leader, partnering closely with engineering, analytics, and business teams to architect data platforms that enable advanced analytics and AI/ML initiatives. This position blends deep technical expertise with strategic thinking to help unlock the value of data across the organization.</p><p><br></p><p><strong>Key Responsibilities:</strong></p><ul><li>Design and implement end-to-end data architecture for big data and advanced analytics platforms.</li><li>Architect and build Delta Lake–based lakehouse environments from the ground up, including DLT pipelines, PySpark jobs, workflows, Unity Catalog, and Medallion architecture.</li><li>Develop scalable data models that meet performance, security, and governance requirements.</li><li>Configure and optimize clusters, notebooks, and workflows to support ETL/ELT pipelines.</li><li>Integrate cloud data platforms with supporting services such as data storage, orchestration, secrets management, and analytics tools.</li><li>Establish and enforce best practices for data governance, security, and cost optimization.</li><li>Collaborate with data engineers, analysts, and stakeholders to translate business requirements into technical solutions.</li><li>Provide technical leadership and mentorship to team members.</li><li>Monitor, troubleshoot, and optimize data pipelines to ensure reliability and efficiency.</li><li>Ensure compliance with organizational and regulatory standards related to data privacy and security.</li><li>Create and maintain documentation for architecture, processes, and governance standards.</li></ul>
<p>We are looking for an experienced Senior Data Engineer to join our team in Denver, Colorado. In this role, you will design and implement data solutions that drive business insights and operational efficiency. You will collaborate with cross-functional teams to manage data pipelines, optimize workflows, and ensure the integrity and security of data systems.</p><p><br></p><p>Responsibilities:</p><p>• Develop and maintain robust data pipelines to process and transform large datasets effectively.</p><p>• Advise on which tools and technologies to implement.</p><p>• Collaborate with stakeholders to understand data requirements and translate them into technical solutions.</p><p>• Design and implement ETL processes to facilitate seamless data integration.</p><p>• Optimize data workflows and ensure system performance meets organizational needs.</p><p>• Work with Apache Spark, Hadoop, and Kafka to build scalable data systems.</p><p>• Create and maintain SQL queries for data extraction and analysis.</p><p>• Ensure data security and integrity by adhering to best practices.</p><p>• Troubleshoot and resolve issues in data systems to minimize downtime.</p><p>• Provide technical guidance and mentorship to less experienced team members.</p><p>• Stay updated on emerging technologies to enhance data engineering practices.</p>
<p>We are on a mission to make our communities safer and more secure. Our cutting-edge public safety product aims to prevent and solve crimes through advanced analytics, visualization, and performance-driven solutions. Join us in creating a state-of-the-art platform that will revolutionize public safety. We’re seeking a Senior Full-Stack Engineer with deep expertise in NodeJS and Angular to drive the development of this product. You’ll play a key role in building highly performant, analytics-driven, and visualization-focused applications that empower decision-makers with actionable insights.</p><p><br></p><p>Key Responsibilities:</p><p>• Design and develop scalable and high-performance applications using NodeJS for backend and Angular for front-end development.</p><p>• Architect solutions with a focus on data analytics, visualizations, and real-time performance metrics that aid in crime prevention and resolution.</p><p>• Collaborate with cross-functional teams to ensure seamless integration of frontend and backend services, ensuring a fluid user experience for public safety officials and community users.</p><p>• Optimize applications for maximum speed, scalability, and security, ensuring compliance with best practices and regulatory standards.</p><p>• Develop reusable components and libraries that can be leveraged across multiple applications.</p><p>• Lead efforts in code reviews, testing, and continuous integration to ensure robust, reliable, and maintainable codebases.</p>
<p><strong>Job Title:</strong> Data Engineer</p><p><strong>Location:</strong> Washington, DC (Hybrid – Downtown DC Office)</p><p><strong>Company:</strong> Robert Half</p><p><strong>Employment Type:</strong> Contract-to-Hire</p><p><strong>Role Overview</strong></p><p>As a Data Engineer at Robert Half, you will be the backbone of our data-driven decision-making process. You aren't just "moving data"; you are architecting the flow of information that powers our localized market analytics and global recruitment engines. In the DC market, this often involves handling high-compliance data environments and integrating cutting-edge AI frameworks into traditional ETL workflows.</p><p><br></p>
The Opportunity: Be part of a dynamic team that designs, develops, and optimizes data solutions supporting enterprise-level products across diverse industries. This role provides a clear track to higher-level positions, including Lead Data Engineer and Data Architect, for those who demonstrate vision, initiative, and impact.<br><br>Key Responsibilities:<br>• Design, develop, and optimize relational database objects and data models using Microsoft SQL Server and Snowflake.<br>• Build and maintain scalable ETL/ELT pipelines for batch and streaming data using SSIS and cloud-native solutions.<br>• Integrate and utilize Redis for caching, session management, and real-time analytics (see the sketch below).<br>• Develop and maintain data visualizations and reporting solutions using Sigma Computing, SSRS, and other BI tools.<br>• Collaborate across engineering, analytics, and product teams to deliver impactful data solutions.<br>• Ensure data security, governance, and compliance across all platforms.<br>• Participate in Agile Scrum ceremonies and contribute to continuous improvement within the data engineering process.<br>• Support database deployments using DevOps practices, including version control (Git) and CI/CD pipelines (Azure DevOps, Flyway, Octopus, SonarQube).<br>• Troubleshoot and resolve performance, reliability, and scalability issues across the data platform.<br>• Mentor entry-level team members and participate in design/code reviews.
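<p>A minimal, hedged redis-py sketch of the caching pattern mentioned in the Redis bullet above; the host, key scheme, and warehouse-fetch function are hypothetical stand-ins.</p><pre><code># Cache-aside pattern: try Redis first, fall back to the warehouse on a miss.
import json
import redis

r = redis.Redis(host="localhost", port=6379, db=0)  # hypothetical host

def fetch_report_from_warehouse(report_id: str) -> dict:
    # Stand-in for an expensive warehouse query (e.g., SQL Server/Snowflake).
    return {"report_id": report_id, "rows": 12345}

def get_report(report_id: str) -> dict:
    """Return a cached report, falling back to the warehouse on a miss."""
    key = f"report:{report_id}"  # hypothetical key scheme
    cached = r.get(key)
    if cached is not None:
        return json.loads(cached)
    report = fetch_report_from_warehouse(report_id)
    r.setex(key, 300, json.dumps(report))  # expire after 5 minutes
    return report
</code></pre><p>The expiry on setex keeps the cache self-cleaning, so stale analytics data ages out without a separate invalidation job.</p>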
<p>Essential Duties and Responsibilities:</p><p> · Knowledge of database coding and tables, as well as general database management</p><p> · Understanding of client management, support, and communicating progress and timelines accordingly</p><p> · Organizes and/or leads Informatics projects in the implementation/use of new data warehouse tools and systems</p><p> · Ability to train new hires, as well as lead training of new client staff members</p><p> · Understanding of data schemas and the analysis of database performance and accuracy</p><p> · Understanding of ETL tools, OLAP design, and data quality processes</p><p> · Knowledge of the Business Intelligence life cycle: planning, design, development, validation, deployment, documentation, and ongoing support</p><p> · Working knowledge of electronic medical records software (eCW, NextGen, etc.) and the backend storage of that data</p><p> · Ability to generate effective probability modeling and statistics as they pertain to healthcare outcomes and financial risks</p><p> · Ability to manage lengthy and complicated projects throughout their life cycle and meet the associated deadlines</p><p> · Development, maintenance, and technical support of various reports and dashboards</p><p> · Knowledge of Microsoft® SQL, including its coding language, table creation, stored procedures, and query design</p><p> · Fundamental understanding of outpatient healthcare workflows</p><p> · Knowledge of relational database concepts and flat/formatted file processing.</p><p> · Possesses a strong commitment to data validation processes to ensure accuracy of reporting (internal quality control)</p><p> · Possesses a firm grasp of patient confidentiality and system security practices to prevent HIPAA and other security violations.</p><p> · Knowledge of IBM Cognos® or other database reporting software such as SAS, SPSS, and Crystal Reports</p><p> · Ability to meet the needs of other members of the Informatics department to maximize efficiency and minimize complexity of end-user products</p><p><br></p><p>Requirements:</p><p> · Education: Bachelor's Degree</p><p> · Proven experience as a dbt Developer or in a similar Data Engineer role.</p><p> · Expert-level SQL skills, with the ability to write, tune, and debug complex queries across large datasets.</p><p> · Strong experience with Snowflake or comparable data warehouse technologies (BigQuery, Redshift, etc.).</p><p> · Proficiency in Python for scripting, automation, or data manipulation.</p><p> · Solid understanding of data warehousing concepts, modeling, and ELT workflows.</p><p> · Familiarity with Git or other version control systems.</p><p> · Experience working with cloud-based platforms such as AWS, GCP, or Azure.</p><p><br></p>
<p>Position Overview</p><p>We are seeking a Data Engineer to support and enhance a Databricks‑based data platform during its development phase. This role is focused on building reliable, scalable data solutions early in the lifecycle, not production firefighting.</p><p>The ideal candidate brings hands‑on experience with Databricks, PySpark, Python, and a working understanding of Azure cloud services. You will partner closely with Data Engineering teams to ensure pipelines, notebooks, and workflows are designed for long‑term scalability and production readiness.</p><p><br></p><p>Key Responsibilities</p><ul><li>Develop and enhance Databricks notebooks, jobs, and workflows</li><li>Write and optimize PySpark and Python code for distributed data processing</li><li>Assist in designing scalable and reliable data pipelines</li><li>Apply Spark performance best practices: partitioning, caching, joins, file sizing (see the sketch after this list)</li><li>Work with Delta Lake tables, schemas, and data models</li><li>Perform data validation and quality checks during development cycles</li><li>Support cluster configuration, sizing, and tuning for development workloads</li><li>Identify performance bottlenecks early and recommend improvements</li><li>Partner with Data Engineers to prepare solutions for future production rollout</li><li>Document development standards, patterns, and best practices</li></ul>
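<p>A hedged PySpark sketch of the performance practices named above: broadcast joins, selective caching, and controlling output partitioning to manage file sizes. Table paths and column names are hypothetical.</p><pre><code># Spark tuning sketch: avoid shuffles, reuse results, control file counts.
from pyspark.sql import SparkSession
from pyspark.sql.functions import broadcast

spark = SparkSession.builder.appName("tuning-demo").getOrCreate()

facts = spark.read.format("delta").load("/mnt/lake/silver/transactions")  # hypothetical
dims = spark.read.format("delta").load("/mnt/lake/silver/stores")         # small table

# Broadcast the small dimension table to avoid a shuffle-heavy join.
joined = facts.join(broadcast(dims), on="store_id")

# Cache only when the result feeds several downstream actions.
joined.cache()
daily = joined.groupBy("store_id", "date").sum("amount")

# Repartition before writing so the sink avoids many tiny output files.
(
    daily.repartition(32)
    .write.format("delta")
    .mode("overwrite")
    .save("/mnt/lake/gold/daily_store_sales")  # hypothetical path
)
</code></pre>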
<p>We are looking for a skilled Data Engineer to design and enhance scalable data solutions that meet diverse business objectives. This role involves collaborating with cross-functional teams to identify data requirements, improve existing pipelines, and ensure efficient data processing. The ideal candidate will bring expertise in server-side development, database management, and software deployment, working in a dynamic and fast-paced environment.</p><p><br></p><p>Responsibilities</p><ul><li>Enhance and optimize existing data storage platforms, including relational and NoSQL databases, to improve data accessibility, performance, and persistence</li><li>Apply advanced database techniques such as tuning, indexing, views, and stored procedures to support efficient and reliable data management</li><li>Develop server-side Python services utilizing concurrency patterns such as asynchronous programming and multi-threading, and leveraging libraries such as NumPy and Pandas</li><li>Design, build, and maintain APIs using modern frameworks, with experience across communication protocols including gRPC and socket-based implementations</li><li>Create, manage, and maintain CI/CD pipelines using DevOps and artifact management tools to enable efficient and reliable software delivery</li><li>Design and deploy applications in enterprise Linux environments, ensuring stability, performance, and scalability</li><li>Partner with cross-functional teams to gather requirements and deliver technical solutions aligned with business objectives</li><li>Follow software development lifecycle best practices to ensure high-quality, maintainable, and secure solutions</li><li>Work effectively in iterative, fast-paced development environments while consistently delivering high-quality outcomes on schedule</li></ul><p><br></p>
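<p>As a hedged illustration of the asynchronous server-side Python mentioned above, here is a minimal asyncio sketch; the endpoint URLs are hypothetical, and aiohttp is assumed as the HTTP client rather than prescribed by the role.</p><pre><code># Fetch many endpoints concurrently instead of sequentially.
import asyncio
import aiohttp

async def fetch(session: aiohttp.ClientSession, url: str) -> dict:
    async with session.get(url) as resp:
        resp.raise_for_status()
        return await resp.json()

async def main() -> None:
    urls = [f"https://api.example.com/items/{i}" for i in range(10)]  # hypothetical
    async with aiohttp.ClientSession() as session:
        # gather() issues all requests concurrently on one event loop.
        results = await asyncio.gather(*(fetch(session, u) for u in urls))
    print(len(results), "responses")

if __name__ == "__main__":
    asyncio.run(main())
</code></pre>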
Key Responsibilities:<br><br>· Design, develop, and maintain scalable backend systems to support data warehousing and data lake initiatives.<br><br>· Build and optimize ETL/ELT processes to extract, transform, and load data from various sources into centralized data repositories.<br><br>· Develop and implement integration solutions for seamless data exchange between systems, applications, and platforms.<br><br>· Collaborate with data architects, analysts, and other stakeholders to define and implement data models, schemas, and storage solutions.<br><br>· Ensure data quality, consistency, and security by implementing best practices and monitoring frameworks.<br><br>· Monitor and troubleshoot data pipelines and systems to ensure high availability and performance.<br><br>· Stay up-to-date with emerging technologies and trends in data engineering and integration to recommend improvements and innovations.<br><br>· Document technical designs, processes, and standards for the team and stakeholders.<br><br><br><br>Qualifications:<br><br>· Bachelor’s degree in Computer Science, Engineering, or a related field; equivalent experience considered.<br><br>· 5+ years of proven experience as a Data Engineer or in a similar backend development role.<br><br>· Strong proficiency in programming languages such as Python, Java, or Scala.<br><br>· Hands-on experience with ETL/ELT tools and frameworks (e.g., Apache Airflow, Talend, Informatica, etc.).<br><br>· Extensive knowledge of relational and non-relational databases (e.g., SQL, NoSQL, PostgreSQL, MongoDB).<br><br>· Expertise in building and managing enterprise data warehouses (e.g., Snowflake, Amazon Redshift, Google BigQuery) and data lakes (e.g., AWS S3, Azure Data Lake).<br><br>· Familiarity with cloud platforms (AWS, Azure, Google Cloud) and their data services.<br><br>· Experience with API integrations and data exchange protocols (e.g., REST, SOAP, JSON, XML); see the sketch below.<br><br>· Solid understanding of data governance, security, and compliance standards.<br><br>· Strong analytical and problem-solving skills with attention to detail.<br><br>· Excellent communication and collaboration abilities.<br><br><br><br>Preferred Qualifications:<br><br>· Certifications in cloud platforms (AWS Certified Data Analytics, Azure Data Engineer, etc.)<br><br>· Experience with big data technologies (e.g., Apache Hadoop, Spark, Kafka).<br><br>· Knowledge of data visualization tools (e.g., Tableau, Power BI) for supporting downstream analytics.<br><br>· Familiarity with DevOps practices and tools (e.g., Docker, Kubernetes, Jenkins).
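<p>As a hedged illustration of the REST-based data exchange mentioned in the qualifications, here is a minimal Python sketch using the requests library; the endpoint, credential, and payload shape are hypothetical.</p><pre><code># Pull JSON records from a (hypothetical) REST endpoint for downstream loading.
import requests

BASE_URL = "https://api.example.com/v1"  # hypothetical endpoint

def pull_records(since: str) -> list:
    """Fetch records modified since the given ISO timestamp."""
    resp = requests.get(
        f"{BASE_URL}/records",
        params={"modified_since": since},
        headers={"Authorization": "Bearer TOKEN_PLACEHOLDER"},  # placeholder credential
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["records"]  # hypothetical payload shape

if __name__ == "__main__":
    rows = pull_records("2024-01-01T00:00:00Z")
    print(f"pulled {len(rows)} records")
</code></pre>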
We are looking for a skilled Data Engineer to join our team in Philadelphia, Pennsylvania. In this long-term contract position, you will play a key role in managing and optimizing large-scale data pipelines and systems within the healthcare industry. Your expertise will contribute to the development of robust solutions for data processing, analysis, and integration.<br><br>Responsibilities:<br>• Design, develop, and maintain large-scale data pipelines to support business needs.<br>• Optimize data workflows using tools such as Apache Spark and Python.<br>• Implement and manage ETL processes for seamless data transformation and integration.<br>• Collaborate with cross-functional teams to ensure data solutions align with organizational goals.<br>• Monitor and troubleshoot data systems to ensure consistent performance and reliability.<br>• Work with Apache Hadoop and Apache Kafka to enhance data storage and streaming capabilities.<br>• Ensure compliance with data security and privacy standards.<br>• Analyze and interpret complex datasets to provide actionable insights.<br>• Document processes and solutions to support future scalability and maintenance.
We are looking for an experienced Data Engineer Lead to join our team in Columbus, Ohio on a contract basis. In this role, you will be responsible for leading the development and operation of data pipelines, ensuring seamless integration and delivery of data for analytics initiatives. As a senior member of the team, you will also take on a mentorship role, guiding and developing less experienced team members while driving technical excellence.<br><br>Responsibilities:<br>• Design, build, and maintain robust data pipelines to support enterprise-wide analytics initiatives.<br>• Collaborate with data science and business teams to refine data requirements and ensure streamlined data consumption.<br>• Lead efforts to renovate and automate data management infrastructure to enhance integration and processing efficiency.<br>• Implement and enforce data quality standards to ensure accuracy, consistency, and reliability of data.<br>• Provide training and guidance to colleagues on data preparation techniques and tools.<br>• Partner with data governance teams to curate and promote reusable data content across the organization.<br>• Communicate complex data insights effectively to both technical and non-technical stakeholders.<br>• Stay informed on emerging technologies, assessing their potential impact and integrating relevant advancements.<br>• Offer leadership, coaching, and mentorship to team members, fostering a collaborative and growth-oriented environment.<br>• Work closely with stakeholders to understand business goals and align services to meet those needs.
<p>We are looking for a skilled and innovative Data Engineer to join our team in Grove City, Ohio. In this role, you will be responsible for designing and implementing advanced data pipelines, ensuring the seamless integration and accessibility of data across various systems. As a key player in our analytics and data infrastructure efforts, you will contribute to building a robust and scalable data ecosystem to support AI and machine learning initiatives.</p><p><br></p><p>Responsibilities:</p><p>• Design and develop scalable data pipelines to ingest, process, and transform data from multiple sources.</p><p>• Optimize data models to support analytics, forecasting, and AI/ML applications.</p><p>• Collaborate with internal teams and external partners to enhance data engineering capabilities.</p><p>• Implement and enforce data governance, security, and quality standards across hybrid cloud environments.</p><p>• Work closely with analytics and data science teams to ensure seamless data accessibility and integration.</p><p>• Develop and maintain data products and services to enable actionable insights.</p><p>• Troubleshoot and improve the performance of data workflows and storage systems.</p><p>• Align data systems across departments to create a unified and reliable data infrastructure.</p><p>• Support innovation by leveraging big data tools and frameworks such as Databricks and Spark.</p>
<p>We’re looking for a <strong>Senior Data Engineer</strong> to design, build, and optimize modern data pipelines and architecture. You’ll support analytics, reporting, and data‑driven applications by creating scalable, efficient data systems across cloud environments.</p><p><strong>What You’ll Do</strong></p><ul><li>Design and build <strong>ETL/ELT pipelines</strong> across cloud platforms (Azure, AWS, or GCP)</li><li>Architect and maintain Data Lake / Lakehouse environments</li><li>Develop and optimize data ingestion, transformation, and orchestration workflows</li><li>Ensure data quality, reliability, and scalability across all pipelines</li><li>Collaborate with BI developers, analysts, and business stakeholders</li><li>Implement best practices around versioning, testing, and deployment</li><li>Support real‑time and batch data processing initiatives</li></ul><p><br></p>
We are looking for a skilled Data Engineer to join our team on a long-term contract basis. This position offers the opportunity to work remotely while contributing to critical data management and integration efforts. The ideal candidate will have hands-on experience with customer master data in SAP ECC 6.0 (ECC6), and the ability to create, maintain, and manage data effectively.<br><br>Responsibilities:<br>• Develop and maintain customer master data within ECC6, ensuring data accuracy and consistency.<br>• Create new customer profiles and manage existing ones, maintaining high standards of data integrity.<br>• Support the integration process by working with custom tables related to customer data.<br>• Collaborate with cross-functional teams to ensure seamless data flow and effective data management.<br>• Utilize tools such as Apache Spark, Python, and ETL processes to extract, transform, and load data efficiently.<br>• Leverage Apache Hadoop for scalable data storage and processing solutions.<br>• Implement Apache Kafka to enable real-time data streaming and integration.<br>• Troubleshoot and resolve data-related issues, ensuring system reliability.<br>• Provide documentation and training to stakeholders on data management processes.<br>• Stay updated on industry best practices and emerging technologies to enhance data engineering workflows.
<p>We’re looking for a Senior Cloud Engineer to design, build, and operate secure, scalable infrastructure on Google Cloud Platform (GCP). You’ll lead with GKE for container orchestration, Infrastructure as Code (Terraform or Ansible) for repeatability, and scripting (Python, Shell/Bash) to automate everything from provisioning to observability. This role bridges architecture and hands-on delivery, partnering closely with DevOps, SRE, Security, and application teams.</p><p><br></p><p>What You’ll Do</p><ul><li><strong>Architect & implement</strong> highly available, cost-efficient GCP environments (VPCs, subnets, routing, load balancers, Cloud NAT, Cloud DNS, Cloud Storage, Cloud SQL/Spanner/BigQuery as applicable).</li><li><strong>Design, deploy, and operate GKE</strong> clusters (node pools, autoscaling, upgrades, ingress, CNI overlays, workload identity, network policies, pod security).</li><li><strong>Automate infrastructure</strong> with Terraform or Ansible (modules/roles, workspaces/environments, pipelines, policy-as-code).</li><li><strong>Build platform tooling</strong> and automation in Python/Shell/Bash (provisioning, configuration drift remediation, release packaging, operational runbooks; see the drift-check sketch after this list).</li><li><strong>Implement observability</strong> (Cloud Monitoring/Logging, Prometheus/Grafana, OpenTelemetry) and actionable alerting/SLOs.</li><li><strong>Harden security</strong> (IAM least-privilege, service accounts, secrets management, private clusters, image scanning, workload identity, org policies).</li><li><strong>Enable CI/CD</strong> for apps and infra (Cloud Build/GitHub Actions/GitLab CI, artifact registries, blue/green or canary deployment strategies).</li><li><strong>Drive reliability:</strong> capacity planning, performance tuning, backup/DR strategies, incident response, root cause analysis, and postmortems.</li><li><strong>Mentor engineers</strong>, codify best practices, and contribute to architectural standards and roadmaps.</li></ul><p><br></p>
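<p>A small, hedged example of the drift-remediation tooling this role describes: a Python wrapper around <code>terraform plan -detailed-exitcode</code>, whose documented exit codes are 0 (no changes), 1 (error), and 2 (pending changes, i.e., drift). The environment directory is a hypothetical placeholder.</p><pre><code># Detect configuration drift by asking Terraform whether a plan has changes.
import subprocess
import sys

def check_drift(workdir: str) -> bool:
    """Return True if live infrastructure has drifted from the IaC state."""
    result = subprocess.run(
        ["terraform", "plan", "-detailed-exitcode", "-input=false", "-no-color"],
        cwd=workdir,
        capture_output=True,
        text=True,
    )
    if result.returncode == 1:
        raise RuntimeError(f"terraform plan failed:\n{result.stderr}")
    return result.returncode == 2  # 2 means changes pending

if __name__ == "__main__":
    drifted = check_drift("envs/prod")  # hypothetical environment directory
    print("drift detected" if drifted else "in sync")
    sys.exit(2 if drifted else 0)
</code></pre><p>Propagating Terraform's exit code makes the script easy to wire into a scheduled CI job that pages on drift.</p>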
<p><strong>Senior Cloud Engineer</strong></p><p>Austin, TX | Hybrid | Contract</p><p><br></p><p>We are partnering with an Austin-based client to identify a Senior Cloud Engineer for their 6-month contract engagement. The Senior Cloud Engineer will design, build, and support cloud infrastructure across enterprise environments. This role is hands‑on and focuses on architecting and automating cloud solutions, with an emphasis on databases, identity, networking, and security. You will work closely with engineering, architecture, and product teams to drive cloud adoption and ensure reliable, scalable, secure platform delivery.</p><p><br></p><p><strong>Responsibilities:</strong></p><ul><li>Build, update, and troubleshoot cloud infrastructure across large, complex environments.</li><li>Develop and automate infrastructure using Terraform and other IaC tools.</li><li>Create reusable, modular, and orchestrated code to deploy resources consistently and efficiently.</li><li>Analyze technical requirements and propose meaningful, actionable solutions.</li><li>Research and implement new tools, patterns, and cloud-native technologies.</li><li>Continuously identify opportunities to improve processes, tooling, and technical approaches.</li></ul>
We are looking for an experienced Senior Data Engineer to join our team in Atlanta, Georgia. This role is ideal for someone with a strong background in data architecture, cloud platforms, and analytics tools. You will play a key role in designing, building, and optimizing data systems to support business operations and decision-making.<br><br>Responsibilities:<br>• Develop and maintain scalable data models and database designs to support business needs.<br>• Implement and manage data integration workflows using ETL processes and tools.<br>• Build and optimize data lakes and Lakehouse architectures on Azure platforms.<br>• Utilize Microsoft Fabric and Azure Databricks to create advanced data solutions.<br>• Design and develop dashboards and reports using Power BI to provide actionable insights.<br>• Ensure data governance by establishing policies, procedures, and standards for data use.<br>• Collaborate with cross-functional teams to align data strategies with organizational goals.<br>• Leverage Python and SQL for data analysis, transformation, and automation.<br>• Work with middleware solutions like MuleSoft for efficient data communication and integration.<br>• Stay updated on emerging technologies to continuously improve data engineering practices.
We are looking for a skilled Data Engineer to join our team in Houston, Texas. As part of the Manufacturing industry, you will play a pivotal role in developing and maintaining data infrastructure critical to our operations. This is a long-term contract position that offers the opportunity to work on innovative projects and collaborate with a dynamic team.<br><br>Responsibilities:<br>• Design and implement scalable data pipelines to support business operations and analytics.<br>• Develop, test, and maintain ETL processes for efficient data extraction, transformation, and loading.<br>• Utilize tools such as Apache Spark and Hadoop to manage and process large datasets.<br>• Integrate and optimize data streaming platforms like Apache Kafka.<br>• Collaborate with cross-functional teams to ensure data solutions align with organizational goals.<br>• Monitor and troubleshoot data systems to ensure optimal performance and reliability.<br>• Create and maintain documentation for data processes and systems.<br>• Stay updated on emerging technologies and recommend improvements to enhance data engineering practices.<br>• Ensure data security and compliance with industry standards and regulations.
We are looking for an experienced Senior Data Engineer with a strong background in Python and modern data engineering tools to join our team in West Des Moines, Iowa. This is a long-term contract position that requires expertise in designing, building, and optimizing data pipelines and working with cloud-based data warehouses. If you thrive in a collaborative environment and have a passion for transforming raw data into actionable insights, we encourage you to apply.<br><br>Responsibilities:<br>• Develop, debug, and optimize Python-based data pipelines using frameworks such as Flask, Django, or FastAPI.<br>• Design and implement data transformations in a data warehouse using tools like dbt, ensuring high-quality analytics-ready datasets.<br>• Utilize Amazon Redshift and Snowflake for managing large-scale data storage and performing advanced querying and optimization.<br>• Automate data integration processes using platforms like Fivetran and orchestration tools such as Prefect or Airflow.<br>• Build reusable and maintainable data models to improve performance and scalability for analytics and reporting.<br>• Conduct data analysis and visualization leveraging Python libraries such as NumPy, Pandas, TensorFlow, and PyTorch.<br>• Manage version control for data engineering projects using Git and GitHub.<br>• Ensure data quality through automated testing and validation processes.<br>• Document workflows, code, and data transformations following best practices for readability and maintainability.<br>• Optimize cloud-based data warehouse and lake platforms for performance and integration of new data sources.
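<p>To illustrate the orchestration work described above, here is a minimal, hedged Airflow DAG sketch chaining a hypothetical ingestion trigger ahead of a dbt run; the task callables, schedule, and dbt project path are assumptions, not details from the role.</p><pre><code># A daily DAG: kick off a source sync, then run dbt transformations.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator
from airflow.operators.python import PythonOperator

def trigger_ingest(**context):
    # Placeholder for starting an ingestion sync (e.g., via a vendor API).
    print("triggering source sync")

with DAG(
    dag_id="daily_warehouse_refresh",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    ingest = PythonOperator(task_id="trigger_ingest", python_callable=trigger_ingest)
    transform = BashOperator(
        task_id="dbt_run",
        bash_command="dbt run --project-dir /opt/dbt/warehouse",  # hypothetical path
    )
    ingest >> transform  # transformations wait for ingestion to finish
</code></pre><p>Keeping ingestion and transformation as separate tasks lets each step be retried on its own when a run fails partway through.</p>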