<p>We are looking for a skilled and innovative Data Engineer to join our team in Grove City, Ohio. In this role, you will be responsible for designing and implementing advanced data pipelines, ensuring the seamless integration and accessibility of data across various systems. As a key player in our analytics and data infrastructure efforts, you will contribute to building a robust and scalable data ecosystem to support AI and machine learning initiatives.</p><p><br></p><p>Responsibilities:</p><p>• Design and develop scalable data pipelines to ingest, process, and transform data from multiple sources.</p><p>• Optimize data models to support analytics, forecasting, and AI/ML applications.</p><p>• Collaborate with internal teams and external partners to enhance data engineering capabilities.</p><p>• Implement and enforce data governance, security, and quality standards across hybrid cloud environments.</p><p>• Work closely with analytics and data science teams to ensure seamless data accessibility and integration.</p><p>• Develop and maintain data products and services to enable actionable insights.</p><p>• Troubleshoot and improve the performance of data workflows and storage systems.</p><p>• Align data systems across departments to create a unified and reliable data infrastructure.</p><p>• Support innovation by leveraging big data tools and frameworks such as Databricks and Spark.</p>
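<p>For context, a minimal sketch of the kind of Spark ingestion pipeline this role describes, written in PySpark; the source path, column names, and output location are illustrative assumptions, not details from the position:</p><pre><code># Minimal PySpark ingestion sketch -- paths and column names are illustrative.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders_ingest").getOrCreate()

# Ingest raw CSV from a hypothetical landing zone.
raw = (spark.read
       .option("header", "true")
       .option("inferSchema", "true")
       .csv("/mnt/landing/orders/"))

# Basic cleansing and typing before exposing the data to analytics and AI/ML.
clean = (raw
         .dropDuplicates(["order_id"])
         .withColumn("order_ts", F.to_timestamp("order_ts"))
         .withColumn("order_date", F.to_date("order_ts"))
         .filter(F.col("order_id").isNotNull()))

# Write partitioned Parquet for downstream consumption.
(clean.write
 .mode("overwrite")
 .partitionBy("order_date")
 .parquet("/mnt/curated/orders/"))
</code></pre>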
We are looking for a skilled Data Engineer to join our team on a long-term contract basis. This position offers the opportunity to work remotely while contributing to critical data management and integration efforts. The ideal candidate will have hands-on experience with customer master data in ECC6, and the ability to create, maintain, and manage data effectively.<br><br>Responsibilities:<br>• Develop and maintain customer master data within ECC6, ensuring data accuracy and consistency.<br>• Create new customer profiles and manage existing ones, maintaining high standards of data integrity.<br>• Support the integration process by working with custom tables related to customer data.<br>• Collaborate with cross-functional teams to ensure seamless data flow and effective data management.<br>• Utilize tools such as Apache Spark, Python, and ETL processes to extract, transform, and load data efficiently.<br>• Leverage Apache Hadoop for scalable data storage and processing solutions.<br>• Implement Apache Kafka to enable real-time data streaming and integration.<br>• Troubleshoot and resolve data-related issues, ensuring system reliability.<br>• Provide documentation and training to stakeholders on data management processes.<br>• Stay updated on industry best practices and emerging technologies to enhance data engineering workflows.
<p>We are seeking a SQL Server Data Engineer.</p><p>Location: Albuquerque, NM (Local preferred)</p><p>Work Type: Full-Time | Onsite 3+ days/week | Contract-to-Hire option</p><p><br></p><p>We’re looking for a SQL Server Data Engineer to support and optimize our legacy Operating Budget Management System (OBMS) environment. This role is ideal for someone experienced in stored‑procedure–driven systems, SQL performance tuning, and SSRS reporting.</p><p><br></p><p>Responsibilities include but are not limited to:</p><p>Maintain and optimize T‑SQL code, stored procedures, and functions.</p><p>Perform query tuning, indexing, and performance diagnostics.</p><p>Develop and deploy SSRS reports; troubleshoot reporting issues.</p><p>Translate business requirements into technical solutions.</p><p>Support database design and ETL/data integration efforts.</p><p>Document changes and follow change‑management best practices.</p><p><br></p>
The Opportunity: Be part of a dynamic team that designs, develops, and optimizes data solutions supporting enterprise-level products across diverse industries. This role provides a clear track to higher-level positions, including Lead Data Engineer and Data Architect, for those who demonstrate vision, initiative, and impact.<br><br>Key Responsibilities:<br>• Design, develop, and optimize relational database objects and data models using Microsoft SQL Server and Snowflake.<br>• Build and maintain scalable ETL/ELT pipelines for batch and streaming data using SSIS and cloud-native solutions.<br>• Integrate and utilize Redis for caching, session management, and real-time analytics (see the sketch below).<br>• Develop and maintain data visualizations and reporting solutions using Sigma Computing, SSRS, and other BI tools.<br>• Collaborate across engineering, analytics, and product teams to deliver impactful data solutions.<br>• Ensure data security, governance, and compliance across all platforms.<br>• Participate in Agile Scrum ceremonies and contribute to continuous improvement within the data engineering process.<br>• Support database deployments using DevOps practices, including version control (Git) and CI/CD pipelines (Azure DevOps, Flyway, Octopus, SonarQube).<br>• Troubleshoot and resolve performance, reliability, and scalability issues across the data platform.<br>• Mentor entry-level team members and participate in design/code reviews.
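For context, a minimal sketch of the cache-aside pattern the Redis bullet above refers to, using the redis-py client; the connection details, key naming, and TTL are illustrative assumptions:
<pre><code># Cache-aside pattern for an expensive query -- connection details are illustrative.
import json
import redis  # redis-py client

cache = redis.Redis(host="localhost", port=6379, decode_responses=True)
TTL_SECONDS = 300  # cache results for five minutes

def get_report(report_key, run_query):
    """Return cached report data, falling back to the database on a miss."""
    cached = cache.get(report_key)
    if cached is not None:
        return json.loads(cached)
    result = run_query()  # hit SQL Server / Snowflake here
    cache.setex(report_key, TTL_SECONDS, json.dumps(result))
    return result
</code></pre>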
We are looking for a skilled Data Engineer to join our team in Houston, Texas. As part of the Manufacturing industry, you will play a pivotal role in developing and maintaining data infrastructure critical to our operations. This is a long-term contract position that offers the opportunity to work on innovative projects and collaborate with a dynamic team.<br><br>Responsibilities:<br>• Design and implement scalable data pipelines to support business operations and analytics.<br>• Develop, test, and maintain ETL processes for efficient data extraction, transformation, and loading.<br>• Utilize tools such as Apache Spark and Hadoop to manage and process large datasets.<br>• Integrate and optimize data streaming platforms like Apache Kafka.<br>• Collaborate with cross-functional teams to ensure data solutions align with organizational goals.<br>• Monitor and troubleshoot data systems to ensure optimal performance and reliability.<br>• Create and maintain documentation for data processes and systems.<br>• Stay updated on emerging technologies and recommend improvements to enhance data engineering practices.<br>• Ensure data security and compliance with industry standards and regulations.
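As one concrete illustration of the Spark and Kafka responsibilities above, here is a minimal Structured Streaming sketch; the broker address, topic, and storage paths are assumptions for illustration, and the Kafka connector package must be available on the cluster:
<pre><code># Structured Streaming sketch: Kafka topic to Parquet -- broker, topic, and
# paths are illustrative assumptions.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("sensor_stream").getOrCreate()

# Subscribe to a hypothetical plant-floor telemetry topic.
stream = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")
          .option("subscribe", "sensor-readings")
          .load())

# Kafka delivers key/value as binary; cast the payload before parsing.
readings = stream.select(F.col("value").cast("string").alias("payload"))

# Land micro-batches for downstream Spark/Hadoop processing.
query = (readings.writeStream
         .format("parquet")
         .option("path", "/data/raw/sensor-readings/")
         .option("checkpointLocation", "/data/checkpoints/sensor-readings/")
         .start())
query.awaitTermination()  # blocks; a real job would manage lifecycle externally
</code></pre>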
We are looking for an experienced Data Engineer Lead to join our team in Columbus, Ohio on a contract basis. In this role, you will be responsible for leading the development and operation of data pipelines, ensuring seamless integration and delivery of data for analytics initiatives. As a senior member of the team, you will also take on a mentorship role, guiding and developing less experienced team members while driving technical excellence.<br><br>Responsibilities:<br>• Design, build, and maintain robust data pipelines to support enterprise-wide analytics initiatives.<br>• Collaborate with data science and business teams to refine data requirements and ensure streamlined data consumption.<br>• Lead efforts to renovate and automate data management infrastructure to enhance integration and processing efficiency.<br>• Implement and enforce data quality standards to ensure accuracy, consistency, and reliability of data.<br>• Provide training and guidance to colleagues on data preparation techniques and tools.<br>• Partner with data governance teams to curate and promote reusable data content across the organization.<br>• Communicate complex data insights effectively to both technical and non-technical stakeholders.<br>• Stay informed on emerging technologies, assessing their potential impact and integrating relevant advancements.<br>• Offer leadership, coaching, and mentorship to team members, fostering a collaborative and growth-oriented environment.<br>• Work closely with stakeholders to understand business goals and align services to meet those needs.
We are looking for a skilled Data Engineer to join our team in Philadelphia, Pennsylvania. In this long-term contract position, you will play a key role in managing and optimizing large-scale data pipelines and systems within the healthcare industry. Your expertise will contribute to the development of robust solutions for data processing, analysis, and integration.<br><br>Responsibilities:<br>• Design, develop, and maintain large-scale data pipelines to support business needs.<br>• Optimize data workflows using tools such as Apache Spark and Python.<br>• Implement and manage ETL processes for seamless data transformation and integration.<br>• Collaborate with cross-functional teams to ensure data solutions align with organizational goals.<br>• Monitor and troubleshoot data systems to ensure consistent performance and reliability.<br>• Work with Apache Hadoop and Apache Kafka to enhance data storage and streaming capabilities.<br>• Ensure compliance with data security and privacy standards.<br>• Analyze and interpret complex datasets to provide actionable insights.<br>• Document processes and solutions to support future scalability and maintenance.
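To make the ETL and data-quality responsibilities above concrete, here is a small sketch of a transform step with validation gates, using pandas; the column names and thresholds are hypothetical, not taken from any actual pipeline:
<pre><code># Sketch of an ETL transform with basic data-quality gates -- column names and
# thresholds are illustrative.
import pandas as pd

REQUIRED = ["patient_id", "encounter_date", "charge_amount"]

def transform(frame: pd.DataFrame) -> pd.DataFrame:
    """Validate and normalize a claims extract before loading."""
    missing = [c for c in REQUIRED if c not in frame.columns]
    if missing:
        raise ValueError(f"extract is missing required columns: {missing}")

    out = frame.dropna(subset=["patient_id"]).copy()
    out["encounter_date"] = pd.to_datetime(out["encounter_date"], errors="coerce")
    out["charge_amount"] = pd.to_numeric(out["charge_amount"], errors="coerce")

    # Reject the batch if too many rows failed type coercion.
    bad_ratio = out["charge_amount"].isna().mean()
    if bad_ratio > 0.05:
        raise ValueError(f"{bad_ratio:.1%} of rows have unparseable amounts")
    return out
</code></pre>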
<p>IMMEDIATE HIRE NEEDED. Interviews to begin the first week of February.</p><p><br></p><p>We are looking for a skilled Snowflake Marketing Data Engineer to join our team in Tampa, Florida, preferably on a hybrid in-office schedule (2 to 3 days remote per week); fully remote candidates may be considered depending on the strength of the match.</p><p><br></p><p>In this role, you will be responsible for designing, implementing, and maintaining data solutions that support critical business operations. Your expertise will play a key part in driving data-driven decisions and optimizing performance across various platforms.</p><p><br></p><p>Responsibilities:</p><p>• Develop and maintain ETL processes to efficiently extract, transform, and load data from multiple sources.</p><p>• Analyze marketing data to uncover insights and support strategic decision-making.</p><p>• Create and manage dashboards and reports using Power BI to visualize data effectively.</p><p>• Integrate and leverage tools like Braze and Google Analytics to enhance data tracking and reporting capabilities.</p><p>• Collaborate with cross-functional teams to ensure the accuracy and reliability of data systems.</p><p>• Optimize database performance and troubleshoot any issues related to data pipelines.</p><p>• Document data workflows and provide training to stakeholders on best practices.</p><p>• Work with cloud-based platforms, such as Snowflake, to store and manage large datasets.</p><p>• Ensure data security and compliance with company policies and standards.</p>
<p>We are looking for an experienced Data Engineer to join a dynamic team in Oklahoma City, Oklahoma. In this role, you will develop, optimize, and maintain the data infrastructure that supports analytics, business intelligence, and data-driven decision-making, using Snowflake, Matillion, and other tools. The position is in-office to allow close collaboration with the team. No third parties, please.</p><p><br></p><p>Responsibilities:</p><p>• Design, develop, and maintain scalable data pipelines to support data integration and real-time processing.</p><p>• Implement and manage data warehouse solutions, with a strong focus on Snowflake architecture and optimization (see the sketch below).</p><p>• Write efficient and effective scripts and tools using Python to automate workflows and enhance data processing capabilities.</p><p>• Work with SQL Server to design, query, and optimize relational databases in support of analytics and reporting needs.</p><p>• Monitor and troubleshoot data pipelines, resolving any performance or reliability issues.</p><p>• Ensure data quality, governance, and integrity by implementing and enforcing best practices.</p>
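<p>As a rough illustration of the Snowflake-plus-Python automation this role centers on, here is a minimal sketch using the Snowflake Python connector; the account, credentials, and table names are placeholders, and in practice credentials would come from a secrets manager:</p><pre><code># Minimal Snowflake automation sketch -- all identifiers are placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="myaccount",        # illustrative account identifier
    user="etl_service",
    password="********",        # use a secrets manager in practice
    warehouse="TRANSFORM_WH",
    database="ANALYTICS",
    schema="STAGING",
)

try:
    cur = conn.cursor()
    # Refresh a reporting table from a staged extract (hypothetical objects).
    cur.execute("""
        INSERT INTO reporting.daily_orders
        SELECT order_id, order_date, SUM(amount)
        FROM staging.orders_raw
        GROUP BY order_id, order_date
    """)
    print(f"rows loaded: {cur.rowcount}")
finally:
    conn.close()
</code></pre>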
We are looking for a skilled Data Engineer to design and implement robust technical solutions for enterprise applications. This role involves creating scalable and secure cloud-native systems on Azure, while collaborating closely with stakeholders to meet business requirements. The ideal candidate will possess strong expertise in data architecture and integration strategies, ensuring high engineering standards and seamless orchestration across systems.<br><br>Responsibilities:<br>• Design and maintain comprehensive technical architectures for enterprise applications, ensuring scalability and security.<br>• Develop integration strategies across multiple systems, including manufacturing, field service, and customer portals.<br>• Collaborate with the Principal Architect to define data contracts and establish effective integration patterns.<br>• Partner with teams in Product, AI/ML Engineering, and business units to translate requirements into functional solutions.<br>• Create reference implementations and frameworks to streamline development processes.<br>• Oversee system-level orchestration and elevate engineering standards across projects.<br>• Implement cloud-native solutions on Azure, leveraging modern tools and technologies.<br>• Provide technical guidance and mentorship to engineering teams, fostering best practices.<br>• Continuously monitor and improve system performance, addressing issues proactively.
<p>The Database Engineer will design, develop, and maintain database solutions that meet the needs of our business and clients. You will be responsible for ensuring the performance, availability, and security of our database systems while collaborating with software engineers, data analysts, and IT teams.</p><p> </p><p><strong>Key Responsibilities:</strong></p><ul><li>Design, implement, and maintain highly available and scalable database systems (e.g., SQL, NoSQL).</li><li>Optimize database performance through indexing, query optimization, and capacity planning.</li><li>Create and manage database schemas, tables, stored procedures, and triggers.</li><li>Develop and maintain ETL (Extract, Transform, Load) processes for data integration.</li><li>Ensure data integrity and consistency across distributed systems.</li><li>Monitor database performance and troubleshoot issues to ensure minimal downtime.</li><li>Collaborate with software development teams to design database architectures that align with application requirements.</li><li>Implement data security best practices, including encryption, backups, and access controls.</li><li>Stay updated on emerging database technologies and recommend solutions to enhance efficiency.</li><li>Document database configurations, processes, and best practices for internal knowledge sharing.</li></ul><p><br></p>
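<p>As one concrete example of the ETL and stored-procedure work described above, here is a minimal sketch using pyodbc against SQL Server; the connection string, staging table, and procedure name are hypothetical. Loading into a staging table and merging via a procedure keeps set-based logic inside the database:</p><pre><code># Small ETL load via pyodbc -- DSN, table, and procedure names are illustrative.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=db-host;DATABASE=Sales;Trusted_Connection=yes;"
)
cur = conn.cursor()

rows = [("A-1001", 250.00), ("A-1002", 75.50)]  # extracted upstream

# Load into staging, then let a stored procedure merge into the target.
cur.executemany(
    "INSERT INTO dbo.OrderStaging (order_no, amount) VALUES (?, ?)", rows
)
cur.execute("EXEC dbo.usp_MergeOrders")  # hypothetical procedure
conn.commit()
conn.close()
</code></pre>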
<p>Our client is looking for an experienced Data Governance Analyst to join their growing team. They need someone who can:</p><ul><li>Lead the development and implementation of data governance frameworks to support academic, administrative, and research data needs across the university system.</li><li>Establish data stewardship roles and clarify data ownership for key institutional domains such as student information, financial aid, HR, research compliance, and finance.</li><li>Create and enforce data policies, standards, and procedures to improve data quality, accuracy, accessibility, and security across campuses and departments.</li><li>Ensure compliance with higher-ed regulatory and reporting requirements (e.g., FERPA, IPEDS, NCAA, state reporting), and coordinate with Legal, IT Security, and Institutional Compliance teams.</li><li>Implement and optimize governance technology (data catalog, lineage, and quality tools) to support system-wide reporting, analytics, and decision support.</li><li>Promote data literacy and provide training to faculty, staff, and administrators to enhance responsible and effective data use.</li><li>Facilitate collaboration across academic units, administrative offices, and central IT to align governance efforts with institutional priorities and operational needs.</li><li>Monitor data quality and governance KPIs, report progress to leadership, and drive continuous improvement to support strategic planning, accreditation, and institutional research initiatives.</li></ul><p>Experience as a Data Governance Analyst is required. The client has a fragmented data governance framework in place, and the goal is for this person to unify it across the enterprise. The ideal candidate will be a Data Governance Analyst looking for a more challenging opportunity to lead the implementation of Purview and advance the organization's data governance practices. Administration experience with Microsoft Purview or a similar tool (Collibra, Informatica, Databricks, etc.) is expected; experience with Microsoft Purview is preferred. This role will assist in connecting Microsoft Fabric to Purview. The Data Security layer of Purview is already implemented; this role will work with the Microsoft partner to implement the Data Governance layer (Unified Data Catalog, Data Quality, Data Lineage, Data Health Management). See attached overview. Excellent communication skills are essential: someone who will lead change, help advance the data governance practice, and get buy-in from stakeholders.</p>
We are looking for an experienced Data Architect to design and implement cutting-edge data solutions that meet the evolving needs of our enterprise. This role involves building secure, scalable, and high-performing data platforms while leveraging modern technologies and aligning with organizational goals. The ideal candidate will have expertise in cloud-based architecture, data governance, and advanced analytics, driving innovation across diverse business functions.<br><br>Responsibilities:<br>• Develop comprehensive data architecture strategies for advanced analytics and big data solutions using Azure Databricks.<br>• Design and implement Databricks Delta Lake-based Lakehouse architecture, utilizing PySpark Jobs, Databricks Workflows, Unity Catalog, and Medallion architecture.<br>• Optimize and configure Databricks clusters, notebooks, and workflows to ensure efficiency and scalability.<br>• Integrate Databricks with Azure services such as Azure Data Lake Storage, Azure Data Factory, Azure Key Vault, and Microsoft Fabric.<br>• Establish and enforce best practices for data governance, security, and cost management.<br>• Collaborate with data engineers, analysts, and business stakeholders to translate functional requirements into robust technical solutions.<br>• Provide technical mentoring and leadership to team members focused on Databricks and Azure technologies.<br>• Monitor, troubleshoot, and enhance data pipelines and workflows to maintain reliability and performance.<br>• Ensure compliance with organizational and regulatory standards regarding data security and privacy.<br>• Document configurations, processes, and governance standards to support long-term scalability and usability.
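<p>To ground the Medallion-architecture responsibilities above, here is a minimal PySpark sketch of a bronze-to-silver promotion in Delta Lake using Unity Catalog three-part names; the catalog, schema, and column names are illustrative assumptions:</p><pre><code># Medallion-style promotion from bronze to silver -- identifiers are illustrative.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()  # provided on Databricks

bronze = spark.read.table("main.bronze.events")  # Unity Catalog three-part name

# Deduplicate, type, and filter before promoting to the silver layer.
silver = (bronze
          .dropDuplicates(["event_id"])
          .withColumn("event_ts", F.to_timestamp("event_ts"))
          .filter(F.col("event_type").isNotNull()))

(silver.write
 .format("delta")
 .mode("overwrite")
 .saveAsTable("main.silver.events"))
</code></pre>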
<p>Robert Half is seeking an experienced Data Architect to design and lead scalable, secure, and high-performing enterprise data solutions. This role will focus on building next-generation cloud data platforms, driving adoption of modern analytics technologies, and ensuring alignment with governance and security standards.</p><p><br></p><p>You’ll serve as a hands-on technical leader, partnering closely with engineering, analytics, and business teams to architect data platforms that enable advanced analytics and AI/ML initiatives. This position blends deep technical expertise with strategic thinking to help unlock the value of data across the organization.</p><p><br></p><p><strong>Key Responsibilities:</strong></p><ul><li>Design and implement end-to-end data architecture for big data and advanced analytics platforms.</li><li>Architect and build Delta Lake–based lakehouse environments from the ground up, including DLT pipelines, PySpark jobs, workflows, Unity Catalog, and Medallion architecture.</li><li>Develop scalable data models that meet performance, security, and governance requirements.</li><li>Configure and optimize clusters, notebooks, and workflows to support ETL/ELT pipelines.</li><li>Integrate cloud data platforms with supporting services such as data storage, orchestration, secrets management, and analytics tools.</li><li>Establish and enforce best practices for data governance, security, and cost optimization.</li><li>Collaborate with data engineers, analysts, and stakeholders to translate business requirements into technical solutions.</li><li>Provide technical leadership and mentorship to team members.</li><li>Monitor, troubleshoot, and optimize data pipelines to ensure reliability and efficiency.</li><li>Ensure compliance with organizational and regulatory standards related to data privacy and security.</li><li>Create and maintain documentation for architecture, processes, and governance standards.</li></ul>
<p>We are seeking a talented and motivated Python Data Engineer to join our global team. In this role, you will be instrumental in expanding and optimizing our data assets to enhance analytical capabilities across the organization. You will collaborate closely with traders, analysts, researchers, and data scientists to gather requirements and deliver scalable data solutions that support critical business functions.</p><p><br></p><p>Responsibilities</p><ul><li>Develop modular and reusable Python components to connect external data sources with internal systems and databases.</li><li>Work directly with business stakeholders to translate analytical requirements into technical implementations.</li><li>Ensure the integrity and maintainability of the central Python codebase by adhering to existing design standards and best practices.</li><li>Maintain and improve the in-house Python ETL toolkit, contributing to the standardization and consolidation of data engineering workflows.</li><li>Partner with global team members to ensure efficient coordination and delivery.</li><li>Actively participate in internal Python development community and support ongoing business development initiatives with technical expertise.</li></ul>
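<p>As a sketch of the kind of modular, reusable connector component this role maintains, here is a small self-contained example; the endpoint, authentication scheme, and payload shape are assumptions for illustration:</p><pre><code># Reusable REST extractor sketch -- endpoint and payload shape are hypothetical.
from dataclasses import dataclass
import requests

@dataclass
class RestExtractor:
    """Configurable pull from an external REST data source."""
    base_url: str
    api_key: str
    timeout: int = 30

    def fetch(self, resource: str, params: dict | None = None) -> list[dict]:
        response = requests.get(
            f"{self.base_url}/{resource}",
            params=params or {},
            headers={"Authorization": f"Bearer {self.api_key}"},
            timeout=self.timeout,
        )
        response.raise_for_status()
        return response.json()["results"]  # assumed payload shape

# Usage: one component, many sources, consistent with a shared ETL toolkit.
prices = RestExtractor("https://api.example.com/v1", "TOKEN").fetch("prices")
</code></pre>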
<p>We are looking for a skilled GenAI Data Automation Engineer to design and implement innovative, AI-driven automation solutions across AWS and Azure hybrid environments. You will be responsible for building intelligent, scalable data pipelines and automations that integrate cloud services, enterprise tools, and Generative AI to support mission-critical analytics, reporting, and customer engagement platforms. The ideal candidate is mission-focused and delivery-oriented, applying critical thinking to create innovative solutions and resolve technical issues.</p><p><br></p><p>Location: <strong>REMOTE - EST or CST</strong></p><p><br></p><p>This position involves designing, developing, testing, and troubleshooting software programs to enhance existing systems and build new software products. The ideal candidate will apply software engineering principles and collaborate effectively with colleagues to tackle moderately complex technical challenges and deliver impactful solutions.</p><p><br></p><p>Responsibilities:</p><ul><li>Design and maintain data pipelines in AWS using S3, RDS/SQL Server, Glue, Lambda, EMR, DynamoDB, and Step Functions.</li><li>Develop ETL/ELT processes to move data across multiple systems, including DynamoDB → SQL Server (AWS) and between AWS ↔ Azure SQL systems (see the sketch below).</li><li>Integrate AWS Connect and NICE inContact CRM data into the enterprise data pipeline for analytics and operational reporting.</li><li>Engineer and enhance ingestion pipelines with Apache Spark, Flume, and Kafka for real-time and batch processing into Apache Solr and AWS OpenSearch platforms.</li><li>Leverage Generative AI services and frameworks (AWS Bedrock, Amazon Q, Azure OpenAI, Hugging Face, LangChain) to:<ul><li>Create automated processes for vector generation and embedding from unstructured data to support Generative AI models.</li><li>Automate data quality checks, metadata tagging, and lineage tracking.</li><li>Enhance ingestion/ETL with LLM-assisted transformation and anomaly detection.</li><li>Build conversational BI interfaces that allow natural language access to Solr and SQL data.</li><li>Develop AI-powered copilots for pipeline monitoring and automated troubleshooting.</li></ul></li><li>Implement SQL Server stored procedures, indexing, query optimization, profiling, and execution plan tuning to maximize performance.</li><li>Apply CI/CD best practices using GitHub, Jenkins, or Azure DevOps for both data pipelines and GenAI model integration.</li><li>Ensure security and compliance through IAM, KMS encryption, VPC isolation, RBAC, and firewalls.</li><li>Support Agile DevOps processes with sprint-based delivery of pipeline and AI-enabled features.</li></ul>
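<p>To illustrate one hop of the DynamoDB → SQL Server path flagged in the responsibilities above, here is a minimal sketch using boto3 and pyodbc; the table names, columns, and connection string are illustrative, and AWS credentials are assumed to come from the environment:</p><pre><code># Sketch of one hop in a DynamoDB -> SQL Server path -- names are illustrative.
import boto3
import pyodbc

dynamodb = boto3.resource("dynamodb")  # credentials/region from environment
table = dynamodb.Table("CustomerInteractions")  # hypothetical table

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=analytics-db;DATABASE=Contact;Trusted_Connection=yes;"
)
cur = conn.cursor()

# Page through the DynamoDB table and land rows in a staging table.
kwargs = {}
while True:
    page = table.scan(**kwargs)
    rows = [(i["interaction_id"], i["channel"]) for i in page["Items"]]
    cur.executemany(
        "INSERT INTO dbo.InteractionStaging (interaction_id, channel) VALUES (?, ?)",
        rows,
    )
    if "LastEvaluatedKey" not in page:
        break
    kwargs["ExclusiveStartKey"] = page["LastEvaluatedKey"]

conn.commit()
conn.close()
</code></pre>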
<p>We’re seeking a <strong>BI Engineer</strong> who can design and optimize end‑to‑end analytics solutions leveraging <strong>Power BI</strong> and <strong>Microsoft Fabric</strong>. This role blends engineering, modeling, and performance optimization to support scalable reporting environments.</p><p><strong>What You’ll Do</strong></p><ul><li>Architect and maintain scalable BI solutions across Power BI and Fabric</li><li>Create and optimize semantic models, dataflows, and pipelines</li><li>Implement real‑time or near‑real‑time reporting frameworks</li><li>Build workspace structures, governance standards, and deployment processes</li><li>Partner with data engineering teams to ensure structured, high‑quality data</li><li>Troubleshoot data refresh, gateway, and performance issues</li></ul><p><br></p>
We are looking for an experienced Power BI Business Intelligence Engineer to join our team in Niceville, Florida. In this role, you will play a vital part in managing and enhancing our reporting and business intelligence platforms to provide actionable insights. Your expertise will drive data analysis, dashboard creation, and the development of solutions that support key business decisions.<br><br>Responsibilities:<br>• Oversee the company's reporting and business intelligence systems to ensure optimal performance and accuracy.<br>• Develop a deep understanding of the organization's business models, operations, and decision-making processes.<br>• Analyze data architecture and gather requirements from stakeholders to create tailored solutions.<br>• Build and manage data sources, models, and integrations for reporting and analytics purposes.<br>• Design and maintain dashboards and reports using enterprise business intelligence tools.<br>• Facilitate seamless data integration processes to retrieve, transform, and analyze datasets.<br>• Support leadership in creating management information and KPIs to drive data-driven decision-making.<br>• Ensure data quality and integrity across all business intelligence deliverables.<br>• Stay updated with the latest advancements in BI technologies, tools, and practices to recommend improvements.<br>• Document systems and processes comprehensively while adhering to governance, security, and privacy standards.
<p>We are looking for an AI Engineer to join our team. In this role, you will contribute to the development and implementation of AI and Machine Learning solutions that optimize renewable energy projects. You will work on creating scalable models, applications, and workflows that drive data-driven decision-making across the organization while adhering to established engineering standards and practices.</p><p><br></p><p>Responsibilities:</p><p>• Develop, test, and deploy AI/ML models and applications to enhance renewable energy planning, forecasting, and construction processes.</p><p>• Build and maintain MLOps workflows, including data pipelines, model packaging, versioning, monitoring, and retraining.</p><p>• Collaborate with IT and data teams to ensure AI solutions meet security, integration, and performance requirements.</p><p>• Break down technical requirements into actionable tasks and contribute to design implementation during reviews.</p><p>• Deliver scalable and secure solutions by following established standards and reference architectures.</p><p>• Support deployment of AI/ML solutions to cloud environments, with a focus on Azure.</p><p>• Create APIs and integrate AI solutions into enterprise systems to enable seamless operations.</p><p>• Utilize advanced machine learning frameworks such as TensorFlow and Scikit-learn to develop innovative solutions (see the sketch below).</p><p>• Analyze data engineering concepts and apply them to enhance AI workflows.</p><p>• Provide technical input and partner with stakeholders to meet project objectives.</p>
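<p>For a sense of the model development and packaging work described above, here is a minimal scikit-learn sketch with synthetic data; the features, model choice, and versioned file name are illustrative assumptions rather than project specifics:</p><pre><code># Small forecasting model with versioned packaging -- features are synthetic.
import joblib
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-ins for site features (e.g., irradiance, temp, wind, hour).
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = X @ np.array([3.0, -1.0, 0.5, 2.0]) + rng.normal(scale=0.1, size=500)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2,
                                                    random_state=0)
model = Pipeline([
    ("scale", StandardScaler()),
    ("gbr", GradientBoostingRegressor()),
])
model.fit(X_train, y_train)
print("holdout R^2:", model.score(X_test, y_test))

# Package with an explicit version tag so retraining runs are traceable.
joblib.dump(model, "forecast_model_v1.joblib")
</code></pre>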
<p>We are seeking a mid-level Google Cloud Platform (GCP) Engineer with strong, hands-on experience across cloud administration, automation, system integration, and application support. This role supports production GCP environments and participates in cloud migration initiatives, including on-premises to cloud and cloud-to-cloud migrations.</p><p><br></p><p>This position prioritizes real-world, operational experience over certifications. While Google Cloud certifications are valued, demonstrated hands-on experience designing, deploying, automating, migrating, and supporting GCP workloads in production environments carries significantly more weight.</p><p><br></p><p>Experience with data engineering and analytics platforms is a strong plus but not a strict requirement for candidates with a proven track record of strong GCP engineering experience.</p><p><br></p><p><strong>GCP Administration & Operations</strong></p><ul><li>Administer and support GCP organizations, folders, projects, and billing</li><li>Manage IAM roles, service accounts, and access controls using least-privilege principles</li><li>Configure and maintain VPC networks, firewall rules, VPNs, and hybrid connectivity</li><li>Monitor platform health using Cloud Monitoring, Logging, and Alerting</li><li>Troubleshoot production issues and perform root-cause analysis</li><li>Support environments across development, test, staging, and production</li></ul><p><br></p><p><strong>Cloud Migration & Modernization</strong></p><ul><li>Support cloud migration initiatives, including:<ul><li>On-premises to Google Cloud migrations</li><li>Cloud-to-cloud migrations (e.g., AWS or Azure to GCP)</li></ul></li><li>Assist with migration planning and execution, workload and dependency analysis, and data, application, and infrastructure migrations</li><li>Support cutovers, post-migration stabilization, and optimization</li><li>Help modernize legacy workloads into cloud-native or hybrid architectures</li></ul><p><br></p><p><strong>Integration & Platform Engineering</strong></p><ul><li>Integrate GCP services with enterprise systems such as:<ul><li>Identity platforms (Google Workspace, Active Directory, SSO)</li><li>CI/CD pipelines and automation tooling</li><li>SaaS and internal applications</li></ul></li><li>Support API-based and event-driven integrations using REST and Pub/Sub</li><li>Collaborate with security, networking, and application teams</li><li>Assist with hybrid and multi-cloud integration patterns</li></ul><p><br></p><p><strong>Application & Development Support</strong></p><ul><li>Assist development teams with environment provisioning, deployment pipelines, and performance and reliability tuning</li><li>Review cloud architectures for scalability, resilience, security, and cost</li></ul><p><br></p><p><strong>Data & Analytics (Strong Plus)</strong></p><ul><li>Support data platforms such as BigQuery, Cloud Storage, Pub/Sub, and Dataflow (see the sketch below)</li><li>Assist with data ingestion pipelines and analytics workloads</li><li>Understand basic data governance, access controls, and performance tuning</li></ul>
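<p>As a small illustration of the BigQuery support noted under Data & Analytics, here is a minimal query sketch using the google-cloud-bigquery client; the project, dataset, and table are hypothetical, and authentication is assumed to come from Application Default Credentials:</p><pre><code># Minimal BigQuery access sketch -- dataset and table are illustrative.
from google.cloud import bigquery

client = bigquery.Client(project="my-gcp-project")  # hypothetical project

query = """
    SELECT event_date, COUNT(*) AS events
    FROM `my-gcp-project.analytics.events`
    GROUP BY event_date
    ORDER BY event_date DESC
    LIMIT 7
"""
# Run the query and print a small daily-volume summary.
for row in client.query(query).result():
    print(row.event_date, row.events)
</code></pre>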
Join our team as a Business Intelligence Software Engineer and help design, build, and maintain innovative reporting and data-driven applications that power field operations, business units, and customer solutions. This is a hands-on coding role that requires strong technical judgment and collaboration with cross-functional teams. You’ll manage the entire development lifecycle, ensuring solutions are scalable, reliable, and aligned with business priorities.<br><br>Key Responsibilities:<br>• Lead the Software Development Lifecycle (SDLC): Oversee all phases of BI application development, from concept through deployment and support.<br>• Hands-on Development: Build and maintain applications using Python (PySpark), SQL, and TypeScript/JavaScript.<br>• Technical Strategy & Architecture: Apply best practices for design, performance, and scalability.<br>• Quality Assurance: Establish testing frameworks, conduct code reviews, and maintain bug-tracking processes (see the sketch below).<br>• Continuous Improvement: Identify and implement tools and methodologies to streamline development and increase system reliability.<br>• Collaboration: Work with internal stakeholders, data scientists, analysts, and operations teams to translate business needs into software solutions.<br>• Support & Maintenance: Provide ongoing support for newly developed applications, ensuring smooth integration with existing systems.
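As one example of the testing-framework responsibility above, here is a minimal pytest sketch for a hypothetical BI transformation rule; the business rule itself is illustrative, not taken from the role:
<pre><code># Minimal pytest sketch for a hypothetical BI transformation rule.
import pytest

def normalize_region(raw: str) -> str:
    """Business rule (illustrative): region codes are upper-case and trimmed."""
    if not raw or not raw.strip():
        raise ValueError("region code is required")
    return raw.strip().upper()

def test_normalize_region_trims_and_uppercases():
    assert normalize_region("  west ") == "WEST"

def test_normalize_region_rejects_blank():
    with pytest.raises(ValueError):
        normalize_region("   ")
</code></pre>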
Position: IT Product Owner (Data Integrations)<br> Location: Remote<br> Salary: Up to $110,000 base + excellent benefits<br> <br> Are you a Product Owner who thrives at the intersection of vision, strategy, and execution? Do you love transforming complex problems into elegant, buildable solutions? Are you motivated by the opportunity to help build brand‑new digital products from the ground up?<br> If so, we have an incredible Product Owner role that plays a critical part in a major digital transformation redefining how services are delivered.<br> This is a rare chance to join a newly built, remote‑first product and engineering team shaping the future of a multi‑industry ecosystem.<br> <br> About the Transformation<br> We're building a modern, connected, mobile‑responsive digital platform that unifies dozens of systems into one seamless experience. Think:<br> • MVP‑first mindset<br> • Scalable, flexible architecture<br> • Seamless data flows<br> • Consumer‑grade UX<br> • A team empowered to innovate quickly<br> And you’ll be at the center of it.<br> You’ll Own…<br> • Translating data, API, and integration requirements into clear, actionable user stories<br> • Partnering closely with the Sr Data Integration Engineer to shape data flows, CDP capabilities, and API frameworks<br> • Prioritizing a data‑heavy backlog that balances business value with technical complexity<br> • Ensuring delivered work is accurate, secure, and scalable<br> You Are…<br> • A Product Owner who understands how data moves through systems<br> • Comfortable working with data engineers, APIs, ETL/ELT workflows, and integration frameworks<br> • A strong communicator who can simplify complex technical concepts for non‑technical audiences<br> <br> *** For immediate and confidential consideration, please send a message to MEREDITH CARLE on LinkedIn or send an email to me with your resume. My email can be found on my LinkedIn page. Also, you may contact me by office: 515-303-4654. Or one click apply on our Robert Half website. No third party inquiries please. Our client cannot provide sponsorship and cannot hire C2C. ***
<p><strong>Role Summary</strong></p><p>As a Technical Project Manager focused on data and AWS cloud, you will lead the planning, execution, and delivery of engineering efforts involving data infrastructure, data platforms, analytics, and cloud services. You will partner with data engineering, analytics, DevOps, product, security, and business stakeholders to deliver on key strategic initiatives. You are comfortable navigating ambiguity, managing dependencies across teams, and ensuring alignment between technical direction and business priorities.</p><p><strong>Key Responsibilities</strong></p><ul><li>Lead end-to-end technical projects pertaining to AWS cloud, data platforms, data pipelines, ETL/ELT, analytics, and reporting.</li><li>Define project scope, objectives, success criteria, deliverables, and timelines in collaboration with stakeholders.</li><li>Create and maintain detailed project plans, roadmaps, dependency maps, risk & mitigation plans, status reports, and communication plans.</li><li>Track and monitor project progress, managing changes to scope, schedule, and resources.</li><li>Facilitate agile ceremonies (e.g., sprint planning, standups, retrospectives) or hybrid methodologies as appropriate.</li><li>Serve as the bridge between technical teams (data engineering, DevOps, platform, security) and business stakeholders (product, analytics, operations).</li><li>Identify technical and organizational risks, escalate when needed, propose mitigation or contingency plans.</li><li>Drive architectural and design discussions, ensure technical feasibility, tradeoff assessments, and alignment with cloud best practices.</li><li>Oversee vendor, third-party, or external partner integrations and workstreams.</li><li>Ensure compliance, security, governance, and operational readiness (e.g., data privacy, logging, monitoring, SLA) are baked into deliverables.</li><li>Conduct post-implementation reviews, lessons learned, and process improvements.</li><li>Present regularly to senior leadership on project status, challenges, KPIs, and outcomes.</li></ul>
<p><strong>Data Engineer / Java Dev (AWS, Microservices, Spring Boot) IV</strong></p><p>46 Week Contract</p><p>Hybrid | Philadelphia, PA</p><p><strong>Job Summary</strong></p><p>The Senior Java Developer will design, build, and support cloud‑based microservices using Java and AWS. This role focuses on developing scalable, secure solutions, supporting DevOps and CI/CD practices, and collaborating with cross‑functional teams to deliver high‑quality software in an Agile environment.</p><p><br></p><p><strong>Key Responsibilities</strong></p><ul><li>Design, develop, test, and maintain <strong>Java‑based microservices</strong> using <strong>Spring Boot</strong> and AWS.</li><li>Build and support <strong>cloud‑native solutions</strong> with an emphasis on scalability, performance, and security.</li><li>Contribute to <strong>DevOps and CI/CD pipelines</strong>, including source control, automation, monitoring, and deployment practices.</li><li>Troubleshoot production issues and drive continuous improvements across platform reliability and performance.</li><li>Collaborate with architects, product managers, and engineering teams to translate requirements into technical solutions.</li><li>Promote and apply <strong>software engineering best practices</strong> within an Agile development environment.</li></ul><p><br></p>
We are looking for a dedicated Systems Engineer to manage and maintain a multi-node Linux server environment, supporting instructional and research activities. This role involves ensuring the reliability and performance of IT infrastructure, providing technical expertise for Linux systems, and collaborating with stakeholders to meet specialized computing needs. The ideal candidate will play a key role in optimizing and securing IT solutions while documenting workflows and procedures to uphold operational excellence.<br><br>Responsibilities:<br>• Administer and maintain a multi-node Linux server environment, including associated workstations used for teaching and research.<br>• Troubleshoot and resolve complex Linux server and workstation issues, utilizing tools like Ansible for automation and configuration management.<br>• Oversee the operation of a small data center, ensuring uninterrupted support for engineering courses and research activities.<br>• Perform system performance tuning, security hardening, and monitoring to ensure optimal operation and reliability.<br>• Implement and document workflows, procedures, and technical standards to enhance system continuity and reliability.<br>• Collaborate with faculty, researchers, and technical staff to address specialized computing requirements.<br>• Build, configure, and document IT infrastructure to align with best practices and service level objectives.<br>• Monitor and analyze performance metrics, identifying areas for improvement and ensuring system efficiency.<br>• Serve as a technical liaison, providing support and maintaining communication with internal and external stakeholders.<br>• Develop and implement robust and secure IT solutions tailored to the needs of the organization.