
Senior Data Engineer – Contract

RBA, Inc.

Remote · Contract · Senior
Python · Scala · SQL


RBA is an established leader and trusted partner for enterprise and mid-size organizations seeking to transform their business through technology solutions. As a digital and technology consultancy, we combine strategic insight with technical expertise to deliver impactful, scalable solutions that align with business goals. We take pride in working with some of the most recognized companies in our market, while fostering a culture that blends challenging career opportunities with a collaborative, fun work environment.

We are seeking a Senior Data Engineer with strong Databricks expertise to join our growing Data & Analytics practice. In this role, you’ll architect, build, and optimize modern data pipelines and Databricks Lakehouse platforms that power analytics, AI/ML, and business intelligence solutions for our clients.

You’ll partner with business stakeholders, data scientists, and engineers to design scalable, secure, and high-performance data ecosystems. The ideal candidate is comfortable solving complex data challenges, optimizing Spark workloads, and setting technical direction for Databricks-based solutions.

Responsibilities

• Design, build, and optimize end-to-end data pipelines within Databricks across structured, semi-structured, and unstructured data sources.

• Architect and implement cloud-based Lakehouse platforms using Databricks and Delta Lake.

• Implement Medallion (Bronze/Silver/Gold) architecture and enforce best practices for data quality, governance, and security.

• Develop and optimize ETL/ELT processes using Databricks, dbt, Airflow, or Azure Data Factory.

• Optimize Spark (PySpark/Scala) workloads for performance and cost efficiency, including cluster configuration and tuning.

• Integrate real-time streaming technologies (Kafka, Kinesis, or Event Hubs) with Databricks Structured Streaming.

• Support AI/ML initiatives by building pipelines for feature engineering, MLflow integration, and RAG use cases.

• Collaborate with cross-functional teams to translate business requirements into scalable technical solutions.

• Contribute to CI/CD practices and infrastructure-as-code (Terraform) for Databricks deployments.

• Mentor junior engineers and stay current with emerging trends in Databricks and cloud-native data platforms.

Requirements

• Bachelor’s or Master’s degree in Computer Science, Engineering, or related field.

• 5+ years of experience in data engineering or data platform development.

• Hands-on experience building and optimizing solutions in Databricks.

• Strong programming skills in Python or Scala and experience with Spark.

• Expertise in SQL, data modeling, and modern lakehouse architectures.

• Experience with at least one major cloud platform (Azure preferred; AWS or GCP acceptable).

• Proficiency with ETL/ELT and workflow orchestration tools (Databricks Workflows, Airflow, dbt, ADF).

• Strong understanding of CI/CD, DevOps for data, and infrastructure-as-code.

• Excellent communication skills and ability to thrive in client-facing environments.

Preferred Qualifications

• Databricks certifications (Associate or Professional).

• Experience with Unity Catalog and enterprise data governance.

• Familiarity with Vector Databases and semantic search use cases.

• Experience supporting AI/ML workflows and cloud-based ML services.

• Certifications in AWS, Azure, or GCP.

• Experience with BI tools such as Power BI, Tableau, or Looker.

via JSearch