Actively looking for a Data Engineer with AWS Glue and Redshift experience
Arbor Tek Systems
Role: AWS Data Engineer
Location: 100% Remote
Role Summary
The Data Engineer will build and maintain secure, scalable, and automated data pipelines to support the ingestion, transformation, and curation of data products. This role will focus on implementing CI/CD pipelines, automated testing, and monitoring solutions.
Key Responsibilities
Data Pipelines
Develop configuration-driven ingestion and transformation pipelines using AWS Glue, Lambda, and Redshift.
Automation
Implement CI/CD pipelines for automated build, test, and deployment of data products.
Testing
Establish automated test frameworks for schema validation, data quality, and performance testing.
Monitoring
Set up monitoring and observability dashboards for product-level metrics (CloudWatch, custom metrics).
Collaboration
Work with data modelers and architects to ensure alignment with business requirements.
Required Skills
Programming
Proficiency in Python, SQL, and shell scripting.
AWS Expertise
Hands-on experience with AWS services (Glue, Redshift, S3, Lambda, CloudWatch).
Automation
Experience with CI/CD and infrastructure-as-code tools (Jenkins, AWS CodePipeline, Terraform).
Data Integration
Strong knowledge of ETL/ELT processes and frameworks.
Soft Skills
Strong problem-solving, collaboration, and communication skills.
Qualifications
Bachelor's degree in Computer Science, Data Engineering, or a related field.
5+ years of experience in data engineering and pipeline development.
AWS Certified Data Analytics or AWS Certified Developer certification is a plus.