Remote Data Engineer – Full‑Time, $27/hr – Data Pipeline & ETL Specialist at Gigentra (Work‑From‑Home)
Gigentra
About Gigentra – Pioneering Entertainment-Driven Data Innovation

Gigentra is a global leader in creating immersive experiences that blend storytelling, cutting-edge technology, and data-driven insights. From world-renowned theme parks to beloved streaming platforms, Gigentra delivers magical moments to millions of fans every day. Our Data Science & Innovation (DSI) division fuels that magic by turning raw data into actionable intelligence, empowering every line of business, from guest experience to content creation, to make smarter, faster decisions.

As part of our rapid growth, Gigentra is expanding its data engineering team to support a thriving media ecosystem. This is a unique opportunity for a skilled Data Engineer to work remotely, collaborate with cross-functional partners, and help shape the future of entertainment analytics, all while earning a competitive hourly rate of $27.

Why Join Gigentra?

- Impactful Work: Your pipelines will power insights for blockbuster movies, beloved theme-park experiences, and iconic sports content.
- Remote-First Culture: Flexible home-office setup, robust collaboration tools, and a supportive virtual community.
- Career Growth: Clear pathways to senior engineering, data architecture, or product leadership roles.
- Learning Stipends: Budget for certifications, conferences, and online courses in cloud platforms, data engineering, and machine learning.
- Competitive Compensation: $27 per hour, performance bonuses, and a comprehensive benefits package.

Key Responsibilities

As a Data Engineer on the DSI Data Engineering (DE) team, you will be a cornerstone of our data infrastructure, ensuring reliable, scalable, and secure data delivery across the organization.

- Data Pipeline Development: Design, build, and maintain ETL/ELT pipelines using SQL (Snowflake, PostgreSQL) and Python to ingest, transform, and load data from diverse sources (see the sketch after this list).
- Data Modeling & Architecture: Create logical and physical data models, define schemas, and implement best-practice table designs to support analytical workloads.
- Automation & Orchestration: Develop and manage automated job schedules using Airflow or equivalent orchestration tools, ensuring high availability and fault tolerance.
- Quality Assurance: Implement data validation, testing frameworks, and monitoring alerts to guarantee data integrity and compliance.
- Collaboration: Partner with Data Science, Analytics, Product, and Engineering teams to understand data requirements and translate them into robust solutions.
- Documentation & Governance: Maintain clear documentation of pipelines, data dictionaries, and governance policies to promote transparency and reuse.
- Performance Optimization: Identify bottlenecks, optimize query performance, and tune cloud resources for cost-effective processing.
- DevOps Integration: Use GitLab/GitHub for version control, Docker for containerization, and CI/CD pipelines to streamline deployments across dev, QA, and production environments.
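To make the day-to-day concrete, here is a minimal sketch of the kind of daily ETL job described above, assuming Airflow 2.4 or later. The DAG id, task names, and sample rows are hypothetical, and the load step only prints where a production pipeline would write to Snowflake or PostgreSQL through a provider hook.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract(ti):
    # Stand-in for pulling records from an upstream source (API, S3 drop, etc.).
    rows = [{"event_id": 1, "amount": "19.99"}, {"event_id": 2, "amount": "5.00"}]
    ti.xcom_push(key="raw_rows", value=rows)


def transform(ti):
    # Cast string amounts to floats; real transforms would also dedupe and conform keys.
    rows = ti.xcom_pull(task_ids="extract", key="raw_rows")
    clean = [{"event_id": r["event_id"], "amount": float(r["amount"])} for r in rows]
    ti.xcom_push(key="clean_rows", value=clean)


def validate(ti):
    # Fail the run loudly instead of letting bad data reach downstream consumers.
    rows = ti.xcom_pull(task_ids="transform", key="clean_rows")
    if not rows:
        raise ValueError("no rows survived transformation")
    if any(r["amount"] < 0 for r in rows):
        raise ValueError("negative amounts found")


def load(ti):
    # A production version would write to Snowflake/PostgreSQL via a provider hook.
    rows = ti.xcom_pull(task_ids="transform", key="clean_rows")
    print(f"loading {len(rows)} rows")


with DAG(
    dag_id="events_daily_etl",  # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
):
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_validate = PythonOperator(task_id="validate", python_callable=validate)
    t_load = PythonOperator(task_id="load", python_callable=load)

    t_extract >> t_transform >> t_validate >> t_load
```

Note the dedicated validate task between transform and load: failing fast there, before anything lands in the warehouse, is what the Quality Assurance responsibility above looks like in practice.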
Version Control & CI/CD: Experience working with GitLab/GitHub, Docker containers, and automated deployment pipelines. Data Modeling Insight: Solid understanding of relational and dimensional modeling, as well as data warehousing principles. Preferred Qualifications & Nice‑to‑Have Skills Hands‑on experience with large‑scale data sets (terabytes to petabytes) and high‑throughput ingestion frameworks. Advanced cloud certifications (AWS Certified Solutions Architect, SnowPro Core, etc.). Exposure to streaming data technologies such as Kafka or Kinesis. Knowledge of infrastructure‑as‑code tools (Terraform, CloudFormation) for environment provisioning. Background in media, entertainment, or consumer‑facing digital products. Experience building data products that directly feed machine learning models. Core Skills & Competencies Analytical Mindset: Ability to dissect complex business problems and translate them into data solutions. Problem‑Solving: Proactive identification of issues, root‑cause analysis, and implementation of robust fixes. Collaboration: Strong communication skills to work effectively with cross‑functional stakeholders, both technical and non‑technical. Adaptability: Thrive in a fast‑paced environment where priorities shift and new technologies emerge. Attention to Detail: Meticulous approach to data quality, documentation, and security standards. Work Environment & Culture at Worklio Hirefluxa values creativity, curiosity, and communit