
Data Engineer

Showbie

Software Engineering, Data Science
Edmonton, AB, Canada · Remote
Posted on Aug 15, 2025

About the Role

We’re looking for a Data Engineer to design, build, and scale the data infrastructure that powers decision-making across the company. You’ll own our data pipelines and models end-to-end — from ingestion to transformation — ensuring they are reliable, well-documented, and optimized for performance.

This is a hands-on engineering role focused primarily (80%) on backend data engineering, with the remainder spent collaborating with analytics, product, and engineering teams on modelling and governance. You’ll help shape our modern data stack, contributing to both the immediate business needs and the long-term vision for our platform.

Key Responsibilities

Execution & Engineering

  • Design, develop, and maintain scalable ETL/ELT pipelines using modern tools.
  • Build and manage data models and data marts in our cloud data warehouse (AWS Redshift) using dbt.
  • Manage workflow orchestration and automation with Apache Airflow.
  • Ensure high performance, reliability, and cost-efficiency of data systems through monitoring, optimization, and proactive maintenance.
  • Implement data testing and version control best practices to ensure reliability in production.

Cross-Functional Collaboration

  • Partner with analysts, product managers, and engineering teams to translate business requirements into scalable data solutions.
  • Align event tracking, logging, and ingestion with analytical and reporting needs.
  • Work closely with DevOps and platform teams on infrastructure, tooling, and deployments.
  • Serve as a subject matter expert on data pipelines and tooling in cross-functional initiatives.

Governance & Operations

  • Implement and enforce data governance best practices, including privacy, access control, and data lineage.
  • Respond to and troubleshoot data issues in production, ensuring timely resolution and communication.
  • Maintain documentation and contribute to knowledge sharing within the BizOps (data) team.

Our Current Stack

  • Warehouse: AWS Redshift
  • Transformation: dbt, Spark
  • Orchestration: Apache Airflow
  • Ingestion: Fivetran, Airbyte, AWS Glue
  • Languages: Python, SQL
  • Infrastructure: AWS S3, EC2
  • Business Intelligence: Looker
  • Product Analytics: Amplitude
  • Machine Learning: SageMaker

Qualifications

Must-Have:

  • 3+ years of experience in a Data Engineering role or similar.
  • Strong SQL and Python skills.
  • Experience with cloud data platforms (AWS).
  • Hands-on experience with modern data stack tools (Airflow, dbt, Fivetran, Airbyte, Kafka, etc.).
  • Proven ability to work cross-functionally and communicate effectively with technical and non-technical stakeholders.

Nice-to-Have:

  • Experience designing data models for analytics and BI.
  • Familiarity with containerization (Docker) and CI/CD pipelines.
  • Exposure to data governance, privacy, or regulatory frameworks (e.g., GDPR, HIPAA).
  • Understanding of ML/AI pipeline fundamentals.

Who You Are

💡 Curious & mission-driven: You’re passionate about education and excited about the impact technology can have on learning.

🤝 Collaborative & user-focused: You thrive in cross-functional teams, listen deeply to users, and advocate for the best solutions.

📈 Strategic & execution-oriented: You balance vision with action, turning strategy into measurable outcomes.