Data Engineer
Vital
Location: EDT / EST (US East Coast only)
Employment Type: Full time
Location Type: Remote
Department: Engineering
Compensation: $160K – $200K • Offers Equity
Healthcare is in crisis and the people behind the results deserve better. With data exploding across wearables, lab tests, and patient–doctor interactions, we’re entering an era where data is abundant.
Junction is building the infrastructure layer for diagnostic healthcare, making patient data accessible, actionable, and automated across labs and devices. Our mission is simple but ambitious: use health data to unlock unprecedented insight into human health and disease.
If you're passionate about how technology can supercharge healthcare, you’ll fit right in.
Backed by Creandum, Point Nine, 20VC, YC, and leading angels, we’re working to solve one of the biggest challenges of our time: making healthcare personalized, proactive, and affordable. We’re already connecting millions and scaling fast.
Short on time?
Who you are: A data engineer with solid software engineering fundamentals who can build, own, and scale reliable data pipelines and warehouse infrastructure.
Ownership: You’ll shape our data foundation from ingestion through transformation — and make it analytics-ready at scale.
Salary: $160K – $200K + early-stage options
Time zone: Preferably NYC; EST required.
Why we need you
Junction powers modern diagnostics at scale, and as we grow, our platform is becoming increasingly data-intensive. The way we move, structure, and surface data directly affects our ability to support customers, deliver real-time insights, and unlock the next generation of diagnostics products.
We’re hiring our first Data Engineer to take ownership of that foundation. You will:
Build and run pipelines that turn raw, messy healthcare data into clean, trusted, usable information
Power customer products, internal analytics, and the AI models behind our next wave of diagnostics
Design how data flows through an entire diagnostics ecosystem — not just maintain ETLs
Build scalable, cloud-native pipelines on GCP and eliminate bottlenecks as we scale
Hunt down edge cases, build guardrails for quality, and ship systems other engineers rely on daily
If you love untangling complexity and building data systems that truly make an impact, you’ll fit right in — and the systems you build will unlock new products and accelerate everything we ship.
What you’ll be doing day to day
Designing and operating ingestion, transformation, and replication pipelines on GCP
Managing orchestration and streamlining ELT/ETL workflows (e.g., with Temporal; see the sketch after this list)
Creating clean, scalable, analytics-ready schemas in BigQuery
Implementing monitoring, alerting, testing, and observability across data flows
Integrating data from APIs, operational databases, and unstructured sources
Collaborating with product, engineering, analytics, and compliance on secure, high-quality data delivery
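To make the orchestration bullet above concrete, here is a minimal, hypothetical sketch of a Temporal workflow in Python that ingests raw lab results and loads them into an analytics store. The names (extract_raw_results, load_to_bigquery, IngestLabResults) and the stubbed logic are illustrative assumptions, not our actual pipeline code.

# Hypothetical sketch: a Temporal workflow that ingests raw lab results and
# loads them into an analytics store. Names are illustrative, not Junction's
# real code; the activities are stubbed.
from datetime import timedelta

from temporalio import activity, workflow


@activity.defn
async def extract_raw_results(source_uri: str) -> list[dict]:
    # In practice this would pull records from an upstream API or bucket.
    return [{"patient_id": "p1", "marker": "hba1c", "value": 5.4}]


@activity.defn
async def load_to_bigquery(rows: list[dict]) -> int:
    # In practice this would write rows via the BigQuery client library.
    return len(rows)


@workflow.defn
class IngestLabResults:
    @workflow.run
    async def run(self, source_uri: str) -> int:
        # Each step runs as a retryable activity with its own timeout.
        rows = await workflow.execute_activity(
            extract_raw_results,
            source_uri,
            start_to_close_timeout=timedelta(minutes=5),
        )
        return await workflow.execute_activity(
            load_to_bigquery,
            rows,
            start_to_close_timeout=timedelta(minutes=5),
        )

A worker registered with this workflow and its activities would run it on a task queue; the real pipelines cover ingestion, transformation, and replication across many sources.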
What this role isn’t responsible for but can contribute to
BI development or analytics reporting
Data and AI strategy, prioritization, or commercialization
Compliance frameworks, privacy requirements, or regulatory ownership
These are owned by other partners; your work enables them.
Requirements
Solid engineering fundamentals and experience building pipelines from scratch
Python and SQL fluency; comfortable across relational + NoSQL systems
Experience with orchestrators like Temporal, Airflow, or Dagster
Hands-on with BigQuery, Bigtable, and core GCP data tooling
Ability to turn messy, ambiguous data problems into clear, scalable solutions
Startup or small-team experience; comfortable moving fast with ownership
Communication skills, attention to detail, and a bias toward clarity and reliability
You don’t need to tick every box to fit in here. If the problems we’re solving genuinely interest you and you know you can contribute, we’d love to talk.
Nice to have
Experience with HIPAA/PHI or regulated healthcare data
Background with time-series data or event-driven architectures
Familiarity with dbt or similar transformation frameworks
Experience with healthcare, diagnostics, or ML/AI workloads
How you'll be compensated
Salary: $160K – $200K + early-stage options
Your salary is dependent on your location and experience level and is generated by our salary calculator. Read more in our handbook here.
Generous early-stage options (extended exercise window after 2 years of employment) - you will receive three offers based on how much equity you'd like
Regular in-person offsites; the most recent were in Morocco and Tenerife
Bi-weekly remote team happy hours and events
Monthly learning budget of $300 for personal development/productivity
Flexible, remote-first working, including $1K for home office equipment
25 days off a year + national holidays
Healthcare cover depending on location
Oh and before we forget:
Backend Stack: Python (FastAPI), Go, PostgreSQL, Google Cloud Platform (Cloud Run, GKE, Cloud Bigtable, etc.), Temporal Cloud
Frontend Stack: TypeScript, Next.js
API docs are here: docs.tryvital.io
Company handbook is here, with our engineering values and principles
Important details before applying:
We only hire folks physically based in the GMT and EST time zones - more information here.
We do not sponsor visas right now given our stage
Compensation Range: $160K - $200K
