incident.io is an incident management platform that helps companies when things go wrong. Whether they're site outages, data breaches, or functionality issues, incidents happen all the time. When they do, we help get the right people in the room, help you run and communicate your response, and give you a suite of tools and insights to learn and improve over time.
At incident.io, data is involved in everything we do: from product launches and the core dashboards we run the company with, to the dashboards embedded in our product, which our customers use to gain valuable insights from their incident data.
We collect a variety of data both internally (product usage) and externally (social media, Google Analytics, Stripe, Finance & Sales tools), and we're really proud of the data stack we've built and kept up to date so far. Our data stack consists of Google BigQuery, Fivetran, dbt, and Metabase.
We’re looking to make our first Data Engineering hire, as this is an area that’s crucial to invest in early. We want someone who is passionate about making our data stack great for our Data and Engineering teams to interact with, and who enjoys optimising dbt setups to keep up with the latest & greatest standards.
Think of this role as bridging the Data Engineer & Analytics Engineer domains. As an early-stage startup, and with you as the first hire in this area, you’ll get the chance to meaningfully impact every part of our stack and set the technical direction.
What you’ll be doing:
- Making our CI/CD process great: enabling Engineering & Data to develop & deploy code quickly and with confidence. This is critical to help our team scale, and is an area where you can quickly have a lot of impact
- Being the glue between Engineering and Data: we want someone who will proactively work with users of our data stack to understand where the gaps are and which improvements matter most
- Owning & improving the “EL” part of our stack: currently we use Fivetran, Segment, and a handful of Python scripts, but this setup is growing in complexity and cost. You’ll be responsible for accelerating our teams whilst also setting technical direction, simplifying our setup, and keeping costs under control
- Improving the “T” part of our stack: all of our Data team are well versed in dbt, so you won’t be expected to “own” dbt, but we’d love for you to help make our dbt setup exceptional
This role could be ideal for you if you:
- Have 4+ years of experience as a Data Engineer: or as an Analytics Engineer with a spike in tooling and a keen interest in the “EL” part of the stack as well as the “T”
- Know when to build, and when to buy: we aren’t averse to spending money on great tools, nor do we want to solve every problem by throwing money at it. We want you to help steer which bits of our stack we should build and which bits we should buy
- Are an expert on dbt Core: and are comfortable with advanced concepts such as overriding default macros (e.g. schema name generation) and building custom dbt functionality (e.g. running `state:modified` selectors to shorten dbt commands)
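As a flavour of what we mean by overriding default macros, here's a minimal sketch (following dbt's documented pattern, not our actual setup) of overriding the built-in `generate_schema_name` macro so that custom schema names are used verbatim instead of dbt's default `<target_schema>_<custom_schema>` concatenation:

```sql
-- macros/generate_schema_name.sql
-- Overrides dbt's built-in generate_schema_name macro.
-- Default behaviour prefixes custom schemas with the target schema;
-- this version uses the custom schema name as-is when one is set.
{% macro generate_schema_name(custom_schema_name, node) -%}
    {%- if custom_schema_name is none -%}
        {{ target.schema }}
    {%- else -%}
        {{ custom_schema_name | trim }}
    {%- endif -%}
{%- endmacro %}
```

And a `state:modified` selector is typically run against artifacts from a previous run to build only what changed, e.g. `dbt build --select state:modified+ --state <path-to-previous-artifacts>` (the trailing `+` also rebuilds downstream models).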
- (Nice to have) Have experience with GCP & Terraform: we already have an infrastructure engineer who is our go-to for both of these areas, but any additional expertise here is of course welcome
The salary for this position is determined by several job-related factors, such as experience, relevant skills, training, location, business needs, or market demands. The salary range for this role is £100,000 - £130,000. This position will also offer equity options.