Member of Engineering (Evaluations / Engineering)
poolside
Location
Remote (EMEA/East Coast)
Employment Type
Full time
Location Type
Remote
Department
R&D
ABOUT POOLSIDE
In this decade, the world will create Artificial General Intelligence. Only a small number of companies will achieve this. Their ability to stack advantages and pull ahead will define the winners. These companies will move faster than anyone else. They will attract the world's most capable talent. They will be at the forefront of applied research, engineering, infrastructure, and deployment at scale. They will continue to scale their training to larger & more capable models. They will be given the right to raise large amounts of capital along their journey to enable this. They will create powerful economic engines. They will obsess over the success of their users and customers.
poolside exists to be this company: to build a world where AI will be the engine behind economically valuable work and scientific progress.
ABOUT OUR TEAM
We are a remote-first team that sits across Europe and North America and comes together in person once a month for 3 days, and for longer offsites twice a year.
Our R&D and production teams are a mix of more research-oriented and more engineering-oriented profiles; however, everyone deeply cares about the quality of the systems we build and has a strong underlying knowledge of software development. We believe that good engineering leads to faster development iterations, which allows us to compound our efforts.
ABOUT THE ROLE
Evaluation is one of the most important pillars of building a frontier model and product: it informs the direction of research and development, powers our experimentation, and ensures quality and alignment with our users.
To support this, we need a powerful pipeline and self-serve evaluation framework that helps poolsiders easily build, run, and extract insight from evals.
In this role, you will design and implement this platform to build and run evaluations at scale.
YOUR MISSION
Build a scalable self-serve evaluation platform to power our research and development
RESPONSIBILITIES
Design a Python framework that makes it easy for poolsiders to implement both internal and public benchmarks in a centralized way
Build and maintain the pipeline that runs distributed evaluations at scale
Collaborate with modeling and product teams to identify opportunities to improve our experimentation and evaluation tooling
SKILLS & EXPERIENCE
Strong engineering background
Experience leading software projects cross-functionally
Experience building highly reliable and well-tested services
Experience with distributed systems
Data pipelines, distributed processing
Message queues and event-driven architectures, e.g. Kafka, Google Pub/Sub
Cloud platforms (GCP, AWS, Azure): managed services, storage
Monitoring and alerting: Grafana, Prometheus, Datadog
Plus: Experience designing frameworks or tooling for developers
A product mindset towards building developer-facing software
A knack for collaborating with researchers and ML engineers to identify opportunities
Plus: Experience with MLOps and data visualization platforms
PROCESS
Intro call with one of our Founding Engineers
Technical Interview(s) with one of our Founding Engineers
Team fit call with the People team
Final interview with one of our Founding Engineers
BENEFITS
Fully remote work & flexible hours
37 days/year of vacation & holidays
Health insurance allowance for you and dependents
Company-provided equipment
Wellbeing, always-be-learning and home office allowances
Frequent team get-togethers
A great, diverse & inclusive people-first culture
