Member of Engineering (Pre-training / Data)
Poolside
- Position: Full-time
- Location: Remote (EMEA/East Coast)
ABOUT POOLSIDE
In this decade, the world will create artificial intelligence that reaches human-level intelligence (and beyond) by combining learning and search. Only a small number of companies will achieve this. Their ability to stack advantages and pull ahead will determine who survives and wins. These companies will move faster than anyone else. They will attract the world's most capable talent. They will be at the forefront of applied research and engineering at scale. They will create powerful economic engines. They will continue to scale their training to larger and more capable models. And they will be given the right to raise large amounts of capital along their journey to enable this.
poolside exists to be one of these companies - to build a world where AI will drive the majority of economically valuable work and scientific progress.
We believe that software development will be the first major capability of neural networks to reach human-level intelligence, because it is the domain where we can best combine learning and search.
At poolside we believe our applied research needs to culminate in products that are put in the hands of people. Today we focus on building for a developer-led, increasingly AI-assisted world. We believe that the current capabilities of AI already enable incredible tooling that can assist developers in their day-to-day work. We also believe that as we increase the capabilities of our models, we increasingly empower anyone in the world to build software. We envision a future where not just 100 million people can build software, but 2 billion.
ABOUT OUR TEAM
We are a remote-first team that sits across Europe and North America and comes together in person once a month for three days, and for longer offsites twice a year.
Our R&D and production teams are a mix of more research-oriented and more engineering-oriented profiles; however, everyone deeply cares about the quality of the systems we build and has a strong underlying knowledge of software development. We believe that good engineering leads to faster development iterations, which allows us to compound our efforts.
ABOUT THE ROLE
You would be working on our data team, which is focused on the quality of the datasets delivered for training our models. This is a hands-on role where your #1 mission is to improve the quality of the pretraining datasets by leveraging your previous experience, your intuition and training experiments. This includes synthetic data generation and data-mix optimization.
You would collaborate closely with other teams such as Pre-training, Fine-tuning and Product to define high-quality data both quantitatively and qualitatively.
Staying in sync with the latest research on dataset design and pretraining is key to succeeding in this role: you would constantly show original research initiative through short, time-bounded experiments, together with strong technical engineering competence when deploying your solutions in production. Because the volumes of data to process are massive, you will have at your disposal a performant distributed data pipeline and a large GPU cluster.
YOUR MISSION
To deliver massive-scale, highest-quality datasets of natural language and source code for training poolside models.
RESPONSIBILITIES
- Follow the latest research related to LLMs, and to data quality in particular; be familiar with the most relevant open-source datasets and models
- Work closely with other teams such as Pre-training, Fine-tuning and Product to ensure short feedback loops on the quality of the delivered models
- Suggest, conduct and analyze data ablations and training experiments that aim to improve the quality of the generated datasets through quantitative insights
SKILLS & EXPERIENCE
- Strong machine learning and engineering background:
  - Experience with Large Language Models (LLMs)
  - Good knowledge of Transformers is a must
  - Knowledge of / experience with cutting-edge training tricks
  - Knowledge of / experience with distributed training
  - Has trained LLMs from scratch
  - Knowledge of deep learning fundamentals
- Experience in building trillion-scale pretraining datasets, in particular:
  - Ingest, filter and deduplicate large amounts of web and code data
  - Familiarity with the concepts behind SOTA pretraining datasets: multilinguality, curriculum learning, data augmentation, data packing, etc.
  - Run data ablations, tokenization and data-mixture experiments
  - Develop prompt engineering pipelines to generate synthetic data at scale
  - Fine-tune small models for data filtering purposes
- Experience working with large-scale GPU clusters and distributed data pipelines
- Strong obsession with data quality
- Research experience:
  - Authorship of scientific papers on topics such as applied deep learning, LLMs or source code generation is a nice-to-have
  - Able to discuss the latest papers freely and dig into the fine details
  - Reasonably opinionated
- Programming experience:
  - Strong algorithmic skills
  - Linux
  - Git, Docker, k8s, managed cloud services
  - Data pipelines and queues
  - Python with PyTorch or JAX
- Nice to have:
  - Prior experience in non-ML programming, especially outside Python
  - C/C++, CUDA, Triton
PROCESS
- Intro call with Eiso, our CTO & Co-Founder
- Technical interview(s) with one of our Founding Engineers
- Team-fit call with Beatriz, our Head of People
- Final interview with Eiso, our CTO & Co-Founder
BENEFITS
- Fully remote work & flexible hours
- 37 days/year of vacation & holidays
- Health insurance allowance for you and your dependents
- Company-provided equipment
- Wellbeing, always-be-learning and home office allowances
- Frequent team get-togethers
- A great, diverse and inclusive people-first culture