Joining the Epic Games family is propelling the kidtech sector to new heights.
Our teams are growing rapidly and we’re hiring a Data Engineer to take our products to the next level of scale.
As a Data Engineer at SuperAwesome, your responsibilities will be three-fold: you'll act as a hands-on mentor who leads teammates by example, keep the quality bar high by continuously evolving the system while keeping it simple, and focus on having the highest impact on the product.
You will join one of our Data and Analytics teams and work on developing our first "single source of truth" data analytics solution, together with the Product Manager and our internal-facing platform teams.
You will work closely with the Tech Lead and the other engineers in your team to define the appropriate technical approach, metrics, and timelines. You'll have your say in the product roadmap and help the team and the Product Manager make the most informed decisions when breaking down complex tech deliverables into simple, understandable user stories.
Quality is key for us, so you will ensure all product components are built to an appropriate level of quality for their stage (alpha/beta/production), deliver products with the appropriate level of testing and monitoring, fail fast, and learn and iterate frequently.
You will champion continuous improvement and always aim to improve the product your team owns and measure your impact with the appropriate tech, product, or delivery metrics.
Here’s what a typical day as a Data Engineer looks like:
- You’ll work across the full stack depending on where you can drive the highest impact: from ETL pipelines to data warehousing to visualisations, as well as testing and cloud infrastructure
- You’ll work with your team to design and implement features and services for the data analytics solution, and keep the design choices well documented and explained
- You’ll be a hands-on mentor and drive quality and reliability from the get-go, lowering the complexity of the system
- You’ll master one or more domains and break complex goals into simple, iterative deliverables
- You’ll interview candidates for your team and others, participating in both code reviews and system design interviews
- You’ll commit to high-speed iterations, high code quality, and continuous improvement via agile processes
- You’ll ensure the long-term quality, scalability, and maintainability of our systems
- You’ll champion a DevOps culture, treat operations with a mission-critical mindset, and support the live system in production, including participation in our out-of-hours on-call rota
Our stack is entirely cloud-native and includes technologies such as Python and PySpark, AWS, Presto, Snowflake, Tableau, Terraform, Kubernetes, Kafka, PostgreSQL, Druid, Redis, Sumo Logic, Datadog, and PagerDuty. There will also be opportunities to work with ML, so any experience with statistical modelling or machine learning libraries in Python or R would be helpful.