Data Infrastructure Engineer
São Paulo, São Paulo, Brazil
6 months ago
About the Team
Data Engineering plays a significant part in all of our strategic efforts and decisions at Wildlife. Our mission is to provide the company with complete, secure, reliable, high-quality, and highly available data. To accomplish this mission, we are looking for engineers to help us develop cutting-edge Data Science infrastructure. We love working with large datasets, low-latency data systems, and complex business logic.
About the Role
On the Data Platform team, we focus on providing the interfaces between our computational resources and the people who want to use them to process data. Our current tech stack for data processing is Hadoop (MR), Hive, Presto, and Spark. The challenge here is dealing with over 1 PB of historical data, plus Spark streaming for near-real-time processing at a volume of around 200k RPM. In this role, you will ensure our stakeholders have all the tools and services they need to seamlessly extract, ingest, transform, load, and consume data.
More about you
- You enjoy working with complex business logic and large-scale data to build low-latency systems;
- Smart and creative, you have the ability and persistence to solve problems big and small. Curious by nature, you're constantly looking for ways to improve upon things;
- Demonstrate critical thinking and problem-solving capabilities both independently and collaboratively;
- You're flexible, fearless, and excited to help build something;
- You're hands-on, in the right ways: willing and able to do what's needed, no matter the task.
What you'll do
- Work proactively and closely with data scientists on various group projects, developing tools for data exploration and feature-engineering pipelines;
- Work proactively with our cloud infrastructure team and propose new solutions to ingest and process data;
- Design, develop, and test tools to improve the Data Science and Engineering teams' productivity;
- Build microservices for data-centered solutions hosted in our Kubernetes cluster.
What you'll need
- BS in Computer Science, Engineering (Software or other), Statistics, Physics, or a related field;
- At least 3 years of experience as a data engineer or infrastructure engineer (data services);
- Experience with at least two of Python, Scala, Java, Shell, or Go for system operations;
- Relevant experience with Big Data technologies such as Hive, HDFS, YARN, and Spark;
- Experience designing large-scale, low-latency distributed systems;
- Experience with real-time technologies such as Kafka, Kinesis, Storm, Spark Streaming, or Kafka Streams;
- Knowledge of how to design and build REST APIs;
- Experience with orchestration frameworks such as Airflow or Luigi.
We welcome people from all backgrounds who seek the opportunity to help build the best gaming company, where everyone thrives.