Overview

Role & responsibilities:

  • As a fast-growing company, we are constantly expanding our data and data sources. Coming up with scalable and efficient solutions to provide data for downstream use is a core part of your responsibilities
  • Take responsibility for managing a large part of our data pipeline
  • Communicate with different stakeholders to understand their data-related needs
  • Independently develop new parts of our data infrastructure from scratch, tailored to the company’s needs
  • You are a mentor to our Junior/Trainee Data Engineers
  • You can assist our Data Scientists on any Data Engineering topic
  • Envision and shape our infrastructure to take it to the next level

Requirements:

  • A master’s or PhD in a technical field (e.g. Computer Science, Mathematics)
  • At least 3 years of Data Engineering experience
  • Extensive knowledge of SQL, database management, and database architecture
  • Strong coding skills in Python and knowing your way around APIs
  • Experience with Amazon Web Services (Redshift, S3, Lambda, SageMaker, Glue, EMR, etc.)
  • Experience with Apache Airflow (or a similar workflow management platform)
  • Familiarity with Kubernetes
  • Familiarity with Tableau (or a similar data visualization tool)
  • Not required but a big plus: knowing your way around DevOps and any experience related to Machine Learning / Data Science


About Compado

  • A company that gives you space to grow and take ownership! A chance to be part of one of Berlin’s fastest-growing startups, carrying major responsibility in growing Compado into a company with 100+ employees.
  • Lead a “work from everywhere” lifestyle with an optional office in the heart of Berlin.
  • Work in a highly innovative culture that constantly breaks new ground, with an emphasis on New Work and remote working as well as environmental consciousness.
  • An international & multicultural work environment.