Senior Big Data Engineer

We are looking for a talented Senior Big Data Engineer.

Required skills

  • 3-5+ years of professional experience
  • Experience building tools as part of a high-quality data system
  • Working knowledge of Sqoop, Hive, Impala, and HDFS
  • Fluency with at least one dialect of SQL (MySQL and Hive preferred)
  • Ability to develop software, whether scripts for moving data around, batch jobs, or stream-processing components
  • Strong written and verbal communication skills
  • Upper-Intermediate level of English

Will be a plus

  • Streaming platform experience, typically based around Kafka, Spark, Storm, or Beam
  • Strong understanding of AWS data platform services and their strengths/weaknesses
  • Scala/Spark experience, plus some Ruby and Python

Responsibilities

Data quality and integrity are two areas of focus for your work in our existing, organically grown data infrastructure. You would be responsible for building tools and technology to ensure that downstream customers can trust the data they're consuming. Depending on the project, this might involve collaborating with the Data Science and Content Engineering teams to repartition or optimize business-critical Hive tables, or working with Core Platform to implement better processing jobs for scaling our consumption of streaming data sets. Almost everything you work on would aim to increase satisfaction for internal customers of Scribd data.

We Offer

  • Competitive compensation based on your skills
  • Option to work from home (WFH)
  • Democratic management style & friendly environment

Locations

  • Kyiv / Kharkiv / Remote