Senior Data Engineer (Spark) [Online recruitment]
Location: Warszawa
We are looking for you if:
  • You have experience with Spark and Kafka,
  • You have experience with HDFS/Hive and generally with Hadoop,
  • You have advanced SQL knowledge, including query authoring, and hands-on experience with a variety of relational databases,
  • You have great development skills in Scala, Python or Java,
  • You have 5+ years of experience in a Data and/or Software Engineer role,
  • You are a communicative person,
  • You have very good analytical skills and can solve complex problems,
  • You can complete tasks and achieve results in an efficient, timely and high-quality manner.
You'll get extra points for:
  • Experience with Nifi and Groovy-based processors,
  • Spring/Spring Boot based microservice development,
  • Bash and/or Ansible knowledge,
  • Experience in the financial services area,
  • Experience with stream processing using Flink or Kafka,
  • Experience in building and optimizing large-scale data pipelines, architectures and data sets, in both batch and real-time data integration,
  • Working knowledge of message queuing, stream processing, and highly scalable big data stores,
  • Experience working in a DevOps environment: CI/CD, Azure DevOps, test automation, Docker.
Information about squad:
We run the biggest data lake in ING. Besides batch ingestion, we also support near real-time data ingestion and distribution using Kafka. We plan to move to a new streaming ingest platform using container technologies, and to focus more on being an event-based data lake. We have built a generic, metadata-driven data pipeline on top of Nifi. You will be joining a team of experts where you can learn, share knowledge and have fun.
  • type of contract: contract of employment
  • work hours: start 7:00–9:00, end 15:00–17:00
  • office location: Zajęcza 4, Warszawa
Scope of duties
20% - Build the solutions required for optimal loading of data from a wide variety of data sources
15% - Create and maintain, with the team, an optimal data pipeline architecture
15% - Assemble large, complex data sets that meet functional / non-functional requirements, in a response or streaming manner
15% - Identify, design, and implement internal process improvements
15% - Keep data separated and secure across international boundaries through multiple data centers
10% - Work with data and analytics experts to strive for greater functionality in our data systems
10% - Work with stakeholders including the Data and Design teams to assist with data-related technical issues and support their data infrastructure needs
Your development
  • professional development
  • certificates and knowledge development
  • training budget
  • access to the newest technologies
  • international projects
  • free English courses
Your health, well-being and family
  • private medical care
  • 50% funded Multisport Card
  • bicycle parking
  • chillout rooms
  • integration events and Stay Fit program
Working conditions
  • stability of employment
  • fully equipped workstations
  • kitchen
We kindly inform you that we will contact only selected candidates.

If you consent to the processing of your data for future recruitment offers, we will keep the data for one year.

All information concerning the way we process personal data can be found here.