ING is looking for Data Engineer(s) for our Global KYC department
Think Forward! Our purpose is to empower people to stay a step ahead in life and in business. We are an industry recognized strong brand with positive recognition from customers in many countries, strong financial position, omni-channel distribution strategy and international network.
If you want to work at a place with plenty of freedom to innovate, where we believe you can live by the Agile manifesto without jeopardizing the necessary continuity, compliance and QA measures, where we are committed to delivering outstanding, stable and secure services to end users, and where we have a getting-things-done mentality, please read on.
The Global KYC organization is a first line of defense department, providing the ING business and functions with guidance and standardized solutions in the area of KYC-related regulations as well as supporting operational excellence.
As part of the Global KYC Delivery Tribe, you will be involved in the development of state-of-the-art solutions on relevant topics such as Anti-Money Laundering, Counter Terrorism Financing, Fraud and Sanctions, and their global implementation across 40+ countries, affecting 36 million ING customers.
What you'll do:
Create and maintain, with the team, an optimal data pipeline architecture
Assemble large, complex data sets that meet functional and non-functional requirements
Implement highly performant and secure solutions for data ingestion, processing and storage
Identify, design and implement frameworks that ensure the highest level of data quality
Design and implement optimization and continuous-improvement processes: automating and optimizing data delivery and scalability
Work with data and analytics experts to deliver greater functionality in our data systems and our data-driven organization
What we’re looking for:
A proficient (senior) engineer with several years of experience in Data and Software Engineering roles
Working knowledge of data warehousing concepts, data lakes and data marts
Experience with ETL processes and tools: IBM InfoSphere DataStage, IBM InfoSphere Data Architect and/or other relevant tooling
Excellent SQL knowledge and experience working with various types of relational and non-relational databases
Experience with stream-processing systems such as Kafka and/or Flink
Proficiency in object-oriented and/or scripting languages: Python, Java, Scala
Experience in building and optimizing (big) data pipelines, architectures and data sets in both batch and real-time data integration
Working knowledge of message queuing, stream processing, and highly scalable data stores
Experience with BI tools such as Cognos and Power BI
DevOps knowledge and experience: CI/CD concepts, Azure DevOps
Knowledge of, or affinity with, analytics and data science
Results-oriented, determined team player with a self-driven, can-do attitude
Understanding of the Agile/Scrum methodology
What we offer:
A salary tailored to your qualities and experience
Personal growth and challenging work environment with endless possibilities to realize your ambitions
A fulfilling and progressive way of working with a team that strives for the very best
Flexibility and support in all working processes
13th month salary
Individual Savings Contribution (BIS)
8% holiday allowance and a great vacation program
If you recognize yourself in this profile, apply directly or contact the recruiter listed in the advertisement for more information.