Supporting our software developers and data scientists on data initiatives, you will ensure the consistency of the data delivery architecture across our ongoing projects.
Create and maintain optimal data pipeline architecture
Assemble large, complex data sets that meet functional and non-functional business requirements
Identify, design, and implement internal improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and Azure "big data" technologies
Keep our data segregated and secure across national boundaries through multiple data centers and Azure regions
Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics
Work with stakeholders including the Executive, Product, Data, and Customer teams to assist with data-related technical issues and support their data infrastructure needs
Create data tools for analysts and data scientists that assist them in building and optimizing our product into an innovative industry leader
If you are a fast learner who thrives in challenging environments and has a creative yet pragmatic approach to problem-solving, read on!
A Master's degree in Computer Science or Software Engineering, or proven working experience with large volumes of data
Can design and implement a scalable real-time data processing pipeline using open source projects
Can maintain a codebase with multiple contributors and manage the release processes
Can develop solutions for managing structured data (e.g. SQL Server, PostgreSQL) and unstructured / NoSQL data (e.g. MongoDB, Hadoop, HBase)
Has an eye for detail and can obtain the domain knowledge necessary to spot incorrect data early and deliver with quality
Proficient in Python, C#, Java, or Scala
Proficient in monitoring technologies such as Grafana or Prometheus
Proficient in build tools such as Maven or SBT
Experience with or working knowledge of open-source big data solutions such as Spark, Flink, Beam, HBase, Kafka, Cassandra, NiFi, etc.
Willingness to work across a diverse set of technologies, and ability to ramp up on new technologies quickly
Excellent written and verbal communication skills
Good understanding of relational databases
Basic understanding of functional programming and property-based testing
Knowledge of Azure, Google Cloud, or AWS
Experience with build / test automation
We are Connecterra:
We build an AI that will impact the future of our planet.
As a full-stack technology company, our engineering challenges range from designing hardware sensors to building a machine learning platform: a farmers’ assistant that can run a dairy farm more effectively than a human.
We call it Ida. She learns about the behavior of farmers and cows, then provides guidance on making better decisions.
Join us to build an AI that has purpose.
Learn more about us.
The Tech Team
The Connecterra Engineering and Tech Team is at the core of our company. We keep all systems running and build powerful housing for our AI to grow and learn.
Most importantly, we are a key component in the company’s strategic development.
We offer:
Every day we work hard to make the world a better place, so the happiness and well-being of our team members matter to us. Hence we offer:
An open-minded and creative working environment
Flexible working hours and vacation policy
Fantastic office space in Amsterdam with a nice panoramic view
Tons of snacks and fruit, optional catered lunch plan
A fridge stocked with specialty beers and other drinks