View job on Handshake

Employer: Noodle.ai

Expires: 08/16/2021

Come solve real problems, with really awesome people :)

At Noodle.ai, we are not just building AI applications. We are going deep into industries that have yet to leverage AI at scale, such as steel mills, distribution & logistics companies, and consumer packaged goods. Our applications fit and integrate deeply into the supply chain, from raw materials to shelf. The applications we build not only need to integrate with the existing software in these industries, but also need to talk to each other to really drive the value from AI. It turns out we are one of the pioneers here, charting a new course. This means the science behind building this software, and the AI behind it, has not yet settled. You will be part of a team that is charting this new course, figuring out how to adapt software engineering best practices to delivering AI applications that fit within legacy software in non-tech industries. This is going to be an exciting ride, full of opportunity for impact, learning, and challenges we will tackle together.

Noodle.ai’s Data Engineers have a strong understanding of database structures, modeling, and data warehousing techniques; they know how to create SQL queries, stored procedures, and views, and how to define best practices for engineering scalable, secure data pipelines. We are looking for people who are not afraid of the unknown, are experts at their craft, and can adapt and learn as we create a suite of new AI applications.

ROLES AND RESPONSIBILITIES:
- Support and monitor multiple data pipelines across different customers
- Work closely with the development team to understand changes to every release
- Contribute to the data engineering development work when needed
- Collaborate with multiple stakeholders, including but not limited to the Infrastructure, DevOps, and Data Science teams
- Interface with customer-facing teams

Must-haves:
- At least 2 years of relevant experience
- Undergraduate degree in a relevant field (e.g., Computer Science) or equivalent experience
- Good knowledge of Python, especially for data processing
- Very good knowledge of SQL and experience writing complex SQL on PostgreSQL
- Experience with data pipeline orchestration tools, preferably Airflow
- Basic understanding of containers and familiarity with Docker commands
- Working knowledge of a distributed system such as Spark or Hadoop
- Experience on data engineering engagements involving the development of complex data pipelines or ETL/ELT processes for data ingestion and processing
- Very good debugging skills
- Flexibility to learn new technologies and adapt to a dynamic environment

Nice to have:
- Exposure to cloud platforms (preferably AWS)
- Working experience with Snowflake
- Basic understanding and usage of Jenkins for continuous deployment
- Understanding of the ML model lifecycle and pipelines