Category: Software Development/Engineering
Main location: United States, North Carolina, Cary
Position ID: J0722-0962
Employment Type: Full Time
CGI is seeking a dynamic Hadoop, Spark and Scala Developer to join our team supporting a core system for a large financial client in Cary, NC. We are looking for a hands-on Hadoop, Spark and Scala developer with 6+ years of experience. The system has high visibility within the organization and is used by many prominent collaborators. This role offers the flexibility of joining a team that applies Agile methodologies to deliver high-quality software to our customers. You will also have the opportunity to work with cloud-based technologies on highly visible initiatives. This role can be performed in Cary, NC.
• 5+ years of demonstrated ability with Big Data tools and technologies, including working in a production environment on a Hadoop project.
• 2+ years of experience with Spark, PySpark, SQL, Hive, Impala, Oozie, HDFS, Hue, Git, MapReduce and Sqoop.
• 2+ years of Programming experience in Scala Programming and Application Development.
• Experience in Test Driven Development (TDD), and/or Continuous Integration/Continuous Deployment (CI/CD) is a plus
• Big Data development using the Hadoop ecosystem, including Pig, Hive, and other Cloudera tools.
• Analytical and problem-solving skills applied to a Big Data environment.
• Experience with large-scale distributed applications.
• Experience with Agile methodologies to iterate quickly on product changes, developing user stories and working through backlog.
• Experience with Cloudera Hadoop distribution components and custom packages is preferred.
• Traditional Data Warehouse/ETL experience.
• Excellent planning, organization, communication and thought leadership skills.
• Ability to learn and apply new concepts quickly.
• Validated ability to mentor and coach junior team members.
• Strong leadership, communication and interpersonal skills.
• Ability to adapt to constant change; a sense of innovation, creativity, organization, and autonomy, and quick adaptation to new technologies.
• Capable and eager to work under minimal direction in a fast-paced, energetic environment.
Required qualifications to be successful in this role:
• Strong hands-on Hadoop, Python, Mainframe, DB2, and Teradata experience
• Experience in analysis, design, development, support, and improvement of data warehouse environments using Big Data technologies, with a minimum of 5 years' experience in Hadoop, MapReduce, Sqoop, HDFS, Hive, Impala, Oozie, Hue, Kafka, and YARN
• 2+ years of experience in Spark and PySpark.
• 2+ years of experience in Scala development.
• Python experience is a plus
• Experience with Agile methodologies to iterate quickly on product changes
Education: Bachelor’s Degree, or a level of education, training and experience equivalent to a Bachelor’s Degree, in a related field.