Big Data Engineer

Posted 18 October 2023
Salary Negotiable
Location Phoenix
Job type Contract
Discipline Finance
Reference BBBH8107_1697660400
Contact Name Sarah Ford

Job description

Join a global leader in financial services! Our client, known for their diverse financial solutions and innovative services, is looking for a Big Data Engineer to join their team. Don't miss out on this exciting opportunity; apply today!

Must-Have Qualifications:

  • 5+ years of software development experience, including leadership of engineering teams and Scrum practices.
  • Proven 3+ years of hands-on expertise with data processing technologies such as MapReduce, Hive, and Spark (Core, Spark SQL, and PySpark).
  • Strong experience writing and reading complex SQL for data manipulation, especially with Hive tables and PySpark DataFrames, including optimizing joins over large data sets.
  • Proficiency in UNIX shell scripting.
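To give a feel for the join-optimization skill the role calls for, here is a minimal pure-Python sketch of the broadcast (map-side) join idea used in Hive and Spark: the small table is hashed once in memory and the large table is streamed against it, avoiding a costly shuffle. All table names and rows below are hypothetical.

```python
# Toy illustration of a broadcast (map-side) join: hash the small
# table once, then stream the large table against the lookup.
# All data here is made up for illustration.

def broadcast_join(large_rows, small_rows, key):
    # Build an in-memory lookup from the small side (the "broadcast").
    lookup = {row[key]: row for row in small_rows}
    # Stream the large side and emit merged rows on key match.
    for row in large_rows:
        match = lookup.get(row[key])
        if match is not None:
            yield {**row, **match}

transactions = [
    {"account_id": 1, "amount": 250.0},
    {"account_id": 2, "amount": 75.5},
    {"account_id": 9, "amount": 10.0},  # no matching account
]
accounts = [
    {"account_id": 1, "branch": "PHX"},
    {"account_id": 2, "branch": "NYC"},
]

joined = list(broadcast_join(transactions, accounts, "account_id"))
```

In actual PySpark the same intent is expressed with a hint, e.g. `large_df.join(broadcast(small_df), "account_id")` using `pyspark.sql.functions.broadcast`.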


Additional Qualifications:

  • Sound knowledge of data warehousing concepts.
  • Familiarity with financial reporting systems is advantageous.
  • Proficiency in data visualization tools such as Tableau, Sisense, and Looker.
  • Demonstrated expertise in distributed ecosystems.
  • Hands-on experience with programming in Python and Scala.
  • Mastery of Hadoop and Spark architecture, along with their operational principles.
  • The ability to architect and develop optimized data pipelines for both batch and real-time data processing.
  • Proven experience in the analysis, design, development, testing, and implementation of system applications.
  • Demonstrated ability to create and maintain technical and functional specifications and analyze software and system processing flows.
  • A keen aptitude for acquiring and applying programming concepts.
  • Effective communication skills for collaboration with internal and external business partners.


Preferred Qualifications:

  • Familiarity with cloud platforms like GCP/AWS and expertise in building scalable microservices solutions.
  • 2+ years of experience designing and constructing solutions using Kafka streams or queues.
  • Proficiency with GitHub and experience in deploying CI/CD pipelines.
  • Experience working with NoSQL databases such as HBase, Couchbase, and MongoDB.
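To illustrate the kind of real-time processing the Kafka qualification implies, here is a toy sketch of tumbling-window aggregation, the sort of logic a Kafka Streams or Spark Structured Streaming job implements. A plain list stands in for the Kafka topic; the event names and the 60-second window are hypothetical.

```python
# Toy sketch of tumbling-window counts over an event stream.
# A list of (timestamp, key) pairs stands in for a Kafka topic.

from collections import defaultdict

def tumbling_window_counts(events, window_seconds=60):
    # Assign each event to a window by integer-dividing its timestamp,
    # then count events per (window_start, key).
    counts = defaultdict(int)
    for timestamp, key in events:
        window_start = (timestamp // window_seconds) * window_seconds
        counts[(window_start, key)] += 1
    return dict(counts)

events = [
    (5, "login"), (42, "login"), (61, "login"), (65, "purchase"),
]
windowed = tumbling_window_counts(events)
```

A production pipeline would read from a real topic and handle late or out-of-order events, but the windowing arithmetic is the same idea.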


Preferred Location: Phoenix, AZ (Remote Work Considered)