Lead Data Engineer

Brand: HSBC
Area of Interest: Technology
Location: Birmingham, GB, B1 1HQ
Work style: Hybrid Worker
Date: 3 Oct 2025

If you’re looking for a career that will help you stand out, join HSBC and fulfil your potential. Whether you want a career that could take you to the top, or simply one that takes you in an exciting new direction, HSBC offers opportunities, support and rewards that will take you further.

HSBC is one of the largest banking and financial services organisations in the world, with operations in 64 countries and territories. We aim to be where the growth is, enabling businesses to thrive and economies to prosper, and, ultimately, helping people to fulfil their hopes and realise their ambitions.


Department: 

ESG Data & Analytics is an initiative that is part of the Group Data Strategy to transform the way we govern, manage and use ESG data to its full potential across HSBC.

The assets being developed as part of ESG Data & Analytics are designed to support HSBC at a Group level. They include a Data Lake: a single virtual pool of client, transaction, product, instrument, pricing and portfolio data. The lake is used to deliver solutions to business requirements, offering data as a service.


In this role, you will: (Principal Responsibilities)

As a key member of the technical team, working alongside Engineers, Data Scientists and Data Users, you will be expected to define and contribute at a high level to many aspects of our collaborative Agile development process:


• Software design, Scala & Spark development, and automated testing of new and existing components in an Agile, DevOps and dynamic environment (a brief testing sketch follows this list)

• Promoting development standards, code reviews, mentoring, knowledge sharing and team management

• Production support and troubleshooting, along with peer code reviews

• Implementing tools and processes to handle performance, scale, availability, accuracy and monitoring

• Liaising with Business Analysts to ensure that requirements are correctly interpreted and implemented

• Participating in regular planning and status meetings, contributing to the development process through involvement in Sprint reviews and retrospectives, and providing input into system architecture and design
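As a purely illustrative aside (this is not HSBC code), the automated testing mentioned above often takes the shape of a ScalaTest suite running a small Spark transformation on a local SparkSession. The suite name, columns and data below are all hypothetical:

```scala
import org.apache.spark.sql.SparkSession
import org.scalatest.funsuite.AnyFunSuite

class NotionalAggregationSpec extends AnyFunSuite {

  // A small local SparkSession is enough to unit-test DataFrame logic.
  private lazy val spark = SparkSession.builder()
    .master("local[2]")
    .appName("unit-tests")
    .getOrCreate()

  test("notional is summed per client and product") {
    import spark.implicits._

    // Hypothetical input rows: (client_id, product_code, notional).
    val input = Seq(
      ("c1", "bond", 100.0),
      ("c1", "bond", 50.0),
      ("c2", "swap", 25.0)
    ).toDF("client_id", "product_code", "notional")

    // The transformation under test: group and sum the notional column.
    val result = input
      .groupBy("client_id", "product_code")
      .sum("notional")
      .collect()
      .map(r => (r.getString(0), r.getString(1), r.getDouble(2)))
      .toSet

    assert(result == Set(("c1", "bond", 150.0), ("c2", "swap", 25.0)))
  }
}
```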


To be successful in this role, you should meet the following requirements: (Must-have requirements)

• Strong experience of Scala development and design using Scala 2.10+, and sound working knowledge of the Unix/Linux platform

• Experience with most of the following technologies: Apache Hadoop, Scala, Apache Spark, Spark Streaming, YARN, Kafka, Hive, Python, ETL frameworks, MapReduce, SQL and RESTful services

• Hands-on experience building data pipelines using Hadoop components such as Hive, Spark and Spark SQL (see the pipeline sketch after this list), plus an understanding of big data modelling using both relational and non-relational techniques

• Experience with industry-standard version control tools (Git, GitHub), automated deployment tools (Ansible and Jenkins) and requirement management in JIRA

• Experience debugging code issues and communicating the findings to the development team and architects
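Purely as an illustration of the kind of pipeline work described above (not HSBC's actual codebase), a minimal Scala and Spark batch job might read a Hive table, aggregate it with Spark SQL functions and write a curated table back. All table, column and application names here are assumptions:

```scala
import org.apache.spark.sql.{SparkSession, functions => F}

object ClientPositionsJob {
  def main(args: Array[String]): Unit = {
    // enableHiveSupport lets Spark SQL read and write Hive-managed tables.
    val spark = SparkSession.builder()
      .appName("client-positions-daily")
      .enableHiveSupport()
      .getOrCreate()

    import spark.implicits._

    // Hypothetical raw source table of client transactions.
    val transactions = spark.table("raw.client_transactions")

    // Aggregate gross notional per client and product for today's business date.
    val positions = transactions
      .filter($"business_date" === F.current_date())
      .groupBy($"client_id", $"product_code")
      .agg(F.sum($"notional").as("gross_notional"))

    // Persist the result as a curated Hive table for downstream consumers.
    positions.write
      .mode("overwrite")
      .saveAsTable("curated.client_positions_daily")

    spark.stop()
  }
}
```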


(Good-to-have requirements, but not essential)

• Experience with time-series/analytics databases such as Elasticsearch

• Experience with scheduling tools such as Airflow or Control-M, and an understanding of, or experience with, cloud design patterns

• Exposure to DevOps and Agile project methodologies such as Scrum and Kanban