Henkel

AO Migration - Data Engineer

Posted Aug 23, 2021
Project ID: 2733302
Location
Düsseldorf, Remote
Hours/week
40 hrs/week
Timeline
3 months
Starts: Sep 1, 2021
Ends: Nov 30, 2021
Payrate range
Unknown

*Please note: the service contract for this position will not be concluded with Henkel AG & Co. KGaA but with an external party.*

 

Project name: AO Migration

 

Project description / Background to the assignment

 

As part of the AO Migration project, we are planning four supply chain-related products. This includes, among other things, the (further) development of the software "Supply Chain Digital Twin" for the purpose of providing a generic network model mirroring Henkel's supply chain network. This also requires the development of the component "Global Supply Network Steering" contained in the software. The latter must be carried out with Azure Data Factory and Databricks in order to maintain compatibility with older versions of the software "Supply Chain Digital Twin". However, Azure Data Factory and Databricks have no relevance for our core products. Consequently, we do not have our own employees with sufficient expertise in Azure Data Factory and Databricks and therefore require external expertise. The contractor used by us has extensive experience with projects that require development services in Azure Data Factory and Databricks; the contractor therefore holds a unique position and provides services that differ significantly from those of our internal staff.

                      

Task Description
 

The services shall be provided within the framework of an agile development method. The concrete activities required in each case to implement the commissioned services shall be agreed iteratively between the parties in sprint meetings and implemented by the contractor within the respective sprints following those meetings. Prior to each sprint meeting, the contractor shall independently check, on the basis of its professional expertise, which individual services are reasonable and feasible within the scope of the assignment in the respective sprint. The sprints each have a duration of 2 weeks, so that the sprint meetings take place at intervals of 2 weeks.

Within the individual sprints, the contracting parties shall coordinate the respective technical requirements for the services to be provided in daily meetings in order to achieve compatibility of the individual components of the Supply Chain Digital Twin. The technical requirements for the services to be provided are assessed by the contractor on the basis of its own technical judgement. After completion of a sprint, the parties shall conduct a "Sprint Review" in which the contractor reports on the feasibility and status of the services performed in the previous sprint and makes a recommendation on how to proceed with regard to the services that proved to be unfeasible in that sprint.

All of the meetings and exchanges described above shall take place exclusively in the presence of a central contact person named by us, who coordinates the project on our internal side. The organisation and scheduling of the meetings described above in which the contractor is involved shall be carried out by the contractor and coordinated with us.

 

Backlog items will be assigned in Azure DevOps, containing business requirements and acceptance criteria.
 

Tasks:

  • Development of scalable and robust data pipelines for the Microsoft Azure Cloud-based data platform using state-of-the-art data engineering practices (an illustrative sketch follows this list), by
    • Designing a technical concept according to the backlog item
    • Developing and implementing the aforementioned technical concept
    • Testing, including unit and functional tests; in case of identified bugs or issues, reworking from step 1
  • Identify, design, and implement process improvements for the project-related tasks according to the discussed backlog: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability
  • Build the infrastructure required for extraction, transformation, and loading of data from a wide variety of data sources using SQL and big data technologies
  • Design and implement a scalable, agile data platform (DataOps) and facilitate sustainable data science deployment solutions (MLOps)
  • All backlog items have to be in line with the defined processes and technical standards provided by Henkel through Azure DevOps wikis.
  • Evaluation of Azure releases and, based on the outcome, derivation of action items related to the aforementioned tasks
  • Documentation of the technical implementation and related processes in Azure DevOps; Henkel will validate and approve it.
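
For orientation only, the sketch below illustrates the kind of pipeline step and unit test described in the first task group, assuming a Databricks/PySpark environment. Table names, columns, and the function itself are hypothetical placeholders, not actual Henkel assets or project requirements.

```python
# Minimal sketch of a pipeline transformation plus a unit test, assuming
# Databricks/PySpark. All names and columns are hypothetical examples.
from pyspark.sql import SparkSession, DataFrame
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("supply-chain-pipeline-sketch").getOrCreate()


def clean_shipments(raw: DataFrame) -> DataFrame:
    """Example transformation: drop incomplete rows, standardise units, deduplicate."""
    return (
        raw.dropna(subset=["shipment_id", "quantity"])
           .withColumn("quantity_kg", F.col("quantity") * F.col("unit_to_kg_factor"))
           .dropDuplicates(["shipment_id"])
    )


def test_clean_shipments() -> None:
    """Unit test in the spirit of the 'unit and functional tests' task above."""
    raw = spark.createDataFrame(
        [("s1", 10.0, 1.0), ("s1", 10.0, 1.0), (None, 5.0, 1.0)],
        ["shipment_id", "quantity", "unit_to_kg_factor"],
    )
    result = clean_shipments(raw)
    assert result.count() == 1                      # duplicate and null rows removed
    assert result.first()["quantity_kg"] == 10.0    # unit conversion applied


if __name__ == "__main__":
    test_clean_shipments()
    print("clean_shipments sketch passed its unit test")
```

In an actual backlog item, such a transformation would typically be orchestrated from an Azure Data Factory pipeline and documented in Azure DevOps as described above.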
