
Data Engineer - EMEA Data & Analytics

Job category:
Date posted:
End date:
ID:
2607043261W


Kenvue is currently recruiting a:

Data Engineer - EMEA Data & Analytics

What we do

At Kenvue, we realize the extraordinary power of everyday care. Built on over a century of heritage and rooted in science, we are the house of iconic brands, including NEUTROGENA®, AVEENO®, TYLENOL®, LISTERINE®, JOHNSON'S® and BAND-AID®, which you already know and love. Science is our passion; care is our talent.

Who we are

Our global team is made up of ~22,000 brilliant people with a workplace culture where every voice matters and every contribution is appreciated. We are passionate about insights and innovation, and committed to delivering the best products to our customers. With expertise and empathy, being a Kenvuer means having the power to impact millions of people every day. We put people first, care fiercely, earn trust with science, solve with courage, and have brilliant opportunities waiting for you! Join us in shaping our future, and yours. For more information, click here.

Role reports to:

Manager Data & Analytics EMEA

Location:

Asia Pacific, India, Karnataka, Bangalore

Work location:

Hybrid

What you will do

Job Title: Data Engineer - EMEA Data & Analytics

Location: Bangalore

Company: Kenvue

Position Summary: We are seeking a skilled Data Engineer to join our dynamic EMEA data and analytics team. The ideal candidate will have a strong background in data engineering, with hands-on experience in utilizing the Kenvue Data Platform, which includes Snowflake/Databricks, Apache Airflow, dbt/PySpark, Streamlit, GitHub and Python. As a Data Engineer, you will be responsible for designing, building, and maintaining scalable data pipelines that support our analytics and reporting needs.

Key Responsibilities:

  • Data Pipeline Development: Design, implement, and maintain robust data pipelines using Apache Airflow, ensuring data is ingested, transformed, and made available for analytics and reporting.

  • Data Warehousing: Utilize Snowflake/Databricks for data storage, management, and optimization, ensuring high performance and reliability of data access.

  • Data Transformation: Employ dbt (data build tool)/PySpark to develop and manage the data transformation workflow, ensuring data quality, consistency, and documentation.

  • Data Visualization: Collaborate with data visualization developers (PowerBI) and stakeholders to create interactive dashboards and data applications using Streamlit, providing insights and supporting decision-making processes.

  • Programming & Scripting: Write efficient and scalable code in Python to automate data processes, perform data cleansing, and manage ETL workflows.

  • Collaboration: Work closely with cross-functional teams including data scientists, analysts, and business stakeholders to understand data requirements and deliver actionable insights.

  • Monitoring & Maintenance: Implement monitoring solutions to ensure data quality, pipeline health, troubleshoot issues, and optimize performance.

  • Version Control: Utilize GitHub and follow version control best practices.

  • Documentation: Maintain comprehensive documentation of data workflows, methodologies, and best practices to ensure knowledge sharing within the team.

  • Support and Maintenance: Take over the support and maintenance of existing solutions built on the Kenvue Data Platform.
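The pipeline work described above follows the classic extract-transform-load (ETL) pattern. As a purely illustrative sketch of that pattern (the data, field names, and in-memory "warehouse" below are hypothetical examples, not Kenvue's actual platform; production pipelines would run on Apache Airflow against Snowflake/Databricks):

```python
# Toy ETL sketch: parse raw CSV, aggregate with basic cleansing, and load the
# result into a stand-in "warehouse" dict. All names and data are illustrative.
import csv
import io

RAW_SALES = """region,units,price
EMEA,10,2.5
EMEA,4,3.0
APAC,7,1.5
"""


def extract(raw: str) -> list[dict]:
    """Extract: parse raw CSV text into row records."""
    return list(csv.DictReader(io.StringIO(raw)))


def transform(rows: list[dict]) -> dict[str, float]:
    """Transform: cleanse types and aggregate revenue per region."""
    revenue: dict[str, float] = {}
    for row in rows:
        region = row["region"].strip()
        amount = int(row["units"]) * float(row["price"])
        revenue[region] = revenue.get(region, 0.0) + amount
    return revenue


def load(revenue: dict[str, float], warehouse: dict) -> None:
    """Load: write the aggregate into a (stand-in) warehouse table."""
    warehouse["revenue_by_region"] = revenue


warehouse: dict = {}
load(transform(extract(RAW_SALES)), warehouse)
print(warehouse["revenue_by_region"])  # → {'EMEA': 37.0, 'APAC': 10.5}
```

In a real deployment, each of the three steps would typically be a task in an Airflow DAG, with the transform layer expressed in dbt or PySpark rather than hand-written Python.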

Qualifications:

  • Bachelor’s degree in Computer Science, Engineering, Data Science, or a related field.

  • 3+ years of experience in data engineering or a related role.

  • Hands-on experience with Snowflake/Databricks, Apache Airflow, dbt, Streamlit, GitHub and Python.

  • Strong understanding of ETL/ELT processes, data modeling, and data warehousing concepts.

  • Proficiency in SQL and experience with data querying and analysis.

  • Familiarity with data visualization tools and techniques (e.g., Power BI).

  • Excellent problem-solving skills and attention to detail.

  • Strong communication and collaboration skills, with the ability to work effectively in a team-oriented environment.

Preferred Qualifications:

  • Knowledge of data governance and data security practices.

  • Knowledge of data observability best practices.

If you are an individual with a disability, please check our Disability Assistance page for information on how to request an accommodation.