Data Engineer - EMEA Data & Analytics

Job function:
Date posted:
End date:
ID:
2607043261W

Kenvue is currently recruiting for the following position.

Data Engineer - EMEA Data & Analytics

What We Do

At Kenvue, we believe in the extraordinary power of everyday care. Rooted in more than a century of heritage and science, we offer iconic brands you already know and love, including Neutrogena®, Aveeno®, Tylenol®, Listerine®, Johnson’s® and BAND-AID®. Science is our passion; care is our talent.

Who We Are

Our global team is made up of more than 22,000 diverse and brilliant people who are passionate about insights and innovation and committed to delivering the best products to our customers. As a Kenvuer with expertise and empathy, you have the power to impact the lives of millions of people every day. We put people first, care fiercely, earn trust with science, and solve with courage. Join us in shaping our future, and yours.

Role reports to:

Manager Data & Analytics EMEA

Location:

Asia Pacific, India, Karnataka, Bangalore

Workplace type:

Hybrid

What You Will Do

Job Title: Data Engineer - EMEA Data & Analytics

Location: Bangalore

Company: Kenvue

Position Summary: We are seeking a skilled Data Engineer to join our dynamic EMEA data and analytics team. The ideal candidate will have a strong background in data engineering, with hands-on experience in utilizing the Kenvue Data Platform, which includes Snowflake/Databricks, Apache Airflow, dbt/PySpark, Streamlit, GitHub and Python. As a Data Engineer, you will be responsible for designing, building, and maintaining scalable data pipelines that support our analytics and reporting needs.

Key Responsibilities:

  • Data Pipeline Development: Design, implement, and maintain robust data pipelines using Apache Airflow, ensuring data is ingested, transformed, and made available for analytics and reporting (see the illustrative sketch after this list).

  • Data Warehousing: Utilize Snowflake/Databricks for data storage, management, and optimization, ensuring high performance and reliability of data access.

  • Data Transformation: Employ dbt (data build tool)/PySpark to develop and manage the data transformation workflow, ensuring data quality, consistency, and documentation.

  • Data Visualization: Collaborate with data visualization developers (PowerBI) and stakeholders to create interactive dashboards and data applications using Streamlit, providing insights and supporting decision-making processes.

  • Programming & Scripting: Write efficient and scalable code in Python to automate data processes, perform data cleansing, and manage ETL workflows.

  • Collaboration: Work closely with cross-functional teams including data scientists, analysts, and business stakeholders to understand data requirements and deliver actionable insights.

  • Monitoring & Maintenance: Implement monitoring to ensure data quality and pipeline health, troubleshoot issues, and optimize performance.

  • Version Control: Utilize GitHub and follow version control best practices.

  • Documentation: Maintain comprehensive documentation of data workflows, methodologies, and best practices to ensure knowledge sharing within the team.

  • Support and Maintenance: Take over the support and maintenance of existing solutions built on the Kenvue Data Platform.
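
To make the orchestration and transformation tooling above concrete, here is a minimal sketch of an Airflow DAG that chains a Python extract step with a dbt run. It assumes Airflow 2.4+ and dbt are available in the environment; the DAG name, task IDs, and dbt project path are hypothetical placeholders, not Kenvue Data Platform specifics.

    # Minimal illustrative DAG: extract raw data, then run dbt transformations.
    # Assumes Airflow 2.4+; dag_id, task_ids, and the dbt project path are
    # hypothetical placeholders rather than Kenvue Data Platform specifics.
    from datetime import datetime

    from airflow import DAG
    from airflow.operators.bash import BashOperator
    from airflow.operators.python import PythonOperator


    def extract_sales(**context):
        # Placeholder extract step; in practice this could land source data
        # in a Snowflake or Databricks staging area.
        print("Extracting raw sales data...")


    with DAG(
        dag_id="emea_sales_pipeline",
        start_date=datetime(2024, 1, 1),
        schedule="@daily",
        catchup=False,
    ) as dag:
        extract = PythonOperator(
            task_id="extract_sales",
            python_callable=extract_sales,
        )

        # dbt owns the transformation layer (models, tests, documentation).
        run_dbt = BashOperator(
            task_id="run_dbt_models",
            bash_command="dbt run --project-dir /opt/dbt/emea_analytics",
        )

        extract >> run_dbt

In practice, the extract task would be replaced by connectors or platform-specific operators, and monitoring and data-quality checks would be layered on top, as described in the Monitoring & Maintenance responsibility.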

Qualifications:

  • Bachelor’s degree in Computer Science, Engineering, Data Science, or a related field.

  • 3+ years of experience in data engineering or a related role.

  • Hands-on experience with Snowflake/Databricks, Apache Airflow, dbt, Streamlit, GitHub and Python.

  • Strong understanding of ETL/ELT processes, data modeling, and data warehousing concepts.

  • Proficiency in SQL and experience with data querying and analysis.

  • Familiarity with data visualization tools and techniques (PowerBI).

  • Excellent problem-solving skills and attention to detail.

  • Strong communication and collaboration skills, with the ability to work effectively in a team-oriented environment.

Preferred Qualifications:

  • Knowledge of data governance and data security practices.

  • Knowledge of data observability best practices.

If you are an individual with a disability, please check our Disability Assistance page for information on how to request an accommodation.