Kenvue is currently recruiting for the following position.
Senior Analyst, Data Engineering
What We Do
At Kenvue, we believe in the extraordinary power of everyday care. Rooted in over a century of heritage and science, we offer iconic brands you already know and love, including Neutrogena®, Aveeno®, Tylenol®, Listerine®, Johnson’s® and BAND-AID®. Science is our passion; care is our talent.
Who We Are
Our global team is made up of more than 22,000 diverse and talented people, passionate about insights and innovation and committed to delivering the best products to our customers. With expertise and empathy, being a Kenvuer means having the power to impact the lives of millions of people every day. We put people first, care with our whole hearts, earn trust with science, and solve with courage. Join us in shaping our future, and yours.
Role reports to:
Data Engineering Manager
Location:
Asia Pacific, India, Karnataka, Bangalore
Work Location:
Hybrid
What You Will Do
As a Senior Data & Analytics Engineer, you will build, support, and optimize modern data solutions and analytical products across the Azure Data Platform. You’ll design and manage robust data pipelines (Azure Data Factory, Databricks, Data Lake, Synapse). You will collaborate closely with business stakeholders, data engineers, and data scientists to deliver reliable, secure, and scalable data products that drive decision-making, particularly within Supply Chain and CPG contexts.
Key Responsibilities:
Data Engineering
· Design, develop, customize, and manage data integration tools, data lakes, warehouses, and analytical systems on Azure.
· Build, scale, and harden data pipelines (internal & external sources) using Azure Data Factory, Azure Databricks (PySpark/Spark SQL), Azure Data Lake, Azure Synapse, and Azure SQL.
· Own automation, observability, and monitoring frameworks, capturing operational KPIs and data quality metrics; set up alerting and runbooks.
· Deploy pipelines to production, manage compute resources, configure data attributes, set up monitoring tools, and ensure resilient operations.
· Monitor performance & stability, triage incidents, and adapt pipelines as data/models/requirements evolve.
· Implement best practices for systems integration, security (RBAC, Key Vault, RLS/OLS), performance, cost optimization, and data governance.
· Apply DevOps/DataOps/Agile methodologies; manage version control (Git/Bitbucket) and CI/CD pipelines (Azure DevOps/Jenkins) for data and BI assets.
· Partner with business, IT, and SMEs to translate requirements into scalable data products; lead solutioning and POC discussions.
· Preferred: Experience in CPG Supply Chain (planning, sourcing, manufacturing, logistics, customer service) and related KPIs.
Required Qualifications
· Bachelor’s degree in Engineering, Computer Science, Data Analytics, IT, or related field.
· 4–6 years of relevant experience across Azure Data Engineering and Power BI.
· Hands-on expertise with:
o Azure Databricks (PySpark/Spark SQL), Microsoft Fabric, Azure Data Factory, Azure Data Lake, Azure Synapse, Azure SQL
o SQL/T-SQL (complex queries, stored procedures)
· Experience with Git/Bitbucket, Azure DevOps/Jenkins for CI/CD of data and BI assets.
· Strong scripting, data modeling, and programming fundamentals.
· Solid understanding of cloud architecture principles (security, scalability, reliability, cost).
· Excellent communication, stakeholder management, and the ability to mentor others.
Desired Qualifications
· Certifications:
o Microsoft Certified: Azure Data Engineer Associate (DP-203)
o Databricks Data Engineer Associate/Professional
· Experience with migration & modernization (from legacy ETL/BI to Azure-native).
· Preferred CPG/Supply Chain domain expertise.
· Master’s degree (e.g., Supply Chain/Analytics/IT) is a plus.
Individuals with disabilities: please see the Disability Assistance page for information on how to request an accommodation.