
Analytics Engineer

Job Function:
Date Posted:
Closing Date:
ID:
2607043179W

Kenvue is currently recruiting for the following position:

Analytics Engineer

What We Do

At Kenvue, we believe in the extraordinary power of everyday care. Rooted in over a century of heritage and science, we offer iconic brands you already know and love, including Neutrogena®, Aveeno®, Tylenol®, Listerine®, Johnson's®, and BAND-AID®. Science is our passion; care is our talent.

Who We Are

Our global team is made up of a diverse group of more than 22,000 talented people who are passionate about insights and innovation and committed to delivering the best products to our customers. Being a Kenvuer, with both expertise and empathy, means having the power to impact the lives of millions of people every day. We put people first, care fiercely, earn trust with science, and solve with courage. Join us in shaping our future, and yours.

Role reports to:

Senior Manager - Forecasting & Analytics

Location:

Asia Pacific, India, Karnataka, Bangalore

Work Arrangement:

Hybrid

What You Will Do

Job Overview

We are looking for an Analytics Engineer with 4+ years of experience to help build and optimize our next-generation data platform using Microsoft Fabric. You will be responsible for designing scalable data pipelines, managing Lakehouses and Warehouses, and ensuring our data architecture supports a "Plug-and-Play" model for downstream business projects. The ideal candidate is a hands-on engineer who can translate complex data requirements into efficient, governed, and reusable data assets.

---

Key Responsibilities

1. Fabric Ecosystem Engineering

· End-to-End Pipeline Development: Design and implement data ingestion and transformation workflows using Fabric Data Factory (Pipelines & Dataflows Gen2).

· Lakehouse & Warehouse Management: Architect and maintain OneLake storage structures, including Lakehouse (Delta Parquet) and Data Warehouse environments, to ensure a single source of truth.

· KPI Hub Integration: Contribute to the development of the centralized KPI Hub by building reusable datasets that reduce data redundancy across the enterprise.

2. Data Modeling & Transformation

· Advanced Spark/Python: Utilize Fabric Notebooks and PySpark for complex data engineering tasks, optimization, and large-scale processing.

· Semantic Layer Design: Build and optimize Power BI Semantic Models (Direct Lake mode) to provide high-performance reporting capabilities without data duplication.

· Data Quality & Governance: Implement automated data validation checks and metadata management to ensure compliance with GxP, SOX, and PII standards.
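To give a flavor of the kind of automated data validation check described above, here is a minimal, hypothetical sketch in plain Python. The column names and rules are illustrative assumptions, not part of this role's actual codebase; in a Fabric Notebook, equivalent logic would typically be expressed over a Spark DataFrame with PySpark.

```python
# Hypothetical data-quality sketch: flag rows that violate simple
# completeness and range rules. Schema and rules are assumptions.

REQUIRED_COLUMNS = {"order_id", "order_date", "amount"}  # assumed schema

def validate_rows(rows):
    """Return a list of (row_index, issue) tuples for failed checks."""
    issues = []
    for i, row in enumerate(rows):
        # Completeness check: every required column must be present.
        missing = REQUIRED_COLUMNS - row.keys()
        if missing:
            issues.append((i, f"missing columns: {sorted(missing)}"))
            continue
        # Range check: amounts must be non-negative and non-null.
        if row["amount"] is None or row["amount"] < 0:
            issues.append((i, "amount must be a non-negative number"))
    return issues

rows = [
    {"order_id": 1, "order_date": "2024-01-05", "amount": 19.99},
    {"order_id": 2, "order_date": "2024-01-06", "amount": -5.00},
    {"order_id": 3, "order_date": "2024-01-07"},
]
print(validate_rows(rows))
```

In production, checks like these would run as part of the pipeline and write their findings to a monitoring table rather than printing them, so that compliance-relevant failures (GxP, SOX, PII) are auditable.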

3. Automation & Platform Operations

· Fabric CI/CD: Implement version control and deployment automation for Fabric items using Git integration and deployment pipelines.

· Cost & Performance Optimization: Monitor capacity utilization and optimize queries/storage to ensure the platform remains cost-efficient.

· Technical Documentation: Create detailed data lineage maps, schema definitions, and deployment runbooks to support team collaboration.

---

Required Qualifications

· Experience: 4+ years of total experience in Data Engineering, with at least 1 year of hands-on experience in Microsoft Fabric or deep expertise in the Azure Data Stack (Lakehouse, Data Factory, Databricks).

· Technical Proficiency:

o Strong SQL and PySpark skills for data manipulation.

o Hands-on experience with Delta Lake and Parquet formats.

o Experience with Power BI and DAX is highly preferred.

· Cloud Knowledge: Firm understanding of Azure networking, security (Managed Identities, Service Principals), and Entra ID (Azure AD) integration.

· Problem Solving: Ability to work in a fast-paced environment and deliver scalable solutions that align with a broader Solution Architecture.

---

Desired Qualifications

· Certification: DP-600 (Microsoft Fabric Analytics Engineer Associate) is a significant plus.

· Architectural Mindset: Familiarity with broader solution architecture principles is a strong plus.

Individuals with disabilities: please see the disability assistance page for information on how to request an accommodation.