Analytics Engineer

Job category:
Date posted:
End date:
ID: 2607043179W


Kenvue is currently recruiting an:

Analytics Engineer

What We Do

At Kenvue, we realize the extraordinary power of everyday care. Built on over a century of heritage and rooted in science, we're the house of iconic brands - including NEUTROGENA®, AVEENO®, TYLENOL®, LISTERINE®, JOHNSON'S® and BAND-AID® that you already know and love. Science is our passion; care is our talent.

Who We Are

Our global team is made up of ~22,000 talented people with a workplace culture where every voice matters and every contribution is appreciated. We are passionate about insights and innovation, and committed to delivering the best products to our customers. With expertise and empathy, being a Kenvuer means having the power to impact millions of people every day. We put people first, care fiercely, earn trust with science and solve with courage - and brilliant opportunities are waiting for you! Join us in shaping our future - and yours. For more information, click here.

Role reports to:

Senior Manager - Forecasting & Analytics

Location:

Asia Pacific, India, Karnataka, Bangalore

Work location:

Hybrid

What You Will Do

Job Overview

We are looking for an Analytics Engineer with 4+ years of experience to help build and optimize our next-generation data platform using Microsoft Fabric. You will be responsible for designing scalable data pipelines, managing Lakehouses and Warehouses, and ensuring our data architecture supports a "Plug-and-Play" model for downstream business projects. The ideal candidate is a hands-on engineer who can translate complex data requirements into efficient, governed, and reusable data assets.

---

Key Responsibilities

1. Fabric Ecosystem Engineering

· End-to-End Pipeline Development: Design and implement data ingestion and transformation workflows using Fabric Data Factory (Pipelines & Dataflows Gen2).

· Lakehouse & Warehouse Management: Architect and maintain OneLake storage structures, including Lakehouse (Delta Parquet) and Data Warehouse environments, to ensure a single source of truth (see the ingestion sketch after this list).

· KPI Hub Integration: Contribute to the development of the centralized KPI Hub by building reusable datasets that reduce data redundancy across the enterprise.
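To make the Lakehouse work above concrete, here is a minimal sketch of a Fabric notebook ingestion step that lands raw files as a Delta table in OneLake. This is illustrative only, not Kenvue's actual code: the file path, `order_id` column, and `sales_orders` table name are hypothetical placeholders.

```python
# Minimal sketch of a notebook-based ingestion step into a Fabric Lakehouse.
# Assumes the notebook is attached to a Lakehouse; Fabric notebooks supply a
# ready-made Spark session, which getOrCreate() simply reuses.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Hypothetical landing folder in the Lakehouse "Files" area.
raw = spark.read.option("header", "true").csv("Files/landing/sales/*.csv")

cleaned = (
    raw.dropDuplicates(["order_id"])               # hypothetical business key
       .withColumn("ingested_at", F.current_timestamp())
)

# Persist as a Delta (Parquet) table in OneLake so every downstream item
# reads from a single source of truth.
cleaned.write.format("delta").mode("overwrite").saveAsTable("sales_orders")
```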

2. Data Modeling & Transformation

· Advanced Spark/Python: Utilize Fabric Notebooks and PySpark for complex data engineering tasks, optimization, and large-scale processing.

· Semantic Layer Design: Build and optimize Power BI Semantic Models (Direct Lake mode) to provide high-performance reporting capabilities without data duplication.

· Data Quality & Governance: Implement automated data validation checks and metadata management to ensure compliance with GxP, SOX, and PII standards (a sketch of one such check follows this list).
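Below is a minimal sketch of what an automated data-validation check of this kind can look like, assuming a Spark session and a hypothetical `sales_orders` Delta table with `order_id` and `amount` columns.

```python
# Minimal sketch of an automated data-validation step in a Fabric notebook.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()
df = spark.read.table("sales_orders")  # hypothetical table name

# Rule 1: the business key must be present and unique.
null_keys = df.filter(F.col("order_id").isNull()).count()
dupe_keys = df.groupBy("order_id").count().filter("count > 1").count()

# Rule 2: monetary amounts must be non-negative.
bad_amounts = df.filter(F.col("amount") < 0).count()

failures = {"null_keys": null_keys, "dupe_keys": dupe_keys, "bad_amounts": bad_amounts}
if any(v > 0 for v in failures.values()):
    # Raising here fails the notebook run, surfacing the problem to the
    # pipeline that scheduled it.
    raise ValueError(f"Data quality checks failed: {failures}")
```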

3. Automation & Platform Operations

· Fabric CI/CD: Implement version control and deployment automation for Fabric items using Git integration and deployment pipelines.

· Cost & Performance Optimization: Monitor capacity utilization and optimize queries/storage to ensure the platform remains cost-efficient (see the maintenance sketch after this list).

· Technical Documentation: Create detailed data lineage maps, schema definitions, and deployment runbooks to support team collaboration.
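As one example of the storage-optimization work, here is a minimal sketch of routine Delta table maintenance. It assumes the delta-spark package bundled with the Fabric Spark runtime and the same hypothetical `sales_orders` table as above.

```python
# Minimal sketch of routine Delta maintenance for storage cost and scan speed.
from pyspark.sql import SparkSession
from delta.tables import DeltaTable

spark = SparkSession.builder.getOrCreate()
table = DeltaTable.forName(spark, "sales_orders")  # hypothetical table name

# Compact many small files into fewer large ones to speed up reads.
table.optimize().executeCompaction()

# Delete data files no longer referenced by the table's transaction log
# (subject to the default 7-day retention window).
table.vacuum()
```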

---

Required Qualifications

· Experience: 4+ years of total experience in Data Engineering, with at least 1 year of hands-on experience in Microsoft Fabric or deep expertise in the Azure Data Stack (Lakehouse, Data Factory, Databricks).

· Technical Proficiency:

o Strong SQL and PySpark skills for data manipulation.

o Hands-on experience with Delta Lake and Parquet formats.

o Experience with Power BI and DAX is highly preferred.

· Cloud Knowledge: Solid understanding of Azure networking, security (Managed Identities, Service Principals), and Entra ID (Azure AD) integration.

· Problem Solving: Ability to work in a fast-paced environment and deliver scalable solutions that align with a broader Solution Architecture.

---

Desired Qualifications

· Certification: DP-600 (Microsoft Fabric Analytics Engineer Associate) is a significant plus.

· Architectural Mindset: Familiarity with broader solution architecture principles is a strong plus.

If you are an individual with a disability, please see our Disability Assistance page for information on how to request an accommodation.