Analytics Data Engineer
- Job ID: 2607043180W
Kenvue is currently recruiting for an Analytics Data Engineer.
What we do
At Kenvue, we realize the extraordinary power of everyday care. Built on over a century of heritage and rooted in science, we’re the house of iconic brands, including NEUTROGENA®, AVEENO®, TYLENOL®, LISTERINE®, JOHNSON’S®, and BAND-AID®, that you already know and love. Science is our passion; care is our talent.
Who We Are
Our global team is ~22,000 brilliant people with a workplace culture where every voice matters and every contribution is appreciated. We are passionate about insights and innovation, and committed to delivering the best products to our customers. With expertise and empathy, being a Kenvuer means having the power to impact millions of people every day. We put people first, care fiercely, earn trust with science, and solve with courage, and we have brilliant opportunities waiting for you! Join us in shaping our future, and yours.
- Role reports to: Manager
- Location: Asia Pacific, India, Karnataka, Bangalore
- Work Location: Hybrid
What you will do
We are seeking a highly skilled Analytics Data Engineer with over 4 years of experience to design, develop, and optimize our data architecture. The ideal candidate will bridge the gap between Operational Technology (OT) and data-driven insights, specializing in real-time data streaming and cloud-based analytics. You will be responsible for building robust data pipelines that ingest IoT telemetry and transform it into actionable intelligence.
---
Key Responsibilities
· Data Pipeline Development: Design, develop, and maintain scalable ETL/ELT pipelines using Databricks and PySpark to process large-scale structured and unstructured data.
· Streaming & IoT Integration: Implement real-time data streaming solutions and manage device connectivity/telemetry ingestion using Azure IoT services (IoT Hub, Event Hubs).
· Cloud Architecture: Leverage Microsoft Fabric and Azure Data Lake Storage to build modern data warehouse solutions.
· OT Collaboration: Interface with Operational Technology systems to ensure seamless data flow from edge devices to the cloud.
· Programming: Write clean, maintainable, and high-performance code in Python for data manipulation and automation tasks.
· Reporting Support: Collaborate with BI teams to structure data for optimized consumption in Power BI (PBI), ensuring high-level alignment with reporting requirements.
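As a toy illustration of the telemetry-to-insight work described above (this is not Kenvue's actual pipeline; the device IDs and the `temperature_c` field are hypothetical, and a production version would use PySpark against Event Hubs rather than plain Python), a single transformation stage might normalize raw JSON telemetry into per-device aggregates:

```python
import json
from statistics import mean

def transform_telemetry(raw_records):
    """Parse raw JSON telemetry lines and compute a per-device
    average reading. Field names are hypothetical examples."""
    readings = {}
    for line in raw_records:
        record = json.loads(line)
        # Group each reading under its device identifier.
        readings.setdefault(record["device_id"], []).append(record["temperature_c"])
    # Reduce each device's readings to a rounded average.
    return {device: round(mean(values), 2) for device, values in readings.items()}

raw = [
    '{"device_id": "mixer-01", "temperature_c": 71.5}',
    '{"device_id": "mixer-01", "temperature_c": 72.5}',
    '{"device_id": "filler-02", "temperature_c": 65.0}',
]
print(transform_telemetry(raw))  # {'mixer-01': 72.0, 'filler-02': 65.0}
```

In a real Databricks deployment, the same parse-group-aggregate shape would typically be expressed as a Structured Streaming job reading from Event Hubs and writing to a Delta table.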
---
Required Skills & Qualifications
· Experience: 4+ years of professional experience in Data Engineering or a related software development role.
· Technical Stack:
o Expertise in PySpark and Databricks for distributed data processing.
o Hands-on experience with Azure IoT Hub and Event Hubs for streaming.
o Strong proficiency in Python programming.
· Data Warehousing: Solid understanding of Delta Lake architecture and relational database design.
· Domain Knowledge: Familiarity with OT (Operational Technology) environments and industrial data protocols.
· Visualization: High-level understanding of Power BI data modeling and DAX.
---
Desired Qualifications
· Architectural Mindset: Familiarity with broader solution architecture principles is a strong plus.
If you are an individual with a disability, please check our Disability Assistance page for information on how to request an accommodation.