Job Description
The Future Begins Here
At Takeda, we are leading digital evolution and global transformation. By building innovative solutions and future-ready capabilities, we are meeting the needs of patients, our people, and the planet.
Bengaluru, India’s epicenter of innovation, has been selected as the home of Takeda’s recently launched Innovation Capability Center. We invite you to join our digital transformation journey. In this role, you will have the opportunity to boost your skills and become the heart of an innovation engine that contributes to global impact and improvement.
At Takeda’s ICC we Unite in Diversity
Takeda is committed to creating an inclusive and collaborative workplace, where individuals are recognized for the backgrounds and abilities they bring to our company. We are continuously improving our collaborators’ journey at Takeda, and we welcome applications from all qualified candidates. Here, you will feel welcomed, respected, and valued as an important contributor to our diverse team.
About the role:
As a Data Engineer, you will be responsible for building and supporting large-scale data architectures that provide information to downstream systems and business users. We are seeking an innovative and experienced individual who can aggregate and organize data from multiple sources to streamline business decision-making. In your role, you will collaborate closely with Data Engineer Leads and partners to establish and maintain data platforms that support front-end analytics. Your contributions will inform Takeda’s dashboards and reporting, providing insights to stakeholders throughout the business.
In this role, you will be part of the Digital and Analytics team. This team drives business insights through data engineering best practices, analysing and interpreting the organization’s data to draw conclusions about trends. You will work closely with the Tech Delivery Lead and the Data Engineering team located in India and the US, and will align to the Data & Analytics chapter of the ICC.
This position will be part of the PDT Business Intelligence pod and will report to the Data Engineering Lead.
How you will contribute:
- Develop and maintain scalable data pipelines, in line with ETL principles, and build out new integrations using AWS/Azure native technologies to support continuing increases in data sources, volume, and complexity (a minimal sketch follows this list).
- Define data requirements, gather and mine data, and validate the efficiency of data tools in the big-data environment.
- Lead the evaluation, implementation and deployment of emerging tools and processes to improve productivity.
- Implement processes and systems to provide accurate and available data to key stakeholders, downstream systems, and business processes.
- Partner with Business Analysts and Solution Architects to develop technical architectures for strategic enterprise projects and initiatives.
- Coordinate with Data Scientists to understand data requirements, and design solutions that enable advanced analytics, machine learning, and predictive modelling.
- Foster a culture of sharing, re-use, design for scale and stability, and operational efficiency of data and analytical solutions.
- Leverage AI/ML and Generative AI capabilities within data engineering workflows to enhance data quality, automate pipeline optimization, enable intelligent data discovery, and support advanced analytics use cases.
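As an illustration of the pipeline work described above, here is a minimal sketch of a batch ETL job in PySpark. The bucket paths, dataset, and column names are hypothetical placeholders for illustration only, not Takeda systems.

```python
# Minimal batch ETL sketch: extract raw JSON from object storage, transform,
# and load curated Parquet for downstream analytics. All paths and columns
# are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("example-batch-etl").getOrCreate()

# Extract: raw source data landed in a cloud object store.
raw = spark.read.json("s3://example-raw-zone/sales/")

# Transform: standardize types, derive measures, drop records missing keys.
clean = (
    raw.withColumn("order_date", F.to_date("order_date", "yyyy-MM-dd"))
       .withColumn("revenue", F.col("quantity") * F.col("unit_price"))
       .dropna(subset=["order_id"])
)

# Load: write curated data partitioned for efficient downstream queries.
clean.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://example-curated-zone/sales/"
)
```

In practice, this same extract-transform-load shape would be parameterized and orchestrated by the AWS/Azure native services referenced above.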
Minimum Requirements/Qualifications:
- Bachelor's degree in Engineering, Computer Science, Data Science, or a related field
- 3+ years of experience in software development, data science, data engineering, ETL, and analytics reporting development
- Experience in building and maintaining data and system integrations using dimensional data modelling and optimized ETL pipelines.
- Experience in designing and developing ETL pipelines using tools such as IICS, DataStage, Ab Initio, Talend, etc.
- Proven track record of designing and implementing complex data solutions
- Demonstrated understanding and experience using:
  - Data engineering programming languages (e.g., Python, SQL)
  - Distributed data frameworks (e.g., Spark)
  - Cloud platform services (AWS/Azure preferred)
  - Relational databases
  - DevOps and continuous integration
  - AWS services such as Lambda, DMS, Step Functions, S3, EventBridge, CloudWatch, Aurora RDS, or related AWS ETL services
  - Azure services such as ADF, ADLS, etc.
  - Data lakes and data warehouses
  - Databricks/Delta Lakehouse architecture
  - Code management platforms such as GitHub, GitLab, etc.
- Understanding of database architecture, data modelling concepts, and administration.
- Hands-on experience with Spark Structured Streaming for building real-time ETL pipelines (see the sketch after this list).
- Ability to apply continuous integration and delivery principles to automate the deployment of code changes to higher environments, improving code quality, test coverage, and the automation of resilient test cases.
- Proficient in programming languages (e.g., SQL, Python, PySpark) to design, develop, maintain, and optimize data architectures/pipelines that fit business goals.
- Strong organizational skills with the ability to work on multiple projects simultaneously and operate as a leading member across globally distributed teams to deliver high-quality services and solutions.
- Excellent written and verbal communication skills, including storytelling and interacting effectively with cross-functional teams and other strategic partners
- Strong problem-solving and troubleshooting skills
- Ability to work in a fast-paced environment and adapt to changing business priorities
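For the Spark Structured Streaming requirement above, the following is a minimal, hypothetical sketch of a real-time ETL pipeline. The Kafka topic, broker address, schema, and paths are assumptions for illustration, and the job presumes the Kafka connector and Delta Lake are available on the cluster.

```python
# Minimal Structured Streaming sketch: parse JSON events from Kafka and
# append them to a Delta table with checkpointing. Topic, broker, schema,
# and paths are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

spark = SparkSession.builder.appName("example-streaming-etl").getOrCreate()

schema = StructType([
    StructField("event_id", StringType()),
    StructField("amount", DoubleType()),
])

# Read a continuous stream of events from a Kafka topic.
events = (
    spark.readStream.format("kafka")
         .option("kafka.bootstrap.servers", "broker:9092")
         .option("subscribe", "example-events")
         .load()
         .select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
         .select("e.*")
)

# Append the parsed stream to a Delta table; the checkpoint enables recovery
# after failures without reprocessing already-committed events.
query = (
    events.writeStream.format("delta")
          .option("checkpointLocation", "/tmp/checkpoints/example-events")
          .outputMode("append")
          .start("/tmp/delta/example-events")
)
query.awaitTermination()
```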
Preferred requirements:
- Master's degree in Engineering with a specialization in Computer Science, Data Science, or a related field
- Demonstrated understanding and experience using:
  - CDK
  - The IICS data integration tool
  - Job orchestration tools such as Tidal, Airflow, or similar
  - NoSQL databases
- Proficiency in leveraging Databricks Unity Catalog for effective data governance and implementing robust access control mechanisms is highly advantageous (a brief sketch follows this list).
- Databricks Certified Data Engineer Associate
- AWS/Azure Certified Data Engineer
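As a brief illustration of the Unity Catalog item above, the snippet below sketches schema-level access control. Catalog, schema, and group names are hypothetical, and it assumes a Databricks workspace with Unity Catalog enabled, where `spark` is predefined.

```python
# Hypothetical Unity Catalog governance sketch: create a catalog/schema and
# grant an analyst group read-only access. All names are placeholders.
spark.sql("CREATE CATALOG IF NOT EXISTS example_catalog")
spark.sql("CREATE SCHEMA IF NOT EXISTS example_catalog.analytics")

# Read-only access for the `analysts` group at the schema level.
spark.sql("GRANT USE CATALOG ON CATALOG example_catalog TO `analysts`")
spark.sql("GRANT USE SCHEMA ON SCHEMA example_catalog.analytics TO `analysts`")
spark.sql("GRANT SELECT ON SCHEMA example_catalog.analytics TO `analysts`")
```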
BENEFITS:
It is our priority to provide competitive compensation and a benefits package that bridges your personal life with your professional career. Among our benefits are:
- Competitive Salary + Performance Annual Bonus
- Flexible work environment, including hybrid working
- Comprehensive Healthcare Insurance Plans for self, spouse, and children
- Group Term Life Insurance and Group Accident Insurance programs
- Employee Assistance Program
- Broad variety of learning platforms
- Diversity, Equity, and Inclusion Programs
- Reimbursements – Home Internet & Mobile Phone
- Employee Referral Program
- Leaves – Paternity Leave (4 weeks), Maternity Leave (up to 26 weeks), Bereavement Leave (5 calendar days)
ABOUT ICC IN TAKEDA:
- Takeda is leading a digital revolution. We’re not just transforming our company; we’re improving the lives of millions of patients who rely on our medicines every day.
- As an organization, we are committed to our cloud-driven business transformation and believe the ICCs are the catalysts of change for our global organization.
#LI-Hybrid
Locations
IND - Bengaluru
Worker Type
Employee
Worker Sub-Type
Regular
Time Type
Full time