37 ETL Processes jobs in Singapore
Data Engineering Intern
Posted today
Job Description
We are an IT MES (Manufacturing Execution System) team based in Woodlands, supporting Seagate's global factory operations in Singapore, Malaysia, US, Thailand, and China. Our core mission is to design and implement scalable data integration solutions that power MES and Factory IT applications.
Our focus includes Database ETL processes, complex SQL development, and Python-based automation to optimize data flows and ensure system reliability. Beyond traditional data engineering, we are also exploring Generative AI and Agentic AI solutions to modernize data platforms and create new value for factory operations.
This internship is ideal for students who are passionate about ETL/Data Engineering with Oracle, eager to sharpen their Python skills, and curious about the application of LLMs and AI frameworks in enterprise IT.
About the role - as a Data Engineering Intern, you will:
- Work with senior engineers on ETL processes in Postgres / Oracle, including writing and optimizing stored procedures, functions, and packages.
- Develop and optimize complex SQL queries to support data extraction, transformation, and reporting needs.
- Use Python for automation, data processing, and proof-of-concepts (an illustrative sketch follows this list).
- Collaborate with Application Architects and Business SMEs to deliver data integration solutions supporting MES and factory applications.
- Contribute to projects involving LLMs, LangChain, LangGraph, and Marimo notebooks for GenAI-enabled data pipelines.
- Support testing, troubleshooting, and documentation to ensure system reliability and performance.
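To make the ETL bullets above concrete, here is a minimal, illustrative Python sketch (not Seagate's actual code): extract from a hypothetical Postgres staging.orders table with psycopg2, transform in Python, and upsert into a hypothetical reporting table. Every name and connection detail is a placeholder, and reporting.order_totals is assumed to have a unique constraint on id.

```python
# Illustrative ETL sketch only; all identifiers are hypothetical.
import psycopg2

DSN = "dbname=mes user=etl_user host=localhost"  # placeholder connection string

def run_etl():
    with psycopg2.connect(DSN) as conn:  # commits on clean exit
        with conn.cursor() as cur:
            # Extract: pull raw rows from the staging table.
            cur.execute("SELECT id, qty, unit_price FROM staging.orders")
            rows = cur.fetchall()

            # Transform: compute line totals, skipping incomplete rows.
            cleaned = [
                (row_id, qty * unit_price)
                for row_id, qty, unit_price in rows
                if qty is not None and unit_price is not None
            ]

            # Load: upsert into the reporting table (assumes unique id).
            cur.executemany(
                "INSERT INTO reporting.order_totals (id, total) "
                "VALUES (%s, %s) "
                "ON CONFLICT (id) DO UPDATE SET total = EXCLUDED.total",
                cleaned,
            )

if __name__ == "__main__":
    run_etl()
```

In the role itself, logic like this would more likely live in stored procedures, functions, and packages, with Python handling the automation around them.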
About you - you should have:
- Strong foundation in SQL and relational database concepts.
- Hands-on skills in database stored procedures, triggers, and performance tuning.
- Comfortable coding in Python and eager to apply it for ETL automation and analytics.
- Interested in emerging technologies like Generative AI, LLM frameworks (LangChain, LangGraph), and Marimo notebooks.
- Detail-oriented, analytical, and self-motivated with strong problem-solving skills.
- Good communication and teamwork abilities.
- Pursuing a degree in Computer Science, Software Engineering, Information Systems, or related field.
Good to have:
- Experience (academic or project-based) with ETL pipelines in Oracle/Postgres.
- Familiarity with Generative AI frameworks (LangChain, LangGraph, Chainlit, or similar).
- Knowledge of version control (Git) and Agile practices.
Our Woodlands site is one of the largest electronics manufacturing sites in Singapore, housing our recording media operations. Spread over three sites, it is easily accessible by bus or MRT, and many employees take mass transportation to work. At work, you can enjoy breakfast, lunch, dinner, and snacks at our onsite canteen and coffee shop. We offer a range of facilities, including an in-house gym and dance studio, as well as after-work badminton and table tennis competitions. On-site celebrations and community volunteer opportunities also abound.
Location: Woodlands, Singapore, W2
Travel: None
Data Engineering Analyst
Posted today
Job Description
- Bachelor's or Master's degree in Data Science, Computer Science, Statistics, Engineering, or a related field.
- Experience as a Data Analyst or in a similar analytical/data science role.
- Strong proficiency in SQL for data querying and transformation.
- Advanced knowledge of Python for data analysis, automation, and ML workflows.
- Experience with deep learning frameworks such as TensorFlow, PyTorch, or Keras.
- Solid understanding of data validation, anomaly detection, and data quality principles (a short anomaly-detection sketch follows this list).
- Hands-on experience with data visualization and reporting tools (e.g., Power BI, Tableau, Matplotlib, Seaborn).
- Familiarity with version control (Git), Jupyter notebooks, and working in collaborative environments.
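As a hedged illustration of the data-validation and anomaly-detection skills above (not tied to this employer), a minimal z-score outlier check in pandas; the readings and threshold are invented.

```python
# Toy anomaly detection: flag readings more than 2 standard deviations
# from the mean. Data and threshold are made-up examples.
import pandas as pd

readings = pd.Series([10.1, 9.8, 10.3, 10.0, 42.0, 9.9])  # one obvious outlier

z_scores = (readings - readings.mean()) / readings.std()
anomalies = readings[z_scores.abs() > 2]

print(anomalies)  # flags the 42.0 reading
```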
Data Engineering Lead
Posted today
Job Description
The Engineering Lead Analyst is a senior-level position responsible for leading a variety of engineering activities, including the design, acquisition, and deployment of hardware, software, and network infrastructure in coordination with the Technology team. The overall objective of this role is to lead efforts to ensure quality standards are met within existing and planned frameworks.
Responsibilities:
- Strategic Leadership: Define and execute the data engineering roadmap for Global Wealth Data, aligning with overall business objectives and technology strategy. This includes understanding the data needs of portfolio managers, investment advisors, and other stakeholders in the wealth management ecosystem.
- Team Management: Lead, mentor, and develop a high-performing, globally distributed team of data engineers, fostering a culture of collaboration, innovation, and continuous improvement.
- Architecture and Design: Oversee the design and implementation of robust and scalable data pipelines, data warehouses, and data lakes, ensuring data quality, integrity, and availability for global wealth data. This includes designing solutions for handling large volumes of structured and unstructured data from various sources.
- Technology Selection and Implementation: Evaluate and select appropriate technologies and tools for data engineering, staying abreast of industry best practices and emerging trends specific to wealth management data.
- Performance Optimization: Continuously monitor and optimize data pipelines and infrastructure for performance, scalability, and cost-effectiveness, ensuring optimal access to global wealth data.
- Collaboration: Partner with business stakeholders, data scientists, portfolio managers, and other technology teams to understand data needs and deliver effective solutions that support investment strategies and client reporting.
- Data Governance: Implement and enforce data governance policies and procedures to ensure data quality, security, and compliance with relevant regulations, particularly around sensitive financial data.
Qualifications:
- 10-15 years of hands-on experience with Hadoop, Scala, Java, Spark, Hive, Kafka, Impala, Unix scripting, and other big data frameworks.
- 4+ years of experience with relational SQL and NoSQL databases: Oracle, MongoDB, HBase.
- Strong proficiency in Python and Spark (Java or Scala), with knowledge of core Spark concepts (RDDs, DataFrames, Spark Streaming, etc.) and SQL (an illustrative PySpark sketch follows this list).
- Data integration, migration, and large-scale ETL experience on common platforms (PySpark, DataStage, Ab Initio, etc.), covering ETL design and build, handling, reconciliation, and normalization.
- Data modeling experience (OLAP, OLTP, logical/physical modeling, normalization, and knowledge of performance tuning).
- Experienced in working with large and multiple datasets and data warehouses
- Experience building and optimizing 'big data' data pipelines, architectures, and datasets.
- Strong analytic skills and experience working with unstructured datasets
- Ability to effectively use complex analytical, interpretive, and problem-solving techniques
- Experience with Confluent Kafka, Red Hat jBPM, and CI/CD build pipelines and toolchains: Git, Bitbucket, Jira.
- Experience with external cloud platforms such as OpenShift, AWS, and GCP.
- Experience with container technologies (Docker, Pivotal Cloud Foundry) and supporting frameworks (Kubernetes, OpenShift, Mesos).
- Experience integrating search solutions with middleware and distributed messaging (Kafka).
- Highly effective interpersonal and communication skills with tech/non-tech stakeholders.
- Experienced in the software development life cycle.
- Excellent problem-solving skills and a strong mathematical and analytical mindset.
- Ability to work in a fast-paced financial environment
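For illustration only (this is not Citi's codebase): a minimal PySpark DataFrame aggregation of the kind the Spark bullets above refer to; the dataset and column names are hypothetical.

```python
# Toy PySpark aggregation: total market value per account.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("wealth-demo").getOrCreate()

positions = spark.createDataFrame(
    [("ACC1", "AAPL", 100.0), ("ACC1", "MSFT", 250.0), ("ACC2", "AAPL", 80.0)],
    ["account_id", "symbol", "market_value"],
)

# Aggregate market value per account, a bread-and-butter DataFrame operation.
totals = positions.groupBy("account_id").agg(
    F.sum("market_value").alias("total_value")
)
totals.show()
```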
Education:
- Bachelor's degree/University degree or equivalent experience
- Master's degree preferred
Job Family Group:
Technology
Job Family:
Systems & Engineering
Time Type:
Full time
Most Relevant Skills
Please see the requirements listed above.
Other Relevant Skills
For complementary skills, please see above and/or contact the recruiter.
Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law.
If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity, review Accessibility at Citi.
View Citi's EEO Policy Statement and the Know Your Rights poster.
Data Engineering Architect
Posted today
Job Description
We are seeking a highly proficient and results-driven Data Engineering Architect with a robust background in designing, implementing, and maintaining scalable and resilient data ecosystems. The ideal candidate will possess a minimum of five years of dedicated experience in orchestrating complex data workflows and will serve as a key contributor within our advanced data services team. This role requires a meticulous professional who can transform intricate business requirements into high-performance, production-grade data solutions.
Key Responsibilities:
- Architectural Stewardship: Design, develop, and optimize sophisticated data pipelines leveraging distributed computing frameworks to ensure efficiency, reliability, and scalability of data ingestion, transformation, and delivery layers.
- SQL Mastery: Act as a subject matter expert in SQL, crafting and refining highly complex, multi-layered queries and stored procedures for advanced data manipulation, extraction, and reporting, while ensuring optimal performance and resource utilization.
- Distributed Processing Expertise: Lead the development and deployment of data processing jobs using Apache Spark, orchestrating complex data transformations, aggregations, and feature engineering at petabyte scale.
- Big Data Orchestration: Proactively manage and evolve our data warehousing solutions built on Apache Hive, overseeing schema design, partition management, and query optimization to support large-scale analytical and reporting needs.
- Collaborative Innovation: Work synergistically with senior engineers and cross-functional teams to conceptualize and execute architectural enhancements, data modeling strategies, and systems integrations that align with long-term business objectives.
- Quality Assurance & Governance: Establish and enforce rigorous data quality standards, implementing comprehensive validation protocols and monitoring mechanisms to guarantee data integrity, accuracy, and lineage across all systems.
- Operational Excellence: Proactively identify, diagnose, and remediate technical bottlenecks and anomalies within data workflows, ensuring system uptime and operational stability through systematic troubleshooting and root cause analysis.
Qualifications:
- Educational Foundation: Bachelor's degree in Computer Science, Information Systems, or a closely related quantitative field.
- Experience: A minimum of five (5) years of progressive, hands-on experience in a dedicated data engineering or equivalent role, with a proven track record of delivering enterprise-level data solutions.
- Core Technical Skills:
SQL: Expert-level proficiency in SQL programming is non-negotiable, including advanced query optimization, window functions, and schema design principles (see the window-function sketch after this list).
Distributed Computing: Demonstrated high-level proficiency with Apache Spark for large-scale data processing.
Data Warehousing: In-depth, practical experience with Apache Hive and its ecosystem.
- Conceptual Knowledge: Deep understanding of data warehousing methodologies, ETL/ELT processes, and dimensional modeling.
- Analytical Acumen: Exceptional problem-solving and analytical capabilities, with the ability to dissect complex technical challenges and formulate elegant, scalable solutions.
- Continuous Learning: A relentless curiosity and a strong desire to stay abreast of emerging technologies and industry best practices.
- Domain Preference: Prior professional experience within the Banking or Financial Services sector is highly advantageous.
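As a hedged sketch of the SQL expertise called out above, a window-function query run through Spark SQL (which also touches the Spark requirement); the trades data and names are invented.

```python
# Toy window-function example: rank trades by amount within each currency.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("window-demo").getOrCreate()

spark.createDataFrame(
    [("2024-01-01", "SGD", 10.0), ("2024-01-02", "SGD", 12.0), ("2024-01-01", "USD", 7.0)],
    ["trade_date", "currency", "amount"],
).createOrReplaceTempView("trades")

spark.sql("""
    SELECT trade_date, currency, amount,
           RANK() OVER (PARTITION BY currency ORDER BY amount DESC) AS amount_rank
    FROM trades
""").show()
```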
Data Engineering Expert
Posted today
Job Description
1. Data Engineering & Platform Knowledge (Must)
Strong understanding of the Hadoop ecosystem: HDFS, Hive, Impala, Oozie, Sqoop, Spark (on YARN).
Experience in data migration strategies (lift & shift, incremental, re-engineering pipelines).
Knowledge of Databricks architecture (Workspaces, Unity Catalog, Clusters, Delta Lake, Workflows).
2. Testing & Validation (Preferred)
Data reconciliation (source vs. target).
Performance benchmarking.
Automated test frameworks for ETL pipelines.
3. Databricks-Specific Expertise (Preferred)
Delta Lake: ACID transactions, time travel, schema evolution, Z-ordering (a short sketch follows this list).
Unity Catalog: Catalog/schema/table design, access control, lineage, tags.
Workflows/Jobs: Orchestration, job clusters vs. all-purpose clusters.
SQL Endpoints / Databricks SQL: Designing downstream consumption models.
Performance Tuning: Partitioning, caching, adaptive query execution (AQE), Photon runtime.
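As referenced above, a short illustrative Delta Lake sketch covering ACID writes and time travel. It assumes a Spark session already configured with the delta-spark package; the path and schema are placeholders.

```python
# Delta Lake sketch: an ACID append plus time travel back to version 0.
# Assumes delta-spark is installed and configured on the Spark session.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
path = "/tmp/events_delta"  # placeholder location

df = spark.createDataFrame([(1, "login"), (2, "logout")], ["id", "event"])
df.write.format("delta").mode("overwrite").save(path)  # creates version 0
df.write.format("delta").mode("append").save(path)     # creates version 1

# Time travel: read the table as of its first committed version.
v0 = spark.read.format("delta").option("versionAsOf", 0).load(path)
print(v0.count())  # 2 rows at version 0
```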
4. Migration & Data Movement (Preferred)
Data migration from HDFS/Cloudera to cloud storage (ADLS/S3/GCS).
Incremental ingestion techniques (Change Data Capture, Delta ingestion frameworks).
Mapping Hive Metastore to Unity Catalog (metastore migration).
Refactoring HiveQL/Impala SQL to Databricks SQL (syntax differences).
5. Security & Governance (Nice to have)
Mapping Cloudera Ranger/SSO policies to Unity Catalog RBAC.
Azure AD / AWS IAM integration with Databricks.
Data encryption, masking, anonymization strategies.
Service Principal setup & governance.
6. DevOps & Automation (Nice to have)
Infrastructure as Code (Terraform for Databricks, Cloud storage, Networking).
CI/CD for Databricks (GitHub Actions, Azure DevOps, Databricks Asset Bundles).
Cluster policies & job automation.
Monitoring & logging (Databricks system tables, cloud-native monitoring).
7. Cloud & Infra Skills (Nice to have)
- Strong knowledge of the target cloud (AWS/Azure/GCP):
  - Storage (S3/ADLS/GCS).
  - Networking (VNETs, Private Links, Security Groups).
  - IAM & Key Management.
8. Soft Skills
Ability to work with business stakeholders for data domain remapping.
Strong documentation and governance mindset.
Cross-team collaboration (infra, security, data, business).
Senior Manager, Data Engineering
Posted today
Job Description
SPH Media's mission is to be the trusted source of news on Singapore and Asia, to represent the communities that make up Singapore, and to connect them to the world.
It has several business segments in the media industry, including the publishing of newspapers, magazines, and books in both print and digital editions. It also owns and operates other businesses including radio stations and outdoor media.
Key Responsibilities
- Data Pipeline & Architecture
- Design, build, and optimize scalable, reliable data pipelines (batch and streaming) and ETL/ELT workflows using SQL, Python, and big data technologies.
- Lead data architecture discussions, including the design and review of ERDs, data models, and system design.
- Build and maintain transactional and analytical schemas for data lakes, warehouses, and marts.
- Cloud & Infrastructure
- Implement and manage cloud-based data platforms (AWS preferred; Azure/GCP experience a plus) ensuring scalability, reliability, cost efficiency, and security.
- Deploy and manage infrastructure using Terraform and other IaC tools.
- Develop and maintain CI/CD pipelines (e.g., GitHub Actions) for deploying data applications and services.
- Apply best practices for cloud infrastructure, including cost management, security, redundancy, and performance optimization.
- Data Quality, Governance & Security
- Drive data quality, governance, and compliance across product and business areas.
- Implement data encryption, hashing, and privacy protection mechanisms.
- Adhere to Master Data Management (MDM) principles and enterprise data governance policies.
- Analytics & Business Impact
- Partner with product, engineering, data science, and business teams to deliver business outcomes through data-driven products.
- Deliver audience and behavioral analytics, KPIs, dashboards, and reporting.
- Propose solutions for BI dashboards and enterprise data needs.
- Support the democratization of data across the organization.
- Leadership & Collaboration
- Lead a team of data engineers, taking ownership of decisions and deliveries.
- Mentor and guide engineers in best practices, architecture, and performance optimization.
- Manage stakeholder relationships, roadmaps, and expectations.
- Work with diverse stakeholders across domains including sales, marketing, advertising, engineering, and publishing.
- Follow Agile methodologies (Scrum) for project delivery.
Requirements
- Strong proficiency in SQL for data modeling, querying, and transformation.
- Advanced Python development skills for data engineering use cases.
- Proven experience with AWS services (S3, Glue, Lambda, RDS, Lake Formation, Athena, Kinesis, EMR, Step Functions); a brief Athena example follows this list.
- Strong expertise in Terraform for infrastructure provisioning.
- Proficiency in CI/CD tools (e.g., GitHub Actions) and Git branching strategies.
- Hands-on experience with big data technologies such as Spark, Hive, Kafka, Hudi, or Iceberg.
- Ability to design BI-ready data models (dimensional modeling) and implement BI frameworks.
- Solid understanding of data governance, data quality, and security frameworks.
- Strong communication skills to explain complex data and analytics concepts to stakeholders.
- Leadership experience, including mentoring teams and managing deliveries.
- 7+ years of relevant hands-on experience in data engineering, solution architecture, or analytics roles.
- 5+ years of team leadership or people management experience.
- Experience with containerization and orchestration (Docker, Kubernetes).
- Experience with BI tools (Tableau, QuickSight, etc.) and query engines (Presto, Trino).
- Familiarity with Agile methodologies and enterprise-scale data platform implementations.
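As noted in the list above, a brief hypothetical boto3 example of starting an Athena query, one of the AWS services listed. The database, query, and S3 output location are placeholders; real use needs AWS credentials and polling (e.g., get_query_execution) for completion.

```python
# Hypothetical Athena example; all names and locations are placeholders.
import boto3

athena = boto3.client("athena", region_name="ap-southeast-1")

response = athena.start_query_execution(
    QueryString="SELECT article_id, COUNT(*) AS views FROM pageviews GROUP BY article_id",
    QueryExecutionContext={"Database": "analytics"},
    ResultConfiguration={"OutputLocation": "s3://example-bucket/athena-results/"},
)
print(response["QueryExecutionId"])  # poll this ID for query status
```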
Cloud Data Engineering Lead
Posted today
Job Description
We are seeking a highly skilled and experienced Cloud Data Engineering Lead to spearhead our cloud-based data initiatives. You will be responsible for architecting, building, and maintaining scalable data pipelines and platforms in the cloud, enabling advanced analytics and data-driven decision-making across the organization.
Key Responsibilities:
- Leadership & Strategy
- Lead a team of data engineers in designing and implementing cloud-native data solutions.
- Define and drive the data engineering roadmap aligned with business goals.
- Collaborate with cross-functional teams, including Data Science, Analytics, DevOps, and Product.
- Architecture & Development
- Architect and implement scalable, secure, and cost-effective data pipelines and platforms.
- Design and optimize data lake, data warehouse, and real-time streaming architectures.
- Ensure high availability, performance, and reliability of data systems.
- Cloud & Tools
- Leverage cloud platforms (AWS, Azure) for data storage, processing, and orchestration.
- Utilize tools such as Snowflake, Databricks, AWS Glue, Iceberg, Apache Spark, Airflow, Kafka, dbt, and Terraform (a minimal Airflow sketch follows this list).
- Implement CI/CD pipelines and infrastructure-as-code for data workflows.
- Governance & Quality
- Establish data governance, security, and compliance standards.
- Monitor data quality and implement automated validation and alerting mechanisms.
- Mentorship & Growth
- Mentor junior engineers and foster a culture of continuous learning and innovation.
- Conduct code reviews, technical workshops, and performance evaluations.
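As referenced in the responsibilities, a minimal Airflow DAG sketch showing the orchestration pattern; the task logic and identifiers are hypothetical.

```python
# Minimal Airflow 2.x DAG: a daily extract -> load pipeline skeleton.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull data from source")  # placeholder task logic

def load():
    print("write data to warehouse")  # placeholder task logic

with DAG(
    dag_id="example_daily_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)
    extract_task >> load_task  # run extract before load
```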
Qualifications:
- Bachelor's or Master's degree in Computer Science, Engineering, or related field.
- 7+ years of experience in data engineering, with 2+ years in a leadership role.
- Proven expertise in cloud platforms (AWS, Azure).
- Strong programming skills in Python, Scala, or Java.
- Experience with big data technologies (Spark, Hadoop), ETL tools, and SQL.
- Familiarity with data modeling, warehousing, and real-time data processing.
- Excellent communication, leadership, and project management skills.
5-day work week @ AMK area
Maestro HR
Damien Lee Tian Hong
R
16C8462
Assistant/ Manager, Data Engineering
Posted today
Job Description
Health Promotion Board
Permanent
What the role is
Overview of the Chief Data Officer Office (CDOO)
Advance a data-driven, evidence-based, and citizen-centric approach to health promotion through state-of-the-art data practices that maximise data quality, and through internationally recognised governance and policies that promote the safe use of data.
Overview of Data Operations & Architecture Department
Recognising the efficacy of big data and artificial intelligence in optimising sharper insights and outcomes for public health, the Data Operations and Architecture Department will take a proactive role to drive the design of a 360-view citizen-centric data architecture and the implementation of data operations to support and facilitate an evidence-based, data-driven approach to programme delivery. The department will set up and implement data architecture and structuring, cleaning and validation of big data from diverse HPB and partners' data sources (e.g. food transaction data, sports attendance data, screening and sensor data) to facilitate data feature engineering use cases pivotal to downstream data exploitation. The department will also be involved in strategic collaborations with partners (e.g. public healthcare and data science institutions, academics and technology players) to bring in their data and expertise onto the data operations to benefit citizens.
We are looking for dynamic and enthusiastic individuals with strong data engineering and data science skills and project delivery experience to join us on this pioneering journey to build a more data-driven HPB. You will be pivotal in helping to set up and implement a robust data engineering pipeline that can integrate the vast wealth of data HPB has across its numerous data sources. A robust data pipeline ensures that changes in the data sources do not impact the timely delivery of high-quality data to our business users.
Given the diversity of data collected from our lifestyle programmes, data science approaches such as machine learning will also be deployed to support the data engineering process. For example, we have built and included machine learning ops to help with standardising event names.
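As a rough illustration of that idea (not HPB's actual implementation), event-name standardisation can be sketched with standard-library fuzzy matching; the canonical names below are examples, and a production version would likely use a trained model or richer rules.

```python
# Toy event-name standardisation via fuzzy matching against a canonical list.
import difflib

CANONICAL_EVENTS = [
    "National Steps Challenge",
    "Healthy Meals in Schools",
    "Sundays @ The Park",
]

def standardise(raw_name: str, cutoff: float = 0.6) -> str:
    lookup = {name.lower(): name for name in CANONICAL_EVENTS}
    match = difflib.get_close_matches(raw_name.lower(), list(lookup), n=1, cutoff=cutoff)
    return lookup[match[0]] if match else raw_name  # fall back to the raw value

print(standardise("national steps challnge"))  # -> "National Steps Challenge"
```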
Responsibilities
- Design optimal data structures for strategic datasets and drive data engineering processes related to data ingestion, data imputation, data transformation and integration to facilitate data science and analytics use cases
- Build optimised, test-driven code to achieve the above
- Translate research and analytics questions into data features to facilitate dashboarding, model training, analytics, and data science applications
- Build test cases for code to ensure quality and correctness
- Project-manage and act as overall engagement lead, bringing together HPB vendors, partners, IT, and business teams to plan, set timelines, and drive processes that enable timely delivery of the data engineering pipeline, guided by business needs
- Lead internal cross-functional teams to integrate, process and engineer disparate datasets into structured and meaningful variables to facilitate machine learning and data science use cases
- Lead the preparation of materials to update senior management periodically on progress of the project and surface issues and challenges for collective deliberation and decision-making
Qualifications and skillsets
- Bachelor's/Master's degree in Computer Science, Engineering, or Information Systems with major or project experience in data science and/or business analytics from a recognised university. Professional certifications in data engineering from accredited professional bodies will be highly desirable.
- Minimum 3-5 years of relevant project management and delivery experience. Experience in end-to-end deployment of data engineering projects in production environment preferred.
- Proficiency in the use of Python and/or Power BI
- Experience working on data engineering, and/or database architectural work
- Experience working on a few of the following components of the data pipeline: data ETL, data cleaning, data modelling, and feature engineering.
- Able to work in a fast-paced matrix environment, overseeing multiple projects at the same time
- Good communication skills, with the ability to express complex data and concepts in a clear, simple-to-understand manner
- Have the drive, commitment, and perseverance to do well and have strong intellectual curiosity to learn and seek continuous improvements in the work assigned
- Experience in Big Data technologies e.g. Hadoop and Spark Ecosystems will be desirable
- Experience with test-driven development will be desirable
- Take an interest in digital health and healthcare
About Health Promotion Board
Established in 2001, the Health Promotion Board's (HPB) vision is to build a nation of healthy people.
We aim to empower residents in Singapore to attain optimal health, increase the quality and years of healthy life, and prevent illnesses, disability and premature death.
To achieve this, HPB drives national health promotion and disease prevention programmes, spearheads health education initiatives and creates a supportive environment in Singapore where healthy lifestyle options are available and accessible for healthy living every day.
About your application process
If you do not hear from us within 4 weeks of the job ad closing date, we seek your understanding that it is likely we are not moving forward with your application for this role. We thank you for your interest and would like to assure you that this does not affect your other job applications with the Public Service. We encourage you to explore and apply for other roles within the Health Promotion Board or the wider Public Service.
ETL Developer
Posted today
Job Description
Amaris Consulting is an independent technology consulting firm providing guidance and solutions to businesses. With more than 1,000 clients across the globe, we have been rolling out solutions in major projects for over a decade – this is made possible by an international team of 7,600 people spread across 5 continents and more than 60 countries. Our solutions focus on four different Business Lines: Information System & Digital, Telecom, Life Sciences and Engineering. We're focused on building and nurturing a top talent community where all our team members can achieve their full potential. Amaris is your steppingstone to cross rivers of change, meet challenges and achieve all your projects with success.
At Amaris, we strive to provide our candidates with the best possible recruitment experience. We like to get to know our candidates, challenge them, and be able to give them proper feedback as quickly as possible. Here's what our recruitment process looks like:
Brief Call: Our process typically begins with a brief virtual or phone conversation to get to know you. The objective? To learn about you, understand your motivations, and make sure we have the right job for you.
Interviews (the average number of interviews is 3; the number may vary depending on the level of seniority required for the position): During the interviews, you will meet people from our team: your line manager, of course, but also other people related to your future role. We will talk in depth about you, your experience, and skills, but also about the position and what will be expected of you. Of course, you will also get to know Amaris: our culture, our roots, our teams, and your career opportunities.
Case study: Depending on the position, we may ask you to take a test. This could be a role play, a technical assessment, a problem-solving scenario, etc.
As you know, every person is different and so is every role in a company. That is why we have to adapt accordingly, and the process may differ slightly at times. However, please know that we always put ourselves in the candidate's shoes to ensure they have the best possible experience.
We look forward to meeting you
Job Description
ABOUT THE JOB
- Develop and maintain ETL processes for data migration projects
- Design, implement, and optimize ETL workflows using Talend (preferably) or similar tools
- Analyze existing data structures and migration requirements
- Collaborate with business and technical teams to gather and clarify requirements
- Perform data mapping, transformation, and validation
- Troubleshoot and resolve ETL-related issues
- Ensure data quality and integrity throughout migration processes (a minimal reconciliation sketch follows this list)
- Document ETL processes and maintain technical documentation
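As flagged in the list above, a minimal, tool-agnostic reconciliation check in Python (in a Talend migration this would typically be built into the job itself); sqlite3 in-memory databases stand in for the real source and target systems.

```python
# Toy source-vs-target reconciliation: compare row counts after migration.
import sqlite3

def row_count(conn, table):
    (count,) = conn.execute(f"SELECT COUNT(*) FROM {table}").fetchone()
    return count

source = sqlite3.connect(":memory:")
target = sqlite3.connect(":memory:")
for conn in (source, target):
    conn.execute("CREATE TABLE customers (id INTEGER, name TEXT)")

source.executemany("INSERT INTO customers VALUES (?, ?)", [(1, "a"), (2, "b")])
target.executemany("INSERT INTO customers VALUES (?, ?)", [(1, "a"), (2, "b")])

assert row_count(source, "customers") == row_count(target, "customers"), "row-count mismatch"
print("reconciliation passed")
```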
ABOUT YOU
- Experience with ETL development and data migration projects
- Proficiency with Talend ETL tools (preferably) or similar platforms
- Academic background: degree in Computer Science, Information Systems, or related field
- Experience with relational databases (e.g., SQL Server, Oracle, MySQL)
- Experience with data modeling and data warehousing concepts
- Experience with generative AI tools (if relevant)
- You have strong analytical and problem-solving skills
- You have excellent communication and teamwork abilities
- You are adaptable and take a proactive approach to challenges
WHY AMARIS?
At Amaris Consulting, we believe in creating a thriving, positive workplace where every team member can grow, connect, and make a real impact. Here's what you can expect when you join our dynamic community:
- Global Diversity: Be part of an international team of 110+ nationalities, celebrating diverse perspectives and collaboration.
- Trust and Growth: With 70% of our leaders starting at entry-level, we're committed to nurturing talent and empowering you to reach new heights.
- Continuous Learning: Unlock your full potential with our internal Academy and over 250 training modules designed for your professional growth.
- Vibrant Culture: Enjoy a workplace where energy, fun, and camaraderie come together through afterworks, networking events, and more.
- Meaningful Impact: Join us in making a difference through our CSR initiatives, including the WeCare Together program, and be part of something bigger.
Equal Opportunity
Amaris Consulting is proud to be an equal-opportunity workplace. We are committed to promoting diversity within the workforce and creating an inclusive working environment. For this purpose, we welcome applications from all qualified candidates regardless of gender, sexual orientation, race, ethnicity, beliefs, age, marital status, disability, or other characteristics.
Head of PT Data Engineering

Posted 3 days ago
Job Description
**The Position**
The Pharma Technical Operations (PT) department is establishing the One PT Data Office to serve as the strategic center for data governance, strategy, and enablement across the entire global PT network. This team is at the heart of our digital transformation, responsible for architecting and leading a central data office to unlock the full potential of PT's data assets.
The Head of PT Data Engineering will be instrumental in building the robust data backbone that powers PT's digital transformation and data-driven decision-making. Reporting into the One PT Data Office, this critical role is accountable for leading a cutting-edge global team of internal and external data engineers, fostering a culture of technical excellence, innovation, and continuous delivery. You will define the strategy, evolve the data platforms and processes, and oversee the delivery of scalable, high-quality data products to enable advanced analytics, AI initiatives, and critical business processes across Pharma Technical Operations. This pivotal role requires a visionary leader to build and manage the foundational data infrastructure, pipelines, and platforms that enable the seamless flow of high-quality, FAIR data from diverse sources to data consumers, ensuring compliance, scalability, and future readiness for PT's ambitious digital agenda.
**The Opportunity**
+ Provide strategic leadership and vision for PT's global data engineering capabilities, defining the roadmap for data ingestion, transformation, storage, and consumption architectures.
+ Accountable for the design, development, and evolution of scalable, robust, and cost-effective data platforms (e.g., data lakes, data warehouses, streaming platforms) that support PT's advanced analytics, AI/ML, and data product needs.
+ Define and implement best practices, standards, and guidelines for data modeling, ETL/ELT processes, data quality, and data pipeline orchestration across the PT landscape.
+ Actively monitor and integrate cutting-edge industry trends, emerging data engineering technologies, and cloud-native solutions to continually optimize PT's data infrastructure in close collaboration with IT.
+ Build, mentor, mobilize, and empower a high-performing, global team of internal and external data engineers, fostering a culture of technical excellence, innovation, and agile delivery.
+ Accountable for the end-to-end delivery and operational excellence of critical data pipelines, ensuring timely, accurate, and reliable data availability for PT's business processes and analytical use cases.
+ Ensure data infrastructure and pipelines adhere to strict quality, security, and compliance standards (e.g., GxP, data integrity, data privacy), collaborating closely with Data Governance and Cybersecurity teams.
+ Drive the automation and optimization of data engineering workflows to enhance efficiency, reduce manual effort, and improve data freshness.
**Who You Are**
+ 12+ years of progressive experience in data engineering, data platform architecture, or related roles within a complex, global enterprise (preferably life sciences/pharma), plus 7+ years of senior leadership experience building, developing, and leading large, global teams of data engineers.
+ Proven track record of successfully designing, implementing, and scaling robust data pipelines and cloud-based data platforms (AWS, Azure, GCP data services) for advanced analytics and AI/ML.
+ Expert-level knowledge of modern data architectures, ETL/ELT, data orchestration, and data quality management.
+ Strong understanding of GxP, data integrity, and data privacy regulations in a manufacturing context.
+ Exceptional strategic thinking, communication, and influencing skills to lead and align diverse stakeholders globally.
+ Bachelor's degree in a relevant technical field required; Master's or advanced certifications are highly advantageous.
Ready for the next step? We look forward to hearing from you. Apply now to discover this exciting opportunity!
**Who we are**
A healthier future drives us to innovate. Together, more than 100,000 employees across the globe are dedicated to advancing science and ensuring everyone has access to healthcare today and for generations to come. Our efforts result in more than 26 million people treated with our medicines and over 30 billion tests conducted using our Diagnostics products. We empower each other to explore new possibilities, foster creativity, and keep our ambitions high, so we can deliver life-changing healthcare solutions that make a global impact.
Let's build a healthier future, together.
**Roche is an Equal Opportunity Employer.**