981 ETL Developer jobs in Singapore
ETL Developer
Posted today
Job Viewed
Job Description
We are looking for an ETL Developer with solid experience in building, optimizing, and maintaining data integration workflows, along with strong testing skills to ensure data accuracy, integrity, and performance. The ideal candidate will be able to develop ETL pipelines, validate data transformations, and collaborate with business and QA teams to deliver reliable data solutions.
Key Responsibilities:
- Design, develop, and maintain ETL processes for extracting, transforming, and loading data from multiple sources into target systems.
- Work closely with business analysts and data architects to translate business requirements into technical solutions.
- Write and optimize SQL queries for complex data transformations and validations.
- Conduct unit testing, system testing, and integration testing of ETL workflows to ensure data accuracy and completeness.
- Create and maintain technical documentation, including data flow diagrams, mapping documents, and transformation rules.
- Identify and resolve performance bottlenecks in ETL processes.
- Collaborate with QA teams to prepare and execute ETL test cases, track defects, and verify fixes.
- Support data migration projects and ensure compliance with data governance and security standards.
Required Skills & Experience:
- Bachelor's degree in Computer Science, Information Technology, or related field.
- 5+ years of experience in ETL development using tools like Informatica, Talend, SSIS, DataStage, or Pentaho.
- Strong SQL skills for data manipulation, validation, and performance tuning.
- Experience in ETL testing, including data reconciliation and regression testing.
- Good understanding of data warehousing concepts, dimensional modeling, and ETL best practices.
- Familiarity with defect tracking tools (e.g., JIRA, HP ALM).
- Strong problem-solving, debugging, and analytical skills.
Preferred Qualifications:
- Experience with cloud-based ETL platforms (AWS Glue, Azure Data Factory, Google Dataflow, Snowflake).
- Exposure to automation testing tools for ETL validation (e.g., QuerySurge, Python-based scripts); see the sketch after this list.
- Knowledge of big data tools (Hadoop, Spark) is a plus.
- Experience working in Agile/Scrum environments.
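For context on the Python-based ETL validation scripts mentioned above, here is a minimal, hedged sketch of a reconciliation check. The table names, the checksum column, and the SQLite backend are illustrative assumptions, not part of the posting; in practice the same checks would run against the actual source and target databases as part of the QA test suite.

```python
# Minimal ETL reconciliation sketch (hypothetical tables/columns): compares
# row counts and a column checksum between a staging table and the target
# table after an ETL load. A real project would point this at the actual
# warehouse instead of a local SQLite file.
import sqlite3

SOURCE_TABLE = "stg_orders"   # hypothetical staging table
TARGET_TABLE = "dw_orders"    # hypothetical target table
CHECKSUM_COL = "amount"       # hypothetical numeric column to reconcile

def reconcile(conn: sqlite3.Connection) -> list[str]:
    """Return a list of human-readable mismatches (empty list = pass)."""
    issues = []
    checks = {
        "row count": "SELECT COUNT(*) FROM {t}",
        "checksum": f"SELECT ROUND(SUM({CHECKSUM_COL}), 2) FROM {{t}}",
    }
    for metric, sql in checks.items():
        src = conn.execute(sql.format(t=SOURCE_TABLE)).fetchone()[0]
        tgt = conn.execute(sql.format(t=TARGET_TABLE)).fetchone()[0]
        if src != tgt:
            issues.append(f"{metric} mismatch: source={src}, target={tgt}")
    return issues

if __name__ == "__main__":
    conn = sqlite3.connect("warehouse.db")  # placeholder database file
    problems = reconcile(conn)
    if problems:
        raise SystemExit("ETL validation failed:\n" + "\n".join(problems))
    print("ETL validation passed: source and target are reconciled.")
```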
Lead ETL Process Developer
Posted today
Job Viewed
Job Description
Data Engineer - Lead ETL Process Development
The role of a Senior Data Engineer is pivotal in designing and maintaining efficient data pipelines to prepare large-scale enterprise data for analysis and storage.
Key Responsibilities:
- Develop, test, and maintain complex data pipelines for Extract, Transform, Load (ETL) processes.
- Collaborate with cross-functional teams to create user-friendly dashboards using Power BI or Tableau.
- Design and implement data governance, security, and logging strategies.
- Leverage programming languages like Python and SQL for data manipulation.
Required Skills & Experience:
- Prior experience working on large-scale data projects would be beneficial.
- Mastery of ETL/ELT pipeline development and management.
- Hands-on experience with data visualization tools like Power BI or Tableau.
- Strong understanding of data security, governance, and system logging.
- Familiarity with scripting languages and SQL for data manipulation.
- At least 2 years of experience in a data engineering or backend data role.
Data Engineer
Posted 2 days ago
Job Viewed
Job Description
- Design and develop scalable data pipelines using Azure Data Factory, Databricks, and Spark (a PySpark sketch follows this list).
- Ingest, transform, and store structured and unstructured data from various sources including REST APIs.
- Write efficient SQL queries for data extraction, transformation, and reporting.
- Develop and maintain data models in Azure Cosmos DB for high-performance access.
- Collaborate with data scientists, analysts, and stakeholders to understand data requirements.
- Implement data quality, monitoring, and validation processes.
- Optimize data workflows for performance, scalability, and cost efficiency in the cloud.
- Ensure data security, compliance, and governance in alignment with enterprise policies.
- Participate in code reviews, testing, and documentation of data solutions.
- Stay updated with emerging Azure and big data technologies to continuously improve data engineering practices.
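As a loose illustration of the pipeline work described above, here is a minimal PySpark batch-transform sketch. The storage paths, container names, and columns are assumptions for illustration only; in practice this kind of job would run on Databricks and be triggered by an Azure Data Factory pipeline.

```python
# Minimal PySpark batch-transform sketch; paths, container names, and columns
# are assumptions. Intended to run as a Databricks job or notebook.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_daily_load").getOrCreate()

# Ingest raw JSON landed by an upstream API-ingestion step (hypothetical path).
raw = spark.read.json(
    "abfss://raw@examplelake.dfs.core.windows.net/orders/2024-01-01/"
)

# Basic cleansing: drop malformed rows, standardise types, derive a partition
# column, and de-duplicate on the business key.
orders = (
    raw.dropna(subset=["order_id", "order_ts"])
       .withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("order_date", F.to_date("order_ts"))
       .withColumn("amount", F.col("amount").cast("double"))
       .dropDuplicates(["order_id"])
)

# Write curated data partitioned by date; Delta Lake is typical on Databricks,
# Parquet keeps the sketch engine-agnostic.
(orders.write
       .mode("overwrite")
       .partitionBy("order_date")
       .parquet("abfss://curated@examplelake.dfs.core.windows.net/orders/"))
```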
Data Engineer
Posted 2 days ago
Job Viewed
Job Description
About Sembcorp
Sembcorp is a leading energy and urban solutions provider headquartered in Singapore. Led by its purpose to drive energy transition, Sembcorp delivers sustainable energy solutions and urban developments by leveraging its sector expertise and global track record.
Purpose & Scope
We are seeking a highly skilled and self-driven Azure Data Engineer with expertise in PySpark, Python, and modern Azure data services including Synapse Analytics and Azure Data Explorer. The ideal candidate will design, develop, and maintain scalable data pipelines and architectures, enabling effective data management, analytics, and governance.
Key Roles & Responsibilities
- Design, develop, and maintain scalable and efficient data pipelines (both batch and real-time streaming) using modern data engineering tools.
- Build and manage data lakes, data warehouses, and data marts using Azure Data Services.
- Integrate data from various sources including APIs, structured/unstructured files, IoT devices, and real-time streams.
- Develop and optimize ETL/ELT workflows using tools such as Azure Data Factory, Databricks, and Apache Spark.
- Implement real-time data ingestion and processing using Azure Stream Analytics, Event Hubs, or Kafka.
- Ensure data quality, availability, and security across the entire data lifecycle.
- Collaborate with analysts, data scientists, and engineering teams to deliver business-aligned data solutions.
- Contribute to data governance efforts and ensure compliance with data privacy standards.
- Establish and manage source system connectivity (on-prem, APIs, sensors, etc.).
- Handle deployment and migration of data pipeline artifacts between environments using Azure DevOps.
- Design, develop, and troubleshoot PySpark scripts and orchestration pipelines (see the streaming sketch after this list).
- Perform data integration using database joins and other transformations aligned with project requirements.
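To illustrate the real-time ingestion and PySpark work referenced in the responsibilities above, a minimal Structured Streaming sketch follows. The broker address, topic, schema, and storage paths are assumptions; Azure Event Hubs can be consumed the same way through its Kafka-compatible endpoint, and the Spark Kafka connector must be available on the cluster.

```python
# Minimal PySpark Structured Streaming sketch for real-time ingestion
# (broker address, topic name, schema, and paths are assumptions).
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import (StructType, StructField, StringType,
                               DoubleType, TimestampType)

spark = SparkSession.builder.appName("sensor_stream").getOrCreate()

event_schema = StructType([
    StructField("device_id", StringType()),
    StructField("reading", DoubleType()),
    StructField("event_ts", TimestampType()),
])

# Read the raw stream; Kafka messages arrive as binary key/value pairs.
stream = (
    spark.readStream.format("kafka")
         .option("kafka.bootstrap.servers", "broker:9092")  # placeholder
         .option("subscribe", "sensor-readings")            # placeholder topic
         .load()
)

# Parse the JSON payload and aggregate per device over 5-minute windows.
readings = (
    stream.select(F.from_json(F.col("value").cast("string"), event_schema).alias("e"))
          .select("e.*")
          .withWatermark("event_ts", "10 minutes")
          .groupBy(F.window("event_ts", "5 minutes"), "device_id")
          .agg(F.avg("reading").alias("avg_reading"))
)

# Write aggregates to the data lake as a continuously appended sink.
query = (
    readings.writeStream.outputMode("append")
            .format("parquet")
            .option("path", "abfss://curated@examplelake.dfs.core.windows.net/sensor_agg/")
            .option("checkpointLocation", "abfss://curated@examplelake.dfs.core.windows.net/_chk/sensor_agg/")
            .start()
)
query.awaitTermination()
```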
Requirements:
- Bachelor’s Degree in Computer Science, Engineering, or related field
- 3–5 years of experience in Azure-based data engineering, PySpark, and Big Data technologies
- Strong hands-on experience with Azure Synapse Analytics for pipeline orchestration and data handling
- Expertise in SQL, data warehousing, data marts, and ingestion using PySpark and Python
- Solid experience building and maintaining cloud-based ETL/ELT pipelines, especially with Azure Data Factory or Synapse
- Familiarity with cloud data environments such as Azure and optionally AWS
- Experience with Azure DevOps for CI/CD and artifact deployment
- Excellent communication, problem-solving, and interpersonal skills
- 1–2 years of experience working with Azure Data Explorer (including row-level security and access controls).
- Experience with Azure Purview for metadata management, data lineage, governance, and discovery
- Ability to work independently and take full ownership of assignments
- Proactive in identifying and resolving blockers and escalating when needed
- Exposure to real-time processing with tools like Azure Stream Analytics or Kafka
At Sembcorp, our culture is shaped by a strong set of shared behaviours that guide the way we work and uphold our commitment to driving the energy transition.
We foster an institution-first mindset, where the success of Sembcorp takes precedence over individual interests. Collaboration is at the heart of what we do, as we work seamlessly across markets, businesses, and functions to achieve our goals together. Accountability is a core principle, ensuring that we take ownership of our commitments and deliver on them with integrity and excellence. These values define who we are and create a workplace where our people can thrive while making a meaningful impact on driving energy transition.
Join us in making a real impact!
- Seniority level: Mid-Senior level
- Employment type: Full-time
- Job function: Information Technology
- Industries: Utilities
Data Engineer
Posted 4 days ago
Job Viewed
Job Description
- Design, develop, and maintain data pipelines, ETL/ELT processes, and data integration workflows.
- Architect and optimize data lakes, data warehouses, and streaming platforms.
- Work with structured, semi-structured, and unstructured data at scale.
- Implement real-time and batch data processing solutions.
- Collaborate with Data Scientists, Analysts, and Business stakeholders to deliver high-quality data solutions.
- Ensure data security, lineage, governance, and compliance across platforms.
- Optimize queries, data models, and storage for performance and cost efficiency.
- Automate processes and adopt DevOps/DataOps practices for CI/CD in data engineering.
- Troubleshoot complex data-related issues and resolve production incidents.
- Mentor junior engineers and contribute to technical strategy and best practices.
Programming & Scripting
- Proficiency in Python, Scala, or Java for data engineering.
- Strong SQL skills (query optimization, tuning, advanced joins, window functions).
Big Data & Distributed Systems
- Expertise with Apache Spark, Hadoop, Hive, HBase, Flink, and Kafka.
- Hands-on with streaming frameworks (Kafka Streams, Spark Streaming, Flink).
Cloud & Data Platforms
- Deep knowledge of AWS (Redshift, Glue, EMR, Athena, S3, Kinesis), Azure (Synapse, Data Factory, Databricks, ADLS), or GCP (BigQuery, Dataflow, Pub/Sub, Dataproc).
- Experience with Snowflake, Databricks, or Teradata.
ETL/ELT & Orchestration
- Strong experience with Airflow, Luigi, Azkaban, or Prefect (a minimal Airflow DAG is sketched after this list).
- ETL tools such as Informatica, Talend, or SSIS.
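As referenced above, here is a minimal Airflow DAG sketch of the kind of ELT orchestration this section describes. The DAG id, schedule, and placeholder callables are assumptions, and Airflow 2.4+ syntax is assumed; it is a sketch, not any specific employer's setup.

```python
# Minimal Airflow DAG sketch for ELT orchestration (Airflow 2.4+ assumed;
# the DAG id, schedule, and callables below are placeholders).
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract(**context):
    print("pull increments from source systems")         # placeholder logic


def load(**context):
    print("load staged files into the warehouse")        # placeholder logic


def validate(**context):
    print("run row-count and checksum reconciliation")   # placeholder logic


with DAG(
    dag_id="daily_elt",
    start_date=datetime(2024, 1, 1),
    schedule="0 2 * * *",   # run daily at 02:00
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_load = PythonOperator(task_id="load", python_callable=load)
    t_validate = PythonOperator(task_id="validate", python_callable=validate)

    # Simple linear dependency: extract, then load, then validate.
    t_extract >> t_load >> t_validate
```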
Data Modeling & Storage
- Experience with Data Lakes, Data Warehouses, and Lakehouse architectures.
- Knowledge of Star Schema, Snowflake Schema, and Normalization/Denormalization.
DevOps & Automation
- Proficiency in CI/CD (Jenkins, GitLab, Azure DevOps) for data pipelines.
- Experience with Docker, Kubernetes, Terraform, Ansible for infrastructure automation.
Other Technical Skills
- Strong knowledge of Data Governance, MDM, Data Quality, and Metadata Management.
- Familiarity with Graph Databases (Neo4j) and Time-Series Databases (InfluxDB, TimescaleDB).
- Understanding of machine learning data pipelines (feature engineering, model serving).
- Bachelor’s/Master’s degree in Computer Science, Data Engineering, or related field.
- 7–10 years of experience in data engineering or big data development.
- At least 2–3 large-scale end-to-end data platform implementations.
- Preferred Certifications:
AWS Certified Data Analytics – Specialty
Google Professional Data Engineer
Databricks Certified Data Engineer
Data Engineer
Posted 4 days ago
Job Viewed
Job Description
About Innowave Tech Singapore
Innowave Tech is an Artificial Intelligence (AI) company offering solutions for the Semiconductor and Advanced Manufacturing industry. Utilizing deep industrial domain knowledge, proven experience, and innovation, we provide expert AI solutions and systems to address various industry pain points.
Roles & Responsibilities
We are seeking a Data Engineer to establish and lead our data infrastructure. The successful candidate will be responsible for building our data engineering practice from the ground up, implementing robust data systems for industrial AI applications, and establishing best practices that will power our semiconductor manufacturing AI solutions.
Your Role and Impact
As our first Data Engineer, you will have a foundational role in building robust data infrastructure to handle manufacturing data and LLM applications, while establishing secure data practices that power our AI solutions for advanced manufacturing operations.
What You’ll Do
- Select and manage on-premises technologies suitable for secure and efficient operations.
- Build robust pipelines to collect, clean, and transform diverse datasets including process data, sensor data, image data, and human annotations.
- Ensure secure, maintainable, and scalable deployment of data infrastructure.
- Define and enforce best practices in data governance, privacy, and access control.
- Collaboration & Deployment.
What We’re Looking For
Educational Background:
Minimum Polytechnic Diploma or Bachelor's Degree in Computer Science, Engineering, or a related field.
Technical Expertise:
- 3+ years of experience in data engineering roles, ideally with on-premises or hybrid infrastructure.
- Proven track record of building scalable data systems from the ground up in a startup environment.
- Proficiency in Python and/or Java for data pipeline development.
- Solid experience with ETL frameworks (e.g., Apache Airflow, Dagster) and streaming systems (e.g., Kafka).
- Experience designing and maintaining SQL and NoSQL databases.
- Experience building and operating data lakes and data catalogs.
- Familiarity with containerization (Docker), version control (Git), and CI/CD practices.
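As a small illustration of how data-quality checks can be wired into the CI/CD practices listed above, the following pytest-style sketch tests a hypothetical cleaning transform before deployment. The clean_readings() function and its expected behaviour are assumptions, not part of this posting.

```python
# Minimal pytest sketch of a data-quality check run in CI (the transform
# and its expectations are hypothetical).
import math


def clean_readings(rows: list[dict]) -> list[dict]:
    """Hypothetical transform: drop rows without a device_id or with NaN values."""
    return [
        r for r in rows
        if r.get("device_id") and not math.isnan(r.get("value", float("nan")))
    ]


def test_drops_rows_missing_device_id():
    rows = [{"device_id": "A1", "value": 1.0}, {"device_id": None, "value": 2.0}]
    assert clean_readings(rows) == [{"device_id": "A1", "value": 1.0}]


def test_drops_nan_values():
    rows = [{"device_id": "A1", "value": float("nan")}]
    assert clean_readings(rows) == []
```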
Soft Skills:
- Excellent communication skills and ability to collaborate with cross-functional technical and non-technical teams.
- Excellent problem-solving and debugging abilities.
- Ability to balance engineering tradeoffs.
Bonus Skills:
- Experience with manufacturing data systems, especially SPC, SCADA, and industrial sensor protocols (e.g., OPC UA, MQTT, Modbus).
- Familiarity with AI/ML pipelines and tools (e.g., MLflow).
- Knowledge in vector databases and LLM data infrastructure.
- Prior experience working in or with regulated industries (e.g., semiconductor, automotive, aerospace).
* Only Singapore Citizens and Permanent Residents (PRs) are accepted for this position due to project requirements.
What We Offer
- A leading role in cutting-edge AI projects within the semiconductor industry.
- The opportunity to work with and learn from experts in the field of AI and data science.
- A dynamic, innovative, and supportive work environment.
- Competitive salary and benefits package.
- Career growth opportunities in a fast-paced technology company.
Data Engineer
Posted 4 days ago
Job Viewed
Job Description
Southern Ridges Capital
Southern Ridges Capital is an investment firm managing fixed income, currency assets, and derivatives by employing discretionary macro and relative value investment strategies.
We believe in cultivating a strong, collaborative culture where people are empowered to learn, grow, and do their best work. We are committed to mentoring early-career professionals and giving them the tools, exposure, and guidance to succeed in a dynamic and intellectually challenging environment.
What You’ll Do
As a Data Engineer, you’ll work closely with our investment team to support the development and maintenance of robust data systems that drive investment research and decision-making.
This is a hands-on role ideal for someone with a strong interest in data engineering, financial markets, and real-world applications of data-driven tools.
Your key responsibilities will include:
- Build and maintain scalable data pipelines and infrastructure to ingest, process, and store structured and unstructured data from various sources
- Prepare data for analysis or operational use
- Ensure data integrity, consistency, and quality by implementing monitoring, validation, and error-handling frameworks
- Collaborate closely with data analysts, researchers, and portfolio managers to understand data needs and deliver fit-for-purpose solutions
What You’ll Gain
- Hands-on Experience: Exposure to live trading environments and real-world investment problems from day one
- Mentorship & Learning: Work closely with seasoned professionals who will support your growth through feedback and collaboration
- Professional Development: Build skills in coding, data analysis, and financial market research
- Impact: See your work contribute directly to the investment process and help drive results
What We’re Looking For
Technical Skills
- A minimum degree with a major in data science, statistics, mathematics, physics, engineering, or a related field
- Proficient in Python and/or R
- Willingness to learn and apply new technologies in a practical, real-world setting
Helpful Experience
- Exposure to macroeconomic or financial data tools like Bloomberg, CEIC, Haver, or Macrobond
- Interest in financial markets and macroeconomic trends
- Basic understanding of options, derivatives, or econometrics
Soft Skills
- Strong attention to detail and a structured, logical approach to problem-solving
- Eagerness to learn and improve your technical and domain knowledge
- Good written and verbal communication skills.
- Team-oriented mindset with a positive attitude and initiative
- High standards of integrity and professionalism
Why Join Us
At Southern Ridges Capital, you will work closely with the investment team and have the opportunity to contribute to real investment decisions and see your impact. You will gain broader experience and a better understanding of the investment world by working in a smaller, more collaborative setup like ours.
Data Engineer
Posted 4 days ago
Job Viewed
Job Description
Job Scope:
a) Perform the impact analysis for upstream and downstream changes affecting DataMart and reporting systems
b) Design, develop, and deploy programs, source code, batch scripts, complex SQL stored procedures, functions and triggers, SSIS packages, and SSRS reports.
c) Follow the organizational SDLC processes to deliver projects or enhancement requests; create/update SDLC documents including functional and non-functional specifications, technical design documents, test plans, test cases, release procedures, system operational documents, user manuals, etc.
d) Review technical deliverables from third-party vendors, including functional and design specifications, programs/scripts, test results, release runbooks, and system operational manuals.
e) Investigate and troubleshoot production issues and system problems escalated by the IT operations team or business users: identify the root cause, provide a workaround to rectify the issue, and work out a long-term solution to fix it permanently.
f) Take part in regular maintenance activities, e.g., the yearly DR drill and quarterly support for server or platform software patching.
g) Provide ad-hoc support for other IT service requests, e.g., data extraction, data alteration, extracting system logic, and answering users’ inquiries about the data/logic in the system.
h) Prepare artefacts in accordance with SMBC’s system lifecycle framework and documentation.
i) Manage, maintain, and support applications and the corresponding operating environments, focusing on stability, quality, and functionality against service level expectations.
j) Evaluate the current system state, identify aspects that could be improved, and recommend changes to achieve the improvement.
k) Coordinate with IT teams on system environment setup and maintenance
l) Coordinate production release and provide post implementation support
m) Provide on-call support and afterhours/weekend support as needed to cover application support and change deployment
n) Highlight or escalate risk and issues to relevant parties in a timely manner.
o) Identify customers’ needs and provide value-added solutions to them.
Data Engineer
Posted 5 days ago
Job Viewed
Job Description
Design, build, and maintain robust data pipelines (ETL/ELT) to ingest, process, and transform large-scale log data for the detection of unauthorized privileged access; see the sketch after these responsibilities.
Develop and manage scalable data warehousing solutions that support advanced analytics, AI models, and security monitoring use cases.
Operationalize machine learning models by deploying them into production environments, ensuring reliability, scalability, and seamless integration with existing systems.
Implement strong data governance practices to ensure data quality, consistency, and integrity across multiple sources and platforms.
Collaborate cross-functionally with data scientists, software engineers, and cybersecurity teams to deliver secure, data-driven features and enhance detection capabilities.
Establish and enforce best practices for data engineering and DevOps processes, including CI/CD pipelines, monitoring, and automation.
Provide technical leadership and mentorship by guiding junior engineers, fostering capability building, and driving innovation in data-driven security solutions.
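As a hedged illustration of the log-processing work described in these responsibilities, the sketch below flags privileged actions performed by users outside an allowlist. The log layout, action names, and allowlist are assumptions for illustration only; a production pipeline would read from the actual log platform and route alerts to the security monitoring stack.

```python
# Minimal sketch of flagging potentially unauthorized privileged access from
# log records (log layout, privileged-action names, and the admin allowlist
# are all assumptions).
import json

PRIVILEGED_ACTIONS = {"sudo", "role_grant", "config_change"}   # hypothetical
AUTHORISED_ADMINS = {"alice", "ops_svc"}                        # hypothetical


def flag_unauthorized(log_lines: list[str]) -> list[dict]:
    """Return log events where a privileged action was taken by a non-admin."""
    alerts = []
    for line in log_lines:
        try:
            event = json.loads(line)
        except json.JSONDecodeError:
            continue  # malformed lines would go to a dead-letter path in practice
        if (event.get("action") in PRIVILEGED_ACTIONS
                and event.get("user") not in AUTHORISED_ADMINS):
            alerts.append(event)
    return alerts


if __name__ == "__main__":
    sample = [
        '{"user": "alice", "action": "sudo", "host": "db01"}',
        '{"user": "mallory", "action": "role_grant", "host": "db01"}',
    ]
    for alert in flag_unauthorized(sample):
        print("ALERT:", alert)
```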
Requirements
Professional Experience: 1–3 years for Junior roles and 3–7 years for Senior roles in data engineering or a closely related field, with proven hands-on project exposure.
Strong proficiency in Python for data processing, pipeline development, and integration tasks, with good coding practices and debugging skills.
Experience working with SQL, NoSQL, and Graph databases, including schema design, query optimization, and managing large-scale data storage solutions.
Familiarity with cloud platforms such as GCP, AWS Bedrock, and Splunk, with the ability to leverage their services for data engineering and security use cases.
Hands-on experience with CI/CD setups, including version control (Git), automated testing, and deployment pipelines for scalable data solutions.
Solid foundation in algorithms, data structures, and integration strategies, with the ability to design scalable, efficient, and resilient data pipelines.
Exposure to ML/AI solution deployment, API development, and event-driven architectures (e.g., Kafka, SQS), along with an understanding of graph data structures for complex relationships, would be an added advantage.
This is a 1-year contract position under People Advantage (Certis Group). We appreciate your application and regret that only shortlisted candidates will be notified.
By submitting your resume, you consent to the handling of your personal data in accordance with the Certis Group Privacy Policy.
EA Personnel Name: Siti Khatijah
EA Personnel No: R22111204
EA License No: 11C3955