1,012 Senior Data Engineer jobs in Singapore
Data Engineer
Posted today
Job Description
Key Qualifications:
- 5-7 years of experience
- Extensive experience in reporting, data visualisation, data mining, data integration and ad hoc analysis
- Strong Snowflake & Tableau expertise is a must
- Familiarity with a Supply Chain/Operations environment preferred
- Excellent data analysis and presentation skills
- Excellent attention-to-detail, ability to compile and validate large amounts of data while maintaining a very high degree of accuracy
- Excellent communication and comprehension skills
- Ability to operate in a fast-paced, rapidly changing environment
- Business acumen and the ability to rapidly understand complex business processes
- Excellent problem solving skills: ability to analyze and resolve complex problems in a structured and logical manner
- Excellent/Advanced Excel skills
- Ability to comprehensively understand data elements, sources and relationships
- Self-motivated individual able to function effectively when working independently or in a team
- Strong Project Management experience
Job Type: Contract
Contract length: 12 months
Pay: $3, $12,170.38 per month
Benefits:
- Health insurance
Experience:
- Snowflake: 1 year (Required)
- SQL: 1 year (Required)
- Tableau: 1 year (Required)
Data Engineer
Posted today
Job Description
- Work across workstreams to support data requirements including reports and dashboards.
- Analyze and perform data profiling to understand data patterns and discrepancies following Data Quality and Data Management processes.
- Understand and follow best practices to design and develop the E2E Data Pipeline: data transformation, ingestion, processing, and surfacing of data for large-scale applications.
- Develop data pipeline automation using the Azure and AWS data platforms and technology stacks, including Databricks and Data Factory.
- Understand business requirements to translate them into technical requirements that the system analysts and other technical team members can drive into the project design and delivery.
- Analyze source data and perform data ingestion in both batch and real-time patterns via various methods; for example, file transfer, API, Data Streaming using Kafka and Spark Streaming.
- Analyze and understand data processing and standardization requirements, develop ETL using Spark processing to transform data.
- Understand data/reports and dashboards requirements, develop data export, data API, or data visualization using Power BI, Tableau, or other visualization tools.
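The data-profiling responsibility above can be sketched in plain Python. This is a minimal illustration only; in practice profiling would run in Spark or a dedicated tool, and the `order_id`/`country` fields are hypothetical:

```python
from collections import Counter

def profile(records, columns):
    """Per-column profile: null count, distinct count, most common value."""
    report = {}
    for col in columns:
        values = [r.get(col) for r in records]
        non_null = [v for v in values if v is not None]
        top = Counter(non_null).most_common(1)
        report[col] = {
            "nulls": len(values) - len(non_null),
            "distinct": len(set(non_null)),
            "top_value": top[0][0] if top else None,
        }
    return report

# Hypothetical sample rows with one missing country value.
rows = [
    {"order_id": 1, "country": "SG"},
    {"order_id": 2, "country": "SG"},
    {"order_id": 3, "country": None},
]
report = profile(rows, ["country"])
```

A report like `{"country": {"nulls": 1, "distinct": 1, "top_value": "SG"}}` is the kind of evidence used to flag discrepancies before pipeline design.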
Requirements:
- Bachelor's degree in Computer Science, Computer Engineering, IT, or related fields.
- Minimum of 4 years' experience in Data Engineering fields.
- Data Engineering skills: Python, SQL, Spark, cloud architecture, data & solution architecture, APIs, Databricks, Azure, AWS.
- Data Visualization skills: Power BI (or other visualization tools), DAX programming, APIs, data modeling, SQL, storytelling, and wireframe design.
- Business Analyst skills: business knowledge, data profiling, basic data model design, data analysis, requirement analysis, SQL programming.
- Basic knowledge of Data Lake/Data Warehousing/Big Data tools, Apache Spark, RDBMS and NoSQL, Knowledge Graph.
- Experience working in the Singapore public sector or a client-facing/consulting environment is a plus.
- Team player, analytical and problem-solving skills.
Data Engineer
Posted today
Job Description
Experience range is 8-10 years
- Skills / Tech Knowledge: AWS Glue (batch & streaming), AWS Lambda, Python/PySpark, API integration, job orchestration (Step Functions/Airflow), OpenSearch/Kibana troubleshooting.
- Role Fit: Handle complex pipeline builds, integrate new APIs/data sources, lead enhancements, mentor junior engineers.
Data Engineer
Posted today
Job Description
• 8+ years' experience working with data migration, transformation, processing & warehouse design, specifically on Databricks (Unity Catalog, Delta Lake) / Azure Data Factory / Azure SQL
• Expertise in developing efficient Azure Data Factory Pipelines and Databricks Notebooks (PySpark/Spark SQL)
• Proficient in PySpark, Spark SQL, T-SQL
• Understanding of data security principles and practices
• Strong analytical and problem-solving skills
• Strong communication and collaboration skills
Must-have
• Databricks
• Azure Data Factory
Data Engineer
Posted today
Job Description
Job Title: Data Engineer
Job Overview:
We are seeking an experienced Data Engineer to join our Data team in Singapore. The ideal candidate will have a proven track record of hands-on data engineering experience, particularly within the AWS and Azure ecosystems. As a Data Engineer, you will be responsible for developing and maintaining data pipelines, ensuring the reliability, efficiency, and scalability of our data lake, and enabling data marts for AI models.
Responsibilities:
Develop robust ETL pipelines and frameworks for both batch and real-time data processing using Python and SQL.
Deploy and monitor ETL pipelines using orchestration tools such as Airflow or dbt, or AWS services such as Glue Workflows, Step Functions, and EventBridge.
Work with cloud-based data platforms like Redshift and Snowflake, data ingestion tools like DMS, and ELT tools like dbt Cloud for effective data processing.
Work with Azure Data Factory for building data pipelines.
Implement CI/CD for ETL pipelines to automate builds and deployments.
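Orchestration tools such as Airflow and Step Functions, mentioned above, essentially run tasks in dependency order with state passed between them. A stdlib-only sketch of that idea (task names and payloads are hypothetical, not an Airflow API):

```python
from graphlib import TopologicalSorter

def run_pipeline(tasks, deps):
    """Execute callables in topological (dependency) order, passing
    earlier results forward -- the core of what an orchestrator
    automates (minus retries, scheduling, and alerting)."""
    results = {}
    order = list(TopologicalSorter(deps).static_order())
    for name in order:
        results[name] = tasks[name](results)
    return order, results

tasks = {
    "extract":   lambda r: [1, 2, 3],                       # pull raw rows
    "transform": lambda r: [x * 10 for x in r["extract"]],  # scale values
    "load":      lambda r: len(r["transform"]),             # rows written
}
deps = {"transform": {"extract"}, "load": {"transform"}}
order, results = run_pipeline(tasks, deps)
```

For this linear chain the only valid order is extract, then transform, then load; a real orchestrator adds scheduling, retries, and observability on top of the same graph.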
Qualifications:
Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
3+ years of hands-on data engineering experience in AWS.
Should have delivered at least 2 programs into production as a data engineer.
Primary Skills:
Proficient in Python, SQL, and data warehousing concepts.
Able to develop ETL frameworks.
Proficient in AWS services such as S3, DMS, Redshift, Glue, Kinesis, Athena, Lambda, and Step Functions to implement scalable data solutions.
Proficient in Azure Data Factory.
Working experience in data warehousing using Snowflake, AWS, or Databricks.
Understanding of data marts as the presentation layer for reporting.
Good-to-Have Skills:
ETL development using tools like Informatica, Talend, or Fivetran.
CI/CD setup using GitHub or Bitbucket.
Good communication skills.
Good knowledge of data lake and data warehousing concepts.
Data Engineer
Posted today
Job Description
Data Engineer
Location: Central, Singapore
Type: 1-year contract
Salary: Up to SGD 10,500/month
About the Role
We're seeking a Data Engineer to design and deliver reliable, scalable data pipelines that power reports, dashboards, and data products across multiple workstreams. You'll partner with business, system analysts, and engineering teams to translate requirements into technical designs, automate pipelines on Azure/AWS/Databricks, and surface trusted data for analytics and operational use.
Key Responsibilities
- Partner across workstreams to understand data needs and support reports/dashboards.
- Perform data profiling and analysis to identify patterns, gaps, and discrepancies; apply Data Quality & Data Management practices.
- Design and develop end-to-end data pipelines (ingestion, transformation, processing, serving) for large-scale applications.
- Build pipeline automation using Databricks and Azure Data Factory (with exposure to AWS equivalents).
- Translate business requirements into technical specs and delivery plans with system analysts and developers.
- Implement batch and real-time ingestion (file transfer, APIs, streaming via Kafka and Spark Streaming).
- Develop ETL/ELT using Spark to standardize and transform data for downstream use.
- Enable data access via exports, data APIs, or visualization layers (e.g., Power BI, Tableau).
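Batch ingestion as listed above is commonly implemented as an incremental, high-watermark pull (streaming instead processes records as they arrive). A sketch of the batch side, assuming, hypothetically, that source rows carry an `updated_at` value:

```python
def incremental_load(source_rows, watermark):
    """Batch pattern: take only rows newer than the saved watermark,
    then advance the watermark for the next run."""
    new_rows = [r for r in source_rows if r["updated_at"] > watermark]
    new_watermark = max((r["updated_at"] for r in new_rows), default=watermark)
    return new_rows, new_watermark

# Illustrative source table snapshot.
source = [
    {"id": 1, "updated_at": 100},
    {"id": 2, "updated_at": 205},
    {"id": 3, "updated_at": 310},
]
batch, wm = incremental_load(source, watermark=200)   # picks up ids 2 and 3
rerun, wm2 = incremental_load(source, watermark=wm)   # nothing new on rerun
```

Persisting the watermark between runs is what makes the pull idempotent; reruns without new data return nothing.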
Qualifications
- Bachelor's degree in Computer Science, Computer Engineering, IT, or related field.
- ≥ 4 years of experience in Data Engineering.
- Technical skills: Python, SQL, Spark, Databricks, Azure, AWS, APIs; exposure to data/solution architecture concepts.
- Visualization skills: Power BI (DAX, data modeling), or equivalent tools; ability to craft clear narratives and wireframes.
- Business analysis fundamentals: data profiling, basic data model design, requirement analysis, SQL.
- Foundations in Data Lake/Warehouse/Big Data, Apache Spark, RDBMS & NoSQL, and Knowledge Graph concepts.
- Plus: Experience in the Singapore public sector and/or client-facing/consulting environments.
- Strong teamwork, communication, and problem-solving skills.
How to Apply: Interested applicants, please click on the "Apply Now" to submit your updated resume.
Please note: Due to the anticipated high volume of applications, only shortlisted candidates will be contacted. All information provided will be treated with strict confidentiality and used solely for recruitment purposes.
Ahmad Ilyas bin Azhari
Consultant – IT & Digital
EA Personnel No: R
Peoplebank Singapore Pte Ltd | EA Licence No: 08C5248
Data Engineer
Posted today
Job Description
We are looking for a skilled Data Engineer to design, build, and maintain robust data systems that power analytics and decision-making across our organization. The ideal candidate will work across multiple workstreams, supporting data requirements through reports, dashboards, and end-to-end data pipeline development. You will collaborate with business and technical teams to translate requirements into scalable, data-driven solutions.
Key Responsibilities
- Work collaboratively across teams to support data needs including reports, dashboards, and analytics.
- Conduct data profiling and analysis to identify patterns, discrepancies, and quality issues in alignment with Data Quality and Data Management standards.
- Design and develop end-to-end (E2E) data pipelines for data ingestion, transformation, processing, and surfacing in large-scale systems.
- Automate data pipeline processes using Azure, AWS, Databricks, and Data Factory technologies.
- Translate business requirements into detailed technical specifications for analysts and developers.
- Perform data ingestion in both batch and real-time modes using methods such as file transfer, API, and data streaming (Kafka, Spark Streaming).
- Develop ETL pipelines using Spark for data transformation and standardization.
- Deliver data outputs via APIs, data exports, or visualization dashboards using tools like Power BI or Tableau.
Qualifications
- Bachelor's degree in Computer Science, Computer Engineering, Information Technology, or a related field.
- Minimum 4 years of experience in Data Engineering or related roles.
- Strong technical expertise in:
Python, SQL, Spark, Databricks, Azure, AWS
Cloud & Data Architecture, APIs, and ETL pipelines
- Proficiency in data visualization tools such as Power BI (preferred) or Tableau, including DAX programming, data modeling, and storytelling.
- Understanding of Data Lakes, Data Warehousing, Big Data frameworks, RDBMS, NoSQL, and Knowledge Graphs.
- Familiarity with business analysis, data profiling, data modeling, and requirement analysis.
- Experience working in Singapore public sector, consulting, or client-facing environments is advantageous.
- Excellent analytical, communication, and problem-solving skills with a collaborative mindset.
Good-to-Have Skills
- Experience with real-time data streaming (Kafka, Spark Streaming).
- Understanding of data governance, data quality frameworks, and metadata management.
- Hands-on experience with automation and CI/CD for data workflows.
Data Engineer
Posted today
Job Description
Zenith Infotech (S) Pte Ltd. was started in 1997, primarily with the vision of offering state-of-the-art IT professionals and solutions to various organizations, thereby helping them increase their productivity and competitiveness. From deployment of a single person to the formation of whole IT teams, Zenith Infotech has helped clients with their staff augmentation needs. Zenith offers opportunities to be engaged in long-term projects with large IT-savvy companies, consulting organizations, system integrators, government, and MNCs.
EA Licence No: 20S0237
Roles and Responsibilities:
Work across workstreams to support data requirements including reports and dashboards
Analyze and perform data profiling to understand data patterns and discrepancies following Data Quality and Data Management processes
Understand and follow best practices to design and develop the E2E Data Pipeline: data transformation, ingestion, processing, and surfacing of data for large-scale applications
Develop data pipeline automation using Azure, AWS data platform and technologies stack, Databricks, Data Factory
Understand business requirements to translate them into technical requirements that the system analysts and other technical team members can drive into the project design and delivery
Analyze source data and perform data ingestion in both batch and real-time patterns via various methods; for example, file transfer, API, Data Streaming using Kafka and Spark Streaming
Analyze and understand data processing and standardization requirements, develop ETL using Spark processing to transform data
Understand data/reports and dashboards requirements, develop data export, data API, or data visualization using Power BI, Tableau, or other visualization tools
Required Skills:
We are looking for experience and qualifications in the following:
- Bachelor's degree in Computer Science, Computer Engineering, IT, or related fields
- Minimum of 4 years' experience in Data Engineering fields
- Data Engineering skills: Python, SQL, Spark, cloud architecture, data & solution architecture, APIs, Databricks, Azure, AWS
- Data Visualization skills: Power BI (or other visualization tools), DAX programming, APIs, data modeling, SQL, storytelling, and wireframe design
- Business Analyst skills: business knowledge, data profiling, basic data model design, data analysis, requirement analysis, SQL programming
- Basic knowledge of Data Lake/Data Warehousing/Big Data tools, Apache Spark, RDBMS and NoSQL, Knowledge Graph
Only shortlisted applicants will be contacted. By submitting your application, you acknowledge and agree that your personal data will be collected, used, and retained in accordance with our Privacy Policy. This information will be used solely for recruitment and employment purposes.
Data Engineer
Posted today
Job Description
About ST Engineering
ST Engineering is a global technology, defence, and engineering group with offices across Asia, Europe, the Middle East, and the U.S., serving customers in more than 100 countries. The Group uses technology and innovation to solve real-world problems and improve lives through its diverse portfolio of businesses across the aerospace, smart city, defence, and public security segments. Headquartered in Singapore, ST Engineering ranks among the largest companies listed on the Singapore Exchange.
Our history spans more than 50 years, and our strategy is underpinned by our core values – Integrity, Value Creation, Courage, Commitment and Compassion. These 5 core values guide every aspect of our business and are embedded in our ST Engineering culture – from the people we hire, to working with each other, to our partners and customers.
About our Line of Business – Geo-Insights
ST Engineering Geo-Insights Pte Ltd is a data analytics company focused on generating insights from geospatial data for maritime, agriculture and sustainability applications. We are looking for team members who are passionate about implementing technological solutions to solve real world problems and delivering social, economic and environmental value. You can look forward to a growth-oriented career in a supportive and fast-paced environment.
Together, We Can Make A Significant Impact
We are looking to add a dedicated and talented Data Engineer to our team. You will work closely with other engineers, designers, and project managers to deliver high-quality Geospatial Analytics solutions that meet our clients' needs.
Be Part of Our Success
- Use big data tools and platforms to create and maintain data pipelines, making sure pipelines are robust, scalable, and reliable. Troubleshoot and rectify issues with data pipelines as necessary.
- Design and manage geodatabases, feature classes, and datasets, ensuring data integrity and efficiency.
- Collaborate with cross-functional teams to understand data requirements and develop data processing workflows accordingly.
- Stay updated on emerging trends and best practices in data management.
- Publish map services, feature services, and geoprocessing services, configuring security and performance settings.
- Support technical innovation within the team by providing coding assistance and creative solutions to meet project goals.
- Design and implement advanced analytics solutions that handle high-volume streaming and batch datasets efficiently.
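The geodatabase and map-service work above revolves around spatial predicates over feature data. A plain-Python sketch of the simplest such predicate, a bounding-box filter (a real geodatabase evaluates this as an indexed spatial query; the vessel positions are invented for illustration):

```python
def within_bbox(features, min_lon, min_lat, max_lon, max_lat):
    """Keep point features whose coordinates fall inside the box."""
    return [
        f for f in features
        if min_lon <= f["lon"] <= max_lon and min_lat <= f["lat"] <= max_lat
    ]

# Hypothetical vessel positions; the box roughly covers Singapore waters.
vessels = [
    {"id": "A", "lon": 103.85, "lat": 1.29},
    {"id": "B", "lon": 100.50, "lat": 13.75},
]
hits = within_bbox(vessels, 103.6, 1.1, 104.1, 1.5)
```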
Qualities We Value
- Proficient in data pipeline tools such as Airflow, Kafka, and MLflow.
- Proficiency in machine learning and deep learning technologies is a plus.
- Knowledge of Flask, NodeJS, or other app frameworks, and containerisation, for quick prototyping.
- Applicants should be motivated, proactive, able to work independently, and results-oriented.
- Singaporean only
Our Commitment That Goes Beyond the Norm
- An environment where you will be working on cutting-edge technologies and architectures.
- Safe space where diverse perspectives are valued, and everyone's unique contributions are celebrated.
- Meaningful work and projects that make a difference in people's lives.
- A fun, passionate and collaborative workplace.
- Competitive remuneration and comprehensive benefits.
Working Location: Ang Mo Kio
Data Engineer
Posted today
Job Description
What You'll Be Doing
Design, build, and maintain scalable, reliable data pipelines to support diverse business and AI applications
Take ownership of pipeline health. Proactively monitor performance, debug issues, and improve data quality
Collaborate with cross-functional teams to translate data needs into technical solutions
Implement data pipeline orchestration using tools such as Apache Airflow or AWS Step Functions
Optimise data storage and retrieval across RDBMS (Postgres, MSSQL), object stores (MinIO/S3), and graph databases (Neo4j)
Design data schemas (normalized, denormalized, star-schema) and enforce proper access and backup protocols
Support analytics, product development, and AI POCs with clean, well-managed data structures
Apply scripting (Python) and open-source tools to automate data workflows and ETL processes
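The schema-design responsibility above distinguishes normalized, denormalized, and star schemas. A minimal star schema, one fact table joined to its dimensions, can be sketched in-memory with SQLite (table and column names are illustrative only):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE dim_date    (date_id INTEGER PRIMARY KEY, day TEXT);
    CREATE TABLE dim_product (product_id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE fact_sales  (
        date_id    INTEGER REFERENCES dim_date(date_id),
        product_id INTEGER REFERENCES dim_product(product_id),
        amount     REAL
    );
    INSERT INTO dim_date    VALUES (1, '2024-01-01'), (2, '2024-01-02');
    INSERT INTO dim_product VALUES (10, 'widget');
    INSERT INTO fact_sales  VALUES (1, 10, 5.0), (2, 10, 7.5);
""")

# Typical star-schema query: aggregate the fact, label via a dimension.
total = con.execute("""
    SELECT p.name, SUM(f.amount)
    FROM fact_sales f
    JOIN dim_product p USING (product_id)
    GROUP BY p.name
""").fetchone()
```

The star shape keeps facts narrow (keys plus measures) and pushes descriptive attributes into the dimensions, which is what makes such aggregations cheap.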
What You Should Know or Be Eager to Learn
Preferred Technical Skills (not mandatory)
Python for scripting and data manipulation
Database technologies: PostgreSQL, MongoDB, Neo4j
Cloud and storage systems: MinIO, AWS S3, AWS tech stack (preferred)
Data orchestration tools: Apache Airflow, AWS Step Functions
Familiarity with BI tools like Tableau
Comfort working in both Windows and Linux environments
Basic understanding of networks, OS commands, and system-level data operations
Mindset We Look For
Strong problem-solving mindset with attention to detail
Customer-first attitude. Designing with real users and business needs in mind
Flexibility to adapt tools and approaches based on the use case
Eagerness to learn and grow in a high-performance tech environment
All selected candidates will undergo an 11-week, fully sponsored on-the-job training with an allowance, before being deployed full-time for 3 years.