1,034 Data Engineer jobs in Singapore
Big Data Engineer
Posted today
Job Description
We are seeking a highly skilled and experienced Big Data Engineer to join our team. The ideal candidate will have a minimum of 5 years of experience managing data engineering jobs in a big data environment (e.g., Cloudera Data Platform). The successful candidate will be responsible for designing, developing, and maintaining data ingestion and processing jobs, and will also integrate data sets to provide seamless data access to users.
SKILLS SET AND TRACK RECORD
- Good understanding and completion of projects using Waterfall/Agile methodology.
- Analytical, conceptualisation, and problem-solving skills.
- Good understanding of analytics and data warehouse implementations.
- Hands-on experience in big data engineering jobs using Python, PySpark, Linux, and ETL tools such as Informatica.
- Strong SQL and data analysis skills; hands-on experience in data virtualisation tools such as Denodo will be an added advantage.
- Hands-on experience in a reporting or visualisation tool such as SAP BO or Tableau is preferred.
- Track record in implementing systems using Cloudera Data Platform will be an added advantage.
- Motivated and self-driven, with the ability to learn new concepts and tools in a short period of time.
- Passion for automation, standardization, and best practices.
- Good presentation skills are preferred.
The developer is responsible for the following:
- Analyse the Client's data needs and document the requirements.
- Refine data collection/consumption by migrating data collection to more efficient channels
- Plan, design and implement data engineering jobs and reporting solutions to meet the analytical needs.
- Develop test plan and scripts for system testing, support user acceptance testing.
- Work with the Client technical teams to ensure smooth deployment and adoption of new solution.
- Ensure the smooth operations and service level of IT solutions.
- Support production issues
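The ingestion-and-processing work described above can be sketched in miniature. The snippet below is an illustrative assumption only (hypothetical file contents, table name, and schema); in the environment this role describes, the same shape of job would typically be written in PySpark or an ETL tool such as Informatica rather than plain Python:

```python
import csv
import io
import sqlite3

# Hypothetical raw feed: one malformed row that the job must reject.
RAW_CSV = """user_id,amount,country
1,120.50,SG
2,not-a-number,SG
3,88.00,MY
"""

def ingest(raw_text: str, conn: sqlite3.Connection) -> int:
    """Validate raw CSV rows and load the clean ones into a staging table."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS staging (user_id INTEGER, amount REAL, country TEXT)"
    )
    loaded = 0
    for row in csv.DictReader(io.StringIO(raw_text)):
        try:
            record = (int(row["user_id"]), float(row["amount"]), row["country"])
        except ValueError:
            continue  # reject malformed rows instead of failing the whole job
        conn.execute("INSERT INTO staging VALUES (?, ?, ?)", record)
        loaded += 1
    conn.commit()
    return loaded

conn = sqlite3.connect(":memory:")
print(ingest(RAW_CSV, conn))  # 2 of 3 rows survive validation
```

The per-row reject-and-continue pattern mirrors the "smooth operations" responsibility above: a bad record should surface in a reject log, not bring down the nightly job.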
Big Data Engineer
Posted today
Job Description
Responsibilities
TikTok will be prioritizing applicants who have a current right to work in Singapore and do not require TikTok sponsorship of a visa.
About the team
Our Recommendation Architecture Team is responsible for building and optimizing the architecture of the recommendation system to provide the most stable and best experience for our TikTok users.
We cover almost all short-text recommendation scenarios in TikTok, such as search suggestions, the video-related search bar, and comment entities. Our recommendation system supports personalized sorting for queries, optimizing the user experience and improving TikTok's search awareness.
- Design and implement a reasonable offline data architecture for large-scale recommendation systems
- Design and implement flexible, scalable, stable and high-performance storage and computing systems
- Troubleshoot the production system; design and implement the necessary mechanisms and tools to ensure the stability of the overall operation of the production system
- Build industry-leading distributed systems such as storage and computing to provide reliable infrastructure for massive data and large-scale business systems
- Develop and implement techniques and analytics applications to transform raw data into meaningful information using data-oriented programming languages and visualisation software
- Apply data mining, data modelling, natural language processing, and machine learning to extract and analyse information from large structured and unstructured datasets
- Visualise, interpret, and report data findings, and create dynamic data reports where needed
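The "personalized sorting for queries" mentioned above can be illustrated with a toy ranker. Everything here (field names, weights, the blending formula) is an assumption for illustration, not TikTok's actual system, which runs on large-scale distributed storage and computing infrastructure:

```python
# Toy version of personalized suggestion sorting: blend a global popularity
# signal with a per-user interest match. Weights and fields are assumptions.

def rank_suggestions(suggestions, user_interests, popularity_weight=0.3):
    """Sort suggestions by a blend of popularity and user-interest match."""
    def score(s):
        interest = 1.0 if s["topic"] in user_interests else 0.0
        return popularity_weight * s["popularity"] + (1 - popularity_weight) * interest
    return sorted(suggestions, key=score, reverse=True)

suggestions = [
    {"text": "dance tutorial", "topic": "dance", "popularity": 0.9},
    {"text": "cooking hacks", "topic": "food", "popularity": 0.4},
]
ranked = rank_suggestions(suggestions, user_interests={"food"})
print(ranked[0]["text"])  # cooking hacks — personalization outweighs popularity
```

The same scoring shape, computed offline over massive datasets and served from a low-latency store, is what the offline data architecture responsibilities above would feed.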
Qualifications
Minimum Qualifications
- Bachelor's degree or above, majoring in Computer Science or related fields, with at least 1 year of experience
- Familiarity with common open-source big data frameworks, e.g. Hadoop, Hive, Flink, FlinkSQL, Spark, Kafka, HBase, Redis, RocksDB, ElasticSearch
- Experience in programming languages including, but not limited to, C, C++, Java, or Go (Golang)
- Effective communication skills and a sense of ownership and drive
- Experience with petabyte-level data processing is a plus
Big Data Engineer
Posted today
Job Description
Experience
Hands-on Big Data experience using common open-source components (Hadoop, Hive, Spark, Presto, NiFi, MinIO, K8S, Kafka).
Experience in stakeholder management in heterogeneous business/technology organizations.
Experience in banking or financial services, including handling sensitive data across regions.
Experience in large data migration projects with on-prem Data Lakes.
Hands-on experience in integrating Data Science Workbench platforms (e.g., KNIME, Cloudera, Dataiku).
Track record in Agile project management and methods (e.g., Scrum, SAFe).
Skills
Knowledge of reference architectures, especially concerning integrated, data-driven landscapes and solutions.
Expert SQL skills, preferably in mixed environments (i.e., classic DWH and distributed).
Working automation and troubleshooting experience in Python using Jupyter Notebooks or common IDEs.
Data preparation for reporting/analytics and visualization tools (e.g., Tableau, Power BI or Python-based).
Applying a data quality framework within the architecture.
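The last skill, applying a data quality framework, reduces to running named rules against each batch. The sketch below is a minimal, assumed shape (rule names, fields, and thresholds are illustrative; real frameworks such as those built into the platforms above offer far richer checks):

```python
# Minimal data-quality sketch: each rule yields pass/fail for a batch.
# Field names and thresholds are illustrative assumptions.

def null_rate(records, field):
    """Fraction of records where the field is missing or empty."""
    missing = sum(1 for r in records if r.get(field) in (None, ""))
    return missing / len(records)

def run_quality_checks(records):
    return {
        "amount_not_null": null_rate(records, "amount") == 0.0,
        "amount_in_range": all(
            0 <= r["amount"] <= 1_000_000
            for r in records
            if r.get("amount") is not None
        ),
    }

batch = [{"amount": 10.0}, {"amount": 250.5}, {"amount": None}]
print(run_quality_checks(batch))  # the null check fails, the range check passes
```

Wiring such rule results into pipeline gates is what "within the architecture" implies: a failed rule blocks promotion of the batch rather than silently propagating bad data.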
Role description
Prepare datasets and data pipelines, support the business, and troubleshoot data issues.
Collaborate closely with Data & Analytics Program Management and stakeholders to co-design the Enterprise Data Strategy and Common Data Model.
Implement and promote the Data Platform, transformative data processes, and services.
Develop data pipelines and structures for Data Scientists, and test them to ensure they are fit for use.
Maintain and model JSON-based schemas and metadata for re-use across the organization (with central tools).
Resolve and troubleshoot data-related issues and queries.
Cover all processes from enterprise reporting to data science (incl. MLOps).
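Maintaining JSON-based schemas for re-use, as described above, can be as simple as a shared schema document plus a validator every team calls. This is only a sketch under assumed names (the schema, field types, and error strings are hypothetical; a central tool or the `jsonschema` library would enforce a fuller specification):

```python
import json

# Hypothetical shared schema: required and optional fields with their types.
CUSTOMER_SCHEMA = {
    "required": {"customer_id": int, "country": str},
    "optional": {"segment": str},
}

def validate(record: dict, schema: dict) -> list:
    """Return a list of violations; an empty list means the record conforms."""
    errors = []
    for field, ftype in schema["required"].items():
        if field not in record:
            errors.append(f"missing required field: {field}")
        elif not isinstance(record[field], ftype):
            errors.append(f"wrong type for {field}")
    for field, ftype in schema["optional"].items():
        if field in record and not isinstance(record[field], ftype):
            errors.append(f"wrong type for {field}")
    return errors

record = json.loads('{"customer_id": 42, "country": "SG"}')
print(validate(record, CUSTOMER_SCHEMA))  # [] — record conforms
```

Publishing one such schema centrally, instead of each pipeline hard-coding its own checks, is the re-use the role description is after.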
Big Data Engineer
Posted today
Job Description
Roles & Responsibilities
Job Summary:
We are looking for an experienced Big Data Engineer with at least 5 years of experience in managing data pipelines and processing within Big Data environments (e.g. Cloudera Data Platform). The role involves designing, developing, and maintaining data ingestion and transformation jobs to support analytics and reporting needs.
Key Responsibilities:
- Design and develop data ingestion, processing, and integration pipelines using Python, PySpark, and Informatica.
- Analyse data requirements and build scalable data solutions.
- Support testing, deployment, and production operations.
- Collaborate with business and technical teams for smooth delivery.
- Drive automation, standardization, and performance optimization.
Requirements:
- Bachelor's degree in IT, Computer Science, or related field.
- Minimum 5 years' experience in Big Data Engineering.
- Hands-on skills in Python, PySpark, Linux, SQL, and ETL tools (Informatica preferred).
- Experience with Cloudera Data Platform is an advantage.
- Knowledge of data warehousing, Denodo, and reporting tools (SAP BO, Tableau) preferred.
- Strong analytical, problem-solving, and communication skills.
Job Type: Contract
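The "strong SQL" requirement above, in a nutshell: aggregate a staging table and hand the result to a reporting tool. Table and column names below are illustrative only, using SQLite in place of the warehouse engines the posting implies:

```python
import sqlite3

# Illustrative staging table; in practice this would live in a warehouse
# or on the Cloudera platform rather than an in-memory SQLite database.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE sales (region TEXT, amount REAL);
INSERT INTO sales VALUES ('SG', 100.0), ('SG', 50.0), ('MY', 75.0);
""")

rows = conn.execute(
    "SELECT region, SUM(amount) AS total FROM sales "
    "GROUP BY region ORDER BY total DESC"
).fetchall()
print(rows)  # [('SG', 150.0), ('MY', 75.0)]
```

The same statement runs unchanged on most engines, which is why portable SQL plus a scripting language covers so much of the day-to-day work described here.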
Big Data Engineer
Posted today
Job Description
Job Title: Big Data Engineer (Java, Spark, Hadoop)
Location: Singapore
Experience: 7–12 years
Employment Type: Full-Time
Open to Citizens and SPR only | No Visa sponsorship available
Job Summary
We are looking for a Senior Big Data Engineer with 7–12 years of experience to join our growing data engineering team. The ideal candidate will bring deep expertise in Java, Apache Spark, and Hadoop ecosystems, and have a strong track record of designing and building scalable, high-performance big data solutions. This role is critical to ensuring robust data processing and delivering clean, actionable data for business insights and advanced analytics.
Key Responsibilities
● Design, build, and optimize large-scale, distributed data processing systems using Apache Spark, Hadoop, and Java.
● Lead the development and deployment of data ingestion, ETL/ELT pipelines, and data transformation frameworks.
● Work with cross-functional teams to gather data requirements and translate them into scalable data solutions.
● Ensure high performance and reliability of big data systems through performance tuning and best practices.
● Manage and monitor batch and real-time data pipelines from diverse sources including APIs, databases, and streaming platforms like Kafka.
● Apply deep knowledge of Java to build efficient, modular, and reusable codebases.
● Mentor junior engineers, participate in code reviews, and enforce engineering best practices.
● Collaborate with DevOps teams to build CI/CD pipelines and automate deployment processes.
● Ensure data governance, security, and compliance standards are maintained.
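One stage of the batch pipelines described above can be sketched as a keyed deduplication followed by a transform. The posting's stack is Java/Spark; this pure-Python version only illustrates the shape of the logic, and the event fields are assumptions:

```python
# Hedged sketch of a common ingestion step: keep only the most recent
# record per key (event_id and ts are assumed field names).

def dedupe_latest(events, key="event_id", ts="ts"):
    """Return the latest record per key, resolving upstream replays."""
    latest = {}
    for e in events:
        k = e[key]
        if k not in latest or e[ts] > latest[k][ts]:
            latest[k] = e
    return list(latest.values())

events = [
    {"event_id": "a", "ts": 1, "value": 10},
    {"event_id": "a", "ts": 2, "value": 20},  # replay: supersedes ts=1
    {"event_id": "b", "ts": 1, "value": 5},
]
deduped = dedupe_latest(events)
print(sorted(e["value"] for e in deduped))  # [5, 20]
```

In Spark the same step is a `reduceByKey`-style aggregation over a partitioned dataset; the point is that sources replay events, so pipelines must be written to tolerate duplicates.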
Required Qualifications
● 7–12 years of experience in big data engineering or backend data systems.
● Strong hands-on programming skills in Java; exposure to Scala or Python is a plus.
● Proven experience with Apache Spark, Hadoop (HDFS, YARN, MapReduce), and related tools.
● Solid understanding of distributed computing, data partitioning, and optimization techniques.
● Experience with data access and storage layers like Hive, HBase, or Impala.
● Familiarity with data ingestion tools like Apache Kafka, NiFi, Flume, or Sqoop.
● Comfortable working with SQL for querying large datasets.
● Good understanding of data architecture, data modeling, and data lifecycle management.
● Experience with cloud platforms like AWS, Azure, or Google Cloud Platform.
● Strong problem-solving, analytical, and communication skills.
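"Data partitioning" from the qualifications above, in miniature: route each record to one of N partitions with a stable hash, so the same key always lands on the same partition. This is the general idea behind keyed distribution in systems like Spark and Kafka, shown here as an assumed, simplified sketch:

```python
import hashlib

# Stable hash partitioning: identical keys always map to the same partition,
# which keeps per-key processing local to one worker. MD5 is used here only
# for a deterministic, platform-independent hash, not for security.

def partition_for(key: str, num_partitions: int) -> int:
    digest = hashlib.md5(key.encode("utf-8")).hexdigest()
    return int(digest, 16) % num_partitions

p = partition_for("user-42", 8)
print(0 <= p < 8, p == partition_for("user-42", 8))  # True True — stable
```

Skew is the catch: if one key dominates the data, its partition becomes a hot spot, which is where the optimization techniques the posting asks about (salting, repartitioning) come in.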
Preferred Qualifications
● Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field.
● Experience with streaming data frameworks such as Spark Streaming, Kafka Streams, or Flink.
● Knowledge of DevOps practices, CI/CD pipelines, and infrastructure as code (e.g., Terraform).
● Exposure to containerization (Docker) and orchestration (Kubernetes).
● Certifications in Big Data technologies or Cloud platforms are a plus.
To apply, email to / with the following details – Current CTC, Expected CTC, Notice period and Residential Status.