300 Data Engineering Manager jobs in Singapore
Data Engineering Manager
Posted today
Job Description
We're looking for a Lead Data Engineer / Data Engineering Manager to join our client and lead the design and delivery of scalable, cloud-based data solutions. In this role, you'll work closely with cross-functional teams to solve complex data challenges, modernize infrastructure, and shape strategic data platforms from the ground up.
You'll drive technical direction, design reusable solutions, and influence best practices in data engineering and governance across high-impact projects.
What You'll Do
- Advise on data strategy, architecture, and implementation.
- Build and optimize data pipelines across cloud and on-prem environments.
- Develop secure, scalable infrastructure for structured and unstructured data.
- Design reusable data models and maintain metadata and lineage.
- Ensure governance, data quality, and access controls are in place.
- Support both greenfield builds and legacy modernization efforts.
- Mentor teams and contribute to internal capability development.
What You Bring
- Over 10 years of experience in data engineering, platform or cloud infrastructure.
- Expertise in cloud platforms (AWS, Azure, GCP) and distributed systems (Spark, Hadoop).
- Proficient in Python, SQL, Java, Scala.
- Experience with orchestration tools (e.g. Airflow, ADF) and DevOps (Docker, Git, Terraform).
- Familiarity with Databricks and real-time/batch data pipelines.
- Strong grasp of data governance, security, and compliance practices.
- Clear communicator with strong stakeholder management skills.
- Proven ability to lead, mentor, and drive technical alignment across teams.
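As a rough illustration of the orchestration experience listed above, here is a minimal sketch of a daily batch pipeline in Apache Airflow (2.x); the DAG id, task names, and callables are hypothetical placeholders rather than part of any client environment.

from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    # Placeholder: pull raw records from a source system (an API, database, or file drop).
    print("extracting raw data")

def transform():
    # Placeholder: clean and reshape the extracted records.
    print("transforming data")

def load():
    # Placeholder: write the transformed records to the warehouse or data lake.
    print("loading data")

with DAG(
    dag_id="example_daily_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    load_task = PythonOperator(task_id="load", python_callable=load)

    extract_task >> transform_task >> load_task

The same pattern applies to ADF or any other orchestrator named in the posting: small, independent tasks with explicit dependencies, scheduled and monitored centrally.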
For more information you can contact Norean Tan at
We regret to inform you that only shortlisted candidates will be notified / contacted.
EA Registration No.: R - Tan Lee Ying, Norean
iKas International (Asia) Pte Ltd
ROC No.: E | EA License No.: 16S8086
Data Engineering Manager (Data Governance) - Data Cycling Centre
Posted today
Job Description
Data Engineering Manager (Data Governance) - Data Cycling Centre
About Data Cycling Center
AI is the new electricity and data is the energy for AI. Our core belief is that unstructured data contains the untapped wisdom of humanity — and by building the best AI‐ready data infrastructure, we enable more meaningful, responsible, and creative uses of AI. Data Cycling Center (DCC) is a Data Science team that is responsible for building the mid‐platform data layer that powers AI development across all business lines of TikTok, including e‐commerce, advertising, livestreaming, and emerging technologies.
Highlights of the team's work include:
Scaled human-in-the-loop (HITL) data processing (covering capability, methodology, end-to-end process, and platform), delivering the most cost-effective results at benchmark quality levels that outperform leading tech peers.
Built a highly automated end-to-end data pipeline for AI-generated content (AIGC) that supports strong model performance with minimal resource input.
Developed a comprehensive content understanding and insight generation system, setting new benchmarks for marketing intelligence in the online ads ecosystem.
DCC aims to be the number one data service provider in the AI era. Our mission is to build the most effective (lossless and affordable) content understanding capabilities to fully meet the AI needs of TikTok and to help the industry push the limits.
Role Overview
As a Lead Data Engineer, you will be responsible for building up both the team and the data pipeline. You will work on cutting-edge challenges in the big data and AI industry, which require strong passion and a capacity for innovation. You will collaborate closely with cross-functional teams to understand business requirements and translate them into technical solutions. Additionally, you will provide technical guidance, mentorship, and support to junior team members.
Responsibilities
Architect efficient, scalable and reliable data pipelines and infrastructure for ingesting, processing, and transforming large volumes of data.
Define the technical strategy and roadmap for data engineering projects in alignment with business objectives, actively evaluate and bring in industry best practices and state-of-the-art technical approaches, and update the strategy in a timely manner as the industry changes rapidly.
Build and lead a high‐performing data engineering team with a clear strategy, providing business, technical, and personal development coaching.
Own and drive data engineering projects by leveraging both internal and cross‐functional resources, setting meaningful and challenging targets, and achieving them with innovative approaches.
Foster a collaborative and inclusive team culture within and across teams, and collaborate closely with data scientists.
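To make the pipeline-architecture responsibilities above more concrete, the sketch below shows a generic batch ingest-transform-load job in PySpark; the paths, schema, and aggregation are hypothetical assumptions for illustration, not DCC's actual stack.

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("content_events_daily").getOrCreate()

# Ingest one day of raw, semi-structured event data (hypothetical location).
raw = spark.read.json("s3://example-bucket/raw/content_events/dt=2024-01-01/")

# Transform: keep well-formed events and aggregate per content item and event type.
daily_counts = (
    raw.filter(F.col("event_type").isNotNull())
       .groupBy("content_id", "event_type")
       .agg(F.count("*").alias("event_count"))
)

# Load: write a partitioned, columnar output for downstream AI and analytics consumers.
(daily_counts.write
    .mode("overwrite")
    .partitionBy("event_type")
    .parquet("s3://example-bucket/curated/content_event_counts/dt=2024-01-01/"))

spark.stop()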
Why Join Us
Shape the Future of AI.
Massive Scale, Real‐World Impact: Process petabytes of unstructured data to influence real products used by over a billion users worldwide.
Global Team, Local Leadership: Join a diverse, high‐performing team spread across Singapore, New York, San Jose, Shanghai, and more.
Build What Others Benchmark Against: Our innovations in HITL systems, AIGC pipelines, and content understanding set the industry standard — not just keep up with it.
From Data to Strategy: We don’t just process data — we turn it into actionable insight that shapes product and business strategy across all TikTok verticals.
Dynamic, Fast‐Paced, Fun.
Qualifications
Bachelor’s or Master’s degree in Computer Science, Engineering, or related field.
At least 5 years of experience in data engineering, with proven expertise in building scalable data pipelines and infrastructure.
Experience with big data technologies such as Apache Spark, Hadoop, and Kafka along with strong proficiency in programming languages such as Python, SQL, and Java.
Strong understanding of database systems, data warehousing, and distributed computing concepts.
Proven experience in team management, with a strong track record of leading teams and collaboration with stakeholders at a regional or global level; proven track record of delivering complex data engineering projects.
Excellent communication skills, logical thinking, and strong teamwork and execution abilities. An independent thinker, capable of exploring and prioritising core business needs.
Ability to thrive in a fast‐paced, dynamic environment and manage multiple priorities effectively.
About TikTok
TikTok is the leading destination for short‐form mobile video. At TikTok, our mission is to inspire creativity and bring joy. TikTok’s global headquarters are in Los Angeles and Singapore, and we also have offices in New York City, London, Dublin, Paris, Berlin, Dubai, Jakarta, Seoul, and Tokyo.
Diversity & Inclusion
TikTok is committed to creating an inclusive space where employees are valued for their skills, experiences, and unique perspectives. Our platform connects people from across the globe and so does our workplace. We are passionate about this and hope you are too.
Trust & Safety
TikTok recognises that keeping our platform safe for the TikTok communities is no ordinary job. It can be rewarding, but also psychologically demanding and emotionally taxing for some. This is why we are sharing the potential hazards, risks, and implications of this unique line of work from the start, so our candidates are well informed before joining. We are committed to the wellbeing of all our employees and promise to provide comprehensive and evidence-based programs to promote and support physical and mental wellbeing throughout each employee's journey with us.
Big Data Developer
Posted today
Job Description
My Client
The client is a global cryptocurrency exchange dedicated to providing secure, efficient, and innovative digital asset trading services. The platform offers a wide range of services, including spot trading, derivatives trading, and wealth management. They are looking for an experienced and motivated Big Data Developer to join the dynamic team. The role involves working closely with other developers, designers, and product managers to deliver high-quality, reliable data products that support exceptional user experiences.
Key Responsibilities:
- Develop scalable data processing architectures using Hadoop, Flink, and other Big Data tools.
- Implement and optimize data ingestion, transformation, and analysis pipelines.
- Design efficient data storage and retrieval mechanisms to support analytical needs.
- Work closely with business stakeholders to translate requirements into technical solutions.
- Ensure data quality, security, and compliance with industry standards.
- Identify and resolve performance bottlenecks in data processing systems.
- Keep up with advancements in Big Data technologies and recommend improvements.
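To ground the ingestion and processing responsibilities above, here is a hedged sketch of a real-time trade-ingestion job. It uses Spark Structured Streaming with Kafka as a stand-in for the Flink-based stack named in the posting; the broker address, topic, schema, and output paths are hypothetical, and the Kafka source additionally requires the spark-sql-kafka connector package.

from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType, LongType

spark = SparkSession.builder.appName("trade_stream_ingest").getOrCreate()

# Hypothetical schema for trade events published to Kafka as JSON.
trade_schema = StructType([
    StructField("symbol", StringType()),
    StructField("price", DoubleType()),
    StructField("quantity", DoubleType()),
    StructField("event_time", LongType()),
])

# Read the raw stream and parse the JSON payload into columns.
trades = (
    spark.readStream
         .format("kafka")
         .option("kafka.bootstrap.servers", "broker-1:9092")
         .option("subscribe", "trades")
         .load()
         .select(F.from_json(F.col("value").cast("string"), trade_schema).alias("t"))
         .select("t.*")
)

# Land the parsed trades in the data lake; checkpointing makes the job restartable.
query = (
    trades.writeStream
          .format("parquet")
          .option("path", "/data/lake/trades/")
          .option("checkpointLocation", "/data/checkpoints/trades/")
          .start()
)
query.awaitTermination()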
Requirements:
- Degree in Computer Science, Information Technology, or a related field.
- 3+ years of hands-on experience in Big Data engineering.
- Proficiency in Hadoop, Flink, Kafka, and related frameworks.
- Experience with database modeling and large-scale data storage solutions.
- Strong understanding of data security best practices.
- Excellent analytical and troubleshooting skills.
- Effective communication and teamwork capabilities.
- A proactive mindset with a passion for data-driven innovation.
If this sounds like your next move, please don't hesitate to apply. Kindly note that only shortlisted candidates will be contacted; we appreciate your understanding. Data provided is for recruitment purposes only.
About Us
Dada Consultants was established in 2017 with the commitment of providing the best recruitment services in Singapore. We are a dynamic head-hunting team dedicated to sourcing highly competent professionals in the IT industry. We provide enterprises with customized talent solutions and help talent advance in their careers.
EA Registration Number: R
Business Registration Number: W. Licence Number: 18S9037
Big Data Engineer
Posted today
Job Description
We are seeking a highly skilled and experienced Big Data Engineer to join our team. The ideal candidate will have a minimum of 5 years of experience managing data engineering jobs in a big data environment (e.g., Cloudera Data Platform). The successful candidate will be responsible for designing, developing, and maintaining data ingestion and processing jobs, and will also integrate data sets to provide seamless data access to users.
SKILL SET AND TRACK RECORD
- Good understanding and completion of projects using waterfall/Agile methodology.
- Analytical, conceptualisation, and problem-solving skills.
- Good understanding of analytics and data warehouse implementations.
- Hands-on experience in big data engineering jobs using Python, PySpark, Linux, and ETL tools such as Informatica.
- Strong SQL and data analysis skills; hands-on experience with data virtualisation tools such as Denodo will be an added advantage.
- Hands-on experience with a reporting or visualization tool such as SAP BO or Tableau is preferred.
- Track record in implementing systems using Cloudera Data Platform will be an added advantage.
- Motivated and self-driven, with the ability to learn new concepts and tools in a short period of time
- Passion for automation, standardization, and best practices
- Good presentation skills are preferred
The developer is responsible for the following:
- Analyse the Client's data needs and document the requirements.
- Refine data collection/consumption by migrating data collection to more efficient channels.
- Plan, design and implement data engineering jobs and reporting solutions to meet the analytical needs.
- Develop test plan and scripts for system testing, support user acceptance testing.
- Work with the Client technical teams to ensure smooth deployment and adoption of new solution.
- Ensure the smooth operation and service levels of IT solutions.
- Support and resolve production issues.
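As a minimal sketch of the kind of Python/PySpark job this role would build and test, assuming hypothetical landing paths, table names, and columns (not the Client's actual environment):

from pyspark.sql import SparkSession, functions as F

spark = (SparkSession.builder
         .appName("customer_daily_ingest")
         .enableHiveSupport()
         .getOrCreate())

# Ingest a daily landing file (hypothetical path and layout).
incoming = spark.read.option("header", True).csv("/landing/customers/2024-01-01/")

# Simple data-quality gate: abort the load if mandatory keys are missing,
# so bad batches surface during testing rather than in downstream reports.
missing_keys = incoming.filter(F.col("customer_id").isNull()).count()
if missing_keys > 0:
    raise ValueError(f"{missing_keys} rows arrived without customer_id; aborting load")

# Append the validated batch to a Hive-managed staging table.
(incoming.withColumn("load_date", F.lit("2024-01-01"))
         .write.mode("append")
         .saveAsTable("analytics.customers_staging"))

spark.stop()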
Big Data Analyst
Posted today
Job Description
Job Responsibility:
- Work with the data science team on game data analysis, including organising data logic, basic data processing, analysis, and the corresponding development work.
- Complete basic data analysis and machine learning analysis, and build the required data processing flows and data report visualizations.
- Develop data processing pipelines for data modelling, analysis, and reporting from large and complex transaction datasets.
- Assist with engineering development, data construction, and maintenance when required.
Requirements:
- Degree in Computer Science or related technical field
- At least 2 years of experience in data analysis/data warehouse/mart development and BI reporting.
- At least 2 years of experience in ETL processing data.
- Good understanding of Python, SQL, and HiveQL/SparkSQL, along with the relevant best practices and techniques for performance tuning; experience deploying models in production and adjusting model thresholds to improve performance is a plus.
- Familiarity with data visualization tools, such as Google Analytics or Tableau.
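As a small, hypothetical example of the SQL/SparkSQL reporting work this role involves (the database, table, and columns below are illustrative assumptions only):

from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .appName("game_dau_report")
         .enableHiveSupport()
         .getOrCreate())

# Daily active users over the last 30 days, from a hypothetical login-events table.
dau = spark.sql("""
    SELECT event_date,
           COUNT(DISTINCT player_id) AS daily_active_users
    FROM game_analytics.login_events
    WHERE event_date >= date_sub(current_date(), 30)
    GROUP BY event_date
    ORDER BY event_date
""")

# The result can feed a BI dashboard (e.g. Tableau) or a scheduled data report.
dau.show()

spark.stop()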
Charles, Lau Ngie Hao License No.: 02C3423 Personnel Registration No.: R
Please note that your response to this advertisement and communications with us pursuant to this advertisement will constitute informed consent to the collection, use and/or disclosure of personal data by ManpowerGroup Singapore for the purpose of carrying out its business, in compliance with the relevant provisions of the Personal Data Protection Act 2012. To learn more about ManpowerGroup's Global Privacy Policy, please visit
Big Data Engineer
Posted today
Job Description
Responsibilities
TikTok will be prioritizing applicants who have a current right to work in Singapore, and do not require TikTok sponsorship of a visa.
About the team
Our Recommendation Architecture Team is responsible for building and optimizing the architecture of the recommendation system to provide the most stable and best possible experience for our TikTok users.
We cover almost all short-text recommendation scenarios in TikTok, such as search suggestions, the video-related search bar, and comment entities. Our recommendation system supports personalized sorting for queries, optimizing the user experience and improving TikTok's search awareness.
- Design and implement a reasonable offline data architecture for large-scale recommendation systems
- Design and implement flexible, scalable, stable and high-performance storage and computing systems
- Troubleshoot the production system, and design and implement the necessary mechanisms and tools to ensure the stability of its overall operation
- Build industry-leading distributed systems, such as storage and computing, to provide reliable infrastructure for massive data and large-scale business systems
- Develop and implement techniques and analytics applications to transform raw data into meaningful information using data-oriented programming languages and visualisation software
- Apply data mining, data modelling, natural language processing, and machine learning to extract and analyse information from large structured and unstructured datasets
- Visualise, interpret, and report data findings, and create dynamic data reports where needed
Qualifications
Minimum Qualifications
- Bachelor's degree or above, majoring in Computer Science, or related fields, with at least 1 year of experience
- Familiar with open-source big data frameworks such as Hadoop, Hive, Flink, FlinkSQL, Spark, Kafka, HBase, Redis, RocksDB, and ElasticSearch
- Experience in programming, including but not limited to the following languages: C, C++, Java, or Golang
- Effective communication skills and a sense of ownership and drive
- Experience with petabyte-level data processing is a plus
Big Data Engineer
Posted today
Job Description
Experience
Hands-on Big Data experience using common open-source components (Hadoop, Hive, Spark, Presto, NiFi, MinIO, K8S, Kafka).
Experience in stakeholder management in heterogeneous business/technology organizations.
Experience in banking or the financial business, including handling sensitive data across regions.
Experience in large data migration projects with on-prem Data Lakes.
Hands-on experience in integrating Data Science Workbench platforms (e.g., KNIME, Cloudera, Dataiku).
Track record in Agile project management and methods (e.g., Scrum, SAFe).
Skills
Knowledge of reference architectures, especially concerning integrated, data-driven landscapes and solutions.
Expert SQL skills, preferably in mixed environments (i.e., classic DWH and distributed).
Working automation and troubleshooting experience in Python using Jupyter Notebooks or common IDEs.
Data preparation for reporting/analytics and visualization tools (e.g., Tableau, Power BI or Python-based).
Applying a data quality framework within the architecture.
Role description
Prepare datasets and data pipelines, support the business, and troubleshoot data issues.
Closely collaborate with the Data & Analytics Program Management and stakeholders to co-design Enterprise Data Strategy and Common Data Model.
Implementation and promotion of Data Platform, transformative data processes, and services.
Develop data pipelines and structures for Data Scientists, testing such to ensure that they are fit for use.
Maintain and model JSON-based schemas and metadata so they can be re-used across the organization (with central tools).
Resolve and troubleshoot data-related issues and queries.
Cover all processes from enterprise reporting to data science (incl. ML Ops).
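As a hedged sketch of the JSON-schema and data-quality work described above, the snippet below validates a hypothetical record against a reusable schema using the open-source jsonschema package; the schema, field names, and quarantine behaviour are illustrative assumptions, not the organisation's actual data model.

from jsonschema import ValidationError, validate

# Hypothetical reusable schema for a payment event, kept in a central metadata store.
payment_event_schema = {
    "type": "object",
    "properties": {
        "event_id": {"type": "string"},
        "customer_id": {"type": "string"},
        "amount": {"type": "number", "minimum": 0},
        "currency": {"type": "string", "pattern": "^[A-Z]{3}$"},
    },
    "required": ["event_id", "customer_id", "amount", "currency"],
}

record = {"event_id": "e-123", "customer_id": "c-456", "amount": 25.0, "currency": "SGD"}

try:
    validate(instance=record, schema=payment_event_schema)
except ValidationError as err:
    # In a pipeline, a failing record would typically be routed to a quarantine
    # table for review rather than stopping the whole load.
    print(f"schema violation: {err.message}")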
Big Data Engineer
Posted today
Job Description
Roles & Responsibilities
Job Summary:
We are looking for an experienced Big Data Engineer with at least 5 years of experience in managing data pipelines and processing within Big Data environments (e.g. Cloudera Data Platform). The role involves designing, developing, and maintaining data ingestion and transformation jobs to support analytics and reporting needs.
Key Responsibilities:
- Design and develop data ingestion, processing, and integration pipelines using Python, PySpark, and Informatica.
- Analyse data requirements and build scalable data solutions.
- Support testing, deployment, and production operations.
- Collaborate with business and technical teams for smooth delivery.
- Drive automation, standardization, and performance optimization.
Requirements:
- Bachelor's degree in IT, Computer Science, or related field.
- Minimum 5 years' experience in Big Data Engineering.
- Hands-on skills in Python, PySpark, Linux, SQL, and ETL tools (Informatica preferred).
- Experience with Cloudera Data Platform is an advantage.
- Knowledge of data warehousing, Denodo, and reporting tools (SAP BO, Tableau) preferred.
- Strong analytical, problem-solving, and communication skills.
Job Type: Contract