319 Data Engineering Manager jobs in Singapore
Data Engineering Manager
Posted today
Job Description
Join to apply for the Data Engineering Manager role at SP Group
Make the most of your talents and develop products that can create impact on a national scale. We are an in-house software team, assembled to move with speed and deliver with quality.
We Build Reliable Solutions. For Customers, Company and Country.
You will be part of the Digital Team and together, you will innovate, create, and deploy digital products that will empower more than 3,800 employees within SP Group and improve the quality of life for more than 1.7 million commercial, industrial and residential customers that SP Group serves. We build solutions that enable sustainable, high-quality lifestyles, help consumers save energy and cost, and support national goals for a sustainable, liveable city.
Now, imagine the impact you can create.
Job Summary
We are seeking a highly skilled and experienced Data Engineering Manager to join our dynamic data team. The ideal candidate will translate our strategic data vision into actionable solutions, with a balanced focus on hands-on technical delivery, end-to-end project leadership, and team enablement.
What You'll Do:
- Lead discussions with business users and stakeholders to solicit and document requirements for data projects, including data usage, master data management, and AI initiatives.
- Translate business needs into clear, actionable technical specifications and work breakdowns.
- Lead sprint ceremonies and manage project roadmaps to drive on-time, on-budget delivery, ensuring clear visibility into project milestones and resource utilization
- Proactively identify and remove obstacles by coordinating with stakeholders and ensuring the team has clear direction, resources, and support to maintain delivery momentum
- Lead the design, development, and optimization of scalable data pipelines, Master Data Management processes, and data services.
- Define robust data models and implement data quality checks along with governance frameworks.
- Encourage continuous learning by assigning real-world projects and case studies, then guiding reflective debriefs to reinforce lessons and foster independent problem-solving
- Facilitate hands-on workshops and knowledge-sharing sessions to strengthen technical competencies and promote best practices across data operations
- Establish and document best-practice run-books, decision logs, and process playbooks.
- Identify and implement process enhancements, enforce CI/CD standards for data workflows, and maintain compliance with data-security and regulatory requirements.
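One of the responsibilities above is implementing data quality checks alongside governance frameworks. As a hedged illustration (the field names, rules, and records below are invented, not an SP Group schema), a minimal rule-based batch check in Python might look like:

```python
# Illustrative sketch of a rule-based data quality check.
# The field names, rules, and records are hypothetical examples, not a real schema.

def check_quality(records, rules):
    """Apply per-field validation rules; return failing cells and an overall pass rate."""
    failures = []
    for i, row in enumerate(records):
        for field, rule in rules.items():
            value = row.get(field)
            if not rule(value):
                failures.append((i, field, value))
    total_checks = len(records) * len(rules)
    pass_rate = 1 - len(failures) / total_checks if total_checks else 1.0
    return failures, pass_rate

# Hypothetical meter-reading records with one bad row.
records = [
    {"meter_id": "M001", "kwh": 12.5},
    {"meter_id": "M002", "kwh": -3.0},   # negative reading should fail
]
rules = {
    "meter_id": lambda v: isinstance(v, str) and v.startswith("M"),
    "kwh": lambda v: isinstance(v, float) and v >= 0,
}

failures, pass_rate = check_quality(records, rules)
print(failures)    # [(1, 'kwh', -3.0)]
print(pass_rate)   # 0.75
```

In practice such checks would run inside the pipeline orchestrator and feed the governance framework's metrics; the pass rate here is simply the fraction of field-level checks that succeeded.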
What You’ll Need:
- Experience in Data Engineering, Data Operations or closely related roles
- Experience in leading requirements gathering sessions and translating business needs into technical specifications.
- Strong proficiency in data management practices, including creating data dictionaries and designing data models.
- Experience building and operating large-scale data lakes and data warehouses.
- Experience with Hadoop ecosystem and big data tools, including Spark and Kafka
- Experience with stream-processing systems such as Spark Streaming
- Hands-on experience with UML and data specification development.
- Experience with Agile & DevOps Practices would be advantageous
- Knowledge of utilities data standards such as the IEC Common Information Model would be advantageous
- Experience in AI and machine learning projects would be advantageous.
- Domain knowledge in Utilities sector would be advantageous
- Experience in Data Fabric and/or Data Mesh Architecture would be advantageous
- Experience in Open Data-Lakehouse would be advantageous
- Excellent communication and interpersonal skills, with the ability to translate complex technical plans into actionable updates.
- Strong analytical and problem-solving abilities, with keen attention to detail.
- Skilled in project-management tools and methodologies; uses data-driven insights to track progress, forecast resource needs, and inform decision-making
What We’ll Provide:
- Opportunity to work on the cutting edge of digital engineering practices
- Collaborative and fast-paced work environment
- Be at the forefront of shaping our company's digital future
- Seniority level Not Applicable
- Employment type Full-time
- Job function Information Technology
- Industries Utilities, Energy Technology, and IT System Data Services
Big Data Developer
Posted 8 days ago
Job Description
Roles and Responsibilities:
- Developing and optimising ETL (Extract, Transform, Load) processes to ingest and transform large volumes of data from multiple sources.
- Must have experience in investment banking, payment and transaction banking domains.
- Developing and deploying data processing applications using Big Data frameworks such as Hadoop, Spark, Kafka, or similar technologies.
- Proficiency in programming languages and scripting (e.g., Java, Scala, Python, SQL) for data processing and analysis.
- Experience with cloud platforms and services for Big Data (e.g., AWS, Azure, Google Cloud)
Requirements:
Primary Skills:
- Designing, building, and maintaining systems that handle large volumes of data, enabling businesses to extract valuable insights and make data-driven decisions.
- Creating scalable and efficient data pipelines, implementing data models, and integrating various data sources.
- Developing and deploying data processing applications using Big Data frameworks such as Hadoop, Spark, Kafka
- Write efficient and optimised code in programming languages like Java, Scala, Python to manipulate and analyse data
- Creating scalable and efficient data pipelines, implementing data models, and integrating diverse data sources to enable businesses to extract valuable insights
Secondary Skills:
- Designing, developing, and implementing scalable and efficient data processing pipelines using Big Data technologies.
- Implementing a Kafka-based pipeline to feed event-driven data into a dynamic pricing model, enabling real-time pricing adjustments based on market conditions and customer demand
- Conduct testing and validation of data pipelines and analytical solutions to ensure accuracy, reliability, and performance.
- Strong experience in Spring Boot and microservices architecture.
- Strong experience in distributed computing principles and Big Data ecosystem components (e.g., Hadoop, Spark, Hive, HBase).
- More than 8 years of working experience in IT industry
- More than 5 years of relevant experience
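The Kafka-based dynamic pricing pipeline mentioned above can be sketched at a high level. This is a hedged illustration only: a real system would consume from a Kafka topic (e.g. with a Kafka consumer or Spark Streaming), whereas here a plain list stands in for the event stream, and the pricing rule, field names, and cap are invented.

```python
# Hedged sketch: simulates an event-driven dynamic pricing loop.
# In production the events would arrive from a Kafka topic; here a plain
# list stands in for the stream, and the pricing rule is a made-up example.

BASE_PRICE = 100.0

def reprice(base_price, event):
    """Scale price by a demand index (100 = baseline), capping the move at +/-20%."""
    adjusted = base_price * event["demand_index"] / 100.0
    lower, upper = base_price * 0.8, base_price * 1.2
    return max(lower, min(upper, adjusted))

def run_pipeline(events):
    """Consume events in order and emit the latest price after each one."""
    prices = []
    for event in events:
        prices.append(reprice(BASE_PRICE, event))
    return prices

# Simulated market-condition events (hypothetical payloads).
events = [
    {"demand_index": 110},  # mild demand spike  -> +10%
    {"demand_index": 150},  # large spike        -> capped at +20%
    {"demand_index": 70},   # demand drop        -> capped at -20%
]
print(run_pipeline(events))  # [110.0, 120.0, 80.0]
```

The cap mirrors a common design choice in event-driven pricing: each event is processed independently, so the consumer stays stateless and can be scaled out across Kafka partitions.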
Big Data Engineer
Posted 15 days ago
Job Description
We are seeking a highly skilled and experienced Big Data Engineer to join our team. The ideal candidate will have a minimum of 4 years of experience managing data engineering jobs in big data environment e.g., Cloudera Data Platform. The successful candidate will be responsible for designing, developing, and maintaining the data ingestion and processing jobs. Candidate will also be integrating data sets to provide seamless data access to users.
Responsibilities
• Analyse the Authority’s data needs and document the requirements.
• Refine data collection/consumption by migrating data collection to more efficient channels.
• Plan, design and implement data engineering jobs and reporting solutions to meet the analytical needs.
• Develop test plan and scripts for system testing, support user acceptance testing.
• Build reports and dashboards according to user requirements
• Work with the Authority’s technical teams to ensure smooth deployment and adoption of new solution.
• Ensure the smooth operations and service level of IT solutions.
• Support production issues
What we are looking for
• Good understanding of, and experience completing, projects using waterfall/Agile methodologies.
• Strong SQL, data modelling and data analysis skills are a must.
• Hands-on experience in big data engineering jobs using Python, PySpark, Linux, and ETL tools like Informatica.
• Hands-on experience in a reporting or visualization tool like SAP BO or Tableau is a must.
• Hands-on experience in DevOps deployment and data virtualisation tools like Denodo will be an advantage.
• Track record in implementing systems using Hive, Impala and Cloudera Data Platform will be preferred.
• Good understanding of analytics and data warehouse implementations.
• Ability to troubleshoot complex issues ranging from system resource to application stack traces.
• Track record in implementing systems with high availability, high performance, high security hosted at various data centres or hybrid cloud environments will be an added advantage.
• Passion for automation, standardization, and best practices
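The SQL and data modelling requirement above can be illustrated with a small self-contained example. The two-table model and figures are hypothetical (not the Authority's schema), and SQLite stands in for the actual warehouse:

```python
# Illustrative sketch: a two-table model and a reporting query in SQLite.
# Table names and data are hypothetical examples.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE agency (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE submission (id INTEGER PRIMARY KEY,
                             agency_id INTEGER REFERENCES agency(id),
                             records INTEGER);
    INSERT INTO agency VALUES (1, 'Agency A'), (2, 'Agency B');
    INSERT INTO submission VALUES (1, 1, 100), (2, 1, 250), (3, 2, 40);
""")

# Dashboard-style rollup: total records submitted per agency.
rows = conn.execute("""
    SELECT a.name, SUM(s.records) AS total
    FROM agency a JOIN submission s ON s.agency_id = a.id
    GROUP BY a.name ORDER BY total DESC
""").fetchall()
print(rows)  # [('Agency A', 350), ('Agency B', 40)]
```

The same join-and-aggregate pattern underlies most report and dashboard builds, whichever engine (Hive, Impala, a BI tool's SQL layer) actually runs it.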
Big Data Test Engineer
Posted today
Job Description
Join to apply for the Big Data Test Engineer role at Patsnap.
About Patsnap
Patsnap empowers IP and R&D teams by providing better answers, enabling faster decision-making with confidence. Founded in 2007, Patsnap is a global leader in AI-powered IP and R&D intelligence. Our domain-specific LLM, trained on extensive proprietary innovation data, combined with Hiro, our AI assistant, delivers actionable insights that boost productivity and reduce R&D wastage. Trusted by over 15,000 companies including NASA, Tesla, PayPal, Sanofi, Dow Chemical, and Wilson Sonsini, Patsnap helps teams innovate faster.
About The Role
We are seeking a Big Data Test Engineer to ensure the quality and reliability of our data platforms. You will be responsible for testing and validating our big data solutions, working with technologies like Hadoop and Spark, and collaborating with our data engineering team. If passionate about automation testing and big data platform development, and eager to grow your skills, this role is ideal for you.
Key Responsibilities
- Test and ensure quality of big data and data-related products.
- Develop testing frameworks or tools, and contribute to CI platforms and automation.
- Validate data quality, consistency, and accuracy across pipelines.
- Analyze test results and generate reports on software quality and coverage.
- Monitor and validate data workflows and pipelines.
- Improve testing processes for better efficiency.
- Ensure data integrity across sources and platforms.
- Collaborate with data engineers to enhance data quality.
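Several of the responsibilities above (validating consistency across pipelines, ensuring integrity across sources) reduce to reconciliation checks between a source extract and its downstream copy. A minimal hedged sketch with invented rows follows; real tests would query both stores:

```python
# Illustrative reconciliation checks between a source extract and a target load.
# Field names and data are hypothetical; real tests would query both data stores.

def reconcile(source_rows, target_rows, key):
    """Compare row counts and key sets between two stages of a pipeline."""
    source_keys = {row[key] for row in source_rows}
    target_keys = {row[key] for row in target_rows}
    return {
        "count_match": len(source_rows) == len(target_rows),
        "missing_in_target": sorted(source_keys - target_keys),
        "unexpected_in_target": sorted(target_keys - source_keys),
    }

source = [{"id": 1}, {"id": 2}, {"id": 3}]
target = [{"id": 1}, {"id": 3}]          # id 2 was dropped in flight

report = reconcile(source, target, key="id")
print(report)
# {'count_match': False, 'missing_in_target': [2], 'unexpected_in_target': []}
```

Checks like this are typically wired into a test framework or CI job so that a failed reconciliation blocks the pipeline release rather than surfacing later as a data defect.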
Desired Qualifications
- Bachelor's degree in Computer Science, IT, or related field.
- 2+ years of software testing experience.
- Proficiency in Python or Java, with SQL and data validation skills.
- Experience with test automation tools and frameworks.
- Knowledge of version control (e.g., Git) and CI/CD tools.
- Strong communication skills.
- Ability to work independently and as part of a team.
- Attention to detail and quality focus.
Preferred Skills
- Experience with Hadoop, Spark, Hive, HBase.
- Knowledge of ETL and data warehousing.
- Experience with performance testing tools like JMeter.
- Cloud platform experience (AWS, Azure, GCP).
- ISTQB certification or equivalent.
Why Join Us
- Work with cutting-edge big data technologies.
- Collaborative and innovative environment.
- Opportunities for professional growth.
- Regular team events and knowledge sharing.
- Seniority level Not Applicable
- Employment type Full-time
- Job function Engineering and Information Technology
- Industries Software Development
Big Data Test Engineer
Posted today
Job Description
About PatSnap
Patsnap empowers IP and R&D teams by providing better answers, so they can make faster decisions with more confidence. Founded in 2007, Patsnap is the global leader in AI-powered IP and R&D intelligence. Our domain-specific LLM, trained on our extensive proprietary innovation data, coupled with Hiro, our AI assistant, delivers actionable insights that increase productivity for IP tasks by 75% and reduce R&D wastage by 25%. IP and R&D teams collaborate better with a user-friendly platform across the entire innovation lifecycle. Over 15,000 companies trust Patsnap to innovate faster with AI, including NASA, Tesla, PayPal, Sanofi, Dow Chemical, and Wilson Sonsini.
About the Role
We are looking for a Big Data Test Engineer to ensure the quality and reliability of our data platforms. The successful candidate will be responsible for ensuring the quality and reliability of our big data solutions through rigorous testing and validation processes, working with technologies like Hadoop and Spark while collaborating with our data engineering team. If you are passionate about automation testing, and big data testing platform development and are eager to grow your technical skills, this opportunity is for you.
Key Responsibilities
- Responsible for testing and quality assurance of big data and data-related products.
- Participate in the development of big data testing frameworks or testing tools, and contribute to the construction of continuous integration platforms and automation development.
- Validate data quality, consistency, and accuracy across data pipelines.
- Analyze test results and provide detailed reports on software quality and test coverage.
- Monitor and validate data workflows and pipelines.
- Continuously improve testing processes and methodologies to enhance efficiency and effectiveness.
- Ensure data integrity and accuracy across various data sources and platforms.
- Work closely with data engineers to improve data quality.
Desired Qualifications
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- 2+ years of experience in software testing.
- Proficiency in programming languages such as Python or Java, experience with SQL and data querying for data validation.
- Familiarity with test automation tools and frameworks.
- Knowledge of version control systems (e.g., Git) and experience with CI/CD tools.
- Strong communication and interpersonal skills.
- Ability to work independently and collaboratively in a team environment.
- Attention to detail and a commitment to quality.
Preferred Skills
- Experience with big data platforms (Hadoop, Spark, Hive, HBase).
- Knowledge of ETL processes and data warehousing concepts.
- Experience with performance testing tools (JMeter).
- AWS/Azure/GCP cloud platform experience.
- ISTQB certification or equivalent.
Why Join Us
- Work with cutting-edge big data technologies and tools.
- Collaborative and innovative work environment.
- Professional growth and learning opportunities.
- Regular team events and knowledge sharing sessions.
Big Data Platform Engineer
Posted today
Job Description
Join to apply for the Big Data Platform Engineer role at Epergne Solutions
We are looking for a passionate and experienced Big Data Platform Engineer to join our dynamic Global Data Platform team. This role offers the opportunity to work on cutting-edge technologies and contribute to building and operating resilient, scalable, and secure data platforms.
Key Responsibilities:
- Manage and operate core Global Data Platform components such as VM servers, Kubernetes, Kafka, and applications within the Apache stack, including Collibra, Dataiku, and similar tools.
- Automate infrastructure and security components, and implement CI/CD pipelines to ensure seamless and efficient execution of ELT/ETL data pipelines.
- Enhance data pipeline resilience through monitoring, alerting, and health checks, ensuring high standards of data quality, timeliness, and accuracy.
- Apply DevSecOps principles and Agile methodologies to deliver robust and integrated platform solutions incrementally.
- Collaborate with enterprise security, digital engineering, and cloud operations teams to define and agree on architectural solution frameworks.
- Investigate system issues and incidents, identify root causes, and implement continuous improvements to optimize platform performance.
- Stay up to date with emerging technologies and industry trends to drive innovation and new feature development.
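The monitoring, alerting, and health-check responsibility above typically includes a data-freshness check against each pipeline's SLA. A hedged sketch follows; the pipeline names and thresholds are invented examples, and a real deployment would read run timestamps from the scheduler and push alerts to a monitoring system:

```python
# Hedged sketch of a data-freshness health check for pipeline monitoring.
# Pipeline names and SLA thresholds are hypothetical examples.

import datetime as dt

def freshness_alerts(last_success, now, sla_minutes):
    """Return pipelines whose last successful run breaches their freshness SLA."""
    alerts = []
    for pipeline, finished_at in last_success.items():
        age = (now - finished_at).total_seconds() / 60
        if age > sla_minutes[pipeline]:
            alerts.append(pipeline)
    return sorted(alerts)

now = dt.datetime(2024, 1, 1, 12, 0)
last_success = {
    "trades_ingest": dt.datetime(2024, 1, 1, 11, 55),   # 5 min ago -> healthy
    "positions_batch": dt.datetime(2024, 1, 1, 9, 0),   # 180 min ago -> stale
}
sla_minutes = {"trades_ingest": 15, "positions_batch": 60}

print(freshness_alerts(last_success, now, sla_minutes))  # ['positions_batch']
```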
Required Skills and Experience:
- Bachelor’s degree in Engineering, Computer Science, Information Technology, or a related field.
- 5–7 years of experience designing or building large-scale, fault-tolerant distributed systems (e.g., data lakes, data meshes, streaming data platforms).
- Strong hands-on expertise with distributed technologies like Kafka, Kubernetes, Spark, and the broader Hadoop ecosystem.
- Experience in storage migration (e.g., from HDFS to S3 or similar object storage).
- Proficient in integrating streaming and batch ingestion pipelines using tools like Kafka, Control-M, or AWA.
- Demonstrated experience with DevOps and automation tools such as Jenkins and Octopus, and optionally Ansible, Chef, XL Release, or XL Deploy.
- Strong programming skills in Python and Java (or other languages like Scala or R), along with Linux/Unix scripting and automation using Jinja and Puppet, and firewall configuration.
- Experience with Kubernetes pod scaling, Docker image management via Harbor, and CI/CD deployment of containers.
- Familiarity with data serialization formats such as Parquet, ORC, or Avro.
- Exposure to machine learning and data science platforms like Dataiku is a plus.
- Cloud migration experience is advantageous.
- Comfortable working in Agile environments (e.g., Scrum, SAFe).
- Knowledge of the financial services industry and its products is a strong asset.
Soft Skills:
- Excellent communication skills with the ability to collaborate across technical and business teams.
- Detail-oriented and highly organized, with strong prioritization and multitasking abilities.
- Proactive, customer-focused, and collaborative approach to problem-solving and project execution.
- A strong advocate for data-driven culture and the democratization of data across the organization.
- Seniority level Mid-Senior level
- Employment type Full-time
- Job function Engineering and Information Technology
- Industries IT Services and IT Consulting
Big Data Platform Engineer
Posted today
Job Description
Join to apply for the Big Data Platform Engineer role at Centre for Strategic Infocomm Technologies (CSIT)
Data processing is an essential part of national security in the modern world. As an engineer on our Big Data Platform team, you will build and own a solid data platform as the foundation for critical national security missions. You will push the limits of technology to ensure that the whole big data pipeline runs optimally and smoothly, from ingestion to processing to persistence. You are encouraged to propose the best ways to deliver value and the technologies to use. Join the awesome CSIT family and use cutting-edge technologies to protect the nation.
Responsibilities
- Implement and maintain real-time high-volume data processing, querying and persistence solutions using Big Data technologies
- Implement highly available, resilient, scalable and sustainable Big Data platforms
- Administer (configuration, backup and recovery) Big Data platform services running across multiple clusters
- Deploy and manage the cluster efficiently using DevOps and automation tools
- Build performance and health monitoring tools to ensure optimal system performance and availability
- Work closely with different teams to ensure smooth onboarding and operations of the Big Data platform
- Conduct resource planning and scaling exercises to support increasing workload
- Conduct research and evaluations of new technologies
- Innovate and improve the current system
Requirements
- Bachelor’s degree in Computer Engineering, Computer Science or equivalent experience
- Able to communicate clearly
- Positive attitude and excellent teamwork
- Passionate about big data technologies
- Love for open source technologies
- Experience in managing physical and virtual infrastructures
- Experience with cluster management and DevOps tools (e.g. Ansible, Kubernetes)
- Experience working with application or product teams
- Experience working in an agile and innovative environment
Why Join Us
- The work is purposeful and meaningful
- You will work with the best engineers
- We work with modern technologies and tech stacks
- We have excellent engineering culture and work-life balance
- We aspire to engineering and operational excellence
- We empower to innovate
- We grow together as a family
- Seniority level Entry level
- Employment type Full-time
- Job function Engineering and Information Technology
- Industries IT Services and IT Consulting
Big Data Engineer (Singapore)
Posted today
Job Description
- Build the company's big data warehouse system, including batch and stream data flow construction
- Develop an in-depth understanding of business systems and project customer needs; design and implement big data systems that meet user needs and ensure smooth project acceptance
- Take responsibility for data integration and ETL architecture design and development
- Research cutting-edge big data technologies and continuously optimize big data platforms
Job requirements:
- Bachelor's degree or above in computer science, databases, machine learning or another related major, with more than 3 years of data development work
- Keen understanding of business data and the ability to analyze it quickly
- Proficient in SQL; familiar with MySQL; very familiar with Shell, Java (or Scala) and Python
- Familiar with common ETL technologies and principles; proficient in data warehouse design specifications and practical operations; rich experience in Spark and MapReduce task tuning
- Familiar with Hadoop, Hive, HBase, Spark, Flink, Kafka and other big data components
- Proficiency with mainstream analytical data systems such as ClickHouse, Doris and TiDB, with tuning experience, is preferred
- Seniority level Mid-Senior level
- Employment type Full-time
- Job function Information Technology and Engineering
- Industries Software Development, Technology, Information and Media, and Information Services
Principal Big Data Engineer
Posted today
Job Description
Responsibilities
- Design and implement scalable data pipelines to collect, process, and store large volumes of structured and unstructured data.
- Ensure data quality, integrity, and consistency across various data sources.
- Work closely with data scientists, analysts, and other stakeholders to understand data requirements and deliver solutions that meet business needs.
- Integrate data from multiple sources, including databases, APIs, and third-party data providers.
- Build and maintain big data infrastructure using appropriate big data technologies.
- Optimize data storage and retrieval processes to improve performance and reduce costs.
Requirements
- Proven experience as a Big Data Engineer or in a similar role.
- Experience with big data technologies such as Hadoop, Spark, Kafka, Hive and HBase.
- Proficiency in programming languages such as Python, Java, or Scala.
- Strong knowledge of SQL and NoSQL databases.
- Knowledge of machine learning and data analytics.
JAVA BIG DATA DEVELOPER
Posted 1 day ago
Job Description
Main responsibilities
- Lead the technical study into a proposed solution, drawing on expertise from infrastructure and big data experts and on business analyst requirements
- Document the proposed design and develop the solution
- Ensure all CI/CD artefacts are part of the solution
- Perform code reviews while fostering knowledge sharing and coaching team members in best practices
- Interact with and provide reporting to project managers
- Monitor technical risk and escalate appropriately to management
- Research, design, and develop software
- Analyse user needs and develop software solutions
- Update software, enhance existing software capabilities, and develop and direct software testing and validation procedures
- Work with other engineers to integrate hardware and/or software systems
The position requires autonomy and reliability in performing duties, with initiative and leadership on all non-functional deliverables such as testing tools, mock objects, production monitoring concerns, and quality control, including performance and load testing.
Qualifications and Profile
Mandatory
- At least 8 years in Software development
- At least 5 years in Java/J2EE development
- Hands-on experience with data ingestion and processing technologies like Spark and Spark Streaming
- Hands-on experience with messaging systems like Kafka, Flume, ActiveMQ, MQSeries or RabbitMQ
- Hands-on knowledge of Hadoop (preferably the Hortonworks distribution): HDFS, HBase, Hive, ORC/Parquet
- Build tools (Maven/sbt/Ant), UML, RESTful web services, Jenkins/TeamCity, source management (SVN/Git), TDD using JUnit, Jira/QC
Good to Have
- Solution design using proven patterns, awareness of anti-patterns, and performance tuning, especially in streaming
- Knowledge of tools like Phoenix, Elasticsearch, Sqoop and StreamSets is good to have
- Basic understanding of finance and investment banking
Other Professional Skills and Mindset
- Excellent written and verbal communication skills for both teammates and management
- Strong analytical and problem-solving skills
- Proficient in the software development life cycle
- Appetite to follow technology trends and participate in communities
- Passion for sharing expertise and growing team members’ skills
- Interest in mentoring and guiding junior team members on the path of high-quality deliverables.
Education Requirements
At least a bachelor’s degree in any of these faculties:
- Computer Science
- Information Technology
- Programming & Systems Analysis
- Science (Computer Studies)