786 Data Engineer jobs in Singapore
Data Engineer
Posted today
Job Description
At PwC, we help clients build trust and reinvent so they can turn complexity into competitive advantage. We’re a tech-forward, people-empowered network with more than 370,000 people in 149 countries. Across audit and assurance, tax and legal, deals, and consulting, we help clients build, accelerate, and sustain momentum.
How will you value-add?
- Design, develop, and maintain robust ETL/ELT pipelines using tools like Azure Data Factory, Azure Databricks, and MS Fabric.
- Build and optimize data architectures (data lakes, data warehouses) on Azure Data Services, Azure SQL PaaS, and Power BI.
- Collaborate with data analysts and business stakeholders to understand data requirements.
- Ensure data quality, integrity, and security across all data systems, including database authentication and authorization.
- Monitor and troubleshoot data pipeline performance and reliability.
- Implement best practices for data governance, metadata management, and documentation.
- Manage memory for database systems.
- Develop database schemas, tables, and dictionaries.
- Ensure data quality and integrity in databases, fix database performance issues, and provide corrective measures.
- Work with structured and unstructured data from various sources (APIs, databases, flat files, etc.).
- Perform regular audit checks.
About you
- Bachelor's or Master's degree in Computer Science, Engineering, Information Systems, or a related field.
- Azure Data Engineer certification is good to have.
- 2-3 years of experience in Data Engineering or related roles.
- Proficiency in MS SQL (T-SQL) and programming languages such as Python or C#/.NET.
- Experience with data pipeline orchestration tools such as Azure Data Factory or Azure Databricks.
- Familiarity with cloud data platforms such as MS Fabric.
- Strong understanding of data modeling and warehousing concepts.
Data Engineer
Posted 1 day ago
Job Description
PALO IT is a global technology consultancy that crafts tech as a force for good. We design, develop and scale digital and sustainable products and services to unlock value across the triple bottom line: people, planet, profit. We do the right thing, and we do it right. We're proud to be a World Economic Forum New Champion, and a B Corp-certified company.
Overview
Your role: We are seeking a Data Engineer to build and operate enterprise-scale data pipelines primarily on Microsoft Azure, with exposure to AWS and GCP as a plus. This role supports critical business domains including Financial Services, Healthcare, Maritime, and Retail.
Responsibilities
- Build ingestion and transformation pipelines in Azure Fabric / Synapse / Databricks.
- Implement metadata-driven frameworks, quality checks, and monitoring dashboards.
- Enable self-service analytics and reporting pipelines.
- Collaborate with cross-functional teams to ensure secure, governed, and performant pipelines.
Requirements
- 3–6 years of hands-on data engineering experience.
- Strong Python and SQL skills.
- Experience implementing data quality rules and monitoring.
- Familiarity with RBAC, Purview (governance & lineage).
- AWS services: S3, Glue, Lambda.
- GCP services: BigQuery, Dataflow.
- Experience with real-time streaming (Event Hub, Kafka, Pub/Sub).
We’re eager to adapt to change, learn from our experiences and move to meet our planet’s urgent needs. We are continuously taking action to attain 50% of revenue from projects with a positive impact, train 100% of our workforce on impact, achieve B Corp certification among all our offices across the globe, and continuously measure & improve employee happiness.
What We Offer
- Stimulating working environments
- Unique career path
- International mobility
- Internal R&D projects
- Knowledge sharing
- Personalized training
PALO IT Singapore is an equal opportunity employer. Employment decisions will be based on merit, qualifications and abilities. Palo IT SG does not discriminate in employment opportunities or practices on the basis of race, colour, religion, sex, sexuality, national origin, age, disability, marital status or any other characteristics protected by law.
Protecting your privacy and the security of your data are longstanding top priorities for Palo-IT. Your personal data will be processed for the purposes of managing Palo-IT’s recruitment related activities, including setting up and conducting interviews and tests for applicants, evaluating and assessing the results, and as is otherwise needed in the recruitment and hiring processes. Please consult our Privacy Notice to know more about how we collect, use, and transfer the personal data of our candidates. Here you can find how you can request for access, correction and/or withdrawal of your Personal Data.
Data Engineer
Posted 1 day ago
Job Description
Innova Solutions
Overview
We are seeking a skilled Data Engineer with strong SSIS development expertise to design, build, and maintain scalable data integration solutions. The ideal candidate will have experience in ETL processes, SQL Server, and data warehouse development, with the ability to translate business requirements into efficient data pipelines.
Qualifications
- Bachelor’s degree in Computer Science, Data Science, Statistics, or a related field.
- At least 2 years’ working experience in data platforms, data analytics, or relevant projects.
- Proven experience as a Data Engineer, including designing and managing enterprise data architecture for both batch and real-time processing.
- Solid programming skills in languages such as Python, SQL, SAS, or R for data analysis and data engineering tasks.
- Experience with SSIS packages and pipelines.
- Prior experience as a Data Engineer, with a demonstrated ability to design and implement data models, ETL processes, and data integration solutions.
- Proven experience in at least one major data processing tool (Informatica, Kettle, Talend, Airflow, Dolphin, etc.).
- Prior experience with cloud-based data platforms (e.g., Azure Data Factory, Azure Purview, Databricks, Snowflake, Kafka, or equivalent) and data storage technologies (e.g., SQL/NoSQL, vector, and graph databases).
- Due to project restrictions, we are only able to accommodate Singaporeans and Singapore PRs.
Seniority level: Mid-Senior level
Employment type: Contract
Job function: Data Engineering
Industries: Gambling Facilities and Casinos
Data Engineer
Posted 1 day ago
Job Description
We are looking for a highly motivated and skilled Data Engineer.
Key Responsibilities
- Build and maintain robust, scalable ETL pipelines across batch and real-time data sources.
- Design and implement data transformations using Spark (PySpark/Scala/Java) on Hadoop/Hive.
- Stream data from Kafka topics into data lakes or analytics layers using Spark Streaming.
- Collaborate with cross-functional teams on data modeling, ingestion strategies, and performance optimization.
- Implement and support CI/CD pipelines using Git, Jenkins, and container technologies like Docker/Kubernetes.
- Work within cloud and on-prem hybrid data platforms, contributing to automation, deployment, and monitoring of data workflows.
Skills
- Strong programming skills in Python, Scala, or Java.
- Hands-on experience with Apache Spark, Hadoop, Hive, Kafka, HBase, or related tools.
- Sound understanding of data warehousing, dimensional modeling, and SQL.
- Familiarity with Airflow, Git, Jenkins, and containerization tools (Docker/Kubernetes).
- Exposure to cloud platforms such as AWS or GCP is a plus.
- Experience with Agile delivery models and collaborative tools like Jira and Confluence.
- Experience with streaming data pipelines, machine learning workflows, or feature engineering.
- Familiarity with Terraform, Ansible, or other infrastructure-as-code tools.
- Exposure to Snowflake, Databricks, or modern data lakehouse architecture is a bonus.
Data Engineer
Posted 2 days ago
Job Description
Overview
Data Architecture & Pipeline Development
- Design and implement data models for social service integration (Client360, Household360).
- Build ETL processes for legacy system migration and real-time agency data synchronization.
- Develop and maintain batch and real-time data pipelines for cross-agency data flow.
- Implement data quality validation frameworks and monitoring systems.
- Design and develop APIs for system integrations (OneCV, CaseConnect, ComLink+)
- Create and maintain documentation for all integration points and APIs
- Ensure API performance, security, and scalability
- Implement data protection measures including field-level encryption and access controls
- Ensure compliance with IM8 standards and security requirements
- Maintain audit trails and data masking for sensitive information
- Monitor and enforce data classification policies
- Optimize query performance and batch processing efficiency
- Monitor and tune real-time data synchronization
- Implement performance monitoring dashboards
- Maintain system reliability and uptime metrics
- Work with business analysts, solution architects, and frontend developers
- Participate in Agile ceremonies and sprint planning
- Conduct code reviews and pair programming sessions
- Provide technical guidance on data integration matters
- Develop and maintain comprehensive technical documentation
- Implement automated testing for data transformations and integrations
- Ensure data accuracy and validation across systems
- Create and maintain troubleshooting guides and procedures.
Data Engineer
Posted 2 days ago
Job Description
Overview
Data Engineer role at Tencent Games. Tencent Games was established in 2003. We are a leading global platform for game development, operations and publishing, and the largest online game community in China. Tencent Games has developed and operated over 140 games and serves more than 800 million users in over 201 countries and regions. Our titles include Honor of Kings, PUBG MOBILE, and League of Legends.
We actively promote the development of the esports industry, work with global partners to build an open, collaborative and symbiotic industrial ecology, and create high-quality digital life experiences for players.
What the Role Entails
- Responsible for game security-related data statistics, analysis, and mining work
- Responsible for the development and management of content moderation solutions covering text, image, and audio data in games
- Provide reference for security decision-making through user behavior data modeling and user profile analysis
What We Look For
- Bachelor's degree or above in Computer Science or a related major
- 3+ years’ experience in Linux, C++ language and a scripting language (e.g. shell/python/lua)
- 3+ years’ experience in data analysis, familiarity with data analysis methods and machine learning algorithms; deep learning and LLM experience is preferred
- Experience with large-scale data analysis and processing
- Excellent written and verbal communication skills in English and Chinese are required to interact with stakeholders in China HQ and other regions
- Strong sense of responsibility, strong logical thinking and communication abilities
- Passionate gamer; prior anti-fraud work experience is highly desired
As an equal opportunity employer, we firmly believe that diverse voices fuel our innovation and allow us to better serve our users and the community. We foster an environment where every employee of Tencent feels supported and inspired to achieve individual and common goals.
Seniority level: Mid-Senior level
Employment type: Full-time
Job function: Information Technology and Engineering
Industries: Software Development
Data Engineer
Posted 2 days ago
Job Description
Infosys Consulting is a global management consulting firm helping some of the world’s most recognizable brands transform and innovate. Our consultants are industry experts that lead complex change agendas driven by disruptive technology. With offices in 20 countries and backed by the power of the global Infosys brand, our teams help the C-suite navigate today’s digital landscape to win market share and create shareholder value for lasting competitive advantage.
Title: Consultant (Data)
Summary: Infosys Consulting is looking for a highly skilled data engineer with 2-5 years of experience in data processing using ETL tools like Informatica, Python, or similar. The ideal candidate should be a great communicator and have a strong background in data analytics patterns. The successful candidate will be responsible for designing and developing ML-based algorithms in Python and should have good knowledge of CI/CD and DevOps.
Key Responsibilities:
- Design and develop ETL workflows to move data from various sources into the organization's data warehouse.
- Develop and implement data quality controls and monitor data accuracy.
- Design and develop ML-based algorithms in Python to automate and optimize data processing.
- Work closely with cross-functional teams to ensure data solutions meet business requirements.
- Develop and maintain data processing and automation scripts using Python.
- Create data visualizations and provide insights to the data analytics team.
- Design and implement data security and access controls.
- Develop and maintain data pipelines and ETL workflows for various data sources.
If you have a passion for data engineering and are looking for a challenging opportunity, we would love to hear from you. This is a great opportunity for someone who is looking to grow their skills and work with a dynamic team in a fast-paced environment.
We welcome applications from all members of society irrespective of age, sex, disability, sexual orientation, race, religion, or belief. We make recruiting decisions based on your experience, skills and personality. We believe that employing a diverse workforce is the right thing to do and is central to our success. We offer you great opportunities within a dynamically growing consultancy.
At Infosys Consulting you will discover a truly global culture, highly dedicated and motivated colleagues, a co-operative work environment and interesting training opportunities.
Minimum Requirements:
- Bachelor's degree in Computer Science, Information Systems, or a related field.
- 2-5 years of experience in data processing and ETL using SQL, Informatica, AWS Glue, Airflow, Python, or similar tools.
- Experience with CI/CD and DevOps practices.
- Experience in designing and developing ML-based algorithms in Python.
- Strong knowledge of data analytics patterns and data processing techniques.
- Good communication and interpersonal skills to effectively communicate with cross-functional teams.
- Excellent problem-solving skills and the ability to work independently or as part of a team.
- Knowledge of data security and access control.
- Only Singaporeans and Singapore PRs may apply.
Data Engineer
Posted 3 days ago
Job Description
Join to apply for the Data Engineer role at Internal Security Department
What The Role Is
ISD confronts and addresses threats to Singapore’s internal security and stability. For over 70 years, ISD and its predecessor organisations have played a central role in countering threats such as those posed by foreign subversive elements, spies, racial and religious extremists, and terrorists. A fulfilling and rewarding career awaits those who want to join ISD’s critical mission of keeping Singapore safe, secure, and sovereign for all Singaporeans.
What You Will Be Working On
- Design, build, maintain, and optimize data pipeline architecture
- Collaborate with Data Scientists and Analysts to optimise extraction, transformation, and loading of data from various sources using SQL and Big Data technologies
- Evaluate Data Analytics technologies to recommend tools and frameworks for data processing and analysis
What We Are Looking For
- Experience with languages and frameworks such as Python, SQL, Scala, Spark, or PowerShell is preferred
- Applicants with no experience may apply
- Good interpersonal and communication skills
- Strong analytical skills and problem-solving ability
- Ability to work independently and in a team
- Only Singaporeans need apply
We wish to inform that only shortlisted candidates will be notified.
Data Engineer
Posted 4 days ago
Job Description
Overview
Aon sp. z o.o.
Data Engineer
Aon is in the business of better decisions. At Aon, we shape decisions for the better to protect and enrich the lives of people around the world. As an organization, we are united through trust as one inclusive, diverse team, and we are passionate about helping our colleagues and clients succeed. Specifically, within Commercial Risk Solutions we offer risk advisory, risk transfer and structured solutions that help organisations and individuals identify, quantify and manage their risk exposure through insurance.
What the day will look like
This role will provide the candidate an opportunity to deliver enterprise-scale data workloads following a data lakehouse architecture on a modern cloud-based data stack for our new Commercial Risk Solutions Ecosystem. These data workloads represent strategic feeds to support our BI/Analytics solutions that are essential to the value proposition of our ecosystem.
Responsibilities of this role include:
- Cross-functional Team: Work within a cross-functional Agile team alongside Product Owner, Data Architect and Data Analysts to deliver on release goals for a global program
- Data Workload Design: Design scalable and efficient data workload architectures that integrate our transactional systems with a new lakehouse data model.
- Data Integration: Develop, test, and maintain data pipelines (ETL) for integrating diverse data sources into a unified format, embedding best practices and standards.
- Data Management: Manage and optimize data to ensure efficient data storage, retrieval, and processing.
- Data Quality Management: Implement automated data quality checks and ensure data integrity throughout the migration process.
- Documentation: Create and maintain comprehensive documentation for data processes, ensuring knowledge transfer, observability and supportability.
- Performance Monitoring: Monitor and optimize data performance to meet defined service-level agreements.
- Troubleshooting: Identify and resolve data-related issues in a timely manner, collaborating with relevant teams.
How this opportunity is different
Aon has transitioned to a product-oriented model and is investing in capabilities like never before. This role will give the chosen candidate a unique opportunity to shape the product and data delivery model for our Commercial Risk Ecosystem to drive significant revenue and efficiency opportunities across all countries where Aon operates.
Skills and experience that will lead to success
- Advanced Technical Skills:
- In-depth knowledge of programming languages such as Python, including Spark/Scala.
- Experience with ETL tools and lakehouse architectures (3+ years) through Databricks, Apache Spark SQL, and similar.
- Strong SQL skills for data manipulation and querying.
- Pipeline efficiency optimization skills.
- Hands-on experience with Agile technical practices, source versioning, and Agile project management tools (Azure Repos, GitLab, Azure DevOps, Jira, Confluence, or others).
- Database Knowledge: Familiarity with relational and non-relational databases (Oracle, MS SQL Server).
- Agile planning skills – Kanban and Scrum release/sprint planning.
- Problem-Solving: Proven ability to solve complex data engineering challenges and optimize system performance.
- Good communication and interpersonal skills to collaborate effectively with team members and stakeholders
- College degree in a STEM-related discipline.
How we support our colleagues
In addition to our comprehensive benefits package, we encourage an inclusive workforce. Plus, our agile, inclusive environment allows you to manage your wellbeing and work/life balance, ensuring you can be your best self at Aon. Furthermore, all colleagues enjoy two “Global Wellbeing Days” each year, encouraging you to take time to focus on yourself. We offer a variety of working style solutions, but we also recognise that flexibility goes beyond just the place of work, and we are all for it. We call this Smart Working! Our continuous learning culture inspires and equips you to learn, share and grow, helping you achieve your fullest potential. As a result, at Aon, you are more connected, more relevant, and more valued. Aon values an innovative, diverse workplace where all colleagues feel empowered to be their authentic selves. Aon is proud to be an equal opportunity workplace. Aon provides equal employment opportunities to all employees and applicants for employment without regard to race, color, religion, creed, sex, sexual orientation, gender identity, national origin, age, disability, veteran, marital, domestic partner status, or other legally protected status. We welcome applications from all and provide individuals with disabilities with reasonable adjustments to participate in the job application and interview process and to perform essential job functions once onboard.
Please attach CV in English only.
#LI-HYBRID
Data Engineer
Posted 4 days ago
Job Description
Join to apply for the Data Engineer role at Bitdeer (NASDAQ: BTDR)
About Bitdeer:
Bitdeer Technologies Group (NASDAQ: BTDR) is a leader in the blockchain and high-performance computing industry. It is one of the world's largest holders of proprietary hash rate and suppliers of hash rate. Bitdeer is committed to providing comprehensive computing solutions for its customers.
The company was founded by Jihan Wu, an early advocate and pioneer in cryptocurrency who co-founded multiple leading companies serving the blockchain economy. Headquartered in Singapore, Bitdeer has deployed mining datacenters in the United States, Norway, and Bhutan. It offers specialized mining infrastructure, high-quality hash rate sharing products, and reliable hosting services to global users. The company also offers advanced cloud capabilities for customers with high demands for artificial intelligence. Dedication, authenticity, and trustworthiness are foundational to our mission of becoming the world's most reliable provider of full-spectrum blockchain and high-performance computing solutions. We welcome global talent to join us in shaping the future.
About The Job
We are seeking an experienced Data Engineer to join our Data Platform team with a focus on improving and optimizing our existing data infrastructure. The ideal candidate will have deep expertise in data management, cloud-based big data services, and real-time data processing, collaborating closely with cross-functional teams to enhance scalability, performance, and reliability.
Key Responsibilities
- Optimize and improve existing data pipelines and workflows to enhance performance, scalability, and reliability.
- Collaborate with the IT team to design and enhance cloud infrastructure, ensuring alignment with business and technical requirements.
- Demonstrate a deep understanding of data management principles to optimize data frameworks, ensuring efficient data storage, retrieval, and processing.
- Act as the service owner for cloud big data services (e.g., AWS EMR with Spark) and orchestration tools (e.g., Apache Airflow), driving operational excellence and reliability.
- Design, implement, and maintain robust data pipelines and workflows to support analytics, reporting, and machine learning use cases.
- Develop and optimize solutions for real-time data processing using technologies such as Apache Flink and Kafka.
- Monitor and troubleshoot data systems, identifying opportunities for automation and performance improvements.
- Stay updated on emerging data technologies and best practices to drive continuous improvement in data infrastructure.
Seniority level: Mid-Senior level
Employment type: Full-time
Job function: Information Technology
Industries: Software Development