270 Senior Data Engineer jobs in Singapore
Data Engineer

Posted 3 days ago
Job Description
As a Data Engineer, you will play a key role in designing, developing, and maintaining data pipelines and platforms to support business analytics, reporting, and data science initiatives. You will collaborate with cross-functional teams to deliver high-quality, reliable, and scalable data solutions.
**Key Responsibilities**
+ Prepare datasets and data pipelines to support business needs and troubleshoot data-related issues.
+ Collaborate with Data & Analytics Program Management and stakeholders to co-design the Enterprise Data Strategy and Common Data Model.
+ Implement and promote Data Platform solutions, transformative data processes, and services.
+ Develop and test data pipelines and structures for Data Scientists, ensuring fitness for use.
+ Maintain and model JSON-based schemas and metadata for organizational reuse.
+ Resolve and troubleshoot data-related issues and queries.
+ Cover all processes from enterprise reporting to data science, including ML Ops.
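The responsibility above about maintaining JSON-based schemas for organizational reuse typically means checking incoming records against an agreed shape before downstream teams consume them. A minimal sketch, in which the schema, field names, and types are hypothetical (not from the posting):

```python
import json

# Hypothetical minimal schema for a shared dataset; field names and
# types are illustrative only.
SCHEMA = {"customer_id": int, "country": str, "opted_in": bool}

def validate_record(record: dict, schema: dict = SCHEMA) -> list:
    """Return a list of human-readable problems; an empty list means valid."""
    problems = []
    for field, expected in schema.items():
        if field not in record:
            problems.append(f"missing field: {field}")
        elif not isinstance(record[field], expected):
            problems.append(f"{field}: expected {expected.__name__}")
    return problems

raw = json.loads('{"customer_id": 42, "country": "SG", "opted_in": true}')
print(validate_record(raw))                    # -> [] (valid record)
print(validate_record({"customer_id": "42"}))  # -> three problems reported
```

In practice a library such as jsonschema (with a formal JSON Schema document) would replace this hand-rolled check; the sketch only shows the shape of the task.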
**Required Experience**
+ Hands-on experience with Big Data technologies and open-source components (e.g., Hadoop, Hive, Spark, Presto, NiFi, MinIO, Kubernetes, Kafka).
+ Experience in stakeholder management within diverse business and technology environments.
+ Experience in banking or financial services, handling sensitive data across regions.
+ Experience in large-scale data migration projects with on-premises Data Lakes.
+ Experience integrating Data Science Workbench platforms (e.g., Knime, Cloudera, Dataiku).
+ Demonstrated track record in Agile project management and methodologies (e.g., Scrum, SAFe).
**Required Skills & Qualifications**
+ Knowledge of reference architectures for integrated, data-driven landscapes and solutions.
+ Advanced SQL skills, preferably in mixed environments (classic DWH and distributed).
+ Proficiency in Python for automation and troubleshooting, using Jupyter Notebooks or common IDEs.
+ Experience with data preparation for reporting/analytics and visualization tools (e.g., Tableau, Power BI, Python-based).
+ Ability to apply data quality frameworks within architectural solutions.
+ Excellent command of English; proficiency in other languages is a plus if relevant to the role.
+ Bachelor's degree or higher in Computer Science, Information Systems, or a related field.
Cognizant is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to sex, gender identity, sexual orientation, race, color, religion, national origin, disability, protected Veteran status, age, or any other characteristic protected by law.
Data Engineer
Posted 15 days ago
Job Description
About Us
Artefact is a next-generation, end-to-end data services company, focused on consulting and marketing, that helps organisations transform data into value and business impact.
Our broad range of data-driven solutions in data consulting and digital marketing are designed to meet our clients’ specific needs, always conceived with a business-centric approach and delivered with tangible results.
We have 900+ employees across 18 offices who are focused on accelerating digital transformation, thanks to a unique mix of company assets: state-of-the-art data technologies, lean, agile AI methodologies for fast delivery, and cohesive teams of the finest business consultants, data analysts, data scientists, data engineers, and digital experts, all dedicated to bringing extra value to every client.
About The Team
For the most technical people in our Data & Consulting division, the title of ‘Data engineer’ or ‘Software engineer’ does not describe everything our amazing women and men can do: data engineering, operations management, security, cloud architecture, MLOps, and more.
You will work with the team to identify your clients' needs and define innovative solutions that you will own from start to end, managing both their conception and implementation while also optimising performance and scalability.
You will also coach others, keep abreast of industry news and updates, and get stuck into training sessions with our business partners and suppliers, such as Ali Cloud, Azure, and Amazon. You will share your knowledge, learnings, and successes, presenting and communicating them effectively.
About The Role
Key Areas of Responsibility
- Securing delivery on your projects
- Ensuring that your solutions bring value to the client's problems
- Caring for the happiness of the team, ensuring work is delivered to a high standard and providing feedback and mentoring
- Demonstrating the skill and credibility required to ensure the success of our clients’ initiatives
- Researching and developing new technical approaches to address problems efficiently
Competences & Skills
- Bachelor's degree or above in Computer Science, Statistics, Software Engineering, or a related field
- Hands-on experience in developing and applying data-driven solutions with the right technical architecture
- Intellectual curiosity and excellent problem-solving skills, including the ability to structure and prioritise an approach for maximum impact
- Experience with Python, Linux, SQL, Scala or other popular programming languages
- Experience with Spark, Hadoop, Hive, MapReduce, Flink, and related big data technologies
- Experience with Docker, Kubernetes, CI/CD, Terraform, etc.
- Experience with cloud services (Ali Cloud, Azure, AWS)
- Experience with AI and machine learning model building and AI analytics product deployment.
- Experience with Java web development (e.g. the Java Spring framework) and frontend development is a plus.
- Must be a native Chinese (Mandarin) speaker and fluent in English
Our Belief
We believe data is changing the world, and it’s just the beginning. We want this to be done in the right way, with transparency and ethics. This is the only way to create sustainable impact for business and society.
Our Mission
We are on a mission to build the next generation of data leaders who:
- Fully capture the power of data & digital to deliver business value;
- Bring understanding, trust & transparency of data into our society.
Our Values
- Collaboration: People of different backgrounds and expertise working closely together;
- Trust & Transparency: Dealing with data topics with integrity and ethics; Realistic and honest with our capabilities and limitations;
- Innovation: Always working on the most trendy and new topics on data and digital; Always on top of the new ways of using data;
- Action: We would rather do than tell others what to do; we have a “building the plane while flying” agile mentality.
- We bring great value to business and create a better society with the understanding, transparency and ethical use of data;
- We build the next generation of data leaders;
- We disrupt the market:
- Data Native - Born with data and defining data;
- ART + SCIENCE - Mixture of talents in ART (Creative, Planning, Media, Consumer Engagement) and SCIENCE (Data Consulting, Data Science, Data Engineer);
- One P&L - Integrated & collaborative, with all chapters working toward the same goal;
- End to End - Capabilities from Strategize to Build to Run.
Data Engineer
Posted 15 days ago
Job Description
Join to apply for the Data Engineer role at Internal Security Department
What The Role Is
ISD confronts and addresses threats to Singapore’s internal security and stability. For over 70 years, ISD and its predecessor organisations have played a central role in countering threats such as those posed by foreign subversive elements, spies, racial and religious extremists, and terrorists. A fulfilling and rewarding career awaits those who want to join ISD’s critical mission of keeping Singapore safe, secure, and sovereign for all Singaporeans.
What You Will Be Working On
- Design, build, maintain, and optimize data pipeline architecture
- Collaborate with Data Scientists and Analysts to optimise extraction, transformation, and loading of data from various sources using SQL and Big Data technologies
- Evaluate Data Analytics technologies to recommend tools and frameworks for data processing and analysis
What We Are Looking For
- Experience with scripting languages and frameworks such as Python, SQL, Scala, Spark, or PowerShell is preferred
- Applicants with no experience may apply
- Good interpersonal and communication skills
- Strong analytical skills and problem-solving ability
- Ability to work independently and in a team
- Only Singaporeans need apply
We wish to inform you that only shortlisted candidates will be notified.
Data Engineer
Posted 15 days ago
Job Description
We are seeking a highly skilled Data Engineer with strong hands-on experience in big data technologies and distributed systems. The ideal candidate will have a strong foundation in Scala or Java, and must be experienced in building scalable data pipelines using tools like Apache Spark, HBase, and Hive.
Must-Have Skills
- Strong programming expertise in Scala or Java (must be hands-on)
- Hands-on experience with HBase, Hive, and SQL
- Solid understanding of distributed systems
- Comfortable working in Unix/Linux environments
- Strong data modeling and performance tuning skills
- Experience with Spring Framework and Microservices Architecture
Data Engineer
Posted today
Job Description
- Work across workstreams to support data requirements including reports and dashboards.
- Analyze and perform data profiling to understand data patterns and discrepancies following Data Quality and Data Management processes.
- Understand and follow best practices to design and develop the E2E Data Pipeline: data transformation, ingestion, processing, and surfacing of data for large-scale applications.
- Develop data pipeline automation using the Azure and AWS data platforms and technology stacks, including Databricks and Data Factory.
- Understand business requirements to translate them into technical requirements that the system analysts and other technical team members can drive into the project design and delivery.
- Analyze source data and perform data ingestion in both batch and real-time patterns via various methods; for example, file transfer, API, Data Streaming using Kafka and Spark Streaming.
- Analyze and understand data processing and standardization requirements, develop ETL using Spark processing to transform data.
- Understand data/reports and dashboards requirements, develop data export, data API, or data visualization using Power BI, Tableau, or other visualization tools.
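The batch versus real-time ingestion patterns in the responsibilities above share one core idea that can be sketched without Kafka or Spark: a consumer groups an unbounded event stream into fixed-size micro-batches before loading. The event shape and batch size below are illustrative assumptions:

```python
from typing import Iterable, Iterator, List

def micro_batches(events: Iterable[dict], batch_size: int = 3) -> Iterator[List[dict]]:
    """Group an (unbounded) event stream into fixed-size batches,
    flushing the final partial batch - the same shape a Kafka or
    Spark Streaming consumer delivers, minus the infrastructure."""
    batch: List[dict] = []
    for event in events:
        batch.append(event)
        if len(batch) == batch_size:
            yield batch
            batch = []
    if batch:  # flush the trailing partial batch
        yield batch

events = ({"id": i} for i in range(7))  # stand-in for a Kafka topic
for b in micro_batches(events):
    print([e["id"] for e in b])
# prints [0, 1, 2] then [3, 4, 5] then [6]
```

Real streaming frameworks add checkpointing, retries, and watermarking on top of this grouping step; the sketch only shows the batching boundary between "real-time" and "batch" modes.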
Requirements:
- Bachelor's degree in Computer Science, Computer Engineer, IT, or related fields.
- Minimum of 4 years' experience in Data Engineering fields.
- Data Engineering skills: Python, SQL, Spark, Cloud Architect, Data & Solution Architect, API, Databricks, Azure, AWS.
- Data Visualization skills: Power BI (or other visualization tools), DAX programming, API, data modeling, SQL, storytelling, and wireframe design.
- Business Analyst skills: business knowledge, data profiling, basic data model design, data analysis, requirement analysis, SQL programming.
- Basic knowledge in Data Lake/Data Warehousing/ Big data tools, Apache Spark, RDBMS and NoSQL, Knowledge Graph.
- Experience working in the Singapore public sector or a client-facing/consulting environment is a plus.
- Team player, analytical and problem-solving skills.
Data Engineer
Posted today
Job Description
Job Title: Data Engineer
Job Overview:
We are seeking an experienced Data Engineer to join our Data team in Singapore. The ideal candidate will have a proven track record of hands-on data engineering experience, particularly within AWS and Azure. As a Data Engineer, you will be responsible for developing and maintaining data pipelines, ensuring the reliability, efficiency, and scalability of our data lake, and enabling data marts for AI models.
Responsibilities:
Develop robust ETL pipelines and frameworks for both batch and real-time data processing using Python and SQL.
Deploy and monitor ETL pipelines using orchestration tools such as Airflow and dbt, or AWS services such as Glue Workflows, Step Functions, and EventBridge.
Work with cloud-based data platforms such as Redshift and Snowflake, data ingestion tools such as DMS, and ELT tools such as dbt Cloud for effective data processing.
Work with Azure Data Factory to build data pipelines.
Implement CI/CD for ETL pipelines to automate builds and deployments.
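What the orchestration tools named above (Airflow, Step Functions) fundamentally provide is running tasks in dependency order. A minimal sketch of that idea, with hypothetical task names and no scheduling, retries, or cycle detection:

```python
def run_dag(tasks: dict, deps: dict) -> list:
    """tasks: name -> callable; deps: name -> list of upstream names.
    Runs each task after its upstreams and returns the execution order
    (a simple depth-first topological run; assumes the graph is acyclic)."""
    done, order = set(), []

    def run(name):
        if name in done:
            return
        for upstream in deps.get(name, []):
            run(upstream)          # run dependencies first
        tasks[name]()
        done.add(name)
        order.append(name)

    for name in tasks:
        run(name)
    return order

# Illustrative extract -> transform -> load chain
tasks = {"load": lambda: None, "transform": lambda: None, "extract": lambda: None}
deps = {"transform": ["extract"], "load": ["transform"]}
print(run_dag(tasks, deps))  # ['extract', 'transform', 'load']
```

An Airflow DAG declares the same edges (e.g. `extract >> transform >> load`) and layers scheduling, retries, and monitoring on top; the sketch shows only the ordering contract.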
Qualifications:
Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
3+ years of hands-on data engineering experience in AWS.
Should have delivered at least two programs into production as a data engineer.
Primary Skills:
Proficient in Python, SQL, and data warehousing concepts.
Able to develop ETL frameworks.
Proficient in AWS services such as S3, DMS, Redshift, Glue, Kinesis, Athena, AWS Lambda, and Step Functions to implement scalable data solutions.
Proficient in Azure Data Factory.
Working experience in data warehousing using Snowflake, AWS, or Databricks.
Should understand data marts as the presentation layer for reporting.
Good-to-Have Skills:
ETL development using tools like Informatica, Talend, or Fivetran.
CI/CD setup using GitHub or Bitbucket.
Good communication skills.
Good knowledge of data lake and data warehousing concepts.
Data Engineer
Posted today
Job Description
We are looking for a skilled Data Engineer to design, build, and maintain robust data systems that power analytics and decision-making across our organization. The ideal candidate will work across multiple workstreams, supporting data requirements through reports, dashboards, and end-to-end data pipeline development. You will collaborate with business and technical teams to translate requirements into scalable, data-driven solutions.
Key Responsibilities
- Work collaboratively across teams to support data needs including reports, dashboards, and analytics.
- Conduct data profiling and analysis to identify patterns, discrepancies, and quality issues in alignment with Data Quality and Data Management standards.
- Design and develop end-to-end (E2E) data pipelines for data ingestion, transformation, processing, and surfacing in large-scale systems.
- Automate data pipeline processes using Azure, AWS, Databricks, and Data Factory technologies.
- Translate business requirements into detailed technical specifications for analysts and developers.
- Perform data ingestion in both batch and real-time modes using methods such as file transfer, API, and data streaming (Kafka, Spark Streaming).
- Develop ETL pipelines using Spark for data transformation and standardization.
- Deliver data outputs via APIs, data exports, or visualization dashboards using tools like Power BI or Tableau.
Requirements
- Bachelor's degree in Computer Science, Computer Engineering, Information Technology, or a related field.
- Minimum 4 years of experience in Data Engineering or related roles.
- Strong technical expertise in:
Python, SQL, Spark, Databricks, Azure, AWS
Cloud & Data Architecture, APIs, and ETL pipelines
- Proficiency in data visualization tools such as Power BI (preferred) or Tableau, including DAX programming, data modeling, and storytelling.
- Understanding of Data Lakes, Data Warehousing, Big Data frameworks, RDBMS, NoSQL, and Knowledge Graphs.
- Familiarity with business analysis, data profiling, data modeling, and requirement analysis.
- Experience working in Singapore public sector, consulting, or client-facing environments is advantageous.
- Excellent analytical, communication, and problem-solving skills with a collaborative mindset.
- Experience with real-time data streaming (Kafka, Spark Streaming).
- Understanding of data governance, data quality frameworks, and metadata management.
- Hands-on experience with automation and CI/CD for data workflows.
Data Engineer
Posted today
Job Description
Zenith Infotech (S) Pte Ltd. was started in 1997, primarily with the vision of offering state-of-the-art IT professionals and solutions to various organizations, thereby helping them increase their productivity and competitiveness. From the deployment of one person to the formation of whole IT teams, Zenith Infotech has helped clients with their staff augmentation needs. Zenith offers opportunities to engage in long-term projects with large IT-savvy companies, consulting organisations, system integrators, government, and MNCs.
EA Licence No: 20S0237
Roles and Responsibilities:
Work across workstreams to support data requirements including reports and dashboards
Analyze and perform data profiling to understand data patterns and discrepancies following Data Quality and Data Management processes
Understand and follow best practices to design and develop the E2E Data Pipeline: data transformation, ingestion, processing, and surfacing of data for large-scale applications
Develop data pipeline automation using the Azure and AWS data platforms and technology stacks, including Databricks and Data Factory
Understand business requirements to translate them into technical requirements that the system analysts and other technical team members can drive into the project design and delivery
Analyze source data and perform data ingestion in both batch and real-time patterns via various methods; for example, file transfer, API, Data Streaming using Kafka and Spark Streaming
Analyze and understand data processing and standardization requirements, develop ETL using Spark processing to transform data
Understand data/reports and dashboards requirements, develop data export, data API, or data visualization using Power BI, Tableau, or other visualization tools
Required Skills:
We are looking for experience and qualifications in the following:
- Bachelor's degree in Computer Science, Computer Engineer, IT, or related fields
- Minimum of 4 years' experience in Data Engineering fields
- Data Engineering skills: Python, SQL, Spark, Cloud Architect, Data & Solution Architect, API, Databricks, Azure, AWS
- Data Visualization skills: Power BI (or other visualization tools), DAX programming, API, data modeling, SQL, storytelling, and wireframe design
- Business Analyst skills: business knowledge, data profiling, basic data model design, data analysis, requirement analysis, SQL programming
- Basic knowledge in Data Lake/Data Warehousing/ Big data tools, Apache Spark, RDBMS and NoSQL, Knowledge Graph
Only shortlisted applicants will be contacted. By submitting your application, you acknowledge and agree that your personal data will be collected, used, and retained in accordance with our Privacy Policy. This information will be used solely for recruitment and employment purposes.
Data Engineer
Posted today
Job Description
Mandatory Skills
- Possess a degree in Computer Science/Information Technology or related fields.
- At least 3 years of experience in a role focusing on development and support of data ingestion pipelines.
- Experience with building on data platforms, e.g. Snowflake.
- Proficient in SQL and Python.
- Experience with Cloud environments (e.g. AWS).
- Experience with continuous integration and continuous deployment (CI/CD) using GitHub.
- Experience with Software Development Life Cycle (SDLC) methodology.
- Experience with data warehousing concepts.
- Strong problem-solving and troubleshooting skills.
- Strong communication and collaboration skills.
- Able to design and implement solutions and perform code reviews independently.
- Able to provide production support independently.
- Agile, fast learner and able to adapt to changes.
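The SQL-plus-Python combination asked for above often shows up in ingestion pipelines as SQL-defined data-quality gates executed from Python. A minimal sketch, using the stdlib sqlite3 as a stand-in for a warehouse such as Snowflake; the table, columns, and rules are hypothetical:

```python
import sqlite3

# In-memory database stands in for the warehouse; rows are made up.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (order_id INTEGER, amount REAL, country TEXT)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [(1, 120.0, "SG"), (2, -5.0, "SG"), (3, 80.0, None)],
)

# Each check counts rows that violate a rule; a non-zero count is a failure.
checks = {
    "no_negative_amounts": "SELECT COUNT(*) FROM orders WHERE amount < 0",
    "no_null_country": "SELECT COUNT(*) FROM orders WHERE country IS NULL",
}
failures = {name: n for name, sql in checks.items()
            if (n := conn.execute(sql).fetchone()[0]) > 0}
print(failures)  # {'no_negative_amounts': 1, 'no_null_country': 1}
```

In a real pipeline these checks would run against the warehouse after each load and fail the job (or quarantine rows) when `failures` is non-empty.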
Data Engineer
Posted today
Job Description
About the Role
We are seeking a skilled Data Engineer to join the Insights, Digitalization & Analytics department. The ideal candidate will design, develop, and maintain scalable data solutions to drive analytics, machine learning, and business insights. Responsibilities include building data pipelines, APIs, and dashboards, deploying Machine Learning projects, and leveraging big data technologies and cloud platforms.
Responsibilities
- Design, develop, and maintain ETL/ELT pipelines, API endpoints, and data applications across cloud and on-premise environments, integrating internal and external data sources, including web scraping of public data (ensuring compliance).
- Monitor, optimize, and maintain data quality, performance, and availability through data cleansing, transformation, and deployment of scalable solutions.
- Design, implement, and manage Azure cloud infrastructure to support scalable and secure deployment of cloud-native applications, including configuration of networking, storage, compute resources, and identity management. Ensure alignment with best practices for performance, cost optimization, and security compliance.
- Implement CI/CD pipelines, automate deployments, and ensure scalability and performance for data and Machine Learning (ML)/AI solutions.
- Create and optimize interactive dashboards to visualize business metrics, collaborating with stakeholders to design user-friendly interfaces and integrate them with backend data pipelines.
- Collaborate with Data Scientists to deploy, monitor, and maintain ML/AI projects in production systems.
Requirements
- Possess a bachelor's degree in Computer Science, Computer Engineering, or a related field (specialization in Software Engineering is a plus).
- 2–3 years of experience in data engineering or software engineering, with expertise in data warehousing, big data platforms, cloud technologies, and automation tools.
- Strong data analysis, data verification and problem-solving abilities.
- Analytical, meticulous, and a team player.
- Effective communication skills for collaboration across teams.
- Ability to manage multiple tasks in a dynamic environment.
- Self-motivated and possess initiative to learn new skills and technologies.
Technical Skills required:
- Proficiency in data warehouse design, including relational databases (MS SQL Server), NoSQL, and ETL pipelines built with Python or ETL tools (e.g., Microsoft SSIS, Informatica IPC), as well as data warehousing concepts, database optimization, and data governance.
- Familiarity with Python web application and API development tools (e.g., Flask, Requests) and web scraping tools (e.g., BeautifulSoup, Scrapy).
- Skilled in Power BI, including DAX and Power Query, for creating reports and dashboards.
- Experience with architecting and implementing Microsoft Azure services, including Azure Data Factory, Data Lake Storage, App Service, and Azure SQL, as well as CI/CD pipelines using Azure DevOps.
- Knowledge of Machine Learning tools (e.g., AutoML platforms like Azure AutoML or DataRobot) and ML libraries (e.g., Scikit-learn, TensorFlow, PyTorch, Keras).
- Familiarity with big data technologies (e.g., Hadoop, Hive, Spark) and Databricks platform.
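The web scraping tools listed above (BeautifulSoup, Scrapy) wrap the same core idea: walk an HTML document and collect the text of the elements you care about. A minimal sketch using only the stdlib parser; the HTML sample and class names are made up:

```python
from html.parser import HTMLParser

class PriceScraper(HTMLParser):
    """Collect the text of every element carrying class="price"."""
    def __init__(self):
        super().__init__()
        self._in_price = False
        self.prices = []

    def handle_starttag(self, tag, attrs):
        if dict(attrs).get("class") == "price":
            self._in_price = True

    def handle_data(self, data):
        if self._in_price:          # text inside a price element
            self.prices.append(data.strip())
            self._in_price = False

html = ('<ul><li class="price">$12.90</li>'
        '<li class="name">Kaya toast</li>'
        '<li class="price">$3.50</li></ul>')
scraper = PriceScraper()
scraper.feed(html)
print(scraper.prices)  # ['$12.90', '$3.50']
```

BeautifulSoup reduces this to a one-line `select(".price")` and handles malformed markup; either way, compliance with the site's terms and robots.txt (as the responsibilities note) comes before the parsing.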