2,108 Data Professionals jobs in Singapore
Data Engineer / Data Analytics
Posted today
Job Description
We are seeking a Reporting and Analytics Developer/Data Engineer (Associate Consultant level) to join the Financial Centre Development Department. The ideal candidate must have strong data modelling and data analysis skills, and familiarity with scripting for automating tasks. The successful candidate will be part of the team that drives data and digital transformation within the development function. Responsibilities include working with partners to streamline processes and implement automation to improve efficiency and quality of output.
Key Responsibilities
- Analyse the Authority's data needs and document the requirements.
- Refine data collection/consumption by migrating data collection to more efficient channels.
- Plan, design, and implement data engineering jobs and reporting solutions to meet the analytical needs.
- Develop test plan and scripts for system testing, support user acceptance testing.
- Work with partners to streamline processes and implement automation to improve efficiency and quality of output.
- Work with the Authority's technical teams to ensure smooth deployment and adoption of new solutions.
- Ensure smooth operations and the service levels of IT solutions.
- Support production issues.
What we are looking for
- Strong data modelling and data analysis skills are a must.
- Familiarity with scripting for automating tasks using Python (a minimal sketch follows this list).
- Hands-on experience with reporting or visualization tools such as SAP BO and Tableau is important.
- Good understanding of, and experience delivering, projects using waterfall/Agile methodologies.
- Good understanding of analytics and data warehouse implementations.
- Hands-on experience with DevOps deployment, ETL tools such as Informatica, and data virtualisation tools such as Denodo will be an added advantage.
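By way of illustration only (this is not part of the posting), a task-automation script of the kind described above might consolidate daily CSV extracts into one report. The folder and file names below are invented:

```python
# Hypothetical sketch: merge all CSV extracts in a folder into one report file.
import csv
from pathlib import Path

def consolidate_reports(source_dir: str, output_file: str) -> int:
    """Merge every *.csv in source_dir into output_file; return rows written."""
    rows_written = 0
    header_written = False
    with open(output_file, "w", newline="") as out:
        writer = csv.writer(out)
        for csv_path in sorted(Path(source_dir).glob("*.csv")):
            with open(csv_path, newline="") as src:
                reader = csv.reader(src)
                header = next(reader, None)  # each extract carries its own header
                if header and not header_written:
                    writer.writerow(header)
                    header_written = True
                for row in reader:
                    writer.writerow(row)
                    rows_written += 1
    return rows_written

if __name__ == "__main__":
    print(consolidate_reports("./extracts", "./consolidated.csv"))  # paths are placeholders
```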
Please refer to U3's Privacy Notice for Job Applicants/Seekers. When you apply, you voluntarily consent to the collection, use and disclosure of your personal data for recruitment/employment and related purposes.
Skills: Digital Transformation, Tableau, Data Analysis, Informatica, System Testing, Scripting, ETL, Service Level, Data Engineering, SAP, Python, User Acceptance Testing, Visualization, Virtualisation, Data Analytics
Data Engineer - Intelligence & Data
Posted today
Job Description
Overview
Data Engineer - Intelligence & Data at Shopee.
About The Team
The Marketplace Intelligence and Data team's mission is to build sustainable and efficient data and intelligence products to facilitate Shopee's business development. The team is responsible for Shopee's e-commerce data warehouse construction, merchant and operations data product construction, full-link traffic data, product algorithms (including product release, control, information optimization, and the SPU library and its comparison business), marketing algorithms (including merchandising, product selection, recommendation, evaluation, and user profiling algorithms), and basic AI capabilities such as machine translation, speech, image, and real-person authentication algorithms. The data team aims to build high-quality offline and real-time data warehouses for Shopee's e-commerce business, integrate diverse raw data, build consistent data models, and provide efficient data marts for different data application scenarios. We are a data solution provider delivering stable, reliable, and efficient data services for every data user in the company.
Job Description
Participate in development work related to the Marketplace data warehouse, including data collection into offline and real-time data stores, data public layer construction, data application layer construction, and data governance (a public-layer build is sketched after this section).
Support data applications for various business modules; communicate requirements with different teams and design data architecture to provide data users with efficient solutions, including BI analysis, data products, and algorithm applications.
Explore and advance Marketplace key data technologies; optimize and improve existing data architecture, enhance data quality and productivity, including mass data processing, real-time processing, and application of new technologies.
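As a rough, hypothetical illustration of the public-layer construction described above, the PySpark sketch below aggregates raw order events into a reusable daily table. The paths, columns, and filter values are assumptions, not Shopee's actual schema:

```python
# Hedged sketch: build one public-layer aggregate from a raw (ODS) order table.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dws_order_daily").getOrCreate()

raw_orders = spark.read.parquet("s3a://warehouse/ods/orders/")  # hypothetical ODS path

daily_gmv = (
    raw_orders
    .filter(F.col("order_status") == "COMPLETED")        # assumed status value
    .groupBy("grass_date", "shop_id")
    .agg(
        F.countDistinct("order_id").alias("order_cnt"),
        F.sum("order_amount").alias("gmv"),
    )
)

# A partitioned public-layer table that downstream data marts can reuse.
daily_gmv.write.mode("overwrite").partitionBy("grass_date").parquet(
    "s3a://warehouse/dws/order_daily/"                   # hypothetical DWS path
)
```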
Qualifications
Bachelor's degree or above in a computer-related field and 3+ years of working experience.
Familiarity with one or more big data processing technologies such as Spark, Flink, Hadoop, HBase, Kafka, Druid, ClickHouse, etc.
Proficiency in one or more programming languages, such as Java, Scala, Python, SQL, etc.
Familiarity with data warehouse architecture and principles, with relevant experience in big data architecture design, model design, and performance tuning.
Strong logical thinking and communication skills, strong project management and coordination abilities, and the ability to communicate in English.
Seniority level
Not Applicable
Employment type
Full-time
Job function
Information Technology
Industries
Software Development
Internet Marketplace Platforms
Technology, Information and Internet
Data Engineer
Posted today
Job Description
Responsibilities
• Design, develop, and maintain Big Data solutions for both structured and unstructured data environments.
• Work with traditional structured databases such as Teradata and Oracle, and perform SQL and PL/SQL development.
• Manage and process large datasets using the Hadoop ecosystem, including tools like Hive, Impala, HDFS, Spark, Scala, and HBase.
• Develop and implement modern data transformation methodologies using tools like dbt.
Skills/Requirement
• Bachelor's degree in Computer Science, Data Engineering, Information Systems, or a related field.
• Minimum 4 years' experience in data engineering, Big Data solutions, and analytics functions.
• Strong background in traditional structured database environments such as Teradata/Oracle, SQL, and PL/SQL.
• Fluent in the management of structured and unstructured data, as well as modern data transformation methodologies and tools like dbt.
• Hands-on experience working on real-time data and streaming applications using Kafka or similar tools (a consumer sketch follows this list).
• Hands-on experience in process automation: building procedures, ETL, and automated job scheduling using data integration tools such as Talend.
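For orientation, the streaming requirement above might look like the minimal Kafka consumer below (using the kafka-python client); the topic, broker, and field names are placeholders:

```python
# Hedged sketch: consume JSON order events from a hypothetical Kafka topic.
import json
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "order-events",                          # hypothetical topic name
    bootstrap_servers=["localhost:9092"],    # placeholder broker
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
    auto_offset_reset="earliest",
    group_id="data-eng-demo",
)

for message in consumer:
    event = message.value
    # A real pipeline would land these in Hive/HBase or a staging table.
    print(event.get("order_id"), event.get("amount"))
```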
Interested candidates, please click "APPLY" to begin your job search journey.
We regret to inform you that only shortlisted candidates will be notified.
By sending us your personal data and curriculum vitae (CV), you are deemed to consent to EVO Outsourcing Solutions Pte Ltd and its local and overseas subsidiaries and affiliates collecting, using and disclosing your personal data for the purposes set out in the Privacy Policy. Our full privacy policy is available at evo-sg.com/privacy-policy. If you wish to withdraw your consent, please drop us an email to let us know. Please feel free to contact us if you have any queries.
EVO Outsourcing
RCB No. K
Skills: Talend, Teradata, Scala, Oracle, Big Data, Pipelines, Oracle SQL, Outsourcing, Hadoop, Data Transformation, ETL, Data Integration, Data Engineering, SQL, Scheduling, Databases
Data Engineer
Posted today
Job Description
Job Description & Requirements
We are seeking an experienced Data Engineer with at least 5 years of professional experience in software engineering or platform engineering.
The ideal candidate will possess strong expertise in designing efficient and scalable applications, working with relational databases, and leveraging modern cloud and big data technologies.
This role requires a combination of technical proficiency, attention to detail, and excellent communication skills to collaborate with data analysts, business users, and vendors in delivering robust data solutions.
Responsibilities:
- Design and implement optimal data structures and algorithms to create efficient and scalable applications using Python.
- Develop, maintain, and enhance data pipelines, ensuring high levels of data quality and reliability.
- Integrate applications with relational databases (e.g., Snowflake, Oracle, MS-SQL) to support data processing and analytics.
- Collaborate with stakeholders, including data analysts, business users, and vendors, to design and develop solutions that meet business requirements.
- Employ best practices for code versioning, testing, Continuous Integration/Continuous Deployment (CI/CD), and code documentation.
- Apply knowledge of data quality tools for profiling, cleansing, and monitoring data pipelines.
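As a hedged sketch of the profiling and cleansing mentioned above, the snippet below applies two basic data-quality rules with pandas before a load step; the column names and rules are illustrative assumptions:

```python
# Hypothetical data-quality gate applied before loading to the warehouse.
import pandas as pd

def validate_frame(df: pd.DataFrame) -> pd.DataFrame:
    """Check for duplicate keys and null amounts; raise if either appears."""
    issues = {
        "duplicate_keys": int(df["trade_id"].duplicated().sum()),  # assumed key column
        "null_amounts": int(df["notional"].isna().sum()),          # assumed value column
    }
    if any(issues.values()):
        # In production this would feed a monitoring/alerting channel.
        raise ValueError(f"data quality checks failed: {issues}")
    return df.drop_duplicates(subset="trade_id")

clean = validate_frame(
    pd.DataFrame({"trade_id": [1, 2, 3], "notional": [100.0, 250.5, 80.0]})
)
print(len(clean), "rows passed validation")
```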
The Candidate:
- A degree in Computer Science, Information Technology, or a related field.
- 5+ years of experience as a software engineer.
- Solid experience with Python, relational databases, SQL, and object-oriented programming (OOP) principles.
- Strong analytical skills with a passion for solving complex problems through innovative solutions.
- Excellent interpersonal and communication skills to interact effectively with diverse stakeholders.
- A detail-oriented approach with a focus on operational excellence.
Preferred Qualifications
- Experience with Snowflake, Oracle, and MS-SQL
- Familiarity with cloud services such as AWS Glue, EKS, and S3; knowledge of Presto, Trino, AWS Athena, or similar tools.
- Experience in financial services; if none, at least an interest in the products.
We regret to inform you that only shortlisted candidates will be notified/contacted.
EA Registration No.: R , Yap Jia Yi
Allegis Group Singapore Pte Ltd, Company Reg No. N, EA License No. 10C4544
Skills: Excellent Communication Skills, Operational Excellence, Oracle, Big Data, Pipelines, Data Structures, Software Engineering, ETL, Information Technology, Data Quality, Reliability, SQL, Python, Cloud Services, Databases, Business Requirements
Data Engineer
Posted today
Job Description
Job Summary
We are seeking a skilled and experienced Data Engineer to work in data engineering, analytics, and data management at a well-established company.
Mandatory Skill-set
- Degree in Information Technology, Computer Engineering, and/or Computer Science;
- Minimum 2 years of experience in data engineering and data management;
- Hands on technical experience in SQL Server Integration Services (SSIS);
- Experience in data engineering and data visualization;
- Strong experience in gathering business requirements and translating them into technical specifications;
- Strong analytical skills with high curiosity to deep-dive into the root cause of problems and suggest solutions;
- Good stakeholder management experience;
- Attention to technical detail;
- Ability to deliver tasks on time with high-quality outcomes;
- Strong written and spoken communication skills.
Desired Skill-set
- Prior experience in Python, R or Machine Learning tools.
Responsibilities
- Designing, developing, and maintaining ETL pipelines using SSIS;
- Extracting data from various sources (databases, flat files, APIs, cloud sources);
- Transforming and cleaning data to ensure consistency and accuracy;
- Loading data into target systems such as data warehouses or reporting databases;
- Optimizing ETL performance and troubleshooting data flow issues;
- Supporting data migration and business intelligence/reporting needs (often with Power BI, SSRS, or Azure Data Services).
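SSIS packages are built in a visual designer rather than written as code, but the extract-transform-load flow the bullets describe can be sketched in plain Python. The sketch below uses only the standard library (csv and sqlite3) so it stays self-contained; file, column, and table names are invented:

```python
# Hedged illustration of the ETL pattern above, not an SSIS package.
import csv
import sqlite3

def etl(source_csv: str, db_path: str) -> None:
    # Extract: read rows from a flat-file source.
    with open(source_csv, newline="") as f:
        rows = list(csv.DictReader(f))

    # Transform: trim whitespace and drop rows missing a customer id.
    cleaned = [
        (r["customer_id"].strip(), float(r["amount"]))  # assumes a numeric amount column
        for r in rows
        if r.get("customer_id", "").strip()
    ]

    # Load: write the cleaned rows into a target reporting table.
    conn = sqlite3.connect(db_path)
    conn.execute("CREATE TABLE IF NOT EXISTS sales (customer_id TEXT, amount REAL)")
    conn.executemany("INSERT INTO sales VALUES (?, ?)", cleaned)
    conn.commit()
    conn.close()
```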
Should you be interested in this career opportunity, please send in your updated resume at the earliest.
When you apply, you voluntarily consent to the disclosure, collection and use of your personal data for employment/recruitment and related purposes in accordance with the SCIENTE Group Privacy Policy, a copy of which is published on SCIENTE's website.
Confidentiality is assured, and only shortlisted candidates will be notified for interviews.
EA Licence No. 07C5639
Skills: SSIS reports, ETL Tools, Data Management, System Integration, ETL, Data Engineering, SQL, Data Migration, SQL Server, Python, SSIS, Power BI, Databases, Data Visualization, Business Requirements, SSRS
Data Engineer
Posted today
Job Description
We are seeking a Data Engineer with experience or interest in IoT technologies and cloud-based data engineering to join our team. This role blends the management of high-volume data flows and IoT-specific analytics with general data engineering practices to deliver robust and scalable data solutions. The ideal candidate will balance technical skills and domain knowledge, enabling data-driven insights for utility operations, customer services, and infrastructure optimisation initiatives.
Key Responsibilities:
- Design, develop, and maintain scalable and efficient data pipelines and ETL processes for IoT and enterprise data systems.
- Implement data ingestion workflows from IoT devices and integrate with enterprise platforms using Azure Data Factory or similar tools.
- Ensure data quality through validation, cleansing, and monitoring processes to address issues such as missing data, duplicates, and inconsistencies (a validation sketch follows this list).
- Define data attributes and formats for IoT device and network data to support seamless integration with existing systems and standards.
- Optimise data storage solutions in Azure Data Lake Storage (or equivalent) for structured and unstructured data.
- Develop APIs and data interfaces for real-time or near-real-time data transfer between IoT components and enterprise platforms.
- Apply advanced analytics techniques to IoT data for performance monitoring, usage profiling, and network management.
- Leverage BI tools (e.g., Power BI) to enable business intelligence and operational insights.
- Implement robust data security and privacy measures, ensuring compliance with relevant regulations.
- Collaborate with cross-functional teams to gather requirements and deliver high-quality, documented solutions.
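As a minimal sketch of the validation step referenced above, the snippet below deduplicates IoT meter readings and flags gaps in a 15-minute series using pandas; the device IDs, column names, and interval are assumptions:

```python
# Hypothetical cleansing step for IoT readings: dedupe and detect gaps.
import pandas as pd

def clean_readings(df: pd.DataFrame) -> pd.DataFrame:
    """Drop duplicate (device, timestamp) rows and report missing intervals."""
    df = df.drop_duplicates(subset=["device_id", "ts"]).sort_values("ts")
    expected = pd.date_range(df["ts"].min(), df["ts"].max(), freq="15min")
    missing = expected.difference(df["ts"])
    if len(missing) > 0:
        print(f"{len(missing)} missing 15-minute interval(s) detected")
    return df

readings = pd.DataFrame({
    "device_id": ["m1", "m1", "m1"],
    "ts": pd.to_datetime(["2024-01-01 00:00", "2024-01-01 00:00", "2024-01-01 00:30"]),
    "kwh": [1.2, 1.2, 1.4],  # duplicate first row is deliberate
})
print(clean_readings(readings))
```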
Required Skills & Qualifications:
- Bachelor's degree in Computer Science, Information Technology, or a related field. Advanced degrees or certifications are advantageous.
- Academic or industrial experience in data engineering, including SQL Databases and cloud platforms (Azure, AWS, or GCP).
- Experience or interest in IoT technologies and data systems.
- Proficiency in data ingestion tools such as Azure Data Factory and ETL processes.
- Strong programming skills in Python, SQL, and one or more of Java or Scala.
- Familiarity with big data technologies (e.g., Hadoop, Kafka) and enterprise service buses (ESB).
- Knowledge of data management systems and integration with enterprise applications.
- Understanding of operational data flows, business cycles, and regulatory requirements.
- This position is open to fresh graduates.
Preferred Skills & Qualifications:
- Familiarity with additional Azure ecosystem tools (e.g., Synapse Analytics, Event Hub, Stream Analytics).
- Experience with APIs, data interfaces, and integrating IoT systems with enterprise data platforms.
- Relevant certifications in Microsoft Azure or other recognised credentials.
- Knowledge of DevOps practices and CI/CD pipelines (e.g., Azure DevOps).
- Familiarity with containerisation technologies such as Docker.
- Background in implementing Change Data Capture (CDC) designs and scalable data architectures.
Personal Attributes:
- Excellent communication and collaboration skills.
- Strong analytical and problem-solving mindset.
- Proactive approach to continuous professional development.
- Ability to work effectively in dynamic, fast-paced environments.
This position offers an opportunity to work at the intersection of IoT data systems and modern cloud engineering, supporting innovation and operational excellence across industries.
Skills: Water, Azure, Data Modeling, Software, Information Technology, NoSQL, Data Engineering, SQL, Agile Software Development, IT Systems
Data Engineer
Posted today
Job Description
Position Summary:
Data Engineer for data migration projects, primarily utilizing AWS, IDMC, Databricks, and Tableau.
Responsibilities:
- Develop ETL (Extract, Transform, Load) processes to cleanse, transform, and enrich data, making it suitable for analytical purposes using Databricks' Spark capabilities and Informatica Power Center/IDMC for data transformation and quality.
- Monitor and optimize data processing and query performance in both AWS and Databricks environments, making necessary adjustments to meet performance and scalability requirements. Utilize Informatica Power Center/IDMC for optimizing data workflows.
- Implement security best practices and data encryption methods to protect sensitive data in both AWS and Databricks, while ensuring compliance with data privacy regulations. Employ Informatica IDMC for data governance and compliance.
- Implement automation for routine tasks, such as data ingestion, transformation, and monitoring, using AWS services like AWS Step Functions, AWS Lambda, and Databricks Jobs, with Informatica IDMC for workflow automation (a Step Functions trigger is sketched after this list).
- Maintain clear and comprehensive documentation of data infrastructure, pipelines, and configurations in both AWS and Databricks environments, with metadata management facilitated by Informatica IDMC.
- Identify and resolve data-related issues and provide support to ensure data availability and integrity across the AWS, Databricks, and Informatica Power Center/IDMC environments.
- Create, manage, and optimize data visualization processes using Tableau, OAS, or Power BI.
- Stay up to date with AWS, Databricks, and Informatica Power Center/IDMC services and with data engineering best practices, to recommend and implement new technologies and techniques.
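As a hedged example of the workflow automation mentioned above, the snippet below starts a hypothetical AWS Step Functions state machine with boto3; the ARN, execution name, and payload are placeholders:

```python
# Hypothetical trigger for a nightly ingestion workflow on Step Functions.
import json
import boto3

sfn = boto3.client("stepfunctions")

response = sfn.start_execution(
    stateMachineArn="arn:aws:states:ap-southeast-1:123456789012:stateMachine:nightly-ingest",
    name="nightly-ingest-2024-01-01",  # execution names must be unique per state machine
    input=json.dumps({"run_date": "2024-01-01", "source": "sales_extract"}),
)
print(response["executionArn"])
```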
Requirements:
- Bachelor's or master's degree in computer science, data engineering, or a related field.
- Minimum 4 years of experience in data engineering, with expertise in AWS services, Databricks, and/or Informatica Power Center/IDMC.
- Proficiency in programming languages such as Python for building data pipelines.
- Ability to evaluate potential technical solutions and make recommendations to resolve data issues, especially performance assessment for complex data transformations and long-running data processes.
- Strong knowledge of SQL databases.
- Familiarity with data modeling and schema design.
- AWS certifications (e.g., AWS Certified Data Analytics - Specialty), Databricks certifications, and Informatica certifications are a plus.
Skills: Tableau, Scalability, Data Modeling, Pipelines, AWS, Informatica, Data Transformation, ETL, Databricks, Data Governance, Data Engineering, SQL, Python, Metadata, Data Analytics, Power BI, Databases, Data Visualization
Data Engineer
Posted today
Job Description
Data Engineer
We are looking for a Data Engineer to design, build and maintain reliable data systems that support the collection, processing, storage and analysis of information in a secure and scalable way. You will play a key role in ensuring data is accessible, accurate, and ready for use by stakeholders and data scientists.
What You Will Do
- Work with stakeholders (customers, partners, colleagues) to resolve data-related issues and support infrastructure needs
- Collaborate with data scientists to understand requirements and support data models
- Design, develop and maintain data pipelines, ETL processes, and data warehouses for both structured and unstructured data
- Ensure data flows smoothly across different sources in real-time, near-real-time, or batch modes
- Define and monitor SLAs for data pipelines and data products (a minimal freshness check is sketched after this list)
- Implement solutions that meet data security and governance standards
- Drive data quality assurance and best practices
- Provide support for pre-sales (proposals) and post-sales (implementation) when needed
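A minimal sketch of the SLA monitoring duty referenced above: the function below checks whether a pipeline output's latest load falls within an agreed freshness window; the threshold and the source of the timestamp are assumptions:

```python
# Hypothetical freshness check; last_loaded_at would come from pipeline metadata.
from datetime import datetime, timedelta, timezone

def check_freshness(last_loaded_at: datetime, sla_hours: int = 4) -> bool:
    """Return True if the latest load is within the agreed SLA window."""
    age = datetime.now(timezone.utc) - last_loaded_at
    if age > timedelta(hours=sla_hours):
        print(f"SLA breach: data is {age} old (limit {sla_hours}h)")
        return False
    return True

print(check_freshness(datetime.now(timezone.utc) - timedelta(hours=2)))  # True
```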
What You Should Have
- Strong technical knowledge in data modelling, data pipelines, OLAP, data ingestion & integration, and query optimisation
- Proficiency in programming languages such as Java, C/C++, Python, Scala, and SQL
- Hands-on experience with big data technologies (e.g., Hadoop, Spark, Hive, HBase)
- Experience in data governance, master data management, and lifecycle management
- Knowledge of software engineering best practices (programming, testing, version control)
- Understanding of data privacy and security standards
Your recruiter for this job:
WhatsApp Chally @ for a quicker response
Connect with me on LinkedIn: Chally
Talentsis Pte Ltd | EA No: 20C0312
Skills: Microsoft Azure, Talend, Factory, Azure, Big Data, Pipelines, Hadoop, Data Management, Software Engineering, Azure Data Factory, Scripting, ETL, Data Integration, Data Engineering, Python, Cloud Services, Ansible, Databases, Linux, Able To Work Independently
Data Engineer
Posted today
Job Description
Zenith Infotech (S) Pte Ltd was started in 1997, primarily with the vision of offering state-of-the-art IT professionals and solutions to various organizations, thereby helping them increase their productivity and competitiveness. From deployment of one person to formation of whole IT teams, Zenith Infotech has helped clients with their staff augmentation needs. Zenith offers opportunities to be engaged in long-term projects with large IT-savvy companies, consulting organizations, system integrators, government, and MNCs.
EA Licence No: 20S0237
Roles and Responsibilities:
- Work across workstreams to support data requirements, including reports and dashboards
- Analyze and perform data profiling to understand data patterns and discrepancies, following Data Quality and Data Management processes
- Understand and follow best practices to design and develop the end-to-end (E2E) data pipeline: data transformation, ingestion, processing, and surfacing of data for large-scale applications
- Develop data pipeline automation using the Azure and AWS data platforms and technology stacks, Databricks, and Data Factory
- Understand business requirements and translate them into technical requirements that system analysts and other technical team members can drive into the project design and delivery
- Analyze source data and perform data ingestion in both batch and real-time patterns via various methods, for example file transfer, API, or data streaming using Kafka and Spark Streaming (a streaming-ingest sketch follows this list)
- Analyze and understand data processing and standardization requirements, and develop ETL using Spark processing to transform data
- Understand data/report and dashboard requirements, and develop data exports, data APIs, or data visualizations using Power BI, Tableau, or other visualization tools
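As a rough sketch of the Kafka and Spark Streaming pattern referenced above, the snippet below reads a Kafka topic with Spark Structured Streaming and lands raw payloads in a bronze zone; the broker, topic, and paths are invented, and the spark-sql-kafka connector package is assumed to be available:

```python
# Hedged sketch: stream sensor events from Kafka into a parquet landing zone.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("kafka_ingest").getOrCreate()

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")   # placeholder broker
    .option("subscribe", "sensor-events")               # hypothetical topic
    .load()
    .select(F.col("value").cast("string").alias("payload"))
)

query = (
    events.writeStream.format("parquet")
    .option("path", "/lake/bronze/sensor_events")        # hypothetical landing path
    .option("checkpointLocation", "/lake/_chk/sensor_events")
    .start()
)
query.awaitTermination()
```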
Required Skills:
We are looking for experience and qualifications in the following:
- Bachelor's degree in Computer Science, Computer Engineering, IT, or related fields
- Minimum of 4 years' experience in data engineering
- Data engineering skills: Python, SQL, Spark, cloud architecture, data & solution architecture, API, Databricks, Azure, AWS
- Data visualization skills: Power BI (or other visualization tools), DAX programming, API, data models, SQL, storytelling, and wireframe design
- Business analyst skills: business knowledge, data profiling, basic data model design, data analysis, requirement analysis, SQL programming
- Basic knowledge of data lake/data warehousing/big data tools, Apache Spark, RDBMS and NoSQL, and knowledge graphs
Only shortlisted applicants will be contacted. By submitting your application, you acknowledge and agree that your personal data will be collected, used, and retained in accordance with our Privacy Policy. This information will be used solely for recruitment and employment purposes.
Skills: Tableau, Apache Spark, Data Analysis, Azure, Big Data, Data Transformation, Data Management, ETL, Data Quality, Data Engineering, SQL, Python, Visualization, API, Power BI, Data Visualization
Data Engineer
Posted today
Job Description
Job Purpose
The Data Engineer will be part of the team to develop operation & maintenance decision-support tools to enhance train reliability and maintenance efficiency. This position involves designing, developing, and maintaining data pipelines, APIs, and cloud infrastructure for various rail-oriented applications. The ideal candidate will have expertise in data analysis, transformation, ingestion, database design, API development, and preferably, cloud infrastructure setup. Collaborating closely with software engineers, data scientists, and frontend developers, the Data Engineer will contribute to building efficient, scalable, and reliable systems.
Responsibilities
The duties and responsibilities of the Data Engineer are listed below. The list is not exhaustive, and related duties and responsibilities may be assigned from time to time.
Data Engineering & Processing:
- Develop and maintain data pipelines for efficient data ingestion and transformation.
- Work with structured and unstructured data to ensure optimal storage and retrieval.
- Perform data analysis and report on results.
Database Design & Management:
- Design and implement relational and NoSQL database schemas for scalability.
- Optimize database performance through indexing, partitioning, and query tuning.
- Implement data security and compliance best practices.
API Development & Backend Engineering:
- Design and develop APIs for data access and application integration.
- Implement authentication, authorization, and API security best practices (a minimal endpoint is sketched after this section).
Cloud Infrastructure & Deployment (Supporting Role):
- Assist in designing Azure cloud architectures.
- Work with the IT infrastructure team to set up cloud infrastructure for application hosting, data storage, and processing.
Collaboration & Best Practices:
- Collaborate with internal stakeholders to understand their business needs.
- Work with software engineers, data scientists, and frontend developers to understand data requirements and design the architecture of the data platform.
- Implement CI/CD pipelines for automated testing, deployment and monitoring.
- Write testable and maintainable code and documentation to deploy to production.
- Engage continuously with end-user for feedback and improvements.
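As a minimal, hypothetical sketch of the API development and authentication duties above (not the employer's actual stack), the Flask endpoint below serves a data-access route behind a simple API-key check; the route, key scheme, and payload are invented:

```python
# Hypothetical data-access endpoint with a basic API-key check.
from flask import Flask, jsonify, request, abort

app = Flask(__name__)
API_KEYS = {"demo-key"}  # in production, use a proper identity provider

@app.get("/api/trains/<train_id>/health")
def train_health(train_id: str):
    if request.headers.get("X-API-Key") not in API_KEYS:
        abort(401)  # reject unauthenticated callers
    # Placeholder payload; a real handler would query the maintenance database.
    return jsonify({"train_id": train_id, "status": "ok", "open_faults": 0})

if __name__ == "__main__":
    app.run(port=8000)
```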
Requirements:
- Degree in Science, Technology, Engineering or Mathematics (STEM)
- Previous experience as a data engineer or in a similar role
- Data engineering certification is a plus
- Knowledge of security best practices in cloud and database management is a plus
Technical skills include:
- Programming and Data processing: MATLAB, Python, SQL, or similar languages.
- Databases: MySQL, SQL Server, MongoDB, or similar.
- Cloud Platforms: Azure
- DevOps & CI/CD: GitLab CI/CD, Docker
Generic skills include:
- Strong inclination and eager for continual learning and development
- Strong team player
- Critical thinking and problem-solving skills
- Ability to understand and explain complex data and effective interactions with the stakeholders
- Ability to think independently and actively propose solutions to the team.
Skills: Git, MongoDB, Scalability, Data Analysis, Azure, Pipelines, Mathematics, Reliability, Data Engineering, SQL, SQL Server, Python, Database Design, Docker, API, Databases