1,779 Database Developer jobs in Singapore
SQL Developer
Posted 13 days ago
Job Description
Responsibilities
• Design, develop, and implement database solutions, including tables, views, stored procedures, and functions.
• Write complex SQL queries to retrieve, manipulate, and analyze data.
• Optimize database performance for speed and efficiency.
• Ensure data quality, accuracy, and consistency.
• Troubleshoot and resolve database issues.
• Collaborate with developers, analysts, and other stakeholders to understand data requirements.
• Create and maintain database documentation.
• Participate in database design and code reviews.
• Stay up-to-date with the latest SQL Server features and database technologies.
• Develop ETL processes to move data between databases.
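For a concrete sense of the work this posting describes — views, indexes, and parameterized queries — here is a minimal, illustrative Python sketch using the standard-library sqlite3 module as a stand-in for SQL Server, MySQL, or Oracle; the table, view, and column names are invented for the example.

```python
# Minimal sketch of typical SQL-developer tasks (views, indexes, parameterized
# queries), using sqlite3 as a stand-in for SQL Server/MySQL/Oracle.
# All table/column names are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Schema design: a base table plus a reporting view.
cur.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT, amount REAL, placed_at TEXT)")
cur.execute("""
    CREATE VIEW customer_totals AS
    SELECT customer, COUNT(*) AS order_count, SUM(amount) AS total_amount
    FROM orders GROUP BY customer
""")

# Performance tuning: index the column used in frequent lookups.
cur.execute("CREATE INDEX idx_orders_customer ON orders (customer)")

cur.executemany(
    "INSERT INTO orders (customer, amount, placed_at) VALUES (?, ?, ?)",
    [("acme", 120.0, "2024-01-05"), ("acme", 80.0, "2024-02-11"), ("globex", 45.5, "2024-02-12")],
)

# Parameterized query: safe retrieval, no string concatenation.
cur.execute("SELECT * FROM customer_totals WHERE customer = ?", ("acme",))
print(cur.fetchone())  # ('acme', 2, 200.0)
conn.close()
```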
Skills/Requirements
• Bachelor's degree in Computer Science, Information Systems, or a related field.
• 2+ years of experience as a SQL Developer.
• Strong proficiency in SQL (e.g., T-SQL, PL/SQL).
• Experience with database management systems such as SQL Server, MySQL, or Oracle.
• Understanding of database design principles and normalization.
• Experience with query optimization and performance tuning.
• Knowledge of data warehousing concepts.
• Excellent analytical and problem-solving skills.
• Strong communication and collaboration skills.
• Ability to work independently and as part of a team.
We regret to inform you that only shortlisted candidates will be contacted.
By sending us your personal data and curriculum vitae (CV), you are deemed to consent to EVO Outsourcing Solutions Pte Ltd and its affiliates to collect, use and disclose your personal data for the purposes set out in the Privacy Policy available at evo-sg.com/privacy-policy. You acknowledge that you have read, understood, and agree with the Privacy Policy.
Data Engineer
Posted 1 day ago
Job Description
Data Engineer role at Synpulse.
We are an established, globally active management consulting company with offices in Switzerland, Germany, Austria, the UK, the USA, Singapore, Hong Kong, the Philippines, Australia, Indonesia, and India. We are a valued partner to many of the world's largest international financial services and insurance firms. We support our clients at all project stages, from the development of strategies and operational frameworks to technical implementation and handover. Our expertise in business and technology, combined with our methodical approach, enables us to create sustainable added value for our clients' business.
Responsibilities
- Develop data ingestion processes using various programming languages, techniques, and tools, from systems implemented on Oracle, Teradata, SAP, and the Hadoop technology stack
- Evaluate and make decisions around dataset implementations designed and proposed by peer engineers
- Build large consumer database models for financial planning & analytics, including balance sheet, profit and loss, cost analytics, and related ratios
- Develop ETL, real-time, and batch data processes feeding into in-memory data infrastructure
- Perform and document data analysis, data validation, and data mapping/design
- Work with clients to solve business problems in fraud, compliance and financial crime and present project results
- Use emerging and open-source technologies such as Spark, Hadoop, and Scala
- Collaborate on scalability issues involving access to massive amounts of data and information
- You should be comfortable working with high-profile clients on their sites
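As an illustration of the ingestion pattern listed above, here is a hedged PySpark sketch that pulls a table from an Oracle source over JDBC and lands it as Parquet; the connection string, table name, and output path are placeholders, and the Oracle JDBC driver would need to be on the Spark classpath.

```python
# Hypothetical batch ingestion: Oracle table -> Parquet in a data lake.
# Connection details and names are placeholders for illustration.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("oracle_ingest").getOrCreate()

positions = (
    spark.read.format("jdbc")
    .option("url", "jdbc:oracle:thin:@//db-host:1521/ORCLPDB")  # placeholder
    .option("dbtable", "finance.positions")                     # placeholder
    .option("user", "etl_user")
    .option("password", "***")
    .option("driver", "oracle.jdbc.OracleDriver")
    .load()
)

# Land raw data partitioned by business date for downstream
# P&L / balance-sheet models.
positions.write.mode("overwrite").partitionBy("business_date").parquet(
    "s3://lake/raw/finance/positions/"  # placeholder path
)
```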
Requirements
- Bachelor's degree in Computer Science, Physics, Mathematics, or a similar field, or equivalent experience
- Experience with open-source big data tools such as Spark and Hadoop, and especially with Scala
- 2 to 6 years of experience working in the Financial Services sector on big data project implementations
- Demonstrate strong analytical and problem-solving skills and the ability to debug and solve technical challenges with sometimes unfamiliar technologies
- Client facing experience, good communication and presentation skills
- Strong technical communication skills with demonstrable experience of working in rapidly changing client environments
- Quantexa Certification preferred
Benefits
- Flexible working hours with part-time working models and hybrid options
- Attractive fringe benefits and salary structures in line with the market
- Modern and central office space with good public transport connections
- Can-do mentality and one-spirit culture
- Varied events and employee initiatives
Application documents
- Resume
- Job references
- Qualifications (bachelor/ master diploma, etc.) with certificate of grades
- Motivation letter: Why Synpulse? Why you? Why this function?
- Recommendation letters (optional)
Do you approach your tasks with commitment and enjoyment and are you convinced that teamwork achieves better results than working alone? Are you proactive and willing to go the extra mile for your clients? Are you motivated not only to design solutions but also to implement them? As a flexible and goal-oriented person, you will quickly assume entrepreneurial responsibility with us.
Do you appreciate the spirit of a growing international company with Swiss roots and a strong corporate culture? Then we look forward to receiving your online application at
Job details
- Seniority level: Mid-Senior level
- Employment type: Full-time
- Job function: Information Technology
- Industries: Business Consulting and Services
Data Engineer
Posted 1 day ago
Job Description
Data Engineer role at Pelago by Singapore Airlines. Pelago is a travel experiences platform created by Singapore Airlines Group. Think of us as a travel magazine you can book — highly curated, visually inspiring, with the trust and quality of Singapore Airlines. We connect you with global and local cultures and ideas so you can expand your life.
We are a team of diverse, passionate, empowered, inclusive, authentic and open individuals who share the same values and strive towards a common goal.
What can we offer you?
- A unique opportunity to take end-to-end ownership of your workstream to deliver real value to users.
- Platforms to solve real user problems concerning travel planning & booking with innovative products/services.
- An amazing peer group to work with, and the ability to learn from the similarly great minds around you.
- An opportunity to be an integral part of shaping the company’s growth and culture with a diverse, fun, and dynamic environment with teammates from different parts of the world.
- Competitive compensation and benefits - including work flexibility, insurance, remote working and more!
We’re looking for a motivated Data Engineer who can independently build and support both real-time and batch data pipelines. You’ll be responsible for enhancing our existing data infrastructure, providing clean data assets, and enabling ML/DS use cases.
Responsibilities
- Develop and maintain Kafka streaming pipelines and batch ETL workflows via AWS Glue (PySpark).
- Orchestrate, schedule, and monitor pipelines using Airflow.
- Build and update dbt transformation models and tests for Redshift.
- Design, optimize, and support data warehouse structures in Redshift.
- Leverage AWS ECS, Lambda, Python, and SQL for lightweight compute and integration tasks.
- Troubleshoot job failures, data inconsistencies, and apply hotfixes swiftly.
- Collaborate with ML/DS teams to deliver feature pipelines and data for modeling.
- Promote best practices in data design, governance, and architecture.
Tech stack
- Streaming & Batch: Kafka, AWS Glue (PySpark), Airflow
- Data Warehouse & Storage: Redshift, dbt, Python, SQL
- Cloud Services: AWS ECS, Lambda
- Others: Strong understanding of data principles, architectures, processing patterns
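To illustrate how this stack typically fits together, here is a minimal Airflow DAG sketch that triggers a Glue batch job and then runs dbt models and tests against Redshift; the job name, dbt target, and schedule are assumptions for the example, not Pelago's actual pipeline.

```python
# A minimal Airflow DAG sketch for the stack above: trigger a Glue batch job,
# then run dbt models and tests against Redshift. Names are placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="bookings_batch",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    run_glue_job = BashOperator(
        task_id="run_glue_job",
        # Placeholder Glue job name; a dedicated GlueJobOperator could be used instead.
        bash_command="aws glue start-job-run --job-name bookings_etl",
    )
    dbt_run = BashOperator(task_id="dbt_run", bash_command="dbt run --target redshift")
    dbt_test = BashOperator(task_id="dbt_test", bash_command="dbt test --target redshift")

    run_glue_job >> dbt_run >> dbt_test
```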
Requirements
- 3–5 years in data engineering or similar roles.
- Hands-on experience with Kafka, AWS Glue (PySpark), Redshift, Airflow, dbt, Python, and SQL.
- Strong foundation in data architecture, modeling, and engineering patterns.
- Proven ability to own end-to-end pipelines in both real-time and batch contexts.
- Skilled in debugging and resolving pipeline failures effectively.
- Production experience with AWS ECS and Lambda.
- Familiarity with ML/DS feature pipeline development.
- Understanding of data quality frameworks and observability in pipelines.
- AWS certifications (e.g., AWS Certified Data Analytics).
If you’re excited about this journey, please apply directly with a copy of your full resume. We’ll reach out to you as soon as we can.
Job details
- Seniority level: Mid-Senior level
- Employment type: Full-time
- Job function: Information Technology
- Industries: Travel Arrangements
Data Engineer
Posted 1 day ago
Job Description
Data Engineer role at StarHub.
As a Data Engineer, you’ll work with large-scale, heterogeneous datasets and hybrid cloud architectures to support analytics and AI solutions. You will collaborate with data scientists, infra engineers, sales specialists, and stakeholders to ensure data quality, build scalable pipelines, and optimize performance. Your work will integrate telco data with other verticals (retail, healthcare), automate DataOps/MLOps/LLMOps workflows, and deliver production-grade systems.
As a Data Engineer, you will:
- Ensure Data Quality & Consistency
  - Validate, clean, and standardize data (e.g., geolocation attributes) to maintain integrity.
  - Define and implement data quality metrics (completeness, uniqueness, accuracy) with automated checks and reporting (see the sketch after this list).
- Build & Maintain Data Pipelines
  - Develop ETL/ELT workflows (PySpark, Airflow) to ingest, transform, and load data into warehouses (S3, Postgres, Redshift, MongoDB).
  - Automate DataOps/MLOps/LLMOps pipelines with CI/CD (Airflow, GitLab CI/CD, Jenkins), including model training, deployment, and monitoring.
- Design Data Models & Schemas
  - Translate requirements into normalized/denormalized structures, star/snowflake schemas, or data vaults.
  - Optimize storage (tables, indexes, partitions, materialized views, columnar encodings) and tune queries (sort/distribution keys, vacuum).
- Integrate & Enrich Telco Data
  - Map 4G/5G infrastructure metadata to geospatial context, augment 5G metrics with legacy 4G, and create unified time-series datasets.
  - Consume analytics/ML endpoints and real-time streams (Kafka, Kinesis), designing aggregated-data APIs with proper versioning (Swagger/OpenAPI).
- Manage Cloud Infrastructure
  - Provision and configure resources (AWS S3, EMR, Redshift, RDS) using IaC (Terraform, CloudFormation), ensuring security (IAM, VPC, encryption).
  - Monitor performance (CloudWatch, Prometheus, Grafana), define SLAs for data freshness and system uptime, and automate backups/DR processes.
- Collaborate Cross-Functionally & Document
  - Clarify objectives with data owners, data scientists, and stakeholders; partner with infra and security teams to maintain compliance (PDPA, GDPR).
  - Document schemas, ETL procedures, and runbooks; enforce version control and mentor junior engineers on best practices.
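As a sketch of the automated data-quality checks described above (completeness, uniqueness, accuracy), here is an illustrative PySpark example; the dataset path, column names, and the 0.99 threshold are assumptions.

```python
# Sketch of automated data-quality checks (completeness, uniqueness, accuracy)
# of the kind described above. Paths and column names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dq_checks").getOrCreate()
cells = spark.read.parquet("s3://lake/telco/cell_sites/")  # placeholder path

total = cells.count()
metrics = {
    # Completeness: share of rows with a non-null geolocation attribute.
    "lat_completeness": cells.filter(F.col("latitude").isNotNull()).count() / total,
    # Uniqueness: distinct cell IDs versus row count.
    "cell_id_uniqueness": cells.select("cell_id").distinct().count() / total,
    # Accuracy: share of rows whose coordinates fall in a valid range.
    "coord_accuracy": cells.filter(
        F.col("latitude").between(-90, 90) & F.col("longitude").between(-180, 180)
    ).count() / total,
}

failures = {k: v for k, v in metrics.items() if v < 0.99}  # example threshold
if failures:
    raise ValueError(f"Data quality below threshold: {failures}")
```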
Requirements
- Degree in Computer Science, Software Engineering, Data Science, or equivalent experience
- 2+ years in data engineering, analytics, or related AI/ML role
- Proficient in Python for ETL/data engineering and Spark (PySpark) for large-scale pipelines
- Experience with Big Data frameworks and SQL engines (Spark SQL, Redshift, PostgreSQL) for data marts and analytics
- Hands-on with Airflow (or equivalent) to orchestrate ETL workflows and GitLab CI/CD or Jenkins for pipeline automation
- Familiar with relational (PostgreSQL, Redshift) and NoSQL (MongoDB) stores: data modeling, indexing, partitioning, and schema evolution
- Proven ability to implement scalable storage solutions: tables, indexes, partitions, materialized views, columnar encodings
- Skilled in query optimization: execution plans, sort/distribution keys, vacuum maintenance, and cost-optimization strategies (cluster resizing, Spectrum)
- Experience with cloud platforms (AWS): S3/EMR/Glue, Redshift and containerization (Docker, Kubernetes)
- Infrastructure as Code using Terraform or CloudFormation for provisioning and drift detection
- Knowledge of MLOps/LLMOps: auto-scaling ML systems, model registry management, and CI/CD for model deployment
- Strong problem-solving, attention to detail, and the ability to collaborate with cross-functional teams
Nice to Have
- Exposure to serverless architectures (AWS Lambda) for event-driven pipelines
- Familiarity with vector databases, data mesh, or lakehouse architectures
- Experience using BI/visualization tools (Tableau, QuickSight, Grafana) for data quality dashboards
- Hands-on with data quality frameworks (Deequ) or LLM-based data applications (NL->SQL generation)
- Participation in GenAI POCs (RAG pipelines, Agentic AI demos, geomobility analytics)
- Client-facing or stakeholder-management experience in data-driven/AI projects
Job details
- Seniority level: Entry level
- Employment type: Full-time
- Job function: Information Technology
- Industries: Telecommunications
Data Engineer
Posted 1 day ago
Job Description
We are looking for a highly motivated and skilled Data Engineer.
Responsibilities
- Build and maintain robust, scalable ETL pipelines across batch and real-time data sources.
- Design and implement data transformations using Spark (PySpark/Scala/Java) on Hadoop/Hive.
- Stream data from Kafka topics into data lakes or analytics layers using Spark Streaming.
- Collaborate with cross-functional teams on data modeling, ingestion strategies, and performance optimization.
- Implement and support CI/CD pipelines using Git, Jenkins, and container technologies like Docker/Kubernetes.
- Work within cloud and on-prem hybrid data platforms, contributing to automation, deployment, and monitoring of data workflows.
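The Kafka-to-data-lake pattern in the responsibilities above typically looks like the following Spark Structured Streaming sketch; the broker, topic, and paths are placeholders, and the spark-sql-kafka connector package must be available to the Spark job.

```python
# Illustrative only: stream a Kafka topic into a data lake with Spark
# Structured Streaming. Broker, topic, and paths are placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("kafka_to_lake").getOrCreate()

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # placeholder
    .option("subscribe", "events")                     # placeholder topic
    .option("startingOffsets", "latest")
    .load()
    # Kafka values arrive as bytes; cast to string for downstream parsing.
    .select(F.col("value").cast("string").alias("payload"), "timestamp")
)

query = (
    events.writeStream.format("parquet")
    .option("path", "hdfs:///lake/raw/events/")           # placeholder sink
    .option("checkpointLocation", "hdfs:///chk/events/")  # required for recovery
    .trigger(processingTime="1 minute")
    .start()
)
query.awaitTermination()
```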
Requirements
- Strong programming skills in Python, Scala, or Java.
- Hands-on experience with Apache Spark, Hadoop, Hive, Kafka, HBase, or related tools.
- Sound understanding of data warehousing, dimensional modeling, and SQL.
- Familiarity with Airflow, Git, Jenkins, and containerization tools (Docker/Kubernetes).
- Exposure to cloud platforms such as AWS or GCP is a plus.
- Experience with Agile delivery models and collaborative tools like Jira and Confluence.
- Experience with streaming data pipelines, machine learning workflows, or feature engineering.
- Familiarity with Terraform, Ansible, or other infrastructure-as-code tools.
- Exposure to Snowflake, Databricks, or modern data lakehouse architecture is a bonus.
Job details
- Seniority level: Mid-Senior level
- Employment type: Contract
- Job function: Information Technology
- Industries: IT Services and IT Consulting
Data Engineer
Posted 1 day ago
Job Description
Skilled Data Engineer with hands-on experience in designing and building scalable data pipelines using Azure Data Factory (ADF) and Azure Databricks. Proficient in developing ETL workflows, data transformations, and integrations across structured and unstructured data sources. Strong expertise in Apache Spark for distributed data processing and performance optimization. Adept at working with Cosmos DB, implementing data models and real-time data ingestion. Solid command of SQL for querying, data validation, and performance tuning in cloud-based environments.
About PureSoftware
PureSoftware, a wholly owned subsidiary of Happiest Minds Technologies, is a global software products and digital services company. PureSoftware has been driving transformation for the world’s top organizations across various industry verticals, including banking, financial services, and insurance; life sciences and healthcare; high tech and communications; retail and logistics; and gaming and entertainment. Arttha, from PureSoftware, is a globally trusted financial technology platform.
PureSoftware is Great Place to Work Certified in India for the third consecutive year.
You can visit our website at
Job details
- Seniority level: Mid-Senior level
- Employment type: Full-time
- Job function: Information Technology, Other, and Analyst
- Industries: Insurance, Banking, and Financial Services
Responsibilities
- Design and build scalable data pipelines using Azure Data Factory (ADF) and Azure Databricks.
- Develop ETL workflows, data transformations, and integrations across structured and unstructured data sources.
- Leverage Apache Spark for distributed data processing and performance optimization.
- Work with Cosmos DB, implementing data models and real-time data ingestion.
- Query, validate, and tune performance using SQL in cloud-based environments.
Requirements
- Hands-on experience designing and building scalable data pipelines using Azure Data Factory (ADF) and Azure Databricks.
- Proficiency in developing ETL workflows, data transformations, and data integrations across structured and unstructured data sources.
- Strong expertise in Apache Spark for distributed data processing and performance optimization.
- Adept at working with Cosmos DB, implementing data models and real-time data ingestion.
- Solid command of SQL for querying, data validation, and performance tuning in cloud-based environments.
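As a hedged illustration of the real-time ingestion work described above, here is a minimal example of upserting records into Cosmos DB with the azure-cosmos Python SDK; the endpoint, key, and the database, container, and field names are all placeholders.

```python
# Hypothetical example: upsert transformed records into Cosmos DB.
# Endpoint, credentials, and database/container names are placeholders.
from azure.cosmos import CosmosClient

client = CosmosClient(
    "https://my-account.documents.azure.com:443/",  # placeholder endpoint
    credential="<account-key>",                     # placeholder key
)
container = client.get_database_client("analytics").get_container_client("policies")

records = [
    {"id": "pol-001", "customer": "acme", "premium": 1200.0},
    {"id": "pol-002", "customer": "globex", "premium": 540.0},
]
for record in records:
    # upsert_item inserts the document, or replaces it if the id already exists.
    container.upsert_item(record)
```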
Data Engineer
Posted 1 day ago
Job Description
Infosys Consulting is a global management consulting firm helping some of the world’s most recognizable brands transform and innovate. Our consultants are industry experts who lead complex change agendas driven by disruptive technology. With offices in 20 countries and backed by the power of the global Infosys brand, our teams help the C-suite navigate today’s digital landscape to win market share and create shareholder value for lasting competitive advantage. To see our ideas in action, or to join a new type of consulting firm, visit us at
Title: Consultant (Data)
Summary: Infosys Consulting is looking for a highly skilled data engineer with 2-5 years of experience in data processing using ETL tools such as Informatica or languages such as Python. The ideal candidate should be a strong communicator with a solid background in data analytics patterns. The successful candidate will be responsible for designing and developing ML-based algorithms in Python and should have good knowledge of CI/CD and DevOps practices.
Key Responsibilities:
- Design and develop ETL workflows to move data from various sources into the organization's data warehouse.
- Develop and implement data quality controls and monitor data accuracy.
- Design and develop ML-based algorithms in Python to automate and optimize data processing.
- Work closely with cross-functional teams to ensure data solutions meet business requirements.
- Develop and maintain data processing and automation scripts using Python.
- Create data visualizations and provide insights to the data analytics team.
- Design and implement data security and access controls.
- Develop and maintain data pipelines and ETL workflows for various data sources.
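As an illustrative sketch combining two of the duties above — a small Python ETL step plus an ML-based data-quality control — here is an example using pandas and scikit-learn's IsolationForest to flag anomalous records before loading; the file paths and column names are invented.

```python
# Sketch: a small ETL step plus an ML-based quality check using scikit-learn's
# IsolationForest to flag outliers. Paths and column names are hypothetical.
import pandas as pd
from sklearn.ensemble import IsolationForest

# Extract and transform: load, fill gaps, derive a feature.
df = pd.read_csv("transactions.csv")               # placeholder source
df["amount"] = df["amount"].fillna(0.0)
df["hour"] = pd.to_datetime(df["timestamp"]).dt.hour

# ML-based quality control: flag outlying rows before loading to the warehouse.
model = IsolationForest(contamination=0.01, random_state=42)
df["anomaly"] = model.fit_predict(df[["amount", "hour"]])  # -1 marks outliers

clean = df[df["anomaly"] == 1].drop(columns="anomaly")
clean.to_parquet("staging/transactions_clean.parquet")     # placeholder target
```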
If you have a passion for data engineering and are looking for a challenging opportunity, we would love to hear from you. This is a great opportunity for someone who is looking to grow their skills and work with a dynamic team in a fast-paced environment.
We welcome applications from all members of society irrespective of age, sex, disability, sexual orientation, race, religion, or belief. We make recruiting decisions based on your experience, skills and personality. We believe that employing a diverse workforce is the right thing to do and is central to our success. We offer you great opportunities within a dynamically growing consultancy.
At Infosys Consulting you will discover a truly global culture, highly dedicated and motivated colleagues, a co-operative work environment and interesting training opportunities.
Minimum Requirements:
- Bachelor's degree in computer science, Information Systems, or related field.
- 2-5 years of experience in data processing and ETL using SQL, Informatica, AWS Glue, Airflow, Python, or similar tools.
- Experience with CI/CD and DevOps practices.
- Experience in designing and developing ML-based algorithms in Python.
- Strong knowledge of data analytics patterns and data processing techniques.
- Good communication and interpersonal skills to effectively communicate with cross-functional teams.
- Excellent problem-solving skills and the ability to work independently or as part of a team.
- Knowledge of data security and access control.
- Only Singaporeans and Singapore PRs may apply.
Data Engineer
Posted 2 days ago
Job Description
Bitdeer Technologies Group (Nasdaq: BTDR) is a leader in the blockchain and high-performance computing industry. It is one of the world's largest holders of proprietary hash rate and suppliers of hash rate. Bitdeer is committed to providing comprehensive computing solutions for its customers.
The company was founded by Jihan Wu, an early advocate and pioneer in cryptocurrency who co-founded multiple leading companies serving the blockchain economy. Headquartered in Singapore, Bitdeer has deployed mining datacenters in the United States, Norway, and Bhutan. It offers specialized mining infrastructure, high-quality hash rate sharing products, and reliable hosting services to global users. The company also offers advanced cloud capabilities for customers with high demands for artificial intelligence. Dedication, authenticity, and trustworthiness are foundational to our mission of becoming the world's most reliable provider of full-spectrum blockchain and high-performance computing solutions. We welcome global talent to join us in shaping the future.
We are seeking an experienced Data Engineer to join our Data Platform team with a focus on improving and optimizing our existing data infrastructure. The ideal candidate will have deep expertise in data management, cloud-based big data services, and real-time data processing, collaborating closely with cross-functional teams to enhance scalability, performance, and reliability.
Key Responsibilities
- Optimize and improve existing data pipelines and workflows to enhance performance, scalability, and reliability.
- Collaborate with the IT team to design and enhance cloud infrastructure, ensuring alignment with business and technical requirements.
- Demonstrate a deep understanding of data management principles to optimize data frameworks, ensuring efficient data storage, retrieval, and processing.
- Act as the service owner for cloud big data services (e.g., AWS EMR with Spark) and orchestration tools (e.g., Apache Airflow), driving operational excellence and reliability.
- Design, implement, and maintain robust data pipelines and workflows to support analytics, reporting, and machine learning use cases.
- Develop and optimize solutions for real-time data processing using technologies such as Apache Flink and Kafka.
- Monitor and troubleshoot data systems, identifying opportunities for automation and performance improvements.
- Stay updated on emerging data technologies and best practices to drive continuous improvement in data infrastructure.
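For illustration, the Flink-plus-Kafka pattern above might look like the following PyFlink Table API sketch; the topic, broker, schema, and window logic are assumptions for the example, and the Flink Kafka connector JAR must be on the classpath.

```python
# Illustrative PyFlink sketch: read a Kafka topic and aggregate it with Flink
# SQL. Broker, topic, and schema are placeholders.
from pyflink.table import EnvironmentSettings, TableEnvironment

t_env = TableEnvironment.create(EnvironmentSettings.in_streaming_mode())

t_env.execute_sql("""
    CREATE TABLE hashrate_events (
        miner_id STRING,
        hashrate DOUBLE,
        event_time TIMESTAMP(3),
        WATERMARK FOR event_time AS event_time - INTERVAL '5' SECOND
    ) WITH (
        'connector' = 'kafka',
        'topic' = 'hashrate',                        -- placeholder topic
        'properties.bootstrap.servers' = 'broker:9092',
        'format' = 'json',
        'scan.startup.mode' = 'latest-offset'
    )
""")

-- comment syntax note: the block below is Python again.
result = t_env.sql_query("""
    SELECT miner_id,
           TUMBLE_START(event_time, INTERVAL '1' MINUTE) AS window_start,
           AVG(hashrate) AS avg_hashrate
    FROM hashrate_events
    GROUP BY miner_id, TUMBLE(event_time, INTERVAL '1' MINUTE)
""")
result.execute().print()
```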
Data Engineer
Posted 2 days ago
Job Description
Anker Innovations is a technology company dedicated to creating industry-leading smart devices for entertainment, travel, and smart homes. At the forefront of AI innovation, we develop reliable, high-quality AI applications to enhance the quality of care and provide exceptional user experiences. We are seeking driven individuals who are passionate about technology to help build cutting-edge, consumer-facing solutions.
Key Responsibilities
- Assist in designing and developing data pipelines to support analytics for IoT and mobile products.
- Partner with cross-functional teams to deliver data modeling, extraction, and reporting solutions.
- Participate in the design and construction of data warehouses and big data platforms to optimize data workflows and system performance.
- Implement data governance best practices, including access control, data masking, and audit logging.
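As a minimal sketch of the data-masking practice mentioned above, here is an illustrative Python example that deterministically pseudonymizes identifiers before they reach analytics tables; the field names and salt handling are assumptions, not Anker's actual scheme.

```python
# Minimal sketch of data masking: deterministic pseudonymization of direct
# identifiers before events land in analytics tables. Names are illustrative.
import hashlib

SALT = "rotate-me"  # placeholder; store in a secret manager in practice

def mask(value: str) -> str:
    """Deterministically pseudonymize an identifier (same input -> same token)."""
    return hashlib.sha256((SALT + value).encode()).hexdigest()[:16]

event = {"user_email": "user@example.com", "device_id": "dev-42", "battery_pct": 87}
masked = {**event, "user_email": mask(event["user_email"]), "device_id": mask(event["device_id"])}
print(masked)  # identifiers replaced with stable tokens; metrics untouched
```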
Qualifications & Skills
- Currently pursuing a Bachelor’s or Master’s degree in Computer Science, Data Science, or a related field.
- Strong SQL skills.
- Experience with data lake/warehouse design and query optimization.
- Hands-on experience with big data tools such as Spark, Hive, Kafka, and ClickHouse is a plus.
- Strong communication skills and a collaborative, proactive attitude.
- Preferred: Previous experience or coursework related to data engineering, big data analytics, distributed systems, or cloud platforms.
- Preferred: Mandarin language skills for communication with counterparts in China.
Job details
- Seniority level: Entry level
- Employment type: Full-time
- Job function: Information Technology
- Industries: Computers and Electronics Manufacturing