503 Data Engineer jobs in Singapore
Junior Data Engineer/Architecture
Posted today
Job Description
Key Responsibilities
- Build and maintain data pipelines for ingesting futures market data from multiple sources (exchange feeds, vendors, internal systems).
- Design and improve data models and structures to store historical and intraday futures prices across multiple asset classes.
- Implement logic to roll futures contracts based on configurable rules.
- Develop tools to stitch together contract-level data into continuous time series, handling edge cases (e.g. holidays, partial rolls, gaps).
- Ensure data quality and integrity via validation checks, exception handling, and alerting.
- Work with Quant Researchers to deliver clean, backtest-ready datasets that match trading and strategy assumptions.
- Document technical processes and support other teams with data usage and integration.
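The roll-and-stitch responsibilities above can be sketched in a few lines of pandas. This is a minimal back-adjustment sketch, not the firm's actual roll logic; the contract symbols (`H24`, `M24`), prices, and roll schedule are illustrative:

```python
import pandas as pd

def stitch_continuous(contracts, rolls):
    """Stitch contract-level closes into a back-adjusted continuous series.

    contracts: dict of symbol -> pd.Series of daily closes (DatetimeIndex)
    rolls:     chronologically ordered (roll_date, from_symbol, to_symbol)
    At each roll, all prior history is shifted by the close-to-close gap
    between the incoming and outgoing contract, preserving returns.
    """
    first_from = rolls[0][1]
    series = contracts[first_from].loc[:rolls[0][0]]  # history up to first roll
    for i, (roll_date, from_sym, to_sym) in enumerate(rolls):
        gap = contracts[to_sym].loc[roll_date] - contracts[from_sym].loc[roll_date]
        series = series + gap  # back-adjust everything before this roll
        # Append the incoming contract strictly after the roll date,
        # stopping at the next roll date if there is one.
        end = rolls[i + 1][0] if i + 1 < len(rolls) else None
        series = pd.concat([series, contracts[to_sym].loc[roll_date:end].iloc[1:]])
    return series

# Two illustrative contracts overlapping around the roll date
h24 = pd.Series([100.0, 101.0, 102.0, 103.0, 104.0],
                index=pd.date_range("2024-01-01", periods=5, freq="D"))
m24 = pd.Series([105.0, 106.0, 107.0, 108.0, 109.0],
                index=pd.date_range("2024-01-03", periods=5, freq="D"))
cont = stitch_continuous({"H24": h24, "M24": m24},
                         [(pd.Timestamp("2024-01-05"), "H24", "M24")])
```

A production version would layer the edge cases the posting mentions (holidays, partial rolls, gaps) on top of this skeleton.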
Requirements
- 1–3 years of experience in a data engineering, quant research, or backoffice data role. Fresh graduates with relevant experience are welcome.
- Solid programming skills in Python (e.g., pandas, numpy, datetime)
- Good understanding of futures markets and contract structures (e.g., expiries, rollovers, front-month logic).
- Familiarity with data versioning, time series databases, or data lakes.
- Experience working with APIs, flat files, or vendor feeds (e.g., Bloomberg, ICE, CME, Refinitiv).
- Strong attention to detail and ability to work with messy or inconsistent datasets.
Data Engineer
Posted 21 days ago
Job Description
- Contributing to the design and development of the Tower data platform
- Designing and implementing data quality checks
- Monitoring data processes and data health, participating in on-call rotation, proactively solving data related problems
- Supporting internal data users (Traders, Researchers)
- Master's in Computer Science, Quantitative Finance or Information Technology, or a related field, or equivalent work experience
- 2+ years of experience in Python
- Familiarity with ETL processes, data modeling, and database design principles
- Familiarity with database systems such as SQL Server, PostgreSQL, or MySQL
- Experience with CI/CD processes and DevOps
- Excellent problem-solving and troubleshooting skills
- Strong communication and collaboration skills
- Ability to work autonomously as part of a global team
- Proficient in AI/ML techniques for detecting patterns and anomalies
- Knowledge of referential data and corporate actions
- Experience in the financial/trading sector
Tower continues to enhance the in-house trading system and strategies that have positioned the firm as a leader in the thriving field of quantitative trading. While Tower offers challenges and rewards rivaling those of any Wall Street firm, Tower’s cubicle-free workplace, jeans-clad workforce, and well-stocked kitchens reflect the premium the firm places on quality of life. Benefits include:
Tower Research Capital is an equal opportunity employer.
Data Engineer
Posted today
Job Description
Responsibilities:
- Design and implement scalable and robust data pipelines to support analytics and data processing needs.
- Develop and maintain database architectures, including data lakes and data warehouses.
- Ensure data quality and consistency through data cleaning, transformation, and validation processes.
- Collaborate with data scientists and analysts to gather requirements and deliver data solutions that support business objectives.
- Optimize data retrieval and develop dashboards and reports for various user needs.
- Implement data security and privacy policies to comply with legal and regulatory requirements.
- Translate business problems into data use cases to be solved by enabling digital capabilities
- Handle data-related processes, including acquisition, classification, filtering, and cleaning
- Design and develop use-case-driven applications to be deployed on the Ops platform
- Design, set up, and deploy applications on the IIoT platform
- Implement algorithms, AI, and automation that deliver data exploration, decision support, recommendation systems, and machine learning to achieve business efficiency and outcomes
- Analyse data to derive actionable outcomes
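The cleaning and validation work described above can be illustrated with a minimal pandas sketch. The column names (`sensor_id`, `ts`, `reading`) and the three checks are hypothetical stand-ins, not from the posting:

```python
import pandas as pd

def validate(df):
    """Run basic quality checks; return a list of human-readable issues."""
    issues = []
    if df["sensor_id"].isna().any():
        issues.append("null sensor_id values")
    if df.duplicated(subset=["sensor_id", "ts"]).any():
        issues.append("duplicate (sensor_id, ts) rows")
    if (df["reading"] < 0).any():
        issues.append("negative readings")
    return issues

# Illustrative frame containing one of each defect
df = pd.DataFrame({
    "sensor_id": ["a", "a", "b", None],
    "ts": [1, 1, 2, 3],
    "reading": [10.0, 10.0, -5.0, 7.0],
})
problems = validate(df)
```

In practice each check would be extended with alerting, and the rule set driven by configuration rather than hard-coded.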
Requirements:
- At least 2-3 years of experience in data processing and analytics
- Programming Languages: Proficiency in languages like Python, Java, or Scala.
- Database Technologies: Experience with SQL and other databases.
- Cloud Computing: Familiarity with cloud platforms like AWS, Azure, or Google Cloud.
- Data Modelling: Ability to design and implement effective data models.
- Problem-Solving and Analytical Skills: Ability to identify and resolve complex data-related issues.
- Communication and Collaboration: Ability to work effectively with various teams and communicate technical information clearly.
Our Address and Working Hours:
Seatrium Pioneer Yard
50 Gul Road Singapore
(Island wide transport provided)
Mon - Thu: 8am - 5:15pm, Fri: 8am to 4:30pm
Interested candidates are invited to send us an updated resume with your current and expected salary and earliest availability.
We regret that only shortlisted candidates will be notified.
Please note that your personal data disclosed to Seatrium Limited and our group of companies, shall be used for the purposes of evaluation, and processing in accordance with our recruitment processes and policies. By providing your personal data, you have consented to the aforesaid purpose under the provisions of the Personal Data Protection Act 2012.
Data Engineer
Posted today
Job Description
- Possess a degree in Computer Science/Information Technology or related fields.
- At least 3 years of experience in a role focusing on development and support of data ingestion pipelines.
- Experience with building on data platforms, e.g. Snowflake.
- Proficient in SQL and Python.
- Experience with Cloud environments (e.g. AWS).
- Experience with continuous integration and continuous deployment (CI/CD) using GitHub.
- Experience with Software Development Life Cycle (SDLC) methodology.
- Experience with data warehousing concepts.
- Strong problem-solving and troubleshooting skills.
- Strong communication and collaboration skills.
- Able to design and implement solutions and perform code reviews independently.
- Able to provide production support independently.
- Agile, fast learner and able to adapt to changes.
Responsibilities:
- Work closely with data stewards, data analysts and business end-users to implement and support data solutions.
- Design and build robust and scalable data ingestion and data management solutions for batch-loading and streaming from multiple data sources using Python via different mechanisms such as API, Files transfer, direct interface with Oracle and MSSQL databases.
- Follow the SDLC process: requirement gathering, design and development, SIT, UAT support, and CI/CD deployment using GitHub for enhancements and new ingestion pipelines.
- Ensure compliance with IT security standards, policies, and procedures.
- Provide BAU support in terms of production job monitoring, issue resolution, and bug fixes.
- Enable ingestion checks and data quality checks for all data sets in the data platform and ensure the data issues are actively detected, tracked, and fixed without breaching SLA.
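A minimal sketch of the kind of ingestion check described above, assuming a simple row-count reconciliation between source and target; the counts and tolerance threshold are illustrative, not from the posting:

```python
def check_ingestion(source_count, target_count, tolerance=0.0):
    """Reconcile source vs loaded row counts; flag an SLA-relevant breach."""
    missing = source_count - target_count
    ok = missing <= source_count * tolerance
    return {"source": source_count, "target": target_count,
            "missing": missing, "ok": ok}

# 500 rows short of 1,000,000 passes a 0.1% tolerance...
within = check_ingestion(1_000_000, 999_500, tolerance=0.001)
# ...but any shortfall fails a zero-tolerance check.
breach = check_ingestion(1_000_000, 999_500)
```

The returned dict would typically be fed to the platform's alerting so breaches are detected and tracked before the SLA clock runs out.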
Data Engineer
Posted today
Job Description
We are seeking a highly skilled Data Engineer to join our team and contribute to building scalable, reliable, and high-performance data platforms. The ideal candidate will have strong expertise in Python, SQL, Spark, and hands-on experience with cloud-based big data platforms such as Databricks.
Key Responsibilities:
Design, develop, and maintain large-scale data pipelines and ETL processes.
Work with Spark and Databricks to process and transform large datasets efficiently.
Optimize SQL queries and data models for performance and scalability.
Collaborate with data scientists, analysts, and business stakeholders to deliver reliable data solutions.
Ensure best practices in data governance, data quality, and security.
Mentor junior engineers and contribute to technical design reviews.
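SQL optimization of the kind mentioned above usually starts with reading the query plan. A small sketch using Python's built-in sqlite3 (rather than a warehouse engine such as Databricks SQL); the table and index names are illustrative:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE trades (id INTEGER PRIMARY KEY, symbol TEXT, qty INTEGER)")
conn.executemany("INSERT INTO trades (symbol, qty) VALUES (?, ?)",
                 [("ES", 10), ("NQ", 5), ("ES", 7)])

# Without an index, filtering on symbol scans the whole table.
before = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM trades WHERE symbol = 'ES'").fetchall()

conn.execute("CREATE INDEX idx_trades_symbol ON trades(symbol)")

# With the index, the same filter becomes an index search.
after = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM trades WHERE symbol = 'ES'").fetchall()
```

The same habit carries over to Spark and Databricks, where `EXPLAIN` on a query (or the Spark UI) shows whether a filter is pushed down or forces a full scan.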
Required Skills & Qualifications:
Strong experience in Python programming for data engineering.
Advanced knowledge of SQL for querying, optimization, and modeling.
Hands-on expertise with Apache Spark for distributed data processing.
Working knowledge of Databricks (certification preferred but not mandatory).
Experience with data lakes, data warehouses, and cloud platforms (Azure/AWS/GCP).
Familiarity with CI/CD pipelines and version control (Git).
Excellent problem-solving and communication skills.
Nice-to-Have:
Databricks Certification (Associate/Professional) – highly preferred.
Knowledge of streaming frameworks (Kafka, Delta Live Tables).
Experience with data orchestration tools (Airflow, ADF, etc.).
Data Engineer
Posted today
Job Description
Who We Are
As Singapore's longest established bank, we have been dedicated to enabling individuals and businesses to achieve their aspirations since 1932. How? By taking the time to truly understand people. From there, we provide support, services, solutions, and career paths that meet their individual needs and desires.
Today, we're on a journey of transformation. Leveraging technology and creativity to become a future-ready learning organisation. But for all that change, our strategic ambition is consistently clear and bold, which is to be Asia's leading financial services partner for a sustainable future.
We invite you to build the bank of the future. Innovate the way we deliver financial services. Work in friendly, supportive teams. Build lasting value in your community. Help people grow their assets, business, and investments. Take your learning as far as you can. Or simply enjoy a vibrant, future-ready career.
Your Opportunity Starts Here.
This is a broad description of the job profile; the definitive job description should be reviewed and discussed between you and your manager.
Data Engineer
Why Join
As a Data Analytics Support specialist at OCBC, you'll be at the forefront of driving business growth through data-driven insights. You'll work closely with stakeholders to identify opportunities, develop solutions, and implement changes that make a real impact. If you're passionate about data and want to make a difference, this role is for you.
How you succeed
To succeed in this role, you'll need to be a collaborative problem-solver who can distill complex data into actionable insights. You'll work closely with stakeholders to understand their needs, develop targeted solutions, and implement changes that drive business outcomes. By leveraging your analytical skills and business acumen, you'll help OCBC stay ahead of the curve in a rapidly changing market.
What you do
Work closely with in-country stakeholders such as Business Units and Product Units to identify data gaps and manage data pipelines and transformations for analytics usage and business processes, supporting growth opportunities
Establish robust business data pipelines and data architecture designs at a regional level, with Singapore as a Centre of Excellence
Implement data initiatives and share best practices regionally through a build-and-transfer model
Automate report and dashboard generation through programmed scripting
Who you are
Proficient in Python and SQL
Experience with Hadoop and various databases, Data Build Tool (DBT), Dagster and Git
Strong in data engineering, ETL process and process automation capabilities
Experience in interactive dashboard design and development
Good command of Microsoft Excel, Word and PowerPoint, with strong analytical thinking
Good communication and interpersonal skills
Ability to work independently and adapt with agility to business requirements
Willing to take on new challenges and work in a fast-paced environment
What we offer
Competitive base salary. A suite of holistic, flexible benefits to suit every lifestyle. Community initiatives. Industry-leading learning and professional development opportunities. Equal opportunity. Fair employment. Selection based on ability and fit with our culture and values. Your wellbeing, growth and aspirations are every bit as cared for as the needs of our customers.
Data Engineer
Posted today
Job Description
Job Description
We are seeking a Data Engineer to design, build, and maintain scalable data pipelines and platforms. This role supports digital transformation initiatives and requires working closely with infrastructure and AI engineers to provide clients with robust data infrastructure for insights and innovation.
Key Responsibilities
- Design, implement, test, deploy, and maintain secure, scalable data engineering solutions and pipelines
- Integrate new data sources into data warehouses and build reports/visualizations
- Develop insights and recommendations, delivering advice and solutions to client problems
- Ensure best-in-class security measures within data platforms
- Automate repetitive data management tasks through scalable, replicable code
- Contribute to fostering a data-driven culture
Requirements
- Bachelor's degree in Information Technology, Computer Science, or related field
- 1–3 years of experience in data engineering or related fields
- Proficiency in Python or Java for data manipulation and automation
- Experience building and maintaining ETL pipelines on big data platforms
- Strong knowledge of SQL and relational databases (e.g., MySQL), Hadoop, Spark, and column-oriented databases (e.g., BigQuery, Cassandra)
- Understanding of data lifecycle concepts and data transformation methods
- Familiarity with Google Cloud Platform (GCP) services for data engineering
- Experience in building Data Lake and Data Warehouse solutions
- Ability to work independently and lead small teams
Data Engineer
Posted today
Job Description
We are looking for a skilled Data Engineer with 5 years of hands-on experience in designing, developing, and optimizing big data pipelines and solutions. The ideal candidate will have strong expertise in SQL, Python, Apache Spark, Hive, and Hadoop ecosystems and will be responsible for building scalable data platforms to support business intelligence, analytics, and machine learning use cases.
Key Responsibilities
- Design, develop, and maintain scalable ETL pipelines using Spark, Hive, and Hadoop.
- Write efficient SQL queries for data extraction, transformation, and analysis.
- Develop automation scripts and data processing workflows using Python.
- Optimize data pipelines for performance, reliability, and scalability.
- Work with structured and unstructured data from multiple sources.
- Ensure data quality, governance, and security throughout the data lifecycle.
- Collaborate with cross-functional teams (Data Scientists, Analysts, and Business stakeholders) to deliver data-driven solutions.
- Monitor and troubleshoot production data pipelines.
Requirements
Required Skills & Qualifications
- 5+ years of experience in Data Engineering / Big Data development.
- Strong expertise in SQL (query optimization, performance tuning, stored procedures).
- Proficiency in Python for data manipulation, scripting, and automation.
- Hands-on experience with Apache Spark (PySpark/Scala) for large-scale data processing.
- Solid knowledge of Hive for querying and managing data in Hadoop environments.
- Strong working knowledge of Hadoop ecosystem (HDFS, YARN, MapReduce, etc.).
- Experience with data pipeline orchestration tools (Airflow, Oozie, or similar) is a plus.
- Familiarity with cloud platforms (AWS, Azure, or GCP) is preferred.
- Excellent problem-solving, debugging, and communication skills.
Data Engineer
Posted today
Job Description
Key Responsibilities
Design and implement data architecture, pipelines, and ETL processes.
Develop and maintain data platforms, dashboards, and reporting systems.
Ensure data quality, reliability, and availability.
Implement IaC and CI/CD for automated deployments.
Collaborate with business units, data scientists, and analysts to deliver solutions.
Optimize data storage/retrieval for performance.
Provide L3 support for in-house development and data platforms/warehouses.
Requirements
- Bachelor's in Computer Science/Information Systems or related field.
- 3–5 years' experience in data engineering or similar role.
- Proven cloud data platform experience.
- Strong in SQL, Python, and relevant programming languages.
- Hands-on with IaC & cloud services (BigQuery, Dataflow, Kafka, Pub/Sub).
- Experience with Apigee APIs, Drupal developer portals, and monetization models (advantage).
- Knowledge of data modeling, ETL, and data warehousing.
- Strong problem-solving, attention to detail, and communication skills.
Data Engineer
Posted today
Job Description
NTT Data is seeking skilled and motivated Data Engineers to join our dynamic team supporting diverse client projects across industries such as banking, healthcare, and the public sector. The ideal candidate will have a strong foundation in data architecture, ETL development, and cloud technologies, with a passion for building scalable and efficient data solutions.
Key Responsibilities
- Design, develop, and maintain robust data pipelines and ETL processes to support analytics and reporting needs.
- Collaborate with data scientists, analysts, and business stakeholders to understand data requirements and deliver high-quality solutions.
- Implement data integration solutions across structured and unstructured data sources.
- Ensure data quality, integrity, and security across all stages of the data lifecycle.
- Optimize data workflows for performance and scalability in cloud and on-premise environments.
- Support data migration and transformation initiatives for client projects.
- Monitor and troubleshoot data pipeline issues and provide timely resolutions.
Requirements
- Bachelor's degree in Computer Science, Information Systems, Engineering, or related field.
- 3+ years of experience in data engineering or related roles.
- Proficiency in SQL and Python or Scala.
- Experience with data pipeline tools such as Apache Spark, Kafka, Airflow, or similar.
- Familiarity with cloud platforms (AWS, Azure, or GCP).
- Strong understanding of data warehousing concepts and tools (e.g., Snowflake, Redshift, BigQuery).
- Knowledge of data governance, security, and compliance standards.
- Experience working in regulated industries such as banking or healthcare.
- Familiarity with DevOps practices and CI/CD pipelines.
- Exposure to machine learning workflows and data science collaboration.
- Strong communication and stakeholder management skills.