3,665 Data Team jobs in Singapore

Data Engineering Intern

Singapore, Singapore $40000 - $120000 Y Seagate Technology

Posted today

Job Description

About our group:

We are an IT MES (Manufacturing Execution System) team based in Woodlands, supporting Seagate's global factory operations in Singapore, Malaysia, US, Thailand, and China. Our core mission is to design and implement scalable data integration solutions that power MES and Factory IT applications.

Our focus includes Database ETL processes, complex SQL development, and Python-based automation to optimize data flows and ensure system reliability. Beyond traditional data engineering, we are also exploring Generative AI and Agentic AI solutions to modernize data platforms and create new value for factory operations.

This internship is ideal for students who are passionate about ETL/Data Engineering with Oracle, eager to sharpen their Python skills, and curious about the application of LLMs and AI frameworks in enterprise IT.

About the role:

As a Data Engineering Intern, you will:

  • Work with senior engineers on ETL processes in Postgres / Oracle, including writing and optimizing stored procedures, functions, and packages.
  • Develop and optimize complex SQL queries to support data extraction, transformation, and reporting needs.
  • Use Python for automation, data processing, and proof-of-concepts.
  • Collaborate with Application Architects and Business SMEs to deliver data integration solutions supporting MES and factory applications.
  • Contribute to projects involving LLMs, LangChain, LangGraph, and Marimo notebooks for GenAI-enabled data pipelines.
  • Support testing, troubleshooting, and documentation to ensure system reliability and performance.
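
The ETL and automation work described above can be sketched in miniature. The example below is purely illustrative — the table and column names (`raw_readings`, `tool_summary`) are hypothetical, and SQLite stands in for the Oracle/Postgres databases the role actually uses:

```python
import sqlite3

def run_etl(conn):
    """Minimal ETL sketch: extract raw readings, transform, load a summary table."""
    cur = conn.cursor()
    # Extract: raw tool readings (a stand-in for an MES source table)
    rows = cur.execute(
        "SELECT tool_id, reading FROM raw_readings WHERE reading IS NOT NULL"
    ).fetchall()
    # Transform: compute the average reading per tool
    totals = {}
    for tool_id, reading in rows:
        s, n = totals.get(tool_id, (0.0, 0))
        totals[tool_id] = (s + reading, n + 1)
    # Load: write the per-tool summary back to the database
    cur.execute(
        "CREATE TABLE IF NOT EXISTS tool_summary (tool_id TEXT PRIMARY KEY, avg_reading REAL)"
    )
    cur.executemany(
        "INSERT OR REPLACE INTO tool_summary VALUES (?, ?)",
        [(t, s / n) for t, (s, n) in totals.items()],
    )
    conn.commit()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE raw_readings (tool_id TEXT, reading REAL)")
conn.executemany("INSERT INTO raw_readings VALUES (?, ?)",
                 [("T1", 2.0), ("T1", 4.0), ("T2", 10.0), ("T2", None)])
run_etl(conn)
print(dict(conn.execute("SELECT * FROM tool_summary")))  # → {'T1': 3.0, 'T2': 10.0}
```

In a production MES setting this transform-and-load step would typically live in a stored procedure or a scheduled Python job rather than an in-memory script.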
About you:
  • Strong foundation in SQL and relational database concepts.
  • Hands-on skills in Database stored procedures, triggers, and performance tuning.
  • Comfortable coding in Python and eager to apply it for ETL automation and analytics.
  • Interested in emerging technologies like Generative AI, LLM frameworks (LangChain, LangGraph), and Marimo notebooks.
  • Detail-oriented, analytical, and self-motivated with strong problem-solving skills.
  • Good communication and teamwork abilities.
Your experience includes:
  • Pursuing a degree in Computer Science, Software Engineering, Information Systems, or related field.
  • Experience (academic or project-based) with ETL pipelines in Oracle/Postgres.
  • Familiarity with Generative AI frameworks (LangChain, LangGraph, Chainlit, or similar).
  • Knowledge of version control (Git) and Agile practices.
Location:

Our Woodlands site is one of the largest electronics manufacturing sites in Singapore, housing our recording media operations. Spread over three sites, it is easily accessible via bus or from the MRT Station, with many employees taking mass transportation to work. Here at work, you can enjoy breakfast, lunch, dinner, and snacks at our onsite canteen and coffee shop. We offer a range of facilities including an in-house gym and dance studio, as well as after-work badminton and table tennis competitions. On-site celebrations and community volunteer opportunities also abound.

Location: Woodlands, Singapore, W2

Travel: None


Data Engineering Analyst

Singapore, Singapore $80000 - $120000 Y MINDGRAPH PTE. LTD.

Posted today

Job Description

  • Bachelor's or Master's degree in Data Science, Computer Science, Statistics, Engineering, or a related field.
  • Experience as a Data Analyst or in a similar analytical/data science role.
  • Strong proficiency in SQL for data querying and transformation.
  • Advanced knowledge of Python for data analysis, automation, and ML workflows.
  • Experience with deep learning frameworks such as TensorFlow, PyTorch, or Keras.
  • Solid understanding of data validation, anomaly detection, and data quality principles.
  • Hands-on experience with data visualization and reporting tools (e.g., Power BI, Tableau, Matplotlib, Seaborn).
  • Familiarity with version control (Git), Jupyter notebooks, and working in collaborative environments.

Data Engineering Lead

Singapore, Singapore $150000 - $250000 Y Citi

Posted today

Job Description

The Engineering Lead Analyst is a senior level position responsible for leading a variety of engineering activities including the design, acquisition and deployment of hardware, software and network infrastructure in coordination with the Technology team. The overall objective of this role is to lead efforts to ensure quality standards are being met within existing and planned frameworks.

Responsibilities:

  • Strategic Leadership: Define and execute the data engineering roadmap for Global Wealth Data, aligning with overall business objectives and technology strategy. This includes understanding the data needs of portfolio managers, investment advisors, and other stakeholders in the wealth management ecosystem.
  • Team Management: Lead, mentor, and develop a high-performing, globally distributed team of data engineers, fostering a culture of collaboration, innovation, and continuous improvement.
  • Architecture and Design: Oversee the design and implementation of robust and scalable data pipelines, data warehouses, and data lakes, ensuring data quality, integrity, and availability for global wealth data. This includes designing solutions for handling large volumes of structured and unstructured data from various sources.
  • Technology Selection and Implementation: Evaluate and select appropriate technologies and tools for data engineering, staying abreast of industry best practices and emerging trends specific to wealth management data.
  • Performance Optimization: Continuously monitor and optimize data pipelines and infrastructure for performance, scalability, and cost-effectiveness, ensuring optimal access to global wealth data.
  • Collaboration: Partner with business stakeholders, data scientists, portfolio managers, and other technology teams to understand data needs and deliver effective solutions that support investment strategies and client reporting.
  • Data Governance: Implement and enforce data governance policies and procedures to ensure data quality, security, and compliance with relevant regulations, particularly around sensitive financial data.

Qualifications:

  • 10-15 years of hands-on experience in Hadoop, Scala, Java, Spark, Hive, Kafka, Impala, Unix scripting, and other big data frameworks.
  • 4+ years of experience with relational SQL and NoSQL databases: Oracle, MongoDB, HBase.
  • Strong proficiency in Python and Spark Java, with knowledge of core Spark concepts (RDDs, DataFrames, Spark Streaming, etc.), Scala, and SQL
  • Data Integration, Migration & Large Scale ETL experience (Common ETL platforms such as PySpark/DataStage/AbInitio etc.) - ETL design & build, handling, reconciliation and normalization
  • Data Modeling experience (OLAP, OLTP, Logical/Physical Modeling, Normalization, knowledge on performance tuning)
  • Experienced in working with large and multiple datasets and data warehouses
  • Experience building and optimizing 'big data' data pipelines, architectures, and datasets.
  • Strong analytic skills and experience working with unstructured datasets
  • Ability to effectively use complex analytical, interpretive, and problem-solving techniques
  • Experience with Confluent Kafka, Redhat JBPM, CI/CD build pipelines and toolchain – Git, BitBucket, Jira
  • Experience with external cloud platforms such as OpenShift, AWS & GCP
  • Experience with container technologies (Docker, Pivotal Cloud Foundry) and supporting frameworks (Kubernetes, OpenShift, Mesos)
  • Experienced in integrating search solutions with middleware and distributed messaging (Kafka)
  • Highly effective interpersonal and communication skills with tech/non-tech stakeholders.
  • Experienced in the software development life cycle.
  • Excellent problem-solving skills and a strong mathematical and analytical mindset
  • Ability to work in a fast-paced financial environment

Education:

  • Bachelor's degree/University degree or equivalent experience
  • Master's degree preferred

-

Job Family Group:

Technology

-

Job Family:

Systems & Engineering

-

Time Type:

Full time

-

Most Relevant Skills

Please see the requirements listed above.

-

Other Relevant Skills

For complementary skills, please see above and/or contact the recruiter.

-

Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law.

If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity review Accessibility at Citi .

View Citi's EEO Policy Statement and the Know Your Rights poster.


Data Engineering Architect

Singapore, Singapore $70000 - $120000 Y Unitech Consulting

Posted today

Job Description

We are seeking a highly proficient and results-driven Data Engineering Architect with a robust background in designing, implementing, and maintaining scalable and resilient data ecosystems. The ideal candidate will possess a minimum of five years of dedicated experience in orchestrating complex data workflows and will serve as a key contributor within our advanced data services team. This role requires a meticulous professional who can transform intricate business requirements into high-performance, production-grade data solutions.

Key Responsibilities:
  • Architectural Stewardship: Design, develop, and optimize sophisticated data pipelines leveraging distributed computing frameworks to ensure efficiency, reliability, and scalability of data ingestion, transformation, and delivery layers.
  • SQL Mastery: Act as a subject matter expert in SQL, crafting and refining highly complex, multi-layered queries and stored procedures for advanced data manipulation, extraction, and reporting, while ensuring optimal performance and resource utilization.
  • Distributed Processing Expertise: Lead the development and deployment of data processing jobs using Apache Spark , orchestrating complex data transformations, aggregations, and feature engineering at petabyte-scale.
  • Big Data Orchestration: Proactively manage and evolve our data warehousing solutions built on Apache Hive , overseeing schema design, partition management, and query optimization to support large-scale analytical and reporting needs.
  • Collaborative Innovation: Work synergistically with senior engineers and cross-functional teams to conceptualize and execute architectural enhancements, data modeling strategies, and systems integrations that align with long-term business objectives.
  • Quality Assurance & Governance: Establish and enforce rigorous data quality standards, implementing comprehensive validation protocols and monitoring mechanisms to guarantee data integrity, accuracy, and lineage across all systems.
  • Operational Excellence: Proactively identify, diagnose, and remediate technical bottlenecks and anomalies within data workflows, ensuring system uptime and operational stability through systematic troubleshooting and root cause analysis.
Required Competencies & Qualifications:
  • Educational Foundation: Bachelor's degree in Computer Science, Information Systems, or a closely related quantitative field.
  • Experience: A minimum of five (5) years of progressive, hands-on experience in a dedicated data engineering or equivalent role, with a proven track record of delivering enterprise-level data solutions.
  • Core Technical Skills:
    - SQL: Expert-level proficiency in SQL programming is non-negotiable, including advanced query optimization, window functions, and schema design principles.
    - Distributed Computing: Demonstrated high-level proficiency with Apache Spark for large-scale data processing.
    - Data Warehousing: In-depth, practical experience with Apache Hive and its ecosystem.
  • Conceptual Knowledge: Deep understanding of data warehousing methodologies, ETL/ELT processes, and dimensional modeling.
  • Analytical Acumen: Exceptional problem-solving and analytical capabilities, with the ability to dissect complex technical challenges and formulate elegant, scalable solutions.
  • Continuous Learning: A relentless curiosity and a strong desire to stay abreast of emerging technologies and industry best practices.
  • Domain Preference: Prior professional experience within the Banking or Financial Services sector is highly advantageous.
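
The window functions called out under the SQL requirement lend themselves to a small self-contained illustration. The schema here (`trades`, `desk`, `amount`) is invented for the example, and SQLite is used only for portability — the posting names no specific database engine:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE trades (desk TEXT, amount REAL)")
conn.executemany("INSERT INTO trades VALUES (?, ?)",
                 [("fx", 100.0), ("fx", 300.0), ("rates", 50.0)])

# RANK() and SUM() OVER a per-desk partition: a typical window-function pattern,
# ranking rows within each group without collapsing them as GROUP BY would.
rows = conn.execute("""
    SELECT desk, amount,
           RANK() OVER (PARTITION BY desk ORDER BY amount DESC) AS rnk,
           SUM(amount) OVER (PARTITION BY desk) AS desk_total
    FROM trades
    ORDER BY desk, rnk
""").fetchall()
for r in rows:
    print(r)
# ('fx', 300.0, 1, 400.0)
# ('fx', 100.0, 2, 400.0)
# ('rates', 50.0, 1, 50.0)
```

The same pattern carries over directly to Hive and Spark SQL, both of which support the standard `OVER (PARTITION BY … ORDER BY …)` clause.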


Data Engineering Expert

Singapore, Singapore $120000 - $200000 Y 3 CUBED BUSINESS CONSULTING PTE. LTD.

Posted today

Job Description

1. Data Engineering & Platform Knowledge (Must)

  • Strong understanding of Hadoop ecosystem: HDFS, Hive, Impala, Oozie, Sqoop, Spark (on YARN).

  • Experience in data migration strategies (lift & shift, incremental, re-engineering pipelines).

  • Knowledge of Databricks architecture (Workspaces, Unity Catalog, Clusters, Delta Lake, Workflows).

2. Testing & Validation (Preferred)

  • Data reconciliation (source vs. target).

  • Performance benchmarking.

  • Automated test frameworks for ETL pipelines.

3. Databricks-Specific Expertise (Preferred)

  • Delta Lake: ACID transactions, time travel, schema evolution, Z-ordering.

  • Unity Catalog: Catalog/schema/table design, access control, lineage, tags.

  • Workflows/Jobs: Orchestration, job clusters vs. all-purpose clusters.

  • SQL Endpoints / Databricks SQL: Designing downstream consumption models.

  • Performance Tuning: Partitioning, caching, adaptive query execution (AQE), photon runtime.

4. Migration & Data Movement (Preferred)

  • Data migration from HDFS/Cloudera to cloud storage (ADLS/S3/GCS).

  • Incremental ingestion techniques (Change Data Capture, Delta ingestion frameworks).

  • Mapping Hive Metastore to Unity Catalog (metastore migration).

  • Refactoring HiveQL/Impala SQL to Databricks SQL (syntax differences).

5. Security & Governance (Nice to have)

  • Mapping Cloudera Ranger/SSO policies to Unity Catalog RBAC.

  • Azure AD / AWS IAM integration with Databricks.

  • Data encryption, masking, anonymization strategies.

  • Service Principal setup & governance.

6. DevOps & Automation (Nice to have)

  • Infrastructure as Code (Terraform for Databricks, Cloud storage, Networking).

  • CI/CD for Databricks (GitHub Actions, Azure DevOps, Databricks Asset Bundles).

  • Cluster policies & job automation.

  • Monitoring & logging (Databricks system tables, cloud-native monitoring).

7. Cloud & Infra Skills (Nice to have)

  • Strong knowledge of the target cloud (AWS/Azure/GCP):
    - Storage (S3/ADLS/GCS).
    - Networking (VNETs, Private Links, Security Groups).
    - IAM & Key Management.

9. Soft Skills

  • Ability to work with business stakeholders for data domain remapping.

  • Strong documentation and governance mindset.

  • Cross-team collaboration (infra, security, data, business).


Data Engineering Expert

Singapore, Singapore beBeeEngineering

Posted today

Job Description

Job Title: Data Engineering Expert

As a seasoned professional, you will be responsible for designing, developing, and automating data pipelines in a Databricks environment using Python, PySpark, and SQL.

The ideal candidate should have hands-on experience in AWS-native implementations and ETL processes.

Key Responsibilities:

  • Create efficient ETL pipelines using Databricks notebooks with PySpark, Python, and SQL to automate data workflows.
  • Perform data manipulation, validation, and error handling using Python to ensure data accuracy and quality.
  • Implement complex SQL queries, joins, aggregations, and other database operations within the Databricks environment.
  • Lead ETL migration projects to ensure smooth transition and minimal disruption to existing workflows.
  • Work on government sector data projects adhering to compliance and regulatory requirements.
  • Collaborate with cross-functional teams to understand data requirements and implement scalable solutions.
  • Optimize data pipelines for performance, scalability, and reliability on AWS.
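
The data validation and error-handling responsibilities above can be illustrated with a minimal sketch. The field names and rules here are hypothetical, and a real Databricks pipeline would express the same split over a PySpark DataFrame rather than plain Python lists:

```python
def validate_records(records, required=("id", "amount")):
    """Split records into valid rows and rejects, with a reason per reject.

    Illustrative only: the fields `id` and `amount` are invented for this
    example, not taken from any actual pipeline described in the posting.
    """
    valid, rejects = [], []
    for rec in records:
        missing = [f for f in required if rec.get(f) is None]
        if missing:
            rejects.append((rec, f"missing fields: {missing}"))
        elif not isinstance(rec["amount"], (int, float)) or rec["amount"] < 0:
            rejects.append((rec, "amount must be a non-negative number"))
        else:
            valid.append(rec)
    return valid, rejects

valid, rejects = validate_records([
    {"id": 1, "amount": 9.5},
    {"id": 2, "amount": -3},
    {"id": None, "amount": 4},
])
print(len(valid), len(rejects))  # → 1 2
```

Routing rejects to a quarantine table with a reason column, rather than silently dropping them, is the usual pattern for keeping ETL failures auditable.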

Required Skills and Qualifications:

  • 7+ years of experience as a Data Engineer with expertise in AWS cloud services.
  • Strong experience in ETL development and migration projects.
  • Proficiency in data validation, transformation, and error handling in ETL processes.
  • Solid experience in complex SQL operations and data manipulation within Databricks.
  • Experience working in government projects with an understanding of compliance requirements.
  • Strong problem-solving skills and ability to work in a collaborative team environment.

Data Engineering Specialist

Singapore, Singapore beBeeDeveloper

Posted today

Job Description

Job Description:

We are seeking skilled developers with a strong background in coding and test case writing to join our data engineering team.

Candidates should have experience with SQL or Informatica, but profiles whose background is primarily in these areas, without broader coding skills, will not be considered.

The ideal candidate will possess excellent coding skills and be able to write effective test cases.


Required Skills and Qualifications:
  • Strong coding skills
  • Ability to write test cases
  • Experience with SQL or Informatica

Benefits:
  • Global diversity: Be part of an international team celebrating diverse perspectives and collaboration
  • Trust and growth: Nurture your talent and empower yourself to reach new heights
  • Continuous learning: Unlock your full potential with over 250 training modules
  • Vibrant culture: Enjoy a workplace where energy, fun, and camaraderie come together
  • Meaningful impact: Join us in making a difference through CSR initiatives

Data Engineering Specialist

Singapore, Singapore beBeeEngineering

Posted today

Job Description

Job Title: Data Engineering Specialist

The role of a Data Engineering Specialist involves designing and implementing large-scale data processing systems, leveraging cutting-edge technologies to handle structured and unstructured data from various sources. This requires in-depth knowledge of data quality frameworks, monitoring solutions, and best practices for data governance, security, and compliance.

Key Responsibilities:
  1. Design and implement ETL/ELT processes using native Databricks capabilities
  2. Develop and maintain data quality frameworks and monitoring solutions
  3. Establish and enforce data governance policies and procedures
  4. Monitor and optimize production data pipelines for performance and efficiency
  5. Implement comprehensive logging and alerting systems
  6. Perform regular health checks on Databricks cluster performance and troubleshoot issues
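
The logging and alerting responsibility above can be sketched with the standard library. The checker name and freshness threshold are invented for illustration; a real deployment would forward WARNING-level events to a pager or webhook rather than the console:

```python
import logging

# Hypothetical pipeline-health check; names and thresholds are illustrative.
logging.basicConfig(level=logging.INFO, format="%(levelname)s %(name)s: %(message)s")
log = logging.getLogger("pipeline.health")

def check_freshness(minutes_since_last_run, threshold_minutes=60):
    """Log, and return, whether a pipeline's last run is recent enough."""
    if minutes_since_last_run > threshold_minutes:
        log.warning("pipeline stale: last run %d min ago (threshold %d)",
                    minutes_since_last_run, threshold_minutes)
        return False
    log.info("pipeline healthy: last run %d min ago", minutes_since_last_run)
    return True

check_freshness(15)   # healthy, logged at INFO
check_freshness(120)  # stale, logged at WARNING
```

The same shape generalizes to other health checks (row counts, schema drift, cluster utilization): each check logs a structured message and returns a boolean that an alerting layer can act on.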
Requirements:
  • Bachelor's degree in Computer Science or Computer Engineering
  • Minimum 8-10 years of experience in system operations, compliance, and management areas
  • Hands-on experience with Databricks platform and cloud-based technologies
  • Cloud certification (AWS) and Databricks certification (Associate or Professional level)

Data Engineering Manager

Singapore, Singapore IKAS INTERNATIONAL (ASIA) PTE. LTD.

Posted today

Job Description

Roles & Responsibilities

We're looking for a Lead Data Engineer / Data Engineering Manager to join our client to lead the design and delivery of scalable, cloud-based data solutions. In this role, you'll work closely with cross-functional teams to solve complex data challenges, modernize infrastructure, and shape strategic data platforms from the ground up.

You'll drive technical direction, design reusable solutions, and influence best practices in data engineering and governance across high-impact projects.

What You'll Do

  • Advise on data strategy, architecture, and implementation.
  • Build and optimize data pipelines across cloud and on-prem environments.
  • Develop secure, scalable infrastructure for structured and unstructured data.
  • Design reusable data models and maintain metadata and lineage.
  • Ensure governance, data quality, and access controls are in place.
  • Support both greenfield builds and legacy modernization efforts.
  • Mentor teams and contribute to internal capability development.

What You Bring

  • Over 10 years of experience in data engineering, platform or cloud infrastructure.
  • Expertise in cloud platforms (AWS, Azure, GCP) and distributed systems (Spark, Hadoop).
  • Proficient in Python, SQL, Java, Scala.
  • Experience with orchestration tools (e.g. Airflow, ADF) and DevOps (Docker, Git, Terraform).
  • Familiarity with Databricks and real-time/batch data pipelines.
  • Strong grasp of data governance, security, and compliance practices.
  • Clear communicator with strong stakeholder management skills.
  • Proven ability to lead, mentor, and drive technical alignment across teams.

For more information you can contact Norean Tan at

We regret to inform you that only shortlisted candidates will be notified / contacted.

EA Registration No.: R - Tan Lee Ying, Norean

iKas International (Asia) Pte Ltd

ROC No.: E | EA License No.: 16S8086


 
