1,442 Senior Data Engineer jobs in Singapore

Big Data Engineer

Singapore, Singapore LION & ELEPHANTS CONSULTANCY PTE. LTD.

Posted today

Job Description

Job Title: Big Data Engineer (Java, Spark, Hadoop)

Location: Singapore

Experience: 7–12 years

Employment Type: Full-Time

Open to Singapore Citizens and SPRs only | No visa sponsorship available

Job Summary

We are looking for a Senior Big Data Engineer with 7–12 years of experience to join our growing data engineering team. The ideal candidate will bring deep expertise in Java, Apache Spark, and Hadoop ecosystems, and have a strong track record of designing and building scalable, high-performance big data solutions. This role is critical to ensuring robust data processing and delivering clean, actionable data for business insights and advanced analytics.

Key Responsibilities
  • Design, build, and optimize large-scale, distributed data processing systems using Apache Spark, Hadoop, and Java (a minimal sketch follows this list).
  • Lead the development and deployment of data ingestion, ETL/ELT pipelines, and data transformation frameworks.
  • Work with cross-functional teams to gather data requirements and translate them into scalable data solutions.
  • Ensure high performance and reliability of big data systems through performance tuning and best practices.
  • Manage and monitor batch and real-time data pipelines from diverse sources including APIs, databases, and streaming platforms like Kafka.
  • Apply deep knowledge of Java to build efficient, modular, and reusable codebases.
  • Mentor junior engineers, participate in code reviews, and enforce engineering best practices.
  • Collaborate with DevOps teams to build CI/CD pipelines and automate deployment processes.
  • Ensure data governance, security, and compliance standards are maintained.
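As a rough illustration of the pipeline work described above, here is a minimal batch ETL sketch, written in PySpark for brevity even though this role emphasises Java; the table names, columns, and output path are hypothetical placeholders.

```python
# Minimal sketch of a batch Spark ETL job of the kind described above.
# Table names, columns, and paths are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = (SparkSession.builder
         .appName("orders-daily-etl")
         .enableHiveSupport()
         .getOrCreate())

# Read a raw Hive table, keep only valid rows, and derive a partition column.
orders = (spark.table("raw.orders")
          .filter(F.col("order_status").isNotNull())
          .withColumn("order_date", F.to_date("order_ts")))

# Aggregate to a clean, query-friendly shape for downstream analytics.
daily_totals = (orders.groupBy("order_date", "country")
                .agg(F.count("*").alias("order_count"),
                     F.sum("order_amount").alias("gross_amount")))

# Write partitioned Parquet so downstream consumers can prune by date.
(daily_totals.write
 .mode("overwrite")
 .partitionBy("order_date")
 .parquet("hdfs:///warehouse/curated/daily_order_totals"))

spark.stop()
```

An equivalent job in Java would use the same DataFrame API through SparkSession; the read–transform–partitioned-write structure is what the responsibilities above refer to.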
Required Qualifications
  • 7–12 years of experience in big data engineering or backend data systems.
  • Strong hands-on programming skills in Java; exposure to Scala or Python is a plus.
  • Proven experience with Apache Spark, Hadoop (HDFS, YARN, MapReduce), and related tools.
  • Solid understanding of distributed computing, data partitioning, and optimization techniques.
  • Experience with data access and storage layers like Hive, HBase, or Impala.
  • Familiarity with data ingestion tools like Apache Kafka, NiFi, Flume, or Sqoop.
  • Comfortable working with SQL for querying large datasets.
  • Good understanding of data architecture, data modeling, and data lifecycle management.
  • Experience with cloud platforms like AWS, Azure, or Google Cloud Platform.
  • Strong problem-solving, analytical, and communication skills.
Preferred Qualifications
  • Bachelor’s or Master’s degree in Computer Science, Data Engineering, or a related field.
  • Experience with streaming data frameworks such as Spark Streaming, Kafka Streams, or Flink.
  • Knowledge of DevOps practices, CI/CD pipelines, and infrastructure as code (e.g., Terraform).
  • Exposure to containerization (Docker) and orchestration (Kubernetes).
  • Certifications in Big Data technologies or Cloud platforms are a plus.

Please note that this is an equal opportunities employer.

Big Data Engineer

$120,000 - $180,000 Yearly U3 InfoTech Pte Ltd

Posted today

Job Description

Position Details:

Company: U3 Infotech (Payroll)

Role: Big Data Engineer

Position: Contract

Duration: 12+ Months

Location: Singapore

Job Description:

We are seeking a highly skilled and motivated Lead Big Data Engineer to join our data team. The ideal candidate will play a key role in designing, developing, and maintaining scalable big data solutions while providing technical leadership. This role will also support strategic Data Governance initiatives, ensuring data integrity, privacy, and accessibility across the organization.

Key Responsibilities:

● Design, implement, and optimize robust data pipelines and ETL/ELT workflows using SQL and Python.

● Lead architecture discussions, including the creation and review of Entity Relationship Diagrams (ERDs) and overall system design.

● Collaborate closely with Data Engineers, Analysts, and cross-functional engineering teams to meet evolving data needs.

● Deploy and manage infrastructure using Terraform and other Infrastructure-as-Code (IaC) tools.

● Develop and maintain CI/CD pipelines for deploying data applications and services.

● Leverage strong experience in AWS services (e.g., S3, Glue, Lambda, RDS, Lake Formation) to support scalable and secure cloud-based data platforms.

● Handle both batch and real-time data processing effectively.

● Apply best practices in data modeling and support data privacy and data protection initiatives.

● Implement and manage data encryption and hashing techniques to secure sensitive information (see the sketch after this list).

● Ensure adherence to software engineering best practices including version control, automated testing, and deployment standards.

● Lead performance tuning and troubleshooting for data applications and platforms.
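As a rough illustration of the hashing responsibility above, the following is a minimal PySpark sketch that applies salted one-way hashes to sensitive columns before a dataset is published; the column names, bucket paths, and salt handling are hypothetical assumptions, and a production setup would source the salt from a secrets manager.

```python
# Minimal sketch: salted one-way hashing of sensitive columns in a PySpark
# step, so raw identifiers never reach the analytical layer.
# Column names, paths, and salt handling are illustrative assumptions.
import os

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("pii-hashing-step").getOrCreate()

# In practice the salt would come from a secret store, not an env variable.
salt = os.environ.get("PII_SALT", "replace-me")

customers = spark.read.parquet("s3://example-bucket/raw/customers/")

hashed = (customers
          .withColumn("customer_id_hash",
                      F.sha2(F.concat_ws("|", F.col("customer_id"), F.lit(salt)), 256))
          .withColumn("email_hash",
                      F.sha2(F.concat_ws("|", F.lower(F.col("email")), F.lit(salt)), 256))
          .drop("customer_id", "email"))

hashed.write.mode("overwrite").parquet("s3://example-bucket/curated/customers/")
spark.stop()
```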

Required Skills & Experience:

● Strong proficiency in SQL for data modeling, querying, and transformation.

● Advanced Python development skills with an emphasis on data engineering use cases.

● Hands-on experience with Terraform for cloud infrastructure provisioning.

● Proficiency with CI/CD tools, particularly GitHub Actions.

● Deep expertise in AWS cloud architecture and services.

● Demonstrated ability to create and evaluate ERDs and contribute to architectural decisions.

● Strong communication and leadership skills with experience mentoring engineering teams.

Preferred Qualifications:

● Experience with big data technologies such as Apache Spark, Hive, or Kafka.

● Familiarity with containerization tools (e.g., Docker) and orchestration platforms (e.g., Kubernetes).

● Solid understanding of data governance, data quality, and security frameworks.

About the Company

U3 Infotech is a Technology Solutions, Managed Services, and Talent Management Solutions company with over 2 decades of experience in the APAC region since 2002. Our clients include Fortune 100, MNCs, Leading Regional Organisations, Government organizations, and Startups. We work with clients across Banking, Insurance, Bio-Science, Pharmaceutical, Healthcare, Engineering, Product, and Supply Chain domains.

We have been growing rapidly through value creation, solving complex problems, and addressing the opportunities of our clients' businesses. We differentiate ourselves through our deep commitment at all levels, entrepreneurial mindset, outcome-driven approach, and financial resources.

If you are interested in this role, please send us your CV.

Please refer to U3's Privacy Notice for Job Applicants/Seekers. When you apply, you voluntarily consent to the collection, use, and disclosure of your personal data for recruitment/employment and related purposes.

Cheers Stay Safe & Healthy.

Thanks and Regards,

Raghunath

Senior Recruitment Consultant

Talent Acquisition Team

Mobile:

Email:

133 Cecil Street, Keck Seng Tower, #14-3,

Singapore

Singapore | Australia | Malaysia | Thailand

Vietnam | India | Philippines | Hong Kong

Job Types: Full-time, Contract

Contract length: 12 months

Pay: $10,000.00 - $11,000.00 per month

Benefits:

  • Health insurance

Work Location: In person

Big Data Engineer

Singapore, Singapore $80,000 - $120,000 Yearly TikTok Pte. Ltd.

Posted today

Job Description

Responsibilities

TikTok will be prioritizing applicants who have a current right to work in Singapore and do not require TikTok sponsorship of a visa.

About the team

Our Recommendation Architecture Team is responsible for building and optimizing the architecture of the recommendation system to provide the most stable and best experience for our TikTok users.

We cover almost all short-text recommendation scenarios in TikTok, such as search suggestions, the video-related search bar, and comment entities. Our recommendation system supports personalized sorting for queries, optimizing the user experience and improving TikTok's search awareness.

  • Design and implement a reasonable offline data architecture for large-scale recommendation systems
  • Design and implement flexible, scalable, stable and high-performance storage and computing systems
  • Troubleshoot the production system; design and implement the mechanisms and tools needed to keep the overall production system operating stably
  • Build industry-leading distributed systems, such as storage and computing, to provide reliable infrastructure for massive data and large-scale business systems
  • Develop and implement techniques and analytics applications to transform raw data into meaningful information using data-oriented programming languages and visualisation software
  • Apply data mining, data modelling, natural language processing, and machine learning to extract and analyse information from large structured and unstructured datasets
  • Visualise, interpret, and report data findings, and create dynamic data reports where needed

Qualifications

Minimum Qualifications

  • Bachelor's degree or above in Computer Science or related fields, with at least 1 year of experience
  • Familiar with common open-source frameworks in the big data field, e.g. Hadoop, Hive, Flink, FlinkSQL, Spark, Kafka, HBase, Redis, RocksDB, Elasticsearch, etc.
  • Experience in programming, including but not limited to, the following programming languages: C, C++, Java, or Golang
  • Effective communication skills and a sense of ownership and drive
  • Experience with petabyte-scale data processing is a plus

Big Data Engineer

Singapore, Singapore $80,000 - $120,000 Yearly UNISON Group

Posted today

Job Description

We are seeking a highly skilled and experienced Big Data Engineer to join our team. The ideal candidate will have a minimum of 5 years of experience managing data engineering jobs in a big data environment, e.g., Cloudera Data Platform. The successful candidate will be responsible for designing, developing, and maintaining data ingestion and processing jobs, and will also integrate data sets to provide seamless data access to users.

SKILLS SET AND TRACK RECORD

  • Good understanding and completion of projects using waterfall/Agile methodology.
  • Analytical, conceptualisation and problem-solving skills.
  • Good understanding of analytics and data warehouse implementations
  • Hands-on experience in big data engineering jobs using Python, PySpark, Linux, and ETL tools like Informatica
  • Strong SQL and data analysis skills. Hands-on experience in data virtualisation tools like Denodo will be an added advantage
  • Hands-on experience in a reporting or visualization tool like SAP BO and Tableau is preferred
  • Track record in implementing systems using Cloudera Data Platform will be an added advantage.
  • Motivated and self-driven, with the ability to learn new concepts and tools in a short period of time
  • Passion for automation, standardization, and best practices
  • Good presentation skills are preferred

The developer is responsible for the following:

  • Analyse the Client's data needs and document the requirements.
  • Refine data collection/consumption by migrating data collection to more efficient channels.
  • Plan, design, and implement data engineering jobs and reporting solutions to meet the analytical needs.
  • Develop test plans and scripts for system testing, and support user acceptance testing.
  • Work with the Client's technical teams to ensure smooth deployment and adoption of the new solution.
  • Ensure the smooth operations and service levels of IT solutions.
  • Support the resolution of production issues.

Big Data Engineer

Singapore, Singapore LION & ELEPHANTS CONSULTANCY PTE. LTD.

Posted today

Job Description

Roles & Responsibilities

Job Title: Big Data Engineer (Java, Spark, Hadoop)

Location: Singapore

Experience: 7–12 years

Employment Type: Full-Time

Open to Singapore Citizens and SPRs only | No visa sponsorship available

Job Summary

We are looking for a Senior Big Data Engineer with 7–12 years of experience to join our growing data engineering team. The ideal candidate will bring deep expertise in Java, Apache Spark, and Hadoop ecosystems, and have a strong track record of designing and building scalable, high-performance big data solutions. This role is critical to ensuring robust data processing and delivering clean, actionable data for business insights and advanced analytics.

Key Responsibilities

● Design, build, and optimize large-scale, distributed data processing systems using Apache Spark, Hadoop, and Java.

● Lead the development and deployment of data ingestion, ETL/ELT pipelines, and data transformation frameworks.

● Work with cross-functional teams to gather data requirements and translate them into scalable data solutions.

● Ensure high performance and reliability of big data systems through performance tuning and best practices.

● Manage and monitor batch and real-time data pipelines from diverse sources including APIs, databases, and streaming platforms like Kafka.

● Apply deep knowledge of Java to build efficient, modular, and reusable codebases.

● Mentor junior engineers, participate in code reviews, and enforce engineering best practices.

● Collaborate with DevOps teams to build CI/CD pipelines and automate deployment processes.

● Ensure data governance, security, and compliance standards are maintained.

Required Qualifications

● 7–12 years of experience in big data engineering or backend data systems.

● Strong hands-on programming skills in Java; exposure to Scala or Python is a plus.

● Proven experience with Apache Spark, Hadoop (HDFS, YARN, MapReduce), and related tools.

● Solid understanding of distributed computing, data partitioning, and optimization techniques.

● Experience with data access and storage layers like Hive, HBase, or Impala.

● Familiarity with data ingestion tools like Apache Kafka, NiFi, Flume, or Sqoop.

● Comfortable working with SQL for querying large datasets.

● Good understanding of data architecture, data modeling, and data lifecycle management.

● Experience with cloud platforms like AWS, Azure, or Google Cloud Platform.

● Strong problem-solving, analytical, and communication skills.

Preferred Qualifications

● Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field.

● Experience with streaming data frameworks such as Spark Streaming, Kafka Streams, or Flink.

● Knowledge of DevOps practices, CI/CD pipelines, and infrastructure as code (e.g., Terraform).

● Exposure to containerization (Docker) and orchestration (Kubernetes).

● Certifications in Big Data technologies or Cloud platforms are a plus.

To apply, email to / with the following details – Current CTC, Expected CTC, Notice period and Residential Status.

Big Data Engineer

$12,000 Monthly LION & ELEPHANTS CONSULTANCY PTE. LTD.

Posted 2 days ago

Job Description

Job Title: Big Data Engineer (Java, Spark, Hadoop)

Location: Singapore

Experience: 7–12 years

Employment Type: Full-Time

Open to Singapore Citizens and SPRs only | No visa sponsorship available


Job Summary

We are looking for a Senior Big Data Engineer with 7–12 years of experience to join our growing data engineering team. The ideal candidate will bring deep expertise in Java, Apache Spark, and Hadoop ecosystems, and have a strong track record of designing and building scalable, high-performance big data solutions. This role is critical to ensuring robust data processing and delivering clean, actionable data for business insights and advanced analytics.


Key Responsibilities

● Design, build, and optimize large-scale, distributed data processing systems using Apache Spark, Hadoop, and Java.

● Lead the development and deployment of data ingestion, ETL/ELT pipelines, and data transformation frameworks.

● Work with cross-functional teams to gather data requirements and translate them into scalable data solutions.

● Ensure high performance and reliability of big data systems through performance tuning and best practices.

● Manage and monitor batch and real-time data pipelines from diverse sources including APIs, databases, and streaming platforms like Kafka.

● Apply deep knowledge of Java to build efficient, modular, and reusable codebases.

● Mentor junior engineers, participate in code reviews, and enforce engineering best practices.

● Collaborate with DevOps teams to build CI/CD pipelines and automate deployment processes.

● Ensure data governance, security, and compliance standards are maintained.


Required Qualifications

● 7–12 years of experience in big data engineering or backend data systems.

● Strong hands-on programming skills in Java; exposure to Scala or Python is a plus.

● Proven experience with Apache Spark, Hadoop (HDFS, YARN, MapReduce), and related tools.

● Solid understanding of distributed computing, data partitioning, and optimization techniques.

● Experience with data access and storage layers like Hive, HBase, or Impala.

● Familiarity with data ingestion tools like Apache Kafka, NiFi, Flume, or Sqoop.

● Comfortable working with SQL for querying large datasets.

● Good understanding of data architecture, data modeling, and data lifecycle management.

● Experience with cloud platforms like AWS, Azure, or Google Cloud Platform.

● Strong problem-solving, analytical, and communication skills.


Preferred Qualifications

● Bachelor’s or Master’s degree in Computer Science, Data Engineering, or a related field.

● Experience with streaming data frameworks such as Spark Streaming, Kafka Streams, or Flink.

● Knowledge of DevOps practices, CI/CD pipelines, and infrastructure as code (e.g., Terraform).

● Exposure to containerization (Docker) and orchestration (Kubernetes).

● Certifications in Big Data technologies or Cloud platforms are a plus.


To apply, email to / with the following details – Current CTC, Expected CTC, Notice period and Residential Status.

Data Engineer - Big Data

139691 $8200 Monthly SYNAPXE PTE. LTD.

Posted 4 days ago

Job Description

Roles and responsibilities

1) Develop TRUST data strategy:

Work with stakeholders to understand data analytics needs, data structure requirements (both in terms of scalability and accessibility), and translate this into a coherent near to long term data strategy for TRUST.

Support translation of data business needs into technical system requirements for MCDR, in terms of collection, storage, batch and real-time processing, as well as analysis of information from structured and unstructured sources in a scalable, repeatable, and secure manner.

Identify opportunities for improvement and optimisation, e.g., implement best practices and performance optimisation on Big Data and Cloud platforms to achieve the best data engineering outcomes.

2) Oversee data preparation and data provisioning for TRUST:

Collaborate with data engineers to organise and prepare anonymised datasets in MCDR according to TRUST standards, and then provide the data in accordance with the approved TRUST Data Request. This involves working closely with the data engineers to ensure that the datasets meet the required standards and are made available as per the specific data request guidelines set by TRUST.

3) Oversee implementation of common data model and data quality programme in TRUST and MCDR:

Work with data analysts, data scientists, clinicians and other stakeholders to implement common data models to support analytics use cases.

Design and implement tools to enhance the data strategy and enable seamless integration with the data, potentially leveraging API calls for efficient integration.

Implement data management standards and practices.



Requirements

  • Degree/Master's in Computer Science, Information Technology, Computer Engineering, or equivalent.
  • At least eight (8) years of relevant working experience in data management / integration / modelling of data warehouse or advanced analytics solutions.
  • Demonstrate good, in-depth knowledge of relevant Extract-Transform-Load (ETL) hardware/software products, frameworks, and methodologies.
  • Experience in designing and implementing cloud-based data solutions using cloud platforms (e.g., AWS cloud-native tools).
  • Experience with at least two of the following areas:
    • Databases (e.g., Oracle, MS SQL, MySQL, Teradata)
    • Big data (e.g., Hadoop ecosystem)
    • ETL development using ETL tools (e.g., Informatica, IBM DataStage, Talend)
    • Data repository design (e.g., operational data stores, dimensional data stores, data marts)
    • Data interrogation techniques (e.g., SQL, NoSQL)
    • Structured and unstructured data analytics
    • Batch and real-time data ingestion and processing
    • Data quality tools and processes
    • Data transformation and terminology equivalence mapping
  • Experience in data modelling for analytics (e.g., star schemas, snowflake schemas, OMOP CDM).
  • Experience in interacting with analytics stakeholders (economists, statisticians, clinicians, policy makers) on a business or domain level.
  • Comfortable working independently to carry out data analysis and estimate data quality and sufficiency.
  • Good interpersonal skills; a detail-oriented and flexible person who can work across different areas within the team.
  • The following will be preferred: some understanding of the Singapore healthcare system and healthcare data governance and management.

Big Data Engineer (Singapore)

Singapore, Singapore Baidu, Inc.

Posted 1 day ago

Job Description

Build the company's big data warehouse system, including batch and stream data flow construction.

Develop an in-depth understanding of business systems and project customers' needs; design and implement big data systems that meet user needs, and ensure smooth project acceptance.

Responsible for data integration and ETL architecture design and development.

Research cutting-edge big data technologies and continuously optimize big data platforms.

Job Requirements:
  • Bachelor's degree or above in computer science, databases, machine learning, or other related majors, with more than 3 years of data development experience.
  • Keen understanding of business data, with the ability to analyze it quickly.
  • Proficient in SQL; familiar with MySQL; very familiar with Shell, Java (or Scala), and Python.
  • Familiar with common ETL technologies and principles; proficient in data warehouse design specifications and practical operations; rich experience in Spark and MapReduce task tuning.
  • Familiar with Hadoop, Hive, HBase, Spark, Flink, Kafka, and other big data components.
  • Proficiency with mainstream analytical data systems such as ClickHouse, Doris, and TiDB; tuning experience is preferred.
Seniority Level:

Mid-Senior level

Employment Type:

Full-time

Job Function:

Information Technology and Engineering

Industries:

Software Development, Technology, Information and Media, and Information Services

Big Data Engineer, Manager

Singapore, Singapore OCBC

Posted 4 days ago

Job Description

Overview

As Singapore’s longest established bank, we have been dedicated to enabling individuals and businesses to achieve their aspirations since 1932. Today, we’re on a journey of transformation, leveraging technology and creativity to become a future-ready learning organisation. Our strategic ambition is to be Asia’s leading financial services partner for a sustainable future. We invite you to build the bank of the future, innovate the way we deliver financial services, work in friendly, supportive teams, and build lasting value in your community. Your opportunity starts here.

What You Do
  • Collaborate with business stakeholders to translate requirements to data solutions
  • Deliver end-to-end initiatives in a multi-stakeholder environment
  • Design and implement data pipelines for batch and real-time/stream processing (see the sketch after this list)
  • Troubleshoot and resolve issues across a complex multi-technology landscape
  • Performance tuning and optimization of data pipelines within the Big Data ecosystem
  • Drive initiatives for optimization and resiliency employing automation
  • Maintain excellent communication and stakeholder management in an Agile setting
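As a rough illustration of the streaming-pipeline work described in this list, here is a minimal PySpark Structured Streaming sketch that reads from Kafka and writes append-only Parquet with checkpointing; the broker address, topic, schema, and paths are hypothetical, and the spark-sql-kafka connector is assumed to be available on the classpath.

```python
# Minimal sketch of a Kafka -> Spark Structured Streaming pipeline.
# Broker address, topic, schema, and paths are hypothetical placeholders;
# the spark-sql-kafka connector package is assumed to be available.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import (DoubleType, StringType, StructField,
                               StructType, TimestampType)

spark = SparkSession.builder.appName("txn-stream").getOrCreate()

schema = StructType([
    StructField("txn_id", StringType()),
    StructField("account_id", StringType()),
    StructField("amount", DoubleType()),
    StructField("event_time", TimestampType()),
])

raw = (spark.readStream
       .format("kafka")
       .option("kafka.bootstrap.servers", "broker-1:9092")
       .option("subscribe", "transactions")
       .option("startingOffsets", "latest")
       .load())

# Kafka delivers bytes; decode the value column and parse the JSON payload.
events = (raw.selectExpr("CAST(value AS STRING) AS json")
          .select(F.from_json("json", schema).alias("e"))
          .select("e.*"))

# Append to Parquet with a checkpoint so the job can recover after failures.
query = (events.writeStream
         .format("parquet")
         .option("path", "hdfs:///data/streams/transactions")
         .option("checkpointLocation", "hdfs:///data/checkpoints/transactions")
         .outputMode("append")
         .trigger(processingTime="1 minute")
         .start())

query.awaitTermination()
```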
Who You Are
  • A degree in Computer Science, Information Technology, or a related field
  • 3–5 years of hands-on experience in Big Data Engineering within the banking domain
  • Expert knowledge of operationalizing data pipelines on Hadoop, Hive, Iceberg, Spark
  • Strong knowledge of translating business requirements to Spark and SparkSQL
  • Working experience with PySpark, Kafka, Spark Streaming, Flink
  • Proficiency in Unix shell scripting, Python scripting, and Python frameworks for automation
  • Proficient with DevOps processes and orchestration tools
  • Experience deploying applications in containers
  • Experience operationalizing data pipelines on cloud platforms (Azure, Google Cloud, AWS) is a plus
  • Knowledge of data APIs with GraphQL is a plus
What We Offer

Competitive base salary and a holistic, flexible benefits package. Community initiatives and industry-leading learning and professional development opportunities. Your wellbeing, growth and aspirations are valued as much as the needs of our customers.

Big Data Engineer & Architect

Singapore, Singapore beBeeDataEngineer

Posted today

Job Description

Job Title: Data Engineer & Architect

About the Role:

We are seeking a highly skilled and motivated Data Engineer & Architect to join our team. As a Data Engineer & Architect, you will be responsible for designing, developing, and deploying data pipelines using Python and PySpark in cloud-based environments.
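As a rough illustration of the Python/PySpark pipeline and optimisation work this role describes, the sketch below shows a few common performance levers (partition pruning, broadcast joins, and controlling output file counts); the bucket paths, table layout, and column names are hypothetical.

```python
# Minimal sketch of common PySpark performance levers for cloud pipelines:
# partition pruning, broadcast joins, and controlling output file counts.
# Paths, tables, and column names are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = (SparkSession.builder
         .appName("events-enrichment")
         .config("spark.sql.shuffle.partitions", "200")  # tune to cluster size
         .getOrCreate())

# Filtering on the partition column lets Spark prune partitions instead of
# scanning the whole dataset.
events = (spark.read.parquet("s3://example-bucket/events/")
          .filter(F.col("event_date") == "2024-01-01"))

# The dimension table is small: broadcast it to avoid a shuffle-heavy join.
countries = spark.read.parquet("s3://example-bucket/dim/countries/")
enriched = events.join(F.broadcast(countries), on="country_code", how="left")

# Coalesce before writing to avoid producing thousands of tiny output files.
(enriched.coalesce(64)
 .write.mode("overwrite")
 .parquet("s3://example-bucket/curated/events_enriched/2024-01-01/"))

spark.stop()
```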

Key Responsibilities:
  • Design and implement data pipelines using Python and PySpark in AWS and Google Cloud Platform (GCP) environments.
  • Coding in Python and PySpark in cloud-based environments involving big data frameworks and AWS resources such as EMR, Lambda, S3, RDS, EC2, ECS, EKS, etc.
  • Experience with ETL frameworks for data ingestion and data pipelines to Snowflake Data Warehouse in high-frequency, high-volume scenarios (gigabytes to terabytes of data ingested per day/month).
  • Optimize PySpark jobs for performance and efficiency, troubleshoot issues, and ensure data quality and availability.
  • Implement processes for automating data delivery, monitoring data quality, and production deployment.
  • The candidate should be an expert developer with hands-on experience in Git/GitLab-based repository management and an understanding of Government Commercial Cloud (GCC) requirements.
  • Proficient consulting communication skills: good at articulating the business problem and the solution approach, and at responding to changes in business scenarios.
  • Degree in Computer Science, Computer Engineering, or any STEM equivalent.
  • Familiar with working in a Government Commercial Cloud (GCC) environment.
  • Independent, self-motivated contributor who is passionate about software development.
What We Offer:

A dynamic work environment with opportunities for growth and development.

An excellent salary package that reflects your skills and qualifications.

Ongoing training and support to help you achieve your career goals.

A collaborative and inclusive team culture that values diversity and creativity.

Flexible working arrangements to suit your needs.

Access to cutting-edge technology and tools to help you stay ahead of the curve.
