3,791 Data Professionals jobs in Singapore

Data Engineer- Data Warehouse

Singapore, Singapore Traveloka

Posted 2 days ago

Job Description

Join to apply for the Data Engineer - Data Warehouse role at Traveloka.

It's fun to work in a company where people truly BELIEVE in what they're doing!

Job Description
  • Define data model conventions and governance
  • Design, develop, and maintain data pipelines (external data source ingestion jobs, ETL/ELT jobs, etc.)
  • Design, develop, and maintain data pipeline frameworks (combining open source and internal software to build and govern data pipelines)
  • Create and manage data pipeline infrastructures
  • Continuously seek ways to optimize existing data processing for cost and time efficiency
  • Ensure good data governance and quality through monitoring systems to oversee data quality in the data warehouse
Requirements
  • Minimum 3 years of experience in Data Engineering and Warehousing
  • Fluent in Python and advanced SQL
  • Preferably familiar with data warehouse environments (e.g., Google BigQuery, AWS Redshift, Snowflake)
  • Preferably familiar with data transformation or processing frameworks (e.g., dbt, Dataform, Spark, Hive)
  • Preferably familiar with data processing technologies (e.g., Google Dataflow, Google Dataproc)
  • Preferably familiar with orchestration tools (e.g., Airflow, Argo, Azkaban); see the sketch after this list
  • Understanding of data warehousing concepts (e.g., Kimball, Inmon, Data Vault) and experience in data modeling and improving data quality
  • Preferably understand basic containerization and microservice concepts (e.g., Docker, Kubernetes)
  • Knowledge of machine learning, building robust APIs, and web development is an advantage
  • Ability to build and maintain good stakeholder relationships
  • Ability to translate business requirements into data warehouse modeling specifications
  • Creative problem-solving skills
  • A team player who loves collaborating but can also work independently
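
For illustration only: the requirements above name orchestration tools such as Airflow alongside ELT pipelines. A minimal Airflow 2.x DAG for a daily extract-and-load job might be sketched as below; the DAG ID, task names, and callables are hypothetical placeholders and are not part of this posting.

```python
# Minimal Airflow 2.x DAG sketch for a daily ELT job (all names are hypothetical).
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_bookings(ds: str, **_) -> None:
    # Pull one day of data from an external source into a staging area.
    # 'ds' is Airflow's logical date as a YYYY-MM-DD string.
    print(f"extracting bookings for {ds}")


def load_to_warehouse(ds: str, **_) -> None:
    # Load the staged partition into the warehouse (e.g., BigQuery) and
    # trigger downstream transformations (e.g., a dbt run).
    print(f"loading bookings partition {ds}")


with DAG(
    dag_id="daily_bookings_elt",           # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
    tags=["elt", "warehouse"],
) as dag:
    extract = PythonOperator(task_id="extract", python_callable=extract_bookings)
    load = PythonOperator(task_id="load", python_callable=load_to_warehouse)

    extract >> load                        # extract must finish before load starts
```
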
Additional Information

If you like wild growth and working with happy, enthusiastic over-achievers, you'll enjoy your career with us!

Seniority level

Mid-Senior level

Employment type

Full-time

Job function

Information Technology

Industries

Software Development

Data Engineer

Singapore, Singapore PURESOFTWARE PTE. LTD.

Posted 2 days ago

Job Description

  • Design and develop scalable data pipelines using Azure Data Factory, Databricks, and Spark (see the sketch after this list).
  • Ingest, transform, and store structured and unstructured data from various sources including REST APIs.
  • Write efficient SQL queries for data extraction, transformation, and reporting.
  • Develop and maintain data models in Azure Cosmos DB for high-performance access.
  • Collaborate with data scientists, analysts, and stakeholders to understand data requirements.
  • Implement data quality, monitoring, and validation processes.
  • Optimize data workflows for performance, scalability, and cost efficiency in the cloud.
  • Ensure data security, compliance, and governance in alignment with enterprise policies.
  • Participate in code reviews, testing, and documentation of data solutions.
  • Stay updated with emerging Azure and big data technologies to continuously improve data engineering practices.
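
For illustration only: a batch transformation of the kind described above (Spark on Databricks over structured and semi-structured sources) is often written as a small PySpark job. This is a rough sketch; the paths, column names, and app name are invented and not part of the posting.

```python
# PySpark batch-transform sketch: clean raw JSON events and write partitioned Parquet.
# Paths, column names, and the app name are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("events_batch_transform").getOrCreate()

# Ingest semi-structured source data (e.g., JSON landed from a REST API).
raw = spark.read.json("/mnt/raw/events/")

cleaned = (
    raw.dropDuplicates(["event_id"])                        # basic de-duplication
       .withColumn("event_ts", F.to_timestamp("event_ts"))  # normalise timestamps
       .withColumn("event_date", F.to_date("event_ts"))     # derive a partition column
       .filter(F.col("event_id").isNotNull())               # simple quality gate
)

# Store curated data partitioned by date for efficient downstream queries.
(cleaned.write
        .mode("overwrite")
        .partitionBy("event_date")
        .parquet("/mnt/curated/events/"))

spark.stop()
```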

Data Engineer

Singapore, Singapore Sembcorp Industries Ltd

Posted 2 days ago

Job Description

Join to apply for the Data Engineer role at Sembcorp Industries Ltd

About Sembcorp

Sembcorp is a leading energy and urban solutions provider headquartered in Singapore. Led by its purpose to drive energy transition, Sembcorp delivers sustainable energy solutions and urban developments by leveraging its sector expertise and global track record.

Purpose & Scope

We are seeking a highly skilled and self-driven Azure Data Engineer with expertise in PySpark, Python, and modern Azure data services including Synapse Analytics and Azure Data Explorer. The ideal candidate will design, develop, and maintain scalable data pipelines and architectures, enabling effective data management, analytics, and governance.

Key Roles & Responsibilities

  • Design, develop, and maintain scalable and efficient data pipelines (both batch and real-time streaming) using modern data engineering tools.
  • Build and manage data lakes, data warehouses, and data marts using Azure Data Services.
  • Integrate data from various sources including APIs, structured/unstructured files, IoT devices, and real-time streams.
  • Develop and optimize ETL/ELT workflows using tools such as Azure Data Factory, Databricks, and Apache Spark.
  • Implement real-time data ingestion and processing using Azure Stream Analytics, Event Hubs, or Kafka (see the sketch after this list).
  • Ensure data quality, availability, and security across the entire data lifecycle.
  • Collaborate with analysts, data scientists, and engineering teams to deliver business-aligned data solutions.
  • Contribute to data governance efforts and ensure compliance with data privacy standards.
  • Establish and manage source system connectivity (on-prem, APIs, sensors, etc.).
  • Handle deployment and migration of data pipeline artifacts between environments using Azure DevOps.
  • Design, develop, and troubleshoot PySpark scripts and orchestration pipelines.
  • Perform data integration using database joins and other transformations aligned with project requirements.
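
For illustration only: real-time ingestion from Kafka or Event Hubs into a lake, as mentioned above, is commonly expressed as a Spark Structured Streaming job. The sketch below assumes the Spark-Kafka connector is available on the cluster; the broker address, topic, schema, and paths are placeholders, not Sembcorp's actual configuration.

```python
# Structured Streaming sketch: ingest telemetry from Kafka into a data lake.
# Requires the spark-sql-kafka connector; all names below are hypothetical.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StringType, StructField, StructType, TimestampType

spark = SparkSession.builder.appName("telemetry_stream_ingest").getOrCreate()

schema = StructType([
    StructField("device_id", StringType()),
    StructField("reading", StringType()),
    StructField("emitted_at", TimestampType()),
])

stream = (
    spark.readStream.format("kafka")
         .option("kafka.bootstrap.servers", "broker:9092")   # placeholder broker
         .option("subscribe", "plant-telemetry")              # placeholder topic
         .load()
         # Kafka delivers bytes; decode the value and parse the JSON payload.
         .select(F.from_json(F.col("value").cast("string"), schema).alias("msg"))
         .select("msg.*")
)

query = (
    stream.writeStream.format("parquet")
          .option("path", "/mnt/lake/telemetry/")
          .option("checkpointLocation", "/mnt/checkpoints/telemetry/")
          .outputMode("append")
          .start()
)
query.awaitTermination()
```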

Qualifications, Skills & Experience

  • Bachelor’s Degree in Computer Science, Engineering, or related field
  • 3–5 years of experience in Azure-based data engineering, PySpark, and Big Data technologies
  • Strong hands-on experience with Azure Synapse Analytics for pipeline orchestration and data handling
  • Expertise in SQL, data warehousing, data marts, and ingestion using PySpark and Python
  • Solid experience building and maintaining cloud-based ETL/ELT pipelines, especially with Azure Data Factory or Synapse
  • Familiarity with cloud data environments such as Azure and optionally AWS
  • Experience with Azure DevOps for CI/CD and artifact deployment
  • Excellent communication, problem-solving, and interpersonal skills
  • 1–2 years of experience working with Azure Data Explorer (including row-level security and access controls).
  • Experience with Azure Purview for metadata management, data lineage, governance, and discovery
  • Ability to work independently and take full ownership of assignments
  • Proactive in identifying and resolving blockers and escalating when needed
  • Exposure to real-time processing with tools like Azure Stream Analytics or Kafka

Our Culture at Sembcorp

At Sembcorp, our culture is shaped by a strong set of shared behaviours that guide the way we work and uphold our commitment to driving the energy transition.

We foster an institution-first mindset, where the success of Sembcorp takes precedence over individual interests. Collaboration is at the heart of what we do, as we work seamlessly across markets, businesses, and functions to achieve our goals together. Accountability is a core principle, ensuring that we take ownership of our commitments and deliver on them with integrity and excellence. These values define who we are and create a workplace where our people can thrive while making a meaningful impact on driving energy transition.

Join us in making a real impact!

Seniority level

Mid-Senior level

Employment type

Full-time

Job function

Information Technology

Industries

Utilities

Data Engineer

Singapore, Singapore HORIZON SOFTWARE PTE. LTD.

Posted 4 days ago

Job Description

Key Responsibilities
  • Design, develop, and maintain data pipelines, ETL/ELT processes, and data integration workflows.
  • Architect and optimize data lakes, data warehouses, and streaming platforms.
  • Work with structured, semi-structured, and unstructured data at scale.
  • Implement real-time and batch data processing solutions.
  • Collaborate with Data Scientists, Analysts, and Business stakeholders to deliver high-quality data solutions.
  • Ensure data security, lineage, governance, and compliance across platforms.
  • Optimize queries, data models, and storage for performance and cost efficiency.
  • Automate processes and adopt DevOps/DataOps practices for CI/CD in data engineering.
  • Troubleshoot complex data-related issues and resolve production incidents.
  • Mentor junior engineers and contribute to technical strategy and best practices.
Technical Skills (Must-Have)

Programming & Scripting

  • Proficiency in Python, Scala, or Java for data engineering.
  • Strong SQL skills (query optimization, tuning, advanced joins, window functions); see the example just below.
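
For illustration only, a small, self-contained example of the window functions called out above, run through Python's bundled sqlite3 module (the table and data are invented):

```python
# Window-function illustration using Python's built-in sqlite3 module
# (requires SQLite >= 3.25; table, rows, and column names are invented).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (customer TEXT, order_date TEXT, amount REAL);
    INSERT INTO orders VALUES
        ('alice', '2024-01-05', 120.0),
        ('alice', '2024-02-11',  80.0),
        ('bob',   '2024-01-20', 200.0),
        ('bob',   '2024-03-02',  50.0);
""")

# Rank each customer's orders by amount and keep a per-customer running total.
rows = conn.execute("""
    SELECT customer,
           order_date,
           amount,
           RANK()      OVER (PARTITION BY customer ORDER BY amount DESC)  AS amount_rank,
           SUM(amount) OVER (PARTITION BY customer ORDER BY order_date)   AS running_total
    FROM orders
    ORDER BY customer, order_date;
""").fetchall()

for row in rows:
    print(row)

conn.close()
```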

Big Data & Distributed Systems

  • Expertise with Apache Spark, Hadoop, Hive, HBase, Flink, Kafka.
  • Hands-on with streaming frameworks (Kafka Streams, Spark Streaming, Flink).

Cloud & Data Platforms

  • Deep knowledge of AWS (Redshift, Glue, EMR, Athena, S3, Kinesis), Azure (Synapse, Data Factory, Databricks, ADLS), or GCP (BigQuery, Dataflow, Pub/Sub, Dataproc).
  • Experience with Snowflake, Databricks, or Teradata.

ETL/ELT & Orchestration

  • Strong experience with Airflow, Luigi, Azkaban, Prefect.
  • ETL tools like Informatica, Talend, SSIS.

Data Modeling & Storage

  • Experience with Data Lakes, Data Warehouses, and Lakehouse architectures.
  • Knowledge of Star Schema, Snowflake Schema, Normalization/Denormalization.

DevOps & Automation

  • Proficiency in CI/CD (Jenkins, GitLab, Azure DevOps) for data pipelines.
  • Experience with Docker, Kubernetes, Terraform, Ansible for infrastructure automation.

Other Key Skills

  • Strong knowledge of Data Governance, MDM, Data Quality, Metadata Management.
  • Familiarity with Graph Databases (Neo4j), Time-Series Databases (InfluxDB, TimescaleDB).
  • Understanding of machine learning data pipelines (feature engineering, model serving).
Qualifications
  • Bachelor’s/Master’s degree in Computer Science, Data Engineering, or related field.
  • 7–10 years of experience in data engineering or big data development.
  • At least 2–3 large-scale end-to-end data platform implementations.
  • Preferred Certifications:
    AWS Certified Data Analytics – Specialty
    Google Professional Data Engineer
    Databricks Certified Data Engineer

Data Engineer

Singapore, Singapore INNOWAVE TECH PTE. LTD.

Posted 4 days ago

Job Description

About Innowave Tech Singapore

Innowave Tech is an Artificial Intelligence (AI) company offering solutions for the Semiconductor and Advanced Manufacturing industry. Utilizing deep industrial domain knowledge, proven experience, and innovation, we provide expert AI solutions and systems to address various industry pain points.

Roles & Responsibilities

We are seeking a Data Engineer to establish and lead our data infrastructure. The successful candidate will be responsible for building our data engineering practice from the ground up, implementing robust data systems for industrial AI applications, and establishing best practices that will power our semiconductor manufacturing AI solutions.

Your Role and Impact

As our first Data Engineer, you will have a foundational role in building robust data infrastructure to handle manufacturing data and LLM applications, while establishing secure data practices that power our AI solutions for advanced manufacturing operations.

What You’ll Do

  1. Select and manage on-premises technologies suitable for secure and efficient operations.
  2. Build robust pipelines to collect, clean, and transform diverse datasets including process data, sensor data, image data, and human annotations (see the sketch after this list).
  3. Ensure secure, maintainable, and scalable deployment of data infrastructure.
  4. Define and enforce best practices in data governance, privacy, and access control.
  5. Collaboration & Deployment.
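
For illustration only: item 2 above covers collecting and cleaning process and sensor data. A deliberately simple, standard-library sketch of such a cleaning step is shown below; the field names, units, and validity range are invented.

```python
# Minimal clean-and-transform sketch for sensor readings (standard library only).
# Field names, units, and the plausibility range are invented for illustration.
import csv
import io

RAW_READINGS = """device_id,temperature_c,recorded_at
etcher-01,68.4,2024-05-01T08:00:00
etcher-01,,2024-05-01T08:05:00
etcher-02,9999,2024-05-01T08:00:00
etcher-02,71.2,2024-05-01T08:05:00
"""


def clean(rows):
    """Drop rows with missing or physically implausible temperature readings."""
    for row in rows:
        value = row["temperature_c"]
        if not value:
            continue                          # missing reading
        temperature = float(value)
        if not 0.0 <= temperature <= 200.0:
            continue                          # out-of-range sensor glitch
        yield {**row, "temperature_c": temperature}


reader = csv.DictReader(io.StringIO(RAW_READINGS))
for record in clean(reader):
    print(record)
```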

What We’re Looking For

Educational Background:

Minimum Polytechnic Diploma or Bachelor's Degree in Computer Science, Engineering, or a related field.

Technical Expertise:

  • 3+ years of experience in data engineering roles, ideally with on-premises or hybrid infrastructure.
  • Proven track record of building scalable data systems from the ground up in a startup environment.
  • Proficiency in Python and/or Java for data pipeline development.
  • Solid experience with ETL frameworks (e.g., Apache Airflow, Dagster) and streaming systems (e.g., Kafka).
  • Experience designing and maintaining SQL and NoSQL databases.
  • Experience building and operating data lakes and data catalogs.
  • Familiarity with containerization (Docker), version control (Git), and CI/CD practices.

Soft Skills:

  • Excellent communication skills and ability to collaborate with cross-functional technical and non-technical teams.
  • Excellent problem-solving and debugging abilities.
  • Ability to balance engineering tradeoffs.

Bonus Skills:

  • Experience with manufacturing data systems, especially SPC, SCADA, and industrial sensor protocols (e.g., OPC UA, MQTT, Modbus).
  • Familiarity with AI/ML pipelines and tools (e.g., MLflow).
  • Knowledge in vector databases and LLM data infrastructure.
  • Prior experience working in or with regulated industries (e.g., semiconductor, automotive, aerospace).

* Only Singapore Citizens and Permanent Residents (PRs) are accepted for this position due to project requirements.

What We Offer

• A leading role in cutting-edge AI projects within the semiconductor industry.

• The opportunity to work with and learn from experts in the field of AI and data science.

• A dynamic, innovative, and supportive work environment.

• Competitive salary and benefits package.

• Career growth opportunities in a fast-paced technology company.

Data Engineer

Singapore, Singapore SOUTHERN RIDGES CAPITAL PTE. LTD.

Posted 4 days ago

Job Description

Southern Ridges Capital

Southern Ridges Capital is an investment firm managing fixed income, currency assets, and derivatives by employing discretionary macro and relative value investment strategies.

We believe in cultivating a strong, collaborative culture where people are empowered to learn, grow, and do their best work. We are committed to mentoring early-career professionals and giving them the tools, exposure, and guidance to succeed in a dynamic and intellectually challenging environment.


What You’ll Do

As a Data Engineer, you’ll work closely with our investment team to support the development and maintenance of robust data systems that drive investment research and decision-making.

This is a hands-on role ideal for someone with a strong interest in data engineering, financial markets, and real-world applications of data-driven tools.


Your key responsibilities will include:

  • Build and maintain scalable data pipelines and infrastructure to ingest, process, and store structured and unstructured data from various sources
  • Prepare data for analysis or operational use
  • Ensure data integrity, consistency, and quality by implementing monitoring, validation, and error-handling frameworks (see the sketch after this list)
  • Collaborate closely with data analysts, researchers and portfolio managers to understand data needs and deliver fit-for-purpose solutions
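
For illustration only: the validation work described above usually starts with simple, explicit checks on each ingested dataset. The sketch below uses a hypothetical schema and rules (not the firm's actual framework) to show what that can look like in Python with pandas.

```python
# Basic data-quality checks for an ingested price series (hypothetical schema).
import pandas as pd


def validate(prices: pd.DataFrame) -> list[str]:
    """Return a list of human-readable data-quality issues; empty means clean."""
    issues = []
    required = {"date", "ticker", "close"}
    missing = required - set(prices.columns)
    if missing:
        issues.append(f"missing columns: {sorted(missing)}")
        return issues
    if prices["close"].isna().any():
        issues.append("null close prices found")
    if (prices["close"] <= 0).any():
        issues.append("non-positive close prices found")
    if prices.duplicated(subset=["date", "ticker"]).any():
        issues.append("duplicate (date, ticker) rows found")
    return issues


# Tiny made-up sample that trips two of the checks above.
sample = pd.DataFrame({
    "date": ["2024-06-03", "2024-06-03", "2024-06-04"],
    "ticker": ["SGD=X", "SGD=X", "SGD=X"],
    "close": [1.352, 1.352, None],
})
for issue in validate(sample):
    print("DATA QUALITY:", issue)
```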

What You’ll Gain

  • Hands-on Experience: Exposure to live trading environments and real-world investment problems from day one
  • Mentorship & Learning: Work closely with seasoned professionals who will support your growth through feedback and collaboration
  • Professional Development: Build skills in coding, data analysis, and financial market research
  • Impact: See your work contribute directly to the investment process and help drive results

What We’re Looking For

Technical Skills

  • At minimum, a degree with a major in data science, statistics, mathematics, physics, engineering, or a related field
  • Proficient in Python and/or R
  • Willingness to learn and apply new technologies in a practical, real-world setting

Helpful Experience

  • Exposure to macroeconomic or financial data tools like Bloomberg, CEIC, Haver, or Macrobond
  • Interest in financial markets and macroeconomic trends
  • Basic understanding of options, derivatives, or econometrics

Soft Skills

  • Strong attention to detail and a structured, logical approach to problem-solving
  • Eagerness to learn and improve your technical and domain knowledge
  • Good written and verbal communication skills.
  • Team-oriented mindset with a positive attitude and initiative
  • High standards of integrity and professionalism

Why Join Us

At Southern Ridges Capital, you will work closely with the investment team and have the opportunity to contribute to real investment decisions and see your impact. You will gain broader experience and a better understanding of the investment world by working in a smaller, more collaborative setup like ours.

Data Engineer

Singapore, Singapore 1MTECH PTE. LTD.

Posted 4 days ago

Job Description

Job Scope:

a) Perform impact analysis for upstream and downstream changes affecting DataMart and reporting systems
b) Design, develop, and deploy programs, source code, batch scripts, complex SQL stored procedures, functions and triggers, SSIS packages, and SSRS reports.
c) Follow the organizational SDLC processes to deliver projects or enhancement requests; create/update SDLC documents including functional and non-functional specifications, technical design documents, test plans, test cases, release procedures, system operational documents, user manuals, etc.
d) Review technical deliverables from third-party vendors, including functional and design specs, programs/scripts, test results, release runbooks, and system operational manuals.
e) Investigate and troubleshoot production issues and system problems escalated by the IT operations team or business users; identify the root cause, provide a workaround to rectify the issue, and work out the long-term solution to fix it permanently.
f) Participate in regular maintenance activities, e.g., the yearly DR drill and quarterly support for server or platform software patching.
g) Provide ad-hoc support for other IT service requests, e.g., data extraction, data alteration, extracting system logic, and answering users' inquiries about the data/logic in the system.

h) Prepare artefacts in accordance with SMBC's system lifecycle framework and documentation.
i) Manage, maintain, and support applications and the corresponding operating environments, focusing on stability, quality, and functionality against service level expectations.
j) Evaluate the current system state, identify aspects which could be improved, and recommend changes to achieve the improvement.
k) Coordinate with IT teams on system environment setup and maintenance
l) Coordinate production releases and provide post-implementation support
m) Provide on-call support and after-hours/weekend support as needed to cover application support and change deployment

n) Highlight or escalate risks and issues to relevant parties in a timely manner.
o) Identify customers' needs and provide value-added solutions to them.

#J-18808-Ljbffr
This advertiser has chosen not to accept applicants from your region.

Data Engineer

Singapore, Singapore PEOPLE ADVANTAGE PTE. LTD.

Posted 5 days ago

Job Description

  • Design, build, and maintain robust data pipelines (ETL/ELT) to ingest, process, and transform large-scale log data for the detection of unauthorized privileged access.

  • Develop and manage scalable data warehousing solutions that support advanced analytics, AI models, and security monitoring use cases.

  • Operationalize machine learning models by deploying them into production environments, ensuring reliability, scalability, and seamless integration with existing systems.

  • Implement strong data governance practices to ensure data quality, consistency, and integrity across multiple sources and platforms.

  • Collaborate cross-functionally with data scientists, software engineers, and cybersecurity teams to deliver secure, data-driven features and enhance detection capabilities.

  • Establish and enforce best practices for data engineering and DevOps processes, including CI/CD pipelines, monitoring, and automation.

  • Provide technical leadership and mentorship by guiding junior engineers, fostering capability building, and driving innovation in data-driven security solutions.

Requirements

  • Professional Experience: 1–3 years for Junior roles and 3–7 years for Senior roles in data engineering or a closely related field, with proven hands-on project exposure.

  • Strong proficiency in Python for data processing, pipeline development, and integration tasks, with good coding practices and debugging skills.

  • Experience working with SQL, NoSQL, and Graph databases, including schema design, query optimization, and managing large-scale data storage solutions.

  • Familiarity with cloud platforms and services such as GCP, AWS Bedrock, and Splunk, with the ability to leverage them for data engineering and security use cases.

  • Hands-on experience with CI/CD setups, including version control (Git), automated testing, and deployment pipelines for scalable data solutions.

  • Solid foundation in algorithms, data structures, and integration strategies, with the ability to design scalable, efficient, and resilient data pipelines.

  • Exposure to ML/AI solution deployment, API development, and event-driven architectures (e.g., Kafka, SQS), along with an understanding of graph data structures for complex relationships would be an added advantage.

This is a 1-year contract position under People Advantage (Certis Group). We appreciate your application and regret that only shortlisted candidates will be notified.

By submitting your resume, you consent to the handling of your personal data in accordance with the Certis Group Privacy Policy.


EA Personnel Name: Siti Khatijah

EA Personnel No: R22111204

EA License No: 11C3955

Data Engineer

Singapore, Singapore ADECCO PERSONNEL PTE LTD

Posted 5 days ago

Job Description

Qualifications and Profile

Key Role Skill & Capability Requirements:
• Demonstrated experience turning business use cases and requirements into technical solutions.
• Experience in business process mapping for data and analytics solutions.
• Ability to conduct data profiling, cataloging, and mapping for technical design and construction of technical data flows.
• Ability to apply such methods to solve business problems using one or more Azure Data and Analytics services, in combination with building data pipelines, data streams, and system integrations.
• Mastery of Python and SQL is required.
• Knowledge of Azure Databricks, Azure Data Lake, Azure SQL Server, and Azure SQL is required.
• Experience preparing and transforming data.
• Experience in data migration is a plus.
• Demonstrated experience preparing data and building data pipelines.
• Strong team collaboration and experience working with remote teams.
• Knowledge of DevOps processes (including CI/CD) and Infrastructure-as-Code fundamentals.
• Experience with Git/TFS/VSTS is a must.

Next Step:

Prepare your updated resume (please include your current salary package with full breakdown such as base, incentives, annual wage supplement, etc.) and expected package. Simply click on 'Apply here' to drop your resume or email at

Susmita Sahu

EA License No: 91C2918

Personnel Registration Number: R23114076
