1,295 Cloud Data Engineer jobs in Singapore

Cloud Data Engineer

Singapore, Singapore · $120,000 - $240,000 / year · Encora

Posted today

Job Description

Cloud Data Engineer (AI & Analytics domain)

Important Information

Location: Singapore

Job Summary

As a Cloud Data Engineer, you will design, build, and maintain scalable, secure, and efficient cloud data architectures on platforms like AWS, Azure, or Google Cloud.

Required Skills and Experience

  • 5+ years of data consulting experience, or other relevant experience in the AI & Analytics domain, with a proven track record of building and maintaining client relationships
  • Collaborate with customers and account partners to identify new Data and AI opportunities and build proposals
  • Organize and lead educational and ideation AI and Generative AI workshops for customers
  • Understand customers' needs and assess their data maturity
  • Evaluate and recommend appropriate data pipelines, frameworks, tools, and platforms
  • Lead data feasibility studies and PoC projects to demonstrate the value and viability of Data and AI solutions
  • Develop end-to-end AI PoC projects using Python, Flask, FastAPI, and Streamlit
  • Experience with cloud services on AWS, GCP, or Azure to deploy PoCs and pipelines
  • Experience with big data frameworks on-premises and in the cloud
  • Collaborate with AI architects, engineers, and data scientists to develop Data and AI solutions tailored to clients' requirements
  • Lead integration and deployment of data and AI products
  • Stay updated on the latest advancements in big data and GenAI, along with best practices, to develop thought leadership and points of view (PoV)
  • Work with cross-functional teams and partners to develop, enhance, and package AI offerings and assets for customer-facing discussions

About Encora

Encora is a global company that offers Software and Digital Engineering solutions. Our practices include Cloud Services, Product Engineering & Application Modernization, Data & Analytics, Digital Experience & Design Services, DevSecOps, Cybersecurity, Quality Engineering, AI & LLM Engineering, among others.

At Encora, we hire professionals based solely on their skills and do not discriminate based on age, disability, religion, gender, sexual orientation, socioeconomic status, or nationality.

Cloud Data Engineer

Singapore, Singapore · $13,200 - $144,000 / year · UNISON Group

Posted today

Job Description

  • Design and architect data storage solutions, including databases, data lakes, and warehouses, using AWS services such as Amazon S3, Amazon RDS, Amazon Redshift, and Amazon DynamoDB, along with Databricks' Delta Lake. Integrate Informatica IDMC for metadata management and data cataloging.
  • Create, manage, and optimize data pipelines for ingesting, processing, and transforming data using AWS services like AWS Glue, AWS Data Pipeline, and AWS Lambda, Databricks for advanced data processing, and Informatica IDMC for data integration and quality.
  • Integrate data from various sources, both internal and external, into AWS and Databricks environments, ensuring data consistency and quality, while leveraging Informatica IDMC for data integration, transformation, and governance.
  • Develop ETL (Extract, Transform, Load) processes to cleanse, transform, and enrich data, making it suitable for analytical purposes using Databricks' Spark capabilities and Informatica IDMC for data transformation and quality.
  • Monitor and optimize data processing and query performance in both AWS and Databricks environments, making necessary adjustments to meet performance and scalability requirements. Utilize Informatica IDMC for optimizing data workflows.
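
The ETL bullet above (cleanse, transform, enrich) is the core of most Glue or Databricks jobs. As a rough illustration only, here is a plain-Python sketch of that step with hypothetical field names; in the stack described, the same logic would normally run as PySpark on Databricks rather than pure Python:

```python
from datetime import datetime

def cleanse_and_enrich(rows):
    """Cleanse raw records and enrich them for analytics.

    A plain-Python sketch of the transform step; field names
    (order_id, amount, country, created_at) are hypothetical.
    """
    out = []
    for row in rows:
        # Cleanse: drop records missing a primary key.
        if not row.get("order_id"):
            continue
        # Transform: normalize types and trim whitespace.
        amount = float(row.get("amount", 0) or 0)
        country = (row.get("country") or "").strip().upper()
        # Enrich: derive a partition-friendly date column.
        ts = datetime.fromisoformat(row["created_at"])
        out.append({
            "order_id": row["order_id"],
            "amount": amount,
            "country": country,
            "order_date": ts.date().isoformat(),
        })
    return out
```

The same shape translates almost directly into a PySpark `DataFrame` chain of `filter`, `withColumn`, and `select` calls.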

Requirements

  • Solid experience in data engineering, with expertise in AWS services, Databricks, and/or Informatica IDMC.
  • Proficiency in programming languages such as Python, Java, or Scala for building data pipelines.
  • Ability to evaluate potential technical solutions and make recommendations to resolve data issues, especially performance assessment of complex data transformations and long-running data processes.
  • Strong knowledge of SQL and NoSQL databases.
  • Familiarity with data modeling and schema design.
  • AWS certifications (e.g., AWS Certified Data Analytics - Specialty), Databricks certifications, and Informatica certifications are a plus.

Cloud Data Engineer

Singapore, Singapore Cybervaultit

Posted today

Job Description

Responsibilities
Design and architect data storage solutions, including databases, data lakes, and warehouses, using AWS services such as Amazon S3, Amazon RDS, Amazon Redshift, and Amazon DynamoDB, along with Databricks' Delta Lake. Integrate Informatica IDMC for metadata management and data cataloging.
Create, manage, and optimize data pipelines for ingesting, processing, and transforming data using AWS services like AWS Glue, AWS Data Pipeline, and AWS Lambda, Databricks for advanced data processing, and Informatica IDMC for data integration and quality.
Integrate data from various sources, both internal and external, into AWS and Databricks environments, ensuring data consistency and quality, while leveraging Informatica IDMC for data integration, transformation, and governance.
Develop ETL (Extract, Transform, Load) processes to cleanse, transform, and enrich data, making it suitable for analytical purposes using Databricks' Spark capabilities and Informatica IDMC for data transformation and quality.
Monitor and optimize data processing and query performance in both AWS and Databricks environments, making necessary adjustments to meet performance and scalability requirements. Utilize Informatica IDMC for optimizing data workflows.
Implement security best practices and data encryption methods to protect sensitive data in both AWS and Databricks, while ensuring compliance with data privacy regulations. Employ Informatica IDMC for data governance and compliance.
Implement automation for routine tasks, such as data ingestion, transformation, and monitoring, using AWS services like AWS Step Functions, AWS Lambda, Databricks Jobs, and Informatica IDMC for workflow automation.
Maintain clear and comprehensive documentation of data infrastructure, pipelines, and configurations in both AWS and Databricks environments, with metadata management facilitated by Informatica IDMC.
Collaborate with cross-functional teams, including data scientists, analysts, and software engineers, to understand data requirements and deliver appropriate solutions across AWS, Databricks, and Informatica IDMC.
Identify and resolve data-related issues and provide support to ensure data availability and integrity in both AWS, Databricks, and Informatica IDMC environments.
Optimize AWS, Databricks, and Informatica resource usage to control costs while meeting performance and scalability requirements. Stay up-to-date with AWS, Databricks, Informatica IDMC services, and data engineering best practices to recommend and implement new technologies and techniques.
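
The automation and monitoring duties above often reduce to programmatic data-quality gates between pipeline stages. A minimal sketch in plain Python, with hypothetical rule names; a real deployment in this stack would use Informatica IDMC rules, Glue Data Quality, or a scheduled Databricks job instead:

```python
def run_quality_checks(rows, checks):
    """Run simple data-quality rules over a batch and report failures.

    `checks` maps a rule name to a per-row predicate; a batch passes
    only if every row passes every rule. Hypothetical sketch only.
    """
    failures = {}
    for name, predicate in checks.items():
        # Collect the indices of rows that violate this rule.
        bad = [i for i, row in enumerate(rows) if not predicate(row)]
        if bad:
            failures[name] = bad
    return failures  # an empty dict means the batch passed

# Example rule set (hypothetical column names).
checks = {
    "amount_non_negative": lambda r: r.get("amount", 0) >= 0,
    "has_customer_id": lambda r: bool(r.get("customer_id")),
}
```

In an automated pipeline, a non-empty result would typically fail the run and trigger an alert rather than let bad data propagate downstream.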
Qualifications
Bachelor’s or Master’s degree in Computer Science, Data Engineering, or a related field.
Minimum 7 years of experience in data engineering, with expertise in AWS services, Databricks, and/or Informatica IDMC.
Proficiency in programming languages such as Python, Java, or Scala for building data pipelines.
Evaluate potential technical solutions and make recommendations to resolve data issues, especially performance assessment of complex data transformations and long-running data processes.
Strong knowledge of SQL and NoSQL databases.
Familiarity with data modeling and schema design.
Excellent problem-solving and analytical skills.
Strong communication and collaboration skills.
AWS certifications (e.g., AWS Certified Data Analytics - Specialty), Databricks certifications, and Informatica certifications are a plus.
Good to have skills:
Informatica Cloud, Databricks and AWS.

AWS Cloud Data Engineer

Singapore, Singapore · $90,000 - $120,000 / year · SEDHA CONSULTING PTE. LTD.

Posted today

Job Description

Junior Data Engineer (2 to 4 years of experience)

Position Overview

We are seeking a highly skilled AWS Cloud Data Engineer to design, build, and maintain scalable data pipelines and infrastructure on AWS. The ideal candidate will have hands-on experience with cloud-native services, strong data engineering capabilities, and expertise in supporting enterprise-level data ingestion, transformation, and management initiatives.

Key Responsibilities

  • Design, develop, and maintain data pipelines and ETL/ELT workflows using AWS services.

  • Build and optimize data lakes and data warehouses on AWS (e.g., S3, Redshift, Snowflake, Databricks).

  • Implement data ingestion from multiple structured and unstructured sources.

  • Develop and deploy scalable data processing solutions using AWS Lambda, Glue, EMR, Kinesis, Step Functions, and Terraform.

  • Ensure data quality, governance, and security in compliance with organizational and regulatory standards.

  • Collaborate with data scientists, analysts, and business stakeholders to enable analytics and AI/ML initiatives.

  • Monitor, troubleshoot, and optimize data infrastructure performance and costs.
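
One common technique behind the performance-and-cost bullet above is incremental (watermark-based) ingestion, so each run processes only new records. A minimal sketch, assuming a hypothetical `updated_at` column; on AWS the watermark would typically be persisted in DynamoDB or handled by a Glue job bookmark rather than passed in directly:

```python
def incremental_batch(records, last_watermark):
    """Select only records newer than the stored watermark.

    Sketch of watermark-based incremental ingestion. ISO-8601 date
    strings compare correctly lexicographically, so plain `>` works.
    """
    new = [r for r in records if r["updated_at"] > last_watermark]
    # Advance the watermark to the newest record seen, or keep it as-is
    # when the batch is empty.
    next_watermark = max((r["updated_at"] for r in new), default=last_watermark)
    return new, next_watermark
```

Re-running with the returned watermark yields an empty batch, which is what keeps repeated scheduled runs cheap and idempotent.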

Required Skills & Qualifications

  • Bachelor's or Master's degree in Computer Science, Data Engineering, or related field.

  • 2 to 4 years of experience in Data Engineering with strong exposure to AWS Cloud.

  • Strong hands-on expertise with AWS services such as S3, Glue, Redshift, Lambda, IAM, CloudFormation/Terraform.

  • Experience with data pipeline orchestration (Airflow, Step Functions, or similar).

  • Proficiency in Python, SQL, and PySpark.

  • Strong knowledge of data warehousing, data lakes, and data management practices.

  • Familiarity with Snowflake, Databricks, or similar platforms (preferred).

  • Strong understanding of DevOps practices, CI/CD pipelines, and cloud infrastructure.

  • Excellent problem-solving and communication skills.

Cloud Data Engineer Lead

Singapore, Singapore SYNAPXE PTE. LTD.

Posted today

Job Description

Roles and Responsibilities:

• Design and architect data storage solutions, including databases, data lakes, and warehouses, using AWS services such as Amazon S3, Amazon RDS, Amazon Redshift, and Amazon DynamoDB, along with Databricks' Delta Lake. Integrate Informatica IDMC for metadata management and data cataloging.

• Create, manage, and optimize data pipelines for ingesting, processing, and transforming data using AWS services like AWS Glue, AWS Data Pipeline, and AWS Lambda, Databricks for advanced data processing, and Informatica IDMC for data integration and quality.

• Integrate data from various sources, both internal and external, into AWS and Databricks environments, ensuring data consistency and quality, while leveraging Informatica IDMC for data integration, transformation, and governance.

• Develop ETL (Extract, Transform, Load) processes to cleanse, transform, and enrich data, making it suitable for analytical purposes using Databricks' Spark capabilities and Informatica IDMC for data transformation and quality.

• Monitor and optimize data processing and query performance in both AWS and Databricks environments, making necessary adjustments to meet performance and scalability requirements. Utilize Informatica IDMC for optimizing data workflows.

• Implement security best practices and data encryption methods to protect sensitive data in both AWS and Databricks, while ensuring compliance with data privacy regulations. Employ Informatica IDMC for data governance and compliance.

• Implement automation for routine tasks, such as data ingestion, transformation, and monitoring, using AWS services like AWS Step Functions, AWS Lambda, Databricks Jobs, and Informatica IDMC for workflow automation.

• Maintain clear and comprehensive documentation of data infrastructure, pipelines, and configurations in both AWS and Databricks environments, with metadata management facilitated by Informatica IDMC.

• Collaborate with cross-functional teams, including data scientists, analysts, and software engineers, to understand data requirements and deliver appropriate solutions across AWS, Databricks, and Informatica IDMC.

• Identify and resolve data-related issues and provide support to ensure data availability and integrity in both AWS, Databricks, and Informatica IDMC environments.

• Optimize AWS, Databricks, and Informatica resource usage to control costs while meeting performance and scalability requirements.

• Stay up-to-date with AWS, Databricks, Informatica IDMC services, and data engineering best practices to recommend and implement new technologies and techniques.
Requirements / Qualifications

• Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field.

• Minimum 10 years of experience in data engineering, with expertise in AWS services, Databricks, and/or Informatica IDMC.

• Proficiency in programming languages such as Python, Java, or Scala for building data pipelines.

• Evaluate potential technical solutions and make recommendations to resolve data issues, especially performance assessment of complex data transformations and long-running data processes.

• Strong knowledge of SQL and NoSQL databases.

• Familiarity with data modeling and schema design.

• Excellent problem-solving and analytical skills.

• Strong communication and collaboration skills.

• AWS certifications (e.g., AWS Certified Data Analytics - Specialty), Databricks certifications, and Informatica certifications are a plus.
Preferred Skills:

• Experience with big data technologies like Apache Spark and Hadoop on Databricks.

• Knowledge of data governance and data cataloguing tools, especially Informatica IDMC.

• Familiarity with data visualization tools like Tableau or Power BI.

• Knowledge of containerization and orchestration tools like Docker and Kubernetes.

• Understanding of DevOps principles for managing and deploying data pipelines.

• Experience with version control systems (e.g., Git) and CI/CD pipelines

Senior Cloud Data Engineer

Singapore, Singapore beBeeDataEngineering

Posted today

Job Description

As a Data Engineering professional, you will be responsible for designing, developing and maintaining complex data pipelines using Python. These pipelines will enable efficient data processing and orchestration within the AWS environment.

You will work closely with cross-functional teams to understand data requirements and architect robust solutions. Key responsibilities include:

  • Designing, developing and maintaining complex data pipelines using Python, PySpark and SQL for data processing and manipulation
  • Collaborating with cross-functional teams to understand data requirements and architect robust solutions, utilizing expertise in AWS services such as S3, Glue, EMR and Redshift
  • Implementing data integration and transformation processes to ensure optimal performance and reliability of data pipelines
  • Optimizing and fine-tuning existing data pipelines and Airflow workflows to improve efficiency, scalability and maintainability
  • Troubleshooting and resolving issues related to data pipelines, ensuring smooth operation and minimal downtime
  • Developing and maintaining documentation for data pipelines, processes and system architecture

To succeed in this role, you will need:

  • Bachelor's degree in Computer Science, Engineering or a related field
  • Proficiency in Python, PySpark and SQL for data processing and manipulation
  • At least 5 years of experience in data engineering, specifically working with Apache Airflow and AWS technologies
  • Strong knowledge of AWS services, particularly S3, Glue, EMR, Redshift and AWS Lambda
  • Understanding of Snowflake Data Lake is preferred
  • Experience with optimizing and scaling data pipelines for performance and efficiency
  • Good understanding of data modeling, ETL processes and data warehousing concepts
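
Airflow, called out in the requirements above, models a pipeline as a DAG of task dependencies, and the scheduler runs tasks in an order that respects them. That ordering can be sketched with the standard library's `graphlib` (Python 3.9+); the task names below are hypothetical, mirroring a typical extract-transform-load flow:

```python
from graphlib import TopologicalSorter

# Hypothetical dependency map: each key depends on the tasks in its set,
# just as an Airflow DAG wires upstream tasks to downstream ones.
deps = {
    "extract_s3": set(),
    "transform_glue": {"extract_s3"},
    "load_redshift": {"transform_glue"},
    "refresh_dashboard": {"load_redshift"},
}

# static_order() yields the tasks in an order that satisfies every dependency.
order = list(TopologicalSorter(deps).static_order())
```

For this linear chain the order is unique; with branching dependencies, Airflow can additionally run independent tasks in parallel.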

AWS Cloud Data Engineer

Singapore, Singapore EPS Consultants

Posted today

Job Description

Job Title:
AWS ETL Cloud Data Engineer
Job Overview:
The AWS Cloud Data Engineer will be responsible for designing, building, and maintaining scalable data pipelines and data infrastructure in the AWS cloud environment. This role requires expertise in AWS services, data modeling, ETL processes, and a keen understanding of best practices for data management and governance.
Key Responsibilities:
Design, build, and operationalize large-scale enterprise data solutions and applications using AWS data and analytics services in combination with third-party tools – including Spark/Python on Glue, Redshift, S3, Athena, RDS-PostgreSQL, Airflow, Lambda, DMS, CodeCommit, CodePipeline, CodeBuild, etc.
Design and build production ETL data pipelines from ingestion to consumption within a big data architecture, using DMS, DataSync, and Glue.
Understand existing applications (including on-premise Cloudera Data Lake) and infrastructure architecture.
Analyze, re-architect, and re-platform on-premise data warehouses to data platforms on AWS cloud using AWS or third-party services.
Design and implement data engineering, ingestion, and curation functions on AWS cloud using native AWS services or custom programming.
Perform detailed assessments of current data platforms and create transition plans to AWS cloud.
Collaborate with development, infrastructure, and data center teams to define Continuous Integration and Continuous Delivery processes following industry standards.
Work on hybrid Data Lake environments.
Coordinate with multiple stakeholders to ensure high standards are maintained.
Mandatory Skill-set:
Bachelor's Degree in Computer Science, Information Technology, or related fields.
5+ years of experience with ETL, Data Modeling, Data Architecture to build Data Lakes. Proficient in ETL optimization, designing, coding, and tuning big data processes using PySpark.
3+ years of extensive experience working on AWS platform using core services like AWS Athena, Glue PySpark, Redshift, RDS-PostgreSQL, S3, and Airflow for orchestration.
Good to Have Skills:
Fundamentals of the Insurance domain.
Functional knowledge of IFRS17.
Benefits:
14 days of annual leave (AL).
Company insurance benefits.

Cloud-Native Data Engineer

Singapore, Singapore beBeeData

Posted today

Job Description

Unlock Your Potential in Cloud-Native Data Engineering

About the Role

We are seeking a highly skilled Senior Data Engineer to join our team and play a key role in designing, building, and scaling cloud-native data pipelines.

  • Design and implement scalable ETL/ELT pipelines on AWS and GCP.
  • Migrate existing on-premises and hybrid workloads into cloud-native services.
  • Build reusable data frameworks for ingestion, quality monitoring, and transformations.

MUST-HAVE EXPERIENCE:

  • 7+ years in data engineering with end-to-end pipeline design and implementation.
  • AWS expertise: S3, Glue, Lambda, Step Functions, EMR, Athena, Fargate, AWS Batch.
  • GCP expertise: BigQuery, Dataflow, Pub/Sub, Dataproc, Cloud Functions.

BENEFITS:

  • Stimulating working environments
  • Unique career path
  • International mobility
  • Internal R&D projects

Join Our Team of Experts

At our company, we value innovation, collaboration, and growth. We strive to create a culture that promotes learning, creativity, and teamwork.
