1,295 Cloud Data Engineer jobs in Singapore
Cloud Data Engineer
Posted today
Job Description
Cloud Data Engineer (AI & Analytics domain)
Important Information
Location: Singapore
Job Summary
As a Cloud Data Engineer, you will design, build, and maintain scalable, secure, and efficient cloud data architectures on platforms such as AWS, Azure, or Google Cloud.
Required Skills and Experience
- 5+ years of Data consulting experience, or other relevant experience in the AI & Analytics domain, with a proven track record of building and maintaining client relationships
- Collaborate with customers and account partners to identify new Data and AI opportunities, and build proposals.
- Organize and lead educational and ideation AI and Generative AI workshops for customers
- Understand customers' needs and assess their data maturity
- Evaluate and recommend appropriate data pipelines, frameworks, tools and platforms
- Lead data feasibility studies and PoC projects to demonstrate the value and viability of Data and AI solutions
- Develop end-to-end AI PoC projects using Python, Flask, FastAPI, and Streamlit (see the sketch after this list)
- Experience with AWS / GCP / Azure cloud services for deploying PoCs and pipelines
- Experience with big data frameworks on-premises and in the cloud
- Collaborate with AI architects, engineers, and data scientists to develop Data and AI solutions tailored to the clients' requirements
- Lead integration and deployment of data and AI products
- Stay updated with the latest advancements in big data and GenAI along with best practices to develop thought leadership and PoV
- Work with cross-functional teams and partners to develop, enhance, and package AI offerings and assets for customer-facing discussions
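As an illustration of the PoC work above, here is a minimal sketch of a prediction endpoint built with FastAPI; the model file, endpoint path, and feature schema are hypothetical placeholders, not part of this role's actual stack.

```python
# Minimal sketch of an AI PoC service, assuming a pre-trained scikit-learn
# style model saved as "model.pkl" (hypothetical artifact).
import pickle

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="AI PoC API")

# Load the (hypothetical) model once at startup.
with open("model.pkl", "rb") as f:
    model = pickle.load(f)

class PredictRequest(BaseModel):
    features: list[float]

@app.post("/predict")
def predict(req: PredictRequest) -> dict:
    """Return a single prediction for one feature vector."""
    prediction = model.predict([req.features])[0]
    return {"prediction": float(prediction)}

# Run locally with: uvicorn app:app --reload
```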
About Encora
Encora is a global company that offers Software and Digital Engineering solutions. Our practices include Cloud Services, Product Engineering & Application Modernization, Data & Analytics, Digital Experience & Design Services, DevSecOps, Cybersecurity, Quality Engineering, AI & LLM Engineering, among others.
At Encora, we hire professionals based solely on their skills and do not discriminate based on age, disability, religion, gender, sexual orientation, socioeconomic status, or nationality.
Cloud Data Engineer
Posted today
Job Description
- Design and architect data storage solutions, including databases, data lakes, and warehouses, using AWS services such as Amazon S3, Amazon RDS, Amazon Redshift, and Amazon DynamoDB, along with Databricks' Delta Lake. Integrate Informatica IDMC for metadata management and data cataloging.
- Create, manage, and optimize data pipelines for ingesting, processing, and transforming data using AWS services like AWS Glue, AWS Data Pipeline, and AWS Lambda, Databricks for advanced data processing, and Informatica IDMC for data integration and quality.
- Integrate data from various sources, both internal and external, into AWS and Databricks environments, ensuring data consistency and quality, while leveraging Informatica IDMC for data integration, transformation, and governance.
- Develop ETL (Extract, Transform, Load) processes to cleanse, transform, and enrich data, making it suitable for analytical purposes using Databricks' Spark capabilities and Informatica IDMC for data transformation and quality (a PySpark sketch follows this list).
- Monitor and optimize data processing and query performance in both AWS and Databricks environments, making necessary adjustments to meet performance and scalability requirements. Utilize Informatica IDMC for optimizing data workflows.
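For illustration, a minimal sketch of the ingest-transform-load step described above, written in plain PySpark; the bucket paths and column names are hypothetical, and in an actual AWS Glue job the session would typically come from a GlueContext instead.

```python
# Minimal PySpark sketch of an ingest-transform-write step.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

# Ingest raw CSV from the landing zone (hypothetical bucket).
raw = spark.read.option("header", True).csv("s3://example-landing/orders/")

# Transform: cleanse types, drop bad rows, enrich with a load date.
clean = (
    raw.withColumn("amount", F.col("amount").cast("double"))
       .dropna(subset=["order_id", "amount"])
       .withColumn("load_date", F.current_date())
)

# Load: write partitioned Parquet to the curated zone for analytics.
clean.write.mode("overwrite").partitionBy("load_date").parquet(
    "s3://example-curated/orders/"
)
```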
Requirements
- Good experience in data engineering, with expertise in AWS services, Databricks, and/or Informatica IDMC.
- Proficiency in programming languages such as Python, Java, or Scala for building data pipelines.
- Ability to evaluate potential technical solutions and recommend resolutions for data issues, especially performance assessment of complex data transformations and long-running data processes.
- Strong knowledge of SQL and NoSQL databases.
- Familiarity with data modeling and schema design.
- AWS certifications (e.g., AWS Certified Data Analytics - Specialty), Databricks certifications, and Informatica certifications are a plus.
Cloud Data Engineer
Posted today
Job Description
Responsibilities
Design and architect data storage solutions, including databases, data lakes, and warehouses, using AWS services such as Amazon S3, Amazon RDS, Amazon Redshift, and Amazon DynamoDB, along with Databricks' Delta Lake. Integrate Informatica IDMC for metadata management and data cataloging.
Create, manage, and optimize data pipelines for ingesting, processing, and transforming data using AWS services like AWS Glue, AWS Data Pipeline, and AWS Lambda, Databricks for advanced data processing, and Informatica IDMC for data integration and quality.
Integrate data from various sources, both internal and external, into AWS and Databricks environments, ensuring data consistency and quality, while leveraging Informatica IDMC for data integration, transformation, and governance.
Develop ETL (Extract, Transform, Load) processes to cleanse, transform, and enrich data, making it suitable for analytical purposes using Databricks' Spark capabilities and Informatica IDMC for data transformation and quality.
Monitor and optimize data processing and query performance in both AWS and Databricks environments, making necessary adjustments to meet performance and scalability requirements. Utilize Informatica IDMC for optimizing data workflows.
Implement security best practices and data encryption methods to protect sensitive data in both AWS and Databricks, while ensuring compliance with data privacy regulations. Employ Informatica IDMC for data governance and compliance.
Implement automation for routine tasks, such as data ingestion, transformation, and monitoring, using AWS services like AWS Step Functions, AWS Lambda, Databricks Jobs, and Informatica IDMC for workflow automation (see the Lambda sketch after this list).
Maintain clear and comprehensive documentation of data infrastructure, pipelines, and configurations in both AWS and Databricks environments, with metadata management facilitated by Informatica IDMC.
Collaborate with cross-functional teams, including data scientists, analysts, and software engineers, to understand data requirements and deliver appropriate solutions across AWS, Databricks, and Informatica IDMC.
Identify and resolve data-related issues and provide support to ensure data availability and integrity across AWS, Databricks, and Informatica IDMC environments.
Optimize AWS, Databricks, and Informatica resource usage to control costs while meeting performance and scalability requirements. Stay up-to-date with AWS, Databricks, Informatica IDMC services, and data engineering best practices to recommend and implement new technologies and techniques.
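As an illustration of the automation bullet above, a minimal sketch of an AWS Lambda handler that starts a Glue job via boto3; the job name and argument keys are hypothetical placeholders.

```python
# Minimal sketch: a Lambda handler that kicks off a Glue job, e.g. on a
# schedule or an S3 event.
import boto3

glue = boto3.client("glue")

def lambda_handler(event, context):
    """Start the nightly ingestion Glue job and return its run id."""
    response = glue.start_job_run(
        JobName="nightly-ingest",  # hypothetical job name
        Arguments={"--source_path": "s3://example-landing/incoming/"},
    )
    return {"JobRunId": response["JobRunId"]}
```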
Qualifications
Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field.
Minimum 7 years of experience in data engineering, with expertise in AWS services, Databricks, and/or Informatica IDMC.
Proficiency in programming languages such as Python, Java, or Scala for building data pipelines.
Ability to evaluate potential technical solutions and recommend resolutions for data issues, especially performance assessment of complex data transformations and long-running data processes.
Strong knowledge of SQL and NoSQL databases.
Familiarity with data modeling and schema design.
Excellent problem-solving and analytical skills.
Strong communication and collaboration skills.
AWS certifications (e.g., AWS Certified Data Analytics - Specialty), Databricks certifications, and Informatica certifications are a plus.
Good to have skills: Informatica Cloud, Databricks, and AWS.
AWS Cloud Data Engineer
Posted today
Job Description
Junior Data Engineer (2 to 4 years' experience)
Position Overview
We are seeking a highly skilled AWS Cloud Data Engineer to design, build, and maintain scalable data pipelines and infrastructure on AWS. The ideal candidate will have hands-on experience with cloud-native services, strong data engineering capabilities, and expertise in supporting enterprise-level data ingestion, transformation, and management initiatives.
Key Responsibilities
Design, develop, and maintain data pipelines and ETL/ELT workflows using AWS services.
Build and optimize data lakes and data warehouses on AWS (e.g., S3, Redshift, Snowflake, Databricks).
Implement data ingestion from multiple structured and unstructured sources.
Develop and deploy scalable data processing solutions using AWS Lambda, Glue, EMR, Kinesis, Step Functions, and Terraform.
Ensure data quality, governance, and security in compliance with organizational and regulatory standards.
Collaborate with data scientists, analysts, and business stakeholders to enable analytics and AI/ML initiatives.
Monitor, troubleshoot, and optimize data infrastructure performance and costs.
Required Skills & Qualifications
Bachelor's or Master's degree in Computer Science, Data Engineering, or related field.
2 to 4 years of experience in Data Engineering with strong exposure to AWS Cloud.
Strong hands-on expertise with AWS services such as S3, Glue, Redshift, Lambda, IAM, CloudFormation/Terraform.
Experience with data pipeline orchestration (Airflow, Step Functions, or similar); an Airflow sketch follows this list.
Proficiency in Python, SQL, and PySpark.
Strong knowledge of data warehousing, data lakes, and data management practices.
Familiarity with Snowflake, Databricks, or similar platforms (preferred).
Strong understanding of DevOps practices, CI/CD pipelines, and cloud infrastructure.
Excellent problem-solving and communication skills.
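To illustrate the orchestration requirement above, a minimal Airflow sketch of a two-step daily pipeline; the DAG id and task bodies are hypothetical placeholders, and the `schedule` argument assumes Airflow 2.4+ (older versions use `schedule_interval`).

```python
# Minimal Airflow sketch: a daily DAG with an extract step followed by a
# transform step.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull raw files from the source system")

def transform():
    print("apply cleansing and write curated output")

with DAG(
    dag_id="daily_sales_pipeline",  # hypothetical
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    extract_task >> transform_task  # extract runs before transform
```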
Cloud Data Engineer Lead
Posted today
Job Description
• Design and architect data storage solutions, including databases, data lakes, and warehouses, using AWS services such as Amazon S3, Amazon RDS, Amazon Redshift, and Amazon DynamoDB, along with Databricks' Delta Lake. Integrate Informatica IDMC for metadata management and data cataloging.
• Create, manage, and optimize data pipelines for ingesting, processing, and transforming data using AWS services like AWS Glue, AWS Data Pipeline, and AWS Lambda, Databricks for advanced data processing, and Informatica IDMC for data integration and quality.
• Integrate data from various sources, both internal and external, into AWS and Databricks environments, ensuring data consistency and quality, while leveraging Informatica IDMC for data integration, transformation, and governance.
• Develop ETL (Extract, Transform, Load) processes to cleanse, transform, and enrich data, making it suitable for analytical purposes using Databricks' Spark capabilities and Informatica IDMC for data transformation and quality.
• Monitor and optimize data processing and query performance in both AWS and Databricks environments, making necessary adjustments to meet performance and scalability requirements. Utilize Informatica IDMC for optimizing data workflows.
• Implement security best practices and data encryption methods to protect sensitive data in both AWS and Databricks, while ensuring compliance with data privacy regulations. Employ Informatica IDMC for data governance and compliance.
• Implement automation for routine tasks, such as data ingestion, transformation, and monitoring, using AWS services like AWS Step Functions, AWS Lambda, Databricks Jobs, and Informatica IDMC for workflow automation.
• Maintain clear and comprehensive documentation of data infrastructure, pipelines, and configurations in both AWS and Databricks environments, with metadata management facilitated by Informatica IDMC.
• Collaborate with cross-functional teams, including data scientists, analysts, and software engineers, to understand data requirements and deliver appropriate solutions across AWS, Databricks, and Informatica IDMC.
• Identify and resolve data-related issues and provide support to ensure data availability and integrity across AWS, Databricks, and Informatica IDMC environments.
• Optimize AWS, Databricks, and Informatica resource usage to control costs while meeting performance and scalability requirements.
• Stay up-to-date with AWS, Databricks, Informatica IDMC services, and data engineering best practices to recommend and implement new technologies and techniques.
Requirements / Qualifications
• Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field.
• Minimum 10 years of experience in data engineering, with expertise in AWS services, Databricks, and/or Informatica IDMC.
• Proficiency in programming languages such as Python, Java, or Scala for building data pipelines.
• Ability to evaluate potential technical solutions and recommend resolutions for data issues, especially performance assessment of complex data transformations and long-running data processes.
• Strong knowledge of SQL and NoSQL databases.
• Familiarity with data modeling and schema design.
• Excellent problem-solving and analytical skills.
• Strong communication and collaboration skills.
• AWS certifications (e.g., AWS Certified Data Analytics - Specialty), Databricks certifications, and Informatica certifications are a plus.
Preferred Skills:
• Experience with big data technologies like Apache Spark and Hadoop on Databricks (see the Delta Lake sketch after this list).
• Knowledge of data governance and data cataloguing tools, especially Informatica IDMC.
• Familiarity with data visualization tools like Tableau or Power BI.
• Knowledge of containerization and orchestration tools like Docker and Kubernetes.
• Understanding of DevOps principles for managing and deploying data pipelines.
• Experience with version control systems (e.g., Git) and CI/CD pipelines.
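To illustrate the Spark-on-Databricks preference above, a minimal sketch that reads a Delta table, aggregates it, and writes the result back as Delta; the table paths are hypothetical, and on Databricks the `spark` session is provided by the runtime.

```python
# Minimal Delta Lake sketch: read, aggregate, write back as Delta.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("delta-aggregate").getOrCreate()

# Hypothetical curated-zone table.
orders = spark.read.format("delta").load("/mnt/curated/orders")

# Aggregate daily revenue from the order amounts.
daily_revenue = (
    orders.groupBy("load_date")
          .agg(F.sum("amount").alias("revenue"))
)

# Write the gold-layer result as a Delta table (hypothetical path).
daily_revenue.write.format("delta").mode("overwrite").save("/mnt/gold/daily_revenue")
```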
Senior Cloud Data Engineer
Posted today
Job Description
As a Data Engineering professional, you will be responsible for designing, developing and maintaining complex data pipelines using Python. These pipelines will enable efficient data processing and orchestration within the AWS environment.
You will work closely with cross-functional teams to understand data requirements and architect robust solutions. Key responsibilities include:
- Designing, developing and maintaining complex data pipelines using Python, PySpark and SQL for data processing and manipulation
- Collaborating with cross-functional teams to understand data requirements and architect robust solutions, utilizing expertise in AWS services such as S3, Glue, EMR and Redshift
- Implementing data integration and transformation processes to ensure optimal performance and reliability of data pipelines
- Optimizing and fine-tuning existing data pipelines and Airflow DAGs to improve efficiency, scalability, and maintainability (see the tuning sketch after this list)
- Troubleshooting and resolving issues related to data pipelines, ensuring smooth operation and minimal downtime
- Developing and maintaining documentation for data pipelines, processes and system architecture
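As one concrete example of the pipeline tuning mentioned above, a minimal PySpark sketch that replaces a shuffle join with a broadcast join when one side is a small dimension table; the paths and column names are hypothetical.

```python
# Minimal tuning sketch: broadcast the small dimension table so the large
# fact table does not need to be shuffled for the join.
from pyspark.sql import SparkSession
from pyspark.sql.functions import broadcast

spark = SparkSession.builder.appName("join-tuning").getOrCreate()

facts = spark.read.parquet("s3://example-curated/transactions/")  # large table
dims = spark.read.parquet("s3://example-curated/merchants/")      # small table

# Broadcasting avoids a full shuffle of the fact table.
enriched = facts.join(broadcast(dims), on="merchant_id", how="left")
enriched.write.mode("overwrite").parquet("s3://example-curated/enriched/")
```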
To succeed in this role, you will need:
- Bachelor's degree in Computer Science, Engineering or a related field
- Proficiency in Python, PySpark and SQL for data processing and manipulation
- At least 5 years of experience in data engineering, specifically working with Apache Airflow and AWS technologies
- Strong knowledge of AWS services, particularly S3, Glue, EMR, Redshift and AWS Lambda
- Understanding of Snowflake Data Lake is preferred
- Experience with optimizing and scaling data pipelines for performance and efficiency
- Good understanding of data modeling, ETL processes and data warehousing concepts
AWS Cloud Data Engineer
Posted today
Job Description
Job Title: AWS ETL Cloud Data Engineer
Job Overview:
The AWS Cloud Data Engineer will be responsible for designing, building, and maintaining scalable data pipelines and data infrastructure in the AWS cloud environment. This role requires expertise in AWS services, data modeling, ETL processes, and a keen understanding of best practices for data management and governance.
Key Responsibilities:
Design, build, and operationalize large-scale enterprise data solutions and applications using AWS data and analytics services in combination with third-party tools – including Spark/Python on Glue, Redshift, S3, Athena, RDS-PostgreSQL, Airflow, Lambda, DMS, CodeCommit, CodePipeline, CodeBuild, etc. (an Athena sketch follows this list).
Design and build production ETL data pipelines from ingestion to consumption within a big data architecture, using DMS, DataSync, and Glue.
Understand existing applications (including the on-premises Cloudera Data Lake) and infrastructure architecture.
Analyze, re-architect, and re-platform on-premises data warehouses to data platforms on AWS cloud using AWS or third-party services.
Design and implement data engineering, ingestion, and curation functions on AWS cloud using native AWS services or custom programming.
Perform detailed assessments of current data platforms and create transition plans to AWS cloud.
Collaborate with development, infrastructure, and data center teams to define Continuous Integration and Continuous Delivery processes following industry standards.
Work on hybrid Data Lake environments.
Coordinate with multiple stakeholders to ensure high standards are maintained.
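To illustrate the Athena consumption layer mentioned above, a minimal boto3 sketch that runs a query and prints the first page of results; the database, table, and output bucket are hypothetical.

```python
# Minimal Athena sketch: submit a query, poll for completion, print results.
import time

import boto3

athena = boto3.client("athena")

run = athena.start_query_execution(
    QueryString="SELECT order_id, amount FROM orders LIMIT 10",
    QueryExecutionContext={"Database": "curated"},  # hypothetical database
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
query_id = run["QueryExecutionId"]

# Poll until the query reaches a terminal state.
while True:
    status = athena.get_query_execution(QueryExecutionId=query_id)
    state = status["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(1)

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]
    for row in rows:
        print([col.get("VarCharValue") for col in row["Data"]])
```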
Mandatory Skill-set:
Bachelor's Degree in Computer Science, Information Technology, or related fields.
5+ years of experience with ETL, data modeling, and data architecture to build data lakes. Proficient in ETL optimization and in designing, coding, and tuning big data processes using PySpark.
3+ years of extensive experience working on AWS platform using core services like AWS Athena, Glue PySpark, Redshift, RDS-PostgreSQL, S3, and Airflow for orchestration.
Good to Have Skills:
Fundamentals of the Insurance domain.
Functional knowledge of IFRS17.
Understanding of 14 days AL (Annual Leave).
Knowledge of company insurance benefits.
Cloud-Native Data Engineer
Posted today
Job Description
Unlock Your Potential in Cloud-Native Data Engineering
About the Role
We are seeking a highly skilled Senior Data Engineer to join our team and play a key role in designing, building, and scaling cloud-native data pipelines.
- Design and implement scalable ETL/ELT pipelines on AWS and GCP (see the BigQuery sketch after this list).
- Migrate existing on-premises and hybrid workloads into cloud-native services.
- Build reusable data frameworks for ingestion, quality monitoring, and transformations.
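To illustrate the GCP side of the ELT pattern above, a minimal sketch loading Parquet files from Cloud Storage into BigQuery; the project, dataset, and bucket names are hypothetical.

```python
# Minimal BigQuery load sketch: Parquet files in GCS -> BigQuery table.
from google.cloud import bigquery

client = bigquery.Client()

table_id = "example-project.analytics.orders"  # hypothetical target table
job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.PARQUET,
    write_disposition=bigquery.WriteDisposition.WRITE_TRUNCATE,
)

# Load every Parquet file under the (hypothetical) landing prefix.
load_job = client.load_table_from_uri(
    "gs://example-landing/orders/*.parquet", table_id, job_config=job_config
)
load_job.result()  # block until the load job completes

print(f"Loaded {client.get_table(table_id).num_rows} rows into {table_id}")
```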
MUST-HAVE EXPERIENCE:
- 7+ years in data engineering with end-to-end pipeline design and implementation.
- AWS expertise: S3, Glue, Lambda, Step Functions, EMR, Athena, Fargate, AWS Batch.
- GCP expertise: BigQuery, Dataflow, Pub/Sub, Dataproc, Cloud Functions.
BENEFITS:
- Stimulating working environments
- Unique career path
- International mobility
- Internal R&D projects
Join Our Team of Experts
At our company, we value innovation, collaboration, and growth. We strive to create a culture that promotes learning, creativity, and teamwork.