126 ETL Processes Jobs in Singapore
Data Engineering Expert
Posted today
Job Description
The Data Technology Operations Masterplan (DTOM) is a centralized platform designed to overcome the challenges faced by quantitative strategists. As a key member of the team, you will contribute to the development and maintenance of this platform.
Responsibilities:
- Drive the transition from traditional laptop-based model development to a scalable, collaborative, and cloud-native solution.
- Leverage Databricks' Platform-as-a-Service (PaaS) to consolidate analytics capabilities across Artificial Intelligence (AI), Machine Learning (ML), Business Intelligence (BI), and Data Engineering into a unified platform.
- Build and optimize systems that streamline data workflows, enhance cross-team collaboration, and reduce reliance on fragmented or custom-built applications.
- Ensure high standards of reliability, scalability, and performance for the platform while fostering a culture of innovation, automation, and best practices in engineering and DevOps.
Requirements:
- Possess a degree in Computer Science/Information Technology or related fields.
- At least 5 years of DevOps experience, with strong expertise in CI/CD pipelines and artifact repository management.
- Proficiency in CI/CD tooling such as GitHub Actions and Jenkins.
- Proficiency in artifact repository management systems such as JFrog Artifactory and Nexus.
- Strong Linux administration skills and Shell scripting expertise.
- Experience in software engineering in at least one of these programming languages - JavaScript, Java, Python, or .NET.
- Experience in containerization technologies such as Docker, Kubernetes, EKS, Helm (Relevant certifications are a plus).
- Experience in AWS with solid understanding of Cloud services and infrastructure management (AWS certifications are a plus).
- Experience in diagnosing and resolving complex system issues across multiple technology layers.
- Excellent communication skills to work effectively with diverse engineering teams.
- Strong team-player mindset, focused on leveraging experience to help the team succeed.
- Passion for advocating and implementing best practices in Software Engineering and DevOps.
We are committed to being an equal opportunity employer and provide equal employment opportunities to all employees and applicants. We strive to cultivate a workplace that celebrates diversity and inclusion, where individuals of all backgrounds—regardless of nationality, ethnicity, religion, age, gender identity, sexual orientation, or any other distinguishing trait—can succeed and thrive.
We prohibit discrimination and harassment of any type with regard to race, color, religion, age, national origin, disability status, genetics, sexual orientation, gender identity, or expression. This policy applies to all terms and conditions of employment, including recruiting, hiring, and the entire employee lifecycle. We are focused on creating an environment where everyone can reach their full potential.
Data Engineering Specialist
Posted today
Job Description
Job Title: Data Engineering Specialist
We are seeking an experienced Data Engineer to join our team. This role involves designing, developing, and maintaining data pipelines and ETL processes using AWS services.
Key Responsibilities:
- Design and develop data models and databases to store and process large datasets.
- Develop and implement data integration solutions using AWS Glue and other tools.
- Work with cross-functional teams to design and implement data warehouses, data lakes, and operational data stores.
- Develop automated data collection processes and implement data security and compliance requirements.
Requirements:
- At least 3 years of experience in data engineering or a related field.
- Strong proficiency in Python, VQL, SQL, and other programming languages.
- Experience with AWS services, including Glue, Athena, S3, RDS, and SageMaker.
- Familiarity with data virtualisation concepts and tools, such as Denodo.
- Understanding of data governance and master data management principles.
- Experience working in Agile environments with iterative development practices.
What We Offer:
- A competitive salary and benefits package.
- Ongoing training and professional development opportunities.
- A collaborative and dynamic work environment.
- The opportunity to work on challenging projects and contribute to the growth of the company.
We are a leading provider of data engineering services. Our team is passionate about delivering high-quality solutions that meet the needs of our clients.
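To illustrate the extract-transform-load pattern this role centres on, here is a minimal pure-Python sketch. It uses an in-memory SQLite database and invented record fields (`order_id`, `amount`, `country`) as stand-ins for a real AWS Glue job and its sources, so treat it as a shape of the workflow rather than production code.

```python
import json
import sqlite3

# Hypothetical extracted payload, standing in for an API or S3 source.
RAW = json.dumps([
    {"order_id": 1, "amount": "19.90", "country": "SG"},
    {"order_id": 2, "amount": "5.00", "country": "MY"},
])

def extract(payload):
    """Extract: parse the raw payload into Python records."""
    return json.loads(payload)

def transform(records):
    """Transform: cast types and shape rows for the target table."""
    return [(r["order_id"], float(r["amount"]), r["country"]) for r in records]

def load(rows, conn):
    """Load: create the target table and insert the rows."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS orders "
        "(order_id INTEGER PRIMARY KEY, amount REAL, country TEXT)"
    )
    conn.executemany("INSERT INTO orders VALUES (?, ?, ?)", rows)
    conn.commit()

conn = sqlite3.connect(":memory:")
load(transform(extract(RAW)), conn)
total = conn.execute("SELECT SUM(amount) FROM orders").fetchone()[0]
```

In a Glue job the same three stages map onto reading a DynamicFrame from a catalogued source, applying transforms, and writing to a sink such as S3 or Redshift.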
Data Engineering Internship
Posted today
Job Description
Are you a motivated and curious individual looking to gain hands-on experience in data engineering? In this internship, you will help build and maintain the data pipelines, databases, and infrastructure that support real-world analytics and data-driven decision-making.
Key Responsibilities:
- Design and maintain scalable data pipelines to collect, process, and store both structured and unstructured data.
- Work with SQL and Python to support data ingestion, transformation, and cleaning.
- Support database tasks such as schema design, query optimization, and performance tuning across relational and NoSQL systems.
- Implement and test ETL workflows to integrate data from APIs, logs, and third-party sources.
- Collaborate with data scientists and analysts to understand business data needs and assist in preparing datasets for analytics and reporting.
- Monitor and debug existing pipelines, learning to diagnose and resolve common issues.
- Document workflows, tools used, and data handling best practices to contribute to internal knowledge-sharing.
- Stay updated with emerging trends and technologies in data engineering.
Requirements:
- Pursuing a Bachelor's or Master's degree in Computer Science, Information Systems, Data Engineering, or a related field.
- Basic understanding of data processing concepts such as ETL, databases, and APIs.
- Experience or familiarity with SQL, Python, or Java.
- Interest in big data technologies and frameworks.
- Strong analytical thinking, problem-solving ability, and eagerness to learn.
- Good communication and teamwork skills.
- Bonus: Familiarity with data visualization tools like Tableau, Power BI, or Looker.
- Availability for a full-time internship for at least 3-6 months is preferred.
- Curiosity about working with cloud platforms and modern data warehouses.
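The ingestion-and-cleaning work described above often starts with very simple transformations. The sketch below, using invented field names and cleaning rules purely for illustration, shows the kind of normalise/drop/dedupe pass an intern might write before data reaches analysts:

```python
# Messy input records, standing in for rows pulled from an API or log file.
raw_rows = [
    {"name": "  Alice ", "age": "34", "email": "ALICE@EXAMPLE.COM"},
    {"name": "Bob", "age": None, "email": "bob@example.com"},
    {"name": "  Alice ", "age": "34", "email": "ALICE@EXAMPLE.COM"},  # duplicate
]

def clean(rows):
    """Normalise whitespace and case, drop incomplete rows, dedupe."""
    seen = set()
    out = []
    for r in rows:
        if r["age"] is None:
            continue  # drop records missing a required field
        rec = (r["name"].strip(), int(r["age"]), r["email"].lower())
        if rec in seen:
            continue  # drop exact duplicates
        seen.add(rec)
        out.append(rec)
    return out

cleaned = clean(raw_rows)
```

The same logic scales up naturally to pandas or SQL once the rules are agreed with the downstream consumers.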
Data Engineering Leader
Posted today
Job Description
As a senior data engineering leader, you will be responsible for planning, executing, and delivering high-impact projects for our next-generation air traffic management platform. Your mission is to drive innovation and excellence in data engineering, ensuring seamless integration with cloud infrastructure and cutting-edge data platforms.
The ideal candidate will have proven experience in managing technical projects involving cloud-native architectures and data engineering platforms. You should possess strong knowledge of AWS services and data platforms such as Databricks, as well as proficiency in project tracking tools like Jira and Confluence.
Data Engineering Professional
Posted today
Job Description
We are seeking an experienced Cloud Data Engineer to join our team. In this role, you will be responsible for designing and implementing data architecture, pipelines, and ETL processes. You will also develop and maintain data platforms and dashboards to support analytics and reporting, ensuring data quality, reliability, and availability across all data platforms.
The ideal candidate will have strong proficiency in SQL, Python, and other relevant programming languages, as well as hands-on experience with IaC and Cloud services such as Terraform, BigQuery, Dataflow, Kafka, and Pub/Sub. Familiarity with data modeling, ETL processes, and data warehousing concepts is also essential.
Required Skills and Qualifications:
- Education: Bachelor's degree in Computer Science, Information Systems, or related field.
- Experience: 2-5 years of experience in data engineering or related roles.
- Technical Skills: Strong proficiency in SQL, Python, and other relevant programming languages, as well as hands-on experience with IaC and Cloud services.
- Soft Skills: Excellent problem-solving skills, attention to detail, effective communication skills, and the ability to work collaboratively in a team environment.
As a Cloud Data Engineer, you will have the opportunity to work on challenging projects, collaborate with a talented team, and contribute to the growth and success of our organization.
If you are a motivated and experienced data engineer looking for a new challenge, we encourage you to apply for this exciting opportunity.
Manager, Data Engineering
Posted today
Job Description
Key Responsibilities:
- Lead and manage a team of Data Analysts.
- Design, build, and maintain scalable and efficient data pipelines and architecture.
- Ensure data quality, governance, security, and compliance.
- Perform ETL operations across multiple data sources and platforms.
- Improve and optimize data storage, retrieval, and scalability.
- Collaborate across business units to deliver data-driven solutions.
- Drive initiatives in advanced analytics, AI/ML, and emerging data technologies.
- Own and manage Power BI reporting framework and delivery.
- Manage analytics project timelines and deliverables.
Requirements:
- Bachelor's or Master's degree in Computer Science, Engineering, or related field.
- 6+ years of experience in data engineering and analytics roles.
- Strong proficiency in Python, SQL, and Azure cloud platform.
- Experience with big data tools (e.g., Spark, Hadoop, Kafka).
- Knowledge of data modeling, data warehousing, and architecture.
- Strong leadership, project management, and communication skills.
EA License: 01C4394, R
By sending us your personal data and curriculum vitae (CV), you are deemed to consent to PERSOLKELLY Singapore Pte Ltd and its affiliates collecting, using, and disclosing your personal data for the purposes set out in the Privacy Policy, and to acknowledge that you have read, understood, and agree to the said Privacy Policy.
Data Engineering Lead
Posted today
Job Description
At Teksalt Solutions, we specialize in connecting top-tier talent with leading companies to create dynamic, productive workforces. We are committed to delivering technology solutions that not only meet but exceed the demands of the modern business landscape.
We are currently seeking a Data Engineering Lead - AWS Glue & PySpark Specialist for a permanent full-time position in Bangalore. The ideal candidate should have 5 to 8 years of experience with skills in AWS Glue, PySpark, and Python.
Key Responsibilities:
- Spark & PySpark Development: Design and implement scalable data processing pipelines using Apache Spark and PySpark for large-scale data transformations.
- ETL Pipeline Development: Build, maintain, and optimize ETL processes for seamless data extraction, transformation, and loading across various data sources and destinations.
- AWS Glue Integration: Utilize AWS Glue to create, run, and monitor serverless ETL jobs for data transformations and integrations in the cloud.
- Python Scripting: Develop efficient, reusable Python scripts to support data manipulation, analysis, and transformation within the Spark and Glue environments.
- Data Pipeline Optimization: Ensure that all data workflows are optimized for performance, scalability, and cost-efficiency on the AWS Cloud platform.
- Collaboration: Work closely with data analysts, data scientists, and other engineering teams to create reliable data solutions that support business analytics and decision-making.
- Documentation & Best Practices: Maintain clear documentation of processes, workflows, and code while adhering to best practices in data engineering, cloud architecture, and ETL design.
Required Skills:
- Expertise in Apache Spark and PySpark for large-scale data processing and transformation.
- Hands-on experience with AWS Glue for building and managing ETL workflows in the cloud.
- Strong programming skills in Python, with experience in data manipulation, automation, and integration with Spark and Glue.
- In-depth knowledge of ETL principles and data pipeline design, including optimization techniques.
- Proficiency in working with AWS services such as S3, Glue, Lambda, and Redshift.
- Strong skills in writing optimized SQL queries, with a focus on performance tuning.
- Ability to translate complex business requirements into practical technical solutions.
- Familiarity with Apache Airflow for orchestrating data workflows.
- Knowledge of data warehousing concepts and cloud-native analytics tools.
If you are passionate about data engineering and have the required skills and experience, we welcome you to apply for this position. Join us at Teksalt Solutions, where a pinch of us makes all the difference in the world of technology.
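The kind of Spark transformation this role centres on can be sketched in plain Python. The stand-in below groups records by key and aggregates them, the same logic a PySpark `df.groupBy("region").agg(...)` expresses over distributed data; the column names and values are hypothetical, invented for illustration:

```python
from collections import defaultdict

# Hypothetical input rows, standing in for a DataFrame read via AWS Glue.
rows = [
    {"region": "APAC", "sales": 120.0},
    {"region": "EMEA", "sales": 80.0},
    {"region": "APAC", "sales": 30.0},
]

def total_sales_by_region(records):
    """Sum sales per region, mimicking a groupBy/agg over one partition."""
    totals = defaultdict(float)
    for r in records:
        totals[r["region"]] += r["sales"]
    return dict(totals)

result = total_sales_by_region(rows)
```

Spark's value is running this shape of computation in parallel across partitions and then merging the partial totals, which is why the per-partition logic stays this simple.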
Big Data Engineering Lead
Posted 4 days ago
Job Description
We're looking for a seasoned Big Data Engineering Lead with expertise in Scala, Python, and PySpark to lead our client data engineering team. You'll be responsible for designing and implementing scalable, efficient, and fault-tolerant data pipelines, as well as mentoring team members and driving technical innovation.
Key Responsibilities:
- Design and develop large-scale data pipelines using Scala, Python, and PySpark
- Lead and mentor a team of data engineers to build and maintain data architectures
- Collaborate with cross-functional teams to identify data requirements and implement data-driven solutions
- Ensure data quality, integrity, and security across all data pipelines
- Develop and maintain technical documentation for data pipelines and architectures
- Stay up-to-date with industry trends and emerging technologies in big data, cloud computing, and data engineering
- Drive technical innovation and recommend new tools and technologies to improve data engineering capabilities
Requirements:
- 5+ years of experience in big data engineering, with expertise in Scala, Python, and PySpark
- Strong experience with big data technologies such as Apache Spark, Hadoop, and Kafka
- Experience with cloud-based data platforms such as AWS, GCP, or Azure
- Strong understanding of data architecture, data governance, and data security
- Excellent leadership and mentoring skills, with experience leading high-performing teams
- Strong communication and collaboration skills, with ability to work with cross-functional teams
- Bachelor's or Master's degree in Computer Science, Engineering, or related field