85 ETL Processes Jobs in Singapore
Data Engineering Lead
Posted today
Job Description
The Engineering Lead Analyst is a senior-level position responsible for leading a variety of engineering activities, including the design, acquisition, and deployment of hardware, software, and network infrastructure in coordination with the Technology team. The overall objective of this role is to lead efforts to ensure quality standards are met within existing and planned frameworks.
Responsibilities:
- Strategic Leadership: Define and execute the data engineering roadmap for Global Wealth Data, aligning with overall business objectives and technology strategy. This includes understanding the data needs of portfolio managers, investment advisors, and other stakeholders in the wealth management ecosystem.
- Team Management: Lead, mentor, and develop a high-performing, globally distributed team of data engineers, fostering a culture of collaboration, innovation, and continuous improvement.
- Architecture and Design: Oversee the design and implementation of robust and scalable data pipelines, data warehouses, and data lakes, ensuring data quality, integrity, and availability for global wealth data. This includes designing solutions for handling large volumes of structured and unstructured data from various sources.
- Technology Selection and Implementation: Evaluate and select appropriate technologies and tools for data engineering, staying abreast of industry best practices and emerging trends specific to wealth management data.
- Performance Optimization: Continuously monitor and optimize data pipelines and infrastructure for performance, scalability, and cost-effectiveness, ensuring optimal access to global wealth data.
- Collaboration: Partner with business stakeholders, data scientists, portfolio managers, and other technology teams to understand data needs and deliver effective solutions that support investment strategies and client reporting.
- Data Governance: Implement and enforce data governance policies and procedures to ensure data quality, security, and compliance with relevant regulations, particularly around sensitive financial data.
Qualifications:
- 10-15 years of hands-on experience with Hadoop, Scala, Java, Spark, Hive, Kafka, Impala, Unix scripting, and other big data frameworks.
- 4+ years of experience with relational SQL and NoSQL databases: Oracle, MongoDB, HBase
- Strong proficiency in Python and Spark (Java), with knowledge of core Spark concepts (RDDs, DataFrames, Spark Streaming, etc.), as well as Scala and SQL; see the sketch after this list
- Data integration, migration, and large-scale ETL experience (common ETL platforms such as PySpark, DataStage, Ab Initio, etc.): ETL design and build, handling, reconciliation, and normalization
- Data modeling experience (OLAP, OLTP, logical/physical modeling, normalization, knowledge of performance tuning)
- Experienced in working with large and multiple datasets and data warehouses
- Experience building and optimizing 'big data' data pipelines, architectures, and datasets.
- Strong analytic skills and experience working with unstructured datasets
- Ability to effectively use complex analytical, interpretive, and problem-solving techniques
- Experience with Confluent Kafka, Redhat JBPM, CI/CD build pipelines and toolchain – Git, BitBucket, Jira
- Experience with external cloud platforms such as OpenShift, AWS, and GCP
- Experience with container technologies (Docker, Pivotal Cloud Foundry) and supporting frameworks (Kubernetes, OpenShift, Mesos)
- Experienced in integrating search solutions with middleware and distributed messaging (Kafka)
- Highly effective interpersonal and communication skills with technical and non-technical stakeholders
- Experienced in the software development life cycle
- Excellent problem-solving skills and a strong mathematical and analytical mindset
- Ability to work in a fast-paced financial environment
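As a brief illustration of the core Spark concepts named above, here is a minimal, hypothetical PySpark sketch; all data, names, and columns are invented for the example and are not part of the role description:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("wealth-demo").getOrCreate()

# RDD API: low-level, schema-less transformations
rdd = spark.sparkContext.parallelize([("SG", 100.0), ("HK", 250.0), ("SG", 50.0)])
totals = rdd.reduceByKey(lambda a, b: a + b)  # sum per key

# DataFrame API: schema-aware and optimized by the Catalyst planner
df = spark.createDataFrame(rdd, ["region", "aum"])
df.groupBy("region").agg(F.sum("aum").alias("total_aum")).show()
```

The same aggregation is expressed twice here because the DataFrame form is usually preferred in practice: it lets Spark optimize the plan rather than executing opaque Python lambdas.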
Education:
- Bachelor's degree/University degree or equivalent experience
- Master's degree preferred
Job Family Group:
Technology
Job Family:
Systems & Engineering
Time Type:
Full time
Most Relevant Skills
Please see the requirements listed above.
Other Relevant Skills
For complementary skills, please see above and/or contact the recruiter.
Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law.
If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity, review Accessibility at Citi.
View Citi's EEO Policy Statement and the Know Your Rights poster.
Data Engineering Architect
Posted today
Job Description
We are seeking a highly proficient and results-driven Data Engineering Architect with a robust background in designing, implementing, and maintaining scalable and resilient data ecosystems. The ideal candidate will possess a minimum of five years of dedicated experience in orchestrating complex data workflows and will serve as a key contributor within our advanced data services team. This role requires a meticulous professional who can transform intricate business requirements into high-performance, production-grade data solutions.
Key Responsibilities:
- Architectural Stewardship: Design, develop, and optimize sophisticated data pipelines leveraging distributed computing frameworks to ensure efficiency, reliability, and scalability of data ingestion, transformation, and delivery layers.
- SQL Mastery: Act as a subject matter expert in SQL, crafting and refining highly complex, multi-layered queries and stored procedures for advanced data manipulation, extraction, and reporting, while ensuring optimal performance and resource utilization.
- Distributed Processing Expertise: Lead the development and deployment of data processing jobs using Apache Spark , orchestrating complex data transformations, aggregations, and feature engineering at petabyte-scale.
- Big Data Orchestration: Proactively manage and evolve our data warehousing solutions built on Apache Hive , overseeing schema design, partition management, and query optimization to support large-scale analytical and reporting needs.
- Collaborative Innovation: Work synergistically with senior engineers and cross-functional teams to conceptualize and execute architectural enhancements, data modeling strategies, and systems integrations that align with long-term business objectives.
- Quality Assurance & Governance: Establish and enforce rigorous data quality standards, implementing comprehensive validation protocols and monitoring mechanisms to guarantee data integrity, accuracy, and lineage across all systems.
- Operational Excellence: Proactively identify, diagnose, and remediate technical bottlenecks and anomalies within data workflows, ensuring system uptime and operational stability through systematic troubleshooting and root cause analysis.
Qualifications:
- Educational Foundation: Bachelor's degree in Computer Science, Information Systems, or a closely related quantitative field.
- Experience: A minimum of five (5) years of progressive, hands-on experience in a dedicated data engineering or equivalent role, with a proven track record of delivering enterprise-level data solutions.
- Core Technical Skills:
SQL: Expert-level proficiency in SQL programming is non-negotiable, including advanced query optimization, window functions, and schema design principles (see the window-function sketch after this list).
Distributed Computing: Demonstrated high-level proficiency with Apache Spark for large-scale data processing.
Data Warehousing: In-depth, practical experience with Apache Hive and its ecosystem.
- Conceptual Knowledge: Deep understanding of data warehousing methodologies, ETL/ELT processes, and dimensional modeling.
- Analytical Acumen: Exceptional problem-solving and analytical capabilities, with the ability to dissect complex technical challenges and formulate elegant, scalable solutions.
- Continuous Learning: A relentless curiosity and a strong desire to stay abreast of emerging technologies and industry best practices.
- Domain Preference: Prior professional experience within the Banking or Financial Services sector is highly advantageous.
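As a hedged illustration of the window-function proficiency described above, the following PySpark sketch computes a running total per client; the client and fee data are invented for the example:

```python
from pyspark.sql import SparkSession, Window
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("window-demo").getOrCreate()
df = spark.createDataFrame(
    [("alice", "2024-01", 120.0), ("alice", "2024-02", 90.0), ("bob", "2024-01", 200.0)],
    ["client", "month", "fees"],
)

# Running total of fees per client, ordered by month
w = Window.partitionBy("client").orderBy("month")
df.withColumn("fees_to_date", F.sum("fees").over(w)).show()
```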
Data Engineering Expert
Posted today
Job Description
1. Data Engineering & Platform Knowledge (Must)
Strong understanding of Hadoop ecosystem: HDFS, Hive, Impala, Oozie, Sqoop, Spark (on YARN).
Experience in data migration strategies (lift & shift, incremental, re-engineering pipelines).
Knowledge of Databricks architecture (Workspaces, Unity Catalog, Clusters, Delta Lake, Workflows).
2. Testing & Validation (Preferred)
Data reconciliation (source vs. target); see the sketch after this list.
Performance benchmarking.
Automated test frameworks for ETL pipelines.
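A minimal sketch of source-vs-target reconciliation in PySpark, assuming two illustrative DataFrames that share a schema (all data here is invented):

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("recon-demo").getOrCreate()
source = spark.createDataFrame([(1, "a"), (2, "b"), (3, "c")], ["id", "val"])
target = spark.createDataFrame([(1, "a"), (2, "B")], ["id", "val"])

# Row counts should match before deeper checks
print("source:", source.count(), "target:", target.count())

# Rows present in source but missing or different in target
source.exceptAll(target).show()
```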
3. Databricks-Specific Expertise (Preferred)
Delta Lake: ACID transactions, time travel, schema evolution, Z-ordering (see the sketch after this list).
Unity Catalog: Catalog/schema/table design, access control, lineage, tags.
Workflows/Jobs: Orchestration, job clusters vs. all-purpose clusters.
SQL Endpoints / Databricks SQL: Designing downstream consumption models.
Performance Tuning: Partitioning, caching, adaptive query execution (AQE), photon runtime.
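A hedged sketch of the Delta Lake features listed above, assuming a Databricks runtime where Delta and the OPTIMIZE command are available; the table and column names are placeholders:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()  # Delta is bundled with the Databricks runtime

# Time travel: query an earlier version of a Delta table
v0 = spark.sql("SELECT * FROM main.sales.orders VERSION AS OF 0")

# Schema evolution: append a frame that adds a new column
(
    v0.withColumn("channel", F.lit("web"))
    .write.format("delta").mode("append")
    .option("mergeSchema", "true")
    .saveAsTable("main.sales.orders")
)

# Z-ordering: co-locate frequently filtered columns (Databricks/Delta SQL)
spark.sql("OPTIMIZE main.sales.orders ZORDER BY (customer_id)")
```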
4. Migration & Data Movement (Preferred)
Data migration from HDFS/Cloudera to cloud storage (ADLS/S3/GCS).
Incremental ingestion techniques (Change Data Capture, Delta ingestion frameworks); see the sketch after this list.
Mapping Hive Metastore to Unity Catalog (metastore migration).
Refactoring HiveQL/Impala SQL to Databricks SQL (syntax differences).
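One way to sketch incremental ingestion on Databricks is Auto Loader (the cloudFiles source); in the assumption-laden example below, the storage path, checkpoint location, and table name are all placeholders:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Auto Loader incrementally discovers new files landing in cloud storage
stream = (
    spark.readStream.format("cloudFiles")
    .option("cloudFiles.format", "json")
    .load("abfss://landing@account.dfs.core.windows.net/trades/")
)

# Append new records to a Delta target; the checkpoint tracks what was ingested
(
    stream.writeStream.format("delta")
    .option("checkpointLocation", "/chk/trades")
    .toTable("bronze.trades")
)
```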
5. Security & Governance (Nice to have)
Mapping Cloudera Ranger/SSO policies to Unity Catalog RBAC.
Azure AD / AWS IAM integration with Databricks.
Data encryption, masking, anonymization strategies.
Service Principal setup & governance.
6. DevOps & Automation (Nice to have)
Infrastructure as Code (Terraform for Databricks, Cloud storage, Networking).
CI/CD for Databricks (GitHub Actions, Azure DevOps, Databricks Asset Bundles).
Cluster policies & job automation (see the sketch after this list).
Monitoring & logging (Databricks system tables, cloud-native monitoring).
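As a small illustration of programmatic automation, the Databricks SDK for Python can enumerate clusters, e.g. to audit them against cluster policies. This sketch assumes the databricks-sdk package and ambient workspace authentication; it is one possible tool, not a prescribed toolchain:

```python
from databricks.sdk import WorkspaceClient

# Credentials are resolved from the environment or ~/.databrickscfg
w = WorkspaceClient()

# List clusters and their states for a quick policy/automation audit
for cluster in w.clusters.list():
    print(cluster.cluster_name, cluster.state)
```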
7. Cloud & Infra Skills (Nice to have)
- Strong knowledge of the target cloud (AWS/Azure/GCP):
  - Storage (S3/ADLS/GCS).
  - Networking (VNETs, Private Links, Security Groups).
  - IAM & Key Management.
8. Soft Skills
Ability to work with business stakeholders for data domain remapping.
Strong documentation and governance mindset.
Cross-team collaboration (infra, security, data, business).
Data Engineering Specialist
Posted today
Job Description
Data Engineering Specialist
About the Role:
We are seeking a highly skilled and detail-oriented Data Engineering Specialist to join our growing analytics team. In this role, you will play a key part in designing and implementing data pipelines and storage solutions to support business decisions.
You will be responsible for extracting, analyzing, and interpreting large datasets to provide actionable insights to stakeholders. Proficiency in Spark, Python, and data analytics is essential, as well as excellent communication skills to work closely with cross-functional teams.
Key Responsibilities:
- Design and implement data pipelines using Spark, AWS, and Azure
- Analyze and perform data profiling to understand data patterns and discrepancies (see the profiling sketch after this list)
- Develop data pipeline automation using cloud-based technologies
- Work with stakeholders to translate business requirements into technical requirements
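A minimal, hypothetical PySpark profiling pass that counts nulls per column, which is one quick way to surface the discrepancies mentioned above; the DataFrame is invented for the example:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("profiling-demo").getOrCreate()
df = spark.createDataFrame(
    [(1, "a", None), (2, None, 3.5), (3, "c", 1.2)],
    ["id", "code", "score"],
)

# Null count per column: a quick first look at data quality
df.select(
    [F.sum(F.col(c).isNull().cast("int")).alias(c) for c in df.columns]
).show()
```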
Requirements:
- Bachelor's degree in Computer Science or related field
- Minimum of 4 years' experience in data engineering
- Strong knowledge of Spark, Python, and data analytics
Benefits:
- Collaborative and dynamic work environment
- Opportunities for growth and professional development
- Competitive salary and benefits package
Data Engineering Strategist
Posted today
Job Description
As a data engineering strategist, you will play a critical role in designing and implementing cutting-edge data storage solutions. Using AWS services such as Amazon S3, Amazon RDS, Amazon Redshift, and Amazon DynamoDB, along with Databricks' Delta Lake, you will integrate Informatica IDMC for metadata management and data cataloging.
Key Responsibilities:
Data Engineering Innovator
Posted today
Job Description
We are seeking a skilled professional to lead the design, implementation and maintenance of scalable data pipelines and architectures.
The ideal candidate will have a solid understanding of ETL processes, data warehousing concepts and data modeling best practices.
Key Responsibilities
- Design and develop robust data pipelines and architectures to support data ingestion, processing and storage.
- Develop and optimize complex SQL queries and stored procedures for data extraction, transformation and analysis.
- Model data to meet different use cases and automate data workflows.
- Collaborate with cross-functional teams to deliver high-quality data solutions.
- Lead the integration of data from various sources into data lakes and warehouses.
- Monitor and troubleshoot data pipelines and workflows to ensure optimal performance and reliability.
Requirements
- Minimum 3 years of experience in data engineering fields with system integration.
- Proven solutions: demonstrated experience delivering effective working solutions, particularly in cloud-based environments.
Technical Skills
- Proficiency in Databricks, Azure Data Lake, Power BI, Tableau, and related data processing and visualization software.
- Familiarity with Windows, Linux, AWS and/or Azure platforms.
- Strong programming skills in languages such as Python and R.
Benefits
Join a forward-thinking team that is transforming government digital services.
Thrive in a dynamic environment where you can utilize your technical expertise to drive business success.
Enjoy a collaborative work culture that fosters innovation and creativity.
Work on challenging projects that help shape the future of government digital services.
Data Engineering Specialist
Posted today
Job Description
As a Data Engineering Specialist, you will play a critical role in designing, developing, and maintaining large-scale data systems. You will collaborate with stakeholders to identify data-related technical issues and infrastructure needs, and work closely with data scientists to gather requirements for modeling.
Key Responsibilities
- Design and develop data models, ETL processes, data warehouses, and pipeline solutions for structured/unstructured data from various sources.
- Define and monitor SLAs for data pipelines and products.
- Execute data quality assurance practices and support data management solutions pre-sales initiatives.
Technical Requirements
- Expertise in relational/non-relational databases and enterprise data warehouses.
- Proficiency in big data technologies (e.g., Hadoop, Spark).
- Knowledge of data ingestion technologies (e.g., Flume, Kafka); see the streaming sketch after this list.
- Experience in scripting, programming, and software development (e.g., Java, Python) for Windows or Linux.
- Understanding of machine learning and computer vision is a plus.
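A hedged sketch of Kafka ingestion with Spark Structured Streaming; the broker address and topic are placeholders, and the Spark-Kafka connector package is assumed to be on the classpath:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("kafka-demo").getOrCreate()

# Subscribe to a topic; Kafka delivers key/value as binary columns
events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "events")
    .load()
)

# Decode the payload and write to the console for inspection
query = (
    events.selectExpr("CAST(value AS STRING) AS json")
    .writeStream.format("console")
    .start()
)
```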
Benefits
This role offers the opportunity to work on cutting-edge technology and contribute to the success of our organization.
Data Engineering Lead
Posted today
Job Description
Job Summary:
Senior Data Engineer
We are seeking a seasoned Senior Data Engineer to join our team. This is an exceptional opportunity for a highly skilled individual to drive data-driven decisions and contribute to the success of our organization.
- Data Engineering:
- Design, implement, and maintain large-scale data pipelines using Python and scalable architectures.
- Collaborate with cross-functional teams to ensure seamless integration and deployment of data solutions.
Technical Leadership and Quality Assurance:
- Architect and lead the development of robust data access controls and monitoring systems to ensure secure data operations.
- Implement automated testing and validation frameworks to ensure data accuracy and quality.
Service Delivery Enhancement:
- Develop standardized frameworks for evaluating and fulfilling data requests, reducing processing time.
- Lead first-level triage operations and establish clear escalation pathways for complex issues.
About You:
- Bachelor's or Master's degree in Computer Science, Engineering, Information Systems, or a related field.
- Minimum 5 years of experience in data engineering, technical leadership, or equivalent roles at Senior Consultant level.
- Proven expertise in Python, data modeling, and large-scale datasets.
- Strong stakeholder engagement skills and ability to communicate complex ideas effectively.
In this role, you will have the opportunity to work with a talented team of professionals who share your passion for data-driven decision-making.
Data Engineering Professional
Posted today
Job Description
Job Opportunity
">We are seeking a highly skilled Data Engineering Professional to join our team.
The ideal candidate will have a keen interest in Government projects and be experienced in setting up data pipelines, reviewing and maintaining scripts as part of the ETL process, building data marts and models, experimenting with and implementing new data tools, and documenting data lifecycle processes. Successful candidates must hold a degree in Computer Science or equivalent, possess strong analytical and data-related skills, have at least 3 years in data engineering or management, exhibit good interpersonal and communication skills, and demonstrate experience with Azure Synapse Analytics and Tableau.
About the Role
This is an exciting opportunity for a motivated and detail-oriented individual to contribute to our organization's success by designing and implementing data-driven solutions that drive business growth and improvement. As a key member of our team, you will be responsible for developing and maintaining high-quality data systems, ensuring data accuracy and integrity, and collaborating with cross-functional teams to achieve business objectives. If you are passionate about data engineering and want to make a meaningful impact, we encourage you to apply for this role.
Data Engineering Specialist
Posted today
Job Description
As a skilled data professional, you will play a pivotal role in designing, developing, and maintaining large-scale data systems and pipelines. Our team is seeking a data engineering expert for this dynamic Singapore-based position.
Key Responsibilities:
- Design and deploy scalable data pipelines utilizing Python and AWS services such as S3, Lambda, EC2, and CloudWatch (see the sketch after this list).
- Ensure data integrity by testing and validating data pipelines for all new releases.
- Optimize pipeline performance while maintaining high-quality code.
- Collaborate with data scientists and engineers to ensure seamless data flow.
- Debug and troubleshoot pipeline issues, implementing fixes and documenting root causes.
Requirements:
- Proficiency in Python and experience in data pipeline development and troubleshooting.
- Familiarity with AWS services (S3, Lambda, EC2, CloudWatch) and strong debugging skills.
- Attention to detail and focus on data quality, with good understanding of data science workflows.
- Experience with CI/CD and version control is a plus.
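A minimal, hypothetical AWS Lambda handler in the spirit of the S3-driven pipelines above; the bucket layout and processed/ prefix are placeholders, and boto3 ships with the Lambda Python runtime:

```python
import json

import boto3

s3 = boto3.client("s3")

def handler(event, context):
    """Triggered by an S3 put event; copies each new object to a processed prefix."""
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        # Anything printed from a Lambda lands in CloudWatch Logs
        print(json.dumps({"received": key}))
        s3.copy_object(
            Bucket=bucket,
            Key=f"processed/{key}",
            CopySource={"Bucket": bucket, "Key": key},
        )
```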
Achieving data excellence requires dedication, precision, and passion. We are looking for someone who shares our commitment to delivering high-quality solutions.