177 ETL Processes Jobs in Singapore

Data Engineering

$80,000 - $120,000 yearly · PERSOL

Posted today


Job Description

As a Manager, Data Engineering & Analytics, you will lead the data analytics team and drive the overall data strategy. This hybrid role combines leadership in data engineering and data analysis, with a strong focus on Azure Data Services and scalable architecture. You will be responsible for managing data pipelines, ensuring data quality, optimizing infrastructure, and guiding advanced analytics efforts including AI/ML.

Key Responsibilities:

  • Lead and manage a team of local and remote Data Analysts.
  • Design, build, and maintain scalable and efficient data pipelines and architecture.
  • Ensure data quality, governance, security, and compliance.
  • Perform ETL operations across multiple data sources and platforms.
  • Improve and optimize data storage, retrieval, and scalability.
  • Collaborate across business units to deliver data-driven solutions.
  • Drive initiatives in advanced analytics, AI/ML, and emerging data technologies.
  • Own and manage Power BI reporting framework and delivery.
  • Manage analytics project timelines and deliverables.

Qualifications:

  • Bachelor's or Master's degree in Computer Science, Engineering, or related field.
  • 5+ years of experience in data engineering and analytics roles.
  • Strong proficiency in Python, SQL, and Azure cloud platform.
  • Experience with big data tools (e.g., Spark, Hadoop, Kafka).
  • Knowledge of data modeling, data warehousing, and architecture.
  • Strong leadership, project management, and communication skills.
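The ETL responsibilities and the Python/SQL proficiency listed above can be illustrated with a minimal extract-transform-load sketch using only Python's standard library; the file contents, the `orders` table, and the currency rule are all invented for illustration:

```python
# Minimal ETL sketch: extract CSV rows, transform them, load into SQLite.
# All names (the CSV payload, the "orders" table) are hypothetical.
import csv
import io
import sqlite3

RAW = """order_id,amount,currency
1,120.50,SGD
2,99.00,SGD
3,250.00,USD
"""

def extract(text):
    """Extract: parse CSV text into dicts."""
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    """Transform: keep SGD orders and cast amount to float."""
    return [
        (int(r["order_id"]), float(r["amount"]))
        for r in rows
        if r["currency"] == "SGD"
    ]

def load(rows, conn):
    """Load: write the cleaned rows into a target table."""
    conn.execute("CREATE TABLE orders (order_id INTEGER, amount REAL)")
    conn.executemany("INSERT INTO orders VALUES (?, ?)", rows)
    conn.commit()

conn = sqlite3.connect(":memory:")
load(transform(extract(RAW)), conn)
total = conn.execute("SELECT COUNT(*), SUM(amount) FROM orders").fetchone()
print(total)  # (2, 219.5)
```

A production pipeline would swap the in-memory pieces for real sources and an Azure-backed warehouse, but the extract/transform/load separation stays the same.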

Interested candidates who wish to apply for the advertised position, please click on "Apply Now". We regret that only shortlisted candidates will be notified.

Job Code: ANNH

EA License: 01C4394

By sending us your personal data and curriculum vitae (CV), you are deemed to consent to PERSOLKELLY Singapore Pte Ltd and its affiliates collecting, using, and disclosing your personal data for the purposes set out in the Privacy Policy, and to acknowledge that you have read, understood, and agree to the said Privacy Policy.


AVP, Data Engineering

Singapore, Singapore Sony Pictures Networks India

Posted 7 days ago


Job Description

Work from home

You will drive the future of data-driven entertainment by leading the Data Engineering team at SonyLIV. In this role, you will collaborate with Product Managers, Data Scientists, Software Engineers, and ML Engineers to support the AI infrastructure roadmap. Your primary responsibility will be to design and implement the data architecture that influences decision-making, derives insights, and directly impacts the growth of the platform while enhancing user experiences.

As a part of SonyLIV, you will have the opportunity to work with industry experts, access vast data sets, and leverage cutting-edge technology. Your contributions will play a crucial role in the products delivered and the engagement of viewers.

The ideal candidate should possess a strong foundation in data infrastructure and architecture, demonstrate leadership in scaling data teams, ensure operational excellence for efficiency and speed, and have a visionary approach to how Data Engineering can drive company success. If you are passionate about making a significant impact in the world of OTT and entertainment, we look forward to connecting with you.

As the AVP of Data Engineering at SonyLIV in Bangalore, your responsibilities will include defining the technical vision for scalable data infrastructure, leading innovation in data processing and architecture, ensuring operational excellence in data systems, building and mentoring a high-caliber data engineering team, collaborating with cross-functional teams, architecting and managing production data models and pipelines, and driving data quality and business insights.

This role requires a minimum of 15 years of progressive experience in data engineering, business intelligence, and data warehousing, along with expertise in managing large data engineering teams. Proficiency in modern data technologies such as Spark, Kafka, Redshift, Snowflake, and BigQuery is essential, as are strong SQL skills and experience with object-oriented programming languages. Experience in data governance, privacy, compliance, A/B testing methodologies, statistical analysis, and security protocols within large data ecosystems is also crucial.

Preferred qualifications include a Bachelor's or Master's degree in a related technical field, experience managing the end-to-end data engineering lifecycle, work with large-scale infrastructure, familiarity with automated data lineage and data auditing tools, and expertise in BI and visualization tools and advanced processing frameworks.

Joining SonyLIV will offer you the opportunity to lead the data engineering strategy, drive technological innovation, and enable data-driven decisions that shape the future of OTT entertainment. SonyLIV, a part of CulverMax Entertainment Pvt Ltd, is committed to creating an inclusive and equitable workplace where diversity is celebrated. Being part of this progressive content powerhouse will allow you to tell stories beyond the ordinary and contribute to the exciting journey of digital entertainment.


Data Engineering Lead

Singapore, Singapore · $150,000 - $250,000 yearly · Citi

Posted today


Job Description

The Engineering Lead Analyst is a senior-level position responsible for leading a variety of engineering activities, including the design, acquisition, and deployment of hardware, software, and network infrastructure in coordination with the Technology team. The overall objective of this role is to lead efforts to ensure quality standards are met within existing and planned frameworks.

Responsibilities:

  • Strategic Leadership: Define and execute the data engineering roadmap for Global Wealth Data, aligning with overall business objectives and technology strategy. This includes understanding the data needs of portfolio managers, investment advisors, and other stakeholders in the wealth management ecosystem.
  • Team Management: Lead, mentor, and develop a high-performing, globally distributed team of data engineers, fostering a culture of collaboration, innovation, and continuous improvement.
  • Architecture and Design: Oversee the design and implementation of robust and scalable data pipelines, data warehouses, and data lakes, ensuring data quality, integrity, and availability for global wealth data. This includes designing solutions for handling large volumes of structured and unstructured data from various sources.
  • Technology Selection and Implementation: Evaluate and select appropriate technologies and tools for data engineering, staying abreast of industry best practices and emerging trends specific to wealth management data.
  • Performance Optimization: Continuously monitor and optimize data pipelines and infrastructure for performance, scalability, and cost-effectiveness, ensuring optimal access to global wealth data.
  • Collaboration: Partner with business stakeholders, data scientists, portfolio managers, and other technology teams to understand data needs and deliver effective solutions that support investment strategies and client reporting.
  • Data Governance: Implement and enforce data governance policies and procedures to ensure data quality, security, and compliance with relevant regulations, particularly around sensitive financial data.

Qualifications:

  • 10-15 years of hands-on experience in Hadoop, Scala, Java, Spark, Hive, Kafka, Impala, Unix Scripting and other Big data frameworks.
  • 4+ years of experience with relational SQL and NoSQL databases: Oracle, MongoDB, HBase
  • Strong proficiency in Python and Spark Java, with knowledge of core Spark concepts (RDDs, DataFrames, Spark Streaming, etc.), Scala, and SQL
  • Data Integration, Migration & Large Scale ETL experience (Common ETL platforms such as PySpark/DataStage/AbInitio etc.) - ETL design & build, handling, reconciliation and normalization
  • Data Modeling experience (OLAP, OLTP, Logical/Physical Modeling, Normalization, knowledge on performance tuning)
  • Experienced in working with large and multiple datasets and data warehouses
  • Experience building and optimizing 'big data' data pipelines, architectures, and datasets.
  • Strong analytic skills and experience working with unstructured datasets
  • Ability to effectively use complex analytical, interpretive, and problem-solving techniques
  • Experience with Confluent Kafka, Redhat JBPM, CI/CD build pipelines and toolchain – Git, BitBucket, Jira
  • Experience with external cloud platforms such as OpenShift, AWS, and GCP
  • Experience with container technologies (Docker, Pivotal Cloud Foundry) and supporting frameworks (Kubernetes, OpenShift, Mesos)
  • Experienced in integrating search solutions with middleware and distributed messaging (Kafka)
  • Highly effective interpersonal and communication skills with tech/non-tech stakeholders.
  • Experienced in software development life cycle and good problem-solving skills.
  • Excellent problem-solving skills and strong mathematical and analytical mindset
  • Ability to work in a fast-paced financial environment
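The "core Spark concepts" called out above (lazy transformations such as map and filter, followed by an action that forces evaluation) can be mirrored in plain Python for illustration; this sketch runs in a single process and only imitates the RDD programming model rather than running on a cluster:

```python
# RDD-style pipeline in plain Python: lazy transformations, then an action.
# Spark would distribute this over partitions; here it runs in one process.
from functools import reduce

events = [
    {"user": "a", "amount": 10},
    {"user": "b", "amount": -5},   # invalid record, filtered out
    {"user": "a", "amount": 30},
]

# filter + map are lazy generators, like Spark transformations
valid = (e for e in events if e["amount"] > 0)
amounts = (e["amount"] for e in valid)

# reduce is the action that forces evaluation, like RDD.reduce
total = reduce(lambda x, y: x + y, amounts)
print(total)  # 40
```

Nothing is computed until `reduce` runs, which is the same laziness that lets Spark fuse a chain of transformations into one pass over the data.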

Education:

  • Bachelor's degree/University degree or equivalent experience
  • Master's degree preferred

-

Job Family Group:

Technology

-

Job Family:

Systems & Engineering

-

Time Type:

Full time

-

Most Relevant Skills

Please see the requirements listed above.

-

Other Relevant Skills

For complementary skills, please see above and/or contact the recruiter.

-

Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law.

If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity review Accessibility at Citi .

View Citi's EEO Policy Statement and the Know Your Rights poster.


Data Engineering Architect

Singapore, Singapore · $70,000 - $120,000 yearly · Unitech Consulting

Posted today


Job Description

We are seeking a highly proficient and results-driven Data Engineering Architect with a robust background in designing, implementing, and maintaining scalable and resilient data ecosystems. The ideal candidate will possess a minimum of five years of dedicated experience in orchestrating complex data workflows and will serve as a key contributor within our advanced data services team. This role requires a meticulous professional who can transform intricate business requirements into high-performance, production-grade data solutions.

Key Responsibilities:
  • Architectural Stewardship: Design, develop, and optimize sophisticated data pipelines leveraging distributed computing frameworks to ensure efficiency, reliability, and scalability of data ingestion, transformation, and delivery layers.
  • SQL Mastery: Act as a subject matter expert in SQL, crafting and refining highly complex, multi-layered queries and stored procedures for advanced data manipulation, extraction, and reporting, while ensuring optimal performance and resource utilization.
  • Distributed Processing Expertise: Lead the development and deployment of data processing jobs using Apache Spark , orchestrating complex data transformations, aggregations, and feature engineering at petabyte-scale.
  • Big Data Orchestration: Proactively manage and evolve our data warehousing solutions built on Apache Hive , overseeing schema design, partition management, and query optimization to support large-scale analytical and reporting needs.
  • Collaborative Innovation: Work synergistically with senior engineers and cross-functional teams to conceptualize and execute architectural enhancements, data modeling strategies, and systems integrations that align with long-term business objectives.
  • Quality Assurance & Governance: Establish and enforce rigorous data quality standards, implementing comprehensive validation protocols and monitoring mechanisms to guarantee data integrity, accuracy, and lineage across all systems.
  • Operational Excellence: Proactively identify, diagnose, and remediate technical bottlenecks and anomalies within data workflows, ensuring system uptime and operational stability through systematic troubleshooting and root cause analysis.
Required Competencies & Qualifications:
  • Educational Foundation: Bachelor's degree in Computer Science, Information Systems, or a closely related quantitative field.
  • Experience: A minimum of five (5) years of progressive, hands-on experience in a dedicated data engineering or equivalent role, with a proven track record of delivering enterprise-level data solutions.
  • Core Technical Skills:
    • SQL: Expert-level proficiency in SQL programming is non-negotiable, including advanced query optimization, window functions, and schema design principles.
    • Distributed Computing: Demonstrated high-level proficiency with Apache Spark for large-scale data processing.
    • Data Warehousing: In-depth, practical experience with Apache Hive and its ecosystem.
  • Conceptual Knowledge: Deep understanding of data warehousing methodologies, ETL/ELT processes, and dimensional modeling.
  • Analytical Acumen: Exceptional problem-solving and analytical capabilities, with the ability to dissect complex technical challenges and formulate elegant, scalable solutions.
  • Continuous Learning: A relentless curiosity and a strong desire to stay abreast of emerging technologies and industry best practices.
  • Domain Preference: Prior professional experience within the Banking or Financial Services sector is highly advantageous.
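As a small illustration of the window-function proficiency listed under Core Technical Skills, the following ranking query runs on any SQL engine with window-function support (SQLite 3.25+ via Python here); the `trades` table and its columns are invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE trades (desk TEXT, trader TEXT, pnl REAL);
INSERT INTO trades VALUES
  ('fx', 'ann', 120), ('fx', 'bob', 90),
  ('rates', 'cho', 200), ('rates', 'dee', 310);
""")

# Rank traders within each desk by P&L using a window function:
# PARTITION BY restarts the ranking for every desk.
rows = conn.execute("""
SELECT desk, trader,
       RANK() OVER (PARTITION BY desk ORDER BY pnl DESC) AS rnk
FROM trades
ORDER BY desk, rnk
""").fetchall()

for r in rows:
    print(r)
# ('fx', 'ann', 1)
# ('fx', 'bob', 2)
# ('rates', 'dee', 1)
# ('rates', 'cho', 2)
```

The same query shape works on Hive and Spark SQL, which is why window functions come up so often in warehouse roles.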


Data Engineering Expert

Singapore, Singapore · $120,000 - $200,000 yearly · 3 CUBED BUSINESS CONSULTING PTE. LTD.

Posted today


Job Description

1. Data Engineering & Platform Knowledge (Must)

  • Strong understanding of Hadoop ecosystem: HDFS, Hive, Impala, Oozie, Sqoop, Spark (on YARN).

  • Experience in data migration strategies (lift & shift, incremental, re-engineering pipelines).

  • Knowledge of Databricks architecture (Workspaces, Unity Catalog, Clusters, Delta Lake, Workflows).

2. Testing & Validation (Preferred)

  • Data reconciliation (source vs. target).

  • Performance benchmarking.

  • Automated test frameworks for ETL pipelines.
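Source-vs-target reconciliation of the kind listed above often begins with comparing row counts and an order-insensitive checksum of the two datasets; a minimal sketch, with hypothetical data:

```python
import hashlib

def dataset_fingerprint(rows):
    """Return (row count, order-insensitive checksum) for a dataset."""
    digest = 0
    for row in rows:
        h = hashlib.sha256(repr(row).encode()).hexdigest()
        digest ^= int(h, 16)  # XOR makes the checksum order-insensitive
    return len(rows), digest

source = [(1, "a"), (2, "b"), (3, "c")]
target = [(3, "c"), (1, "a"), (2, "b")]   # same rows, different order

src_count, src_sum = dataset_fingerprint(source)
tgt_count, tgt_sum = dataset_fingerprint(target)
match = (src_count == tgt_count) and (src_sum == tgt_sum)
print(match)  # True
```

Order-insensitivity matters because a migrated table (e.g. HDFS to Delta Lake) rarely preserves physical row order; hashing per row and combining with XOR sidesteps that without a full sort.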

3. Databricks-Specific Expertise (Preferred)

  • Delta Lake: ACID transactions, time travel, schema evolution, Z-ordering.

  • Unity Catalog: Catalog/schema/table design, access control, lineage, tags.

  • Workflows/Jobs: Orchestration, job clusters vs. all-purpose clusters.

  • SQL Endpoints / Databricks SQL: Designing downstream consumption models.

  • Performance Tuning: Partitioning, caching, adaptive query execution (AQE), photon runtime.

4. Migration & Data Movement (Preferred)

  • Data migration from HDFS/Cloudera to cloud storage (ADLS/S3/GCS).

  • Incremental ingestion techniques (Change Data Capture, Delta ingestion frameworks).

  • Mapping Hive Metastore to Unity Catalog (metastore migration).

  • Refactoring HiveQL/Impala SQL to Databricks SQL (syntax differences).
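One common incremental-ingestion pattern mentioned above is a high-watermark pull on a change timestamp; a minimal sketch with invented rows and column names (a production CDC setup would typically read a change log rather than poll a table):

```python
# High-watermark incremental ingestion: pull only rows newer than the
# last watermark, then advance it. Timestamps are plain integers here.
source_rows = [
    {"id": 1, "updated_at": 100},
    {"id": 2, "updated_at": 150},
    {"id": 3, "updated_at": 200},
]

def incremental_load(rows, watermark):
    """Return rows changed since the watermark and the new watermark."""
    batch = [r for r in rows if r["updated_at"] > watermark]
    new_wm = max((r["updated_at"] for r in batch), default=watermark)
    return batch, new_wm

# First run: everything after watermark 120 is picked up.
batch, wm = incremental_load(source_rows, watermark=120)
print([r["id"] for r in batch], wm)  # [2, 3] 200

# Second run: no new changes, so nothing is pulled and the watermark holds.
batch2, wm2 = incremental_load(source_rows, watermark=wm)
print(batch2, wm2)  # [] 200
```

In a Databricks migration the same idea usually lands as a `MERGE INTO` against a Delta table keyed on `id`, with the watermark stored in pipeline state.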

5. Security & Governance (Nice to have)

  • Mapping Cloudera Ranger/SSO policies to Unity Catalog RBAC.

  • Azure AD / AWS IAM integration with Databricks.

  • Data encryption, masking, anonymization strategies.

  • Service Principal setup & governance.

6. DevOps & Automation (Nice to have)

  • Infrastructure as Code (Terraform for Databricks, Cloud storage, Networking).

  • CI/CD for Databricks (GitHub Actions, Azure DevOps, Databricks Asset Bundles).

  • Cluster policies & job automation.

  • Monitoring & logging (Databricks system tables, cloud-native monitoring).

7. Cloud & Infra Skills (Nice to have)

  • Strong knowledge of the target cloud (AWS/Azure/GCP):

    • Storage (S3/ADLS/GCS).

    • Networking (VNETs, Private Links, Security Groups).

    • IAM & Key Management.

9. Soft Skills

  • Ability to work with business stakeholders for data domain remapping.

  • Strong documentation and governance mindset.

  • Cross-team collaboration (infra, security, data, business).


Data Engineering Specialist

Singapore, Singapore beBeeData

Posted today


Job Description

Job Title:
Data Engineering Specialist

About the Role:

We are seeking a highly skilled and detail-oriented Data Engineering Specialist to join our growing analytics team. In this role, you will play a key part in designing and implementing data pipelines and storage solutions to support business decisions.

You will be responsible for extracting, analyzing, and interpreting large datasets to provide actionable insights to stakeholders. Proficiency in Spark, Python, and data analytics is essential, as well as excellent communication skills to work closely with cross-functional teams.

Key Responsibilities:

  • Design and implement data pipelines using Spark, AWS, and Azure
  • Perform data profiling and analysis to understand data patterns and discrepancies
  • Develop data pipeline automation using cloud-based technologies
  • Work with stakeholders to translate business requirements into technical requirements

Requirements:

  • Bachelor's degree in Computer Science or related field
  • Minimum of 4 years' experience in Data Engineering fields
  • Strong knowledge of Spark, Python, and data analytics
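The data-profiling responsibility above typically starts with per-column null and distinct-value counts; a plain-Python sketch over invented records:

```python
# Minimal column profiler: null counts and distinct non-null values.
from collections import defaultdict

records = [
    {"country": "SG", "age": 34},
    {"country": "SG", "age": None},
    {"country": "MY", "age": 41},
]

def profile(rows):
    """Per-column null count and distinct non-null value count."""
    stats = defaultdict(lambda: {"nulls": 0, "distinct": set()})
    for row in rows:
        for col, val in row.items():
            if val is None:
                stats[col]["nulls"] += 1
            else:
                stats[col]["distinct"].add(val)
    return {c: {"nulls": s["nulls"], "distinct": len(s["distinct"])}
            for c, s in stats.items()}

report = profile(records)
print(report)
# {'country': {'nulls': 0, 'distinct': 2}, 'age': {'nulls': 1, 'distinct': 2}}
```

At scale the same aggregations would run in Spark (`count`, `approx_count_distinct`), but the report shape a stakeholder sees is essentially this.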

Benefits:

  • Collaborative and dynamic work environment
  • Opportunities for growth and professional development
  • Competitive salary and benefits package

Data Engineering Strategist

Singapore, Singapore beBeeEngineering

Posted today


Job Description

As a data engineering strategist, you will play a critical role in designing and implementing cutting-edge data storage solutions. Using AWS services such as Amazon S3, Amazon RDS, Amazon Redshift, and Amazon DynamoDB, along with Databricks' Delta Lake, you will integrate Informatica IDMC for metadata management and data cataloging.

Key Responsibilities:

Data Engineering Innovator

Singapore, Singapore beBeeDataEngineer

Posted today


Job Description

Job Description

We are seeking a skilled professional to lead the design, implementation and maintenance of scalable data pipelines and architectures.

The ideal candidate will have a solid understanding of ETL processes, data warehousing concepts and data modeling best practices.


Key Responsibilities
  • Design and develop robust data pipelines and architectures to support data ingestion, processing and storage.
  • Develop and optimize complex SQL queries and stored procedures for data extraction, transformation and analysis.
  • Model data to meet different use cases and automate data workflows.
  • Collaborate with cross-functional teams to deliver high-quality data solutions.
  • Lead the integration of data from various sources into data lakes and warehouses.
  • Monitor and troubleshoot data pipelines and workflows to ensure optimal performance and reliability.

Requirements
  • Minimum 3 years of experience in data engineering fields with system integration.
  • Proven solutions: demonstrated experience in delivering effective working solutions, particularly in cloud-based environments.

Technical Skills
  • Proficiency in Databricks, Azure Data Lake, Power BI, Tableau, and related data processing and visualization software.
  • Familiarity with Windows, Linux, AWS and/or Azure platforms.
  • Strong programming skills in languages such as Python and R.

Benefits

Join a forward-thinking team that is transforming government digital services.

Thrive in a dynamic environment where you can utilize your technical expertise to drive business success.

Enjoy a collaborative work culture that fosters innovation and creativity.

Work on challenging projects that help shape the future of government digital services.


Data Engineering Specialist

Singapore, Singapore beBeeDataEngineering

Posted today


Job Description

Job Title: Data Engineering Specialist

As a Data Engineering Specialist, you will play a critical role in designing, developing, and maintaining large-scale data systems. You will collaborate with stakeholders to identify data-related technical issues and infrastructure needs, and work closely with data scientists to gather requirements for modeling.

Key Responsibilities
  • Design and develop data models, ETL processes, data warehouses, and pipeline solutions for structured/unstructured data from various sources.
  • Define and monitor SLAs for data pipelines and products.
  • Execute data quality assurance practices and support pre-sales initiatives for data management solutions.

Technical Requirements
  • Expertise in relational/non-relational databases and enterprise data warehouses.
  • Proficiency in big data technologies (e.g., Hadoop, Spark).
  • Knowledge of data ingestion technologies (e.g., Flume, Kafka).
  • Experience in scripting, programming, and software development (e.g., Java, Python) for Windows or Linux.
  • Understanding of machine learning and computer vision is a plus.

Benefits

This role offers the opportunity to work on cutting-edge technology and contribute to the success of our organization.


Data Engineering Lead

Singapore, Singapore beBeeDataEngineering

Posted today


Job Description

Job Summary:

Senior Data Engineer

We are seeking a seasoned Senior Data Engineer to join our team. This is an exceptional opportunity for a highly skilled individual to drive data-driven decisions and contribute to the success of our organization.

  • Data Engineering:
    • Design, implement, and maintain large-scale data pipelines using Python and scalable architectures.
    • Collaborate with cross-functional teams to ensure seamless integration and deployment of data solutions.

Technical Leadership and Quality Assurance:

  • Architect and lead the development of robust data access controls and monitoring systems to ensure secure data operations.
  • Implement automated testing and validation frameworks to ensure data accuracy and quality.
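Automated testing and validation of the kind described can start as a small rule-based harness run over each batch; a minimal sketch with invented rules and fields:

```python
# Minimal data-validation harness: each rule is (name, predicate);
# a batch passes only if every row satisfies every rule.
rules = [
    ("id_present",    lambda r: r.get("id") is not None),
    ("amount_nonneg", lambda r: r.get("amount", 0) >= 0),
]

def validate(batch, rules):
    """Return a list of (row_index, failed_rule_name) violations."""
    return [
        (i, name)
        for i, row in enumerate(batch)
        for name, check in rules
        if not check(row)
    ]

batch = [
    {"id": 1, "amount": 10.0},
    {"id": None, "amount": 5.0},   # fails id_present
    {"id": 3, "amount": -2.0},     # fails amount_nonneg
]

violations = validate(batch, rules)
print(violations)  # [(1, 'id_present'), (2, 'amount_nonneg')]
```

Wiring such checks into the pipeline (fail the run, or quarantine the offending rows) is the step that turns ad-hoc QA into the automated framework the role calls for.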

Service Delivery Enhancement:

  • Develop standardized frameworks for evaluating and fulfilling data requests, reducing processing time.
  • Lead first-level triage operations and establish clear escalation pathways for complex issues.

About You:

  • Bachelor's or Master's degree in Computer Science, Engineering, Information Systems, or a related field.
  • Minimum 5 years of experience in data engineering, technical leadership, or equivalent roles at Senior Consultant level.
  • Proven expertise in Python, data modeling, and large-scale datasets.
  • Strong stakeholder engagement skills and ability to communicate complex ideas effectively.

In this role, you will have the opportunity to work with a talented team of professionals who share your passion for data-driven decision-making.

