433 Big Data Technologies jobs in Singapore
Data Engineering Lead - Big Data Technologies - Vice President
Posted 2 days ago
Job Description
Join to apply for the Data Engineering Lead - Big Data Technologies - Vice President role at Citi.
The Engineering Lead Analyst is a senior level position responsible for leading a variety of engineering activities including the design, acquisition and deployment of hardware, software and network infrastructure in coordination with the Technology team. The overall objective of this role is to lead efforts to ensure quality standards are being met within existing and planned framework.
Responsibilities:
- Strategic Leadership: Define and execute the data engineering roadmap for Global Wealth Data, aligning with overall business objectives and technology strategy. This includes understanding the data needs of portfolio managers, investment advisors, and other stakeholders in the wealth management ecosystem.
- Team Management: Lead, mentor, and develop a high-performing, globally distributed team of data engineers, fostering a culture of collaboration, innovation, and continuous improvement.
- Architecture and Design: Oversee the design and implementation of robust and scalable data pipelines, data warehouses, and data lakes, ensuring data quality, integrity, and availability for global wealth data. This includes designing solutions for handling large volumes of structured and unstructured data from various sources.
- Technology Selection and Implementation: Evaluate and select appropriate technologies and tools for data engineering, staying abreast of industry best practices and emerging trends specific to wealth management data.
- Performance Optimization: Continuously monitor and optimize data pipelines and infrastructure for performance, scalability, and cost-effectiveness, ensuring optimal access to global wealth data.
- Collaboration: Partner with business stakeholders, data scientists, portfolio managers, and other technology teams to understand data needs and deliver effective solutions that support investment strategies and client reporting.
- Data Governance: Implement and enforce data governance policies and procedures to ensure data quality, security, and compliance with relevant regulations, particularly around sensitive financial data.
Qualifications:
- 10-15 years of hands-on experience with Hadoop, Scala, Java, Spark, Hive, Kafka, Impala, Unix scripting, and other big data frameworks.
- 4+ years of experience with relational SQL and NoSQL databases: Oracle, MongoDB, HBase
- Strong proficiency in Python and Spark (Java), with knowledge of core Spark concepts (RDDs, DataFrames, Spark Streaming, etc.) as well as Scala and SQL (a brief illustrative sketch follows this list)
- Data integration, migration, and large-scale ETL experience on common platforms such as PySpark, DataStage, or Ab Initio - ETL design and build, handling, reconciliation, and normalization
- Data modeling experience (OLAP, OLTP, logical/physical modeling, normalization, knowledge of performance tuning)
- Experienced in working with large and multiple datasets and data warehouses
- Experience building and optimizing ‘big data’ data pipelines, architectures, and datasets.
- Strong analytic skills and experience working with unstructured datasets
- Ability to effectively use complex analytical, interpretive, and problem-solving techniques
- Experience with Confluent Kafka, Red Hat jBPM, and CI/CD build pipelines and toolchain – Git, Bitbucket, Jira
- Experience with external cloud platforms such as OpenShift, AWS, and GCP
- Experience with container technologies (Docker, Pivotal Cloud Foundry) and supporting frameworks (Kubernetes, OpenShift, Mesos)
- Experienced in integrating search solutions with middleware and distributed messaging - Kafka
- Highly effective interpersonal and communication skills with tech/non-tech stakeholders.
- Experienced in the software development life cycle, with strong problem-solving skills.
- Excellent problem-solving skills and strong mathematical and analytical mindset
- Ability to work in a fast-paced financial environment
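For context, here is a minimal PySpark sketch of the core Spark concepts named above (RDDs and DataFrames with a simple aggregation); the data and column names are illustrative only and are not taken from this role.

```python
# Minimal sketch of core Spark concepts: the RDD API vs. the DataFrame API.
# Data and column names are illustrative placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("core-concepts-sketch").getOrCreate()

# RDD API: low-level, functional transformations.
rdd = spark.sparkContext.parallelize([("AAPL", 100.0), ("MSFT", 250.0), ("AAPL", 110.0)])
totals_rdd = rdd.reduceByKey(lambda a, b: a + b)

# DataFrame API: declarative, optimized by the Catalyst planner.
df = spark.createDataFrame(rdd, ["symbol", "position_value"])
totals_df = df.groupBy("symbol").agg(F.sum("position_value").alias("total_value"))

totals_df.show()
spark.stop()
```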
Education:
- Bachelor’s degree/University degree or equivalent experience
- Master’s degree preferred
We are an equal opportunity employer. Citi considers all qualified applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, veteran status, or any other characteristic protected by law. If you require accommodations to apply or interview, please contact Citi for accessibility information.
Data Engineering Lead - Big Data Technologies - Vice President
Posted 13 days ago
Job Description
**Responsibilities:**
+ **Strategic Leadership:** Define and execute the data engineering roadmap for Global Wealth Data, aligning with overall business objectives and technology strategy. This includes understanding the data needs of portfolio managers, investment advisors, and other stakeholders in the wealth management ecosystem.
+ **Team Management:** Lead, mentor, and develop a high-performing, globally distributed team of data engineers, fostering a culture of collaboration, innovation, and continuous improvement.
+ **Architecture and Design:** Oversee the design and implementation of robust and scalable data pipelines, data warehouses, and data lakes, ensuring data quality, integrity, and availability for global wealth data. This includes designing solutions for handling large volumes of structured and unstructured data from various sources.
+ **Technology Selection and Implementation:** Evaluate and select appropriate technologies and tools for data engineering, staying abreast of industry best practices and emerging trends specific to wealth management data.
+ **Performance Optimization:** Continuously monitor and optimize data pipelines and infrastructure for performance, scalability, and cost-effectiveness, ensuring optimal access to global wealth data.
+ **Collaboration:** Partner with business stakeholders, data scientists, portfolio managers, and other technology teams to understand data needs and deliver effective solutions that support investment strategies and client reporting.
+ **Data Governance:** Implement and enforce data governance policies and procedures to ensure data quality, security, and compliance with relevant regulations, particularly around sensitive financial data.
**Qualifications:**
+ 10-15 years of hands-on experience with **Hadoop**, **Scala**, **Java**, **Spark**, **Hive**, Kafka, Impala, Unix scripting, and other big data frameworks.
+ 4+ years of experience with relational SQL and NoSQL databases: Oracle, MongoDB, HBase
+ Strong proficiency in Python and Spark (Java), with knowledge of core Spark concepts (RDDs, DataFrames, Spark Streaming, etc.) as well as Scala and SQL
+ Data integration, migration, and large-scale ETL experience on common platforms such as PySpark, DataStage, or Ab Initio - ETL design and build, handling, reconciliation, and normalization
+ Data modeling experience (OLAP, OLTP, logical/physical modeling, normalization, knowledge of performance tuning)
+ Experienced in working with large and multiple datasets and data warehouses
+ Experience building and optimizing 'big data' data pipelines, architectures, and datasets.
+ Strong analytic skills and experience working with unstructured datasets
+ Ability to effectively use complex analytical, interpretive, and problem-solving techniques
+ Experience with Confluent Kafka, Red Hat jBPM, and CI/CD build pipelines and toolchain - Git, Bitbucket, Jira (a brief streaming sketch follows this list)
+ Experience with external cloud platforms such as OpenShift, AWS, and GCP
+ Experience with container technologies (Docker, Pivotal Cloud Foundry) and supporting frameworks (Kubernetes, OpenShift, Mesos)
+ Experienced in integrating search solutions with middleware and distributed messaging - Kafka
+ Highly effective interpersonal and communication skills with tech/non-tech stakeholders.
+ Experienced in the software development life cycle, with strong problem-solving skills.
+ Excellent problem-solving skills and strong mathematical and analytical mindset
+ Ability to work in a fast-paced financial environment
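As a hedged illustration of the Kafka and Spark Streaming items above, the sketch below shows Spark Structured Streaming consuming a Kafka topic. The broker address, topic name, and checkpoint path are placeholders, and the job assumes the spark-sql-kafka connector is on the classpath.

```python
# Sketch: consuming a Kafka topic with Spark Structured Streaming.
# Requires the spark-sql-kafka-0-10 connector; broker, topic, and paths are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("kafka-stream-sketch").getOrCreate()

# Read a stream of records from Kafka; the value column arrives as bytes.
raw = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "wealth-positions")
    .load()
)

# Decode the payload and write it out as a streaming query (console sink for the sketch).
decoded = raw.select(F.col("value").cast("string").alias("payload"))
query = (
    decoded.writeStream.format("console")
    .option("checkpointLocation", "/tmp/checkpoints/wealth-positions")
    .start()
)
query.awaitTermination()
```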
**Education:**
+ Bachelor's degree/University degree or equivalent experience
+ Master's degree preferred
---
**Job Family Group:**
Technology
---
**Job Family:**
Systems & Engineering
---
**Time Type:**
Full time
---
**Most Relevant Skills**
Please see the requirements listed above.
---
**Other Relevant Skills**
For complementary skills, please see above and/or contact the recruiter.
---
_Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law._
_If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity, review Accessibility at Citi._
_View Citi's EEO Policy Statement and the Know Your Rights poster._
Citi is an equal opportunity and affirmative action employer.
Minority/Female/Veteran/Individuals with Disabilities/Sexual Orientation/Gender Identity.
Data Engineering
Posted today
Job Description
As a Manager, Data Engineering & Analytics, you will lead the data analytics team and drive the overall data strategy. This hybrid role combines leadership in data engineering and data analysis, with a strong focus on Azure Data Services and scalable architecture. You will be responsible for managing data pipelines, ensuring data quality, optimizing infrastructure, and guiding advanced analytics efforts including AI/ML.
Key Responsibilities:
- Lead and manage a team of local and remote Data Analysts.
- Design, build, and maintain scalable and efficient data pipelines and architecture.
- Ensure data quality, governance, security, and compliance.
- Perform ETL operations across multiple data sources and platforms (a brief illustrative sketch follows this list).
- Improve and optimize data storage, retrieval, and scalability.
- Collaborate across business units to deliver data-driven solutions.
- Drive initiatives in advanced analytics, AI/ML, and emerging data technologies.
- Own and manage Power BI reporting framework and delivery.
- Manage analytics project timelines and deliverables.
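As a rough illustration of the ETL responsibilities above, the sketch below reads raw CSV files from ADLS with PySpark, applies basic cleansing, and writes partitioned Parquet. The storage account, container, and column names are hypothetical assumptions, not details from this role.

```python
# Sketch of a simple batch ETL step: read raw CSVs from ADLS, clean, and write Parquet.
# Storage account, container, and column names are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("etl-sketch").getOrCreate()

source_path = "abfss://raw@examplestorage.dfs.core.windows.net/sales/"
target_path = "abfss://curated@examplestorage.dfs.core.windows.net/sales/"

raw = spark.read.option("header", True).csv(source_path)

cleaned = (
    raw.dropDuplicates()
    .withColumn("sale_date", F.to_date("sale_date"))
    .filter(F.col("amount").isNotNull())
)

cleaned.write.mode("overwrite").partitionBy("sale_date").parquet(target_path)
```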
Qualifications:
- Bachelor's or Master's degree in Computer Science, Engineering, or related field.
- 5+ years of experience in data engineering and analytics roles.
- Strong proficiency in Python, SQL, and Azure cloud platform.
- Experience with big data tools (e.g., Spark, Hadoop, Kafka).
- Knowledge of data modeling, data warehousing, and architecture.
- Strong leadership, project management, and communication skills.
Interested candidates who wish to apply for the advertised position, please click on "Apply Now". We regret that only shortlisted candidates will be notified.
Job Code: ANNH
EA License: 01C4394
By sending us your personal data and curriculum vitae (CV), you are deemed to consent to PERSOLKELLY Singapore Pte Ltd and its affiliates collecting, using and disclosing your personal data for the purposes set out in the Privacy Policy. You also acknowledge that you have read, understood, and agree to the said Privacy Policy.
AVP, Data Engineering
Posted 8 days ago
Job Description
You will drive the future of data-driven entertainment by leading the Data Engineering team at SonyLIV. In this role, you will collaborate with Product Managers, Data Scientists, Software Engineers, and ML Engineers to support the AI infrastructure roadmap. Your primary responsibility will be to design and implement the data architecture that influences decision-making, derives insights, and directly impacts the growth of the platform while enhancing user experiences.
As part of SonyLIV, you will have the opportunity to work with industry experts, access vast data sets, and leverage cutting-edge technology. Your contributions will play a crucial role in the products delivered and the engagement of viewers.
The ideal candidate should possess a strong foundation in data infrastructure and architecture, demonstrate leadership in scaling data teams, ensure operational excellence for efficiency and speed, and have a visionary approach to how data engineering can drive company success. If you are passionate about making a significant impact in the world of OTT and entertainment, we look forward to connecting with you.
As the AVP of Data Engineering at SonyLIV in Bangalore, your responsibilities will include defining the technical vision for scalable data infrastructure, leading innovation in data processing and architecture, ensuring operational excellence in data systems, building and mentoring a high-caliber data engineering team, collaborating with cross-functional teams, architecting and managing production data models and pipelines, and driving data quality and business insights.
This role requires a minimum of 15 years of progressive experience in data engineering, business intelligence, and data warehousing, along with expertise in managing large data engineering teams. Proficiency in modern data technologies such as Spark, Kafka, Redshift, Snowflake, and BigQuery is essential, as are strong SQL skills and experience with object-oriented programming languages. Experience in data governance, privacy, compliance, A/B testing methodologies, statistical analysis, and security protocols within large data ecosystems is also crucial.
Preferred qualifications include a Bachelor's or Master's degree in a related technical field, experience managing the end-to-end data engineering lifecycle, experience with large-scale infrastructure, familiarity with automated data lineage and data auditing tools, and expertise in BI and visualization tools and advanced processing frameworks.
Joining SonyLIV will offer you the opportunity to lead the data engineering strategy, drive technological innovation, and enable data-driven decisions that shape the future of OTT entertainment. SonyLIV, a part of CulverMax Entertainment Pvt Ltd, is committed to creating an inclusive and equitable workplace where diversity is celebrated. Being part of this progressive content powerhouse will allow you to tell stories beyond the ordinary and contribute to the exciting journey of digital entertainment.
Data Engineering Lead
Posted today
Job Description
The Engineering Lead Analyst is a senior level position responsible for leading a variety of engineering activities including the design, acquisition and deployment of hardware, software and network infrastructure in coordination with the Technology team. The overall objective of this role is to lead efforts to ensure quality standards are being met within existing and planned framework.
Responsibilities:
- Strategic Leadership: Define and execute the data engineering roadmap for Global Wealth Data, aligning with overall business objectives and technology strategy. This includes understanding the data needs of portfolio managers, investment advisors, and other stakeholders in the wealth management ecosystem.
- Team Management: Lead, mentor, and develop a high-performing, globally distributed team of data engineers, fostering a culture of collaboration, innovation, and continuous improvement.
- Architecture and Design: Oversee the design and implementation of robust and scalable data pipelines, data warehouses, and data lakes, ensuring data quality, integrity, and availability for global wealth data. This includes designing solutions for handling large volumes of structured and unstructured data from various sources.
- Technology Selection and Implementation: Evaluate and select appropriate technologies and tools for data engineering, staying abreast of industry best practices and emerging trends specific to wealth management data.
- Performance Optimization: Continuously monitor and optimize data pipelines and infrastructure for performance, scalability, and cost-effectiveness, ensuring optimal access to global wealth data.
- Collaboration: Partner with business stakeholders, data scientists, portfolio managers, and other technology teams to understand data needs and deliver effective solutions that support investment strategies and client reporting.
- Data Governance: Implement and enforce data governance policies and procedures to ensure data quality, security, and compliance with relevant regulations, particularly around sensitive financial data.
Qualifications:
- 10-15 years of hands-on experience with Hadoop, Scala, Java, Spark, Hive, Kafka, Impala, Unix scripting, and other big data frameworks.
- 4+ years of experience with relational SQL and NoSQL databases: Oracle, MongoDB, HBase
- Strong proficiency in Python and Spark (Java), with knowledge of core Spark concepts (RDDs, DataFrames, Spark Streaming, etc.) as well as Scala and SQL
- Data integration, migration, and large-scale ETL experience on common platforms such as PySpark, DataStage, or Ab Initio - ETL design and build, handling, reconciliation, and normalization (a brief reconciliation sketch follows this list)
- Data modeling experience (OLAP, OLTP, logical/physical modeling, normalization, knowledge of performance tuning)
- Experienced in working with large and multiple datasets and data warehouses
- Experience building and optimizing 'big data' data pipelines, architectures, and datasets.
- Strong analytic skills and experience working with unstructured datasets
- Ability to effectively use complex analytical, interpretive, and problem-solving techniques
- Experience with Confluent Kafka, Red Hat jBPM, and CI/CD build pipelines and toolchain – Git, Bitbucket, Jira
- Experience with external cloud platforms such as OpenShift, AWS, and GCP
- Experience with container technologies (Docker, Pivotal Cloud Foundry) and supporting frameworks (Kubernetes, OpenShift, Mesos)
- Experienced in integrating search solutions with middleware and distributed messaging - Kafka
- Highly effective interpersonal and communication skills with tech/non-tech stakeholders.
- Experienced in the software development life cycle, with strong problem-solving skills.
- Excellent problem-solving skills and strong mathematical and analytical mindset
- Ability to work in a fast-paced financial environment
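The reconciliation item above can be illustrated with a small sketch: compare row counts and a column-level checksum between a source and target table after a load. The table and column names here are hypothetical, not references to any Citi system.

```python
# Sketch of a source-vs-target reconciliation check after an ETL load:
# compare row counts and a numeric checksum. Table and column names are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("reconciliation-sketch").getOrCreate()

source = spark.table("staging.positions_src")    # hypothetical source table
target = spark.table("warehouse.positions_tgt")  # hypothetical target table

src_rows = source.count()
tgt_rows = target.count()

# Sum of a numeric column as a coarse integrity check.
src_sum = source.agg(F.sum("market_value")).first()[0]
tgt_sum = target.agg(F.sum("market_value")).first()[0]

if src_rows != tgt_rows or src_sum != tgt_sum:
    raise ValueError(
        f"Reconciliation failed: rows {src_rows} vs {tgt_rows}, "
        f"sum {src_sum} vs {tgt_sum}"
    )
```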
Education:
- Bachelor's degree/University degree or equivalent experience
- Master's degree preferred
-
Job Family Group:
Technology
-
Job Family:
Systems & Engineering
-
Time Type:
Full time
-
Most Relevant Skills
Please see the requirements listed above.
-
Other Relevant Skills
For complementary skills, please see above and/or contact the recruiter.
-
Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law.
If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity, review Accessibility at Citi.
View Citi's EEO Policy Statement and the Know Your Rights poster.
Data Engineering Architect
Posted today
Job Description
We are seeking a highly proficient and results-driven Data Engineering Architect with a robust background in designing, implementing, and maintaining scalable and resilient data ecosystems. The ideal candidate will possess a minimum of five years of dedicated experience in orchestrating complex data workflows and will serve as a key contributor within our advanced data services team. This role requires a meticulous professional who can transform intricate business requirements into high-performance, production-grade data solutions.
Key Responsibilities:
- Architectural Stewardship: Design, develop, and optimize sophisticated data pipelines leveraging distributed computing frameworks to ensure efficiency, reliability, and scalability of data ingestion, transformation, and delivery layers.
- SQL Mastery: Act as a subject matter expert in SQL, crafting and refining highly complex, multi-layered queries and stored procedures for advanced data manipulation, extraction, and reporting, while ensuring optimal performance and resource utilization.
- Distributed Processing Expertise: Lead the development and deployment of data processing jobs using Apache Spark, orchestrating complex data transformations, aggregations, and feature engineering at petabyte scale.
- Big Data Orchestration: Proactively manage and evolve our data warehousing solutions built on Apache Hive, overseeing schema design, partition management, and query optimization to support large-scale analytical and reporting needs.
- Collaborative Innovation: Work synergistically with senior engineers and cross-functional teams to conceptualize and execute architectural enhancements, data modeling strategies, and systems integrations that align with long-term business objectives.
- Quality Assurance & Governance: Establish and enforce rigorous data quality standards, implementing comprehensive validation protocols and monitoring mechanisms to guarantee data integrity, accuracy, and lineage across all systems.
- Operational Excellence: Proactively identify, diagnose, and remediate technical bottlenecks and anomalies within data workflows, ensuring system uptime and operational stability through systematic troubleshooting and root cause analysis.
Qualifications:
- Educational Foundation: Bachelor's degree in Computer Science, Information Systems, or a closely related quantitative field.
- Experience: A minimum of five (5) years of progressive, hands-on experience in a dedicated data engineering or equivalent role, with a proven track record of delivering enterprise-level data solutions.
- Core Technical Skills:
  - SQL: Expert-level proficiency in SQL programming is non-negotiable, including advanced query optimization, window functions, and schema design principles (a brief window-function sketch follows this list).
  - Distributed Computing: Demonstrated high-level proficiency with Apache Spark for large-scale data processing.
  - Data Warehousing: In-depth, practical experience with Apache Hive and its ecosystem.
- Conceptual Knowledge: Deep understanding of data warehousing methodologies, ETL/ELT processes, and dimensional modeling.
- Analytical Acumen: Exceptional problem-solving and analytical capabilities, with the ability to dissect complex technical challenges and formulate elegant, scalable solutions.
- Continuous Learning: A relentless curiosity and a strong desire to stay abreast of emerging technologies and industry best practices.
- Domain Preference: Prior professional experience within the Banking or Financial Services sector is highly advantageous.
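As an illustrative sketch of the window-function work referenced in the qualifications, the PySpark snippet below selects the latest record per account. The table and column names are assumptions for illustration, not details from the posting.

```python
# Sketch: a window function picking the most recent transaction per account.
# Table and column names are illustrative placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.window import Window

spark = SparkSession.builder.appName("window-sketch").getOrCreate()

txns = spark.table("warehouse.transactions")  # hypothetical table

latest_per_account = (
    txns.withColumn(
        "rn",
        F.row_number().over(
            Window.partitionBy("account_id").orderBy(F.col("txn_ts").desc())
        ),
    )
    .filter(F.col("rn") == 1)
    .drop("rn")
)
latest_per_account.show()
```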
Data Engineering Expert
Posted today
Job Description
1. Data Engineering & Platform Knowledge (Must)
Strong understanding of Hadoop ecosystem: HDFS, Hive, Impala, Oozie, Sqoop, Spark (on YARN).
Experience in data migration strategies (lift & shift, incremental, re-engineering pipelines).
Knowledge of Databricks architecture (Workspaces, Unity Catalog, Clusters, Delta Lake, Workflows).
2. Testing & Validation (Preferred)
Data reconciliation (source vs. target).
Performance benchmarking.
Automated test frameworks for ETL pipelines.
3. Databricks-Specific Expertise (Preferred)
Delta Lake: ACID transactions, time travel, schema evolution, Z-ordering (illustrated in the sketch after this section).
Unity Catalog: Catalog/schema/table design, access control, lineage, tags.
Workflows/Jobs: Orchestration, job clusters vs. all-purpose clusters.
SQL Endpoints / Databricks SQL: Designing downstream consumption models.
Performance Tuning: Partitioning, caching, adaptive query execution (AQE), photon runtime.
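A minimal sketch of the Delta Lake features listed above (an ACID append, schema evolution, time travel, and Z-ordering), assuming a Databricks runtime or a Spark session configured with the delta-spark package; the table path and data are placeholders.

```python
# Sketch of Delta Lake basics: ACID append, schema evolution, time travel, Z-ordering.
# Assumes Delta Lake is configured (Databricks runtime or delta-spark); path is a placeholder.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("delta-sketch").getOrCreate()
path = "/mnt/datalake/events_delta"  # hypothetical location

df = spark.createDataFrame([(1, "login"), (2, "purchase")], ["user_id", "event"])
df.write.format("delta").mode("append").save(path)

# Schema evolution: allow a new column on write.
df2 = spark.createDataFrame([(3, "logout", "mobile")], ["user_id", "event", "channel"])
df2.write.format("delta").mode("append").option("mergeSchema", "true").save(path)

# Time travel: read the table as of an earlier version.
v0 = spark.read.format("delta").option("versionAsOf", 0).load(path)

# Z-ordering via SQL (OPTIMIZE ... ZORDER BY), as supported on Databricks.
spark.sql(f"OPTIMIZE delta.`{path}` ZORDER BY (user_id)")
```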
4. Migration & Data Movement (Preferred)
Data migration from HDFS/Cloudera to cloud storage (ADLS/S3/GCS).
Incremental ingestion techniques (Change Data Capture, Delta ingestion frameworks; see the sketch after this section).
Mapping Hive Metastore to Unity Catalog (metastore migration).
Refactoring HiveQL/Impala SQL to Databricks SQL (syntax differences).
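The incremental ingestion item above can be sketched with Databricks Auto Loader (the cloudFiles source), one common Delta ingestion approach; the paths, file format, and trigger choice here are illustrative assumptions.

```python
# Sketch: incremental file ingestion with Databricks Auto Loader into a Delta table.
# Databricks-specific (cloudFiles source); paths and format are placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("autoloader-sketch").getOrCreate()

incoming = (
    spark.readStream.format("cloudFiles")
    .option("cloudFiles.format", "json")
    .option("cloudFiles.schemaLocation", "/mnt/checkpoints/events/_schema")
    .load("abfss://landing@examplestorage.dfs.core.windows.net/events/")
)

query = (
    incoming.writeStream.format("delta")
    .option("checkpointLocation", "/mnt/checkpoints/events")
    .outputMode("append")
    .trigger(availableNow=True)  # process newly arrived files, then stop
    .start("/mnt/datalake/events_bronze")
)
query.awaitTermination()
```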
5. Security & Governance (Nice to have)
Mapping Cloudera Ranger/SSO policies to Unity Catalog RBAC.
Azure AD / AWS IAM integration with Databricks.
Data encryption, masking, anonymization strategies.
Service Principal setup & governance.
6. DevOps & Automation (Nice to have)
Infrastructure as Code (Terraform for Databricks, Cloud storage, Networking).
CI/CD for Databricks (GitHub Actions, Azure DevOps, Databricks Asset Bundles).
Cluster policies & job automation.
Monitoring & logging (Databricks system tables, cloud-native monitoring).
7. Cloud & Infra Skills (Nice to have)
- Strong knowledge of the target cloud (AWS/Azure/GCP):
  - Storage (S3/ADLS/GCS).
  - Networking (VNETs, Private Links, Security Groups).
  - IAM & Key Management.
9. Soft Skills
Ability to work with business stakeholders for data domain remapping.
Strong documentation and governance mindset.
Cross-team collaboration (infra, security, data, business).
Data Engineering Strategist
Posted 1 day ago
Job Description
As a data engineering strategist, you will play a critical role in designing and implementing cutting-edge data storage solutions. Using AWS services such as Amazon S3, Amazon RDS, Amazon Redshift, and Amazon DynamoDB, along with Databricks' Delta Lake, you will integrate Informatica IDMC for metadata management and data cataloging.
Key Responsibilities:
Data Engineering Specialist
Posted 1 day ago
Job Description
Data Engineering Specialist
About the Role:
We are seeking a highly skilled and detail-oriented Data Engineering Specialist to join our growing analytics team. In this role, you will play a key part in designing and implementing data pipelines and storage solutions to support business decisions.
You will be responsible for extracting, analyzing, and interpreting large datasets to provide actionable insights to stakeholders. Proficiency in Spark, Python, and data analytics is essential, as well as excellent communication skills to work closely with cross-functional teams.
Key Responsibilities:
- Design and implement data pipelines using Spark, AWS, and Azure
- Perform data profiling and analysis to understand data patterns and discrepancies (a brief illustrative sketch follows this list)
- Develop data pipeline automation using cloud-based technologies
- Work with stakeholders to translate business requirements into technical requirements
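A minimal sketch of the data-profiling step mentioned above: null and distinct counts per column with PySpark. The input path is a placeholder, not a real dataset from this role.

```python
# Sketch of a basic data-profiling pass: null counts and distinct counts per column.
# The input path is a hypothetical placeholder.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("profiling-sketch").getOrCreate()

df = spark.read.parquet("s3a://example-bucket/curated/customers/")  # hypothetical

profile = df.select(
    [F.count(F.when(F.col(c).isNull(), c)).alias(f"{c}_nulls") for c in df.columns]
    + [F.countDistinct(c).alias(f"{c}_distinct") for c in df.columns]
)
profile.show(truncate=False)
```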
Requirements:
- Bachelor's degree in Computer Science or related field
- Minimum of 4 years' experience in data engineering roles
- Strong knowledge of Spark, Python, and data analytics
Benefits:
- Collaborative and dynamic work environment
- Opportunities for growth and professional development
- Competitive salary and benefits package