382 Data Scientist jobs in Singapore
Big Data Engineer
Posted today
Job Description
We are seeking a highly skilled and motivated Big Data Engineer to join our data team. The ideal candidate will play a key role in designing, developing, and maintaining scalable big data solutions while providing technical leadership. This role will also support strategic Data Governance initiatives, ensuring data integrity, privacy, and accessibility across the organization.
- Design, implement, and optimize robust data pipelines and ETL/ELT workflows using SQL and Python.
- Collaborate closely with Data Engineers, Analysts, and cross-functional engineering teams to meet evolving data needs.
- Build synchronous and asynchronous data APIs for downstream systems to consume the data.
- Deploy and manage infrastructure using Terraform and other Infrastructure-as-Code (IaC) tools.
- Develop and maintain CI/CD pipelines for deploying data applications and services.
- Leverage strong experience in AWS services (e.g., S3, Glue, Lambda, RDS, Lake Formation) to support scalable and secure cloud-based data platforms.
- Handle both batch and real-time data processing effectively.
- Apply best practices in data modeling and support data privacy and data protection initiatives.
- Implement and manage data encryption and hashing techniques to secure sensitive information.
- Ensure adherence to software engineering best practices including version control, automated testing, and deployment standards.
- Lead performance tuning and troubleshooting for data applications and platforms.
- Strong proficiency in SQL for data modeling, querying, and transformation.
- Advanced Python development skills with an emphasis on data engineering use cases.
- Hands-on experience with Terraform for cloud infrastructure provisioning.
- Proficiency with CI/CD tools, particularly GitHub Actions.
- Deep expertise in AWS cloud architecture and services.
- Demonstrated ability to create and evaluate ERDs and contribute to architectural decisions.
- Strong communication skills.
- Experience with big data technologies such as Apache Spark, Hive, or Kafka.
- Familiarity with containerization tools (e.g., Docker) and orchestration platforms (e.g., Kubernetes).
- Solid understanding of data governance, data quality, and security frameworks.
Big Data Engineer
Posted today
Job Description
Founded in 2012, ByteDance's mission is to inspire creativity and enrich life. With a suite of more than a dozen products, including TikTok, Lemon8, CapCut and Pico as well as platforms specific to the China market, including Toutiao, Douyin, and Xigua, ByteDance has made it easier and more fun for people to connect with, consume, and create content.
Why Join ByteDance
Inspiring creativity is at the core of ByteDance's mission. Our innovative products are built to help people authentically express themselves, discover and connect - and our global, diverse teams make that possible. Together, we create value for our communities, inspire creativity and enrich life - a mission we work towards every day.
As ByteDancers, we strive to do great things with great people. We lead with curiosity, humility, and a desire to make impact in a rapidly growing tech company. By constantly iterating and fostering an "Always Day 1" mindset, we achieve meaningful breakthroughs for ourselves, our Company, and our users. When we create and grow together, the possibilities are limitless. Join us.
Diversity & Inclusion
ByteDance is committed to creating an inclusive space where employees are valued for their skills, experiences, and unique perspectives. Our platform connects people from across the globe and so does our workplace. At ByteDance, our mission is to inspire creativity and enrich life. To achieve that goal, we are committed to celebrating our diverse voices and to creating an environment that reflects the many communities we reach. We are passionate about this and hope you are too.
Job highlights
Positive team atmosphere, Career growth opportunity, Meals provided
About the team
Libra is a large-scale, one-stop online A/B testing platform developed by Data Platform. Its features include:
- Provides experiment evaluation services for all product lines within the company, covering complex scenarios such as recommendation, algorithms, features, UI, marketing, advertising, operations, social network isolation, and causal inference.
- Provides services across the entire experiment lifecycle, from experiment design and creation through metric computation and statistical analysis to final evaluation and launch.
- Supports rapid, iterative experimentation across the company's businesses, encouraging bold hypotheses and careful verification.
Responsibilities
- Operate and maintain the data systems behind the experimentation platform.
- Build PB-scale data warehouses; participate in and take ownership of data warehouse design, modeling, and development.
- Build ETL data pipelines and automated ETL data pipeline systems.
- Build an expert system for metric data processing that combines offline and real-time processing.
Qualifications
Minimum Qualifications
- Bachelor's degree in Computer Science, a related technical field involving software or systems engineering, or equivalent practical experience.
- Proficiency with big data frameworks such as Presto, Hive, Spark, Flink, ClickHouse, and Hadoop, with experience in large-scale data processing.
- Minimum 1 year of experience in Data Engineering.
- Experience writing code in Java, Scala, SQL, Python or a similar language.
- Experience with data warehouse implementation methodologies and applying them to real business scenarios.
Preferred Qualifications
- Knowledge of a variety of strategies for ingesting, modeling, processing, and persisting data, as well as ETL design, job scheduling, and dimensional modeling.
- Expertise in designing, analyzing, and troubleshooting large-scale distributed systems is a plus (Hadoop, M/R, Hive, Spark, Presto, Flume, Kafka, ClickHouse, Flink or comparable solutions).
- Work or internship experience at internet companies; candidates with big data processing experience are preferred.
Big Data Analyst
Posted today
Job Description
Job Responsibilities:
- Work with the data science team on game data analysis, including organizing data logic, basic data processing, analysis, and related development work.
- Perform basic data analysis and machine learning analysis, and build the required data processing flows and data report visualizations.
- Develop data processing pipelines for data modelling, analysis, and reporting from large and complex transaction datasets.
- Assist with engineering development and data construction and maintenance when required.
Requirements:
- Degree in Computer Science or related technical field
- At least 2 years of experience in data analysis/data warehouse/mart development and BI reporting.
- At least 2 years of experience in ETL processing data.
- Good understanding of Python, SQL, and HiveQL/SparkSQL, along with the relevant best practices and techniques for performance tuning; experience deploying models in production and adjusting model thresholds to improve performance is a plus.
- Familiarity with data visualization tools, such as Google Analytics or Tableau.
Big Data Developer
Posted today
Job Description
Role: Big Data Developer
Contract: 1 year, renewable
Location: Changi
Experience required:
Interested candidates may also directly email their CVs to:
Only shortlisted candidates will be contacted for interview.
Data Engineer
We are looking for a Data Engineer with a background in Spark development.
Job Duties & Responsibilities:
- Develop and maintain ETL processes for finance regulatory reporting projects and applications.
- Develop Big Data applications using the Agile Software development life cycle.
- Enhance existing applications based on mapping specifications provided by the Tech BA.
- Collaborate with the Tech BA and Scrum Master on project delivery and issue resolution.
- Contribute to the coding, testing, and Level 2/3 support of the data warehouse.
- Partner with business stakeholders to ensure requirements are met and collaborate with other technology teams (QA, Production Support) for effective implementation.
- Provide technical expertise to assist in designing, testing, and implementing software code and infrastructure to support data infrastructure and governance activities.
- Troubleshoot and resolve technical problems in applications or processes, providing effective solutions.
- Perform performance tuning using execution plans and other relevant tools.
- Continuously explore and evaluate evolving tools within the Hadoop ecosystem and apply them to relevant business challenges.
- Analyze and debug existing shell scripts, and enhance them as needed.
Required Skills:
- Bachelor's degree in Computer Science or a related field.
- Experience with Spark, HDFS, MapReduce, Hive, Impala, Sqoop, and Linux/Unix technologies.
- Hands-on experience with RDBMS technologies (e.g., Oracle, MariaDB).
- Strong analytical and problem-solving skills.
- Experience working with a Big Data implementation in a production environment.
- Familiarity with Unix shell scripting.
- Understanding of Agile methodology and experience working in an Agile development environment.
Big Data Expert
Posted today
Job Description
We are seeking a skilled Data Engineer to design and implement big data systems that meet user needs.
Your responsibilities will include building the company's big data warehouse system, including batch and streaming data flow construction.
Key Responsibilities:
- Develop ETL architecture to ensure smooth data integration
- Research cutting-edge big data technologies to continuously optimize big data platforms
Requirements:
- Bachelor's degree or above in computer science, database management, machine learning, or a related field
- More than 3 years of data development work experience
- Proficient in SQL language and familiar with MySQL
- Strong familiarity with Shell, Java (or Scala), and Python
- Familiar with common ETL technologies and principles
- Extensive experience in Spark and MapReduce task tuning
As an EA-licensed employment agency, we strive to provide our clients with top talent.
Big Data Specialist
Posted today
Job Description
Job Summary:
We are seeking a highly skilled Big Data Professional to join our team. The ideal candidate will have expertise in handling and processing large datasets using various big data tools and technologies.
Key Responsibilities:
- Design, develop, and implement big data solutions using Apache Spark, Hadoop, and other relevant technologies.
- Work closely with cross-functional teams to understand business requirements and develop data-driven insights.
- Analyze and optimize big data pipelines for improved performance and scalability.
Required Skills and Qualifications:
- Expertise in Apache Spark, Hadoop, and other big data tools and technologies.
- Strong understanding of data modeling, data warehousing, and ETL processes.
- Experience with cloud-based big data platforms such as Azure or AWS.
- Excellent communication and collaboration skills.
Preferred Qualifications:
- Certifications related to data and analytics.
- Experience with data visualization tools such as Tableau or Power BI.
What We Offer:
- A dynamic and collaborative work environment.
- Ongoing training and professional development opportunities.
- Competitive salary and benefits package.
Big Data Specialist
Posted today
Job Description
Job Overview
We are seeking a highly skilled Data Engineer to design and develop scalable data pipelines using Azure Data Factory, Databricks, and Spark.
Key Responsibilities
- Data Pipeline Design: Develop efficient data pipelines that can handle large volumes of structured and unstructured data from various sources.
- Data Ingestion and Transformation: Ingest data from diverse sources, transform it into a suitable format, and store it in Azure Cosmos DB for high-performance access.
- Collaboration: Work closely with data scientists and analysts to understand data requirements and develop customized solutions.
- Quality and Monitoring: Implement data quality and monitoring processes to ensure seamless data workflows.
- Technology Updates: Stay updated with emerging Azure and big data technologies to optimize data solutions.
Requirements and Qualifications
- Technical Skills: Proficiency in Azure Data Factory, Databricks, Spark, and Azure Cosmos DB.
- Programming Languages: Strong skills in Python, Java, or C#.
- Communication: Excellent communication and collaboration skills to work effectively with cross-functional teams.
Benefits
As a key member of our team, you will have the opportunity to work on challenging projects, receive continuous training, and participate in code reviews and testing.
Big Data Specialist
Posted today
Job Description
About the Role:
Our organization is seeking a skilled Big Data Specialist to design, develop, and maintain robust data pipelines that power analytics and business intelligence initiatives. You will work on data integration, transformation, and quality to support various projects across the organization.
Key Responsibilities:
- Design, test, and implement data pipelines and transformation workflows on the data platform.
- Support data integration and processing for cross-functional projects by working with stakeholders to understand requirements and deliverables.
- Conduct regular data quality checks and troubleshoot issues as needed.
- Prepare datasets and documentation for analytics and operational use by developing clear and concise documentation.
- Collaborate with other teams to improve scalability, automation, and platform best practices.
Big Data Specialist
Posted today
Job Description
Mandatory Skills: (Azure OR AWS) AND ("Apache Spark" OR Hive OR Hadoop) AND ("Spark Streaming" OR "Apache Flink" OR Kafka) AND NoSQL AND "Data Modeling" AND (Shell OR Python)
Job Description:
- Mandatory Skills:
- SQL Server / Oracle / DB2 / Netezza: at least a good working knowledge of two of these databases
- Apache Spark Streaming or Apache Flink
- Hadoop
- Kafka
- NoSQL databases: Cosmos DB, Document DB
- Spark, including the DataFrame API
- Hive (HQL)
- Scripting language: Shell or Bash
- CI/CD
- Experience with at least one Cloud Infra provider (Azure/AWS)
- Good to have Skills :
- Certifications related to Data and Analytics
Big Data Engineer
Posted today
Job Description
Job Requirements:
- Degree in Information Technology or equivalent.
- Must have 6-8 years of experience in Data Warehousing.
- Extensive experience in ERD design, ETL, and querying.
- Must have at least 5 years of Python development experience in data engineering.
- Minimum 6 years of SQL experience.
- Experience with REST APIs is mandatory.
- Experience with the AWS cloud is mandatory.
- Experience in CI/CD, Docker, and Kubernetes.
- Good to have experience in big data technologies: Apache Spark, Hive, or Kafka.
- Good to have experience in data governance, data security, and security frameworks.
- Ability to work independently and manage stakeholders and users.
- Excellent written and verbal communication skills.