833 Data Scientists jobs in Singapore
Big Data Analyst
Posted 2 days ago
Job Description
Job Responsibility:
- Work with the data science team on game data analysis, including combing through data logic, basic data processing, analysis, and the corresponding development work.
- Complete basic data analysis and machine learning analysis, and build the required data processing flows and data report visualizations.
- Develop data processing pipelines for data modelling, analysis, and reporting from large and complex transaction datasets.
- Assist with engineering development, data construction, and maintenance when required.
Requirements:
- Degree in Computer Science or related technical field
- At least 2 years of experience in data analysis/data warehouse/mart development and BI reporting.
- At least 2 years of experience in ETL processing data.
- Good understanding of Python, SQL, and HiveQL/SparkSQL, including best practices and techniques for performance tuning. Experience deploying models in production and adjusting model thresholds to improve performance is a plus.
- Familiarity with data visualization tools, such as Google Analytics or Tableau.
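The requirements above mention adjusting model thresholds in production. A minimal sketch of what that involves, using invented scores and labels and only the standard library (a real pipeline would compute these metrics on held-out production data):

```python
# Sweep a decision threshold over model scores and pick the one that
# maximizes F1. Scores and labels are illustrative, not from a real model.

def f1_at_threshold(scores, labels, threshold):
    """Compute F1 for the rule: predict positive when score >= threshold."""
    tp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 1)
    fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 0)
    fn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 1)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

def best_threshold(scores, labels):
    """Try each observed score as a candidate cutoff; return the best by F1."""
    candidates = sorted(set(scores))
    return max(candidates, key=lambda t: f1_at_threshold(scores, labels, t))

scores = [0.1, 0.4, 0.35, 0.8, 0.65, 0.9]
labels = [0, 0, 1, 1, 1, 1]
chosen = best_threshold(scores, labels)
```

In production the same sweep would be re-run periodically, since score distributions drift and a fixed cutoff degrades over time.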
Big Data Engineer
Posted 22 days ago
Job Description
We are seeking a highly skilled and motivated Big Data Engineer to join our data team. The ideal candidate will play a key role in designing, developing, and maintaining scalable big data solutions while providing technical leadership. This role will also support strategic Data Governance initiatives, ensuring data integrity, privacy, and accessibility across the organization.
Key Responsibilities:
- Design, implement, and optimize robust data pipelines and ETL/ELT workflows using SQL and Python.
- Collaborate closely with Data Engineers, Analysts, and cross-functional engineering teams to meet evolving data needs.
- Build synchronous and asynchronous data APIs for downstream systems to consume the data.
- Deploy and manage infrastructure using Terraform and other Infrastructure-as-Code (IaC) tools.
- Develop and maintain CI/CD pipelines for deploying data applications and services.
- Leverage strong experience in AWS services (e.g., S3, Glue, Lambda, RDS, Lake Formation) to support scalable and secure cloud-based data platforms.
- Handle both batch and real-time data processing effectively.
- Apply best practices in data modeling and support data privacy and data protection initiatives.
- Implement and manage data encryption and hashing techniques to secure sensitive information.
- Ensure adherence to software engineering best practices including version control, automated testing, and deployment standards.
- Lead performance tuning and troubleshooting for data applications and platforms.
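The encryption and hashing responsibility above can be illustrated with a small pseudonymization sketch. The field names and key are hypothetical; a real deployment would pull the key from a managed secret store (e.g., AWS Secrets Manager) rather than hard-coding it:

```python
import hashlib
import hmac

# Pseudonymize sensitive identifiers with a keyed hash (HMAC-SHA256):
# the same input always maps to the same token, but the mapping cannot
# be reversed without the secret key. Placeholder key for illustration.
SECRET_KEY = b"replace-with-a-managed-secret"

def pseudonymize(value: str) -> str:
    """Return a stable, irreversible 64-hex-char token for a sensitive value."""
    return hmac.new(SECRET_KEY, value.encode("utf-8"), hashlib.sha256).hexdigest()

record = {"user_id": "U12345", "email": "alice@example.com", "amount": 42}

# Hash only the sensitive columns; leave metrics intact for analytics.
masked = {
    "user_id": pseudonymize(record["user_id"]),
    "email": pseudonymize(record["email"]),
    "amount": record["amount"],
}
```

The keyed-hash approach preserves join keys across tables (equal inputs yield equal tokens), which plain random tokenization would not.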
Required Skills & Experience:
- Strong proficiency in SQL for data modeling, querying, and transformation.
- Advanced Python development skills with an emphasis on data engineering use cases.
- Hands-on experience with Terraform for cloud infrastructure provisioning.
- Proficiency with CI/CD tools, particularly GitHub Actions.
- Deep expertise in AWS cloud architecture and services.
- Demonstrated ability to create and evaluate ERDs and contribute to architectural decisions.
- Strong communication skills.
Preferred Qualifications:
- Experience with big data technologies such as Apache Spark, Hive, or Kafka.
- Familiarity with containerization tools (e.g., Docker) and orchestration platforms (e.g., Kubernetes).
- Solid understanding of data governance, data quality, and security frameworks.
Big Data Engineer
Posted 22 days ago
Job Description
We are seeking a highly skilled and experienced Big Data Engineer to join our team. The ideal candidate will have a minimum of 5 years of experience managing data engineering jobs in a big data environment, e.g., Cloudera Data Platform. The successful candidate will be responsible for designing, developing, and maintaining data ingestion and processing jobs, and will also integrate data sets to provide seamless data access to users.
Skill Set and Track Record
- Good understanding and completion of projects using waterfall/Agile methodology
- Analytical, conceptualisation and problem-solving skills
- Good understanding of analytics and data warehouse implementations
- Hands-on experience in big data engineering jobs using Python, PySpark, Linux, and ETL tools like Informatica
- Strong SQL and data analysis skills. Hands-on experience in data virtualisation tools like Denodo will be an added advantage
- Hands-on experience in a reporting or visualization tool like SAP BO and Tableau is preferred
- Track record in implementing systems using Cloudera Data Platform will be an added advantage
- Motivated and self-driven, with ability to learn new concepts and tools in a short period of time
- Passion for automation, standardization, and best practices
- Good presentation skills are preferred
Responsibilities
- Analyse the Client's data needs and document the requirements
- Refine data collection/consumption by migrating data collection to more efficient channels
- Plan, design and implement data engineering jobs and reporting solutions to meet the analytical needs
- Develop test plan and scripts for system testing, support user acceptance testing
- Work with the Client's technical teams to ensure smooth deployment and adoption of the new solution
- Ensure the smooth operations and service level of IT solutions
- Support production issues
- Seniority level: Mid-Senior level
- Employment type: Contract
- Job function: Information Technology
- Industries: IT Services and IT Consulting
Big Data Engineer
Posted today
Job Description
TikTok is the leading destination for short-form mobile video. Our mission is to inspire creativity and bring joy. TikTok has global offices including Los Angeles, New York, London, Paris, Berlin, Dubai, Singapore, Jakarta, Seoul and Tokyo.
Why Join Us
Creation is the core of TikTok's purpose. Our platform is built to help imaginations thrive. This is doubly true of the teams that make TikTok possible.
Together, we inspire creativity and bring joy - a mission we all believe in and aim towards achieving every day.
To us, every challenge, no matter how difficult, is an opportunity; to learn, to innovate, and to grow as one team. Status quo? Never. Courage? Always.
At TikTok, we create together and grow together. That's how we drive impact - for ourselves, our company, and the communities we serve.
Join us.
About The Team
Our Recommendation Architecture Team is responsible for building and optimizing the architecture of our recommendation system to provide the most stable and best experience for our TikTok users.
The team is responsible for system stability and high availability; performance optimization of online services and offline data flows; resolving system bottlenecks and reducing cost overhead; and building data and service mid-platforms that realize flexible, scalable, high-performance storage and computing systems.
Responsibilities
- Design and implement a reasonable offline data architecture for large-scale recommendation systems
- Design and implement flexible, scalable, stable and high-performance storage and computing systems
- Troubleshoot the production system, and design and implement the necessary mechanisms and tools to ensure stable operation of the overall production system
- Build industry-leading distributed systems such as storage and computing to provide reliable infrastructure for massive data and large-scale business systems
- Develop and implement techniques and analytics applications to transform raw data into meaningful information using data-oriented programming languages and visualisation software
- Apply data mining, data modelling, natural language processing, and machine learning to extract and analyse information from large structured and unstructured datasets
- Visualise, interpret, and report data findings and may create dynamic data reports as well
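As a toy illustration of the "transform raw data into meaningful information" responsibility above, here is a miniature batch aggregation. The log format and fields are invented; a production job would run this kind of transformation in Spark or Flink over far larger inputs:

```python
from collections import Counter

# Aggregate raw event logs (hypothetical "user_id,action" lines) into a
# per-action count report -- a toy stand-in for an offline batch job.
raw_events = [
    "u1,view", "u2,like", "u1,view", "u3,share", "u2,view", "u1,like",
]

def action_report(events):
    """Count events per action type across all log lines."""
    return Counter(line.split(",")[1] for line in events)

report = action_report(raw_events)
```

The same map-then-aggregate shape (parse each record, group, count) is what the distributed frameworks listed in the qualifications parallelize across machines.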
Qualifications
1. Bachelor's degree or above in computer science, software engineering, or a related field
2. Familiar with many open source frameworks in the field of big data, e.g. Hadoop, Hive, Flink, FlinkSQL, Spark, Kafka, HBase, Redis, RocksDB, ElasticSearch, etc.
3. Familiar with Java, C++, and other programming languages
4. Strong coding and troubleshooting ability
5. Willing to tackle questions that have no obvious answers, with a strong enthusiasm for learning new technologies
6. Experience with petabyte-level data processing is a plus
7. At least 3 years of relevant experience
TikTok is committed to creating an inclusive space where employees are valued for their skills, experiences, and unique perspectives. Our platform connects people from across the globe and so does our workplace. At TikTok, our mission is to inspire creativity and bring joy. To achieve that goal, we are committed to celebrating our diverse voices and to creating an environment that reflects the many communities we reach. We are passionate about this and hope you are too.
Big Data Engineer
Posted today
Job Description
TikTok will be prioritizing applicants who have a current right to work in Singapore, and do not require TikTok sponsorship of a visa.
TikTok is the leading destination for short-form mobile video. At TikTok, our mission is to inspire creativity and bring joy. TikTok's global headquarters are in Los Angeles and Singapore, and its offices include New York, London, Dublin, Paris, Berlin, Dubai, Jakarta, Seoul, and Tokyo.
Why Join Us
Creation is the core of TikTok's purpose. Our platform is built to help imaginations thrive. This is doubly true of the teams that make TikTok possible.
Together, we inspire creativity and bring joy - a mission we all believe in and aim towards achieving every day.
To us, every challenge, no matter how difficult, is an opportunity; to learn, to innovate, and to grow as one team. Status quo? Never. Courage? Always.
At TikTok, we create together and grow together. That's how we drive impact - for ourselves, our company, and the communities we serve.
Join us.
About the team
Our Recommendation Architecture Team is responsible for building and optimizing the architecture of our recommendation system to provide the most stable and best experience for our TikTok users. We cover almost all short-text recommendation scenarios in TikTok, such as search suggestions, the video-related search bar, and comment entities. Our recommendation system supports personalized sorting for queries, optimizing the user experience and improving TikTok's search awareness.
Responsibilities
- Design and implement a reasonable offline data architecture for large-scale recommendation systems
- Design and implement flexible, scalable, stable and high-performance storage and computing systems
- Troubleshoot the production system, and design and implement the necessary mechanisms and tools to ensure stable operation of the overall production system
- Build industry-leading distributed systems such as storage and computing to provide reliable infrastructure for massive data and large-scale business systems
- Develop and implement techniques and analytics applications to transform raw data into meaningful information using data-oriented programming languages and visualisation software
- Apply data mining, data modelling, natural language processing, and machine learning to extract and analyse information from large structured and unstructured datasets
- Visualise, interpret, and report data findings and may create dynamic data reports as well
Qualifications
Minimum Qualifications
- Bachelor's degree or above, majoring in Computer Science or a related field, with 3+ years of experience
- Familiar with many open source frameworks in the field of big data, e.g. Hadoop, Hive, Flink, FlinkSQL, Spark, Kafka, HBase, Redis, RocksDB, ElasticSearch, etc.
- Experience in programming, including but not limited to the following languages: C, C++, Java, or Golang
- Effective communication skills and a sense of ownership and drive
- Experience with petabyte-level data processing is a plus
TikTok is committed to creating an inclusive space where employees are valued for their skills, experiences, and unique perspectives. Our platform connects people from across the globe and so does our workplace. At TikTok, our mission is to inspire creativity and bring joy. To achieve that goal, we are committed to celebrating our diverse voices and to creating an environment that reflects the many communities we reach. We are passionate about this and hope you are too.
Big Data Engineer
Posted today
Job Description
We are seeking a highly skilled and experienced Big Data Engineer to join our team. The ideal candidate will have a minimum of 4 years of experience managing data engineering jobs in a big data environment, e.g., Cloudera Data Platform. The successful candidate will be responsible for designing, developing, and maintaining data ingestion and processing jobs, and will also integrate data sets to provide seamless data access to users.
Responsibilities
- Analyse the Authority's data needs and document the requirements.
- Refine data collection/consumption by migrating data collection to more efficient channels.
- Plan, design, and implement data engineering jobs and reporting solutions to meet the analytical needs.
- Develop test plans and scripts for system testing, and support user acceptance testing.
- Build reports and dashboards according to user requirements.
- Work with the Authority's technical teams to ensure smooth deployment and adoption of the new solution.
- Ensure the smooth operations and service level of IT solutions.
- Support production issues.
What we are looking for
- Good understanding and completion of projects using waterfall/Agile methodology.
- Strong SQL, data modelling, and data analysis skills are a must.
- Hands-on experience in big data engineering jobs using Python, PySpark, Linux, and ETL tools like Informatica.
- Hands-on experience in a reporting or visualization tool like SAP BO or Tableau is a must.
- Hands-on experience in DevOps deployment and data virtualisation tools like Denodo will be an advantage.
- Track record of implementing systems using Hive, Impala, and Cloudera Data Platform will be preferred.
- Good understanding of analytics and data warehouse implementations.
- Ability to troubleshoot complex issues ranging from system resources to application stack traces.
- Track record of implementing systems with high availability, high performance, and high security, hosted at various data centres or in hybrid cloud environments, will be an added advantage.
- Passion for automation, standardization, and best practices.
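As a small illustration of the system-testing responsibility above, here is a row-level validation an ingestion job might run before loading. The schema and rules are hypothetical, not taken from any posting:

```python
# Validate ingested rows before loading: non-empty id, numeric amount,
# known country code. Fields and rules are illustrative only.
VALID_COUNTRIES = {"SG", "MY", "ID"}

def validate_row(row: dict) -> list:
    """Return a list of validation errors for one row (empty list = OK)."""
    errors = []
    if not row.get("id"):
        errors.append("missing id")
    try:
        float(row.get("amount", ""))
    except ValueError:
        errors.append("amount not numeric")
    if row.get("country") not in VALID_COUNTRIES:
        errors.append("unknown country")
    return errors

rows = [
    {"id": "1", "amount": "10.5", "country": "SG"},
    {"id": "", "amount": "abc", "country": "XX"},
]
bad = [r for r in rows if validate_row(r)]
```

Quarantining the `bad` rows rather than failing the whole load is a common design choice, so one malformed record does not block the batch.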