233 Big Data jobs in Singapore
Big Data Engineer
Posted 1 day ago
Job Viewed
Job Description
- We are seeking a highly skilled and motivated Big Data Engineer to join our data team. The ideal candidate will play a key role in designing, developing, and maintaining scalable big data solutions while providing technical leadership. This role will also support strategic Data Governance initiatives, ensuring data integrity, privacy, and accessibility across the organization.
Key Responsibilities:
- Design, implement, and optimize robust data pipelines and ETL/ELT workflows using SQL and Python (see the sketch after this list).
- Collaborate closely with Data Engineers, Analysts, and cross-functional engineering teams to meet evolving data needs.
- Build synchronous and asynchronous data APIs for downstream systems to consume the data.
- Deploy and manage infrastructure using Terraform and other Infrastructure-as-Code (IaC) tools.
- Develop and maintain CI/CD pipelines for deploying data applications and services.
- Leverage strong experience in AWS services (e.g., S3, Glue, Lambda, RDS, Lake Formation) to support scalable and secure cloud-based data platforms.
- Handle both batch and real-time data processing effectively.
- Apply best practices in data modeling and support data privacy and data protection initiatives.
- Implement and manage data encryption and hashing techniques to secure sensitive information.
- Ensure adherence to software engineering best practices including version control, automated testing, and deployment standards.
- Lead performance tuning and troubleshooting for data applications and platforms.
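For illustration only, here is a minimal sketch of the kind of pipeline described above: a small Python ETL step that extracts rows with SQL, applies a keyed hash to a sensitive column before it reaches the analytics layer, and loads the result. The table names, columns, and the in-memory SQLite stand-in for the real warehouse (e.g. an RDS or Glue target) are assumptions, not details of this role.

```python
"""Minimal ETL sketch: extract with SQL, hash a sensitive column, load.

Illustrative only -- the tables, columns, and the in-memory SQLite database
are stand-ins for a real warehouse target.
"""
import hashlib
import hmac
import sqlite3

HASH_KEY = b"rotate-me-via-a-secrets-manager"  # placeholder key, not a real secret


def hash_pii(value: str) -> str:
    """Keyed hash so raw identifiers never land in the analytics layer."""
    return hmac.new(HASH_KEY, value.encode("utf-8"), hashlib.sha256).hexdigest()


def run_etl(conn: sqlite3.Connection) -> None:
    # Extract: pull only the columns the downstream model needs.
    rows = conn.execute("SELECT order_id, email, amount FROM raw_orders").fetchall()
    # Transform: pseudonymise the e-mail address, keep business fields as-is.
    cleaned = [(order_id, hash_pii(email), amount) for order_id, email, amount in rows]
    # Load: idempotent insert into the curated table.
    conn.executemany(
        "INSERT OR REPLACE INTO curated_orders (order_id, email_hash, amount) VALUES (?, ?, ?)",
        cleaned,
    )
    conn.commit()


if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE raw_orders (order_id INTEGER PRIMARY KEY, email TEXT, amount REAL)")
    conn.execute("CREATE TABLE curated_orders (order_id INTEGER PRIMARY KEY, email_hash TEXT, amount REAL)")
    conn.execute("INSERT INTO raw_orders VALUES (1, 'a@example.com', 42.0)")
    run_etl(conn)
    print(conn.execute("SELECT * FROM curated_orders").fetchall())
```

A keyed hash (HMAC) rather than a bare digest is used so the pseudonymised values cannot be reversed by brute-forcing common inputs without the key.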
Required Skills & Experience:
- Strong proficiency in SQL for data modeling, querying, and transformation.
- Advanced Python development skills with an emphasis on data engineering use cases.
- Hands-on experience with Terraform for cloud infrastructure provisioning.
- Proficiency with CI/CD tools, particularly GitHub Actions.
- Deep expertise in AWS cloud architecture and services.
- Demonstrated ability to create and evaluate ERDs and contribute to architectural decisions.
- Strong communication skills.
Preferred Qualifications:
- Experience with big data technologies such as Apache Spark, Hive, or Kafka.
- Familiarity with containerization tools (e.g., Docker) and orchestration platforms (e.g., Kubernetes).
- Solid understanding of data governance, data quality, and security frameworks.
Big Data Analyst
Posted 6 days ago
Job Viewed
Job Description
Job Responsibility:
- Work with the data science team to complete game data analysis, including organising data logic, basic data processing, analysis, and the corresponding development work.
- Complete basic data analysis and machine learning analysis, and build the required data processing flows and data report visualizations.
- Develop data processing pipelines for data modelling, analysis, and reporting from large and complex transaction datasets.
- Assist in supporting engineering development, data construction, and maintenance when required.
Requirements:
- Degree in Computer Science or related technical field
- At least 2 years of experience in data analysis/data warehouse/mart development and BI reporting.
- At least 2 years of experience in ETL data processing.
- Good understanding of Python, SQL, and HiveQL/SparkSQL, along with the relevant best practices and techniques for performance tuning; experience deploying models in production and adjusting model thresholds to improve performance is a plus (see the SparkSQL sketch after this list).
- Familiarity with data visualization tools, such as Google Analytics or Tableau.
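As a purely illustrative sketch of the SparkSQL/HiveQL-style reporting work referenced above (not taken from the employer's stack), the snippet below computes simple daily game KPIs with PySpark; the event schema and sample rows are invented.

```python
"""Sketch of a SparkSQL reporting aggregation over game event data.

Assumes a local PySpark installation; the game_events columns
(user_id, level, revenue, event_date) are made up for illustration.
"""
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("game-kpi-report").getOrCreate()

events = spark.createDataFrame(
    [
        ("u1", 3, 0.99, "2024-05-01"),
        ("u2", 3, 0.00, "2024-05-01"),
        ("u1", 4, 4.99, "2024-05-02"),
    ],
    ["user_id", "level", "revenue", "event_date"],
)
events.createOrReplaceTempView("game_events")

# Daily KPIs: active users and revenue per level, expressed in SQL so the
# same query can be lifted into HiveQL with minimal changes.
daily_kpis = spark.sql("""
    SELECT event_date,
           level,
           COUNT(DISTINCT user_id) AS active_users,
           SUM(revenue)            AS revenue
    FROM game_events
    GROUP BY event_date, level
    ORDER BY event_date, level
""")

daily_kpis.show()
spark.stop()
```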
Big Data Engineer
Posted 18 days ago
Job Viewed
Job Description
We are seeking a highly skilled and experienced Big Data Engineer to join our team. The ideal candidate will have a minimum of 5 years of experience managing data engineering jobs in a big data environment, e.g., Cloudera Data Platform. The successful candidate will be responsible for designing, developing, and maintaining data ingestion and processing jobs, and will also integrate data sets to provide seamless data access to users.
SKILLS SET AND TRACK RECORD
* Good understanding of, and a track record of completing, projects using waterfall/Agile methodologies.
* Analytical, conceptualisation and problem-solving skills.
* Good understanding of analytics and data warehouse implementations
* Hands-on experience in big data engineering jobs using Python, PySpark, Linux, and ETL tools like Informatica (see the ingestion sketch after this list).
* Strong SQL and data analysis skills. Hands-on experience in data virtualisation tools like Denodo will be an added advantage
* Hands-on experience in a reporting or visualization tool like SAP BO and Tableau is preferred
* Track record in implementing systems using Cloudera Data Platform will be an added advantage.
* Motivated and self-driven, with the ability to learn new concepts and tools in a short period of time
* Passion for automation, standardization, and best practices
* Good presentation skills are preferred
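One illustrative shape such a data engineering job could take is sketched below: a PySpark batch ingestion that lands raw CSV as date-partitioned Parquet, a pattern commonly used to back external Hive tables on Cloudera-style platforms. The paths and column names are placeholders, not details of this role.

```python
"""Sketch of a batch ingestion job: land raw CSV files as partitioned Parquet.

Illustrative only -- input/output paths and column names are placeholders.
"""
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("daily-ingest").getOrCreate()

raw = (
    spark.read
    .option("header", True)
    .csv("/landing/transactions/2024-05-01/")  # hypothetical landing zone
)

cleaned = (
    raw.withColumn("amount", F.col("amount").cast("double"))
       .withColumn("txn_date", F.to_date("txn_timestamp"))
       .dropDuplicates(["txn_id"])             # basic idempotency guard
)

# Partition by date so downstream Hive/Impala queries can prune efficiently.
(
    cleaned.write
    .mode("overwrite")
    .partitionBy("txn_date")
    .parquet("/warehouse/curated/transactions/")  # hypothetical curated zone
)

spark.stop()
```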
The developer will:
* Analyse the Client's data needs and document the requirements.
* Refine data collection/consumption by migrating data collection to more efficient channels
* Plan, design and implement data engineering jobs and reporting solutions to meet the analytical needs.
* Develop test plan and scripts for system testing, support user acceptance testing.
* Work with the Client's technical teams to ensure smooth deployment and adoption of the new solution.
* Ensure the smooth operations and service level of IT solutions.
* Support production issues
Big Data Engineer
Posted 18 days ago
Job Viewed
Job Description
We are seeking a highly skilled and experienced Big Data Engineer to join our team. The ideal candidate will have a minimum of 5 years of experience managing data engineering jobs in a big data environment, e.g., Cloudera Data Platform. The successful candidate will be responsible for designing, developing, and maintaining data ingestion and processing jobs, and will also integrate data sets to provide seamless data access to users.
Skills Set And Track Record
- Good understanding of, and a track record of completing, projects using waterfall/Agile methodologies
- Analytical, conceptualisation and problem-solving skills
- Good understanding of analytics and data warehouse implementations
- Hands-on experience in big data engineering jobs using Python, PySpark, Linux, and ETL tools like Informatica
- Strong SQL and data analysis skills. Hands-on experience in data virtualisation tools like Denodo will be an added advantage
- Hands-on experience in a reporting or visualization tool like SAP BO and Tableau is preferred
- Track record in implementing systems using Cloudera Data Platform will be an added advantage
- Motivated and self-driven, with the ability to learn new concepts and tools in a short period of time
- Passion for automation, standardization, and best practices
- Good presentation skills are preferred
The developer will:
- Analyse the Client's data needs and document the requirements
- Refine data collection/consumption by migrating data collection to more efficient channels
- Plan, design and implement data engineering jobs and reporting solutions to meet the analytical needs
- Develop test plan and scripts for system testing, support user acceptance testing
- Work with the Client's technical teams to ensure smooth deployment and adoption of the new solution
- Ensure the smooth operations and service level of IT solutions
- Support production issues
- Seniority level: Mid-Senior level
- Employment type: Contract
- Job function: Information Technology
- Industries: IT Services and IT Consulting
Big Data Developer
Posted today
Job Viewed
Job Description
We are seeking an experienced Big Data Development Engineer to join our global engineering team. The ideal candidate will have a strong foundation in computer science and experience in deploying and maintaining big data platforms across multiple global locations.
Job Description
As a Big Data Development Engineer, you will be responsible for the deployment, configuration, monitoring, and maintenance of big data platforms. You will also implement and optimize the infrastructure for big data storage, processing, and computation. Additionally, you will be responsible for architectural design, performance tuning, and troubleshooting of big data platforms.
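As a hedged illustration of the monitoring and troubleshooting duties just described (not a prescribed tool for this role), the sketch below polls the standard YARN ResourceManager metrics endpoint and flags a few simple conditions. The host name and thresholds are assumptions; verify the metric field names against your distribution's version of the REST API.

```python
"""Sketch of a platform health probe against the YARN ResourceManager REST API.

The ResourceManager host/port and alert thresholds are assumptions; field
names follow the usual /ws/v1/cluster/metrics response.
"""
import json
import urllib.request

RM_METRICS_URL = "http://resourcemanager.example.com:8088/ws/v1/cluster/metrics"  # placeholder host


def fetch_cluster_metrics(url: str = RM_METRICS_URL) -> dict:
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.load(resp)["clusterMetrics"]


def check_health(metrics: dict) -> list:
    """Return a list of human-readable warnings; an empty list means healthy enough."""
    warnings = []
    if metrics.get("unhealthyNodes", 0) > 0:
        warnings.append(f"{metrics['unhealthyNodes']} unhealthy NodeManager(s)")
    if metrics.get("appsPending", 0) > 50:  # arbitrary example threshold
        warnings.append(f"{metrics['appsPending']} applications queued")
    if metrics.get("availableMB", 0) < 0.1 * metrics.get("totalMB", 1):
        warnings.append("less than 10% of cluster memory available")
    return warnings


if __name__ == "__main__":
    for warning in check_health(fetch_cluster_metrics()):
        print("WARN:", warning)
```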
Required Skills and Qualifications
The successful candidate must have a Bachelor's Degree in Computer Science or a related discipline with experience in software engineering, and at least 2 years of relevant experience in big data platform operations. They should also have a strong foundation in computer science and familiarity with operating system principles, data structures, and algorithms.
Benefits
This is an excellent opportunity to work with a talented team and contribute to the development of a cutting-edge big data analytics platform. As a Big Data Development Engineer, you will have the chance to work on challenging projects, collaborate with experienced engineers, and develop your skills in a rapidly growing tech company.
Big Data Engineer
Posted today
Job Viewed
Job Description
TikTok is the leading destination for short-form mobile video. At TikTok, our mission is to inspire creativity and bring joy. TikTok's global headquarters are in Los Angeles and Singapore, and we also have offices in New York City, London, Dublin, Paris, Berlin, Dubai, Jakarta, Seoul, and Tokyo.
Why Join Us
Inspiring creativity is at the core of TikTok's mission. Our innovative product is built to help people authentically express themselves, discover and connect - and our global, diverse teams make that possible. Together, we create value for our communities, inspire creativity and bring joy - a mission we work towards every day.
We strive to do great things with great people. We lead with curiosity, humility, and a desire to make impact in a rapidly growing tech company. Every challenge is an opportunity to learn and innovate as one team. We're resilient and embrace challenges as they come. By constantly iterating and fostering an "Always Day 1" mindset, we achieve meaningful breakthroughs for ourselves, our company, and our users. When we create and grow together, the possibilities are limitless. Join us.
Diversity & Inclusion
TikTok is committed to creating an inclusive space where employees are valued for their skills, experiences, and unique perspectives. Our platform connects people from across the globe and so does our workplace. At TikTok, our mission is to inspire creativity and bring joy. To achieve that goal, we are committed to celebrating our diverse voices and to creating an environment that reflects the many communities we reach. We are passionate about this and hope you are too.
Job highlights
Positive team atmosphere, Career growth opportunity, Meals provided
Responsibilities
About the team
Libra is a large-scale online one-stop A/B testing platform developed by TikTok Data Platform. Some of its features include:
- Provides experimental evaluation services for all product lines within the company, covering solutions for complex scenarios such as recommendation, algorithm, function, UI, marketing, advertising, operation, social isolation, causal inference, etc.
- Provides services throughout the entire experimental lifecycle, from experiment design and creation, through indicator calculation and statistical analysis, to the final evaluation and launch.
- Supports rapid, iterative experimentation across the company's businesses, encouraging teams to hypothesize boldly and verify carefully.
Responsibilities
- Responsible for the operation and maintenance of the experimentation platform's data systems.
- Construct PB-scale data warehouses; participate in and take responsibility for data warehouse design, modeling, and development.
- Build ETL data pipelines and automated ETL data pipeline systems.
- Build an expert system for metric data processing that combines offline and real-time processing (see the sketch after this list).
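A minimal sketch of the offline/real-time combination mentioned in the last bullet: one metric definition in PySpark reused by a batch backfill and a Structured Streaming job. The column names, paths, and Kafka settings are illustrative assumptions, not details of the Libra platform.

```python
"""Sketch: one metric definition shared by the batch (offline) and streaming paths.

Illustrative only -- the schema, paths, and Kafka settings are invented.
"""
from pyspark.sql import DataFrame, SparkSession, functions as F

spark = SparkSession.builder.appName("experiment-metrics").getOrCreate()


def exposure_metrics(events: DataFrame) -> DataFrame:
    """Per-variant unique users; the same logic serves backfills and live dashboards."""
    return events.groupBy("experiment_id", "variant").agg(
        F.approx_count_distinct("user_id").alias("unique_users")
    )


# Offline path: backfill yesterday's metrics from the warehouse layer.
batch_events = spark.read.parquet("/warehouse/experiment_events/dt=2024-05-01/")  # placeholder path
exposure_metrics(batch_events).write.mode("overwrite").parquet(
    "/warehouse/experiment_metrics/dt=2024-05-01/"
)

# Real-time path: the same aggregation over a Kafka stream (requires the
# spark-sql-kafka connector on the classpath).
stream_events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # placeholder broker
    .option("subscribe", "experiment_events")          # placeholder topic
    .load()
    .select(
        F.from_json(
            F.col("value").cast("string"),
            "experiment_id STRING, variant STRING, user_id STRING",
        ).alias("e")
    )
    .select("e.*")
)

query = (
    exposure_metrics(stream_events)
    .writeStream.outputMode("complete")
    .format("console")
    .start()
)
query.awaitTermination()
```

Keeping the metric logic in one function is what prevents the offline and real-time numbers from drifting apart.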
Qualifications
Minimum Qualifications
- Bachelor's degree in Computer Science, a related technical field involving software or systems engineering, or equivalent practical experience.
- Proficiency with big data frameworks such as Presto, Hive, Spark, Flink, ClickHouse, and Hadoop, plus experience in large-scale data processing.
- Minimum 1 year of experience in Data Engineering.
- Experience writing code in Java, Scala, SQL, Python or a similar language.
- Experience with data warehouse implementation methodologies and with supporting actual business scenarios.
Preferred Qualifications
- Knowledge about a variety of strategies for ingesting, modeling, processing, and persisting data, ETL design, job scheduling and dimensional modeling.
- Expertise in designing, analyzing, and troubleshooting large-scale distributed systems is a plus (Hadoop, M/R, Hive, Spark, Presto, Flume, Kafka, ClickHouse, Flink or comparable solutions).
- Work or internship experience at internet companies; candidates with big data processing experience are preferred.
Big Data Engineer
Posted today
Job Viewed
Job Description
- We are seeking a highly skilled and motivated Big Data Engineer to join our data team. The ideal candidate will play a key role in designing, developing, and maintaining scalable big data solutions while providing technical leadership. This role will also support strategic Data Governance initiatives, ensuring data integrity, privacy, and accessibility across the organization.
Key Responsibilities:
- Design, implement, and optimize robust data pipelines and ETL/ELT workflows using SQL and Python.
- Collaborate closely with Data Engineers, Analysts, and cross-functional engineering teams to meet evolving data needs.
- Build synchronous and asynchronous data APIs for downstream systems to consume the data.
- Deploy and manage infrastructure using Terraform and other Infrastructure-as-Code (IaC) tools.
- Develop and maintain CI/CD pipelines for deploying data applications and services.
- Leverage strong experience in AWS services (e.g., S3, Glue, Lambda, RDS, Lake Formation) to support scalable and secure cloud-based data platforms.
- Handle both batch and real-time data processing effectively.
- Apply best practices in data modeling and support data privacy and data protection initiatives.
- Implement and manage data encryption and hashing techniques to secure sensitive information.
- Ensure adherence to software engineering best practices including version control, automated testing, and deployment standards.
- Lead performance tuning and troubleshooting for data applications and platforms.
Required Skills & Experience:
- Strong proficiency in SQL for data modeling, querying, and transformation.
- Advanced Python development skills with an emphasis on data engineering use cases.
- Hands-on experience with Terraform for cloud infrastructure provisioning.
- Proficiency with CI/CD tools, particularly GitHub Actions.
- Deep expertise in AWS cloud architecture and services.
- Demonstrated ability to create and evaluate ERDs and contribute to architectural decisions.
- Strong communication skills.
Preferred Qualifications:
- Experience with big data technologies such as Apache Spark, Hive, or Kafka.
- Familiarity with containerization tools (e.g., Docker) and orchestration platforms (e.g., Kubernetes).
- Solid understanding of data governance, data quality, and security frameworks.
Big Data Engineer
Posted today
Job Viewed
Job Description
TikTok is the leading destination for short-form mobile video. At TikTok, our mission is to inspire creativity and bring joy. TikTok's global headquarters are in Los Angeles and Singapore, and we also have offices in New York City, London, Dublin, Paris, Berlin, Dubai, Jakarta, Seoul, and Tokyo.
Why Join Us
Inspiring creativity is at the core of TikTok's mission. Our innovative product is built to help people authentically express themselves, discover and connect - and our global, diverse teams make that possible. Together, we create value for our communities, inspire creativity and bring joy - a mission we work towards every day.
We strive to do great things with great people. We lead with curiosity, humility, and a desire to make impact in a rapidly growing tech company. Every challenge is an opportunity to learn and innovate as one team. We're resilient and embrace challenges as they come. By constantly iterating and fostering an "Always Day 1" mindset, we achieve meaningful breakthroughs for ourselves, our company, and our users. When we create and grow together, the possibilities are limitless. Join us.
Diversity & Inclusion
TikTok is committed to creating an inclusive space where employees are valued for their skills, experiences, and unique perspectives. Our platform connects people from across the globe and so does our workplace. At TikTok, our mission is to inspire creativity and bring joy. To achieve that goal, we are committed to celebrating our diverse voices and to creating an environment that reflects the many communities we reach. We are passionate about this and hope you are too.
Job highlights
Positive team atmosphere, Career growth opportunity, Paid leave, 100+ mil users, Meals provided
Responsibilities
Team Introduction
The mission of the Data Platform Singapore Business Partnering (DPSG BP) team is to empower the TikTok business with data. Our goal is to build a Data Warehouse that caters to batch and streaming data, and Data Products that provide useful information for building efficient data metrics and dashboards, which are used to make smarter business decisions and support business growth. If you're looking for a challenging ground to push your limits, this is the team for you.
Responsibilities:
- Translate business requirements and end-to-end designs into technical implementations, and be responsible for building batch and real-time data warehouses.
- Manage data modeling design, and write and optimize ETL jobs.
- Collaborate with the business team to build data metrics based on the data warehouse.
- Responsible for building and maintaining data products.
- Be involved in rollouts, upgrades, implementation, and release of data system changes as required to streamline internal practices.
- Develop and implement techniques and analytics applications to transform raw data into meaningful information using data-oriented programming languages and visualisation software.
- Apply data mining, data modelling, natural language processing, and machine learning to extract and analyse information from large structured and unstructured datasets (see the text-mining sketch after this list).
- Visualise, interpret, and report data findings, and create dynamic data reports where needed.
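As a small, hedged illustration of the text-mining item above (not the team's actual tooling), the snippet below pulls the top TF-IDF terms out of a few invented feedback snippets with scikit-learn.

```python
"""Sketch: extracting simple signals from unstructured text with TF-IDF.

The feedback snippets are invented; in practice the input would come from
warehouse tables and the approach would be agreed with the data science team.
"""
from sklearn.feature_extraction.text import TfidfVectorizer

feedback = [
    "checkout was slow on mobile",
    "love the new dashboard layout",
    "mobile app crashes at checkout",
]

vectorizer = TfidfVectorizer(stop_words="english", ngram_range=(1, 2))
tfidf = vectorizer.fit_transform(feedback)

# Top-weighted terms per document give analysts a quick, explainable summary.
terms = vectorizer.get_feature_names_out()
for row in tfidf.toarray():
    ranked = sorted(zip(terms, row), key=lambda pair: pair[1], reverse=True)[:3]
    print([term for term, weight in ranked if weight > 0])
```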
Qualifications
Minimum Qualifications:
- At least 5 years in software engineering and 2 years of relevant experience in data engineering.
- Proficient in creating and maintaining complex ETL pipelines end-to-end while maintaining high reliability and security.
- Familiar with data warehouse concepts, with production experience in modeling design.
- Familiar with at least one distributed computing engine (e.g. Hive, Spark, Flink).
- Familiarity with at least one NoSQL database (e.g. HBase) is a plus.
Preferred Qualifications:
- Excellent interpersonal and communication skills with the ability to engage and manage internal and external stakeholders across all levels of seniority.
- Strong collaboration skills with the ability to build rapport across teams and stakeholders.
Big Data Engineer
Posted today
Job Viewed
Job Description
TikTok is the leading destination for short-form mobile video. At TikTok, our mission is to inspire creativity and bring joy. TikTok's global headquarters are in Los Angeles and Singapore, and we also have offices in New York City, London, Dublin, Paris, Berlin, Dubai, Jakarta, Seoul, and Tokyo.
Why Join Us
Inspiring creativity is at the core of TikTok's mission. Our innovative product is built to help people authentically express themselves, discover and connect - and our global, diverse teams make that possible. Together, we create value for our communities, inspire creativity and bring joy - a mission we work towards every day.
We strive to do great things with great people. We lead with curiosity, humility, and a desire to make impact in a rapidly growing tech company. Every challenge is an opportunity to learn and innovate as one team. We're resilient and embrace challenges as they come. By constantly iterating and fostering an "Always Day 1" mindset, we achieve meaningful breakthroughs for ourselves, our company, and our users. When we create and grow together, the possibilities are limitless. Join us.
Diversity & Inclusion
TikTok is committed to creating an inclusive space where employees are valued for their skills, experiences, and unique perspectives. Our platform connects people from across the globe and so does our workplace. At TikTok, our mission is to inspire creativity and bring joy. To achieve that goal, we are committed to celebrating our diverse voices and to creating an environment that reflects the many communities we reach. We are passionate about this and hope you are too.
Job highlights
Positive team atmosphere, Career growth opportunity, Flat organization
Responsibilities
About the team
The mission of the Data Platform Singapore Business Partnering (DPSG BP) team is to empower the TikTok business with data. Our goal is to build a Data Warehouse that caters to batch and streaming data, and Data Products that provide useful information for building efficient data metrics and dashboards, which are used to make smarter business decisions and support business growth. If you're looking for a challenging ground to push your limits, this is the team for you.
As a data engineer in the Data Platform team, you will have the opportunity to build, optimize and grow one of the largest data platforms in the world. You'll have the opportunity to gain hands-on experience on all kinds of systems in the data platform ecosystem. Your work will have a direct and huge impact on the company's core products as well as hundreds of millions of users.
Responsibilities:
- Design and build data transformations efficiently and reliably for different purposes (e.g. reporting, growth analysis, and multi-dimensional analysis); see the cube() sketch after this list.
- Design and implement reliable, scalable, robust and extensible big data systems that support core products and business.
- Establish solid design and best engineering practice for engineers as well as non-technical people.
- Develop and implement techniques and analytics applications to transform raw data into meaningful information using data-oriented programming languages and visualisation software.
- Apply data mining, data modelling, natural language processing, and machine learning to extract and analyse information from large structured and unstructured datasets.
- Visualise, interpret, and report data findings, and create dynamic data reports where needed.
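For the multi-dimensional analysis mentioned in the first bullet, a minimal PySpark sketch using cube() is shown below; the dimensions and sample rows are invented for illustration.

```python
"""Sketch: multi-dimensional rollups for reporting with PySpark's cube().

Illustrative only -- a production job would read from the warehouse instead.
"""
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("multidim-report").getOrCreate()

sessions = spark.createDataFrame(
    [
        ("SG", "ios", 120),
        ("SG", "android", 340),
        ("US", "ios", 210),
    ],
    ["country", "platform", "active_users"],
)

# cube() emits every combination of the dimensions, including subtotal and
# grand-total rows (NULL dimension values), which reporting layers can ingest.
report = (
    sessions.cube("country", "platform")
    .agg(F.sum("active_users").alias("active_users"))
    .orderBy("country", "platform")
)

report.show()
spark.stop()
```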
Qualifications
Minimum Qualifications:
- Bachelor's or Master's degree in Computer Science or a related technical field, or equivalent practical experience.
- At least 3 years of experience with Big Data technologies (Hadoop, M/R, Hive, Spark, Metastore, Presto, Flume, Kafka, ClickHouse, Flink, etc.).
Preferred Qualifications:
- Experience with performing data analysis, data ingestion and data integration.
- Experience with schema design and data modeling.
- Experience with ETL (Extraction, Transformation & Loading) and architecting data systems.
- Experience in writing, analyzing and debugging SQL queries.
- Experience in data privacy and security related projects.
- Deep understanding of various Big Data technologies.
- Passionate and self-motivated about technologies in the Big Data area.
- Solid communication and collaboration skills.
Big Data Engineer
Posted today
Job Viewed
Job Description
Founded in 2012, ByteDance's mission is to inspire creativity and enrich life. With a suite of more than a dozen products, including TikTok, Lemon8, CapCut and Pico as well as platforms specific to the China market, including Toutiao, Douyin, and Xigua, ByteDance has made it easier and more fun for people to connect with, consume, and create content.
Why Join ByteDance
Inspiring creativity is at the core of ByteDance's mission. Our innovative products are built to help people authentically express themselves, discover and connect - and our global, diverse teams make that possible. Together, we create value for our communities, inspire creativity and enrich life - a mission we work towards every day.
As ByteDancers, we strive to do great things with great people. We lead with curiosity, humility, and a desire to make impact in a rapidly growing tech company. By constantly iterating and fostering an "Always Day 1" mindset, we achieve meaningful breakthroughs for ourselves, our Company, and our users. When we create and grow together, the possibilities are limitless. Join us.
Diversity & Inclusion
ByteDance is committed to creating an inclusive space where employees are valued for their skills, experiences, and unique perspectives. Our platform connects people from across the globe and so does our workplace. At ByteDance, our mission is to inspire creativity and enrich life. To achieve that goal, we are committed to celebrating our diverse voices and to creating an environment that reflects the many communities we reach. We are passionate about this and hope you are too.
Job highlights
Positive team atmosphere, Career growth opportunity, Meals provided
Responsibilities
About the team
Libra is a large-scale online one-stop A/B testing platform developed by Data Platform. Some of its features include:
- Provides experimental evaluation services for all product lines within the company, covering solutions for complex scenarios such as recommendation, algorithm, function, UI, marketing, advertising, operation, social isolation, causal inference, etc.
- Provides services throughout the entire experimental lifecycle, from experiment design and creation, through indicator calculation and statistical analysis, to the final evaluation and launch.
- Supports rapid, iterative experimentation across the company's businesses, encouraging teams to hypothesize boldly and verify carefully.
Responsibilities
- Responsible for the operation and maintenance of the experimentation platform's data systems.
- Construct PB-scale data warehouses; participate in and take responsibility for data warehouse design, modeling, and development.
- Build ETL data pipelines and automated ETL data pipeline systems.
- Build an expert system for metric data processing that combines offline and real-time processing.
Qualifications
Minimum Qualifications
- Bachelor's degree in Computer Science, a related technical field involving software or systems engineering, or equivalent practical experience.
- Proficiency with big data frameworks such as Presto, Hive, Spark, Flink, ClickHouse, and Hadoop, plus experience in large-scale data processing.
- Minimum 1 year of experience in Data Engineering.
- Experience writing code in Java, Scala, SQL, Python or a similar language.
- Experience with data warehouse implementation methodologies and with supporting actual business scenarios.
Preferred Qualifications
- Knowledge about a variety of strategies for ingesting, modeling, processing, and persisting data, ETL design, job scheduling and dimensional modeling.
- Expertise in designing, analyzing, and troubleshooting large-scale distributed systems is a plus (Hadoop, M/R, Hive, Spark, Presto, Flume, Kafka, ClickHouse, Flink or comparable solutions).
- Work or internship experience at internet companies; candidates with big data processing experience are preferred.