538 Global Data jobs in Singapore
Global Data Solutions Specialist
Posted today
Job Description
Job Overview:
We are seeking talented data engineers to contribute to our core business, impacting the company's financial security and business strategic planning. You will need strong problem-solving abilities and will work hand in hand with cross-functional partners to solve challenging data development and construction tasks efficiently.
About Us
At TikTok, our mission is to inspire creativity and bring joy. Our global headquarters are in Los Angeles and Singapore, and we also have offices in New York City, London, Dublin, Paris, Berlin, Dubai, Jakarta, Seoul, and Tokyo.
Why Join Us?
Inspiring creativity is at the core of our mission. Our innovative product helps people authentically express themselves, discover and connect - and our global, diverse teams make that possible.
We strive to do great things with great people. We lead with curiosity, humility, and a desire to make impact in a rapidly growing tech company. Every challenge is an opportunity to learn and innovate as one team.
Diversity & Inclusion
We are committed to creating an inclusive space where employees are valued for their skills, experiences, and unique perspectives. Our platform connects people from across the globe and so does our workplace.
Global Data Engineering Specialist
Posted today
Job Description
Are you a detail-driven individual with expertise in data engineering? We are seeking someone who can collaborate with cross-functional teams to solve complex data challenges.
As a data engineer, your primary responsibility will be to design and build scalable data pipelines that ingest, process, and transform large volumes of data. You will work closely with product managers, strategic operations, and internal engineering teams to understand data requirements and provide solutions that meet business needs.
You will be responsible for evaluating, implementing, and maintaining data infrastructure tools and technologies to support efficient data processing, storage, and querying. This includes ensuring data integrity, accuracy, and consistency by implementing data quality checks, validation processes, and monitoring mechanisms.
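The posting does not name a specific toolchain, so purely as an illustration of the data quality checks, validation, and monitoring mechanisms described above, here is a minimal sketch assuming a PySpark pipeline; the table path, column names, and rules are hypothetical.

```python
# Minimal data-quality check sketch for a hypothetical "orders" dataset.
# Assumes a PySpark environment; the path, columns, and rules are illustrative only.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_quality_checks").getOrCreate()

df = spark.read.parquet("s3://example-bucket/warehouse/orders/")  # hypothetical location

checks = {
    # The primary key must be present and unique.
    "null_order_id": df.filter(F.col("order_id").isNull()).count(),
    "duplicate_order_id": df.count() - df.select("order_id").distinct().count(),
    # Order amounts should never be negative.
    "negative_amount": df.filter(F.col("amount") < 0).count(),
}

failed = {name: n for name, n in checks.items() if n > 0}
if failed:
    # In a real pipeline this would alert a monitoring channel or fail the scheduled task.
    raise ValueError(f"Data quality checks failed: {failed}")
print("All data quality checks passed.")
```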
Key Responsibilities:
- Design and build scalable data pipelines to ingest, process, and transform large volumes of data.
- Collaborate with cross-functional teams to understand data requirements and provide solutions that meet business needs.
- Evaluate, implement, and maintain data infrastructure tools and technologies to support efficient data processing, storage, and querying.
- Ensure data integrity, accuracy, and consistency by implementing data quality checks, validation processes, and monitoring mechanisms.
To be successful in this role, you should have a solid foundation in computer science and experience as a data engineer or in a similar role supporting data-centric businesses. You should also possess strong knowledge of SQL and work experience with relational and non-relational databases.
Additionally, you should be proficient in at least one programming language such as Python, Java, or Go, and have a solid mastery of data modeling and data warehouse concepts, data integration, and ETL/ELT technologies.
Excellent communication skills and the ability to collaborate effectively with cross-functional teams are essential for this role.
Requirements:
- Bachelor's or higher degree in Computer Science, Information Technology, Programming & System Analysis, Science (Computer Studies), or a related discipline.
- Experience as a data engineer or in a similar role supporting data-centric businesses.
- Strong knowledge of SQL and work experience with relational and non-relational databases.
- Proficiency in at least one programming language such as Python, Java, Go, etc.
- Solid mastery of data modeling and data warehouse concepts, data integration, and ETL/ELT technologies.
- Excellent communication skills and the ability to collaborate effectively with cross-functional teams.
We offer a competitive salary and benefits package, as well as opportunities for professional growth and development.
This is an excellent opportunity to join our team and contribute to the company's success.
Model Safety Analyst - Seed Global Data
Posted today
Job Description
Overview
As a core member of our LLM Global Data Team, you'll be at the heart of our operations. Gain first-hand experience in understanding the intricacies of training Large Language Models (LLMs) with diverse data sets.
Responsibilities
Conduct research on the latest developments in AI safety across academia and industry. Proactively identify limitations in existing evaluation paradigms and propose novel approaches to test models under real-world and edge-case scenarios.
Design and continuously refine safety evaluations for multimodal models. Define and implement robust evaluation metrics to assess safety-related behaviors, failure modes, and alignment with responsible AI principles.
Conduct a thorough analysis of safety evaluation results to surface safety issues stemming from model training, fine-tuning, or product integration. Translate these findings into actionable insights to inform model iteration and product design improvements.
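Tooling is not specified for this role; as a rough, non-authoritative sketch of turning safety evaluation results into the kind of actionable metrics described above, the pandas snippet below rolls labelled results up into per-category unsafe-response rates. The file name, column names, labels, and the 1% tolerance are assumptions, not part of the job description.

```python
# Sketch: aggregate labelled safety-evaluation results into per-category unsafe rates.
# The file name, columns ("category", "verdict") and labels are hypothetical.
import pandas as pd

results = pd.read_csv("safety_eval_results.csv")  # one row per prompt/response pair

# Share of responses judged unsafe, broken down by harm category.
per_category = (
    results.assign(unsafe=results["verdict"].eq("unsafe"))
    .groupby("category")["unsafe"]
    .agg(unsafe_rate="mean", n="count")
    .sort_values("unsafe_rate", ascending=False)
)
print(per_category)

# Flag categories above an illustrative tolerance for deeper qualitative review.
flagged = per_category[per_category["unsafe_rate"] > 0.01]
print("Categories needing follow-up:")
print(flagged)
```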
Partner with cross-functional stakeholders to build scalable safety evaluation workflows. Help establish feedback loops that continuously inform model development and risk mitigation strategies.
Manage end-to-end project lifecycles, including scoping, planning, execution, and delivery. Effectively allocate team resources and coordinate efforts across functions to meet project goals and timelines.
Please note that this role may involve exposure to potentially harmful or sensitive content, either as a core function, through ad hoc project participation, or via escalated cases.
This may include, but is not limited to, text, images, or videos depicting: Hate speech or harassment; Self-harm or suicide-related content; Violence or cruelty; Child safety. Support resources and resilience training will be provided to support employee well-being.
Qualifications
Minimum Qualifications
Bachelor's degree or higher, preferably in AI policy, Computer Science, Engineering, journalism, international relations, law, regional studies, or a related discipline.
Strong command of English in both written and verbal communication. Proficiency in other languages is a plus, as some projects may involve cross-regional collaboration or content in non-English languages.
Strong analytical skills, with the ability to interpret both qualitative and quantitative data and translate them into clear insights.
Proven project management abilities, with experience leading cross-functional initiatives in dynamic, fast-paced environments.
Creative problem-solving mindset, with comfort working under ambiguity and leveraging tools and technology to improve processes and outputs.
Preferred Qualifications
Professional experience in AI safety, Trust & Safety, Risk consulting, or Risk management. Experience working at or with AI companies is highly desirable.
Intellectually curious, self-motivated, detail-oriented, and team-oriented.
Deep interest in emerging technologies, user behavior, and the human impact of AI systems. Enthusiasm for learning from real-world case studies and applying insights in a high-impact setting.
About Doubao (Seed)
Founded in 2023, the ByteDance Doubao (Seed) Team is dedicated to pioneering advanced AI foundation models. Our goal is to lead in cutting-edge research and drive technological and societal advancements. With a strong commitment to AI, our research areas span deep learning, reinforcement learning, Language, Vision, Audio, AI Infra, and AI Safety. Our team has labs and research positions across China, Singapore, and the US.
Why Join ByteDance
Inspiring creativity is at the core of ByteDance's mission. Our innovative products are built to help people authentically express themselves, discover and connect – and our global, diverse teams make that possible. Together, we create value for our communities, inspire creativity and enrich life - a mission we work towards every day.
As ByteDancers, we strive to do great things with great people. We lead with curiosity, humility, and a desire to make impact in a rapidly growing tech company. By constantly iterating and fostering an "Always Day 1" mindset, we achieve meaningful breakthroughs for ourselves, our Company, and our users. When we create and grow together, the possibilities are limitless. Join us.
Diversity & Inclusion
ByteDance is committed to creating an inclusive space where employees are valued for their skills, experiences, and unique perspectives. Our platform connects people from across the globe and so does our workplace. At ByteDance, our mission is to inspire creativity and enrich life. To achieve that goal, we are committed to celebrating our diverse voices and to creating an environment that reflects the many communities we reach. We are passionate about this and hope you are too.
Seniority level
Associate
Employment type
Full-time
Job function
Management and Manufacturing
Industries: Technology, Information and Internet
Big Data Developer
Posted today
Job Description
My Client
My client is a global cryptocurrency exchange dedicated to providing secure, efficient, and innovative digital asset trading services. The platform offers a wide range of services, including spot trading, derivatives trading, and wealth management. They are looking for an experienced and motivated Big Data Developer to join their dynamic team. The role involves working closely with other developers, designers, and product managers to deliver high-quality, reliable data solutions that support an exceptional user experience.
Key Responsibilities:
- Develop scalable data processing architectures using Hadoop, Flink, and other Big Data tools.
- Implement and optimize data ingestion, transformation, and analysis pipelines.
- Design efficient data storage and retrieval mechanisms to support analytical needs.
- Work closely with business stakeholders to translate requirements into technical solutions.
- Ensure data quality, security, and compliance with industry standards.
- Identify and resolve performance bottlenecks in data processing systems.
- Keep up with advancements in Big Data technologies and recommend improvements.
Requirements:
- Degree in Computer Science, Information Technology, or a related field.
- 3+ years of hands-on experience in Big Data engineering.
- Proficiency in Hadoop, Flink, Kafka, and related frameworks.
- Experience with database modeling and large-scale data storage solutions.
- Strong understanding of data security best practices.
- Excellent analytical and troubleshooting skills.
- Effective communication and teamwork capabilities.
- A proactive mindset with a passion for data-driven innovation.
If this sounds like your next move, please don't hesitate to apply. Kindly note that only shortlisted candidates will be contacted; we appreciate your understanding. Data provided is used for recruitment purposes only.
About Us
Dada Consultants was established in 2017 with a commitment to providing the best recruitment services in Singapore. We are a dynamic head-hunting team dedicated to sourcing highly competent professionals in the IT industry. We provide enterprises with customized talent solutions and help talent advance in their careers.
EA Registration Number: R
Business Registration Number: W. Licence Number: 18S9037
Big Data Engineer
Posted today
Job Description
Position Details:
Company: U3 Infotech (Payroll)
Role: Big Data Engineer
Position: Contract
Duration: 12+ Months
Location: Singapore
Job Description:
We are seeking a highly skilled and motivated Lead Big Data Engineer to join our data team. The ideal candidate will play a key role in designing, developing, and maintaining scalable big data solutions while providing technical leadership. This role will also support strategic Data Governance initiatives, ensuring data integrity, privacy, and accessibility across the organization.
Key Responsibilities:
● Design, implement, and optimize robust data pipelines and ETL/ELT workflows using SQL and Python.
● Lead architecture discussions, including the creation and review of Entity Relationship Diagrams (ERDs) and overall system design.
● Collaborate closely with Data Engineers, Analysts, and cross-functional engineering teams to meet evolving data needs.
● Deploy and manage infrastructure using Terraform and other Infrastructure-as-Code (IaC) tools.
● Develop and maintain CI/CD pipelines for deploying data applications and services.
● Leverage strong experience in AWS services (e.g., S3, Glue, Lambda, RDS, Lake Formation) to support scalable and secure cloud-based data platforms.
● Handle both batch and real-time data processing effectively.
● Apply best practices in data modeling and support data privacy and data protection initiatives.
● Implement and manage data encryption and hashing techniques to secure sensitive information (a sketch follows after this list).
● Ensure adherence to software engineering best practices including version control, automated testing, and deployment standards.
● Lead performance tuning and troubleshooting for data applications and platforms.
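As an illustration of the data encryption and hashing responsibility flagged in the list above, here is a minimal Python sketch that pseudonymises sensitive identifiers with a keyed SHA-256 hash before data lands in the lake. The field names and the way the secret is supplied are assumptions; in practice the key would come from a secrets manager or KMS rather than an environment variable.

```python
# Sketch: pseudonymise sensitive identifiers with a keyed (HMAC) SHA-256 hash.
# Field names and the key source are illustrative, not a prescribed design.
import hashlib
import hmac
import os

KEY = os.environ["PII_HASH_KEY"].encode()  # assumed to be injected securely at deploy time

def pseudonymise(value: str) -> str:
    """Deterministically hash a sensitive value so it can still be joined on."""
    return hmac.new(KEY, value.encode("utf-8"), hashlib.sha256).hexdigest()

record = {"customer_id": "C-10293", "email": "jane@example.com", "amount": 42.50}
protected = {
    **record,
    "customer_id": pseudonymise(record["customer_id"]),
    "email": pseudonymise(record["email"]),
}
print(protected)
```

A keyed hash is used rather than a bare SHA-256 because low-entropy identifiers are otherwise easy to brute-force, while the result stays deterministic so downstream joins still work.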
Required Skills & Experience:
● Strong proficiency in SQL for data modeling, querying, and transformation.
● Advanced Python development skills with an emphasis on data engineering use cases.
● Hands-on experience with Terraform for cloud infrastructure provisioning.
● Proficiency with CI/CD tools, particularly GitHub Actions.
● Deep expertise in AWS cloud architecture and services.
● Demonstrated ability to create and evaluate ERDs and contribute to architectural decisions.
● Strong communication and leadership skills with experience mentoring engineering teams.
Preferred Qualifications:
● Experience with big data technologies such as Apache Spark, Hive, or Kafka.
● Familiarity with containerization tools (e.g., Docker) and orchestration platforms (e.g., Kubernetes).
● Solid understanding of data governance, data quality, and security frameworks.
About the Company
U3 Infotech is a Technology Solutions, Managed Services, and Talent Management Solutions company with over 2 decades of experience in the APAC region since 2002. Our clients include Fortune 100, MNCs, Leading Regional Organisations, Government organizations, and Startups. We work with clients across Banking, Insurance, Bio-Science, Pharmaceutical, Healthcare, Engineering, Product, and Supply Chain domains.
We have been growing rapidly through value creation, solving complex problems, and addressing the opportunities of our clients' businesses. We differentiate ourselves through our deep commitment at all levels, entrepreneurial mindset, outcome-driven approach, and financial resources.
Please refer to U3's Privacy Notice for Job Applicants/Seekers. When you apply, you voluntarily consent to the collection, use and disclosure of your personal data for recruitment/employment and related purposes.
Cheers! Stay Safe & Healthy.
Thanks and Regards,
Raghunath
Senior Recruitment Consultant
Talent Acquisition Team
Mobile:
Email:
133 Cecil Street, Keck Seng Tower, #14-3,
Singapore
Singapore | Australia | Malaysia | Thailand
Vietnam | India | Philippines| Hong Kong
Job Types: Full-time, Contract
Contract length: 12 months
Pay: $10,000.00 - $11,000.00 per month
Benefits:
- Health insurance
Work Location: In person
Big Data Engineer
Posted today
Job Description
Responsibilities
TikTok will be prioritizing applicants who have a current right to work in Singapore and do not require TikTok sponsorship of a visa.
About the team
Our Recommendation Architecture Team is responsible for building and optimizing the architecture of the recommendation system to provide the most stable and best experience for our TikTok users.
We cover almost all short-text recommendation scenarios in TikTok, such as search suggestions, the video-related search bar, and comment entities. Our recommendation system supports personalized sorting for queries, optimizing the user experience and improving TikTok's search awareness.
- Design and implement a reasonable offline data architecture for large-scale recommendation systems
- Design and implement flexible, scalable, stable and high-performance storage and computing systems
- Troubleshoot the production system; design and implement the necessary mechanisms and tools to ensure the stability of its overall operation
- Build industry-leading distributed systems, such as storage and computing, to provide reliable infrastructure for massive data and large-scale business systems
- Develop and implement techniques and analytics applications to transform raw data into meaningful information using data-oriented programming languages and visualisation software (a minimal sketch follows this list)
- Apply data mining, data modelling, natural language processing, and machine learning to extract and analyse information from large structured and unstructured datasets
- Visualise, interpret, and report data findings, and create dynamic data reports where needed
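By way of illustration only, and not a description of TikTok's actual architecture, the sketch below shows the flavour of offline batch processing referenced in the list above: a PySpark job that rolls raw search logs up into per-query statistics that a ranking pipeline could consume. The paths, schema, and date are hypothetical.

```python
# Illustrative offline batch job: roll raw search logs up into per-query
# impression/click counts and CTR. Paths, schema, and partitioning are assumptions.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("query_feature_rollup").getOrCreate()

logs = spark.read.parquet("hdfs:///logs/search_events/dt=2024-01-01/")  # hypothetical path

query_stats = (
    logs.groupBy("query")
    .agg(
        F.count("*").alias("impressions"),
        F.sum(F.col("clicked").cast("int")).alias("clicks"),
    )
    .withColumn("ctr", F.col("clicks") / F.col("impressions"))
    .withColumn("dt", F.lit("2024-01-01"))
)

# Write back partitioned by date so downstream jobs can pick up new days incrementally.
query_stats.write.mode("overwrite").partitionBy("dt").parquet("hdfs:///features/query_stats/")
```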
Qualifications
Minimum Qualifications
- Bachelor's degree or above, majoring in Computer Science, or related fields, with at least 1 year of experience
- Familiar with open-source frameworks in the field of big data, e.g. Hadoop, Hive, Flink, FlinkSQL, Spark, Kafka, HBase, Redis, RocksDB, Elasticsearch, etc.
- Experience in programming, including, but not limited to, the following programming languages: C, C++, Java, or Golang
- Effective communication skills and a sense of ownership and drive
- Experience with petabyte-level data processing is a plus
Big Data Analyst
Posted today
Job Description
Job Responsibility:
- Work with the data science team to complete game data analysis, including organising data logic, basic data processing, analysis, and the corresponding development work.
- Complete basic data analysis and machine learning analysis, and build the required data processing flows and data report visualizations.
- Develop data processing pipelines for data modelling, analysis, and reporting from large and complex transaction datasets.
- Assist in supporting engineering development, data construction, and maintenance when required.
Requirements:
- Degree in Computer Science or related technical field
- At least 2 years of experience in data analysis/data warehouse/mart development and BI reporting.
- At least 2 years of experience in ETL data processing.
- Good understanding of Python, SQL, and HiveQL/SparkSQL, and the relevant best practices/techniques for performance tuning; experience deploying models in production and adjusting model thresholds to improve performance is a plus (see the sketch after this list).
- Familiarity with data visualization tools, such as Google Analytics or Tableau.
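As a small, hedged illustration of the threshold-adjustment point referenced in the requirements above, the scikit-learn sketch below chooses a classification threshold from the precision-recall trade-off instead of the default 0.5. The data is synthetic and the 0.80 precision target is arbitrary.

```python
# Sketch: pick a classification threshold from the precision-recall trade-off
# rather than defaulting to 0.5. Synthetic data; the 0.80 precision target is arbitrary.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import precision_recall_curve
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, weights=[0.9, 0.1], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
scores = model.predict_proba(X_test)[:, 1]

precision, recall, thresholds = precision_recall_curve(y_test, scores)
# Lowest threshold that still meets the target precision (fall back to 0.5 if none does).
meets_target = precision[:-1] >= 0.80
chosen = thresholds[meets_target][0] if meets_target.any() else 0.5

preds = (scores >= chosen).astype(int)
print(f"chosen threshold = {chosen:.3f}, flagged share = {preds.mean():.3f}")
```

In practice the threshold would be validated on a held-out set and tied to the relative cost of false positives and false negatives for the business metric in question.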
Charles Lau Ngie Hao, License No.: 02C3423, Personnel Registration No.: R
Please note that your response to this advertisement and communications with us pursuant to this advertisement will constitute informed consent to the collection, use and/or disclosure of personal data by ManpowerGroup Singapore for the purpose of carrying out its business, in compliance with the relevant provisions of the Personal Data Protection Act 2012. To learn more about ManpowerGroup's Global Privacy Policy, please visit
Big Data Engineer
Posted today
Job Description
We are seeking a highly skilled and experienced Big Data Engineer to join our team. The ideal candidate will have a minimum of 5 years of experience managing data engineering jobs in a big data environment, e.g., Cloudera Data Platform. The successful candidate will be responsible for designing, developing, and maintaining data ingestion and processing jobs, and will also integrate data sets to provide seamless data access to users.
SKILLS SET AND TRACK RECORD
- Good understanding and completion of projects using waterfall/Agile methodology.
- Analytical, conceptualisation and problem-solving skills.
- Good understanding of analytics and data warehouse implementations
- Hands-on experience in big data engineering jobs using Python, PySpark, Linux, and ETL tools like Informatica
- Strong SQL and data analysis skills. Hands-on experience in data virtualisation tools like Denodo will be an added advantage
- Hands-on experience in a reporting or visualization tool like SAP BO and Tableau is preferred
- Track record in implementing systems using Cloudera Data Platform will be an added advantage.
- Motivated and self-driven, with the ability to learn new concepts and tools in a short period of time
- Passion for automation, standardization, and best practices
- Good presentation skills are preferred
The developer will be responsible to:
- Analyse the Client's data needs and document the requirements.
- Refine data collection/consumption by migrating data collection to more efficient channels.
- Plan, design, and implement data engineering jobs and reporting solutions to meet analytical needs.
- Develop test plans and scripts for system testing, and support user acceptance testing (see the reconciliation sketch after this list).
- Work with the Client's technical teams to ensure smooth deployment and adoption of the new solution.
- Ensure the smooth operation and service levels of IT solutions.
- Support the resolution of production issues.
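As an illustration of the system-testing responsibility noted above, and not the Client's actual test framework, here is a minimal PySpark reconciliation script that compares a source extract against the loaded target table. The table names, key column, and summed measure are hypothetical.

```python
# Sketch: post-load reconciliation between a source extract and its target table,
# the kind of scripted check a system-test plan might include. All names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("load_reconciliation").enableHiveSupport().getOrCreate()

source = spark.read.parquet("/landing/transactions/dt=2024-01-01/")
target = spark.table("dwh.transactions").filter(F.col("dt") == "2024-01-01")

checks = {
    "row_count_delta": source.count() - target.count(),
    "amount_sum_delta": (
        source.agg(F.sum("amount")).first()[0] - target.agg(F.sum("amount")).first()[0]
    ),
    # Keys present in the source extract but missing from the target table.
    "missing_keys": source.select("txn_id").subtract(target.select("txn_id")).count(),
}

failures = {name: value for name, value in checks.items() if value != 0}
print("Reconciliation passed." if not failures else f"Reconciliation failures: {failures}")
```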
Big Data Innovator
Posted today
Job Description
We are a data-driven organization that aims to inspire creativity and bring joy to our users. Our mission is to help people authentically express themselves, discover and connect with others.
We have a global presence with offices in various locations around the world. Our teams are diverse and inclusive, making us a great place to work for those who share our values.
Job Highlights
- Flat organizational structure
- Career growth opportunities
- Positive team atmosphere
As a Big Data Engineer, you will be working closely with cross-functional teams to understand data requirements and deliver data solutions that meet business needs. You will be responsible for designing, building and optimizing scalable data pipelines, ensuring data integrity and accuracy.
Responsibilities
Big Data Specialist
Posted today
Job Description
As a data processing specialist, you will be responsible for developing and maintaining large-scale data processing systems using SQL and Hadoop technologies.
Your key responsibilities will include:
- Designing and implementing efficient data processing workflows using SQL and Hadoop
- Developing and deploying scalable data pipelines using big data technologies
- Collaborating with cross-functional teams to integrate data solutions into business applications
- Analyzing complex technical issues and providing creative solutions
To succeed in this role, you will need:
- 3-5 years of software development experience
- Good experience in SQL development
- Knowledge in Spark and Hadoop
- Excellent problem-solving and analytical skills
Having banking knowledge is an added advantage.
This position offers opportunities for growth and professional development in a dynamic environment. If you are passionate about working with big data and SQL technologies, we encourage you to apply.