Big Data Engineer
Posted today
Job Description
We are seeking a highly skilled and experienced Big Data Engineer to join our team. The ideal candidate will have a minimum of 5 years of experience managing data engineering jobs in Big Data environments (e.g., Cloudera Data Platform). The successful candidate will be responsible for designing, developing, and maintaining data ingestion and processing jobs, and for integrating data sets to provide seamless data access to users.
SKILLS SET AND TRACK RECORD
- Good understanding and completion of projects using waterfall/Agile methodology.
- Analytical, conceptualisation and problem-solving skills.
- Good understanding of analytics and data warehouse implementations
- Hands-on experience in big data engineering jobs using Python, PySpark, Linux, and ETL tools like Informatica
- Strong SQL and data analysis skills. Hands-on experience in data virtualisation tools like Denodo will be an added advantage
- Hands-on experience in a reporting or visualization tool like SAP BO and Tableau is preferred
- Track record in implementing systems using Cloudera Data Platform will be an added advantage.
- Motivated and self-driven, with ability to learn new concepts and tools in a short period of time
- Passion for automation, standardization, and best practices
- Good presentation skills are preferred
The developer is responsible for the following:
- Analyse the Client's data needs and document the requirements.
- Refine data collection/consumption by migrating data collection to more efficient channels
- Plan, design and implement data engineering jobs and reporting solutions to meet the analytical needs.
- Develop test plan and scripts for system testing, support user acceptance testing.
- Work with the Client's technical teams to ensure smooth deployment and adoption of the new solution.
- Ensure the smooth operations and service level of IT solutions.
- Support resolution of production issues.
Big Data Engineer
Posted today
Job Description
Responsibilities
TikTok will be prioritizing applicants who have a current right to work in Singapore, and do not require TikTok sponsorship of a visa.
About the Team
Our Recommendation Architecture Team is responsible for building and optimizing the architecture of the recommendation system to provide the most stable and best experience for our TikTok users.
We cover almost all short-text recommendation scenarios in TikTok, such as search suggestions, the video-related search bar, and comment entities. Our recommendation system supports personalized sorting for queries, optimizing the user experience and improving TikTok's search awareness.
- Design and implement a reasonable offline data architecture for large-scale recommendation systems
- Design and implement flexible, scalable, stable and high-performance storage and computing systems
- Troubleshoot the production system; design and implement the mechanisms and tools necessary to ensure the stability of overall production operations
- Build industry-leading distributed systems, such as storage and computing, to provide reliable infrastructure for massive data and large-scale business systems
- Develop and implement techniques and analytics applications to transform raw data into meaningful information using data-oriented programming languages and visualisation software
- Applying data mining, data modelling, natural language processing, and machine learning to extract and analyse information from large structured and unstructured datasets
- Visualise, interpret, and report data findings and may create dynamic data reports as well
Qualifications
Minimum Qualifications
- Bachelor's degree or above in Computer Science or related fields, with at least 1 year of experience
- Familiar with many open source frameworks in the field of big data, e.g. Hadoop, Hive, Flink, FlinkSQL, Spark, Kafka, HBase, Redis, RocksDB, ElasticSearch, etc.
- Experience in programming, including but not limited to, the following languages: C, C++, Java, or Golang
- Effective communication skills and a sense of ownership and drive
- Experience with petabyte-level data processing is a plus
Big Data Engineer
Posted today
Job Description
Experience
Hands-on Big Data experience using common open-source components (Hadoop, Hive, Spark, Presto, NiFi, MinIO, K8S, Kafka).
Experience in stakeholder management in heterogeneous business/technology organizations.
Experience in the banking or financial industry, including handling sensitive data across regions.
Experience in large data migration projects with on-prem Data Lakes.
Hands-on experience in integrating Data Science Workbench platforms (e.g., KNIME, Cloudera, Dataiku).
Track record in Agile project management and methods (e.g., Scrum, SAFe).
Skills
Knowledge of reference architectures, especially concerning integrated, data-driven landscapes and solutions.
Expert SQL skills, preferably in mixed environments (i.e., classic DWH and distributed).
Working automation and troubleshooting experience in Python using Jupyter Notebooks or common IDEs.
Data preparation for reporting/analytics and visualization tools (e.g., Tableau, Power BI or Python-based).
Applying a data quality framework within the architecture.
Role description
Prepare datasets and data pipelines, support the business, and troubleshoot data issues.
Closely collaborate with the Data & Analytics Program Management and stakeholders to co-design the Enterprise Data Strategy and Common Data Model.
Implement and promote the Data Platform, transformative data processes, and services.
Develop data pipelines and structures for Data Scientists, and test them to ensure they are fit for use.
Maintain and model JSON-based schemas and metadata for re-use across the organization (with central tools).
Resolve and troubleshoot data-related issues and queries.
Cover all processes from enterprise reporting to data science (incl. MLOps).
Big Data Engineer
Posted today
Job Description
Roles & Responsibilities
Job Summary:
We are looking for an experienced Big Data Engineer with at least 5 years of experience in managing data pipelines and processing within Big Data environments (e.g. Cloudera Data Platform). The role involves designing, developing, and maintaining data ingestion and transformation jobs to support analytics and reporting needs.
Key Responsibilities:
- Design and develop data ingestion, processing, and integration pipelines using Python, PySpark, and Informatica.
- Analyse data requirements and build scalable data solutions.
- Support testing, deployment, and production operations.
- Collaborate with business and technical teams for smooth delivery.
- Drive automation, standardization, and performance optimization.
Requirements:
- Bachelor's degree in IT, Computer Science, or related field.
- Minimum 5 years' experience in Big Data Engineering.
- Hands-on skills in Python, PySpark, Linux, SQL, and ETL tools (Informatica preferred).
- Experience with Cloudera Data Platform is an advantage.
- Knowledge of data warehousing, Denodo, and reporting tools (SAP BO, Tableau) preferred.
- Strong analytical, problem-solving, and communication skills.
Job Type: Contract
Big Data Developer
Posted today
Job Description
My Client
My client is a global cryptocurrency exchange dedicated to providing secure, efficient, and innovative digital asset trading services. The platform offers a wide range of services, including spot trading, derivatives trading, and wealth management. They are looking for an experienced and motivated Big Data Developer to join their dynamic team. The role involves working closely with other developers, designers, and product managers to build high-quality, reliable data solutions.
Key Responsibilities:
- Develop scalable data processing architectures using Hadoop, Flink, and other Big Data tools.
- Implement and optimize data ingestion, transformation, and analysis pipelines.
- Design efficient data storage and retrieval mechanisms to support analytical needs.
- Work closely with business stakeholders to translate requirements into technical solutions.
- Ensure data quality, security, and compliance with industry standards.
- Identify and resolve performance bottlenecks in data processing systems.
- Keep up with advancements in Big Data technologies and recommend improvements.
Requirements:
- Degree in Computer Science, Information Technology, or a related field.
- 3+ years of hands-on experience in Big Data engineering.
- Proficiency in Hadoop, Flink, Kafka, and related frameworks.
- Experience with database modeling and large-scale data storage solutions.
- Strong understanding of data security best practices.
- Excellent analytical and troubleshooting skills.
- Effective communication and teamwork capabilities.
- A proactive mindset with a passion for data-driven innovation.
If this sounds like your next move, please don't hesitate to apply. Kindly note that only shortlisted candidates will be contacted; we appreciate your understanding. Data provided is for recruitment purposes only.
About Us
Dada Consultants was established in 2017 with a commitment to providing the best recruitment services in Singapore. We are a dynamic head-hunting team dedicated to sourcing highly competent professionals in the IT industry. We provide enterprises with customized talent solutions and help talents advance their careers.
EA Registration Number: R
Business Registration Number: W. Licence Number: 18S9037
Big Data Engineer - TikTok
Posted today
Job Description
About TikTok
TikTok is the leading destination for short-form mobile video. At TikTok, our mission is to inspire creativity and bring joy. TikTok's global headquarters are in Los Angeles and Singapore, and we also have offices in New York City, London, Dublin, Paris, Berlin, Dubai, Jakarta, Seoul, and Tokyo.
Why Join Us
Inspiring creativity is at the core of TikTok's mission. Our innovative product is built to help people authentically express themselves, discover and connect – and our global, diverse teams make that possible. Together, we create value for our communities, inspire creativity and bring joy - a mission we work towards every day.
We strive to do great things with great people. We lead with curiosity, humility, and a desire to make impact in a rapidly growing tech company. Every challenge is an opportunity to learn and innovate as one team. We're resilient and embrace challenges as they come. By constantly iterating and fostering an "Always Day 1" mindset, we achieve meaningful breakthroughs for ourselves, our company, and our users. When we create and grow together, the possibilities are limitless. Join us.
Diversity & Inclusion
TikTok is committed to creating an inclusive space where employees are valued for their skills, experiences, and unique perspectives. Our platform connects people from across the globe and so does our workplace. At TikTok, our mission is to inspire creativity and bring joy. To achieve that goal, we are committed to celebrating our diverse voices and to creating an environment that reflects the many communities we reach. We are passionate about this and hope you are too.
Job highlights
Positive team atmosphere, Career growth opportunity, Paid leave, 100+ mil users, Meals provided
Responsibilities
Team Introduction
The mission of the Data Platform Singapore Business Partnering (DPSG BP) team is to empower the TikTok business with data. Our goal is to build a Data Warehouse that caters to batch and streaming data, and Data Products that provide useful information for building efficient data metrics and dashboards, which are used to make smarter business decisions and support business growth. If you're looking for a challenging ground to push your limits, this is the team for you!
Responsibilities:
- Translate business requirements and end-to-end designs into technical implementations, and build batch and real-time data warehouses.
- Manage data modeling design, writing, and optimizing ETL jobs.
- Collaborate with the business team to build data metrics based on the data warehouse.
- Responsible for building and maintaining data products.
- Involvement in rollouts, upgrades, implementation, and release of data system changes as required for streamlining of internal practices.
- Develop and implement techniques and analytics applications to transform raw data into meaningful information using data-oriented programming languages and visualisation software.
- Apply data mining, data modelling, natural language processing, and machine learning to extract and analyse information from large structured and unstructured datasets.
- Visualise, interpret, and report data findings and may create dynamic data reports as well.
Qualifications
Minimum Qualifications:
- At least 3 years in software engineering and 2 years of relevant experience in data engineering.
- Proficient in creating and maintaining complex ETL pipelines end-to-end while maintaining high reliability and security.
- Familiar with data warehouse concept and have production experience in modeling design.
- Familiar with at least 1 distributed computing engine (e.g. Hive, Spark, Flink).
- Familiar with at least 1 NoSQL database (e.g. HBase).
Preferred Qualifications:
- Excellent interpersonal and communication skills with the ability to engage and manage internal and external stakeholders across all levels of seniority.
- Strong collaboration skills with the ability to build rapport across teams and stakeholders.
Big Data Support Engineer
Posted today
Job Description
POSITION OVERVIEW: Big Data Support Engineer
POSITION GENERAL DUTIES AND TASKS:
Skillset
- Up to 5 years of IT experience, with 3+ years on Teradata or Hadoop
- Last 3 years of relevant experience in the banking/financial services industry
- Hands-on technologist, strong in SQL, UNIX, and Hadoop/Teradata tools and technologies
- Demonstrable analytical skills with good knowledge of the Big Data ecosystem in a production support environment
- Strong SQL and Unix shell scripting
- Proficient in any job scheduling tool
- Incident, problem, and service outage management experience is a plus
- Good communication and articulation skills
Roles/Responsibilities
- Ensure system availability and timely deliverables for datamarts across business units such as Finance, Regulatory, and Campaign
- Perform root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement
- Provide operational support and ensure application service levels are met
- Troubleshoot batch issues and their root causes to mitigate impact on SLAs
- Proactively identify opportunities for preventive maintenance
- Perform administration and optimization to achieve optimal system performance
- Work with project/enhancement teams to review and transition deliverables to the run team
- Participate in Disaster Recovery and Business Continuity exercises
- Fine-tune applications and systems for high performance and higher volume throughput
- Perform impact analysis of enhancements/projects that will affect supported systems
Big Data Test Engineer
Posted today
Job Description
About PatSnap
Patsnap empowers IP and R&D teams by providing better answers, so they can make faster decisions with more confidence. Founded in 2007, Patsnap is the global leader in AI-powered IP and R&D intelligence. Our domain-specific LLM, trained on our extensive proprietary innovation data, coupled with Hiro, our AI assistant, delivers actionable insights that increase productivity for IP tasks by 75% and reduce R&D wastage by 25%. IP and R&D teams collaborate better with a user-friendly platform across the entire innovation lifecycle. Over 15,000 companies trust Patsnap to innovate faster with AI, including NASA, Tesla, PayPal, Sanofi, Dow Chemical, and Wilson Sonsini.
About the Role
We are looking for a Big Data Test Engineer to ensure the quality and reliability of our data platforms and big data solutions through rigorous testing and validation, working with technologies like Hadoop and Spark while collaborating with our data engineering team. If you are passionate about automation testing and big data testing platform development, and are eager to grow your technical skills, this opportunity is for you.
Key Responsibilities
- Responsible for testing and quality assurance of big data and data-related products.
- Participate in the development of big data testing frameworks or testing tools, and contribute to the construction of continuous integration platforms and automation development.
- Validate data quality, consistency, and accuracy across data pipelines.
- Analyze test results and provide detailed reports on software quality and test coverage.
- Monitor and validate data workflows and pipelines.
- Continuously improve testing processes and methodologies to enhance efficiency and effectiveness.
- Ensure data integrity and accuracy across various data sources and platforms.
- Work closely with data engineers to improve data quality.
Requirements
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- 2+ years of experience in software testing.
- Proficiency in programming languages such as Python or Java, experience with SQL and data querying for data validation.
- Familiarity with test automation tools and frameworks.
- Knowledge of version control systems (e.g., Git) and experience with CI/CD tools.
- Strong communication and interpersonal skills.
- Ability to work independently and collaboratively in a team environment.
- Attention to detail and a commitment to quality.
- Experience with big data platforms (Hadoop, Spark, Hive, HBase).
- Knowledge of ETL processes and data warehousing concepts.
- Experience with performance testing tools (JMeter).
- AWS/Azure/GCP cloud platform experience.
- ISTQB certification or equivalent.
Why Join Us
- Work with cutting-edge big data technologies and tools.
- Collaborative and innovative work environment.
- Professional growth and learning opportunities.
- Regular team events and knowledge sharing sessions.