757 Database Developers jobs in Singapore
Data Engineer
Posted today
Job Description
Rapsodo is a Sports Technology company with offices in the USA, Singapore, Turkey & Japan. We develop sports analytics products that are data-driven, portable and easy-to-use to empower athletes at all skill levels to analyse and improve their performance. From Major League Baseball star pitchers to Golf tour players, athletes use Rapsodo technology to up their game across the world. Trusted by coaches and players from youths to professionals, Rapsodo provides real-time insights for all-time performance.
We are innovative, focused, and rapidly growing. We are continuously looking for team players who will stop at nothing to deliver state-of-the-art solutions as part of Team Rapsodo.
If you have a passion for sports analytics, love working with Python, R, and SQL, and enjoy building data pipelines and visualizations in Tableau, Power BI, and Matplotlib, this is your opportunity to play a pivotal role in shaping the future of data-driven sports performance.
Responsibilities:
- Develop scalable ETL pipelines to process performance and tracking data, ensuring seamless flow from data capture to analysis.
- Analyze and optimize sports data, translating raw sensor readings into meaningful performance metrics for athletes and coaches.
- Leverage Python, R, and SQL to build advanced analytics models, unlocking deeper insights into player performance.
- Create intuitive dashboards and reports using Tableau, Power BI, and Matplotlib, enabling users to visualize and interpret data effectively.
- Ensure data accuracy, consistency, and accessibility, collaborating with sports scientists, product teams, and software engineers.
- Continuously improve data workflows to enhance efficiency and support real-time analytics for Rapsodo's cutting-edge sports products.
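As an illustration of the first two responsibilities, a minimal Python ETL transform might look like the sketch below. All field names (session_id, speed_mps, spin_rpm) are invented for the example and do not reflect Rapsodo's actual data model.

```python
from statistics import mean

def transform(readings):
    """Drop malformed sensor rows, then aggregate per-session metrics.

    `readings` is a list of dicts with hypothetical fields:
    session_id, speed_mps (launch speed), spin_rpm.
    """
    # Extract/clean: discard rows with missing or non-positive speed.
    clean = [r for r in readings
             if r.get("speed_mps") is not None and r["speed_mps"] > 0]
    # Group readings by session.
    sessions = {}
    for r in clean:
        sessions.setdefault(r["session_id"], []).append(r)
    # Transform: roll raw readings up into per-session metrics.
    return {
        sid: {
            "avg_speed_mps": round(mean(r["speed_mps"] for r in rows), 2),
            "max_spin_rpm": max(r["spin_rpm"] for r in rows),
            "samples": len(rows),
        }
        for sid, rows in sessions.items()
    }

raw = [
    {"session_id": "s1", "speed_mps": 40.0, "spin_rpm": 2200},
    {"session_id": "s1", "speed_mps": 42.0, "spin_rpm": 2400},
    {"session_id": "s1", "speed_mps": None, "spin_rpm": 0},   # dropped
    {"session_id": "s2", "speed_mps": 38.5, "spin_rpm": 2100},
]
metrics = transform(raw)
```

In a production pipeline the same shape recurs: validate, group, aggregate, then load the result into a warehouse or dashboard layer.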
Requirements:
- Bachelor’s or Master’s degree in Computer Science, Data Engineering, Mathematics, Statistics, or a related field (or equivalent hands-on experience).
- 3-5 years of experience in data engineering, analytics, or a related role.
- Strong proficiency in Python, R, and SQL for data transformation and statistical analysis.
- Experience working with sensor-based or time-series data (preferred).
- Hands-on expertise in Tableau, Power BI, and Matplotlib for creating insightful visualizations.
- A passion for sports analytics and the ability to bridge the gap between raw data and real-world performance insights.
- Strong problem-solving skills and the ability to work independently in a fast-paced environment.
- Excellent communication skills, able to collaborate effectively with both technical and non-technical stakeholders.
At Rapsodo, you'll help drive innovation in sports analytics.
You’ll work with state-of-the-art tracking technology to develop high-performance data systems that transform complex data into actionable insights for athletes and coaches.
You’ll have the opportunity to collaborate with athletes, engineers, and product teams, pushing the boundaries of real-time analytics.
If you're passionate about solving complex data challenges, optimizing large-scale data workflows, and shaping the future of sports technology, we’d love to hear from you!
Data Engineer
Posted today
Job Description
Join to apply for the Data Engineer role at Sembcorp Industries Ltd
Sembcorp is a leading energy and urban solutions provider headquartered in Singapore. Led by its purpose to drive energy transition, Sembcorp delivers sustainable energy solutions and urban developments by leveraging its sector expertise and global track record.
Purpose & Scope
We are seeking a highly skilled and self-driven Azure Data Engineer with expertise in PySpark, Python, and modern Azure data services including Synapse Analytics and Azure Data Explorer. The ideal candidate will design, develop, and maintain scalable data pipelines and architectures, enabling effective data management, analytics, and governance.
Key Roles and Responsibilities
- Design, develop, and maintain scalable and efficient data pipelines (both batch and real-time streaming) using modern data engineering tools.
- Build and manage data lakes, data warehouses, and data marts using Azure Data Services.
- Integrate data from various sources including APIs, structured/unstructured files, IoT devices, and real-time streams.
- Develop and optimize ETL/ELT workflows using tools such as Azure Data Factory, Databricks, and Apache Spark.
- Implement real-time data ingestion and processing using Azure Stream Analytics, Event Hubs, or Kafka.
- Ensure data quality, availability, and security across the entire data lifecycle.
- Collaborate with analysts, data scientists, and engineering teams to deliver business-aligned data solutions.
- Contribute to data governance efforts and ensure compliance with data privacy standards.
- Establish and manage source system connectivity (on-prem, APIs, sensors, etc.).
- Handle deployment and migration of data pipeline artifacts between environments using Azure DevOps.
- Design, develop, and troubleshoot PySpark scripts and orchestration pipelines.
- Perform data integration using database joins and other transformations aligned with project requirements.
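The join-based integration step above can be sketched with the stdlib sqlite3 module; in the actual role this would run on Synapse or Databricks, and all table and column names here are illustrative.

```python
import sqlite3

# In-memory database standing in for a warehouse; schema is invented.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE devices (device_id TEXT PRIMARY KEY, site TEXT);
    CREATE TABLE readings (device_id TEXT, kwh REAL);
    INSERT INTO devices VALUES ('d1', 'plant-a'), ('d2', 'plant-b');
    INSERT INTO readings VALUES ('d1', 10.5), ('d1', 4.5), ('d2', 7.0);
""")

# Join sensor readings to device metadata, then aggregate per site --
# the same shape an ELT transform would take in Synapse or Spark SQL.
rows = conn.execute("""
    SELECT d.site, SUM(r.kwh) AS total_kwh
    FROM readings r JOIN devices d USING (device_id)
    GROUP BY d.site ORDER BY d.site
""").fetchall()
```

The same join-then-aggregate pattern translates directly to PySpark (`readings.join(devices, "device_id").groupBy("site").sum("kwh")`).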
Qualifications, Skills & Experience
- Bachelor’s Degree in Computer Science, Engineering, or related field
- 3–5 years of experience in Azure-based data engineering, PySpark, and Big Data technologies
- Strong hands-on experience with Azure Synapse Analytics for pipeline orchestration and data handling
- Expertise in SQL, data warehousing, data marts, and ingestion using PySpark and Python
- Solid experience building and maintaining cloud-based ETL/ELT pipelines, especially with Azure Data Factory or Synapse
- Familiarity with cloud data environments such as Azure and optionally AWS
- Experience with Azure DevOps for CI/CD and artifact deployment
- Excellent communication, problem-solving, and interpersonal skills
- 1–2 years of experience working with Azure Data Explorer (including row-level security and access controls).
- Experience with Azure Purview for metadata management, data lineage, governance, and discovery
- Ability to work independently and take full ownership of assignments
- Proactive in identifying and resolving blockers and escalating when needed
- Exposure to real-time processing with tools like Azure Stream Analytics or Kafka
Our Culture at Sembcorp
At Sembcorp, our culture is shaped by a strong set of shared behaviours that guide the way we work and uphold our commitment to driving the energy transition.
We foster an institution-first mindset, where the success of Sembcorp takes precedence over individual interests. Collaboration is at the heart of what we do, as we work seamlessly across markets, businesses, and functions to achieve our goals together. Accountability is a core principle, ensuring that we take ownership of our commitments and deliver on them with integrity and excellence. These values define who we are and create a workplace where our people can thrive while making a meaningful impact on driving energy transition.
Join us in making a real impact!
Seniority level: Mid-Senior level
Employment type: Full-time
Job function: Information Technology
Industries: Utilities and Services for Renewable Energy
Data Engineer
Posted today
Job Description
About PatSnap
Patsnap empowers IP and R&D teams by providing better answers, so they can make faster decisions with more confidence. Founded in 2007, Patsnap is the global leader in AI-powered IP and R&D intelligence. Our domain-specific LLM, trained on our extensive proprietary innovation data, coupled with Hiro, our AI assistant, delivers actionable insights that increase productivity for IP tasks by 75% and reduce R&D wastage by 25%. Over 15,000 companies trust Patsnap to innovate faster with AI, including NASA, Tesla, PayPal, Sanofi, Dow Chemical, and Wilson Sonsini.
Key Responsibilities
- Manage daily operations of patent data and related data, including integration, parsing, storage, and backup, ensuring data security, stability, and availability;
- Analyze and understand new data requirements, design data processing solutions, write relevant documentation, and implement code;
- Monitor data quality, design and implement targeted optimization solutions, and continuously improve data integrity and accuracy;
- Optimize data processing frameworks and performance, enhancing the efficiency of system data handling to support rapid iteration of product data;
- Work closely with data architects, participate in architecture upgrades and updates, and enhance system scalability and stability.
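A minimal sketch of the data-quality monitoring described above, using an invented record shape and validation rules; real patent feeds differ.

```python
import re

# Hypothetical publication-number pattern, e.g. "US1234567B2".
# Real patent numbering varies by jurisdiction.
PUB_NO = re.compile(r"^[A-Z]{2}\d{6,}[A-Z]\d?$")

def validate(record):
    """Return a list of quality problems found in one parsed record."""
    errors = []
    if not PUB_NO.match(record.get("pub_no", "")):
        errors.append("bad pub_no")
    if not record.get("title", "").strip():
        errors.append("missing title")
    return errors

good = {"pub_no": "US1234567B2", "title": "Widget"}
bad = {"pub_no": "12345", "title": ""}
```

In practice checks like these run inside the ingestion pipeline, and records that fail are routed to a quarantine table for review rather than loaded.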
Requirements
- Bachelor’s degree or above in computer science, data engineering, software engineering, or related fields; at least 3 years of experience in data development;
- Proficiency in Java or Python with a solid programming foundation;
- Familiarity with basic components of big data processing, such as Flink and Spark, with experience in performance optimization preferred;
- Familiarity with Linux systems and common operations;
- Strong sense of responsibility and proactive learning ability, able to adapt flexibly in a fast-paced environment;
- Clear thinking with excellent communication skills and team spirit, able to collaborate effectively with team members from diverse cultural backgrounds;
- Basic understanding of data privacy and compliance, capable of working in a regulatory environment.
Why Join Us
Technical Growth: Directly participate in large-scale data processing and optimization projects, gaining practical technical experience.
Systematic Training: Engage in architecture optimization under the guidance of senior data architects, enhancing data processing and system design capabilities.
Diverse Team: Rich opportunities for cross-departmental and multinational collaboration, helping you rapidly grow into a full-stack data engineer.
Career Development: Clear career development paths and ample space for growth.
Data Engineer
Posted today
Job Description
Engineers Gate (EG) is a leading quantitative investment company focused on computer-driven trading in global financial markets. We are a team of researchers, engineers, and financial industry professionals using sophisticated statistical models to analyze data and identify predictive signals to generate superior investment returns. EG’s investment teams each focus on their independent strategies while utilizing the firm’s proprietary, state-of-the-art technology and data platform to optimize their alpha research.
We are passionate about implementing scientific and mathematical methods to explore, isolate, and solve problems in the global financial markets. We believe that career fulfillment and enterprise success converge when smart, hard-working, and intellectually curious people come together with a shared goal of innovation and the pursuit of excellence.
Job Summary:
The Data Engineer will join Engineers Gate’s Core Technology Team as a key player in building and scaling the company’s data platform, one of the cornerstones of our research infrastructure. This individual will be responsible for creating new research datasets by cleaning, normalizing, and loading data into the platform, as well as enhancing the underlying platform itself. As part of a small, focused team, the Data Engineer will collaborate closely with team members and end users, gain full stack data experience, and have immediate firmwide impact.
Key Responsibilities
- Develop efficient, scalable data and research infrastructure.
- Implement robust workflows for processing structured and unstructured financial datasets.
- Collaborate closely with quantitative portfolio managers and researchers to:
- Understand their data needs
- Solve data-related challenges
- Propose potential data applications
- Build impactful data products
- Optimize existing data processes to accelerate research
- Analyze dataset coverage and quality; build data knowledge base; develop and maintain reusable libraries for data analysis.
- Actively work with data vendors and brokers to source and onboard new datasets.
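The cleaning-and-normalizing responsibility might look like this in miniature: a hypothetical vendor price feed with gaps, forward-filled before computing returns (forward-fill is just one possible gap policy).

```python
def clean_prices(series):
    """Forward-fill missing closes, then compute simple returns.

    `series` is a list of (date, close-or-None) pairs from an
    imagined vendor feed; dates are assumed sorted ascending.
    """
    filled, last = [], None
    for date, close in series:
        if close is None:
            close = last          # forward-fill the gap
        filled.append((date, close))
        last = close
    # Simple return between consecutive filled observations.
    returns = [
        (d2, round(c2 / c1 - 1, 4))
        for (_, c1), (d2, c2) in zip(filled, filled[1:])
        if c1  # skip pairs with a leading unfilled gap
    ]
    return filled, returns

feed = [("d1", 100.0), ("d2", None), ("d3", 110.0)]
filled, rets = clean_prices(feed)
```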
Qualifications
- Undergraduate degree in CS, EE, or related field.
- 2-5 years professional experience in software or data engineering.
- Programming experience in Python.
- Familiarity with data pipelines/ETL tools, relational databases, and/or SQL.
Please review the candidate privacy notice (the "Notice") available at . By seeking employment with EG SG Pte. Ltd. or submitting your application and/or personal data to EG SG Pte. Ltd., you acknowledge that you have read and understood the Notice and have agreed and consented to EG SG Pte. Ltd. collecting, using, disclosing, processing and/or transferring your personal data in accordance with the Notice.
Data Engineer
Posted today
Job Description
Experienced Professional - Management Consulting | Full-time | Hybrid | Singapore, Singapore
We are an established, globally active management consulting company with offices in Switzerland, Germany, Austria, the UK, the USA, Singapore, Hong Kong, the Philippines, Australia, Indonesia and India. We are a valued partner to many of the world's largest international financial services and insurance firms. We support our clients at all project management stages, from the development of strategies and operational frameworks to the technical implementation and handover. Our expertise in business and technology, combined with our methodical approach, enables us to create sustainable added value for our clients' businesses.
About the job:
- Develop processes for the ingestion of data using various programming languages, techniques, and tools from systems implemented on Oracle, Teradata, SAP, and the Hadoop technology stack
- Evaluate and make decisions around dataset implementations designed and proposed by peer engineers
- Build large consumer database models for financial planning & analytics including Balance Sheet, Profit and Loss, Cost Analytics and Related Ratios
- Develop ETL, real time and batch data processes feeding into in-memory data infrastructure
- Perform and document data analysis, data validation, and data mapping/design
- Work with clients to solve business problems in fraud, compliance and financial crime and present project results
- Use emerging and open-source technologies such as Spark, Hadoop, and Scala
- Collaborate on scalability issues involving access to massive amounts of data and information
- You should be comfortable with working with high profile clients on their sites
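As a toy illustration of the financial planning and analytics modelling listed above, a cost/income ratio computed over a flattened P&L record; all keys and figures are invented for the example.

```python
def cost_income_ratio(pl):
    """Cost/income ratio from a (hypothetical) flattened P&L dict.

    Values are assumed to be in the reporting currency; in a real
    engagement these would come from the in-memory data infrastructure.
    """
    income = pl["net_interest_income"] + pl["fee_income"]
    return round(pl["operating_costs"] / income, 3)

pl = {
    "net_interest_income": 800.0,
    "fee_income": 200.0,
    "operating_costs": 550.0,
}
ratio = cost_income_ratio(pl)
```

At scale the same ratio logic would be expressed as a Spark aggregation over the consumer database models rather than a per-record function.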
Requirements
- Bachelor's degree in computer science, Physics, Mathematics, or similar degree or equivalent
- Experience with open-source big data tools such as Spark, Hadoop, and especially Scala
- 2 to 6 years of experience working in the Financial Services sector on big data project implementations
- Demonstrate strong analytical and problem-solving skills and the ability to debug and solve technical challenges with sometimes unfamiliar technologies
- Client facing experience, good communication and presentation skills
- Strong technical communication skills with demonstrable experience of working in rapidly changing client environments
What we offer:
- Flexible working hours with part-time working models and hybrid options
- Attractive fringe benefits and salary structures in line with the market
- Modern and central office space with good public transport connections
- Can-do mentality and one-spirit culture
- Varied events and employee initiatives
Application documents:
- Resume
- Job references
- Qualifications (bachelor/ master diploma, etc.) with certificate of grades
- Motivation letter: Why Synpulse? Why you? Why this function?
Do you approach your tasks with commitment and enjoyment and are you convinced that teamwork achieves better results than working alone? Are you proactive and willing to go the extra mile for your clients? Are you motivated not only to design solutions but also to implement them? As a flexible and goal-oriented person, you will quickly assume entrepreneurial responsibility with us.
Do you appreciate the spirit of a growing international company with Swiss roots and a strong corporate culture? Then we look forward to receiving your online application at
Our people are our most valuable asset. They drive our growth and anchor our success. They are experts, thought leaders, and trusted partners of our global clients. The Synpulse OneSpirit is reflected in our people and unrivaled culture of collaboration, inclusion, entrepreneurship, and fun. We are good corporate citizens in our communities, and we celebrate success together with our Synpulse crypto token.
Data Engineer
Posted today
Job Description
AgriG8 is a Singapore-based Agri-Fintech startup committed to empowering regional rice farmers by connecting them with financial institutions. As part of our ongoing journey to harness the power of AI and data, we are looking for a passionate Data Engineer to join our mission-driven team.
In this role, you’ll work with rich datasets from our farming and value chain ecosystem to unlock insights that drive meaningful innovation. You’ll design data visualizations, explore patterns, and apply mathematical models to create actionable intelligence for our partners and clients.
Your work will directly contribute to enhancing the livelihoods of smallholder farmers and promoting sustainable agriculture throughout the region.
Key Responsibilities
- Analyze existing datasets to identify patterns, anomalies, and opportunities for value creation.
- Collaborate with stakeholders to understand business needs and design data-driven solutions.
- Build clear and insightful data visualizations to communicate findings effectively.
- Develop and apply mathematical and statistical models to extract actionable insights.
- Create and maintain dashboards and reports that support informed decision-making.
- Design basic data pipelines for data preparation and visualization workflows.
- Perform root-cause analysis and explore trends in agricultural data and its value chain.
- Research and apply suitable analytical techniques to solve specific business problems.
- Present complex findings to both technical and non-technical stakeholders.
- Stay up to date with the latest trends in analytics, modeling, and visualization.
- Collaborate across cross-functional teams to embed data insights into our platforms.
- Support development and integration of data-driven features in digital products.
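A minimal example of the kind of statistical modelling the responsibilities describe, here a trailing moving average over a hypothetical seasonal yield series (units and figures are invented).

```python
def trailing_mean(values, window=3):
    """Trailing moving average: a baseline smoother for yield series.

    For the first few points the window shrinks to what is available,
    so the output has the same length as the input.
    """
    out = []
    for i in range(len(values)):
        chunk = values[max(0, i - window + 1): i + 1]
        out.append(round(sum(chunk) / len(chunk), 2))
    return out

yields = [4.0, 4.2, 3.8, 4.4, 4.6]  # hypothetical tonnes/ha per season
smooth = trailing_mean(yields)
```

A smoothed series like this is a common first step before trend analysis or a simple forecast, and plugs straight into a Power BI or matplotlib chart.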
Qualifications & Requirements
- Bachelor’s degree in Data Science, Computer Science, Mathematics, Statistics, or a related technical field.
- Up to 3 years of experience in data engineering, data analysis, or data science. Fresh graduates are welcome to apply.
- Strong programming skills in Python for data manipulation and analysis.
- Proficiency in SQL for querying and managing data.
- Hands-on experience with data visualization tools (e.g. Power BI).
- Familiarity with statistical modelling, mathematical analysis, and drawing business insights.
- Ability to communicate complex data in an accessible and impactful way.
- Understanding of data cleaning and preparation techniques.
- Knowledge of machine learning concepts and practical applications.
- Strong analytical and problem-solving skills.
- Clear and effective communication with diverse audiences.
- High attention to detail and a commitment to data quality.
Preferred Skills
- Experience with time series analysis and forecasting methods.
- Familiarity with interactive dashboard development.
- Exposure to cloud platforms like AWS, Azure, or GCP.
- Experience working with geospatial data.
- Understanding of ETL processes and data pipeline development.
What We Offer
- Direct impact on farmers’ livelihoods and sustainable agricultural practices.
- Career opportunity in the data and AI space with hands-on opportunities.
- A collaborative, inclusive, and innovative work environment.
- Mentorship from experienced professionals across domains.
- Opportunities for learning, growth, and regional exposure.
- Competitive compensation and benefits package.
Seniority level: Entry level
Employment type: Full-time
Job function: Information Technology
Data Engineer
Posted today
Job Description
The Customer Lifecycle Management (CLM) team at StarHub is dedicated to understanding, enhancing, and optimizing the customer journey. From acquisition to retention, the CLM team employs data-driven strategies to provide unparalleled customer experiences. Through a combination of data science, business intelligence, customer insights, NPS, and digital analytics, the CLM team ensures that StarHub's offerings are aligned with customer needs, leading to increased loyalty, satisfaction, and growth.
The Data Engineer plays a crucial role in the CLM team by designing, implementing, and maintaining the data infrastructure that supports the team's analytics and data science initiatives. This position is responsible for developing and optimizing data pipelines, ensuring data quality and accessibility, and collaborating with data scientists and analysts to enable efficient data-driven decision-making. The Data Engineer will work on integrating data from various sources, implementing data governance practices, and creating scalable solutions that support the CLM team's objectives in enhancing customer experiences and driving business growth.
Key Responsibilities:
- Data Pipeline Development: Design, implement, and maintain efficient ETL (Extract, Transform, Load) processes to integrate data from various sources. Optimize existing data pipelines for improved performance and scalability.
- Data Warehouse Management: Develop and maintain the data warehouse architecture, ensuring it meets the needs of the CLM team. Implement data modeling techniques to optimize data storage and retrieval.
- Data Quality Assurance: Implement data quality checks and monitoring systems to ensure the accuracy and reliability of data used in analytics and reporting. Develop and maintain data documentation and metadata.
- Big Data Technologies: Utilize big data technologies (e.g., Hadoop, Spark) to process and analyze large volumes of customer data efficiently. Implement solutions for real-time data processing when required.
- Data Governance: Collaborate with relevant stakeholders to implement data governance policies and procedures. Ensure compliance with data privacy regulations and internal data management standards.
- Infrastructure Optimization: Continuously assess and optimize the data infrastructure to improve performance, reduce costs, and enhance scalability. Implement automation solutions to streamline data processes.
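The data quality assurance responsibility above could start as simple as this sketch: duplicate-key and completeness checks over a batch of rows (all field names are invented).

```python
def quality_report(rows, key, required):
    """Basic quality checks: duplicate keys and missing required fields.

    `rows` is a list of dicts; `key` names the unique-key field and
    `required` lists fields that must be non-empty.
    """
    seen, dupes, missing = set(), 0, 0
    for r in rows:
        k = r.get(key)
        if k in seen:
            dupes += 1          # key already seen -> duplicate
        seen.add(k)
        if any(r.get(f) in (None, "") for f in required):
            missing += 1        # at least one required field is empty
    return {"rows": len(rows), "duplicate_keys": dupes, "incomplete": missing}

rows = [
    {"cust_id": 1, "plan": "5G"},
    {"cust_id": 1, "plan": "Fibre"},   # duplicate key
    {"cust_id": 2, "plan": ""},        # incomplete record
]
report = quality_report(rows, "cust_id", ["plan"])
```

In a production pipeline the same checks would run as a post-load step, with the report feeding a monitoring dashboard and alert thresholds.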
Education Level:
- Bachelor's degree in Computer Science, Information Systems, Data Engineering, or a related field. Master's degree in a relevant field is preferred.
Required Experience and Knowledge:
- 3-5 years of experience in data engineering or a related field.
- Strong knowledge of data warehouse concepts, ETL processes, and data modeling techniques.
- Experience with cloud-based data platforms (e.g., AWS, Snowflake).
- Proficiency in SQL and experience with NoSQL databases.
- Experience with big data technologies such as Hadoop, Spark, or Kafka.
- Knowledge of data governance principles and data privacy regulations.
Job-Specific Technical Skills:
- Proficiency in Python or Scala for data processing and automation.
- Experience with ETL tools (e.g., Apache NiFi, Talend, Informatica).
- Knowledge of data visualization tools (e.g., Tableau, Power BI) to support data quality checks and pipeline monitoring.
- Familiarity with version control systems (e.g., Git) and CI/CD practices.
- Experience with container technologies (e.g., Docker) and orchestration tools (e.g., Kubernetes).
- Understanding of data security best practices and implementation.
Behavioural Skills:
- Strong problem-solving and analytical skills.
- Excellent communication abilities to collaborate with technical and non-technical team members.
- Proactive approach to identifying and resolving data-related issues.
- Ability to manage multiple projects and priorities effectively.
- Detail-oriented with a focus on data quality and system reliability.
- Adaptability to work with evolving technologies and changing business requirements.
- Strong teamwork skills and ability to work in a collaborative environment.
Data Engineer
Posted today
Job Description
We are looking for a passionate and skilled Python Data Engineer to join our Singapore team.
The role will cover the following areas:
- Collaborate with BAs to clarify requirements in a fast-paced environment.
- Develop scalable Python and PySpark scrapers integrated with existing Databricks frameworks.
- Ingest structured and unstructured data from websites (HTML, PDF, Excel, APIs, CSV, etc.).
- Build and test scrapers in Databricks (DBX), following GM design patterns and using shared libraries.
- Use ADF for orchestration and integrate with GM frameworks for logging, error handling, and translation.
- Write clean, reusable Python code for data processing, automation, and transformation.
- Monitor, debug, and maintain data pipelines to ensure reliability and fast issue resolution.
- Review technical specifications and work closely with FO Analysts for validation and clarification.
- Ensure scrapers support Japanese websites with power market data and adhere to HTML/API nuances.
- Follow IaC and AZD standards as per GM setup for deployment and infrastructure.
- Document pipelines and approaches based on existing standards; explain solutions clearly to users.
- Present work regularly to the Data Engineering Manager, wider team, and Head of Data when needed.
- Validate solutions against functional and non-functional requirements.
- Demonstrate a proactive, problem-solving, and delivery-focused mindset.
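A minimal scraper in the spirit of the responsibilities above, using only the stdlib html.parser; the real pipelines here would run on Databricks with the shared GM libraries, and the HTML fragment below is made up.

```python
from html.parser import HTMLParser

class CellExtractor(HTMLParser):
    """Collect <td> text from a power-market-style HTML table."""
    def __init__(self):
        super().__init__()
        self.in_td = False
        self.cells = []

    def handle_starttag(self, tag, attrs):
        if tag == "td":
            self.in_td = True

    def handle_endtag(self, tag):
        if tag == "td":
            self.in_td = False

    def handle_data(self, data):
        # Only keep non-blank text that appears inside a cell.
        if self.in_td and data.strip():
            self.cells.append(data.strip())

html = "<table><tr><td>2025-01-01</td><td>98.5</td></tr></table>"
parser = CellExtractor()
parser.feed(html)
```

A production scraper would add fetching, retry and logging via the orchestration framework, and write the parsed rows to the lakehouse; the parsing core stays this small.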
Technologies:
- Strong expertise in Python programming and SQL.
- Hands-on experience with web scraping and industry best practices.
- Familiarity with Python libraries for language translation (nice to have).
- Knowledge of modern cloud-based data architectures, including Data Lakehouse on Databricks.
- Experience with Databricks and Azure is highly desirable.
- Good understanding of Big Data frameworks like Spark and file formats like Parquet.
Software engineering and delivery:
- Agile delivery methodologies such as SCRUM or Kanban
- Knowledge and work management tools (e.g., JIRA, Confluence)
- Certified in Data Engineering, Azure or Python
Seniority level: Mid-Senior level
Employment type: Full-time
Job function: Information Technology
Industries: IT Services and IT Consulting
Data Engineer
Posted today
Job Description
Join to apply for the Data Engineer role at Bitdeer (NASDAQ: BTDR)
About Bitdeer:
Bitdeer Technologies Group (NASDAQ: BTDR) is a leader in the blockchain and high-performance computing industry. It is one of the world's largest holders of proprietary hash rate and suppliers of hash rate. Bitdeer is committed to providing comprehensive computing solutions for its customers.
The company was founded by Jihan Wu, an early advocate and pioneer in cryptocurrency who cofounded multiple leading companies serving the blockchain economy. Headquartered in Singapore, Bitdeer has deployed mining datacenters in the United States, Norway, and Bhutan. It offers specialized mining infrastructure, high-quality hash rate sharing products, and reliable hosting services to global users. The company also offers advanced cloud capabilities for customers with high demands for artificial intelligence. Dedication, authenticity, and trustworthiness are foundational to our mission of becoming the world's most reliable provider of full-spectrum blockchain and high-performance computing solutions. We welcome global talent to join us in shaping the future.
About The Job
We are seeking an experienced Data Engineer to join our Data Platform team with a focus on improving and optimizing our existing data infrastructure. The ideal candidate will have deep expertise in data management, cloud-based big data services, and real-time data processing, collaborating closely with cross-functional teams to enhance scalability, performance, and reliability.
Key Responsibilities
- Optimize and improve existing data pipelines and workflows to enhance performance, scalability, and reliability.
- Collaborate with the IT team to design and enhance cloud infrastructure, ensuring alignment with business and technical requirements.
- Demonstrate a deep understanding of data management principles to optimize data frameworks, ensuring efficient data storage, retrieval, and processing.
- Act as the service owner for cloud big data services (e.g., AWS EMR with Spark) and orchestration tools (e.g., Apache Airflow), driving operational excellence and reliability.
- Design, implement, and maintain robust data pipelines and workflows to support analytics, reporting, and machine learning use cases.
- Develop and optimize solutions for real-time data processing using technologies such as Apache Flink and Kafka.
- Monitor and troubleshoot data systems, identifying opportunities for automation and performance improvements.
- Stay updated on emerging data technologies and best practices to drive continuous improvement in data infrastructure.
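A pure-Python stand-in for the windowed real-time aggregation that Flink or Kafka Streams would perform in production; event timestamps and values are invented.

```python
from collections import defaultdict

def tumbling_window_sum(events, window_s=60):
    """Sum (ts_seconds, value) events into tumbling windows.

    Each event lands in the window starting at ts - ts % window_s,
    mirroring a Flink-style tumbling event-time window.
    """
    buckets = defaultdict(float)
    for ts, value in events:
        buckets[ts - ts % window_s] += value
    return dict(sorted(buckets.items()))

# Three events: two in the [0, 60) window, one in [60, 120).
events = [(5, 1.0), (30, 2.0), (65, 4.0)]
windows = tumbling_window_sum(events)
```

The stream processor's job is exactly this grouping, done continuously and with watermarks to handle late events, which the batch sketch above omits.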
Seniority level: Mid-Senior level
Employment type: Full-time
Job function: Information Technology
Industries: Software Development