1,067 Data Infrastructure jobs in Singapore
Data Infrastructure Specialist
Posted today
Job Description
We are seeking highly skilled professionals to join our data engineering team, collaborating with cross-functional partners to develop innovative data solutions. Key responsibilities include working closely with product managers and internal engineering teams to understand data requirements and deliver tailored data solutions.
- Evaluate, implement, and maintain cutting-edge data infrastructure tools and technologies to support efficient data processing, storage, and querying.
- Design, build, and optimize scalable data pipelines to ingest, process, and transform large volumes of data in support of complex analytical queries and reporting requirements.
- Implement robust data quality checks, validation processes, and monitoring mechanisms to ensure data integrity, accuracy, and consistency.
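Row-level data quality checks of the kind described above can be sketched in a few lines of Python; the schema (user_id, amount, country) and validation rules here are hypothetical examples, not part of any listed role.

```python
# Minimal sketch of a row-level data quality check; the schema and
# rules are hypothetical examples.
from typing import Any

RULES = {
    "user_id": lambda v: isinstance(v, int) and v > 0,
    "amount": lambda v: isinstance(v, (int, float)) and v >= 0,
    "country": lambda v: isinstance(v, str) and len(v) == 2,
}

def validate(rows: list[dict[str, Any]]) -> tuple[list[dict], list[dict]]:
    """Split rows into (valid, rejected) according to RULES."""
    valid, rejected = [], []
    for row in rows:
        ok = all(field in row and check(row[field])
                 for field, check in RULES.items())
        (valid if ok else rejected).append(row)
    return valid, rejected

rows = [
    {"user_id": 1, "amount": 9.5, "country": "SG"},
    {"user_id": -3, "amount": 2.0, "country": "SG"},   # fails user_id rule
    {"user_id": 2, "amount": 1.0, "country": "Sing"},  # fails country rule
]
valid, rejected = validate(rows)
print(len(valid), len(rejected))  # 1 valid row, 2 rejected
```

In a production pipeline the rejected rows would typically be routed to a quarantine table and surfaced through monitoring rather than silently dropped.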
Required Skills and Qualifications:
To succeed in this role, candidates should possess a Bachelor's degree or higher in Computer Science, Information Technology, or a related field. A minimum of 5 years of experience as a data engineer or in a similar role is required, along with solid knowledge of SQL and hands-on experience with relational and non-relational databases. Proficiency in at least one programming language, such as Python, Java, or Go, is also necessary.
- Strong command of data modeling and data warehouse concepts, data integration, and ETL/ELT technologies.
- Effective communication skills and the ability to collaborate with cross-functional teams.
Data Infrastructure Specialist
Posted today
Job Description
Data Infrastructure Specialist
We are seeking a highly skilled Data Infrastructure Specialist to maintain and enhance our IT environment. The successful candidate will be responsible for ensuring high availability, security, and performance of the data ecosystem.
- Install, update, and troubleshoot engineering design software (EDA tools) and licenses.
- Maintain computer servers and resolve issues promptly.
- Create automated systems to streamline workflows and increase productivity.
The ideal candidate will possess excellent analytical and problem-solving skills, with a strong background in Unix/Linux environments and EDA license management.
- Familiarity with SGE/LSF job schedulers.
- Proficiency with version control and CI tools such as Git, Gerrit, Jenkins, or ClearCase.
To be successful in this role, candidates must have a Bachelor's degree and 5 years of experience in IT, specifically related to data infrastructure.
- Excellent communication skills.
- Strong analytical and problem-solving abilities.
Data Infrastructure Specialist
Posted today
Job Description
We are seeking an experienced Data Infrastructure Specialist to join our team. In this role, you will be responsible for the design, implementation, and maintenance of our database infrastructure.
- Database Maintenance and Administration: Ensure the smooth operation of our databases, including data backup and recovery, performance optimization, and troubleshooting.
- Installation and Configuration: Install and configure new databases, ensuring they meet our requirements and standards.
- Fine-tuning: Work with the applications team to fine-tune database performance and ensure optimal system utilization.
- Patching and Upgrades: Apply patches and upgrades to databases as required, minimizing downtime and ensuring business continuity.
- Performance Optimization: Monitor and analyze database performance, identifying areas for improvement and implementing changes to enhance efficiency.
Required Skills and Qualifications:
- AWS Certification: An AWS certification is highly desirable but not essential.
- Public Sector Experience: A minimum of 2 years' experience working in the public sector is required.
- GCC Experience: A minimum of 2 years' experience working in the GCC region is required.
Benefits:
- Opportunity to work on high-profile projects: As a Data Infrastructure Specialist, you will have the opportunity to work on high-profile projects that make a significant impact on our organization.
- Professional development opportunities: We offer regular training and development opportunities to help you grow your skills and advance your career.
Data Infrastructure Developer
Posted today
Job Description
Backend Software Engineer Position
We are seeking an experienced Backend Software Engineer to join our team. In this role, you will be responsible for developing high-quality databases and services that connect users with nearby merchants and local services.
You will work on building a global POI database, integrating multiple sources, and enabling a nearby search service to support TikTok local services.
The ideal candidate will have a strong background in software development and experience with geocoding databases, geocoding services, and IP location services.
In this position, you will participate in designing and implementing an online streaming data scheduling system, constructing the entire POI data access process, and developing the core POI processing system.
You will collaborate with AI engineers to establish efficient module update and release mechanisms.
We offer a dynamic work environment where you can apply your skills and expertise to push the boundaries of technology and innovation.
Key qualifications include:
- Bachelor's degree or higher in Computer Science or a related field.
- Familiarity with at least one programming language such as Java, Golang, PHP, Python, or C++, with a preference for experience in Golang or Java.
Data Infrastructure Specialist
Posted today
Job Description
We are seeking a highly skilled and experienced Storage and Backup Administrator to join our team. This individual will be responsible for providing technical administration, maintenance, and troubleshooting support for our storage and backup infrastructure.
"),Data Infrastructure Specialist
Posted today
Job Description
Data Architect
We are seeking a highly skilled Data Architect to lead the design and implementation of our data infrastructure. The successful candidate will have a proven track record of delivering scalable and reliable data solutions, with expertise in big data technologies.
- Contribute to the full data lifecycle, including conceptualization, data modeling, implementation, and operational management of data systems.
- Build and maintain robust data infrastructure, including real-time and batch ETL/ELT data pipelines, using orchestration tools like Apache Airflow.
- Support the development and management of the organization's data lake and data warehouse by implementing data models and applying modern data transformation practices with tools like dbt.
- Automate data processes by integrating different systems, APIs, and third-party services to ensure smooth data flow and synchronization.
- Adhere to best practices for data governance, quality, and lineage, while implementing monitoring and alerting to ensure system reliability.
- Collaborate with cross-functional teams to help translate business requirements into technical specifications and production-ready solutions.
- Develop, deploy, and maintain internal automation solutions and tools by applying AI, web scraping, and custom logic to create effective data products.
- Any ad hoc duties as assigned.
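A batch ETL step of the kind these responsibilities describe can be sketched in pure Python; in practice each function would be wrapped as a task in an orchestrator like Apache Airflow. The source payload, field names, and sink are hypothetical illustrations.

```python
# Pure-Python sketch of a batch extract-transform-load step; the
# payload, field names, and in-memory "warehouse" are hypothetical.
import json
from datetime import datetime, timezone

def extract() -> list[dict]:
    # Stand-in for an API call or database read.
    raw = '[{"id": "a1", "ts": "2024-05-01T08:30:00+00:00", "value": "42"}]'
    return json.loads(raw)

def transform(records: list[dict]) -> list[dict]:
    # Normalize types and derive a date partition key.
    out = []
    for r in records:
        ts = datetime.fromisoformat(r["ts"]).astimezone(timezone.utc)
        out.append({
            "id": r["id"],
            "value": int(r["value"]),
            "event_date": ts.date().isoformat(),
        })
    return out

def load(records: list[dict], sink: dict) -> None:
    # Stand-in for a warehouse write; upsert keyed by id keeps
    # the step idempotent on re-runs.
    for r in records:
        sink[r["id"]] = r

warehouse: dict = {}
load(transform(extract()), warehouse)
print(warehouse["a1"]["event_date"])  # 2024-05-01
```

Keeping each stage a small, idempotent function is what makes the pipeline easy to schedule, retry, and monitor from an orchestration tool.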
Requirements:
- Bachelor's degree in computer science, software engineering, or related field is preferred.
- Minimum 2 years of similar experience.
- Solid understanding of software engineering principles, system architecture, and object-oriented design, with experience building applications.
- Strong proficiency in Python, including experience with data manipulation libraries (e.g., Pandas, NumPy) and frameworks for web scraping (e.g., Scrapy) or AI/ML (e.g., TensorFlow, PyTorch).
- Experience with workflow orchestration tools like Apache Airflow for managing data pipelines.
- Familiarity with data transformation tools like dbt and an understanding of modern ETL/ELT design patterns.
- Experience with data modeling techniques (conceptual, logical, physical) and data warehouse schemas.
- Hands-on experience with any cloud platform (AWS, GCP, or Azure).
- Proficient in SQL with experience in designing and optimizing relational databases (e.g., PostgreSQL, Oracle, SQL Server).
- Experience with containerization and CI/CD is a strong plus.
- Good communication skills with the ability to explain technical concepts to various stakeholders.
- A proactive and analytical approach to problem-solving with attention to detail.
- Strong sense of ownership with the ability to work independently and manage assigned tasks and projects effectively.
Data Infrastructure Specialist
Posted today
Job Description
Job Title: Data Engineering Lead
About the Role: We are seeking a highly skilled and motivated Data Engineering Lead to lead the migration of on-premises data warehouses to Databricks. This role is crucial for designing and maintaining a robust, integrated, and governed data infrastructure while ensuring high performance, security, and compliance.
The ideal candidate will have extensive experience in data engineering with expertise in AWS or Azure services, Databricks, and/or Informatica IDMC. They will also have proficiency in Python, Java, or Scala for building data pipelines.
Main Responsibilities:
- Lead end-to-end data migration projects from on-premises environments to Databricks with minimal downtime.
- Collaborate with architects to design solutions that meet functional and non-functional requirements.
- Implement solutions on Databricks with hands-on experience in Amazon Web Services (AWS) environments.
- Configure Databricks clusters, write PySpark code, and build CI/CD pipelines for deployments.
- Apply optimization techniques such as Z-ordering, auto-compaction, and vacuuming.
- Process near real-time data using Auto Loader and DLT pipelines.
- Identify, communicate, and mitigate risks and issues related to data processes.
- Ensure data availability and integrity while resolving data-related issues.
- Optimize AWS and Databricks resource usage to control costs while maintaining performance and scalability.
- Stay updated with the latest Databricks and AWS advancements and data engineering best practices.
- Implement engineering methodologies, standards, and leading practices proactively.
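The Delta table maintenance tasks mentioned above (Z-ordering, vacuuming) are typically issued as SQL against Databricks; a small helper that renders those statements might look like the sketch below. The table and column names are hypothetical examples.

```python
# Sketch of Delta Lake maintenance statements rendered as SQL strings;
# the table and column names are hypothetical examples.
def optimize_stmt(table: str, zorder_cols: list[str]) -> str:
    """OPTIMIZE compacts small files; ZORDER co-locates related rows."""
    cols = ", ".join(zorder_cols)
    return f"OPTIMIZE {table} ZORDER BY ({cols})"

def vacuum_stmt(table: str, retain_hours: int = 168) -> str:
    """VACUUM removes files no longer referenced by the table.

    168 hours (7 days) matches Delta Lake's default retention threshold.
    """
    return f"VACUUM {table} RETAIN {retain_hours} HOURS"

print(optimize_stmt("sales.orders", ["customer_id", "order_date"]))
# OPTIMIZE sales.orders ZORDER BY (customer_id, order_date)
print(vacuum_stmt("sales.orders"))
# VACUUM sales.orders RETAIN 168 HOURS
```

In a Databricks workspace these strings would be passed to `spark.sql(...)` or scheduled as a job, rather than printed.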
Requirements:
- Bachelor's Degree in Computer Science, Data Engineering, or a related field.
- At least 5 years of experience in data engineering, with expertise in AWS or Azure services, Databricks, and/or Informatica IDMC.
- Proficiency in Python, Java, or Scala for building data pipelines.
- Excellent hands-on knowledge of SQL and NoSQL databases.
- Experience in data modeling, schema design, and performance optimization for complex data transformations.
- Familiarity with containerization and orchestration tools like Docker and Kubernetes.
- Excellent problem-solving, analytical, and communication skills.
Interested candidates may send in their resume and cover letter directly to our HR department.
Please note that your response to this advertisement and communications with us pursuant to this advertisement will constitute informed consent to the collection, use and/or disclosure of personal data by our company for the purpose of carrying out its business, in compliance with the relevant provisions of the Personal Data Protection Act 2012.
To learn more about our company's Global Privacy Policy, please visit our website.
Data Infrastructure Specialist
Posted today
Job Description
We are seeking an experienced Data Engineer to join our organization with a focus on enhancing and optimizing our existing data infrastructure.
The ideal candidate will have deep expertise in data management, cloud-based big data services, and real-time data processing, collaborating closely with cross-functional teams to boost scalability, performance, and reliability.
Data Infrastructure Specialist
Posted today
Job Description
As a skilled Data Engineer, you will play a pivotal role in orchestrating the end-to-end data migration from Informatica PowerCenter to Informatica IDMC. This entails ensuring minimal disruption to business operations and requires hands-on expertise in crafting mappings and workflows and in setting up Secure Agents.
The ideal candidate will possess a degree in Computer Science or equivalent and experience in data engineering with expertise in AWS or Azure services, Databricks, and/or Informatica IDMC. They should also exhibit a solid understanding of data integration concepts, ETL processes, and data quality management.
Apart from designing and implementing ETL data mappings, transformations, and workflows, you will evaluate potential technical solutions and provide recommendations to resolve data issues. You should be proficient in BI software such as Oracle Analytics Server (OAS), Tableau, and possess strong knowledge of Oracle SQL and NoSQL databases.
In this role, you will collaborate with cross-functional teams to understand data requirements and deliver tailored solutions across AWS, Databricks, and Informatica IDMC. Your exceptional problem-solving and analytical skills will be essential in monitoring and optimizing data processing and query performance in both environments.
- Manage the end-to-end data migration process from Informatica PowerCenter to Informatica IDMC
- Develop hands-on expertise in creating mappings, workflows, and setting up Secure Agents
- Integrate data from various sources into AWS and Databricks environments
- Design and implement ETL processes to cleanse, transform, and enrich data
- Monitor and optimize data processing and query performance
- Implement security best practices and data encryption methods
- Maintain clear and comprehensive documentation of data infrastructure and pipelines
- Collaborate with cross-functional teams to understand data requirements
Chief Data Infrastructure Specialist
Posted today
Job Description
Data Engineer for Cloud Migration Projects
As a Data Engineer, you will play a key role in designing and implementing data pipelines that enable business growth through cloud migration projects. You will utilize AWS, Databricks, and Informatica IDMC to develop scalable and efficient data solutions.
- Develop ETL processes to cleanse, transform, and enrich data using Databricks' Spark capabilities and Informatica IDMC for data transformation and quality.
- Monitor and optimize data processing and query performance in both AWS and Databricks environments, making necessary adjustments to meet performance and scalability requirements.
- Implement security best practices and data encryption methods to protect sensitive data in both AWS and Databricks, while ensuring compliance with data privacy regulations.
- Implement automation for routine tasks, such as data ingestion, transformation, and monitoring, using AWS services like AWS Step Functions, AWS Lambda, Databricks Jobs, and Informatica IDMC for workflow automation.
- Maintain clear and comprehensive documentation of data infrastructure, pipelines, and configurations in both AWS and Databricks environments, with metadata management facilitated by Informatica IDMC.
- Collaborate with cross-functional teams, including data scientists, analysts, and software engineers, to understand data requirements and deliver appropriate solutions across AWS, Databricks, and Informatica IDMC.
- Identify and resolve data-related issues and provide support to ensure data availability and integrity across AWS, Databricks, and Informatica IDMC environments.
- Optimize AWS, Databricks, and Informatica resource usage to control costs while meeting performance and scalability requirements.
Required Skills and Qualifications
Technical Requirements
- Bachelor's or master's degree in computer science, data engineering, or a related field.
- Minimum 4 years of experience in data engineering, with expertise in AWS services, Databricks, and/or Informatica IDMC.
- Proficiency in programming languages such as Python, Java, or Scala for building data pipelines.
- Ability to evaluate potential technical solutions and make recommendations to resolve data issues, especially performance assessment for complex data transformations and long-running data processes.
- Strong knowledge of SQL and NoSQL databases.
- Familiarity with data modeling and schema design.
- Excellent problem-solving and analytical skills.
- Strong communication and collaboration skills.
- AWS certifications, Databricks certifications, and Informatica certifications are a plus.
Benefits
This is an exciting opportunity to work with cutting-edge technologies and contribute to the success of our organization. If you are passionate about data engineering and have a strong desire to learn and grow, we encourage you to apply.
Others
Our company offers a dynamic and inclusive work environment, with opportunities for professional development and career advancement. We value diversity, equity, and inclusion and are committed to creating a workplace where everyone feels welcome and valued.