217 AI and Data Jobs in Singapore
AI Data Engineer
Posted 2 days ago
Job Description
We are looking for a skilled and experienced AI Data Engineer to join our team. The ideal candidate will be responsible for designing, building, and maintaining robust data pipelines to support the processing and analysis of clinical study and digital device sensor data. As a Data Engineer, you will work closely with data scientists and software engineers to ensure the efficient and reliable flow of data from source systems to analytical tools and platforms.
Responsibilities:
- Design, develop, and maintain scalable data pipelines to ingest, transform, and load clinical study data from various sources, including digital device sensors (an illustrative sketch follows this list).
- Optimize data storage and retrieval processes in cloud-based platforms to ensure high performance and reliability.
- Collaborate with data scientists to integrate data processing pipelines with AI-powered algorithms and third-party analytical tools or platforms.
- Implement data quality checks and monitoring mechanisms to ensure the integrity and accuracy of the data.
- Troubleshoot and resolve issues related to data pipeline performance, reliability, and scalability.
- Work closely with software developers, system architects and other cross-functional teams to develop data-driven business solutions.
- Stay up to date with emerging technologies in AI and cloud computing and with best practices in data engineering to continuously improve data processing pipelines and infrastructure.
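As a purely illustrative sketch of the kind of ingest-transform-load pipeline described above, the PySpark snippet below reads raw device-sensor records, cleans them, and writes partitioned Parquet; the storage paths, column names, and schema are hypothetical placeholders, not details taken from this posting.

```python
# Minimal sketch, assuming a Spark environment and hypothetical S3 paths,
# column names, and schema; none of these details come from the posting.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("clinical-sensor-ingest").getOrCreate()

# Ingest: read raw device-sensor readings from a (hypothetical) landing zone.
raw = spark.read.json("s3://example-bucket/raw/device_sensors/")

# Transform: parse timestamps, drop rows missing key identifiers,
# and derive a partition column.
clean = (
    raw.withColumn("event_ts", F.to_timestamp("event_ts"))
       .filter(F.col("subject_id").isNotNull() & F.col("event_ts").isNotNull())
       .withColumn("study_date", F.to_date("event_ts"))
)

# Load: write partitioned Parquet for downstream analytics and AI/ML tooling.
(clean.write.mode("overwrite")
      .partitionBy("study_date")
      .parquet("s3://example-bucket/curated/device_sensors/"))
```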
Requirements:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- Proven experience in designing and building data pipelines using ETL tools and frameworks such as Apache Spark, Apache Beam, or Apache Airflow.
- Proficiency in programming languages such as Python, Java, or Scala.
- Strong understanding of database systems and data warehousing concepts, including both SQL and NoSQL databases.
- Experience with AI and Cloud Computing: Hands-on experience with cloud platforms like AWS and familiarity with AI solutions in these environments.
- Excellent problem-solving and troubleshooting skills with a strong attention to detail and quality.
- Effective communication and collaboration skills with the ability to work in a team environment.
Preferred Qualifications:
- Experience with containerization and orchestration tools such as Docker and Kubernetes.
- Familiarity with big data technologies such as Hadoop, Hive, or Presto.
- Knowledge of distributed computing frameworks such as Apache Hadoop or Apache Spark.
- Familiarity with Elasticsearch or AWS OpenSearch is a plus.
- Prior experience working with healthcare or clinical data is a plus.
AI Data Engineer
Posted 7 days ago
Job Description
Role: Data Engineer
Salary Range: S$6000 – S$8000
Expected Years of Experience: 3 to 8 years
Duration: 1 Year Contract
Key Responsibilities:
- Data Solution Design & Implementation: Design, develop, and implement scalable data solutions tailored for large-scale AI/ML projects and complex data engineering initiatives.
- AI/ML Data Pipelines: Build, maintain, and optimize robust data pipelines and ETL frameworks using technologies like Spark (Java/Scala/PySpark). These pipelines will be crucial for operationalizing data that integrates with machine learning models and data lakes/data marts.
- Performance and Troubleshooting: Proactively identify and resolve system performance issues and data processing bottlenecks in distributed data environments, ensuring data is available for AI/ML model consumption.
- Engineering Best Practices: Champion best practices for code quality, data reliability, and workflow orchestration. This includes implementing automation strategies to improve efficiency and maintain the integrity of data throughout the AI/ML lifecycle.
- Technical Leadership & Collaboration: Offer technical guidance and mentorship to team members. Actively participate in code reviews and design discussions to ensure solutions meet the specific needs of data scientists and machine learning engineers.
Key Requirements:
- AI/ML Data Pipelines: Experience in designing, building, and maintaining end-to-end feature engineering pipelines that operationalize data for AI/ML model training and real-time inference.
- Scalable Data Ingestion: Proven experience in building scalable data pipelines and ingestion frameworks using languages like PySpark, Scala, and Java to handle large-scale datasets for ML applications.
- Cloud-Native & Distributed Systems: Hands-on expertise with cloud-native data ecosystems on platforms like AWS, Azure, or GCP. This includes experience with modern data lake and data warehousing solutions, as well as distributed storage like HDFS.
- Programming & Automation: Strong programming skills in Python, Java, and Scala with deep expertise in Spark for processing and transforming structured and semi-structured data for AI/ML.
- Data Architecture: Deep understanding of modern data lake, data warehouse, and feature store architectures. Ability to design performant data models and serving layers optimized for AI/ML model consumption.
- Performance & MLOps: Experience with Spark performance tuning and job optimization for large-scale distributed data processing. Familiarity with MLOps practices for pipeline orchestration, monitoring, and deployment.
- Real-time & Batch Processing: Expertise in both batch and real-time data processing, leveraging tools like Kafka and workflow orchestrators (e.g., Airflow, Kubeflow) for reliable data ingestion (a minimal orchestration sketch follows this list).
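For illustration only, the sketch below shows a minimal Airflow DAG that chains two hypothetical spark-submit jobs of the kind this role would orchestrate; the DAG id, schedule, and job paths are assumptions, and parameter names differ slightly across Airflow versions (this assumes Airflow 2.x).

```python
# Minimal Airflow 2.x sketch; DAG id, schedule, and spark-submit commands are
# hypothetical examples, not details from the posting.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="feature_pipeline_example",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",  # called schedule_interval in older Airflow 2.x releases
    catchup=False,
) as dag:
    # Run a (hypothetical) PySpark feature-engineering job.
    build_features = BashOperator(
        task_id="build_features",
        bash_command="spark-submit --master yarn /opt/jobs/build_features.py",
    )

    # Publish the results to a (hypothetical) feature store or data mart.
    publish_features = BashOperator(
        task_id="publish_features",
        bash_command="spark-submit --master yarn /opt/jobs/publish_features.py",
    )

    build_features >> publish_features
```

In practice, a dedicated Spark operator from the Airflow Spark provider may be preferable where it is available; BashOperator is used here only to keep the sketch self-contained.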
Added Advantages:
- Proficiency in scripting languages such as Bash, Python, and Perl for automation and monitoring of data jobs.
- Strong knowledge of Linux/Unix systems administration, including services management, cron scheduling, memory/process debugging, and system performance tuning.
- Experience with data orchestration tools like Apache Airflow, Apache NiFi, Control-M, or Oozie for workflow management.
- Familiarity with CI/CD pipelines, Git, Jenkins, and automated deployment strategies for big data and Spark jobs.
- Exposure to containerization and orchestration platforms like Docker and Kubernetes for deploying data services.
- Working knowledge of monitoring and logging tools such as Cloudera Manager, Grafana, Prometheus, ELK Stack, or Splunk for proactive issue detection.
- Understanding of data security best practices, including Kerberos, LDAP, TLS encryption, and role-based access control (RBAC).
- Familiarity with DevOps culture and agile methodologies to improve delivery efficiency and team collaboration.
AI & Data Engineer
Posted 7 days ago
Job Description
Working Hours: Monday - Thursday (8.30am - 6pm), Friday (8.30am - 5.30pm)
Working Location: South
Salary Package: Up to $5,500
We are looking for a Data Engineer with AI/ML exposure to support the adoption of next-generation AI and Machine Learning technologies. This role focuses on building and optimizing data pipelines, developing dashboards, and integrating AI-driven components into security workflows.
Responsibilities:
- Develop and maintain SQL queries, data tables, and schemas to support cybersecurity analytics.
- Build and maintain data pipelines across data warehouses and lakes (already established by vendors).
- Develop interactive dashboards and reporting tools using Tableau, Looker, or equivalent.
- Integrate LLM APIs via LangChain and leverage cloud AI services (e.g., Vertex AI, AWS Bedrock) to enhance security workflows.
- Design and manage vector databases (Pinecone, PGVector, FAISS) to support search and retrieval tasks (a minimal retrieval sketch follows this list).
- Apply prompt engineering techniques to optimize LLM performance for cybersecurity use cases.
- Collaborate with cross-functional teams to deliver secure, scalable, and production-ready AI/ML solutions.
- Support CI/CD pipelines and Git workflows for continuous testing and deployment.
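As a rough illustration of the vector-search portion of this role, the snippet below builds a small FAISS index and runs a nearest-neighbour query; the embedding dimension and the random vectors standing in for embedded security-log snippets are assumptions, and the LangChain and LLM integration mentioned above is deliberately omitted.

```python
# Minimal FAISS sketch; dimensions and vectors are placeholders. A real
# pipeline would embed log snippets with an actual embedding model and wrap
# retrieval in LangChain or an equivalent framework.
import numpy as np
import faiss  # pip install faiss-cpu

dim = 384  # hypothetical embedding dimension (depends on the embedding model)
index = faiss.IndexFlatL2(dim)

# Stand-ins for pre-computed embeddings of security log snippets.
doc_vectors = np.random.rand(1000, dim).astype("float32")
index.add(doc_vectors)

# Embed a query the same way, then retrieve the 5 nearest snippets.
query_vector = np.random.rand(1, dim).astype("float32")
distances, neighbor_ids = index.search(query_vector, 5)
print(neighbor_ids[0])  # indices of the retrieved snippets
```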
Requirements:
- Bachelor's degree in Computer Science, Engineering, or related fields
- Minimum 2 years’ experience in data engineering or a related role.
By submitting your resume, you consent to the collection, use, and disclosure of your personal information per ScienTec’s Privacy Policy (scientecconsulting.com/privacy-policy).
This authorizes us to:
- Contact you about potential opportunities.
- Delete personal data as it is not required at this application stage.
- All applications will be processed with strict confidence. Only shortlisted candidates will be contacted.
Wong Siew Ting (Maeve) - R25127375
ScienTec Consulting Pte Ltd - 11C5781
Principal AI & Data Consultant
Posted 11 days ago
Job Description
Founded in Switzerland in 1968, Zühlke is owned by its partners and located across Europe and Asia. We are a global transformation partner, with engineering and innovation in our DNA. We're trusted to help clients envision and build their businesses for the future – to run smarter today while adapting for tomorrow’s markets, customers, and communities. Our multidisciplinary teams specialise in tech strategy and business innovation, digital solutions and applications, and device and systems engineering. We excel in complex, regulated spaces including health and finance, connecting strategy, tech implementation, and operational services to help clients become more effective, resilient businesses.
If you share our values and want to do the best work, for the right reasons, we can offer you the chance to do it on a global scale and play a real role in shaping our exciting journey.
The Role
As a Zühlke AI & Data Consultant, your #1 priority is to ensure our Data & AI projects generate the expected impact – from offering development to pre-sales and sales to delivery and ongoing relationship management as a trusted advisor.
How you'll make impact:
- You drive our AI & Data business by developing tailored offerings with our industry colleagues, driving project proposals, delivering on projects with interdisciplinary Zühlke colleagues, and becoming a trusted advisor for our clients.
- As a versatile, impact-focused, tech-savvy generalist, you enable technical implementations (e.g., AI applications, data platforms) as a hands-on project/product manager or product owner, and consult clients on the organizational aspects of successful Data & AI transformations (strategy, governance, education, etc.).
- You are active both internally and externally as an ambassador and thought leader for our discipline.
- You support team & talent development in AI & Data consulting and delivery.
Required
- University degree from a leading institution in business or STEM.
- 5+ years of hybrid consulting and Data & AI-related work experience.
- Strong consulting foundations: effective communication, project management, running workshops, selling consulting work, pragmatism, output-focused work, etc.
- Expertise in data and/or AI strategy, governance, and management with a deep understanding of how to effectively structure and oversee data & AI initiatives.
- Expertise with Data & AI implementation projects, from idea to production.
- Demonstrated ability to inspire, build and lead interdisciplinary teams.
- Excellent written and verbal communication skills in English.
Preferred
- Personal network with relevant decision-makers in Singapore.
- Experience in digital product management & development.
- Hands-on experience with LLM-based applications in production, at scale.
- Entrepreneurial mindset and leadership experience.
- You are assertive, eloquent, proactive and enthusiastic.
- You are highly customer-centric and can think and communicate across disciplines and levels of seniority.
- You are results-oriented and able to win over stakeholders and decision-makers.
- Work life blend: we offer a safe & healthy workplace, with flexible working hours and the possibility to work from home.
- Profit share scheme: In addition to your annual salary, you may receive a profit share defined by the company’s success in the previous year.
- Global and Diverse Zühlke community: witness how colleagues from all our 17 offices across the globe come together to create a unique, positive and inclusive work culture, learning from one another at annual team camps, and celebrating year-end parties and other local festivities.
- Committed to development: we are committed to the growth of our people and are investing in your development. We’re empowering you to build the skills you need to make a positive impact, both personally and for our clients, today and in the future.
If you feel you don't meet all the requirements, we are still happy to get to know you, learn more about your ambitions and ideas and look forward to receiving your application!
We welcome people from all backgrounds, regardless of their gender, personality, national origin, race, religion, colour, sexual orientation, gender identity, age, marital status, disability or veteran status.