We are seeking a highly experienced Data Engineer to build and operate the data platform, big-data infrastructure, and MLOps backbone for enterprise-scale AI/ML solutions. This role focuses on designing scalable data pipelines, cloud-native and hybrid infrastructure, and production-grade AI/ML deployments, while ensuring security, reliability, and performance.
The ideal candidate has strong hands-on experience with big-data platforms, PySpark, MLOps workflows, and cloud infrastructure, and will work closely with data scientists to turn data into actionable insights.
• Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field.
• Strong experience as a Data Engineer in AI, big-data, or advanced analytics environments.
• Hands-on expertise with PySpark / Apache Spark and distributed data processing systems.
• Experience building scalable big-data infrastructure and pipelines.
• Deep understanding of MLOps principles and ML lifecycle management.
• Proficiency in Python and SQL (Scala/Java a plus).
• Experience with big-data and streaming frameworks (Spark, Kafka, Airflow, Flink, Hadoop ecosystem).
• Strong knowledge of cloud-native architectures, hybrid environments, and infrastructure fundamentals (compute, storage, networking).
• Proven ability to debug and optimize complex distributed systems.
Send your resume to