Ready to turn real-world device data into insights that shape patient care? This is your chance to work at the heart of an innovative medical robot entering the market.
Our client is a fast-growing MedTech scale-up (~100 people, strong engineering focus) developing the world’s first autonomous blood-drawing robot. With CE approval secured and funding through 2027, they are moving from development into large-scale deployment. You’ll join the Application Team, working closely with engineering and business to make data reliable, traceable, and actionable.
You’ll help build and scale the data backbone behind a regulated, hardware-driven product. Working in a cloud-native setup, you’ll design and maintain production-grade ETL and ELT pipelines that support both operational insight and long-term product performance. Day to day, you’ll develop Python-based pipelines using modern libraries like Polars, run them in containerised environments, and integrate them with AWS orchestration services. You’ll collaborate closely with a Senior Data Engineer to improve performance and robustness, investigate data issues, extend automated quality checks, and translate requirements into clean datasets and clear dashboards. There’s also room to grow into data science, fault analysis, and support for ML and AI workflows embedded in the devices.
• 2+ years of experience with AWS-native services, containers, & DataOps practices.
• 3–5 years of experience in data engineering, backend engineering, or analytics-focused roles.
• Strong Python skills.
• Hands-on experience with ETL or ELT pipelines in data lake or lakehouse environments.
• Familiarity with modern data tooling such as Polars & Pandas.
• Experience in delivering clean and insightful dashboards to business stakeholders.