My client runs the fastest-growing on-premises bare-metal Linux infra platform globally. With an Ops team of 35, they run a CentOS, Puppet, Python, and Postgres environment. Their DNA: build what you need yourself, so no off-the-shelf networking, load balancing, or firewalling. Everything is Linux based. And automate as much as possible!
Due to scaling, security, stability, and control considerations, they do not use third-party software. No Cloud. Just Linux, running a massive infra platform that serves global enterprises.
We are now looking for several Data Engineers who combine experience and knowledge of a Linux production environment with the Hadoop ecosystem.
You will enable Developers and Data Scientists to make sense of the data you deliver from the infra platform. You are at heart a Linux Infra Engineer who has specialized in Hadoop Big Data.
The data engineer:
- 3+ years on a large Linux production platform;
- Strong automation focus: Bash/Python & Ansible/Puppet;
- Specialized in the Hadoop ecosystem (HDFS, YARN, Hive).
Do you want to be part of a very successful international company that basically dominates its niche globally? Do you like building and shaping the whole Hadoop-driven Big Data landscape from an Infra/Linux perspective?
Global candidates accepted.
Salary up to 100K.
Interested? Please contact me on email@example.com / +316 19805616