Company: Michael Page
Candidate profile (M/F)
• Leverage your strong background in distributed data processing, stream processing and data modeling to develop reliable and scalable production components
• Develop data stream processing pipelines (Kafka)
• Build, tune, and troubleshoot data processing clusters
• Develop and integrate machine learning algorithms for data processing and analysis
• Build models to address business problems
• Apply cloud-native design patterns, e.g. auto-scaling, elasticity, container orchestration
• Participate in technical design and architectural discussions within the product engineering team to solve real operational issues
• Speak up in meetings and bring new ideas to the team
• Cloud Big Data Engineer
• Technology start-up in the logistics sector
• 2+ years of related experience in data engineering
• Knowledge of GCP infrastructure and services
• Familiar with GCP datastores, GCP Dataproc, GCP Pub/Sub, etc.
• Strong exposure to the Hadoop ecosystem, e.g. HDFS, YARN, ZooKeeper, Hive
• Good knowledge of batch and streaming frameworks, e.g. Spark, Flink
• Hands-on experience with (cloud-based) NoSQL databases
• Excellent Linux scripting skills
• Solid experience with Terraform, or an equivalent infrastructure-as-code tool
• Experience building CI/CD pipelines in production environments, with tools like Jenkins
• Experience with GitHub Actions is a plus
• Experience orchestrating the build and deployment of a containerized environment
• Proven experience with Kubernetes
• Experience with languages such as Python, Go
Technology start-up in the logistics sector.
• Collaborative work environment.
• Permanent contract.
• Flexible remote work schedule.
Technologies: Google Cloud Platform, Hadoop, HDFS, YARN, ZooKeeper, Hive
Contract type:
Salary: Not specified
Experience: 2 years
Job function: Big Data
Find out more: https://www.tecnoempleo.com/cloud-big-data-engineer-barcelona/google-cloud-platform-hadoop-hdfs-yarn-z/rf-d41d08cd9r8f0a0b20j4