Company: HAYS
Province: Madrid
Location: Madrid, central Madrid
Description: Be a key driver on BASF's path to digitalisation by supporting existing products and initiatives and by innovating additional digital solutions that support BASF's global businesses.
You will be part of the Knowledge Innovation unit, responsible for a knowledge system containing a large volume of scientific documents. You will work in an international and dynamic environment. Your responsibilities will include:
– Build (big) data pipelines for complex and distributed data processing
– Collaborate closely with the product team to understand requirements, then design and implement new features for fast and insightful interaction with data
– Implement security principles and align with our external security consultants
– Work in an international team and agile environment
Must-have qualifications
– A degree in computer science, software engineering, or mathematics/natural sciences, or equivalent experience
– 5+ years of experience building (big) data pipelines for complex and distributed data processing
– Experience with distributed data processing frameworks like Spark
– Experience working with high volumes of unstructured data (text, documents)
– 3+ years of experience with Java/Scala or Python
– Experience working in a DevOps environment with development tools and CI/CD pipelines, including containers and container orchestration
– Experience using cloud environments
– Fluent communication in English is required
Nice to have
– Experience using graph databases, full-text search engines, document-oriented databases and messaging systems – ideally Neo4j, ElasticSearch, MongoDB and RabbitMQ.
– Experience with tools like Airflow or Talend
Technologies: Spark, Java, Scala, Python
Contract type: Permanent
Salary: Not specified
Experience: More than 5 years
Role: Big Data