75% Remote: Senior Data DevOps Engineer (f/m/d)

Company name visible to PREMIUM members

  • Start: November 2024
  • End: March 2025
  • Location: Greater Berlin area, Germany
  • Rate: on request
  • Remote
  • Posted: 27.09.2024


Project description

For our client we are looking for a Data DevOps Engineer (f/m/d).

Key data:
Start: November 2024
Duration: until 31.03.2025 ++ (long term)
Capacity: 100% if possible
Location: Berlin / remote (1 week in Berlin, 3 weeks remote, in rotation); up to 50% onsite during peak times

Role:
- As a DevOps Engineer within the program, you will focus on Data Services portability and automation, enabling smooth, robust, and reliable operations. This includes taking ownership of the CI/CD pipelines and supporting the team in deploying efficiently and effectively.
- The role contributes to bringing our data services into different target environments and helps manage the lifecycle of these services.
- We are looking for an experienced DevOps engineer to join our team, with a focus on engineering excellence, robust infrastructure, and deployability.
- You will join a DevOps engineering team developing hybrid cloud solutions for several products, such as databases, message brokers, and data catalogs. DevOps Engineers play a critical role in the development, deployment, and maintenance of our products.
- Your ability to automate and optimize processes, ensure system stability, and collaborate effectively within our team and with other teams will be instrumental to our success.

Skills (must-have):
- Proven hands-on DevOps experience.
- Proficiency in setting up and managing CI/CD pipelines with tools such as GitLab and Argo CD; GitOps knowledge (Argo CD / Flux).
- Thorough knowledge of Kubernetes internals, including cluster setup, scaling, troubleshooting, and managing multiple Kubernetes versions. This also includes the use of Helm charts.
- Low-level understanding of Kubernetes networking and storage; experience with both cloud-managed (e.g. AKS, GKE) and self-managed/on-prem Kubernetes clusters.
- Experience administering infrastructure components; proficiency in Infrastructure as Code (IaC) is a must.
- A strong grasp of how data is stored, shared, and secured (storage), and of how information moves within cloud setups from a networking perspective.
- Good understanding of ingress and egress flows, both internal and external, including interconnects across clouds.
- Experience with tools like Terraform and Terragrunt.
- Conceptual understanding of and practical experience in deploying and operating data storage technologies, e.g. RDBMS, NoSQL, …
- Proficiency in spoken and written English (at least C1).

Skills (should-have):
- Deeper Kubernetes skills and experience, e.g. developing Kubernetes operators and/or operators for Big Data technologies.
- Security: Prioritizing security-by-design principles. Proficiency in securing systems with SSL/TLS encryption for data protection, experience with secret stores like HashiCorp Vault, and an understanding of the zero-trust and least-privilege security concepts.
- Observability Systems: Proficiency in setting up monitoring and logging systems for real-time insights into system performance. Familiarity with tools like Prometheus, Grafana, and optionally other similar stacks (e.g., EFK - Elasticsearch, Fluentd, Kibana) and monitoring technologies like Splunk, Datadog, etc.
- Admin and Ops Experience: Previous experience in system administration and operations roles is preferred.
- Understanding of releasing concepts and service versioning (e.g., SemVer).
- Proficiency in German.

Contact details

As a registered member of freelance.de, you can apply for this project directly.
