Kubecon Europe 2020 - Day 4

Last updated: Sept 8, 2021

The fourth and last day of the KubeCon CloudNativeCon Europe 2020 virtual event. An amazing conference, congratulations to the CNCF team. And thanks to our SoKube team for following the sessions and for writing these blog posts!


Check out other days:

Kubecon Europe 2020 - Day 1

Kubecon Europe 2020 - Day 2

Kubecon Europe 2020 - Day 3




Going Beyond CI/CD with Prow


By Leonardo Di Donato Open Source Software Engineer, Sysdig


In the early days of Falco, CI was done through Travis CI.

The pain points were mostly about the non-interactive workflow between a classical CI and GitHub (the CI does not react to statuses from the GitHub repo):

  • no clear ownership

  • PRs could be merged even when the GitHub status was KO

  • Some policies existed, but they were not easily discoverable or auditable

  • No automation

  • No enforcement for approvals

In the Falco context, the team didn't want to spend time to:

  • build a custom CI/CD pipeline

  • create an automatic policy enforcer

The Falco team wants to focus only on the development of their product.

As Kubernetes used Prow, Falco chose to follow this path.


Prow capabilities:

  • GitHub ChatOps

  • Manage and enforce policies

  • Auto-merge bot (Tide) that takes GitHub statuses into account

  • Prow is OSS, so you can add some plugins and extensions if needed

  • Built for Kubernetes, on Kubernetes

=> With these capabilities (the last one in particular), Prow is by nature very scalable.
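
To give an idea of how Prow enforces ownership and approvals, here is a minimal sketch of an OWNERS file (the handles and label are hypothetical), which the approve and lgtm plugins read to decide who may sign off on changes in a directory:

```yaml
# OWNERS file at the root of a repository directory (hypothetical handles)
approvers:
  - alice        # may approve PRs touching this directory with /approve
reviewers:
  - bob          # suggested as reviewer; may mark PRs with /lgtm
labels:
  - area/build   # label automatically applied to PRs touching this directory
```

ChatOps then happens through PR comments such as /lgtm, /approve, /retest or /hold, and Tide merges the PR automatically once all required labels and statuses are green.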


It seems that Falco now uses Prow as their CI/CD solution, and it fits their needs perfectly.

We think it is very interesting to have a Kubernetes-native solution for CI/CD such as Prow, but as of now it is limited to GitHub repositories, and that is a pain point.

A huge proportion of organizations have other SCMs in place (Bitbucket, GitLab, SourceForge, …) and don't want to migrate to GitHub. It seems that GitLab is considering helping the Prow project by providing an integration with their system, but for now it's only at the ideation stage.


The Past, Present, and Future of Cloud Native API Gateways


By Daniel Bryant Product Architect, Datawire


The boundary between apps and users has evolved over the last 30 years. We will refer to this boundary of an application (networking, etc.) as the “edge”, as the speaker used this vocabulary.

1990: hardware load balancers

2000: software load balancers appear (NGINX, HAProxy, …)

2010: APIs, and so… API gateways appear

2015: microservices => independent services, and so different protocols, languages, locations, authentication systems, …

The API gateway needs to handle all of this: authentication, load balancing, discovery of new services.
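
As an illustration of the per-service, Kubernetes-native approach, here is a minimal sketch using an Ambassador Mapping (Ambassador being Datawire's open-source gateway; the names below are hypothetical):

```yaml
# Hypothetical Mapping: the gateway routes /quote/ traffic to the "quote" service
apiVersion: getambassador.io/v2
kind: Mapping
metadata:
  name: quote-backend
spec:
  prefix: /quote/    # public URL prefix handled at the edge
  service: quote     # Kubernetes Service discovered inside the cluster
  timeout_ms: 3000   # per-route timeout
```

The route lives next to the service it exposes, so the team owning the service also owns its edge configuration.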

Since the advent of microservices, the workflow has changed: app teams are now fully responsible for service delivery.


The two biggest challenges:

  • Scaling edge management (who does what), because more and more resources (routes, etc.) live in the API gateway (retries, authentication, caching, tracing and rate limiting are the main features of an API gateway solution; see the sketch after this list)

  • Supporting all these requirements in different ways, since every service will choose the solution that best fits its own needs.
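
As a sketch of what such per-route requirements can look like (Ambassador syntax again, hypothetical names), retries here are declared on the route itself rather than in a central gateway configuration:

```yaml
# Hypothetical Mapping adding per-route resilience settings
apiVersion: getambassador.io/v2
kind: Mapping
metadata:
  name: payment-backend
spec:
  prefix: /payment/
  service: payment
  retry_policy:
    retry_on: "5xx"   # retry when the upstream answers with a server error
    num_retries: 3
```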

Three strategies:

  • Deploy an additional kube API gateway:

      • dev teams are responsible for it

      • OR existing ops teams can manage it

  • Extend the existing API gateway:

      • augment the existing API gateway solution

      • custom ingress controller or load balancer

      • enable sync between the API endpoints and the location of the k8s services

      • hard to maintain (custom scripts must avoid conflicts between routes inside the cluster)

  • Deploy an in-cluster edge stack:

      • deploy a Kubernetes-native API gateway

      • install it in each of your kube clusters

      • the ops team owns it and provides defaults (a sketch follows)
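
To make the ownership split concrete, here is a minimal sketch (hypothetical names) of the ops-owned part: the gateway itself is deployed in every cluster and exposed at the edge, while dev teams attach their routes to it through resources like the Mappings shown earlier:

```yaml
# Hypothetical ops-owned Service exposing the in-cluster gateway at the edge
apiVersion: v1
kind: Service
metadata:
  name: edge-gateway
  namespace: edge          # ops-managed namespace
spec:
  type: LoadBalancer       # cloud load balancer in front of the gateway pods
  selector:
    app: edge-gateway      # pods running the Kubernetes-native API gateway
  ports:
    - port: 443
      targetPort: 8443
```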