Service Mesh, Containers, Kubernetes
Wednesday, October 28, 2020
Not long ago, developers simply wrote their program, built it, and ran it. Today, developers must also think about the various ways of running it: as a binary on a machine (most likely virtual), by packaging it into a container, by making that container part of a larger deployment (K8s), or by deploying it to a serverless environment or a service mesh. Yet these deployment options are not part of the programming experience. The developer has to write code in a particular way to work well in a given execution environment, and the inability to separate this concern from the programming problem itself is a real drawback.
Ballerina is an open-source programming language that specializes in moving from code to cloud while providing a unique developer experience. Its compiler can be extended to read annotations defined in the source code and generate artifacts to deploy your code into different clouds. These artifacts can be Dockerfiles, Docker images, Kubernetes YAML files or serverless functions. This session will demonstrate how you can move from code to cloud by using the built-in Kubernetes annotation support in Ballerina.
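The artifacts Ballerina's compiler emits from these annotations are standard deployment descriptors. As a rough illustration (the exact output depends on the annotations used, and the service name, image, and port below are hypothetical), a generated Kubernetes manifest might resemble:

```yaml
# Sketch of the kind of Deployment a code-to-cloud compiler can generate
# from source-level annotations. Names, image tag, and port are hypothetical.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-service
  labels:
    app: hello-service
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-service
  template:
    metadata:
      labels:
        app: hello-service
    spec:
      containers:
      - name: hello-service
        image: hello-service:v1.0.0   # image built by the compiler extension
        ports:
        - containerPort: 9090        # the service's listener port
```

The point of annotation-driven generation is that the developer never hand-writes this file; it stays in sync with the code it was derived from.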
PRO SESSION (MICROSERVICES): How Can Thee Automate Security as Code in Kubernetes Pipelines? Let Me Count the Ways
For DevOps and DevSecOps teams, implementing security-as-code is among the highest of mountaintops…and yet still so seldom achieved. Dev teams certainly recognize the advantages of automatically integrating security across the entire software development lifecycle – but they’re too often daunted by the challenge of automating security policies within rapidly changing Kubernetes environments. While it’s now standard practice to automate vulnerability scanning, creating security policies to protect production workloads has, unfortunately, remained a manual, tedious, and error-prone process.
However, by leveraging Kubernetes custom resources, DevOps teams have the means at the ready to implement security policies as code successfully – and fairly easily! Getting it right means fully automating security across the entire CI/CD pipeline.
Attendees of this presentation should expect these takeaways:
• How to use Kubernetes custom resource definitions (CRDs) to implement security policy as code.
• What can and should be expressed as declarative configuration for security policies in Kubernetes.
• Why network inspection is so essential to implement in a Kubernetes environment.
• How to secure communications among microservices and achieve network visibility.
• A demonstration of how to easily and beneficially apply security policies and introduce fully automated security-as-code for Kubernetes.
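As a concrete sketch of what "security policy as code" can look like in practice – here using the built-in NetworkPolicy resource rather than a vendor-specific CRD, with a hypothetical namespace and labels – a declarative rule restricting which pods may reach a payments service might be:

```yaml
# Hypothetical example: only pods labeled app=checkout may reach the
# payments pods on TCP 8443; all other ingress to them is denied.
# Checked into Git, this file *is* the policy.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: payments-ingress
  namespace: shop
spec:
  podSelector:
    matchLabels:
      app: payments      # the workload being protected
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: checkout  # the only permitted caller
    ports:
    - protocol: TCP
      port: 8443
```

Because the policy is a plain Kubernetes object, it can be reviewed, versioned, and applied by the same CI/CD pipeline that ships the application.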
You've built a SPA and an API backend, and you're now looking to deploy with ease. Docker is the natural fit, but where do you begin? We'll use the Vue CLI and the dotnet CLI to scaffold our codebases, then craft Dockerfiles to deploy them in various configurations. Whether you'd rather deploy both parts together or scale the pieces separately, Docker can empower you to deploy your solutions at cloud scale.
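For the SPA half, a typical approach is a multi-stage build: compile the static assets with Node, then serve them from a small web server image. A minimal sketch (base image tags and paths are assumptions; a standard Vue CLI project writes its build output to `dist/`):

```dockerfile
# Build stage: compile the Vue SPA (assumes a standard Vue CLI layout)
FROM node:14 AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Serve stage: ship only the static output behind nginx
FROM nginx:alpine
COPY --from=build /app/dist /usr/share/nginx/html
```

Keeping the SPA and the API in separate images like this is what lets you scale the two pieces independently later.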
Kubernetes is a mature technology for operations but very much remains a work-in-progress for developers.
This presentation will look at the challenges of Kubernetes-based microservices development, along with possible tooling and approaches.
Learn about managing dependencies between services and how to avoid using different tool sets for development and upper environments. We will look at tools that help developers with inner-loop development on Kubernetes without requiring an in-depth understanding of its advanced concepts.
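The abstract does not name specific tools, but one widely used example of inner-loop tooling in this space is Skaffold, which watches your source and rebuilds and redeploys to a cluster on each change. A minimal configuration might look roughly like this (the image name and manifest path are hypothetical, and the `apiVersion` varies by Skaffold release):

```yaml
# Hypothetical skaffold.yaml: rebuild the image and re-apply the
# manifests on every change, so `skaffold dev` drives the inner loop.
apiVersion: skaffold/v2beta10
kind: Config
build:
  artifacts:
  - image: example/orders        # built from the Dockerfile in this repo
deploy:
  kubectl:
    manifests:
    - k8s/*.yaml                 # the same manifests used in upper environments
```

Pointing the tool at the same manifests used elsewhere is one way to avoid maintaining separate dev-only tooling.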
Microservices with Kubernetes and service mesh are patterns for building new applications that decouple the application code, the infrastructure, and how the services communicate. In this architecture, the network becomes critical to a properly functioning application. Teams need to consider both north/south traffic (incoming requests from end users to the cluster) and east/west traffic (intra-cluster communication between services).
In this talk we discuss:
* Changing traffic patterns from edge to service mesh for microservices
* Envoy as the proxy for the modern application network
* New capabilities available to the applications
* Guidance and considerations on how to incrementally adopt Envoy into your infrastructure
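To make the "Envoy as the proxy" point concrete, here is a minimal hand-written static bootstrap (Envoy v3 API) that routes all inbound HTTP on one listener to a single upstream cluster. The port and upstream hostname are hypothetical, and in a real mesh this configuration would be served dynamically over xDS rather than written by hand:

```yaml
# Minimal static Envoy config: one HTTP listener, one upstream cluster.
static_resources:
  listeners:
  - name: ingress
    address:
      socket_address: { address: 0.0.0.0, port_value: 8080 }
    filter_chains:
    - filters:
      - name: envoy.filters.network.http_connection_manager
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
          stat_prefix: ingress_http
          route_config:
            name: local_route
            virtual_hosts:
            - name: backend
              domains: ["*"]
              routes:
              - match: { prefix: "/" }
                route: { cluster: service_backend }
          http_filters:
          - name: envoy.filters.http.router
            typed_config:
              "@type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router
  clusters:
  - name: service_backend
    connect_timeout: 1s
    type: STRICT_DNS            # resolve the upstream by DNS name
    lb_policy: ROUND_ROBIN
    load_assignment:
      cluster_name: service_backend
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address: { address: backend, port_value: 80 }
```

Incremental adoption usually starts exactly here – Envoy at the edge with static config – before moving to mesh-managed, dynamically configured sidecars.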
Service meshes normally target the application space, but they can be used to great effect for sources within the infrastructure. We will explore the concept of implementing a service mesh for interacting with infrastructure sources using a declarative resource protocol. This approach provides users with a single mechanism for executing RPC calls and pub/sub operations as well as accessing object class definitions across products from various vendors.
Thursday, October 29, 2020
Service mesh has evolved into a messy state of affairs. While almost every situation involving east-west and egress-bound traffic has a solution, these solutions often require significant oversight, planning, and capacity to implement. The downside is that solving all of these problems has resulted in the metaphorical equivalent of stuffing a carload of clowns into a Volkswagen.
Kevin will cover how the team at Traefik Labs created Traefik Mesh, a lightweight alternative to the traditional service mesh approach, by running a service mesh proxy endpoint on each node as a DaemonSet. He will discuss how this allows flexible opt-in behavior through the Service Mesh Interface (SMI), in contrast to the more traditional sidecar approach. He will explain what the SMI is and demonstrate how to use it with Traefik Mesh to handle access control and traffic splitting.
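SMI expresses traffic splitting as an ordinary Kubernetes custom resource. A sketch of a canary-style split (service names and weights are hypothetical; in the `v1alpha2` API weights are plain integers interpreted proportionally):

```yaml
# Hypothetical SMI TrafficSplit: send ~90% of traffic addressed to the
# "checkout" service to v1 and ~10% to v2.
apiVersion: split.smi-spec.io/v1alpha2
kind: TrafficSplit
metadata:
  name: checkout-split
spec:
  service: checkout          # the root service clients address
  backends:
  - service: checkout-v1
    weight: 90
  - service: checkout-v2
    weight: 10
```

Because SMI is an interface rather than an implementation, the same resource works across any mesh that supports the spec.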
GitOps is the gold standard for managing and deploying Kubernetes applications. In this talk, we’ll show you how we use Helm and Argo CD to manage our deployments to a wide range of clusters and clouds.
Codefresh operates a global SaaS product in the cloud using Kubernetes. CloudPosse helps companies like PeerStreet, RMS, Sportech, and others migrate to Kubernetes. Between the two of us, we’ve deployed to every cloud and in almost any kind of situation. We’ve built an established, well-worn path for using the principles of “GitOps” to take advantage of Kubernetes declarative infrastructure in order to deploy more often, and with more reliability.
In this session, you’ll learn the principles of GitOps and how it solves both technology and organizational problems around CI/CD, Kubernetes application drift, and traceability through the engineering process.