Tuesday, October 27, 2020
PRO WORKSHOP (MICROSERVICES): Streaming Microservices Architecture with Apache Kafka, Kubernetes and Istio Service Mesh
Apache Kafka has become the de facto standard for microservice architectures. It goes far beyond reliable and scalable high-volume messaging: you can also leverage Kafka Connect for integration and the Kafka Streams API for building lightweight stream processing microservices in autonomous teams. However, microservices also introduce new challenges, such as observability across the whole ecosystem.
A Service Mesh technology like Istio (including Envoy) complements the architecture. It describes the network of microservices that make up such applications and the interactions between them. Its requirements can include discovery, load balancing, failure recovery, metrics, and monitoring. A service mesh also often has more complex operational requirements, like A/B testing, canary rollouts, rate limiting, access control, and end-to-end authentication.
This session explores the problems of distributed microservices communication and how Apache Kafka and Service Mesh solutions address them together on top of Kubernetes. I cover different approaches for combining the two to build a reliable and scalable microservice architecture with decoupled and secure microservices.
In a typical monolithic application, you have a large application talking to a large database. In a microservices architecture, instead of building a large application, we build a number of smaller applications, known as microservices, which exchange messages among themselves. While the process of migrating from a monolith to microservices can be tricky, we will examine why you would want to make this change, the challenges of this migration, how feature flags solve those challenges, and best practices to tie it all together.
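One common way feature flags support such a migration is the strangler pattern: route a controlled percentage of calls to the new microservice while the monolith code path remains the fallback. The sketch below is a minimal illustration of that idea, not any particular feature-flag product; the flag name, service names, and rollout API are hypothetical.

```python
import random

class FeatureFlags:
    """Minimal in-memory flag store supporting percentage rollouts."""
    def __init__(self):
        self._rollouts = {}  # flag name -> percentage of traffic (0-100)

    def set_rollout(self, name, percent):
        self._rollouts[name] = percent

    def is_enabled(self, name, rng=random.random):
        # rng() is in [0, 1); at 100% the flag is always on, at 0% always off.
        return rng() * 100 < self._rollouts.get(name, 0)

def call_pricing_microservice(product_id):
    # Stand-in for an HTTP call to the extracted service.
    return {"source": "microservice", "product": product_id}

def monolith_price_lookup(product_id):
    # Stand-in for the legacy in-process code path.
    return {"source": "monolith", "product": product_id}

def get_price(product_id, flags):
    # Gradually shift reads from the monolith to the new pricing service;
    # dialing the rollout back to 0 is the instant rollback story.
    if flags.is_enabled("use-pricing-service"):
        return call_pricing_microservice(product_id)
    return monolith_price_lookup(product_id)
```

Because the flag is evaluated per request, the migration can be advanced (or rolled back) without a redeploy, which is the core challenge the talk addresses.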
PRO WORKSHOP (MICROSERVICES): Developing While There Are Hundreds of Microservices Around You Without Breaking Your APIs
Managing dozens or hundreds of microservices at scale can be very challenging. As developers, we often find ourselves blind to how our application is actually behaving in production, what dependencies we should be aware of, and what we should check before deploying a new version.
In this talk, we will introduce how to leverage production tracing capabilities during the development and testing phases to improve our code.
Application modernization for higher agility and cost optimization has become table stakes for enterprises to be competitive. One of the most common patterns to accomplish that is containerization. Container platforms have become an integral part of any Hybrid Cloud IT landscape and accelerate multi-cloud adoption in enterprises. Containers can be deployed in any cloud (IBM, AWS, Azure, Google) or on-premises. In fact, most public clouds offer a Container Platform as a Service.
The question is: what are the relevant KPIs and measures to keep in mind when moving from traditional development to the container world? How do you know your app modernization strategy through containers is working? This session will discuss these questions and others to provoke thinking among business and technical executives.
Microservices is not a buzzword, but Observability is.
The session details the observability needs of building distributed systems, followed by a workshop covering the topics below.
1. Building observable Spring Boot Java microservices by adding resilience patterns (Circuit Breaker, Retry, Rate Limiting, etc.) using Resilience4j
2. Enabling them for event-driven communication
3. Adding Actuator for exposing metrics
4. Monitoring using Prometheus
5. Alerting using Prometheus Alert Manager
6. Tracing using Zipkin
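To make step 1 concrete, here is a toy sketch of the circuit breaker pattern that Resilience4j implements for the workshop's Spring Boot services. This is a language-agnostic illustration of the state machine, not Resilience4j's API; the threshold and class names are illustrative only, and real implementations add a HALF_OPEN state with timed recovery probes.

```python
class CircuitOpenError(Exception):
    """Raised when the breaker rejects a call without attempting it."""
    pass

class CircuitBreaker:
    def __init__(self, failure_threshold=3):
        self.failure_threshold = failure_threshold
        self.failures = 0
        self.state = "CLOSED"  # CLOSED = calls flow; OPEN = calls fail fast

    def call(self, fn, *args):
        if self.state == "OPEN":
            # Fail fast instead of hammering a struggling downstream service.
            raise CircuitOpenError("circuit is open; failing fast")
        try:
            result = fn(*args)
        except Exception:
            self.failures += 1
            if self.failures >= self.failure_threshold:
                self.state = "OPEN"  # trip the breaker
            raise
        self.failures = 0  # any success resets the failure count
        return result
```

The metrics the workshop exposes via Actuator and scrapes with Prometheus would, in a real setup, include this breaker's state transitions so that the Alert Manager rules in step 5 can fire on a tripped circuit.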
Wednesday, October 28, 2020
Earlier, developers simply wrote their program, built it and ran it. Today, developers also need to think about the various ways of running it, whether as a binary on a (most likely virtual) machine, by packaging it into a container, by making that container part of a bigger deployment (K8s), or by deploying it into a serverless environment or a service mesh. However, these deployment options are not part of the programming experience for a developer. The developer has to write code in a certain way to work well in a given execution environment, and excluding that concern from the programming problem isn't good.
Ballerina is an open-source programming language that specializes in moving from code to cloud while providing a unique developer experience. Its compiler can be extended to read annotations defined in the source code and generate artifacts to deploy your code into different clouds. These artifacts can be Dockerfiles, Docker images, Kubernetes YAML files or serverless functions. This session will demonstrate how you can move from code to cloud by using the built-in Kubernetes annotation support in Ballerina.
Modern, distributed applications are often seen as a graph of nodes and edges, each node representing a container, a function, or an API service.
At scale, with thousands of small microservices, visualizing and analyzing them becomes critical to understanding the performance, flow, and health of our applications.
To accomplish that we need to learn about observability, distributed tracing, popular tools and frameworks, and more.
In this talk, we'll go over the challenges, the solutions, the approaches, and the tools to help you understand your APIs and microservices.
In this session we will go over best practices for building a cloud native architecture designed to scale rapidly. We will go over the design of the Kubernetes-based microservice architecture that we implement day to day at our firm, PeakActivity.
PRO SESSION (MICROSERVICES): How Can Thee Automate Security as Code in Kubernetes Pipelines? Let Me Count the Ways
For DevOps and DevSecOps teams, implementing security-as-code is among the highest of mountaintops…and yet still so seldom achieved. Dev teams certainly recognize the advantages of automatically integrating security across the entire software development lifecycle – but they’re too often daunted by the challenge of automating security policies within rapidly changing Kubernetes environments. While it’s now standard practice to automate vulnerability scanning, creating security policies to protect production workloads has, unfortunately, remained a manual, tedious, and error-prone process.
However, by leveraging Kubernetes custom resources, DevOps teams have opportunities at-the-ready to successfully – and pretty easily! – implement security policies as code. Getting it right means fully automating security across the entire CI/CD pipeline.
Attendees of this presentation should expect these takeaways:
• How to use Kubernetes custom resource definitions (CRDs) to implement security policy as code.
• What can and should be a declarative configuration for security policies in Kubernetes.
• Why network inspection is so essential to implement in a Kubernetes environment.
• How to secure communications among microservices and achieve network visibility.
• A demonstration of how to easily and beneficially apply security policies and introduce fully automated security-as-code for Kubernetes.
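The talk centers on vendor CRDs, but even Kubernetes' built-in NetworkPolicy resource shows the shape of declarative security-as-code: a manifest stored in Git and applied by the same CI/CD pipeline as application code. The namespace, labels, and port below are hypothetical, for illustration only.

```yaml
# Illustrative policy: only the checkout service may reach payments.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: payments-ingress
  namespace: shop
spec:
  podSelector:
    matchLabels:
      app: payments          # the workload being protected
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: checkout      # the only permitted caller
    ports:
    - protocol: TCP
      port: 8443
```

Because this is just a versioned YAML document, it can be linted, reviewed, and promoted through the pipeline like any other code artifact, which is the automation point the session argues for.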
You've built a SPA and an API backend, and you're now looking to deploy with ease. Docker is the natural fit, but where do we begin? We'll use the Vue CLI and the dotnet CLI to startup our codebases, then craft Dockerfiles to deploy these with ease in various configurations. Whether you'd rather deploy both parts together or scale different pieces separately, Docker can empower you to deploy your solutions at cloud scale.
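As a taste of the approach, here is a sketch of a multi-stage Dockerfile for the SPA half of such a solution. The image tags and paths are assumptions rather than the talk's exact material; a Vue CLI project emits its static build into `dist/` by default.

```dockerfile
# Stage 1: build the static assets with Node
FROM node:14 AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build            # Vue CLI writes static files to /app/dist

# Stage 2: serve only the compiled assets with nginx
FROM nginx:alpine
COPY --from=build /app/dist /usr/share/nginx/html
EXPOSE 80
```

The API backend gets its own Dockerfile (the .NET SDK image to build, the runtime image to serve), which is what lets you deploy the two pieces together or scale them separately.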
Kubernetes is a mature technology for operations but very much remains a work-in-progress for developers.
This presentation will look at the challenges of Kubernetes-based microservices development, possible tooling and approaches.
Learn about managing dependencies for services and how to avoid using different tool sets for dev and upper environments. We will look at tools that help developers with inner-loop development on Kubernetes, without requiring an in-depth understanding of advanced concepts.
Microservices with Kubernetes and Service Mesh are patterns for building new applications that decouple dependencies between the application code, the infrastructure, and how the services communicate. In this architecture, the network becomes critical to a properly functioning application: teams need to consider both North/South traffic (incoming requests from end users to the cluster) and East/West (intra-cluster) communication between the services.
In this talk we discuss:
* Changing traffic patterns from edge to service mesh for microservices
* Envoy as the proxy for the modern application network
* New capabilities available to the applications
* Guidance and considerations on how to incrementally adopt Envoy into your infrastructure
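An incremental first step is often a single statically configured Envoy in front of one service. The sketch below is a minimal illustration of that shape (service names and ports are hypothetical, and field details vary by Envoy version): one HTTP listener routing all traffic to one upstream cluster.

```yaml
# Minimal static Envoy bootstrap: listener -> route -> cluster.
static_resources:
  listeners:
  - name: ingress
    address:
      socket_address: { address: 0.0.0.0, port_value: 8080 }
    filter_chains:
    - filters:
      - name: envoy.filters.network.http_connection_manager
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
          stat_prefix: ingress_http
          route_config:
            virtual_hosts:
            - name: backend
              domains: ["*"]
              routes:
              - match: { prefix: "/" }
                route: { cluster: service_backend }
          http_filters:
          - name: envoy.filters.http.router
            typed_config:
              "@type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router
  clusters:
  - name: service_backend
    connect_timeout: 1s
    type: STRICT_DNS
    load_assignment:
      cluster_name: service_backend
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address: { address: backend.default.svc, port_value: 80 }
```

From here, teams typically swap the static configuration for dynamic xDS delivery from a control plane as adoption grows.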
Service meshes normally target the application space, but they can be used to great effect for sources within the infrastructure. We will explore the concept of implementing a service mesh for interacting with infrastructure sources using a declarative resource protocol. This approach provides users with a single mechanism for executing RPC calls and pub/sub operations as well as accessing object class definitions across products from various vendors.
Thursday, October 29, 2020
Service Mesh has evolved into a messy state of affairs. While almost every possible situation involving east-west and egress-bound traffic has a solution, these solutions often require significant oversight, planning, and capacity to implement. The downside is that solving all these problems resulted in the metaphorical equivalent of stuffing a pratfall of clowns into a Volkswagen.
Kevin will cover how the team at Traefik Labs created Traefik Mesh, a lightweight alternative to the traditional service mesh approach, by utilizing a service mesh proxy endpoint running on each node as a DaemonSet. He will discuss how this approach allowed for flexible opt-in behavior using the Service Mesh Interface (SMI) versus the more traditional sidecar approach. He will cover what the SMI is, and demonstrate how to utilize the interface with Traefik Mesh to handle access control and traffic splitting.
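A flavor of the SMI resources the demo uses: a TrafficSplit shifts a weighted fraction of requests from one backend service to another. The names and weights below are hypothetical, and the exact API version (and weight format) varies across SMI releases.

```yaml
# Illustrative SMI TrafficSplit: send 20% of traffic to v2.
apiVersion: split.smi-spec.io/v1alpha2
kind: TrafficSplit
metadata:
  name: reviews-rollout
  namespace: demo
spec:
  service: reviews          # the root service that clients address
  backends:
  - service: reviews-v1
    weight: 80
  - service: reviews-v2
    weight: 20
```

Because SMI is an interface rather than an implementation, the same manifest works against any conforming mesh, which is what makes the opt-in, per-service adoption model possible.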
Microservices developers are slowed down by the friction in debugging and testing API interactions with other services. We first highlight problems with common approaches such as spinning up dev clusters or manually creating API mocks. We then illustrate how to overcome these issues with an approach that enables developers to debug and test all API interactions locally in their IDE. The Mesh Dynamics product realizes this approach and gives developers the flexibility of using a mix of rich auto-created API mocks and live services.
GitOps is the gold standard for managing and deploying Kubernetes applications. In this talk, we’ll show you how we use raw Kubernetes, in addition to Helm and Terraform to manage our deployments to a wide range of clusters and clouds.
Codefresh operates a global SaaS product in the cloud using Kubernetes. CloudPosse helps companies like PeerStreet, RMS, Sportech, and others migrate to Kubernetes. Between the two of us, we’ve deployed to every cloud and in almost any kind of situation. We’ve built an established, well-worn path for using the principles of “GitOps” to take advantage of Kubernetes declarative infrastructure in order to deploy more often, and with more reliability.
In this session, you’ll learn the principles of GitOps and how it solves both technology and organizational problems around CI/CD, Kubernetes application drift, and traceability through the engineering process.
PRO SESSION (MICROSERVICES): Waking up in the Weeds of Microservices? How to Diagnose Your First Bug
You invest your time and effort breaking up that monolithic Frankenstein into a suite of elegant, composable micro-services, you containerize them, and you deploy them somewhere in the cloud. Then you proudly watch it all come together, reaping the benefits of the most scalable of architectures. It is all fine and dandy from this point on. Too good to be true? Of course! This session is about what to do when you wake up to find yourself in the weeds diagnosing that first bug and tracing calls through the convoluted web of micro-services of your own doing. Through a series of demos and code snippets, we will introduce the most important open-source projects and tools to strike the right balance of monitoring at the infrastructure, container, and service levels.
PRO WORKSHOP (MICROSERVICES): Why GraphQL Between Services Is the Worst Idea and the Best Idea at the Same Time
It seems like everyone is talking about GraphQL. "GraphQL all the things!!" But does GraphQL really fit everywhere? What might be some of the issues of using GraphQL between services?
In this talk I will demonstrate different approaches that are currently being discussed in the community, along with their downsides and pitfalls, and also reveal a new radical approach that might shine a new light on the subject, using the "GraphQL Mesh" library.
Modern applications are increasingly becoming a distributed computing problem. With the availability of feature-rich cloud services, our solutions increasingly rely on these to implement functionality. The application itself is also adopting a more disaggregated architecture in favor of extensibility, scalability, reusability, and deployment flexibility. This is how microservices architectures are becoming more popular every day. But there is no free lunch; along with the benefits come new challenges. Compared to monoliths, with microservices we need to handle the complexities of networked architectures, such as communication latency, unreliable connections, protocols, data formats, and transactions. So while we come up with many new techniques to tackle these problems, it is vital to have proper observability functionality to verify the behavior. The Ballerina programming language, which is designed from the ground up for networked applications, takes a unique approach by building observability functionality into language constructs. It exploits the language's awareness of network operations, such as service types, remote function invocations, and communication resiliency mechanisms, to automatically observe the operations performed by the user's code. The Ballerina platform takes care of the majority of observability situations automatically, so developers can focus on the core business logic instead of sprinkling their code with observability calls. In this session, we will look at how this built-in functionality can be used for metrics generation and distributed tracing on the Ballerina platform.
Containerization gave applications portability from local dev to production, but in our pursuit of service-oriented design that portability has been lost. This talk will discuss how we can build upon containerization to make complex services portable through dependency management and resolution.