Tuesday, September 29, 2020
Building a cloud-agnostic platform used to be a challenging task, as one had to deal with a large number of different cloud APIs and service offerings. Today, as most cloud providers offer a managed Kubernetes solution (e.g., GKE, AKS, or EKS), it seems developers could simply build a platform on Kubernetes and be cloud-agnostic. While this assumption is mostly correct, there are still a number of differences and pitfalls when deploying across those managed Kubernetes solutions.
This talk discusses the experience gained while building the ArangoDB Managed Service offering across GKE, AKS, and EKS.
While the (managed) Kubernetes API is a great abstraction from the actual cloud provider, a number of challenges remain, including networking, autoscaling, cluster provisioning, and node sizing. This talk provides an overview of those challenges and discusses how they were solved as part of the ArangoDB Managed Service.
Developers are moving away from large monolithic apps in favor of small, focused microservices that speed up implementation and improve resiliency. Microservices and containers changed application design and deployment patterns, but along with them brought challenges like service discovery, routing, failure handling, security and visibility to microservices.
“Service mesh” architecture was born to handle these features. Applications are being decoupled internally into microservices, and the responsibility of maintaining the coupling between these microservices is passed to the service mesh. Istio, a joint collaboration between IBM, Google, and Lyft, provides an easy way to create a service mesh that will manage many of these complex tasks automatically, without the need to modify the microservices themselves.
In this talk we will see how Istio can be used to manage traffic, gather metrics, and enforce policies in a demo application running microservices. We will learn why Kubernetes needs a service mesh and how Istio improves the management of Kubernetes workloads.
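As a taste of what the demo covers, traffic management in Istio is declarative. The sketch below is a minimal, hypothetical example (service and subset names assumed, in the style of Istio's Bookinfo `reviews` service) using a VirtualService to shift 10% of traffic to a new version, with no application code changes:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
    - reviews
  http:
    - route:
        - destination:
            host: reviews
            subset: v1        # current version keeps 90% of traffic
          weight: 90
        - destination:
            host: reviews
            subset: v2        # canary version receives 10%
          weight: 10
```

The `v1`/`v2` subsets would be defined in a corresponding DestinationRule; the mesh's sidecar proxies enforce the split.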
Cloud native application development has now become the norm. As a result, every developer is becoming a cloud native app developer. They are becoming masters of not only programming languages, but also distributed primitives/abstractions offered by containers, Kubernetes, and serverless frameworks. Effectively leveraging platforms such as Kubernetes can be a daunting task for developers due to many reasons, including managing hundreds of lines of YAML configuration.
Programming languages, frameworks, and tools that abstract away the complexities faced by developers are on the rise. They are collectively called the languages of infrastructure. In this session, I will explore two such tools -- Pulumi and Ballerina. Let’s explore how these languages of infrastructure hide the complexities of cloud native platforms and boost developer productivity.
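To make the idea concrete: instead of hand-writing YAML, a language of infrastructure lets you build manifests with ordinary functions, loops, and variables. The helper below is not Pulumi's or Ballerina's actual API, just a toy Python sketch of the concept:

```python
def deployment(name, image, replicas=1):
    """Build a minimal Kubernetes Deployment manifest as a plain Python dict,
    so that naming, labeling, and replica counts live in code, not YAML."""
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name},
        "spec": {
            "replicas": replicas,
            "selector": {"matchLabels": {"app": name}},
            "template": {
                "metadata": {"labels": {"app": name}},
                "spec": {"containers": [{"name": name, "image": image}]},
            },
        },
    }
```

With such a helper, a dozen near-identical services become a loop over a list rather than hundreds of duplicated YAML lines.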
Interested in Containers and want to learn more? This talk will introduce you to the basics of why containers are important, how they work, and how Kubernetes is making containers the DevOps way of the future - through fun comic illustrations and analogies! You'll learn and retain the key points you'll need whether you're trying to convince your leadership that container adoption is right for the company, or talking to that person a few cubes down who just can't seem to stop talking about containers!
A discussion on the changes, trends, and database technologies that are going to impact your business in the next 12-18 months.
In the current technology landscape, a lot of great innovation is happening, especially in database technology. Examples include new data models such as time series and graph, as well as systems focused on solving the SQL-at-hyper-scale problem, a long-elusive goal, since scale had become synonymous with NoSQL environments. We now have new cloud-native database designs coming to market that use the power of Kubernetes and employ serverless concepts.
In this presentation, we will look at the changing trends in database technologies and what is driving them, and also discuss changes to open source licenses, cloud-based deployment, and the emerging class of not-quite open source database software.
Microservices, APIs and containers are more than just the latest industry buzzwords—they’re becoming essentials of the modern tech stack, with Kubernetes gaining significant momentum as a popular choice in the container orchestration market. However, the reality of orchestration can be daunting, and with all this noise, it can also be challenging to differentiate between hype and practicality.
Dennis Kelly, a Kubernetes engineer at Packet, will demonstrate a lightweight but enterprise-ready solution for container orchestration leveraging a number of tools, including Terraform, Consul, Vault and Nomad. He will also share learnings on how this solution helped his team increase management efficiencies while decreasing the cost of container orchestration.
Attendees will leave with a clear, easy-to-implement stack that will improve cloud resource usage, save money, increase visibility and simplify application management.
Business leaders desire data driven insights to help improve customer experience. Data engineers, data scientists, and software developers desire a self-service, cloud-like experience to access tools/frameworks, data, and compute resources anywhere to rapidly build, scale, and share results of their projects to accelerate delivery of AI-powered intelligent applications into production. This keynote will provide a brief overview of the AI/ML use cases, required capabilities, and execution challenges. Next we will discuss the value of Hybrid Cloud powered by containers, Kubernetes, and DevOps to help fast track AI/ML projects from pilot to production, and accelerate delivery of intelligent applications. Finally, the session will share real world success stories from various industries globally.
The dynamic development of cloud computing and novel cloud models like serverless creates new challenges for cloud deployment. This presentation describes how to implement multi-cloud native strategies using an advanced platform that allows for cloud-agnostic, multi-cloud deployment of serverless workloads.
Managing cloud security risk is a requirement. Why wait to tackle container and Kubernetes security? Many teams worry security will slow down releases. Studies show that top performing DevOps teams integrate more functions into their tool chain, including monitoring and security. Learn how other leading companies are reducing risk with a secure DevOps approach.
Join us to understand:
- Best practices to get started quickly and integrate security into existing toolchains
- Key areas where you should integrate security: CI/CD pipelines, registries, workload admission, service communication
- Ways to take advantage of Kubernetes native controls and open source projects such as Falco cloud native runtime security project and Open Policy Agent (OPA)
- How others are monitoring cloud security across multiple clouds and services including for Fargate, EKS, OpenShift, and AKS
- Approaches to managing incident response and forensics after the container is gone
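In practice, tools such as OPA express workload-admission rules in Rego; the following is only a toy Python sketch (the allowed registry name is hypothetical) of the kind of check an admission controller performs:

```python
ALLOWED_REGISTRIES = ("registry.example.com/",)  # hypothetical trusted registry

def admit(pod_spec):
    """Toy admission check: reject containers pulled from untrusted
    registries or requesting privileged mode. Returns (allowed, violations)."""
    violations = []
    for c in pod_spec.get("containers", []):
        image = c.get("image", "")
        if not image.startswith(ALLOWED_REGISTRIES):
            violations.append(f"{c['name']}: image {image!r} not from an allowed registry")
        if c.get("securityContext", {}).get("privileged"):
            violations.append(f"{c['name']}: privileged containers are not allowed")
    return (len(violations) == 0, violations)
```

The same rule written in Rego can be enforced cluster-wide by OPA as a Kubernetes admission webhook, without changing workloads.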
Cloud deployments offer the potential for almost infinite resources and flexible scalability. But there are so many options! It can be overwhelming to know which services are best for your use case. Building distributed systems which take advantage of in-memory computing only adds to the complexity. During this session we will introduce the Apache Ignite in-memory computing platform and identify key metrics that can help you maximize application performance on your existing cloud infrastructure. We will provide best practices on how best to structure and deploy in-memory applications on both public and hybrid clouds.
Many of us are familiar and comfortable with deploying automation for parts of our software development lifecycles. We don’t build all of our software by hand; continuous integration practices have been available, and improving, for a number of years. Our test engineers write more automated testing components to replace manually clicking through QA environments. Our production deployments are governed by automated delivery or deployment, coupled with automated infrastructure tooling to keep our services running.
But what happens when something goes wrong? How we respond to incidents, and the speed at which our teams can do so, is increasingly important. Employing automation at this edge stage of the lifecycle will help teams deal with the increasing complexity of modern systems and recover time from unplanned work. This talk will discuss some of the features to keep in mind when automating for incident response as well as approaches to introducing automation to help reduce alert fatigue for your teams.
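One simple form of such automation is suppressing repeated alerts so responders see each incident once instead of hundreds of times. A minimal sketch (the alert tuple shape and window are assumptions for illustration):

```python
def suppress_repeats(alerts, window=300):
    """Suppress alerts that repeat the same (service, check) fingerprint
    within `window` seconds of the last one that actually fired.
    `alerts` is an iterable of (timestamp, service, check) tuples."""
    last_fired = {}
    fired = []
    for ts, service, check in sorted(alerts):
        key = (service, check)
        if key not in last_fired or ts - last_fired[key] >= window:
            fired.append((ts, service, check))
            last_fired[key] = ts
    return fired
```

Real incident-response platforms layer grouping, routing, and escalation on top of this basic deduplication idea.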
This is the story of a small infrastructure team’s quest to forever change how infrastructure is provisioned and managed across multiple cloud providers. We walk through all of the steps required to go from toiling over a pile of hand-crafted misery to a well-oiled continuous integration machine fueled by Terraform. Learn how we approached code design, workflows, prototyping, bake-offs, team collaboration, remote state, repository design, and more. Demos and sample code included!
Wednesday, September 30, 2020
Continuous Integration, Deployment, and Delivery can be hard concepts for many people coming into agile. The act of frequently pushing new code into production can be scary. For some, the idea is so far out there that they think they could not possibly achieve it and never try. This workshop will focus on practicing building, testing, deploying, and managing self-service services in the cloud.
There is no public data available on how developers use feature flagging and the resulting impact on delivery cycles. This lack of data prevents teams from benchmarking their feature flagging practice against the industry. In this talk, I will present data - collected across hundreds of customers - on how developers use feature flags in their daily jobs. This data will range from the time it takes teams to release a feature behind a flag to the number of times a flag has to be turned off in an emergency. Armed with data, teams can understand what to expect when they widely adopt flags as well as compare their existing state to the industry for improvement.
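For readers new to the practice, a percentage rollout behind a flag is typically a deterministic hash of the user into a bucket, so the same user always gets the same answer. A minimal sketch (not any vendor's actual SDK):

```python
import hashlib

def flag_enabled(flag_name, user_id, rollout_percent):
    """Deterministic percentage rollout: hash (flag, user) into a bucket
    0-99 and enable the flag for users whose bucket falls below the
    rollout threshold. Turning the flag off is just setting the percent to 0."""
    digest = hashlib.sha256(f"{flag_name}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return bucket < rollout_percent
```

Because the bucket depends on the flag name as well as the user, different flags roll out to different (uncorrelated) slices of the user base.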
In this session, you will learn about developing robust application modernization strategies as well as tooling and accelerators that can be used to realize business value quickly and iteratively. We will start by discussing how to analyze applications and infrastructure using expert systems and automated tooling as well as evaluating the existing IT operating model and foundational capabilities to identify gaps. Then we will explore several modernization techniques using cloud, containers, and exponential technologies. Next there will be a brief overview of tooling and accelerators that illustrate key transformation patterns such as containerization and decomposition of applications into microservices. We will wrap up the discussion with some considerations for building a robust business case which is a critical success factor for launching, and continuing, your modernization journey.
git is one of those things that you either get or you don't get yet. Having used git almost exclusively since 2008, I will share a not-entirely-accurate but very useful way of making sense of it all!
By the end of this session, you will be rebasing your git experience onto master!
In this talk, I will talk about how GraphQL works really well for monoliths and what problems arise when taking it to microservice and serverless architectures.
I will then present a few patterns that can allow for extracting the benefit of GraphQL while keeping the benefits of a microservices/serverless architecture.
Specifically, I will talk about patterns for "reading" (querying across types/schemas distributed across microservices and data sources) and for "writing" (CQRS-style actions instead of CRUD-style "mutations"), and how to make them work with event-driven patterns. I will draw upon existing commentary (both successes and failures) in the GraphQL community, personal experiences, and our learnings from Hasura users.
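To illustrate the "writing" side, a CQRS-style model replaces generic CRUD mutations with named actions that append events, and derives the read side from the event log. A minimal Python sketch (domain and event names are hypothetical):

```python
# Toy CQRS sketch: writes are intent-revealing commands that append
# events; the query side folds over the event log to build its view.
events = []

def place_order(order_id, item):
    """Command ("action"), not a generic CRUD update."""
    events.append({"type": "OrderPlaced", "order_id": order_id, "item": item})

def cancel_order(order_id):
    events.append({"type": "OrderCancelled", "order_id": order_id})

def open_orders():
    """Query side: rebuild current state from the event log."""
    state = {}
    for e in events:
        if e["type"] == "OrderPlaced":
            state[e["order_id"]] = e["item"]
        elif e["type"] == "OrderCancelled":
            state.pop(e["order_id"], None)
    return state
```

In a GraphQL schema this maps to mutations named after business actions (`placeOrder`, `cancelOrder`) rather than `updateOrder`, which also makes the events natural to publish on an event bus.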
Earlier, developers simply wrote their program, built it, and ran it. Today, developers also need to think about the various ways of running it: as a binary on a (most likely virtual) machine, by packaging it into a container, by making that container part of a bigger deployment (K8s), or by deploying it into a serverless environment or a service mesh. However, these deployment options are not part of the programming experience for a developer. The developer has to write code in a certain way to work well in a given execution environment, and leaving this out of the programming problem isn't good.
Ballerina is an open-source programming language with built-in cloud technology integration that helps developers write applications that just work in Kubernetes. Its compiler can be extended to read annotations defined in the source code and generate artifacts to deploy your code into different clouds. These artifacts can be Dockerfiles, Docker images, Kubernetes YAML files, or serverless functions. This session will demonstrate how you can move from code to cloud by using the built-in Kubernetes annotation support in Ballerina.
Today’s monitoring systems were designed for different times: to maintain the availability and performance of traditional applications and architectures. Digital business has forced the monitoring of the past to transform into the observability of today. Choices have progressed significantly for both small teams and large enterprises. The observability signals of logs, metrics, and traces together make up the monitoring tooling for today’s DevOps teams.
Previously, products were siloed, which created burdens in maintaining and scaling solutions; today this is no longer an issue, with many commercial options unifying these signals. There have been significant improvements in the open source projects, but scaling them requires expertise and investment in people and infrastructure. If that investment is too great, there are SaaS options available across the board, from simple to complex.
The market is changing faster than ever, driven by open source initiatives and projects along with software foundations which underpin today’s cloud native architectures. In this talk you’ll learn how monitoring has shifted, which open source technologies are meeting new challenges, and how the community initiatives will change observability significantly in the next 12 months. Your organization can adopt these new technologies to save money, support new architectures, and provide new capabilities to your development, operations and DevOps teams.
- Digital business requires observability and agility.
- Yesterday’s siloed organizations do not work in high-velocity environments, which require agile DevOps teams.
- DevOps teams need many tools and technologies to handle the complete life-cycle of applications and environments.
- Observability includes collecting logs, metrics, and traces from applications and infrastructure.
- Open source projects have improved vastly in the last few years, but scaling them is a challenge.
- Markets are changing; learn which technologies meet these challenges and where the ecosystem is headed in the next year.
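As a small illustration of how the three signals connect, a structured log line that carries a trace ID can be correlated with distributed traces and metrics for the same request. A minimal sketch (field names are illustrative):

```python
import json
import time
import uuid

def log_event(message, trace_id=None, **fields):
    """Emit one structured (JSON) log line; attaching a trace_id lets
    log lines be joined with distributed traces for the same request."""
    record = {
        "ts": time.time(),
        "message": message,
        "trace_id": trace_id or uuid.uuid4().hex,
        **fields,
    }
    print(json.dumps(record))
    return record

rec = log_event("payment processed", service="checkout", latency_ms=42)
```

In a real system the trace ID would be propagated from the incoming request context (e.g., a W3C `traceparent` header) rather than generated locally.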