Cloud-Native: Containers & Kubernetes
Wednesday, November 17, 2021
WebAssembly is a high-performance, lightweight, polyglot, portable, and secure application runtime. It is ideally suited for many cloud native use cases, such as service mesh, embedded APIs for SaaS, and serverless functions for edge networks. However, as Docker and k8s’ success in cloud native infrastructure has shown, orchestration and management solutions are crucial for the adoption of application runtimes. In this presentation, Michael Yuan will use CNCF sandbox project WasmEdge as an example to discuss how to integrate and use widely used cloud native orchestration tools to manage WebAssembly workloads. Specific topics include:
- WebAssembly use cases in cloud native infrastructure
- Use runw to add WebAssembly management capabilities in CRI-O
- Use crunw to support WebAssembly in k8s
- Support WebAssembly as a service mesh sidecar in Dapr and others
- Standardization efforts around WebAssembly orchestration and management
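On the k8s side, the usual mechanism for routing a Pod to an alternative OCI runtime like the ones above is a RuntimeClass. The sketch below (plain Python dicts standing in for the YAML manifests) illustrates the shape of that wiring; the handler name `crun-wasm` and the image reference are illustrative assumptions, not details from the talk, and the real handler name depends on how the node's CRI-O is configured.

```python
# Sketch: Kubernetes manifests (as dicts, pre-serialization) for running a
# Wasm workload via a RuntimeClass. The handler name "crun-wasm" and the
# module image are assumptions for illustration only.

runtime_class = {
    "apiVersion": "node.k8s.io/v1",
    "kind": "RuntimeClass",
    "metadata": {"name": "wasm"},
    # "handler" must match a low-level runtime configured in the node's CRI
    "handler": "crun-wasm",
}

pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "wasm-demo"},
    "spec": {
        # routes this Pod to the Wasm-capable runtime instead of runc
        "runtimeClassName": "wasm",
        "containers": [
            {"name": "app", "image": "example.registry/wasm-app:latest"}
        ],
    },
}
```

Because the Pod only names a RuntimeClass, the same spec can move between clusters whose nodes back "wasm" with different runtimes.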
The number of microservices running in enterprises increases daily. As a result, service composition, governance, security, and observability are becoming a challenge to implement and incorporate. A “cell-based” architecture is an approach that can be applied to current or desired development practices and technologies to address these issues. This technology-neutral approach helps cloud-native dev teams become more efficient, act in a more self-organized manner, and speed up overall release times. In this talk, Asanka will introduce the "cell-based" reference architecture, which is decentralized, API-centric, cloud-native, and microservices-friendly. He will explain the role of APIs in the cell-based approach, as well as examine how real applications are built as cells. Asanka will explore the metrics and approaches that can be used to measure the effectiveness of the architecture and explore how organizations can implement the cell approach.
One service call into your application can generate hundreds or more internal service calls across a mesh. How do you extract real value from the mesh by getting visibility into how your microservices are performing inside your applications? In this talk, Hart Hoover, Field Engineer at Kong, will walk through an overview of CNCF’s Kuma, and the process of plugging in Prometheus metrics, CNCF Jaeger traces, and Grafana Loki log policies with a Kuma service mesh so you can gain real observability into your applications.
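In Kuma, that plumbing is expressed as mesh configuration plus policies. The sketch below shows the shape of a Mesh resource with Prometheus metrics and a Jaeger tracing backend, and a TrafficTrace policy that turns tracing on for every service (dicts stand in for the YAML). The backend names and the collector address are illustrative assumptions; consult the Kuma policy docs for the schema your version supports.

```python
# Sketch: Kuma resources (as dicts) wiring a mesh to Prometheus metrics and
# Jaeger tracing. Backend names and the collector URL are assumptions.

mesh = {
    "type": "Mesh",
    "name": "default",
    "metrics": {  # expose Prometheus metrics from every data plane proxy
        "enabledBackend": "prometheus-1",
        "backends": [{"name": "prometheus-1", "type": "prometheus"}],
    },
    "tracing": {
        "defaultBackend": "jaeger-1",
        "backends": [{
            "name": "jaeger-1",
            "type": "zipkin",  # Jaeger's collector accepts Zipkin-format spans
            "conf": {"url": "http://jaeger-collector:9411/api/v2/spans"},
        }],
    },
}

# A TrafficTrace policy selects which services actually emit traces.
traffic_trace = {
    "type": "TrafficTrace",
    "mesh": "default",
    "name": "trace-all",
    "selectors": [{"match": {"kuma.io/service": "*"}}],  # all services
    "conf": {"backend": "jaeger-1"},
}
```

Loki log shipping follows the same pattern: a logging backend on the Mesh plus a TrafficLog policy selecting the services whose access logs it should receive.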
Kubernetes can solve many of the problems that apply to running dev environments in the cloud. This talk explores the benefits of cloud IDEs and discusses using K8s to host and manage remote IDEs for a dev team. Ben will present a demo of a Kubernetes cluster with workspaces for 25 developers that include code-server, JetBrains Projector, VS Code Remote, and pre-installed dev tools. Beyond the demo, Ben will also explore some of the complications associated with hosting dev environments at scale.
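Hosting dozens of identical-but-isolated workspaces is mostly a templating problem. As a minimal sketch of that idea (not the talk's actual demo setup), the function below stamps out one workspace Deployment per developer; the code-server image tag and the resource numbers are assumptions chosen for illustration.

```python
# Sketch: generating per-developer cloud-IDE workspaces as Deployment dicts.
# Image and resource figures are illustrative assumptions.

def workspace(dev: str) -> dict:
    """Build a Deployment for a single developer's isolated workspace."""
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": f"ws-{dev}", "namespace": f"dev-{dev}"},
        "spec": {
            "replicas": 1,
            "selector": {"matchLabels": {"workspace": dev}},
            "template": {
                "metadata": {"labels": {"workspace": dev}},
                "spec": {
                    "containers": [{
                        "name": "ide",
                        "image": "codercom/code-server:latest",
                        # cap each workspace so 25 of them fit on the cluster
                        "resources": {
                            "requests": {"cpu": "500m", "memory": "1Gi"},
                            "limits": {"cpu": "2", "memory": "4Gi"},
                        },
                    }]
                },
            },
        },
    }

# A 25-developer team, one workspace each:
workspaces = [workspace(f"dev{i:02d}") for i in range(25)]
```

Putting each workspace in its own namespace keeps quotas, network policy, and cleanup per-developer, which is one way to contain the at-scale complications the talk covers.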
So your company has finally decided to move to the Cloud Native ecosystem. You’ve landed on containerization as your first step. You heard that all you needed to do was containerize your first app and then push it to Kubernetes/OpenShift/Nomad, and the cost savings would just come. You’ve done this, and, well, things have not gone as planned. Some of the tech didn’t do what you expected, and wait, what do you mean our OpEx has gone up? Simply said: the promise of containerization or migrating to the Cloud Native ecosystem can be a lie if you don’t do your homework. Sadly, most companies don’t. In this talk, I’ll explain a few gotchas that a “few” enterprises, in the guise of AsgharLabs, hit moving towards the Cloud Native world, and hopefully, you’ll learn from their mistakes, so your trip down this path will be more comfortable and closer to the promise.
Outline
- Introductions
  - What is AsgharLabs: where they started and what they thought they needed to do
  - Where I came into the conversation to help AsgharLabs
- Questions you should ask after getting your app containerized
  - Where are the architectural advantages and disadvantages?
  - Are we doubling up on things? Isn’t automation good here?
  - Why is this thing so complicated now?
- Questions you should ask about the cultural shift that will happen
  - How the economics of the Cloud can differ from your Datacenter
  - What do you mean our support is now Stack Overflow?
  - What do you mean our goal is to move away from the CCB?
- Some tangible things you can start with to help become more successful
  - Build that pipeline extension
  - Collaborate with other teams
  - Visibility and Monitoring
- Conclusion and where you can go from here
If you feel like you’re spending a pretty penny on Kubernetes-related cloud costs these days… well, at least you’re not alone. A 2021 Cloud Native Computing Foundation report, the first of its kind, recently found that 68% of organizations are spending considerably more on Kubernetes than they were a year ago. Kubernetes spending has been skyrocketing, stemming from a combination of overprovisioning, low accountability, and a lack of visibility into ever-higher costs. But writing increasingly bigger checks isn’t the only option. By understanding different Kubernetes cost monitoring techniques and implementing best practices for allocation and efficiency, you can drastically rein in Kubernetes costs without a ton of effort (and improve relations between finance and engineering teams in the process). This DeveloperWeek session will guide you through the various Kubernetes cost monitoring models at your disposal, such as showback and chargeback, and help frame the decision about which model is best suited for your organization (these solutions aren’t one-size-fits-all). The talk will also present best practices for implementing a Kubernetes cost monitoring strategy that’ll tick all the boxes for cost transparency, visibility, and accuracy. Attendees will come away with a clear plan of attack for how they can champion better Kubernetes cost controls within their organizations.
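The core of the showback model the session mentions is simple proportional allocation: split a shared cluster bill across teams by their share of requested resources, and report (rather than invoice) each team's slice. The figures below are made-up illustrations, not data from the report or the talk.

```python
# Sketch of a "showback" allocation: divide a shared cluster bill across
# namespaces in proportion to the CPU each one requested. All dollar and
# core figures are hypothetical.

def showback(cluster_cost: float, cpu_requests: dict) -> dict:
    """Split cluster_cost across namespaces by share of CPU requested."""
    total = sum(cpu_requests.values())
    return {ns: round(cluster_cost * cores / total, 2)
            for ns, cores in cpu_requests.items()}

# Hypothetical month: a $9,000 cluster bill across three team namespaces.
bill = showback(9000.0, {"checkout": 30.0, "search": 15.0, "batch": 45.0})
```

A chargeback model uses the same math but goes one step further and actually bills each team, which raises accountability at the cost of more organizational friction; that trade-off is why the model choice isn't one-size-fits-all.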
Thursday, November 18, 2021
Serverless development introduces a new methodology for building truly “cloud native” applications and workloads. In monolithic and microservices architectures, it is simple to develop locally and then push the code to the CI/CD pipeline to be integrated and tested with the work of others. It is also relatively simple to write and run integration tests and to use a staging environment that mirrors the "real" environment. In some teams, developers do all these tasks themselves, but in many, dedicated DevOps and QA engineers continue the process after the developer checks in their code. Practicing serverless, the developer carries the entire responsibility for all of the above. In this talk, we’ll share the process and tools we used for CI/CD on our serverless-based application at Lumigo:
- Dev environment
- Testing methodology
- Deployment pipeline, combining Bash, the AWS CLI, and the Serverless Framework to create a seamless CI/CD pipeline
- Monitoring
Let's discuss good serverless practices.
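A pipeline in that spirit is just an ordered list of shell steps that stops on first failure. The sketch below models that shape; the stage names and commands (including the `sls` stage flags) are illustrative assumptions about a Serverless Framework setup, not Lumigo's actual pipeline.

```python
# Sketch: a serverless CI/CD pipeline as ordered shell steps, stopping at
# the first failure. Stage names and commands are illustrative assumptions.

PIPELINE = [
    ("unit-tests", "npm test"),
    ("deploy-ephemeral", "sls deploy --stage ci-${BUILD_ID}"),
    ("integration-tests", "npm run test:integration"),
    ("teardown-ephemeral", "sls remove --stage ci-${BUILD_ID}"),
    ("deploy-staging", "sls deploy --stage staging"),
]

def run(pipeline, execute):
    """Run stages in order; report the first stage whose command fails."""
    for name, cmd in pipeline:
        if not execute(cmd):
            return f"failed at {name}"
    return "ok"

# With a stubbed executor standing in for subprocess calls, every stage passes:
result = run(PIPELINE, execute=lambda cmd: True)
```

Deploying each CI run to an ephemeral stage and tearing it down afterwards is what lets integration tests hit the "real" cloud environment without a long-lived shared staging bottleneck.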
The IT world has evolved from the stateless, 12-factor, simple “Hello World!” app on Kubernetes to refactoring more complex data-driven apps and incorporating newer paradigms such as microservices and service mesh. However, Dev, DevOps, and Ops for these distributed teams and systems remain a major ongoing challenge.
How are teams and technologies evolving to deal with this myriad of challenges, and what steps are they taking to mitigate some of the issues? In this session we will start by identifying these challenges and how to solve them with a comprehensive practical example based around the open-source k8ssandra.io, which relies on the cass-operator and is evolving to provide multi-data-center support.
After attending this session, attendees (Dev, DevOps, and Ops audiences alike) will get a holistic perspective on the day-to-day challenges of the cloud-native approach and gain a better understanding of data durability, routine backup and restore, observability, HA, and DR. Dissecting the example with a step-by-step approach will enable attendees to walk away with practical tips for a robust architecture and how to operationalize it.
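With an operator-managed cluster like this, routine backups become declarative: you create a backup custom resource and the operator does the work. The sketch below shows the general shape only; the API group, kind, and field names are assumptions for illustration, so check the k8ssandra.io documentation for the CRDs your version actually ships.

```python
# Sketch: a backup custom resource for an operator-managed Cassandra
# cluster. The apiVersion, kind, and spec fields below are assumed for
# illustration; they are not taken from the k8ssandra docs.

backup = {
    "apiVersion": "medusa.k8ssandra.io/v1alpha1",  # assumed group/version
    "kind": "MedusaBackupJob",                     # assumed kind
    "metadata": {"name": "nightly-2021-11-18"},
    "spec": {"cassandraDatacenter": "dc1"},        # target DC to snapshot
}
```

Scheduling resources like this nightly, and periodically exercising the matching restore path, is what turns "we have snapshots" into actual data durability and a workable DR story.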