Containers & Kubernetes
Wednesday, February 17, 2021
If you have been watching the cloud native ecosystem develop, you’re probably getting the gist of why people are migrating to it.
Cloud native technologies promise an unparalleled productivity and reliability jump for application development and operations. But with a multitude of options for cloud native newcomers, it can be challenging to know where to start.
Join this session to see how you can set up a cloud native development and operations CI/CD environment and pipelines with Kubernetes.
We will provide template GitHub projects and walk you through:
1. How to establish Kubernetes as your infrastructure for a portable and truly cloud native environment for optimal productivity and cost.
2. Using Kublr and an infrastructure-and-build-environment-as-code approach for fast, reliable, and inexpensive setup of a production-ready DevOps environment, bringing together a combination of technologies: Kubernetes; AWS Mixed Instance Policies, Spot Instances, and availability zones; AWS EFS; Nexus; and Jenkins.
3. Using Jenkins and the Jenkins Kubernetes integration for a pipeline-as-code approach.
4. Best practices based on open source tools such as Nexus and Jenkins.
5. How to tackle build process dilemmas and difficulties including managing dependencies, hermetic builds and build scripts.
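The pipeline-as-code idea in steps 2-5 can be sketched language-agnostically. Below is a minimal Python sketch (stage names and behavior are hypothetical, not taken from the session's template projects) of the core pattern a Jenkinsfile expresses declaratively: stages defined as code, run in order, failing fast.

```python
# Minimal pipeline-as-code sketch: stages are plain functions run in
# order against a shared context; any exception aborts later stages.
# Stage names and outputs here are illustrative only.

def checkout(ctx):
    ctx["source"] = "repo@main"

def build(ctx):
    ctx["artifact"] = f"image built from {ctx['source']}"

def deploy(ctx):
    ctx["deployed"] = ctx["artifact"]

PIPELINE = [checkout, build, deploy]

def run_pipeline(stages):
    ctx = {}
    for stage in stages:
        stage(ctx)  # fail fast: an exception here stops the pipeline
    return ctx

result = run_pipeline(PIPELINE)
print(result["deployed"])
```

Because the pipeline is code, it can be version-controlled, reviewed, and reproduced exactly like the application it builds.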
No server configuration? No problem! With serverless & JAMStack becoming more and more popular, it’s like static sites never went out of fashion. Though, unlike the 90s, we don’t have to sacrifice style for performance. Let’s recreate a Japanese style photo booth with React & WebAssembly, and get some insight into how our users are interacting with our site so we know how to make improvements on future versions.
Thursday, February 18, 2021
Containerization gave applications portability from local dev to production, but in our pursuit of service-oriented design that portability has been lost. This talk will discuss how we can build upon containerization to make complex, cloud-native applications easier for developers to create and contribute to.
"Containers are the new ZIP format to distribute software" is a fitting description of today's development world. However, it's not always that easy, and this talk highlights the development of Elastic's container strategy over time:
- Docker images: A new distribution model.
- Helm Chart: Going from demo to production.
- Kubernetes Operator: Day two operations with upgrades and scaling.
Besides the strategy, we'll discuss specific technical details and hurdles that appeared during development, and why the future is a combination of Helm Chart and Operator.
GitOps uses Git as the “single source of truth” for declarative infrastructure and enables developers to manage infrastructure with the same Git-based workflows they use to manage a codebase. Having all configuration files version-controlled by Git has many advantages, but best practices for securely managing secrets with GitOps remain contested. Join us in this presentation about GitOps and secret management. Attendees will learn about the pros and cons of various approaches and why the Jenkins X project has chosen to standardize on Kubernetes External Secrets for secret management.
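The key idea behind GitOps-friendly secret management is that the repository stores a *reference* to a secret, never the value. The sketch below builds an ExternalSecret-style manifest as a plain dict; the field names approximate the Kubernetes External Secrets CRD mentioned above, but treat the exact schema as illustrative rather than authoritative.

```python
# Sketch: a GitOps repo commits a pointer to a secret in an external
# backend (e.g. a cloud secrets manager); a controller in the cluster
# resolves it into a native Secret. Field names are approximate.
import json

def external_secret(name, backend, entries):
    """Build an ExternalSecret-style manifest; entries maps
    secret key names to backend paths -- no values appear here."""
    return {
        "apiVersion": "kubernetes-client.io/v1",
        "kind": "ExternalSecret",
        "metadata": {"name": name},
        "spec": {
            "backendType": backend,
            # Only key *names* and backend paths are committed to Git.
            "data": [{"key": path, "name": key}
                     for key, path in entries.items()],
        },
    }

manifest = external_secret("db-credentials", "secretsManager",
                           {"password": "prod/db/password"})
print(json.dumps(manifest, indent=2))
```

The manifest is safe to version-control: an attacker reading the repo learns where a secret lives, but not what it is.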
How confident are you that your code—including any 3rd party code your team brought in—is running in a secure and compliant manner before you deploy to production?
Imagine this: your developers check in code for a new feature. It includes pieces of code your team wrote and pieces of code from a 3rd party. The code passes SAST & SCA and you deploy it to production. A day later, your production server is breached...and the attacker leveraged a bug in your code to escalate privileges and become root.
In today’s microservices-containers/Kubernetes/Docker-DevOps world, a static code scanner isn't sufficient. You need RUNTIME observability into the application’s security, privacy, and compliance. Your developers need to know if their code or a 3rd party’s code can cause issues at runtime.
This panel of RUNTIME observability and security developers and experts will discuss the what, why, and how of DeepFactor’s Continuous Observability platform, which:
- Automatically observes more than 170 parameters—across system call, library, network, web, and API behaviors in every thread of every process in every running container of your application—and detects security and compliance risks in your CI pipeline
- Detects insecure behaviors that only manifest at runtime and cannot be caught with code scanning or just looking at known CVE databases
- Reduces alert volume by prioritizing the findings of your SCA tools with runtime insights from observability tools
- Empowers Engineering leadership to accelerate productivity and decrease mean-time-to-remediate (MTTR) security and compliance risks pre-production as their teams ship secure releases on schedule
You’ll leave this session armed with the knowledge to immediately leverage continuous observability to consistently deploy apps with confidence.
Need a Kubernetes cluster for a short amount of time, but always forget to destroy it? Worry no more: in this session we'll show you how to create a self-destructing Kubernetes cluster.
During this talk, we'll showcase a number of technology principles: Infrastructure as Code, CI/CD, identity in the cloud and scheduling jobs on Kubernetes. We'll use Terraform, GitHub Actions and Azure Kubernetes as demo material, but the concepts of this talk translate to any technology platform.
By attending this talk you'll get a practical understanding of Infrastructure as Code, CI/CD, identity in the cloud and scheduling jobs on Kubernetes.
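The heart of a self-destructing cluster is a scheduled in-cluster job that checks whether the cluster has outlived its time-to-live and, if so, triggers the (Terraform) destroy step. Here is a minimal Python sketch of that check; the function name and TTL scheme are hypothetical, not taken from the session's demo.

```python
# Sketch of the self-destruction check a scheduled job could run:
# compare the cluster's creation time plus a TTL against "now" and
# decide whether to trigger teardown. Names are illustrative.
from datetime import datetime, timedelta, timezone

def should_destroy(created_at, ttl_hours, now=None):
    """True once the cluster has existed longer than its TTL."""
    now = now or datetime.now(timezone.utc)
    return now >= created_at + timedelta(hours=ttl_hours)

created = datetime(2021, 2, 17, 9, 0, tzinfo=timezone.utc)
print(should_destroy(created, ttl_hours=8,
                     now=datetime(2021, 2, 17, 18, 0,
                                  tzinfo=timezone.utc)))  # True
```

In practice the `created_at` timestamp would come from a cluster tag or label, and the destroy step would be a CI job (e.g. a GitHub Actions workflow) running `terraform destroy` with cloud identity credentials.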
Runtime security for containers, Kubernetes and cloud native isn't for the faint of heart. To confidently secure your applications, you need a recipe. And, much like the one grandma used for her consistently amazing chocolate chip cookies, the one you get from this session will guarantee your security success.
In this session, Scott Surovich and POP will share practical experience and excerpts from Scott's new book Kubernetes and Docker - An Enterprise Guide. They’ll share the key ingredients for tooling that provides an engine, ruleset, and outputs that fit real-world scenarios.
They will cover:
- An introduction to the CNCF open source project Falco for runtime security of applications and cloud native infrastructure
- Real-world use cases of Falco, with a short demo showing rulesets and outputs valid for your business
- A primer to how to contribute your own capabilities to Falco
- A kickass chocolate chip cookie recipe to wow your friends and family
Kubernetes has transformed the way in which we manage cloud environments and build cloud-native applications, but for many developers, a higher degree of transparency within Kubernetes is needed. This session will explore how data-centric security in Kubernetes can provide technical controls for data in use by an application. Today’s approach of network-level security through the use of service mesh relies on blind trust that data is used for its intended purpose.
Instead, in this session, we’ll explore the next frontier of Kubernetes security: one in which technical controls persist throughout the pipeline to protect data in-motion and in-use. We’ll discuss low-friction methods for enabling control at the data level, including how to enable non-humans to access data in specific ways and places and how to create a strong form of identity.
The evolution of architecture from monolithic to microservices-based has enabled organizations to meet the ever-evolving needs of their customers. The need for getting insights into these microservices has become critical for developers and operations teams alike. In this session, we will explore how observability plays a critical role in the microservices world. We will deep dive into distributed tracing to achieve full observability and monitoring for production environments. Finally, we will discuss the checklist that every DevOps person should look into for incorporating observability into their environment. The attendees can expect to learn about the three pillars of observability, details of distributed tracing and strategies for achieving observability and monitoring.
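The core mechanism of distributed tracing mentioned above is context propagation: every hop in a request creates a span that shares the request's trace ID and records its parent span. This Python sketch (a simplified model, not any particular tracing library's API) shows the idea:

```python
# Sketch of trace-context propagation: each service hop starts a span
# that inherits the trace_id and points back to its parent span, so a
# backend can reassemble the full request path across microservices.
import uuid

def start_span(name, parent=None):
    return {
        "name": name,
        "trace_id": parent["trace_id"] if parent else uuid.uuid4().hex,
        "span_id": uuid.uuid4().hex,
        "parent_id": parent["span_id"] if parent else None,
    }

root = start_span("checkout")               # entry-point service
child = start_span("payment", parent=root)  # downstream microservice
assert child["trace_id"] == root["trace_id"]  # one trace, many spans
```

Real systems (e.g. those implementing W3C Trace Context) propagate these IDs in request headers, which is what lets a tracing backend stitch spans from different services into one timeline.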
In this session, Lewis Marshall from Appvia will discuss best practice for teams managing the critical 'Day-2' phase of Kubernetes deployments, and the key areas they must have visibility and coverage on: production topology, updating, monitoring, and scaling.
Day-2 is the time between the initial deployment of a cluster for development and the point when Kubernetes clusters are hosting a production business service. It sits between designing the deployment (Day-1) and ongoing maintenance (Day-3).
Moving from Day-1 to Day-2 isn't as simple as it might seem. It's a critical period in which you bring technology out of the development staging phase and into production. Without a solid plan to overcome the traps along the way, you won't realise the potential benefits of Kubernetes; you'll struggle to scale your environments and put the entire infrastructure in danger.
Key points Lewis will cover include what best practice looks like and his own experience of deploying Kubernetes for major organisations (including in the UK's Home Office).
We discuss how the business service requirements are affected when running on Kubernetes:
Production topology - how isolation of workloads, clusters, and cloud resources (e.g. networking) affects all other Day-2 concerns.
Upgrading - the choices a team makes around upgrading are essential to ensuring there's no downtime for hosted applications within your cluster.
Monitoring - the business drivers around actual service availability and support, and how Kubernetes both helps and hinders observability.
Should application developers invest the time to learn Kubernetes? Do they even need to be aware of Kubernetes within their infrastructure? It’s become an increasingly popular question that DevOps, platform engineering, and dev teams are asking.
While Kubernetes delivers robust capabilities – far more than most developers need – developers don’t really care about Kubernetes itself. What they care about is delivering their product to users. The arguments in favor of developers learning Kubernetes often revolve around the fact that it’s an incredible tool and well-liked by DevOps. For most developers, these arguments are like being told how fulfilling it is to make your own pizza from scratch, when you have a lot of work to do and would much rather simply order one. Developers appreciate Kubernetes only to the degree that it lets them deliver faster. They want to eat and not have to cook.
Things can go wrong in the kitchen as well: small changes to Kubernetes have outsized ripple effects. Even experienced developers may find that operators are reluctant to grant them cluster access. The complexity of Kubernetes makes it easy for developers to mess up in unpredictable ways. Because of this, many organizations make years of investments attempting to build a layer between their applications and Kubernetes, in order to abstract Kubernetes away from developers and allow them to simply push code.
Kubernetes needs to transform into a user-friendly application management framework, in the same way Docker turned complex tools such as cgroups into user-friendly products. This session’s audience will learn strategies and tactics for transforming Kubernetes into that user-friendly solution, enabling developers to focus on application code and DevOps and platform engineers to keep control of their clusters and infrastructure.
Takeaways from this presentation will include:
1. How to stop prioritizing Kubernetes, but instead focus more on the applications and the teams that develop and control them.
2. How to stop worrying about ConfigMaps, ingress rules, PVs, PVCs, and other complications in your day-to-day activities.
3. How to enable DevOps and platform engineering teams to move Kubernetes across clusters or even providers without impacting how applications are deployed, operated, and controlled.
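The "layer between applications and Kubernetes" described earlier usually boils down to expansion: developers describe an app in a few fields, and the platform generates the Kubernetes objects. A minimal Python sketch of that expansion (field choices are illustrative, not any real platform's schema):

```python
# Sketch of a platform abstraction layer: a small app spec written by
# a developer is expanded into the Deployment and Service objects that
# DevOps/platform teams actually manage. Fields are illustrative.
def expand(app):
    deployment = {
        "kind": "Deployment",
        "metadata": {"name": app["name"]},
        "spec": {
            "replicas": app.get("replicas", 1),
            "template": {"spec": {"containers": [
                {"name": app["name"], "image": app["image"]}]}},
        },
    }
    service = {
        "kind": "Service",
        "metadata": {"name": app["name"]},
        "spec": {"ports": [{"port": app["port"]}]},
    }
    return [deployment, service]

objects = expand({"name": "shop", "image": "shop:1.2", "port": 8080})
print([o["kind"] for o in objects])  # ['Deployment', 'Service']
```

Because developers never touch the generated objects directly, the platform team can change how the expansion works (or even which cluster it targets) without changing how applications are described.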
Kubernetes is much more than a runtime platform for Docker containers. Through its API you can not only create custom clients but also extend Kubernetes itself. Those custom controllers are called Operators and work with application-specific custom resource definitions.
Not only can you write those Kubernetes Operators in Go, you can also do it in Java. In this talk, you will be guided through setting up, and making your first explorations of, the Kubernetes API from a plain Java program. We explore the concepts of resource listeners, programmatic creation of deployments and services, and how this can be used for your custom requirements.
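The resource-listener pattern the talk builds in Java is sketched below in Python for brevity: a client consumes a stream of watch events and dispatches ADDED/MODIFIED/DELETED to handlers, which is the shape the Kubernetes watch API exposes (the event payloads here are simplified stand-ins).

```python
# Sketch of a Kubernetes-style resource listener: iterate over watch
# events and dispatch each event type to a registered handler, as a
# controller's informer loop does. Event objects are simplified.
def watch(events, handlers):
    for event in events:
        handler = handlers.get(event["type"])
        if handler:
            handler(event["object"])

seen = []
watch(
    [{"type": "ADDED", "object": "pod-a"},
     {"type": "DELETED", "object": "pod-a"}],
    {"ADDED": lambda obj: seen.append(("add", obj)),
     "DELETED": lambda obj: seen.append(("del", obj))},
)
print(seen)  # [('add', 'pod-a'), ('del', 'pod-a')]
```

A real Operator adds a reconcile step on top of this loop, comparing the observed object against the desired state declared in a custom resource.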
If you are building applications today, you are probably using either cloud or Kubernetes ... or both! As a result, we are entering an era in which we don’t have to make complex architecture decisions by weighing tradeoffs on scale, uptime, and usability. Patrick McFadin has been building and supporting at-scale applications for a long time and has seen all the evolution that has brought us to today. Engineer to engineer, Patrick wants to show you his journey into this world and what he’s been doing at DataStax and the Apache Cassandra project to help make it a reality. Here’s what he’ll cover:
-How you can shorten application development time and ship code fast
-The role of open source in this next wave of modern application development
-Ways to participate in this fast-moving community of data services
-How you can futureproof your code and be ready for the next big thing
As we get deeper into Kubernetes yaml files, we see a lot of duplication. Can we move to a higher level that eliminates this duplication? Let's look at Helm, a tool both for templating k8s yaml files and for installing complex infrastructure dependencies as one package. With Helm 3, we now have deeper integration and more security when working with Kubernetes. Join us on this path to a simpler, more repeatable, and more discoverable yaml experience.
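The duplication Helm removes can be seen with plain templating: one parameterized manifest replaces many near-identical YAML files. Helm's real templates use Go templating and values files; Python's `string.Template` stands in here purely to show the idea.

```python
# Sketch of manifest templating: a single parameterized template is
# rendered with different value sets, replacing copy-pasted YAML.
# (Helm uses Go templates; string.Template is a stand-in.)
from string import Template

MANIFEST = Template(
    "kind: Deployment\n"
    "metadata:\n"
    "  name: $name\n"
    "spec:\n"
    "  replicas: $replicas\n"
)

for values in ({"name": "api", "replicas": 3},
               {"name": "worker", "replicas": 5}):
    print(MANIFEST.substitute(values))
```

Helm goes further than substitution: a chart bundles templates, default values, and dependencies into one installable, versioned package.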
Service Fabric is the foundational technology introduced by Microsoft Azure to power its large-scale Azure services.
In this session, you’ll get an overview of containers such as Docker, followed by an overview of Service Fabric and how it differs from Kubernetes as a way to orchestrate microservices. You’ll learn how to develop a microservices application and how to deploy those services to Service Fabric clusters and the new serverless Service Fabric Mesh service. We’ll dive into the platform and programming model advantages, including stateful services and actors for low-latency data processing, and more.
You will learn:
Overview of containers
Overview of Service Fabric
Difference between Kubernetes and Service Fabric
Setting up an environment to start developing a microservices application with Service Fabric
Kubernetes brings new ideas of how to organize the caching layer for your applications. You can still use the old-but-good client-server topology, but now there is much more than that. This session will start with the known distributed caching topologies: embedded, client-server, and cloud. Then, I'll present Kubernetes-only caching strategies, including:
- Sidecar Caching
- Reverse Proxy Caching with Nginx
- Reverse Proxy Sidecar Caching with Hazelcast
- Envoy-level caching with Service Mesh
In this session you'll see:
- A walk-through of all caching topologies you can use in Kubernetes
- Pros and Cons of each solution
- The future of caching in container-based environments
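Of the topologies listed, sidecar caching is the most Kubernetes-specific: the application always talks to a cache process running next to it in the same pod, and the cache falls back to the origin on a miss. A minimal Python sketch of that behavior (class and method names are hypothetical):

```python
# Sketch of the sidecar caching topology: the app calls the local
# sidecar; the sidecar serves hits from memory and fetches misses
# from the slow origin (e.g. a database or remote service).
class SidecarCache:
    def __init__(self, origin):
        self.origin = origin          # callable standing in for the backend
        self.store = {}
        self.hits = self.misses = 0

    def get(self, key):
        if key in self.store:
            self.hits += 1
        else:
            self.misses += 1
            self.store[key] = self.origin(key)
        return self.store[key]

cache = SidecarCache(origin=lambda k: f"value-for-{k}")
cache.get("user:1")
cache.get("user:1")
print(cache.hits, cache.misses)  # 1 1
```

The tradeoff versus a shared client-server cache: each pod gets very low latency, but cached data is duplicated per pod and is lost when the pod restarts.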
You’ve developed a fabulous application in a container/Kubernetes Continuous Integration (CI) pipeline. The application works like it should, and the static scans look secure, but, is it actually operating securely? Are any 3rd party components you’ve integrated doing something they shouldn’t be doing? How do you know?
To be confident about the behavior of your app, active inspection of running binaries within a container, utilizing live telemetry is key. Pre-production observability enables this by filling the gaps that static code (SAST) and dynamic external inspections (DAST) don’t cover.
During this technical session, you’ll see pre-production observability in action and the benefits the solution delivers to developers and their teams. Mike Larkin, CTO at DeepFactor, and John Day, Customer Success Engineer at DeepFactor, will discuss a straightforward method to extract metric data from any container with minimal overhead. This information can then be processed to surface issues that may affect the behavior of your container without your knowledge, be it security, performance, or operational intent. You’ll leave this session armed with the knowledge to immediately leverage pre-production observability to consistently deploy apps with confidence.
Friday, February 19, 2021
OPEN TALK: WordPress as a Service: Get It Done in Less Than 30 Minutes with Terraform & K8s on IONOS Cloud
Let’s assume that we’d like to become the next big internet tycoon by offering an awesome, high-end managed WordPress service to the world, including monitoring, a completely dedicated database, backups, restores, and the whole nine yards. In addition to our great idea, let’s also set some…less realistic goals:
– We need a Proof of Concept up and running in less than 30 minutes
– We need to reach our goal(s) without having to deep-dive into WordPress specifics
– A new dedicated WordPress site needs to be deployable with a single command or API call.
– Upon deployment, each WordPress website needs to be secured through HTTPS
This talk will give you an overview of how to start a project like this by leveraging the power of Kubernetes Operators running on the IONOS Cloud infrastructure.
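A Kubernetes Operator boils down to a reconcile loop: compare the desired state (here, the WordPress sites that should exist) against the actual state, and act on the difference. A minimal Python sketch of that core step (site names and the function are hypothetical, not the IONOS implementation):

```python
# Sketch of an Operator's reconcile step: diff desired sites against
# existing ones and report what still needs to be created. A real
# Operator would then provision the database, pods, and TLS for each.
def reconcile(desired, actual):
    """Return the site names that still need to be created."""
    return sorted(set(desired) - set(actual))

to_create = reconcile(
    desired=["blog-alice", "blog-bob"],
    actual=["blog-alice"],
)
print(to_create)  # ['blog-bob']
```

Running this loop continuously is what makes "deployable with a single API call" possible: the caller only edits the desired state, and the Operator converges reality toward it.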
Join our resident Kubernetes and modern apps experts in a discussion of the challenges of Kubernetes traffic management in today’s technology landscape. While Kubernetes Ingress gets most of the attention, how you handle egress traffic is just as important. Egress isn’t just about traffic leaving a cluster, either, but also concerns traffic among managed and unmanaged services within the cluster. We demo a solution using NGINX Service Mesh and NGINX Ingress Controller to control egress from the cluster and between NGINX Service Mesh and unmanaged services. Whether you’re new to modern application architectures, or looking to improve your current microservices deployment, this webinar is for you.
Join this webinar to learn:
* Solutions to common challenges when managing traffic in Kubernetes
* How to control both ingress and egress in a single configuration
* Which solutions from NGINX can best serve your needs, depending on your requirements
* About NGINX Service Mesh and NGINX Ingress Controller with live demos
Cloud and Kubernetes adoption led to greater container usage in 2020. Staying up-to-date with the latest trends in security and monitoring for Kubernetes and container environments is more important than ever.
In this session, you’ll hear real-world examples of nearly one billion unique containers deployed in today’s modern global enterprises. You’ll walk away with new knowledge about:
- How organizations are dealing with container security concerns
- Interesting shifts in runtime and registry usage
- Usage trends that impact container security
- Practices others are using to run containers with greater confidence
- Trends in lifespan and density as container usage matures
Kubernetes brings promises of application modernization and agile application development and deployment, but it also brings new challenges in managing these environments. Nowhere are these challenges more of an issue than with traffic management and security of microservice environments, especially those that require high volume, high reliability, and high security. But the good news is that you’re not alone: these challenges impact everyone moving to Kubernetes, and there are solutions to make traffic management and security in Kubernetes microservice environments easier. Come join NGINX Service Mesh engineers to talk about NGINX Service Mesh and tools available for your microservice traffic management challenges. This session will be an engineering round-table where we'll have an open discussion about service mesh use cases, demos, and a Q&A with the engineers. Join us to talk all things mesh, the need for ingress/egress traffic management within a mesh, and some best practice guidelines for microservice traffic management.
K8ssandra has made it effortless to deploy Apache Cassandra on Kubernetes. Kubernetes was long a platform primarily for stateless applications, but modern tooling and APIs have facilitated the move of databases to this pervasive platform. Join Chris Bradford in deploying the K8ssandra stack to Kubernetes. Learn how it packages a production Cassandra deployment with supporting tooling alongside Stargate, a next-generation data gateway. We will explore everything from the management interfaces leveraged by DevOps teams to performant, highly available REST, Graph, and Document APIs for developers.