Tuesday, September 29, 2020
In this session participants will learn how to create real-time cloud-connected augmented reality (AR) apps in less than 15 minutes.
Participants will see how to build native apps for Android with ARCore, iOS with ARKit, HoloLens and Magic Leap with Unity, and web browsers with WebXR and AR.js, all connected to the same cloud. Participants will learn how to change AR content across all platforms simultaneously through the cloud without rebuilding the apps. Participants will see how a small change in one cloud-connected app can send updates to all other apps across the different AR SDKs and platforms.
Platforms for experimentation include: Google ARCore, WebXR, Vuforia, Unity-based apps, and more. Participants are encouraged to either bring a laptop with Unity installed or an ARCore-enabled mobile device, or they can follow along with the live demonstration.
Kubernetes is today’s hottest way to deploy and manage contemporary applications in the cloud, but it also offers the essential foundation for repeatable and reliable machine learning workflows. In this session, Sophie will demonstrate open source tools that build on Kubernetes to facilitate solving data science workflow challenges for practitioners, without forcing data scientists to care about the primitive details of their infrastructure.
You’ll leave this talk with an understanding of how Kubernetes supports data scientists at each step of the machine learning workflow. You’ll be introduced to high-level tools that effortlessly provision custom research environments, publish reproducible notebooks, operationalize models and pipelines as services, and detect data drift automatically.
Microservices systems running on Kubernetes and containerized environments are complex and hard to monitor and troubleshoot. Join us as we discuss the growth in adoption of K8s and containers and the challenges that they have presented us all, focusing on why standard metrics alone are leaving gaps in your observability strategy. We’ll then deep dive into how distributed tracing fits into this, as well as how we at Epsagon are approaching this in modern environments.
Developers are moving away from large monolithic apps in favor of small, focused microservices that speed up implementation and improve resiliency. Microservices and containers changed application design and deployment patterns, but along with them brought challenges like service discovery, routing, failure handling, security and visibility to microservices.
“Service mesh” architecture was born to handle these features. Applications are getting decoupled internally as microservices, and the responsibility of maintaining coupling between these microservices is passed to the service mesh. Istio, a joint collaboration between IBM, Google, and Lyft, provides an easy way to create a service mesh that will manage many of these complex tasks automatically, without the need to modify the microservices themselves.
In this talk we will see how Istio can be used to manage traffic, gather metrics, and enforce policies in a demo application running microservices. We will learn why Kubernetes needs a service mesh and how Istio improves the management of Kubernetes workloads.
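To give a concrete picture of the traffic management the demo covers, here is a minimal sketch of an Istio VirtualService that splits traffic between two versions of a service. The service and subset names are illustrative, not from the demo itself:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews            # illustrative service name
spec:
  hosts:
    - reviews
  http:
    - route:
        - destination:
            host: reviews
            subset: v1     # subsets are defined in a DestinationRule
          weight: 90
        - destination:
            host: reviews
            subset: v2
          weight: 10       # canary 10% of traffic to v2
```

Because the mesh's sidecar proxies enforce this routing, neither the calling nor the called microservice needs any code change to shift traffic.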
Cloud native application development has now become the norm. As a result, every developer is becoming a cloud native app developer. They are becoming masters of not only programming languages, but also distributed primitives/abstractions offered by containers, Kubernetes, and serverless frameworks. Effectively leveraging platforms such as Kubernetes can be a daunting task for developers due to many reasons, including managing hundreds of lines of YAML configuration.
Programming languages, frameworks, and tools that abstract away the complexities faced by developers are on the rise. They are collectively called the languages of infrastructure. In this session, I will explore two such tools -- Pulumi and Ballerina. Let’s explore how these languages of infrastructure hide the complexities of cloud native platforms and boost developer productivity.
Kubernetes makes it easy to run distributed applications, even those that manage persistent state, within the confines of a single cluster. Running the same applications in a multi-region or multi-cloud fashion across multiple Kubernetes clusters, however, is considerably more difficult due to the networking and service discovery problems involved.
In this talk, Chris will walk through his team’s experience of running a distributed database across Kubernetes clusters in different regions and their attempts to make the process repeatable on different cloud providers and on-prem environments. He’ll cover common problems they encountered, solutions they’ve tried, how they’re running things today, and the future improvements he’s most excited about from projects like Istio.
Even as self-professed feature flag experts, when we started out we quickly found ourselves with code that was hard to read, hard to reason about, and hard to manage.
After exploring several strategies, we found a pattern that aligns feature flags with larger code units (components, reducers, actions) rather than with individual lines of code. This alleviates much of the pain that a naïve application of feature flags to a code base can introduce, and ultimately restores the development velocity that is required for any technology company.
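A minimal sketch of this component-level pattern, in Python with made-up names: instead of scattering `if flag:` checks through shared code, each variant is a complete unit, and the flag is resolved once at wiring time to select which unit gets used.

```python
# Flag-at-the-component-level pattern (all names are illustrative).
FLAGS = {"new_checkout": True}

def legacy_checkout(cart):
    # Old behavior lives entirely in its own unit.
    return {"total": sum(cart), "flow": "legacy"}

def new_checkout(cart):
    # New behavior lives entirely in its own unit, too.
    return {"total": round(sum(cart), 2), "flow": "new"}

def make_checkout(flags):
    """Resolve the flag once, at wiring time, not at every call site."""
    return new_checkout if flags.get("new_checkout") else legacy_checkout

checkout = make_checkout(FLAGS)
print(checkout([10, 5])["flow"])  # prints "new"
```

Removing the flag later means deleting one unit and the selector line, rather than hunting down conditionals sprinkled across the code base.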
Interested in Containers and want to learn more? This talk will introduce you to the basics of why containers are important, how they work, and how Kubernetes is making containers the DevOps way of the future - through fun comic illustrations and analogies! You'll learn and retain the key points you'll need whether you're trying to convince your leadership that container adoption is right for the company, or talking to that person a few cubes down who just can't seem to stop talking about containers!
As vital as open source is to building software in today’s world, it’s a mistake to think of it as a silver bullet. The ability to change the world with it is clear - but so is the significant room for error, when not properly managed.
A shifting battlefield of attacks based on OSS consumption has emerged. Five years ago, large and small enterprises alike witnessed the first prominent Apache Struts vulnerability. In this case, Apache responsibly and publicly disclosed the vulnerability at the same time they offered a new version to fix the vulnerability. Despite Apache doing their best to alert the public and prevent attacks from happening — many organizations were either not listening, or did not act in a timely fashion — and, therefore, exploits in the wild were widespread. Simply stated, hackers profit handsomely when companies are asleep at the wheel and fail to react in a timely fashion to public vulnerability disclosures.
Since that initial Struts vulnerability in 2013, the community has witnessed Shellshock, Heartbleed, Commons Collections and others, including the 2017 attack on Equifax, all of which followed the same pattern of widespread exploit post-disclosure.
Shift forward to today - and hackers are now creating their own opportunities to attack.
This new form of attack on our software supply chains, where OSS project credentials are compromised and malicious code is intentionally injected into open source libraries, allows hackers to poison the well. The vulnerable code is then downloaded repeatedly by millions of software developers who unwittingly pollute their applications to the direct benefit of bad actors. In the past 24 months, no less than 17 real-world examples of this attack pattern have been documented.
It’s become clear that we are in the middle of a systematic attack on the social trust and infrastructure used to distribute open source. In just a few years, we’ve gone from attacks on pre-existing vulnerabilities occurring months after a disclosure down to two days - and now, we are at the point where attackers are directly hijacking publisher credentials and distributing malicious components.
Open source developers are the front line of the new battle. Attackers have recognized the power of open source and are seeking to use that against the industry. We must not let them ruin the reputation of the things we’ve built. Or worse, the entire open source ecosystem.
It's 2020, and Cloud-Native has been widely accepted as industry best-practice. However, in reality many companies still struggle with their Cloud Migration and with creating a DevOps environment where talents can unleash their full potential. We will share some smart approaches around tagging and managing lifecycles in the Public Cloud, as well as around automatic documentation of Microservices & dependencies that can help to reconcile governance needs with a great workplace.
As developers continue to adopt containers and orchestrate their workloads with Kubernetes, one question keeps popping up: how do we deal with stateful workloads like databases in Kubernetes?
You will learn:
- Why cloud native storage is important
- The 8 principles for cloud native storage in Kubernetes
- A high level overview of the cloud native storage landscape
- How to run stateful workloads in Kubernetes, with examples for popular databases
You will walk away with an understanding of where storage fits in the Kubernetes stack along with practical guidance on how to run, automate and orchestrate stateful workloads.
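As an illustration of where storage fits in the stack, here is a minimal sketch of a Kubernetes StatefulSet for a database; the names and sizes are made up for the example:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgres              # illustrative name
spec:
  serviceName: postgres
  replicas: 3
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:12
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:        # each replica gets its own PersistentVolumeClaim
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi
```

Unlike a Deployment, a StatefulSet gives each replica a stable identity and its own persistent volume, which is what makes stateful workloads like databases practical in Kubernetes.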
Microservices, APIs and containers are more than just the latest industry buzzwords—they’re becoming essentials of the modern tech stack, with Kubernetes gaining significant momentum as a popular choice in the container orchestration market. However, the reality of orchestration can be daunting, and with all this noise, it can also be challenging to differentiate between hype and practicality.
Dennis Kelly, a Kubernetes engineer at Packet, will demonstrate a lightweight but enterprise-ready solution for container orchestration leveraging a number of tools, including Terraform, Consul, Vault and Nomad. He will also share learnings on how this solution helped his team increase management efficiencies while decreasing the cost of container orchestration.
Attendees will leave with a clear, easy-to-implement stack that will improve cloud resource usage, save money, increase visibility and simplify application management.
Data is king. Data analytics are the be-all and end-all... but cold hard numbers cannot tell the entire story of your developer community. In this short presentation, we will challenge the current worship of numbers and discuss why you must never forget the human factors and emotion emanating from your development team. Join Jesse Davis, Devada’s Chief Technologist, as he covers diversity and inclusion in development circles, what real human investment looks like for developer communities, and why ROI matters but the human element matters more. He will outline how carrying this ameliorating message of thoughts and feelings forward to an often-disbelieving audience within the company can reap valuable rewards.
The cornerstone of DevOps is Automation. As the capabilities of cloud options continue to diversify with such tools as one-click deployments and containers, it becomes easy to visualize the idea of NoOps replacing DevOps.
In this talk we will:
- Look at the failings of early cloud deployment options
- See where PaaS and other early cloud schemes took advantage of the future of DevOps
- Look at various container solutions and examine how they can automate a deployment solution for developers
In addition, we will look at the ideas behind DevOps and advocate for NoOps as its ultimate goal.
The dynamic development of Cloud Computing and novel Cloud models like serverless creates new challenges for Cloud deployment. This presentation describes how to implement Multi-Cloud native strategies using an advanced platform that allows for Cloud-agnostic Multi-Cloud deployment of serverless applications.
As a DevOps professional, it can be daunting to select a cloud provider when you’re tasked with building out your company’s infrastructure from scratch. There may seem to be some “obvious choices” out there, but this isn’t necessarily the case at a startup where you are the solo or duo DevOps team. You could encounter several challenges along the way, including (but certainly not limited to!) orchestration in the cloud, future-proofing a fresh, rapidly growing environment, security compliance requirements from SOC 2, RBAC implementations and, of course, incorporating “ease of use” into every process along the way.
At Cmd, we are able to deploy an immutable/mutable hybrid infrastructure, while at the same time controlling user access with ease! We hope our contributions at this talk help the DevOps community understand why we looked past some choices when selecting our tech stack, the ways in which we approached the challenges that come from going off the beaten path and how we resolved these challenges at Cmd to create a more efficient operation that adds value across the organization.
Services are the backbone of our systems. They are the pieces that make up our businesses—whether they are literal microservices or functional components of a traditional application, we can’t do the computer thing without services.
When it comes to a service in your company or organization, who’s responsible for it? The cast of characters involved in the lifecycle of a service is more than just software engineers. It can include program managers, product owners, sustainability teams (SREs/operations engineers), and business stakeholders, just to name a few.
Topics covered in this workshop include:
- Defining what a service means to you and your organization
- Roles in service ownership
- What are you observing about your service?
- How you want a team to respond to a service outage or incident
- Managing your service in production
- Tuning your service
- Understanding how the service impacts the business
Managing cloud security risk is a requirement. Why wait to tackle container and Kubernetes security? Many teams worry security will slow down releases. Studies show that top performing DevOps teams integrate more functions into their tool chain, including monitoring and security. Learn how other leading companies are reducing risk with a secure DevOps approach.
Join us to understand:
- Best practices to get started quickly and integrate security into existing toolchains
- Key areas where you should integrate security: CI/CD pipelines, registries, workload admission, service communication
- Ways to take advantage of Kubernetes native controls and open source projects such as Falco cloud native runtime security project and Open Policy Agent (OPA)
- How others are monitoring cloud security across multiple clouds and services including for Fargate, EKS, OpenShift, and AKS
- Approaches to managing incident response and forensics after the container is gone
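For the Falco point above, runtime detection is driven by declarative rules. A sketch adapted from the style of Falco's default ruleset (the exact rule text here is illustrative, not the shipped rule):

```yaml
- rule: Terminal shell in container      # illustrative, Falco-style rule
  desc: Detect an interactive shell spawned inside a container
  condition: >
    spawned_process and container and shell_procs and proc.tty != 0
  output: >
    Shell spawned in a container
    (user=%user.name container=%container.name command=%proc.cmdline)
  priority: WARNING
```

Rules like this fire on kernel-level events at runtime, which is what lets teams catch behavior that static image scanning cannot.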
Google did it again. They gave us another buzzword that is changing how organizations operate. Site Reliability Engineering (SRE) is both a strategy and, in some organizations, a brand-new function that helps complete the DevOps lifecycle. In this meetup we will talk about how the SRE function is changing the way organizations think about on-call and completing the DevOps feedback loop from prod to plan. We will cover:
1.) The rise of the SRE role
2.) How the SRE function is not rooted in monitoring and ops anymore
3.) How high-performing teams are leveraging SRE to advance their delivery chain
4.) What modern on-call looks like
5.) Best practices and considerations
Cloud deployments offer the potential for almost infinite resources and flexible scalability. But there are so many options! It can be overwhelming to know which services are best for your use case. Building distributed systems which take advantage of in-memory computing only adds to the complexity. During this session we will introduce the Apache Ignite in-memory computing platform and identify key metrics that can help you maximize application performance on your existing cloud infrastructure. We will provide best practices on how best to structure and deploy in-memory applications on both public and hybrid clouds.
While many organizations are rolling out Kubernetes, breaking up their monoliths, and adopting DevOps practices with the hope of increasing developer velocity and improving reliability, it’s not enough just to put these tools in the hands of developers – you’ve got to incentivize developers to use them! Service ownership is a critical piece of DevOps: it provides these incentives by holding teams accountable for metrics like the performance and reliability of their services as well as by giving them the agency to improve those metrics.
In this talk, I’ll cover how distributed tracing can serve as the backbone of service ownership. For SRE teams that are setting standards for their organizations, it can help drive things like documentation, communication, on-call processes, and SLOs by providing a single source of truth for what’s happening across the entire application. It can also accelerate root cause analysis and make alerts more actionable by showing developers what’s changed – even if that change was a dozen services away. Throughout the talk, I’ll use examples drawn from more than a decade of experience with SRE teams in organizations big and small.
By the end of this talk, you’ll understand why service ownership is such an important piece of DevOps and how distributed tracing can drive that ownership in your organization and, by doing so, improve performance and reliability for your application.
Have you ever tried to explain K8s to a non-developer? It's complicated. It's abstract. And it's overwhelming. In this talk, Walt will show you how to explain these concepts of containerization, architecture, and cloud computing so that even your niece can understand them.
This is the story of a small infrastructure team’s quest to forever change how infrastructure is provisioned and managed across multiple cloud providers. We walk through all of the steps required to go from toiling over a pile of hand-crafted misery to a well-oiled continuous integration machine fueled by Terraform. Learn how we approached code design, workflows, prototyping, bake-offs, team collaboration, remote state, repository design, and more. Demos and sample code included!
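One of the foundational pieces mentioned above is remote state. A minimal sketch of what that looks like in Terraform, with entirely made-up bucket and table names:

```hcl
# Hypothetical remote-state configuration; all names are illustrative.
terraform {
  backend "s3" {
    bucket         = "example-terraform-state"
    key            = "network/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "example-terraform-locks"  # state locking
    encrypt        = true
  }
}
```

Storing state remotely with locking is what makes team collaboration and CI-driven `terraform apply` runs safe, since two runs can no longer clobber each other's state.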
Wednesday, September 30, 2020
I'll provide an overview of the OpenAPI Initiative and how we use OpenAPI specifications to automatically generate client bindings, models, and API documentation for Splunk's next-generation cloud native platform - Splunk Cloud Services. I'll guide you through the open-source tools available to let you do the same for your APIs.
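To illustrate the core idea of spec-driven generation (this is a toy sketch, not Splunk's tooling or a real generator), the snippet below derives client method stubs from the `operationId`s in a minimal OpenAPI document:

```python
# Toy illustration: map OpenAPI operationIds to (HTTP method, path) stubs.
# The spec content is made up for the example.
spec = {
    "openapi": "3.0.0",
    "paths": {
        "/datasets": {
            "get":  {"operationId": "listDatasets"},
            "post": {"operationId": "createDataset"},
        },
    },
}

def generate_client_stubs(spec):
    """Return {operationId: (HTTP method, path)} for every operation."""
    stubs = {}
    for path, ops in spec["paths"].items():
        for method, op in ops.items():
            stubs[op["operationId"]] = (method.upper(), path)
    return stubs

print(generate_client_stubs(spec))
```

Real generators (e.g., OpenAPI Generator) do the same walk over the spec but emit full typed client code, models, and docs instead of a lookup table.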
Continuous Integration, Deployment and Delivery can be hard concepts for many people coming into agile. The act of continuously pushing new code into production can be scary. For some, the idea is so far out there that they think they could not possibly achieve it and never try. This workshop will focus on practicing building, testing, deploying, and managing self-service services in the cloud.
Your DevOps teams need to embed security as they ramp containers and Kubernetes in production. As cloud providers release new services constantly, you not only need visibility inside containers, but also the cloud infrastructure, applications and services used by your teams. With a secure DevOps workflow, your team can spend more time developing apps and less time reacting to issues.
Running secure containers requires that security and DevOps work better together. Join us to understand how to:
- Automate scanning including for Fargate workloads within CI/CD pipelines (Jenkins, Gitlab) and registries (ECR, GCR)
- Detect runtime threats with open-source tools like Falco and continuously monitor your cloud using AWS CloudTrail
- Prevent threats at runtime using Kubernetes PodSecurityPolicies without impacting performance
- Conduct incident response and forensics, even after the container is gone
- Continuously validate compliance against PCI, NIST, CIS, etc.
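As a reference point for the PodSecurityPolicy item above, here is a minimal sketch of a restrictive policy (the name and volume list are illustrative):

```yaml
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: restricted              # illustrative name
spec:
  privileged: false             # disallow privileged containers
  allowPrivilegeEscalation: false
  requiredDropCapabilities: ["ALL"]
  runAsUser:
    rule: MustRunAsNonRoot      # reject pods that run as root
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes: ["configMap", "secret", "emptyDir", "persistentVolumeClaim"]
```

Because admission control enforces the policy before a pod is scheduled, it adds no per-request overhead at runtime.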
Cloud misconfiguration is now the leading cause of cloud-based data breaches, typically due to a lack of secure cloud architecture practices. Because cloud infrastructure is 100% software, cloud security is a software engineering problem, not a traditional security analysis problem. In order to prevent data breaches in the cloud, we must address it with secure software architecture right from the start.
In this talk, Josh Stella will run a live simulation of an advanced cloud misconfiguration exploits to show a number of ways common cloud architectural anti-patterns create opportunities for hackers to gain entry to cloud environments, move laterally using tools like IAM services, and ultimately discover and breach data. Many of the misconfigurations exploited won’t be flagged by compliance scans and often aren’t considered risky by security teams.
At each step, Josh will share alternative approaches to architecting cloud infrastructure services to ensure our applications run efficiently while denying bad actors the tools and means to exploit them. Attendees will leave with actionable insights to evaluate their own cloud environment for misconfiguration vulnerabilities, how to address them, and how to bake secure cloud architecture approaches into software development.
OPEN TALK: Whose Fault Is It When Kubernetes Breaks? How to Build Trust and Resolve Incidents Faster With Distributed Tracing
So, you've gone "cloud native." You're running apps in containers, you're scheduling them with Kubernetes, and now you're trying to create a better experience for your team and for your customers. But when things break — and they often do — it can be challenging to understand how to resolve an incident quickly, or even which service owner is responsible. Distributed tracing brings the code execution to the forefront, and gives a new view focused on service performance.
Software systems age just like living entities; they have a period of robust life and then start to get slower and more fragile over time. At some point you have to take an objective look at your system and determine whether you can refactor it back into robustness, or whether your best option is to simply replace it. In this session we will go over the factors that play into this determination, including current architectural design and code smells, development team experience, SDLC processes, risk tolerance, and leadership. We will also work through several high-level approaches to managing both refactoring and replacement efforts.
git is one of those things that you either get or you don't... yet. Having used git almost exclusively since 2008, I will share a not-entirely-accurate but very useful way of making sense of it all!
By the end of this session, you will be rebasing your git experience onto master!
In this talk, I will discuss how GraphQL works really well for monoliths and what problems arise when taking it to microservice and serverless architectures.
I will then present a few patterns that can allow for extracting the benefit of GraphQL while keeping the benefits of a microservices/serverless architecture.
Specifically, I will talk about patterns for reading (querying across types and schemas distributed across microservices and data sources) and for writing (CQRS-style actions instead of CRUD-style "mutations"), and how to make them work with event-driven patterns. I will draw upon existing commentary (both successes and failures) in the GraphQL community, personal experiences, and our learnings from Hasura users.
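A plain-Python sketch of the "actions, not CRUD mutations" idea (no GraphQL library involved; all names are made up): a CRUD-style `updateOrder(id, {status: "CANCELLED"})` hides intent, while a named command captures it and emits an event that downstream services or subscriptions can react to.

```python
# CQRS-style command instead of a generic CRUD update (illustrative names).
events = []  # stand-in for an event bus / GraphQL subscription source

def cancel_order(orders, order_id, reason):
    """Named command: intent is explicit, and an event records what happened."""
    order = orders[order_id]
    order["status"] = "CANCELLED"
    events.append({"type": "OrderCancelled", "id": order_id, "reason": reason})
    return order

orders = {"o1": {"status": "PLACED"}}
cancel_order(orders, "o1", "customer request")
print(events[0]["type"])  # prints "OrderCancelled"
```

In a microservices setting, the emitted event is what lets other services stay consistent without the mutation needing to reach into their data stores.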
The race to out-innovate one’s competition has led to high performing organizations chasing increased deployment velocities but often ignoring the quality of parts being used to manufacture their applications. It was 2003 when Bruce Schneier (@schneierblog) penned, "Today there are no real consequences for having bad security, or having low-quality software of any kind. Even worse, the marketplace often rewards low quality. More precisely, it rewards additional features and timely release dates, even if they come at the expense of quality."
As nimble organizations deliver new innovations using DevOps principles, adversaries are also upping their game, something we saw in a series of high profile and devastating cyber attacks last year. Adversaries have the intent and ability to exploit security vulnerabilities in the software supply chain - and in some cases plant the vulnerabilities themselves. They have increased scale through automation and improved breach success through precision targeting. If the IT industry doesn’t fight back by doing the same - automating security directly in the DevOps pipeline, then we’ll never be able to win.
The industry currently lacks meaningful open source controls. The most common way to introduce controls is through the application of open source governance policies across a software supply chain. But when over 5,500 IT professionals were asked if their organisation employed open source governance policies, just 63% responded positively. That percentage degraded further when participants were asked if they followed the policy. Of those without a DevOps practice, just 25% said they both had an OSS governance policy and adhered to it. Effectively, 75% of those who don't deploy a DevOps strategy either ignore policies or don't have one at all.
Further evidence of the lack of cybersecurity hygiene was revealed by 67% of survey participants who admitted to not having meaningful controls over what open source components are used in their applications.
Modern software supply chains can only operate safely when protected with automated security and quality assessments of these upstream open source components and containers.
This sentiment was echoed in Forrester’s Top Recommendations For Your Security Program (March 2018), where analysts advised, "Automate faster than evil does. If you thought your security team struggled with alert volume — and alert fatigue — manual methods to detect, investigate, and respond to threats will guarantee failure in the near future."
In the past, developers simply wrote their program, built it, and ran it. Today, developers also need to think about the various ways of running it: as a binary on a (most likely virtual) machine, packaged into a container, as part of a bigger deployment (Kubernetes), or deployed into a serverless environment or a service mesh. However, these deployment options are not part of the programming experience for a developer. The developer has to write code in a certain way to work well in a given execution environment, and leaving this concern out of the programming problem isn't good.
Ballerina is an open-source programming language with built-in cloud technology integration that helps developers write applications that just work in Kubernetes. Its compiler can be extended to read annotations defined in the source code and generate artifacts to deploy your code into different clouds. These artifacts can be Dockerfiles, Docker images, Kubernetes YAML files, or serverless functions. This session will demonstrate how you can move from code to cloud by using the built-in Kubernetes annotation support in Ballerina.
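A rough sketch of what those annotations look like, loosely following Ballerina 1.x examples (the image name and service details are made up, and the exact syntax may differ by Ballerina version):

```ballerina
import ballerina/http;
import ballerina/kubernetes;

// Annotations drive artifact generation at compile time: the compiler
// emits a Dockerfile, image, and Kubernetes YAML alongside the binary.
@kubernetes:Deployment {
    image: "example/hello:v1.0"   // illustrative image name
}
@kubernetes:Service {
    serviceType: "NodePort"
}
service hello on new http:Listener(9090) {
    resource function sayHello(http:Caller caller, http:Request req) returns error? {
        check caller->respond("Hello from Kubernetes!");
    }
}
```

The point is that the deployment description lives next to the code it deploys, so `ballerina build` can produce both the program and its cloud artifacts in one step.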