Tuesday, September 29, 2020
In this session participants will learn how to create real-time cloud-connected augmented reality (AR) apps in less than 15 minutes.
Participants will see how to build native apps for Android with ARCore, iOS with ARKit, HoloLens and Magic Leap with Unity, and web browsers with WebXR and AR.js, all connected to the same cloud. Participants will learn how to change AR content across all platforms simultaneously through the cloud without rebuilding the apps. Participants will see how a small change in one cloud-connected app can send updates to all other apps across the different AR SDKs and platforms.
Platforms for experimentation include Google ARCore, WebXR, Vuforia, Unity-based apps, and more. Participants are encouraged to bring their laptops with Unity installed or an ARCore-enabled mobile device, or simply follow along with the live demonstration.
Kubernetes is today’s hottest way to deploy and manage contemporary applications in the cloud, but it also offers the essential foundation for repeatable and reliable machine learning workflows. In this session, Sophie will demonstrate open source tools that build on Kubernetes to facilitate solving data science workflow challenges for practitioners, without forcing data scientists to care about the primitive details of their infrastructure.
You’ll leave this talk with an understanding of how Kubernetes supports data scientists at each step of the machine learning workflow. You’ll be introduced to high-level tools that effortlessly provision custom research environments, publish reproducible notebooks, operationalize models and pipelines as services, and detect data drift automatically.
Microservices systems running on Kubernetes and containerized environments are complex and hard to monitor and troubleshoot. Join us as we discuss the growth in adoption of K8s and containers and the challenges they have presented us all, focusing on why standard metrics alone leave gaps in your observability strategy. We’ll then deep dive into how distributed tracing fits into this, as well as how we at Epsagon are approaching this in modern environments.
Kubernetes makes it easy to run distributed applications, even those that manage persistent state, within the confines of a single cluster. Running the same applications in a multi-region or multi-cloud fashion across multiple Kubernetes clusters, however, is considerably more difficult due to the networking and service discovery problems involved.
In this talk, Chris will walk through his team’s experience of running a distributed database across Kubernetes clusters in different regions and their attempts to make the process repeatable on different cloud providers and on-prem environments. He’ll cover common problems they encountered, solutions they’ve tried, how they’re running things today, and the future improvements he’s most excited about from projects like Istio.
Even as self-professed feature flag experts, when we started out we quickly found ourselves with code that was hard to read, hard to reason about, and hard to manage.
After exploring several strategies, we found a pattern that aligns feature flags with larger code units (components, reducers, actions) rather than individual lines of code. This alleviates much of the pain that a naïve application of feature flags to a code base can introduce, and ultimately restores the development velocity required of any technology company.
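The pattern described above can be sketched in TypeScript. This is a hedged illustration only, not the speakers' actual code: the flag name `newCheckout`, the `withFlag` helper, and the reducer shapes are all hypothetical. The key idea is that the flag decision happens once, at the boundary of a whole unit, so the rest of the code base imports a single implementation and never branches on the flag inline.

```typescript
// Hypothetical sketch: gate entire code units behind a flag instead of
// scattering `if (flags.newCheckout)` conditionals through shared code.

type Reducer<S> = (state: S, action: { type: string }) => S;

// A flag store; in practice this would come from a feature-flag service.
const flags: Record<string, boolean> = { newCheckout: true };

// Choose between two complete implementations at the unit boundary.
function withFlag<T>(flag: string, enabled: T, fallback: T): T {
  return flags[flag] ? enabled : fallback;
}

interface CartState { total: number }

const legacyCartReducer: Reducer<CartState> = (state, action) =>
  action.type === "ADD_ITEM" ? { total: state.total + 1 } : state;

// The new behavior lives in its own complete unit, gated by the flag.
const newCartReducer: Reducer<CartState> = (state, action) => {
  switch (action.type) {
    case "ADD_ITEM": return { total: state.total + 1 };
    case "CLEAR_CART": return { total: 0 }; // behavior only the new unit has
    default: return state;
  }
};

// The rest of the app imports one reducer and never sees the flag.
export const cartReducer = withFlag("newCheckout", newCartReducer, legacyCartReducer);
```

When the flag is retired, the cleanup is a one-line change at the boundary rather than a hunt for conditionals across many files.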
As vital as open source is to building software in today’s world, it’s a mistake to think of it as a silver bullet. The ability to change the world with it is clear - but so is the significant room for error when not properly managed.
A shifting battlefield of attacks based on OSS consumption has emerged. Five years ago, large and small enterprises alike witnessed the first prominent Apache Struts vulnerability. Apache responsibly and publicly disclosed the vulnerability at the same time it released a fixed version. Despite Apache doing its best to alert the public and prevent attacks, many organizations either were not listening or did not act in time, and exploits in the wild were widespread. Simply stated, hackers profit handsomely when companies are asleep at the wheel and fail to react quickly to public vulnerability disclosures.
Since that initial Struts vulnerability in 2013, the community has witnessed Shellshock, Heartbleed, Commons Collections and others, including the 2017 attack on Equifax, all of which followed the same pattern of widespread exploitation post-disclosure.
Fast forward to today - and hackers are now creating their own opportunities to attack.
This new form of attack on our software supply chains, where OSS project credentials are compromised and malicious code is intentionally injected into open source libraries, allows hackers to poison the well. The vulnerable code is then downloaded repeatedly by millions of software developers who unwittingly pollute their applications to the direct benefit of bad actors. In the past 24 months, no fewer than 17 real-world examples of this attack pattern have been documented.
It’s become clear that we are in the middle of a systematic attack on the social trust and infrastructure used to distribute open source. In just a few years, we’ve gone from attacks on pre-existing vulnerabilities occurring months after a disclosure down to two days - and now, we are at the point where attackers are directly hijacking publisher credentials and distributing malicious components.
Open source developers are the front line of the new battle. Attackers have recognized the power of open source and are seeking to use that against the industry. We must not let them ruin the reputation of the things we’ve built. Or worse, the entire open source ecosystem.
It's 2020, and cloud native has been widely accepted as industry best practice. However, in reality many companies still struggle with their cloud migration and with creating a DevOps environment where talent can unleash its full potential. We will share some smart approaches to tagging and managing lifecycles in the public cloud, as well as to automatic documentation of microservices and their dependencies, that can help reconcile governance needs with a great workplace.
As developers continue to adopt containers and orchestrate their workloads with Kubernetes, one question keeps popping up: how do we deal with stateful workloads like databases in Kubernetes?
You will learn:
- Why cloud native storage is important
- The 8 principles for cloud native storage in Kubernetes
- A high level overview of the cloud native storage landscape
- How to run stateful workloads in Kubernetes, with examples for popular databases
You will walk away with an understanding of where storage fits in the Kubernetes stack along with practical guidance on how to run, automate and orchestrate stateful workloads.
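As one hedged illustration of where storage fits in the Kubernetes stack, a database is typically run as a StatefulSet with a volume claim template, so each replica gets a stable identity and its own persistent volume. The names, image, and sizes below are hypothetical, not drawn from the session itself:

```yaml
# Illustrative sketch: a minimal StatefulSet giving each PostgreSQL replica
# a stable network identity and its own persistent volume.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgres
spec:
  serviceName: postgres        # headless Service providing stable pod DNS names
  replicas: 3
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:12
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:        # one PersistentVolumeClaim created per pod
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi
```

Unlike a Deployment, deleting or rescheduling a pod here does not discard its data: the claim (and the volume behind it) survives and is reattached to the replacement pod.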
Data is king. Data analytics are the be-all and end-all... but cold hard numbers cannot tell the entire story of your developer community. In this short presentation, we will challenge the current worship of numbers and discuss why you must never forget the human factors and emotion emanating from your development team. Join Jesse Davis, Devada’s Chief Technologist, as he covers diversity and inclusion in development circles, what real human investment looks like for developer communities, and why ROI matters but the human element matters more. He will outline how carrying the ameliorating message of thoughts and feelings forward to an often-disbelieving audience within the company can reap valuable rewards.
The cornerstone of DevOps is Automation. As the capabilities of cloud options continue to diversify with such tools as one-click deployments and containers, it becomes easy to visualize the idea of NoOps replacing DevOps.
In this talk we will:
- Look at the failings of early cloud deployment options
- See where PaaS and other early cloud schemes anticipated the future of DevOps
- Look at various container solutions and examine how they can automate a deployment solution for developers
In addition, we will look at the ideas behind DevOps and advocate for NoOps as its ultimate goal.
For a DevOps professional, it can be daunting to select a cloud provider when you’re tasked with building out your company’s infrastructure from scratch. There may seem to be some “obvious choices” out there, but this isn’t necessarily the case at a startup where you are the solo or duo DevOps team. You could encounter several challenges along the way, including (but certainly not limited to!) orchestration in the cloud, future-proofing a fresh, rapidly growing environment, SOC 2 security compliance requirements, RBAC implementations and, of course, incorporating “ease of use” into every process along the way.
At Cmd, we are able to deploy an immutable/mutable hybrid infrastructure, while at the same time controlling user access with ease! We hope our contributions at this talk help the DevOps community understand why we looked past some choices when selecting our tech stack, the ways in which we approached the challenges that come from going off the beaten path and how we resolved these challenges at Cmd to create a more efficient operation that adds value across the organization.
Services are the backbone of our systems. They are the pieces that make up our businesses—whether they are literal microservices or functional components of a traditional application, we can’t do the computer thing without services.
When it comes to a service in your company or organization, who’s responsible for it? The cast of characters involved in the lifecycle of a service is more than just software engineers. It can include program managers, product owners, sustainability teams (SREs/operations engineers), and business stakeholders, just to name a few.
Topics covered in this workshop include:
- Defining what a service means to you and your organization
- Roles in service ownership
- What you are observing about your service
- How you want a team to respond to a service outage or incident
- Managing your service in production
- Tuning your service
- Understanding how the service impacts the business
Google did it again. They gave us another buzzword that is changing how organizations operate. Site Reliability Engineering (SRE) is both a strategy and, in some organizations, a brand-new function that helps complete the DevOps lifecycle. In this meetup we will talk about how the SRE function is changing the way organizations think about on-call and completing the DevOps feedback loop from prod to plan. We will cover:
1.) The rise of the SRE role
2.) How the SRE function is not rooted in monitoring and ops anymore
3.) How high-performing teams are leveraging SRE to advance their delivery chain
4.) What modern on-call looks like
5.) Best practices and considerations
While many organizations are rolling out Kubernetes, breaking up their monoliths, and adopting DevOps practices with the hope of increasing developer velocity and improving reliability, it’s not enough just to put these tools in the hands of developers – you’ve got to incentivize developers to use them! Service ownership is a critical piece of DevOps: it provides these incentives by holding teams accountable for metrics like the performance and reliability of their services as well as by giving them the agency to improve those metrics.
In this talk, I’ll cover how distributed tracing can serve as the backbone of service ownership. For SRE teams that are setting standards for their organizations, it can help drive things like documentation, communication, on-call processes, and SLOs by providing a single source of truth for what’s happening across the entire application. It can also accelerate root cause analysis and make alerts more actionable by showing developers what’s changed – even if that change was a dozen services away. Throughout the talk, I’ll use examples drawn from more than a decade of experience with SRE teams in organizations big and small.
By the end of this talk, you’ll understand why service ownership is such an important piece of DevOps and how distributed tracing can drive that ownership in your organization and, by doing so, improve performance and reliability for your application.
Have you ever tried to explain K8s to a non-developer? It's complicated. It's abstract. And it's overwhelming. In this talk, Walt will show you how to explain these concepts of containerization, architecture, and cloud computing so that even your niece can understand them.
Wednesday, September 30, 2020
I'll provide an overview of the OpenAPI Initiative and how we use OpenAPI specifications to automatically generate client bindings, models, and API documentation for Splunk's next-generation cloud native platform, Splunk Cloud Services. I'll guide you through the open-source tools available to let you do the same for your APIs.
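As a rough illustration (not Splunk's actual specification), a minimal OpenAPI 3.0 document from which client bindings, models, and documentation can be generated might look like this; the service, path, and operation names are hypothetical:

```yaml
# Illustrative fragment of an OpenAPI 3.0 spec; generators read documents
# like this to emit typed clients and API reference pages.
openapi: "3.0.0"
info:
  title: Example Search API    # hypothetical service
  version: "1.0.0"
paths:
  /search:
    get:
      operationId: search      # becomes the generated client method name
      parameters:
        - name: query
          in: query
          required: true
          schema:
            type: string
      responses:
        "200":
          description: Search results
          content:
            application/json:
              schema:
                type: array
                items:
                  type: string
```

Open-source tools such as OpenAPI Generator can then emit a typed client from a spec like this, for example `openapi-generator generate -i spec.yaml -g typescript-axios -o ./client`.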
Your DevOps teams need to embed security as they ramp up containers and Kubernetes in production. As cloud providers release new services constantly, you not only need visibility inside containers, but also into the cloud infrastructure, applications and services used by your teams. With a secure DevOps workflow, your team can spend more time developing apps and less time reacting to issues.
Running secure containers requires that security and DevOps work better together. Join us to understand how to:
- Automate image scanning, including for Fargate workloads, within CI/CD pipelines (Jenkins, GitLab) and registries (ECR, GCR)
- Detect runtime threats with open-source tools like Falco and continuously monitor your cloud using AWS CloudTrail
- Prevent threats at runtime using Kubernetes PodSecurityPolicies that don’t impact performance
- Conduct incident response and forensics, even after the container is gone
- Continuously validate compliance against PCI, NIST, CIS, etc.
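As one hedged example of the runtime-prevention point above, a restrictive PodSecurityPolicy (the `policy/v1beta1` API current as of this writing) enforces constraints at admission time, so it adds no per-request runtime overhead. The policy name, ranges, and volume list below are illustrative:

```yaml
# Illustrative restrictive PodSecurityPolicy: blocks privileged containers
# and root users, and limits pods to a safe set of volume types.
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: restricted
spec:
  privileged: false
  allowPrivilegeEscalation: false
  requiredDropCapabilities:
    - ALL
  runAsUser:
    rule: MustRunAsNonRoot     # reject containers that run as root
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: MustRunAs
    ranges:
      - min: 1
        max: 65535
  fsGroup:
    rule: MustRunAs
    ranges:
      - min: 1
        max: 65535
  volumes:                     # only these volume types are allowed
    - configMap
    - secret
    - emptyDir
    - persistentVolumeClaim
```

A policy only takes effect once the requesting service account is granted `use` permission on it via RBAC.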
Cloud misconfiguration is now the leading cause of cloud-based data breaches, typically due to a lack of secure cloud architecture practices. Because cloud infrastructure is 100% software, cloud security is a software engineering problem, not a traditional security analysis problem. In order to prevent data breaches in the cloud, we must address it with secure software architecture right from the start.
In this talk, Josh Stella will run a live simulation of an advanced cloud misconfiguration exploit to show a number of ways common cloud architectural anti-patterns create opportunities for hackers to gain entry to cloud environments, move laterally using tools like IAM services, and ultimately discover and breach data. Many of the misconfigurations exploited won’t be flagged by compliance scans and often aren’t considered risky by security teams.
At each step, Josh will share alternative approaches to architecting cloud infrastructure services to ensure our applications run efficiently while denying bad actors the tools and means to exploit them. Attendees will leave with actionable insights into how to evaluate their own cloud environment for misconfiguration vulnerabilities, how to address them, and how to bake secure cloud architecture approaches into software development.
OPEN TALK: Whose Fault Is It When Kubernetes Breaks? How to Build Trust and Resolve Incidents Faster With Distributed Tracing
So, you've gone "cloud native." You're running apps in containers, you're scheduling them with Kubernetes, and now you're trying to create a better experience for your team and for your customers. But when things break — and they often do — it can be challenging to understand how to resolve an incident quickly, or even which service owner is responsible. Distributed tracing brings the code execution to the forefront, and gives a new view focused on service performance.
Software systems age just like living entities; they have a period of robust life and then start to get slower and more fragile over time. At some point you have to take an objective look at your system and determine whether you can refactor it back into robustness, or whether your best option is to simply replace it. In this session we will go over the factors that play into this determination, including current architectural design and code smells, development team experience, SDLC processes, risk tolerance, and leadership. We will also work through several high-level approaches to managing both refactoring and replacement efforts.
The race to out-innovate one’s competition has led to high performing organizations chasing increased deployment velocities but often ignoring the quality of parts being used to manufacture their applications. It was 2003 when Bruce Schneier (@schneierblog) penned, "Today there are no real consequences for having bad security, or having low-quality software of any kind. Even worse, the marketplace often rewards low quality. More precisely, it rewards additional features and timely release dates, even if they come at the expense of quality."
As nimble organizations deliver new innovations using DevOps principles, adversaries are also upping their game, something we saw in a series of high-profile and devastating cyber attacks last year. Adversaries have the intent and ability to exploit security vulnerabilities in the software supply chain - and in some cases plant the vulnerabilities themselves. They have increased scale through automation and improved breach success through precision targeting. If the IT industry doesn’t fight back by doing the same - automating security directly in the DevOps pipeline - then we’ll never be able to win.
The industry currently lacks meaningful open source controls. The most common way to introduce controls is through the application of open source governance policies across a software supply chain. But when over 5,500 IT professionals were asked if their organisation employed open source governance policies, just 63% responded positively. That percentage degraded further when participants were asked if they followed the policy. For those without a DevOps practice, just 25% said they both had an OSS governance policy and adhered to it. Effectively, 75% of those who don’t deploy a DevOps strategy either ignore policies or don’t have one at all.
Further evidence of the lack of cybersecurity hygiene was revealed by 67% of survey participants who admitted to not having meaningful controls over what open source components are used in their applications.
Modern software supply chains can only operate safely when protected with automated security and quality assessments of these upstream open source components and containers.
This sentiment was echoed in Forrester’s Top Recommendations For Your Security Program (March 2018), where analysts advised, "Automate faster than evil does. If you thought your security team struggled with alert volume — and alert fatigue — then manual methods to detect, investigate, and respond to threats will guarantee failure in the near future."