Tuesday, November 10, 2020
As more enterprises adopt cloud-native environments, and as infrastructures and systems grow more complex, understanding your modern architecture is crucial. At scale, with thousands of microservices, visualization becomes critical to understanding the performance, flow, and health of applications. To scale properly while retaining full control, you need the right observability strategy and tools. In this talk, we'll go over the challenges, the solutions, and the tools to truly achieve end-to-end observability and get you scaling with confidence.
For the past two years, Postgres has ranked as the RDBMS that developers love more than any other database.
This talk will explore why software developers love Postgres, with specific examples of the Postgres features they love most.
Examples include how Postgres can be used as a JSON document store that outperforms MongoDB, and how complex geospatial queries can be written in 4-5 lines of code. Additional examples of Postgres's brilliance and simplicity will be presented.
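To give a rough sense of the features mentioned above, here is a minimal SQL sketch (table names, columns, and coordinates are hypothetical, not taken from the talk):

```sql
-- Postgres as a JSON document store: a JSONB column with a GIN index
-- supports fast containment queries.
CREATE TABLE documents (id serial PRIMARY KEY, body jsonb);
CREATE INDEX documents_body_idx ON documents USING GIN (body);
SELECT body FROM documents WHERE body @> '{"status": "active"}';

-- A geospatial query in a few lines, assuming the PostGIS extension and a
-- stores table with a geography column named geom: find stores within 5 km.
SELECT name
FROM stores
WHERE ST_DWithin(geom, ST_MakePoint(-122.4, 37.8)::geography, 5000);
```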
Other topics covered include how Postgres is both a powerful, enterprise-ready database and a database written by developers, for developers.
We live in an uncertain world, and your business needs to be able to adapt. In an ever more digital world, what are you doing to keep up? Let's dive into rapid web development tools that can help you bring your business website or online offering to life. Rapid web development platforms are becoming more abundant and can help you adapt your website in a matter of hours, not weeks or months. With lower maintenance costs and shorter development times, a rapid web dev solution can change your business landscape. Let's look at some customer use cases and see how a rapid web development platform helped these businesses adapt, especially in 2020.
When building applications in a pure microservices architecture you have the luxury of flexibility around how you persist the data. Rather than being forced into a database system that tries to support all of your cross-domain use cases, you can choose a data persistence strategy that makes the most sense for that microservice.
In this session, we'll take a look at the major data persistence technologies available and discuss the kinds of use cases where each makes sense. The technologies we'll discuss include:
- In Memory
- Time Series
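As a rough illustration of these first two options, here is a toy Python sketch (class names and APIs are invented for illustration, not tied to any product):

```python
import time
from collections import OrderedDict


class InMemoryCache:
    """Toy key-value store with LRU eviction, in the spirit of an in-memory store."""

    def __init__(self, capacity=128):
        self.capacity = capacity
        self.data = OrderedDict()

    def put(self, key, value):
        self.data[key] = value
        self.data.move_to_end(key)
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)  # evict the least recently used entry

    def get(self, key):
        if key in self.data:
            self.data.move_to_end(key)  # a read refreshes recency
            return self.data[key]
        return None


class TimeSeriesStore:
    """Toy append-only time-series store: points arrive in time order."""

    def __init__(self):
        self.points = []  # list of (timestamp, value) tuples

    def append(self, value, ts=None):
        self.points.append((ts if ts is not None else time.time(), value))

    def range(self, start, end):
        return [v for t, v in self.points if start <= t <= end]
```

The point of the sketch is the shape of the APIs: a cache trades durability for speed and bounded memory, while a time-series store optimizes for ordered appends and range scans.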
This workshop draws from PagerDuty's open-source postmortem framework to teach you strategies for conducting successful blameless postmortems. Learn basic concepts following our step-by-step guide and complete practice exercises to help you develop strategies for overcoming common pitfalls.
SkySQL is the first and only MariaDB Database-as-a-Service (DBaaS) engineered, run and supported by MariaDB. Built for multi-cloud on Kubernetes, SkySQL can deploy databases and data warehouses for transactional (OLTP), analytical (OLAP), hybrid transactional/analytical (HTAP), and distributed OLTP workloads (Distributed SQL). Leveraging MariaDB’s storage engine architecture, MariaDB MaxScale and a combination of SSD and S3 Object Storage, SkySQL has become the most comprehensive cloud database offering for a wider range of workloads.
In this session, we’ll provide an overview of its architecture and capabilities. Technical detail will be brought to life via code-level (Python, Jupyter, and more) demos to give developers a first-hand look at how to develop modern applications using the combination of transactional, analytical and cross-engine queries with MariaDB SkySQL. Code samples, documentation as well as free access to SkySQL will be provided.
Many of today’s business-essential tasks have become digitized and, as a result, IT teams have had to learn to deal with constant change while ensuring zero downtime. The irony is that, although IT has become business-critical, the productivity and agility of the people building and supporting services behind the scenes has plummeted. Now, companies simply generate too much data for humans to monitor and understand manually, leading to an incredible amount of toil and noise.
We cannot continue to scale monitoring and observability by simply devoting more humans to the task. Artificial Intelligence and Machine Learning have emerged as the cornerstone of a new observability strategy. Using algorithms correctly can eliminate toil, help accelerate the discovery of potential issues across applications and infrastructure, avoid emergencies, maintain agility and ultimately continue delivering innovative business services.
This talk will explain methods that DevOps practitioners and SRE teams can use to more effectively:
- surface important anomalies and events from the deluge of data
- understand the relationships between alerts
- obtain the context needed to engage the right teams and people
From data discovery to blameless analysis, algorithms automate the cognitive load required by humans to remove the operations toil and continuously assure your customer experience. Learn what DevOps teams can expect from these algorithms and how to apply them.
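As one concrete illustration of the kind of algorithm involved, here is a minimal rolling z-score detector for surfacing anomalies in a metric stream (the window size and threshold are illustrative choices, not the speakers' method):

```python
from collections import deque
from statistics import mean, stdev


def detect_anomalies(stream, window=20, threshold=3.0):
    """Flag points deviating more than `threshold` standard deviations
    from the rolling statistics of the previous `window` points."""
    history = deque(maxlen=window)
    anomalies = []
    for i, value in enumerate(stream):
        if len(history) == window:
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs(value - mu) / sigma > threshold:
                anomalies.append((i, value))
        history.append(value)
    return anomalies
```

Even this toy version captures the core idea: instead of a human staring at dashboards, an algorithm learns a baseline and only raises the points worth looking at.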
We’ve all heard the buzz around pushing application security into the hands of developers, but if you’re like most companies, it has been hard to actually make this a reality. You aren’t alone - putting the culture, processes, and tooling into place to make this happen is tough. Join StackHawk CSO Scott Gerlach as he shares his triumphs and failures while building DevSecOps practices and tools at companies such as GoDaddy, SendGrid, and Twilio. Dig into specific reasons why developers struggle with AppSec and what you can do to make it work better. Whether you’re a seasoned DevSecOps pro or just starting out, this will be an entertaining (and judgement-free!) talk you won’t want to miss!
Imagine your team has faced a recent Kafka outage or needs to serve digital content across multiple regions. Then, imagine you have real-time customer orders that need to be handled and replicated to various teams across the world. Now imagine you have to prevent both (1) data loss and (2) duplication of that data, such as with customer orders.
This interactive talk will have members of the audience send multiple text messages to a server connected to Kafka. Then, in the middle of the exercise, the Kafka server will shut off – creating what most organizations would generously call a clusterf***. The session will show how you can recover from such a predicament using resilience engineering with Kafka’s MirrorMaker 2.0 and smart application workflows.
Key session takeaways include:
- How to ensure disaster recovery when using Kafka
- How to duplicate records
- How to use MirrorMaker 2.0 like a pro
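To give a rough idea of what a MirrorMaker 2.0 replication setup looks like, here is a minimal configuration sketch for active/passive replication (cluster names, addresses, and the topic pattern are placeholders):

```properties
# connect-mirror-maker.properties — replicate from 'primary' to 'backup'
clusters = primary, backup
primary.bootstrap.servers = primary-broker:9092
backup.bootstrap.servers = backup-broker:9092

# Enable one-way replication of the order topics
primary->backup.enabled = true
primary->backup.topics = orders.*

# Sync checkpoints and consumer group offsets so consumers can fail over
emit.checkpoints.enabled = true
sync.group.offsets.enabled = true

replication.factor = 3
```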
Verifying Kafka streaming applications is hard given the decoupled nature of these projects: teams face tight constraints around potential data loss, processing errors, and other uses of the decoupled data for verification. This talk will present the problem many large enterprises face when adopting Kafka, and how fitness functions applied in testing can serve as a visible indicator of success.
Wednesday, November 11, 2020
As in life, so in software: boundaries are critical. Without setting the correct boundaries we can get stuck maintaining a logically broken system potentially for years.
You may be designing a greenfield application using a microservices architecture with goals of speed, scale, and flexibility, or you may be adding drastic new features to an existing system with a more monolithic design. Either way, without doing the work upfront to define the true business boundaries of each service in your system, you end up with a brittle, tightly coupled system that negates all those longed-for benefits, or doubles down on a coupled monstrosity.
So how do you know you're setting the right boundaries? In this session, we'll explore some tactical tools for identifying those boundaries and show you how to evaluate them via a change scenario applied to an existing system. The exercise focuses attention on considerations of data, reliability, and the human factors of ownership and customer experience. We will also discuss some pitfalls that introduce coupling into your system.
You’ll leave empowered to set boundaries that enable true autonomy and help you get the most value out of your distributed architecture.
In a distributed world, we all depend on distributed systems more than ever. As these systems become more complex, their failures become much harder to predict.
Chaos Engineering introduces deliberate failure injection as a discipline for building confidence in a system's resilience.
Chaos Engineering and postmortems are mainly considered the province of operations teams, but considering failures from the development side provides teams with the opportunity to execute potentially highly disruptive experiments in a safer, more controlled way.
In this talk, I am going to explain why all roles involved in building software products should practice Chaos Engineering. I will show the benefits, and how these exercises can provide recovery procedures and validate resolution protocols.
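As a taste of what development-side failure injection can look like, here is a toy Python sketch (the decorator and retry policy are invented for illustration, not part of the talk):

```python
import functools
import random


def chaos(failure_rate, rng=random):
    """Decorator that raises ConnectionError with probability `failure_rate`,
    simulating an unreliable dependency during development or testing."""
    def wrap(fn):
        @functools.wraps(fn)
        def inner(*args, **kwargs):
            if rng.random() < failure_rate:
                raise ConnectionError("injected failure")
            return fn(*args, **kwargs)
        return inner
    return wrap


def with_retries(fn, attempts=3):
    """A simple resolution protocol under test: retry a flaky call."""
    for attempt in range(attempts):
        try:
            return fn()
        except ConnectionError:
            if attempt == attempts - 1:
                raise
```

Running experiments like this in code, before production, lets developers validate that their recovery paths (here, the retry loop) actually behave as intended.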