Tuesday, November 10, 2020
One of the largest banks in the U.S. is focused on creating first-class customer experiences. To do this, they continue to modernize their applications towards a next-generation application architecture that relies on a microservices architectural style. Valuable data in core systems is unlocked, exposed to experience layers via APIs, and used to create new and unique customer experiences. In this talk, you will learn key lessons from implementing microservices at scale, including establishing a centre of excellence (CoE), governance through code, decentralized governance, and the organizational and cultural changes needed to support the model.
As more enterprises adopt cloud-native environments, and with the growing complexity of infrastructures and systems, understanding your modern architectures is crucial. At scale, with thousands of microservices, visualization becomes critical to understanding performance, flow, and the health of applications. To scale properly while retaining full control, you have to have the right observability strategy and tools. In this talk, we'll go over the challenges, the solutions, and the tools to truly achieve end-to-end observability and get you scaling with confidence.
The event-driven paradigm is one of the most popular microservices architectural styles. Many enterprises are leveraging this architecture in their digital journeys, with the promise of making the enterprise more reactive to changing business needs. In this talk, I will share my experience building enterprise event-driven systems, while also elaborating on the fallacies of building and adopting such systems.
For the past two years, Postgres has stood out as the RDBMS that developers love more than any other database.
This talk will explore why software developers love Postgres. Specific examples of Postgres features developers love will be presented.
Examples include how Postgres can be used as a JSON document store that outperforms MongoDB, and how complex geospatial queries can be written in 4-5 lines of code. Additional examples of Postgres brilliance and simplicity will be presented.
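As a taste of the document-store pattern the talk alludes to, here is a minimal sketch. The table and column names are invented for illustration, and the runnable part uses SQLite's built-in JSON functions as a local stand-in; in Postgres you would use a `jsonb` column and operators such as `->>`.

```python
import json
import sqlite3

# In Postgres the idea looks roughly like this (illustrative, untested here):
#   CREATE TABLE docs (body jsonb);
#   INSERT INTO docs VALUES ('{"user": "ada", "tags": ["pg", "json"]}');
#   SELECT body->>'user' FROM docs;

# Local stand-in using SQLite's JSON functions:
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE docs (body TEXT)")
doc = {"user": "ada", "tags": ["pg", "json"]}
conn.execute("INSERT INTO docs VALUES (?)", (json.dumps(doc),))

# Pull a field out of the stored document with a path expression
user = conn.execute(
    "SELECT json_extract(body, '$.user') FROM docs"
).fetchone()[0]
print(user)
```

The point is that a relational engine can query inside JSON documents directly, without a separate document database.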
Other topics covered include how Postgres is a powerful, enterprise-ready database that is also written by developers, for developers.
A discussion on the changes, trends, and database technologies that are going to impact your business in the next 12-18 months.
In the current technology landscape, we have a lot of great innovation happening, especially when it comes to database technology. Examples include new data models such as time series and graph, and systems focused on running SQL at hyper-scale, a problem that has long been elusive, to the point that scale was becoming synonymous with NoSQL environments. We now have a new cloud-native database design coming to market, using the power of Kubernetes as well as employing serverless concepts.
In this presentation, we will look at the changing trends in database technology and what is driving them, as well as changes to open-source licenses, cloud-based deployment, and the emerging class of not-quite-open-source database software.
We live in an uncertain world and your business needs to be able to adapt. In an ever more digital world, what are you doing to keep up? Let's dive into rapid web development tools that help you bring your business website or online offering to life. Rapid web development platforms are becoming more abundant and can help you adapt your website in a matter of hours, not weeks or months. With lower maintenance costs and shorter development times, a rapid web dev solution can change your business landscape. Let's look at some customer use cases and see how a rapid web development platform helped these businesses adapt, especially in 2020.
To make good decisions you need to use all the available information. For many companies, much of this information is locked up within their infrastructure, and getting to it is both difficult and time-consuming. IT enterprises need to release this stored data to allow for better decision-making, ultimately saving time, reducing waste and lowering costs.
We’ve talked to many organisations, and they all suffer from the same problem: they’re not easily able to see what’s right in front of them. They’re unable to answer simple questions about their infrastructure and software. They simply don’t know what they’ve got, where it is, or exactly how many of “it” they have. It’s a common problem, and it comes down to a lack of available tools that fit a multitude of physical and virtual challenges. This puts pressure on operational staff to deliver the answers, but with no built-for-purpose tools available, they have to piece the information together, taking a lot of time and effort. This massively reduces their productivity, and the company’s ability to respond quickly, when the information they need should be readily available.
Let's first look at some of the questions we regularly ask of our infrastructure and applications. You should be able to find the answers to these questions in a very short space of time, if you don't already know them. If you can't, you have a problem.
· How many servers do you have across all cloud and on-prem environments? (The total number: you're paying for all of them.)
· Which teams support the various parts of your infrastructure?
· What containers/build versions are running on all your Kubernetes/OpenShift clusters?
· Are all your servers patched to the latest release?
· Can you easily collect the specific data needed for a simple licence renewal?
· Do your servers meet the agreed CIS controls? Are they as secure as possible?
· How many servers are still running a vulnerable version of software (a CVE vulnerability)?
· Do any of your servers have an uptime of over 3 months?
· What regions do all your cloud instances run in?
Reading through this list will probably resonate with some of you, either because you asked the question or because you had to find the answer. How did you find the answers? Was it straightforward? Did it only take you 2 minutes?
I'm betting the answer is probably NO.
When building applications in a pure microservices architecture you have the luxury of flexibility around how you persist the data. Rather than being forced into a database system that tries to support all of your cross-domain use cases, you can choose a data persistence strategy that makes the most sense for that microservice.
In this session we will take a look at the major data persistence technologies that are available and discuss the kinds of use cases where each makes sense. The technologies we will discuss include:
- In Memory
- Time Series
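To make the first option on the list concrete, here is a hedged sketch (all names invented for illustration) of the simplest persistence choice: an in-memory key-value store with per-entry expiry, the kind a caching-oriented microservice might pick when durability is not a requirement.

```python
import time

class InMemoryStore:
    """Tiny in-memory key-value store with per-entry expiry (TTL)."""

    def __init__(self):
        self._data = {}  # key -> (value, expires_at or None)

    def put(self, key, value, ttl_seconds=None):
        expires = time.monotonic() + ttl_seconds if ttl_seconds else None
        self._data[key] = (value, expires)

    def get(self, key, default=None):
        entry = self._data.get(key)
        if entry is None:
            return default
        value, expires = entry
        if expires is not None and time.monotonic() > expires:
            del self._data[key]  # lazily evict expired entries
            return default
        return value

store = InMemoryStore()
store.put("session:42", {"user": "ada"}, ttl_seconds=30)
print(store.get("session:42"))
```

The trade-off is the usual one: reads and writes are fast and the model is trivial, but everything is lost on restart, which is exactly why per-microservice persistence choices matter.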
This talk will focus on why inclusive remote teams are better, and how to build them. Starting with “why?” and moving towards “how?”, it will explain key methods of inclusivity: design, data analysis, building inclusive teams, leading with empathy, and bringing diversity into the SDLC. It will also share resources, because turning inclusion from an overhead cost into a business enabler is a need of tomorrow.
This workshop draws from PagerDuty's open-source postmortem framework to teach you strategies for conducting successful blameless postmortems. Learn basic concepts following our step-by-step guide and complete practice exercises to help you develop strategies for overcoming common pitfalls.
Apache Kafka is being adopted as an event backbone in new organizations every day. We would love to send every byte of data through the event bus. However, most of the time, connecting to simple third-party applications and services becomes a headache that involves several lines of code and additional applications. As a result, connecting Kafka to services like Google Sheets, communication tools such as Slack or Telegram, or even the omnipresent Salesforce, is a challenge nobody wants to face. Wouldn’t you like to have hundreds of connectors readily available out-of-the-box to solve this problem?
Due to these challenges, communities like Apache Camel are working on how to speed up development in key areas of the modern application, like integration. The Camel Kafka Connect project, from the Apache foundation, has enabled its vast set of connectors to interact with Kafka Connect natively. So developers can start sending and receiving data between Kafka and their preferred services and applications in no time, without a single line of code.
In summary, during this session we will:
- Introduce you to the Camel Kafka Connector sub-project from Apache Camel
- Go over the list of connectors available as part of the project
- Showcase a couple of examples of integrations using the connectors
- Share some guidelines on how to get started with the Camel Kafka Connectors
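To give a feel for the "no code" claim, a connector like this is deployed through the standard Kafka Connect configuration mechanism. The core keys (`name`, `connector.class`, `tasks.max`, `topics`) are standard Kafka Connect; the Camel-specific class and endpoint property names below are illustrative placeholders from memory, so check the Camel Kafka Connector documentation for the exact names of the connector you use.

```properties
# Sketch of a Kafka Connect sink configuration (values are placeholders)
name=slack-sink-example
# Illustrative class name; each Camel connector ships its own
connector.class=org.apache.camel.kafkaconnector.slack.CamelSlackSinkConnector
tasks.max=1
topics=alerts
# Camel endpoint options; exact property names vary per connector,
# see the connector's generated documentation
camel.sink.path.channel=#ops-alerts
```

The integration itself is then just configuration: records arriving on the `alerts` topic flow to the external service with no custom consumer code.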
SkySQL is the first and only MariaDB Database-as-a-Service (DBaaS) engineered, run and supported by MariaDB. Built for multi-cloud on Kubernetes, SkySQL can deploy databases and data warehouses for transactional (OLTP), analytical (OLAP), hybrid transactional/analytical (HTAP), and distributed OLTP workloads (Distributed SQL). Leveraging MariaDB’s storage engine architecture, MariaDB MaxScale and a combination of SSD and S3 object storage, SkySQL has become the most comprehensive cloud database offering for a wide range of workloads.
In this session, we’ll provide an overview of its architecture and capabilities. Technical detail will be brought to life via code-level (Python, Jupyter, and more) demos to give developers a first-hand look at how to develop modern applications using the combination of transactional, analytical and cross-engine queries with MariaDB SkySQL. Code samples, documentation as well as free access to SkySQL will be provided.
Many of today’s business-essential tasks have become digitized and, as a result, IT teams have had to learn to deal with constant change while ensuring zero downtime. The irony is that, although IT has become business-critical, the productivity and agility of the people building and supporting services behind the scenes has plummeted. Now, companies simply generate too much data for humans to monitor and understand manually, leading to an incredible amount of toil and noise.
We cannot continue to scale monitoring and observability by simply devoting more humans to the task. Artificial Intelligence and Machine Learning have emerged as the cornerstone of a new observability strategy. Using algorithms correctly can eliminate toil, help accelerate the discovery of potential issues across applications and infrastructure, avoid emergencies, maintain agility and ultimately continue delivering innovative business services.
This talk will explain the methods that DevOps practitioners and SRE teams can use to more effectively:
- surface important anomalies and events from the deluge of data
- understand the relationships between alerts
- obtain the context needed to engage the right teams and people
From data discovery to blameless analysis, algorithms automate the cognitive load required by humans to remove the operations toil and continuously assure your customer experience. Learn what DevOps teams can expect from these algorithms and how to apply them.
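As a minimal illustration of the kind of algorithm involved (a simple statistical baseline for teaching purposes, not what any particular vendor ships), here is a sketch that flags anomalous points in a metric stream by comparing each point against the rolling mean and standard deviation of the points before it.

```python
import statistics

def find_anomalies(series, window=10, threshold=3.0):
    """Return indices of points more than `threshold` standard
    deviations from the mean of the preceding `window` points."""
    anomalies = []
    for i in range(window, len(series)):
        baseline = series[i - window:i]
        mean = statistics.fmean(baseline)
        stdev = statistics.pstdev(baseline)
        if stdev == 0:
            continue  # flat baseline: skip rather than divide by zero
        z_score = abs(series[i] - mean) / stdev
        if z_score > threshold:
            anomalies.append(i)
    return anomalies

# A latency series with one obvious spike at index 10
latency_ms = [20, 21, 19, 20, 22, 20, 21, 19, 20, 21, 250, 20, 21]
print(find_anomalies(latency_ms))
```

Real observability platforms use far more sophisticated models (seasonality, multivariate correlation), but the core idea of learning a baseline and surfacing deviations is the same.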
Many consider agile a process to implement within an existing organization. A set of rules to follow that will produce some useful outcomes. This approach can provide improvements in many different structures of organizations. As agile maturity improves, however, the benefits can become limited by the structure and culture of the organization itself.
Agile is more than a framework for organizing tasks for a team. Agile is a culture, a mindset, and a structure for improving the velocity of innovation and providing real business value to customers. To gain the most benefit from Agile it must be considered as part of a more extensive system that incorporates organizational structure, software architecture, and company culture.
This talk considers the interactions between how the work, the software, and the people are organized in high performing agile organizations. Using my own experiences at companies large and small, I will share what I have learned and some best practices I use. These lessons will help you as you improve and scale your Agile teams.
I will discuss:
* How to structure your organization to remove the bottlenecks in coordination and decision-making that can slow velocity to a crawl
* How to take advantage of modern systems architectures to allow teams to move faster
* Using data to provide accountability for autonomous teams without creating more process
By the end, you will have concrete examples and ideas that you can bring back to your team to help you improve and scale agile within your organization.
We’ve all heard the buzz around pushing application security into the hands of developers, but if you’re like most companies, it has been hard to actually make this a reality. You aren’t alone - putting the culture, processes, and tooling into place to make this happen is tough. Join StackHawk CSO Scott Gerlach as he shares his triumphs and failures while building DevSecOps practices and tools at companies such as GoDaddy, SendGrid, and Twilio. Dig into specific reasons why developers struggle with AppSec and what you can do to make it work better. Whether you’re a seasoned DevSecOps pro or just starting out, this will be an entertaining (and judgement-free!) talk you won’t want to miss!
Imagine your team has faced a recent Kafka outage or needs to serve digital content across multiple regions. Then, imagine you have real-time customer orders that need to be handled and replicated to various teams across the world. Now imagine you have to prevent both (1) data loss and (2) duplication of that data, such as with customer orders.
This interactive talk will have members of the audience send multiple text messages to a server connected to Kafka. Then, in the middle of the exercise, the Kafka server will shut off – creating what most organizations would generously call a clusterf***. The session will show how you can recover from such a predicament using resilience engineering with Kafka’s MirrorMaker 2.0 and smart application workflows.
Key session takeaways include:
- How to ensure disaster recovery when using Kafka
- How to duplicate records
- How to use MirrorMaker 2.0 like a pro
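For readers new to MirrorMaker 2.0, the replication topology used for this kind of disaster recovery is declared in a properties file along these lines. Cluster names and broker addresses below are placeholders; see the Apache Kafka geo-replication documentation for the full option set.

```properties
# connect-mirror-maker.properties (sketch; hostnames are placeholders)
clusters = primary, backup
primary.bootstrap.servers = primary-broker:9092
backup.bootstrap.servers = backup-broker:9092

# Replicate all topics from the primary cluster to the backup cluster
primary->backup.enabled = true
primary->backup.topics = .*
```

When the primary cluster goes down mid-exercise, consumers can be pointed at the replicated topics on the backup cluster, which is the recovery path the session demonstrates.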
In response to the current environment, many companies are having employees work remotely to keep them safe and healthy. And it’s no secret that in this “new normal,” human connection has become even more important, as people (both personally and professionally) have to find new ways of working together without in-person interaction. While there is no rule book for managing teams in the midst of a pandemic, there are steps managers can take to ensure their global teams of developers are cross-collaborating and drawing on the benefits of workplace international and cultural diversity. Notably, developers are, by nature, well-suited to interacting online for work (and leisure). However, though developers might not have to learn as many new tools, they will have to become more flexible (and patient) as other departments (who might be less digital in terms of communication) shift to working more online.

Additionally, organizations and their employees are facing new challenges from a privacy and security standpoint, and they need to find new ways to combat pandemic-related breaches. For example, the general consensus is that people are seeing a rise in phishing emails that use language about the pandemic as bait. And as companies jump to offer up their vast amounts of data to help develop new COVID-related solutions, such as using cell phone data to monitor social distancing, there’s an even greater issue with data privacy.

This session will discuss how developer teams can prepare for increased remote work over the course of the year, while keeping collaboration, data privacy and security top of mind. It will also center on the principles that developers, and entire companies, should stand by when using data.
Verifying Kafka streaming applications in projects with a data-decoupled nature comes with serious constraints: potential data loss, processing errors, and other complications in using the decoupled data for verification. This talk will present the problem many large enterprises face while adopting Kafka, and how fitness functions applied in testing can be a visible indicator of success.
Preventing the spread of COVID-19 has been our collective priority in this pandemic and the current climate is allowing us to keep a pulse on data in a very important way. SingleStore is working with True Digital group to prevent the spread of COVID-19 in Thailand by using anonymized cell phone location data on 500,000 location events every second for over 30 million+ mobile phones to track population movement in two-minute intervals. This vast amount of real-time, geospatial data provides a view of population densities enabling the Thai government authorities to see when large gatherings are forming and quickly helps them to adapt their facilities.
Wednesday, November 11, 2020
In this presentation, experiences from managing a test automation project are shared. The system under test is a cloud-based, open IoT operating system.
Digitalization is one of the hottest topics in business, with companies investing in transforming their processes by leveraging digital technologies. Against this backdrop, the applied approaches and techniques are explained to give a picture of the whole testing lifecycle of a cloud-based platform. Test levels, priorities, release scopes, regression suites, and the structure of the self-developed automation framework, with its infrastructural components and tools, are examined.
Like all other journeys, there are ramps and landings along the way. Lessons learnt are continuously applied, and processes are improved by evaluating various strategies and incorporating feedback collected from all parties. Some attempts have not turned out positively, and the project has reacted as an agile organization should. Challenges are listed, and the actions taken against them are summarized, visualizing the benefits by comparing before-and-after situations. The whole progression, from the first stages to the last, gives insight into how the project developed and reached maturity.
This is the story of an automation journey that started with a motto: “We are all in the same boat”. The goal is to give insight into a good test management process and best practices.
I think this submission has interesting content that can attract great attention. Rather than theoretical claims, it consists of practical real-life experiences. A full journey will be shared, with its challenges and, of course, the proposals made against them. Successful adoption of the latest technologies and trends is also in scope: I talk about API and UI test automation frameworks, and zero-downtime release activities such as blue-green deployments. Initiatives like applying artificial intelligence in testing stages will be covered as well. Instead of what to do, I go over how to do it.
Software is vulnerable. The good news is, software is vulnerable in ways that are known and can be addressed. For the past 15+ years, the security community has been publishing and tracking a list of common security vulnerabilities called the OWASP Top 10. This session provides a brief overview of ten common DevSecOps security vulnerability categories. It's a lot to cover in 25 minutes, so this session focuses on the general concepts.
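As one concrete taste of the list, injection (historically at or near the top of the OWASP Top 10) comes down to mixing untrusted input into query text. A minimal sketch of the vulnerable pattern and its fix, using Python's standard-library SQLite driver:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

user_input = "nobody' OR '1'='1"  # attacker-controlled string

# Vulnerable: string concatenation lets the input rewrite the query,
# so the always-true OR clause returns every row.
unsafe = conn.execute(
    "SELECT name FROM users WHERE name = '" + user_input + "'"
).fetchall()

# Safe: a parameterized query treats the input as data, not SQL,
# so no user is named "nobody' OR '1'='1" and nothing matches.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (user_input,)
).fetchall()

print(unsafe, safe)
```

The same data-versus-code distinction underlies the fixes for most injection variants (SQL, LDAP, OS command).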
As in life, so in software: boundaries are critical. Without setting the correct boundaries we can get stuck maintaining a logically broken system potentially for years.
You may be designing a greenfield application using a microservices architecture with goals of speed, scale, and flexibility, or you may be attempting to add drastic new features to an existing system with a more monolithic design. But without doing the work upfront to define the true business boundaries of each service in your system, you end up with a brittle, tightly-coupled system that negates all these longed-for benefits, or doubles down on a coupled monstrosity.
So how do you know you’re setting the right boundaries? In this session, we’ll explore some tactical tools for identifying those boundaries and show you how to evaluate them via a change scenario to an existing system. The exercise focuses attention on considerations for data, reliability, and the human factor of ownership and customer experience. We will also discuss some pitfalls to avoid coupling in your system.
You’ll leave empowered to set boundaries that enable true autonomy and help you get the most value out of your distributed architecture.
In a distributed world, we all depend on distributed systems more than ever. As these systems become more complex, their failures become much harder to predict.
Chaos Engineering introduces the injection of failures as a discipline for building confidence in the resilience capability of the systems.
Chaos Engineering and postmortems are mainly considered the concern of Operations teams, but considering failures from the development side provides teams with the opportunity to execute potentially highly disruptive experiments in a safer, more controlled way.
In this talk I am going to explain why all roles involved in building software products should practice Chaos Engineering. I will show the benefits, and how these exercises can provide recovery procedures and validate resolution protocols.
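To make the idea tangible, here is a toy sketch (all names invented for illustration, not any particular chaos tool) of controlled failure injection: a wrapper that makes a dependency fail at a configurable rate, which lets you verify that retry or fallback logic actually works before a real outage tests it for you.

```python
import random

def chaos(failure_rate, rng=random.random):
    """Wrap a function so it raises at the given rate,
    simulating a flaky network dependency."""
    def decorate(fn):
        def wrapper(*args, **kwargs):
            if rng() < failure_rate:
                raise ConnectionError("injected failure")
            return fn(*args, **kwargs)
        return wrapper
    return decorate

@chaos(failure_rate=0.5)
def fetch_price(item):
    # Stand-in for a call to a remote service
    return {"item": item, "price": 9.99}

def fetch_price_with_retry(item, attempts=5):
    # The kind of resilience logic a chaos experiment is meant to validate
    for _ in range(attempts):
        try:
            return fetch_price(item)
        except ConnectionError:
            continue
    return None  # fallback when the dependency stays down
```

Raising the failure rate in a test environment is the "potentially highly disruptive experiment, executed safely" that the abstract describes.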
Successful businesses are built on the shoulders of many different roles and personas. Your personal success certainly depends on your individual contributions, but it also depends on how well you can work across different functions as a team member. Interactions between Developers and Product Management, Developers and Test, or with business focused functions such as Marketing and Sales are key to getting things done in an organization, building a great product, and ultimately your own success! Each role may have different “languages” they speak, priorities and scopes of focus. Navigating these variations is not taught in school, and may initially seem like a huge challenge. Understanding and working well with your counterparts enables everyone’s impact to grow larger than each individual’s contributions, and can help create robust products, close working relationships and increase job satisfaction.
Nic, an Engineering Manager, and Lauren, Product Line Lead, started working together 5 years ago. They have not only built a collaborative engineering/product relationship, but have become good friends as well. They’ll outline the four pillars they think are critical to a great cross-functional relationship: Respect, Empathy, Trust and Communication, and how to cultivate them in your own teams.
You’ll walk away not only with a better understanding of the importance of your colleagues in different roles and how these cross-functional relationships aid in everyone’s success, but also with clear tactics to improve your daily interactions. Nic and Lauren will share their own proven methods for a great cross-functional relationship including; how they support and amplify each others’ needs, effective methods and tools for communicating with each other, and how together they prioritize their work efforts for the most impact.
BUSINESS PROBLEM & CHALLENGE
Network automation was not well practiced or well understood inside our network engineering team, but it was sorely needed. We needed to decrease the effort and mistakes involved in daily management tasks by minimizing direct human interaction with network devices. High on our priority list was improving network security by recognizing and fixing security vulnerabilities, and increasing network performance.
HOW WE OVERCAME THE CHALLENGE
We started by simplifying daily workflows, baselining our configurations and removing snowflakes. While this can be very labour-intensive at the outset when you’re working on a global scale in a highly critical customer environment, the long-term benefits far outweighed the labour.
Next, we created an inventory file which listed all network devices by type, model, location and IP address. This enabled us to retrieve info about devices and, using network programming and automation, to deploy to all devices or even a subset of them (e.g. only those in a specific area), depending on what was needed. The benefit is that we avoided manually configuring, and logging into, hundreds of different devices to add configuration to each one.
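A sketch of the idea (device names, fields and addresses are invented for illustration): a flat inventory that tooling can filter to target every device, or only a chosen subset, instead of logging in everywhere by hand.

```python
# A minimal device inventory; in practice this would be loaded from
# a YAML/CSV file or a source-of-truth system.
inventory = [
    {"name": "sw-lon-01", "type": "switch", "model": "XX-3850", "region": "emea", "ip": "10.1.0.1"},
    {"name": "rtr-nyc-01", "type": "router", "model": "XX-4451", "region": "amer", "ip": "10.2.0.1"},
    {"name": "sw-nyc-02", "type": "switch", "model": "XX-3850", "region": "amer", "ip": "10.2.0.2"},
]

def select(devices, **filters):
    """Return the devices matching every given attribute,
    e.g. select(inventory, region="amer", type="switch")."""
    return [d for d in devices if all(d.get(k) == v for k, v in filters.items())]

# Target only switches in one region for a scoped deployment
targets = select(inventory, region="amer", type="switch")
print([d["name"] for d in targets])
```

An automation tool then iterates over `targets` to push configuration, which is what makes "deploy to all devices, or only those in a specific area" a one-line decision.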
Overcoming these two big challenges set us up for success and enabled us to deploy at a global scale. We lived by the mantra:
“If it’s not repeatable, it’s not automatable. And if it’s not automatable, it’s not scalable.”
LEARNINGS AND MEASURABLE OUTCOMES
So what did we learn? For starters, it can be hard to automate a use case or test in the same way you would run it manually. Testing that requires physical intervention, for example losing service-provider links or hardware failures, is also a challenge, as automating something like that is very tricky. We also learned that code reviews are extremely important: shared code ownership means the entire team can make changes anywhere, at any time.
And what were the measurable outcomes?
Faster deployment times - we were able to efficiently push changes to over 300 network devices and audit the configuration of our global network, cutting execution time from days down to hours.
Removed the fear of large and complex network changes - the accuracy and efficiency with which we were able to deploy at scale gave the business and leadership more confidence in subsequent large-scale network changes and deployments.
Faster feedback on network changes - version control and peer review of network configuration changes, treating infrastructure as code (IaC), gave us reviews much more quickly.
Helped us adhere to challenging PSIRT/CSIRT timeframes for addressing security vulnerabilities.
Speed of deployment; speed of feedback on network changes; speed of adherence to PSIRT/CSIRT timeframes; confidence and buy-in from senior leadership on subsequent deployments!