Tuesday, November 10, 2020
As more enterprises adopt cloud-native environments and infrastructures and systems grow more complex, understanding your modern architectures is crucial. At scale, with thousands of microservices, visualization becomes critical to understanding performance, flow, and the health of applications. To scale properly while retaining full control, you need the right observability strategy and tools. In this talk, we'll go over the challenges, the solutions, and the tools to truly achieve end-to-end observability and get you scaling with confidence.
For the past two years, Postgres has ranked as the RDBMS that developers love more than any other database.
This talk will explore why software developers love Postgres. Specific examples of Postgres features developers love will be presented.
Examples include how Postgres can be used as a JSON document store that outperforms Mongo, and how complex geospatial queries can be written in 4-5 lines of code. Additional examples of Postgres's brilliance and simplicity will be presented.
Other topics that will be covered include how Postgres is a powerful, enterprise-ready database that is also written by developers, for developers.
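To illustrate the document-store pattern the talk describes, here is a minimal sketch using the JSON functions of SQLite (available from Python's standard library) as a stand-in; Postgres itself offers a richer JSONB type with operators such as `->>` and GIN indexing. The table and documents below are invented for illustration:

```python
import sqlite3

# Store JSON documents in an ordinary relational table and query fields
# inside them -- the same pattern Postgres supports with a JSONB column.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE docs (id INTEGER PRIMARY KEY, body TEXT)")
conn.execute(
    "INSERT INTO docs (body) VALUES (?), (?)",
    ('{"name": "ada", "tags": ["math"]}', '{"name": "lin", "tags": ["os"]}'),
)

# Pull a field out of each stored document, just like a normal column.
names = [row[0] for row in conn.execute(
    "SELECT json_extract(body, '$.name') FROM docs ORDER BY id")]
```

In Postgres the last query would read `SELECT body->>'name' FROM docs`, and a GIN index on the column makes such lookups fast at scale.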
We live in an uncertain world, and your business needs to be able to adapt. In an ever more digital world, what are you doing to keep up? Let's dive into rapid web development tools that can help you bring your business website or online offering to life. Rapid web development platforms are becoming more abundant and can help you adapt your website in a matter of hours, not weeks or months. With lower maintenance costs and shorter development times, a rapid web dev solution can change your business landscape. Let's look at some customer use cases and see how a rapid web development platform helped businesses adapt, especially in 2020.
When building applications in a pure microservices architecture you have the luxury of flexibility around how you persist the data. Rather than being forced into a database system that tries to support all of your cross-domain use cases, you can choose a data persistence strategy that makes the most sense for that microservice.
In this session, we will take a look at the major data persistence technologies available and discuss the kinds of use cases where each makes sense. The technologies that we will discuss include:
- In Memory
- Time Series
This workshop draws from PagerDuty's open-source postmortem framework to teach you strategies for conducting successful blameless postmortems. Learn basic concepts following our step-by-step guide and complete practice exercises to help you develop strategies for overcoming common pitfalls.
SkySQL is the first and only MariaDB Database-as-a-Service (DBaaS) engineered, run and supported by MariaDB. Built for multi-cloud on Kubernetes, SkySQL can deploy databases and data warehouses for transactional (OLTP), analytical (OLAP), hybrid transactional/analytical (HTAP), and distributed OLTP workloads (Distributed SQL). Leveraging MariaDB’s storage engine architecture, MariaDB MaxScale and a combination of SSD and S3 object storage, SkySQL has become the most comprehensive cloud database offering, supporting a wide range of workloads.
In this session, we’ll provide an overview of its architecture and capabilities. Technical detail will be brought to life via code-level demos (Python, Jupyter, and more) to give developers a first-hand look at how to develop modern applications using the combination of transactional, analytical and cross-engine queries with MariaDB SkySQL. Code samples, documentation, and free access to SkySQL will be provided.
Many of today’s business-essential tasks have become digitized and, as a result, IT teams have had to learn to deal with constant change while ensuring zero downtime. The irony is that, although IT has become business-critical, the productivity and agility of the people building and supporting services behind the scenes has plummeted. Now, companies simply generate too much data for humans to monitor and understand manually, leading to an incredible amount of toil and noise.
We cannot continue to scale monitoring and observability by simply devoting more humans to the task. Artificial Intelligence and Machine Learning have emerged as the cornerstone of a new observability strategy. Using algorithms correctly can eliminate toil, help accelerate the discovery of potential issues across applications and infrastructure, avoid emergencies, maintain agility and ultimately continue delivering innovative business services.
This talk will explain the methods that DevOps practitioners and SRE teams can use to more effectively:
- surface important anomalies and events from the deluge of data
- understand the relationships between alerts
- obtain the context needed to engage the right teams and people
From data discovery to blameless analysis, algorithms shoulder the cognitive load otherwise required of humans, removing operations toil and continuously assuring your customer experience. Learn what DevOps teams can expect from these algorithms and how to apply them.
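As a toy illustration of the anomaly surfacing described above, here is a minimal z-score sketch. The data, threshold, and function name are invented; production AIOps platforms use far more sophisticated models than a static-threshold rule:

```python
import statistics

def surface_anomalies(values, threshold=3.0):
    """Return indices of points more than `threshold` standard deviations
    from the mean -- a naive stand-in for ML-driven anomaly detection."""
    mean = statistics.fmean(values)
    sd = statistics.stdev(values)
    if sd == 0:
        return []
    return [i for i, v in enumerate(values) if abs(v - mean) > threshold * sd]

# 20 ordinary latency readings plus one spike at the end.
latencies = [10.0] * 20 + [100.0]
spikes = surface_anomalies(latencies)  # -> [20], the spike's index
```

Even this crude rule shows the payoff: one flagged index instead of 21 raw data points for a human to eyeball, which is the toil reduction the talk is about.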
We’ve all heard the buzz around pushing application security into the hands of developers, but if you’re like most companies, it has been hard to actually make this a reality. You aren’t alone - putting the culture, processes, and tooling into place to make this happen is tough. Join StackHawk CSO Scott Gerlach as he shares his triumphs and failures while building DevSecOps practices and tools at companies such as GoDaddy, SendGrid, and Twilio. Dig into specific reasons why developers struggle with AppSec and what you can do to make it work better. Whether you’re a seasoned DevSecOps pro or just starting out, this will be an entertaining (and judgement-free!) talk you won’t want to miss!
Wednesday, November 11, 2020
The use of modern software development methods is drastically increasing the rate of software delivery among traditional banks and breaking the view that they can only deliver software every few months. This talk will explore the impact and practices of Modern Engineering and how it has enabled Lloyds Banking Group to adapt to the challenges of a complex world.
Stephen Hawking predicted the 21st century would be the century of complexity, a world which is unpredictable, ever-evolving and unbounded. This complex world requires the Bank to move from the old business paradigm and mindset of controlling time, scope and cost to the Modern Engineering-led business paradigm and mindset of quality, speed and value.
Bringing it to life, we will see how the Bank applied Modern Engineering toolsets to improve its ability to focus on quality, speed and value. We will share examples of how modern engineering practices have helped the Consumer Servicing Value Stream within the Bank become more adaptable and respond to the COVID-19 pandemic. Lastly, we will explore what the new business paradigm of quality, speed and value means for Executives within the Bank.
Key audience takeaways from this talk:
• Learn how Lloyds Banking Group put Modern Engineering into practice in the Consumer Servicing Value Stream
• Understand the modern engineering principles to help you and your organisation adapt to our changing world
• Reflect on how to keep your skills relevant
• Reflect on whether you are clear on your products and how they deliver value to the customers
This presentation shares experiences from managing a test automation project. The system under test is a cloud-based, open IoT operating system.
Digitalization is among the hottest topics in business, with companies investing in transforming their processes by leveraging digital technologies. On this trend, we explain the approaches and techniques applied across the whole testing lifecycle of a cloud-based platform: test levels, priorities, release scopes, regression suites, and the structure of a self-developed automation framework with its infrastructural components and tools.
Like all other journeys, this one had ramps and landings along the way. Lessons learned were continuously applied, and processes were improved by evaluating various strategies and incorporating feedback collected from all parties. Some initiatives did not turn out positively, and the project reacted as an agile organization should. Challenges are listed, the actions taken against them are summarized, and the benefits are visualized by comparing before-and-after situations. The whole progression, from the first stages to the last, gives insight into how the project developed and reached maturity.
This is the story of an automation journey that started with a motto: “We are all in the same boat”. The goal is to share insights into a good test management process and best practices.
Rather than theoretical claims, this talk consists of practical, real-life experience. A full journey will be shared, along with its challenges and the proposals made against them. Successful adoption of the latest technologies and trends is also part of the story: I will talk about API and UI test automation frameworks, zero-downtime release activities such as blue-green deployments, and initiatives like applying artificial intelligence in testing. Instead of what to do, I go over how to do it.
Software is vulnerable. The good news is, software is vulnerable in ways that are known and can be addressed. For the past 15+ years, the security community has been publishing and tracking a list of common security vulnerabilities called the OWASP Top 10. This session provides a brief overview of ten common DevSecOps security vulnerability categories. It's a lot to cover in 25 minutes, so this session focuses on the general concepts.
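By way of a taste, injection (the top category in the 2017 OWASP Top 10) is a vulnerability with a well-known fix: parameterized queries. A minimal sketch with an invented table and data, using SQLite from Python's standard library:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

# A classic injection payload: if the query were built by string
# concatenation, the WHERE clause would become
#   name = 'alice' OR '1'='1'
# and match every row in the table.
user_input = "alice' OR '1'='1"

# Parameterized query: the driver treats the entire payload as a single
# string literal, so it matches nothing.
rows = conn.execute(
    "SELECT name FROM users WHERE name = ?", (user_input,)).fetchall()
# rows == []  -- the injection attempt fails
```

The same principle (never splice untrusted input into executable syntax) underlies the fixes for several other categories on the list.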
As the world changes and remote experiences emerge at the center of daily life, a central question emerges: what does it take to create an experience where all participants are remote?
Todd Greene, CEO of PubNub, will share his insight into the software and network considerations required to bring people, data, and devices together for the future of remote life.
Tune in and gain insight from the industry leader in realtime innovation. Don't miss it!
Successful businesses are built on the shoulders of many different roles and personas. Your personal success certainly depends on your individual contributions, but it also depends on how well you can work across different functions as a team member. Interactions between Developers and Product Management, Developers and Test, or with business focused functions such as Marketing and Sales are key to getting things done in an organization, building a great product, and ultimately your own success! Each role may have different “languages” they speak, priorities and scopes of focus. Navigating these variations is not taught in school, and may initially seem like a huge challenge. Understanding and working well with your counterparts enables everyone’s impact to grow larger than each individual’s contributions, and can help create robust products, close working relationships and increase job satisfaction.
Nic, an Engineering Manager, and Lauren, Product Line Lead, started working together 5 years ago. They have not only built a collaborative engineering/product relationship, but have become good friends as well. They’ll outline the four pillars they think are critical to a great cross-functional relationship: Respect, Empathy, Trust and Communication, and how to cultivate them in your own teams.
You’ll walk away not only with a better understanding of the importance of your colleagues in different roles and how these cross-functional relationships aid in everyone’s success, but also with clear tactics to improve your daily interactions. Nic and Lauren will share their own proven methods for a great cross-functional relationship including; how they support and amplify each others’ needs, effective methods and tools for communicating with each other, and how together they prioritize their work efforts for the most impact.
BUSINESS PROBLEM & CHALLENGE
Network automation was not well practiced or well understood inside our network engineering team, but it was sorely needed. We needed to reduce effort and mistakes in daily management tasks by minimizing direct human interaction with network devices. High on our priority list was improving network security by recognizing and fixing security vulnerabilities, and increasing network performance.
HOW WE OVERCAME THE CHALLENGE
We started by simplifying daily workflows, baselining our configurations and removing snowflakes. While this can be very labour-intensive at the outset when you’re working on a global scale in a highly critical customer environment, the long-term benefits far outweighed the labour.
Next, we created an inventory file listing all network devices by type, model, location and IP address. This let us retrieve information about devices and, using network programming and automation, deploy to all devices or to a subset (e.g. only those in a specific area), depending on what was needed. The benefit: we avoided manually logging into hundreds of different devices to add configuration to each one.
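An inventory like the one described might be modeled as follows. The device names, fields and values here are invented for illustration; the real inventory drove the team's own automation tooling:

```python
# Hypothetical inventory entries; the real file listed every production device.
inventory = [
    {"name": "sw-ams-01", "type": "switch", "model": "C9300",
     "location": "AMS", "ip": "10.0.1.1"},
    {"name": "rt-nyc-01", "type": "router", "model": "ASR1001",
     "location": "NYC", "ip": "10.0.2.1"},
    {"name": "sw-nyc-02", "type": "switch", "model": "C9300",
     "location": "NYC", "ip": "10.0.2.2"},
]

def select(devices, **criteria):
    """Return the devices whose attributes match every given filter, so a
    change can target all devices or just a subset (e.g. a single site)."""
    return [d for d in devices
            if all(d.get(k) == v for k, v in criteria.items())]

# Target only the switches in one location, instead of every device.
nyc_switches = select(inventory, location="NYC", type="switch")
```

Keeping the inventory as structured data is what makes "deploy to a subset" a one-line filter rather than a manual login session per device.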
Overcoming these two big challenges set us up for success and enabled us to deploy at a global scale. We lived by the mantra:
“If it’s not repeatable, it’s not automatable. And if it’s not automatable, it’s not scalable.”
LEARNINGS AND MEASURABLE OUTCOMES
So what did we learn? For starters, it can be hard to automate a use case or test the same way you would when doing it manually. Testing that requires physical intervention, such as losing service provider links or hardware failures, is also a challenge, as automating something like that is very tricky. We also learned that code reviews are extremely important: shared code ownership means the entire team can make changes anywhere, at any time.
And what we’re the measurable outcomes?
Faster deployment times - we were able to efficiently push changes to over 300 network devices and audit the configuration of our global network, cutting execution time from days to hours.
Removed the fear of large and complex network changes - the accuracy and efficiency with which we were able to deploy at scale gave the business and leadership more confidence in subsequent large-scale network changes and deployments.
Faster feedback on network changes - it allowed us to get reviews on network configuration changes with version control and peer review, treating infrastructure as code (IaC).
Helped us adhere to challenging PSIRT/CSIRT timeframes for security vulnerabilities.
Speed of deployment; speed of feedback on network changes; speed of adherence to PSIRT/CSIRT timeframes; confidence and buy-in from senior leadership on subsequent deployments!
Transform Performance, Alignment, Confidence, Happiness and Quality
Flow Engineering takes just the best parts of Value Stream Mapping and Capability Mapping, forming a framework of techniques you can use right now, with tools you already have. Discover opportunities, build and share your vision, and save hours of toil every week to confidently invest in what's next.
- How Value Streams and Capabilities affect Flow
- How to build actionable, data-driven maps that make the path clear to everyone
- How to use maps to confidently decide what to tackle and how