Enterprise Tech Trends
Tuesday, November 10, 2020
One of the largest banks in the U.S. is focused on creating first-class customer experiences. To do this, they continue to modernize their applications towards a next-generation application architecture that relies on a Microservices architectural style. Valuable data in core systems is unlocked and exposed to experience layers via APIs, and used to create new and unique customer experiences. In this talk you will learn key lessons from implementing Microservices at scale, including establishing a centre of excellence (CoE), governance through code, decentralized governance, and the organizational and cultural changes needed to support the model.
The event-driven paradigm is one of the most popular microservices architecture styles, and many enterprises are leveraging it in their digital journeys, with the promise of making the enterprise more reactive to changing business needs. In this talk, I will share my experience building enterprise event-driven systems, while elaborating on the fallacies of building and adopting such systems.
A discussion on the changes, trends, and database technologies that are going to impact your business in the next 12-18 months.
In the current technology landscape, we have a lot of great innovation happening, especially when it comes to database technology. Examples include new data models such as time series and graph, and systems focused on solving the SQL-at-hyper-scale problem, a long-elusive goal, as scale had become synonymous with NoSQL environments. We now have a new cloud-native database design coming to market, using the power of Kubernetes as well as employing serverless concepts.
In this presentation, we will look at changing trends in database technologies and what is driving them, as well as changes to Open Source licenses, Cloud-based deployment, and an emerging class of not-quite Open Source database software.
To make good decisions you need to use all the available information. For many companies, a lot of this information is locked up within their infrastructure, and getting to it is both difficult and time-consuming. IT enterprises need to release this stored data to allow for better decision-making, ultimately saving time, reducing waste, and lowering costs.
We’ve talked to many organisations, and they all suffer from the same problem: they’re not easily able to see what’s right in front of them. They’re unable to answer simple questions about their infrastructure and software. They simply don’t know what they’ve got, where it is, or exactly how many of “it” they have. It’s a common problem, and it comes down to a lack of available tools to fit a multitude of physical and virtual challenges. This puts pressure on operational staff to deliver the answers, but with no built-for-purpose tools available, they have to piece the information together, taking a lot of time and effort. This massively reduces their productivity, and the company’s ability to respond quickly, when the information they need should be readily available.
Let's first look at some of the questions we regularly ask of our infrastructure and applications. You should be able to find the answers to these questions in a very short space of time, if you don't already know them. If you can't, you have a problem.
· How many servers do you have across all cloud and on-prem environments? (You're paying for every one of them.)
· Which teams support the various parts of your infrastructure?
· Which containers and build versions are running on all your Kubernetes/OpenShift clusters?
· Are all your servers patched to the latest release?
· Can you easily collect the specific data needed for a simple licence renewal?
· Do your servers meet the agreed CIS controls? Are they as secure as possible?
· How many servers are still running a vulnerable version of software (a known CVE)?
· Do any of your servers have an uptime of over 3 months?
· Which regions do all your cloud instances run in?
Reading through this list will probably resonate with some of you, either because you asked the question or because you had to find the answer. How did you find the answers? Was it straightforward? Did it only take you 2 minutes?
I'm betting the answer is probably NO.
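Once inventory data is actually consolidated in one place, most of these questions reduce to trivial queries. The sketch below is purely illustrative (the record fields, names, and values are invented, not from any real tool) of how a few of the questions above, such as total server count, cloud regions in use, and long uptimes, become one-liners over collected data:

```python
from collections import Counter
from datetime import datetime, timezone

# Hypothetical inventory records, as a built-for-purpose tool might
# collect them from cloud APIs and on-prem agents (fields are invented).
inventory = [
    {"name": "web-01", "provider": "aws", "region": "eu-west-1",
     "last_boot": "2020-06-01T08:00:00+00:00", "team": "platform"},
    {"name": "db-01", "provider": "on-prem", "region": "dc-london",
     "last_boot": "2020-10-20T08:00:00+00:00", "team": "data"},
    {"name": "api-01", "provider": "gcp", "region": "europe-west2",
     "last_boot": "2020-11-05T08:00:00+00:00", "team": "platform"},
]

def total_servers(inv):
    """How many servers are we paying for, across every environment?"""
    return len(inv)

def regions_in_use(inv):
    """Which regions do our cloud instances run in?"""
    return Counter(r["region"] for r in inv if r["provider"] != "on-prem")

def long_uptime(inv, now, days=90):
    """Which servers have an uptime of over roughly 3 months?"""
    return [r["name"] for r in inv
            if (now - datetime.fromisoformat(r["last_boot"])).days > days]

now = datetime(2020, 11, 10, tzinfo=timezone.utc)
print(total_servers(inventory))     # 3
print(regions_in_use(inventory))
print(long_uptime(inventory, now))  # ['web-01']
```

The hard part, of course, is not the querying but the collecting: keeping a record set like this accurate across physical, virtual, and container environments is exactly the tooling gap described above.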
This talk will focus on why inclusive remote teams are better, and how to build them. Starting with “why?” and moving towards “how?”, it will explain key methods of inclusivity (inclusive design, data analysis, building inclusive teams, leading with empathy, and an SDLC with diversity) and share resources, because turning inclusion from an overhead cost into a business enabler is a need of tomorrow.
Apache Kafka is getting used as an event backbone in new organizations every day. We would love to send every byte of data through the event bus. However, most of the time, connecting to simple third party applications and services becomes a headache that involves several lines of code and additional applications. As a result, connecting Kafka to services like Google Sheets, communication tools such as Slack or Telegram, or even the omnipresent Salesforce, is a challenge nobody wants to face. Wouldn’t you like to have hundreds of connectors readily available out-of-the-box to solve this problem?
Due to these challenges, communities like Apache Camel are working on how to speed up development in key areas of the modern application, like integration. The Camel Kafka Connector project, from the Apache Software Foundation, has enabled Camel's vast set of connectors to interact with Kafka Connect natively. So developers can start sending and receiving data between Kafka and their preferred services and applications in no time, without a single line of code.
In summary, during this session we will:
- Introduce you to the Camel Kafka Connector sub-project from Apache Camel
- Go over the list of connectors available as part of the project
- Showcase a couple of examples of integrations using the connectors
- Share some guidelines on how to get started with the Camel Kafka Connectors
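To give a feel for the "no code" claim, a Camel Kafka sink connector is configured rather than programmed. The fragment below is an illustrative sketch of the general shape of such a configuration (the connector class, endpoint properties, and values shown here are placeholders; the exact property keys depend on the connector and version, so check the project's documentation):

```properties
# Sketch of a Kafka Connect configuration for a hypothetical
# Camel Slack sink: records from the "alerts" topic are posted
# to a Slack channel. All names and values are illustrative.
name=slack-sink-example
connector.class=org.apache.camel.kafkaconnector.slack.CamelSlackSinkConnector
topics=alerts
camel.sink.path.channel=#alerts
key.converter=org.apache.kafka.connect.storage.StringConverter
value.converter=org.apache.kafka.connect.storage.StringConverter
```

Deploying a connector like this is a matter of dropping the connector archive on the Kafka Connect plugin path and submitting the configuration, with no application code involved.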
Many consider agile a process to implement within an existing organization. A set of rules to follow that will produce some useful outcomes. This approach can provide improvements in many different structures of organizations. As agile maturity improves, however, the benefits can become limited by the structure and culture of the organization itself.
Agile is more than a framework for organizing tasks for a team. Agile is a culture, a mindset, and a structure for improving the velocity of innovation and providing real business value to customers. To gain the most benefit from Agile it must be considered as part of a more extensive system that incorporates organizational structure, software architecture, and company culture.
This talk considers the interactions between how the work, the software, and the people are organized in high performing agile organizations. Using my own experiences at companies large and small, I will share what I have learned and some best practices I use. These lessons will help you as you improve and scale your Agile teams.
I will discuss:
* How to structure your organization to remove the bottlenecks in coordination and decision-making that can slow velocity to a crawl
* How to take advantage of modern systems architectures to allow teams to move faster
* Using data to provide accountability for autonomous teams without creating more process
By the end, you will have concrete examples and ideas that you can bring back to your team to help you improve and scale agile within your organization.
Imagine your team has faced a recent Kafka outage or needs to serve digital content across multiple regions. Then, imagine you have real-time customer orders that need to be handled and replicated to various teams across the world. Now imagine you have to prevent both (1) data loss and (2) duplication of that data, such as with customer orders.
This interactive talk will have members of the audience send multiple text messages to a server connected to Kafka. Then, in the middle of the exercise, the Kafka server will shut off – creating what most organizations would generously call a clusterf***. The session will show how you can recover from such a predicament using resilience engineering with Kafka’s MirrorMaker 2.0 and smart application workflows.
Key session takeaways include:
- How to ensure disaster recovery when using Kafka
- How to duplicate records
- How to use MirrorMaker 2.0 like a pro
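As a taste of the replication side of the session, MirrorMaker 2.0 is driven by a single properties file describing the clusters and the replication flows between them. The fragment below is a minimal sketch (cluster aliases and bootstrap addresses are placeholders, and property availability depends on your Kafka version, so consult the MirrorMaker 2.0 documentation):

```properties
# Minimal MirrorMaker 2.0 configuration sketch.
clusters = primary, backup
primary.bootstrap.servers = primary-kafka:9092
backup.bootstrap.servers = backup-kafka:9092

# Replicate all topics and consumer groups from primary to backup.
primary->backup.enabled = true
primary->backup.topics = .*
primary->backup.groups = .*

# Emit checkpoints so consumers can fail over and resume close to
# where they left off. Duplicates remain possible after failover,
# so application workflows should be idempotent.
emit.checkpoints.enabled = true
```

A configuration like this is typically launched with the `connect-mirror-maker.sh` script that ships with Kafka, and the checkpoint topics it produces are what smart application workflows use to translate offsets when failing over to the backup cluster.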
In response to the current environment, many companies are having employees work remotely to keep them safe and healthy. And it’s no secret that in this “new normal,” human connection has become even more important, as people (both personally and professionally) have to find new ways of working together without in-person interaction. While there is no rule book for managing teams in the midst of a pandemic, there are steps managers can take to ensure their global teams of developers are cross-collaborating and drawing on the benefits of workplace international and cultural diversity.

Notably, developers are, by nature, well-suited to interacting online for work (and leisure). However, though developers might not have to learn to use as many new tools, they will have to become more flexible (and patient) as other departments (which might be less digital in terms of communication) shift to work more online.

Additionally, organizations and their employees are facing new challenges from a privacy and security standpoint, and they need to find new ways to combat these pandemic-related breaches. For example, the general consensus is that people are seeing a rise in phishing emails that use language about the pandemic as bait. And as companies jump to offer up their vast amounts of data to help develop new COVID-related solutions, such as using cell phone data to monitor social distancing, there’s an even greater issue with data privacy.

This session will discuss how developer teams can prepare for increased remote work over the course of the year, while keeping collaboration, data privacy, and security top of mind. It will also center on the principles that developers, and entire companies, should stand by when using data.
Preventing the spread of COVID-19 has been our collective priority in this pandemic, and the current climate is allowing us to keep a pulse on data in a very important way. SingleStore is working with True Digital Group to prevent the spread of COVID-19 in Thailand, using anonymized cell phone location data (500,000 location events every second, across more than 30 million mobile phones) to track population movement in two-minute intervals. This vast amount of real-time geospatial data provides a view of population densities, enabling the Thai government authorities to see when large gatherings are forming and helping them quickly adapt their facilities.
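The core of a density view like the one described is tumbling-window aggregation: floor each event's timestamp to its two-minute interval and count devices per area per window. The sketch below is a toy illustration of that idea only (the event shape, area identifiers, and data are invented, and this is not SingleStore's or True Digital's actual pipeline):

```python
from collections import Counter
from datetime import datetime, timezone

def two_minute_bucket(ts: datetime) -> datetime:
    """Floor a timestamp to the start of its two-minute interval."""
    return ts.replace(minute=ts.minute - ts.minute % 2, second=0, microsecond=0)

# Toy location events: (timestamp, area id). A real pipeline would
# ingest hundreds of thousands of events like these per second.
events = [
    (datetime(2020, 11, 10, 9, 0, 15, tzinfo=timezone.utc), "bangkok-01"),
    (datetime(2020, 11, 10, 9, 1, 40, tzinfo=timezone.utc), "bangkok-01"),
    (datetime(2020, 11, 10, 9, 2, 5, tzinfo=timezone.utc), "bangkok-01"),
]

# Count events per area per two-minute window to spot forming gatherings.
density = Counter((two_minute_bucket(ts), area) for ts, area in events)
for (window, area), count in sorted(density.items()):
    print(window.isoformat(), area, count)
```

At the scale described in the talk, the same grouping would be expressed as a streaming SQL aggregation over a distributed store rather than in-process Python, but the windowing logic is the same.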
Wednesday, November 11, 2020
In a distributed world, we all depend on distributed systems more than ever. As these systems become more complex, their failures are much harder to predict.
Chaos Engineering introduces the injection of failures as a discipline for building confidence in the resilience capability of the systems.
Chaos Engineering and postmortems are mainly considered the domain of operations teams, but considering failures from the development side provides teams with the opportunity to execute potentially highly disruptive experiments in a safer, more controlled way.
In this talk I am going to explain why all roles involved in building software products should practice Chaos Engineering. I will show the benefits, and how these exercises can provide recovery procedures and validate resolution protocols.