API & Microservices
Tuesday, April 27, 2021
Not too long ago, a reactive alternative to the JDBC API was released, known as Reactive Relational Database Connectivity (R2DBC). While R2DBC started as an experiment to enable integration of SQL databases into systems that use reactive programming models, it has since matured into a robust specification that can be implemented to manage data in a fully reactive, completely non-blocking fashion.
In this session, we’ll briefly go over the fundamentals that make R2DBC so powerful. We'll keep the slides light so that we can jump directly into application code to get a first-hand look at the recently released R2DBC client from MariaDB. From there we'll examine how you can take advantage of crucial concepts, like event-driven behavior and backpressure, that enable fully reactive, non-blocking interactions with a relational database.
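The backpressure idea the session refers to can be sketched with nothing but the JDK's `java.util.concurrent.Flow` API; this is a minimal, hypothetical demo, not the MariaDB R2DBC client itself. The subscriber requests one item at a time, so the producer can never overwhelm it:

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Flow;
import java.util.concurrent.SubmissionPublisher;

public class BackpressureDemo {

    // Consume items with demand-driven pull: one request(1) per item received.
    static List<String> consume(List<String> rows) throws InterruptedException {
        List<String> received = new CopyOnWriteArrayList<>();
        CountDownLatch done = new CountDownLatch(1);
        try (SubmissionPublisher<String> publisher = new SubmissionPublisher<>()) {
            publisher.subscribe(new Flow.Subscriber<String>() {
                private Flow.Subscription subscription;

                @Override
                public void onSubscribe(Flow.Subscription s) {
                    subscription = s;
                    s.request(1); // initial demand: exactly one item
                }

                @Override
                public void onNext(String item) {
                    received.add(item);
                    subscription.request(1); // pull the next item only when ready
                }

                @Override
                public void onError(Throwable t) { done.countDown(); }

                @Override
                public void onComplete() { done.countDown(); }
            });
            rows.forEach(publisher::submit);
        } // close() lets pending items drain, then signals onComplete
        done.await();
        return received;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(consume(List.of("row-1", "row-2", "row-3")));
    }
}
```

An R2DBC driver applies the same Reactive Streams contract to database rows, which is what makes the interaction non-blocking end to end.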
Kafka is great, but it's not well suited for event sourcing.
For streaming events, Kafka is a great solution: scalable and easy to use. But for event sourcing it's not a perfect fit. One of the things we can't get from Kafka is quickly retrieving all events with the same id, which is exactly what an event store is made for. This means it's not easy to quickly rehydrate an aggregate.
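To make that gap concrete, here is a minimal, purely illustrative in-memory event store (class and method names are invented for this sketch): the per-aggregate lookup it offers in a single call is what a plain Kafka topic does not give you directly.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch of an event store keyed by aggregate id.
public class EventStore {
    private final Map<String, List<String>> byAggregate = new HashMap<>();

    // Append an event to the stream of one aggregate, preserving order.
    public void append(String aggregateId, String event) {
        byAggregate.computeIfAbsent(aggregateId, k -> new ArrayList<>()).add(event);
    }

    // Rehydration starts here: fetch every event for one aggregate, in order.
    public List<String> eventsFor(String aggregateId) {
        return byAggregate.getOrDefault(aggregateId, List.of());
    }

    public static void main(String[] args) {
        EventStore store = new EventStore();
        store.append("order-42", "OrderCreated");
        store.append("order-7",  "OrderCreated");
        store.append("order-42", "ItemAdded");
        System.out.println(store.eventsFor("order-42")); // prints [OrderCreated, ItemAdded]
    }
}
```

With Kafka alone, answering `eventsFor("order-42")` means scanning a partition; the talk's point is that extra machinery must be built on top to make that lookup fast.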
In an event sourcing architecture, CQRS is often applied. To easily handle queries, some intelligent message distribution is needed, which is also something that is not available in the Kafka API.
I will explore these problems and offer some solutions by building a few things on top of Kafka instead of sending and receiving messages directly from Kafka.
For older software developers new trends are often nothing but "old wine in new bottles." That's exactly what I thought when I heard about microservices for the first time.
Through my work as a software renovator, though, I realized that many aspects of microservice architectures have a huge impact on value retention and can greatly simplify future migrations.
The first part of the talk will explain how microservice architectures are based on some very old principles, and will discuss the advantages they bring for the life cycle of an application. In the second part, I will discuss my experience in a recent customer project and how it demonstrates how to move from a monolithic legacy application to a modern, sustainable microservice architecture.
When defining APIs, the most common considerations are what the payload looks like, approached from an implementer's perspective.
However, good APIs, whether they're internal or public, are far more than just a payload description and need a consumer's perspective.
In this session we look at what makes up a good API: from OWASP Top 10 implications to ISO and data definitions, to how to make it easy for your consumers, why these points are important, and their implications. We'll explore techniques to overcome the challenges seen when producing good APIs.
Whilst we all think we know how to define APIs, you'll be surprised at the things that get overlooked and the opportunities to do better.
Good news, everyone! Helidon got a jet engine! Now Helidon is packed with modern, high-tech, James Bond-level features and it flies like a rocket! Also, thanks to a nicely crafted fitness plan, the weight has been reduced and concentration increased, resulting in less RAM consumption and faster startup. Come to my live-coding session to learn about all of the new features added in Helidon 2.1, such as GraalVM native image support in Helidon MP, MicroProfile Reactive Streams and Reactive Streams Operators, and the Helidon DB Client and HTTP Client in Helidon SE. I will also be demonstrating the new command-line tool and live-reloading feature, which will nitro-boost your development process.
Businesses in every industry are using event streaming to build real-time applications and drive innovative new experiences across web, mobile, and IoT systems and applications. Managing the distribution and operation of real-time event streams over the Internet, mobile, and satellite networks, external to the corporate network, in a cost-efficient, reliable, and secure manner, presents a unique set of development challenges, particularly in relation to scalability.
The wide array of corporate applications requires different types of scale, including the abilities to serve large and often variable client volumes, to handle tens or hundreds of thousands of unique data streams, and to provide high throughput of data across geographically dispersed and/or remote regions. This talk will highlight how an Intelligent Event Data Platform is purpose-built to deliver optimal performance, and to reduce operational risk and cost across both axes of scale (traffic volume and data throughput), regardless of congested or fluctuating network conditions.
The presentation will also discuss how popular platforms, such as Apache Kafka, do not adequately address the challenges of the Internet, i.e. beyond the edge of corporate networks. Undoubtedly, Kafka can reliably stream high-volume data within enterprises’ networks. However, serious issues occur over the last mile, i.e. when data must be delivered over the edge onto the public Internet and mobile networks. Kafka is not designed for last-mile streaming, which poses application and system development scalability challenges. This talk will draw from real-world examples of how to address the challenges and successfully extend Kafka event streams across the Internet.
As developers, we can remember the time when Nagios was state-of-the-art technology. We hated looking at all the numbers that seemed disconnected from our reality. The world has changed, though, and Observability provides us with a new Swiss Army knife in our toolbox. Used correctly, it helps to improve reliability, brings additional focus to what matters, the business logic, and offers aid in case of problems or failures. Especially in time-critical situations, a distributed system with many service dependencies can be hard to analyze.
In this session you will learn how to use Observability to assist developers instead of distracting them.
Blazor and GraphQL combined will revolutionize how we build rich SPA applications with pure .NET.
Blazor, for the first time in years, gives us .NET developers the ability to develop applications that run in the browser. This allows us to take the knowledge we acquired in the backend or with desktop applications and use it on the web.
GraphQL, on the other hand, changed how we work with data fetching. With GraphQL, the frontend developer defines what the interface between the frontend and the backend looks like. We no longer have friction between backend and frontend developers and are able to iterate much faster.
Let us explore together how we can put those two together and change how we design components by binding them to GraphQL fragments. With GraphQL, the data becomes front and centre and drives our application.
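As a hedged illustration of the fragment idea (the type and field names here are invented for this example), each component can declare exactly the fields it renders as a GraphQL fragment, and the page query composes those fragments:

```graphql
# Each UI component owns a fragment describing only the fields it renders.
fragment TaskListItem_task on Task {
  id
  title
  isComplete
}

query GetTasks {
  tasks {
    ...TaskListItem_task
  }
}
```

Because the fragment lives next to the component that uses it, changing what a component displays changes what is fetched, with no coordination overhead between frontend and backend.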
After gaining a fundamental understanding of how GraphQL improves our data-fetching needs in web applications, we will move on and build a nice real-time application with Blazor and GraphQL. Let us together build a truly engaging next-gen application and push Blazor to the limit.
Wednesday, April 28, 2021
How do you build increasingly better APIs? It’s easier than you may think! In this session, we will talk about how to build better APIs with API management and show the key advantages of using APIM to drive your API development. We will cover the basics of APIM features and some of the use cases for these features.
Whether you are looking to provide better service for your users, better reporting and metrics for your stakeholders, or to help your support team to become more efficient at supporting your API portfolio, stop in to see how API management can power these improvements.
Ruben Rincon from the HelloSign team will show you the benefits of incorporating eSignature directly into your website to boost your users' onboarding, and will demo a practical sample using NodeJS and React.