API & Microservices
Wednesday, February 17, 2021
We will see how to manage a complex microservices mesh using Istio and Anthos Service Mesh to get application observability, security, and intelligent request routing with no changes to application code.
We will show how to gain visibility into application golden signals, monitor Service Level Objectives and error budgets, centrally manage encryption, authentication, and authorization, and define request routing policies.
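As a concept sketch, the error-budget arithmetic behind SLO monitoring can be expressed in a few lines. This is illustrative Python, not Istio or Anthos Service Mesh code, and the 99.9% target and 30-day window are assumptions:

```python
def error_budget_minutes(slo_target: float, window_minutes: int) -> float:
    """Minutes of allowed unavailability in the window under the SLO."""
    return window_minutes * (1.0 - slo_target)

def budget_remaining(slo_target: float, window_minutes: int,
                     bad_minutes: float) -> float:
    """Fraction of the error budget still unspent (negative = SLO blown)."""
    budget = error_budget_minutes(slo_target, window_minutes)
    return (budget - bad_minutes) / budget

# A 99.9% availability SLO over 30 days allows about 43.2 bad minutes.
WINDOW = 30 * 24 * 60
assert abs(error_budget_minutes(0.999, WINDOW) - 43.2) < 0.01
```

Burn-rate alerting in SLO tooling is built on exactly this ratio: if bad minutes accumulate faster than the budget allows, you page before the window is exhausted.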
Do you remember that microservice that you wrote and successfully deployed in production a few weeks ago? I am afraid to tell you that it is underperforming and occasionally generating faults that are compromising the rest of the system. Could you please join our war room in an hour to help us fix this as quickly as possible?
For most developers this reads like the plot of a horror movie. There is nothing scarier than fixing a problem with no idea where to start: your microservice is just one piece of a larger puzzle composed of many other microservices, databases, and distributed systems. Though observability is the right answer for scenarios like this, the reality is that it is often not yet in place to actually make a difference.
The internet is not running short of resources that explain in detail what this new buzzword called observability is about and why you need it so badly. However, very few explain how to implement it quickly enough to be useful in troubleshooting scenarios like the one above. This talk will show, in 40 minutes, how to implement observability from scratch and how to leverage logs, metrics, and traces to identify problems in microservices.
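To make the logs-metrics-traces triad concrete, here is a minimal, dependency-free Python sketch of structured logging with a propagated trace ID; the field names are illustrative, not taken from any specific tool:

```python
import json
import time
import uuid

def new_trace_id() -> str:
    """One ID generated at the edge and passed to every downstream call."""
    return uuid.uuid4().hex

def log_event(trace_id: str, service: str, message: str, **fields) -> str:
    """Emit a single JSON log line; the shared trace_id lets you stitch
    together lines from different microservices after the fact."""
    record = {"ts": time.time(), "trace_id": trace_id,
              "service": service, "message": message, **fields}
    return json.dumps(record)

trace = new_trace_id()
line = log_event(trace, "checkout", "payment authorized", amount_cents=1299)
assert json.loads(line)["trace_id"] == trace
```

Filtering production logs by one trace_id then reconstructs a request's path across services, which is the starting point the war-room scenario above calls for.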
We have been hearing a lot about the benefits of using the reactive approach to solve concurrency problems in distributed systems. While reactive programming refers to implementation techniques at the coding level, at the deployment and runtime level we can leverage a robust yet flexible and lightweight framework such as Vert.x to deliver. In this session, we will first learn what the missions of a reactive system are, which include, among many things, handling multiple concurrent data streams, controlling backpressure, and managing errors in an elegant manner. The very loosely coupled nature of a reactive system also lends itself well to building microservices that communicate over its messaging infrastructure. We will also discuss the polyglot nature of Vert.x, its event loop, and its use of the verticle model. Live coding will accompany this session to illustrate how to program a simple use case in multiple JVM languages such as Java and Kotlin; we will then build and dockerize it to be deployed as a serverless container on a Knative cluster in the cloud.
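Vert.x itself is a JVM framework, so as a language-neutral sketch, the session's core ideas (loosely coupled handlers exchanging messages over a bus, with backpressure) can be illustrated with Python's asyncio and a bounded queue; all names here are invented for illustration:

```python
import asyncio

async def event_bus_demo():
    """Two 'verticle-like' coroutines exchanging messages over a bounded
    queue; the bound gives simple backpressure (a full queue suspends
    the producer instead of letting it flood the consumer)."""
    bus: asyncio.Queue = asyncio.Queue(maxsize=8)
    received = []

    async def producer():
        for i in range(5):
            await bus.put({"order_id": i})  # suspends if the bus is full
        await bus.put(None)  # sentinel: tells the consumer to stop

    async def consumer():
        while (msg := await bus.get()) is not None:
            received.append(msg["order_id"])

    await asyncio.gather(producer(), consumer())
    return received

assert asyncio.run(event_bus_demo()) == [0, 1, 2, 3, 4]
```

The same shape scales down to a single event loop and up to a clustered event bus; the bounded buffer is what keeps a fast producer from overwhelming a slow consumer.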
Managing the end-to-end lifecycle of an API program requires detailed knowledge and management of several assets: from the API specs published in an API portal to the API policies and artifacts (a lot of them) in an API gateway. And we cannot forget the backend solution that supports the API business logic, or the several environments we have to manage (from sandbox to production, for example).
This talk is about how to manage changes across all the artifacts associated with an API lifecycle in an automated way, using DevOps principles and tools, to keep the value stream flowing efficiently across the full end-to-end API lifecycle.
Thursday, February 18, 2021
The evolution of architecture from monolithic to microservices-based has enabled organizations to meet the ever-evolving needs of their customers. The need for getting insights into these microservices has become critical for developers and operations teams alike. In this session, we will explore how observability plays a critical role in the microservices world. We will deep dive into distributed tracing to achieve full observability and monitoring for production environments. Finally, we will discuss the checklist that every DevOps person should look into for incorporating observability into their environment. The attendees can expect to learn about the three pillars of observability, details of distributed tracing and strategies for achieving observability and monitoring.
Not too long ago, a reactive variant of the JDBC API was released, known as Reactive Relational Database Connectivity (R2DBC). While R2DBC started as an experiment to enable integration of SQL databases into systems that use reactive programming models, it has since matured into a robust specification that can be implemented to manage data in a fully reactive and completely non-blocking fashion.
In this session, we’ll briefly go over the fundamentals that make R2DBC so powerful. We'll keep the slides light so that we can jump directly into application code to get a first-hand look at the recently released R2DBC client from MariaDB. From there we'll examine how you can take advantage of crucial concepts, like event-driven behavior and backpressure, that enable fully reactive, non-blocking interactions with a relational database.
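R2DBC is a Java specification, but the pull-based demand that backpressure rests on can be sketched in a few lines of Python; the RowPublisher class and its request method are invented for illustration and are not the R2DBC API:

```python
class RowPublisher:
    """Delivers rows only when the subscriber signals demand, so a slow
    consumer is never flooded -- the essence of reactive backpressure."""
    def __init__(self, rows):
        self._rows = iter(rows)

    def request(self, n: int) -> list:
        """Return at most n rows, honoring the subscriber's demand."""
        batch = []
        for _ in range(n):
            try:
                batch.append(next(self._rows))
            except StopIteration:
                break
        return batch

pub = RowPublisher({"id": i} for i in range(5))
assert pub.request(2) == [{"id": 0}, {"id": 1}]              # consumer asks for 2
assert pub.request(10) == [{"id": 2}, {"id": 3}, {"id": 4}]  # only 3 remain
```

In a real reactive driver the database cursor plays the iterator's role, so rows are only fetched from the server as the application signals it can handle them.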
Shifting application security left and into the hands of developers has been a topic of discussion, but remains just that: a discussion. Legacy solutions in the market are not built from the ground up to enable this and achieve DevSecOps. In this session we will discuss the key features that your AppSec testing tools need in order to enable shift left, or shift everywhere, and empower developers to detect, prioritise and remediate security issues early, as part of your agile development and unit testing processes, without slowing down DevOps. The talk will include specific examples from leading organizations that have deployed these solutions, the business impact they have achieved, and the steps you can take to achieve the same across your applications and APIs.
Human connections are so important to us all. Adding the ability to have real-time conversations allows communities to grow and flourish, increases trust in transactions, drives down cancellations, and improves stickiness. Join us in exploring what makes humans, and real-time communications, tick.
With trillions of programmable endpoints to be interconnected and the “new” urgency to drive digitization, the role of APIs in the open, hybrid-cloud world is poised to grow exponentially. In addition, APIs will create powerful ways for us to streamline how we engage with our ecosystem and together deliver new services and applications that empower our consumers and developers to build solutions.
IBM API Hub is the platform to drive the API Economy and easily discover, try, adopt and consume APIs from all of IBM and an open ecosystem. Key highlights are –
· Built on Industry-proven platform of IBM API Connect in Cloud
· Simplified onboarding
· Subscription and Key management
· Self-service lifecycle management
· Cognitive based Recommendation
· High availability infrastructure
Explore the integrated Playground experience that comes loaded with code accelerators to put you on a fast track to building solutions for your business needs.
This session will reinforce the importance of APIs in the “Digital” world and how IBM is driving the API Economy mission with a reimagined developer experience as well as ecosystem.
Observability, instrumentation, telemetry--what does it all mean? This introduction to observability is for software practitioners who want to better understand the health of their production systems. Learn how to generate better data and gain new insights. You'll walk away ready to use observability to level up everyone on your team!
Creating consistent, quality APIs is a tough job, especially as ecosystems move past 100 APIs and beyond. API Style Guides can help, but developers rarely read them, and those who do don't always remember everything. API design review and governance tooling is maturing to help solve this, automating API Style Guides and allowing teams to introduce new guidance over time. Learn how to roll this out at your company easily.
The low-code movement is creating a lot of buzz. In this workshop, hear about how no/low-code identity verification is making compliance and fraud prevention more cost-effective and accessible.
-How no/low-code solutions can be easily implemented by anyone, including non-technical entrepreneurs
-Why identity verification is making cross-border KYC and fraud mitigation faster with less friction
-How an identity platform streamlines verification workflows by leveraging multiple identity solutions and a network of single-point data sources
Friday, February 19, 2021
Did you know that over $200M has been paid out to Shopify app developers in the past 4 years? That Salesforce AppExchange is a $4B+/year business? In this session, we're going to demystify this under-the-radar world of SaaS platform app ecosystems. We'll be discussing hot topics related to SaaS app ecosystems like:
* When does a SaaS platform open up for developers to build apps on?
* What kind of business models exist for these SaaS app ecosystems?
* What team should own this?
* What kind of internal apps do companies build on SaaS platforms?
* Can these apps become venture-scale businesses?
Bring your questions and get ready to learn about this underdog world of SaaS-on-SaaS startups that is quietly generating several billion in revenue under the hood!
PRO SESSION: The Potential Pitfalls of Deploying Real-Time APIs in an Event-Driven Microservices Architecture
Today, software must be efficient, adaptable, and easy to use. Microservices deliver simple solutions to complex problems and are in vogue within the developer community. So is event-driven architecture: a system of loosely coupled microservices that exchange information with each other through the production and consumption of events.
This type of architecture is particularly well suited to event streams, and through this in-stream processing businesses are able to make fast decisions, literally in milliseconds. Event stream processing enables applications to respond to changing business conditions as they happen and make decisions based on all available current and historical data.
By implementing event-driven architectures, it is possible to build a resilient microservice-based architecture that is truly decoupled, giving increased agility and flexibility to the development lifecycle. Many developers now consider event-driven architecture as best practice for microservices implementations.
This trend in development approach coincides with an increasing business desire for real-time data: producers and consumers demand faster experiences, which has led to the emergence of real-time APIs. But… problems exist.
Real-time data management and delivery introduce a unique set of development challenges that must be understood and effectively addressed in order to reliably deliver real-time data and scale with ease.
When you expose event streams to millions of consumers, the assumptions made for traditional APIs no longer hold true. Developing real-time APIs means rethinking latency, scalability, security, and publication from the ground up. This talk will discuss the hard lessons I have learned while helping companies successfully deploy real-time APIs in an event-driven microservices environment. I will touch upon:
• The realities of the Internet, and how to address the challenges at the transport, protocol, and application layers
• What caching looks like in a push-oriented world, and how it drives significant efficiencies
• How to prevent your data model from impacting security and latency at scale over the Internet
• Why basic Pub/Sub is not sufficient for today's event-driven applications.
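To ground the last bullet, here is a bare-bones in-process pub/sub broker in Python; everything that real event platforms add on top (persistence, replay, delivery guarantees, fan-out across the Internet) is exactly what basic Pub/Sub lacks:

```python
from collections import defaultdict

class TinyBroker:
    """Topic-based pub/sub at its most basic: subscribers receive only
    events published after they subscribe -- no history, no replay,
    no delivery guarantees."""
    def __init__(self):
        self._subs = defaultdict(list)

    def subscribe(self, topic, handler):
        self._subs[topic].append(handler)

    def publish(self, topic, event):
        for handler in self._subs[topic]:
            handler(event)

broker = TinyBroker()
seen = []
broker.subscribe("orders", seen.append)
broker.publish("orders", {"id": 1, "status": "created"})
broker.publish("payments", {"id": 9})  # no subscriber; silently dropped
assert seen == [{"id": 1, "status": "created"}]
```

A late subscriber to "orders" here would miss the first event entirely, which is precisely why event-driven applications at scale need replayable logs rather than fire-and-forget delivery.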
Join our resident Kubernetes and modern apps experts in a discussion of the challenges of Kubernetes traffic management in today’s technology landscape. While Kubernetes Ingress gets most of the attention, how you handle egress traffic is just as important. Egress isn’t just about traffic leaving a cluster, either, but also concerns traffic among managed and unmanaged services within the cluster. We demo a solution using NGINX Service Mesh and NGINX Ingress Controller to control egress from the cluster and between NGINX Service Mesh and unmanaged services. Whether you’re new to modern application architectures, or looking to improve your current microservices deployment, this webinar is for you.
Join this webinar to learn:
* Solutions to common challenges when managing traffic in Kubernetes
* How to control both ingress and egress in a single configuration
* Which solutions from NGINX can best serve your needs, depending on your requirements
* About NGINX Service Mesh and NGINX Ingress Controller with live demos
Tech giants like Amazon, Google, and Microsoft have set a north star for companies around the world to stay competitive. They engineer away every impediment to fast, reliable software releases.
To achieve Internet and cloud speed and scale, you can’t wait for anything. Everything has to be programmable and API-driven.
Over the last two decades, storage, compute, and code have all been automated, giving rise to the cloud and fast CI/CD releases.
Data is the last automation frontier. It is heavy, complex, and filled with security and privacy risk.
API integrations, and the reasons a business would want to build them, are plentiful. Many are driven by the desire for higher user (employee and customer) productivity, greater employee engagement, or reduced busywork and task switching, all in service of quicker, smoother cross-application workflows. As APIs are built, multiple different types of vendors get involved, which raises some critical business considerations. Having built dozens of these integrations, we've come to understand the critical business considerations to keep in mind in terms of the business restrictions and terms of service to put around APIs.
Questions this session will help answer:
• When you think about building APIs from multiple, different vendors, how will that impact your business model?
• How do you set up a win-win alliance between your business and the other vendors involved, so that everybody's business model wins and your customer receives value?
• What considerations should you keep in mind when building packaging and pricing models?
• What is it going to cost to drive a transaction when multiple vendors, with different pricing models are involved?
With the average enterprise organization consuming thousands of APIs, it has become increasingly challenging for developers to locate or socialize the APIs they create. It is even more difficult for organizations to manage their API collections. Companies are now even offering to socialize their business partners’ APIs to their customers in order to create microservices, complex integrations, and product solutions.
IBM’s answer to this problem is the IBM API Hub. This essential offering reduces friction, prevents resource duplication, and breaks down silos. It provides a single place for organizations to publish and share the APIs they create in an easily discoverable, searchable, highly available, and curated environment.
In this session, learn about the key attributes of the IBM API Hub. Explore the consumer experience and its low barrier of entry for any API Provider. Discover how IBM Sterling has created an all new enriched experience for their developers using the IBM API Hub. Try out the IBM API Hub yourself with a full set of API consumer features and functionality.
Kubernetes brings promises of application modernization and agile application development and deployment, but it also brings new challenges in managing these environments. Nowhere are these challenges more of an issue than with traffic management and security of microservice environments, especially those that require high volume, high reliability, and high security. But the good news is that you’re not alone: these challenges impact everyone moving to Kubernetes, and there are solutions to make traffic management and security in Kubernetes microservice environments easier. Come join NGINX Service Mesh engineers to talk about NGINX Service Mesh and the tools available for your microservice traffic management challenges. This session will be an engineering round-table where we'll have an open discussion about service mesh use cases, demos, and a Q&A with the engineers. Join us to talk all things mesh, the need for ingress/egress traffic management within a mesh, and some best-practice guidelines for microservice traffic management.
PDF forms are widely created and used in many applications. You’ve probably seen, filled out, or maybe even created your own PDF forms for your clients or customers to fill out. In this talk, we’ll cover XFA PDF forms and AcroForms, the two most common types of PDF forms. XFA forms were designed to enhance the processing of web forms; however, they are not always compatible with PDF viewers and software. This means that the majority of PDF software does not support the XFA format, so you can’t open, view, process, convert, or print these forms. So what can you do to fix it? In this presentation, we’ll showcase PDF forms and their advantages, cover XFA forms and explore why they are such a unique PDF form format, and guide you on how to work efficiently with any PDF form despite some of these challenges.
During the past year I’ve implemented, or witnessed implementations of, several key event-driven messaging patterns on top of Kafka that have helped Wix build a robust distributed microservices system, one that easily handles growing traffic and storage needs across many different use cases.
In this talk I will share these patterns with you, including:
* Consume and Project (data decoupling)
* End-to-end events (Kafka + WebSockets)
* In-memory KV stores (consume and query with zero latency)
* Event transactions (exactly-once delivery)
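As a broker-free sketch of the "Consume and Project" and in-memory KV ideas from the list above: consume a stream of change events and fold them into a local map that can be queried with no network hop. The event shape is an assumption, and a real implementation would consume from Kafka rather than a list:

```python
def project(events):
    """Fold a stream of change events into an in-memory key-value view
    that local code can query with zero added latency."""
    view = {}
    for ev in events:
        if ev["op"] == "upsert":
            view[ev["key"]] = ev["value"]
        elif ev["op"] == "delete":
            view.pop(ev["key"], None)
    return view

events = [
    {"op": "upsert", "key": "user:1", "value": {"name": "Ada"}},
    {"op": "upsert", "key": "user:2", "value": {"name": "Alan"}},
    {"op": "delete", "key": "user:2"},
]
assert project(events) == {"user:1": {"name": "Ada"}}
```

Because the view is derived entirely from the event log, any consumer can rebuild it from scratch by replaying the topic, which is what decouples the projected data from the service that produced it.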
Companies have long relied upon static analysis to secure their code, but the typical process with delayed results and high false positive rates is painful for developers and generates unnecessary work for security engineers. A recent trend is changing that. Code analysis tools are increasingly delivering better developer experiences, coverage of a broader set of bugs, and improving results over time. These improvements allow a much tighter integration into modern agile development processes, shifting left the detection of reliability and security issues. Google and Facebook have pioneered this new model of static analysis that involves broad deployment of extremely scalable analysis tools (billions of lines of code / thousands of commits per day) and have collected and published extensive data on its impact on code quality. Amazon has also used static analysis to streamline certification and compliance tasks. With development teams more distributed than ever, tools like static analysis become increasingly critical for development organizations to overcome the loss of productivity and risk to code quality.
After a long evolution, the browser has become a programmable client that lives in a globally connected world of APIs. This combination of a ubiquitous client with a sea of serverless APIs and the emergence of APIs with advanced security features have enabled the new, client-serverless application model. In such a model, we slowly move away from three-tier applications. In three-tier applications, APIs were typically guarded by the backend. In client-serverless, clients are rapidly taking on a more central role, where clients become responsible for gathering their data services directly from the data source. Needless to say, this reduces complexity, but also brings an entirely different security model which SaaS providers will need to prepare for.
You invest your time and effort breaking up that monolithic Frankenstein into a suite of elegant, composable microservices; you containerize them and deploy them somewhere in the cloud. Then you proudly watch it all come together, reaping the benefits of the most scalable of architectures. It is all fine and dandy from this point on. Too good to be true? Of course! This session is about what to do when you wake up to find yourself in the weeds, diagnosing that first bug and tracing calls through the convoluted web of microservices of your own doing. Through a series of demos and code snippets, we will introduce the most important open-source tools for striking the right balance of monitoring at the infrastructure, container, and service levels.
Paperwork and PDFs are the primary bottlenecks restricting faster adoption of digital tools in every industry. But in 2020, companies can free themselves from the burden of endless paperwork, mundane tasks, and antiquated ways of working by adopting software that helps bridge the gap.
In this talk, I will share:
* How PDFs became the default medium for information transfer
* How and why that needs to evolve for the digital world
* A live demo of the Anvil PDF API, the easiest way to incorporate PDF creation, filling, and signing into your product
As software is becoming more pervasive, APIs are fast becoming the building blocks of business enterprises. And with that, the role of the API developer is gaining more responsibility for driving growth ranging from small to large companies. In this session, we will discuss how to think about building tools for the API developer that leads to quicker time to market value for both the business and the consumers.
Fintech is changing at a rapid pace and developers are always iterating and building on platforms to create new features and tools to make better user experiences. At Plaid, we’ve been uniquely positioned to build a platform to help developers start and grow their apps and services to solve for some of the most challenging problems in financial services.
Over the last few years since we first launched Plaid Link, the interface that connects users with fintech apps and integrates directly with the Plaid API, we’ve learned a lot of valuable lessons about how to build a platform that’s optimized for growth and user experience. Samir Naik, Director of Engineering at Plaid, is going to share some of the key lessons that Plaid has learned over the years building alongside this community of developers and navigating the ever-changing fintech landscape. He’ll talk about how Plaid thinks about the developer experience and building an API that has enabled thousands of apps in the ecosystem to connect to over 11,000 financial institutions in North America.
Why neutral foundational elements like identity can unlock a future of agility.
Modern applications require neutrality and flexibility. Long gone are the days when a developer could rely upon a single vendor’s technology stack or assume that connecting systems would share the same language, platform, geography, or timezone. Organizations have evolved from on-premises to cloud to multi-cloud strategies, and with this complexity comes the added requirement that development go faster, user stories become richer, and code live longer. Teams must demonstrate agility through neutrality in order to achieve increased velocity.
Okta is rooted in the concept that proprietary methods and restrictive ecosystems are contrary to the precepts of the modern developer. In this session, a discussion will revolve around why identity is the common thread that allows for interoperability and extensibility across systems, infrastructure and technologies. Identity isn’t a component of an application, but rather a fundamental attribute of all applications.
Are you struggling with security testing of your APIs, web services, or cloud-native applications? Are you looking for new ways to test security without impacting velocity? Would you like to get visibility into the sensitive data that your application handles? If the answer to any of these questions is yes, allow us to introduce you to new and unique ways to perform security testing. In this session, we will give you an overview of developer-friendly security test tools from Synopsys that offer unparalleled accuracy and visibility into application vulnerabilities, with remediation guidance and just-in-time contextual training to help your developers with the remediation effort and improve your application security posture.
Rockset integrates with Amazon DynamoDB to provide a powerful analytical serverless tech stack that is flexible with your data and provides millisecond query latency. This talk will focus on how Rockset made changes on RocksDB, a popular open-source storage engine for a persistent key-value store, to run in a serverless way in the AWS eco-system. Finally, we’ll show how you can run serverless analytics on DynamoDB.
A technical talk on architecture choices that help build data-driven microservices with loose coupling and bounded contexts. The workflow and data flows have to be architected together, as concepts of CRUD, CQRS, event sourcing, and Sagas come to the forefront of app, message, and data scaling. Built on a modern cloud platform, we illustrate a sample food ordering application with microservices patterns that can be applied in any project. A containerized converged database architecture with a variety of data types on the cloud can make it simple to build and operate the data layer, but these principles can be applied to special-purpose databases too. The app and data interfaces are explored with support from the data layer, making the app coding simpler, especially with messaging patterns and Saga APIs that reduce the load on the Java developer using a microservices framework. A free self-service hands-on lab is available for developers to go through on their own to test these patterns.
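As a minimal illustration of the Saga idea mentioned above (a sketch, not the session's lab code): each step pairs an action with a compensating action, and a failure triggers the compensations in reverse order. The step names and the failure are invented:

```python
def run_saga(steps):
    """steps is a list of (action, compensation) pairs. If any action
    raises, undo the completed steps in reverse order -- a rollback by
    compensation, since there is no cross-service transaction."""
    compensations, log = [], []
    for action, compensate in steps:
        try:
            log.append(action())
            compensations.append(compensate)
        except Exception:
            for comp in reversed(compensations):
                log.append(comp())
            return False, log
    return True, log

ok = lambda name: (lambda: f"{name} done")
undo = lambda name: (lambda: f"{name} undone")
def declined():
    raise RuntimeError("payment declined")

success, log = run_saga([(ok("reserve stock"), undo("reserve stock")),
                         (declined, undo("charge card"))])
assert success is False
assert log == ["reserve stock done", "reserve stock undone"]
```

The trade-off versus a distributed transaction is that intermediate states are visible to other services until compensation completes, which is why Saga steps are designed to be individually safe.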
The DBaaS style of deployment has taken the world by storm. I want to share my insights into the advantages, the gotchas, and what we should be looking for in the future from DBaaS providers to continue providing an excellent developer self-service experience that unlocks agility in the modern application development world.
In this session, hear from Jordan Schuetz, Developer Advocate at MuleSoft, on how to build, secure, and deploy your first API using MuleSoft's Anypoint Platform. This talk will cover enterprise-level topics on API development, security, and best practices when it comes to deploying your first mule application. Also, learn how to create integrations with Salesforce, databases, Twilio, and more!