Tuesday, April 27, 2021
Today, data is generated by devices and containers living at the edge of networks, clouds and data centers. We need to run business logic, analytics and deep learning at the edge before we start our real-time streaming flows. Fortunately, using the all-Apache MmFLaNK stack we can do this with ease! Streaming AI-Powered Analytics from the Edge to the Data Center is now a simple use case. With MiNiFi we can ingest the data, run data checks, cleansing, machine learning and deep learning models, and route our data in real time to Apache NiFi and Apache Kafka for further transformations and processing. Apache Flink provides our advanced streaming capabilities, fed in real time via Apache Kafka topics. Apache MXNet models run both at the edge and in our data centers via Apache NiFi and MiNiFi. Our final data is stored in Apache Kudu via Apache NiFi for final SQL analytics, and we add microservices with Kafka Streams.
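As a rough sketch of the edge-side "data checks, cleansing, route" step described above (field names, thresholds, and topic names are invented for illustration, not the talk's actual flow):

```python
# Illustrative edge-agent logic: cleanse a sensor reading, then decide
# which Kafka topic it should be routed to. All names are hypothetical.

def cleanse(reading: dict) -> dict:
    """Drop obviously bad values instead of forwarding junk downstream."""
    cleaned = dict(reading)
    # Treat out-of-range sensor values as missing.
    if not (-40.0 <= cleaned.get("temp_c", 0.0) <= 125.0):
        cleaned["temp_c"] = None
    return cleaned

def route(reading: dict) -> str:
    """Pick a destination topic for a cleansed reading."""
    if reading["temp_c"] is None:
        return "sensor-errors"       # quarantined for inspection
    if reading["temp_c"] > 90.0:
        return "sensor-alerts"       # hot path for streaming alerting
    return "sensor-readings"         # normal analytics path

reading = cleanse({"device": "edge-01", "temp_c": 98.5})
print(route(reading))  # prints "sensor-alerts"
```

In the stack described above, MiNiFi would perform this kind of check-and-route decision at the edge, with the chosen topic consumed by Flink via Kafka.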
Apache Flink, Apache Kafka, Apache NiFi, MiNiFi, DJL.ai Apache MXNet, Apache Kudu, Apache Impala, Apache HDFS
Source Code: https://github.com/tspannhw/MmFLaNK
Getting Machine Learning to work, let alone turning it into a sustainable business, is a real pain in the ass. It really sucks... But you can do amazing stuff with it!
As a Machine Learning consultant I have trained a lot of people to build their own Machine Learning algorithms and turn them into a customer benefit. And although it is not that hard in and of itself, it is really easy to make mistakes, even for the best of us. In this talk I will highlight some of the most common mistakes and how to avoid them. But if you think you will stop making mistakes once you do everything right, you are wrong! Because Machine Learning sucks!
Where is your data? What data is available? What are its size and quality? Who can give you access to the necessary tables for the amazing project with your sales department (or customer service, or recruitment, whatever)? Any idea about how long your AI project is going to take? How much money it will generate? How will it impact important KPIs (revenue, CSAT, time to hire)?
These questions might sound overwhelming and "not in my job description!" for a data scientist. Well, fair enough, and good luck! Without answering them, you'll never manage a successful AI project, meaning not the one where you've built a fancy model, but the one that actually matters for your organization.
A necessary prerequisite for that is an understanding of your company's Data Maturity. Simply put, that is the company's ability to generate value with data. It spans four dimensions (Strategy, People, Technology, and Data) and several basic scenarios to start with.
In this session, I'll briefly present a summary of our Data Maturity Assessment and how it can help you build AI solutions that matter.
Leverage AWS AI/ML/DL Services in Your Application:
Are you looking for ways to add new AI/ML/DL technologies to your existing applications but don't know where to start?
In this session, learn how to leverage AWS machine-learning services in your .NET applications to do things like text translation, text-to-speech, transcription, sentiment analysis, and image analysis. You'll also learn about AWS support for .NET workloads and .NET application modernization on AWS.
Not too long ago, a reactive variant of the JDBC API was released, known as Reactive Relational Database Connectivity (R2DBC). While R2DBC started as an experiment to enable integration of SQL databases into systems that use reactive programming models, it has since matured into a robust specification that can be implemented to manage data in a fully reactive, completely non-blocking fashion.
In this session, we’ll briefly go over the fundamentals that make R2DBC so powerful. We'll keep the slides light so that we can jump directly into application code for a first-hand look at the recently released R2DBC client from MariaDB. From there we'll examine how you can take advantage of crucial concepts, like event-driven behavior and backpressure, that enable fully reactive, non-blocking interactions with a relational database.
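R2DBC itself is a Java specification, but the backpressure concept mentioned above is language-neutral: the subscriber signals how many items it can handle, and the publisher never emits more than that demand. A minimal sketch of the idea (this is not the R2DBC or MariaDB client API):

```python
# Language-neutral sketch of reactive-streams style backpressure: the
# subscriber requests N rows at a time and the publisher emits at most N.
# This illustrates the demand-signaling idea, NOT the R2DBC API itself.

class RowPublisher:
    def __init__(self, rows):
        self._rows = iter(rows)

    def request(self, n: int) -> list:
        """Emit at most n rows in response to downstream demand."""
        batch = []
        for _ in range(n):
            try:
                batch.append(next(self._rows))
            except StopIteration:
                break  # result set exhausted
        return batch

# A slow consumer pulls small batches instead of receiving the whole
# result set at once, so it is never flooded with more than it asked for.
pub = RowPublisher(rows=[{"id": i} for i in range(10)])
first = pub.request(3)
print(len(first))  # prints 3
```

In a real reactive client, this demand signal propagates all the way down to how result rows are fetched from the database connection.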
A CI/CD pipeline seems straightforward to implement and maintain. Yet it can often quickly become a tedious time sink and a source of universal frustration on many teams. From flaky builds, to long-running builds, to flaky long-running builds, the sources of frustration are endless. With the goal of shipping more, faster, and competing in an ever-changing industry, we can (and must) do better.
This talk will cover best practices for performance, stability, security, and maintainability of CI/CD pipelines, each supported with practical examples and counterexamples.
This talk is the perfect opportunity for you to see where Cloud Native PostgreSQL, developed by EDB, is currently standing and how it can be integrated in your Kubernetes and OpenShift Container Platform workloads.
Cloud Native PostgreSQL is built on solid concepts and principles such as immutable infrastructure, declarative configuration and application containers, making it also ideal to use in your CI/CD pipelines as part of the applications’ E2E tests.
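As a sketch of what that declarative configuration looks like in practice, a minimal Cluster manifest might resemble the following (the API group and field names are assumptions here; verify them against the operator's current CRD reference):

```yaml
# Hypothetical minimal Cloud Native PostgreSQL Cluster manifest.
# Field names and values are illustrative and should be checked against
# the operator's documentation.
apiVersion: postgresql.k8s.enterprisedb.io/v1
kind: Cluster
metadata:
  name: example-db
spec:
  instances: 3          # one primary plus replicas for high availability
  storage:
    size: 10Gi
```

Applying a manifest like this is also what makes the operator convenient in CI/CD: an E2E test job can declare a throwaway cluster, run its tests, and delete it.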
Join me to discover how our operators adapt to public/private/hybrid environments, how core features such as self-healing, high-availability, scalability and updates work, and – last but not least – what our DevSecOps culture and processes have produced in the area of security.
Security and development teams might not have a lot in common, but there's always a collective sigh of relief when a difficult compliance audit ends. Auditors for SOC 2, ISO 27001 — or really, any framework — will inevitably pull your developers into providing evidence, explaining vague processes, and correcting identified issues. If both teams don't start following best practices well before the audit begins, it sidetracks roadmaps and hurts your ability to deliver on business-critical projects.
So what can development leads do now to minimize disruption later? What changes can your team start making already, and what should you expect from your security colleagues? I'll answer both questions, drawing on 8+ years of experience leading security teams through compliance audits across a variety of business sizes and industries.
OPEN TALK: Do Not Download Your PDF: A Story of Digital Document Usability and Security in Your Application
The use of digital documents within an app touches virtually every industry and use case, now more than ever. Have you ever looked into incorporating documents into your app? There’s a lot to consider. And what about digital security? When it comes to the document lifecycle within an app, there are several things to think about:
- The in-app experience when working with multiple documents
- Integrating a viewer inside of the app beyond any built-in viewers
- Providing consistent behavior across multiple browsers
- Providing customized UI for annotating PDFs, images, MS Office documents and videos
- Improving your search across multiple documents beyond just title and metadata
As developers, we can remember the time when Nagios was state-of-the-art technology. We hated looking at all the numbers that seemed disconnected from our reality. The world has changed, though, and Observability gives us a new Swiss Army knife for our toolbox. Used correctly, it helps improve reliability, brings additional focus to what matters (the business logic), and offers aid in case of problems or failures. Especially in time-critical situations, a distributed system with many service dependencies can be hard to analyze.
In this session you will learn how to use Observability to assist developers instead of distracting them.
Shifting Application Security Left and into the hands of developers has been a topic of discussion, but remains just that, a discussion. Legacy solutions in the market are not built from the ground up to enable this and achieve DevSecOps. In this session we will discuss the key features that your AppSec testing tools need to enable shift left, or shift everywhere, and to empower developers to detect, prioritize and remediate security issues early, as part of your agile development and unit testing processes, without slowing down DevOps. The talk will include specific examples from leading organizations that have deployed these solutions, the business impact they have achieved, and the steps you can take to achieve the same across your applications and APIs.
Wednesday, April 28, 2021
Scaling is hard.
Decentralized architectures are complicated.
So how do we scale decentralized architectures? How should we improve existing protocols so they can handle large numbers of transactions and users while maintaining a certain level of decentralization?
We’ll try to answer these questions by first looking at what we usually do in centralized architectures and asking whether it fits the specifics of decentralized ones. Then we’ll take a look at some of the options chosen by the most popular public blockchain protocols.
Cloud Native software development principles have fundamentally changed the way modern IT organizations work. Teams that operated in silos in the past are now working more closely with each other, with developer teams owning the entire stack and lifecycle. In these scenarios, automation has become more critical than ever before, including building, testing, deploying and operating applications at scale and at high frequency. Security and compliance need to be treated the same way and cannot be an afterthought that slows down development or deployment.
In this session we will provide examples of how to embed a variety of security and compliance practices into a fully automated pipeline, from the perspective of development and security teams.
Join us to learn more about how you can secure your entire application lifecycle, starting as early as possible, putting into practice “shift left” without making compromises on security and compliance.
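To illustrate the idea of embedding security into a fully automated pipeline, a pipeline definition might run scanning as an ordinary stage rather than a late gate. This is a hypothetical, GitLab-style fragment; the job name and the scanner command are invented for illustration:

```yaml
# Hypothetical CI fragment: security checks run as a normal pipeline stage.
# The job name and the scanner CLI below are invented for illustration.
stages: [build, test, security, deploy]

dependency-scan:
  stage: security
  script:
    - scan-dependencies --fail-on critical   # fail early on critical findings
```

Because the scan runs on every pipeline, developers see findings next to their test results instead of in a separate security report weeks later.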
How do you build increasingly better APIs? It’s easier than you may think! In this session, we will talk about how to build better APIs with API management and show the key advantages of using APIM to drive your API development. We will cover the basics of APIM features and some of the use cases for these features.
Whether you are looking to provide better service for your users, better reporting and metrics for your stakeholders, or to help your support team to become more efficient at supporting your API portfolio, stop in to see how API management can power these improvements.
Microservices running in Kubernetes and containerized environments are complex and hard to monitor and troubleshoot. Join us as we discuss the growth in the adoption of Kubernetes and containers and the challenges that they have presented us all, focusing on why standard metrics and logs by themselves are leaving gaps in your observability strategy.
Ruben Rincon from the HelloSign team will show you the benefits of incorporating eSignature directly into your website to boost your users’ onboarding, and will demo a practical sample using NodeJS and React.
It is not feasible to run an observability infrastructure the same size as your production infrastructure. Past a certain scale, the cost to collect, process, and save every log entry, every event, and every trace that your systems generate dramatically outweighs the benefits. If your SLO is 99.95%, then you'll naively be collecting 2,000 times as much data about requests that satisfied your SLI as about those that burnt error budget. The question is: how do you scale back the flood of data without losing the crucial information your engineering team needs to troubleshoot and understand your system's production behaviors?
Statistics can come to our rescue, enabling us to gather accurate, specific, and error-bounded data on our services' top-level performance and inner workings. This talk advocates a three-R approach to data retention: Reducing junk data, statistically Reusing data points as samples, and Recycling data into counters. We can keep the context of the anomalous data flows and cases in our supported services while not allowing the volume of ordinary data to drown it out.
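A hedged sketch of the "Reuse" idea above: keep every anomalous event, sample ordinary ones at a fixed rate, and attach a weight so totals can still be estimated from what was retained (field names and the sample rate are illustrative, not the talk's actual method):

```python
import random

# Illustrative sampler: retain all error events, keep 1-in-N successes,
# and record a weight so aggregate counts can be re-estimated from the
# retained sample. Fields and rates are made up for illustration.

SAMPLE_RATE = 100  # keep roughly 1 in 100 successful requests

def retain(event: dict, rng=random):
    """Return the event with an estimation weight if kept, else None."""
    if event.get("error"):
        return {**event, "weight": 1}            # anomalies are always kept
    if rng.random() < 1.0 / SAMPLE_RATE:
        return {**event, "weight": SAMPLE_RATE}  # stands in for ~100 events
    return None

# The estimated total is the sum of retained weights, which converges to
# the true request count while storing only ~1% of the success volume.
events = [{"error": i % 50 == 0} for i in range(10_000)]
kept = [e for e in (retain(ev) for ev in events) if e]
estimate = sum(e["weight"] for e in kept)
```

The same weighting trick underlies the "Recycle" step: discarded events can still be folded into counters before being dropped, so dashboards stay accurate even though raw data is gone.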
The future of work has arrived. As product development teams have shifted to working remotely, we’ve had to adjust our processes, communication, and culture. Join Lucidspark to learn how to effectively drive collaboration across distributed teams both now and going forward.