Tuesday, November 17, 2020
"Containers are the new ZIP format to distribute software" is a fitting description of today's development world. However, it is not always that easy, and this talk highlights the development of Elastic's container strategy over time:
* Docker images: A new distribution model.
* Docker Compose: Local demos and a little more.
* Helm Chart: Going from demo to production.
* Kubernetes Operator: Full control with upgrades, scaling,...
Besides the strategy, we will also discuss specific technical details and hurdles that appeared during development, and why the future will (for now) be a combination of Helm Chart and Operator.
Serverless has been widely accepted as a mechanism to deploy and run software easily. However, the cost of running production-scale workloads on serverless can be surprisingly high. Wouldn't it be great if we could get the benefits of running workloads on serverless without the hassle of worrying about costs?
Enter Knative, an open standard allowing you to run serverless workloads on your own Kubernetes clusters. In this talk, we will walk through a real-world example of how we used Google Cloud Functions (FaaS) at WP Engine to deploy a serverless data pipeline, and then transitioned that workload to Cloud Run (built upon Knative) and Kubernetes to achieve the same results at a greatly reduced cost. We will dive deep into how we achieved the cost savings and handled fault tolerance, concurrency, and auto-scaling.
Now our developers can focus on code instead of the plumbing of managing infrastructure, while delivering customer and business value quickly and easily.
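As a rough sketch of the kind of service Knative and Cloud Run run: the platform hands the container a `PORT` environment variable, routes HTTP requests to it, and scales instances (and per-instance concurrency) with traffic. The handler below is an illustrative stand-in using only the standard library, not WP Engine's actual pipeline code:

```python
import os
from http.server import BaseHTTPRequestHandler, HTTPServer

def get_port(default=8080):
    # Cloud Run (and Knative) tell the container where to listen
    # via the PORT environment variable.
    return int(os.environ.get("PORT", default))

class PipelineHandler(BaseHTTPRequestHandler):
    """Receives one pipeline event per POST; the platform scales
    container instances up and down based on incoming traffic."""

    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        event = self.rfile.read(length)  # e.g. a Pub/Sub push payload
        # ... process the event here ...
        self.send_response(204)
        self.end_headers()

def serve():
    # Call serve() in the container entrypoint to start handling requests.
    HTTPServer(("0.0.0.0", get_port()), PipelineHandler).serve_forever()
```

Because billing is per running container instance rather than per invocation, batching many events into one warm instance is where the cost savings over per-call FaaS pricing tend to come from.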
The software we write does not always work as smoothly as we would like. In order to know if something went wrong, understand the root cause, and fix the problem, we need to monitor our system and get alerts whenever issues pop up. There are many useful tools and practices for Kubernetes-based applications. As we adopt serverless architecture, can we continue to use the same practices? Unfortunately, the answer is no.
In this session, we will discuss:
- The differences between monitoring Kubernetes and serverless based applications
- Best practices for serverless monitoring
- Methods to efficiently troubleshoot serverless based applications
You ever play for money? I’m not hustling you, but the other 57 million players may try to. What’s our edge? The power of bare metal! We'll be entering daily fantasy sports competitions after taking a look at tools and techniques for setting up a system to analyze player picks and lineup construction.
This talk is for informational purposes only and not to be used by residents of Arizona, Hawaii, Idaho, Louisiana, Montana, Nevada, or Washington.
Covered topics include:
Provisioning Bare Metal Servers
Daily Fantasy Sports
Levenshtein distance fuzzy text matching
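As background for the last topic: Levenshtein distance counts the single-character edits (insertions, deletions, substitutions) needed to turn one string into another, which makes it handy for matching scraped player names against a roster despite typos and formatting differences. A standard dynamic-programming implementation (the `best_match` helper is an illustrative assumption, not the speaker's code):

```python
def levenshtein(a: str, b: str) -> int:
    """Minimum number of single-character edits needed to turn a into b."""
    if len(a) < len(b):
        a, b = b, a  # keep the rolling row as short as possible
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                  # deletion
                            curr[j - 1] + 1,              # insertion
                            prev[j - 1] + (ca != cb)))    # substitution
        prev = curr
    return prev[-1]

def best_match(name, candidates):
    """Fuzzy-match a scraped player name against a roster."""
    return min(candidates, key=lambda c: levenshtein(name.lower(), c.lower()))
```

For example, `best_match("Lebron James", roster)` still finds "LeBron James" even though the capitalization differs.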
All successful APIs have one thing in common: users. And users need tools, including an API Explorer. Join us to see how the Google Blockly library can be used to create an easy-to-use, graphical, block-oriented API Explorer that works well with any API, including those with deeply nested API objects.
Kubernetes brings new ideas of how to organize the caching layer for your applications. You can still use the old-but-good client-server topology, but now there is much more than that. This session will start with the known distributed caching topologies: embedded, client-server, and cloud. Then, I'll present Kubernetes-only caching strategies, including:
- Sidecar Caching
- Reverse Proxy Caching with Nginx
- Reverse Proxy Sidecar Caching with Hazelcast
- Envoy-level caching with Service Mesh
In this session you'll see:
- A walk-through of all caching topologies you can use in Kubernetes
- Pros and Cons of each solution
- The future of caching in container-based environments
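For orientation, the embedded topology, the simplest on the list above, amounts to a read-through cache living inside the application process; the sidecar and client-server variants move the same logic behind a localhost or remote endpoint. A minimal illustrative sketch (not tied to Hazelcast or any specific product):

```python
import time

class ReadThroughCache:
    """Minimal embedded (in-process) read-through cache with TTL expiry."""

    def __init__(self, loader, ttl_seconds=60):
        self._loader = loader   # fetches the value from the source on a miss
        self._ttl = ttl_seconds
        self._store = {}        # key -> (value, expiry timestamp)

    def get(self, key):
        hit = self._store.get(key)
        if hit is not None and hit[1] > time.monotonic():
            return hit[0]       # fresh hit: no trip to the source
        value = self._loader(key)
        self._store[key] = (value, time.monotonic() + self._ttl)
        return value
```

In a sidecar topology this same `get` logic would sit in a second container in the pod, reached over localhost, so every language in the pod shares one cache without embedding a client library.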
The embrace of Infrastructure as Code risks suffering the same problems as early software approaches: out-of-date dependencies and potentially exploitable vulnerabilities. Once the speed of development slows down, how do teams stabilize a project of interconnected infrastructure components (charts, images, etc.) so that it can be kept up to date with a mature DevOps approach?
In this presentation, Rhys Arkins will introduce industry best practices for managing software dependency updates and vulnerabilities, and show how these same practices are perfectly suited to the layered artifacts of container orchestration: manifests, Helm charts, Kustomize templates, etc.
Microsoft's CEO Satya Nadella has said: "Human language is the new UI layer, bots are like new applications". As more and more bots become popular in homes and enterprises, the demand for custom bots is increasing at a rapid pace.
The Microsoft Bot Framework is a comprehensive offering that we can use to build and deploy high-quality bots for our users to enjoy wherever they are talking.
Microsoft Cognitive Services let you build apps with powerful algorithms to see, hear, speak, understand, and interpret our needs using natural methods of communication, with just a few lines of code. You can easily add intelligent features, such as emotion and sentiment detection, vision and speech recognition, language understanding, knowledge, and search, across devices and platforms such as iOS, Android, and Windows. The services keep improving and are easy to set up.
In this session, we will cover how to build intelligent help desk bots using Microsoft Power Virtual Agents, the Microsoft Bot Framework, and Microsoft Cognitive Services. The help desk bot will be able to answer questions related to employee benefits and open healthcare enrollment, book meeting rooms, etc.
You will learn:
What are Power Virtual Agents?
What is Microsoft Bot Framework?
What is Azure Bot Service?
How to create bots using Microsoft Bot Framework?
What are Cognitive Services?
How to leverage Bot Framework and Cognitive Services to implement enterprise-grade bots?
RESTful APIs have been around for a while now, but they are flawed. Things like non-standard CRUD operations, response validation, error handling, in-memory state management, etc. just get really hard. In this talk, you will learn how GraphQL - a standard that unifies server and client communication - comes to save the day, and why the tooling around it is a game-changer.
We will answer the following questions. What is the philosophy behind GraphQL? How do you architect a scalable schema? How can GraphQL boost productivity? How can you avoid common pitfalls?
We will then get a GraphQL server up and running while focusing on exploring real-world patterns for architecting our schema. We will discuss and implement practical steps to improve query performance, error handling and caching.
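To make the resolution model concrete before the session: a GraphQL server walks the query's selection set and calls a resolver per field, recursing into nested selections. The toy below mimics that idea in plain Python; a real server would use an actual GraphQL library, and the schema and data here are invented for illustration:

```python
# Toy illustration of GraphQL-style resolution: each field has a
# resolver, and nested selections are resolved recursively.
USERS = {"1": {"id": "1", "name": "Ada", "friend_ids": ["2"]},
         "2": {"id": "2", "name": "Grace", "friend_ids": []}}

RESOLVERS = {
    "id": lambda user: user["id"],
    "name": lambda user: user["name"],
    "friends": lambda user: [USERS[f] for f in user["friend_ids"]],
}

def resolve(obj, selection):
    """selection maps field name -> nested selection (None for a leaf)."""
    out = {}
    for field, sub in selection.items():
        value = RESOLVERS[field](obj)
        if sub is not None:
            value = ([resolve(v, sub) for v in value]
                     if isinstance(value, list) else resolve(value, sub))
        out[field] = value
    return out

# Equivalent of the query: { name friends { name } } for user 1
result = resolve(USERS["1"], {"name": None, "friends": {"name": None}})
```

The key property, and the reason clients only receive the fields they asked for, is that resolution is driven entirely by the query's selection set, not by the shape of the stored data.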
There is a strong demand for using Rust to write web services and applications. We need a fast and safe execution engine for Rust programs on the server side. WebAssembly could fill this role. In this talk, we will introduce open source tools and frameworks that enable WebAssembly and Rust-based microservices.
The Rust programming language has been Stack Overflow’s most beloved programming language for the past 4 years. As its adoption reaches beyond alpha developers writing system software, there is a strong demand for using Rust to write web services and applications.
Originally created by Mozilla, Google, Microsoft, and Apple as the next-generation code execution engine for web browsers, WebAssembly is the ideal execution engine for running Rust applications on the server side.
Compared with low level virtualization containers like Docker, WebAssembly is lighter, safer, and easier to manage. WebAssembly is supported on almost any computer operating system today. WebAssembly bytecode applications can just run anywhere. There is no need to package an operating system inside your virtual machine like Docker does.
Being a bytecode virtual machine, WebAssembly has a well-designed security model for accessing hardware. It is a lot harder to crash or do dangerous things in WebAssembly than in a native environment like Docker.
In this talk, we will review recent open source innovations that bring WebAssembly-based services to the market. We will provide a live demo on how to create and deploy a Rust application and deploy it on WebAssembly as a microservice. Topics we will cover include the following.
* Introduction to Rust and WebAssembly
* The case for WebAssembly on the server
* WebAssembly implementations that are optimized for the server side
* Dependency Injection container for WebAssembly
* RPC services for WebAssembly
* A complete stateless microservice example in Rust
* A complete stateful microservice example in Rust
Algorand is a new blockchain built on a permissionless, pure proof-of-stake, decentralized agreement protocol in which anyone can participate and which requires minimal computational power. This protocol finalizes transactions very quickly and offers true decentralization.
Algorand 2.0 is an exciting release with many new features including:
• Algorand Standard Asset (ASA)
• Atomic Transfers
• Algorand Smart Contract Layer 1 (ASC1)
This session will demonstrate how to:
• Quickly get up and running on Algorand
• Use the new Algorand features: ASA, Atomic Transfers, and ASC1.
Algorand Standard Asset (ASA) - ASA provides a standardized, Layer-1 mechanism to represent any type of asset on the Algorand blockchain. ASAs can include fungible assets (such as currencies, stablecoins, utility tokens, etc), non-fungible assets (unique assets such as tickets, etc.), restricted fungible assets (such as securities), and restricted non-fungible assets (such as licenses, certifications). Asset issuers, or specified delegates, can optionally have the ability to freeze an account’s ability to transact with their asset and clawback their asset when required.
Atomic Transfers - Atomic Transfers offer a Layer-1 secure way to simultaneously transfer a number of assets among a number of parties. Specifically, many transactions are grouped together and either all transactions are executed or none of them are. This feature can be used for use cases such as matching funding, debt settlement, decentralized exchanges, and complex trades.
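The all-or-nothing semantics can be illustrated in a few lines. This is a plain-Python sketch of the idea only, not the Algorand SDK, whose transaction-group API looks quite different:

```python
def atomic_transfer(balances, transfers):
    """Apply a group of transfers all-or-nothing, mimicking the semantics
    of an atomic transfer group. `balances` maps (account, asset) -> amount;
    each transfer is (sender, receiver, asset, amount)."""
    staged = dict(balances)  # work on a copy; commit only at the end
    for sender, receiver, asset, amount in transfers:
        if staged.get((sender, asset), 0) < amount:
            raise ValueError(f"{sender} cannot cover {amount} {asset}; group rejected")
        staged[(sender, asset)] = staged.get((sender, asset), 0) - amount
        staged[(receiver, asset)] = staged.get((receiver, asset), 0) + amount
    balances.clear()
    balances.update(staged)  # every transfer succeeded: commit the group
```

A two-leg swap, e.g. Alice sends ALGO while Bob sends a stablecoin, either completes entirely or leaves both balances untouched, which is what removes counterparty risk from trades like debt settlement and decentralized exchange.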
Algorand Smart Contract (ASC1) - ASC1s are Layer-1 smart contracts that automatically enforce custom rules and logic, typically around how assets (ASAs or Algos) can be transferred. They are complex economic relationships made up of basic transaction primitives written in a new language called Transaction Execution Approval Language (TEAL). Examples of ASC1s that can be written are escrow accounts, loan payments, limit and stop orders, subscription payments, and collateralized obligations.
Join Russ Fustino, Algorand Technical Evangelist, in this informative session on Algorand 2.0 Blockchain.
Join us for a fun time of developer trivia at the DZone happy hour! Meet DZone contributors like Dan Lines, Justin Albano, and more. Take part in a special dev themed trivia contest and win prizes.
Wednesday, November 18, 2020
As we are moving mission-critical applications to the cloud, containerization is a crucial consideration. Deciding which applications to containerize was a complex activity. It required experience and understanding of application subsystems, criticality, behavior, operational requirements, engineering practices, and hosting infrastructure. Collecting the data was challenging due to the time and effort involved. Architects and developers did not have the time to conduct lengthy assessments.
We have changed this by using AI to streamline data processing and decision making. We employed continuously learning AI models to provide containerization recommendations with high accuracy and confidence while considering application characteristics, including 12-factor compliance. We used these properties to compute the complexity of the containerization activity. AI explainability provided the answer to why it was feasible to containerize an app. Why was the complexity low or high? Why was my risk low?
This reasoning allowed the application owners to better understand the analysis, and they then provided feedback to improve the results. Compared to traditional methods, we now make containerization decisions up to 50% faster and have improved accuracy by up to 40%.
Attend this session to learn more about our journey on using AI to make containerization decisions.
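Purely as an illustration of what a computed containerization-complexity score can look like (the factor names and weights below are invented assumptions for this sketch, not the presenters' model):

```python
def containerization_complexity(app, weights=None):
    """Toy weighted score in [0, 1]: higher means harder to containerize.
    Each factor in `app` is a 0..1 violation ratio from the assessment."""
    weights = weights or {
        "stateful_storage": 3.0,  # local disk or in-process session state
        "config_in_code": 1.5,    # violates 12-factor config
        "os_dependencies": 2.0,   # kernel modules, native daemons
        "criticality": 1.0,       # operational blast radius
    }
    score = sum(weights[f] * app.get(f, 0.0) for f in weights)
    return score / sum(weights.values())  # normalize to 0..1
```

Explainability then amounts to reporting which weighted factors dominated the score, which is what lets application owners challenge or confirm individual inputs.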
With the ever-increasing flow of data comes an industry focus on how to use that data for driving business insights; but what about the sheer size of the data we have to deal with these days?
The cleaner your data, the better it is for training your ML (Machine Learning) models. Sadly, the world neither feeds you clean data, nor can huge amounts of data be processed quickly using common libraries like Pandas.
How about using the potential of big data libraries with Python support to deal with this huge amount of data and derive business insights using ML techniques? But how can we amalgamate the two?
Here comes "PySpark: Combining Machine Learning & Big Data".
People in the ML domain usually prefer Python, so combining the potential of big data technologies like Spark to supplement ML is a matter of ease with PySpark (a Python package that exposes Spark's capabilities).
This talk will revolve around:
1) Why do we need to fuse Big Data with Machine Learning?
2) How will Spark's architecture help us boost our preparations for faster ML?
3) How does PySpark's MLlib (Machine Learning library) help you do ML so seamlessly?
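The core of Spark's speedup is splitting data into partitions, computing associative partial results on each, and merging them, which is what `rdd.map(...).reduce(...)` does across executors in PySpark. A dependency-free sketch of that pattern, computing a feature mean the way a cluster would:

```python
from functools import reduce

def partial_stats(partition):
    """Map step: per-partition (count, sum), e.g. for feature scaling."""
    return (len(partition), sum(partition))

def combine(a, b):
    """Reduce step: merge partial results. Because this is associative,
    Spark can combine them in any order across executors."""
    return (a[0] + b[0], a[1] + b[1])

def distributed_mean(partitions):
    count, total = reduce(combine, map(partial_stats, partitions))
    return total / count
```

Nothing about the merge depends on which machine produced which partial result, and that independence is exactly what lets Spark scale the same computation from a laptop to a cluster.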
Most web applications simply provide the content for the user along with a standard list of links and articles. Wouldn't it be nice to be able to customize this list of links for each user, making it a better user experience? The Azure Custom Decision Service provides contextual decision-making, allowing for a more robust user experience. It does so by converting content into features for Machine Learning. This technology utilizes several other Microsoft Cognitive Services, such as Entity Linking, Text Analytics, Emotion, and Computer Vision for a more personalized and intelligent experience.
AI and ML are frequently utilized to optimize processing, adding efficiency and improving performance of applications. Approaching the use of AI and ML from a different perspective can dramatically change the way image processing and display delivers visual data to the eyes of users. Particularly volumetric data.
Holograms have been around for a long time, but the ability to efficiently produce, transmit and display interactive holographic images has historically placed insurmountable demands on processing engines, preventing the potential to make practical consumer-level applications a reality.
Rather than trying to produce and ship the complete volumetric data package, AI and ML can be used to train cores to understand how the human brain needs to receive images for volume perception and preselect the necessary data needed by a user’s retinas, dramatically reducing the necessary transmission bandwidth and display processing demands. Such threaded volumetric processing capabilities can be utilized by developers to add differentiating holographic capabilities and features to applications.
Putting these capabilities into a developer’s toolkit can facilitate the incorporation of volumetric imagery that can be displayed through advanced depth field solutions which entice users, promote loyalty and add exponential value, thereby creating new avenues for monetization.
The speaker will highlight design tools available in standard development platforms which facilitate the incorporation of 3-D and holographic content into applications. In addition, the speaker will demonstrate ways users can be empowered to create and manipulate volumetric content on mobile devices, further expanding the application scope.
Accessing and working with data is not easy with serverless, because traditional methods of database access don't work well. In this talk, I will discuss the key problems around working with databases when building serverless application logic and approaches to solving them. I will motivate problems like connection pooling, cold-starts, handling spiky concurrent loads, database transactions and transient failures. I will then present GraphQL as a possible solution for building a high-performance data API that can scale to serverless workloads. I will talk about the pros & cons of this approach. Finally, I will do some live code demos and make the problems and solutions discussed previously more concrete!
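One common mitigation for the connection-pooling problem mentioned above is reusing a connection across warm invocations of the same container instance, rather than opening one per call. A minimal sketch with a stand-in connection class (illustrative only, not tied to any particular database driver):

```python
import itertools

class Connection:
    """Stand-in for an expensive database connection."""
    _created = itertools.count()

    def __init__(self):
        self.id = next(Connection._created)

    def query(self, sql):
        return f"conn#{self.id}: {sql}"

_conn = None  # module scope survives across warm invocations

def get_connection():
    """Lazily open one connection per container instance instead of one
    per invocation, so a spike of concurrent calls doesn't exhaust the
    database's connection limit."""
    global _conn
    if _conn is None:
        _conn = Connection()
    return _conn

def handler(event, context=None):
    return get_connection().query("SELECT 1")
```

This helps within one instance, but under spiky load the platform still fans out to many instances at once, which is why an external pooling layer, or a GraphQL data API that owns the pool, is often the more robust answer.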