Tuesday, October 26, 2021
Document digitization is needed now more than ever to help us modernize from paper and manual workflows. In this session, you’ll learn how to develop a uniform PDF workflow for your end-users leveraging Adobe’s cloud-based APIs. We’ll cover how you can programmatically generate PDFs from data using PDF Services API or our new Document Generation API. Then we will demonstrate how to render the output on a webpage using PDF Embed API.
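As a sketch of the programmatic-generation idea, the snippet below assembles a request body that merges JSON data into a named template. The field names, endpoint shape, and template name are illustrative placeholders, not the actual Adobe Document Generation API contract:

```python
# Hedged sketch: preparing a data payload for a document-generation request.
# All field names here are invented stand-ins for a real API's schema.
import json

def build_docgen_request(template_name: str, data: dict) -> dict:
    """Assemble a request body that merges `data` into a named template."""
    return {
        "template": template_name,
        "outputFormat": "pdf",
        "jsonDataForMerge": data,
    }

invoice = build_docgen_request(
    "invoice-template",
    {"customer": "Acme Corp", "total": 1250.00},
)
print(json.dumps(invoice, indent=2))
```

The real flow would POST this body (with credentials) to the service and receive a PDF to render with PDF Embed API.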
Wednesday, October 27, 2021
As anyone who’s worked in BI will tell you, visualization may be the flashy part of analytics, but a lot of hard work is needed in order to ensure the data is primed and ready. While the effort is being made to clean, blend, and normalize data, APIs can be a powerful way to analyze the data as part of the preparation process, augmenting the data set to uncover deeper insights and make the data easier to understand. In this presentation, you will see how natural language processing can be part of your iPaaS or data preparation flow, adding structure to your unstructured data and adding metadata to enhance your ability to visualize and communicate insights. There’s data in your data, and it could be the key to maximizing your analytics.
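As a toy illustration of this kind of augmentation, the sketch below enriches a record with extracted metadata during preparation; the keyword matcher is a stand-in for a real NLP service call, and the product list is invented:

```python
# Illustrative sketch: enriching rows with NLP-style metadata during data prep.
# The "entity extraction" is a toy keyword match, not a real NLP model.
KNOWN_PRODUCTS = {"widget", "gadget", "sprocket"}

def enrich(record: dict) -> dict:
    # Normalize tokens and intersect with a known-entity list.
    words = {w.strip(".,!?").lower() for w in record["comment"].split()}
    record["products_mentioned"] = sorted(words & KNOWN_PRODUCTS)
    record["word_count"] = len(record["comment"].split())
    return record

row = enrich({"id": 1, "comment": "The widget broke, but the sprocket is fine."})
print(row["products_mentioned"])  # ['sprocket', 'widget']
```

The added columns are exactly the kind of structure that makes downstream visualization easier.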
Many eSignature technologies have seen rapid, steady growth for the same reason: digitizing approval workflows creates so much value for the parties involved. But what if there was a way to build even more trust and value into this process for your customers? By leveraging the blockchain, it’s possible to facilitate digital agreements with significantly deeper levels of security and transparency. In this session, we’ll explore the topic of writing digital agreements to the blockchain and demo a working proof of concept that writes to the Polygon PoS (Proof of Stake) chain using open source tooling. We’ll have some time for questions at the end.
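One common pattern behind such proofs of concept is to anchor only a hash of the agreement on-chain, not the document itself. The sketch below, with invented field choices, shows how a tamper-evident fingerprint might be computed before writing it to Polygon (the on-chain write itself, done with web3 tooling, is omitted here):

```python
# Minimal sketch of anchoring an agreement: hash the signed document plus its
# signers into one deterministic fingerprint. Field layout is an assumption.
import hashlib
import json

def agreement_fingerprint(document_text: str, signer_ids: list) -> str:
    payload = json.dumps(
        {"doc_sha256": hashlib.sha256(document_text.encode()).hexdigest(),
         "signers": sorted(signer_ids)},  # order-independent signer set
        sort_keys=True,
    )
    return hashlib.sha256(payload.encode()).hexdigest()

fp1 = agreement_fingerprint("I agree to the terms.", ["alice", "bob"])
fp2 = agreement_fingerprint("I agree to the terms!", ["alice", "bob"])
print(fp1 != fp2)  # any edit to the document changes the fingerprint
```

Anyone holding the original document can recompute the fingerprint and compare it against the on-chain record.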
Do you want to take your next generation application to the next level? Have you ever wondered how you can use analytics, Artificial Intelligence and automation to build a better customer experience? Come join us at this session to see how.
In today's world of APIs, microservices, and cloud-native applications, there's a common denominator: open-source software. Enterprises all over the world are not only moving to containerized, cloud-native applications; they are also adopting the latest open-source innovations, from DevOps tools and container orchestration to the deployment of AI applications in production environments.
In this session, Perforce Chief Evangelist Javier Perez will examine the state of APIs, cloud-native applications, and open-source software in the context of today's application development and how enterprises can define strategies putting all the new trends together.
In this talk, attendees will learn:
• What open source technologies are driving application development and API strategies
• What does it mean to develop a cloud-native application
• What API integration strategies are being used with cloud-native and AI applications
• AI, ML, and DL in the context of API strategies
• Trends and future of software development
Edge computing enables you to run your application code as close to the customer as possible, reducing latency and improving the user experience. As your compute moves closer to the edge, what data options deliver the same performance, regardless of where your users are located?
In this session, you learn how to integrate Fauna with edge computing providers to provide a responsive, strongly consistent API. You learn how to build, test, and deploy a basic REST API that includes both authenticated and anonymous routes. Finally, you learn how Fauna delivers low-latency performance to the edge while still integrating seamlessly with your existing, centralized computing resources.
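A minimal sketch of the authenticated/anonymous route split might look like the following; the token check is a stand-in for verifying a Fauna secret, and the route table is invented:

```python
# Toy router illustrating anonymous reads vs. authenticated writes.
# A real edge function would validate the token against Fauna.
VALID_TOKENS = {"demo-secret"}  # placeholder secret, for illustration only

ROUTES = {
    ("GET", "/products"): {"auth": False},  # anonymous route
    ("POST", "/orders"): {"auth": True},    # authenticated route
}

def handle(method, path, token=None):
    """Return an HTTP status code for the given request."""
    route = ROUTES.get((method, path))
    if route is None:
        return 404
    if route["auth"] and token not in VALID_TOKENS:
        return 401
    return 200

print(handle("GET", "/products"))                # anonymous read is allowed
print(handle("POST", "/orders"))                 # write without token rejected
print(handle("POST", "/orders", "demo-secret"))  # write with token allowed
```

The same routing shape deploys unchanged across edge locations, with Fauna providing the strongly consistent data layer behind it.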
In this talk we will describe Adobe Content and Commerce AI, a suite of API-first services developed for content intelligence. Content here refers to textual documents as well as images. Our services are built to extract metadata from content and leverage it to power different use cases. For instance, we extract key phrases, entities, and concepts, among other things, from text documents. Similarly, we extract color profiles, objects, text, and personalities from images. We enable enterprises to categorize content based on a custom taxonomy. Such metadata can power use cases for content management, recommendation, and personalization. Concretely, one such use case is AEM, Adobe's content management offering. AEM Assets is a cloud-native, platform-as-a-service solution for experience management that helps businesses efficiently perform digital asset management. It leverages Adobe Sensei APIs for content intelligence to drive automation of tasks and operations that are typically done manually. For example, AEM leverages Sensei's auto-tagging APIs to produce a list of tags, or keywords, associated with an asset. These APIs run automatically on asset ingestion, after an asset is uploaded to AEM. Having this list of tags makes the asset searchable across the DAM through keywords, heavily reducing the time for DAM users to deliver rich experiences to their customers.
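As a toy illustration of the custom-taxonomy step, the sketch below maps extracted tags onto an invented category tree (the real services use trained models and customer-defined taxonomies, not a hard-coded lookup):

```python
# Sketch: categorizing content against a custom taxonomy.
# Both the taxonomy and the incoming tags are invented examples.
TAXONOMY = {
    "apparel": {"shoe", "jacket", "dress"},
    "outdoor": {"mountain", "tent", "kayak"},
}

def categorize(tags: list) -> list:
    """Return every taxonomy category that matches at least one tag."""
    lowered = {t.lower() for t in tags}
    return sorted(cat for cat, words in TAXONOMY.items() if words & lowered)

print(categorize(["Mountain", "shoe", "sky"]))  # ['apparel', 'outdoor']
```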
In this session we will see how to assign a phone number to a chatbot created using Dialogflow, Google Cloud Platform, Node.js, and the Vonage API integrations. The architecture shown will allow users to call your agent by phone with an experience similar or equal to what is possible via the web. You can use Dialogflow and Google Cloud Platform for many purposes: creating interactions for your own communities, whether a conversational application for families, companies, or sports, or supporting workflows for both customers and businesses. Talking to an automated agent can be a poor experience if the conversation is not well designed. These pieces of technology can also help you escalate the conversation to a real human: sentiment analysis can detect when human intervention is needed, leveraging both sides of AI and machine learning in one computer-human interaction platform!
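As a rough illustration of the escalation idea, the sketch below gates on a toy sentiment signal; a production bot would use Dialogflow's sentiment analysis rather than this invented keyword scorer:

```python
# Toy escalation gate: hand the call to a human once enough utterances
# look negative. The keyword list stands in for a real sentiment model.
NEGATIVE = {"angry", "frustrated", "terrible", "cancel"}

def should_escalate(utterances: list, threshold: int = 2) -> bool:
    """Count utterances containing a negative cue; escalate past threshold."""
    hits = sum(any(w in u.lower() for w in NEGATIVE) for u in utterances)
    return hits >= threshold

call = ["I was billed twice", "this is terrible", "I'm frustrated, cancel it"]
print(should_escalate(call))  # True: two negative utterances reach threshold
```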
OPEN TALK (API): Conversation Intelligence: Enabling Conversation Driven AI Is as Easy as Hitting a Few Endpoints
Conversation Intelligence (CI) enables developers to take their applications beyond basic speech recognition, and build more intelligent speech and conversation-driven functionalities and product experiences. Applications, enabled by CI, are not only able to understand the spoken words, but are capable of comprehending the context of entire conversations.
CI is a rapidly growing sector of AI, and has given rise to a new generation of AI-driven products such as Gong, Outreach, RingDNA, and more. Applications, driven by CI, are able to monitor, extract, and analyze contextual insights and conversation intelligence in real-time to automate workflows, increase revenue, elevate productivity, and provide more pleasant and innovative customer experiences.
Building and extending applications with CI-enabled functionalities and experiences no longer requires developers to have any working knowledge of building or training their own machine learning models. Hitting a few endpoints is all it takes to enable CI-driven experiences. Some real-life examples of how CI is leveraged in everyday applications include products for sales and revenue intelligence, agent coaching, webinar platforms, accessibility, compliance, recruitment, and more.
In this session, we will cover the key characteristics of the conversation intelligence API that enable developers to easily build and go live with intelligence. We will talk about various AI aspects of conversation intelligence such as speech-to-text, extracting various contextual insights, summarizing conversations, generating domain-specific insights and intelligence, topic modeling for conversations, and accessing advanced conversation analytics. We will discuss the difference between domain-specific and domain-agnostic CI. We will also take a look at an example showcasing a combination of a few of these with actual code.
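To give a feel for "hitting a few endpoints," the sketch below builds the two requests such a flow might involve: submitting a recording for processing, then fetching a derived insight. The endpoint paths and field names are hypothetical, not any specific vendor's API:

```python
# Hedged sketch of a CI request flow. Paths, fields, and feature names
# are invented placeholders, not a real provider's contract.
def submit_conversation(audio_url: str) -> dict:
    """Build a request to submit a recorded call for async processing."""
    return {"method": "POST", "path": "/v1/process/audio",
            "body": {"url": audio_url,
                     "features": ["transcript", "topics",
                                  "action_items", "summary"]}}

def fetch_insight(conversation_id: str, insight: str) -> dict:
    """Build a request to fetch one derived insight for a conversation."""
    return {"method": "GET",
            "path": f"/v1/conversations/{conversation_id}/{insight}"}

job = submit_conversation("https://example.com/call.wav")
req = fetch_insight("abc123", "topics")
print(req["path"])  # /v1/conversations/abc123/topics
```

The point is the shape of the developer experience: two small HTTP calls stand between an audio file and structured conversation insights.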
Thursday, October 28, 2021
Today, data is being generated by devices and containers living at the edge of networks, clouds, and data centers. We need to run business logic, analytics, and deep learning at the edge before we start our real-time streaming flows. Fortunately, using the all-Apache FLiP stack, we can do this with ease! Streaming AI-powered analytics from the edge to the data center is now a simple use case. With MiNiFi we can ingest the data; run data checks, cleansing, machine learning, and deep learning models; and route our data in real time to Apache NiFi and/or Apache Pulsar for further transformations and processing. Apache Flink will provide our advanced streaming capabilities, fed in real time via Apache Pulsar topics. Apache MXNet models will run both at the edge and in our data centers via Apache NiFi and MiNiFi.
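A single step of such an edge flow, sketched with invented topic names and thresholds, might validate a reading, cleanse it, and pick a downstream route, in the spirit of what MiNiFi does before handing data to NiFi or Pulsar:

```python
# Sketch of one edge-flow step: data check, cleanse, route.
# Topic names and the 90-degree alert threshold are invented for illustration.
def route_reading(reading: dict):
    """Return (topic, cleaned_reading), or None if the data check fails."""
    temp = reading.get("temp_c")
    if temp is None or not (-50 <= temp <= 150):
        return None  # data check failed: drop the reading at the edge
    cleaned = {"device": reading.get("device", "unknown"),
               "temp_c": round(float(temp), 1)}
    topic = "alerts" if cleaned["temp_c"] > 90 else "telemetry"
    return topic, cleaned

print(route_reading({"device": "edge-01", "temp_c": 97.34}))
print(route_reading({"device": "edge-02", "temp_c": 999}))  # None (dropped)
```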
The beauty of IoT solutions is that they can be managed from a central location and deployed all around the world. The challenge, however, is that if a device is disconnected from the network, the only option is to send someone to the location to diagnose and fix the problem, which can be tedious, expensive, and slow.
This is not an ideal solution, as the whole point of this type of deployment is that it can reach remote places. Moreover, you don't always have people everywhere the devices are deployed.
In these conditions, working with your cellular network provider(s) to diagnose connectivity issues can be a struggle, as their network infrastructure is often a black box. There needs to be a way to remotely diagnose a network issue or take preemptive action to avoid failures.
In this talk, I will take you through ways of using meta-data from the cloud-native core network of EMnify to troubleshoot network issues using the EMnify API.
Attendees will walk away with the following knowledge:
How to get more out of your SIM card and connection used in your IoT device?
What could be the possible reasons for your IoT device to go offline?
What kind of meta-data can you get from the core of a Cellular Network infrastructure?
How to troubleshoot your offline devices?
How to use this network meta-data to form comprehensive dashboards to keep an eye on all your devices?
How else can network meta-data help you with daily operations in managing your IoT solution?
The target attendees are developers, product managers, operations people, CTOs, and others who work in the IoT industry and use cellular communication for their IoT devices.
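To give a flavor of what such network meta-data enables, the sketch below triages an offline device from a few illustrative fields; the field names and rules are invented, not the EMnify schema:

```python
# Toy triage of an offline IoT device from cellular network metadata.
# Field names and causes are illustrative assumptions, not a real API schema.
def diagnose(meta: dict) -> str:
    if not meta.get("sim_activated", False):
        return "SIM not activated"
    if meta.get("last_attach_reject_cause") == "roaming_not_allowed":
        return "roaming blocked on current network"
    if meta.get("data_session_active") is False and meta.get("signal_dbm", 0) < -110:
        return "poor radio coverage"
    return "no network-side issue detected"

print(diagnose({"sim_activated": True,
                "data_session_active": False,
                "signal_dbm": -117}))  # poor radio coverage
```

Feeding rules like these with live metadata is also the basis for the dashboards mentioned above.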
After years of fintech companies site-scraping bank websites, we’re finally seeing APIs. Plaid now lets you go to Chase bank directly, log in, and get secure, reliable API access. And as those much-needed APIs arrived, the industry gained several “decacorns” and a longer list of unicorns. Fintech APIs came later than others, but experienced a growth spurt shocking even to the tech industry. And while we’ve seen well-designed APIs that adopted existing good standards, differences and inconsistencies between fintech APIs show that these APIs aren’t at the quality they could be. Fintech API businesses are debating internally what standards and designs work best (formats, user representations, and so on), all while ensuring security and privacy in APIs where the stakes are higher. We’ll highlight differences among successful APIs in the space to identify the open questions that lead to more solid standards for fintech.
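One concrete inconsistency of the kind discussed: two providers representing the same transaction with different date formats and amount units. The payload shapes below are invented for illustration, and the normalizers show why a shared standard would remove this work:

```python
# Sketch: normalizing two invented bank-transaction formats into one shape.
from datetime import date

def normalize_bank_a(tx: dict) -> dict:
    """Bank A: decimal dollars, ISO dates."""
    return {"amount_cents": int(round(tx["amount"] * 100)),
            "date": date.fromisoformat(tx["date"]),
            "description": tx["desc"].strip()}

def normalize_bank_b(tx: dict) -> dict:
    """Bank B: integer minor units, US-style dates."""
    m, d, y = (int(p) for p in tx["posted"].split("/"))
    return {"amount_cents": tx["amountMinor"],
            "date": date(y, m, d),
            "description": tx["memo"].strip()}

a = normalize_bank_a({"amount": 12.50, "date": "2021-10-26", "desc": " Coffee "})
b = normalize_bank_b({"amountMinor": 1250, "posted": "10/26/2021", "memo": "Coffee"})
print(a == b)  # True: same transaction, once normalized
```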
Event-driven architectures are not new - but the way they are used, documented, and specified has matured significantly in the past few years. The drivers behind the EDA Revolution are varied: the explosion of microservices, the advent of 'real-time' interaction models, and the creation of tooling and specifications to design, document, govern, implement, test, and monitor event-driven systems. What can we learn from our journey with RESTful APIs about the future of event-driven architecture in our organizations? What role do asynchronous services play in delivering value in our organizations?
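The core event-driven contract can be sketched in a few lines: producers publish to a topic without knowing who consumes. This in-process toy omits everything a real broker provides (durability, ordering, delivery guarantees), but shows the decoupling that distinguishes EDA from request/response:

```python
# Minimal in-process publish/subscribe sketch of the event-driven model.
from collections import defaultdict

class EventBus:
    def __init__(self):
        self._subs = defaultdict(list)

    def subscribe(self, topic, handler):
        self._subs[topic].append(handler)

    def publish(self, topic, event):
        # The publisher has no knowledge of which handlers exist.
        for handler in self._subs[topic]:
            handler(event)

bus = EventBus()
audit_log = []
bus.subscribe("order.created", lambda e: audit_log.append(e["id"]))
bus.subscribe("order.created", lambda e: print("notify:", e["id"]))
bus.publish("order.created", {"id": "o-42"})
```

Adding a third consumer requires no change to the publisher, which is exactly the property that lets event-driven systems grow the way RESTful ecosystems did.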
“Blockchain for API Developers”: We are not headed toward the era of blockchain; we are in it. Are you an API developer who wants to learn how to build blockchain solutions using APIs? If so, this session is for you! New to blockchain? We've got you covered there as well: we will discuss blockchain basics and use cases, as well as getting started with blockchain developer tools, including in-depth demos of installing and using the Algorand API SDKs. Blockchain usage has become ubiquitous across all sectors of the economy, including medical, charities, automotive, telecom, the food industry, voting, gaming, and more. Blockchain’s primary use case is to maintain the integrity of replicated data. Blockchain is considered one of the Web 3.0 technologies, and we will cover how to build blockchain solutions. Algorand is a modern blockchain that creates blocks in under five seconds with instant transaction finality; it scales to billions of users and is profiled at 1,000 transactions per second. You will learn how to use the Java SDK to build Algorand blockchain solutions. Algorand Standard Assets (ASA), Atomic Transfers, and Smart Contracts will be covered. In this session you will learn: how to build blockchain solutions using APIs; how to access tools for getting started on Algorand development; how to access developer portal resources including SDKs, REST APIs, command-line tools, tutorials, solutions, getting started, articles and...; and the benefits of Algorand’s Reward and Ambassador programs. Join Russ Fustino, Algorand Developer Advocate, for this informative session on the Algorand blockchain.
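The all-or-nothing property of an atomic transfer can be illustrated with a toy ledger; the real implementation uses the Algorand SDK's transaction grouping rather than this sketch, which only demonstrates the guarantee itself:

```python
# Conceptual sketch of an atomic transfer: a group of transactions that
# either all apply or none do. A toy ledger, not the Algorand SDK.
def apply_atomic_group(balances: dict, txns: list) -> bool:
    trial = dict(balances)  # stage every transfer against a copy
    for sender, receiver, amount in txns:
        if trial.get(sender, 0) < amount:
            return False  # any failure aborts the whole group
        trial[sender] -= amount
        trial[receiver] = trial.get(receiver, 0) + amount
    balances.clear()
    balances.update(trial)  # commit only if every transfer succeeded
    return True

ledger = {"alice": 100, "bob": 5}
ok = apply_atomic_group(ledger, [("alice", "bob", 50), ("bob", "carol", 40)])
print(ok, ledger)  # True {'alice': 50, 'bob': 15, 'carol': 40}
```

Note the second transfer only succeeds because the first is staged in the same group, which is the practical power of atomic transfers.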
tl;dr - Simplify API development by generating OpenAPI specs that automatically follow your API Design Guide. Produce 100x more consistent and conforming APIs with 1/10th the work ... for every development team. Ok, ok. So your company has decided to standardize on OpenAPI with a contract-first approach. Awesome. But job done? Hardly. Does your company already have an API Design Guide to ensure your developers produce uniform APIs that your customers will love? If so, that's a great next step. OpenAPI can be used to implement pretty much any HTTP-based API design. But this leaves the unpleasant task of translating from the API Design Guide to a conforming OpenAPI spec to your developers. Newer alternatives to REST APIs such as GraphQL and gRPC benefit significantly from removing a lot of this design and development friction from developers. They hide the infra plumbing and expose tooling at an abstraction level that developers care about. In this talk, we'll describe how we've bridged this gap for REST APIs at Confluent. Rather than asking overworked developers to read and internalize the myriad details from our (quite lengthy) API Design Guide, we created an internal DSL and CLI tool to generate OpenAPI specs that follow our API Design Guide. Even better than API linting, OpenAPI generation results in the most consistent API designs. In turn, this simplifies API adoption and expansion for our customers, while reducing the workload for our overburdened engineers. Win-win!
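The generator idea can be sketched as follows: a call into a tiny invented DSL emits an OpenAPI fragment with a design-guide rule (here, mandatory list pagination) applied automatically, so no team has to remember it. This is an assumption-laden toy, not Confluent's actual tooling:

```python
# Sketch: generating an OpenAPI fragment from a tiny internal DSL, so a
# design-guide rule is enforced in one place. DSL and rules are invented.
def generate_openapi(resource: str, operations: list) -> dict:
    path = f"/{resource.lower()}"
    spec = {"openapi": "3.0.3", "paths": {path: {}}}
    if "list" in operations:
        spec["paths"][path]["get"] = {
            "summary": f"List {resource}",
            "parameters": [  # design-guide rule: every list is paginated
                {"name": "page_size", "in": "query",
                 "schema": {"type": "integer"}},
                {"name": "page_token", "in": "query",
                 "schema": {"type": "string"}},
            ],
        }
    if "create" in operations:
        spec["paths"][path]["post"] = {"summary": f"Create a {resource[:-1]}"}
    return spec

spec = generate_openapi("clusters", ["list", "create"])
print(sorted(spec["paths"]["/clusters"]))  # ['get', 'post']
```

Because pagination parameters are emitted by the generator, every team's list endpoints conform without anyone re-reading the design guide.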