AI & Machine Learning
Wednesday, September 30, 2020
With most machine learning (ML) and deep learning (DL) frameworks, it can take hours to move data and train models. Scaling is also hard, as data sets increasingly exceed the capacity of any single server. The size of the data also makes it difficult to incrementally test and retrain models in near real time to improve results.
Learn how the Apache Ignite in-memory computing platform addresses these ML limitations with distributed model training and execution, providing near-real-time, continuous learning capabilities. This discussion explains how distributed ML/DL works with Apache Ignite and how to get started. Topics include:
- Overview of distributed ML/DL, including design, implementation, usage patterns, pros, and cons
- Overview of Apache Ignite ML/DL, including prebuilt ML/DL algorithms and how to add your own
- How to integrate Apache Ignite with Apache Spark to improve Spark data pipeline throughput
- How Apache Ignite and TensorFlow can be used together for distributed DL model training and execution
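The core idea behind the distributed training described above can be sketched without Ignite at all: each node computes local statistics over its own data partition, and a reduce step merges them into one global model. The example below is a minimal pure-Python illustration of that map-reduce pattern (using closed-form least squares), not Ignite's actual API.

```python
# Sketch of partition-based distributed training: compute local sufficient
# statistics on each data partition, then reduce them into a global model.

def local_stats(partition):
    """Map step: sufficient statistics for least squares on one partition."""
    n = len(partition)
    sx = sum(x for x, _ in partition)
    sy = sum(y for _, y in partition)
    sxx = sum(x * x for x, _ in partition)
    sxy = sum(x * y for x, y in partition)
    return (n, sx, sy, sxx, sxy)

def combine(stats):
    """Reduce step: merge partition statistics and solve y = a*x + b."""
    n, sx, sy, sxx, sxy = map(sum, zip(*stats))
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

# Data spread across three "nodes"; the exact fit is y = 2x + 1.
partitions = [
    [(0, 1), (1, 3)],
    [(2, 5), (3, 7)],
    [(4, 9)],
]
a, b = combine([local_stats(p) for p in partitions])
print(a, b)  # → 2.0 1.0
```

Because only small statistic tuples travel between nodes, the raw data never leaves its partition, which is the property that lets this pattern scale past a single server.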
We all love the conventional uses of CI/CD platforms, from automating unit tests to multi-cloud service deployment. But most CI/CD tools are abstract code-execution engines, meaning we can also leverage them for tasks unrelated to deployment. In this session, we'll explore how GitHub Actions can be used to train a machine learning model and then run predictions in response to file commits, enabling an untrained end user to predict the value of their home simply by editing a text file. As a bonus, we'll leverage Apple's CoreML framework, which normally runs only in a macOS or iOS environment, without ever requiring the developer to lay their hands on an Apple device.
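The commit-triggered prediction step can be sketched in a few lines. The session uses CoreML; below, a plain linear model stands in for the trained model, and the file format (one `key=value` pair per line) and the coefficient values are assumptions invented for illustration.

```python
# Hedged sketch of a CI job predicting from a committed text file.
# Hypothetical pretrained coefficients: price = w_sqft*sqft + w_beds*beds + bias
MODEL = {"sqft": 150.0, "beds": 10_000.0, "bias": 50_000.0}

def parse_request(text):
    """Parse the committed text file: one 'key=value' pair per line."""
    features = {}
    for line in text.strip().splitlines():
        key, value = line.split("=", 1)
        features[key.strip()] = float(value)
    return features

def predict(features):
    """Apply the linear model to the parsed features."""
    return (MODEL["sqft"] * features["sqft"]
            + MODEL["beds"] * features["beds"]
            + MODEL["bias"])

# Contents a user might commit to trigger the workflow:
request = "sqft = 1200\nbeds = 3"
price = predict(parse_request(request))
print(f"${price:,.0f}")  # → $260,000
```

In the real workflow, a GitHub Actions trigger on push would run a script like this against the edited file and write the prediction back to the repository.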
With supercomputers in our pockets, self-driving vehicles, and software recognizing images better than humans, what we recently thought of as the future is already here, so how do we define the next future? Rod Cope explains how different aspects of artificial intelligence, augmented reality, high-performance computing, digital platforms, massive bandwidth, and an obsessive focus on user experience will be the fundamental drivers to future application success as we build upon lower barriers to entry and shift from improving technology to improving life.
Rod shares his 20+ year journey from the forefront of open source to a predicted future where the IoT and big data are the new normal and the key questions are less “How can we do it?” and more “How do we make it better?” Come to this session to learn what you can do now in terms of research, planning, and investment to get the most out of our inevitable future.
When building cloud applications, we should always bear in mind that our services are exposed on the internet, can be accessed by anyone, and may have untrusted users.
Because of this, we need to be proactive and aware of these potential security threats so that we can design our cloud applications to handle them properly. Beyond preventing malicious attacks, cloud applications must also be designed to protect sensitive data and to grant access to certain resources only to authorized users.
In this session, I will talk about three security patterns that can be used to prevent malicious or accidental actions outside of the application's designed usage, and to prevent disclosure or loss of information when building for the cloud.
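The abstract does not name the three patterns, but a common thread among cloud security patterns is checking authorization before any resource is touched, and failing closed. The sketch below is a generic illustration of that idea; the role names and resource table are hypothetical.

```python
# Minimal gatekeeper-style check: confirm the caller is authorized
# before serving a resource. Unknown resources and roles are denied.

ACL = {
    "reports/q3.pdf": {"analyst", "admin"},
    "billing/export": {"admin"},
}

def authorize(user_role, resource):
    """Return True only if the role is explicitly granted the resource."""
    allowed = ACL.get(resource)
    return allowed is not None and user_role in allowed

def handle_request(user_role, resource):
    # Fail closed: anything not explicitly permitted is rejected.
    if not authorize(user_role, resource):
        return "403 Forbidden"
    return f"200 OK: serving {resource}"

print(handle_request("analyst", "reports/q3.pdf"))  # → 200 OK: serving reports/q3.pdf
print(handle_request("analyst", "billing/export"))  # → 403 Forbidden
```

Keeping the check in one choke point, rather than scattered through handlers, is what makes accidental exposure less likely.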
PRO SESSION: Enable AI to Drive Intelligent Customer Interactions and Reach Your Contact Center's Full Potential
With each new year, artificial intelligence (AI) creeps higher on enterprises’ lists of priorities, thanks to the promise of benefits such as automating processes and enhancing the customer experience with more intelligent interactions. With AI, the kinds of digital transformation that seemed far-fetched or futuristic just a few years ago are within reach today. It’s becoming clear that communications is at the core of businesses’ digital transformation, and programmable technology is the key. Programmability offers the flexibility and agility needed to implement intelligent communications tools, customized to meet the individual needs of each business and its employees. To meet this growing need, providers must innovate to offer enterprises unprecedented capabilities to customize their business communications applications.

In this presentation, Tony Hung, Senior Software Engineer at Vonage, will discuss how enterprises with limited machine learning expertise can leverage simple, secure, and flexible solutions to deploy intelligent solutions in their contact centers. He’ll demonstrate in real time how true programmability has the power to enhance UC and contact center applications, allowing enterprises to adapt to customer demands and generate new insights to better serve and delight customers, improving the overall customer journey. By enabling AI-based tools such as programmable building blocks, chatbots, open source skills-based routing, and real-time sentiment analysis, intelligent communications are helping companies transform the way they communicate, how they operate, and how they connect with customers.
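Two of the building blocks mentioned above, skills-based routing and real-time sentiment analysis, can be sketched together in a few lines. The agent roster, skill tags, and word list below are invented for illustration; real deployments would use a trained sentiment model rather than a word count.

```python
# Toy skills-based routing with a sentiment flag for priority handling.

AGENTS = [
    {"name": "Ada", "skills": {"billing", "refunds"}},
    {"name": "Grace", "skills": {"tech", "setup"}},
]

NEGATIVE = {"angry", "broken", "terrible", "cancel"}

def sentiment(text):
    """Crude lexicon score: count of negative words (lower is calmer)."""
    return sum(word.strip(".,!?") in NEGATIVE for word in text.lower().split())

def route(inquiry, skill_needed):
    """Pick an agent with the skill; flag heated conversations for priority."""
    candidates = [a for a in AGENTS if skill_needed in a["skills"]]
    if not candidates:
        return ("queue", False)
    priority = sentiment(inquiry) >= 2
    return (candidates[0]["name"], priority)

print(route("My setup is broken and I'm angry!", "tech"))  # → ('Grace', True)
print(route("Question about a refund.", "refunds"))        # → ('Ada', False)
```

The point of the programmable approach is that each piece (the scorer, the routing rule, the escalation threshold) is ordinary code a business can swap out to fit its own workflow.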