Deep AI and Neural Networks

Tuesday, October 25, 2022

PRO Workshop (AI): Sparsity without Sacrifice – How to Accelerate AI Models Without Losing Accuracy
Anshuman Mishra
Numenta, Principal Researcher
Lawrence Spracklen
Numenta, Director of Machine Learning Architecture

Most companies with AI models in production today are grappling with stringent latency requirements and escalating energy costs. One way to reduce these burdens is by pruning such models to create sparse, lightweight networks. Pruning involves the iterative removal of weights from a pre-trained dense network to obtain a network with fewer parameters, trading off against model accuracy. Determining which weights should be removed in order to minimize the impact on the network’s accuracy is critical. For real-world networks with millions of parameters, however, analytical determination is often computationally infeasible; heuristic techniques are a compelling alternative.

In this presentation, we talk about how to implement commonly used heuristics such as gradual magnitude pruning (GMP) in production, along with their associated accuracy-speed trade-offs, using the BERT family of language models as an example.

Next, we cover ways of accelerating such lightweight networks to achieve peak computational efficiency and reduce energy consumption. We walk through how our acceleration algorithms optimize hardware efficiency, unlocking order-of-magnitude speedups and energy savings.

Finally, we present best practices on how these techniques can be combined to achieve multiplicative effects in reducing energy costs and runtime latencies without sacrificing model accuracy.
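To make the GMP heuristic concrete, here is a minimal sketch of the idea, assuming a toy PyTorch model and a placeholder fine-tuning step (neither of which is Numenta's production implementation): the target sparsity rises over several rounds, the smallest-magnitude weights are zeroed at each round, and the network is fine-tuned in between. A production setup would also hold the pruning mask fixed during fine-tuning so removed weights do not regrow.

```python
# Minimal gradual magnitude pruning (GMP) sketch, assuming a toy PyTorch
# model and a placeholder fine-tuning step (neither comes from the talk).
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(768, 3072), nn.ReLU(), nn.Linear(3072, 768))
linear_layers = [m for m in model.modules() if isinstance(m, nn.Linear)]

def fine_tune(model):
    # Placeholder: run a short fine-tuning pass on task data here so the
    # surviving weights can recover the accuracy lost to pruning.
    pass

# Gradually raise the target sparsity; at each step, zero out the
# smallest-magnitude weights in every Linear layer, then fine-tune.
for target_sparsity in (0.2, 0.4, 0.6, 0.7, 0.8):
    with torch.no_grad():
        for layer in linear_layers:
            w = layer.weight
            k = int(target_sparsity * w.numel())
            if k == 0:
                continue
            # Threshold at the k-th smallest absolute weight in this layer.
            threshold = w.abs().flatten().kthvalue(k).values
            w.mul_((w.abs() > threshold).float())
    fine_tune(model)
```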


Wednesday, October 26, 2022

OPEN TALK (AI): Deep Dive on Creating a Photorealistic Talking Avatar
Sebastiano Galazzo
Synapsia.ai, Artificial Intelligence Researcher

Creating a photorealistic avatar that can speak any sentence, starting from written input text.

Focusing on autoencoders, we will take a journey from the beginning (of the speaker's own experience), through the mistakes made and the tips learned along the way.
The session will showcase:

- Intro: the timeline from the beginning to today
- Why this is NOT a deepfake
- Audio processing techniques: STFT (short-time Fourier transform), mel spectrograms, and custom solutions (see the sketch after this list)
- Deep learning models and architecture
- The inpainting-inspired technique used to animate the mouth
- Masks and convolution
- Landmark extraction
- A morphing animation technique based on autoencoder features
- Microsoft Azure Speech services used to support audio and animation processing
- Putting it all together
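As a rough companion to the audio-processing bullet above, the sketch below turns a waveform into a log-mel spectrogram via an STFT. It uses librosa and placeholder parameters of our own choosing; the custom pipeline described in the talk may differ.

```python
# Illustrative audio front-end for driving mouth animation: waveform ->
# STFT -> mel spectrogram -> log scale. Uses librosa; the speaker's custom
# pipeline may differ.
import numpy as np
import librosa

def log_mel_spectrogram(path, sr=16000, n_fft=1024, hop_length=256, n_mels=80):
    # Load audio and resample to a fixed rate.
    y, sr = librosa.load(path, sr=sr)
    # Short-time Fourier transform: complex spectrogram, frames on the time axis.
    stft = librosa.stft(y, n_fft=n_fft, hop_length=hop_length)
    power = np.abs(stft) ** 2
    # Project the linear-frequency bins onto a mel filterbank.
    mel = librosa.feature.melspectrogram(
        S=power, sr=sr, n_mels=n_mels, n_fft=n_fft, hop_length=hop_length
    )
    # Log-compress so the network sees a perceptually flatter dynamic range.
    return librosa.power_to_db(mel, ref=np.max)

# Each column of the returned (n_mels, n_frames) array can then be aligned
# with a video frame to condition the mouth-animation model.
```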

OPEN TALK (AI): Pushing Deepfakes to the Limit - Fake Video Calls with AI
Martin Förtsch
TNG Technology Consulting GmbH, Principal Consultant
Thomas Endres
TNG Technology Consulting GmbH, Partner
Jonas Mayer
TNG Technology Consulting GmbH, Senior Consultant

Today's real-time Deepfake technology makes it possible to create indistinguishable doppelgängers of a person and let them participate in video calls. Since 2019, the TNG Innovation Hacking Team has intensively researched and continuously developed the AI around real-time Deepfakes. The final result and the individual steps towards photorealism will be presented in this talk.

Since their first appearance in 2017, Deepfakes have evolved enormously from an AI gimmick into a powerful tool. Meanwhile, media outlets and television formats such as "Leschs Kosmos" and Galileo have been using TNG Deepfakes.

In this talk, we will show the different evolutionary steps of Deepfake technology, starting with the first Deepfakes and ending with real-time Deepfakes of the entire head in high resolution. Several live demos will shed light on individual components of the software. In particular, we focus on new technologies we use to improve Deepfake generation, such as TensorFlow 2 and MediaPipe, and on the differences from our previous implementations.
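Since the abstract names MediaPipe as one of the building blocks, here is a small, hedged sketch (our own assumption, not TNG's actual pipeline) of how per-frame face landmarks might be pulled from a webcam feed before being handed to a real-time face-swap model:

```python
# Illustrative only: extract face landmarks per webcam frame with MediaPipe.
# A real-time Deepfake pipeline could feed these landmarks (or the aligned
# face crop) into its generator; TNG's actual implementation may differ.
import cv2
import mediapipe as mp

face_mesh = mp.solutions.face_mesh.FaceMesh(
    static_image_mode=False,   # video mode: track landmarks across frames
    max_num_faces=1,
    refine_landmarks=True,     # extra landmarks around eyes and lips
)

cap = cv2.VideoCapture(0)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # MediaPipe expects RGB input; OpenCV delivers BGR.
    results = face_mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.multi_face_landmarks:
        landmarks = results.multi_face_landmarks[0].landmark
        # Each landmark is a normalized (x, y, z) point; here one would crop
        # and align the face region and pass it to the face-swap generator.
        print(len(landmarks), "landmarks in this frame")
    if cv2.waitKey(1) & 0xFF == 27:  # Esc to quit
        break
cap.release()
```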

Thursday, October 27, 2022

PRO TALK (AI): Physics-Based Graph Neural Networks Enable Composable, Strongly Typed Neural Networks
Troy Harvey
PassiveLogic, Co-founder, CEO, and Product Architect

PassiveLogic’s (www.passivelogic.com) platform for generalized autonomy utilizing Deep Digital Twins is built on systems-level control theory. The platform is generalized because it can be used to control any kind of system. At its core, this type of platform works on the sensor-fusion and control-fusion of digital models. In these Deep Digital Twin models, the digital twin literally is the AI structure. Each digital twin utilizes the fundamentals of physics to model a single component or piece of equipment. When multiple digital twins are linked to each other in a graph neural network, they form a system description. Because their physics are integral to the models themselves, these graph-based system descriptions model not only the real complexities of systems but also their emergent behavior and the system semantics.
Deep physics networks are structured similarly to neural networks, but unlike the homogeneous activation functions of neural nets, each neuron comprises unique physical equations representing a function in a thermodynamic system. The Deep Physics approach is built on heterogeneous neural nets that are composable, have physics guarantees, allow users to define their own systems, learn unsupervised, and generate a physics description of a system. Because the approach is so principled, it is also necessarily more constrained, which means the physics-based graph neural networks can be used to predict future system behavior.
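To make the idea of a heterogeneous, physics-based graph concrete, here is a toy sketch of our own devising (not PassiveLogic's platform or API): two thermal nodes, each carrying its own physical update equation rather than a generic activation function, linked by a conductance edge and stepped forward in time.

```python
# Toy heterogeneous "physics graph": each node carries its own physical
# equation (simple thermal RC dynamics here) instead of a generic
# activation function. Purely illustrative; not PassiveLogic's platform.
from dataclasses import dataclass

@dataclass
class ThermalNode:
    name: str
    temperature: float          # state (degrees C)
    capacitance: float          # thermal capacitance (J/K)
    heat_input: float = 0.0     # external heat source (W)

@dataclass
class ConductanceEdge:
    a: ThermalNode
    b: ThermalNode
    conductance: float          # thermal conductance between nodes (W/K)

def step(nodes, edges, dt):
    """Advance the graph one explicit Euler step of dT/dt = Q / C."""
    heat = {n.name: n.heat_input for n in nodes}
    for e in edges:
        q = e.conductance * (e.a.temperature - e.b.temperature)
        heat[e.a.name] -= q   # heat flows from the warmer node
        heat[e.b.name] += q   # into the cooler one
    for n in nodes:
        n.temperature += dt * heat[n.name] / n.capacitance

room = ThermalNode("room", temperature=20.0, capacitance=5e5)
heater = ThermalNode("heater", temperature=20.0, capacitance=1e4, heat_input=2000.0)
edges = [ConductanceEdge(heater, room, conductance=50.0)]

for _ in range(3600):           # simulate one hour at 1 s resolution
    step([room, heater], edges, dt=1.0)
print(round(room.temperature, 2), round(heater.temperature, 2))
```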
The physics-based graph neural network provides systems-level intelligence because it understands the interconnectivity of components in a system. As such, it can automatically infer behavior and introspect results, even where sensors do not exist. Using this inference ability, an autonomous control platform built on Deep Digital Twins can provide self-commissioning, automate point mapping, validate installation, and provide continuous system measurement and verification against its original design. Real-time operational data can be brought into the model for real-time machine learning, so that the model adapts and predicts system behavior more accurately.
In this talk, Troy Harvey, CEO at PassiveLogic, will describe Deep Digital Twin AI structures and the applications for generalized autonomy. 

Tuesday, November 1, 2022

[#VIRTUAL] PRO Workshop (AI): Sparsity without Sacrifice – How to Accelerate AI Models Without Losing Accuracy
Anshuman Mishra
Numenta, Principal Researcher
Lawrence Spracklen
Numenta, Director of Machine Learning Architecture

Most companies with AI models in production today are grappling with stringent latency requirements and escalating energy costs. One way to reduce these burdens is by pruning such models to create sparse, lightweight networks. Pruning involves the iterative removal of weights from a pre-trained dense network to obtain a network with fewer parameters, trading off against model accuracy. Determining which weights should be removed in order to minimize the impact on the network’s accuracy is critical. For real-world networks with millions of parameters, however, analytical determination is often computationally infeasible; heuristic techniques are a compelling alternative.

In this presentation, we talk about how to implement commonly used heuristics such as gradual magnitude pruning (GMP) in production, along with their associated accuracy-speed trade-offs, using the BERT family of language models as an example.

Next, we cover ways of accelerating such lightweight networks to achieve peak computational efficiency and reduce energy consumption. We walk through how our acceleration algorithms optimize hardware efficiency, unlocking order-of-magnitude speedups and energy savings.

Finally, we present best practices on how these techniques can be combined to achieve multiplicative effects in reducing energy costs and runtime latencies without sacrificing model accuracy.


Wednesday, November 2, 2022

[#VIRTUAL] OPEN TALK (AI): Pushing Deepfakes to the Limit - Fake Video Calls with AI
Thomas Endres
TNG Technology Consulting GmbH, Partner
Martin Förtsch
TNG Technology Consulting GmbH, Principal Consultant
Jonas Mayer
TNG Technology Consulting GmbH, Senior Consultant

Today's real-time Deepfake technology makes it possible to create indistinguishable doppelgängers of a person and let them participate in video calls. Since 2019, the TNG Innovation Hacking Team has intensively researched and continuously developed the AI around real-time Deepfakes. The final result and the individual steps towards photorealism will be presented in this talk.

Since their first appearance in 2017, Deepfakes have evolved enormously from an AI gimmick into a powerful tool. Meanwhile, media outlets and television formats such as "Leschs Kosmos" and Galileo have been using TNG Deepfakes.

In this talk, we will show the different evolutionary steps of Deepfake technology, starting with the first Deepfakes and ending with real-time Deepfakes of the entire head in high resolution. Several live demos will shed light on individual components of the software. In particular, we focus on new technologies we use to improve Deepfake generation, such as TensorFlow 2 and MediaPipe, and on the differences from our previous implementations.

Thursday, November 3, 2022

[#VIRTUAL] OPEN TALK (AI): Deep Dive on Creating a Photorealistic Talking Avatar
Sebastiano Galazzo
Synapsia.ai, Artificial Intelligence Researcher

Creating a photorealistic avatar that can speak any sentence, starting from written input text.

Focusing on autoencoders, we will take a journey from the beginning (of the speaker's own experience), through the mistakes made and the tips learned along the way.
The session will showcase:

- Intro: the timeline from the beginning to today
- Why this is NOT a deepfake
- Audio processing techniques: STFT (short-time Fourier transform), mel spectrograms, and custom solutions
- Deep learning models and architecture
- The inpainting-inspired technique used to animate the mouth
- Masks and convolution
- Landmark extraction
- A morphing animation technique based on autoencoder features
- Microsoft Azure Speech services used to support audio and animation processing
- Putting it all together

[#VIRTUAL] PRO TALK (AI): Enriching 3D Point Cloud Data with Artificial Intelligence
Rodrigo Cabello
Plain Concepts, Research Engineer

3D point clouds provide detailed and precise information about any environment, thanks to LiDAR scanners. Applying artificial intelligence to point clouds allows us to create a digital twin.

In this session, we will introduce the point cloud concept and explain in detail the current state of the art in artificial intelligence techniques for object detection and segmentation.

Point cloud datasets contain millions of points and are difficult to process. For this reason, the most efficient encoder for object detection will be used: CUDA-PointPillars. This model performs well enough to run real-time inference on IoT devices.

A real-world case of pipe detection in industrial plants will be shown. The whole deep learning workflow will be explained step by step: from training (with PyTorch) to model optimization and quantization (with TensorRT). The demo will run on an NVIDIA Jetson Nano.
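As a hedged illustration of the optimization step in that workflow (the model, input shape, and file names below are placeholders, not the exact recipe from the session), one common path is to export the trained PyTorch detector to ONNX and then build a reduced-precision TensorRT engine for the Jetson:

```python
# Illustrative export path: trained PyTorch model -> ONNX -> TensorRT engine.
# The model, input shape, and file names are placeholders for the example.
import torch
import torch.nn as nn

class TinyDetector(nn.Module):
    """Stand-in for a point-cloud detection head (not CUDA-PointPillars)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(64, 128, 3, padding=1),
                                 nn.ReLU(),
                                 nn.Conv2d(128, 7, 1))  # e.g. box parameters
    def forward(self, x):
        return self.net(x)

model = TinyDetector().eval()
dummy = torch.randn(1, 64, 496, 432)   # pseudo-image from a pillar encoder

# 1) Export to ONNX so TensorRT can parse the graph.
torch.onnx.export(model, dummy, "detector.onnx", opset_version=13,
                  input_names=["features"], output_names=["boxes"])

# 2) On the Jetson, build an optimized engine, e.g. with trtexec:
#    trtexec --onnx=detector.onnx --fp16 --saveEngine=detector.engine
#    (add --int8 plus a calibration cache for INT8 quantization)
```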

[#VIRTUAL] PRO TALK (AI): Physics-Based Graph Neural Networks Enable Composable, Strongly Typed Neural Networks
Troy Harvey
PassiveLogic, Co-founder, CEO, and Product Architect

PassiveLogic’s (www.passivelogic.com) platform for generalized autonomy utilizing Deep Digital Twins is built on systems-level control theory. The platform is generalized because it can be used to control any kind of system. At its core, this type of platform works on the sensor-fusion and control-fusion of digital models. In these Deep Digital Twin models, the digital twin literally is the AI structure. Each digital twin utilizes the fundamentals of physics to model a single component or piece of equipment. When multiple digital twins are linked to each other in a graph neural network, they form a system description. Because their physics are integral to the models themselves, these graph-based system descriptions model not only the real complexities of systems but also their emergent behavior and the system semantics.
Deep physics networks are structured similarly to neural networks, but unlike the homogeneous activation functions of neural nets, each neuron comprises unique physical equations representing a function in a thermodynamic system. The Deep Physics approach is built on heterogeneous neural nets that are composable, have physics guarantees, allow users to define their own systems, learn unsupervised, and generate a physics description of a system. Because the approach is so principled, it is also necessarily more constrained, which means the physics-based graph neural networks can be used to predict future system behavior.
The physics-based graph neural network provides systems-level intelligence because it understands the interconnectivity of components in a system. As such, it can automatically infer behavior and introspect results, even where sensors do not exist. Using this inference ability, an autonomous control platform built on Deep Digital Twins can provide self-commissioning, automate point mapping, validate installation, and provide continuous system measurement and verification against its original design. Real-time operational data can be brought into the model for real-time machine learning, so that the model adapts and predicts system behavior more accurately.
In this talk, Troy Harvey, CEO at PassiveLogic, will describe Deep Digital Twin AI structures and the applications for generalized autonomy.