Tuesday, November 10, 2020
Your company has grown and has hired a security team, or a security person. We’re done, right? Everything is secure? Clearly, this is not the reality. Integrating security into development and operational practices is an ongoing, iterative process, and DevSecOps will look different across organizations. So no, you can’t just buy the same tools as everyone else and unlock the security achievement.
Effective DevSecOps is about recognizing where different functions add unique value and optimizing around that in order to continuously improve security of your products with the ultimate goal of keeping your customers and your company safe. In this talk, I’ll share some ways to more effectively utilize your security team and where automation can help you and your security team scale together.
The goal of this talk is to provide you with some tools to meaningfully discuss security improvements and give you options for where to start making immediate progress. I’ll be sharing some of the pitfalls I’ve experienced, including where automation can hinder your progress. I will also talk about how I think about prioritization of security improvements and share my perspective as a security engineer.
With robust DevOps processes in place, teams are leveraging multiple tools and technologies to build and deploy their applications faster than ever. However, numerous high-risk issues can exist in these enterprises if security is not treated as a quality characteristic, or is simply ignored. The intent of DevSecOps is to ensure close collaboration between Development, Operations, and Security teams. In this session Vivek will explain how DevSecOps enables everyone to consider infrastructure and application security right from the start of the project, thus making everyone responsible for application security. It can reduce the cost associated with security issues by detecting and fixing them in the early stages of development. He will also walk us through some of the key benefits of DevSecOps, which include robust infrastructure, reduced vulnerabilities, continuous security, enhanced compliance, easier threat hunting, increased code coverage, and automation.
AWS kicked it off; Azure and Google followed. The three main cloud providers each publish a framework for better architecting around their services, focused on delivering value to customers with high performance, enterprise-grade security, operational excellence, and high reliability, all while optimizing cost. During this talk we will learn the basic pillars of these frameworks and specific ways they can improve your overall cloud posture.
Of course no Dev talk would be complete without a good automation debate, so we are also going to discuss briefly the power of Infrastructure as Code (IaC) and how it can be a great friend on your journey to automating security, compliance and an overall well-architected cloud environment.
The days of manually deploying infrastructure are over. IT teams need automation tools to modernize toward IT-as-Code. This is achieved through flexibility: IT teams must operate on a platform that accommodates CI/CD pipelines. The pipelines, in turn, must go beyond traditional DevOps and bring in Security and Ops to take a truly holistic DevSecOps approach. The goal is to enable all tech teams, including security and ops, to use DevOps tools to integrate with ticketing systems, run security remediation playbooks, deploy Kubernetes with a security benchmark assessment, automate the creation of SSL certificates, and even spin up virtual firewalls with an applied configuration in the cloud. All this with each team leveraging DevOps and security tools like Terraform, Vault, kubectl, Ansible, CIS-CAT Assessor, and others.
- What is DevSecOps and what are CI/CD pipelines
- How CI/CD Pipelines work for DevSecOps
- Why enterprises need hybrid CI/CD pipelines
- Real world use cases with Kubernetes, Terraform, Vault, CIS-CAT Assessor
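The idea of bringing security into the pipeline can be boiled down to a gate that halts deployment when a check fails. The following is a minimal sketch of that pattern; the stage names and string-based checks are hypothetical stand-ins for real tools like a CIS benchmark scan or dependency audit.

```python
# Hypothetical DevSecOps pipeline with a security gate. The checks here
# are toy string inspections standing in for real scanners.

def lint(artifact):
    return "syntax_error" not in artifact

def security_scan(artifact):
    # a CIS benchmark assessment or CVE audit would run here
    return "known_cve" not in artifact

def deploy(artifact):
    return True

PIPELINE = [("lint", lint), ("security-scan", security_scan), ("deploy", deploy)]

def run_pipeline(artifact):
    """Run stages in order; halt at the first failing gate."""
    for name, stage in PIPELINE:
        if not stage(artifact):
            return f"failed at {name}"
    return "deployed"

print(run_pipeline("clean-build"))           # deployed
print(run_pipeline("build-with-known_cve"))  # failed at security-scan
```

The key design point is that the security stage sits in the same ordered list as every other stage, so a failed scan blocks deployment exactly the way a failed lint does.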
The cloud has changed the way hackers operate, and you need to change how you think about securing your cloud assets against this new generation of exploits.
In this talk, Josh Stella, CTO of Fugue, will walk through a simulation of an advanced cloud misconfiguration exploit. He’ll explain at every step how common—but frequently overlooked—mistakes leave cloud data vulnerable, and how most cloud-based data breaches go undetected, even long after the fact.
You’ll gain fresh insights into how to think critically about your cloud security posture and how to identify and eliminate serious misconfiguration risks.
In this session, you’ll learn:
How cloud misconfigurations occur and why they frequently go undetected
How to assess your cloud environment for misconfiguration vulnerabilities
How to prevent misconfigurations using policy-as-code
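At its core, policy-as-code means expressing rules like "no public buckets" as executable checks over a machine-readable resource description. This is a minimal sketch of that idea; the field names are hypothetical, not a real cloud provider's schema, and real engines (e.g. Open Policy Agent) use dedicated policy languages.

```python
# Policy-as-code sketch: each rule is a plain function evaluated against
# a resource description. Field names are illustrative only.

RULES = {
    "no-public-buckets": lambda r: not (r["type"] == "bucket" and r.get("public", False)),
    "encryption-at-rest": lambda r: r.get("encrypted", False),
}

def evaluate(resource):
    """Return the names of all rules this resource violates."""
    return [name for name, rule in RULES.items() if not rule(resource)]

bad = {"type": "bucket", "public": True, "encrypted": False}
print(evaluate(bad))  # ['no-public-buckets', 'encryption-at-rest']
```

Because the rules are code, they can run in CI against infrastructure-as-code templates before anything is deployed, which is how misconfigurations get prevented rather than merely detected.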
In this session, we will discuss concerns over security, privacy, and compliance holding back organizations from making the move to fully cloud-native initiatives. As more and more companies orchestrate their containerized applications in Kubernetes, enabling DevSecOps and continuous security becomes a must.
We will look at the end-to-end SDLC process - from the first line of code up to an application running in a Kubernetes cluster - to examine the importance of DevSecOps: where you can start, what it looks like for a developer, key patterns for success, and how you can achieve speed and scale while reducing risk and ensuring compliance.
When you combine the efficiency of containers, the agility of serverless, and the flexibility of event-driven services, you end up with a more reusable, interoperable, and scalable architecture with minimal management overhead.
In this talk, we’ll explore the open-source Knative Eventing and its managed version Cloud Run. We’ll explore what they provide for event-driven serverless containers and we’ll deep dive into some real-world reference architectures.
At the end of this session, you’ll have a solid understanding on how Knative and Cloud Run can power your event-driven apps.
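In Knative Eventing's binary content mode, CloudEvent attributes arrive as `ce-*` HTTP headers and the event payload as the request body. The sketch below shows only that receiving step; the event type and handler logic are hypothetical, and a real service would sit behind an HTTP server.

```python
# Handling a CloudEvent as delivered in binary mode: attributes come in
# as ce-* headers, the data as the body. Event type is made up.

import json

def handle_event(headers, body):
    """Extract CloudEvent attributes and dispatch on event type."""
    event_type = headers.get("ce-type")
    source = headers.get("ce-source")
    data = json.loads(body)
    if event_type == "com.example.order.created":  # hypothetical type
        return f"order {data['id']} from {source}"
    return "ignored"

headers = {"ce-type": "com.example.order.created",
           "ce-source": "/orders", "ce-id": "42"}
print(handle_event(headers, '{"id": 7}'))  # order 7 from /orders
```

This is what makes event-driven serverless containers interoperable: any container that can read HTTP headers can consume events, regardless of which broker or source produced them.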
We know that costs for public cloud services can quickly get out of hand, especially as we adopt, automate, and scale CI/CD practices. Without visibility into cloud costs, the invoice is a black box. For many users desperately seeking to reduce their cloud spend, the root causes of these costs remain hidden in the subtleties of cloud resources, which makes it difficult for teams to plan work, test their services, and manage costs. Join this session to learn how to manage development resources to reduce cloud costs; attendees will learn how to optimize Kubernetes workloads.
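One common Kubernetes cost-optimization tactic is comparing what a workload requests against what it actually uses: requests reserve capacity you pay for whether or not it is consumed. A toy sketch of that comparison, with fabricated workload names and numbers:

```python
# Flag workloads whose observed CPU usage is far below their CPU request,
# i.e. candidates for right-sizing. All data here is made up.

def overprovisioned(workloads, threshold=0.5):
    """Return names of workloads using less than `threshold` of requested CPU."""
    return [w["name"] for w in workloads
            if w["cpu_used"] / w["cpu_requested"] < threshold]

workloads = [
    {"name": "api",    "cpu_requested": 2.0, "cpu_used": 1.6},
    {"name": "worker", "cpu_requested": 4.0, "cpu_used": 0.5},
]
print(overprovisioned(workloads))  # ['worker']
```

In practice the usage figures would come from a metrics system such as Prometheus, aggregated over a representative window rather than a single sample.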
The goal of this talk is to explain the difference between two common continuous deployment strategies for Kubernetes clusters.
There are two common strategies for continuous deployment to Kubernetes-based environments. The first is a push-based model, where a deployment pipeline (e.g. Jenkins / Azure DevOps / …) pushes new applications to a container cluster. The second is a pull-based model (e.g. Flux / Argo), where services running on the cluster pull new application configuration into the cluster.
In this talk we'll discuss both approaches, and do a compare and contrast between them. We'll also see a demo of both approaches to continuous delivery on a Kubernetes Cluster.
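The heart of the pull-based model is a reconcile loop: an in-cluster agent compares the desired state (typically read from Git) with the cluster's actual state and applies whatever converges the two. This is a conceptual sketch only; real tools like Flux or Argo CD do far more.

```python
# Conceptual reconcile step of a pull-based deployment agent: diff the
# desired state against the actual state and list the actions to apply.

def reconcile(desired, actual):
    """Return the actions a pull-based agent would apply."""
    actions = []
    for name, version in desired.items():
        if name not in actual:
            actions.append(f"deploy {name}:{version}")
        elif actual[name] != version:
            actions.append(f"update {name} to {version}")
    for name in actual:
        if name not in desired:
            actions.append(f"delete {name}")
    return actions

desired = {"web": "v2", "api": "v1"}   # what Git says should run
actual  = {"web": "v1", "batch": "v1"} # what the cluster is running
print(reconcile(desired, actual))
# ['update web to v2', 'deploy api:v1', 'delete batch']
```

Note the contrast with the push model: here nothing outside the cluster needs deploy credentials, and drift (like the stray `batch` workload) is corrected automatically on the next loop.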
Many enterprises have existed since long before public cloud and virtual machines were a reality. They are heavily invested in their own data centers running on physical machines. They deploy software with traditional tools like Puppet or Ansible, daydream about modern microservices architectures, and envy the next blog post coming out of the Googles and Facebooks of the world. With Kubernetes, this has changed. We show those enterprises that it's possible to run their software on the same kind of infrastructure that was once accessible only to a select few companies. We also show that, even in the absence of VMs and managed cloud offerings, it's still possible to run Kubernetes on bare-metal hardware, and that with a little investment in processes and monitoring, you can run a billion-dollar empire, modernize your software toward a microservices architecture, and follow modern cloud-native practices like CI/CD and the twelve-factor app.
"Containers are the new ZIP format to distribute software" is a fitting description of today's development world. However, it is not always that easy and this talk highlights the development of Elastic's container strategy over time:
* Docker images: A new distribution model.
* Docker Compose: Local demos and a little more.
* Helm Chart: Going from demo to production.
* Kubernetes Operator: Full control with upgrades, scaling,...
Besides the strategy, we will also discuss specific technical details and hurdles that appeared during development, and why the future will be a combination of Helm Chart and Operator (for now).
Wednesday, November 11, 2020
In this pandemic season, nations across the world are in various stages of economic response and recovery. As technology plays a vital role in the recovery of nations, the role developers need to play is also undergoing a massive shift. Much like the gap between developers and operations teams brought about DevOps, it's now time for developers to integrate far more tightly with business leadership to help companies navigate the economic turmoil by leveraging tech. I'll talk about four skills the new developer needs to master beyond tech alone to realize the potential for transformation: building BizDev teams, understanding the business and its problems, digging into pricing and optimization, and fostering a culture of rapid experimentation and communication. I'll draw on experiential learning from the Asian dev community and customers.
Breakthroughs in artificial intelligence (AI), machine learning (ML), and natural language processing (NLP) have helped customers and call agents alike get more done in less time. AI draws on multiple data sources to anticipate customer and company needs, handles interactions on its own where possible, and provides in-call support where needed.
The future of AI in the contact center is one where software tools make humans more efficient and allow the customers to have natural conversations with a bot via voice, webchat, social messaging app or other channels, handling requests, retrieving information and delivering answers to frequently asked questions. In short, creating the ultimate customer experience.
During this session, Noam Fine will discuss how enterprises with limited machine learning expertise can leverage communications APIs to unlock simple, secure and flexible solutions to deploy AI in their contact centers, elevating issues to experienced agents when needed to ensure personalized, emotive CX. He will draw on his experience to explain how enterprises can automate their agent-based live chats and streamline their support channels and operations, while offering a personalized human-like interaction. Most importantly, he will discuss how to find the right balance between seamless, intelligent self-service and efficient human intervention using integrated AI-driven communications - applications, APIs and the best of both.
We all love the conventional uses of CI/CD platforms, from automating unit tests to multi-cloud service deployment. But most CI/CD tools are abstract code execution engines, meaning that we can also leverage them to do non-deployment-related tasks. In this session, we'll explore how GitHub Actions can be used to train a machine learning model, then run predictions in response to file commits, enabling an untrained end-user to predict the value of their home by simply editing a text file. As a bonus, we'll leverage Apple's Core ML framework, which normally only runs in a macOS or iOS environment, without ever requiring the developer to lay their hands on an Apple device.
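To make the "predict a home's value from a text file" workflow concrete, here is the prediction step in miniature: a tiny linear model fit with closed-form least squares. The data is fabricated for illustration, and the talk's actual demo uses Core ML inside GitHub Actions rather than this hand-rolled regression.

```python
# Toy least-squares fit: predict price from square footage.
# Training data is invented and perfectly linear for clarity.

def fit(xs, ys):
    """Closed-form simple linear regression: return (slope, intercept)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

sqft  = [1000, 1500, 2000, 2500]
price = [200_000, 300_000, 400_000, 500_000]
slope, intercept = fit(sqft, price)

def predict(x):
    return slope * x + intercept

print(predict(1800))  # 360000.0
```

In the CI setting, a workflow step would read the edited text file, run a function like `predict`, and write the result back, say, as a commit comment; that plumbing is what GitHub Actions provides.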
Artificial intelligence (AI) is increasingly being used in systems affecting our everyday lives. These systems are often referred to as "black box" systems because we do not know how the data is processed; we are just given a result. The advent and widespread adoption of deep neural networks, while providing impressive results, has made this even more pressing, since it is quite hard for a human to interpret how information is processed across thousands of neurons. In such scenarios, how can we trust the decisions that have been made? This is especially important when considering critical systems like diagnosis tools for doctors, where patients' lives are at risk. So how can Open Source help us trust AI?
In this session, we will explore the many ways in which open source can create trust for AI systems. By leveraging the ideas of peers, open source can give greater opportunity to innovate and create features that people support. The transparency of open source helps to improve the relationship users have with the algorithms and implementations behind these systems.
We will also investigate different open source projects that help to explain “black box” models, relating this to how it increases user understanding and trust. These projects help to promote responsible AI, ensuring systems (as mentioned above) can be trusted and applied to real-world situations.
This session is intended for anyone with a keen interest in open source or AI and will give an insight into:
-How open source supports trust in AI systems.
-Open source projects that enable explanations of black-box models.
-Why we need to trust AI systems in real-world applications.
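One of the simplest explanation techniques such projects build on is occlusion: perturb one input feature at a time and measure how much the model's output moves. The sketch below uses a transparent stand-in for a black-box model purely to illustrate the mechanic; real libraries use more principled methods like SHAP or LIME.

```python
# Occlusion-style importance: zero out each feature and record how far
# the model's output shifts. The "model" is a toy stand-in.

def model(features):
    # pretend black box: secretly weighs feature 0 heavily, ignores feature 2
    return 5.0 * features[0] + 1.0 * features[1] + 0.0 * features[2]

def occlusion_importance(model, features):
    """Per-feature importance scores via zero-occlusion."""
    base = model(features)
    scores = []
    for i in range(len(features)):
        occluded = list(features)
        occluded[i] = 0.0
        scores.append(abs(base - model(occluded)))
    return scores

print(occlusion_importance(model, [1.0, 1.0, 1.0]))  # [5.0, 1.0, 0.0]
```

The point for trust is that the explanation is computed without looking inside the model at all, so the same probe works on any system that exposes predictions.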
JSON has become a popular data interchange format because it is easy to read, write, and understand. Relational databases remain popular and trusted repositories for persisting data. Microservices have become the preferred architectural components for building modern applications, and RESTful APIs exchanging JSON data are an attractive way to interact with them. However, it is tedious and time-consuming to write the low-level code necessary to implement RESTful APIs for persisting JSON objects in relational databases.
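A small taste of the plumbing the abstract calls tedious: persisting JSON objects in a relational table by hand. The table and function names are illustrative; a real API layer would add routing, validation, and error handling on top of this.

```python
# Hand-rolled JSON persistence in a relational database, using Python's
# built-in sqlite3. Schema and names are illustrative only.

import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE docs (id INTEGER PRIMARY KEY, body TEXT)")

def save(obj):
    """Serialize a JSON object into a row; return its generated id."""
    cur = conn.execute("INSERT INTO docs (body) VALUES (?)", (json.dumps(obj),))
    return cur.lastrowid

def load(doc_id):
    """Fetch a row by id and deserialize it back to a JSON object."""
    row = conn.execute("SELECT body FROM docs WHERE id = ?", (doc_id,)).fetchone()
    return json.loads(row[0]) if row else None

doc_id = save({"name": "widget", "price": 9.99})
print(load(doc_id))  # {'name': 'widget', 'price': 9.99}
```

Multiply this boilerplate by every entity, endpoint, and query shape in a microservice and the motivation for generating such code automatically becomes clear.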