Dev Innovation Summit
Wednesday, August 18, 2021
Traditionally, the development process of a product or feature is passed around between people with different roles: Scrum masters, architects, developers, QA, DevOps, etc.
By empowering developers to take on most or all of these responsibilities, we allow for a streamlined process.
You can rightfully argue that a QA professional, for example, will on average be much better than a developer at finding bugs, but in some cases the cost to the process and the loss of flexibility outweigh the benefit.
After working this way for a few years, I want to share how we managed to make this approach work for us and where we are still trying to improve.
I'll also share what this approach does to the culture of an R&D organization and how we can benefit from it in terms of growth opportunities and retention for developers.
We had a dream. To have continuously releasable code, and to provide functionality in more than one language without too much effort.
But in the past (like many before us), we didn’t always succeed. Our Open Source codebase is available on two platforms, Java & .NET and it wasn’t an easy task to keep them always in sync and buildable. In the old days we did this manually, risking broken develop branches in both codebases and this meant getting a .NET release out could take a month (or more). There had to be a better way…
In this talk we’ll share how we overcame these hurdles, and how we transitioned from a manually tested and ported codebase to an automated system where develop is always green!
Our automated system is not final yet. It’s continuously being improved, and we still have many ideas… We’ll share some of these such as the introduction of build agents in the cloud. This would enable us to run tests on different platforms and configurations almost effortlessly.
Guided by a timeline, we’ll go through step by step how we achieved this by introducing different tooling, some in-house and some external. The main part of our talk will be about our Merge Pipeline (trademark pending) based on Jenkins, which forms the backbone of our fully-fledged automated system.
We’ll share its internal details and explain how it handles the different steps that are needed to get a Java branch merged, and automatically ported into the Java and .NET develop branches.
It’s designed so that as little time as possible is wasted, by running steps in parallel and keeping track of what has or has not already run. Code that does not pass Sonar will not make it into develop. Functional tests live in a separate repository but travel together with the code they are testing.
We firmly believe that others would benefit from learning the steps we took along the way, and how simple tooling can be used to introduce a merge pipeline to help the development process at any company.
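The gist of a merge pipeline like the one described above can be sketched in a few lines; the step names, the quality-gate stand-in, and the threading model below are simplified assumptions, not the actual Jenkins setup:

```python
"""A minimal sketch of the merge-pipeline idea: run independent
verification steps in parallel, track what already ran, and only
merge when every gate (including a Sonar-style quality gate) passes."""
from concurrent.futures import ThreadPoolExecutor

def run_step(name):
    # Placeholder for a real pipeline step (build, port, tests, scan).
    completed.add(name)                        # remember what was already run
    return name != "sonar_quality_gate" or CODE_QUALITY_OK

CODE_QUALITY_OK = True                         # stand-in for the Sonar verdict
completed = set()
steps = ["java_build", "dotnet_port", "functional_tests", "sonar_quality_gate"]

with ThreadPoolExecutor() as pool:             # independent steps run in parallel
    results = dict(zip(steps, pool.map(run_step, steps)))

# Code that fails any gate never reaches develop.
merge_allowed = all(results.values())
print(merge_allowed, sorted(completed))
```

In a real Jenkins pipeline the same shape appears as parallel stages feeding a single merge stage.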
SaaS is not for everyone! Many organizations are prohibited from storing data outside their data centers, worried about security, or restrained by limited change control!
The solution is a hybrid architecture that allows deploying your SaaS offering on-premises. While we have known examples such as AWS Outposts or Google Anthos, the fact is that not everyone is Google or Amazon! Join my session where I share how we at Dynatrace moved from SaaS to a hybrid offering that includes an on-premises deployment. I discuss my top three aspects of a successful hybrid implementation: proactive support, automated update delivery, and a zero-configuration approach, and how our “Mission Control” takes care of these. This talk should inspire you to expand your software offering from SaaS to on-premises with the lowest total cost of ownership!
With the increasing complexity of modern applications, continuous profiling methods and tools are gaining popularity among the Developer and Engineering communities. In this session, we cover what continuous profiling entails and why you should implement a profiler into your tech stack (if you haven’t done so already). We’ll then bring theory to practice and demonstrate a real-life scenario using gProfiler, a free open-source continuous profiling tool.
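As a rough illustration of what a sampling profiler like gProfiler does under the hood, here is a toy, single-process sketch; the sampling interval, duration, and hot function are invented:

```python
"""A toy sampling profiler: periodically capture the main thread's
current stack frame and count which function it was in. Continuous
profilers apply the same idea system-wide with far lower overhead."""
import collections, sys, threading, time

samples = collections.Counter()

def sampler(interval=0.001, duration=0.2):
    # Periodically grab the main thread's currently executing frame.
    end = time.time() + duration
    main_id = threading.main_thread().ident
    while time.time() < end:
        frame = sys._current_frames().get(main_id)
        if frame is not None:
            samples[frame.f_code.co_name] += 1   # attribute the sample
        time.sleep(interval)

def busy_loop():
    # A hypothetical hot function we expect to dominate the profile.
    deadline = time.time() + 0.15
    while time.time() < deadline:
        pass

profiler = threading.Thread(target=sampler)
profiler.start()
busy_loop()
profiler.join()
print(samples.most_common(3))
```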
OPEN TALK: Innovation Beyond Technology: Creating a Groundbreaking Tech Solution with Freelance Talent
When Alexander Weekes, a Toptal freelance project manager, began working with client ArthroLense, the small start-up consisted of two orthopedic surgeons and an idea. As the project manager, Alex and a team of freelance developers helped transform their idea to use augmented reality in surgeries into a salable product with a multi-million dollar valuation.
In this interactive session, Alex will tell the story of ArthroLense, the pioneering innovator of image-guided surgical assist technologies.
He will discuss:
- How, as a project manager contracted by Toptal, he created the processes, selected the tools, and bridged the gap between technical, medical, design, and financial perspectives.
- Working with cutting-edge augmented reality hologram technology.
- Why the future of high-caliber, innovative projects is freelance contractors.
Site Reliability Engineering and the DevOps movement share a similar set of challenges but address each in a different way. SRE got its start at Google in 2003, and according to Ben Treynor, VP of 24/7 Operations: ”SRE is what happens when you ask a software engineer to design an operations team”. In 2016, Google published a book about Site Reliability Engineering principles, practices, and organizational constructs.
The practice of Site Reliability Engineering at Google encompasses more than just managing production systems and responding to emergencies. Applying software engineering in a principled way to operations allows SRE to holistically address the reliability of software applications across the product lifecycle.
Implementing SRE in an organization requires a commitment to supporting some core principles and a fundamental culture shift:
- SRE needs Service Level Objectives, with consequences.
- SREs have time to make tomorrow better than today.
- SRE teams have the ability to regulate their workload.
- SREs and the organization’s leaders remove the word ‘blame’ from their vocabulary.
This talk will highlight key SRE principles and how they map to recognized DevOps focus areas. We’ll also discuss how any organization can adopt SRE, and how our recent experience of working with our customers on implementing SRE practices has shown these principles will work across a range of organizations of different types and sizes.
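The first principle, SLOs with consequences, is often operationalized as an error budget. A minimal sketch with made-up numbers, not any particular vendor's tooling:

```python
"""Error-budget arithmetic: the SLO implicitly grants a budget of
allowed failures, and exhausting it triggers a consequence such as
a release freeze."""
SLO_TARGET = 0.999            # 99.9% of requests must succeed
total_requests = 1_000_000
failed_requests = 1_200       # observed failures this period

error_budget = total_requests * (1 - SLO_TARGET)   # allowed failures
budget_left = error_budget - failed_requests       # negative means exhausted

# The "consequence": freeze feature releases until reliability recovers.
releases_frozen = budget_left < 0
print(f"budget left: {budget_left:.0f}, releases frozen: {releases_frozen}")
```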
New tools like GitHub Copilot use AI to help you write better code. Developer Workflow Optimization tools use AI to streamline the rest of your dev pipeline: reviews, ticket tracking, status updates, and meetings.
In this session you’ll learn how Developer Workflow Optimization (DWO) tools surface data to help you make informed decisions about what you’re building and shipping as well as automate menial tasks you perform 10-50 times a day.
Here are examples we’ll cover in the talk:
You have 20 minutes until your next meeting. DWO suggests a code review you can complete in 15 minutes.
You’re assigned 5 pull requests. DWO tells you which reviews to prioritize based on the urgency and importance of the projects they’re related to.
You’re fixing an urgent bug and quickly pushing it into production. DWO automatically opens a ticket and tracks its state.
You’re issuing a pull request. DWO estimates the CI and PR review time based on availability to help you plan what’s next.
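A prioritization heuristic like the one in these examples could be sketched as a simple scoring function; the weights and fields below are hypothetical, not any vendor's actual algorithm:

```python
"""Rank pending code reviews by urgency and importance, then use
estimated review time to break ties and to pick what fits into a
free time slot before the next meeting."""
pull_requests = [
    {"id": 101, "urgency": 3, "importance": 2, "est_review_min": 15},
    {"id": 102, "urgency": 1, "importance": 3, "est_review_min": 45},
    {"id": 103, "urgency": 3, "importance": 3, "est_review_min": 30},
]

def priority(pr):
    # Urgent-and-important first; shorter reviews break ties.
    return (-(pr["urgency"] * pr["importance"]), pr["est_review_min"])

ranked = sorted(pull_requests, key=priority)
free_minutes = 20    # time until the next meeting
fits_now = [pr["id"] for pr in ranked if pr["est_review_min"] <= free_minutes]
print([pr["id"] for pr in ranked], fits_now)
```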
Join us and see how to improve developer and team productivity, cut idle time, decrease cognitive overhead and maximize situational awareness with Developer Workflow Optimization.
Software is at the heart of many products we use today, from consumer electronics to mobile applications. The way we deliver software has changed as well, now that we work in the cloud. Every developer has become a little DevSecOps engineer, expected to deliver secure, robust, and scalable software. CI/CD technology was supposed to make the time from coding to production much shorter, and processes were supposed to be fully automated. But are we really doing it right?
In this session I will talk about current trends in developer-centric security, best practices for implementation and some of the lessons learned from creating a shift-left cloud security product.
While the business keeps increasing the pressure on, and the demand for flexibility from, the development team, the agile movement has been pushed to its limits. CI/CD was born to reduce manual steps, reducing human error and increasing speed to go-live! Last but not least, with DevOps, teams took on application responsibilities from cradle to grave. Nevertheless, software security is still missing from many full-stack developers' résumés, and application security responsibilities are still pushed off to the security department. A pity, because agile, CI/CD, and DevOps are exactly the practices that enable security.
This session explains shift-left: early security enablement in the development lifecycle. As application development becomes more developer-centric, the developer's toolset must match the new challenges, so that responsibilities match capabilities. Learn about everything from rugged software to supply chain cleanliness. Learn to avoid the common pitfalls and reap the benefits of modern application development strategies. Hear why security champions programs tend to fail and why compliance-driven security trainings are a waste of time and money. Take back best practices, proven solutions, and shift left beyond development.
A single cloud is no longer enough; it is time for multi-cloud! In this session, the latest trends and technologies related to multi-cloud will be presented. However, technology is not enough, so real business case studies and their benefits will be presented as well. Finally, future research directions will be briefly described. The session is based on over five years of commercial and research experience.
Did you know that the way you type is unique?
Join this session to find out how typing biometrics evolved, its current use-cases in CyberSecurity, and get a glimpse into its future applications for Personal Productivity and beyond.
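For a sense of why typing is unique: typing biometrics typically compare timing features such as dwell time (how long a key is held) and flight time (the gap between keys). A toy sketch with made-up millisecond timestamps:

```python
"""Extract a simple keystroke-dynamics feature vector from
(key, press, release) events. A real system would compare this
vector against an enrolled user profile."""
# (key, press_ms, release_ms) events for the word "dev"
events = [("d", 0, 95), ("e", 140, 230), ("v", 290, 370)]

# Dwell time: how long each key was held down.
dwell_times = [release - press for _, press, release in events]
# Flight time: gap between releasing one key and pressing the next.
flight_times = [events[i + 1][1] - events[i][2] for i in range(len(events) - 1)]

print("dwell:", dwell_times, "flight:", flight_times)
```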
OPEN TALK: Observability at Scale: Using AWS Services to Monitor the Performance of Your Microservices
Detecting and diagnosing performance issues in microservices can be difficult due to the distributed and decoupled nature of microservice-based architecture. While it is important to have metrics on the individual components, it may be necessary to follow the progress of a request as it travels across multiple service boundaries in order to identify problems. In this session, we will explore how AWS provides native monitoring, logging, alarming, and dashboards with Amazon CloudWatch, and tracing through AWS X-Ray and OpenTelemetry, to provide the three pillars (Metrics, Logs & Traces) of an overall observability solution.
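The tracing pillar rests on propagating a trace context across service boundaries. A stdlib-only sketch of the idea; the header name and services are simplified assumptions, not the X-Ray or OpenTelemetry wire format:

```python
"""A trace id minted at the edge is passed along in headers, so
spans recorded by different services can be stitched into one
request timeline by the tracing backend."""
import uuid

spans = []   # stand-in for a tracing backend

def record_span(service, headers):
    spans.append({"trace_id": headers["x-trace-id"], "service": service})

def checkout_service(headers):
    record_span("checkout", headers)
    payment_service(headers)          # propagate the same trace context

def payment_service(headers):
    record_span("payment", headers)

# The edge mints one trace id per incoming request.
headers = {"x-trace-id": uuid.uuid4().hex}
checkout_service(headers)

trace_ids = {s["trace_id"] for s in spans}
print(len(spans), len(trace_ids))     # two spans share one trace id
```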
Organizations are striving for productivity and agility. And the billion-dollar question of the technology industry today is: how do we get there? The answer mostly lies in the hands of developers. If developers can produce quality software, at pace, and keep sustaining it, we can successfully build dynamic and agile organizations. However, the technology landscape of today's industry is overwhelming. Developers are burdened with so much complexity that it creates confusion and sucks out their energy, spent solving meta-problems instead of focusing on the business need.
This talk introduces the concepts of Cloud Native Engineering and shows how they can help organizations build dynamic teams capable of delivering quality software at scale.
Should you always run your cluster in multiple availability zones? How can a transition rule on S3 double your storage costs? I want to monitor and understand my data transfer costs, where should I start? Following so-called “best practices” works only when you fully understand the implications, costs included. We will discuss a few cloud anti-patterns, making your bill smaller and your deployment better. And possibly reducing your cloud carbon footprint too.
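As one illustration of how a transition rule can backfire: archive tiers add per-object metadata overhead, so for millions of tiny objects the post-transition bill can exceed the original. Prices and overhead below are hypothetical round numbers, not current AWS pricing:

```python
"""Back-of-the-envelope: transitioning many small objects to a
cheaper storage class can cost more once per-object metadata
overhead is added to each object's billed size."""
objects = 10_000_000
object_kb = 8                              # lots of small objects
std_price, archive_price = 0.023, 0.004    # $/GB-month, made-up round numbers
overhead_kb = 40                           # assumed per-object archive overhead

def monthly_cost(per_object_kb, price_per_gb):
    total_gb = objects * per_object_kb / 1024 / 1024
    return total_gb * price_per_gb

cost_standard = monthly_cost(object_kb, std_price)
cost_archived = monthly_cost(object_kb + overhead_kb, archive_price)
print(f"standard: ${cost_standard:.2f}/mo, archived: ${cost_archived:.2f}/mo")
```

The lesson generalizes: run the arithmetic for your own object-size distribution before enabling a lifecycle rule.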
As organizations move to DevOps, the demand for faster software releases increases the chances of vulnerable code making its way into production. The good news is that coding errors leading to vulnerabilities are extremely preventable when on-demand, interactive secure coding lessons are readily available. This presentation will explain how modern AppSec awareness and training can bridge the gap between an organization’s secure software initiatives and the lack of secure coding best practices.
Join this session to learn:
• The secure coding challenges organizations face
• Why developers as well as organizations are asking for an AppSec Awareness Program or Solution
• How Checkmarx can help with an AppSec Awareness program (demo)
The cloud is a fortress. Public cloud infrastructure is owned by a handful of companies with hardly any oversight. They get to decide who to welcome and who to block, and they can slow transmissions. Not that they will, but they can. Once organizations select a cloud provider, they essentially have no choice but to trust a monopoly that could also become a competitor. But it does not need to be this way. Thanks to Kubernetes, Cloud Neutrality is now not only possible, it is easy.
In a few minutes, we will deploy a cloud-neutral Kubernetes cluster that spans any number of cloud providers, and we will show how your workloads are free to roam between clouds, switching automatically for cost optimization, scale, performance, or simply resilience to a cloud failure.
Cloud security in most dev environments is broken. With ever-changing environments, engineers focused on features, and DevOps enabling incredible agility, traditional cloud security can't keep up. Even with a security resource at hand, the chance of catching each bad Terraform default or hidden * in a wide-open IAM policy is near impossible across endless cloud services. In this session, we'll show how (with very little effort) you can adopt DevSecOps with the right training, tools, processes and strategy. You’ll get practical advice and tactical tips to start implementing IaC security scanning and fixing security issues right away.
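A toy version of the kind of check an IaC scanner performs, here flagging the "hidden *" in an IAM policy; the policy document is a made-up example, not Checkov's actual rule engine:

```python
"""Scan IAM policy statements for wide-open wildcards: an Allow
with Action "*" or Resource "*" grants far more than intended."""
policy = {
    "Statement": [
        {"Effect": "Allow", "Action": "s3:GetObject",
         "Resource": "arn:aws:s3:::logs/*"},
        {"Effect": "Allow", "Action": "*", "Resource": "*"},  # the hidden *
    ]
}

def findings(doc):
    for i, stmt in enumerate(doc["Statement"]):
        # Flag only bare "*" values, not scoped ARNs containing a wildcard.
        if stmt["Effect"] == "Allow" and "*" in (stmt["Action"], stmt["Resource"]):
            yield f"statement {i}: overly permissive Allow"

for finding in findings(policy):
    print(finding)
```

Running a check like this in CI, before `terraform apply`, is the essence of IaC security scanning.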
We live in the world of the visual web. Images and videos are consumed by millions of people every single day. Digital assets represent about 75% of an average website, which often leads to performance bottlenecks. During the talk we will investigate the history of the visual web, discuss its impact on performance, and showcase how developers can enhance the performance of their websites using the various media optimisation and transformation techniques available on the web today.
For a long time the Java Message Service has been the API for messaging systems in the Java world, and now the messaging ecosystem is moving to the next generation of streaming services like Apache Pulsar.
Why? Because Pulsar is free, Open Source, Cloud Native and it comes with cool new features that are not well supported by traditional JMS vendors.
In this session you will learn how to use Pulsar in a JakartaEE Web Application deployed on Apache TomEE via the JMS/EJB API, without installing any additional components to your cluster.
In 2020, OpenAI launched GPT-3, an autoregressive language model with 175 billion parameters that uses deep learning to produce human-like text.
In 2021, Google AI open-sourced Switch Transformer, an artificial intelligence language model with 1.6 trillion parameters.
How do these developments affect the tech industry?
Being data-driven is how tech operates right now, and the next step is becoming AI-driven. This applies to every field, from meditation apps through investment robots to cloud infrastructure. It won't stop there: new fields being disrupted by tech, like legal tech, med tech, and others, are also AI-driven.
Bringing innovation to your work means keeping up with these changes across all the tech teams - product, software, DevOps, QA, etc. In this talk we will cover the current state of AI and how you can make your product and teams future-compatible.
Remote work is a major paradigm shift that comes with unique challenges and big opportunities. As a “Virtual First” company, Dropbox is betting on the future of remote work. Join us for a chat with Allison Vendt, Head of Virtual First, People & Culture at Dropbox, to dig into what it takes to create a virtual first work culture and empower virtual first employees.
In this talk, I will walk through how someone can set up and run continuous SQL queries against Pulsar topics utilizing Apache Flink. We will walk through creating Pulsar topics, schemas and publishing data.
We will then cover consuming Pulsar data, joining Pulsar topics and inserting new events into Pulsar topics as they arrive. This basic overview will show hands-on techniques, tips and examples of how to do this using Pulsar tools.
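Flink SQL itself is beyond a short snippet, but the core idea of a continuous query, one that never terminates and re-emits results as events arrive, can be sketched with a generator; the event stream is invented:

```python
"""A continuously updating per-key count, like a streaming
GROUP BY: every incoming event yields a fresh result snapshot."""
import collections

def continuous_count(events, key="user"):
    counts = collections.Counter()
    for event in events:
        counts[event[key]] += 1
        yield dict(counts)     # emit an updated snapshot per event

stream = [{"user": "a"}, {"user": "b"}, {"user": "a"}]
for snapshot in continuous_count(stream):
    print(snapshot)
```

In the Flink/Pulsar setup, the topic plays the role of `stream` and the SQL engine maintains state like `counts` for you.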
Conversation Intelligence (CI) APIs enable developers to build applications that go beyond basic speech-to-text, creating a new array of sophisticated AI-driven experiences and functionalities. Basic speech recognition is designed to recognize or respond to explicit words and phrases, while conversation intelligence is capable of contextual comprehension of any human conversation, effectively extracting key insights, identifying user intent, surfacing actionable insights, detecting sentiment, and more. Conversation Intelligence has given rise to a new generation of AI-driven applications and platforms across verticals such as revenue intelligence, telehealth, call centers and customer support, collaboration and productivity platforms, and more.
Join our session to learn more about Conversation Intelligence, creating new app experiences with it, and how to do so with APIs.
How can you make time for real innovation and improvement?
How do you know what to automate next?
How do you escape process prison?
How can you get everyone aligned to make a difference?
How do you get from where you are to your next performance target?
Flow Engineering builds on the lean practice of value stream mapping with a full framework of collaborative mapping techniques. You can use it right now to reveal your biggest opportunities, eliminate hours of friction every week, and invest in what's next.
I'll introduce 4 powerful maps: Outcome, Value Stream, Dependency, and Capability, that you can co-create with your teams to uncover hidden insights and opportunities. I'll show you how to take those insights and create a powerful roadmap of actions and experiments to dramatically improve flow and deliver continuous value.
Use it to improve your:
- Development process
- Planning and shaping
- Delivery/Data/Testing/Analytics/Logging Pipeline
- Employee/Customer Onboarding
- Support/Failure Recovery/Incident Management
- Workflow of choice
…and start spending more time on what's next
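One concrete insight a Value Stream map surfaces is flow efficiency: how much elapsed time is actual work versus waiting. A minimal sketch with hypothetical step names and hours:

```python
"""Compute flow efficiency from a mapped value stream: the share
of total elapsed time that is active work rather than waiting."""
value_stream = [                      # (step, work_hours, wait_hours)
    ("code review", 2, 18),
    ("QA handoff", 4, 30),
    ("deploy approval", 1, 22),
]

work = sum(w for _, w, _ in value_stream)
wait = sum(q for _, _, q in value_stream)
flow_efficiency = work / (work + wait)

# Typically the wait columns dominate, which is exactly the friction
# a value stream map makes visible.
print(f"flow efficiency: {flow_efficiency:.0%}")
```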
Enterprise blockchain projects more than doubled from 2019 to 2020, and industry analysts expect use cases to keep growing at the same pace year after year. Although blockchain technology has moved beyond hype, it still has a ways to go before mass adoption, and building these products is easier said than done. With Blockchain.com valued at $5.2B and the number of wallet users continuing to grow rapidly, Lewis will discuss how to build high-performing blockchain applications aligned with company growth: from the challenges companies face when building products on top of blockchain and how to overcome them, to best practices to achieve and surpass user milestones.
In this talk, I provide some insights into the growth of software testing. Starting from its early days, we finish by discussing what we may face in the future. The key takeaway is to be ready for the next generation of testing activities (AI-supported testing and others).
We all observe that software testing continues to grow, proving that it is a living organism. Software testing processes first started to be incorporated into the Software Development Life Cycle (SDLC) in waterfall approaches: at the end of the development activities, verification and validation are performed to check the product before shipment to customers. What was the problem with the waterfall methodology? Testing activities were scheduled at the end of the timeline, and testers ran out of time because the preceding activities slipped.
Then, in agile methodologies, we see testing activities in all phases of the Software Development Life Cycle (SDLC), starting from the first sprint. At this point, the challenges are bigger, since it is a very dynamic environment with lots of changes in a short time.
To cope with a complex scope to be verified in limited time, automated testing entered our lives. Nowadays we meet lots of “Continuous X” terms, such as Continuous Integration, Deployment, and Testing. Can we go home and get some rest once we automate all our test cases? Of course not. We have to keep tracking test results, dealing with flaky tests, and keeping quality to a high standard. We still have many manual tasks around healing, maintenance, and analysis.
Nowadays, researchers are exploring how to adapt Machine Learning algorithms and other hot topics to testing processes to reduce manual effort and improve quality. To sum up, the improvement of software testing never ends, but sometimes the growth confuses people. What is the deal with Scrum? Why are people crazy about continuous integration and continuous delivery (CI/CD)? What is the difference between Agile and DevOps? We will go over lots of questions like these.
The objective of the talk is to provide some insights into the growth of software testing. Starting from its early days, we will try to survey the big picture and finally discuss what we may face in the future.
* Growth of software Testing
+ Replacement of manual activities with automation
+ Transition to Agile Methodologies
+ Adoption of DevOps
+ Machine Learning in Software Testing
* Wrap-Up & Questions
Take-aways: Awareness of the software testing lifecycle, and readiness for the next generation of activities (AI-supported testing and others).
In this talk, I will share the story of how LinkedIn designed our software engineering system, Multiproduct, and what we’ve learned from implementing, operating, and evolving it over the last ten years. I will share examples of design and implementation decisions we’ve made and how those decisions impacted our ability to develop and deploy software. I will describe tools and automation we’ve built and the organizational structures that have emerged to support our software development system. You will learn about LinkedIn’s multi-repo code setup and how we leverage semantic versioning and dependency management to share code across our product ecosystem. The lessons we learned will help you with your decisions when designing a software engineering system for your company.
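The semantic-versioning rule that makes cross-repo code sharing tractable can be sketched in a few lines; this is a generic illustration, not LinkedIn's actual tooling:

```python
"""Semantic versioning in one rule: consumers may auto-upgrade a
dependency within the same major version, because only major bumps
are allowed to break APIs."""
def parse(version):
    return tuple(int(part) for part in version.split("."))

def compatible(current, candidate):
    # Same major version, and the candidate is not older than current.
    return (parse(candidate)[0] == parse(current)[0]
            and parse(candidate) >= parse(current))

print(compatible("2.4.1", "2.5.0"))   # minor bump: safe to pick up
print(compatible("2.4.1", "3.0.0"))   # major bump: needs explicit migration
```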
Once upon a time, devices like Nest or connected coffee makers were all anyone talked about when mentioning IoT. But these use cases, while seemingly innovative at the time, were more about creating shortcuts than solving problems, and that inherently limits the number of use cases. As we enter the second phase of IoT, we’re solving problems behind the scenes and building hidden infrastructure that’s silently connecting the world.
Few software-driven organizations have the resources to interview at scale and achieve aggressive hiring targets while at the same time building innovative products that drive revenue. During this session, Mo will discuss how the way organizations hire today is driving down company performance, productivity, and morale, and how we can fix it.
To truly scale application security testing, developers need to maintain their role in the security process beyond SCA and SAST, continuing the automation you are already achieving and relying less on manual testing.
Traditional DAST scanners are a blocker to this automation. They are hard to use, impossible to integrate, not developer friendly, and produce too many false positives. The result is crippling human bottlenecks that stifle CI/CD, whether it's the need for security teams to constantly tweak scanners or the drain of manually validating vulnerabilities.
Either way, technical and security debt is compounded, resulting in insecure product hitting production. Change is needed, and fast.
In this session with Bar Hofesh, CTO and Co-Founder at NeuraLegion, you will discover:
1. Key features that your dev-first DAST needs to enable developers to take ownership of security
2. How you can detect, prioritise and remediate security issues early, automated in the pipeline
3. Insights into reducing the noise of false alerts to remove your manual bottlenecks to shift left
4. Steps you can take to achieve security testing automation as part of your CI/CD, to test your applications and APIs.
The concept of “progressive delivery” using feature flags has taken the world of software delivery by storm in recent years, but what does this mean for enterprise software development operations teams, and how should they change their technology and practices? Like many things, the application of progressive delivery in the enterprise setting is much easier said than done with disparate technology and teams around the world.
This talk will cover the state of progressive delivery, the potential benefits and use cases unlocked by adding feature flags into the release management process, and technical considerations for creating CD pipeline integrity with shared feature flag management and control.
Thursday, August 19, 2021
Product Manager: a title that is very hard to explain, that most of the time comes with big responsibilities, and yet is easy to overlook. Even more so on an Application Modernization journey focused on modernizing your legacy applications.
We are not delivering code or creating prototypes, and our job description is definitely NOT attending meetings all day. We believe there are skills and a mindset a Product Manager needs to accelerate the team's success in building modern applications. In this talk, I will share why product management skills increase the chances of success on your Application Modernization journey.
OPEN TALK: Add Natural Language Understanding Capabilities to a Browser App in Minutes with the expert.ai NL API
Ready to get a little hands-on experience working with natural language capabilities? In this session, Antonio Linari, Head of Product Innovation for expert.ai, will provide a coding lesson to develop a Chrome plug-in that uses Natural Language Understanding (NLU) to sift through bookmarked pages more effectively. Save your favorite pages and let the expert.ai NL API analyze the content and automatically generate tags for faster retrieval. No URLs or content will be collected on the server side so that your bookmarks’ list will remain private.
This exercise will help build your understanding of natural language technology, while showing you how easy it is to leverage the expert.ai NL API in the development of web plugins.
Observability is about more than building a reactionary response to latency and outages. Whether or not you focus on it today, at the core of your team is an “Engineering Flywheel”. Keeping talented engineers engaged, maintaining a cadence of feature releases, measuring the impact of new tech: all of these improve when you tighten the feedback loop on the one thing they all revolve around, the service itself.
In this session, we'll cover the new challenges microservices architectures have presented us all with and explain how to create an effective Observability strategy that can accelerate your Engineering Flywheel.
Managing dozens or hundreds of distributed services and microservices at scale can be very challenging. As developers, we are often blind to how our applications behave in production and to the areas we need to check to find and prevent issues early in the development process, before deploying new versions.
In this talk, we’ll show you how to leverage the open-source OpenTelemetry to collect and analyze the relevant data from production, and how to use it pre-production, during development and testing phases, to improve your code quality and overall success in preventing issues before deployment.
By relying on production behavior, we can automatically generate more efficient tests, catch dependencies that are about to break in real life, and make our developers more productive and product-oriented.
Modern environments such as Kubernetes and serverless have made it easy to manage and scale microservices, but observability into these environments is still a challenge for DevOps. In this session, we will describe how to use request flows to build intuition about your architecture and build resilient applications. We will also dive into correlating metrics, events, and logs using distributed tracing, and into creating alerts for anomalies detected in your environments.
Since the emergence of Kubernetes, we have hoped that developers would adopt it. That did not happen, and it likely never will. Developers do not need Kubernetes. They need to write code, and they need an easy way to build, test, and deploy their applications. It is unrealistic to expect developers to spend years learning Kubernetes.
On the other hand, operators and sysadmins need Kubernetes. It gives them all they need to run systems at scale. Nevertheless, operators also need to empower developers to deploy their own applications. They need to enable developers by providing services rather than doing actual deployments.
So, we have conflicting needs. Kubernetes is necessary to some and a burden to others. Can we satisfy all? Can we have a system that is based on Kubernetes yet easy to operate? Can we make Kubernetes disappear and become an implementation detail running in the background?
Let's discuss where Kubernetes is going and what it might look like in the future.
Boosted by the pandemic, the level of digitalization in document workflows took a steep rise over the last year. While most professionals welcome this evolution, there are important factors to consider to make your digitalization efforts worthwhile, such as the processing performance of document-based operations, and the price of data storage locally or within the cloud.
With PDF being the preferred format for digital documents, a performant and automated solution for optimizing PDFs in high volumes should be at the very heart of your digitalization strategy.
With pdfOptimizer, the latest add-on for the iText 7 PDF library, iText offers an efficient solution to this challenge. In this webinar Cal Reynolds, Pre-Sales Engineer at iText and pdfOptimizer product expert, will:
- Show you different ways to compress PDFs without loss of visual quality
- Quantify what these optimizations can save you in time and money
- Guide you through how pdfOptimizer can easily synergize with your document workflows to expedite processes such as digital signing
- Answer all your questions in a live Q&A session
As real-time distribution mechanisms like Pub/Sub become commodified parts of application architectures, developers are discovering a need for more sophisticated functionality than simple message delivery. Traditionally, developers and software architects have struggled with the complexities of creating event-driven, real-time web, mobile, and IoT applications, because they are not data-wrangling experts. Data wrangling comprises mapping raw data from one format into another suitable for a different purpose, and it is critical to event-driven application development. However, without the right tools, data wrangling can be a laborious task, as it typically involves restructuring large amounts of data.
This talk explores the growing value of data wrangling at the network edge, and how pragmatic, app-focused platforms like GraphQL mark the future of real-time data architectures.
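A minimal sketch of the "data wrangling" described above: reshaping a raw pub/sub payload into the flat structure an application actually needs. The field names and message shape are hypothetical:

```python
# Data wrangling: map a raw telemetry message (nested, producer-shaped)
# into a flat, consumer-shaped record. All field names are made up.

def wrangle(raw):
    """Reshape a raw pub/sub message into a UI-friendly record."""
    return {
        "device": raw["meta"]["device_id"],
        "ts": raw["meta"]["ts"],
        # Convert units while restructuring, another common wrangling step.
        "temperature_c": round((raw["readings"]["temp_f"] - 32) * 5 / 9, 1),
    }

raw_msg = {
    "meta": {"device_id": "sensor-17", "ts": 1629300000},
    "readings": {"temp_f": 98.6, "humidity": 0.41},
}
record = wrangle(raw_msg)
```

Doing this mapping at the edge, rather than in every client, is the kind of work the talk argues platforms like GraphQL now absorb.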
The process of hiring has always been simple: candidates apply, they are interviewed, sometimes given a task or test to complete, then they are hired. But what if the future of hiring was still simple, but used complex AI to find the perfect candidate? Vivek Ravisankar says AI in hiring is now becoming essential, but that’s not all. To scale their teams successfully, companies need to start utilizing new hiring tools, and they must use those tools correctly. Right now, we need to rethink AI’s current and future role in hiring, so Vivek will dive deeper into the benefits and challenges of using AI while recruiting, and how AI will play an important role in sourcing and hiring as we move forward in an increasingly digital world where face-to-face options aren’t readily available anymore.
In this session, Vlad will explore why monday.com changed the technology its API had traditionally run on as the platform matured, and challenges the team overcame as it adjusted to working with GraphQL. Vlad will share monday.com’s journey into building the API and discuss how users across industries are using monday.com's API for custom workflow apps.
OPEN TALK: Let’s Play Tag: DevSecOps Edition! Automated IaC Resource Tagging Strategy for Security Policy Enrichment
Through GitOps practices, automated security checks, and Infrastructure as Code (IaC) strategic tagging automation, we can begin to build pre-flight and runtime policy-as-code to ensure that misconfigured and insecure resource definitions are caught prior to deployment. When resource misconfiguration or drift is discovered at runtime, a consistent tagging strategy allows resources to be traced back to the appropriate commit. This reveals a best fix location and author to vastly reduce MTTR. To show how this all works, we'll use a combination of open source solutions: Checkov (IaC Policy and Scanning) + Yor (IaC Tag and Trace).
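The trace-back mechanism can be sketched in a few lines. Yor injects a `yor_trace` tag into each IaC resource; the lookup table mapping trace IDs to git metadata below is a hypothetical stand-in for what tooling would build from the repository:

```python
# Sketch: recover the "best fix location" for a runtime resource from the
# yor_trace tag Yor injected into it. The trace index is hypothetical.

def find_fix_location(resource_tags, trace_index):
    """Return the git metadata recorded for a resource's yor_trace tag."""
    trace_id = resource_tags.get("yor_trace")
    return trace_index.get(trace_id)

tags = {"env": "prod", "yor_trace": "8f2c1a4e-0000-0000-0000-000000000000"}
index = {
    "8f2c1a4e-0000-0000-0000-000000000000": {
        "file": "main.tf", "commit": "ab12cd3", "author": "dev@example.com",
    }
}
hit = find_fix_location(tags, index)
```

Because the tag survives into the deployed cloud resource, a runtime finding can be routed straight to the file, commit, and author that introduced it.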
AI has transcended its theoretical existence to become present in our day-to-day lives, and is encountered by most people from morning to night. Some examples:
* E-tailers such as Amazon are one way many people are exposed to AI regularly: their recommendation algorithms learn what we like, and suggest items purchased by other shoppers with similar tastes.
* Digital voice assistants like Amazon Alexa are quickly becoming part of our lives. They use natural language processing and AI-driven answer generation to respond to us.
And so on. But there are many other areas that could be improved by leveraging AI.
One such area of improvement is crowd management in Rapid Transit Systems (Metro).
Metro train ridership has grown significantly over the past decades and this growth is expected to continue into the future.
Crowding at train and metro stations is therefore experienced more frequently, resulting in safety issues, decreased comfort levels, increased total travel times, and so on.
As a high-capacity form of public transportation, Metro Rail Transit systems in many countries have been operating above their intended capacity.
Despite numerous efforts to implement effective crowd control schemes, operators still fall short of containing the crowds and long lines that form, increasing the time passengers must wait before they can proceed to the platforms.
In this workshop, let us see how AI and Cloud Platforms can be leveraged in managing the crowd in Metro Stations and in Trains.
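To give a flavor of the kind of logic involved, here is a deliberately naive sketch: forecasting platform occupancy from recent gate counts and mapping the prediction to a crowd-control action. A real deployment would use trained models and live turnstile or camera feeds; every number and threshold below is made up:

```python
# Illustrative only: trailing-average crowd forecast for a metro platform.
# Counts, capacity, and thresholds are hypothetical.

def forecast_next(counts, window=3):
    """Predict the next interval's crowd count as a trailing average."""
    recent = counts[-window:]
    return sum(recent) / len(recent)

def alert_level(predicted, capacity):
    """Map a predicted count to a simple crowd-control action."""
    ratio = predicted / capacity
    if ratio >= 0.9:
        return "hold entry"
    if ratio >= 0.7:
        return "slow entry"
    return "normal"

gate_counts = [310, 350, 420, 480, 540]  # passengers per 5-minute interval
predicted = forecast_next(gate_counts)
action = alert_level(predicted, capacity=600)
```

The workshop's point is that cloud AI services can replace the toy forecast above with models that account for schedules, weather, and events, while the control loop stays the same.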
Each time we talk to our customers, the same story repeats. Hundreds of APIs are being built by agile development teams, released several times per week, with limited consideration for how secure they will be. AppSec teams play a constant game of whack-a-mole, trying to patch issues in production, issues which occur because they could not test and review the APIs as they were published. Too many changes, too little time, very few resources.
How do we break this vicious circle?
This talk is inspired by my experience working with many large enterprises, helping them ingrain security into their API lifecycle and changing their development culture. I will share the lessons learned as we worked together on breaking the habits that led to 1 billion data records being leaked via APIs in the last 12 months alone. We will use real data breaches to illustrate the mistakes that lead to those security issues and explain how to address them by changing the way you design and develop your APIs.
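One common breach pattern is excessive data exposure: an API returns fields its contract never declared. A minimal sketch of a "shift-left" check in that spirit, comparing a response against its (OpenAPI-style) contract; the field names are hypothetical:

```python
# Sketch: flag response fields the API contract never declared.
# Contract and response are hypothetical examples.

def undeclared_fields(response, declared):
    """Return response fields missing from the declared contract."""
    return set(response) - set(declared)

spec_fields = {"id", "name", "email"}
actual = {"id": 7, "name": "Ada", "email": "a@x.io", "password_hash": "x"}
leaks = undeclared_fields(actual, spec_fields)
```

Running checks like this in CI, rather than discovering the leak in production, is the cultural shift the talk describes.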
In early 2019, Chris decided to start a messaging and streaming SaaS business. He knew he wanted to build on open source technologies. Apache Kafka is the most popular open source technology in this space and would have been the easy answer. Instead, he decided to build using the lesser known alternative, Apache Pulsar.
In this talk, Chris will go over the key reasons why he felt (and still feels) that Apache Pulsar was the ideal choice for a message streaming SaaS platform. He will discuss the key architectural advantages of Pulsar over Kafka, including how Pulsar uses the open-source Apache BookKeeper project to its advantage. He will compare and contrast the open-source feature sets of Pulsar and Kafka. He will also discuss why running Apache Pulsar in Kubernetes simplifies operations and enables him to build a multi-tenant, elastic SaaS service.
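One of the Pulsar advantages relevant to a SaaS business is that multi-tenancy is built into the topic namespace itself: every topic is addressed as `persistent://<tenant>/<namespace>/<topic>`, so each customer can map cleanly onto a tenant. A small sketch of that naming scheme (a real application would use the `pulsar-client` library against a running broker; the tenant names here are made up):

```python
# Pulsar topics are fully qualified by tenant and namespace, which gives
# a SaaS platform per-customer isolation "for free" in the addressing.

def topic_for(tenant, namespace, topic, persistent=True):
    """Build a fully-qualified Pulsar topic name for a tenant."""
    scheme = "persistent" if persistent else "non-persistent"
    return f"{scheme}://{tenant}/{namespace}/{topic}"

# Hypothetical customer "acme-corp" with a billing namespace:
orders_topic = topic_for("acme-corp", "billing", "orders")
```

Quotas, permissions, and storage policies can then be attached at the tenant or namespace level rather than per topic.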
We describe how an established on-prem ISV re-invented itself as a SaaS provider, incidentally breaking enterprise content management industry records for scalability in the process, and then returned to on-prem customers in a hybrid scenario, completing a full circle. We will share the tools we built and used, the DOs and DON'Ts, and postulate a trend: in the end, it is still users that matter most.
When defining APIs, the most common considerations are what the payload looks like, approached from an implementer's perspective.
However, good APIs, whether internal or public, are far more than a payload description; they need a consumer's perspective.
In this session we look at what makes a good API: from OWASP Top 10 implications to ISO standards and data definitions, to how to make things easy for your consumers, why these points are important, and their implications. We'll explore techniques to overcome some of the challenges seen when producing good APIs.
Whilst we all think we know how to define APIs, you'll be surprised at the things that get overlooked, and at the opportunities to do better.
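One concrete consumer-perspective detail in the spirit of the session: a predictable error envelope, so every consumer handles failures the same way instead of parsing ad-hoc messages. The shape below is a common convention, not a standard, and the field names are illustrative:

```python
# A consistent, machine-readable error payload: consumers can branch on
# "code" programmatically and show "message" to humans. Shape is a
# common convention, not a standard.

def error_envelope(status, code, message, details=None):
    """Build a uniform error payload for any failed API call."""
    return {
        "status": status,
        "error": {"code": code, "message": message, "details": details or []},
    }

payload = error_envelope(
    422, "VALIDATION_FAILED", "email is not valid",
    details=[{"field": "email", "rule": "format"}],
)
```

The design choice is that consumers never need to know which endpoint failed to know how to react; the envelope is identical everywhere.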
Web applications are high-priority targets for hackers. The inherent complexity of their source code, which increases the likelihood of unattended vulnerabilities and malicious code manipulation, allows cyber criminals to easily automate and launch an attack against thousands, or even tens or hundreds of thousands, of targets at a time. Best of all for the attackers, such attacks may yield a plethora of rewards: sensitive private data, and for the victims, damaged customer relationships. In this session, Imperva CTO Kunal Anand will review best practices in web application security. He will explain how web application vulnerabilities are often exploited to either manipulate source code or gain unauthorized access, and the various attack vectors used. Finally, he will outline the processes that should be part of any web application security checklist. He will also speak to the more challenging questions around who bears the risk in such a connected IT environment.
Do you want to get started using APIs and automation? APIs can add great value to any Automation use case and a wide range of platforms now expose REST APIs. The goal of this session is to introduce attendees to the basics of using REST APIs in an application, and to provide them with the skills to start engaging in this growing area. This session will teach participants the concepts needed to create applications that consume REST APIs. We will go through the anatomy of a REST API and some tools and examples to get you started.
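The "anatomy of a REST API" the session promises boils down to four parts: a method, a URL, headers, and a body. Building a request with the standard library (without sending it) makes each part visible; the endpoint and token are hypothetical placeholders:

```python
# Anatomy of a REST call, built with the standard library but not sent.
# The endpoint is hypothetical; "<token>" is a placeholder.
import json
import urllib.request

body = json.dumps({"name": "sensor-17", "enabled": True}).encode()
req = urllib.request.Request(
    url="https://api.example.com/v1/devices",   # the resource collection
    data=body,                                  # JSON request body
    headers={
        "Content-Type": "application/json",     # how the body is encoded
        "Authorization": "Bearer <token>",      # who is calling
    },
    method="POST",                              # create a new resource
)
```

Sending it is one more line (`urllib.request.urlopen(req)`), but every REST client library in every language is ultimately assembling these same four parts.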
Women around the world have been directly affected by the pandemic in more ways than one, especially women in technology. Lack of representation, lack of supplier diversity allocation, lack of access to investment, and the increased displacement of women-held jobs all constitute a global crisis. Well, what does one do then? The answer begins with Fortune 1000 companies. Systematic change requires collective action by organizations large enough to influence and maintain change. An attempt to change the norm requires Fortune 1000 companies to come together to acknowledge the problem and consciously take steps towards change.
In this session, I will highlight the gaps in the technology industry that prevent women from succeeding and thriving economically; furthermore, the session will focus on ways to bridge those gaps through collaboration and collective action.
APIs connect businesses, people, and things. They are everywhere nowadays, allowing developers to unlock new opportunities for innovation. This talk covers the public API design and governance process. It is meant for technical people involved in creating interfaces that empower third-party developers. The audience will learn about the overall governance process, with a focus on design, compliance with standards, reliance on patterns, and the OpenAPI specification.
What are world Megatrends? How do these relate to advances in technology like data analytics/machine learning/artificial intelligence, robotics/automation, 3D Printing/Functional Materials, Digital Manufacturing, Smart Devices/Industrial IoT, and Generative Design? How are these technologies tied to Industry 4.0 and the digital threads of hardware products across their lifecycle? How will these technologies evolve over time to realize their full potential and enable imagining and building things which cannot be designed and manufactured today? In this talk, answers to these questions will be presented, and the Far Future will be tied to Near Present solutions that will set the irreversible sequence of events into motion.
For the most flexible, powerful stream processing engines, it seems like the barrier to entry has never been higher than it is now. If you’ve tried, or have been interested in leveraging the strengths of real-time data processing - maybe for machine learning, IoT, anomaly detection or data analysis - but you’ve been held back: I’ve been there, and it’s frustrating. And that’s why this talk is for you.
That being said, this talk is also for you if you ARE experienced with stream processing but you want an easy (and if I say so myself, pretty fun) way to add some of the newest, bleeding edge features to your toolbelt.
This session will be about getting started with Flink SQL. Apache Flink’s high-level SQL language has the familiarity of the SQL you know and love (or at least, know…), but with some powerful new functionality, and of course the benefit of being usable with Flink and PyFlink.
More specifically, this will be a pragmatic entry into creating data pipelines with Flink SQL, as well as a sneak peek into some of its newest and most interesting features.
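To give a taste of what such a pipeline looks like, here is an illustrative Flink SQL sketch: a Kafka-backed table of click events with a watermark, aggregated in one-minute tumbling windows. The table name, fields, and connector options are assumptions for illustration, not part of the session material:

```sql
-- Illustrative Flink SQL pipeline sketch; names and options are assumed.
CREATE TABLE clicks (
    user_id STRING,
    url     STRING,
    ts      TIMESTAMP(3),
    WATERMARK FOR ts AS ts - INTERVAL '5' SECOND  -- tolerate late events
) WITH (
    'connector' = 'kafka',
    'topic'     = 'clicks',
    'format'    = 'json'
);

-- Count clicks per one-minute tumbling window using the windowing TVF.
SELECT window_start, COUNT(*) AS clicks_per_minute
FROM TABLE(TUMBLE(TABLE clicks, DESCRIPTOR(ts), INTERVAL '1' MINUTE))
GROUP BY window_start, window_end;
```

The same statements run unchanged from the SQL client, a Java job, or PyFlink, which is the low barrier to entry the talk is about.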