New Developer Technologies (Data, Edge, Python, Rust & more)
Monday, February 7, 2022
Cloud-native applications today are increasingly complex and therefore increasingly hard to understand. It’s critical to connect decisions around resource allocation and architecture to business metrics such as end-user latency, but very difficult to do in practice. Ultimately, understanding how your systems behave and why is a data analytics problem. Like most data analytics problems, the trick is in collecting and wrangling the right data sources. In this talk, you will learn how Pixie, an open-source observability platform for Kubernetes, can be used to painlessly turn low-level telemetry data into high-level signals about system health. The talk will also show how these high-level signals can be used as input to infrastructure workloads such as CI/CD and load balancing in order to improve their performance.
Using an all-Apache stack (Apache Flink, Apache Pulsar, and Apache NiFi) for rapid data lake population and querying. We can quickly stream data to and from any data lake, lakehouse, database, or data mart, regardless of cloud or size. FLiP lets Java and Python developers build scalable solutions that span messaging and streaming in a cloud-native fashion, with full monitoring.
Tuesday, February 8, 2022
Context switching between your IDE, GitHub repo, JIRA, Terminal and Slack is no way to optimize developer collaboration and it results in countless hours of distraction and lack of focus, hurting code quality. Team alignment and productivity depend on just the right mix of collaboration and staying in the zone. At the same time, developers are hindered by lack of visibility into the performance of applications across the entire tech stack. In the future, development teams must be able to troubleshoot their tools in context, integrate telemetry and observability as part of the workflow, and work with engineers on debugging as part of the collaboration. Here we will discuss how such an integration should happen, and we will spell out the benefits that accrue to the individual developer, the team and the organization.
Once seen as a far-off idea, the metaverse today is lauded as the future, discussed daily in the media, and prominent in the public discourse — anything but ignored. Many experts have hypothesized what this will look like, and companies across industries and around the world are starting to come forth with their plans for their presence in this next stage of the digital age. For Dan Sturman, there’s one concept that’s central to this vision of the future online — human co-experience. In this session, Dan will outline the five key pillars of creating an ecosystem for human co-experience — fully user generated, persistent identity, universal availability, immediate teleportation, shared fabric — and dive into the latest technologies available for developers within each, including open source software, storage enhancements, Luau programming language advancements, avatar tech, search and discovery, authentication, and more. He will also spotlight how developers across platforms are leveraging technology and advancing what is possible both today and in the future. Furthermore, he will share Roblox’s vision of what will become the foundation of an accessible, safe, creative, and civil human co-experience.
In DevOps everyone performs security work, whether they like it or not. With a ratio of 100/10/1 for Development, Operations, and Security, it’s impossible for the security team alone to get it all done. We must build security into each of “the three ways”: automating and/or improving the efficiency of all security activities, speeding up feedback loops for security-related activities, and providing continuous learning opportunities in relation to security. While it may sound like the security team needs to learn to sprint, give feedback, and teach at the same time, the real challenge is creating a culture that embodies the mindset that security is everybody's job.
“I really want to develop a tool that aggregates user interactions!”, said no developer ever.
Product-Led Growth (PLG) has stormed into our lives over the past few years. Concepts like usage-based pricing, seamless onboarding, built-in security, and product analytics are now taking a toll on developers. Companies are investing more and more engineering resources in developing self-service features, shifting focus away from building innovative code for the product’s core technology.
From the product side, this surely looks innovative and unique. However, from the development side, it adds another variable into the equation, which already includes bugs, security issues, never-ending product feedback loops, and other things that stop developers from building exceptional code.
But while investing resources in creating a seamless product experience is crucial, isn’t the core value of the product more important? How can developers build self-service features while still fulfilling their innovative ambitions?
In this talk, we will discuss the application side of the PLG success story. This will be a practical demonstration of how developers can integrate self-service and data-driven-by-design capabilities while ensuring speed, flexibility, and full user observability, without sacrificing innovation.
In case you haven't heard, relational databases are the new hotness. Why? Distributed SQL. Wait, distributed what now?
Distributed SQL databases are relational databases engineered with a distributed architecture to scale out on commodity hardware, on premises or in the cloud…without any trade-offs. These databases retain standard SQL, ACID transactions and strong consistency while adding unprecedented levels of scalability. But that's only the tip of the iceberg.
In this session, you'll gain an understanding of the fundamental concepts of distributed SQL and get a quick look at MariaDB's new distributed SQL implementation, Xpand. Using MariaDB Xpand we'll take a more pragmatic look at how distributed SQL takes database elasticity, scale and high availability to the next level. Then, diving deeper, you'll be introduced to the novel concept of MariaDB's columnar indexing with distributed SQL, and how it can be used to dramatically improve the execution speed of analytical queries on massive datasets.
Secure software development isn’t always a top concern for the business unless you are in a highly regulated industry. Today, time to market often outweighs security: continuous improvement and quick software releases increase the value of the product you sell. To create and maintain a lead on the competition, you have to be really good at Agile and DevOps.
A potential scenario: the security team has called an emergency meeting. A new vulnerability has been publicly disclosed that impacts not only your software, but your company and your customers. Will the required remediation take hours or even weeks to complete? It depends on your preparedness.
To improve your readiness and reduce impact, we will look at tips and actions you can take now.
1. The scope of the mess created by the Log4j CVE.
2. Why most companies struggled to address it quickly.
3. What steps you can take now to be ready for the next one.
With testing and new releases, errors are going to creep through the cracks and new debugging approaches are needed. Nick Hodges, Developer Advocate at Rollbar, will uncover the 4 main insights that are transforming the ways we approach debugging to help return more productive time to developers.
Traditional monitoring and observability platforms continue to support the same approach: DevOps and SRE teams must centralize logs, metrics, and traces before they can start to analyze them. Faced with exploding data volumes, teams dependent on these platforms are left trying to predict which systems and datasets to monitor and centralize. What doesn’t meet the bar gets neglected or discarded altogether. You shouldn’t have to compromise data visibility to stay within budget. In this session, Edge Delta CEO and Co-Founder Ozan Unlu will break down Edge Observability, a novel approach to observability that aims to solve this issue. You will learn how DevOps and SRE teams can maximize visibility, optimize costs, and respond to issues orders of magnitude faster.
Developers know what they want and don’t want. And we are pretty sure they don’t want ops. The world is becoming serverless, including the database.
In this session, we will deliver a deep-dive exploration into the internals of a serverless database, exploring the following, and more:
- How to automatically scale your workload with zero downtime
- How Raft and MVCC are used to guarantee serializable isolation for transactions
- How Cockroach automates scale and guarantees an always-on resilient database
- How to tie data to a location to help with performance and data privacy
- How to pay for only what you use and never overspend
CockroachDB, a distributed SQL cloud-native database designed for consistency, resilience, data locality, and scale, is the core of CockroachDB Serverless. We’d love for you to join us and see how it works!
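To make the MVCC idea above concrete, here is a deliberately tiny, single-node Python sketch of multi-versioned reads, not CockroachDB's implementation: every write is stored alongside a timestamp, and a read at timestamp T returns the newest version written at or before T, so readers never block writers.

```python
class MVCCStore:
    """Toy multi-version store: reads at a timestamp never block writers."""

    def __init__(self):
        # key -> list of (timestamp, value), in ascending timestamp order
        self._versions = {}

    def write(self, key, value, ts):
        # Assumes monotonically increasing timestamps per key, for simplicity.
        self._versions.setdefault(key, []).append((ts, value))

    def read(self, key, ts):
        """Return the newest value written at or before ts, or None."""
        result = None
        for vts, value in self._versions.get(key, []):
            if vts <= ts:
                result = value
            else:
                break
        return result
```

A real distributed SQL engine layers much more on top: hybrid logical clocks for the timestamps, Raft replication of each write, and transaction records that resolve write-write conflicts to reach serializable isolation.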
Roughly 60% of stream processing is spent doing mundane transformation tasks like format unification for ML workloads, filtering for privacy, simple enrichments like geo-ip translations, etc.
In this session, we will show you how easy it can be to do streaming data transformations while also eliminating data ping-ponging between storage and compute — thanks to Redpanda’s built-in support for WebAssembly (WASM). We’ll share best practices for data transforms using Redpanda, our Kafka API-compatible streaming data platform.
We will also cover:
- Overview of Redpanda and our WASM architecture
- Example use cases for data transforms
- Live demo of data transforms
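Redpanda runs these transforms inside its WASM engine; as a language-neutral illustration of the kind of per-record logic involved (not Redpanda's actual coprocessor API), this Python sketch performs the three mundane tasks mentioned above: format unification, privacy filtering, and masking.

```python
import hashlib
import json

def transform(record):
    """Per-record transform: unify format and mask PII before delivery.

    Takes a raw record as bytes; returns the rewritten record as bytes,
    or None to filter the record out of the stream entirely.
    """
    event = json.loads(record)
    # Filter: drop internal health-check traffic entirely.
    if event.get("type") == "healthcheck":
        return None
    # Privacy: replace the raw email with a stable one-way hash.
    if "email" in event:
        event["email"] = hashlib.sha256(event["email"].encode()).hexdigest()[:16]
    # Format unification: normalize the timestamp field name.
    if "ts" in event:
        event["timestamp"] = event.pop("ts")
    return json.dumps(event).encode()
```

Running logic like this inside the streaming platform itself is what eliminates the ping-pong: the data never has to leave storage for a separate compute cluster just to be reshaped.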
Cadence is an exciting technology open sourced by Uber in 2017 that has become a foundational technology for Uber and several other leading tech companies. Cadence makes it easier and much more efficient to develop and operate long-running, process-based business logic (or workflows) at the highest levels of reliability and scale.
This session will explain the basic concepts of Cadence by walking through some simple code examples, discuss how to determine if your use-case is a good fit for Cadence, and outline some considerations for the successful adoption of Cadence in your organization.
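The workflow style of programming is easy to preview in plain Python. The sketch below only illustrates the shape of workflow code, it is not the Cadence API: activities are retried on transient failure, and the orchestration reads as ordinary sequential logic. Cadence's contribution is making exactly this kind of function durable, so it survives process crashes and can run for days.

```python
import time

def retryable(activity, *args, max_attempts=3, backoff=0.1):
    """Run an activity with retries; Cadence does this durably, server-side."""
    for attempt in range(1, max_attempts + 1):
        try:
            return activity(*args)
        except Exception:
            if attempt == max_attempts:
                raise
            time.sleep(backoff * attempt)

def order_workflow(order_id, charge, ship):
    """A long-running business process expressed as ordinary sequential code.

    In a real workflow engine, each completed step's result is persisted,
    so a crash between `charge` and `ship` resumes without re-charging.
    """
    receipt = retryable(charge, order_id)
    tracking = retryable(ship, order_id, receipt)
    return tracking
```

The good-fit question discussed in the session usually comes down to this shape: multi-step processes with retries, timeouts, and state that must outlive any single process.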
It has never been more important to build secure applications from the ground up, starting with developers implementing the DevSecOps framework. One facet of DevSecOps is building code that emits high-quality telemetry so development teams can deliver new software and services at agile speed without compromising application security. In this session, Cribl technical evangelist Ed Bailey will discuss three ways to instrument applications at the code level to give operations and security observability platforms enhanced data, providing next-level fault detection capabilities that are not otherwise available. We have never seen a more challenging environment to monitor and secure modern applications, and advanced, telemetry-based observability is the only way to meet this challenge.
Wednesday, February 9, 2022
OPEN TALK: Fake Your Data: Mimicking Production to Maximize Testing, Shorten Sprints, and Release 5x Faster
Raise your hand if you’ve ever written a script or built a tool to generate test data for your staging environment. Keep your hand up if it was fun. And easy. And still works. If your hand (and shoulders and morale) fell, rest assured you’re not alone. Now for the good news: help is here.
With the increasing complexity of today’s data ecosystems and the expanding reach of privacy regulations, generating useful, safe test data has become more difficult and riskier than ever. An effective test data solution must work across a variety of database types and de-identify production in a way that ensures privacy. Challenging? Yes. Attainable? That, too.
Technologies now exist that integrate directly into your data ecosystem to create test data that looks, acts, and behaves just like your production data. By hydrating QA and staging with useful, safe, fake data, dev teams are upleveling testing, catching bugs faster, and shortening their development cycles by as much as 60%. Data mimicking sets a new standard of quality test data generation that combines the best aspects of anonymization, synthesis, and subsetting.
Explore these technologies in a live demo and discover how to use them to:
- Maintain consistency in your test data across tables and across databases
- Subset your data from PB down to GB without breaking referential integrity
- Achieve mathematical guarantees of data privacy
- Increase your team’s efficiency by 50%
- Realize 5x more releases per day
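To illustrate the consistency point in the first bullet, here is a minimal Python sketch (a toy, not one of the technologies demoed) of deterministic pseudonymization: the same real value always maps to the same fake value, so joins between a de-identified users table and an orders table still line up.

```python
import hashlib

FIRST_NAMES = ["Ada", "Grace", "Alan", "Edsger", "Barbara", "Donald"]

def pseudonym(real_value, secret="rotate-me"):
    """Map a real value to a fake one, consistently across tables.

    Hashing the input with a secret makes the mapping deterministic
    (referential integrity survives) but one-way (the original value
    cannot be read back out). Production tools add format preservation,
    subsetting, and formal privacy guarantees on top of this core idea.
    """
    digest = hashlib.sha256((secret + str(real_value)).encode()).digest()
    index = int.from_bytes(digest[:4], "big") % len(FIRST_NAMES)
    suffix = int.from_bytes(digest[4:6], "big") % 10000
    return "%s-%04d" % (FIRST_NAMES[index], suffix)
```

Rotating the secret regenerates the entire fake dataset while keeping it internally consistent, which is handy when staging data needs a periodic refresh.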
We don't usually set out to write a monolith...but it happens. With changes over time, and limited resources to refactor, our application can turn into a "legacy monolith" that runs for years and years, and that we all dread working on! In this session, learn about the AWS Microservice Extractor for .NET and how it can help you identify and extract parts of your application into services. Transforming your monolithic applications into smaller, independent services makes them easier to scale, more efficient to operate, and faster to develop, accelerating time to market for new features. Then go a step further and re-platform to ASP.NET Core running on Linux by adding the Porting Assistant for .NET to your tool chain. Come and learn how!
Scaling design is not about throwing more designers at the problem. Scaling design effectively is about operationalizing design, aligning closer with the principles of DevOps. How do we enable product teams to successfully deliver useful and usable products to their customers? This is an evolution they call DesignOps 2.0.
Nearly everything a product team deals with impacts UX. Traditional development issues like availability and latency have a significant impact on the user’s experience. When viewing the problem through this lens, the entire product team is responsible for the user experience and needs to be accountable for it, not just the UX team.
Erica will discuss the philosophy and end-to-end methods her team has developed around DesignOps 2.0 and where they are heading from here.
• Establishing tools and an environment that empower product teams to deliver useful and usable products
• Gaining a common understanding with product teams around what impacts the user experience and who is responsible.
• Holding engineering and product teams accountable for delivering a good user experience.
I would like to talk a bit about Web 3.0 in this session. The what and how! A quick look at the stack, the current projects and why it matters.
As data drives new and evolving IoT opportunities across all segments of the market, the role of the developer becomes increasingly important in utilizing existing tools to drive new ways to create Edge AI solutions. However, solving for Edge AI can be a complex design and development process, as it requires selecting the right sensors, hardware, and deep learning frameworks, and deciding how to deploy the unique use case.
By democratizing access to AI and simplifying development, organizations can enable their developers to quickly experiment with different algorithms, processors and optimization techniques or prototype and customize without having to spend weeks obtaining and setting up development boards. In this session, Bill will discuss how organizations can achieve this and empower their developers to build innovative Edge AI solutions – solutions that will improve lives and transform industries.
If you work in an organization that uses open source to develop applications, by now you are probably aware of the recently disclosed vulnerability in log4j, commonly being referred to as the Log4Shell vulnerability.
Virtually every organization that uses Java (Maven/Gradle) uses log4j and has likely been impacted. According to data tracked by Tidelift, log4j-core has over 3,600 dependent packages in the Java language ecosystem and over 20,900 dependent software repositories on public code collaboration platforms.
Tidelift solutions architect Sean Wiley breaks down the current Log4Shell situation and shares tips for remediating the issue—including ways Tidelift can help your organization prepare for the next zero day vulnerability.
Context is a crucial component of moving from Monitoring to Observability. Attaching rich context to application tracing allows us to answer fundamental questions required for true Observability, like "What changed?" In this talk, we'll discuss what context is, how it's applied, what problems it solves, and what challenges it presents.
Common techniques protect data at rest and in transit. But what about while it is being processed? That’s what confidential computing aims to solve. Confidential apps run in secure execution environments called enclaves, isolated from the rest of the system. Data is encrypted at runtime and opaque to privileged users, the operating system, and even cloud providers.
Confidential computing is a powerful, emerging technology but requires some specialized knowledge to use effectively. Tools and software are thus needed to make confidential computing more accessible and usable in cloud-native contexts.
This talk is for everyone looking for an overall introduction to confidential computing and the big ideas that make it special. Along the way we'll explore core concepts, explain what use cases can be tackled, and share the resources we've found most helpful for getting started.
Previously, HDR video capture was available only in high-end cameras. With the launch of the iPhone 12, HDR cameras have become accessible to everyday users, and HDR content is surging.
This has given rise to UGC HDR content, making it crucial for all online media platforms to support it seamlessly. In this talk, I will discuss the challenges online media platforms face in terms of capture, backward compatibility, compute, storage, etc.
The past few years have seen the appearance of different software companies promising to augment the developer through Artificial Intelligence. Within these, many have specialised in testing problems, and in particular in test generation.
In this presentation we will tell the story of AI-generated testing and the technologies behind it. We will look at today’s main players in the industry across different categories of testing, talk about the limitations of this technology and its practical use cases, and explore the opportunities for the next few years.