Kotlin and the cloud are a perfect fit. Cloud providers such as Amazon have created clients that make use of Kotlin's coroutine and DSL features. Kotlin can be used for all sorts of cloud services, from Lambda functions and containerized compute to large-scale data processing applications. Amazon, for example, has also created a way for JVM-based Lambda functions to start up faster without a large startup penalty. This session will include live coding demos of building cloud functions, containerized Kotlin services, and using Kotlin alongside machine learning workloads. Attendees will learn how easy it is to create and test cloud-native software using the Kotlin ecosystem.
Josh has been interested in computers since he was five years old. He has worked professionally as a developer, engineer, director, and architect for over 20 years. He considers himself a lifelong learner. He loves mentoring and working with others. His passion is seeing others...
Aaron Harrison, Gartner, Director Research & Advisory in Gartner's Software Engineering Leadership
Enable developer autonomy and self-service through platform teams. Learn how adidas' central platform engineering team offers standardization of common activities as an attractive, value-added service that enables product teams to create high-quality solutions faster, without reducing their autonomy in higher-value activities.
Aaron Harrison is a Director of Advisory in Gartner's Information Technology Leaders practice. Mr. Harrison advises Application, Enterprise Architecture, and Software Engineering Leaders on subjects such as agile, modernization, API strategy, application rationalization, business...
In today's technological landscape, the synergy between Cloud Native technologies and Artificial Intelligence (AI) opens the stage to a myriad of unexplored challenges and opportunities. Dynamic, always-on and highly scalable infrastructures, typical of the Cloud Native paradigm, seamlessly integrate with AI's need to rapidly prototype solutions and access vast computational resources. The convergence of these two worlds has exposed some gaps in the Cloud Native ecosystem that need to be addressed. At the same time, AI opens numerous opportunities for innovation that are yet to be explored. This talk will delve into the Cloud Native Artificial Intelligence (CNAI) paradigm as a holistic approach to unlocking the full potential of the cloud in managing AI workloads and aim to anticipate the growth opportunities that lie ahead for the Cloud Native world.
Graziano is a software engineer passionate about agile development and product management. Formerly a developer of distributed systems in enterprise environments and a product manager, he focuses on sharing the myriad beauties of the cloud-native world. Active in international...
Ever felt like RunBooks are just bandaids for tech mishaps? They pop up after an incident and only help when the same gremlins strike again. But where’s the RunBook for leveling up your troubleshooting game (fast)? Every company is a unique beast with its own architecture, tech stacks, and dependency webs. Scrambling to link faces with systems and knowing who to call during a crisis can be stressful. And then there’s the real kicker: How do you gauge the true business fallout from an outage and determine the best course of action? What if Observability Tools could pull something like your IDE, learning from “anonymous usage stats” to improve over time for everyone on the team? Imagine tapping into the instincts of senior engineers and sharing their troubleshooting savvy across the board. My talk dives into how Retrieval-Augmented Generation (RAG) techniques in AI can make this more seamless in reality. Join me to explore how we can turn troubleshooting into a collective superpower!
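As a minimal illustration of the retrieval step behind RAG, the sketch below uses toy lexical matching and hypothetical runbook snippets in place of the embedding search and real operational data a production system would use:

```python
# Toy lexical retrieval standing in for embedding-based vector search.
# All runbook snippets here are hypothetical examples.
RUNBOOKS = [
    "checkout latency spike: check payment gateway connection pool",
    "database failover: promote replica and rotate credentials",
    "cache stampede: enable request coalescing on the edge tier",
]

def retrieve(query, docs, k=1):
    """Rank docs by word overlap with the query; a stand-in for vector search."""
    q = set(query.lower().split())
    ranked = sorted(docs, key=lambda d: len(q & set(d.lower().split())), reverse=True)
    return ranked[:k]

def build_prompt(query, docs):
    """The retrieved snippets become the context the LLM answers from."""
    context = "\n".join(retrieve(query, docs))
    return f"Context:\n{context}\n\nIncident: {query}\nSuggest next steps."
```

In a real system the retrieved snippets would come from past incident write-ups and senior engineers' notes, which is how the "collective troubleshooting memory" gets shared.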
Ruchir is the co-founder and CEO of Cardinal, which helps improve the ROI (Return On Investment) of an existing Observability Stack. In his previous life, he spent 7 years as a Lead Engineer on Netflix's Observability team, where he built petabyte-scale Observability products that...
The cloud-centric world has turned every line of code into a buying decision and highlighted the economic aspects of software design. After first struggling with the ramifications of this (and a meager $3,000 budget on my first cloud project in 2009), I discovered how cloud economics presented an opportunity to merge efficiency concepts from classic manufacturing with modern software engineering and architecture.
Since that time we have seen an industry-wide focus on cloud cost management and the emergence of new practices like FinOps, but few have explored how to apply cloud economics directly to modern DevOps practices. The result has been million-dollar lines of code and unprofitable system designs that have soured some on cloud computing as a whole. My experience, however, has been very different. I will share the lessons I've learned over the last 15 years treating cost as a key non-functional requirement during development.
By integrating the Theory of Constraints, Lean Manufacturing, and unit economics, alongside cloud cost efficiency goals and performance as crucial inputs, I will share a strategy that fosters continuous improvement and boosts profitability under current consumption-based cloud pricing models. This approach ensures an economical path from concept to deployment, and ongoing operations, without compromising innovation and time-to-market.
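The unit-economics idea above can be sketched minimally; the workload tags and figures below are hypothetical, with cost per order or per query as the unit:

```python
# Minimal unit-economics sketch: allocate tagged cloud spend to the
# business demand it serves. Tags and figures are hypothetical.
def unit_costs(spend_by_tag, demand_by_tag):
    """Cost per unit of demand (e.g. dollars per order) for each tagged workload."""
    return {tag: spend_by_tag[tag] / demand_by_tag[tag] for tag in spend_by_tag}

monthly = unit_costs(
    spend_by_tag={"checkout": 1200.0, "search": 900.0},        # monthly spend ($)
    demand_by_tag={"checkout": 400_000, "search": 3_000_000},  # orders / queries
)
# A rising cost-per-order is then treated like any other failing
# non-functional requirement, the same way a latency regression would be.
```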
Erik Peterson is the Founder and CTO of CloudZero and a pioneer in engineering-led cost optimization and unit economics. He has been building in the cloud since its arrival and has over two decades of software startup experience, with a passion for cost-efficient engineering and excellent...
Daniel Myers, Snowflake, Director of Developer Relations
APIs for data - do you start with the data model or the API model? Learn best practices and how to build data-intensive applications on Snowflake and large language models (LLMs). In this session, you’ll learn different API architectural patterns, including Connected Apps and Snowflake Native Apps. Daniel will demonstrate how to develop, deploy, and run applications directly on Snowflake.
With cross-functional experience in software engineering, product management, and business development, Daniel is the Director of Developer Relations at Snowflake. Daniel leads global, cross-functional teams in software development and customer adoption, with a focus on bottom-up...
Mike Simon, Splunk, Staff Developer Evangelist - Observability
In today’s cloud-native world, developers need to innovate faster while maintaining the reliability and performance of their applications. In this session, you’ll learn about why observability matters for developers and how it lets you spend more time coding and less time fixing things. You’ll also see a demo of how to automatically instrument your applications with Splunk’s distribution of the OpenTelemetry collector, and learn why OpenTelemetry is the most important skill you can learn in 2024.
Mike Simon is a Staff Developer Evangelist at Splunk, where he has been part of the team for two years. In his tenure at Splunk, Mike has held roles as an Observability Strategist and Technical Architect. Before joining Splunk, Mike built a distinguished 16-year career as an Observability...
Mike Simon, Splunk, Staff Developer Evangelist - Observability
In cloud-native environments, embedding observability as part of your infrastructure-as-code strategy is no longer optional—it’s essential. This session explores the value and necessity of adopting an observability-inclusive as-code approach, allowing teams to automate monitoring, reduce blind spots, and accelerate incident response. We’ll dive into the opportunities this brings for enhancing visibility, ensuring system reliability, and empowering teams to take proactive, data-driven actions without manual overhead.
Mike Simon is a Staff Developer Evangelist at Splunk, where he has been part of the team for two years. In his tenure at Splunk, Mike has held roles as an Observability Strategist and Technical Architect. Before joining Splunk, Mike built a distinguished 16-year career as an Observability...
Have you encountered a scenario where, while working on a feature or resolving a bug, you discovered that duplicate records were being generated in a database, or duplicate charges or orders were being processed? Have you handled situations where services communicate asynchronously over the network? In such cases, idempotency becomes essential to handle message retries, network failures, and eventual consistency among services, particularly with the help of durable execution systems like Temporal. In this session, we'll delve into the concept of idempotency, explaining its significance in simplifying the management of cloud-based software systems. Through examples and exploration of intricate cases, we'll demonstrate how idempotency can be leveraged effectively with Temporal.
Geetha is a Solutions Architect in the big data management and durable execution space with experience in executing solutions for business problems on cloud and on-premises. She loved distributed computing during her undergrad days and has followed her interest ever since. She provides...
GenAI has gone from generally available, novel technology to widely adopted in a matter of months. Most engineering organizations are using GenAI to generate code, write tests, and assist in code reviews. New code is becoming dirt cheap to write - but our delivery pipelines remain miserably unprepared for the tsunami of new code flowing at a much more rapid pace.
Our current pipelines need a hard reset to prepare us for the GenAI revolution - and engineering managers need to get started TODAY.
This talk will dive into where GenAI is starting to break down our delivery pipelines. While scaling CI/CD is easy, and we can always add a few more workers, scaling the humans in the process is the hard part. This talk will demonstrate how massive amounts of new machine-generated code will impact our pipelines, in ways that will require either greater headcount, or smarter, automated pipelines. You'll come away with ideas for how to modernize your delivery pipeline so you can fully embrace the GenAI revolution.
Yishai Beeri likes to solve problems, and that’s why he was so fascinated with programming when he first encountered Logo back in the 80s, where the possibilities seemed endless. He has made it a focus of his career to solve complex programming problems, both as a consultant and entrepreneur...
Sravan Yella, Hewlett Packard, Lead Solutions Engineer
The integration of Artificial Intelligence (AI) in marketing, particularly through social media, presents profound opportunities and ethical challenges that demand careful consideration. As AI technologies like Machine Learning (ML), Natural Language Processing (NLP), and predictive analytics become central to marketing strategies, they facilitate unparalleled personalization and efficiency in analyzing vast datasets and optimizing marketing efforts.
Despite these advancements enhancing customer engagement by 40% and reducing operational costs by 30%, the rapid proliferation of AI tools in marketing raises significant ethical concerns. Key issues include the potential for data privacy breaches, accuracy and bias in AI algorithms, and the lack of transparency in AI-driven decisions. These challenges not only affect consumer trust, which has declined by 20% for brands using AI indiscriminately, but also pose risks to brand integrity and compliance with evolving regulatory frameworks.
To navigate this landscape responsibly, this presentation will explore best practices and frameworks for ethical AI usage in marketing. We will discuss the implementation of rigorous data governance protocols that ensure user data protection and privacy, techniques for auditing and mitigating biases in AI models to boost decision transparency, and strategies for maintaining compliance with international data use regulations. By fostering an ethical approach to AI deployment, companies can enhance customer satisfaction by 25% and improve brand loyalty. This talk aims to equip professionals with actionable insights to leverage AI in marketing ethically and sustainably, ensuring long-term business success and consumer trust.
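One of the bias-auditing techniques mentioned can be sketched concretely. The demographic parity gap below is one simple fairness metric among many; the group labels and decisions are hypothetical:

```python
# Demographic parity gap: the spread in positive-decision rates across
# groups. A large gap is a signal to investigate the model for bias.
def demographic_parity_gap(decisions_by_group):
    """decisions_by_group: dict of group -> list of 0/1 model decisions."""
    rates = {g: sum(d) / len(d) for g, d in decisions_by_group.items()}
    return max(rates.values()) - min(rates.values())
```

An audit would compute this over, say, ad-targeting or offer-eligibility decisions per audience segment and track it over time alongside other fairness metrics.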
Sravan Yella is an expert CRM Engineering Leader with extensive experience in leveraging exponential backoff strategies to enhance the robustness and efficiency of distributed systems. His strategic implementations have significantly reduced downtime and improved system response in...
Anushrut Gupta, Hasura, Senior Product Manager, Generative AI
“Over the last 3 months, summarize the top billing issues faced by my enterprise customers within the first 30 days of their onboarding.”
On the surface, building an internal AI customer intelligence application that can answer questions like this is a perfect use case for GenAI. However, building a production-ready app that retrieves the data (RAG) before hauling it off to your favorite LLM for summarization soon becomes a terrible engineering experience.
The data is spread across three places: a tickets database (e.g., Elasticsearch), a CRM (e.g., Salesforce), and your user-accounts transactional database (e.g., Postgres). Given security and privacy concerns, your production app won't have direct access to these databases. Making independent retrieval requests to each of these sources and then joining them in memory can be prohibitively expensive and requires a level of query planning to do efficiently. Moving all data into one location is expensive to build, maintain, and govern. Predictable quality is made even harder because underlying data formats and storage interfaces are continuously changing. And different types of user queries might require additional filtering and joining of data, which is hard to generalize.
APIs solve almost all of these very well known challenges. APIs offer standardization and security. APIs can provide a stable contract to interact with underlying data.
And in all likelihood, you already have APIs on these internal and external data sources.
Ironically, while APIs have become a necessity for other parts of the stack, they are clearly not the first thing that AI engineers building RAG reach for.
In this talk, we’ll discuss: why API-based retrieval doesn’t work well for RAG today; what we need from our existing internal and external APIs to make them RAG-ready; and how we can make existing APIs RAG-ready without rebuilding them.
This talk will be technical, with code demos (possibly with some live coding!) and end with key resources (reference architectures, API best practices, tools/technologies) that attendees can take back to their work.
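As a toy sketch of the API-joined retrieval pattern described above, the stubs below stand in for real ticket and CRM APIs; all names and data are hypothetical, and real code would make authenticated HTTP calls instead:

```python
# Hypothetical stub API clients for the tickets and CRM sources.
TICKETS_API = {"acme": ["duplicate line items on first invoice"]}
CRM_API = {"acme": {"tier": "enterprise", "onboarded": "2024-01-10"}}

def retrieve_context(customer_id):
    """Join per-source API responses into one context record for the prompt.

    This is the piece that gets hard in practice: planning which sources
    to call and how to join them, per query.
    """
    account = CRM_API.get(customer_id, {})
    tickets = TICKETS_API.get(customer_id, [])
    return {"customer": customer_id, "tier": account.get("tier"), "tickets": tickets}

def build_prompt(question, customer_id):
    ctx = retrieve_context(customer_id)
    return f"Context: {ctx}\nQuestion: {question}\nAnswer using only the context."
```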
Clemens Vasters is Lead Architect in Microsoft’s Azure Messaging team that builds and operates a fleet of hyper-scale messaging services, including Event Grid, Service Bus, and Event Hubs. Clemens represents Microsoft in messaging standardisation in OASIS (AMQP) and CNCF (CloudEvents...
Ishneet Dua, Amazon Web Services, Senior Generative AI Solutions Architect Parth Girish Patel, Amazon Web Services, Sr AI/ML Architect
The rapid growth of generative AI brings promising innovation and, at the same time, raises new challenges around its secure, safe, and responsible development and use. These challenges include some that were common before generative AI, such as bias and explainability, and new ones unique to generative models, including hallucinations, toxicity, and intellectual property protection. During this session, participants will gain an overview of the challenges that generative AI presents, survey the emerging science surrounding these challenges, and engage in a discussion about the hands-on, security, and Responsible AI work currently being conducted on AWS.
Ishneet Dua (Isha) is a recognized expert in leveraging AI and machine learning for sustainability solutions. She has established herself as a go-to authority on combating climate change, pollution, and other environmental challenges through cutting-edge technologies. Dua has authored...
Parth Girish Patel is a seasoned architect with a wealth of experience spanning over 17 years, encompassing management consulting and cloud computing. Currently, at Amazon Web Services (AWS), he specializes in Artificial Intelligence/Machine Learning, generative AI, sustainability...
Context is fundamental to well-run tech operations: With the right context, IT teams can better understand their systems, interpret real-time data quickly, and facilitate better incident management to achieve operational efficiency. But too often, gathering the necessary context is a lengthy, inconsistent, and elusive process. IT teams are forced to grapple with fragmented tools, siloed workflows, and inconsistent manual processes, which have turned context collection into a definitive pain point for the ITOps industry. Teams are losing out on precious time, money, and attention that should be directed towards digital transformation and innovation.
The tech industry has recently transformed thanks to the AI boom: ITOps is at a critical juncture where AI can enable faster, more efficient ITOps as well as deliver Full-Context Operations. Fred Koopmans, Chief Product Officer of AIOps platform BigPanda, will speak to the promise of Full-Context Operations: the process of unifying IT teams’ tools and processes with AI to provide the institutional knowledge needed to address every incident immediately. He’ll dive deep into the ways that teams can tangibly benefit from having the right context, outlining how the IT industry can leverage AI to collect comprehensive and contextual data to help operators achieve better incident resolution. Fred will share detailed proof points from developing BigPanda’s AI-powered assistant, which was purpose-built for delivering full context in IT operations. With Full-Context Operations, the IT industry can finally fulfill the long-sought-after promise of AIOps, putting AI into practice to deliver unprecedented operational efficiency.
Fred Koopmans, BigPanda's Chief Product Officer, is dedicated to driving innovation and collaboration, building trusted partnerships with customers, creating product roadmaps, and empowering individuals to achieve the extraordinary. He leads product strategy, product management, product...
Learn how to write interactive web applications with HTML and a small open-source extension, HTMX, with no need for JavaScript.
This is an alternative to the traditional frontend stack (React/Angular, TypeScript, building, etc.), which can feel over-engineered in many use cases.
It allows all engineers to contribute to the front end from their native environment.
Jacek started his career as an engineering intern at NVIDIA (CUDA) and Facebook. He joined the pre-revenue startup Sumo Logic as roughly its 20th employee in the San Francisco Bay Area, then moved back to Poland and opened an office that grew to 80+ full-time engineers. His team optimized gross margins on AWS and...
Dileep Kumar Pandiya, ZoomInfo, Principal Engineer
This session explores how AI is transforming development practices in cloud-native environments, highlighting innovative tools, frameworks, and methodologies that incorporate AI to enhance developer productivity and software quality.
A technology leader with expertise in scaling digital businesses and navigating complex digital transformations, Dileep has been pivotal in the success of numerous high-profile projects. He dedicates himself to staying ahead of industry trends and utilizes his skills to create robust, scalable...
Problem: The challenge at the heart of this presentation is the efficient training of AI models in scenarios where real-world data is limited, sensitive, or expensive to acquire. This issue is particularly pressing in fields such as autonomous vehicle development and medical research, where the quality and diversity of training data directly influence the performance and reliability of AI systems. Addressing this problem is crucial for advancing AI capabilities while ensuring ethical standards and privacy are upheld.
Methodology: To tackle this challenge, our approach involves the creation and use of synthetic data. The methodology encompasses techniques for generating high-fidelity, diverse synthetic datasets that mimic real-world complexities without compromising privacy or incurring high costs. Key techniques include Generative Adversarial Networks (GANs), simulation-based synthesis, and rule-based data generation. The presentation will detail these methods, along with strategies for validating the realism and utility of synthetic data in training robust AI models.
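Of the techniques listed, rule-based generation is the simplest to sketch. Assuming hypothetical field names and value ranges (a GAN or simulator would replace these hand-written rules):

```python
import random

# Rule-based synthetic data sketch: sample plausible clinical-style records
# from fixed ranges. No real individuals are involved, so the dataset
# carries no privacy risk by construction.
def synth_records(n, seed=0):
    rng = random.Random(seed)  # seeded for reproducible datasets
    return [
        {"age": rng.randint(18, 90), "systolic_bp": rng.randint(95, 180)}
        for _ in range(n)
    ]
```

Validating realism then means checking that such records match the marginal and joint distributions of the real-world data they stand in for, which is where GANs and simulation-based synthesis earn their extra complexity.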
Conclusions: Preliminary results demonstrate that synthetic data can significantly enhance AI model training, especially in constrained environments. By leveraging synthetic datasets, we've observed improvements in model accuracy, robustness, and generalizability across several applications. The presentation will outline these findings, showcasing examples where synthetic data has successfully bridged the gap between the data needs of AI systems and the limitations of real-world datasets.
Customer Obsessed Product Leader with a proven track record at tech giants like Meta and Cisco Systems, where I've led the charge in product innovation and managed multi-billion dollar portfolios. My expertise lies in driving Machine Learning-focused product strategies and spearheading...
Daniel Myers, Snowflake, Director of Developer RelationsAPIs for data - do you start with the data model or the API model? Learn best practices and how to build data-intensive applications on Snowflake and large language models (LLMs). In this session, you’ll learn different API architectural patterns, including Connected Apps and Snowflake Native Apps. Daniel will demonstrate how to develop, deploy, and run applications directly on Snowflake.
With cross-functional experience in software engineering, product management, and business development, Daniel is the Director of Developer Relations at Snowflake. Daniel leads global, cross-functional teams in software development and customer adoption, with a focus on bottom-up... Read More →
Mike Simon, Splunk, Staff Developer Evangelist - Observability
In today’s cloud-native world, developers need to innovate faster while maintaining the reliability and performance of their applications. In this session, you’ll learn about why observability matters for developers and how it lets you spend more time coding and less time fixing things. You’ll also see a demo of how to automatically instrument your applications with Splunk’s distribution of the OpenTelemetry collector, and learn why OpenTelemetry is the most important skill you can learn in 2024.
Mike Simon is a Staff Developer Evangelist at Splunk, where he has been part of the team for two years. In his tenure at Splunk, Mike has held roles as an Observability Strategist and Technical Architect.Before joining Splunk, Mike built a distinguished 16-year career as an Observability... Read More →
Mike Simon, Splunk, Staff Developer Evangelist - Observability
In cloud-native environments, embedding observability as part of your infrastructure-as-code strategy is no longer optional—it’s essential. This session explores the value and necessity of adopting an observability-inclusive as-code approach, allowing teams to automate monitoring, reduce blind spots, and accelerate incident response. We’ll dive into the opportunities this brings for enhancing visibility, ensuring system reliability, and empowering teams to take proactive, data-driven actions without manual overhead.
Mike Simon is a Staff Developer Evangelist at Splunk, where he has been part of the team for two years. In his tenure at Splunk, Mike has held roles as an Observability Strategist and Technical Architect.Before joining Splunk, Mike built a distinguished 16-year career as an Observability... Read More →
Have you encountered a scenario where, while working on a feature or resolving a bug, you discovered that duplicate records were being generated in a database, or duplicate charges or orders were being processed? Have you handled situations where services communicate asynchronously over the network, idempotency becomes essential to handle message retries, network failures, and eventual consistency among services. Ensuring idempotency is essential, particularly with the help of durable execution systems like Temporal. In this session, we'll delve into the concept of idempotency, elucidating its significance in simplifying the management of cloud based software systems. Through examples and exploration of intricate cases, we'll demonstrate how idempotency can be leveraged effectively with Temporal.
Geetha is a Solutions Architect in the big data management and durable execution space with experience in executing solutions for business problems on cloud and on-premises. She loved distributed computing during her undergrad days and has followed my interest ever since. She provides... Read More →
Anushrut Gupta, Hasura, Senior Product Manager, Generative AI
“Over the last 3 months, summarize the top billing issues faced by my enterprise customers within the first 30 days of their onboarding.”
On the surface, building an internal AI customer intelligence application that can answer questions like this is a perfect use-case for Gen AI. However, building a production ready app that retrieves the data (RAG) before hauling it off to your favorite LLM for summarization soon becomes a terrible engineering experience.
The data is spread across 3 places: a tickets database (eg: elastic), a CRM (eg: salesforce) and your user-accounts transactional database (eg: postgres). In production, your app can’t access the data from these databases directly. Given security & privacy concerns, your app won’t have direct access to these databases. Making independent retrieval requests to each of these sources and then joining them in memory might be prohibitively expensive and needs a level of query planning to do efficiently. Moving all data into one location is expensive to build, maintain and govern Predictable quality is further made hard because underlying data formats and storage interfaces are continuously changing. Different types of user queries might require additional filtering and joining of data, which becomes hard to generalize.
APIs solve almost all of these very well known challenges. APIs offer standardization and security. APIs can provide a stable contract to interact with underlying data.
And in all likelihood, you already have APIs on these internal and external data sources.
Ironically, while APIs have become a necessity for other parts of the stack, they are clearly not the first thing that AI engineers building RAG reach for.
In this talk, we’ll discuss:
- Why API-based retrieval doesn’t work well for RAG
- What we need from our existing internal and external APIs to make them RAG-ready
- How we can get existing APIs to become RAG-ready without needing to rebuild them
This talk will be technical, with code demos (possibly with some live coding!) and end with key resources (reference architectures, API best practices, tools/technologies) that attendees can take back to their work.
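As a rough illustration of the in-memory join the abstract warns about (not Hasura's implementation), the sketch below retrieves rows from two hypothetical sources separately and merges them by customer id. All data, field names, and the join helper are made up for illustration.

```python
# Illustrative sketch of joining retrieval results from separate sources
# before sending them to an LLM. Every name here is hypothetical.

tickets = [  # e.g. rows from a tickets API backed by Elastic
    {"customer_id": 1, "issue": "billing: duplicate invoice"},
    {"customer_id": 2, "issue": "login failure"},
]
crm_accounts = [  # e.g. rows from a CRM API such as Salesforce
    {"customer_id": 1, "tier": "enterprise", "onboarded_days_ago": 12},
    {"customer_id": 2, "tier": "starter", "onboarded_days_ago": 90},
]

def join_for_rag(tickets, accounts):
    """Merge ticket rows with account rows on customer_id. Every extra
    source multiplies the data the app must fetch and hold in memory,
    which is why this approach needs real query planning at scale."""
    by_id = {a["customer_id"]: a for a in accounts}
    return [
        {**t, **by_id[t["customer_id"]]}
        for t in tickets
        if t["customer_id"] in by_id
    ]

rows = join_for_rag(tickets, crm_accounts)
# The example question from the abstract: enterprise customers with
# billing issues inside their first 30 days of onboarding.
enterprise_billing = [
    r for r in rows
    if r["tier"] == "enterprise" and r["onboarded_days_ago"] <= 30
]
```

Even this toy version shows the problem: the filtering logic is query-specific, and the app has to pull full result sets from each source before it can discard most of them.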
Clemens Vasters is Lead Architect in Microsoft’s Azure Messaging team that builds and operates a fleet of hyper-scale messaging services, including Event Grid, Service Bus, and Event Hubs. Clemens represents Microsoft in messaging standardisation in OASIS (AMQP) and CNCF (CloudEvents... Read More →
Ishneet Dua, Amazon Web Services, Senior Generative AI Solutions Architect Parth Girish Patel, Amazon Web Services, Sr AI/ML Architect
The rapid growth of generative AI brings promising innovation and, at the same time, raises new challenges around its secure, safe, and responsible development and use. These challenges include some that were common before generative AI, such as bias and explainability, and new ones unique to generative models, including hallucinations, toxicity, and intellectual property protection. During this session, participants will gain an overview of the challenges that generative AI presents, survey the emerging science surrounding these challenges, and engage in a discussion about the hands-on, security, and Responsible AI work currently being conducted on AWS.
Senior Generative AI Solutions Architect, Amazon Web Services
Ishneet Dua (Isha) is a recognized expert in leveraging AI and machine learning for sustainability solutions. She has established herself as a go-to authority on combating climate change, pollution, and other environmental challenges through cutting-edge technologies. Dua has authored... Read More →
Parth Girish Patel is a seasoned architect with a wealth of experience spanning over 17 years, encompassing management consulting and cloud computing. Currently, at Amazon Web Services (AWS), he specializes in Artificial Intelligence/Machine Learning, generative AI, sustainability... Read More →
Carl Moberg, Avassa, CTO and co-founder Amy Simonson, Avassa, Marketing Manager
Enough manual actions. Enough slow handovers. And enough K8mplexity.
For many innovative enterprises today, the journey to the centralized cloud has shaped the way of working when it comes to container orchestration and observability. Now, developers and IT teams are increasingly also managing containers at the distributed on-site edge and in IoT environments, a task that risks becoming mind-boggling due to the resource-constrained, distant nature of IoT and edge.
In this session, we address the challenges related to deploying, monitoring, observing, and securing container applications at the edge. We also present hands-on examples of what a self-service developer experience can look like for container applications on distributed edge and IoT infrastructure. It's automated, it's application-centric, and it's astonishingly easy.
Carl has spent many years solving for automation and orchestration. He started building customer service platforms for ISPs back when people used dial-up for their online activities. He then moved on to focus on making multi-vendor networks programmable through model-driven architectures... Read More →
Amy is an experienced marketing professional who thrives right in the intersection between deep tech and marketing. She is currently the marketing manager of Swedish Edge Platform provider Avassa, who are set out to make the distributed on-site edge delightfully easy to manage.
In this session I will guide you from getting started with a Copilot, to deploying your own Azure OpenAI instance, to use cases that bring benefit to your company. We will also look at how to build custom solutions with the power of Azure.
Takeaways:
- What a copilot is and how you can use it in your daily business
- How to set up Azure OpenAI
- How you can build a custom solution with the help of Azure
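As a hedged sketch of the second takeaway, the snippet below assembles the shape of a chat-completions request to an Azure OpenAI deployment. The endpoint, deployment name, and API version are placeholders to replace with your own values, and no request is actually sent here.

```python
# Sketch of a chat-completions request to an Azure OpenAI deployment.
# Resource name, deployment name, and api-version are placeholders;
# this builds the URL and payload only and performs no network call.

endpoint = "https://YOUR-RESOURCE.openai.azure.com"
deployment = "my-gpt-deployment"   # your Azure OpenAI deployment name
api_version = "2024-02-01"         # check the currently supported version

url = (
    f"{endpoint}/openai/deployments/{deployment}"
    f"/chat/completions?api-version={api_version}"
)
payload = {
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize today's open tickets."},
    ],
    "max_tokens": 256,
}
# An HTTP client would POST `payload` as JSON to `url`, authenticating
# with an `api-key` header holding your Azure OpenAI key.
```

The deployment-scoped URL is the key difference from the public OpenAI API: you address a named deployment in your own Azure resource rather than a global model name.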
My name is Jannik Reinhard, I'm 25 years old, and I work in the internal IT department of the largest chemical company in the world. I am a senior solution architect in the area of modern device management and technical lead of AIOps (AI for IT operations).
Timothy Spann, Zilliz, Principal Developer Advocate
In this talk I walk through various use cases where bringing real-time data to LLM solves some interesting problems.
In one case we use Apache NiFi to provide a live chat between a person in Slack and several LLM models, all orchestrated via NiFi and Kafka. In another, NiFi ingests live travel data and feeds it to HuggingFace and Ollama LLM models for summarization. I also demo a live chatbot, and we augment LLM prompts and results with live data streams. All with ASF projects. I call this pattern FLaNK AI.
Tim Spann is the Principal Developer Advocate for Data in Motion @ Zilliz. Tim has over a decade of experience with the IoT, big data, distributed computing, streaming technologies, and Java programming. Previously, he was a Developer Advocate at StreamNative, Principal Field Engineer... Read More →
Aman Sardana, Discover Financial Services, Senior Principal Application Architect
Have you ever wondered what it takes to create resilient and highly available platform services that support mission-critical software systems? Please join me to find out how you can set the right strategy and foundational architecture for building platform services that businesses can trust for their most critical workloads.
Payment systems that support real-time transaction processing are expected to be highly available and highly responsive 24/7/365. These systems must be fault-tolerant and resilient to any failures that might happen during payment transaction processing. Mission-critical payment systems with distributed architecture often depend on platform services like distributed caching, messaging, event streaming, databases, etc. that should be independently designed for high availability and fault tolerance. In this talk, I’ll share the approach we took for architecting and designing platform services within the payments domain that can be applied to any domain that supports business-critical processes. This methodological approach starts with establishing a capability view for platform services and then defining the implementation and physical views. You’ll also gain an understanding of other aspects of platform services like provisioning, security, observability, testing, and automation that are important for creating a well-rounded platform strategy supporting business-critical systems.
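One common building block for the fault tolerance described above is retrying transient failures with exponential backoff. The sketch below is a generic illustration, not the speaker's design; `flaky_call` stands in for any platform-service call (cache, queue, database).

```python
# Minimal retry-with-exponential-backoff sketch. with_retries and
# flaky_call are illustrative names; real platform services would also
# need timeouts, jitter, and circuit breaking.

import time

def with_retries(fn, attempts=3, base_delay=0.01):
    """Call fn, retrying on any exception with exponential backoff."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise  # exhausted retries: surface the failure
            time.sleep(base_delay * (2 ** attempt))

calls = {"n": 0}

def flaky_call():
    """Simulates a service that fails twice, then succeeds."""
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient failure")
    return "ok"

result = with_retries(flaky_call)
```

Note that retries are only safe when the underlying operation is idempotent, which ties this back to the broader theme of designing platform services that critical workloads can trust.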
Senior Principal Application Architect, Discover Financial Services
I am a technology professional working in the financial services and payments domain. I’m a hands-on technology leader, enabling business capabilities by implementing cutting-edge, modernized technology solutions. I am skilled in designing, developing, and implementing innovative... Read More →
Context is fundamental to well-run tech operations: With the right context, IT teams can better understand their systems, interpret real-time data quickly, and facilitate better incident management to achieve operational efficiency. But too often, gathering the necessary context is a lengthy, inconsistent, and elusive process. IT teams are forced to grapple with fragmented tools, siloed workflows, and inconsistent manual processes, which have turned context collection into a definitive pain point for the ITOps industry. Teams are losing out on precious time, money, and attention that should be directed towards digital transformation and innovation.
The tech industry has recently transformed thanks to the AI boom: ITOps is at a critical juncture where AI can enable faster, more efficient ITOps as well as deliver Full-Context Operations. Fred Koopmans, Chief Product Officer of AIOps platform BigPanda, will speak to the promise of Full-Context Operations – the process of unifying IT teams’ tools and processes with AI to provide the institutional knowledge needed to address every incident immediately. He’ll dive deep into the ways that teams can tangibly benefit from having the right context, outlining how the IT industry can leverage AI to collect comprehensive and contextual data to help operators achieve better incident resolution. Fred will share detailed proof points from developing BigPanda’s AI-powered assistant, purpose-built for delivering full context in IT operations. With Full-Context Operations, the IT industry can finally fulfill the long-sought-after promise of AIOps, putting AI into practice to deliver unprecedented operational efficiency.
Fred Koopmans, BigPanda's Chief Product Officer, is dedicated to driving innovation and collaboration, building trusted partnerships with customers, creating product roadmaps, and empowering individuals to achieve the extraordinary. He leads product strategy, product management, product... Read More →