Neethu Elizabeth Simon, Intel, Senior Software Engineer Antonio Martinez, Intel, Software Engineer Brian McGinn, Intel, Software Solution Developer
Computer vision is revolutionizing various industries, and retail is no exception. While AI applications have many benefits, proof-of-concepts do not scale successfully into larger, implemented deployments. Retailers, independent software vendors (ISVs), and system integrators (SIs) require a good understanding of both hardware and software, maintaining AI models in production and having in-depth knowledge of costs involved in setting up and scaling these systems. Vision workloads are significantly larger, complex, undergo multiple stages of processing and therefore require systems to be architected, built, and deployed with several considerations. To make it easier for software developers to understand the optimizations and hardware capabilities, Automated Self-Checkout is developed- an open-source initiative for vision-enabled retail use cases, providing optimized code blocks, documentation, and performance benchmark data to enhance and accelerate developer’s product and project work.
This Open-Source project is a community developed project that provides the tools needed to launch and benchmark a computer vision-based workload on your local device.
In this hands-on workshop we will demonstrate how to utilize the automated-self-checkout solution to run Docker containers with open-source Intel OpenVINO toolkit and benchmark their performance on different hardware platforms. Through this interactive workshop, attendees will be able to setup and run the AI computer vision-based retail pipelines using this solution. Attendees will also gain knowledge on how to benchmark the workloads using this tool for obtaining maximum hardware performance.
Antonio is a software engineer with a strong focus on AI applications. He has more than 10 years of experience in software development.He currently holds the esteemed position of technical lead at Intel, where he is responsible for guiding projects and leading teams to develop cutting-edge... Read More →
Brian McGinn is a software architect and technical lead at Intel for the Health, Education, and Consumer Technologies group. He has developed software with Intel for the past 13 years and most recently working with open source computer vision and AI solutions in the retail space... Read More →
Neethu Elizabeth Simon is an IOT/ML Senior Software Engineer in Network & Edge Group at Intel Corporation, with vast industrial experience in building smart end-to-end computer vision-based AI/ML solutions in retail, biopharma, healthcare etc. Neethu holds Master’s in Computer Science... Read More →
The CNCF ecosystem has exploded with a diverse array of tools, each solving a unique problem. However, integrating multiple projects often leads to challenges requiring manual intervention and custom solutions, with shell scripting commonly used as a temporary stopgap. This approach, predominantly utilising Bash, frequently results in maintenance headaches such as poor code readability and debugging difficulties.
Nushell presents a modern take on shell scripting, offering a more intuitive syntax by dropping POSIX-compliance. It includes YAML friendly data model as a fundamental design, built-in commands equivalent to curl and jq, supports simple and clear module structures, and eliminates the confusion of single vs. double quotes.
We will have a quick look at Bash vs. Nushell difference, some example case of how Nushell data model makes it easier to deal with YAML configs, and extra automation and CI/CD ideas using its module and test capabilities.
Ryota is a Principal Engineer at Civo, overseeing the company's Kubernetes offerings. With over a decade of experience in the finance industry, he has a proven track record of designing payment processing systems from scratch, building platforms that embrace Kubernetes, Argo, Istio... Read More →
Ken Huang, DistributedApps, CEO and Chief AI Officer
This presentation introduces a comprehensive framework for testing and validating the security of generative AI applications, particularly those built on large language models (LLMs). Developed by Ken Huang (the Speaker) and World Digital Technology Academy's AI Safety, Trust, and Responsibility (STR) working group, the framework addresses the new attack surfaces and risks introduced by generative AI.
The standard covers the entire AI application stack, including base model selection, embedding and vector databases, prompt execution/inference, agentic behaviors, fine-tuning, response handling, and runtime security. For each layer, it outlines specific testing methods and expected outcomes to ensure AI applications behave securely and as designed throughout their lifecycle.
Key areas of focus include model compliance, data usage checks, API security, prompt injection testing, output validation, and privacy protection. The framework also addresses emerging challenges like agentic behaviors, where AI agents autonomously perform tasks based on predefined objectives.
Ken Huang is a prolific author and renowned expert in AI and Web3, with numerous published books spanning AI and Web3 business and technical guides and cutting-edge research. As Co-Chair of the AI Safety Working Groups at the Cloud Security Alliance, and Co-Chair of AI STR Working... Read More →
Join Tristan Kalos & Antoine Carossio from Escape, for insights on critical risks from exposed API tokens. Their groundbreaking research, analyzing 1 million domains, uncovered 18,000+ API tokens and RSA keys accessible without authentication. 41% were highly critical. They will share his unique web scanning methodology, dive into sensitive API data revealing potential severe financial losses (up to 17 million $), and draw parallels to standard API security threats. Going beyond the findings, they'll present actionable remediation strategies and provide a practical API security checklist. Leave equipped with a clear path to secure your APIs.
Tristan Kalos, co-founder and CEO at Escape, draws from a background as a software engineer and Machine Learning Researcher at UC Berkeley. Motivated by firsthand experience witnessing a client's database stolen through an API in 2018, he has since become an expert in API security... Read More →
A story of how our infrastructure evolved over time to accommodate an increasing number of users - from on-premise to cloud and back down.
How does one make an infrastructure to handle more than a couple of users?
How do you go from 100 to 1000 to 100,000 to tens of millions?
What happens when due to popular demand hundreds of thousands of users hit your servers at the same time?
I'll tell you a story of how a small team of people managed to move software and services from one server to two, and then to dozens on cloud and then back to on-premise. What we encountered on the way, where we failed, and how we solved it.
Josip has been involved with computers for the better part of his life. Started with web development back in high school. Since then he's moved to backend and DevOps. Loves security stuff and is obsessed with optimising everything. Works in Zagreb as CTO @ Sofascore
Keploy. has become one of the popular tools for software end to end testing. If you can test your application with enough time to market that, this is what you're likely to use.
In this talk, we will classify end to end testing and discuss application areas for Keploy and traditional testing framework, and discuss which tool to choose for each use-case. And why not both? We will discuss using Keploy side by side with existing testcases to get even higher coverage with real-time based edge scenarios.
P.S: We will focus on enterprise grade application with Node, Java, but the same approaches can be used everywhere since Keploy is language agnostic.
I have a passion for learning and sharing my knowledge with others a public as possible.I love open source. I am not a heavy maintainer of any large libraries, but I really like the boyscout rule. I contribute to things as I come across issues that I think other people might struggle... Read More →
Basu Sugeerappa, Fidelity Investments, Director, API Center of Excellence
As our firm started modernization of API (Rest, Cloud, Gateway) it was evident that program needed to be accelerated rather than taking one API at a time. Team enthusiastic tech geeks defined standard technology, tools, process, development, and deployment procedure. Best thing is it is all automated and friendly UI where even non-technologist can develop API without knowing java or CICD. Our organization is contract first approach. Created various financial business need based swagger templates to jump start contract creation process, it goes through rigor governance committee review to avoid redundancy and to ensure new contract meets defined standards. After governance approval we use proprietary framework generates java code for Authentication, Authorization, Monitoring, logging, IO, Database, Caching etc.. It creates required configuration files and all deployable. Using generated java code, API goes through successful build procedure, and it auto deploys API runnable to cloud environment and the swagger proxy to envoy-based gateway. All these activities are completed less than 30 min from API contract to deployment which helps various development teams and business in focusing on business logic than infrastructure. Through this process we're able to decrease the time to market, bring consistent development practices and of course save money and bring operational efficiencies across the heterogeneous technologies and teams. Behind the scenes we use regular SDLC Swaggerhub, Github, Java Springboot, Jenkins, uDeploy, Venafi, GSLB, Azure and Envoy. Advantage is person who is developing API don't need to know how to do but they just focus on what business wants.
It explores the transformative potential of real-time data streaming and artificial intelligence (AI) in the context of e-commerce live streaming shopping. By leveraging advance technologies such as Storm, Trident, Samza, and Spark Streaming, businesses can process and analyze data in real-time, enhancing consumer engagement and driving sales in real time. This paper reviews the literature on live streaming selling, product promotion, and multichannel sales, and discusses the challenges and opportunities associated with these technologies. The findings provide valuable insights for businesses and researchers aiming to harness the power of real-time data streaming in the dynamic landscape of social commerce using real time streaming
My testament to his dedication and expertise, particularly in AI and ML:Published Papers and Written Book Chapter on the same as well: https://www.researchgate.net/profile/Arjun-Mantri-2Career Progression: I have has held pivotal roles at leading technology companies such as TikTok, Roku Inc., and Expedia Group, showcasing his ability to excel in various high-impact positions.Educational Background: I hold a Master of Science in Software Engineering from San Jose State University and a Bachelor... Read More →
Minimal API is a powerful tool for developers looking to build lightweight and efficient web applications. Unlike traditional web frameworks that can be cumbersome and difficult to work with, Minimal API streamlines the development process by providing a simple, yet effective, interface for creating RESTful APIs. With Minimal API, developers can easily define routes, handle HTTP requests and responses, and implement middleware with just a few lines of code. This results in faster development times, improved performance, and reduced complexity. Whether you're building a small application or a large-scale API, Minimal API is the perfect tool for the job.
Working in IT since 2009. Currently working as Head of Engineering at SoftwareHut and as an academic teacher at Białystok Technical University. Co-founder of meet.js Białystok. Book and articles author. Father, husband, huge H.P. Lovecraft fan and terrible poker player.
Dileep Kumar Pandiya, ZoomInfo, Principal Engineer
Explore how AI is transforming development practices in cloud-native environments. Highlight innovative tools, frameworks, and methodologies that incorporate AI to enhance developer productivity and software quality.
Technology Leader with expertise in scaling digital businesses and navigating complex digital transformations has been pivotal in the success of numerous high-profile projects. Dileep dedicates himself to staying ahead of industry trends and utilizes his skills to create robust, scalable... Read More →
Not having a DevSecOps Maturity Plan is like off-road racing in heavy fog.
This session lifts the fog to help you plan your way around hidden potholes, rocks, cliffs and trees.
An unplanned approach can end up adding substantial friction to the People, Process and Technology of DevSecOps.
In addition to your typical costs, the session will touch on economies of speed, value stream friction and the super-efficiency of aligning workflows with existing human habit loops. It will also discuss the frequent anti-pattern of comparing scaling-out DevSecOps capabilities to the cost of doing nothing, when it is well known that doing nothing is not really an option.
Every DevSecOps maturity level carries costs - learn how smart choices can mean lower costs, less friction and better outcomes.
Darwin Sanoy has spent his career in strategy, architecture, engineering and coding for scaled provisioning and operations automation. His early career was in enterprise IT automation, mid-career was running a solo business for enterprise automation training and the last decade has... Read More →
Karanveer Anand, Google, Technical Program Management
Last year, we solicited talks on the then new trend of cost and resource pressure: doing more with less in the face of uncertain future growth and revenue for the technology industry while also generative AI has improved the workflows of business.
Karanveer Anand is a technical program management leader at Google's SRE organization. Before joining Google, he managed program management in the SRE organization at Nutanix. He is a prominent voice in the industry, advocating for new program structures and standards.
Gabriel Schulhof, Auction.com, Senior Software Engineer
Two aspects of resolvers have an outsized influence on their performance: the size of the execution context, and the way we compute their value. In the Node.js implementation of graphql, promises wrapping primitive values are especially disruptive, since they add a large computing overhead. The context size creates a memory usage baseline that can rise very quickly with even small additions to the context, when there are many concurrent contexts. The execution can create temporary objects, increasing memory usage. Often-run resolvers, such as those responsible for filling out large arrays of objects, can become performance bottlenecks.
At Auction.com, our search results page (SRP) requests up to 500 items of roughly 80 fields each. The query resolving these fields was suffering a high latency. We shall examine the tools to instrument our code and identify memory usage and CPU utilization bottlenecks.
Our realtime elements (e.g. realtime updates to the status of currently viewed properties) are implemented using a translation of kafka messages to graphql updates. We shall present the tools and procedures to reduce memory usage and CPU usage when fanning out such messages.
The session walks through an industry solution of universal rendering, streaming & micro frontends that helped eBay deliver host app agnostic cross functional UX widgets at scale to over 700+ distributed site facing eBay apps & helped cut down the time to launch site wide UX from a few months to a few weeks. The system delivers over 150+ targeted UX widgets & has onboarded over 30 internal platforms within the company that have usecases of delivering site wide UX
Damodaran works as a Senior FE Engineer for eBay. He has worked on Header Platforms, universal render platforms, eBay's Seller experience & Buyer Experience SEO pages. He works predominantly on NodeJS, React & Marko.
Carl Moberg, Avassa, CTO and co-founder Amy Simonson, Avassa, Marketing Manager
Enough manual actions. Enough slow handovers. And enough K8mplexity.
For many innovative enterprises today, the journey to the centralized cloud has shaped the way of working when it comes to container orchestration and observability. Now, developers and IT teams are increasingly also managing containers at the distributed on-site edge and in IoT environments, which risk becoming a mind-boggling task due to the resource-constrained, distant nature of IoT and edge.
In this session, we address the challenges related to deploying, monitoring, observing, and securing container applications at the edge. We also present hands-on examples of what a self-service developer experience can look like for the container applications at the distributed edge and IoT infrastructure. It's automated, it's application-centric and it's astonishingly easy.
Carl has spent many years solving for automation and orchestration. He started building customer service platforms for ISPs back when people used dial-up for their online activities. He then moved on to focus on making multi-vendor networks programmable through model-driven architectures... Read More →
Amy is an experienced marketing professional who thrives right in the intersection between deep tech and marketing. She is currently the marketing manager of Swedish Edge Platform provider Avassa, who are set out to make the distributed on-site edge delightfully easy to manage.
In this session I will guide you from how to start with a CoPilot to deploy and own Azure Open AI instance to use cases which brings benefit to your company. We will also have a look how to build custom solution with the power of Azure.
- Takeaways: - What is a copilot and how you can use it in your daily business - How to setup Azure Open AI - How can you build a custom solution with help of Azure.
My name is Jannik Reinhard and I'm 25 years old and I am work in the internal IT department of the largest chemical company in the world. I am a senior solution architect in the area of modern device management and technical lead of AIOPS (AI of IT Operation).
Ken Huang, DistributedApps, CEO and Chief AI Officer
This presentation introduces a comprehensive framework for testing and validating the security of generative AI applications, particularly those built on large language models (LLMs). Developed by Ken Huang (the Speaker) and World Digital Technology Academy's AI Safety, Trust, and Responsibility (STR) working group, the framework addresses the new attack surfaces and risks introduced by generative AI.
The standard covers the entire AI application stack, including base model selection, embedding and vector databases, prompt execution/inference, agentic behaviors, fine-tuning, response handling, and runtime security. For each layer, it outlines specific testing methods and expected outcomes to ensure AI applications behave securely and as designed throughout their lifecycle.
Key areas of focus include model compliance, data usage checks, API security, prompt injection testing, output validation, and privacy protection. The framework also addresses emerging challenges like agentic behaviors, where AI agents autonomously perform tasks based on predefined objectives.
Ken Huang is a prolific author and renowned expert in AI and Web3, with numerous published books spanning AI and Web3 business and technical guides and cutting-edge research. As Co-Chair of the AI Safety Working Groups at the Cloud Security Alliance, and Co-Chair of AI STR Working... Read More →
Timothy Spann, Zilliz, Principal Developer Advocate
In this talk I walk through various use cases where bringing real-time data to LLM solves some interesting problems.
In one case we use Apache NiFi to provide a live chat between a person in Slack and several LLM models all orchestrated via NiFi and Kafka. In another case NiFi ingests live travel data and feeds it to HuggingFace and OLLAMA LLM models for summarization. I also do live chatbot. We also augment LLM prompts and results with live data streams. All with ASF projects. I call this pattern FLaNK AI.
Tim Spann is the Principal Developer Advocate for Data in Motion @ Zilliz. Tim has over a decade of experience with the IoT, big data, distributed computing, streaming technologies, and Java programming. Previously, he was a Developer Advocate at StreamNative, Principal Field Engineer... Read More →
Aman Sardana, Discover Financial Services, Senior Principal Application Architect
Have you ever wondered what it takes to create resilient and highly available platform services that support mission-critical software systems? Please join me to find out how you can set the right strategy and foundational architecture for building platform services that businesses can trust for their most critical workloads.
Payment systems that support real-time transaction processing are expected to be highly available and highly responsive 24/7/365. These systems must be fault-tolerant and resilient to any failures that might happen during payment transaction processing. Mission-critical payment systems with distributed architecture often depend on platform services like distributed caching, messaging, event streaming, databases, etc. that should be independently designed for high availability and fault tolerance. In this talk, I’ll share the approach we took for architecting and designing platform services within the payments domain that can be applied to any domain that supports business-critical processes. This methodological approach starts with establishing a capability view for platform services and then defining the implementation and physical views. You’ll also gain an understanding of other aspects of platform services like provisioning, security, observability, testing, and automation that are important for creating a well-rounded platform strategy supporting business-critical systems.
Senior Principal Application Architect, Discover Financial Services
I am a technology professional working in the financial services and payments domain. I’m a hands-on technology leader, enabling business capabilities by implementing cutting-edge, modernized technology solutions. I am skilled in designing, developing, and implementing innovative... Read More →
Harsh Sharma is an experienced professional having 2 years of experience in building APIs/Interfaces using Azure Integration Services, C# and .NET Core. Additionally he having experience in Research & Development by publishing 2 research papers in IEEE based on Networking & Machine... Read More →
In today's world, where environmental concerns are at the forefront of our minds, it's crucial to consider the impact of our actions, including those within the tech industry. Software development, while driving innovation and progress, also contributes to a significant carbon footprint. This is where Green DevOps steps in, offering a powerful solution for building software that is both efficient and environmentally friendly.
Green DevOps refers to the practice of integrating sustainable practices into your software development lifecycle. This means implementing tools, techniques, and methodologies that minimize the environmental impact of software development and delivery. By adopting Green DevOps, you can achieve significant benefits, including:
Reduced energy consumption: Green DevOps practices help optimize resource utilization, leading to lower energy consumption throughout your development process. This not only translates to cost savings but also minimizes the carbon footprint of your software.
Improved resource efficiency: Green DevOps encourages developers to utilize resources wisely, minimizing waste and maximizing efficiency. This can involve practices like code optimization, infrastructure right-sizing, and efficient testing processes.
Enhanced software quality: Green DevOps principles encourage a focus on quality from the very beginning of the development process. This leads to fewer bugs and defects, resulting in software that is more reliable and requires fewer resources to maintain.
Mentored more than 10 hackathons and open source programs and mentored more than 2000 peers to contribute to open source programsCo-organiser of HUG Ahmedabad,Gandhinagar , CNCF Gandhinagar ,GDG Cloud Gandhinagar
In the dynamic realm of cloud infrastructure, security remains a paramount concern. As organizations strive to fortify their digital assets against evolving threats, integrating security seamlessly into infrastructure development processes becomes imperative. DevSecOps offers a compelling framework for achieving this synergy between security and infrastructure operations.
This anticipated session at CloudX 2024 will provide actionable insights tailored to infrastructure professionals. The session will provide expert insights on establishing and sustaining a highly effective DevSecOps framework, anchored in five foundational tenets that prioritize people, tools, and processes.
Gursimar is trying to empower individuals via Education, Mentorship, and Open-Source. He was invited to Paris and presented at the HAProxy Conf 2022 in November 2022. He's a moderator and CFP review committee member for ContainerDays 2023 & 2024 and Staff Member & CFP review committee... Read More →