Loading…
Tuesday November 12, 2024 2:00pm - 2:50pm PST
Ken Huang, DistributedApps, CEO and Chief AI Officer

This presentation introduces a comprehensive framework for testing and validating the security of generative AI applications, particularly those built on large language models (LLMs). Developed by Ken Huang (the Speaker) and World Digital Technology Academy's AI Safety, Trust, and Responsibility (STR) working group, the framework addresses the new attack surfaces and risks introduced by generative AI.

The standard covers the entire AI application stack, including base model selection, embedding and vector databases, prompt execution/inference, agentic behaviors, fine-tuning, response handling, and runtime security. For each layer, it outlines specific testing methods and expected outcomes to ensure AI applications behave securely and as designed throughout their lifecycle.

Key areas of focus include model compliance, data usage checks, API security, prompt injection testing, output validation, and privacy protection. The framework also addresses emerging challenges like agentic behaviors, where AI agents autonomously perform tasks based on predefined objectives.
Speakers
avatar for Ken Huang

Ken Huang

CEO and Chief AI Officer, DistributedApps
Ken Huang is a prolific author and renowned expert in AI and Web3, with numerous published books spanning AI and Web3 business and technical guides and cutting-edge research. As Co-Chair of the AI Safety Working Groups at the Cloud Security Alliance, and Co-Chair of AI STR Working... Read More →
Tuesday November 12, 2024 2:00pm - 2:50pm PST
VIRTUAL CloudX -- Main Stage
Feedback form is now closed.

Sign up or log in to save this to your schedule, view media, leave feedback and see who's attending!

Share Modal

Share this link via

Or copy link