
Risks and Attack Surfaces in AI/ML
- AI models and their training data embody valuable IP and are attractive targets for attackers.
- Attack surface includes training data, algorithms, model parameters, backend applications.
- Manipulation or theft in sectors like healthcare can lead to compromised patient data, malfunctioning AI, or stolen models.

Technical Aspects of Protection
- Protection is required for both the AI model and the application code, including AI functionality embedded within that code.
- Basic license checks in code can be easily bypassed by attackers with access to the binary.
- Encryption of code binary makes it unreadable and hard to patch or reverse engineer.
- Memory dumps at runtime can reveal decrypted code; attackers use hooking or DLL injection to manipulate running code.
- Comparison of protection methods:
  - Encryption: strong protection on stored code, weak during runtime due to decryption.
  - Obfuscation: adds redundant or confusing instructions; simple obfuscation is easily removable, while advanced obfuscation divides code into encrypted blocks with encrypted control flow, increasing difficulty.
- Anti-debugging techniques are employed but can often be bypassed by experienced hackers.
- Integrity checks (digital signatures, checksums) detect code manipulation and debugging breakpoints.
- Code execution in trusted environments (TEE, hardware security modules, cloud) significantly increases protection by isolating critical code from attackers.
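The integrity-check idea above can be sketched in a few lines: a digest of the code is recorded at build time, re-computed at runtime, and any patch (such as an inserted debugger breakpoint) changes the digest. This is a toy illustration, not a production mechanism; the byte values and function names are assumptions.

```python
# Toy sketch of an integrity check: hash a code blob and compare against
# a known-good digest recorded at build time. Names and bytes are illustrative.
import hashlib

def build_time_digest(code: bytes) -> str:
    """Compute the reference digest when the binary is produced."""
    return hashlib.sha256(code).hexdigest()

def verify_integrity(code: bytes, expected: str) -> bool:
    """At runtime, re-hash the code and compare with the shipped digest."""
    return hashlib.sha256(code).hexdigest() == expected

code = b"\x55\x48\x89\xe5\x90\x90"              # stand-in for a protected code section
expected = build_time_digest(code)

assert verify_integrity(code, expected)          # untouched code passes
patched = code.replace(b"\x55", b"\xcc", 1)      # attacker inserts a breakpoint (INT3 = 0xCC)
assert not verify_integrity(patched, expected)   # the checksum detects the modification
```

In real protections the digest itself must also be protected (e.g. signed or computed inside a trusted environment), otherwise an attacker simply patches both the code and the stored hash.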

Wibu-Systems Solutions
- Wibu-Systems, headquartered in Karlsruhe, Germany, offers protection technologies and products such as AxProtector.
- AxProtector combines encryption with advanced obfuscation and control flow encryption.
- It supports trap mechanisms that detect unauthorized decryption attempts and lock licenses.
- Obfuscation requires early integration in development before compilation; encryption can be applied at binary level post-compilation.
- The tool integrates with the LLVM toolchain, supporting multiple languages (e.g., C, Rust, Swift).
- Combining multiple methods provides robust IP protection, enforces licensing models, and ensures applications operate as intended without manipulation.
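The block-wise encryption idea described above can be illustrated conceptually: each code block is stored encrypted and decrypted only just before it runs, so a single memory dump never exposes the whole program at once. The sketch below uses XOR as a stand-in for a real cipher and `exec` as a stand-in for native execution; it is not how AxProtector or any real protector is implemented.

```python
# Conceptual sketch of block-wise code encryption with just-in-time decryption.
# XOR stands in for a real cipher; the block contents are illustrative.
KEY = 0x5A

def xor(data: bytes) -> bytes:
    """Symmetric toy 'cipher': XOR each byte with a fixed key."""
    return bytes(b ^ KEY for b in data)

# Blocks are encrypted at protection time; only ciphertext is stored.
encrypted_blocks = {
    "step1": xor(b"result = 2 + 3"),
    "step2": xor(b"result = result * 10"),
}

def run_block(name: str, env: dict) -> None:
    plain = xor(encrypted_blocks[name])   # decrypt just-in-time
    exec(plain.decode(), env)             # execute; plaintext then goes out of scope

env = {}
run_block("step1", env)
run_block("step2", env)
assert env["result"] == 50                # blocks executed correctly
```

Because each block is decrypted only for the moment it executes, an attacker dumping memory sees at most one plaintext block at a time; combined with encrypted control flow, reconstructing the full program becomes substantially harder.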

Key Takeaways and Actionable Items
- Developers and companies using AI/ML must recognize the growing threat to IP and the need for proactive protection.
- Early integration of protection methods (encryption, advanced obfuscation) into the development process is crucial.
- Employ layered security strategies combining encryption, obfuscation, integrity checks, and trusted execution environments.
- Design licensing to restrict and monetize usage effectively, integrating with protection mechanisms.
- Consider consulting or implementing solutions like Wibu-Systems’ AxProtector for tailored protection.
- Visit Wibu-Systems’ booth (A51) for further discussion and to address specific protection demands.

Inside the Mind of Hackers: Safeguard your intellectual property in an AI-driven future


14:00 - 14:30, 27th of May (Tuesday) 2025 / DEV ARCHITECTURE STAGE

As AI accelerates innovation, the risk of intellectual property theft grows. Application code - whether in C++, .NET, Python, or other environments - often contains sensitive algorithms, and machine learning (ML) models are now prime targets for attacks. From knowledge exposure to algorithm replay and integrity violations, organizations must defend their critical assets against evolving threats.

This presentation takes a hacker’s perspective on software security, analyzing real-world attack scenarios and presenting practical countermeasures. We will explore how to protect code and data in conventional and ML applications while securing the entire ML environment from development to deployment. With real-world cases, proven strategies, and ready-to-use techniques, we’ll demonstrate that effective software protection is not just a necessity - it’s a competitive advantage.

LEVEL:
Basic Advanced Expert
TRACK:
AI/ML Cybersecurity
TOPICS:
Cybersecurity, IT Architecture, ML/DL, Python, Software Engineering

Janusz Hryszkiewicz

WIBU-SYSTEMS AG

Oliver Winzenried

WIBU-SYSTEMS AG