Samsung to focus on hybrid AI, open collaborations, Knox security to safeguard privacy

LAS VEGAS, Jan 6: Samsung Electronics has said it will prioritise governance, protect privacy and secure its services to ensure that artificial intelligence (AI) emerges as a ‘true companion’ of users.
The South Korean consumer electronics giant’s announcement comes amid a raging debate about consumers’ security and privacy in the rapidly evolving AI era.
Samsung, which is planning to embed AI in all its products and services starting this year, said its hybrid AI model ensures personal data remains on-device whenever possible, and cloud-based intelligence is used selectively when greater speed or scale is required, giving users flexibility without compromising privacy.
Additionally, the company emphasised that trust will grow when AI behaves predictably and securely across devices. Samsung’s Shin Baik, the head of its AI Platform Center (APC), highlighted Samsung’s open collaboration with industry leaders, such as Google and Microsoft, as a way to strengthen shared security research, interoperability and ecosystem-wide protection.
Baik was addressing a panel of global experts as part of Samsung’s Tech Forum series at CES 2026. The panel explored how, as intelligence becomes distributed across phones, TVs, and home appliances, security must evolve.
In the session, Samsung highlighted its Knox security platform — which now protects billions of devices from the chipset level up — as well as Knox Matrix, a cross-device security framework that enables products to authenticate and protect one another.
“Trust in AI starts with security that’s proven, not promised,” said Shin Baik while speaking at a session titled “In Tech We Trust? Rethinking Security & Privacy in the AI age”.
For more than a decade, Samsung Knox has provided a deeply embedded security platform designed to protect sensitive data at every layer. But trust goes beyond a single device — it requires an ecosystem that protects itself. With Knox Matrix, devices continuously authenticate and monitor one another, so each device acts as a shield for the rest, creating a resilient, secure environment users can rely on.
Allie Miller, CEO of Open Machine and one of the panellists, highlighted the importance of transparency for users, including clear visibility into where AI models run, how data is used and explicit labels that show what is powered by AI and what is not.
Zack Kass, Global AI Advisor at ZKAI Advisory and former Head of Go-To-Market at OpenAI, said misinformation and misuse present real challenges.
“For every risk, there is also a countermeasure, and technology itself will play a critical role in mitigating AI’s downsides,” he said. (PTI)