Company News

The company, known for making flash controllers and SSDs, showed demonstrations at CES 2026 of its aiDAPTIV+ system working on laptops.

Edited by Brad Randall, Broadband Communities

Phison Electronics on Tuesday said it has expanded the capabilities of its aiDAPTIV+ technology to let more powerful AI workloads run on everyday PCs by using high-capacity NAND flash as an extension of GPU memory. The company, known for making flash controllers and SSDs, demonstrated the system at CES 2026 on laptops, desktops and mini‑PCs and said the approach reduces the amount of DRAM required to run large language models.

Phison says aiDAPTIV+ moves parts of a model’s working memory off DRAM and onto flash, which the company argues can lower costs and let devices with integrated GPUs handle models that would otherwise need much larger VRAM.
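
To give a sense of how that kind of tiering can work in broad strokes, the illustrative Python sketch below (not Phison's implementation; the class, file layout and eviction policy are invented for the example) keeps only a small budget of layer weights resident in DRAM and memory-maps the rest from disk files standing in for the NAND tier, loading each layer on demand as a toy forward pass touches it.

```python
import os
import tempfile
from collections import OrderedDict

import numpy as np


class FlashTieredWeights:
    """Illustrative only: keep a few "hot" layers in DRAM and memory-map the
    rest from files on disk, which stand in for the NAND flash tier."""

    def __init__(self, num_layers, layer_shape, ram_budget_layers=2):
        self.dir = tempfile.mkdtemp(prefix="flash_tier_")
        self.ram_budget = ram_budget_layers
        self.hot = OrderedDict()   # layer index -> array currently resident in DRAM
        self.paths = {}
        for i in range(num_layers):
            # Spill every layer to the "flash" tier up front.
            weights = np.random.rand(*layer_shape).astype(np.float16)
            path = os.path.join(self.dir, f"layer_{i}.npy")
            np.save(path, weights)
            self.paths[i] = path

    def get(self, i):
        """Fetch layer i, promoting it to DRAM and evicting the oldest resident
        layer once the RAM budget is exceeded."""
        if i not in self.hot:
            self.hot[i] = np.load(self.paths[i], mmap_mode="r")
            if len(self.hot) > self.ram_budget:
                self.hot.popitem(last=False)   # drop the least recently loaded layer
        return self.hot[i]


# Toy forward pass: only ~2 layers' worth of weights are ever resident in DRAM.
tiered = FlashTieredWeights(num_layers=8, layer_shape=(256, 256))
x = np.random.rand(1, 256).astype(np.float16)
for layer in range(8):
    x = x @ tiered.get(layer)
print("output shape:", x.shape, "| layers resident in DRAM:", len(tiered.hot))
```

The point of the sketch is the memory accounting rather than performance: however large the model, only the RAM-budgeted layers occupy DRAM at any moment, which is the property Phison says aiDAPTIV+ exploits at much larger scale.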

“By expanding GPU memory with high-capacity, flash-based architecture in aiDAPTIV+, we offer everyone, from consumers and SMBs to large enterprises, the ability to train and run large-scale models on affordable hardware,” Michael Wu, president and GM of Phison US, said in a statement.

Phison claims this lets a 120‑billion‑parameter model run with 32 GB of DRAM instead of the roughly 96 GB traditionally required.


The company and partners also released early test results suggesting speed and efficiency gains. Phison says storing previously used tokens in flash avoids re-computation during inference and can accelerate response times by up to tenfold while cutting power use; lab tests reportedly showed substantial improvements in “time to first token” on notebooks.
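
To make concrete what reusing previously computed token state can mean, here is a simplified, hypothetical Python sketch (the cache format, file names and functions are invented and are not Phison's design) that writes a prompt's prefill state to disk, standing in for flash, and reloads it on a repeat request instead of recomputing it:

```python
import hashlib
import os
import pickle
import tempfile

CACHE_DIR = tempfile.mkdtemp(prefix="kv_cache_")   # stands in for the flash tier


def prompt_key(prompt: str) -> str:
    """Content-addressed key so an identical prompt hits the same cache entry."""
    return hashlib.sha256(prompt.encode()).hexdigest()


def load_cached_state(prompt: str):
    """Reload previously computed per-token state from the flash tier, if any."""
    path = os.path.join(CACHE_DIR, prompt_key(prompt) + ".pkl")
    if os.path.exists(path):
        with open(path, "rb") as f:
            return pickle.load(f)
    return None


def store_state(prompt: str, state) -> None:
    """Spill freshly computed state to the flash tier for later reuse."""
    with open(os.path.join(CACHE_DIR, prompt_key(prompt) + ".pkl"), "wb") as f:
        pickle.dump(state, f)


def fake_prefill(prompt: str):
    """Stand-in for the expensive prefill step that builds per-token state."""
    return [hash(token) for token in prompt.split()]


def generate(prompt: str) -> int:
    state = load_cached_state(prompt)
    if state is None:               # cache miss: pay the full prefill cost once
        state = fake_prefill(prompt)
        store_state(prompt, state)
    return len(state)               # placeholder for the actual decoding step


print(generate("summarize the meeting notes"))   # computes and stores the state
print(generate("summarize the meeting notes"))   # reloads it from the flash tier
```

In a real system the cached state would be the model's key-value tensors rather than a pickled list, but the control flow is the same: a cache hit replaces an expensive recomputation with a read from the flash tier, which is the kind of saving Phison describes for time to first token.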

Collaborators include Acer, which Phison says ran a 120B-class model variant (gpt-oss-120b) on an Acer laptop configured with 32 GB of memory.

“This can significantly enhance the user experience interacting with on-device Agentic AI,” said Mark Yang of Acer’s compute software group.

Phison framed the effort as a way to democratize AI work that normally requires expensive workstations or cloud servers. The firm said that, in lab tests, a notebook pairing aiDAPTIV+ with Intel’s new Core Ultra Series 3 processors and integrated Arc GPUs could fine-tune a 70‑billion‑parameter model, a task the company says would previously have needed machines costing “up to ten times more.”

Phison and several OEMs, including Corsair, MSI, ASUS and Emdoor, are demonstrating the technology in different form factors at CES, showing inferencing and meeting‑notes summarization use cases.

The company offered a caution in its release: many of the products and features described remain in development and timelines for availability could change. Phison also emphasized the demonstrations are the result of partner engineering work and internal lab tests; independent benchmarking and real‑world deployments will be needed to confirm whether the flash‑backed approach consistently matches the performance and latency demands of production AI workloads.

AI tools from Noah Wire Services have been used to help generate this report.
