
Hardware, Machine Learning & AI News
AI cloud infrastructure gets faster and greener: NPU core improves inference performance by over 60%
The latest generative AI models, such as OpenAI's GPT-4 and Google's Gemini 2.5, require not only high memory bandwidth but also large memory capacity. This is why companies that operate generative AI cloud services, such as Microsoft and Google, purchase hundreds of thousands of NVIDIA GPUs.
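A rough back-of-the-envelope calculation shows why capacity, not just bandwidth, forces operators to buy so many accelerators. The parameter count, byte width, and GPU size below are illustrative assumptions, not figures from the article.

```python
# Rough memory-footprint estimate for serving a large language model.
# The 175B parameter count and FP16 precision are assumptions chosen
# for illustration, not figures reported in the article.

PARAMS = 175e9          # assumed model size: 175 billion parameters
BYTES_PER_PARAM = 2     # FP16/BF16 weights: 2 bytes each

weights_gb = PARAMS * BYTES_PER_PARAM / 1e9
print(f"Weights alone: ~{weights_gb:.0f} GB")   # ~350 GB

# A single 80 GB GPU cannot hold this, so the model must be sharded
# across several accelerators; KV-cache for long contexts adds more.
GPU_MEMORY_GB = 80
print(f"Minimum GPUs just for weights: {weights_gb / GPU_MEMORY_GB:.1f}")
```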
Hardware security tech can hide and reveal encryption keys on demand using 3D flash memory
Seoul National University College of Engineering announced that a research team has developed a new hardware security technology based on commercially available 3D NAND flash memory (V-NAND flash memory).
Engineers create first AI model specialized for chip design language
Researchers at NYU Tandon School of Engineering have created VeriGen, the first specialized artificial intelligence model successfully trained to generate Verilog code, the hardware description language used to specify how a chip's circuitry functions.
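To illustrate how such a code-generation model is typically used, the sketch below prompts a causal language model with a Verilog module header and lets it complete the body. The checkpoint name is a placeholder, since the article does not say how VeriGen is distributed; the Hugging Face `transformers` calls themselves are standard.

```python
# Minimal sketch of prompting a code LLM to complete Verilog.
# The checkpoint name is hypothetical, not VeriGen's actual release.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "example-org/verilog-codegen"  # placeholder checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Prompt with a module header; the model completes the circuit body.
prompt = "module counter(input clk, input rst, output reg [7:0] count);\n"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```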
AI models shrink to fit tiny devices, enabling smarter IoT sensors
Artificial intelligence is computationally and energy-intensive, which poses a challenge for the Internet of Things (IoT), where small, embedded sensors must make do with limited computing power, little memory and small batteries.
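Shrinking a model for such hardware usually means techniques like pruning or quantization. The sketch below shows generic magnitude pruning as one standard approach; the article does not specify which method the researchers used.

```python
# Illustrative magnitude pruning: zero out the smallest weights so the
# model can be stored sparsely on a memory-constrained device. This is
# a generic shrinking technique, not the specific method in the article.
import numpy as np

def prune_by_magnitude(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero the fraction `sparsity` of weights with smallest magnitude."""
    threshold = np.quantile(np.abs(weights), sparsity)
    return np.where(np.abs(weights) < threshold, 0.0, weights)

w = np.random.randn(128, 128).astype(np.float32)
pruned = prune_by_magnitude(w, sparsity=0.9)   # keep only the top 10%
nonzero = np.count_nonzero(pruned)
print(f"nonzero weights: {w.size} -> {nonzero} ({nonzero / w.size:.0%})")
```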
Selfies could one day be stored on DNA strands
When it comes to storing images, DNA strands could be a sustainable, stable alternative to hard drives. Researchers at EPFL are developing a new image compression standard designed specifically for this emerging technology.
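The basic appeal of DNA is density: each of the four bases can encode two bits. The sketch below shows this naive byte-to-nucleotide mapping as a baseline; real DNA storage codes, including the EPFL work, add sequence constraints and error correction on top.

```python
# Naive mapping of bytes onto DNA bases: each base encodes 2 bits, so
# one byte becomes 4 nucleotides. Illustrative baseline only; it is not
# the EPFL compression standard described in the article.
BASES = "ACGT"  # A=00, C=01, G=10, T=11

def bytes_to_dna(data: bytes) -> str:
    out = []
    for byte in data:
        for shift in (6, 4, 2, 0):          # four 2-bit chunks, MSB first
            out.append(BASES[(byte >> shift) & 0b11])
    return "".join(out)

def dna_to_bytes(strand: str) -> bytes:
    out = bytearray()
    for i in range(0, len(strand), 4):
        byte = 0
        for base in strand[i:i + 4]:
            byte = (byte << 2) | BASES.index(base)
        out.append(byte)
    return bytes(out)

payload = b"selfie"
strand = bytes_to_dna(payload)
assert dna_to_bytes(strand) == payload
print(strand)  # 4 bases per byte, e.g. "CTAT..." for b"s"
```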
New framework reduces memory usage and boosts energy efficiency for large-scale AI graph analysis
Researchers at the Institute of Science Tokyo, Japan, have developed BingoCGN, a scalable and efficient graph neural network accelerator that enables real-time inference on large-scale graphs through graph partitioning. The framework combines a cross-partition message quantization technique with a novel training algorithm to significantly reduce memory demands and increase computational and energy efficiency.
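Cross-partition message quantization is only named in the teaser, but the general idea is straightforward: feature vectors that must travel between graph partitions are sent at reduced precision. The sketch below is a generic stand-in under our own assumptions, not BingoCGN's actual scheme.

```python
# Illustrative sketch: quantize node-feature "messages" that cross a
# graph-partition boundary so inter-partition traffic shrinks from
# float32 to int8. A generic stand-in, not the BingoCGN technique.
import numpy as np

def quantize_messages(msgs: np.ndarray):
    """Per-message int8 quantization with one scale factor per row."""
    scales = np.abs(msgs).max(axis=1, keepdims=True) / 127.0
    scales[scales == 0] = 1.0                 # avoid division by zero
    q = np.round(msgs / scales).astype(np.int8)
    return q, scales

def dequantize_messages(q: np.ndarray, scales: np.ndarray) -> np.ndarray:
    return q.astype(np.float32) * scales

# Messages leaving partition A for partition B: 1000 nodes, 64 features.
msgs = np.random.randn(1000, 64).astype(np.float32)
q, scales = quantize_messages(msgs)
print("traffic: %d KB -> %d KB" % (msgs.nbytes // 1024,
                                   (q.nbytes + scales.nbytes) // 1024))
```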
3D chip stacking method created to overcome traditional semiconductor limitations
Researchers have developed a novel power supply technology for 3D-integrated chips, built for a three-dimensionally stacked computing architecture in which processing units are placed directly above stacks of dynamic random access memory (DRAM).
New all-silicon computer vision hardware advances in-sensor visual processing technology
Researchers at the University of Massachusetts Amherst have advanced computer vision with new silicon-based hardware that can both capture and process visual data in the analog domain. Their work, described in the journal Nature Communications, could ultimately benefit large-scale, data-intensive and latency-sensitive computer vision tasks.
Wafer-scale accelerators could redefine AI
A technology review paper published by UC Riverside engineers in the journal Device explores the promise of a new type of computer chip that could reshape the future of artificial intelligence while being more environmentally friendly.
Tiny receiver chip uses stacked capacitors to block interference in 5G IoT devices
MIT researchers have designed a compact, low-power receiver for 5G-compatible smart devices that is about 30 times more resilient to harmonic interference than some traditional wireless receivers.