Fast inference engine for Transformer models
oneAPI Deep Neural Network Library (oneDNN)
High-performance spiking neural networks library, written from scratch, with C++ and Python interfaces.
LLM inference engine built with SYCL + oneDNN
Graiphic Toolkits for LabVIEW provide advanced AI, GPU, and graph-oriented computing capabilities directly inside LabVIEW. Built on ONNX Runtime, they integrate the SOTA, Accelerator, and Deep Learning Toolkits for high-performance execution across CPUs, GPUs, and edge devices.
Edge-optimized neural style transfer using Intel oneDNN
Source of oneAPI Deep Neural Network Library (oneDNN)
High-level Rust bindings to the oneDNN C API
Demonstrates remote command-and-control techniques using chat platforms, for security training and threat simulation.
A C++ "more than perfect" library: a deep learning playground using Intel's oneDNN and SYCL.