Product Description
- A USB accessory that brings machine learning inferencing to existing systems. Works with Raspberry Pi and other Linux systems
- Performs high-speed ML inferencing: The on-board Edge TPU coprocessor is capable of performing 4 trillion operations (tera-operations) per second (TOPS), using 0.5 watts for each TOPS (2 TOPS per watt). For example, it can execute state-of-the-art mobile vision models such as MobileNet v2 at 400 fps, in a power-efficient manner
- Works with Debian Linux: Connects to any Debian-based Linux system with the included USB 3.0 Type-C cable
- Supports TensorFlow Lite: No need to build models from the ground up. TensorFlow Lite models can be compiled to run on the Edge TPU
- Supports AutoML Vision Edge: Easily build and deploy fast, high-accuracy custom image classification models to your device with AutoML Vision Edge
- ML accelerator: Google Edge TPU coprocessor
- Connector: USB 3.0 Type-C (data/power)
- Dimensions: 65 mm x 30 mm
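As a quick sanity check on the throughput and power figures quoted above, the relationship between the headline numbers can be sketched as simple arithmetic (the function name is illustrative, not part of any Coral API):

```python
# Back-of-envelope check of the quoted figures:
# 4 TOPS at an efficiency of 2 TOPS per watt implies 2 W total,
# which is the same as 0.5 W for each TOPS.
def power_draw_watts(tops: float, tops_per_watt: float) -> float:
    """Total power implied by a throughput figure and an efficiency figure."""
    return tops / tops_per_watt

total_watts = power_draw_watts(4.0, 2.0)
watts_per_tops = total_watts / 4.0
print(total_watts)     # 2.0
print(watts_per_tops)  # 0.5
```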
The Coral USB Accelerator brings powerful ML inferencing capabilities to existing Linux systems.
Featuring the Edge TPU, a small ASIC designed and built by Google, the USB Accelerator provides high-performance ML inferencing at a low power cost over a USB 3.0 interface. For example, it can execute state-of-the-art mobile vision models such as MobileNet v2 at 100+ fps, in a power-efficient manner. This allows you to add fast ML inferencing to your embedded AI devices in a power-efficient and privacy-preserving way.
Models are developed in TensorFlow Lite and then compiled to run on the USB Accelerator.
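The compile step can be sketched with the Edge TPU Compiler command-line tool on a Debian-based host; the model filename below is a placeholder for your own fully-quantized TensorFlow Lite model:

```shell
# Compile a quantized TensorFlow Lite model for the Edge TPU.
# "model.tflite" is a placeholder; substitute your own model file.
edgetpu_compiler model.tflite
# The compiler writes a model_edgetpu.tflite file alongside the input.
```

Only models quantized to 8-bit integers can be compiled for the Edge TPU; floating-point models must be quantized first.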
Edge TPU Key Benefits
— High speed TensorFlow Lite inferencing
— Low power
— Small footprint
Coral by Google is a platform for building devices with local AI.
Features
— Google Edge TPU ML accelerator coprocessor
— USB 3.0 Type-C socket
— Supports Debian Linux on host CPU
— Models are built using TensorFlow: Fully supports MobileNet and Inception architectures, though custom architectures are possible
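A minimal sketch of running a compiled MobileNet classifier through the PyCoral library follows; the model and image filenames are placeholders, and an attached USB Accelerator is required:

```python
# Minimal image-classification sketch using the PyCoral library
# (pip package "pycoral"). Requires an attached USB Accelerator;
# the file names below are placeholders.
from PIL import Image
from pycoral.adapters import classify, common
from pycoral.utils.edgetpu import make_interpreter

# Load a model that was compiled for the Edge TPU
# (note the conventional *_edgetpu.tflite suffix).
interpreter = make_interpreter('mobilenet_v2_edgetpu.tflite')
interpreter.allocate_tensors()

# Resize the image to the model's input dimensions and set it as input.
image = Image.open('parrot.jpg').resize(common.input_size(interpreter),
                                        Image.LANCZOS)
common.set_input(interpreter, image)

# Run inference on the Edge TPU and read back the top prediction.
interpreter.invoke()
for c in classify.get_classes(interpreter, top_k=1):
    print(c.id, c.score)
```

The same TensorFlow Lite model runs unchanged on the host CPU if no Accelerator is present; only the delegate that `make_interpreter` attaches differs.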
Specs
— Edge TPU ML accelerator: ASIC designed by Google that provides high performance ML inferencing for TensorFlow Lite models
— Arm 32-bit Cortex-M0+ microprocessor (MCU): Up to 32 MHz, 16 KB flash memory with ECC, 2 KB RAM
— Connections: USB 3.1 (Gen 1) port and cable (SuperSpeed, 5 Gbps transfer speed); included cable is USB Type-C to Type-A