Tesla has unveiled the D1, a dedicated processor designed to train artificial intelligence systems in data centers. The announcement came at Tesla's AI Day event.
The D1 is built on a 7-nanometer process and delivers 362 teraflops at reduced precision while drawing 400 watts of power. At single precision (FP32) it produces 22.6 teraflops, roughly on par with a GeForce RTX 3070 Ti. The processor will be used in the automaker's Dojo supercomputer.
The chip packs 50 billion transistors into a 645 mm² die, smaller than the NVIDIA A100 and AMD Arcturus data-center GPUs at 826 mm² and 750 mm², respectively. The D1 contains 354 training nodes, each built around a four-core 64-bit superscalar processor. The chip supports FP32, BFP16, CFP8, INT32, INT16, and INT8 instructions.
Tesla highlights the D1's scalability: multiple processors can be interconnected into large computing systems at speeds of up to 10 TB/s. The company placed 25 chips on a single module and combined 120 modules into a system, achieving more than 1 exaflops of performance.
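The exaflops claim can be sanity-checked with simple arithmetic from the figures quoted above (a back-of-the-envelope sketch; the per-chip number is the reduced-precision 362 teraflops, not the FP32 figure):

```python
# Sanity check of the quoted Dojo scaling figures:
# 362 TFLOPS per D1 chip, 25 chips per module, 120 modules per system.
D1_FLOPS = 362e12       # per-chip throughput at reduced precision
CHIPS_PER_MODULE = 25
MODULES = 120

total_flops = D1_FLOPS * CHIPS_PER_MODULE * MODULES
print(f"{total_flops / 1e18:.3f} exaflops")  # → 1.086 exaflops
```

The product works out to about 1.086 exaflops, consistent with the "more than 1 exaflops" figure Tesla cites.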
Tesla believes this approach will let it build the fastest computer for artificial intelligence and neural-network workloads. Dojo is scheduled to launch next year.