News & Features
In the news | MSPoweruser
Meet Microsoft DeepSpeed, a new deep learning library that can train massive 100-billion-parameter models
Microsoft Research today announced DeepSpeed, a new deep learning optimization library that can train massive 100-billion-parameter models. In AI, larger natural language models generally deliver better accuracy, but training them is time-consuming and…
In the news | VentureBeat
Microsoft trains world’s largest Transformer language model
Microsoft AI & Research today shared what it calls the largest Transformer-based language generation model ever and open-sourced a deep learning library named DeepSpeed to make distributed training of large models easier.
In the news | InfoWorld
Microsoft speeds up PyTorch with DeepSpeed
Microsoft has released DeepSpeed, a new deep learning optimization library for PyTorch that is designed to reduce memory use and train models with better parallelism on existing hardware.
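For readers curious what "drop-in" DeepSpeed integration with PyTorch looks like, the library is driven by a JSON configuration file. The sketch below is illustrative only, not taken from the article; the specific values (batch size, ZeRO stage, fp16) are assumptions chosen to show the shape of a minimal config:

```json
{
  "train_batch_size": 32,
  "fp16": { "enabled": true },
  "zero_optimization": { "stage": 1 }
}
```

A training script then wraps an ordinary PyTorch model with `deepspeed.initialize(...)`, which returns a model engine that handles the distributed optimizer, mixed precision, and memory partitioning described in the coverage above.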
Microsoft has announced that it has integrated an optimized implementation of BERT (Bidirectional Encoder Representations from Transformers) with the open source ONNX Runtime. Developers can take advantage of this implementation for scalable inferencing of BERT at an affordable cost.
In the news | WinBuzzer
Microsoft Open Sources BERT for ONNX Runtime
In December, Microsoft open sourced its ONNX Runtime inference engine. Now, the company says it also open-sourced an optimized version of BERT, a natural language model from Google, for ONNX.
In the news | VentureBeat
Microsoft open-sources ONNX Runtime model to speed up Google’s BERT
Microsoft Research AI today said it plans to open-source an optimized version of Google’s popular BERT natural language model designed to work with the ONNX Runtime inference engine. Microsoft uses the same model to lower latency for BERT when…
In the news | ZDNet
Microsoft makes performance, speed optimizations to ONNX machine-learning runtime available to developers
Microsoft is open sourcing and integrating some updates it has made to deep-learning models used for natural-language processing. On January 21, the company announced it is making these optimizations available to developers by integrating them into the ONNX Runtime.
In the news | Microsoft Open Source Blog
Microsoft open sources breakthrough optimizations for transformer inference on GPU and CPU
One of the most popular deep learning models used for natural language processing is BERT (Bidirectional Encoder Representations from Transformers). Due to the significant computation required, inferencing BERT at high scale can be extremely costly and may not even be possible…
In the news | Search Engine Journal
Bing is Now Utilizing BERT at a Larger Scale Than Google
Bing revealed today that it began using BERT in search results before Google did, and that it uses the model at a larger scale. Google’s use of BERT currently affects 10% of search results in the US,…