
April 29, 2024

Intel

Intel Xeon, Core™ Ultra and AI PC Accelerate GenAI Workloads

Intel has validated its AI product portfolio for the first Meta Llama 3 8B and 70B models across Intel® Gaudi® accelerators, Intel® Xeon® processors, Intel® Core™ Ultra processors and Intel® Arc™ graphics.

As part of its mission to bring AI everywhere, Intel invests in the software and AI ecosystem to ensure that its products are ready for the latest innovations in the dynamic AI space. In the data center, Intel Gaudi and Intel Xeon processors with Intel® Advanced Matrix Extensions (Intel® AMX) acceleration give customers options to meet dynamic and wide-ranging requirements.

Intel Core Ultra processors and Intel Arc graphics products provide both a local development vehicle and deployment across millions of devices, with support for comprehensive software frameworks and tools, including PyTorch and Intel® Extension for PyTorch used for local research and development, and the OpenVINO™ toolkit for model development and inference.
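As a rough illustration of the OpenVINO inference path named above, here is a minimal sketch that runs a Llama 3 model locally through the OpenVINO backend of Hugging Face Optimum Intel (optimum-intel). The model id, prompt, and generation length are assumptions made for this example, not Intel's published test configuration, and the Meta Llama 3 checkpoints are gated and require access approval on Hugging Face.

from transformers import AutoTokenizer
from optimum.intel import OVModelForCausalLM

# Assumed model id for illustration; access to the gated Llama 3 weights is required.
model_id = "meta-llama/Meta-Llama-3-8B-Instruct"

# export=True converts the PyTorch checkpoint to OpenVINO IR at load time.
model = OVModelForCausalLM.from_pretrained(model_id, export=True)
tokenizer = AutoTokenizer.from_pretrained(model_id)

prompt = "Summarize what an AI PC is in one sentence."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))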

Intel’s initial testing and performance results for Llama 3 8B and 70B models use open source software, including PyTorch, DeepSpeed, Intel Optimum Habana library and Intel Extension for PyTorch to provide the latest software optimizations.
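As a hedged sketch of what an optimized PyTorch path can look like, the snippet below loads a Llama 3 model with Hugging Face Transformers in bfloat16 and applies Intel Extension for PyTorch optimizations before generation. The model id and prompt are assumptions for illustration only; the DeepSpeed and Optimum Habana parts of Intel's test stack are not shown.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
import intel_extension_for_pytorch as ipex

# Assumed model id for illustration; the Llama 3 weights are gated on Hugging Face.
model_id = "meta-llama/Meta-Llama-3-8B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)
model.eval()

# Apply Intel Extension for PyTorch optimizations; the bfloat16 path can use
# Intel AMX instructions on supported Xeon processors.
model = ipex.optimize(model, dtype=torch.bfloat16)

inputs = tokenizer("The main benefit of on-device AI is", return_tensors="pt")
with torch.no_grad(), torch.cpu.amp.autocast(dtype=torch.bfloat16):
    output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))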

Intel Xeon processors address demanding end-to-end AI workloads, and Intel invests in optimizing LLM results to reduce latency. Intel® Xeon® 6 processors with Performance-cores (code-named Granite Rapids) show a 2x improvement on Llama 3 8B inference latency compared with 4th Gen Intel® Xeon® processors and the ability to run larger language models, like Llama 3 70B, under 100ms per generated token.

Intel Core Ultra and Intel Arc Graphics deliver impressive performance for Llama 3. In an initial round of testing, Intel Core Ultra processors already generate faster than typical human reading speeds. Further, the Intel® Arc™ A770 GPU has Xe Matrix eXtensions (XMX) AI acceleration and 16GB of dedicated memory to provide exceptional performance for LLM workloads.   

All ASBIS products are sold to customers under the sales terms and conditions in effect at the time of sale. Please note that ASBIS is a wholesale distributor of computer hardware and software in Europe, Central Asia, and Africa. The company works with B2B customers such as resellers, retailers, e-tailers, system administrators, and OEMs. ASBIS does not supply products directly to end consumers. Visit the resellers section of the ASBIS website to find the IT store nearest to you.