DGX Cloud is an AI supercomputing cloud service; the "super" in the name alone tells you its ambition. It is designed to handle heavy computing workloads with ease, and it also gives Nvidia a new service to sell, extending the company's scale and influence. This time, Nvidia's big move is to bring ChatGPT-class models to the cloud and open them to the public. DGX Cloud comes with support from NVIDIA experts throughout the AI development process. The price, however, is not pretty: enterprise customers can rent it for $36,999 per month.

Chips, especially GPUs, CPUs, FPGAs, and custom high-performance chips, are the core underlying power behind ChatGPT and other AI chatbots.

DGX Cloud: Bringing AI to Every Enterprise

As an early practitioner in the computing industry, Jen-Hsun Huang has witnessed its development for 40 years. Technology keeps getting closer to everyone, yet the threshold of computer programming keeps rising, forming an invisible technical gap: most ordinary people drift further and further away from programming, and many traditional industries never truly use digital technology to enjoy the benefits of advances in computing power.

Running models with such a huge number of parameters means ChatGPT needs enormous computing power for both training and deployment, and that computing power ultimately depends on the underlying chips. OpenAI has disclosed that thousands of NVIDIA V100 GPUs were used to train the previous-generation GPT-3, while research firm TrendForce reports that GPT-3 was trained on around 20,000 NVIDIA A100 GPUs and that commercializing ChatGPT will require more than 30,000 GPUs.
OpenAI launched GPT-1 in 2018 with just under 120 million parameters, while GPT-3, the last model whose parameter count OpenAI announced, has 175 billion. Some experts believe ChatGPT and GPT-4 may have fewer parameters, but most voices in the industry believe that more powerful models come with larger parameter counts and consume more computing power. Nvidia is optimistic about exactly this trend and has thrown its weight behind developing GPU products better suited to AI. Indeed, Huang has repeatedly said that he saw the growth potential of the AI industry, and the decisive role of the GPU in AI, a decade ago. NVIDIA launching a more efficient computing solution at this moment undoubtedly addresses a pressing problem for the industry. In a sense, computational cost has become the core issue holding back generative AI today: OpenAI has burned billions, if not tens of billions, of dollars on it, and Microsoft has never opened the new Bing to the wider public for cost reasons, even limiting the number of conversations users can have per day.

The highlight of the whole GTC conference, whose presentation focused almost entirely on artificial intelligence, was the launch of a GPU dedicated to ChatGPT-style workloads: the H100 NVL with dual-GPU NVLink. The benefits are simple and straightforward: faster and cheaper.

How much faster: a standard server with four pairs of H100 GPUs connected by NVLink can process GPT-3 workloads 10 times faster than the HGX A100. According to NVIDIA, the H100's combined technical innovations can speed up large language models by as much as 30 times.

How much cheaper: the H100 can reduce the cost of processing large language models by an order of magnitude.
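The parameter and GPU figures above can be sanity-checked with a rough back-of-envelope estimate. The sketch below uses the common approximation that training compute is about 6 FLOPs per parameter per token; the token count (roughly 300 billion, reported for GPT-3), the A100's peak throughput, and the 30% utilization figure are assumptions for illustration, not numbers from this article.

```python
# Back-of-envelope estimate of GPT-3 training compute, assuming the
# widely used approximation: total FLOPs ≈ 6 * parameters * training tokens.
# Peak throughput and utilization below are illustrative assumptions.

PARAMS = 175e9            # GPT-3 parameter count (from the article)
TOKENS = 300e9            # training tokens reported for GPT-3 (assumption)
A100_PEAK_FLOPS = 312e12  # A100 BF16 dense peak, FLOPs per second (assumption)
UTILIZATION = 0.30        # assumed fraction of peak actually achieved

total_flops = 6 * PARAMS * TOKENS
effective_flops_per_gpu = A100_PEAK_FLOPS * UTILIZATION
gpu_seconds = total_flops / effective_flops_per_gpu
gpu_days = gpu_seconds / 86400

print(f"Total training compute: {total_flops:.2e} FLOPs")
print(f"Roughly {gpu_days:,.0f} A100 GPU-days at {UTILIZATION:.0%} utilization")
```

Under these assumptions the estimate lands in the tens of thousands of GPU-days, which makes clear why a fleet of the size TrendForce describes, and why per-model cost reductions like the H100's, matter so much.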
Because so many technical terms are involved, even viewers with a science and technology background may find the video a bit overwhelming, so I spent a long time figuring out how to explain it in simpler, easier-to-understand words, and to analyze what benefits each new technology could bring Nvidia in the future. After all, Wall Street institutions love companies with a steady stream of new stories, just like Tesla, where the number of stories and the stock price are often directly proportional.

At NVIDIA GTC 2023, Jen-Hsun Huang, still wearing his signature leather jacket, gave a keynote speech standing in front of the vertical green wall at NVIDIA's headquarters. At GTC, Silicon Valley's largest artificial intelligence event, Nvidia's founder and CEO showed the world his determination to march into AI, showcasing a series of heavyweight products.