Supporting ChatGPT's operation requires data centers with enormous computing power; several such facilities would be needed, putting the infrastructure investment in the tens of billions of yuan. So who is capturing this demand for compute? For OpenAI, it is naturally Microsoft. According to data from Lambda's official website, Microsoft built a distributed cluster of Nvidia V100 GPUs for OpenAI to train GPT-3. Because of the model's enormous parameter count (175 billion parameters in total), the training consumed roughly 3,640 PF-days of compute. At Lambda's Nvidia Tesla V100 GPU-instance price of $1.50 per hour, a complete training run of GPT-3 would cost about $4.6 million. And this is precisely where cloud vendors make their pillar contribution. Recently, the investment firm a16z published an article, "Who Owns the Generative AI Platform?", which argues that almost everything in generative AI will pass through cloud-hosted GPUs or
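The training-cost estimate above can be reproduced with a quick back-of-the-envelope calculation. All figures below are assumptions taken from public reporting, not official numbers: about 3.14e23 total training FLOPs (roughly 3,640 PF-days), an assumed sustained throughput of ~28 TFLOPS per V100, and an assumed cloud price of $1.50 per GPU-hour:

```python
# Back-of-the-envelope GPT-3 training-cost estimate (Lambda-style).
# All three inputs are assumptions from public reporting.
TOTAL_FLOPS = 3.14e23        # total compute for one GPT-3 training run
V100_FLOPS = 28e12           # assumed sustained throughput of one V100
PRICE_PER_GPU_HOUR = 1.50    # assumed cloud price per V100-hour, USD

gpu_hours = TOTAL_FLOPS / V100_FLOPS / 3600   # seconds -> hours
gpu_years = gpu_hours / (24 * 365)
cost_usd = gpu_hours * PRICE_PER_GPU_HOUR

print(f"~{gpu_years:.0f} V100-years, ~${cost_usd / 1e6:.2f} million")
```

Under these assumptions the run works out to several hundred V100-years and a cost in the mid-single-digit millions of dollars, consistent with the figure cited above.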
TPUs at some point. Whether for model providers and research labs training models, for hosting companies running inference and fine-tuning, or for application companies doing both, floating-point operations per second (FLOPS) are the lifeblood of generative AI. For the first time in a very long time, progress in the most disruptive computing technology is severely limited by the sheer amount of computation required. As a result, a large share of the money in the generative AI market ultimately flows to infrastructure companies. a16z estimates that the average application company spends around 20-40% of its annual revenue on inference and customized fine-tuning, a portion typically paid directly to cloud service providers.
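To make that revenue split concrete, here is a minimal sketch using a16z's rough 20-40% range; the revenue figure is hypothetical and chosen only for illustration:

```python
# Illustration of a16z's estimate that an application company spends
# roughly 20-40% of annual revenue on inference and fine-tuning,
# typically paid to cloud providers. Revenue figure is hypothetical.
annual_revenue_usd = 10_000_000   # hypothetical app-company revenue
low_share, high_share = 0.20, 0.40  # a16z's estimated range

low_spend = annual_revenue_usd * low_share
high_spend = annual_revenue_usd * high_share
print(f"Estimated annual cloud spend: ${low_spend:,.0f} to ${high_spend:,.0f}")
```

In other words, for every dollar such a company earns, twenty to forty cents flows straight back to infrastructure providers.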