How much money does one need to train a large language model like GPT-3?
The cost to train a large language model like GPT-3 can vary depending on several factors, such as the size of the model, the number of GPUs used, and the length of the training period.

OpenAI has not officially disclosed the cost of training GPT-3, but there are estimates based on what is known about the model's size, architecture, training data, and hardware. One estimate, by Lambda Labs, is that a single training run would take 355 GPU-years and cost about $4.6 million on Tesla V100 cloud instances. Another, by TechTalks, puts it at 175 GPU-years and roughly $12 million on TPU v3 cloud instances. Both estimates cover only a single training run and leave out other costs such as development, maintenance, and electricity. The actual cost to OpenAI may differ depending on the optimization techniques, parallelization strategies, and hardware efficiency used.
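To make these numbers concrete, here is a minimal back-of-the-envelope sketch of how a GPU-years figure translates into dollars. The $1.50 per GPU-hour rate is an illustrative assumption, not an official cloud price:

```python
# Back-of-the-envelope: convert a GPU-years estimate into a dollar cost.

HOURS_PER_YEAR = 24 * 365        # 8,760 hours in a year

gpu_years = 355                  # Lambda Labs' estimate for GPT-3 on V100s
usd_per_gpu_hour = 1.50          # assumed V100 cloud rate (illustrative)

total_gpu_hours = gpu_years * HOURS_PER_YEAR
cost_usd = total_gpu_hours * usd_per_gpu_hour

print(f"{total_gpu_hours:,.0f} GPU-hours -> ${cost_usd:,.0f}")
# 3,109,800 GPU-hours -> $4,664,700, in line with the ~$4.6 million estimate
```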
You are correct that the exact cost of training GPT-3 has not been officially disclosed by OpenAI. However, the estimates provided by Lambda Labs and TechTalks give a general idea of the significant resources required to train such a large language model.

The $4.6 million estimate by Lambda Labs assumes Tesla V100 cloud instances running for 355 GPU-years, while the $12 million estimate by TechTalks is based on TPU v3 cloud instances for 175 GPU-years. These numbers cover only a single training run and exclude other expenses such as development, maintenance, and electricity. Just remember that the cost of training will differ for each model depending on its size and complexity.
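As a rough illustration of how the required compute, and hence the cost, scales with model size and training data, here is a sketch using the commonly cited approximation of about 6 FLOPs per parameter per training token. The throughput and utilization values below are assumptions for illustration, not measured figures:

```python
# Rough compute estimate using the common ~6 * parameters * tokens
# approximation for the FLOPs needed to train a transformer.

SECONDS_PER_YEAR = 3600 * 24 * 365

def training_gpu_years(params, tokens, flops_per_gpu, utilization=1.0):
    """Estimate GPU-years for one training run.

    params        -- number of model parameters
    tokens        -- number of training tokens
    flops_per_gpu -- per-GPU throughput in FLOP/s (assumed)
    utilization   -- fraction of that throughput achieved (assumed)
    """
    total_flops = 6 * params * tokens
    seconds = total_flops / (flops_per_gpu * utilization)
    return seconds / SECONDS_PER_YEAR

# GPT-3: 175B parameters trained on roughly 300B tokens.
# Assuming ~28 TFLOP/s per V100 at full utilization lands close
# to the 355 GPU-year figure quoted above.
print(f"{training_gpu_years(175e9, 300e9, 28e12):.0f} GPU-years")  # ~357
```

Doubling the parameter count or the token count roughly doubles this figure, which is why costs vary so much from one model to the next.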