June 27 (Reuters) - OpenAI has recently begun renting
Google's artificial intelligence chips to power
ChatGPT and other products, The Information reported on Friday,
citing a person involved in the arrangement.
The move, which marks the first time OpenAI has used
non-Nvidia (NVDA) chips in a meaningful way, signals the Sam Altman-led
company's shift away from relying on backer Microsoft's (MSFT)
data centers and could boost Google's tensor processing
units (TPUs) as a cheaper alternative to Nvidia's
graphics processing units (GPUs), the report said.
As one of the largest purchasers of Nvidia GPUs, OpenAI
uses AI chips both to train models and for inference computing,
the process by which a trained AI model makes predictions
or decisions based on new information.
OpenAI hopes the TPUs, which it rents through Google Cloud,
will help lower the cost of inference, according to the report.
However, Google, an OpenAI competitor in the AI race, is not
renting its most powerful TPUs to its rival, The Information
said, citing a Google Cloud employee.
Neither OpenAI nor Google immediately responded to
Reuters requests for comment.
OpenAI planned to add Google Cloud service to meet its
growing need for computing capacity, Reuters exclusively
reported earlier this month, marking a surprising collaboration
between two prominent competitors in the AI sector.
For Google, the deal comes as it expands external
availability of its in-house TPUs, which were historically
reserved for internal use. That push has helped Google win
customers including Big Tech player Apple (AAPL) as well as
startups like Anthropic and Safe Superintelligence, two OpenAI
competitors launched by former OpenAI leaders.