The DeepSeek drama has been making waves in Silicon Valley, where the Chinese AI lab's ChatGPT-like model is causing a stir. The model was built using a small fraction of the computing power consumed by US labs like OpenAI, forcing tech leaders to question the industry's assumption that they need to spend vast sums securing enough energy and compute to power their AI advancements.
American tech leaders are trying to shift the narrative to make DeepSeek look like the villain, with OpenAI and Microsoft investigating evidence that DeepSeek used outputs from OpenAI's models to build its competitor, in violation of OpenAI's terms of service. However, OpenAI's own practices have been called into question: the company is being sued by content creators for training its large language models on copyrighted material.
The irony of the situation has not been lost on some, with tech figures pointing out that distilling is a common practice in the AI industry. “I’d be surprised if DeepSeek hadn’t used it,” said Lutz Finger, senior visiting lecturer at Cornell University. “Technically, it’s easy to do, and if done well, it is easy to disguise and avoid detection.”
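Distillation, the practice Finger refers to, means training a smaller "student" model to mimic a larger "teacher" model's output distributions rather than learning from labeled data alone. A minimal sketch of the core idea, using NumPy (the function names and temperature value are illustrative, not drawn from any lab's actual pipeline):

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Convert raw model scores into a probability distribution.

    A higher temperature 'softens' the distribution, exposing more of
    the teacher's relative preferences between classes.
    """
    z = np.asarray(logits, dtype=float) / temperature
    e = np.exp(z - z.max())  # subtract max for numerical stability
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence between the teacher's softened outputs and the student's.

    Minimizing this over many prompts pushes the student to reproduce
    the teacher's behavior -- which is why, as Finger notes, distillation
    is hard to detect: only model outputs are needed, not weights or data.
    """
    p = softmax(teacher_logits, temperature)  # teacher's "soft targets"
    q = softmax(student_logits, temperature)
    return float(np.sum(p * np.log(p / q)))
```

The loss is zero when the student exactly matches the teacher and grows as their output distributions diverge; a training loop would simply minimize it across a large set of queries to the teacher.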
Some are reading OpenAI’s actions as a sour grapes moment, while others see it as an attempt to establish rules in an unregulated industry. “Regardless of the specifics here, we’re entering a phase where the AI community will have to define clearer norms around what constitutes fair use versus unauthorized replication,” said Zack Kass, an AI consultant and former OpenAI go-to-market lead.