DeepSeek R1-14B runs privately on nilAI as Nillion ensures AI data security

ABMedia
01-30

New technology combined with a TEE provides a secure AI interaction experience

The privacy and security of artificial intelligence have long been a central concern for users. The DeepSeek R1-14B model can now run privately inside nilAI, part of the privacy-focused blockchain project Nillion, where a Trusted Execution Environment (TEE) keeps data secure and prevents unauthorized access.

(The popular DeFAI cryptocurrency Nillion is about to be launched! Nillion demonstrates how to prevent AI agents from compromising security)

How DeepSeek R1-14B runs inside nilAI

The workflow is simple and secure, and consists of three main steps (see the sketch after this list):

  1. The user submits encrypted input: the user encrypts the input and sends it to nilAI, so the data cannot be leaked in transit.
  2. The model runs inside the TEE: the DeepSeek R1-distilled-Qwen-14B model processes the request within the trusted execution environment and generates a response to the prompt. Because the TEE isolates the computation, outside parties cannot observe it, so privacy is preserved.
  3. The answer is returned securely: the model's response is sent back to the user over an encrypted channel, keeping the data secure end to end.
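
To make the flow concrete, here is a minimal Python sketch of the three steps. It is an illustration only: the pre-shared session key, the run_model_inside_tee stand-in, and the absence of real attestation or network transport are all simplifying assumptions, not nilAI's actual API.

    # Minimal sketch of the encrypted request/response flow described above.
    # Everything here is illustrative: a real deployment negotiates keys via
    # remote attestation and talks to the enclave over the network.
    from cryptography.fernet import Fernet

    # Assumed: client and TEE share a session key established out of band.
    session_key = Fernet.generate_key()
    channel = Fernet(session_key)

    def run_model_inside_tee(prompt: str) -> str:
        # Placeholder standing in for DeepSeek R1-distilled-Qwen-14B in the enclave.
        return f"[model answer to: {prompt}]"

    # 1. The user submits encrypted input.
    ciphertext = channel.encrypt(b"What does a trusted execution environment guarantee?")

    # 2. Inside the TEE: decrypt, run the model, encrypt the answer.
    prompt = channel.decrypt(ciphertext).decode()
    encrypted_answer = channel.encrypt(run_model_inside_tee(prompt).encode())

    # 3. Secure return: the client decrypts the response locally.
    print(channel.decrypt(encrypted_answer).decode())

In a production setup, the session key would typically be established only after the client verifies the enclave through remote attestation, so that nothing outside the verified TEE can read the prompt or the answer.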

The product manager walks through the code

nilAI product manager @davtbutler has prepared a video of under two minutes that walks through the core code behind this technology, demonstrates DeepSeek R1-distilled-Qwen-14B in operation, and compares its performance with an 8B-parameter Llama model.

https://x.com/nillionnetwork/status/1884597054321856587
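
For readers who want to reproduce a comparison like the one in the video, the snippet below times two models behind an OpenAI-compatible chat endpoint. The base URL, API key, and model identifiers are placeholders chosen for illustration; they are not taken from nilAI's documentation.

    # Hypothetical side-by-side comparison of two hosted models.
    import time
    from openai import OpenAI

    # Placeholder endpoint and key; substitute real values for your provider.
    client = OpenAI(base_url="https://nilai.example/v1", api_key="YOUR_KEY")

    prompt = "Summarize what a trusted execution environment guarantees."
    for model in ("deepseek-r1-distill-qwen-14b", "llama-3-8b"):  # placeholder model IDs
        start = time.time()
        reply = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": prompt}],
        )
        print(f"{model}: {time.time() - start:.1f}s")
        print(reply.choices[0].message.content[:200])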

A demonstration of DeepSeek R1-14B in action inside Nillion

Beyond the code walkthrough, nilAI's VP of AI and Product, @RoelNuyts, personally demonstrated DeepSeek running inside Nillion. The technology has a wide range of applications; one innovative example is the "Blind Guru," an AI assistant that lets users ask their most private questions with confidence, knowing their personal information will not be leaked.

https://x.com/nillionnetwork/status/1884597137771839801

As AI technology advances, data security is becoming ever more important. Running DeepSeek R1-14B inside nilAI's TEE gives users a more private and secure AI interaction experience and sets a new benchmark for the future development of AI technology.

Silicon Valley is facing a new AI controversy. Chinese artificial intelligence company DeepSeek has been challenging OpenAI with low-cost AI models, and news broke today that those models allegedly used data from the developer of ChatGPT. OpenAI and its key partner Microsoft are investigating whether DeepSeek improperly used OpenAI's proprietary data to train its models, which could not only violate OpenAI's terms of service but also constitute intellectual property (IP) theft.

The rise of DeepSeek raises alarms

DeepSeek has risen rapidly to become a strong competitor in the AI field, reportedly training its models at a fraction of OpenAI's cost. While OpenAI spent more than $100 million to train GPT-4, DeepSeek appears to have found ways to cut costs dramatically while still delivering competitive AI performance.

According to a Bloomberg report, Microsoft security researchers discovered in late 2024 that large amounts of data were being pulled through OpenAI developer accounts allegedly linked to DeepSeek. In other words, DeepSeek may have been systematically harvesting AI outputs generated by OpenAI, which could have greatly accelerated the training of its own models.

Is DeepSeek using OpenAI's own technology against it?

At the core of the controversy is a technique called "distillation," a common method in AI training in which a smaller model learns by mimicking the outputs of a larger, more advanced model. OpenAI allows developers to integrate ChatGPT into their applications through its API, but using the API's outputs to train a competing model may constitute intellectual property infringement.

The Financial Times reported that OpenAI claims to have evidence that DeepSeek used its models via distillation, though it has not yet made that evidence public. The incident has sparked critical discussion of the ethics of AI model development, particularly since OpenAI itself trained its models on large amounts of web data, often without the explicit consent of content creators.
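
For context, the sketch below shows distillation in its textbook form: a small student network is trained to match the softened output distribution of a frozen teacher. The toy linear models, temperature, and random inputs are placeholders that illustrate the technique, not any particular lab's pipeline.

    # Toy knowledge-distillation loop: the student mimics the teacher's outputs.
    import torch
    import torch.nn.functional as F

    vocab, temperature = 1000, 2.0
    teacher = torch.nn.Linear(128, vocab)   # stand-in for a large, frozen model
    student = torch.nn.Linear(128, vocab)   # smaller model being trained
    optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)

    for _ in range(100):
        x = torch.randn(32, 128)            # stand-in for a batch of inputs
        with torch.no_grad():
            teacher_probs = F.softmax(teacher(x) / temperature, dim=-1)
        student_log_probs = F.log_softmax(student(x) / temperature, dim=-1)
        # KL divergence pulls the student's distribution toward the teacher's.
        loss = F.kl_div(student_log_probs, teacher_probs, reduction="batchmean")
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

In the API scenario described in the reports, an outside party would see only sampled text rather than the teacher's probability distribution, so "distillation" there would amount to fine-tuning a model on prompt-response pairs collected from the API.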

The U.S. government intervenes, concerned about potential intellectual property theft

The issue has caught the attention of U.S. policymakers and AI experts. David Sacks, AI policy advisor to President Donald Trump, said DeepSeek's actions may constitute intellectual property theft. "There is ample evidence that DeepSeek has extracted knowledge from OpenAI models through distillation, and OpenAI is clearly very unhappy about this," Sacks said in an interview with Fox News on Tuesday.

Beyond DeepSeek, OpenAI has also voiced concern about efforts by Chinese and other international companies to extract the technology of leading U.S. AI firms. "We know that PRC companies and other international competitors are constantly trying to distill the models of leading U.S. AI companies," OpenAI said in response to Bloomberg's inquiry. The company emphasized that it will continue to protect its intellectual property and work with the U.S. government to prevent competitors from accessing its technology.

A somewhat ironic AI arms race

The allegations against DeepSeek add a touch of irony to the AI competitive landscape. OpenAI has itself faced criticism for collecting large amounts of data from the internet without authorization to train its models, arguing that the practice falls under "fair use." Its accusations against DeepSeek now highlight the contradictions and double standards in AI development.

The controversy underscores the increasingly tense relationships among AI giants as global competition heats up. As OpenAI, Microsoft, and the U.S. government dig deeper into DeepSeek's practices, the case is likely to shape future debates over AI data usage, intellectual property protection, and model training methods.

For now, the battle lines are drawn: OpenAI is vowing to defend its technological innovations, while DeepSeek's rapid rise faces intense scrutiny. Whether the storm ultimately leads to legal action or tighter regulation of the AI industry remains to be seen. One thing is certain: competition in the AI field is fiercer than ever.

Risk Warning

Cryptocurrency investment is highly risky, and its price may fluctuate dramatically. You may lose your entire principal. Please carefully evaluate the risks.

Disclaimer: The content above is only the author's opinion which does not represent any position of Followin, and is not intended as, and shall not be understood or construed as, investment advice from Followin.