American AI Companies Open Up to Counter China
Open-weight models make their trained parameters, or weights, publicly available but tend not to provide access to the source code or training datasets. Open-source models typically include access to the source code, the weights, and the training methodology.
With weights publicly accessible, developers can analyze and fine-tune a model for specific tasks without requiring original training data.
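The point about fine-tuning without the original training data can be illustrated with a toy model. The sketch below is purely illustrative: it stands in a tiny linear model for a real LLM's billions of parameters, but the principle is the same — anyone holding the released weights can continue training them on their own data, with no access to whatever data produced them.

```python
import numpy as np

# Hypothetical "released" open weights: a tiny linear model y = x @ w.
# A real open-weight release would be billions of transformer
# parameters, but the principle is identical.
released_weights = np.array([0.5, -0.2, 0.1])

# A downstream user's own task data. Note that the ORIGINAL training
# data is never needed -- only the weights and the new examples.
rng = np.random.default_rng(0)
X = rng.normal(size=(64, 3))
target_w = np.array([1.0, 2.0, -1.0])  # the new task the user cares about
y = X @ target_w

def mse(w):
    """Mean squared error of the model on the user's task data."""
    return float(np.mean((X @ w - y) ** 2))

# Fine-tune: start from the released weights and run gradient descent
# on the new task.
w = released_weights.copy()
loss_before = mse(w)
for _ in range(200):
    grad = 2 * X.T @ (X @ w - y) / len(X)
    w -= 0.1 * grad
loss_after = mse(w)

print(loss_before > loss_after)  # fine-tuning improved the model
```

This is also why OpenAI's concern below holds: the same procedure works whether the new task is benign or malicious.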
Huawei founder Ren Zhengfei told Chinese state-run media in June that Chinese AI development will include “thousands upon thousands of open-source software.” Chinese state-run media Global Times on Aug. 7 published an editorial opining that US efforts to curb China’s AI strategy would fail, as “China has embraced an open-source approach” to meet its vast needs.

OpenAI notes that once an open-weight model is released, “adversaries may be able to fine-tune the model for malicious purposes.”
To counter this, it fine-tuned the two new models “on specialized biology and cybersecurity data, creating a domain-specific non-refusing version for each domain the way an attacker might” and tested the models to see if they would continue to operate within safety guardrails.
Vetting Software for Security
Chris Gogoel, vice president and public sector general manager at mobile app security firm Quokka, says the proliferation of AI apps, especially AI assistant apps, has increased security risks for users exponentially. It used to be that users relied on different apps for different functions, segmenting the data collected and the permissions granted, but AI apps tend to be “do-everything” apps, Gogoel told The Epoch Times.
That elevated data collection translates into more inherent risk, he said. The data collected can also be more sensitive: rather than simply granting access to raw files, users may be feeding the apps long passages or instructions that reveal in-depth thoughts, intentions, and rationale.
With more data collected, the apps become more attractive targets for breaches aimed at extracting that data over a network or from a device. The bigger risk is when these apps come from sources that have not been proven to be secure. OpenAI has adopted an approach that values security, but plenty of other unvetted AI apps have been downloaded millions of times, Gogoel said.
“‘What are these applications doing with our data?’ is a very serious question,” Gogoel added.
“The verification of what happens with that data, and where it goes, how it’s protected, becomes even more important, because if that data is misused, on accident or on purpose, you have a serious, serious problem,” he said, pointing to abuse of data being used to create deepfakes and phishing attacks.
Gogoel notes that the declarations a developer makes about what data their app collects may not match what the app actually does.
Sometimes the developer does not even know this is the case: developers often rush to jump on trends and launch apps in time to rise in the rankings, leading to mistakes such as failing to use proper encryption. They may skimp on security, perhaps relying on open-source software that contains flaws. App stores do not currently require verification of a developer’s declarations, and Gogoel advocates moving to a verify-first approach.
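The core of such a verify-first check is a comparison between what an app declares and what analysis actually observes it doing. The sketch below is a hypothetical illustration, not Quokka's method: the permission names are made up, and in a real vetting pipeline the "observed" set would come from binary analysis and network traffic inspection rather than being handed in directly.

```python
def verify_declarations(declared: set, observed: set) -> dict:
    """Compare an app's declared data collection against observed behavior.

    Hypothetical verify-first gate: an app passes only if everything it
    was observed collecting was also declared up front.
    """
    return {
        # Collected in practice but never disclosed -- the serious case.
        "undeclared": sorted(observed - declared),
        # Disclosed but never seen in use -- worth a follow-up question.
        "unused": sorted(declared - observed),
        # The gate itself: observed behavior must be a subset of declarations.
        "passes": observed <= declared,
    }

# Illustrative input: the app declared two data types but was observed
# touching four.
report = verify_declarations(
    declared={"microphone", "contacts"},
    observed={"microphone", "contacts", "location", "clipboard"},
)
print(report["passes"])       # False
print(report["undeclared"])   # ['clipboard', 'location']
```

The hard part in practice is producing the `observed` set reliably; the gate itself is simple once that evidence exists.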
One bad app can spoil the bunch, he said.
Quokka, which began working with the Pentagon around its founding in 2011, provides mobile vetting services to the federal government and other clients, which led the firm to examine TikTok and ByteDance in 2018.
It found that TikTok not only requested ample permissions, but would also connect with other apps on a user’s device to obtain permissions the user had not explicitly granted. So data collected by trusted applications for legitimate purposes may still present security risks if those applications come in contact with unvetted apps.
“It’s not something that we should be looking back after something has exploded and the fire is already raging, so to speak, and there’s tens of millions of users. We’re trying to enable, in our work, the ability to verify at every step,” Gogoel said. “Verify as soon as something hits the store, as soon as something hits your device, as soon as this brand new service comes out ... that it does what it says on the tin and nothing else.”