JAKARTA - Hackers and cybercriminals can easily take over computers running open-source artificial intelligence models that operate outside the guardrails and restrictions applied by major AI platforms, creating new security risks and vulnerabilities, researchers revealed in a study released on Thursday, January 29.
According to the researchers, criminals can target computers running open-source large language models (LLMs) and direct them to carry out illegal activities such as spam operations, phishing content creation, and disinformation campaigns, while bypassing the safety protocols typically enforced by commercial AI platforms.
The research, conducted jointly over 293 days by cybersecurity companies SentinelOne and Censys and shared exclusively with Reuters, offers new insight into the scale of potential misuse across the thousands of open-source LLM deployments spread across the internet.
The researchers said the potential abuse includes hacking activity, hate speech and harassment, violent or sadistic content, theft of personal data, fraud and scams, and in some cases child sexual abuse material.
Although there are thousands of open-source LLM variants, most of the models hosted on publicly accessible servers are variants of models such as Meta’s Llama and Google DeepMind’s Gemma. While some open-source models ship with built-in guardrails, the researchers found hundreds of cases in which those safety mechanisms had been deliberately removed.
The researchers argue that industry discussions about AI safety controls do not fully account for this surplus of widely deployed model capacity.
"Conversations about security controls in the AI industry ignore this kind of surplus capacity that is clearly used for various things, some of which are legal, some of which are clearly criminal," said Juan Andres Guerrero-Saade, executive director of intelligence and security research at SentinelOne.
He likened the situation to an iceberg that is largely invisible and that neither the industry nor the open-source community has seriously reckoned with.
The research analyzed publicly accessible open-source LLM deployments running through Ollama, a tool that lets individuals and organizations run their own versions of various large language models.
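The study does not publish its probing method, but a minimal sketch of why such exposure matters is shown below, assuming Ollama's documented local HTTP API on its default port 11434; the host here is a placeholder, and a real exposed deployment would simply be that port reachable from the internet.

```python
import json
import urllib.request

# Placeholder host; a real exposure would be an Ollama instance whose
# default API port (11434) is reachable from the internet.
OLLAMA_URL = "http://localhost:11434"


def list_models():
    # GET /api/tags lists the models installed on this Ollama instance.
    with urllib.request.urlopen(f"{OLLAMA_URL}/api/tags") as resp:
        return [m["name"] for m in json.load(resp).get("models", [])]


def generate(model, prompt):
    # POST /api/generate runs a one-off completion against the named model.
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    req = urllib.request.Request(
        f"{OLLAMA_URL}/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["response"]


if __name__ == "__main__":
    models = list_models()
    print("Models served by this instance:", models)
    if models:
        print(generate(models[0], "Reply with one short sentence."))
```

Anything that can reach that port can enumerate and drive the hosted models, with none of the account, rate-limit, or content controls that a commercial AI platform would apply.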
In about a quarter of the LLMs observed, the researchers were able to see the system prompt, that is, the instructions that govern the model’s behavior. Of that group, about 7.5 percent were judged to potentially allow harmful activity.
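How those system prompts become visible is not detailed in the article; one plausible route, sketched here under the assumption that an exposed instance answers Ollama's /api/show endpoint, is reading the model's Modelfile, where any custom SYSTEM instruction is recorded. The host and model name below are placeholders.

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434"  # placeholder host


def read_system_prompt(model):
    # POST /api/show returns model metadata, including the Modelfile text.
    # A custom system prompt, if one was set, appears there as a SYSTEM line.
    payload = json.dumps({"model": model}).encode()
    req = urllib.request.Request(
        f"{OLLAMA_URL}/api/show",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        modelfile = json.load(resp).get("modelfile", "")
    return [line for line in modelfile.splitlines() if line.startswith("SYSTEM")]


print(read_system_prompt("llama3"))  # "llama3" is an illustrative model name
```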
Geographically, about 30 percent of the observed hosts operated from China, while about 20 percent were in the United States.
Rachel Adams, CEO and founder of the Global Center on AI Governance, said via email that once an open-source model is released to the public, responsibility for its downstream impact is shared across the entire ecosystem, including the laboratories that developed the model.
"Laboratories are not responsible for any downstream misuse, which is often difficult to predict. However, they still have a duty of care to anticipate foreseeable risks, document potential hazards, and provide mitigation tools and guidance, especially given the uneven global enforcement capacity," said Adams.
A Meta spokesperson declined to answer questions about developers’ responsibilities in addressing misuse of open-source models, but pointed to Meta’s Llama Protection tools and the Llama Responsible Use Guide it provides for developers.
Meanwhile, Ram Shankar Siva Kumar, head of the Microsoft AI Red Team, said Microsoft believes open-source models play an important role across many fields. However, he emphasized that open models, like other transformative technologies, can be misused by bad actors if released without adequate safeguards.
Microsoft, he said, conducts pre-release evaluations, including risk assessments for scenarios in which models are connected to the internet, self-hosted, or used with additional tools, where the potential for abuse is judged to be higher. The company also actively monitors emerging threats and patterns of abuse.
"Ultimately, responsible open innovation requires a shared commitment from model builders, implementers, researchers, and security teams," he said.
Ollama did not respond to a request for comment, while Alphabet (Google) and Anthropic also did not respond to questions.