Thousands of AI Systems Found Openly Accessible Online


As AI tools become easier to run outside of major cloud platforms, a new and largely unseen layer of AI infrastructure is quietly taking shape online. Joint research by SentinelLabs and Censys has revealed that over 170,000 AI systems are now publicly reachable on the open internet and operating without the safeguards, monitoring, or oversight that commercial AI services normally provide.

Ollama is a popular open-source tool that lets users run large language models directly on their own hardware. By default it’s designed to stay local, only accessible by the machine owner. Exposing it to the open internet takes only a single configuration change — one that can be made intentionally or by accident. When that happens across thousands of machines worldwide, the result is a sprawling, unmanaged AI network that nobody officially owns or controls.

Scanning conducted over several months picked up over 175,000 unique hosts across 130 countries. Most came and went quickly, but a persistent core of around 23,000 hosts stayed consistently online. They behave more like always-on services rather than casual experiments, which makes them both useful to their operators and attractive to anyone looking to exploit them.

The exposed systems aren’t concentrated in one place either. They span major cloud providers, independent servers, and even residential home networks. Home and telecom connections actually make up the largest share by host count, which creates a real accountability problem. A publicly reachable AI system could be running in a data center or on someone’s laptop at home. And in many cases, there’s no easy way to tell who’s responsible.

More Than Just Chatbots

These systems are capable of far more than generating text. Nearly half of the observed hosts can call external tools, access APIs or files, process images, and perform multi-step reasoning. That means some of these publicly exposed AI systems aren’t just producing responses — they can take actions. Open network access combined with little to no authentication makes the potential for misuse a serious concern.

The security implications are already drawing attention. Attackers can quietly leverage these open systems to generate spam, phishing content, or disinformation — without the owners ever knowing and without paying for the compute. Systems connecting AI to internal tools or data are especially vulnerable to prompt injection, where crafted inputs can trick a model into leaking information or triggering unintended actions. AI running on home networks can also act as a proxy, making malicious traffic appear to originate from a trusted residential connection.

Unlike commercial AI platforms, there’s no centralized reporting, no usage monitoring, and often no one with the authority to shut down a system being abused. As AI deployment becomes more decentralized while development remains concentrated among a handful of labs, existing governance frameworks are struggling to keep up.

For the full technical research breakdown, see the original research on the SentinelLabs website here.


Comments Section

Leave a Reply

Your email address will not be published. Required fields are marked *



,
Back to Top - Modernizing Tech