What are your thoughts on #privacy and #itsecurity regarding the #LocalLLMs you use? They seem to be an alternative to ChatGPT, MS Copilot etc. which basically are creepy privacy black boxes. How can you be sure that local LLMs do not A) “phone home” or B) create a profile on you, C) that their analysis is restricted to the scope of your terminal? As far as I can see #ollama and #lmstudio do not provide privacy statements.

  • AnAmericanPotato@programming.dev · 10 days ago

    As far as I can see #ollama and #lmstudio do not provide privacy statements.

    That’s because they are not online services (which is a good thing!). Online services like ChatGPT and desktop applications like LM Studio are not in the same product category.

    LM Studio is more akin to, say, VLC or Notepad++ (which also do not have privacy policies). These are desktop applications that have some limited network functions (like autoupdates).

    LM Studio does offer details of which features require internet access and which are fully offline here: https://lmstudio.ai/docs/offline. In short: everything important is offline. It has built-in search features so you can find and download models from Hugging Face, and it also has an autoupdate feature to find and download new versions. You could run it on an airgapped system (or more likely, set it up in a container/VM without network access) and simply load in model files manually if you prefer.
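    If you go the airgapped VM/container route, it is worth double-checking that the machine really has no route out before loading models. A minimal sketch in Python (the target host is illustrative; any outside address works, and this is not part of LM Studio itself):

```python
import socket

def can_reach(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Covers DNS failures, refused connections, and timeouts.
        return False

# On a properly airgapped box this should print False.
print(can_reach("huggingface.co", 443))
```

    On a machine that is supposed to be offline, a `True` here means the isolation is leakier than you thought.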

    Personally I recommend LM Studio, because it’s super easy to set up and use but still quite powerful.

  • utopiah@lemmy.ml · 10 days ago

    Since you ask, here are my thoughts https://fabien.benetou.fr/Content/SelfHostingArtificialIntelligence with numerous examples. To clarify your points:

    • rely on open-source repositories where the code is auditable (and hopefully audited), and test it offline
    • see previous point
    • LLMs don’t “analyze” anything; they just spit out human-looking text

    To clarify the first point, since the other two follow from it: such projects would instantly lose credibility if they were to sneak in telemetry. Some FLOSS projects have tried that in the past, and it has always led to uproars, reverts, and often forks of the exact same codebase without the telemetry.

  • toastal@lemmy.ml · 10 days ago

    D) what is AMD support like, or are the Python fanboys still focusing on Nvidia exclusively?

  • Tundra@lemmy.ml · 10 days ago

    From my privacy trials with ollama: any downloaded model does not know the date or time and cannot access the internet.

    If you are still sceptical, you could install something like Alpaca from Flathub and, once you’ve acquired a model, revoke its network access through Flatseal.
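    The same thing can be done from the command line instead of Flatseal, since `flatpak override` can drop the network share for a single app. A sketch, assuming Alpaca's Flathub app ID is `com.jeffser.Alpaca` (check yours with `flatpak list`):

```shell
# Revoke network access for the Alpaca flatpak (per-user override)
flatpak override --user --unshare=network com.jeffser.Alpaca

# Verify the override took effect: "network" should no longer be shared
flatpak info --show-permissions com.jeffser.Alpaca
```

    This is a sandbox-permission change, so it survives app updates; `flatpak override --user --reset com.jeffser.Alpaca` undoes it.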