Can you trust locally run LLMs?
wuphysics87@lemmy.ml to Privacy@lemmy.ml · 1 month ago
I’ve been playing around with ollama. Given that you download the model and run it locally, can you trust that it isn’t sending telemetry?
Jack@slrpnk.net · 1 month ago
Can’t you run it from a container? I guess that will slow it down, but it will deny access to your files.
marcie (she/her)@lemmy.ml · 1 month ago
Yeah, you could, though I don’t see any evidence that the large open-source LLM programs like jan.ai or ollama are doing anything wrong with their programs or files. Chucking it in a sandbox would solve the problem for good, though.
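For anyone wanting to try this, here is a minimal sketch of the container approach, assuming Docker and the official ollama/ollama image (llama3 is just an example model name, and the volume name is arbitrary):

```
# One-time setup: start the server with networking enabled so it can
# download a model into the named volume
docker run -d --name ollama -v ollama:/root/.ollama ollama/ollama
docker exec -it ollama ollama pull llama3
docker rm -f ollama

# Now run it fully offline: --network=none gives the container no
# network interface at all, so nothing can phone home, and the named
# volume is the only storage it can touch
docker run -d --name ollama --network=none \
  -v ollama:/root/.ollama ollama/ollama

# With networking disabled you can't reach the API from the host,
# so chat with the model from inside the container instead
docker exec -it ollama ollama run llama3
```

The trade-off is that --network=none also blocks the HTTP API from other apps on the host, so this setup only works for direct CLI use unless you loosen it to an internal-only Docker network.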