Running it offline does avoid some of the censorship, but not all. Let me explain: the hosted service implements failsafes that check what topics are being discussed (like Tiananmen Square). Those are not part of the model itself. Separately, there is a post-training, reinforcement-based censorship applied to the finished model, and that kind (the kind actually baked into the model weights) can be removed by retraining with similar reinforcement techniques. TL;DR: there is censorship baked into the model, but because the weights are public, it can be removed/bypassed. In contrast, the DeepSeek web app includes both kinds of censorship (and also definitely collects your data); the local model obviously does not.
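To make the two layers concrete, here's a rough sketch of where each one sits. The keyword list, function names, and model call are hypothetical placeholders for illustration, not DeepSeek's actual code:

```python
# Sketch of the two censorship layers (all names here are made up).

BLOCKED_TOPICS = ["tiananmen square"]  # hypothetical server-side keyword list

def run_model(prompt: str) -> str:
    # Placeholder for local inference (e.g. via llama.cpp or transformers).
    # The reinforcement-trained refusals live in the weights, so they apply
    # here too, but they can be trained out because the weights are public.
    return f"<model output for: {prompt}>"

def hosted_app(prompt: str) -> str:
    """What a hosted web app can do: filter before the model ever runs."""
    if any(topic in prompt.lower() for topic in BLOCKED_TOPICS):
        return "Sorry, I can't discuss that."  # layer 1: external failsafe
    return run_model(prompt)                   # layer 2: refusals in the weights

def local_run(prompt: str) -> str:
    """Running the open weights yourself: only layer 2 applies."""
    return run_model(prompt)

if __name__ == "__main__":
    print(hosted_app("tell me about Tiananmen Square"))  # caught by the external filter
    print(local_run("tell me about Tiananmen Square"))   # goes straight to the model
```

The point of the sketch: the first layer only exists on the server, so it disappears when you run locally; the second layer travels with the weights, which is why it takes retraining to remove.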
About how much bandwidth does this use? I know it says "not much," but can we be more specific? I want to help but have limited upload speed.