Apparently you don’t know that there are fast Tor nodes and, gasp, speeds have improved over the two decades Tor has been in existence.
People are stuck in 2014, thinking it’s slower than molasses.
I’ve had this same argument before.
Pick a node and run a speed test. If it’s too slow for your needs, pick another.
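If you want to do that speed test from the command line, here’s a rough sketch: measure download throughput through the local Tor SOCKS proxy (tor’s default port is 9050; the URL is a hypothetical stand-in for any large test file, not a real endpoint).

```shell
# Assumes a tor daemon is running locally with its SOCKS listener on 9050.
# --socks5-hostname also resolves DNS through Tor, so the lookup doesn't leak.
curl --socks5-hostname 127.0.0.1:9050 -s -o /dev/null \
     -w 'avg download: %{speed_download} bytes/s\n' \
     https://example.com/large-file.bin
```

Swap in a new circuit (or a different exit in your torrc) and re-run until the number is acceptable.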
There are hundreds that are set up for high speed, high volume traffic.
https://metrics.torproject.org/onionperf-throughput.html
20–80 Mbps is plenty fast.
And here’s your 6.4 MB image: https://metrics.torproject.org/torperf.html?start=2024-10-06&end=2025-01-04&server=public&filesize=5mb
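To put those numbers together, a quick back-of-the-envelope (function name is mine): how long a 6.4 MB image takes across the 20–80 Mbps range above.

```python
def transfer_seconds(size_mb: float, throughput_mbps: float) -> float:
    """Seconds to move size_mb megabytes at throughput_mbps megabits/s.

    Size is in megaBYTES, throughput in megaBITS per second,
    so multiply by 8 to convert.
    """
    return size_mb * 8 / throughput_mbps

# 6.4 MB image over the quoted 20-80 Mbps range:
print(transfer_seconds(6.4, 20))  # slow end, ~2.56 s
print(transfer_seconds(6.4, 80))  # fast end, ~0.64 s
```

So even at the slow end of that range, the image loads in a couple of seconds.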
Edit: added a couple of things to enable it on desktop (Linux).
Go to:
about:config
Set:
browser.ml.enable = true
browser.ml.chat.enabled = true
browser.ml.chat.hideLocalhost = false
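If you’d rather persist those three prefs than click through about:config, the same settings can go in a user.js in your Firefox profile directory (Firefox applies it on startup):

```javascript
// user.js — same three prefs as the about:config steps above.
user_pref("browser.ml.enable", true);
user_pref("browser.ml.chat.enabled", true);
user_pref("browser.ml.chat.hideLocalhost", false);
```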
Then go to:
about:inference
You can set the endpoint to your localhost or any server.
Open the sidebar and select AI chatbot. Select the localhost option and follow the prompts.
Note: I will never use this; I just wanted to know how. Anyway, here are the docs:
https://firefox-source-docs.mozilla.org/toolkit/components/ml/index.html
https://firefox-source-docs.mozilla.org/toolkit/components/ml/api.html
[Screenshots: this is on Nightly on mobile.]