Brave Search has been fully using their own index since April 27, 2023. But they refuse to identify their crawler and rely on Googlebot if sites want to be excluded. Also, their search API monetizing possibly copyrighted content, while understandable, is a bit dubious given their public stance on transparency.
StartPage also blocks VPN usage.
DuckDuckGo, by their own admission, now re-ranks “trusted” sites to the top for what they classify as “misinformation”, so calling their “censorship” mild is a huge understatement.
If I wanted to search for unverified info or misinfo, I could, but almost always I am looking for factual and sourced information. Please don’t force me to do otherwise.
It’s more about someone else making the decision on what is “trustworthy” for you
That’s how all search engines fundamentally work though. The whole point is that they try to bring the most relevant results to the top and downrank things like spam and unhelpful/irrelevant results. Downranking misinfo spam websites isn’t “censorship”. Not ranking results would make search engines completely pointless.
I’d disagree with equating disinfo with spam. Spam seems easier to classify: sites that try to get ahead by stuffing nonsense keywords or whatever and want to sell you something. Dis- or misinfo is trickier; you need to decide what is correct info. Do you understand what I mean?
Absolutely, those things are different. But the point of a search engine is to, crudely and algorithmically, sort out both.
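If it helps, here’s a toy sketch of what “crudely and algorithmically sort out both” could mean. The relevance/spam scores and the `spam_penalty` weight are invented for illustration; this is not any real engine’s formula.

```python
# Toy ranking pass: boost relevance, penalize spamminess.
# All numbers here are made up purely for illustration.
from dataclasses import dataclass


@dataclass
class Result:
    url: str
    relevance: float   # how well the page matches the query (0..1)
    spam_score: float  # how spammy/low-quality the page looks (0..1)


def rank(results: list[Result], spam_penalty: float = 0.8) -> list[Result]:
    # Higher relevance pushes a result up; a high spam score drags it down.
    return sorted(
        results,
        key=lambda r: r.relevance - spam_penalty * r.spam_score,
        reverse=True,
    )


if __name__ == "__main__":
    demo = [
        Result("https://example.org/useful-article", relevance=0.7, spam_score=0.1),
        Result("https://example.net/keyword-stuffed", relevance=0.9, spam_score=0.9),
    ]
    for r in rank(demo):
        print(r.url)
```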
Only accidental, I think. They have the option of reporting that you’re behind a VPN or proxy when it happens.
Haven’t had an issue in a year. I think they changed something (AirVPN).
Using Mullvad, I always have to go back to a specific country that I used a lot a couple of months ago.
That sounds funny, why do you think that happens?
Dunno, but I already switched to another search engine on mobile.
I still get it very occasionally with Proton VPN.
For me it was way worse a year ago. I would get blocked all the time; now it rarely ever happens.
I do agree, it used to be a lot worse. I switched away from StartPage for a while because it was so frequent.
Anecdotal, but Startpage works perfectly fine with a VPN for me. Certainly better than Google, which works but requires a lot of annoying captchas.
I used Startpage for a long time, and I’m perpetually connected to a VPN on both my PC and my phone (different nodes at different times).
Never had a problem with my VPN
They don’t pretend to be Googlebot; they use their own crawler, they just don’t share the name they use for it, so sites can’t exclude it with robots.txt. They just scrape the same sites that Googlebot does, so if a site excludes Googlebot, they skip it too.
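Rough illustration of that skip-if-Googlebot-is-excluded behaviour (just a hypothetical sketch in Python using the standard robots.txt parser, not Brave’s actual crawler code; the URLs are placeholders):

```python
# Sketch: a crawler with no published user-agent of its own that skips any
# URL the site's robots.txt disallows for Googlebot.
from urllib import robotparser


def allowed_via_googlebot(site: str, path: str) -> bool:
    """Return True if robots.txt at `site` permits Googlebot to fetch `path`."""
    rp = robotparser.RobotFileParser()
    rp.set_url(f"{site.rstrip('/')}/robots.txt")
    rp.read()  # fetch and parse the site's robots.txt
    return rp.can_fetch("Googlebot", f"{site.rstrip('/')}{path}")


if __name__ == "__main__":
    # Only crawl the page if Googlebot would be allowed to.
    if allowed_via_googlebot("https://example.com", "/some/page"):
        print("Googlebot is allowed, so this crawler would fetch the page too.")
    else:
        print("Googlebot is disallowed, so this crawler skips the page as well.")
```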