I use DuckDuckGo, but I've realised these big(ish) search engines give me all the commercialised results. DuckDuckGo has been going downhill for years, though not at the rate Google or Bing have.
I want to have a search engine that gives me all the small blogs and personal sites.
I'm intrigued. The search results are more akin to how they were 25 years ago, on the internet that I loved. https://search.marginalia.nu is definitely something I'll be exploring going forward!
Replying under the top comment, but this really applies to all of these: how do these search engines determine what counts as a personal site?
For example, I had procrastinated for years on finally spinning up a static, barren HTML blog. The infamous Lucidity AI post introduced me to Mataroa, and I got over the hump and started writing. Would that get indexed? Etc.
Teclis - Includes search results from Marginalia, free to use at the moment. The index has been closed down in the past due to abuse.
Kagi, which created Teclis, is a paid search engine (a metasearch engine, to be more precise) that also incorporates these results into its normal searches. I warmly recommend giving Kagi a try; it's great, and I've been enjoying it a lot.
--
Other options I can recommend: you could always host your own search engine if you have a list of small-web sites in mind, or don't mind spending some effort collecting such a list.
I personally host YaCy [github link] (and SearXNG to interface with YaCy and several other self-hosted indexes/search engines, such as Kiwix wikis).
Crawling and indexing your own search results is surprisingly light on resources, and can run on your personal machine in the background.
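To give a feel for why it's so cheap: at its core, indexing is just tokenizing pages and building an inverted index (term → pages containing it). This is only a toy sketch of that idea, not how YaCy actually works, and the URLs and page texts below are made up:

```python
# Toy crawler/indexer core: tokenize page text and build an inverted
# index mapping each term to the set of pages containing it. The
# "pages" dict stands in for fetched HTML from a real crawl.

from collections import defaultdict
import re

def tokenize(text):
    return re.findall(r"[a-z0-9]+", text.lower())

def build_index(pages):
    """pages: dict of url -> page text. Returns term -> set of urls."""
    index = defaultdict(set)
    for url, text in pages.items():
        for term in tokenize(text):
            index[term].add(url)
    return index

def search(index, query):
    """Return urls containing every query term (a simple AND query)."""
    terms = tokenize(query)
    if not terms:
        return set()
    results = set(index.get(terms[0], set()))
    for term in terms[1:]:
        results &= index.get(term, set())
    return results

pages = {
    "https://example.org/bikes": "a blog post about bike maintenance",
    "https://example.org/cats": "a blog post about cats",
}
index = build_index(pages)
print(search(index, "bike maintenance"))
```

A few thousand small sites' worth of this fits comfortably in memory, which is why a background process on a personal machine can handle it.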
Yes, I mentioned Kagi because the Teclis search index is hosted by them.
However, most of Kagi's search results are aggregated from dedicated search engines (such as Yandex, Brave, Google, and Bing).
I tried running YaCy for a while, but it would run for a bit less than a day, then run out of memory and crash, over and over. I tried to figure out the problem, but it's niche enough that googling the issue got me nowhere.
This is a bit off-topic, but did you try increasing the JVM memory limit in YaCy's administration panel?
Spoilering to hide wall of text related to this topic.
The setting on the /Performance_p.html page, for example, gives the Java runtime more memory. The same page also has other RAM-related settings, such as how much memory YaCy must leave unused for the system. (These settings exist so people who run YaCy on their personal machines can have guaranteed resources for more important stuff.)
Another thing that would reduce memory usage is limiting the concurrency of the crawler, for example. There are quite a lot of tunable settings that affect memory usage. I'd also recommend asking questions on one of the YaCy forums. The Matrix channel (and IRC) are a bit dead, but there are a couple of people there, including myself!
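If memory serves, what the Performance page actually writes is the JVM heap settings in YaCy's config file, so you can also bump them there directly. Key names are from memory, so verify against your own install before relying on this:

```ini
# DATA/SETTINGS/yacy.conf (key names from memory -- check your install)
javastart_Xmx=Xmx2048m   ; maximum JVM heap, raise this if it crashes with OOM
javastart_Xms=Xms600m    ; initial JVM heap
```

A restart is needed after editing the file by hand, since the values are passed to the JVM at startup.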
Ah, Marginalia is absolutely awesome! I feel like modern search is almost an extension of website names now: if I want to find Netflix but don't know its website, I might search for "netflix". Marginalia is actually a cool way to find new stuff. Like, you can search "bike maintenance" and find cool blog posts about that topic.
I honestly can't remember if that's something google and the like used to do, but doesn't now, or if they never did. Either way, I love it!
Aside from SearXNG, I didn't know about these search engines until your recommendation. Thanks to Wiby and Marginalia, I found old, rich content (old BBS list conversations, for example) that I was looking for regarding studies on the occult and esotericism. Thank you so much!
This is a great question, in that it made me wonder why the Fediverse hasn't come up with a distributed search engine yet. I can see the general shape of a system, and it'd require some novel solutions to keep it scalable while still allowing reasonably complex queries. The biggest problems with search engines are that they're all scanning the entire internet and generating a huge percentage of all internet traffic; they're all creating their own indexes, which is computationally expensive; their indexes are huge, which is space-expensive; and quality query results require a fair amount of computing resources.
Imagine a distributed search engine with something like a DHT for the index, with partitioning and replication, and a moderation system to control bad actors and trojan nodes. DDG and SearX are sort of front ends for a system like this, except that they just hand the queries off to one (or two) of the big monolithic engines.
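To sketch what "DHT for the index, with partitioning and replication" could look like: hash each index term onto a ring of node ids, and let the first R distinct nodes clockwise from that point hold the term's posting list. This is just an off-the-cuff illustration; the node names are made up:

```python
# Consistent-hashing sketch: each term maps to a point on a hash ring,
# and the `replicas` nodes clockwise from that point are responsible
# for storing that term's posting list.

import bisect
import hashlib

def h(key):
    return int(hashlib.sha256(key.encode()).hexdigest(), 16)

class Ring:
    def __init__(self, nodes, replicas=2):
        self.replicas = replicas
        self.ring = sorted((h(n), n) for n in nodes)

    def nodes_for(self, term):
        """Return the `replicas` distinct nodes responsible for a term."""
        keys = [k for k, _ in self.ring]
        start = bisect.bisect(keys, h(term)) % len(self.ring)
        picked = []
        i = start
        while len(picked) < self.replicas:
            node = self.ring[i % len(self.ring)][1]
            if node not in picked:
                picked.append(node)
            i += 1
        return picked

ring = Ring(["node-a", "node-b", "node-c", "node-d"], replicas=2)
print(ring.nodes_for("bike"))  # two distinct nodes own "bike"'s posting list
```

The nice property is that any participant can compute which nodes to query for a term without a central coordinator, and adding a node only reassigns the slice of terms nearest to it on the ring.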
We'd love to build a distributed search engine, but I think it would be too slow. When you send us a query, we go and search 8 billion+ pages and bring back the top 10, 20... up to 1,000 results. For a good service we need to do that in 200ms, and so the index needs to be centralised. It took years, several iterations, and our carefully designed algos & architecture to make something so fast. No doubt Google, Bing, Yandex & Baidu jumped through similar hoops. Maybe I'm wrong, and/or someone can make it work with our API.
I think 200ms is an expectation set by big tech. I know people have very little patience these days, but if you provided better-quality searches in 5 seconds, people would probably prefer that over a 0.2-second response full of the crap we're currently getting from the big guys. Even better if you can make the wait a little fun with some animations, public domain art, or quotes to read while waiting.
I'm designing off the top of my head, but I think you could do it with a DHT, or even just steal some distributed-ledger algorithm from a blockchain. Or you develop a distributed skip tree -- but you're right, any sort of distributed query is going to have possibly unacceptable latency. So you might -- like Bitcoin -- distribute the index itself to participants (which could be large), but federate the indexing operation such that, rather than a dozen different search-engine crawlers hitting each website, you'd have one or two crawlers per site feeding the shared index.
Distributed search engines have existed for over a decade. Several solutions for distributed Lucene clusters exist (SOLR, katta, ElasticSearch, O2), and while they're mostly designed to run in a LAN where the latencies between nodes are small, I don't think it's impossible to imagine a fairly low-latency distributed, replicated index where each node has a small subset of peer nodes which, together, encompass the entire index. No instance has the same set of peer nodes, but the combined index is eventually consistent.
Again, I'm thinking more about federating and distributing the index-building, to reduce web sites being hammered by search engines which constitute 80% of their traffic. Federating and distributing the query mechanism is a harder problem, but there's a lot of existing R&D in this area, and technologies that could be borrowed from other domains (the aforementioned DHT and distributed ledger algorithms).
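The "one or two crawlers per site" part of that is easy to federate deterministically, e.g. with rendezvous (highest-random-weight) hashing: every participant can independently compute which crawler nodes own a given host, so only those nodes fetch it and everyone else consumes the index they publish. A quick sketch, with hypothetical crawler names:

```python
# Rendezvous hashing: score every (crawler, host) pair with a hash and
# pick the k highest-scoring crawlers as the host's owners. All nodes
# compute the same answer with no coordination.

import hashlib

def weight(node, host):
    return int(hashlib.sha256(f"{node}|{host}".encode()).hexdigest(), 16)

def owners(crawlers, host, k=2):
    """The k crawlers responsible for fetching this host."""
    return sorted(crawlers, key=lambda c: weight(c, host), reverse=True)[:k]

crawlers = ["crawler-1", "crawler-2", "crawler-3", "crawler-4"]
print(owners(crawlers, "marginalia.nu"))
```

A nice side effect over plain modulo assignment: when a crawler joins or leaves, only the hosts that crawler owned get reassigned, so the rest of the federation's crawl schedule is undisturbed.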
I'm hoping that, just as Proton does good free stuff with the money I pay them (Visionary account), Kagi does/will do the same. The Internet as a whole needs to stop being ad-supported.
Google are the ones who have really gone down the toilet in recent years. They ditched cached pages, soured search results with paid ads and even their image search is as bad as Tineye for reverse image searching these days. Literally the only thing Alphabet really have going for them anymore is Android and YouTube.
It's baffling that a company which was once so dominant in the web search space that their name was literally used as a verb for looking things up for decades have now enshittified their flagship product so much that they're making rivals like Bing, Lycos, DuckDuckGo, etc. look like viable alternatives.
Thanks for the rec, I'll give Mojeek a try for a while. So far the results seem better than Brave (which I didn't seriously consider using regularly anyway) but I miss the bang options (!w, !yt, etc.) that DDG has.
The more obscure a web page is, the more likely it is to be indexed only by the large search engines (i.e. Google). There are search queries that return 0 results on DDG, but quite a few (relatively) obscure websites on Google. This is simply because the more money a search engine operator has, the more websites it will index.
Although Google is indeed the greatest indexer of the World Wide Web, unfortunately SEO and AI make it hard to find anything from before the 2000s, such as BBS list archives, the old blogosphere, and personal webpages from that time, simply because they had no modern SEO or AI keywords back then. That old content is entirely free of AI-generated slop and (almost) free of dis- and misinformation (because, in the days of BBS and Gopher, the Internet was still being born, and books were the main source of knowledge), so old content is sine qua non for anyone seeking real knowledge.
(almost) free of dis- and misinformation (because, in the days of BBS and Gopher, the Internet was still being born
Yeah, no. There was tons of bullshit. I ran across a post back then saying you could get psychic powers by eating the Americium sensor in your fire detector.
Don't know if this fits your criteria, but I've been using Gruble a lot recently. You can personalise the look and language in the settings, plus it's open source.
Before Google existed I used https://www.metacrawler.com, which appears to still be around. I haven't used it in a long time, so I no longer know anything about it.