
Lemmy_server specific, after months of study: anonymous reading vs. personalized PostgreSQL filtering

I think client-API self-awareness of performance problems, and of the cost of running a server, could be built into the design as an owner/operator choice.

A search engine should be able to see generic content, posts and comments, without per-user changes.

But the Lemmy SQL logic for PostgreSQL burdens every fetch of posts and comments with all kinds of user-specific customization. This kills caching potential when it's done all the way at the back end.

Page 3 of posts in !memes@lemmy.ml will be different for a user who has blocked a person appearing in that list. Right now, that burden is placed upon PostgreSQL, which has to rewrite indexes on every INSERT and run extra steps in every SELECT.
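To illustrate why per-user filtering defeats a shared cache: if the blocks are applied in SQL, the cache key for "page 3 of a community" has to include the user, so no two users can share a cached page. This sketch uses hypothetical names (PageRequest, cacheKey), not Lemmy's actual API.

```typescript
interface PageRequest {
  community: string;
  sort: string;
  page: number;
  userId?: number; // present only when results are personalized server-side
}

// When filtering happens in PostgreSQL, the cache entry is per-user and the
// hit rate collapses. When filtering moves client-side, one key serves everyone.
function cacheKey(req: PageRequest): string {
  const base = `${req.community}/${req.sort}/${req.page}`;
  return req.userId === undefined ? base : `${base}/u${req.userId}`;
}

const anon = cacheKey({ community: "memes@lemmy.ml", sort: "hot", page: 3 });
const userA = cacheKey({ community: "memes@lemmy.ml", sort: "hot", page: 3, userId: 101 });
const userB = cacheKey({ community: "memes@lemmy.ml", sort: "hot", page: 3, userId: 202 });
// anon's key is shareable by every reader; userA and userB can never hit
// each other's cache entries.
```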

For massive scale on lower-cost hardware, I suggest placing the idea in a smarter client API that is self-aware of this problem: page 3 of a community, or All hot/active/top-hour, would be the same for everyone, and the client is given the burden of fetching the person block list and doing the filtering. ---OR--- an intermediate front-end component of Lemmy that could run on multiple servers, scale out, and do the filtering for that specific user.
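A minimal sketch of what that client-side (or intermediate-tier) filtering could look like, assuming the server returns the same generic page to everyone. The Post shape and function name are illustrative, not Lemmy's real types.

```typescript
interface Post {
  id: number;
  creatorId: number;
  communityId: number;
}

// The client fetches its own block lists once (blocked persons and blocked
// communities), then filters every generic, cacheable page locally.
function filterForUser(
  posts: Post[],
  blockedPersons: Set<number>,
  blockedCommunities: Set<number>,
): Post[] {
  return posts.filter(
    (p) => !blockedPersons.has(p.creatorId) && !blockedCommunities.has(p.communityId),
  );
}
```

The design trade-off: the server does one cheap, cacheable query, and each client pays a small O(page size) filtering cost with data it already has.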

Even paging itself, the page length, is already variable, which is another cache issue. Eliminate that and just encourage over-fetching of posts and comments, filtering out duplicates. ---OR--- even just fetch ID numbers of new/old items, with a very smart client holding an ID listing of the entire page and filling in content.
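The ID-listing idea above could be sketched like this: the client merges over-fetched ID lists, drops duplicates, and only requests content it doesn't already hold. Function names and shapes are assumptions for illustration.

```typescript
// Merge a freshly fetched ID list into the IDs the client already knows,
// preserving order and discarding duplicates from overlapping pages.
function mergeIds(known: number[], fetched: number[]): number[] {
  const seen = new Set(known);
  const merged = [...known];
  for (const id of fetched) {
    if (!seen.has(id)) {
      seen.add(id);
      merged.push(id);
    }
  }
  return merged;
}

// Which IDs still need their full post/comment content fetched?
function missingContent(ids: number[], contentById: Map<number, unknown>): number[] {
  return ids.filter((id) => !contentById.has(id));
}
```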

But certainly during heavy server load, when servers are on the verge of crashing from too much data, eliminating personal exclusions of communities and persons when fetching posts/comments could offload some work from PostgreSQL. Even NSFW filtering might fit into that.
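As an owner/operator choice, this could be a simple load-shedding switch: above some load threshold the server skips the per-user SQL work and serves the generic cached page, signaling the client to filter locally. The threshold and metric here are assumptions, just to show the shape of the toggle.

```typescript
type FilterMode = "server" | "client";

// Under normal load, PostgreSQL applies per-user exclusions as today.
// Under heavy load, the server sheds that work onto clients and serves
// identical, cacheable pages to everyone.
function chooseFilterMode(activeQueries: number, threshold: number): FilterMode {
  return activeQueries > threshold ? "client" : "server";
}
```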

Sorry about my language this morning, sloppy English.
